Vapnik-Chervonenkis dimension and the mind’s eye

Friday, March 4, 2016, 2:00 pm, GC 4102 (Science Center)


Brown University -- Division of Applied Mathematics

Google engineers routinely train query classifiers, for ranking advertisements or search results, on more examples than any human being hears in a lifetime. A human being who sees a meaningfully new image every second for one hundred years will not see as many images as Google has in its libraries for training object detectors and image classifiers. Children learn far more efficiently, achieving nearly perfect competence on about 30,000 categories in their first eight years. Upper bounds on the number of training samples needed to learn a classifier of similar competence can be derived from the Vapnik-Chervonenkis dimension or the metric entropy, but these bounds suggest that not only would Google need more examples; all of evolution might fall short.
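To get a feel for the scale of such bounds, here is a minimal sketch in Python of a classic PAC sample-complexity upper bound based on VC dimension (the bound of Blumer, Ehrenfeucht, Haussler, and Warmuth, 1989). The specific inputs below (a VC dimension of 1,000, error 1%, confidence 95%) are illustrative assumptions, not figures from the talk:

```python
import math

def vc_sample_bound(d, eps, delta):
    """Classic PAC upper bound (Blumer et al., 1989) on a number of
    training examples sufficient to learn a concept class of VC
    dimension d to error eps with probability at least 1 - delta."""
    return math.ceil(max(
        (4.0 / eps) * math.log2(2.0 / delta),
        (8.0 * d / eps) * math.log2(13.0 / eps),
    ))

# Illustrative numbers: even a single, modestly complex classifier
# already calls for millions of labeled examples under this bound.
m = vc_sample_bound(d=1000, eps=0.01, delta=0.05)
print(m)  # on the order of millions of examples for one category
```

Multiplying a figure like this by tens of thousands of categories makes vivid why worst-case bounds of this kind seem incompatible with how quickly children learn.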

I will discuss machine learning and human learning, with a focus on representation. I will argue that brains simulate rather than classify, and I will suggest that the rich relational information available in an imagined world argues against abstraction and in favor of topological, almost literal, representation. I will speculate about physiological mechanisms that would support topologically organized neuronal activity patterns.

This talk is sponsored by the Initiative for the Theoretical Sciences at the CUNY Graduate Center.

Posted on March 2nd, 2016