Abstract: A key question in machine learning research is understanding the trade-off between the size of the training set and the accuracy of the classification function learned by the algorithm. This trade-off can be fully characterized by a single quantity: the VC-dimension of the family of functions that the algorithm may learn. Beautifully combinatorial in nature, the VC-dimension is elusive to compute exactly, but upper bounds on it suffice to understand the trade-off. In this talk, we report our recent results on improved upper bounds on the VC-dimension of intersections of half-spaces in high dimensions, a very popular class of functions. We show a novel connection with convex polytopes and with planar graphs. All terms and results will be explained without assuming any specific background from the audience.
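For readers unfamiliar with the notion: the VC-dimension of a function class is the largest number of points it can "shatter," i.e. label in every possible way. The sketch below (an illustrative aside, not part of the talk's results) brute-forces this for single half-planes in the plane, confirming that 3 points in general position are shattered, so the VC-dimension of half-planes is at least 3 (it is in fact exactly 3).

```python
from itertools import product

# Three points in general position in R^2.
points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]

# Coarse grid over half-plane parameters (w1, w2, b); a classifier
# labels point (x, y) as +1 iff w1*x + w2*y + b > 0.
grid = [-2.0, -1.0, 0.0, 1.0, 2.0]

def realizable(labels):
    """True if some half-plane assigns exactly these +/-1 labels to the points."""
    for w1, w2, b in product(grid, repeat=3):
        if all((w1 * x + w2 * y + b > 0) == (s > 0)
               for (x, y), s in zip(points, labels)):
            return True
    return False

# The points are shattered iff all 2^3 = 8 labelings are realizable.
shattered = all(realizable(labels) for labels in product([+1, -1], repeat=3))
print(shattered)  # → True
```

Note that the coarse parameter grid suffices here only because the points and labelings are so simple; in general, checking realizability of a labeling by half-spaces is a linear separability question.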
Refreshments will be served at 3 p.m. in Science Center C209.