In statistical learning theory, the notion of VC-dimension was developed by Vapnik and Chervonenkis in the context of approximating probabilities of events by the relative frequencies of random test points. This notion has been widely used in combinatorics and computer science, and is also directly connected to model theory through the study of NIP theories. This talk will start with an overview of VC-dimension, with examples motivated by discrete geometry and additive combinatorics. I will then present several model-theoretic applications of VC-dimension. The selection of topics will focus on the use of finitely approximable Keisler measures to analyze the structure of algebraic and combinatorial objects with bounded VC-dimension.
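As a concrete illustration (not part of the talk itself), the VC-dimension of a tiny set system can be computed directly from the definition: it is the largest size of a subset of the ground set that is shattered, i.e. on which every subset arises as a trace of some set in the family. The sketch below is a brute-force check under that standard definition; the function names `shatters` and `vc_dimension` are ad hoc choices for this example.

```python
from itertools import combinations

def shatters(family, pts):
    """True if `family` shatters `pts`: every subset of pts occurs
    as pts ∩ S for some S in the family."""
    traces = {frozenset(p for p in pts if p in S) for S in family}
    return len(traces) == 2 ** len(pts)

def vc_dimension(family, universe):
    """Largest size of a shattered subset of `universe`.
    Exponential brute force; only suitable for tiny examples."""
    d = 0
    for k in range(1, len(universe) + 1):
        if any(shatters(family, c) for c in combinations(universe, k)):
            d = k
    return d

universe = [1, 2, 3, 4, 5]

# Half-lines {x : x <= t} restricted to the ground set: VC-dimension 1,
# since {q} is never a trace on a pair {p < q}.
halflines = [frozenset(x for x in universe if x <= t) for t in range(0, 7)]
print(vc_dimension(halflines, universe))  # 1

# Intervals [a, b] on the same points: VC-dimension 2, since on a
# triple {p < q < r} the subset {p, r} is never a trace.
intervals = [frozenset(x for x in universe if a <= x <= b)
             for a in range(1, 6) for b in range(1, 6)]
print(vc_dimension(intervals, universe))  # 2
```

Classes of bounded VC-dimension such as these are exactly the ones for which uniform frequency-based approximation of probabilities, in the sense of Vapnik and Chervonenkis, is possible.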
- Séminaire Géométrie et théorie des modèles