BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Département de mathématiques et applications - ECPv6.2.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Département de mathématiques et applications
X-ORIGINAL-URL:https://www.math.ens.psl.eu
X-WR-CALDESC:Events for Département de mathématiques et applications
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Paris
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20100328T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20101031T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20100517T133000
DTEND;TZID=Europe/Paris:20100517T133000
DTSTAMP:20260414T013908Z
CREATED:20100517T113000Z
LAST-MODIFIED:20211104T085154Z
UID:7896-1274103000-1274103000@www.math.ens.psl.eu
SUMMARY:The complexity of algorithmic teaching\, query learning\, and sample compression
DESCRIPTION:Computational learning theory is concerned with the complexity of machine learning problems\, for instance with the question of how many training examples are needed for a machine to identify a concept in a given class of possible target concepts.\n\nThis presentation focuses on the combinatorial structure of concept classes that can be learned from a small number of examples. When learning from randomly chosen examples\, the VC-dimension is known to provide bounds on the number of examples needed for learning. However\, for previously studied models of algorithmic teaching (i.e.\, learning from helpful examples)\, no such combinatorial parameters are known.\n\nOur new results show that the recently introduced notion of recursive teaching dimension (RTD) produces more natural complexity bounds for teaching. For numerous quite general cases\, the RTD is upper-bounded by the VC-dimension\, thus creating links between teaching complexity and the complexity of learning from random examples. The RTD further establishes new lower bounds on the query complexity for a variety of efficient query learning models. Moreover\, some of our results exhibit a remarkable connection between the RTD and the size of sample compression schemes\, upper bounds on which are a long-standing target in computational learning theory. The RTD model can hence be considered as defining a combinatorial structure that characterizes learning from a small number of examples.\n\n(Joint work with Thorsten Doliwa and Hans Simon)
URL:https://www.math.ens.psl.eu/evenement/the-complexity-of-algorithmic-teaching-query-learning-and-sample-compression/
LOCATION:IHP Salle 314
CATEGORIES:SMILE in Paris
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20100517T143000
DTEND;TZID=Europe/Paris:20100517T143000
DTSTAMP:20260414T013908Z
CREATED:20100517T123000Z
LAST-MODIFIED:20211104T085317Z
UID:7911-1274106600-1274106600@www.math.ens.psl.eu
SUMMARY:Information-based complexity of black-box convex optimization: a new look via feedback information theory
DESCRIPTION:In this talk\, I will revisit the information complexity of black-box convex optimization\, first studied in the seminal work of Nemirovski and Yudin\, from the perspective of feedback information theory. These days\, large-scale convex programming arises in a variety of applications\, and it is important to refine our understanding of its fundamental limitations. The goal of black-box convex optimization is to minimize an unknown convex objective function from a given class over a compact\, convex domain using an iterative scheme that generates approximate solutions by querying an oracle for local information about the function being optimized. The information complexity of a given problem class is defined as the smallest number of queries needed to minimize every function in the class to some desired accuracy. We present a simple information-theoretic approach that not only recovers many of the results of Nemirovski and Yudin\, but also gives some new bounds pertaining to the optimal rates at which iterative convex optimization schemes approach the solution. As a bonus\, we give a particularly simple derivation of the minimax lower bound for a certain active learning problem on the unit interval.\n\nThis is joint work with Alexander Rakhlin (UPenn).
URL:https://www.math.ens.psl.eu/evenement/information-based-complexity-of-black-box-convex-optimization-a-new-look-via-feedback-information-theory/
LOCATION:IHP Salle 314
CATEGORIES:SMILE in Paris
END:VEVENT
END:VCALENDAR