BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Département de mathématiques et applications - ECPv6.2.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://www.math.ens.psl.eu
X-WR-CALDESC:events for Département de mathématiques et applications
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Paris
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20160327T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20161030T030000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20230522T130000
DTEND;TZID=Europe/Paris:20230522T140000
DTSTAMP:20260416T200606Z
CREATED:20230526T094128Z
LAST-MODIFIED:20230526T094128Z
UID:16518-1684760400-1684764000@www.math.ens.psl.eu
SUMMARY:Valentin De Bortoli - Generative modelling with diffusion: theory and practice
DESCRIPTION:25 May 2023\, 13h00-14h00 (Paris time)\, room Amphi Jaurès (29 Rue d’Ulm).\nValentin De Bortoli (CNRS and ENS)\nTitle: Generative modelling with diffusion: theory and practice\nAbstract: Generative modeling is the task of drawing new samples from an underlying distribution known only via an empirical measure. There exists a myriad of models to tackle this problem with applications in image and speech processing\, medical imaging\, forecasting and protein modeling to cite a few. Among these methods\, score-based generative models (or diffusion models) are a new powerful class of generative models that exhibit remarkable empirical performance. They consist of a ‘noising’ stage\, whereby a diffusion is used to gradually add Gaussian noise to data\, and a generative model\, which entails a ‘denoising’ process defined by approximating the time-reversal of the diffusion. In this talk I discuss three aspects of diffusion models. First\, I will present some of their theoretical guarantees with an emphasis on their behavior under the so-called manifold hypothesis. Such theoretical guarantees are non-vacuous and provide insight into the empirical behavior of these models. Then\, I will turn to the extension of diffusion models to non-Euclidean data. Indeed\, classical generative models assume that data is supported on a Euclidean space\, i.e. a manifold with flat geometry. In many domains such as robotics\, geoscience or protein modeling\, data is often naturally described by distributions living on Riemannian manifolds which require new methodologies to be appropriately handled. Finally\, I will turn to constraints on the generative process itself. A well-known limitation of diffusion models is that the forward-time stochastic process must be run for a sufficiently long time for the final distribution to be approximately Gaussian. In contrast\, solving the Schrödinger Bridge problem\, i.e. an entropy-regularized optimal transport problem on path spaces\, yields diffusions which generate samples from the data distribution in finite time. I will present Diffusion Schrödinger Bridge\, an original approximation of the Iterative Proportional Fitting procedure to solve the Schrödinger Bridge problem.
URL:https://www.math.ens.psl.eu/evenement/valentin-de-bortoli-generative-modelling-with-diffusion-theory-and-practice/
LOCATION:Amphi Jaurès (29 Rue d’Ulm)
CATEGORIES:Séminaire Data de l’ENS
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20230325T110000
DTEND;TZID=Europe/Paris:20230325T120000
DTSTAMP:20260416T200606Z
CREATED:20230526T094304Z
LAST-MODIFIED:20230526T094304Z
UID:16520-1679742000-1679745600@www.math.ens.psl.eu
SUMMARY:Thomas Serre - Feedforward and feedback processes in visual processing
DESCRIPTION:29 March 2023\, Thomas Serre (Brown University)\nTitle: Feedforward and feedback processes in visual processing\nAbstract: Progress in deep learning has spawned great successes in many engineering applications. As a prime example\, convolutional neural networks\, a type of feedforward neural networks\, are now approaching – and sometimes even surpassing – human accuracy on a variety of visual recognition tasks. In this talk\, however\, I will show that these neural networks (and recent extensions) exhibit a limited ability to solve seemingly simple visual reasoning problems. Our group has developed a computational neuroscience model of the feedback circuitry found in the visual cortex. The model was constrained by the anatomy and physiology of the visual cortex and shown to account for diverse visual illusions – providing computational evidence for a novel canonical circuit that is shared across visual modalities. I will show that this computational neuroscience model can be turned into a modern end-to-end trainable deep recurrent network architecture that addresses some of the shortcomings exhibited by state-of-the-art feedforward networks for visual reasoning. This suggests that neuroscience may contribute powerful new ideas and approaches to computer science and artificial intelligence.
URL:https://www.math.ens.psl.eu/evenement/thomas-serre-feedforward-and-feedback-processes-in-visual-processing/
LOCATION:Amphi Jaurès (29 Rue d’Ulm)
CATEGORIES:Séminaire Data de l’ENS
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20230202T120000
DTEND;TZID=Europe/Paris:20230202T130000
DTSTAMP:20260416T200606Z
CREATED:20230123T104951Z
LAST-MODIFIED:20230123T104951Z
UID:16258-1675339200-1675342800@www.math.ens.psl.eu
SUMMARY:Remi Gribonval - Rapture of the deep: highs and lows of sparsity in a world of depths
DESCRIPTION:Thursday 2nd of February 2023\, 12h00-13h00 (Paris time)\, room Amphi Jaurès (29 Rue d’Ulm).\nRemi Gribonval (INRIA)\n \nTitle: Rapture of the deep: highs and lows of sparsity in a world of depths\n \nAbstract: Promoting sparse connections in neural networks is natural to control their complexity. Besides\, given its thoroughly documented role in inverse problems and variable selection\, sparsity also has the potential to give rise to learning mechanisms endowed with certain interpretability guarantees. Through an overview of recent explorations around this theme\, I will compare and contrast classical sparse regularization for inverse problems with multilayer sparse regularization. During our journey\, I will notably highlight the role of rescaling-invariances in deep parameterizations. In the process we will also be reminded that there is life beyond gradient descent\, as illustrated by an algorithm that brings speedups of up to two orders of magnitude when learning certain fast transforms via multilayer sparse factorization.
URL:https://www.math.ens.psl.eu/evenement/remi-gribonval-rapture-of-the-deep-highs-and-lows-of-sparsity-in-a-world-of-depths/
LOCATION:Amphi Jaurès (29 Rue d’Ulm)
CATEGORIES:Séminaire Data de l’ENS
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20221215T120000
DTEND;TZID=Europe/Paris:20221215T130000
DTSTAMP:20260416T200606Z
CREATED:20230123T105231Z
LAST-MODIFIED:20230123T105231Z
UID:16260-1671105600-1671109200@www.math.ens.psl.eu
SUMMARY:Wolfram Pernice - Computing beyond Moore's law with photonic hardware
DESCRIPTION:Thursday December 15 2022\, Wolfram Pernice (University of Münster)\nTitle: Computing beyond Moore’s law with photonic hardware\nAbstract: Conventional computers are organized around a centralized processing architecture\, which is well suited to running sequential\, procedure-based programs. Such an architecture is inefficient for computational models that are distributed\, massively parallel and adaptive\, most notably those used for neural networks in artificial intelligence. In these application domains demand for high throughput\, low latency and low energy consumption is driving the development of not only new architectures\, but also new platforms for information processing. Photonic circuits are emerging as one promising candidate platform and allow for realizing the underlying computing architectures\, which process optical signals in analogy to electronic integrated circuits. Therein electrical connections are replaced with photonic waveguides which guide light to desired locations on chip. Through heterogeneous integration\, photonic circuits\, which are normally passive in their response\, are able to display active functionality and thus provide the means to build neuromorphic systems capable of learning and adaptation. In reconfigurable photonic architectures in-memory computing allows for overcoming separation between memory and central processing unit as a route for designing artificial neural networks\, which operate entirely in the optical domain.
URL:https://www.math.ens.psl.eu/evenement/wolfram-pernice-computing-beyond-moores-law-with-photonic-hardware/
LOCATION:Amphi Jaurès (29 Rue d’Ulm)
CATEGORIES:Séminaire Data de l’ENS
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20221004T113000
DTEND;TZID=Europe/Paris:20221004T123000
DTSTAMP:20260416T200606Z
CREATED:20220925T094331Z
LAST-MODIFIED:20220925T094420Z
UID:15942-1664883000-1664886600@www.math.ens.psl.eu
SUMMARY:Freddy Bouchet - Probabilistic forecast of extreme heat waves using convolutional neural networks and rare event simulations
DESCRIPTION:Freddy Bouchet (ENS Lyon) \nProbabilistic forecast of extreme heat waves using convolutional neural networks and rare event simulations \nUnderstanding extreme events and their probability is key for the study of climate change impacts\, risk assessment\, adaptation\, and the protection of living beings. Extreme heatwaves are\, and likely will be in the future\, among the deadliest weather events. Forecasting their occurrence probability a few days\, weeks\, or months in advance is a primary challenge for risk assessment and attribution\, but also for fundamental studies about processes\, dataset and model validation\, and climate change studies. We will demonstrate that deep neural networks can predict the probability of occurrence of long-lasting 14-day heatwaves over France\, up to 15 days ahead of time for fast dynamical drivers (500 hPa geopotential height fields)\, and at much longer lead times for slow physical drivers (soil moisture). This forecast is made seamlessly in time and space\, for fast hemispheric and slow local drivers. A key scientific message is that training deep neural networks for predicting extreme heatwaves occurs in a regime of drastic lack of data. We suggest that this is likely the case for most other applications of machine learning to large scale atmosphere and climate phenomena. We discuss perspectives for dealing with this lack of data issue\, for instance using rare event simulations. Rare event simulations are a very efficient tool to drastically oversample the statistics of rare events. We will discuss the coupling of machine learning approaches\, for instance the analogue method\, with rare event simulations\, and discuss their efficiency and their future interest for climate simulations.
URL:https://www.math.ens.psl.eu/evenement/probabilistic-forecast-of-extreme-heat-waves-using-convolutional-neural-networks-and-rare-event-simulations/
LOCATION:Amphi Jaurès (29 Rue d’Ulm)
CATEGORIES:Séminaire Data de l’ENS
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20220630T100000
DTEND;TZID=Europe/Paris:20220630T113000
DTSTAMP:20260416T200606Z
CREATED:20220530T111233Z
LAST-MODIFIED:20220530T111734Z
UID:15613-1656583200-1656588600@www.math.ens.psl.eu
SUMMARY:Data Science @ New York Times
DESCRIPTION:Chris Wiggins (Columbia & NYT)\n\nData Science @ New York Times \n\nThe Data Science group at The New York Times develops and deploys machine learning solutions to newsroom and business problems.\nRe-framing real-world questions as machine learning tasks requires not only adapting and extending models and algorithms to new or special cases but also sufficient breadth to know the right method for the right challenge.\nI’ll first outline how\n – unsupervised\,\n – supervised\, and\n – reinforcement learning methods\nare increasingly used in human applications for\n – description\,\n – prediction\, and\n – prescription\,\nrespectively.\nI’ll then focus on the ‘prescriptive’ cases\, showing how methods from the reinforcement learning and causal inference literatures can be of direct impact in\n – engineering\,\n – business\, and\n – decision-making more generally.
URL:https://www.math.ens.psl.eu/evenement/data-science-new-york-times/
LOCATION:Amphi Jaurès (29 Rue d’Ulm)
CATEGORIES:Séminaire Data de l’ENS
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20220512T113000
DTEND;TZID=Europe/Paris:20220512T123000
DTSTAMP:20260416T200606Z
CREATED:20220530T111420Z
LAST-MODIFIED:20220530T112029Z
UID:15615-1652355000-1652358600@www.math.ens.psl.eu
SUMMARY:Effective dynamics and critical scaling for Stochastic Gradient Descent in high dimensions - Gerard Ben Arous (New York University)
DESCRIPTION:Gerard Ben Arous (New York University)\n \nTitle: Effective dynamics and critical scaling for Stochastic Gradient Descent in high dimensions\n \nAbstract: SGD in high dimension is a workhorse for high dimensional statistics and machine learning\, but understanding its behavior in high dimensions is not yet a simple task. We study here the limiting ‘effective’ dynamics of some summary statistics for SGD in high dimensions\, and find interesting and new regimes\, i.e. not the expected one given by the population gradient flow. We find that a new corrector term is needed and that the phase portrait of these dynamics is substantially different from what would be predicted using the classical approach including for simple tasks. (joint work with Reza Gheissari (UC Berkeley) and Aukosh Jagannath (Waterloo))
URL:https://www.math.ens.psl.eu/evenement/learning-to-predict-complex-outputs-a-kernel-view-gerard-ben-arous-new-york-university/
LOCATION:Amphi Jaurès (29 Rue d’Ulm)
CATEGORIES:Séminaire Data de l’ENS
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20220408T110000
DTEND;TZID=Europe/Paris:20220408T120000
DTSTAMP:20260416T200606Z
CREATED:20220530T111939Z
LAST-MODIFIED:20220530T111939Z
UID:15620-1649415600-1649419200@www.math.ens.psl.eu
SUMMARY:Learning to predict complex outputs: a kernel view - Florence d'Alché-Buc (Telecom ParisTech)
DESCRIPTION:Florence d’Alché-Buc (Telecom ParisTech) \nTitle: Learning to predict complex outputs: a kernel view\n \nAbstract: Motivated by prediction tasks such as molecule identification or functional regression\, we propose to leverage the notion of kernel to take into account the nature of output variables\, whether they be discrete structures or functions. This approach boils down to encoding output data as vectors of the reproducing kernel Hilbert space associated to the so-called output kernel. We present vector-valued kernel machines to implement it and discuss different learning problems linked with the chosen loss function. Finally\, large-scale approaches can be developed using low-rank approximations of the outputs. We illustrate the framework on graph prediction and infinite task learning.
URL:https://www.math.ens.psl.eu/evenement/learning-to-predict-complex-outputs-a-kernel-view-florence-dalche-buc-telecom-paristech/
LOCATION:Amphi Jaurès (29 Rue d’Ulm)
CATEGORIES:Séminaire Data de l’ENS
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20211209T120000
DTEND;TZID=Europe/Paris:20211209T130000
DTSTAMP:20260416T200606Z
CREATED:20220112T163844Z
LAST-MODIFIED:20220112T163926Z
UID:15052-1639051200-1639054800@www.math.ens.psl.eu
SUMMARY:Data science and science with data
DESCRIPTION:The young field of machine learning has changed the ways we interact with data\, and neural networks have made us appreciate the potential of working with millions of parameters. Interestingly\, the vast majority of scientific discoveries today are not based on these new techniques. I will discuss the contrast between these two regimes and I will show how an intermediate approach\, i.e. neural-network-inspired but mathematically defined statistics (scattering and phase harmonic transforms)\, can provide the long-awaited tools in scientific research. I will illustrate these points using astrophysics as an example.
URL:https://www.math.ens.psl.eu/evenement/data-science-and-science-with-data/
LOCATION:Amphi Jaurès (29 Rue d’Ulm)
CATEGORIES:Séminaire Data de l’ENS
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20211021T104500
DTEND;TZID=Europe/Paris:20211021T114500
DTSTAMP:20260416T200606Z
CREATED:20211109T150605Z
LAST-MODIFIED:20211109T150733Z
UID:14472-1634813100-1634816700@www.math.ens.psl.eu
SUMMARY:Machine learning and applied mathematics
DESCRIPTION:The recent success of machine learning suggests that neural networks may be capable of approximating high-dimensional functions with controllably small errors. As a result\, they could outperform standard function interpolation methods that have been the workhorses of scientific computing but do not scale well with dimension. In support of this prospect\, here I will review what is known about the trainability and accuracy of shallow neural networks\, which offer the simplest instance of nonlinear learning in functional spaces that are fundamentally different from classic approximation spaces. The dynamics of training in these spaces can be analyzed using tools from optimal transport and statistical mechanics\, which reveal when and how shallow neural networks can overcome the curse of dimensionality. I will also discuss how scientific computing problems in high dimension once thought intractable can be revisited through the lens of these results\, focusing on applications related to (i) solving Fokker-Planck equations associated with high-dimensional systems displaying metastability and (ii) sampling Boltzmann-Gibbs distributions using generative models to assist MCMC methods.
URL:https://www.math.ens.psl.eu/evenement/machine-learning-and-applied-mathematics/
LOCATION:Amphi Jaurès (29 Rue d’Ulm)
CATEGORIES:Séminaire Data de l’ENS
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20170627T110000
DTEND;TZID=Europe/Paris:20170627T120000
DTSTAMP:20260416T200606Z
CREATED:20170627T090000Z
LAST-MODIFIED:20211101T153910Z
UID:8375-1498561200-1498564800@www.math.ens.psl.eu
SUMMARY:Learning to Succeed while Teaching to Fail: Privacy in Closed Machine Learning Systems
DESCRIPTION:
URL:https://www.math.ens.psl.eu/evenement/tba-7/
LOCATION:Amphi Burg\, Institut Curie\, 12 rue Lhomond (basement)
CATEGORIES:Séminaire Data de l’ENS
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20170221T120000
DTEND;TZID=Europe/Paris:20170221T120000
DTSTAMP:20260416T200606Z
CREATED:20170221T110000Z
LAST-MODIFIED:20211104T103800Z
UID:8362-1487678400-1487678400@www.math.ens.psl.eu
SUMMARY:Artificial intelligence and inductive reasoning: from information theory to artificial neural networks
DESCRIPTION:Problems of inductive reasoning or extrapolation\, such as ‘guessing the next term of a sequence of numbers’ or\, more generally\, ‘understanding the hidden structure in observations’\, are fundamental if we ever want to build an artificial intelligence. One sometimes has the impression that these problems are not mathematically well defined. Yet there exists a rigorous mathematical theory of inductive reasoning and extrapolation\, based on information theory. This theory is very elegant\, but difficult to apply. In practice today\, it is neural networks that give the best results on a whole range of concrete induction and learning problems (vision\, speech recognition\, recently the game of Go or self-driving cars…). I will review some of the underlying mathematical principles and their link with information theory.
URL:https://www.math.ens.psl.eu/evenement/intelligence-artificielle-et-raisonnement-inductif-de-la-theorie-de-linformation-aux-reseaux-de-neurones-artificiels/
LOCATION:room CONF IV (physics dept)
CATEGORIES:Séminaire Data de l’ENS
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20170110T130000
DTEND;TZID=Europe/Paris:20170110T130000
DTSTAMP:20260416T200606Z
CREATED:20170110T120000Z
LAST-MODIFIED:20211104T103401Z
UID:8356-1484053200-1484053200@www.math.ens.psl.eu
SUMMARY:A big data approach towards functional brain mapping
DESCRIPTION:
URL:https://www.math.ens.psl.eu/evenement/a-big-data-approach-towards-functional-brain-mapping/
CATEGORIES:Séminaire Data de l’ENS
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20161215T100000
DTEND;TZID=Europe/Paris:20161215T190000
DTSTAMP:20260416T200606Z
CREATED:20161215T090000Z
LAST-MODIFIED:20211104T103559Z
UID:8357-1481796000-1481828400@www.math.ens.psl.eu
SUMMARY:Special Inauguration Conference
DESCRIPTION:
URL:https://www.math.ens.psl.eu/evenement/special-inauguration-conference/
LOCATION:Salle Jaurès (29 rue d’Ulm)
CATEGORIES:Séminaire Data de l’ENS
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20161108T120000
DTEND;TZID=Europe/Paris:20161108T140000
DTSTAMP:20260416T200606Z
CREATED:20161108T110000Z
LAST-MODIFIED:20211104T101909Z
UID:8317-1478606400-1478613600@www.math.ens.psl.eu
SUMMARY:What physics can tell us about inference?
DESCRIPTION:There is a deep analogy between statistical inference and statistical physics.
URL:https://www.math.ens.psl.eu/evenement/what-physics-can-tell-us-about-inference/
LOCATION:room CONF IV (physics dept)
CATEGORIES:Séminaire Data de l’ENS
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20161011T120000
DTEND;TZID=Europe/Paris:20161011T140000
DTSTAMP:20260416T200606Z
CREATED:20161011T100000Z
LAST-MODIFIED:20211104T101430Z
UID:8301-1476187200-1476194400@www.math.ens.psl.eu
SUMMARY:Can Big Data cure Cancer?
DESCRIPTION:As the cost and throughput of genomic technologies reach a point where DNA sequencing is close to becoming a routine exam at the clinics\, there is a lot of hope that treatments of diseases like cancer can dramatically improve by a digital revolution in medicine\, where smart algorithms analyze « big medical data » to help doctors take the best decisions for each patient or to suggest new directions for drug development. While artificial intelligence and machine learning-based algorithms have indeed had a great impact on many data-rich fields\, their application on genomic data raises numerous computational and mathematical challenges that I will illustrate on a few examples of patient stratification or drug response prediction from genomic data.
URL:https://www.math.ens.psl.eu/evenement/can-big-data-cure-cancer/
LOCATION:room CONF IV (physics dept)
CATEGORIES:Séminaire Data de l’ENS
END:VEVENT
END:VCALENDAR