The remarkable performance of deep neural networks remains a mathematical mystery. Neural networks aggregate large numbers of data coefficients to solve complex classification, regression and generation problems. In physics, macroscopic properties emerge from the interactions of a multitude of microscopic particles. This presentation bridges both domains by showing in what sense they rely on similar mathematical principles. It introduces models of deep neural networks and complex physical phenomena, by separating variations at different scales with wavelets, and by learning the interactions of structures across scales. These models are applied to image classification, as well as to the generation of fluid turbulence and cosmological gravitational fields.
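The abstract's core idea of separating a signal's variations at different scales can be illustrated with a toy Haar wavelet decomposition. This is only an illustrative sketch of multiscale separation, not the models discussed in the talk; the function names are made up for this example.

```python
# Toy illustration: separating variations at different scales with
# a Haar wavelet cascade. Not the presentation's actual models.

def haar_step(signal):
    """One level of the Haar wavelet transform.

    Splits `signal` (even length) into a coarse approximation
    (pairwise averages) and details (pairwise half-differences).
    """
    coarse = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return coarse, detail

def haar_decompose(signal, levels):
    """Cascade `haar_step` to separate variations across `levels` scales."""
    details = []
    for _ in range(levels):
        signal, d = haar_step(signal)
        details.append(d)
    # Coarsest approximation plus per-scale detail coefficients,
    # ordered from finest to coarsest scale.
    return signal, details

signal = [4, 2, 6, 8, 3, 1, 7, 5]
coarse, details = haar_decompose(signal, 3)
print(coarse)   # the signal's mean: [4.5]
print(details)  # detail coefficients at each of the 3 scales
```

Each cascade level peels off the finest remaining variations, so the original signal is rewritten as a coarse summary plus structures at every scale, the representation on which cross-scale interactions can then be learned.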

Stéphane Mallat holds the chair of Data Science at the Collège de France. He was a Professor at NYU until 1994, then at École Polytechnique in Paris and at École Normale Supérieure. From 2001 to 2007 he was co-founder and CEO of a semiconductor start-up company. He is a member of the French Academy of Sciences and of the Academy of Technologies, and a foreign member of the US National Academy of Engineering. Stéphane Mallat's research interests include machine learning, signal processing and harmonic analysis. He developed the multiresolution wavelet theory and algorithms at the origin of the JPEG 2000 compression standard, and sparse signal representations in dictionaries through matching pursuits. He currently works on mathematical models of deep neural networks for data analysis and physics.