This course will provide a detailed overview of the mathematical foundations of modern learning techniques based on deep neural networks. Starting with the universal approximation property of neural networks, the course will then show why depth improves the capacity of networks to provide accurate function approximations for a given computational budget. Tools to address the optimization problems that appear when training networks on large datasets will then be covered, and their convergence properties will be reviewed. Finally, statistical results on the generalization guarantees of deep neural networks will be described, both in the classical underfitting scenario and in the overfitting scenario leading to the so-called “double descent” phenomenon.

On Mondays on the Ecole Centrale campus (bus station "campus Lyon ouest"), starting on 29 January 2024.