Summer Graduate School
Learning theory is a rich field at the intersection of statistics, probability, computer science, and optimization. Over the last few decades, the statistical learning approach has been applied successfully to many problems of great interest, such as bioinformatics, computer vision, speech processing, robotics, and information retrieval. These impressive successes relied crucially on the mathematical foundations of statistical learning.
Recently, deep neural networks have demonstrated stunning empirical results across many applications, including vision, natural language processing, and reinforcement learning. The field is now booming with new mathematical problems; in particular, the challenge of providing theoretical foundations for deep learning techniques remains largely open. At the same time, learning theory already has a rich history, with many beautiful connections to various areas of mathematics (e.g., probability theory, high-dimensional geometry, game theory). The purpose of the summer school is to introduce graduate students (and advanced undergraduates) to these foundational results, as well as to expose them to the new and exciting modern challenges that arise in deep learning and reinforcement learning.
To get the most out of the mini-courses in the school, students are encouraged to attend all the lectures and minimize distractions. Please avoid using laptops, smartphones, tablets, etc., except for note-taking (and because the material is highly mathematical, students will probably find it easier to take notes with pen and paper).
Suggested prerequisites:
- Linear algebra
- Multivariable calculus
- Real analysis
For eligibility and how to apply, see the Summer Graduate Schools homepage.
Due to the small number of students MSRI can support, only one student per institution will be funded.
Learning and adaptive systems
Computational learning theory