Summer Graduate School
Location: University of Washington, Seattle
Learning theory is a rich field at the intersection of statistics, probability, computer science, and optimization. Over the last few decades the statistical learning approach has been successfully applied to many areas of great interest, such as bioinformatics, computer vision, speech processing, robotics, and information retrieval. These impressive successes relied crucially on the mathematical foundations of statistical learning.
Recently, deep neural networks have demonstrated stunning empirical results across many applications, including vision, natural language processing, and reinforcement learning. The field is now booming with new mathematical problems; in particular, the challenge of providing theoretical foundations for deep learning techniques is still largely open. At the same time, learning theory already has a rich history, with many beautiful connections to various areas of mathematics (e.g., probability theory, high-dimensional geometry, game theory). The purpose of the summer school is to introduce graduate students (and advanced undergraduates) to these foundational results, as well as to expose them to the new and exciting modern challenges that arise in deep learning and reinforcement learning.
For eligibility and how to apply, see the Summer Graduate Schools homepage.
Suggested prerequisites:
- Linear Algebra
- Real Analysis
Because MSRI can support only a small number of students, no more than one student per institution will be funded.
Learning and adaptive systems
Computational learning theory