
Structure and function in neural networks

By

David Rolnick

University of Pennsylvania


Monday, February 17, 10:30-12:00
Room 3195, Pavillon André-Aisenstadt
Université de Montréal, 2920 Chemin de la Tour

Abstract:

Despite the success of deep learning, the design of neural networks is still based largely on empirical observation and is limited by a lack of underlying theory. In this talk, I provide theoretical insights into how the architecture of a neural network affects the functions that it can express and learn. I prove that deep networks require exponentially fewer parameters than shallow networks to approximate even simple polynomials, but that there is a massive gap between the maximum complexity of the functions that a network can express and the expected complexity of the functions that it learns in practice. Using these results, I demonstrate how to reverse-engineer the weights and structure of a neural network merely by querying it.
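The reverse-engineering claim rests on the piecewise-linear structure of ReLU networks: along any line through input space, the network's output is piecewise linear in the line parameter, and its breakpoints occur exactly where some hidden unit's preactivation crosses zero. A minimal numpy sketch of this query idea follows; the toy one-hidden-layer network, the probe line, and the spike threshold are illustrative assumptions, not the talk's actual algorithm.

import numpy as np

# Toy setup (assumed for illustration): a small random one-hidden-layer
# ReLU network that we may only evaluate (query), not inspect.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)   # hidden layer, 8 units
w2, b2 = rng.normal(size=8), rng.normal()              # output layer

def query(x):
    """Black-box access to the network: returns f(x) for a 2-D input x."""
    return w2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Probe the network along the line x(t) = p + t*d. Since f is piecewise
# linear, f(x(t)) is piecewise linear in t; its slope changes exactly where
# a hidden unit's preactivation crosses zero. Locating these breakpoints
# is a first step toward recovering the hidden layer from queries alone.
p, d = np.array([0.0, 0.0]), np.array([1.0, 0.3])
ts = np.linspace(-5.0, 5.0, 20001)
vals = np.array([query(p + t * d) for t in ts])

# Second differences vanish on linear pieces and spike at breakpoints.
second_diff = np.abs(np.diff(vals, n=2))
breaks = ts[1:-1][second_diff > 1e-6]
print(f"found {len(breaks)} breakpoint samples along the probe line")

Each hidden unit whose boundary crosses the probe segment contributes a small cluster of spike samples; repeating the probe along many lines traces out the boundary hyperplanes themselves, which is the flavor of structure the talk exploits.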

 

Biography:

David Rolnick is an NSF Mathematical Sciences Postdoctoral Research Fellow at the University of Pennsylvania. His research combines tools from machine learning, mathematics, and neuroscience to provide insight into the behavior of neural networks. He also leads the Climate Change AI initiative, dedicated to helping tackle climate change with machine learning. David received his PhD in Applied Mathematics from MIT as an NSF Graduate Research Fellow; he has also conducted research at Google and DeepMind, and as a Fulbright Scholar.