
Department of Computer Science and Operations Research


Thesis Defense - Sébastien Lachapelle

Hello everyone,

You are cordially invited to attend Sébastien Lachapelle's doctoral thesis defense next Tuesday (details below). *The presentation will be in English.*


Title: Identifying Latent Structures in Data

Date: August 27, 2024, 9:30 to 12:00 EST

Location: Mila's Auditorium 1 + Zoom link

 

Jury

President-rapporteur: Bengio, Yoshua
Research supervisor: Lacoste-Julien, Simon
Regular member: Sridhar, Dhanya
External examiner: Hyvärinen, Aapo (Helsinki Institute for Information Technology)
Dean's representative: to be announced

 

Abstract

While the deep learning approach has yielded stunning breakthroughs in multiple domains, it has come at the cost of interpretability and theoretical guarantees. This thesis is an attempt at building models that are restricted enough to be interpretable and analyzed theoretically, while remaining sufficiently expressive to be useful for high-dimensional data modalities.

The focus of most contributions is on identifiability, the property a statistical model has when its parameters can be recovered, up to some equivalence class, from the distribution it entails. While identifiability is central to causal inference, causal discovery, and independent component analysis, its understanding in the context of deep learning is underdeveloped. This thesis argues that studying identifiability in deep learning, and in machine learning more broadly, is useful both to gain insights into existing models and to build new ones that are interpretable and amenable to generalization guarantees. The outcome is novel identifiability guarantees for expressive models, for both causal discovery and representation learning.
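As a hypothetical illustration of "identifiability up to an equivalence class" (a minimal sketch, not taken from the thesis): in linear independent component analysis, independent non-Gaussian sources can be recovered from the observed mixture only up to a permutation and scaling of the sources. The toy experiment below uses scikit-learn's FastICA; the variable names and setup are my own.

```python
# Hypothetical example (not from the thesis): identifiability in linear ICA.
# With independent non-Gaussian sources, the model is identifiable only up to
# permutation and scaling -- the "equivalence class" mentioned above.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n, d = 10_000, 3
S = rng.laplace(size=(n, d))   # independent non-Gaussian sources
A = rng.normal(size=(d, d))    # unknown mixing matrix
X = S @ A.T                    # observations x = A s

S_hat = FastICA(n_components=d, random_state=0).fit_transform(X)

# Correlations between true and recovered sources: roughly one entry near
# +/-1 per row and column, i.e. recovery up to permutation and sign/scale.
C = np.corrcoef(S.T, S_hat.T)[:d, d:]
print(np.round(np.abs(C), 2))
```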

This thesis defense will focus primarily on the problem of disentanglement, the property a representation possesses when its coordinates are in one-to-one correspondence with so-called "natural factors of variation" in the data. We will look into how sparse temporal dependencies, actions with sparse effects, additive decoding structures, and sparse multi-task learning can be leveraged to achieve disentanglement with identifiability guarantees. Some examples of how these principles can be used in scientific applications will also be briefly mentioned.
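To make one of these structures concrete, here is a minimal PyTorch sketch of an "additive decoding structure": the decoder is constrained to be a sum of per-latent-block functions, f(z) = sum_b f_b(z_b). This is an illustrative toy module written for this announcement, not the thesis code; the class name, layer sizes, and dimensions are assumptions.

```python
# Hypothetical sketch (not the thesis code) of an additive decoder:
# each latent block z_b is decoded by its own small network f_b,
# and the outputs are summed, f(z) = sum_b f_b(z_b).
import torch
import torch.nn as nn

class AdditiveDecoder(nn.Module):
    def __init__(self, n_blocks: int, block_dim: int, x_dim: int):
        super().__init__()
        # One small MLP per latent block; their outputs are summed.
        self.blocks = nn.ModuleList(
            nn.Sequential(
                nn.Linear(block_dim, 64), nn.ReLU(), nn.Linear(64, x_dim)
            )
            for _ in range(n_blocks)
        )
        self.block_dim = block_dim

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z has shape (batch, n_blocks * block_dim); split it into blocks.
        chunks = z.split(self.block_dim, dim=-1)
        return sum(f_b(z_b) for f_b, z_b in zip(self.blocks, chunks))

decoder = AdditiveDecoder(n_blocks=3, block_dim=2, x_dim=10)
x = decoder(torch.randn(8, 6))  # -> shape (8, 10)
```

The additive constraint restricts how latent blocks can interact in the decoder, which is the kind of structural restriction that makes identifiability analysis tractable while keeping the per-block functions expressive.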