
Mansi Rankawat's Predoc III Presentation

Dear all,

We are happy to invite you to Mansi Rankawat's Predoc III defense on Wednesday, December 20th, at 2pm.

Title: Designing min-max algorithms using the Lyapunov function approach: with applications in adversarial robustness

Date: Wednesday, December 20th, 2023, 2–4 pm

Location: Auditorium 2, 6650 rue Saint-Urbain

Jury

President: Ioannis Mitliagkas
Research supervisor: Simon Lacoste-Julien
Member: Guillaume Lajoie

Abstract

In recent years, artificial intelligence (AI) has been widely adopted across domains, including safety-critical areas such as self-driving cars and healthcare. Deep neural networks (DNNs), which lie at the heart of AI-based systems, are known to be susceptible to adversarial attacks. Empirically, adversarial training has proven to be one of the most successful defenses against such attacks. Its underlying principle is to formulate the robustness problem as a classical min-max optimization problem. However, adversarial training lacks theoretical guarantees and suffers from instability during training.
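For concreteness, the classical saddle-point formulation referred to above is usually written as follows; the notation is conventional and assumed here for illustration, not taken from the abstract:

```latex
% Standard min-max (saddle-point) formulation of adversarial training.
% Notation assumed for illustration: \theta are network parameters,
% (x, y) a labeled example from the data distribution \mathcal{D},
% \delta a perturbation with budget \epsilon, \ell the training loss,
% and f_\theta the network.
\min_{\theta} \;
\mathbb{E}_{(x, y) \sim \mathcal{D}}
\left[ \max_{\|\delta\| \le \epsilon}
       \ell\big(f_\theta(x + \delta),\, y\big) \right]
```

The inner maximization searches for the worst-case perturbation of each input, while the outer minimization trains the network against those worst cases; the instability mentioned above arises from solving these two coupled problems simultaneously.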

In this work, we aim to study the robustness and stability of DNNs using concepts from control theory. Control theory extensively studies the design of stable, resilient systems that maintain their desired performance despite external noise and disturbances. Lyapunov functions are a key tool in stability analysis within control theory. While these functions are well known for their role in analyzing system stability and in providing convergence proofs for optimization algorithms, they can also serve as a powerful tool for establishing a systematic framework for algorithm design. In this work, we use Lyapunov functions as a framework to design min-max optimization algorithms with guaranteed stability. We show how to derive a stable algorithm using Lyapunov functions in the unconstrained deterministic min-max setting, and we discuss how to extend this framework to adversarial training of DNNs. In subsequent contributions, we aim to extend the framework to the broader setting of stochastic min-max optimization. Finally, we hope that this work offers a fresh perspective on the stability challenges inherent in min-max optimization; incorporating tools from control theory into min-max optimization for DNNs represents a promising direction for future research.
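As a minimal illustration of how a Lyapunov function certifies stability in the min-max setting, consider the following standard textbook example; it is assumed here for exposition and is not a result stated in the abstract:

```latex
% Gradient descent-ascent flow on a convex-concave objective f(x, y)
% with saddle point z^* = (x^*, y^*) (standard example, assumed for
% exposition; not taken from the talk):
\dot{z} = F(z), \qquad
F(z) = \begin{pmatrix} -\nabla_x f(x, y) \\ \phantom{-}\nabla_y f(x, y) \end{pmatrix},
\qquad z = (x, y)

% Candidate Lyapunov function: squared distance to the saddle point.
V(z) = \tfrac{1}{2} \, \| z - z^* \|^2

% Convexity-concavity of f makes -F a monotone operator, so along
% trajectories of the flow
\frac{d}{dt} V(z(t)) = \big\langle z(t) - z^*,\, F(z(t)) \big\rangle \le 0
% i.e., V never increases, which certifies stability of the dynamics.
```

Designing a discrete-time algorithm in this framework then amounts to choosing an update rule for which a matching per-iteration decrease of a chosen Lyapunov function can be proved, which is the systematic design principle the abstract refers to.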