Département d'informatique et de recherche opérationnelle
Soutenance de thèse - Zhaocheng Zhu

Dear all / Bonjour à tous,

We are happy to invite you to Zhaocheng Zhu's PhD defense on September 18th at 9:30 am (hybrid mode).


Title: Learning Representations for Reasoning: Generalizing Across Diverse Structures.

Date: September 18th, 9:30 am

Location: Auditorium 1, Mila, 6650 Saint-Urbain

 

Jury

Chair (rapporteur): Jian-Yun Nie
Research advisor: Jian Tang
Regular member: Bang Liu
External examiner: Pasquale Minervini

 

Abstract

Reasoning, the ability to logically draw conclusions from existing knowledge, is a hallmark of human intelligence. Together with perception, it constitutes one of the two major themes of artificial intelligence. While deep learning has pushed the limits of perception beyond human-level performance in computer vision and natural language processing, progress in reasoning domains lags far behind. One fundamental reason is that reasoning problems usually have flexible structures for both knowledge (e.g., knowledge graphs) and queries (e.g., multi-step queries), and many existing models only perform well on structures seen during training.

In this thesis, we aim to push the boundary of reasoning models by devising algorithms that generalize across knowledge and query structures, as well as systems that accelerate development on structured data. This thesis is composed of three parts. In Part I, we study models that can inductively generalize to unseen knowledge graphs, which involve new entity and relation vocabularies. For new entities, we propose a novel framework that learns neural operators in a dynamic programming algorithm computing path representations. This framework can be further scaled to million-scale knowledge graphs by learning a priority function. For new relations, we construct a relation graph to capture the interactions between relations, thereby converting new relations into new entities. This enables us to develop a single pre-trained model for arbitrary knowledge graphs.

In Part II, we propose two solutions for generalizing across multi-step queries on knowledge graphs and text, respectively. For knowledge graphs, we show that multi-step queries can be solved by multiple calls to graph neural networks and fuzzy logic operations. This design enables generalization to new entities, and can be integrated with our pre-trained model to accommodate arbitrary knowledge graphs. For text, we devise a new algorithm to learn explicit knowledge as textual rules to improve large language models on multi-step queries.

In Part III, we propose two systems to facilitate machine learning development on structured data. Our open-source library treats structured data as a first-class citizen and removes the barrier to developing machine learning algorithms on structured data, including graphs, molecules, and proteins. Our node embedding system solves the GPU memory bottleneck of embedding matrices and scales to graphs with billions of nodes.
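The abstract mentions a dynamic programming algorithm that computes path representations with learned neural operators (Part I). As a minimal, illustrative sketch of the general idea only — not the thesis's actual model; the function name, the elementwise-product relation operator, and the sum aggregation here are all simplifying assumptions — each iteration extends every path from a source entity by one edge, in the style of Bellman-Ford:

```python
import numpy as np

def path_representations(edges, num_nodes, rel_emb, source, num_steps=3):
    """Bellman-Ford-style dynamic programming sketch: each iteration
    extends all paths from `source` by one edge, combining the current
    node representation with a (here, fixed) relation operator."""
    dim = rel_emb.shape[1]
    # Boundary condition: an indicator-style representation on the source node.
    h = np.zeros((num_nodes, dim))
    h[source] = 1.0
    for _ in range(num_steps):
        new_h = np.zeros_like(h)
        new_h[source] = 1.0  # re-apply the boundary condition each iteration
        for u, r, v in edges:
            # Message: apply the relation operator to the tail of the path.
            # A learned model would replace this product with neural operators.
            new_h[v] += h[u] * rel_emb[r]
        h = new_h
    return h
```

Because the representations depend only on paths from the source, not on entity identities, such a scheme can in principle be applied to a graph with entirely new entities, which is the inductive setting the abstract describes.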