Learning Grounded Pragmatic Communication
By
Daniel Fried
University of California, Berkeley.
Tuesday, April 6, 2021, 10:30-12:00
On Zoom
To attend the talk, fill out the Google form before Monday, April 5, 7:00 pm.
Abstract:
To generate language, natural language processing systems predict what to say---why not also predict how listeners will respond? We show how language generation and interpretation across varied grounded domains can be improved through pragmatic inference: explicitly reasoning about the actions and intents of the people that the systems interact with. We train neural generation and interpretation models which ground language into a world context, then layer a pragmatic inference procedure on top of these models. This pragmatic procedure predicts how human listeners will interpret text generated by the models, and reasons counterfactually about why human speakers produced the text they did. We find that this approach improves models' success at generating and interpreting instructions in real indoor environments, as well as in a challenging spatial reference dialogue task.
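To illustrate the general flavor of this kind of pragmatic reasoning (a minimal rational-speech-acts style sketch, not the actual neural models described in the talk), the Python example below shows a base speaker proposing candidate utterances and a simulated base listener rescoring them by how likely each one is to be interpreted as the intended meaning; the pragmatic speaker then picks the candidate the listener would most likely understand correctly. The toy probability tables (base_speaker, base_listener) and the listener_weight parameter are hypothetical.

# Minimal sketch of pragmatic (listener-aware) utterance selection.
# Toy probability tables; illustrative only.

# Hypothetical base speaker: P(utterance | intended meaning)
base_speaker = {
    "left room":  {"go left": 0.6, "go to the room": 0.4},
    "right room": {"go right": 0.7, "go to the room": 0.3},
}

# Hypothetical base listener: P(interpreted meaning | utterance)
base_listener = {
    "go left":        {"left room": 0.9, "right room": 0.1},
    "go right":       {"left room": 0.1, "right room": 0.9},
    "go to the room": {"left room": 0.5, "right room": 0.5},
}

def pragmatic_speaker(meaning, listener_weight=1.0):
    """Rescore the base speaker's candidates by how well a simulated
    listener would recover the intended meaning, then pick the best."""
    candidates = base_speaker[meaning]
    scores = {
        utt: (base_listener[utt][meaning] ** listener_weight) * p_speak
        for utt, p_speak in candidates.items()
    }
    return max(scores, key=scores.get)

if __name__ == "__main__":
    # The ambiguous "go to the room" loses to an utterance a listener
    # would interpret correctly.
    print(pragmatic_speaker("left room"))   # -> "go left"
    print(pragmatic_speaker("right room"))  # -> "go right"

In this toy setting, the ambiguous instruction "go to the room" is dispreferred because the simulated listener cannot tell which room is meant, mirroring the idea of predicting how listeners will respond before choosing what to say.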
Biography:
Daniel Fried is a final-year PhD candidate at UC Berkeley in natural language processing, advised by Dan Klein. His research focuses on language grounding: tying language to world contexts, for tasks like visual- and embodied-instruction following, text generation, and dialogue. Previously, he graduated with an MPhil from the University of Cambridge and a BS from the University of Arizona. His work has been supported by a Google PhD Fellowship, an NDSEG Fellowship, and a Churchill Scholarship.