On Augmenting Robotic Reinforcement Learning with Prior Datasets
By
Avi Singh
Google Brain
Thursday, April 28, 2022, 3:30–4:30 PM EST
On Zoom
To attend the talk, please fill out the Google form by Wednesday, April 27, 7:00 PM.
Abstract:
Reinforcement learning provides a general framework for flexible decision making and control, but it requires extensive data collection for each new task an agent needs to learn. Further, policies learned in this fashion are often brittle and do not generalize to new scenarios. In this talk, I will present a few ways in which robotic reinforcement learning can be improved using previously collected, diverse, multi-task interaction datasets. First, I will present a method for pre-training RL agents on data from a wide range of previously seen tasks, and show how this pre-training can accelerate learning of new tasks. Then, I will show how prior datasets can be leveraged to achieve better generalization to novel scenarios.
Biography:
Avi Singh is a Research Scientist at Google Brain, where he works at the intersection of machine learning and robotics. In particular, his research focuses on making reinforcement learning techniques amenable to real-world robotics. He received a PhD in Computer Science from UC Berkeley in 2021, where he was advised by Sergey Levine. His dissertation focused on learning reward functions from small human-provided datasets and on the role of prior data in robotic reinforcement learning. He has spent time at Google X, Cornell University, and Virginia Tech, and received his undergraduate degree from IIT Kanpur.