Robot Learning and Planning
In this talk, we present several studies that combine ideas from planning and learning to mitigate the shortcomings of each approach. We start by introducing value iteration networks, a type of differentiable planner that can be used within model-free RL to obtain better generalization. Next, we consider a practical robotic assembly problem, and show that motion planning, based on readily available CAD data, can be combined with RL to quickly learn policies for assembling tight-fitting objects. We conclude with our recent work on learning to imagine goal-directed visual plans. Motivated by humans' remarkable capability to predict and plan complex manipulations of objects, and by recent advances such as GANs in imagining images, we present Visual Plan Imagination (VPI) — a new computational problem that combines image imagination and planning. In VPI, given off-policy image data from a dynamical system, the task is to 'imagine' image sequences that transition the system from start to goal. Key to our method is Causal InfoGAN, a deep generative model that can learn features compatible with strong planning algorithms. We demonstrate our approach on learning to imagine and execute robotic rope manipulation from image data.
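At the heart of a value iteration network is the classic value-iteration recursion, unrolled for a fixed number of steps as a differentiable module. The sketch below shows that recursion on a toy 2D grid; the grid size, reward map, and four-neighbour "transition model" are illustrative assumptions for this example, not details from the talk.

```python
import numpy as np

def value_iteration(reward, iterations=50, gamma=0.95):
    """Run K steps of V <- R + gamma * max_a shift_a(V) on a 2D grid.

    A VIN implements the same update with learned convolutional
    filters in place of the fixed neighbour shifts used here.
    """
    v = np.zeros_like(reward)
    for _ in range(iterations):
        # Candidate next-state values from the four grid moves
        # (edges clamped so the agent cannot leave the grid).
        shifted = [
            np.pad(v, ((1, 0), (0, 0)), mode='edge')[:-1, :],
            np.pad(v, ((0, 1), (0, 0)), mode='edge')[1:, :],
            np.pad(v, ((0, 0), (1, 0)), mode='edge')[:, :-1],
            np.pad(v, ((0, 0), (0, 1)), mode='edge')[:, 1:],
        ]
        v = reward + gamma * np.max(shifted, axis=0)
    return v

# Toy 5x5 grid with a single goal reward in one corner; values
# decay with distance from the goal, yielding a plan-like gradient.
r = np.zeros((5, 5))
r[4, 4] = 1.0
values = value_iteration(r)
```

Because every operation in the loop is differentiable, unrolling it inside a neural network lets gradients flow through the planning computation itself, which is what allows a VIN to be trained end-to-end.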
Aviv Tamar is an Assistant Professor in the Department of Electrical Engineering at Technion – Israel Institute of Technology. Previously, he was a postdoc at UC Berkeley with Prof. Pieter Abbeel, and prior to that, he completed his PhD with Prof. Shie Mannor at Technion. Aviv's research focuses on reinforcement learning, representation learning, and robotics. His work has been recognized by a NeurIPS Best Paper award, a Google Faculty Award, and the Alon fellowship for young researchers.
*Please note, this event is only open to the Princeton University community.
Lunch for talk attendees will be available at 12:00pm.
To request accommodations for a disability, please contact Emily Lawrence, email@example.com, 609-258-4624 at least one week prior to the event.