
Learning Visual Affordances for Robotic Manipulation

Report ID:
TR-013-19
Authors:
Date:
October 8, 2019
Pages:
154
Download Formats:
[PDF]

Abstract:

A human's remarkable ability to manipulate unfamiliar objects with little prior
knowledge of them is a constant inspiration for robotics research. Despite sustained
interest from the research community, and despite its practical value, robust
manipulation of novel objects in cluttered environments remains largely unsolved.
Classic solutions (e.g. involving 6D object pose estimation) typically require prior
knowledge of the objects (e.g. class categories or 3D CAD models), which may not be
available outside of highly constrained settings. More recent deep learning methods
using end-to-end convolutional networks (e.g. mapping raw pixels to motor torques)
have the potential to model complex skills that generalize, but they remain highly
data-inefficient, and robot data (e.g. from trial and error) is expensive to collect.

In this thesis, we consider an approach to learning manipulation called visual
affordances. The idea is to use classic controllers to design motion primitives, then use
convolutional networks to map from visual observations (e.g. images) to the perceived
affordances (e.g. confidence scores or action-values) of those primitives at every pixel
of the input. By leveraging dense equivariant state and action representations, this
formulation can be used to acquire complex vision-based manipulation skills (e.g.
pushing, grasping, throwing) on real robot platforms that generalize to novel objects,
while using orders of magnitude less data.
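
As a rough illustration of this formulation (a minimal sketch with hypothetical names
and layer sizes, not the networks used in the thesis), a fully convolutional model can
output one affordance score per pixel for a single primitive such as grasping, and the
highest-scoring pixel determines where the primitive is executed:

    # Minimal sketch: a fully convolutional network mapping an RGB-D observation
    # to one affordance score per pixel for a single motion primitive (e.g. grasping).
    # Layer sizes and names are illustrative assumptions, not the thesis's models.
    import torch
    import torch.nn as nn

    class AffordanceNet(nn.Module):
        def __init__(self, in_channels=4):  # e.g. RGB + depth heightmap
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(64, 1, kernel_size=1),  # one affordance value per pixel
            )

        def forward(self, x):
            return self.net(x)  # shape: (batch, 1, H, W)

    if __name__ == "__main__":
        model = AffordanceNet()
        obs = torch.rand(1, 4, 224, 224)        # hypothetical RGB-D observation
        affordances = model(obs)                # dense per-pixel scores
        flat_idx = affordances.view(-1).argmax().item()
        row, col = divmod(flat_idx, affordances.shape[-1])
        print(f"execute primitive at pixel (row={row}, col={col})")
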
While visual affordances may not be directly compatible with classic planning
frameworks that involve explicit forward simulation or propagation, in this thesis we
show that it is possible to work around this limitation by extending the formulation
with model-free reinforcement learning to sequence primitive picking motions into
more complex manipulation policies.
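
As an illustration of how such an extension might look, the per-pixel affordances can
be interpreted as Q-values and fit with a one-step temporal-difference target, so that a
primitive executed now (e.g. a push) is credited for reward obtained by a later primitive
(e.g. a grasp). This is a hedged sketch under simplified assumptions, not the thesis's
training code:

    # Minimal sketch: fitting per-pixel affordance outputs as Q-values with a
    # one-step temporal-difference target. q_net and target_net are any networks
    # with the interface of AffordanceNet above; all names are illustrative.
    import torch
    import torch.nn.functional as F

    def td_loss(q_net, target_net, obs, next_obs, pixel, reward, done, gamma=0.9):
        # Q-value at the pixel where the primitive was actually executed.
        q_sa = q_net(obs)[0, 0, pixel[0], pixel[1]]
        with torch.no_grad():
            # Bootstrap from the best pixel of the next observation.
            next_q = target_net(next_obs).max()
            target = reward + (1.0 - float(done)) * gamma * next_q
        return F.smooth_l1_loss(q_sa, target)
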
We also study how visual affordances can be combined with residual physics (learning
to predict residual values on top of control-parameter estimates from an initial
analytical controller) to learn end-to-end visuomotor policies that leverage the benefits
of analytical models while still maintaining the capacity (via data-driven residuals) to
account for real-world dynamics that are not explicitly modeled.
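
The following is a minimal sketch of the residual-physics idea, assuming a simplified
throwing setup with an ideal ballistic controller; the function names, feature sizes, and
numbers are illustrative assumptions rather than the thesis's formulation. The analytical
estimate provides a physically reasonable starting point, and the learned residual is
what allows the policy to absorb effects the model leaves out.

    # Minimal sketch: an analytical controller proposes a control parameter (a
    # throwing release velocity from ideal projectile motion), and a small network
    # predicts a residual correction on top of it. Illustrative assumptions only.
    import math
    import torch
    import torch.nn as nn

    GRAVITY = 9.81

    def analytical_release_velocity(distance, release_angle=math.pi / 4):
        # Ideal ballistics (no drag, level ground): distance = v^2 * sin(2*theta) / g
        return math.sqrt(distance * GRAVITY / math.sin(2 * release_angle))

    class ResidualModel(nn.Module):
        def __init__(self, feature_dim=16):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(feature_dim, 64),
                nn.ReLU(),
                nn.Linear(64, 1),  # residual added to the analytical estimate
            )

        def forward(self, features, v_analytical):
            return v_analytical + self.mlp(features).squeeze(-1)

    if __name__ == "__main__":
        target_distance = 1.5                     # meters, hypothetical target
        v0 = analytical_release_velocity(target_distance)
        model = ResidualModel()
        features = torch.rand(1, 16)              # e.g. visual features of the object
        v_cmd = model(features, torch.tensor([v0]))
        print(f"analytical: {v0:.3f} m/s, commanded: {v_cmd.item():.3f} m/s")
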
Finally, we conclude by discussing the limitations of learning visual affordances, which
suggest directions for future work.
