
Acquiring, Representing, and Transporting Manipulation Knowledge


Principal Investigators:

Jeannette Bohg and Leonidas Guibas

TRI Liaison:

Mike Laskey

Project Summary

We aim to equip a robot with the ability to select tools and perform manipulation tasks on objects it has never seen before. This requires acquiring manipulation knowledge from demonstrations and then encoding and storing that knowledge so it can be transported to new settings involving novel objects. To this end, we develop algorithms for processing noisy 3D data and for planning exploratory manipulations that estimate the physical object attributes a task requires.
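
As a concrete illustration of the 3D-processing step, the sketch below cleans a raw scan before any affordance or attribute estimation. It is a minimal example, not the project's pipeline: the use of Open3D, the file name, and all parameter values are assumptions.

```python
# Illustrative only: the project page does not name a library or pipeline.
# This sketch uses Open3D (an assumption) to show one standard way of
# cleaning a noisy 3D scan before affordance or attribute estimation.
import open3d as o3d

# Load a raw scan (hypothetical file name).
pcd = o3d.io.read_point_cloud("scan.ply")

# Statistical outlier removal: drop points whose mean distance to their
# 20 nearest neighbors exceeds 2 standard deviations of the global mean.
pcd_clean, kept_indices = pcd.remove_statistical_outlier(
    nb_neighbors=20, std_ratio=2.0)

# Downsample to a uniform resolution for downstream processing.
pcd_clean = pcd_clean.voxel_down_sample(voxel_size=0.005)

# Estimate surface normals, which many grasp/affordance detectors require.
pcd_clean.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.02, max_nn=30))
```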

The ability to manipulate novel objects in unstructured environments is essential to building autonomous and assistive robotic systems that enable smart environments in homes, offices, hospitals, and beyond.

Research Goals

  • Extract manipulation knowledge from 2D and 3D videos of human demonstrations
  • Transport task-indexed manipulation knowledge between related 3D object models
  • Annotate a significant fraction of models in the ShapeNet repository with manipulation information
  • Detect objects in 3D scans and regions on them that support specific manipulation affordances
  • Plan and perform physical exploration of an object to refine estimates of the attributes relevant to its manipulation (see the sketch after this list)
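
The last goal concerns refining physical attribute estimates through exploration. The following is a toy illustration of that idea, not the project's method: a scalar attribute (here a friction coefficient) is refined with a sequential Gaussian (Kalman-style) update after each simulated push. The function push_and_measure and all numeric values are invented for illustration.

```python
# Minimal sketch (not the project's actual method): refining a scalar
# physical attribute, e.g. an object's friction coefficient, from repeated
# exploratory pushes, using a sequential Gaussian (Kalman-style) update.
import numpy as np

rng = np.random.default_rng(0)

true_friction = 0.45      # unknown to the robot; used only to simulate pushes
meas_noise_var = 0.02**2  # assumed variance of each push-based measurement

# Prior belief over the attribute before any exploration.
mu, var = 0.3, 0.1**2

def push_and_measure():
    """Hypothetical stand-in for a physical push that yields a noisy
    estimate of the friction coefficient from the observed displacement."""
    return true_friction + rng.normal(0.0, np.sqrt(meas_noise_var))

for i in range(5):
    z = push_and_measure()
    k = var / (var + meas_noise_var)   # Kalman gain for a scalar state
    mu = mu + k * (z - mu)             # shift belief toward the evidence
    var = (1.0 - k) * var              # exploration shrinks uncertainty
    print(f"push {i + 1}: estimate={mu:.3f}, std={np.sqrt(var):.3f}")
```

Each push shrinks the posterior variance, so the robot can stop exploring once its uncertainty about the attribute falls below what the manipulation task tolerates.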