Princeton Robotics Seminar

Princeton Robotics Seminar: Three Lessons for Building General-Purpose Robots

Date and Time
Friday, October 6, 2023 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar
Speaker
Lerrel Pinto, from NYU

Lerrel Pinto
Over the last decade, a variety of paradigms have sought to teach robots complex and dexterous behaviors in real-world environments. At one end of the spectrum are nativist approaches, which bake in fundamental human knowledge through physics models, simulators, and knowledge graphs; at the other end are tabula rasa approaches, which teach robots from scratch. In this talk, I will argue for the need for better constructivist approaches to robotics, i.e., techniques that take guidance from humans while allowing robots to continuously adapt to changing scenarios. The constructivist guide I propose focuses on three lessons: first, creating physical interfaces that allow humans to provide robots with rich and dexterous data; second, developing adaptive learning mechanisms that allow robots to continually fine-tune in their environments; and third, architecting models that allow robots to learn from un-curated play. Applications of such a learning paradigm will be demonstrated on mobile manipulators in home environments, industrial robots on precision tasks, and multi-fingered hands on dexterous manipulation.
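
As a rough illustration of the second lesson, here is a minimal sketch of a continual fine-tuning loop, in which a pretrained policy alternates between acting in its environment and updating on the freshly collected experience. Everything here (the network sizes, the collect_rollouts placeholder, the behavior-cloning-style loss) is a hypothetical stand-in, not the speaker's implementation.

    import torch
    import torch.nn as nn

    # Hypothetical setup: a small policy network mapping observations to actions.
    policy = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 7))
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

    def collect_rollouts(policy, n=256):
        """Placeholder for on-robot data collection: returns (obs, action)
        pairs. Random tensors here, purely so the sketch runs."""
        return torch.randn(n, 32), torch.randn(n, 7)

    # Continual adaptation: act, then fine-tune on the new experience.
    for it in range(10):
        obs, act = collect_rollouts(policy)
        for _ in range(5):                       # a few gradient steps per round
            loss = ((policy(obs) - act) ** 2).mean()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()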

Bio: Lerrel Pinto is an Assistant Professor of Computer Science at NYU. His research focuses on creating general-purpose robotic systems. He received a PhD from CMU in 2019; prior to that, he received an MS from CMU in 2016 and a B.Tech in Mechanical Engineering from IIT Guwahati. His work on robotics has received paper awards at ICRA 2016 and RSS 2023, and finalist awards at IROS 2019 and CoRL 2022. He is a recipient of grants and awards from Amazon, Honda, Hyundai, Meta, LG, and Google. More recently, he was named a 2023 TR35 Innovator Under 35. Several of his works have been featured in popular media such as TechCrunch, MIT Tech Review, Wired, and BuzzFeed. His recent work can be found at www.lerrelpinto.com.

Princeton Robotics Seminar: Physical Intelligence as API

Date and Time
Friday, September 22, 2023 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar
Speaker
Pulkit Agrawal, from MIT

Pulkit Agrawal
Large Language Models (LLMs) are unprecedented in their ability to go beyond application-specific software, promising a one-stop solution to a wide range of digital tasks. With such advances, robotic agents can convert complex natural language commands into step-wise instructions. However, accurate and reliable execution of sensorimotor skills (e.g., locomotion, opening doors, object manipulation) remains elusive. I will discuss a framework that allows robots to learn new, complex, and generalizable behaviors while reducing the human effort in designing such behaviors, so that it scales easily to many tasks. This framework is a stepping stone toward building Physical Intelligence as API: a one-stop robotics solution (i.e., an API) that can perform many of the manipulation and locomotion tasks that humans perform. I will elaborate on the framework using the following case studies (a code sketch of the API framing follows the list):

(i) a dexterous manipulation system capable of re-orienting novel objects and of tool use, such as peeling vegetables;
(ii) a quadruped robot capable of fast locomotion and manipulation on diverse natural terrains;
(iii) an object re-arrangement system tested on manipulating out-of-distribution object configurations.
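
To make the "API" framing above concrete, here is a minimal, hypothetical sketch of what such a one-stop interface could look like: a single entry point that dispatches a step-wise plan to learned sensorimotor skills. The skill names and registry are invented for illustration and are not the framework presented in the talk.

    from typing import Callable, Dict, List, Tuple

    def walk_to(loc: str) -> bool:
        print(f"walking to {loc}"); return True

    def reorient(obj: str) -> bool:
        print(f"re-orienting {obj}"); return True

    def peel(obj: str) -> bool:
        print(f"peeling {obj}"); return True

    # Hypothetical registry of learned sensorimotor skills, keyed by name.
    SKILLS: Dict[str, Callable[[str], bool]] = {
        "walk_to": walk_to, "reorient": reorient, "peel": peel}

    def physical_intelligence_api(plan: List[Tuple[str, str]]) -> bool:
        """One-stop entry point: execute a step-wise plan (e.g., produced
        by an LLM from a natural language command) via learned skills."""
        for skill, arg in plan:
            if not SKILLS[skill](arg):   # abort if any skill reports failure
                return False
        return True

    # "Peel the carrot on the counter" might compile to:
    physical_intelligence_api([("walk_to", "counter"),
                               ("reorient", "carrot"),
                               ("peel", "carrot")])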

Bio: Pulkit Agrawal is an Assistant Professor in the Department of Electrical Engineering and Computer Science at MIT, where he directs the Improbable AI Lab. His research interests span robotics, computer vision, and reinforcement learning. Pulkit's work received the Best Paper Award at the Conference on Robot Learning 2021 and the Best Student Paper Award at the Conference on Computer Supported Collaborative Learning 2011. He is a recipient of the Sony Faculty Research Award, the Salesforce Research Award, the Amazon Research Award, and a Fulbright fellowship, among others. Before joining MIT, he received his PhD from UC Berkeley and his Bachelor's degree from IIT Kanpur, where he was awarded the Director's Gold Medal.

Princeton Robotics Seminar - Large Language Models with Eyes, Arms and Legs

Date and Time
Friday, June 9, 2023 - 11:00am to 12:00pm
Location
Zoom Webinar (off campus)
Type
Princeton Robotics Seminar
Speaker
Vikas Sindhwani, from Google DeepMind

Zoom link: https://princeton.zoom.us/my/robotics

Vikas Sindhwani
To become useful in human-centric environments, robots must demonstrate language comprehension, semantic understanding, and logical reasoning capabilities working in concert with low-level physical skills. With the advent of modern "foundation models" trained on massive datasets, the algorithmic path to developing general-purpose “robot brains” is (arguably) becoming clearer, though many challenges remain. In the first part of this talk, I will attempt to give a flavor of how state-of-the-art multimodal foundation models are built, and how they can be bridged with low-level control. In the second part of the talk, I will summarize a few surprising lessons on control synthesis observed while solving a collection of robotics benchmarks at Google. I will end with some emerging open problems and opportunities at the intersection of dynamics, control, and foundation models.
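
As a loose illustration of the "bridging" idea in the first part of the talk, the sketch below wires a vision encoder and a language encoder into a small action head that emits a low-level command. Every module and dimension here is an invented stand-in (randomly initialized), not the architecture used at Google.

    import torch
    import torch.nn as nn

    class VisionLanguageActor(nn.Module):
        """Toy vision-language-action model: fuse image and text features,
        then decode a continuous low-level command (e.g., an end-effector
        twist)."""
        def __init__(self, d=128):
            super().__init__()
            self.vision = nn.Linear(512, d)   # stand-in for an image encoder
            self.text = nn.Linear(768, d)     # stand-in for a text encoder
            self.head = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(),
                                      nn.Linear(d, 6))  # 6-DoF command

        def forward(self, img_feat, txt_feat):
            z = torch.cat([self.vision(img_feat), self.text(txt_feat)], dim=-1)
            return self.head(z)

    model = VisionLanguageActor()
    cmd = model(torch.randn(1, 512), torch.randn(1, 768))  # one fused action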

Bio: Vikas Sindhwani is a Research Scientist at Google DeepMind in New York, where he leads a research group focused on solving a range of planning, perception, learning, and control problems arising in robotics. His interests are broadly in the core mathematical foundations of statistical machine learning and in end-to-end design aspects of building large-scale, robust AI systems. He received the Best Paper Award at Uncertainty in Artificial Intelligence (UAI) 2013 and the IBM Pat Goldberg Memorial Award in 2014, and was a finalist for the Outstanding Planning Paper Award at ICRA 2022. He serves on the editorial boards of Transactions on Machine Learning Research (TMLR) and IEEE Transactions on Pattern Analysis and Machine Intelligence, and has been an area chair and senior program committee member for NeurIPS, the International Conference on Learning Representations (ICLR), and Knowledge Discovery and Data Mining (KDD). He previously headed the Machine Learning group at IBM Research, NY. He has a PhD in Computer Science from the University of Chicago and a B.Tech in Engineering Physics from the Indian Institute of Technology (IIT) Bombay. His publications are available at http://vikas.sindhwani.org/

Princeton Robotics Seminar - Dynamic Game Models for Multi-Agent Interactions: The Role of Information in Designing Efficient Algorithms

Date and Time
Friday, May 12, 2023 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar
Speaker
David Fridovich-Keil, from the University of Texas at Austin

David Fridovich-Keil
This talk introduces dynamic game theory as a natural modeling tool for multi-agent interactions, ranging from large, abstract systems such as ride-hailing networks to more concrete, physically embodied robotic settings such as collision avoidance in traffic. We present the key theoretical underpinnings of dynamic game models for these varied situations and draw attention to the subtleties of information structure, i.e., what information is implicitly made available to each agent in a game. Thus equipped, the talk presents a state-of-the-art technique for solving these games, as well as a set of “dual” techniques for the inverse problem of identifying players’ objectives based on observations of strategic behavior.
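
The solvers discussed in the talk target dynamic, continuous games; as a toy stand-in that still shows the equilibrium-seeking, fixed-point flavor of such computations, here is fictitious play on a static two-player zero-sum matrix game (my example, not the speaker's method).

    import numpy as np

    # Zero-sum matrix game: the row player maximizes A[i, j], the column
    # player minimizes it. Each player repeatedly best-responds to the
    # opponent's empirical mixed strategy; for zero-sum games the empirical
    # frequencies converge to an equilibrium.
    A = np.array([[3.0, 0.0],
                  [1.0, 2.0]])

    row_counts = np.ones(2)   # counts of each player's past actions
    col_counts = np.ones(2)

    for _ in range(5000):
        col_mix = col_counts / col_counts.sum()
        row_counts[np.argmax(A @ col_mix)] += 1        # row best response
        row_mix = row_counts / row_counts.sum()
        col_counts[np.argmin(row_mix @ A)] += 1        # column best response

    print("row strategy ~", row_counts / row_counts.sum())   # ~ [0.25, 0.75]
    print("col strategy ~", col_counts / col_counts.sum())   # ~ [0.5, 0.5]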

Bio: David Fridovich-Keil is an assistant professor at the University of Texas at Austin. David’s research spans optimal control, dynamic game theory, learning for control, and robot safety. While he has also worked on problems in distributed control, reinforcement learning, and active search, he is currently investigating the role of dynamic game theory in multi-agent interactive settings such as traffic. David’s work also focuses on the interplay between machine learning and classical ideas from robust, adaptive, and geometric control theory.

Multifunctional Origami Robots

Date and Time
Friday, April 21, 2023 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar
Speaker
Renee Zhao, from Stanford University

Renee Zhao
In this talk, I will introduce our recent work on origami mechanisms and actuation strategies for applications spanning from biomedical devices to foldable space structures. The first topic is magnetically actuated millimeter-scale origami medical robots for effective amphibious locomotion in severely confined spaces or aqueous environments. The robots are based on the Kresling origami, whose thin-shell structure 1) provides an internal cavity for drug storage, 2) permits torsion-induced contraction that serves as both a crawling mechanism and a pumping mechanism for controllable liquid medicine dispensing, 3) acts as a propeller that spins for propulsion when swimming, and 4) offers anisotropic stiffness to overcome the large resistance of the severely confined spaces in biomedical environments. In the second part of my talk, I will introduce the concept of the hexagonal ring origami folding mechanism as a strategy for deployable/foldable structures for space applications. The hexagonal rings can tessellate 2D/3D surfaces, and each ring can snap to a stable folded configuration occupying only 10.6% of its initial area. Through finite-element analysis and a rod model, we study snap-folding of the hexagonal ring under slight geometric modification and residual strain to enable easy folding, facilitating the design and actuation of hexagonal ring origami assemblies for functional foldable structures with extreme packing ratios.
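
A quick arithmetic note on the figure quoted above: reading "packing ratio" informally as the deployed-to-folded area ratio, a folded state that occupies 10.6% of the initial area corresponds to

    A_deployed / A_folded = 1 / 0.106 ≈ 9.4,

i.e., each ring folds to roughly a ninth of its deployed footprint.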

Bio: Renee Zhao is an Assistant Professor of Mechanical Engineering at Stanford University. Renee received her PhD degree in Solid Mechanics from Brown University in 2016. She spent two years as a postdoctoral associate at MIT working on modeling of soft composites. Before joining Stanford, she was an Assistant Professor at The Ohio State University from 2018 to 2021. Renee’s research concerns the development of stimuli-responsive soft composites and shape-morphing mechanisms for multifunctional robotic systems. Renee is a recipient of the NSF CAREER Award (2020), an AFOSR YIP (2023), the ASME Journal of Applied Mechanics Award (2021), the 2022 ASME Pi Tau Sigma Gold Medal, and the 2022 ASME Henry Hess Early Career Publication Award.

Princeton Robotics Seminar - Taskable Agility: Making Useful Dynamic Behavior Easier to Create

Date and Time
Friday, April 7, 2023 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar
Speaker
Scott Kuindersma, from Boston Dynamics

Scott Kuindersma
In this talk, I will provide some insights and observations from our recent work on Atlas, the world's most dynamic humanoid robot. I'll cover the core technical ideas that have made an impact for us over the past few years and share my thoughts about the future for robots like Atlas.

Bio: Scott Kuindersma is the Senior Director of Robotics Research at Boston Dynamics where he leads behavior research on Atlas. Prior to joining Boston Dynamics, he was an Assistant Professor of Engineering and Computer Science at Harvard. Scott’s research explores intersections of machine learning and model-based control to improve the capabilities of humanoids and other dynamic mobile manipulators.

Princeton Robotics Seminar - Learning Representations for Interactive Robotics

Date and Time
Friday, March 24, 2023 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar
Speaker
Dorsa Sadigh, from Stanford University

Dorsa Sadigh
There have been significant advances in the field of robot learning in the past decade. However, many challenges remain when considering how robot learning can advance interactive agents, such as robots that collaborate with humans. In this talk, I will discuss the role of learning representations, both for robots that interact with humans and for robots that interactively learn from humans, through a few different vignettes. I will first discuss how the bounded rationality of humans guided us toward developing learned latent action spaces for shared autonomy. It turns out this “bounded rationality” is not a bug but a feature: we can develop extremely efficient coordination algorithms by learning latent representations of partner strategies and operating in this low-dimensional space. I will then discuss how we can actively learn such representations capturing human preferences, including our recent work on how large language models can help design human preference reward functions. Finally, I will end the talk with a discussion of the types of representations useful for learning a robotics foundation model, and some preliminary results on a new model that leverages language supervision to shape representations.
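
As a schematic of the latent-action idea mentioned above (my sketch, not the speaker's models), the code below fits an autoencoder that compresses 7-DoF arm actions into a 2-D latent space; in shared autonomy, a human could then drive the decoder with a 2-axis joystick while the robot reconstructs full actions.

    import torch
    import torch.nn as nn

    # Toy stand-in for demonstration data: pretend these are 7-DoF actions.
    actions = torch.randn(1024, 7)

    enc = nn.Sequential(nn.Linear(7, 32), nn.ReLU(), nn.Linear(32, 2))
    dec = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 7))
    opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)

    for _ in range(500):                 # fit the 2-D latent action space
        recon = dec(enc(actions))
        loss = ((recon - actions) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Shared autonomy: a 2-axis joystick command decodes into a full
    # 7-DoF action, so the human controls a low-dimensional space.
    joystick = torch.tensor([[0.3, -0.8]])
    full_action = dec(joystick)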

Bio: Dorsa Sadigh is an assistant professor in Computer Science and Electrical Engineering at Stanford University. Her research interests lie at the intersection of robotics, learning, and control theory. Specifically, she is interested in developing algorithms for safe and adaptive human-robot and human-AI interaction. Dorsa received her doctoral degree in Electrical Engineering and Computer Sciences (EECS) from UC Berkeley in 2017 and her bachelor’s degree in EECS from UC Berkeley in 2012. Her awards include the Sloan Fellowship, NSF CAREER, ONR Young Investigator Award, AFOSR Young Investigator Award, DARPA Young Faculty Award, Okawa Foundation Fellowship, MIT TR35, and the IEEE RAS Early Academic Career Award.

Robot Imagination: Affordance-Based Reasoning about Unknown Objects

Date and Time
Friday, March 10, 2023 - 11:00am to 12:00pm
Location
Zoom Webinar (off campus)
Type
Princeton Robotics Seminar
Speaker
Gregory Chirikjian, from the National University of Singapore

Gregory Chirikjian
Today’s robots are very brittle in their intelligence. This follows from a legacy of industrial robotics in which robots pick and place known parts repetitively. For humanoid robots to function as servants in the home and in hospitals, they will need to demonstrate higher intelligence and must be able to function in ways that go beyond the stiff, prescribed programming of their industrial counterparts. A new approach to service robotics is discussed here. The affordances of common objects such as chairs, cups, etc., are defined in advance. When a new object is encountered, it is scanned and a virtual version of it is placed in a simulation wherein the robot “imagines” how the object can be used. In this way, robots can reason about objects that they have not encountered before and for which they have no training. After affordances are assessed, the robot then takes action in the real world, resulting in real2sim2real. Videos of physical demonstrations will illustrate this paradigm, which the presenter has developed with his students Hongtao Wu, Meng Xin, Sipu Ruan, Jikai Ye, and others.
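
The control flow of this "imagination" paradigm can be summarized in a few lines. Every function below is a hypothetical placeholder standing in for a real subsystem (scanner, physics simulator, planner); only the real-to-sim-to-real loop itself is the point.

    # Schematic of the real2sim2real loop described above (all stubs).
    def scan_object():            return {"mesh": "novel_object.obj"}
    def load_into_sim(mesh):      return {"sim_handle": mesh}
    def imagine(sim, affordance): return 0.8 if affordance == "chair" else 0.1
    def act_in_real_world(plan):  print(f"executing: {plan}")

    AFFORDANCES = ["chair", "cup", "container"]

    obj = scan_object()                      # real -> sim
    sim = load_into_sim(obj["mesh"])

    # "Imagination": simulate each candidate use and score its plausibility.
    scores = {a: imagine(sim, a) for a in AFFORDANCES}
    best = max(scores, key=scores.get)

    if scores[best] > 0.5:                   # sim -> real
        act_in_real_world(f"use the object as a {best}")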

Bio: Gregory S. Chirikjian received undergraduate degrees from Johns Hopkins University in 1988 and a Ph.D. degree from the California Institute of Technology, Pasadena, in 1992. From 1992 until 2021, he served on the faculty of the Department of Mechanical Engineering at Johns Hopkins University, attaining the rank of full professor in 2001. Additionally, from 2004 to 2007 he served as department chair. Starting in January 2019, he moved to the National University of Singapore, where he serves as Head of the Mechanical Engineering Department and has hired 14 new professors so far. Chirikjian’s research interests include robotics, applications of group theory in a variety of engineering disciplines, applied mathematics, and the mechanics of biological macromolecules. He is a 1993 National Science Foundation Young Investigator, a 1994 Presidential Faculty Fellow, and a 1996 recipient of the ASME Pi Tau Sigma Gold Medal. In 2008 Chirikjian became a fellow of the ASME, and in 2010 he became a fellow of the IEEE. From 2014 to 2015, he served as a program director for the US National Robotics Initiative, with responsibilities in the Robust Intelligence cluster of the Information and Intelligent Systems Division of CISE at NSF. Chirikjian is the author of more than 250 journal and conference papers and the primary author of three books, including Engineering Applications of Noncommutative Harmonic Analysis (2001) and Stochastic Models, Information Theory, and Lie Groups, Vols. 1 and 2 (2009, 2011). In 2016, an expanded edition of his 2001 book was published as a Dover book under a new title, Harmonic Analysis for Engineers and Applied Scientists.


Zoom link: https://princeton.zoom.us/my/robotics

Learning perception, action and interaction

Date and Time
Tuesday, February 28, 2023 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar
Speaker
Danica Kragic, from Royal Institute of Technology, KTH

Danica Kragic
All day long, our fingers touch, grasp and move objects in various media such as air, water, and oil. We do this almost effortlessly: it feels like we do not spend time planning and reflecting on what our hands and fingers do, or on how the continuous integration of various sensory modalities such as vision, touch, proprioception, and hearing helps us outperform any other biological system in the variety of interaction tasks that we can execute. Largely overlooked, and perhaps most fascinating, is the ease with which we perform these interactions, which has fostered a belief that they are also easy to accomplish in artificial systems such as robots. However, there are still no robots that can easily hand-wash dishes, button a shirt, or peel a potato. Our claim is that this is fundamentally a problem of appropriate representation or parameterization. When interacting with objects, the robot needs to consider the geometric, topological, and physical properties of those objects. This can be done either explicitly, by modeling and representing these properties, or implicitly, by learning them from data. The main objective of our work is to create new informative and compact representations of deformable objects that incorporate both analytical and learning-based approaches and that encode geometric, topological, and physical information about the robot, the object, and the environment. We do this in the context of challenging multimodal, bimanual object interaction tasks. The focus will be on physical interaction with deformable and soft objects.
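
As one small, concrete example of the geometric/topological information such representations might encode (my illustration, not the lab's code), the sketch below computes the writhe of a rope-like object's discretized centerline via a crude midpoint approximation of the Gauss double integral.

    import numpy as np

    def writhe(points):
        """Approximate writhe of an open polyline (e.g., a rope state)."""
        mids = 0.5 * (points[1:] + points[:-1])   # segment midpoints
        segs = points[1:] - points[:-1]           # segment vectors
        w = 0.0
        for i in range(len(mids)):
            for j in range(len(mids)):
                if i == j:
                    continue
                r = mids[i] - mids[j]
                w += np.dot(np.cross(segs[i], segs[j]), r) / np.linalg.norm(r) ** 3
        return w / (4.0 * np.pi)

    # A coiled rope has writhe far from zero; a straight rope has writhe ~ 0.
    t = np.linspace(0, 4 * np.pi, 200)
    helix = np.stack([np.cos(t), np.sin(t), 0.05 * t], axis=1)
    print(writhe(helix))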

Bio: Danica Kragic is a Professor at the School of Computer Science and Communication at the Royal Institute of Technology, KTH. She received an MSc in Mechanical Engineering from the Technical University of Rijeka, Croatia, in 1995 and a PhD in Computer Science from KTH in 2001. She has been a visiting researcher at Columbia University, Johns Hopkins University, and INRIA Rennes. She is the Director of the Centre for Autonomous Systems. Danica received the 2007 IEEE Robotics and Automation Society Early Academic Career Award. She is a member of the Royal Swedish Academy of Sciences and the Royal Swedish Academy of Engineering Sciences, and a founding member of the Young Academy of Sweden. She holds an honorary doctorate from Lappeenranta University of Technology. Her research is in the area of robotics, computer vision, and machine learning.

The New Wave in Robot Grasping

Date and Time
Friday, February 10, 2023 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar
Speaker
Ken Goldberg, from UC Berkeley

Ken Goldberg
Despite 50 years of research, robots remain remarkably clumsy, limiting their reliability for warehouse order fulfillment, robot-assisted surgery, and home decluttering. The First Wave of grasping research is purely analytical, applying variations of screw theory to exact knowledge of pose, shape, and contact mechanics. The Second Wave is purely empirical: end-to-end hyperparametric function approximation (aka Deep Learning) based on human demonstrations or time-consuming self-exploration. A "New Wave" of research considers hybrid methods that combine analytic models with stochastic sampling and Deep Learning models. I'll present this history along with new results from our lab on grasping diverse and previously unknown objects.
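
In the "New Wave" spirit sketched above, here is a toy hybrid pipeline (my illustration, not the lab's system): analytically sample grasp candidates, then rank them with a learned scorer. The scorer is an untrained stand-in for a network fit to simulated or real grasp outcomes.

    import numpy as np
    import torch
    import torch.nn as nn

    rng = np.random.default_rng(0)

    # Analytic stage: sample candidate grasps as (center_xyz, approach_angle),
    # e.g., antipodal pairs on an object point cloud (random stand-ins here).
    candidates = np.concatenate(
        [rng.uniform(-0.1, 0.1, (64, 3)),           # grasp centers (m)
         rng.uniform(0, np.pi, (64, 1))], axis=1)   # approach angles (rad)

    # Learned stage: a neural scorer estimates grasp success probability.
    scorer = nn.Sequential(nn.Linear(4, 32), nn.ReLU(),
                           nn.Linear(32, 1), nn.Sigmoid())

    with torch.no_grad():
        scores = scorer(torch.from_numpy(candidates).float()).squeeze(1)

    best = candidates[scores.argmax().item()]
    print("best grasp (x, y, z, theta):", best)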

Bio: Ken Goldberg is the William S. Floyd Distinguished Chair in Engineering at UC Berkeley and an award-winning roboticist, filmmaker, artist, and popular public speaker on AI and robotics. Ken trains the next generation of researchers and entrepreneurs in his research lab at UC Berkeley; he has published over 300 papers and 3 books, and holds 9 US patents. Ken’s artwork has been featured in 70 art exhibits, including the 2000 Whitney Biennial. He is a pioneer in technology and artistic visual expression, bridging the “two cultures” of art and science. With unique skills in communication, creative problem solving, invention, and thinking on the edge, Ken has presented over 600 invited lectures at events around the world. Ken has been interested in robots, rockets, and rebels since he was a kid. He’s skeptical about claims that humans are on the verge of being replaced by superintelligent machines, yet optimistic about the potential of technology to improve the human condition. Ken developed the first provably complete algorithm for part feeding and the first robot on the Internet. In 1995 he was awarded the Presidential Faculty Fellowship, and in 2005 he was elected IEEE Fellow "for contributions to networked telerobotics and geometric algorithms for automation." Ken founded UC Berkeley's Art, Technology, and Culture public lecture series in 1997 and serves on the Advisory Board of the RoboGlobal Exchange Traded Fund. Ken is Chief Scientist at Ambidextrous Robotics and serves on the Editorial Board of the journal Science Robotics. He served as Chair of the Industrial Engineering and Operations Research Department and co-founded the IEEE Transactions on Automation Science and Engineering. Short documentary films he co-wrote were selected for Sundance, and one was nominated for an Emmy Award. He lives in the Bay Area and is madly in love with his wife, filmmaker and Webby Awards founder Tiffany Shlain, and their two daughters.
