Princeton Robotics Seminar

Princeton Robotics Seminar - Learning Representations for Interactive Robotics

Date and Time
Friday, March 24, 2023 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar
Speaker
Dorsa Sadigh, from Stanford University

There have been significant advances in the field of robot learning in the past decade. However, many challenges remain when considering how robot learning can advance interactive agents such as robots that collaborate with humans. In this talk, I will discuss the role of learning representations for robots that interact with humans and robots that interactively learn from humans, through a few different vignettes. I will first discuss how the bounded rationality of humans guided us toward developing learned latent action spaces for shared autonomy. It turns out this “bounded rationality” is not a bug but a feature: we can develop extremely efficient coordination algorithms by learning latent representations of partner strategies and operating in this low-dimensional space. I will then discuss how we can actively learn such representations that capture human preferences, including our recent work on how large language models can help design human preference reward functions. Finally, I will end the talk with a discussion of the types of representations useful for learning a robotics foundation model and some preliminary results on a new model that leverages language supervision to shape representations.

Bio: Dorsa Sadigh is an assistant professor in Computer Science and Electrical Engineering at Stanford University. Her research interests lie at the intersection of robotics, learning, and control theory. Specifically, she is interested in developing algorithms for safe and adaptive human-robot and human-AI interaction. Dorsa received her doctoral degree in Electrical Engineering and Computer Sciences (EECS) from UC Berkeley in 2017, and her bachelor’s degree in EECS from UC Berkeley in 2012. She has been awarded the Sloan Fellowship, NSF CAREER, ONR Young Investigator Award, AFOSR Young Investigator Award, DARPA Young Faculty Award, Okawa Foundation Fellowship, MIT TR35, and the IEEE RAS Early Academic Career Award.

Robot Imagination: Affordance-Based Reasoning about Unknown Objects

Date and Time
Friday, March 10, 2023 - 11:00am to 12:00pm
Location
Zoom Webinar (off campus)
Type
Princeton Robotics Seminar
Speaker
Gregory Chirikjian, from National University of Singapore

Today’s robots are very brittle in their intelligence. This follows from a legacy of industrial robotics where robots pick and place known parts repetitively. For humanoid robots to function as servants in the home and in hospitals, they will need to demonstrate higher intelligence, and must be able to function in ways that go beyond the stiff prescribed programming of their industrial counterparts. A new approach to service robotics is discussed here. The affordances of common objects such as chairs, cups, etc., are defined in advance. When a new object is encountered, it is scanned and a virtual version is put into a simulation wherein the robot “imagines” how the object can be used. In this way, robots can reason about objects that they have not encountered before, and for which they have no prior training. After affordances are assessed, the robot then takes action in the real world, resulting in real2sim2real. Videos of physical demonstrations will illustrate this paradigm, which the presenter has developed with his students Hongtao Wu, Meng Xin, Sipu Ruan, Jikai Ye, and others.

Bio: Gregory S. Chirikjian received undergraduate degrees from Johns Hopkins University in 1988, and a Ph.D. degree from the California Institute of Technology, Pasadena, in 1992. From 1992 until 2021, he served on the faculty of the Department of Mechanical Engineering at Johns Hopkins University, attaining the rank of full professor in 2001. Additionally, from 2004-2007, he served as department chair. Starting in January 2019, he moved to the National University of Singapore, where he serves as Head of the Mechanical Engineering Department and has hired 14 new professors so far. Chirikjian’s research interests include robotics, applications of group theory in a variety of engineering disciplines, applied mathematics, and the mechanics of biological macromolecules. He is a 1993 National Science Foundation Young Investigator, a 1994 Presidential Faculty Fellow, and a 1996 recipient of the ASME Pi Tau Sigma Gold Medal. In 2008, Chirikjian became a fellow of the ASME, and in 2010, he became a fellow of the IEEE. From 2014-15, he served as a program director for the US National Robotics Initiative, which included responsibilities in the Robust Intelligence cluster in the Information and Intelligent Systems Division of CISE at NSF. Chirikjian is the author of more than 250 journal and conference papers and the primary author of three books, including Engineering Applications of Noncommutative Harmonic Analysis (2001) and Stochastic Models, Information Theory, and Lie Groups, Vols. 1 and 2 (2009, 2011). In 2016, an expanded edition of his 2001 book was published as a Dover book under a new title, Harmonic Analysis for Engineers and Applied Scientists.


Zoom link: https://princeton.zoom.us/my/robotics

Learning perception, action and interaction

Date and Time
Tuesday, February 28, 2023 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar
Speaker
Danica Kragic, from Royal Institute of Technology, KTH

All day long, our fingers touch, grasp and move objects in various media such as air, water, and oil. We do this almost effortlessly: it feels like we do not spend time planning and reflecting over what our hands and fingers do, or how the continuous integration of various sensory modalities such as vision, touch, proprioception, and hearing helps us to outperform any other biological system in the variety of interaction tasks that we can execute. Largely overlooked, and perhaps most fascinating, is the ease with which we perform these interactions, resulting in a belief that these are also easy to accomplish in artificial systems such as robots. However, there are still no robots that can easily hand-wash dishes, button a shirt or peel a potato. Our claim is that this is fundamentally a problem of appropriate representation or parameterization. When interacting with objects, the robot needs to consider geometric, topological, and physical properties of objects. This can be done either explicitly, by modeling and representing these properties, or implicitly, by learning them from data. The main objective of our work is to create new informative and compact representations of deformable objects that incorporate both analytical and learning-based approaches and encode geometric, topological, and physical information about the robot, the object, and the environment. We do this in the context of challenging multimodal, bimanual object interaction tasks. The focus will be on physical interaction with deformable and soft objects.

Bio: Danica Kragic is a Professor at the School of Computer Science and Communication at the Royal Institute of Technology, KTH. She received her MSc in Mechanical Engineering from the Technical University of Rijeka, Croatia in 1995 and her PhD in Computer Science from KTH in 2001. She has been a visiting researcher at Columbia University, Johns Hopkins University and INRIA Rennes. She is the Director of the Centre for Autonomous Systems. Danica received the 2007 IEEE Robotics and Automation Society Early Academic Career Award. She is a member of the Royal Swedish Academy of Sciences and the Royal Swedish Academy of Engineering Sciences, and a founding member of the Young Academy of Sweden. She holds an Honorary Doctorate from the Lappeenranta University of Technology. Her research is in the area of robotics, computer vision and machine learning.

The New Wave in Robot Grasping

Date and Time
Friday, February 10, 2023 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar
Speaker
Ken Goldberg, from UC Berkeley

Despite 50 years of research, robots remain remarkably clumsy, limiting their reliability for warehouse order fulfillment, robot-assisted surgery, and home decluttering. The First Wave of grasping research is purely analytical, applying variations of screw theory to exact knowledge of pose, shape, and contact mechanics. The Second Wave is purely empirical: end-to-end hyperparametric function approximation (aka Deep Learning) based on human demonstrations or time-consuming self-exploration. A "New Wave" of research considers hybrid methods that combine analytic models with stochastic sampling and Deep Learning models. I'll present this history with new results from our lab on grasping diverse and previously unknown objects.

Bio: Ken Goldberg is the William S. Floyd Distinguished Chair in Engineering at UC Berkeley and an award-winning roboticist, filmmaker, artist and popular public speaker on AI and robotics. Ken trains the next generation of researchers and entrepreneurs in his research lab at UC Berkeley; he has published over 300 papers and 3 books, and holds 9 US Patents. Ken’s artwork has been featured in 70 art exhibits including the 2000 Whitney Biennial. He is a pioneer in technology and artistic visual expression, bridging the “two cultures” of art and science. With unique skills in communication, creative problem solving, invention, and thinking on the edge, Ken has presented over 600 invited lectures at events around the world. Ken has been interested in robots, rockets, and rebels since he was a kid. He’s skeptical about claims that humans are on the verge of being replaced by superintelligent machines, yet optimistic about the potential of technology to improve the human condition. Ken developed the first provably complete algorithm for part feeding and the first robot on the Internet. In 1995 he was awarded the Presidential Faculty Fellowship and in 2005 was elected IEEE Fellow: "For contributions to networked telerobotics and geometric algorithms for automation." Ken founded UC Berkeley's Art, Technology, and Culture public lecture series in 1997 and serves on the Advisory Board of the RoboGlobal Exchange Traded Fund. Ken is Chief Scientist at Ambidextrous Robotics and on the Editorial Board of the journal Science Robotics. He served as Chair of the Industrial Engineering and Operations Research Department and co-founded the IEEE Transactions on Automation Science and Engineering. Short documentary films he co-wrote were selected for Sundance, and one was nominated for an Emmy Award. He lives in the Bay Area and is madly in love with his wife, filmmaker and Webby Awards founder Tiffany Shlain, and their two daughters.

Princeton Robotics Seminar: Language as Robot Middleware

Date and Time
Friday, November 11, 2022 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar
Speaker
Andy Zeng *19, from Google

We'd like to build robots that can help us with just about anything. Central to this is getting robots to build a general-purpose representation of the world from perception, and then use it to inform actions. Should this representation be 2D or 3D? How do we "anchor" it onto a desired latent space? Should it be an implicit representation? Object-centric? Can it be self-supervised? While many options exist out there, I'll talk about one in particular that's becoming my favorite: natural language. This is partly motivated by the advent of large language models, but also by recent work in multi-task learning.

In the context of robots, I'll talk about: (i) why we're starting to think that it might actually be a good idea to revisit "language" as a symbolic representation to glue our systems together to do cool things, and (ii) the various "gaps" in grounding language to control that we're discovering in the process of building these systems, and that I think we could really use your help in figuring out.

Bio: Andy Zeng is a Senior Research Scientist at Google Brain working on vision and language for robotics. He received his Bachelors in Computer Science and Mathematics at UC Berkeley '15, and his PhD in Computer Science at Princeton University '19. Andy is a recipient of several awards, including the Best Paper Award at T-RO '20, Best Systems Paper Awards at RSS '19 and Amazon Robotics '18, and 1st Place (Stow) at the Amazon Picking Challenge '17, and has been a finalist for Best Paper Awards at CoRL '21, CoRL '20, ICRA '20, RSS '19, and IROS '18. His research has been recognized through the Princeton SEAS Award for Excellence '18, NVIDIA Fellowship '18, and Gordon Y.S. Wu Fellowship in Engineering and Wu Prize '16, and his work has been featured in many popular press outlets, including the New York Times, BBC, and Wired. To learn more about Andy's work, please visit https://andyzeng.github.io

Princeton Robotics Seminar: Collaborative Robots in the Wild: Challenges and Future Directions from a Human-Centric Perspective

Date and Time
Friday, December 2, 2022 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar
Speaker
Nadia Figueroa, from University of Pennsylvania

Since the 1960s we have lived with the promise of one day being able to own a robot that would be able to co-exist, collaborate and cooperate with humans in our everyday lives. This promise has motivated a vast amount of research in recent decades on motion planning, machine learning, perception and physical human-robot interaction (pHRI). Nevertheless, we are yet to see a truly collaborative robot navigating, manipulating objects and the environment, or physically collaborating with humans and other robots outside of labs and in the human-centric dynamic spaces we inhabit; i.e., “in the wild”. This bottleneck is due to a robot-centric set of assumptions about how humans interact with and adapt to technology and machines. In this talk, I will introduce a set of more realistic human-centric assumptions and posit that for collaborative robots to be truly adopted in such dynamic, ever-changing environments, they must possess human-like characteristics of reactivity, compliance, safety, efficiency and transparency. Combining these objectives is challenging, as providing a single optimal solution can be intractable and even infeasible due to problem complexity and contradicting goals. Hence, I will present possible avenues to achieve these requirements. I will show that by adopting a Dynamical System (DS) based approach for motion planning we can achieve reactive, safe and provably stable robot behaviors while efficiently teaching the robot complex tasks with a handful of demonstrations. Further, I will show that such an approach can be extended to offer task-level reactivity and can be adopted to efficiently and incrementally learn from failures, as humans do. I will also discuss the role of compliance in collaborative robots, the allowance of soft impacts, and the relaxation of the standard definition of safety in pHRI, and how these can be achieved with DS-based and optimization-based approaches.
I will then talk about the importance of both end-users and designers having a holistic understanding of their robot’s behaviors, capabilities, and limitations and present an approach that uses Bayesian posterior sampling to achieve this. The talk will end with a discussion of open challenges and future directions to achieve truly collaborative robots in-the-wild.

Bio: Nadia Figueroa is the Shalini and Rajeev Misra Presidential Assistant Professor in the Mechanical Engineering and Applied Mechanics (MEAM) Department at the University of Pennsylvania. She holds a secondary appointment in the Computer and Information Science (CIS) department and is a faculty advisor at the General Robotics, Automation, Sensing & Perception (GRASP) laboratory. Before joining the faculty, she was a Postdoctoral Associate in the Interactive Robotics Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT), advised by Prof. Julie A. Shah. She completed a Ph.D. (2019) in Robotics, Control and Intelligent Systems at the Swiss Federal Institute of Technology in Lausanne (EPFL), advised by Prof. Aude Billard. Prior to this, she was a Research Assistant (2012-2013) at the Engineering Department of New York University Abu Dhabi (NYU-AD) and in the Institute of Robotics and Mechatronics (2011-2012) at the German Aerospace Center (DLR). She holds a B.Sc. degree in Mechatronics (2007) from Monterrey Tech (ITESM-Mexico) and an M.Sc. degree in Automation and Robotics (2012) from the Technical University of Dortmund, Germany. Her main research interest focuses on developing collaborative human-aware robotic systems: robots that can safely and efficiently interact with humans and other robots in the human-centric dynamic spaces we inhabit. This involves research at the intersection of machine learning, control theory, artificial intelligence, perception, and psychology - with a physical human-robot interaction perspective.

Princeton Robotics Seminar: Generalization in Planning and Learning for Robotic Manipulation

Date and Time
Friday, October 28, 2022 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar
Speaker
Tomas Lozano-Perez, from MIT

An enduring goal of AI and robotics has been to build a robot capable of robustly performing a wide variety of tasks in a wide variety of environments; not by sequentially being programmed (or taught) to perform one task in one environment at a time, but rather by intelligently choosing appropriate actions for whatever task and environment it is facing. This goal remains a challenge. In this talk I’ll describe recent work in our lab aimed at the goal of general-purpose robot manipulation by integrating task-and-motion planning with various forms of model learning. In particular, I’ll describe approaches to manipulating objects without prior shape models, to acquiring composable sensorimotor skills, and to exploiting past experience for more efficient planning.

Bio: Tomas Lozano-Perez is Professor in EECS at MIT, and a member of CSAIL. He was a recipient of the 2011 IEEE Robotics Pioneer Award and a co-recipient of the 2021 IEEE Robotics and Automation Technical Field Award. He is a Fellow of the AAAI, ACM, and IEEE.

Princeton Robotics Seminar: Data-Centric ML for Autonomous Driving

Date and Time
Friday, October 14, 2022 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar
Speaker
Sarah Tang, from Waymo

Waymo is building the "World's Most Experienced Driver". This talk will discuss data challenges that arise when scaling machine learning systems with almost 1,500 years' worth of real-life human driving.

Bio: Sarah is passionate about building robots that make intelligent decisions in complex, dynamic environments. She is currently a Staff Software Engineer on the Planner ML team at Waymo, where she is working to make driving safer, smarter, and more scalable with high-capacity models. In previous lives, she was the Tech Lead and Manager of the Motion Planning team at Nuro, where she led development of the decision-making and trajectory optimization stack for major product milestones, including autonomous operation of their last-mile delivery robot without a driver or a chase-car operator in Arizona, California, and Texas. She was recognized in Business Insider’s 2021 list of “Rising stars in autonomous vehicles”. Sarah got her start in robotics as a member of Princeton's Great Class of 2013.

Princeton Robotics Seminar: Towards Collective A.I.

Date and Time
Friday, September 30, 2022 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar
Speaker
Radhika Nagpal, from Princeton University

In nature, groups of thousands of individuals cooperate to create complex structures purely through local interactions: from cells that form complex organisms, to social insects like termites and ants that build nests and self-assemble bridges, to the complex and mesmerizing motion of fish schools and bird flocks. What makes these systems so fascinating to scientists and engineers alike is that even though each individual has limited ability, as a collective they achieve tremendous complexity. What would it take to create our own artificial collectives of the scale and complexity that nature achieves? In this talk I will discuss some ongoing projects that use inspiration from biological self-assembly to create robotic systems, e.g. the Kilobot swarm inspired by cells, the Termes and EcitonR robots inspired by the 3D assembly of termites and army ants, and the BlueSwarm project inspired by fish schools. There are many challenges in both building and programming robot swarms, and we use these systems to explore decentralized algorithms, embodied intelligence, and methods for synthesizing complex global behavior. Our theme is the same: can we create simple robots that cooperate to achieve collective complexity?

Biography: Radhika Nagpal is the Norman R. Augustine '57 *59 Professor in Engineering at Princeton University, joint between MAE and COS, where she heads the Self-organizing Swarms & Robotics Lab (SSR). Nagpal is a leading researcher in swarm robotics and self-organized collective intelligence. Projects from her lab include bio-inspired multi-robot systems such as the Kilobot thousand-robot swarm (Science 2014) as well as models of collective intelligence in biology (Nature Comms. 2022). In 2017 Nagpal co-founded ROOT Robotics, an educational robotics company acquired by iRobot. Nagpal was named by Nature magazine as one of the top ten influential scientists and engineers of the year (Nature10 award, Dec 2014) and she is also known for her Scientific American blog article (“The Awesomest 7 Year Postdoc”, 2013) on academic cultural change.

Princeton Robotics Seminar: Learning from Limited Data for Robot Vision in the Wild

Date and Time
Friday, September 16, 2022 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar
Speaker
Katie Skinner, from MIT

The Princeton Robotics Seminar series continues in Fall 2022 with all events fully in person. Seminars are held on Fridays from 11:00am to 12:00pm Eastern time in the Computer Science Building, Room 105.

If you have a Princeton email, please fill out the form to subscribe to the robotics-seminar mailing list.
