Building Machines that Discover Generalizable, Interpretable Knowledge

Date and Time
Tuesday, February 11, 2020 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Tom Griffiths

Kevin Ellis
Humans can learn to solve an endless range of problems (building, drawing, designing, coding, and cooking, to name a few) and need only relatively modest amounts of experience to acquire any one new skill. Machines that can similarly master a diverse span of problems are surely far off.

Here, however, I will argue that program induction--an emerging AI technique--will play a role in building this more human-like AI. Program induction systems represent knowledge as programs, and learn by synthesizing code. Across three case studies in vision, natural language, and learning-to-learn, this talk will present program induction systems that take a step toward machines that can: acquire new knowledge from modest amounts of experience; strongly generalize that knowledge to extrapolate beyond their training; learn to represent their knowledge in an interpretable format; and tackle a broad spread of problems, from drawing pictures to discovering equations. Driving these developments is a new neuro-symbolic algorithm for Bayesian program synthesis. This algorithm integrates maturing program synthesis technologies with several complementary AI traditions (symbolic, probabilistic, and neural).
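To make the idea of "learning by synthesizing code" concrete, here is a minimal, hypothetical sketch of program induction from input-output examples. It is not the neuro-symbolic algorithm presented in the talk: it simply enumerates compositions drawn from a small, assumed library of primitives and returns the first program consistent with every example. The primitive names and the toy library exist only for illustration.

```python
# Toy program induction ("programming by examples"):
# enumerate compositions of primitives, keep the first one
# that reproduces every (input, output) example.
from itertools import product

# Hypothetical primitive library (an assumption for this sketch).
PRIMITIVES = {
    "double":    lambda x: x * 2,
    "increment": lambda x: x + 1,
    "square":    lambda x: x * x,
    "negate":    lambda x: -x,
}

def synthesize(examples, max_depth=3):
    """Return the shortest primitive pipeline matching all (input, output) pairs."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def run(x, names=names):
                # Apply the chosen primitives left to right.
                for name in names:
                    x = PRIMITIVES[name](x)
                return x
            if all(run(i) == o for i, o in examples):
                # Report as a function composition, outermost first.
                return " . ".join(reversed(names))
    return None

# The target concept f(x) = (x + 1)^2 is recovered from two examples.
print(synthesize([(2, 9), (3, 16)]))   # -> "square . increment"
```

Even this brute-force version shows the appeal of the representation: the learned "knowledge" is a short, human-readable program that generalizes exactly to unseen inputs, which is what the case studies in the talk scale up with neural guidance and Bayesian inference.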

Building a human-like machine learner is a very distant, long-term goal for the field. In the near term, program induction comes with a roadmap of practical problems to push forward, such as language learning, scene understanding, and programming-by-examples, which this talk explores. But it's worth keeping these long-term goals in mind as well.

Bio: Kevin Ellis works across artificial intelligence, program synthesis, and machine learning. He develops learning algorithms that teach machines to write code, and applies these algorithms to problems in artificial intelligence. His work has appeared in machine learning venues (NeurIPS, ICLR, IJCAI) and cognitive science venues (CogSci, TOPICS). He has collaborated with researchers at Harvard, Brown, McGill, Siemens, and MIT, where he is a final-year graduate student advised by Josh Tenenbaum and Armando Solar-Lezama.

*Please note: this event is open only to the Princeton University community.

Lunch for talk attendees will be available at 12:00pm. 
To request accommodations for a disability, please contact Emily Lawrence, emilyl@cs.princeton.edu, 609-258-4624 at least one week prior to the event.