Controlling Large Language Models: Generating (Useful) Text from Models We Don’t Fully Understand

Date and Time
Thursday, March 23, 2023 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Danqi Chen

Ari Holtzman
Generative language models have recently exploded in popularity, with services such as ChatGPT deployed to millions of users. These neural models are fascinating, useful, and incredibly mysterious: rather than designing what we want them to do, we nudge them in the right direction and must discover what they are capable of. But how can we rely on such inscrutable systems?

This talk will describe a number of key characteristics we want from generative models of text, such as coherence and correctness, and show how we can design algorithms to more reliably generate text with these properties. We will also highlight some of the challenges of using such models, including the need to discover and name new and often unexpected emergent behavior. Finally, we will discuss the implications this has for the grand challenge of understanding models at a level where we can safely control their behavior.

Bio: Ari Holtzman is a PhD student at the University of Washington. His research has focused broadly on generative models of text: how we can use them and how we can understand them better. His research interests have spanned everything from dialogue, including winning the first Amazon Alexa Prize in 2017, to fundamental research on text generation, such as proposing Nucleus Sampling, a decoding algorithm used broadly in deployed systems such as the GPT-3 API and in academic research. Ari completed an interdisciplinary degree at NYU combining Computer Science and the Philosophy of Language.
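For readers unfamiliar with Nucleus Sampling (top-p sampling), the following is a minimal illustrative sketch of the core idea: sample only from the smallest set of tokens whose cumulative probability exceeds a threshold p. The variable names, the threshold value, and the toy vocabulary are assumptions for illustration, not details taken from the talk or from any deployed implementation.

    # Minimal sketch of nucleus (top-p) sampling; illustrative only.
    import numpy as np

    def nucleus_sample(logits: np.ndarray, p: float = 0.9, rng=None) -> int:
        """Sample a token id from the smallest set of tokens (the "nucleus")
        whose cumulative probability exceeds p."""
        rng = rng or np.random.default_rng()
        probs = np.exp(logits - logits.max())      # softmax, numerically stable
        probs /= probs.sum()

        order = np.argsort(probs)[::-1]            # token ids, most probable first
        cumulative = np.cumsum(probs[order])
        cutoff = np.searchsorted(cumulative, p) + 1  # smallest prefix with mass >= p
        nucleus = order[:cutoff]

        nucleus_probs = probs[nucleus] / probs[nucleus].sum()  # renormalize over the nucleus
        return int(rng.choice(nucleus, p=nucleus_probs))

    # Example with a toy 5-token vocabulary:
    token_id = nucleus_sample(np.array([2.0, 1.5, 0.3, -1.0, -2.0]), p=0.9)

Truncating to the nucleus discards the long, low-probability tail that tends to produce incoherent continuations, while still allowing variation among the plausible tokens.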


To request accommodations for a disability please contact Emily Lawrence, emilyl@cs.princeton.edu, at least one week prior to the event.
