Enabling Collaboration between Creators and Generative Models
In this talk, I argue that human creators and generative models can coexist. To achieve this, we need to involve creators in the loop of both model inference and model training while crediting them for their contributions. I will first present our recent efforts in model rewriting, which allows creators to directly control a model's behavior by adding, altering, or removing concepts and rules. I will demonstrate several applications, including creating new visual effects, customizing models with multiple personal concepts, and removing copyrighted content. I will then discuss our data attribution algorithm for assessing the influence of each training image on a generated sample. Collectively, these efforts aim to let creators leverage generative models while retaining control over the creation process and ownership of their data.
Bio: Jun-Yan Zhu is an Assistant Professor at CMU’s School of Computer Science. Prior to joining CMU, he was a Research Scientist at Adobe Research and a postdoc at MIT CSAIL. He obtained his Ph.D. from UC Berkeley and his B.E. from Tsinghua University. He studies computer vision, computer graphics, and computational photography, and his current research focuses on generative models for visual storytelling. He has received the Packard Fellowship, the NSF CAREER Award, the ACM SIGGRAPH Outstanding Doctoral Dissertation Award, and the UC Berkeley EECS David J. Sakrison Memorial Prize for outstanding doctoral research, among other awards.
To request accommodations for a disability, please contact Emily Lawrence at email@example.com at least one week prior to the event.
This talk will be recorded and live-streamed via Zoom. Register for the webinar here.