Vishvak Murahari FPO

Vishvak Murahari will present his FPO "Towards efficient and personalized Generative AI" on Wednesday, November 12, 2025 at 11:00 AM in Friend 009.

The members of Vishvak’s committee are as follows:
Examiners: Karthik Narasimhan (Adviser), Danqi Chen, Benjamin Eysenbach
Readers: Kai Li, Elad Hazan

A copy of his thesis is available upon request; please email gradinfo@cs.princeton.edu.

Everyone is invited to attend his talk. 

Abstract:
The rapid evolution of large language models (LLMs) has redefined generative artificial intelligence. Yet, modern systems remain constrained by three persistent challenges: their immense computational cost, their opaque evaluation, and their limited personalization. This dissertation addresses these challenges through a unifying goal—to make generative AI both efficient and personalized—advancing methods that improve computational efficiency, interpretability, and human alignment.

The first part of this work introduces architectural techniques that unlock latent capacity within neural networks. Data Multiplexing (DataMUX) enables models to process multiple inputs concurrently, improving inference throughput without sacrificing accuracy. Building on this idea, MUX-PLMs extend multiplexing to pretrained language models, demonstrating scalable multi-input multi-output (MIMO) inference across standard NLP benchmarks.

The second part of the dissertation shifts from computation to comprehension. QualEval introduces a qualitative evaluation framework that replaces single-number metrics with interpretable, domain-level insights. By automatically discovering latent task domains and generating diagnostic reports, QualEval bridges the gap between benchmarking and guided improvement, enabling models that are transparent and improvable.

The third part examines how generative systems shape modern information ecosystems. Generative Engine Optimization (GEO) formalizes visibility in AI search as a black-box optimization problem, introducing impression metrics and a benchmark, GEO-Bench. PersonaGym then provides a large-scale framework for evaluating persona-conditioned agents through normative, prescriptive, and descriptive dimensions. Across hundreds of simulated personas, it quantifies personality coherence and bias, establishing a foundation for context-aware, human-aligned generative models.

Together, these contributions trace a unified trajectory toward efficient personalization—a paradigm in which generative systems learn economically, explain their reasoning clearly, and engage with humans authentically.

Date and Time
Wednesday November 12, 2025 11:00am - 1:00pm
Location
Friend Center 009

Contributions to and/or sponsorship of any event does not constitute departmental or institutional endorsement of the specific program, speakers or views presented.
