Tianyu Gao 高天宇

I am a 4th-year PhD student at Princeton University, advised by Prof. Danqi Chen, and a proud member of the Princeton NLP group. Before Princeton, I received my bachelor's degree from Tsinghua University, where I was a member of THUNLP, advised by Prof. Zhiyuan Liu.

Find me on Twitter, Google Scholar, and GitHub!

Email: [firstname]g@princeton.edu

Research

My research interests lie at the intersection of natural language processing and machine learning. I am particularly interested in how to better pre-train, fine-tune/instruction-tune, and evaluate large language models (LLMs). I am also interested in augmenting LLMs with external retrieval components to mitigate hallucination and make them more trustworthy.

Highlighted Publications

Please refer to publications for the full list.

Tianyu Gao, Howard Yen, Jiatong Yu, Danqi Chen
Enabling Large Language Models to Generate Text with Citations
Proceedings of EMNLP, 2023 [pdf] [code]

Sadhika Malladi*, Tianyu Gao*, Eshaan Nichani, Alex Damian, Jason D. Lee, Danqi Chen, Sanjeev Arora
Fine-Tuning Language Models with Just Forward Passes
Proceedings of NeurIPS, 2023 (oral) [pdf] [code]

Tianyu Gao*, Xingcheng Yao*, Danqi Chen
SimCSE: Simple Contrastive Learning of Sentence Embeddings
Proceedings of EMNLP, 2021 [pdf] [code]

Tianyu Gao*, Adam Fisch*, Danqi Chen
Making Pre-trained Language Models Better Few-shot Learners
Proceedings of ACL, 2021 [pdf] [code]



This website is adapted from Gregory Gundersen.