I am a 4th-year PhD student at Princeton University, advised by Prof. Danqi Chen. I am also a proud member of the Princeton NLP group. Before Princeton, I received my bachelor's degree from Tsinghua University, where I was a member of THUNLP, advised by Prof. Zhiyuan Liu.
Find me on Twitter, Google Scholar, and GitHub!
Email: [firstname]g@princeton.edu
My research interests lie at the intersection of natural language processing and machine learning. I am specifically interested in how to better pre-train, fine-tune/instruction-tune, and evaluate large language models (LLMs). I am also interested in augmenting LLMs with external retrieval components to alleviate hallucination and make them more trustworthy.
Please refer to publications for the full list.
Enabling Large Language Models to Generate Text with Citations
Proceedings of EMNLP, 2023
[pdf]
[code]
Fine-Tuning Language Models with Just Forward Passes
Proceedings of NeurIPS, 2023
(oral)
[pdf]
[code]
SimCSE: Simple Contrastive Learning of Sentence Embeddings
Proceedings of EMNLP, 2021
[pdf]
[code]
Making Pre-trained Language Models Better Few-shot Learners
Proceedings of ACL, 2021
[pdf]
[code]
This website is adapted from Gregory Gunderson.