I'm a PhD student in the Princeton NLP group, advised by Danqi Chen, and affiliated with PLI. Previously, I graduated with a BA and MEng from the University of Cambridge, where I worked with Adrian Weller.
I am interested in improving our understanding of language models through the lens of their training data, architecture, and training objective.
[Google Scholar] [GitHub] [Twitter]
(* indicates equal contribution)
SWE-bench: Can Language Models Resolve Real-World GitHub Issues?
Carlos E. Jimenez*, John Yang*, Alexander Wettig, Shunyu Yao, Kexin Pei, Ofir Press, Karthik Narasimhan
Pre-print 2023
Adapting Language Models to Compress Contexts
Alexis Chevalier*, Alexander Wettig*, Anirudh Ajith, Danqi Chen
Pre-print 2023
Learning Transformer Programs
Dan Friedman, Alexander Wettig, Danqi Chen
Pre-print 2023
A Kernel-Based View of Language Model Fine-Tuning
Sadhika Malladi, Alexander Wettig, Dingli Yu, Danqi Chen, Sanjeev Arora
ICML 2023
Should You Mask 15% in Masked Language Modeling?
Alexander Wettig*, Tianyu Gao*, Zexuan Zhong, Danqi Chen
EACL 2023
Finding Dataset Shortcuts with Grammar Induction
Dan Friedman, Alexander Wettig, Danqi Chen
EMNLP 2022
Phrase Retrieval Learns Passage Retrieval, Too
Jinhyuk Lee, Alexander Wettig, Danqi Chen
EMNLP 2021