☑ represents peer-reviewed papers
Book | ☑ AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference
Arvind Narayanan, Sayash Kapoor Princeton University Press (2024) |
Preprint | AI Agents That Matter · Blog post
Sayash Kapoor*, Benedikt Stroebl, Zachary S. Siegel, Nitya Nadgir, Arvind Narayanan Preprint (2024) |
Journal | ☑ Considerations for governing open foundation models
Rishi Bommasani, Sayash Kapoor, Kevin Klyman, Shayne Longpre, Ashwin Ramaswami, Daniel Zhang, Marietje Schaake, Daniel E. Ho, Arvind Narayanan, Percy Liang Science (2024) |
Journal | ☑ REFORMS: Reporting Standards for Machine Learning Based Science · Blog post
Sayash Kapoor, Emily Cantrell, Kenny Peng, Thanh Hien (Hien) Pham, Christopher A. Bail, Odd Erik Gundersen, Jake M. Hofman, Jessica Hullman, Michael A. Lones, Momin M. Malik, Priyanka Nanayakkara, Russell A. Poldrack, Inioluwa Deborah Raji, Michael Roberts, Matthew J. Salganik, Marta Serra-Garcia, Brandon M. Stewart, Gilles Vandewiele, Arvind Narayanan Science Advances (2024) |
Conference | ☑ On the Societal Impact of Open Foundation Models · Blog post
Sayash Kapoor, Rishi Bommasani, Kevin Klyman, Shayne Longpre, Ashwin Ramaswami, Peter Cihon, Aspen Hopkins, Kevin Bankston, Stella Biderman, Miranda Bogen, Rumman Chowdhury, Alex Engler, Peter Henderson, Yacine Jernite, Seth Lazar, Stefano Maffulli, Alondra Nelson, Joelle Pineau, Aviya Skowron, Dawn Song, Victor Storchan, Daniel Zhang, Daniel E. Ho, Percy Liang, Arvind Narayanan ICML (2024) Oral |
Conference | ☑ A Safe Harbor for AI Evaluation and Red Teaming · Blog post
Shayne Longpre, Sayash Kapoor, Kevin Klyman, Ashwin Ramaswami, Rishi Bommasani, Borhane Blili-Hamelin, Yangsibo Huang, Aviya Skowron, Zheng-Xin Yong, Suhas Kotha, Yi Zeng, Weiyan Shi, Xianjun Yang, Reid Southen, Alexander Robey, Patrick Chao, Diyi Yang, Ruoxi Jia, Daniel Kang, Sandy Pentland, Arvind Narayanan, Percy Liang, Peter Henderson ICML (2024) Oral
Our open letter to AI companies calling for a safe harbor was signed by over 350 academics, researchers, and civil society members. |
Journal | ☑ How large language models can reshape collective intelligence Jason W. Burton, Ezequiel Lopez-Lopez, Shahar Hechtlinger, Zoe Rahwan, Samuel Aeschbach, Michiel A. Bakker, Joshua A. Becker, Aleks Berditchevskaia, Julian Berger, Levin Brinkmann, Lucie Flek, Stefan M. Herzog, Saffron Huang, Sayash Kapoor, Arvind Narayanan et al. Nature Human Behaviour (2024) |
Preprint | The Foundation Model Transparency Index v1.1 Rishi Bommasani, Kevin Klyman, Sayash Kapoor, Shayne Longpre, Betty Xiong, Nestor Maslej, Percy Liang Preprint (2024) |
Preprint | CORE-Bench: Fostering the Credibility of Published Research Through a Computational Reproducibility Agent Benchmark Zachary S. Siegel, Sayash Kapoor, Nitya Nadgir, Benedikt Stroebl, Arvind Narayanan Preprint (2024) |
Preprint | The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources Shayne Longpre, Stella Biderman, Alon Albalak, Hailey Schoelkopf, Daniel McDuff, Sayash Kapoor et al. Preprint (2024) |
Preprint | Towards a Framework for Openness in Foundation Models Adrien Basdevant, Camille François, Victor Storchan, Kevin Bankston, Ayah Bdeir, Brian Behlendorf, Merouane Debbah, Sayash Kapoor et al. Preprint (2024) |
Journal | Promises and pitfalls of large language models for legal professionals and lay people · Blog post
Sayash Kapoor, Peter Henderson, Arvind Narayanan Journal of Cross-disciplinary Research in Computational Law (2024) |
Journal | ☑ Against Predictive Optimization: On the Legitimacy of Decision-Making Algorithms that Optimize Predictive Accuracy · Blog post
Angelina Wang*, Sayash Kapoor*, Solon Barocas, Arvind Narayanan ACM Journal on Responsible Computing (2024)
Also presented at: Philosophy, AI, and Society (2023); Data (Re)Makes the World (2023); ACM Conference on Fairness, Accountability, and Transparency (2023) |
Preprint | Foundation Model Transparency Reports · Blog post
Rishi Bommasani, Kevin Klyman, Shayne Longpre, Betty Xiong, Sayash Kapoor, Nestor Maslej, Arvind Narayanan, Percy Liang Preprint (2024) |
Preprint | The Foundation Model Development Cheatsheet
Shayne Longpre, Stella Biderman, Alon Albalak, Gabriel Ilharco, Sayash Kapoor, Kevin Klyman, Kyle Lo, Maribeth Rauh, Nay San, Hailey Schoelkopf, Aviya Skowron, Bertie Vidgen, Laura Weidinger, Arvind Narayanan, Victor Sanh, David Adelani, Percy Liang, Rishi Bommasani, Peter Henderson, Sasha Luccioni, Yacine Jernite, Luca Soldaini Preprint (2024) |
Journal | ☑ Leakage and the reproducibility crisis in ML-based science
Sayash Kapoor, Arvind Narayanan Patterns (2023) |
Policy brief | Considerations for Governing Open Foundation Models · Blog post
Rishi Bommasani, Sayash Kapoor, Kevin Klyman, Shayne Longpre, Ashwin Ramaswami, Daniel Zhang, Marietje Schaake, Daniel E. Ho, Arvind Narayanan, Percy Liang Stanford HAI Issue Brief (2023) |
Preprint | The Foundation Model Transparency Index
Rishi Bommasani, Kevin Klyman, Shayne Longpre, Sayash Kapoor, Nestor Maslej, Betty Xiong, Daniel Zhang, Percy Liang Preprint (2023) |
Journal | The limitations of machine learning models for predicting scientific replicability
M. J. Crockett, Xuechunzi Bai, Sayash Kapoor, Lisa Messeri, and Arvind Narayanan Proceedings of the National Academy of Sciences (2023) |
Online essay | How to Prepare for the Deluge of Generative AI on Social Media
Sayash Kapoor, Arvind Narayanan Knight First Amendment Institute (2023) |
Conference | ☑ Weaving Privacy and Power: On the Privacy Practices of Labor Organizers in the U.S. Technology Industry
Sayash Kapoor*, Matthew Sun*, Mona Wang*, Klaudia Jaźwińska*, Elizabeth Anne Watkins* ACM Conference on Computer-Supported Cooperative Work and Social Computing (2022) 🏆 Impact Recognition Award |
Conference | ☑ The worst of both worlds: A comparative analysis of errors in learning from data in psychology and machine learning
Jessica Hullman, Sayash Kapoor, Priyanka Nanayakkara, Andrew Gelman, Arvind Narayanan ACM Conference on AI, Ethics, and Society (2022) |
Conference | ☑ Controlling polarization in personalization: an algorithmic framework
L. Elisa Celis, Sayash Kapoor, Farnood Salehi, and Nisheeth K. Vishnoi ACM Conference on Fairness, Accountability, and Transparency (FAccT) (2019) 🏆 Best Paper Award |
Journal | ☑ Corruption-tolerant bandit learning
Sayash Kapoor, Kumar Kshitij Patel, and Purushottam Kar Machine Learning (2019) |
Journal | ☑ A dashboard for controlling polarization in personalization
L. Elisa Celis, Sayash Kapoor, Vijay Keswani, Farnood Salehi, and Nisheeth K. Vishnoi AI Communications (2019) |
Conference | ☑ Balanced news using constrained bandit-based personalization
Sayash Kapoor, Vijay Keswani, Nisheeth K. Vishnoi, and L. Elisa Celis IJCAI Demos Track (2018) |