Arvind Narayanan — Princeton

I'm a professor of computer science at Princeton University, affiliated with the Center for Information Technology Policy.

I study the societal impact of digital technologies, especially AI.

Office: 308 Sherrerd

Twitter: @random_walker

AI snake oil

I’m writing a book about AI Snake Oil — AI that does not and cannot work — with Sayash Kapoor. Here’s a sneak peek. We're sharing our developing ideas on Substack; subscribe here. We are also tackling the machine learning reproducibility crisis.

I first spoke about this topic in 2019; the slides have been widely circulated. I co-taught a related course on limits to prediction; here are the course materials.

Algorithmic amplification on social media

For 2022-23 I’m a visiting senior researcher at the Knight First Amendment Institute at Columbia, where I’m writing about how recommendation algorithms on social media amplify some speech and suppress others.

I'm co-organizing a symposium on algorithmic amplification that will take place April 27 and 28, 2023, at Columbia University and online. Registration info coming soon.

Fairness and ethics in computing » see more

I coauthored a textbook on fairness and machine learning, available online. My work was among the first to rigorously show how machine learning reflects racial, gender, and other cultural biases.

I've worked on exposing deceptive practices online, including so-called dark patterns.

Web privacy » see more

I led Princeton's Web Transparency and Accountability Project. Through large-scale, automated web measurement, we uncovered how companies collect and use our personal information. Our open-source tool OpenWPM has enabled ~100 studies of web privacy.

Previously I helped develop the Do Not Track standard.

Cryptocurrencies and blockchains » see more

I led the creation of an online course and textbook on cryptocurrencies, which has been used in over 150 courses worldwide.

My main interest these days is helping shape public policy to counter the harms of cryptocurrency.

De-anonymization » see more

I've shown how sensitive information can be inferred from seemingly innocuous "anonymized" data, ranging from browsing histories to genomes. See this primer on the research and this policy piece on what it means for privacy.