Princeton collaboration brings new insights to the ethics of artificial intelligence

Molly Sharlach, Office of Engineering Communications

Should machines decide who gets a heart transplant? Or how long a person will stay in prison?

The growing use of artificial intelligence in both everyday life and life-altering decisions brings up complex questions of fairness, privacy and accountability. Surrendering human authority to machines raises concerns for many people. At the same time, AI technologies have the potential to help society move beyond human biases and make better use of limited resources.

Princeton Dialogues on AI and Ethics is an interdisciplinary research project that addresses these issues, bringing engineers and policymakers into conversation with ethicists, philosophers and other scholars. At the project’s first workshop in fall 2017, watching these experts get together and share ideas was “like nothing I’d seen before,” said Ed Felten, director of Princeton’s Center for Information Technology Policy (CITP). “There was a vision for what this collaboration could be that really locked into place.”

The project is a joint venture of CITP and the University Center for Human Values, which serves as “a forum that convenes scholars across the University to address questions of ethics and value” in diverse settings, said director Melissa Lane, the Class of 1943 Professor of Politics. Efforts have included a public conference, held in March 2018, as well as more specialized workshops beginning in 2017 that have convened experts to develop case studies, consider questions related to criminal justice, and draw lessons from the study of bioethics.

Ed Felten (left), director of the Center for Information Technology Policy, and politics professor Melissa Lane (right), director of the University Center for Human Values, created the “Princeton Dialogues on AI and Ethics.”
Photo by Sameer A. Khan/Fotobuddy

“Our vision is to take ethics seriously as a discipline, as a body of knowledge, and to try to take advantage of what humanity has understood over millennia of thinking about ethics, and apply it to emerging technologies,” said Felten, Princeton’s Robert E. Kahn Professor of Computer Science and Public Affairs. He emphasized that the careful implementation of AI systems can be an opportunity “to achieve better outcomes with less bias and less risk. It’s important not to see this as an entirely negative situation.”

Ethical knowledge is particularly critical for decisions about AI technologies, which can influence people’s lives at a greater speed and on a larger scale than many previous innovations, but risk doing so with insufficient accountability, said Lane. Felten cited the use of automated risk assessment or prediction tools in the criminal justice system to make decisions about bail, sentencing, or parole — decisions that are traditionally made by human judges.

One major question is whether AI systems should be designed to reproduce current human decision patterns, even where those patterns are known to be shaped by bias and injustice, or should instead aim for a greater degree of fairness. But then, what is fairness?

Philosophers have always known that there are different ways to view fairness, said Lane. “But we haven’t always been pressed to work out the implications of committing to one view and operationalizing it in the form of an algorithm. How do we evaluate those choices in a real-life setting?”

The project’s initial case studies, released in May 2018 and available for public use under a Creative Commons license, are based on real-world situations that have been fictionalized for study purposes. They examine ethical dilemmas arising from various applications of AI technology. CITP visiting research fellows Chloé Bakalar and Bendert Zevenbergen played key roles in coordinating and authoring the case studies, with input from the project’s workshop participants.

Lane noted that the case studies are intended as a starting point for conversations on AI and ethics in classroom settings, as well as among practitioners and policymakers. Beyond these specific examples, she said, “we also are very conscious of the society-wide, systemic questions — the questions about monopoly power, the questions about privacy, the questions about governmental regulation.”

The project will seek to broaden the scope of its resources and teaching tools in the coming years, with the goal of building a new field of research and practice. Collaborations with similar efforts by other universities are also underway, including joint conferences with Stanford and Harvard held in fall 2018.

Case studies

Learn more about the project and the case studies at the Dialogues on AI and Ethics website.

Illustrations

Illustrations by Matilda Luk, Office of Communications

Should a high school collect data on its students’ behaviors to identify at-risk teens? What is the appropriate balance between privacy and improving educational outcomes, and who should decide?

If a sound-recognition app correctly identifies details such as a speaker's gender 99.84 percent of the time, is the remaining 0.16 percent error rate acceptable when the app operates at such a large scale that numerous people are embarrassingly misidentified each day?
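To make the scale concern concrete, here is a minimal back-of-the-envelope sketch in Python. The daily usage volume is a hypothetical assumption chosen for illustration; it does not come from the case study.

    # Hypothetical sketch: even 99.84% accuracy yields many errors at scale.
    accuracy = 0.9984
    error_rate = 1 - accuracy            # 0.16% of identifications are wrong

    # Assumed daily volume for a widely used app (not from the case study).
    daily_identifications = 10_000_000

    expected_errors = error_rate * daily_identifications
    print(f"Expected misidentifications per day: {expected_errors:,.0f}")
    # Prints: Expected misidentifications per day: 16,000

A tiny per-interaction error rate, multiplied by millions of daily uses, still means thousands of people affected every day, which is the heart of the case study's question.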

A chatbot helps law enforcement officials catch online identity thieves, but is the bot encouraging people to commit crimes? What happens when this technology crosses international borders?

A wristwatch-like sensor coupled with machine learning software prompts diabetes patients to better manage their care. Are there issues of paternalism and transparency when the system runs mini-experiments to optimize which prompts and treatments are best for users?

This article was originally published in the Ethics in Engineering issue of the winter 2019 EQuad News.
