CITP Seminar: Towards Human-Centered Design of Responsible AI

Date and Time
Tuesday, February 8, 2022 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Michael Madaio, from Microsoft Research



Despite widespread awareness that AI systems may cause harm to marginalized groups—and in spite of a growing number of principles, toolkits, and other resources for fairness and ethics in AI—new examples of the inequitable impacts of algorithmic systems emerge every day. This talk will discuss my research with AI practitioners to co-design resources that support them in proactively addressing fairness-related harms, as well as insights from that research about the organizational dynamics that shape responsible AI work practices. It will also review emerging research on supporting impacted stakeholders' participation in AI design processes, and conclude with the implications of our research for the design of tools, resources, and organizational policies to support more human-centered and responsible AI.

Bio: Michael Madaio is a postdoctoral researcher at Microsoft Research working with the FATE research group (Fairness, Accountability, Transparency, and Ethics in AI). He works at the intersection of human-computer interaction and AI/ML, focusing on enabling more fair and responsible AI through research with AI practitioners and stakeholders impacted by AI systems. He received his Ph.D. in Human-Computer Interaction from Carnegie Mellon University.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event. 

This talk will be recorded.

CITP Seminar: Disinformation and Its Threat to Democracy

Date and Time
Tuesday, February 1, 2022 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Danny Rogers, from Global Disinformation Index



We are at a watershed moment in the history of information. The internet has given historically underrepresented voices unprecedented access to tools for self-expression, new platforms to build communities, and new capabilities to speak truth to power. But in response, our social internet is now being corrupted, exploited, and weaponized by those with the power to control the flow of information and distort reality.

Marshall McLuhan and others predicted this rise of “fifth generation warfare” as far back as the 1970s, and now we are seeing social media provide the “perfect storm” of disinformation, with deadly consequences around the world.  In an era where the dominant internet business models reward engagement above all else, and where these engagement-driven algorithmic feeds warp the realities of over half the world’s population, no less than humanity’s progress since the Enlightenment itself is threatened.  Rogers will talk about how we arrived at this crisis point, how to define disinformation in the first place, and what can be done at an individual, commercial, and global policymaker level to combat this scourge.

Bio: Danny Rogers is the co-founder and CTO of the Global Disinformation Index, a non-profit focused on catalyzing change within the tech industry to disincentivize the creation and dissemination of disinformation.  Prior to founding the GDI, Rogers founded and led Terbium Labs, an information security and dark web intelligence startup based in Baltimore, Maryland. He is a computational physicist with experience supporting Defense and Intelligence Community Cyber Operations, as well as startup experience in the defense, energy, and biotech sectors. He is an author and expert in the field of quantum cryptography and has published numerous patents and papers on that and other subjects. Prior to co-founding Terbium Labs, Rogers managed a portfolio of physics and sensor research projects at the Johns Hopkins University Applied Physics Laboratory. He has a bachelor’s degree in math and physics from Georgetown University, a doctorate in chemical physics from the University of Maryland, is an adjunct professor at New York University in their program on Cybercrime and Global Security, and a security fellow at the Truman Project on National Security.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This talk will be recorded.

CITP Seminar: Modeling Through

Date and Time
Tuesday, February 15, 2022 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Ryan Calo, from University of Washington



Theorists of justice have long imagined a decision-maker capable of acting wisely in every circumstance. Policymakers seldom live up to this ideal. They face well-understood limits, including an inability to anticipate the societal impacts of state intervention along a range of dimensions and values. Policymakers cannot see around corners or address societal problems at their roots. When it comes to regulation and policy-setting, policymakers are often forced, in the memorable words of political economist Charles Lindblom, to “muddle through” as best they can.

Powerful new affordances, from supercomputing to artificial intelligence, have arisen in the decades since Lindblom’s 1959 article that stand to enhance policymaking. Computer-aided modeling holds promise in delivering on the broader goals of forecasting and system analysis developed in the 1970s, arming policymakers with the means to anticipate the impacts of state intervention along several lines—to model, instead of muddle. A few policymakers have already dipped a toe into these waters; others are being told that the water is warm.

The prospect that economic, physical, and even social forces could be modeled by machines confronts policymakers with a paradox. Society may expect policymakers to avail themselves of techniques already usefully deployed in other sectors, especially where statutes or executive orders require the agency to anticipate the impact of new rules on particular values. At the same time, “modeling through” holds novel perils that policymakers may be ill-equipped to address. Concerns include privacy, brittleness, and automation bias of which law and technology scholars are keenly aware. They also include the extension and deepening of the quantifying turn in governance, a process that obscures normative judgments and recognizes only that which the machines can see. The water may be warm but there are sharks in it.

These tensions are not new. And there is danger in hewing to the status quo. (We should still pursue renewable energy even though wind turbines as presently configured waste energy and kill wildlife.) As modeling through gains traction, however, policymakers, constituents, and academic critics must remain vigilant. This being early days, American society is uniquely positioned to shape the transition from muddling to modeling.

Bio: Ryan Calo is the Lane Powell and D. Wayne Gittinger Professor at the University of Washington School of Law. He is a founding co-director (with Batya Friedman and Tadayoshi Kohno) of the interdisciplinary UW Tech Policy Lab and (with Chris Coward, Emma Spiro, Kate Starbird, and Jevin West) the UW Center for an Informed Public. Calo holds adjunct appointments at the University of Washington Information School and the Paul G. Allen School of Computer Science and Engineering.

Calo’s research on law and emerging technology appears in leading law reviews (California Law Review, Columbia Law Review, Duke Law Journal, UCLA Law Review, and University of Chicago Law Review) and technical publications (MIT Press, Nature, Artificial Intelligence) and is frequently referenced by the national media. Calo has testified three times before the United States Senate and organized events on behalf of the National Science Foundation, the National Academy of Sciences, and the Obama White House. He has been a speaker at President Obama’s Frontiers Conference, the Aspen Ideas Festival, and NPR‘s Weekend in Washington.

Calo is a board member of the R Street Institute and an affiliate scholar at the Stanford Law School Center for Internet and Society (CIS), where he was a research fellow, and the Yale Law School Information Society Project (ISP). He serves on numerous advisory boards and steering committees, including the University of California’s People and Robots Initiative, the Electronic Frontier Foundation (EFF), the Center for Democracy and Technology (CDT), the Electronic Privacy Information Center (EPIC), and Without My Consent. With Michael Froomkin and Ian Kerr, he co-founded We Robot, the premier North American annual conference on robotics law and policy.

Calo worked as an associate in the Washington, D.C. office of Covington & Burling LLP and clerked for the Honorable R. Guy Cole, Chief Judge of the U.S. Court of Appeals for the Sixth Circuit. Prior to law school at the University of Michigan, he investigated allegations of police misconduct in New York City. He holds a B.A. in Philosophy from Dartmouth College.

Calo won the Phillip A. Trautman 1L Professor of the Year Award in 2014 and 2017 and was awarded the Washington Law Review Faculty Award in 2019.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

CITP Seminar: Studying the Impact of Social Media Algorithms

Date and Time
Tuesday, January 25, 2022 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Andy Guess, from Princeton University



Policymakers and the public are increasingly interested in the effects of social media algorithms on society. In this talk, some of the challenges this topic poses to researchers will be outlined, and two different approaches to studying these systems’ effects on individuals will be introduced. One is a large-scale collaboration with a social platform designed to coincide with the 2020 U.S. presidential campaign; the other is an experimental research design that can be adapted to assess the impact of proprietary recommendation systems. The talk will conclude with a discussion of lessons learned from these studies and the possibilities for future projects building on this work.

Bio: Andy Guess is an assistant professor of politics and public affairs at Princeton University. His research and teaching interests lie at the intersection of political communication, public opinion, and political behavior.

Via a combination of experimental methods, large datasets, machine learning, and innovative measurement, he studies how people choose, process, spread, and respond to information about politics. Recent work investigates the extent to which online Americans’ news habits are polarized (the popular “echo chambers” hypothesis), patterns in the consumption and spread of online misinformation, and the effectiveness of efforts to counteract misperceptions encountered on social media. Coverage of these findings has appeared in The New York Times, The New Yorker, Slate, The Chronicle of Higher Education, and other publications.

His research has been supported by grants from VolkswagenStiftung, the Russell Sage Foundation, and the National Science Foundation and published in peer-reviewed journals such as Nature Human Behaviour, Political Analysis, and Proceedings of the National Academy of Sciences.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This talk will not be recorded.

CITP Seminar: The New York City AI Strategy

Date and Time
Tuesday, November 30, 2021 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Neal Parikh, from New York City Mayor’s Office

Join the webinar: https://princeton.zoom.us/j/98660811807


The recently published NYC AI Strategy is a foundational effort to foster a healthy cross-sector AI ecosystem in New York City. The document establishes a baseline of information about AI to help ensure decision-makers are working from an accurate and shared understanding of the technology and the issues it presents, outlines key components and characteristics of the local AI ecosystem today, and frames a set of areas of opportunity for City action.

Bio: Neal Parikh is director of Artificial Intelligence for New York City, in the Mayor’s Office of the Chief Technology Officer. Most recently, he was an inaugural fellow at the Aspen Tech Policy Hub, part of The Aspen Institute. He is Co-Founder and former CTO of SevenFifty, a technology startup based in NYC, and was a visiting lecturer in machine learning at Cornell Tech. His academic work has been cited thousands of times in the academic literature and is widely used in both research and industry. He received a Ph.D. in computer science from Stanford University, focused on artificial intelligence, machine learning, and convex optimization, and a B.A.S. in computer science and mathematics from the University of Pennsylvania.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This seminar will not be recorded.

CITP Seminar: AI Ethics: The Case for Including Animals

Date and Time
Tuesday, December 7, 2021 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Peter Singer and Yip Fai Tse, from Princeton University

There is broad agreement about the need for the development and application of AI to be subject to ethical guidelines and constraints. Equally, there is today little dissent from the view that the way we treat animals should also be guided by ethical considerations. Within the field of AI ethics, however, there is virtually no discussion of how AI impacts animals, nor of the ethical problems to which these impacts may give rise. Self-driving cars that might encounter animals, autonomous killing drones that target animals, and AI systems now working in factory farms are all examples of AI technologies that have real impacts on animals. We will describe what is happening currently, take a peek into the future, and analyze some ethical problems related to these AI technologies. The questions raised include:

– When a self-driving car hits an animal, who bears the moral responsibility?

– Is it morally justifiable to let drones kill animals?

– Is the use of AI in factory farming good or bad for the animals?

– If an AI technology will clearly increase the suffering of animals, should a developer reject such a project even if it is legal?

Bios:

Peter Singer

Peter is the Ira W. DeCamp Professor of Bioethics in the University Center for Human Values (UCHV). He became well-known internationally after the publication of Animal Liberation (1975). His other books include: Democracy and Disobedience (1973); Practical Ethics (1979, 3rd ed. 2011); The Expanding Circle (1981, new ed. 2011); Marx (1980); Hegel (1983); The Reproduction Revolution (1984, co-authored with Deane Wells); Should the Baby Live? (1986, co-authored with Helga Kuhse); How Are We to Live? (1995); Rethinking Life and Death (1996); One World (2002; revised edition One World Now, 2016); Pushing Time Away (2003); The President of Good and Evil (2004); The Ethics of What We Eat (2006, co-authored with Jim Mason); The Life You Can Save (2009); The Point of View of the Universe (2014, co-authored with Katarzyna de Lazari-Radek); The Most Good You Can Do (2015); Ethics in the Real World (2016); and Utilitarianism: A Very Short Introduction, co-authored with Katarzyna de Lazari-Radek. Peter holds his appointment at UCHV jointly with his appointment as Laureate Professor at the University of Melbourne, attached to the School of Historical and Philosophical Studies. He was made a Companion of the Order of Australia (AC) in 2012. He is the founder and board chair of The Life You Can Save, a nonprofit that fights extreme poverty.

Yip Fai Tse

Fai is a research assistant for Professor Peter Singer at Princeton University. Fai is researching the impact and ethics of artificial intelligence concerning nonhuman animals. Fai has been a researcher in the field of animal welfare advocacy and effective altruism. He has advised the Open Philanthropy Project on farmed animal welfare. He is also a 2021 Foresight Fellow in Animal Ethics for Artificial Intelligence. He was previously the China Strategy Consultant for Mercy For Animals, responsible for research and strategy for their expansion plans in China and Asia.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This seminar will be recorded.

This talk is co-sponsored by CITP, the University Center for Human Values, and the Ira W. DeCamp Bioethics Seminar Program.

CITP Seminar: A Crash Course on Algorithmic Mechanism Design

Date and Time
Tuesday, November 16, 2021 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Matt Weinberg, from Princeton University

Algorithmic Mechanism Design studies the design of algorithms in settings where participants have their own incentives. For example, when executing an ad auction, the auctioneer/designer wants to achieve as much profit as possible, but each advertiser wants the best impressions for the lowest price (and may manipulate an auction if it’s in their interest to do so). When matching doctors to residencies, each hospital wants their favorite doctors, and each doctor wants their favorite hospitals (and both may manipulate any procedure in order to get a better match). When participating in a cryptocurrency, each miner wants to maximize their own profits (and may deviate from an intended protocol in order to get greater profit).
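The ad-auction example above can be made concrete with the textbook mechanism for removing the incentive to manipulate: a sealed-bid second-price (Vickrey) auction, in which bidding one's true value is a dominant strategy. The sketch below is illustrative only and is not drawn from the talk; all bidder names and values are made up.

```python
def second_price_auction(bids):
    """Sealed-bid second-price (Vickrey) auction.

    bids: dict mapping bidder -> bid amount. The highest bidder wins
    but pays the second-highest bid, which removes any benefit from
    shading one's bid below one's true value.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price

def payoff(bids, bidder, true_value):
    """Bidder's utility: value minus price when winning, zero otherwise."""
    winner, price = second_price_auction(bids)
    return true_value - price if winner == bidder else 0

# An advertiser whose true value is 10 gains nothing by manipulating:
truthful = payoff({"a": 10, "b": 7}, "a", 10)  # wins at price 7, payoff 3
shaded = payoff({"a": 6, "b": 7}, "a", 10)     # loses the auction, payoff 0
```

In a first-price auction, by contrast, the same advertiser would profit from bidding below her true value, which is exactly the kind of strategic manipulation the examples above describe.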

In the first part of this talk, an overview of the theoretical foundations of Algorithmic Mechanism Design will be presented, along with some applications. Additionally, we will discuss some more recent theoretical results, with a focus on designing simple auctions for complex environments.


Bio: Matt is an assistant professor at Princeton University in the Department of Computer Science. His primary research interest is in Algorithmic Mechanism Design: algorithm design in settings where users have their own incentives. He is also interested more broadly in Algorithmic Game Theory, Algorithms Under Uncertainty, and Theoretical Computer Science in general.

Before joining the faculty at Princeton, he spent two years as a postdoc in Princeton’s CS Theory group, and was a research fellow at the Simons Institute during the Fall 2015 (Economics and Computation) and Fall 2016 (Algorithms and Uncertainty) semesters. Matt completed his Ph.D. in 2014 at MIT, where he was very fortunate to be advised by Costis Daskalakis. Matt graduated from Cornell University with a B.A. in Math in 2010, where he was also fortunate to have worked with Bobby Kleinberg.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This seminar will be recorded.

CITP Seminar: Beyond Bias: Algorithmic Unfairness, Infrastructure, and Genealogies of Data

Date and Time
Tuesday, November 9, 2021 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Alex Hanna, from Google

Problems of algorithmic bias are often framed in terms of a lack of representative data or of formal fairness optimization constraints to be applied to automated decision-making systems. However, these discussions sidestep deeper issues with data used in AI, including problematic categorizations and the extractive logics of crowdwork and data mining. This talk will examine two interventions: first, reframing data as a form of infrastructure, and as such implicating politics and power in the construction of datasets; and second, developing a research program around the genealogy of datasets used in machine learning and AI systems. These genealogies should be attentive to the constellation of organizations and stakeholders involved in their creation; the intent, values, and assumptions of their authors and curators; and the adoption of datasets by subsequent researchers.

Bio: Alex is a sociologist and senior research scientist on the Ethical AI team at Google. Before that, Alex was an assistant professor at the Institute of Communication, Culture, Information and Technology at the University of Toronto.

Alex received a Ph.D. in sociology from the University of Wisconsin-Madison. Her dissertation was the Machine-learning Protest Event Data System (MPEDS), a system which uses machine learning and natural language processing to create protest event data.

Her current research agenda is two-fold. One line of research centers on the origins of the training data that form the informational infrastructure of machine learning, artificial intelligence, and algorithmic fairness frameworks. Another line of research (with Ellen Berrey) seeks to understand the interplay between student protest and university responses in the US and Canada. Alex’s past work has focused on how new and social media have changed social movement mobilization and political participation.

Alex is as much an educator as she is a researcher. She has taught workshops and courses on computational methods for social scientists, social movements, and the implications of information as infrastructure. She co-founded the [now defunct] computational social science blog Bad Hessian.

As a second job, she plays women’s flat track roller derby with Bay Area Derby.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This seminar will be recorded.

CITP Seminar: Industry Unbound: The Inside Story of Privacy, Data, and Corporate Power

Date and Time
Tuesday, November 2, 2021 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Ari Waldman, from Northeastern University

Why are there so many privacy laws and so many privacy professionals but no privacy on the ground? With research based on interviews with scores of tech employees and internal documents outlining corporate strategies, Industry Unbound reveals that companies don’t just lobby against privacy law; they also manipulate how we think about privacy, how their employees approach their work, and how they weaken the law to make data-extractive products the norm. In contrast to those who claim that privacy law is getting stronger, Industry Unbound shows why recent shifts in privacy law are precisely the kinds of changes that corporations want and how even those who think of themselves as privacy advocates often unwittingly facilitate corporate malfeasance.

Bio: Ari, a leading authority on law, technology and society, is a professor of law and computer science at Northeastern University. He directs the School of Law’s Center for Law, Information and Creativity (CLIC). Ari studies asymmetrical power relations created and entrenched by law and technology, with particular focus on privacy, online harassment, free speech and the LGBTQ community.

He is a widely published scholar whose work includes two books, Privacy As Trust: Information Privacy for an Information Age (Cambridge University Press, 2018) and Industry Unbound: The Inside Story of Privacy, Data, and Corporate Power (Cambridge University Press, 2021), and more than 30 articles published in leading law reviews and peer-reviewed journals, including the California Law Review, the Michigan Law Review, the Washington University Law Review, Cornell Law Review, Iowa Law Review, Indiana Law Journal and Law & Social Inquiry, among others. He has also written for the popular press, publishing in The New York Times, Slate, New York Daily News and The Advocate, among others, and serves on the editorial board of Law & Social Inquiry (LSI), a peer-reviewed journal that publishes work on sociolegal issues across multiple disciplines, including anthropology, criminology, economics, history, law, philosophy, political science, sociology and social psychology.

He is the founder of @Legally_Queer, a social media project that educates the public about the history, present and future of LGBTQ freedom. Providing accessible summaries and context to LGBTQ cases and laws decided or enacted “on this date in history,” Legally Queer seeks to engage both the LGBTQ community and the general public in the role of the courts in equality and social justice.

Ari was previously the Microsoft Visiting Professor at the Center for Information Technology Policy and visiting professor at the School of Public and International Affairs at Princeton University. He served as a professor of law at New York Law School, where he was the founding director of the Innovation Center for Law and Technology and founded the Institute for CyberSafety, a research and clinical program helping victims of online harassment obtain justice. He has also served as a visiting professor at Brooklyn Law School and Fordham University School of Law. He clerked for Judge Scott W. Stucky at the Court of Appeals for the Armed Forces. He holds a Ph.D. in sociology from Columbia University, a J.D. from Harvard Law School and an A.B. magna cum laude, from Harvard College.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This seminar will be recorded.

CITP Seminar: Insights into Predictability of Life Outcomes: A Data-Driven Approach

Date and Time
Tuesday, October 26, 2021 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Pranay Anchuri, from Princeton University

Predicting life outcomes is a challenging task even for advanced machine learning (ML) algorithms. At the same time, accurately predicting these outcomes has important implications for providing targeted assistance and improving policy making. Recent studies based on the Fragile Families and Child Wellbeing Study dataset have shown that complex ML pipelines, even with thousands of variables available, produce low-quality predictions. This research raises several questions about the predictability of life outcomes: 1) What factors influence the predictability of an outcome (e.g., quality of data, pre-processing steps, model hyperparameters)? 2) How does the predictability of outcomes vary by domain (e.g., are health outcomes easier to predict than education outcomes)? To answer these questions, we are building a cloud-based system to train and test hundreds of ML pipelines on thousands of life outcomes. We use the results of this large-scale exploration in a data-driven way to understand the predictability of life outcomes.

In the first part of the talk, we discuss the study design and describe the system we built to run such a large-scale exploration. The system is general and provides easy-to-use interfaces for running a wide range of studies. In the second part, we present a meta-learning-inspired method to derive key insights into the problem of predictability by A) comparing the relative predictive power of different classes of models and B) identifying the descriptive statistics that best predict the predictability of ML pipelines. Predictability of life outcomes is a multi-faceted problem. We conclude the talk by briefly discussing some of our other studies that are currently in the pipeline.
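The large-scale exploration described in the abstract, in which many candidate pipelines are scored against many outcomes and the results collected for meta-analysis, can be sketched in miniature. The "models", outcome names, and numbers below are toy placeholders for illustration and are not part of the actual CITP system.

```python
from itertools import product
from statistics import mean

# Toy "pipelines": each maps training labels to constant predictions.
def predict_mean(train_y, n_test):
    """Baseline model: always predict the training mean."""
    return [mean(train_y)] * n_test

def predict_last(train_y, n_test):
    """Naive model: always predict the last training label."""
    return [train_y[-1]] * n_test

MODELS = {"mean": predict_mean, "last": predict_last}

# Toy "outcomes": (training labels, held-out labels) per outcome name.
OUTCOMES = {"gpa": ([2.0, 3.0, 4.0], [3.0, 3.0]),
            "grit": ([1.0, 1.0, 5.0], [1.0, 5.0])}

def mse(pred, truth):
    """Mean squared error of predictions against held-out labels."""
    return mean((p - t) ** 2 for p, t in zip(pred, truth))

# Sweep every (model, outcome) pair and record the predictive error;
# scores like these are the raw material for asking, in a data-driven
# way, which outcomes are predictable and by which classes of models.
results = {}
for (mname, model), (oname, (train_y, test_y)) in product(
        MODELS.items(), OUTCOMES.items()):
    preds = model(train_y, len(test_y))
    results[(mname, oname)] = mse(preds, test_y)
```

At scale, the same loop would run over hundreds of real pipelines and thousands of outcomes on cloud infrastructure, with the resulting score table feeding the meta-learning analysis.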

Bio: Pranay Anchuri is a data scientist at CITP. His research interests include graph mining, large-scale data analytics and blockchain technologies. Pranay graduated with a Ph.D. in computer science from Rensselaer Polytechnic Institute in 2015. During his graduate studies, he worked at various labs including IBM, Yahoo, and QCRI. His thesis focused on developing algorithms for efficiently extracting frequent patterns from noisy networks.

After graduation, Pranay started as a research scientist at NEC Labs Princeton, working on log modeling and analytics. Most recently, he worked as a research scientist at Axoni in New York, where his research focused on problems related to the implementation of high-performance permissioned blockchains.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This seminar will be recorded.

This seminar is co-sponsored by CITP and the Center for Statistics and Machine Learning.
