CITP Seminar: Electronic Ankle Monitors and the Politics of Criminal Justice Technology

Date and Time
Tuesday, March 1, 2022 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Lauren Kilgour, from Princeton University



This talk examines why electronic ankle monitors look and feel the way they do. The form factor of a digital monitoring technology is often overlooked, as we tend to focus on its data collection and analysis capabilities. The research presented in this talk specifically considers how the size, shape, and appearance of computing hardware contributes to its social consequences for ankle monitor wearers. Based on 85 semi-structured interviews and multi-sited fieldwork, this study traces the social and cultural dynamics that animate the design and deployment of this technology in criminal justice contexts. This research shows how ankle monitor developer communities, government agencies, and law enforcement communities throughout the United States work to normalize the social harms of ankle monitors, contribute to the continued expansion of ankle monitor deployment, and perpetuate old and new forms of social stigma.

As a response to this normalization, we will look at everyday practices of technology modification, adaptation, and resistance among people required to wear ankle monitors, with a focus on online communities and social media, in order to understand how wearers contest this stigmatizing technology and critique state and institutional use of these devices. The case study of electronic ankle monitors evidences the need to develop laws and policies that address the visual properties of digital tools as well as the data they generate. The talk concludes by discussing the broader relevance and significance of this study’s findings for researchers, practitioners, and policymakers involved in the design and deployment of computing technology across varied contexts.

Bio: Lauren Kilgour is currently a postdoctoral research associate at CITP. She holds a master’s degree and a Ph.D. in information science from Cornell University. Additionally, she holds a master’s degree from the University of Toronto’s Faculty of Information. Kilgour studies relationships among information technology, law, and society with a focus on investigating the social harms of data and technology and their roles in perpetuating stigma, shame, and social control. Her work employs qualitative research methods and builds upon literature from fields such as sociology, science and technology studies, history, media studies, information science, law, and policy.

Kilgour’s current book project is a critical study of electronic ankle monitors. Designed as an alternative to imprisonment, ankle monitors are both wearable networked technologies and part of complex criminal justice and law enforcement data service ecosystems. In this work, she critically explores how ankle monitors operate as carceral technologies and demonstrates how prejudicial notions of social difference are designed into ankle monitor hardware, software, and maintenance.

Kilgour’s core postdoctoral research project expands her study of criminal justice technologies to focus on police body cameras. Specifically, she is investigating the mixed record of success of body cameras for holding police officers accountable for their actions, particularly related to their use of deadly force and complying with wider codes of conduct. Through examining the design and use of police body cameras, and their footage, as socio-legal media artifacts, she seeks to shed light on the limitations of these devices as technologies of accountability. More broadly, she is engaged in ongoing research projects examining histories and futures of surveillance technology, practices, and cultures and their impacts; uneven access to privacy; embodied artificial intelligence and machine learning; and the roles law and policy play in constructing personhood, social categories.

Kilgour’s research has been supported by a range of internal, external, and international funding bodies and academic organizations including the Social Sciences and Humanities Research Council of Canada (SSHRC), Cornell University, Technische Universität Berlin (TU-Berlin), the University of Toronto, and the University of Pittsburgh. Her research is published in international, peer-reviewed venues such as The Information Society, Elder Law Journal, and Records Management Journal, and in public venues such as MIT Technology Review and The New York Times.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This seminar will not be recorded.

CITP Seminar: The Good Web: Competing Visions for the Future of Social Media

Date and Time
Tuesday, February 22, 2022 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Ethan Zuckerman, from University of Massachusetts Amherst



Between the US Capitol Insurrection – broadcast and organized online – and Frances Haugen’s revelations about the inner workings of Facebook, 2021 was a rough year for social media. Many social media observers – including legislators – concluded that tools like Facebook, Twitter, and YouTube were harming individuals and society and needed regulation. But others are reimagining social media as a whole, proposing new architectures designed to address the shortcomings of existing platforms and encourage new forms of interaction online. This talk examines four different models for the future of social media, including blockchain-based web3 models and decentralized open-source models, and the ideologies animating each vision of the future.

Bio: Ethan Zuckerman is an associate professor of public policy, communication, and information at the University of Massachusetts Amherst. He is also the director of the UMass Initiative for Digital Public Infrastructure, focused on reimagining the internet as a tool for civic engagement.

Prior to coming to UMass, Zuckerman was at MIT, where he served as director of the Center for Civic Media and associate professor of practice in media arts and sciences at the MIT Media Lab. His research focuses on the use of media as a tool for social change, the role of technology in international development, and the use of new media technologies by activists. The author of Rewire: Digital Cosmopolitans in the Age of Connection, he published a new book, Mistrust: Why Losing Faith in Institutions Provides the Tools to Transform Them (W.W. Norton), in 2021.

In 2005, Zuckerman co-founded Global Voices, which showcases news and opinions from citizen media in more than 150 nations and 30 languages. Through Global Voices, and as a researcher and fellow for eight years at the Berkman Klein Center for Internet and Society at Harvard University, he has led efforts to promote freedom of expression and fight censorship in online spaces.

In 1999, he founded Geekcorps, an international, nonprofit, volunteer organization that sent IT specialists to work on projects in developing nations, with a focus on West Africa. Previously, he helped found Tripod.com, one of the web’s first “personal publishing” sites.

In addition to authoring numerous academic articles, Zuckerman is a frequent contributor to media outlets such as The Atlantic, Wired, and CNN. He received his bachelor’s degree from Williams College and, as a Fulbright scholar, studied at the University of Ghana at Legon.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This talk will be recorded.

CITP Seminar: Towards Human-Centered Design of Responsible AI

Date and Time
Tuesday, February 8, 2022 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Michael Madaio, from Microsoft Research



Despite widespread awareness that AI systems may cause harm to marginalized groups—and in spite of a growing number of principles, toolkits, and other resources for fairness and ethics in AI—new examples emerge every day of the inequitable impacts of algorithmic systems. This talk will discuss Madaio’s research with AI practitioners to co-design resources that support them in proactively addressing fairness-related harms, along with insights from that research about the organizational dynamics that shape responsible AI work practices. It will also review emerging research on supporting impacted stakeholders in participating in AI design processes, and conclude with the implications of this research for the design of tools, resources, and organizational policies to support more human-centered and responsible AI.

Bio: Michael Madaio is a postdoctoral researcher at Microsoft Research working with the FATE research group (Fairness, Accountability, Transparency, and Ethics in AI). He works at the intersection of human-computer interaction and AI/ML, focusing on enabling more fair and responsible AI through research with AI practitioners and stakeholders impacted by AI systems. He received his Ph.D. in Human-Computer Interaction from Carnegie Mellon University.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event. 

This talk will be recorded.

CITP Seminar: Disinformation and Its Threat to Democracy

Date and Time
Tuesday, February 1, 2022 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Danny Rogers, from Global Disinformation Index



We are at a watershed moment in the history of information. The internet has given historically underrepresented voices unprecedented access to tools for self-expression, new platforms to build communities, and new capabilities to speak truth to power. But in response, our social internet is now being corrupted, exploited, and weaponized by those with the power to control the flow of information and distort reality.

Marshall McLuhan and others predicted this rise of “fifth generation warfare” as far back as the 1970s, and now we are seeing social media provide the “perfect storm” of disinformation, with deadly consequences around the world.  In an era where the dominant internet business models reward engagement above all else, and where these engagement-driven algorithmic feeds warp the realities of over half the world’s population, no less than humanity’s progress since the Enlightenment itself is threatened.  Rogers will talk about how we arrived at this crisis point, how to define disinformation in the first place, and what can be done at an individual, commercial, and global policymaker level to combat this scourge.

Bio: Danny Rogers is the co-founder and CTO of the Global Disinformation Index, a non-profit focused on catalyzing change within the tech industry to disincentivize the creation and dissemination of disinformation. Prior to founding the GDI, Rogers founded and led Terbium Labs, an information security and dark web intelligence startup based in Baltimore, Maryland. He is a computational physicist with experience supporting Defense and Intelligence Community cyber operations, as well as startup experience in the defense, energy, and biotech sectors. He is an author and expert in the field of quantum cryptography and has published numerous patents and papers on that and other subjects. Prior to co-founding Terbium Labs, Rogers managed a portfolio of physics and sensor research projects at the Johns Hopkins University Applied Physics Laboratory. He holds a bachelor’s degree in math and physics from Georgetown University and a doctorate in chemical physics from the University of Maryland, and is an adjunct professor at New York University in its program on Cybercrime and Global Security and a security fellow at the Truman Project on National Security.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This talk will be recorded.

CITP Seminar: Modeling Through

Date and Time
Tuesday, February 15, 2022 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Ryan Calo, from University of Washington



Theorists of justice have long imagined a decision-maker capable of acting wisely in every circumstance. Policymakers seldom live up to this ideal. They face well-understood limits, including an inability to anticipate the societal impacts of state intervention along a range of dimensions and values. Policymakers cannot see around corners or address societal problems at their roots. When it comes to regulation and policy-setting, policymakers are often forced, in the memorable words of political economist Charles Lindblom, to “muddle through” as best they can.

Powerful new affordances, from supercomputing to artificial intelligence, have arisen in the decades since Lindblom’s 1959 article that stand to enhance policymaking. Computer-aided modeling holds promise in delivering on the broader goals of forecasting and system analysis developed in the 1970s, arming policymakers with the means to anticipate the impacts of state intervention along several lines—to model, instead of muddle. A few policymakers have already dipped a toe into these waters; others are being told that the water is warm.

The prospect that economic, physical, and even social forces could be modeled by machines confronts policymakers with a paradox. Society may expect policymakers to avail themselves of techniques already usefully deployed in other sectors, especially where statutes or executive orders require the agency to anticipate the impact of new rules on particular values. At the same time, “modeling through” holds novel perils that policymakers may be ill-equipped to address. Concerns include privacy, brittleness, and automation bias, of which law and technology scholars are keenly aware. They also include the extension and deepening of the quantifying turn in governance, a process that obscures normative judgments and recognizes only that which the machines can see. The water may be warm but there are sharks in it.

These tensions are not new. And there is danger in hewing to the status quo. (We should still pursue renewable energy even though wind turbines as presently configured waste energy and kill wildlife.) As modeling through gains traction, however, policymakers, constituents, and academic critics must remain vigilant. This being early days, American society is uniquely positioned to shape the transition from muddling to modeling.

Bio: Ryan Calo is the Lane Powell and D. Wayne Gittinger Professor at the University of Washington School of Law. He is a founding co-director (with Batya Friedman and Tadayoshi Kohno) of the interdisciplinary UW Tech Policy Lab and (with Chris Coward, Emma Spiro, Kate Starbird, and Jevin West) the UW Center for an Informed Public. Calo holds adjunct appointments at the University of Washington Information School and the Paul G. Allen School of Computer Science and Engineering.

Calo’s research on law and emerging technology appears in leading law reviews (California Law Review, Columbia Law Review, Duke Law Journal, UCLA Law Review, and University of Chicago Law Review) and technical publications (MIT Press, Nature, Artificial Intelligence) and is frequently referenced by the national media. Calo has testified three times before the United States Senate and organized events on behalf of the National Science Foundation, the National Academy of Sciences, and the Obama White House. He has been a speaker at President Obama’s Frontiers Conference, the Aspen Ideas Festival, and NPR’s Weekend in Washington.

Calo is a board member of the R Street Institute and an affiliate scholar at the Stanford Law School Center for Internet and Society (CIS), where he was a research fellow, and the Yale Law School Information Society Project (ISP). He serves on numerous advisory boards and steering committees, including the University of California’s People and Robots Initiative, the Electronic Frontier Foundation (EFF), the Center for Democracy and Technology (CDT), the Electronic Privacy Information Center (EPIC), and Without My Consent. With Michael Froomkin and Ian Kerr, he co-founded We Robot, the premier North American annual robotics law and policy conference.

Calo worked as an associate in the Washington, D.C. office of Covington & Burling LLP and clerked for the Honorable R. Guy Cole, Chief Judge of the U.S. Court of Appeals for the Sixth Circuit. Prior to law school at the University of Michigan, he investigated allegations of police misconduct in New York City. He holds a B.A. in Philosophy from Dartmouth College.

Calo won the Phillip A. Trautman 1L Professor of the Year Award in 2014 and 2017 and was awarded the Washington Law Review Faculty Award in 2019.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

CITP Seminar: Studying the Impact of Social Media Algorithms

Date and Time
Tuesday, January 25, 2022 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Andy Guess, from Princeton University



Policymakers and the public are increasingly interested in the effects of social media algorithms on society. This talk will outline some of the challenges this topic poses to researchers and introduce two approaches to studying these systems’ effects on individuals: a large-scale collaboration with a social platform designed to coincide with the 2020 U.S. presidential campaign, and an experimental research design that can be adapted to assess the impact of proprietary recommendation systems. The talk will conclude with a discussion of lessons learned from these studies and the possibilities for future projects building on this work.

Bio: Andy Guess is an assistant professor of politics and public affairs at Princeton University. His research and teaching interests lie at the intersection of political communication, public opinion, and political behavior.

Via a combination of experimental methods, large datasets, machine learning, and innovative measurement, he studies how people choose, process, spread, and respond to information about politics. Recent work investigates the extent to which online Americans’ news habits are polarized (the popular “echo chambers” hypothesis), patterns in the consumption and spread of online misinformation, and the effectiveness of efforts to counteract misperceptions encountered on social media. Coverage of these findings has appeared in The New York Times, The New Yorker, Slate, The Chronicle of Higher Education, and other publications.

His research has been supported by grants from VolkswagenStiftung, the Russell Sage Foundation, and the National Science Foundation and published in peer-reviewed journals such as Nature Human Behaviour, Political Analysis, and Proceedings of the National Academy of Sciences.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This talk will not be recorded.

CITP Seminar: The New York City AI Strategy

Date and Time
Tuesday, November 30, 2021 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Neal Parikh, from New York City Mayor’s Office

Join the webinar: https://princeton.zoom.us/j/98660811807


The recently published NYC AI Strategy is a foundational effort to foster a healthy cross-sector AI ecosystem in New York City. The document establishes a baseline of information about AI to help ensure decision-makers are working from an accurate and shared understanding of the technology and the issues it presents, outlines key components and characteristics of the local AI ecosystem today, and frames a set of areas of opportunity for City action.

Bio: Neal Parikh is director of Artificial Intelligence for New York City, in the Mayor’s Office of the Chief Technology Officer. Most recently, he was an inaugural fellow at the Aspen Tech Policy Hub, part of The Aspen Institute. He is co-founder and former CTO of SevenFifty, a technology startup based in NYC, and was a visiting lecturer in machine learning at Cornell Tech. His academic work has been cited thousands of times and is widely used in both research and industry. He received a Ph.D. in computer science from Stanford University, focused on artificial intelligence, machine learning, and convex optimization, and a B.A.S. in computer science and mathematics from the University of Pennsylvania.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This seminar will not be recorded.

CITP Seminar: AI Ethics: The Case for Including Animals

Date and Time
Tuesday, December 7, 2021 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Peter Singer and Yip Fai Tse, from Princeton University

There is broad agreement about the need for the development and application of AI to be subject to ethical guidelines and constraints. Equally, there is today little dissent from the view that the way we treat animals should also be guided by ethical considerations. Within the field of AI ethics, however, there is virtually no discussion of how AI impacts animals, nor of the ethical problems to which these impacts may give rise. Self-driving cars that might encounter animals, autonomous killing drones that target animals, and AI systems now working in factory farms are all examples of AI technologies that have real impacts on animals. We will describe what is happening currently, take a peek into the future, and analyze some ethical problems related to these AI technologies. The questions raised include:

– When a self-driving car hits an animal, who bears the moral responsibility?

– Is it morally justifiable to let drones kill animals?

– Is the use of AI in factory farming good or bad for the animals?

– If an AI technology will clearly increase the suffering of animals, should a developer reject such a project even if it is legal?

Bios:

Peter Singer

Peter is the Ira W. DeCamp Professor of Bioethics in the University Center for Human Values (UCHV). He became well-known internationally after the publication of Animal Liberation (1975). His other books include: Democracy and Disobedience (1973); Practical Ethics (1979, 3rd ed. 2011); The Expanding Circle (1981, new ed. 2011); Marx (1980); Hegel (1983); The Reproduction Revolution (1984) (co-authored with Deane Wells); Should the Baby Live? (1986) (co-authored with Helga Kuhse); How Are We to Live? (1995); Rethinking Life and Death (1996); One World (2002; revised edition One World Now, 2016); Pushing Time Away (2003); The President of Good and Evil (2004); The Ethics of What We Eat (2006) (co-authored with Jim Mason); The Life You Can Save (2009); The Point of View of the Universe (2014), co-authored with Katarzyna de Lazari-Radek; The Most Good You Can Do (2015); Ethics in the Real World (2016); and Utilitarianism: A Very Short Introduction, co-authored with Katarzyna de Lazari-Radek. Peter holds his appointment at UCHV jointly with his appointment as Laureate Professor at the University of Melbourne, attached to the School of Historical and Philosophical Studies. He was made a Companion of the Order of Australia (AC) in 2012. He is the founder and board chair of The Life You Can Save, a nonprofit that fights extreme poverty.

Yip Fai Tse

Fai is a research assistant for Professor Peter Singer at Princeton University. Fai is researching the impact and ethics of artificial intelligence concerning nonhuman animals. Fai has been a researcher in the field of animal welfare advocacy and effective altruism. He has advised the Open Philanthropy Project on farmed animal welfare. He is also a 2021 Foresight Fellow in Animal Ethics for Artificial Intelligence. He was previously the China Strategy Consultant for Mercy For Animals, responsible for research and strategy for their expansion plans in China and Asia.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This seminar will be recorded.

This talk is co-sponsored by CITP, the University Center for Human Values, and the Ira W. DeCamp Bioethics Seminar Program.

CITP Seminar: A Crash Course on Algorithmic Mechanism Design

Date and Time
Tuesday, November 16, 2021 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Matt Weinberg, from Princeton University

Algorithmic Mechanism Design studies the design of algorithms in settings where participants have their own incentives. For example, when executing an ad auction, the auctioneer/designer wants to achieve as much profit as possible, but each advertiser wants the best impressions for the lowest price (and may manipulate an auction if it’s in their interest to do so). When matching doctors to residencies, each hospital wants their favorite doctors, and each doctor wants their favorite hospitals (and both may manipulate any procedure in order to get a better match). When participating in a cryptocurrency, each miner wants to maximize their own profits (and may deviate from an intended protocol in order to get greater profit).
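As a minimal illustration of the incentive issues described above (a sketch of a standard textbook mechanism, not material from the talk), the sealed-bid second-price (Vickrey) auction shows how mechanism design can align incentives: because the winner pays the runner-up’s bid rather than their own, each bidder’s best strategy is simply to bid their true value, so there is nothing to gain by manipulating.

```python
def second_price_auction(bids):
    """Run a sealed-bid second-price (Vickrey) auction.

    bids: dict mapping bidder name -> bid amount.
    Returns (winner, price): the highest bidder wins but pays only the
    second-highest bid, which removes the incentive to shade bids below
    one's true value (the mechanism is incentive-compatible).
    """
    # Rank bidders from highest to lowest bid.
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    # Price is the runner-up's bid (0 if there is only one bidder).
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price

# Hypothetical ad-auction example: three advertisers bid for one impression.
winner, price = second_price_auction({"ad_a": 5.0, "ad_b": 3.0, "ad_c": 4.5})
# winner == "ad_a", but it pays the runner-up's bid of 4.5, not 5.0
```

Contrast this with a first-price auction, where the winner pays their own bid: there, each advertiser benefits from bidding strategically below their true value, which is exactly the kind of manipulation the abstract describes.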

The first part of this talk will present an overview of Algorithmic Mechanism Design, some of its theoretical foundations, and some applications. The second part will discuss more recent theoretical results, with a focus on designing simple auctions for complex environments.


Bio: Matt is an assistant professor at Princeton University in the Department of Computer Science. His primary research interest is in Algorithmic Mechanism Design: algorithm design in settings where users have their own incentives. He is also interested more broadly in Algorithmic Game Theory, Algorithms Under Uncertainty, and Theoretical Computer Science in general.

Before joining the faculty at Princeton, he spent two years as a postdoc in Princeton’s CS Theory group, and was a research fellow at the Simons Institute during the Fall 2015 (Economics and Computation) and Fall 2016 (Algorithms and Uncertainty) semesters. Matt completed his Ph.D. in 2014 at MIT, where he was very fortunate to be advised by Costis Daskalakis. Matt graduated from Cornell University with a B.A. in Math in 2010, where he was also fortunate to have worked with Bobby Kleinberg.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This seminar will be recorded.

CITP Seminar: Beyond Bias: Algorithmic Unfairness, Infrastructure, and Genealogies of Data

Date and Time
Tuesday, November 9, 2021 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Alex Hanna, from Google

Problems of algorithmic bias are often framed in terms of a lack of representative data or formal fairness optimization constraints to be applied to automated decision-making systems. However, these discussions sidestep deeper issues with data used in AI, including problematic categorizations and the extractive logics of crowdwork and data mining. This talk will examine two interventions: first, reframing data as a form of infrastructure and, as such, implicating politics and power in the construction of datasets; and second, developing a research program around the genealogy of datasets used in machine learning and AI systems. These genealogies should be attentive to the constellation of organizations and stakeholders involved in their creation, the intent, values, and assumptions of their authors and curators, and the adoption of datasets by subsequent researchers.

Bio: Alex is a sociologist and senior research scientist on the Ethical AI team at Google. Before that, Alex was an assistant professor at the Institute of Communication, Culture, Information and Technology at the University of Toronto.

Alex received a Ph.D. in sociology from the University of Wisconsin-Madison. Her dissertation was the Machine-learning Protest Event Data System (MPEDS), a system which uses machine learning and natural language processing to create protest event data.

Her current research agenda is two-fold. One line of research centers on the origins of the training data which form the informational infrastructure of machine learning, artificial intelligence, and algorithmic fairness frameworks. Another line of research (with Ellen Berrey) seeks to understand the interplay between student protest and university responses in the US and Canada. Alex’s past work has focused on how new and social media have changed social movement mobilization and political participation.

Alex is as much an educator as she is a researcher. She has taught workshops and courses on computational methods for social scientists, social movements, and the implications of information as infrastructure. She co-founded the [now defunct] computational social science blog Bad Hessian.

As a second job, she plays women’s flat track roller derby with Bay Area Derby.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This seminar will be recorded.
