CITP Seminar: Beyond the Binary: How LGBTQ+ People Negotiate Networked Privacy

Date and Time
Tuesday, September 26, 2023 - 12:30pm to 1:30pm
Location
Sherrerd Hall 306
Type
CITP

Alice E. Marwick
Networked privacy is the desire to maintain agency over information within the social and technological networks in which information is disclosed, given meaning, and shared. This agency is continually compromised by the aggregation, connection, and diffusion facilitated by social media and big data technologies. In this talk, drawing on her book The Private is Political, Marwick examines how these dynamics map to intersectional lines of privacy, based on a study of LGBTQ+ individuals in North Carolina.

Because networked information intrinsically leaks, participants strategized about how to manage disclosures that might be stigmatized in one context but not in others. They worked to firewall what, how, and to whom they disclosed, engaging in privacy work to maintain agency over information. They navigated private and public not as a binary but as a spectrum, a web, or a network. Their experiences complicate the idea of a binary distinction between “public” and “private” information. Instead, the ways people share information about stigmatized identities are deeply contextual and social.

Bio: Alice E. Marwick is currently the Microsoft Visiting Professor at the Center for Information Technology Policy at Princeton University. She is an associate professor in the Department of Communication and Principal Researcher at the Center for Information Technology and Public Life, which she co-founded, at the University of North Carolina at Chapel Hill. She researches the social, political, and cultural implications of popular social media technologies. In 2017, she co-authored Media Manipulation and Disinformation Online (Data & Society), a flagship report examining far-right online subcultures’ use of social media to spread disinformation, for which she was named one of 2017’s Global Thinkers by Foreign Policy magazine. She is the author of Status Update: Celebrity, Publicity and Branding in the Social Media Age (Yale 2013), an ethnographic study of the San Francisco tech scene which examines how people seek social status through online visibility, and co-editor of The Sage Handbook of Social Media (Sage 2017). Her forthcoming book, The Private is Political (Yale), examines how the networked nature of online privacy disproportionately impacts marginalized individuals in terms of gender, race, and socio-economic status. In addition to academic journal articles and essays, she has written for The New York Times, The New York Review of Books, Slate, the Columbia Journalism Review, New York Magazine, and The Chronicle of Higher Education. Her work has been supported by the Carnegie Foundation, the Knight Foundation, the Luminate Group, the Digital Trust Foundation, and the Social Science Research Council, and she has held fellowships at the Data & Society Research Institute and the Institute of Arts & Humanities at UNC-CH. As a 2020 Andrew Carnegie fellow, she is working on a third book about online radicalization. In 2021, she was awarded the Phillip and Ruth Hettleman Prize for Artistic and Scholarly Achievement by the University of North Carolina.


Attendance at CITP Seminars is restricted to Princeton University faculty, staff and students.

If you need an accommodation for a disability, please contact Jean Butcher at butcher@princeton.edu at least one week before the event.

This talk will be recorded and posted to the CITP website, YouTube channel and Media Central.

CITP Seminar: The Epistemic Culture of AI Safety

Date and Time
Tuesday, September 19, 2023 - 12:30pm to 1:30pm
Location
Sherrerd Hall 306
Type
CITP
Speaker
Shazeda Ahmed, from the Global Public Policy Institute

Shazeda Ahmed
The emerging field of artificial intelligence (AI) safety has attracted public attention and large infusions of capital to support its implied promise: the ability to deploy advanced AI while reducing its gravest risks. Ideas from effective altruism, longtermism, and the study of existential risk are foundational to this new field. We contend that overlapping communities interested in these ideas have merged into what we refer to as the broader “AI safety epistemic community,” which is sustained through its mutually reinforcing community-building and knowledge production practices.

We support this assertion through an analysis of four core sites in this community’s epistemic culture: 1) online community-building through career advising and web forums; 2) AI forecasting; 3) AI safety research; and 4) prize competitions. The dispersal of this epistemic community’s members throughout the tech industry, academia, and policy organizations ensures their continued input into global discourse about AI. Understanding the epistemic culture that fuses their moral convictions and knowledge claims is crucial to evaluating these claims, which are gaining influence in critical, rapidly changing debates about the harms of AI and how to mitigate them.

In this talk, Ahmed will present in-progress research from two collaborations with her CITP colleagues Klaudia Jaźwińska, Amy Winecoff, Archana Ahlawat, and Mona Wang, investigating the epistemic culture of AI safety and the emergent work practices of people focusing on the sub-field of AI alignment.

Bio: Shazeda Ahmed holds a Ph.D. from the University of California, Berkeley School of Information.

She is currently a fellow in the Transatlantic Digital Debates at the Global Public Policy Institute. She was a pre-doctoral fellow at two Stanford University research centers, the Institute for Human-Centered Artificial Intelligence (HAI) and the Center for International Security and Cooperation (CISAC), and has previously worked as a researcher for Upturn, the Mercator Institute for China Studies, Ranking Digital Rights, the Citizen Lab, and the AI Now Institute.

Ahmed was a Fulbright fellow at Peking University’s Law School in Beijing, where she conducted field research on how tech firms and the Chinese government are collaborating on the country’s social credit system. Her additional work focuses on perceptions of algorithmic discrimination and emotion recognition technologies in China, as well as applications of artificial intelligence in Chinese courtrooms.

Her work on the social inequalities that arise from state-firm tech partnerships in China has been featured in outlets including the Financial Times, WIRED, the South China Morning Post, Logic magazine, TechNode, The Verge, CNBC, and Tech in Asia.


Attendance at CITP Seminars is restricted to Princeton University faculty, staff and students.

If you need an accommodation for a disability, please contact Jean Butcher at butcher@princeton.edu at least one week before the event.

This talk will not be recorded.

CITP Special Event: Tech In Conversation – Critical Technology Ecologies and the Future of Repair

Date and Time
Tuesday, May 16, 2023 - 4:30pm to 6:00pm
Location
Zoom Webinar (off campus)
Type
CITP

Please register to watch the webinar: https://princeton.zoom.us/webinar/register/WN_qammGhV7QvqDQlloDppmmQ


Electronic waste, or e-waste, is the fastest-growing waste stream in the United States. But there is a way to curb the spread — allowing consumers to repair and repurpose used devices. This solution is the driver behind the Right to Repair — a movement of technologists and climate activists calling for a new tech circular economy that prioritizes the collection and recycling of consumer electronics to prevent environmental degradation.

The advocates face multiple obstacles, among them a lack of access to proprietary parts, shoddy manufacturing, and pushback from tech companies who argue that the repair of old cell phones, TVs, and other tech creates security risks for consumers.

In this panel, we will hear from community leaders, scholars, and activists from the tech, environmental, and repair sectors, advocating for consumers to have the option to repair, not just buy. We’ll also hear from those on the front lines of e-waste and innovation, and those who study the colonial and historical ties to violence created by the use of technology. Together, these panelists will elucidate the current state of affairs around the right to repair and discuss what a collective reparative future might look like.

Moderator Bio:

Kenia Hale (she/her) is an emerging scholar at Princeton University’s Center for Information Technology Policy and the Ida B. Wells Data Justice Lab. She graduated from Yale University in 2021 with a B.A. in computing and the arts, with a concentration in architecture. There, she researched social justice urbanisms and completed her senior thesis, “Algorithms of Protest: How Protests Change Cities and Cities Change Protests.” Hale is interested in environmental justice, racial justice, and the implications of big tech and surveillance on communities of color, especially across the Midwest. At CITP and the Ida B. Wells Data Justice Lab, she researches liberatory technologies, digital marronage, and Black Techno-Ecologies. You can find her online at keniahale.com and on social media at @keniaiscreating.
Speaker Bios:

Grace Akese, Ph.D., is a geographer and discard studies scholar interested in the geographies of electronic waste (e-waste). She has produced geographical scholarship on the spatiogeologies of e-waste, asking where e-waste travels, who works with it, and under what conditions. Studying sites across the “global south,” she shows that instead of trails of e-waste leading to dumpsites overflowing with debris, the paths of discarded electronics also lead to production sites where electronics are transformed and recirculated through reuse, repair, repurposing, and remanufacturing activities. Having joined the African Cluster of Excellence at the University of Bayreuth as a research fellow, Akese is currently exploring the relational entanglements of e-waste geographies as they manifest in ideas and practices of circular economies (ethical design, repair, reuse, care and maintenance, and maker space cultures). Find her on Twitter @Grace_Akese.

Joycelyn Longdon is an environmental justice activist and academic. Her research centers on the design of justice-led conservation technologies for monitoring biodiversity with local forest communities in Ghana. She is also the founder of ClimateInColour, an online education platform and community for the climate curious. The platform is a launchpad for critical conversations, but also a space of hope, a space to make climate conversations more accessible and diverse. Longdon seeks to transform how people learn about, communicate about, and act on climate issues. Find her on Twitter @climateincolour.

Emmanuel Alie Mansaray is a self-taught engineer, researcher, creative thinker, influencer, motivational speaker, geologist, environmentalist, inventor, entrepreneur, and renewable energy enthusiast. He is the creator of the Imagination Car and was featured in the 2022 documentary “For Tomorrow,” presented by the United Nations Development Program. In 2023, he graduated from Fourah Bay College, University of Sierra Leone, with a Bachelor of Science in geology. Find him on Twitter @AlieEmmanuel.

Peter Mui is the founder of Fixit Clinic (www.fixitclinic.org), which conveys critical thinking and troubleshooting skills through in-person community repair events around the U.S. and, now globally, via Intergalactic Zoom Fixit Clinics and the Global Fixers Discord. Nearly 800 Fixit Clinic events have been hosted at libraries, elementary and secondary schools, colleges and universities, and through teleconferencing software. He describes the approach as “Education, entertainment, empowerment, elucidation, and, ultimately, enlightenment through all-ages do-it-together hands-on fix-n-learn community-sponsored and community-led discovery, disassembly, troubleshooting and repair.”

Mui has keynoted for the IEEE Consumer Technology Society (CTSoc) at the Consumer Electronics Show and at the Zero Waste USA annual conference, and has presented to WIRED magazine’s RE:WIRED Green Climate Action Conference, the US Environmental Protection Agency, the US Armed Services, the American Library Association, the California Library Association, Zero Waste Washington’s Repair Economy Summit, and the American Institute of Architects (AIA). He has appeared on the PBS NewsHour, Voice of America News, and the Ralph Nader Radio Hour. He is the recipient of numerous awards, including the E-town eChievement Award and the California Resource Recovery Association’s Pavitra Crimmel Reuse Award. Find him on Twitter @FixitClinic.

Event poster

CITP Seminar: Digital Discrimination and the Law in Europe

Date and Time
Tuesday, May 9, 2023 - 12:30pm to 1:30pm
Location
Sherrerd Hall 306
Type
CITP
Speaker
Frederik Zuiderveen Borgesius, from Radboud University, Netherlands

Frederik Zuiderveen Borgesius
Organizations can use computers or AI to make decisions about people: digital differentiation. For example, insurers can adjust prices for consumers, and the government can use AI-driven analysis to combat welfare fraud. Such digital differentiation is often useful and efficient, but it also brings discrimination-related risks. First, there is a risk of discrimination against people with a certain ethnicity, gender, or similar characteristics. Second, there is a risk of other unfair differentiation that does not specifically affect people with a particular ethnicity or similar characteristic but is still unfair. For example, digital differentiation can reinforce economic inequality. The presentation introduces the main applicable rules in Europe, such as non-discrimination law and the General Data Protection Regulation (GDPR). The presentation also shows that those rules, while useful, leave serious gaps.

Bio: Frederik Zuiderveen Borgesius is a professor of ICT and law. He works at the iHub, the interdisciplinary research hub on digitalization and society at Radboud University in the Netherlands. Zuiderveen Borgesius is a law professor but teaches in the computer science department. His research mostly concerns fundamental rights, such as the right to privacy and non-discrimination rights, in the context of new technologies. He often enriches legal research with insights from other disciplines, and has cooperated with economists, computer scientists, and communication scholars. He has given expert testimony to policymakers at the Dutch and European parliaments and to committees of the Council of Europe and the United Nations.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

CITP Special Event: Confused by All the Chatter? Journalists, Researchers & Policymakers Talk Chatbots and Other Large Language Models

Date and Time
Thursday, May 4, 2023 - 4:30pm to 6:00pm
Location
Lewis Auditorium, Robertson Hall
Type
CITP



Powerful new technologies like OpenAI’s “ChatGPT” or Google’s “Bard” have sparked excitement over the potential they have to transform how we work, learn and communicate for the better. But their potential harms also trigger fears and unease. As a result, the public discourse around such large language models (LLMs) can be noisy or chaotic.

CITP has convened a panel of experts from the journalism, tech research and public policy sectors to discuss their experiences with – and approaches to – engaging with these emerging technologies in their respective professions. We will also talk about the responsibilities journalists and academics may have in shaping the public conversation around digital technologies, and how they can support each other’s work for the benefit of the public.

Panel Speakers:

Julia Angwin is an award-winning investigative journalist and contributing opinion writer at The New York Times. She founded The Markup, a nonprofit newsroom that investigates the impacts of technology on society, and is Entrepreneur in Residence at Columbia Journalism School’s Brown Institute. Angwin was previously a senior reporter at the independent news organization ProPublica, where she led an investigative team that was a finalist for a Pulitzer Prize in Explanatory Reporting in 2017 and won a Gerald Loeb Award in 2018. From 2000 to 2013, she was a reporter at The Wall Street Journal, where she led a privacy investigative team that was a finalist for a Pulitzer Prize in Explanatory Reporting in 2011 and won a Gerald Loeb Award in 2010. In 2003, she was on a team of reporters at The Wall Street Journal that was awarded the Pulitzer Prize in Explanatory Reporting for coverage of corporate corruption. She is also the author of “Dragnet Nation: A Quest for Privacy, Security and Freedom in a World of Relentless Surveillance” (Times Books, 2014) and “Stealing MySpace: The Battle to Control the Most Popular Website in America” (Random House, March 2009). She earned a B.A. in mathematics from the University of Chicago, and an M.B.A. from the Graduate School of Business at Columbia University.

Sorelle Friedler is the Shibulal Family Associate Professor of Computer Science at Haverford College. She served as the assistant director for Data and Democracy in the White House Office of Science and Technology Policy under the Biden-Harris Administration where her work included the Blueprint for an AI Bill of Rights. Her research focuses on the fairness and interpretability of machine learning algorithms, with applications from criminal justice to materials discovery. She holds a Ph.D. in computer science from the University of Maryland, College Park.

Arvind Narayanan is a professor of computer science at Princeton University. He co-authored a textbook on fairness and machine learning and is currently co-authoring a book on AI snake oil. He led the Princeton Web Transparency and Accountability Project to uncover how companies collect and use our personal information. His work was among the first to show how machine learning reflects cultural stereotypes, and his doctoral research showed the fundamental limits of de-identification. Narayanan is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE), twice a recipient of the Privacy Enhancing Technologies Award, and thrice a recipient of the Privacy Papers for Policy Makers Award.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This seminar will be recorded and posted to the CITP website, YouTube channel and Princeton University’s Media Central channel.

CITP Lecture: Aligning Machine Learning, Law, and Policy for Responsible Real-World Deployments

Date and Time
Tuesday, April 18, 2023 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
CITP
Speaker
Peter Henderson, from Stanford University

Attendance restricted to Princeton University faculty, staff and students.


Peter Henderson
Machine learning (ML) is being deployed to a vast array of real-world applications with profound impacts on society. ML can have positive impacts, such as aiding in the discovery of new cures for diseases and improving government transparency and efficiency. But it can also be harmful: reinforcing authoritarian regimes, scaling the spread of disinformation, and exacerbating societal biases. As we rapidly move toward systemic use of ML in the real world, there are many unanswered questions about how to successfully use ML for social good while preventing its potential harms. Many of these questions inevitably require pursuing a deeper alignment between ML, law, and policy.

Are certain algorithms truly compliant with current laws and regulations? Is there a better design that can make them more attuned to the regulatory and policy requirements of the real world? Are laws, policies, and regulations sufficiently informed by the technical details of ML algorithms, or will they be ineffective and out of sync? In this talk, we will discuss ways to bring together ML, law, and policy to address these questions. We will draw on real-world examples throughout the talk, including a unique real-world collaboration with the Internal Revenue Service. We will show how investigating questions of alignment between ML, law, and policy can advance core research in ML, drive interdisciplinary work, and reimagine how we think about certain laws and policies. It is Henderson’s hope that this agenda will lead to more effective and responsible ways of deploying ML in the real world, so that we steer toward positive impacts and away from potential harms.

Bio: Peter Henderson is a joint J.D.-Ph.D. (Computer Science, AI) candidate at Stanford University, where he is advised by Dan Jurafsky for his Ph.D. and by Dan Ho for his J.D. at Stanford Law School. He is also an OpenPhilanthropy AI Fellow and a Graduate Student Fellow at the Regulation, Evaluation, and Governance Lab. At Stanford Law School, Henderson co-led the Domestic Violence Pro Bono Project, worked on client representation with the Three Strikes Project, and contributed to the Stanford Native Law Pro Bono Project. Previously, he was advised by David Meger and Joelle Pineau for his M.Sc. at McGill University and the Montréal Institute for Learning Algorithms.

Henderson has spent time as a software engineer and applied scientist at Amazon AWS/Alexa and worked with Justice Cuéllar at the California Supreme Court. He is a part-time researcher with the Internal Revenue Service’s Research, Applied Analytics and Statistics Division and a technical advisor at the Institute for Security+Technology.

His research focuses on aligning machine learning, law, and policy for responsible real-world deployments. This alignment process is two-fold: (1) guided by law, policy and ethics, develop general AI systems capable of safely tackling longstanding challenges in government and society; (2) empowered by a deep technical understanding of AI, make sure that laws & policies keep general AI systems safe and beneficial for all.

Some of his work has received coverage from TechCrunch, Science, The Wall Street Journal, Bloomberg, and other outlets. He also occasionally posts a roundup of news at the intersection of AI, law, and policy. More broadly, he is interested in a wide range of technical machine learning research, policy, and legal work.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

CITP Seminar: The Societal Impact of Foundation Models

Date and Time
Tuesday, May 2, 2023 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CITP
Speaker
Rishi Bommasani, from Stanford University

Rishi Bommasani
Foundation models (e.g., ChatGPT, Stable Diffusion) are transforming society: remarkable capabilities, serious risks, rampant deployment, unprecedented adoption, overflowing funding, and unending controversy. In this talk, we will center our attention on their societal impact. In the first half, we will discuss two efforts (HELM, Ecosystem Graphs) to ensure their transparency through standardized public reporting. In the second half, we will talk about work in progress to understand unique dimensions of their harms (monoculture and homogenization) as well as mechanisms to drive change. Overall, the goal is for the talk to be high-tempo and broad in coverage to stimulate discussion on what we should do!

Bio: Rishi Bommasani is a third-year computer science Ph.D. student at Stanford University, advised by Percy Liang and Dan Jurafsky. His work broadly focuses on the societal impact of AI, often by leading large-scale collaborations and building interdisciplinary teams. More specifically, he studies foundation models in relation to concepts such as evaluation, systemic harm, governance, policy, and power. Prior to Stanford, Bommasani completed his bachelor’s degree at Cornell University, advised by Claire Cardie. He is currently supported by the NSF Graduate Research Fellowship.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

CITP Seminar: Auditing Large Language Models

Date and Time
Tuesday, March 28, 2023 - 12:30pm to 1:30pm
Location
Sherrerd Hall 306
Type
CITP
Speaker
Jakob Mökander, from the University of Oxford

Jakob Mökander
The emergence of large language models (LLMs) represents a major advance in artificial intelligence (AI) research. However, the widespread use of LLMs is also coupled with significant ethical and social challenges. Previous research has pointed towards auditing as a promising governance mechanism to help ensure that AI systems are designed and deployed in ways that are ethical, legal, and technically robust. However, existing auditing procedures fail to address the governance challenges posed by LLMs, which are adaptable to a wide range of downstream tasks. To help bridge that gap, Mökander will present a novel blueprint for how to audit LLMs. By drawing on best practices from IT governance and system engineering, he and his collaborators at the University of Oxford propose a three-layered approach, whereby governance audits, model audits, and application audits complement and inform each other. Ultimately, this research seeks to expand the methodological toolkit available to technology providers and policymakers who wish to analyze and evaluate LLMs from technical, ethical, and legal perspectives. However, it is still work in progress, so all feedback is welcome.

Bio: Jakob Mökander is a Visiting Scholar at the Center for Information Technology Policy. His research – which sits at the intersection of ethics, law, management studies and systems engineering – focuses on developing and evaluating methods to audit automated decision-making systems. The aim thereby is to provide both tech companies and policymakers with the tools they need to ensure that automated decision-making systems are designed and deployed in ways that are legal, ethical, and safe.

Mökander’s home institution is the University of Oxford, where he is pursuing a doctorate in philosophy under the supervision of Luciano Floridi, Professor of Philosophy and Ethics of Information at the Oxford Internet Institute (OII). Mökander’s doctoral research is funded through an Oxford-AstraZeneca studentship.

Mökander holds an MSc in Social Science of the Internet from the University of Oxford and an MSc in Industrial Engineering and Management from Linköping University in Sweden. Prior to joining the OII, Mökander was posted at the Swedish Trade & Invest Council in New Delhi, India, where he facilitated international industrial R&D projects.

For an overview of Mökander’s published work, visit his Google Scholar page.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This talk will be recorded. The video will be posted to the CITP website, the CITP YouTube channel and the Princeton University Media Central channel.


CITP Seminar – #HashtagActivism: Networks of Race and Gender Justice

Date and Time
Tuesday, April 4, 2023 - 12:30pm to 1:30pm
Location
Sherrerd Hall 306
Type
CITP
Speaker
Brooke Welles, from Northeastern University

Brooke Foucault Welles
The proliferation of social media has given rise to widespread study and speculation about the impact of digital technologies on politics, activism, and social change. Key among these debates is the role of social media in shaping the contemporary public sphere, and by proxy, our democracy. Though some malign it as “slacktivism,” this talk will argue that social media platforms such as Twitter created unique opportunities for traditionally excluded voices to challenge the terms of public debate. Using evidence from Twitter hashtag networks such as #BlackLivesMatter and #MeToo, we will demonstrate how hashtag activism complemented other forms of activism and changed the terms of mainstream discussions about race and gender justice in the United States. We will also reflect on the continued capacity of social media for social change, in light of recent changes to Twitter and other platforms.

This talk draws on research from #HashtagActivism: Networks of Race and Gender Justice, available for free through MIT Press Direct: https://direct.mit.edu/books/book/4597/HashtagActivismNetworks-of-Race-and-Gender-Justice

Bio: Brooke Foucault Welles (she/her) is the Associate Dean for Research in the College of Arts, Media, and Design and Director of the Network Science Ph.D. Program at Northeastern University in Boston. Combining the methods of network science with theories from the social sciences, Welles studies influence and amplification in online communication networks, with particular emphasis on how these networks mitigate and exacerbate marginalization. Her work is interdisciplinary and collaborative, with co-authors from computer science, political science, digital humanities, design, and public health. She is the co-author of #HashtagActivism: Networks of Race and Gender Justice and co-editor of the Oxford Handbook of Networked Communication. For more information, see https://camd.northeastern.edu/faculty/brooke-foucault-welles/


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This talk will be recorded. The video will be posted to the CITP website, the CITP YouTube channel and the Princeton University Media Central channel.


Foundations of Responsible Machine Learning

Date and Time
Monday, March 20, 2023 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CITP
Speaker
Michael Kim, from the University of California, Berkeley
Host
Aleksandra Korolova

Michael Kim
Algorithms make predictions about people constantly. The spread of such prediction systems has raised concerns that machine learning algorithms may exhibit problematic behavior, especially against individuals from marginalized groups. This talk will provide an overview of my research building a theory of “responsible” machine learning. I will highlight a notion of fairness in prediction, called Multicalibration (ICML’18), which requires predictions to be well-calibrated, not simply overall, but on every group that can be meaningfully identified from data. This “multi-group” approach strengthens the guarantees of group fairness definitions, without incurring the costs (statistical and computational) associated with individual-level protections. Additionally, I will present a new paradigm for learning, Outcome Indistinguishability (STOC’21), which provides a broad framework for learning predictors satisfying formal guarantees of responsibility. Finally, I will discuss the threat of Undetectable Backdoors (FOCS’22), which represent a serious challenge for building trust in machine learning models.
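For readers unfamiliar with the term, here is a rough formal sketch of multicalibration (our paraphrase, not a statement from the talk; the ICML’18 paper handles discretization of prediction values and small groups more carefully). A predictor f is approximately multicalibrated with respect to a collection of groups \mathcal{C} if its predictions are calibrated on every group in the collection, not just on the population overall:

\[
\bigl|\, \mathbb{E}\bigl[\, y - f(x) \;\big|\; x \in S,\ f(x) = v \,\bigr] \,\bigr| \le \alpha
\quad \text{for every group } S \in \mathcal{C} \text{ and every prediction value } v,
\]

where y is the true outcome and \alpha is a tolerance parameter. In words: among the members of any identifiable group who receive prediction v, the average outcome must be within \alpha of v, rather than requiring calibration only in aggregate.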

Bio: Michael P. Kim is a Postdoctoral Research Fellow at the Miller Institute for Basic Research in Science at UC Berkeley, hosted by Shafi Goldwasser. Before this, Kim completed his Ph.D. in Computer Science at Stanford University, advised by Omer Reingold. Kim’s research addresses basic questions about the appropriate use of machine learning algorithms that make predictions about people. More generally, Kim is interested in how the computational lens (i.e., algorithms and complexity theory) can provide insights into emerging societal and scientific challenges.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.
