
CITP Special Event: Privacy and Autonomy in the Metaverse

Date and Time
Wednesday, September 21, 2022 - 5:00pm to 6:00pm
Location
PUL Makerspace – Lewis Library
Type
CITP
Host
Princeton University Library and the Center for Information Technology Policy (CITP)

Virtual reality (VR) technologies have never been more accessible. The utopian vision is that this accessibility has a democratizing effect; in practice, however, when companies monetize their technology, the trade-off is often privacy.

Biometric data collection by companies and governments is on the rise, and the adverse effects of “filter bubbles” that prey upon negative emotional responses continue to afflict society. As an increasingly mainstream technological medium, VR is already used as a tool toward these ends, and Meta has gone on record indicating that its data-extraction-based business model will not change. Is there time to intervene? What would intervention look like at the point of policy, and at the point of design?

Join us for a panel discussion about current trends and concerns in data privacy policy as it relates to VR and extended reality (XR) technologies. This panel brings together diverse perspectives on data privacy policy and liberatory digital systems. We’ll talk about recent developments in U.S. privacy policy as it pertains to biometric data collection, and implications for VR moving forward.

Panelists include:

  • Payton Croskey, Ida B. Wells Just Data Lab
  • Jennifer Grayburn, Princeton University Library
  • Mihir Kshirsagar, Center for Information Technology Policy

This event is free and open to the public.

Hosted by the Princeton University Library and co-sponsored by the Center for Information Technology Policy (CITP)

CITP Seminar: Participatory User Data Collection and Potential Futures for Platform Accountability: The Case of Mozilla Rally

Date and Time
Tuesday, October 11, 2022 - 12:30pm to 1:30pm
Location
Sherrerd Hall 306
Type
CITP
Speaker
Rebecca Weiss, from Mozilla

As modern life increasingly moves online, our ability to understand the impact of the Internet on society has grown critically dependent upon the benevolence of tech platforms. To date, corporate data sharing programs have largely failed to meet the needs of the research community working on this problem. This talk will outline the shortcomings of previous corporate data sharing initiatives and identify requirements that future systems must meet. It will also present Mozilla Rally, a data platform that addresses some of these requirements, focusing on three of its properties: user data donation and concomitant consent architectures, alternative corporate governance models, and community-led software development and release practices. The talk concludes with a discussion of lessons learned during the creation of Rally, as well as possible future directions that similar efforts could take.

Bio: Rebecca Weiss is an award-winning computational social scientist and data science leader. She has worked in academia and industry, applying innovative methods to large-scale data sets to better understand online environments and their behavioral consequences.

As head of research and innovation at Mozilla, she created and incubated the Rally project, a privacy-preserving data platform leveraged by institutions including Princeton, Stanford, and The Markup to conduct research in the public interest. Before that, she founded the Firefox Data Science team and advanced Lean Data Practices as Mozilla’s director of data science.

Weiss has held fellowships at the Berkman Klein Center for Internet & Society at Harvard University and at the Brown Institute for Media Innovation (a joint effort between the Stanford School of Engineering and the Columbia School of Journalism). She has advised the U.S. Congress on artificial intelligence policy, and her research has been published in leading computer science and social science conferences and journals, such as WWW, ICWSM, KDD, PETS, and ICA. She holds a Ph.D. from Stanford, an S.M. in technology policy from MIT, and a B.A. in cognitive systems from the University of British Columbia.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This talk will be recorded and posted on the CITP YouTube channel and on the Princeton University Media Central website.

Click here to attend via Zoom: https://princeton.zoom.us/j/96994818236

CITP Seminar: An Equitable Technological Future for Cities

Date and Time
Tuesday, October 4, 2022 - 12:30pm to 1:30pm
Location
Sherrerd Hall 306
Type
CITP
Speaker
Elie Bou-Zeid, from Princeton University

Will artificial intelligence correct or perpetuate historic discriminatory practices in cities? Will urban heat mitigation strategies and new ecosystem amenities be deployed fairly across all neighborhoods? Will new mobility technologies be accessible to all citizens and localities? Will new policing or security technology deployment have intended or unintended bias? Who will pay to bring urban infrastructure into the 21st century? Who owns the data collected by the myriad smart devices in the internet of things and who is trusted to oversee the use of these data? Who is responsible when technology does not function as intended?

As cities begin a deep, but slow, technological transformation, these are some of the questions that emerge and that will require open debate, broad stakeholder engagement, and new legal and policy frameworks. This talk does not answer any of these questions, but it puts them in the context of accelerating urbanization and the broad challenges and opportunities cities will face in the coming decades, and it offers a plausible framework for engaging with the intellectual dilemmas they pose in terms of access to, benefits of, and the ultimate goal of deploying new technologies in cities.

Bio: Elie Bou-Zeid is professor of civil and environmental engineering. He was the director of the Metropolis Project for urban technology at Princeton University until 2022. He is also associated faculty with the Andlinger Center for Energy and the Environment and the Department of Mechanical and Aerospace Engineering. An expert in geophysics and atmospheric sciences, his research is broadly focused on measurement and modeling of material and energy transfers in the lower atmosphere, with applications to urban environmental quality, building energy efficiency, wind energy production, and polar sea ice fluctuations. He is editor of the Journal of the Atmospheric Sciences and a co-author of the NSF-sponsored report on Urban Climate and Resiliency, which aims to understand the role of megacities in global climate and to develop strategies to equitably improve urban climate resilience and reduce urban atmospheric greenhouse emissions. Bou-Zeid holds a bachelor’s degree in mechanical engineering and a master’s degree in environmental engineering and water resources from the American University of Beirut, and a Ph.D. in environmental engineering from Johns Hopkins University.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This talk will be recorded and posted on the CITP YouTube channel and on the Princeton University Media Central website.

Click here to attend via Zoom: https://princeton.zoom.us/j/93031681543

CITP Seminar: Data Privacy is Important, but it’s not Enough

Date and Time
Tuesday, September 20, 2022 - 12:30pm to 1:30pm
Location
Sherrerd Hall 306
Type
CITP
Speaker
Katrina Ligett, from Hebrew University

Our current data ecosystem leaves individuals, groups, and society vulnerable to a wide range of harms, ranging from privacy violations to subversion of autonomy to discrimination to erosion of trust in institutions. This talk will discuss the Data Co-ops Project, a multi-institution, multi-disciplinary effort co-led with Kobbi Nissim, which seeks to organize our understanding of these harms and to coordinate a set of technical and legal approaches to addressing them. In particular, the talk will present recent joint work with Ayelet Gordon and Alex Wood arguing that legal and technical tools aimed at controlling data and addressing privacy concerns are inherently insufficient for addressing the full range of these harms.

Bio: Katrina Ligett is a professor in the School of Computer Science and Engineering at the Hebrew University, where she is also the head of the program on the Interfaces of Technology, Society, and Networks (formerly known as Internet & Society), an elected member of the Federmann Center for the Study of Rationality, and an affiliate of the Federmann Cyber Security Research Center. Before joining the Hebrew University, she was faculty in computer science and economics at Caltech. Her primary research interests are in data privacy, algorithmic fairness, machine learning theory, and algorithmic game theory. She received her Ph.D. in computer science from Carnegie Mellon University in 2009 and did her postdoc at Cornell University. She is a recipient of the NSF CAREER award and a Microsoft Faculty Fellowship. Ligett was the co-chair of the 2021 International Conference on Algorithmic Learning Theory (ALT) and the chair of the 2021 Symposium on Foundations of Responsible Computing (FORC). She currently serves as an advisory board member to the Harvard University OpenDP Project, and as an associate editor at the journals TheoretiCS and Transactions on Economics and Computation (TEAC). She is also an executive board member of the Association for Computing Machinery (ACM) Special Interest Group on Economics and Computation (SIGecom) and a principal investigator in the Simons Foundation Collaboration on the Theory of Algorithmic Fairness. Ligett is the Microsoft Visiting Professor at the Princeton University Center for Information Technology Policy (CITP).


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This talk will be recorded and posted on the CITP YouTube channel and on the Princeton University Media Central website.

To attend via Zoom, click here: https://princeton.zoom.us/j/91048334264

CITP Special Event: Tech In Conversation: Imagining Radical Tech Futures

Date and Time
Monday, April 11, 2022 - 4:30pm to 5:30pm
Location
Hybrid event (off campus)
Type
CITP

A Zoom link is available for the public and for those who do not want to attend in person.

In person attendance is restricted to Princeton University students, faculty and staff. Please register here using your Princeton University email address.


While scholars often examine the ways in which technologies fail and marginalize communities, this event focuses on an equally critical goal: adopting an abolitionist mindset, one that asks how we can build new and life-affirming systems while tearing down those that inflict harm. This panel brings together three technologists engaged in the exploration of generative, creative, and justice-oriented interventions to improve the relationship between technology and society, specifically by developing alternatives to violent and discriminatory systems. We look forward to a lively discussion about each of their approaches to designing radical tech tools for social change, and to expanding our own imaginations around the future of tech.

This event is the first in our Tech in Conversation series at the Center for Information Technology Policy. This series aims to spark conversations about tech and society across a wide range of disciplines, from cybersecurity to designing radical games to community technology initiatives. We’ll host speakers with experience outside of academia, including those working in the field, in policy, on social media, at the grassroots, in art, and in community, and engage them in conversations that are relevant and accessible to a wide community of people.

CITP Seminar: Algorithmic Ecosystems: Understanding Human-AI Interactions from Both Sides of the Algorithm

Date and Time
Tuesday, April 19, 2022 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Amy Winecoff, from CITP

Click here to join the seminar


Machine learning (ML) and artificial intelligence (AI) algorithms constitute a core component of many technology products. Although ML and AI algorithms can be beneficial, they also pose risks to individual users and society as a whole. Most often, the performance of algorithms is assessed using static measures of predictive accuracy. This approach is not only insufficient for reliably and validly estimating model performance, but also provides no information about a system’s ethical risks within a social context. To better understand the positive and negative impacts of AI and ML, we must recontextualize algorithms as embedded within a dynamic social ecosystem in which humans both influence and are influenced by algorithms.

The talk will discuss two ecosystems in which human-algorithm interactions affect broader social outcomes. The first part of the talk will address how ideas from empirical research methods can be leveraged within agent-based simulations to better understand the effects of feedback loops in algorithmic systems. The second part of the talk will address how institutional factors influence technology development in AI startups and how these factors can catalyze or constrain ethical approaches to AI.
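The feedback-loop dynamic described in the first part of the talk can be illustrated with a toy agent-based simulation. This is a hypothetical sketch for illustration only, not code from the talk: a platform that always recommends its currently most-clicked items amplifies early random fluctuations into lasting popularity gaps.

```python
import random

def simulate_popularity_feedback(n_items=20, n_rounds=2000, k=3, seed=0):
    """Toy agent-based sketch of an algorithmic feedback loop.

    Each round, the 'platform' recommends the k most-clicked items so far,
    and a simulated user clicks one of them at random. Because only
    recommended items can accumulate clicks, early leads are reinforced.
    """
    rng = random.Random(seed)
    clicks = [1] * n_items  # start every item with one click (uniform prior)
    for _ in range(n_rounds):
        # Recommend the current top-k items by click count.
        top_k = sorted(range(n_items), key=lambda i: clicks[i], reverse=True)[:k]
        # The user clicks one recommended item, reinforcing its rank.
        clicks[rng.choice(top_k)] += 1
    return clicks

clicks = simulate_popularity_feedback()
top_share = sum(sorted(clicks, reverse=True)[:3]) / sum(clicks)
print(f"share of clicks captured by top 3 of 20 items: {top_share:.2f}")
```

Even though all 20 items start identical, the top three end up with nearly all clicks, a minimal example of the kind of feedback effect such simulations are designed to study.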

Bio: Amy Winecoff is a DataX data scientist at CITP. Her primary interests are in human-algorithm interactions and fairness in machine learning systems. Winecoff received her Ph.D. in psychology and neuroscience from Duke University. After graduate school, she was an assistant professor at Bard College, where she taught neuroscience, abnormal psychology, and research methods. After leaving academia, she conducted research and developed machine learning models for government agencies such as DARPA and the U.S. Air Force to explain and predict human behavior. As a senior data scientist at True Fit and Chewy, she developed product recommendation and search systems. She also conducted quantitative user research to assess how users’ psychology informs their evaluation of algorithmic predictions. Winecoff is passionate about diversity and inclusion in the technology industry.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.
This seminar will be recorded.

This seminar is co-sponsored by the Center for Statistics and Machine Learning.

CITP Seminar: The Next Generation of Software Developers

Date and Time
Tuesday, April 12, 2022 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Denae Ford Robinson, from Microsoft Research

Click here to join the seminar.


Microsoft is home to the world’s largest developer communities and ecosystems through Azure, GitHub, and Visual Studio. Sustaining inclusive communities is therefore of strategic importance, as it has the potential to transform society by enabling more people to develop software. Developers in these communities and others (e.g., Stack Overflow, YouTube, Twitter) often blend aspects of their professional work with their personal lives on social media platforms, which allows them to feel more comfortable engaging. Understanding how developers operate at these intersections therefore helps practitioners better prepare for the evolution of online professional communities and continue to bridge their enterprise and consumer markets. In this talk, we will cover recent research on evolving developer communities and outline opportunities for ushering in the next generation of software developers by fostering healthy and inclusive communities.

Bio: Denae Ford Robinson is a senior researcher at Microsoft Research in the SAINTes group and an affiliate assistant professor in the Human Centered Design and Engineering Department at the University of Washington. Her research lies at the intersection of Human-Computer Interaction and Software Engineering. In her work she identifies and dismantles cognitive and social barriers by designing mechanisms to support software developer participation in online socio-technical ecosystems. She is best known for her research on just-in-time mentorship as a mode to empower welcoming engagement in collaborative Q&A for online programming communities including open-source software and work to empower marginalized software developers in online communities.

She received her B.S., M.S., and Ph.D. in computer science, with a graduate minor in cognitive science, from North Carolina State University. She is a recipient of the National GEM Consortium Fellowship, the National Science Foundation Graduate Research Fellowship, and the Microsoft Research Ph.D. Fellowship.

Her research publications can be found under her pen name ‘Denae Ford’. More information about her latest research can be found on her website: http://denaeford.me/


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.
This seminar will be recorded.

Co-sponsored by: Princeton HCI
Human-Computer Interaction at Princeton

CITP Seminar: Machine Bullshit: Emergent Manipulative Behavior in Language Agents

Date and Time
Tuesday, March 29, 2022 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Jaime Fernández Fisac, from Princeton University

Click here to join the seminar.


Jaime Fernández Fisac
Our research group is currently trying to shed light on what we think is one of the most pressing dangers presaged by the increasing power and reach of AI technologies. The conjunction of large-scale language models like GPT-3 with advanced strategic decision-making systems like AlphaZero can bring about a plethora of extremely effective AI text-generation systems with the ability to produce compelling arguments in support of arbitrary ideas, whether true, false, benign or malicious.

Through continued interactions with many millions of users, such systems could quickly learn to produce statements that are highly likely to elicit the desired human response, belief or action. That is, these systems will reliably say whatever they need to say to achieve their goal: we call this Machine Bullshit, after Harry Frankfurt’s excellent 1986 philosophical essay “On Bullshit”. If not properly understood and mitigated, this technology could result in a large-scale behavior manipulation device far more effective than subliminal advertising, and far more damaging than “deep fakes” in the hands of malicious actors.

Our aim is to bring together insights from dynamic game theory, machine learning, and human-robot interaction to better understand these risks and inform the design of safe language-enabled AI systems.

Bio: Jaime Fernández Fisac is an assistant professor in the Department of Electrical and Computer Engineering at Princeton. He is an associated faculty member in the Department of Computer Science and the Center for Statistics and Machine Learning, as well as a co-director of the Princeton AI4ALL summer camp.

He is interested in ensuring the safe operation of robotic systems in the human space. Fernández Fisac’s work combines safety analysis from control theory with machine learning and artificial intelligence techniques to enable robotic systems to reason competently about their own safety in spite of using inevitably fallible models of the world and other agents. This is done by having robots monitor their own ability to understand the world around them, accounting for how the gap between their models and reality affects their ability to guarantee safety.

Much of his research uses dynamic game theory together with insights from cognitive science to enable robots to strategically plan their interaction with human beings in contexts ranging from human-robot teamwork to drone navigation and autonomous driving. His lab’s scope spans theoretical work, algorithm design, and implementation on a variety of robotic platforms.

Fernández Fisac completed his Ph.D. in electrical engineering and computer science at UC Berkeley in 2019; at the midpoint of his Ph.D., he spent six months doing R&D work at Apple. Before that, Fernández Fisac received his B.S./M.S. in electrical engineering at the Universidad Politécnica de Madrid in Spain and a master’s degree in aeronautics at Cranfield University in the UK. Before joining Princeton in fall 2020, he spent a year as a research scientist at Waymo (formerly known as Google’s Self-Driving Car project).


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This seminar will be recorded.

CITP Seminar: Data, Power, and AI Ethics: Critiquing and Rethinking Machine Learning Data Infrastructure

Date and Time
Tuesday, April 5, 2022 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Emily Denton, from Google

Click here to join the seminar.


In response to growing concerns about bias, discrimination, and unfairness perpetuated by algorithmic systems, the datasets used to train and evaluate machine learning models have come under increased scrutiny in recent years. This talk will examine the role datasets play in model development and in the broader social organization of the field. It will summarize a host of concerns that have been identified with the dominant practices of dataset development and use across the field, as well as the strengths and deficiencies of interventions that have emerged in response to these concerns.

Bio: Emily Denton (they/them) is a Senior Research Scientist at Google, studying the societal impacts of artificial intelligence (AI) technology and the conditions of AI development. Prior to joining Google, Denton received their Ph.D. in machine learning from the Courant Institute of Mathematical Sciences at New York University, focusing on unsupervised learning and generative modeling of images and video.

Though trained formally as a computer scientist, Denton draws ideas and methods from multiple disciplines and is drawn towards highly interdisciplinary collaborations, in order to examine AI systems from a sociotechnical perspective.  Their recent research centers on a critical examination of the histories of datasets — and the norms, values, and work practices that structure their development and use — that make up the underlying infrastructure of AI research and development.

Denton is queer and nonbinary and uses they/them pronouns. They are also a circus aerialist, rock climber, and cat parent of two.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This seminar will be recorded.

CITP Seminar: Closer Than They Appear: A Bayesian Perspective on Individual-level Heterogeneity in Risk Assessment

Date and Time
Tuesday, March 15, 2022 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Kristian Lum, from Twitter

Click here to join the seminar.


Risk assessment instruments are used across the criminal justice system to estimate the probability of some future behavior given covariates. The estimated probabilities are then used in making decisions at the individual level. In the past, there has been controversy about whether the probabilities derived from group-level calculations can meaningfully be applied to individuals. Using Bayesian hierarchical models applied to a large longitudinal dataset from the court system in the state of Kentucky, we analyze variation in individual-level probabilities of failing to appear for court and the extent to which it is captured by covariates.

We find that individuals within the same risk group vary widely in their probability of the outcome. In practice, this means that allocating individuals to risk groups based on standard approaches to risk assessment, in large part, results in creating distinctions among individuals who are not meaningfully different in terms of their likelihood of the outcome. This is because uncertainty about the probability that any particular individual will fail to appear is large relative to the difference in average probabilities among any reasonable set of risk groups.
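The intuition behind this finding can be illustrated with a toy simulation. This is a hypothetical sketch, not the talk’s actual Bayesian hierarchical model: if each individual’s true probability of the outcome is drawn from a distribution whose mean matches their risk group’s average, the within-group spread can rival or exceed the gaps between group means.

```python
import random
import statistics

def simulate_risk_groups(group_means=(0.15, 0.25, 0.35), n_per_group=5000,
                         concentration=10, seed=0):
    """Toy illustration of individual-level heterogeneity in risk scores.

    Each 'individual' in a risk group has a true outcome probability drawn
    from a Beta distribution whose mean equals the group average. The group
    means (0.15, 0.25, 0.35) and concentration are illustrative choices.
    """
    rng = random.Random(seed)
    groups = []
    for mean in group_means:
        # Beta(a, b) with a + b = concentration has mean a / (a + b) = mean.
        a = mean * concentration
        b = (1 - mean) * concentration
        groups.append([rng.betavariate(a, b) for _ in range(n_per_group)])
    return groups

groups = simulate_risk_groups()
for mean, probs in zip((0.15, 0.25, 0.35), groups):
    sd = statistics.pstdev(probs)
    print(f"group mean {mean:.2f}: within-group standard deviation ~ {sd:.2f}")
```

With these illustrative parameters, the within-group standard deviation of individual probabilities is comparable to the 0.10 gap between adjacent group means, so many individuals in the "low" group are riskier than many in the "high" group, which is the kind of overlap the talk describes.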

Bio: Kristian Lum is a senior staff machine learning researcher at Twitter in the Machine Learning Ethics, Transparency, and Accountability group. Prior to that she was a research professor at the University of Pennsylvania in the Department of Computer and Information Science and the lead statistician at the Human Rights Data Analysis Group. She was a founding member of the Executive Committee of the ACM Conference on Fairness, Accountability, and Transparency. Her research focuses on the responsible use of algorithmic decision-making, with an emphasis on evaluation of models for harmful impacts and mitigation techniques. In the past, her research has focused on the (un)fairness of predictive models used in the criminal justice system.


To request accommodations for a disability please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This seminar will be recorded.
