CITP Seminar – The Reach of Fairness: From Algorithmic Justice to Experimental Design

Date and Time
Tuesday, October 29, 2024 - 12:30pm to 1:30pm
Location
Sherrerd Hall 306
Type
CITP
Speaker
Lydia Liu, from Princeton University

Lydia Liu
As AI systems increasingly shape critical decisions in society, ensuring fairness presents both philosophical and practical challenges. This talk begins by broadening the existing scope of normative discourse on machine learning and algorithmic decision-making. Drawing on the Rawlsian understanding of fair cooperation among free and equal persons as a fundamental political value, the talk explores how concerns about fairness and machine learning should be expanded in three key ways: (1) addressing discrimination beyond group subordination, (2) addressing equality of opportunity beyond organizational decision-making, and (3) addressing fairness beyond equality of opportunity.

In translating these principles into practice, the challenge of evaluating automated decision systems in deployment is examined. The talk highlights how experimental designs often simplify human decision-making, potentially biasing the understanding of the impacts of algorithmic interventions. Together, these perspectives underscore the need for both conceptual expansion and rigorous evaluation to ensure that algorithmic deployments align with societal fairness.

Bio: Lydia Liu joined Princeton University as an assistant professor in 2024. Her current research examines the theoretical foundations of machine learning and algorithmic decision-making, with a focus on societal impact and welfare.

Prior to joining Princeton, she was a postdoctoral associate in computer science at Cornell University with the Artificial Intelligence, Policy, and Practice (AIPP) initiative. Her work has been recognized with a Microsoft Ada Lovelace Fellowship, an Open Philanthropy AI Fellowship, an NUS Development Grant, and an ICML Best Paper Award.

She obtained a Ph.D. in Electrical Engineering and Computer Sciences from the University of California, Berkeley, and a B.S.E. in Operations Research and Financial Engineering from Princeton University.


In-person attendance is open to Princeton University faculty, staff and students.

This talk will be open to the public, at this link, via Zoom. It will be recorded and posted on the CITP website, the CITP YouTube channel, and the Princeton University Media Central channel.

If you need an accommodation for a disability please contact Jean Butcher at butcher@princeton.edu at least one week before the event.

Contributions to and/or sponsorship of any event does not constitute departmental or institutional endorsement of the specific program, speakers or views presented.

CITP Seminar – Divvi Up and the Future of Privacy Respecting Metrics

Date and Time
Tuesday, November 5, 2024 - 12:30pm to 1:30pm
Location
Sherrerd Hall 306
Type
CITP
Speaker
Josh Aas, from Internet Security Research Group

Josh Aas
Facilities for collecting data and metrics are central to both the goals and operations of the Internet today. Given this, it’s interesting that with the exception of ubiquity and operational scaling, the way the world collects data today has not changed much in decades. Mathematical and engineering wizardry has unlocked more opportunities than ever to collect data in ways that improve privacy and mitigate negative impacts, but for the most part we still just collect numbers from system participants and send them over a pipe back to a central clearinghouse. Cookies continue to serve the same central role in systems like advertising attribution that they did twenty years ago.

This talk will provide an overview of the Divvi Up service provided by nonprofit Internet Security Research Group (ISRG) and the broader context for services like this today. What do the options look like and how can they work together, including DAP, OHTTP, and differential privacy? Why are these things worth pursuing? Why has adoption been relatively slow? What are the challenges and tradeoffs? How might legislation and policy impact the future of these systems?
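For readers unfamiliar with how systems in this space differ from conventional telemetry, the sketch below illustrates the core idea behind DAP-style aggregation: each client splits its measurement into additive secret shares and sends one share to each of two non-colluding aggregators, so that only the combined total, never any individual value, is revealed. This is a minimal illustration, not Divvi Up's actual protocol, which additionally relies on validity proofs and standardized aggregation functions.

```python
import secrets

MODULUS = 2**64  # arithmetic is done modulo a fixed value (an assumption for this sketch)

def split_measurement(value: int) -> tuple[int, int]:
    """Split a single measurement into two additive secret shares.

    Each share alone is uniformly random; only their sum (mod MODULUS)
    reveals the original value.
    """
    share_a = secrets.randbelow(MODULUS)
    share_b = (value - share_a) % MODULUS
    return share_a, share_b

def aggregate(shares: list[int]) -> int:
    """Each aggregator sums the shares it received, learning nothing about individuals."""
    return sum(shares) % MODULUS

# Example: three clients report page-load times in milliseconds.
measurements = [120, 340, 95]
shares_for_a, shares_for_b = zip(*(split_measurement(m) for m in measurements))

# The two non-colluding aggregators each compute a partial sum...
partial_a = aggregate(list(shares_for_a))
partial_b = aggregate(list(shares_for_b))

# ...and only the combined result reveals the population total.
total = (partial_a + partial_b) % MODULUS
assert total == sum(measurements)
print(f"Aggregate total: {total} ms across {len(measurements)} clients")
```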

Bio: Josh Aas co-founded and currently runs Internet Security Research Group (ISRG), the nonprofit entity behind Let’s Encrypt, the world’s largest certificate authority, which helps secure more than 290 million websites. He also spearheaded ISRG’s latest projects: Prossimo, focused on bringing memory-safe code to security-sensitive software, and Divvi Up, a privacy-respecting metrics service. Josh worked in Mozilla’s platform engineering group for many years, improving the Firefox web browser. He also worked for Mozilla in a senior strategy role, helping to find solutions for some of the Web’s most difficult problems. He has deep expertise in software security and ecosystem dynamics, as well as organizational leadership.


In-person attendance is open to Princeton University faculty, staff and students. This talk will be open to the public, at this link, via Zoom. It will be recorded and posted on the CITP website, the CITP YouTube channel, and on the Princeton University Media Central channel.

If you need an accommodation for a disability please contact Jean Butcher at butcher@princeton.edu at least one week before the event.

Contributions to and/or sponsorship of any event does not constitute departmental or institutional endorsement of the specific program, speakers or views presented.

CITP Conference – Tech Policy: The Next Ten Years

Date and Time
Friday, October 25, 2024 - 9:00am to 5:00pm
Location
Andlinger Center Maeder Hall
Type
CITP

This conference is for everyone who is interested in ensuring that technology has a positive impact on society. Learn about how you can make an impact on the development and governance of technology, whether in industry or the public sector. Alumni, affiliates, and friends of CITP will reflect on their careers and share advice, while current CITP scholars will present their research that has contributed to ongoing debates on topics including AI, social media, and cybersecurity.


Additional details about the conference will be added soon.

Contributions to and/or sponsorship of any event does not constitute departmental or institutional endorsement of the specific program, speakers or views presented.

CITP Seminar – Science for Policy and Policy for Science

Date and Time
Tuesday, September 10, 2024 - 12:30pm to 1:30pm
Location
Sherrerd Hall 306
Type
CITP
Speaker
Arvind Narayanan, from Princeton University

Arvind Narayanan
Policy making should be informed by evidence, especially scientific evidence. But exactly how is a surprisingly tricky question. In this talk Narayanan will take a close look at the science-policy interface: how it works and how it should work. The talk will diagnose structural reasons why he believes the kind of evidence that science is good at producing is mismatched with the kind of evidence that’s useful for policy. Ignoring this mismatch leads to bad policy and weakens public trust in science. The talk will end by proposing paths toward a healthier relationship between science and policy.

This talk is a high-level overview of an early-stage book project and an invitation to deeper one-on-one discussions.

Bio: Arvind Narayanan is the director of CITP and a professor of computer science at Princeton University. He co-authored a textbook on fairness and machine learning and is currently co-authoring a book on AI snake oil. He led the Princeton Web Transparency and Accountability Project to uncover how companies collect and use our personal information. His work was among the first to show how machine learning reflects cultural stereotypes, and his doctoral research showed the fundamental limits of de-identification. Narayanan is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE), twice a recipient of the Privacy Enhancing Technologies Award, and thrice a recipient of the Privacy Papers for Policy Makers Award.


In-person attendance is open to Princeton University faculty, staff and students. This seminar will be open via Zoom, at this link, only to those with a Princeton University email address. It will be recorded and available to the Princeton University community by request.

If you need an accommodation for a disability please contact Jean Butcher at butcher@princeton.edu at least one week before the event.

Contributions to and/or sponsorship of any event does not constitute departmental or institutional endorsement of the specific program, speakers or views presented.

CITP Seminar – Engineering Theory for Emerging Tech Policymaking

Date and Time
Tuesday, October 1, 2024 - 12:30pm to 1:30pm
Location
Sherrerd Hall 306
Type
CITP
Speaker
Lav Varshney, from University of Illinois Urbana-Champaign

Lav Varshney
Mathematical engineering theories are useful in numerous ways: they provide fundamental relationships between the capabilities of emerging technologies and the resources they require; establish benchmarks for evaluating new technologies on absolute scales, rather than only relative to previous technologies; delineate what is possible from what is impossible (and yield principles for optimal architectures); and set ideals that push industry to build technologies that approach or achieve these limits. Yet engineering theory is largely ignored in technology policymaking.

Here, we argue that engineering theory can play dual roles for policy. First, it can inform policymaking, through methods for net assessment, relationships between capabilities and resources for regulatory policy, and architectures for industrial policy. Second, it can serve as a policy lever itself, through mechanisms of performativity and by countering philosophies of computational positivism. To make this case, we present vignettes from several areas of emerging tech policy, including AI, 6G wireless, semiconductors, quantum, and climate. For example, novel engineering theory can characterize emergent capabilities of AI and alternative paths to artificial general intelligence (AGI), informing AI policymaking. These vignettes will draw on policy work at the White House, with the City of Syracuse, and with the Indian Forest Service.

Bio: Lav Varshney is an associate professor of electrical and computer engineering at the University of Illinois Urbana-Champaign, co-founder and CEO of Kocree, Inc., a startup company using novel human-integrated AI in social music co-creativity platforms to enhance human wellbeing across society, and chief scientist of Ensaras, Inc., a startup company focused on AI and wastewater treatment.  He also holds affiliations with RAND Corporation and with Brookhaven National Laboratory.

He is a former White House staffer, having served on the National Security Council staff as a White House Fellow, where he contributed to national AI and wireless communications policy. Previously, at IBM Research, he led the development and deployment of the Chef Watson system for culinary creativity, the first commercially successful generative AI technology, which received worldwide acclaim. At Salesforce Research, he was part of the team that open-sourced the largest and most capable large language model at the time.

His work and public scholarship have been featured in media ranging from Fox News and the Wall Street Journal to the New York Times, NPR, Slate, and The New Yorker. He appeared in the Robert Downey, Jr. documentary series, Age of AI. He holds a B.S. degree in electrical and computer engineering from Cornell University and S.M. and Ph.D. degrees in electrical engineering and computer science from the Massachusetts Institute of Technology. His current research interests include information theory; artificial intelligence foundations, explainability, and governance; agent-based policymaking; and AI applications in health and wellbeing.


In-person attendance is open to Princeton University faculty, staff and students. This seminar is open to the general public, at this link, via Zoom. It will be recorded and posted to the CITP website, the CITP YouTube channel and the Princeton University Media Central channel.

If you need an accommodation for a disability please contact Jean Butcher at butcher@princeton.edu at least one week before the event.

Contributions to and/or sponsorship of any event does not constitute departmental or institutional endorsement of the specific program, speakers or views presented.

CITP Seminar – Auditing Ad Delivery Algorithms in the Public Interest

Date and Time
Tuesday, October 8, 2024 - 12:30pm to 1:30pm
Location
Sherrerd Hall 306
Type
CITP
Speaker
Basileal Imana, from CITP

Basileal Imana
Ad delivery algorithms play an important role in shaping access to information and economic opportunities. However, the opaque nature of these algorithms to both users and advertisers has raised societal concerns about bias and discrimination. These concerns have led to increased scrutiny through research, civil rights audits, and regulation. In this talk, we present findings from our black-box audit of ad delivery algorithms that reveal bias in the delivery of ads for employment and education opportunities. We then discuss steps that platforms have taken to mitigate bias in response to academic and legal scrutiny. We conclude with open questions surrounding these efforts and paths forward for future research.
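As a hypothetical illustration of the kind of quantity a black-box audit can measure, the sketch below compares the rates at which the same ad is delivered to two equally sized audiences and tests whether the difference is statistically significant. This is a toy example and not the speaker's methodology, which additionally controls for confounders such as differences in qualifications among the targeted audiences; all counts below are invented.

```python
from math import sqrt
from statistics import NormalDist

def delivery_skew_test(shown_a: int, audience_a: int, shown_b: int, audience_b: int):
    """Two-proportion z-test: was the ad delivered to group A and group B
    at different rates, given the sizes of the targeted audiences?"""
    p_a = shown_a / audience_a
    p_b = shown_b / audience_b
    pooled = (shown_a + shown_b) / (audience_a + audience_b)
    se = sqrt(pooled * (1 - pooled) * (1 / audience_a + 1 / audience_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return p_a - p_b, z, p_value

# Hypothetical delivery counts for one job ad shown to two equally sized audiences.
gap, z, p = delivery_skew_test(shown_a=620, audience_a=10_000,
                               shown_b=480, audience_b=10_000)
print(f"delivery gap = {gap:.3f}, z = {z:.2f}, p = {p:.4f}")
```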

Bio: Basileal Imana’s research interests broadly lie in studying privacy and algorithmic fairness properties of real-world systems on the Internet. He focuses on developing novel methods for auditing the fairness of algorithms used to deliver content on social media platforms without introducing new privacy risks to platforms or users.

Imana received his Ph.D. in computer science from the University of Southern California. Prior to USC, he received his B.Sc. in 2017, also in computer science, from Trinity College in Connecticut, where he worked on solving computationally difficult problems using high-performance computing.


In-person attendance is open to Princeton University faculty, staff and students. This seminar is open to the general public at this link via Zoom. It will be recorded and posted to the CITP website, the CITP YouTube channel and the Princeton University Media Central channel.

If you need an accommodation for a disability please contact Jean Butcher at butcher@princeton.edu at least one week before the event.

Contributions to and/or sponsorship of any event does not constitute departmental or institutional endorsement of the specific program, speakers or views presented.

CITP Seminar – Redesigning the Trust Model of the Web Public Key Infrastructure (PKI)

Date and Time
Tuesday, September 24, 2024 - 12:30pm to 1:30pm
Location
Sherrerd Hall 306
Type
CITP
Speaker
Cyrill Krähenbühl, from CITP

Cyrill Krähenbühl
The web PKI, which secures TLS-based web communication (HTTPS), is one of the most widely used network security systems: it enables billions of users to connect securely to the World Wide Web, preventing the theft of user credentials, protecting the privacy of personal information, impeding stealthy wiretapping, and enabling secure online shopping. The web PKI operates on an oligopoly trust model, in which a set of designated trusted certificate authorities (CAs) issues cryptographic certificates to webpage owners to secure their webpages. It has grown tremendously over the last decade, mainly due to automated tools for fetching and installing certificates (ACME) and to certain CAs offering free certificates to all users. However, the web PKI is not without fault: adversaries exploit weaknesses in the automated domain control validation process to issue fake certificates and extract millions of dollars in cryptocurrency, certificate authorities may misbehave by issuing certificates to unauthorized entities or revealing critical secret keys to adversaries, and faulty validation software may give unauthorized entities the ability to request certificates.

Rather than the security of domain control validation, this talk will focus on the fundamental problem of the web PKI’s weakest-link trust model and explore ways to mitigate it. One observation is that web PKI users often lack trust agility: they have little to no room for tailoring trust preferences to their individual needs without completely distrusting certain entities, and thus trade off availability for security. Furthermore, the classification of CAs into trusted and untrusted is a centralized process performed by a select few organizations, called root programs, which runs counter to the recent Web3-driven push toward more decentralization. We posit that trust is inherently subjective and that there is typically no single globally valid notion of trust in our heterogeneous society.

In our recent work on F-PKI, we propose a different trust model that empowers both webpage owners and clients to express their individual trust preferences and to validate certificates according to those preferences. This talk will give an overview of F-PKI’s technical aspects and then discuss the opportunities and challenges of our flexible trust model based on individual trust preferences.

Finally, we would like to have an open discussion on the suitability of the web PKI as a foundation for modern security-sensitive applications, such as decentralized protocols, and the possibility of leveraging alternative PKIs to accommodate the need of these protocols.
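To make the notion of individual trust preferences concrete, the sketch below shows a hypothetical client-side policy in which each client assigns its own trust levels to CAs and requires certificates for sensitive domains to be backed by enough trusted issuers, instead of accepting any single certificate chain rooted in a program-approved CA. This is an illustrative sketch of the general idea, not F-PKI's actual design; all CA names, trust levels, and domains are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Certificate:
    domain: str
    issuing_cas: list[str]  # CAs that vouch for this certificate (hypothetical multi-CA setting)

@dataclass
class TrustPolicy:
    """A client-side trust preference: per-CA trust levels plus a per-domain
    threshold, instead of the all-or-nothing oligopoly model."""
    ca_trust: dict[str, int]                           # e.g. 0 = distrusted, 1 = low, 2 = high
    domain_thresholds: dict[str, int] = field(default_factory=dict)
    default_threshold: int = 1

    def accepts(self, cert: Certificate) -> bool:
        required = self.domain_thresholds.get(cert.domain, self.default_threshold)
        score = sum(self.ca_trust.get(ca, 0) for ca in cert.issuing_cas)
        return score >= required

# A client that distrusts CA-X but does not want to give up availability entirely:
policy = TrustPolicy(
    ca_trust={"CA-X": 0, "CA-Y": 1, "CA-Z": 2},
    domain_thresholds={"bank.example": 3},  # a sensitive domain needs stronger backing
)

ordinary = Certificate("blog.example", issuing_cas=["CA-Y"])
sensitive = Certificate("bank.example", issuing_cas=["CA-Y", "CA-Z"])
weak = Certificate("bank.example", issuing_cas=["CA-X"])

print(policy.accepts(ordinary))   # True  - meets the default threshold
print(policy.accepts(sensitive))  # True  - CA-Y + CA-Z reach the bank's threshold of 3
print(policy.accepts(weak))       # False - a distrusted CA alone is not enough
```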

Bio: Cyrill Krähenbühl’s research focuses on public key infrastructures (PKI) and path-aware networking (PAN). He completed his Ph.D. under the guidance of Adrian Perrig in 2023 at ETH Zürich, where he also earned his master’s degree in computer science.

We all make use of PKI when we browse web pages or use secure communication systems. Although the majority of our communication is secured through PKI-based systems, the protocols and infrastructures have grown organically and still have numerous shortcomings. Krähenbühl’s research introduces flexibility into the trust foundation of PKIs, enabling end users and certificate owners to define their trust preferences. He has designed systems that enable trust agility, increasing security while retaining high availability.

Path-aware networking provides more transparency and control to network participants, in particular the network endpoints. Knowledge and (partial) control of the forwarding path can significantly improve a network’s efficiency through in-network multipath and allow endpoints to optimize for the needs of individual applications. In this area, Krähenbühl contributed to documenting deployment methodologies, defined and categorized path properties, and extended path selection in the SCION architecture to enable fine-grained intra-domain paths with specific policies.

During his Ph.D., Krähenbühl collaborated actively with industry partners and other research groups, helping to analyze real-world security-critical systems and to design concrete security recommendations. He has published at multiple top-tier venues, such as NDSS, USENIX Security, and CoNEXT, where he received best paper and best presentation awards. Beyond the research community, he has engaged actively with standardization bodies such as the IETF, in particular the path-aware networking research group, where he co-authored a guidance RFC on path properties in path-aware networks.


In-person attendance is open to Princeton University faculty, staff and students.

This talk will be open to the public as a webinar, at this link, via Zoom. It will be recorded and posted on the CITP website, the CITP YouTube channel, and the Princeton University Media Central channel.

If you need an accommodation for a disability please contact Jean Butcher at butcher@princeton.edu at least one week before the event.

Contributions to and/or sponsorship of any event does not constitute departmental or institutional endorsement of the specific program, speakers or views presented.

CITP Seminar – AGI is Coming… Is HCI Ready?

Date and Time
Tuesday, September 17, 2024 - 12:30pm to 1:30pm
Location
Sherrerd Hall 306
Type
CITP
Speaker
Meredith Ringel Morris, from Google DeepMind

Meredith Ringel Morris
We are at a transformational junction in computing, in the midst of an explosion in capabilities of foundational AI models that may soon match or exceed typical human abilities for a wide variety of cognitive tasks, a milestone often termed Artificial General Intelligence (AGI). Achieving AGI (or even closely approaching it) will transform computing, with ramifications permeating through all aspects of society. This is a critical moment not only for Machine Learning research, but also for the field of Human-Computer Interaction (HCI).

In this talk, we will define what is meant (and what is NOT meant) by “AGI.” We will then discuss how this new era of computing necessitates a new sociotechnical research agenda on methods and interfaces for studying and interacting with AGI. For instance, how can we extend status quo design and prototyping methods for envisioning novel experiences at the limits of our current imaginations? What novel interaction modalities might AGI (or superintelligence) enable? How do we create interfaces for computing systems that may intentionally or unintentionally deceive an end-user? How do we bridge the “gulf of evaluation” when a system may arrive at an answer through methods that fundamentally differ from human mental models, or that may be too complex for an individual user to grasp? How do we evaluate technologies that may have unanticipated systemic side-effects on society when released into the wild?

We will close by reflecting on the relationship between HCI and AI research. Typically, HCI and other sociotechnical domains are not considered as core to the ML research community as areas like model building. However, it will be argued that research on Human-AI Interaction and the societal impacts of AI is vital and central to this moment in computing history. HCI must not become a “second class citizen” to AI, but rather be recognized as fundamental to ensuring the path to AGI and beyond is a beneficial one.

Bio: Meredith Ringel Morris is director for Human-AI Interaction Research at Google DeepMind. Prior to joining DeepMind, she was director of the People + AI Research team in Google Research’s Responsible AI division. She also previously served as research area manager for Interaction, Accessibility, and Mixed Reality at Microsoft Research. In addition to her industry role, Morris has a faculty appointment at the University of Washington, where she is an affiliate professor in The Paul G. Allen School of Computer Science & Engineering and also in The Information School.

Morris has been recognized as a fellow of the ACM and as a member of the ACM SIGCHI Academy for her contributions to Human-Computer Interaction research. She earned her Sc.B. in computer science from Brown University and her M.S. and Ph.D. in computer science from Stanford University. More details on her research and publications are available at http://merrie.info.


In-person attendance is open to Princeton University faculty, staff and students. Information regarding virtual participation will be posted once available.

If you need an accommodation for a disability please contact Jean Butcher at butcher@princeton.edu at least one week before the event.

Contributions to and/or sponsorship of any event does not constitute departmental or institutional endorsement of the specific program, speakers or views presented.

CITP Seminar – Decomposing Wage Gaps with a Foundation Model of Labor History

Date and Time
Friday, April 26, 2024 - 12:30pm to 1:30pm
Location
Bendheim House, 26 Prospect Ave.
Type
CITP
Speaker
Keyon Vafa, from Harvard University

Co-sponsored by CITP and Princeton Language and Intelligence

In-person attendance is open to Princeton University faculty, staff, students and alumni. Lunch will be available at noon.

This talk is also available via Zoom.


Keyon Vafa
Social scientists frequently perform statistical decompositions of wage gaps, attributing group differences in wages to group differences in worker characteristics. Since the survey datasets used to estimate these decompositions are small, the included characteristics are typically low-dimensional, e.g. summary statistics about job history. These low-dimensional summaries risk inducing biased estimates. To mitigate this bias, we adapt machine learning methods to summarize worker histories with rich, low-dimensional representations that are learned from data. We take a “foundation model” approach, first training representations on a dataset of passively-collected resumes before fine-tuning them on the small survey datasets used for wage gap estimation. We discuss an omitted variable bias that can arise in this setting and propose a fine-tuning approach to minimize it. On data from the Panel Study of Income Dynamics, we show that full worker history explains a substantial portion of wage gaps that are unexplained by standard econometric techniques.

With collaborators Susan Athey and David Blei.
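For context on the statistical machinery involved, the sketch below runs a Kitagawa-Oaxaca-Blinder-style decomposition on synthetic data, using a generic low-dimensional feature vector as a stand-in for a learned worker-history representation. It is a minimal illustration only: the pretraining of a foundation model on resumes and the fine-tuning step used to limit omitted variable bias are not shown, and all numbers are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

def decompose_wage_gap(X_a, y_a, X_b, y_b):
    """Decompose the mean wage gap between groups A and B into a component
    'explained' by differences in the feature representation X and a residual
    'unexplained' component, using pooled-regression coefficients as the reference."""
    X_pool = np.vstack([X_a, X_b])
    y_pool = np.concatenate([y_a, y_b])
    X_design = np.column_stack([np.ones(len(X_pool)), X_pool])  # add intercept
    beta, *_ = np.linalg.lstsq(X_design, y_pool, rcond=None)

    gap = y_a.mean() - y_b.mean()
    mean_diff = X_a.mean(axis=0) - X_b.mean(axis=0)
    explained = mean_diff @ beta[1:]      # due to differing characteristics
    unexplained = gap - explained         # residual component
    return gap, explained, unexplained

# Synthetic stand-in for learned history embeddings (8 dims) and log wages.
d = 8
X_a = rng.normal(0.3, 1.0, size=(500, d))   # group A has shifted characteristics
X_b = rng.normal(0.0, 1.0, size=(500, d))
true_beta = rng.normal(0, 0.2, size=d)
y_a = X_a @ true_beta + 0.15 + rng.normal(0, 0.1, 500)   # 0.15 is a built-in unexplained gap
y_b = X_b @ true_beta + rng.normal(0, 0.1, 500)

gap, explained, unexplained = decompose_wage_gap(X_a, y_a, X_b, y_b)
print(f"gap={gap:.3f}  explained={explained:.3f}  unexplained={unexplained:.3f}")
```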

Bio: Keyon Vafa is a postdoctoral fellow at Harvard University as part of the Harvard Data Science Initiative. His research focuses on developing machine learning methods to help answer economic questions and also using insights from economics to improve machine learning models. He completed his Ph.D. in computer science at Columbia University in 2023, where he was advised by David Blei. During his Ph.D. he was an NSF GRFP Fellow and Cheung-Kong Innovation Doctoral Fellow. He is a member of the Early Career Board of the Harvard Data Science Review.


Contributions to and/or sponsorship of any event does not constitute departmental or institutional endorsement of the specific program, speakers or views presented.

If you need an accommodation for a disability please contact Jean Butcher at butcher@princeton.edu at least one week before the event.

Zoom: https://princeton.zoom.us/j/97144681849

CITP Seminar – The Relative Value of Prediction

Date and Time
Tuesday, April 16, 2024 - 12:30pm to 1:30pm
Location
Sherrerd Hall 306
Type
CITP
Speaker
Juan Carlos Perdomo, from Harvard University

In-person attendance is open to Princeton University faculty, staff, students and alumni. The Zoom will be open to the public.


Juan Carlos Perdomo
Algorithmic predictions are increasingly used to inform the allocation of goods and services in the public sphere. In these domains, predictions serve as a means to an end: they provide stakeholders with insights into the likelihood of future events in order to improve decision-making quality and enhance social welfare. However, if maximizing welfare is the question, to what extent is improving prediction the best answer?

In this talk, we discuss various attempts to contextualize the relative value of algorithmic predictions through both theory and practice. The goal of the first part will be to formally understand how the welfare benefits of improving prediction compare to those of expanding access when distributing social goods. In the latter half, an empirical case study will be presented illustrating how these issues play out in the context of a risk prediction system used throughout Wisconsin public schools.

Bio: Juan Carlos Perdomo is currently a postdoctoral fellow at Harvard University’s Center for Research on Computation and Society. His research centers on the theoretical and empirical foundations of machine learning. Perdomo is particularly interested in studying the downstream consequences, and feedback loops, that arise when predictions are used to make decisions about people.

Perdomo received his Ph.D. from the University of California, Berkeley and his bachelor’s degree in computer science and math from Harvard College.


This seminar will be recorded and posted to the CITP website, Media Central and YouTube.

If you need an accommodation for a disability please contact Jean Butcher at butcher@princeton.edu at least one week before the event.
