CITP Seminar: Insights into Predictability of Life Outcomes: A Data-Driven Approach

Date and Time
Tuesday, October 26, 2021 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Pranay Anchuri, from Princeton University

Predicting life outcomes is a challenging task even for advanced machine learning (ML) algorithms. At the same time, accurately predicting these outcomes has important implications for providing targeted assistance and improving policymaking. Recent studies based on the Fragile Families and Child Wellbeing Study dataset have shown that complex ML pipelines, even with thousands of variables available, produce low-quality predictions. This research raises several questions about the predictability of life outcomes: 1) What factors influence the predictability of an outcome (e.g., quality of data, pre-processing steps, model hyperparameters)? 2) How does the predictability of outcomes vary by domain (e.g., are health outcomes easier to predict than education outcomes)? To answer these questions, we are building a cloud-based system to train and test hundreds of ML pipelines on thousands of life outcomes. We use the results of this large-scale exploration in a data-driven way to understand the predictability of life outcomes.

In the first part of the talk, we discuss the study design and describe the system we built to run such a large-scale exploration. The system is general-purpose and provides easy-to-use interfaces for running a wide range of studies. In the second part, we present a meta-learning-inspired method to derive key insights related to the problem of predictability by (a) comparing the relative predictive power of different classes of models and (b) identifying the descriptive statistics that best predict the predictability of ML pipelines. Predictability of life outcomes is a multi-faceted problem. We conclude the talk by briefly discussing some of our other studies that are currently in the pipeline.
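
As a rough, illustrative sketch of the meta-learning idea described above (not the actual system or its configuration), the Python snippet below records simple descriptive statistics and the held-out performance of a few model classes for each outcome; a meta-model could then be trained to predict performance from those statistics. All function names, statistics, and model choices here are hypothetical.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def describe_outcome(X, y):
    # Descriptive statistics that might predict how learnable an outcome is.
    return {
        "n_samples": len(y),
        "n_features": X.shape[1],
        "class_balance": float(np.mean(y)),           # fraction of positive labels
        "missing_rate": float(np.mean(np.isnan(X))),  # overall missingness
    }

def evaluate_model_classes(X, y):
    # Held-out AUC for a few model classes on one binary outcome.
    X_filled = np.nan_to_num(X)  # naive imputation, for the sketch only
    models = {
        "logistic_regression": LogisticRegression(max_iter=1000),
        "random_forest": RandomForestClassifier(n_estimators=100),
    }
    return {name: cross_val_score(m, X_filled, y, cv=5, scoring="roc_auc").mean()
            for name, m in models.items()}

# Looping these two functions over thousands of (X, y) outcome pairs yields a
# table with one row per outcome and model class; fitting a regressor that
# predicts the AUC column from the descriptive-statistic columns acts as the
# "meta-model" revealing which statistics best predict predictability.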

Bio: Pranay Anchuri is a data scientist at CITP. His research interests include graph mining, large-scale data analytics, and blockchain technologies. Pranay graduated with a Ph.D. in computer science from Rensselaer Polytechnic Institute in 2015. During his graduate studies, he worked at various labs, including IBM, Yahoo, and QCRI. His thesis focused on developing algorithms for efficiently extracting frequent patterns from noisy networks.

After graduation, Pranay started as a research scientist at NEC Labs in Princeton, working on log modeling and analytics. Most recently, he worked as a research scientist at Axoni in New York, where his research focused on problems related to the implementation of high-performance permissioned blockchains.


To request accommodations for a disability, please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This seminar will be recorded.

This seminar is co-sponsored by CITP and the Center for Statistics and Machine Learning.

CITP Seminar: Judging Truth In a Fake News Era

Date and Time
Tuesday, October 12, 2021 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Nadia Brashier, from Purdue University

Every day, we encounter false claims that range from silly (e.g., we use 10% of our brains) to dangerous (e.g., drinking bleach kills coronavirus). How do we know what to believe? The first half of this talk puts forth a three-part model of how people judge truth. First, most content encountered in daily life is mundane and true. Reflecting this base rate, we develop a modest bias to accept claims. Second, our own feelings convey useful information, so we often “go with our guts.” Assertions that feel easy to process, or fluent, seem true. Negative affect may disengage people from these biases and heuristics. Third, we can draw on our own memories, but often must be prompted to do so. Retrieving factual knowledge and cues about a source’s credibility helps us reject misinformation, but also takes up time and cognitive resources. The second half of the talk applies this logic to the question of why older adults share seven times more fake news stories than their younger counterparts. Cognitive declines cannot fully explain their vulnerability – interventions in a ‘post-truth world’ must also consider older adults’ shifting social goals and gaps in their digital literacy. Together, this work suggests ways to cope in the current climate of misinformation, where falsehoods travel further and faster than the truth.

Bio: Nadia Brashier is an assistant professor of psychological sciences at Purdue University.  She studies memory and judgment across the lifespan, with a specific focus on cognitive “shortcuts” people use to evaluate truth. Her research identifies why young and older adults fall for fake news and misinformation.  She previously completed an NIH Postdoctoral Fellowship at Harvard University and earned her Ph.D. at Duke University.


To request accommodations for a disability, please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This seminar will not be recorded.

CITP Seminar: Building Language Technologies for Analyzing Online Activism

Date and Time
Tuesday, October 5, 2021 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Anjalie Field, from Carnegie Mellon University

While recent advances in natural language processing (NLP) have greatly enhanced our ability to analyze online text, distilling broad social-oriented research questions into tasks concrete enough for NLP models remains challenging. In this work, we develop state-of-the-art NLP models grounded in frameworks from social theory in order to analyze two social media movements: online media coverage of the #MeToo movement in 2017-2018 and tweets about #BlackLivesMatter protests in 2020.

In the first part, we show that despite common perception of the #MeToo movement as empowering, media coverage of events often portrayed women as sympathetic but unpowerful. In the second, we show that positive emotions like hope and optimism are prevalent in tweets with pro-BlackLivesMatter hashtags and significantly correlated with the presence of on-the-ground protests, whereas anger and disgust are not. These results contrast with stereotypical portrayals of protesters as perpetuating anger and outrage. Overall, our work provides insight into social movements and debunks harmful stereotypes. We aim to bridge the gap between NLP, where models are often not designed to address social-oriented questions, and computational social science, where state-of-the-art NLP has often been underutilized.

Bio: Anjalie Field is a Ph.D. candidate at the Language Technologies Institute at Carnegie Mellon University and a visiting student at the University of Washington, where she is advised by Yulia Tsvetkov. Her work focuses on the intersection of NLP and computational social science, including both developing NLP models that are socially aware and using NLP models to examine social issues like propaganda, stereotypes, and prejudice. She has presented her work at NLP and interdisciplinary conferences, receiving a nomination for best paper at SocInfo 2020, and she is the recipient of an NSF Graduate Research Fellowship and a Google Ph.D. Fellowship. Prior to graduate school, she received her undergraduate degree in computer science, with minors in Latin and ancient Greek, from Princeton University.


To request accommodations for a disability, please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

This seminar will be recorded.

This seminar is co-sponsored by CITP and the Center for Statistics and Machine Learning.

CITP Seminar: A Decentralized and Encrypted National Gun Registry

Date and Time
Tuesday, September 28, 2021 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Lucy Qin, from Brown University

Gun violence results in a significant number of deaths in the United States. Starting in the 1960s, the US Congress passed a series of gun control laws to regulate the sale and use of firearms. One of the most important but politically fraught gun control measures is a national gun registry. A US Senate office is currently drafting legislation that proposes the creation of a voluntary national gun registration system. At a high level, the bill envisions a decentralized system where local county officials would control and manage the registration data of their constituents. These local databases could then be queried by other officials and law enforcement to trace guns. Due to the sensitive nature of this data, however, these databases should guarantee its confidentiality. In this work, we translate the high-level vision of the proposed legislation into technical requirements and design a cryptographic protocol that meets them.

Roughly speaking, the protocol can be viewed as a decentralized system of locally-managed end-to-end encrypted databases. Our design relies on various cryptographic building blocks including structured encryption, secure multi-party computation and secret sharing. We propose a formal security definition and prove that our design meets it. We implemented our protocol and evaluated its performance empirically at the scale it would have to run if it were deployed in the United States. Our results show that a decentralized and end-to-end encrypted national gun registry is not only possible in theory but feasible in practice.
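
To make one of the named building blocks concrete, the short Python sketch below shows additive secret sharing, a generic textbook construction: a sensitive value is split into shares that individually reveal nothing, and only the combination of all shares recovers it. This is an illustration of the general technique, not the registry protocol presented in the talk; the modulus and the number of parties are arbitrary choices for the sketch.

import secrets

PRIME = 2**61 - 1  # a large prime modulus, chosen arbitrarily for the sketch

def share(value, n_parties):
    # Split `value` into n shares; any subset of n-1 shares reveals nothing.
    partial = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    last = (value - sum(partial)) % PRIME
    return partial + [last]

def reconstruct(shares):
    # Recover the secret by summing all shares modulo the prime.
    return sum(shares) % PRIME

# Example: a sensitive registration count held jointly by three local officials.
record_count = 12345
shares = share(record_count, 3)
assert reconstruct(shares) == record_count
# Because the shares are additive, parties can also sum their shares of two
# different values locally to obtain shares of the values' sum, the kind of
# property that secure multi-party computation protocols build on.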

Bio: Lucy Qin is a Ph.D. candidate in computer science at Brown University. She works on applied cryptography in the Encrypted Systems Lab, advised by Prof. Seny Kamara. Prior to graduate school, Lucy was a researcher and software engineer at the Hariri Institute for Computing at Boston University, where she implemented cryptographic capabilities to support initiatives in partnership with local government. Prior to that, she was a member of the Humanitarian Assistance and Disaster Relief Systems group and the Secure Systems and Resilient Technologies group at MIT Lincoln Laboratory, where she built tools to assist first responders. Lucy graduated from Tufts University with a B.S. in computer science and economics and is an NSF Graduate Research Fellow.

To request accommodations for a disability, please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.
This webinar will be recorded.

CITP Seminar: "In the Pre-Evening Light it Sees So Little”: A Human-Centered, Qualitative Investigation into Computer Vision as Account Verification in Gig Work

Date and Time
Tuesday, September 21, 2021 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Elizabeth Anne Watkins, from Princeton University

Through intensive research on datasets, benchmarks, and models, the computer-vision community has taken great strides to identify the societal biases intrinsic to these technologies. Less is known about the last mile of the computer-vision machine-learning pipeline: on-the-ground integration into the real world. In this talk, I will discuss my empirical research on this space, focusing on facial verification used as account verification in ride-hail work. Using a sociotechnical framework combined with ethnographic research, including interviews and analysis of an online community of workers, this talk will present a deep dive into facial verification at the level of local practice. This research reveals the intensive labor and high-stakes negotiation demanded of workers to be "recognized" by these systems. While the narrative of this system's purpose relies on claims to increased safety and security, a sociotechnical lens shows that for workers, this system creates new threats to life and livelihood. Findings demonstrate why understanding this last mile is imperative for engineers and practitioners: this system not only determines whether workers can be "verified" to access their platform of work, but does so in physical contexts bounded by dangerous conditions of high-speed traffic and dark parking lots. This work presents a call to action for the community to integrate sociotechnical analysis into the engineering process, and to think deeply about whether and how technical solutions ought to be designed for social problems.

Bio: Elizabeth Anne Watkins is a postdoctoral research associate at CITP, also affiliated with the Princeton Human-Computer Interaction group (HCI). She studies work and technology, and completed her doctoral degree at Columbia University where she was advised by David Stark. Trained as an organizational sociologist in the field of communications, she uses interviews, analysis of online communities, and surveys to understand how people interpret and strategize around the algorithmic tools they use in their work.

With a special interest in the sociotechnical nexus of AI, usable security, and privacy, her dissertation examined facial recognition in spaces of algorithmic management. She has published or presented at the conferences on Computer-Human Interaction (CHI), Computer-Supported Cooperative Work (CSCW), and Fairness, Accountability, and Transparency (FAccT), at the USENIX Security conference and its co-located FOCI workshop, and at the annual meetings of the Academy of Management (AOM) and the Society for the Social Studies of Science (4S). She holds a master’s degree from the Massachusetts Institute of Technology (MIT), and is also a researcher at the Tow Center for Digital Journalism, a member of the Columbia Center on Organizational Innovation, and an affiliate at the Data & Society Research Institute.

To request accommodations for a disability, please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.
This seminar will not be recorded. 

CITP Seminar: Fairness in Visual Recognition: Redesigning the Datasets, Improving the Models and Diversifying the AI Leadership

Date and Time
Tuesday, September 14, 2021 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Olga Russakovsky, from Princeton University

Computer vision models trained on unparalleled amounts of data have revolutionized many applications. However, more and more historical societal biases are making their way into these seemingly innocuous systems. Attention is focused on two types of bias: (1) bias in the form of inappropriate correlations between protected attributes (age, gender expression, skin color, …) and the predictions of visual recognition models, and (2) bias in the form of unintended discrepancies in error rates of vision systems across different social, demographic or cultural groups. In this talk, we’ll dive deeper into both the technical causes of bias in computer vision and viable strategies for mitigating it. A subset of our recent work will be highlighted, addressing bias in visual datasets (FAT* 2020; ECCV 2020), in visual models (CVPR 2020; CVPR 2021; ICCV 2021), in evaluation metrics (ICML 2021), as well as in the makeup of AI leadership.
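
As a small illustration of the second type of bias described above, the Python sketch below computes per-group error rates and the largest gap between any two groups. The labels, predictions, and group names are synthetic placeholders, not data or code from the work being presented.

import numpy as np

def per_group_error_rates(y_true, y_pred, groups):
    # Return the error rate for each group and the largest gap between groups.
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        rates[str(g)] = float(np.mean(y_true[mask] != y_pred[mask]))
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Toy example with two demographic groups, A and B.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
rates, gap = per_group_error_rates(y_true, y_pred, groups)
print(rates, gap)  # {'A': 0.25, 'B': 0.5} and a gap of 0.25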

Bio:
Olga Russakovsky is an assistant professor in the computer science department at Princeton University. Her research is in computer vision, closely integrated with the fields of machine learning, human-computer interaction and fairness, accountability and transparency. She has been awarded the AnitaB.org’s Emerging Leader Abie Award in honor of Denice Denton in 2020, the CRA-WP Anita Borg Early Career Award in 2020, the MIT Technology Review’s 35-under-35 Innovator award in 2017, the PAMI Everingham Prize in 2016 and Foreign Policy Magazine’s 100 Leading Global Thinkers award in 2015. In addition to her research, she co-founded and continues to serve on the Board of Directors of the AI4ALL foundation dedicated to increasing diversity and inclusion in Artificial Intelligence. She completed her Ph.D. at Stanford University in 2015 and her postdoctoral fellowship at Carnegie Mellon University in 2017.

To request accommodations for a disability, please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.
This webinar will be recorded.

CITP Seminar: Designing and Deploying Social Computing Systems

Date and Time
Tuesday, September 7, 2021 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Andrés Monroy-Hernández, from Princeton University

Social computing has permeated most aspects of our lives, from work to play. In recent years, however, social platforms have faced challenges ranging from labor concerns about crowd work to misinformation on social media. These and other challenges, along with the emergence of new technical platforms, create opportunities to reimagine the future of social technologies. 

This talk will start with an overview of research on designing and deploying social computing systems that help millions of people connect and collaborate to learn new skills, crowdsource news reporting, and delegate tasks to hybrid AI systems. The rest of the talk will focus on how our team here at Princeton will develop the next generation of public-interest social computing systems, for example, how we are identifying and developing the socio-technical layers needed to support community-owned gig work platforms.

Bio:

Andrés Monroy-Hernández is an assistant professor of computer science and works on human-computer interaction and social computing. Along with his team, he designs and studies technologies that help millions of people connect and collaborate in new ways. He led the creation of the Scratch online community at MIT, the crowd-powered Cortana scheduling assistant at Microsoft, and several social AR and wearable experiences at Snap Inc. At Princeton, he directs the Human-computer Interaction Lab, focusing on public-interest technology development.

His research has received best paper awards at CHI, CSCW, HCOMP, and ICWSM, and has been featured in The New York Times, CNN, Wired, BBC, and The Economist. He was the technical program co-chair for the ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW ’18) and the ACM Conference on Collective Intelligence (CI ’19). He was named one of the most influential Latinos in Tech by CNET and one of the MIT Technology Review’s 35-under-35 Innovators for his research on citizens’ use of social media to circumvent drug cartel violence and censorship.

Andrés was on the leadership team of the Future Social Experiences Lab at Microsoft Research and founded the HCI research team at Snap Inc. He holds a master’s degree and a Ph.D. from the MIT Media Lab and a bachelor’s degree in electronics engineering from Tec de Monterrey in México.

This webinar will not be recorded. 

CITP Seminar: Building Cyberweapons Policy

Date and Time
Tuesday, April 6, 2021 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Ari Schwartz, from Venable

Governments are finding themselves at a crossroads in their policies on how to find, buy, store, use and share vulnerabilities. When is it the right decision for governments to exploit a flaw in a commercial technology for law enforcement or national security purposes, and when is it right to share the flaw with the vendor who makes that technology so it can be fixed, helping companies and consumers protect themselves? How much of this information should be made public? Ari Schwartz, a former special assistant to President Obama on the National Security Council, will explore how governments are addressing these issues and the challenges they face in getting it right.

Bio: Ari Schwartz directs Cybersecurity Services for Venable’s Cybersecurity Risk Management Group. In this role, Ari guides the establishment of cybersecurity consulting services for Venable, assisting organizations with understanding and developing risk management strategies, including implementation of the Cybersecurity Framework and other planning tools to help minimize risk. Ari also coordinates the Cybersecurity Coalition, a group of leading cybersecurity companies dedicated to educating policymakers on cybersecurity issues and promoting a vibrant marketplace for cybersecurity technology solutions.

Prior to joining Venable, Ari was a member of the White House National Security Council, where he served as special assistant to the president and senior director for cybersecurity. As director, Ari coordinated all network defense cybersecurity policy, including critical infrastructure protection, federal network protection, supply-chain efforts, cybersecurity standards promotion, and information sharing. He led the White House’s legislative and policy outreach to businesses, trade groups, academics, and civil liberties groups on cybersecurity and developed new policies and legislation, including development of the Executive Orders on the Security of Consumer Financial Protection, Cybersecurity Information Sharing, and Sanctions Against Individuals Engaging in Malicious Cyber-Enabled Activities. Ari also led the successful White House rollout of the Cybersecurity Framework and the White House Cybersecurity Summit held at Stanford University.

Ari served in the Department of Commerce, where he advised the secretary on technology policy matters related to the National Institute of Standards and Technology (NIST), the National Telecommunications and Information Administration (NTIA), and the U.S. Patent and Trademark Office (USPTO). He led the department’s Internet Policy Task Force and represented the Obama administration on major Internet policy issues on privacy and security before Congress, at public events, and before the media.

Ari began his career in Washington at OMB Watch. For twelve years, he worked at the Center for Democracy and Technology, including serving as vice president and chief operating officer, and developing legislation and policy related to privacy, cybersecurity, and open government.


To request accommodations for a disability, please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

CITP Seminar: Lie Machines

Date and Time
Tuesday, April 20, 2021 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Philip N. Howard, from Oxford University

Artificially intelligent fake accounts attack politicians and public figures on social media. Conspiracy theorists publish junk news sites to promote their outlandish beliefs. Campaigners create fake dating profiles to attract young voters. We live in a world of technologies that misdirect our attention, poison our political conversations, and jeopardize our democracies. Big data from the social media firms, combined with interviews with internet trolls, bot writers and political operatives, demonstrates how misinformation gets produced, distributed and marketed. Ultimately, understanding how all the components work together is vital to dismantling such “lie machines” and strengthening democracy.

Bio: Philip N. Howard is a professor and writer. He is the director of the Oxford Internet Institute at Oxford University and is the author, most recently, of “Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations, and Political Operatives,” from Yale University Press. He blogs at philhoward.org and tweets from @pnhoward.

To request accommodations for a disability, please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.

CITP Seminar: Dreams of (Black) Tech Futures Past

Date and Time
Tuesday, March 9, 2021 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
CITP
Speaker
Charlton McIlwain, from New York University

This is what could have been. If the computer geeks at MIT in 1960 had held on just a little while longer with the Mississippi Freedom Riders. If uprisings in Watts, and Detroit, and Newark and Kansas City had not made Black people the computing revolution’s first problem to solve. If Black people had averted the collision between civil rights and computing technology that Willard Wirtz once predicted. If Black people had bothered to seriously engage Roy Wilkins’ admonition to “computerize the race problem.” This talk will walk attendees through the alternative Black technological futures that some had already begun to imagine and design more than fifty years ago. Who foreclosed on those futures, and why, and how did those foreclosures shape our technological present? Can Black people still salvage our former technological dreams to imagine – and realize – a different kind of Black future?

Bio: Charlton McIlwain is the vice provost for faculty engagement and development at New York University (NYU). Charlton leads NYU’s Center for Faculty Advancement, which provides programming, resources, and special recognitions and awards that promote faculty research, teaching, mentorship, community engagement, and academic leadership development for NYU faculty, as well as for the faculty with whom NYU collaborates through its Faculty Resource Network.

Charlton oversees the NYU Alliance for Public Interest Technology, which brings together NYU’s faculty experts to collaborate with each other and with partners in the public and private sectors on the ethical creation, use, and governance of technology in society, and he is NYU’s designee to the New America/Ford Foundation-sponsored Public Interest Technology University Network. In addition to these specific duties, he works closely with the Vice Provost’s team and the offices of Research, Work Life, Teaching & Learning with Technology, Academic Appointments, Program & Project Management Services, Human Resources, Equal Opportunity, Global Inclusion, Diversity, and Strategic Innovation, NYU Libraries, and others to ensure that NYU faculty have access to all available resources to advance their professional goals.

Charlton has been at NYU since 2001. As a professor of media, culture, and communication at NYU’s Steinhardt School of Culture, Education, and Human Development, his scholarly work focuses on the intersections of race, digital media, and racial justice activism. He is the founder of the Center for Critical Race and Digital Studies and the author of the new book, Black Software: The Internet & Racial Justice, From the AfroNet to Black Lives Matter, published by Oxford University Press. He also co-authored the award-winning book, Race Appeal: How Political Candidates Invoke Race in U.S. Political Campaigns. He received his doctoral degree in communication and a master’s degree in human relations, both from the University of Oklahoma, and a bachelor’s degree in family psychology from Oklahoma Baptist University.


To request accommodations for a disability, please contact Jean Butcher, butcher@princeton.edu, at least one week prior to the event.
