Sunnie S. Y. Kim

I am a computer science PhD candidate at Princeton University advised by Olga Russakovsky. I also work closely with Andrés Monroy-Hernández and Ruth Fong, and am affiliated with the Princeton Visual AI Lab and the Princeton HCI Lab. My research is supported by the NSF Graduate Research Fellowship.

I work on AI transparency and explainability to help people better understand and interact with AI systems. I like to do interdisciplinary research, drawing from computer vision and machine learning; human-computer interaction; and fairness, accountability, transparency, and ethics (FATE) in AI.

Previously, I received a BSc in Statistics and Data Science from Yale University, where I worked with John Lafferty in the Yale Statistical Machine Learning Group. After graduation, I spent a year at the Toyota Technological Institute at Chicago (TTIC) doing computer vision and machine learning research with Greg Shakhnarovich in the Perception and Learning Systems Lab.

My first name is pronounced "sunny"🔆 and I use she/her/hers pronouns. I like to run and play tennis in my free time!

Email  /  Github  /  Google Scholar  /  Twitter  /  Mastodon

News

02/2024: Gave an invited talk at the KAIST Graduate School of AI on "Supporting End-Users' Interaction with AI through Transparency and Explainability."
12/2023: Was accepted to the CHI 2024 Doctoral Consortium. My research description, titled Establishing Appropriate Trust in AI through Transparency and Explainability, will be published in the CHI 2024 Extended Abstracts.
12/2023: Will be organizing two workshops on XAI next year: HCXAI at CHI 2024 and XAI4CV at CVPR 2024.
09/2023: Year 4 begins!
08/2023: Had a lovely summer interning at Microsoft Research in the FATE (Fairness, Accountability, Transparency, and Ethics in AI) group with Jenn Wortman Vaughan (manager), Q. Vera Liao, Mickey Vorvoreanu, and Steph Ballard.

06/2023: Attended CVPR 2023 in Vancouver, co-organized two workshops (XAI4CV and WiCV), and presented Overlooked Factors in Concept-based Explanations at the main conference.
06/2023: Attended FAccT 2023 in Chicago and presented Humans, AI, and Context at the main conference.
04/2023: Attended CHI 2023 in Hamburg and presented at the main conference and 2 workshops: TRAIT and HCXAI.
04/2023: Humans, AI, and Context: Understanding End-Users’ Trust in a Real-World Computer Vision Application was accepted to FAccT 2023.
04/2023: "Help Me Help the AI" received a Best Paper Honorable Mention🏅 at CHI 2023.
02/2023: Overlooked Factors in Concept-based Explanations: Dataset Choice, Concept Learnability, and Human Capability was accepted to CVPR 2023.
01/2023: "Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction was accepted to CHI 2023.
12/2022: Gave a panel presentation at the NeurIPS 2022 HCAI workshop on Closing the Creator-Consumer Gap in XAI: A Call for Participatory XAI Design with End-users.
10/2022: Attended ECCV 2022 virtually and presented HIVE at the main conference.
08/2022: Year 3 here we go :)

07/2022: HIVE was accepted to ECCV 2022.
06/2022: Attended CVPR 2022 in New Orleans and presented at 2 workshops: XAI4CV (talk & poster), WiCV (poster).
05/2022: Attended CHI 2022 HCXAI workshop virtually and presented HIVE: Evaluating the Human Interpretability of Visual Explanations.
04/2022: Was awarded the NSF Graduate Research Fellowship.
01/2022: Passed my program's general exam (quals). Huge thanks to my committee members Olga Russakovsky, Ruth Fong, and Andrés Monroy-Hernández for helpful feedback on my research.
08/2021: Moved to Princeton after a fully virtual first year and became G2.

07/2021: Served as a research instructor for Princeton AI4ALL and mentored high school students on research projects using computer vision for biodiversity monitoring.
06/2021: Attended CVPR 2021 virtually and presented at the main conference (2 posters) and 3 workshops: WiCV (talk & poster), RCV (talk), FGVC (poster).
03/2021: Our submission to the ML Reproducibility Challenge 2020, [Re] Don't Judge an Object by Its Context: Learning to Overcome Contextual Bias, was accepted for publication in the ReScience C journal.
03/2021: Led a discussion on the costs and risks of large language models in the Princeton Bias in AI Reading Group (slides).
02/2021: Two papers were accepted to CVPR 2021: Fair Attribute Classification through Latent Space De-biasing & Information-Theoretic Segmentation by Inpainting Error Maximization.
08/2020: Started my PhD program at Princeton University.

08/2020: Attended ECCV 2020 virtually and presented Deformable Style Transfer at the main conference and the WiCV workshop.
07/2020: Wrapped up my time at TTIC as a visiting student. The year went by very quickly. I’ll especially miss the Perception and Learning Systems Lab, the 2019-2020 cohort friends, and the Girls Who Code team.

Research while at Princeton (2020 - Present)

Currently, I'm most excited about developing and evaluating transparency approaches for generative AI, and understanding what modes of human-AI interaction lead to complementarity.

I also believe open science improves transparency, accountability, and progress in research. I try to document and open source my code as much as I can, and support initiatives like the ML Reproducibility Challenge that encourage the ML community to do more reproducible research (I participated in the 2020 edition and reviewed for the 2020-2022 editions).

* denotes equal contribution.

Establishing Appropriate Trust in AI through Transparency and Explainability
Sunnie S. Y. Kim
CHI 2024 Extended Abstracts
paper (coming)

This is a description of my dissertation research for the CHI 2024 Doctoral Consortium. In summary, my dissertation aims to elucidate the mechanisms and factors of trust in AI, and to develop AI transparency and explainability approaches that help people form an appropriate understanding of, and trust in, AI.

CVPR 2023 Explainable AI for Computer Vision (XAI4CV) Workshop
Sunnie S. Y. Kim, Vikram V. Ramaswamy, Ruth Fong, Filip Radenovic, Abhimanyu Dubey, Deepti Ghadiyaram
workshop website

We organized a workshop to provide a forum for researchers and practitioners to discuss the unique challenges and opportunities in XAI for CV and push the frontiers of the field.

CVPR 2023 Women in Computer Vision (WiCV) Workshop
Doris Antensteiner, Marah Halawa, Asra Aslam, Ivaxi Sheth, Sachini Herath, Ziqi Huang, Sunnie S. Y. Kim, Aparna Akula, Xin Wang
workshop website / workshop report

We organized a workshop for women researchers in the field to present their work, discuss challenges and community issues, and participate in networking and mentorship.

Overlooked Factors in Concept-based Explanations: Dataset Choice, Concept Learnability, and Human Capability
Vikram V. Ramaswamy, Sunnie S. Y. Kim, Ruth Fong, Olga Russakovsky
CVPR 2023
paper / 6min talk / code / bibtex

We analyze three commonly overlooked factors in concept-based model explanations: (1) the choice of the probe dataset, (2) the learnability of concepts in the probe dataset, and (3) the number of concepts used in explanations. We then make suggestions for the future development and analysis of concept-based methods.

Humans, AI, and Context: Understanding End-Users’ Trust in a Real-World Computer Vision Application
Sunnie S. Y. Kim, Elizabeth Anne Watkins, Olga Russakovsky, Ruth Fong, Andrés Monroy-Hernández
FAccT 2023
project page / paper / blog post / 10min talk / bibtex

We study how end-users trust AI in a real-world context. Concretely, we describe multiple aspects of trust in AI and how human-, AI-, and context-related factors influence each of them.

Featured in the Montreal AI Ethics Institute's newsletter and website. A shorter version of this work also appeared at the CHI 2023 Trust and Reliance in AI-assisted Tasks (TRAIT) Workshop.

"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction
Sunnie S. Y. Kim, Elizabeth Anne Watkins, Olga Russakovsky, Ruth Fong, Andrés Monroy-Hernández
CHI 2023 Best Paper Honorable Mention🏅
project page / paper / supplement / 30sec preview / 10min talk / bibtex

We explore how XAI can support human-AI interaction by interviewing 20 end-users of a real-world AI application. Specifically, we study (1) what explainability needs end-users have, (2) how they intend to use explanations of AI outputs, and (3) how they perceive existing explanation approaches.

Featured in the Human-Centered AI Medium blog as a CHI 2023 Editors' Choice. Position papers based on this work also appeared at the CHI 2023 Human-Centered Explainable AI (HCXAI) Workshop and, as a panel presentation, at the NeurIPS 2022 Human-Centered AI (HCAI) Workshop.

HIVE: Evaluating the Human Interpretability of Visual Explanations
Sunnie S. Y. Kim, Nicole Meister, Vikram V. Ramaswamy, Ruth Fong, Olga Russakovsky
ECCV 2022
project page / paper / supplement / extended abstract / code / 2min talk / 4min talk / 8min talk / bibtex

We introduce HIVE, an evaluation framework for model explanation methods that enables falsifiable hypothesis testing, cross-method comparison, and human-centered evaluation. Using HIVE, we evaluate four existing methods and find that explanations engender trust, even when the model is incorrect, and don't help people distinguish between correct and incorrect model predictions.

Shorter versions of this work appeared at the CHI 2022 Human-Centered Explainable AI (HCXAI) Workshop, the CVPR 2022 Explainable AI for Computer Vision (XAI4CV) Workshop (as a spotlight talk), and the CVPR 2022 Women in Computer Vision (WiCV) Workshop.

UFO: A Unified Method for Controlling Understandability and Faithfulness Objectives in Concept-based Explanations for CNNs
Vikram V. Ramaswamy, Sunnie S. Y. Kim, Ruth Fong, Olga Russakovsky
Preprint
paper / code / bibtex

We present UFO, a method for explaining CNNs that lets users control the understandability and faithfulness of concept-based explanations through well-defined objective functions for the two qualities.
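For intuition only, one way to picture this control is as a user-weighted combination of the two objectives. This is an illustrative sketch with my own shorthand (λ, L_und, L_faith), not necessarily the paper's exact formulation:

    % Illustrative sketch only; symbols are assumed shorthand, not the paper's notation.
    \min_{E} \; \lambda \, \mathcal{L}_{\mathrm{und}}(E) \;+\; (1 - \lambda) \, \mathcal{L}_{\mathrm{faith}}(E, f)

Here E is a candidate concept-based explanation, f is the CNN being explained, and λ ∈ [0, 1] encodes how much the user prioritizes understandability over faithfulness.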

ELUDE: Generating Interpretable Explanations via a Decomposition into Labelled and Unlabelled Features
Vikram V. Ramaswamy, Sunnie S. Y. Kim, Nicole Meister, Ruth Fong, Olga Russakovsky
Preprint
paper / bibtex

We present ELUDE, a concept-based model explanation framework that decomposes a model's output into two parts: one explainable as a linear combination of semantic concepts, and another dependent on the remaining uninterpretable features.
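Schematically (the notation below is mine, not the paper's), the decomposition can be written as:

    % Schematic with assumed notation, not the paper's.
    f(x) \;\approx\; \sum_{k} w_k \, c_k(x) \;+\; g(u(x))

where the c_k(x) are scores for labelled semantic concepts, the w_k are linear weights, and g(u(x)) captures the part of the output attributable to unlabelled, uninterpretable features.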

A shorter version of this work appeared at the CVPR 2022 Explainable AI for Computer Vision (XAI4CV) Workshop.

Cleaning and Structuring the Label Space of the iMet Collection 2020
Vivien Nguyen*, Sunnie S. Y. Kim*
paper / extended abstract / code / bibtex

We clean and structure the noisy label space of the iMet Collection dataset for fine-grained art attribute recognition. This work was done as a course project for Princeton COS 529 Advanced Computer Vision.

A shorter version of this work appeared at the CVPR 2021 Fine-Grained Visual Categorization (FGVC) Workshop.

[Re] Don't Judge an Object by Its Context: Learning to Overcome Contextual Bias
Sunnie S. Y. Kim, Sharon Zhang, Nicole Meister, Olga Russakovsky
ReScience C 2021
paper (journal) / paper (arXiv) / code / openreview / bibtex

We implement from scratch the algorithms of Singh et al. (CVPR 2020) for mitigating contextual bias in object/attribute recognition and test the reproducibility of their results. This work is one of the 23 (out of 82) submissions accepted for publication from the ML Reproducibility Challenge 2020.

Fair Attribute Classification through Latent Space De-biasing
Vikram V. Ramaswamy, Sunnie S. Y. Kim, Olga Russakovsky
CVPR 2021
project page / paper / code / demo / 2min talk / 5min talk / 10min talk / bibtex

We propose a method for controlled data generation with a single trained GAN, and demonstrate how the generated data can be used to train fairer visual classifiers.
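As a rough sketch of the high-level recipe (hypothetical names throughout; in particular, flip_protected stands in for a latent-space edit that I'm only assuming here, not quoting from the paper):

    # Hypothetical sketch, not the paper's actual code or API.
    # Idea: use one trained GAN to synthesize images whose protected attribute
    # is flipped while other content is preserved, then train the classifier
    # on the resulting, more balanced data.
    import torch

    def augment_with_gan(generator, flip_protected, n, device="cpu"):
        z = torch.randn(n, generator.latent_dim, device=device)  # sample latents
        x = generator(z)                          # synthetic images
        x_flipped = generator(flip_protected(z))  # protected attribute flipped
        return torch.cat([x, x_flipped])          # paired, more balanced batch

    # A classifier for the target attribute is then trained on real data plus
    # these synthetic pairs (labels carried over from z to its flipped version).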

Featured in Coursera's GANs Specialization course and in Foundations of Computer Vision (MIT Press) by Antonio Torralba, Phillip Isola, and William Freeman. I also gave invited talks on this work at the CVPR 2021 Responsible Computer Vision (RCV) Workshop and the CVPR 2021 Women in Computer Vision (WiCV) Workshop.

Research while at TTIC (2019 - 2020)

I was introduced to deep learning and computer vision during my gap year at TTIC (between undergrad and grad school). Late by today's standards? Well, I was a statistics major in undergrad😄

Information-Theoretic Segmentation by Inpainting Error Maximization
Pedro Savarese, Sunnie S. Y. Kim, Michael Maire, Gregory Shakhnarovich, David McAllester
CVPR 2021
project page / paper / bibtex

We introduce a cheap, class-agnostic, and learning-free method for unsupervised image segmentation.
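In spirit (this is my paraphrase with assumed notation, not the paper's exact objective), the method seeks the mask whose two regions are hardest to inpaint from each other:

    % Paraphrase with assumed notation, not the paper's exact objective.
    \max_{M} \; \big\| M \odot \big( x - \phi((1 - M) \odot x) \big) \big\| \;+\; \big\| (1 - M) \odot \big( x - \phi(M \odot x) \big) \big\|

where x is the image, M a binary foreground mask, and φ a simple inpainter; a large inpainting error means the two regions carry little information about each other, which is what a good segmentation should achieve.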

Deformable Style Transfer
Sunnie S. Y. Kim, Nicholas Kolkin, Jason Salavon, Gregory Shakhnarovich
ECCV 2020
project page / paper / code / demo / 1min talk / 10min talk / slides / bibtex

We propose an image style transfer method that can transfer both texture and geometry.

A shorter version of this work appeared at the ECCV 2020 Women in Computer Vision (WiCV) Workshop.

Research while at Yale (2016 - 2019)

In my undergraduate years, I was fortunate to gain research experience in various fields (e.g., statistics, neuroscience, environmental science, psychology) under the guidance of many great mentors.

Shallow Neural Networks Trained to Detect Collisions Recover Features of Visual Loom-Selective Neurons
Baohua Zhou, Zifan Li, Sunnie S. Y. Kim, John Lafferty, Damon A. Clark
eLife 2022
paper / code / bibtex

We find that anatomically constrained shallow neural networks trained to detect impending collisions resemble experimentally observed LPLC2 neuron responses for many visual stimuli.

A shorter version of this work appeared at Computational and Systems Neuroscience (Cosyne) 2021.

2018 Environmental Performance Index
Zachary A. Wendling, Daniel C. Esty, John W. Emerson, Marc A. Levy, Alex de Sherbinin, ..., Sunnie S. Y. Kim et al.
website / report & data / discussion at WEF18 / news 1 / news 2

We evaluate 180 countries' environmental health and ecosystem vitality. The EPI is a biennial project conducted by researchers at Yale and Columbia in collaboration with the World Economic Forum. I built the full data pipeline and led the data analysis work for the 2018 edition.

Which Grades are Better, A’s and C’s, or All B’s? Effects of Variability in Grades on Mock College Admissions Decisions
Woo-kyoung Ahn, Sunnie S. Y. Kim, Kristen Kim, Peter K. McNally
Judgment and Decision Making 2019
paper

We study the effect of negativity bias in human decision making.


Academic Service

Reviewer/Program Committee:
•   Conferences: CVPR 2022-2024, ICCV 2021 & 2023, ECCV 2022 & 2024, CHI 2023 & 2024, FAccT 2023 & 2024, SaTML 2023
•   Workshops: CVPR 2021 RCV Workshop, AAAI 2023 R2HCAI Workshop, CVPR 2023 XAI4CV Workshop, ICML 2023 AI & HCI Workshop
•   Challenges: ML Reproducibility Challenge 2020, 2021 (Outstanding Reviewer), 2022 (Outstanding Reviewer)

Conference Volunteer: NeurIPS 2019 & 2020, ICLR 2020, ICML 2020, CVPR 2022

Organizing Committee: NESS NextGen Data Science Day 2018, CVPR 2023 WiCV Workshop, CVPR 2023 XAI4CV Workshop (Lead Organizer)

Teaching & Outreach

I deeply care about increasing diversity and inclusion in STEM. As a woman in STEM who never considered pursuing a career in it before college, I experienced firsthand the importance of a supportive environment for joining and staying in the field. So I'm passionate about creating environments where women and other historically underrepresented minorities in STEM feel supported and happy :) through outreach, mentorship, and community building.

Princeton COS
Mentor for G1 and Graduate Applicants, 2021–Present
Princeton COS 429 Computer Vision
Graduate TA, Fall 2021
Princeton AI4ALL
Research Instructor, Summer 2021
TTI-Chicago Girls Who Code
Facilitator & Instructor, 2019-2020
Yale S&DS Departmental Student Advisory Committee
Co-founding Member, 2017-2019
Yale S&DS 365/565 Data Mining and Machine Learning
Undergraduate TA, Fall 2018
Yale S&DS 230/530 Data Exploration and Analysis
Undergraduate TA, Fall 2017

Website modified from here.