Olga Russakovsky: Creating a positive future for AI

By Sean C. Downey, Princeton University Advancement Communications

Olga Russakovsky. Photo by Steve Freeman

For young people, summer camp can be a haven of belonging — a time to press pause on sunbathing and Fortnite to bond over activities with kindred spirits. At camp, all the must-do activities of the school year give way to the want-to-do projects of self-discovery. For Olga Russakovsky, assistant professor of computer science and director of the Princeton Visual AI Lab, summer camp was the moment in the calendar of her youth that ignited her long love affair with mathematics and computer science.

“Yes, I was the cool kid who went to math camp in high school,” Russakovsky said, flashing a self-deprecating smile. Outside her office window, the Princeton campus teetered on the edge of summer with students stressing over final exams and last-minute projects. As the daughter of math teachers, Russakovsky was always fascinated by mathematics and attended the math camp at Stanford University to take stock of what she was truly capable of. She came away with much more: “The experience was transformative. The other campers and I would sprawl out on the staircase landing in our dorm and hang out for hours working through problem sets together. It was amazing to be among peers who similarly found this a fun way to spend an afternoon!”

Russakovsky went on to study mathematics as a Stanford undergrad before earning her Ph.D. in computer science, specializing in computer vision, a branch of artificial intelligence (AI) that develops systems that identify objects in images and video and can reason about the visual world. As a Ph.D. student at Stanford in the lab of Fei-Fei Li ’99, a leading expert on cognitively inspired AI, Russakovsky developed an algorithm for computer vision systems that identified foreground and background image elements to process them separately, which made it easier to isolate and annotate the relevant objects of interest in a picture.
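
The article describes the idea only at a high level, but the basic notion of splitting an image into foreground and background layers so they can be annotated or processed separately is easy to sketch. The following is a generic, hypothetical illustration, not Russakovsky’s actual algorithm; the binary mask is assumed to come from some upstream detector:

```python
import numpy as np

def split_foreground_background(image, mask):
    """Split an image into foreground and background layers using a
    binary mask (True where an object of interest is believed to be),
    so the two layers can be annotated or processed separately.
    This is a hypothetical sketch, not the algorithm from the article."""
    foreground = np.where(mask[..., None], image, 0)
    background = np.where(mask[..., None], 0, image)
    return foreground, background

# Toy example: treat the left half of a random 4x4 image as foreground.
image = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[:, :2] = True
fg, bg = split_foreground_background(image, mask)
```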

While still a graduate student, Russakovsky joined a team of Stanford and Princeton computer scientists that included Li, Jia Deng *12, associate professor of computer science at Princeton, and Kai Li, Princeton’s Paul M. Wythes ’55 and Marcia R. Wythes Professor in Computer Science, to launch ImageNet, a massive database containing more than 14 million annotated images. As a dataset of unprecedented scale, ImageNet quickly became the standard in computer vision and data-centric machine learning and helped kickstart the deep learning revolution that led to the current boom in generative AI. “Having this very large collection of data allowed a new class of deep learning models to showcase their full power,” Russakovsky said.

Now, as AI-driven systems hold sway over real-world decisions such as identifying missing children, diagnosing rare diseases and deciding which direction your car should turn, some experts are predicting an AI apocalypse. Russakovsky, however, takes a more nuanced view. “AI is a beautiful, transformative and incredibly powerful technology,” she said. “But I think it can be massively misused.”

Russakovsky believes the technology has the potential to do an incredible amount of good for the world, but only if AI systems have more transparency and accountability. “We need to ramp up our efforts to harness this technology for good rather than evil,” she said. “And we have to move quickly on that front.”

Russakovsky has focused her research on the historical and societal biases impacting visual recognition and developing methods to mitigate unfair outcomes. Her AI bias work led the Visual AI Lab to build REVISE (REvealing VIsual BiaSEs), a tool that analyzes visual datasets for signs of prejudice, including racial and gender assumptions. “You can’t build an unbiased system, but you can certainly do better than what we’re doing now,” she said. “AI is going to have an important impact on the world, and yet we have a very homogeneous group of people building all these AI systems.”
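
The article doesn’t detail how REVISE works internally. As a flavor of what a dataset audit can look like, here is a minimal, hypothetical sketch that counts how often each object category co-occurs with each annotated demographic group and flags heavy skews; the annotation schema and the 80 percent threshold are invented for the example, and this is not the REVISE implementation:

```python
from collections import Counter

def flag_skewed_categories(annotations, threshold=0.8):
    """Flag object categories whose images skew heavily toward one
    demographic group. `annotations` is a list of dicts such as
    {"category": "doctor", "group": "male"} (hypothetical schema)."""
    per_category = {}
    for ann in annotations:
        per_category.setdefault(ann["category"], Counter())[ann["group"]] += 1

    flagged = []
    for category, counts in per_category.items():
        total = sum(counts.values())
        group, top = counts.most_common(1)[0]
        if top / total >= threshold:
            flagged.append((category, group, top / total))
    return flagged

# Example: 9 of 10 "doctor" images are annotated with one group.
data = [{"category": "doctor", "group": "male"}] * 9 \
     + [{"category": "doctor", "group": "female"}]
print(flag_skewed_categories(data))  # [('doctor', 'male', 0.9)]
```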

Case in point: MIT’s widely cited Gender Shades study from 2018 found that popular facial analysis programs from Microsoft, IBM and Megvii misclassified the gender of darker-skinned women 21 to 35 percent of the time, versus less than 1 percent for lighter-skinned men. “You don’t need to dig much further than that to find the bias,” Russakovsky said.

The issue is one of omission; the datasets the companies used to train their programs contained many more images of white men than Black women, so the results skewed in the direction of the data. In the years since the study’s publication, each company has worked to address these issues, with varying degrees of success.
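
The measurement behind such findings is simple to state: evaluate the same classifier separately on each demographic subgroup and compare the error rates. Below is a toy sketch of that disaggregated evaluation, using invented predictions and labels rather than the study’s data:

```python
def error_rate_by_group(predictions, labels, groups):
    """Compute the classification error rate separately for each
    demographic subgroup (a disaggregated evaluation)."""
    totals, errors = {}, {}
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] = totals.get(group, 0) + 1
        if pred != label:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Invented data: the classifier errs only on the first subgroup.
preds  = ["F", "M", "M", "M", "M", "M", "M", "M"]
labels = ["F", "F", "F", "M", "M", "M", "M", "M"]
groups = ["darker-skinned women"] * 4 + ["lighter-skinned men"] * 4
print(error_rate_by_group(preds, labels, groups))
# {'darker-skinned women': 0.5, 'lighter-skinned men': 0.0}
```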

“When you have a homogeneous group that controls the power to build these systems and profit from them, inequality and disparities will continue to increase in subtle yet powerful ways,” she said. For Russakovsky, addressing this lack of inclusivity starts by opening up AI to people from a variety of backgrounds that reflect the world we live in.

FACING DOWN STRUCTURAL BIAS

Russakovsky’s quest to advance equity, diversity and inclusion in AI is partly rooted in her experience as a woman in computer science, a profession dominated by men. She has first-hand experience with deep structural bias and with being quietly discouraged from pursuing AI as a career.

At Stanford, she remembers being one of three women in a graduate-level math class of 30 to 40 students and later feeling marginalized as the only female Ph.D. student in her first graduate lab. “I hadn’t quite realized at the time how strange and isolating that experience was,” she said. “I sort of accepted it as the status quo.”

Russakovsky says that two years into her Ph.D., she had internalized so much self-doubt that she began suffering from impostor syndrome. But Fei-Fei Li helped her break out of that mentality. “She noticed that when I did research, I always took one timid step forward and then panicked and took two steps back to rethink everything,” Russakovsky said. “I had so much self-doubt that I couldn’t move forward.”

Li became Russakovsky’s Ph.D. advisor and mentor, teaching her to navigate the graduate and postdoc computer science space as a woman. “She told me that we weren’t afforded the same benefit of the doubt and had to remain conscious of that even after becoming professors,” she said.

Li also instilled in her a sense of responsibility for inclusive advising and inclusive community building as she advanced in her career. “If you don’t see anyone who looks like you in the positions that you want to occupy, then you don’t have a roadmap for how to get there yourself,” Russakovsky said.

Women currently hold only 28 percent of the mathematics and computer science jobs in the U.S. and received 21 percent of the computer science bachelor’s degrees in 2019. One explanation for this disparity is that women and girls are commonly dissuaded from pursuing an interest in computer science — something Russakovsky has witnessed too often. “I was teaching a math course at a summer camp for middle school students, and one of the girls told me that some of the boys said that girls weren’t good at math,” she said. The girl was one of four in a class of 30 students. “The boys were teasing her about being super smart, and it just broke my heart.”

WHO WILL CHANGE AI?

In 2017, Russakovsky established the nonprofit AI4ALL — alongside Li and Stanford’s Rick Sommer — to educate future AI leaders from underrepresented groups. She currently co-directs the organization’s Princeton chapter with Edward Felten, the Robert E. Kahn Professor of Computer Science and Public Affairs, Emeritus, and Jaime Fernández Fisac, assistant professor of electrical and computer engineering. She said her involvement in AI4ALL stems from her own experiences in summer math camps: “I realized how impactful that kind of early intervention can be on high school or middle school students and how much it can shape their life trajectory.”

Russakovsky with AI4ALL students on a field trip to Washington, D.C., in 2018.

Adopting the slogan “AI will change the world. Who will change AI?” AI4ALL offers immersive programming in the basics of artificial intelligence to college- and high school-aged girls and students from other underrepresented groups. “The core idea is that we’re trying to give these students a look into the field of AI, to demystify it and help them see a path for themselves in this space,” Russakovsky said.

What started as a two-week day camp at Stanford for high school girls has expanded to 11 separate chapters at universities across the U.S. and Canada, with camps running for three weeks and offering online and residential options. Each AI4ALL chapter harnesses the strengths of its host university to offer intensive training, group projects and guest lectures by leading researchers in AI. “Part of our goal is to embrace a diversity of approaches, target populations and educational experiences,” Russakovsky said. Some chapters focus on female students, and others on students of color and lower-income students.

“There’s plenty of fear around AI, which contributes to the lack of diversity in this field and is part of what’s driving people away from it,” Russakovsky said. “But I hope we can change that for the better.”

Princeton’s interdisciplinary strengths, including computer science, public policy and the humanities, allow its AI4ALL camp to explore AI’s ethical and societal impacts and draw from the collective expertise of the University’s computer science department and the Center for Information Technology Policy (CITP). “With CITP’s connection to AI policymakers, they’re trying to translate some of the AI research discoveries into actionable guidance for government policies,” Russakovsky said.

As an example of how this translates into AI4ALL programming, she pointed to the issues surrounding large image datasets built using millions of pictures scraped from social media. “We ask our students how they think about privacy, consent issues and data ownership,” Russakovsky said. “For instance, when you upload your photo to Flickr, are you also consenting for it to be used in training an AI model?”

This is an area where new laws and policies are needed, especially when it comes to using datasets like these for facial recognition. To reinforce these connections, students take a two-day field trip to Washington, D.C., during the third week of camp.

Princeton AI4ALL campers also work on group projects that showcase different AI applications. Past projects included identifying melanoma from pictures of skin lesions, tracing the human genome’s geographic origins and teaching robots to navigate unknown spaces by following verbal instructions. “Part of what we’re trying to do at AI4ALL is dispel some of the myths, hype and fear around AI,” Russakovsky said. “And this is both AI coming to take away your jobs and AI as killer robots.”

Princeton’s liberal arts environment, she said, has been ideal for this chapter of AI4ALL to take root. “In many ways, having access to experts on ethics and philosophy has helped shape how we conduct our research.”

It’s crucial, Russakovsky said, to have people with a comparably broad set of experiences and worldviews working on AI problems: “If the people working on AI systems are thoughtful, responsible, educated on the ethics of it, understand how it impacts society, and care about using it for social benefit, AI will do a lot of good things for the world.”

This article originally appeared on the Princeton University Alumni website. 
