For Elissa Redmiles, Computer Security Is a Human Thing
By Doug Hulette
Photos by Doug Wells, Wells Photography
Question: Can computer science help to diminish human foibles, especially our penchant for setting aside common sense as we juggle life’s conflicting priorities?
Elissa Redmiles believes the answer is yes. Redmiles, who will join Princeton’s Computer Science Department as an assistant professor in fall 2020, focuses on computer users and their seemingly idiosyncratic decision making on the crucial issues of security and privacy. Her research aims to understand why many users fail to update their software when security fixes become available, why so many of us fall for phishing emails and use weak passwords, and why we do other things that jeopardize our own online safety.

On her website at the University of Maryland, where she earned her doctorate in computer science in 2019, she describes her research as investigating inequalities that arise in security and privacy decisions and “mitigating those inequalities through the design of systems that facilitate safety equitably across users.” Recognizing such inequalities — related to skills, socioeconomic status, culture and gender identity — she uses computational, economic and social-science methods to create models that explain the apparent irrationality of users regarding security behavior. Importantly, the models illuminate ways to improve the situation, and also produce insights for other areas of concern, such as algorithmic fairness.
Jonathan Mayer, an assistant professor who holds appointments in both the CS department and the Woodrow Wilson School of Public and International Affairs, understands exactly where Redmiles is coming from.
“For too long, information security research has centered on designing defenses that are impervious in theory but then underused in practice,” says Mayer, who is both a computer scientist and a lawyer. “Elissa's insight is that in order to protect people, we have to understand people. And in order to understand people, we can use rigorous behavioral science.
“Elissa is leading a new wave of high-impact, interdisciplinary, and fundamental research about how to build security that works for people in the real world, not just people idealized in academic print. Her work has already reshaped how the information security field embraces social science methods — and she's just getting started.”
Like Redmiles, who is currently doing postdoctoral work at Microsoft Research in Seattle, Mayer works across academic and cultural lines. His research focuses on the convergence of technology and law, especially regarding national security, criminal procedure, and consumer privacy. Before joining the Princeton faculty, he served as technology law and policy advisor to Senator Kamala Harris and as the chief technologist for the enforcement arm of the Federal Communications Commission.
During a visit to Princeton in early 2019, Redmiles explained the importance of her work during a presentation titled “Security for All: Modeling Structural Inequities to Design More Secure Systems” as part of the CS department’s Colloquium Series. Associate professor Arvind Narayanan, who introduced her, says “Elissa's work bridges computer security and human-computer interaction, and has important implications for technology policy. Thus, her hire comes at a strategic time for the department as we look to add to our existing strength in security and tech policy while beginning to build a research group in human-computer interaction.”
“Security for All: Modeling Structural Inequities to Design More Secure Systems”
Talk given by Elissa Redmiles on February 21, 2019 for the Princeton Computer Science department.
Prior to her Colloquium talk, Redmiles spoke at a Center for Information Technology Policy lunch seminar in October 2018, at the invitation of Narayanan and other CITP-associated faculty. Her talk was titled “Learning from the People: From Normative to Descriptive Solutions to Problems in Security, Privacy & Machine Learning.”
In a description of her talk, Redmiles summed up her research goal: “A variety of experts — computer scientists, policy makers, judges — constantly make decisions about best practices for computational systems. They decide which features are fair to use in a machine learning classifier predicting whether someone will commit a crime, and which security behaviors to recommend and require from end-users. Yet, the best decision is not always clear. Studies have shown that experts often disagree with each other, and, perhaps more importantly, with the people for whom they are making these decisions: the users.
“This raises a question: Is it possible to learn best practices directly from the users?”
Narayanan says he and others have been following Redmiles’s research for years. “During her visits, an unusually wide range of scholars found connections to Elissa's work, ranging from theoretical computer scientists to sociologists,” he says. “We were delighted to find out that she was planning to go on the job market.”