Protecting Privacy by Splitting Trust

Date and Time
Thursday, April 4, 2019 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Talk
Host
Arvind Narayanan

Henry Corrigan-Gibbs
When the maker of my phone, smart-watch, or web browser collects data about how I use it, must I trust the manufacturer to protect that sensitive information from theft? When I use the cryptographic hardware module in my laptop, need I trust that it will keep my secrets safe? When I use a messaging app to chat with friends, must I trust the app vendor not to sell the details of my messaging activity for profit?

This talk will show that we can get the functionality we want from our systems without having to put blind faith in the correct behavior of the companies collecting our data, building our hardware, or designing our apps. The principle is to split our trust -- among organizations, or devices, or users. I will introduce new cryptographic techniques and systems-level optimizations that make it practical to split trust in a variety of settings. Then, I will present three built systems that employ these ideas, including one that now ships with the Firefox browser.
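To make the split-trust principle concrete, here is a minimal sketch of additive secret sharing, the kind of building block such systems rest on; the two-server setup, the modulus P, and the function names are illustrative assumptions, not details of the deployed system.

```python
# Minimal sketch (illustrative, not the talk's actual protocol) of additive
# secret sharing: a client's value is split into two random-looking shares,
# one per non-colluding server. Neither share alone reveals the value, but
# the servers can jointly compute an aggregate sum.
import secrets

P = 2**61 - 1  # a public prime modulus (hypothetical parameter choice)

def share(value: int) -> tuple[int, int]:
    """Split `value` into two additive shares modulo P."""
    r = secrets.randbelow(P)
    return r, (value - r) % P

def aggregate(values: list[int]) -> int:
    """Each server sums the shares it holds; combining the two sums
    reveals only the total, not any individual value."""
    server_a, server_b = 0, 0
    for v in values:
        a, b = share(v)
        server_a = (server_a + a) % P
        server_b = (server_b + b) % P
    return (server_a + server_b) % P

print(aggregate([1, 0, 1, 1]))  # -> 3: the count, with no single value exposed
```

Because neither server ever sees a complete value, a breach at either one exposes only random-looking shares; only the aggregate is ever reconstructed.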

Bio: 
Henry Corrigan-Gibbs is a Ph.D. candidate at Stanford, advised by Dan Boneh. His research interests are in computer security, applied cryptography, and online privacy. Henry and his collaborators have received the Best Young Researcher Paper Award at Eurocrypt 2018, the 2016 Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies, and the 2015 IEEE Security and Privacy Distinguished Paper Award, and his work has been cited by the IETF and NIST.

Lunch for talk attendees will be available at 12:00pm. 
To request accommodations for a disability, please contact Emily Lawrence, emilyl@cs.princeton.edu, 609-258-4624 at least one week prior to the event.

The "D" word: solving for "diversity" on high-tech teams

Date and Time
Thursday, February 28, 2019 - 4:30pm to 5:30pm
Location
Bowen Hall 222
Type
Talk

Janet Vertesi
This talk gives an overview of the sociological factors that affect the construction of diverse, high-performing teams. Building on a summary of key issues that affect gender and racial disparities in high-performing occupations, the talk covers the current social science and vocabulary for addressing the problem, as well as strategies for moving forward, avoiding common traps, and protecting performance-based advancement. 

Bio:
Dubbed “Margaret Mead among the Starfleet” by the Times Literary Supplement, Janet Vertesi is Assistant Professor of Sociology at Princeton University and an expert in the sociology of science, technology, and organizations. Vertesi’s past decade of research, funded by the National Science Foundation, examines how distributed robotic spacecraft teams work together effectively to produce scientific and technical results. Her book Seeing Like a Rover (University of Chicago Press, 2015) describes the collaborative work of the Mars Exploration Rover mission, including the people, the images, and the robots who do science on Mars. Vertesi is also a long-time contributor to the Association for Computing Machinery conferences on human-computer interaction and computer-supported cooperative work. She is an advisory board member of the Data and Society Institute in New York City and a member of Princeton University’s Center for Information Technology Policy.

To join the talk, please email seasdiversity@princeton.edu.

This event is co-sponsored by the School of Engineering and Applied Science and the Department of Computer Science.

Systems to Improve Online Discussion

Date and Time
Tuesday, April 16, 2019 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Talk
Host
Adam Finkelstein

Amy Zhang
Discussions online are integral to everyday life, affecting how we learn, work, socialize, and participate in public society. Yet the systems we use to conduct online discourse, whether email, chat, or forums, have changed little since their inception many decades ago. As more people participate and more venues for discourse migrate online, new problems have arisen and old problems have intensified. People are still drowning in information and must now also juggle dozens of disparate discussion silos. Finally, an unfortunately large proportion of this online interaction is unwanted or unpleasant, with clashing norms leading to people bickering or being harassed into silence. My research in human-computer interaction reimagines these outdated designs, building novel online discussion systems that fix what is broken. To solve these problems, I develop tools that give users and communities direct control over their experiences and information. These include: 1) summarization tools to make sense of large discussions, 2) annotation tools to situate conversations in the context of what is being discussed, and 3) moderation tools to give users more fine-grained control over content delivery.
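As a small illustration of the third category, here is a hypothetical sketch of a per-user moderation rule; the ModerationPrefs structure and its field names are invented for illustration and are not the tools described in the talk.

```python
# Hypothetical sketch of a per-user moderation rule: each user declares
# keywords and senders to mute, and the filter applies those preferences to
# an incoming message stream. Illustrative only; not the talk's systems.
from dataclasses import dataclass, field

@dataclass
class ModerationPrefs:
    muted_words: set[str] = field(default_factory=set)
    muted_senders: set[str] = field(default_factory=set)

def visible(message: dict, prefs: ModerationPrefs) -> bool:
    """Return True if the message passes this user's filters."""
    if message["sender"] in prefs.muted_senders:
        return False
    text = message["text"].lower()
    return not any(word in text for word in prefs.muted_words)

prefs = ModerationPrefs(muted_words={"spoiler"}, muted_senders={"troll42"})
stream = [
    {"sender": "alice", "text": "Meeting moved to 3pm"},
    {"sender": "troll42", "text": "you won't believe this"},
    {"sender": "bob", "text": "Spoiler: the rover finds water"},
]
print([m["text"] for m in stream if visible(m, prefs)])  # -> ["Meeting moved to 3pm"]
```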

Bio:
Amy X. Zhang is a graduate student at MIT’s Computer Science and Artificial Intelligence Laboratory, focusing on human-computer interaction and social computing, and a 2018-19 Fellow at the Harvard Berkman Klein Center. She has interned at Microsoft Research and Google Research, received awards at ACM CHI and CSCW, and her work has been featured in stories by ABC News, BBC, CBC, and more. She holds an M.Phil. in computer science from the University of Cambridge, completed on a Gates Fellowship, and a B.S. in computer science from Rutgers, where she captained the Division I women’s tennis team. Her research is supported by a Google PhD Fellowship and an NSF Graduate Research Fellowship.

Lunch for talk attendees will be available at 12:00pm. 
To request accommodations for a disability, please contact Emily Lawrence, emilyl@cs.princeton.edu, 609-258-4624 at least one week prior to the event.

Safe and Reliable Reinforcement Learning for Continuous Control

Date and Time
Thursday, March 7, 2019 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Talk
Host
Ryan Adams, CS and Yuxin Chen, EE

Many autonomous systems such as self-driving cars, unmanned aerial vehicles, and personalized robotic assistants are inherently complex. In order to deal with this complexity, practitioners are increasingly turning towards data-driven learning techniques such as reinforcement learning (RL) for designing sophisticated control policies. However, there are currently two fundamental issues that limit the widespread deployment of RL: sample inefficiency and the lack of formal safety guarantees. In this talk, I will propose solutions for both of these issues in the context of continuous control tasks. In particular, I will show that in the widely applicable setting where the dynamics are linear, model-based algorithms which exploit this structure are substantially more sample efficient than model-free algorithms, such as the widely used policy gradient method. Furthermore, I will describe a new model-based algorithm which comes with provable safety guarantees and is computationally efficient, relying only on convex programming. I will conclude the talk by discussing the next steps towards safe and reliable deployment of reinforcement learning.
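To make the linear, model-based setting concrete, here is a minimal sketch of system identification by least squares; the dynamics matrices and noise level are made up, and the talk's algorithms add the safety and sample-complexity guarantees that this sketch omits.

```python
# Minimal sketch of model-based control for linear dynamics x' = A x + B u:
# estimate (A, B) from trajectory data by least squares. Illustrative only;
# the actual algorithms provide safety and sample-efficiency guarantees.
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[1.0, 0.1], [0.0, 1.0]])   # hypothetical true dynamics
B_true = np.array([[0.0], [0.1]])

# Roll out random inputs to collect (x, u, x_next) tuples.
X, U, X_next = [], [], []
x = np.zeros(2)
for _ in range(200):
    u = rng.normal(size=1)
    x_next = A_true @ x + B_true @ u + 0.01 * rng.normal(size=2)
    X.append(x)
    U.append(u)
    X_next.append(x_next)
    x = x_next

# Least-squares fit: X_next ~= [X U] @ [A B]^T
Z = np.hstack([np.array(X), np.array(U)])                 # shape (200, 3)
Theta, *_ = np.linalg.lstsq(Z, np.array(X_next), rcond=None)
A_hat, B_hat = Theta.T[:, :2], Theta.T[:, 2:]
print(np.round(A_hat, 2), np.round(B_hat, 2))             # close to A_true, B_true
```

A controller is then designed against the estimated (A_hat, B_hat); the gap between the estimate and the true dynamics is exactly what the safety analysis must account for.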

Bio:
Stephen Tu is a PhD student in Electrical Engineering and Computer Sciences at the University of California, Berkeley advised by Benjamin Recht. His research interests are in machine learning, control theory, optimization, and statistics. Recently, he has focused on providing safety and performance guarantees for reinforcement learning algorithms in continuous settings. He is supported by a Google PhD fellowship in machine learning. 

Lunch for talk attendees will be available at 12:00pm. 
To request accommodations for a disability, please contact Emily Lawrence, emilyl@cs.princeton.edu, 609-258-4624 at least one week prior to the event.

On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization

Date and Time
Monday, March 11, 2019 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Talk
Host
Elad Hazan

Jason Lee
Deep Learning has had phenomenal empirical successes in many domains including computer vision, natural language processing, and speech recognition. To consolidate and boost the empirical success, we need to develop a more systematic and deeper understanding of the elusive principles of deep learning.

In this talk, I will provide an analysis of several elements of deep learning, including non-convex optimization, overparametrization, and generalization error. First, we show that gradient descent and many other algorithms are guaranteed to converge to a local minimizer of the loss. For several interesting problems, including matrix completion, this guarantees that we converge to a global minimum. Then we will show that gradient descent converges to a global minimizer for deep overparametrized networks. Finally, we analyze the generalization error by showing that a subtle interplay of SGD, logistic loss, and architecture promotes large-margin classifiers, which are guaranteed to have low generalization error. Together, these results show that on overparametrized deep networks, SGD finds solutions with both low training and test error.
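For intuition in the linear, separable case, the large-margin phenomenon can be written as follows (a standard formulation sketched here, not a statement quoted from the talk): gradient descent on the logistic loss drives the normalized iterate toward the maximum-margin direction.

```latex
% Implicit bias of gradient descent on logistic loss (linear, separable case);
% a standard statement sketched for intuition, not quoted from the talk.
\[
  L(w) = \sum_{i=1}^{n} \log\!\bigl(1 + e^{-y_i \, w^\top x_i}\bigr),
  \qquad
  \lim_{t \to \infty} \frac{w(t)}{\lVert w(t) \rVert}
  = \frac{\hat{w}}{\lVert \hat{w} \rVert},
  \quad
  \hat{w} = \arg\min_{w} \lVert w \rVert_2^2
  \ \text{ s.t. }\ y_i\, w^\top x_i \ge 1 \ \ \forall i .
\]
```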

Bio:
Jason Lee is an assistant professor in Data Sciences and Operations at the University of Southern California. Prior to that, he was a postdoctoral researcher at UC Berkeley working with Michael Jordan. Jason received his PhD at Stanford University, advised by Trevor Hastie and Jonathan Taylor. His research interests are in statistics, machine learning, and optimization. Lately, he has worked on the foundations of deep learning, non-convex optimization algorithms, and adaptive statistical inference. He received a Sloan Research Fellowship in 2019 and a NIPS Best Student Paper Award for his work.

Lunch for talk attendees will be available at 12:00pm. 
To request accommodations for a disability, please contact Emily Lawrence, emilyl@cs.princeton.edu, 609-258-4624 at least one week prior to the event.

Optimizing the Automated Programming Stack

Date and Time
Thursday, April 18, 2019 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Talk
Host
Margaret Martonosi

James Bornholt
The scale and pervasiveness of modern software poses a challenge for programmers: software reliability is more important than ever, but the complexity of computer systems continues to grow. Automated programming tools are powerful weapons for programmers to tackle this challenge: verifiers that check software correctness, and synthesizers that generate new correct-by-construction programs. These tools are most effective when they apply domain-specific optimizations, but doing so today requires considerable formal methods expertise.

In this talk, I present a new application-driven approach to optimizing the automated programming stack that underpins modern domain-specific tools. I will demonstrate the importance of programming tools in the context of memory consistency models, which define the behavior of multiprocessor CPUs and whose subtleties often elude even experts. Our new tool, MemSynth, automatically synthesizes formal descriptions of memory consistency models from examples of CPU behavior. We have used MemSynth to synthesize descriptions of the x86 and PowerPC memory models, each of which previously required person-years of effort to describe by hand, and found several ambiguities and underspecifications in both architectures. I will then present symbolic profiling, a new technique we designed and implemented to help people identify scalability bottlenecks in automated programming tools. These tools use symbolic evaluation, which evaluates all paths through a program, an execution model that defies both human intuition and standard profiling techniques. Symbolic profiling diagnoses scalability bottlenecks using a novel performance model for symbolic evaluation that accounts for all-paths execution. We have used symbolic profiling to find and fix performance issues in 8 state-of-the-art automated tools, improving their scalability by orders of magnitude, and our techniques have been adopted in industry. Finally, I will give a sense of the importance of future application-driven optimizations to the automated programming stack, with applications that inspire improvements to the stack and in turn beget even more powerful automated tools.
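To give a flavor of why all-paths execution defies ordinary profiling, here is a tiny, hypothetical symbolic evaluator for a branching computation; the function and variable names are invented, and real engines (and symbolic profiling itself) are far more sophisticated.

```python
# Tiny, hypothetical sketch of all-paths symbolic evaluation: each branch
# forks the set of live paths instead of choosing one, so work grows with
# the number of feasible paths rather than with one concrete execution.
# Illustrative only; real tools and symbolic profiling are far richer.

def symbolic_abs_sum(num_vars: int):
    """Symbolically evaluate total = sum(abs(x_i)) over unknown inputs x_i."""
    paths = [([], "0")]  # (list of branch conditions, symbolic expression)
    for i in range(num_vars):
        forked = []
        for conds, expr in paths:
            forked.append((conds + [f"x{i} >= 0"], f"{expr} + x{i}"))
            forked.append((conds + [f"x{i} < 0"], f"{expr} - x{i}"))
        paths = forked
    return paths

for conds, expr in symbolic_abs_sum(2):
    print(" and ".join(conds), "=>", expr)
# 4 paths for 2 inputs; n inputs yield 2**n paths -- the kind of blowup a
# symbolic profiler is designed to locate.
```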

Bio:
James Bornholt is a Ph.D. candidate in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, advised by Emina Torlak, Dan Grossman, and Luis Ceze. His research interests are in programming languages and formal methods, with a focus on automated program verification and synthesis. His work has received an ACM SIGPLAN Research Highlight, two IEEE Micro Top Picks selections, an OSDI best paper award, and a Facebook Ph.D. fellowship.

Lunch for talk attendees will be available at 12:00pm. 
To request accommodations for a disability, please contact Emily Lawrence, emilyl@cs.princeton.edu, 609-258-4624 at least one week prior to the event.

Learning-based Learning Systems

Date and Time
Tuesday, March 5, 2019 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Talk
Host
Ryan Adams

Tianqi Chen
Data, models, and computing are the three pillars that enable machine learning to solve real-world problems at scale. Making progress in these three domains requires not only disruptive algorithmic advances but also systems innovations that can continue to squeeze more efficiency out of modern hardware. Learning systems are at the center of every intelligent application today. However, the ever-growing demand for applications and hardware specialization creates a huge engineering burden for these systems, most of which rely on heuristics or manual optimization.

In this talk, I will present a new approach that uses machine learning to automate system optimizations. I will describe our approach in the context of the deep learning deployment problem. I will first discuss how to design invariant representations that lead to transferable statistical cost models, and apply these representations to optimize the tensor programs used in deep learning applications. I will then describe the system improvements we made to enable diverse hardware backends. TVM, our end-to-end system, delivers performance across hardware backends that is competitive with state-of-the-art, hand-tuned deep learning frameworks. Finally, I will discuss how to generalize our approach to jointly optimize the model, system, and hardware across the full stack, and how to build systems that support the life-long evolution of intelligent applications.
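A minimal sketch of the learning-to-optimize loop may help fix ideas; the configuration space, the measure stand-in, and the toy nearest-neighbor cost model below are hypothetical and are not TVM's actual interfaces.

```python
# Hypothetical sketch of a learned-cost-model search loop, in the spirit of
# learning-based program optimization (not TVM's actual API): measure a few
# candidate configurations, fit a simple cost model, then use the model to
# rank untried candidates and measure only the most promising ones.
import random

def measure(config):
    """Stand-in for timing the candidate on real hardware (made-up cost)."""
    tile, unroll = config
    return abs(tile - 32) + 2 * abs(unroll - 4) + random.random()

def fit_cost_model(history):
    """Toy cost model: predict a config's cost from its nearest measured
    neighbor (a real system would train, e.g., gradient-boosted trees)."""
    def predict(config):
        return min(cost + abs(config[0] - c[0]) + abs(config[1] - c[1])
                   for c, cost in history)
    return predict

random.seed(0)
candidates = [(t, u) for t in (8, 16, 32, 64) for u in (1, 2, 4, 8)]
history = [(c, measure(c)) for c in random.sample(candidates, 4)]  # warm-up runs
for _ in range(4):  # model-guided trials: rank untried configs, measure the best
    predict = fit_cost_model(history)
    tried = {c for c, _ in history}
    nxt = min((c for c in candidates if c not in tried), key=predict)
    history.append((nxt, measure(nxt)))
print(min(history, key=lambda kv: kv[1]))  # best (config, measured cost) found
```

A real system replaces the toy predictor with a trained statistical model and explores a vastly larger space of tensor-program configurations, which is where the transferable cost models matter.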
 
Bio:
Tianqi Chen is a Ph.D. candidate in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, working with Carlos Guestrin on the intersection of machine learning and systems. He has created three major learning systems that are widely adopted: XGBoost, TVM, and MXNet (co-creator). He is a recipient of the Google Ph.D. Fellowship in Machine Learning.

Lunch for talk attendees will be available at 12:00pm. 
To request accommodations for a disability, please contact Emily Lawrence, emilyl@cs.princeton.edu, 609-258-4624 at least one week prior to the event.

Resilient Safety Assurance for Robotic Systems: Staying Safe Even When Models Are Wrong

Date and Time
Tuesday, February 26, 2019 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Talk
Host
Prof. Naveen Verma and Prof. Szymon Rusinkiewicz

Jaime Fernández Fisac
In order for autonomous systems like robots, drones, and self-driving cars to be reliably introduced into our society, they must be able to actively account for safety during their operation. While safety analysis has traditionally been conducted offline for controlled environments like cages on factory floors, the much higher complexity of open, human-populated spaces like our homes, cities, and roads means that any precomputed guarantees may become invalid when modeling assumptions made at design time are violated once the system is deployed. My research aims to enable autonomous systems to proactively ensure safety during their operation by explicitly reasoning about the gap between their models and the real world.

In this talk I will present recent contributions to safety assurance for autonomous systems. I will first discuss new advances in efficient safety computation, and demonstrate their use in large-scale unmanned air traffic. Next, I will present a general safety framework enabling the use of learning control schemes (e.g. reinforcement learning) for safety-critical robotic systems in uncertain environments. I will then turn my attention to the important problem of safe human-robot interaction, and introduce a real-time Bayesian method to monitor the reliability of predictive human models. The talk will end with a discussion of challenges and opportunities ahead, including the introduction of game-theoretic planning in autonomous driving and the bridging of safety analysis and deep reinforcement learning.
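To illustrate the flavor of real-time reliability monitoring, here is a simplified, hypothetical sketch of a Bayesian confidence update over a predictive human model; the likelihoods, baseline, and threshold are invented, and this is not the talk's actual formulation.

```python
# Simplified, hypothetical sketch of Bayesian model-confidence monitoring:
# track a belief over whether the predictive human model is "reliable" from
# the likelihoods it assigns to observed human actions, and switch to
# conservative planning when the belief drops below a threshold.

def update_confidence(belief: float, likelihood: float,
                      baseline: float = 0.2) -> float:
    """One Bayes update: compare the model's likelihood for the observed
    action against an uninformed baseline over the action space."""
    numerator = belief * likelihood
    return numerator / (numerator + (1.0 - belief) * baseline)

belief = 0.5
for likelihood in [0.6, 0.7, 0.05, 0.04, 0.03]:  # model stops explaining the human
    belief = update_confidence(belief, likelihood)
    mode = "nominal plan" if belief > 0.3 else "conservative / safe fallback"
    print(f"confidence={belief:.2f} -> {mode}")
```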

Bio:
Jaime Fernández Fisac is a Ph.D. candidate in Electrical Engineering and Computer Sciences at UC Berkeley. His research interests lie in control theory and artificial intelligence, with a focus on safety assurance for autonomous systems. He works to enable robotic systems to safely interact with uncertain environments and human beings despite using inaccurate models. Jaime received a B.S./M.S. degree in Electrical Engineering from the Universidad Politécnica de Madrid, Spain, in 2012, and a M.Sc. in Aeronautics from Cranfield University, U.K., in 2013. He is a recipient of the La Caixa Foundation fellowship.

Security for All: Modeling Structural Inequities to Design More Secure Systems

Date and Time
Thursday, February 21, 2019 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Talk
Host
Arvind Narayanan

Elissa Redmiles
Users often fall for phishing emails, reuse simple passwords, and fail to effectively utilize "provably" secure systems. These behaviors expose users to significant harm and frustrate industry practitioners and security researchers alike. As the consequences of security breaches become ever more grave, it is important to study why humans behave seemingly irrationally. In this talk, I will illustrate how modeling the effects of structural inequities -- variance in skill, socioeconomic status, as well as culture and gender identity -- can both explain apparent irrationality in users’ security behavior and offer tangible improvements in industry systems. Modeling and mitigating security inequities requires combining economic, data-scientific, and social science methods to develop new tools for systematically understanding and mitigating insecure behavior.

Through novel experimental methodology, I empirically show strong evidence of bounded rationality in security behavior: Users make mathematically modelable tradeoffs between the protection offered by security behaviors and the costs of practicing those behaviors, which even in a highly usable system may outweigh the benefits, especially for less resourced users. These findings emphasize the need for industry systems that balance structural inequities and accommodate behavioral variance between users rather than one-size-fits-all security solutions. More broadly, my techniques for modeling and accounting for inequities have offered key insights in growing technical areas beyond security, including algorithmic fairness.
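One schematic way to read the modeled tradeoff (an illustrative formulation for intuition, not the exact model from these studies) is as a cost-benefit inequality whose terms shift with a user's resources and skill:

```latex
% Schematic cost-benefit reading of the bounded-rationality finding; an
% illustrative formulation, not the exact model from the underlying studies.
\[
  \text{adopt behavior } b \iff
  \underbrace{p_{\text{harm}} \cdot L \cdot \Delta_{\text{protection}}(b)}_{\text{expected benefit}}
  \;>\;
  \underbrace{c_{\text{time}}(b) + c_{\text{effort}}(b)}_{\text{cost, higher for less-resourced users}} .
\]
```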

Bio:
Elissa Redmiles is a Ph.D. Candidate in Computer Science at the University of Maryland and has been a visiting researcher with the Max Planck Institute for Software Systems and the University of Zurich. Elissa’s research interests are broadly in the areas of security and privacy. She uses computational, economic, and social science methods to conduct research on behavioral security. Elissa seeks to understand users’ security and privacy decision-making processes, specifically to investigate inequalities that arise in these processes and to mitigate them through the design of systems that facilitate safety equitably across users. Elissa is the recipient of an NSF Graduate Research Fellowship, a National Defense Science and Engineering Graduate Fellowship, and a Facebook Fellowship. Her work has appeared in popular press publications such as Scientific American, Business Insider, Newsweek, and CNET and has been recognized with the John Karat Usable Privacy and Security Student Research Award, a Distinguished Paper Award at USENIX Security 2018, and a University of Maryland Outstanding Graduate Student Award.

Lunch for talk attendees will be available at 12:00pm. 
To request accommodations for a disability, please contact Emily Lawrence, emilyl@cs.princeton.edu, 609-258-4624 at least one week prior to the event.

Reinventing the Internet

Date and Time
Friday, October 5, 2018 - 3:00pm to 4:30pm
Location
McDonnell Hall A01
Type
Talk

Presenter:
Jennifer Rexford ’91, Gordon Y.S. Wu Professor of Engineering, Professor of Computer Science and Computer Science Department Chair

This event is part of She Roars: Celebrating Women at Princeton
