CS Department Colloquium Series

Improving Stack-Wide Resource Utilization for a Faster Mobile Web

Date and Time
Monday, March 23, 2020 - 12:30pm to 1:30pm
Location
Zoom (off campus)
Type
CS Department Colloquium Series
Host
Karthik Narasimhan

***Due to the developing situation surrounding the COVID-19 virus, this talk will be available for remote viewing.  See below for details.***

Ravi Netravali
Abstract: Mobile web pages are integral to today's society, supporting critical services such as education, e-commerce, and social networking. Despite considerable academic and industrial research efforts, and major improvements over the past decade across the client-side web stack (i.e., networks, device CPUs, and browser engines), page load performance has plateaued and continues to fall short of user performance demands in practice. The consequences of this are far-reaching: users abandon pages early, costing content providers billions of dollars in lost revenue, or pages remain unusably slow, particularly in developing regions where web pages are often the sole gateway to the aforementioned services.

In this talk, I will describe the origin of this performance plateau in the context of serialized page load tasks that preclude effective utilization of the underlying network and CPU resources. Then, I will describe two complementary optimizations that my students and I have developed to eliminate these inefficiencies throughout the page load process and cut mobile load times in half. Key to these optimizations are judicious applications of programming languages (e.g., symbolic execution) and machine learning (e.g., reinforcement learning) techniques that enable us to 1) discover optimization knobs that preserve application correctness, and 2) tune those knobs according to stack-wide signals from the network, device, page, and browser, without developer intervention. I will conclude by describing how these underlying techniques can motivate and address a range of future challenges in networked applications and distributed systems. 
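
As a rough illustration of the second ingredient, the sketch below tunes a hypothetical page-load knob (how many resource fetches to issue in parallel) with a simple epsilon-greedy bandit conditioned on a coarse network signal. The knob, the signal, and the simulated load-time model are all my own stand-ins rather than details from the talk; the actual work uses richer reinforcement learning over real stack-wide signals from the network, device, page, and browser.

```python
# Minimal sketch, not code from the talk: tune a hypothetical page-load knob
# (number of parallel fetches) with an epsilon-greedy bandit that conditions
# on a coarse, hypothetical stack-wide signal (a network bandwidth bucket).
import random
from collections import defaultdict

KNOB_SETTINGS = [2, 4, 8, 16]          # hypothetical parallelism levels
BANDWIDTH_BUCKETS = ["slow", "fast"]   # hypothetical network signal

def simulated_load_time(parallelism, bandwidth):
    """Stand-in for a real page load; returns a load time in seconds."""
    base = 8.0 if bandwidth == "slow" else 2.0
    contention = 0.1 * parallelism if bandwidth == "slow" else 0.0
    return base / parallelism + contention + random.uniform(0.0, 0.2)

totals = defaultdict(float)   # summed reward per (signal, knob setting)
counts = defaultdict(int)     # number of trials per (signal, knob setting)

def choose(bandwidth, epsilon=0.1):
    """Epsilon-greedy choice of knob setting given the observed signal."""
    if random.random() < epsilon:
        return random.choice(KNOB_SETTINGS)
    return max(KNOB_SETTINGS,
               key=lambda k: totals[(bandwidth, k)] / counts[(bandwidth, k)]
               if counts[(bandwidth, k)] else 0.0)

for _ in range(2000):
    bw = random.choice(BANDWIDTH_BUCKETS)
    knob = choose(bw)
    reward = -simulated_load_time(knob, bw)   # faster load = higher reward
    totals[(bw, knob)] += reward
    counts[(bw, knob)] += 1

for bw in BANDWIDTH_BUCKETS:
    best = max(KNOB_SETTINGS,
               key=lambda k: totals[(bw, k)] / max(counts[(bw, k)], 1))
    print(bw, "-> preferred parallelism:", best)
```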

Bio: Ravi Netravali is an Assistant Professor of Computer Science at UCLA. His research interests are broadly in computer systems and networking, with a recent focus on building practical systems to improve the performance and debugging of large-scale, distributed applications for both end users and developers. His research has been recognized with an NSF CAREER Award, a Google Faculty Research Award, an ACM SoCC Best Paper Award, and an IRTF Applied Networking Research Prize. Prior to joining UCLA, Netravali received a PhD in Computer Science from MIT in 2018.


Zoom information:
Topic: Ravi Netravali CS Seminar
Time: Mar 23, 2020 12:00 PM Eastern Time (US and Canada)

Join Zoom Meeting
https://princeton.zoom.us/j/645162020

Meeting ID: 645 162 020

One tap mobile
+13126266799,,645162020# US (Chicago)
+16465588656,,645162020# US (New York)

Dial by your location
+1 312 626 6799 US (Chicago)
+1 646 558 8656 US (New York)
+1 253 215 8782 US
+1 301 715 8592 US
+1 346 248 7799 US (Houston)
+1 669 900 6833 US (San Jose)
Meeting ID: 645 162 020
Find your local number: https://princeton.zoom.us/u/avcvlf1F3

Join by SIP
645162020@zoomcrc.com

Join by H.323
162.255.37.11 (US West)
162.255.36.11 (US East)
221.122.88.195 (China)
115.114.131.7 (India Mumbai)
115.114.115.7 (India Hyderabad)
213.19.144.110 (EMEA)
103.122.166.55 (Australia)
209.9.211.110 (Hong Kong)
64.211.144.160 (Brazil)
69.174.57.160 (Canada)
207.226.132.110 (Japan)
Meeting ID: 645 162 020

Deep Probabilistic Graphical Modeling

Date and Time
Thursday, March 12, 2020 - 12:30pm to 1:30pm
Location
Zoom (off campus)
Type
CS Department Colloquium Series
Host
Ryan Adams

***Due to the developing coronavirus situation, this talk will now be available for remote viewing via Zoom.  See below for full details.***

Adji Bousso Dieng
Abstract: Deep learning (DL) is a powerful approach to modeling complex and large scale data. However, DL models lack interpretable quantities and calibrated uncertainty. In contrast, probabilistic graphical modeling (PGM) provides a framework for formulating an interpretable generative process of data and a way to express uncertainty about what we do not know. How can we develop machine learning methods that bring together the expressivity of DL with the interpretability and calibration of PGM to build flexible models endowed with an interpretable latent structure that can be fit efficiently? I call this line of research deep probabilistic graphical modeling (DPGM). In this talk, I will discuss my work on developing DPGM both on the modeling and algorithmic fronts. In the first part of the talk I will show how DPGM enables learning document representations that are highly predictive of sentiment without requiring supervision. In the second part of the talk I will describe entropy-regularized adversarial learning, a scalable and generic algorithm for fitting DPGMs. 

Bio: Adji Bousso Dieng is a PhD candidate at Columbia University, where she is jointly advised by David Blei and John Paisley. Her research is in Artificial Intelligence and Statistics, bridging probabilistic graphical models and deep learning. Dieng is supported by a Dean Fellowship from Columbia University. She won a Microsoft Azure Research Award and a Google PhD Fellowship in Machine Learning. She was recognized as a rising star in machine learning by the University of Maryland. Prior to Columbia, Dieng worked as a Junior Professional Associate at the World Bank. She did her undergraduate studies in France, where she attended Lycée Henri IV and Télécom ParisTech, part of France's Grandes Écoles system. She spent the third year of Télécom ParisTech's curriculum at Cornell University, where she earned a master's degree in statistics.


Zoom information:
Topic: Adji Bousso Dieng CS Seminar
Time: Mar 12, 2020 12:30 PM Eastern Time (US and Canada)

Join Zoom Meeting
https://princeton.zoom.us/j/384273957 

Meeting ID: 384 273 957

One tap mobile
+16465588656,,384273957# US (New York)
+16699006833,,384273957# US (San Jose)

Dial by your location
+1 646 558 8656 US (New York)
+1 669 900 6833 US (San Jose)
Meeting ID: 384 273 957
Find your local number: https://princeton.zoom.us/u/abUHt2KPwU

Join by SIP
384273957@zoomcrc.com

Join by H.323
162.255.37.11 (US West)
162.255.36.11 (US East)
221.122.88.195 (China)
115.114.131.7 (India Mumbai)
115.114.115.7 (India Hyderabad)
213.19.144.110 (EMEA)
103.122.166.55 (Australia)
209.9.211.110 (Hong Kong)
64.211.144.160 (Brazil)
69.174.57.160 (Canada)
207.226.132.110 (Japan)
Meeting ID: 384 273 957

Sharing without Showing: Building Secure Collaborative Systems

Date and Time
Tuesday, March 10, 2020 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Amit Levy

***Due to the developing coronavirus situation, we want to reduce the number of attendees at invited talks this week.  Attendance at this talk will now be limited to "Princeton University faculty only".  For other people who want to watch the talk, it will be available by livestream only, via the Princeton Broadcast Center for individual viewing.  We will not be hosting a separate room for the livestream.***

Wenting Zheng
In many domains such as finance and medicine, organizations have encountered obstacles in data acquisition because their target applications need sensitive data that reside across multiple parties. However, such data cannot be shared today due to data privacy concerns, policy regulation, and business competition.

My graduate research focused on solving this problem by enabling organizations to run complex computations on the joint dataset without revealing their sensitive input to the other parties. My overall approach is to co-design systems with cryptography to build practical and functional systems that provide strong and provable security. In this talk, I will focus on two systems — Opaque and Helen — which secure SQL analytics and machine learning, respectively. My open-source systems have been used by organizations such as IBM Research, Ericsson, Alibaba, and Microsoft.
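
To make the idea of computing on a joint dataset without revealing inputs concrete, here is a minimal additive secret-sharing sketch. It is purely illustrative and not drawn from Opaque or Helen, which rely on much richer cryptographic and systems machinery; it only shows the basic principle that two parties can compute a joint result while neither sees the other's raw value.

```python
# Minimal additive secret-sharing sketch (illustrative only, not from the
# talk): two parties jointly compute a sum while each learns nothing about
# the other's private input beyond the final result.
import secrets

MODULUS = 2 ** 61 - 1

def share(value):
    """Split a value into two random shares that sum to it mod MODULUS."""
    r = secrets.randbelow(MODULUS)
    return r, (value - r) % MODULUS

# Each party splits its private input and sends one share to the other.
a_share1, a_share2 = share(1_000_000)   # party A's private value
b_share1, b_share2 = share(250_000)     # party B's private value

# Party 1 holds (a_share1, b_share1); party 2 holds (a_share2, b_share2).
# Each adds its shares locally; only the combined partials reveal the sum.
partial1 = (a_share1 + b_share1) % MODULUS
partial2 = (a_share2 + b_share2) % MODULUS
print((partial1 + partial2) % MODULUS)   # 1250000, with neither input revealed
```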

Bio: Wenting Zheng is a Ph.D. candidate at UC Berkeley, co-advised by Raluca Ada Popa and Ion Stoica. She completed her bachelor’s degree and master of engineering at MIT, where she was advised by Barbara Liskov. Her research interests are in computer systems, security, and applied cryptography. She held a Berkeley Fellowship from 2014 to 2016 and an IBM Research Fellowship from 2017 to 2018, and was an invited participant at the 2019 EECS Rising Stars workshop.

Designing Algorithms for Social Good

Date and Time
Monday, March 9, 2020 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series

Rediet Abebe
Algorithmic and artificial intelligence techniques show immense potential to deepen our understanding of socioeconomic inequality and inform interventions designed to improve access to opportunity. Interventions aimed at historically under-served communities are made particularly challenging by the fact that disadvantage and inequality are multifaceted, notoriously difficult to measure, and reinforced by feedback loops in underlying structures.

In this talk, we develop and analyze algorithmic and computational techniques to address these issues through two types of interventions: one in the form of allocating scarce societal resources and another in the form of improving access to information. We examine the ways in which techniques from algorithms, discrete optimization, and network and computational science can combat different forms of disadvantage, including susceptibility to income shocks, disparities in access to health information, and social segregation. We discuss current policy and practice informed by this work and close with a discussion of an emerging research area -- Mechanism Design for Social Good (MD4SG) -- around the use of algorithms, optimization, and mechanism design to address this category of problems.

Bio: Rediet Abebe is a Junior Fellow at the Harvard Society of Fellows. She holds a Ph.D. in computer science from Cornell University, where she was advised by Jon Kleinberg, as well as an M.S. in applied mathematics from Harvard University, an M.A. in mathematics from the University of Cambridge, and a B.A. in mathematics from Harvard College. Her research is in the fields of algorithms and AI, with a focus on discrete algorithms, optimization, network and computational science, and their applications to equity and social good concerns. As part of this research agenda, Abebe co-founded Mechanism Design for Social Good (MD4SG), a multi-institutional, interdisciplinary initiative working to improve access to opportunity. This initiative has participants from over 100 institutions in 20 countries and has been supported by Schmidt Futures, the MacArthur Foundation, and the Institute for New Economic Thinking.

Abebe's work has informed policy and practice at various organizations, including the Ethiopian Ministry of Education and the National Institutes of Health. In 2019, she served on the NIH Advisory Committee to the Director Working Group on AI, whose recommendations were unanimously approved by the General Director's advisory committee. Abebe was recently recognized with the 2019 MIT Technology Review 35 Innovators Under 35 award and honored as one to watch by the 2018 Bloomberg 50 list. She has presented her research in venues such as the National Academy of Sciences, the United Nations, and the Museum of Modern Art. Her work has been covered by outlets including Forbes, the Boston Globe, and the Washington Post. In 2017, Abebe co-founded Black in AI, a non-profit organization tackling diversity and inclusion issues in the field. Her research is deeply influenced by her upbringing in her hometown of Addis Ababa, Ethiopia.

This talk is being co-sponsored by CITP and the Department of Computer Science.

*Please note, this event is only open to the Princeton University community.

Lunch for talk attendees will be available at 12:00pm. 
To request accommodations for a disability, please contact Emily Lawrence, emilyl@cs.princeton.edu, 609-258-4624 at least one week prior to the event.

Software-Hardware systems for the Internet of Things

Date and Time
Thursday, March 5, 2020 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Kyle Jamieson

Omid Abari
Recently, there has been a huge interest in Internet of Things (IoT) systems, which bring the digital world into the physical world around us. However, barriers still remain to realizing the dream applications of IoT. One of the biggest challenges in building IoT systems is the huge diversity of their demands and constraints (size, energy, latency, throughput, etc.). For example, virtual reality and gaming applications require multiple gigabits-per-second throughput and millisecond latency. Tiny sensors spread around a greenhouse or smart home must be low-cost and batteryless to be sustainable in the long run. Today's networking technologies fall short in supporting these IoT applications with a hugely diverse set of constraints and demands. As such, they require distinct innovative solutions.

In this talk, I will describe how we can design a new class of networking technologies for IoT by designing software and hardware jointly, with an understanding of the intended application. In particular, I will present two examples of our solutions. The first solution tackles the throughput limitations of existing IoT networks by developing new millimeter wave devices and protocols, enabling many new IoT applications, such as untethered high-quality virtual reality. The second solution tackles the energy limitations of IoT networks by introducing new wireless devices that can sense and communicate without requiring any batteries. I will demonstrate how our solutions apply across diverse domains such as HCI, medicine, and agriculture. I will conclude the talk with future directions in IoT research, in terms of both technologies and applications.

Bio: Omid Abari is an Assistant Professor in the School of Computer Science at the University of Waterloo. He received his Ph.D. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology (MIT) in 2018. His research interests are in the area of computer networks and mobile systems, with applications to the Internet of Things (IoT). He currently leads the Intelligent Connectivity (ICON) Lab, where his team focuses on the design and implementation of novel software-hardware systems that deliver ubiquitous sensing, communication, and computing at scale. His work has been selected for GetMobile research highlights (2018, 2019) and has been featured by several media outlets, including Wired, TechCrunch, Engadget, IEEE Spectrum, and ACM Tech News.

*Please note, this event is only open to the Princeton University community.

Lunch for talk attendees will be available at 12:00pm. 
To request accommodations for a disability, please contact Emily Lawrence, emilyl@cs.princeton.edu, 609-258-4624 at least one week prior to the event.

The Value Alignment Problem in Artificial Intelligence

Date and Time
Monday, February 24, 2020 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Tom Griffiths

Dylan Hadfield-Menell
Much of our success in artificial intelligence stems from the adoption of a simple paradigm: specify an objective or goal, and then use optimization algorithms to identify a behavior (or predictor) that optimally achieves this goal. This has been true since the early days of AI (e.g., search algorithms such as A* that aim to find the optimal path to a goal state), and this paradigm is common to AI, statistics, control theory, operations research, and economics. Loosely speaking, the field has evaluated the intelligence of an AI system by how efficiently and effectively it optimizes for its objective. This talk will provide an overview of my thesis work, which proposes and explores the consequences of a simple, but consequential, shift in perspective: we should measure the intelligence of an AI system by its ability to optimize for our objectives.

In an ideal world, these measurements would be the same -- all we have to do is write down the correct objective! This is easier said than done: misalignment between the behavior a system designer actually wants and the behavior incentivized by the reward or loss functions they specify is both routine, observed in a wide variety of practical applications, and fundamental, a consequence of limited human cognitive capacity. This talk will build up a formal model of this value alignment problem as a cooperative human-robot interaction: an assistance game of partial information between a human principal and an autonomous agent. It will begin with a discussion of a simple instantiation of this game where the human designer takes one action, writing down a proxy objective, and the robot attempts to optimize for the true objective by treating the observed proxy as evidence about the intended goal. Next, I will generalize this model to introduce Cooperative Inverse Reinforcement Learning, a general and formal model of this assistance game, and discuss the design of efficient algorithms to solve it. The talk will conclude with a discussion of directions for further research, including applications to content recommendation and home robotics, the development of reliable and robust design environments for AI objectives, and the theoretical study of AI regulation by society as a value alignment problem with multiple human principals.
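
The toy sketch below, which is my own illustration rather than the speaker's model or code, captures that first instantiation: the robot observes the proxy objective the designer wrote down, treats it as noisy evidence about the true objective, and then chooses the action that maximizes expected reward under the resulting posterior. All objectives, likelihoods, and rewards here are invented for the example.

```python
# Minimal sketch (my own toy example, not the speaker's model): the robot
# observes a proxy objective, treats it as noisy evidence about the true
# objective, and acts under the resulting posterior instead of the proxy.
TRUE_OBJECTIVES = ["clean_room", "clean_room_and_spare_vase"]
ACTIONS = ["vacuum_carefully", "vacuum_fast"]

# True reward of each action under each candidate true objective.
true_reward = {
    ("clean_room", "vacuum_carefully"): 0.8,
    ("clean_room", "vacuum_fast"): 1.0,
    ("clean_room_and_spare_vase", "vacuum_carefully"): 0.8,
    ("clean_room_and_spare_vase", "vacuum_fast"): -1.0,  # breaks the vase
}

# Likelihood that a designer with a given true objective writes each proxy.
# Designers often omit side constraints they take for granted.
likelihood = {
    ("clean_room", "reward_cleanliness"): 0.9,
    ("clean_room", "reward_cleanliness_and_penalize_breakage"): 0.1,
    ("clean_room_and_spare_vase", "reward_cleanliness"): 0.6,
    ("clean_room_and_spare_vase", "reward_cleanliness_and_penalize_breakage"): 0.4,
}

prior = {obj: 0.5 for obj in TRUE_OBJECTIVES}

def posterior(observed_proxy):
    """Bayesian update over true objectives given the observed proxy."""
    unnorm = {obj: prior[obj] * likelihood[(obj, observed_proxy)]
              for obj in TRUE_OBJECTIVES}
    z = sum(unnorm.values())
    return {obj: p / z for obj, p in unnorm.items()}

post = posterior("reward_cleanliness")
best_action = max(ACTIONS,
                  key=lambda a: sum(post[obj] * true_reward[(obj, a)]
                                    for obj in TRUE_OBJECTIVES))
print(post)         # posterior over true objectives given the proxy
print(best_action)  # hedges against the chance the designer cares about the vase
```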

Bio: Dylan Hadfield-Menell is a final-year Ph.D. student at UC Berkeley, advised by Anca Dragan, Pieter Abbeel, and Stuart Russell. His research focuses on the value alignment problem in artificial intelligence. His goal is to design algorithms that learn about and pursue the intended goals of their users, designers, and society in general. His recent work has focused on algorithms for human-robot interaction with unknown preferences and on reliability engineering for learning systems.

*Please note, this event is only open to the Princeton University community.

Lunch for talk attendees will be available at 12:00pm. 
To request accommodations for a disability, please contact Emily Lawrence, emilyl@cs.princeton.edu, 609-258-4624 at least one week prior to the event.

Robot Learning and Planning

Date and Time
Thursday, February 20, 2020 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Elad Hazan

Aviv Tamar
How can we build autonomous robots that operate in unstructured and dynamic environments such as homes or hospitals? This problem has been investigated within several disciplines, including planning (motion planning, task planning, etc.) and reinforcement learning (RL). While both of these fields have witnessed tremendous progress, each has fundamental drawbacks: planning approaches require substantial manual engineering to map perception to a formal planning problem, while RL, which can operate directly on raw percepts, is data-hungry, cannot generalize to new tasks, and is ‘black box’ in nature.

In this talk, we present several studies that aim to mitigate these shortcomings by combining ideas from both planning and learning. We start by introducing value iteration networks, a type of differentiable planner that can be used within model-free RL to obtain better generalization. Next, we consider a practical robotic assembly problem, and show that motion planning, based on readily available CAD data, can be combined with RL to quickly learn policies for assembling tight-fitting objects. We conclude with our recent work on learning to imagine goal-directed visual plans. Motivated by humans’ remarkable capability to predict and plan complex manipulations of objects, and by recent advances such as GANs in imagining images, we present Visual Plan Imagination (VPI) — a new computational problem that combines image imagination and planning. In VPI, given off-policy image data from a dynamical system, the task is to ‘imagine’ image sequences that transition the system from start to goal. Key to our method is Causal InfoGAN, a deep generative model that can learn features that are compatible with strong planning algorithms. We demonstrate our approach on learning to imagine and execute robotic rope manipulation from image data.
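
For intuition about the planning computation that value iteration networks embed, the sketch below runs plain value iteration on a toy grid world. It is an illustrative example of mine, not the speaker's code; a real VIN expresses the Bellman backup with convolution and max operations so the whole planner is differentiable and can be trained end to end inside a model-free RL agent.

```python
# Minimal sketch of the planning computation inside a value iteration
# network: value iteration on a small grid world (toy example, not the
# speaker's code; a VIN implements these backups with conv/max layers).
import numpy as np

H, W = 5, 5
reward = np.full((H, W), -0.1)   # small step cost everywhere
reward[4, 4] = 1.0               # goal cell
gamma = 0.95
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

V = np.zeros((H, W))
for _ in range(50):                       # K iterations of Bellman backups
    Q = np.full((len(actions), H, W), -np.inf)
    for a, (dr, dc) in enumerate(actions):
        for r in range(H):
            for c in range(W):
                nr = min(max(r + dr, 0), H - 1)   # clamp moves at the walls
                nc = min(max(c + dc, 0), W - 1)
                Q[a, r, c] = reward[nr, nc] + gamma * V[nr, nc]
    V = Q.max(axis=0)                     # max over actions, as in a VIN

print(np.round(V, 2))   # values increase toward the goal at (4, 4)
```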

Bio:
Aviv Tamar is an Assistant Professor in the Department of Electrical Engineering at the Technion - Israel Institute of Technology. Previously, he was a postdoc at UC Berkeley with Prof. Pieter Abbeel, and prior to that he completed his PhD with Prof. Shie Mannor at the Technion. Aviv's research focuses on reinforcement learning, representation learning, and robotics. His work has been recognized with a NeurIPS Best Paper Award, a Google Faculty Award, and the Alon Fellowship for young researchers.

*Please note, this event is only open to the Princeton University community.

Lunch for talk attendees will be available at 12:00pm. 
To request accommodations for a disability, please contact Emily Lawrence, emilyl@cs.princeton.edu, 609-258-4624 at least one week prior to the event.

Building Machines that Discover Generalizable, Interpretable Knowledge

Date and Time
Tuesday, February 11, 2020 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Tom Griffiths

Kevin Ellis
Humans can learn to solve an endless range of problems (building, drawing, designing, coding, and cooking, to name a few) and need relatively modest amounts of experience to acquire any one new skill. Machines that can similarly master a diverse span of problems are surely far off.

Here, however, I will argue that program induction--an emerging AI technique--will play a role in building this more human-like AI. Program induction systems represent knowledge as programs, and learn by synthesizing code. Across three case studies in vision, natural language, and learning-to-learn, this talk will present program induction systems that take a step toward machines that can: acquire new knowledge from modest amounts of experience; strongly generalize that knowledge to extrapolate beyond their training; learn to represent their knowledge in an interpretable format; and are applicable to a broad spread of problems, from drawing pictures to discovering equations. Driving these developments is a new neuro-symbolic algorithm for Bayesian program synthesis. This algorithm integrates maturing program synthesis technologies with several complementary AI traditions (symbolic, probabilistic, and neural).
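
To ground the phrase "learn by synthesizing code", the sketch below enumerates compositions of a tiny domain-specific language until it finds a program consistent with a few input-output examples. The DSL and examples are my own toy construction, not the speaker's system; the systems described in the talk guide this kind of search with learned neural models and Bayesian priors rather than brute-force enumeration.

```python
# Minimal sketch (my own illustration, not the speaker's system): enumerative
# program synthesis over a tiny DSL, searching for a composition of primitives
# consistent with a handful of input-output examples.
from itertools import product

PRIMITIVES = {
    "double": lambda x: x * 2,
    "inc": lambda x: x + 1,
    "square": lambda x: x * x,
    "neg": lambda x: -x,
}

examples = [(1, 4), (2, 6), (3, 8)]   # consistent with (x + 1) * 2

def run(program, x):
    """Apply the named primitives in order to the input value."""
    for name in program:
        x = PRIMITIVES[name](x)
    return x

def synthesize(examples, max_depth=3):
    """Return the first primitive sequence that matches all examples."""
    names = list(PRIMITIVES)
    for depth in range(1, max_depth + 1):
        for program in product(names, repeat=depth):
            if all(run(program, x) == y for x, y in examples):
                return program
    return None

print(synthesize(examples))   # ('inc', 'double'), i.e. (x + 1) * 2
```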

Building a human-like machine learner is a very distant, long-term goal for the field. In the near term, program induction comes with a roadmap of practical problems to push on, such as language learning, scene understanding, and programming by examples, which this talk explores. But it's worth keeping these long-term goals in mind as well.

Bio: Kevin Ellis works across artificial intelligence, program synthesis, and machine learning. He develops learning algorithms that teach machines to write code, and applies these algorithms to problems in artificial intelligence. His work has appeared in machine learning venues (NeurIPS, ICLR, IJCAI) and cognitive science venues (CogSci, TOPICS). He has collaborated with researchers at Harvard, Brown, McGill, Siemens, and MIT, where he is a final-year graduate student advised by Josh Tenenbaum and Armando Solar-Lezama.

*Please note, this event is only open to the Princeton University community.

Lunch for talk attendees will be available at 12:00pm. 
To request accommodations for a disability, please contact Emily Lawrence, emilyl@cs.princeton.edu, 609-258-4624 at least one week prior to the event.

New Compilation Techniques for Reconfigurable Analog Devices

Date and Time
Tuesday, February 18, 2020 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Jennifer Rexford

Sara Achour
Reconfigurable analog devices are a powerful new computing substrate, especially appropriate for executing dynamical systems in an energy-efficient manner. These devices leverage the physical behavior of transistors to directly implement computation. Under this paradigm, voltages and currents within the device implement continuously evolving variables in the computation.

In this talk, I discuss compilation techniques for automatically configuring such devices to execute dynamical systems. I present Legno, the first compilation system that automatically targets a real reconfigurable analog device of this class. Legno synthesizes analog circuits from parametric and specialized analog blocks and accounts for analog noise, quantization error, operating range limitations, and manufacturing variations within the device. I evaluate Legno on applications from the biology, physics, and controls domains. The results demonstrate that these applications execute with acceptable error while consuming microjoules of energy. 
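
As a loose illustration of one concern mentioned above, operating-range limitations, the sketch below rescales a variable of a simple dynamical system so that its trajectory fits a hypothetical device range, then recovers the original quantity afterward. The system, the range, and the scaling factor are all invented for the example; Legno accounts for operating-range limitations (along with noise, quantization error, and manufacturing variation) as part of compilation.

```python
# Illustrative sketch only (not Legno): rescale a dynamical-system variable
# so its trajectory fits a hypothetical analog operating range, then recover
# the original variable from the scaled signal after simulation.
import numpy as np

# Original system: dx/dt = -0.5 * x with x(0) = 50, which exceeds a
# hypothetical +/-2 unit operating range. Substitute u = x / 25 so |u| stays
# in range; the dynamics keep the same form: du/dt = -0.5 * u, u(0) = 2.
OP_RANGE = 2.0
SCALE = 25.0

def simulate(u0, rate, dt=0.01, steps=1000):
    """Forward-Euler integration of du/dt = -rate * u."""
    u = u0
    traj = []
    for _ in range(steps):
        u += dt * (-rate * u)
        traj.append(u)
    return np.array(traj)

scaled = simulate(u0=50.0 / SCALE, rate=0.5)
assert np.all(np.abs(scaled) <= OP_RANGE)    # fits the hypothetical range
recovered = SCALE * scaled                   # undo the scaling in software
print(recovered[-1])                         # about 0.33, close to 50 * exp(-5)
```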

Bio: Sara Achour is a PhD candidate at the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology (MIT CSAIL) and an NSF Fellowship recipient. Her research focuses on new techniques and tools, specifically new programming languages, compilers, and runtime systems, that enable end users to easily develop computations that exploit the potential of emerging nontraditional computing platforms.

*Please note, this event is only open to the Princeton University community.

Lunch for talk attendees will be available at 12:00pm. 
To request accommodations for a disability, please contact Emily Lawrence, emilyl@cs.princeton.edu, 609-258-4624 at least one week prior to the event.

Automated Discovery of Machine Learning Optimizations

Date and Time
Wednesday, February 19, 2020 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Kai Li

Zhihao Jia
As an increasingly important workload, machine learning (ML) applications require performance optimization techniques different from those used in traditional runtimes and compilers. In particular, to accelerate ML applications, it is generally necessary to perform ML computations on heterogeneous hardware and to parallelize computations across multiple data dimensions, neither of which is even expressible in traditional compilers and runtimes. In this talk, I will describe my work on automated discovery of performance optimizations to accelerate ML computations.

TASO, the Tensor Algebra SuperOptimizer, optimizes the computation graphs of deep neural networks (DNNs) by automatically generating potential graph optimizations and formally verifying their correctness. TASO outperforms rule-based graph optimizers in existing ML systems (e.g., TensorFlow, TensorRT, and TVM) by up to 3x by automatically discovering novel graph optimizations, while also requiring significantly less human effort.
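
As a rough picture of what a graph substitution looks like, the sketch below checks one classic candidate rewrite, fusing two matrix multiplies that share an input into a single wider multiply, by testing it on random inputs. The example is mine, not TASO's; TASO generates candidate substitutions automatically and verifies their correctness formally rather than empirically.

```python
# Minimal sketch (my own toy check, not TASO): a candidate graph substitution
# (fusing two matmuls that share an input into one multiply against
# concatenated weights), validated here by random testing. TASO instead
# verifies such substitutions formally against operator specifications.
import numpy as np

def original_graph(x, w1, w2):
    return np.matmul(x, w1), np.matmul(x, w2)      # two separate matmuls

def rewritten_graph(x, w1, w2):
    fused = np.matmul(x, np.concatenate([w1, w2], axis=1))  # one wider matmul
    return fused[:, :w1.shape[1]], fused[:, w1.shape[1]:]   # split outputs

rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.standard_normal((4, 8))
    w1 = rng.standard_normal((8, 3))
    w2 = rng.standard_normal((8, 5))
    for a, b in zip(original_graph(x, w1, w2), rewritten_graph(x, w1, w2)):
        assert np.allclose(a, b)

print("substitution holds on 100 random instances")
```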

FlexFlow is a system for accelerating distributed DNN training. FlexFlow identifies parallelization dimensions not considered in existing ML systems (e.g., TensorFlow and PyTorch) and automatically discovers fast parallelization strategies for a specific parallel machine. Companies and national labs are using FlexFlow to train production ML models that do not scale well in current ML systems, achieving over 10x performance improvement.

I will also outline future research directions for further automating ML systems, such as codesigning ML models, software systems, and hardware backends for end-to-end ML deployment.

Bio: Zhihao Jia is a Ph.D. candidate in the Computer Science Department at Stanford University working with Alex Aiken and Matei Zaharia. His research interests lie at the intersection of computer systems and machine learning, with a focus on building efficient, scalable, and high-performance systems for ML computations.

*Please note, this event is only open to the Princeton University community.

Lunch for talk attendees will be available at 12:00pm. 
To request accommodations for a disability, please contact Emily Lawrence, emilyl@cs.princeton.edu, 609-258-4624 at least one week prior to the event.
