
CS Department Colloquium Series

Unleashing Hardware Potential through Better OS Abstractions

Date and Time
Thursday, March 24, 2016 - 12:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Michael Freedman

Adam Belay
Datacenter workloads have demanding performance requirements, including the simultaneous need for high throughput, low tail latency, and high server utilization. While modern hardware is compatible with these goals, modern operating systems remain a bottleneck. Better OS abstractions could significantly improve performance, yet deploying these abstractions has become intractable given the size and complexity of today's systems.

I will first discuss Dune, a kernel extension that allows OS developers to sidestep software and hardware complexity by running an OS within an ordinary Linux process.  With Dune, developers can both access the capabilities of raw hardware and fall back on the functionality of a full Linux environment where convenient. I will then discuss IX and Shinjuku, two generations of new datacenter-focused operating systems that were enabled by Dune. IX provides a novel system call interface that greatly improves network throughput without sacrificing latency. For example, IX improves Memcached's TCP throughput by 5x over Linux. Shinjuku, an ongoing research effort, aims to significantly increase CPU utilization through a centralized approach to intra-server load balancing.

Adam Belay is a Ph.D. candidate in Computer Science at Stanford University, where he is a member of the Secure Computer Systems Group and the Multiscale Architecture and Systems Team. Previously, he worked on storage virtualization at VMware Inc. and contributed substantial power management code to the Linux Kernel project.  Adam's research area is operating systems and networking.  Much of his work has focused on restructuring computer systems so that developers can more easily reach the full performance potential of hardware. Adam has received a Stanford Graduate Fellowship, a VMware Graduate Fellowship, and an OSDI Jay Lepreau Best Paper Award.

The Power of Nonconvex Paradigms for High-Dimensional Estimation

Date and Time
Monday, February 29, 2016 - 4:30pm to 5:30pm
Location
Computer Science Large Auditorium (Room 104)
Type
CS Department Colloquium Series
Host
Electrical Engineering and Computer Science Departments

In various scenarios in the information sciences, one wishes to estimate a large number of parameters from highly incomplete or imperfect data samples. A growing body of recent work has demonstrated the effectiveness of convex relaxation, in particular semidefinite programming, for solving many problems of this kind. However, the computational cost of such convex paradigms is often unsatisfactory, which limits their applicability to large-dimensional data. This talk follows another route: by formulating these problems as nonconvex programs, we attempt to optimize the nonconvex objectives directly. To illustrate the power of this strategy, we present two concrete stories. The first involves solving a random quadratic system of equations, which spans many applications ranging from the century-old phase retrieval problem to various latent-variable models in machine learning. The second is about recovering a collection of discrete variables from noisy pairwise difference measurements, which arises when one wishes to jointly align multiple images or to retrieve genome phases from paired sequencing reads. We propose novel nonconvex paradigms for solving these two problems. In both cases, the proposed solutions run in linear time while achieving a statistical accuracy that is nearly unimprovable.
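
As a rough illustration of the first story (an assumed toy setup, not the speaker's algorithm, which also involves a careful spectral initialization and step-size analysis), plain gradient descent on the natural least-squares objective for a random quadratic system y_i = (a_i^T x)^2 looks like this:

```python
import numpy as np

# Toy random quadratic system: y_i = (a_i^T x)^2.
# Sketch only; the methods discussed in the talk also use a carefully
# chosen (e.g., spectral) initialization and analyzed step sizes.
rng = np.random.default_rng(0)
n, m = 20, 200                          # dimension and number of measurements
x_true = rng.normal(size=n)
A = rng.normal(size=(m, n))
y = (A @ x_true) ** 2

def gradient(z):
    # Gradient of f(z) = (1/4m) * sum_i ((a_i^T z)^2 - y_i)^2
    Az = A @ z
    return (A.T @ ((Az ** 2 - y) * Az)) / m

z = x_true + 0.1 * rng.normal(size=n)   # assume a warm start near the truth
for _ in range(1000):
    z = z - 0.01 * gradient(z)

# The solution is only identifiable up to a global sign.
err = min(np.linalg.norm(z - x_true), np.linalg.norm(z + x_true))
print("relative error:", err / np.linalg.norm(x_true))
```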

Yuxin Chen is currently a postdoctoral scholar in the Department of Statistics at Stanford University, supervised by Prof. Emmanuel Candès. He received his Ph.D. in Electrical Engineering and an M.S. in Statistics from Stanford University, an M.S. in Electrical and Computer Engineering from the University of Texas at Austin, and a B.E. in Microelectronics from Tsinghua University. His research interests include high-dimensional structured estimation, convex and nonconvex optimization, statistical learning, and information theory.

Organizing Computation for High-Performance Visual Computing

Date and Time
Tuesday, March 22, 2016 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Szymon Rusinkiewicz

Future visual computing applications—from photorealistic real-time rendering, to 4D light field cameras, to pervasive sensing and computer vision—demand orders of magnitude more computation than we currently have. From data centers to mobile devices, performance and energy scaling is limited by locality (the distance over which data has to move, e.g., from nearby caches, far away main memory, or across networks) and parallelism. Because of this, I argue that we should think of the performance and efficiency of an application as determined not just by the algorithm and the hardware on which it runs, but critically also by the organization of computations and data. For algorithms with the same complexity—even the exact same set of arithmetic operations and data—executing on the same hardware, the order and granularity of execution and placement of data can easily change performance by an order of magnitude because of locality and parallelism. To extract the full potential of our machines, we must treat the organization of computation as a first-class concern while working across all levels, from algorithms and data structures, to compilers, to hardware.

 This talk will present facets of this philosophy in systems I have built for visual computing applications from image processing and vision, to 3D rendering, simulation, optimization, and 3D printing. I will show that, for data-parallel pipelines common in graphics, imaging, and other data-intensive applications, the organization of computations and data for a given algorithm is constrained by a fundamental tension between parallelism, locality, and redundant computation of shared values. I will focus particularly on the Halide language and compiler for image processing, which explicitly separates what computations define an algorithm from the choices of organization which determine parallelism, locality, memory footprint, and synchronization. I will show how this approach can enable much simpler programs to deliver performance often many times faster than the best prior hand-tuned C, assembly, and CUDA implementations, while scaling across radically different architectures, from ARM cores, to massively parallel GPUs, to FPGAs and custom ASICs.
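
The following sketch (plain Python, not the Halide API) makes the organization point concrete: the same two-stage blur arithmetic, computed either breadth-first with a fully materialized intermediate or fused per output tile, trading locality against redundant computation of shared values.

```python
# Illustrative sketch only (plain Python, not Halide): the same 1D two-stage
# blur arithmetic organized in two different ways.
def blur_x(inp, i):
    return (inp[i - 1] + inp[i] + inp[i + 1]) / 3.0

def breadth_first(inp):
    # Organization 1: materialize the entire intermediate stage first.
    # Maximum producer-consumer distance, but no redundant work.
    bx = [blur_x(inp, i) for i in range(1, len(inp) - 1)]
    return [(bx[i - 1] + bx[i] + bx[i + 1]) / 3.0 for i in range(1, len(bx) - 1)]

def fused_tiles(inp, tile=64):
    # Organization 2: compute the intermediate per output tile.
    # Better locality (the intermediate stays tiny), at the cost of
    # recomputing blur_x values shared across tile boundaries.
    out = []
    n_out = len(inp) - 4
    for start in range(0, n_out, tile):
        stop = min(start + tile, n_out)
        bx = [blur_x(inp, i) for i in range(start + 1, stop + 3)]
        out += [(bx[j] + bx[j + 1] + bx[j + 2]) / 3.0 for j in range(stop - start)]
    return out

signal = [float(i % 7) for i in range(1000)]
assert breadth_first(signal) == fused_tiles(signal)  # same values, different organization
```

Halide's contribution is to let the programmer state the algorithm once and then explore such organizational choices (tiling, fusion, vectorization, parallelism) separately.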

 Jonathan Ragan-Kelley is a postdoc in computer science at Stanford. He works on high-efficiency visual computing, including systems, compilers, and architectures for image processing, vision, 3D rendering, 3D printing, physical simulation, and scientific computing.

He earned his PhD in Computer Science at MIT in 2014, where he built the Halide language for high-performance image processing. Halide is now used throughout industry to deploy code to hundreds of millions of smartphones and process tens of billions of images per day. Jonathan previously built the Lightspeed preview system, which was used on over a dozen films at Industrial Light & Magic and was a finalist for an Academy Technical Achievement Award. He has worked in GPU architecture, compilers, and research at NVIDIA, Intel, and ATI.

 

Computational Tools for Robot Design: A Composition Approach

Date and Time
Thursday, February 18, 2016 - 3:00pm to 4:00pm
Location
Bowen Hall 222
Type
CS Department Colloquium Series
Host
Electrical Engineering and Computer Science Departments

Cynthia Sung
As robots become more prevalent in society, they must develop an ability to deal with more diverse situations. This ability entails customizability not only of software intelligence, but also of hardware. However, designing a functional robot remains challenging and often involves many iterations of design and testing, even for skilled designers. My goal is to create computational tools for making functional machines, allowing future designers to quickly improvise new hardware in response to unexpected circumstances.

In this talk, I will discuss one possible approach to automated design using composition. I will describe our origami-inspired print-and-fold process that allows entire robots to be fabricated within a few hours, and I will demonstrate how foldable modules can be composed together to create foldable mechanisms and robots. The modules are represented parametrically, enabling a small set of modules to describe a wide range of geometries and also allowing geometries to be optimized in a straightforward manner. I will also introduce a tool that we have developed that combines this composition approach with simulations to help human designers of all skill levels to design and fabricate custom functional robots.

Cynthia Sung is a Ph.D. candidate in the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology (MIT). She received a B.S. in Mechanical Engineering from Rice University in 2011 and an M.S. in Electrical Engineering and Computer Science from MIT in 2013. Her research interests include computational design, folding theory, and rapid fabrication, and her current work focuses on algorithms for synthesis and analysis of engineering designs.

Control of agile robots in complex environments with formal safety guarantees

Date and Time
Tuesday, February 16, 2016 - 3:00pm to 4:00pm
Location
Bowen Hall 222
Type
CS Department Colloquium Series
Host
Electrical Engineering and Computer Science Departments

Anirudha Majumdar
The goal of my research is to develop algorithmic and theoretical techniques that push highly agile robotic systems to the brink of their hardware limits while guaranteeing that they operate in a safe manner despite uncertainty in the environment and dynamics.

In this talk, I will describe my work on algorithms for the synthesis of feedback controllers that come with associated formal guarantees on the stability of the robot and show how these controllers and certificates of stability can be used for robust planning in environments previously unseen by the system. In order to make these results possible, my work connects deeply to computational tools such as sums-of-squares (SOS) programming and semidefinite programming from the theory of mathematical optimization, along with approaches from nonlinear control theory.
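
As a drastically simplified analogue of such certificates (linear dynamics and a quadratic Lyapunov function rather than the nonlinear systems and SOS programs in the talk, with an assumed toy system), one can compute a stability certificate by solving a Lyapunov equation; SOS programming generalizes this kind of check to polynomial dynamics via semidefinite programming.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Simplified linear analogue of a stability certificate (toy example, not
# the talk's SOS machinery): for xdot = A x, solve A^T P + P A = -Q with
# Q > 0; if P > 0, then V(x) = x^T P x certifies stability.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])                 # toy closed-loop dynamics
Q = np.eye(2)

P = solve_continuous_lyapunov(A.T, -Q)       # solves A^T P + P A = -Q
eigvals = np.linalg.eigvalsh(P)
print("P =\n", P)
print("certificate valid (P > 0):", bool(np.all(eigvals > 0)))
```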

I will describe this work in the context of the problem of high-speed unmanned aerial vehicle (UAV) flight through cluttered environments previously unseen by the robot. In this context, the tools I have developed allow us to guarantee that the robot will fly through its environment in a collision-free manner despite uncertainty in the dynamics (e.g., wind gusts or modeling errors). The resulting hardware demonstrations on a fixed-wing airplane constitute one of the first examples of provably safe and robust control for robotic systems with complex nonlinear dynamics that need to plan in realtime in environments with complex geometric constraints.

Anirudha Majumdar is a Ph.D. candidate in the Electrical Engineering and Computer Science department at MIT. He is a member of the Robot Locomotion Group at the Computer Science and Artificial Intelligence Lab and is advised by Prof. Russ Tedrake. Ani received his undergraduate degree in Mechanical Engineering and Mathematics from the University of Pennsylvania, where he was a member of the GRASP lab. His research is primarily in robotics: he works on algorithms for controlling highly dynamic robots such as unmanned aerial vehicles with formal guarantees on the safety of the system. Ani's research has been recognized by the Siebel Foundation Scholarship and the Best Conference Paper Award at the International Conference on Robotics and Automation (ICRA) 2013.

Faster algorithms for fundamental convex problems and their applications in combinatorial optimization

Date and Time
Tuesday, April 5, 2016 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Sanjeev Arora

Convex optimization has been studied extensively and is a prominent tool in various areas such as combinatorial optimization, data analysis, operations research, and scientific computing.  Each field has developed specialized tools including data structures, sampling methods, and dimension reduction. In the past several years, I have been combining and improving the optimization techniques from different fields to design faster optimization algorithms. 

 In this talk, I will discuss my work in this direction and illustrate it through my results on linear programming and general convex optimization. In particular, I will present a new algorithm for solving linear programs, which gives the first improvement to the running time for linear programming in 25 years. Then, I will present the first nearly cubic time algorithm for solving general convex optimization problems. Furthermore, I will discuss how these two results can be used to improve the running time of many classical combinatorial problems such as maximum flow and submodular function minimization.
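
To give a concrete sense of the problem class (not of the talk's algorithms), here is a toy maximum-flow instance written as a linear program and solved with an off-the-shelf solver; the graph and capacities are made up for illustration.

```python
from scipy.optimize import linprog

# Toy max-flow as a linear program (illustrates the problem class only).
# Graph: s->a (cap 3), s->b (cap 2), a->b (cap 1), a->t (cap 2), b->t (cap 3).
# Variables: flows [f_sa, f_sb, f_ab, f_at, f_bt].
c = [-1, -1, 0, 0, 0]                   # maximize f_sa + f_sb (flow out of s)
A_eq = [[1, 0, -1, -1, 0],              # flow conservation at node a
        [0, 1,  1,  0, -1]]             # flow conservation at node b
b_eq = [0, 0]
bounds = [(0, 3), (0, 2), (0, 1), (0, 2), (0, 3)]   # capacity constraints

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("max flow:", -res.fun)            # expected: 5.0
print("edge flows:", res.x)
```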

 This talk will assume no prior knowledge of optimization.

Yin Tat Lee is a Ph.D. candidate in the department of mathematics at the Massachusetts Institute of Technology. He is interested in designing faster algorithms, particularly for problems in optimization. Since he began his Ph.D. in 2012, he has combined ideas from continuous and discrete mathematics to substantially advance the state-of-the-art for solving many fundamental problems in computer science, such as linear programming, maximum flow, and submodular function minimization. He has received a variety of awards, including the Best Student Paper Award at FOCS 2015, Best Paper Award at SODA 2014, Best Paper Award and Best Student Paper Award at FOCS 2014, and Notable Article in Computing in 2014 by Computing Reviews.

Fully Verified Outsourced Computation

Date and Time
Friday, February 26, 2016 - 12:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Arvind Narayanan

Bryan Parno
Frequent headline-grabbing data breaches suggest that current best practices for safeguarding personal data are woefully inadequate.  To try to move beyond the cycle of attacks and patches we see today, I design and build systems with formal end-to-end guarantees.  For example, to provide strong guarantees for outsourced computations, I developed a new cryptographic framework, verifiable computation, which allows clients to outsource general computations to completely untrusted services and efficiently verify the correctness of each returned result.  Through improvements to the theory and the underlying systems, we reduced the costs of verification by over twenty orders of magnitude.  As a result, verifiable computation is now a thriving research area that has produced several startups, as well as enhancements to the security and privacy of X.509, MapReduce, and Bitcoin.
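
The protocols in the talk are cryptographic and far more general, but the flavor of verifying a result much more cheaply than recomputing it is captured by a classical, non-cryptographic example, Freivalds' algorithm for checking an outsourced matrix product:

```python
import numpy as np

# Freivalds' algorithm: probabilistically check a claimed product C = A @ B
# in O(n^2) time per round instead of redoing the O(n^3) multiplication.
# (A classical illustration of cheap verification, not the cryptographic
# verifiable-computation protocols described in the talk.)
def freivalds_check(A, B, C, rounds=20):
    n = C.shape[1]
    for _ in range(rounds):
        r = np.random.randint(0, 2, size=n)       # random 0/1 vector
        if not np.array_equal(A @ (B @ r), C @ r):
            return False                           # definitely wrong
    return True                                    # correct with high probability

rng = np.random.default_rng(1)
A = rng.integers(0, 10, size=(300, 300))
B = rng.integers(0, 10, size=(300, 300))
C_good = A @ B
C_bad = C_good.copy()
C_bad[0, 0] += 1

print(freivalds_check(A, B, C_good))   # True
print(freivalds_check(A, B, C_bad))    # False (with overwhelming probability)
```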

While verifiable computation provides strong guarantees, even the best cryptographic system is useless if implemented badly, applied incorrectly, or used in a vulnerable system.  Thus, I have led a team of researchers and engineers in the Ironclad project, working to expand formal software verification to provide end-to-end guarantees about the security and reliability of complex systems.  By creating a set of new tools and methodologies, Ironclad produced the first complete stack of verified-secure software.  We also recently developed the first methodology for verifying both the safety and liveness of complex distributed systems implementations. While interesting challenges remain, I expect that verification will fundamentally improve the software that underpins our digital and physical infrastructure.

Bryan Parno is a Researcher in the Security and Privacy Group at Microsoft Research.  After receiving a Bachelor's degree from Harvard College, he completed his PhD at Carnegie Mellon University, where his dissertation won the 2010 ACM Doctoral Dissertation Award.  In 2011, he was selected for Forbes' 30-Under-30 Science List.  He formalized and worked to optimize verifiable computation, receiving a Best Paper Award at the IEEE Symposium on Security and Privacy for his advances.  He coauthored a book on Bootstrapping Trust in Modern Computers, and his work in that area has been incorporated into the latest security enhancements in Intel CPUs. His research into security for new application models was incorporated into Windows and received Best Paper Awards at the IEEE Symposium on Security and Privacy and the USENIX Symposium on Networked Systems Design and Implementation.  He has recently extended his interest in bootstrapping trust to the problem of building practical, formally verified secure systems. His other research interests include user authentication, secure network protocols, and security in constrained environments (e.g., RFID tags, sensor networks, or vehicles).
 

Exploiting compositionality to explore a large space of model structures

Date and Time
Thursday, April 7, 2016 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Barbara Engelhardt

I will present flexible algorithms for model discovery and model fitting which apply to broad, open-ended classes of models, yet which also incorporate model-specific algorithmic insights. First, I will introduce a framework for building probabilistic models compositionally out of common modeling motifs, such as clustering, sparsity, and dimensionality reduction. This compositional framework yields a variety of existing models as special cases. We can flexibly perform posterior inference across this large, open-ended space of models by composing sophisticated inference algorithms carefully designed for the individual modeling motifs. An automatic structure search procedure over this space of models yields sensible analyses of datasets as diverse as motion capture, natural image patches, and Senate voting records, all using a single software package with no hand-tuned metaparameters. Applying a similar compositional structure search procedure to Gaussian Process models yields interpretable decompositions of diverse time series datasets and enables automatic generation of natural language reports.  Finally, compositional structure search depends crucially on the estimation of intractable likelihoods. I will briefly outline an approach for obtaining precise likelihood estimates with rigorous tail bounds by sandwiching the true value between stochastic upper and lower bounds.
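
As a toy flavor of compositional modeling with Gaussian Processes (a hand-written kernel composition, not the talk's automatic structure search), one can sum simple kernels to model a smooth trend plus a periodic component plus noise, for example with scikit-learn:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

# Toy illustration of compositional GP modeling (not the talk's automatic
# structure search): composite kernel = smooth trend + periodic + noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)[:, None]
y = 0.5 * t.ravel() + np.sin(2 * np.pi * t.ravel()) + 0.2 * rng.normal(size=200)

kernel = (RBF(length_scale=5.0)                               # long-term smooth trend
          + ExpSineSquared(length_scale=1.0, periodicity=1.0)  # periodic component
          + WhiteKernel(noise_level=0.1))                      # observation noise
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)

print(gp.kernel_)       # fitted hyperparameters of each component
print(gp.score(t, y))   # in-sample fit; a structure search would compare many
                        # such compositions on held-out likelihood instead
```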

 

Roger Grosse is a Postdoctoral Fellow in the University of Toronto machine learning group. He received his Ph.D. in computer science from MIT under the supervision of Bill Freeman. He is a recipient of the NDSEG Graduate Fellowship, the Banting Postdoctoral Fellowship, and outstanding paper awards at the International Conference on Machine Learning (ICML) and the Conference on Uncertainty in Artificial Intelligence (UAI). He is also a co-creator of Metacademy, an open-source web site for developing personalized learning plans in machine learning and related fields.

Interacting with Personal Fabrication Machines

Date and Time
Monday, March 28, 2016 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Barbara Engelhardt

In anticipation of 3D printers reaching millions of users, I am investigating how to allow future users to interact with the new hardware. I present a series of interactive software+hardware systems that I created to answer this question. They are characterized by two main properties. First, they produce physical output quickly, allowing users not only to see their results, but also to touch and test their mechanical properties as users work towards a solution. Second, the systems allow users to interact directly with the workpiece, i.e., rather than using a digital 3D editor, users manipulate the workpiece located inside the 3D printer by pointing at it, causing the machine to then modify the workpiece accordingly. I put my research into perspective by drawing analogies to the evolution of interactive computing from batch processing, to turn taking, to direct manipulation.

Stefanie Mueller is a PhD student working with Patrick Baudisch at the Human-Computer Interaction Lab at Hasso Plattner Institute. In her research, she develops novel hardware and software systems that advance personal fabrication technologies. Stefanie has published 10 papers at the most selective HCI venues, CHI and UIST, for which she received a best paper award and two best paper nominations. She is also serving on the CHI and UIST program committees as an associate chair. In addition, Stefanie has been an invited speaker at universities and research labs, such as MIT, Stanford, UC Berkeley, Microsoft Research, Disney Research, and Adobe Research.

Composing differentiable procedures for modeling, inference, and optimization

Date and Time
Thursday, February 18, 2016 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Barbara Engelhardt

Much recent success in machine learning has come from optimizing simple feedforward procedures, such as neural networks, using gradients.  Surprisingly, many complex procedures, such as message passing, filtering, inference, and even optimization itself, can be meaningfully differentiated through as well.  Composing these procedures lets us build sophisticated models that generalize existing methods but retain their good properties.  We'll show applications to chemical design, gradient-based tuning of optimization procedures, and training procedures that don't require cross-validation.
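
A small hypothetical example of differentiating through optimization itself (written with JAX, which is not necessarily the tooling used in the talk): compute the gradient of a validation loss with respect to the learning rate of an inner gradient-descent loop.

```python
import jax.numpy as jnp
from jax import grad

# Hypothetical sketch: differentiate a validation loss with respect to the
# learning rate of an inner gradient-descent loop (gradient-based tuning
# of an optimization procedure).
def loss(w, X, y):
    return jnp.mean((X @ w - y) ** 2)

def train_then_validate(lr, X_tr, y_tr, X_val, y_val, steps=100):
    w = jnp.zeros(X_tr.shape[1])
    dloss = grad(loss)                      # gradient of the inner objective
    for _ in range(steps):                  # inner optimization loop
        w = w - lr * dloss(w, X_tr, y_tr)
    return loss(w, X_val, y_val)

# Toy data: a shared linear model with separate train/validation splits.
X_tr = jnp.linspace(-1.0, 1.0, 50)[:, None]
y_tr = 3.0 * X_tr[:, 0]
X_val = jnp.linspace(-1.0, 1.0, 20)[:, None]
y_val = 3.0 * X_val[:, 0]

meta_grad = grad(train_then_validate)       # d(validation loss) / d(learning rate)
print(meta_grad(0.1, X_tr, y_tr, X_val, y_val))
```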

David Duvenaud is a postdoc in the Harvard Intelligent Probabilistic Systems group, working with Prof. Ryan Adams on model-based optimization, synthetic chemistry, and neural networks.  He did his Ph.D. at the University of Cambridge with Carl Rasmussen and Zoubin Ghahramani. Before that, he worked on machine vision with Kevin Murphy at the University of British Columbia and later at Google Research.  David also co-founded Invenia, an energy forecasting and trading firm.
