CS Department Colloquium Series

Iterative Methods, Combinatorial Optimization, and Linear Programming Beyond the Universal Barrier

Date and Time
Monday, March 23, 2015 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Sanjeev Arora
Aaron Sidford

Over the past decade there has been a revolution in the field of algorithmic graph theory. Using techniques from disparate areas of computer science, ranging from numerical analysis, to data structures, to continuous optimization, there have been numerous breakthroughs in improving the running time for solving classic problems in combinatorial optimization. 
 
In this talk I will discuss my work in this area and show how it has led to both improved running times for solving fundamental problems in combinatorial optimization and the creation of improved iterative methods for solving classic continuous optimization problems. After briefly discussing this high-level research program, I will provide a more detailed illustration through my recent work on linear programming and the maximum flow problem. 
 
In particular, I will present a new algorithm for solving linear programs that improves upon the convergence rate of previous state-of-the-art methods: whereas the convergence rate of the previous efficient methods depended on the larger of the number of variables and the number of constraints, ours depends on the smaller. Our algorithm approximately matches the convergence rate of the "universal barrier" of Nesterov and Nemirovskii while greatly decreasing the iteration costs. Furthermore, I will discuss how our method outperforms the universal barrier in particular cases, yielding the fastest known running times for solving dense instances of the directed maximum flow problem and, more generally, minimum cost flow.
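To make the comparison concrete, here is a rough sketch of the iteration counts at stake, based on the published work this talk draws on (logarithmic factors and the error dependence are elided). For a linear program

    \min_{x \,:\, Ax \ge b} c^\top x, \qquad A \in \mathbb{R}^{m \times n}, \quad m \ge n,

standard log-barrier interior point methods need roughly O(\sqrt{m}) iterations; the universal barrier needs O(\sqrt{n}) iterations, but each iteration is very expensive; and the method described here needs \tilde{O}(\sqrt{\operatorname{rank}(A)}) \le \tilde{O}(\sqrt{\min(m,n)}) iterations while keeping the per-iteration cost low.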
 
This talk will assume no previous experience with linear programming algorithms, convex optimization, or graph theory.
 
Aaron Sidford is a PhD student in the electrical engineering and computer science department at the Massachusetts Institute of Technology, advised by Jonathan Kelner. His research interests lie broadly in the theory of computation and the design of efficient algorithms. He is particularly interested in problems at the intersection of combinatorial optimization and numerical analysis. Since he began his PhD in 2011, his research has focused on improving the running time for solving classic problems in algorithmic graph theory while using and improving tools ranging from convex analysis to data structures to spectral graph theory.

Deterministic Transaction Processing

Date and Time
Thursday, March 12, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Nick Feamster

Daniel Abadi

The database system was one of the first highly concurrent systems ever designed, and has served as a blueprint for the design of many subsequent concurrent computing systems. The decision to allow the system to process concurrent transactions nondeterministically has led to countless headaches in debugging, security, replication, and general code complexity. In this talk, we will discuss a fundamentally different design for concurrent execution of transactions in database systems that guarantees that the final state of the database is deterministically prescribed by the input to the system. We will look at the consequences of the deterministic design, from throughput and latency to replication and recovery. We will also discuss a radically different mechanism for processing transactions that is enabled by the deterministic execution framework: lazy transactional processing. Instead of immediately processing transactions as they enter the system, transactions are only partially processed, with the remainder of the processing performed at an optimal time for the database system. We will discuss the cache locality, load balancing, and performance benefits of our lazy processing framework.
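As a minimal sketch of the core idea (illustrative only; the speaker's actual systems are described in his publications), deterministic execution can be factored into a sequencing layer that fixes a global transaction order before execution and an execution layer that replays that order, so the final state is a pure function of the input log:

    # Minimal sketch of deterministic transaction processing (illustrative only).
    class DeterministicDB:
        def __init__(self):
            self.state = {}   # key -> value
            self.log = []     # globally ordered transaction log

        def sequence(self, txn):
            """Sequencing layer: assign a global position before any execution."""
            self.log.append(txn)
            return len(self.log) - 1

        def replay(self):
            """Execution layer: apply transactions strictly in log order. Every
            replica that replays the same log reaches exactly the same state."""
            for txn in self.log:
                txn(self.state)

    db = DeterministicDB()
    db.sequence(lambda s: s.update(alice=100))
    db.sequence(lambda s: s.update(bob=s.get("alice", 0) // 2))
    db.replay()
    print(db.state)  # {'alice': 100, 'bob': 50} on every replica

Replication falls out almost for free in this design: shipping the ordered input log to a replica and replaying it yields an identical copy, with no coordination needed on execution-time race outcomes.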

Daniel Abadi is a member of the computer science faculty at Yale University, where he performs research on database system architecture and implementation, especially at the intersection with scalable and distributed systems.  He is known for the development of the storage and query execution engines of the C-Store (column-oriented database) prototype, which was commercialized by Vertica and eventually acquired by Hewlett-Packard. More recently, his HadoopDB research on fault-tolerant scalable analytical database systems was commercialized by Hadapt and acquired last summer by Teradata. Abadi has been a recipient of a Churchill Scholarship, an NSF CAREER Award, a Sloan Research Fellowship, the VLDB Early Career Researcher Award, the SIGMOD Jim Gray Doctoral Dissertation Award, and a VLDB best paper award.

Verifying distributed car and aircraft systems with logic and refinement

Date and Time
Tuesday, March 3, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Aarti Gupta

Sarah Loos

Formal methods and theorem proving have been used successfully in many discrete applications, such as chip design and verification, but computation is not confined to the discrete world.  Increasingly, we depend on discrete software to control safety-critical components of continuous physical systems. It is vital that these hybrid (discrete and continuous) systems behave as designed, or they may cause more harm than the problems they are meant to solve. In this talk, I will present several challenges associated with verifying hybrid systems and show how we use differential dynamic logics to ensure safety for hybrid systems under a continuously infinite range of circumstances and over an unbounded time horizon.
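To give a flavor of the formalism (a textbook-style example, not one of the talk's case studies): in differential dynamic logic, safety is stated with a box modality over a hybrid program that mixes discrete assignments, differential equations, and unbounded repetition. For instance,

    v \ge 0 \land A \ge 0 \;\rightarrow\; \big[\,\{\, a := A;\ \{x' = v,\ v' = a\} \,\}^{*}\,\big]\ v \ge 0

says that a vehicle starting with nonnegative velocity, repeatedly choosing acceleration A \ge 0 and then evolving along the dynamics x' = v, v' = a, never ends up with negative velocity, no matter how many control cycles occur or how long each continuous phase runs.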

I will discuss the first formal proof of safety for a distributed car control system and the first formal verification of distributed, flyable, collision avoidance protocols for aircraft. In both cases, our verification holds for an unbounded number of cars or aircraft, so the systems are protected against unexpected emergent behavior, which simulation and testing may miss.

Providing the first formal proofs of these systems required the development of several seemingly unrelated techniques. We have since identified a unifying challenge for each case study: it is difficult to compare hybrid systems, even when their behaviors are very similar. I will discuss the development of a logic with first-class support for refinement relations on hybrid systems, called differential refinement logic, which aims to address this core challenge for hybrid systems verification within a formal framework. 

Sarah Loos is a Ph.D. student in the computer science department at Carnegie Mellon University. Her research interests include logical analysis and formal verification of distributed hybrid systems, such as distributed car control and collision avoidance protocols for aircraft. She is a Department of Energy computational science graduate fellow and an NSF graduate research fellow. In addition to her role as co-editor of the ACM-W newsletter, Sarah has served as a student member on the board of trustees and board of advisors for the Anita Borg Institute for Women and Technology.  She currently serves on the board of directors for the Foundation for Learning Equality, a non-profit tech company dedicated to bringing the online learning revolution to communities with little or no internet connectivity.  Sarah received her undergraduate B.S. degrees in computer science and math from Indiana University.

 

Learning Systems: Systems and Abstractions for Large-Scale Machine Learning

Date and Time
Thursday, February 26, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Barbara Engelhardt

Joseph Gonzalez

The challenges of advanced analytics and big data cannot be addressed by developing new machine learning algorithms or new computing systems in isolation.  Some of the recent advances in machine learning have come from new systems that can apply complex models to big data problems.  Likewise, some of the recent advances in systems have exploited fundamental properties of machine learning to reach new points in the system design space.  By considering the design of scalable learning systems from both perspectives, we can address bigger problems, expose new opportunities in algorithm and system design, and define the new fundamental abstractions that will accelerate research in these complementary fields.

In this talk, I will present my research in learning systems spanning the design of efficient inference algorithms, the development of graph processing systems, and the unification of graphs and unstructured data.  I will describe how the study of graphical model inference and power-law graph structure shaped the common abstractions in contemporary graph processing systems, and how new insights in system design enabled order-of-magnitude performance gains over general-purpose data-processing systems.  I will then discuss how lessons learned in the context of specialized graph-processing systems can be lifted to more general data-processing systems, enabling users to view data as graphs and tables interchangeably while preserving the performance gains of specialized systems.  Finally, I will present a new direction for the design of learning systems that looks beyond traditional analytics and model fitting to the entire machine learning life-cycle, spanning model training, serving, and management.
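As a minimal sketch of the vertex-program style of abstraction these systems popularized (illustrative only; the real APIs are richer and run distributed), here is PageRank phrased as a synchronous gather-apply loop over a graph:

    # Minimal gather-apply sketch of PageRank (illustrative only).
    def pagerank_gas(edges, num_iters=20, d=0.85):
        nodes = {v for e in edges for v in e}
        out_deg = {v: 0 for v in nodes}
        in_nbrs = {v: [] for v in nodes}
        for s, t in edges:
            out_deg[s] += 1
            in_nbrs[t].append(s)
        rank = {v: 1.0 for v in nodes}
        for _ in range(num_iters):
            # Gather: sum weighted rank along in-edges.
            acc = {v: sum(rank[u] / out_deg[u] for u in in_nbrs[v]) for v in nodes}
            # Apply: update each vertex from its accumulated value.
            rank = {v: (1 - d) + d * acc[v] for v in nodes}
            # Scatter (activating changed neighbors) is omitted in this
            # synchronous sketch.
        return rank

    print(pagerank_gas([("a", "b"), ("b", "c"), ("c", "a"), ("a", "c")]))

Factoring vertex programs into gather/apply/scatter phases is part of what lets a system split the work of high-degree (power-law) vertices across machines, one of the insights behind the performance gains mentioned above.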

Joseph Gonzalez is a postdoc in the UC Berkeley AMPLab and cofounder of GraphLab. Joseph received his PhD from the Machine Learning Department at Carnegie Mellon University where he worked with Carlos Guestrin on parallel algorithms and abstractions for scalable probabilistic machine learning. Joseph is a recipient of the AT&T Labs Graduate Fellowship and the NSF Graduate Research Fellowship.

Rise of the Planet of the Apps: Security and Privacy in the Age of Bad Code

Date and Time
Wednesday, February 25, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Arvind Narayanan

Suman Jana

Computing is undergoing a major shift.  Third-party applications hosted in online software markets have become ubiquitous on all kinds of platforms: mobile phones, Web browsers, gaming devices, even household robots.  These applications often include yet more third-party code for advertising, analytics, etc.  These trends have dramatically increased the amount of bad code throughout the software stack - buggy code, malicious code, code that overcollects private information intentionally or by accident, overprivileged code vulnerable to abuse - as well as the amount of sensitive data processed by bad code.

In this talk, I will demonstrate that existing application platforms are ill-suited to dealing with bad code, thus causing security and privacy problems.  I will then show how to isolate bad code without affecting its useful functionality, by redesigning the interfaces across the software stack and controlling the information released to the applications by the platform.  I will also show how automated testing can identify bad code and help developers improve their applications. 

Suman Jana is a postdoctoral researcher at Stanford University. He earned his PhD in 2014 from the University of Texas, where he was supported by the Google PhD Fellowship.  He is broadly interested in identifying fundamental flaws in existing systems and building new systems with strong security and privacy guarantees.  Suman received the 2014 PET Award for Outstanding Research in Privacy-Enhancing Technologies, Best Practical Paper Award from the 2014 IEEE Symposium on Security and Privacy (Oakland), Best Student Paper Award from the 2012 IEEE Symposium on Security and Privacy, and the 2012 Best Applied Security Paper Award. Suman's research has been widely covered in popular media, and his code has been deployed at Google, Mozilla, and Apache.

Crowd-Agents: Creating Crowd-Powered Interactive Systems

Date and Time
Tuesday, February 17, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Thomas Funkhouser

Walter Lasecki

I create and deploy interactive systems that use a combination of human and machine intelligence to operate robustly in real-world settings. Unlike prior work in human computation, my “Crowd Agent” model allows crowds of people to support continuous real-time interactive systems that require ongoing context. For example, Chorus allows multi-session conversations with a virtual personal assistant; Scribe allows non-experts to caption speech in real time for deaf and hard of hearing users, where prior approaches were either not accurate enough, or required professionals with years of training; and Apparition allows designers to rapidly prototype new interactive interfaces from sketches in real time. In this talk, I will describe how computationally-mediated groups of people can solve problems that neither people nor computers can solve alone, and scaffold AI systems using the real-world data they collect.
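As a toy illustration of the input-mediation idea behind the crowd-agent model (a sketch, not the actual mechanism used in the systems above): several workers propose an action at each time step, and a mediator forwards only what the crowd agrees on, so many individuals behave like a single reliable agent.

    # Toy crowd input mediator (illustrative only).
    from collections import Counter

    def mediate(proposals):
        """Forward the majority action among workers' proposals, or nothing."""
        action, votes = Counter(proposals).most_common(1)[0]
        return action if votes > len(proposals) / 2 else None

    print(mediate(["scroll_down", "scroll_down", "click_ok"]))  # scroll_down
    print(mediate(["a", "b", "c"]))                             # None (no agreement)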

Walter S. Lasecki is a Computer Science Ph.D. candidate at the University of Rochester working with Jeffrey Bigham (CMU) and James Allen (Rochester). He creates interactive intelligent systems that are robust enough to be used in real-world settings by combining both human and machine intelligence to exceed the capabilities of either. Mr. Lasecki received a B.S. in Computer Science and Mathematics from Virginia Tech in 2010, and an M.S. from the University of Rochester in 2011. He was named a Microsoft Research Ph.D. Fellow in 2013, and has held visiting research positions at Stanford and Google[x]. Since 2013, he has been a Visiting Researcher at Carnegie Mellon University.

Unlocking Interaction via Pixel-Based Reverse Engineering

Date and Time
Wednesday, February 18, 2015 - 12:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Adam Finkelstein

Morgan Dixon

User interface software has become a victim of its own success. Graphical event-driven systems are the dominant user interface for computational devices, realized via popular user interface toolkits. However, this success has induced a lock-in effect that demonstrably stifles innovation. For example, human-computer interaction researchers regularly propose new interaction techniques for desktop and mobile devices, enhancements for people with disabilities, better support for localization, and new collaboration facilities. However, these techniques rarely make it outside the laboratory because existing software tools cannot support them. Changing the ecosystem would require a mass adoption of new tools, and most of the applications we use today would need to be rewritten.

I will describe an approach that unlocks existing graphical applications without their source code and without their cooperation, making it possible to explore and deploy many promising ideas in the context of existing real-world interfaces. Prefab is a system that reverse-engineers graphical interfaces from raw pixels. I will show how Prefab can be used to augment user interfaces with a variety of techniques, including accessibility enhancements, improved input for touch screens, UI language translation, and end-user customization. Some of these enhancements were proposed in the literature over the last two decades, but prior to Prefab they had been considered nearly impossible to deploy in practice.
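As a toy sketch of the pixel-level starting point (Prefab's actual models are far richer, decomposing widgets into parts so they generalize across sizes and themes), widget detection can be seen as locating known pixel patterns in a screenshot:

    # Toy exact-match widget detection over raw pixels (illustrative only).
    def find_widget(screen, template):
        """Return (row, col) of every exact occurrence of template in screen.
        Both arguments are 2-D lists of pixel values."""
        sh, sw = len(screen), len(screen[0])
        th, tw = len(template), len(template[0])
        return [(r, c)
                for r in range(sh - th + 1)
                for c in range(sw - tw + 1)
                if all(screen[r + i][c + j] == template[i][j]
                       for i in range(th) for j in range(tw))]

    screen = [[0, 0, 0, 0],
              [0, 1, 2, 0],
              [0, 3, 4, 0]]
    print(find_widget(screen, [[1, 2], [3, 4]]))  # [(1, 1)]

Once widget locations are recovered this way, an enhancement layer can repaint them or intercept input targeted at them without any cooperation from the application.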

Morgan Dixon is a PhD student in Computer Science & Engineering at the University of Washington working with James Fogarty. His research is centered around designing, building, and studying systems that make user interface software more flexible and extensible. He has also worked on novel interaction techniques for accessibility, dual-display mobile devices, health technologies, crossing-based interfaces, and data analysis tools. Morgan received a B.S. in Computer Science and Mathematics from the University of Maryland in 2008, and an M.S. from the University of Washington in 2010. His work on Prefab received a CHI Best Paper Award, and he was named a Microsoft Research PhD Fellow in 2011. 

Unlocking Cellular Computation and Information Processing through Multidimensional Single-Cell Data

Date and Time
Thursday, March 5, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Mona Singh

Smita Krishnaswamy

Cells are computational entities that process external signals through networks of interacting proteins and reconfigure their state via biochemical modifications of proteins and changes in gene expression. Despite progress in the understanding of signaling biology, graph diagrams typically used as depictions of signaling relationships only offer qualitative abstractions. New single-cell measurement technologies provide quantitatively precise measurements of dozens of cellular components representing important biochemical functions. However, a major challenge in deciphering single-cell signaling data is developing computational methods that can handle the complexity, noise and bias in the measurements. I will describe algorithms that quantify the flow of information through signaling interactions and mathematically characterize relationships between signaling molecules, using statistical techniques to detect dependencies while mitigating the effect of noise. I will show how these algorithms can be utilized to characterize signaling relationships in immune cells, detect subtle differences between cell types, and predict differential responses to perturbation. Next, I will analyze T cells from non-obese diabetic (NOD) mice and show that previously recognized defects in extracellular-signal-regulated kinase (ERK) signaling can be traced back to a small receptor-proximal defect that is amplified through reconvergence in the network. Then, I will show how multidimensional extensions of these techniques can be used to track dynamic changes in the relatively unknown network driving the epithelial-to-mesenchymal transition (EMT) that occurs during cancer metastasis, with the goal of predicting drugs to halt the process. Finally, I will discuss future directions involving integration of gene expression and other data types in order to gain a more complete picture of cellular computation.
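As a minimal sketch of the kind of dependency measure involved (illustrative only; the published methods use conditional densities and resampling to correct for the heavy noise and population bias in single-cell data), mutual information between two measured signaling proteins can be estimated from binned measurements:

    # Histogram estimate of mutual information between two signals
    # (illustrative only; not the conditional-density method from the talk).
    import math
    from collections import Counter

    def mutual_information(xs, ys, bins=8):
        def to_bins(vals):
            lo, hi = min(vals), max(vals)
            return [min(int((v - lo) / (hi - lo + 1e-12) * bins), bins - 1)
                    for v in vals]
        bx, by = to_bins(xs), to_bins(ys)
        n = len(xs)
        pxy, px, py = Counter(zip(bx, by)), Counter(bx), Counter(by)
        return sum((c / n) * math.log((c / n) / ((px[i] / n) * (py[j] / n)))
                   for (i, j), c in pxy.items())

    xs = [i / 100 for i in range(100)]
    ys = [x * x for x in xs]            # strong dependency -> high MI
    print(mutual_information(xs, ys))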

Inference Attacks: Understanding Privacy in the Era of "Privacy is Dead"

Date and Time
Tuesday, February 24, 2015 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Ed Felten

Matt Fredrikson

As data from far-reaching sources is collected, aggregated, and re-packaged to enable new and smarter applications, confidentiality and data security are at greater risk than ever before. Some of the most surprising and invasive threats to materialize in recent years are brought about by so-called inference attacks: successful attempts to learn sensitive information by leveraging public data such as social network updates, published research articles, and web APIs.

In this talk, I will focus on two of my research efforts to better understand and defend against these attacks. First I will discuss work that examines the privacy risks that arise when machine learning models are used in a popular medical application, and illustrate the consequences of applying differential privacy as a defense. This work uncovered a new type of inference attack on machine learning models, and shows via an in-depth case study how to understand privacy "in situ" by balancing the attacker's chance of success against the likelihood of harmful medical outcomes. The second part of the talk will detail work that helps developers correctly write privacy-aware applications using verification tools. I will illustrate how a wide range of confidentiality guarantees can be framed in terms of a new logical primitive called Satisfiability Modulo Counting, and describe a tool that I have developed around this primitive that automatically finds privacy bugs in software (or proves that the software is bug-free). Through a better understanding of how proposed defenses impact real applications, and by providing tools that help developers implement the correct defense for their task, we can begin to proactively identify potential threats to privacy and take steps to ensure that they will not surface in practice.
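For a sense of the primitive's shape (a sketch based on the related publication; the notation may differ): a Satisfiability Modulo Counting formula mixes ordinary first-order constraints with cardinality terms, so one can ask questions such as

    \exists\, s.\;\; \#\{\, x : \varphi(s, x) \,\} \;\ge\; k,

that is, whether some secret state s is consistent with at least k observable behaviors. Bounding such counts is what lets quantitative confidentiality guarantees, and violations of them, be checked mechanically.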

Matt Fredrikson is a Ph.D. candidate in the department of Computer Sciences at the University of Wisconsin-Madison. His research interests lie at the intersection of security, privacy, and formal methods, covering topics in software security, privacy issues associated with machine-learning models, and applied cryptography. His work has been profiled by Reuters, Technology Review, and New Scientist, and received the best paper award at USENIX Security 2014. He is a recipient of the Microsoft Research Graduate Fellowship Award.

Privacy in the Land of Plenty

Date and Time
Thursday, February 19, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Sanjeev Arora

Cynthia Dwork

Privacy-preserving data analysis has a large literature that spans several disciplines.  "Differential privacy" -- a notion tailored to situations in which data are plentiful -- has provided a theoretically sound and powerful framework, and given rise to an explosion of research.  We will review the definition of differential privacy, describe some algorithmic contributions, and conclude with a surprising application.
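For reference, the standard definition being reviewed (textbook material): a randomized mechanism M is \varepsilon-differentially private if for every pair of datasets D, D' differing in a single individual's record and every set S of outputs,

    \Pr[\, M(D) \in S \,] \;\le\; e^{\varepsilon} \cdot \Pr[\, M(D') \in S \,].

Intuitively, nothing the analyst sees changes much whether or not any one person's data is included, so whatever is learned about an individual would have been learned anyway.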

Cynthia Dwork, Distinguished Scientist at Microsoft Research, is renowned for placing privacy-preserving data analysis on a mathematically rigorous foundation. A cornerstone of this work is differential privacy, a strong privacy guarantee frequently permitting highly accurate data analysis. Dr. Dwork has also made seminal contributions in cryptography and distributed computing, and is a recipient of the Edsger W. Dijkstra Prize, recognizing some of her earliest work establishing the pillars on which every fault-tolerant system has been built for decades. She is a member of the National Academy of Sciences and the National Academy of Engineering, and a Fellow of the American Academy of Arts and Sciences.
