CS Department Colloquium Series

Programming Abstractions for Mobile-Cloud Computing

Date and Time
Monday, October 26, 2015 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Nick Feamster

Mayur Naik
As the computing landscape changes to provide a more personalized and targeted experience to users, the demand for compute-intensive tasks on mobile devices such as tablets, smartphones, and smart watches increases. However, this demand coincides with diminishing returns from transistor scaling as well as battery and power constraints that may prevent mobile devices from providing new capabilities. Mobile-cloud computing is an emerging approach that boosts mobile devices by offloading compute-intensive tasks in mobile applications to more powerful machines. A central challenge in mobile-cloud computing concerns orchestrating computation and communication between the mobile device and the remote system in a manner that allows computation gains to outweigh communication costs. To navigate this tradeoff, we study two simple programming abstractions that allow the remote system to preserve state across disparately offloaded segments of a computation, and to communicate with the mobile device while performing offloaded computation. To evaluate the benefits of these abstractions, we have developed an efficient algorithm based on finding a min-cut of a graph that identifies the optimal offloading pattern. I will describe our experience applying these abstractions to partition several Android applications and obtain speedups, energy savings, and improved QoS under different network settings. Our experience also yields insights into how to further accelerate mobile-cloud computing by designing new networking protocols and leveraging advances in hardware approximation and acceleration.
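As a rough illustration of the min-cut idea (not the authors' actual cost model or algorithm), the sketch below models program components as nodes of a flow network between LOCAL and REMOTE terminals: terminal edges carry hypothetical execution costs, inter-component edges carry hypothetical communication costs, and a minimum cut yields an offloading partition. All component names, costs, and the networkx usage are assumptions for this example.

```python
# Illustrative sketch only: a min-cut formulation of computation offloading.
# Component names, costs, and graph structure are invented for this example;
# the talk's actual cost model and algorithm may differ.
import networkx as nx

G = nx.DiGraph()

# Hypothetical per-component costs: running a component on the device vs. in the cloud.
local_cost = {"ui": 0.1, "parse": 2.0, "infer": 8.0, "render": 0.5}
remote_cost = {"ui": 100.0, "parse": 0.5, "infer": 1.0, "render": 50.0}

# Edge from LOCAL has capacity = remote execution cost (cut if the component stays local
# is the edge to REMOTE, paying the local cost), and symmetrically for REMOTE.
for c in local_cost:
    G.add_edge("LOCAL", c, capacity=remote_cost[c])
    G.add_edge(c, "REMOTE", capacity=local_cost[c])

# Hypothetical communication costs between components that exchange data.
for u, v, comm in [("ui", "parse", 0.2), ("parse", "infer", 1.5),
                   ("infer", "render", 1.5)]:
    G.add_edge(u, v, capacity=comm)
    G.add_edge(v, u, capacity=comm)

cut_value, (local_side, remote_side) = nx.minimum_cut(G, "LOCAL", "REMOTE")
print("total cost:", cut_value)
print("run locally:", sorted(local_side - {"LOCAL"}))
print("offload:    ", sorted(remote_side - {"REMOTE"}))
```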

Mayur Naik has been an Assistant Professor in the School of Computer Science at Georgia Tech since 2011. His research interests lie in areas related to programming systems, with a current emphasis on program analysis techniques for improving software quality and programmer productivity on modern computing platforms. He is a recipient of Distinguished Paper awards at FSE'15, PLDI'14, and ICSE'09, the Lockheed-Martin Dean's award for excellence in teaching at Georgia Tech (2015), and an NSF CAREER award (2013). He received his Ph.D. in Computer Science from Stanford University in 2008 and was a Research Scientist at Intel Labs, Berkeley from 2008 to 2011.

Seeing, Saying, Doing, and Learning: Integrating Computer Vision, Natural Language Processing, Robotics, and Machine Learning Through Multidirectional Inference

Date and Time
Thursday, October 15, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Christiane Fellbaum

Jeff Siskind
The semantics of natural language can be grounded in perception and motor control with a unified cost function that supports multidirectional inference. I will present several instances of this approach. The first is a cost function relating sentences, video, and a lexicon. Performing inference from video and a lexicon to sentences allows it to generate sentential descriptions of video. Performing inference from sentences and a lexicon to video allows it to search a video database for clips that match a sentential query. Performing inference from sentences and video to a lexicon allows it to learn a lexicon. The second is the functional inverse of video captioning. Instead of mapping video and object detections to sentences, one can map video and sentences to object detections. This allows one to use sentential constraints on a video object codetection process to find objects without pretrained object detectors. The third is a cost function relating sentences, robotic navigation paths, and a lexicon. Performing inference from sentences and navigation paths to a lexicon allows it to learn a lexicon. Performing inference from navigation paths and a learned lexicon to sentences allows it to generate sentential descriptions of paths driven by a mobile robot. Performing inference from sentences and a learned lexicon to navigation paths allows it to plan and drive navigation paths that satisfy a sentential navigation request. Finally, one can perform object codetection on the video stream from a robot-mounted camera during navigation to satisfy sentential requests and use the collection of constraints from vision, language, and robotics to detect, localize, and label objects in the environment without any pretrained object detectors.

Joint work with Andrei Barbu, Daniel Paul Barrett, Scott Alan Bronikowski, N. Siddharth, and Haonan Yu.
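A schematic sketch of the multidirectional-inference idea described above: a single cost function scores a (sentence, video, lexicon) triple, and description, retrieval, and lexicon learning are each a minimization over whichever argument is unknown. The cost function, candidate sets, and brute-force minimization below are placeholder assumptions, not the talk's actual models.

```python
# Schematic sketch of multidirectional inference over one cost function.
# `cost`, the candidate sets, and brute-force search are placeholders; the
# talk's models ground words in detections and tracks and are far richer.

def cost(sentence, video, lexicon):
    """Hypothetical score of how well `sentence` describes `video` under `lexicon` (lower is better)."""
    return sum(lexicon.get(word, 10.0) for word in sentence.split()) / (1 + len(video))

def describe(video, lexicon, candidate_sentences):
    # Generation: infer the best sentence given video and lexicon.
    return min(candidate_sentences, key=lambda s: cost(s, video, lexicon))

def retrieve(sentence, lexicon, video_database):
    # Retrieval: infer the best-matching clip given a sentential query.
    return min(video_database, key=lambda v: cost(sentence, v, lexicon))

def learn_lexicon(pairs, candidate_lexicons):
    # Learning: infer the lexicon that best explains (sentence, video) pairs.
    return min(candidate_lexicons,
               key=lambda lex: sum(cost(s, v, lex) for s, v in pairs))

# Toy usage with invented data.
lexicon = {"the": 0.1, "person": 0.5, "picked": 1.0, "up": 0.3, "ball": 0.7}
clip = ["frame"] * 30  # stand-in for a real video clip
print(describe(clip, lexicon, ["the person picked up the ball",
                               "the person approached the chair"]))
```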

Jeffrey M. Siskind received the B.A. degree in computer science from the Technion, Israel Institute of Technology, Haifa, in 1979, the S.M. degree in computer science from the Massachusetts Institute of Technology (M.I.T.), Cambridge, in 1989, and the Ph.D. degree in computer science from M.I.T. in 1992. He did a postdoctoral fellowship at the University of Pennsylvania Institute for Research in Cognitive Science from 1992 to 1993. He was an assistant professor at the University of Toronto Department of Computer Science from 1993 to 1995, a senior lecturer at the Technion Department of Electrical Engineering in 1996, a visiting assistant professor at the University of Vermont Department of Computer Science and Electrical Engineering from 1996 to 1997, and a research scientist at NEC Research Institute, Inc. from 1997 to 2001. He joined the Purdue University School of Electrical and Computer Engineering in 2002 where he is currently an associate professor. His research interests include computer vision, robotics, artificial intelligence, neuroscience, cognitive science, computational linguistics, child language acquisition, automatic differentiation, and programming languages and compilers.

Actionable big data: from data to decisions and back

Date and Time
Tuesday, October 13, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Computer Science and the Center for Statistics and Machine Learning

In many sequential decision problems, all that we have is a record of historical trajectories. Building dynamic models, and ultimately sequential decision policies, from these trajectories may introduce considerable uncertainty and bias. In this talk we consider how to create control policies from existing historical data and how to better sample trajectories so that future control policies will be better. This question has been central to reinforcement learning for the last decade, if not longer, and involves methods from statistics, machine learning, optimization, and control theory.

I will start my talk by demonstrating why planning with parameter uncertainty is an important issue. I will then describe several approaches: a Bayesian uncertainty model over the unknown parameters, a robust approach that takes a worst-case view, and a frequentist approach. I will then discuss the challenges posed when the model itself, rather than just the parameters, may not be fully known.

I will then describe two challenging real-world domains that have been studied in my research group in collaboration with experts from industry and academia: diabetes care management in healthcare and asset management in high-voltage transmission grids. For each domain I will describe our efforts to reduce the problem to its bare essentials as a reinforcement learning problem, the algorithms for learning the control policies, and some of the lessons we learned. 
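As an illustration of the worst-case (robust) view mentioned above, here is a minimal robust value-iteration sketch on a toy MDP: the transition probabilities for each state-action pair are only known to lie in a small candidate set, and each Bellman backup takes the worst case over that set. The MDP, the uncertainty sets, and all numbers are invented for the example and are not from the talk.

```python
# Toy sketch of robust value iteration: each state-action pair has a small set
# of plausible transition models, and the backup minimizes over them.
# The MDP and uncertainty sets are invented; this is not the talk's algorithm.
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
rewards = np.array([[0.0, 1.0], [0.5, 0.0], [1.0, 0.2]])  # reward[s, a]

# For each (s, a), a few candidate transition distributions over next states.
rng = np.random.default_rng(0)
candidates = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions, 4))

V = np.zeros(n_states)
for _ in range(200):
    Q = np.empty((n_states, n_actions))
    for s in range(n_states):
        for a in range(n_actions):
            # Worst-case expected next-state value over the uncertainty set.
            worst = min(p @ V for p in candidates[s, a])
            Q[s, a] = rewards[s, a] + gamma * worst
    V = Q.max(axis=1)

print("robust values:", np.round(V, 3))
print("robust policy:", Q.argmax(axis=1))
```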

I graduated from the Technion with a BSc in Electrical Engineering and a BA in Mathematics (both summa cum laude) in 1996. After that I spent almost four years as an intelligence officer with the Israeli Defense Forces. I was subsequently involved in a few ventures in the high-tech industry. I earned my PhD in Electrical Engineering from the Technion in 2002, under the supervision of Nahum Shimkin. I was then a Fulbright postdoctoral associate with LIDS (MIT), working with John Tsitsiklis, for two years. I was at the Department of Electrical and Computer Engineering at McGill University from July 2004 until August 2010, where I held a Canada Research Chair in Machine Learning from 2005 to 2009. I have been with the Department of Electrical Engineering at the Technion since 2008, where I am a professor. I am married to Tally and father to Liam, Oliver and Dylan.

Predicting activities in images

Date and Time
Friday, October 9, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series

The task of visual prediction is important for two main reasons: (a) For intelligent agents and systems, prediction is vital for decision making. For example, in order to perform assistive activities, robots must be able to predict the intentions of other agents in the scene. Even a task as simple as walking through a crowded hallway requires the prediction of human trajectories. (b) More importantly, prediction requires deep understanding of the visual world and the complex interplay between different elements of the scene. Therefore, prediction can act as a way to define “what does it mean to understand an image,” and the task of visual prediction can act as an enabler for scene understanding. In this talk, I will go over several techniques for prediction developed over the past few years. These techniques operate at different levels of description, from high-level object motions, to mid-level patch motion and appearance changes, to low-level optical flow. They also operate at different levels of supervision, from strong supervision requiring explicit object tracking to unsupervised learning for patch-based prediction. I will discuss the pros and cons of the different levels, ideas for combining them, and example applications in robotics and transportation.

Martial Hebert is a Professor of Robotics at Carnegie Mellon University and Director of the Robotics Institute, which he joined in 1984. His interests include computer vision, especially recognition in images and video data, model building and object recognition from 3D data, and perception for mobile robots and intelligent vehicles. His group has developed approaches for object recognition and scene analysis in images, 3D point clouds, and video sequences. In the area of machine perception for robotics, his group has developed techniques for people detection, tracking, and prediction, and for understanding the environment of ground vehicles from sensor data. He has served on the editorial boards of the IEEE Transactions on Robotics and Automation, the IEEE Transactions on Pattern Analysis and Machine Intelligence, and the International Journal of Computer Vision (for which he currently serves as Editor-in-Chief).

Learning Hidden Computational Processes

Date and Time
Thursday, October 1, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Elad Hazan

We are interested in prediction problems in which evaluating the learned function requires multiple intermediate steps of computation. One motivation is building a system that can answer complex questions: here the function would need to map "How many countries have held the Summer Olympics more than once?" to "3" by applying a sequence of aggregation and filtering operations on a database.  In this talk, we examine two key machine learning problems that arise in this setting. First, how do we model the computational process?  We argue that the classic paradigm of decoupling modeling from inference is inadequate, and we propose techniques that directly model the inference procedure. Second, learning is very difficult: in our example, the supervision "3" constrains the hidden computational process in a very indirect way.  We propose methods that relax the output supervision in a parameterized way, and learn both the relaxation and model parameters jointly subject to an explicit computational constraint.  Finally, we show some empirical progress on a new challenging question answering task.
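To make the running example concrete, the sketch below shows the kind of aggregation-and-filtering program such a question might map to, evaluated over an invented mini-table of host countries; the actual task uses a real database and a learned mapping from questions to programs.

```python
# Invented mini-table and program for the running example; the real task uses
# a full database and a learned mapping from questions to programs.
from collections import Counter

# Hypothetical (host country, year) records for the Summer Olympics.
hosts = [("Greece", 1896), ("France", 1900), ("USA", 1904),
         ("UK", 1908), ("France", 1924), ("USA", 1932),
         ("UK", 1948), ("USA", 1984)]

# "How many countries have held the Summer Olympics more than once?"
# -> group by country, filter count > 1, then count the groups.
counts = Counter(country for country, _ in hosts)
answer = sum(1 for c in counts.values() if c > 1)
print(answer)  # 3 on this toy table (France, USA, UK)
```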

Percy Liang is an Assistant Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011). His research interests include (i) modeling natural language semantics, (ii) developing machine learning methods that infer rich latent structures from limited supervision, and (iii) studying the tradeoff between statistical and computational efficiency. He is a 2015 Sloan Research Fellow, a 2014 Microsoft Research Faculty Fellow, and a 2010 Siebel Scholar, and won the best student paper award at the International Conference on Machine Learning in 2008.

Software-Defined Datacenter Networks

Date and Time
Tuesday, September 29, 2015 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Jennifer Rexford
Today’s cloud-scale services (e.g., search, social networking, and cloud computing) operate on large datacenter networks (DCNs). Keeping these networks running smoothly is difficult, due to the sheer number of physical devices, the complexity of the network software stack, and the dynamic nature of the environment. At any given moment, multiple devices experience component failures, are brought down for planned maintenance, are upgraded with new firmware, or are reconfigured to adapt to prevailing traffic demand.
 
In this talk, I will go through some of the automated systems developed for managing the routing, configuration, firmware, and power of network devices in Microsoft Azure. I will first describe how NetPilot achieves fast, safe mitigation of common network failures by automatically deactivating or restarting offending links or devices. I will then describe how SWAN boosts the utilization of inter-datacenter networks by centrally controlling when and how much traffic each service sends and frequently reconfiguring the network’s data plane to match current traffic demand. Finally, I will present the Network State Service (NSS), which manages the entire network state for Azure's datacenter and wide-area networks. It serves as the foundation of multiple network management applications (including NetPilot and SWAN), allowing these applications to operate independently while maintaining network-wide safety.
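As a rough illustration of the centralized-control idea behind SWAN (not Microsoft's implementation), the sketch below allocates sending rates to services on fixed paths, filling demands in priority order subject to link capacities. The topology, demands, paths, and greedy policy are all assumptions for the example.

```python
# Rough sketch of centralized rate allocation over fixed paths, in the spirit
# of traffic engineering for inter-datacenter WANs. Topology, demands, paths,
# and the simple greedy policy are invented for illustration.
link_capacity = {("A", "B"): 10.0, ("B", "C"): 10.0, ("A", "C"): 5.0}

# Each service: (demand in Gbps, path as a list of links), highest priority first.
services = {
    "interactive": (4.0, [("A", "C")]),
    "elastic":     (8.0, [("A", "B"), ("B", "C")]),
    "background":  (9.0, [("A", "B"), ("B", "C")]),
}

residual = dict(link_capacity)
allocation = {}
for name, (demand, path) in services.items():
    # Give each service as much as its path's bottleneck residual capacity allows.
    rate = min([demand] + [residual[link] for link in path])
    allocation[name] = rate
    for link in path:
        residual[link] -= rate

print(allocation)  # e.g. {'interactive': 4.0, 'elastic': 8.0, 'background': 2.0}
print(residual)
```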
 
Ming Zhang is a Senior Researcher in the Mobility and Networking group at Microsoft Research, where he focuses on datacenter and wide-area networking. He creates cutting-edge technologies that power Microsoft’s massive cloud networks. His research is published in leading systems and networking conferences including SIGCOMM, NSDI, OSDI, and MobiSys. His work on MobiPerf won the Open Internet App Award and the People's Choice App Award in the FCC Open Internet Challenge. He received his Ph.D. in Computer Science from Princeton University in 2005 and his B.S. in Computer Science from Nanjing University in 1999.

   

Hard Problems in Hardness of Approximation: Sharp Thresholds, Parallel Repetition and Unique Games

Date and Time
Tuesday, May 19, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Sanjeev Arora

Many of the optimization problems of interest to humanity are NP-hard, and most computer scientists believe that they have no efficient algorithms. Focusing on approximation rather than exact optimization might extend the reach of algorithmic techniques into the intractable. However, for many NP-hard problems, approximating them better than a problem-dependent threshold turns out to be NP-hard as well. Proving so rigorously is a difficult task, which -- by a leap of thought -- leads to fundamental questions about the nature of proofs and their verification.

In this talk I'll discuss the phenomenon of sharp thresholds in approximability, namely how many approximation problems transform instantly from efficiently solvable to exponentially hard as one focuses on a better approximation (joint work with Ran Raz). I'll discuss two-prover games and a new, incredibly simple method ("fortification") for analyzing their parallel repetition. Finally, I'll discuss a recent candidate hard instance for unique games, which might lead to progress on the Unique Games Conjecture, one of the biggest open problems in approximability (joint work with Subhash Khot).
 

Approximation Algorithms for Graph Routing Problems

Date and Time
Tuesday, May 12, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Mark Braverman

In a typical routing problem, we are given a graph G and a collection (s_1,t_1),…,(s_k,t_k) of pairs of vertices, called demand pairs, that we would like to route. In order to route a demand pair (s_i,t_i), we need to choose a path connecting s_i to t_i in G. Our goal is usually to route as many of the demand pairs as possible, while keeping the congestion of the routing - the maximum load on any vertex or edge of G - as small as possible. This general framework gives rise to a number of basic and widely studied graph routing problems that have led to the development of a rich toolkit of algorithmic techniques, as well as structural graph-theoretic results. In this talk we will describe some of the recent developments in approximation algorithms for graph routing problems, and highlight some connections between this area and graph theory.
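To make the congestion notion concrete, here is a small sketch (toy graph and demand pairs, assumed for illustration): each pair is routed on one path, and congestion is the maximum number of paths crossing any single edge.

```python
# Toy illustration of routing congestion: route each demand pair on a shortest
# path and measure the maximum load on any edge. Graph and pairs are invented.
import networkx as nx
from collections import Counter

G = nx.grid_2d_graph(3, 3)  # a 3x3 grid graph with nodes (row, col)
demand_pairs = [((0, 0), (2, 2)), ((0, 2), (2, 0)), ((0, 0), (0, 2))]

edge_load = Counter()
for s, t in demand_pairs:
    path = nx.shortest_path(G, s, t)
    for u, v in zip(path, path[1:]):
        edge_load[frozenset((u, v))] += 1  # undirected edge

congestion = max(edge_load.values())
print("routed", len(demand_pairs), "pairs with edge congestion", congestion)
```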

Julia Chuzhoy is an Associate Professor at the Toyota Technological Institute at Chicago (TTIC). She received her Ph.D. in Computer Science from the Technion, Israel, in 2004, and spent several years as a postdoc at MIT, the University of Pennsylvania, and the Institute for Advanced Study before joining TTIC in 2007. Her research area is theoretical computer science, with the main emphasis on the design and analysis of approximation algorithms for NP-hard problems.

Exploring the Limits of the Efficiently Computable

Date and Time
Wednesday, April 29, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Sanjeev Arora
I'll give a broad overview of my research over the last decade aimed at understanding the relationship between computational complexity and physics -- and in particular, the capabilities and limitations of quantum computers.  Possible topics, depending on time, will include the BosonSampling model of quantum computing, creating unforgeable 'quantum money' using hidden subspaces, quantum computing versus the polynomial-time hierarchy, maximal separations between classical and quantum query complexity, the need for structure in quantum speedups, and the computational complexity of decoding Hawking radiation.
 
Scott Aaronson is an Associate Professor of Electrical Engineering and Computer Science at MIT. He studied at Cornell and UC Berkeley, and did postdocs at the Institute for Advanced Study as well as the University of Waterloo. His research focuses on the capabilities and limits of quantum computers, and more generally on computational complexity and its relationship to physics. His first book, Quantum Computing Since Democritus, was published in 2013 by Cambridge University Press. Aaronson has written about quantum computing for Scientific American and the New York Times, and writes a popular blog. He has received the National Science Foundation’s Alan T. Waterman Award, the United States PECASE Award, and MIT's Junior Bose Award for Excellence in Teaching.

The Surprising Power of Modern Cryptography

Date and Time
Monday, April 6, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Mark Braverman
Modern cryptography is surprisingly powerful, yielding capabilities such as secure multiparty computation, computing on encrypted data, and hiding secrets in code.  Currently, however, some of these advanced abilities are still too inefficient for practical use.  The goals of my research are two-fold: (1) continue expanding the capabilities of cryptography and its applications, and (2) bring these advanced capabilities closer to practice.
 
In this talk, I will focus on a particular contribution that addresses both of these objectives: establishing a shared secret key among a group of participants with only a single round of interaction.  The first such protocols required a setup phase, where a central authority determines the parameters for the scheme; unfortunately, this authority can learn the shared group key and must therefore be trusted.  I will discuss how to remove this setup phase using program obfuscation, though the scheme is very impractical due to the inefficiencies of current obfuscators.  I will then describe a new technical tool called witness pseudorandom functions and show how to use this tool in place of obfuscation, resulting in a significantly more efficient protocol.
 
Mark Zhandry is a Ph.D. candidate at Stanford University advised by Dan Boneh.  He studies cryptography and computer science theory and is currently focusing on developing new cutting-edge cryptographic capabilities and improving the efficiency of these applications.  He is visiting Microsoft Research New England and MIT for the 2014-15 academic year.