
CS Department Colloquium Series

Predicting activities in images

Date and Time
Friday, October 9, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series

The task of visual prediction is important for two main reasons: (a) For intelligent agents and systems, prediction is vital for decision making. For example, in order to perform assistive activities, robots must be able to predict the intentions of other agents in the scene. Even a task as simple as walking through a crowded hallway requires the prediction of human trajectories. (b) More importantly, prediction requires a deep understanding of the visual world and of the complex interplay between different elements of the scene. Therefore, prediction can act as a way to define “what it means to understand an image,” and the task of visual prediction can act as an enabler for scene understanding. In this talk, I will go over several techniques for prediction developed over the past few years. These techniques operate at different levels of description, from high-level object motions, to mid-level patch motion and appearance changes, to low-level optical flow. They also operate at different levels of supervision, from strong supervision requiring explicit object tracking to unsupervised learning for patch-based prediction. I will discuss the pros and cons of the different levels, ideas for combining them, and example applications in robotics and transportation.
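As a minimal illustration of the lowest of these levels (an illustrative sketch, not code from the talk), the following uses OpenCV to estimate dense optical flow between two consecutive grayscale frames and naively extrapolates the motion one step to "predict" the next frame; the file names are placeholders.

# Low-level motion prediction via dense optical flow; illustrative only.
# Assumes OpenCV and two consecutive grayscale frames on disk.
import cv2
import numpy as np

prev_frame = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
next_frame = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow: one (dx, dy) vector per pixel.
flow = cv2.calcOpticalFlowFarneback(prev_frame, next_frame, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# Constant-velocity assumption: warp the latest frame forward along the
# estimated flow to hallucinate the unseen following frame.
h, w = next_frame.shape
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
map_x = (grid_x - flow[..., 0]).astype(np.float32)
map_y = (grid_y - flow[..., 1]).astype(np.float32)
predicted = cv2.remap(next_frame, map_x, map_y, cv2.INTER_LINEAR)
cv2.imwrite("frame2_predicted.png", predicted)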

Martial Hebert is a Professor of Robotics at Carnegie Mellon University and Director of the Robotics Institute, which he joined in 1984. His interests include computer vision, especially recognition in images and video data, model building and object recognition from 3D data, and perception for mobile robots and intelligent vehicles. His group has developed approaches for object recognition and scene analysis in images, 3D point clouds, and video sequences. In the area of machine perception for robotics, his group has developed techniques for people detection, tracking, and prediction, and for understanding the environment of ground vehicles from sensor data. He has served on the editorial boards of the IEEE Transactions on Robotics and Automation, the IEEE Transactions on Pattern Analysis and Machine Intelligence, and the International Journal of Computer Vision (for which he currently serves as Editor-in-Chief).

Learning Hidden Computational Processes

Date and Time
Thursday, October 1, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Elad Hazan

We are interested in prediction problems in which evaluating the learned function requires multiple intermediate steps of computation. One motivation is building a system that can answer complex questions: here the function would need to map "How many countries have held the Summer Olympics more than once?" to "3" by applying a sequence of aggregation and filtering operations on a database.  In this talk, we examine two key machine learning problems that arise in this setting. First, how do we model the computational process?  We argue that the classic paradigm of decoupling modeling from inference is inadequate, and we propose techniques that directly model the inference procedure. Second, learning is very difficult: in our example, the supervision "3" constrains the hidden computational process in a very indirect way.  We propose methods that relax the output supervision in a parameterized way, and learn both the relaxation and model parameters jointly subject to an explicit computational constraint.  Finally, we show some empirical progress on a new challenging question answering task.
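As a toy illustration of such a computation (an illustrative example, not Liang's system), the Python sketch below answers the Olympics question by composing a group-by aggregation with a filter over a small table; the rows shown are a partial, made-up-for-illustration list, not a real database.

# Toy composition of database operations for the question above.
from collections import Counter

summer_olympics = [                      # (year, host_country) rows
    (1996, "USA"), (2000, "Australia"), (2004, "Greece"),
    (2008, "China"), (2012, "UK"), (1984, "USA"),
    (1956, "Australia"), (1948, "UK"), (1908, "UK"),
]

# "How many countries have held the Summer Olympics more than once?"
hosts = Counter(country for _, country in summer_olympics)  # aggregate
repeat_hosts = [c for c, n in hosts.items() if n > 1]       # filter
print(len(repeat_hosts))                                    # count -> 3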

Percy Liang is an Assistant Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011).  His research interests include (i) modeling natural language semantics, (ii) developing machine learning methods that infer rich latent structures from limited supervision, and (iii) studying the tradeoff between statistical and computational efficiency.  He is a 2015 Sloan Research Fellow, a 2014 Microsoft Research Faculty Fellow, and a 2010 Siebel Scholar, and won the best student paper award at the International Conference on Machine Learning in 2008.

Software-Defined Datacenter Networks

Date and Time
Tuesday, September 29, 2015 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Jennifer Rexford
Today’s cloud-scale services (e.g., search, social networking, and cloud computing) operate on large datacenter networks (DCNs). Keeping these networks running smoothly is difficult, due to the sheer number of physical devices, the complexity of the network software stack, and the dynamic nature of the environment. At any given moment, multiple devices experience component failures, are brought down for planned maintenance, are upgraded with new firmware, or are reconfigured to adapt to prevailing traffic demand.
 
In this talk, I will go through some of the automated systems developed for managing the routing, configuration, firmware, and power of network devices in Microsoft Azure. I will first describe how NetPilot achieves fast, safe mitigation of common network failures by automatically deactivating or restarting offending links or devices. I will then describe how SWAN boosts the utilization of inter-datacenter networks by centrally controlling when and how much traffic each service sends and by frequently reconfiguring the network’s data plane to match current traffic demand. Finally, I will present the Network State Service (NSS), which manages the entire network state for Azure datacenter and wide-area networks. It serves as the foundation of multiple network management applications (including NetPilot and SWAN), allowing these applications to operate independently while maintaining network-wide safety.
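To give a concrete feel for this style of central control (a simplified sketch, not Microsoft's code or SWAN's actual algorithm), the following water-fills one link's capacity across competing services, the max-min-fair allocation a central controller might compute per link.

# Hedged sketch: max-min fair (water-filling) allocation of one link's
# capacity among services; a simplification, not SWAN's algorithm.
def allocate(capacity, demands):
    alloc = {s: 0.0 for s in demands}
    active = set(demands)
    remaining = capacity
    while active and remaining > 1e-9:
        share = remaining / len(active)      # equal share of what's left
        for s in sorted(active):
            grant = min(share, demands[s] - alloc[s])
            alloc[s] += grant
            remaining -= grant
            if alloc[s] >= demands[s] - 1e-9:
                active.discard(s)            # service fully satisfied
    return alloc

# A 10-unit link and three services: the small demand is met in full,
# and the two large ones split the remainder roughly equally (~4.0 each).
print(allocate(10.0, {"backup": 8.0, "web": 2.0, "analytics": 6.0}))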
 
Ming Zhang is a Senior Researcher in the Mobility and Networking group at Microsoft Research, where he focuses on datacenter and wide-area networking. He creates cutting-edge technologies that power Microsoft’s massive cloud networks. His research is published in leading systems and networking conferences including SIGCOMM, NSDI, OSDI, and MobiSys. His work on MobiPerf won the Open Internet App Award and the People's Choice App Award in the FCC Open Internet Challenge. He received his Ph.D. in Computer Science from Princeton University in 2005 and his B.S. in Computer Science from Nanjing University in 1999.


Hard Problems in Hardness of Approximation: Sharp Thresholds, Parallel Repetition and Unique Games

Date and Time
Tuesday, May 19, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Sanjeev Arora

Many of the optimization problems of interest to humanity are NP-hard, and most computer scientists believe that they have no efficient algorithms. Focusing on approximation rather than exact optimization might extend the reach of algorithmic techniques into the intractable. However, for many NP-hard problems, approximating them better than a problem-dependent threshold turns out to be NP-hard as well. Proving this rigorously is a difficult task, which -- by a leap of thought -- leads to fundamental questions about the nature of proofs and their verification.

In this talk I'll discuss the phenomenon of sharp thresholds in approximability, namely how many approximation problems transform instantly from efficiently solvable to exponentially hard as one focuses on a better approximation (joint work with Ran Raz). I'll discuss two-prover games and a new, incredibly simple method ("fortification") for analyzing their parallel repetition. Finally, I'll discuss a recent candidate hard instance for unique games, which might lead to progress on the Unique Games Conjecture - one of the biggest open problems in approximability (joint work with Subhash Khot).
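For reference, the parallel repetition theorem that fortification re-proves and simplifies can be stated roughly as follows (Raz's theorem with Holenstein's exponent; a paraphrase in LaTeX, not the talk's exact statement, with $s$ the answer length of the game $G$):

\[
  \mathrm{val}(G) = 1 - \epsilon
  \;\Longrightarrow\;
  \mathrm{val}\!\left(G^{\otimes n}\right) \le \left(1 - \epsilon^{3}\right)^{\Omega(n/s)} .
\]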
 

Approximation Algorithms for Graph Routing Problems

Date and Time
Tuesday, May 12, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Mark Braverman

In a typical routing problem, we are given a graph G, and a collection (s_1,t_1),…,(s_k,t_k) of pairs of vertices, called demand pairs, that we would like to route. In order to route a demand pair (s_i,t_i), we need to choose a path connecting s_i to t_i in G. Our goal is usually to route as many of the demand pairs as possible, while keeping the congestion of the routing - the maximum load on any vertex or edge of G - as small as possible. This general framework gives rise to a number of basic and widely studied graph routing problems that have led to the development of a rich toolkit of algorithmic techniques, as well as structural graph-theoretic results. In this talk we will describe some of the recent developments in approximation algorithms for graph routing problems, and highlight some connections between this area and graph theory.
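As a concrete baseline in this framework (a standard greedy heuristic, not one of the talk's algorithms), one can route each pair along a shortest path and then remove that path's edges, which yields an edge-disjoint (congestion-1) routing of some subset of the pairs.

# Greedy baseline: route each demand pair on a shortest path in the
# residual graph, consuming edges so the routing stays edge-disjoint.
# A standard heuristic for illustration, not an algorithm from the talk.
import networkx as nx

def greedy_edge_disjoint(G, demand_pairs):
    H = G.copy()
    routed = {}
    for s, t in demand_pairs:
        try:
            path = nx.shortest_path(H, s, t)
        except nx.NetworkXNoPath:
            continue                      # pair cannot be routed; skip it
        routed[(s, t)] = path
        H.remove_edges_from(zip(path, path[1:]))   # consume the path
    return routed

# Example: three demand pairs on a 4x4 grid graph.
G = nx.grid_2d_graph(4, 4)
pairs = [((0, 0), (3, 3)), ((0, 3), (3, 0)), ((0, 0), (0, 3))]
print(greedy_edge_disjoint(G, pairs))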

Julia Chuzhoy is an Associate Professor at the Toyota Technological Institute at Chicago (TTIC). She received her Ph.D. in Computer Science from the Technion, Israel, in 2004, and spent several years as a postdoc at MIT, the University of Pennsylvania, and the Institute for Advanced Study before joining TTIC in 2007. Her research area is theoretical computer science, with the main emphasis on the design and analysis of approximation algorithms for NP-hard problems.

Exploring the Limits of the Efficiently Computable

Date and Time
Wednesday, April 29, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Sanjeev Arora
I'll give a broad overview of my research over the last decade aimed at understanding the relationship between computational complexity and physics -- and in particular, the capabilities and limitations of quantum computers.  Possible topics, depending on time, will include the BosonSampling model of quantum computing, creating unforgeable 'quantum money' using hidden subspaces, quantum computing versus the polynomial-time hierarchy, maximal separations between classical and quantum query complexity, the need for structure in quantum speedups, and the computational complexity of decoding Hawking radiation.
 
Scott Aaronson is an Associate Professor of Electrical Engineering and Computer Science at MIT. He studied at Cornell and UC Berkeley, and did postdocs at the Institute for Advanced Study as well as the University of Waterloo. His research focuses on the capabilities and limits of quantum computers, and more generally on computational complexity and its relationship to physics.  His first book, Quantum Computing Since Democritus, was published in 2013 by Cambridge University Press. Aaronson has written about quantum computing for Scientific American and the New York Times, and writes a popular blog.  He’s received the National Science Foundation’s Alan T. Waterman Award, the United States PECASE Award, and MIT's Junior Bose Award for Excellence in Teaching.

The Surprising Power of Modern Cryptography

Date and Time
Monday, April 6, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Mark Braverman
Modern cryptography is surprisingly powerful, yielding capabilities such as secure multiparty computation, computing on encrypted data, and hiding secrets in code.  Currently, however, some of these advanced abilities are still too inefficient for practical use.  The goals of my research are two-fold: (1) continue expanding the capabilities of cryptography and its applications, and (2) bring these advanced capabilities closer to practice.
 
In this talk, I will focus on a particular contribution that addresses both of these objectives: establishing a shared secret key among a group of participants with only a single round of interaction.  The first such protocols required a setup phase, where a central authority determines the parameters for the scheme; unfortunately, this authority can learn the shared group key and must therefore be trusted.  I will discuss how to remove this setup phase using program obfuscation, though the scheme is very impractical due to the inefficiencies of current obfuscators.  I will then describe a new technical tool called witness pseudorandom functions and show how to use this tool in place of obfuscation, resulting in a significantly more efficient protocol.
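For contrast with the familiar two-party case, where a single round has long sufficed via Diffie-Hellman, here is a textbook sketch of that baseline (toy parameters, not secure sizes, and emphatically not the group protocol from the talk; the difficulty lies in generalizing this one-round property to n parties).

# Textbook two-party Diffie-Hellman: the classic one-round key exchange.
# Toy modulus for brevity; real deployments use vetted, much larger groups.
import secrets

p = 2**127 - 1                     # a Mersenne prime; illustration only
g = 3

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

# The single round: each party broadcasts one group element.
A = pow(g, a, p)
B = pow(g, b, p)

# Each side derives the same key locally, with no further interaction.
assert pow(B, a, p) == pow(A, b, p)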
 
Mark Zhandry is a Ph.D. candidate at Stanford University advised by Dan Boneh.  He studies cryptography and computer science theory and is currently focusing on developing new cutting-edge cryptographic capabilities and improving the efficiency of these applications.  He is visiting Microsoft Research New England and MIT for the 2014-15 academic year.

The Power of Asymmetry in Binary Hashing

Date and Time
Thursday, March 12, 2015 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Elad Hazan

Nathan Srebro

When looking for similar objects, like images and documents, and especially when querying a large remote database for similar objects, it is often useful to construct short similarity-preserving binary hashes: that is, to map each image or document to a short bit string such that similar objects have similar bit strings. Such a mapping lies at the root of nearest neighbor search methods such as Locality Sensitive Hashing (LSH) and has recently gained popularity in a variety of vision, image retrieval, and document retrieval applications. In this talk I will demonstrate, both theoretically and empirically, that even for symmetric and well-behaved similarity measures, much can be gained by using two different hash functions---one for hashing objects in the database and an entirely different hash function for the queries. Such asymmetric hashing can allow significantly shorter bit strings and more accurate retrieval.

Joint work with Behnam Neyshabur, Yury Makarychev, and Russ Salakhutdinov.
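To convey the flavor of asymmetry (an illustrative sketch with standard random hyperplanes, not the paper's learned hashes; here only the database side is binarized, whereas the paper's scheme uses two different binary hash functions):

# Symmetric vs. asymmetric comparison for cosine similarity with random
# hyperplane projections; illustrative only, not the paper's construction.
import numpy as np

rng = np.random.default_rng(0)
d, bits, n = 64, 32, 1000
W = rng.standard_normal((bits, d))       # shared random projections

X = rng.standard_normal((n, d))          # database vectors
q = X[0] + 0.1 * rng.standard_normal(d)  # query similar to item 0

db_codes = np.sign(W @ X.T)              # database: one bit per projection

# Symmetric: binarize the query too, and count agreeing bits.
sym_scores = db_codes.T @ np.sign(W @ q)

# Asymmetric: keep the query real-valued, correlate it with the codes.
asym_scores = db_codes.T @ (W @ q)

# Expect both to rank item 0 first; the asymmetric scores carry more
# information per stored bit.
print(int(np.argmax(sym_scores)), int(np.argmax(asym_scores)))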

Nati Srebro obtained his PhD at the Massachusetts Institute of Technology (MIT) in 2004, held a post-doctoral fellowship with the Machine Learning Group at the University of Toronto, and was a Visiting Scientist at IBM Haifa Research Labs.  Since January 2006, he has been on the faculty of the Toyota Technological Institute at Chicago (TTIC) and the University of Chicago, and has also served as the first Director of Graduate Studies at TTIC.  From 2013 to 2014 he was an associate professor at the Technion-Israel Institute of Technology. Prof. Srebro's research encompasses methodological, statistical, and computational aspects of machine learning, as well as related problems in optimization.  Some of his significant contributions include work on learning "wider" Markov networks; pioneering work on matrix factorization and collaborative prediction, including introducing the use of the nuclear norm for machine learning and matrix reconstruction; and work on fast optimization techniques for machine learning and on the relationship between learning and optimization.

Leveraging Array Signal Processing in the Wireless Internet of Things

Date and Time
Monday, April 13, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Michael Freedman

Kyle Jamieson

Phased array signal processing has long been employed outdoors in radar, underwater in sonar, and underground in seismic monitoring.  Today, it has the potential to revolutionize the Internet of Things (IoT) by giving us the ability to track every one of the billions of IoT devices indoors, and meet their exploding bandwidth requirements.  But to make the shift to indoor wireless networks, it must cope with strong multipath radio reflections, packetized data transmissions, and commodity hardware platforms.  In this talk I will describe two relevant systems through the lens of system-building and experimentation.  First, I will describe an indoor location system that uses solely the existing Wi-Fi infrastructure to achieve a median location accuracy of 23 centimeters, and sub-second response time, allowing Wi-Fi-enabled devices to be tracked in real-time.  Next, I will present a MIMO-based Sphere Decoder system that leverages novel search algorithms and geometric reasoning to increase wireless throughput, the first of its kind that scales to 256-QAM constellation densities with computational demands that are practically realizable in current ASIC hardware.  Finally, I will conclude with a vision of how these techniques will support exciting IoT applications such as video analytics over billions of indoor wireless cameras.
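To make the array-processing core concrete (an illustrative sketch, not the system's code), the following estimates a signal's angle of arrival on a simulated four-antenna uniform linear array by scanning steering vectors against the measured spatial covariance, a conventional (Bartlett) beam scan.

# Angle-of-arrival estimation on a simulated uniform linear array via a
# conventional (Bartlett) beam scan; illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n_ant, spacing = 4, 0.5                 # antennas, half-wavelength spacing
true_angle = np.deg2rad(30)

def steering(theta):
    # Per-antenna phase of a plane wave arriving from angle theta.
    k = np.arange(n_ant)
    return np.exp(-2j * np.pi * spacing * k * np.sin(theta))

# 200 snapshots: one narrowband source plus complex receiver noise.
sig = rng.standard_normal(200)
noise = 0.1 * (rng.standard_normal((n_ant, 200))
               + 1j * rng.standard_normal((n_ant, 200)))
snapshots = steering(true_angle)[:, None] * sig + noise

R = snapshots @ snapshots.conj().T / 200          # spatial covariance

# Scan candidate angles; output power peaks at the true arrival angle.
grid = np.deg2rad(np.linspace(-90, 90, 361))
power = [np.real(steering(t).conj() @ R @ steering(t)) for t in grid]
print("estimated AoA:", np.rad2deg(grid[int(np.argmax(power))]), "deg")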
 
Kyle Jamieson is a Senior Lecturer (U.S. equivalent: Assistant Professor) in the Department of Computer Science at University College London (UCL).  His research interests are in building wirelessly networked systems for the real world that cut across the boundaries of digital communications and networking.  Prior to joining UCL, he received the B.S., M.Eng., and Ph.D. (2008) degrees in Computer Science from the Massachusetts Institute of Technology.  He then received a Starting Investigator fellowship from the European Research Council in 2011, Best Paper awards at USENIX 2013 and CoNEXT 2014, and a Google Faculty Research Award in 2015.  He regularly serves on the program committees of the ACM MobiCom, USENIX NSDI, and ACM SIGCOMM conferences.

Parallel Proofs for Parallel Programs

Date and Time
Wednesday, March 25, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Speaker
Zachary Kincaid, from University of Toronto
Host
David Walker


Today's software systems are large and complex - beyond the scope of comprehension of any one person.  My research is motivated by the question of how machines can help humans better understand their code and aid the construction of reliable, secure, and efficient systems.  Multi-threading is a long-standing obstacle to reasoning by both humans and machines.  Conventionally, this obstacle is met by developing clever ways to reason about the program as if it were executing sequentially.  In this talk, I will argue that we should embrace parallelism, not hide from it.  I will discuss new *fundamentally parallel* foundations for automated program analysis, which allow the parallelism present in a program to be explicitly maintained and enable tractable automated reasoning and succinct proofs.  In other words: my talk will be on parallel proofs for parallel programs.

Zachary Kincaid is a PhD candidate in the Department of Computer Science at the University of Toronto.  He is interested in developing automated reasoning techniques to facilitate the construction of high-performance, dependable software.  Zak's research focuses on static analysis and program verification, with a particular emphasis on multi-threaded programs.  He has also made contributions to theorem proving and program synthesis.