CS Department Colloquium Series

Delegation of Computation, P-completeness of Linear Programming, and the many facets of the notion of a proof

Date and Time
Monday, November 30, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Mark Braverman

Suppose that Alice performs some computation for Bob, who lacks the computational power to run the computation himself. Can Bob be convinced that the computation was done correctly?

Delegation of Computation is a central problem in modern cryptography. I will describe a recent one-round delegation protocol. The discussion will take us on a journey into the notion of a proof, through some of the most fascinating ideas in the history of theoretical computer science. I will conclude with a seemingly unrelated application to the P-completeness of Linear Programming with a fixed polytope.
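The talk's specific protocol is beyond this abstract, but the flavor of verifiable delegation can be illustrated with a classical, much simpler example: Freivalds' randomized check, which lets a weak verifier confirm a claimed matrix product in O(n^2) time rather than recomputing it in O(n^3). A minimal Python sketch (illustrative only, not the protocol presented in the talk):

    import numpy as np

    def freivalds_check(A, B, C, rounds=20):
        # Probabilistically verify the claim A @ B == C.  Each round does
        # O(n^2) work: multiply by a random 0/1 vector r and compare
        # A @ (B @ r) against C @ r.  If C is wrong, a single round accepts
        # it with probability at most 1/2, so an incorrect product survives
        # all rounds with probability at most 2**-rounds.
        n = C.shape[1]
        for _ in range(rounds):
            r = np.random.randint(0, 2, size=n)
            if not np.array_equal(A @ (B @ r), C @ r):
                return False   # caught an incorrect (or cheating) result
        return True

    # Bob verifies Alice's claimed product far more cheaply than redoing it:
    A = np.random.randint(0, 10, (200, 200))
    B = np.random.randint(0, 10, (200, 200))
    assert freivalds_check(A, B, A @ B)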

Matrix Completion and Matrix Concentration

Date and Time
Monday, December 14, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Computer Science Department and The Center for Statistics and Machine Learning

Lester Mackey
The goal in matrix completion is to recover a matrix from a small subset of noisy entries. Web-scale instances of this problem include collaborative filtering for recommendation and link prediction in social networks. Many modern matrix completion algorithms provide strong recovery guarantees but scale poorly due to the repeated and costly computation of truncated SVDs. To address this deficiency, in the first part of this talk, I will introduce Divide-Factor-Combine (DFC), a parallel divide-and-conquer framework for boosting the scalability of a matrix completion algorithm while retaining its theoretical guarantees. Our experiments demonstrate the near-linear to super-linear speed-ups attainable with this approach, and our analysis shows that DFC enjoys high-probability recovery guarantees comparable to those of its base algorithm.
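A schematic of the divide-and-conquer pattern in Python, assuming a base completion routine base_complete supplied by the caller; the combine step below follows the column-projection idea, and the paper's precise estimators and ensembling variants are omitted:

    import numpy as np

    def dfc_proj(M, mask, base_complete, t=4, seed=0):
        # Divide: randomly partition the columns into t blocks.
        rng = np.random.default_rng(seed)
        blocks = np.array_split(rng.permutation(M.shape[1]), t)

        # Factor: run the base matrix-completion algorithm on each block.
        # The t subproblems are independent and can run in parallel.
        completed = [base_complete(M[:, b], mask[:, b]) for b in blocks]

        # Combine: project every completed block onto the column space of
        # the first one (one of several combine rules; others ensemble
        # over the choice of reference block).
        U, _, _ = np.linalg.svd(completed[0], full_matrices=False)
        L = np.zeros(M.shape)
        for b, C in zip(blocks, completed):
            L[:, b] = U @ (U.T @ C)
        return L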

Fundamental to our analysis – and to the analyses of many matrix completion procedures – are matrix concentration inequalities that characterize the fluctuations of a random matrix about its mean. In the second part of this talk, I will show how Stein’s method of exchangeable pairs can be used to derive concentration inequalities for matrix-valued random elements. When applied to a sum of independent random matrices, this approach yields matrix generalizations of the classical inequalities due to Hoeffding, Bernstein, Khintchine, and Rosenthal. The same technique delivers bounds for sums of dependent random matrices and more general matrix functionals of dependent random elements.
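As one representative example of this family of bounds, the matrix Bernstein inequality (stated here in its standard form; the talk derives such bounds via exchangeable pairs): if X_1, ..., X_n are independent, zero-mean, symmetric d x d random matrices with \|X_k\| \le R almost surely, and \sigma^2 = \| \sum_k E[X_k^2] \|, then

    \Pr\big[ \lambda_{\max}\big( \sum_k X_k \big) \ge t \big] \le d \cdot \exp\Big( \frac{-t^2/2}{\sigma^2 + Rt/3} \Big).

Up to the dimensional factor d, this mirrors the scalar Bernstein inequality; the exchangeable-pairs approach also yields analogues for sums of dependent matrices.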

Structured Sum of Squares Polynomials in Optimization and Control

Date and Time
Thursday, November 12, 2015 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series

Amir Ali Ahmadi
In recent years, algebraic techniques in optimization such as sum of squares (SOS) programming have led to powerful semidefinite programming relaxations for a wide range of NP-hard problems in computational mathematics. We begin by giving an overview of these techniques, emphasizing their implications for optimization and Lyapunov analysis of dynamical systems, and pointing out some challenges that remain in terms of scalability. We then introduce new algebraic relaxation schemes that are very similar in nature to SOS relaxations but result in linear or second-order cone programs instead of semidefinite programs. These are what we call “DSOS and SDSOS programs.” We show that these relaxations are orders of magnitude more scalable than SOS relaxations while enjoying many of the same asymptotic theoretical guarantees. (Joint work with Anirudha Majumdar.)
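To make the comparison concrete, write a polynomial p of degree 2d as p(x) = z(x)^T Q z(x), where z(x) is the vector of monomials of degree at most d. The three relaxations differ only in the cone that the Gram matrix Q is required to lie in:

- SOS: Q positive semidefinite -- a semidefinite program;
- SDSOS: Q scaled diagonally dominant (DQD diagonally dominant for some positive diagonal matrix D) -- a second-order cone program;
- DSOS: Q diagonally dominant with nonnegative diagonal entries (q_{ii} \ge \sum_{j \ne i} |q_{ij}|) -- a linear program.

Every diagonally dominant matrix with nonnegative diagonal is positive semidefinite, so DSOS polynomials form a subset of SDSOS polynomials, which form a subset of SOS polynomials; this inclusion is the source of both the scalability gain and the added conservatism.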

Time permitting, we will also present our recently introduced area of “robust-to-dynamics optimization,” which has to do with optimizing over invariant sets of dynamical systems. (Joint work with Oktay Gunluk.)

Amir Ali Ahmadi is an Assistant Professor in the Department of Operations Research and Financial Engineering at Princeton University and an Associated Faculty member of the Department of Computer Science. Amir Ali received his PhD in EECS from MIT and was a Goldstine Fellow at the IBM Watson Research Center prior to joining Princeton. His research interests are in optimization theory and in computational aspects of dynamics and control. Amir Ali's recent awards include the INFORMS Computing Society Prize (for best series of papers at the interface of operations research and computer science), the AFOSR Young Investigator Program Award, the Goldstine Fellowship of IBM Research, the Best Conference Paper Award of the IEEE International Conference on Robotics and Automation, and the prize for one of the two most outstanding papers published in the SIAM Journal on Control and Optimization in 2013-2015. Amir Ali is also a three-time recipient of the NSF Oberwolfach Fellowship and a best student paper award finalist at the 47th IEEE Conference on Decision and Control.

Mapping, Localization, and Self-Driving Vehicles

Date and Time
Tuesday, November 10, 2015 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series

John Leonard
This talk will discuss the critical role of mapping and localization in the development of self-driving vehicles.  After a discussion of some of the recent amazing progress and open technical challenges in developing such vehicles, we will discuss the past, present, and future of Simultaneous Localization and Mapping (SLAM) in robotics.  We will review the history of SLAM research and discuss some of the major challenges in SLAM, including choosing a map representation, developing algorithms for efficient state estimation, and solving for data association and loop closure.  We will also present recent results on object-based mapping in dynamic environments and real-time dense mapping using RGB-D cameras.
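As background for the state-estimation discussion, a generic sketch of the standard factor-graph formulation (not specific to the systems presented in the talk): SLAM is posed as nonlinear least squares over the poses and landmarks X, given relative measurements z_{ij} with covariances \Sigma_{ij} and a measurement model h:

    X^* = \arg\min_X \sum_{(i,j)} \big\| h(x_i, x_j) - z_{ij} \big\|^2_{\Sigma_{ij}^{-1}}

Loop closures simply contribute additional (i, j) terms to the sum; data association determines which terms appear in the first place.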

Joint work with Sudeep Pillai, Tom Whelan, Michael Kaess, John McDonald, Hordur Johannsson, Maurice Fallon, David Rosen, Ross Finman, Paul Huang, Liam Paull, Nick Wang, and Dehann Fourie.

John J. Leonard is Samuel C. Collins Professor of Mechanical and Ocean Engineering and Associate Department Head for Research in the MIT Department of Mechanical Engineering.  He is also a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).  His research addresses the problems of navigation and mapping for autonomous mobile robots.  He holds the degrees of B.S.E.E. in Electrical Engineering and Science from the University of Pennsylvania (1987) and D.Phil. in Engineering Science from the University of Oxford (1994).  He was team leader for MIT's DARPA Urban Challenge team, which was one of eleven teams to qualify for the Urban Challenge final event and one of six teams to complete the race.  He is the recipient of an NSF Career Award (1998) and the King-Sun Fu Memorial Best Transactions on Robotics Paper Award (2006).  He is an IEEE Fellow (2014).

Benchmarking the D-Wave 2X: Challenges and Early Results

Date and Time
Tuesday, October 20, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Andrew Appel

A D-Wave platform implements a quantum annealing algorithm in hardware to solve an NP-hard problem known as Ising Model Optimization (also called Quadratic Unconstrained Boolean Optimization).  The “hardware” is a processor chip containing qubits that exploit quantum properties such as superposition and entanglement to carry out the computation.  This is a heuristic algorithm that belongs to the adiabatic quantum model of computation, an alternative to the more familiar quantum gate model of computation.

The task of performance assessment for these novel platforms -- comparing classical heuristics implemented in software to a quantum analog heuristic implemented in hardware -- gives rise to a number of new methodological issues, on top of the usual challenges relating to evaluation of heuristics for NP-hard problems.  I will discuss some of these issues and present some early performance results for the D-Wave 2X, a 1000-qubit processor launched in summer 2015.
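Concretely, the hardware minimizes an Ising energy over spins s_i in {-1, +1}, with programmable local fields h_i and couplings J_{ij} restricted to the edge set E of the chip's qubit-connectivity graph:

    H(s) = \sum_i h_i s_i + \sum_{(i,j) \in E} J_{ij} s_i s_j

The QUBO formulation is the same problem after the change of variables x_i = (s_i + 1)/2 in {0, 1}.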

A Cognitive Architecture for Human-Robot Collaborations Based on Spatial, Temporal and Causal And-Or Graphs

Date and Time
Monday, November 16, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series

Song-Chun Zhu
In this talk, I will present an ongoing effort to develop autonomous robots that can collaborate with humans in real-world scenes and tasks. One objective is to explore a cognitive architecture that can embrace recent progress in vision, cognition, learning, NLP, and cognitive robotics using a unified knowledge representation --- the spatial, temporal and causal and-or graph (STC-AOG). The STC-AOG is a probabilistic, graphical and compositional model that represents stochastic context-sensitive grammars for the hierarchical structures in scenes and objects (spatial), in events and actions (temporal), and in the effects of actions on the scenes (causal). In addition, the representation must also consider the theory of mind, i.e., the beliefs and intents of others, for collaboration. I will show a few examples of human-robot collaboration.
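As a minimal illustration of the underlying representation -- a generic stochastic And-Or graph in Python, not the STC-AOG implementation itself -- Or-nodes choose one alternative according to branching probabilities, And-nodes decompose into all of their children, and terminal nodes are primitives:

    import random

    class Node:
        """A node in a stochastic And-Or graph."""
        def __init__(self, kind, label, children=(), probs=()):
            self.kind = kind          # 'AND', 'OR', or 'TERMINAL'
            self.label = label
            self.children = list(children)
            self.probs = list(probs)  # branching probabilities for OR nodes

        def sample(self):
            """Sample one derivation as a list of terminal labels."""
            if self.kind == 'TERMINAL':
                return [self.label]
            if self.kind == 'OR':     # choose exactly one alternative
                child, = random.choices(self.children, weights=self.probs)
                return child.sample()
            parse = []                # 'AND': expand all constituent parts
            for child in self.children:
                parse += child.sample()
            return parse

    # Tiny temporal example: an action decomposes into sub-actions.
    grasp = Node('TERMINAL', 'grasp')
    pour = Node('TERMINAL', 'pour')
    stir = Node('TERMINAL', 'stir')
    finish = Node('OR', 'finish', [pour, stir], probs=[0.7, 0.3])
    make_tea = Node('AND', 'make_tea', [grasp, finish])
    print(make_tea.sample())          # e.g. ['grasp', 'pour']

Parsing inverts this generative process: given observations, infer the most probable derivation.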

Song-Chun Zhu received a Ph.D. degree from Harvard University in 1996. He is currently a professor of Statistics and Computer Science, and director of the Center for Vision, Learning, Cognition and Autonomy at UCLA. His work in computer vision has received a number of honors, including the Marr Prize in 2003 for image parsing and Marr Prize honorary nominations in 1999 for texture modeling and in 2007 for object modeling. As a junior faculty member he received a Sloan Research Fellowship in Computer Science, an NSF CAREER Award, and an ONR Young Investigator Award in 2001. In 2008 he received the Aggarwal Prize from the International Association for Pattern Recognition for “contributions to a unified foundation for visual pattern conceptualization, modeling, learning, and inference.” He received the Helmholtz Test-of-Time Prize in 2013. He has been a fellow of the IEEE Computer Society since 2011. He is PI of two consecutive ONR MURI projects, on scene and event understanding and on commonsense reasoning, respectively. In recent years, he has also become interested in situated dialogues and cognitive robots with the support of the DARPA MSEE and SIMPLEX projects.

Networks of Shapes and Images

Date and Time
Thursday, February 25, 2016 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series

Leonidas Guibas
Multimedia content has become a ubiquitous presence on all our computing devices, spanning the gamut from live content captured by device sensors such as smartphone cameras to immense databases of images, audio, video, 3D scans and 3D models stored in the cloud. As we try to maximize the utility and value of all these petabytes of content, we often do so by analyzing each piece of data individually and forgoing a deeper analysis of the relationships between the media. In this talk we focus on developing rigorous mathematical and computational tools for making such relationships or correspondences between signal and media data sets first-class citizens -- so that the relationships themselves become explicit, algebraic, storable and searchable objects. We discuss mathematical and algorithmic issues on how to represent and compute relationships or mappings at multiple levels of detail -- and go on to build entire networks based on these relationships in collections of inter-related data.

Information transport and aggregation in such networks naturally lead to abstractions of objects and other visual entities, allowing data compression while capturing variability as well as shared structure. Furthermore, the network can act as a regularizer, allowing us to benefit from the "wisdom of the collection" in performing operations on individual data sets or in map inference between them, ultimately enabling a certain joint understanding of data that provides the powers of abstraction, analogy, compression, error correction, and summarization. Examples include entity extraction from images or videos, 3D segmentation, the propagation of annotations and labels among images/videos/3D models, variability analysis in a collection of shapes, etc.
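One concrete instance of such explicit, algebraic relationship objects is the functional map (listed among the speaker's past accomplishments below): a correspondence between shapes M and N is encoded as a small matrix C, so that a function on M with basis coefficients a corresponds to the function on N with coefficients Ca. The regularizing effect of the network can then be written, schematically, as approximate cycle consistency: with C_{ji} denoting the map from shape i to shape j, every cycle i_1 -> i_2 -> ... -> i_k -> i_1 should satisfy

    C_{i_1 i_k} \cdots C_{i_3 i_2} C_{i_2 i_1} \approx I.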

Finally we briefly describe the ShapeNet effort, an attempt to build a large-scale repository of 3D models richly annotated with geometric, physical, functional, and semantic information -- both individually and in relation to other models and media. More than a repository, ShapeNet is a true network that allows information transport not only between its nodes but also to/from new visual data coming from sensors. This effectively enables us to add missing information to signals, giving us for example the ability to infer what an occluded part of an object in an image may look like, or what other object arrangements may be possible, based on the world-knowledge encoded in ShapeNet.

This is joint work with several collaborators, as will be discussed during the talk.

Leonidas Guibas obtained his Ph.D. from Stanford under the supervision of Donald Knuth. His main subsequent employers were Xerox PARC, DEC/SRC, MIT, and Stanford. He is currently the Paul Pigott Professor of Computer Science (and by courtesy, Electrical Engineering) at Stanford University. He heads the Geometric Computation group and is part of the Graphics Laboratory, the AI Laboratory, the Bio-X Program, and the Institute for Computational and Mathematical Engineering. Professor Guibas’ interests span geometric data analysis, computational geometry, geometric modeling, computer graphics, computer vision, robotics, ad hoc communication and sensor networks, and discrete algorithms. Some well-known past accomplishments include the analysis of double hashing, red-black trees, the quad-edge data structure, Voronoi-Delaunay algorithms, the Earth Mover’s distance, Kinetic Data Structures (KDS), Metropolis light transport, heat-kernel signatures, and functional maps. Professor Guibas is an ACM Fellow, an IEEE Fellow and winner of the ACM Allen Newell award.

Programming Abstractions for Mobile-Cloud Computing

Date and Time
Monday, October 26, 2015 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Nick Feamster

Mayur Naik
As the computing landscape changes to provide a more personalized and targeted experience to users, the demand for compute-intensive tasks on mobile devices such as tablets, smartphones, and smart watches increases. However, this demand coincides with diminishing returns from transistor scaling as well as battery and power constraints that may prevent mobile devices from providing new capabilities. Mobile-cloud computing is an emerging approach that boosts mobile devices by offloading compute-intensive tasks in mobile applications to more powerful machines. A central challenge in mobile-cloud computing concerns orchestrating computation and communication between the mobile device and the remote system in a manner that allows computation gains to outweigh communication costs. To navigate this tradeoff, we study two simple programming abstractions that allow the remote system to preserve state across disparately offloaded segments of a computation, and to communicate with the mobile device while performing offloaded computation. To evaluate the benefits of these abstractions, we have developed an efficient algorithm based on finding a min-cut of a graph that identifies the optimal offloading pattern. I will describe our experience applying these abstractions to partition several Android applications and obtain speedups, energy savings, and improved QoS under different network settings. Our experience also yields insights into how to further accelerate mobile-cloud computing through designing new networking protocols and leveraging advances in hardware approximation and acceleration.
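To sketch the min-cut formulation (a schematic reconstruction with made-up component names and costs, not the system's actual algorithm or API): model each program component as a graph node, connect a MOBILE source to it with capacity equal to its cloud execution cost, connect it to a CLOUD sink with capacity equal to its device execution cost, and weight edges between communicating components by their data-transfer cost. A minimum s-t cut then pays exactly the total cost of the induced partition:

    import networkx as nx

    # Hypothetical per-component costs (ms).  Pinned components (UI,
    # rendering) get an effectively infinite offload cost.
    local_cost = {'ui': 2, 'decode': 40, 'inference': 120, 'render': 5}
    remote_cost = {'ui': 1e9, 'decode': 8, 'inference': 15, 'render': 1e9}
    transfer = {('ui', 'decode'): 6, ('decode', 'inference'): 12,
                ('inference', 'render'): 9}

    G = nx.DiGraph()
    for comp in local_cost:
        G.add_edge('MOBILE', comp, capacity=remote_cost[comp])  # cut if offloaded
        G.add_edge(comp, 'CLOUD', capacity=local_cost[comp])    # cut if kept local
    for (u, v), c in transfer.items():                          # cut if u, v split
        G.add_edge(u, v, capacity=c)
        G.add_edge(v, u, capacity=c)

    cost, (device_side, cloud_side) = nx.minimum_cut(G, 'MOBILE', 'CLOUD')
    print('total cost:', cost)
    print('run on device:', device_side - {'MOBILE'})
    print('offload:', cloud_side - {'CLOUD'})

The two abstractions in the talk enrich what a "node" may do, since offloaded segments can retain state and communicate mid-computation.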

Mayur Naik has been an Assistant Professor in the School of Computer Science at Georgia Tech since 2011. His research interests lie in areas related to programming systems, with a current emphasis on program analysis techniques for improving software quality and programmer productivity on modern computing platforms. He is a recipient of Distinguished Paper awards at FSE'15, PLDI'14, and ICSE'09, the Lockheed-Martin Dean's award for excellence in teaching at Georgia Tech (2015), and an NSF CAREER award (2013). He received his Ph.D. in Computer Science from Stanford University in 2008 and was a Research Scientist at Intel Labs, Berkeley from 2008 to 2011.

Seeing, Saying, Doing, and Learning: Integrating Computer Vision, Natural Language Processing, Robotics, and Machine Learning Through Multidirectional Inference

Date and Time
Thursday, October 15, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Christiane Fellbaum

Jeff Siskind
The semantics of natural language can be grounded in perception and motor control with a unified cost function that supports multidirectional inference. I will present several instances of this approach.  The first is a cost function relating sentences, video, and a lexicon.  Performing inference from video and a lexicon to sentences allows it to generate sentential descriptions of video.  Performing inference from sentences and a lexicon to video allows it to search a video database for clips that match a sentential query. Performing inference from sentences and video to a lexicon allows it to learn a lexicon.  The second is the functional inverse of video captioning.  Instead of mapping video and object detections to sentences, one can map video and sentences to object detections.  This allows one to use sentential constraint on a video object codetection process to find objects without pretrained object detectors.  The third is a cost function relating sentences, robotic navigation paths, and a lexicon.  Performing inference from sentences and navigation paths to a lexicon allows it to learn a lexicon.  Performing inference from navigation paths and a learned lexicon to sentences allows it to generate sentential descriptions of paths driven by a mobile robot. Performing inference from sentences and a learned lexicon to navigation paths allows it to plan and drive navigation paths that satisfy a sentential navigation request.  Finally, one can perform object codetection on the video stream from a robot-mounted camera during navigation to satisfy sentential requests and use the collection of constraints from vision, language, and robotics to detect, localize, and label objects in the environment without any pretrained object detectors.
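Schematically (my notation, not the talk's): each instance pairs a single shared score with different choices of which argument to infer. With sentences s, a perceptual or motor signal v (video or a navigation path), and a lexicon \Lambda, one cost function S(s, v, \Lambda) supports

    generation:  \hat{s} = \arg\max_s S(s, v, \Lambda)
    retrieval:   \hat{v} = \arg\max_{v \in \text{database}} S(s, v, \Lambda)
    learning:    \hat{\Lambda} = \arg\max_{\Lambda} \sum_i S(s_i, v_i, \Lambda)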

Joint work with Andrei Barbu, Daniel Paul Barrett, Scott Alan Bronikowski, N. Siddharth, and Haonan Yu.

Jeffrey M. Siskind received the B.A. degree in computer science from the Technion, Israel Institute of Technology, Haifa, in 1979, the S.M. degree in computer science from the Massachusetts Institute of Technology (M.I.T.), Cambridge, in 1989, and the Ph.D. degree in computer science from M.I.T. in 1992. He did a postdoctoral fellowship at the University of Pennsylvania Institute for Research in Cognitive Science from 1992 to 1993. He was an assistant professor at the University of Toronto Department of Computer Science from 1993 to 1995, a senior lecturer at the Technion Department of Electrical Engineering in 1996, a visiting assistant professor at the University of Vermont Department of Computer Science and Electrical Engineering from 1996 to 1997, and a research scientist at NEC Research Institute, Inc. from 1997 to 2001. He joined the Purdue University School of Electrical and Computer Engineering in 2002 where he is currently an associate professor. His research interests include computer vision, robotics, artificial intelligence, neuroscience, cognitive science, computational linguistics, child language acquisition, automatic differentiation, and programming languages and compilers.

Actionable Big Data: From Data to Decisions and Back

Date and Time
Tuesday, October 13, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Computer Science and the Center for Statistics and Machine Learning

In many sequential decision problems, all we have is a record of historical trajectories. Building dynamic models from these trajectories, and ultimately sequential decision policies, can introduce considerable uncertainty and bias. In this talk we consider how to create control policies from existing historical data, and how to sample trajectories so that future control policies will be better. This question has been central to reinforcement learning for the last decade, if not longer, and involves methods from statistics, machine learning, optimization, and control theory.

I will start my talk by demonstrating why planning under parameter uncertainty is an important issue. I will then describe several approaches: a Bayesian uncertainty model over the unknown parameters, a robust approach that takes a worst-case view, and a frequentist approach. I will then discuss the challenges that are posed when the model itself, rather than just the parameters, may not be fully known.
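For the worst-case view, the standard robust formulation replaces the expectation in the Bellman recursion with an inner minimization over an uncertainty set U(s, a) of plausible transition models:

    V(s) = \max_a \Big[ r(s, a) + \gamma \min_{p \in U(s,a)} \sum_{s'} p(s' \mid s, a) \, V(s') \Big]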

I will then describe two challenging real-world domains that have been studied in my research group in collaboration with experts from industry and academia: diabetes care management in healthcare and asset management in high-voltage transmission grids. For each domain I will describe our efforts to reduce the problem to its bare essentials as a reinforcement learning problem, the algorithms for learning the control policies, and some of the lessons we learned. 

I graduated from the Technion with a BSc in Electrical Engineering and a BA in Mathematics (both summa cum laude) in 1996. After that I spent almost four years as an intelligence officer with the Israeli Defense Forces. I was subsequently involved in a few ventures in the high-tech industry. I earned my PhD in Electrical Engineering from the Technion in 2002, under the supervision of Nahum Shimkin. I was then a Fulbright postdoctoral associate with LIDS (MIT), working with John Tsitsiklis for two years. I was at the Department of Electrical and Computer Engineering at McGill University from July 2004 until August 2010, where I held a Canada Research Chair in Machine Learning from 2005 to 2009. I have been with the Department of Electrical Engineering at the Technion since 2008, where I am a professor. I am married to Tally and father to Liam, Oliver and Dylan.
