# CS Department Colloquium Series

## Learning Hidden Computational Processes

We are interested in prediction problems in which evaluating the learned function requires multiple intermediate steps of computation. One motivation is building a system that can answer complex questions: here the function would need to map "How many countries have held the Summer Olympics more than once?" to "3" by applying a sequence of aggregation and filtering operations on a database. In this talk, we examine two key machine learning problems that arise in this setting. First, how do we model the computational process? We argue that the classic paradigm of decoupling modeling from inference is inadequate, and we propose techniques that directly model the inference procedure. Second, learning is very difficult: in our example, the supervision "3" constrains the hidden computational process only very indirectly. We propose methods that relax the output supervision in a parameterized way, and learn both the relaxation and the model parameters jointly, subject to an explicit computational constraint. Finally, we show empirical progress on a challenging new question answering task.
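The aggregation-and-filtering pipeline in the example above can be sketched on a toy table. This is a hypothetical mini-database and a hand-written program, not the speaker's system; the point is only to show the kind of hidden multi-step computation that the single supervision signal "3" must constrain.

```python
from collections import Counter

# Hypothetical mini-database of (year, host_country) rows -- a small
# excerpt, not a complete table of Summer Olympics hosts.
olympics = [
    (1896, "Greece"), (1956, "Australia"), (1964, "Japan"),
    (2000, "Australia"), (2004, "Greece"), (2008, "China"),
    (2012, "United Kingdom"), (2020, "Japan"),
]

# Step 1: aggregate -- count how many times each country hosted.
hosts = Counter(country for _, country in olympics)

# Step 2: filter -- keep countries that hosted more than once.
repeat_hosts = [country for country, n in hosts.items() if n > 1]

# Step 3: count -- the final answer, the only value the supervision sees.
answer = len(repeat_hosts)
print(answer)  # 3 on this toy excerpt (Greece, Australia, Japan)
```

Note that the supervision "3" is consistent with many different programs over the table, which is exactly why learning the hidden computation from answers alone is hard.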

Percy Liang is an Assistant Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011). His research interests include (i) modeling natural language semantics, (ii) developing machine learning methods that infer rich latent structures from limited supervision, and (iii) studying the tradeoff between statistical and computational efficiency. He is a 2015 Sloan Research Fellow, a 2014 Microsoft Research Faculty Fellow, and a 2010 Siebel Scholar, and he received the best student paper award at the International Conference on Machine Learning in 2008.

## Software-Defined Datacenter Networks

## Hard Problems in Hardness of Approximation: Sharp Thresholds, Parallel Repetition and Unique Games

Many of the optimization problems of interest to humanity are NP-hard, and most computer scientists believe that they have no efficient algorithms. Focusing on approximation rather than exact optimization might extend the reach of algorithmic techniques into the intractable. However, for many NP-hard problems, approximating them better than a problem-dependent threshold turns out to be NP-hard as well. Proving this rigorously is a difficult task, which -- by a leap of thought -- leads to fundamental questions about the nature of proofs and their verification.

In this talk I'll discuss the phenomenon of sharp thresholds in approximability, namely how many approximation problems transform instantly from efficiently solvable to exponentially hard as one focuses on a better approximation (joint work with Ran Raz). I'll discuss two-prover games and a new, remarkably simple method ("fortification") for analyzing their parallel repetition. Finally, I'll discuss a recent candidate hard instance for unique games, which might lead to progress on the Unique Games Conjecture -- one of the biggest open problems in approximability (joint work with Subhash Khot).
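To make the two-prover setting concrete, here is a small brute-force computation of the value of one classic game (Feige's "noninteractive agreement" game, used here purely as an illustration, not an example from the talk): a referee sends independent uniform bits x and y to two non-communicating provers, each answers with a pair (i, b) claiming "prover i received bit b", and they win iff both answers are identical and correct. The sketch below enumerates all deterministic strategy pairs.

```python
from itertools import product

# Questions: independent uniform bits sent to provers 1 and 2.
QUESTIONS = [(x, y) for x in (0, 1) for y in (0, 1)]
# Answers: a claim (i, b) = "prover i received bit b".
ANSWERS = [(i, b) for i in (1, 2) for b in (0, 1)]

def wins(x, y, a1, a2):
    """The provers win iff their answers agree and the claim is correct."""
    if a1 != a2:
        return False
    i, b = a1
    return b == (x if i == 1 else y)

# A deterministic strategy maps a prover's question bit to an answer:
# strategy[q] is the answer given on question bit q.
strategies = list(product(ANSWERS, repeat=2))

best = 0
for f in strategies:        # prover 1's strategy
    for g in strategies:    # prover 2's strategy
        score = sum(wins(x, y, f[x], g[y]) for x, y in QUESTIONS)
        best = max(best, score)

value = best / len(QUESTIONS)
print(value)  # 0.5 -- no deterministic strategy pair wins more than half the time
```

This game is a standard cautionary example for parallel repetition: playing two independent copies does not square the value, which is what makes theorems about the decay of the repeated value (and simplifying tools like fortification) nontrivial.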

## Approximation Algorithms for Graph Routing Problems

In a typical routing problem, we are given a graph G, and a collection (s_1,t_1),…,(s_k,t_k) of pairs of vertices, called demand pairs, that we would like to route. In order to route a demand pair (s_i,t_i), we need to choose a path connecting s_i to t_i in G. Our goal is usually to route as many of the demand pairs as possible, while keeping the congestion of the routing -- the maximum load on any vertex or edge of G -- as small as possible. This general framework gives rise to a number of basic and widely studied graph routing problems that have led to the development of a rich toolkit of algorithmic techniques, as well as structural graph theoretic results. In this talk, we will describe some of the recent developments in approximation algorithms for graph routing problems, and highlight some connections between this area and graph theory.
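The congestion measure from the abstract is easy to state in code. Below is a minimal sketch (a hypothetical helper, not from the talk) that, given the paths chosen for the routed demand pairs, computes the maximum number of paths crossing any single edge; vertex congestion is analogous.

```python
from collections import Counter

def edge_congestion(paths):
    """paths: list of vertex sequences, one per routed demand pair.

    Returns the maximum number of paths using any single undirected edge.
    """
    load = Counter()
    for path in paths:
        for u, v in zip(path, path[1:]):
            load[frozenset((u, v))] += 1  # undirected edge {u, v}
    return max(load.values(), default=0)

# Hypothetical routing of three demand pairs in a small graph:
paths = [
    ["s1", "a", "b", "t1"],
    ["s2", "a", "b", "t2"],  # shares edge (a, b) with the first path
    ["s3", "c", "t3"],
]
print(edge_congestion(paths))  # 2 -- edge (a, b) carries two paths
```

The algorithmic tension in these problems is exactly the one visible here: routing more demand pairs tends to force more paths through the same bottleneck edges, driving this quantity up.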

Julia Chuzhoy is an Associate Professor at the Toyota Technological Institute at Chicago (TTIC). She received her Ph.D. in Computer Science from the Technion, Israel, in 2004, and spent several years as a postdoc at MIT, the University of Pennsylvania, and the Institute for Advanced Study before joining TTIC in 2007. Her research area is theoretical computer science, with the main emphasis on the design and analysis of approximation algorithms for NP-hard problems.

## Exploring the Limits of the Efficiently Computable

## The Surprising Power of Modern Cryptography

## The Power of Asymmetry in Binary Hashing

Joint work with Behnam Neyshabur, Yury Makarychev and Russ Salakhutdinov

Nati Srebro obtained his PhD at the Massachusetts Institute of Technology (MIT) in 2004, held a post-doctoral fellowship with the Machine Learning Group at the University of Toronto, and was a Visiting Scientist at IBM Haifa Research Labs. Since January 2006, he has been on the faculty of the Toyota Technological Institute at Chicago (TTIC) and the University of Chicago, and has also served as the first Director of Graduate Studies at TTIC. From 2013 to 2014 he was an associate professor at the Technion-Israel Institute of Technology. Prof. Srebro's research encompasses methodological, statistical, and computational aspects of machine learning, as well as related problems in optimization. His significant contributions include work on learning "wider" Markov networks; pioneering work on matrix factorization and collaborative prediction, including introducing the use of the nuclear norm for machine learning and matrix reconstruction; work on fast optimization techniques for machine learning; and work on the relationship between learning and optimization.