Solving non-convex optimization problems on probabilistic models of inputs lies at the heart of foundational algorithmic challenges arising in high-dimensional statistical data analysis, beyond-worst-case combinatorial optimization, cryptography, and statistical physics.
In this talk, I will present a new method for average-case algorithm design built on a concrete polynomial-time meta-algorithm: the sum-of-squares method. This method yields substantially improved, and often nearly optimal, guarantees for a wide range of problems.
I will focus on the impact of this method on two prominent areas of average-case algorithm design:
1) High-dimensional statistical estimation, where this method has led to efficient algorithms for classical data analysis tasks that provably tolerate adversarial data corruption while incurring the minimum possible error. The resulting applications range from new high-dimensional robust estimators for basic tasks, such as computing the mean, covariance, and moments of data, to more sophisticated tasks, such as regression, clustering, sparse recovery, and fitting mixture models. Most recently, this theory led to the first efficient algorithm for robustly learning a high-dimensional mixture of Gaussians. This resolves a central open question in the area, whose history goes back to Pearson's famous work from 1894.
2) Beyond-worst-case combinatorial optimization, where this method has led to new efficient algorithms that escape worst-case hardness while avoiding "overfitting" to brittle properties of any specific random model. Most recently, this line of work resulted in the resolution of longstanding open questions on optimal algorithms for "smoothed" models of k-SAT and "semirandom" models of Max-Clique.
Taken together, these results suggest a unified theory of average-case algorithm design that not only makes substantial progress on long-open foundational challenges but also brings an unanticipated conceptual unity to algorithm design.
Bio: Pravesh Kothari has been an Assistant Professor of Computer Science at Carnegie Mellon University since September 2019. Before joining CMU, he was a postdoctoral research instructor jointly hosted by Princeton University and the Institute for Advanced Study from 2016 to 2019. He obtained his Ph.D. in 2016 from the University of Texas at Austin. Kothari's recent work has focused on algorithm design for problems with statistical inputs, which is also the subject of his recent monograph, "Semialgebraic Proofs and Efficient Algorithm Design". His research has been recognized with a Simons Award for Graduate Students in Theoretical Computer Science, a Google Research Scholar Award, an NSF CAREER Award, and an Alfred P. Sloan Research Fellowship.
To request accommodations for a disability, please contact Emily Lawrence, firstname.lastname@example.org, at least one week prior to the event.