I am currently a Simons Foundation Postdoctoral Research Fellow with the Theory Group at Carnegie Mellon University.
My research interests are broadly in Theoretical Computer Science, particularly in designing efficient algorithms for problems in Combinatorial Optimization and Machine Learning.
I obtained my PhD from the Department of Computer Science at Princeton University, where I was advised by Prof. Moses Charikar. My PhD thesis was titled Beyond Worst Case Analysis in Approximation Algorithms.
Prior to that, I finished my bachelor's degree in Computer Science and Engineering from the Indian Institute of Technology Madras in 2007. I spent the first fifteen years of my life in Pondicherry, a beautiful town in Southern India, where Pi Patel hails from.
PhD Thesis
Beyond Worst Case Analysis in Approximation Algorithms.
[ PDF | Abstract ]
Worst-case analysis has had many successes over the last few decades, and has been the tool of choice in analyzing approximation guarantees of algorithms. Yet, for several important optimization problems, approximation algorithms with good guarantees have been elusive.
In this dissertation, we look beyond worst-case analysis to obtain improved approximation guarantees for fundamental graph optimization problems, given that real-world instances are unlikely to behave like worst-case instances. We study average-case models and other structural assumptions that seem to capture properties of real-world instances, and design improved approximation algorithms in these settings. In some cases, this average-case study also gives us surprising insights into the worst-case approximability.
Selected Publications

Constant Factor Approximations for Balanced Cut in the random PIE model. STOC 2014.
(With Konstantin Makarychev and Yury Makarychev). [ Abstract ]
We propose and study a new average-case model for Balanced Cut, a planted model with permutation-invariant random edges (PIE). Our model is much more general than previously studied average-case models like semi-random models, random planted partitioning models and stochastic block models, and can capture real-world network models like preferential attachment models. We give constant factor approximations for Balanced Cut in this model.
The model: Consider a set of vertices V partitioned into two clusters L and R of equal size. Let G be an arbitrary graph on V with no edges between L and R. Let E_random be a set of edges sampled from an arbitrary permutation-invariant distribution (a distribution that is invariant under renaming of vertices in L and in R). Then we say that G + E_random is a graph with permutation-invariant random edges. For example, in a social network formed by the superposition of separate networks representing different types of ties (say geographic and professional), these different networks are independent.
We present an approximation algorithm for the Balanced Cut problem that finds a balanced cut of cost O(|E_random|) + n·polylog(n) in this model. In the most interesting regime, this is a constant factor approximation with respect to the cost of the planted cut.
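To make the model concrete, here is a minimal sketch of generating a toy PIE instance. It makes one simplifying assumption: the random edges are drawn uniformly over cross pairs, which is just one simple permutation-invariant distribution (the model allows far more general ones). All function names are illustrative, not from the paper.

```python
import random

def planted_pie_instance(n_side, adversarial_edges, n_random, seed=0):
    """Toy PIE-model instance: vertices 0..n_side-1 form cluster L,
    n_side..2*n_side-1 form cluster R. `adversarial_edges` is an arbitrary
    set of edges inside L and inside R; we then add cross edges sampled
    uniformly at random (one simple permutation-invariant distribution)."""
    rng = random.Random(seed)
    L, R = list(range(n_side)), list(range(n_side, 2 * n_side))
    e_random = set()
    while len(e_random) < n_random:
        e_random.add((rng.choice(L), rng.choice(R)))
    return set(adversarial_edges) | e_random, e_random

def cut_cost(edges, left):
    """Number of edges crossing the cut (left, complement)."""
    return sum(1 for (u, v) in edges if (u in left) != (v in left))

# The planted cut (L, R) pays only for the random cross edges,
# no matter how adversarial the intra-cluster edges are:
edges, e_random = planted_pie_instance(
    n_side=50,
    adversarial_edges=[(0, 1), (2, 3), (50, 51)],  # arbitrary edges inside L and R
    n_random=20,
)
assert cut_cost(edges, set(range(50))) == len(e_random)
```

The point of the assertion is the defining feature of the model: the planted cut's cost is exactly |E_random|, which is the benchmark the constant factor approximation is measured against.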

Smoothed Analysis of Tensor Decompositions. STOC 2014.
(With Aditya Bhaskara, Moses Charikar and Ankur Moitra).
[ PDF | ArXiv | Abstract ]
Low-rank tensor decompositions are a powerful tool for learning generative models, and uniqueness results give them a significant advantage over matrix decomposition methods. However, tensors pose significant algorithmic challenges, and tensor analogs of much of the matrix algebra toolkit are unlikely to exist because of hardness results. Efficient decomposition in the overcomplete case (where rank exceeds dimension) is particularly challenging. We introduce a smoothed analysis model for studying these questions and develop an efficient algorithm for tensor decomposition in the highly overcomplete case (rank polynomial in the dimension). In this setting, we show that our algorithm is robust to inverse polynomial error, a crucial property for applications in learning since we are only allowed a polynomial number of samples. While algorithms are known for exact tensor decomposition in some overcomplete settings, these are not known to be stable to noise.
Our main technical contribution is to show that tensor products of perturbed vectors are linearly independent in a robust sense (i.e., the associated matrix has singular values that are at least an inverse polynomial). This key result paves the way for applying tensor methods to learning problems in the smoothed setting. In particular, we use it to obtain results for learning multi-view models and mixtures of axis-aligned Gaussians where there are many more "components" than dimensions. The assumption here is that the model is not adversarially chosen, formalized by a perturbation of model parameters. We believe this is an appealing way to analyze realistic instances of learning problems, since this framework allows us to overcome many of the usual limitations of using tensor methods.
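The phenomenon behind the main technical result can be illustrated with a toy experiment: start from an adversarial family of identical vectors (whose outer products are all the same rank-one matrix), perturb each copy slightly, and observe that the flattened tensor products become linearly independent. This sketch uses uniform noise in place of the paper's perturbation model and a plain Gaussian-elimination rank in place of a singular value bound; all names are illustrative.

```python
import random

def perturb(v, rho, rng):
    """Add independent uniform noise of magnitude rho to each coordinate
    (a simple stand-in for the smoothed-analysis perturbation)."""
    return [x + rng.uniform(-rho, rho) for x in v]

def outer_flat(u, v):
    """Flatten the order-2 tensor (outer product) u (x) v into a vector."""
    return [ui * vj for ui in u for vj in v]

def numeric_rank(rows, tol=1e-9):
    """Rank of a small dense matrix via Gaussian elimination with pivoting."""
    m = [row[:] for row in rows]
    rank, ncols = 0, len(m[0])
    for col in range(ncols):
        pivot = max(range(rank, len(m)), key=lambda r: abs(m[r][col]))
        if abs(m[pivot][col]) < tol:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(len(m)):
            if r != rank and abs(m[r][col]) > tol:
                f = m[r][col] / m[rank][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank += 1
        if rank == len(m):
            break
    return rank

rng = random.Random(1)
d, r = 3, 6  # r > d: an overcomplete family, more vectors than dimensions
base = [[1.0] * d for _ in range(r)]  # adversarial start: all vectors identical
tensored = [outer_flat(perturb(v, 0.1, rng), perturb(v, 0.1, rng)) for v in base]
# Unperturbed, all r outer products coincide (rank 1); after perturbation
# the r flattened tensors span an r-dimensional subspace of R^(d*d).
assert numeric_rank(tensored) == r
```

The paper proves the quantitative version of this: not just linear independence, but a lower bound on the smallest singular value that is inverse polynomial, which is what makes the decomposition robust to sampling noise.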
Bilu-Linial Stable Instances of Max Cut and Minimum Multiway Cut. SODA 2014.
(With Konstantin Makarychev and Yury Makarychev). [ PDF | ArXiv | Abstract ]
We investigate the notion of stability proposed by Bilu and Linial. We obtain an exact polynomial-time algorithm for $\gamma$-stable Max Cut instances with $\gamma \geq c\sqrt{\log n}\log\log n$ for some absolute constant $c > 0$. Our algorithm is robust: it never returns an incorrect answer; if the instance is $\gamma$-stable, it finds the maximum cut; otherwise, it either finds the maximum cut or certifies that the instance is not $\gamma$-stable. We prove that there is no robust polynomial-time algorithm for $\gamma$-stable instances of Max Cut when $\gamma < \alpha_{SC}(n/2)$, where $\alpha_{SC}$ is the best approximation factor for Sparsest Cut with non-uniform demands.
Our algorithm is based on semidefinite programming. We show that the standard SDP relaxation for Max Cut (with $\ell_2^2$ triangle inequalities) is integral if $\gamma \geq D_{\ell_2^2\to \ell_1}(n)$, where $D_{\ell_2^2\to \ell_1}(n)$ is the least distortion with which every $n$-point metric space of negative type embeds into $\ell_1$. On the negative side, we show that the SDP relaxation is not integral when $\gamma < D_{\ell_2^2\to \ell_1}(n/2)$. Moreover, there is no tractable convex relaxation for $\gamma$-stable instances of Max Cut when $\gamma < \alpha_{SC}(n/2)$. This suggests that solving $\gamma$-stable instances with $\gamma = o(\sqrt{\log n})$ might be difficult or impossible.
Our results significantly improve previously known results. The best previously known algorithm for $\gamma$-stable instances of Max Cut required that $\gamma \geq c\sqrt{n}$ (for some $c > 0$) [Bilu, Daniely, Linial, and Saks]. No hardness results were known for the problem. Additionally, we present an algorithm for 4-stable instances of Minimum Multiway Cut. We also study a relaxed notion of weak stability.
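On tiny instances, $\gamma$-stability can be checked by brute force, which makes the definition concrete. The sketch below assumes the standard reformulation: an instance is $\gamma$-stable if the maximum cut stays the unique optimum under every perturbation scaling each edge weight by a factor in $[1, \gamma]$, which (taking the worst perturbation against each rival cut B) reduces to requiring $\gamma \cdot w(B \setminus A) < w(A \setminus B)$ for the optimal cut A and every other cut B. The code is an illustrative check, not any algorithm from the paper.

```python
from itertools import combinations

def cut_edges(edges, side):
    """Edges crossing the cut (side, complement)."""
    return {(u, v) for (u, v) in edges if (u in side) != (v in side)}

def is_gamma_stable(n, weights, gamma):
    """Brute-force gamma-stability check for weighted Max Cut on n vertices.
    For the max cut A and every rival cut B, the worst perturbation scales
    the edges B cuts but A does not by gamma, so stability needs
    gamma * w(B \\ A) < w(A \\ B)."""
    edges = list(weights)
    def value(side):
        return sum(weights[e] for e in cut_edges(edges, side))
    # Enumerate cuts by the side containing vertex 0 (avoids double counting).
    cuts = [frozenset({0}) | frozenset(rest)
            for k in range(n) for rest in combinations(range(1, n), k)]
    best = max(cuts, key=value)
    a = cut_edges(edges, best)
    for b_side in cuts:
        if b_side == best:
            continue
        b = cut_edges(edges, b_side)
        if gamma * sum(weights[e] for e in b - a) >= sum(weights[e] for e in a - b):
            return False
    return True

# Two heavy disjoint edges plus a light one: the cut {0,3} vs {1,2} cuts
# all three edges, so every rival cuts a subset and stability is easy.
w = {(0, 1): 10.0, (2, 3): 10.0, (0, 2): 1.0}
assert is_gamma_stable(4, w, gamma=100.0)
# A unit-weight triangle has tied maximum cuts, so it is not even 1-stable.
assert not is_gamma_stable(3, {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}, gamma=1.0)
```

This exponential enumeration is of course only feasible for toy graphs; the point of the paper is that for $\gamma \geq c\sqrt{\log n}\log\log n$ the SDP relaxation solves stable instances in polynomial time.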
Approximation Algorithms for Semi-random Graph Partitioning Problems. STOC 2012.
(With Konstantin Makarychev and Yury Makarychev). [ PDF | ArXiv | Abstract ]
We propose and study a new semi-random model for graph partitioning problems. We believe that it captures many properties of real-world instances. The model is more flexible than the semi-random model of Feige and Kilian and the planted random model of Bui, Chaudhuri, Leighton and Sipser.
In our semi-random graph partitioning model, only the edges across different clusters need to be (semi-)random; the edges inside clusters can be completely arbitrary (unlike in previous models). We develop a general framework for solving semi-random instances using new SDP-based techniques and apply it to several problems of interest. We present constant factor approximation algorithms for semi-random instances of the Balanced Cut, Multicut, Min Uncut and Small-Set Expansion problems. We also show how to almost recover the optimal solution if the instance satisfies an additional expanding condition. Our algorithms work in a wider range of parameters than algorithms for previously studied random and semi-random models.
Detecting High Log-Densities: an O(n^{1/4}) Approximation for Densest k-Subgraph. STOC 2010.
(With Aditya Bhaskara, Moses Charikar, Eden Chlamtac and Uri Feige). [ PDF | Abstract ]
In the Densest k-Subgraph problem, given a graph G and a parameter k, one needs to find a subgraph of G induced on k vertices that contains the largest number of edges. There is a significant gap between the best known upper and lower bounds for this problem. It is NP-hard, and does not have a PTAS unless NP has subexponential time algorithms. On the other hand, the current best known algorithm of Feige, Kortsarz and Peleg [FKP] gives an approximation ratio of $n^{1/3 - \epsilon}$ for some specific $\epsilon > 0$ (estimated by those authors at around $\epsilon = 1/60$).
We present an algorithm that for every $\epsilon > 0$ approximates the Densest $k$-Subgraph problem within a ratio of $n^{1/4 + \epsilon}$ in time $n^{O(1/\epsilon)}$. If allowed to run for time $n^{O(\log n)}$, our algorithm achieves an approximation ratio of $O(n^{1/4})$. Our algorithm is inspired by studying an average-case version of the problem where the goal is to distinguish random graphs from random graphs with planted dense subgraphs: the approximation ratio we achieve for the general case matches the "distinguishing ratio" we obtain for this planted problem. Achieving a distinguishing ratio of $o(n^{1/4})$ for the planted problem (in polynomial time) is beyond the reach of our current techniques.
At a high level, our algorithms involve cleverly counting appropriately defined trees of constant size in $G$, and using these counts to identify the vertices of the dense subgraph. Our algorithm is based on the following principle. We say that a graph $G(V,E)$ has log-density $\alpha$ if its average degree is $\Theta(|V|^{\alpha})$. The algorithmic core of our result is a family of algorithms that output $k$-subgraphs of non-trivial density whenever the log-density of the densest $k$-subgraph is larger than the log-density of the host graph.
Finally, we extend this algorithm to obtain an $O(n^{1/4 - \epsilon})$-approximation algorithm which runs in time $O(2^{n^{O(\epsilon)}})$, and we also explore various approaches to obtain better approximation algorithms in restricted parameter settings for random instances.