I am a fifth-year graduate student in the Department of Computer Science at Princeton University. I work in theoretical computer science, and my advisor is Sanjeev Arora.

Before joining Princeton, I completed my undergraduate studies in the Computer Science and Engineering department at IIT Bombay, where I obtained a B.Tech. My advisor there was Sundar Vishwanathan.

### Research Interests

My research interests lie broadly in algorithms -- both in designing better algorithms and in understanding the limits of efficient algorithms.

Specific areas of interest: fast graph algorithms, graph partitioning, approximation algorithms, SDP hierarchies, and hardness of approximation.

### Publications

Click on the title of a paper for its abstract and an online copy (if available).

1. Pooya Hatami, Sushant Sachdeva, Madhur Tulsiani.

[Manuscript]
We give an arithmetic version of the recent proof of the triangle removal lemma by Fox [Fox11], for the group $$\mathbb{F}_2^n$$.

A triangle in $$\mathbb{F}_2^n$$ is a tuple $$(x,y,z)$$ such that $$x+y+z = 0$$. The triangle removal lemma for $$\mathbb{F}_2^n$$ states that for every $$\epsilon > 0$$ there is a $$\delta > 0$$ such that any subset $$A$$ of $$\mathbb{F}_2^n$$ that is $$\epsilon$$-far from being triangle-free must contain at least $$\delta \cdot 2^{2n}$$ triangles. We give a direct proof that yields an improved lower bound on $$\delta$$, analogous to the one obtained by Fox for triangle removal in graphs.
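The definition is concrete enough to check by brute force. Here is a small illustrative sketch (exponential-time, for intuition only, and not from the paper) that counts ordered triangles in a subset of $$\mathbb{F}_2^n$$, representing points as $$n$$-bit integers so that addition in $$\mathbb{F}_2^n$$ is XOR:

```python
from itertools import product

def count_triangles(A, n):
    """Count ordered triangles (x, y, z) with x + y + z = 0 in F_2^n,
    all three points drawn from the subset A. Over F_2, the constraint
    x + y + z = 0 is equivalent to z = x XOR y."""
    A = set(A)
    return sum(1 for x, y in product(A, repeat=2) if (x ^ y) in A)

# The full space F_2^2 contains 2^(2n) = 16 ordered triangles: every
# ordered pair (x, y) is closed by z = x XOR y.
print(count_triangles(range(4), 2))   # 16
# A single nonzero point is triangle-free: (1, 1) would need z = 0.
print(count_triangles({1}, 2))        # 0
```

The $$\delta \cdot 2^{2n}$$ bound in the lemma counts exactly these ordered tuples.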

2. Sushant Sachdeva, Rishi Saket.
CCC 2013 (to appear)

[Manuscript]
This work studies the inapproximability of Minimum Vertex Cover on uniform hypergraphs from a specific structural perspective. Our study is motivated by applications to two well-known scheduling problems: Concurrent Open Shop and the Assembly Line problem. For both of these problems, Bansal and Khot [BK10] obtained tight $$(2-\epsilon)$$-factor inapproximability, assuming the Unique Games Conjecture (UGC). In this work, we prove optimal $$(2-\epsilon)$$-factor NP-hardness of approximation for both problems unconditionally, i.e., without assuming the UGC. Our results for the scheduling problems follow from a structural hardness result for Minimum Vertex Cover on hypergraphs -- an unconditional, but weaker, analog of a similar result of Bansal and Khot [BK10] that is based on the UGC.

3. Sushant Sachdeva, Nisheeth K. Vishnoi.

[Manuscript]
We observe that the inverse of a positive-definite matrix can be approximated by a weighted sum of a small number of matrix exponentials. Combined with a previous result [OSV12], this establishes a formal equivalence between inversion and exponentiation up to poly-log factors. Translated to the world of graph Laplacians, this result reduces the problem of solving a Laplacian system to simulating a poly-log number of random walks on the graph. The proof follows from an equivalent result in the scalar world: we start with a carefully chosen representation of the function $$x^{-1}$$ as an exponential integral, and then approximate it with a sparse sum of the form $$\sum_{i} w_i e^{-t_i x}$$ using the trapezoidal rule. The error in this approximation is bounded using the Euler-Maclaurin formula and certain bounds derived from the Riemann zeta function.
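The scalar statement is easy to verify numerically. The sketch below is my own illustration (the paper's exact representation and error analysis may differ): it starts from $$x^{-1} = \int_0^\infty e^{-xt}\,dt$$, substitutes $$t = e^s$$, and applies the trapezoidal rule, yielding a sum $$\sum_i w_i e^{-t_i x}$$ with nodes $$t_i = e^{s_i}$$ and weights $$w_i = h\,e^{s_i}$$:

```python
import math

def inverse_approx(x, h=0.1, S=40.0):
    """Approximate 1/x by a sum of exponentials sum_i w_i * exp(-t_i * x).

    Discretizes 1/x = int_{-inf}^{inf} e^(s - x*e^s) ds (the substitution
    t = e^s in the exponential integral for 1/x) with the trapezoidal
    rule at step h, truncated to s in [-S, S]."""
    total = 0.0
    steps = int(round(2 * S / h))
    for k in range(steps + 1):
        s = -S + k * h
        t = math.exp(s)
        total += h * t * math.exp(-t * x)   # weight w = h * e^s, node t = e^s
    return total

for x in (1.0, 3.0, 10.0):
    print(x, inverse_approx(x), 1.0 / x)
```

Because the substituted integrand decays doubly exponentially on the right and exponentially on the left, the trapezoidal error falls off very rapidly as $$h$$ shrinks -- this is the kind of sparsity the inversion-exponentiation equivalence needs.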

4. Lorenzo Orecchia, Sushant Sachdeva, Nisheeth K. Vishnoi.
STOC 2012.

[Proceedings] [arXiv]
We give a novel spectral approximation algorithm for the balanced separator problem that, given a graph $$G$$, a constant balance $$b \in (0,\frac{1}{2}]$$, and a parameter $$\gamma$$, either finds an $$\Omega(b)$$-balanced cut of conductance $$O(\sqrt{\gamma})$$ in $$G$$, or outputs a certificate that all $$b$$-balanced cuts in $$G$$ have conductance at least $$\gamma$$, and runs in time $$\tilde{O}(m)$$. This settles the question of designing asymptotically optimal spectral algorithms for balanced separator. Our algorithm relies on a variant of the heat kernel random walk and requires, as a subroutine, an algorithm to compute $$\exp(-L)v$$, where $$L$$ is the Laplacian of a graph related to $$G$$ and $$v$$ is a vector.

Algorithms for computing the matrix-exponential-vector product efficiently comprise our next set of results. Our main result here is a new algorithm that computes a good approximation to $$\exp(-A)v$$ for a class of symmetric positive semidefinite (PSD) matrices $$A$$ and a given vector $$v$$, in time roughly $$\tilde{O}(m_A)$$, where $$m_A$$ is the number of non-zero entries of $$A$$. This uses, in a non-trivial way, the breakthrough result of Spielman and Teng on inverting symmetric, diagonally-dominant matrices in $$\tilde{O}(m_A)$$ time.

Finally, we prove that $$e^{-x}$$ can be uniformly approximated, up to a small additive error, on a non-negative interval $$[a,b]$$ by a polynomial of degree roughly $$\sqrt{b-a}$$. While this result is of independent interest in approximation theory, we show that, via the Lanczos method from numerical analysis, it yields a simple algorithm to compute $$\exp(-A)v$$ for symmetric PSD matrices that runs in time roughly $$O(t_A\cdot \sqrt{\|A\|})$$, where $$t_A$$ is the time required to compute the vector $$Aw$$ for a given vector $$w$$.

As an application, we obtain a simple and practical algorithm, with output conductance $$O(\sqrt{\gamma})$$, for balanced separator that runs in time $$\tilde{O}(\frac{m}{\sqrt{\gamma}})$$. This latter algorithm matches the running time, but improves on the approximation guarantee, of the Evolving-Sets-based algorithm of Andersen and Peres for balanced separator.
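The Lanczos route mentioned above is simple to sketch. The following is an illustrative implementation (my own, not the paper's code) that projects $$A$$ onto a small Krylov subspace and exponentiates the resulting tridiagonal matrix; the number of steps `k` plays the role of the polynomial degree:

```python
import numpy as np
from scipy.linalg import expm

def lanczos_expm_v(A, v, k=30):
    """Approximate exp(-A) @ v for symmetric PSD A via k Lanczos steps:
    build an orthonormal basis V of the Krylov space span{v, Av, ..., A^(k-1)v}
    and the tridiagonal T = V.T @ A @ V, then return ||v|| * V @ expm(-T) @ e1."""
    beta0 = np.linalg.norm(v)
    V = [v / beta0]
    alphas, betas = [], []
    for j in range(k):
        w = A @ V[j]
        if j > 0:
            w = w - betas[j - 1] * V[j - 1]
        a = V[j] @ w
        w = w - a * V[j]
        for u in V:                      # full reorthogonalization, for stability
            w = w - (u @ w) * u
        alphas.append(a)
        b = np.linalg.norm(w)
        if b < 1e-12 or j == k - 1:      # breakdown: Krylov space exhausted
            break
        betas.append(b)
        V.append(w / b)
    m = len(alphas)
    T = np.diag(alphas) + np.diag(betas[:m - 1], 1) + np.diag(betas[:m - 1], -1)
    Vm = np.stack(V[:m], axis=1)
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta0 * (Vm @ (expm(-T) @ e1))

rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T / 50                         # symmetric PSD with modest norm
v = rng.standard_normal(50)
err = np.linalg.norm(lanczos_expm_v(A, v) - expm(-A) @ v)
print(err)
```

Consistent with the $$O(t_A \cdot \sqrt{\|A\|})$$ bound, a Krylov dimension on the order of $$\sqrt{\|A\|}$$ already drives the error down sharply for well-behaved spectra.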

5. Sanjeev Arora, Rong Ge, Ankur Moitra, Sushant Sachdeva.
NIPS 2012.

[arXiv]
We present a new algorithm for Independent Component Analysis (ICA) that has provable performance guarantees. In particular, suppose we are given samples of the form $$y = Ax + \eta$$, where $$A$$ is an unknown $$n \times n$$ matrix, $$x$$ is chosen uniformly at random from $$\{+1, -1\}^n$$, and $$\eta$$ is an $$n$$-dimensional Gaussian random variable with unknown covariance $$\Sigma$$. We give an algorithm that provably recovers $$A$$ and $$\Sigma$$ up to an additive $$\epsilon$$, whose running time and sample complexity are polynomial in $$n$$ and $$1/\epsilon$$. To accomplish this, we introduce a novel "quasi-whitening" step that may be useful in other contexts in which the covariance of the Gaussian noise is not known in advance. We also give a general framework for finding all local optima of a function, given an oracle for approximately finding just one. This framework is a crucial step in our algorithm, one that has been overlooked in previous attempts, and allows us to control the accumulation of error when we find the columns of $$A$$ one by one via local search.

6. Sanjeev Arora, Arnab Bhattacharyya, Rajsekar Manokaran, Sushant Sachdeva.
Random 2012.

[arXiv]
Suppose we are given an oracle that claims to approximate the permanent for most matrices $$X$$, where $$X$$ is chosen from the Gaussian ensemble (the matrix entries are i.i.d. univariate complex Gaussians). Can we test that the oracle satisfies this claim? This paper gives a polynomial-time algorithm for the task. The oracle-testing problem is of interest because a recent paper of Aaronson and Arkhipov showed that if there is a polynomial-time algorithm for simulating boson-boson interactions in quantum mechanics, then an approximation oracle for the permanent (of the type described above) exists in BPP^NP. Since computing the permanent of even 0/1 matrices is #P-complete, this seems to demonstrate more computational power in quantum mechanics than Shor's factoring algorithm does. However, unlike factoring, which is in NP, it was previously unclear how to test the correctness of an approximation oracle for the permanent, and this is the contribution of the paper. The technical difficulty overcome here is that univariate polynomial self-correction -- which underlies similar oracle-testing algorithms for the permanent over finite fields, and whose discovery led to a revolution in complexity theory -- does not seem to generalize to complex (or even real) numbers. We believe that this tester will motivate further progress on understanding the permanent of Gaussian matrices.
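For concreteness, the permanent itself can be computed exactly (for small matrices) by Ryser's inclusion-exclusion formula in $$O(2^n \cdot n^2)$$ time; this exponential cost is what makes an approximation oracle interesting. A minimal sketch, for illustration only:

```python
from itertools import combinations

def permanent(M):
    """Ryser's formula: perm(M) = (-1)^n * sum over nonempty column subsets S
    of (-1)^{|S|} * prod_i (sum_{j in S} M[i][j]).
    Works for real or complex entries; O(2^n * n^2) time."""
    n = len(M)
    total = 0
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            prod = 1
            for row in M:
                prod *= sum(row[j] for j in S)
            total += (-1) ** r * prod
    return (-1) ** n * total

print(permanent([[1, 2], [3, 4]]))   # 1*4 + 2*3 = 10
```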

7. Sanjeev Arora, Rong Ge, Sushant Sachdeva, Grant Schoenebeck.
EC 2012.

[Proceedings] [arXiv]
A community in a social network is usually understood to be a group of nodes more densely connected with each other than with the rest of the network. This is an important concept in most domains where networks arise: social, technological, biological, etc. For many years algorithms for finding communities implicitly assumed communities are nonoverlapping (leading to use of clustering-based approaches) but there is increasing interest in finding overlapping communities. A barrier to finding communities is that the solution concept is often defined in terms of an NP-complete problem such as Clique or Hierarchical Clustering.
This paper seeks to initiate a rigorous approach to the problem of finding overlapping communities, where “rigorous” means that we clearly state the following: (a) the object sought by our algorithm (b) the assumptions about the underlying network (c) the (worst-case) running time.
Our assumptions about the network lie between worst-case and average-case. An average-case analysis would require a precise probabilistic model of the network, on which there is currently no consensus. However, some plausible assumptions about network parameters can be gleaned from a long body of work in the sociology community spanning five decades focusing on the study of individual communities and ego-centric networks (in graph theoretic terms, this is the subgraph induced on a node’s neighborhood). Thus our assumptions are somewhat “local” in nature. Nevertheless they suffice to permit a rigorous analysis of running time of algorithms that recover global structure.
Our algorithms use random sampling similar to that in property testing and algorithms for dense graphs. We note however that our networks are not necessarily dense graphs, not even in local neighborhoods. Our algorithms explore a local-global relationship between ego-centric and socio-centric networks that we hope will provide a fruitful framework for future work both in computer science and sociology.

8. Sushant Sachdeva, Rishi Saket.
Approx 2011.

[arXiv]
We study the problem of computing the minimum vertex cover on $$k$$-uniform $$k$$-partite hypergraphs when the $$k$$-partition is given. On bipartite graphs $$(k=2)$$, the minimum vertex cover can be computed in polynomial time. For general $$k$$, the problem was studied by Lovász, who gave a $$\frac{k}{2}$$-approximation based on the standard LP relaxation. Subsequent work by Aharoni, Holzman, and Krivelevich showed a tight integrality gap of $$\frac{k}{2} - o(1)$$ for the LP relaxation. While this problem was known to be NP-hard for $$k\geq 3$$, the first non-trivial NP-hardness of approximation, with a factor of $$\frac{k}{4}-\epsilon$$, was shown in recent work by Guruswami and Saket. They also showed that assuming Khot's Unique Games Conjecture, the problem is inapproximable within $$\frac{k}{2}-\epsilon$$, implying the optimality of Lovász's result.

In this work, we show that this problem is NP-hard to approximate within $$\frac{k}{2}-1+\frac{1}{2k}-\epsilon$$. This hardness factor is off from the optimal by an additive constant of at most 1 for $$k\geq 4$$. Our reduction relies on the Multi-Layered PCP of Dinur et al. and uses a gadget -- based on biased Long Codes -- adapted from the LP integrality gap of Aharoni et al. The nature of our reduction requires the analysis of several Long Codes with different biases, for which we prove structural properties of so-called cross-intersecting collections of set families, variants of which have been studied in extremal set theory.
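For intuition about the LP relaxation discussed above, here is a small illustrative sketch (not the algorithm from the paper). It solves the standard vertex-cover LP for a hypergraph and rounds up every vertex with value at least $$1/k$$, which already yields a feasible cover within a factor $$k$$ of the LP optimum; Lovász's $$\frac{k}{2}$$ bound exploits the $$k$$-partition more carefully:

```python
import numpy as np
from scipy.optimize import linprog

def lp_vertex_cover(n, edges):
    """Standard LP relaxation for hypergraph vertex cover:
    minimize sum_v x_v  s.t.  sum_{v in e} x_v >= 1 for every hyperedge e,
    with 0 <= x_v <= 1, followed by threshold rounding at 1/k."""
    A_ub = np.zeros((len(edges), n))
    for i, e in enumerate(edges):
        A_ub[i, list(e)] = -1.0           # -sum_{v in e} x_v <= -1
    res = linprog(c=np.ones(n), A_ub=A_ub, b_ub=-np.ones(len(edges)),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    k = max(len(e) for e in edges)
    # Every edge has LP mass >= 1 spread over <= k vertices, so some
    # vertex in it has value >= 1/k: the rounded set is always a cover.
    cover = {v for v in range(n) if res.x[v] >= 1.0 / k - 1e-9}
    return res.fun, cover

# A 3-uniform 3-partite example with parts {0,1}, {2,3}, {4,5}.
edges = [(0, 2, 4), (1, 3, 5), (0, 3, 5), (1, 2, 4)]
lp_opt, cover = lp_vertex_cover(6, edges)
print(lp_opt, sorted(cover))              # LP optimum is 2 for this instance
```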

[arXiv]
The $$k$$-fold Cartesian product of a graph $$G$$ is defined as a graph on tuples $$(x_1,\ldots,x_k)$$ where two tuples are connected if they form an edge in one of the positions and are equal in the rest. Starting with $$G$$ as a single edge gives $$G^{\Box k}$$ as a $$k$$-dimensional hypercube. We study the distributions of edges crossed by a cut in $$G^{\Box k}$$ across the copies of $$G$$ in different positions. This is a generalization of the notion of influences for cuts on the hypercube.

We show analogues of the statements of Kahn, Kalai, and Linial (the KKL theorem) and of Friedgut (Friedgut's Junta Theorem) in the setting of Cartesian products of arbitrary graphs. We also extend the work on isoperimetric constants for these graphs to the value of semidefinite relaxations for expansion, connecting the optimal values of the relaxations for computing expansion, given by various semidefinite hierarchies, for $$G$$ and $$G^{\Box k}$$.
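In the hypercube case ($$G$$ a single edge), the edge-cut distribution above reduces to the classical influences of a Boolean function. A small sketch of that special case, for intuition:

```python
from itertools import product

def influences(f, k):
    """Coordinate influences of a cut f: {0,1}^k -> {0,1} on the
    k-dimensional hypercube: Inf_i(f) = Pr_x[f(x) != f(x with bit i
    flipped)], i.e. the fraction of direction-i edges crossed by the cut."""
    infs = []
    for i in range(k):
        crossed = sum(1 for x in product((0, 1), repeat=k)
                      if f(x) != f(x[:i] + (1 - x[i],) + x[i + 1:]))
        infs.append(crossed / 2 ** k)
    return infs

# A dictator cut concentrates all influence on one coordinate, while
# majority spreads it evenly -- the dichotomy behind KKL and Friedgut.
print(influences(lambda x: x[0], 3))              # [1.0, 0.0, 0.0]
print(influences(lambda x: int(sum(x) >= 2), 3))  # [0.5, 0.5, 0.5]
```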

In a recent paper, Lee and Moharrami construct a family of metric spaces $$(X_n,d_n)$$ with $$|X_n|=N$$, such that $$(X_n,\sqrt{d_n})$$ embeds into $$\ell_2$$ with constant distortion, but embedding $$(X_n,d_n)$$ into a metric of negative type requires distortion $$\Omega((\log N)^{\frac{1}{4}})$$. In this paper, we build on their analysis and improve their result by showing an $$\Omega((\log N)^{\frac{1}{3}})$$ lower bound for embedding $$(X_n,d_n)$$ into a metric of negative type. Moreover, we show that this analysis is essentially tight by constructing a map with distortion $$O((\log N)^{\frac{1}{3}})$$.

This result implies a lower bound of $$\Omega((\log N)^{\frac{1}{3}})$$ for the integrality gap of the relaxed version of Goemans-Linial semidefinite program with weak triangle inequalities for Non-uniform Sparsest Cut.

11. Sanjeev Arora, James Lee, Sushant Sachdeva.

[arXiv]
In a well-known paper, Arora, Rao, and Vazirani obtained an $$O(\sqrt{\log n})$$ approximation to the Balanced Separator problem and Uniform Sparsest Cut. At the heart of their result is a geometric statement about sets of points that satisfy triangle inequalities, which also underlies subsequent work on approximation algorithms and geometric embeddings. In this note, we give an equivalent formulation of the structure theorem in [ARV] in terms of the expansion of large sets in geometric graphs on sets of points satisfying triangle inequalities.

Please email me if you would like to see a draft that has not been posted here.

### Contact

The best way to reach me is by email:

[last name] @ cs.princeton.edu

where [last name] is sachdeva.