Princeton Theory Lunch Schedule, 2005-06.
Computer Science Department,
Princeton University
| Date | Speaker | Title |
|------|---------|-------|
| Sept 9 | Adi Avidor, Tel Aviv University | Rounding two and three dimensional solutions of the SDP relaxation of MAX CUT |
| Sept 16 | Boaz Barak, Princeton | How To Play Almost Any Mental Game Over The Net -- Concurrent Composition via Super-Polynomial Simulation |
| Sept 23 | | No theory lunch owing to faculty meeting |
| Sept 30 | Elliot Anshelevich | The Price of Stability for Network Design |
| Oct 7 | Nir Ailon | Aggregating Inconsistent Information: Ranking and Clustering |
| Oct 14 | | |
| Oct 21 | | |
| Oct 28 | Eyal Ackerman, Technion | On the maximum number of edges in $k$-quasi-planar graphs |
| Nov 4 | | |
| Nov 11 | Paulo Zuliani, Princeton | Quantum Programming |
| Nov 18 | | No theory lunch |
| Nov 25 | Shengyu Zhang, Princeton | |
| Dec 2 | Elad Hazan, Princeton | New algorithms for Online Game Playing and Universal Portfolio Management. |
| Dec 9 | Benny Sudakov, Princeton | Additive Approximation for Edge-Deletion Problems |
| Jan 20 | Liam Roditty, Tel Aviv University | |
| Feb 10 | Jon Kelner, MIT | |
| March 3 | Danny Harnik, Technion | On the Compressibility of NP Instances (for the Future) and (Today's) Cryptographic Applications |
| March 10 | Ravi Kannan, Yale | Matrix-valued Random Variables, Clustering |
| March 17 | Shengyu Zhang, Princeton | |
| March 24 | | No talk planned (spring break) |
| March 31 | Vladlen Koltun, Stanford | |
| Apr 7 | Elchanan Mossel, UC Berkeley | |
| Apr 14 | Ashish Goel, Stanford | |
| Apr 21 | Laszlo Lovasz, Microsoft Research | Distances and limits of graphs |
| Apr 28 | | No lunch; NYU/Columbia Theory Day |
| May 5 | Tal Malkin, Columbia | |
| May 12 | Michel Goemans, MIT | Bounded Degree Minimum Spanning Trees |
| June 30 at 2pm | Esther Ezra, Tel Aviv University | On Translational Motion Planning in 3-Space |
Abstracts for the talks.
Sept 9 Rounding two and three dimensional solutions of the SDP relaxation of MAX CUT
Adi Avidor, Tel Aviv University
Goemans and Williamson obtained an approximation algorithm for
the MAX CUT problem with a performance ratio of $\alpha_{GW} \approx
0.87856$. Their algorithm starts by solving a standard SDP
relaxation of MAX CUT and then rounds the optimal solution
obtained using a random hyperplane. In some cases, the optimal
solution of the SDP relaxation happens to lie in a low dimensional
space. Can an improved performance ratio be obtained for such
instances? We show that the answer is yes in dimensions two and
three and conjecture that this is also the case in any higher
fixed dimension. In two dimensions an optimal $32/(25+5\sqrt{5})$
approximation algorithm was already obtained by Goemans. (Note that
$32/(25+5\sqrt{5}) \approx 0.88456$.)
We obtain an alternative derivation of this result using Gegenbauer
polynomials. Our main result is an improved rounding procedure for
SDP solutions that lie in $R^3$ with a
performance ratio of about
$0.8818$. The rounding procedure uses an interesting yin-yang
coloring of the three dimensional sphere. The improved performance
ratio obtained resolves, in the negative, an open problem posed by
Feige and Schechtman [STOC'01]. They asked whether there are
MAX CUT instances with integrality ratios arbitrarily close to
$\alpha_{GW} \approx 0.87856$ that have optimal embeddings, i.e.,
optimal solutions of their SDP relaxations, that lie in $R^3$.
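As a rough illustration of the random-hyperplane rounding step described above, here is a minimal sketch in Python (the SDP-solving step is omitted; the symmetric weight matrix `W` and the rows of `V`, the unit vectors $v_i$ of an SDP solution, are assumed given):

```python
import numpy as np

def cut_value(W, x):
    # Total weight of edges whose endpoints get opposite signs in x.
    n = len(x)
    return sum(W[i, j] for i in range(n) for j in range(i + 1, n)
               if x[i] != x[j])

def hyperplane_round(W, V, trials=100, seed=0):
    """Random-hyperplane rounding of an SDP solution for MAX CUT.

    W: symmetric (n x n) nonnegative weight matrix.
    V: (n x d) array whose rows are the unit vectors v_i from the SDP.
    Returns the best +/-1 assignment found over `trials` random hyperplanes.
    """
    rng = np.random.default_rng(seed)
    best_x, best_val = None, -1.0
    for _ in range(trials):
        r = rng.standard_normal(V.shape[1])  # normal of a random hyperplane
        x = np.sign(V @ r)
        x[x == 0] = 1.0                      # break ties arbitrarily
        val = cut_value(W, x)
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val
```

Goemans and Williamson's analysis shows that a single random hyperplane already cuts, in expectation, at least $\alpha_{GW}$ times the SDP value; repeating and keeping the best cut only helps.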
Joint work with Uri Zwick.
Sept 16 How To Play Almost Any Mental Game Over The Net - Concurrent Composition via Super-Polynomial Simulation
Boaz Barak, Princeton University
We consider the problem of designing a secure protocol for ANY
multi-party functionality (for example, electronic auctions,
electronic elections, privacy-preserving data mining), that remains
secure even when executed concurrently in the network with many copies
of itself and other protocols. Previous such constructions either
required a third trusted party or used highly non-standard computational
assumptions.
Under reasonably standard computational assumptions,
we construct a protocol for this problem that remains secure (under a
relaxed definition of security) even in the concurrent, asynchronous setting.
The relaxation of security we consider is simulation in
quasi-polynomial (as opposed to polynomial) time. This relaxation seems to suffice to
obtain protocols for any task whose privacy, integrity and input
independence cannot be broken by efficient adversaries.
Joint work with Amit Sahai (UCLA). An extended abstract of this work
will appear in FOCS 2005.
Sept 30 The Price of Stability for Network Design
E. Anshelevich
Network design is a fundamental problem for which it is important to
understand the effects of strategic behavior. Given a collection of
self-interested agents who want to form a network connecting certain
endpoints, the set of stable solutions (the Nash equilibria) may look
quite different from the centrally enforced optimum. We study the price
of stability, i.e. the quality of the best Nash equilibrium compared to
the optimum network cost. The best Nash equilibrium solution has a
natural meaning of stability in this context: it is the optimal
solution that can be proposed from which no user will "deviate".
We consider two versions of this game: one where agents may divide the
cost of the edges they use in any manner they desire, and one where the
cost of each such edge is divided equally between the agents whose
connections make use of it. In the first version, determining whether
or not a Nash equilibrium exists is NP-complete. However, when the goal
of each player is to connect a terminal to a common source, we prove
that there is a Nash equilibrium as cheap as the optimal network, and
give a polynomial time algorithm to find a $(1+\epsilon)$-approximate Nash
equilibrium that does not cost much more. In the second version,
however, a Nash equilibrium always exists and can be achieved via
best-response dynamics. In this version we can show a tight bound of
O(log k) on the price of stability (where k is the number of agents). I
will discuss these results and possibly mention some extensions as
well.
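For the second (equal-sharing) version, the $O(\log k)$ upper bound can be seen via the standard potential-function argument; the following sketch is my reconstruction of that well-known reasoning and may differ in detail from the talk. Assign to each outcome $S$ (the union of the players' paths) the potential
$$\Phi(S) \;=\; \sum_{e \in S} c_e\, H(k_e), \qquad H(j) = 1 + \tfrac{1}{2} + \cdots + \tfrac{1}{j},$$
where $k_e$ is the number of players whose connections use edge $e$. Every improving best-response move strictly decreases $\Phi$, and $\mathrm{cost}(S) \le \Phi(S) \le H(k)\,\mathrm{cost}(S)$. Running best-response dynamics starting from the optimal network therefore terminates at a Nash equilibrium of cost at most $\Phi(\mathrm{OPT}) \le H(k)\cdot \mathrm{OPT} = O(\log k)\cdot \mathrm{OPT}$.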
This is joint work with: Anirban Dasgupta, Jon Kleinberg, Eva Tardos,
Tom Wexler, and Tim Roughgarden
Oct 7 Aggregating Inconsistent Information: Ranking and Clustering
Nir Ailon, Princeton.
A ranking of n web pages is to be chosen from the outputs of k search
engines. How do we choose one ranking minimizing the "disagreement"
with the k rankings?
A clustering of n genes is to be chosen from the outputs of k clustering
algorithms. How do we choose one clustering minimizing the "disagreement"
with the k clusterings?
These information aggregation problems date back at least to 1785, when
the French philosopher Condorcet considered voting systems where each
voter chooses a full ranking of a set of candidates. Recently,
their computational aspects have been studied.
In this talk, I will describe new algorithms improving on the best
known approximation ratios for both the ranking and the clustering problems
with respect to a standard measure of disagreement. I will also
discuss
related graph theoretic optimization problems, known as the minimum
feedback arc set in tournaments and the correlation clustering in
complete graphs. Additionally, I will show that the problem of finding a
minimum feedback arc set in tournaments has no poly-time algorithm, assuming NP
is not contained in BPP. This almost settles a long-standing
conjecture of Bang-Jensen and Thomassen, that the problem is NP-hard.
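To make the feedback-arc-set connection concrete, here is a minimal Python sketch of a quicksort-style pivoting heuristic on the majority "tournament" induced by the input rankings. It only illustrates the flavor of pivot-based aggregation; it is not claimed to be the exact algorithm from the talk or to achieve its approximation ratios.

```python
import random

def pivot_rank(items, prefers):
    """Rank `items` by recursive random pivoting.

    prefers(a, b) should return True when a ought to precede b (here:
    when a majority of the input rankings place a before b).  Each call
    picks a random pivot and splits the remaining items around it.
    """
    if len(items) <= 1:
        return list(items)
    pivot = random.choice(items)
    left = [x for x in items if x != pivot and prefers(x, pivot)]
    right = [x for x in items if x != pivot and not prefers(x, pivot)]
    return pivot_rank(left, prefers) + [pivot] + pivot_rank(right, prefers)

# Toy usage: aggregate three rankings of the items 0..3 by majority preference.
rankings = [[0, 1, 2, 3], [1, 0, 2, 3], [0, 2, 1, 3]]

def majority_prefers(a, b):
    votes = sum(r.index(a) < r.index(b) for r in rankings)
    return votes > len(rankings) / 2

print(pivot_rank([0, 1, 2, 3], majority_prefers))
```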
This is joint work with Moses Charikar and Alantha Newman.
Oct 21 Additive Approximation for Edge-Deletion Problems
Benny Sudakov, Princeton
A graph property is monotone if it is closed under removal
of vertices and edges. In this talk we consider the following
edge-deletion problem: given a monotone property P and a graph G, compute the smallest number of edge deletions that are
needed in order to turn G into a graph satisfying P.
We denote this quantity by $E_P(G)$. Our first result states that for
any monotone graph property P, any $\epsilon > 0$, and any n-vertex input
graph $G$, one can approximate $E_P(G)$ up to an additive error of $\epsilon n^2$.
Given the above, a natural question is for which monotone
properties one can obtain better additive approximations of $E_P(G)$.
Our second main result essentially resolves this
problem by giving a precise characterization of the monotone graph
properties for which such approximations exist. We will show that
for any dense monotone property, that is, a property for which there
are graphs on n vertices with $\Omega(n^2)$
edges that satisfy it, it is NP-hard to approximate $E_P(G)$ up to an additive error of
$n^{2-\delta}$, for any fixed positive $\delta$.
The proof requires several new ideas and involves tools from Extremal
Graph Theory together with spectral techniques. Interestingly,
prior to this work it was not even known that computing $E_P(G)$
precisely for dense monotone properties is $NP$-hard. We thus answer
(in a strong form) a question of Yannakakis raised in 1981.
Joint work with N. Alon and A. Shapira.
Oct 28 On the maximum number of edges in $k$-quasi-planar graphs
Eyal Ackerman, Technion
A topological graph is called \emph{$k$-quasi-planar}, if it does not
contain $k$ pairwise crossing edges. It is conjectured that for every fixed $k$
the maximum number of edges in a $k$-quasi-planar graph on $n$ vertices is
$O(n)$. We provide, for the first time, an affirmative answer to the case
$k=4$. We also give the simplest proof and the best upper bound known for the
maximum number of edges in 3-quasi-planar graphs on $n$ vertices.
Moreover, we show a tight upper bound for 3-quasi-planar graphs in
which every pair of edges meets at most once.
Joint work with Gabor Tardos.
Nov 11 Quantum Programming
Paulo Zuliani, Princeton
In this talk we introduce a programming language for the expression of
quantum algorithms. Quantum algorithms are usually described by means of pseudocode,
supported by two underlying models: quantum circuits and quantum Turing machines.
Those models are useful in complexity analysis of quantum computation and in low-level
description of implementations. However, they are inadequate for specifying and reasoning
about correctness of quantum algorithms at a higher level. We present a quantum programming
language that contains the features required to program a `universal' quantum computer,
including initialisation and observation. Moreover, the language has a formal semantics and
body of laws, and provides a refinement calculus supporting the verification and derivation of
programs against their specifications. We express in the language a selection of quantum algorithms
and derive one of them (Deutsch-Jozsa's) from its specification.
Dec 2 New Algorithms for Online Game Playing and Universal Portfolio Management
Elad Hazan, Princeton
We introduce a new algorithm, Smooth Prediction, and a new analysis
technique that is applicable to a variety of online optimization scenarios, including
regret minimization for Lipschitz regret functions studied by Hannan,
universal portfolios by Cover, Kalai and Vempala, online optimization by
Zinkevich, and others. Our algorithm is more efficient and applies to a
more general setting (e.g. when the payoff function is unknown).
The algorithm extends a method proposed by Hannan in the 1950's,
called ``Follow the Leader''. It adds a barrier function to the convex
optimization routine required to find the ``leader'' at every iteration.
One application of our algorithm is the well studied universal portfolio
management problem, for which our algorithm is the first to combine
optimal convergence rate with computational efficiency. For more general
settings, our algorithm is the first to achieve optimal convergence
rate.
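As a rough illustration of the "follow the leader plus a barrier function" idea, here is a Python sketch of a follow-the-regularized-leader portfolio selector that re-solves a log-barrier-regularized convex problem after each trading period. This is only an illustrative variant under my own assumptions (a generic solver and a hand-picked barrier weight), not the Smooth Prediction algorithm or its analysis.

```python
import numpy as np
from scipy.optimize import minimize

def ftl_with_barrier(price_relatives, eta=1.0):
    """Follow-the-leader with a log-barrier over the probability simplex.

    price_relatives: (T, n) array; entry [t, i] is the factor by which
    asset i's price changes during period t.  Each period we invest the
    current portfolio, then recompute it by maximizing past log-wealth
    plus a log-barrier term that keeps the portfolio in the interior.
    Returns the final wealth of the strategy starting from 1 unit.
    """
    T, n = price_relatives.shape
    wealth = 1.0
    x = np.full(n, 1.0 / n)                 # start with the uniform portfolio
    for t in range(T):
        wealth *= float(price_relatives[t] @ x)
        past = price_relatives[: t + 1]

        def neg_objective(p):
            log_wealth = np.sum(np.log(past @ p))
            barrier = eta * np.sum(np.log(p))    # log-barrier regularizer
            return -(log_wealth + barrier)

        res = minimize(neg_objective, x, method="SLSQP",
                       bounds=[(1e-9, 1.0)] * n,
                       constraints=[{"type": "eq",
                                     "fun": lambda p: np.sum(p) - 1.0}])
        x = res.x
    return wealth

# Tiny usage example with made-up price relatives for two assets.
r = np.array([[1.02, 0.99], [0.97, 1.03], [1.01, 1.00]])
print(ftl_with_barrier(r))
```

In this sketch the barrier term simply keeps each iterate strictly inside the simplex; the efficiency and optimal convergence rate described in the abstract are properties of the actual algorithm, not of this toy version.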
Joint work with Amit Agarwal.
March 3, 2006 On the Compressibility of NP Instances (for the Future) and (Today's) Cryptographic Applications
Danny Harnik, Technion
We consider NP problems that have long instances but relatively short
witnesses. The question is, can one efficiently compress an instance and
store a shorter representation that maintains the information of whether the
original input is in the language or not. We want the length of the
compressed instance to be polynomial in the length of the witness rather than
the length of the original input. Such compression makes it possible to succinctly
store instances until a future setting allows solving them, either via a technological
or algorithmic breakthrough or simply until enough time has elapsed. We
investigate when such compression is possible and give a new classification of
NP with respect to this compression.
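One way to formalize the informal description above (my paraphrase; the talk's exact definition may differ): a language $L \in NP$ whose instances $x$ have witnesses of length $m = m(|x|) \ll |x|$ is compressible if there are a polynomial-time computable function $Z$ and a language $L'$ such that
$$ |Z(x)| \le \mathrm{poly}(m, \log |x|) \qquad\text{and}\qquad Z(x) \in L' \iff x \in L . $$
The compressed instance need not be solvable today; it only has to preserve the yes/no answer of the original instance.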
April 21, 2006 Distances and limits of graphs
Laszlo Lovasz, Microsoft Research
How should we define that two large graphs are "similar"? For example, two large
random graphs with the same edge density (but possibly with a different number of
nodes) behave very similarly from virtually all points of view.
It is possible to define a distance between two graphs that captures global similarity.
You can take the completion of the resulting metric space (i.e., limits of graph
sequences convergent in this metric); it turns out that the limit objects you have to
add have several very natural descriptions; one of these is 2-variable symmetric
measurable functions from [0,1]^2 to [0,1]. The completed space is compact; this fact
is essentially equivalent to Szemeredi's Regularity Lemma. This compactness can be
used in proofs; for example, it yields very short (although non-effective) proofs for
results in property testing.
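One standard way to make both the distance and the limit objects concrete is via the cut norm; the following is a sketch of the usual definitions, which the talk may present differently. Identify a graph $G$ with the symmetric step function $W_G : [0,1]^2 \to \{0,1\}$ obtained from its adjacency matrix, and set
$$ \|W\|_\square = \sup_{S,T \subseteq [0,1]} \left| \int_{S \times T} W(x,y)\,dx\,dy \right|, \qquad \delta_\square(U,W) = \inf_{\varphi} \|U - W^{\varphi}\|_\square , $$
where $\varphi$ ranges over measure-preserving bijections of $[0,1]$ and $W^{\varphi}(x,y) = W(\varphi(x),\varphi(y))$. Two large graphs are "similar" when $\delta_\square(W_{G_1}, W_{G_2})$ is small, and the limit objects completing this metric space are represented by the symmetric measurable functions from $[0,1]^2$ to $[0,1]$ mentioned above.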
This is joint work with Christian Borgs, Jennifer Chayes, Vera Sos, Balazs Szegedy
and Kati Vesztergombi.
May 12, 2006 Bounded Degree Minimum Spanning Trees
Michel X. Goemans, M.I.T.
In this talk, I will consider the minimum cost spanning tree problem
under the restriction that all degrees must be at most a given value
$k$. The main result is that one can efficiently find a spanning tree
of maximum degree at most $k+2$ whose cost is at most the cost of the
optimum spanning tree of maximum degree $k$. This is almost best
possible.
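A natural benchmark for this result is the linear-programming relaxation over the spanning tree polytope with degree constraints; I am assuming, as a sketch rather than a claim about the talk's exact formulation, that this is essentially the relaxation behind the polyhedral arguments mentioned below:
$$ \min \sum_{e \in E} c_e x_e \quad\text{subject to}\quad x \in \mathrm{ST}(G), \qquad \sum_{e \in \delta(v)} x_e \le k \quad \text{for all vertices } v, $$
where $\mathrm{ST}(G)$ is the spanning tree polytope of $G$ and $\delta(v)$ is the set of edges incident to $v$. Every spanning tree of maximum degree at most $k$ is feasible for this LP, so its optimum value is a lower bound on the cost of the optimum degree-$k$ tree.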
The approach uses a sequence of simple algebraic, polyhedral and
combinatorial arguments. It illustrates many techniques and ideas in
combinatorial optimization as it involves polyhedral
characterizations, uncrossing, matroid intersection, and graph
orientations (or packing of spanning trees). The talk will be self-contained.
The result generalizes to the setting where every vertex has both
upper and lower bounds and then gives a spanning tree which violates
the bounds by at most 2 units and whose cost is at most the cost of
the optimum tree. It also gives a better understanding of the subtour
relaxation for both the symmetric and asymmetric traveling salesman
problems.
June 30, 2006, NEW TIME: 2pm On Translational Motion Planning in 3-Space
Esther Ezra, Tel Aviv University
Constructing the union of geometric objects and bounding its combinatorial
complexity is a fundamental problem in computational and combinatorial
geometry. A major application of this problem is to translational motion
planning, where a polyhedral robot $R$ translates in a fixed polyhedral
environment, and one wishes to compute all placements of $R$ (known as
free placements) at which it avoids collision with any obstacle.
In this talk, I will give a brief overview of the area, and explain its
importance. I will then discuss the union problem, where the input
consists of $n$ ``fat'' tetrahedra in $3$-space of arbitrary sizes, and
show that in this case the combinatorial complexity of the union is
$O(n^{5/2+\eps})$, for any $\eps > 0$, thus obtaining the first subcubic
upper bound. An immediate consequence of this result is that the
combinatorial complexity of the union of $n$ cubes in 3-space of
arbitrary side lengths is $O(n^{5/2+\eps})$, for any $\eps > 0$.
In addition, I will show that when the tetrahedra have nearly equal size,
this bound becomes $O(n^{2+\eps})$, for any $\eps > 0$, which is almost
tight in the worst case.
The talk will be self-contained.
Joint work with Micha Sharir.