Main.TheoryLunch History


September 05, 2009, at 02:34 PM by 69.86.217.170 -
Changed lines 13-15 from:

Fall 2008

For the Fall schedule, please visit our new Theory calendar.

to:

Talks since Fall 2008

For the theory talks since Fall 2008, please visit our new Theory calendar.


Previous years

September 10, 2008, at 06:23 PM by 128.112.92.142 -
Changed lines 13-16 from:
to:

Fall 2008

For the Fall schedule, please visit our new Theory calendar.

May 22, 2008, at 05:56 PM by 128.112.95.79 -
Changed lines 35-36 from:
May 23 | Nina Balcan, CMU
to:
May 23 | cancelled
May 14, 2008, at 11:05 AM by 172.16.29.250 -
Changed line 34 from:
May 16 | Navin Goyal, Georgia Tech | The VPN Conjecture is True
to:
May 16 | Navin Goyal, Georgia Tech | The VPN Conjecture is True
Added lines 519-529:

May 16 The VPN conjecture is true

Navin Goyal, Georgia Tech

We consider the following network design problem. A group of nodes (terminals) in a large network wishes to reserve bandwidth to form a sub-network called a virtual private network (VPN). The VPN should be able to support various communication patterns that may arise between the terminals. These communication patterns may change with time, and the only restriction on them is that for each terminal there is an upper bound on the total amount of traffic handled by that terminal. This leads to the well-studied VPN design problem, wherein we must find paths between every pair of terminals and reserve sufficient capacity on the edges of these paths so that all possible communication patterns satisfying the given upper bounds can be routed. Each edge has cost proportional to the bandwidth reserved on it. The goal is to minimize the total cost of the VPN.

Formally, we are given an undirected graph G=(V,E) with edge costs c(e) and a set of terminal nodes W. A hose demand matrix for W is any symmetric matrix [D_{ij}] such that for each i, \sum_{j \neq i} D_{ij} \leq 1. We must compute the minimum cost edge capacities that are able to support the oblivious routing of every hose matrix in the network. An oblivious routing template is a simple path P_{ij} for each pair i,j \in W. Given such a template, if we are to route a demand matrix D, then for each i,j we send D_{ij} units of flow along P_{ij}. A well-studied conjecture states that there exists an optimal VPN that is a tree, i.e., the union of the paths P_{ij} is a tree. In this talk, I will explain our recent proof of this conjecture. This also yields the first polynomial time algorithm for computing the optimal VPN.

This is joint work with Neil Olver and Bruce Shepherd.
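
In symbols (a restatement assembled from the definitions above, not wording from the talk): the cost of a routing template {P_{ij}} is

  \sum_{e \in E} c(e)\,\mathrm{cap}(e), \qquad
  \mathrm{cap}(e) = \max\Big\{ \textstyle\sum_{\{i,j\} \subseteq W:\, e \in P_{ij}} D_{ij} \;:\; D \text{ a hose demand matrix} \Big\},

i.e., each edge must be provisioned for the worst hose matrix routed over the template, and the conjecture (now theorem) states that some optimal template's paths form a tree.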

April 30, 2008, at 11:54 PM by 172.16.29.247 -
Changed line 33 from:
May 9 | no talk | special program at Princeton
to:
May 9 | all day workshop | Bob Tarjan's 60th Birthday Celebration
April 30, 2008, at 11:51 PM by 172.16.29.247 -
Changed line 32 from:
May 2 | Adam Meyerson, UCLA
to:
Added lines 508-518:

May 2 Randomized K-Server on Hierarchical Binary Trees

Adam Meyerson, UCLA

Metric K-Server is a major problem in the area of online algorithms. We are given a metric space along with a set of K initial server locations, and a sequence of location requests arrives one at a time. As each request arrives, we must move one of our servers to that location. The goal is to minimize the total (or average) distance traveled by servers. This models emergency response, and also has a wide range of applications relating to caching and paging, robot navigation, and reconfigurable computing. In general we cannot compute an optimum solution without foreknowledge of the request sequence, so we will seek an algorithm with good competitive performance (minimizing the worst-case ratio of our total distance traveled to the optimum).

The major result on K-Server is the Work Function Algorithm by Koutsoupias and Papadimitriou, establishing a 2k-1 competitive deterministic algorithm. Slightly improved results (k-competitive) exist for some special cases and a deterministic lower bound of k is also known. However, in many cases it is possible to improve online competitive results using a randomized algorithm. In this talk, I will present a randomized algorithm for K-Server on a special class of metric spaces -- hierarchical binary trees. While this seems a restrictive case, a slight generalization of this result to non-binary hierarchically structured trees would apply to arbitrary finite metrics because of a recent set of results on randomized metric embedding. This talk presents a number of new ideas, including a model of an online problem as a hierarchy of independent online decision makers and an improved algorithm for a special case of the metrical task system problem.

This is joint work with Aaron Cote (UCLA) and Laura Poplawski (Northeastern) and has been accepted to STOC 2008.
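
To make the online setting above concrete, here is a minimal Python sketch of the k-server game with a naive greedy policy (always move the closest server). Greedy is known not to be competitive in general; the metric and request sequence are illustrative, not from the talk.

  import random

  def greedy_k_server(positions, requests, dist):
      """Serve each request with the closest server; return total distance moved."""
      servers = list(positions)
      total = 0.0
      for r in requests:
          # Naive greedy choice: the server currently closest to the request.
          i = min(range(len(servers)), key=lambda j: dist(servers[j], r))
          total += dist(servers[i], r)
          servers[i] = r  # move that server onto the requested point
      return total

  # Toy run: 2 servers on the real line with the usual metric.
  line = lambda a, b: abs(a - b)
  print(greedy_k_server([0.0, 10.0], [random.uniform(0, 10) for _ in range(100)], line))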


April 13, 2008, at 04:47 AM by 219.234.180.200 -
Changed line 30 from:
Apr 18 | Vijay Vazirani, Georgia Tech
to:
Apr 18 | Vijay V. Vazirani, Georgia Tech | Nash Bargaining via Flexible Budget Markets
Added lines 481-507:

Apr 18 Nash Bargaining via Flexible Budget Markets

Vijay V. Vazirani, Georgia Tech

In his seminal 1950 paper, John Nash defined the bargaining problem; the ensuing theory of bargaining lies today at the heart of game theory. In this work, we initiate an algorithmic study of Nash bargaining problems.

We consider a class of Nash bargaining problems whose solution can be stated as a convex program. For these problems, we show that there corresponds a market whose equilibrium allocations yield the solution to the convex program and hence the bargaining problem. For several of these markets, we give combinatorial, polynomial time algorithms, using the primal-dual paradigm.

Unlike the traditional Fisher market model, in which buyers spend a fixed amount of money, in these markets, each buyer declares a lower bound on the amount of utility she wishes to derive. The amount of money she actually spends is a specific function of this bound and the announced prices of goods.

Over the years, a fascinating theory has started forming around a convex program given by Eisenberg and Gale in 1959. Besides market equilibria, this theory touches on such disparate topics as TCP congestion control and efficient solvability of nonlinear programs by combinatorial means. Our work shows that the Nash bargaining problem fits harmoniously in this collage of ideas.


April 01, 2008, at 09:50 PM by 71.125.148.236 -
Changed line 31 from:
Apr 25 
to:
Apr 25 | Umesh Vazirani, UC Berkeley
April 01, 2008, at 05:42 PM by 128.112.95.79 -
Changed lines 439-444 from:

A sparsifier of a graph $G$ is a sparse subgraph $H$ that approximates it in some appropriate way. Benczur and Karger gave the first sparsifiers in '96, in which the weight of every cut in $H$ was within a factor of $(1+/-\epsilon)$ of its weight in $G$. In this work, we consider a stronger notion of approximation (introduced by Spielman and Teng in '04) that requires the Laplacian quadratic forms of $H$ and $G$ to be close -- specifically, that $x^TL'x = (1+/-\epsilon) x^TLx$ for all vectors $x\in\R^n$, where $L$ and $L'$ are the Laplacians of $G$ and $H$ respectively. This subsumes the Benczur-Karger notion, which corresponds to the special case of $x\in {0,1}^n$.

We show that every graph contains a (strong) sparsifier with $O(n\log n)$ edges, and that one can be found in nearly-linear time by randomly sampling each edge of the graph with probability proportional to its effective resistance. A key ingredient in our algorithm is a subroutine of independent interest: a nearly-linear time algorithm that builds a data structure from which we can query the approximate effective resistance between any two vertices in a graph in $O(\log n)$ time.

We conjecture the existence of sparsifiers with $O(n)$ edges, noting that these would generalize the notion of expander graphs, which are constant-degree sparsifiers for the complete graph.

to:

A sparsifier of a graph G is a sparse subgraph H that approximates it in some appropriate way. Benczur and Karger gave the first sparsifiers in '96, in which the weight of every cut in H was within a factor of (1+/-\epsilon) of its weight in G. In this work, we consider a stronger notion of approximation (introduced by Spielman and Teng in '04) that requires the Laplacian quadratic forms of H and G to be close -- specifically, that x^TL'x = (1+/-\epsilon) x^TLx for all vectors x in R^n, where L and L' are the Laplacians of G and H respectively. This subsumes the Benczur-Karger notion, which corresponds to the special case of x in {0,1}^n.

We show that every graph contains a (strong) sparsifier with O(n log n) edges, and that one can be found in nearly-linear time by randomly sampling each edge of the graph with probability proportional to its effective resistance. A key ingredient in our algorithm is a subroutine of independent interest: a nearly-linear time algorithm that builds a data structure from which we can query the approximate effective resistance between any two vertices in a graph in O(log n) time.

We conjecture the existence of sparsifiers with O(n) edges, noting that these would generalize the notion of expander graphs, which are constant-degree sparsifiers for the complete graph.
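
The sampling scheme described above is easy to state in code. A naive dense-matrix sketch (cubic time via the Laplacian pseudoinverse, whereas the talk's algorithm is nearly linear; function names and the choice of q are illustrative):

  import numpy as np

  def sparsify_by_effective_resistance(n, edges, weights, q, seed=0):
      """Sample q edges i.i.d. with probability proportional to w_e * R_e."""
      rng = np.random.default_rng(seed)
      # Weighted Laplacian L = sum_e w_e (e_u - e_v)(e_u - e_v)^T.
      L = np.zeros((n, n))
      for (u, v), w in zip(edges, weights):
          L[u, u] += w; L[v, v] += w
          L[u, v] -= w; L[v, u] -= w
      Lp = np.linalg.pinv(L)  # the talk replaces this step by a near-linear-time data structure
      # Effective resistance R_e = (e_u - e_v)^T L^+ (e_u - e_v).
      R = np.array([Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for (u, v) in edges])
      p = np.array(weights) * R
      p = p / p.sum()
      H = {}
      for idx in rng.choice(len(edges), size=q, p=p):
          # Each sampled copy gets weight w_e / (q p_e), so H is unbiased for G.
          H[edges[idx]] = H.get(edges[idx], 0.0) + weights[idx] / (q * p[idx])
      return H

  # Toy run: sparsify a 5-cycle down to q = 8 samples.
  cyc = [(i, (i + 1) % 5) for i in range(5)]
  print(sparsify_by_effective_resistance(5, cyc, [1.0] * 5, q=8))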

April 01, 2008, at 05:34 PM by 128.112.95.79 -
Changed line 29 from:
Apr 11 | Rohit Khandekar, IBM
to:
Added lines 449-480:

Apr 11 Stateless Distributed Gradient Descent for Positive Linear Programs

Rohit Khandekar, IBM

I shall describe a framework of distributed and stateless solutions for packing and covering linear programs, which are solved by multiple agents operating in a cooperative but uncoordinated manner. Our model has a separate "agent" controlling each variable, and an agent is allowed to read off the current values of only those constraints in which it has non-zero coefficients. This is a natural model for many distributed applications like multi-commodity flow control, maximum bipartite matching, and dominating sets.

The most appealing feature of our algorithms is their simplicity and fast convergence. For example, in the case of the flow control problem, our algorithm associates edge lengths that are exponential in the edge congestions, and each commodity increases (or decreases) its flow multiplicatively if its path length is less (or more) than a threshold. Our algorithm, starting from any feasible solution, always maintains feasibility and converges to a near-optimum solution in poly-logarithmic time.

While exponential dual variables have been used in several packing/covering LP algorithms before, this is the first algorithm which is both stateless and has poly-logarithmic convergence. Our algorithms can be thought of as applying distributed gradient descent/ascent on a carefully chosen potential.

This talk is based on joint work with Baruch Awerbuch and will appear at STOC'08.
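
A toy rendering of the flow-control update rule described above, with one fixed path per commodity; the step size, threshold, and constant mu are illustrative choices, and this sketch makes no claim to the paper's convergence guarantees:

  import math

  def stateless_flow_control(paths, capacity, rounds=2000, mu=10.0, step=0.01):
      """Each commodity repeatedly scales its flow up or down multiplicatively."""
      flow = [1e-3] * len(paths)  # start from a feasible (tiny) solution
      for _ in range(rounds):
          load = {e: 0.0 for e in capacity}
          for f, path in zip(flow, paths):
              for e in path:
                  load[e] += f
          # Edge length is exponential in the edge's congestion.
          length = {e: math.exp(mu * load[e] / capacity[e]) for e in capacity}
          for i, path in enumerate(paths):
              plen = sum(length[e] for e in path)
              threshold = len(path) * math.e  # illustrative threshold
              # Multiplicative increase if the path is short, decrease if long.
              flow[i] *= (1 + step) if plen < threshold else (1 - step)
      return flow

  # Two commodities contending for edge 'b', all capacities 1.
  print(stateless_flow_control([['a', 'b'], ['b', 'c']], {'a': 1, 'b': 1, 'c': 1}))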

April 01, 2008, at 05:30 PM by 128.112.95.79 -
Changed line 28 from:
Apr 4 | Nikhil Srivastava, Yale | [#srivastava|Graph Sparsification by Effective Resistances]
to:
April 01, 2008, at 05:29 PM by 128.112.95.79 -
Changed line 28 from:
Apr 4 | Nikhil Srivastava, Yale | Graph Sparsification by Effective Resistances
to:
Apr 4 | Nikhil Srivastava, Yale | [#srivastava|Graph Sparsification by Effective Resistances]
Changed lines 435-448 from:
to:

Apr 4 Graph Sparsification by Effective Resistances

Nikhil Srivastava, Yale

A sparsifier of a graph $G$ is a sparse subgraph $H$ that approximates it in some appropriate way. Benczur and Karger gave the first sparsifiers in '96, in which the weight of every cut in $H$ was within a factor of $(1+/-\epsilon)$ of its weight in $G$. In this work, we consider a stronger notion of approximation (introduced by Spielman and Teng in '04) that requires the Laplacian quadratic forms of $H$ and $G$ to be close -- specifically, that $x^TL'x = (1+/-\epsilon) x^TLx$ for all vectors $x\in\R^n$, where $L$ and $L'$ are the Laplacians of $G$ and $H$ respectively. This subsumes the Benczur-Karger notion, which corresponds to the special case of $x\in {0,1}^n$.

We show that every graph contains a (strong) sparsifier with $O(n\log n)$ edges, and that one can be found in nearly-linear time by randomly sampling each edge of the graph with probability proportional to its effective resistance. A key ingredient in our algorithm is a subroutine of independent interest: a nearly-linear time algorithm that builds a data structure from which we can query the approximate effective resistance between any two vertices in a graph in $O(\log n)$ time.

We conjecture the existence of sparsifiers with $O(n)$ edges, noting that these would generalize the notion of expander graphs, which are constant-degree sparsifiers for the complete graph.

Joint work with Dan Spielman, to appear in STOC 2008.

Paper: http://arxiv.org/abs/0803.0929

March 28, 2008, at 03:42 AM by 71.125.142.65 -
Changed line 31 from:
Apr 25
to:
Apr 25 
March 28, 2008, at 03:41 AM by 71.125.142.65 -
Changed line 31 from:
Apr 25 | Nina Balcan, CMU
to:
Apr 25
Changed lines 35-36 from:
May 23 
to:
May 23 | Nina Balcan, CMU
March 10, 2008, at 12:12 PM by Boaz Barak -
Changed lines 392-393 from:
to:

Alexandr Andoni, MIT

March 10, 2008, at 12:11 PM by Boaz Barak -
Changed line 26 from:
Mar 21 | Alexandr Andoni, MIT | TBA
to:
Added lines 390-420:

Mar 21 The Computational Hardness of Estimating Edit Distance

Edit distance (aka Levenshtein distance) between two strings is the minimum number of insertions/deletions/substitutions needed to transform one string into the other. This distance is of key importance in fields like computational biology and text processing. Unfortunately, edit distance appears to be a "hard" distance to deal with, eluding efficient algorithms, both theoretically and in practice.

We present the first non-trivial communication complexity lower bound for the problem of estimating the edit distance. To the best of our knowledge, this is the first computational setting in which the complexity of computing the edit distance is provably larger than that of Hamming distance.

Our lower bound implies that protocols with O(1) communication can achieve only an approximation factor of Omega~(log d), where d is the length of the strings. This model is of particular importance, since it captures constant-size sketches as well as embeddings into spaces like L_1 and squared-L_2, two prevailing algorithmic approaches to deal with edit distance. Furthermore, the bound holds even if we restrict our attention to strings where any character appears at most once in a string (namely, the Ulam metric). In this latter case, our lower bound nearly matches the upper bound.

Joint work with Robert Krauthgamer (Weizmann Inst and IBM Almaden). Based on a result presented at FOCS'07.
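
For reference, the textbook dynamic program computes edit distance exactly in O(d^2) time; the sketching and embedding approaches that the lower bound constrains are attempts to beat this with tiny summaries. A standard implementation:

  def edit_distance(s, t):
      """Levenshtein distance: minimum insertions/deletions/substitutions."""
      m, n = len(s), len(t)
      prev = list(range(n + 1))  # row 0: distance from "" to each prefix of t
      for i in range(1, m + 1):
          cur = [i] + [0] * n
          for j in range(1, n + 1):
              cur[j] = min(prev[j] + 1,      # delete s[i-1]
                           cur[j - 1] + 1,   # insert t[j-1]
                           prev[j - 1] + (s[i - 1] != t[j - 1]))  # substitute
          prev = cur
      return prev[n]

  assert edit_distance("kitten", "sitting") == 3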


March 04, 2008, at 07:32 PM by 172.16.29.250 -
Changed line 25 from:
Mar 14 | Mikkel Thorup, AT&T | Efficient Cuts via Greedy Tree Packing
to:
Mar 14 | Mikkel Thorup, AT&T | Maximum Overhang
Changed lines 363-364 from:

Mar 14 Efficient Cuts via Greedy Tree Packing

to:

Mar 14 Maximum Overhang

Changed lines 367-387 from:

We study a simple greedy tree packing of a graph and use it to derive better algorithms for fully-dynamic min-cut and for the static k-way cut problem.

A greedy tree packing is a sequence of spanning trees, where each new tree is a minimum spanning tree with respect to the edge loads from the previous trees; that is, the load of an edge is the number of times it has been used by the previous trees.

A minimum k-way cut is a minimum set of edges whose removal splits the graph into k components. A min-cut is a minimum 2-way cut.

If the (unknown) edge connectivity of the graph is c, we show that if we pack c^7*log^3 m trees, then some min-cut is crossed exactly once by some tree. This leads to the best fully-dynamic min-cut algorithm (presented at STOC'01).

If we pack k^3*log n trees, then every minimum k-way cut is crossed 2k-1 times by some tree. This leads to the best deterministic algorithm for k-way cut (to be presented at STOC'08).

to:

Stacking $n$ blocks on a table so as to achieve maximum overhang over the side has a long history. The problem appears in physics and engineering textbooks from as early as the mid-19th century, but was apparently first brought to the attention of the mathematical community in 1923 when J.G. Coffin posed it in the "Problems and Solutions" section of the American Mathematical Monthly; no solution was presented there. Since then the problem has been widely publicized by Gardner in Scientific American. In a cover article in Amer. J. Phys. in 2005 it was claimed that order log n would be best possible, while the book "Mad about Physics" from 2001 suggested that $n^{1/2}$ would be possible. Both of these mutually contradictory views are wrong! At SODA'06, Paterson and Zwick constructed n-block stacks with overhangs of order $n^{1/3}$, but can we go even further? We prove that order $n^{1/3}$ is best possible, resolving the long-standing overhang problem up to a constant factor.

In the talk we will both discuss the construction of Paterson and Zwick, and our proof that nothing substantially better is possible.

Joint work with Mike Paterson (Warwick), Yuval Peres (Berkeley), Peter Winkler (Dartmouth), and Uri Zwick (Tel Aviv).
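
To see the gap being discussed: the classical one-block-per-level ("harmonic") stack overhangs by H_n/2, roughly (ln n)/2 block lengths, while the talk concerns constructions and matching bounds of order n^{1/3}. A quick numerical comparison (the constant in n^{1/3} is suppressed):

  import math

  def harmonic_overhang(n):
      """Overhang of the classical stack: the i-th block from the top sticks out 1/(2i)."""
      return sum(1.0 / (2 * i) for i in range(1, n + 1))

  for n in (10, 100, 10**4, 10**6):
      # Logarithmic growth versus the n^(1/3) scale of Paterson-Zwick stacks.
      print(n, round(harmonic_overhang(n), 2), round(n ** (1 / 3), 2))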

March 04, 2008, at 10:06 AM by 172.16.29.250 -
Changed line 32 from:
May 2 
to:
May 2 | Adam Meyerson, UCLA
Changed line 34 from:
May 16 
to:
May 16 | Navin Goyal, Georgia Tech | The VPN Conjecture is True
March 02, 2008, at 04:31 PM by 69.203.21.182 -
Changed lines 35-36 from:
to:
May 23 
March 02, 2008, at 12:25 PM by 69.203.21.182 -
Changed line 33 from:
May 9 
to:
May 9 | no talk | special program at Princeton
February 28, 2008, at 09:09 PM by 172.16.29.250 -
Changed line 25 from:
Mar 14 | Mikkel Thorup
to:
Mar 14 | Mikkel Thorup, AT&T | Efficient Cuts via Greedy Tree Packing
Added lines 362-387:

Mar 14 Efficient Cuts via Greedy Tree Packing

Mikkel Thorup, AT&T

We study a simple greedy tree packing of a graph and use it to derive better algorithms for fully-dynamic min-cut and for the static k-way cut problem.

A greedy tree packing is a sequence of spanning trees, where each new tree is a minimum spanning tree with respect to the edge loads from the previous trees; that is, the load of an edge is the number of times it has been used by the previous trees.

A minimum k-way cut is a minimum set of edges whose removal splits the graph into k components. A min-cut is a minimum 2-way cut.

If the (unknown) edge connectivity of the graph is c, we show that if we pack c^7*log^3 m trees, then some min-cut is crossed exactly once by some tree. This leads to the best fully-dynamic min-cut algorithm (presented at STOC'01).

If we pack k^3*log n trees, then every minimum k-way cut is crossed 2k-1 times by some tree. This leads to the best deterministic algorithm for k-way cut (to be presented at STOC'08).
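
The greedy packing itself is a few lines once an MST routine is fixed; a minimal self-contained sketch (Kruskal with union-find, names illustrative):

  def mst(n, edges, load):
      """Kruskal's algorithm using current edge loads as weights."""
      parent = list(range(n))
      def find(x):
          while parent[x] != x:
              parent[x] = parent[parent[x]]  # path halving
              x = parent[x]
          return x
      tree = []
      for e in sorted(range(len(edges)), key=lambda e: load[e]):
          ru, rv = find(edges[e][0]), find(edges[e][1])
          if ru != rv:
              parent[ru] = rv
              tree.append(e)
      return tree

  def greedy_tree_packing(n, edges, num_trees):
      """Each new tree is an MST w.r.t. loads left by the previous trees."""
      load = [0] * len(edges)
      trees = []
      for _ in range(num_trees):
          t = mst(n, edges, load)
          for e in t:
              load[e] += 1  # load of an edge = number of trees using it
          trees.append(t)
      return trees

  # Pack 3 trees into a 4-cycle; edge indices are returned per tree.
  print(greedy_tree_packing(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 3))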


February 26, 2008, at 11:32 AM by 172.16.29.250 -
Changed lines 326-337 from:

1. Polynomial-time constructible binary codes with the currently best

   known trade-off between error-correction radius and rate, achieving
   the so-called Blokh-Zyablov bound.

2. Existence of binary concatenated codes that achieve list decoding

   capacity, which is encouraging given the preeminent role played by
   code concatenation in constructing good binary codes.

3. Explicit binary codes for correcting a fraction (1/2-eps) of errors

   for eps -> 0 with block length growing as 1/eps^3; improving this
   cubic dependence will likely require a substantially new approach.
to:
  1. Polynomial-time constructible binary codes with the currently best known trade-off between error-correction radius and rate, achieving the so-called Blokh-Zyablov bound.
  2. Existence of binary concatenated codes that achieve list decoding capacity, which is encouraging given the preeminent role played by code concatenation in constructing good binary codes.
  3. Explicit binary codes for correcting a fraction (1/2-eps) of errors for eps -> 0 with block length growing as 1/eps^3; improving this cubic dependence will likely require a substantially new approach.
February 26, 2008, at 11:30 AM by 172.16.29.250 -
Changed line 23 from:
Feb 29 | Venkat Guruswami, U. Washington
to:
Changed line 29 from:
to:
Apr 11 | Rohit Khandekar, IBM
February 26, 2008, at 11:29 AM by 172.16.29.250 -
Changed line 26 from:
Mar 21 | Alexandr Andoni, MIT | TBA
to:
Mar 21 | Alexandr Andoni, MIT | TBA
Changed lines 29-30 from:
Apr 11 | Rohit Khandekar, IBM
Apr 18 
to:
Apr 11 | Rohit Khandekar, IBM | List decoding of binary codes: Improved results via new concatenation schemes
Apr 18 | Vijay Vazirani, Georgia Tech
Added lines 307-341:

Feb 29 List decoding of binary codes: Improved results via new concatenation schemes

Venkat Guruswami, IAS and U. Washington

Recent progress in algebraic coding theory has led to the construction of error-correcting codes with the optimal trade-off between information rate and the fraction of worst-case errors corrected (in the model of list decoding). These codes, which are folded versions of the classical Reed-Solomon codes, are defined over large alphabets. The corresponding program of achieving list decoding capacity for binary codes -- namely constructing binary codes of rate approaching 1-H(p) to list decode a fraction p of errors -- remains a major long term challenge.

In this talk, I will discuss the following three results on list decoding of binary codes. All of them use a concatenation scheme with an outer folded Reed-Solomon code, and crucially rely on the powerful list decoding property of these codes.

1. Polynomial-time constructible binary codes with the currently best

   known trade-off between error-correction radius and rate, achieving
   the so-called Blokh-Zyablov bound.

2. Existence of binary concatenated codes that achieve list decoding

   capacity, which is encouraging given the preeminent role played by
   code concatenation in constructing good binary codes.

3. Explicit binary codes for correcting a fraction (1/2-eps) of errors

   for eps -> 0 with block length growing as 1/eps^3; improving this
   cubic dependence will likely require a substantially new approach.

Based on joint works with Atri Rudra appearing in RANDOM'07, SODA'08, and CCC'08.


February 21, 2008, at 10:05 PM by Boaz Barak -
Changed line 26 from:
Mar 21 | spring break
to:
Mar 21 | Alexandr Andoni, MIT | TBA
February 20, 2008, at 06:05 PM by 128.112.95.79 -
Changed lines 28-29 from:
Apr 4 
Apr 11 
to:
Apr 4 | Nikhil Srivastava, Yale | Graph Sparsification by Effective Resistances
Apr 11 | Rohit Khandekar, IBM
February 20, 2008, at 06:00 PM by 128.112.95.79 -
Changed line 24 from:
Mar 7 
to:
Mar 7 | Nikhil Bansal, IBM | Degree bounded network design
Changed lines 307-331 from:
to:

Mar 7 Degree bounded network design

Nikhil Bansal, IBM

Computing the minimum cost spanning tree in an undirected graph has been well understood since the 1920's. However, consider the generalization where we are given degree bounds b_v on each vertex v, and we want to find the minimum cost spanning tree subject to these bounds. In a recent breakthrough result, Goemans gave an LP-based approximation algorithm that computes the minimum cost tree while violating the degree bounds by at most an additive +2. This was subsequently improved and simplified by Singh and Lau, who gave an additive guarantee of +1.

In this talk, I will describe a new extremely simple proof of the +1 result. Then I will discuss extensions to directed graphs, matroids and more general network design problems. Most of the talk will consist of giving a flavor of the polyhedral techniques used in the design and analysis of these algorithms.

Joint work with Rohit Khandekar and Viswanath Nagarajan.

February 15, 2008, at 12:30 AM by 172.16.29.250 -
Changed lines 31-32 from:
Apr 25 
to:
Apr 25 | Nina Balcan, CMU
May 2 
May 9 
May 16 
February 12, 2008, at 06:53 PM by 128.112.95.79 -
Changed line 21 from:
Feb 15 | Prasad Raghavendra, U. Washington
to:
Changed lines 266-279 from:
to:

Feb 15 Optimal Algorithms and Inapproximability Results for every CSP?

Prasad Raghavendra, U. Washington

Semidefinite Programming (SDP) is one of the strongest algorithmic techniques used in the design of approximation algorithms. In recent years, the Unique Games Conjecture (UGC) has proved to be intimately connected to the limitations of Semidefinite Programming.

Making this connection precise, we show the following result: if UGC is true, then for every constraint satisfaction problem (CSP) the best approximation ratio is given by a certain simple SDP. Specifically, we show a generic conversion from SDP integrality gap instances to UGC hardness results. This result holds both for maximization and minimization problems over arbitrary finite domains.

Using this connection between integrality gaps and hardness results we obtain a generic polynomial-time algorithm for all CSPs. Assuming the Unique Games Conjecture, this algorithm achieves the optimal approximation ratio for every CSP.

Unconditionally, for all 2-CSPs the algorithm achieves an approximation ratio equal to the integrality gap of the natural SDP. Thus the algorithm achieves at least as good an approximation ratio as the best known algorithms for several problems like MaxCut, Max2SAT and Unique Games.

February 05, 2008, at 02:51 PM by 128.112.95.79 -
Changed line 25 from:
Mar 14 
to:
Mar 14 | Mikkel Thorup
February 05, 2008, at 10:30 AM by 172.16.29.250 -
Changed lines 268-269 from:

Approximate Hypergraph Partitioning and Applications

to:

Feb 22 Approximate Hypergraph Partitioning and Applications

February 05, 2008, at 10:20 AM by 172.16.29.250 -
Changed lines 23-24 from:
Feb 29 | Venkat Guruswami, U. Wahington
Mar 7|
to:
Feb 29 | Venkat Guruswami, U. Washington
Mar 7 
February 05, 2008, at 10:17 AM by 172.16.29.250 -
Changed lines 14-15 from:

Fall 2007

to:

Spring 2008

Added lines 18-38:
Jan 15 | Robi Krauthgamer, Weizmann | Overcoming the l_1 nonembeddability barrier
Jan 23 | Mark Braverman, Toronto | Noisy sorting without resampling
Feb 8 | C. Seshadhri, Princeton | Adaptive Algorithms for Online Optimization
Feb 15 | Prasad Raghavendra, U. Washington
Feb 22 | Arie Matsliah, Technion | Approximate Hypergraph Partitioning and Applications
Feb 29 | Venkat Guruswami, U. Wahington
Mar 7|
Mar 14
Mar 21 | spring break
Mar 28 | James Lee, U. Washington | Spectral bounds without conformal mappings
Apr 4
Apr 11
Apr 18
Apr 25

Fall 2007

Date | Speaker | Title
Deleted lines 52-62:

Spring 2008

Date | Speaker | Title
Jan 15 | Robi Krauthgamer, Weizmann | Overcoming the l_1 nonembeddability barrier
Jan 23 | Mark Braverman, Toronto | Noisy sorting without resampling
Feb 8 | C. Seshadhri | Adaptive Algorithms for Online Optimization
Added line 57:
Deleted lines 58-82:

Feb 8 Adaptive Algorithms for Online Optimization

C. Seshadhri

The online learning framework captures a wide variety of learning problems. The setting is as follows: in each round, we have to choose a point from some fixed convex domain. Then, we are presented a convex loss function, according to which we incur a loss. The loss over T rounds is simply the sum of all the losses. The aim of most online learning algorithms is to minimize *regret*: the difference between the algorithm's loss and the loss of the best fixed decision in hindsight. Unfortunately, in situations where the loss function may vary a lot, the regret is not a good measure of performance. We define *adaptive regret*, a notion that is a much better measure of how well our algorithm is adapting to the changing loss functions. We provide a procedure that converts any standard low-regret algorithm to one that provides low adaptive regret. We use an interesting mix of techniques, and use streaming ideas to make our algorithm efficient. This technique can be applied in many scenarios, such as portfolio management, online shortest paths, and the tree update problem, to name a few.

Joint work with Elad Hazan


Changed lines 241-306 from:
to:

Feb 8 Adaptive Algorithms for Online Optimization

C. Seshadhri, Princeton University

The online learning framework captures a wide variety of learning problems. The setting is as follows: in each round, we have to choose a point from some fixed convex domain. Then, we are presented a convex loss function, according to which we incur a loss. The loss over T rounds is simply the sum of all the losses. The aim of most online learning algorithms is to minimize *regret*: the difference between the algorithm's loss and the loss of the best fixed decision in hindsight. Unfortunately, in situations where the loss function may vary a lot, the regret is not a good measure of performance. We define *adaptive regret*, a notion that is a much better measure of how well our algorithm is adapting to the changing loss functions. We provide a procedure that converts any standard low-regret algorithm to one that provides low adaptive regret. We use an interesting mix of techniques, and use streaming ideas to make our algorithm efficient. This technique can be applied in many scenarios, such as portfolio management, online shortest paths, and the tree update problem, to name a few.

Joint work with Elad Hazan
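
In symbols, using the standard formalization (x_t is the point played in round t, f_t the revealed loss, K the convex domain; this restates the notions above rather than quoting the talk):

  \mathrm{Regret}_T = \sum_{t=1}^{T} f_t(x_t) - \min_{x \in K} \sum_{t=1}^{T} f_t(x),
  \qquad
  \mathrm{AdaptiveRegret}_T = \max_{1 \le r \le s \le T} \Big( \sum_{t=r}^{s} f_t(x_t) - \min_{x \in K} \sum_{t=r}^{s} f_t(x) \Big).

Low adaptive regret forces the algorithm to stay competitive on every contiguous time window, not just over the whole horizon.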



Approximate Hypergraph Partitioning and Applications

Arie Matsliah, Technion

I will present an algorithm that given eps>0 and a graph G on n vertices outputs an eps-regular partition of G in O(n) time.

Unlike the previous approaches for this problem, which "reproved" Szemerédi's regularity lemma algorithmically and were therefore guaranteed to find only partitions of tower size, our algorithm will find a small regular partition if one exists.

The main tool that we develop for the above task is an algorithm for finding approximate partitions of hypergraphs, which generalizes the algorithm of Goldreich-Goldwasser-Ron for finding approximate partitions of graphs.

Joint work with Eldar Fischer and Asaf Shapira

Paper: http://www.cs.technion.ac.il/~ariem/papers/ahp.pdf



Mar 28 Spectral bounds without conformal mappings

James Lee, University of Washington

In 1980, Yang and Yau gave optimal bounds on the second eigenvalue of the Laplacian on a 2-dimensional surface of bounded genus. Spielman and Teng showed that such a bound holds in the graph setting for planar graphs, which implies that for bounded degree planar graphs, Lipton-Tarjan separators will be found by a simple spectral algorithm. Later, Kelner extended this to all graphs of bounded genus.

All of these approaches are based on conformal mappings, or their discrete analogue (e.g. Koebe's embedding theorem), and it seems difficult to extend them to families of graphs that don't possess a conformal structure. We give a more intrinsic approach to these spectral bounds, based on metric embeddings, together with a bit of duality, and some elementary multi-commodity flow and crossing number arguments. In particular, we are able to positively resolve a conjecture of Spielman and Teng which gives eigenvalue bounds for graphs that exclude any fixed minor.

Finally, unlike previous spectral approaches, we can obtain separators without a bounded degree assumption; although the second eigenvector may perform poorly in this setting, we show that our test vector still gives a \sqrt{n}-sized separator, yielding an alternate proof of the Alon-Seymour-Thomas result on separators in non-planar graphs.

[Joint work with Punya Biswal and Satish Rao]


January 31, 2008, at 01:34 PM by 128.112.95.79 -
Changed lines 48-49 from:

Feb 1 Adaptive Algorithms for Online Optimization

to:

Feb 8 Adaptive Algorithms for Online Optimization

January 31, 2008, at 01:33 PM by 128.112.95.79 -
Changed lines 41-42 from:
Feb 1 | C. Seshadhri | TBA
to:
January 23, 2008, at 09:13 PM by Sanjeev -
Changed line 28 from:
Nov 30 | Kostas Daskalakis
to:
Nov 30 | Kostas Daskalakis | Computing Approximate Nash Equilibria
January 23, 2008, at 09:12 PM by Sanjeev -
Changed lines 39-40 from:
Jan 15 | Robi Krauthgamer | Overcoming the l_1 nonembeddability barrier
Jan 23 | Mark Braverman | Noisy sorting without resampling
to:
Jan 15 | Robi Krauthgamer, Weizmann | Overcoming the l_1 nonembeddability barrier
Jan 23 | Mark Braverman, Toronto | Noisy sorting without resampling
January 23, 2008, at 09:03 PM by Sanjeev -
Added lines 39-40:
Jan 15 | Robi Krauthgamer | Overcoming the l_1 nonembeddability barrier
Jan 23 | Mark Braverman | Noisy sorting without resampling
December 10, 2007, at 09:51 PM by 172.16.40.253 -
Changed lines 39-40 from:
Feb 2 | C. Seshadhri | TBA
to:
Feb 1 | C. Seshadhri | TBA
Changed lines 46-47 from:

Feb 8 Adaptive Algorithms for Online Optimization

to:

Feb 1 Adaptive Algorithms for Online Optimization

December 10, 2007, at 09:48 PM by 172.16.40.253 -
Added line 44:
Added lines 46-70:

Feb 8 Adaptive Algorithms for Online Optimization

C. Seshadhri

The online learning framework captures a wide variety of learning problems. The setting is as follows: in each round, we have to choose a point from some fixed convex domain. Then, we are presented a convex loss function, according to which we incur a loss. The loss over T rounds is simply the sum of all the losses. The aim of most online learning algorithms is to minimize *regret*: the difference between the algorithm's loss and the loss of the best fixed decision in hindsight. Unfortunately, in situations where the loss function may vary a lot, the regret is not a good measure of performance. We define *adaptive regret*, a notion that is a much better measure of how well our algorithm is adapting to the changing loss functions. We provide a procedure that converts any standard low-regret algorithm to one that provides low adaptive regret. We use an interesting mix of techniques, and use streaming ideas to make our algorithm efficient. This technique can be applied in many scenarios, such as portfolio management, online shortest paths, and the tree update problem, to name a few.

Joint work with Elad Hazan


December 10, 2007, at 01:12 AM by 172.16.40.253 -
Added lines 33-34:
December 10, 2007, at 01:10 AM by 172.16.40.253 -
Changed lines 31-32 from:

to:
December 10, 2007, at 01:10 AM by 172.16.40.253 -
Added line 32:
December 10, 2007, at 01:08 AM by 172.16.40.253 -
Changed lines 36-37 from:
Feb 2 | Seshadhri Comandur | TBA
to:
Feb 2 | C. Seshadhri | TBA
December 10, 2007, at 01:07 AM by 172.16.40.253 -
Changed lines 32-33 from:

Abstracts

to:

Spring 2008

Date | Speaker | Title
Feb 2 | Seshadhri Comandur | TBA
Added lines 39-41:

Abstracts


Changed lines 223-229 from:

Spring 2008

Date | Speaker | Title
Feb 2 | Seshadhri Comandur | TBA
to:
December 10, 2007, at 01:06 AM by 172.16.40.253 -
Added lines 215-222:

Spring 2008

Date | Speaker | Title
Feb 2 | Seshadhri Comandur | TBA
November 05, 2007, at 01:01 PM by 172.16.29.250 -
Changed lines 179-180 from:

roland? Nov 16 Non-locality, negative probabilities, and communication complexity

to:

Nov 16 Non-locality, negative probabilities, and communication complexity

November 05, 2007, at 01:01 PM by 172.16.29.250 -
Changed line 26 from:
Nov 16 | Jérémie Roland | [#roland|Non-locality, negative probabilities, and communication complexity]]
to:
November 05, 2007, at 01:00 PM by 172.16.29.250 -
Changed line 26 from:
Nov 16 
to:
Nov 16 | Jérémie Roland | [#roland|Non-locality, negative probabilities, and communication complexity]]
Changed line 28 from:
Nov 30 | Kostas Daskalakis
to:
Nov 30 | Kostas Daskalakis
Added lines 177-214:

roland? Nov 16 Non-locality, negative probabilities, and communication complexity

Jérémie Roland, UC Berkeley

Note: We will not assume any knowledge about quantum mechanics (or any particular field whatsoever).

Abstract: Ever since its inception, quantum mechanics has appeared rather counter-intuitive in contrast to classical physics. In particular, one odd property of quantum mechanics is its apparent non-locality: this was first emphasized by the EPR paradox, and later clarified by Bell's theorem, which states that no local hidden variable (LHV) model may reproduce the results of an EPR-type experiment. One possible explanation that was first proposed by Feynman to resolve these apparent paradoxes was the (rather abstract) idea that quantum mechanics allowed probabilities to be negative.

Here, we will prove that any EPR-type experiment that respects non-signaling may be reproduced by an LHV model with quasi-probabilities, and give applications to communication complexity. More specifically, we will show how to derive a quasi-LHV model from a communication protocol. This yields a new lower bound method for deterministic communication complexity, expressed as a linear program. The duality theorem of linear programming then clarifies the relation between Bell inequalities, negative probabilities, and communication complexity.

Time permitting, we will also consider the problem of computing with so-called "non-local boxes", i.e. abstract devices whose inputs-outputs follow the most simple non-local yet non-signaling probability distribution, and as such may be described by a quasi-LHV model. As was first proved by van Dam, communication complexity becomes trivial as soon as non-local boxes may be used for free. Here, we will quantify the cost of non-local boxes in such computation by showing how a communication protocol may be translated into a protocol using non-local boxes, and by giving a lower bound on the number of non-local boxes necessary to compute a given function.

October 31, 2007, at 11:19 AM by Sanjeev -
Changed line 28 from:
Nov 30 
to:
Nov 30 | Kostas Daskalakis
October 15, 2007, at 01:44 AM by 172.16.29.250 -
Changed line 25 from:
Nov 9 
to:
Nov 9 | Elad Hazan
October 11, 2007, at 01:43 AM by 172.16.29.250 -
Changed line 23 from:
Oct 26 
to:
Oct 26 | Andrej Bogdanov | Pseudorandom bits for polynomials
Changed lines 152-162 from:
to:

Oct 26 Pseudorandom bits for polynomials

Andrej Bogdanov, Tsinghua University

Despite our general belief that randomized computations can be simulated deterministically in an efficient way, there are relatively few models for which results of this type can be shown unconditionally. On the other hand, the study of derandomization in simple models of computation has proved fruitful even in domains where no applications were apparent at the outset.

In this talk we consider pseudorandom generators for the class of low-degree multivariate polynomials over GF(2). In the case of degree 1 polynomials (linear functions), such pseudorandom generators were first constructed by Naor and Naor in 1989 and have been widely studied ever since. We show that, for any constant d, assuming the "Inverse Conjecture for the d-th Gowers norm" of Samorodnitsky, the sum of d generators for degree 1 polynomials is pseudorandom against degree d polynomials. The "Inverse Conjecture" is known to hold for d = 2 and d = 3 and in those cases our results are unconditional.

This is based on joint work with Emanuele Viola.

October 05, 2007, at 05:08 PM by 128.112.95.79 -
Changed lines 20-22 from:
Oct 5 | Benny Applebaum, Princeton | Cryptography in Constant Parallel Time
Oct 12
Oct 19 | Oded Regev, Tel Aviv | A Hypercontractive Inequality for Matrix-Valued Functions
to:
Deleted line 109:
Changed lines 111-112 from:

Oct 19 A Hypercontractive Inequality for Matrix-Valued Functions

to:

Oct 12 Chernoff-type Direct Product Theorems

Ragesh Jaiswal, UCSD

Consider a challenge-response protocol where the probability of a correct response is at least p for a legitimate user, and at most q < p for an attacker. One example is a CAPTCHA challenge, where a human should have a significantly higher chance of answering a single challenge (e.g., uncovering a distorted letter) than an attacker. Another example would be an argument system without perfect completeness. A natural approach to boost the gap between legitimate users and attackers would be to issue many challenges, and accept if the response is correct for more than a threshold fraction, for the threshold chosen between p and q. We give the first proof that parallel repetition with thresholds improves the security of such protocols. We do this with a very general result about an attacker’s ability to solve a large fraction of many independent instances of a hard problem, showing a Chernoff-like convergence of the fraction solved incorrectly to the probability of failure for a single instance.

Joint work with Russell Impagliazzo (UCSD, IAS) and Valentine Kabanets (SFU). This work was presented at CRYPTO 2007.
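
A numerical illustration of the gap amplification (parameters chosen arbitrarily, and responses assumed independent; the theorem's content is that Chernoff-like convergence survives arbitrary correlated attacker strategies):

  from math import comb

  def pass_prob(success, n, threshold):
      """P[Binomial(n, success) > threshold*n]: chance of clearing the bar."""
      k = int(threshold * n)
      return sum(comb(n, j) * success**j * (1 - success)**(n - j)
                 for j in range(k + 1, n + 1))

  p, q, thr = 0.9, 0.5, 0.7  # legitimate user, attacker, threshold in between
  for n in (10, 50, 200):
      # Legitimate pass rate stays near 1; attacker's decays exponentially in n.
      print(n, pass_prob(p, n, thr), pass_prob(q, n, thr))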


Oct 19 The Unique Games Conjecture with Entangled Provers is False

Changed lines 139-157 from:

The Bonami-Beckner hypercontractive inequality is a powerful tool in Fourier analysis of real-valued functions on the Boolean cube. We present a version of this inequality for *matrix-valued* functions on the Boolean cube. The proof is by a simple inductive argument based on a powerful inequality by Ball, Carlen, and Lieb.

Time permitting, we will present two applications of the inequality:

1. Bounds on k-out-of-n quantum random access codes: consider a map that encodes n classical bits into n/2 qubits, in such a way that any set of k bits can be recovered with probability p by some measurement. We show that the success probability p cannot be better than exponentially small in k.

2. We will describe a new approach to obtain lower bounds on locally decodable codes (LDCs) with more than 2 queries.

NOTE: This talk requires no prior knowledge of quantum computation.

Joint work with Avraham Ben-Aroya and Ronald de Wolf.

to:

We consider one-round games between a classical verifier and two provers who share entanglement. We show that when the constraints enforced by the verifier are `unique' constraints (i.e., permutations), the value of the game can be well approximated by a semidefinite program. Essentially the only algorithm known previously was for the special case of binary answers, as follows from the work of Tsirelson in 1980. Among other things, our result implies that the variant of the unique games conjecture where we allow the provers to share entanglement is false. Our proof is based on a novel `quantum rounding technique', showing how to take a solution to an SDP and transform it to a strategy for entangled provers.

NOTE: This talk requires absolutely no prior knowledge of quantum computation.

Joint work with Julia Kempe and Ben Toner.

October 05, 2007, at 12:11 AM by Boaz Barak -
Changed lines 3-6 from:
 Computer Science Department,
 Princeton University
to:

Theory Group, Computer Science Department, Princeton University

September 25, 2007, at 11:09 PM by 172.16.29.250 -
Changed lines 24-25 from:
Oct 26 | No talk
Nov 2 | No talk | Fall break
to:
Oct 26 
Nov 2 | Michael Schapira | Interdomain Routing and Games
Added lines 137-151:

Nov 2 Interdomain Routing and Games

Michael Schapira, The Hebrew University of Jerusalem http://www.cs.huji.ac.il/~mikesch/

We present a game-theoretic model that captures many of the intricacies of interdomain routing in today's Internet. In this model, the strategic agents are source nodes located on a network, who aim to send traffic to a unique destination node. The model enables complex dynamic interactions between the agents (asynchronous, sequential, and based on partial information).

We provide realistic conditions that guarantee that executing the Border Gateway Protocol (BGP), the only protocol currently used for interdomain routing, is in the best interest of each of the players. Moreover, we show that even coalitions of players of any size cannot improve their routing outcomes by jointly deviating from BGP. Unlike the vast majority of works in mechanism design, these results do not require any monetary transfers (to or by the agents).

Based on joint work with Hagay Levin and Aviv Zohar


September 20, 2007, at 12:35 PM by Boaz Barak -
Added lines 95-96:

Benny Applebaum, Princeton

September 20, 2007, at 12:35 PM by Boaz Barak -
Changed lines 99-102 from:
 First, I will show that every "moderately easy" one-way function (resp., pseudorandom generator, encryption scheme, signature scheme), say computable in NC1, can be compiled into a corresponding one-way function (resp., low-stretch pseudorandom generator, encryption scheme, signature scheme) in which each output bit depends on only four input bits.

 Second, I will discuss new relations between the parallel complexity of simple cryptographic tasks (e.g., generation of pseudorandom bits and computation of a one-way permutation) to the parallel complexity of complicated cryptographic tasks (e.g., encryption, signature or zero-knowledge proofs).
to:

First, I will show that every "moderately easy" one-way function (resp., pseudorandom generator, encryption scheme, signature scheme), say computable in NC1, can be compiled into a corresponding one-way function (resp., low-stretch pseudorandom generator, encryption scheme, signature scheme) in which each output bit depends on only four input bits.

Second, I will discuss new relations between the parallel complexity of simple cryptographic tasks (e.g., generation of pseudorandom bits and computation of a one-way permutation) to the parallel complexity of complicated cryptographic tasks (e.g., encryption, signature or zero-knowledge proofs).

September 20, 2007, at 12:34 PM by Boaz Barak -
Changed line 21 from:
Oct 5 | Benny Applebaum, Princeton | TBA
to:
Oct 5 | Benny Applebaum, Princeton | Cryptography in Constant Parallel Time
Changed lines 93-109 from:
to:

October 5th Cryptography in Constant Parallel Time

We study the parallel time-complexity of basic cryptographic primitives such as one-way functions, pseudorandom generators, encryption schemes and signatures. Specifically, we consider the possibility of computing instances of these primitives by NC0 circuits, in which each output bit depends on a constant number of input bits. While NC0 functions might seem too simple to have cryptographic strength, we show that, under standard cryptographic assumptions (e.g., that integer factorization is hard), most cryptographic tasks can be carried out by such functions.

In this talk, I will survey several results regarding cryptography in NC0.

 First, I will show that every "moderately easy" one-way function (resp., pseudorandom generator, encryption scheme, signature scheme), say computable in NC1, can be compiled into a corresponding one-way function (resp., low-stretch pseudorandom generator, encryption scheme, signature scheme) in which each output bit depends on only four input bits.

 Second, I will discuss new relations between the parallel complexity of simple cryptographic tasks (e.g., generation of pseudorandom bits and computation of a one-way permutation) to the parallel complexity of complicated cryptographic tasks (e.g., encryption, signature or zero-knowledge proofs).

Third, I will describe a new connection between NC0 cryptography and hardness of approximation.

Finally, I will examine the possibility of carrying out cryptographic tasks by functions in which each output bit depends on a constant number of input bits, and each input bit affects a constant number of *output* bits at the same time. I will characterize what cryptographic tasks can be performed by such functions.

The talk is self-contained, and does not assume previous background in cryptography. The talk is based on joint works with Yuval Ishai and Eyal Kushilevitz (FOCS '04, CCC '05, RANDOM '06, CRYPTO '07).

September 11, 2007, at 01:13 AM by 172.16.29.250 -
Changed lines 104-105 from:
  1. Bounds on k-out-of-n quantum random access codes:
to:

1. Bounds on k-out-of-n quantum random access codes:

Changed lines 110-111 from:
  1. We will describe a new approach to obtain lower bounds on locally decodable
to:

2. We will describe a new approach to obtain lower bounds on locally decodable

September 11, 2007, at 01:13 AM by 172.16.29.250 -
Changed lines 103-104 from:

Time permitting, we will present two applications of the inequality:

to:

Time permitting, we will present two applications of the inequality:

Deleted line 108:
September 11, 2007, at 01:12 AM by 172.16.29.250 -
Changed lines 104-105 from:

1. Bounds on k-out-of-n quantum random access codes:

to:
  1. Bounds on k-out-of-n quantum random access codes:
Changed lines 110-111 from:

2. We will describe a new approach to obtain lower bounds on locally decodable

to:
  1. We will describe a new approach to obtain lower bounds on locally decodable
September 11, 2007, at 01:10 AM by 172.16.29.250 -
Changed line 21 from:
Oct 5 | Benny Applebaum, Princeton | TBA
to:
Oct 5 | Benny Applebaum, Princeton | TBA
September 11, 2007, at 01:10 AM by 172.16.29.250 -
Changed lines 71-72 from:

Hardness of Reconstructing Multivariate Polynomials over Finite Fields

to:

Sept 28 Hardness of Reconstructing Multivariate Polynomials over Finite Fields

Changed lines 95-96 from:

A Hypercontractive Inequality for Matrix-Valued Functions

to:

Oct 19 A Hypercontractive Inequality for Matrix-Valued Functions

September 11, 2007, at 01:09 AM by 172.16.29.250 -
Changed line 20 from:
Sep 28 | Parikshit Gopalan, U. Washington | TBA
to:
Changed lines 71-72 from:

TBA

to:

Hardness of Reconstructing Multivariate Polynomials over Finite Fields

Added lines 75-91:

In the generic polynomial reconstruction problem, we are given a set of points along with target values for those points. Our goal is to find a low degree polynomial that fits the data well. This problem arises in several settings in computer science.

We consider the setting of low-degree multivariate polynomials over finite fields. Over GF[2], for any degree d, we show that it is NP-hard to tell whether some degree d polynomial fits the data almost everywhere, or if every polynomial of degree d fits the data at no more than a 1 - 2^{-d} fraction of points. This holds even with the promise that the polynomial that fits the data (if it exists) is in fact linear. Previously the only known hardness of approximation (or even NP-completeness) was for the case d = 1, which follows from a celebrated result of Hastad.

This is joint work with Subhash Khot and Rishi Saket.

September 11, 2007, at 12:44 AM by 172.16.29.250 -
Changed line 20 from:
Sep 28 
to:
Sep 28 | Parikshit Gopalan, U. Washington | TBA
Deleted line 69:
Changed lines 71-74 from:
to:

TBA

Parikshit Gopalan, U. Washington

Added lines 76-77:

September 10, 2007, at 03:38 AM by 172.16.29.250 -
Changed lines 74-75 from:

regev? A Hypercontractive Inequality for Matrix-Valued Functions

to:

A Hypercontractive Inequality for Matrix-Valued Functions

September 10, 2007, at 03:37 AM by 172.16.29.250 -
Changed line 21 from:
Oct 5 | Benny Applebaum, Princeton | TBA
to:
Oct 5 | Benny Applebaum, Princeton | TBA
Changed line 23 from:
Oct 19 | Oded Regev, Tel Aviv
to:
Added lines 72-95:

regev? A Hypercontractive Inequality for Matrix-Valued Functions

Oded Regev, Tel Aviv University

The Bonami-Beckner hypercontractive inequality is a powerful tool in Fourier analysis of real-valued functions on the Boolean cube. We present a version of this inequality for *matrix-valued* functions on the Boolean cube. The proof is by a simple inductive argument based on a powerful inequality by Ball, Carlen, and Lieb.

Time permitting, we will present two applications of the inequality:

1. Bounds on k-out-of-n quantum random access codes: consider a map that encodes n classical bits into n/2 qubits, in such a way that any set of k bits can be recovered with probability p by some measurement. We show that the success probability p cannot be better than exponentially small in k.

2. We will describe a new approach to obtain lower bounds on locally decodable codes (LDCs) with more than 2 queries.

NOTE: This talk requires no prior knowledge of quantum computation.

Joint work with Avraham Ben-Aroya and Ronald de Wolf.
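
For orientation, the scalar inequality being generalized (a standard statement, not quoted from the talk): for f : \{-1,1\}^n \to \mathbb{R}, exponents 1 \le p \le q, and the noise operator T_\rho that damps the degree-k Fourier part of f by \rho^k,

  \| T_\rho f \|_q \le \| f \|_p \quad \text{whenever } \rho \le \sqrt{(p-1)/(q-1)}.

The matrix-valued version replaces absolute values by matrix norms; see the paper for the precise formulation.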


September 03, 2007, at 10:30 AM by Boaz Barak -
Changed line 21 from:
Oct 5 
to:
Oct 5 | Benny Applebaum, Princeton | TBA
August 30, 2007, at 10:36 AM by 128.112.95.79 -
Changed lines 36-37 from:

Sept 21

to:

Sept 21 Restless Bandits with Bursty Rewards

August 30, 2007, at 10:35 AM by 128.112.95.79 -
Changed line 19 from:
Sep 21 | Kamesh Munagala, Duke
to:
Sep 21 | Kamesh Munagala, Duke | Restless Bandits with Bursty Rewards
Added lines 40-70:

We consider a variant of the classic multi-armed bandit (MAB) problem from stochastic control theory. We call this variant "Feedback MAB". Here, the reward obtained by playing each of n independent arms varies according to an underlying "on/off" Markov process with known parameters. The evolution of the Markov chain happens irrespective of whether the arm is played, and furthermore, the exact state of the Markov chain is only revealed to the player when the arm is played and the reward observed. At most one arm (or in general, M arms) can be played at any time step. The goal is to design a policy for playing the arms in order to maximize the infinite horizon time average expected reward.

This problem is an instance of a Partially Observable Markov Decision Process (POMDP), and a special case of the notoriously intractable "restless bandit" problem in control theory. Unlike the classic Stochastic MAB problem, the Feedback MAB problem does not admit greedy index-based optimal policies. The state of the system at any time step encodes the beliefs about the states of different arms, and the policy decisions change these beliefs – this aspect complicates the design and analysis of simple algorithms.

We design a constant factor approximation to the Feedback MAB problem by solving and rounding a natural LP relaxation to this problem. The resulting policy is simple to implement, intuitive, and crucially exploits interesting structure in the LP solution. As far as we are aware, this is the first (multiplicative) approximation algorithm for a POMDP problem.

To appear in FOCS 2007. Joint work with Sudipto Guha, UPenn.

August 22, 2007, at 03:23 PM by 66.180.184.68 -
Changed line 17 from:
to:
Added lines 32-40:

Abstracts


Sept 21

Kamesh Munagala, Duke


August 22, 2007, at 03:18 PM by 66.180.184.68 -
Changed lines 19-24 from:
Sep 21 | Kamesh Munagala, Duke
Sep 28
Oct 5
Oct 12
Oct 19 | Oded Regev, Tel Aviv
Oct 26 | No talk
to:
Sep 21 | Kamesh Munagala, Duke
Sep 28
Oct 5
Oct 12
Oct 19 | Oded Regev, Tel Aviv
Oct 26 | No talk
Changed lines 26-27 from:
Nov 9 
Nov 16 
to:
Nov 9 
Nov 16 
Changed lines 29-40 from:
Nov 30 
Dec 7  

Abstracts for the talks.

Sept 21

Kamesh Munagala, Duke


to:
Nov 30 
Dec 7 
August 22, 2007, at 03:17 PM by 66.180.184.68 -
Changed line 19 from:
Sep 21 | Kamesh Munagala, Duke.
to:
Sep 21 | Kamesh Munagala, Duke
Changed lines 30-32 from:
Dec 7 
to:
Dec 7  
Deleted line 33:
Deleted lines 35-36:

August 22, 2007, at 03:05 PM by 66.180.184.68 -
Changed lines 1-2 from:

Princeton Theory Lunch Schedule, 2006-07.

to:

Princeton Theory Lunch Schedule, 2007-08.

Changed lines 15-16 from:

Spring 2007

to:

Fall 2007

Changed lines 19-34 from:
to:
Sep 21 | Kamesh Munagala, Duke.
Sep 28
Oct 5
Oct 12
Oct 19 | Oded Regev, Tel Aviv
Oct 26 | No talk
Nov 2 | No talk | Fall break
Nov 9
Nov 16
Nov 23 | No talk | Thanksgiving holiday
Nov 30
Dec 7
Deleted lines 35-55:

Fall 2006

DateSpeakerTitle
Sep 22Nikhil Bansal, IBM TJ Watson.The Santa Claus Problem
Sep 29Martin J. Strauss, Michigan/PrincetonSublinear-time Heavy Hitters with Universal Guarantees
Oct 6Vijay V. Vazirani, Georgia TechNew Market Models and Algorithms II
Oct 13Nick Harvey, MITAlgebraic Structures and Algorithms for Matching and Matroid Problems
Oct 20Eric Allender, RutgersOn the Complexity of Numerical Analysis (update)
Oct 27No talk
Nov 3No talkFall break
Nov 10 DIMACS mixer at Princeton
Nov 17Vitaly Feldman, HarvardOn Computational Hardness of Agnostic Learning
Nov 24No talkThanksgiving holiday
Dec 1Sanjeev Khanna, U PennDisjoint Paths in Networks
Thu, Dec 7, 4pmMikkel Thorup, At&T Labs-ResearchCompact Oracles for Approximate Distances around Obstacles in the Plane
Jan 19Michael Schapira, Hebrew UniversitySetting Lower Bounds on Truthfulness

Deleted lines 38-56:

Sept 22 The Santa Claus Problem

Nikhil Bansal, IBM TJ Watson

Suppose Santa Claus has n presents that he wants to distribute among m kids. Let p_{ij} denote the happiness that kid i receives from present j. Santa's goal is to distribute the presents among the kids in such a way that the least lucky kid is as happy as possible; that is, to maximize min_{i=1,...,m} sum_{j \in S_i} p_{ij}, where S_i is the set of presents received by the i-th kid.

The problem arises naturally in the context of fair allocation of goods. However, it remains poorly understood from the point of view of approximability. I will survey the known results, and describe some approximation algorithms for certain special cases of the problem.

Joint work with Maxim Sviridenko.

Changed lines 40-59 from:

Sept 29 Sublinear-time Heavy Hitters with Universal Guarantees

Martin J. Strauss, Michigan/Princeton

In the heavy hitters problem, the goal is to track the most frequent items in a changing collection, using little space. Over the last decade, a number of algorithms have been proposed for this problem. We present the first algorithms that simultaneously satisfy three important criteria: (i) the space is optimal, up to log factors (ii) the reconstruction time is polynomial in the number of heavy hitters and polylogarithmic in the universe size (iii) a single set of measurements (constructed randomly) works for all collections of items.

Joint work with Anna Gilbert, Joel Tropp, and Roman Vershynin.

The talk is based on two papers. One is available as http://arxiv.org/abs/cs.DS/0608079 and the other is in progress.
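
For background, the classic Count-Min sketch of Cormode and Muthukrishnan is the standard small-space baseline for this kind of frequency tracking. It is not the construction from the talk (it lacks the for-all guarantee and the fast reconstruction), but it shows the basic space/accuracy trade-off; the class and parameter names below are ours:

    import random

    class CountMin:
        def __init__(self, width, depth, seed=0):
            rnd = random.Random(seed)
            self.width = width
            self.salts = [rnd.getrandbits(64) for _ in range(depth)]
            self.table = [[0] * width for _ in range(depth)]

        def _cells(self, item):
            for row, salt in enumerate(self.salts):
                yield row, hash((salt, item)) % self.width

        def update(self, item, count=1):
            for row, col in self._cells(item):
                self.table[row][col] += count

        def estimate(self, item):
            # never underestimates; exceeds the true count by more than
            # (e/width) * (total stream count) with probability < exp(-depth)
            return min(self.table[row][col] for row, col in self._cells(item))

Taking width about e/eps and depth about ln(1/delta) gives additive error eps times the stream length with probability 1-delta per query.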

to:

Sept 21

Kamesh Munagala, Duke

Deleted lines 44-506:

Oct 6 New Market Models and Algorithms II

Vijay V. Vazirani, Georgia Tech

The notion of a ``market'' has undergone a paradigm shift with the Internet -- totally new and highly successful markets have been defined and launched by companies such as Google, Yahoo!, Amazon, MSN and Ebay. Another major change is the availability of massive computational power for running these markets in a centralized or distributed manner.

In view of these new realities, the study of market equilibria, an important, though essentially non-algorithmic, theory within Mathematical Economics, needs to be revived and rejuvenated with an inherently algorithmic approach. Such a theory should not only address traditional market models but also define new models for some of the new markets.

In this two-talk series, I will give a feel for the exciting work going on on this front. Interestingly enough, this work has also contributed handsomely to the theory of algorithms itself. In particular, the highly successful primal-dual schema from exact and approximation algorithms, which has so far been used for combinatorially solving special classes of linear programs, has been extended to solving nonlinear convex programs.

Both talks are self-contained; the first is meant for a general audience and will be the Princeton CS department colloquium on Wednesday, Oct 4. The second will be at the Princeton theory lunch on Friday, Oct 6:

http://www.cs.princeton.edu/theory/index.php/Main/TheoryLunch

1). Resource Allocation Markets

This talk is based on the following three papers:

http://www-static.cc.gatech.edu/fac/Vijay.Vazirani/adwords.pdf

http://www-static.cc.gatech.edu/fac/Vijay.Vazirani/EG.pdf

http://www-static.cc.gatech.edu/fac/Vijay.Vazirani/EG2.pdf

2). Spending Constraint Utilities, with Applications to the Adwords Market

This talk is based on:

http://www-static.cc.gatech.edu/fac/Vijay.Vazirani/spending.pdf


Oct 13 Algebraic Structures and Algorithms for Matching and Matroid Problems

Nick Harvey, MIT

The task of finding a maximum matching in a graph is one of the classical problems in the theory of algorithms, inspiring the definition of the class P. Until recently, the fastest algorithm for this task took O(n^2.5) steps in an n-vertex (dense) graph. But an exciting development due to Mucha and Sankowski (FOCS '04) dropped the running time to O(n^w) where w < 2.38 is the exponent of matrix multiplication. However, their result was quite complicated, relying on certain structural decompositions of graphs and complicated data structures to maintain certain properties dynamically.

In this talk, I will present two new results: The first is a simple randomized algorithm achieving the same result, allowing for a self-contained presentation of this result. The second is an extension of this algorithm to the task of matroid intersection (for linear matroids over any field).
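
The algebraic starting point is Lovász's observation that the maximum matching size equals half the rank of a randomly substituted Tutte matrix, with high probability over the substitution. A toy version over the reals (a careful implementation would work over a random finite field):

    import numpy as np

    def matching_size(n, edges, seed=0):
        # rank of a random skew-symmetric Tutte matrix equals twice the
        # maximum matching size, with high probability (Schwartz-Zippel)
        rng = np.random.default_rng(seed)
        T = np.zeros((n, n))
        for u, v in edges:
            x = rng.uniform(1.0, 2.0)
            T[u, v], T[v, u] = x, -x
        return np.linalg.matrix_rank(T) // 2

    # a path on four vertices has a perfect matching of size 2
    print(matching_size(4, [(0, 1), (1, 2), (2, 3)]))  # 2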


Oct 20 On the Complexity of Numerical Analysis (update)

Eric Allender (Chair, Rutgers Dept. of Computer Science)

The Euclidean Traveling Salesman Problem is a well-known NP-hard problem, but some are surprised to learn that it is not known to lie in NP. This lecture will present a new upper bound on the complexity of this problem: it lies in the "counting hierarchy". The proof falls out of some more general considerations concerning arithmetic circuits, by way of a detour through the field of computation of the Real Numbers in the Blum-Shub-Smale model.

As a recent development, we also discuss some new connections between derandomization of Arithmetic Circuit Identity Testing and some arithmetic circuit questions in the Blum-Shub-Smale model. This is joint work with Peter Buergisser, Johan Kjeldgaard-Pedersen, and Peter Bro Miltersen, and was presented at the 2006 IEEE Conference on Computational Complexity; it is available on my home page. I will also discuss some more recent work by Peter Buergisser.


Nov 17 On Computational Hardness of Agnostic Learning

Vitaly Feldman, Harvard

The agnostic learning framework of Kearns, Schapire and Sellie is an extension of Valiant's PAC learning model, in which the examples given to the learning algorithm are labelled by an unknown and unrestricted (and hence adversarial) function. An agnostic learning algorithm for a class of functions $C$ is required to produce a hypothesis whose error (with respect to the distribution of examples) is "close" to the best achievable by a function from $C$. A large number of questions regarding the learnability of even the simplest function classes in this natural model have remained open since the introduction of the model in 1992.

In this talk I will show that agnostic learning of parities with respect to the uniform distribution reduces to learning of parities with random classification noise. Learning with random noise is a substantially more tractable model and, in particular, using a noise-tolerant parity learning algorithm by Blum, Kalai and Wasserman we obtain the first non-trivial algorithm for agnostic learning of parities with respect to the uniform distribution. The reduction also shows that learning of juntas and DNF expressions with respect to the uniform distribution can be reduced to learning parities with random classification noise.

The second problem we will discuss is the problem of weak agnostic learning of monomials by a proper learning algorithm (that is, an algorithm that produces a monomial as its hypothesis) formulated by Avrim Blum. We solve this open problem by showing that it is NP-hard. In other words, we prove that finding a monomial that is consistent with $1/2+\epsilon$ fraction of examples is NP-hard even when there exists a monomial consistent with $1-\epsilon$ fraction of examples (for any constant $\epsilon$). The result can also be interpreted as an optimal hardness of approximation result for maximizing agreements with a monomial. It substantially improves on previous hardness of approximation results for this problem (due to Ben-David et al., and Bshouty and Burroughs).

The talk will be largely self-contained and both results will be also interpreted outside of the learning context. Parts of this talk are based on joint work with Parikshit Gopalan, Subhash Khot and Ashok Ponnuswami.
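
For concreteness, this is the random-classification-noise model for parities that the reduction targets: uniform examples, labels given by the parity over a hidden index set, and each label flipped independently with probability eta (a toy generator, with names of our choosing):

    import random

    def noisy_parity_examples(n, S, eta, count, seed=0):
        # uniform examples in {0,1}^n labeled by the parity over S,
        # with each label flipped independently with probability eta
        rnd = random.Random(seed)
        for _ in range(count):
            x = [rnd.randrange(2) for _ in range(n)]
            y = sum(x[i] for i in S) % 2
            if rnd.random() < eta:
                y ^= 1
            yield x, y

    for x, y in noisy_parity_examples(6, {0, 3}, eta=0.1, count=3):
        print(x, y)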


Dec 1 Disjoint Paths in Networks

Sanjeev Khanna, U Penn

A fundamental problem in combinatorial optimization is the edge-disjoint paths problem (EDP). We are given a network and a collection of source-destination pairs in the network. The goal is to maximize the number of pairs that can be connected by edge-disjoint paths (i.e., the paths cannot share any edges). Even special cases of EDP correspond to non-trivial optimization problems, and the problem becomes NP-hard in very restricted settings. A well-studied approach for designing approximation algorithms for EDP is to use the multicommodity flow relaxation. In this talk, we will describe some recent progress on understanding the integrality gap of the multicommodity flow relaxation as well as present some new hardness of approximation results for EDP and related problems in directed graphs.
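
As a point of reference, the simplest heuristic for EDP greedily routes each pair along a shortest path and deletes the used edges. The sketch below (names ours) is only this baseline, not the flow-relaxation algorithms discussed in the talk:

    from collections import deque

    def bfs_path(adj, s, t):
        # shortest path from s to t in an undirected graph, or None
        parent, queue = {s: None}, deque([s])
        while queue:
            u = queue.popleft()
            if u == t:
                path = [t]
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return path[::-1]
            for v in adj[u]:
                if v not in parent:
                    parent[v] = u
                    queue.append(v)
        return None

    def greedy_edp(adj, pairs):
        # route each pair on a shortest path, then delete its edges
        routed = 0
        for s, t in pairs:
            path = bfs_path(adj, s, t)
            if path:
                routed += 1
                for u, v in zip(path, path[1:]):
                    adj[u].discard(v)
                    adj[v].discard(u)
        return routed

    adj = {0: {1, 2}, 1: {0, 3}, 2: {0, 3}, 3: {1, 2}}  # a 4-cycle
    print(greedy_edp(adj, [(0, 3), (0, 3)]))  # both pairs fit: 2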


Dec 7 Compact Oracles for Approximate Distances around Obstacles in the Plane

Mikkel Thorup, AT&T Labs-Research

Consider the Euclidean plane with arbitrary polygonal obstacles with a total of n corners. For any fixed a>0, we construct an oracle in near-linear time and space. Given any pair of points, the oracle approximates the obstacle-avoiding distance within a factor (1+a) in O(log n) time. To get such distances exactly, the best current space bound is O(n^{11}) [Chang Mitchell SODA'99].


Jan 19 Setting Lower Bounds on Truthfulness

Michael Schapira, Hebrew University

We present and discuss general techniques for proving inapproximability results for truthful mechanisms. We make use of these techniques to prove lower bounds on the approximability of several non-utilitarian multi-parameter problems.

In particular, we demonstrate the strength of our techniques by exhibiting a lower bound of $2-\frac{1}{m}$ for the scheduling problem with unrelated machines (formulated as a mechanism design problem in the seminal paper of Nisan and Ronen on Algorithmic Mechanism Design). Our lower bound applies to truthful randomized mechanisms (disregarding any computational assumptions on the running time of these mechanisms). Moreover, it holds even for the weaker notion of truthfulness for randomized mechanisms -- i.e., truthfulness in expectation. This lower bound nearly matches the known $\frac{7}{4}$ (randomized) truthful upper bound for the case of two machines (a non-truthful FPTAS exists). No lower bound for truthful randomized mechanisms in multi-parameter settings was previously known.

Based on joint work with Ahuva Mu'alem.


Feb 9 A Combinatorial, Primal-Dual Approach to Semidefinite Programs

Satyen Kale, Princeton University

Algorithms based on convex optimization, especially linear and semidefinite programming, are ubiquitous in Computer Science. While there are polynomial time algorithms known to solve such problems, quite often the running time of these algorithms is very high. Designing simpler and more efficient algorithms is important for practical impact.

In my talk, I will describe applications of a Lagrangian relaxation technique, the Multiplicative Weights Update method, in the design of efficient algorithms for various optimization problems. We generalize the method to the setting of symmetric matrices rather than real numbers. The new algorithm yields the first truly general, combinatorial, primal-dual method for designing efficient algorithms for semidefinite programming. Using these techniques, we obtain significantly faster algorithms for approximating the Sparsest Cut and Balanced Separator in both directed and undirected weighted graphs to factors of O(log n) and O(sqrt{log n}), and also for the Min UnCut problem. The algorithm also has applications in quantum computing and derandomization.

This is joint work with Sanjeev Arora.
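
For intuition, the scalar prototype of the method: keep a weight per "expert", play the normalized weights, and decay each weight exponentially in its loss. The talk's contribution replaces this weight vector by a matrix exponential of accumulated loss matrices; the sketch below is only the classical scalar version:

    import math

    def multiplicative_weights(loss_rounds, eta=0.1):
        # loss_rounds[t][i] in [0, 1] is expert i's loss in round t
        n = len(loss_rounds[0])
        w = [1.0] * n
        total = 0.0
        for losses in loss_rounds:
            W = sum(w)
            total += sum(wi / W * li for wi, li in zip(w, losses))
            w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
        # exceeds the best expert's total loss by O(sqrt(T log n)) for tuned eta
        return total

    print(multiplicative_weights([[0, 1], [1, 0], [0, 1], [0, 1]]))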


Feb 16 On Translational Motion Planning in 3-Space

Esther Ezra, Tel-Aviv University

Constructing the union of geometric objects and bounding its combinatorial complexity is a fundamental problem in computational and combinatorial geometry. A major application of this problem is to translational motion planning, where a polyhedral robot $R$ translates in a fixed polyhedral environment, and one wishes to compute all placements of $R$ (known as free placements) at which it avoids collision with any obstacle.

In this talk, I will give a brief overview of the area, and explain its importance. I will then discuss the union problem, where the input consists of $n$ ``fat'' tetrahedra in 3-space, of arbitrary sizes, and show that in this case the combinatorial complexity of the union is $O(n^{2+\eps})$, for any $\eps > 0$, thus almost settling a conjecture of Pach et al. Our bound is the first (non-trivial) subcubic bound presented for this problem, and is almost tight in the worst case. An immediate consequence of this result is that the combinatorial complexity of the union of $n$ cubes in 3-space, arbitrarily aligned and of arbitrary sizes, is also $O(n^{2+\eps})$, for any $\eps > 0$.

The talk will be self-contained.

Joint work with Micha Sharir.


Feb 23 A complete set of rotationally and translationally invariant features based on a generalization of the bispectrum to non-commutative groups

Risi Kondor, Columbia University

Deriving translation and rotation invariant representations is a fundamental problem in computer vision with a substantial literature. I propose a new set of features which

a, are simultaneously invariant to translation and rotation;
b, are sufficient to reconstruct the original image with no loss (up to a bandwidth limit);
c, do not involve matching with a template image or any similar discontinuous operation.

The new features are based on Kakarala's generalization of the bispectrum to compact Lie groups and a projection onto the sphere. I validated the method on a handwritten digit recognition dataset with randomly translated and rotated digits.

paper: http://arxiv.org/abs/cs.CV/0701127


Mar 2 Tight integrality gaps for Vertex Cover SDPs in the Lovasz-Schrijver hierarchy

Iannis Tourlakis, U. Toronto

One of the most powerful techniques used for approximation algorithms is semidefinite programming relaxations. Such relaxations have led to breakthrough approximation algorithms, e.g., for Max Cut and Sparsest Cut. However, they have not had much success with Vertex Cover. A series of papers (Kleinberg-Goemans, Charikar, Hatami-Magen-Markakis) show that several natural SDP relaxations for Vertex Cover have integrality gaps of 2-o(1), giving no improvement over the naive greedy algorithm for Vertex Cover.

Do better SDPs exist for Vertex Cover? A natural class of SDPs to consider are those derived by applying the Lovasz-Schrijver LS+ lift-and-project operator to the standard LP relaxation for Vertex Cover. Repeatedly applying this operator over many rounds results in a sequence of tighter and tighter SDP relaxations where the rth round SDP is computable in time n^O(r) and where the nth SDP solves Vertex Cover exactly on n vertex graphs. Ruling out non-trivial approximations for Vertex Cover in this hierarchy rules out one of the most promising classes of algorithms for Vertex Cover and thus may give evidence on the true approximability of Vertex Cover.

Schoenebeck, Trevisan and Tulsiani recently showed that the integrality gap for Vertex Cover remains 7/6 after even linearly many rounds of LS+. However, this gap is less than the inapproximability factor of 1.36 that Dinur and Safra rule out using PCP-based techniques. Determining the exact strength of the LS+ relaxation for Vertex Cover after even 2 rounds remained open.

We show the first tight integrality gaps for Vertex Cover SDPs in the Lovasz-Schrijver hierarchy. In particular, we prove that the integrality gap remains 2-o(1) after even Omega(sqrt(log n/log log n)) rounds of LS+.

This is joint work with Konstantinos Georgiou, Avner Magen and Toniann Pitassi.


Mar 9 Fully Polynomial Time Approximation Schemes for Stochastic Dynamic Programming

Nir Halman, MIT

We develop a framework for obtaining (deterministic) Fully Polynomial Time Approximation Schemes (FPTAS) for stochastic univariate dynamic programming problems with either convex or monotone single-period cost function.

Using our framework, we give the first FPTAS for several NP-Hard problems in various fields of research.

Joint work with Diego Klabjan, Chung-Lun Li, James Orlin and David Simchi-Levi.


Mar 16 List Decoding with Optimal Data Rate

Venkatesan Guruswami, U. Washington

Suppose you want to communicate a message of k packets on a noisy communication channel. So you judiciously encode it as a redundant collection of n = ck packets and transmit these. What is the fewest number of correct packets one needs to receive in order to have any hope of recovering the message?

Well, clearly one needs at least k correct packets. In this talk, I will describe an encoding scheme that attains this information-theoretic limit: for any desired constant eps > 0, it enables recovery of the message as long as at least k(1+eps) packets are received intact. The location of the correct packets and the errors on the remaining packets can be picked adversarially by the channel. Our codes are simple to describe: they are certain *folded* Reed-Solomon codes, which are just the well known Reed-Solomon codes, but viewed as a code over a larger alphabet via a bundling together of codeword symbols.

This achieves the optimal trade-off between redundancy and error-resilience for a worst-case noise model where the channel can corrupt the transmitted symbols arbitrarily subject to a bound on the total number of errors. These results are obtained in an error-recovery model called list decoding. The talk will introduce and motivate the problem of list decoding, and then give a peek into the algebraic ideas and constructions that lead to the above result.

Based on joint work with Atri Rudra.
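
The bundling itself is just a reindexing of the codeword; in the actual construction the evaluation points are consecutive powers of a field generator, which is what makes list decoding of the folded code work. A toy illustration of the folding step alone:

    def fold(codeword, m):
        # view m consecutive symbols as one symbol over the alphabet F^m
        assert len(codeword) % m == 0
        return [tuple(codeword[i:i + m]) for i in range(0, len(codeword), m)]

    # toy Reed-Solomon codeword: evaluations of f(x) = 2 + 3x over F_7
    codeword = [(2 + 3 * x) % 7 for x in range(6)]
    print(fold(codeword, 2))  # three symbols, each a pair of field elements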


Mar 30 Sampling algorithms and coresets for Lp regression and applications

Petros Drineas, RPI

L2 regression, also known as the least squares approximation problem, and more generally Lp regression, are fundamental problems that have numerous applications in mathematics and statistical data analysis. Recall that the over-constrained Lp regression problem takes as input an n x d matrix A, where n > d, an n-vector b, and a number p, and it returns as output a d-vector x such that ||Ax-b||_p is minimized. Several randomized algorithms for this problem, each of which provides a relative-error approximation to the exact solution, are described. The first algorithm applies novel ``subspace sampling'' probabilities to randomly sample constraints and thus construct a coreset for the L2 regression problem. The second algorithm (devised by T. Sarlos) applies a random projection and uses a ``fast'' version of the Johnson-Lindenstrauss lemma to obtain an efficient approximation to the L2 regression problem. The third algorithm generalizes the concept of a ``well-conditioned'' basis and of ``subspace-preserving sampling'' to obtain efficiently a coreset for Lp regression, for all $p\in[1,\infty)$. Applications of these random projection and random sampling algorithms to data analysis will be briefly discussed.
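
A minimal sketch of the subspace-sampling idea for the p = 2 case, assuming the standard leverage-score probabilities (the paper's algorithms and guarantees are more refined than this toy version):

    import numpy as np

    def sampled_least_squares(A, b, s, seed=0):
        rng = np.random.default_rng(seed)
        Q, _ = np.linalg.qr(A)           # orthonormal basis for the range of A
        lev = (Q ** 2).sum(axis=1)       # leverage score of each constraint
        p = lev / lev.sum()
        idx = rng.choice(len(A), size=s, p=p)
        w = 1.0 / np.sqrt(s * p[idx])    # rescaling keeps the estimate unbiased
        x, *_ = np.linalg.lstsq(A[idx] * w[:, None], b[idx] * w, rcond=None)
        return x

    rng = np.random.default_rng(1)
    A = rng.normal(size=(1000, 5))
    b = A @ np.arange(1.0, 6.0) + 0.01 * rng.normal(size=1000)
    print(sampled_least_squares(A, b, s=100))  # close to [1, 2, 3, 4, 5]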


April 6 Approximating Minimum Bounded Degree Spanning Trees to within one of Optimal

Mohit Singh, CMU

In the Minimum Bounded Degree Spanning Tree problem, we are given an edge-weighted undirected graph with a degree bound k for each vertex v, and the task is to find a spanning tree of minimum cost which satisfies all the degree bounds. In this talk I will present an efficient algorithm which returns a spanning tree in which the degree of any vertex v is at most k+1 and whose cost is at most the cost of the optimal spanning tree of maximum degree k. This settles a conjecture of Goemans affirmatively and generalizes a result of Furer and Raghavachari to weighted graphs.

The algorithm generalizes to the setting where vertices have distinct upper and lower bounds on their degrees, and returns a spanning tree of optimal cost which violates the degree bounds by an additive one.

The main technique used is the iterative rounding method introduced by Jain in the design of approximation algorithms. Our major technical contribution is to extend the ideas of this method and apply them effectively to a new domain of problems.

The talk will be self-contained.

This is joint work with Lap Chi Lau.


April 13 Improved Approximation for Directed Cut Problems

Amit Agarwal, Princeton

A fundamental problem associated with graphs is that of finding the cheapest set of edges which cuts the graph into two or more pieces. Graph cut problems have a number of important applications in divide and conquer algorithms. In the minimum multicut problem, the goal is to partition the graph so as to separate a given set of source-sink pairs.

Our main result is an improvement upon the current best O(n^{1/2}) approximation for the multicut problem in directed graphs. Our algorithm uses the natural LP relaxation for the problem. We present a randomized rounding algorithm with a more sophisticated charging scheme and analysis to obtain an O(n^{11/23}) approximation. Our result also implies an improved upper bound on the worst-case ratio between maximum multicommodity flow and minimum multicut in directed graphs.

Perhaps more interesting than the specific bound we obtain is the fact that sqrt{n} is not the right answer for this problem, as well as the techniques we use in establishing this.

This is joint work with Noga Alon and Moses Charikar.


April 27 Tradeoffs between Local and Global Distortions of Metric Spaces

Konstantin Makarychev, Princeton

Suppose that every k points in a metric space X are D-distortion embeddable into l1. We give upper and lower bounds on the distortion required to embed the entire space X into l1. This is a natural mathematical question and is also motivated by the study of relaxations obtained by lift-and-project methods for graph partitioning problems. In this setting, we show that X can be embedded into l1 with distortion O (D log (n/k)). Moreover, we give a lower bound showing that this result is tight if D is bounded away from 1. For D = (1 + delta) we give a lower bound of Omega(log (n/k) / log (1/delta)); and for D=1, we give a lower bound of Omega(log n / (log k + log log n)). Our bounds significantly improve on the results of Arora, Lovasz, Newman, Rabani, Rabinovich and Vempala, who initiated a study of these questions.

This is joint work with Moses Charikar and Yury Makarychev.
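
Writing $c_1(X)$ for the least distortion with which $X$ embeds into $\ell_1$ (standard notation, our choice here), the bounds in the abstract read:

\[ c_1(X) = O\!\left(D \log \frac{n}{k}\right), \qquad
   c_1(X) = \Omega\!\left(\frac{\log (n/k)}{\log (1/\delta)}\right) \text{ for } D = 1 + \delta, \qquad
   c_1(X) = \Omega\!\left(\frac{\log n}{\log k + \log \log n}\right) \text{ for } D = 1. \]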


May 4 The Hiring Problem and Lake Wobegon Strategies

Michael Mitzenmacher, Harvard University

We introduce the hiring problem, in which a growing company continuously interviews and decides whether to hire applicants. This problem is similar in spirit to, but quite different from, the well-studied secretary problem. It captures aspects of decision making under uncertainty, and specifically issues that arise in systems where items are bought or sold at negotiated prices in an imperfect information market.

We analyze natural strategies of hiring above the current average, considering both the mean and the median averages; we call these Lake Wobegon strategies. Like the hiring problem itself, our strategies are intuitive, simple to describe, and amenable to mathematically and economically significant modifications. We demonstrate several intriguing behaviors of the two strategies. Specifically, we show dramatic differences between hiring above the mean and above the median. We also show that both strategies are intrinsically connected to the lognormal distribution, leading to only very weak concentration results, and the marked importance of the first few hires on the overall outcome.

Joint work with Andrei Broder (Yahoo! Research), Adam Kirsch (Harvard), Ravi Kumar (Yahoo! Research), Eli Upfal (Brown), and Sergei Vassilvitskii (Stanford).
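
A quick simulation of the two strategies, assuming (our choice, not the talk's) that applicant quality is uniform on [0,1]:

    import random

    def lake_wobegon(n_applicants, strategy, seed=0):
        rnd = random.Random(seed)
        hired = [rnd.random()]                    # start with one employee
        for _ in range(n_applicants):
            q = rnd.random()
            if strategy == "mean":
                threshold = sum(hired) / len(hired)
            else:                                 # hire above the median
                threshold = sorted(hired)[len(hired) // 2]
            if q > threshold:
                hired.append(q)
        return len(hired), sum(hired) / len(hired)

    for strategy in ("mean", "median"):
        print(strategy, lake_wobegon(10**4, strategy))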


May 11 An approximation algorithm for the cubic Grothendieck problem

Assaf Naor, NYU

Given a symmetric 3-tensor of real numbers {a_{ijk} : 1<=i,j,k<=n}, we consider the problem of approximating in polynomial time the maximum of \sum_{i,j,k} a_{ijk} x_i x_j x_k over the discrete cube {-1,1}^n. We will present both upper and lower bounds for this problem, and show how our new algorithm can be used in combinatorial optimization. Our approach reduces the problem to that of efficiently computing the diameter of convex bodies in the \ell_1 norm, for which we will present optimal results.

Joint work with Subhash Khot.
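
The objective is easy to state in code; the brute force below (exponential in n, illustration only) computes the exact quantity that the talk's polynomial-time algorithm approximates:

    from itertools import product

    def max_cubic_form(a):
        # max of sum_{i,j,k} a[i][j][k] x_i x_j x_k over x in {-1,1}^n
        n = len(a)
        return max(
            sum(a[i][j][k] * x[i] * x[j] * x[k]
                for i in range(n) for j in range(n) for k in range(n))
            for x in product((-1, 1), repeat=n)
        )

    a = [[[1, 0], [0, -1]], [[0, -1], [-1, 2]]]
    print(max_cubic_form(a))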


May 18 Learning from related sources

Koby Crammer, University of Pennsylvania

We often like to build a model for one scenario based on data from similar or nearby cases. For example, consider the problem of building a model which predicts the sentiment of short book reviews, using reviews and sentiment for DVDs. Another example is learning movie preferences for one viewer from ratings provided by other, similar users. There is a natural tradeoff between using data from more users and using data from only similar users.

In this talk, I will discuss the problem of learning good models using data from multiple related or similar sources. I will present a theoretical approach which extends the standard probably approximately correct (PAC) learning framework, and show how it can be applied in order to determine which sources of data should be used for classification and regression.

I will then proceed to the analysis of the problem where we would like to model a target domain using labeled data from sources with a different distribution. I will briefly discuss the problem of representation and try to answer under what conditions we can adapt a classifier trained on the source domain for use in the target domain.


May 17, 2007, at 06:14 PM by 172.16.29.250 -
Changed lines 7-9 from:

'''Lunch will be served at 11:45am in room 402, CS Department, 35 Olden St. The talk will begin at 12:05.'''

to:

Lunch will be served at 11:45am in room 402, CS Department, 35 Olden St. The talk will begin at 12:05.

May 17, 2007, at 06:14 PM by 172.16.29.250 -
Changed lines 6-10 from:

Important: Reverting back to the old time: 11:45am

Lunch will be served at 11:45am in room 402, CS Department, 35 Olden St. The talk will begin at 12:05.

to:

'''Lunch will be served at 11:45am in room 402, CS Department, 35 Olden St. The talk will begin at 12:05.'''

May 17, 2007, at 06:13 PM by 172.16.29.250 -
Changed lines 34-35 from:
May 18Koby Crammer, UPenn
to:
May 18Koby Crammer, UPennLearning from related sources
Changed lines 554-566 from:

to:

May 18 Learning from related sources

Koby Crammer, University of Pennsylvania

We often like to build a model for one scenario based on data from similar or nearby cases. For example, consider the problem of building a model which predicts the sentiment of short book reviews, using reviews and sentiment for DVDs. Another example is learning movie preferences for one viewer from ratings provided by other, similar users. There is a natural tradeoff between using data from more users and using data from only similar users.

In this talk, I will discuss the problem of learning good models using data from multiple related or similar sources. I will present a theoretical approach which extends the standard probably approximately correct (PAC) learning framework, and show how it can be applied in order to determine which sources of data should be used for classification and regression.

I will then proceed to the analysis of the problem where we would like to model a target domain using labeled data from sources with a different distribution. I will briefly discuss the problem of representation and try to answer under what conditions we can adapt a classifier trained on the source domain for use in the target domain.


May 10, 2007, at 09:39 AM by 172.16.29.250 -
Changed line 33 from:
May 11Assaf Naor, NYU[[#assaf|An approximation algorithm for the cubic Grothendieck problem]
to:
May 10, 2007, at 09:38 AM by 172.16.29.250 -
Changed line 33 from:
May 11Assaf Naor, NYU[[#assafAn approximation algorithm for the cubic Grothendieck problem]
to:
May 11Assaf Naor, NYU[[#assaf|An approximation algorithm for the cubic Grothendieck problem]
May 10, 2007, at 09:38 AM by 172.16.29.250 -
Changed line 33 from:
May 11Assaf Naor, NYU
to:
May 11Assaf Naor, NYU[[#assafAn approximation algorithm for the cubic Grothendieck problem]
Added lines 539-554:

May 11 An approximation algorithm for the cubic Grothendieck problem

Assaf Naor, NYU

Given a symmetric 3-tensor of real numbers {a_{ijk} : 1<=i,j,k<=n}, we consider the problem of approximating in polynomial time the maximum of \sum_{i,j,k} a_{ijk} x_i x_j x_k over the discrete cube {-1,1}^n. We will present both upper and lower bounds for this problem, and show how our new algorithm can be used in combinatorial optimization. Our approach reduces the problem to that of efficiently computing the diameter of convex bodies in the \ell_1 norm, for which we will present optimal results.

Joint work with Subhash Khot.


May 02, 2007, at 08:45 AM by 172.16.29.250 -
Changed lines 32-33 from:
May 4Michael Mitzenmacher, Harvard
to:
May 4Michael Mitzenmacher, HarvardThe Hiring Problem and Lake Wobegon Strategies
May 11Assaf Naor, NYU
May 18Koby Crammer, UPenn
Added lines 508-538:

May 4 The Hiring Problem and Lake Wobegon Strategies

Michael Mitzenmacher, Harvard University

We introduce the hiring problem, in which a growing company continuously interviews and decides whether to hire applicants. This problem is similar in spirit to, but quite different from, the well-studied secretary problem. It captures aspects of decision making under uncertainty, and specifically issues that arise in systems where items are bought or sold at negotiated prices in an imperfect information market.

We analyze natural strategies of hiring above the current average, considering both the mean and the median averages; we call these Lake Wobegon strategies. Like the hiring problem itself, our strategies are intuitive, simple to describe, and amenable to mathematically and economically significant modifications. We demonstrate several intriguing behaviors of the two strategies. Specifically, we show dramatic differences between hiring above the mean and above the median. We also show that both strategies are intrinsically connected to the lognormal distribution, leading to only very weak concentration results, and the marked importance of the first few hires on the overall outcome.

Joint work with Andrei Broder (Yahoo! Research), Adam Kirsch (Harvard), Ravi Kumar (Yahoo! Research), Eli Upfal (Brown), and Sergei Vassilvitskii (Stanford).


April 25, 2007, at 01:10 PM by 128.112.95.79 -
Changed line 31 from:
Apr 27Konstantin Makarychev, Princeton
to:
Changed lines 484-505 from:

to:

April 27 Tradeoffs between Local and Global Distortions of Metric Spaces

Konstantin Makarychev, Princeton

Suppose that every k points in a metric space X are D-distortion embeddable into l1. We give upper and lower bounds on the distortion required to embed the entire space X into l1. This is a natural mathematical question and is also motivated by the study of relaxations obtained by lift-and-project methods for graph partitioning problems. In this setting, we show that X can be embedded into l1 with distortion O (D log (n/k)). Moreover, we give a lower bound showing that this result is tight if D is bounded away from 1. For D = (1 + delta) we give a lower bound of Omega(log (n/k) / log (1/delta)); and for D=1, we give a lower bound of Omega(log n / (log k + log log n)). Our bounds significantly improve on the results of Arora, Lovasz, Newman, Rabani, Rabinovich and Vempala, who initiated a study of these questions.

This is joint work with Moses Charikar and Yury Makarychev.


April 24, 2007, at 10:34 PM by 172.16.29.250 -
Changed lines 31-33 from:
Apr 27Piotr Indyk, MIT
May 4 
to:
Apr 27Konstantin Makarychev, Princeton
May 4Michael Mitzenmacher, Harvard
April 11, 2007, at 11:09 AM by 172.16.29.250 -
Changed line 479 from:

fact that sqrt{n} is not the right answer for this problem and

to:

fact that sqrt{n} is not the right answer for this problem as well as

April 11, 2007, at 11:07 AM by 172.16.29.250 -
Changed lines 6-10 from:

Important: new time: 11am

We are experimenting with a new format. Talks will begin at 11am in room 402, CS Department, 35 Olden St., and we will go out to lunch after that.

to:

Important: Reverting back to the old time: 11:45am

Lunch will be served at 11:45am in room 402, CS Department, 35 Olden St. The talk will begin at 12:05.

Changed line 29 from:
Apr 13Amit Agarwal, Princeton
to:
Added lines 459-484:

April 13 Improved Approximation for Directed Cut Problems

Amit Agarwal, Princeton

A fundamental problem associated with graphs is that of finding the cheapest set of edges which cuts the graph into two or more pieces. Graph cut problems have a number of important applications in divide and conquer algorithms. In the minimum multicut problem, the goal is to partition the graph so as to separate a given set of source-sink pairs.

Our main result is an improvement upon the current best O(n^{1/2}) approximation for the multicut problem in directed graphs. Our algorithm uses the natural LP relaxation for the problem. We present a randomized rounding algorithm with a more sophisticated charging scheme and analysis to obtain an O(n^{11/23}) approximation. Our result also implies an improved upper bound on the worst-case ratio between maximum multicommodity flow and minimum multicut in directed graphs.

Perhaps more interesting than the specific bound we obtain is the fact that sqrt{n} is not the right answer for this problem and the techniques we use in establishing this.

This is joint work with Noga Alon and Moses Charikar.


April 09, 2007, at 10:04 PM by 172.16.29.250 -
Changed line 30 from:
Apr 20 
to:
Apr 20No talkNew York Area Theory Day at Columbia
March 13, 2007, at 11:52 PM by 172.16.29.250 -
Changed lines 1-2 from:

Princeton Theory Lunch Schedule, Fall 2006.

to:

Princeton Theory Lunch Schedule, 2006-07.

March 13, 2007, at 11:51 PM by 172.16.29.250 -
Changed line 28 from:
Apr 6Mohit Singh, CMU
to:
Added line 371:
Added line 387:
Added line 401:
Deleted line 424:
Changed lines 427-456 from:
to:

April 6 Approximating Minimum Bounded Degree Spanning Trees to within one of Optimal

Mohit Singh, CMU

In the Minimum Bounded Degree Spanning Tree problem, we are given an edge-weighted undirected graph with a degree bound k for each vertex v, and the task is to find a spanning tree of minimum cost which satisfies all the degree bounds. In this talk I will present an efficient algorithm which returns a spanning tree in which the degree of any vertex v is at most k+1 and whose cost is at most the cost of the optimal spanning tree of maximum degree k. This settles a conjecture of Goemans affirmatively and generalizes a result of Furer and Raghavachari to weighted graphs.

The algorithm generalizes to the setting where vertices have distinct upper and lower bounds on their degrees, and returns a spanning tree of optimal cost which violates the degree bounds by an additive one.

The main technique used is the iterative rounding method introduced by Jain in the design of approximation algorithms. Our major technical contribution is to extend the ideas of this method and apply them effectively to a new domain of problems.

The talk will be self-contained.

This is joint work with Lap Chi Lau.

March 13, 2007, at 11:45 PM by 172.16.29.250 -
Changed line 27 from:
Mar 30Petros Drineas, RPI
to:
Added lines 399-427:

Mar 30 Sampling algorithms and coresets for Lp regression and applications

Petros Drineas, RPI

L2 regression, also known as the least squares approximation problem, and more generally Lp regression, are fundamental problems that have numerous applications in mathematics and statistical data analysis. Recall that the over-constrained Lp regression problem takes as input an n x d matrix A, where n > d, an n-vector b, and a number p, and it returns as output a d-vector x such that ||Ax-b||_p is minimized. Several randomized algorithms for this problem, each of which provides a relative-error approximation to the exact solution, are described. The first algorithm applies novel ``subspace sampling'' probabilities to randomly sample constraints and thus construct a coreset for the L2 regression problem. The second algorithm (devised by T. Sarlos) applies a random projection and uses a ``fast'' version of the Johnson-Lindenstrauss lemma to obtain an efficient approximation to the L2 regression problem. The third algorithm generalizes the concept of a ``well-conditioned'' basis and of ``subspace-preserving sampling'' to obtain efficiently a coreset for Lp regression, for all $p\in[1,\infty)$. Applications of these random projection and random sampling algorithms to data analysis will be briefly discussed.



March 07, 2007, at 07:15 PM by 128.112.95.79 -
Changed line 29 from:
Apr 13 
to:
Apr 13Amit Agarwal, Princeton
February 27, 2007, at 01:08 AM by 172.16.29.250 -
Changed lines 23-24 from:
Mar 2Iannis Tourlakis, Toronto[[#tourlakis|Tight integrality gaps for Vertex Cover SDPs in the Lovasz-Schrijver

hierarchy]]

to:
February 27, 2007, at 01:07 AM by 172.16.29.250 -
Changed lines 23-25 from:
Mar 2Iannis Tourlakis, Toronto
Mar 9Nir Halman, MIT
Mar 16Venkat Guruswami, U. Washington
to:
Mar 2Iannis Tourlakis, Toronto[[#tourlakis|Tight integrality gaps for Vertex Cover SDPs in the Lovasz-Schrijver

hierarchy]]

Mar 9Nir Halman, MITFully Polynomial Time Approximation Schemes for Stochastic Dynamic Programming
Mar 16Venkat Guruswami, U. WashingtonList Decoding with Optimal Data Rate
Changed lines 331-332 from:

to:

Mar 2 Tight integrality gaps for Vertex Cover SDPs in the Lovasz-Schrijver hierarchy

Iannis Tourlakis, U. Toronto

One of the most powerful techniques used for approximation algorithms is semidefinite programming relaxations. Such relaxations have led to breakthrough approximation algorithms, e.g., for Max Cut and Sparsest Cut. However, they have not had much success with Vertex Cover. A series of papers (Kleinberg-Goemans, Charikar, Hatami-Magen-Markakis) show that several natural SDP relaxations for Vertex Cover have integrality gaps of 2-o(1), giving no improvement over the naive greedy algorithm for Vertex Cover.

Do better SDPs exist for Vertex Cover? A natural class of SDPs to consider are those derived by applying the Lovasz-Schrijver LS+ lift-and-project operator to the standard LP relaxation for Vertex Cover. Repeatedly applying this operator over many rounds results in a sequence of tighter and tighter SDP relaxations where the rth round SDP is computable in time n^O(r) and where the nth SDP solves Vertex Cover exactly on n vertex graphs. Ruling out non-trivial approximations for Vertex Cover in this hierarchy rules out one of the most promising classes of algorithms for Vertex Cover and thus may give evidence on the true approximability of Vertex Cover.

Schoenebeck, Trevisan and Tulsiani recently showed that the integrality gap for Vertex Cover remains 7/6 after even linearly many rounds of LS+. However, this gap is less than the inapproximability factor of 1.36 that Dinur and Safra rule out using PCP-based techniques. Determining the exact strength of the LS+ relaxation for Vertex Cover after even 2 rounds remained open.

We show the first tight integrality gaps for Vertex Cover SDPs in the Lovasz-Schrijver hierarchy. In particular, we prove that the integrality gap remains 2-o(1) after even Omega(sqrt(log n/log log n)) rounds of LS+.

This is joint work with Konstantinos Georgiou, Avner Magen and Toniann Pitassi.


Mar 9 Fully Polynomial Time Approximation Schemes for Stochastic Dynamic Programming

Nir Halman, MIT

We develop a framework for obtaining (deterministic) Fully Polynomial Time Approximation Schemes (FPTAS) for stochastic univariate dynamic programming problems with either convex or monotone single-period cost function.

Using our framework, we give the first FPTAS for several NP-Hard problems in various fields of research.

Joint work with Diego Klabjan, Chung-Lun Li, James Orlin and David Simchi-Levi.


Mar 16 List Decoding with Optimal Data Rate

Venkatesan Guruswami, U. Washington

Suppose you want to communicate a message of k packets on a noisy communication channel. So you judiciously encode it as a redundant collection of n = ck packets and transmit these. What is the fewest number of correct packets one needs to receive in order to have any hope of recovering the message?

Well, clearly one needs at least k correct packets. In this talk, I will describe an encoding scheme that attains this information-theoretic limit: for any desired constant eps > 0, it enables recovery of the message as long as at least k(1+eps) packets are received intact. The location of the correct packets and the errors on the remaining packets can be picked adversarially by the channel. Our codes are simple to describe: they are certain *folded* Reed-Solomon codes, which are just the well known Reed-Solomon codes, but viewed as a code over a larger alphabet via a bundling together of codeword symbols.

This achieves the optimal trade-off between redundancy and error-resilience for a worst-case noise model where the channel can corrupt the transmitted symbols arbitrarily subject to a bound on the total number of errors. These results are obtained in an error-recovery model called list decoding. The talk will introduce and motivate the problem of list decoding, and then give a peek into the algebraic ideas and constructions that lead to the above result.

Based on joint work with Atri Rudra.


February 22, 2007, at 04:52 PM by 128.112.95.79 -
Changed line 25 from:
Mar 16 
to:
Mar 16Venkat Guruswami, U. Washington
February 20, 2007, at 11:07 AM by 172.16.29.250 -
Changed lines 6-10 from:

Important: New time

[+We are experimenting with a new format. Talks will begin at 11am in room 402, CS Department, 35 Olden St., and we will go out to lunch after that.+]

to:

Important: new time: 11am

We are experimenting with a new format. Talks will begin at 11am in room 402, CS Department, 35 Olden St., and we will go out to lunch after that.

February 20, 2007, at 11:06 AM by 172.16.29.250 -
Changed lines 6-7 from:

Important

to:

Important: New time

February 20, 2007, at 11:06 AM by 172.16.29.250 -
Changed lines 8-10 from:

'''We are experimenting with a new format. Talks will begin at 11am in room 402, CS Department, 35 Olden St., and we will go out to lunch after that.'''

to:

[+We are experimenting with a new format. Talks will begin at 11am in room 402, CS Department, 35 Olden St., and we will go out to lunch after that.+]

February 20, 2007, at 11:04 AM by 172.16.29.250 -
Changed lines 6-8 from:

(All events held at 12 noon, Room 402, CS Department, 35 Olden St., unless stated otherwise)

to:

Important

'''We are experimenting with a new format. Talks will begin at 11am in room 402, CS Department, 35 Olden St., and we will go out to lunch after that.'''

February 19, 2007, at 10:43 AM by 172.16.29.250 -
Changed line 26 from:
Apr 6 
to:
Apr 6Mohit Singh, CMU
February 16, 2007, at 02:27 PM by 128.112.95.79 -
Changed lines 319-320 from:

a, are simultaneously invariant to translation and rotation; b, are sufficient to reconstruct the original image with no loss (up to a bandwidth limit);

to:

a, are simultaneously invariant to translation and rotation;
b, are sufficient to reconstruct the original image with no loss (up to a bandwidth limit);\\

February 16, 2007, at 02:26 PM by 128.112.95.79 -
Changed line 20 from:
Feb 23Risi Kondor, Columbia
to:
Added lines 313-329:

Feb 23 A complete set of rotationally and translationally invariant features based on a generalization of the bispectrum to non-commutative groups

Risi Kondor, Columbia University

Deriving translation and rotation invariant representations is a fundamental problem in computer vision with a substantial literature. I propose a new set of features which

a, are simultaneously invariant to translation and rotation; b, are sufficient to reconstruct the original image with no loss (up to a bandwidth limit); c, do not involve matching with a template image or any similar discontinuous operation.

The new features are based on Kakarala's generalization of the bispectrum to compact Lie groups and a projection onto the sphere. I validated the method on a handwritten digit recognition dataset with randomly translated and rotated digits.

paper: http://arxiv.org/abs/cs.CV/0701127


February 08, 2007, at 05:21 AM by 172.16.29.250 -
Added line 13:
Changed lines 18-20 from:
Feb 9Satyen Kale, Princeton.A Combinatorial, Primal-Dual Approach to Semidefinite Programs
Feb 16Esther Ezra, Tel Aviv UnivOn Translational Motion Planning in 3-Space
Feb 23 
to:
Feb 9Satyen Kale, PrincetonA Combinatorial, Primal-Dual Approach to Semidefinite Programs
Feb 16Esther Ezra, Tel Aviv UnivOn Translational Motion Planning in 3-Space
Feb 23Risi Kondor, Columbia
Added line 34:
Added lines 266-312:

Feb 9 A Combinatorial, Primal-Dual Approach to Semidefinite Programs

Satyen Kale, Princeton University

Algorithms based on convex optimization, especially linear and semidefinite programming, are ubiquitous in Computer Science. While there are polynomial time algorithms known to solve such problems, quite often the running time of these algorithms is very high. Designing simpler and more efficient algorithms is important for practical impact.

In my talk, I will describe applications of a Lagrangian relaxation technique, the Multiplicative Weights Update method, in the design of efficient algorithms for various optimization problems. We generalize the method to the setting of symmetric matrices rather than real numbers. The new algorithm yields the first truly general, combinatorial, primal-dual method for designing efficient algorithms for semidefinite programming. Using these techniques, we obtain significantly faster algorithms for approximating the Sparsest Cut and Balanced Separator in both directed and undirected weighted graphs to factors of O(log n) and O(sqrt{log n}), and also for the Min UnCut problem. The algorithm also has applications in quantum computing and derandomization.

This is joint work with Sanjeev Arora.


Feb 16 On Translational Motion Planning in 3-Space

Esther Ezra, Tel-Aviv University

Constructing the union of geometric objects and bounding its combinatorial complexity is a fundamental problem in computational and combinatorial geometry. A major application of this problem is to translational motion planning, where a polyhedral robot $R$ translates in a fixed polyhedral environment, and one wishes to compute all placements of $R$ (known as free placements) at which it avoids collision with any obstacle.

In this talk, I will give a brief overview of the area, and explain its importance. I will then discuss the union problem, where the input consists of $n$ ``fat'' tetrahedra in 3-space, of arbitrary sizes, and show that in this case the combinatorial complexity of the union is $O(n^{2+\eps})$, for any $\eps > 0$, thus almost settling a conjecture of Pach et al. Our bound is the first (non-trivial) subcubic bound presented for this problem, and is almost tight in the worst case. An immediate consequence of this result is that the combinatorial complexity of the union of $n$ cubes in 3-space, arbitrarily aligned and of arbitrary sizes, is also $O(n^{2+\eps})$, for any $\eps > 0$.

The talk will be self-contained.

Joint work with Micha Sharir.


February 08, 2007, at 05:02 AM by 172.16.29.250 -
Deleted line 53:
February 08, 2007, at 05:01 AM by 172.16.29.250 -
Changed lines 20-21 from:
Mar 2Iannis Tourlakis, Toronto
Mar 9Nir Halman, MIT
to:
Mar 2Iannis Tourlakis, Toronto
Mar 9Nir Halman, MIT
Changed lines 31-32 from:
to:

February 08, 2007, at 05:00 AM by 172.16.29.250 -
Added lines 13-14:

Spring 2007

Added lines 17-35:
Feb 9Satyen Kale, Princeton.A Combinatorial, Primal-Dual Approach to Semidefinite Programs
Feb 16Esther Ezra, Tel Aviv UnivOn Translational Motion Planning in 3-Space
Feb 23 
Mar 2Iannis Tourlakis, Toronto
Mar 9Nir Halman, MIT
Mar 16 
Mar 23No talkSpring break
Mar 30Petros Drineas, RPI
Apr 6 
Apr 13 
Apr 20 
Apr 27Piotr Indyk, MIT
May 4 

Fall 2006

DateSpeakerTitle
Changed lines 48-49 from:
Jan 19Michael Schapira, Hebrew UniversitySetting Lower Bounds on Truthfulness
to:
Jan 19Michael Schapira, Hebrew UniversitySetting Lower Bounds on Truthfulness
January 17, 2007, at 10:40 AM by 172.16.29.250 -
Changed lines 26-27 from:
Thu, Dec 7, 4pmMikkel Thorup, At&T Labs-ResearchCompact Oracles for Approximate Distances around Obstacles in the Plane
to:
Thu, Dec 7, 4pmMikkel Thorup, At&T Labs-ResearchCompact Oracles for Approximate Distances around Obstacles in the Plane
Jan 19Michael Schapira, Hebrew UniversitySetting Lower Bounds on Truthfulness
Added lines 215-241:

Jan 19 Setting Lower Bounds on Truthfulness

Michael Schapira, Hebrew University

We present and discuss general techniques for proving inapproximability results for truthful mechanisms. We make use of these techniques to prove lower bounds on the approximability of several non-utilitarian multi-parameter problems.

In particular, we demonstrate the strength of our techniques by exhibiting a lower bound of $2-\frac{1}{m}$ for the scheduling problem with unrelated machines (formulated as a mechanism design problem in the seminal paper of Nisan and Ronen on Algorithmic Mechanism Design). Our lower bound applies to truthful randomized mechanisms (disregarding any computational assumptions on the running time of these mechanisms). Moreover, it holds even for the weaker notion of truthfulness for randomized mechanisms -- i.e., truthfulness in expectation. This lower bound nearly matches the known $\frac{7}{4}$ (randomized) truthful upper bound for the case of two machines (a non-truthful FPTAS exists). No lower bound for truthful randomized mechanisms in multi-parameter settings was previously known.

Based on joint work with Ahuva Mu'alem.


December 05, 2006, at 01:07 AM by 172.16.29.250 -
Changed lines 26-27 from:
to:
Thu, Dec 7, 4pmMikkel Thorup, At&T Labs-ResearchCompact Oracles for Approximate Distances around Obstacles in the Plane
December 05, 2006, at 01:06 AM by 172.16.29.250 -
Changed lines 26-27 from:
to:
Added lines 202-214:

Dec 7 Compact Oracles for Approximate Distances around Obstacles in the Plane

Mikkel Thorup, AT&T Labs-Research

Consider the Euclidean plane with arbitrary polygonal obstacles with a total of n corners. For any fixed a>0, we construct an oracle in near-linear time and space. Given any pair of points, the oracle approximates the obstacle-avoiding distance within a factor (1+a) in O(log n) time. To get such distances exactly, the best current space bound is O(n^{11}) [Chang Mitchell SODA'99].


November 28, 2006, at 11:26 PM by 172.16.29.250 -
Changed lines 25-26 from:
Dec 1Sanjeev Khanna, U PennTBA
to:
Dec 1Sanjeev Khanna, U PennDisjoint Paths in Networks
Changed lines 190-191 from:

Dec 1 TBA

to:

Dec 1 Disjoint Paths in Networks

Added lines 194-199:

A fundamental problem in combinatorial optimization is the edge-disjoint paths problem (EDP). We are given a network and a collection of source-destination pairs in the network. The goal is to maximize the number of pairs that can be connected by edge-disjoint paths (i.e., the paths cannot share any edges). Even special cases of EDP correspond to non-trivial optimization problems, and the problem becomes NP-hard in very restricted settings. A well-studied approach for designing approximation algorithms for EDP is to use the multicommodity flow relaxation. In this talk, we will describe some recent progress on understanding the integrality gap of the multicommodity flow relaxation as well as present some new hardness of approximation results for EDP and related problems in directed graphs.

November 15, 2006, at 07:59 AM by 172.16.29.250 -
Changed line 23 from:
Nov 17Vitaly Feldman, HarvardTBA
to:
Changed lines 172-173 from:

Nov 17 TBA

to:

Nov 17 On Computational Hardness of Agnostic Learning

Added lines 176-187:

The agnostic learning framework of Kearns, Schapire and Sellie is an extension of Valiant's PAC learning model, in which the examples given to the learning algorithm are labelled by an unknown and unrestricted (and hence adversarial) function. An agnostic learning algorithm for a class of functions $C$ is required to produce a hypothesis whose error (with respect to the distribution of examples) is "close" to the best achievable by a function from $C$. A large number of questions regarding the learnability of even the simplest function classes in this natural model have remained open since the introduction of the model in 1992.

In this talk I will show that agnostic learning of parities with respect to the uniform distribution reduces to learning of parities with random classification noise. Learning with random noise is a substantially more tractable model and, in particular, using a noise-tolerant parity learning algorithm by Blum, Kalai and Wasserman we obtain the first non-trivial algorithm for agnostic learning of parities with respect to the uniform distribution. The reduction also shows that learning of juntas and DNF expressions with respect to the uniform distribution can be reduced to learning parities with random classification noise.

The second problem we will discuss is the problem of weak agnostic learning of monomials by a proper learning algorithm (that is, an algorithm that produces a monomial as its hypothesis) formulated by Avrim Blum. We solve this open problem by showing that it is NP-hard. In other words, we prove that finding a monomial that is consistent with $1/2+\epsilon$ fraction of examples is NP-hard even when there exists a monomial consistent with $1-\epsilon$ fraction of examples (for any constant $\epsilon$). The result can also be interpreted as an optimal hardness of approximation result for maximizing agreements with a monomial. It substantially improves on previous hardness of approximation results for this problem (due to Ben-David et al., and Bshouty and Burroughs).

The talk will be largely self-contained and both results will be also interpreted outside of the learning context. Parts of this talk are based on joint work with Parikshit Gopalan, Subhash Khot and Ashok Ponnuswami.

October 16, 2006, at 11:17 PM by 172.16.29.250 -
Changed lines 19-20 from:
Oct 20Eric Allender, RutgersTBA
Oct 27 
to:
Oct 20Eric Allender, RutgersOn the Complexity of Numerical Analysis (update)
Oct 27No talk
Changed lines 149-152 from:

Oct 20 TBA

Eric Allender, Rutgers

to:

Oct 20 On the Complexity of Numerical Analysis (update)

Eric Allender (Chair, Rutgers Dept. of Computer Science)

The Euclidean Traveling Salesman Problem is a well-known NP-hard problem, but some are surprised to learn that it is not known to lie in NP. This lecture will present a new upper bound on the complexity of this problem: it lies in the "counting hierarchy". The proof falls out of some more general considerations concerning arithmetic circuits, by way of a detour through the field of computation of the Real Numbers in the Blum-Shub-Smale model.

As a recent development, we also discuss some new connections between derandomization of Arithmetic Circuit Identity Testing and some arithmetic circuit questions in the Blum-Shub-Smale model. This is joint work with Peter Buergisser, Johan Kjeldgaard-Pedersen, and Peter Bro Miltersen, and was presented at the 2006 IEEE Conference on Computational Complexity; it is available on my home page. I will also discuss some more recent work by Peter Buergisser.

October 02, 2006, at 05:33 AM by 172.16.29.250 -
Changed line 17 from:
Oct 6Vijay V. Vazirani, Georgia TechAdwords and Generalized Online Matching
to:
Oct 6Vijay V. Vazirani, Georgia TechNew Market Models and Algorithms II
Changed lines 76-77 from:

Oct 6 Adwords and Generalized Online Matching

to:

Oct 6 New Market Models and Algorithms II

Changed lines 80-95 from:

Internet search engine companies, such as Google, Yahoo and MSN, have revolutionized not only the use of the Internet by individuals but also the way businesses advertise to consumers. The latter is also responsible for their dramatic revenues, and their ability to provide their services for free.

I will talk about this latter innovation and the underlying computational problem it raises. This problem is an elegant generalization of the online bipartite matching problem. We give an optimal algorithm for this problem achieving a competitive ratio of 1-1/e. The design of this algorithm is made possible by the new notion of tradeoff-revealing linear programs.

This talk is based on the following paper:

to:

The notion of a ``market'' has undergone a paradigm shift with the Internet -- totally new and highly successful markets have been defined and launched by companies such as Google, Yahoo!, Amazon, MSN and Ebay. Another major change is the availability of massive computational power for running these markets in a centralized or distributed manner.

In view of these new realities, the study of market equilibria, an important, though essentially non-algorithmic, theory within Mathematical Economics, needs to be revived and rejuvenated with an inherently algorithmic approach. Such a theory should not only address traditional market models but also define new models for some of the new markets.

In this two-talk series, I will give a feel for the exciting work going on on this front. Interestingly enough, this work has also contributed handsomely to the theory of algorithms itself. In particular, the highly successful primal-dual schema from exact and approximation algorithms, which has so far been used for combinatorially solving special classes of linear programs, has been extended to solving nonlinear convex programs.

Both talks are self-contained; the first is meant for a general audience and will be the Princeton CS department colloquium on Wednesday, Oct 4. The second will be at the Princeton theory lunch on Friday, Oct 6:

http://www.cs.princeton.edu/theory/index.php/Main/TheoryLunch

1). Resource Allocation Markets

This talk is based on the following three papers:

Changed lines 112-113 from:

(Joint work with Aranyak Mehta, Amin Saberi and Umesh Vazirani)

to:

http://www-static.cc.gatech.edu/fac/Vijay.Vazirani/EG.pdf

http://www-static.cc.gatech.edu/fac/Vijay.Vazirani/EG2.pdf

2). Spending Constraint Utilities, with Applications to the Adwords Market

This talk is based on:

http://www-static.cc.gatech.edu/fac/Vijay.Vazirani/spending.pdf

September 29, 2006, at 02:54 PM by 128.112.95.79 -
Changed line 17 from:
Oct 6 | Vijay V. Vazirani, Georgia Tech | New Market Models and Algorithms
to:
Oct 6 | Vijay V. Vazirani, Georgia Tech | Adwords and Generalized Online Matching
Changed lines 76-77 from:

Oct 6 New Market Models and Algorithms

to:

Oct 6 Adwords and Generalized Online Matching

Changed lines 80-112 from:

The notion of a ``market'' has undergone a paradigm shift with the Internet -- totally new and highly successful markets have been defined and launched by companies such as Google, Yahoo!, Amazon, MSN and eBay. Another major change is the availability of massive computational power for running these markets in a centralized or distributed manner.

In view of these new realities, the study of market equilibria, an important, though essentially non-algorithmic, theory within Mathematical Economics, needs to be revived and rejuvenated with an inherently algorithmic approach. Such a theory should not only address traditional market models but also define new models for some of the new markets.

In this two-talk series, I will give a feel for the exciting work under way on this front. Interestingly enough, this work has also contributed handsomely to the theory of algorithms itself. In particular, the highly successful primal-dual schema from exact and approximation algorithms, which had so far been used for combinatorially solving special classes of linear programs, has been extended to solving nonlinear convex programs.

Both talks are self-contained; the first is meant for a general audience and will be the Princeton CS department colloquium on Wednesday, Oct 4. The second will be at the Princeton theory lunch on Friday, Oct 6.

1). Resource Allocation Markets, based on

AdWords and Generalized On-line Matching; Eisenberg-Gale Markets I; Eisenberg-Gale Markets II

2). Spending Constraint Utilities with Applications to the AdWords Market

to:

Internet search engine companies, such as Google, Yahoo and MSN, have revolutionized not only the use of the Internet by individuals but also the way businesses advertise to consumers. The latter is also responsible for their dramatic revenues, and their ability to provide their services for free.

I will talk about this latter innovation and the underlying computational problem it raises. This problem is an elegant generalization of the online bipartite matching problem. We give an optimal algorithm for this problem achieving a competitive ratio of 1-1/e. The design of this algorithm is made possible by the new notion of tradeoff-revealing linear programs.

This talk is based on the following paper:

http://www-static.cc.gatech.edu/fac/Vijay.Vazirani/adwords.pdf

(Joint work with Aranyak Mehta, Amin Saberi and Umesh Vazirani)
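
To make the allocation rule concrete, here is a minimal Python sketch of the paper's bid-scaling idea as I understand it: each advertiser's bid is multiplied by a tradeoff function psi(f) = 1 - e^(f-1) of the fraction f of its budget already spent, and the query goes to the highest scaled bid. All names are illustrative, and details such as bids that are large relative to budgets are glossed over.

    import math

    def msvv_allocate(budgets, query_bids):
        # Scale each bid by psi(f) = 1 - e^(f-1), where f is the fraction of
        # the advertiser's budget already spent, and award the query to the
        # highest scaled bid.  (Simplified: bids assumed small vs. budgets.)
        spent = {a: 0.0 for a in budgets}
        revenue = 0.0
        for bids in query_bids:                     # bids: {advertiser: bid}
            live = {a: b for a, b in bids.items() if spent[a] + b <= budgets[a]}
            if not live:
                continue
            psi = lambda f: 1.0 - math.exp(f - 1.0)
            winner = max(live, key=lambda a: live[a] * psi(spent[a] / budgets[a]))
            spent[winner] += live[winner]
            revenue += live[winner]
        return revenue, spent

    # two advertisers with budget 2, a stream of four identical queries
    print(msvv_allocate({"A": 2.0, "B": 2.0}, [{"A": 1.0, "B": 1.0}] * 4))

The scaling makes the algorithm prefer advertisers that still have most of their budget left, which is what drives the 1-1/e analysis.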

September 24, 2006, at 01:36 PM by 128.112.95.79 -
Changed line 18 from:
Oct 13 | Nick Harvey, MIT | TBA
to:
Changed lines 115-116 from:

Oct 13 TBA

to:

Oct 13 Algebraic Structures and Algorithms for Matching and Matroid Problems

Added lines 119-135:

The task of finding a maximum matching in a graph is one of the classical problems in the theory of algorithms, inspiring the definition of the class P. Until recently, the fastest algorithm for this task took O(n^2.5) steps in an n-vertex (dense) graph. But an exciting development due to Mucha and Sankowski (FOCS '04) dropped the running time to O(n^w), where w < 2.38 is the exponent of matrix multiplication. However, their result was quite involved, relying on certain structural decompositions of graphs and intricate data structures to maintain certain properties dynamically.

In this talk, I will present two new results: The first is a simple randomized algorithm achieving the same result, allowing for a self-contained presentation of this result. The second is an extension of this algorithm to the task of matroid intersection (for linear matroids over any field).
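
The classical algebraic tool behind this line of work, due to Lovász, is that the rank of a randomly substituted Tutte matrix equals twice the maximum matching size with high probability. A minimal sketch of that fact (not Harvey's algorithm itself; it works over floating point for brevity, where a careful implementation would compute rank over a finite field):

    import random
    import numpy as np

    def matching_size(n, edges):
        # Tutte matrix with random entries: T[u,v] = x_e, T[v,u] = -x_e.
        # Its rank is twice the maximum matching size w.h.p. (Lovasz).
        T = np.zeros((n, n))
        for (u, v) in edges:
            x = random.randint(1, 10**6)
            T[u, v], T[v, u] = x, -x
        return np.linalg.matrix_rank(T) // 2

    # a path on 4 vertices has a perfect matching
    print(matching_size(4, [(0, 1), (1, 2), (2, 3)]))   # 2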

September 18, 2006, at 05:05 PM by 128.112.95.79 -
Changed line 20 from:
Oct 27 | No talk | Fall break
to:
Oct 27 
Changed line 22 from:
Nov 10 | DIMACS mixer at Princeton
to:
Nov 10 | DIMACS mixer at Princeton
September 18, 2006, at 04:08 PM by 128.112.95.79 -
Changed lines 1-2 from:

Princeton Theory Lunch Schedule, 2005-06.

to:

Princeton Theory Lunch Schedule, Fall 2006.

September 18, 2006, at 04:05 PM by 128.112.95.79 -
Changed lines 15-45 from:
Sept 9 | Adi Avidor, Tel Aviv University | Rounding two and three dimensional solutions of the SDP relaxation of MAX CUT
Sept 16 | Boaz Barak, Princeton | How To Play Almost Any Mental Game Over The Net -- Concurrent composition via Super-Polynomial Simulation
Sept 23 | No theory lunch owing to faculty meeting
Sept 30 | Elliot Anshelevich | The Price of Stability for Network Design
Oct 7 | Nir Ailon | Aggregating Inconsistent Information: Ranking and Clustering
Oct 14
Oct 21
Oct 28 | Eyal Ackerman, Technion
Nov 4
Nov 11 | Paulo Zuliani, Princeton | Quantum Programming
Nov 18 | No theory lunch
Nov 25 | Shengyu Zhang, Princeton
Dec 2 | Elad Hazan, Princeton | New algorithms for Online Game Playing and Universal Portfolio Management
Dec 9 | Benny Sudakov, Princeton | Additive Approximation for Edge-Deletion Problems
Jan 20 | Liam Roditty, Tel Aviv University
Feb 10 | Jon Kelner, MIT
March 3 | Danny Harnik, Technion | On the Compressibility of NP Instances (for the Future) and (Today's) Cryptographic Applications
March 10 | Ravi Kannan, Yale | Matrix-valued Random Variables, Clustering
March 17 | Shengyu Zhang, Princeton
March 24 | No talk planned (spring break)
March 31 | Vladlen Koltun, Stanford
Apr 7 | Elchanan Mossel, UC Berkeley
Apr 14 | Ashish Goel, Stanford
Apr 21 | Laszlo Lovasz
Apr 28 | No lunch; NYU/Columbia Theory Day
May 5 | Tal Malkin, Columbia
May 12 | Michel Goemans, MIT | Bounded Degree Minimum Spanning Trees
June 30 at 2pm | Esther Ezra, Tel Aviv University | On Translational Motion Planning in 3-Space
to:
Sep 22 | Nikhil Bansal, IBM TJ Watson | The Santa Claus Problem
Sep 29 | Martin J. Strauss, Michigan/Princeton | Sublinear-time Heavy Hitters with Universal Guarantees
Oct 6 | Vijay V. Vazirani, Georgia Tech | New Market Models and Algorithms
Oct 13 | Nick Harvey, MIT | TBA
Oct 20 | Eric Allender, Rutgers | TBA
Oct 27 | No talk | Fall break
Nov 3 | No talk | Fall break
Nov 10 | DIMACS mixer at Princeton
Nov 17 | Vitaly Feldman, Harvard | TBA
Nov 24 | No talk | Thanksgiving holiday
Dec 1 | Sanjeev Khanna, U Penn | TBA
Changed lines 34-66 from:

Sept 9 Rounding two and three dimensional solutions of the SDP relaxation of MAX CUT

Adi Avidor, Tel Aviv University

Goemans and Williamson obtained an approximation algorithm for the MAX CUT problem with a performance ratio of $\alpha_{GW} \approx 0.87856$. Their algorithm starts by solving a standard SDP relaxation of MAX CUT and then rounds the optimal solution obtained using a random hyperplane. In some cases, the optimal solution of the SDP relaxation happens to lie in a low dimensional space. Can an improved performance ratio be obtained for such instances? We show that the answer is yes in dimensions two and three, and conjecture that this is also the case in any higher fixed dimension. In two dimensions an optimal $32/(25+5\sqrt{5})$ approximation algorithm was already obtained by Goemans. (Note that $32/(25+5\sqrt{5}) \approx 0.88456$.) We obtain an alternative derivation of this result using Gegenbauer polynomials. Our main result is an improved rounding procedure for SDP solutions that lie in $R^3$, with a performance ratio of about $0.8818$. The rounding procedure uses an interesting yin-yang coloring of the three dimensional sphere. The improved performance ratio obtained resolves, in the negative, an open problem posed by Feige and Schechtman [STOC'01]. They asked whether there are MAX CUT instances with integrality ratios arbitrarily close to $\alpha_{GW} \approx 0.87856$ that have optimal embeddings, i.e., optimal solutions of their SDP relaxations, that lie in $R^3$.

Joint work with Uri Zwick
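
For reference, the random-hyperplane rounding step described above is easy to state in code. The sketch below assumes the SDP has already been solved and its unit vectors are given (here in $R^3$, matching the setting of the talk); it simply keeps the best cut found over several random hyperplanes.

    import numpy as np

    def hyperplane_cut(vectors, edges, trials=100):
        # vectors: one unit vector per vertex (an SDP solution, e.g. in R^3);
        # each trial cuts by the sign pattern against a random Gaussian normal.
        best = 0
        for _ in range(trials):
            r = np.random.randn(vectors.shape[1])
            side = vectors @ r > 0
            cut = sum(side[u] != side[v] for (u, v) in edges)
            best = max(best, cut)
        return best

    # a triangle embedded at mutual angle 120 degrees: the classical
    # optimal MAX CUT embedding, placed in R^3
    v = np.array([[1, 0, 0], [-0.5, np.sqrt(3)/2, 0], [-0.5, -np.sqrt(3)/2, 0]])
    print(hyperplane_cut(v, [(0, 1), (1, 2), (2, 0)]))   # 2 with high probability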

to:

Sept 22 The Santa Claus Problem

Nikhil Bansal, IBM TJ Watson

Suppose Santa Claus has n presents that he wants to distribute among m kids. Let $p_{ij}$ denote the happiness that kid i receives from present j. Santa's goal is to distribute the presents among the kids in such a way that the least lucky kid is as happy as possible; that is, to maximize $\min_{i=1,\dots,m} \sum_{j \in S_i} p_{ij}$, where $S_i$ is the set of presents received by the i-th kid.

The problem arises naturally in the context of fair allocation of goods. However, it remains poorly understood from the point of view of approximability. I will survey the known results, and describe some approximation algorithms for certain special cases of the problem.

Joint work with Maxim Sviridenko.
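
To make the max-min objective concrete, here is a brute-force Python sketch (names are illustrative; the search is exponential in the number of presents, so tiny instances only):

    from itertools import product

    def max_min_happiness(p):
        # p[i][j]: happiness of kid i for present j.  Try every assignment of
        # presents to kids and maximize the least happy kid's total.
        m, n = len(p), len(p[0])
        best = 0
        for assign in product(range(m), repeat=n):   # assign[j] = kid getting j
            totals = [0] * m
            for j, i in enumerate(assign):
                totals[i] += p[i][j]
            best = max(best, min(totals))
        return best

    # 4: kid 0 gets presents 0 and 2, kid 1 gets present 1
    print(max_min_happiness([[3, 1, 2], [1, 4, 1]]))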

Changed lines 55-81 from:

Sept 16 How To Play Almost Any Mental Game Over The Net - Concurrent Composition via Super-Polynomial Simulation

Boaz Barak, Princeton University

We consider the problem of designing a secure protocol for ANY multi-party functionality (for example, electronic auctions, electronic elections, privacy-preserving data mining), that remains secure even when executed concurrently in the network with many copies of itself and other protocols. Previous such constructions either required a third trusted party or used highly non-standard computational assumptions.

Under reasonably standard computational assumptions, we construct a protocol for this problem that remains secure (under a relaxed definition of security) even in the concurrent, asynchronous setting. The relaxation of security we consider is simulation in quasi-polynomial (as opposed to polynomial) time. This relaxation seems to suffice to obtain protocols for any task whose privacy, integrity and input independence cannot be broken by efficient adversaries.

Joint work with Amit Sahai (UCLA). An extended abstract of this work will appear in FOCS 2005.

to:

Sept 29 Sublinear-time Heavy Hitters with Universal Guarantees

Martin J. Strauss, Michigan/Princeton

In the heavy hitters problem, the goal is to track the most frequent items in a changing collection, using little space. Over the last decade, a number of algorithms have been proposed for this problem. We present the first algorithms that simultaneously satisfy three important criteria: (i) the space is optimal, up to log factors; (ii) the reconstruction time is polynomial in the number of heavy hitters and polylogarithmic in the universe size; (iii) a single set of measurements (constructed randomly) works for all collections of items.

Joint work with Anna Gilbert, Joel Tropp, and Roman Vershynin.

The talk is based on two papers. One is available as http://arxiv.org/abs/cs.DS/0608079 and the other is in progress.
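
For readers new to the area, a standard baseline helps fix ideas. The sketch below is the classical Count-Min sketch of Cormode and Muthukrishnan, not the construction from the talk; in particular, it does not by itself give the sublinear-time reconstruction or the universal guarantee described above.

    import random

    class CountMin:
        # Standard Count-Min sketch: d rows of w counters, one hash per row.
        # Estimates never undercount; the overcount is small w.h.p.
        def __init__(self, w=1024, d=5):
            self.w, self.d = w, d
            self.rows = [[0] * w for _ in range(d)]
            self.salts = [random.random() for _ in range(d)]

        def update(self, item, count=1):
            for row, s in zip(self.rows, self.salts):
                row[hash((s, item)) % self.w] += count

        def estimate(self, item):
            return min(row[hash((s, item)) % self.w]
                       for row, s in zip(self.rows, self.salts))

    cm = CountMin()
    for x in ["a"] * 1000 + ["b"] * 10:
        cm.update(x)
    print(cm.estimate("a"))   # close to 1000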

Changed lines 76-109 from:

Sept 30 The Price of Stability for Network Design

E. Anshelevich

Network design is a fundamental problem for which it is important to understand the effects of strategic behavior. Given a collection of self-interested agents who want to form a network connecting certain endpoints, the set of stable solutions (the Nash equilibria) may look quite different from the centrally enforced optimum. We study the price of stability, i.e. the quality of the best Nash equilibrium compared to the optimum network cost. The best Nash equilibrium solution has a natural meaning of stability in this context: it is the optimal solution that can be proposed from which no user will "deviate".

We consider two versions of this game: one where agents may divide the cost of the edges they use in any manner they desire, and one where the cost of each such edge is divided equally between the agents whose connections make use of it. In the first version, determining whether or not a Nash equilibrium exists is NP-complete. However, when the goal of each player is to connect a terminal to a common source, we prove that there is a Nash equilibrium as cheap as the optimal network, and give a polynomial time algorithm to find a (1+epsilon)-approximate Nash equilibrium that does not cost much more. In the second version, however, a Nash equilibrium always exists and can be achieved via best-response dynamics. In this version we can show a tight bound of O(log k) on the price of stability (where k is the number of agents). I will discuss these results and possibly mention some extensions as well.

This is joint work with: Anirban Dasgupta, Jon Kleinberg, Eva Tardos, Tom Wexler, and Tim Roughgarden
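
The equal-split version lends itself to a toy simulation. In the Python sketch below (the explicit candidate-path representation and all names are mine, not the paper's), each player pays an equal share of every edge it uses, and players make unilateral improving moves until none exists; the potential-function argument guarantees this terminates at a Nash equilibrium.

    def share(i, c, choice, paths, cost):
        # Cost player i would pay if it alone used its candidate path c:
        # each edge's cost is split equally among its users.
        total = 0.0
        for e in paths[i][c]:
            users = 1 + sum(1 for j, cj in enumerate(choice)
                            if j != i and e in paths[j][cj])
            total += cost[e] / users
        return total

    def best_response(paths, cost, choice):
        # Unilateral improving moves until a Nash equilibrium is reached.
        moved = True
        while moved:
            moved = False
            for i in range(len(paths)):
                c = min(range(len(paths[i])),
                        key=lambda c: share(i, c, choice, paths, cost))
                if (share(i, c, choice, paths, cost)
                        < share(i, choice[i], choice, paths, cost) - 1e-12):
                    choice[i], moved = c, True
        return choice

    # two players: a shared edge of cost 2 versus private edges of cost 3
    paths = [[{"shared"}, {"own0"}], [{"shared"}, {"own1"}]]
    cost = {"shared": 2.0, "own0": 3.0, "own1": 3.0}
    print(best_response(paths, cost, [1, 1]))   # [0, 0]: both share the cheap edge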

to:

Oct 6 New Market Models and Algorithms

Vijay V. Vazirani, Georgia Tech

The notion of a ``market'' has undergone a paradigm shift with the Internet -- totally new and highly successful markets have been defined and launched by companies such as Google, Yahoo!, Amazon, MSN and eBay. Another major change is the availability of massive computational power for running these markets in a centralized or distributed manner.

In view of these new realities, the study of market equilibria, an important, though essentially non-algorithmic, theory within Mathematical Economics, needs to be revived and rejuvenated with an inherently algorithmic approach. Such a theory should not only address traditional market models but also define new models for some of the new markets.

In this two-talk series, I will give a feel for the exciting work under way on this front. Interestingly enough, this work has also contributed handsomely to the theory of algorithms itself. In particular, the highly successful primal-dual schema from exact and approximation algorithms, which had so far been used for combinatorially solving special classes of linear programs, has been extended to solving nonlinear convex programs.

Both talks are self-contained; the first is meant for a general audience and will be the Princeton CS department colloquium on Wednesday, Oct 4. The second will be at the Princeton theory lunch on Friday, Oct 6.

1). Resource Allocation Markets, based on

AdWords and Generalized On-line Matching; Eisenberg-Gale Markets I; Eisenberg-Gale Markets II

2). Spending Constraint Utilities with Applications to the AdWords Market

Changed lines 115-147 from:

Oct 7 Aggregating Inconsistent Information: Ranking and Clustering

Nir Ailon, Princeton.

A ranking of n web pages is to be chosen from the outputs of k search engines. How do we choose one ranking minimizing the "disagreement" with the k rankings? A clustering of n genes is to be chosen from the outputs of k clustering algorithms. How do we choose one clustering minimizing the "disagreement" with the k clusterings?

These information aggregation problems date back to as early as 1785, when the French philosopher Condorcet considered voting systems where each voter chooses a full ranking of a set of candidates. Recently, their computational aspects have been studied.

In this talk, I will describe new algorithms improving on the best known approximation ratios for both the ranking and the clustering problems with respect to a standard measure of disagreement. I will also discuss related graph theoretic optimization problems, known as the minimum feedback arc set in tournaments and the correlation clustering in complete graphs. Additionally, I will show that the problem of finding a minimum feedback arc set in tournaments has no poly-time algorithm, assuming NP is not contained in BPP. This almost settles a long-standing conjecture of Bang-Jensen and Thomassen, that the problem is NP-hard.

This is joint work with Moses Charikar and Alantha Newman.
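
For rankings, the standard disagreement measure counts pairs ordered differently (the Kendall tau distance). Here is a small sketch of the objective, with a brute-force aggregator included only to make the optimization problem concrete (the talk's algorithms are far more efficient approximations):

    from itertools import combinations, permutations

    def kendall_tau(r1, r2):
        # Pairs of candidates that the two rankings order differently.
        pos1 = {c: i for i, c in enumerate(r1)}
        pos2 = {c: i for i, c in enumerate(r2)}
        return sum((pos1[a] < pos1[b]) != (pos2[a] < pos2[b])
                   for a, b in combinations(r1, 2))

    def kemeny_aggregate(voters):
        # Brute force: the ranking minimizing total disagreement.
        return min(permutations(voters[0]),
                   key=lambda r: sum(kendall_tau(r, v) for v in voters))

    votes = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
    print(kemeny_aggregate(votes))   # ('a', 'b', 'c'), total disagreement 2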

to:

Oct 13 TBA

Nick Harvey, MIT

Changed lines 121-155 from:

Oct 21 Additive Approximation for Edge-Deletion Problems

Benny Sudakov, Princeton

A graph property is monotone if it is closed under removal of vertices and edges. In this talk we consider the following edge-deletion problem: given a monotone property P and a graph G, compute the smallest number of edge deletions that are needed in order to turn G into a graph satisfying P. We denote this quantity by $E_P(G)$. Our first result states that for any monotone graph property P, any $\epsilon > 0$ and n-vertex input graph $G$, one can approximate $E_P(G)$ up to an additive error of $\epsilon n^2$.

Given the above, a natural question is for which monotone properties one can obtain better additive approximations of $E_P(G)$.

Our second main result essentially resolves this problem by giving a precise characterization of the monotone graph properties for which such approximations exist. We will show that for any dense monotone property, that is, a property for which there are graphs on n vertices with $\Omega(n^2)$ edges that satisfy it, it is NP-hard to approximate $E_P(G)$ up to an additive error of $n^{2-\delta}$, for any fixed positive $\delta$.

The proof requires several new ideas and involves tools from Extremal Graph Theory together with spectral techniques. Interestingly, prior to this work it was not even known that computing $E_P(G)$ precisely for dense monotone properties is NP-hard. We thus answer (in a strong form) a question of Yannakakis raised in 1981.

Joint work with N. Alon and A. Shapira.
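
For tiny graphs, $E_P(G)$ can be computed by exhaustive search, which makes the quantity concrete. A sketch with triangle-freeness as the example monotone property (my choice of P for illustration; the search is exponential):

    from itertools import combinations

    def edge_deletion_distance(n, edges, has_P):
        # Smallest k such that deleting some k edges yields a graph with
        # property P; exhaustive search, illustration only.
        for k in range(len(edges) + 1):
            for removed in combinations(edges, k):
                if has_P(n, [e for e in edges if e not in removed]):
                    return k

    def triangle_free(n, edges):
        es = {frozenset(e) for e in edges}
        return not any(frozenset((a, b)) in es and frozenset((b, c)) in es
                       and frozenset((a, c)) in es
                       for a, b, c in combinations(range(n), 3))

    # K_4 needs two edge deletions to become triangle-free
    K4 = [(a, b) for a, b in combinations(range(4), 2)]
    print(edge_deletion_distance(4, K4, triangle_free))   # 2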

to:

Oct 20 TBA

Eric Allender, Rutgers

Changed lines 127-147 from:

Oct 28 On the maximum number of edges in $k$-quasi-planar graphs

Eyal Ackerman, Technion

A topological graph is called \emph{$k$-quasi-planar} if it does not contain $k$ pairwise crossing edges. It is conjectured that for every fixed $k$ the maximum number of edges in a $k$-quasi-planar graph on $n$ vertices is $O(n)$. We provide, for the first time, an affirmative answer for the case $k=4$. We also give the simplest proof and the best upper bound known for the maximum number of edges in 3-quasi-planar graphs on $n$ vertices. Moreover, we show a tight upper bound for 3-quasi-planar graphs in which every pair of edges meets at most once.

Joint work with Gabor Tardos.

to:

Nov 17 TBA

Vitaly Feldman, Harvard

Changed lines 132-150 from:

Quantum Programming

Paulo Zuliani, Princeton

In this talk we introduce a programming language for the expression of quantum algorithms. Quantum algorithms are usually described by means of pseudocode, supported by two underlying models: quantum circuits and quantum Turing machines. Those models are useful in complexity analysis of quantum computation and in low-level description of implementations. However, they are inadequate for specifying and reasoning about correctness of quantum algorithms at a higher level. We present a quantum programming language that contains the features required to program a `universal' quantum computer, including initialisation and observation. Moreover, the language has a formal semantics and body of laws, and provides a refinement calculus supporting the verification and derivation of programs against their specifications. We express in the language a selection of quantum algorithms and derive one of them (Deutsch-Jozsa's) from its specification.
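
As a point of reference for the algorithm mentioned at the end, here is a small numpy simulation of the Deutsch-Jozsa circuit in its standard textbook form (unrelated to the language's actual syntax or semantics):

    import numpy as np

    def deutsch_jozsa(f, n):
        # H^n |0>, a phase flip where f(x)=1, H^n again; the amplitude of
        # |0...0> is then +-1 if f is constant and 0 if f is balanced.
        N = 2 ** n
        amps = np.full(N, N ** -0.5)            # uniform superposition
        for x in range(N):
            if f(x):
                amps[x] = -amps[x]              # oracle: (-1)^f(x) phase
        H = np.array([[(-1) ** bin(x & y).count("1") for x in range(N)]
                      for y in range(N)]) / N ** 0.5
        amps = H @ amps
        return "constant" if amps[0] ** 2 > 0.5 else "balanced"

    print(deutsch_jozsa(lambda x: 0, 2))       # constant
    print(deutsch_jozsa(lambda x: x & 1, 2))   # balanced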

to:

Dec 1 TBA

Sanjeev Khanna, U Penn

Deleted lines 137-262:

Dec 2 New Algorithms for Online Game Playing and Universal Portfolio Management

Elad Hazan, Princeton

We introduce a new algorithm, Smooth Prediction, and a new analysis technique that is applicable to a variety of online optimization scenarios, including regret minimization for Lipschitz regret functions studied by Hannan, universal portfolios by Cover, Kalai and Vempala, online optimization by Zinkevich, and others. Our algorithm is more efficient and applies to a more general setting (e.g. when the payoff function is unknown).

The algorithm extends a method proposed by Hannan in the 1950's, called ``Follow the Leader''. It adds a barrier function to the convex optimization routine required to find the ``leader'' at every iteration.

One application of our algorithm is the well studied universal portfolio management problem, for which our algorithm is the first to combine optimal convergence rate with computational efficiency. For more general settings, our algorithm is the first to achieve optimal convergence rate.

Joint work with Amit Agarwal.
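
To fix ideas, here is a hedged sketch of the follow-the-regularized-leader template that this approach refines. Negative entropy is used below only because it makes the leader available in closed form (exponential weights); Smooth Prediction instead adds a barrier function and solves the resulting convex program, which the sketch does not attempt.

    import numpy as np

    def ftrl_simplex(loss_vectors, eta=0.5):
        # Follow-the-regularized-leader over the probability simplex with
        # linear losses; each round plays the minimizer of cumulative loss
        # plus an entropy regularizer, which is exponential weights.
        d = len(loss_vectors[0])
        cum = np.zeros(d)
        total = 0.0
        for loss in loss_vectors:
            w = np.exp(-eta * cum)
            w /= w.sum()
            total += float(w @ np.asarray(loss, dtype=float))
            cum += loss
        return total

    # two actions; the second is better on most rounds
    losses = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 0.0]]
    print(ftrl_simplex(losses))   # compare to 1.0 for the best fixed action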


March 3, 2006 On the Compressibility of NP Instances (for the Future) and (Today's) Cryptographic Applications

Danny Harnik, Technion

We consider NP problems that have long instances but relatively short witnesses. The question is, can one efficiently compress an instance and store a shorter representation that maintains the information of whether the original input is in the language or not. We want the length of the compressed instance to be polynomial in the length of the witness rather than the length of the original input. Such compression enables one to succinctly store instances until a future setting allows solving them, either via a technological or algorithmic breakthrough or simply after enough time has elapsed. We investigate when such compression is possible and give a new classification of NP with respect to this compression.


April 21, 2006 Distances and limits of graphs

Laszlo Lovasz, Microsoft Research

How do we define that two large graphs are "similar"? For example, two large random graphs with the same edge density (but possibly with a different number of nodes) behave very similarly from virtually all points of view.

It is possible to define a distance between two graphs that captures global similarity. You can take the completion of the resulting metric space (i.e., add limits of graph sequences convergent in this metric); it turns out that the limit objects you have to add have several very natural descriptions; one of these is as 2-variable symmetric measurable functions from [0,1]^2 to [0,1]. The completed space is compact; this fact is essentially equivalent to Szemeredi's Regularity Lemma. This compactness can be used in proofs; for example, it yields very short (although non-effective) proofs for results in property testing.

This is joint work with Christian Borgs, Jennifer Chayes, Vera Sos, Balazs Szegedy and Kati Vesztergombi.


May 12, 2006 Bounded Degree Minimum Spanning Trees

Michel X. Goemans, M.I.T.

In this talk, I will consider the minimum cost spanning tree problem under the restriction that all degrees must be at most a given value $k$. The main result is that one can efficiently find a spanning tree of maximum degree at most $k+2$ whose cost is at most the cost of the optimum spanning tree of maximum degree $k$. This is almost best possible.

The approach uses a sequence of simple algebraic, polyhedral and combinatorial arguments. It illustrates many techniques and ideas in combinatorial optimization as it involves polyhedral characterizations, uncrossing, matroid intersection, and graph orientations (or packing of spanning trees). The talk will be self-contained.

The result generalizes to the setting where every vertex has both upper and lower degree bounds, and then gives a spanning tree that violates the bounds by at most 2 units and whose cost is at most the cost of the optimum tree. It also gives a better understanding of the subtour relaxation for both the symmetric and asymmetric traveling salesman problems.
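
A brute-force reference makes the objective concrete; this is only the problem statement in code, not the algorithm of the talk, which runs in polynomial time. Names are illustrative.

    from itertools import combinations

    def min_cost_tree_degree_k(n, edges, k):
        # Cheapest spanning tree with maximum degree <= k, by checking every
        # (n-1)-edge subset.  Exponential; a reference for tiny graphs only.
        best = None
        for cand in combinations(edges, n - 1):
            parent = list(range(n))
            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x
            deg = [0] * n
            acyclic = True
            for (u, v, c) in cand:
                deg[u] += 1
                deg[v] += 1
                ru, rv = find(u), find(v)
                if ru == rv:
                    acyclic = False
                    break
                parent[ru] = rv
            if acyclic and max(deg) <= k:
                total = sum(c for (_, _, c) in cand)
                best = total if best is None else min(best, total)
        return best

    # the cheap star has degree 3 at vertex 0; with k = 2 a path is forced
    edges = [(0, 1, 1), (0, 2, 1), (0, 3, 1), (1, 2, 2), (2, 3, 2)]
    print(min_cost_tree_degree_k(4, edges, 2))   # 4: edges 0-1, 0-2, 2-3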


June 30, 2006, NEW TIME: 2pm On Translational Motion Planning in 3-Space

Esther Ezra, Tel Aviv University

Constructing the union of geometric objects and bounding its combinatorial complexity is a fundamental problem in computational and combinatorial geometry. A major application of this problem is to translational motion planning, where a polyhedral robot $R$ translates in a fixed polyhedral environment, and one wishes to compute all placements of $R$ (known as free placements) at which it avoids collision with any obstacle.

In this talk, I will give a brief overview of the area and explain its importance. I will then discuss the union problem, where the input consists of $n$ ``fat'' tetrahedra in $3$-space of arbitrary sizes, and show that in this case the combinatorial complexity of the union is $O(n^{5/2+\eps})$, for any $\eps > 0$, thus obtaining the first subcubic upper bound. An immediate consequence of this result is that the combinatorial complexity of the union of $n$ cubes in 3-space of arbitrary side lengths is $O(n^{5/2+\eps})$, for any $\eps > 0$. In addition, I will show that when the tetrahedra have nearly equal size, this bound becomes $O(n^{2+\eps})$, for any $\eps > 0$, which is almost tight in the worst case.

The talk will be self-contained.

Joint work with Micha Sharir.


September 18, 2006, at 03:20 PM by 128.112.95.79 -
Changed lines 11-12 from:
to:

Theory Lunch archives from previous semesters

June 28, 2006, at 05:18 PM by 172.16.40.253 -
Changed lines 41-44 from:
June 30 | Esther Ezra, Tel Aviv University | On Translational Motion Planning in 3-Space
to:
June 30 at 2pm | Esther Ezra, Tel Aviv University | On Translational Motion Planning in 3-Space
Changed lines 361-362 from:

June 30, 2006 On Translational Motion Planning in 3-Space

to:

June 30, 2006, NEW TIME: 2pm On Translational Motion Planning in 3-Space

Added line 391:
June 01, 2006, at 08:10 PM by 172.16.40.253 -
Changed lines 41-44 from:
June 30 | Esther Ezra, Tel Aviv University
to:
June 30 | Esther Ezra, Tel Aviv University | On Translational Motion Planning in 3-Space
Changed lines 358-390 from:

to:

June 30, 2006 On Translational Motion Planning in 3-Space

Esther Ezra, Tel Aviv University

Constructing the union of geometric objects and bounding its combinatorial complexity is a fundamental problem in computational and combinatorial geometry. A major application of this problem is to translational motion planning, where a polyhedral robot $R$ translates in a fixed polyhedral environment, and one wishes to compute all placements of $R$ (known as free placements) at which it avoids collision with any obstacle.

In this talk, I will give a brief overview of the area and explain its importance. I will then discuss the union problem, where the input consists of $n$ ``fat'' tetrahedra in $3$-space of arbitrary sizes, and show that in this case the combinatorial complexity of the union is $O(n^{5/2+\eps})$, for any $\eps > 0$, thus obtaining the first subcubic upper bound. An immediate consequence of this result is that the combinatorial complexity of the union of $n$ cubes in 3-space of arbitrary side lengths is $O(n^{5/2+\eps})$, for any $\eps > 0$. In addition, I will show that when the tetrahedra have nearly equal size, this bound becomes $O(n^{2+\eps})$, for any $\eps > 0$, which is almost tight in the worst case.

The talk will be self-contained.

Joint work with Micha Sharir.


April 21, 2006, at 11:49 PM by Moses Charikar -
Changed line 40 from:
May 12 | Michel Goemans, MIT | Bounded Degree Minimum Spanning Trees
to:
May 12 | Michel Goemans, MIT | Bounded Degree Minimum Spanning Trees
Changed lines 154-155 from:

#ailon Nir Ailon, Princeton.

to:

Nir Ailon, Princeton.

Added lines 331-358:

May 12, 2006 Bounded Degree Minimum Spanning Trees

Michel X. Goemans, M.I.T.

In this talk, I will consider the minimum cost spanning tree problem under the restriction that all degrees must be at most a given value $k$. The main result is that one can efficiently find a spanning tree of maximum degree at most $k+2$ whose cost is at most the cost of the optimum spanning tree of maximum degree $k$. This is almost best possible.

The approach uses a sequence of simple algebraic, polyhedral and combinatorial arguments. It illustrates many techniques and ideas in combinatorial optimization as it involves polyhedral characterizations, uncrossing, matroid intersection, and graph orientations (or packing of spanning trees). The talk will be self-contained.

The result generalizes to the setting where every vertex has both upper and lower degree bounds, and then gives a spanning tree that violates the bounds by at most 2 units and whose cost is at most the cost of the optimum tree. It also gives a better understanding of the subtour relaxation for both the symmetric and asymmetric traveling salesman problems.


April 21, 2006, at 09:04 PM by 172.16.27.74 -
Changed line 38 from:
Apr 28 | Petros Drineas, Rensselaer
to:
Apr 28 | No lunch; NYU/Columbia Theory Day
April 19, 2006, at 08:26 PM by 172.16.40.253 -
Added line 37:
Apr 21 | Laszlo Lovasz
Changed lines 308-330 from:

NP with respect to this compression.

to:

NP with respect to this compression.


April 21, 2006 Distances and limits of graphs

Laszlo Lovasz, Microsoft Research

How do we define that two large graphs are "similar"? For example, two large random graphs with the same edge density (but possibly with a different number of nodes) behave very similarly from virtually all points of view.

It is possible to define a distance between two graphs that captures global similarity. You can take the completion of the resulting metric space (i.e., add limits of graph sequences convergent in this metric); it turns out that the limit objects you have to add have several very natural descriptions; one of these is as 2-variable symmetric measurable functions from [0,1]^2 to [0,1]. The completed space is compact; this fact is essentially equivalent to Szemeredi's Regularity Lemma. This compactness can be used in proofs; for example, it yields very short (although non-effective) proofs for results in property testing.

This is joint work with Christian Borgs, Jennifer Chayes, Vera Sos, Balazs Szegedy and Kati Vesztergombi.

April 04, 2006, at 02:14 PM by 128.31.33.15 -
Added line 39:
May 12 | Michel Goemans, MIT | Bounded Degree Minimum Spanning Trees
March 21, 2006, at 12:57 PM by Boaz Barak -
Changed lines 36-38 from:
Apr 14 | Ashish Goel, Stanford
Apr 28 | Petros Drineas, Rensselaer
May 5 | Tal Malkin, Columbia
to:
Apr 14 | Ashish Goel, Stanford
Apr 28 | Petros Drineas, Rensselaer
May 5 | Tal Malkin, Columbia
March 21, 2006, at 12:56 PM by Boaz Barak -
Changed lines 244-245 from:

#Zuliani Quantum Programming

to:

Quantum Programming

March 21, 2006, at 12:55 PM by Boaz Barak -
Added line 38:
May 5 | Tal Malkin, Columbia
March 16, 2006, at 12:24 PM by Boaz Barak -
Changed line 33 from:
March 24 | Tal Malkin, Columbia
to:
March 24 | No talk planned (spring break)
March 07, 2006, at 11:49 PM by Sanjeev Arora -
Changed line 31 from:
March 10 | Ravi Kannan, Yale
to:
March 10 | Ravi Kannan, Yale | Matrix-valued Random Variables, Clustering
March 05, 2006, at 09:14 PM by Sanjeev Arora -
Added lines 36-37:
Apr 14 | Ashish Goel, Stanford
Apr 28 | Petros Drineas, Rensselaer
March 05, 2006, at 12:42 AM by 172.16.40.253 -
Changed lines 36-39 from:
to:
June 30 | Esther Ezra, Tel Aviv University
March 01, 2006, at 09:06 AM by Boaz Barak -
Changed lines 290-291 from:

March 3, 2006 On the Compressibility of NP Instances (for the Future) and (Today's) Cryptographic Applications

to:

March 3, 2006 On the Compressibility of NP Instances (for the Future) and (Today's) Cryptographic Applications

February 27, 2006, at 11:47 AM by Boaz Barak -
Changed lines 36-38 from:
to:
Changed lines 111-113 from:

#elliot Sept 30 The Price of Stability for Network Design

to:

Sept 30 The Price of Stability for Network Design

February 27, 2006, at 11:40 AM by Boaz Barak -
Changed lines 39-42 from:
to:

Changed lines 46-47 from:

Sept 9 Rounding two and three dimensional solutions of the SDP relaxation of MAX CUT

to:

Sept 9 Rounding two and three dimensional solutions of the SDP relaxation of MAX CUT

Changed lines 81-83 from:

Sept 16How To Play Almost Any Mental Game Over The Net - Concurrent Composition via Super-Polynomial Simulation

to:

Sept 16 How To Play Almost Any Mental Game Over The Net - Concurrent Composition via Super-Polynomial Simulation

Changed lines 110-112 from:

#elliot Sept 30 The Price of Stability for Network Design

to:

#elliot Sept 30 The Price of Stability for Network Design

Changed lines 146-147 from:

Oct 7 Aggregating Inconsistent Information: Ranking and Clustering

to:

Oct 7 Aggregating Inconsistent Information: Ranking and Clustering

Changed lines 181-182 from:

Oct 21 Additive Approximation for Edge-Deletion Problems

to:

Oct 21 Additive Approximation for Edge-Deletion Problems

Changed lines 218-219 from:

Oct 28 On the maximum number of edges in $k$-quasi-planar graphs

to:

Oct 28 On the maximum number of edges in $k$-quasi-planar graphs

Changed lines 240-248 from:
      #Zuliani Quantum Programming

      Paulo Zuliani, Princeton
to:

#Zuliani Quantum Programming

Paulo Zuliani, Princeton

Changed lines 261-264 from:

Dec 2 New Algorithms for Online Game Playing and Universal Portfolio Management

to:

Dec 2 New Algorithms for Online Game Playing and Universal Portfolio Management

February 27, 2006, at 11:36 AM by Boaz Barak -
Changed line 12 from:
to:
Changed lines 26-27 from:
Dec 2 | Elad Hazan, Princeton | New algorithms for Online Game Playing and Universal Portfolio Management
Dec 9 | Benny Sudakov, Princeton
to:
Changed line 30 from:
March 3 | Danny Harnik, Technion
to:
Changed lines 181-182 from:

Oct 21 Additive Approximation for Edge-Deletion Problems

to:

Oct 21 Additive Approximation for Edge-Deletion Problems

Added lines 290-306:

March 3, 2006 On the Compressibility of NP Instances (for the Future) and (Today's) Cryptographic Applications

Danny Harnik, Technion

We consider NP problems that have long instances but relatively short witnesses. The question is, can one efficiently compress an instance and store a shorter representation that maintains the information of whether the original input is in the language or not. We want the length of the compressed instance to be polynomial in the length of the witness rather than the length of the original input. Such compression enables one to succinctly store instances until a future setting allows solving them, either via a technological or algorithmic breakthrough or simply after enough time has elapsed. We investigate when such compression is possible and give a new classification of NP with respect to this compression.

February 27, 2006, at 11:33 AM by Boaz Barak -
Added lines 1-289:

Princeton Theory Lunch Schedule, 2005-06.

 Computer Science Department,
 Princeton University

(All events held at 12 noon, Room 402, CS Department, 35 Olden St., unless stated otherwise)

click here to subscribe to the mailing list for seminar announcements

Date | Speaker | Title
Sept 9 | Adi Avidor, Tel Aviv University | Rounding two and three dimensional solutions of the SDP relaxation of MAX CUT
Sept 16 | Boaz Barak, Princeton | How To Play Almost Any Mental Game Over The Net -- Concurrent composition via Super-Polynomial Simulation
Sept 23 | No theory lunch owing to faculty meeting
Sept 30 | Elliot Anshelevich | The Price of Stability for Network Design
Oct 7 | Nir Ailon | Aggregating Inconsistent Information: Ranking and Clustering
Oct 14
Oct 21
Oct 28 | Eyal Ackerman, Technion
Nov 4
Nov 11 | Paulo Zuliani, Princeton | Quantum Programming
Nov 18 | No theory lunch
Nov 25 | Shengyu Zhang, Princeton
Dec 2 | Elad Hazan, Princeton | New algorithms for Online Game Playing and Universal Portfolio Management
Dec 9 | Benny Sudakov, Princeton
Jan 20 | Liam Roditty, Tel Aviv University
Feb 10 | Jon Kelner, MIT
March 3 | Danny Harnik, Technion
March 10 | Ravi Kannan, Yale
March 17 | Shengyu Zhang, Princeton
March 24 | Tal Malkin, Columbia
March 31 | Vladlen Koltun, Stanford
Apr 7 | Elchanan Mossel, UC Berkeley

Abstracts for the talks.

Sept 9 Rounding two and three dimensional solutions of the SDP relaxation of MAX CUT

Adi Avidor, Tel Aviv University

Goemans and Williamson obtained an approximation algorithm for the MAX CUT problem with a performance ratio of $\alpha_{GW} \approx 0.87856$. Their algorithm starts by solving a standard SDP relaxation of MAX CUT and then rounds the optimal solution obtained using a random hyperplane. In some cases, the optimal solution of the SDP relaxation happens to lie in a low dimensional space. Can an improved performance ratio be obtained for such instances? We show that the answer is yes in dimensions two and three, and conjecture that this is also the case in any higher fixed dimension. In two dimensions an optimal $32/(25+5\sqrt{5})$ approximation algorithm was already obtained by Goemans. (Note that $32/(25+5\sqrt{5}) \approx 0.88456$.) We obtain an alternative derivation of this result using Gegenbauer polynomials. Our main result is an improved rounding procedure for SDP solutions that lie in $R^3$, with a performance ratio of about $0.8818$. The rounding procedure uses an interesting yin-yang coloring of the three dimensional sphere. The improved performance ratio obtained resolves, in the negative, an open problem posed by Feige and Schechtman [STOC'01]. They asked whether there are MAX CUT instances with integrality ratios arbitrarily close to $\alpha_{GW} \approx 0.87856$ that have optimal embeddings, i.e., optimal solutions of their SDP relaxations, that lie in $R^3$.

Joint work with Uri Zwick


Sept 16How To Play Almost Any Mental Game Over The Net - Concurrent Composition via Super-Polynomial Simulation

Boaz Barak, Princeton University

We consider the problem of designing a secure protocol for ANY multi-party functionality (for example, electronic auctions, electronic elections, privacy-preserving data mining), that remains secure even when executed concurrently in the network with many copies of itself and other protocols. Previous such constructions either required a third trusted party or used highly non-standard computational assumptions.

Under reasonably standard computational assumptions, we construct a protocol for this problem that remains secure (under a relaxed definition of security) even in the concurrent, asynchronous setting. The relaxation of security we consider is simulation in quasi-polynomial (as opposed to polynomial) time. This relaxation seems to suffice to obtain protocols for any task whose privacy, integrity and input independence cannot be broken by efficient adversaries.

Joint work with Amit Sahai (UCLA). An extended abstract of this work will appear in FOCS 2005.


#elliot Sept 30 The Price of Stability for Network Design

E. Anshelevich

Network design is a fundamental problem for which it is important to understand the effects of strategic behavior. Given a collection of self-interested agents who want to form a network connecting certain endpoints, the set of stable solutions (the Nash equilibria) may look quite different from the centrally enforced optimum. We study the price of stability, i.e. the quality of the best Nash equilibrium compared to the optimum network cost. The best Nash equilibrium solution has a natural meaning of stability in this context: it is the optimal solution that can be proposed from which no user will "deviate".

We consider two versions of this game: one where agents may divide the cost of the edges they use in any manner they desire, and one where the cost of each such edge is divided equally between the agents whose connections make use of it. In the first version, determining whether or not a Nash equilibrium exists is NP-complete. However, when the goal of each player is to connect a terminal to a common source, we prove that there is a Nash equilibrium as cheap as the optimal network, and give a polynomial time algorithm to find a (1+epsilon)-approximate Nash equilibrium that does not cost much more. In the second version, however, a Nash equilibrium always exists and can be achieved via best-response dynamics. In this version we can show a tight bound of O(log k) on the price of stability (where k is the number of agents). I will discuss these results and possibly mention some extensions as well.

This is joint work with: Anirban Dasgupta, Jon Kleinberg, Eva Tardos, Tom Wexler, and Tim Roughgarden


Oct 7 Aggregating Inconsistent Information: Ranking and Clustering

#ailon Nir Ailon, Princeton.

A ranking of n web pages is to be chosen from the outputs of k search engines. How do we choose one ranking minimizing the "disagreement" with the k rankings? A clustering of n genes is to be chosen from the outputs of k clustering algorithms. How do we choose one clustering minimizing the "disagreement" with the k clusterings?

These information aggregation problems date back to as early as 1785, when the French philosopher Condorcet considered voting systems where each voter chooses a full ranking of a set of candidates. Recently, their computational aspects have been studied.

In this talk, I will describe new algorithms improving on the best known approximation ratios for both the ranking and the clustering problems with respect to a standard measure of disagreement. I will also discuss related graph theoretic optimization problems, known as the minimum feedback arc set in tournaments and the correlation clustering in complete graphs. Additionally, I will show that the problem of finding a minimum feedback arc set in tournaments has no poly-time algorithm, assuming NP is not contained in BPP. This almost settles a long-standing conjecture of Bang-Jensen and Thomassen, that the problem is NP-hard.

This is joint work with Moses Charikar and Alantha Newman.


Oct 21 Additive Approximation for Edge-Deletion Problems

Benny Sudakov, Princeton

A graph property is monotone if it is closed under removal of vertices and edges. In this talk we consider the following edge-deletion problem: given a monotone property P and a graph G, compute the smallest number of edge deletions that are needed in order to turn G into a graph satisfying P. We denote this quantity by $E_P(G)$. Our first result states that for any monotone graph property P, any $\epsilon > 0$ and n-vertex input graph $G$, one can approximate $E_P(G)$ up to an additive error of $\epsilon n^2$.

Given the above, a natural question is for which monotone properties one can obtain better additive approximations of $E_P(G)$.

Our second main result essentially resolves this problem by giving a precise characterization of the monotone graph properties for which such approximations exist. We will show that for any dense monotone property, that is, a property for which there are graphs on n vertices with $\Omega(n^2)$ edges that satisfy it, it is NP-hard to approximate $E_P(G)$ up to an additive error of $n^{2-\delta}$, for any fixed positive $\delta$.

The proof requires several new ideas and involves tools from Extremal Graph Theory together with spectral techniques. Interestingly, prior to this work it was not even known that computing $E_P(G)$ precisely for dense monotone properties is NP-hard. We thus answer (in a strong form) a question of Yannakakis raised in 1981.

Joint work with N. Alon and A. Shapira.


Oct 28 On the maximum number of edges in $k$-quasi-planar graphs

Eyal Ackerman, Technion

A topological graph is called \emph{$k$-quasi-planar} if it does not contain $k$ pairwise crossing edges. It is conjectured that for every fixed $k$ the maximum number of edges in a $k$-quasi-planar graph on $n$ vertices is $O(n)$. We provide, for the first time, an affirmative answer for the case $k=4$. We also give the simplest proof and the best upper bound known for the maximum number of edges in 3-quasi-planar graphs on $n$ vertices. Moreover, we show a tight upper bound for 3-quasi-planar graphs in which every pair of edges meets at most once.

Joint work with Gabor Tardos.


      #Zuliani Quantum Programming

      Paulo Zuliani, Princeton

In this talk we introduce a programming language for the expression of quantum algorithms. Quantum algorithms are usually described by means of pseudocode, supported by two underlying models: quantum circuits and quantum Turing machines. Those models are useful in complexity analysis of quantum computation and in low-level description of implementations. However, they are inadequate for specifying and reasoning about correctness of quantum algorithms at a higher level. We present a quantum programming language that contains the features required to program a `universal' quantum computer, including initialisation and observation. Moreover, the language has a formal semantics and body of laws, and provides a refinement calculus supporting the verification and derivation of programs against their specifications. We express in the language a selection of quantum algorithms and derive one of them (Deutsch-Jozsa's) from its specification.


Dec 2 New Algorithms for Online Game Playing and Universal Portfolio Management

Elad Hazan, Princeton

We introduce a new algorithm, Smooth Prediction, and a new analysis technique that is applicable to a variety of online optimization scenarios, including regret minimization for Lipschitz regret functions studied by Hannan, universal portfolios by Cover, Kalai and Vempala, online optimization by Zinkevich, and others. Our algorithm is more efficient and applies to a more general setting (e.g. when the payoff function is unknown).

The algorithm extends a method proposed by Hannan in the 1950's, called ``Follow the Leader''. It adds a barrier function to the convex optimization routine required to find the ``leader'' at every iteration.

One application of our algorithm is the well studied universal portfolio management problem, for which our algorithm is the first to combine optimal convergence rate with computational efficiency. For more general settings, our algorithm is the first to achieve optimal convergence rate.

Joint work with Amit Agarwal.