Theory Lunch

Princeton Theory Lunch Schedule, 2007-08.

Theory Group, Computer Science Department, Princeton University

Lunch will be served at 11:45am in room 402, CS Department, 35 Olden St. The talk will begin at 12:05pm.

Subscribe to the mailing list for seminar announcements.

Theory Lunch archives from previous semesters

Talks since Fall 2008

For the theory talks since Fall 2008, please visit our new Theory calendar.


Previous years

Spring 2008

Date   | Speaker                           | Title
Jan 15 | Robi Krauthgamer, Weizmann        | Overcoming the l_1 nonembeddability barrier
Jan 23 | Mark Braverman, Toronto           | Noisy sorting without resampling
Feb 8  | C. Seshadhri, Princeton           | Adaptive Algorithms for Online Optimization
Feb 15 | Prasad Raghavendra, U. Washington | Optimal Algorithms and Inapproximability Results for every CSP?
Feb 22 | Arie Matsliah, Technion           | Approximate Hypergraph Partitioning and Applications
Feb 29 | Venkat Guruswami, U. Washington   | List decoding of binary codes: Improved results via new concatenation schemes
Mar 7  | Nikhil Bansal, IBM                | Degree bounded network design
Mar 14 | Mikkel Thorup, AT&T               | Maximum Overhang
Mar 21 | Alexandr Andoni, MIT              | The Computational Hardness of Estimating Edit Distance
Mar 28 | James Lee, U. Washington          | Spectral bounds without conformal mappings
Apr 4  | Nikhil Srivastava, Yale           | Graph Sparsification by Effective Resistances
Apr 11 | Rohit Khandekar, IBM              | Stateless Distributed Gradient Descent for Positive Linear Programs
Apr 18 | Vijay V. Vazirani, Georgia Tech   | Nash Bargaining via Flexible Budget Markets
Apr 25 | Umesh Vazirani, UC Berkeley       |
May 2  | Adam Meyerson, UCLA               | Randomized K-Server on Hierarchical Binary Trees
May 9  | all-day workshop                  | Bob Tarjan's 60th Birthday Celebration
May 16 | Navin Goyal, Georgia Tech         | The VPN Conjecture is True
May 23 | cancelled                         |

Fall 2007

Date   | Speaker                            | Title
Sep 21 | Kamesh Munagala, Duke              | Restless Bandits with Bursty Rewards
Sep 28 | Parikshit Gopalan, U. Washington   | Hardness of Reconstructing Multivariate Polynomials over Finite Fields
Oct 5  | Benny Applebaum, Princeton         | Cryptography in Constant Parallel Time
Oct 12 | Ragesh Jaiswal, UCSD               | Chernoff-type Direct Product Theorems
Oct 19 | Oded Regev, Tel Aviv               | The Unique Games Conjecture with Entangled Provers is False
Oct 26 | Andrej Bogdanov, Tsinghua          | Pseudorandom bits for polynomials
Nov 2  | Michael Schapira, Hebrew U.        | Interdomain Routing and Games
Nov 9  | Elad Hazan                         |
Nov 16 | Jérémie Roland, UC Berkeley        | Non-locality, negative probabilities, and communication complexity
Nov 23 | No talk                            | Thanksgiving holiday
Nov 30 | Kostas Daskalakis                  | Computing Approximate Nash Equilibria
Dec 7  |                                    |

Abstracts


Sept 21 Restless Bandits with Bursty Rewards

Kamesh Munagala, Duke

We consider a variant of the classic multi-armed bandit (MAB) problem from stochastic control theory. We call this variant "Feedback MAB". Here, the reward obtained by playing each of n independent arms varies according to an underlying "on/off" Markov process with known parameters. The Markov chain evolves irrespective of whether the arm is played, and furthermore, the exact state of the chain is revealed to the player only when the arm is played and the reward observed. At most one arm (or in general, M arms) can be played at any time step. The goal is to design a policy for playing the arms that maximizes the infinite-horizon time-average expected reward.
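A minimal sketch of this model (not of the paper's policy) may help fix the setting; the class and function names and the transition parameters below are illustrative, not from the paper:

```python
# Sketch of the Feedback MAB environment: each arm's reward follows a
# 2-state on/off Markov chain that evolves whether or not the arm is
# played, and whose state is revealed only when the arm is played.
import random

class FeedbackMABArm:
    def __init__(self, p_on, p_off, reward=1.0):
        self.p_on = p_on        # P(off -> on) per step
        self.p_off = p_off      # P(on -> off) per step
        self.reward = reward
        self.state = random.random() < 0.5  # hidden true state

    def step(self):
        # The chain evolves regardless of the player's action.
        if self.state:
            self.state = not (random.random() < self.p_off)
        else:
            self.state = random.random() < self.p_on

    def play(self):
        # Playing reveals the state (via the observed reward).
        return self.reward if self.state else 0.0

def simulate(arms, policy, horizon):
    """Time-average reward of `policy`, a map from the observation
    history (list of (arm index, observed reward)) to an arm index."""
    total, history = 0.0, []
    for _ in range(horizon):
        i = policy(history)
        obs = arms[i].play()
        total += obs
        history.append((i, obs))
        for arm in arms:
            arm.step()
    return total / horizon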

This problem is an instance of a Partially Observable Markov Decision Process (POMDP), and a special case of the notoriously intractable "restless bandit" problem in control theory. Unlike the classic stochastic MAB problem, the Feedback MAB problem does not admit greedy index-based optimal policies. The state of the system at any time step encodes the beliefs about the states of the different arms, and the policy's decisions change these beliefs; this aspect complicates the design and analysis of simple algorithms.

We design a constant factor approximation to the Feedback MAB problem by solving and rounding a natural LP relaxation to this problem. The resulting policy is simple to implement, intuitive, and crucially exploits interesting structure in the LP solution. As far as we are aware, this is the first (multiplicative) approximation algorithm for a POMDP problem.

To appear in FOCS 2007. Joint work with Sudipto Guha, UPenn.


Sept 28 Hardness of Reconstructing Multivariate Polynomials over Finite Fields

Parikshit Gopalan, U. Washington

In the generic polynomial reconstruction problem, we are given a set of points along with target values for those points. Our goal is to find a low degree polynomial that fits the data well. This problem arises in several settings in computer science.

We consider the setting of low-degree multivariate polynomials over finite fields. Over GF[2], for any degree d, we show that it is NP-hard to tell whether some degree-d polynomial fits the data almost everywhere, or whether every polynomial of degree d fits the data at no more than a 1 - 2^{-d} fraction of points. This holds even with the promise that the polynomial that fits the data (if it exists) is in fact linear. Previously, the only known hardness of approximation (or even NP-completeness) was for the case d = 1, which follows from a celebrated result of Håstad.
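In gap-problem form, the statement reads as follows (our phrasing; epsilon stands for the "almost everywhere" slack, whose exact value is in the paper). Given points (x_i, y_i), it is NP-hard to distinguish:

```latex
\begin{align*}
\textsc{Yes:}\;\; & \exists\, p \text{ with } \deg p \le d \ (\text{in fact linear}):\;\;
  \Pr_i\big[\, p(x_i) = y_i \,\big] \;\ge\; 1 - \varepsilon,\\
\textsc{No:}\;\;  & \forall\, p \text{ with } \deg p \le d:\;\;
  \Pr_i\big[\, p(x_i) = y_i \,\big] \;\le\; 1 - 2^{-d}.
\end{align*}
```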

This is joint work with Subhash Khot and Rishi Saket.


Oct 5 Cryptography in Constant Parallel Time

Benny Applebaum, Princeton

We study the parallel time-complexity of basic cryptographic primitives such as one-way functions, pseudorandom generators, encryption schemes and signatures. Specifically, we consider the possibility of computing instances of these primitives by NC0 circuits, in which each output bit depends on a constant number of input bits. While NC0 functions might seem too simple to have cryptographic strength, we show that, under standard cryptographic assumptions (e.g., that integer factorization is hard), most cryptographic tasks can be carried out by such functions.
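To make the locality restriction concrete, here is a toy example (ours, not the talk's construction) of a function in which each output bit reads at most four input bits; the gate chosen here is arbitrary:

```python
# Toy illustration of output locality 4: output j depends only on the
# four input positions in wiring[j], so all outputs can be computed in
# constant parallel depth (the defining property of NC0).
import random

def local4_function(x, wiring):
    # Each output is x_a XOR x_b XOR (x_c AND x_d) -- an arbitrary
    # choice of 4-local gate, purely for illustration.
    return [x[a] ^ x[b] ^ (x[c] & x[d]) for (a, b, c, d) in wiring]

n, m = 16, 32  # input and output lengths
wiring = [tuple(random.randrange(n) for _ in range(4)) for _ in range(m)]
x = [random.randint(0, 1) for _ in range(n)]
print(local4_function(x, wiring))
```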

In this talk, I will survey several results regarding cryptography in NC0.

First, I will show that every "moderately easy" one-way function (resp., pseudorandom generator, encryption scheme, signature scheme), say computable in NC1, can be compiled into a corresponding one-way function (resp., low-stretch pseudorandom generator, encryption scheme, signature scheme) in which each output bit depends on only four input bits.

Second, I will discuss new relations between the parallel complexity of simple cryptographic tasks (e.g., generation of pseudorandom bits and computation of a one-way permutation) and the parallel complexity of more complicated cryptographic tasks (e.g., encryption, signatures, or zero-knowledge proofs).

Third, I will describe a new connection between NC0 cryptography and hardness of approximation.

Finally, I will examine the possibility of carrying out cryptographic tasks by functions in which each output bit depends on a constant number of input bits, and each input bit affects a constant number of *output* bits at the same time. I will characterize what cryptographic tasks can be performed by such functions.

The talk is self-contained, and does not assume previous background in cryptography. The talk is based on joint works with Yuval Ishai and Eyal Kushilevitz (FOCS '04, CCC '05, RANDOM '06, CRYPTO '07).


Oct 12 Chernoff-type Direct Product Theorems

Ragesh Jaiswal, UCSD

Consider a challenge-response protocol where the probability of a correct response is at least p for a legitimate user, and at most q < p for an attacker. One example is a CAPTCHA challenge, where a human should have a significantly higher chance of answering a single challenge (e.g., uncovering a distorted letter) than an attacker. Another example would be an argument system without perfect completeness. A natural approach to boost the gap between legitimate users and attackers would be to issue many challenges, and accept if the response is correct for more than a threshold fraction, for the threshold chosen between p and q. We give the first proof that parallel repetition with thresholds improves the security of such protocols. We do this with a very general result about an attacker’s ability to solve a large fraction of many independent instances of a hard problem, showing a Chernoff-like convergence of the fraction solved incorrectly to the probability of failure for a single instance.
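A quick back-of-the-envelope computation illustrates the intended amplification; the numbers below are illustrative, and the binomial-tail calculation assumes independent responses, whereas the hard part of the paper is that an attacker need not answer the n challenges independently:

```python
# With n parallel challenges and acceptance threshold t between q*n and
# p*n, binomial tails show the legitimate user passing almost surely
# while an (independent-answering) attacker fails almost surely.
from math import comb

def pass_prob(p, n, t):
    """P(Binomial(n, p) >= t): probability of at least t correct responses."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t, n + 1))

p, q = 0.9, 0.6            # single-challenge success: legitimate vs attacker
n = 100
t = int(0.75 * n)          # threshold chosen between q and p
print(pass_prob(p, n, t))  # close to 1 for the legitimate user
print(pass_prob(q, n, t))  # exponentially small for the attacker
```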

Joint work with Russell Impagliazzo (UCSD, IAS) and Valentine Kabanets (SFU). This work was presented at CRYPTO 2007.


Oct 19 The Unique Games Conjecture with Entangled Provers is False

Oded Regev, Tel Aviv University

We consider one-round games between a classical verifier and two provers who share entanglement. We show that when the constraints enforced by the verifier are `unique' constraints (i.e., permutations), the value of the game can be well approximated by a semidefinite program. Essentially the only algorithm known previously was for the special case of binary answers, as follows from the work of Tsirelson in 1980. Among other things, our result implies that the variant of the unique games conjecture where we allow the provers to share entanglement is false. Our proof is based on a novel `quantum rounding technique', showing how to take a solution to an SDP and transform it to a strategy for entangled provers.

NOTE: This talk requires absolutely no prior knowledge of quantum computation.

Joint work with Julia Kempe and Ben Toner.


Oct 26 Pseudorandom bits for polynomials

Andrej Bogdanov, Tsinghua University

Despite our general belief that randomized computations can be simulated deterministically in an efficient way, there are relatively few models for which results of this type can be shown unconditionally. On the other hand, the study of derandomization in simple models of computation has proved fruitful even in domains where no applications were apparent at the outset.

In this talk we consider pseudorandom generators for the class of low-degree multivariate polynomials over GF(2). In the case of degree 1 polynomials (linear functions), such pseudorandom generators were first constructed by Naor and Naor in 1989 and have been widely studied ever since. We show that, for any constant d, assuming the "Inverse Conjecture for the d-th Gowers norm" of Samorodnitsky, the sum of d generators for degree 1 polynomials is pseudorandom against degree d polynomials. The "Inverse Conjecture" is known to hold for d = 2 and d = 3 and in those cases our results are unconditional.
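The construction has a very simple shape: XOR together (i.e., sum over GF(2)) d independent samples of a generator for degree-1 polynomials. The sketch below is ours: it uses the AGHP "powering" small-bias generator over GF(2^8) as a stand-in for Naor-Naor, with illustrative parameters:

```python
# Sum of d small-bias samples, the shape of the generator in the talk.
import random

K, IRRED = 8, 0x11B  # GF(2^8) with modulus x^8 + x^4 + x^3 + x + 1

def gf_mul(a, b):
    """Carry-less multiplication in GF(2^8), reduced by IRRED."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> K:
            a ^= IRRED
    return r

def small_bias_bits(m):
    """AGHP powering generator: bit i is the inner product <a^i, b>."""
    a, b = random.randrange(1, 1 << K), random.randrange(1 << K)
    bits, ai = [], 1  # ai tracks a^i, starting from a^0 = 1
    for _ in range(m):
        bits.append(bin(ai & b).count("1") & 1)  # parity of a^i AND b
        ai = gf_mul(ai, a)
    return bits

def degree_d_prg(m, d):
    """XOR of d independent small-bias samples: pseudorandom against
    degree-d polynomials, assuming the inverse Gowers-norm conjecture."""
    out = [0] * m
    for _ in range(d):
        out = [x ^ y for x, y in zip(out, small_bias_bits(m))]
    return out

print(degree_d_prg(20, 3))
```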

This is based on joint work with Emanuele Viola.


Nov 2 Interdomain Routing and Games

Michael Schapira, The Hebrew University of Jerusalem http://www.cs.huji.ac.il/~mikesch/

We present a game-theoretic model that captures many of the intricacies of interdomain routing in today's Internet. In this model, the strategic agents are source nodes located on a network, who aim to send traffic to a unique destination node. The model enables complex dynamic interactions between the agents (asynchronous, sequential, and based on partial information).

We provide realistic conditions that guarantee that executing the Border Gateway Protocol (BGP), the only protocol currently used for interdomain routing, is in the best interest of each of the players. Moreover, we show that coalitions of players of any size cannot improve their routing outcomes by jointly deviating from BGP. Unlike the vast majority of works in mechanism design, these results do not require any monetary transfers (to or by the agents).

Based on joint work with Hagay Levin and Aviv Zohar.



Nov 16 Non-locality, negative probabilities, and communication complexity

Jérémie Roland, UC Berkeley

Note: We will not assume any knowledge of quantum mechanics (or of any particular field whatsoever).

Ever since its inception, quantum mechanics has appeared rather counter-intuitive in contrast to classical physics. In particular, one odd property of quantum mechanics is its apparent non-locality: this was first emphasized by the EPR paradox, and later clarified by Bell's theorem, which states that no local hidden variable (LHV) model may reproduce the results of an EPR-type experiment. One possible explanation, first proposed by Feynman to resolve these apparent paradoxes, was the (rather abstract) idea that quantum mechanics allows probabilities to be negative.

Here, we will prove that any EPR-type experiment that respects non-signaling may be reproduced by an LHV model with quasi-probabilities, and give applications to communication complexity. More specifically, we will show how to derive a quasi-LHV model from a communication protocol. This yields a new lower bound method for deterministic communication complexity, expressed as a linear program. The duality theorem of linear programming then clarifies the relation between Bell inequalities, negative probabilities, and communication complexity.

Time permitting, we will also consider the problem of computing with so-called "non-local boxes", i.e., abstract devices whose input-output behavior follows the simplest non-local yet non-signaling probability distribution, and as such may be described by a quasi-LHV model. As was first proved by van Dam, communication complexity becomes trivial as soon as non-local boxes may be used for free. Here, we will quantify the cost of non-local boxes in such computations by showing how a communication protocol may be translated into a protocol using non-local boxes, and by giving a lower bound on the number of non-local boxes necessary to compute a given function.


Feb 8 Adaptive Algorithms for Online Optimization

C. Seshadhri, Princeton University

The online learning framework captures a wide variety of learning problems. The setting is as follows: in each round, we have to choose a point from some fixed convex domain. Then we are presented with a convex loss function, according to which we incur a loss. The loss over T rounds is simply the sum of all the losses. The aim of most online learning algorithms is to minimize *regret*: the difference between the algorithm's loss and the loss of the best fixed decision in hindsight. Unfortunately, in situations where the loss function may vary a lot, regret is not a good measure of performance.

We define *adaptive regret*, a notion that is a much better measure of how well our algorithm adapts to changing loss functions. We provide a procedure that converts any standard low-regret algorithm into one that attains low adaptive regret. We use an interesting mix of techniques, including streaming ideas, to make our algorithm efficient. This technique can be applied in many scenarios, such as portfolio management, online shortest paths, and the tree update problem, to name a few.
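Concretely, if f_t denotes the loss function at round t, x_t the algorithm's choice, and K the fixed convex domain, adaptive regret measures the worst regret over any contiguous interval of rounds (standard regret is the special case I = [1, T]):

```latex
\[
\mathrm{AdaptiveRegret}(T) \;=\; \max_{I = [r, s] \subseteq [1, T]}
\left( \sum_{t \in I} f_t(x_t) \;-\; \min_{x \in K} \sum_{t \in I} f_t(x) \right).
\]
```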

Joint work with Elad Hazan.


Feb 15 Optimal Algorithms and Inapproximability Results for every CSP?

Prasad Raghavendra, U. Washington

Semidefinite programming (SDP) is one of the strongest algorithmic techniques used in the design of approximation algorithms. In recent years, the Unique Games Conjecture (UGC) has proved to be intimately connected to the limitations of semidefinite programming.

Making this connection precise, we show the following result: if the UGC is true, then for every constraint satisfaction problem (CSP), the best approximation ratio is given by a certain simple SDP. Specifically, we show a generic conversion from SDP integrality gap instances to UGC hardness results. This result holds both for maximization and minimization problems over arbitrary finite domains.

Using this connection between integrality gaps and hardness results we obtain a generic polynomial-time algorithm for all CSPs. Assuming the Unique Games Conjecture, this algorithm achieves the optimal approximation ratio for every CSP.

Unconditionally, for all 2-CSPs the algorithm achieves an approximation ratio equal to the integrality gap of the natural SDP. Thus the algorithm achieves at least as good an approximation ratio as the best known algorithms for several problems like MaxCut, Max2SAT and Unique Games.


Feb 22 Approximate Hypergraph Partitioning and Applications

Arie Matsliah, Technion

I will present an algorithm that, given eps > 0 and a graph G on n vertices, outputs an eps-regular partition of G in O(n) time.

Unlike previous approaches to this problem, which "reproved" Szemerédi's regularity lemma algorithmically and were therefore guaranteed only to find partitions of tower size, our algorithm will find a small regular partition if one exists.

The main tool that we develop for the above task is an algorithm for finding approximate partitions of hypergraphs, which generalizes the algorithm of Goldreich-Goldwasser-Ron for finding approximate partitions of graphs.

Joint work with Eldar Fischer and Asaf Shapira.

Paper: http://www.cs.technion.ac.il/~ariem/papers/ahp.pdf


Feb 29 List decoding of binary codes: Improved results via new concatenation schemes

Venkat Guruswami, IAS and U. Washington

Recent progress in algebraic coding theory has led to the construction of error-correcting codes with the optimal trade-off between information rate and the fraction of worst-case errors corrected (in the model of list decoding). These codes, which are folded versions of the classical Reed-Solomon codes, are defined over large alphabets. The corresponding program of achieving list decoding capacity for binary codes -- namely constructing binary codes of rate approaching 1-H(p) to list decode a fraction p of errors -- remains a major long term challenge.
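For reference, H above is the binary entropy function, so list-decoding capacity for binary codes at error fraction p means rate approaching

```latex
\[
1 - H(p), \qquad \text{where} \quad
H(p) \;=\; -\,p \log_2 p \;-\; (1-p)\log_2(1-p), \quad 0 < p < 1.
\]
```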

In this talk, I will discuss the following three results on list decoding of binary codes. All of them use a concatenation scheme with an outer folded Reed-Solomon code, and crucially rely on the powerful list decoding property of these codes.

  1. Polynomial-time constructible binary codes with the currently best known trade-off between error-correction radius and rate, achieving the so-called Blokh-Zyablov bound.
  2. Existence of binary concatenated codes that achieve list decoding capacity, which is encouraging given the preeminent role played by code concatenation in constructing good binary codes.
  3. Explicit binary codes for correcting a fraction (1/2-eps) of errors for eps -> 0 with block length growing as 1/eps^3; improving this cubic dependence will likely require a substantially new approach.

Based on joint works with Atri Rudra appearing in RANDOM'07, SODA'08, and CCC'08.


Mar 7 Degree bounded network design

Nikhil Bansal, IBM

Computing the minimum-cost spanning tree in an undirected graph has been well understood since the 1920s. However, consider the generalization where we are given degree bounds b_v on each vertex v, and we want to find the minimum-cost spanning tree subject to these bounds. In a recent breakthrough result, Goemans gave an LP-based approximation algorithm that computes the minimum-cost tree while violating the degree bounds by at most an additive +2. This was subsequently improved and simplified by Singh and Lau, who gave an additive guarantee of +1.
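The natural LP relaxation behind these results couples the spanning-tree polytope with the degree bounds (our rendering; E(S) denotes the edges with both endpoints in S, and δ(v) the edges incident to v):

```latex
\begin{align*}
\min\;\; & \sum_{e \in E} c_e\, x_e \\
\text{s.t.}\;\; & \sum_{e \in E} x_e \;=\; |V| - 1, \\
& \sum_{e \in E(S)} x_e \;\le\; |S| - 1 && \forall\; \emptyset \ne S \subsetneq V, \\
& \sum_{e \in \delta(v)} x_e \;\le\; b_v && \forall\; v \in V, \\
& x_e \;\ge\; 0 && \forall\; e \in E.
\end{align*}
```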

In this talk, I will describe a new extremely simple proof of the +1 result. Then I will discuss extensions to directed graphs, matroids and more general network design problems. Most of the talk will consist of giving a flavor of the polyhedral techniques used in the design and analysis of these algorithms.

Joint work with Rohit Khandekar and Viswanath Nagarajan.


Mar 14 Maximum Overhang

Mikkel Thorup, AT&T

Stacking $n$ blocks on a table so as to achieve maximum overhang over the side has a long history. The problem appears in physics and engineering textbooks from as early as the mid-19th century, but was apparently first brought to the attention of the mathematical community in 1923, when J.G. Coffin posed it in the "Problems and Solutions" section of the American Mathematical Monthly; no solution was presented there. Since then the problem has been widely publicized by Gardner in Scientific American. A cover article in Amer. J. Phys. in 2005 claimed that order $\log n$ would be best possible, while the book "Mad about Physics" from 2001 suggested that $n^{1/2}$ would be possible. Both of these mutually contradictory views are wrong! At SODA'06, Paterson and Zwick constructed n-block stacks with overhangs of order $n^{1/3}$, but can we go even further? We prove that order $n^{1/3}$ is best possible, resolving the long-standing overhang problem up to a constant factor.

In the talk we will discuss both the construction of Paterson and Zwick and our proof that nothing substantially better is possible.

Joint work with Mike Paterson (Warwick), Yuval Peres (Berkeley), Peter Winkler (Dartmouth), and Uri Zwick (Tel Aviv).


Mar 21 The Computational Hardness of Estimating Edit Distance

Alexandr Andoni, MIT

Edit distance (aka Levenshtein distance) between two strings is the minimum number of insertions/deletions/substitutions needed to transform one string into the other. This distance is of key importance in fields like computational biology and text processing. Unfortunately, edit distance appears to be a "hard" distance to deal with, eluding efficient algorithms, both theoretically and in practice.
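For concreteness, here is the textbook quadratic-time dynamic program for the distance just defined (the standard algorithm, not the talk's contribution; part of the field's frustration is that nothing substantially faster is known):

```python
# Classic Levenshtein DP: dp[i][j] = edit distance between s[:i] and t[:j].
def edit_distance(s: str, t: str) -> int:
    m, n = len(s), len(t)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i              # delete all of s[:i]
    for j in range(n + 1):
        dp[0][j] = j              # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1   # substitution cost
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # match/substitute
    return dp[m][n]

assert edit_distance("kitten", "sitting") == 3
```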

We present the first non-trivial communication complexity lower bound for the problem of estimating the edit distance. To the best of our knowledge, this is the first computational setting in which the complexity of computing the edit distance is provably larger than that of Hamming distance.

Our lower bound implies that protocols with O(1) communication can achieve an approximation factor no better than Omega~(log d), where d is the strings' length. This model is of particular importance, since it captures constant-size sketches as well as embeddings into spaces like L_1 and squared-L_2, two prevailing algorithmic approaches for dealing with edit distance. Furthermore, the bound holds even if we restrict our attention to strings where any character appears at most once (namely, the Ulam metric). In this latter case, our lower bound nearly matches the upper bound.

Joint work with Robert Krauthgamer (Weizmann Inst and IBM Almaden). Based on a result presented at FOCS'07.


Mar 28 Spectral bounds without conformal mappings

James Lee, University of Washington

In 1980, Yang and Yau gave optimal bounds on the second eigenvalue of the Laplacian on a 2-dimensional surface of bounded genus. Spielman and Teng showed that such a bound holds in the graph setting for planar graphs, which implies that for bounded degree planar graphs, Lipton-Tarjan separators will be found by a simple spectral algorithm. Later, Kelner extended this to all graphs of bounded genus.

All of these approaches are based on conformal mappings, or their discrete analogue (e.g. Koebe's embedding theorem), and it seems difficult to extend them to families of graphs that don't possess a conformal structure. We give a more intrinsic approach to these spectral bounds, based on metric embeddings, together with a bit of duality, and some elementary multi-commodity flow and crossing number arguments. In particular, we are able to positively resolve a conjecture of Spielman and Teng which gives eigenvalue bounds for graphs that exclude any fixed minor.

Finally, unlike previous spectral approaches, we can obtain separators without a bounded degree assumption; although the second eigenvector may perform poorly in this setting, we show that our test vector still gives a \sqrt{n}-sized separator, yielding an alternate proof of the Alon-Seymour-Thomas result on separators in non-planar graphs.

[Joint work with Punya Biswal and Satish Rao]


Apr 4 Graph Sparsification by Effective Resistances

Nikhil Srivastava, Yale

A sparsifier of a graph G is a sparse subgraph H that approximates it in some appropriate way. Benczur and Karger gave the first sparsifiers in '96, in which the weight of every cut in H was within a factor of (1+/-\epsilon) of its weight in G. In this work, we consider a stronger notion of approximation (introduced by Spielman and Teng in '04) that requires the Laplacian quadratic forms of H and G to be close -- specifically, that x^TL'x = (1+/-\epsilon) x^TLx for all vectors x in R^n, where L and L' are the Laplacians of G and H respectively. This subsumes the Benczur-Karger notion, which corresponds to the special case of x in {0,1}^n.

We show that every graph contains a (strong) sparsifier with O(n log n) edges, and that one can be found in nearly-linear time by randomly sampling each edge of the graph with probability proportional to its effective resistance. A key ingredient in our algorithm is a subroutine of independent interest: a nearly-linear time algorithm that builds a data structure from which we can query the approximate effective resistance between any two vertices in a graph in O(log n) time.
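A minimal sketch of the sampling scheme, assuming a graph small enough to compute the Laplacian pseudoinverse directly; the paper's point is to achieve this in nearly-linear time, and the names and constants below are ours:

```python
# Spectral sparsification by effective-resistance sampling.
import numpy as np

def effective_resistances(n, edges):
    """R_uv = (e_u - e_v)^T L^+ (e_u - e_v) for each edge (u, v, w)."""
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    Lp = np.linalg.pinv(L)  # pseudoinverse of the (singular) Laplacian
    return [Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v, _ in edges]

def sparsify(n, edges, q):
    """Draw q samples with probability proportional to w_e * R_e,
    reweighting each sample by 1/(q * p_e) so that edge weights, and
    hence Laplacian quadratic forms, are preserved in expectation."""
    R = effective_resistances(n, edges)
    scores = np.array([w * r for (_, _, w), r in zip(edges, R)])
    p = scores / scores.sum()
    H = {}
    for idx in np.random.choice(len(edges), size=q, p=p):
        u, v, w = edges[idx]
        H[(u, v)] = H.get((u, v), 0.0) + w / (q * p[idx])
    return [(u, v, w) for (u, v), w in H.items()]

# Tiny example: a weighted 4-cycle with a chord.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0), (0, 2, 1.0)]
print(sparsify(4, edges, q=20))
```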

We conjecture the existence of sparsifiers with O(n) edges, noting that these would generalize the notion of expander graphs, which are constant-degree sparsifiers for the complete graph.

Joint work with Dan Spielman, to appear in STOC 2008.

Paper: http://arxiv.org/abs/0803.0929


Apr 11 Stateless Distributed Gradient Descent for Positive Linear Programs

Rohit Khandekar, IBM

I shall describe a framework of distributed and stateless solutions for packing and covering linear programs, which are solved by multiple agents operating in a cooperative but uncoordinated manner. Our model has a separate "agent" controlling each variable, and an agent is allowed to read off the current values of only those constraints in which it has non-zero coefficients. This is a natural model for many distributed applications like multi-commodity flow control, maximum bipartite matching, and dominating sets.

The most appealing features of our algorithms are their simplicity and fast convergence. For example, in the case of the flow control problem, our algorithm associates with each edge a length exponential in its congestion, and each commodity multiplicatively increases (or decreases) its flow if its path length is less (or more) than a threshold. Our algorithm, starting from any feasible solution, always maintains feasibility and converges to a near-optimum solution in poly-logarithmic time.
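A toy rendition of these dynamics follows (ours; the constants mu, delta, and the threshold are illustrative, not the tuned values from the paper, and this small simulation does not reproduce the paper's feasibility and convergence guarantees):

```python
# Stateless multiplicative flow-control dynamics: edge lengths are
# exponential in congestion; each commodity only looks at its own path.
import math

def stateless_flow_control(paths, capacity, rounds=2000,
                           mu=10.0, delta=0.05, threshold=None):
    """paths[i] is the list of edges used by commodity i."""
    edges = {e for p in paths for e in p}
    if threshold is None:
        threshold = math.exp(mu)  # roughly the length of a saturated edge
    flow = [1e-3] * len(paths)    # start from any feasible point
    for _ in range(rounds):
        cong = {e: sum(flow[i] for i, p in enumerate(paths) if e in p)
                     / capacity[e] for e in edges}
        length = {e: math.exp(mu * cong[e]) for e in edges}
        for i, p in enumerate(paths):
            if sum(length[e] for e in p) < threshold:
                flow[i] *= (1 + delta)   # path looks uncongested: push more
            else:
                flow[i] *= (1 - delta)   # path looks congested: back off
    return flow

paths = [["ab", "bc"], ["bc", "cd"]]          # two commodities sharing bc
capacity = {"ab": 1.0, "bc": 1.0, "cd": 1.0}
print(stateless_flow_control(paths, capacity))
```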

While exponential dual variables have been used in several packing/covering LP algorithms before, this is the first algorithm that is both stateless and has poly-logarithmic convergence. Our algorithms can be thought of as applying distributed gradient descent/ascent on a carefully chosen potential.

This talk is based on joint work with Baruch Awerbuch and will appear at STOC '08.


Apr 18 Nash Bargaining via Flexible Budget Markets

Vijay V. Vazirani, Georgia Tech

In his seminal 1950 paper, John Nash defined the bargaining problem; the ensuing theory of bargaining lies today at the heart of game theory. In this work, we initiate an algorithmic study of Nash bargaining problems.

We consider a class of Nash bargaining problems whose solution can be stated as a convex program. For these problems, we show that there is a corresponding market whose equilibrium allocations yield the solution to the convex program, and hence to the bargaining problem. For several of these markets, we give combinatorial polynomial-time algorithms, using the primal-dual paradigm.

Unlike the traditional Fisher market model, in which buyers spend a fixed amount of money, in these markets, each buyer declares a lower bound on the amount of utility she wishes to derive. The amount of money she actually spends is a specific function of this bound and the announced prices of goods.

Over the years, a fascinating theory has started forming around a convex program given by Eisenberg and Gale in 1959. Besides market equilibria, this theory touches on such disparate topics as TCP congestion control and efficient solvability of nonlinear programs by combinatorial means. Our work shows that the Nash bargaining problem fits harmoniously in this collage of ideas.
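For orientation, the two convex programs mentioned above share a common shape (our rendering; c_i is agent i's disagreement utility, m_i is buyer i's budget, and S the feasible utility set):

```latex
% Nash bargaining: maximize the product of utility gains, equivalently
\[
\max \;\; \sum_i \log\,(u_i - c_i) \qquad \text{s.t.} \quad u \in \mathcal{S}.
\]
% Eisenberg--Gale (1959), for buyers with budgets m_i in a Fisher market:
\[
\max \;\; \sum_i m_i \log u_i \qquad \text{s.t.} \quad u \text{ is a feasible allocation.}
\]
```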


May 2 Randomized K-Server on Hierarchical Binary Trees

Adam Meyerson, UCLA

Metric K-Server is a major problem in the area of online algorithms. We are given a metric space along with a set of K initial server locations, and a sequence of location requests arrives one at a time. As each request arrives, we must move one of our servers to that location. The goal is to minimize the total (or average) distance traveled by servers. This models emergency response, and also has a wide range of applications relating to caching and paging, robot navigation, and reconfigurable computing. In general we cannot compute an optimum solution without foreknowledge of the request sequence, so we will seek an algorithm with good competitive performance (minimizing the worst-case ratio of our total distance traveled to the optimum).

The major result on K-Server is the Work Function Algorithm of Koutsoupias and Papadimitriou, which establishes a (2k-1)-competitive deterministic algorithm. Slightly improved results (k-competitive) exist for some special cases, and a deterministic lower bound of k is also known. However, in many cases it is possible to improve online competitive results using a randomized algorithm. In this talk, I will present a randomized algorithm for K-Server on a special class of metric spaces: hierarchical binary trees. While this seems a restrictive case, a slight generalization of this result to non-binary hierarchically structured trees would apply to arbitrary finite metrics because of a recent set of results on randomized metric embedding. This talk presents a number of new ideas, including a model of an online problem as a hierarchy of independent online decision makers and an improved algorithm for a special case of the metrical task system problem.

This is joint work with Aaron Cote (UCLA) and Laura Poplawski (Northeastern) and has been accepted to STOC 2008.


May 16 The VPN conjecture is true

Navin Goyal, Georgia Tech

We consider the following network design problem. A group of nodes (terminals) in a large network wishes to reserve bandwidth to form a sub-network called a virtual private network (VPN). The VPN should be able to support various communication patterns that may arise between the terminals. These communication patterns may change with time, and the only restriction on them is that for each terminal there is an upper bound on the total amount of traffic handled by that terminal. This leads to the well-studied VPN design problem, wherein we must find paths between every pair of terminals and reserve sufficient capacity on the edges of these paths so that all possible communication patterns satisfying the given upper bounds can be routed. Each edge has cost proportional to the bandwidth reserved on it. The goal is to minimize the total cost of the VPN.

Formally, we are given an undirected graph G=(V,E) with edge costs c(e) and a set of terminal nodes W. A hose demand matrix for W is any symmetric matrix [D_{ij}] such that for each i, \sum_{j \neq i} D_{ij} \leq 1. We must compute the minimum-cost edge capacities that can support the oblivious routing of every hose matrix in the network. An oblivious routing template is a simple path P_{ij} for each pair i, j \in W. Given such a template, if we are to route a demand matrix D, then for each i, j we send D_{ij} units of flow along P_{ij}. A well-studied conjecture states that there exists an optimal VPN that is a tree, i.e., the union of the paths P_{ij} is a tree. In this talk, I will explain our recent proof of this conjecture. This also yields the first polynomial-time algorithm for computing an optimal VPN.

This is joint work with Neil Olver and Bruce Shepherd.