PARTIAL LIST OF ABSTRACTS: DIMACS Workshop on Approximation, Feb 20-22, 2000

Yossi Azar

Approximation Algorithms for Unsplittable Flow and the Half-Disjoint Paths Problem

We consider the bounded unsplittable flow problem. In this problem we are given a (possibly directed) graph with $n$ vertices and $m$ edges, and a set of terminal pairs of vertices, each with a demand of at most $1/K$ for a given $K$. The objective is to connect a subset of the terminal pairs, each by a single flow path, so as to maximize the total demand of the satisfied terminal pairs subject to unit capacity constraints. The case $K=1$ is the general unsplittable flow problem, which becomes the well-known edge-disjoint paths problem if all demands are exactly $1$. The case $K=2$ is called the bounded case, and it becomes the half-disjoint paths problem if all demands are exactly $1/2$. The best approximation algorithm for the unsplittable flow problem achieves a ratio of $O(\sqrt{m})$, and no better algorithm exists unless $P=NP$. We show an $O(\sqrt{n})$ approximation algorithm for the bounded unsplittable flow problem (and in particular for half-disjoint paths). For larger $K$ we show a better bound of $O(Kn^{1/K})$. Our algorithms also work in the online setting with almost the same performance. Here, the terminal pairs arrive in an arbitrary order, and each must be connected or rejected before the arrival of the next terminal pair. We also show that these results are tight in the online setting by proving a matching lower bound.
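For intuition, the objective can be made concrete with a brute-force reference implementation for tiny instances: enumerate subsets of terminal pairs and single-path routings, and keep the best subset whose routing respects the unit edge capacities. This is only an illustrative sketch of the problem definition, not the talk's algorithm; all identifiers are our own.

```python
from itertools import combinations

def all_simple_paths(adj, s, t, path=None):
    """Yield every simple s-t path as a list of directed edges."""
    if path is None:
        path = [s]
    if s == t:
        yield [(path[i], path[i + 1]) for i in range(len(path) - 1)]
        return
    for v in adj.get(s, []):
        if v not in path:
            yield from all_simple_paths(adj, v, t, path + [v])

def try_route(adj, chosen, idx, load):
    """Try to route chosen[idx:] on single paths, one path per pair,
    never letting the load on any edge exceed the unit capacity."""
    if idx == len(chosen):
        return True
    s, t, d = chosen[idx]
    for path in all_simple_paths(adj, s, t):
        if all(load.get(e, 0.0) + d <= 1.0 + 1e-9 for e in path):
            for e in path:
                load[e] = load.get(e, 0.0) + d
            if try_route(adj, chosen, idx + 1, load):
                return True
            for e in path:              # backtrack
                load[e] -= d
    return False

def opt_unsplittable(adj, pairs):
    """Brute-force optimum: maximum total demand over subsets of
    terminal pairs (s, t, demand) that are simultaneously routable."""
    best = 0.0
    for r in range(len(pairs), 0, -1):
        for subset in combinations(pairs, r):
            if try_route(adj, list(subset), 0, {}):
                best = max(best, sum(d for _, _, d in subset))
    return best
```

On the directed graph a->b, a->c, b->c with three (a, c) pairs of demand 1/2, two pairs fit on the direct edge and one via b, so the optimum is 1.5.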

Joint work with Oded Regev


Joseph Cheriyan

Approximating minimum metric cost k-outconnected subgraphs

We give a 3-approximation algorithm for the following problem: given a complete graph with METRIC edge weights and node requirements $r_u$ for each node $u$, find a minimum-weight subgraph that contains $\max\{r_u,r_v\}$ openly disjoint paths between every pair of nodes $u,v$.

The algorithm is based on the following ``splitting off'' result: Let $G$ be a graph which is $k$-outconnected from a specified root node $s$, that is, $G$ has $k$ openly disjoint paths between $s$ and $v$ for every node $v$. We give necessary and sufficient conditions for the existence of a pair $sv,sw$ of edges such that replacing these edges by a new edge $vw$ gives a graph that is $k$-outconnected from $s$.
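The $k$-outconnectivity condition can be checked directly via Menger's theorem: the number of openly disjoint $s$-$v$ paths equals the value of a unit-capacity max flow after splitting vertices. The following is an illustrative sketch of that check (not part of the splitting-off result itself), with all names our own.

```python
from collections import defaultdict, deque

def openly_disjoint_paths(n, edges, s, t):
    """Count openly (internally vertex-) disjoint s-t paths in an
    undirected graph: split each vertex v into v_in = 2v and
    v_out = 2v+1 joined by a unit-capacity arc, then run
    unit-capacity max flow from s_out to t_in (Menger's theorem)."""
    cap = defaultdict(int)
    adj = defaultdict(set)

    def add_edge(u, v, c):
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)                  # residual arc

    big = n + 1                        # effectively infinite capacity
    for v in range(n):
        add_edge(2 * v, 2 * v + 1, big if v in (s, t) else 1)
    for u, v in edges:
        add_edge(2 * u + 1, 2 * v, 1)
        add_edge(2 * v + 1, 2 * u, 1)

    src, sink, flow = 2 * s + 1, 2 * t, 0
    while True:
        prev = {src: None}             # BFS for an augmenting path
        queue = deque([src])
        while queue and sink not in prev:
            u = queue.popleft()
            for v in adj[u]:
                if v not in prev and cap[(u, v)] > 0:
                    prev[v] = u
                    queue.append(v)
        if sink not in prev:
            return flow
        v = sink                       # augment by one unit
        while prev[v] is not None:
            u = prev[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1
```

A graph is $k$-outconnected from $s$ exactly when this count is at least $k$ for every other vertex $v$; in $K_4$ every pair has three openly disjoint paths.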

(Joint work with T.Jordan and Z.Nutov)


Moses Charikar

Approximation Algorithms for Min-Sum Clustering


Given a set of points in a metric space, we consider the problem of partitioning the points into $k$ clusters so as to minimize the sum of all intra-cluster pairwise distances. Guttmann-Beck and Hassin gave a 2-approximation for this problem when the number of clusters $k$ is constant. No non-trivial result is known for general $k$. We give a bicriteria approximation for this problem for any value of $k$, producing a solution with at most $\alpha k$ clusters, with cost bounded by $\beta$ times the optimal cost (for $k$ clusters). Here $\alpha, \beta > 1$ are constants.
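The objective, and the exact optimum it is compared against, can be written down directly for tiny instances. This is an illustrative sketch (exhaustive search, not the bicriteria algorithm); all names are our own.

```python
from itertools import product

def cluster_cost(points, labels, dist):
    """Sum of intra-cluster pairwise distances for a given labeling."""
    total = 0.0
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if labels[i] == labels[j]:
                total += dist(points[i], points[j])
    return total

def brute_force_min_sum(points, k, dist):
    """Optimal k-clustering by exhaustive search over all labelings
    (exponential in len(points); for intuition on tiny inputs only)."""
    best = None
    for labels in product(range(k), repeat=len(points)):
        c = cluster_cost(points, labels, dist)
        if best is None or c < best[0]:
            best = (c, labels)
    return best
```

For the points 0, 1, 10, 11 on the line with $k=2$, the optimum separates the two close pairs, for a total cost of 2.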

Irit Dinur

On the hardness of approximating Label-Cover


The Label Cover problem is a combinatorial graph problem that has been used to show hardness of approximation for numerous problems. We present a direct combinatorial reduction from low error-probability PCP to Label Cover, thus improving previously known hardness results.

Based on joint work with Shmuel Safra.

Guy Even

Improved Approximations of Crossings in Graph Drawings and VLSI Layout Areas


We give improved approximations for two classical embedding problems. First, we compute drawings of bounded degree graphs on the plane in which the sum of the number of vertices and the number of crossings is $O(\log^3 n)$ times the minimum. This is an improvement of a logarithmic factor relative to the best known result (Bhatt-Leighton and Leighton-Rao). This has consequences in a variety of VLSI layout problems. Second, we compute a VLSI layout of a graph of degree $4$ in a grid of constant aspect ratio. The area of the layout is $O(\log^4 n)$ times the minimum layout area. This is an $O(\log^2 n)$ improvement over Bhatt-Leighton and Leighton-Rao.

Joint work with Sudipto Guha and Baruch Schieber.


Sudipto Guha

Nested Graph Dissection and Approximation Algorithms


We provide a framework for dissecting a graph with respect to sub-additive estimators, handling a constant number of them simultaneously, such that the upper bounds on the estimators over the induced subtrees of the dissection decrease geometrically. In some sense this achieves a partition of the graph with respect to (several) unknown estimators, given that the only handle we have on these estimators is a test for the smallest separator.

We present improved approximations for chordal completion and related problems as an example. We also improve (in some cases) the approximation results for the minimum drawing size and layout area of a graph on a grid, for pathwidth, and for elimination height.

Anupam Gupta

A Constant Factor Approximation Algorithm for a Class of Classification Problems


David Johnson

Average-Case Analysis of the Sum-of-Squares Bin Packing Algorithm


In a "discrete distribution" we are given a bin capacity B and a probability distribution over item sizes from 1 to B-1. Such distributions model many real-world applications of bin packing, where the goal is typically to minimize the total number of bins used or, equivalently, the wasted space in the non-empty bins of the packing. If N is the number of items generated, it is known that for any discrete distribution the total expected waste of an optimal packing grows as Theta(N), Theta(sqrt(N)), or O(1), depending on the distribution. Standard bin packing algorithms such as Best Fit can be far from optimal, for instance yielding linear waste under distributions where the optimal expected waste is O(1). This is true even for offline algorithms such as Best Fit Decreasing.

We describe a remarkably simple O(nB)-time online algorithm, the "Sum-of-Squares" algorithm (SS), for which the expected wasted space is provably O(sqrt(N)) whenever the optimal expected waste is sublinear. Based on this, we then present an O(nB log B) algorithm that essentially matches the optimal expected waste growth rate for all discrete distributions.
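One standard statement of the SS rule (from the authors' paper; we sketch it here under that assumption, with identifiers our own): maintain a count N(g) of open bins whose remaining gap is g, for 1 <= g <= B-1, and place each arriving item, in a legal open bin or a new one, so that the resulting sum over g of N(g)^2 is smallest.

```python
def sum_of_squares_pack(items, B):
    """Online Sum-of-Squares (SS) packing.  N[g] counts the open bins
    with remaining gap g (1 <= g <= B-1); each arriving item of size s
    is placed so that sum_g N[g]^2 is minimized after placement.
    Returns the number of bins used."""
    N = [0] * (B + 1)            # gap counts; indices 0 and B unused
    gaps = []                    # current gap of every bin opened so far
    for s in items:
        best = None              # (change in sum of squares, chosen gap)
        # candidates: an open bin with gap g >= s, or a new bin (g == B)
        for g in range(s, B + 1):
            if g < B and N[g] == 0:
                continue         # no open bin currently has this gap
            delta = 0
            if g < B:            # one bin leaves gap class g ...
                delta += (N[g] - 1) ** 2 - N[g] ** 2
            if g - s > 0:        # ... and enters gap class g - s
                delta += (N[g - s] + 1) ** 2 - N[g - s] ** 2
            if best is None or delta < best[0]:
                best = (delta, g)
        g = best[1]
        if g == B:
            gaps.append(B - s)   # open a new bin
        else:
            N[g] -= 1
            gaps[gaps.index(g)] = g - s
        if g - s > 0:
            N[g - s] += 1
    return len(gaps)
```

On the sequence 6, 4, 6, 4 with B = 10, SS closes each bin perfectly and uses two bins.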

The asymptotic worst-case ratio for SS lies somewhere between 1.54 and 3, but SS can be incorporated into a randomized offline O(nB log B) algorithm whose worst-case expected ratio is asymptotically 1.

This is joint work with Janos Csirik, Claire Kenyon, Jim Orlin,

Peter Shor, and Richard Weber.


Marek Karpinski

Approximation Hardness of Sorting by Reversals


We prove that the problem MIN-SBR of sorting a permutation by the minimum number of reversals is hard to approximate (NP-hard under randomized reductions) within any constant factor less than some explicit threshold. This excludes the existence of a PTAS for this problem under the usual assumptions, thus settling a question that was open for some time. The proof uses certain new explicit approximation-hardness techniques for bounded-dependency and bounded-degree optimization problems.

The MIN-SBR problem has been motivated and extensively studied in computational molecular biology, but the existence of a PTAS remained an open issue. The problem is also connected to the well-known problem of sorting by prefix reversals (so-called pancake sorting). Our nonapproximability result for MIN-SBR is in sharp contrast to its signed version, for which efficient exact algorithms have been designed recently. (Joint work with P.Berman.)
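For very small permutations the quantity that MIN-SBR asks to approximate can be computed exactly by breadth-first search over permutations. An illustrative sketch (exponential, for intuition only; names are our own):

```python
from collections import deque

def reversal_distance(perm):
    """Minimum number of reversals needed to sort perm, found by BFS
    over the permutation graph (feasible only for tiny lengths)."""
    target = tuple(sorted(perm))
    start = tuple(perm)
    dist = {start: 0}
    queue = deque([start])
    while queue:
        p = queue.popleft()
        if p == target:
            return dist[p]
        n = len(p)
        for i in range(n):                    # reverse the segment p[i:j]
            for j in range(i + 2, n + 1):
                q = p[:i] + p[i:j][::-1] + p[j:]
                if q not in dist:
                    dist[q] = dist[p] + 1
                    queue.append(q)
    return None
```

For example, the permutation 3, 1, 2 needs two reversals: no single reversal sorts it, but reversing the first two elements and then the last two does.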



Sanjeev Khanna

On the Hardness of 4-coloring a 3-colorable Graph


We give a new proof of the result that it is NP-hard to color a 3-colorable graph using just four colors. In contrast to the previous proof of this result, our proof does not rely on the PCP theorem. Our construction also shows that 4-coloring of 3-colorable graphs remains NP-hard even on bounded-degree graphs.
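For contrast with the hardness result, deciding k-colorability is trivial to state as an exhaustive check; the difficulty, of course, is that no polynomial-time procedure is expected to exist. A toy sketch with illustrative names:

```python
from itertools import product

def is_k_colorable(n, edges, k):
    """Exhaustively test whether a graph on vertices 0..n-1 admits a
    proper k-coloring (exponential time; illustration only)."""
    for coloring in product(range(k), repeat=n):
        # a coloring is proper if no edge joins two like-colored vertices
        if all(coloring[u] != coloring[v] for u, v in edges):
            return True
    return False
```

A triangle is 3-colorable but not 2-colorable, and $K_4$ is not 3-colorable.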

This is joint work with Venkatesan Guruswami.


Jon Kleinberg

Fairness in Routing and Load Balancing


We consider the issue of network routing subject to explicit fairness conditions. The optimization of fairness criteria interacts in a complex fashion with the optimization of network utilization and throughput; in this work, we undertake an investigation of this relationship through the framework of approximation algorithms. In a range of settings including both high-speed networks and Internet applications, max-min fairness has emerged as a widely accepted formulation of the notion of fairness. Informally, we say that an allocation of bandwidth is max-min fair if there is no way to give more bandwidth to any connection without decreasing the allocation to a connection of lesser or equal bandwidth. Given a collection of transmission routes, this criterion imposes a certain equilibrium condition on the bandwidth allocation, and some simple flow control mechanisms converge quickly to this equilibrium state. Indeed, the vast majority of previous work on max-min fairness has focused on this issue of associating rates with connections that are specified by a fixed set of paths. Very little work has been devoted to understanding the relationship between the way in which one selects paths for routing, and the amount of throughput one obtains from the resulting max-min fair allocation on these paths.
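For a fixed set of paths, the max-min fair equilibrium is what the classical progressive-filling procedure computes: raise all rates at the same speed, and when an edge saturates, freeze every connection that crosses it. A minimal sketch, assuming each connection's path is given as a set of edges (all names illustrative):

```python
def max_min_rates(paths, capacity):
    """Progressive filling on fixed paths.  paths[i] is the set of
    edges used by connection i; capacity maps each edge to its
    bandwidth.  Returns the max-min fair rate of each connection."""
    rates = [0.0] * len(paths)
    residual = dict(capacity)
    active = set(range(len(paths)))
    while active:
        # number of still-active connections crossing each edge
        users = {e: sum(1 for i in active if e in paths[i])
                 for e in residual}
        # largest uniform increment before some edge saturates
        inc = min(residual[e] / users[e] for e in residual if users[e] > 0)
        for i in active:
            rates[i] += inc
        for e in residual:
            residual[e] -= inc * users[e]
        # freeze every connection that now crosses a saturated edge
        active = {i for i in active
                  if all(residual[e] > 1e-9 for e in paths[i])}
    return rates
```

With two connections sharing an edge of capacity 1, one of which also crosses an edge of capacity 0.25, the bottlenecked connection is frozen at rate 0.25 and the other rises to 0.75: no connection can gain without hurting one with a smaller allocation.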

In this work, we consider the problem of selecting paths for routing so as to provide a bandwidth allocation that is as fair as possible (in the max-min sense). We obtain the first approximation algorithms for this basic optimization problem, for single-source unsplittable routings in an arbitrary directed graph. Special cases of our model include several fundamental load balancing problems, endowing them with a natural fairness criterion to which our approach can be applied. Our results form an interesting counterpart to earlier work of Megiddo, who considered max-min fairness for single-source fractional flow. The optimization problems in our setting become NP-hard, and require the development of new techniques for relating fractional relaxations of routing to the equilibrium constraints imposed by the fairness criterion.


Joint work with Yuval Rabani and Eva Tardos.


Benjamin Sudakov

On two segmentation problems


The hypercube segmentation problem is the following: given a set $S$ of $m$ vertices of the discrete $d$-dimensional cube $\{0,1\}^d$, find $k$ vertices $P_1, \ldots, P_k \in \{0,1\}^d$ and a partition of $S$ into $k$ segments $S_1, \ldots, S_k$ so as to maximize the sum $$\sum_{i=1}^k \sum_{c \in S_i} P_i \odot c,$$ where $\odot$ is the overlap operator between two vertices of the $d$-dimensional cube, defined to be the number of positions in which they agree.

This problem (among others) was considered in a recent paper by Kleinberg, Papadimitriou and Raghavan, where the authors design a deterministic approximation algorithm that runs in polynomial time for every fixed $k$ and produces a solution whose value is within $0.828$ of the optimum, as well as a randomized algorithm that runs in linear time for every fixed $k$ and produces a solution whose expected value is within $0.7$ of the optimum.

Here we design an improved approximation algorithm: for every fixed $\epsilon > 0$ and every fixed $k$, our algorithm produces in linear time a solution whose value is within $(1-\epsilon)$ of the optimum. Therefore, for every fixed $k$, this is a polynomial-time approximation scheme for the problem. The algorithm is deterministic, but it is convenient to first describe it as a randomized algorithm and then derandomize it using some properties of expander graphs. We also consider a segmented version of the minimum spanning tree problem, where we show that no approximation can be achieved in polynomial time unless P=NP.
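Two elementary observations make the objective easy to explore on toy instances: for a fixed segment the optimal $P_i$ is the coordinate-wise majority vote, and for tiny $S$ the optimum can be found by brute force over all partitions. An illustrative sketch (not the expander-based algorithm; names are our own):

```python
from itertools import product

def overlap(p, c):
    """The overlap operator: number of positions where p and c agree."""
    return sum(1 for a, b in zip(p, c) if a == b)

def best_vertex(segment, d):
    """For a fixed segment, the optimal P is the coordinate-wise
    majority vote over the segment (ties broken arbitrarily)."""
    return tuple(int(2 * sum(c[j] for c in segment) >= len(segment))
                 for j in range(d))

def segmentation_value(S, k, d):
    """Optimal objective value by brute force over all partitions of S
    into k segments (exponential; tiny inputs only)."""
    best = 0
    for labels in product(range(k), repeat=len(S)):
        value = 0
        for i in range(k):
            seg = [S[j] for j in range(len(S)) if labels[j] == i]
            if seg:
                p = best_vertex(seg, d)
                value += sum(overlap(p, c) for c in seg)
        best = max(best, value)
    return best
```

For instance, with $S = \{(0,0), (0,0), (1,1)\}$ and $k = 2$, grouping the two equal vertices together achieves the maximum possible value $3d = 6$.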