Results 1–10 of 21
A Tight Bound on Approximating Arbitrary Metrics by Tree Metrics
 In Proceedings of the 35th Annual ACM Symposium on Theory of Computing, 2003
Abstract

Cited by 317 (8 self)
In this paper, we show that any n-point metric space can be embedded into a distribution over dominating tree metrics such that the expected stretch of any edge is O(log n). This improves upon the result of Bartal, who gave a bound of O(log n log log n). Moreover, our result is existentially tight: there exist metric spaces where any tree embedding must have distortion Ω(log n). This problem lies at the heart of numerous approximation and online algorithms, including ones for group Steiner tree, metric labeling, buy-at-bulk network design, and metrical task systems. Our result improves the performance guarantees for all of these problems.
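The flavor of construction behind such results can be illustrated with a toy random hierarchical decomposition in the spirit of the FRT scheme — a minimal sketch, not the paper's exact algorithm; the point and distance representations are illustrative assumptions:

```python
import math
import random

def random_tree_metric(points, d, rng):
    """Sketch: random hierarchical decomposition in the spirit of FRT.
    points: distinct hashable points; d(u, v): the input metric; rng: random.Random.
    Returns a dominating tree (ultrametric) distance function."""
    pairs = [(u, v) for u in points for v in points if u != v]
    dmax = max(d(u, v) for u, v in pairs)
    dmin = min(d(u, v) for u, v in pairs)
    top = math.ceil(math.log2(dmax))       # a single cluster above this level
    bot = math.floor(math.log2(dmin)) - 1  # singleton clusters at this level
    pi = list(points)
    rng.shuffle(pi)                        # one random permutation for all levels
    beta = rng.uniform(1.0, 2.0)           # one random radius scale
    center = {x: pi[0] for x in points}    # level `top`: everyone in one cluster
    history = []
    for i in range(top - 1, bot - 1, -1):
        r = beta * 2.0 ** i                # clusters of radius ~ 2^i
        center = {x: next(c for c in pi
                          if center[c] == center[x] and d(c, x) <= r)
                  for x in points}
        history.append((i, center))
    def tree_dist(u, v):
        for i, cen in history:             # highest level that separates u and v
            if cen[u] != cen[v]:
                return 2.0 ** (i + 3)      # geometric edge weights, collapsed
        return 0.0
    return tree_dist
```

The returned distance dominates the input metric for any random choices; the O(log n) expected-stretch analysis is what the paper contributes.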
Measured descent: A new embedding method for finite metrics
 In Proc. 45th FOCS, 2004
Abstract

Cited by 98 (32 self)
We devise a new embedding technique, which we call measured descent, based on decomposing a metric space locally, at varying speeds, according to the density of some probability measure. This provides a refined and unified framework for the two primary methods of constructing Fréchet embeddings for finite metrics, due to [Bourgain, 1985] and [Rao, 1999]. We prove that any n-point metric space (X, d) embeds in Hilbert space with distortion O(√(α_X · log n)), where α_X is a geometric estimate on the decomposability of X. As an immediate corollary, we obtain an O(√(log λ_X · log n)) distortion embedding, where λ_X is the doubling constant of X. Since λ_X ≤ n, this result recovers Bourgain’s theorem, but when the metric X is, in a sense, “low-dimensional,” improved bounds are achieved. Our embeddings are volume-respecting for subsets of arbitrary size. One consequence is the existence of (k, O(log n)) volume-respecting embeddings for all 1 ≤ k ≤ n, which is the best possible, and answers positively a question posed by U. Feige. Our techniques are also used to answer positively a question of Y. Rabinovich, showing that any weighted n-point planar graph embeds in ℓ∞ of dimension O(log n) with O(1) distortion. The O(log n) bound on the dimension is optimal, and improves upon the previously known bound of O(log² n).
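The Fréchet embeddings referenced here use distances to subsets as coordinates. A minimal Bourgain-flavored sketch — the sampling scheme is a simplified assumption, not the paper's measured-descent construction:

```python
import math
import random

def frechet_embedding(points, d, rng):
    """Sketch of a Bourgain-style Frechet embedding: each coordinate is the
    distance to a random subset, with subsets sampled at geometrically
    varying densities.  Each coordinate x -> d(x, S) is 1-Lipschitz by the
    triangle inequality, so the map is non-expansive coordinatewise."""
    n = len(points)
    L = max(1, math.ceil(math.log2(n)))
    subsets = []
    for t in range(1, L + 1):                 # densities 1/2, 1/4, ..., 1/2^L
        for _ in range(L):                    # O(log n) subsets per density
            S = [x for x in points if rng.random() < 2.0 ** -t]
            if S:
                subsets.append(S)
    def embed(x):
        return [min(d(x, s) for s in S) for S in subsets]
    return embed
```

The non-trivial part of Bourgain's theorem — that these coordinates also do not contract distances by more than O(log n) after normalization — is exactly what measured descent refines.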
Approximate labeling via graph cuts based on linear programming
 In Pattern Analysis and Machine Intelligence, 2007
Abstract

Cited by 74 (8 self)
A new framework is presented for both understanding and developing graph-cut based combinatorial algorithms suitable for the approximate optimization of a very wide class of MRFs that are frequently encountered in computer vision. The proposed framework utilizes tools from the duality theory of linear programming in order to provide an alternative and more general view of state-of-the-art techniques like the α-expansion algorithm, which is included merely as a special case. Moreover, contrary to α-expansion, the derived algorithms generate solutions with guaranteed optimality properties for a much wider class of problems, e.g. even for MRFs with non-metric potentials. In addition, they are capable of providing per-instance suboptimality bounds in all occasions, including discrete Markov Random Fields with an arbitrary potential function. These bounds prove to be very tight in practice (i.e. very close to 1), which means that the resulting solutions are almost optimal. Our algorithms’ effectiveness is demonstrated by presenting experimental results on a variety of low-level vision tasks, such as stereo matching, image restoration, image completion and optical flow estimation, as well as on synthetic problems.
Nonembeddability theorems via Fourier analysis
Abstract

Cited by 53 (12 self)
Various new nonembeddability results (mainly into L1) are proved via Fourier analysis. In particular, it is shown that the Edit Distance on {0, 1}^d has L1 distortion (log d)^(1/2 − o(1)). We also give new lower bounds on the L1 distortion of flat tori, quotients of the discrete hypercube under group actions, and the transportation cost (Earthmover) metric.
A linear programming formulation and approximation algorithms for the metric labeling problem
 SIAM J. Discrete Math
Abstract

Cited by 43 (1 self)
We consider approximation algorithms for the metric labeling problem. This problem was introduced in a paper by Kleinberg and Tardos [J. ACM, 49 (2002), pp. 616–630] and captures many classification problems that arise in computer vision and related fields. They gave an O(log k log log k) approximation for the general case, where k is the number of labels, and a 2-approximation for the uniform metric case. (In fact, the bound for general metrics can be improved to O(log k) by the work of Fakcharoenphol, Rao, and Talwar [Proceedings …
The hardness of metric labeling
 In IEEE Symposium on Foundations of Computer Science (FOCS’04), 2004
Abstract

Cited by 16 (3 self)
The Metric Labeling problem is an elegant and powerful mathematical model capturing a wide range of classification problems. The input to the problem consists of a set of labels and a weighted graph. Additionally, a metric distance function on the labels is defined, and for each label and each vertex, an assignment cost is given. The goal is to find a minimum-cost assignment of the vertices to the labels. The cost of the solution consists of two parts: the assignment costs of the vertices and the separation costs of the edges (each edge pays its weight times the distance between the two labels to which its endpoints are assigned). Due to the simple structure and variety of the applications, the problem and its special cases (with various distance functions on the labels) have recently received much attention. Metric Labeling has a known logarithmic approximation, and it has been an open question for several years whether a constant approximation exists. We refute this possibility and show that no constant approximation can be obtained for the problem unless P=NP, and we also show that the problem is hard to approximate unless NP has quasi-polynomial time algorithms.
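The objective just described — assignment costs plus weight-times-label-distance separation costs — can be stated directly in code; the data layout below is an illustrative assumption:

```python
def labeling_cost(assignment, assign_cost, edges, label_dist):
    """Metric Labeling objective as described above: per-vertex assignment
    costs plus, for each weighted edge (u, v, w), w times the metric
    distance between the labels assigned to its endpoints."""
    cost = sum(assign_cost[v][assignment[v]] for v in assignment)
    cost += sum(w * label_dist(assignment[u], assignment[v]) for u, v, w in edges)
    return cost
```

For example, with a line metric on labels, an edge of weight 2 whose endpoints receive labels 0 and 1 contributes a separation cost of 2.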
Vertex sparsifiers: New results from old techniques
 In 13th International Workshop on Approximation, Randomization, and Combinatorial Optimization, volume 6302 of Lecture Notes in Computer Science, 2010
Abstract

Cited by 15 (6 self)
Given a capacitated graph G = (V, E) and a set of terminals K ⊆ V, how should we produce a graph H only on the terminals K so that every (multicommodity) flow between the terminals in G could be supported in H with low congestion, and vice versa? (Such a graph H is called a flow-sparsifier for G.) What if we want H to be a “simple” graph? What if we allow H to be a convex combination of simple graphs? Improving on results of Moitra [FOCS 2009] and Leighton and Moitra [STOC 2010], we give efficient algorithms for constructing: (a) a flow-sparsifier H that maintains congestion up to a factor of O(log k / log log k), where k = |K|; (b) a convex combination of trees over the terminals K that maintains congestion up to a factor of O(log k); (c) for a planar graph G, a convex combination of planar graphs that maintains congestion up to a constant factor. This requires us to give a new algorithm for the 0-extension problem, the first one in which the preimages of each terminal are connected in G. Moreover, this result extends to minor-closed families of graphs. Our bounds immediately imply improved approximation guarantees for several terminal-based cut and ordering problems.
On Earthmover Distance, Metric Labeling, and 0-Extension
, 2006
Abstract

Cited by 9 (0 self)
We study the fundamental classification problems 0-Extension and Metric Labeling. 0-Extension is closely …
Consistent weighted sampling
, 2008
Abstract

Cited by 5 (2 self)
We describe an efficient procedure for sampling representatives from a weighted set such that, for any weightings S and T, the probability that the two choose the same sample is the Jaccard similarity:

Pr[sample(S) = sample(T)] = Σ_x min(S(x), T(x)) / Σ_x max(S(x), T(x))

The sampling process takes expected time linear in the number of nonzero weights, independent of the weights themselves. We discuss and develop the implementation of our sampling schemes, reducing the requisite computation substantially, and reducing the randomness required to only four bits in expectation.
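The collision property above can be checked with a much simpler (and slower) scheme for integer weights — a replication-based min-hash sketch, not the paper's linear-expected-time procedure; the hashing trick and names are illustrative assumptions:

```python
import random

def weighted_sample(weights, seed):
    """Sample from an integer-weighted set so that two sets collide with
    probability equal to their weighted Jaccard similarity.  Element x with
    weight w contributes copies (x, 1)..(x, w); the sample is the copy with
    the smallest shared hash value (replication sketch, not the paper's
    constant-time-per-element scheme)."""
    best, best_h = None, 2.0
    for x, w in weights.items():
        for i in range(1, w + 1):
            # shared pseudo-random hash for copy (x, i), consistent across sets
            h = random.Random(f"{seed}:{x}:{i}").random()
            if h < best_h:
                best, best_h = (x, i), h
    return best

def weighted_jaccard(S, T):
    """Weighted Jaccard similarity: sum of minima over sum of maxima."""
    keys = set(S) | set(T)
    num = sum(min(S.get(x, 0), T.get(x, 0)) for x in keys)
    den = sum(max(S.get(x, 0), T.get(x, 0)) for x in keys)
    return num / den
```

Running many independently seeded trials and counting how often two weighted sets pick the same (element, copy) pair empirically recovers their weighted Jaccard similarity.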