Results 1-10 of 64
The geometry of graphs and some of its algorithmic applications
Combinatorica, 1995
Cited by 543 (20 self)

Abstract:
In this paper we explore some implications of viewing graphs as geometric objects. This approach offers a new perspective on a number of graph-theoretic and algorithmic problems. There are several ways to model graphs geometrically, and our main concern here is with geometric representations that respect the metric of the (possibly weighted) graph. Given a graph G, we map its vertices to a normed space in an attempt to (i) keep down the dimension of the host space and (ii) guarantee a small distortion, i.e., make sure that distances between vertices in G closely match the distances between their geometric images. In this paper we develop efficient algorithms for embedding graphs low-dimensionally with a small distortion. Further algorithmic applications include:
- A simple, unified approach to a number of problems on multicommodity flows, including the Leighton-Rao Theorem [29] and some of its extensions.
- For graphs embeddable in low-dimensional spaces with a small distortion, we can find low-diameter decompositions (in the sense of [4] and [34]). The parameters of the decomposition depend only on the dimension and the distortion, not on the size of the graph.
- In graphs embedded this way, small balanced separators can be found efficiently.
Faithful low-dimensional representations of statistical data allow for meaningful and efficient clustering, which is one of the most basic tasks in pattern recognition. For the (mostly heuristic) methods used ...
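The "distortion" in this abstract has a short concrete reading: it is the product of the worst-case expansion and worst-case contraction of pairwise distances under the embedding. A minimal sketch (not from the paper; the example graph and embedding are illustrative):

```python
import itertools
import math

def distortion(d_graph, points):
    """Distortion of mapping vertex i -> points[i] in Euclidean space:
    (max expansion) * (max contraction) over all vertex pairs."""
    n = len(points)
    expansion = contraction = 0.0
    for i, j in itertools.combinations(range(n), 2):
        d_emb = math.dist(points[i], points[j])
        expansion = max(expansion, d_emb / d_graph[i][j])
        contraction = max(contraction, d_graph[i][j] / d_emb)
    return expansion * contraction

# 4-cycle C4: opposite vertices are at graph distance 2; mapping the
# cycle onto the unit square contracts the diagonals by sqrt(2).
d = [[0, 1, 2, 1],
     [1, 0, 1, 2],
     [2, 1, 0, 1],
     [1, 2, 1, 0]]
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(distortion(d, square))  # -> sqrt(2) = 1.414...
```

The square is in fact optimal here: the 4-cycle is a standard example requiring distortion sqrt(2) in any Euclidean embedding.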
Euclidean distortion and the Sparsest Cut
In Proceedings of the 37th Annual ACM Symposium on Theory of Computing, 2005
Cited by 120 (25 self)

Abstract:
Bi-Lipschitz embeddings of finite metric spaces, a topic originally studied in geometric analysis and Banach space theory, became an integral part of theoretical computer science following work of Linial, London, and Rabinovich [29]. They presented an algorithmic version of a result of Bourgain [8] which shows that every ...
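Bourgain's result referenced here shows every n-point metric embeds in Euclidean space with O(log n) distortion, and the algorithmic version builds Fréchet coordinates from distances to random subsets. A rough sketch under that reading (subset sizes and repetition counts are simplified, not the paper's exact parameters):

```python
import math
import random

def bourgain_embedding(d, seed=0):
    """Fréchet-style embedding: each coordinate is the distance to a
    random subset, using subsets of size 2^j at each scale j with
    O(log n) independent repetitions per scale. Each coordinate map
    x -> d(x, S) is 1-Lipschitz, so no coordinate expands distances;
    Bourgain's analysis bounds the contraction by O(log n)."""
    rng = random.Random(seed)
    n = len(d)
    scales = max(1, math.ceil(math.log2(n)))
    coords = [[] for _ in range(n)]
    for j in range(1, scales + 1):
        for _ in range(scales):  # repetitions per scale
            s = rng.sample(range(n), min(2 ** j, n))
            for x in range(n):
                coords[x].append(min(d[x][y] for y in s))
    return coords

# Path metric on three points 0-1-2.
d = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
emb = bourgain_embedding(d)
print(len(emb[0]))  # number of coordinates = scales * repetitions -> 4
```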
Measured descent: A new embedding method for finite metrics
In Proc. 45th FOCS, 2004
Cited by 98 (32 self)

Abstract:
We devise a new embedding technique, which we call measured descent, based on decomposing a metric space locally, at varying speeds, according to the density of some probability measure. This provides a refined and unified framework for the two primary methods of constructing Fréchet embeddings for finite metrics, due to [Bourgain, 1985] and [Rao, 1999]. We prove that any n-point metric space (X, d) embeds in Hilbert space with distortion O(√(α_X · log n)), where α_X is a geometric estimate of the decomposability of X. As an immediate corollary, we obtain an O(√(log λ_X · log n))-distortion embedding, where λ_X is the doubling constant of X. Since λ_X ≤ n, this result recovers Bourgain's theorem, but when the metric X is, in a sense, "low-dimensional," improved bounds are achieved. Our embeddings are volume-respecting for subsets of arbitrary size. One consequence is the existence of (k, O(log n)) volume-respecting embeddings for all 1 ≤ k ≤ n, which is the best possible, and answers positively a question posed by U. Feige. Our techniques are also used to answer positively a question of Y. Rabinovich, showing that any weighted n-point planar graph embeds in ℓ∞ of dimension O(log n) with O(1) distortion. The O(log n) bound on the dimension is optimal, and improves upon the previously known bound of O((log n)^2).
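The doubling constant λ_X above is the least k such that every ball can be covered by k balls of half the radius. It can be brute-forced on tiny finite metrics, which makes the definition concrete (a hypothetical helper, exponential time, for illustration only):

```python
import itertools

def doubling_constant(d):
    """Brute-force doubling constant of a finite metric: the least k
    such that every ball B(x, r) is covered by k balls of radius r/2.
    On a finite metric only radii r = d(x, y) need checking."""
    n = len(d)
    lam = 1
    for x in range(n):
        for r in {d[x][y] for y in range(n) if d[x][y] > 0}:
            ball = [y for y in range(n) if d[x][y] <= r]
            for k in range(1, n + 1):  # smallest feasible number of centers
                if any(all(any(d[c][y] <= r / 2 for c in cs) for y in ball)
                       for cs in itertools.combinations(range(n), k)):
                    lam = max(lam, k)
                    break
    return lam

# Path metric 0-1-2: the radius-1 ball around the middle point needs
# three half-radius balls (each covers only one point), so lambda = 3.
print(doubling_constant([[0, 1, 2], [1, 0, 1], [2, 1, 0]]))  # -> 3
```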
QoS Routing in Networks with Inaccurate Information: Theory and Algorithms
IEEE/ACM Transactions on Networking, 1997
Cited by 93 (0 self)

Abstract:
This paper investigates the problem of routing flows with Quality-of-Service (QoS) requirements through one or more networks, when the information available for making such routing decisions is inaccurate. Inaccuracy in the information used in computing QoS routes, e.g., network state such as link and node metrics, arises naturally in a number of different environments that are reviewed in the paper. Our goal is to determine the impact of such inaccuracy on the ability of the path selection process to successfully identify paths with adequate available resources. In particular, we focus on devising algorithms capable of selecting path(s) that are most likely to successfully accommodate the desired QoS, in the presence of uncertain network state information. For the purpose of our analysis, we assume that this uncertainty is expressed through probabilistic models, and we briefly discuss sample cases that can give rise to such models. We establish that the impact of uncertainty is minima...
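One common way to make "most likely to accommodate the desired QoS" concrete (a modeling assumption for illustration, not necessarily the paper's exact formulation): if each link independently has sufficient resources with probability p, the best path maximizes the product of link probabilities, which reduces to a shortest-path computation on weights -log p:

```python
import heapq
import math

def most_probable_path(graph, src, dst):
    """Dijkstra on weights -log(p): maximizes the product of per-link
    success probabilities, assuming links fail independently.

    graph: {u: [(v, p), ...]} with 0 < p <= 1.
    Returns (probability, path); (0.0, []) if dst is unreachable."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        cost, u = heapq.heappop(pq)
        if u == dst:
            break
        if cost > dist.get(u, math.inf):
            continue  # stale queue entry
        for v, p in graph.get(u, []):
            nc = cost - math.log(p)
            if nc < dist.get(v, math.inf):
                dist[v] = nc
                prev[v] = u
                heapq.heappush(pq, (nc, v))
    if dst not in dist:
        return 0.0, []
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return math.exp(-dist[dst]), path[::-1]

g = {"A": [("B", 0.9), ("C", 0.5)],
     "B": [("D", 0.9)],
     "C": [("D", 1.0)]}
print(most_probable_path(g, "A", "D"))  # -> (0.81..., ['A', 'B', 'D'])
```

The A-B-D path wins (0.9 * 0.9 = 0.81) over the path through C (0.5), even though the latter ends with a certain link.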
The price of being nearsighted
In SODA ’06: Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms, 2006
Cited by 83 (12 self)

Abstract:
Achieving a global goal based on local information is challenging, especially in complex and large-scale networks such as the Internet or even the human brain. In this paper, we provide an almost tight classification of the possible tradeoff between the amount of local information and the quality of the global solution for general covering and packing problems. Specifically, we give a distributed algorithm using only small messages which obtains a (ρΔ)^{1/k}-approximation for general covering and packing problems in time O(k^2), where ρ depends on the LP's coefficients. If message size is unbounded, we present a second algorithm that achieves an O(n^{1/k}) approximation in O(k) rounds. Finally, we prove that these algorithms are close to optimal by giving a lower bound on the approximability of packing problems given that each node has to base its decision on information from its k-neighborhood.
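The lower bound's model is worth making concrete: in k communication rounds, a node can only learn its k-neighborhood. The helper below (illustrative, not from the paper) computes exactly that information set via depth-limited BFS:

```python
from collections import deque

def k_neighborhood(adj, v, k):
    """All vertices within k hops of v (BFS to depth k): the
    information available to v in a k-round local algorithm."""
    seen = {v: 0}
    q = deque([v])
    while q:
        u = q.popleft()
        if seen[u] == k:
            continue  # do not expand beyond depth k
        for w in adj.get(u, []):
            if w not in seen:
                seen[w] = seen[u] + 1
                q.append(w)
    return set(seen)

# Path 0-1-2-3-4: node 2 sees only {1, 2, 3} in one round.
path = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 4] for i in range(5)}
print(k_neighborhood(path, 2, 1))  # -> {1, 2, 3}
```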
Subexponential algorithms for Unique Games and related problems
In 51st IEEE FOCS, 2010
Cited by 82 (7 self)

Abstract:
We give subexponential time approximation algorithms for the Unique Games and the Small Set Expansion problems. Specifically, for some absolute constant c, we give:
1. An exp(kn^ε)-time algorithm that, given as input a k-alphabet unique game on n variables that has an assignment satisfying a 1 − ε^c fraction of its constraints, outputs an assignment satisfying a 1 − ε fraction of the constraints.
2. An exp(n^ε/δ)-time algorithm that, given as input an n-vertex regular graph that has a set S of δn vertices with edge expansion at most ε^c, outputs a set S′ of at most δn vertices with edge expansion at most ε.
We also obtain a subexponential algorithm with improved approximation for the Multi-Cut problem, as well as subexponential algorithms with improved approximations to Max Cut, Sparsest Cut and Vertex Cover on some interesting subclasses of instances. Khot's Unique Games Conjecture (UGC) states that it is NP-hard to achieve approximation guarantees such as ours for unique games. While our results stop short of refuting the UGC, they do suggest that Unique Games is significantly easier than NP-hard problems such as 3-SAT, 3-LIN, Label Cover and more, which are believed not to have a subexponential algorithm achieving a nontrivial approximation ratio. The main component in our algorithms is a new result on graph decomposition that may have other applications. Namely, we show that for every δ > 0 and every regular n-vertex graph G, by changing at most a δ fraction of G's edges, one can break G into disjoint parts so that the induced graph on each part has at most n^ε eigenvalues larger than 1 − η (where ε, η depend polynomially on δ). Our results are based on combining this decomposition with previous algorithms for unique games on graphs with few large eigenvalues (Kolla and Tulsiani 2007, Kolla 2010).
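The decomposition criterion counts eigenvalues of the normalized adjacency matrix above the threshold 1 − η. For a cycle the normalized spectrum is known in closed form (cos(2πk/n)), so the counting step can be illustrated without a numerical eigensolver (example graph family is my choice, not the paper's):

```python
import math

def cycle_normalized_spectrum(n):
    """Eigenvalues of the normalized adjacency of the n-cycle,
    in closed form: cos(2*pi*k/n) for k = 0..n-1."""
    return sorted(math.cos(2 * math.pi * k / n) for k in range(n))

def large_eigenvalue_count(spectrum, eta):
    """How many eigenvalues exceed 1 - eta: the quantity the
    decomposition bounds by n^eps on each part."""
    return sum(1 for lam in spectrum if lam > 1 - eta)

# The 8-cycle has eigenvalues 1, ±cos(pi/4), 0, -1, ...;
# exactly three of them exceed 0.5.
print(large_eigenvalue_count(cycle_normalized_spectrum(8), 0.5))  # -> 3
```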
Fast Distributed Algorithms for (Weakly) Connected Dominating Sets and LinearSize Skeletons (Extended Abstract)
Cited by 73 (4 self)

Abstract:
Motivated by routing issues in ad hoc networks, we present polylogarithmic-time distributed algorithms for two problems. Given a network, we first show how to compute connected and weakly connected dominating sets whose size is at most O(log Δ) times optimal, Δ being the maximum degree of the input network. This is best possible under standard complexity-theoretic assumptions, if the processors are limited to polynomial-time computation. We then show how to construct dominating sets which satisfy the above properties, as well as the "low stretch" property that any two adjacent nodes in the network have their dominators at a distance of at most O(log n) in the network. (Given a dominating set S, a dominator of a vertex u is any v ∈ S such that the distance between u and v is at most one.) We also show our time bounds to be essentially optimal.
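The centralized analogue of the O(log Δ) guarantee is greedy set cover; a sketch of that baseline (the paper's contribution is achieving the same factor fast and distributedly, which this simple loop does not do):

```python
def greedy_dominating_set(adj):
    """Greedy dominating set: repeatedly add the vertex that dominates
    the most not-yet-dominated vertices. The classic set-cover
    analysis gives an O(log Delta)-factor approximation."""
    undominated = set(adj)
    dom = set()
    while undominated:
        # a vertex dominates itself and its neighbors
        v = max(adj, key=lambda u: len(({u} | set(adj[u])) & undominated))
        dom.add(v)
        undominated -= {v} | set(adj[v])
    return dom

# Star K_{1,4}: the center alone dominates every vertex.
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(greedy_dominating_set(star))  # -> {0}
```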
Scalable SPARQL Querying of Large RDF Graphs
Cited by 66 (1 self)

Abstract:
The generation of RDF data has accelerated to the point where many data sets need to be partitioned across multiple machines in order to achieve reasonable performance when querying the data. Although tremendous progress has been made in the Semantic Web community for achieving high performance data management on a single node, current solutions that allow the data to be partitioned across multiple machines are highly inefficient. In this paper, we introduce a scalable RDF data management system that is up to three orders of magnitude more efficient than popular multi-node RDF data management systems. In so doing, we introduce techniques for (1) leveraging state-of-the-art single-node RDF-store technology, (2) partitioning the data across nodes in a manner that helps accelerate query processing through locality optimizations, and (3) decomposing SPARQL queries into high-performance fragments that take advantage of how data is partitioned in a cluster.
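A minimal baseline for the partitioning idea in point (2) is hashing triples by subject so that all triples about one resource are co-located and subject-subject joins stay local. This is only a sketch with made-up data, not the paper's scheme (which uses graph partitioning plus bounded replication of boundary triples):

```python
def hash_partition(triples, n_workers):
    """Partition RDF (subject, predicate, object) triples across
    workers by hashing the subject, so every triple about a given
    resource lands on the same worker."""
    parts = [[] for _ in range(n_workers)]
    for s, p, o in triples:
        parts[hash(s) % n_workers].append((s, p, o))
    return parts

triples = [("alice", "knows", "bob"),
           ("alice", "age", "30"),
           ("bob", "knows", "carol")]
parts = hash_partition(triples, 4)
# Both "alice" triples are guaranteed to share a partition, so a query
# joining on subject "alice" runs entirely on one worker.
```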