Results 1–10 of 25
Triangulation and Embedding using Small Sets of Beacons
, 2008
Cited by 96 (11 self)
Concurrent with recent theoretical interest in the problem of metric embedding, a growing body of research in the networking community has studied the distance matrix defined by node-to-node latencies in the Internet, resulting in a number of recent approaches that approximately embed this distance matrix into low-dimensional Euclidean space. There is a fundamental distinction, however, between the theoretical approaches to the embedding problem and this recent Internet-related work: in addition to computational limitations, Internet measurement algorithms operate under the constraint that it is only feasible to measure distances for a linear (or near-linear) number of node pairs, and typically in a highly structured way. Indeed, the most common framework for Internet measurements of this type is a beacon-based approach: one chooses uniformly at random a constant number of nodes (‘beacons’) in the network, each node measures its distance to all beacons, and one then has access to only these measurements for the remainder of the algorithm. Moreover, beacon-based algorithms are often designed not for embedding but for the more basic problem of triangulation, in which one uses the triangle inequality to infer the distances that have not been measured. Here we give algorithms with provable performance guarantees for beacon-based triangulation and
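The beacon framework described in this abstract is easy to make concrete: each node measures only its distances to a few randomly chosen beacons, and the triangle inequality then brackets every unmeasured pair from above and below. A minimal sketch (the function name and the `dist` oracle are hypothetical illustrations, not taken from the paper):

```python
import random

def beacon_triangulation(dist, nodes, num_beacons=3, seed=0):
    """Bracket all pairwise distances from node-to-beacon measurements only.

    `dist` is a hypothetical distance oracle; it is called solely for
    node-to-beacon pairs, mirroring the measurement constraint above.
    """
    rng = random.Random(seed)
    beacons = rng.sample(nodes, num_beacons)
    # Each node measures its distance to every beacon.
    to_beacon = {v: {b: dist(v, b) for b in beacons} for v in nodes}
    estimates = {}
    for u in nodes:
        for v in nodes:
            if u == v:
                continue
            # Triangle inequality: for every beacon b,
            #   |d(u,b) - d(v,b)| <= d(u,v) <= d(u,b) + d(v,b).
            lo = max(abs(to_beacon[u][b] - to_beacon[v][b]) for b in beacons)
            hi = min(to_beacon[u][b] + to_beacon[v][b] for b in beacons)
            estimates[(u, v)] = (lo, hi)
    return estimates
```

The returned interval is valid for any metric; how tight it is (the triangulation quality the paper analyzes) depends on where the beacons fall relative to the pair.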
Distance Estimation and Object Location via Rings of Neighbors
 In 24th Annual ACM Symposium on Principles of Distributed Computing (PODC)
, 2005
Cited by 77 (7 self)
We consider four problems on distance estimation and object location which share the common flavor of capturing global information via informative node labels: low-stretch routing schemes [47], distance labeling [24], searchable small worlds [30], and triangulation-based distance estimation [33]. Focusing on metrics of low doubling dimension, we approach these problems with a common technique called rings of neighbors, which refers to a sparse distributed data structure that underlies all our constructions. Apart from improving the previously known bounds for these problems, our contributions include extending Kleinberg’s small-world model to doubling metrics, and a short proof of the main result in Chan et al. [14]. Doubling dimension is a notion of dimensionality for general metrics that has recently become a useful algorithmic concept in the theoretical computer science literature.
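Doubling dimension, mentioned at the end of this abstract, is log₂ of the doubling constant: the smallest M such that every ball of radius r can be covered by M balls of radius r/2. A brute-force sketch of the definition for tiny finite metrics (an illustrative helper, not from the paper; the greedy set cover only upper-bounds the optimal cover):

```python
from itertools import combinations

def doubling_constant_bound(points, dist):
    """Greedy upper bound on the doubling constant of a finite metric:
    for every center c and radius r (taken from the pairwise distances),
    cover the ball B(c, r) greedily with balls of radius r/2.
    Intended only for tiny inputs; illustrates the definition."""
    radii = {dist(p, q) for p, q in combinations(points, 2)}
    worst = 1
    for c in points:
        for r in radii:
            ball = [p for p in points if dist(c, p) <= r]
            uncovered = set(ball)
            count = 0
            while uncovered:
                # Pick the half-radius ball covering the most uncovered points.
                best = max(ball, key=lambda q: sum(
                    1 for p in uncovered if dist(q, p) <= r / 2))
                uncovered -= {p for p in uncovered if dist(best, p) <= r / 2}
                count += 1
            worst = max(worst, count)
    return worst
```

A line metric has small doubling constant, while a uniform metric on n points (all distances 1) forces it up to n, which is what makes "low doubling dimension" a real restriction.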
Advances in metric embedding theory
 In STOC ’06: Proceedings of the Thirty-Eighth Annual ACM Symposium on Theory of Computing
, 2006
Cited by 36 (13 self)
Metric embedding plays an important role in a vast range of application areas such as computer vision, computational biology, machine learning, networking, statistics, and mathematical psychology, to name a few. The theory of metric embedding has received much attention in recent years from mathematicians as well as computer scientists and has been applied in many algorithmic applications. A cornerstone of the field is a celebrated theorem of Bourgain which states that every finite metric space on n points embeds in Euclidean space with O(log n) distortion. Bourgain’s result is best possible when considering the worst-case distortion over all pairs of points in the metric space. Yet, it is possible that an embedding can do much better in terms of the average distortion. Indeed, in most practical applications of metric embedding the main criterion for the quality of an embedding is its average distortion over all pairs. In this paper we provide an embedding with constant average distortion for arbitrary metric spaces, while maintaining the same worst-case bound provided by Bourgain’s theorem. In fact, our embedding possesses a much stronger property. We define the ℓq-distortion of a uniformly distributed pair of points. Our embedding achieves the best possible ℓq-distortion for all 1 ≤ q ≤ ∞ simultaneously. These results have several algorithmic implications, e.g. an O(1) approximation for the unweighted uncapacitated quadratic assignment problem. The results are based on novel embedding methods which improve on previous methods in another important aspect: the dimension. The dimension of an embedding is of very high importance, in particular in applications, and much effort has been invested in analyzing it. However, no previous result im
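The worst-case vs. average distortion distinction in this abstract can be measured directly. A small sketch (hypothetical helper, not the paper's embedding construction) that evaluates both over all pairs, taking each pair's distortion as max(expansion, contraction):

```python
from itertools import combinations
from math import dist as euclidean  # Python 3.8+

def distortion_profile(points, metric, coords):
    """For an embedding given by `coords` (point -> tuple of reals),
    return (worst-case, average) per-pair distortion, where a pair's
    distortion is max(ratio, 1/ratio) of embedded to original distance."""
    per_pair = []
    for u, v in combinations(points, 2):
        ratio = euclidean(coords[u], coords[v]) / metric(u, v)
        per_pair.append(max(ratio, 1.0 / ratio))
    return max(per_pair), sum(per_pair) / len(per_pair)
```

Bourgain's theorem bounds the first returned value by O(log n); the paper's contribution is making the second value O(1) without giving up that worst-case bound.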
Low-Dimensional Embedding with Extra Information
, 2004
Cited by 20 (3 self)
A frequently arising problem in computational geometry is when a physical structure, such as an ad-hoc wireless sensor network or a protein backbone, can measure local information about its geometry (e.g., distances, angles, and/or orientations), and the goal is to reconstruct the global geometry from this partial information. More precisely, we are given a graph, the approximate lengths of the edges, and possibly extra information, and our goal is to assign coordinates to the vertices that satisfy the given constraints up to a constant factor away from the best possible. We obtain the first subexponential-time (quasipolynomial-time) algorithm for this problem given a complete graph of Euclidean distances with additive error and no extra information. For general graphs, the analogous problem is NP-hard even with exact distances. Thus, for general graphs, we consider natural types of extra information that make the problem more tractable, including approximate angles between edges, the order type of vertices, a model of coordinate noise, or knowledge about the range of distance measurements. Our quasipolynomial-time algorithm for no extra information can also be viewed as a polynomial-time algorithm given an “extremum oracle” as extra information. We give several approximation algorithms and contrasting hardness results for these scenarios.
Visibility preserving terrain simplification – An experimental study
 In Proc. 18th Annu. ACM Sympos. Comput. Geom
, 2002
Cited by 16 (7 self)
The terrain surface simplification problem has been studied extensively, as it has important applications in geographic information systems and computer graphics. The goal is to obtain a new surface that is combinatorially as simple as possible, while maintaining a prescribed degree of similarity with the original input surface. In this paper, we propose new algorithms for simplifying terrain surfaces, designed specifically for a new measure of quality based on preserving inter-point (geodesic) distances. We are motivated by various geographic information system and mapping applications. We have implemented the suggested algorithms and give experimental evidence of their effectiveness in simplifying terrains according to the suggested measure of quality. We experimentally compare their performance with that of another leading simplification method.
Additive Spanners and (α, β)-Spanners
Cited by 14 (3 self)
An (α, β)-spanner of an unweighted graph G is a subgraph H that distorts distances in G up to a multiplicative factor of α and an additive term β. It is well known that any graph contains a (multiplicative) (2k − 1, 0)-spanner of size O(n^{1+1/k}) and an (additive) (1, 2)-spanner of size O(n^{3/2}). However no other additive spanners are known to exist. In this paper we develop a couple of new techniques for constructing (α, β)-spanners. Our first result is an additive (1, 6)-spanner of size O(n^{4/3}). The construction algorithm can be understood as an economical agent that assigns costs and values to paths in the graph, purchasing affordable paths and ignoring expensive ones, which are intuitively well-approximated by paths already purchased. We show that this path-buying algorithm can be parameterized in different ways to yield other sparseness-distortion tradeoffs. Our second result addresses the problem of which (α, β)-spanners can be computed efficiently, ideally in linear time. We show that for any k, a (k, k − 1)-spanner with size O(kn^{1+1/k}) can be found in linear time, and further, that in a distributed network the algorithm terminates in a constant number of rounds. Previous spanner constructions with similar performance had roughly twice the multiplicative distortion.
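The classical multiplicative construction this abstract cites — (2k − 1, 0)-spanners of size O(n^{1+1/k}) — has a well-known greedy form, sketched here for unweighted graphs (this is the folklore greedy algorithm, not the paper's path-buying construction):

```python
from collections import deque

def greedy_spanner(n, edges, k):
    """Greedy (2k-1, 0)-spanner of an unweighted graph on vertices 0..n-1:
    keep an edge only if its endpoints are currently more than 2k-1 apart
    in the spanner built so far, so every dropped edge is stretched by at
    most 2k-1."""
    adj = {v: set() for v in range(n)}

    def spanner_dist(s, t, cap):
        # BFS distance from s to t in the current spanner, capped at `cap`.
        seen = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            if seen[u] >= cap:
                break  # every remaining vertex is at least `cap` away
            for w in adj[u]:
                if w not in seen:
                    seen[w] = seen[u] + 1
                    if w == t:
                        return seen[w]
                    queue.append(w)
        return seen.get(t, cap + 1)

    kept = []
    for u, v in edges:
        if spanner_dist(u, v, 2 * k - 1) > 2 * k - 1:
            adj[u].add(v)
            adj[v].add(u)
            kept.append((u, v))
    return kept
```

On the complete graph K4 with k = 2 this keeps only a spanning star: every skipped edge already has a 2-hop path, well within the stretch bound of 3. The size bound O(n^{1+1/k}) follows because the kept graph has girth greater than 2k.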
Tell me who I am: An interactive recommendation system
 In Proc. 18th Ann. ACM Symp. on Parallelism in Algorithms and Architectures
, 2006
Cited by 12 (7 self)
We consider a model of recommendation systems, where each member of a given set of players has a binary preference for each element in a given set of objects: intuitively, each player either likes or dislikes each object. However, the players do not know their preferences. To find his preference for an object, a player may probe it, but each probe incurs unit cost. The goal of the players is to learn their complete preference vector (approximately) while incurring minimal cost. This is possible if many players have similar preference vectors: such a set of players with similar “taste” may split the cost of probing all objects among them, and share the results of their probes by posting them on a public billboard. The problem is that players do not know a priori whose taste is close to theirs. In this paper we present a distributed randomized peer-to-peer algorithm in which each player outputs a vector which is close to the best possible approximation of the player’s real preference vector after a polylogarithmic number of rounds. The algorithm works under adversarial preferences. Previous algorithms either made severely limiting assumptions on the structure of the preference vectors, or had polynomial overhead.
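The billboard cost-sharing idea, for players who already know they share a taste, is simple to sketch (a hypothetical helper illustrating only the cost split; the paper's actual contribution is discovering such groups under adversarial preferences):

```python
def shared_probing(prefs, players, objects):
    """Players with identical taste split the probes round-robin and post
    each result on a public billboard; everyone then reads off a complete
    preference vector, paying ~len(objects)/len(players) probes each."""
    billboard = {}
    cost = {p: 0 for p in players}
    for i, obj in enumerate(objects):
        prober = players[i % len(players)]   # round-robin work split
        billboard[obj] = prefs[prober][obj]  # unit-cost probe, posted publicly
        cost[prober] += 1
    # Every player reconstructs a full vector from the billboard for free.
    learned = {p: dict(billboard) for p in players}
    return learned, cost
```

With m like-minded players and n objects, each player pays about n/m probes instead of n, which is the savings the paper's algorithm obtains without knowing the groups in advance.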
Compact Routing with Slack in Low Doubling Dimension
Cited by 9 (1 self)
We consider the problem of compact routing with slack in networks of low doubling dimension. Namely, we seek name-independent routing schemes with (1 + ɛ) stretch and polylogarithmic storage at each node: since an existing lower bound precludes such a scheme, we relax our guarantees to allow for (i) a small fraction of nodes to have large storage, say O(n log n) bits, or (ii) a small fraction of source-destination pairs to have larger, but still constant, stretch. In this paper, given any constant ɛ ∈ (0, 1), any δ ∈ Θ(1/polylog n) and any connected edge-weighted undirected graph G with doubling dimension α ∈ O(log log n) and arbitrary node names, we present
On the Internet delay space dimensionality
, 2008
Cited by 9 (1 self)
We investigate the dimensionality properties of the Internet delay space, i.e., the matrix of measured round-trip latencies between Internet hosts. Previous work on network coordinates has indicated that this matrix can be embedded, with reasonably low distortion, into a 4- to 9-dimensional Euclidean space. The application of Principal Component Analysis (PCA) reveals the same dimensionality values. Our work addresses the question: to what extent is the dimensionality an intrinsic property of the delay space, defined without reference to a host metric such as Euclidean space? Is the intrinsic dimensionality of the Internet delay space approximately equal to the dimension determined using embedding techniques or PCA? If not, what explains the discrepancy? What properties of the network contribute to its overall dimensionality? Using datasets obtained via the King [14] method, we study different measures of dimensionality to establish the following conclusions. First, based on its power-law behavior, the structure of the delay space can be better characterized by fractal measures. Second, the intrinsic dimension is significantly smaller than the value predicted by the previous studies; in fact by our measures it is less than 2. Third, we demonstrate a particular way in which the AS topology is reflected in the delay space; subnetworks composed of hosts which share an upstream Tier-1 autonomous system in common possess lower dimensionality than the combined delay space. Finally, we observe that fractal measures, due to their sensitivity to nonlinear structures, display higher precision for measuring the influence of subtle features of the delay space geometry.
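A fractal measure of the kind this abstract relies on can be sketched via the correlation-dimension idea: if the fraction C(r) of host pairs with delay at most r scales as r^D, then the slope of log C(r) between two radii estimates D. This is an illustrative Grassberger–Procaccia-style sketch, not the paper's exact estimator:

```python
from math import log

def correlation_dimension(pair_dists, r1, r2):
    """Estimate intrinsic dimension D from the power law C(r) ~ r**D,
    where C(r) is the fraction of pairwise distances at most r, using
    the slope of log C(r) between radii r1 < r2."""
    n = len(pair_dists)

    def C(r):
        # Fraction of pairs lying within distance r of each other.
        return sum(1 for d in pair_dists if d <= r) / n

    return (log(C(r2)) - log(C(r1))) / (log(r2) - log(r1))
```

For points spread along a line the estimate comes out close to 1, and for a planar set close to 2; the paper's finding is that the Internet delay space yields a value below 2 by such measures, despite embedding studies suggesting 4 to 9 dimensions.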