Results 1 - 10 of 16
BRITE: Universal topology generation from a user's perspective
, 2001
"... Effective engineering of the Internet is predicated upon a detailed understanding of issues such as the large-scale structure of its underlying physical topology, the manner in which it evolves over time, and the way in which its constituent components contribute to its overall function. Unfortunate ..."
Abstract
-
Cited by 137 (1 self)
- Add to MetaCart
Effective engineering of the Internet is predicated upon a detailed understanding of issues such as the large-scale structure of its underlying physical topology, the manner in which it evolves over time, and the way in which its constituent components contribute to its overall function. Unfortunately, developing a deep understanding of these issues has proven to be a challenging task, since it in turn involves solving difficult problems such as mapping the actual topology, characterizing it, and developing models that capture its emergent behavior. Consequently, even though there are a number of topology models, it is an open question as to how representative the topologies they generate are of the actual Internet. Our goal is to produce a topology generation framework which improves the state of the art and is based on design principles which include representativeness, inclusiveness, and interoperability. Representativeness leads to synthetic topologies that accurately reflect many aspects of the actual Internet topology (e.g. hierarchical structure, degree distribution, etc.). Inclusiveness combines the strengths of as many generation models as possible in a single generation tool. Interoperability provides interfaces to widely-used simulation and visualization applications such as ns and SSF. We call such a tool a universal topology generator. In this paper we discuss the design, implementation and usage of the BRITE universal topology generation tool that we have built. We also describe the BRITE Analysis Engine, BRIANA, which is an independent piece of software designed and built upon BRITE design goals of flexibility and extensibility. The purpose of BRIANA is to act as a repository of analysis routines along with a user-friendly interface that allows its use on different topology formats.
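To make the "degree distribution" aspect concrete, here is a rough, self-contained sketch of preferential attachment, one of the generation models a tool like BRITE can incorporate; the function name and parameters are illustrative and are not taken from BRITE's code or interfaces.

```python
# Illustrative sketch (not BRITE's code): grow a synthetic topology by
# preferential attachment so that the resulting degree distribution is
# heavy-tailed, as observed in Internet topologies.
import random

def preferential_attachment(n, m):
    """Return an edge set for a graph on n nodes where each new node
    attaches to m existing nodes chosen proportionally to their degree."""
    edges = set()
    # Start from a small clique of m + 1 nodes so every node has degree >= m.
    for u in range(m + 1):
        for v in range(u + 1, m + 1):
            edges.add((u, v))
    # repeated_nodes holds one entry per edge endpoint, so uniform sampling
    # from it is sampling proportionally to degree.
    repeated_nodes = [v for e in edges for v in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(repeated_nodes))
        for t in chosen:
            edges.add((t, new))
            repeated_nodes.extend([t, new])
    return edges

if __name__ == "__main__":
    g = preferential_attachment(1000, 2)
    print(len(g), "edges")
```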
Understanding internet topology: principles, models, and validation
- IEEE/ACM Transactions on Networking
, 2005
"... Building on a recent effort that combines a first-principles approach to modeling router-level connectivity with a more pragmatic use of statistics and graph theory, we show in this paper that for the Internet, an improved understanding of its physical infrastructure is possible by viewing the phys ..."
Abstract
-
Cited by 51 (8 self)
- Add to MetaCart
(Show Context)
Building on a recent effort that combines a first-principles approach to modeling router-level connectivity with a more pragmatic use of statistics and graph theory, we show in this paper that for the Internet, an improved understanding of its physical infrastructure is possible by viewing the physical connectivity as an annotated graph that delivers raw connectivity and bandwidth to the upper layers in the TCP/IP protocol stack, subject to practical constraints (e.g., router technology) and economic considerations (e.g., link costs). More importantly, by relying on data from Abilene, a Tier-1 ISP, and the Rocketfuel project, we provide empirical evidence in support of the proposed approach and its consistency with networking reality. To illustrate its utility, we: 1) show that our approach provides insight into the origin of high variability in measured or inferred router-level maps; 2) demonstrate that it easily accommodates the incorporation of additional objectives of network design (e.g., robustness to router failure); and 3) discuss how it complements ongoing community efforts to reverse-engineer the Internet.
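A purely hypothetical sketch of the "annotated graph" viewpoint: links carry bandwidth annotations and each router must fit a technology envelope. The capacity numbers and the feasibility rule below are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch: links carry bandwidth annotations and each router must
# respect a technology constraint, modeled here (purely illustratively) as a
# cap on port count and aggregate interface bandwidth.
def router_feasible(link_bandwidths_gbps, capacity_gbps=640.0, max_ports=16):
    """Check a single router's configuration against a toy technology envelope."""
    return (len(link_bandwidths_gbps) <= max_ports
            and sum(link_bandwidths_gbps) <= capacity_gbps)

def annotated_graph_feasible(adjacency):
    """adjacency maps router -> {neighbor: bandwidth_gbps}; a topology is kept
    only if every router's incident links fit its envelope."""
    return all(router_feasible(list(nbrs.values())) for nbrs in adjacency.values())

# A high-degree core router with many fast links can violate the envelope,
# which is why degree and per-link bandwidth trade off in such models.
core = {"r1": {"r2": 100.0, "r3": 100.0, "r4": 100.0, "r5": 400.0},
        "r2": {"r1": 100.0}, "r3": {"r1": 100.0},
        "r4": {"r1": 100.0}, "r5": {"r1": 400.0}}
print(annotated_graph_feasible(core))  # False: r1 aggregates 700 > 640 Gb/s
```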
Topology Inference in the Presence of Anonymous Routers
- In IEEE INFOCOM
, 2003
"... Many topology discovery systems rely on traceroute to discover path information in public networks. However, for some routers, traceroute detects their existence but not their address; we term such routers anonymous routers.Thispaper considers the problem of inferring the network topology in the pr ..."
Abstract
-
Cited by 47 (1 self)
- Add to MetaCart
(Show Context)
Many topology discovery systems rely on traceroute to discover path information in public networks. However, for some routers, traceroute detects their existence but not their address; we term such routers anonymous routers. This paper considers the problem of inferring the network topology in the presence of anonymous routers. We illustrate how obvious approaches to handling anonymous routers lead to incomplete, inflated, or inaccurate topologies. We formalize the topology inference problem and show that producing both exact and approximate solutions is intractable. Two heuristics are proposed and evaluated through simulation. These heuristics have been used to infer the topology of the 6Bone, and could be incorporated into existing tools to infer more comprehensive and accurate topologies.
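To illustrate why naive handling inflates the topology, here is a small sketch (with made-up traceroute data) of the obvious approach that treats every anonymous hop as a fresh node; it is not one of the paper's heuristics.

```python
# Sketch of the naive handling the paper warns about: treating every
# anonymous ('*') traceroute hop as a brand-new node. This keeps paths intact
# but inflates the topology, since one anonymous router observed on k paths
# becomes k distinct nodes. The path data below is invented.
from itertools import count

def build_topology(paths):
    """paths: lists of hop identifiers, with None marking an anonymous hop."""
    fresh = count()
    edges = set()
    for path in paths:
        resolved = [hop if hop is not None else f"anon{next(fresh)}" for hop in path]
        edges.update(zip(resolved, resolved[1:]))
    return edges

traces = [
    ["a", None, "c"],   # the same anonymous router may sit on both paths...
    ["b", None, "c"],   # ...but the naive graph contains anon0 and anon1
]
print(sorted(build_topology(traces)))
```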
Network discovery and verification
- In Proceedings of the International Workshop on Graph-Theoretic Concepts in Computer Science (WG’05), LNCS 3787
, 2005
"... Abstract. Consider the problem of discovering (or verifying) the edges and non-edges of a network, modelled as a connected undirected graph, using a minimum number of queries. A query at a vertex v discovers (or verifies) all edges and non-edges whose endpoints have different distance from v. In the ..."
Abstract
-
Cited by 47 (8 self)
- Add to MetaCart
(Show Context)
Consider the problem of discovering (or verifying) the edges and non-edges of a network, modelled as a connected undirected graph, using a minimum number of queries. A query at a vertex v discovers (or verifies) all edges and non-edges whose endpoints have different distance from v. In the network discovery problem, the edges and non-edges are initially unknown, and the algorithm must select the next query based only on the results of previous queries. We study the problem using competitive analysis and give a randomized on-line algorithm with competitive ratio O(√(n log n)) for graphs with n vertices. We also show that no deterministic algorithm can have competitive ratio better than 3. In the network verification problem, the graph is known in advance and the goal is to compute a minimum number of queries that verify all edges and non-edges. This problem has previously been studied as the problem of placing landmarks in graphs or determining the metric dimension of a graph. We show that there is no approximation algorithm for this problem with ratio o(log n) unless P = NP. Furthermore, we prove that the optimal number of queries for d-dimensional hypercubes is Θ(d/log d).
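A minimal sketch of the layered-graph query described in this abstract, assuming the graph is given as an adjacency dictionary; the helper names are illustrative.

```python
# Minimal sketch of the layered-graph query model: a query at v reveals, for
# every pair {x, y} with dist(x, v) != dist(y, v), whether {x, y} is an edge
# or a non-edge.
from collections import deque
from itertools import combinations

def bfs_distances(adj, v):
    dist, queue = {v: 0}, deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def query(adj, v):
    """Return (discovered_edges, discovered_non_edges) for a query at v."""
    dist = bfs_distances(adj, v)
    edges, non_edges = set(), set()
    for x, y in combinations(adj, 2):
        if dist[x] != dist[y]:
            pair = frozenset((x, y))
            (edges if y in adj[x] else non_edges).add(pair)
    return edges, non_edges

# On the path a-b-c, a query at a separates all pairs and discovers the whole
# graph; a query at b leaves the status of the pair {a, c} unknown.
path = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(query(path, "b"))
```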
DNS-based Internet client clustering and characterization
- In Proc. of 4th IEEE Workshop on Workload Characterization (WWC01)
, 2001
"... This paper proposes a novel protocol which uses the Internet Domain Name System (DNS) to partition Web clients into disjoint sets, each of which is associated with a single DNS server. We dene an L-DNS cluster to be a grouping of Web Clients that use the same Local DNS server to resolve Internet hos ..."
Abstract
-
Cited by 6 (0 self)
- Add to MetaCart
(Show Context)
This paper proposes a novel protocol which uses the Internet Domain Name System (DNS) to partition Web clients into disjoint sets, each of which is associated with a single DNS server. We define an L-DNS cluster to be a grouping of Web clients that use the same Local DNS server to resolve Internet host names. We identify such clusters in real time using data obtained from a Web server in conjunction with that server's Authoritative DNS, both instrumented with an implementation of our clustering algorithm. Using these clusters, we perform measurements from four distinct Internet locations. Our results show that L-DNS clustering enables a better estimation of the proximity of a Web client to a Web server than previously proposed techniques. Thus, in a Content Distribution Network, a DNS-based scheme that redirects a request from a Web client to one of many servers based on the client's name server coordinates (e.g., hops/latency/loss rates between the client and servers) would perform better with our algorithm.
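A simplified sketch of the clustering idea, assuming a per-request token embedded in a crafted hostname lets the instrumented authoritative DNS log be joined with the Web server log; the record formats below are invented for illustration and do not reproduce the paper's protocol.

```python
# Simplified sketch of L-DNS clustering: join "which resolver asked" (seen by
# the authoritative DNS) with "which client fetched" (seen by the Web server)
# via a shared token, then group clients by their local resolver.
from collections import defaultdict

def cluster_clients(dns_log, http_log):
    """dns_log: (token, resolver_ip) pairs seen by the authoritative DNS.
    http_log: (token, client_ip) pairs seen by the Web server.
    Returns resolver_ip -> set of client_ips (one L-DNS cluster per resolver)."""
    token_to_resolver = dict(dns_log)
    clusters = defaultdict(set)
    for token, client_ip in http_log:
        resolver = token_to_resolver.get(token)
        if resolver is not None:
            clusters[resolver].add(client_ip)
    return clusters

dns_log = [("t1", "10.0.0.53"), ("t2", "10.0.0.53"), ("t3", "192.0.2.53")]
http_log = [("t1", "10.0.1.7"), ("t2", "10.0.1.9"), ("t3", "198.51.100.4")]
print(dict(cluster_clients(dns_log, http_log)))
```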
Incentive-Compatible Interdomain Routing with Linear Utilities
, 2009
"... We revisit the problem of incentive-compatible interdomain routing, examining the quite realistic special case in which the utilities of autonomous systems (ASes) are linear functions of the traffic in the incident links and the traffic leaving each AS. We show that incentive-compatibility toward m ..."
Abstract
-
Cited by 5 (0 self)
- Add to MetaCart
(Show Context)
We revisit the problem of incentive-compatible interdomain routing, examining the quite realistic special case in which the utilities of autonomous systems (ASes) are linear functions of the traffic on the incident links and the traffic leaving each AS. We show that incentive-compatibility toward maximizing total welfare is achievable efficiently and, in the uncapacitated case, by an algorithm that can be easily implemented by the border gateway protocol (BGP), the standard protocol for interdomain routing.
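A rough sketch, under invented coefficients, paths, and demands, of what "linear utilities" means here: each AS values traffic on its incident links and traffic it hands off linearly. The paper's actual mechanism and its BGP realization are not reproduced.

```python
# Illustrative sketch of the linear-utility setting: an AS's utility is a
# weighted sum of the traffic on its incident links plus the traffic leaving
# it. All coefficients and routes below are made up.
from collections import defaultdict

def utilities(paths, demands, link_coeff, egress_coeff):
    """paths: demand_id -> ordered list of ASes from source to destination.
    demands: demand_id -> traffic volume.
    link_coeff[(a, b)]: value AS a places on a unit of traffic on link (a, b).
    egress_coeff[a]: value AS a places on a unit of traffic it hands off."""
    util = defaultdict(float)
    for d, path in paths.items():
        t = demands[d]
        for a, b in zip(path, path[1:]):              # traffic crosses link (a, b)
            util[a] += link_coeff.get((a, b), 0.0) * t
            util[b] += link_coeff.get((b, a), 0.0) * t
            util[a] += egress_coeff.get(a, 0.0) * t   # traffic leaves AS a
    return dict(util)

paths = {"d1": ["AS1", "AS2", "AS3"]}
demands = {"d1": 10.0}
link_coeff = {("AS1", "AS2"): 0.2, ("AS2", "AS1"): 0.1,
              ("AS2", "AS3"): 0.3, ("AS3", "AS2"): 0.1}
egress = {"AS1": -0.05, "AS2": -0.05}
u = utilities(paths, demands, link_coeff, egress)
print(u, "total welfare:", sum(u.values()))
```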
Approximate Discovery of Random Graphs
- Lecture Notes in Computer Science
, 2007
"... In the layered-graph query model of network discovery, a query at a node v of an undirected graph G discovers all edges and non-edges whose endpoints have different distance from v. We study the number of queries at randomly selected nodes that are needed for approximate network discovery in Erdős ..."
Abstract
-
Cited by 2 (0 self)
- Add to MetaCart
(Show Context)
In the layered-graph query model of network discovery, a query at a node v of an undirected graph G discovers all edges and non-edges whose endpoints have different distance from v. We study the number of queries at randomly selected nodes that are needed for approximate network discovery in Erdős-Rényi random graphs G_{n,p}. We show that a constant number of queries is sufficient if p is a constant, while Ω(n^α) queries are needed if p = n^ε/n, for arbitrarily small choices of ε = 3/(6i + 5) with i ∈ ℕ. Note that α > 0 is a constant depending only on ε. Our proof of the latter result also yields a somewhat surprising result on pairwise distances in random graphs which may be of independent interest: we show that for a random graph G_{n,p} with p = n^ε/n, for arbitrarily small choices of ε > 0 as above, in any constant-cardinality subset of the nodes the pairwise distances are all identical with high probability.
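A sketch of the kind of experiment this setting suggests: generate G(n, p), issue layered-graph queries at uniformly random nodes, and track the fraction of vertex pairs whose edge/non-edge status has been discovered. Names and parameters are illustrative.

```python
# Purely illustrative simulation: approximate discovery of an Erdős-Rényi
# graph via layered-graph queries at randomly selected nodes.
import random
from collections import deque
from itertools import combinations

def gnp(n, p):
    adj = {v: set() for v in range(n)}
    for u, v in combinations(range(n), 2):
        if random.random() < p:
            adj[u].add(v); adj[v].add(u)
    return adj

def discovered_by(adj, v):
    dist, queue = {v: 0}, deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1; queue.append(w)
    # pairs whose endpoints lie in different BFS layers (unreachable pairs skipped)
    return {frozenset((x, y)) for x, y in combinations(dist, 2) if dist[x] != dist[y]}

def coverage_after_queries(adj, q):
    n = len(adj)
    total = n * (n - 1) // 2
    known = set()
    for v in random.sample(list(adj), q):
        known |= discovered_by(adj, v)
    return len(known) / total

g = gnp(200, 0.3)                      # constant p: a few queries go a long way
print(coverage_after_queries(g, 3))
```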
Network Discovery and Verification with Distance Queries
, 2006
"... The network discovery (verification) problem asks for a minimum subset Q ⊆ V of queries in an undirected graph G = (V, E) such that these queries discover all edges and non-edges of the graph. This is motivated by the common approach of combining local measurements in order to obtain maps of the Int ..."
Abstract
- Add to MetaCart
(Show Context)
The network discovery (verification) problem asks for a minimum subset Q ⊆ V of queries in an undirected graph G = (V, E) such that these queries discover all edges and non-edges of the graph. This is motivated by the common approach of combining local measurements in order to obtain maps of the Internet or other dynamically growing networks. In the distance query model, a query at node q returns the distances from q to all other nodes in the graph. We describe how the existence of an individual edge or non-edge in G can be deduced by potentially combining the results of several queries. This leads to a characterization of when a set of queries Q “discovers” the graph G. In the on-line network discovery problem, the graph is initially unknown, and the algorithm has to select queries one by one based only on the results of the previous ones. We study the problem using competitive analysis and give a randomized on-line algorithm with competitive ratio O(√(n log n)) for graphs on n nodes. We also show lower bounds of Ω(√n) and Ω(log n) on the competitive ratios of deterministic and randomized on-line algorithms, respectively. In the off-line network verification problem, the graph is known in the beginning and the problem asks for a minimum number of queries to verify all edges and non-edges. We show that the problem is NP-hard and present an O(log n)-approximation algorithm.
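A partial sketch of deduction in the distance query model: a single query already implies non-edges for pairs whose returned distances differ by at least two, while certifying edges generally requires combining several queries, as the paper's characterization explains. The example graph is illustrative.

```python
# Partial sketch: a distance query at q returns d(q, .), and any pair {u, v}
# with |d(q,u) - d(q,v)| >= 2 must be a non-edge. Certifying edges needs
# several queries combined and is not sketched here.
from collections import deque
from itertools import combinations

def distance_query(adj, q):
    """Simulate a distance query: BFS distances from q to every node."""
    dist, queue = {q: 0}, deque([q])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1; queue.append(w)
    return dist

def non_edges_implied(dist):
    return {frozenset((u, v)) for u, v in combinations(dist, 2)
            if abs(dist[u] - dist[v]) >= 2}

cycle = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
d = distance_query(cycle, 0)            # distances 0,1,2,2,1 around a 5-cycle
print(sorted(tuple(sorted(p)) for p in non_edges_implied(d)))  # [(0, 2), (0, 3)]
```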
Approximate Discovery of Random Graphs
"... In the layered-graph query model of network discovery, a query at a node v of an undirected graph G discovers all edges and non-edges whose endpoints have different distance from v. We study the number of queries at randomly selected nodes that are needed for approximate network discovery in Erdős-R ..."
Abstract
- Add to MetaCart
(Show Context)
In the layered-graph query model of network discovery, a query at a node v of an undirected graph G discovers all edges and non-edges whose endpoints have different distance from v. We study the number of queries at randomly selected nodes that are needed for approximate network discovery in Erdős-Rényi random graphs G_{n,p}. We show that a constant number of queries is sufficient if p is a constant, while Ω(n^α) queries are needed if p = n^ε/n, for arbitrarily small ε > 0, where α > 0 is a constant depending only on ε. Our proof of the latter result also yields a somewhat surprising result on pairwise distances in random graphs which may be of independent interest: we show that for a random graph G_{n,p} with p = n^ε/n, for arbitrarily small ε > 0, in any constant-cardinality subset of the nodes the pairwise distances are all identical with high probability.