Results 1 - 10 of 493
Vivaldi: A Decentralized Network Coordinate System
- In SIGCOMM, 2004
"... Large-scale Internet applications can benefit from an ability to predict round-trip times to other hosts without having to contact them first. Explicit measurements are often unattractive because the cost of measurement can outweigh the benefits of exploiting proximity information. Vivaldi is a simp ..."
Abstract
-
Cited by 602 (4 self)
- Add to MetaCart
(Show Context)
Large-scale Internet applications can benefit from an ability to predict round-trip times to other hosts without having to contact them first. Explicit measurements are often unattractive because the cost of measurement can outweigh the benefits of exploiting proximity information. Vivaldi is a simple, light-weight algorithm that assigns synthetic coordinates to hosts such that the distance between the coordinates of two hosts accurately predicts the communication latency between the hosts.
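Vivaldi's update can be pictured as relaxing a spring between two hosts: each RTT sample pulls a node's coordinate toward (or pushes it away from) its neighbor until distances match latencies. A minimal sketch of one such update in Python; the damping factor delta is an illustrative stand-in for the paper's adaptive timestep:

```python
import math

def vivaldi_step(xi, xj, rtt, delta=0.05):
    """One coordinate update: move xi toward/away from xj so that the
    Euclidean distance better matches the measured RTT.
    xi, xj are coordinate tuples; rtt is the measured latency."""
    dist = math.dist(xi, xj)
    error = rtt - dist                      # positive: coordinates are too close
    if dist == 0:
        # coincident coordinates: pick an arbitrary unit direction
        direction = tuple(1.0 if k == 0 else 0.0 for k in range(len(xi)))
    else:
        direction = tuple((a - b) / dist for a, b in zip(xi, xj))
    # move along the spring force; delta damps oscillation
    return tuple(a + delta * error * d for a, d in zip(xi, direction))
```

Iterating this step as each host samples RTTs to a handful of neighbors drives the whole system toward coordinates whose distances approximate pairwise latencies; the actual paper additionally adapts delta to each node's confidence in its own coordinate.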
BRITE: An Approach to Universal Topology Generation
- In Proceedings of the IEEE Ninth International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems, 2001
"... Abstract Effective engineering of the Internet is predicated upon a detailed understanding of issues such as the large-scale structure of its underlying physical topology, the manner in which it evolves over time, and the way in which its constituent components contribute to its overall function. U ..."
Abstract
-
Cited by 448 (12 self)
- Add to MetaCart
(Show Context)
Effective engineering of the Internet is predicated upon a detailed understanding of issues such as the large-scale structure of its underlying physical topology, the manner in which it evolves over time, and the way in which its constituent components contribute to its overall function. Unfortunately, developing a deep understanding of these issues has proven to be a challenging task, since it in turn involves solving difficult problems such as mapping the actual topology, characterizing it, and developing models that capture its emergent behavior. Consequently, even though there are a number of topology models, it is an open question how representative the topologies they generate are of the actual Internet. Our goal is to produce a topology generation framework which improves the state of the art and is based on the design principles of representativeness, inclusiveness, and interoperability. Representativeness leads to synthetic topologies that accurately reflect many aspects of the actual Internet topology (e.g., hierarchical structure, node degree distribution, etc.). Inclusiveness combines the strengths of as many generation models as possible in a single generation tool. Interoperability provides interfaces to widely used simulation applications such as ns and SSF, and to visualization tools like Otter. We call such a tool a universal topology generator.
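To make "representativeness" concrete, here is a toy two-level generator in Python: a preferential-attachment AS-level graph with a small router star under each AS node. Every name and parameter is illustrative; BRITE's actual models and interfaces are richer than this sketch.

```python
import random

def barabasi_albert(n, m):
    """Preferential attachment: each new node links to m existing
    nodes chosen with probability roughly proportional to degree."""
    edges = []
    targets = list(range(m))       # first new node attaches to the seed nodes
    repeated = []                  # node ids repeated once per incident edge
    for v in range(m, n):
        for t in set(targets):
            edges.append((v, t))
            repeated += [v, t]
        targets = random.sample(repeated, m)
    return edges

def two_level_topology(n_as=20, m=2, routers_per_as=3):
    """Toy hierarchy: AS-level preferential-attachment graph, plus a
    router star under each AS; node ids are (as_id, router_id)."""
    as_edges = barabasi_albert(n_as, m)
    router_edges = [((a, 0), (a, r))
                    for a in range(n_as)
                    for r in range(1, routers_per_as)]
    return as_edges, router_edges
```

A universal generator in BRITE's sense would let this preferential-attachment model be swapped for others (random, hierarchical, measurement-derived) behind a single interface.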
Bullet: High Bandwidth Data Dissemination Using an Overlay Mesh
- 2003
"... In recent years, overlay networks have become an effective alternative to IP multicast for efficient point to multipoint communication across the Internet. Typically, nodes self-organize with the goal of forming an efficient overlay tree, one that meets performance targets without placing undue burd ..."
Abstract
-
Cited by 424 (22 self)
- Add to MetaCart
In recent years, overlay networks have become an effective alternative to IP multicast for efficient point-to-multipoint communication across the Internet. Typically, nodes self-organize with the goal of forming an efficient overlay tree, one that meets performance targets without placing undue burden on the underlying network. In this paper, we target high-bandwidth data distribution from a single source to a large number of receivers. Applications include large-file transfers and real-time multimedia streaming. For these applications, we argue that an overlay mesh, rather than a tree, can deliver fundamentally higher bandwidth and reliability relative to typical tree structures. This paper presents Bullet, a scalable and distributed algorithm that enables nodes spread across the Internet to self-organize into a high-bandwidth overlay mesh. We construct Bullet around the insight that data should be distributed in a disjoint manner to strategic points in the network. Individual Bullet receivers are then responsible for locating and retrieving the data from multiple points in parallel. Key contributions of this work include: i) an algorithm that sends data to different points in the overlay such that any data object is equally likely to appear at any node, ii) a scalable and decentralized algorithm that allows nodes to locate and recover missing data items, and iii) a complete implementation and evaluation of Bullet running across the Internet and in a large-scale emulation environment, revealing up to a factor-of-two bandwidth improvement under a variety of circumstances. In addition, we find that, relative to tree-based solutions, Bullet reduces the need to perform expensive bandwidth probing.
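The disjointness insight lends itself to a simple randomized sketch: the source scatters each block to a few random nodes, so every block is (in expectation) equally likely to be anywhere, and receivers pull what they lack from peers in parallel. Function and parameter names here are mine, not Bullet's actual protocol (which also handles peer discovery and pacing):

```python
import random

def assign_blocks(block_ids, node_ids, copies=2):
    """Randomly scatter each block to a few nodes so that every block
    is equally likely to land at every node."""
    placement = {n: set() for n in node_ids}
    for b in block_ids:
        for n in random.sample(node_ids, copies):
            placement[n].add(b)
    return placement

def missing_blocks(placement, node, all_blocks):
    """What a receiver still needs to recover from its mesh peers."""
    return set(all_blocks) - placement[node]
```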
Internet Traffic Engineering by Optimizing OSPF Weights
- In Proc. IEEE INFOCOM, 2000
"... Abstract—Open Shortest Path First (OSPF) is the most commonly used intra-domain internet routing protocol. Traffic flow is routed along shortest paths, splitting flow at nodes where several outgoing links are on shortest paths to the destination. The weights of the links, and thereby the shortest pa ..."
Abstract
-
Cited by 403 (13 self)
- Add to MetaCart
(Show Context)
Open Shortest Path First (OSPF) is the most commonly used intra-domain Internet routing protocol. Traffic flow is routed along shortest paths, splitting flow at nodes where several outgoing links are on shortest paths to the destination. The weights of the links, and thereby the shortest-path routes, can be changed by the network operator. The weights could be set proportional to the links' physical distances, but often the main goal is to avoid congestion, i.e. overloading of links, and the standard heuristic recommended by Cisco is to make the weight of a link inversely proportional to its capacity. Our starting point was a proposed AT&T WorldNet backbone with demands projected from previous measurements. The desire was to optimize the weight setting based on the projected demands. We showed that optimizing the weight settings for a given set of demands is NP-hard, so we resorted to a local search heuristic. Surprisingly, it turned out that for the proposed AT&T WorldNet backbone, we found weight settings that performed …
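Because finding optimal weights is NP-hard, the paper resorts to local search; the sketch below shows that style of heuristic in Python. The cost function (worst link utilization) matches the congestion-avoidance goal, but the neighborhood move and the caller-supplied route(s, t, weights) shortest-path routine are simplified stand-ins for the paper's exact machinery:

```python
import random

def max_utilization(weights, demands, route, capacity):
    """Cost: worst link utilization after routing every demand on
    its shortest path under the given weights."""
    load = {}
    for (s, t), d in demands.items():
        for link in route(s, t, weights):      # shortest path as a link list
            load[link] = load.get(link, 0.0) + d
    return max(load[l] / capacity[l] for l in load)

def local_search(weights, demands, route, capacity, steps=1000):
    """Repeatedly perturb one link weight; keep changes that lower
    the worst-case utilization, revert the rest."""
    best = max_utilization(weights, demands, route, capacity)
    for _ in range(steps):
        link = random.choice(list(weights))
        old = weights[link]
        weights[link] = random.randint(1, 20)  # OSPF weights are small integers
        cost = max_utilization(weights, demands, route, capacity)
        if cost < best:
            best = cost
        else:
            weights[link] = old                # revert the move
    return weights, best
```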
Distributed Topology Control for Power Efficient Operation in Multihop Wireless Ad Hoc Networks
- 2001
"... Abstract — The topology of wireless multihop ad hoc networks can be controlled by varying the transmission power of each node. We propose a simple distributed algorithm where each node makes local decisions about its transmission power and these local decisions collectively guarantee global connecti ..."
Abstract
-
Cited by 383 (18 self)
- Add to MetaCart
The topology of wireless multihop ad hoc networks can be controlled by varying the transmission power of each node. We propose a simple distributed algorithm where each node makes local decisions about its transmission power, and these local decisions collectively guarantee global connectivity. Specifically, based on directional information, a node grows its transmission power until it finds a neighbor node in every direction. The resulting network topology increases network lifetime by reducing transmission power and reduces traffic interference by having low node degrees. Moreover, we show that the routes in the multihop network are efficient in power consumption. We give an approximation scheme in which the power consumption of each route can be made arbitrarily close to the optimal by carefully choosing the parameters. Simulation results demonstrate significant performance improvements.
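The "neighbor in every direction" rule can be sketched directly: a node keeps raising its power (here, its radio range) until the angles to its reachable neighbors leave no gap wider than some cone angle alpha. The specific alpha, power step, and coverage test below are illustrative simplifications of the paper's algorithm:

```python
import math

def covered(neighbor_angles, alpha):
    """True if consecutive neighbor directions leave no angular gap
    wider than alpha, i.e. every direction has a nearby neighbor."""
    if not neighbor_angles:
        return False
    a = sorted(neighbor_angles)
    gaps = [b - x for x, b in zip(a, a[1:])]
    gaps.append(2 * math.pi - (a[-1] - a[0]))   # wrap-around gap
    return max(gaps) <= alpha

def grow_power(node, others, alpha=2 * math.pi / 3, step=1.0, p_max=10.0):
    """Increase transmission range until reachable neighbors cover
    every cone of angle alpha (or the maximum power is reached)."""
    p = step
    while p <= p_max:
        angles = [math.atan2(y - node[1], x - node[0])
                  for (x, y) in others
                  if math.dist(node, (x, y)) <= p]  # in range at power p
        if covered(angles, alpha):
            return p
        p += step
    return p_max
```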
Difficulties in Simulating the Internet
- IEEE/ACM Transactions on Networking, 2001
"... Simulating how the global Internet behaves is an immensely challenging undertaking because of the network's great heterogeneity and rapid change. The heterogeneity ranges from the individual links that carry the network's traffic, to the protocols that interoperate over the links, to the & ..."
Abstract
-
Cited by 341 (8 self)
- Add to MetaCart
Simulating how the global Internet behaves is an immensely challenging undertaking because of the network's great heterogeneity and rapid change. The heterogeneity ranges from the individual links that carry the network's traffic, to the protocols that interoperate over the links, to the "mix" of different applications used at a site, to the levels of congestion seen on different links. We discuss two key strategies for developing meaningful simulations in the face of these difficulties: searching for invariants, and judiciously exploring the simulation parameter space. We finish with a brief look at a collaborative effort within the research community to develop a common network simulator.
Scalability and Accuracy in a Large-Scale Network Emulator
- In Proceedings of the 5th Symposium on Operating Systems Design and Implementation (OSDI), 2002
"... This paper presents ModelNet, a scalable Internet emulation environment that enables researchers to deploy unmodified software prototypes in a configurable Internet-like environment and subject them to faults and varying network conditions. Edge nodes running user-specified OS and application softwa ..."
Abstract
-
Cited by 296 (45 self)
- Add to MetaCart
(Show Context)
This paper presents ModelNet, a scalable Internet emulation environment that enables researchers to deploy unmodified software prototypes in a configurable Internet-like environment and subject them to faults and varying network conditions. Edge nodes running user-specified OS and application software are configured to route their packets through a set of ModelNet core nodes, which cooperate to subject the traffic to the bandwidth, congestion constraints, latency, and loss profile of a target network topology.
This paper describes and evaluates the ModelNet architecture and its implementation, including novel techniques to balance emulation accuracy against scalability. The current ModelNet prototype is able to accurately subject thousands of instances of a distributed application to Internet-like conditions with gigabits of bisection bandwidth. Experiments with several large-scale distributed services demonstrate the generality and effectiveness of the infrastructure.
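What a core node does to each packet, imposing a target link's serialization rate, propagation delay, and loss, can be modeled as a per-link pipe. A toy sketch, not ModelNet's actual pipeline (which runs in the kernel and chains many such pipes along a topology path):

```python
import random

class EmulatedLink:
    """Toy per-link pipe: packets leave no faster than the link's
    bandwidth allows, arrive one propagation delay later, and are
    dropped with the configured loss probability."""
    def __init__(self, bandwidth_bps, delay_s, loss_rate):
        self.bandwidth = bandwidth_bps
        self.delay = delay_s
        self.loss = loss_rate
        self.free_at = 0.0        # time at which the link is next idle

    def send(self, now, size_bytes):
        """Return the packet's delivery time, or None if dropped."""
        if random.random() < self.loss:
            return None
        start = max(now, self.free_at)          # queue behind earlier packets
        tx = size_bytes * 8 / self.bandwidth    # serialization time
        self.free_at = start + tx
        return self.free_at + self.delay
```

Chaining one such pipe per hop of the target topology path yields the Internet-like bandwidth, latency, and loss conditions the abstract describes.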
A Quantitative Comparison of Graph-Based Models for Internet Topology
- IEEE/ACM Transactions on Networking, 1997
"... Graphs are commonly used to model the topological structure of internetworks, to study problems ranging from routing to resource reservation. A variety of graphs are found in the literature, including fixed topologies such as rings or stars, "well-known" topologies such as the ARPAnet, and ..."
Abstract
-
Cited by 265 (3 self)
- Add to MetaCart
(Show Context)
Graphs are commonly used to model the topological structure of internetworks, to study problems ranging from routing to resource reservation. A variety of graphs are found in the literature, including fixed topologies such as rings or stars, "well-known" topologies such as the ARPAnet, and randomly generated topologies. While many researchers rely upon graphs for analytic and simulation studies, there has been little analysis of the implications of using a particular model, or of how the graph generation method may affect the results of such studies. Further, the selection of one generation method over another is often arbitrary, since the differences and similarities between methods are not well understood. This paper considers the problem of generating and selecting graph models that reflect the properties of real internetworks. We review generation methods in common use, and also propose several new methods. We consider a set of metrics that characterize the graphs produced by a method, and we quantify similarities and differences among several generation methods with respect to these metrics. We also consider the effect of the graph model in the context of a specific problem, namely multicast routing.
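Comparing generation methods reduces to computing the same metrics on each method's output. A small sketch of two such metrics (average degree and hop-count diameter) over an undirected edge list; these are illustrative picks, not the paper's full metric set:

```python
from collections import deque

def degrees(n, edges):
    """Per-node degree of an undirected edge list over nodes 0..n-1."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def diameter(n, edges):
    """Longest shortest path (in hops) over all node pairs, via BFS
    from every node; assumes the graph is connected."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    worst = 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        worst = max(worst, max(dist.values()))
    return worst

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
deg = degrees(4, edges)
print(sum(deg) / 4, diameter(4, edges))   # average degree, diameter in hops
```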
Why We Don't Know How to Simulate the Internet
- 1997
"... Simulating how the global Internet data network behaves is an immensely challenging undertaking because of the network's great heterogeneity and rapid change. The heterogeneity ranges from the individual links that carry the network's traffic, to the protocols that interoperate over the li ..."
Abstract
-
Cited by 232 (4 self)
- Add to MetaCart
Simulating how the global Internet data network behaves is an immensely challenging undertaking because of the network's great heterogeneity and rapid change. The heterogeneity ranges from the individual links that carry the network's traffic, to the protocols that interoperate over the links, to the "mix" of different applications used at a site and the levels of congestion (load) seen on different links. We discuss two key strategies for developing meaningful simulations in the face of these difficulties: searching for invariants and judiciously exploring the simulation parameter space. We finish with a look at a collaborative effort to build a common simulation environment for conducting Internet studies.
On the Origins of Power Laws in Internet Topologies
- In 39th Annual Allerton Conference on Communication, Control, and Computing, 2001
"... Recent empirical studies [6] have shown that Internet topologies exhibit power laws of the form � for the following relationships: (P1) outdegree of node (domain or router) versus rank; (P2) number of nodes versus outdegree; (P3) number of node pairs within a neighborhood versus neighborhood size (i ..."
Abstract
-
Cited by 220 (4 self)
- Add to MetaCart
(Show Context)
Recent empirical studies [6] have shown that Internet topologies exhibit power laws of the form y ∝ x^a for the following relationships: (P1) outdegree of a node (domain or router) versus rank; (P2) number of nodes versus outdegree; (P3) number of node pairs within a neighborhood versus neighborhood size (in hops); and (P4) eigenvalues of the adjacency matrix versus rank. However, causes for the appearance of such power laws have not been convincingly given. In this paper, we examine four factors in the formation of Internet topologies. These factors are (F1) preferential connectivity of a new node to existing nodes; (F2) incremental growth of the network; (F3) distribution of nodes in space; and (F4) locality of edge connections. In synthetically generated network topologies, we study the relevance of each factor in causing the aforementioned power laws, as well as other properties, namely …
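Factors F1 and F2 together are the classic growth-with-preferential-attachment recipe, and they suffice to reproduce the rank/outdegree power law (P1) in simulation. A quick hedged sketch (the parameters below are arbitrary choices, not the paper's experimental setup):

```python
import random

def grow_preferential(n, m=2):
    """Incremental growth (F2) with preferential connectivity (F1):
    each arriving node attaches to up to m nodes picked with
    probability roughly proportional to current degree."""
    stubs = list(range(m))         # node ids repeated once per incident edge
    deg = [0] * n
    for v in range(m, n):
        for u in set(random.choices(stubs, k=m)):
            deg[v] += 1
            deg[u] += 1
            stubs += [v, u]
    return deg

# P1 signature: degree versus rank falls on a rough straight line
# when both axes are plotted on a log scale.
deg = sorted(grow_preferential(2000), reverse=True)
for rank in (1, 10, 100, 1000):
    print(rank, deg[rank - 1])
```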