Resilient Overlay Networks, 2001
"... A Resilient Overlay Network (RON) is an architecture that allows distributed Internet applications to detect and recover from path outages and periods of degraded performance within several seconds, improving over today’s wide-area routing protocols that take at least several minutes to recover. A R ..."
Cited by 1160 (31 self)
A Resilient Overlay Network (RON) is an architecture that allows distributed Internet applications to detect and recover from path outages and periods of degraded performance within several seconds, improving over today’s wide-area routing protocols that take at least several minutes to recover. A RON is an application-layer overlay on top of the existing Internet routing substrate. The RON nodes monitor the functioning and quality of the Internet paths among themselves, and use this information to decide whether to route packets directly over the Internet or by way of other RON nodes, optimizing application-specific routing metrics. Results from two sets of measurements of a working RON deployed at sites scattered across the Internet demonstrate the benefits of our architecture. For instance, over a 64-hour sampling period in March 2001 across a twelve-node RON, there were 32 significant outages, each lasting over thirty minutes, over the 132 measured paths. RON’s routing mechanism was able to detect, recover, and route around all of them, in less than twenty seconds on average, showing that its methods for fault detection and recovery work well at discovering alternate paths in the Internet. Furthermore, RON was able to improve the loss rate, latency, or throughput perceived by data transfers; for example, about 5% of the transfers doubled their TCP throughput and 5% of our transfers saw their loss probability reduced by 0.05. We found that forwarding packets via at most one intermediate RON node is sufficient to overcome faults and improve performance in most cases. These improvements, particularly in the area of fault detection and recovery, demonstrate the benefits of moving some of the control over routing into the hands of end-systems.
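The routing decision at RON's core, choosing between the direct Internet path and a single-intermediate overlay path according to a measured metric, can be illustrated with a small sketch. This is not RON's code: the node names, latency matrix, and latency-only metric below are invented for illustration (RON also optimizes loss and throughput metrics).

```python
# Sketch of RON-style one-hop overlay path selection (illustrative only).
# Assumes each node already has fresh pairwise latency measurements.
INF = float("inf")

def best_one_hop_path(latency, src, dst):
    """Pick the direct path or the best single-intermediate overlay path.

    latency: dict mapping (a, b) -> measured latency in ms
             (INF if the path is currently out).
    Returns (path, cost).
    """
    nodes = {n for pair in latency for n in pair}
    best_path, best_cost = [src, dst], latency.get((src, dst), INF)
    for mid in nodes - {src, dst}:
        cost = latency.get((src, mid), INF) + latency.get((mid, dst), INF)
        if cost < best_cost:
            best_path, best_cost = [src, mid, dst], cost
    return best_path, best_cost

# Direct path is out (e.g., a routing outage); RON-style forwarding
# recovers by relaying through one intermediate node.
lat = {("A", "B"): INF, ("A", "C"): 20.0, ("C", "B"): 25.0}
print(best_one_hop_path(lat, "A", "B"))   # (['A', 'C', 'B'], 45.0)
```

The single-intermediate restriction keeps the search linear in the number of nodes, which matches the paper's finding that one relay hop suffices in most cases.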
Error and attack tolerance of complex networks, 2000
"... Many complex systems display a surprising degree of tolerance against errors. For example, relatively simple organisms grow, persist and reproduce despite drastic pharmaceutical or environmental interventions, an error tolerance attributed to the robustness of the underlying metabolic network [1]. C ..."
Cited by 1013 (7 self)
Many complex systems display a surprising degree of tolerance against errors. For example, relatively simple organisms grow, persist and reproduce despite drastic pharmaceutical or environmental interventions, an error tolerance attributed to the robustness of the underlying metabolic network [1]. Complex communication networks [2] display a surprising degree of robustness: while key components regularly malfunction, local failures rarely lead to the loss of the global information-carrying ability of the network. The stability of these and other complex systems is often attributed to the redundant wiring of the functional web defined by the systems’ components. In this paper we demonstrate that error tolerance is not shared by all redundant systems, but it is displayed only by a class of inhomogeneously wired networks, called scale-free networks. We find that scale-free networks, describing a number of systems, such as the World Wide Web (www) [3–5], Internet [6], social networks [7] or a cell [8], display an unexpected degree of robustness, the ability of their nodes to communicate being unaffected by even unrealistically high failure rates. However, error tolerance comes at a high price in that these networks are extremely vulnerable to attacks, that is, to the selection and removal of a few nodes that play a vital role in maintaining the network’s connectivity.
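The paper's central experiment, removing nodes at random ("errors") versus by highest degree ("attacks") and watching the largest connected component, is easy to reproduce in miniature. Below is a hedged sketch using networkx with a Barabási–Albert graph as a stand-in scale-free topology; the sizes and removal fractions are illustrative, not the paper's.

```python
# Illustrative simulation (not from the paper): compare how a scale-free
# graph's giant component shrinks under random node failures versus
# targeted removal of the highest-degree nodes.
import random
import networkx as nx

def giant_fraction(G):
    """Fraction of surviving nodes in the largest connected component."""
    if G.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(G)) / G.number_of_nodes()

def remove_and_measure(G, fraction, targeted):
    G = G.copy()
    k = int(fraction * G.number_of_nodes())
    if targeted:  # "attack": delete the most connected nodes first
        victims = [n for n, _ in sorted(G.degree, key=lambda kv: kv[1],
                                        reverse=True)[:k]]
    else:         # "error": delete nodes uniformly at random
        victims = random.sample(list(G.nodes), k)
    G.remove_nodes_from(victims)
    return giant_fraction(G)

G = nx.barabasi_albert_graph(2000, 2, seed=1)   # scale-free test graph
print("random 5% failures:", remove_and_measure(G, 0.05, targeted=False))
print("targeted 5% attack:", remove_and_measure(G, 0.05, targeted=True))
```

Random failures mostly hit low-degree nodes and barely dent connectivity, while the targeted attack fragments the graph: the asymmetry the paper identifies.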
Bro: A System for Detecting Network Intruders in Real-Time, 1999
"... We describe Bro, a stand-alone system for detecting network intruders in real-time by passively monitoring a network link over which the intruder's traffic transits. We give an overview of the system's design, which emphasizes highspeed (FDDI-rate) monitoring, real-time notification, clear ..."
Cited by 925 (42 self)
We describe Bro, a stand-alone system for detecting network intruders in real-time by passively monitoring a network link over which the intruder's traffic transits. We give an overview of the system's design, which emphasizes high-speed (FDDI-rate) monitoring, real-time notification, clear separation between mechanism and policy, and extensibility. To achieve these ends, Bro is divided into an “event engine” that reduces a kernel-filtered network traffic stream into a series of higher-level events, and a “policy script interpreter” that interprets event handlers written in a specialized language used to express a site's security policy. Event handlers can update state information, synthesize new events, record information to disk, and generate real-time notifications via syslog. We also discuss a number of attacks that attempt to subvert passive monitoring systems and defenses against these, and give particulars of how Bro analyzes the six applications integrated into it so far: Finger, FTP, Portmapper, Ident, Telnet and Rlogin. The system is publicly available in source code form.
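Bro's split between a low-level event engine and site-policy event handlers can be sketched in miniature. The toy dispatcher below only illustrates the architecture; Bro's policy language is its own (not Python), and the event name and fields here are invented.

```python
# Toy sketch of Bro's "event engine" / policy-handler split. The engine
# reduces traffic to events; policy handlers decide what matters.
from collections import defaultdict

handlers = defaultdict(list)            # event name -> handler functions
notifications = []                      # stand-in for syslog output

def on(event):
    """Register a policy handler for an event, decorator-style."""
    def register(fn):
        handlers[event].append(fn)
        return fn
    return register

def dispatch(event, **fields):
    """The event engine hands each synthesized event to all handlers."""
    for fn in handlers[event]:
        fn(**fields)

@on("connection_attempt")
def flag_sensitive_ports(src, dst, port):
    if port in (23, 513):               # Telnet, Rlogin: watched services
        notifications.append(f"login attempt {src} -> {dst}:{port}")

# The real engine would synthesize this event from a packet stream.
dispatch("connection_attempt", src="10.0.0.9", dst="10.0.0.1", port=23)
print(notifications)
```

Keeping policy in handlers means a site can change what counts as suspicious without touching the traffic-reduction mechanism, the separation the abstract emphasizes.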
End-to-End Internet Packet Dynamics, Proc. SIGCOMM '97, 1997
"... Abstract We discuss findings from a large-scale study of Internet packet dynamics conducted by tracing 20,000 TCP bulk transfers between 35 Internet sites. Because we traced each 100 Kbyte transfer at both the sender and the receiver, the measurements allow us to distinguish between the end-to-end ..."
Cited by 843 (19 self)
We discuss findings from a large-scale study of Internet packet dynamics conducted by tracing 20,000 TCP bulk transfers between 35 Internet sites. Because we traced each 100 Kbyte transfer at both the sender and the receiver, the measurements allow us to distinguish between the end-to-end behaviors due to the different directions of the Internet paths, which often exhibit asymmetries. We characterize the prevalence of unusual network events such as out-of-order delivery and packet corruption; discuss a robust receiver-based algorithm for estimating "bottleneck bandwidth" that addresses deficiencies discovered in techniques based on "packet pair"; investigate patterns of packet loss, finding that loss events are not well-modeled as independent and, furthermore, that the distribution of the duration of loss events exhibits infinite variance; and analyze variations in packet transit delays as indicators of congestion periods, finding that congestion periods also span a wide range of time scales.
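The "packet pair" idea the paper critiques and hardens fits in a few lines: back-to-back packets queue at the bottleneck link, so their arrival spacing at the receiver reveals its bandwidth. The sketch below is the basic receiver-side calculation with a median for mild robustness, not the paper's full algorithm; the timestamps are made up.

```python
# Hedged sketch of receiver-based packet-pair bandwidth estimation.
def packet_pair_estimate(arrivals, size_bytes):
    """Estimate bottleneck bandwidth in bits/s from receiver timestamps.

    arrivals: arrival times (seconds) of back-to-back packets.
    size_bytes: size of each packet.
    """
    gaps = [b - a for a, b in zip(arrivals, arrivals[1:]) if b > a]
    # Use the median gap rather than the mean so stray compressed or
    # expanded pairs (the failure mode the paper documents) do not
    # dominate the estimate.
    gaps.sort()
    median_gap = gaps[len(gaps) // 2]
    return size_bytes * 8 / median_gap

# 1500-byte packets arriving ~1.2 ms apart -> roughly 10 Mb/s bottleneck.
print(packet_pair_estimate([0.0, 0.0012, 0.0024, 0.0037], 1500))
```

The paper's contribution is precisely that this naive estimator fails on real traces (multi-channel links, clustered losses), motivating its more robust receiver-based algorithm.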
Practical network support for IP traceback, 2000
"... This paper describes a technique for tracing anonymous packet flooding attacks in the Internet back towards their source. This work is motivated by the increased frequency and sophistication of denial-of-service attacks and by the difficulty in tracing packets with incorrect, or “spoofed”, source ad ..."
Cited by 678 (13 self)
This paper describes a technique for tracing anonymous packet flooding attacks in the Internet back towards their source. This work is motivated by the increased frequency and sophistication of denial-of-service attacks and by the difficulty in tracing packets with incorrect, or “spoofed”, source addresses. In this paper we describe a general purpose traceback mechanism based on probabilistic packet marking in the network. Our approach allows a victim to identify the network path(s) traversed by attack traffic without requiring interactive operational support from Internet Service Providers (ISPs). Moreover, this traceback can be performed “post-mortem”, after an attack has completed. We present an implementation of this technology that is incrementally deployable, (mostly) backwards compatible and can be efficiently implemented using conventional technology.
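The probabilistic-marking idea can be sketched end to end: each router stamps passing packets with small probability, and the victim orders routers by how often their mark survives. This simplified "node sampling" variant is for illustration only; the paper's actual scheme marks compressed edge fragments in the IP identification field.

```python
# Sketch of probabilistic packet marking (node-sampling flavor).
# Router names, mark field, and probability are invented for illustration.
import random
from collections import Counter

MARK_PROB = 0.04   # p: each router overwrites the mark with probability p

def forward(packet, path):
    """Routers along `path` (attacker -> victim) probabilistically stamp
    their address; later routers may overwrite earlier marks."""
    for router in path:
        if random.random() < MARK_PROB:
            packet["mark"] = router
    return packet

def reconstruct(packets):
    """Routers closer to the victim mark last, so their marks survive
    more often; ranking by count orders the path victim-outward."""
    counts = Counter(p["mark"] for p in packets if p["mark"])
    return [r for r, _ in counts.most_common()]

path = ["R1", "R2", "R3", "R4"]          # attacker -> victim
flood = [forward({"mark": None}, path) for _ in range(200_000)]
print(reconstruct(flood))                 # likely ['R4', 'R3', 'R2', 'R1']
```

Note the "post-mortem" property: reconstruction uses only marks already collected, with no ISP interaction during the attack.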
On Inferring Autonomous System Relationships in the Internet, IEEE/ACM Transactions on Networking, 2000
"... ..."
Heuristics for Internet Map Discovery, 2000
"... Mercator is a program that uses hop-limited probes---the same primitive used in traceroute---to infer an Internet map. It uses informed random address probing to carefully exploring the IP address space when determining router adjacencies, uses source-route capable routers wherever possible to enhan ..."
Cited by 385 (13 self)
Mercator is a program that uses hop-limited probes (the same primitive used in traceroute) to infer an Internet map. It uses informed random address probing to carefully explore the IP address space when determining router adjacencies, uses source-route capable routers wherever possible to enhance the fidelity of the resulting map, and employs novel mechanisms for resolving aliases (interfaces belonging to the same router). This paper describes the design of these heuristics and our experiences with Mercator, and presents some preliminary analysis of the resulting Internet map.
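The hop-limited probing primitive at Mercator's core can be sketched against a simulated topology. Real probing needs raw sockets, rate limiting, and the paper's alias-resolution heuristics; every address and router name below is invented.

```python
# Illustrative sketch of map discovery with hop-limited (TTL-scoped)
# probes, run against a simulated topology instead of live traceroute.
PATHS = {  # destination -> router path, standing in for the network
    "10.1.0.5": ["r1", "r2", "r4"],
    "10.2.0.9": ["r1", "r3", "r4"],
}

def probe(dst, ttl):
    """Return the router where a probe with this TTL 'expires', as
    traceroute's ICMP time-exceeded replies would reveal it."""
    path = PATHS[dst]
    return path[ttl - 1] if ttl <= len(path) else None

def discover_links(destinations):
    links = set()
    for dst in destinations:
        prev, ttl = None, 1
        while (hop := probe(dst, ttl)) is not None:
            if prev is not None:
                links.add((prev, hop))   # router adjacency inferred
            prev, ttl = hop, ttl + 1
        links.add((prev, dst))
    return links

print(sorted(discover_links(PATHS)))
```

Consecutive TTL responses yield router adjacencies; the hard parts Mercator adds on top are choosing which addresses to probe and recognizing when two interfaces belong to one router.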
Difficulties in Simulating the Internet, IEEE/ACM Transactions on Networking, 2001
"... Simulating how the global Internet behaves is an immensely challenging undertaking because of the network's great heterogeneity and rapid change. The heterogeneity ranges from the individual links that carry the network's traffic, to the protocols that interoperate over the links, to the & ..."
Cited by 341 (8 self)
Simulating how the global Internet behaves is an immensely challenging undertaking because of the network's great heterogeneity and rapid change. The heterogeneity ranges from the individual links that carry the network's traffic, to the protocols that interoperate over the links, to the "mix" of different applications used at a site, to the levels of congestion seen on different links. We discuss two key strategies for developing meaningful simulations in the face of these difficulties: searching for invariants, and judiciously exploring the simulation parameter space. We finish with a brief look at a collaborative effort within the research community to develop a common network simulator.
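The second strategy, judiciously exploring the parameter space rather than trusting one canned scenario, can be illustrated with a deliberately trivial model; the simulator, parameters, and grid below are invented for the sketch, not drawn from the paper.

```python
# Hedged sketch of parameter-space exploration: sweep offered load and
# buffer size over a toy slotted queue and check whether a qualitative
# conclusion (loss grows with load) holds across the whole grid.
import itertools
import random

def sim_loss_rate(arrival_p, service_p, buffer_pkts, n=50_000, seed=0):
    """Crude slotted simulation: Bernoulli arrivals and departures,
    drops counted at a finite buffer."""
    rng = random.Random(seed)
    queue = drops = arrivals = 0
    for _ in range(n):
        if rng.random() < arrival_p:
            arrivals += 1
            if queue < buffer_pkts:
                queue += 1
            else:
                drops += 1
        if queue and rng.random() < service_p:
            queue -= 1
    return drops / max(arrivals, 1)

for load, buf in itertools.product([0.3, 0.6, 0.9], [5, 50]):
    loss = sim_loss_rate(load, service_p=0.5, buffer_pkts=buf)
    print(f"load={load:.1f} buffer={buf:2d} loss={loss:.4f}")
```

A conclusion that survives the sweep (here, loss rising with load regardless of buffer size) is a candidate invariant; one that holds only at a single grid point is an artifact of that scenario.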
Multicast-Based Inference of Network-Internal Characteristics: Accuracy of Packet Loss Estimation, IEEE Transactions on Information Theory, 1998
"... We explore the use of end-to-end multicast traffic as measurement probes to infer network-internal characteristics. We have developed in an earlier paper [2] a Maximum Likelihood Estimator for packet loss rates on individual links based on losses observed by multicast receivers. This technique explo ..."
Cited by 323 (40 self)
We explore the use of end-to-end multicast traffic as measurement probes to infer network-internal characteristics. We have developed in an earlier paper [2] a Maximum Likelihood Estimator for packet loss rates on individual links based on losses observed by multicast receivers. This technique exploits the inherent correlation between such observations to infer the performance of paths between branch points in the multicast tree spanning the probe source and its receivers. We evaluate through analysis and simulation the accuracy of our estimator under a variety of network conditions. In particular, we report on the error between inferred loss rates and actual loss rates as we vary the network topology, propagation delay, packet drop policy, background traffic mix, and probe traffic type. In all but one case, estimated losses and probe losses agree to within 2 percent on average. We feel this accuracy is enough to reliably identify congested links in a wide-area internetwork. Keywords: Internet performance, end-to-end measurements, Maximum Likelihood Estimator, tomography.
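For the simplest multicast tree, a source feeding two receivers through one shared link, the estimator the paper analyzes has a well-known closed form, which the sketch below checks by simulation. The setup and rates are illustrative; the paper's estimator handles general trees. With independent per-link Bernoulli losses, the receive rates satisfy γ1 = A·α1, γ2 = A·α2, and γ12 = A·α1·α2, where A is the shared link's pass probability, so A = γ1·γ2 / γ12.

```python
# Worked example: MINC-style shared-link loss inference on a two-leaf
# multicast tree, recovered purely from receiver-side observations.
import random

def infer_two_leaf(A, a1, a2, n=200_000, seed=0):
    """Simulate probes through shared link (pass prob A) and leaf links
    (pass probs a1, a2), then invert the closed-form estimator."""
    rng = random.Random(seed)
    got1 = got2 = got_both = 0
    for _ in range(n):                      # simulate multicast probes
        passed_shared = rng.random() < A
        r1 = passed_shared and rng.random() < a1
        r2 = passed_shared and rng.random() < a2
        got1 += r1
        got2 += r2
        got_both += r1 and r2
    g1, g2, g12 = got1 / n, got2 / n, got_both / n
    A_hat = g1 * g2 / g12                   # MLE for the shared link
    return A_hat, g1 / A_hat, g2 / A_hat    # and for each leaf link

print(infer_two_leaf(A=0.95, a1=0.98, a2=0.90))
# -> roughly (0.95, 0.98, 0.90): per-link loss recovered from
#    end-to-end multicast observations alone.
```

The key leverage is correlation: a probe seen by both receivers must have crossed the shared link, which is what lets end-to-end observations be decomposed into per-link rates.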