Results 1 - 10 of 843
Practical Network Coding, 2003
"... We propose a distributed scheme for practical network coding that obviates the need for centralized knowledge of the graph topology, the encoding functions, and the decoding functions, and furthermore obviates the need for information to be communicated synchronously through the network. The resu ..."
Cited by 462 (15 self)
Abstract:
We propose a distributed scheme for practical network coding that obviates the need for centralized knowledge of the graph topology, the encoding functions, and the decoding functions, and furthermore obviates the need for information to be communicated synchronously through the network. The result is a practical system for network coding that is robust to random packet loss and delay as well as robust to any changes in the network topology or capacity due to joins, leaves, node or link failures, congestion, and so on. We simulate such a practical network coding system using the network topologies of several commercial Internet Service Providers, and demonstrate that it can achieve close to the theoretically optimal performance.
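The scheme described above amounts to random linear network coding with the coefficient vector carried in each packet. Below is a minimal sketch of that idea using GF(2) (XOR) coefficients for brevity; a practical system, including the one in this paper, works over a larger field such as GF(2^8) and adds generation and buffer management that is omitted here.

```python
# Minimal sketch of random linear network coding over GF(2): each coded packet
# carries its coefficient vector; a receiver decodes by Gaussian elimination
# once it holds k linearly independent combinations.
import random

def encode(source_packets):
    """Emit one coded packet: (coefficient bitmask, XOR of the chosen source packets)."""
    k = len(source_packets)
    coeffs = 0
    while coeffs == 0:                        # skip the useless all-zero combination
        coeffs = random.getrandbits(k)
    payload = bytearray(len(source_packets[0]))
    for i in range(k):
        if coeffs >> i & 1:
            for pos, byte in enumerate(source_packets[i]):
                payload[pos] ^= byte
    return coeffs, bytes(payload)

def decode(coded_packets, k):
    """Gaussian elimination over GF(2); returns the k source packets, or None if rank < k."""
    basis = [None] * k                        # basis[j]: reduced row whose lowest set bit is j
    for coeffs, payload in coded_packets:
        payload = bytearray(payload)
        for j in range(k):
            if not coeffs >> j & 1:
                continue
            if basis[j] is None:
                basis[j] = [coeffs, payload]
                break
            coeffs ^= basis[j][0]
            payload = bytearray(a ^ b for a, b in zip(payload, basis[j][1]))
        # a packet that reduces to zero was linearly dependent and is simply dropped
    if any(row is None for row in basis):
        return None                           # not enough independent combinations yet
    for j in range(k - 1, -1, -1):            # back-substitute to diagonalize the basis
        for i in range(j):
            if basis[i][0] >> j & 1:
                basis[i][0] ^= basis[j][0]
                basis[i][1] = bytearray(a ^ b for a, b in zip(basis[i][1], basis[j][1]))
    return [bytes(row[1]) for row in basis]
```

With GF(2) coefficients a receiver typically needs slightly more than k coded packets before decode() finds k independent combinations; a larger field makes that overhead negligible, which is one reason practical systems use it.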
PlanetLab: An overlay testbed for broad-coverage services. ACM SIGCOMM Computer Communication Review, 2003
"... PlanetLab is a global overlay network for developing and accessing broad-coverage network services. Our goal is to grow to 1000 geographically distributed nodes, connected by a diverse collection of links. PlanetLab allows multiple services to run concurrently and continuously, each in its own slice ..."
Cited by 445 (3 self)
Abstract:
PlanetLab is a global overlay network for developing and accessing broad-coverage network services. Our goal is to grow to 1000 geographically distributed nodes, connected by a diverse collection of links. PlanetLab allows multiple services to run concurrently and continuously, each in its own slice of PlanetLab. This paper describes our initial implementation of PlanetLab, including the mechanisms used to implement virtualization, and the collection of core services used to manage PlanetLab.
Democratizing content publication with Coral. In NSDI, 2004
"... CoralCDN is a peer-to-peer content distribution network that allows a user to run a web site that offers high performance and meets huge demand, all for the price of a cheap broadband Internet connection. Volunteer sites that run CoralCDN automatically replicate content as a side effect of users acc ..."
Cited by 314 (21 self)
Abstract:
CoralCDN is a peer-to-peer content distribution network that allows a user to run a web site that offers high performance and meets huge demand, all for the price of a cheap broadband Internet connection. Volunteer sites that run CoralCDN automatically replicate content as a side effect of users accessing it. Publishing through CoralCDN is as simple as making a small change to the hostname in an object's URL; a peer-to-peer DNS layer transparently redirects browsers to nearby participating cache nodes, which in turn cooperate to minimize load on the origin web server. One of the system's key goals is to avoid creating hot spots that might dissuade volunteers and hurt performance. It achieves this through Coral, a latency-optimized hierarchical indexing infrastructure based on a novel abstraction called a distributed sloppy hash table, or DSHT.
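As a concrete illustration of the "small change to the hostname" step, the sketch below rewrites a URL so that its hostname resolves through Coral's peer-to-peer DNS layer. The .nyud.net suffix and proxy port are assumptions about the public deployment, not details taken from the abstract above.

```python
# A minimal sketch of "Coralizing" a URL; the suffix and port are assumed, not
# taken from the paper, and any explicit port on the origin URL is ignored.
from urllib.parse import urlsplit, urlunsplit

CORAL_SUFFIX = ".nyud.net:8090"   # assumed suffix/port of the public CoralCDN deployment

def coralize(url: str) -> str:
    """Rewrite a URL so requests are served through the CoralCDN proxy layer."""
    parts = urlsplit(url)
    if parts.hostname is None or parts.hostname.endswith(".nyud.net"):
        return url                # already Coralized, or nothing we can rewrite
    netloc = parts.hostname + CORAL_SUFFIX
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))

# e.g. coralize("http://example.com/logo.png")
#      -> "http://example.com.nyud.net:8090/logo.png"
```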
iPlane: An information plane for distributed services. In OSDI, 2006
"... Abstract — In this paper, we present the design, implementation, and evaluation of the iPlane, a scalable service providing accurate predictions of Internet path performance for emerging overlay services. Unlike the more common black box latency prediction techniques in use today, the iPlane builds ..."
Cited by 297 (25 self)
Abstract:
In this paper, we present the design, implementation, and evaluation of the iPlane, a scalable service providing accurate predictions of Internet path performance for emerging overlay services. Unlike the more common black box latency prediction techniques in use today, the iPlane builds an explanatory model of the Internet. We predict end-to-end performance by composing measured performance of segments of known Internet paths. This method allows us to accurately and efficiently predict latency, bandwidth, capacity and loss rates between arbitrary Internet hosts. We demonstrate the feasibility and utility of the iPlane service by applying it to several representative overlay services in use today: content distribution, swarming peer-to-peer filesharing, and voice-over-IP. In each case, we observe that using iPlane’s predictions leads to a significant improvement in end user performance.
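The path-composition idea lends itself to a short sketch: predict an end-to-end metric by combining per-segment measurements along the predicted path. The composition rules below (latencies add, loss probabilities combine multiplicatively, capacity is the bottleneck minimum) are standard conventions used here for illustration, not a statement of iPlane's exact model.

```python
# Compose per-segment measurements into an end-to-end prediction for a path.
from dataclasses import dataclass

@dataclass
class Segment:
    latency_ms: float     # measured latency of the segment
    loss_rate: float      # measured loss probability on the segment
    capacity_mbps: float  # measured capacity of the segment

def compose(path):
    """Combine segment measurements along a path into an end-to-end prediction."""
    latency = sum(s.latency_ms for s in path)
    delivery = 1.0
    for s in path:
        delivery *= (1.0 - s.loss_rate)       # packet survives every segment independently
    capacity = min(s.capacity_mbps for s in path)
    return {"latency_ms": latency, "loss_rate": 1.0 - delivery, "capacity_mbps": capacity}

# e.g. compose([Segment(12, 0.001, 1000), Segment(38, 0.02, 100), Segment(5, 0.0, 50)])
```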
A Framework for Classifying Denial of Service Attacks. In Proceedings of ACM SIGCOMM, 2003
"... Launching a denial of service (DoS) attack is trivial, but detection and response is a painfully slow and often a manual process. Automatic classification of attacks as single- or multi-source can help focus a response, but current packet-header-based approaches are susceptible to spoofing. This pap ..."
Cited by 211 (12 self)
Abstract:
Launching a denial of service (DoS) attack is trivial, but detection and response are painfully slow and often manual processes. Automatic classification of attacks as single- or multi-source can help focus a response, but current packet-header-based approaches are susceptible to spoofing. This paper introduces a framework for classifying DoS attacks based on header content, transient ramp-up behavior, and novel techniques such as spectral analysis. Although headers are easily forged, we show that characteristics of attack ramp-up and attack spectrum are more difficult to spoof. To evaluate our framework, we monitored access links of a regional ISP, detecting 80 live attacks. Header analysis identified the number of attackers in 67 attacks, while the remaining 13 attacks were classified based on ramp-up and spectral analysis. We validate our results through monitoring at a second site, controlled experiments, and simulation. We use experiments and simulation to understand the underlying reasons for the characteristics observed. In addition to helping understand attack dynamics, classification mechanisms such as ours are important for the development of realistic models of DoS traffic, can be packaged as an automated tool to aid in rapid response to attacks, and can also be used to estimate the level of DoS activity on the Internet.
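To make the spectral-analysis idea concrete, the sketch below bins packet arrival times into a fixed-width time series and computes its power spectrum; a classifier can then compare how the power is distributed across frequency bands. The bin width and any decision rule built on top of it are assumptions for illustration, not the paper's calibrated procedure.

```python
# Power spectrum of a packet-arrival process, as a starting point for spectral
# classification of attack traffic.
import numpy as np

def arrival_spectrum(arrival_times_s, bin_ms=1.0):
    """Return (frequencies in Hz, power spectrum) of the packet-arrival time series."""
    arrival_times_s = np.asarray(arrival_times_s, dtype=float)
    duration = arrival_times_s.max() - arrival_times_s.min()
    bins = int(np.ceil(duration / (bin_ms / 1000.0))) + 1
    counts, _ = np.histogram(arrival_times_s, bins=bins)
    counts = counts - counts.mean()                    # remove the DC component
    power = np.abs(np.fft.rfft(counts)) ** 2
    freqs = np.fft.rfftfreq(len(counts), d=bin_ms / 1000.0)
    return freqs, power

# A classifier might then compare the fraction of power below vs. above some
# cutoff frequency to separate single-source from multi-source attacks.
```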
DIMES: Let the Internet measure itself. Computer Communication Review, 2005
"... Abstract — Today’s Internet maps, which are all collected from a small number of vantage points, are falling short of being accurate. We suggest here a paradigm shift for this task. DIMES is a distributed measurement infrastructure for the Internet that is based on the deployment of thousands of lig ..."
Cited by 207 (33 self)
Abstract:
Today’s Internet maps, which are all collected from a small number of vantage points, are falling short of being accurate. We suggest here a paradigm shift for this task. DIMES is a distributed measurement infrastructure for the Internet that is based on the deployment of thousands of lightweight measurement agents around the globe. We describe the rationale behind DIMES deployment, discuss its design trade-offs and algorithmic challenges, and analyze the structure of the Internet as it is seen with DIMES.
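A lightweight agent of the kind described might look like the sketch below: probe assigned targets at a low rate and report raw results to a collector. Everything concrete here (the target list, the report endpoint, use of the system traceroute binary) is hypothetical; DIMES' actual agent and protocol are not reproduced.

```python
# Hypothetical sketch of a lightweight measurement agent reporting to a collector.
import json, subprocess, time, urllib.request

REPORT_URL = "https://collector.example.org/report"   # hypothetical collection endpoint
TARGETS = ["192.0.2.1", "198.51.100.7"]                # hypothetical assigned targets

def traceroute(target):
    """Run the system traceroute binary and return its raw text output."""
    out = subprocess.run(["traceroute", "-n", "-q", "1", target],
                         capture_output=True, text=True, timeout=120)
    return out.stdout

def report(results):
    """POST a batch of measurement results to the collector as JSON."""
    data = json.dumps(results).encode()
    req = urllib.request.Request(REPORT_URL, data=data,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=30)

if __name__ == "__main__":
    while True:
        report([{"target": t, "raw": traceroute(t), "ts": time.time()} for t in TARGETS])
        time.sleep(3600)   # a low probing rate keeps the agent lightweight
```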
Towards an Accurate AS-Level Traceroute Tool, 2003
"... Traceroute is widely used to detect routing problems, characterize end-to-end paths, and discover the Internet topology. Providing an accurate list of the Autonomous Systems (ASes) along the forwarding path would make traceroute even more valuable to researchers and network operators. However, conve ..."
Cited by 193 (19 self)
Abstract:
Traceroute is widely used to detect routing problems, characterize end-to-end paths, and discover the Internet topology. Providing an accurate list of the Autonomous Systems (ASes) along the forwarding path would make traceroute even more valuable to researchers and network operators. However, conventional approaches to mapping traceroute hops to AS numbers are not accurate enough. Address registries are often incomplete and out-of-date. BGP routing tables provide a better IP-to-AS mapping, though this approach has significant limitations as well. Based on our extensive measurements, about 10% of the traceroute paths have one or more hops that do not map to a unique AS number, and around 15% of the traceroute AS paths have an AS loop. In addition, some traceroute AS paths have extra or missing AS hops due to Internet eXchange Points, sibling ASes managed by the same institution, and ASes that do not advertise routes to their infrastructure. Using the BGP tables as a starting point, we propose techniques for improving the IP-to-AS mapping as an important step toward an AS-level traceroute tool. Our algorithms draw on analysis of traceroute probes, reverse DNS lookups, BGP routing tables, and BGP update messages collected from multiple locations. We also discuss how the improved IP-to-AS mapping allows us to home in on cases where the BGP and traceroute AS paths differ for legitimate reasons.
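The baseline mapping step, before the paper's refinements, is a longest-prefix match of each hop IP against origin ASes extracted from a BGP table. A toy sketch, with a made-up prefix table standing in for a real BGP dump, and none of the paper's corrections for IXPs, sibling ASes, or unadvertised infrastructure:

```python
# Map traceroute hops to ASes by longest-prefix match against BGP origin data.
import ipaddress

# prefix -> origin AS, as it would be extracted from a BGP table dump (toy values)
BGP_TABLE = {
    "192.0.2.0/24": 64500,
    "198.51.100.0/22": 64501,
    "198.51.100.0/24": 64502,   # more-specific prefix with a different origin
}

# sort by prefix length so the first match is the longest (fine for a toy table)
PREFIXES = sorted(((ipaddress.ip_network(p), asn) for p, asn in BGP_TABLE.items()),
                  key=lambda entry: entry[0].prefixlen, reverse=True)

def ip_to_as(hop_ip):
    """Return the origin AS of the longest matching prefix, or None if unmapped."""
    addr = ipaddress.ip_address(hop_ip)
    for net, asn in PREFIXES:
        if addr in net:
            return asn
    return None

def traceroute_to_as_path(hop_ips):
    """Collapse a list of hop IPs into an AS-level path, dropping repeats and unmapped hops."""
    path = []
    for asn in map(ip_to_as, hop_ips):
        if asn is not None and (not path or path[-1] != asn):
            path.append(asn)
    return path
```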
Hop-count filtering: an effective defense against spoofed DDoS traffic, 2003
"... IP spoofing has been exploited by Distributed Denial of Service (DDoS) attacks to (1) conceal flooding sources and localities in flooding traffic, and (2) coax legitimate hosts into becoming reflectors, redirecting and amplifying flooding traffic. Thus, the ability to filter spoofed IP packets near ..."
Cited by 187 (4 self)
Abstract:
IP spoofing has been exploited by Distributed Denial of Service (DDoS) attacks to (1) conceal flooding sources and localities in flooding traffic, and (2) coax legitimate hosts into becoming reflectors, redirecting and amplifying flooding traffic. Thus, the ability to filter spoofed IP packets near victims is essential to their own protection as well as to their avoidance of becoming involuntary DoS reflectors. Although an attacker can forge any field in the IP header, he or she cannot falsify the number of hops an IP packet takes to reach its destination. This hop-count information can be inferred from the Time-to-Live (TTL) value in the IP header. Using a mapping between IP addresses and their hop-counts to an Internet server, the server can distinguish spoofed IP packets from legitimate ones. Based on this observation, we present a novel filtering technique that is immediately deployable to weed out spoofed IP packets. Through analysis using network measurement data, we show that Hop-Count Filtering (HCF) can identify close to 90% of spoofed IP packets, and then discard them with little collateral damage. We implement and evaluate HCF in the Linux kernel, demonstrating its benefits using experimental measurements.
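The core inference is small enough to sketch: subtract the observed TTL from the nearest plausible initial TTL (operating systems use a handful of well-known defaults) and compare the resulting hop count against the value previously learned for that source address. The set of initial TTLs and the table contents below are illustrative assumptions, not HCF's full algorithm.

```python
# Infer hop counts from observed TTLs and flag packets that disagree with the
# learned hop count for their source address.
INITIAL_TTLS = (32, 64, 128, 255)        # common OS default initial TTL values

def hop_count(observed_ttl):
    """Infer hop count as (nearest default initial TTL >= observed) minus observed."""
    initial = min(t for t in INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

def looks_spoofed(src_ip, observed_ttl, ip2hc):
    """Flag a packet whose inferred hop count disagrees with the learned one."""
    expected = ip2hc.get(src_ip)
    return expected is not None and hop_count(observed_ttl) != expected

# e.g. with ip2hc = {"203.0.113.9": 17}, a packet from that address arriving with
# TTL 120 implies 128 - 120 = 8 hops and would be flagged as likely spoofed.
```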
Characterizing Residential Broadband Networks. Proc. of ACM IMC, 2007
"... A large and rapidly growing proportion of users connect to the Internet via residential broadband networks such as Digital Subscriber Lines (DSL) and cable. Residential networks are often the bottleneck in the last mile of today’s Internet. Their characteristics critically affect Internet applicatio ..."
Cited by 173 (7 self)
Abstract:
A large and rapidly growing proportion of users connect to the Internet via residential broadband networks such as Digital Subscriber Lines (DSL) and cable. Residential networks are often the bottleneck in the last mile of today’s Internet. Their characteristics critically affect Internet applications, including voice-over-IP, online games, and peer-to-peer content sharing/delivery systems. However, to date, few studies have investigated commercial broadband deployments, and rigorous measurement data that characterize these networks at scale are lacking. In this paper, we present the first large-scale measurement study of major cable and DSL providers in North America and Europe. We describe and evaluate the measurement tools we developed for this purpose. Our study characterizes several properties of broadband networks, including link capacities, packet round-trip times and jitter, packet loss rates, queue lengths, and queue drop policies. Our analysis reveals important ways in which residential networks differ from how the Internet is conventionally thought to operate. We also discuss the implications of our findings for many emerging protocols and systems, including delay-based congestion control (e.g., PCP) and network coordinate systems (e.g., Vivaldi).
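As a hint of what such measurement tooling computes, the sketch below turns a train of probe RTTs (with lost probes recorded as None) into the round-trip time, jitter, and loss-rate summaries the study reports. The probing transport itself, and the authors' specific estimators, are not reproduced; the jitter definition here (mean difference between consecutive RTTs) is an assumption.

```python
# Summarize a train of probe RTTs into RTT, jitter, and loss-rate statistics.
from statistics import median

def summarize(rtts_ms):
    """rtts_ms: per-probe RTTs in ms, with None for probes that were lost."""
    answered = [r for r in rtts_ms if r is not None]
    loss_rate = 1.0 - len(answered) / len(rtts_ms)
    jitter = (sum(abs(a - b) for a, b in zip(answered, answered[1:])) / (len(answered) - 1)
              if len(answered) > 1 else 0.0)   # mean delta between consecutive RTTs
    return {"rtt_median_ms": median(answered) if answered else None,
            "jitter_ms": jitter,
            "loss_rate": loss_rate}

# e.g. summarize([24.1, 25.0, None, 23.8, 31.2]) -> loss 0.2, median RTT ~24.6 ms
```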
On Selfish Routing in Internet-Like Environments, 2004
"... A recent trend in routing research is to avoid inefficiencies in network-level routing by allowing hosts to either choose routes themselves (e.g., source routing) or use overlay routing networks (e.g., Detour or RON). Such approaches result in selfish routing, because routing decisions are no longe ..."
Cited by 160 (10 self)
Abstract:
A recent trend in routing research is to avoid inefficiencies in network-level routing by allowing hosts to either choose routes themselves (e.g., source routing) or use overlay routing networks (e.g., Detour or RON). Such approaches result in selfish routing, because routing decisions are no longer based on system-wide criteria but are instead designed to optimize host-based or overlay-based metrics. A series of theoretical results showing that selfish routing can result in suboptimal system behavior has cast doubt on this approach. In this paper, we use a game-theoretic approach to investigate the performance of selfish routing in Internet-like environments, using realistic topologies and traffic demands in our simulations. We show that in contrast to theoretical worst cases, selfish routing achieves close to optimal average latency in such environments. However, such performance benefit comes at the expense of significantly increased congestion on certain links. Moreover, the adaptive nature of selfish overlays can significantly reduce the effectiveness of traffic engineering by making network traffic less predictable.
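The gap between selfish and system-optimal routing that the cited theoretical results describe can be seen in a classic two-link toy network (Pigou's example, used here purely as illustration; it is not a topology from the paper): one unit of traffic chooses between a link with constant latency 1 and a link whose latency equals the fraction of traffic on it.

```python
# Pigou's example: at the selfish equilibrium all traffic takes the load-dependent
# link (average latency 1), while the system optimum splits traffic and achieves 3/4.
def average_latency(x):
    """x = fraction of traffic on the load-dependent link; the rest takes the constant link."""
    return x * x + (1 - x) * 1.0

selfish = average_latency(1.0)                                  # equilibrium: everyone on the variable link
optimal = min(average_latency(i / 1000) for i in range(1001))   # brute-force the optimal split
print(selfish, optimal, selfish / optimal)                      # 1.0, 0.75, ratio 4/3
```

The worst-case ratio for such linear latencies is 4/3; the abstract's point is that on realistic topologies the observed gap in average latency is smaller still, even though congestion on individual links can grow.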