Results 1 - 10 of 352
Characterizing Residential Broadband Networks
Proc. of ACM IMC, 2007
"... A large and rapidly growing proportion of users connect to the Internet via residential broadband networks such as Digital Subscriber Lines (DSL) and cable. Residential networks are often the bottleneck in the last mile of today’s Internet. Their characteristics critically affect Internet applicatio ..."
Cited by 173 (7 self)
Abstract:
A large and rapidly growing proportion of users connect to the Internet via residential broadband networks such as Digital Subscriber Lines (DSL) and cable. Residential networks are often the bottleneck in the last mile of today’s Internet. Their characteristics critically affect Internet applications, including voice-over-IP, online games, and peer-to-peer content sharing/delivery systems. However, to date, few studies have investigated commercial broadband deployments, and rigorous measurement data that characterize these networks at scale are lacking. In this paper, we present the first large-scale measurement study of major cable and DSL providers in North America and Europe. We describe and evaluate the measurement tools we developed for this purpose. Our study characterizes several properties of broadband networks, including link capacities, packet round-trip times and jitter, packet loss rates, queue lengths, and queue drop policies. Our analysis reveals important ways in which residential networks differ from how the Internet is conventionally thought to operate. We also discuss the implications of our findings for many emerging protocols and systems, including delay-based congestion control (e.g., PCP) and network coordinate systems (e.g., Vivaldi).
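The path properties listed above (round-trip time, jitter, loss) are the kind of quantities an active probe can estimate. The sketch below is not the paper's measurement tool; it is a minimal, self-contained UDP probe with a loopback echo responder so it runs as written. Pointing it at a responder beyond the access link would exercise the residential bottleneck the authors study; the address and packet format are illustrative assumptions.

```python
import socket, struct, time, threading, statistics

ECHO_ADDR = ("127.0.0.1", 50007)  # hypothetical responder; a real probe would target a remote echo server

def echo_server(addr, count):
    """Tiny UDP echo responder so the sketch is self-contained."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(addr)
    for _ in range(count):
        data, peer = srv.recvfrom(64)
        srv.sendto(data, peer)

def probe(addr, n=50, timeout=0.5):
    """Send n timestamped probes; report min/median RTT, jitter, and loss."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    rtts = []
    for seq in range(n):
        sock.sendto(struct.pack("!Id", seq, time.monotonic()), addr)
        try:
            data, _ = sock.recvfrom(64)
            _, sent = struct.unpack("!Id", data)
            rtts.append(time.monotonic() - sent)
        except socket.timeout:
            pass  # counted as a lost probe
    loss = 1 - len(rtts) / n
    jitter = statistics.stdev(rtts) if len(rtts) > 1 else 0.0
    return min(rtts), statistics.median(rtts), jitter, loss

threading.Thread(target=echo_server, args=(ECHO_ADDR, 50), daemon=True).start()
time.sleep(0.1)
print("min/median RTT, jitter, loss:", probe(ECHO_ADDR))
```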
Power awareness in network design and routing
Proc. IEEE INFOCOM, 2008
"... Abstract—Exponential bandwidth scaling has been a fundamental driver of the growth and popularity of the Internet. However, increases in bandwidth have been accompanied by increases in power consumption, and despite sustained system design efforts to address power demand, significant technological c ..."
Cited by 81 (1 self)
Abstract:
Exponential bandwidth scaling has been a fundamental driver of the growth and popularity of the Internet. However, increases in bandwidth have been accompanied by increases in power consumption, and despite sustained system design efforts to address power demand, significant technological challenges remain that threaten to slow future bandwidth growth. In this paper we describe the power and associated heat management challenges in today’s routers. We advocate a broad approach to addressing this problem that includes making power-awareness a primary objective in the design and configuration of networks, and in the design and implementation of network protocols. We support our arguments by providing a case study of power demands of two standard router platforms that enables us to create a generic model for router power consumption. We apply this model in a set of target network configurations and use mixed integer optimization techniques to investigate power consumption, performance and robustness in static network design and in dynamic routing. Our results indicate the potential for significant power savings in operational networks by including power-awareness.
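The "generic model" mentioned above is, in spirit, a device-level power budget driven mostly by what hardware is powered on rather than by traffic. The sketch below mirrors that shape with a fixed chassis cost plus a per-line-card cost; the coefficients are made-up illustrations, not the paper's measured values.

```python
import math

def router_power_watts(base_chassis_w, linecard_w, ports_per_card, active_ports):
    """Illustrative chassis-plus-linecard power model. Traffic load is ignored,
    reflecting the common observation that router power is dominated by what is
    powered on, not by instantaneous utilization."""
    cards = math.ceil(active_ports / ports_per_card)
    return base_chassis_w + cards * linecard_w

# Hypothetical coefficients (750 W chassis, 250 W per 8-port card).
for ports in (4, 16, 48):
    print(ports, "active ports ->", router_power_watts(750, 250, 8, ports), "W")
```

A power-aware design or routing optimization would then choose which chassis and line cards to keep powered while still carrying the offered traffic, which is what the paper formulates as a mixed integer program.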
Data Center TCP (DCTCP)
"... Cloud data centers host diverse applications, mixing workloads that require small predictable latency with others requiring large sustained throughput. In this environment, today’s state-of-the-art TCP protocol falls short. We present measurements of a 6000 server production cluster and reveal impai ..."
Cited by 74 (6 self)
Abstract:
Cloud data centers host diverse applications, mixing workloads that require small predictable latency with others requiring large sustained throughput. In this environment, today’s state-of-the-art TCP protocol falls short. We present measurements of a 6,000-server production cluster and reveal impairments that lead to high application latencies, rooted in TCP’s demands on the limited buffer space available in data center switches. For example, bandwidth-hungry “background” flows build up queues at the switches, and thus impact the performance of latency-sensitive “foreground” traffic. To address these problems, we propose DCTCP, a TCP-like protocol for data center networks. DCTCP leverages Explicit Congestion Notification (ECN) in the network to provide multi-bit feedback to the end hosts. We evaluate DCTCP at 1 and 10Gbps speeds using commodity, shallow-buffered switches. We find DCTCP delivers the same or better throughput than TCP, while using 90% less buffer space. Unlike TCP, DCTCP also provides high burst tolerance and low latency for short flows. In handling workloads derived from operational measurements, we found DCTCP enables the applications to handle 10X the current background traffic, without impacting foreground traffic. Further, a 10X increase in foreground traffic does not cause any timeouts, thus largely eliminating incast problems.
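The "multi-bit feedback" is built from single-bit ECN marks: the sender tracks the fraction of marked packets per window and scales its window cut accordingly. Below is a minimal sketch of that sender-side reaction; the parameters and the toy driver loop are illustrative, not the paper's implementation.

```python
class DctcpSender:
    """Sketch of the sender-side reaction DCTCP describes: keep a running estimate
    (alpha) of the fraction of ECN-marked packets, and cut cwnd in proportion to it
    instead of halving on every congestion signal."""

    def __init__(self, cwnd=10.0, g=1 / 16):
        self.cwnd = cwnd      # congestion window in packets
        self.alpha = 0.0      # estimated fraction of marked packets
        self.g = g            # EWMA gain

    def on_window_acked(self, acked, marked):
        frac = marked / acked if acked else 0.0
        self.alpha = (1 - self.g) * self.alpha + self.g * frac
        if marked:
            # Proportional decrease: mild when few packets were marked,
            # approaching TCP's halving when every packet was marked.
            self.cwnd *= (1 - self.alpha / 2)
        else:
            self.cwnd += 1    # additive increase per window otherwise

s = DctcpSender()
for marked in (0, 2, 5, 0):
    s.on_window_acked(acked=10, marked=marked)
    print(round(s.cwnd, 2), round(s.alpha, 3))
```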
Cloud control with distributed rate limiting
SIGCOMM, 2007
"... Today’s cloud-based services integrate globally distributed resources into seamless computing platforms. Provisioning and accounting for the resource usage of these Internet-scale applications presents a challenging technical problem. This paper presents the design and implementation of distributed ..."
Cited by 71 (4 self)
Abstract:
Today’s cloud-based services integrate globally distributed resources into seamless computing platforms. Provisioning and accounting for the resource usage of these Internet-scale applications presents a challenging technical problem. This paper presents the design and implementation of distributed rate limiters, which work together to enforce a global rate limit across traffic aggregates at multiple sites, enabling the coordinated policing of a cloud-based service’s network traffic. Our abstraction not only enforces a global limit, but also ensures that congestion-responsive transport-layer flows behave as if they traversed a single, shared limiter. We present two designs—one general purpose, and one optimized for TCP—that allow service operators to explicitly trade off between communication costs and system accuracy, efficiency, and scalability. Both designs are capable of rate limiting thousands of flows with negligible overhead (less than 3% in the tested configuration). We demonstrate that our TCP-centric design is scalable to hundreds of nodes while robust to both loss and communication delay, making it practical for deployment in nationwide service providers.
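To make the problem concrete: each site runs a local limiter, and the sites periodically exchange demand information so the global limit flows toward where traffic actually is. The sketch below is a simple per-site token bucket with demand-proportional reapportioning; it is not either of the two designs the paper evaluates, and all rates are invented for illustration.

```python
class SiteLimiter:
    """One limiter at one site: a token bucket refilled at that site's current
    share of the global rate (tokens are in bits)."""

    def __init__(self, share_bps):
        self.rate = share_bps
        self.tokens = share_bps   # allow roughly one second of burst
        self.demand = 0.0         # bytes requested in the current epoch

    def allow(self, nbytes):
        self.demand += nbytes
        if self.tokens >= nbytes * 8:
            self.tokens -= nbytes * 8
            return True
        return False

    def tick(self, dt):
        self.tokens = min(self.tokens + self.rate * dt, self.rate)

def reapportion(global_rate_bps, sites):
    """Periodically redivide the global limit in proportion to recent demand,
    so lightly loaded sites cede capacity to busy ones."""
    total = sum(s.demand for s in sites) or len(sites)
    for s in sites:
        weight = (s.demand or 1.0) / total
        s.rate = global_rate_bps * weight
        s.demand = 0.0

sites = [SiteLimiter(5e6) for _ in range(2)]    # 10 Mbit/s global limit, split evenly
sites[0].allow(80_000); sites[1].allow(10_000)  # one busy site, one idle-ish site
reapportion(10e6, sites)
print([round(s.rate / 1e6, 2) for s in sites])  # the busy site now holds the larger share
```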
Reproducible Network Experiments Using Container-Based Emulation
2012
"... In an ideal world, all research papers would be runnable: simply click to replicate all results, using the same setup as the authors. One approach to enable runnable network systems papers is Container-Based Emulation (CBE), where an environment of virtual hosts, switches, and links runs on a moder ..."
Cited by 70 (2 self)
Abstract:
In an ideal world, all research papers would be runnable: simply click to replicate all results, using the same setup as the authors. One approach to enable runnable network systems papers is Container-Based Emulation (CBE), where an environment of virtual hosts, switches, and links runs on a modern multicore server, using real application and kernel code with software-emulated network elements. CBE combines many of the best features of software simulators and hardware testbeds, but its performance fidelity is unproven. In this paper, we put CBE to the test, using our prototype, Mininet-HiFi, to reproduce key results from published network experiments such as DCTCP, Hedera, and router buffer sizing. We report lessons learned from a graduate networking class at Stanford, where 37 students used our platform to replicate 16 published results of their own choosing. Our experiences suggest that CBE makes research results easier to reproduce and build upon.
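The claim is that real kernel and application code plus software-emulated links is enough to re-run experiments such as buffer sizing. The sketch below gives the flavour of such an experiment using the stock Mininet Python API (Mininet-HiFi adds resource isolation and fidelity monitoring on top of this); it requires Mininet and root privileges, and the topology, rates, and queue size are illustrative.

```python
#!/usr/bin/env python
"""Two hosts across a rate-limited, small-buffer bottleneck link."""
from mininet.topo import Topo
from mininet.net import Mininet
from mininet.link import TCLink

class BottleneckTopo(Topo):
    def build(self):
        h1, h2 = self.addHost('h1'), self.addHost('h2')
        s1, s2 = self.addSwitch('s1'), self.addSwitch('s2')
        self.addLink(h1, s1, bw=100)
        self.addLink(h2, s2, bw=100)
        # 10 Mbit/s bottleneck with a 20-packet buffer: the kind of link a
        # buffer-sizing or DCTCP-style experiment would sweep over.
        self.addLink(s1, s2, bw=10, delay='5ms', max_queue_size=20)

if __name__ == '__main__':
    net = Mininet(topo=BottleneckTopo(), link=TCLink)
    net.start()
    h1, h2 = net.get('h1', 'h2')
    print(net.iperf((h1, h2)))   # measure TCP throughput across the bottleneck
    net.stop()
```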
Experimental Evaluation of TCP Protocols for High-Speed Networks
"... In this paper we present experimental results evaluating the performance of the Scalable-TCP, HS-TCP, BIC-TCP, FAST-TCP and H-TCP proposals in a series of benchmark tests. In summary, we find that both Scalable-TCP and FAST-TCP consistently exhibit substantial unfairness, even when competing flows ..."
Cited by 69 (2 self)
Abstract:
In this paper we present experimental results evaluating the performance of the Scalable-TCP, HS-TCP, BIC-TCP, FAST-TCP and H-TCP proposals in a series of benchmark tests. In summary, we find that both Scalable-TCP and FAST-TCP consistently exhibit substantial unfairness, even when competing flows share identical network path characteristics. Scalable-TCP, HS-TCP, FAST-TCP and BIC-TCP all exhibit much greater RTT unfairness than does standard TCP, to the extent that long RTT flows may be completely starved of bandwidth. Scalable-TCP, HS-TCP and BIC-TCP all exhibit slow convergence and sustained unfairness following changes in network conditions such as the start-up of a new flow. FAST-TCP exhibits complex convergence behaviour.
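On a modern Linux kernel, several of the variants benchmarked here can be selected per socket ("bic", "htcp", "highspeed", "scalable"), which is how benchmark flows like these are typically launched today; FAST-TCP is not in the mainline kernel. A minimal sketch, assuming a hypothetical iperf-style sink at the given address:

```python
import socket

def open_flow(host, port, algo):
    """Open a TCP connection using a specific congestion-control algorithm.
    Linux only; the algorithm must be available in the kernel (see
    /proc/sys/net/ipv4/tcp_available_congestion_control)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, algo.encode())
    s.connect((host, port))
    return s

# Hypothetical endpoint; in a benchmark this would be a bulk-transfer sink.
# flow = open_flow("10.0.0.2", 5001, "htcp")
```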
Routers with Very Small Buffers
Proc. IEEE INFOCOM, 2006
"... Internet routers require buffers to hold packets during times of congestion. The buffers need to be fast, and so ideally they should be small enough to use fast memory technologies such as SRAM or all-optical buffering. Unfortunately, a widely used rule-of-thumb says we need a bandwidth-delay produc ..."
Cited by 66 (8 self)
Abstract:
Internet routers require buffers to hold packets during times of congestion. The buffers need to be fast, and so ideally they should be small enough to use fast memory technologies such as SRAM or all-optical buffering. Unfortunately, a widely used rule-of-thumb says we need a bandwidth-delay product of buffering at each router so as not to lose link utilization. This can be prohibitively large. In a recent paper, Appenzeller et al. challenged this rule-of-thumb and showed that for a backbone network, the buffer size can be divided by √N without sacrificing throughput, where N is the number of flows sharing the bottleneck. In this paper, we explore how buffers in the backbone can be significantly reduced even more, to as little as a few dozen packets, if we are willing to sacrifice a small amount of link capacity. We argue that if the TCP sources are not overly bursty, then fewer than twenty packet buffers are sufficient for high throughput. Specifically, we argue that O(log W) buffers are sufficient, where W is the window size of each flow. We support our claim with analysis and a variety of simulations. The change we need to make to TCP is minimal—each sender just needs ...
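The three sizing rules mentioned in this abstract differ by orders of magnitude. The short calculation below makes that gap concrete; the link speed, RTT, flow count, and window size are illustrative assumptions, and the O(log W) line only shows the scaling since the abstract does not give the constant.

```python
import math

link_gbps = 10       # illustrative backbone link
rtt_s = 0.25         # illustrative round-trip time
flows = 10_000       # illustrative number of long-lived flows
pkt_bits = 1500 * 8

bdp_bits = link_gbps * 1e9 * rtt_s            # classical bandwidth-delay-product rule
sqrt_n_bits = bdp_bits / math.sqrt(flows)     # Appenzeller et al.: BDP / sqrt(N)
window_pkts = 64                              # illustrative per-flow window W
log_w_pkts = math.log2(window_pkts)           # O(log W) scaling, constant unspecified

print(f"BDP rule:    {bdp_bits / 8 / 2**20:8.1f} MB")
print(f"BDP/sqrt(N): {sqrt_n_bits / 8 / 2**20:8.1f} MB")
print(f"O(log W):   ~{log_w_pkts:.0f} packets "
      f"(~{log_w_pkts * pkt_bits / 8 / 1024:.0f} KB, up to the hidden constant)")
```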
Part III: Routers with very small buffers
"... Internet routers require buffers to hold packets during times of congestion. The buffers need to be fast, and so ideally they should be small enough to use fast memory technologies such as SRAM or all-optical buffering. Unfortunately, a widely used rule-of-thumb says we need a bandwidth-delay produc ..."
Cited by 58 (4 self)
Abstract:
Internet routers require buffers to hold packets during times of congestion. The buffers need to be fast, and so ideally they should be small enough to use fast memory technologies such as SRAM or all-optical buffering. Unfortunately, a widely used rule-of-thumb says we need a bandwidth-delay product of buffering at each router so as not to lose link utilization. This can be prohibitively large. In a recent paper, Appenzeller et al. challenged this rule-of-thumb and showed that for a backbone network, the buffer size can be divided by √N without sacrificing throughput, where N is the number of flows sharing the bottleneck. In this paper, we explore how buffers in the backbone can be significantly reduced even more, to as little as a few dozen packets, if we are willing to sacrifice a small amount of link capacity. We argue that if the TCP sources are not overly bursty, then fewer ...
Open issues in router buffer sizing
ACM SIGCOMM Computer Communication Review, 2006
"... Recent research results suggest that the buffers of router interfaces can be made very small, much less than the link’s bandwidth-delay product, without causing a utilization loss, as long as the link carries many TCP flows. In this letter we raise some concerns about the previous recommendation. We ..."
Cited by 49 (2 self)
Abstract:
Recent research results suggest that the buffers of router interfaces can be made very small, much less than the link’s bandwidth-delay product, without causing a utilization loss, as long as the link carries many TCP flows. In this letter we raise some concerns about the previous recommendation. We show that the use of such small buffers can lead to excessively high loss rates (up to 5%-15% in our simulations) in congested access links that carry many flows. Even if the link is fully utilized, small buffers lead to lower throughput for most large TCP flows, and significant variability in the per-flow throughput and transfer latency. We also discuss some important issues in router buffer sizing that are often ignored.
On content-centric router design and implications
2010
"... In this paper, we investigate a sample line-speed contentcentric router’s design, its resources and its usage scenarios. We specifically take a closer look at one of the suggested functionalities for these routers, the content store. The design is targeted at pull-based environments, where content c ..."
Cited by 39 (4 self)
Abstract:
In this paper, we investigate a sample line-speed content-centric router’s design, its resources and its usage scenarios. We specifically take a closer look at one of the suggested functionalities for these routers, the content store. The design is targeted at pull-based environments, where content can be pulled from the network by any interested entity. We discuss the interaction between the pull-based protocols and the content-centric router. We also provide some basic feasibility metrics, discussing some applicability aspects for such routers.
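Functionally, a content store is an in-router cache indexed by content name: requests that hit are answered locally, misses are forwarded upstream. The sketch below captures only that behaviour with an LRU eviction policy; it is an illustration of the functionality discussed above, not the paper's line-speed design or its memory layout.

```python
from collections import OrderedDict

class ContentStore:
    """Illustrative content store: a fixed-capacity cache indexed by content name,
    evicting the least recently used object when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, name):
        if name not in self.store:
            return None                      # miss: the request would be forwarded upstream
        self.store.move_to_end(name)         # mark as recently used
        return self.store[name]

    def put(self, name, data):
        self.store[name] = data
        self.store.move_to_end(name)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict the least recently used entry

cs = ContentStore(capacity=2)
cs.put("/video/seg1", b"...")
cs.put("/video/seg2", b"...")
cs.get("/video/seg1")                        # touch seg1 so seg2 becomes the eviction candidate
cs.put("/video/seg3", b"...")
print(list(cs.store))                        # ['/video/seg1', '/video/seg3']
```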