Results 1 - 10 of 354
Service Disciplines for Guaranteed Performance Service in Packet-Switching Networks
- Proceedings of the IEEE, 1995
"... While today’s computer networks support only best-effort service, future packet-switching integrated-services networks will have to support real-time communication services that allow clients to transport information with performance guarantees expressed in terms of delay, delay jitter, throughput, ..."
Abstract - Cited by 614 (4 self)
While today’s computer networks support only best-effort service, future packet-switching integrated-services networks will have to support real-time communication services that allow clients to transport information with performance guarantees expressed in terms of delay, delay jitter, throughput, and loss rate. An important issue in providing guaranteed performance service is the choice of the packet service discipline at switching nodes. In this paper, we survey several service disciplines that have been proposed in the literature to provide per-connection end-to-end performance guarantees in packet-switching networks. We describe their mechanisms, their similarities and differences, and the performance guarantees they can provide. Various issues and tradeoffs in designing service disciplines for guaranteed performance service are discussed, and a general framework for studying and comparing these disciplines is presented.
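For concreteness, one representative guarantee of the kind surveyed in this paper is the classical Parekh-Gallager end-to-end delay bound for WFQ/PGPS (cited here from that literature, not quoted from the abstract above): for a session i constrained by a leaky bucket with burst size sigma_i and rate rho_i, allocated a guaranteed rate g_i >= rho_i at each of K hops, with maximum session packet size L_i, network-wide maximum packet size L_max, and link capacity C_m at hop m:

```latex
% Parekh-Gallager end-to-end delay bound for a (\sigma_i, \rho_i)-constrained
% session served by K PGPS/WFQ servers at guaranteed rate g_i \ge \rho_i.
D_i \;\le\; \frac{\sigma_i}{g_i} \;+\; \frac{(K-1)\,L_i}{g_i} \;+\; \sum_{m=1}^{K} \frac{L_{\max}}{C_m}
```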
Hierarchical Packet Fair Queueing Algorithms
- IEEE/ACM Transactions on Networking, 1997
"... In this paper, we propose to use the idealized Hierarchical Generalized Processor Sharing (H-GPS) model to simultaneously support guaranteed real-time, rate-adaptive best-effort, and controlled link-sharing services. We design Hierarchical Packet Fair Queueing (H-PFQ) algorithms to approximate H-GPS ..."
Abstract - Cited by 340 (7 self)
In this paper, we propose to use the idealized Hierarchical Generalized Processor Sharing (H-GPS) model to simultaneously support guaranteed real-time, rate-adaptive best-effort, and controlled link-sharing services. We design Hierarchical Packet Fair Queueing (H-PFQ) algorithms to approximate H-GPS by using one-level variable-rate PFQ servers as basic building blocks. By computing the system virtual time and per-packet virtual start/finish times in units of bits instead of seconds, most of the PFQ algorithms in the literature can be properly defined as variable-rate servers. We develop techniques to analyze delay and fairness properties of variable-rate and hierarchical PFQ servers. We demonstrate that in order to provide tight delay bounds with an H-PFQ server, it is essential for the one-level PFQ servers to have small Worst-case Fair Indices (WFI). We propose a new PFQ algorithm called WF2Q+ that is the first to have all the following three properties: (a) providing the tightest...
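As a rough illustration of the bookkeeping the abstract refers to, the sketch below shows a single-level packet fair queueing server in the WF2Q+ style: a system virtual time kept in units of bits, per-flow virtual start/finish stamps, and smallest-eligible-finish-time selection. The Flow/WF2QPlusServer classes, the epsilon in the eligibility test, and the restamping policy are illustrative simplifications, not the paper's hierarchical implementation.

```python
# One-level WF2Q+-style packet fair queueing sketch.
# System virtual time V is kept in "bits"; weights are fractions of the link.
class Flow:
    def __init__(self, weight):
        self.weight = weight      # fraction of the link rate (weights sum to <= 1)
        self.queue = []           # FIFO of packet lengths, in bits
        self.start = 0.0          # virtual start time of the head packet
        self.finish = 0.0         # virtual finish time of the head packet

class WF2QPlusServer:
    def __init__(self, flows):
        self.flows = flows
        self.V = 0.0              # system virtual time, in bits

    def enqueue(self, flow, length):
        flow.queue.append(length)
        if len(flow.queue) == 1:  # flow just became backlogged: stamp head packet
            flow.start = max(self.V, flow.finish)
            flow.finish = flow.start + length / flow.weight

    def dequeue(self):
        backlogged = [f for f in self.flows if f.queue]
        if not backlogged:
            return None
        # WF2Q+ virtual-time update: never lag behind the smallest start time.
        self.V = max(self.V, min(f.start for f in backlogged))
        # Smallest eligible finish time first: eligible means start <= V.
        eligible = [f for f in backlogged if f.start <= self.V + 1e-9]
        chosen = min(eligible, key=lambda f: f.finish)
        length = chosen.queue.pop(0)
        self.V += length          # advance virtual time by the bits served
        if chosen.queue:          # restamp the new head-of-line packet
            chosen.start = chosen.finish
            chosen.finish = chosen.start + chosen.queue[0] / chosen.weight
        return chosen, length
```

Keeping V in bits rather than seconds is what lets the same stamping logic be reused for variable-rate servers inside a hierarchy, which is the point the abstract makes.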
Fair Scheduling in Wireless Packet Networks
- IEEE/ACM Transactions on Networking, 1997
"... Fair scheduling of delay and rate-sensitive packet flows over a wireless channel is not addressed effectively by most contemporary wireline fair scheduling algorithms because of two unique characteristics of wireless media: (a) bursty channel errors, and (b) location-dependent channel capacity and e ..."
Abstract - Cited by 339 (21 self)
Fair scheduling of delay- and rate-sensitive packet flows over a wireless channel is not addressed effectively by most contemporary wireline fair scheduling algorithms because of two unique characteristics of wireless media: (a) bursty channel errors, and (b) location-dependent channel capacity and errors. Besides, in packet cellular networks, the base station typically performs the task of packet scheduling for both downlink and uplink flows in a cell; however, a base station has only limited knowledge of the arrival processes of uplink flows. In this paper, we propose a new model for wireless fair scheduling based on an adaptation of fluid fair queueing to handle location-dependent error bursts. We describe an ideal wireless fair scheduling algorithm which provides a packetized implementation of the fluid model while assuming full knowledge of the current channel conditions. For this algorithm, we derive the worst-case throughput and delay bounds. Finally, we describe a practical wir...
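The sketch below illustrates, in heavily simplified round-based form, the compensation idea behind adapting fluid fair queueing to location-dependent errors: a flow whose channel is bad lends its slot to a clean flow and accumulates lag, which is repaid when its channel recovers. The weighted round-robin stand-in for fluid fair queueing, the channel_ok predicate, and the lag bookkeeping are assumptions for illustration, not the paper's algorithm or its bounds.

```python
# Round-based sketch: a flow whose channel is in error lends its slot to a
# clean flow and accumulates lag; a leading flow later yields slots back.
import itertools

def wireless_fair_schedule(flows, channel_ok, rounds):
    """flows: dict flow_name -> integer weight; channel_ok(flow, t) -> bool."""
    lag = {f: 0 for f in flows}          # >0: owed service, <0: received extra service
    # Weighted round-robin order as a crude stand-in for fluid fair queueing.
    order = list(itertools.chain.from_iterable([f] * w for f, w in flows.items()))
    schedule = []
    for t in range(rounds):
        owner = order[t % len(order)]    # flow that "owns" this slot
        clean = [f for f in flows if channel_ok(f, t)]
        if not clean:
            schedule.append(None)        # nobody can transmit this slot
            continue
        if channel_ok(owner, t):
            lagging = [f for f in clean if f != owner and lag[f] > 0]
            # Compensation: a leading owner yields to the most lagging clean flow.
            grantee = max(lagging, key=lambda f: lag[f]) if (lag[owner] < 0 and lagging) else owner
        else:
            # Owner's channel is bad: lend the slot to the most lagging clean flow.
            grantee = max(clean, key=lambda f: lag[f])
        schedule.append(grantee)
        if grantee != owner:
            lag[owner] += 1              # owner falls behind (lag)
            lag[grantee] -= 1            # grantee gets ahead (lead)
    return schedule, lag
```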
Core-Stateless Fair Queueing: Achieving Approximately Fair Bandwidth Allocations in High Speed Networks
- 1998
"... Router mechanisms designed to achieve fair bandwidth allocations, like Fair Queueing, have many desirable properties for congestion control in the Internet. However, such mechanisms usually need to maintain state, manage buffers, and/or perform packet scheduling on a per flow basis, and this complex ..."
Abstract - Cited by 253 (13 self)
Router mechanisms designed to achieve fair bandwidth allocations, like Fair Queueing, have many desirable properties for congestion control in the Internet. However, such mechanisms usually need to maintain state, manage buffers, and/or perform packet scheduling on a per flow basis, and this complexity may prevent them from being cost-effectively implemented and widely deployed. In this paper, we propose an architecture that significantly reduces this implementation complexity yet still achieves approximately fair bandwidth allocations. We apply this approach to an island of routers -- that is, a contiguous region of the network -- and we distinguish between edge routers and core routers. Edge routers maintain per flow state; they estimate the incoming rate of each flow and insert a label into each packet header based on this estimate. Core routers maintain no per flow state; they use FIFO packet scheduling augmented by a probabilistic dropping algorithm that uses the packet labels an...
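A compact sketch of the edge/core division described above, using the well-known CSFQ drop rule at core routers (drop with probability max(0, 1 - alpha/r), where r is the rate label carried in the packet and alpha is the link's fair-share estimate). The exponential rate estimator mirrors the one used by CSFQ edge routers, but the class names, the fixed averaging window, and treating alpha as given (rather than estimated online) are simplifications for illustration.

```python
# Core-Stateless Fair Queueing sketch: edge routers keep per-flow state and
# label packets with a rate estimate; core routers keep no per-flow state and
# drop probabilistically against a fair-share estimate alpha.
import math, random

class EdgeRouter:
    """Per-flow exponential rate estimation; the estimate is written into the packet."""
    def __init__(self, averaging_window=0.1):
        self.K = averaging_window             # averaging constant (seconds)
        self.state = {}                       # flow_id -> (last_arrival, rate)

    def label(self, flow_id, now, pkt_len):
        last, rate = self.state.get(flow_id, (now, 0.0))
        dt = max(now - last, 1e-9)
        w = math.exp(-dt / self.K)
        rate = (1 - w) * (pkt_len / dt) + w * rate
        self.state[flow_id] = (now, rate)
        return {"len": pkt_len, "label": rate}  # label travels in the header

class CoreRouter:
    """No per-flow state: FIFO scheduling plus probabilistic dropping."""
    def __init__(self, fair_share):
        self.alpha = fair_share               # assumed given; CSFQ estimates it online

    def accept(self, pkt):
        r = pkt["label"]
        drop_prob = max(0.0, 1.0 - self.alpha / r) if r > 0 else 0.0
        if random.random() < drop_prob:
            return False                      # dropped
        # Relabel so downstream core routers see the flow's post-drop rate.
        pkt["label"] = min(r, self.alpha)
        return True                           # forwarded (FIFO)
```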
The Design, Implementation and Evaluation of SMART: A Scheduler for Multimedia Applications
- 1997
"... This paper argues for the need to design a new processor scheduling algorithm that can handle the mix of applications we see today. We present a scheduling algorithm which we have implemented in the Solaris UNIX operating system [Eykholt et al. 1992], and demonstrate its improved performance over ex ..."
Abstract - Cited by 240 (6 self)
This paper argues for the need to design a new processor scheduling algorithm that can handle the mix of applications we see today. We present a scheduling algorithm which we have implemented in the Solaris UNIX operating system [Eykholt et al. 1992], and demonstrate its improved performance over existing schedulers in research and practice on real applications. In particular, we have quantitatively compared against the popular weighted fair queueing and UNIX SVR4 schedulers in supporting multimedia applications in a realistic workstation environment...
Delay Scheduling: A Simple Technique for Achieving Locality and Fairness in Cluster Scheduling
- In Proc. EuroSys, 2010
"... As organizations start to use data-intensive cluster computing systems like Hadoop and Dryad for more applications, there is a growing need to share clusters between users. However, there is a conflict between fairness in scheduling and data locality (placing tasks on nodes that contain their input ..."
Abstract - Cited by 190 (21 self)
As organizations start to use data-intensive cluster computing systems like Hadoop and Dryad for more applications, there is a growing need to share clusters between users. However, there is a conflict between fairness in scheduling and data locality (placing tasks on nodes that contain their input data). We illustrate this problem through our experience designing a fair scheduler for a 600-node Hadoop cluster at Facebook. To address the conflict between locality and fairness, we propose a simple algorithm called delay scheduling: when the job that should be scheduled next according to fairness cannot launch a local task, it waits for a small amount of time, letting other jobs launch tasks instead. We find that delay scheduling achieves nearly optimal data locality in a variety of workloads and can increase throughput by up to 2x while preserving fairness. In addition, the simplicity of delay scheduling makes it applicable under a wide variety of scheduling policies beyond fair sharing.
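The algorithm as summarized lends itself to a very small sketch: serve jobs in fairness order, but let a job be skipped for up to D scheduling opportunities when it has no node-local task before relaxing locality. The job attributes and the fixed threshold D below are illustrative stand-ins for the paper's pseudocode.

```python
# Delay scheduling sketch: if the job that is next in fairness order has no
# task local to the free node, skip it for up to D scheduling opportunities
# before letting it launch a non-local task.
def assign_task(jobs, free_node, D):
    """Illustrative job fields: running (int), pending_tasks (list),
    local_tasks (dict node -> list of tasks), skipcount (int)."""
    for job in sorted(jobs, key=lambda j: j.running):       # fairness order
        if not job.pending_tasks:
            continue
        local = job.local_tasks.get(free_node, [])
        if local:
            job.skipcount = 0
            task = local.pop(0)
            job.pending_tasks.remove(task)
            return job, task                                 # data-local launch
        if job.skipcount >= D:
            job.skipcount = 0                                # waited long enough
            return job, job.pending_tasks.pop(0)             # relax locality
        job.skipcount += 1                                   # delay: let other jobs go first
    return None                                              # nothing runnable
```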
A New Model for Packet Scheduling in Multihop Wireless Networks
- 2000
"... The goal of packet scheduling disciplines is to achieve fair and maximum allocation of channel bandwidth. However, these two criteria can potentially be in conflict in a generic topology multihop wireless network where a single logical channel is shared among multiple contending ows and spatial reus ..."
Abstract - Cited by 152 (8 self)
The goal of packet scheduling disciplines is to achieve fair and maximum allocation of channel bandwidth. However, these two criteria can potentially be in conflict in a generic topology multihop wireless network where a single logical channel is shared among multiple contending flows and spatial reuse of the channel bandwidth is possible. In this paper, we propose a new model for packet scheduling that addresses this conflict. The main results of this paper are the following: (a) a two-tier service model that provides a minimum "fair" allocation of the channel bandwidth for each packet flow and additionally maximizes spatial reuse of bandwidth, (b) an ideal centralized packet scheduling algorithm that realizes the above service model, and (c) a practical distributed backoff-based channel contention mechanism that approximates the ideal service within the framework of the CSMA/CA protocol.
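The sketch below shows one way a centralized scheduler could realize a two-tier service model over a flow contention graph: tier 1 serves the flow furthest behind its fair share, tier 2 greedily adds non-conflicting flows to exploit spatial reuse. The contention-graph representation and the deficit bookkeeping are assumptions for illustration only, not the paper's centralized algorithm or its distributed backoff mechanism.

```python
# Illustrative centralized two-tier slot scheduler over a flow contention graph.
def schedule_slot(flows, weights, conflicts, served):
    """flows: list of flow ids; weights: dict flow -> share;
    conflicts: dict flow -> set of conflicting flows (shared channel);
    served: dict flow -> slots received so far (mutated in place)."""
    total = sum(served.values()) or 1
    behindness = lambda f: served[f] / (weights[f] * total)
    # Tier 1: guarantee the minimum share by serving the most-behind flow.
    chosen = [min(flows, key=behindness)]
    # Tier 2: spatial reuse -- greedily add flows that conflict with nothing chosen.
    for f in sorted(flows, key=behindness):
        if f in chosen:
            continue
        if all(f not in conflicts[c] and c not in conflicts[f] for c in chosen):
            chosen.append(f)
    for f in chosen:
        served[f] += 1
    return chosen
```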
Router plugins: A software architecture for next generation routers
- IEEE/ACM Transactions on Networking, 1998
"... Present day routers typically employ monolithic operating systems which are not easily upgradahle and extensible. With the rapid rate of protocol development it is becoming increasingly important to dynamically upgrade router software in an incre-mental fashion. We have designed and implemented a hi ..."
Abstract - Cited by 145 (7 self)
Present day routers typically employ monolithic operating systems which are not easily upgradable and extensible. With the rapid rate of protocol development it is becoming increasingly important to dynamically upgrade router software in an incremental fashion. We have designed and implemented a high performance, modular, extended integrated services router software architecture in the NetBSD operating system kernel. This architecture allows code modules, called plugins, to be dynamically added and configured at run time. One of the novel features of our design is the ability to bind different plugins to individual flows; this allows distinct plugin implementations to seamlessly coexist in the same runtime environment. High performance is achieved through a carefully designed modular architecture; an innovative packet classification algorithm that is both powerful and highly efficient; and by caching that exploits the flow-like characteristics of Internet traffic. Compared to a monolithic best-effort kernel, our implementation requires an average increase in packet processing overhead of only 8%, or 500 cycles (2.1 µs) per packet when running on a P6/233. Keywords: high performance integrated services routing, modular router architecture, router plugins.
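A toy sketch of the per-flow plugin binding described above: the first packet of a flow is classified against filter rules and bound to a plugin instance, and a flow cache lets subsequent packets of the same flow bypass classification. The Plugin/Router classes and the dictionary cache are illustrative; the paper's classifier and its NetBSD kernel integration are far more elaborate.

```python
# Toy model of per-flow plugin binding with a flow cache.
class Plugin:
    def process(self, packet):             # illustrative plugin interface
        return packet

class Router:
    def __init__(self, rules):
        self.rules = rules                  # list of (match_fn, plugin_instance)
        self.flow_cache = {}                # 5-tuple -> plugin_instance

    def classify(self, five_tuple):
        for match, plugin in self.rules:    # slow path: full classification
            if match(five_tuple):
                return plugin
        return None                         # no rule matched: default forwarding

    def forward(self, packet):
        key = packet["five_tuple"]
        plugin = self.flow_cache.get(key)
        if plugin is None:
            plugin = self.classify(key)
            if plugin is not None:
                self.flow_cache[key] = plugin   # bind the plugin to this flow
        return plugin.process(packet) if plugin else packet
```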
Anticipatory scheduling: A disk scheduling framework to overcome deceptive idleness in synchronous I/O
- 2001
"... Disk schedulers in current operating systems are generally work-conserving, i.e., they schedule a request as son as the previous request has finished. Such schedulers often require multiple outstanding requests from each process to meet system-level goals of performance and quality of service. U ..."
Abstract - Cited by 136 (2 self)
Disk schedulers in current operating systems are generally work-conserving, i.e., they schedule a request as soon as the previous request has finished. Such schedulers often require multiple outstanding requests from each process to meet system-level goals of performance and quality of service. Unfortunately, many common applications issue disk read requests in a synchronous manner, interspersing successive requests with short periods of computation. The scheduler chooses the next request too early; this induces deceptive idleness, a condition where the scheduler incorrectly assumes that the last request-issuing process has no further requests, and becomes forced to switch to a request from another process.
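A heavily simplified sketch of the anticipation idea: after completing a synchronous read from process P, keep the disk idle for a short window in case P issues its next (likely nearby) request, and only then fall back to serving another process. The wait window, polling loop, and request attributes are illustrative; the paper's heuristics for deciding whether and how long to wait are more sophisticated.

```python
# Anticipatory dispatch sketch: briefly wait for the last-served process's
# next request before seeking away to serve another process.
import time

def anticipatory_dispatch(queue, last_process, wait_ms, poll_ms=1):
    """queue: list of requests with illustrative fields .process and .sector.
    Returns the next request to dispatch, or None if the queue stays empty."""
    deadline = time.monotonic() + wait_ms / 1000.0
    while time.monotonic() < deadline:
        # Anticipation: prefer a request from the process just served,
        # since its next synchronous read is likely close on disk.
        for req in queue:
            if req.process == last_process:
                queue.remove(req)
                return req
        time.sleep(poll_ms / 1000.0)        # keep the disk idle a little longer
    # Anticipation failed: fall back to a normal choice (nearest sector here).
    if not queue:
        return None
    req = min(queue, key=lambda r: r.sector)
    queue.remove(req)
    return req
```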