An improved algorithm for CIOQ switches
- Proc. 12th Annual European Symp. on Algorithms (ESA), Springer LNCS 3221, 2004
Cited by 11 (3 self)
Abstract: The problem of maximizing the weighted throughput in various switching settings has been intensively studied recently through competitive analysis. To date, the most general model that has been investigated is the standard CIOQ (Combined Input and Output Queued) switch architecture with internal fabric speedup S ≥ 1. CIOQ switches, which comprise the backbone of packet routing networks, are N × N switches controlled by a switching policy that incorporates two components: admission control and scheduling. An admission control strategy is essential to determine the packets stored in the FIFO queues at input and output ports, while the scheduling policy conducts the transfer of packets through the internal fabric, from input ports to output ports. The online problem of maximizing the total weighted throughput of CIOQ switches was recently investigated by Kesselman and Rosén in [15]. They presented two different online algorithms for the general problem that achieve non-constant competitive ratios (linear in either the speedup or the number of distinct values, or logarithmic in the value range). We introduce the first constant-competitive algorithm for the general case of the problem, with arbitrary speedup and packet values. Specifically, our algorithm is 8-competitive, and is also simple and easy to implement.

1 Introduction

Overview: Recently, packet routing networks have become the dominant platform for data transfer. The backbone of such networks is composed of N × N switches that accept packets through multiple incoming connections and route them through multiple outgoing connections. As network traffic continuously increases and traffic patterns constantly change, switches routinely have to cope efficiently with overloaded traffic, and are forced to discard packets due to insufficient buffer space while attempting to forward the more valuable packets to their destinations.
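The admission-control component described above can be illustrated with a minimal sketch. This is NOT the paper's 8-competitive CIOQ policy, only a generic greedy preemptive rule for a single bounded FIFO queue of valued packets; the function name and representation (a list of values) are illustrative assumptions.

```python
def admit(queue, packet_value, capacity):
    """Illustrative greedy preemptive admission for one bounded queue:
    accept the packet if there is free space; otherwise preempt the
    least valuable queued packet when the arrival is strictly more
    valuable.  `queue` is a list of packet values, head first."""
    if len(queue) < capacity:
        return queue + [packet_value]          # room available: admit
    worst = min(range(len(queue)), key=lambda i: queue[i])
    if packet_value > queue[worst]:
        # evict the cheapest packet, append the new one at the tail
        return queue[:worst] + queue[worst + 1:] + [packet_value]
    return queue                               # reject the arrival
```

For example, with capacity 3, a queue holding values [3, 1, 2] admits an arriving packet of value 5 by evicting the value-1 packet.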
An Experimental Study of New and Known Online Packet Buffering Algorithms
Cited by 9 (0 self)
We present the first experimental study of online packet buffering algorithms for network switches. The design and analysis of such strategies has received considerable research attention in the theory community recently. We consider a basic scenario in which m queues of size B have to be maintained so as to maximize the packet throughput. A Greedy strategy, which always serves the most populated queue, achieves a competitive ratio of 2. Therefore, various online algorithms with improved competitive factors were developed in the literature. In this paper we first develop a new online algorithm, called HSFOD, which is especially designed to perform well under real-world conditions. We prove that its competitive ratio is equal to 2. The major part of this paper is devoted to the experimental study in which we have implemented all the proposed algorithms, including HSFOD, and tested them on packet traces from benchmark libraries. We have evaluated the experimentally observed competitiveness, the running times, the memory requirements, and the actual packet throughput of the strategies. The tests were performed for varying values of m and B as well as varying switch speeds. The extensive experiments demonstrate that, despite a relatively high theoretical competitive ratio, heuristic and greedy-like strategies are the methods of choice in a practical environment. In particular, HSFOD has the best experimentally observed competitiveness.
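The Greedy strategy described in this abstract (always serve the most populated of m size-B queues) can be sketched as a small simulation. The function name and the input encoding are assumptions for illustration; packets are unit-valued, so a count per queue suffices.

```python
def simulate_greedy(arrivals, m, B):
    """Simulate the Greedy policy on m FIFO queues of capacity B.
    `arrivals` is a list of time steps; each step is a list of queue
    indices, one per arriving packet.  Each step, arrivals are
    enqueued (overflow packets are dropped), then one packet is sent
    from the most populated queue.  Returns packets transmitted."""
    queues = [0] * m                    # occupancy per queue
    transmitted = 0
    for step in arrivals:
        for q in step:
            if queues[q] < B:          # admit only if not full
                queues[q] += 1
        best = max(range(m), key=lambda i: queues[i])
        if queues[best] > 0:           # serve the longest queue
            queues[best] -= 1
            transmitted += 1
    return transmitted
```

With m = 2 and B = 1, the arrival sequence [[0, 0, 1], [1], [0]] drops one packet at queue 0 and one at queue 1, transmitting 3 of the 5 injected packets.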
Buffer Management Problems
- ACM SIGACT News, 2004
Cited by 6 (0 self)
In recent years, there has been a lot of interest in Quality of Service (QoS) networks. In regular IP networks, packets are indistinguishable, and in case of overload any packet may be dropped. In a commercial environment, it is much more preferable to allow better service to higher-paying customers or customers with critical requirements. The idea of Quality of Service guarantees is that packets are marked with values which indicate their importance. This naturally leads to decision problems at network switches when many packets arrive and overload occurs. In this paper, we give an overview of several models that have been studied in this area from an online perspective. These models differ by restrictions such as bounded delay, bounded queue size, etc. We first present results for a single buffer in Section 1 and then for multiple buffers in Section 2. This paper is not meant as a comprehensive survey of the work in this area. There are many more variations of these problems that have been studied, for instance, multiple output buffers [14]. Our goal was merely to give a taste of this problem area, and we hope you enjoy it. 1 Single buffer We consider a QoS buffering system that is able to hold B packets. Time is slotted. At the beginning of a time step a set of packets (possibly empty) arrives, and at the end of the time step a single packet may be transmitted.
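The slotted single-buffer model just described can be sketched in a few lines. This is a simplification, not any specific policy from the survey: it keeps the B most valuable packets and always sends the most valuable one (the bounded-delay-style variant rather than the FIFO variant); the function name and input format are illustrative assumptions.

```python
def greedy_preemptive(steps, B):
    """Sketch of a greedy preemptive policy for one QoS buffer of
    capacity B.  `steps` is a list of time steps; each step is a
    list of values of packets arriving at its start.  After each
    arrival phase the buffer retains only the B most valuable
    packets, then transmits the most valuable one.  Returns the
    total value delivered."""
    buf = []
    gained = 0
    for values in steps:
        buf.extend(values)         # arrival phase
        buf.sort(reverse=True)
        buf = buf[:B]              # preempt down to capacity B
        if buf:
            gained += buf.pop(0)   # send the most valuable packet
    return gained
```

With B = 2 and arrivals [[3, 1, 2], [5]], the value-1 packet is dropped in the first step and the total value delivered is 3 + 5 = 8.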
Rate vs. Buffer Size -- Greedy Information Gathering on the Line
2007
Cited by 3 (1 self)
We consider packet networks with limited buffer space at the nodes, and are interested in the question of maximizing the number of packets that arrive at their destinations rather than being dropped due to full buffers. We initiate a more refined analysis of the throughput competitive ratio of admission and scheduling policies in the Competitive Network Throughput model [2], taking into account not only the network size but also the buffer size and the injection rate of the traffic. We specifically consider the problem of information gathering on the line, with limited buffer space, under adversarial traffic. We examine how the buffer size and the injection rate of the traffic affect the performance of the greedy protocol for this problem. We establish upper bounds on the competitive ratio of the greedy protocol in terms of the network size, the buffer size, and the adversary’s rate, and present lower bounds which are tight up to constant factors. These results show, for example, that provisioning the network with sufficiently large buffers may substantially improve the performance of the greedy protocol in some cases, whereas for some high-rate adversaries, using larger buffers does not have any effect on the competitive ratio of the protocol.
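The information-gathering setting can be sketched as follows. This is a minimal toy model, not the paper's exact protocol or adversary: n nodes on a line, node 0 acting as the sink, buffers of size B, and a greedy rule that forwards a packet one hop toward the sink whenever the downstream buffer has room. Names and step semantics are assumptions.

```python
def greedy_line_gathering(injections, n, B):
    """Toy greedy protocol on a line of n nodes (node 0 is the sink),
    each with buffer capacity B.  `injections[t]` lists the nodes
    receiving one new packet at step t (injections at a full node
    are dropped).  Each step: inject, let the sink absorb its
    buffer, then each node i = 1..n-1 (closest to the sink first)
    forwards one packet to node i-1 if that buffer has room.
    Returns the number of packets absorbed by the sink."""
    buf = [0] * n
    delivered = 0
    for nodes in injections:
        for i in nodes:                # injection phase
            if buf[i] < B:
                buf[i] += 1
        delivered += buf[0]            # sink absorbs its buffer
        buf[0] = 0
        for i in range(1, n):          # forwarding phase
            if buf[i] > 0 and buf[i - 1] < B:
                buf[i] -= 1
                buf[i - 1] += 1
    return delivered
```

With n = 3, B = 1, and two packets injected at the far node, both reach the sink after a few drain steps; with a high injection rate, small buffers force drops, mirroring the trade-off the paper quantifies.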
Scheduling in Networks with Limited Buffers
In networks with limited buffer capacity, packet loss can occur at a link even when the average packet arrival rate is low compared to the link's speed. To offer strong loss-rate guarantees, ISPs may need to adopt stringent routing constraints to limit the load at the network links and the routing path length. However, to simultaneously maximize revenue, ISPs should be interested in scheduling algorithms that lead to the least stringent routing constraints. This work attempts to address the ISPs' needs as follows: first, by proposing an algorithm that performs well (in terms of routing constraints) on networks of output queued (OQ) routers (that is, ideal routers), and second, by bounding the extra switch fabric speed and buffer capacity required for the emulation of these algorithms in combined input-output queued (CIOQ) routers. The first part of the thesis studies the problem of minimizing the maximum session loss rate in networks of OQ routers. It introduces the Rolling Priority algorithm, a local online scheduling algorithm that offers superior loss guarantees compared to FCFS/Drop Tail and FCFS/Random Drop. Rolling Priority has the following properties: (1) it does not favor any session over others at any link, (2) it ensures that a proportion of packets from each session is subject to a negligibly small loss probability at every link along the session's path, and (3) it maximizes the proportion of packets subject to negligible loss probability. The second part of the thesis studies the emulation of OQ routers using CIOQ. The OQ routers are equipped with a buffer of capacity B packets at every output. For the family of work-conserving scheduling algorithms, we find that whereas every greedy CIOQ policy is valid for the emulation of every OQ algorithm at speedup B, no CIOQ policy is valid at speedup s < ∛B − 2 when preemption is allowed. We also find that CCF, a well-studied CIOQ policy, is not valid at any speedup s < B. We then introduce a CIOQ policy, CEH, that is valid at speedup s ≥ √(2(B − 1)). Under CEH, the buffer occupancy at any input never exceeds 1 + ⌊(B − 1)/(s − 1)⌋.
On the Emulation of Finite-Buffered Output Queued Switches Using Combined Input-Output Queuing
Emulation of Output Queuing (OQ) switches using Combined Input-Output Queuing (CIOQ) switches has been studied extensively in the setting where the switch buffers have unlimited capacity. In this paper we study the general setting where the OQ switch and the CIOQ switch have finite buffer capacity B ≥ 1 packets at every output. We analyze the resource requirements of CIOQ policies in terms of the required fabric speedup and the additional buffer capacity needed at the CIOQ inputs: a CIOQ policy is said to be (s, b)-valid (for OQ emulation) if a CIOQ switch employing this policy can emulate an OQ switch using fabric speedup s ≥ 1, without exceeding buffer occupancy b at any input port. For the family of work-conserving scheduling algorithms, we find that whereas every greedy CIOQ policy is valid at speedup B, no CIOQ policy is valid at speedup s < ∛B − 2 when preemption is allowed. We also find that CCF in particular is not valid at any speedup s < B. We then introduce a CIOQ policy, CEH, that is valid at speedup s ≥ √(2(B − 1)). Under CEH, the buffer occupancy at any input never exceeds 1 + ⌊(B − 1)/(s − 1)⌋. Although the speedup required for the emulation of preemptive scheduling algorithms is not constant, it may be feasible in high-speed electronic or optical switches, which are expected to have limited buffering capacity. For non-preemptive scheduling algorithms, we characterize a trade-off between the CIOQ speedup and the input buffer occupancy. Specifically, we show that for any greedy policy that is valid at speedup s > 2, the input buffer occupancy cannot exceed 1 + ⌈(B − 1)/(s − 2)⌉. We also show that a greedy variant of the CCF policy is (2, B)-valid for the emulation of non-preemptive OQ algorithms with PIFO service disciplines.
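The quantitative bounds quoted in this abstract are easy to evaluate numerically. The sketch below, with an illustrative function name, simply computes the minimum CEH speedup √(2(B − 1)) and the resulting input-buffer occupancy bound 1 + ⌊(B − 1)/(s − 1)⌋ for given B and s; it is a numeric aid, not part of the paper.

```python
import math

def cioq_emulation_bounds(B, s):
    """Evaluate the CEH bounds stated in the abstract for an output
    buffer of B packets and fabric speedup s: the minimum speedup
    sqrt(2*(B-1)) at which CEH is valid, and the occupancy bound
    1 + floor((B-1)/(s-1)) when s is large enough (None otherwise)."""
    min_speedup = math.sqrt(2 * (B - 1))
    if s >= min_speedup:
        occupancy = 1 + math.floor((B - 1) / (s - 1))
    else:
        occupancy = None           # CEH's guarantee does not apply
    return min_speedup, occupancy
```

For B = 9, CEH requires speedup at least √16 = 4, and at s = 4 the input buffer occupancy never exceeds 1 + ⌊8/3⌋ = 3 packets.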