Results 1–10 of 12
Perspectives on Network Calculus – No Free Lunch, but Still Good Value
, 2012
Abstract

Cited by 20 (11 self)
ACM Sigcomm 2006 published a paper [26] which was perceived to unify the deterministic and stochastic branches of the network calculus (abbreviated throughout as DNC and SNC) [39]. Unfortunately, this seemingly fundamental unification – which has raised the hope of a straightforward transfer of all results from DNC to SNC – is invalid. To substantiate this claim, we demonstrate that for the class of stationary and ergodic processes, which is prevalent in traffic modelling, the probabilistic arrival model from [26] is quasi-deterministic, i.e., the underlying probabilities are either zero or one. Thus, the probabilistic framework from [26] is unable to account for statistical multiplexing gain, which is in fact the raison d'être of packet-switched networks. Other previous formulations of SNC can capture statistical multiplexing ...
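The zero-one behaviour invoked here follows from the ergodic theorem: for a stationary and ergodic source, every sample path has the same long-run rate, so an event like "the arrivals stay below an envelope of rate r" eventually holds on almost all paths or on almost none. A minimal simulation sketch (all parameters hypothetical) of a two-state Markov-modulated source illustrates how sample-path averages collapse onto a single constant:

```python
import random

def mm_on_off_rate(p_on_off=0.1, p_off_on=0.1, peak=2.0, steps=200_000, seed=0):
    """Long-run average rate of a two-state (on/off) Markov-modulated source.

    The chain is stationary and ergodic, so every sample path converges to
    the same mean rate peak * pi_on, with pi_on = p_off_on / (p_on_off + p_off_on).
    """
    rng = random.Random(seed)
    on = True
    total = 0.0
    for _ in range(steps):
        total += peak if on else 0.0
        if on and rng.random() < p_on_off:
            on = False
        elif not on and rng.random() < p_off_on:
            on = True
    return total / steps

# Independent runs all land on the ergodic mean (here 2.0 * 0.5 = 1.0):
# an envelope of rate r > 1.0 is eventually respected on (almost) every
# path, and one of rate r < 1.0 on (almost) none -- probability 1 or 0.
rates = [mm_on_off_rate(seed=s) for s in range(5)]
```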
Characterizing the Impact of the Workload on the Value of Dynamic Resizing in Data Centers
Abstract

Cited by 10 (2 self)
Energy consumption imposes a significant cost for data centers; yet much of that energy is used to maintain excess service capacity during periods of predictably low load. As a result, there has recently been interest in developing designs that allow the service capacity to be dynamically resized to match the current workload. However, there is still much debate about the value of such approaches in real settings. In this paper, we show that the value of dynamic resizing is highly dependent on statistics of the workload process. In particular, both slow-timescale non-stationarities of the workload (e.g., the peak-to-mean ratio) and the fast-timescale stochasticity (e.g., the burstiness of arrivals) play key roles. To illustrate the impact of these factors, we combine optimization-based modeling of the slow timescale with stochastic modeling of the fast timescale. Within this framework, we provide both analytic and numerical results characterizing when dynamic resizing does (and does not) provide benefits.
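The slow-timescale trade-off this abstract describes can be sketched with a toy cost model (entirely hypothetical numbers: unit energy cost per provisioned unit of capacity, plus a switching cost per unit of capacity change):

```python
import math

def provisioning_costs(load, switching_cost=1.0):
    """Compare static vs dynamic capacity cost for a workload trace.

    Hypothetical model: energy cost per slot equals provisioned capacity;
    dynamic resizing pays an extra switching cost per unit of capacity change.
    """
    static = max(load) * len(load)   # always provision for the peak
    dynamic = sum(load)              # track the workload exactly
    dynamic += switching_cost * sum(abs(a - b) for a, b in zip(load, load[1:]))
    return static, dynamic

# A diurnal (slow-timescale) workload: a high peak-to-mean ratio favours resizing.
T = 96
load = [5.0 + 4.0 * math.sin(2 * math.pi * t / T) for t in range(T)]
static, dynamic = provisioning_costs(load)
```

Under this model the dynamic cost falls well below the static one; fast-timescale burstiness would show up as extra switching cost eroding that gap.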
Sharp Per-Flow Delay Bounds for Bursty Arrivals: The Case of FIFO, SP, and EDF Scheduling
Abstract

Cited by 3 (3 self)
The practicality of the stochastic network calculus (SNC) is often questioned on grounds of potential looseness of its performance bounds. In this paper, it is uncovered that for bursty arrival processes (specifically Markov-Modulated On-Off (MMOO)), whose amenability to per-flow analysis is typically proclaimed as a highlight of SNC, the bounds can unfortunately be very loose (e.g., off by several orders of magnitude). In response to this uncovered weakness of SNC, the (Standard) per-flow bounds are herein improved by deriving a general sample-path bound, using martingale-based techniques, which accommodates FIFO, SP, and EDF scheduling. The obtained (Martingale) bounds capture an extra exponential decay factor of O ...
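For readers who want to probe the claimed looseness empirically, a toy discrete-time simulation of an MMOO source feeding a constant-rate FIFO server (not the paper's martingale derivation; all parameters hypothetical) produces the empirical backlog tail against which an analytical bound can be held:

```python
import random

def mmoo_fifo_backlog(peak=3.0, rate=1.6, p_stay=0.9, steps=100_000, seed=1):
    """Discrete-time FIFO backlog fed by a Markov-Modulated On-Off source.

    In the 'on' state the source emits `peak` units per slot, in 'off' none;
    the current state persists with probability `p_stay`. A work-conserving
    server drains `rate` units per slot. Returns all backlog samples.
    """
    rng = random.Random(seed)
    on, backlog = False, 0.0
    samples = []
    for _ in range(steps):
        if rng.random() > p_stay:
            on = not on
        arrivals = peak if on else 0.0
        backlog = max(0.0, backlog + arrivals - rate)
        samples.append(backlog)
    return samples

# Stationary mean rate is peak * 0.5 = 1.5 < 1.6, so the queue is stable
# but heavily loaded; the empirical tail quantifies how loose a bound is.
samples = mmoo_fifo_backlog()
tail_10 = sum(b > 10.0 for b in samples) / len(samples)
```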
A Guide to the Stochastic Network Calculus
Abstract

Cited by 3 (2 self)
The aim of the stochastic network calculus is to comprehend statistical multiplexing and scheduling of non-trivial traffic sources in a framework for end-to-end analysis of multi-node networks. To date, several models, some of them with subtle yet important differences, have been explored to achieve these objectives. Capitalizing on previous works, this paper contributes an intuitive approach to the stochastic network calculus, where we seek to obtain its fundamental results in the possibly easiest way. For this purpose, we will now and then trade generality or precision for simplicity. In detail, the method that is assembled in this work uses moment generating functions, known from the theory of effective bandwidths, to characterize traffic arrivals and network service. From these, affine envelope functions with exponentially decaying overflow profile are derived to compute statistical end-to-end backlog and delay bounds for networks.
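The MGF-based machinery described above reduces, in the simplest single-node setting, to a Chernoff bound summed over busy-period lengths. The sketch below (hypothetical parameter values; discrete-time, constant-rate server) shows the resulting exponential backlog-violation bound:

```python
import math

def snc_backlog_bound(b, theta, rho, sigma, C):
    """Statistical backlog bound from an MGF arrival envelope.

    Assumes a discrete-time arrival process with MGF envelope
    E[exp(theta * A(s, t))] <= exp(theta * (rho * (t - s) + sigma))
    served at constant rate C. A Chernoff bound plus a union bound over
    busy-period lengths (a geometric sum, stable iff rho < C) gives

        P(backlog > b) <= exp(theta*sigma) * exp(-theta*b)
                          / (1 - exp(theta*(rho - C)))
    """
    assert rho < C, "stability requires envelope rate below capacity"
    return (math.exp(theta * sigma) * math.exp(-theta * b)
            / (1.0 - math.exp(theta * (rho - C))))

# The violation probability decays exponentially in the backlog level b:
p5 = snc_backlog_bound(b=5.0, theta=1.0, rho=0.8, sigma=0.5, C=1.0)
p10 = snc_backlog_bound(b=10.0, theta=1.0, rho=0.8, sigma=0.5, C=1.0)
```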
Stochastic service curve and delay bound analysis: a single node case
 Computer Science, University of Kaiserslautern
, 2013
Robust Queueing Theory
 SUBMITTED TO OPERATIONS RESEARCH
Abstract

Cited by 1 (0 self)
We propose an alternative approach for studying queueing systems by employing robust optimization as opposed to stochastic analysis. While traditional stochastic queueing theory relies on Kolmogorov's axioms of probability and models arrivals and services as renewal processes, we use the limit laws of probability as the axioms of our methodology and model the queueing system's primitives by uncertainty sets. In this framework, we obtain closed-form expressions for the steady-state waiting times in multi-server queues with heavy-tailed arrival and service processes. These expressions are not available under traditional stochastic queueing theory for heavy-tailed processes, while they lead to the same qualitative insights for independent and identically distributed arrival and service times. We also develop an exact calculus for analyzing a network of queues with multiple servers based on the following key principle: a) the departure from a queue, b) the superposition, and c) the thinning of arrival processes have the same uncertainty set representation as the original arrival processes. We show that our approach, which we call the Robust Queueing Network Analyzer (RQNA), a) yields results with error percentages in single digits (for all experiments we performed) relative to simulation, b) performs significantly better than the Queueing Network Analyzer (QNA) proposed in Whitt (1983), and c) is to a large extent insensitive to the number of servers per queue, the network size, degree of feedback, and traffic intensity, while being somewhat sensitive to the degree of diversity of external arrival distributions in the network.
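The flavour of such closed-form expressions can be illustrated in the single-server case. Under the square-root uncertainty sets commonly used in robust queueing (assumed here, with budgets gamma_a and gamma_s), the worst case of the Lindley recursion admits a closed form; the sketch below checks it numerically. This is an illustrative reconstruction, not the paper's exact expressions.

```python
def robust_waiting_bound(lam, mu, gamma_a, gamma_s, horizon=10_000):
    """Worst-case steady-state waiting time under square-root uncertainty sets.

    Assumed model: cumulative interarrival (resp. service) deviations over n
    jobs are bounded by gamma_a*sqrt(n) (resp. gamma_s*sqrt(n)). The worst
    case of the Lindley recursion is then max_n [Gamma*sqrt(n) - n*delta]
    with Gamma = gamma_a + gamma_s and delta = 1/lam - 1/mu, whose
    continuous maximizer yields the closed form lam*Gamma^2 / (4*(1 - rho)).
    """
    gamma = gamma_a + gamma_s
    delta = 1.0 / lam - 1.0 / mu  # per-job slack; stability needs rho = lam/mu < 1
    assert delta > 0, "stability requires lam < mu"
    numeric = max(gamma * n ** 0.5 - n * delta for n in range(1, horizon))
    closed_form = lam * gamma ** 2 / (4.0 * (1.0 - lam / mu))
    return numeric, closed_form

numeric, closed = robust_waiting_bound(lam=1.0, mu=1.25, gamma_a=1.0, gamma_s=1.0)
```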
Towards a Statistical Network Calculus – Dealing with Uncertainty in Arrivals
Abstract
The stochastic network calculus (SNC) has become an attractive methodology to derive probabilistic performance bounds. So far the SNC is based on (tacitly assumed) exact probabilistic assumptions about the arrival processes. Yet, in practice, these are only true approximately, at best. In many situations it is hard, if possible at all, to make such assumptions a priori. A more practical approach would be to base the SNC operations on measurements of the arrival processes (preferably even online). In this paper, we develop this idea and incorporate measurements into the framework of SNC, taking the further uncertainty resulting from estimation errors into account. This is a crucial step towards a statistical network calculus (StatNC), eventually lending itself to a self-modelling operation of networks with a minimum of a priori assumptions. In numerical experiments, we are able to substantiate the novel opportunities opened up by the StatNC.
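One way to realize the measurement-based idea is to estimate the arrival MGF from a trace and inflate it with a safety margin before it enters any SNC bound; a proper version would replace the fixed margin with a statistical confidence bound on the estimator. A minimal sketch (toy source, hypothetical margin):

```python
import math
import random

def estimated_mgf_envelope(trace, theta, margin=1.1):
    """Measurement-based MGF envelope for per-slot arrivals.

    Estimates E[exp(theta * a)] from the observed trace and inflates the
    estimate multiplicatively; the fixed `margin` is a crude stand-in for a
    proper confidence bound on the estimation error.
    """
    mgf_hat = sum(math.exp(theta * a) for a in trace) / len(trace)
    return margin * mgf_hat

rng = random.Random(42)
trace = [rng.choice([0.0, 2.0]) for _ in range(10_000)]  # toy on/off arrivals
env = estimated_mgf_envelope(trace, theta=0.5)
true_mgf = 0.5 * (1.0 + math.exp(1.0))  # exact MGF of the toy source at theta=0.5
```

With enough samples the inflated estimate dominates the true MGF, so bounds computed from it remain valid envelopes at the cost of some slack.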