Results 1–10 of 22
Optimality, fairness, and robustness in speed scaling designs
Abstract

Cited by 44 (14 self)
System design must strike a balance between energy and performance by carefully selecting the speed at which the system will run. In this work, we examine fundamental tradeoffs incurred when designing a speed scaler to minimize a weighted sum of expected response time and energy use per job. We prove that a popular dynamic speed scaling algorithm is 2-competitive for this objective and that no “natural” speed scaler can improve on this. Second, we prove that energy-proportional speed scaling works well across two common scheduling policies: Shortest Remaining Processing Time (SRPT) and Processor Sharing (PS). Third, we show that under SRPT and PS, gated-static speed scaling is nearly optimal when the mean workload is known, but that dynamic speed scaling provides robustness against uncertain workloads. Finally, we prove that speed scaling magnifies unfairness, notably SRPT’s bias against large jobs and the bias against short jobs in non-preemptive policies. However, PS remains fair under speed scaling. Together, these results show that the speed scalers studied here can achieve any two, but only two, of optimality, fairness, and robustness.
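The "energy-proportional" rule discussed in this abstract can be sketched concretely. Assuming the common polynomial power function P(s) = s^α, running at a speed s_n with P(s_n) = n when n jobs are present (so power consumption tracks the queue length) gives s_n = n^(1/α). The function names below are illustrative, not from the paper:

```python
# Hedged sketch of energy-proportional dynamic speed scaling,
# assuming the polynomial power function P(s) = s**alpha.

def power(s: float, alpha: float = 2.0) -> float:
    """Power drawn at speed s, for P(s) = s**alpha."""
    return s ** alpha

def energy_proportional_speed(n_jobs: int, alpha: float = 2.0) -> float:
    """Speed chosen so that the power drawn equals the number of jobs:
    P(s_n) = n  =>  s_n = n**(1/alpha)."""
    return n_jobs ** (1.0 / alpha)

if __name__ == "__main__":
    for n in (1, 2, 4):
        s = energy_proportional_speed(n)
        print(n, round(s, 3), round(power(s), 3))  # power tracks queue length
```

This is only the speed rule; the 2-competitiveness result above concerns the full scheduler built around it.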
Online Primal-Dual for Nonlinear Optimization with Applications to Speed Scaling
 In: Proceedings of the 10th Workshop on Approximation and Online Algorithms (WAOA)
, 2012
A Tutorial on Amortized Local Competitiveness in Online Scheduling
, 2011
Abstract

Cited by 17 (14 self)
Potential functions are used to show that a particular online algorithm is locally competitive in an amortized sense. Algorithm analyses using potential functions are sometimes criticized as seeming to be black magic.
Race to Idle: New Algorithms for Speed Scaling with a Sleep State
Abstract

Cited by 15 (1 self)
We study an energy conservation problem where a variable-speed processor is equipped with a sleep state. Executing jobs at high speeds and then setting the processor asleep is an approach that can lead to further energy savings compared to standard dynamic speed scaling. We consider classical deadline-based scheduling, i.e. each job is specified by a release time, a deadline and a processing volume. For general convex power functions, Irani et al. [12] devised an offline 2-approximation algorithm. Roughly speaking, the algorithm schedules jobs at a critical speed s_crit that yields the smallest energy consumption while jobs are processed. For power functions P(s) = s^α + γ, where s is the processor speed, Han et al. [11] gave an (α^α + 2)-competitive online algorithm. We investigate the offline setting of speed scaling with a sleep state. First we prove NP-hardness of the optimization problem. Additionally, we develop lower bounds for general convex power functions: no algorithm that constructs s_crit-schedules, which execute jobs at speeds of at least s_crit, can achieve an approximation factor smaller than 2. Furthermore, no algorithm that minimizes the energy expended for processing jobs can attain an approximation ratio smaller than 2. We then present an algorithmic framework for designing good approximation algorithms. For general convex power functions, we derive an approximation factor of 4/3. For power functions P(s) = βs^α + γ, we obtain an approximation factor of 137/117 < 1.171. We finally show that our framework yields the best approximation guarantees for the class of s_crit-schedules. For general convex power functions, we give another 2-approximation algorithm. For functions P(s) = βs^α + γ, we present tight upper and lower bounds on the best possible approximation factor. The ratio is exactly eW_{−1}(−e^{−1−1/e}) / (eW_{−1}(−e^{−1−1/e}) + 1) < 1.211, where W_{−1} is the lower branch of the Lambert W function.
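The critical speed s_crit mentioned in this abstract is the speed minimizing energy per unit of work, i.e. the minimizer of P(s)/s. For P(s) = βs^α + γ, setting the derivative of (βs^α + γ)/s to zero gives s_crit = (γ/(β(α−1)))^(1/α). A small sketch verifying this closed form numerically (the function names are illustrative):

```python
# Hedged sketch: closed-form critical speed for P(s) = beta*s**alpha + gamma,
# checked against nearby speeds. Requires alpha > 1.

def s_crit(alpha: float, beta: float, gamma: float) -> float:
    """Minimizer of (beta*s**alpha + gamma)/s over s > 0:
    d/ds [beta*s**(alpha-1) + gamma/s] = 0  =>  s**alpha = gamma/(beta*(alpha-1))."""
    return (gamma / (beta * (alpha - 1))) ** (1.0 / alpha)

def energy_per_work(s: float, alpha: float, beta: float, gamma: float) -> float:
    """Energy spent per unit of work when running at constant speed s."""
    return (beta * s ** alpha + gamma) / s

if __name__ == "__main__":
    a, b, g = 3.0, 1.0, 2.0
    sc = s_crit(a, b, g)
    # The closed form should not lose to slightly faster or slower speeds.
    assert energy_per_work(sc, a, b, g) <= energy_per_work(sc * 1.1, a, b, g)
    assert energy_per_work(sc, a, b, g) <= energy_per_work(sc * 0.9, a, b, g)
    print(round(sc, 4))
```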
Algorithms for dynamic speed scaling
 In STACS 2011, volume 9 of LIPIcs. Schloss Dagstuhl – Leibniz-Zentrum fuer Informatik
, 2011
Abstract

Cited by 13 (0 self)
Many modern microprocessors allow the speed/frequency to be set dynamically. The general goal is to execute a sequence of jobs on a variable-speed processor so as to minimize energy consumption. This paper surveys algorithmic results on dynamic speed scaling. We address settings where (1) jobs have strict deadlines and (2) job flow times are to be minimized.
Deadline Scheduling and Power Management for Speed Bounded Processors
Abstract

Cited by 13 (1 self)
Energy consumption has become an important issue in the study of processor scheduling. Energy reduction can be achieved by allowing a processor to vary the speed dynamically (dynamic speed scaling) [2–4, 7, 10] or to enter a sleep state [1, 5, 8]. In the past, these two mechanisms have often been studied separately. It is indeed natural to consider an integrated model in which a ...
How to Schedule When You Have to Buy Your Energy
 In: Proc. of the 13th/14th Workshop on Approximation Algorithms for Comb. Optimization Problems / Randomization and Computation (APPROX/RANDOM)
, 2010
Abstract

Cited by 8 (1 self)
We consider a situation where jobs arrive over time at a data center consisting of identical speed-scalable processors. For each job, the scheduler knows how much income is lost as a function of how long the job is delayed. The scheduler also knows the fixed cost of a unit of energy. The online scheduler determines which jobs to run on which processors, and at what speed to run the processors. The scheduler's objective is to maximize profit, which is the income obtained from jobs minus the energy costs. We give a (1+ε)-speed O(1)-competitive algorithm, and show that resource augmentation is necessary to achieve O(1)-competitiveness.
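The core tradeoff this abstract describes can be illustrated on a single job (this is not the paper's algorithm, just a sketch under assumed names). With power function P(s) = s^α, income lost at rate w per unit of delay, and energy price c, running a job of size p at constant speed s costs p·(w/s + c·s^(α−1)); setting the derivative to zero gives the cost-minimizing speed s = (w/(c(α−1)))^(1/α):

```python
# Illustrative single-job delay-cost vs. energy-cost tradeoff,
# assuming P(s) = s**alpha, delay-cost rate w, and energy price c.

def job_cost(s: float, p: float, w: float, c: float, alpha: float = 3.0) -> float:
    """Lost income plus energy cost for a job of size p run at constant speed s:
    w*(p/s) + c*P(s)*(p/s) = p*(w/s + c*s**(alpha-1))."""
    return p * (w / s + c * s ** (alpha - 1))

def best_speed(w: float, c: float, alpha: float = 3.0) -> float:
    """Minimizer of job_cost over s, from -w/s**2 + c*(alpha-1)*s**(alpha-2) = 0."""
    return (w / (c * (alpha - 1))) ** (1.0 / alpha)

if __name__ == "__main__":
    w, c, p = 2.0, 1.0, 5.0
    s = best_speed(w, c)
    # Deviating from the optimizer should not reduce the cost.
    assert job_cost(s, p, w, c) <= job_cost(s * 1.2, p, w, c)
    print(round(s, 4))
```

Note how a higher energy price c pushes the chosen speed down, matching the intuition that "buying" energy slows the schedule.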
Energy Efficient Scheduling of Parallelizable Jobs
Abstract

Cited by 5 (2 self)
In this paper, we consider scheduling parallelizable jobs in the non-clairvoyant speed scaling setting to minimize the objective of weighted flow time plus energy. Previously, strong lower bounds were shown for this model in the unweighted setting, even when the algorithm is given a constant amount of resource augmentation over the optimal solution. However, these lower bounds were given only for certain families of algorithms that do not recognize the parallelizability of alive jobs. In this work, we circumvent the previous lower bounds and give a scalable algorithm under the natural assumption that the algorithm can know the current parallelizability of a job. When a general power function is considered, this is also the first algorithm that has a constant competitive ratio for the problem using any amount of resource augmentation.
Non-clairvoyant scheduling for weighted flow time and energy on speed bounded processors
 In Proc. CATS
, 2010
"... Abstract. We consider the online scheduling problem of minimizing total weighted flow time plus energy in the dynamic speed scaling model, where a processor can scale its speed dynamically between 0 and some maximum speed T. In the past few years this problem has been studied extensively under the c ..."
Abstract

Cited by 3 (2 self)
We consider the online scheduling problem of minimizing total weighted flow time plus energy in the dynamic speed scaling model, where a processor can scale its speed dynamically between 0 and some maximum speed T. In the past few years this problem has been studied extensively under the clairvoyant setting, which requires the size of a job to be known at release time [1, 5, 6, 9, 15, 18–20]. For the non-clairvoyant setting, despite its practical importance, progress has been relatively limited. Only recently was an online algorithm, LAPS, shown to be O(1)-competitive for minimizing (unweighted) flow time plus energy in the infinite speed model (i.e., T = ∞) [11, 12]. This paper makes two contributions to non-clairvoyant scheduling. First, we resolve the open problem of whether the unweighted result for LAPS can be extended to the more realistic model with bounded maximum speed. Second, we show that another non-clairvoyant algorithm, WRR, is O(1)-competitive when weighted flow time is concerned. Note that WRR is not as efficient as LAPS for scheduling unweighted jobs, as WRR has a much bigger constant hidden in its competitive ratio. This is the corrected version of the paper with the same title in CATS 2010 [13]; in particular, Lemmas 2 and 4 of Section 3 and the ordering of jobs in the potential analysis of Section 4 were given incorrectly before and are fixed in this version. On the other hand, the conjecture, given in Section 5, about the generalization of LAPS to the weighted setting has recently been resolved [14]. T.W. Lam is partly supported by HKU Grant 7176104.
Slow Down & Sleep for Profit in Online Deadline Scheduling
, 2012
Abstract

Cited by 2 (2 self)
We present and study a new model for energy-aware and profit-oriented scheduling on a single processor. The processor features dynamic speed scaling as well as suspension to a sleep mode. Jobs arrive over time, are preemptable, and have different sizes, values, and deadlines. On the arrival of a new job, the scheduler may either accept or reject the job. Accepted jobs need a certain energy investment to be finished in time, while rejected jobs cause costs equal to their values. Here, power consumption at speed s is given by P(s) = s^α + β and the energy investment is power integrated over time. Additionally, the scheduler may decide to suspend the processor to a sleep mode in which no energy is consumed, though awaking entails fixed transition costs γ. The objective is to minimize the total value of rejected jobs plus the total energy. Our model combines aspects from advanced energy conservation techniques (namely speed scaling and sleep states) and profit-oriented scheduling models. We show that rejection-oblivious schedulers (whose rejection decisions are not based on former decisions) have – in contrast to the model without sleep states – an unbounded competitive ratio w.r.t. the processor parameters α and β. It turns out that the worst-case performance of such schedulers depends linearly on the jobs' value densities (the ratio between a job's value and its work). We give an algorithm whose competitiveness nearly matches this lower bound. If the maximum value density is not too large, the competitiveness becomes α^α + 2eα. Also, we show that it suffices to restrict the value density of low-value jobs only. Using a technique from [12] we transfer our results to processors with a fixed maximum speed.
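The "energy investment" in this abstract can be made concrete for a single job (a minimal sketch, not the paper's rejection rule): under P(s) = s^α + β, finishing a job of work p within a window of length d at the constant speed p/d consumes ((p/d)^α + β)·d energy, and rejecting instead costs the job's value v. The naive comparison below is illustrative only; the paper's rejection-oblivious schedulers use more refined rules:

```python
# Hedged sketch of the accept/reject energy tradeoff,
# assuming P(s) = s**alpha + beta and a constant-speed schedule.

def finish_energy(p: float, d: float, alpha: float = 3.0, beta: float = 1.0) -> float:
    """Energy to finish work p in a window of length d at speed s = p/d:
    P(s) * d = ((p/d)**alpha + beta) * d."""
    s = p / d
    return (s ** alpha + beta) * d

def accept(p: float, d: float, v: float, alpha: float = 3.0, beta: float = 1.0) -> bool:
    """Naively accept iff the energy investment is below the job's value v."""
    return finish_energy(p, d, alpha, beta) < v

if __name__ == "__main__":
    print(accept(p=2.0, d=1.0, v=10.0))  # energy = (8 + 1)*1 = 9 < 10 -> True
    print(accept(p=2.0, d=1.0, v=5.0))   # 9 >= 5 -> False
```

The ratio v/p appearing here is exactly the value density whose magnitude drives the lower bound stated above.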