Results 1–10 of 39
Speed Scaling Functions for Flow Time Scheduling based on Active Job Count
Cited by 47 (12 self)
Abstract. We study online scheduling to minimize flow time plus energy usage in the dynamic speed scaling model. We devise new speed scaling functions that depend on the number of active jobs, replacing the existing speed scaling functions in the literature that depend on the remaining work of active jobs. The new speed functions are more stable and also more efficient. They can support better job selection strategies to improve the competitive ratios of existing algorithms [5,8], and, more importantly, to remove the requirement of extra speed. These functions further distinguish themselves from others as they can readily be used in the nonclairvoyant model (where the size of a job is only known when the job finishes). As a first step, we study the scheduling of batched jobs (i.e., jobs with the same release time) in the nonclairvoyant model and present the first competitive algorithm for minimizing flow time plus energy (as well as for weighted flow time plus energy); the performance is close to optimal.
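The abstract's core idea, a speed determined only by the number of active jobs, can be sketched for the batched case it describes. This is a minimal illustration assuming power P(s) = s^α, the speed function s(n) = n^(1/α), and shortest-job-first selection; these specifics are assumptions for illustration, not necessarily the paper's exact choices:

```python
# Sketch: flow time plus energy for batched jobs (all released at time 0)
# under a job-count-based speed function s(n) = n**(1/alpha), with power
# P(s) = s**alpha. Illustrative parameters only.

def flow_plus_energy(job_sizes, alpha=2.0):
    jobs = sorted(job_sizes)                 # shortest job first
    t = 0.0                                  # current time
    total_flow = 0.0
    total_energy = 0.0
    remaining = len(jobs)                    # number of active jobs
    for size in jobs:
        speed = remaining ** (1.0 / alpha)   # depends only on the job count
        duration = size / speed
        total_energy += (speed ** alpha) * duration
        t += duration
        total_flow += t                      # job completes at time t
        remaining -= 1
    return total_flow + total_energy

print(flow_plus_energy([1.0, 2.0, 3.0]))
```

A side effect of this choice is that power equals the number of active jobs (P(s(n)) = n), so energy and flow time accrue at the same rate at every instant; for batched jobs the two totals coincide, which reflects the balance such speed functions aim for.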
Optimality, fairness, and robustness in speed scaling designs
Cited by 44 (14 self)
System design must strike a balance between energy and performance by carefully selecting the speed at which the system will run. In this work, we examine fundamental tradeoffs incurred when designing a speed scaler to minimize a weighted sum of expected response time and energy use per job. We prove that a popular dynamic speed scaling algorithm is 2-competitive for this objective and that no “natural” speed scaler can improve on this. Further, we prove that energy-proportional speed scaling works well across two common scheduling policies: Shortest Remaining Processing Time (SRPT) and Processor Sharing (PS). Third, we show that under SRPT and PS, gated-static speed scaling is nearly optimal when the mean workload is known, but that dynamic speed scaling provides robustness against uncertain workloads. Finally, we prove that speed scaling magnifies unfairness, notably SRPT’s bias against large jobs and the bias against short jobs in non-preemptive policies. However, PS remains fair under speed scaling. Together, these results show that the speed scalers studied here can achieve any two, but only two, of optimality, fairness, and robustness.
Competitive Nonmigratory Scheduling for Flow Time and Energy
SPAA'08, 2008
Cited by 19 (7 self)
Energy usage has been an important concern in recent research on online scheduling. In this paper we extend the study of the tradeoff between flow time and energy from the single-processor setting [8, 6] to the multiprocessor setting. Our main result is an analysis of a simple non-migratory online algorithm called CRR (classified round robin) on m ≥ 2 processors, showing that its flow time plus energy is within O(1) times that of the optimal non-migratory offline algorithm, when the maximum allowable speed is slightly relaxed. This result still holds even if the comparison is made against the optimal migratory offline algorithm (the competitive ratio increases by a factor of 2.5). As a special case, our work also contributes to traditional online flow-time scheduling. Specifically, for minimizing flow time only, CRR can yield a competitive ratio of one or even arbitrarily smaller than one, when using sufficiently faster processors. Prior to our work, a similar result was known only for online algorithms that need migration [21, 23], while the best non-migratory result achieves an O(1) competitive ratio [14]. The above result stems from an interesting observation that there always exists some optimal migratory schedule S that can be converted (in an offline sense) to a non-migratory schedule S′ with a moderate increase in flow time plus energy. More importantly, this non-migratory schedule always dispatches jobs in the same way as CRR.
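The classified-round-robin dispatching described above can be illustrated with a short sketch: jobs are grouped into geometric size classes, and each class is round-robined across the m processors independently. The class base (2 here) and the job tuple format are illustrative assumptions; the paper's actual classification constant comes from its analysis:

```python
import math

def crr_dispatch(jobs, m, lam=2.0):
    """Classified Round Robin (illustrative sketch): each job of size in
    [lam**k, lam**(k+1)) belongs to class k; jobs within a class are
    assigned to the m processors in round-robin order."""
    counters = {}                                # next round-robin slot per class
    assignment = {}
    for name, size in jobs:
        k = math.floor(math.log(size, lam))      # size class of this job
        assignment[name] = counters.get(k, 0) % m
        counters[k] = counters.get(k, 0) + 1
    return assignment

print(crr_dispatch([("a", 1.0), ("b", 1.5), ("c", 5.0), ("d", 6.0)], m=2))
```

Note that dispatching depends only on a job's size class and arrival order within that class, which is what makes the rule non-migratory and easy to compare against a converted offline schedule.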
Nonclairvoyant Speed Scaling for Weighted Flow Time
Cited by 16 (3 self)
Abstract. We study online job scheduling on a processor that can vary its speed dynamically to manage its power. We attempt to extend the recent success in analyzing total unweighted flow time plus energy to total weighted flow time plus energy. We first consider the nonclairvoyant setting, where the size of a job is only known when the job finishes. We show an online algorithm WLAPS that is 8α^2-competitive for weighted flow time plus energy under the traditional power model, which assumes the power P(s) required to run the processor at speed s to be s^α for some α > 1. More interestingly, for any arbitrary power function P(s), WLAPS remains competitive when given a more energy-efficient processor; precisely, WLAPS is 16(1 + 1/ε)^2-competitive when using a processor that, given the power P(s), can run at speed (1 + ε)s for some ε > 0. Without such speedup, no nonclairvoyant algorithm can be O(1)-competitive for an arbitrary power function [8]. For the clairvoyant setting (where the size of a job is known at release time), previous results on minimizing weighted flow time plus energy rely on scaling the speed continuously over time [5–7]. The analysis of WLAPS has inspired us to devise a clairvoyant algorithm LLB which can transform any continuous speed scaling algorithm into one that scales the speed at discrete times only. Under an arbitrary power function, LLB gives a 4(1 + 1/ε)-competitive algorithm using a processor with (1 + ε)-speedup.
Scalably Scheduling Power-Heterogeneous Processors
Cited by 15 (5 self)
Abstract. We show that a natural online algorithm for scheduling jobs on a heterogeneous multiprocessor, with arbitrary power functions, is scalable for the objective function of weighted flow plus energy.
Deadline Scheduling and Power Management for Speed Bounded Processors
Cited by 13 (1 self)
Energy consumption has become an important issue in the study of processor scheduling. Energy reduction can be achieved by allowing a processor to vary its speed dynamically (dynamic speed scaling) [2–4, 7, 10] or to enter a sleep state [1, 5, 8]. In the past, these two mechanisms have often been studied separately. It is indeed natural to consider an integrated model in which a
Energy Efficient Deadline Scheduling in Two Processor Systems
Cited by 12 (0 self)
Abstract. The past few years have witnessed different scheduling algorithms for a processor that can manage its energy usage by dynamically scaling its speed. In this paper we attempt to extend such work to the two-processor setting. Specifically, we focus on deadline scheduling and study online algorithms for two processors with the objective of maximizing the throughput while using the smallest possible amount of energy. The motivation comes from the fact that dual-core processors are becoming common nowadays. Our first result is a new analysis of the energy usage of the speed function OA [15,4,8] with respect to the optimal two-processor schedule. This immediately implies a trivial two-processor algorithm that is 16-competitive for throughput and O(1)-competitive for energy. A more interesting result is a new online strategy for selecting jobs for the two processors. Together with OA, it improves the competitive ratio for throughput from 16 to 3, while increasing that for energy by a factor of 2. Note that even if energy usage is not a concern, no algorithm can be better than 2-competitive with respect to throughput.
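The speed function OA (Optimal Available) referenced above has a standard single-processor definition that is easy to state in code: run at the highest "density" over all deadlines. This sketch follows that classical definition only; the paper's two-processor strategy built on top of it is not shown:

```python
def oa_speed(t, jobs):
    """Speed of the classical single-processor OA policy at time t.
    jobs: list of (remaining_work, deadline) pairs with deadline > t.
    OA runs at the maximum density: for each deadline d, the total
    remaining work due by d, divided by the time left until d."""
    speed = 0.0
    for _, d in jobs:
        work_due = sum(w for w, dd in jobs if dd <= d)  # work due by d
        speed = max(speed, work_due / (d - t))
    return speed

# Two jobs at time 0: 3 units due by time 2, 1 more unit due by time 4.
print(oa_speed(0.0, [(3.0, 2.0), (1.0, 4.0)]))
```

Here the binding constraint is the earlier deadline (3 units in 2 time units), so OA runs at speed 1.5 even though the overall workload (4 units in 4 time units) would need only speed 1.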
Sleep with Guilt and Work Faster to Minimize Flow plus Energy
Cited by 9 (6 self)
Abstract. In this paper we extend the study of flow-energy scheduling to a model that allows both sleep management and speed scaling. Our main result is a sleep management algorithm called IdleLonger, which works online for a processor with one or multiple levels of sleep states. The design of IdleLonger is interesting; among other things, it may force the processor to idle or even sleep even though new jobs have already arrived. IdleLonger works in both clairvoyant and nonclairvoyant settings. We show how to adapt two existing speed scaling algorithms, AJC [15] (clairvoyant) and LAPS [9] (nonclairvoyant), to the new model. The adapted algorithms, when coupled with IdleLonger, are shown to be O(1)-competitive clairvoyant and nonclairvoyant algorithms for minimizing flow plus energy on a processor that allows sleep management and speed scaling. The above results are based on the traditional model with no limit on processor speed. If the processor has a maximum speed, the problem becomes more difficult, as the processor, once overslept, cannot rely on unlimited extra speed to catch up on the delay. Nevertheless, we are able to enhance IdleLonger and AJC so that they remain O(1)-competitive for flow plus energy under the bounded speed model. Nonclairvoyant scheduling in the bounded speed model is left as an open problem.
How to Schedule When You Have to Buy Your Energy
In: Proc. of the 13th/14th Workshop on Approximation Algorithms for Combinatorial Optimization Problems / Randomization and Computation (APPROX/RANDOM), 2010
Cited by 8 (1 self)
Abstract. We consider a situation where jobs arrive over time at a data center consisting of identical speed-scalable processors. For each job, the scheduler knows how much income is lost as a function of how long the job is delayed. The scheduler also knows the fixed cost of a unit of energy. The online scheduler determines which jobs to run on which processors, and at what speed to run the processors. The scheduler's objective is to maximize profit, which is the income obtained from jobs minus the energy costs. We give a (1 + ε)-speed O(1)-competitive algorithm, and show that resource augmentation is necessary to achieve O(1)-competitiveness.
Nonclairvoyantly scheduling power-heterogeneous processors
In Green Computing Conference, 2010
Cited by 8 (5 self)
Abstract—We show that a natural nonclairvoyant online algorithm for scheduling jobs on a power-heterogeneous multiprocessor is bounded-speed bounded-competitive for the objective of flow plus energy. Keywords: speed scaling, power management