Results 1–10 of 30
Speed Scaling Functions for Flow Time Scheduling based on Active Job Count
Cited by 47 (12 self)
Abstract. We study online scheduling to minimize flow time plus energy usage in the dynamic speed scaling model. We devise new speed scaling functions that depend on the number of active jobs, replacing the existing speed scaling functions in the literature that depend on the remaining work of active jobs. The new speed functions are more stable and also more efficient. They can support better job selection strategies to improve the competitive ratios of existing algorithms [5,8], and, more importantly, to remove the requirement of extra speed. These functions further distinguish themselves from others as they can readily be used in the nonclairvoyant model (where the size of a job is only known when the job finishes). As a first step, we study the scheduling of batched jobs (i.e., jobs with the same release time) in the nonclairvoyant model and present the first competitive algorithm for minimizing flow time plus energy (as well as for weighted flow time plus energy); the performance is close to optimal.
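The abstract above hinges on speed functions that depend only on the active job count. As a rough illustrative sketch (the specific rule s = n^(1/α) is an assumption chosen for illustration, not necessarily the paper's exact function), setting the speed so that the power draw s^α equals the number of active jobs balances the rate at which flow time accumulates against the rate at which energy is spent:

```python
def job_count_speed(n_active: int, alpha: float = 3.0) -> float:
    """Speed as a function of the active job count only (hypothetical rule).

    With power rate s**alpha, choosing s = n**(1/alpha) makes the power
    draw equal the number of active jobs, so flow time and energy
    accumulate at the same instantaneous rate.
    """
    if n_active == 0:
        return 0.0  # no active jobs: the processor may run at speed 0
    return n_active ** (1.0 / alpha)
```

Note that such a speed changes only when a job arrives or finishes, never as work is processed — which is why a job-count rule, unlike a remaining-work rule, can be used in the nonclairvoyant model where job sizes are unknown.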
Scheduling for speed bounded processors
In Proc. ICALP, 2008
Cited by 39 (12 self)
Abstract. We consider online scheduling algorithms in the dynamic speed scaling model, where a processor can scale its speed between 0 and some maximum speed T. The processor uses energy at rate s^α when run at speed s, where α > 1 is a constant. Most modern processors use dynamic speed scaling to manage their energy usage. This leads to the problem of designing execution strategies that are both energy efficient and yet have almost optimum performance. We consider two problems in this model and give essentially optimum possible algorithms for them. In the first problem, jobs with arbitrary sizes and deadlines arrive online and the goal is to maximize the throughput, i.e. the total size of jobs completed successfully. We give an algorithm that is 4-competitive for throughput and O(1)-competitive for the energy used. This improves upon the 14-competitive (for throughput) algorithm of Chan et al. [10]. Our throughput guarantee is optimal, as any online algorithm must be at least 4-competitive even if the energy concern is ignored [7]. In the second problem, we consider optimizing the tradeoff between the total flow time incurred and the energy consumed by the jobs. We give a 4-competitive algorithm to minimize total flow time plus energy for unweighted unit-size jobs, and a (2 + o(1))α/ln α-competitive algorithm to minimize fractional weighted flow time plus energy. Prior to our work, these guarantees were known only when the processor speed was unbounded (T = ∞) [4].
Improved bounds for speed scaling in devices obeying the cube-root rule, 2012
Cited by 22 (6 self)
Speed scaling is a power management technology that involves dynamically changing the speed of a processor. This technology gives rise to dual-objective scheduling problems, where the operating system both wants to conserve energy and to optimize some Quality of Service (QoS) measure of the resulting schedule. In the most investigated speed scaling problem in the literature, the QoS constraint is deadline feasibility, and the objective is to minimize the energy used. The standard assumption is that the processor power is of the form s^α, where s is the processor speed and α > 1 is some constant; α ≈ 3 for CMOS-based processors. In this paper we introduce and analyze a natural class of speed scaling algorithms that we call qOA. The algorithm qOA sets the speed of the processor to be q times the speed at which the optimal offline algorithm would run the jobs in the current state. When α = 3, we show that qOA is 6.7-competitive, improving upon the previous best guarantee of 27 achieved by the algorithm Optimal Available (OA). We also give almost matching upper and lower bounds for qOA for general α. Finally, we give the first nontrivial lower bound, namely e^(α−1)/α, on the competitive ratio of a general deterministic online algorithm for this problem. ACM Classification: F.2.2
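As a rough illustration of the qOA rule described above, the sketch below assumes OA ("Optimal Available") runs at the highest density of remaining work over any future deadline window, and qOA simply multiplies that speed by q. The job representation and helper names are hypothetical, not the paper's notation:

```python
def oa_speed(now: float, jobs: list[tuple[float, float]]) -> float:
    """OA-style speed: max density of remaining work over future deadlines.

    jobs: list of (remaining_work, deadline) pairs.
    """
    best = 0.0
    for deadline in sorted(d for _, d in jobs):
        if deadline <= now:
            continue  # expired deadlines impose no future demand here
        # total remaining work that must finish by this deadline
        work = sum(w for w, d in jobs if d <= deadline)
        best = max(best, work / (deadline - now))
    return best


def qoa_speed(now: float, jobs: list[tuple[float, float]], q: float = 2.0) -> float:
    """qOA: run q times faster than the OA speed in the current state."""
    return q * oa_speed(now, jobs)
```

Running moderately faster than OA lets the algorithm build slack against future arrivals; the paper's contribution is pinning down the best choice of q and the resulting competitive ratio.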
The bell is ringing in speed-scaled multiprocessor scheduling
In Proceedings of the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 2009
Cited by 20 (0 self)
This paper investigates the problem of scheduling jobs on multiple speed-scaled processors without migration, i.e., we have a constant α > 1 such that running a processor at speed s results in energy consumption s^α per time unit. We consider the general case where each job has a monotonically increasing cost function that penalizes delay. This includes the previously considered cases of deadlines and flow time. For any type of delay cost function, we obtain the following results: Any β-approximation algorithm for a single processor yields a randomized βB_α-approximation algorithm for multiple processors without migration, where B_α is the α-th Bell number, that is, the number of partitions of a set of size α. Analogously, we show that any β-competitive online algorithm for a single processor yields a βB_α-competitive online algorithm for multiple processors without migration. Finally, we show that any β-approximation algorithm for multiple processors with migration yields a deterministic βB_α-approximation algorithm for multiple processors without migration. These facts improve several approximation ratios and lead to new results. For instance, we obtain the first constant-factor online and offline approximation algorithms for multiple processors without migration for arbitrary release times, deadlines, and job sizes.
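The Bell number B_α appearing in the factors above can be computed with the standard Bell-triangle recurrence; this small helper is purely illustrative:

```python
def bell_number(n: int) -> int:
    """n-th Bell number: the number of partitions of a set of size n,
    computed via the Bell triangle (each row starts with the previous
    row's last entry; each next entry adds the neighbor above)."""
    row = [1]
    for _ in range(n):
        nxt = [row[-1]]
        for v in row:
            nxt.append(nxt[-1] + v)
        row = nxt
    return row[0]
```

For the CMOS case α = 3 this gives B_3 = 5, so e.g. a β-approximation for a single processor would translate into a 5β-approximation for multiple processors without migration.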
Competitive Non-migratory Scheduling for Flow Time and Energy
In SPAA'08, 2008
Cited by 19 (7 self)
Energy usage has been an important concern in recent research on online scheduling. In this paper we extend the study of the tradeoff between flow time and energy from the single-processor setting [8, 6] to the multiprocessor setting. Our main result is an analysis of a simple non-migratory online algorithm called CRR (classified round robin) on m ≥ 2 processors, showing that its flow time plus energy is within O(1) times that of the optimal non-migratory offline algorithm, when the maximum allowable speed is slightly relaxed. This result still holds even if the comparison is made against the optimal migratory offline algorithm (the competitive ratio increases by a factor of 2.5). As a special case, our work also contributes to traditional online flow-time scheduling. Specifically, for minimizing flow time only, CRR can yield a competitive ratio of one, or even arbitrarily smaller than one, when using sufficiently faster processors. Prior to our work, a similar result was known only for online algorithms that need migration [21, 23], while the best non-migratory result achieves an O(1) competitive ratio [14]. The above result stems from an interesting observation: there always exists some optimal migratory schedule S that can be converted (in an offline sense) to a non-migratory schedule S′ with a moderate increase in flow time plus energy. More importantly, this non-migratory schedule always dispatches jobs in the same way as CRR.
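A minimal sketch of how a classified round-robin dispatcher might work, assuming jobs are grouped into geometric size classes and each class is distributed round-robin over the m processors. The class boundaries, parameter names, and interface here are illustrative assumptions, not the paper's exact construction:

```python
import math


class CRRDispatcher:
    """Classified round robin (sketch): jobs of similar size share a class,
    and each class cycles through the m processors independently."""

    def __init__(self, m: int, eps: float = 0.5):
        self.m = m
        self.eps = eps
        self.next_proc: dict[int, int] = {}  # size class -> next processor

    def size_class(self, size: float) -> int:
        # class k contains sizes in [(1+eps)**k, (1+eps)**(k+1))
        return math.floor(math.log(size, 1 + self.eps))

    def dispatch(self, size: float) -> int:
        """Assign an arriving job (non-migratory: the choice is final)."""
        k = self.size_class(size)
        p = self.next_proc.get(k, 0)
        self.next_proc[k] = (p + 1) % self.m
        return p
```

Because jobs of comparable size are spread evenly, no processor ends up with a disproportionate share of any size class, which is the intuition behind comparing CRR against the optimal non-migratory (and even migratory) schedules.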
A Tutorial on Amortized Local Competitiveness in Online Scheduling, 2011
Cited by 17 (14 self)
Potential functions are used to show that a particular online algorithm is locally competitive in an amortized sense. Algorithm analyses using potential functions are sometimes criticized as seeming to be black magic.
Deadline Scheduling and Power Management for Speed Bounded Processors
Cited by 13 (1 self)
Energy consumption has become an important issue in the study of processor scheduling. Energy reduction can be achieved by allowing a processor to vary its speed dynamically (dynamic speed scaling) [2–4, 7, 10] or to enter a sleep state [1, 5, 8]. In the past, these two mechanisms were often studied separately. It is indeed natural to consider an integrated model in which a processor can both scale its speed and enter a sleep state.
Energy Efficient Deadline Scheduling in Two Processor Systems
Cited by 12 (0 self)
Abstract. The past few years have witnessed different scheduling algorithms for a processor that can manage its energy usage by dynamically scaling its speed. In this paper we attempt to extend such work to the two-processor setting. Specifically, we focus on deadline scheduling and study online algorithms for two processors with the objective of maximizing the throughput while using the smallest possible energy. The motivation comes from the fact that dual-core processors are becoming common nowadays. Our first result is a new analysis of the energy usage of the speed function OA [15, 4, 8] with respect to the optimal two-processor schedule. This immediately implies a trivial two-processor algorithm that is 16-competitive for throughput and O(1)-competitive for energy. A more interesting result is a new online strategy for selecting jobs for the two processors. Together with OA, it improves the competitive ratio for throughput from 16 to 3, while increasing that for energy by a factor of 2. Note that even if the energy usage is not a concern, no algorithm can be better than 2-competitive with respect to throughput.
Sleep with Guilt and Work Faster to Minimize Flow plus Energy
Cited by 9 (6 self)
Abstract. In this paper we extend the study of flow-energy scheduling to a model that allows both sleep management and speed scaling. Our main result is a sleep management algorithm called IdleLonger, which works online for a processor with one or multiple levels of sleep states. The design of IdleLonger is interesting; notably, it may force the processor to idle or even sleep even though new jobs have already arrived. IdleLonger works in both clairvoyant and nonclairvoyant settings. We show how to adapt two existing speed scaling algorithms, AJC [15] (clairvoyant) and LAPS [9] (nonclairvoyant), to the new model. The adapted algorithms, when coupled with IdleLonger, are shown to be O(1)-competitive clairvoyant and nonclairvoyant algorithms for minimizing flow plus energy on a processor that allows sleep management and speed scaling. The above results are based on the traditional model with no limit on processor speed. If the processor has a maximum speed, the problem becomes more difficult, as the processor, once overslept, cannot rely on unlimited extra speed to catch up on the delay. Nevertheless, we are able to enhance IdleLonger and AJC so that they remain O(1)-competitive for flow plus energy under the bounded speed model. Nonclairvoyant scheduling in the bounded speed model is left as an open problem.
Energy Efficient Scheduling of Parallelizable Jobs
Cited by 5 (2 self)
In this paper, we consider scheduling parallelizable jobs in the nonclairvoyant speed scaling setting to minimize the objective of weighted flow time plus energy. Previously, strong lower bounds were shown for this model in the unweighted setting, even when the algorithm is given a constant amount of resource augmentation over the optimal solution. However, these lower bounds were given only for certain families of algorithms that do not recognize the parallelizability of alive jobs. In this work, we circumvent the previous lower bounds and give a scalable algorithm under the natural assumption that the algorithm can know the current parallelizability of a job. When a general power function is considered, this is also the first algorithm that has a constant competitive ratio for the problem using any amount of resource augmentation.