Speed scaling for weighted flow times
 in Proc. ACM-SIAM SODA, 2007
Abstract

Cited by 86 (19 self)
Intel’s SpeedStep and AMD’s PowerNOW technologies allow the Windows XP operating system to dynamically change the speed of the processor to prolong battery life. In this setting, the operating system must not only have a job selection policy to determine which job to run, but also a speed scaling policy to determine the speed at which the job will be run. We give an online speed scaling algorithm that is O(1)-competitive for the objective of weighted flow time plus energy. This algorithm also allows us to efficiently construct an O(1)-approximate schedule for minimizing weighted flow time subject to an energy constraint.
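To make the two policies in this abstract concrete: a common rule in the flow-time-plus-energy literature, with power function s^α, is to set the power equal to the total weight of unfinished jobs and to select jobs by highest density first. The sketch below follows those assumptions for illustration; the cited paper's exact policy may differ.

```python
def schedule_step(jobs, alpha=3.0):
    """One scheduling decision under assumed policies.

    jobs: list of (weight, remaining_size) pairs for unfinished jobs.
    Speed: chosen so that power s**alpha equals the total unfinished
    weight W, i.e. s = W ** (1/alpha) (an assumed, common rule).
    Job selection: Highest Density First, density = weight / remaining size.
    Returns (index of job to run, speed to run it at).
    """
    total_w = sum(w for w, _ in jobs)
    speed = total_w ** (1.0 / alpha)
    idx = max(range(len(jobs)), key=lambda i: jobs[i][0] / jobs[i][1])
    return idx, speed
```

With jobs [(4.0, 2.0), (1.0, 1.0)] and α = 3, the rule runs job 0 (density 2 vs. 1) at speed 5^(1/3).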
Improved bounds for speed scaling in devices obeying the cube-root rule
, 2012
Abstract

Cited by 22 (6 self)
Speed scaling is a power management technology that involves dynamically changing the speed of a processor. This technology gives rise to dual-objective scheduling problems, where the operating system wants both to conserve energy and to optimize some Quality of Service (QoS) measure of the resulting schedule. In the most investigated speed scaling problem in the literature, the QoS constraint is deadline feasibility, and the objective is to minimize the energy used. The standard assumption is that the processor power is of the form s^α, where s is the processor speed and α > 1 is some constant; α ≈ 3 for CMOS-based processors. In this paper we introduce and analyze a natural class of speed scaling algorithms that we call qOA. The algorithm qOA sets the speed of the processor to be q times the speed that the optimal offline algorithm would use in the current state. When α = 3, we show that qOA is 6.7-competitive, improving upon the previous best guarantee of 27 achieved by the algorithm Optimal Available (OA). We also give almost matching upper and lower bounds for qOA for general α. Finally, we give the first nontrivial lower bound, namely e^(α−1)/α, on the competitive ratio of a general deterministic online algorithm for this problem. ACM Classification: F.2.2
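The OA benchmark that qOA scales can be computed directly: at any moment, OA runs at the maximum density of remaining work over all windows that start now and end at a deadline, and qOA multiplies that speed by q. A minimal sketch under these assumptions (the paper's exact formulation may differ):

```python
def oa_speed(now, jobs):
    """Optimal Available (OA) speed at time `now`.

    jobs: list of (deadline, remaining_work) with all deadlines > now.
    OA's speed is the maximum, over deadlines d, of the total remaining
    work due by d divided by the window length d - now.
    """
    deadlines = sorted({d for d, _ in jobs})
    best = 0.0
    for d in deadlines:
        work = sum(w for dl, w in jobs if dl <= d)
        best = max(best, work / (d - now))
    return best

def qoa_speed(now, jobs, q):
    """qOA: run q times faster than OA would in the current state."""
    return q * oa_speed(now, jobs)
```

For jobs [(1.0, 2.0), (4.0, 2.0)] at time 0, the window ending at deadline 1 is the densest (2 units of work in 1 time unit), so OA runs at speed 2 and qOA at 2q.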
The bell is ringing in speed-scaled multiprocessor scheduling
 In: Proceedings of ACM Symposium on Parallelism in Algorithms and Architectures (SPAA)
, 2009
Abstract

Cited by 20 (0 self)
This paper investigates the problem of scheduling jobs on multiple speed-scaled processors without migration, i.e., we have a constant α > 1 such that running a processor at speed s results in energy consumption s^α per time unit. We consider the general case where each job has a monotonically increasing cost function that penalizes delay. This includes the previously considered cases of deadlines and flow time. For any type of delay cost functions, we obtain the following results: Any β-approximation algorithm for a single processor yields a randomized βB_α-approximation algorithm for multiple processors without migration, where B_α is the α-th Bell number, that is, the number of partitions of a set of size α. Analogously, we show that any β-competitive online algorithm for a single processor yields a βB_α-competitive online algorithm for multiple processors without migration. Finally, we show that any β-approximation algorithm for multiple processors with migration yields a deterministic βB_α-approximation algorithm for multiple processors without migration. These facts improve several approximation ratios and lead to new results. For instance, we obtain the first constant-factor online and offline approximation algorithms for multiple processors without migration for arbitrary release times, deadlines, and job sizes.
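The B_α factor in these bounds is easy to evaluate for small α: for instance B_3 = 5, so for α = 3 a β-approximation on one processor becomes a 5β-approximation without migration. A short Bell-triangle computation:

```python
def bell_number(n):
    """B_n: the number of partitions of an n-element set.

    Computed via the Bell triangle: each row starts with the last entry
    of the previous row, and each subsequent entry adds the entry above;
    B_n is the last entry of row n.
    """
    if n == 0:
        return 1
    row = [1]
    for _ in range(n - 1):
        nxt = [row[-1]]
        for v in row:
            nxt.append(nxt[-1] + v)
        row = nxt
    return row[-1]
```

The sequence begins 1, 1, 2, 5, 15, 52, ..., so the multiplicative loss grows quickly with α.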
Online Primal-Dual For Nonlinear Optimization with Applications to Speed Scaling
 In: Proceedings of the 10th Workshop on Approximation and Online Algorithms (WAOA)
, 2012
A Tutorial on Amortized Local Competitiveness in Online Scheduling
, 2011
Abstract

Cited by 17 (14 self)
Potential functions are used to show that a particular online algorithm is locally competitive in an amortized sense. Algorithm analyses using potential functions are sometimes criticized as seeming to be black magic.
Algorithms for dynamic speed scaling
 In STACS 2011, volume 9 of LIPIcs. Schloss Dagstuhl - Leibniz-Zentrum für Informatik
, 2011
Abstract

Cited by 13 (0 self)
Many modern microprocessors allow the speed/frequency to be set dynamically. The general goal is to execute a sequence of jobs on a variable-speed processor so as to minimize energy consumption. This paper surveys algorithmic results on dynamic speed scaling. We address settings where (1) jobs have strict deadlines and (2) job flow times are to be minimized.
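For the deadline setting (1), the classic offline algorithm YDS repeatedly finds the densest time window and runs the jobs contained in it at constant speed. The core of that computation, sketched below, is the peak intensity over all candidate windows; full YDS would remove the critical interval and iterate, which this illustration omits.

```python
def max_intensity(jobs):
    """Peak intensity of a YDS-style schedule.

    jobs: list of (release, deadline, work).
    Over all windows [t1, t2] bounded by release times and deadlines,
    returns the maximum of (total work of jobs whose whole [r, d] window
    lies inside [t1, t2]) / (t2 - t1). This is the speed YDS uses on the
    first critical interval.
    """
    times = sorted({t for r, d, _ in jobs for t in (r, d)})
    best = 0.0
    for i, t1 in enumerate(times):
        for t2 in times[i + 1:]:
            work = sum(w for r, d, w in jobs if t1 <= r and d <= t2)
            best = max(best, work / (t2 - t1))
    return best
```

With jobs (0, 2, 4) and (0, 4, 2), the window [0, 2] contains 4 units of work, giving peak speed 2.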
Energy Efficient Scheduling of Parallelizable Jobs
Abstract

Cited by 5 (2 self)
In this paper, we consider scheduling parallelizable jobs in the non-clairvoyant speed scaling setting to minimize the objective of weighted flow time plus energy. Previously, strong lower bounds were shown for this model in the unweighted setting even when the algorithm is given a constant amount of resource augmentation over the optimal solution. However, these lower bounds were given only for certain families of algorithms that do not recognize the parallelizability of alive jobs. In this work, we circumvent the previously shown lower bounds and give a scalable algorithm under the natural assumption that the algorithm knows the current parallelizability of a job. When a general power function is considered, this is also the first algorithm that has a constant competitive ratio for the problem using any amount of resource augmentation.
Algorithms for Energy Saving
, 2010
Abstract

Cited by 5 (0 self)
Energy has become a scarce and expensive resource. There is a growing awareness in society that energy saving is a critical issue. This paper surveys algorithmic solutions to reduce energy consumption in computing environments. We focus on the system and device level. More specifically, we study power-down mechanisms as well as dynamic speed scaling techniques in modern microprocessors.
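A representative power-down mechanism of the kind this survey covers is the ski-rental rule: stay active until the energy spent idling equals the wake-up cost, then sleep. On any idle period this uses at most twice the optimal energy. A minimal sketch with assumed parameters:

```python
def power_down_energy(idle_length, wake_cost, active_power=1.0):
    """Energy used by the break-even power-down rule on one idle period.

    The device idles at active_power until the energy spent equals
    wake_cost (the break-even point), then sleeps at zero power and pays
    wake_cost on the next arrival. The optimum pays
    min(idle_length * active_power, wake_cost), so this rule is
    2-competitive.
    """
    threshold = wake_cost / active_power   # break-even idle time
    if idle_length <= threshold:
        return idle_length * active_power  # too short to be worth sleeping
    return threshold * active_power + wake_cost
```

For a 5-unit idle period with wake cost 2, the rule pays 2 (idling) + 2 (waking) = 4, while the optimum sleeps immediately and pays 2.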
Speed Scaling with a Solar Cell
Abstract

Cited by 2 (1 self)
We consider the setting of a device that obtains its energy from a battery and some regenerative source such as a solar cell. We consider the speed scaling problem of scheduling a collection of tasks with release times, deadlines, and sizes so as to minimize the energy recharge rate of the regenerative source. This is the first theoretical investigation of speed scaling for devices with a regenerative energy source. We show that the problem can be expressed as a polynomial-sized convex program. We show that, using the KKT conditions, one can obtain an efficient algorithm to verify the optimality of a schedule. We show that the energy-optimal YDS schedule is 2-approximate with respect to the recharge rate. We show that the online algorithm BKP is O(1)-competitive with respect to recharge rate.