Results 1-10 of 26
Scheduling for speed bounded processors
 In Proc. ICALP
, 2008
Cited by 39 (12 self)
Abstract. We consider online scheduling algorithms in the dynamic speed scaling model, where a processor can scale its speed between 0 and some maximum speed T. The processor uses energy at rate s^α when run at speed s, where α > 1 is a constant. Most modern processors use dynamic speed scaling to manage their energy usage. This leads to the problem of designing execution strategies that are both energy efficient and yet have almost optimum performance. We consider two problems in this model and give essentially optimum possible algorithms for them. In the first problem, jobs with arbitrary sizes and deadlines arrive online and the goal is to maximize the throughput, i.e. the total size of jobs completed successfully. We give an algorithm that is 4-competitive for throughput and O(1)-competitive for the energy used. This improves upon the 14-throughput-competitive algorithm of Chan et al. [10]. Our throughput guarantee is optimal, as any online algorithm must be at least 4-competitive even if the energy concern is ignored [7]. In the second problem, we consider optimizing the tradeoff between the total flow time incurred and the energy consumed by the jobs. We give a 4-competitive algorithm to minimize total flow time plus energy for unweighted unit-size jobs, and a (2 + o(1))α/ln α-competitive algorithm to minimize fractional weighted flow time plus energy. Prior to our work, these guarantees were known only when the processor speed was unbounded (T = ∞) [4].
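The power model the abstract describes (energy rate s^α, speed capped at T) is easy to make concrete. A minimal sketch, with all function names and the job representation being illustrative assumptions, not from the paper: by convexity of s^α, running a single job at a constant speed over its whole window is the cheapest way to finish it, and the window is infeasible if that speed exceeds T.

```python
def energy(speed, duration, alpha=3.0):
    """Energy used running at a constant speed for `duration` time units,
    under the power model P(s) = s**alpha."""
    return (speed ** alpha) * duration

def min_energy_for_job(size, window, T, alpha=3.0):
    """Least energy to finish `size` units of work within `window` time units.

    By convexity of s**alpha, a constant speed s = size/window is optimal.
    If that speed exceeds the cap T, the job cannot be completed in time.
    """
    s = size / window
    if s > T:
        return None  # infeasible: even running at T cannot finish in the window
    return energy(s, window, alpha)
```

For example, a job of size 2 in a unit-length window needs speed 2, hence energy 2^3 = 8 at α = 3; the same job with a cap T below 2 is infeasible, which is exactly the overload situation the throughput results address.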
Energy efficient online deadline scheduling
 In Proc. SODA
, 2007
Cited by 30 (11 self)
This paper extends the study of online algorithms for energy-efficient deadline scheduling to the overloaded setting. Specifically, we consider a processor that can vary its speed between 0 and a maximum speed T to minimize its energy usage (of which the rate is roughly a cubic function of the speed). As the speed is upper bounded, the system may be overloaded with jobs, and no scheduling algorithm can meet the deadlines of all jobs. An optimal schedule is expected to maximize the throughput, and furthermore, its energy usage should be the smallest among all schedules that achieve the maximum throughput. In designing a scheduling algorithm, one has to face the dilemma of selecting more jobs versus being conservative in energy usage. Even if we ignore energy usage, the best possible online algorithm is 4-competitive on throughput [12]. On the other hand, existing work on energy-efficient scheduling focuses on minimizing the energy to complete all jobs on a processor with unbounded speed, giving several O(1)-competitive algorithms with respect to the energy usage [2,20]. This paper presents the first online algorithm for the more realistic setting where the processor speed is bounded and the system may be overloaded; the algorithm is O(1)-competitive on both throughput and energy usage. If the maximum speed of the online scheduler is relaxed slightly to (1+ǫ)T for some ǫ > 0, we can improve the competitive ratio on throughput to arbitrarily close to one, while maintaining O(1)-competitiveness on energy usage.
Improved bounds for speed scaling in devices obeying the cube-root rule
, 2012
Cited by 22 (6 self)
Speed scaling is a power management technology that involves dynamically changing the speed of a processor. This technology gives rise to dual-objective scheduling problems, where the operating system both wants to conserve energy and optimize some Quality of Service (QoS) measure of the resulting schedule. In the most investigated speed scaling problem in the literature, the QoS constraint is deadline feasibility, and the objective is to minimize the energy used. The standard assumption is that the processor power is of the form s^α, where s is the processor speed and α > 1 is some constant; α ≈ 3 for CMOS-based processors. In this paper we introduce and analyze a natural class of speed scaling algorithms that we call qOA. The algorithm qOA sets the speed of the processor to be q times the speed that the optimal offline algorithm would run the jobs at in the current state. When α = 3, we show that qOA is 6.7-competitive, improving upon the previous best guarantee of 27 achieved by the algorithm Optimal Available (OA). We also give almost matching upper and lower bounds for qOA for general α. Finally, we give the first nontrivial lower bound, namely e^{α−1}/α, on the competitive ratio of a general deterministic online algorithm for this problem. ACM Classification: F.2.2
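The qOA rule described above can be sketched in a few lines. The standard characterization of OA's speed at the current time is the maximum "density" of remaining work over any future deadline, and qOA simply scales that by q. This is a minimal sketch under that characterization; the job representation and function names are assumptions of this sketch, not the paper's code.

```python
def oa_speed(now, jobs):
    """Speed of Optimal Available (OA) at time `now`.

    jobs: list of (remaining_work, deadline) pairs with deadline > now.
    OA runs at the maximum density of remaining work over any horizon:
        max over deadlines d of (work due by d) / (d - now).
    """
    return max(
        sum(w for w, d in jobs if d <= horizon) / (horizon - now)
        for _, horizon in jobs
    )

def qoa_speed(now, jobs, q):
    """qOA: run at q times the speed OA would use in the current state."""
    return q * oa_speed(now, jobs)
```

With jobs (2 units due at time 2) and (3 units due at time 4) at time 0, the densities over the two horizons are 2/2 = 1.0 and 5/4 = 1.25, so OA runs at 1.25 and qOA at 1.25q.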
Discrete and Continuous Min-Energy Schedules for Variable Voltage Processors
Cited by 17 (0 self)
Current dynamic voltage scaling techniques allow the speed of processors to be set dynamically in order to save energy consumption, which is a major concern in microprocessor design. A theoretical model for min-energy job scheduling was first proposed a decade ago, and it was shown that for any convex energy function, the min-energy schedule for a set of n jobs has a unique characterization and is computable in O(n^3) time. This algorithm has remained the most efficient known despite many investigations of this model. In this paper we give a new algorithm with running time O(n^2 log n) for finding the min-energy schedule. In contrast to the previous algorithm, which outputs optimal speed levels from high to low iteratively, the new algorithm is based on finding successive approximations to the optimal schedule. At the core of the approximation is an efficient partitioning of the job set into high- and low-speed subsets by any speed threshold, without computing the exact speed function.
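The "speed levels from high to low" procedure the abstract contrasts against is the classic critical-interval algorithm (usually attributed to Yao, Demers, and Shenker): repeatedly find the interval of maximum work density, run its jobs at that density, and recurse. A minimal sketch, assuming jobs are given as (release, deadline, work) triples; a full implementation also compresses time after each round, which this sketch omits, so it is exact only when the remaining jobs' windows avoid the removed interval.

```python
def yds_speeds(jobs):
    """Critical-interval sketch: jobs is a list of (release, deadline, work).
    Returns a list of ((start, end), speed) pairs, highest density first."""
    jobs = list(jobs)
    result = []
    while jobs:
        # Candidate interval endpoints are release times and deadlines.
        times = sorted({t for r, d, _ in jobs for t in (r, d)})
        best = None
        for i, a in enumerate(times):
            for b in times[i + 1:]:
                # Jobs whose whole window [r, d] fits inside [a, b].
                inside = [j for j in jobs if a <= j[0] and j[1] <= b]
                if not inside:
                    continue
                density = sum(w for _, _, w in inside) / (b - a)
                if best is None or density > best[0]:
                    best = (density, a, b, inside)
        density, a, b, inside = best
        result.append(((a, b), density))
        jobs = [j for j in jobs if j not in inside]
        # NOTE: the real algorithm now removes [a, b] from the timeline
        # ("time compression") before recursing; omitted in this sketch.
    return result
```

On jobs (0, 2, 4) and (2, 4, 1), the first critical interval is [0, 2] at speed 2, then [2, 4] at speed 0.5, matching the high-to-low order of speed levels the abstract mentions.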
Average rate speed scaling
 In Latin American Theoretical Informatics Symposium, 2008
, 2007
Cited by 15 (6 self)
Speed scaling is a power management technique that involves dynamically changing the speed of a processor. This gives rise to dual-objective scheduling problems, where the operating system both wants to conserve energy and optimize some Quality of Service (QoS) measure of the resulting schedule. Yao, Demers, and Shenker [4] considered the problem where the QoS constraint is deadline feasibility and the objective is to minimize the energy used. They proposed an online speed scaling algorithm, Average Rate (AVR), that runs each job at a constant speed between its release and its deadline. They showed that the competitive ratio of AVR is at most (2α)^α/2 if a processor running at speed s uses power s^α. We show the competitive ratio of AVR is at least ((2 − δ)α)^α/2, where δ is a function of α that approaches zero as α approaches infinity. This shows that the competitive analysis of AVR by Yao, Demers, and Shenker is essentially tight, at least for large α. We also give an alternative proof that the competitive ratio of AVR is at most (2α)^α/2 using a potential function argument. We believe that this analysis is significantly simpler and more elementary than the original analysis of AVR in [4].
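The AVR rule itself is one line: each job j with release r_j, deadline d_j, and work w_j contributes its density w_j/(d_j − r_j) to the processor speed throughout [r_j, d_j), and the processor runs at the sum of the active densities. A minimal sketch, with the (r, d, w) triple representation being an assumption of this sketch:

```python
def avr_speed(t, jobs):
    """Average Rate (AVR) processor speed at time t.

    jobs: list of (release, deadline, work) triples.  Each active job
    contributes its density work/(deadline - release) to the speed.
    """
    return sum(w / (d - r) for r, d, w in jobs if r <= t < d)
```

With jobs (0, 2, 2) and (1, 3, 4), the densities are 1 and 2, so the speed is 1 on [0, 1), 3 on [1, 2), and 2 on [2, 3); the competitive-ratio results above bound the energy of exactly this schedule against the optimum.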
Polynomial Time Algorithms for Minimum Energy Scheduling
, 908
Cited by 15 (2 self)
The aim of power management policies is to reduce the amount of energy consumed by computer systems while maintaining a satisfactory level of performance. One common method for saving energy is to simply suspend the system during idle times. No energy is consumed in the suspend mode. However, the process of waking up the system itself requires a certain fixed amount of energy, and thus suspending the system is beneficial only if the idle time is long enough to compensate for this additional energy expenditure. In the specific problem studied in the paper, we have a set of jobs with release times and deadlines that need to be executed on a single processor. Preemptions are allowed. The processor requires energy L to be woken up and, when it is on, it uses one unit of energy per unit of time. It has been an open problem whether a schedule minimizing the overall energy consumption can be computed in polynomial time. We resolve this problem in the affirmative by providing an O(n^5)-time algorithm. In addition, we provide an O(n^4)-time algorithm for computing the minimum energy schedule when all jobs have unit length.
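The break-even tradeoff in this model is worth making explicit: staying on through an idle gap costs one energy unit per time unit, while suspending costs nothing during the gap but L to wake up, so for a known gap the cheaper option is simply the minimum of the two. A one-line sketch of that decision (the function name is an assumption, not from the paper):

```python
def idle_energy(gap, L):
    """Least energy spent over a known idle period of length `gap`:
    stay on (cost gap * 1) vs. suspend and later wake up (cost L)."""
    return min(gap, L)  # suspend pays off exactly when gap > L
```

The hard part of the paper is not this per-gap decision but choosing a preemptive schedule that shapes the idle gaps well in the first place, which is what the O(n^5)-time algorithm resolves.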
Race to Idle: New Algorithms for Speed Scaling with a Sleep State
Cited by 15 (1 self)
We study an energy conservation problem where a variable-speed processor is equipped with a sleep state. Executing jobs at high speeds and then setting the processor asleep is an approach that can lead to further energy savings compared to standard dynamic speed scaling. We consider classical deadline-based scheduling, i.e. each job is specified by a release time, a deadline and a processing volume. For general convex power functions, Irani et al. [12] devised an offline 2-approximation algorithm. Roughly speaking, the algorithm schedules jobs at a critical speed s_crit that yields the smallest energy consumption while jobs are processed. For power functions P(s) = s^α + γ, where s is the processor speed, Han et al. [11] gave an (α^α + 2)-competitive online algorithm. We investigate the offline setting of speed scaling with a sleep state. First we prove NP-hardness of the optimization problem. Additionally, we develop lower bounds for general convex power functions: no algorithm that constructs s_crit-schedules, which execute jobs at speeds of at least s_crit, can achieve an approximation factor smaller than 2. Furthermore, no algorithm that minimizes the energy expended for processing jobs can attain an approximation ratio smaller than 2. We then present an algorithmic framework for designing good approximation algorithms. For general convex power functions, we derive an approximation factor of 4/3. For power functions P(s) = βs^α + γ, we obtain an approximation of 137/117 < 1.171. We finally show that our framework yields the best approximation guarantees for the class of s_crit-schedules. For general convex power functions, we give another 2-approximation algorithm. For functions P(s) = βs^α + γ, we present tight upper and lower bounds on the best possible approximation factor. The ratio is exactly e^{W_{−1}(−e^{−1−1/e})}/(e^{W_{−1}(−e^{−1−1/e})} + 1) < 1.211, where W_{−1} is the lower branch of the Lambert W function.
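For the power functions P(s) = βs^α + γ used above, the critical speed s_crit minimizing the energy per unit of work, P(s)/s, has a closed form. Setting the derivative of P(s)/s = βs^{α−1} + γ/s to zero gives β(α−1)s^{α−2} − γ/s^2 = 0, i.e. s_crit = (γ/(β(α−1)))^{1/α}. A minimal sketch of that calculation (a derivation from the stated power function, not code from the paper):

```python
def critical_speed(alpha, beta, gamma):
    """Speed minimizing energy per unit of work P(s)/s for
    P(s) = beta * s**alpha + gamma (alpha > 1, beta, gamma > 0).

    From d/ds (beta*s**(alpha-1) + gamma/s) = 0:
        s_crit = (gamma / (beta * (alpha - 1))) ** (1/alpha)
    """
    return (gamma / (beta * (alpha - 1))) ** (1.0 / alpha)
```

For α = 3, β = 1, γ = 2 this gives s_crit = 1: running slower wastes static power γ, running faster wastes dynamic power βs^α, which is why the algorithms above try to process work at or above s_crit.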
Algorithms for dynamic speed scaling
 In STACS 2011, volume 9 of LIPIcs. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik
, 2011
Cited by 13 (0 self)
Many modern microprocessors allow the speed/frequency to be set dynamically. The general goal is to execute a sequence of jobs on a variable-speed processor so as to minimize energy consumption. This paper surveys algorithmic results on dynamic speed scaling. We address settings where (1) jobs have strict deadlines and (2) job flow times are to be minimized.
Optimizing Throughput and Energy in Online Deadline Scheduling
Cited by 11 (4 self)
Abstract: This paper extends the study of online algorithms for energy-efficient deadline scheduling to the overloaded setting. Specifically, we consider a processor that can vary its speed between 0 and a maximum speed T to minimize its energy usage (the rate is believed to be a cubic function of the speed). As the speed is upper bounded, the processor may be overloaded with jobs, and no scheduling algorithm can guarantee to meet the deadlines of all jobs. An optimal schedule is expected to maximize the throughput, and furthermore, its energy usage should be the smallest among all schedules that achieve the maximum throughput. In designing a scheduling algorithm, one has to face the dilemma of selecting more jobs versus being conservative in energy usage. If we ignore energy usage, the best possible online algorithm is 4-competitive on throughput [Koren and Shasha 1995]. On the other hand, existing work on energy-efficient scheduling focuses on a setting where the processor speed is unbounded and the concern is on minimizing the energy to complete all jobs; O(1)-competitive online algorithms with respect to energy usage have been known [Yao et al. 1995; Bansal et al. 2007a; Li et al. 2006]. This paper presents the first online algorithm for the more realistic setting where the processor speed is bounded and the system may be overloaded; the algorithm is O(1)-competitive on both throughput and energy usage. If the maximum speed of the online scheduler is relaxed slightly to (1+ǫ)T for some ǫ > 0, the competitive ratio on throughput can be improved to arbitrarily close to one, while maintaining O(1)-competitiveness on energy usage.
Non-preemptive speed scaling
 In Proc. Scandinavian Symposium and Workshops on Algorithm Theory (SWAT)
, 2012
Cited by 9 (0 self)
We consider the following variant of the speed scaling problem introduced by Yao, Demers, and Shenker. We are given a set of jobs and we have a variable-speed processor to process them. The higher the processor speed, the higher the energy consumption. Each job is associated with its own release time, deadline, and processing volume. The objective is to find a feasible schedule that minimizes the energy consumption. Moreover, no preemption of jobs is allowed. Unlike the preemptive version, which is known to be in P, the non-preemptive version of speed scaling is strongly NP-hard. In this work, we present a constant-factor approximation algorithm for it. The main technical idea is to transform the problem into the unrelated machine scheduling problem with L_p-norm objective.