Results 1–10 of 168
Algorithms for power savings
 In SODA ’03: Proceedings of the fourteenth annual ACM-SIAM symposium on Discrete algorithms
, 2003
Cited by 127 (6 self)
This paper examines two different mechanisms for saving power in battery-operated embedded systems. The first is that the system can be placed in a sleep state if it is idle. However, a fixed amount of energy is required to bring the system back into an active state in which it can resume work. The second way in which power savings can be achieved is by varying the speed at which jobs are run. We utilize a power consumption curve P(s) which indicates the power consumption level given a particular speed. We assume that P(s) is convex, nondecreasing and nonnegative for s ≥ 0. The problem is to schedule arriving jobs in a way that minimizes total energy use and so that each job is completed after its release time and before its deadline. We assume that all jobs can be preempted and resumed at no cost. Although each problem has been considered separately, this is the first theoretical analysis of systems that can use both mechanisms. We give an offline algorithm that is within a factor of two of the optimal algorithm. We also give an online algorithm with a constant competitive ratio.
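The abstract does not fix a concrete power curve, but a common assumption in this literature is P(s) = s^α with α = 3 for CMOS processors. Under any such convex curve, finishing a fixed amount of work in a fixed window at one constant speed uses the least energy, which is why speed changes only pay off around releases, deadlines, and idle periods. A minimal sketch (the α = 3 curve and the numbers are illustrative assumptions, not from the paper):

```python
# Illustrative sketch (not from the paper): energy model with a convex
# power curve P(s) = s**ALPHA, a common assumption in speed scaling work.
ALPHA = 3

def power(s):
    """Power drawn at speed s; convex, nondecreasing, nonnegative for s >= 0."""
    return s ** ALPHA

def energy(work, duration):
    """Energy to finish `work` units in `duration` time at one constant speed."""
    s = work / duration
    return power(s) * duration

def energy_two_phase(work, duration, split, s1):
    """Energy when the first `split` fraction of the window runs at speed s1
    and the rest runs at whatever speed finishes the remaining work."""
    t1 = split * duration
    t2 = duration - t1
    remaining = work - s1 * t1
    assert remaining >= 0, "first phase must not overshoot the work"
    s2 = remaining / t2
    return power(s1) * t1 + power(s2) * t2

# Convexity of P implies one constant speed is energy-optimal for fixed
# work in a fixed window: any two-phase schedule costs strictly more.
const = energy(work=4.0, duration=2.0)                   # speed 2 throughout
varied = energy_two_phase(4.0, 2.0, split=0.5, s1=1.0)   # speeds 1 then 3
assert varied > const
```

The two-phase schedule does the same work in the same window but pays more energy, exactly as Jensen's inequality predicts for a convex P.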
Energy-efficient algorithms for flow time minimization
 In Proc. of STACS 2006
Cited by 95 (4 self)
Topic classification: Algorithms and data structures. We study scheduling problems in battery-operated computing devices, aiming at schedules with low total energy consumption. While most of the previous work has focused on finding feasible schedules in deadline-based settings, in this paper we are interested in schedules that guarantee a good Quality-of-Service. More specifically, our goal is to schedule a sequence of jobs on a variable speed processor so as to minimize the total cost consisting of the power consumption and the total flow time of all the jobs. We first show that when the amount of work, for any job, may take an arbitrary value, then no online algorithm can achieve a constant competitive ratio. Therefore, most of the paper is concerned with unit-size jobs. We devise a deterministic constant competitive online algorithm and show that the offline problem can be solved in polynomial time.
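The flow-time-plus-energy trade-off in this objective shows up already for a single unit-size job run at one constant speed s. Assuming the usual P(s) = s^α curve (my assumption, not stated in the abstract), flow time is 1/s while energy is s^(α-1), so running faster saves waiting time but costs energy superlinearly:

```python
# Illustrative sketch (my own example, not the paper's algorithm): the
# flow-time-plus-energy objective for one unit-size job at constant speed s,
# assuming P(s) = s**ALPHA.
ALPHA = 3

def total_cost(s):
    flow_time = 1.0 / s
    energy = s ** ALPHA * (1.0 / s)  # power * processing time = s**(ALPHA-1)
    return flow_time + energy

# Calculus gives the minimizer s* = (1 / (ALPHA - 1)) ** (1 / ALPHA):
# below it, waiting dominates; above it, energy dominates.
s_star = (1.0 / (ALPHA - 1)) ** (1.0 / ALPHA)
assert total_cost(s_star) < total_cost(0.9 * s_star)
assert total_cost(s_star) < total_cost(1.1 * s_star)
```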
Algorithmic problems in power management
 SIGACT News
, 2005
Cited by 73 (4 self)
We survey recent research that has appeared in the theoretical computer science literature on algorithmic problems in power management.
Power-aware speed scaling in processor sharing systems
 In Proc. of INFOCOM
, 2009
Cited by 70 (14 self)
Energy use of computer communication systems has quickly become a vital design consideration. One effective method for reducing energy consumption is dynamic speed scaling, which adapts the processing speed to the current load. This paper studies how to optimally scale speed to balance mean response time and mean energy consumption under processor sharing scheduling. Both bounds and asymptotics for the optimal speed scaling scheme are provided. These results show that a simple scheme that halts when the system is idle and uses a static rate while the system is busy provides nearly the same performance as the optimal dynamic speed scaling. However, the results also highlight that dynamic speed scaling provides at least one key benefit: significantly improved robustness to bursty traffic and misestimation of workload parameters.
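As a purely illustrative companion to the "halt when idle, static rate when busy" scheme, here is a toy M/M/1 processor-sharing model (my own setup, with an assumed P(s) = s^3 curve, unit mean job size, and arrival rate 0.5, none of which come from the paper) that grid-searches the static busy-period speed minimizing mean response time plus energy per job:

```python
# Toy model (assumptions mine, not the paper's): gated static speed scaling
# for an M/M/1 processor-sharing queue, P(s) = s**ALPHA.
ALPHA = 3
LAM = 0.5  # arrival rate; stability requires speed s > LAM

def cost(s):
    mean_response = 1.0 / (s - LAM)  # M/M/1-PS mean response time at speed s
    # The server is busy a fraction LAM / s of the time and halts otherwise,
    # so time-average power is (LAM / s) * s**ALPHA; dividing by the
    # throughput LAM gives energy per job:
    energy_per_job = s ** (ALPHA - 1)
    return mean_response + energy_per_job

# Grid-search the best static speed over the stable range.
speeds = [LAM + 0.01 * k for k in range(1, 500)]
s_best = min(speeds, key=cost)
```

Running slightly above the arrival rate minimizes neither term alone; the optimal static speed trades queueing delay against the convex energy cost.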
Getting the Best Response for Your Erg
Cited by 64 (10 self)
We consider the speed scaling problem of minimizing the average response time of a collection of dynamically released jobs subject to a constraint A on energy used. We propose an algorithmic approach in which an energy optimal schedule is computed for a huge A, and then the energy optimal schedule is maintained as A decreases. We show that this approach yields an efficient algorithm for equi-work jobs. We note that the energy optimal schedule has the surprising feature that the job speeds are not monotone functions of the available energy. We then explain why this algorithmic approach is problematic for arbitrary work jobs. Finally, we explain how to use the algorithm for equi-work jobs to obtain an algorithm for arbitrary work jobs that is O(1)-approximate with respect to average response time, given an additional factor of (1 + ε) energy.
Speed Scaling of Tasks with Precedence Constraints
, 2005
Cited by 47 (1 self)
We consider the problem of speed scaling to conserve energy in a distributed setting where there are precedence constraints between tasks, and where the performance measure is the makespan. That is, we consider an energy-bounded version of the classic problem P | prec | Cmax. We show that, without loss of generality, one need only consider constant power schedules. We then show how to reduce this problem to the problem Q | prec | Cmax to obtain a polylog(m)-approximation algorithm.
Speed Scaling Functions for Flow Time Scheduling based on Active Job Count
Cited by 46 (12 self)
We study online scheduling to minimize flow time plus energy usage in the dynamic speed scaling model. We devise new speed scaling functions that depend on the number of active jobs, replacing the existing speed scaling functions in the literature that depend on the remaining work of active jobs. The new speed functions are more stable and also more efficient. They can support better job selection strategies to improve the competitive ratios of existing algorithms [5,8], and, more importantly, to remove the requirement of extra speed. These functions further distinguish themselves from others as they can readily be used in the non-clairvoyant model (where the size of a job is only known when the job finishes). As a first step, we study the scheduling of batched jobs (i.e., jobs with the same release time) in the non-clairvoyant model and present the first competitive algorithm for minimizing flow time plus energy (as well as for weighted flow time plus energy); the performance is close to optimal.
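The abstract does not spell out the speed functions, but a job-count rule that appears widely in this line of work is s(n) = n^(1/α) when n jobs are active, under P(s) = s^α. A small sketch (the rule, the batched unit-size instance, and α = 3 are my assumptions) shows a neat property of this rule: the schedule spends exactly as much on energy as it accrues in flow time.

```python
# Sketch (assumptions mine): job-count-based speed scaling s(n) = n**(1/ALPHA)
# for n unit-size jobs all released at time 0, run one at a time, P(s) = s**ALPHA.
ALPHA = 3

def run_batch(n):
    flow = energy = 0.0
    for active in range(n, 0, -1):        # `active` jobs still unfinished
        speed = active ** (1.0 / ALPHA)
        dt = 1.0 / speed                  # time to finish one unit job
        flow += active * dt               # every active job waits during dt
        energy += speed ** ALPHA * dt     # power drawn during dt
        # Both contributions equal active**(1 - 1/ALPHA), so they balance.
    return flow, energy

flow, energy = run_batch(10)
assert abs(flow - energy) < 1e-9
```

This flow/energy balance is one intuition for why job-count-based speeds yield clean competitive analyses, and the rule needs no knowledge of remaining job sizes, matching the non-clairvoyant setting above.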
Power-aware scheduling for makespan and flow
 In Proc. 18th Annual ACM Symp. Parallelism in Algorithms and Architectures
, 2006
Cited by 44 (1 self)
We consider offline scheduling algorithms that incorporate speed scaling to address the bicriteria problem of minimizing energy consumption and a scheduling metric. For makespan, we give a linear-time algorithm to compute all nondominated solutions for the general uniprocessor problem and a fast arbitrarily-good approximation for multiprocessor problems when every job requires the same amount of work. We also show that the multiprocessor problem becomes NP-hard when jobs can require different amounts of work. For total flow, we show that the optimal flow corresponding to a particular energy budget cannot be exactly computed on a machine supporting exact real arithmetic, including the extraction of roots. This hardness result holds even when scheduling equal-work jobs on a uniprocessor. We do, however, extend previous work by Pruhs et al. to give an arbitrarily-good approximation for scheduling equal-work jobs on a multiprocessor.
Temperature-aware scheduling and assignment for hard real-time applications on MPSoCs
, 2010
Cited by 41 (1 self)
Increasing integrated circuit (IC) power densities and temperatures may hamper multiprocessor system-on-chip (MPSoC) use in hard real-time systems. This article formalizes the temperature-aware real-time MPSoC assignment and scheduling problem and presents an optimal phased steady-state mixed integer linear programming (MILP) based solution that considers the impact of scheduling and assignment decisions on MPSoC thermal profiles to directly minimize the chip peak temperature. We also introduce a flexible heuristic framework for task assignment and scheduling that permits system designers to trade off accuracy for running time when solving large problem instances. Finally, for task sets with sufficient slack, we show that inserting idle times between task executions can further reduce the peak temperature of the MPSoC quite significantly.
Speed scaling on parallel processors
 In Proc. 19th Annual Symp. on Parallelism in Algorithms and Architectures (SPAA ’07)
, 2007
Cited by 40 (3 self)
In this paper we investigate algorithmic instruments leading to low power consumption in computing devices. While previous work on energy-efficient algorithms has mostly focused on single processor environments, in this paper we investigate multiprocessor settings. We study the basic problem of scheduling a set of jobs, each specified by a release time, a deadline and a processing volume, on variable speed processors so as to minimize the total energy consumption. We first settle the complexity of speed scaling with unit size jobs. More specifically, we devise a polynomial time algorithm for agreeable deadlines and prove NP-hardness results for arbitrary release dates and deadlines. For the latter setting we also develop a polynomial time algorithm achieving a constant factor approximation guarantee that is independent of the number of processors. Additionally, we study speed scaling of jobs with arbitrary processing requirements and, again, develop constant factor approximation algorithms. We finally transform our offline algorithms into constant competitive online strategies.