Results 11–20 of 168
Scheduling for speed bounded processors
In Proc. ICALP, 2008
Cited by 37 (12 self)
Abstract. We consider online scheduling algorithms in the dynamic speed scaling model, where a processor can scale its speed between 0 and some maximum speed T. The processor uses energy at rate s^α when run at speed s, where α > 1 is a constant. Most modern processors use dynamic speed scaling to manage their energy usage. This leads to the problem of designing execution strategies that are both energy efficient and yet have almost optimum performance. We consider two problems in this model and give essentially the best possible algorithms for them. In the first problem, jobs with arbitrary sizes and deadlines arrive online and the goal is to maximize the throughput, i.e. the total size of jobs completed successfully. We give an algorithm that is 4-competitive for throughput and O(1)-competitive for the energy used. This improves upon the 14-competitive (for throughput) algorithm of Chan et al. [10]. Our throughput guarantee is optimal, as any online algorithm must be at least 4-competitive even if the energy concern is ignored [7]. In the second problem, we consider optimizing the tradeoff between the total flow time incurred and the energy consumed by the jobs. We give a 4-competitive algorithm to minimize total flow time plus energy for unweighted unit-size jobs, and a (2 + o(1))α/ln α-competitive algorithm to minimize fractional weighted flow time plus energy. Prior to our work, these guarantees were known only when the processor speed was unbounded (T = ∞) [4].
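The power model in this abstract (power s^α with α > 1) is convex, which is why running at a constant speed is energetically cheapest for a fixed amount of work in a fixed time window. A minimal sketch of that fact (function names and the α = 3 default are our own illustration, not from the paper):

```python
def constant_speed_energy(work, duration, alpha=3.0):
    """Energy to finish `work` in `duration` at the constant speed
    s = work/duration, under the power model power = s**alpha."""
    s = work / duration
    return s ** alpha * duration

def burst_then_idle_energy(work, duration, alpha=3.0):
    """Same work and time budget, but done at double speed in half the
    time, then idling. Convexity of s**alpha makes this more expensive."""
    s = 2 * work / duration
    return s ** alpha * (duration / 2)
```

With work 4, duration 2, and α = 3, the constant schedule costs 16 energy units versus 64 for the burst-then-idle schedule; speed-scaling algorithms exploit exactly this convexity by smoothing speeds whenever deadlines allow.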
Reactive speed control in temperature-constrained real-time systems
Real-Time Systems Journal, 2008
Cited by 31 (4 self)
In this paper, we study temperature-constrained real-time systems, where real-time guarantees must be met without exceeding safe temperature levels within the processor. We give a short review of temperature issues in processors and describe how speed control can be used to trade off task delays against processor temperature. We describe how traditional worst-case execution scenarios do not apply in temperature-constrained situations. As an example, we adopt a simple reactive speed control technique and show how this simple reactive scheme can improve processor utilization compared with any constant-speed scheme.
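As a rough illustration of reactive speed control, a processor can run at full speed until a thermal threshold is reached and then throttle. The lumped thermal model and every constant below are our own assumptions for the sketch, not taken from the paper:

```python
def reactive_speed(temp, t_max, s_high=1.0, s_low=0.5):
    """Reactive control: full speed while below the threshold,
    throttled speed once the threshold is reached."""
    return s_high if temp < t_max else s_low

def simulate(duration, dt=0.01, t_max=70.0, a=100.0, b=1.0, alpha=3.0):
    """Forward-Euler simulation of a lumped thermal model:
    dT/dt = a * s**alpha - b * T (heating from power, Newtonian cooling)."""
    temp, work = 25.0, 0.0
    for _ in range(int(duration / dt)):
        s = reactive_speed(temp, t_max)
        temp += (a * s ** alpha - b * temp) * dt
        work += s * dt
    return temp, work
```

In this toy run the temperature rises to the threshold and then hovers just below it while work continues to accumulate, which is the qualitative behavior the paper analyzes.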
Energy efficient online deadline scheduling
In Proc. SODA, 2007
Cited by 31 (12 self)
Abstract. This paper extends the study of online algorithms for energy-efficient deadline scheduling to the overloaded setting. Specifically, we consider a processor that can vary its speed between 0 and a maximum speed T to minimize its energy usage (whose rate is roughly a cubic function of the speed). As the speed is upper bounded, the system may be overloaded with jobs, and no scheduling algorithm can meet the deadlines of all jobs. An optimal schedule is expected to maximize the throughput, and furthermore, its energy usage should be the smallest among all schedules that achieve the maximum throughput. In designing a scheduling algorithm, one has to face the dilemma of selecting more jobs and being conservative in energy usage. Even if we ignore energy usage, the best possible online algorithm is 4-competitive on throughput [12]. On the other hand, existing work on energy-efficient scheduling focuses on minimizing the energy to complete all jobs on a processor with unbounded speed, giving several O(1)-competitive algorithms with respect to energy usage [2,20]. This paper presents the first online algorithm for the more realistic setting where processor speed is bounded and the system may be overloaded; the algorithm is O(1)-competitive on both throughput and energy usage. If the maximum speed of the online scheduler is relaxed slightly to (1+ε)T for some ε > 0, we can improve the competitive ratio on throughput to arbitrarily close to one, while maintaining O(1)-competitiveness on energy usage.
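With a bounded maximum speed T, overload can be detected with the classical interval-load condition: a job set is feasible (under preemptive EDF) at speed T iff no interval is asked to carry more work than T times its length. A small sketch of that test (the job representation and function name are ours):

```python
def feasible(jobs, T):
    """EDF feasibility at maximum speed T: every interval [r, d] must have
    capacity T*(d - r) for the jobs released and due inside it.
    Each job is a (release, deadline, work) triple."""
    times = sorted({t for r, d, _ in jobs for t in (r, d)})
    for i, r in enumerate(times):
        for d in times[i + 1:]:
            demand = sum(w for rj, dj, w in jobs if rj >= r and dj <= d)
            if demand > T * (d - r):
                return False
    return True
```

When this test fails, the system is overloaded and a scheduler must choose which jobs to drop, which is exactly the throughput-versus-energy dilemma the abstract describes.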
Min-energy voltage allocation for tree-structured tasks
Journal of Combinatorial Optimization, 2006
Cited by 30 (2 self)
Abstract. We study job scheduling on processors capable of running at variable voltage/speed to minimize energy consumption. Each job in a problem instance is specified by its arrival time and deadline, together with the required number of CPU cycles. It is known that the minimum-energy schedule for n jobs can be computed in O(n^3) time, assuming a convex energy function. We investigate more efficient algorithms for computing the optimal schedule when the job sets have certain special structures. When the time intervals are structured as trees, the minimum-energy schedule is shown to have a succinct characterization and is computable in time O(P), where P is the tree's total path length. We also study an online average-rate heuristic AVR and prove that its energy consumption achieves a small constant competitive ratio for nested job sets and for job sets with limited overlap. Some simulation results are also given.
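The AVR heuristic studied here has a very simple form (due to Yao, Demers, and Shenker): at any time, run at the sum of the average-rate densities w/(d − r) of the jobs currently alive. A sketch, with our own job representation:

```python
def avr_speed(jobs, t):
    """AVR (average rate): at time t, run at the sum of the densities
    w/(d - r) of all jobs alive at t.
    Each job is a (release, deadline, work) triple."""
    return sum(w / (d - r) for r, d, w in jobs if r <= t < d)
```

For jobs (0, 4, 4) and (1, 3, 2), the speed is 1.0 before the second job arrives and 2.0 while both are alive; the nested intervals here are the kind of structure for which the paper proves a small constant competitive ratio.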
An efficient algorithm for computing optimal discrete voltage schedules
 SIAM J. on Computing
Cited by 28 (1 self)
Abstract. We consider the problem of job scheduling on a variable voltage processor with d discrete voltage/speed levels. We give an algorithm which constructs a minimum-energy schedule for n jobs in O(dn log n) time. Previous approaches solve this problem by first computing the optimal continuous solution in O(n^3) time and then adjusting the speed to discrete levels. In our approach, the optimal discrete solution is characterized and computed directly from the inputs. We also show that Ω(n log n) time is required; hence the algorithm is optimal for fixed d.
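The two-stage approach this abstract improves on rounds a continuous optimal speed s* to the two adjacent discrete levels, splitting the time so that total work is preserved; by convexity of the power function this is the cheapest discrete emulation of s*. A sketch of that classical post-processing step (names are ours; the paper's own algorithm avoids computing s* entirely):

```python
import bisect

def two_level_split(s_star, levels, duration):
    """Emulate a continuous speed s_star over `duration` with the two
    adjacent discrete levels, preserving total work s_star * duration.
    Returns (hi, time_at_hi, lo, time_at_lo)."""
    levels = sorted(levels)
    if s_star <= levels[0]:
        return levels[0], duration, levels[0], 0.0
    if s_star >= levels[-1]:
        return levels[-1], duration, levels[-1], 0.0
    i = bisect.bisect_left(levels, s_star)
    lo, hi = levels[i - 1], levels[i]
    # Choose t_hi so that hi*t_hi + lo*(duration - t_hi) = s_star*duration.
    t_hi = duration * (s_star - lo) / (hi - lo)
    return hi, t_hi, lo, duration - t_hi
```

For s* = 3 with levels {1, 2, 4} over 2 time units, the split runs one unit at speed 4 and one at speed 2, matching the 6 units of work that s* would do.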
Speed Scaling of Processes with Arbitrary Speedup Curves on a Multiprocessor
Cited by 23 (7 self)
We consider the setting of a multiprocessor where the speeds of the m processors can be individually scaled. Jobs arrive over time and have varying degrees of parallelizability. A non-clairvoyant scheduler must assign the processes to processors and scale the speeds of the processors. We consider the objective of energy plus flow time. We assume that a processor running at speed s uses power s^α for some constant α > 1. For processes that may have side effects or that are not checkpointable, we show an Ω(m^((α−1)/α²)) bound on the competitive ratio of any randomized algorithm. For checkpointable processes without side effects, we give an O(log m)-competitive algorithm. Thus for processes that may have side effects or that are not checkpointable, the achievable competitive ratio grows quickly with the number of processors, but for checkpointable processes without side effects, it grows slowly with the number of processors. We then show a lower bound of Ω(log^(1/α) m) on the competitive ratio of any randomized algorithm for checkpointable processes without side effects.
Improved bounds for speed scaling in devices obeying the cube-root rule
In Proc. 36th International Colloquium on Automata, Languages and Programming, 2009
Dynamic Thermal Management through Task Scheduling
Cited by 20 (6 self)
The evolution of microprocessors has been hindered by their increasing power consumption and the speed of on-die heat generation. High temperature impairs the processor's reliability and reduces its lifetime. While hardware-level dynamic thermal management (DTM) techniques, such as voltage and frequency scaling, can effectively lower the chip temperature when it surpasses the thermal threshold, they inevitably come at the cost of performance degradation. We propose an OS-level technique that performs thermal-aware job scheduling to reduce the number of thermal trespasses. Our scheduler reduces the amount of hardware DTM and achieves higher performance while keeping the temperature low. Our methods leverage the natural discrepancies in thermal behavior among different workloads and schedule them to keep the chip temperature below a given budget. We develop a heuristic algorithm based on the observation that the resulting temperature differs when a hot and a cool job are executed in a different order. To evaluate our scheduling algorithms, we developed a lightweight runtime temperature monitor to enable informed scheduling decisions. We have implemented our scheduling algorithm and the entire temperature-monitoring framework in the Linux kernel. Our proposed scheduler can remove 10.5–73.6% of the hardware DTMs in various combinations of workloads in a medium thermal environment. As a result, CPU throughput was improved by up to 7.6% (4.1% on average) even under a severe thermal environment.
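The observation that executing a hot and a cool job in different orders yields different temperatures can be reproduced with a lumped RC thermal model. The model and every constant below are our own assumptions for illustration, not the paper's:

```python
import math

def run_job(temp, power, tau, b=0.1):
    """Lumped RC model: temperature relaxes exponentially toward the job's
    steady-state temperature power/b while the job runs for time tau."""
    decay = math.exp(-b * tau)
    return temp * decay + (power / b) * (1 - decay)

def peak_after(order, temp=30.0):
    """Peak temperature over a job sequence. Each per-job trajectory is
    monotone, so checking temperatures at job boundaries suffices here."""
    peak = temp
    for power, tau in order:
        temp = run_job(temp, power, tau)
        peak = max(peak, temp)
    return peak
```

With a hot job (power 8.0, length 10) and a cool job (power 2.0, length 10), running the cool job first gives a lower peak temperature than running the hot job first, which is the asymmetry the paper's heuristic exploits.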
The bell is ringing in speed-scaled multiprocessor scheduling
In Proceedings of the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 2009
Cited by 20 (0 self)
This paper investigates the problem of scheduling jobs on multiple speed-scaled processors without migration, i.e., we have a constant α > 1 such that running a processor at speed s results in energy consumption s^α per time unit. We consider the general case where each job has a monotonically increasing cost function that penalizes delay. This includes the previously considered cases of deadlines and flow time. For any type of delay cost function, we obtain the following results: any β-approximation algorithm for a single processor yields a randomized βB_α-approximation algorithm for multiple processors without migration, where B_α is the α-th Bell number, that is, the number of partitions of a set of size α. Analogously, we show that any β-competitive online algorithm for a single processor yields a βB_α-competitive online algorithm for multiple processors without migration. Finally, we show that any β-approximation algorithm for multiple processors with migration yields a deterministic βB_α-approximation algorithm for multiple processors without migration. These facts improve several approximation ratios and lead to new results. For instance, we obtain the first constant-factor online and offline approximation algorithms for multiple processors without migration for arbitrary release times, deadlines, and job sizes.
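The Bell number B_α appearing in these ratios counts set partitions and is easy to compute with the Bell triangle; for α = 3 (the cube-root rule) the overhead factor is B_3 = 5. A sketch:

```python
def bell(n):
    """n-th Bell number (number of partitions of an n-set),
    computed via the Bell triangle."""
    row = [1]
    for _ in range(n):
        nxt = [row[-1]]          # new row starts with previous row's last entry
        for x in row:
            nxt.append(nxt[-1] + x)
        row = nxt
    return row[0]
```

The first few values are 1, 1, 2, 5, 15, 52, so the multiplicative loss βB_α stays a modest constant for the small α typical of real power functions.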
Competitive Non-migratory Scheduling for Flow Time and Energy
SPAA '08, 2008
Cited by 19 (7 self)
Energy usage has been an important concern in recent research on online scheduling. In this paper we extend the study of the tradeoff between flow time and energy from the single-processor setting [8, 6] to the multiprocessor setting. Our main result is an analysis of a simple non-migratory online algorithm called CRR (classified round robin) on m ≥ 2 processors, showing that its flow time plus energy is within an O(1) factor of that of the optimal non-migratory offline algorithm, when the maximum allowable speed is slightly relaxed. This result still holds even if the comparison is made against the optimal migratory offline algorithm (the competitive ratio increases by a factor of 2.5). As a special case, our work also contributes to traditional online flow-time scheduling. Specifically, for minimizing flow time only, CRR can yield a competitive ratio of one or even arbitrarily smaller than one when using sufficiently faster processors. Prior to our work, a similar result was known only for online algorithms that need migration [21, 23], while the best non-migratory result achieves an O(1) competitive ratio [14]. The above result stems from an interesting observation that there always exists some optimal migratory schedule S that can be converted (in an offline sense) to a non-migratory schedule S′ with a moderate increase in flow time plus energy. More importantly, this non-migratory schedule always dispatches jobs in the same way as CRR.
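CRR's dispatch rule is simple to state: jobs are grouped into geometric size classes, and each class is distributed over the m processors in round-robin order, so no job ever migrates. A sketch of just the dispatch step (the class base and names are our own; speed scaling is omitted):

```python
import math

def crr_dispatch(jobs, m, base=2.0):
    """Classified round robin, dispatch rule only: jobs in the same
    geometric size class [base**k, base**(k+1)) are sent to the m
    processors in round-robin order, so no migration is needed."""
    counters = {}
    assignment = []
    for size in jobs:
        k = math.floor(math.log(size, base))
        proc = counters.get(k, 0) % m
        counters[k] = counters.get(k, 0) + 1
        assignment.append(proc)
    return assignment
```

Round-robin within each class keeps the load of similar-sized jobs balanced across processors, which is what the conversion argument from a migratory schedule relies on.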