Results 1–10 of 11
A Tutorial on Amortized Local Competitiveness in Online Scheduling
, 2011
Cited by 17 (14 self)
potential functions are used to show that a particular online algorithm is locally competitive in an amortized sense. Algorithm analyses using potential functions are sometimes criticized as seeming to be black magic ...
Deadline Scheduling and Power Management for Speed Bounded Processors
Cited by 13 (1 self)
Energy consumption has become an important issue in the study of processor scheduling. Energy reduction can be achieved by allowing a processor to vary its speed dynamically (dynamic speed scaling) [2–4, 7, 10] or to enter a sleep state [1, 5, 8]. In the past, these two mechanisms were often studied separately. It is indeed natural to consider an integrated model in which a ...
Energy Efficient Geographical Load Balancing via Dynamic Deferral of Workload, arXiv:1204.2320v1 [cs.NI], 11 Apr 2012
Cited by 9 (0 self)
Abstract—With the increasing popularity of cloud computing and mobile computing, individuals, enterprises and research centers have started outsourcing their IT and computational needs to on-demand cloud services. Recently, geographical load balancing techniques have been suggested for data centers hosting cloud computation in order to reduce energy cost by exploiting the electricity price differences across regions. However, these algorithms do not distinguish among the diverse responsiveness requirements of different workloads. In this paper, we use the flexibility from Service Level Agreements (SLAs) to differentiate among workloads under bounded latency requirements and propose a novel approach to cost savings for geographical load balancing. We investigate how much workload to execute in each data center and how much to delay and migrate to other data centers for energy saving while meeting deadlines. We present an offline formulation for the geographical load balancing problem with dynamic deferral and give online algorithms to determine the assignment of workload to data centers and the migration of workload between data centers in order to adapt to dynamic electricity price changes. We compare our algorithms with the greedy approach and show that significant cost savings can be achieved by migration of workload and dynamic deferral with future electricity price prediction. We validate our algorithms on MapReduce traces and show that geographical load balancing with dynamic deferral can provide 20–30% cost savings. Index Terms—Cloud Computing; Data Center; Deadline.
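The deferral idea in this abstract can be illustrated with a toy greedy sketch. This is illustrative only — the function name and logic are my own, not the paper's online algorithm: workload with slack is held back whenever a cheaper electricity-price slot is forecast within its deadline.

```python
# Toy price-aware deferral (illustrative sketch, not the paper's algorithm).
# Each time slot, run workload whose deadline has arrived; defer the rest
# if a cheaper future slot exists within the deferral window.

def schedule_with_deferral(arrivals, prices, max_defer):
    """arrivals[t] = workload arriving at slot t; each unit must run
    within max_defer slots of arrival. Returns electricity cost of
    the greedy plan."""
    pending = []  # list of (deadline, amount) pairs
    cost = 0.0
    for t, (load, price) in enumerate(zip(arrivals, prices)):
        pending.append((t + max_defer, load))
        # must run anything whose deadline is this slot
        run = sum(a for d, a in pending if d == t)
        pending = [(d, a) for d, a in pending if d > t]
        # run deferrable work now only if no cheaper slot lies ahead
        horizon = prices[t + 1 : t + 1 + max_defer]
        if not horizon or price <= min(horizon):
            run += sum(a for _, a in pending)
            pending = []
        cost += run * price
    return cost

# With a price drop from 2.0 to 1.0 and one slot of slack, the first
# unit is deferred into the cheap slot, beating run-on-arrival cost.
print(schedule_with_deferral([1, 1], [2.0, 1.0], max_defer=1))
```

The real setting adds migration between geographically separate data centers and per-workload SLAs; this sketch only captures the "defer toward cheaper prices under a deadline" core.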
Energy Efficient Scheduling of Parallelizable Jobs
Cited by 5 (2 self)
In this paper, we consider scheduling parallelizable jobs in the non-clairvoyant speed scaling setting to minimize the objective of weighted flow time plus energy. Previously, strong lower bounds were shown for this model in the unweighted setting, even when the algorithm is given a constant amount of resource augmentation over the optimal solution. However, these lower bounds were given only for certain families of algorithms that do not recognize the parallelizability of live jobs. In this work, we circumvent the previous lower bounds and give a scalable algorithm under the natural assumption that the algorithm knows the current parallelizability of a job. When a general power function is considered, this is also the first algorithm that has a constant competitive ratio for the problem using any amount of resource augmentation.
Non-clairvoyant scheduling for weighted flow time and energy on speed bounded processors
 In Proc. CATS
, 2010
Cited by 3 (2 self)
Abstract. We consider the online scheduling problem of minimizing total weighted flow time plus energy in the dynamic speed scaling model, where a processor can scale its speed dynamically between 0 and some maximum speed T. In the past few years this problem has been studied extensively under the clairvoyant setting, which requires the size of a job to be known at release time [1, 5, 6, 9, 15, 18–20]. For the non-clairvoyant setting, despite its practical importance, progress is relatively limited. Only recently has an online algorithm, LAPS, been shown to be O(1)-competitive for minimizing (unweighted) flow time plus energy in the infinite speed model (i.e., T = ∞) [11, 12]. This paper makes two contributions to non-clairvoyant scheduling. First, we resolve the open problem of whether the unweighted result for LAPS can be extended to the more realistic model with bounded maximum speed. Second, we show that another non-clairvoyant algorithm, WRR, is O(1)-competitive when weighted flow time is concerned. Note that WRR is not as efficient as LAPS for scheduling unweighted jobs, as WRR has a much bigger constant hidden in its competitive ratio. This is the corrected version of the paper with the same title in CATS 2010 [13]; in particular, Lemmas 2 and 4 of Section 3 and the ordering of jobs in the potential analysis of Section 4 were given incorrectly before and are fixed in this version. On the other hand, the conjecture, given in Section 5, about the generalization of LAPS to the weighted setting has recently been resolved [14]. T.W. Lam is partly supported by HKU Grant 7176104.
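The "weighted flow time plus energy" objective that recurs in these abstracts is easy to state concretely. A minimal sketch (generic helper, not code from any of the cited papers): each job contributes weight times (completion minus release), and energy is the power s^α integrated over the schedule.

```python
# Generic sketch of the weighted-flow-time-plus-energy objective.
# jobs: list of (release, weight, completion) tuples.
# speed_profile: piecewise-constant schedule as (speed, duration) segments.

def weighted_flow_plus_energy(jobs, speed_profile, alpha=2.0):
    """Total weighted flow time plus energy for a given schedule."""
    flow = sum(w * (c - r) for (r, w, c) in jobs)            # sum of w_j * F_j
    energy = sum((s ** alpha) * d for (s, d) in speed_profile)  # integral of s(t)^alpha
    return flow + energy

# Two jobs each finishing 2 time units after release (weights 1 and 2),
# processor running at speed 1 for 3 units: flow = 6, energy = 3.
print(weighted_flow_plus_energy([(0, 1, 2), (1, 2, 3)], [(1.0, 3.0)]))
```

The algorithms surveyed here (LAPS, WRR) differ in how they choose the speed profile and job shares online; the objective itself is just this sum.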
Energy-Optimized Dynamic Deferral of Workload for Capacity Provisioning in Data Centers
Cited by 2 (1 self)
Abstract—This paper explores the opportunity for energy cost savings in data centers by utilizing the flexibility from Service Level Agreements (SLAs), and proposes a novel approach for capacity provisioning under bounded latency requirements of the workload. We investigate how many servers to keep active and how much workload to delay for energy saving while meeting latency constraints. We present an offline LP formulation for capacity provisioning by dynamic deferral and give two online algorithms to determine the capacity of the data center and the assignment of workload to servers dynamically. We prove the feasibility of the online algorithms and show that their worst-case performance is bounded by constant factors with respect to the offline formulation. To the best of our knowledge, this is the first formulation for capacity provisioning in data centers considering workload deferral with bounded latency. We validate our algorithms on MapReduce workload by provisioning capacity on a Hadoop cluster and show that the algorithms actually perform much better in practice compared to the naive 'follow the workload' provisioning, resulting in 20–40% cost savings.
New results for non-preemptive speed scaling
 In MFCS
, 2014
Cited by 2 (1 self)
Abstract. We consider the speed scaling problem introduced in the seminal paper of Yao et al. [24]. In this problem, a number of jobs, each with its own processing volume, release time, and deadline, needs to be executed on a speed-scalable processor. The power consumption of this processor is P(s) = s^α, where s is the processing speed and α > 1 is a constant. The total energy consumption is power integrated over time, and the objective is to process all jobs while minimizing the energy consumption. The preemptive version of the problem, along with its many variants, has been extensively studied over the years. However, little is known about the non-preemptive version of the problem, except that it is strongly NP-hard and allows a (large) constant-factor approximation [5, 8, 16]. Up until now, the (general) complexity of this problem has been unknown. In the present paper, we study an important special case of the problem, where the job intervals form a laminar family, and present a quasi-polynomial-time approximation scheme for it, thereby showing that (at least) this special case is not APX-hard, unless NP ⊆ DTIME(2^poly(log n)). The second contribution of this work is a polynomial-time algorithm for the special case of equal-volume jobs. In addition, we show that two other special cases of this problem allow fully polynomial-time approximation schemes.
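The power function P(s) = s^α from this abstract gives a compact cost model worth seeing in code. A minimal sketch (hypothetical helper name, not from any cited paper): energy to finish a work volume at constant speed is s^α times the time needed, i.e., volume · s^(α−1), and convexity of P is what makes running at an even speed energy-optimal.

```python
# Sketch of the speed-scaling cost model of Yao et al.:
# power P(s) = s**alpha with alpha > 1, energy = power * time.

def energy(volume, speed, alpha=3.0):
    """Energy to process `volume` units of work at constant `speed`."""
    time = volume / speed            # time needed at this speed
    return (speed ** alpha) * time   # equals volume * speed**(alpha - 1)

# Convexity of P: splitting the same 10 units between speeds 1 and 3
# costs more than running all of it at the average speed 2.
assert energy(10, 2) < energy(5, 1) + energy(5, 3)
print(energy(10, 2))
```

This is why the classical preemptive optimum (YDS) runs each "critical interval" at one uniform speed; the non-preemptive variant studied here is hard precisely because jobs can no longer be sliced to achieve that.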
Multiprocessor Speed Scaling for Jobs with Arbitrary Sizes and Deadlines
Cited by 1 (1 self)
Energy consumption has become an important concern in the design of modern processors, not only for battery-operated mobile devices with single processors but also for server farms or laptops with multi-core processors. A popular technology to reduce energy usage is dynamic speed scaling (see e.g., [1, 2, 3, 6]), where the processor can vary its speed dynamically. The power consumption ...