Results 1–10 of 12
An Online Scalable Algorithm for Average Flowtime in Broadcast Scheduling
 In SODA ’10: Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, 2010
Abstract

Cited by 16 (12 self)
In this paper the online pull-based broadcast model is considered. In this model, there are n pages of data stored at a server and requests for pages arrive online. When the server broadcasts page p, all outstanding requests for page p are simultaneously satisfied. We consider the problem of minimizing average (total) flow time online where all pages are unit-sized. For this problem, there has been a decade-long search for an online algorithm that is scalable, i.e. (1+ε)-speed O(1)-competitive for any fixed ε > 0. In this paper, we give the first analysis of an online scalable algorithm.
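As a concrete illustration of the broadcast model described above, here is a minimal sketch (the function name and data representation are our own, not from the paper) that evaluates the total flow time of a given broadcast schedule for unit-sized pages, where one broadcast of page p satisfies every outstanding request for p:

```python
from collections import defaultdict

def total_flow_time(requests, schedule):
    """Total flow time of a broadcast schedule for unit-sized pages.

    requests: list of (arrival_time, page) pairs.
    schedule: schedule[t] is the page broadcast during time slot t,
              completing at time t + 1.
    A broadcast of page p at slot t satisfies every request for p that
    arrived by time t; each satisfied request contributes
    (completion time - arrival time) to the total.
    """
    pending = defaultdict(list)          # page -> arrival times still waiting
    for arrival, page in sorted(requests):
        pending[page].append(arrival)
    total = 0
    for t, page in enumerate(schedule):
        completion = t + 1
        still_waiting = []
        for arrival in pending[page]:
            if arrival <= t:             # request was outstanding at slot t
                total += completion - arrival
            else:                        # arrives only after this broadcast
                still_waiting.append(arrival)
        pending[page] = still_waiting
    return total
```

For example, with two requests for page a and one for page b, all arriving at time 0, broadcasting a first gives total flow time 4, while broadcasting b first gives 5 — serving the more-requested page earlier helps, which is exactly the tension broadcast algorithms must manage.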
Longest wait first for broadcast scheduling
 In WAOA ’09: Proceedings of the 7th Workshop on Approximation and Online Algorithms, 2009
Abstract

Cited by 9 (6 self)
We consider online algorithms for broadcast scheduling. In the pull-based broadcast model there are n unit-sized pages of information at a server and requests for pages arrive online. When the server transmits a page p, all outstanding requests for that page are satisfied. There is a lower bound of Ω(n) on the competitiveness of online algorithms to minimize average flow time [27]; therefore we consider resource-augmentation analysis, in which the online algorithm is given extra speed over the adversary. The longest-wait-first (LWF) algorithm is a natural algorithm that has been shown to have good empirical performance [2]. Edmonds and Pruhs showed that LWF is 6-speed O(1)-competitive using a very complex analysis; they also showed that LWF is not O(1)-competitive with less than 1.618 speed. In this paper we make several contributions to the analysis of LWF and broadcast scheduling.
– An intuitive and easy-to-understand analysis of LWF showing that it is O(1/ε²)-competitive for average flow time with (4+ε) speed.
– LWF is O(1/ε³)-competitive for average flow time with (3.4+ε) speed.
We use our insights to prove that a natural extension of LWF is O(1)-speed O(1)-competitive for more general objective functions such as average delay-factor and L_k norms of delay-factor (for fixed k). These metrics generalize average flow time and L_k norms of flow time, respectively, and ours are the first non-trivial results for these objective functions.
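The LWF rule itself is simple to state: at each time slot, broadcast the page whose outstanding requests have accumulated the largest total waiting time. A minimal sketch (our own illustrative code, with a deterministic tie-break not specified by the paper):

```python
def lwf_schedule(requests, horizon):
    """Longest-Wait-First for unit-sized broadcast scheduling.

    Each slot, broadcast the page whose outstanding requests have the
    largest total accumulated wait; ties go to the lexicographically
    larger page name (an arbitrary choice made here for determinism).
    requests: list of (arrival_time, page); horizon: number of slots.
    Returns the page broadcast in each slot (None if idle)."""
    outstanding = {}                     # page -> pending arrival times
    schedule = []
    for t in range(horizon):
        for arrival, page in requests:   # admit this slot's arrivals
            if arrival == t:
                outstanding.setdefault(page, []).append(arrival)
        if outstanding:
            page = max(outstanding,
                       key=lambda p: (sum(t - a for a in outstanding[p]), p))
            del outstanding[page]        # one broadcast clears all requests
            schedule.append(page)
        else:
            schedule.append(None)
    return schedule
```

For instance, with two requests for a and one for b at time 0 plus another b request at time 1, LWF broadcasts b (tie at wait 0), then a (total wait 2 beats 0), then b again.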
Competitive algorithms from competitive equilibria: nonclairvoyant scheduling under polyhedral constraints
 In Symposium on Theory of Computing, STOC 2014
Abstract

Cited by 5 (4 self)
We introduce and study a general scheduling problem that we term the Packing Scheduling Problem (PSP). In this problem, jobs can have different arrival times and sizes; a scheduler can process job j at rate x_j, subject to arbitrary packing constraints over the set of rates (x) of the outstanding jobs. The PSP framework captures a variety of scheduling problems, including the classical problems of unrelated-machines scheduling, broadcast scheduling, and scheduling jobs of different parallelizability. It also captures scheduling constraints arising in diverse modern environments ranging from individual computer architectures to data centers. More concretely, PSP models multidimensional resource requirements and parallelizability, as well as network bandwidth requirements found in data-center scheduling. In this paper, we design non-clairvoyant online algorithms for PSP and its special cases; in this setting, the scheduler is unaware of the sizes of jobs. Our results are summarized as follows.
• For minimizing total weighted completion time, we show an O(1)-competitive algorithm. Surprisingly, we achieve this result by applying the well-known Proportional Fairness (PF) algorithm to perform allocations at each time instant. Though PF has been extensively studied in the context of maximizing fairness in resource allocation, we present the first analysis in adversarial and gen…
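Proportional Fairness allocates rates maximizing Σ_j w_j log x_j subject to the packing constraints. In the simplest special case of one unit-capacity resource (Σ_j x_j ≤ 1), the KKT conditions give the closed-form weight-proportional split, which this sketch (our own illustration, not the paper's general algorithm) computes:

```python
def proportional_fairness_single_resource(weights, capacity=1.0):
    """Proportional Fairness for a single packing constraint
    sum(x_j) <= capacity: maximizing sum_j w_j * log(x_j) makes
    w_j / x_j equal across jobs with the constraint tight, so
    x_j = capacity * w_j / sum(w).  weights: job -> positive weight."""
    total = sum(weights.values())
    return {j: capacity * w / total for j, w in weights.items()}
```

The general PSP setting replaces this closed form with a convex program over arbitrary packing constraints, but the single-resource case already shows PF's character: rates scale with weights and the resource is fully used.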
Online scalable algorithm for minimizing ℓ_k-norms of weighted flow time on unrelated machines
 In SODA, 2011
Abstract

Cited by 5 (3 self)
We consider the problem of scheduling jobs that arrive online in the unrelated-machine model to minimize ℓ_k norms of weighted flow time. In the unrelated setting, the processing time and weight of a job depend on the machine it is assigned to, and it is perhaps the most general machine model considered in the scheduling literature. Chadha et al. [10] obtained a recent breakthrough result, giving the first non-trivial algorithm for minimizing weighted flow time (that is, the ℓ_1 norm) in this very general setting via a novel potential-function-based analysis. They described a simple algorithm and showed that for any ε > 0 it is (1+ε)-speed O(1/ε²)-competitive (a scalable algorithm). In this paper we give the first non-trivial and scalable algorithm for minimizing ℓ_k norms of weighted flow time in the unrelated-machine model; for any ε > 0, the algorithm is (1+ε)-speed O(k/ε^(2+2/k))-competitive. The algorithm is immediate-dispatch and non-migratory. Our result is of both practical and theoretical interest. Scheduling to minimize ℓ_k norms of flow time for some small k > 1 has been shown to balance total response time and fairness, which is desirable in practice. On the theoretical side, ℓ_k norms for k > 1 pose substantial technical hurdles compared to k = 1, even for the single-machine case. Our work develops a novel potential function as well as several tools that can be used to lower-bound the optimal solution.
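To make the objective concrete, the ℓ_k norm of weighted flow time for a completed schedule is (Σ_j w_j (C_j − r_j)^k)^{1/k}; k = 1 is total weighted flow time, and larger k penalizes a few long waits more than many short ones, which is the fairness effect mentioned above. A minimal evaluator (our own illustrative code):

```python
def weighted_lk_norm_flow_time(jobs, k):
    """ℓ_k norm of weighted flow time: (sum_j w_j * (C_j - r_j)**k)**(1/k).

    jobs: list of (release r_j, completion C_j, weight w_j) triples.
    k = 1 recovers total weighted flow time."""
    return sum(w * (c - r) ** k for r, c, w in jobs) ** (1.0 / k)
```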
Scheduling with Setup Costs and Monotone Penalties
Abstract
We consider single-processor preemptive scheduling with job-dependent setup times. In this model, a job-dependent setup time is incurred when a job is started for the first time, and each time it is restarted after preemption. This model is a common generalization of preemptive scheduling, and in fact of non-preemptive scheduling as well. The objective is to minimize the sum of arbitrary non-negative, non-decreasing cost functions of the completion times of the jobs; this generalizes the objectives of minimizing weighted flow time, flow time squared, tardiness, or the number of tardy jobs, among many others. Our main result is a randomized polynomial-time O(1)-speed O(1)-approximation algorithm for this problem. Without speedup, no polynomial-time finite multiplicative approximation is possible unless P = NP. We extend the approach of Bansal et al. (FOCS 2007) of rounding a linear programming relaxation that accounts for costs incurred due to the non-preemptive nature of the schedule. A key new idea used in the rounding is that a point in the intersection polytope of two matroids can be decomposed as a convex combination of incidence vectors of sets that are independent in both matroids. In fact, we use this for the intersection of a partition matroid and a laminar matroid, in which case the decomposition can be found efficiently using network flows. Our approach also gives a randomized polynomial-time offline O(1)-speed O(1)-approximation algorithm for the broadcast scheduling problem with general cost functions.
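The cost model above is easy to pin down with a small evaluator (our own sketch; names and the segment representation are illustrative): every start or restart of job j pays setup[j] before any work runs, and a job's cost function is charged at its completion time. Comparing a preemptive schedule against its non-preemptive counterpart shows the restart penalty the paper's LP must account for:

```python
def total_cost_with_setups(segments, work, setup, cost_fn):
    """Cost of a preemptive schedule with job-dependent setup times.

    segments: ordered list of (job, amount) pieces run back to back;
              every segment of job j pays setup[j] before running.
    work:     job -> total processing requirement (segments of a job
              are assumed to sum exactly to it).
    cost_fn:  job -> non-decreasing function of completion time.
    Returns sum_j cost_fn[j](C_j)."""
    t = 0.0
    done = {j: 0.0 for j in work}
    total = 0.0
    for job, amount in segments:
        t += setup[job]                 # setup paid at every (re)start
        t += amount
        done[job] += amount
        if done[job] >= work[job] - 1e-12:   # this segment completes the job
            total += cost_fn[job](t)
    return total
```

With identity cost functions, unit setups, work a=2 and b=1, the preempted order a,b,a costs 10 (a's restart pays setup twice), while running a to completion first costs only 8.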
Online Batch Scheduling for Flow Objectives
, 2013
Abstract
Batch scheduling offers a powerful way of increasing throughput by aggregating multiple homogeneous jobs. It has applications in large-scale manufacturing as well as in server scheduling. In batch scheduling, when explained in the setting of server scheduling, the server can process up to a certain number of requests of the same type simultaneously. Batch scheduling can be seen as capacitated broadcast scheduling, a popular model considered in scheduling theory. In this paper, we consider an online batch scheduling model. For this model we address flow-time objectives for the first time and give positive results for average flow time, the k-norms of flow time, and maximum flow time. For average flow time and the k-norms of flow time we show algorithms that are O(1)-competitive with a small constant amount of resource augmentation. For maximum flow time we show a 2-competitive algorithm, and this is the best possible competitive ratio for any online algorithm.
New Approximations for Broadcast Scheduling via Variants of α-point Rounding
Abstract
We revisit the pull-based broadcast scheduling model. In this model, there are n unit-sized pages of information available at the server. Clients send requests to the server over time asking for specific pages. The server can transmit only one page at a time. When the server transmits a page, all outstanding requests for the page are simultaneously satisfied, and this is what distinguishes broadcast scheduling from the standard scheduling setting, where each job must be processed separately by the server. Broadcast scheduling has received a considerable amount of attention due to the algorithmic challenges it poses, in addition to its applications in multicast systems and wireless and LAN networks. In this paper, we give the following new approximation results for two popular objectives:
• For the objective of minimizing the maximum flow time, we give the first PTAS. Previously, it was known that the First-In-First-Out (FIFO) algorithm is a 2-approximation, and that this is tight [14, 16]. Obtaining a better approximation has been suggested as an open problem [14, 4, 25, 31].
• For the objective of maximizing throughput, we give a 0.7759-approximation, which improves upon the previous best known 0.75-approximation [23].
Our key techniques for these improvements are novel variants of α-point rounding that can effectively reduce congestion in the schedule, which is often the main hurdle in designing scheduling algorithms based on linear programming. We believe that our new rounding schemes could be of potential use for other scheduling problems.
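The FIFO baseline mentioned above is simple: each slot, transmit the page with the earliest still-unserved request. A minimal sketch (our own illustrative code; ties broken by dictionary insertion order for determinism) that also tracks the maximum flow time FIFO achieves:

```python
def fifo_broadcast(requests, horizon):
    """FIFO for unit-sized broadcast scheduling: each slot, broadcast the
    page holding the earliest outstanding request.  Returns the schedule
    and the maximum flow time (completion - arrival) over all requests.
    requests: list of (arrival_time, page); horizon: number of slots."""
    outstanding = {}                     # page -> pending arrival times
    schedule, max_flow = [], 0
    for t in range(horizon):
        for arrival, page in requests:   # admit this slot's arrivals
            if arrival == t:
                outstanding.setdefault(page, []).append(arrival)
        if outstanding:
            page = min(outstanding, key=lambda p: min(outstanding[p]))
            for arrival in outstanding.pop(page):
                max_flow = max(max_flow, (t + 1) - arrival)
            schedule.append(page)
        else:
            schedule.append(None)
    return schedule, max_flow
```

On requests {(0,a), (0,b), (1,a)} FIFO serves a, then b, then a again, with maximum flow time 2; the PTAS in the paper replaces this simple rule with LP-based α-point rounding to get within 1+ε of optimal.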
Energy-Efficient Online Scheduling with Deadlines
, 2010
Abstract
Whether viewed as an environmental, financial, or convenience concern, efficient management of power resources is an important problem. In this paper, we explore the problem of scheduling tasks on a single variable-speed processor. Our work differs from previous results in two major ways. First, we consider a model where not all tasks need to be completed, and where the goal is to maximize the difference between the benefit of completed tasks and the cost of energy (previous work assumed that all tasks must be completed). Second, we permit a wide range of functions relating task completion time to energy (previous work assumed a polynomial relationship). We begin by exploring multiple-speed packet scheduling, and we develop a 2-competitive algorithm where tasks are unit-sized and indivisible. This extends to a fractional version where benefit can be obtained for partially completed tasks, and also extends to permit arbitrary non-negative relationships between task value and completion time. The proof introduces a novel version of online maximum-weight matching which may be of independent interest. We then consider the problem of processor scheduling with preemption. We develop a randomized polylogarithmically competitive algorithm by showing how to effectively “guess” a speed close to that which the optimal solution will use. We also prove a number of lower bounds, indicating that our result cannot be significantly improved and that no deterministic algorithm can be better than polynomially competitive. We also consider the case where all tasks must be completed by their deadlines and the goal is to minimize energy, improving upon the best previous competitive result (as well as extending to arbitrary convex functions). Finally, we consider a problem variant where speedup affects distinct tasks differently, and provide a logarithmic-speedup competitive result and matching lower bounds.
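The benefit-minus-energy trade-off is easiest to see for a single task under the standard polynomial power model P(s) = s^α (an assumption of this sketch, not the paper's more general cost functions; the function and parameter names are ours). Running work w at constant speed s costs P(s)·(w/s) = w·s^(α−1), which for α > 1 is minimized by the slowest deadline-feasible speed:

```python
def best_profit_single_task(value, work, deadline, alpha=3.0):
    """One task, power P(s) = s**alpha (alpha > 1), release at time 0.

    Energy at constant speed s is work * s**(alpha - 1), increasing in s,
    so the cheapest feasible run uses s = work / deadline (constant speed
    is optimal by convexity of P).  Accept the task only if its value
    exceeds that minimum energy.  Returns (profit, chosen speed)."""
    s = work / deadline
    energy = work * s ** (alpha - 1)
    profit = value - energy
    return (profit, s) if profit > 0 else (0.0, 0.0)
```

With α = 3, a unit-work task worth 10 and a deadline of 2 runs at speed 0.5, spends 0.25 energy, and nets 9.75; a task worth only 0.1 with deadline 1 is rejected, since completing it would cost 1 unit of energy.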
Speed Scaling of Processes with Arbitrary Speedup Curves on a Multiprocessor
Abstract
“With multicore it’s like we are throwing this Hail Mary pass down the field and now we have to run down there as fast as we can to see if we can catch it.” — David Patterson, UC Berkeley computer science professor
We consider the setting of a multiprocessor where the speeds of the m processors can be individually scaled. Jobs arrive over time and have varying degrees of parallelizability. A non-clairvoyant scheduler must assign the jobs to processors and scale the speeds of the processors. We consider the objective of energy plus flow time. For jobs that may have side effects or that are not checkpointable, we show an Ω(m^((α−1)/α²)) bound on the competitive ratio of any deterministic algorithm. For jobs without side effects that may be efficiently checkpointed, we give an O(log m)-competitive algorithm. Thus for jobs that may have side effects or that are not checkpointable, the achievable competitive ratio grows quickly with the number of processors, but for checkpointable jobs without side effects, the achievable competitive ratio grows slowly with the number of processors. We then prove a lower bound of Ω(log^(1/α) m) on the competitive ratio of any algorithm for checkpointable processes without side effects. Finally, we slightly improve the upper bound on the competitive ratio for the single-processor case, which is equivalent to the case that all jobs are fully parallelizable, by giving an improved analysis of a previously proposed algorithm.
Randomization Constrained
, 2002
Abstract
Yao's principle is a fundamental technique for proving lower bounds on randomized algorithms and is based on a game-theoretic duality result of von Neumann. In this paper, we prove an extension of the principle to the case when one of the players is further constrained by a set of linear inequalities. The corresponding duality result is interpreted in a variety of algorithmic contexts, including multiobjective optimization problems, performance tails of randomized algorithms, constrained adversaries, the resource augmentation method, smoothed analysis, high-probability results, and loose competitiveness.
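The unconstrained principle is mechanical on a finite cost matrix: for ANY distribution over inputs, the expected cost of the best deterministic algorithm lower-bounds the worst-case expected cost of every randomized algorithm. A minimal sketch (our own illustration of the classical principle, not the paper's constrained extension):

```python
def yao_lower_bound(cost, input_dist):
    """Yao's principle on a finite game.

    cost[a][x] is deterministic algorithm a's cost on input x;
    input_dist[x] is a chosen probability for input x.
    Returns min over algorithms of the expected cost under input_dist,
    which lower-bounds the worst-case expected cost of any randomized
    algorithm.  Tightening means choosing input_dist adversarially."""
    return min(sum(p * c for p, c in zip(input_dist, row)) for row in cost)
```

On the 2x2 "matching" game cost = [[0, 1], [1, 0]], the uniform input distribution certifies a lower bound of 1/2, which is tight here; the paper's extension restricts the input-distribution player with additional linear inequalities.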