Results 1–10 of 35
Improved Approximation Schemes for Scheduling Unrelated Parallel Machines
 In ACM Symposium on Theory of Computing
, 1999
Abstract

Cited by 36 (5 self)
We consider the problem of scheduling n independent jobs on m unrelated parallel machines, where each job has to be processed by exactly one machine, processing job j on machine i requires p_ij time units, and the objective is to minimize the makespan, i.e. the maximum job completion time. Focusing on the case when m is fixed, we present, for both the preemptive and nonpreemptive variants of the problem, fully polynomial approximation schemes whose running times depend only linearly on n. We also study an extension of the problem, where processing job j on machine i incurs a cost of c_ij, and thus there are two optimization criteria: makespan and cost. We show that for any fixed m, there is a fully polynomial approximation scheme that, given values T and C, computes for any fixed ε > 0 a schedule in O(n) time with makespan at most (1 + ε)T and cost at most (1 + ε)C, if there exists a schedule of makespan T and cost C.
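As a point of reference for the problem statement above, here is a minimal greedy baseline, not the paper's approximation scheme: each job j is assigned to the machine whose load would become smallest after adding p_ij. The function name and example data are illustrative only.

```python
# Greedy baseline for unrelated-machines makespan scheduling (a simple
# heuristic, NOT the FPTAS from the paper): assign each job to the
# machine that minimizes its resulting completion time.

def greedy_unrelated(p):
    """p[i][j] = processing time of job j on machine i.
    Returns (assignment, makespan)."""
    m, n = len(p), len(p[0])
    load = [0] * m
    assignment = []
    for j in range(n):
        # pick the machine minimizing load after adding job j
        i = min(range(m), key=lambda i: load[i] + p[i][j])
        load[i] += p[i][j]
        assignment.append(i)
    return assignment, max(load)

# Example: 2 machines, 3 jobs
p = [[2, 9, 3],
     [4, 1, 3]]
assign, makespan = greedy_unrelated(p)  # assign == [0, 1, 1], makespan == 4
```

Note this greedy rule carries no (1 + ε) guarantee; it only illustrates the p_ij cost model from the abstract.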
Scheduling Malleable Parallel Tasks: An Asymptotic Fully Polynomial-Time Approximation Scheme
 Algorithmica
, 2004
Abstract

Cited by 27 (3 self)
A malleable parallel task is one whose execution time is a function of the number of (identical) processors allotted to it. We study the problem of scheduling a set of n independent malleable tasks on an arbitrary number m of parallel processors and propose an asymptotic fully polynomial time approximation scheme. For any fixed ε > 0, the algorithm computes a nonpreemptive schedule of length at most (1 + ε) times the optimum (plus an additive term) and has running time polynomial in n, m, and 1/ε.
Provably efficient twolevel adaptive scheduling
 In JSSPP, Saint-Malo
, 2006
Abstract

Cited by 19 (15 self)
Multiprocessor scheduling in a shared multiprogramming environment can be structured in two levels, where a kernel-level job scheduler allots processors to jobs and a user-level thread scheduler maps the ready threads of a job onto the allotted processors. This paper presents two-level scheduling schemes for scheduling “adaptive” multithreaded jobs whose parallelism can change during execution. The AGDEQ algorithm uses dynamic equipartitioning (DEQ) as the job-scheduling policy and an adaptive greedy algorithm (A-Greedy) as the thread scheduler. The ASDEQ algorithm uses DEQ for job scheduling and an adaptive work-stealing algorithm (A-Steal) as the thread scheduler. AGDEQ is suitable for centralized scheduling environments, and ASDEQ is suitable for more decentralized settings. Both two-level schedulers achieve O(1)-competitiveness with respect to makespan for any set of multithreaded jobs with arbitrary release times. They are also O(1)-competitive with respect to mean response time for any set of batched jobs. Moreover, because the length of the scheduling quantum can be adjusted to amortize the cost of context-switching during processor reallocation, our schedulers provide control over the scheduling overhead and ensure effective utilization of processors.
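The dynamic-equipartitioning policy named above can be sketched as follows. This is a generic DEQ sketch under the textbook definition (each job receives the minimum of its request and an equal share, with leftovers redistributed), not code from the paper; the job identifiers and request dictionary are illustrative.

```python
# Sketch of dynamic equipartitioning (DEQ): divide P processors as evenly
# as possible among jobs, never giving a job more than it requests.

def deq(requests, P):
    """requests: {job: desired processor count}; returns {job: allotment}."""
    alloc = {}
    pending = dict(requests)
    while pending:
        share = P // len(pending)
        # jobs desiring no more than the equal share get their full desire
        satisfied = [j for j, d in pending.items() if d <= share]
        if not satisfied:
            # every remaining job wants more: equipartition what is left
            for k, j in enumerate(sorted(pending)):
                alloc[j] = share + (1 if k < P % len(pending) else 0)
            return alloc
        for j in satisfied:
            alloc[j] = pending.pop(j)
            P -= alloc[j]
    return alloc

# Job 'a' gets its small request; 'b' and 'c' split the remainder.
deq({'a': 2, 'b': 10, 'c': 10}, 12)  # {'a': 2, 'b': 5, 'c': 5}
```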
On Preemptive Resource Constrained Scheduling: Polynomial-Time Approximation Schemes
, 2002
Abstract

Cited by 18 (9 self)
We study resource constrained scheduling problems where the objective is to compute feasible preemptive schedules that minimize the makespan and use no more resources than are available.
Adaptive scheduling with parallelism feedback
 In PPoPP, pages 100–109
, 2006
Abstract

Cited by 15 (8 self)
Multiprocessor scheduling in a shared multiprogramming environment is often structured as two-level scheduling, where a kernel-level job scheduler allots processors to jobs and a user-level task scheduler schedules the work of a job on the allotted processors. In this context, the number of processors allotted to a particular job may vary during the job's execution, and the task scheduler must adapt to these changes in processor resources. For overall system efficiency, the task scheduler should also provide parallelism feedback to the job scheduler to avoid the situation where a job is allotted processors that it cannot use productively. We present an adaptive task scheduler for multitasked jobs with dependencies that provides continual parallelism feedback to the job scheduler in the form of requests for processors. Our scheduler guarantees that a job completes near-optimally while utilizing at least a constant fraction of the allotted processor cycles. Our scheduler can be applied to schedule data-parallel programs, such as those written in High Performance Fortran (HPF), *Lisp, C*, NESL, and ZPL. Our analysis models the job scheduler as the task scheduler's adversary, challenging the task scheduler to be robust to the system environment and the job scheduler's administrative policies. For example, the job scheduler can make available a huge number of processors exactly when the job has little use for them. To analyze the performance of our adaptive task scheduler under this stringent adversarial assumption, we introduce a new technique called "trim analysis," which allows us to prove that our task scheduler performs poorly on at most a small number of time steps, exhibiting near-optimal behavior on the vast majority. To be precise, suppose that a job has work T1 and critical-path length T∞ and is running on a machine with P processors. Using trim analysis, we prove that our scheduler completes the job in O(T1/P̃ + T∞ + L lg P) time steps, where L is the length of a scheduling quantum and P̃ denotes the O(T∞ + L lg P)-trimmed availability. This quantity is the average of the processor availability over all time steps, excluding the O(T∞ + L lg P) time steps with the highest processor availability. When T1/T∞ ≫ P̃ (the job's parallelism dominates the O(T∞ + L lg P)-trimmed availability), the job achieves nearly perfect linear speedup. Conversely, when T1/T∞ ≪ P̃, the asymptotic running time of the job is nearly the length of its critical path. (Yuxiong He is a Visiting Scholar at MIT CSAIL and a Ph.D. candidate at the National University of Singapore; Wen Jing Hsu is a Visiting Scientist at MIT CSAIL and an Associate Professor at Nanyang Technological University.)
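The trimmed availability defined in the abstract is straightforward to compute from an availability trace: discard the k time steps with the highest availability (where k would be on the order of T∞ + L lg P) and average the rest. A small sketch, with an illustrative trace:

```python
# Trimmed availability: mean processor availability after dropping the
# k time steps with the highest availability. The trace below is made up
# to show how one adversarial spike is excluded from the average.

def trimmed_availability(avail, k):
    """avail: processor availability at each time step; drop the k largest."""
    kept = sorted(avail)[:max(len(avail) - k, 0)]
    return sum(kept) / len(kept) if kept else 0.0

trace = [4, 4, 100, 4, 4]          # spike of 100 free processors at one step
trimmed_availability(trace, 1)     # 4.0 -- the spike does not inflate the mean
```

This is why the adversary's trick of offering many processors "exactly when the job has little use for them" does not help it: those steps fall in the trimmed-away portion.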
Scheduling parallel jobs to minimize the makespan
, 2006
Abstract

Cited by 14 (0 self)
We consider the NP-hard problem of scheduling parallel jobs with release dates on identical parallel machines to minimize the makespan. A parallel job requires simultaneously a prespecified, job-dependent number of machines when being processed. We prove that the makespan of any nonpreemptive list schedule is within a factor of 2 of the optimal preemptive makespan. This gives the best-known approximation algorithms for both the preemptive and the nonpreemptive variants of the problem. We also show that no list-scheduling algorithm can achieve a better performance guarantee than 2 for the nonpreemptive problem, no matter which priority list is chosen. List scheduling also works in the online setting where jobs arrive over time and the length of a job becomes known only when it completes; it therefore yields a deterministic online algorithm with competitive ratio 2 as well. In addition, we consider a different online model in which jobs arrive one by one and need to be scheduled before the next job becomes known. We show that no list-scheduling algorithm has a constant competitive ratio in this model. Still, we present the first online algorithm with a constant competitive ratio for scheduling parallel jobs in this context. We also prove a new information-theoretic lower bound of 2.25 on the competitive ratio of any deterministic online algorithm for this model. Moreover, we show that 6/5 is a lower bound on the competitive ratio of any deterministic online algorithm for the preemptive version of the model with jobs arriving over time.
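The list-scheduling rule analyzed above can be simulated in a few lines: scan the priority list and start every job whose machine requirement fits the currently free machines, repeating whenever a job completes. This greedy variant and the (machines_needed, duration) job encoding are assumptions for illustration, assuming each job needs at most m machines and all release dates are zero.

```python
# Minimal simulation of nonpreemptive list scheduling for rigid parallel
# jobs: a job occupies its required number of machines for its whole
# duration; at each completion event, restart the scan over the list.
import heapq

def list_schedule(jobs, m):
    """jobs: list of (machines_needed, duration) in priority order,
    each needing at most m machines. Returns the makespan."""
    free = m
    t = 0.0
    running = []                     # min-heap of (finish_time, machines)
    pending = list(jobs)
    while pending or running:
        # start every pending job (in list order) that fits right now
        still = []
        for size, dur in pending:
            if size <= free:
                free -= size
                heapq.heappush(running, (t + dur, size))
            else:
                still.append((size, dur))
        pending = still
        if running:
            # advance to the next completion and release its machines
            t, size = heapq.heappop(running)
            free += size
            while running and running[0][0] == t:
                _, s = heapq.heappop(running)
                free += s
    return t
```

For example, with m = 3 and jobs [(2, 3), (1, 5), (3, 2)], the 3-machine job must wait until both earlier jobs finish at time 5, giving makespan 7, which illustrates the idle gaps that the factor-2 analysis above accounts for.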
Approximate Strong Separation with Application in Fractional Graph Coloring and Preemptive Scheduling
 In Proceedings of the 19th International Symposium on Theoretical Aspects of Computer Science
, 2001
Abstract

Cited by 14 (2 self)
In this paper we show that approximation algorithms with ratio α for the weighted independent set and s-dimensional knapsack problems can be turned into approximation algorithms with the same ratio for fractional weighted graph coloring and preemptive resource constrained scheduling. In order to obtain these results, we generalize known results of Grötschel, Lovász, and Schrijver on separation, nonemptiness test, optimization, and violation in the direction of approximability.
Scheduling Parallel Tasks: Approximation Algorithms
, 2003
Abstract

Cited by 13 (0 self)
Scheduling is a crucial problem in parallel and distributed processing. It consists of determining where and when the tasks of parallel programs will be executed. The design of parallel algorithms has to be reconsidered under the influence of new execution supports (namely, clusters of workstations, grid computing, and global computing), which are characterized by a larger number of heterogeneous processors, often organized in hierarchical subsystems. The Parallel Tasks model (tasks that require more than one processor for their execution) was introduced about 15 years ago as a promising alternative for scheduling parallel applications, especially in the case of slow communication media. The basic idea is to consider the application at a coarse level of granularity (larger tasks, in order to decrease the relative weight of communications). As the main difficulty for scheduling in actual systems comes from handling the communications efficiently, this view of the problem allows them to be considered implicitly, thus leading to more tractable problems. We kindly invite the reader to look at the chapter by Maciej Drozdowski (in this book) for a detailed presentation of the various kinds of Parallel Tasks in a general context, and the survey paper by Feitelson et al. [14] for a discussion in the field of parallel processing. Even though the basic problem of scheduling Parallel Tasks remains NP-hard, some approximation algorithms can be designed. Many results have been derived recently for scheduling the different types of Parallel Tasks, namely Rigid, Moldable, or Malleable ones. We will distinguish Parallel Tasks within the same application from Parallel Tasks between applications in a multi-user context. Various optimization criteria will be discussed. This chapter aims to present several approximation algorithms for scheduling moldable and malleable tasks, with a special emphasis on new execution supports.
A Linear Time Approximation Scheme for Job Shop Scheduling
Abstract

Cited by 12 (8 self)
In this paper we present a linear time approximation scheme for the nonpreemptive job shop scheduling problem when the number of machines and the number of operations per job are fixed. We also show how to extend the approximation scheme to the preemptive version of the problem.
Grouping techniques for scheduling problems: simpler and faster
 In Proceedings of the 9th Annual European Symposium on Algorithms
, 2001
Abstract

Cited by 11 (4 self)
In this paper we describe a general grouping technique to devise faster and simpler approximation schemes for several scheduling problems. We illustrate the technique on two different scheduling problems: scheduling on unrelated parallel machines with costs, and the job shop scheduling problem. The time complexity of the resulting approximation schemes is always linear in the number n of jobs, and the multiplicative constant hidden in the O(n) running time is reasonably small and independent of the error ε.