Results 1 - 10 of 34
Steady-State Scheduling on Heterogeneous Clusters: Why and How?
, 2004
"... In this paper, we consider steady-state scheduling techniques for heterogeneous systems, such as clusters and grids. We advocate the use of steady-state scheduling to solve a variety of important problems, which would be too difficult to tackle with the objective of makespan minimization. We give a ..."
Abstract
-
Cited by 42 (20 self)
In this paper, we consider steady-state scheduling techniques for heterogeneous systems, such as clusters and grids. We advocate the use of steady-state scheduling to solve a variety of important problems, which would be too difficult to tackle with the objective of makespan minimization. We give a few successful examples before discussing the main limitations of the approach.
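To make the steady-state idea concrete for the bag-of-tasks case, here is a minimal sketch (not the paper's own algorithm) of the bandwidth-centric allocation on a one-port star: worker i needs comm[i] time units to receive a task and comp[i] to process it, the master can send to only one worker at a time, and maximizing the aggregate task rate reduces to a fractional-knapsack problem solved greedily by communication cost. All names and numbers below are illustrative assumptions.

```python
def steady_state_rates(comm, comp):
    """Bandwidth-centric steady-state allocation on a one-port star.

    comm[i]: time for the master to send one task to worker i
    comp[i]: time for worker i to process one task
    Returns per-worker task rates (tasks per time unit) maximizing total
    throughput subject to rate[i] <= 1/comp[i] and sum(comm[i]*rate[i]) <= 1.
    """
    rates = [0.0] * len(comm)
    budget = 1.0  # fraction of the master's sending capacity still available
    # Cheapest communication first: each task sent to worker i consumes comm[i]
    # of the budget, so filling the cheapest workers first maximizes the count.
    for i in sorted(range(len(comm)), key=lambda i: comm[i]):
        rate = min(1.0 / comp[i], budget / comm[i])
        rates[i] = rate
        budget -= comm[i] * rate
        if budget <= 1e-12:
            break
    return rates

# Example with three heterogeneous workers
rates = steady_state_rates(comm=[1.0, 2.0, 0.5], comp=[3.0, 1.0, 4.0])
print(rates, "total throughput:", sum(rates))
```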
Optimal sharing of bags of tasks in heterogeneous clusters
- in Proc. of the fifteenth annual ACM Symposium on Parallel Algorithms and Architectures
"... We prove that “FIFO ” worksharing protocols provide asymptotically optimal solutions to a problem related to sharing a bag of identically complex tasks in a heterogeneous network of workstations (HNOW) N. In the HNOW-Exploitation Problem, one seeks to accomplish as much work as possible on N during ..."
Abstract
-
Cited by 40 (1 self)
We prove that “FIFO” worksharing protocols provide asymptotically optimal solutions to a problem related to sharing a bag of identically complex tasks in a heterogeneous network of workstations (HNOW) N. In the HNOW-Exploitation Problem, one seeks to accomplish as much work as possible on N during a prespecified fixed period of L time units. The worksharing protocols we study are crafted within an architectural model that characterizes N via parameters that measure workstations’ computational and communicational powers. The protocols are self-scheduling, in that they determine completely both an amount of work to allocate to each of N’s workstations and a schedule for all related interworkstation communications. A protocol observes a FIFO regimen if it has N’s workstations finish their assigned work, and return their results, in the same order in which they are supplied with their workloads. The optimality of FIFO protocols resides in the fact that they accomplish at least as much work as any other protocol during all sufficiently long worksharing episodes. Simulation experiments indicate that the superiority of FIFO protocols is often observed during worksharing episodes of only a few minutes’ duration.
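The protocol setting can be made concrete with a toy simulation. The sketch below is not the paper's protocol: it assumes a deliberately simplified linear model (one shared channel, sequential workload deliveries, results returned in supply order only after all deliveries finish) and binary-searches the largest proportional allocation that completes within L time units; the paper's protocols determine the allocation itself rather than scaling a fixed split as this toy does.

```python
def fifo_episode_length(w, c, p):
    """Makespan of one FIFO worksharing episode under a simplified linear model.

    w[i]: work units given to workstation i (in supply order)
    c[i]: per-unit communication time (assumed equal for delivery and return)
    p[i]: per-unit computation time
    Deliveries go out back to back on a shared channel, each workstation
    computes once its workload has arrived, and results come back over the
    same channel in the supply (FIFO) order after the last delivery.
    """
    t = 0.0
    compute_done = []
    for wi, ci, pi in zip(w, c, p):
        t += ci * wi                      # sequential delivery
        compute_done.append(t + pi * wi)
    channel_free = t                      # returns start after all deliveries
    for wi, ci, done in zip(w, c, compute_done):
        start = max(channel_free, done)   # FIFO return order
        channel_free = start + ci * wi
    return channel_free

def max_fifo_work(c, p, L, tol=1e-6):
    """Largest total work a proportional allocation can finish within L,
    found by binary search (a toy stand-in for the paper's optimal allocation)."""
    base = [1.0 / (ci + pi) for ci, pi in zip(c, p)]   # faster stations get more
    lo, hi = 0.0, 1.0
    while fifo_episode_length([hi * b for b in base], c, p) <= L:
        hi *= 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if fifo_episode_length([mid * b for b in base], c, p) <= L:
            lo = mid
        else:
            hi = mid
    return lo * sum(base)

print(max_fifo_work(c=[1.0, 2.0], p=[5.0, 3.0], L=60.0))
```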
Techniques for Mapping Tasks to Machines in Heterogeneous Computing Systems
- In 2004 International Conference on Parallel Processing (ICPP 2004)
, 2004
"... Heterogeneous computing (HC) is the coordinated use of different types of machines, networks, and interfaces to maximize their combined performance and/or cost-effectiveness. HC systems are becoming a plausible technique for eciently solving computationally intensive problems. The applicability and ..."
Abstract
-
Cited by 33 (2 self)
Heterogeneous computing (HC) is the coordinated use of different types of machines, networks, and interfaces to maximize their combined performance and/or cost-effectiveness. HC systems are becoming a plausible technique for efficiently solving computationally intensive problems. The applicability and strength of HC systems are derived from their ability to match computing needs to appropriate resources. In an HC system, tasks need to be matched to machines, and the execution of the tasks must be scheduled. The goal of this invited keynote paper is to: (1) introduce the reader to some of the different distributed and parallel types of HC environments
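One mapping heuristic commonly studied in this area is min-min; the sketch below is a minimal, hedged rendition that assumes an "expected time to compute" matrix etc[t][m] giving task t's run time on machine m (the matrix name and example values are illustrative, not taken from the paper).

```python
def min_min(etc):
    """Min-min mapping heuristic for heterogeneous machines.

    etc[t][m]: expected time to compute task t on machine m.
    Returns (assignment, ready) where assignment[t] is the chosen machine
    and ready[m] is the resulting finish time of machine m.
    """
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines           # machine availability times
    unassigned = set(range(n_tasks))
    assignment = {}
    while unassigned:
        # For every unmapped task, find its earliest possible completion time,
        # then commit the task whose minimum completion time is smallest.
        best = None
        for t in unassigned:
            m = min(range(n_machines), key=lambda m: ready[m] + etc[t][m])
            ct = ready[m] + etc[t][m]
            if best is None or ct < best[0]:
                best = (ct, t, m)
        ct, t, m = best
        assignment[t] = m
        ready[m] = ct
        unassigned.remove(t)
    return assignment, ready

# Example: 4 tasks on 2 heterogeneous machines
mapping, finish = min_min([[3, 5], [2, 4], [6, 1], [4, 4]])
print(mapping, "makespan:", max(finish))
```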
Using Moldability to Improve the Performance of Supercomputer Jobs
, 2001
"... Distributed-memory parallel supercomputers are an important platform for the execution of high-performance parallel jobs. In order to submit a job for execution in most supercomputers, one has to specify the number of processors to be allocated to the job. However, most parallel jobs in production t ..."
Abstract
-
Cited by 31 (7 self)
Distributed-memory parallel supercomputers are an important platform for the execution of high-performance parallel jobs. In order to submit a job for execution in most supercomputers, one has to specify the number of processors to be allocated to the job. However, most parallel jobs in production today are moldable. A job is moldable when the number of processors it needs to execute can vary, although such a number has to be fixed before the job starts executing. Consequently, users have to decide how many processors to request whenever they submit a moldable job.
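The decision the abstract describes can be sketched under assumed models: an Amdahl-style speedup curve and a hypothetical estimator of queue wait time as a function of the requested partition size; the submitter then requests whichever processor count minimizes estimated turnaround. Both models are assumptions for illustration, not the paper's.

```python
def amdahl_runtime(seq_time, serial_fraction, procs):
    """Runtime under a simple Amdahl speedup model (an assumed workload model)."""
    return seq_time * (serial_fraction + (1 - serial_fraction) / procs)

def choose_request(seq_time, serial_fraction, max_procs, est_wait):
    """Pick the processor count minimizing estimated turnaround time.

    est_wait(p) is a hypothetical estimator of queue wait time when requesting
    p processors (larger requests typically wait longer in the queue).
    """
    candidates = range(1, max_procs + 1)
    return min(candidates,
               key=lambda p: est_wait(p) + amdahl_runtime(seq_time, serial_fraction, p))

# Example: wait time assumed to grow linearly with the request size
best_p = choose_request(seq_time=3600, serial_fraction=0.05, max_procs=64,
                        est_wait=lambda p: 30.0 * p)
print("request", best_p, "processors")
```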
Broadcast Trees for Heterogeneous Platforms
, 2004
"... In this paper, we deal with broadcasting on heterogeneous platforms. Typically, the message to be broadcast is split into several slices, which are sent by the source processor in a pipeline fashion. A spanning tree is used to implement this operation, and the objective is to find the tree which max ..."
Abstract
-
Cited by 29 (2 self)
In this paper, we deal with broadcasting on heterogeneous platforms. Typically, the message to be broadcast is split into several slices, which are sent by the source processor in a pipeline fashion. A spanning tree is used to implement this operation, and the objective is to find the tree which maximizes the throughput, i.e., the average number of slices sent by the source processor every time unit. We introduce several heuristics to solve this problem. The good news is that the best heuristics perform quite efficiently, reaching more than 70% of the absolute optimal throughput, thereby providing a simple yet efficient approach to achieve very good performance for broadcasting on heterogeneous platforms.
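Under a one-port, linear-cost model the throughput of a given spanning tree is straightforward to evaluate: in steady state each internal node forwards one slice to each of its children per cycle, so its cycle time is the sum of its outgoing edge costs, and the tree's throughput is the reciprocal of the worst cycle time. The sketch below evaluates a tree and grows one greedily; it illustrates the problem rather than reproducing any of the paper's heuristics.

```python
def tree_throughput(children, cost):
    """Steady-state throughput of a pipelined broadcast tree (one-port model).

    children[u]: list of u's children in the spanning tree
    cost[(u, v)]: time for u to send one slice to v
    Each node sends one slice to each child per cycle, so its cycle time is the
    sum of its outgoing edge costs; the slowest node limits the whole tree.
    """
    cycle = max((sum(cost[(u, v)] for v in kids)
                 for u, kids in children.items() if kids),
                default=float("inf"))
    return 1.0 / cycle

def greedy_tree(nodes, source, cost):
    """Grow a spanning tree by repeatedly attaching the node whose attachment
    increases its parent's cycle time the least (a simple greedy, for illustration)."""
    children = {source: []}
    while len(children) < len(nodes):
        best = None
        for u in children:                       # candidate parent already in the tree
            for v in nodes:
                if v in children:
                    continue
                new_cycle = sum(cost[(u, w)] for w in children[u]) + cost[(u, v)]
                if best is None or new_cycle < best[0]:
                    best = (new_cycle, u, v)
        _, u, v = best
        children[u].append(v)
        children[v] = []
    return children

# Uniform costs: the greedy builds a chain, whose throughput is optimal here.
nodes = ["s", "a", "b", "c"]
cost = {(u, v): 1.0 for u in nodes for v in nodes if u != v}
tree = greedy_tree(nodes, "s", cost)
print(tree, "throughput:", tree_throughput(tree, cost))
```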
Optimizing the steady-state throughput of scatter and reduce operations on heterogeneous platforms
, 2005
"... ..."
An Overview of MSHN: The Management System for Heterogeneous Networks
- In 8th IEEE Heterogeneous Computing Workshop (HCW ’99)
, 1999
"... The Management System for Heterogeneous Networks (MSHN) is a resource management system for use in heterogeneous environments. This paper describes the goals of MSHN, its architecture, and both completed and ongoing research experiments. MSHN's main goal is to determine the best way to support ..."
Abstract
-
Cited by 12 (4 self)
The Management System for Heterogeneous Networks (MSHN) is a resource management system for use in heterogeneous environments. This paper describes the goals of MSHN, its architecture, and both completed and ongoing research experiments. MSHN's main goal is to determine the best way to support the execution of many different applications, each with its own quality of service (QoS) requirements, in a distributed, heterogeneous environment. MSHN's architecture consists of seven distributed, potentially replicated components that communicate with one another using CORBA (Common Object Request Broker Architecture). MSHN's experimental investigations include: (1) the accurate, transparent determination of the end-to-end status of resources; (2) the identification of optimization criteria and how non-determinism and the granularity of models affect the performance of various scheduling heuristics that optimize those criteria; (3) the determination of how security should be incorporated betwe...
HiHCoHP: Toward a realistic communication model for hierarchical hyperclusters of heterogeneous processors
- In Proceedings of the 15th International Parallel & Distributed Processing Symposium (2nd IPDPS’01)
, 2001
"... ..."
Complexity results and heuristics for pipelined multicast operations on heterogeneous platforms
, 2004
"... ..."
Software adaptation in quality sensitive applications to deal with hardware variability
- In IEEE Great Lakes Sym. on VLSI
, 2010
"... In this work, we propose a method to reduce the impact of process variations by adapting the application's algorithm at the software layer. We introduce the concept of hard-ware signatures as the measured post manufacturing hard-ware characteristics that can be used to drive software adap-tatio ..."
Abstract
-
Cited by 9 (8 self)
In this work, we propose a method to reduce the impact of process variations by adapting the application's algorithm at the software layer. We introduce the concept of hardware signatures as the measured post-manufacturing hardware characteristics that can be used to drive software adaptation across different die. Using H.264 encoding as an example, we demonstrate significant yield improvements (as much as 40% points at 0% over-design), a reduction in over-design (by as much as 10% points at 80% yield) as well as application quality improvements (about 2.6dB increase in average PSNR at 80% yield). Further, we investigate implications of limited information exchange (i.e., signature measurement granularity) on yield and quality. We show that our proposed technique for determining optimal signature measurement points results in an improvement in PSNR of about 1.3dB over naive sampling for the H.264 encoder. We conclude that hardware-signature-based application adaptation is an easy and inexpensive (to implement), better informed (by actual application requirements) and effective way to manage yield-cost-quality tradeoffs in application-implementation design flows.
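A minimal sketch of the signature-driven idea under assumed names and numbers: the measured per-die signature is reduced to a single sustainable frequency, which caps the cycle budget per frame, and the application degrades a hypothetical motion-search-range knob until the workload fits, trading quality for yield instead of discarding the die. The paper's signatures and H.264 knobs are richer than this.

```python
# Hypothetical per-frame cycle costs for each motion-search range (illustrative only).
CYCLES_PER_FRAME = {32: 9.0e8, 24: 7.0e8, 16: 5.0e8, 8: 3.5e8}

def adapt_search_range(signature_hz, fps=30):
    """Pick the largest motion-search range whose per-frame cycle cost fits
    the die's measured frequency budget (its 'hardware signature')."""
    budget = signature_hz / fps                     # cycles available per frame
    for search_range in sorted(CYCLES_PER_FRAME, reverse=True):
        if CYCLES_PER_FRAME[search_range] <= budget:
            return search_range
    return None                                     # die cannot sustain real-time encoding

# A slow die gets a smaller search range (lower quality) instead of being discarded.
print(adapt_search_range(2.8e10))   # fast die -> 32
print(adapt_search_range(1.6e10))   # slow die -> 16
```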