CiteSeerX

Results 1 - 10 of 516
G-JavaMPI: A Grid Middleware for Transparent MPI Task Migration

by Lin Chen, Tianchi Ma, Cho-Li Wang, Francis C. M. Lau, Shanping Li - Engineering the Grid: Status and Perspective, Nova Science, 2006
"... Resources in a grid are dynamic, heterogeneous, and widely distributed. End users need a simple and efficient way to aggregate and utilize these diverse resources. We introduce a grid middleware called G-JavaMPI, which combines a high-level message passing interface with the Java language to support portable message-passing programming in a grid. Unlike traditional MPI implementations, it supports transparent migration of MPI processes during execution. This feature facilitates more flexible task scheduling and more effective resource sharing. The migration mechanism is implemented ..."
Cited by 3 (1 self)
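At its core, migrating a running process means capturing its execution state and resuming it on another node. The Python sketch below illustrates only that checkpoint/restore core under heavy simplification; it is not G-JavaMPI's actual JVM-level migration mechanism, and the `MigratableTask` class and its fields are invented for illustration.

```python
import pickle

class MigratableTask:
    """Toy task whose progress can be captured and resumed elsewhere.

    A generic checkpoint/restore sketch, not G-JavaMPI's real mechanism.
    """
    def __init__(self, total_steps):
        self.step = 0
        self.total_steps = total_steps
        self.partial_sum = 0

    def run(self, steps):
        for _ in range(steps):
            if self.step >= self.total_steps:
                break
            self.partial_sum += self.step
            self.step += 1

    def checkpoint(self):
        # Serialize the full execution state so a peer node could resume it.
        return pickle.dumps(self.__dict__)

    @classmethod
    def restore(cls, blob):
        task = cls.__new__(cls)
        task.__dict__.update(pickle.loads(blob))
        return task

# Run half the work on "node A", migrate the state, finish on "node B".
a = MigratableTask(total_steps=10)
a.run(5)
blob = a.checkpoint()             # state leaves node A
b = MigratableTask.restore(blob)  # state arrives at node B
b.run(5)
print(b.partial_sum)              # same result as an unmigrated run: 45
```

The point of the sketch is that a migrated run and an unmigrated run compute the same answer; real MPI migration must additionally preserve in-flight messages and communicator state.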

Power-Aware MPI Task Aggregation Prediction for High-End Computing Systems

by Dong Li, Dimitrios S. Nikolopoulos, Kirk Cameron, Bronis R. de Supinski, Martin Schulz - In Proceedings of the IEEE International Parallel and Distributed Processing Symposium, 2010
"... Emerging large-scale systems have many nodes with several processors per node and multiple cores per processor. These systems require effective task distribution between cores, processors, and nodes to achieve high levels of performance and utilization. Current scheduling strategies ..."
Cited by 11 (2 self)
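A power-aware aggregation predictor chooses how many tasks to co-locate per node by trading fewer active nodes against more per-node contention. The sketch below illustrates that trade-off with invented models; `predicted_time`, `predicted_power`, and all their constants are assumptions for illustration, not the paper's predictor.

```python
def predicted_time(tasks_per_node, base_time=100.0, contention=0.15):
    # Hypothetical model: co-locating tasks adds per-node contention,
    # stretching execution time linearly with tasks per node.
    return base_time * (1 + contention * (tasks_per_node - 1))

def predicted_power(tasks_per_node, total_tasks, node_idle=100.0, per_task=20.0):
    # Hypothetical model: each active node draws a fixed idle power
    # plus a per-task increment.
    nodes = total_tasks / tasks_per_node
    return nodes * (node_idle + per_task * tasks_per_node)

def best_aggregation(total_tasks, candidates):
    # Pick the tasks-per-node count minimizing predicted energy (J = W * s).
    def energy(k):
        return predicted_power(k, total_tasks) * predicted_time(k)
    return min(candidates, key=energy)

best = best_aggregation(64, [1, 2, 4, 8])
print(best)  # 4
```

Under these made-up constants, full spreading (1 task/node) wastes idle power and full packing (8 tasks/node) pays too much contention, so an intermediate aggregation wins.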

Scalable, Fault-Tolerant Membership for MPI Tasks on HPC Systems

by Jyothish Varma, Chao Wang, Frank Mueller, Christian Engelmann, Stephen L. Scott - In International Conference on Supercomputing, 2006
"... Reliability is increasingly becoming a challenge for high-performance computing (HPC) systems with thousands of nodes, such as IBM's Blue Gene/L. A shorter mean time to failure can be addressed by adding fault tolerance to reconfigure working nodes to ensure that communication and computation can proceed. ... [The protocol] maintains a consistent view of active nodes in the presence of faults. Our protocol shows response times on the order of hundreds of microseconds and single-digit milliseconds for reconfiguration using MPI over Blue Gene/L and TCP over Gigabit Ethernet, respectively. The protocol can be adapted to match the network ..."
Cited by 13 (9 self)
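The essential property of such a membership service is that after a failure, every surviving rank installs the same view of the active set. The toy sketch below shows an epoch-numbered view installed by a coordinator; it is a deliberately simplified stand-in (the `Rank` class and `reconfigure` helper are invented), not the paper's protocol, which must reach agreement without a reliable coordinator.

```python
class Rank:
    """One MPI rank's local copy of the membership view."""
    def __init__(self, rank_id, all_ranks):
        self.id = rank_id
        self.epoch = 0
        self.view = set(all_ranks)

    def install(self, epoch, view):
        # Ignore stale reconfiguration messages from an older epoch.
        if epoch > self.epoch:
            self.epoch, self.view = epoch, set(view)

def reconfigure(ranks, failed):
    # A surviving coordinator computes the new view and broadcasts it,
    # so every live rank installs the same epoch and member set.
    survivors = [r for r in ranks if r.id != failed]
    coordinator = survivors[0]
    new_epoch = coordinator.epoch + 1
    new_view = coordinator.view - {failed}
    for r in survivors:
        r.install(new_epoch, new_view)
    return survivors

ranks = [Rank(i, range(6)) for i in range(6)]
ranks = reconfigure(ranks, failed=4)
ranks = reconfigure(ranks, failed=1)
views = {frozenset(r.view) for r in ranks}
print(len(views), sorted(next(iter(views))))  # 1 [0, 2, 3, 5]
```

All survivors hold exactly one, identical view after each reconfiguration, which is the consistency property the abstract refers to.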

StarPU-MPI: Task programming over clusters of machines enhanced with accelerators

by Olivier Aumage, Nathalie Furmento, Raymond Namyst, Samuel Thibault - In Siegfried Benkner, Jesper Larsson Träff, and Jack Dongarra, editors, The 19th European MPI Users' Group Meeting (EuroMPI 2012), volume 7490 of LNCS, 2012
"... HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. ..."
Cited by 13 (0 self)

A 1 PB/s File System to Checkpoint Three Million MPI Tasks

by Adam Moody, Kathryn Mohror, Dhabaleswar K.
"... With the massive scale of high-performance computing systems, long-running scientific parallel applications periodically save the state of their execution to files called checkpoints to recover from system failures. Checkpoints are stored on external parallel file systems, but limited bandwidth makes ... million MPI processes, which is 20x faster than the system RAM disk and 1000x faster than the parallel file system."
Cited by 4 (0 self)
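Back-of-the-envelope arithmetic shows why aggregate bandwidth dominates checkpoint time. The sketch below assumes a hypothetical 500 MiB of state per MPI process (a figure chosen for illustration, not taken from the paper); the 1 PB/s figure and the 1000x parallel-file-system gap come from the abstract.

```python
def checkpoint_time(tasks, per_task_bytes, aggregate_bw_bytes_per_s):
    # Time to write one full application checkpoint at a given
    # aggregate bandwidth (total bytes / bytes per second).
    return tasks * per_task_bytes / aggregate_bw_bytes_per_s

TASKS = 3_000_000
PER_TASK = 500 * 2**20          # assumed 500 MiB of state per MPI process
PB = 10**15                     # 1 petabyte in bytes

t_fast = checkpoint_time(TASKS, PER_TASK, 1 * PB)         # 1 PB/s aggregate
t_pfs  = checkpoint_time(TASKS, PER_TASK, 1 * PB / 1000)  # 1000x slower PFS
print(round(t_fast, 2), round(t_pfs, 1))  # 1.57 1572.9
```

Under this assumed checkpoint size, the same ~1.6 PB of state drains in under two seconds at 1 PB/s but takes roughly half an hour through a 1000x-slower parallel file system, which is the gap the abstract quantifies.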

A Task Migration Mechanism for MPI Applications

by Youhui Zhang, Dan Pei, Dongsheng Wang, Weimin Zheng
"... Recently, clusters of computers (COCs) have increasingly been used to run large parallel programs. Task migration is a desirable and useful facility for implementing load balancing and high availability in COCs. This paper presents a quick migration protocol for MPI tasks, which allows non-migrating tasks ..."

MagPIe: MPI’s Collective Communication Operations for Clustered Wide Area Systems

by Thilo Kielmann, Rutger F. H. Hofman, Henri E. Bal, Aske Plaat, Raoul A. F. Bhoedjang - In Proceedings of PPoPP '99, 1999
"... Writing parallel applications for computational grids is a challenging task. To achieve good performance, algorithms designed for local area networks must be adapted to the differences in link speeds. An important class of algorithms is collective operations, such as broadcast and reduce. We have ..."
Cited by 172 (27 self)
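The key idea behind wide-area-aware collectives is to cross each slow wide-area link as few times as possible. The sketch below only counts wide-area messages for a flat broadcast versus a cluster-aware one; it is a simplified model of the idea, not MagPIe's actual algorithms, and both functions are invented for illustration.

```python
def flat_broadcast_wan_messages(clusters, root_cluster=0):
    # Root sends directly to every other process; every message to a
    # process outside the root's cluster crosses a slow wide-area link.
    return sum(n for c, n in enumerate(clusters) if c != root_cluster)

def cluster_aware_wan_messages(clusters, root_cluster=0):
    # Cluster-aware scheme: one wide-area message per remote cluster,
    # sent to a local coordinator that re-broadcasts over the fast LAN.
    return len(clusters) - 1

clusters = [16, 16, 16, 16]   # four clusters of 16 processes each
print(flat_broadcast_wan_messages(clusters),
      cluster_aware_wan_messages(clusters))  # 48 3
```

With four 16-process clusters, 48 wide-area transfers collapse to 3; since wide-area latency dominates, this is where the speedup of grid-aware collectives comes from.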

Hector: Automated Task Allocation for MPI

by Samuel H. Russ, Brian Flachs, Jonathan Robinson, Bjorn Heckel, 1995
"... Many institutions already have networks of workstations, which could potentially be harnessed as a powerful parallel processing resource. A new, automatic task allocation system has been built on top of MPI, an environment that permits parallel programming by using the message-passing paradigm ..."
Cited by 8 (5 self)
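A simple way to automate task allocation over a workstation network is a least-loaded heuristic: place each task on whichever host currently has the lightest load. The sketch below shows that heuristic; it is an illustrative policy, not Hector's actual allocator, and the task costs and hostnames are invented.

```python
import heapq

def allocate(tasks, workstation_loads):
    """Greedy allocation: each task goes to the currently least-loaded host.

    An illustrative least-loaded heuristic, not Hector's actual policy.
    """
    heap = [(load, host) for host, load in workstation_loads.items()]
    heapq.heapify(heap)
    placement = {}
    for task, cost in tasks:
        load, host = heapq.heappop(heap)   # lightest host right now
        placement[task] = host
        heapq.heappush(heap, (load + cost, host))  # account for new work
    return placement

tasks = [("t0", 4), ("t1", 2), ("t2", 2), ("t3", 1)]
loads = {"ws-a": 0.0, "ws-b": 1.0, "ws-c": 3.0}
placement = allocate(tasks, loads)
print(placement)
```

Heavier tasks land on idle machines first, and already-busy workstations are skipped until the loads even out, which is the behavior an automatic allocator layered on MPI would aim for.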

Implementing MPI on the BlueGene/L Supercomputer

by George Almási, Charles Archer, José G. Castaños, C. Chris Erway, Philip Heidelberger, José E. Moreira, Kurt Pinnow, Joe Ratterman, Nils Smeds, Brian Toonen - In Proceedings of Euro-Par 2004
"... The BlueGene/L supercomputer will consist of 65,536 dual-processor compute nodes interconnected by two high-speed networks: a three-dimensional torus network and a tree topology network. Each compute node can only address its own local memory, making message passing the natural programming ... against the hardware limits and also the relative performance of the different modes of operation of BlueGene/L. We show that dedicating one of the processors of a node to communication functions greatly improves the bandwidth achieved by MPI operations, whereas running two MPI tasks per compute node can ..."
Cited by 2 (0 self)

MPI as a Coordination Layer for Communicating HPF Tasks

by Ian Foster, David R. Kohr, Rakesh Krishnaiyer, Alok Choudhary - In Proceedings of the 1996 MPI Developers Conference, IEEE Computer, 1996
"... Data-parallel languages such as High Performance Fortran (HPF) present a simple execution model in which a single thread of control performs high-level operations on distributed arrays. These languages can greatly ease the development of parallel programs. Yet there are large classes of applications for which a mixture of task and data parallelism is most appropriate. Such applications can be structured as collections of data-parallel tasks that communicate by using explicit message passing. Because the Message Passing Interface (MPI) defines standardized, familiar mechanisms ..."
Cited by 3 (3 self)
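The structure described above, coarse-grained data-parallel tasks coordinated purely by message passing, can be mimicked in miniature with threads and a channel. The sketch below uses Python threads and a `Queue` as a stand-in for HPF tasks connected by MPI-style send/receive; the task functions and channel are invented for illustration.

```python
from queue import Queue
from threading import Thread

def producer_task(out_channel):
    # "Data-parallel" phase (here just a comprehension), then a send.
    data = [x * x for x in range(8)]
    out_channel.put(data)          # MPI-style send to the next task

def consumer_task(in_channel, results):
    # Receive the array, then run the next data-parallel phase on it.
    data = in_channel.get()        # MPI-style blocking receive
    results.append(sum(data))

channel = Queue()
results = []
threads = [Thread(target=producer_task, args=(channel,)),
           Thread(target=consumer_task, args=(channel, results))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results[0])  # sum of squares 0..7 = 140
```

Each task keeps its own simple internal execution model; all coordination between them happens through the explicit channel, which is the division of labor the abstract advocates between HPF and MPI.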
Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University