Results 1 - 10 of 27,112

The high-level parallel language ZPL improves productivity and performance

by Bradford L. Chamberlain, Sung-eun Choi, Steven J. Deitz, Lawrence Snyder - In Proceedings of the IEEE International Workshop on Productivity and Performance in High-End Computing, 2004
"... In this paper, we qualitatively address how high-level parallel languages improve productivity and performance. Using ZPL as a case study, we discuss advantages that stem from a language having both a global (rather than a per-processor) view of the computation and an underlying performance model that ..."
Abstract - Cited by 25 (2 self)

Virtual Topologies: A New Concurrency Abstraction for High-Level Parallel Languages

by James Philbin, Suresh Jagannathan, Rajiv Mirani - DIMACS Workshop on Interconnection Networks and Mapping and Scheduling Parallel Computations, 1994
"... Abstraction for High-Level Parallel Languages (Preliminary Report). James Philbin¹, Suresh Jagannathan¹, Rajiv Mirani². ¹Computer Science Division, NEC Research Institute, 4 Independence Way, {philbin|suresh}@research.nj.nec.com; ²Department of Computer Science, Yale University, New Haven, ..."
Abstract - Cited by 5 (0 self)

Performance of a High-Level Parallel Language on a High-Speed Network

by Henri Bal, Raoul Bhoedjang, Rutger Hofman, Ceriel Jacobs, Koen Langendoen, Tim Rühl, Kees Verstoep - Journal of Parallel and Distributed Computing, 1997
"... Clusters of workstations are often claimed to be a good platform for parallel processing, especially if a fast network is used to interconnect the workstations. Indeed, high performance can be obtained for low-level message passing primitives on modern networks like ATM and Myrinet. Most applications ..."
Abstract - Cited by 21 (14 self)
"... In this paper we investigate the issues involved in implementing a high-level programming environment on a fast network. We have implemented a portable runtime system for an object-based language (Orca) on a collection of processors connected by a Myrinet network. Many performance optimizations were required ..."

Evaluating a High-Level Parallel Language (GpH) for Computational Grids

by Abdallah D. Al Zain, Phil W. Trinder, Greg J. Michaelson, Hans-Wolfgang Loidl - IEEE Transactions on Parallel and Distributed Systems
"... Computational GRIDs potentially offer low-cost, readily available, and large-scale high-performance platforms. For the parallel execution of programs, however, computational GRIDs pose serious challenges: they are heterogeneous and have hierarchical and often shared interconnects, with high and variable latencies between clusters. This paper investigates whether a programming language with high-level parallel coordination and a Distributed Shared Memory (DSM) model can deliver good and scalable performance on a range of computational GRID configurations. The high-level language ..."
Abstract - Cited by 6 (6 self)


P³L: a Structured High-level Parallel Language, and its Structured Support

by Bruno Bacci, Marco Danelutto, Salvatore Orlando, Susanna Pelagatti, Marco Vanneschi , 1993
"... This paper presents a parallel programming methodology that ensures easy programming, efficiency, and portability of programs to different machines belonging to the class of the general-purpose, distributed memory, MIMD architectures. The methodology is based on the definition of a new, high-level, ..."
Abstract

The Paradyn Parallel Performance Measurement Tools

by Barton P. Miller, Mark D. Callaghan, Jonathan M. Cargille, Jeffrey K. Hollingsworth, R. Bruce Irvin, Karen L. Karavanic, Krishna Kunchithapadam, Tia Newhall - IEEE Computer, 1995
"... Paradyn is a performance measurement tool for parallel and distributed programs. Paradyn uses several novel technologies so that it scales to long running programs (hours or days) and large (thousand node) systems, and automates much of the search for performance bottlenecks. It can provide precise ..."
Abstract - Cited by 447 (39 self)
"... It also provides an open interface for performance visualization, and a simple programming library to allow these visualizations to interface to Paradyn. Paradyn can gather and present performance data in terms of high-level parallel languages (such as data parallel Fortran) and can measure programs ..."

Workload decomposition

by Sergio Briguglio, Beniamino Di Martino, Gregorio Vlad, John Wiley
"... strategies for hierarchical distributed-shared memory parallel systems and their implementation with integration of high-level parallel languages ..."
Abstract

Visualizing High-Level Communication And Synchronization

by Rutger Hofman, Koen Langendoen, Henri Bal - IEEE Int. Conference on Algorithms and Architectures for Parallel Processing (ICA3PP), Singapore, 1996
"... High-level parallel languages ease writing of parallel programs. However, since they deepen the gap between language and underlying hardware, performance debugging is hard. It is essential to use tools that present the user with performance data at the language level. Besides this, for hard performa ..."
Abstract - Cited by 2 (1 self)

Multi-core Threads, OpenMP

by Christian Perez (INRIA), Vincent Pichon (EDF R&D), 2012
"... Super-computer (Exascale) Programming a parallel applications (High level) parallel languages HPF, PGAS, … Not yet mature Platform oriented models ..."
Abstract