Results 1 – 6 of 6
Parallel Computing: Performance Metrics and Models
, 1995
"... We review the many performance metrics that have been proposed for parallel systems (i.e., program  architecture combinations). These include the many vari ants of speedup, efficiency, and isoefficiency. We give reasons why none of these metrics should be used independent of the run time of the pa ..."
Abstract

Cited by 3 (0 self)
We review the many performance metrics that have been proposed for parallel systems (i.e., program–architecture combinations). These include the many variants of speedup, efficiency, and isoefficiency. We give reasons why none of these metrics should be used independent of the run time of the parallel system. The run time remains the dominant metric, and the remaining metrics are important only to the extent that they favor systems with better run time. We also lay out the minimum requirements that a model for parallel computers should meet before it can be considered acceptable. While many models have been proposed, none meets all of these requirements. The BSP and LogP models are considered, and the importance of the specifics of the interconnect topology in developing good parallel algorithms is pointed out.
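The metrics surveyed in this abstract relate to one another through the standard textbook definitions of speedup and efficiency. A minimal sketch of those relationships (the formulas below are the conventional definitions, not taken from the paper itself, and the timings are hypothetical):

```python
def speedup(t_serial, t_parallel):
    """Speedup S(p) = T(1) / T(p): how much faster the parallel run is."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """Efficiency E(p) = S(p) / p: fraction of ideal linear speedup achieved."""
    return speedup(t_serial, t_parallel) / p

# Hypothetical timings: 100 s serial, 16 s on p = 8 processors.
s = speedup(100.0, 16.0)        # 6.25
e = efficiency(100.0, 16.0, 8)  # 0.78125
```

Note how both metrics are derived from run times, which is consistent with the paper's claim that run time remains the dominant metric.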
Hypercube Algorithms for Image Transformations
 Proceedings of the 1989 International Conference on Parallel Processing, The Pennsylvania State
, 1989
"... Efficient hypercube algorithms are developed for the following image transformations: shrinking, expanding, translation, rotation, and scaling. A 2 step shrinking and expanding of a gray scale NN image can be done in O (k) time on an N processor MIMD hypercube and in O (logN) time on an SIMD ..."
Abstract

Cited by 2 (2 self)
Efficient hypercube algorithms are developed for the following image transformations: shrinking, expanding, translation, rotation, and scaling. A 2^k-step shrinking and expanding of a gray-scale N × N image can be done in O(k) time on an N^2-processor MIMD hypercube and in O(log N) time on an SIMD hypercube. Translation, rotation, and scaling of an N × N image take O(log N) time on an N^2-processor hypercube.
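For reference, a common sequential definition of one shrink/expand step on a gray-scale image is a window-min/window-max over each pixel's 3 × 3 neighborhood (this is an illustrative formulation, not the paper's parallel algorithm, which computes the 2^k-step versions in O(k) or O(log N) time):

```python
def shrink(img):
    """One shrinking step: each pixel becomes the min of its 3x3 neighborhood."""
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            out[i][j] = min(
                img[a][b]
                for a in range(max(0, i - 1), min(n, i + 2))
                for b in range(max(0, j - 1), min(n, j + 2))
            )
    return out

def expand(img):
    """One expanding step: each pixel becomes the max of its 3x3 neighborhood."""
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            out[i][j] = max(
                img[a][b]
                for a in range(max(0, i - 1), min(n, i + 2))
                for b in range(max(0, j - 1), min(n, j + 2))
            )
    return out
```

A single bright pixel expands to fill its neighborhood and shrinks away entirely, which is the intuition behind using these operations for noise removal and region growing.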
Parallel image correlation: case study to examine trade-offs in algorithm-to-machine mappings
 The Journal of Supercomputing
, 1998
"... Abstract. Performance of a parallel algorithm on a parallel machine depends not only on the time complexity of the algorithm, but also on how the underlying machine supports the fundamental operations used by the algorithm. This study analyzes various mappings of image correlation algorithms in SIM ..."
Abstract

Cited by 2 (1 self)
Performance of a parallel algorithm on a parallel machine depends not only on the time complexity of the algorithm, but also on how the underlying machine supports the fundamental operations used by the algorithm. This study analyzes various mappings of image correlation algorithms in SIMD, MIMD, and mixed-mode environments. Experiments were conducted on the Intel Paragon, MasPar MP-1, nCUBE 2, and PASM prototype. The machine features considered in this study include: modes of parallelism, communication/computation ratio, network topology and implementation, SIMD CU/PE overlap, and communication/computation overlap. Performance of an implementation can be enhanced by using algorithmic techniques that match the machine features. Some algorithmic techniques discussed here are additional communication versus redundant computation, data block transfers, and communication/computation overlap. The results presented are applicable to a large class of image processing tasks. Case studies, such as the one presented here, are a necessary step in developing software tools for mapping an application task onto a single parallel machine and for mapping the subtasks of an application task, or a set of independent application tasks, onto a heterogeneous suite of parallel machines.
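For readers unfamiliar with the underlying task, 2-D image correlation slides a template over an image and sums elementwise products at each offset. A minimal sequential sketch (illustrative only; the paper studies how to distribute exactly this computation across SIMD/MIMD processors):

```python
def correlate(image, template):
    """Sum-of-products correlation at every valid offset (no padding)."""
    n, m = len(image), len(template)
    out_size = n - m + 1
    out = [[0] * out_size for _ in range(out_size)]
    for i in range(out_size):
        for j in range(out_size):
            out[i][j] = sum(
                image[i + a][j + b] * template[a][b]
                for a in range(m)
                for b in range(m)
            )
    return out

# A 2x2 template slid over a 3x3 image yields a 2x2 result.
result = correlate([[1, 2, 3], [4, 5, 6], [7, 8, 9]],
                   [[1, 0], [0, 1]])
# result == [[6, 8], [12, 14]]
```

The per-offset sums are independent, which is what makes the data-partitioning and communication/computation trade-offs studied in the paper possible.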
PARALLEL ALGORITHMS FOR
, 1992
"... I would like to acknowledge the guidance and support of my advisor, Professor Meghanad D. Wagh. This thesis was possible because of his patience and great knowledge. I would also like thank to all my friends in Lehigh University. iii ..."
Abstract
I would like to acknowledge the guidance and support of my advisor, Professor Meghanad D. Wagh. This thesis was possible because of his patience and great knowledge. I would also like to thank all my friends at Lehigh University.
Image Transformations on Hypercube and Mesh Multicomputers
"... Efficient hypercube and mesh algorithms are developed for the following image transformations: shrinking, expanding, translation, rotation, and scaling. A 2 step shrinking and expanding of a gray scale NN image can be done in O (k) time on an N processor MIMD hypercube, in O (logN) time on an ..."
Abstract
Efficient hypercube and mesh algorithms are developed for the following image transformations: shrinking, expanding, translation, rotation, and scaling. A 2^k-step shrinking and expanding of a gray-scale N × N image can be done in O(k) time on an N^2-processor MIMD hypercube, in O(log N) time on an SIMD hypercube, and in O(2^k) time on an N × N mesh. Translation, rotation, and scaling of an N × N image take O(log N) time on an N^2-processor hypercube and O(N) time on an N × N mesh.
Examine Trade-Offs in Algorithm-to-Machine Mappings
"... Performance of a parallel algorithm on a parallel machine depends not only on the time complexity of the algorithm, but also on how the underlying machine supports the fundamental operations used by the algorithm. This study analyzes various mappings of image correlation algorithms in SIMD, MIMD, an ..."
Abstract
Performance of a parallel algorithm on a parallel machine depends not only on the time complexity of the algorithm, but also on how the underlying machine supports the fundamental operations used by the algorithm. This study analyzes various mappings of image correlation algorithms in SIMD, MIMD, and mixed-mode environments. Experiments were conducted on the Intel Paragon, MasPar MP-1, nCUBE 2, and PASM prototype. The machine features considered in this study include: modes of parallelism, communication/computation ratio, network topology and implementation, SIMD CU/PE overlap, and communication/computation overlap. Performance of an implementation can be enhanced by using algorithmic techniques that match the machine features. Some algorithmic techniques discussed here are additional communication versus redundant computation, data block transfers, and communication/computation overlap. The results presented are applicable to a large class of image processing tasks. Case studies, such as the one presented here, are a necessary step in developing software tools for mapping an application task onto a single parallel machine and for mapping the subtasks of an application task, or a set of independent application tasks, onto a heterogeneous suite of parallel machines.