Results 11 - 20 of 262,526
Algorithmic Support for Commodity-Based Parallel Computing Systems
2003
"... The Computational Plant, or Cplant, is a commodity-based distributed-memory supercomputer under development at Sandia National Laboratories. ..."
Cited by 3 (2 self)
Parallel Computer Systems Based on Numerical Integrations
"... This paper deals with continuous system simulation. The systems can be described by a system of differential equations or by a block diagram. Differential equations are usually solved by numerical methods integrated into simulation software such as Matlab, Maple or TKSL. The Taylor series method has been used for the numerical solution of differential equations; it has proved to be both very accurate and fast, and it can also be processed on parallel systems. The aim of the thesis is to design, implement and compare a few versions of the parallel system. ..."
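The Taylor-series approach mentioned in the abstract above can be illustrated with a minimal sketch. For a linear system y' = A·y the k-th derivative is simply A^k·y, so one explicit step sums (h^k/k!)·A^k·y up to a chosen order. This is a generic illustration under that assumption, not the thesis's implementation; the matrix A, the order, and the step size below are invented for the example.

    import numpy as np

    def taylor_step(A, y, h, order=10):
        # One explicit Taylor-series step for y' = A @ y.
        # The k-th term is (h^k / k!) * A^k @ y; each term is built from the
        # previous one, so no factorials or matrix powers are formed explicitly.
        term = y.copy()
        y_next = y.copy()
        for k in range(1, order + 1):
            term = (h / k) * (A @ term)
            y_next += term
        return y_next

    # Example: harmonic oscillator y'' = -y written as a first-order system.
    A = np.array([[0.0, 1.0], [-1.0, 0.0]])
    y = np.array([1.0, 0.0])
    for _ in range(100):
        y = taylor_step(A, y, h=0.1)
    print(y)   # close to [cos(10), -sin(10)]

Raising the order lets the step size grow while keeping accuracy, which is the property the abstract attributes to the method.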
An Efficient Scheduling Algorithm for Multiprogramming on Parallel Computing Systems
"... Many scheduling schemes for multiprogramming on parallel machines have been proposed in the literature. The simplest scheduling method is local scheduling: with local scheduling there is only a single queue in each processor. Except for higher (or lower) priorities being given, proces ..."
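As a rough illustration of the local scheduling described above, the sketch below gives each processor its own run queue, ordered only by priority, with no coordination between processors. The class and task names are hypothetical, chosen only for the example.

    import heapq

    class Processor:
        # One processor with its own local run queue (a priority heap).
        def __init__(self, pid):
            self.pid = pid
            self.queue = []      # entries: (priority, arrival order, task)
            self.arrivals = 0

        def submit(self, task, priority=0):
            heapq.heappush(self.queue, (priority, self.arrivals, task))
            self.arrivals += 1

        def run_next(self):
            # Serve the highest-priority task (lower number = higher priority).
            if not self.queue:
                return None
            _, _, task = heapq.heappop(self.queue)
            return task

    # Each processor drains its own queue independently of the others.
    cpus = [Processor(i) for i in range(4)]
    cpus[0].submit("batch job", priority=1)
    cpus[0].submit("interactive task", priority=0)
    print(cpus[0].run_next())   # -> interactive task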
Random Number Generation on Parallel Computer Systems
"... The generation of large-scale series of "pseudo-random" numbers is essential to the use of Monte Carlo simulations. All good pseudo-random number generators (PRNGs) have a very large period, for which they satisfy discrete mathematical principles of uniform distribution. The size of the period is important for us to consider, since large amounts, on the order of 10^18, could be used in these simulations. We implemented a parallel PRNG for use in High Performance Fortran. There are several PRNGs we wish to implement on parallel architectures and systematically test ..."
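One standard way to parallelize a PRNG while preserving the period properties stressed in the abstract above is the leapfrog scheme: process p of P consumes elements p, p+P, p+2P, ... of a single global stream, so the per-process streams are disjoint by construction. The sketch below applies this to a small linear congruential generator; the paper's own generator and its High Performance Fortran implementation are not shown, and the parameters here are purely illustrative.

    M = 2**31 - 1    # modulus    (illustrative LCG parameters; a production
    A = 16807        # multiplier  generator would be chosen far more carefully)

    def leapfrog_stream(seed, rank, nprocs, count):
        # Advance to this process's first element of the global stream ...
        x = seed % M
        for _ in range(rank):
            x = (A * x) % M
        # ... then stride by nprocs: x -> (A**nprocs) * x  (mod M).
        a_p = pow(A, nprocs, M)
        out = []
        for _ in range(count):
            out.append(x / M)          # uniform in (0, 1)
            x = (a_p * x) % M
        return out

    # Two "processes" together reproduce the interleaved serial stream.
    s0 = leapfrog_stream(12345, rank=0, nprocs=2, count=4)
    s1 = leapfrog_stream(12345, rank=1, nprocs=2, count=4)
    print([v for pair in zip(s0, s1) for v in pair])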
Fast Parallel Algorithms for Short-Range Molecular Dynamics
JOURNAL OF COMPUTATIONAL PHYSICS, 1995
"... Three parallel algorithms for classical molecular dynamics are presented. The first assigns each processor a fixed subset of atoms; the second assigns each a fixed subset of inter-atomic forces to compute; the third assigns each a fixed spatial region. The algorithms are suitable for molecular dynamics ..."
Cited by 653 (7 self)
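Of the three strategies listed above, the first (atom decomposition) is the simplest to sketch: each processor owns a fixed block of atoms and accumulates the total short-range force on the atoms it owns, which requires access to all positions. The sketch below only mimics the partitioning with a loop over the "processors" and uses a cut-off Lennard-Jones force as an assumed example; it is not the paper's implementation.

    import numpy as np

    def lj_force(r_vec, eps=1.0, sigma=1.0, cutoff=2.5):
        # Lennard-Jones pair force on atom i from atom j, cut off at short range.
        r2 = float(np.dot(r_vec, r_vec))
        if r2 == 0.0 or r2 > cutoff * cutoff:
            return np.zeros_like(r_vec)
        sr6 = (sigma * sigma / r2) ** 3
        return 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r2 * r_vec

    def atom_decomposition_forces(pos, nprocs):
        # Atom decomposition: "processor" p owns a fixed block of atoms and
        # accumulates the force on each atom it owns (needs all positions).
        n = len(pos)
        forces = np.zeros_like(pos)
        for owned in np.array_split(np.arange(n), nprocs):
            for i in owned:
                for j in range(n):
                    if j != i:
                        forces[i] += lj_force(pos[i] - pos[j])
        return forces

    pos = np.random.rand(32, 3) * 5.0
    print(atom_decomposition_forces(pos, nprocs=4).shape)   # (32, 3)

The force and spatial decompositions mentioned in the abstract differ in what each processor owns (a block of atom pairs or a region of the simulation box) rather than in the pair force itself.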
Hybrid Equipartitioning Job Scheduling Policies for Parallel Computer Systems
"... We propose a new family of job scheduling policies for parallel computer systems that can be optimized to adapt to changes in the workload. Simulation optimization is used to reveal important properties of optimal job scheduling policies. For this optimization a new approach is suggested that combin ..."
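Equipartitioning in its simplest form divides the available processors as evenly as possible among the jobs currently in the system, recomputing the split whenever a job arrives or departs. The sketch below shows only that baseline allocation step; the hybrid, optimized policies proposed in the paper above go beyond it, and the job names and limits are invented for the example.

    def equipartition(total_procs, jobs):
        # Split processors as evenly as possible among running jobs, never
        # giving a job more processors than it can use (jobs: name -> max procs).
        alloc = {name: 0 for name in jobs}
        active = dict(jobs)
        remaining = total_procs
        while remaining > 0 and active:
            share = max(1, remaining // len(active))
            for name in list(active):
                give = min(share, active[name] - alloc[name], remaining)
                alloc[name] += give
                remaining -= give
                if alloc[name] == active[name]:
                    del active[name]          # job cannot use more processors
                if remaining == 0:
                    break
        return alloc

    # 16 processors shared by three jobs with different maximum parallelism.
    print(equipartition(16, {"A": 10, "B": 4, "C": 10}))   # {'A': 6, 'B': 4, 'C': 6}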
GPFS: A Shared-Disk File System for Large Computing Clusters
In Proceedings of the 2002 Conference on File and Storage Technologies (FAST), 2002
"... GPFS is IBM's parallel, shared-disk file system for cluster computers, available on the RS/6000 SP parallel supercomputer and on Linux clusters. GPFS is used on many of the largest supercomputers in the world. GPFS was built on many of the ideas that were developed in the academic community over ..."
Cited by 521 (3 self)
Parallel discrete event simulation
1990
"... Parallel discrete event simulation (PDES), sometimes called distributed simulation, refers to the execution of a single discrete event simulation program on a parallel computer. PDES has attracted a considerable amount of interest in recent years. From a pragmatic standpoint, this interest arises ..."
Cited by 818 (39 self)
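The object being parallelized in PDES is the event list of an ordinary discrete event simulation. The minimal sequential loop below shows that core; in PDES this single list is split across logical processes, each with its own clock and event list, which exchange timestamped messages and must be kept synchronized. The queueing example is an assumption made up for illustration, not taken from the paper.

    import heapq

    def simulate(initial_events, handlers, end_time):
        # Sequential core of discrete event simulation: repeatedly pop the
        # event with the smallest timestamp and let its handler schedule more.
        events = list(initial_events)          # (time, kind, data) tuples
        heapq.heapify(events)
        now = 0.0
        while events:
            now, kind, data = heapq.heappop(events)
            if now > end_time:
                break
            for new_event in handlers[kind](now, data):
                heapq.heappush(events, new_event)
        return now

    # Toy single-server model: arrivals every 2.0 time units, service takes 1.5.
    def arrival(t, i):
        return [(t + 1.5, "departure", i), (t + 2.0, "arrival", i + 1)]

    def departure(t, i):
        print(f"customer {i} departs at t={t:.1f}")
        return []

    simulate([(0.0, "arrival", 0)], {"arrival": arrival, "departure": departure}, 10.0)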
Parallel Numerical Linear Algebra
1993
"... We survey general techniques and open problems in numerical linear algebra on parallel architectures. We first discuss basic principles of parallel processing, describing the costs of basic operations on parallel machines, including general principles for constructing efficient algorithms. We illustrate ..."
Cited by 773 (23 self)
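A worked example of the kind of cost analysis the survey above describes: a model frequently used for parallel machines charges alpha (latency) plus beta per word for each message, and gamma per floating-point operation. The sketch estimates a block-row distributed matrix-vector product under that model; the parameter values and the all-gather cost formula are assumptions for illustration, not taken from the survey.

    import math

    ALPHA = 1e-6     # message latency in seconds          (assumed value)
    BETA = 1e-9      # time per word transferred           (assumed value)
    GAMMA = 1e-10    # time per floating-point operation   (assumed value)

    def matvec_time(n, p):
        # Estimated time for y = A @ x with an n-by-n matrix stored by block
        # rows on p processors; x is replicated by an all-gather first.
        flops = 2.0 * n * n / p                          # local dot products
        comm = math.log2(p) * ALPHA + BETA * n * (p - 1) / p if p > 1 else 0.0
        return GAMMA * flops + comm

    for p in (1, 4, 16, 64):
        print(p, f"{matvec_time(10_000, p):.3e} s")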
The Amoeba Distributed Operating System
1992
"... Roughly speaking, we can divide the history of modern computing into the following eras: the 1970s, timesharing (1 computer with many users); the 1980s, personal computing (1 computer per user); and the 1990s, parallel computing (many computers per user). Until about 1980, computers were huge, e ... people's computers or share files in various (often ad hoc) ways. Nowadays some systems have many processors per user, either in the form of a parallel computer or a large collection of CPUs shared by a small user community. Such systems are usually called parallel or distributed computer systems ..."
Cited by 1069 (5 self)