Results 1 - 10 of 30
Computational Steering Software Systems and Strategies (1997)
"... Scientific visualization clearly plays a central role in the analysis of data generated by scientific simulations. Unfortunately, though visualization may in itself be more computationally intensive than the original simulation, it is often performed only as a mystical post-processing step after a l ..."
Abstract
-
Cited by 34 (12 self)
- Add to MetaCart
Scientific visualization clearly plays a central role in the analysis of data generated by scientific simulations. Unfortunately, though visualization may in itself be more computationally intensive than the original simulation, it is often performed only as a mystical post-processing step after a large-scale batch job is run. An alternative to the usual scientific computing algorithm of performing modeling, computation, and visualization sequentially is to "close the loop" and interactively steer all phases of the application. In this paper, we discuss two approaches we have undertaken to interactively visualize and steer scientific applications. The first of these approaches has been encapsulated in a software system called SCIRun. SCIRun is a shared memory based scientific programming environment that allows the interactive construction, debugging and steering of large-scale scientific computations. Using this "computational workbench," a scientist can design and modify simulations ...
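The "computational workbench" described here is, at its core, a dataflow program the user can rewire and re-run during execution. A toy dataflow pipeline in Python, as a sketch of the idea only (the Module class and stage names are invented, not SCIRun's API):

    class Module:
        """A dataflow node: pulls inputs from upstream modules on demand."""
        def __init__(self, fn, *inputs):
            self.fn, self.inputs = fn, inputs

        def run(self):
            return self.fn(*(m.run() for m in self.inputs))

    # A three-stage pipeline: model -> solve -> visualize.
    mesh      = Module(lambda: list(range(5)))
    solve     = Module(lambda m: [x * x for x in m], mesh)
    visualize = Module(lambda s: print("solution:", s), solve)

    visualize.run()  # editing an upstream module and re-running is the steering step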
PROTOMOL, an Object-Oriented Framework for Prototyping Novel Algorithms for Molecular Dynamics (2002)
In Computational Science—ICCS 2003, International Conference
"... Factory [Gamma et al. 1995, pp. 87-95] and the Prototype [Gamma et al. 1995, pp. 117-126] patterns. The Abstract Factory pattern delegates the object creation, and the Prototype pattern allows dynamic configuration. The factory is in charge of converting the user-specified force into an object that ..."
Abstract
-
Cited by 34 (18 self)
- Add to MetaCart
Factory [Gamma et al. 1995, pp. 87-95] and the Prototype [Gamma et al. 1995, pp. 117-126] patterns. The Abstract Factory pattern delegates object creation, and the Prototype pattern allows dynamic configuration. The factory is in charge of converting the user-specified force into an object that has been properly set up to do computation. The factory creates replicas of "prototypes" that have been registered by the developer. This restricts the factory to creating only supported objects, since not all combinations of R1-R5 make sense or are supported at a given stage of development.
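A minimal sketch of the registry-plus-prototype scheme the abstract describes, in Python rather than ProtoMol's C++ (the class, method, and parameter names are hypothetical, not ProtoMol's actual interface):

    import copy

    class ForceFactory:
        """Creates force objects by cloning registered prototypes."""
        def __init__(self):
            self._prototypes = {}

        def register(self, name, prototype):
            # Only registered prototypes can ever be created, so unsupported
            # combinations are rejected up front.
            self._prototypes[name] = prototype

        def create(self, name, **params):
            if name not in self._prototypes:
                raise ValueError(f"unsupported force: {name}")
            force = copy.deepcopy(self._prototypes[name])  # Prototype pattern
            force.configure(**params)                      # dynamic configuration
            return force

    class LennardJones:
        def configure(self, cutoff=12.0):
            self.cutoff = cutoff

    factory = ForceFactory()
    factory.register("LennardJones", LennardJones())
    lj = factory.create("LennardJones", cutoff=10.0)

The registry is what enforces the restriction mentioned above: create() can only clone something a developer has explicitly registered.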
Interactive Simulation and Visualization (1999)
"... As computational engineering and science applications have grown in size and complexity, the process of analyzing and visualizing the resulting vast amounts of data has become an increasingly difficult task. Traditionally, data analysis and visualization are performed as post-processing steps after ..."
Abstract
-
Cited by 34 (3 self)
- Add to MetaCart
As computational engineering and science applications have grown in size and complexity, the process of analyzing and visualizing the resulting vast amounts of data has become an increasingly difficult task. Traditionally, data analysis and visualization are performed as post-processing steps after a simulation has been run. As simulations have increased in size, this task has become increasingly difficult--often requiring significant computation, high-performance machines, high capacity storage, and high bandwidth networks. Computational steering is an emerging technology that addresses this problem by "closing the loop" and providing a mechanism for integrating modeling, simulation, data analysis, and visualization. This integration allows a researcher to interactively control simulations and perform data analysis while avoiding many of the pitfalls associated with the traditional batch/post processing cycle. In this paper, we describe the application of interactive simulation and v...
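To make "closing the loop" concrete, a minimal steering loop in Python: parameter changes arrive on a queue while the simulation runs, rather than requiring a new batch submission. The advance/render functions and the queue protocol are illustrative stand-ins, not the system described in the paper:

    import queue

    updates = queue.Queue()  # a UI thread would put ("name", value) pairs here

    def advance(state, params):
        return state + params["dt"]      # stand-in for one simulation step

    def render(state):
        print(f"t = {state:.2f}")        # stand-in for in-loop visualization

    def steered_run(state, params, n_steps):
        for step in range(n_steps):
            # Apply any parameter changes made since the last step, so the
            # user steers the run instead of waiting for the batch job to end.
            while not updates.empty():
                name, value = updates.get()
                params[name] = value
            state = advance(state, params)
            if step % 10 == 0:
                render(state)
        return state

    steered_run(0.0, {"dt": 0.01}, 50)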
Avalon: An Alpha/Linux Cluster Achieves 10 Gflops for $150k (1998)
"... We present two calculations from the disciplines of condensed matter physics and astrophysics. The simulations were performed on a 70 processor DEC Alpha cluster (Avalon) constructed entirely from commodity personal computer technology and freely available software, for a cost of 152 thousand dollar ..."
Abstract
-
Cited by 20 (7 self)
- Add to MetaCart
We present two calculations from the disciplines of condensed matter physics and astrophysics. The simulations were performed on a 70 processor DEC Alpha cluster (Avalon) constructed entirely from commodity personal computer technology and freely available software, for a cost of 152 thousand dollars. Avalon performed a 60 million particle molecular dynamics (MD) simulation of shock-induced plasticity using the SPaSM MD code. This simulation sustained approximately 10 Gflops over a 44 hour period, and saved 68 Gbytes of raw data. The resulting price/performance is $15/Mflop, or equivalently, 67 Gflops per million dollars. This is more than a factor of three better than last year's price/performance winners. This simulation is very similar to that which won part of the 1993 Gordon Bell performance prize using a 1024-node CM-5. Avalon also performed a gravitational treecode N-body simulation of galaxy formation using 9.75 million particles, which sustained an average of 6.78 Gflops over ...
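The quoted price/performance figures follow directly from the cost and the sustained rate; a quick check with the rounded numbers (the paper's 67 Gflops per million dollars presumably comes from the unrounded sustained rate):

    cost_dollars = 152_000
    sustained_mflops = 10_000    # ~10 Gflops sustained over the 44-hour run

    print(cost_dollars / sustained_mflops)                 # 15.2 -> ~$15/Mflop
    print(sustained_mflops / 1000 / (cost_dollars / 1e6))  # ~65.8 Gflops per $1M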
Visualization of Multi-Dimensional Design and Optimization Data Using Cloud Visualization (2002)
ASME Design Engineering Technical Conferences - Design Automation Conference
"... Abstract Some of the first attempts at using visualization methods to aid decisions in design and optimization are found in [1]. More recent advances in computer visualization and Virtual Reality (VR) [2-4] are allowing designers and scientists to interact and manipulate vast amounts of data. Until ..."
Abstract
-
Cited by 16 (1 self)
- Add to MetaCart
Some of the first attempts at using visualization methods to aid decisions in design and optimization are found in [1]. More recent advances in computer visualization and Virtual Reality (VR) [2-4] are allowing designers and scientists to interact with and manipulate vast amounts of data. Until these innovations, computers were relied on solely to interpret results and compute answers based on the programs written. It is now possible to interact with these large datasets even while they are being used in running analyses [5-11]. Users have the ability to compress large amounts of data into a visual format, to investigate trends and relationships that could not be seen otherwise, and then make informed decisions regarding a product or process design. As our ability to generate more and more data for increasingly large engineering models improves, the need for methods for ...
In-Situ Processing and Visualization for Ultrascale Simulations
"... Abstract. The growing power of parallel supercomputers gives scientists the ability to simulate more complex problems at higher fidelity, leading to many high-impact scientific advances. To maximize the utilization of the vast amount of data generated by these simulations, scientists also need scala ..."
Abstract
-
Cited by 13 (2 self)
- Add to MetaCart
(Show Context)
Abstract. The growing power of parallel supercomputers gives scientists the ability to simulate more complex problems at higher fidelity, leading to many high-impact scientific advances. To maximize the utilization of the vast amount of data generated by these simulations, scientists also need scalable solutions for studying their data to different extents and at different abstraction levels. As we move into peta- and exa-scale computing, simply dumping as much raw simulation data as the storage capacity allows for post-processing analysis and visualization is no longer a viable approach. A common practice is to use a separate parallel computer to prepare data for subsequent analysis and visualization. A naive realization of this strategy not only limits the amount of data that can be saved, but also turns I/O into a performance bottleneck when using a large parallel system. We conjecture that the most plausible solution for the peta- and exa-scale data problem is to reduce or transform the data in-situ as it is being generated, so the amount of data that must be transferred over the network is kept to a minimum. In this paper, we discuss different approaches to in-situ processing and visualization as well as the results of our preliminary study using large-scale simulation codes on massively parallel supercomputers.
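A minimal sketch of the in-situ strategy the abstract argues for: reduce each timestep's field to a small summary inside the simulation loop, so only the summary crosses the network or reaches disk. All names and the toy update rule are illustrative, not taken from the codes studied in the paper:

    import numpy as np

    def save(summary):
        pass  # stand-in for writing to disk or shipping to a vis resource

    def summarize(field):
        # In-situ reduction: keep a small statistical summary plus a coarse
        # histogram rather than the full raw field.
        hist, _ = np.histogram(field, bins=16)
        return {"min": field.min(), "max": field.max(),
                "mean": field.mean(), "hist": hist}

    def simulate(n_steps, n_cells=100_000):
        field = np.random.rand(n_cells)          # stand-in simulation state
        for step in range(n_steps):
            field = np.roll(field, 1) * 0.999    # stand-in timestep
            save(summarize(field))               # reduce in situ, every step

    simulate(100)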
Overcoming instabilities in Verlet-I/r-RESPA with the mollified impulse method
"... The primary objective of this paper is to explain the derivation of symplectic molli ed Verlet-I/r-RESPA (MOLLY) methods that overcome linear and nonlinear instabilities that arise as numerical artifacts in Verlet-I/r-RESPA. These methods allow for lengthening of the longest time step used in molec ..."
Abstract
-
Cited by 13 (7 self)
- Add to MetaCart
(Show Context)
The primary objective of this paper is to explain the derivation of symplectic mollified Verlet-I/r-RESPA (MOLLY) methods that overcome linear and nonlinear instabilities that arise as numerical artifacts in Verlet-I/r-RESPA. These methods allow for lengthening of the longest time step used in molecular dynamics (MD). We provide evidence that MOLLY methods can take a longest time step that is 50% greater than that of Verlet-I/r-RESPA, for a given drift, including no drift. A 350% increase in the time step is possible using MOLLY with mild Langevin damping while still computing dynamic properties accurately. Furthermore, longer time steps also enhance the scalability of multiple time stepping integrators that use the popular Particle Mesh Ewald method for computing full electrostatics, since the parallel bottleneck of the fast Fourier transform associated with PME is invoked less often. An additional objective of this paper is to give sufficient implementation details for these mollified integrators, so that interested users may implement them into their MD codes, or use the program ProtoMol in which we have implemented these methods.
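For orientation, the Verlet-I/r-RESPA scheme that MOLLY modifies splits the force into fast and slow parts and applies the slow force as an impulse only at the outer time step. A minimal two-level impulse integrator in Python with a toy force split (this is plain Verlet-I/r-RESPA, not the mollified variant, which would evaluate the slow force at time-averaged positions):

    def mts_step(x, v, dt, n_inner, f_fast, f_slow, m=1.0):
        """One outer step: slow-force half-kicks (impulses) around
        n_inner velocity-Verlet substeps that see only the fast force."""
        v += 0.5 * dt * f_slow(x) / m       # outer half-kick
        h = dt / n_inner
        for _ in range(n_inner):
            v += 0.5 * h * f_fast(x) / m
            x += h * v
            v += 0.5 * h * f_fast(x) / m
        v += 0.5 * dt * f_slow(x) / m       # outer half-kick
        return x, v

    # Toy split: a stiff fast force plus a soft slow force.
    f_fast = lambda x: -100.0 * x
    f_slow = lambda x: -1.0 * x
    x, v = 1.0, 0.0
    for _ in range(1000):
        x, v = mts_step(x, v, dt=0.05, n_inner=10, f_fast=f_fast, f_slow=f_slow)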
A Wrapper Generator for Wrapping High Performance Legacy Codes as Java/CORBA Components (2000)
In Proceedings of Supercomputing Conference SC2000
"... This paper describes a Wrapper Generator for wrapping high performance legacy codes as Java/CORBA components for use in a distributed component-based problemsolving environment. Using the Wrapper Generator we have automatically wrapped an MPI-based legacy code as a single CORBA object, and implement ..."
Abstract
-
Cited by 12 (2 self)
- Add to MetaCart
(Show Context)
This paper describes a Wrapper Generator for wrapping high performance legacy codes as Java/CORBA components for use in a distributed component-based problem-solving environment. Using the Wrapper Generator we have automatically wrapped an MPI-based legacy code as a single CORBA object, and implemented a problem-solving environment for molecular dynamics simulations. Performance comparisons between runs of the CORBA object and the original legacy code on a cluster of workstations and on a parallel computer are also presented.
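The wrapping idea in miniature: hide the launch details of a legacy parallel executable behind a component interface with a small number of methods. A plain Python class stands in here for the generated Java/CORBA wrapper, and the executable name is invented:

    import subprocess

    class LegacyMDComponent:
        """Stand-in for a generated wrapper: forwards method calls
        to a legacy MPI executable."""
        def __init__(self, executable="./md_legacy", nprocs=4):
            self.executable, self.nprocs = executable, nprocs

        def run(self, input_file):
            # Clients see one method call; MPI launch details stay hidden.
            cmd = ["mpirun", "-np", str(self.nprocs), self.executable, input_file]
            return subprocess.run(cmd, capture_output=True, text=True).stdout

    # component = LegacyMDComponent()
    # result = component.run("protein.in")   # needs the real executable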
Building Flexible Large-Scale Scientific Computing Applications with Scripting Languages (1997)
"... We describe our use of scripting languages with a large-scale molecular dynamics code. We will show how one can build an interactive, highly modular, and easily extensible system without sacrificing performance, building a huge monolithic package, or complicating code development. We will also de ..."
Abstract
-
Cited by 9 (0 self)
- Add to MetaCart
We describe our use of scripting languages with a large-scale molecular dynamics code. We will show how one can build an interactive, highly modular, and easily extensible system without sacrificing performance, building a huge monolithic package, or complicating code development. We will also describe our use of the Python language and the SWIG automated interface generation tool that we have developed for easily creating scripting language interfaces to C/C++ programs.
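To make the SWIG workflow concrete, the usual three steps are: write a small interface file, run swig to generate wrapper code, and build a shared library that Python can import. The file and function names below are invented for illustration; the %module syntax and the swig -python invocation are standard SWIG usage:

    # example.i -- a minimal SWIG interface file (shown as comments):
    #     %module example
    #     %{
    #     #include "example.h"
    #     %}
    #     double energy(double x);
    #
    # Typical generate-and-build commands:
    #     swig -python example.i
    #     cc -shared -fPIC example.c example_wrap.c -o _example.so \
    #        $(python3-config --includes)

    try:
        import example               # the SWIG-generated module built above
        print(example.energy(1.0))   # calls straight into the C code
    except ImportError:
        print("build the wrapper first; see the commands above")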
An Online Approach for Mining Collective Behaviors from Molecular Dynamics Simulations (2009)
Conference on Computational Molecular Biology
"... Collective behavior involving distally separate regions in a protein is known to widely affect its function. In this paper, we present an online approach to study and characterize collective behavior in proteins as molecular dynamics simulations progress. Our representation of MD simulations as a st ..."
Abstract
-
Cited by 8 (5 self)
- Add to MetaCart
(Show Context)
Collective behavior involving distally separate regions in a protein is known to widely affect its function. In this paper, we present an online approach to study and characterize collective behavior in proteins as molecular dynamics simulations progress. Our representation of MD simulations as a stream of continuously evolving data allows us to succinctly capture spatial and temporal dependencies that may exist and analyze them efficiently using data mining techniques. By using multi-way analysis we identify (a) parts of the protein that are dynamically coupled, (b) constrained residues/hinge sites that may potentially affect protein function and (c) time-points during the simulation where significant deviation in collective behavior occurred. We demonstrate the applicability of this method on two different protein simulations for barnase and cyclophilin A. For both these proteins we were able to identify constrained/flexible regions, showing good agreement with experimental results and prior computational work. Similarly, for the two simulations, we were able to identify time windows where there were significant structural deviations. Of these time-windows, for both proteins, over 70% show collective displacements in two or more functionally relevant regions. Taken together, our results indicate that multi-way analysis techniques can be used to analyze protein dynamics and may be an attractive means to automatically track and monitor molecular dynamics simulations.
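One simple instance of this kind of streaming analysis: over each window of frames, compute a residue-residue displacement correlation matrix, whose strong off-diagonal entries flag candidate coupled regions. This is a generic sketch on synthetic data, not the paper's multi-way (tensor) method:

    import numpy as np

    def coupling_matrix(frames):
        """frames: (n_frames, n_residues) displacements per residue.
        Returns absolute correlations between residue displacement series."""
        centered = frames - frames.mean(axis=0)
        cov = centered.T @ centered / len(frames)
        std = np.sqrt(np.diag(cov))
        return np.abs(cov / np.outer(std, std))

    # Streaming use: analyze each window as the simulation produces it.
    rng = np.random.default_rng(0)
    window = rng.normal(size=(200, 50))   # synthetic 200-frame window
    window[:, 40] = window[:, 3] + 0.1 * rng.normal(size=200)  # planted coupling
    c = coupling_matrix(window)
    print("coupling(3, 40) =", round(c[3, 40], 2))  # stands out near 1.0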