Results 11 - 20 of 7,719
The Paradyn Parallel Performance Measurement Tools
- IEEE Computer, 1995
- Cited by 447 (39 self)
"... Paradyn is a performance measurement tool for parallel and distributed programs. Paradyn uses several novel technologies so that it scales to long running programs (hours or days) and large (thousand node) systems, and automates much of the search for performance bottlenecks. It can provide precise ..."
Characterization of Scientific Workloads on Systems with Multi-Core Processors
- In IISWC, 2006
- Cited by 24 (0 self)
"... Multi-core processors are planned for virtually all next-generation HPC systems. In a preliminary evaluation of AMD Opteron Dual-Core processor systems, we investigated the scaling behavior of a set of micro-benchmarks, kernels, and applications. ... Selection of MPI task and memory placement schemes can result in over 25% performance improvement for key scientific calculations. We collected detailed performance data for several large-scale scientific applications. Analyses of the application performance results confirmed our micro-benchmark and scaling results. Keywords: Performance characterization, Multi-core processor, AMD Opteron, micro-benchmarking, scientific applications."
On the assessment of surface heat flux and evaporation using large-scale parameters
- Mon. Weather Rev., 1972
- Cited by 357 (0 self)
"... In an introductory review it is reemphasized that the large-scale parameterization of the surface fluxes of sensible and latent heat is properly expressed in terms of energetic considerations over land, while formulas of the bulk aerodynamic type are most suitable over the sea. A general fra ..."
Survey of clustering data mining techniques
- 2002
- Cited by 408 (0 self)
"... Accrue Software, Inc. Clustering is a division of data into groups of similar objects. Representing the data by fewer clusters necessarily loses certain fine details, but achieves simplification. It models data by its clusters. Data modeling puts clustering in a historical perspective rooted in math ... applications such as scientific data exploration, information retrieval and text mining, spatial database applications, Web analysis, CRM, marketing, medical diagnostics, computational biology, and many others. Clustering is the subject of active research in several fields such as statistics, pattern ..."
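The survey's core idea, "representing the data by fewer clusters", can be made concrete with a minimal k-means sketch (an illustrative toy in Python, not code from the survey; the function name, 1-D data, and fixed seed are invented for the example):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal 1-D k-means: model the data by k cluster centers."""
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda i: abs(p - centers[i]))
            groups[i].append(p)
        # Update step: each center moves to its group's mean.
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return sorted(centers)

# Two well-separated 1-D blobs collapse to two centers near 1 and 100,
# trading six data points for a two-number summary.
data = [0.0, 1.0, 2.0, 99.0, 100.0, 101.0]
print(kmeans(data, k=2))  # -> [1.0, 100.0]
```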
Conditional value-at-risk for general loss distributions
- Journal of Banking and Finance, 2002
- Cited by 386 (28 self)
"... Fundamental properties of conditional value-at-risk, as a measure of risk with significant advantages over value-at-risk, are derived for loss distributions in finance that can involve discreteness. Such distributions are of particular importance in applications because of the prevalence of models based on scenarios and finite sampling. Conditional value-at-risk is able to quantify dangers beyond value-at-risk, and moreover it is coherent. It provides optimization shortcuts which, through linear programming techniques, make practical many large-scale calculations that could otherwise be out ..."
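In the scenario-based setting this abstract highlights, CVaR has a very simple sample estimator: for equal-weight scenarios, CVaR at level alpha is the average of the worst (1 - alpha) fraction of sampled losses. A minimal sketch (illustrative only, not the paper's linear-programming formulation; the function name and data are invented):

```python
def cvar(losses, alpha=0.95):
    """Sample-based conditional value-at-risk: the mean of the
    worst (1 - alpha) fraction of equal-weight scenario losses."""
    xs = sorted(losses, reverse=True)               # worst losses first
    k = max(1, int(round((1 - alpha) * len(xs))))   # size of the tail
    return sum(xs[:k]) / k

# 100 scenario losses 1..100; the worst 5% are {96, ..., 100}.
losses = [float(i) for i in range(1, 101)]
print(cvar(losses, alpha=0.95))  # -> 98.0, vs. VaR of roughly 95
```

Unlike value-at-risk, which only reports the tail's boundary, this averages over the tail, which is what makes the measure sensitive to "dangers beyond value-at-risk".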
BEOWULF: A Parallel Workstation For Scientific Computation
- In Proceedings of the 24th International Conference on Parallel Processing, 1995
- Cited by 341 (13 self)
"... Network-of-Workstations technology is applied to the challenge of implementing very high performance workstations for Earth and space science applications. The Beowulf parallel workstation employs 16 PC-based processing modules integrated with multiple Ethernet networks. Large disk capacity and high ..."
Efficient Filtering of XML Documents for Selective Dissemination of Information
- 2000
- Cited by 364 (17 self)
"... Information dissemination applications are gaining increasing popularity due to dramatic improvements in communications bandwidth and ubiquity. The sheer volume of data available necessitates the use of selective approaches to dissemination in order to avoid overwhelming users with unnecessary ... sophisticated filtering mechanisms that take structure information into account. We have developed several index organizations and search algorithms for performing efficient filtering of XML documents for large-scale information dissemination systems. In this paper we describe these techniques and examine ..."
Improving MapReduce Performance in Heterogeneous Environments
- 2008
- Cited by 350 (19 self)
"... MapReduce is emerging as an important programming model for large-scale data-parallel applications such as web indexing, data mining, and scientific simulation. Hadoop is an open-source implementation of MapReduce enjoying wide adoption and is often used for short jobs where low response time is cri ..."
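The programming model this paper builds on can be sketched in a few lines of plain Python (a toy illustration of the map/shuffle/reduce pattern, not Hadoop's API; all names here are invented for the example):

```python
from collections import defaultdict

def map_reduce(inputs, mapper, reducer):
    """Toy MapReduce: map each input to (key, value) pairs,
    shuffle the pairs by key, then reduce each key's values."""
    shuffled = defaultdict(list)
    for item in inputs:
        for key, value in mapper(item):
            shuffled[key].append(value)
    return {key: reducer(key, values) for key, values in shuffled.items()}

# Classic word count over a few "documents".
docs = ["a rose is a rose", "is a rose"]
counts = map_reduce(
    docs,
    mapper=lambda doc: [(w, 1) for w in doc.split()],
    reducer=lambda word, ones: sum(ones),
)
print(counts)  # -> {'a': 3, 'rose': 3, 'is': 2}
```

In a real deployment the map and reduce calls run on different machines and the shuffle moves data over the network; straggler nodes in that distributed phase are exactly the heterogeneity problem this paper addresses.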
Communication Characteristics of Large-Scale Scientific Applications for Contemporary Cluster Architectures
- In International Parallel and Distributed Processing Symposium, 2002
- Cited by 94 (12 self)
"... This paper examines the explicit communication characteristics of several sophisticated scientific applications, which, by themselves, constitute a representative suite of publicly available benchmarks for large cluster architectures. By focusing on the Message Passing Interface (MPI) and by using ..."