From Flop to MegaFlops: Java for Technical Computing
- ACM Transactions on Programming Languages and Systems
, 1998
"... . Although there has been some experimentation with Java as a language for numerically intensive computing, there is a perception by many that the language is not suited for such work. In this paper we show how optimizing array bounds checks and null pointer checks creates loop nests on which ag ..."
Abstract
-
Cited by 52 (11 self)
- Add to MetaCart
(Show Context)
Although there has been some experimentation with Java as a language for numerically intensive computing, there is a perception by many that the language is not suited for such work. In this paper we show how optimizing array bounds checks and null pointer checks creates loop nests on which aggressive optimizations can be used. Applying these optimizations by hand to a simple matrix-multiply test case leads to Java-compliant programs whose performance is in excess of 500 Mflops on an RS/6000 SP 332 MHz SMP node. We also report in this paper the effect that each optimization has on performance. Since all of these optimizations can be automated, we conclude that Java will soon be a serious contender for numerically intensive computing.
1 Introduction
The scientific programming community has recently demonstrated a great deal of interest in the use of Java for technical computing. There are many compelling reasons for such use of Java: a large supply of programmers, it is obj...
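The matrix-multiply kernel referred to above is, in essence, the standard triple loop. The following hand-written Java sketch is an assumption about the shape of such a test case, not the authors' benchmark code; it shows the loop nest on which bounds-check and null-check elimination would operate.

// Minimal sketch of a matrix-multiply loop nest of the kind discussed above;
// illustrative only, not the paper's benchmark code. Once a compiler proves
// that i, j and k stay within bounds and that a, b and c are non-null, the
// per-access bounds and null checks inside the nest can be eliminated.
static void matmul(double[][] a, double[][] b, double[][] c, int n) {
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            double sum = 0.0;
            for (int k = 0; k < n; k++) {
                sum += a[i][k] * b[k][j];
            }
            c[i][j] = sum;
        }
    }
}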
Java Programming for High-Performance Numerical Computing
, 2000
"... Class Figure 5 Simple Array construction operations //Simple 3 x 3 array of integers intArray2D A = new intArray2D(3,3); //This new array has a copy of the data in A, //and the same rank and shape. ..."
Abstract
-
Cited by 52 (8 self)
- Add to MetaCart
(Show Context)
Figure 5: Simple Array construction operations
// Simple 3 x 3 array of integers
intArray2D A = new intArray2D(3,3);
// This new array has a copy of the data in A,
// and the same rank and shape.
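For readers unfamiliar with such multidimensional array classes, the following is a hypothetical skeleton of a rank-2 integer array in the spirit of the snippet above; the dense row-major field layout and the get/set accessors are assumptions, not the actual API of the paper's Array package.

// Hypothetical skeleton of a rank-2 integer array class; field layout and
// accessor names are assumptions, not the Array package's actual API.
final class intArray2D {
    private final int rows, cols;
    private final int[] data;                     // dense, row-major storage

    intArray2D(int rows, int cols) {
        this.rows = rows;
        this.cols = cols;
        this.data = new int[rows * cols];
    }

    int get(int i, int j)         { return data[i * cols + j]; }
    void set(int i, int j, int v) { data[i * cols + j] = v; }
}

Keeping the storage dense and the shape fixed at construction is what allows a compiler to map operations on such arrays onto Fortran-style optimized loops.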
Quicksilver: A Quasi-Static Compiler for Java
, 2000
"... This paper presents the design and implementation of the Quicksilver 1 quasi-static compiler for Java. Quasi-static compilation is a new approach that combines the benefits of static and dynamic compilation, while maintaining compliance with the Java standard, including support of its dynamic fea ..."
Abstract
-
Cited by 42 (6 self)
- Add to MetaCart
This paper presents the design and implementation of the Quicksilver quasi-static compiler for Java. Quasi-static compilation is a new approach that combines the benefits of static and dynamic compilation, while maintaining compliance with the Java standard, including support of its dynamic features. A quasi-static compiler relies on the generation and reuse of persistent code images to reduce the overhead of compilation during program execution, and to provide identical, testable and reliable binaries over different program executions. At runtime, the quasi-static compiler adapts pre-compiled binaries to the current JVM instance, and uses dynamic compilation of the code when necessary to support dynamic Java features. Our system allows interprocedural program optimizations to be performed while maintaining binary compatibility. Experimental data obtained using a preliminary implementation of a quasi-static compiler in the Jalapeño JVM clearly demonstrates the benefits of our app...
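As an illustration of the reuse-or-recompile decision at the heart of this model, here is a hypothetical Java sketch; QuasiStaticCache, CodeImage and the compile() stub are invented names for illustration and are not Quicksilver's actual interfaces.

import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the quasi-static reuse-or-recompile decision; the
// class names and the compile() stub are stand-ins, not Quicksilver's API.
final class QuasiStaticCache {
    static final class CodeImage {
        final String classDigest;   // identifies the class file the image was built from
        final byte[] nativeCode;    // persistent pre-compiled code
        CodeImage(String classDigest, byte[] nativeCode) {
            this.classDigest = classDigest;
            this.nativeCode = nativeCode;
        }
    }

    private final Map<String, CodeImage> persistent = new HashMap<>();

    // Reuse a stored image if it still matches the loaded class file;
    // otherwise fall back to dynamic compilation and persist the result.
    CodeImage codeFor(String className, byte[] classFile) throws Exception {
        String digest = digestOf(classFile);
        CodeImage image = persistent.get(className);
        if (image != null && image.classDigest.equals(digest)) {
            return image;                          // adapt and reuse the persistent image
        }
        CodeImage fresh = new CodeImage(digest, compile(classFile));
        persistent.put(className, fresh);
        return fresh;
    }

    private static String digestOf(byte[] bytes) throws Exception {
        StringBuilder sb = new StringBuilder();
        for (byte b : MessageDigest.getInstance("SHA-256").digest(bytes)) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    private static byte[] compile(byte[] classFile) {
        return classFile.clone();                  // stub standing in for the dynamic compiler
    }
}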
Efficient Support for Complex Numbers in Java
, 1999
"... One glaring weakness of Java for numerical programming is its lack of support for complex numbers. Simply creating a Complex number class leads to poor performance relative to Fortran. We show in this paper, however, that the combination of such a Complex class and a compiler that understands its se ..."
Abstract
-
Cited by 35 (9 self)
- Add to MetaCart
One glaring weakness of Java for numerical programming is its lack of support for complex numbers. Simply creating a Complex number class leads to poor performance relative to Fortran. We show in this paper, however, that the combination of such a Complex class and a compiler that understands its semantics does indeed lead to Fortran-like performance. This performance gain is achieved while leaving the Java language completely unchanged and maintaining full compatibility with existing Java Virtual Machines. We quantify the effectiveness of our approach through experiments with linear algebra, electromagnetics, and computational fluid-dynamics kernels.
1 Introduction
The Java Grande Forum has identified several critical issues related to the role of Java in numerical computing [14]. One of the key requirements is that Java must support efficient operations on complex numbers. Complex arithmetic and access to elements of complex arrays must be as efficient as the manipulation o...
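A minimal sketch of the kind of Complex value class meant here is shown below; the field and method names are illustrative and not necessarily those of the paper's class. The point of a semantics-aware compiler is that it can rewrite uses of such a class into arithmetic on pairs of doubles, avoiding an object allocation per operation.

// Minimal illustrative Complex value class (not the paper's exact class).
// Each arithmetic operation allocates a new object; a compiler that
// understands the class's semantics can replace these objects with pairs of
// doubles kept in registers, recovering Fortran-like performance.
final class Complex {
    final double re, im;

    Complex(double re, double im) { this.re = re; this.im = im; }

    Complex plus(Complex other) {
        return new Complex(re + other.re, im + other.im);
    }

    Complex times(Complex other) {
        return new Complex(re * other.re - im * other.im,
                           re * other.im + im * other.re);
    }
}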
Automatic Loop Transformations and Parallelization for Java
- In Proceedings of the 2000 International Conference on Supercomputing
, 2000
"... From a software engineering perspective, the Java programming language provides an attractive platform for writing numerically intensive applications. A major drawback hampering its widespread adoption in this domain has been its poor performance on numerical codes. This paper describes a prototype ..."
Abstract
-
Cited by 34 (3 self)
- Add to MetaCart
(Show Context)
From a software engineering perspective, the Java programming language provides an attractive platform for writing numerically intensive applications. A major drawback hampering its widespread adoption in this domain has been its poor performance on numerical codes. This paper describes a prototype Java compiler which demonstrates that it is possible to achieve performance levels approaching those of current state-of-the-art C, C++ and Fortran compilers on numerical codes. We describe a new transformation called alias versioning that takes advantage of the simplicity of pointers in Java. This transformation, combined with other techniques that we have developed, enables the compiler to perform high order loop transformations (for better data locality) and parallelization completely automatically. We believe that our compiler is the first to have such capabilities of optimizing numerical Java codes. We achieve, with Java, between 80 and 100% of the performance of highly optimized Fortra...
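The effect of alias versioning can be pictured on a simple update loop: because two Java array references either denote the same object or do not overlap at all, a single runtime test selects between a freely optimizable version of the loop and the general one. The following hand-written sketch illustrates the idea and is not the compiler's actual output.

// Hand-written illustration of alias versioning (not compiler output).
// The two loop bodies are identical source code; the point is that in the
// first version the compiler knows dst and src are distinct objects, so it
// may reorder, unroll, or parallelize the loop without an aliasing hazard.
static void scaleInto(double[] dst, double[] src, double s, int n) {
    if (dst != src) {
        for (int i = 0; i < n; i++) {
            dst[i] = s * src[i];     // alias-free version: safe to transform aggressively
        }
    } else {
        for (int i = 0; i < n; i++) {
            dst[i] = s * src[i];     // general version: dst aliases src
        }
    }
}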
Optimizing Array Reference Checking in Java Programs
, 1998
"... The Java language specification requires that all array references be checked for validity. If a reference is invalid, an exception must be thrown. Furthermore, the environment at the time of the exception must be preserved and made available to whatever code handles the exception. Performing the ..."
Abstract
-
Cited by 27 (3 self)
- Add to MetaCart
The Java language specification requires that all array references be checked for validity. If a reference is invalid, an exception must be thrown. Furthermore, the environment at the time of the exception must be preserved and made available to whatever code handles the exception. Performing the checks at run-time incurs a large penalty in execution time. In this paper we describe a collection of transformations that can dramatically reduce this overhead in the common case (when the access is valid) while preserving the program state at the time of an exception. The transformations allow trade-offs to be made in the efficiency and size of the resulting code, and are fully compliant with the Java language semantics. A preliminary evaluation of the effectiveness of these transformations shows that performance improvements of 10 times or more can be achieved for array-intensive Java programs.
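One way to remove per-iteration checks while preserving exception semantics is to split the iteration space into a provably valid prefix followed, if necessary, by a single faulting access. The hand-written sketch below illustrates that flavor of transformation; it is not the paper's exact code.

// Hand-written sketch of an iteration-space split (not the paper's exact
// transformation). The loop over the valid prefix needs no per-iteration
// checks; if the original loop would have faulted, the single access after
// the loop throws the same exception with the same program state.
static void fill(double[] a, int n, double v) {
    int safe = (a == null) ? 0 : Math.min(n, a.length);
    for (int i = 0; i < safe; i++) {
        a[i] = v;                    // provably valid: bounds/null checks can be elided
    }
    if (safe < n) {
        a[safe] = v;                 // first invalid access: throws exactly as the original would
    }
}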
Techniques for Obtaining High Performance in Java Programs
- ACM Computing Surveys
, 1999
"... This survey describes research directions in techniques to improve the performance of programs written in the Java programming language. The standard technique for Java execution is interpretation. A Javainterpreter dynamically executes Java bytecodes, which comprise the instruction set of the Java ..."
Abstract
-
Cited by 23 (1 self)
- Add to MetaCart
This survey describes research directions in techniques to improve the performance of programs written in the Java programming language. The standard technique for Java execution is interpretation. A Java interpreter dynamically executes Java bytecodes, which comprise the instruction set of the Java Virtual Machine (JVM). Execution-time performance of Java programs can be improved through compilation. Various types of Java compilers have been proposed, including Just-In-Time (JIT) compilers that compile bytecodes into native processor instructions on the fly; direct compilers that directly translate the Java source code into the target processor's native language; and bytecode-to-source translators that generate either native code or an intermediate language, such as C, from the bytecodes. Some techniques, including bytecode optimization and executing Java programs in parallel, attempt to improve Java runtime performance while maintaining Java's portability. Another alternative f...
The NINJA Project: Making Java Work for High Performance Numerical Computing
, 2001
"... this article from being used in a dynamic compiler. Moreover, by using the quasi-static dynamic compilation model [10], the more expensive optimization and 5 analysis techniques employed by TPO can be done off-line, sharply reducing the impact of compilation overhead ..."
Abstract
-
Cited by 6 (2 self)
- Add to MetaCart
(Show Context)
... this article from being used in a dynamic compiler. Moreover, by using the quasi-static dynamic compilation model [10], the more expensive optimization and analysis techniques employed by TPO can be done off-line, sharply reducing the impact of compilation overhead ...
Effective Enhancement of Loop Versioning in Java
"... Run-time exception checking is required by the Java Language Specification (JLS). Though providing higher software reliability, that mechanism negatively affects performance of Java programs, especially those computationally intensive. This paper pursues loop versioning, a simple program transfo ..."
Abstract
-
Cited by 4 (0 self)
- Add to MetaCart
Run-time exception checking is required by the Java Language Specification (JLS). Though it provides higher software reliability, that mechanism negatively affects the performance of Java programs, especially computationally intensive ones. This paper pursues loop versioning, a simple program transformation which often helps to avoid the checking overhead. Building on the Java Memory Model precisely defined in the JLS, the work proposes a set of sufficient conditions for the applicability of loop versioning. Scalable intra- and interprocedural analyses that efficiently check fulfilment of the conditions are also described. Implemented in Excelsior JET, an ahead-of-time compiler for Java, the developed technique results in significant performance improvements on some computational benchmarks.
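Loop versioning itself is the classic two-version transformation: a guard evaluated once outside the loop proves that no null-pointer or out-of-bounds exception can occur inside it, so the fast version can be compiled without per-iteration checks while the original loop is kept as a fallback. A hand-written Java illustration (not Excelsior JET's output) is:

// Hand-written illustration of loop versioning (not Excelsior JET's output).
// The guard is evaluated once; in the fast version every a[i] is provably
// valid, so the per-iteration checks are redundant, while the fallback keeps
// the original checked loop and its exception behaviour.
static double sum(double[] a, int n) {
    double s = 0.0;
    if (a != null && n <= a.length) {
        for (int i = 0; i < n; i++) {
            s += a[i];               // checks provably redundant here
        }
    } else {
        for (int i = 0; i < n; i++) {
            s += a[i];               // original checked loop (may throw)
        }
    }
    return s;
}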
Optimizing Java-specific overheads: Java at the speed of C
- In HPCN Europe
, 2001
"... rveldema,kielmann,bal¡ ..."
(Show Context)