Results 1 - 10 of 85
Debugging with Dynamic Slicing and Backtracking
- Software Practice and Experience, 1993
Cited by 109 (0 self)
In this paper we present a debugging model, based on dynamic program slicing and execution backtracking techniques, that easily lends itself to automation. The model is based on experience with using these techniques to debug software. We also present a prototype debugging tool, SPYDER, that explicitly supports the proposed model and with which we are performing further debugging research.
Fixed-Point Optimization Utility for C and C++ Based Digital Signal Processing Programs
- IEEE Trans. Circuits and Systems II, 1998
Cited by 75 (5 self)
Fixed-point optimization utility software is developed that can aid scaling and wordlength determination of digital signal processing algorithms written in C or C++. This utility consists of two programs: the range estimator and the fixed-point simulator. The former estimates the ranges of floating-point variables for purposes of automatic scaling, and the latter translates floating-point programs into fixed-point equivalents to evaluate the fixed-point performance by simulation. By exploiting the operator overloading characteristics of C++, the range estimation and the fixed-point simulation can be conducted by simply modifying the variable declarations of the original program. This utility is easily applicable to nearly all types of digital signal processing programs, including nonlinear, time-varying, multirate, and multidimensional signal processing algorithms. In addition, this software can be used to compare the fixed-point characteristics of different implementation archite...
A Perfect Hash Function Generator
Cited by 56 (34 self)
gperf is a "software-tool generating-tool" designed to automate the generation of perfect hash functions. This paper describes the features, algorithms, and object-oriented design and implementation strategies incorporated in gperf. It also presents the results of an empirical comparison between gperf-generated recognizers and other popular techniques for reserved-word lookup. gperf is distributed with the GNU libg++ library and is used to generate the keyword recognizers for the GNU C and GNU C++ compilers. Perfect hash functions are a time- and space-efficient implementation of static search sets, which are ADTs with operations such as initialize, insert, and retrieve. Static search sets are common in system software applications. Typical static search sets include compiler and interpreter reserved words, assembler instruction mnemonics, and shell interpreter builtin commands. Search set elements are called keywords. Keywords are inserted into the set once, usually at c...
Eliminating Branches using a Superoptimizer and the GNU C Compiler
1992
Cited by 45 (0 self)
This paper uses the RS/6000 for all its examples; the techniques described here are applicable to most machines.
Exploiting Instruction Level Parallelism in the Presence of Conditional Branches
1996
Cited by 43 (2 self)
Wide-issue superscalar and VLIW processors utilize instruction-level parallelism (ILP) to achieve high performance. However, if insufficient ILP is found, the performance potential of these processors suffers dramatically. Branch instructions, which are one of the major limitations to exploiting ILP, enforce strict ordering conditions in programs to ensure correct execution. Therefore, it is difficult to achieve the desired overlap of instruction execution with branches in the instruction stream. Effectively exploiting ILP in the presence of branches requires efficient handling of branches and the dependences they impose. This dissertation investigates two techniques for exposing and enhancing ILP in the presence of branches: speculative execution and predicated execution. Speculative execution enables an ILP compiler to remove dependences between instructions and prior branches. In this manner, the execution of instructions and predicted future instructions may be overlapped. Compiler-controlled speculative execution is employed using an efficient structure called the superblock. The formation and optimization of superblocks increase ILP along important execution paths by systematically removing constraints due to unimportant paths. In conjunction with superblock optimizations, speculative execution is utilized to remove control dependences in the superblock.
Software synthesis of process-based concurrent programs
- In Proceedings of the Design Automation Conference, 1998
Cited by 38 (1 self)
We present a Petri net theoretic approach to the software synthesis problem that can synthesize ordinary C programs from process-based concurrent specifications without the need for a run-time multithreading environment. The synthesized C programs can be readily retargeted to different processors using available optimizing C compilers. Our compiler can also generate sequential Java programs as output, which can likewise be mapped to a target processor without a multithreading environment. Initial results demonstrate significant potential for improvement over current run-time solutions.
Compiler Code Transformations for Superscalar-Based High-Performance Systems
- In Proceedings of Supercomputing '92, 1992
Cited by 30 (8 self)
Exploiting parallelism at both the multiprocessor level and the instruction level is an effective means for supercomputers to achieve high performance. The amount of instruction-level parallelism available to superscalar or VLIW node processors can be limited, however, with conventional compiler optimization techniques. In this paper, a set of compiler transformations designed to increase instruction-level parallelism is described. The effectiveness of these transformations is evaluated using 40 loop nests extracted from a range of supercomputer applications. This evaluation shows that increasing execution resources in superscalar/VLIW node processors yields little performance improvement unless loop unrolling and register renaming are applied. It also reveals that these two transformations are sufficient for DOALL loops. However, more advanced transformations are required for serial and DOACROSS loops to fully benefit from the increased execution resources. The results show ...
Leveraging Open-Source Communities To Improve the Quality and Performance of Open-Source Software
- Paper presented at the First Workshop on Open-Source Software Engineering, 2001
Cited by 30 (1 self)
Open-source development processes have emerged as an effective approach to reduce cycle time and decrease design, implementation, and quality assurance costs for certain types of software, particularly systems infrastructure software such as operating systems, compilers and language processing tools, editors, and distribution middleware. This paper presents two contributions to the study of open-source software engineering. First, we describe the key challenges of open-source software, such as controlling long-term maintenance and evolution costs, ensuring acceptable levels of quality, sustaining end-user confidence and good will, and ensuring the coherency of system-wide software and usability properties. We illustrate how well-organized open-source projects make it easier to address many of these challenges compared with traditional closed-source approaches to building software. Second, we present the goals and methodology of the Skoll project, which focuses on developing and empirically validating novel open-source software quality assurance and optimization techniques to resolve key open-source challenges. We summarize the experimental design of a long-term case study of two widely used open-source middleware projects, ACE and TAO, that we are using in the Skoll project to devise, deploy, and evaluate techniques for improving software quality through continuous distributed testing and profiling. These techniques are designed to leverage common open-source project assets, such as the technological sophistication and extensive computing resources of worldwide user communities, open access to source, and ubiquitous web access, that can improve the quality and performance of open-source software significantly.
Embedded Software in Real-Time Signal Processing Systems: Design Technologies
- Proc. IEEE, 1997
Cited by 23 (1 self)
This paper discusses design technology issues for embedded systems using processor cores, with a focus on software compilation tools. Architectural characteristics of contemporary processor cores are reviewed and tool requirements are formulated. This is followed by a comprehensive survey of both existing and new software compilation techniques that are considered important in the context of embedded processors.