Stochastic optimization of floating-point programs using tunable precision (2014)

by E. Schkufza, R. Sharma, A. Aiken
Venue: PLDI

Results 1 - 5 of 5

PAR

by Prof. M. Odersky, Prof. V. Kuncak, Prof. R. Bodik, Prof. C. Koch, Eva Darulová, 2014
Abstract
accepted on the proposal of the jury:

Citation Context

...d obscures any roundoff errors, or the application can tolerate a certain error (e.g. a human observer), then we may not need the full 64 bit floating-point precision that is often the default choice [143]. In other cases, an (embedded) device may not even have a floating-point unit and the computation has to be implemented in fixed-point arithmetic, while being accurate enough to ensure stability of y...
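The tradeoff this context describes, full 64-bit precision versus an error the application can tolerate, can be sketched in a few lines. This is a generic illustration, not code from the cited thesis; the helper name `to_single` is made up, and the bound used is the standard unit roundoff of IEEE-754 binary32.

```python
import struct

def to_single(x: float) -> float:
    # round an IEEE-754 double to the nearest single-precision value
    # by packing it into a 32-bit float and unpacking it again
    return struct.unpack('f', struct.pack('f', x))[0]

x = 1.0 / 3.0
err = abs(x - to_single(x))
# dropping from 64 to 32 bits loses at most about one part in 2**24,
# an error many applications (e.g. output judged by a human) tolerate
assert 0 < err / x < 2 ** -23
```

Values exactly representable in 32 bits (such as 1.0) survive the round trip unchanged, which is why only some computations benefit from, or are hurt by, tunable precision.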

Conditionally Correct Superoptimization

by Rahul Sharma, Eric Schkufza, Berkeley Churchill, Alex Aiken
Abstract
The aggressive optimization of heavily used kernels is an important problem in high-performance computing. However, both general purpose compilers and highly specialized tools such as superoptimizers often do not have sufficient static knowledge of restrictions on program inputs that could be exploited to produce the very best code. For many applications, the best possible code is conditionally correct: the optimized kernel is equal to the code that it replaces only under certain preconditions on the kernel’s inputs. The main technical challenge in producing conditionally correct optimizations is in obtaining non-trivial and useful conditions and proving conditional equivalence formally in the presence of loops. We combine abstract interpretation, decision procedures, and testing to yield a verification strategy that can address both of these problems. This approach yields a superoptimizer for x86 that in our experiments produces binaries that are often multiple times faster than those produced by production compilers.

Citation Context

...p). STOKE produces code that is faster than gcc -O3 (bottom left) by eliminating and reordering computations. The resulting code (bottom right) is proved conditionally correct using COVE. ing program [37]. As is typical of ray tracers, the overall execution time of the program is dominated by vector arithmetic. In particular, consider the code shown in Figure 2, which executes in the inner loop of the...
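As a toy illustration of conditional correctness (not the paper's actual STOKE/COVE pipeline), consider a rewrite that is only valid under a precondition on the input. The names `kernel`, `kernel_opt`, and `precondition` are illustrative:

```python
import math

def kernel(x: float) -> float:
    return math.sqrt(x * x)      # computes |x|

def kernel_opt(x: float) -> float:
    return x                     # cheaper, but equal to kernel only when x >= 0

def precondition(x: float) -> bool:
    return x >= 0.0

# a testing-based check of conditional equivalence: the rewrite must
# agree with the original on inputs satisfying the precondition
for x in [0.0, 1.5, 2.0, 1e10]:
    assert precondition(x) and kernel(x) == kernel_opt(x)

# outside the precondition the rewrite is simply wrong
assert kernel(-2.0) != kernel_opt(-2.0)
```

The paper's contribution is obtaining such preconditions automatically and proving the equivalence formally, including across loops, rather than relying on testing alone as this sketch does.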

Probability Type Inference for Flexible Approximate Programming

by Brett Boston, Adrian Sampson, Dan Grossman, Luis Ceze
Abstract
In approximate computing, programs gain efficiency by allowing occasional errors. Controlling the probabilistic effects of this approximation remains a key challenge. We propose a new approach where programmers use a type system to communicate high-level constraints on the degree of approximation. A combination of type inference, code specialization, and optional dynamic tracking makes the system expressive and convenient. The core type system captures the probability that each operation exhibits an error and bounds the probability that each expression deviates from its correct value. Solver-aided type inference lets the programmer specify the correctness probability on only some variables—program outputs, for example—and automatically fills in other types to meet these specifications. An optional dynamic type helps cope with complex run-time behavior where static approaches are insufficient. Together, these features interact to yield a high degree of programmer control while offering a strong soundness guarantee. We use existing approximate-computing benchmarks to show how our language, DECAF, maintains a low annotation burden. Our constraint-based approach can encode hardware details, such as finite degrees of reliability, so we also use DECAF to examine implications for approximate hardware design. We find that multi-level architectures can offer advantages over simpler two-level machines and that solver-aided optimization improves efficiency. Categories and Subject Descriptors D.3.3 [Programming
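A minimal sketch of the kind of reliability bookkeeping such a type system performs, assuming independent per-operation error probabilities; the function name is illustrative and is not DECAF's API:

```python
def expr_reliability(op_reliabilities):
    # under independence, an expression is correct only if every
    # approximate operation in it is correct, so reliabilities multiply
    r = 1.0
    for p in op_reliabilities:
        r *= p
    return r

# one approximate add and one approximate multiply, each 99% reliable,
# bound the whole expression's correctness probability at 0.9801
assert abs(expr_reliability([0.99, 0.99]) - 0.9801) < 1e-12
```

Inference in the paper works in the other direction: given a required reliability on an output, the solver fills in per-operation reliabilities that make the product meet the bound.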

A Genetic Algorithm for Detecting Significant Floating-Point Inaccuracies

by unknown authors
Abstract
It is well-known that using floating-point numbers may inevitably result in inaccurate results and sometimes even cause serious software failures. Safety-critical software often has strict requirements on the upper bound of inaccuracy, and a crucial task in testing is to check whether significant inaccuracies may be produced. The main existing approach to the floating-point inaccuracy problem is error analysis, which produces an upper bound of inaccuracies that may occur. However, a high upper bound does not guarantee the existence of inaccuracy defects, nor does it give developers any concrete test inputs for debugging. In this paper, we propose the first metaheuristic search-based approach to automatically generating test inputs that aim to trigger significant inaccuracies in floating-point programs. Our approach is based on the following two insights: (1) with FPDebug, a recently proposed dynamic analysis approach, we can build a reliable fitness function to guide the search; (2) two main factors — the scales of exponents and the bit formations of significands — may have significant impact on the accuracy of the output, but in largely different ways. We have implemented and evaluated our approach over 154 real-world floating-point functions. The results show that our approach can detect significant inaccuracies in the subjects.

Citation Context

...treatment is used in their programs. Second, precision-specific treatments are not very commonly used in practice. As a matter of fact, precision adjustment has been used in different approaches [6], [23]–[25], and no problem is reported as far as we know. Second, as our approach is based on testing, we cannot guarantee the inaccuracy detected by our approach to be always the maximum inaccuracy the pr...
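The abstract's first insight, that the scale of input exponents drives inaccuracy, shows up in a classic cancellation example; this is a generic illustration, not the paper's benchmark code:

```python
import math

def naive(x: float) -> float:
    # mathematically equal to 1 / (sqrt(x + 1) + sqrt(x)),
    # but computed as a difference of two nearly equal values
    return math.sqrt(x + 1.0) - math.sqrt(x)

def stable(x: float) -> float:
    return 1.0 / (math.sqrt(x + 1.0) + math.sqrt(x))

# a small-exponent input is harmless; a large-exponent input makes the
# subtraction cancel most significant digits of the naive form
assert abs(naive(1.0) - stable(1.0)) / stable(1.0) < 1e-12
assert abs(naive(1e15) - stable(1e15)) / stable(1e15) > 1e-4
```

A search that scales input exponents, as the paper's genetic algorithm does, would quickly drive `naive` into the high-error regime, while mutating only significand bits at a fixed small exponent would not.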


Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University