Proving Programs Robust
Abstract

Cited by 38 (6 self)
We present a program analysis for verifying quantitative robustness properties of programs, stated generally as: “If the inputs of a program are perturbed by an arbitrary amount ε, then its outputs change at most by Kε, where K can depend on the size of the input but not its value.” Robustness properties generalize the analytic notion of continuity—e.g., while the function e^x is continuous, it is not robust. Our problem is to verify the robustness of a function P that is coded as an imperative program, and can use diverse data types and features such as branches and loops. Our approach to the problem soundly decomposes it into two subproblems: (a) verifying that the smallest possible perturbations to the inputs of P do not change the corresponding outputs significantly, even if control now flows …
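The robustness property quoted in this abstract can be illustrated with a small empirical check (a sketch in Python; the helper name `robustness_gap` is ours, not the paper's, and this checks individual points rather than verifying the property):

```python
import math

def robustness_gap(f, x, eps):
    """Observed output change when the input is perturbed by eps."""
    return abs(f(x + eps) - f(x))

eps = 1e-3
# For a robust function, the gap stays within K*eps for a fixed K.
# abs() is 1-Lipschitz, so K = 1 works at every point:
assert robustness_gap(abs, 5.0, eps) <= 1.0 * eps + 1e-12
assert robustness_gap(abs, 500.0, eps) <= 1.0 * eps + 1e-12
# exp is continuous but not robust: the gap grows with x,
# so no single K bounds it across all inputs.
assert robustness_gap(math.exp, 1.0, eps) < robustness_gap(math.exp, 10.0, eps)
```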
On Sound Compilation of Reals
Abstract

Cited by 11 (4 self)
Writing accurate numerical software is hard because of many sources of unavoidable uncertainties, including finite numerical precision of implementations. We present a programming model where the user writes a program in a real-valued implementation and specification language that explicitly includes different types of uncertainties. We then present a compilation algorithm that generates a conventional implementation that is guaranteed to meet the desired precision with respect to real numbers. Our verification step generates verification conditions that treat different uncertainties in a unified way and encode reasoning about floating-point roundoff errors into reasoning about real numbers. Such verification conditions can be used as a standardized format for verifying the precision and the correctness of numerical programs. Due to their often nonlinear nature, precise reasoning about such verification conditions remains difficult. We show that current state-of-the-art SMT solvers do not scale well to solving such verification conditions. We propose a new procedure that combines exact SMT solving over reals with approximate and sound affine and interval arithmetic. We show that this approach overcomes scalability limitations of SMT solvers while providing improved precision over affine and interval arithmetic. Using our initial implementation we show the usefulness and effectiveness of our approach on several examples, including those containing nonlinear computation.
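The sound interval arithmetic this abstract combines with SMT solving can be sketched minimally as follows (an illustrative toy, not the paper's procedure: every operation's result interval is widened to absorb one rounding error, so the interval always encloses both the real and the floating-point result):

```python
# Hypothetical minimal interval arithmetic with per-operation
# roundoff widening (names and widening rule are illustrative).
EPS = 2**-52  # double-precision machine epsilon

def widen(lo, hi):
    # Widen the interval enough to absorb one rounding error.
    m = max(abs(lo), abs(hi))
    return lo - m * EPS, hi + m * EPS

def iadd(a, b):
    return widen(a[0] + b[0], a[1] + b[1])

def imul(a, b):
    ps = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return widen(min(ps), max(ps))

x = (1.0, 2.0)           # real-valued input range
y = imul(iadd(x, x), x)  # encloses (x + x) * x for all x in [1, 2]
assert y[0] <= 2.0 and y[1] >= 8.0   # the true range is [2, 8]
```

Real implementations additionally use directed rounding and affine forms to keep the intervals tight; the widening above is the crudest sound choice.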
Automatic Detection of Floating-Point Exceptions
, 1996
Abstract

Cited by 11 (0 self)
It is well-known that floating-point exceptions can be disastrous and writing exception-free numerical programs is very difficult. Thus, it is important to automatically detect such errors. In this paper, we present Ariadne, a practical symbolic execution system specifically designed and implemented for detecting floating-point exceptions. Ariadne systematically transforms a numerical program to explicitly check each exception-triggering condition. Ariadne symbolically executes the transformed program using real arithmetic to find candidate real-valued inputs that can reach and trigger an exception. Ariadne converts each candidate input into a floating-point number, then tests it against the original program. In general, approximating floating-point arithmetic with real arithmetic can change paths from feasible to infeasible and vice versa. The key insight of this work is that, for the problem of detecting floating-point exceptions, this approximation works well in practice because, if one input reaches an exception, many are likely to, and at least one of them will do so over both floating-point and real arithmetic. To realize Ariadne, we also devised a novel, practical linearization technique to solve nonlinear constraints. We extensively evaluated Ariadne over 467 scalar functions in the widely used GNU Scientific Library (GSL). Our results show that Ariadne is practical and identifies a large number of real runtime exceptions in GSL. The GSL developers confirmed our preliminary findings and look forward to Ariadne’s public release, which we plan to do in the near future.
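The check-then-confirm loop described in this abstract can be illustrated on a one-operation program (a toy sketch, not Ariadne's actual transformation: the overflow-triggering condition is made explicit, solved over the reals by hand, and the candidate is replayed on the floating-point program):

```python
import math

def original(x):
    # A tiny numerical "program" that overflows for large x.
    return math.exp(x) * 2.0

# Step 1: the explicit trigger condition, stated over the reals:
#   exp(x) * 2 > DBL_MAX   <=>   x > log(DBL_MAX / 2)
# so any real x above that bound is a candidate witness.
candidate = math.log(1.7976931348623157e308 / 2.0) + 1.0

# Step 2: convert the real-valued candidate to a float and test it
# against the original program to confirm the exception is real.
try:
    original(float(candidate))
    triggered = False
except OverflowError:
    triggered = True
assert triggered
```

In Python the overflow surfaces as an `OverflowError`; in C the same input would produce an IEEE overflow exception or `inf`.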
Feedback-Directed Unit Test Generation for C/C++ using Concolic Execution
Abstract

Cited by 5 (0 self)
Abstract—In industry, software testing and coverage-based metrics are the predominant techniques to check correctness of software. This paper addresses automatic unit test generation for programs written in C/C++. The main idea is to improve the coverage obtained by feedback-directed random test generation methods, by utilizing concolic execution on the generated test drivers. Furthermore, for programs with numeric computations, we employ nonlinear solvers in a lazy manner to generate new test inputs. These techniques significantly improve the coverage provided by a feedback-directed random unit testing framework, while retaining the benefits of full automation. We have implemented these techniques in a prototype platform, and describe promising experimental results on a number of C/C++ open source benchmarks.
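The feedback-directed random generation this abstract builds on can be sketched in a few lines (an illustrative Python toy, not the paper's C/C++ tool: random inputs are kept only when coverage feedback shows they reached a new branch):

```python
import random

def branches(x):
    """Function under test, instrumented to report which branch it takes."""
    if x < 0:
        return "neg"
    if x == 0:
        return "zero"
    return "pos"

random.seed(0)
covered, suite = set(), []
for _ in range(1000):
    x = random.randint(-5, 5)
    b = branches(x)
    if b not in covered:       # feedback: new coverage, so keep this test
        covered.add(b)
        suite.append(x)

assert covered == {"neg", "zero", "pos"}
assert len(suite) == 3         # one retained test per branch
```

The paper's contribution starts where this sketch stops: branches that random inputs rarely hit are targeted with concolic execution and, lazily, nonlinear solvers.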
Formal property verification in a conformance testing framework. [Online at: http://www.public.asu.edu/~hyabbas/techreports/MEMOCODE14TechRpt.pdf]
, 2014
Abstract

Cited by 4 (0 self)
Abstract—In model-based design of cyber-physical systems, such as switched mixed-signal circuits or software-controlled physical systems, it is common to develop a sequence of system models of different fidelity and complexity, each appropriate for a particular design or verification task. In such a sequence, one model is often derived from the other by a process of simplification or implementation. For example, a Simulink model might be implemented on an embedded processor via automatic code generation. Three questions naturally present themselves: how do we quantify closeness between the two systems? How can we measure such closeness? If the original system satisfies some formal property, can we automatically infer what properties are then satisfied by the derived model? This paper addresses all three questions: we quantify the closeness between original and derived model via a distance measure between their outputs. We then propose two computational methods for approximating this closeness measure. Finally, we derive syntactical rewriting rules which, when applied to a Metric Temporal Logic specification satisfied by the original model, produce a formula satisfied by the derived model. We demonstrate the soundness of the theory with several experiments.
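One simple instance of an output-distance measure like the one this abstract describes is the largest pointwise gap between two output traces (an illustrative choice in Python; the paper's measures and rewriting rules are more general):

```python
def trace_distance(y_original, y_derived):
    """Sup-norm distance between two equally sampled output traces."""
    assert len(y_original) == len(y_derived)
    return max(abs(a - b) for a, b in zip(y_original, y_derived))

y_model = [0.0, 0.5, 1.0, 0.5]      # e.g. outputs of the original model
y_impl  = [0.0, 0.48, 1.03, 0.51]   # e.g. outputs of the generated code
d = trace_distance(y_model, y_impl)
assert abs(d - 0.03) < 1e-12

# Rewriting intuition: if the model satisfies "output stays below B",
# the derived system is guaranteed "output stays below B + d".
```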
Randomized Accuracy-Aware Program Transformations For Efficient Approximate Computations
Abstract
Despite the fact that approximate computations have come to dominate many areas of computer science, the field of program transformations has focused almost exclusively on traditional semantics-preserving transformations that do not attempt to exploit the opportunity, available in many computations, to acceptably trade off accuracy for benefits such as increased performance and reduced resource consumption. We present a model of computation for approximate computations and an algorithm for optimizing these computations. The algorithm works with two classes of transformations: substitution transformations (which select one of a number of available implementations for a given function, with each implementation offering a different combination of accuracy and resource consumption) and sampling transformations (which randomly discard some of the inputs to a given reduction). The algorithm produces a (1 + ε) randomized approximation to the optimal randomized computation (which minimizes resource consumption subject to a probabilistic accuracy specification in the form of a maximum expected error or maximum error variance).
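A sampling transformation of the kind described here can be sketched directly (an illustrative Python toy, not the paper's algorithm: a sum reduction is evaluated on a random fraction p of its inputs and rescaled, giving an unbiased estimate for proportionally less work):

```python
import random

def sampled_sum(xs, p, rng):
    """Keep each input with probability p, rescale to stay unbiased."""
    kept = [x for x in xs if rng.random() < p]
    return sum(kept) / p if kept else 0.0

rng = random.Random(42)
xs = list(range(1000))
exact = sum(xs)                       # 499500
approx = sampled_sum(xs, 0.5, rng)    # ~half the work, unbiased estimate
assert abs(approx - exact) / exact < 0.2   # typically much closer
```

The accuracy/resource trade-off is explicit: lower p means fewer inputs touched but a larger error variance, which is exactly the quantity the paper's optimizer constrains.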
Non-local robustness analysis via rewriting techniques
Abstract
Robustness is a correctness property which intuitively means that if the inputs to a program change by less than a fixed small amount, then its output changes only slightly. The study of errors caused by finite-precision semantics requires a stronger property: the results in the finite-precision semantics have to be close to the results in the exact semantics. Compositional methods are often not useful in determining which programs are robust, since key constructs—like the conditional and the while-loop—are not continuous. We propose a method for proving that some while-loop programs always return finite-precision values close to the exact values. Our method uses techniques borrowed from rewriting theory to analyze the possible paths in a program’s execution, in order to show that while local operations in a program might not be robust, the full program might be guaranteed to be robust. This method is non-local in the sense that, instead of breaking the analysis down to single lines of code, it checks certain global properties of its structure. We show the applicability of our method on two standard algorithms: the CORDIC computation of the cosine and Dijkstra’s shortest path algorithm.
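The CORDIC cosine mentioned as a case study is a while-loop program whose individual steps branch on a sign test, yet whose final result converges; a standard textbook formulation (our sketch in Python, not the paper's code) looks like this:

```python
import math

def cordic_cos(theta, n=32):
    """CORDIC rotation mode: returns cos(theta) for |theta| < ~1.74 rad."""
    angles = [math.atan(2.0**-i) for i in range(n)]
    K = 1.0                              # accumulated scaling correction
    for i in range(n):
        K /= math.sqrt(1 + 2.0**(-2*i))
    x, y, z = 1.0, 0.0, theta
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0      # non-robust local branch
        x, y = x - d*y*2.0**-i, y + d*x*2.0**-i
        z -= d * angles[i]
    return x * K                         # x converges to cos(theta)/K

assert abs(cordic_cos(0.5) - math.cos(0.5)) < 1e-6
```

Note the branch on `z >= 0`: a tiny perturbation can flip any single iteration's direction, which is exactly why a line-by-line (local) analysis fails and a global, path-based argument is needed.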
On Numerical Error Propagation with Sensitivity
Abstract
An emerging area of research is to automatically compute reasonably precise upper bounds on numerical errors including roundoffs. Previous approaches for this task are limited in their precision and scalability, especially in the presence of branches and loops. We argue that one reason for these limitations is the focus of past approaches on approximating errors of individual reachable states. We propose instead a more relational and modular approach to analysis that characterizes analytically the input/output behavior of code fragments and reuses this characterization to reason about larger code fragments. We use the derivatives of the functions corresponding to program paths to capture a program’s sensitivity to input changes. To apply this approach for finite-precision code, we decouple the computation of newly introduced roundoff errors from the amplification of existing errors. This enables us to precisely and efficiently account for propagation of errors through long-running computation. Using this approach we implemented an analysis for programs containing nonlinear computation, conditionals, and loops. In the presence of loops our approach can find closed-form symbolic invariants capturing upper bounds on numerical errors, even when the error grows with the number of iterations. We evaluate our system on a number of benchmarks from embedded systems and scientific computation, showing substantial improvements in precision and scalability over the state of the art.
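The decoupling this abstract describes, amplification of existing error via the derivative versus freshly introduced roundoff, can be written out for a chain of unary operations (a simplified sketch in Python; the helper `propagate` and the first-order error model are ours, not the paper's analysis):

```python
import math

UNIT = 2**-53  # double-precision unit roundoff

def propagate(f, df, x, err_in):
    """Apply f and track a first-order error bound through it."""
    y = f(x)
    amplified = abs(df(x)) * err_in   # existing error, scaled by |f'(x)|
    roundoff = abs(y) * UNIT          # newly introduced rounding error
    return y, amplified + roundoff

# Chain: y = sin(x), z = y * y, starting with input error 1e-10.
x, err = 1.0, 1e-10
y, err = propagate(math.sin, math.cos, x, err)
z, err = propagate(lambda t: t * t, lambda t: 2 * t, y, err)
assert abs(z - math.sin(1.0)**2) < 1e-12
assert err < 1e-9   # the bound stays small after two operations
```

Keeping the two terms separate is what lets the analysis build a closed-form recurrence for loops: the amplification factors compose multiplicatively while fresh roundoff accumulates additively.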