Results 1 - 10 of 12
Directed incremental symbolic execution
In PLDI, 2011
"... The last few years have seen a resurgence of interest in the use of symbolic execution – a program analysis technique developed more than three decades ago to analyze program execution paths. Scaling symbolic execution and other path-sensitive analysis techniques to large systems remains challenging ..."
Abstract - Cited by 30 (8 self)
The last few years have seen a resurgence of interest in the use of symbolic execution, a program analysis technique developed more than three decades ago to analyze program execution paths. Scaling symbolic execution and other path-sensitive analysis techniques to large systems remains challenging despite recent algorithmic and technological advances. An alternative to solving the problem of scalability is to reduce the scope of the analysis. One approach that is widely studied in the context of regression analysis is to analyze the differences between two related program versions. While such an approach is intuitive in theory, finding efficient and precise ways to identify program differences and to characterize their effects on program execution has proved challenging in practice. In this paper, we present Directed Incremental Symbolic Execution (DiSE), a novel technique for detecting and characterizing the effects of program changes. The novelty of DiSE is to combine the efficiency of static analysis techniques, which compute program difference information, with the precision of symbolic execution, which explores program execution paths and generates path conditions affected by the differences. DiSE is complementary to other reduction and bounding techniques developed to improve symbolic execution. Furthermore, DiSE does not require analysis results to be carried forward as the software evolves; only the source code for the two related program versions is required. A case study of our implementation of DiSE illustrates its effectiveness at detecting and characterizing the effects of program changes.
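To make the core idea concrete, here is a small, runnable toy sketch. The names (`dise_paths`, `reachable_from`) and the reachability-based impact set are illustrative assumptions, not the authors' implementation, which computes affected locations from control- and data-dependence information. The sketch enumerates only paths that a syntactic change can affect and prunes the rest.

```python
# Toy sketch of the DiSE pruning idea (hypothetical names, not the
# authors' code). The CFG is a dict mapping each node to its successor
# list; 'changed' marks syntactically modified nodes.

def reachable_from(graph, starts):
    """All nodes reachable from 'starts' (including the starts)."""
    seen, work = set(), list(starts)
    while work:
        n = work.pop()
        if n not in seen:
            seen.add(n)
            work.extend(graph.get(n, []))
    return seen

def dise_paths(cfg, entry, changed):
    """Enumerate only paths whose conditions a change can affect."""
    affected = reachable_from(cfg, changed)        # crude forward-impact set
    reverse = {}
    for node, succs in cfg.items():
        for s in succs:
            reverse.setdefault(s, []).append(node)
    can_reach = reachable_from(reverse, affected)  # can still hit a change
    paths = []

    def explore(node, path, touched):
        touched = touched or node in affected
        path = path + [node]
        if not cfg.get(node):                      # path end
            if touched:
                paths.append(path)                 # an impacted path
            return
        for succ in cfg[node]:
            if touched or succ in can_reach:
                explore(succ, path, touched)       # otherwise: pruned

    explore(entry, [], False)
    return paths

# Only the path through the changed node B survives; A -> C is pruned.
cfg = {"A": ["B", "C"], "B": [], "C": []}
print(dise_paths(cfg, "A", changed={"B"}))         # [['A', 'B']]
```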
Reuse of verification results: Conditional model checking, precision reuse, and verification witnesses
In Proc. SPIN, LNCS 7976, 2013
"... Abstract. Verification is a complex algorithmic task, requiring large amounts of computing resources. One approach to reduce the resource consumption is to reuse information from previous verification runs. This paper gives an overview of three techniques for such information reuse. Conditional mode ..."
Abstract - Cited by 4 (4 self)
Verification is a complex algorithmic task, requiring large amounts of computing resources. One approach to reducing the resource consumption is to reuse information from previous verification runs. This paper gives an overview of three techniques for such information reuse. Conditional model checking outputs a condition that describes the state space that was successfully verified, and accepts as input a condition that instructs the model checker which parts of the system should be verified; thus, later verification runs can use the output condition of previous runs in order to not verify again parts of the state space that were already verified. Precision reuse is a technique that uses intermediate results from previous verification runs to accelerate further verification runs of the system; information about the level of abstraction in the abstract model can be reused in later verification runs. Typical model checkers provide an error path through the system as a witness for having proved that a system violates a property, and a few model checkers provide some kind of proof certificate as a witness for the correctness of the system; these witnesses should be such that the verifiers can read them and, with less computational effort, (re-)verify that the witness is valid.
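As a concrete illustration of the witness idea, the toy below re-checks a claimed error path against an explicit-state transition system. The names and the list-of-states witness format are illustrative assumptions, not a real tool's interface; the point it shows is that validating a witness is far cheaper than the search that produced it.

```python
# Minimal sketch of error-witness validation (hypothetical format).

def validate_error_witness(transitions, init, bad, path):
    """Confirm 'path' is a real trace from 'init' into a bad state."""
    if not path or path[0] != init or path[-1] not in bad:
        return False
    # Each consecutive step must be an actual transition of the system.
    return all(t in transitions.get(s, []) for s, t in zip(path, path[1:]))

ts = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(validate_error_witness(ts, init=0, bad={3}, path=[0, 1, 3]))  # True
```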
Memoise: a tool for memoized symbolic execution
In ICSE, 2013
"... Abstract—This tool paper presents a tool for performing mem-oized symbolic execution (Memoise), an approach we developed in previous work for more efficient application of symbolic execution. The key idea in Memoise is to allow re-use of symbolic execution results across different runs of symbolic e ..."
Abstract - Cited by 4 (1 self)
This tool paper presents a tool for performing memoized symbolic execution (Memoise), an approach we developed in previous work for more efficient application of symbolic execution. The key idea in Memoise is to allow reuse of symbolic execution results across different runs of symbolic execution, without having to recompute previously computed results as done in earlier approaches. Specifically, Memoise builds a trie-based data structure to record path exploration information during a run of symbolic execution, optimizes the trie for the next run, and reuses the resulting trie during that run. The tool optimizes symbolic execution in three standard scenarios where it is commonly applied: iterative deepening, regression analysis, and heuristic search. Memoise builds on the Symbolic PathFinder framework to provide more efficient symbolic execution of Java programs and is available online for download, together with a demonstration video of the tool.
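The trie idea can be sketched in a few lines. This is a hypothetical toy, not the actual Symbolic PathFinder extension, and it only covers recording and replaying (not the per-scenario trie optimizations the abstract mentions): each trie edge records one branch decision, so a later run can tell how much of a path prefix was already explored before calling the solver.

```python
# Illustrative trie for memoizing symbolic-execution branch decisions.

class TrieNode:
    def __init__(self):
        self.children = {}   # (branch_id, taken) -> TrieNode

def record(root, decisions):
    """Store one explored path: a list of (branch_id, taken) pairs."""
    node = root
    for d in decisions:
        node = node.children.setdefault(d, TrieNode())

def solver_needed_from(root, decisions):
    """Index of the first decision missing from the trie, or None if the
    whole prefix was explored before (no solver calls required)."""
    node = root
    for i, d in enumerate(decisions):
        if d not in node.children:
            return i
        node = node.children[d]
    return None

root = TrieNode()
record(root, [("b1", True), ("b2", False)])
print(solver_needed_from(root, [("b1", True), ("b2", True)]))   # 1
print(solver_needed_from(root, [("b1", True), ("b2", False)]))  # None
```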
Precision Reuse for Efficient Regression Verification
2013
"... Continuous testing during development is a well-established technique for software-quality assurance. Continuous model checking from revision to revision is not yet established as a standard practice, because the enormous resource consumption makes its application impractical. Model checkers compute ..."
Abstract - Cited by 2 (2 self)
Continuous testing during development is a well-established technique for software-quality assurance. Continuous model checking from revision to revision is not yet established as a standard practice, because the enormous resource consumption makes its application impractical. Model checkers compute a large number of verification facts that are necessary for verifying whether a given specification holds. We have identified a category of such intermediate results that are easy to store and efficient to reuse: abstraction precisions. The precision of an abstract domain specifies the level of abstraction that the analysis works on. Precisions are thus a precious result of the verification effort, and it is a waste of resources to throw them away after each verification run. In particular, precisions are reasonably small and thus easy to store; they are easy to process and have a large impact on resource consumption. We experimentally show the impact of precision reuse on industrial verification problems created from 62 Linux kernel device drivers with 1,119 revisions.
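The reuse mechanism can be sketched as follows. Everything here is an illustrative assumption, not any real verifier's interface: the precision is modeled as a set of predicate strings, the file format is ad hoc, and `check` stands in for the abstraction-based verifier, returning a verdict plus any predicates its refinement loop had to add.

```python
# Hypothetical sketch of precision reuse across revisions.

import json

def save_precision(path, precision):
    with open(path, "w") as f:
        json.dump(sorted(precision), f)

def load_precision(path):
    try:
        with open(path) as f:
            return set(json.load(f))
    except FileNotFoundError:
        return set()                  # first revision: empty precision

def verify_revision(check, revision, store="precision.json"):
    """'check(revision, precision)' returns (verdict, added_predicates).
    Seeding it with the stored precision skips the refinement iterations
    that would merely rediscover known predicates."""
    precision = load_precision(store)
    verdict, added = check(revision, precision)
    save_precision(store, precision | added)
    return verdict
```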
Multi-solver Support in Symbolic Execution
"... Abstract. One of the main challenges of dynamic symbolic execution— an automated program analysis technique which has been successfully employed to test a variety of software—is constraint solving. A key decision in the design of a symbolic execution tool is the choice of a constraint solver. While ..."
Abstract - Cited by 1 (0 self)
One of the main challenges of dynamic symbolic execution, an automated program analysis technique which has been successfully employed to test a variety of software, is constraint solving. A key decision in the design of a symbolic execution tool is the choice of a constraint solver. While different solvers have different strengths, for most queries it is not possible to tell in advance which solver will perform better. In this paper, we argue that symbolic execution tools can, and should, make use of multiple constraint solvers. These solvers can be run competitively in parallel, with the symbolic execution engine using the result from the best-performing solver. We present empirical data obtained by running the symbolic execution engine KLEE on a set of real programs, and use it to highlight several important characteristics of the constraint-solving queries generated during symbolic execution. In particular, we show the importance of constraint caching and counterexample values on the (relative) performance of KLEE configured to use different SMT solvers. We have implemented multi-solver support in KLEE, using the metaSMT framework, and explored how different state-of-the-art solvers compare on a large set of constraint-solving queries. We also report on our ongoing experience building a parallel portfolio solver in KLEE.
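As a sketch of the portfolio idea in Python (illustrative only; the KLEE work described above is implemented in C++ on metaSMT), each backend receives the same query and the engine takes the first answer:

```python
# Minimal portfolio-solver sketch: first backend to answer wins.

from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def portfolio_solve(query, solvers):
    """'solvers' maps a solver name to a callable taking the query."""
    pool = ThreadPoolExecutor(max_workers=len(solvers))
    futures = {pool.submit(fn, query): name for name, fn in solvers.items()}
    done, _ = wait(futures, return_when=FIRST_COMPLETED)
    winner = next(iter(done))
    # Python threads cannot be killed; a real engine would run solver
    # processes it can terminate once the first result arrives.
    pool.shutdown(wait=False)
    return futures[winner], winner.result()

name, answer = portfolio_solve("(assert (> x 0))",
                               {"fast": lambda q: "sat",
                                "slow": lambda q: "sat"})
print(name, answer)
```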
Quantification of Software Changes through Probabilistic Symbolic Execution
"... Characterizing software changes is a fundamental component of software maintenance. Despite being widely used and computationally efficient, techniques that characterize syntactic program changes lack an insight on the changed program behaviors and can possibly lead to unnecessary maintenance effort ..."
Abstract - Cited by 1 (0 self)
Characterizing software changes is a fundamental component of software maintenance. Despite being widely used and computationally efficient, techniques that characterize syntactic program changes lack insight into the changed program behaviors and can lead to unnecessary maintenance efforts. Recent promising techniques use program analysis to produce a behavioral characterization of program changes, see e.g. [10, 12]. Behaviors are either abstracted through operational models (e.g., transition systems) or summarized through a set of logical formulae satisfied by the input-output relation (e.g., pre- and post-conditions). Checking the implication or the equivalence between the abstractions of different program versions provides a qualitative assessment of the preservation of desired behaviors or the elimination of undesired behaviors. Nonetheless, such qualitative assessment provides only true-false answers, offering limited guidance on “how far” two versions are from one another. Recent work [8, 9] provides a more informative, but still only qualitative, representation of the difference. We argue that a complementary quantitative representation of software changes is needed, particularly for programs required to operate under uncertain usage profiles, where the goal of maintenance is to improve
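A toy example of what a quantitative measure adds (illustrative only; the quantification argued for above would use probabilistic symbolic execution and model counting over path conditions, not enumeration): under a uniform usage profile, the fraction of inputs on which two versions disagree gives a "how far apart" number that a true/false equivalence check cannot.

```python
# Toy quantification of a change under a uniform usage profile.

def change_probability(old_fn, new_fn, domain):
    """Fraction of inputs on which the two versions disagree."""
    differing = sum(1 for x in domain if old_fn(x) != new_fn(x))
    return differing / len(domain)

old = lambda x: abs(x)
new = lambda x: x if x >= 0 else -x - 1    # behavior changed for x < 0
print(change_probability(old, new, range(-8, 8)))   # 0.5
```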
Feedback-driven dynamic invariant discovery
In Proc. ISSTA, 2014
"... Program invariants can help software developers identify program properties that must be preserved as the software evolves, however, formulating correct invariants can be chal-lenging. In this work, we introduce iDiscovery, a technique which leverages symbolic execution to improve the quality of dyn ..."
Abstract - Cited by 1 (1 self)
Program invariants can help software developers identify program properties that must be preserved as the software evolves; however, formulating correct invariants can be challenging. In this work, we introduce iDiscovery, a technique which leverages symbolic execution to improve the quality of dynamically discovered invariants computed by Daikon. Candidate invariants generated by Daikon are synthesized into assertions that are instrumented into the program. The instrumented code is executed symbolically to generate new test cases that are fed back to Daikon to help further refine the set of candidate invariants. This feedback loop is executed until a fix-point is reached. To mitigate the cost of symbolic execution, we present optimizations to prune the symbolic state space and to reduce the complexity of the generated path conditions. We also leverage recent advances in constraint solution reuse techniques to avoid computing results for the same constraints across iterations. Experimental results show that iDiscovery converges to a set of higher-quality invariants than the initial set of candidate invariants in a small number of iterations.
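The feedback loop can be sketched as follows. The callables are hypothetical stand-ins for Daikon and the symbolic executor, not their real interfaces:

```python
# Hypothetical sketch of the iDiscovery fixpoint loop.

def idiscovery(program, tests, infer, instrument, generate_tests):
    """infer(program, tests) -> frozenset of candidate invariants;
    instrument(program, invariants) -> program with assertions added;
    generate_tests(program) -> set of tests from symbolic execution."""
    invariants = infer(program, tests)
    while True:
        # Symbolically execute the asserted program to hit assertion branches.
        new_tests = generate_tests(instrument(program, invariants))
        tests = tests | new_tests
        refined = infer(program, tests)
        if refined == invariants:
            return invariants          # fixpoint: candidates stabilized
        invariants = refined
```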
Exploiting Undefined Behaviors for Efficient Symbolic Execution
"... Symbolic execution is an important and popular technique used in several software engineering tools for test case generation, debug-ging and program analysis. As such improving the performance of symbolic execution can have huge impact on the effectiveness of such tools. On the other hand, optimizat ..."
Abstract
Symbolic execution is an important and popular technique used in several software engineering tools for test case generation, debugging, and program analysis. As such, improving the performance of symbolic execution can have a huge impact on the effectiveness of such tools. On the other hand, optimizations based on undefined behaviors are an essential part of current C and C++ compilers (like GCC and LLVM). In this paper, we present a technique to systematically introduce undefined behaviors during compilation to speed up the subsequent symbolic execution of the program. We have implemented our technique inside LLVM and tested it with an existing symbolic execution engine (Pathgrind). Preliminary results on the SIR repository benchmark are encouraging and show a 48% speedup in time and a 30% reduction in the number of constraints.
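A toy illustration of why assumed undefined behavior can shrink solver queries (this is not the authors' LLVM pass, and the encoding is purely illustrative): if signed overflow is treated as undefined, the wrap-around side conditions disappear from the path condition of every symbolic addition.

```python
# Toy: fewer path-condition constraints when overflow is assumed UB.

INT_MIN, INT_MAX = -2**31, 2**31 - 1

def add_path_condition(x, y, overflow_is_ub):
    pc = [f"sum == {x} + {y}"]
    if not overflow_is_ub:
        # Modelling two's-complement wrap-around adds side conditions
        # to every signed addition in this toy encoding.
        pc.append(f"{x} + {y} <= {INT_MAX}")
        pc.append(f"{x} + {y} >= {INT_MIN}")
    return pc

print(len(add_path_condition("a", "b", overflow_is_ub=False)))  # 3
print(len(add_path_condition("a", "b", overflow_is_ub=True)))   # 1
```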