Results 1 - 10 of 455
EXE: Automatically generating inputs of death
- In Proceedings of the 13th ACM Conference on Computer and Communications Security (CCS)
, 2006
"... This article presents EXE, an effective bug-finding tool that automatically generates inputs that crash real code. Instead of running code on manually or randomly constructed input, EXE runs it on symbolic input initially allowed to be anything. As checked code runs, EXE tracks the constraints on ea ..."
Cited by 349 (21 self)
This article presents EXE, an effective bug-finding tool that automatically generates inputs that crash real code. Instead of running code on manually or randomly constructed input, EXE runs it on symbolic input initially allowed to be anything. As checked code runs, EXE tracks the constraints on each symbolic (i.e., input-derived) memory location. If a statement uses a symbolic value, EXE does not run it, but instead adds it as an input-constraint; all other statements run as usual. If code conditionally checks a symbolic expression, EXE forks execution, constraining the expression to be true on the true branch and false on the other. Because EXE reasons about all possible values on a path, it has much more power than a traditional runtime tool: (1) it can force execution down any feasible program path and (2) at dangerous operations (e.g., a pointer dereference), it detects if the current path constraints allow any value that causes a bug. When a path terminates or hits a bug, EXE automatically generates a test case by solving the current path constraints to find concrete values using its own co-designed constraint solver, STP. Because EXE’s constraints have no approximations, feeding this concrete input to an uninstrumented version of the checked code will cause it to follow the same path and hit the same bug (assuming deterministic code).
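To make the mechanics concrete, here is a minimal sketch of EXE-style test generation in Python, with the z3 solver (pip install z3-solver) standing in for the co-designed solver STP; the branch, the bug condition, and the variable names are illustrative assumptions, not EXE's actual interface.

```python
# Toy model of symbolically executing `if (x > 100) y = 1000 / (x - 101);`
from z3 import BitVec, Solver, sat

x = BitVec("x", 32)        # symbolic input, "initially allowed to be anything"

s = Solver()
s.add(x > 100)             # path constraint collected on the true branch
s.add(x - 101 == 0)        # dangerous operation: can any value on this path
                           # make the divisor zero?
if s.check() == sat:
    print("crashing input:", s.model()[x])   # concrete test case: x = 101
```

Because the constraints are exact, replaying the solved value against the uninstrumented program drives it down the same path to the same bug, which is the replay property the abstract claims.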
ESP: Path-Sensitive Program Verification in Polynomial Time
, 2002
"... In this paper, we present a new algorithm for partial program verification that runs in polynomial time and space. We are interested in checking that a program satisfies a given temporal safety property. Our insight is that by accurately modeling only those branches in a program for which the proper ..."
Cited by 299 (4 self)
In this paper, we present a new algorithm for partial program verification that runs in polynomial time and space. We are interested in checking that a program satisfies a given temporal safety property. Our insight is that by accurately modeling only those branches in a program for which the property-related behavior differs along the arms of the branch, we can design an algorithm that is accurate enough to verify the program with respect to the given property, without paying the potentially exponential cost of full path-sensitive analysis. We have implemented this “property simulation” algorithm as part of a partial verification tool called ESP. We present the results of applying ESP to the problem of verifying the file I/O behavior of a version of the GNU C compiler (gcc, 140,000 LOC). We are able to prove that all of the 646 calls to fprintf in the source code of gcc are guaranteed to print to valid, open files. Our results show that property simulation scales to large programs and is accurate enough to verify meaningful properties.
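As a rough illustration of the property simulation idea, the sketch below uses a hypothetical state representation (not ESP's actual data structures): at a join point, execution states whose property-automaton state agrees are merged, so only property-relevant branches multiply states.

```python
# Each state pairs a property-automaton state (a file is "opened" or "closed")
# with ordinary dataflow facts; join merges states with equal property state.
def join(states):
    merged = {}
    for prop_state, facts in states:
        merged.setdefault(prop_state, set()).update(facts)
    return list(merged.items())

# Branch arms that never touch the file both arrive "opened": one state survives.
print(join([("opened", {"x=1"}), ("opened", {"x=2"})]))
# A property-relevant branch keeps its arms apart, preserving path sensitivity.
print(join([("opened", {"y=3"}), ("closed", {"y=4"})]))
```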
Automatic Verification of Pipelined Microprocessor Control
, 1994
"... We describe a technique for verifying the control logic of pipelined microprocessors. It handles more complicated designs, and requires less human intervention, than existing methods. The technique automaticMly compares a pipelined implementation to an architectural description. The CPU time nee ..."
Cited by 290 (7 self)
We describe a technique for verifying the control logic of pipelined microprocessors. It handles more complicated designs, and requires less human intervention, than existing methods. The technique automatically compares a pipelined implementation to an architectural description. The CPU time needed for verification is independent of the data path width, the register file size, and the number of ALU operations.
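The underlying check (the commutative diagram popularized by this paper, often called the Burch-Dill method) can be sketched on a toy one-stage pipeline; the Python model below is a hypothetical stand-in for the paper's symbolic evaluation of real designs.

```python
def step_spec(reg, instr):
    """One architectural (ISA) step on a single-register machine."""
    op, n = instr
    return reg + n if op == "add" else reg

def flush(state):
    """Drain the pipeline: retire the latched instruction, if any."""
    reg, pending = state
    return step_spec(reg, pending) if pending else reg

def step_impl(state, instr):
    """One pipelined step: retire the old instruction, latch the new one."""
    reg, pending = state
    return (step_spec(reg, pending) if pending else reg, instr)

# Exhaustive check on a tiny space; the paper performs it symbolically, which
# is why the cost is independent of data-path width and register-file size.
for reg in range(4):
    for pending in (None, ("add", 1)):
        for instr in (("add", 2), ("nop", 0)):
            state = (reg, pending)
            assert flush(step_impl(state, instr)) == step_spec(flush(state), instr)
print("impl-then-flush agrees with flush-then-spec on the toy pipeline")
```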
The Design and Implementation of a Certifying Compiler
, 1998
"... This paper presents the design and implementation of a com-piler that translates programs written in a type-safe subset of the C programming language into highly optimized DEC Alpha assembly language programs, and a certifier that automatically checks the type safety and memory safety of any assembl ..."
Cited by 275 (10 self)
This paper presents the design and implementation of a compiler that translates programs written in a type-safe subset of the C programming language into highly optimized DEC Alpha assembly language programs, and a certifier that automatically checks the type safety and memory safety of any assembly language program produced by the compiler. The result of the certifier is either a formal proof of type safety or a counterexample pointing to a potential violation of the type system by the target program. The ensemble of the compiler and the certifier is called a certifying compiler. Several advantages of certifying compilation over previous approaches can be claimed. The notion of a certifying compiler is significantly easier to employ than a formal compiler verification, in part because it is generally easier to verify the correctness of the result of a computation than to prove the correctness of the computation itself. Also, the approach can be applied even to highly optimizing compilers, as demonstrated by the fact that our compiler generates target code, for a range of realistic C programs, which is competitive with both the cc and gcc compilers with all optimizations enabled. The certifier also drastically improves the effectiveness of compiler testing because, for each test case, it statically signals compilation errors that might otherwise require many executions to detect. Finally, this approach is a practical way to produce the safety proofs for a Proof-Carrying Code system, and thus may be useful in a system for safe mobile code.
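A minimal sketch of the certifier half, on a hypothetical three-address target language rather than DEC Alpha assembly: the checker either accepts the program as type-safe or reports the first instruction that breaks the typing rules, playing the role of the counterexample.

```python
def certify(prog, reg_types):
    """Check each instruction against two toy typing rules."""
    for pc, (op, dst, src1, src2) in enumerate(prog):
        t1, t2 = reg_types.get(src1), reg_types.get(src2)
        if op == "add" and t1 == t2 == "int":
            reg_types[dst] = "int"      # int + int yields int
        elif op == "load" and t1 == "ptr" and src2 is None:
            reg_types[dst] = "int"      # loads must go through a pointer
        else:
            return f"counterexample at pc={pc}: {op} violates the type system"
    return "type-safe"

print(certify([("add", "r2", "r0", "r1")], {"r0": "int", "r1": "int"}))
print(certify([("load", "r2", "r0", None)], {"r0": "int"}))  # r0 is not a ptr
```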
Lazy Satisfiability Modulo Theories
- JOURNAL ON SATISFIABILITY, BOOLEAN MODELING AND COMPUTATION 3 (2007) 141–224
, 2007
"... Satisfiability Modulo Theories (SMT) is the problem of deciding the satisfiability of a first-order formula with respect to some decidable first-order theory T (SMT (T)). These problems are typically not handled adequately by standard automated theorem provers. SMT is being recognized as increasingl ..."
Cited by 189 (50 self)
Satisfiability Modulo Theories (SMT) is the problem of deciding the satisfiability of a first-order formula with respect to some decidable first-order theory T (SMT(T)). These problems are typically not handled adequately by standard automated theorem provers. SMT is being recognized as increasingly important due to its applications in many domains in different communities, in particular in formal verification. A number of papers with novel and very efficient techniques for SMT have been published in recent years, and some very efficient SMT tools are now available. Typical SMT(T) problems require testing the satisfiability of formulas which are Boolean combinations of atomic propositions and atomic expressions in T, so that heavy Boolean reasoning must be efficiently combined with expressive theory-specific reasoning. The dominating approach to SMT(T), called the lazy approach, is based on the integration of a SAT solver and of a decision procedure able to handle sets of atomic constraints in T (a T-solver), handling respectively the Boolean and the theory-specific components of reasoning. Unfortunately, neither the problem of building an efficient SMT solver, nor even that of acquiring a comprehensive background knowledge in lazy SMT, is of simple solution.
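The lazy schema in its simplest, offline form can be sketched as follows, with z3's arithmetic engine (pip install z3-solver) playing the T-solver; the formula and atom names are invented for illustration, and a real lazy solver replaces the brute-force enumeration with an incremental SAT solver plus learned blocking clauses.

```python
from itertools import product
from z3 import Real, Solver, Not, sat

x = Real("x")
atoms = {"A": x > 2, "B": x < 1}        # theory atoms behind Boolean vars A, B
bool_formula = lambda A, B: A or B      # Boolean abstraction of the formula

for A, B in product([True, False], repeat=2):
    if not bool_formula(A, B):
        continue                         # the SAT solver proposes Boolean models...
    s = Solver()                         # ...and the T-solver checks their literals
    s.add(atoms["A"] if A else Not(atoms["A"]),
          atoms["B"] if B else Not(atoms["B"]))
    if s.check() == sat:                 # A=B=True is rejected: x>2 ∧ x<1 is unsat
        print("T-consistent model:", (A, B), s.model())
        break
```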
Deciding Combinations of Theories
- JOURNAL OF THE ACM
, 1984
"... A method is given for decidlng formulas in combinations of unquantified first-order theories. Rather than coupling separate decision procedures for the contributing theories, the method makes use of a single, uniform procedure that minimizes the code needed to accommodate each additional theory. It ..."
Cited by 174 (0 self)
A method is given for deciding formulas in combinations of unquantified first-order theories. Rather than coupling separate decision procedures for the contributing theories, the method makes use of a single, uniform procedure that minimizes the code needed to accommodate each additional theory. It is applicable to theories whose semantics can be encoded within a certain class of purely equational canonical form theories that is closed under combination. Examples are given from the equational theories of integer and real arithmetic, a subtheory of monadic set theory, the theory of cons, car, and cdr, and others. A discussion of the speed performance of the procedure and a proof of the theorem that underlies its completeness are also given. The procedure has been used extensively as the deductive core of a system for program specification and verification.
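One ingredient of such a uniform procedure, the canonical form, can be sketched in isolation (a hypothetical fragment, not the paper's full method): a canonizer rewrites every linear sum into a unique normal form, so theory equality reduces to syntactic identity of canonical forms.

```python
from collections import Counter

def canon(term):
    """Canonize flat sums like ('+', 'x', 1, 'x') into a sorted coefficient map."""
    coeffs = Counter()
    for t in (term[1:] if term[0] == "+" else (term,)):
        if isinstance(t, int):
            coeffs["1"] += t             # fold numeric constants together
        else:
            coeffs[t] += 1               # count each variable occurrence
    return tuple(sorted((v, c) for v, c in coeffs.items() if c != 0))

# x + 1 + x and 1 + x + x canonize identically, hence are equal in the theory.
print(canon(("+", "x", 1, "x")) == canon(("+", 1, "x", "x")))  # True
```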
Validity Checking for Combinations of Theories with Equality
, 1996
"... . An essential component in many verification methods is a fast decision procedure for validating logical expressions. This paper presents the algorithm used in the Stanford Validity Checker (SVC) which has been used to aid several realistic hardware verification efforts. The logic for this decision ..."
Cited by 163 (30 self)
An essential component in many verification methods is a fast decision procedure for validating logical expressions. This paper presents the algorithm used in the Stanford Validity Checker (SVC) which has been used to aid several realistic hardware verification efforts. The logic for this decision procedure includes Boolean and uninterpreted functions and linear arithmetic. We have also successfully incorporated other interpreted functions, such as array operations and linear inequalities. The primary techniques which allow a complete and efficient implementation are expression sharing, heuristic rewriting, and congruence closure with interpreted functions. We discuss these techniques and present the results of initial experiments in which SVC is used as a decision procedure in PVS, resulting in dramatic speed-ups.

1 Introduction
Decision procedures are emerging as a central component of formal verification systems. Such a procedure can be included as a component of a general-purpos...
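Congruence closure, one of the techniques named above, can be sketched with a small union-find; this is an illustrative toy, not SVC's implementation, and it omits interpreted functions entirely.

```python
parent = {}

def find(t):
    """Union-find with path compression over term representatives."""
    parent.setdefault(t, t)
    if parent[t] != t:
        parent[t] = find(parent[t])
    return parent[t]

def union(s, t):
    parent[find(s)] = find(t)

def close(terms):
    """Merge applications whose function symbol and argument classes agree."""
    changed = True
    while changed:
        changed = False
        for s in terms:
            for t in terms:
                if (isinstance(s, tuple) and isinstance(t, tuple)
                        and s[0] == t[0] and len(s) == len(t)
                        and all(find(a) == find(b) for a, b in zip(s[1:], t[1:]))
                        and find(s) != find(t)):
                    union(s, t)
                    changed = True

a, fa = "a", ("f", "a")
ffa = ("f", fa)
union(fa, a)                   # assert f(a) = a
close([a, fa, ffa])
print(find(ffa) == find(a))    # True: f(f(a)) = a follows by congruence
```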
Temporal Planning with Continuous Change
, 1994
"... We present zeno, a least commitment planner that handles actions occurring over extended intervals of time. Deadline goals, metric preconditions, metric effects, and continuous change are supported. Simultaneous actions are allowed when their effects do not interfere. Unlike most planners that deal ..."
Cited by 133 (10 self)
We present zeno, a least commitment planner that handles actions occurring over extended intervals of time. Deadline goals, metric preconditions, metric effects, and continuous change are supported. Simultaneous actions are allowed when their effects do not interfere. Unlike most planners that deal with complex languages, the zeno planning algorithm is sound and complete. The running code is a complete implementation of the formal algorithm, capable of solving simple problems (i.e., those involving less than a dozen steps).

Introduction
We have built a least commitment planner, zeno, that handles actions occurring over extended intervals of time and whose preconditions and effects can be temporally quantified. These capabilities enable zeno to reason about deadline goals, piecewise-linear continuous change, external events and, to a limited extent, simultaneous actions. While other planners exist with some of these features, zeno is different because it is both sound and complete. As a...
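The deadline and metric reasoning described above boils down to solving linear constraints over action start times; the hypothetical encoding below uses z3 (pip install z3-solver) as a stand-in for zeno's own constraint machinery.

```python
from z3 import Real, Solver, sat

start_refuel, start_drive = Real("start_refuel"), Real("start_drive")
s = Solver()
s.add(start_refuel >= 0,
      start_drive >= start_refuel + 2,   # a 2-hour refuel must finish first
      start_drive + 5 <= 9)              # deadline goal: 5-hour drive done by t = 9
if s.check() == sat:
    print("feasible schedule:", s.model())  # e.g. start_refuel = 0, start_drive = 2
```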
Automated Deduction by Theory Resolution
- Journal of Automated Reasoning
, 1985
"... Theory resolution constitutes a set of complete procedures for incorporating theories into a resolution theorem-proving program, thereby making it unnecessary to resolve directly upon axioms of the theory. This can greatly reduce the length of proofs and the size of the search space. Theory resoluti ..."
Cited by 132 (1 self)
Theory resolution constitutes a set of complete procedures for incorporating theories into a resolution theorem-proving program, thereby making it unnecessary to resolve directly upon axioms of the theory. This can greatly reduce the length of proofs and the size of the search space. Theory resolution effects a beneficial division of labor, improving the performance of the theorem prover and increasing the applicability of the specialized reasoning procedures. Total theory resolution utilizes a decision procedure that is capable of determining unsatisfiability of any set of clauses using predicates in the theory. Partial theory resolution employs a weaker decision procedure that can determine potential unsatisfiability of sets of literals. Applications include the building in of both mathematical and special decision procedures, e.g., for the taxonomic information furnished by a knowledge representation system. Theory resolution is a generalization of numerous previously known resolution refinements. Its power is demonstrated by comparing solutions of "Schubert's Steamroller" challenge problem with and without building in axioms through theory resolution.
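A minimal sketch of total theory resolution against a taxonomic theory, using a made-up clause encoding: rather than adding "every Dog is an Animal" as an axiom clause, a decision procedure for the taxonomy certifies the pair Dog(fido), ¬Animal(fido) unsatisfiable, letting the two clauses resolve directly.

```python
SUBSUMES = {("Dog", "Animal"), ("Dog", "Dog"), ("Animal", "Animal")}

def t_complementary(l1, l2):
    """Theory decision procedure: is the pair {l1, l2} unsatisfiable?"""
    (s1, p1, a1), (s2, p2, a2) = l1, l2
    if s1 == s2 or a1 != a2:
        return False
    pos, neg = (p1, p2) if s1 == "+" else (p2, p1)
    return (pos, neg) in SUBSUMES    # positive literal subsumed by the negated one

def theory_resolve(c1, c2):
    """Resolve two clauses on any theory-complementary pair of literals."""
    for l1 in c1:
        for l2 in c2:
            if t_complementary(l1, l2):
                return [l for l in c1 if l != l1] + [l for l in c2 if l != l2]

# Dog(fido) and ¬Animal(fido) ∨ Barks(fido) resolve to Barks(fido) in one step,
# with no taxonomy axiom ever entering the clause set.
print(theory_resolve([("+", "Dog", "fido")],
                     [("-", "Animal", "fido"), ("+", "Barks", "fido")]))
```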