Results 1–10 of 23
Positivity problems for low-order linear recurrence sequences
In Proc. Symp. on Discrete Algorithms (SODA), ACM-SIAM, 2014
Abstract
Cited by 14 (7 self)
We consider two decision problems for linear recurrence sequences (LRS) over the integers, namely the Positivity Problem (are all terms of a given LRS positive?) and the Ultimate Positivity Problem (are all but finitely many terms of a given LRS positive?). We show decidability of both problems for LRS of order 5 or less, with complexity in the Counting Hierarchy for Positivity, and in polynomial time for Ultimate Positivity. Moreover, we show by way of hardness that extending the decidability of either problem to LRS of order 6 would entail major breakthroughs in analytic number theory, more precisely in the field of Diophantine approximation of transcendental numbers.
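The full Positivity Problem requires deep number-theoretic machinery; a naive semi-procedure can only *refute* positivity by enumerating terms and finding a counterexample. A minimal sketch (function names and the example recurrences are ours, not from the paper):

```python
# Semi-decision sketch for Positivity: enumerate the first `bound` terms
# of an integer LRS. Finding a non-positive term refutes positivity;
# seeing only positive terms proves nothing, which is exactly why the
# decision problem studied in the paper is hard.

def lrs_terms(coeffs, init, n):
    """First n terms of u_k = coeffs[0]*u_{k-1} + ... + coeffs[d-1]*u_{k-d}."""
    terms = list(init)
    for _ in range(n - len(init)):
        last = reversed(terms[-len(coeffs):])
        terms.append(sum(c * t for c, t in zip(coeffs, last)))
    return terms[:n]

def first_nonpositive(coeffs, init, bound=1000):
    """Return (index, value) of the first non-positive term among the
    first `bound` terms, or None if all of them are positive."""
    for i, t in enumerate(lrs_terms(coeffs, init, bound)):
        if t <= 0:
            return (i, t)
    return None
```

For example, u_k = u_{k-1} - u_{k-2} with initial terms 3, 1 goes non-positive already at index 2, while for the Fibonacci recurrence the enumeration finds nothing and the question remains open to this naive check.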
A.: An abstract domain to infer ordinal-valued ranking functions, 2014
Abstract
Cited by 7 (2 self)
Abstract. The traditional method for proving program termination consists in inferring a ranking function. In many cases (i.e. programs with unbounded non-determinism), a single ranking function over natural numbers is not sufficient. Hence, we propose a new abstract domain to automatically infer ranking functions over ordinals. We extend an existing domain for piecewise-defined natural-valued ranking functions to polynomials in ω, where the polynomial coefficients are natural-valued functions of the program variables. The abstract domain is parametric in the choice of the maximum degree of the polynomial, and the types of functions used as polynomial coefficients. We have implemented a prototype static analyzer for a while-language by instantiating our domain using affine functions as polynomial coefficients. We successfully analyzed small but intricate examples that are out of the reach of existing methods. To our knowledge this is the first abstract domain able to reason about ordinals. Handling ordinals leads to a powerful approach for proving termination of imperative programs, which in particular subsumes existing techniques based on lexicographic ranking functions.
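The ordinal polynomials mentioned above (e.g. ω·f1 + f0) subsume lexicographic ranking functions. A checker, not an inference engine, for the lexicographic special case can be sketched in a few lines (all names and the example transitions are illustrative, not from the paper):

```python
# Checking (not inferring) a candidate lexicographic ranking function,
# the special case of ordinal-valued rankings that the paper's abstract
# domain generalises to polynomials in omega.

def lex_decreases(before, after):
    """True iff `after` is strictly below `before` in lexicographic
    order, with all components non-negative (well-foundedness)."""
    if any(x < 0 for x in after):
        return False
    return after < before  # Python compares tuples lexicographically

def check_ranking(rank, transitions):
    """Check a candidate ranking `rank: state -> tuple of ints` against
    a set of observed (state, next_state) transitions."""
    return all(lex_decreases(rank(s), rank(t)) for s, t in transitions)
```

For a nested loop over counters (i, j), the identity ranking accepts the transition (2, 0) → (1, 5): the inner counter may jump up arbitrarily, which is precisely the unbounded non-determinism that defeats a single natural-valued ranking.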
Ackermannian and Primitive-Recursive Bounds with Dickson’s Lemma
Abstract
Cited by 6 (0 self)
Dickson’s Lemma is a simple yet powerful tool widely used in decidability proofs, especially when dealing with counters or related data structures in algorithmics, verification and model-checking, constraint solving, logic, etc. While Dickson’s Lemma is well-known, most computer scientists are not aware of the complexity upper bounds that are entailed by its use. This is mainly because, on this issue, the existing literature is not very accessible. We propose a new analysis of the length of bad sequences over (N^k, ≤), improving on earlier results and providing upper bounds that are essentially tight. This analysis is complemented by a “user guide” explaining through practical examples how to easily derive complexity upper bounds from Dickson’s Lemma.
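A sequence over (N^k, ≤) is *bad* when no earlier element is dominated componentwise by a later one; Dickson’s Lemma says every bad sequence is finite, and the paper bounds how long it can be. A small sketch of the definitions (function names are ours):

```python
# Dickson's Lemma concerns the componentwise order on N^k: every
# infinite sequence contains i < j with seq[i] <= seq[j] in every
# coordinate, so "bad" sequences (with no such pair) must be finite.

def dominated(u, v):
    """Componentwise order on N^k: u <= v in every coordinate."""
    return all(a <= b for a, b in zip(u, v))

def bad_prefix_length(seq):
    """Length of the longest bad prefix of `seq`: the prefix ends as
    soon as some earlier element is dominated by the current one."""
    prefix = []
    for v in seq:
        if any(dominated(u, v) for u in prefix):
            return len(prefix)
        prefix.append(v)
    return len(prefix)
```

For instance, (1,2), (2,1), (0,1) is bad, but it cannot be extended by (1,1), since (0,1) ≤ (1,1); the paper’s contribution is quantifying how long such sequences can get under bounds on the growth of their elements.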
G.: Under-approximating loops in C programs for fast counterexample detection
In: CAV, LNCS vol. 8044, 2013, pp. 381–396
Abstract
Cited by 4 (3 self)
Abstract. Many software model checkers only detect counterexamples with deep loops after exploring numerous spurious and increasingly longer counterexamples. We propose a technique that aims at eliminating this weakness by constructing auxiliary paths that represent the effect of a range of loop iterations. Unlike acceleration, which captures the exact effect of arbitrarily many loop iterations, these auxiliary paths may under-approximate the behaviour of the loops. In return, the approximation is sound with respect to the bit-vector semantics of programs. Our approach supports arbitrary conditions and assignments to arrays in the loop body, but may as a result introduce quantified conditionals. To reduce the resulting performance penalty, we present two quantifier elimination techniques specially geared towards our application. Loop under-approximation can be combined with a broad range of verification techniques. We paired our techniques with lazy abstraction and bounded model checking, and evaluated the resulting tool on a number of buffer overflow benchmarks, demonstrating its ability to efficiently detect deep counterexamples in C programs that manipulate arrays.
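The core idea can be illustrated (this is our toy example, not the paper’s implementation) on the loop `while (i != n) i++;` over w-bit unsigned arithmetic: an auxiliary transition jumps k iterations in one step, guarded so that every accelerated step corresponds to a real concrete run:

```python
# Illustrative under-approximation of `while (i != n) i++;` over
# 8-bit unsigned arithmetic. The guard on k makes the auxiliary path
# sound w.r.t. bit-vector semantics: it only fires when k iterations
# are concretely possible, so it never adds spurious behaviour.

W = 8
MASK = (1 << W) - 1

def concrete(i, n, steps):
    """Run up to `steps` concrete iterations of the loop."""
    for _ in range(steps):
        if i == n:
            break
        i = (i + 1) & MASK
    return i

def accelerated(i, n, k):
    """Apply k iterations in one auxiliary step, or None if the guard
    fails. Guard: k <= (n - i) mod 2^W, i.e. the exit test cannot
    trigger before the k-th iteration."""
    if k > ((n - i) & MASK):
        return None
    return (i + k) & MASK
```

Note the wrap-around case: from i = 250 with n = 2, eight accelerated iterations legitimately pass through the overflow at 255, exactly matching the concrete execution, which is the kind of deep counterexample the paper targets.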
Binary Reachability Analysis of Higher Order Functional Programs, 2012
Abstract
Cited by 3 (0 self)
A number of recent approaches for proving program termination rely on transition invariants – a termination argument that can be constructed incrementally using abstract interpretation. These approaches use binary reachability analysis to check if a candidate transition invariant holds for a given program. For imperative programs, its efficient implementation can be obtained by a reduction to reachability analysis, for which practical tools are available. In this paper, we show how a binary reachability analysis can be put to work for proving termination of higher order functional programs.
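A transition invariant is a binary relation over states that contains every pair (s, t) where t is reachable from s in one or more steps; the binary reachability analysis mentioned above checks exactly this containment. On a small finite-state system the check can be done exhaustively (a sketch under our own naming, far from the paper's higher-order setting):

```python
# Exhaustive check that a candidate relation is a transition invariant:
# it must contain the transitive closure of the one-step transition
# relation. Real binary reachability analyses do this symbolically.
from itertools import product

def transitive_closure(step, states):
    """All pairs (s, t) with t reachable from s in >= 1 `step`s."""
    pairs = {(s, t) for s in states for t in step(s)}
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(pairs), repeat=2):
            if b == c and (a, d) not in pairs:
                pairs.add((a, d))
                changed = True
    return pairs

def is_transition_invariant(candidate, step, states):
    """`candidate(s, t)` must hold on every pair in the closure."""
    return all(candidate(s, t) for s, t in transitive_closure(step, states))
```

For a counter that decrements to zero, the relation "t < s" is a transition invariant (and, being well-founded, certifies termination), whereas "t == s - 1" is not: it misses multi-step pairs such as (2, 0).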
Demystifying incentives in the consensus computer
Abstract
Cited by 3 (1 self)
Cryptocurrencies like Bitcoin and the more recent Ethereum system allow users to specify scripts in transactions and contracts to support applications beyond simple cash transactions. In this work, we analyze the extent to which these systems can enforce the correct semantics of scripts. We show that when a script execution requires non-trivial computation effort, practical attacks exist which either waste miners’ computational resources or lead miners to accept incorrect script results. These attacks drive miners to an ill-fated choice, which we call the verifier’s dilemma, whereby rational miners are well-incentivized to accept unvalidated blockchains. We call the framework of computation through a scriptable cryptocurrency a consensus computer and develop a model that captures incentives for verifying computation in it. We propose a resolution to the verifier’s dilemma which incentivizes correct execution of certain applications, including outsourced computation, where scripts require minimal time to verify. Finally we discuss two distinct, practical implementations of our consensus computer in real cryptocurrency networks like Ethereum.
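The dilemma's incentive structure can be sketched with a toy expected-utility model (the formula and all parameter values are our illustrative assumptions, not the paper's model): a miner either pays a verification cost c, or skips verification and risks building on an invalid block with probability p, forfeiting the reward R.

```python
# Toy expected-utility model of the verifier's dilemma (illustrative
# only; the paper develops its own incentive model). R: block reward,
# c: cost of fully verifying incoming scripts, p: probability an
# unverified block is invalid (building on it forfeits the reward).

def expected_payoff(verify, R=25.0, c=0.1, p=0.01):
    if verify:
        return R - c            # pay the verification cost up front
    return (1 - p) * R          # skip: risk wasting work on an invalid chain

def rational_to_skip(R=25.0, c=0.1, p=0.01):
    """Skipping wins exactly when c > p * R: the dilemma sharpens as
    scripts get more expensive to verify (c grows)."""
    return expected_payoff(False, R, c, p) > expected_payoff(True, R, c, p)
```

With these numbers, raising the verification cost from 0.1 to 0.5 flips the rational choice to skipping, which is why the paper's resolution restricts incentivized applications to scripts that take minimal time to verify.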
Satisfiability-Based Program Reasoning and Program Synthesis, 2010
Abstract
Cited by 2 (1 self)
Program reasoning consists of the tasks of automatically and statically verifying correctness and inferring properties of programs. Program synthesis is the task of automatically generating programs. Both program reasoning and synthesis are theoretically undecidable, but the results in this dissertation show that they are practically tractable. We show that there is enough structure in programs written by human developers to make program reasoning feasible, and additionally we can leverage program reasoning technology for automatic program synthesis. This dissertation describes expressive and efficient techniques for program reasoning and program synthesis. Our techniques work by encoding the underlying inference tasks as solutions to satisfiability instances. A core ingredient in the reduction of these problems to finite satisfiability instances is the assumption of templates. Templates are user-provided hints about the structural form of the desired artifact, e.g., invariant, pre- and postcondition templates for reasoning; or program templates for synthesis. We propose novel algorithms, parameterized by suitable templates, that reduce the inference of these artifacts to satisfiability. We show that fixed-point computation, the key technical challenge in program reasoning, is encodable as SAT instances. We also show that program synthesis can be viewed as generalized …
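The template idea can be shown in miniature (our own toy loop and template, with brute-force search standing in for the SAT/SMT solver a real implementation would use): fix the invariant shape a·x + b·y ≤ c and search for coefficients that make it inductive.

```python
# Template-based invariant inference in miniature for the loop
#   x, y = 0, 0;  while x < 5: x += 1; y += 2
# Template: a*x + b*y <= c with small integer coefficients. A real
# implementation encodes inductiveness as a satisfiability query;
# brute force over a finite coefficient space plays that role here.
from itertools import product

def is_inductive(a, b, c):
    init_ok = a * 0 + b * 0 <= c          # holds in the initial state
    # preserved by the body on sampled states satisfying guard + invariant
    step_ok = all(a * (x + 1) + b * (y + 2) <= c
                  for x in range(5) for y in range(11)
                  if a * x + b * y <= c)
    return init_ok and step_ok

def find_invariant(lo=-2, hi=2):
    """Return some non-trivial inductive template instance, or None."""
    for a, b, c in product(range(lo, hi + 1), repeat=3):
        if (a, b) != (0, 0) and is_inductive(a, b, c):
            return (a, b, c)
    return None
```

For instance, x ≤ 5 (a=1, b=0, c=5) is inductive for this loop. The dissertation's point is that restricting attention to such template shapes turns an undecidable fixed-point problem into a finite satisfiability instance.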
At the Interface of Biology and Computation
Abstract
Cited by 1 (1 self)
Representing a new class of tool for biological modeling, Bio Model Analyzer (BMA) uses sophisticated computational techniques to determine stabilization in cellular networks. This paper presents designs aimed at easing the problems that can arise when such techniques—using distinct approaches to conceptualizing networks—are applied in biology. The work also engages with more fundamental issues being discussed in the philosophy of science and science studies. It shows how scientific ways of knowing are constituted in routine interactions with tools like BMA, where the emphasis is on the practical business at hand, even when seemingly deep conceptual problems exist. For design, this perspective refigures the frictions raised when computation is used to model biology. Rather than obstacles, they can be seen as opportunities for opening up different ways of knowing.
Word-Length Optimization Beyond Straight Line Code
Abstract
Cited by 1 (1 self)
The silicon area benefits that result from word-length optimization have been widely reported by the FPGA community. However, to date, most approaches are restricted to straight line code, or code that can be converted into straight line code using techniques such as loop unrolling. In this paper, we take the first steps towards creating analytical techniques to optimize the precision used throughout custom FPGA accelerators for algorithms that contain loops with data dependent exit conditions. To achieve this, we build on ideas emanating from the software verification community to prove program termination. Our idea is to apply word-length optimization techniques to find the minimum precision required to guarantee that a loop with data dependent exit conditions will terminate. Without techniques to analyze algorithms containing these types of loops, a hardware designer may elect to implement every arithmetic operator throughout a custom FPGA-based accelerator using IEEE-754 standard single or double precision arithmetic. With this approach, the FPGA accelerator would have comparable accuracy to a software implementation. However, we show that using our new technique to create custom fixed- and floating-point designs, we can obtain silicon area savings of up to 50% over IEEE standard single precision arithmetic, or 80% over IEEE standard double precision arithmetic, at the same time as providing guarantees that the created hardware designs will work in practice.
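The phenomenon the paper analyzes can be reproduced in simulation (our own toy loop and constants, not the paper's benchmarks): in a loop like `while x > 0.1: x = x * 0.75`, too few fractional bits make the rounded multiplication stall before the exit test can fire, so the loop never terminates.

```python
# Simulating a data-dependent loop under fixed-point arithmetic with a
# varying number of fractional bits, to find the minimum precision at
# which termination is guaranteed. Example loop and constants are ours,
# chosen so rounding stalls the update at low precision.

def fx_loop_terminates(frac_bits, x0=1.0, factor=0.75, eps=0.1,
                       max_iter=10**6):
    scale = 1 << frac_bits
    x = round(x0 * scale)
    c = round(factor * scale)
    thresh = round(eps * scale)
    for _ in range(max_iter):
        if x <= thresh:
            return True                          # loop exits
        nxt = (x * c + scale // 2) >> frac_bits  # round-to-nearest multiply
        if nxt == x:
            return False                         # rounding stalled the loop
        x = nxt
    return False

def min_frac_bits(max_bits=32):
    """Smallest fractional word-length for which the loop terminates."""
    for f in range(1, max_bits + 1):
        if fx_loop_terminates(f):
            return f
    return None
```

Here the update stalls at values at or below 2 ulp, so termination requires the threshold 0.1 to exceed that stall point; in this toy setting 4 fractional bits suffice, and anything less loops forever, which is exactly the kind of guarantee the paper's analysis is designed to provide before committing a design to silicon.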