Results 1–10 of 39
Kleene algebra with tests: Completeness and decidability
In Proc. of the 10th International Workshop on Computer Science Logic (CSL'96), 1996
"... Abstract. Kleene algebras with tests provide a rigorous framework for equational speci cation and veri cation. They have been used successfully in basic safety analysis, sourcetosource program transformation, and concurrency control. We prove the completeness of the equational theory of Kleene alg ..."
Abstract

Cited by 37 (16 self)
 Add to MetaCart
(Show Context)
Abstract. Kleene algebras with tests provide a rigorous framework for equational specification and verification. They have been used successfully in basic safety analysis, source-to-source program transformation, and concurrency control. We prove the completeness of the equational theory of Kleene algebra with tests and *-continuous Kleene algebra with tests over language-theoretic and relational models. We also show decidability. Cohen's reduction of Kleene algebra with hypotheses of the form r = 0 to Kleene algebra without hypotheses is simplified and extended to handle Kleene algebras with tests.
Upper bounds for static resource allocation in a distributed system
Journal of Computer and System Sciences, 1981
"... The problem considered in this paper is a simplification of one arising in the study of distributed computer systems. A network is considered, in which are located a large number of “resources ” and a large number of potential users of those resources. Each user requires a certain fixed set of resou ..."
Abstract

Cited by 37 (2 self)
 Add to MetaCart
The problem considered in this paper is a simplification of one arising in the study of distributed computer systems. A network is considered, in which are located a large number of “resources” and a large number of potential users of those resources. Each user requires a certain fixed set of resources (for instance, in order to execute a ...
win and sin: Predicate transformers for concurrency
ACM Transactions on Programming Languages and Systems, 1990
"... Digital Equipment Corporation The weakest liberal precondition and strongest postcondition predicate transformers are generalized to the weakest invariant and strongest invariant. These new predicate transformers are useful for reasoning about concurrent programs containing operations in which the ..."
Abstract

Cited by 30 (3 self)
 Add to MetaCart
(Show Context)
The weakest liberal precondition and strongest postcondition predicate transformers are generalized to the weakest invariant and strongest invariant. These new predicate transformers are useful for reasoning about concurrent programs containing operations in which the grain of atomicity is unspecified. They can also be used to replace behavioral arguments with more rigorous assertional ones.
The Third Homomorphism Theorem
, 1995
"... The Third Homomorphism Theorem is a folk theorem of the constructive algorithmics community. It states that a function on lists that can be computed both from left to right and from right to left is necessarily a list homomorphismit can be computed according to any parenthesization of the list. ..."
Abstract

Cited by 27 (3 self)
 Add to MetaCart
The Third Homomorphism Theorem is a folk theorem of the constructive algorithmics community. It states that a function on lists that can be computed both from left to right and from right to left is necessarily a list homomorphism: it can be computed according to any parenthesization of the list. We formalize and prove the theorem, and use it to improve an O(n²) sorting algorithm to O(n log n).

1 Introduction

List homomorphisms are those functions on finite lists that promote through list concatenation; that is, functions h for which there exists a binary operator ⊕ such that, for all finite lists x and y,

h (x ++ y) = h x ⊕ h y

where '++' denotes list concatenation. Such functions are ubiquitous in functional programming. Some examples of list homomorphisms are:

* the identity function id;
* the map function map f, which applies a given function f to every element of a list;
* the function concat, which concatenates a list of lists into a single long list;
* the function ...
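The defining equation above lends itself to a short illustration. The following is a minimal sketch (not from the paper; the helper names `hom`, `f`, `op`, and `unit` are ours) of building a list homomorphism in Python from a per-element function and a combining operator, and checking the defining property:

```python
from functools import reduce

def hom(f, op, unit):
    """Build a list homomorphism: h(xs) combines f(x) for each x using op.
    f, op, and unit are illustrative parameters, not the paper's notation."""
    def h(xs):
        return reduce(op, (f(x) for x in xs), unit)
    return h

# Summation is a list homomorphism with op = + and unit = 0.
total = hom(lambda x: x, lambda a, b: a + b, 0)

xs, ys = [1, 2, 3], [4, 5]
# The defining property h(x ++ y) = h(x) (+) h(y):
assert total(xs + ys) == total(xs) + total(ys)
```

The theorem's content is the converse direction: if a function is computable both as a left fold and as a right fold, a suitable operator ⊕ is guaranteed to exist.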
On the recursive decomposition ordering with lexicographical status and other related orderings
Journal of Automated Reasoning, 1990
"... This paper studies three orderings, useful in theorem proving, especially for proving termination of term rewriting systems: the recursive decomposition ordering with status, the recursive path ordering with status and the closure ordering. It proves the transitivity of the recursive path ordering ..."
Abstract

Cited by 18 (0 self)
 Add to MetaCart
(Show Context)
This paper studies three orderings, useful in theorem proving, especially for proving termination of term rewriting systems: the recursive decomposition ordering with status, the recursive path ordering with status, and the closure ordering. It proves the transitivity of the recursive path ordering, the strict inclusion of the recursive path ordering in the recursive decomposition ordering, the totality of the recursive path ordering (and therefore of the recursive decomposition ordering), the strict inclusion of the recursive decomposition ordering in the closure ordering, and the stability of the closure ordering under instantiation.
Techniques to Understand Computer Simulations: Markov Chain Analysis
Journal of Artificial Societies and Social Simulation, 2009
"... The aim of this paper is to assist researchers in understanding the dynamics of simulation models that have been implemented and can be run in a computer, i.e. computer models. To do that, we start by explaining (a) that computer models are just inputoutput functions, (b) that every computer model ..."
Abstract

Cited by 15 (6 self)
 Add to MetaCart
The aim of this paper is to assist researchers in understanding the dynamics of simulation models that have been implemented and can be run in a computer, i.e. computer models. To do that, we start by explaining (a) that computer models are just input-output functions, (b) that every computer model can be re-implemented in many different formalisms (in particular in most programming languages), leading to alternative representations of the same input-output relation, and (c) that many computer models in the social simulation literature can be usefully represented as time-homogeneous Markov chains. Then we argue that analysing a computer model as a Markov chain can make apparent many features of the model that were not so evident before conducting such analysis. To prove this point, we present the main concepts needed to conduct a formal analysis of any time-homogeneous Markov chain, and we illustrate the usefulness of these concepts by analysing 10 well-known models in the social simulation literature as Markov chains. These models are:
* Schelling's (1971) model of spatial segregation
* Epstein and Axtell's (1996) Sugarscape
* Miller and Page's (2004) standing ovation model
* Arthur's (1989) model of competing technologies
* Axelrod's (1986) metanorms models
* Takahashi's (2000) model of generalized exchange
* Axelrod's (1997) model of dissemination of culture
* Kinnaird's (1946) truels
* Axelrod and Bennett's (1993) model of competing bimodal coalitions
* Joyce et al.'s (2006) model of conditional association
In particular, we explain how to characterise the transient and the asymptotic dynamics of these computer models and, where appropriate, how to assess the stochastic stability of their absorbing states. In all cases, the analysis conducted using the theory of Markov chains has yielded useful insights about the dynamics of the computer model under study.
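As a minimal, hypothetical illustration of the kind of analysis the abstract describes (the two-state chain below is ours, not one of the ten models), iterating the transition matrix of a time-homogeneous Markov chain makes its transient and asymptotic dynamics visible, including absorption into an absorbing state:

```python
def step(dist, P):
    """One step of a time-homogeneous Markov chain:
    new_dist[j] = sum over i of dist[i] * P[i][j]."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# Two-state chain where state 1 is absorbing (P[1][1] = 1).
P = [[0.9, 0.1],
     [0.0, 1.0]]

dist = [1.0, 0.0]        # start with all probability mass in state 0
for _ in range(200):     # the transient component decays geometrically
    dist = step(dist, P)

# Asymptotically, (numerically) all mass is absorbed into state 1.
```

The mass remaining in state 0 after t steps is 0.9**t, so the transient dynamics decay geometrically and the limiting distribution concentrates on the absorbing state.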
A Simple Supercompiler Formally Verified in Coq
Second International Workshop on Metacomputation in Russia (META 2010), 2010
"... We study an approach for verifying the correctness of a simplified supercompiler in Coq. While existing supercompilers are not very big in size, they combine many different program transformations in intricate ways, so checking the correctness of their implementation poses challenges. The presented ..."
Abstract

Cited by 6 (0 self)
 Add to MetaCart
We study an approach for verifying the correctness of a simplified supercompiler in Coq. While existing supercompilers are not very big in size, they combine many different program transformations in intricate ways, so checking the correctness of their implementation poses challenges. The presented method relies on two important technical features to achieve a compact and modular formalization: first, a very limited object language; second, decomposing the supercompilation process into many sub-transformations, whose correctness can be checked independently. In particular, we give separate correctness proofs for two key parts of driving – normalization and positive information propagation – in the context of a non-Turing-complete expression sublanguage. Though our supercompiler is currently limited, its formal correctness proof can give guidance for verifying more realistic implementations.
How to Make Zuse's Z3 a Universal Computer
, 1998
"... The computing machine Z3, built by Konrad Zuse between 1938 and 1941, could only execute fixed sequences of floatingpoint arithmetical operations (addition, subtraction, multiplication, division and square root) coded in a punched tape. An interesting question to ask, from the viewpoint of the hist ..."
Abstract

Cited by 4 (0 self)
 Add to MetaCart
The computing machine Z3, built by Konrad Zuse between 1938 and 1941, could only execute fixed sequences of floating-point arithmetical operations (addition, subtraction, multiplication, division and square root) coded in a punched tape. An interesting question to ask, from the viewpoint of the history of computing, is whether or not these operations are sufficient for universal computation. In this paper we show that in fact a single program loop containing these arithmetical instructions can simulate any Turing machine whose tape is of a given finite size. This is done by simulating conditional branching and indirect addressing by purely arithmetical means. Zuse's Z3 is therefore, at least in principle, as universal as today's computers, which have a bounded addressing space. A side-effect of this result is that the size of the program stored on punched tape increases enormously.

Universal Machines and Single Loops

Nobody has ever built a universal computer. The reason is that a uni...
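A minimal sketch of the arithmetical trick the abstract alludes to, assuming the branch condition has already been encoded as 0 or 1 (the function name `select` is illustrative, not Zuse's or the paper's notation):

```python
def select(c, a, b):
    """Branch-free conditional: returns a when c == 1 and b when c == 0,
    using only multiplication, subtraction, and addition.
    Assumes c has already been reduced to 0 or 1."""
    return c * a + (1 - c) * b

# "if c then a else b", computed purely arithmetically:
assert select(1, 10.0, 20.0) == 10.0
assert select(0, 10.0, 20.0) == 20.0
```

In the same spirit, an indirect memory read can be expressed as a sum over all cells, each weighted by a 0/1 selector for its address, which is one reason the arithmetically encoded program grows so large.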
The Generic Model of Computation
"... Over the past two decades, Yuri Gurevich and his colleagues have formulated axiomatic foundations for the notion of algorithm, be it classical, interactive, or parallel, and formalized them in the new generic framework of abstract state machines. This approach has recently been extended to suggest a ..."
Abstract

Cited by 3 (3 self)
 Add to MetaCart
(Show Context)
Over the past two decades, Yuri Gurevich and his colleagues have formulated axiomatic foundations for the notion of algorithm, be it classical, interactive, or parallel, and formalized them in the new generic framework of abstract state machines. This approach has recently been extended to suggest a formalization of the notion of effective computation over arbitrary countable domains. The central notions are summarized herein.