Results 1–9 of 9
How to prove algorithms linearisable
CAV 2012 Proceedings, volume 7358 of Lecture Notes in Computer Science, 2012
Cited by 7 (4 self)
Abstract. Linearisability is the standard correctness criterion for concurrent data structures. In this paper, we present a sound and complete proof technique for linearisability based on backward simulations. We exemplify this technique by a linearisability proof of the queue algorithm presented in Herlihy and Wing’s landmark paper. Except for their manual proof, none of the many other current approaches to checking linearisability has successfully treated this intricate example. Our approach is grounded on complete mechanisation: the proof obligations for the queue are verified using the interactive prover KIV, and so is the general soundness and completeness result for our proof technique.
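As a rough illustration of why this queue is so hard to verify, here is a minimal Python sketch of the Herlihy-Wing algorithm (class and method names are ours; the lock only models an atomic fetch-and-increment, and the gap between reserving a slot and filling it is exactly what defeats fixed linearization points):

```python
import threading

class HWQueue:
    """Illustrative sketch of the Herlihy-Wing queue, not a faithful
    concurrent implementation: enq reserves a slot atomically, then
    fills it in a separate step; deq scans the reserved prefix."""
    def __init__(self, capacity=1024):
        self.items = [None] * capacity   # shared array of slots
        self.back = 0                    # index of the next free slot
        self._lock = threading.Lock()    # models a fetch-and-increment primitive

    def _fetch_and_increment(self):
        with self._lock:
            i = self.back
            self.back += 1
            return i

    def enq(self, x):
        i = self._fetch_and_increment()  # step 1: reserve a slot
        self.items[i] = x                # step 2: fill it (not atomic with step 1)

    def deq(self):
        while True:                      # retry until some slot yields an element
            n = self.back
            for i in range(n):
                x = self.items[i]
                if x is not None:
                    self.items[i] = None # take the element (an atomic swap originally)
                    return x
```

Note that `deq` spins forever on an empty queue, as in the original algorithm; the interesting behaviour arises when two `enq` calls interleave between their two steps.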
Verifying Concurrent Memory Reclamation Algorithms with Grace
Cited by 5 (1 self)
Abstract. Memory management is one of the most complex aspects of modern concurrent algorithms, and various techniques proposed for it—such as hazard pointers, read-copy-update and epoch-based reclamation—have proved very challenging for formal reasoning. In this paper, we show that different memory reclamation techniques actually rely on the same implicit synchronisation pattern, not clearly reflected in the code, but only in the form of assertions used to argue its correctness. The pattern is based on the key concept of a grace period, during which a thread can access certain shared memory cells without fear that they get deallocated. We propose a modular reasoning method, motivated by the pattern, that handles all three of the above memory reclamation techniques in a uniform way. By explicating their fundamental core, our method achieves clean and simple proofs, scaling even to realistic implementations of the algorithms without a significant increase in proof complexity. We formalise the method using a combination of separation logic and temporal logic and use it to verify example instantiations of the three approaches to memory reclamation.
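The grace-period idea can be sketched with a toy hazard-pointer-style registry (purely illustrative; names and structure are ours, not taken from the paper): a retired node may be reclaimed only once no thread has published it as protected.

```python
import threading

class HazardRegistry:
    """Toy sketch of hazard-pointer-style reclamation. A reader publishes
    the node it is about to access; a retired node stays allocated for as
    long as any hazard slot holds it -- its grace period."""
    def __init__(self):
        self._lock = threading.Lock()
        self.hazards = {}    # thread id -> currently protected node
        self.retired = []    # nodes removed from the structure, awaiting free
        self.freed = []      # nodes actually reclaimed (kept for observation)

    def protect(self, tid, node):
        with self._lock:
            self.hazards[tid] = node

    def release(self, tid):
        with self._lock:
            self.hazards.pop(tid, None)

    def retire(self, node):
        with self._lock:
            self.retired.append(node)
        self.scan()

    def scan(self):
        with self._lock:
            protected = set(self.hazards.values())
            still_retired = []
            for n in self.retired:
                if n in protected:
                    still_retired.append(n)  # grace period not over: keep it
                else:
                    self.freed.append(n)     # no reader can hold it: reclaim
            self.retired = still_retired
```

The assertion implicit in real implementations is exactly the one this sketch makes observable: a node never appears in `freed` while some thread still protects it.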
Local Rely-Guarantee Conditions for Linearizability and Lock-Freedom
Cited by 1 (1 self)
Abstract. Rely-guarantee reasoning specifications typically consider all components of a concurrent system. For the important case where components operate on a shared data object, we derive a local instance of rely-guarantee reasoning, which permits specifications to examine a single pair of representative components only. Based on this instance, we define local proof obligations for linearizability and lock-freedom, which we then apply to a non-blocking concurrent stack with explicit memory reuse. Both the derivation of this local instance and its application are mechanized in the KIV interactive theorem prover.
Proving Linearizability of Multiset with Local Proof Obligations (ECEASST)
Cited by 1 (0 self)
Abstract: Linearizability is a key correctness criterion for concurrent software. In our previous work, we introduced local proof obligations which, by showing a refinement between an abstract specification and its implementation, imply linearizability of the implementation. The refinement is shown via a thread-local backward simulation, which reduces the complexity of a backward simulation to an execution of two symbolic threads. In this paper, we present a correctness proof by applying those proof obligations to a lock-based implementation of a multiset. It is interesting for two reasons: first, one of its operations inserts two elements non-atomically. To show that it linearizes, we have to find one point where the multiset is changed instantaneously, which is a counterintuitive task. Second, another operation has non-fixed linearization points, i.e. the linearization points cannot be statically fixed, because the operation’s linearization may depend on other processes’ execution. This is a typical case for backward simulation, where we could apply our thread-local variant of it. All proofs were mechanized in the theorem prover KIV.
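One hypothetical flavour of the "two elements, one linearization point" situation (this is our own sketch, not the paper's multiset algorithm): both slots are made physically present first, and only then become logically visible in a single atomic step, which would serve as the one point where the multiset changes instantaneously.

```python
import threading

class Multiset:
    """Illustrative sketch only. insert_pair adds two elements in several
    steps, but lookups see both appear at once: the flag flip under the
    second lock acquisition plays the role of the linearization point."""
    def __init__(self):
        self._lock = threading.Lock()
        self.slots = []          # list of [value, visible] cells

    def insert_pair(self, a, b):
        cell_a = [a, False]
        cell_b = [b, False]
        with self._lock:
            self.slots.append(cell_a)   # physically present...
            self.slots.append(cell_b)   # ...but not yet logically in the set
        with self._lock:
            cell_a[1] = True            # linearization point: both cells
            cell_b[1] = True            # become visible in one atomic step

    def count(self, x):
        with self._lock:
            return sum(1 for value, visible in self.slots if visible and value == x)
```

A thread running `count` between the two lock acquisitions observes neither element, so the non-atomic physical insertion still linearizes at the flip.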
This Report has been published on the Internet under the following Creative Commons License:
, 2011
"... Object-Oriented Software ..."
Quiescent Consistency: Defining and Verifying Relaxed Linearizability
Abstract. Concurrent data structures like stacks, sets or queues need to be highly optimized to provide large degrees of parallelism with reduced contention. Linearizability, a key consistency condition for concurrent objects, sometimes limits the potential for optimization. Hence algorithm designers have started to build concurrent data structures that are not linearizable but only satisfy relaxed consistency requirements. In this paper, we study quiescent consistency as proposed by Shavit and Herlihy, which is one such relaxed condition. More precisely, we give the first formal definition of quiescent consistency, investigate its relationship with linearizability, and provide a proof technique for it based on (coupled) simulations. We demonstrate our proof technique by verifying quiescent consistency of a (non-linearizable) FIFO queue built using a diffraction tree.
ECEASST Pre-Proceedings of the 12th International Workshop on Automated Verification of Critical Systems, 2012
The background image on the top half of the front cover is public domain and used here for non-commercial purposes; it is available from
Compositional Verification of a Lock-Free Stack with RGITL (ECEASST), 2013
Abstract: This paper describes a compositional verification approach for concurrent algorithms based on the logic Rely-Guarantee Interval Temporal Logic (RGITL), which is implemented in the interactive theorem prover KIV. The logic makes it possible to mechanically derive and apply decomposition theorems for safety and liveness properties. Decomposition theorems for rely-guarantee reasoning, linearizability and lock-freedom are described and applied on a nontrivial running example, a lock-free data stack implementation that uses an explicit allocator stack for memory reuse. To deal with the heap, a lightweight approach that combines ownership annotations and separation logic is taken.
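For readers unfamiliar with this style of running example, here is a generic Treiber-style lock-free stack sketch in Python (the CAS is modeled with a lock because Python exposes no compare-and-swap primitive; this is illustrative only and omits the explicit allocator stack and the heap reasoning the paper addresses):

```python
import threading

class Node:
    __slots__ = ("val", "next")
    def __init__(self, val, next=None):
        self.val, self.next = val, next

class TreiberStack:
    """Sketch of a Treiber-style stack: every update is an optimistic
    read of top followed by a CAS, retried on contention."""
    def __init__(self):
        self.top = None
        self._cas_lock = threading.Lock()

    def _cas_top(self, expected, new):
        with self._cas_lock:          # models an atomic CAS on self.top
            if self.top is expected:
                self.top = new
                return True
            return False

    def push(self, val):
        node = Node(val)
        while True:                   # retry loop typical of lock-free code
            old = self.top
            node.next = old
            if self._cas_top(old, node):
                return

    def pop(self):
        while True:
            old = self.top
            if old is None:
                return None           # empty stack
            if self._cas_top(old, old.next):
                return old.val
```

The retry loops are what make lock-freedom a liveness property worth decomposing: some thread always makes progress, but no individual thread is guaranteed to.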
Verifying linearizability: A comparative survey
Linearizability has become the key correctness criterion for concurrent data structures, ensuring that histories of the concurrent object under consideration are consistent, where consistency is judged with respect to a sequential history of a corresponding abstract data structure. Linearizability allows any order of concurrent (i.e., overlapping) calls to operations to be picked, but requires the real-time order of non-overlapping calls to be preserved. A history of overlapping operation calls is linearizable if at least one of the possible orders of operations forms a valid sequential history (i.e., corresponds to a valid sequential execution of the data structure), and a concurrent data structure is linearizable iff every history of the data structure is linearizable. Over the years, numerous techniques for verifying linearizability have been developed, using a variety of formal foundations such as refinement, shape analysis, reduction, etc. However, as the underlying framework, nomenclature and terminology for each method differs, it has become difficult for practitioners to judge the differences between each approach and, hence, to judge the methodology most appropriate for the data structure at hand. We compare the major methods used to verify linearizability, describe the main contribution of each method, and compare their advantages and limitations.
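The definition above translates directly into a brute-force checker for small complete histories (our own illustrative sketch, exponential in the history length; the verification techniques the survey compares are of course far more sophisticated):

```python
from itertools import permutations

def linearizable(history, run_sequential):
    """history: list of (op, invoke_time, response_time, result) tuples.
    Linearizable iff some total order of the operations respects the
    real-time order of non-overlapping calls and is legal sequentially."""
    def respects_real_time(order):
        for i in range(len(order)):
            for j in range(i + 1, len(order)):
                # order[j] must not have responded before order[i] was invoked
                if order[j][2] < order[i][1]:
                    return False
        return True

    return any(
        respects_real_time(order) and run_sequential(order)
        for order in permutations(history)
    )

def queue_spec(order):
    """Sequential FIFO queue: replay the chosen order, check each result."""
    q = []
    for op, _, _, result in order:
        if op[0] == "enq":
            q.append(op[1])
        else:  # ("deq",)
            if (q.pop(0) if q else None) != result:
                return False
    return True
```

For example, a history where enq(1) and enq(2) overlap and a later deq returns 2 is linearizable (pick the order enq(2), enq(1), deq), whereas a deq returning an element that was never at the head in any admissible order is not.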