Results 1 – 7 of 7
Proving Non-opacity
Abstract

Cited by 1 (1 self)
Guerraoui and Kapalka defined opacity as a safety criterion for transactional memory algorithms in 2008. Researchers have shown how to prove opacity, while little is known about pitfalls that can lead to non-opacity. In this paper, we identify two problems that lead to non-opacity and we prove an impossibility result. We first show that the well-known TM algorithms DSTM and McRT don’t satisfy opacity. DSTM suffers from a write-skew anomaly, while McRT suffers from a write-exposure anomaly. We then prove that for direct-update TM algorithms, opacity is incompatible with a liveness criterion called local progress, even for fault-free systems. Our result implies that if TM algorithm designers want both opacity and local progress, they should avoid direct-update algorithms.
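The write-skew anomaly named in this abstract can be illustrated with a minimal sketch (hypothetical code, not DSTM itself): a TM that validates only the locations a transaction writes lets two transactions jointly break an invariant that each transaction preserved in isolation.

```python
# Hypothetical sketch of the write-skew anomaly. Commit validation checks
# only write locations; the read set is never re-validated, so two
# transactions can together violate the intended invariant x + y <= 1.

mem = {"x": 0, "y": 0}
ver = {"x": 0, "y": 0}   # per-location version counters

class Txn:
    def __init__(self):
        self.rset = {}   # location -> version observed at read time
        self.wset = {}   # location -> buffered value

    def read(self, loc):
        if loc in self.wset:
            return self.wset[loc]
        self.rset[loc] = ver[loc]
        return mem[loc]

    def write(self, loc, val):
        self.wset[loc] = val

    def commit(self):
        # BUG (the anomaly): only written locations are validated, so a
        # read location overwritten by another transaction goes unnoticed.
        for loc in self.wset:
            if ver[loc] != self.rset.get(loc, ver[loc]):
                return False
        for loc, val in self.wset.items():
            mem[loc] = val
            ver[loc] += 1
        return True

# Each transaction alone preserves the invariant x + y <= 1.
t1, t2 = Txn(), Txn()
if t1.read("y") == 0:        # T1 sees y == 0, so writing x = 1 looks safe
    t1.write("x", 1)
if t2.read("x") == 0:        # T2 sees x == 0, so writing y = 1 looks safe
    t2.write("y", 1)
assert t1.commit() and t2.commit()   # both commit under write-only validation
assert mem["x"] + mem["y"] == 2      # invariant x + y <= 1 is broken
```

Validating the read set at commit (or using visible reads) would force one of the two transactions to abort, which is the kind of fix the abstract's distinction between opaque and non-opaque algorithms points at.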
On the Correctness of Transactional Memory Algorithms
, 2014
Abstract

Cited by 1 (1 self)
Transactional Memory (TM) provides programmers with a high-level and composable concurrency control abstraction. The correct execution of client programs using TM is directly dependent on the correctness of the TM algorithms. In return for the simpler programming model, designing a correct TM algorithm is an art. This dissertation contributes to the specification, safety criteria, testing and verification of TM algorithms. In particular, it presents techniques to prove the correctness or incorrectness of TM algorithms. We introduce a language for architecture-independent specification of synchronization algorithms. An algorithm specification captures two abstract properties of the algorithm, namely the type of the synchronization objects used and the pairs of method calls that should preserve their program order in the relaxed execution. Decomposition of the correctness condition supports modular and scalable verification. We introduce the markability correctness condition as the conjunction of three intuitive invariants: write-observation, read-preservation and real-time-preservation. We prove the equivalence of the markability and opacity correctness conditions. We identify two pitfalls that lead to violation of opacity: the write-skew and write-exposure anomalies.
Specifying Transactional Memories with Non-transactional Operations
Abstract
Although transactional memory (TM) is a promising approach for synchronizing shared-memory concurrent programs, it will not exist alone: real systems will provide a variety of synchronization mechanisms, and TM must interact properly with them. Therefore,
Includes a review of WTTM, the Fourth Workshop on the Theory of Transactional Memory.
Abstract
As usual, I conclude the year with an annual review of distributed computing awards and conferences. I begin by reporting on two prestigious awards: the Dijkstra Prize and the Principles of Distributed Computing Doctoral Dissertation Award. I then proceed with reviews of the two main distributed computing conferences, PODC – the ACM Symposium on Principles of Distributed Computing
Write-observation and Read-preservation TM Correctness Invariants
Abstract
A transactional memory (TM) is a concurrent object with three methods: read, write and commit. The clients of a TM are transactions: sequences of read and write invocations, possibly followed by a commit invocation. A transactional processing
Decomposing Opacity
, 2014
Abstract
Transactional memory (TM) algorithms are subtle and the TM correctness conditions are intricate. Decomposition of the correctness condition can bring modularity to TM algorithm design and verification. We present a decomposition of opacity called markability as a conjunction of separate intuitive invariants. We prove the equivalence of opacity and markability. The proofs of markability of TM algorithms can be aided by and mirror the algorithm design intuitions. As an example, we prove the markability and hence opacity of the TL2 algorithm. In addition, based on one of the invariants, we present lower bound results for the time complexity of TM algorithms.
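As a rough illustration of the kind of algorithm this abstract proves markable, here is a hypothetical, single-threaded sketch of the TL2 pattern (global version clock, per-location versions, read-time and commit-time validation). All names are illustrative, locking is elided, and this is not TL2's actual code.

```python
# Hypothetical single-threaded sketch of the TL2 validation pattern:
# a transaction snapshots a global version clock at begin, aborts any
# read of a location stamped newer than that snapshot, and re-validates
# its read set at commit before stamping its writes with a new version.

class Abort(Exception):
    """Raised when a transaction observes an inconsistent snapshot."""

global_clock = 0
mem = {"x": 0, "y": 0}
ver = {"x": 0, "y": 0}

class TL2Txn:
    def __init__(self):
        self.rv = global_clock     # read version: clock sampled at begin
        self.rset = set()
        self.wset = {}

    def read(self, loc):
        if loc in self.wset:
            return self.wset[loc]
        if ver[loc] > self.rv:     # written after we began: inconsistent
            raise Abort()
        self.rset.add(loc)
        return mem[loc]

    def write(self, loc, val):
        self.wset[loc] = val

    def commit(self):
        global global_clock
        for loc in self.rset:      # re-validate the read set
            if ver[loc] > self.rv:
                raise Abort()
        global_clock += 1
        for loc, val in self.wset.items():
            mem[loc] = val
            ver[loc] = global_clock   # stamp writes with the new clock
        return True

t1 = TL2Txn()
t1.read("x")              # t1 observes x before any concurrent update

t2 = TL2Txn()             # a concurrent writer commits in between
t2.write("x", 42)
t2.commit()

try:
    t1.read("x")          # stale snapshot: the read-time check aborts t1
    aborted = False
except Abort:
    aborted = True
assert aborted and mem["x"] == 42
```

The read-time check corresponds loosely to the write-observation/read-preservation invariants: every value a transaction sees must belong to one consistent snapshot, which is what the commit-time re-validation then re-confirms.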
The Push/Pull model of transactions
Abstract
We present a general theory of serializability, unifying a wide range of transactional algorithms, including some that are yet to come. To this end, we provide a compact semantics in which concurrent transactions push their effects into the shared view (or unpush to recall effects) and pull the effects of potentially uncommitted concurrent transactions into their local view (or unpull to detangle). Each operation comes with simple side-conditions given in terms of commutativity (Lipton’s left-movers and right-movers [24]). The benefit of this model is that most of the elaborate reasoning (coinduction, simulation, subtle invariants, etc.) necessary for proving the serializability of a transactional algorithm is already proved within the semantic model. Thus, proving serializability (or opacity) amounts simply to mapping the algorithm onto our rules and showing that it satisfies the rules’ side-conditions.
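A toy rendering of the push rule described above (entirely hypothetical names and a deliberately trivial commutativity check, not the paper's formal semantics): an effect may enter the shared view only if it commutes with every uncommitted effect already there.

```python
# Toy sketch of a Push/Pull-style side-condition: "push" admits an effect
# into the shared view only if it commutes with all effects other
# transactions have already pushed. Here commutativity is the trivial
# rule that counter increments commute with each other and nothing else.

shared = []          # shared view: list of (txn_id, op) pushed so far

def commutes(op1, op2):
    # Increments of a counter commute; reads do not commute with writes.
    return op1[0] == "inc" and op2[0] == "inc"

def push(txn_id, op):
    for other_id, other_op in shared:
        if other_id != txn_id and not commutes(op, other_op):
            return False          # side-condition fails: keep the effect local
    shared.append((txn_id, op))
    return True

assert push(1, ("inc", "x"))      # T1 pushes an increment of x
assert push(2, ("inc", "x"))      # T2's increment commutes, so it is admitted
assert not push(2, ("read", "x")) # a read of x does not commute: blocked
```

In the paper's terms, the serializability argument for this commutativity check would already be discharged by the semantic model; an algorithm designer would only show their operations satisfy such side-conditions.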