Monitors: An Operating System Structure Concept: The Origin of Concurrent Programming: From Semaphores to Remote Procedure Calls (1974)

by C A R Hoare
Results 1 - 10 of 566

Linearizability: a correctness condition for concurrent objects

by Maurice P. Herlihy, Jeannette M. Wing , 1990
Abstract - Cited by 1178 (28 self)
A concurrent object is a data object shared by concurrent processes. Linearizability is a correctness condition for concurrent objects that exploits the semantics of abstract data types. It permits a high degree of concurrency, yet it permits programmers to specify and reason about concurrent objects using known techniques from the sequential domain. Linearizability provides the illusion that each operation applied by concurrent processes takes effect instantaneously at some point between its invocation and its response, implying that the meaning of a concurrent object’s operations can be given by pre- and post-conditions. This paper defines linearizability, compares it to other correctness conditions, presents and demonstrates a method for proving the correctness of implementations, and shows how to reason about concurrent objects, given they are linearizable.

Citation Context

...roofs in the Appendices. Our notion of linearizability generalizes and unifies similar notions found in specific examples in the literature. The use of concurrency control mechanisms such as monitors [26] or Ada tasks [9] is usually illustrated by simple implementations of linearizable objects such as bounded FIFO queues. These implementations permit very little concurrency, since operations execute o...
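The context above mentions that monitors are usually illustrated by bounded FIFO queues. A minimal sketch of such a monitor in Python (an illustrative stand-in, not code from any of the cited papers; Python's condition variables have Mesa-style signal-and-continue semantics rather than Hoare's signal-and-wait, hence the `while` loops):

```python
import threading
from collections import deque

class BoundedQueue:
    """Monitor-style bounded FIFO queue: one lock guards all state, and
    two condition variables play the role of Hoare's wait/signal queues.
    With Mesa semantics a woken waiter must re-check its condition."""

    def __init__(self, capacity):
        self._items = deque()
        self._capacity = capacity
        lock = threading.Lock()
        self._not_full = threading.Condition(lock)
        self._not_empty = threading.Condition(lock)

    def put(self, item):
        with self._not_full:
            while len(self._items) == self._capacity:
                self._not_full.wait()          # block while full
            self._items.append(item)
            self._not_empty.notify()           # wake one blocked consumer

    def get(self):
        with self._not_empty:
            while not self._items:
                self._not_empty.wait()         # block while empty
            item = self._items.popleft()
            self._not_full.notify()            # wake one blocked producer
            return item
```

As the citation context notes, such an implementation is linearizable but permits little concurrency: every operation holds the single monitor lock.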

Memory Coherence in Shared Virtual Memory Systems

by Kai Li, Paul Hudak , 1989
Abstract - Cited by 957 (17 self)
This paper studies the memory coherence problem in designing and implementing a shared virtual memory on loosely-coupled multiprocessors. Two classes of algorithms for solving the problem are presented. A prototype shared virtual memory on an Apollo ring has been implemented based on these algorithms. Both theoretical and practical results show that the memory coherence problem can indeed be solved efficiently on a loosely-coupled multiprocessor.
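Li and Hudak's algorithms keep pages coherent with write-invalidate protocols coordinated by a manager. A toy sketch of the bookkeeping (names and structure are illustrative only; a real shared virtual memory drives this from hardware page faults and also handles ownership transfer and manager distribution):

```python
class Manager:
    """Toy central manager for a page-based shared virtual memory:
    tracks, per page, the owning node and the copy set of readers."""

    def __init__(self):
        self.owner = {}      # page -> node holding the writable copy
        self.copyset = {}    # page -> set of nodes with read-only copies

    def read_fault(self, page, node):
        # A reader fetches a copy and joins the page's copy set.
        self.copyset.setdefault(page, set()).add(node)

    def write_fault(self, page, node):
        # A writer invalidates all other copies (readers and the old
        # owner), then becomes the exclusive owner of the page.
        others = self.copyset.pop(page, set())
        old_owner = self.owner.get(page)
        if old_owner is not None:
            others = others | {old_owner}
        self.owner[page] = node
        return others - {node}   # nodes whose copies must be invalidated
```

For example, after nodes A and B read page 0, a write fault on node C invalidates both copies before C may proceed.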

The Interdisciplinary Study of Coordination

by Thomas W. Malone, Kevin Crowston , 1993
Abstract - Cited by 809 (21 self)
Abstract not found

Eraser: a dynamic data race detector for multithreaded programs

by Stefan Savage, Michael Burrows, Greg Nelson, Patrick Sobalvarro, Thomas Anderson - ACM Transactions on Computer Systems , 1997
Abstract - Cited by 688 (2 self)
Multi-threaded programming is difficult and error prone. It is easy to make a mistake in synchronization that produces a data race, yet it can be extremely hard to locate this mistake during debugging. This paper describes a new tool, called Eraser, for dynamically detecting data races in lock-based multi-threaded programs. Eraser uses binary rewriting techniques to monitor every shared memory reference and verify that consistent locking behavior is observed. We present several case studies, including undergraduate coursework and a multi-threaded Web search engine, that demonstrate the effectiveness of this approach.
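The heart of Eraser is the lockset algorithm: for each shared variable v, a candidate set C(v) starts as all locks and is intersected with the locks held at each access; if C(v) ever becomes empty, no single lock consistently protects v. A simplified sketch (the real tool also runs a per-variable state machine for initialization and read-sharing, which this omits):

```python
class LocksetChecker:
    """Toy version of Eraser's lockset refinement."""

    def __init__(self):
        self.candidates = {}   # variable -> candidate lockset C(v)

    def access(self, var, locks_held):
        held = frozenset(locks_held)
        if var not in self.candidates:
            self.candidates[var] = held        # first access initializes C(v)
        else:
            self.candidates[var] &= held       # refine by intersection
        # An empty candidate set means no lock protects every access:
        # report a potential data race (returns False here).
        return len(self.candidates[var]) > 0
```

A variable accessed first under {mu1, mu2} and later under {mu1} is still consistently protected by mu1; a subsequent access under only {mu2} empties the set and triggers a warning.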

The Spec# Programming System: An Overview

by Mike Barnett, K. Rustan M. Leino, Wolfram Schulte , 2004
Abstract - Cited by 542 (50 self)
Spec# is the latest in a long line of work on programming languages and systems aimed at improving the development of correct software. This paper describes the goals and architecture of the Spec# programming system, consisting of the object-oriented Spec# programming language, the Spec# compiler, and the Boogie static program verifier. The language includes constructs for writing specifications that capture programmer intentions about how methods and data are to be used, the compiler emits run-time checks to enforce these specifications, and the verifier can check the consistency between a program and its specifications. The Spec# ...

Citation Context

...n-time error if expose (o) . . . is reached when o is already exposed. In this way, the expose mechanism is similar to thread-non-reentrant mutexes in concurrent programming, where monitor invariants [34] are the analog of our object invariants. If exposing were idempotent, then one would not be able to rely on the object invariant to hold immediately inside an expose block, in the same way that the i...
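The behavior described in the context above can be mimicked with a small Python analogue (hypothetical, not the Spec# runtime): an object invariant that may be broken only inside an expose block, is re-checked on exit, and a non-reentrant expose that rejects nested exposure:

```python
from contextlib import contextmanager

class Counter:
    """Toy analogue of a Spec#-style object invariant: value >= 0.
    Like a thread-non-reentrant mutex, exposing an already-exposed
    object is a run-time error rather than a no-op."""

    def __init__(self):
        self.value = 0
        self._exposed = False

    def invariant(self):
        return self.value >= 0

    @contextmanager
    def expose(self):
        if self._exposed:
            raise RuntimeError("object already exposed")
        self._exposed = True
        try:
            yield self                       # invariant may be broken here
            assert self.invariant(), "invariant violated on unexpose"
        finally:
            self._exposed = False
```

Because expose is non-reentrant, code inside the block can rely on having been the one to break the invariant, exactly the property the citation says idempotent exposing would lose.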

MULTILISP: a language for concurrent symbolic computation

by Robert H. Halstead - ACM Transactions on Programming Languages and Systems , 1985
Abstract - Cited by 529 (1 self)
Multilisp is a version of the Lisp dialect Scheme extended with constructs for parallel execution. Like Scheme, Multilisp is oriented toward symbolic computation. Unlike some parallel programming languages, Multilisp incorporates constructs for causing side effects and for explicitly introducing parallelism. The potential complexity of dealing with side effects in a parallel context is mitigated by the nature of the parallelism constructs and by support for abstract data types: a recommended Multilisp programming style is presented which, if followed, should lead to highly parallel, easily understandable programs. Multilisp is being implemented on the 32-processor Concert multiprocessor; however, it is ultimately intended for use on larger multiprocessors. The current implementation, called Concert Multilisp, is complete enough to run the Multilisp compiler itself and has been run on Concert prototypes including up to eight processors. Concert Multilisp uses novel techniques for task scheduling and garbage collection. The task scheduler helps control excessive resource utilization by means of an unfair scheduling policy; the garbage collector uses a multiprocessor algorithm based on the incremental garbage collector of Baker.

Citation Context

...sing objects with hidden state information can be accomplished quite easily using a procedure to represent each object. If the hidden state is mutable, the Multilisp programmer can build monitor-like [33] structures to synchronize access to it. Multilisp provides the necessary low-level tools for this encapsulation and synchronization, but no distinctive style or set of higher level abstractions for b...
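Multilisp's central parallelism construct is the future: (future X) immediately returns a placeholder while X is evaluated concurrently. Python's concurrent.futures gives a rough analogue (illustrative only; unlike Multilisp, which touches a future implicitly whenever its value is needed, forcing here is explicit via .result()):

```python
from concurrent.futures import ThreadPoolExecutor

def fib(n):
    """Deliberately naive recursive Fibonacci, as a stand-in workload."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

with ThreadPoolExecutor() as pool:
    f1 = pool.submit(fib, 20)           # analogue of (future (fib 20))
    f2 = pool.submit(fib, 21)           # both run while we continue
    total = f1.result() + f2.result()   # "touch": block until values arrive
```

The two submissions may run in parallel; the sum is only computed once both futures have resolved.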

Orca: A language for parallel programming of distributed systems

by Henri E. Bal, M. Frans Kaashoek, Andrew S. Tanenbaum - IEEE Transactions on Software Engineering , 1992
Abstract - Cited by 332 (46 self)
Orca is a language for implementing parallel applications on loosely coupled distributed systems. Unlike most languages for distributed programming, it allows processes on different machines to share data. Such data are encapsulated in data-objects, which are instances of user-defined abstract data types. The implementation of Orca takes care of the physical distribution of objects among the local memories of the processors. In particular, an implementation may replicate and/or migrate objects in order to decrease access times to objects and increase parallelism. This paper gives a detailed description of the Orca language design and motivates the design choices. Orca is intended for applications programmers rather than systems programmers. This is reflected in its design goals to provide a simple, easy to use language that is type-secure and provides clean semantics. The paper discusses three example parallel applications in Orca, one of which is described in detail. It also describes one of the existing implementations, which is based on reliable broadcasting. Performance measurements of this system are given for three parallel applications. The measurements show that significant speedups can be obtained for all three applications. Finally, the paper compares Orca with several related languages and systems.

Citation Context

... shared data-objects. By replicating objects, access control to shared objects is decentralized, which decreases access costs and increases parallelism. This is a major difference with, say, monitors [19], which centralize control to shared data. 2.4. Synchronization An abstract data type in Orca can be used for creating shared as well as local objects. For objects that are shared among multiple proce...
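The replication scheme contrasted with monitors above can be caricatured in a few lines (a hypothetical model, not the Orca runtime: real Orca totally orders writes via reliable broadcast and executes each operation indivisibly):

```python
class ReplicatedObject:
    """Toy Orca-style replicated data-object: every node keeps a full
    replica, so reads are local and cheap, while writes are broadcast
    to all replicas to keep them consistent."""

    def __init__(self, nodes, initial):
        self.replicas = {n: initial for n in nodes}

    def read(self, node):
        return self.replicas[node]       # local access, no communication

    def write(self, value):
        for n in self.replicas:          # "reliable broadcast" of the update
            self.replicas[n] = value
```

This shows the trade-off the citation context describes: reads are decentralized and parallel, at the cost of making every write a broadcast, the opposite balance from a centralized monitor.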

On the Design and Development of Program families

by David L. Parnas , 1976
Abstract - Cited by 326 (5 self)
Program families are defined (analogously to hardware families) as sets of programs whose common properties are so extensive that it is advantageous to study the common properties of the programs before analyzing individual members. The assumption that, if one is to develop a set of similar programs over a period of time, one should consider the set as a whole while developing the first three approaches to the development, is discussed. A conventional approach called "sequential development" is compared to "stepwise refinement" and "specification of information hiding modules." A more detailed comparison of the two methods is then made. By means of several examples it is demonstrated that the two methods are based on the same concepts but bring complementary advantages.

A type and effect system for atomicity

by Cormac Flanagan, Shaz Qadeer - In PLDI 03: Programming Language Design and Implementation , 2003
Abstract - Cited by 252 (22 self)
Ensuring the correctness of multithreaded programs is difficult, due to the potential for unexpected and nondeterministic interactions between threads. Previous work addressed this problem by devising tools for detecting race conditions, a situation where two threads simultaneously access the same data variable, and at least one of the accesses is a write. However, verifying the absence of such simultaneous-access race conditions is neither necessary nor sufficient to ensure the absence of errors due to unexpected thread interactions. We propose that a stronger non-interference property is required, namely atomicity. Atomic methods can be assumed to execute serially, without interleaved steps of other threads. Thus, atomic methods are amenable to sequential reasoning techniques, which significantly simplifies both formal and informal reasoning about program correctness. This paper presents a type system for specifying and verifying the atomicity of methods in multithreaded Java programs. The atomic type system is a synthesis of Lipton’s theory of reduction and type systems for race detection. We have implemented this atomic type system for Java and used it to check a variety of standard Java library classes. The type checker uncovered subtle atomicity violations in classes such as java.lang.String and java.lang.StringBuffer that cause crashes under certain thread interleavings.
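The abstract's claim that race-freedom is insufficient can be shown with a tiny example (illustrative; the problematic interleaving is scripted deterministically rather than left to the scheduler). Every access below is lock-protected, so a race detector like Eraser stays silent, yet the read-modify-write sequence is not atomic and an update is lost:

```python
import threading

class RacyCounter:
    """Race-free but not atomic: read and write each hold the lock,
    so there is no data race in the simultaneous-access sense, but a
    caller's read-then-write is two separate critical sections."""

    def __init__(self):
        self.value = 0
        self.lock = threading.Lock()

    def read(self):
        with self.lock:
            return self.value

    def write(self, v):
        with self.lock:
            self.value = v

counter = RacyCounter()
# Two logical threads each intend "value += 1", interleaved like this:
a = counter.read()      # thread A reads 0
b = counter.read()      # thread B reads 0 (A has not written yet)
counter.write(a + 1)    # A writes 1
counter.write(b + 1)    # B writes 1 -- A's increment is lost
```

Declaring the whole increment atomic, as the paper's type system does for methods, is what rules this interleaving out.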

Guardians and Actions: Linguistic Support for Robust, Distributed Programs

by Barbara Liskov, Robert Scheifler , 1983
Abstract - Cited by 248 (6 self)
An overview is presented of an integrated programming language and system designed to support the construction and maintenance of distributed programs: programs in which modules reside and execute at communicating, but geographically distinct, nodes. The language is intended to support a class of applications concerned with the manipulation and preservation of long-lived, on-line, distributed data. The language addresses the writing of robust programs that survive hardware failures without loss of distributed information and that provide highly concurrent access to that information while preserving its consistency. Several new linguistic constructs are provided; among them are atomic actions, and modules called guardians that survive node failures.
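The all-or-nothing flavor of the atomic actions mentioned above can be sketched in a few lines (a rough caricature under stated assumptions: real Argus-style atomic actions also provide serializability across guardians and survive crashes via stable storage, none of which is modeled here):

```python
from contextlib import contextmanager
import copy

@contextmanager
def atomic_action(state):
    """All-or-nothing update of a dict: the action mutates a private
    snapshot; only if it completes normally is the snapshot installed.
    Any exception aborts, leaving the original state untouched."""
    working = copy.deepcopy(state)   # run the action on a private copy
    yield working                    # an exception here aborts the action
    state.clear()
    state.update(working)            # commit on normal completion
```

For example, a funds transfer either moves the money in full or, if the action raises partway through, leaves both balances exactly as they were.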