Results 1 - 10 of 103
Managing Update Conflicts in Bayou, a Weakly Connected Replicated Storage System
- In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles
, 1995
"... Bayou is a replicated, weakly consistent storage system designed for a mobile computing environment that includes portable machines with less than ideal network connectivity. To maximize availability, users can read and write any accessible replica. Bayou's design has focused on supporting apph ..."
Abstract
-
Cited by 512 (16 self)
- Add to MetaCart
(Show Context)
Bayou is a replicated, weakly consistent storage system designed for a mobile computing environment that includes portable machines with less than ideal network connectivity. To maximize availability, users can read and write any accessible replica. Bayou's design has focused on supporting application-specific mechanisms to detect and resolve the update conflicts that naturally arise in such a system, ensuring that replicas move towards eventual consistency, and defining a protocol by which the resolution of update conflicts stabilizes. It includes novel methods for conflict detection, called dependency checks, and per-write conflict resolution based on client-provided merge procedures. To guarantee eventual consistency, Bayou servers must be able to roll back the effects of previously executed writes and redo them according to a global serialization order. Furthermore, Bayou permits clients to observe the results of all writes received by a server, including tentative writes whose conflicts have not been ultimately resolved. This paper presents the motivation for and design of these mechanisms and describes the experiences gained with an initial implementation of the system.
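As a rough illustration of the write structure the abstract describes (a dependency check plus a client-provided merge procedure), the Python sketch below shows how such a write might be applied at a replica. The key-value data model, the helper names, and the meeting-room example are assumptions made for illustration, not Bayou's actual interface.

from dataclasses import dataclass
from typing import Any, Callable, Dict

Database = Dict[str, Any]

@dataclass
class Write:
    update: Callable[[Database], None]       # the intended modification
    dep_check: Callable[[Database], bool]    # conflict detector: False signals a conflict
    merge_proc: Callable[[Database], None]   # application-specific resolution

def apply_write(db: Database, w: Write) -> None:
    # Run the dependency check against the current replica state; apply the
    # write if it passes, otherwise fall back to the client-supplied merge.
    if w.dep_check(db):
        w.update(db)
    else:
        w.merge_proc(db)

# Hypothetical example: reserve a meeting room only if it is still free;
# on conflict, the merge procedure books an alternative room instead.
db: Database = {"room-A@10am": "bob"}
w = Write(
    update=lambda d: d.update({"room-A@10am": "alice"}),
    dep_check=lambda d: d["room-A@10am"] is None,
    merge_proc=lambda d: d.update({"room-B@10am": "alice"}),
)
apply_write(db, w)
print(db)   # room-A stays with bob, alice gets room-B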
Optimistic replication
- ACM COMPUTING SURVEYS
, 2005
"... Data replication is a key technology in distributed data sharing systems, enabling higher availability and performance. This paper surveys optimistic replication algorithms that allow replica contents to diverge in the short term, in order to support concurrent work practices and to tolerate failure ..."
Abstract
-
Cited by 290 (19 self)
- Add to MetaCart
Data replication is a key technology in distributed data sharing systems, enabling higher availability and performance. This paper surveys optimistic replication algorithms that allow replica contents to diverge in the short term, in order to support concurrent work practices and to tolerate failures in low-quality communication links. The importance of such techniques is increasing as collaboration through wide-area and mobile networks becomes popular. Optimistic replication techniques are different from traditional “pessimistic” ones. Instead of synchronous replica coordination, an optimistic algorithm propagates changes in the background, discovers conflicts after they happen and reaches agreement on the final contents incrementally. We explore the solution space for optimistic replication algorithms. This paper identifies key challenges facing optimistic replication systems — ordering operations, detecting and resolving conflicts, propagating changes efficiently, and bounding replica divergence — and provides a comprehensive survey of techniques developed for addressing these challenges.
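One classic building block in the design space this survey covers, detecting concurrent (conflicting) updates so they can be resolved later, is the version vector. The sketch below is a generic illustration of that idea under assumed replica names and update flow; it is not code or notation from the paper itself.

from typing import Dict

VersionVector = Dict[str, int]

def dominates(a: VersionVector, b: VersionVector) -> bool:
    # True if replica state 'a' has seen every update that 'b' has seen.
    return all(a.get(replica, 0) >= count for replica, count in b.items())

def compare(a: VersionVector, b: VersionVector) -> str:
    if dominates(a, b) and dominates(b, a):
        return "equal"
    if dominates(a, b):
        return "a-newer"
    if dominates(b, a):
        return "b-newer"
    return "concurrent"   # neither dominates: a conflict to be resolved

# Two replicas update the same object independently while disconnected.
vv_laptop = {"laptop": 3, "server": 5}
vv_phone = {"phone": 1, "server": 5}
print(compare(vv_laptop, vv_phone))   # -> "concurrent"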
Fundamental challenges in mobile computing
- In ACM Symposium on Principles of Distributed Computing
, 1996
"... This paper is an answer to the question: "What is unique and conceptually different about mobile computing? " The paper begins by describing a set of constraints intrinsic to mobile computing, and examining the impact of these constraints on the design of distributed systems. Next, it summ ..."
Abstract
-
Cited by 267 (18 self)
- Add to MetaCart
(Show Context)
This paper is an answer to the question: "What is unique and conceptually different about mobile computing?" The paper begins by describing a set of constraints intrinsic to mobile computing, and examining the impact of these constraints on the design of distributed systems. Next, it summarizes the key results of the Coda and Odyssey systems. Finally, it describes the research opportunities in five important topics relevant to mobile computing: caching metrics, semantic callbacks and validators, resource revocation, analysis of adaptation, and global estimation from local observations. Mobility exacerbates the tension between autonomy and interdependence that is characteristic of all distributed systems. The relative resource poverty of mobile elements, as well as their lower trust and robustness, argues for reliance on static servers. But the need to cope with unreliable and low-performance networks, as well as the need to be sensitive to power consumption, argues for self-reliance.
Taming aggressive replication in the Pangaea wide-area file system
, 2002
"... Pangaea is a wide-area file system that supports data sharing among a community of widely distributed users. It is built on a symmetrically decentralized infrastructure that consists of commodity computers provided by the end users. Computers act autonomously to serve data to their local users. When ..."
Abstract
-
Cited by 129 (3 self)
- Add to MetaCart
(Show Context)
Pangaea is a wide-area file system that supports data sharing among a community of widely distributed users. It is built on a symmetrically decentralized infrastructure that consists of commodity computers provided by the end users. Computers act autonomously to serve data to their local users. When possible, they exchange data with nearby peers to improve the system's overall performance, availability, and network economy. This approach is realized by aggressively creating a replica of a file whenever and wherever it is accessed. This paper presents ...
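A toy sketch of the "replicate wherever a file is accessed" idea described above: a node that does not yet hold a file pulls it from the nearest peer that does and keeps a local replica. The in-memory data structures and the numeric "distance" used for peer selection are assumptions for illustration, not Pangaea's actual mechanism.

from typing import Dict, Optional

class Node:
    def __init__(self, name: str):
        self.name = name
        self.replicas: Dict[str, bytes] = {}

    def read(self, path: str, peers: Dict["Node", float]) -> Optional[bytes]:
        if path in self.replicas:                      # already replicated locally
            return self.replicas[path]
        # Fetch from the closest peer holding the file, then keep a replica here.
        for peer, _dist in sorted(peers.items(), key=lambda kv: kv[1]):
            if path in peer.replicas:
                self.replicas[path] = peer.replicas[path]
                return self.replicas[path]
        return None                                    # no reachable replica

a, b = Node("a"), Node("b")
b.replicas["/doc.txt"] = b"hello"
print(a.read("/doc.txt", {b: 1.0}))   # replicates /doc.txt onto node "a"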
Mobile Information Access
, 1996
"... The ability to access information on demand when mobile will be a critical capability in the 21st century. In this paper, we examine the fundamental forces at work in mobile computing systems and explain how they constrain the problem of mobile information access. From these constraints, we derive t ..."
Abstract
-
Cited by 122 (4 self)
- Add to MetaCart
(Show Context)
The ability to access information on demand when mobile will be a critical capability in the 21st century. In this paper, we examine the fundamental forces at work in mobile computing systems and explain how they constrain the problem of mobile information access. From these constraints, we derive the importance of adaptivity as a crucial requirement of mobile clients. We then develop a taxonomy of adaptation strategies, and summarize our research in application-transparent and application-aware adaptation in the Coda and Odyssey systems respectively.
The IceCube approach to the reconciliation of divergent replicas
, 2001
"... We describe a novel approach to log-based reconciliation called IceCube. It is general and is parameterised by application and object semantics. IceCube considers more flexible orderings and is designed to ease the burden of reconciliation on the application programmers. IceCube captures the static ..."
Abstract
-
Cited by 121 (11 self)
- Add to MetaCart
We describe a novel approach to log-based reconciliation called IceCube. It is general and is parameterised by application and object semantics. IceCube considers more flexible orderings and is designed to ease the burden of reconciliation on the application programmers. IceCube captures the static and dynamic reconciliation constraints between all pairs of actions, proposes schedules that satisfy the static constraints, and validates them against the dynamic constraints. Preliminary experience indicates that strong static constraints ...
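The reconciliation loop the abstract outlines, proposing schedules that satisfy static pairwise constraints and then validating them against dynamic constraints, can be sketched as below. The action and constraint encodings, the brute-force enumeration, and the bank-balance example are assumptions for illustration, not IceCube's real API or search strategy.

from itertools import permutations
from typing import Callable, Dict, List, Set, Tuple

Action = str
State = Dict[str, int]

def schedules(actions: List[Action],
              must_precede: Set[Tuple[Action, Action]]) -> List[List[Action]]:
    # All orderings of the logged actions consistent with the static constraints.
    ok = []
    for perm in permutations(actions):
        pos = {a: i for i, a in enumerate(perm)}
        if all(pos[x] < pos[y] for x, y in must_precede):
            ok.append(list(perm))
    return ok

def validate(schedule: List[Action],
             apply: Dict[Action, Callable[[State], bool]],
             state: State) -> bool:
    # Replay the schedule; an action returns False if its dynamic check fails.
    s = dict(state)
    return all(apply[a](s) for a in schedule)

# Two users edited a shared balance offline; "debit" must not drive it negative.
apply = {
    "credit": lambda s: (s.__setitem__("bal", s["bal"] + 5) or True),
    "debit": lambda s: s["bal"] >= 3 and (s.__setitem__("bal", s["bal"] - 3) or True),
}
for sched in schedules(["credit", "debit"], must_precede=set()):
    if validate(sched, apply, {"bal": 0}):
        print("accepted schedule:", sched)   # only credit-before-debit succeeds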
System support for pervasive applications
- ACM Trans. on Computer Systems
, 2004
"... ..."
(Show Context)
The evolution of Coda
, 2002
"... Failure-resilient, scalable, and secure read-write access to shared information by mobile and static users over wireless and wired networks is a fundamental computing challenge. In this article, we describe how the Coda file system has evolved to meet this challenge through the development of mechan ..."
Abstract
-
Cited by 85 (20 self)
- Add to MetaCart
Failure-resilient, scalable, and secure read-write access to shared information by mobile and static users over wireless and wired networks is a fundamental computing challenge. In this article, we describe how the Coda file system has evolved to meet this challenge through the development of mechanisms for server replication, disconnected operation, adaptive use of weak connectivity, isolation-only transactions, translucent caching, and opportunistic exploitation of hardware surrogates. For each mechanism, the article explains how usage experience with it led to the insights for another mechanism. It also shows how Coda has been influenced by the work of other researchers and by industry. The article closes with a discussion of the technical and nontechnical lessons that can be learned from the evolution of the system.
Storage-based intrusion detection: watching storage activity for suspicious behavior
- In Proceedings of the 12th USENIX Security Symposium
, 2003
"... Storage-based intrusion detection allows storage systems to transparently watch for suspicious activity. Storage systems are well-positioned to spot several common intruder actions, such as adding backdoors, inserting Trojan horses, and tampering with audit logs. Further, an intrusion detection syst ..."
Abstract
-
Cited by 59 (8 self)
- Add to MetaCart
(Show Context)
Storage-based intrusion detection allows storage systems to transparently watch for suspicious activity. Storage systems are well-positioned to spot several common intruder actions, such as adding backdoors, inserting Trojan horses, and tampering with audit logs. Further, an intrusion detection system (IDS) embedded in a storage device continues to operate even after client systems are compromised. This paper describes a number of specific warning signs visible at the storage interface. It describes and evaluates a storage IDS, embedded in an NFS server, demonstrating both feasibility and efficiency of storage-based intrusion detection. In particular, both the performance overhead and memory required (40 KB for a reasonable set of rules) are minimal. With small extensions, storage IDSs can also be embedded in block-based storage devices.
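A minimal sketch of the kind of storage-level rule checking the abstract describes: the storage server inspects each operation it serves and flags those matching simple warning-sign rules, such as truncating an audit log. The rule set, request format, and paths below are assumptions for illustration, not the paper's actual rules or the evaluated NFS implementation.

from typing import Callable, List, NamedTuple, Tuple

class StorageOp(NamedTuple):
    kind: str    # "write", "truncate", "delete", ...
    path: str

RULES: List[Tuple[str, Callable[[StorageOp], bool]]] = [
    ("tampering with audit log",
     lambda op: op.path.startswith("/var/log/") and op.kind in ("truncate", "delete")),
    ("suspicious system binary modification",
     lambda op: op.path.startswith("/bin/") and op.kind == "write"),
]

def check(op: StorageOp) -> List[str]:
    # Return the names of all rules this storage operation violates.
    return [name for name, pred in RULES if pred(op)]

print(check(StorageOp("truncate", "/var/log/auth.log")))  # -> ['tampering with audit log']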
A Multicast-based Distributed File System for the Internet
- In Operating Systems Design and Implementation
, 1996
"... JetFile is a file system designed with multicast as its distribution mechanism. The goal is to support a large number of clients in an environment such as the Internet where hosts are attached to both high and low speed networks, sometimes over long distances. JetFile is designed for reduced relianc ..."
Abstract
-
Cited by 54 (3 self)
- Add to MetaCart
JetFile is a file system designed with multicast as its distribution mechanism. The goal is to support a large number of clients in an environment such as the Internet where hosts are attached to both high and low speed networks, sometimes over long distances. JetFile is designed for reduced reliance on servers by allowing client-to-client updates using scalable reliable multicast. Clients on high speed networks prefetch large numbers of files. On low speed networks such as wireless, special caching policies are used to decrease file access latency. The prototype implementation of JetFile is on the JetStream gigabit local area network which provides hardware support for many multicast addresses. The multicast Internet backbone (Mbone) is the wide area testbed for JetFile. To achieve scalability in a wide area network environment, the next generation of distributed file systems needs a new paradigm of communication. The prevailing mode of communication for current distrib...
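In the spirit of the client-to-client update distribution described above, the sketch below announces a file change over plain IP multicast. This is only a generic UDP illustration; JetFile itself relies on a scalable reliable multicast protocol, which this sketch does not provide, and the group address, port, and message format are assumptions.

import socket
import struct

GROUP, PORT = "239.1.2.3", 5007   # assumed multicast group and port

def announce_update(path: str, version: int) -> None:
    # Send a small "file changed" notification to the multicast group.
    msg = f"{path}:{version}".encode()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, struct.pack("b", 1))
    sock.sendto(msg, (GROUP, PORT))
    sock.close()

announce_update("/shared/report.tex", version=7)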