Results 1 - 10 of 144
Frangipani: A Scalable Distributed File System
"... The ideal distributed file system would provide all its users with coherent, shared access to the same set of files,yet would be arbitrarily scalable to provide more storage space and higher performance to a growing user community. It would be highly available in spite of component failures. It woul ..."
Abstract
-
Cited by 320 (1 self)
- Add to MetaCart
(Show Context)
The ideal distributed file system would provide all its users with coherent, shared access to the same set of files, yet would be arbitrarily scalable to provide more storage space and higher performance to a growing user community. It would be highly available in spite of component failures. It would require minimal human administration, and administration would not become more complex as more components were added. Frangipani is a new file system that approximates this ideal, yet was relatively easy to build because of its two-layer structure. The lower layer is Petal (described in an earlier paper), a distributed storage service that provides incrementally scalable, highly available, automatically managed virtual disks. In the upper layer, multiple machines run the same Frangipani file system code on top of a shared Petal virtual disk, using a distributed lock service to ensure coherence. Frangipani is meant to run in a cluster of machines that are under a common administration and can communicate securely. Thus the machines trust one another and the shared virtual disk approach is practical. Of course, a Frangipani file system can be exported to untrusted machines using ordinary network file access protocols. We have implemented Frangipani on a collection of Alphas running DIGITAL Unix 4.0. Initial measurements indicate that Frangipani has excellent single-server performance and scales well as servers are added.
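The coherence scheme described above reduces to a simple rule: no server touches the shared virtual disk without first holding a lock from the shared lock service. A minimal sketch of that rule, with invented names (LockService, PetalDisk, write_block stand in for the real Frangipani and Petal interfaces):

```python
# Minimal sketch, assuming invented interfaces: every server must hold a
# lock from a shared lock service before touching a block of the shared
# virtual disk.
import threading

class LockService:
    """Toy stand-in for Frangipani's distributed lock service."""
    def __init__(self):
        self._locks = {}
        self._guard = threading.Lock()

    def acquire(self, name):
        with self._guard:                  # serialize lock-table updates
            lock = self._locks.setdefault(name, threading.Lock())
        lock.acquire()

    def release(self, name):
        self._locks[name].release()

class PetalDisk:
    """Toy stand-in for a Petal-like virtual disk: a flat array of blocks."""
    def __init__(self, nblocks, blocksize=512):
        self.blocks = [bytes(blocksize) for _ in range(nblocks)]

locks, disk = LockService(), PetalDisk(nblocks=16)

def write_block(blockno, data):
    # The exclusive lock is what keeps two file servers sharing the same
    # virtual disk from racing on one block.
    locks.acquire(f"block:{blockno}")
    try:
        disk.blocks[blockno] = data
    finally:
        locks.release(f"block:{blockno}")

write_block(3, b"hello".ljust(512, b"\0"))
print(disk.blocks[3][:5])                  # b'hello'
```

In the real system the locks cover files and on-disk structures rather than single blocks, and lock state survives server crashes; the toy version only illustrates the layering.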
Active Storage For Large-Scale Data Mining and Multimedia
, 1998
"... The increasing performance and decreasing cost of processors and memory are causing system intelligence to move into peripherals from the CPU. Storage system designers are using this trend toward "excess" compute power to perform more complex processing and optimizations inside stora ..."
Abstract
-
Cited by 147 (19 self)
- Add to MetaCart
The increasing performance and decreasing cost of processors and memory are causing system intelligence to move into peripherals from the CPU. Storage system designers are using this trend toward "excess" compute power to perform more complex processing and optimizations inside storage devices. To date, such optimizations have been at relatively low levels of the storage protocol. At the same time, trends in storage density, mechanics, and electronics are eliminating the bottleneck in moving data off the media and putting pressure on interconnects and host processors to move data more efficiently. We propose a system called Active Disks that takes advantage of processing power on individual disk drives to run application-level code. Moving portions of an application's processing to execute directly at disk drives can dramatically reduce data traffic and take advantage of the storage parallelism already present in large systems today. We discuss several types of appl...
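The core idea, moving application-level processing to where the data lives, fits in a few lines. A hedged sketch with an invented interface (ActiveDisk and its methods are illustrative, not the paper's API):

```python
# Hedged sketch of the Active Disks idea; the ActiveDisk class and its
# methods are invented for illustration, not the paper's interface.
class ActiveDisk:
    def __init__(self, records):
        self.records = records                  # data resident on the drive

    def scan(self):
        # Conventional path: every record crosses the interconnect.
        return list(self.records)

    def scan_with_filter(self, predicate):
        # "Active" path: an application-supplied filter runs at the drive,
        # so only matching records cross the interconnect.
        return [r for r in self.records if predicate(r)]

disk = ActiveDisk(records=range(1_000_000))
hits = disk.scan_with_filter(lambda r: r % 100 == 0)
print(len(hits))                                # 10000: ~1% of the data moved
```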
Scalable performance of the Panasas parallel file system
- In FAST-2008: 6th USENIX Conference on File and Storage Technologies
, 2008
"... The Panasas file system uses parallel and redundant access to object storage devices (OSDs), per-file RAID, distributed metadata management, consistent client caching, file locking services, and internal cluster management to provide a scalable, fault tolerant, high performance distributed file syst ..."
Abstract
-
Cited by 117 (25 self)
- Add to MetaCart
The Panasas file system uses parallel and redundant access to object storage devices (OSDs), per-file RAID, distributed metadata management, consistent client caching, file locking services, and internal cluster management to provide a scalable, fault-tolerant, high-performance distributed file system. The clustered design of the storage system and the use of client-driven RAID provide scalable performance to many concurrent file system clients through parallel access to file data that is striped across OSD storage nodes. RAID recovery is performed in parallel by the cluster of metadata managers, and declustered data placement yields scalable RAID rebuild rates as the storage system grows larger. This paper presents performance measures of I/O, metadata, and recovery operations for storage clusters that range in size from 10 to 120 storage nodes, 1 to 12 metadata nodes, and with file system client counts ranging from 1 to 100 compute nodes. Production installations are as large as 500 storage nodes, 50 metadata managers, and 5000 clients.
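Client-driven, per-file RAID means each client computes its own stripe units and parity before writing to the OSDs, so RAID work scales with the client population. A toy sketch of the layout arithmetic (stripe_file and the 4-byte unit size are illustrative choices, not the Panasas wire format):

```python
# Toy sketch of client-driven, RAID-5-style striping; names and unit size
# are invented for illustration, not the Panasas format.
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def stripe_file(data, n_data, unit=4):
    """Split data into stripes of n_data units plus one parity unit."""
    width = unit * n_data
    data += b"\0" * (-len(data) % width)          # pad to a full stripe
    stripes = []
    for off in range(0, len(data), width):
        units = [data[off + i*unit : off + (i+1)*unit] for i in range(n_data)]
        parity = units[0]
        for u in units[1:]:
            parity = xor(parity, u)               # the client computes parity
        stripes.append(units + [parity])          # last unit goes to a parity OSD
    return stripes

for s in stripe_file(b"per-file raid demo", n_data=3):
    print(s)                                      # 3 data units + parity each
```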
Interposed Request Routing for Scalable Network Storage
- In Proceedings of the Fourth Symposium on Operating Systems Design and Implementation (OSDI)
, 2000
"... This paper presents Slice, a new storage system architecture for highspeed LANs incorporating network-attached block storage. Slice interposes a request switching filter -- called a /proxy -- along the network path between the client and the network storage system (e.g., in a network adapter or swit ..."
Abstract
-
Cited by 103 (11 self)
- Add to MetaCart
(Show Context)
This paper presents Slice, a new storage system architecture for high-speed LANs incorporating network-attached block storage. Slice interposes a request switching filter, called a µproxy, along the network path between the client and the network storage system (e.g., in a network adapter or switch). The purpose of the µproxy is to route requests among a server ensemble that implements the file service. We present a prototype that uses this approach to virtualize the standard NFS file protocol to provide scalable, high-bandwidth file service to ordinary NFS clients. The paper presents and justifies the architecture, proposes and evaluates several request routing policies realizable within the architecture, and explores the effects of these policies on service structure.
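A µproxy of this kind is essentially a small classifier plus a hash function on the request path. The sketch below is only a guess at the flavor of such a policy (the server names and the read/write split are invented; the paper evaluates its own policies):

```python
# Illustrative request-routing policy in the spirit of a Slice uproxy;
# server names and the classification rule are invented for this sketch.
import hashlib

NAME_SERVERS = ["name-srv-0", "name-srv-1"]           # namespace/small-file ops
IO_SERVERS = ["io-srv-0", "io-srv-1", "io-srv-2"]     # bulk-I/O ensemble

def route(op, file_handle):
    """Pick a backend server for one NFS-like request."""
    h = int(hashlib.sha1(file_handle).hexdigest(), 16)
    if op in ("read", "write"):
        return IO_SERVERS[h % len(IO_SERVERS)]        # spread bulk data traffic
    return NAME_SERVERS[h % len(NAME_SERVERS)]        # keep a file's metadata together

print(route("read", b"fh:0042"))    # an I/O server
print(route("lookup", b"fh:0042"))  # a name server
```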
Filesystems for network-attached secure disks
, 1997
"... Network-attached storage enables network-striped data transfers directly between client and storage to pro-vide clients with scalable bandwidth on large transfers. Network-attached storage also decouples policy and enforcement of access control, avoiding unnecessary reverification of protection chec ..."
Abstract
-
Cited by 82 (6 self)
- Add to MetaCart
(Show Context)
Network-attached storage enables network-striped data transfers directly between client and storage to provide clients with scalable bandwidth on large transfers. Network-attached storage also decouples policy and enforcement of access control, avoiding unnecessary reverification of protection checks, reducing file manager work and increasing scalability. It eliminates the expense of a server computer devoted to copying data between peripheral network and client network. This architecture better matches storage technology’s sustained data rates, now 80 Mb/s and growing at 40% per year. Finally, it enables self-managing storage to counter the increasing cost of data management. The availability of cost-effective network-attached storage depends on it becoming a storage commodity, which in turn depends on its utility to a broad segment of the storage market. Specifically, multiple distributed and parallel filesystems must benefit from network-attached storage’s requirement for secure, direct access between client and storage, for reusable, asynchronous access protection checks, and for increased license to efficiently manage underlying storage media. In this paper, we describe a prototype network-attached secure disk interface and filesystems adapted to network-attached storage implementing Sun’s NFS, Transarc’s AFS, a network-striped NFS variant, and an informed prefetching NFS variant. Our experimental implementations demonstrate bandwidth and workload scaling and aggressive ...
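Network striping is the part of this that is easiest to picture in code: once clients may talk to drives directly, a large read becomes parallel fetches of stripe units. A toy sketch (DRIVES, read_unit, striped_read are all invented names):

```python
# Toy sketch of a network-striped read; in a real system each read_unit
# call is a network request straight to a drive, with no server on the
# data path. All names here are invented for illustration.
from concurrent.futures import ThreadPoolExecutor

# Four "network-attached drives", each holding one stripe unit of a file.
DRIVES = [{0: b"Netw"}, {1: b"ork-"}, {2: b"stri"}, {3: b"ped!"}]

def read_unit(unit_no):
    return DRIVES[unit_no % len(DRIVES)][unit_no]

def striped_read(n_units):
    with ThreadPoolExecutor() as pool:     # one in-flight request per unit
        return b"".join(pool.map(read_unit, range(n_units)))

print(striped_read(4))                     # b'Network-striped!'
```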
Implementing Cooperative Prefetching and Caching in a Globally-Managed Memory System
- In Proceedings of the Joint International Conference on Measurement and Modeling of Computer Systems
, 1998
"... This paper presents cooperative prefetching and caching --- the use of network-wide global resources (memories, CPUs, and disks) to support prefetching and caching in the presence of hints of future demands. Cooperative prefetching and caching effectively unites disk-latency reduction techniques fro ..."
Abstract
-
Cited by 65 (11 self)
- Add to MetaCart
(Show Context)
This paper presents cooperative prefetching and caching: the use of network-wide global resources (memories, CPUs, and disks) to support prefetching and caching in the presence of hints of future demands. Cooperative prefetching and caching effectively unites disk-latency reduction techniques from three lines of research: prefetching algorithms, cluster-wide memory management, and parallel I/O. When used together, these techniques greatly increase the power of prefetching relative to a conventional (non-global-memory) system. We have designed and implemented PGMS, a cooperative prefetching and caching system, under the Digital Unix operating system running on a 1.28 Gb/sec Myrinet-connected cluster of DEC Alpha workstations. Our measurements and analysis show that by using available global resources, cooperative prefetching can obtain significant speedups for I/O-bound programs. For example, for a graphics rendering application, our system achieves a speedup of 4.9 over a non-prefetc...
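The mechanism is easy to caricature: an application hint names blocks it will soon read, and the system stages them from disk into otherwise-idle memory elsewhere in the cluster before the demand arrives. A toy model (all names and the latency constants are invented; PGMS itself works inside the OS, not at this level):

```python
# Toy model of hinted prefetching into cluster memory; every name and the
# latency constants are invented for illustration.
DISK_MS, GLOBAL_MEM_MS = 10.0, 0.1       # made-up relative access costs

disk = {f"blk{i}": f"data{i}" for i in range(8)}
global_cache = {}                        # idle remote memory in the cluster

def prefetch(hints):
    """Stage hinted blocks from disk into global memory before demand."""
    for b in hints:
        global_cache[b] = disk[b]

def read(block):
    """Demand read: served from global memory if a hint got there first."""
    if block in global_cache:
        return global_cache[block], GLOBAL_MEM_MS
    return disk[block], DISK_MS

prefetch(["blk2", "blk3"])               # application hint of future reads
print(read("blk2"))                      # ('data2', 0.1): hit in global memory
print(read("blk7"))                      # ('data7', 10.0): miss, disk latency
```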
Active Disks - Remote Execution for Network-Attached Storage
, 1997
"... The principal trend in the design of computer systems is the expectation of much greater computational power in future generations of microprocessors. This trend applies to embedded systems as well as host processors. As a result, devices such as storage controllers have excess capacity and growing ..."
Abstract
-
Cited by 59 (1 self)
- Add to MetaCart
(Show Context)
The principal trend in the design of computer systems is the expectation of much greater computational power in future generations of microprocessors. This trend applies to embedded systems as well as host processors. As a result, devices such as storage controllers have excess capacity and growing computational capabilities. Storage system designers are exploiting this trend with higher-level interfaces to storage and increased intelligence inside storage devices. One development in this direction is Network-Attached Secure Disks (NASD), which attaches storage devices directly to the network and raises the storage interface above the simple (fixed-size block) memory abstraction of SCSI. This allows devices more freedom to provide efficient operations; promises more scalable subsystems by offloading file system and storage management functionality from dedicated servers; and reduces latency by executing common case requests directly at storage devices. In this paper, we push this increa...
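Raising the storage interface above fixed-size SCSI blocks means the drive exports variable-length objects and manages its own layout. A minimal contrast, with an invented class (ObjectDevice is illustrative, not the NASD interface):

```python
# Illustrative object interface, invented for this sketch: unlike SCSI's
# fixed-size blocks, the drive exports variable-length objects and owns
# its own allocation and layout decisions.
class ObjectDevice:
    def __init__(self):
        self.objects = {}                 # obj_id -> bytes; layout is private

    def write(self, obj_id, offset, data):
        buf = bytearray(self.objects.get(obj_id, b""))
        if len(buf) < offset:
            buf.extend(b"\0" * (offset - len(buf)))   # sparse-extend the object
        buf[offset:offset + len(data)] = data
        self.objects[obj_id] = bytes(buf)

    def read(self, obj_id, offset, length):
        return self.objects[obj_id][offset:offset + length]

osd = ObjectDevice()
osd.write(obj_id=7, offset=0, data=b"nasd object")
print(osd.read(obj_id=7, offset=0, length=4))   # b'nasd'
```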
Security for a High Performance Commodity Storage Subsystem
, 1999
"... and the United States Postal Service. The views and conclusions in this document are my own and should not be interpreted as representing the official policies, either expressed or implied, of any supporting organization or the U.S. Government. ..."
Abstract
-
Cited by 44 (1 self)
- Add to MetaCart
(Show Context)
and the United States Postal Service. The views and conclusions in this document are my own and should not be interpreted as representing the official policies, either expressed or implied, of any supporting organization or the U.S. Government.
Security for network attached storage devices
, 1997
"... This paper presents a novel cryptographic capability system addressing the security and performance needs of network attached storage systems in which file management functions occur at a different location than the file storage device. In our NASD system file managers issue capabilities to client m ..."
Abstract
-
Cited by 42 (8 self)
- Add to MetaCart
(Show Context)
This paper presents a novel cryptographic capability system addressing the security and performance needs of network attached storage systems in which file management functions occur at a different location than the file storage device. In our NASD system file managers issue capabilities to client machines, which can then directly access files stored on the network attached storage device without intervention by a file server. These capabilities may be reused by the client, so that interaction with the file manager is kept to a minimum. Our system emphasizes performance and scalability while separating the roles of decision maker (issuing capabilities) and verifier (validating a capability). We have demonstrated our system with adaptations of both the NFS and AFS distributed file systems using a prototype NASD implementation.
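The capability mechanism can be sketched with nothing more than a keyed MAC: the file manager and the drive share a secret, the manager signs (object, rights) pairs, and the drive verifies locally without ever calling back to the manager. A simplified illustration (real NASD capabilities carry more fields, such as expiry and version, and key management is more involved):

```python
# Simplified illustration of an HMAC-based capability; field layout and
# key handling are reduced for the sketch.
import hashlib
import hmac

SHARED_KEY = b"manager-and-drive-secret"    # provisioned out of band

def issue_capability(obj_id, rights):
    """Runs at the file manager: sign (object, rights) with the shared key."""
    msg = f"{obj_id}:{rights}".encode()
    tag = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
    return msg, tag

def drive_verify(msg, tag, obj_id, op):
    """Runs at the drive: check the MAC locally, no call to the manager."""
    expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return False                        # forged or tampered capability
    got_id, rights = msg.decode().split(":")
    return int(got_id) == obj_id and op in rights.split(",")

cap, tag = issue_capability(obj_id=7, rights="read")
print(drive_verify(cap, tag, obj_id=7, op="read"))    # True
print(drive_verify(cap, tag, obj_id=7, op="write"))   # False: right not granted
```

Because verification is purely local, the client can reuse the capability across many requests, which is exactly the separation of decision maker and verifier the abstract describes.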
Experiences with VI Communication for Database Storage
- In Proceedings of the 29th Annual International Symposium on Computer Architecture
, 2002
"... This paper examines how VI–based interconnects can be used to improve I/O path performance between a database server and the storage subsystem. We design and implement a software layer, DSA, that is layered between the application and VI. DSA takes advantage of specific VI features and deals with ma ..."
Abstract
-
Cited by 37 (10 self)
- Add to MetaCart
This paper examines how VI-based interconnects can be used to improve I/O path performance between a database server and the storage subsystem. We design and implement a software layer, DSA, that is layered between the application and VI. DSA takes advantage of specific VI features and deals with many of its shortcomings. We provide and evaluate one kernel-level and two user-level implementations of DSA. These implementations trade transparency and generality for performance to different degrees, and unlike research prototypes are designed to be suitable for real-world deployment. We present detailed measurements using a commercial database management system with both micro-benchmarks and industrial database workloads on a mid-size, 4 CPU, and a large, 32 CPU, database server. Our results show that VI-based interconnects and user-level communication can improve all aspects of the I/O path between the database system and the storage back-end. We also find that to make effective use of VI in I/O-intensive environments we need to provide substantial additional functionality beyond what is currently provided by VI. Finally, new storage APIs that help minimize kernel involvement in the I/O path are needed to fully exploit the benefits of VI-based communication.
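The appeal of VI-style user-level communication is that per-I/O kernel crossings disappear: the application posts descriptors for pre-registered buffers to a send queue and reaps results from a completion queue. A rough, purely illustrative model in ordinary Python (VI is a NIC/driver interface, so every name here is invented):

```python
# Purely illustrative model of queue-pair, user-level I/O in the VI style.
# Buffers are "registered" (pinned) once, up front; after that an I/O is
# just a descriptor on a queue, with no system call per operation.
from collections import deque

registered = {1: bytearray(4096)}          # buffer 1, registered at startup
send_queue, completion_queue = deque(), deque()

def post_write(buf_id, offset, payload):
    """Queue a write descriptor; returns immediately (asynchronous)."""
    send_queue.append((buf_id, offset, payload))

def nic_progress():
    """Stands in for the NIC consuming descriptors and posting completions."""
    while send_queue:
        buf_id, off, data = send_queue.popleft()
        registered[buf_id][off:off + len(data)] = data
        completion_queue.append(("write-done", buf_id, len(data)))

post_write(1, 0, b"one page of a database table")
nic_progress()
print(completion_queue.popleft())          # ('write-done', 1, 28)
```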