The DiskSim Simulation Environment Version 2.0 Reference Manual (1999)

by G. Ganger, B. Worthington, Y. Patt

Results 11 - 20 of 76

Power-Aware Storage Cache Management.

by Q. Zhu, Y. Zhou - IEEE Transactions on Computers, 2005
"... ..."
Abstract - Cited by 35 (0 self) - Add to MetaCart
Abstract not found
(Show Context)

Citation Context

...We simulate a complete storage system to evaluate our power-aware cache management schemes. We have enhanced the widely used DiskSim simulator [12] and augmented it with a disk power model. The power model we use is similar to that used by Gurumurthi et al. [16] for multi-speed disks. We have also developed a storage cache simulator, CacheSim an...
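
The citation context above mentions augmenting DiskSim with a multi-speed disk power model in the spirit of Gurumurthi et al. As a rough illustration of that kind of bookkeeping (not the authors' actual model; the mode names and wattages below are invented), a simulator can simply integrate per-mode power over the time each disk spends in each mode:

    # Illustrative only: a toy multi-speed disk energy account, not the
    # authors' DiskSim extension. Mode names and wattages are made up.
    MODE_POWER_W = {"active_fast": 13.5, "active_slow": 8.1, "idle": 6.0, "standby": 2.5}

    def disk_energy_joules(intervals):
        """intervals: list of (mode, seconds) pairs for one simulated disk."""
        return sum(MODE_POWER_W[mode] * seconds for mode, seconds in intervals)

    # Example: 10 s at full speed, 30 s idle, 60 s spun down.
    print(disk_energy_joules([("active_fast", 10), ("idle", 30), ("standby", 60)]))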

The DiskSim Simulation Environment Version 4.0 Reference Manual

by John S. Bucy, Jiri Schindler, Steven W. Schlosser, Gregory R. Ganger, 2008
"... DiskSim is an efficient, accurate and highly-configurable disk system simulator developed to support research into various aspects of storage subsystem architecture. It includes modules that simulate disks, intermediate controllers, buses, device drivers, request schedulers, disk block caches, and d ..."
Abstract - Cited by 35 (0 self) - Add to MetaCart
DiskSim is an efficient, accurate and highly-configurable disk system simulator developed to support research into various aspects of storage subsystem architecture. It includes modules that simulate disks, intermediate controllers, buses, device drivers, request schedulers, disk block caches, and disk array data organizations. In particular, the disk drive module simulates modern disk drives in great detail and has been carefully validated against several production disks (with accuracy that exceeds any previously reported simulator). It also includes a MEMS-based storage device module. This manual describes how to configure and use DiskSim, which has been made publicly available with the hope of advancing the state-of-the-art in disk system performance evaluation in the research community. The manual also briefly describes DiskSim’s internal structure and various validation results. Keywords: storage system, disk simulator, disk model

Citation Context

...hanical model handles seek times, rotational latency and various other aspects of disk mechanics. The implementations of these modules in the current version of Diskmodel are derived from DiskSim 2.0 [4]. Disksim 3.0 uses Diskmodel natively. Diskmodel has also been used in a device driver implementation of a shortest positioning time first disk request scheduler...
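
The abstract describes DiskSim as a stack of modules (device drivers, controllers, buses, disks) that a request passes through. A purely hypothetical sketch of that layered structure is shown below; the class names and fixed latencies are placeholders and do not correspond to DiskSim's actual interfaces or parameters:

    # Hypothetical layering sketch; not DiskSim's API.
    class Disk:
        def service(self, request):
            return 5.0                 # pretend every access costs 5 ms

    class Bus:
        def __init__(self, downstream): self.downstream = downstream
        def service(self, request):
            return 0.5 + self.downstream.service(request)   # transfer overhead

    class Controller:
        def __init__(self, downstream): self.downstream = downstream
        def service(self, request):
            return 0.25 + self.downstream.service(request)  # controller overhead

    class DeviceDriver:
        def __init__(self, downstream): self.downstream = downstream
        def submit(self, request):
            return self.downstream.service(request)

    driver = DeviceDriver(Controller(Bus(Disk())))
    print(driver.submit({"lbn": 1234, "size": 8}), "ms")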

PB-LRU: a self-tuning power aware storage cache replacement algorithm for conserving disk energy

by Qingbo Zhu, Asim Shankar, Yuanyuan Zhou - In Proceedings of the 18th annual international conference on Supercomputing, 2004
"... Energy consumption is an important concern at data centers, where storage systems consume a significant fraction of the total energy. A recent study proposed power-aware storage cache management to provide more opportunities for the underlying disk power management scheme to save energy. However, th ..."
Abstract - Cited by 31 (2 self) - Add to MetaCart
Energy consumption is an important concern at data centers, where storage systems consume a significant fraction of the total energy. A recent study proposed power-aware storage cache management to provide more opportunities for the underlying disk power management scheme to save energy. However, the on-line algorithm proposed in that study requires cumbersome parameter tuning for each workload and is therefore difficult to apply to real systems. This paper presents a new power-aware on-line algorithm called PB-LRU (Partition-Based LRU) that requires little parameter tuning. Our results with both real system and synthetic workloads show that PB-LRU without any parameter tuning provides similar or even better performance and energy savings than the previous power-aware algorithm with the best parameter setting for each workload.

Citation Context

...(Table 1: Simulation Parameters) We simulate a complete storage system to evaluate our power-aware cache management schemes. We enhanced the widely used DiskSim simulator [11] by adding a disk power model. The power model used is similar to that proposed by Gurumurthi et al. [15] for multi-speed disks. We also developed a storage cache simulator, CacheSim and use it with D...
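
As a reading aid for the partitioning idea in the PB-LRU abstract, here is a minimal sketch of a cache split into one LRU partition per disk. The equal-split sizing is a placeholder; the paper's contribution is choosing partition sizes automatically from per-partition miss-rate and energy estimates:

    # Minimal sketch of partition-per-disk LRU caching (not the full PB-LRU).
    from collections import OrderedDict

    class PartitionedLRU:
        def __init__(self, num_disks, total_blocks):
            self.capacity = total_blocks // num_disks          # naive equal split
            self.parts = [OrderedDict() for _ in range(num_disks)]

        def access(self, disk, block):
            part = self.parts[disk]
            hit = block in part
            if hit:
                part.move_to_end(block)                        # refresh LRU position
            else:
                part[block] = True
                if len(part) > self.capacity:
                    part.popitem(last=False)                   # evict LRU block
            return hit

    cache = PartitionedLRU(num_disks=4, total_blocks=1024)
    print(cache.access(0, 42), cache.access(0, 42))            # miss, then hit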

Synthesizing Representative I/O Workloads for TPC-H

by Yanyong Zhang, Anand Sivasubramaniam, Hubertus Franke, Natarajan Gautam, Shailabh Nagar
"... Synthesizing I/O requests that can accurately capture workload behavior is extremely valuable for the design, implementation and optimization of disk subsystems. This paper presents a synthetic workload generator for TPC-H, an important decision-support commercial workload, by completely characteriz ..."
Abstract - Cited by 24 (2 self) - Add to MetaCart
Synthesizing I/O requests that can accurately capture workload behavior is extremely valuable for the design, implementation and optimization of disk subsystems. This paper presents a synthetic workload generator for TPC-H, an important decision-support commercial workload, by completely characterizing the arrival and access patterns of its queries. We present a novel approach for parameterizing the behavior of inter-mingling streams of sequential requests, and exploit correlations between multiple attributes of these requests, to generate disk block-level traces that are shown to accurately mimic the behavior of a real trace in terms of response time characteristics for each TPC-H query.
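
To make the notion of "inter-mingling streams of sequential requests" concrete, here is a toy trace generator under invented parameters (stream count, run length, inter-arrival distribution); it is not the paper's fitted TPC-H model:

    # Toy generator: several sequential streams interleaved in time.
    import random

    def synth_trace(n_requests, n_streams=4, run_len=64, mean_gap_ms=2.0):
        heads = [random.randrange(1_000_000) for _ in range(n_streams)]
        runs = [0] * n_streams
        t = 0.0
        for _ in range(n_requests):
            t += random.expovariate(1.0 / mean_gap_ms)        # arrival time
            s = random.randrange(n_streams)                   # pick a stream
            if runs[s] >= run_len:                            # start a new run
                heads[s] = random.randrange(1_000_000)
                runs[s] = 0
            yield (t, s, heads[s])                            # (time, stream, block)
            heads[s] += 8                                      # 8-block sequential step
            runs[s] += 1

    for req in synth_trace(5):
        print(req)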

Intra-Disk Parallelism: An Idea Whose Time Has Come

by Sriram Sankar, Sudhanva Gurumurthi, Mircea R. Stan, 2008
"... Power is a big problem in data centers and a significant fraction of this power is consumed by the storage system. Server storage systems use a large number of disks to achieve high performance, which increases their power consumption. In this paper, we propose to significantly reduce the power cons ..."
Abstract - Cited by 17 (1 self) - Add to MetaCart
Power is a big problem in data centers and a significant fraction of this power is consumed by the storage system. Server storage systems use a large number of disks to achieve high performance, which increases their power consumption. In this paper, we propose to significantly reduce the power consumed by the storage system via intra-disk parallelism, wherein disk drives can exploit parallelism in the I/O request stream. Intra-disk parallelism can facilitate replacing a large disk array with a smaller one, using the minimum number of disk drives needed to satisfy the capacity requirements. We show that the design space of intra-disk parallelism is large and present a taxonomy to formulate specific implementations within this space. Using a set of commercial workloads, we perform a limit study to identify the key performance bottlenecks that arise when we replace a storage array that is tuned to provide high performance with a single high-capacity disk drive. These are the bottlenecks that intra-disk parallelism would need to alleviate. We then explore a particular intra-disk parallelism approach, where a disk is equipped with multiple arm assemblies that can be independently controlled, and evaluate three disk drive designs that embody this form of parallelism. We show that it is possible to match, and even surpass, the performance of a storage array for these workloads by using a single disk drive of sufficient capacity that exploits intra-disk parallelism, while significantly reducing the power consumed by the storage system compared to the multi-disk configuration. We evaluate the performance and power consumption of disk arrays composed of intra-disk parallel drives, discuss the engineering issues involved in implementing such drives, and finally provide a preliminary cost-benefit analysis of building and deploying intra-disk parallel drives, using cost data obtained from several companies in the disk drive industry.

Citation Context

...(Table 2: Workloads and the configuration of the original storage systems on which the traces were collected.) Our experiments are carried out using the Disksim simulator [12], which models the performance of disks, caches, storage interconnects, and multi-disk organizations in detail, and has been validated against several real disk drives. We augmented Disksim with power...
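
One way to picture intra-disk parallelism with multiple independently controlled arm assemblies is a dispatcher that sends each request to the arm whose head is closest to the target cylinder. The sketch below uses a crude linear seek cost purely for illustration; it is not the paper's disk model or taxonomy:

    # Toy dispatcher: nearest-arm assignment with a linear seek-distance cost.
    def dispatch(requests, n_arms=2):
        arm_pos = [0] * n_arms
        total_seek = 0
        for cyl in requests:
            arm = min(range(n_arms), key=lambda a: abs(arm_pos[a] - cyl))
            total_seek += abs(arm_pos[arm] - cyl)
            arm_pos[arm] = cyl
        return total_seek

    trace = [100, 40_000, 200, 39_500, 300]
    # Two arms traverse far fewer cylinders than one for this interleaved trace.
    print(dispatch(trace, n_arms=1), dispatch(trace, n_arms=2))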

Power Conservation Strategies for MEMS-based Storage Devices

by Ying Lin, Scott A. Brandt, Darrell D. E. Long, Ethan L. Miller, 2002
"... Power dissipation in mobile computers is crucial and researchers have expended significant effort to improve power management for the hard drive, which accounts for a large percentage of the power consumed by the system. A new class of secondary storage devices based on microelectromechanical system ..."
Abstract - Cited by 15 (0 self) - Add to MetaCart
Power dissipation in mobile computers is crucial and researchers have expended significant effort to improve power management for the hard drive, which accounts for a large percentage of the power consumed by the system. A new class of secondary storage devices based on microelectromechanical systems (MEMS) promises to consume an order of magnitude less power with 10--20 times shorter latency and 10 times greater storage densities. Though MEMS storage devices promise to provide a more energy-efficient storage solution for mobile computing applications, little research has been conducted on how to manage the power consumption of these devices. In this paper we examine the power model of a MEMS-based storage device and perform a quantitative analysis of the power distribution among different working modes. Based on our analysis, we present three strategies to reduce power consumption: aggressive spin-down, merging of sequential requests, and subsector accesses. We show that immediate spin-down can save 50% of the total energy consumed by the device at the cost of increased response time. Merging of sequential requests can save up to 18% of the servicing energy and reduce response time by about 20%. Transferring less data for small requests such as those for metadata can save 40% of the servicing energy. Finally, we show that by applying all three power management strategies simultaneously the total power consumption of MEMS-based storage devices can be reduced by about 54% with no impact on I/O performance. This research is supported by the National Science Foundation under grant number CCR-073509 and the Institute for Scientific Computation Research at Lawrence Livermore National Laboratory under grant number SC-20010378.

Citation Context

...ple, accessing partial sectors is not supported in disk drives but MEMS storage devices may be able to conserve power by using small transfers for metadata and other small requests. Using the DiskSim [4] MEMS simulator, we investigated the interaction between power consumption and I/O performance based on file system traces [18]. Our experimental results show that the power consumption pattern of MEM...
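
The aggressive spin-down strategy in the abstract amounts to a simple energy trade-off per idle gap: stay active, or drop to standby and pay a wake-up cost. A toy calculation with invented power and transition numbers (not the paper's MEMS parameters) looks like this:

    # Toy per-idle-gap energy comparison; all constants are made up.
    P_ACTIVE, P_STANDBY, SPINUP_ENERGY, SPINUP_TIME = 1.0, 0.05, 0.3, 0.5  # W, W, J, s

    def idle_gap_energy(gap_s, spin_down_immediately):
        if not spin_down_immediately:
            return P_ACTIVE * gap_s
        if gap_s <= SPINUP_TIME:                 # gap too short to pay off
            return P_ACTIVE * gap_s
        return P_STANDBY * (gap_s - SPINUP_TIME) + SPINUP_ENERGY

    for gap in (0.2, 1.0, 10.0):
        print(gap, idle_gap_energy(gap, False), idle_gap_energy(gap, True))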

Power-efficient server-class performance from arrays of laptop disks

by Athanasios E. Papathanasiou, Michael L. Scott, 2004
"... The disk array of a server-class system can account for a significant portion of the server’s total power budget. Similar observations for mobile (e.g. laptop) systems have led to the development of power management policies that spin down the hard disk when it is idle, but these policies do not tra ..."
Abstract - Cited by 15 (1 self) - Add to MetaCart
The disk array of a server-class system can account for a significant portion of the server’s total power budget. Similar observations for mobile (e.g. laptop) systems have led to the development of power management policies that spin down the hard disk when it is idle, but these policies do not transfer well to server-class disks. On the other hand, state-of-the-art laptop disks have response times and bandwidths within a factor of 2.5 of their server-class cousins, and consume less than one sixth the energy. These ratios suggest the possibility of replacing a server-class disk array with a larger array of mirrored laptop disks. By spinning up a subset of the disks proportional to the current workload, we can exploit the latency tolerance and parallelism of typical server workloads to achieve significant energy savings, with equal or better peak bandwidth. Potential savings range from 50% to 80% of the total disk energy consumption.
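
The core policy in the abstract, spinning up a subset of mirrored laptop disks proportional to the current workload, reduces to a small capacity calculation. A sketch with placeholder bandwidth figures (not the paper's measured numbers):

    # Toy sizing rule: keep just enough disks spinning to cover current demand.
    import math

    def active_disks(demand_mb_s, per_disk_mb_s=25.0, total_disks=12, min_active=1):
        need = math.ceil(demand_mb_s / per_disk_mb_s)
        return max(min_active, min(total_disks, need))

    for load in (5, 60, 400):
        print(load, "MB/s ->", active_disks(load), "disks spinning")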

HyLog: A high performance approach to managing disk layout

by Wenguang Wang, Yanping Zhao, Rick Bunt - In Proceedings of the 3rd USENIX Conference on File and Storage Technologies (FAST 2004), 2004
"... Abstract Our objective is to improve disk I/O performance in multi-disk systems supporting multiple concurrent users, such as file servers, database servers, and email servers. In such systems, many disk reads are absorbed by large in-memory buffers, and so disk writes comprise a large portion of t ..."
Abstract - Cited by 14 (1 self) - Add to MetaCart
Our objective is to improve disk I/O performance in multi-disk systems supporting multiple concurrent users, such as file servers, database servers, and email servers. In such systems, many disk reads are absorbed by large in-memory buffers, and so disk writes comprise a large portion of the disk I/O traffic. LFS (Log-structured File System) has the potential to achieve superior write performance by accumulating small writes into large blocks and writing them to new places, rather than overwriting on top of their old copies (called Overwrite). Although it is commonly believed that the high segment cleaning overhead of LFS makes it a poor choice for workloads with random updates, in this paper we find that because of the fast improvement of disk technologies, LFS significantly outperforms Overwrite in a wide range of system configurations and workloads (including the random update workload) under modern and future disks. LFS performs worse than Overwrite, however, when the disk space utilization is very high due to the high cleaning cost. In this paper, we propose a new approach, the Hybrid Log-structured (HyLog) disk layout, to overcome this problem. HyLog uses a log-structured approach for hot pages to achieve high write performance, and Overwrite for cold pages to reduce the cleaning cost. We compare the performance of HyLog to that of Overwrite, LFS and WOLF (the latest improvement on LFS) under various system configurations and workloads. Our results show that, in most cases, HyLog performs comparably to the best of the other three approaches.

Citation Context

...ns to compare the throughput of different disk layouts. Our simulator consists of a disk component, a disk layout component, and a buffer pool component. We ported the disk component from DiskSim 2.0 [5]. The disk layout component simulates disk layouts for Overwrite, LFS, WOLF, and HyLog. The implementation of LFS is based on the description in [14, 19] and the source code of the Sprite operating sy...
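
A minimal sketch of the hot/cold split that HyLog describes: writes to pages judged hot are appended to a log region, while cold pages are overwritten in place. The hotness test below (a simple write-count threshold) is a stand-in for the paper's actual policy:

    # Toy hot/cold routing; not the paper's HyLog implementation.
    from collections import defaultdict

    class HyLogLayout:
        def __init__(self, hot_threshold=3):
            self.write_count = defaultdict(int)
            self.hot_threshold = hot_threshold
            self.log_head = 0                      # next free slot in the log region

        def write(self, page):
            self.write_count[page] += 1
            if self.write_count[page] >= self.hot_threshold:
                slot, self.log_head = self.log_head, self.log_head + 1
                return ("log", slot)               # hot: append to the log
            return ("overwrite", page)             # cold: update in place

    layout = HyLogLayout()
    for p in (7, 7, 7, 9):
        print(p, layout.write(p))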

REO: A generic RAID Engine and Optimizer

by Dingshan He, James Lee Hafner
"... Present day applications that require reliable data storage use one of five commonly available RAID levels to protect against data loss due to media or disk failures. With a marked rise in the quantity of stored data and no commensurate improvement in disk reliability, a greater variety is becoming ..."
Abstract - Cited by 13 (1 self) - Add to MetaCart
Present day applications that require reliable data storage use one of five commonly available RAID levels to protect against data loss due to media or disk failures. With a marked rise in the quantity of stored data and no commensurate improvement in disk reliability, a greater variety of RAID codes is becoming necessary to contain costs. Adding new RAID codes to an implementation becomes cost prohibitive since they require significant development, testing and tuning efforts. We suggest a novel solution to this problem: a generic RAID Engine and Optimizer (REO). It is generic in that it works for any XOR-based erasure (RAID) code and under any combination of sector or disk failures. REO can systematically deduce a least cost reconstruction strategy for a read of lost pages or an update strategy for a flush of dirty pages. Using trace driven simulations we show that REO can automatically tune I/O performance to be competitive with existing RAID implementations.

Citation Context

...tion of REO, we have attempted to quantify the benefits of the Optimizer within the RAID Engine. For this we built a simulation model that included memory and I/O buses and integrated it into disksim [13], a disk simulator with fairly accurate disk and array models. We simulated the setup shown in Figure 1. Table 2 lists the fixed parameters for our experiment and their values. We chose pa...
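
REO works over any XOR-based erasure code, and the basic recovery identity such codes rely on is easy to state: within a parity stripe, a single lost page equals the XOR of the surviving pages. The snippet below shows only that identity, not REO's least-cost strategy selection:

    # XOR parity recovery for a single lost page in a stripe.
    def xor_pages(pages):
        out = bytearray(len(pages[0]))
        for page in pages:
            for i, b in enumerate(page):
                out[i] ^= b
        return bytes(out)

    d0, d1, d2 = b"\x01\x02", b"\x0f\x00", b"\xaa\x55"
    parity = xor_pages([d0, d1, d2])
    assert xor_pages([d0, d2, parity]) == d1       # rebuild the lost page d1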

Zone-Based Shortest Positioning Time First Scheduling for MEMS-Based Storage Devices

by Bo Hong, Scott A. Brandt, Darrell D. E. Long, Ethan L. Miller, Karen A. Glocer, Zachary N. J. Peterson - In Proceedings of the 11th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS ’03), 2003
"... Access latency to secondary storage devices is frequently a limiting factor in computer system performance. New storage technologies promise to provide greater storage densities at lower latencies than is currently obtainable with hard disk drives. MEMS-based storage devices use orthogonal magnetic ..."
Abstract - Cited by 11 (3 self) - Add to MetaCart
Access latency to secondary storage devices is frequently a limiting factor in computer system performance. New storage technologies promise to provide greater storage densities at lower latencies than is currently obtainable with hard disk drives. MEMS-based storage devices use orthogonal magnetic or physical recording techniques and thousands of simultaneously active MEMS-based read-write tips to provide high-density low-latency non-volatile storage. These devices promise seek times 10--20 times faster than hard drives, storage densities 10 times greater, and power consumption an order of magnitude lower. Previous research has examined data layout and request ordering algorithms that are analogs of those developed for hard drives. We present an analytical model of MEMS device performance that motivates a computationally simple MEMS-based request scheduling algorithm called ZSPTF, which has average response times comparable to Shortest Positioning Time First (SPTF) but with response time variability comparable to Circular Scan (C-SCAN).

Citation Context

...mmler and Wilkes [14] developed an accurate disk drive model, which has since been used to study disk seek algorithms. DiskSim is another storage simulator that has been used to model system behavior [2], and has been adapted to include MEMS devices. Recently, there has been interest in modeling the behavior of MEMS storage devices. Griffin, Schlosser, Ganger, and Nagle have published extensively on ...
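
For reference, the SPTF baseline that ZSPTF is compared against simply serves whichever pending request has the smallest estimated positioning time from the current head location. A bare-bones sketch with a stand-in distance cost (not a MEMS or disk positioning model):

    # Greedy SPTF ordering with a simple absolute-distance cost.
    def sptf_order(pending, start=0):
        pos, order = start, []
        remaining = list(pending)
        while remaining:
            nxt = min(remaining, key=lambda target: abs(target - pos))
            remaining.remove(nxt)
            order.append(nxt)
            pos = nxt
        return order

    print(sptf_order([120, 15, 500, 130], start=100))   # -> [120, 130, 15, 500]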
