Results 11 - 20 of 41
Beyond Power Proportionality: Designing Power-Lean Cloud Storage
, 2011
"... We present a power-lean storage system, where racks of servers, or even entire data center shipping containers, can be powered down to save energy. We show that racks and containers are more than the sum of their servers, and demonstrate the feasibility of designing a storage system that powers them ..."
Abstract
-
Cited by 1 (1 self)
- Add to MetaCart
We present a power-lean storage system, where racks of servers, or even entire data center shipping containers, can be powered down to save energy. We show that racks and containers are more than the sum of their servers, and demonstrate the feasibility of designing a storage system that powers them up and down on demand; further, we show that such a system would save an order of magnitude more energy than current disk-based power-proportional storage systems. Our simulation results using file system traces from the Internet Archive show over 44% energy savings, a 5x improvement over disk-based power management systems, without performance impact. We explore the tradeoffs in choosing the right unit to power off/on, and present an automated framework to compute the optimal power management unit for different scenarios.
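As a rough illustration of the unit-granularity tradeoff this abstract describes, the sketch below compares the power saved by powering down individual disks, whole servers, or a whole rack. All wattage figures and the rack configuration are hypothetical assumptions for illustration, not numbers from the paper.

```python
# Hypothetical sketch: energy saved per rack at different power-off
# granularities. All wattage figures are illustrative assumptions.

DISK_W = 10               # assumed idle power of one disk
SERVER_OVERHEAD_W = 150   # assumed per-server overhead (CPU, fans, PSU losses)
RACK_OVERHEAD_W = 800     # assumed rack-level overhead (switches, cooling share)

def savings(unit, n_servers=40, disks_per_server=12):
    """Watts saved per rack by powering the chosen unit down."""
    disk_w = n_servers * disks_per_server * DISK_W
    if unit == "disk":
        return disk_w                                   # spin down disks only
    if unit == "server":
        return disk_w + n_servers * SERVER_OVERHEAD_W   # server overhead too
    if unit == "rack":
        # a rack is "more than the sum of its servers": its shared
        # overhead also goes away when the whole rack is powered off
        return disk_w + n_servers * SERVER_OVERHEAD_W + RACK_OVERHEAD_W
    raise ValueError(unit)

for unit in ("disk", "server", "rack"):
    print(unit, savings(unit))
```

Under these assumed figures, coarser units always save strictly more, which is why choosing the power management unit matters.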
MINT: A Reliability Modeling Framework for Energy-Efficient Parallel Disk Systems
"... Abstract—The Popular Disk Concentration (PDC) technique and the Massive Array of Idle Disks (MAID) technique are two effective energy conservation schemes for parallel disk systems. The goal of PDC and MAID is to skew I/O load toward a few disks so that other disks can be transitioned to low power s ..."
Abstract
-
Cited by 1 (1 self)
- Add to MetaCart
(Show Context)
The Popular Disk Concentration (PDC) technique and the Massive Array of Idle Disks (MAID) technique are two effective energy conservation schemes for parallel disk systems. The goal of PDC and MAID is to skew I/O load toward a few disks so that other disks can be transitioned to low power states to conserve energy. I/O load skewing techniques like PDC and MAID inherently affect the reliability of parallel disks, because disks storing popular data tend to have higher failure rates than disks storing cold data. To study the reliability impacts of energy-saving techniques on parallel disk systems, we develop a mathematical modeling framework called MINT. We first model the behaviors of parallel disks coupled with power management optimization policies. We use data access patterns as input parameters to estimate each disk’s utilization and power-state transitions. Then, we derive each disk’s reliability in terms of annual failure rate from the disk’s utilization, age, operating temperature, and power-state transition frequency. Next, we calculate the reliability of PDC and MAID parallel disk systems in accordance with the annual failure rate of each disk in the systems. Finally, we use a real-world trace to validate our MINT model. Validation results show that the behaviors of PDC and MAID as modeled by MINT follow trends similar to those observed in the real world.
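The abstract derives a per-disk annual failure rate (AFR) from utilization, age, temperature, and power-state transition frequency. A minimal sketch of that style of model is below; the base rate and all multipliers are illustrative assumptions, not MINT's fitted parameters.

```python
# Sketch of a MINT-style per-disk annual failure rate estimate.
# BASE_AFR and every multiplier below are illustrative assumptions.

BASE_AFR = 0.02  # assumed baseline annual failure rate (2%)

def disk_afr(utilization, age_years, temp_c, transitions_per_day):
    afr = BASE_AFR
    afr *= 1.0 + 0.5 * utilization             # busier disks fail more often
    afr *= 1.0 + 0.1 * max(age_years - 1, 0)   # wear beyond the first year
    afr *= 1.0 + 0.02 * max(temp_c - 35, 0)    # penalty above 35 C
    afr *= 1.0 + 0.001 * transitions_per_day   # spin-up/down stress
    return afr

# Load skewing (PDC/MAID): a hot, always-on disk vs. a cold,
# frequently power-cycled disk
hot = disk_afr(0.9, 3, 45, 2)
cold = disk_afr(0.1, 3, 40, 48)
print(hot, cold)
```

Even this toy model captures the tension the paper studies: skewing load raises the hot disks' AFR via utilization and temperature, while the cold disks pay for their power cycling via transition frequency.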
Centaur: Host-side SSD Caching for Storage Performance Control
"... Abstract—Host-side SSD caches represent a powerful knob for improving and controlling storage performance and improve performance isolation. We present Centaur, as a host-side SSD caching solution that uses cache sizing as a control knob to achieve storage performance goals. Centaur implements dynam ..."
Abstract
-
Cited by 1 (1 self)
- Add to MetaCart
(Show Context)
Host-side SSD caches represent a powerful knob for improving and controlling storage performance and for strengthening performance isolation. We present Centaur, a host-side SSD caching solution that uses cache sizing as a control knob to achieve storage performance goals. Centaur implements dynamically partitioned per-VM caches with per-partition local replacement to provide lower cache miss rates, better performance isolation, and performance control for VM workloads. It uses SSD cache sizing as a universal knob for meeting a variety of workload-specific goals, including per-VM latency and IOPS reservations, proportional-share fairness, and aggregate optimizations such as minimizing the average latency across VMs. We implemented Centaur for the VMware ESX hypervisor. With Centaur, times for simultaneously booting 28 virtual desktops improve by 42% relative to a non-caching system and by 18% relative to a unified caching system. Centaur also implements per-VM shares for latency with less than 5% error when running microbenchmarks, and enforces latency and IOPS reservations on OLTP workloads with less than 10% error.
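One common way to use cache sizing as a control knob, as the abstract describes, is to divide a fixed SSD cache among VMs by proportional shares after satisfying per-VM reservations. The sketch below is a generic version of that idea; the function name, inputs, and numbers are hypothetical and do not reflect Centaur's actual controller.

```python
# Sketch: SSD cache sizing as a control knob. Reservations are granted
# first; the remaining capacity is split by proportional shares.
# All names and numbers are hypothetical.

def partition_cache(total_gb, vms):
    """vms: list of (name, reserved_gb, shares). Returns {name: size_gb}."""
    reserved = sum(r for _, r, _ in vms)
    if reserved > total_gb:
        raise ValueError("reservations exceed cache capacity")
    spare = total_gb - reserved
    total_shares = sum(s for _, _, s in vms)
    return {name: r + spare * s / total_shares for name, r, s in vms}

# 100 GB cache: vm2 reserves more and holds 3x the shares of vm1
alloc = partition_cache(100, [("vm1", 10, 1), ("vm2", 20, 3)])
print(alloc)
```

Resizing a VM's partition then indirectly moves its miss rate, and hence its latency and IOPS, which is what makes a single sizing knob usable for several different per-VM goals.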
To ARC or not to ARC
- In Proc. of USENIX HotStorage
, 2015
"... Cache replacement algorithms have focused on man-aging caches that are in the datapath. In datapath caches, every cache miss results in a cache update. Cache up-dates are expensive because they induce cache insertion and cache eviction overheads which can be detrimental to both cache performance and ..."
Abstract
-
Cited by 1 (1 self)
- Add to MetaCart
(Show Context)
Cache replacement algorithms have focused on managing caches that are in the datapath. In datapath caches, every cache miss results in a cache update. Cache updates are expensive because they induce cache insertion and cache eviction overheads which can be detrimental to both cache performance and cache device lifetime. Non-datapath caches, such as host-side flash caches, allow the flexibility of not having to update the cache on each miss. We propose the multi-modal adaptive replacement cache (mARC), a new cache replacement algorithm that extends the adaptive replacement cache (ARC) algorithm for non-datapath caches. Our initial trace-driven simulation experiments suggest that mARC improves cache performance over ARC while significantly reducing the number of cache updates for two sets of storage I/O workloads from MSR Cambridge and FIU.
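The key freedom of a non-datapath cache, as described above, is that a miss need not trigger an insertion. The sketch below shows that idea with a simple "admit on second miss" heuristic over an LRU cache; this admission policy is purely illustrative and is not the actual mARC algorithm.

```python
# Sketch of a non-datapath cache: misses do not force insertions.
# A block is only admitted after missing twice, cutting cache updates
# (flash writes). This heuristic is illustrative, not mARC itself.

from collections import OrderedDict

class SelectiveCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()   # LRU order over cached blocks
        self.seen_misses = set()     # blocks that have missed once
        self.updates = 0             # insertions + evictions performed

    def access(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)      # refresh LRU position
            return "hit"
        if block in self.seen_misses:          # second miss: admit it
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False) # evict LRU block
                self.updates += 1
            self.cache[block] = True
            self.updates += 1
            self.seen_misses.discard(block)
        else:
            self.seen_misses.add(block)        # first miss: skip the update
        return "miss"

c = SelectiveCache(2)
for b in ["a", "a", "b", "b", "a"]:
    c.access(b)
print(c.updates)  # a and b each admitted once: 2 updates
```

A datapath cache would have performed an insertion on every one of the three misses here; the selective policy performs two, and the saving grows with the fraction of blocks that are touched only once.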
A Case for Virtualizing the Electric Utility in Cloud Data Centers (Position Paper)
"... Since energy-related costs make up an increasingly sig-nificant component of overall costs for data centers run by cloud providers, it is important that these costs be propagated to their tenants in ways that are fair and pro-mote workload modulation that is aligned with overall cost-efficacy. We ar ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
(Show Context)
Since energy-related costs make up an increasingly significant component of overall costs for data centers run by cloud providers, it is important that these costs be propagated to their tenants in ways that are fair and promote workload modulation that is aligned with overall cost-efficacy. We argue that there exists a big gap between how electric utilities charge data centers for their energy consumption (on the one hand) and the pricing interface exposed by cloud providers to their tenants (on the other). Whereas electric utilities employ complex features such as peak-based, time-varying, or tiered (load-dependent) pricing schemes, cloud providers charge tenants based on IT abstractions. This gap can create shortcomings such as unfairness in how tenants are charged and may also hinder overall cost-effective resource allocation. To overcome these shortcomings, we propose a novel idea of a virtual electric utility (VEU) that cloud providers should expose to individual tenants (in addition to their existing IT-based offerings). We discuss initial ideas underlying VEUs and challenges that must be addressed to turn them into a practical idea whose merits can be systematically explored.
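The gap the abstract describes can be made concrete with a toy tariff: two tenants drawing the same total energy can cost the utility very different amounts once peak demand charges apply, yet a flat per-kWh IT-style price treats them identically. All rates below are hypothetical.

```python
# Illustrative contrast between a flat IT-abstraction price and a
# utility-style tariff with a peak demand charge. Rates are hypothetical.

def flat_bill(kwh, rate=0.12):
    """Flat per-kWh pricing, as a cloud provider's IT-style charge."""
    return kwh * rate

def utility_bill(hourly_kw, energy_rate=0.08, demand_rate=15.0):
    """Energy charge on total kWh plus a charge on the peak kW drawn."""
    return sum(hourly_kw) * energy_rate + max(hourly_kw) * demand_rate

steady = [10] * 24           # 240 kWh drawn evenly over a day
bursty = [2] * 23 + [194]    # 240 kWh with one sharp peak hour

print(flat_bill(240), utility_bill(steady), utility_bill(bursty))
```

Both profiles pay the same flat bill, but the bursty profile's utility bill is far larger, which is exactly the unfairness (and the missed incentive to flatten load) that a virtual electric utility could expose to tenants.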
GreenDM: A Versatile Tiering Hybrid Drive for the Trade-Off Evaluation of Performance, Energy, and Endurance
, 2014
"... this dissertation. ..."
Quantitative Estimation of the Performance Delay with Propagation Effects in Disk Power Savings *
"... Abstract The biggest power consumer in data centers is the storage system. Coupled with the fact that disk drives are lowly utilized, disks offer great opportunities for power savings, but any power saving action should be transparent to user traffic. Estimating correctly the performance impact of ..."
Abstract
- Add to MetaCart
(Show Context)
The biggest power consumer in data centers is the storage system. Because disk drives are typically lightly utilized, disks offer great opportunities for power savings, but any power-saving action should be transparent to user traffic. Correctly estimating the performance impact of power savings thus becomes crucial to their effectiveness. Here, we develop a methodology that quantitatively estimates the performance impact due to power savings, taking into consideration propagation delay effects. Experiments driven by production server traces verify the correctness and efficiency of the proposed analytical methodology.
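The propagation effect mentioned above can be illustrated with a toy FIFO model: a request that arrives while a disk is spun down must wait for spin-up, and that wait propagates to requests queued behind it. The timing constants below are illustrative assumptions, not the paper's measured values or its actual analytical model.

```python
# Toy illustration of delay propagation from disk power savings:
# the disk starts spun down; the first arrival triggers spin-up, and
# queued requests inherit part of that delay. Timings are assumptions.

SPINUP_S = 6.0      # assumed time to spin the disk back up
SERVICE_S = 0.01    # assumed per-request service time

def delays(arrivals):
    """Per-request delay (s) for FIFO requests to a disk spun down at t=0."""
    ready = None     # time the disk becomes ready (set by first arrival)
    free = 0.0       # time the disk finishes its current request
    out = []
    for t in arrivals:
        if ready is None:
            ready = t + SPINUP_S           # first arrival triggers spin-up
        start = max(t, ready, free)        # wait for spin-up and queue ahead
        out.append(start - t)              # delay beyond immediate service
        free = start + SERVICE_S
    return out

print(delays([0.0, 1.0, 2.0, 10.0]))
```

The first request absorbs the full spin-up delay, the next two see it shrink by their own arrival offsets (the propagation effect), and a late arrival sees none, which is the quantity such a methodology must estimate before deciding a spin-down is safe.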
Power-Saving Approaches and Tradeoffs for Storage Systems
"... Power is becoming a major concern when designing storage systems for platforms ranging from mobile devices to data centers. While many solutions exist, different solutions make very different tradeoffs between energy savings and storage performance, capacity, reliability, cost of ownership, etc. Thi ..."
Abstract
- Add to MetaCart
(Show Context)
Power is becoming a major concern when designing storage systems for platforms ranging from mobile devices to data centers. While many solutions exist, different solutions make very different tradeoffs between energy savings and storage performance, capacity, reliability, cost of ownership, etc. This survey walks through layers of the legacy storage stack, exploring tradeoffs made by a representative set of energy-efficient storage approaches. The survey also points out architectural implications of implementing energy-efficient solutions at various storage layers.
Storage Power Optimizations for Client Devices and Data Centers
"... Storage devices are essential to all computing systems that store user data from desktops, to notebooks and Ultrabooks ™ to data centers. Hard disk drives (HDDs) or solid state drives (SSDs) are today’s most popular storage solutions. Active power for storage devices has significant impact on the ba ..."
Abstract
- Add to MetaCart
(Show Context)
Storage devices are essential to all computing systems that store user data, from desktops to notebooks and Ultrabooks™ to data centers. Hard disk drives (HDDs) and solid state drives (SSDs) are today’s most popular storage solutions. Active power for storage devices has a significant impact on the battery life of client devices. In the data center, the amount of energy required for thousands of HDDs/SSDs competes with that of small cities. In this article, we propose a range of storage power optimization techniques that apply equally well to client devices and data centers. A number of power-conscious optimization techniques along with multiple case studies are discussed. On notebook/Ultrabook platforms our techniques achieve power savings of ~420 mW for HDD/SSD power during a media playback scenario. Furthermore, for an “always connected” scenario, our techniques achieve 7.5x higher retrieval rates with a 5 mW storage power budget. For data center storage, we demonstrate 61.9 percent power savings from using a swap policy optimization strategy.