Results 1 - 10 of 69
Write Off-Loading: Practical Power Management for Enterprise Storage
"... In enterprise data centers power usage is a problem impacting server density and the total cost of ownership. Storage uses a significant fraction of the power budget and there are no widely deployed power-saving solutions for enterprise storage systems. The traditional view is that enterprise worklo ..."
Abstract
-
Cited by 134 (9 self)
- Add to MetaCart
(Show Context)
In enterprise data centers power usage is a problem impacting server density and the total cost of ownership. Storage uses a significant fraction of the power budget and there are no widely deployed power-saving solutions for enterprise storage systems. The traditional view is that enterprise workloads make spinning disks down ineffective because idle periods are too short. We analyzed block-level traces from 36 volumes in an enterprise data center for one week and concluded that significant idle periods exist, and that they can be further increased by modifying the read/write patterns using write off-loading. Write off-loading allows write requests on spun-down disks to be temporarily redirected to persistent storage elsewhere in the data center. The key challenge is doing this transparently and efficiently at the block level, without sacrificing consistency or failure resilience. We describe our write off-loading design and implementation that achieves these goals. We evaluate it by replaying portions of our traces on a rack-based testbed. Results show that just spinning disks down when idle saves 28–36% of energy, and write off-loading further increases the savings to 45–60%.
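The mechanism lends itself to a short illustration. The sketch below is a hypothetical rendering of the write off-loading idea, not the authors' implementation: writes to a spun-down volume are redirected to a remote persistent log and reclaimed when the disk next spins up. All class and method names are invented for illustration.

```python
# Hypothetical sketch of the write off-loading idea: while the disk is spun
# down, writes go to a persistent log elsewhere and reads of off-loaded blocks
# are served from there; everything is reclaimed on the next spin-up.
# All names and interfaces here are invented for illustration.
import time

class OffloadingVolume:
    def __init__(self, disk, remote_log, idle_threshold_s=60.0):
        self.disk = disk                  # local block device (assumed API)
        self.remote_log = remote_log      # persistent storage elsewhere
        self.idle_threshold_s = idle_threshold_s
        self.last_io = time.monotonic()
        self.spun_down = False
        self.offloaded = {}               # block -> latest off-loaded data

    def write(self, block, data):
        self.last_io = time.monotonic()
        if self.spun_down:
            self.remote_log.append(block, data)   # persist remotely first
            self.offloaded[block] = data          # keep reads consistent
        else:
            self.disk.write(block, data)

    def read(self, block):
        self.last_io = time.monotonic()
        if block in self.offloaded:               # newest copy lives remotely
            return self.offloaded[block]
        if self.spun_down:                        # on-disk data forces spin-up
            self.spin_up()
        return self.disk.read(block)

    def maybe_spin_down(self):
        idle = time.monotonic() - self.last_io
        if not self.spun_down and idle > self.idle_threshold_s:
            self.disk.spin_down()
            self.spun_down = True

    def spin_up(self):
        self.disk.spin_up()
        for block, data in self.offloaded.items():  # reclaim off-loaded blocks
            self.disk.write(block, data)
        self.offloaded.clear()
        self.spun_down = False
```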
Reducing Energy Consumption of Disk Storage Using Power-Aware Cache Management
- In Proceedings of the International Symposium on High-Performance Computer Architecture (HPCA), February 2004
"... User News) [32], today's data centers have power require-ments that range from 75 W/ft ..."
Abstract
-
Cited by 103 (7 self)
- Add to MetaCart
(Show Context)
"... User News) [32], today's data centers have power requirements that range from 75 W/ft ..."
Full-system power analysis and modeling for server environments
- In Workshop on Modeling, Benchmarking and Simulation (MOBS), 2006
"... Abstract — The increasing costs of power delivery and cooling, as well as the trend toward higher-density computer systems, have created a growing demand for better power management in server environments. Despite the increasing interest in this issue, little work has been done in quantitatively und ..."
Abstract
-
Cited by 75 (1 self)
- Add to MetaCart
(Show Context)
The increasing costs of power delivery and cooling, as well as the trend toward higher-density computer systems, have created a growing demand for better power management in server environments. Despite the increasing interest in this issue, little work has been done in quantitatively understanding power consumption trends and developing simple yet accurate models to predict full-system power. We study the component-level power breakdown and variation, as well as temporal workload-specific power consumption of an instrumented power-optimized blade server. Using this analysis, we examine the validity of prior ad hoc approaches to understanding power breakdown and quantify several interesting trends important for power modeling and management in the future. We also introduce Mantis, a non-intrusive method for modeling full-system power consumption and providing real-time power prediction. Mantis uses a one-time calibration phase to generate a model by correlating AC power measurements with user-level system utilization metrics. We experimentally validate the model on two server systems with drastically different power footprints and characteristics (a low-end blade and a high-end compute-optimized server) using a variety of workloads. Mantis provides power estimates with high accuracy for both overall and temporal power consumption, making it a valuable tool for power-aware scheduling and analysis.
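The calibration step described above (correlating AC power with utilization metrics) can be pictured as a simple linear fit. A minimal sketch, assuming an ordinary least-squares model over a handful of placeholder utilization metrics; the feature set and numbers are not the paper's:

```python
# Sketch of a Mantis-style one-time calibration: fit a linear model mapping
# user-level utilization metrics to measured AC power. The metric set, sample
# values and least-squares fit are placeholders, not the paper's exact method.
import numpy as np

# One row per calibration sample: [cpu_util, mem_util, disk_util, net_util]
utilization = np.array([
    [0.05, 0.10, 0.02, 0.01],
    [0.50, 0.30, 0.10, 0.05],
    [0.90, 0.60, 0.40, 0.20],
    [0.20, 0.15, 0.80, 0.10],
    [0.70, 0.50, 0.20, 0.60],
])
measured_power_w = np.array([152.0, 198.0, 251.0, 190.0, 233.0])  # AC meter

# Intercept column lets the model capture idle (utilization-independent) power.
X = np.hstack([np.ones((utilization.shape[0], 1)), utilization])
coeffs, *_ = np.linalg.lstsq(X, measured_power_w, rcond=None)

def predict_power(cpu, mem, disk, net):
    """Real-time full-system power prediction from the calibrated model."""
    return float(coeffs @ np.array([1.0, cpu, mem, disk, net]))

print(round(predict_power(0.7, 0.4, 0.2, 0.1), 1), "W (estimated)")
```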
Energy-efficiency and storage flexibility in the Blue File System
- In Proceedings of the 6th Symposium on Operating Systems Design and Implementation, 2004
"... A fundamental vision driving pervasive computing research is access to personal and shared data anywhere at anytime. In many ways, this vision is close to being realized. Wireless networks such as 802.11 offer connectivity to small, mobile devices. Portable storage, such as mobile disks and USB keyc ..."
Abstract
-
Cited by 67 (13 self)
- Add to MetaCart
(Show Context)
A fundamental vision driving pervasive computing research is access to personal and shared data anywhere at any time. In many ways, this vision is close to being realized. Wireless networks such as 802.11 offer connectivity to small, mobile devices. Portable storage, such as mobile disks and USB keychains, lets users carry several gigabytes of data in their pockets. Yet, at least three substantial barriers to pervasive data access remain. First, power-hungry network and storage devices tax the limited battery capacity of mobile computers. Second, the danger of viewing stale data or making inconsistent updates grows as objects are replicated across more computers and portable storage devices. Third, mobile data access performance can suffer due to variable storage access times caused by dynamic power management, mobility, and use of heterogeneous storage devices. To overcome these barriers, we have built a new distributed file system called BlueFS. Compared to the Coda file system, BlueFS reduces file system energy usage by up to 55% and provides up to 3 times faster access to data replicated on portable storage.
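The abstract does not spell out the mechanism, but one way to picture overcoming the third barrier is energy-aware replica selection. The sketch below is an interpretation under that assumption, with invented device costs, and is not BlueFS's actual policy or interface:

```python
# Interpretation sketch: pick the replica whose device offers the cheapest
# combined energy/latency cost, paying a wake-up penalty for powered-down
# devices. Device names, costs and weighting are invented, not BlueFS's policy.
def pick_replica(devices, energy_weight=0.5):
    def cost(dev):
        wake = 0.0 if dev["powered_on"] else dev["wakeup_s"]
        latency = wake + dev["access_s"]
        return energy_weight * dev["energy_j"] + (1 - energy_weight) * latency
    return min(devices, key=cost)

replicas = [
    {"name": "local-disk", "powered_on": False, "wakeup_s": 5.0,
     "access_s": 0.010, "energy_j": 2.5},
    {"name": "usb-flash", "powered_on": True, "wakeup_s": 0.0,
     "access_s": 0.002, "energy_j": 0.1},
    {"name": "file-server", "powered_on": True, "wakeup_s": 0.0,
     "access_s": 0.020, "energy_j": 0.8},
]
print(pick_replica(replicas)["name"])   # -> "usb-flash" with these numbers
```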
Virtual Machine Power Metering and Provisioning
"... Virtualization is often used in cloud computing platforms for its several advantages in efficiently managing resources. However, virtualization raises certain additional challenges, and one of them is lack of power metering for virtual machines (VMs). Power management requirements in modern data cen ..."
Abstract
-
Cited by 67 (5 self)
- Add to MetaCart
(Show Context)
Virtualization is often used in cloud computing platforms for its several advantages in efficiently managing resources. However, virtualization raises certain additional challenges, and one of them is the lack of power metering for virtual machines (VMs). Power management requirements in modern data centers have led to most new servers providing power usage measurement in hardware, and alternate solutions exist for older servers using circuit- and outlet-level measurements. However, VM power cannot be measured purely in hardware. We present a solution for VM power metering, named Joulemeter. We build power models to infer power consumption from resource usage at runtime and identify the challenges that arise when applying such models for VM power metering. We show how existing instrumentation in server hardware and hypervisors can be used to build the required power models on real platforms with low error. Our approach is designed to operate with extremely low runtime overhead while providing practically useful accuracy. We illustrate the use of the proposed metering capability for VM power capping, a technique to reduce power provisioning costs in data centers. Experiments are performed on server traces from several thousand production servers, hosting Microsoft’s real-world applications such as Windows Live Messenger. The results show that not only does VM power metering allow virtualized data centers to achieve the same savings that non-virtualized data centers achieved through physical server power capping, but also that it enables further savings in provisioning costs with virtualization.
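Per-VM power metering as described above amounts to applying a power model to hypervisor-visible resource counters. A toy sketch under that reading; the coefficients, counters, and the even split of idle power are assumptions, not Joulemeter's actual model:

```python
# Toy sketch of VM power metering: apply per-resource power model coefficients
# to each VM's hypervisor-visible usage counters. The coefficients, counters
# and the even split of idle power are assumptions, not Joulemeter's model.
IDLE_POWER_W = 120.0      # static server power, shared across resident VMs
CPU_W_PER_UTIL = 80.0     # dynamic watts at 100% CPU utilization
DISK_W_PER_MBPS = 0.05    # dynamic watts per MB/s of disk traffic

def vm_power(stats, n_vms):
    dynamic = (CPU_W_PER_UTIL * stats["cpu_util"]
               + DISK_W_PER_MBPS * stats["disk_mbps"])
    return IDLE_POWER_W / n_vms + dynamic

vms = {
    "vm-a": {"cpu_util": 0.60, "disk_mbps": 40.0},
    "vm-b": {"cpu_util": 0.10, "disk_mbps": 5.0},
}
for name, stats in vms.items():
    print(name, round(vm_power(stats, len(vms)), 1), "W")
```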
Energy management for hypervisor-based virtual machines
- In Proceedings of the 2007 USENIX Annual Technical Conference (ATC’07), 2007
"... Current approaches to power management are based on operating systems with full knowledge of and full control over the underlying hardware; the distributed nature of multi-layered virtual machine environments renders such approaches insufficient. In this paper, we present a novel framework for energ ..."
Abstract
-
Cited by 58 (2 self)
- Add to MetaCart
(Show Context)
Current approaches to power management are based on operating systems with full knowledge of and full control over the underlying hardware; the distributed nature of multi-layered virtual machine environments renders such approaches insufficient. In this paper, we present a novel framework for energy management in modular, multi-layered operating system structures. The framework provides a unified model to partition and distribute energy, and mechanisms for energy-aware resource accounting and allocation. As a key property, the framework explicitly takes the recursive energy consumption into account, which is spent, e.g., in the virtualization layer or subsequent driver components. Our prototypical implementation targets hypervisor-based virtual machine systems and comprises two components: a host-level subsystem, which controls machine-wide energy constraints and enforces them among all guest OSes and service components, and, complementary, an energy-aware guest operating system, capable of fine-grained application-specific energy management. Guest-level energy management thereby relies on effective virtualization of physical energy effects provided by the virtual machine monitor. Experiments with CPU and disk devices and an external data acquisition system demonstrate that our framework accurately controls and stipulates the power consumption of individual hardware devices, both for energy-aware and energy-unaware guest operating systems.
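The accounting idea, charging each guest both its direct device energy and a share of the recursive energy spent on its behalf in the virtualization layer, can be sketched as follows. Names and figures are illustrative, not the paper's implementation:

```python
# Sketch of energy-aware accounting across guests: each guest is charged its
# directly measured device energy plus a share of the virtualization-layer
# ("recursive") energy proportional to the driver work done on its behalf.
# All names and figures are illustrative, not the paper's implementation.
def charge_guests(direct_joules, hypervisor_joules, driver_work_share):
    charged = {}
    for guest, direct in direct_joules.items():
        recursive = hypervisor_joules * driver_work_share.get(guest, 0.0)
        charged[guest] = direct + recursive
    return charged

direct = {"guest1": 42.0, "guest2": 17.0}   # joules attributed per guest
shares = {"guest1": 0.7, "guest2": 0.3}     # fraction of driver work per guest
print(charge_guests(direct, hypervisor_joules=10.0, driver_work_share=shares))
```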
Disk Drive Roadmap from the Thermal Perspective: A Case for Dynamic Thermal Management
- In Proceedings of the International Symposium on Computer Architecture (ISCA), 2005
"... The importance of pushing the performance envelope of disk drives continues to grow, not just in the server market but also in numerous consumer electronics products. One of the most fundamental factors impacting disk drive design is the heat dissipation and its effect on drive reliability, since hi ..."
Abstract
-
Cited by 47 (11 self)
- Add to MetaCart
(Show Context)
The importance of pushing the performance envelope of disk drives continues to grow, not just in the server market but also in numerous consumer electronics products. One of the most fundamental factors impacting disk drive design is heat dissipation and its effect on drive reliability, since high temperatures can cause off-track errors, or even head crashes. Until now, drive manufacturers have continued to meet the 40% annual growth target of the internal data rate (IDR) by increasing RPMs and shrinking platter sizes, both of which have counteracting effects on the heat dissipation within a drive. As this paper will show, we are getting to a point where it is becoming very difficult to stay on this roadmap. This paper presents an integrated disk drive model that captures the close relationships between capacity, performance and thermal characteristics over time. Using this model, we quantify the drop-off in IDR growth rates over the next decade if we are to adhere to the thermal envelope of drive design. We present two mechanisms for buying back some of this IDR loss with Dynamic Thermal Management (DTM). The first DTM technique exploits any available thermal slack, between what the drive was intended to support and the currently lower operating temperature, to ramp up the RPM. The second DTM technique assumes that the drive is only designed for average-case behavior, thus allowing higher RPMs than the thermal envelope, and employs dynamic throttling of disk drive activities to remain within this envelope.
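The two DTM techniques reduce to a thermal control loop: use thermal slack to raise RPM, and throttle disk activity when the drive nears its thermal envelope. A hedged sketch of such a loop, with a hypothetical drive interface and thresholds:

```python
# Hypothetical control loop for the two DTM techniques described above.
# THERMAL_ENVELOPE_C, SLACK_MARGIN_C and the drive interface are assumptions.
THERMAL_ENVELOPE_C = 55.0   # maximum temperature the drive must stay under
SLACK_MARGIN_C = 5.0        # slack required before ramping RPM up

def dtm_step(drive):
    temp = drive.read_temperature()
    if temp < THERMAL_ENVELOPE_C - SLACK_MARGIN_C:
        # Technique 1: spend available thermal slack on a higher RPM (and IDR).
        drive.set_rpm(min(drive.rpm + 500, drive.max_rpm))
        drive.set_throttle(0.0)
    elif temp >= THERMAL_ENVELOPE_C:
        # Technique 2: the drive was designed for the average case, so throttle
        # request activity until it drops back inside the envelope.
        drive.set_throttle(min(drive.throttle + 0.1, 0.9))
    # In between: hold the current operating point.
```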
Performance-directed energy management for main memory and disks
- In Proceedings of the International Conference on Architectural Support for Programming Languages and Operating Systems, 2004
"... Much research has been conducted on energy management for memory and disks. Most studies use control algorithms that dynamically transition devices to low power modes after they are idle for a certain threshold period of time. The control algorithms used in the past have two major limitations. First ..."
Abstract
-
Cited by 43 (3 self)
- Add to MetaCart
Much research has been conducted on energy management for memory and disks. Most studies use control algorithms that dynamically transition devices to low-power modes after they are idle for a certain threshold period of time. The control algorithms used in the past have two major limitations. First, they require painstaking, application-dependent manual tuning of their thresholds to achieve energy savings without significantly degrading performance. Second, they do not provide performance guarantees. In one case, they slowed down an application by 835%! This paper addresses these two limitations for both memory and disks, making memory/disk energy-saving schemes practical enough to use in real systems. Specifically, we make three contributions: (1) We propose a technique that provides a performance guarantee for control algorithms. We show that our method works well for ...
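One way to read the performance guarantee is as a slowdown budget that gates low-power transitions. The sketch below follows that interpretation and is not the paper's actual control algorithm; the penalty values and bookkeeping are invented:

```python
# Toy sketch of a performance guarantee wrapped around a threshold-based
# spin-down policy: low-power transitions are allowed only while the delay
# already charged to the device stays within a slowdown budget. This is an
# interpretation of the idea, not the paper's actual control algorithm.
class GuardedSpinDown:
    def __init__(self, max_slowdown=0.05, spin_up_penalty_s=6.0):
        self.max_slowdown = max_slowdown      # e.g. at most 5% slower overall
        self.spin_up_penalty_s = spin_up_penalty_s
        self.baseline_busy_s = 0.0            # time spent serving requests
        self.delay_charged_s = 0.0            # delay added by power management

    def record_request(self, service_time_s, hit_spun_down_disk):
        self.baseline_busy_s += service_time_s
        if hit_spun_down_disk:
            self.delay_charged_s += self.spin_up_penalty_s

    def may_spin_down(self):
        # Pessimistically assume the next request will pay a spin-up penalty.
        projected = self.delay_charged_s + self.spin_up_penalty_s
        return projected <= self.max_slowdown * max(self.baseline_busy_s, 1e-9)
```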
Understanding the Performance-Temperature Interactions in Disk I/O of Server Workloads
- In Proceedings of HPCA, 2006
"... This paper describes the first infrastructure for integrated studies of the performance and thermal behavior of storage systems. Using microbenchmarks running on this infrastructure, we first gain insight into how I/O characteristics can affect the temperature of disk drives. We use this analysis to ..."
Abstract
-
Cited by 38 (8 self)
- Add to MetaCart
(Show Context)
This paper describes the first infrastructure for integrated studies of the performance and thermal behavior of storage systems. Using microbenchmarks running on this infrastructure, we first gain insight into how I/O characteristics can affect the temperature of disk drives. We use this analysis to identify the most promising, yet simple, "knobs" for temperature optimization of high-speed disks, which can be implemented on existing disks. We then analyze the thermal profiles of real workloads that use such disk drives in their storage systems, pointing out which knobs are most useful for dynamic thermal management when pushing the performance envelope.
Power-Aware Storage Cache Management
- IEEE Transactions on Computers, 2005
"... ..."
(Show Context)