Results 1 - 10 of 60
A Taxonomy and Survey of Energy-Efficient Data Centers and Cloud Computing Systems
"... Traditionally, the development of computing systems has been focused on performance improvements driven by the demand of applications from consumer, scientific and business domains. However, the ever increasing energy consumption of computing systems has started to limit further performance growth d ..."
Abstract
-
Cited by 58 (4 self)
- Add to MetaCart
(Show Context)
Abstract: Traditionally, the development of computing systems has been focused on performance improvements driven by the demand of applications from consumer, scientific, and business domains. However, the ever-increasing energy consumption of computing systems has started to limit further performance growth due to overwhelming electricity bills and carbon dioxide footprints. Therefore, the goal of computer system design has shifted to power and energy efficiency. To identify open challenges in the area and facilitate future advancements, it is essential to synthesize and classify the research on power- and energy-efficient design conducted to date. In this work we discuss the causes and problems of high power and energy consumption, and present a taxonomy of energy-efficient design of computing systems covering the hardware, operating system, virtualization, and data center levels. We survey key works in the area and map them to our taxonomy to guide future design and development efforts. The chapter concludes with a discussion of advancements identified in energy-efficient computing and our vision of future research directions.
Integrating concurrency control and energy management in device drivers
In Proceedings of the 21st ACM SIGOPS Symposium on Operating Systems Principles (SOSP '07)
"... Abstract Energy management is a critical concern in wireless sensornets. Despite its importance, sensor network operating systems today provide minimal energy management support, requiring applications to explicitly manage system power states. To address this problem, we present ICEM, a device driv ..."
Abstract
-
Cited by 29 (5 self)
- Add to MetaCart
(Show Context)
Abstract: Energy management is a critical concern in wireless sensornets. Despite its importance, sensor network operating systems today provide minimal energy management support, requiring applications to explicitly manage system power states. To address this problem, we present ICEM, a device driver architecture that enables simple, energy-efficient wireless sensornet applications. The key insight behind ICEM is that the most valuable information an application can give the OS for energy management is its concurrency. Using ICEM, a low-rate sensing application requires only a single line of energy management code and has an efficiency within 1.6% of a hand-tuned implementation. ICEM's effectiveness questions the assumptions that sensornet applications must be responsible for all power management and that sensornets cannot have a standardized OS with a simple API.
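The abstract does not spell out ICEM's mechanics (the actual system is a TinyOS driver architecture), so the following Python sketch only illustrates the core idea under stated assumptions, with hypothetical names throughout: a driver that derives the device's power state from its count of outstanding requests, so the application never toggles power explicitly. That pending-request counter is exactly the concurrency information the abstract says applications should hand to the OS.

class DummyDevice:
    """Stand-in peripheral with explicit power control."""
    def power_on(self): print("device on")
    def power_off(self): print("device off")
    def execute(self, request): return f"done: {request}"

class EnergyAwareDriver:
    """Powers the device on at the first pending request and back off once
    the queue drains. Shown synchronously for brevity; in a split-phase
    sensornet stack, submit and completion would be separate events and
    the counter would track queued asynchronous operations."""
    def __init__(self, device):
        self.device = device
        self.pending = 0

    def submit(self, request):
        if self.pending == 0:
            self.device.power_on()       # first outstanding request: wake up
        self.pending += 1
        try:
            return self.device.execute(request)
        finally:
            self.pending -= 1
            if self.pending == 0:
                self.device.power_off()  # queue drained: safe to sleep

driver = EnergyAwareDriver(DummyDevice())
print(driver.submit("read sensor"))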
Data center evolution: A tutorial on state of the art, issues, and challenges
Computer Networks, 2009
"... Abstract Data centers form a key part of the infrastructure upon which a variety of information technology services are built. As data centers continue to grow in size and complexity, it is desirable to understand aspects of their design that are worthy of carrying forward, as well as existing or u ..."
Abstract
-
Cited by 22 (1 self)
- Add to MetaCart
(Show Context)
Abstract: Data centers form a key part of the infrastructure upon which a variety of information technology services are built. As data centers continue to grow in size and complexity, it is desirable to understand aspects of their design that are worthy of carrying forward, as well as existing or upcoming shortcomings and challenges that would have to be addressed. We envision the data center evolving from owned physical entities to potentially outsourced, virtualized, and geographically distributed infrastructures that still attempt to provide the same level of control and isolation that owned infrastructures do. We define a layered model for such data centers and provide a detailed treatment of the state of the art and emerging challenges in storage, networking, management, and power/thermal aspects.
Capturing performance knowledge for automated analysis
In Proceedings of the 2008 International Conference for High Performance Computing, Networking, Storage and Analysis (SC 2008)
"... ..."
(Show Context)
Comparison and analysis of greedy energy-efficient scheduling algorithms for computational grids
In Energy Aware Distributed Computing Systems, 2012
"... A computational grid is a distributed computational network, enabled with software that allows cooperation and the sharing of resources. The energy consumption of these large-scale distributed systems is an important problem. As our society becomes more technologically advanced, the size of these co ..."
Abstract
-
Cited by 7 (4 self)
- Add to MetaCart
(Show Context)
Abstract: A computational grid is a distributed computational network, enabled with software that allows cooperation and the sharing of resources. The energy consumption of these large-scale distributed systems is an important problem. As our society becomes more technologically advanced, the size and energy consumption of these computational grids continue to increase. In this paper, we study the problem of optimizing energy consumption and makespan by focusing on different techniques for scheduling tasks onto the computational grid. A computational grid is simulated using a wide range of task heterogeneity and size variety; the heuristics are run on the simulated grid and the results are compared extensively against each other.
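The abstract does not name the specific heuristics compared, so the sketch below is a generic greedy rule assumed for illustration only: each task is placed on the machine that minimizes a weighted sum of its resulting finish time and energy cost, which exposes the energy/makespan trade-off the paper studies.

def greedy_schedule(tasks, machines, alpha=0.5):
    """Assign each task (workload in cycles) to the machine minimizing a
    weighted sum of its finish time and energy cost.
    machines: list of dicts with 'speed' (cycles/s) and 'power' (watts).
    Returns (assignment, makespan, total_energy)."""
    finish = [0.0] * len(machines)
    total_energy = 0.0
    assignment = []
    for w in sorted(tasks, reverse=True):          # largest tasks first
        costs = []
        for i, m in enumerate(machines):
            t = w / m["speed"]                     # run time on machine i
            e = t * m["power"]                     # energy on machine i
            costs.append((alpha * (finish[i] + t) + (1 - alpha) * e, i, t, e))
        _, i, t, e = min(costs)
        finish[i] += t
        total_energy += e
        assignment.append(i)
    return assignment, max(finish), total_energy

# e.g., a fast power-hungry node vs. a slow frugal one
print(greedy_schedule([8, 5, 3, 2],
                      [{"speed": 2.0, "power": 10.0},
                       {"speed": 1.0, "power": 4.0}]))

Sweeping alpha from 0 to 1 traces out the energy-versus-makespan trade-off curve that such comparisons evaluate.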
Leveraging Data Promotion for Low Power D-NUCA Caches
"... D-NUCA caches are cache memories that, thanks to banked organization, broadcast search and promotion/demotion mechanism, are able to tolerate the increasing wire delay effects introduced by technology scaling. As a consequence, they will outperform conventional caches (UCA, Uniform Cache Architectur ..."
Abstract
-
Cited by 6 (2 self)
- Add to MetaCart
(Show Context)
Abstract: D-NUCA caches are cache memories that, thanks to their banked organization, broadcast search, and promotion/demotion mechanism, are able to tolerate the increasing wire delay effects introduced by technology scaling. As a consequence, they will outperform conventional caches (UCA, Uniform Cache Architectures) in future-generation cores. Due to the promotion/demotion mechanism, we observed that the distribution of hits across the ways of a D-NUCA cache varies across applications as well as across different execution phases within a single application. In this work, we show how such behavior can be leveraged to improve D-NUCA power efficiency as well as to decrease its access latency. In particular, we propose: 1) a new microarchitectural technique to reduce the static power consumption of a D-NUCA cache by dynamically adapting the number of active (i.e., powered-on) ways to the needs of the running application; our evaluation shows that a strong reduction of the average number of active ways (37.1%) is achievable without significantly affecting the IPC (-2.25%), leading to a reduction of the Energy Delay Product (EDP) of 30.9%; 2) a strategy to estimate the characteristic parameters of the proposed technique; and 3) an evaluation of the effectiveness of the proposed technique in a multicore environment.
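As a back-of-envelope consistency check of the reported figures (with one assumption on our part: treating the 2.25% IPC drop as a proportional increase in execution time), the implied energy reduction can be recovered from EDP = energy x delay:

ipc_drop = 0.0225        # reported IPC loss
edp_reduction = 0.309    # reported EDP reduction

delay_ratio = 1.0 / (1.0 - ipc_drop)    # D_new / D_old, ~1.023
edp_ratio = 1.0 - edp_reduction         # EDP_new / EDP_old
energy_ratio = edp_ratio / delay_ratio  # E_new / E_old, since EDP = E * D
print(f"implied energy reduction: {1.0 - energy_ratio:.1%}")  # ~32.5%

That is, a 30.9% EDP reduction alongside a ~2.3% slowdown implies roughly a 32% energy reduction.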
Engineering of software-intensive systems: State of the art and research challenges
In Software-Intensive Systems and New Computing Paradigms, Lecture Notes in Computer Science
"... ..."
(Show Context)
Stenström, P., et al.: Improving power efficiency of D-NUCA caches
ACM SIGARCH Computer Architecture News
"... D-NUCA caches are cache memories that, thanks to banked organization, broadcast search and promotion/demotion mechanism, are able to tolerate the increasing wire delay effects introduced by technology scaling. As a consequence, they will outperform conventional caches (UCA, Uniform Cache Architectur ..."
Abstract
-
Cited by 6 (4 self)
- Add to MetaCart
(Show Context)
Abstract: D-NUCA caches are cache memories that, thanks to their banked organization, broadcast search, and promotion/demotion mechanism, are able to tolerate the increasing wire delay effects introduced by technology scaling. As a consequence, they will outperform conventional caches (UCA, Uniform Cache Architectures) in future-generation cores. Due to the promotion/demotion mechanism, we have found that, in a D-NUCA cache, the distribution of hits across the ways varies across applications as well as across different execution phases within a single application. In this paper, we show how such behavior can be utilized to improve D-NUCA power efficiency as well as to decrease its access latency. In particular, we propose a new D-NUCA structure, called Way Adaptable D-NUCA cache, in which the number of active (i.e., powered-on) ways is dynamically adapted to the needs of the running application. Our initial evaluation shows that a consistent reduction of both the average number of active ways (42% on average) and the number of bank access requests (29% on average) is achieved without significantly affecting the IPC.
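The abstract does not give the adaptation policy's counters or thresholds, so the following is only a plausible reconstruction with hypothetical parameters: a way is powered off when the farthest active way receives a negligible share of hits (cheap, because promotion/demotion concentrates hits in the ways closest to the controller), and powered back on when that share grows.

def adapt_ways(hit_counts, active_ways, min_ways=2,
               off_threshold=0.02, on_threshold=0.10):
    """hit_counts: per-way hits in the last interval, index 0 = the way
    closest to the controller (where promotion concentrates hits).
    Returns the new number of active (powered-on) ways."""
    total = sum(hit_counts[:active_ways]) or 1
    last_share = hit_counts[active_ways - 1] / total
    if last_share < off_threshold and active_ways > min_ways:
        return active_ways - 1   # farthest way barely hit: power it off
    if last_share > on_threshold and active_ways < len(hit_counts):
        return active_ways + 1   # pressure on the last way: power one on
    return active_ways

# e.g., promotion has concentrated hits near the controller:
print(adapt_ways([900, 80, 15, 5], active_ways=4))  # -> 3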
On Effective Slack Reclamation in Task Scheduling for Energy Reduction
Journal of Information Processing Systems, 2009
"... Abstract: Power consumed by modern computer systems, particularly servers in data centers has al-most reached an unacceptable level. However, their energy consumption is often not justifiable when their utilization is considered; that is, they tend to consume more energy than needed for their comput ..."
Abstract
-
Cited by 5 (0 self)
- Add to MetaCart
(Show Context)
Abstract: Power consumed by modern computer systems, particularly servers in data centers, has almost reached an unacceptable level. However, their energy consumption is often not justifiable when their utilization is considered; that is, they tend to consume more energy than needed for their computing-related jobs. Task scheduling in distributed computing systems (DCSs) can play a crucial role in increasing utilization; this will lead to a reduction in energy consumption. In this paper, we address the problem of scheduling precedence-constrained parallel applications in DCSs, and present two energy-conscious scheduling algorithms. Our scheduling algorithms adopt dynamic voltage and frequency scaling (DVFS) to minimize energy consumption. DVFS, an efficient power management technology, has been increasingly integrated into many recent commodity processors. DVFS enables these processors to operate at different supply voltage levels at the expense of lower clock frequencies. In the context of scheduling, this multiple-voltage facility implies that there is a trade-off between the quality of schedules and energy consumption. Our algorithms effectively balance these two performance goals using a novel objective function and its variant, which take both goals into account; this claim is verified by the results obtained from our extensive comparative evaluation study.
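The paper's actual objective function is not given in the abstract; the sketch below shows only the textbook DVFS trade-off that slack reclamation builds on (dynamic power roughly cubic in frequency, hence per-task energy roughly quadratic) with a simple rule: run each task at the lowest discrete frequency that still fits its slack. The frequency levels and numbers are illustrative.

def pick_frequency(cycles, slack, freqs):
    """Lowest available frequency (Hz) finishing `cycles` within `slack`
    seconds; falls back to the maximum if none meets the deadline."""
    for f in sorted(freqs):
        if cycles / f <= slack:
            return f
    return max(freqs)

def relative_energy(f, f_max):
    # with dynamic power ~ f**3 and run time ~ 1/f, per-task energy ~ f**2
    return (f / f_max) ** 2

freqs = [0.8e9, 1.2e9, 1.6e9, 2.0e9]
f = pick_frequency(cycles=1.0e9, slack=1.0, freqs=freqs)
print(f / 1e9, "GHz, energy vs. max frequency:", relative_energy(f, max(freqs)))

Here a task with one second of slack runs at 1.2 GHz instead of 2.0 GHz, for roughly a third of the dynamic energy, which is the trade-off any such objective function has to balance against schedule quality.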
Optimal Power Allocation and Load Distribution for Multiple Heterogeneous Multicore Server Processors across Clouds and Data Centers
"... Abstract—For multiple heterogeneous multicore server processors across clouds and data centers, the aggregated performance of the cloud of clouds can be optimized by load distribution and balancing. Energy efficiency is one of the most important issues for large-scale server systems in current and f ..."
Abstract
-
Cited by 4 (0 self)
- Add to MetaCart
(Show Context)
Abstract: For multiple heterogeneous multicore server processors across clouds and data centers, the aggregated performance of the cloud of clouds can be optimized by load distribution and balancing. Energy efficiency is one of the most important issues for large-scale server systems in current and future data centers. Multicore processor technology provides new levels of performance and energy efficiency. The present paper aims to develop power- and performance-constrained load distribution methods for cloud computing in current and future large-scale data centers. In particular, we address the problem of optimal power allocation and load distribution for multiple heterogeneous multicore server processors across clouds and data centers. Our strategy is to formulate optimal power allocation and load distribution for multiple servers in a cloud of clouds as optimization problems, i.e., power-constrained performance optimization and performance-constrained power optimization. Our research problems in large-scale data centers are well-defined multivariable optimization problems, which explore the power-performance trade-off by fixing one factor and minimizing the other, from the perspective of optimal load distribution. It is clear that such power and performance optimization is important for a cloud computing provider to efficiently utilize all the available resources. We model a multicore server processor as a queueing system with multiple servers. Our optimization problems are solved for two different models of core speed, where one model assumes that a core runs at zero speed when it is idle, and the other model assumes that a core runs at a constant speed. Our results provide new theoretical insights into power management and performance optimization in data centers.
Index Terms: Load distribution, multicore server processor, power allocation, queueing model, response time.
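The abstract does not reproduce the paper's queueing formulas, so the sketch below uses the standard M/M/m mean response time (Erlang C) together with a hypothetical polynomial speed-to-power law to show the kind of power-performance trade-off being optimized; the paper's actual model and constants may differ.

import math

def mm_m_response_time(lam, mu, m):
    """Mean response time of an M/M/m queue via the Erlang C formula.
    lam: task arrival rate, mu: service rate per core, m: cores."""
    rho = lam / (m * mu)
    assert rho < 1, "system must be stable"
    a = m * rho
    head = sum(a**k / math.factorial(k) for k in range(m))
    tail = a**m / (math.factorial(m) * (1 - rho))
    p_wait = tail / (head + tail)            # probability a task queues
    return 1 / mu + p_wait / (m * mu - lam)  # service + expected wait

def core_power(speed, alpha=3.0, base=0.1):
    # hypothetical speed-to-power law: static base plus speed**alpha
    return base + speed ** alpha

# raising core speed cuts response time but inflates power per core
for speed in (1.0, 1.5, 2.0):
    T = mm_m_response_time(lam=3.0, mu=speed, m=4)
    print(f"speed {speed}: T = {T:.3f}s, power/core = {core_power(speed):.2f}W")

Fixing a power budget and minimizing T, or fixing a target T and minimizing total power, gives the two constrained optimization problems the abstract describes.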