Results 1 - 10 of 137
Black-box and Gray-box Strategies for Virtual Machine Migration
2007
"... Virtualization can provide significant benefits in data centers by enabling virtual machine migration to eliminate hotspots. We present Sandpiper, a system that automates the task of monitoring and detecting hotspots, determining a new mapping of physical to virtual resources and initiating the nece ..."
Abstract - Cited by 211 (7 self)
Virtualization can provide significant benefits in data centers by enabling virtual machine migration to eliminate hotspots. We present Sandpiper, a system that automates the task of monitoring and detecting hotspots, determining a new mapping of physical to virtual resources and initiating the necessary migrations. Sandpiper implements a black-box approach that is fully OS- and application-agnostic and a gray-box approach that exploits OS- and application-level statistics. We implement our techniques in Xen and conduct a detailed evaluation using a mix of CPU, network and memory-intensive applications. Our results show that Sandpiper is able to resolve single server hotspots within 20 seconds and scales well to larger, data center environments. We also show that the gray-box approach can help Sandpiper make more informed decisions, particularly in response to memory pressure.
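The abstract above describes a monitor-detect-migrate loop. A minimal sketch of such a black-box detection loop follows; the thresholds, window size, and all names are illustrative assumptions, not Sandpiper's actual parameters or interfaces.

```python
# Hypothetical black-box hotspot detection loop; parameters are assumptions.
from collections import deque

THRESHOLD = 0.85   # assumed utilization threshold for "hotspot"
WINDOW = 5         # assumed number of consecutive samples required

class HotspotDetector:
    def __init__(self):
        self.history = {}  # vm_id -> recent utilization samples

    def record(self, vm_id, cpu, net, mem):
        """Store a black-box sample taken from hypervisor-level counters."""
        self.history.setdefault(vm_id, deque(maxlen=WINDOW)).append(
            max(cpu, net, mem))  # saturation on any one resource counts

    def is_hotspot(self, vm_id):
        """Flag a hotspot only after sustained overload, to ignore transients."""
        samples = self.history.get(vm_id, [])
        return len(samples) == WINDOW and all(u > THRESHOLD for u in samples)

def pick_migration_target(vm_load, hosts):
    """Greedy placement: least-loaded host that still has headroom for the VM."""
    candidates = [h for h in hosts if h["free"] >= vm_load]
    return min(candidates, key=lambda h: h["load"], default=None)
```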
An early performance analysis of cloud computing services for scientific computing
TU Delft, Tech. Rep., Dec 2008
"... Abstract—Cloud computing is an emerging commercial infrastructure paradigm that promises to eliminate the need for maintaining expensive computing facilities by companies and institutes alike.Throughtheuseofvirtualizationandresourcetime-sharing, clouds serve with a single set of physical resources a ..."
Abstract - Cited by 134 (8 self)
Cloud computing is an emerging commercial infrastructure paradigm that promises to eliminate the need for maintaining expensive computing facilities by companies and institutes alike. Through the use of virtualization and resource time-sharing, clouds serve with a single set of physical resources a large user base with different needs. Thus, clouds have the potential to provide to their owners the benefits of an economy of scale and, at the same time, become an alternative for scientists to clusters, grids, and parallel production environments. However, the current commercial clouds have been built to support web and small database workloads, which are very different from typical scientific computing workloads. Moreover, the use of virtualization and resource time-sharing may introduce significant performance penalties for the demanding scientific computing workloads. In this work we analyze the performance of cloud computing services for scientific computing workloads. We quantify the presence in real scientific computing workloads of Many-Task Computing (MTC) users, that is, of users who employ loosely coupled applications comprising many tasks to achieve their scientific goals. Then, we perform an empirical evaluation of the performance of four commercial cloud computing services including Amazon EC2, which is currently the largest commercial cloud. Last, we compare through trace-based simulation the performance characteristics and cost models of clouds and other scientific computing platforms, for general and MTC-based scientific computing workloads. Our results indicate that the current clouds need an order of magnitude in performance improvement to be useful to the scientific community, and show which improvements should be considered first to address this discrepancy between offer and demand.
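As a rough illustration of the trace-based cost/performance comparison the abstract mentions, the toy simulation below replays a job trace under an assumed virtualization slowdown and an assumed per-CPU-hour price; none of the numbers come from the paper.

```python
# Toy trace-driven comparison; the slowdown factor and price are assumptions.
def simulate(trace, slowdown=1.0, price_per_cpu_hour=0.0):
    """trace: list of (runtime_hours, cpus) jobs measured on a reference system."""
    hours = sum(r * slowdown for r, _ in trace)
    cost = sum(r * slowdown * c * price_per_cpu_hour for r, c in trace)
    return hours, cost

trace = [(2.0, 4), (0.5, 16), (1.0, 8)]                  # synthetic workload trace
cluster_hours, _ = simulate(trace)                       # dedicated cluster baseline
cloud_hours, cloud_cost = simulate(trace, slowdown=4.0,  # assumed cloud penalty
                                   price_per_cpu_hour=0.10)
print(cluster_hours, cloud_hours, cloud_cost)
```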
Automated control of multiple virtualized resources
2008
"... Virtualized data centers enable sharing of resources among hosted applications. However, it is difficult to satisfy servicelevel objectives (SLOs) of applications on shared infrastructure, as application workloads and resource consumption patterns change over time. In this paper, we present AutoCont ..."
Abstract - Cited by 119 (5 self)
Virtualized data centers enable sharing of resources among hosted applications. However, it is difficult to satisfy service-level objectives (SLOs) of applications on shared infrastructure, as application workloads and resource consumption patterns change over time. In this paper, we present AutoControl, a resource control system that automatically adapts to dynamic workload changes to achieve application SLOs. AutoControl is a combination of an online model estimator and a novel multi-input, multi-output (MIMO) resource controller. The model estimator captures the complex relationship between application performance and resource allocations, while the MIMO controller allocates the right amount of multiple virtualized resources to achieve application SLOs. Our experimental evaluation with RUBiS and TPC-W benchmarks along with production-trace-driven workloads indicates that AutoControl can detect and mitigate CPU and disk I/O bottlenecks that occur over time and across multiple nodes by allocating each resource accordingly. We also show that AutoControl can be used to provide service differentiation according to the application priorities during resource contention.
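The estimator/controller split can be pictured with the sketch below: an online estimate of how performance responds to an allocation change drives the next allocation. The linear, single-resource model and all constants are my simplifications; AutoControl itself couples multiple resources through a MIMO controller.

```python
# Simplified single-resource estimator + controller; not AutoControl's design.
class SlopeEstimator:
    """Online estimate of d(performance)/d(allocation) for one application."""
    def __init__(self, gain=1.0):
        self.gain = gain       # current slope estimate (assumed initial value)
        self.prev = None       # last observed (allocation, performance) pair

    def update(self, alloc, perf):
        if self.prev is not None:
            da, dp = alloc - self.prev[0], perf - self.prev[1]
            if abs(da) > 1e-6:
                self.gain = 0.7 * self.gain + 0.3 * (dp / da)  # smoothed slope
        self.prev = (alloc, perf)

def control_step(est, alloc, perf, slo_target, lo=0.05, hi=1.0):
    """Nudge the allocation toward the SLO using the estimated slope."""
    est.update(alloc, perf)
    step = (slo_target - perf) / est.gain if est.gain else 0.0
    return min(hi, max(lo, alloc + step))
```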
Scheduling I/O in virtual machine monitors
In VEE '08: Proceedings of the International Conference on Virtual Execution Environments, 2008
"... This paper explores the relationship between domain scheduling in a virtual machine monitor (VMM) and I/O performance. Tradition-ally, VMM schedulers have focused on fairly sharing the processor resources among domains while leaving the scheduling of I/O re-sources as a secondary concern. However, t ..."
Abstract - Cited by 97 (0 self)
This paper explores the relationship between domain scheduling in a virtual machine monitor (VMM) and I/O performance. Traditionally, VMM schedulers have focused on fairly sharing the processor resources among domains while leaving the scheduling of I/O resources as a secondary concern. However, this can result in poor and/or unpredictable application performance, making virtualization less desirable for applications that require efficient and consistent I/O behavior. This paper is the first to study the impact of the VMM scheduler on performance using multiple guest domains concurrently running different types of applications. In particular, different combinations of processor-intensive, bandwidth-intensive, and latency-sensitive applications are run concurrently to quantify the impacts of different scheduler configurations on processor and I/O performance. These applications are evaluated on 11 different scheduler configurations within the Xen VMM. These configurations include a variety of scheduler extensions aimed at improving I/O performance. This cross product of scheduler configurations and application types offers insight into the key problems in VMM scheduling for I/O and motivates future innovation in this area.
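The cross-product methodology reads roughly like the driver below: every scheduler configuration is paired with every workload mix and measured. The configuration labels and the measurement hook are placeholders, not Xen's actual scheduler interfaces.

```python
# Placeholder experiment driver for a (scheduler config x workload mix) sweep.
import itertools
import random

SCHEDULER_CONFIGS = ["credit-default", "credit-io-boost", "sedf"]   # assumed labels
WORKLOAD_MIXES = [("cpu", "cpu"), ("cpu", "stream"), ("cpu", "ping"), ("stream", "ping")]

def run_benchmark(config, mix):
    """Stand-in measurement: a real harness would set the VMM scheduler,
    boot one guest per workload, and collect throughput and latency."""
    return {"throughput": round(random.random(), 3),
            "latency_ms": round(random.random() * 10, 3)}

results = {(c, m): run_benchmark(c, m)
           for c, m in itertools.product(SCHEDULER_CONFIGS, WORKLOAD_MIXES)}
for key, metrics in sorted(results.items()):
    print(key, metrics)
```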
Xen and Co.: communication-aware CPU scheduling for consolidated Xen-based hosting platforms
In ACM VEE, 2007
"... Recent advances in software and architectural support for server virtualization have created interest in using this tech-nology in the design of consolidated hosting platforms. Since virtualization enables easier and faster application migra-tion as well as secure co-location of antagonistic applica ..."
Abstract - Cited by 74 (1 self)
Recent advances in software and architectural support for server virtualization have created interest in using this technology in the design of consolidated hosting platforms. Since virtualization enables easier and faster application migration as well as secure co-location of antagonistic applications, higher degrees of server consolidation are likely to result in such virtualization-based hosting platforms (VHPs). We identify a key shortcoming in existing virtual machine monitors (VMMs) that proves to be an obstacle in operating hosting platforms, such as Internet data centers, under conditions of such high consolidation: CPU schedulers that are agnostic to the communication behavior of modern, multi-tier applications. We develop a new communication-aware CPU scheduling algorithm to alleviate this problem. We implement our algorithm in the Xen VMM and build a prototype VHP on a cluster of servers. Our experimental evaluation with realistic Internet server applications and benchmarks demonstrates the performance/cost benefits and the wide applicability of our algorithms. For example, the TPC-W benchmark exhibited improvements in average response times of up to 35% for a variety of consolidation scenarios. A streaming media server hosted on our prototype VHP was able to satisfactorily service up to 3.5 times as many clients as one running on the default Xen.
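One way to picture a communication-aware policy is the toy scheduler below, which prefers domains with pending network events over purely CPU-bound ones. The data structures and the boost rule are assumptions for illustration, not the paper's algorithm.

```python
# Toy communication-aware pick: I/O-pending domains run before CPU-bound ones.
def pick_next(run_queue, pending_net):
    """run_queue: list of (credits_used, dom_id); pending_net: set of dom_ids
    with queued packets. Returns the dom_id to schedule next."""
    def key(entry):
        credits_used, dom_id = entry
        return (0 if dom_id in pending_net else 1, credits_used)
    return min(run_queue, key=key)[1] if run_queue else None

# dom 2 has packets waiting, so it is chosen ahead of the CPU-bound dom 1
print(pick_next([(30, 1), (50, 2)], pending_net={2}))
```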
An analysis of performance interference effects in virtual environments
In Proceedings of the IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), 2007
"... Virtualization is an essential technology in modern datacenters. Despite advantages such as security isolation, fault isolation, and environment isolation, current virtualization techniques do not provide effective performance isolation between virtual machines (VMs). Specifically, hidden contention ..."
Abstract - Cited by 71 (5 self)
Virtualization is an essential technology in modern datacenters. Despite advantages such as security isolation, fault isolation, and environment isolation, current virtualization techniques do not provide effective performance isolation between virtual machines (VMs). Specifically, hidden contention for physical resources impacts performance differently in different workload configurations, causing significant variance in observed system throughput. To this end, characterizing workloads that generate performance interference is important in order to maximize overall utility. In this paper, we study the effects of performance interference by looking at system-level workload characteristics. In a physical host, we allocate two VMs, each of which runs a sample application chosen from a wide range of benchmark and real-world workloads. For each combination, we collect performance metrics and runtime characteristics using an instrumented Xen hypervisor. Through subsequent analysis of collected data, we identify clusters of applications that generate certain types of performance interference. Furthermore, we develop mathematical models to predict the performance of a new application from its workload characteristics. Our evaluation shows our techniques were able to predict performance with average error of approximately 5%.
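A hedged sketch of the prediction step: a new application's interference is estimated from previously profiled applications with similar system-level characteristics. The features, the distance metric, and the profile values below are invented for illustration.

```python
# Nearest-profile interference estimate over invented characteristics.
import math

PROFILES = {   # profiled apps: characteristics -> measured co-location slowdown
    "cpu_bound":  {"cache_miss": 0.05, "io_rate": 10.0,  "slowdown": 1.05},
    "disk_bound": {"cache_miss": 0.10, "io_rate": 900.0, "slowdown": 1.40},
    "mem_bound":  {"cache_miss": 0.45, "io_rate": 30.0,  "slowdown": 1.30},
}

def distance(a, b):
    return math.hypot(a["cache_miss"] - b["cache_miss"],
                      (a["io_rate"] - b["io_rate"]) / 1000.0)   # crude scaling

def predict_slowdown(new_app):
    """Estimate slowdown from the nearest profiled application."""
    nearest = min(PROFILES.values(), key=lambda p: distance(p, new_app))
    return nearest["slowdown"]

print(predict_slowdown({"cache_miss": 0.40, "io_rate": 50.0}))
```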
Comparison of the Three CPU Schedulers in Xen
"... The primary motivation for enterprises to adopt virtualization technologies is to create a more agile and dynamic IT infrastructure — with server consolidation, high resource utilization, the ability to quickly add and adjust capacity on demand — while lowering total cost of ownership and responding ..."
Abstract - Cited by 62 (1 self)
The primary motivation for enterprises to adopt virtualization technologies is to create a more agile and dynamic IT infrastructure — with server consolidation, high resource utilization, the ability to quickly add and adjust capacity on demand — while lowering total cost of ownership and responding more effectively to changing business conditions. However, effective management of virtualized IT environments introduces new and unique requirements, such as dynamically resizing and migrating virtual machines (VMs) in response to changing application demands. Such capacity management methods should work in conjunction with the underlying resource management mechanisms. In general, resource multiplexing and scheduling among virtual machines is poorly understood. CPU scheduling for virtual machines, for instance, has largely been borrowed from the process scheduling research in operating systems. However, it is not clear whether a straightforward port of process schedulers to VM schedulers would perform just as well. We use the open source Xen virtual machine monitor to perform a comparative evaluation of three different CPU schedulers for virtual machines. We analyze the impact of the choice of scheduler and its parameters on application performance, and discuss challenges in estimating the application resource requirements in virtualized environments.
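For readers unfamiliar with proportional-share CPU schedulers of the kind compared here, the sketch below shows how per-VM weights and optional caps translate into CPU shares. The parameter names are generic, not Xen's exact interface, and real schedulers redistribute unused capacity in ways this toy version ignores.

```python
# Generic weight/cap proportional-share calculation (illustrative only).
def cpu_shares(vms, total_cpus=1.0):
    """vms: vm_id -> {"weight": int, "cap": fraction of total_cpus or None}."""
    total_weight = sum(v["weight"] for v in vms.values())
    shares = {}
    for vm_id, v in vms.items():
        share = total_cpus * v["weight"] / total_weight
        if v["cap"] is not None:
            share = min(share, v["cap"])
        shares[vm_id] = share
    return shares

print(cpu_shares({"web":   {"weight": 256, "cap": None},
                  "db":    {"weight": 512, "cap": 0.50},
                  "batch": {"weight": 256, "cap": 0.25}}))
```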
Performance evaluation of virtualization technologies for server consolidation
HP Labs Tech. Rep., 2007
"... ..."
(Show Context)
Vconf: a reinforcement learning approach to virtual machines auto-configuration
In ICAC, 2009
"... Virtual machine (VM) technology enables multiple VMs to share resources on the same host. Resources allocated to the VMs should be re-configured dynamically in response to the change of application demands or resource supply. Because VM execution involves privileged domain and VM monitor, this cause ..."
Abstract - Cited by 48 (18 self)
Virtual machine (VM) technology enables multiple VMs to share resources on the same host. Resources allocated to the VMs should be re-configured dynamically in response to the change of application demands or resource supply. Because VM execution involves privileged domain and VM monitor, this causes uncertainties in VMs' resource to performance mapping and poses challenges in online determination of appropriate VM configurations. In this paper, we propose a reinforcement learning (RL) based approach, namely VCONF, to automate the VM configuration process. VCONF employs model-based RL algorithms to address the scalability and adaptability issues in applying RL in real system management. Experimental results on both controlled environments and a testbed imitating production systems with Xen VMs and representative server workloads demonstrate the effectiveness of VCONF. The approach is able to find optimal (near optimal) configurations in small scale systems and shows good adaptability and scalability.
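A toy flavour of RL-driven reconfiguration is sketched below with tabular Q-learning over discretized actions; this is a simplification for illustration, since the abstract states that VCONF relies on model-based RL, and every constant here is an assumption.

```python
# Toy Q-learning loop for adjusting a single VM setting (illustrative only).
import random
from collections import defaultdict

ACTIONS = [-1, 0, +1]               # shrink / keep / grow the setting (e.g., vCPUs)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
Q = defaultdict(float)              # (state, action) -> estimated value

def choose(state):
    if random.random() < EPSILON:   # occasional exploration
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def learn(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def reward(throughput, target):
    # assumed reward shape: positive once measured throughput meets the target
    return throughput - target
```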
Profiling and Modeling Resource Usage of Virtualized Applications. UMass
"... Next Generation Data Centers (NGDC) are transforming labor-intensive, hard-coded systems into shared, virtualized, automated, and fully managed adaptive infrastructures. Virtualization technologies promise great opportunities for reducing energy and hardware costs through server consolidation. Moreo ..."
Abstract - Cited by 45 (1 self)
Next Generation Data Centers (NGDC) are transforming labor-intensive, hard-coded systems into shared, virtualized, automated, and fully managed adaptive infrastructures. Virtualization technologies promise great opportunities for reducing energy and hardware costs through server consolidation. Moreover, virtualization can optimize resource sharing among applications hosted in different virtual machines to better meet their resource needs. However, to safely transition an application running natively on real hardware to a virtualized environment, one needs to estimate the additional resource requirements incurred by virtualization overheads. In this work, we design a general approach for estimating the resource requirements of applications when they are transferred to a virtual environment. Our approach has two key components: a set of microbenchmarks to profile the different types of virtualization overhead on a given platform, and a regression-based model that maps the native system usage profile into a virtualized one. This derived model can be used for estimating resource requirements of any application to be virtualized on a given platform. Our approach aims to eliminate error-prone manual processes and presents a fully automated solution. We illustrate the effectiveness of our methodology using Xen virtual machine monitor. Our evaluation shows that our automated model generation procedure effectively characterizes the different virtualization overheads of two diverse hardware platforms and that the models have median prediction error of less than 5% for both the RUBiS and TPC-W benchmarks.
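The two-step idea (microbenchmark profiling, then a regression that maps a native usage profile to a virtualized one) can be sketched as below. The feature columns and all numbers are synthetic stand-ins, not the paper's benchmark data.

```python
# Synthetic example of fitting a native-to-virtualized resource usage map.
import numpy as np

# columns: native CPU %, native disk IOPS, native net packets/s (assumed features)
native = np.array([[10.0, 100.0, 1000.0],
                   [30.0, 300.0,  500.0],
                   [50.0,  50.0, 3000.0],
                   [70.0, 400.0,  200.0]])
virt_cpu = np.array([14.0, 38.0, 66.0, 82.0])   # invented virtualized CPU %

X = np.column_stack([native, np.ones(len(native))])       # add intercept term
coeffs, *_ = np.linalg.lstsq(X, virt_cpu, rcond=None)     # least-squares fit

def predict_virt_cpu(native_usage):
    """Estimate virtualized CPU needs from a native usage vector."""
    return float(np.append(native_usage, 1.0) @ coeffs)

print(predict_virt_cpu(np.array([40.0, 200.0, 1500.0])))
```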