Results 1 - 10 of 126
CloudVisor: Retrofitting protection of virtual machines in multi-tenant cloud with nested virtualization
- In Proc. of ACM SOSP, Cascais, Portugal, 2011
Cited by 77 (2 self)
Multi-tenant cloud, which usually leases resources in the form of virtual machines, has been commercially available for years. Unfortunately, with the adoption of commodity virtualized infrastructures, software stacks in typical multi-tenant clouds are non-trivially large and complex, and thus are prone to compromise or abuse by adversaries, including the cloud operators, which may lead to leakage of security-sensitive data. In this paper, we propose a transparent, backward-compatible approach that protects the privacy and integrity of customers' virtual machines on commodity virtualized infrastructures, even in the face of a total compromise of the virtual machine monitor (VMM) and the management VM. The key to our approach is the separation of resource management from security protection in the virtualization layer. A tiny security monitor is introduced underneath the commodity VMM using nested virtualization and provides protection to the hosted VMs. As a result, our approach allows virtualization software (e.g., the VMM, management VM, and tools) to handle the complex tasks of managing leased VMs for the cloud without breaking the security of users' data inside the VMs. We have implemented a prototype by leveraging commercially available hardware support for virtualization. The prototype system, called CloudVisor, comprises only 5.5K LOC and supports the Xen VMM with multiple Linux and Windows guest OSes. Performance evaluation shows that CloudVisor incurs moderate slowdown for I/O-intensive applications and very small slowdown for other applications.
Efficient resource provisioning in compute clouds via VM multiplexing
- In The 7th IEEE/ACM International Conference on Autonomic Computing and Communications, 2010
Cited by 54 (2 self)
Resource provisioning in compute clouds often requires an estimate of the capacity needs of Virtual Machines (VMs). The estimated VM size is the basis for allocating resources commensurate with demand. In contrast to the traditional practice of estimating the size of VMs individually, we propose a joint-VM provisioning approach in which multiple VMs are consolidated and provisioned together, based on an estimate of their aggregate capacity needs. This new approach exploits statistical multiplexing among the workload patterns of multiple VMs, i.e., the peaks and valleys in one workload pattern do not necessarily coincide with the others. Thus, the unused resources of a lightly utilized VM can be borrowed by other co-located VMs with high utilization. Compared to individual-VM provisioning, joint-VM provisioning can lead to much higher resource utilization. This paper presents three design modules that enable such a concept in practice: a performance constraint describing the capacity needed by a VM to achieve a certain level of application performance; an algorithm for estimating the aggregate size of multiplexed VMs; and a VM selection algorithm that seeks out VM combinations with complementary workload patterns. We show that the proposed modules can be seamlessly plugged into applications such as resource provisioning and providing resource guarantees for VMs. The proposed method and applications are evaluated using performance data collected from about 16 thousand VMs in commercial data centers. The results demonstrate more than 45% improvement in overall resource utilization.
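The gap between individual and joint sizing can be illustrated with a minimal sketch (function names and the nearest-rank percentile are illustrative assumptions, not the paper's actual modules): each VM is a time series of demand samples; individual provisioning sums per-VM 95th percentiles, while joint provisioning takes the 95th percentile of the time-aligned aggregate demand.

```python
import math

def percentile(samples, p):
    """Nearest-rank p-th percentile of a list of demand samples."""
    s = sorted(samples)
    rank = max(1, math.ceil(p / 100.0 * len(s)))
    return s[rank - 1]

def individual_provisioning(workloads, p=95):
    """Traditional sizing: provision each VM for its own p-th percentile demand."""
    return sum(percentile(w, p) for w in workloads)

def joint_provisioning(workloads, p=95):
    """Joint sizing: provision the consolidated VMs for the p-th percentile
    of their aggregate (time-aligned) demand."""
    aggregate = [sum(step) for step in zip(*workloads)]
    return percentile(aggregate, p)
```

For two anti-correlated workloads such as `[10, 90, 10, 90]` and `[90, 10, 90, 10]`, joint sizing provisions 100 units where individual sizing provisions 180; that difference is the statistical-multiplexing gain the abstract describes.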
SleepServer: A software-only approach for reducing the energy consumption of PCs within enterprise environments
- In USENIX ATC, 2010
Cited by 54 (7 self)
Desktop computers are an attractive focus for energy savings, as they are both a substantial component of enterprise energy consumption and are frequently unused or otherwise idle. Indeed, past studies have shown large power savings if such machines could simply be powered down when not in use. Unfortunately, while contemporary hardware supports low-power “sleep” modes of operation, their use in desktop PCs has been curtailed by application expectations of “always-on” network connectivity. In this paper, we describe the architecture and implementation of SleepServer, a system that enables hosts to transition to such low-power sleep states while still maintaining their applications' expected network presence using an on-demand proxy server. Our approach is particularly informed by our focus on practical deployment, and thus SleepServer is designed to be compatible with existing networking infrastructure, host hardware, and operating systems. Using SleepServer does not require any hardware additions to the end hosts themselves and can be supported purely by additional software running on the systems under management. We detail results from our experience deploying SleepServer in a medium-scale enterprise with a sample set of thirty machines instrumented to provide accurate real-time measurements of energy consumption. Our measurements show significant energy savings for PCs, ranging from 60% to 80% depending on their use model.
Memory Buddies: Exploiting Page Sharing for Smart Colocation in Virtualized Data Centers.
- In Proceedings of the ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments, 2009
Cited by 51 (7 self)
Many data center virtualization solutions, such as VMware ESX, employ content-based page sharing to consolidate the resources of multiple servers. Page sharing identifies virtual machine memory pages with identical content and consolidates them into a single shared page. This technique, implemented at the host level, applies only between VMs placed on a given physical host. In a multi-server data center, opportunities for sharing may be lost because the VMs holding identical pages are resident on different hosts. To obtain the full benefit of content-based page sharing, it is necessary to place virtual machines such that VMs with similar memory content are located on the same hosts. In this paper we present Memory Buddies, a memory-sharing-aware placement system for virtual machines. The system includes a memory fingerprinting mechanism to efficiently determine the sharing potential among a set of VMs and to compute more efficient placements. In addition, it makes use of live migration to optimize VM placement as workloads change. We have implemented a prototype Memory Buddies system with VMware ESX Server and present experimental results on our testbed, as well as an analysis of an extensive memory trace study. Evaluation of our prototype using a mix of enterprise and e-commerce applications demonstrates an increase in data center capacity (i.e., the number of VMs supported) of 17%, while imposing low overhead and scaling to as many as a thousand servers.
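A memory fingerprint of the kind described can be sketched as a set of per-page content hashes; the sharing potential between two VMs is then the overlap of their fingerprint sets. This is a simplification under assumptions: the real system uses more compact summaries (e.g., Bloom filters), and `best_host`, SHA-1, and the 4 KB page size are illustrative choices, not the paper's implementation.

```python
import hashlib

PAGE_SIZE = 4096

def fingerprint(memory: bytes):
    """Reduce a VM memory image to the set of hashes of its 4 KB pages."""
    return {hashlib.sha1(memory[i:i + PAGE_SIZE]).digest()
            for i in range(0, len(memory), PAGE_SIZE)}

def sharing_potential(fp_a, fp_b):
    """Number of pages two VMs could merge into one copy if co-located."""
    return len(fp_a & fp_b)

def best_host(vm_memory, hosts):
    """Place the VM on the host whose resident pages overlap it the most.

    hosts: mapping of host name -> fingerprint of pages already resident there.
    """
    fp = fingerprint(vm_memory)
    return max(hosts, key=lambda name: sharing_potential(fp, hosts[name]))
```

Set intersection keeps placement decisions cheap even across many candidate hosts, which is the property a sharing-aware placement system needs at data-center scale.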
I/O Deduplication: Utilizing Content Similarity to Improve I/O Performance
Cited by 48 (5 self)
Duplication of data in storage systems is becoming increasingly common. We introduce I/O Deduplication, a storage optimization that utilizes content similarity to improve I/O performance by eliminating I/O operations and reducing the mechanical delays during I/O operations. I/O Deduplication consists of three main techniques: content-based caching, dynamic replica retrieval, and selective duplication. Each of these techniques is motivated by our observations with I/O workload traces obtained from actively used production storage systems, all of which revealed surprisingly high levels of content similarity for both stored and accessed data. Evaluation of a prototype implementation using these workloads showed an overall improvement in disk I/O performance of 28% to 47% across the workloads. A further breakdown showed that each of the three techniques contributed significantly to the overall performance improvement.
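Content-based caching, the first of the three techniques, can be sketched as a buffer cache keyed by block-content hash rather than by sector address, so that N sectors holding identical data occupy a single cache entry. The class and method names below are hypothetical, not the paper's implementation, and the sector-to-digest map is a deliberate simplification.

```python
import hashlib
from collections import OrderedDict

class ContentCache:
    """LRU buffer cache indexed by block-content hash instead of disk address."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()   # content digest -> block data (LRU order)
        self.addr = {}                # sector number -> content digest

    def _insert(self, digest, data):
        if digest not in self.blocks and len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)    # evict least-recently-used entry
        self.blocks[digest] = data
        self.blocks.move_to_end(digest)

    def write(self, sector, data):
        """Observe a write: record which content now lives at this sector."""
        digest = hashlib.sha1(data).digest()
        self.addr[sector] = digest
        self._insert(digest, data)

    def read(self, sector, fetch):
        """Serve a read from cache if any sector with the same content is
        cached; otherwise fall back to the disk via fetch(sector)."""
        digest = self.addr.get(sector)
        if digest is not None and digest in self.blocks:
            self.blocks.move_to_end(digest)
            return self.blocks[digest], True   # hit, possibly via a duplicate
        data = fetch(sector)                   # miss: go to disk
        self.write(sector, data)
        return data, False
```

After identical data is written to two sectors, the cache holds one copy, and a read of either sector is a hit with no disk I/O, which is how eliminating duplicate content translates into eliminated I/O operations.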
Rethinking the Library OS from the Top Down
Cited by 43 (8 self)
“There is nothing new under the sun, but there are a lot of old things we don’t know.” – Ambrose Bierce, The Devil’s Dictionary. This paper revisits an old approach to operating system construction, the library OS, in a new context. The idea of the library OS is that the personality of the OS on which an application depends runs in the address space of the application. A small, fixed set of abstractions connects the library OS to the host OS kernel, offering the promise of better system security and more rapid independent evolution of OS components. We describe a working prototype of a Windows 7 library OS that runs the latest releases of major applications such as Microsoft Excel, PowerPoint, and Internet Explorer. We demonstrate that desktop sharing across independent, securely isolated library OS instances can be achieved through the pragmatic reuse of networking protocols. Each instance has significantly lower overhead than a full VM bundled with an application: a typical application adds just 16 MB of working set and 64 MB of disk footprint. We contribute a new ABI below the library OS that enables application mobility. We also show that our library OS can address many of the current uses of hardware virtual machines at a fraction of the overhead. This paper describes the first working prototype of a full commercial OS redesigned as a library OS capable of running significant applications. Our experience shows that the long-promised benefits of the library OS approach (better protection of system integrity and rapid system evolution) are readily obtainable.
Satori: Enlightened Page Sharing
- In Proceedings of the USENIX Annual Technical Conference, 2009
Cited by 41 (0 self)
We introduce Satori, an efficient and effective system for sharing memory in virtualised systems. Satori uses enlightenments in guest operating systems to detect sharing opportunities and manage the surplus memory that results from sharing. Our approach has three key benefits over existing systems: it is better able to detect short-lived sharing opportunities, it is efficient and incurs negligible overhead, and it maintains performance isolation between virtual machines. We present Satori in terms of hypervisor-agnostic design decisions and also discuss our implementation for the Xen virtual machine monitor. In our evaluation, we show that Satori quickly exploits up to 94% of the maximum possible sharing with insignificant performance overhead. Furthermore, we demonstrate workloads where the additional memory improves macrobenchmark performance by a factor of two.
The Effectiveness of Deduplication on Virtual Machine Disk Images
Cited by 40 (1 self)
Virtualization is becoming widely deployed in servers to efficiently provide many logically separate execution environments while reducing the need for physical servers. While this approach saves physical CPU resources, it still consumes large amounts of storage because each virtual machine (VM) instance requires its own multi-gigabyte disk image. Moreover, existing systems do not support ad hoc block sharing between disk images, instead relying on techniques such as overlays to build multiple VMs from a single “base” image. Instead, we propose the use of deduplication both to reduce the total storage required for VM disk images and to increase the ability of VMs to share disk blocks. To test the effectiveness of deduplication, we conducted extensive evaluations on different sets of virtual machine disk images with different chunking strategies. Our experiments found that the amount of stored data grows very slowly after the first few virtual disk images if only the locale or software configuration is changed, with the compression rate suffering when different versions of an operating system or different operating systems are included. We also show that fixed-length chunks work well, achieving nearly the same compression rate as variable-length chunks. Finally, we show that simply identifying zero-filled blocks, even in ready-to-use virtual machine disk images available online, can provide significant savings in storage.
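Fixed-length chunking with zero-block detection can be sketched in a few lines (the function name, SHA-1 digest, and 4 KB chunk size are assumptions for illustration, not the paper's exact methodology): split the image into fixed-size chunks, store nothing for zero-filled chunks, and count each distinct non-zero chunk once.

```python
import hashlib

CHUNK = 4096

def dedup_stats(image: bytes, chunk=CHUNK):
    """Return (unique_nonzero_chunks, zero_chunks, total_chunks) for a disk
    image under fixed-length chunking with zero-block detection."""
    unique = set()
    zero = total = 0
    for i in range(0, len(image), chunk):
        c = image[i:i + chunk]
        total += 1
        if c == bytes(len(c)):       # zero-filled chunk: needs no storage
            zero += 1
        else:
            unique.add(hashlib.sha1(c).digest())
    return len(unique), zero, total
```

The ratio of stored chunks (`unique`) to `total` gives the deduplication rate; an image with many repeated or zero-filled blocks stores far fewer chunks than it contains.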
Live virtual machine migration with adaptive memory compression
- In IEEE CLUSTER, 2009
Cited by 37 (2 self)
Live migration of virtual machines has been a powerful tool to facilitate system maintenance, load balancing, fault tolerance, and power saving, especially in clusters and data centers. Although pre-copy is the predominant approach in the state of the art, it is difficult for it to provide quick migration with low network overhead, due to the great amount of data transferred during migration, which leads to large performance degradation of virtual machine services. This paper presents the design and implementation of a novel memory-compression-based VM migration approach (MECOM) that uses memory compression to provide fast, stable virtual machine migration while only slightly affecting virtual machine services. Based on memory page characteristics, we design an adaptive zero-aware compression algorithm to balance the performance and cost of virtual machine migration. Pages are quickly compressed in batches on the source and exactly recovered on the target. Experiments demonstrate that, compared with Xen, our system can on average reduce downtime by 27.1%, total migration time by 32%, and total transferred data by 68.8%. Keywords: virtual machine; live migration; memory compression.
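A zero-aware page encoder in the spirit of the algorithm described might look like the sketch below; the one-byte tag scheme and the use of zlib are illustrative assumptions, not MECOM's actual wire format. An all-zero page costs a single tag byte, and other pages are compressed only when compression actually saves space, so the target can always exactly recover the page.

```python
import zlib

PAGE = 4096
TAG_ZERO, TAG_ZLIB, TAG_RAW = b'\x00', b'\x01', b'\x02'

def encode_page(page: bytes) -> bytes:
    """Zero-aware encoding: zero pages cost one tag byte; other pages are
    zlib-compressed only when that reduces the bytes sent."""
    if page.count(0) == len(page):
        return TAG_ZERO
    packed = zlib.compress(page, 1)   # fast level: migration is latency-bound
    if len(packed) < len(page):
        return TAG_ZLIB + packed
    return TAG_RAW + page             # incompressible: send as-is

def decode_page(blob: bytes, size=PAGE) -> bytes:
    """Exactly recover the original page on the migration target."""
    tag, body = blob[:1], blob[1:]
    if tag == TAG_ZERO:
        return bytes(size)
    if tag == TAG_ZLIB:
        return zlib.decompress(body)
    return body
```

Choosing the encoding per page based on its content is what makes the scheme "adaptive": zero pages, highly compressible pages, and incompressible pages each take the cheapest path.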
Breaking up is hard to do: Security and functionality in a commodity hypervisor
- In Proc. ACM Symposium on Operating Systems Principles, 2011
Cited by 36 (0 self)
Cloud computing uses virtualization to lease small slices of large-scale datacenter facilities to individual paying customers. These multi-tenant environments, on which numerous large and popular web-based applications run today, are founded on the belief that the virtualization platform is sufficiently secure to prevent breaches of isolation between different users who are co-located on the same host. Hypervisors are believed to be trustworthy in this role because of their small size and narrow interfaces. We observe that, despite the modest footprint of the hypervisor itself, these platforms have a large aggregate trusted computing base (TCB) that includes a monolithic control VM with numerous interfaces exposed to VMs. We present Xoar, a modified version of Xen that retrofits the modularity and isolation principles used in microkernels onto a mature virtualization platform. Xoar breaks the control VM into single-purpose components called service VMs. We show that this componentized abstraction brings a number of benefits: sharing of service components by guests is configurable and auditable, making exposure to risk explicit, and access to the hypervisor is restricted to the least privilege required for each component. Microrebooting components at configurable frequencies reduces the temporal attack surface of individual components. Our approach incurs little performance overhead and does not require functionality to be sacrificed or components to be rewritten from scratch.