Results 1 - 10 of 138
Diagnosing performance overheads in the Xen virtual machine environment
- In VEE ’05: Proceedings of the 1st ACM/USENIX International Conference on Virtual Execution Environments, 2005
Cited by 141 (9 self)
Virtual machine environments (e.g., Xen) are experiencing a resurgence of interest for diverse uses including server consolidation and shared hosting. An application’s performance in a virtual machine environment can differ markedly from its performance in a nonvirtualized environment because of interactions with the underlying virtual machine monitor and other virtual machines. However, few tools are currently available to help debug performance problems in virtual machine environments. In this paper, we present Xenoprof, a system-wide statistical profiling toolkit implemented for the Xen virtual machine environment. The toolkit enables coordinated profiling of multiple VMs in a system to obtain the distribution of hardware events such as clock cycles and cache and TLB misses. We use our toolkit to analyze performance overheads incurred by networking applications running in Xen VMs. We focus on networking applications since virtualizing network I/O devices is relatively expensive. Our experimental results quantify Xen’s performance overheads for network I/O device virtualization in uni- and multi-processor systems. Our results identify the main sources of this overhead, which should be the focus of Xen optimization efforts. We also show how our profiling toolkit was used to uncover and resolve performance bugs, encountered during our experiments, that caused unexpected application behavior.
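To make the kind of hardware-event profiling described above concrete, here is a minimal sketch, not Xenoprof's own interface, that counts CPU cycles and cache misses for a single process using the standard Linux perf_event_open system call; Xenoprof collects the same class of events system-wide and attributes them across Xen domains. The workload and buffer size below are arbitrary, and the program may need elevated perf privileges.

/* Minimal hardware-event counting sketch (Linux perf_event_open). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static int perf_open(unsigned int type, unsigned long long config)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = type;
    attr.size = sizeof(attr);
    attr.config = config;
    attr.disabled = 1;
    /* pid = 0 (this process), cpu = -1 (any CPU), no group, no flags */
    return (int)syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
}

int main(void)
{
    int cycles = perf_open(PERF_TYPE_HARDWARE, PERF_COUNT_HW_CPU_CYCLES);
    int misses = perf_open(PERF_TYPE_HARDWARE, PERF_COUNT_HW_CACHE_MISSES);
    if (cycles < 0 || misses < 0) { perror("perf_event_open"); return 1; }

    ioctl(cycles, PERF_EVENT_IOC_ENABLE, 0);
    ioctl(misses, PERF_EVENT_IOC_ENABLE, 0);

    /* Workload to be measured: touch a few megabytes of memory. */
    volatile char *buf = malloc(8 << 20);
    for (long i = 0; i < (8 << 20); i += 64)
        buf[i] = (char)i;

    ioctl(cycles, PERF_EVENT_IOC_DISABLE, 0);
    ioctl(misses, PERF_EVENT_IOC_DISABLE, 0);

    long long ncycles = 0, nmisses = 0;
    read(cycles, &ncycles, sizeof(ncycles));
    read(misses, &nmisses, sizeof(nmisses));
    printf("cycles: %lld  cache misses: %lld\n", ncycles, nmisses);
    return 0;
}
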
Unmodified device driver reuse and improved system dependability via virtual machines
- In Proceedings of the 6th Symposium on Operating Systems Design and Implementation, 2004
Cited by 134 (8 self)
We propose a method to reuse unmodified device drivers and to improve system dependability using virtual machines. We run the unmodified device driver, with its original operating system, in a virtual machine. This approach enables extensive reuse of existing and unmodified drivers, independent of the OS or device vendor, significantly reducing the barrier to building new OS endeavors. By allowing distinct device drivers to reside in separate virtual machines, this technique isolates faults caused by defective or malicious drivers, thus improving a system’s dependability. We show that our technique requires minimal support infrastructure and provides strong fault isolation. Our prototype’s network performance is within 3–8% of a native Linux system. Each additional virtual machine increases the CPU utilization by about 0.12%. We have successfully reused a wide variety of unmodified Linux network, disk, and PCI device drivers.
Optimizing network virtualization in Xen
- In Proceedings of the USENIX Annual Technical Conference, 2006
Cited by 124 (9 self)
In this paper, we propose and evaluate three techniques for optimizing network performance in the Xen virtualized environment. Our techniques retain the basic Xen architecture of locating device drivers in a privileged ‘driver’ domain with access to I/O devices, and providing network access to unprivileged ‘guest’ domains through virtualized network interfaces. First, we redefine the virtual network interfaces of guest domains to incorporate high-level network offload features available in most modern network cards. We demonstrate the performance benefits of high-level offload functionality in the virtual interface, even when such functionality is not supported in the underlying physical interface. Second, we optimize the implementation of the data transfer path between guest and driver domains. The optimization avoids expensive data remapping operations on the transmit path, and replaces page remapping by data copying on the receive path. Finally, we provide support for guest operating systems to effectively utilize advanced virtual memory features such as superpages and global page mappings. The overall impact of these optimizations is an improvement in transmit performance of guest domains by a factor of 4.4. The receive performance of the driver domain is improved by 35% and reaches within 7% of native Linux performance. The receive performance in guest domains improves by 18%, but still trails the native Linux performance by 61%. We analyse the performance improvements in detail, and quantify the contribution of each optimization to the overall performance.
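As a reference point for the "high-level offload features" mentioned above, the sketch below queries TCP segmentation offload, TX checksum offload, and scatter/gather on a NIC through the standard Linux ethtool ioctl. It only illustrates which NIC features are involved, not the paper's virtual-interface changes, and the interface name eth0 is an assumption.

/* Query NIC offload features via the standard SIOCETHTOOL ioctl. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

static int get_flag(int fd, struct ifreq *ifr, __u32 cmd)
{
    struct ethtool_value ev = { .cmd = cmd };
    ifr->ifr_data = (char *)&ev;
    if (ioctl(fd, SIOCETHTOOL, ifr) < 0)
        return -1;                      /* feature query failed */
    return (int)ev.data;                /* 1 = enabled, 0 = disabled */
}

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);   /* assumed interface name */

    printf("TSO:            %d\n", get_flag(fd, &ifr, ETHTOOL_GTSO));
    printf("TX checksum:    %d\n", get_flag(fd, &ifr, ETHTOOL_GTXCSUM));
    printf("Scatter/gather: %d\n", get_flag(fd, &ifr, ETHTOOL_GSG));
    close(fd);
    return 0;
}
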
vTPM: Virtualizing the trusted platform module
- In USENIX Security, 2006
Cited by 121 (4 self)
We present the design and implementation of a system that enables trusted computing for an unlimited number of virtual machines on a single hardware platform. To this end, we virtualized the Trusted Platform Module (TPM). As a result, the TPM’s secure storage and cryptographic functions are available to operating systems and applications running in virtual machines. Our new facility supports higher-level services for establishing trust in virtualized environments, for example, remote attestation of software integrity. We implemented the full TPM specification in software and added functions to create and destroy virtual TPM instances. We integrated our software TPM into a hypervisor environment to make TPM functions available to virtual machines. Our virtual TPM supports suspend and resume operations, as well as migration of a virtual TPM instance with its respective virtual machine across platforms. We present four designs for certificate chains to link the virtual TPM to a hardware TPM, with security vs. efficiency trade-offs based on threat models. Finally, we demonstrate a working system by layering an existing integrity measurement application on top of our virtual TPM facility.
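For illustration only: from a guest's point of view, a vTPM is reached through the same character-device interface as a hardware TPM. The sketch below sends a TPM 1.2 GetRandom command to /dev/tpm0 and prints the returned bytes; the device path and the presence of a 1.2-style TPM (virtual or physical) are assumptions, and this is not code from the paper.

/* Send TPM 1.2 GetRandom through the ordinary TPM character device. */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* TPM_TAG_RQU_COMMAND, paramSize = 14, TPM_ORD_GetRandom, 16 bytes requested */
    uint8_t req[14] = { 0x00, 0xC1, 0x00, 0x00, 0x00, 0x0E,
                        0x00, 0x00, 0x00, 0x46, 0x00, 0x00, 0x00, 0x10 };
    uint8_t resp[64];

    int fd = open("/dev/tpm0", O_RDWR);                 /* assumed device path */
    if (fd < 0) { perror("open /dev/tpm0"); return 1; }
    if (write(fd, req, sizeof(req)) != sizeof(req)) { perror("write"); return 1; }

    ssize_t n = read(fd, resp, sizeof(resp));
    if (n < 14) { perror("read"); return 1; }

    /* Response layout: tag(2) | paramSize(4) | returnCode(4) | count(4) | bytes */
    uint32_t count = (resp[10] << 24) | (resp[11] << 16) | (resp[12] << 8) | resp[13];
    printf("TPM returned %u random bytes:", count);
    for (uint32_t i = 0; i < count && 14 + i < (uint32_t)n; i++)
        printf(" %02x", resp[14 + i]);
    printf("\n");
    close(fd);
    return 0;
}
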
Container-based operating system virtualization: A scalable, high-performance alternative to hypervisors
- In Proceedings of the 2nd ACM European Conference on Computer Systems (EuroSys)
Cited by 117 (6 self)
Hypervisors, popularized by Xen and VMware, are quickly becoming commodity. They are appropriate for many usage scenarios, but there are scenarios that require system virtualization with high degrees of both isolation and efficiency. Examples include HPC clusters, the Grid, hosting centers, and PlanetLab. We present an alternative to hypervisors that is better suited to such scenarios. The approach is a synthesis of prior work on resource containers and security containers applied to general-purpose, time-shared operating systems. Examples of such container-based systems include Solaris 10, Virtuozzo for Linux, and Linux-VServer. As a representative instance of container-based systems, this paper describes the design and implementation of Linux-VServer. In addition, it contrasts the architecture of Linux-VServer with current generations of Xen, and shows how Linux-VServer provides comparable support for isolation and superior system efficiency.
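Linux-VServer predates mainline kernel containers, but the basic idea, isolating groups of processes that still share one kernel, can be sketched with today's Linux namespace API; the example below is therefore an analogue, not VServer's own mechanism, and it needs root (or CAP_SYS_ADMIN) to run.

/* Container-style isolation with Linux namespaces: the child gets private
 * UTS and PID namespaces, so it sees its own hostname and runs as PID 1,
 * while still sharing one kernel with the parent. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

static char child_stack[1024 * 1024];

static int child(void *arg)
{
    (void)arg;
    sethostname("container", 9);          /* visible only inside the new UTS ns */
    printf("child: pid=%d (PID 1 in its namespace)\n", getpid());
    system("hostname");
    return 0;
}

int main(void)
{
    pid_t pid = clone(child, child_stack + sizeof(child_stack),
                      CLONE_NEWUTS | CLONE_NEWPID | SIGCHLD, NULL);
    if (pid < 0) { perror("clone"); return 1; }
    waitpid(pid, NULL, 0);
    printf("parent: still sees the original hostname:\n");
    system("hostname");
    return 0;
}
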
Scheduling I/O in virtual machine monitors
- In VEE ’08: Proceedings of the International Conference on Virtual Execution Environments, 2008
Cited by 97 (0 self)
This paper explores the relationship between domain scheduling in a virtual machine monitor (VMM) and I/O performance. Traditionally, VMM schedulers have focused on fairly sharing the processor resources among domains while leaving the scheduling of I/O resources as a secondary concern. However, this can result in poor and/or unpredictable application performance, making virtualization less desirable for applications that require efficient and consistent I/O behavior. This paper is the first to study the impact of the VMM scheduler on performance using multiple guest domains concurrently running different types of applications. In particular, different combinations of processor-intensive, bandwidth-intensive, and latency-sensitive applications are run concurrently to quantify the impacts of different scheduler configurations on processor and I/O performance. These applications are evaluated on 11 different scheduler configurations within the Xen VMM. These configurations include a variety of scheduler extensions aimed at improving I/O performance. This cross product of scheduler configurations and application types offers insight into the key problems in VMM scheduling for I/O and motivates future innovation in this area.
SafeDrive: Safe and recoverable extensions using language-based techniques
- In OSDI ’06, 2006
Cited by 97 (5 self)
We present SafeDrive, a system for detecting and recovering from type safety violations in software extensions. SafeDrive has low overhead and requires minimal changes to existing source code. To achieve this result, SafeDrive uses a novel type system that provides fine-grained isolation for existing extensions written in C. In addition, SafeDrive tracks invariants using simple wrappers for the host system API and restores them when recovering from a violation. This approach achieves fine-grained memory error detection and recovery with few code changes and at a significantly lower performance cost than existing solutions based on hardware-enforced domains, such as Nooks [33], L4 [21], and Xen [13], or software-enforced domains, such as SFI [35]. The principles used in SafeDrive can be applied to any large system with loadable, error-prone extension modules. In this paper we describe our experience using SafeDrive for protection and recovery of a variety of Linux device drivers. In order to apply SafeDrive to these device drivers, we had to change less than 4% of the source code. SafeDrive recovered from all 44 crashes due to injected faults in a network card driver. In experiments with 6 different drivers, we observed increases in kernel CPU utilization of 4–23% with no noticeable degradation in end-to-end performance.
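SafeDrive's annotations and compiler-inserted checks are not reproduced here; the sketch below is a hand-written illustration of the two ingredients the abstract describes, a bounds check guarding a pointer access and recovery that unwinds to a safe point instead of crashing the host.

/* Hand-written illustration of compiler-inserted bounds checking plus
 * recovery; SafeDrive generates checks like this from type annotations. */
#include <stdio.h>
#include <setjmp.h>
#include <stddef.h>

static jmp_buf recovery_point;

/* In SafeDrive the pointer's length comes from an annotation and the check
 * is inserted automatically; here it is written out by hand. */
static int checked_read(const int *buf, size_t len, size_t idx)
{
    if (idx >= len) {
        fprintf(stderr, "bounds violation: idx=%zu len=%zu\n", idx, len);
        longjmp(recovery_point, 1);      /* unwind instead of corrupting memory */
    }
    return buf[idx];
}

int main(void)
{
    int packet[4] = { 10, 20, 30, 40 };

    if (setjmp(recovery_point) != 0) {
        /* "Recovery": abandon the faulting call, restore invariants (none in
         * this toy), and keep the host running. */
        puts("recovered from extension fault");
        return 0;
    }

    printf("in bounds: %d\n", checked_read(packet, 4, 2));
    printf("out of bounds: %d\n", checked_read(packet, 4, 7));  /* triggers recovery */
    return 0;
}
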
Improving Xen security through disaggregation
- In Proceedings of the Fourth ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments
Cited by 76 (3 self)
Virtual machine monitors (VMMs) have been hailed as the basis for an increasing number of reliable or trusted computing systems. The Xen VMM is a relatively small piece of software – a hypervisor – that runs at a lower level than a conventional operating system in order to provide isolation between virtual machines: its size is offered as an argument for its trustworthiness. However, the management of a Xen-based system requires a privileged, full-blown operating system to be included in the trusted computing base (TCB). In this paper, we introduce our work to disaggregate the management virtual machine in a Xen-based system. We begin by analysing the Xen architecture and explaining why the status quo results in a large TCB. We then describe our implementation, which moves the domain builder, the most important privileged component, into a minimal trusted compartment. We illustrate how this approach may be used to implement “trusted virtualisation” and improve the security of virtual TPM implementations. Finally, we evaluate our approach in terms of the reduction in TCB size, and by performing a security analysis of the disaggregated system.
High Performance VMM-Bypass I/O in Virtual Machines
- 2006
Cited by 71 (2 self)
Currently, I/O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (VMM) and/or a privileged VM for each I/O operation, which may turn out to be a performance bottleneck for systems with high I/O demands, especially those equipped with modern high-speed interconnects such as InfiniBand. In this paper, we propose a new device virtualization model called VMM-bypass I/O, which extends the idea of OS-bypass that originated in user-level communication. Essentially, VMM-bypass allows time-critical I/O operations to be carried out directly in guest VMs without involvement of the VMM and/or a privileged VM. By exploiting the intelligence found in modern high-speed network interfaces, VMM-bypass can significantly improve I/O and communication performance for VMs without sacrificing safety or isolation. To demonstrate the idea of VMM-bypass, we have developed a prototype called Xen-IB, which offers InfiniBand virtualization support in the Xen 3.0 VM environment. Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand. Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment.
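The sketch below uses the standard libibverbs API to show where the slow, privileged setup path ends and where the bypassed fast path would begin; it assumes InfiniBand hardware and libibverbs are installed (link with -libverbs), and it stops before queue-pair setup, which the closing comment only describes. It is an illustration of the OS/VMM-bypass split, not code from the Xen-IB prototype.

/* Slow path vs. bypassed fast path with the InfiniBand verbs API. */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no InfiniBand devices found\n"); return 1; }

    /* Slow path: device open, protection domain, memory registration all go
     * through the kernel driver (and, when virtualized, the privileged domain). */
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) { fprintf(stderr, "ibv_open_device failed\n"); return 1; }
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    void *buf = malloc(4096);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,
                                   IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_READ);
    if (!mr) { perror("ibv_reg_mr"); return 1; }

    printf("registered 4 KiB at %p, lkey=0x%x rkey=0x%x\n", buf, mr->lkey, mr->rkey);

    /* Fast path (not shown): after a queue pair is created and connected,
     * ibv_post_send()/ibv_post_recv() write work requests and ring a
     * memory-mapped doorbell directly on the adapter; this is the step that
     * VMM-bypass keeps out of the hypervisor and the driver domain. */

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}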