Container-based operating system virtualization: A scalable, high-performance alternative to hypervisors
In Proc. 2nd ACM European Conference on Computer Systems (EuroSys), 2007
"... Hypervisors, popularized by Xen and VMware, are quickly becoming commodity. They are appropriate for many us-age scenarios, but there are scenarios that require system virtualization with high degrees of both isolation and effi-ciency. Examples include HPC clusters, the Grid, hosting centers, and Pl ..."
Abstract
-
Cited by 117 (6 self)
- Add to MetaCart
(Show Context)
Hypervisors, popularized by Xen and VMware, are quickly becoming commodity. They are appropriate for many usage scenarios, but there are scenarios that require system virtualization with high degrees of both isolation and efficiency. Examples include HPC clusters, the Grid, hosting centers, and PlanetLab. We present an alternative to hypervisors that is better suited to such scenarios. The approach is a synthesis of prior work on resource containers and security containers applied to general-purpose, time-shared operating systems. Examples of such container-based systems include Solaris 10, Virtuozzo for Linux, and Linux-VServer. As a representative instance of container-based systems, this paper describes the design and implementation of Linux-VServer. In addition, it contrasts the architecture of Linux-VServer with current generations of Xen, and shows how Linux-VServer provides comparable support for isolation and superior system efficiency.
In VINI veritas: realistic and controlled network experimentation
In Proc. ACM SIGCOMM, 2006
"... This paper describes VINI, a virtual network infrastructure that allows network researchers to evaluate their protocols and services in a realistic environment that also provides a high degree of control over network conditions. VINI allows researchers to deploy and evaluate their ideas with real ro ..."
Abstract
-
Cited by 96 (4 self)
- Add to MetaCart
(Show Context)
This paper describes VINI, a virtual network infrastructure that allows network researchers to evaluate their protocols and services in a realistic environment that also provides a high degree of control over network conditions. VINI allows researchers to deploy and evaluate their ideas with real routing software, traffic loads, and network events. To provide researchers flexibility in designing their experiments, VINI supports simultaneous experiments with arbitrary network topologies on a shared physical infrastructure. This paper tackles the following important design question: What set of concepts and techniques facilitate flexible, realistic, and controlled experimentation (e.g., multiple topologies and the ability to tweak routing algorithms) on a fixed physical infrastructure? We first present VINI’s high-level design and the challenges of virtualizing a single network. We then present PL-VINI, an implementation of VINI on PlanetLab, running the “Internet In a Slice”. Our evaluation of PL-VINI shows that it provides a realistic and controlled environment for evaluating new protocols and services.
Cabernet: Connectivity Architecture for Better Network Services
"... Deploying and managing wide-area network services is exceptionally challenging. Despite having servers at many locations, a service provider must rely on an underlying besteffort network; a network provider can offer services over its own customized network, but only within limited footprint. In thi ..."
Abstract
-
Cited by 29 (0 self)
- Add to MetaCart
(Show Context)
Deploying and managing wide-area network services is exceptionally challenging. Despite having servers at many locations, a service provider must rely on an underlying best-effort network; a network provider can offer services over its own customized network, but only within a limited footprint. In this paper, we propose Cabernet (Connectivity Architecture for Better Network Services), a three-layer network architecture that lowers the barrier for deploying wide-area services. We introduce the connectivity layer, which uses virtual links purchased from infrastructure providers to run virtual networks with the necessary geographic footprint, reliability, and performance for the service providers. As an example, we present a cost-effective way to support IPTV delivery through wide-area IP multicast that runs on top of a reliable virtual network.
Antiquity: Exploiting a secure log for wide-area distributed storage
In EuroSys, 2007
"... Antiquity is a wide-area distributed storage system designed to provide a simple storage service for applications like file systems and back-up. The design assumes that all servers eventually fail and attempts to maintain data despite those failures. Antiquity uses a secure log to maintain data inte ..."
Abstract
-
Cited by 19 (5 self)
- Add to MetaCart
(Show Context)
Antiquity is a wide-area distributed storage system designed to provide a simple storage service for applications like file systems and back-up. The design assumes that all servers eventually fail and attempts to maintain data despite those failures. Antiquity uses a secure log to maintain data integrity, replicates each log on multiple servers for durability, and uses dynamic Byzantine fault-tolerant quorum protocols to ensure consistency among replicas. We present Antiquity’s design and an experimental evaluation with global and local testbeds. Antiquity has been running for over two months on 400+ PlanetLab servers storing nearly 20,000 logs totaling more than 84 GB of data. Despite constant server churn, all logs remain durable.
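The following is a minimal sketch of the hash-chained, append-only log idea that such a secure log rests on. It is illustrative only, not the Antiquity protocol, which additionally signs records, replicates the log, and runs Byzantine fault-tolerant quorum agreement across replicas; the class and method names are hypothetical. Each record commits to the digest of the log head before it, so tampering with any earlier record is caught during verification.

    # Illustrative hash-chained append-only log (not the Antiquity implementation).
    import hashlib

    class SecureLog:
        def __init__(self):
            self.records = []              # list of (payload, digest) pairs
            self._head = b"\x00" * 32      # digest representing the empty log

        def append(self, payload: bytes) -> bytes:
            # Each new digest covers the previous head, chaining the records together.
            digest = hashlib.sha256(self._head + payload).digest()
            self.records.append((payload, digest))
            self._head = digest
            return digest                  # a real system would sign and replicate this head

        def verify(self) -> bool:
            # Recompute the chain from scratch; any modified record breaks it.
            head = b"\x00" * 32
            for payload, digest in self.records:
                head = hashlib.sha256(head + payload).digest()
                if head != digest:
                    return False
            return True

    log = SecureLog()
    log.append(b"write block 1")
    log.append(b"write block 2")
    print(log.verify())                    # True
    log.records[0] = (b"tampered", log.records[0][1])
    print(log.verify())                    # False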
ExoGENI: A Multi-Domain Infrastructure-as-a-Service Testbed
In TridentCom: International Conference on Testbeds and Research Infrastructures for the Development of Networks and Communities, 2012
"... NSF’s GENI program seeks to enable experiments that run within virtual network topologies built-toorder from testbed infrastructure offered by multiple providers (domains). GENI is often viewed as a network testbed integration effort, but behind it is an ambitious vision for multi-domain infrastruct ..."
Abstract
-
Cited by 9 (7 self)
- Add to MetaCart
(Show Context)
NSF’s GENI program seeks to enable experiments that run within virtual network topologies built to order from testbed infrastructure offered by multiple providers (domains). GENI is often viewed as a network testbed integration effort, but behind it is an ambitious vision for multi-domain infrastructure-as-a-service (IaaS). This paper presents ExoGENI, a new GENI testbed that links GENI to two advances in virtual infrastructure services outside of GENI: open cloud computing (OpenStack) and dynamic circuit fabrics. ExoGENI orchestrates a federation of independent cloud sites and circuit providers through their native IaaS interfaces, and links them to other GENI tools and resources. The ExoGENI deployment consists of cloud site “racks” on host campuses within the US, linked with national research networks and other circuit networks through programmable exchange points. The ExoGENI sites and control software are enabled for software-defined networking using OpenFlow. ExoGENI offers a powerful unified hosting platform for deeply networked, multi-domain, multi-site cloud applications. We intend that ExoGENI will seed a larger, evolving platform linking other third-party cloud sites, transport networks, and other infrastructure services, and that it will enable real-world deployment of innovative distributed services and new visions of a Future Internet.
Learning from PlanetLab
In Proceedings of the 3rd WORLDS, 2006
"... PlanetLab has been an enormously successful testbed for networking and distributed systems research, and it is likely to have a significant influence on future systems. In this paper, we examine PlanetLab’s success, and caution against an uncritical acceptance of the factors that led to it. We discu ..."
Abstract
-
Cited by 9 (0 self)
- Add to MetaCart
(Show Context)
PlanetLab has been an enormously successful testbed for networking and distributed systems research, and it is likely to have a significant influence on future systems. In this paper, we examine PlanetLab’s success, and caution against an uncritical acceptance of the factors that led to it. We discuss nine design decisions that were essential to PlanetLab’s initial success and yet in our view should be revisited in order to better position PlanetLab for its future growth.
Lightweight, High-Resolution Monitoring for Troubleshooting Production Systems
"... Production systems are commonly plagued by intermittent problems that are difficult to diagnose. This paper describes a new diagnostic tool, called Chopstix, that continuously collects profiles of low-level OS events (e.g., scheduling, L2 cache misses, CPU utilization, I/O operations, page allocatio ..."
Abstract
-
Cited by 9 (0 self)
- Add to MetaCart
(Show Context)
Production systems are commonly plagued by intermittent problems that are difficult to diagnose. This paper describes a new diagnostic tool, called Chopstix, that continuously collects profiles of low-level OS events (e.g., scheduling, L2 cache misses, CPU utilization, I/O operations, page allocation, locking) at the granularity of executables, procedures and instructions. Chopstix then reconstructs these events offline for analysis. We have used Chopstix to diagnose several elusive problems in a large-scale production system, thereby reducing these intermittent problems to reproducible bugs that can be debugged using standard techniques. The key to Chopstix is an approximate data collection strategy that incurs very low overhead. An evaluation shows Chopstix requires under 1% of the CPU, under 256KB of RAM, and under 16MB of disk space per day to collect a rich set of system-wide data.
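To make the low-overhead, approximate collection idea concrete, here is a minimal sketch; it is illustrative only and not the Chopstix implementation, whose actual data structures differ, and the names used are hypothetical. Each event is recorded with a small fixed probability and the observed counts are scaled back up, so the common-case cost per event is a single random draw and memory stays bounded.

    # Illustrative sampled event counting (not the Chopstix data-collection code).
    import random
    from collections import Counter

    class SampledEventCounter:
        def __init__(self, sample_rate: float = 0.01):
            self.sample_rate = sample_rate     # fraction of events actually recorded
            self.counts = Counter()            # (event_type, code_site) -> sampled count

        def record(self, event_type: str, code_site: str) -> None:
            # Fast path: most events fall through without touching the table.
            if random.random() < self.sample_rate:
                self.counts[(event_type, code_site)] += 1

        def estimate(self, event_type: str, code_site: str) -> float:
            # Scale the sampled count back up to an estimate of the true count.
            return self.counts[(event_type, code_site)] / self.sample_rate

    counter = SampledEventCounter(sample_rate=0.01)
    for _ in range(100_000):
        counter.record("l2_miss", "parse_request")
    print(round(counter.estimate("l2_miss", "parse_request")))   # roughly 100000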
Experiences from a Decade of TinyOS Development
"... When first written in 2000, TinyOS’s users were a handful of academic computer science researchers. A decade later, TinyOS averages 25,000 downloads a year, is in many commercial products, and remains a platform used for a great deal of sensor network, low-power systems, and wireless research. We fo ..."
Abstract
-
Cited by 8 (0 self)
- Add to MetaCart
(Show Context)
When first written in 2000, TinyOS’s users were a handful of academic computer science researchers. A decade later, TinyOS averages 25,000 downloads a year, is in many commercial products, and remains a platform used for a great deal of sensor network, low-power systems, and wireless research. We focus on how technical and social decisions influenced this success, sometimes in surprising ways. As TinyOS matured, it evolved language extensions to help experts write efficient, robust systems. These extensions revealed insights and novel programming abstractions for embedded software. Using these abstractions, experts could build increasingly complex systems more easily than with other operating systems, making TinyOS the dominant choice. This success, however, came at a long-term cost. System design decisions that seem good at first can have unforeseen and undesirable implications that play out over the span of years. Today, TinyOS is a stable, self-contained ecosystem that is discouraging to new users. Other systems, such as Arduino and Contiki, by remaining more accessible, have emerged as better solutions for simpler embedded sensing applications.
On the use of computational geometry to detect software faults at runtime
"... Despite advances in software engineering, software faults continue to cause system downtime. Software faults are difficult to detect before the system fails, especially since the first symptom of a fault is often system failure itself. This paper presents a computational geometry technique and a sup ..."
Abstract
-
Cited by 6 (2 self)
- Add to MetaCart
(Show Context)
Despite advances in software engineering, software faults continue to cause system downtime. Software faults are difficult to detect before the system fails, especially since the first symptom of a fault is often system failure itself. This paper presents a computational geometry technique and a supporting tool to tackle the problem of timely fault detection during the execution of a software application. The approach involves collecting a variety of runtime measurements and building a geometric enclosure, such as a convex hull, which represents the normal (i.e., non-failing) operating space of the application being monitored. When collected runtime measurements are classified as being outside of the enclosure, the application is considered to be in an anomalous (i.e., failing) state. This paper presents experimental results that illustrate the advantages of using a computational geometry approach over the distance-based approaches of Chi-Squared and Mahalanobis distance. Additionally, we present results illustrating the advantages of using the convex-hull enclosure for fault detection over a simpler enclosure such as a hyper-rectangle.
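A minimal sketch of the enclosure idea follows, assuming numpy and scipy are available; it illustrates the general convex-hull technique rather than the authors' tool, and the function names are hypothetical. Measurement vectors from healthy runs define the hull, and a new measurement is flagged as anomalous when it falls outside it (scipy's Delaunay.find_simplex returns -1 for points outside the hull of the training data).

    # Illustrative convex-hull anomaly detector over runtime measurements.
    import numpy as np
    from scipy.spatial import Delaunay

    def fit_normal_enclosure(normal_samples: np.ndarray) -> Delaunay:
        # normal_samples: (n_samples, n_metrics) matrix gathered from healthy runs.
        return Delaunay(normal_samples)

    def is_anomalous(enclosure: Delaunay, measurement: np.ndarray) -> bool:
        # Outside the convex hull of normal behavior -> treat as a failing state.
        return bool(enclosure.find_simplex(measurement.reshape(1, -1))[0] < 0)

    rng = np.random.default_rng(0)
    normal = rng.normal(loc=[50.0, 200.0], scale=[5.0, 20.0], size=(500, 2))  # e.g. CPU %, heap MB
    hull = fit_normal_enclosure(normal)
    print(is_anomalous(hull, np.array([52.0, 210.0])))   # near the center -> False
    print(is_anomalous(hull, np.array([95.0, 900.0])))   # far outside -> True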