Eucalyptus: A Technical Report on an Elastic Utility Computing Architecture Linking Your Programs to Useful Systems (2008)

by D Nurmi, R Wolski, C Grzegorczyk, G Obertelli, S Soman, L Youseff, D Zagorodnov
Results 1 - 10 of 40

Above the Clouds: A Berkeley View of Cloud Computing

by Michael Armbrust, Armando Fox, Rean Griffith, Anthony D. Joseph, Randy H. Katz, Andrew Konwinski, Gunho Lee, David A. Patterson, Ariel Rabkin, Matei Zaharia , 2009
Abstract - Cited by 955 (14 self)
personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission. Acknowledgement The RAD Lab's existence is due to the generous support of the founding members Google, Microsoft, and Sun Microsystems and of the affiliate members Amazon Web Services, Cisco Systems, Facebook, Hewlett-

Citation Context

...Is come from open source efforts from outside these companies. Hadoop and Hypertable are efforts to recreate the Google infrastructure [11], and Eucalyptus recreates important aspects of the EC2 API [34]. 10 Indeed, harking back to Section 2, “surge chip fabrication” is one of the common uses of “chip-less” fabrication companies like TSMC. 11 A 1TB 3.5” disk weighs 1.4 pounds. If we assume that packag...

Towards Trusted Cloud Computing

by Nuno Santos, Krishna P. Gummadi, Rodrigo Rodrigues - HotCloud, 2009
Abstract - Cited by 96 (1 self)
Cloud computing infrastructures enable companies to cut costs by outsourcing computations on-demand. However, clients of cloud computing services currently have no means of verifying the confidentiality and integrity of their data and computation. To address this problem we propose the design of a trusted cloud computing platform (TCCP). TCCP enables Infrastructure as a Service (IaaS) providers such as Amazon EC2 to provide a closed box execution environment that guarantees confidential execution of guest virtual machines. Moreover, it allows users to attest to the IaaS provider and determine whether or not the service is secure before they launch their virtual machines.

Citation Context

...ers where securing a customer’s VM is more manageable. While very little detail is known about the internal organization of commercial IaaS services, we describe (and base our proposal on) Eucalyptus [6], an open source IaaS platform that offers an interface similar to EC2. Figure 1 presents a very simplified architecture of Eucalyptus. This system manages one or more clusters whose nodes run a virtu...

Exploiting Dynamic Resource Allocation for Efficient Parallel Data Processing in the Cloud

by Daniel Warneke, Odej Kao - IEEE Trans. Parallel and Distributed Systems, 2011
Abstract - Cited by 51 (2 self)
Abstract—In recent years ad hoc parallel data processing has emerged to be one of the killer applications for Infrastructure-as-a-Service (IaaS) clouds. Major Cloud computing companies have started to integrate frameworks for parallel data processing in their product portfolio, making it easy for customers to access these services and to deploy their programs. However, the processing frameworks which are currently used have been designed for static, homogeneous cluster setups and disregard the particular nature of a cloud. Consequently, the allocated compute resources may be inadequate for big parts of the submitted job and unnecessarily increase processing time and cost. In this paper, we discuss the opportunities and challenges for efficient parallel data processing in clouds and present our research project Nephele. Nephele is the first data processing framework to explicitly exploit the dynamic resource allocation offered by today’s IaaS clouds for both task scheduling and execution. Particular tasks of a processing job can be assigned to different types of virtual machines which are automatically instantiated and terminated during the job execution. Based on this new framework, we perform extended evaluations of MapReduce-inspired processing jobs on an IaaS cloud system and compare the results to the popular data processing framework Hadoop. Index Terms—Many-task computing, high-throughput computing, loosely coupled applications, cloud computing.

Citation Context

...s Gentoo Linux (kernel version 2.6.30) with KVM [15] (version 88-r1) using virtio [23] to provide virtual I/O access. To manage the cloud and provision VMs on request of Nephele, we set up Eucalyptus [16]. Similar to Amazon EC2, Eucalyptus offers a predefined set of instance types a user can choose from. During our experiments, we used two different instance types: The first instance type was “m1.smal...

Evaluating the Cost-Benefit of Using Cloud Computing to Extend the Capacity of Clusters

by Marcos Dias De Assunção, Alexandre Di Costanzo, Rajkumar Buyya - In Proceedings of the International Symposium on High Performance Distributed Computing (HPDC 2009), 2009
Abstract - Cited by 42 (7 self)
In this paper, we investigate the benefits that organisations can reap by using “Cloud Computing” providers to augment the computing capacity of their local infrastructure. We evaluate the cost of six scheduling strategies used by an organisation that operates a cluster managed by virtual machine technology and seeks to utilise resources from a remote Infrastructure as a Service (IaaS) provider to reduce the response time of its user requests. Requests for virtual machines are submitted to the organisation’s cluster, but additional virtual machines are instantiated in the remote provider and added to the local cluster when there are insufficient resources to serve the users’ requests. Naïve scheduling strategies can have a great impact on the amount paid by the organisation for using the remote resources, potentially increasing the overall cost with the use of IaaS. Therefore, in this work we investigate six scheduling strategies that consider the use of resources from the “Cloud”, to understand how these strategies achieve a balance between performance and usage cost, and how much they improve the requests’ response times.
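The basic borrow-from-the-cloud idea in this abstract can be sketched in a few lines. This is a minimal illustration, not one of the paper's six strategies; the class names and the fixed per-VM-hour price are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    capacity: int   # local VM slots
    in_use: int = 0

@dataclass
class Scheduler:
    """Naive strategy: serve a VM request locally if a slot is free,
    otherwise instantiate a VM at the remote IaaS provider."""
    local: Cluster
    price_per_vm_hour: float = 0.10   # hypothetical IaaS price
    rented_vm_hours: float = 0.0

    def submit(self, vm_hours: float) -> str:
        if self.local.in_use < self.local.capacity:
            self.local.in_use += 1
            return "local"
        # Local cluster is full: overflow to the remote provider.
        self.rented_vm_hours += vm_hours
        return "remote"

    def cost(self) -> float:
        # Amount paid to the remote provider for the borrowed capacity.
        return self.rented_vm_hours * self.price_per_vm_hour

sched = Scheduler(Cluster(capacity=2))
placements = [sched.submit(vm_hours=1.0) for _ in range(3)]
print(placements)      # → ['local', 'local', 'remote']
print(sched.cost())    # → 0.1
```

A naive policy like this pays for every overflow hour; the strategies evaluated in the paper differ precisely in how they decide when borrowing is worth the cost.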

Citation Context

...ts user applications in a way that guarantees acceptable response time. The resources of the local cluster are managed by a Virtual Infrastructure Engine (VIE) such as OpenNebula [13] and Eucalyptus [25]. The VIE can start, pause, resume, and stop Virtual Machines (VMs) on the physical resources offered by the cluster. The scheduling decisions at the cluster are performed by the Scheduler, which leas...

C-Meter: A Framework for Performance Analysis of Computing Clouds, IEEE/ACM Symposium on Cluster Computing and Cloud

by Nezih Yigitbasi, Alexandru Iosup, Dick Epema, Simon Ostermann, 2009
Abstract - Cited by 42 (2 self)
Abstract—Cloud computing has emerged as a new technology that provides large amounts of computing and data storage capacity to its users with a promise of increased scalability, high availability, and reduced administration and maintenance costs. As the use of cloud computing environments increases, it becomes crucial to understand the performance of these environments. So, it is of great importance to assess the performance of computing clouds in terms of various metrics, such as the overhead of acquiring and releasing the virtual computing resources, and other virtualization and network communications overheads. To address these issues, we have designed and implemented C-Meter, which is a portable, extensible, and easy-to-use framework for generating and submitting test workloads to computing clouds. In this paper, first we state the requirements for frameworks to assess the performance of computing clouds. Then, we present the architecture of the C-Meter framework and discuss several cloud resource management alternatives. Finally, we present our early experiences with C-Meter in Amazon EC2. We show how C-Meter can be used for assessing the overhead of acquiring and releasing the virtual computing resources, for comparing different configurations, and for evaluating different scheduling algorithms.
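The acquire/release overhead that C-Meter measures can be illustrated with a generic timing harness. The provisioning callables below are hypothetical stand-ins, not C-Meter's actual API:

```python
import time

def measure_overhead(acquire, release):
    """Time the acquire and release phases of a cloud resource separately,
    in the spirit of C-Meter's resource-overhead metrics."""
    t0 = time.perf_counter()
    handle = acquire()                    # e.g. request and boot a VM
    t_acquire = time.perf_counter() - t0

    t0 = time.perf_counter()
    release(handle)                       # e.g. terminate the VM
    t_release = time.perf_counter() - t0
    return t_acquire, t_release

# Stand-ins simulating a slow boot and a near-instant teardown.
acq, rel = measure_overhead(lambda: time.sleep(0.05) or "vm-1",
                            lambda h: None)
print(acq > rel)   # acquisition dominates in this simulated run
```

A real harness would wrap the cloud's provisioning API (for EC2, the RunInstances/TerminateInstances calls) instead of these stubs, and would repeat the measurement to report distributions rather than single samples.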

Citation Context

...ster consisting of equivalent processors; he also uses the NAS Parallel Benchmarks and the mpptest micro benchmark for demonstrating the performance of message passing applications. Wolski et al. [11] introduce their open source cloud computing software framework Eucalyptus and present the results of their experiments, which compare the instance throughput and network performance against Amazon EC...

AppScale: Scalable and Open AppEngine Application Development and Deployment

by Navraj Chohan, Chris Bunch, Sydney Pang, Chandra Krintz, Nagy Mostafa, Sunil Soman, Rich Wolski
Abstract - Cited by 34 (10 self)
Abstract. We present the design and implementation of AppScale, an open source extension to the Google AppEngine (GAE) Platform-as-a-Service (PaaS) cloud technology. Our extensions build upon the GAE SDK to facilitate distributed execution of GAE applications over virtualized cluster resources, including Infrastructure-as-a-Service (IaaS) cloud systems such as Amazon’s AWS/EC2 and Eucalyptus. AppScale provides a framework with which researchers can investigate the interaction between PaaS and IaaS systems as well as the inner workings of, and new technologies for, PaaS cloud technologies using real GAE applications.

Citation Context

... typically significantly less than the cost of owning and maintaining even a small subset of the resources that these commercial entities make available to users for application execution. Eucalyptus [20] is an open-source IaaS system that implements the AWS interface. Eucalyptus is compatible with AWS to the extent that commercial tools designed to work with EC2 (e.g., Rightscale [22], Elastra [11], ...

Policy-sealed data: A new abstraction for building trusted cloud services

by Nuno Santos, Rodrigo Rodrigues, Krishna P. Gummadi, Stefan Saroiu - In USENIX Security, 2012
Abstract - Cited by 33 (8 self)
Accidental or intentional mismanagement of cloud software by administrators poses a serious threat to the integrity and confidentiality of customer data hosted by cloud services. Trusted computing provides an important foundation for designing cloud services that are more resilient to these threats. However, current trusted computing technology is ill-suited to the cloud as it exposes too many internal details of the cloud infrastructure, hinders fault tolerance and load-balancing flexibility, and performs poorly. We present Excalibur, a system that addresses these limitations by enabling the design of trusted cloud services. Excalibur provides a new trusted computing abstraction, called policy-sealed data, that lets data be sealed (i.e., encrypted to a customer-defined policy) and then unsealed (i.e., decrypted) only by nodes whose configurations match the policy. To provide this abstraction, Excalibur uses attribute-based encryption, which reduces the overhead of key management and improves the performance of the distributed protocols employed. To demonstrate that Excalibur is practical, we incorporated it in the Eucalyptus open-source cloud platform. Policy-sealed data can provide greater confidence to Eucalyptus customers that their data is not being mismanaged.

Citation Context

...ls using a protocol verifier [12]. To demonstrate the practicality of Excalibur, we built a proof-of-concept compute service akin to EC2. Based on the Eucalyptus open source cloud management platform [36], our service leveraged Excalibur to give users better guarantees regarding the type of hypervisor or the location where their VM instances run. Our experience shows that Excalibur’s primitive is simp...

An Operating System for Multicore and Clouds: Mechanisms and Implementation

by David Wentzlaff, Kevin Modzelewski, Jason Miller, Charles Gruenwald III, Adam Belay, Anant Agarwal, Nathan Beckmann, Lamia Youseff
Abstract - Cited by 31 (4 self)
Cloud computers and multicore processors are two emerging classes of computational hardware that have the potential to provide unprecedented compute capacity to the average user. In order for the user to effectively harness all of this computational power, operating systems (OSes) for these new hardware platforms are needed. Existing multicore operating systems do not scale to large numbers of cores, and do not support clouds. Consequently, current day cloud systems push much complexity onto the user, requiring the user to manage individual Virtual Machines (VMs) and deal with many system-level concerns. In this work we describe the mechanisms and implementation of a factored operating system named fos. fos is a single system image operating system across both multicore and Infrastructure as a Service (IaaS) cloud systems. fos tackles OS scalability challenges by factoring the OS into its component system services. Each system service is further factored into a collection of Internet-inspired servers which communicate via messaging. Although designed in a manner similar to distributed Internet services, OS services instead provide traditional kernel services such as file systems, scheduling, memory management, and access to hardware. fos also implements new classes of OS services like fault tolerance and demand elasticity. In this work, we describe our working fos implementation, and provide early performance measurements of fos for both intra-machine and inter-machine operations.

Citation Context

...h a 2GB disk file, time for the second fos VM to receive an IP address via DHCP, and the round trip time of the TCP messages sent by the proxy servers when sharing state. For a point of reference, in [20, 19], the Eucalyptus team found that it takes approximately 24 seconds to start up a VM using Eucalyptus, but this is using a very different machine and network setup, making these numbers difficult to co...

Cost- and Deadline-Constrained Provisioning for Scientific Workflow Ensembles in IaaS Clouds

by Maciej Malawski, Gideon Juve, Ewa Deelman, Jarek Nabrzyski
Abstract - Cited by 21 (3 self)
Abstract—Large-scale applications expressed as scientific workflows are often grouped into ensembles of inter-related workflows. In this paper, we address a new and important problem concerning the efficient management of such ensembles under budget and deadline constraints on Infrastructure-as-a-Service (IaaS) clouds. We discuss, develop, and assess algorithms based on static and dynamic strategies for both task scheduling and resource provisioning. We perform the evaluation via simulation using a set of scientific workflow ensembles with a broad range of budget and deadline parameters, taking into account uncertainties in task runtime estimations, provisioning delays, and failures. We find that the key factor determining the performance of an algorithm is its ability to decide which workflows in an ensemble to admit or reject for execution. Our results show that an admission procedure based on workflow structure and estimates of task runtimes can significantly improve the quality of solutions.

Citation Context

...ually becomes available to the application. Typically these provisioning delays are on the order of a few minutes, and are highly dependent upon the cloud architecture and/or the size of the VM image [26]. We assume that resources are billed from the minute that they are requested until they are terminated. As a result, provisioning delays have an impact on both the cost and makespan of an ensemble. F...
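Under the billing model this context describes (charged per minute from request until termination), the provisioning delay is paid for even though no task runs during it. A small hypothetical calculation, with all numbers illustrative:

```python
def billed_cost(work_minutes, provisioning_delay_min, price_per_minute):
    """A VM is billed from the minute it is requested until it is
    terminated, so the provisioning delay is charged even though no
    useful work happens during it."""
    billed_minutes = provisioning_delay_min + work_minutes
    return billed_minutes * price_per_minute

# Hypothetical numbers: a 3-minute provisioning delay ("on the order of
# a few minutes"), 60 minutes of useful work, $0.002 per VM-minute.
cost = billed_cost(60, 3, 0.002)
overhead = 3 / 63            # fraction of the bill spent waiting
print(round(cost, 3), round(overhead, 3))   # → 0.126 0.048
```

The overhead fraction grows as tasks get shorter, which is one reason provisioning delay matters for workflow ensembles made of many small tasks: the delay also pushes back when each VM can start contributing, lengthening the makespan.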

Scientific workflows and clouds

by Gideon Juve, Ewa Deelman - Crossroads, 2010
Abstract - Cited by 20 (1 self)
Abstract. The development of cloud computing has generated significant interest in the scientific computing community. In this chapter we consider the impact of cloud computing on scientific workflow applications. We examine the benefits and drawbacks of cloud computing for workflows, and argue that the primary benefit of cloud computing is not the economic model it promotes, but rather the technologies it employs and how they enable new features for workflow applications. We describe how clouds can be configured to execute workflow tasks, and present a case study that examines the performance and cost of three typical workflow applications on Amazon EC2. Finally, we identify several areas in which existing clouds can be improved and discuss the future of workflows in the cloud.

Citation Context

...ance. Current estimates put the overhead of existing virtualization software at around 10 percent [2, 15, 51] and VM startup time takes between 15 and 80 seconds depending on the size of the VM image [19, 32]. Fortunately, advances in virtualization technology, such as improved hardware-assisted virtualization, may reduce or eliminate runtime overheads in the future. Lack of shared or parallel file system...


Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University