Results 11 - 20 of 115
Reliable Provisioning of Spot Instances for Compute-intensive Applications
"... Abstract—Cloud computing providers are now offering their unused resources for leasing in the spot market, which has been considered the first step towards a full-fledged market economy for computational resources. Spot instances are virtual machines (VMs) available at lower prices than their standa ..."
Abstract
-
Cited by 11 (1 self)
- Add to MetaCart
(Show Context)
Cloud computing providers are now offering their unused resources for leasing in the spot market, which has been considered the first step towards a full-fledged market economy for computational resources. Spot instances are virtual machines (VMs) available at lower prices than their standard on-demand counterparts. These VMs run for as long as the current price stays below the maximum bid price users are willing to pay per hour. Spot instances have been increasingly used for executing compute-intensive applications. Despite an apparent economic advantage, the intermittent nature of biddable resources means that application execution times may be prolonged, or applications may not finish at all. This paper proposes a resource allocation strategy that addresses the problem of running compute-intensive jobs on a pool of intermittent virtual machines, while also aiming to run applications in a fast and economical way. To mitigate potential unavailability periods, a multifaceted fault-aware resource provisioning policy is proposed. Our solution employs price and runtime estimation mechanisms, as well as three fault-tolerance techniques, namely checkpointing, task duplication and migration. We evaluate our strategies using trace-driven simulations, which take as input real price variation traces, as well as an application trace from the Parallel Workload Archive. Our results demonstrate the effectiveness of executing applications on spot instances, respecting QoS constraints, despite occasional failures.
Index Terms: cloud computing; spot market; scheduling; fault-tolerance
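To make the bidding and checkpointing mechanics concrete, here is a minimal Python sketch of a job running on a single spot instance, assuming hourly billing, an hourly price trace, and a checkpoint at every hour boundary. The names (run_with_checkpoints, price_trace) are illustrative, not taken from the paper.

```python
def run_with_checkpoints(price_trace, bid, work_hours):
    """Simulate a job on a spot instance: the VM runs only while the
    current spot price is strictly below the user's bid, as the abstract
    describes. Progress is checkpointed at each hour boundary, so an
    out-of-bid event loses at most one hour of work. Returns total
    wall-clock hours until the job completes."""
    done = 0.0      # hours of useful work completed and checkpointed
    elapsed = 0     # wall-clock hours observed on the price trace
    for price in price_trace:
        elapsed += 1
        if price < bid:         # in-bid: the instance runs this hour
            done += 1           # checkpoint taken at the hour boundary
        # out-of-bid: instance is revoked; checkpointed work survives
        if done >= work_hours:
            return elapsed
    return None  # trace ended before the job finished

# Example: a 3-hour job under a fluctuating price, bidding $0.10/hour.
trace = [0.08, 0.12, 0.09, 0.07, 0.15, 0.06]
print(run_with_checkpoints(trace, bid=0.10, work_hours=3))  # -> 4
```

With hourly checkpoints, an out-of-bid hour costs at most that wall-clock hour; the task duplication and migration techniques from the paper would layer on top of a loop like this one.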
A Heuristic for Mapping Virtual Machines and Links in Emulation Testbeds
"... Distributed system emulators provide a paramount platform for testing of network protocols and distributed applications in clusters and networks of workstations. However, to allow testers to benefit from these systems, it is necessary an efficient and automatic mapping of hundreds, or even thousands ..."
Abstract
-
Cited by 10 (2 self)
- Add to MetaCart
(Show Context)
Distributed system emulators provide a paramount platform for testing network protocols and distributed applications in clusters and networks of workstations. However, for testers to benefit from these systems, an efficient and automatic mapping is needed of hundreds, or even thousands, of virtual nodes to physical hosts, and of the virtual links between guests to paths in the physical environment. In this paper we present a heuristic that maps both virtual machines to hosts and the virtual links between virtual machines to paths in the real system. We define the problem being addressed, present our solution, and evaluate it in different usage scenarios.
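The two-phase structure in this abstract (place virtual nodes, then route virtual links) can be sketched as a simple greedy heuristic. This is an illustration under assumed data structures, not the paper's exact algorithm: hosts maps host to CPU capacity, phys_links maps a host pair to bandwidth, placement uses first-fit decreasing, and routing uses bandwidth-constrained BFS.

```python
from collections import deque

def map_testbed(vnodes, vlinks, hosts, phys_links):
    """Greedy two-phase mapping: virtual nodes first, virtual links second."""
    # --- node mapping: first-fit decreasing by CPU demand ---
    free = dict(hosts)                        # host -> remaining CPU capacity
    placement = {}
    for vn, demand in sorted(vnodes.items(), key=lambda kv: -kv[1]):
        host = next((h for h, c in free.items() if c >= demand), None)
        if host is None:
            return None                       # no feasible placement
        free[host] -= demand
        placement[vn] = host

    # --- link mapping: shortest path with enough residual bandwidth ---
    residual = dict(phys_links)               # (h1, h2) -> remaining bandwidth
    adj = {}
    for (a, b) in residual:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)

    def edge(a, b):                           # canonical key for an edge
        return (a, b) if (a, b) in residual else (b, a)

    routes = {}
    for (u, v), bw in vlinks.items():
        src, dst = placement[u], placement[v]
        if src == dst:
            routes[(u, v)] = [src]            # co-located: no physical hops
            continue
        parent = {src: None}                  # BFS over sufficiently wide edges
        queue = deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                break
            for nxt in adj.get(node, []):
                if nxt not in parent and residual[edge(node, nxt)] >= bw:
                    parent[nxt] = node
                    queue.append(nxt)
        if dst not in parent:
            return None                       # no feasible path for this link
        path, n = [dst], dst
        while parent[n] is not None:
            n = parent[n]
            path.append(n)
        path.reverse()
        for a, b in zip(path, path[1:]):
            residual[edge(a, b)] -= bw        # reserve bandwidth on the path
        routes[(u, v)] = path
    return placement, routes
```

First-fit decreasing packs the largest virtual nodes early, which limits fragmentation, and reserving bandwidth as each link is routed keeps later routes feasible.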
SLA-based Admission Control for a Software-as-a-Service Provider in Cloud Computing Environments
"... With the increasing popularity of Cloud computing, the requirement for services supporting brokering across multiple infrastructure providers is growing rapidly. Cloud Computing environments are not only dynamic, but also heterogeneous with multiple types of Virtual Machine (VM) offered by various i ..."
Abstract
-
Cited by 9 (3 self)
- Add to MetaCart
(Show Context)
With the increasing popularity of Cloud computing, the requirement for services supporting brokering across multiple infrastructure providers is growing rapidly. Cloud computing environments are not only dynamic but also heterogeneous, with multiple types of Virtual Machine (VM) offered by various infrastructure providers. Similarly, the demand for services can vary with time, which affects the number of VMs to be initiated. In this environment, the aim of Software as a Service (SaaS) providers is to maximize their profit and enhance their reputation by meeting the Service Level Agreement (SLA) requirements of all accepted requests. SLAs are signed between SaaS providers and customers to settle issues such as payment and Quality of Service (QoS). Thus, SaaS providers need effective strategies for deciding whether to accept a particular request, and how many VMs of which type to initiate from a suitable IaaS provider. This paper proposes admission control and scheduling algorithms that take into account dynamic parameters such as variation in VM initiation time and the user's QoS requirements, such as budget, deadline, and penalty rate ratio. The paper also presents an extensive evaluation study to identify which algorithm best suits a particular scenario in order to maximize the SaaS provider's profit.
Keywords: Cloud computing; Service Level Agreement (SLA); Admission Control; Software as a Service
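As a rough illustration of such an admission decision, the sketch below accepts a request only if some VM type can finish it before its deadline and within its budget, after accounting for VM initiation time, and picks the most profitable feasible type. All names, fields, and the pricing rule are assumptions; the paper's actual algorithms also handle penalty rates and scheduling across accepted requests.

```python
def admit(request, vm_types):
    """Return None to reject, or (profit, vm_name) for the best feasible
    VM type: finish time includes the VM's initiation delay, and both the
    deadline and the budget must be respected."""
    best = None
    for vm in vm_types:
        finish = vm["init_time"] + request["service_hours"] * vm["slowdown"]
        cost = vm["price"] * finish              # simple hourly pricing
        if finish <= request["deadline"] and cost <= request["budget"]:
            profit = round(request["budget"] - cost, 4)
            if best is None or profit > best[0]:
                best = (profit, vm["name"])
    return best

request = {"service_hours": 4, "deadline": 6.0, "budget": 2.0}
vm_types = [
    {"name": "small", "price": 0.10, "init_time": 0.5, "slowdown": 1.5},
    {"name": "large", "price": 0.40, "init_time": 0.5, "slowdown": 1.0},
]
print(admit(request, vm_types))  # -> (0.2, 'large'): small misses the deadline
```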
Power-aware Provisioning of Cloud Resources for Real-time Services
- in 7th International Workshop on Middleware for Grids, Clouds and e-Science, 2009
"... ..."
(Show Context)
An Efficient Sensitivity Analysis Method for Large Cloud Simulations
"... Abstract—Simulations of large distributed systems, such as infrastructure clouds, usually entail a large space of parameters and responses that prove impractical to explore. To reduce the space of inputs, experimenters, guided by domain knowledge and ad hoc methods, typically select a subset of para ..."
Abstract
-
Cited by 8 (5 self)
- Add to MetaCart
(Show Context)
Simulations of large distributed systems, such as infrastructure clouds, usually entail a space of parameters and responses that proves impractical to explore. To reduce the space of inputs, experimenters, guided by domain knowledge and ad hoc methods, typically select a subset of parameters and values to simulate. Similarly, experimenters typically use ad hoc methods to reduce the number of responses to analyze. Such ad hoc methods can result in experiment designs that miss significant parameter combinations and important responses, or that overweight selected parameters and responses. When this occurs, the experiment results and subsequent analyses can be misleading. In this paper, we apply an efficient sensitivity analysis method to demonstrate how relevant parameter combinations and behaviors can be identified for an infrastructure cloud simulator intended to compare resource allocation algorithms. Researchers can use the techniques we demonstrate here to design experiments for large cloud simulations, leading to improved quality in derived research results and findings.
Keywords: cloud computing; modeling; resource allocation; sensitivity analysis; simulation
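The abstract does not name the specific method, so the sketch below uses a generic one-at-a-time (OAT) screening as a stand-in: perturb each simulator parameter from a baseline and rank parameters by the size of the response change. Function and parameter names are illustrative.

```python
def oat_screening(model, baseline, deltas):
    """Generic one-at-a-time sensitivity screening: evaluate the model at
    a baseline, perturb each parameter in turn, and rank parameters by
    the magnitude of the resulting response change."""
    base = model(baseline)
    effects = {}
    for name, delta in deltas.items():
        point = dict(baseline)
        point[name] += delta
        effects[name] = round(abs(model(point) - base), 6)
    return sorted(effects.items(), key=lambda kv: -kv[1])

# Toy response surface standing in for one run of a cloud simulator.
def model(p):
    return 3.0 * p["arrival_rate"] + 0.5 * p["vm_startup"] - 0.1 * p["hosts"]

baseline = {"arrival_rate": 10.0, "vm_startup": 2.0, "hosts": 100.0}
deltas = {k: 1.0 for k in baseline}
print(oat_screening(model, baseline, deltas))
# -> arrival_rate dominates; hosts has the smallest effect
```

OAT is cheap but ignores parameter interactions; more efficient designs of the kind the paper targets cover interactions with far fewer runs than a full factorial sweep.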
Provisioning Spot Market Cloud Resources to Create Cost-Effective Virtual Clusters
"... Abstract. Infrastructure-as-a-Service providers are offering their unused resources in the form of variable-priced virtual machines (VMs), known as “spot instances”, at prices significantly lower than their standard fixed-priced resources. To lease spot instances, users specify a maximum price they ..."
Abstract
-
Cited by 7 (2 self)
- Add to MetaCart
(Show Context)
Infrastructure-as-a-Service providers are offering their unused resources in the form of variable-priced virtual machines (VMs), known as “spot instances”, at prices significantly lower than their standard fixed-priced resources. To lease spot instances, users specify a maximum price they are willing to pay per hour, and VMs will run only when the current price is lower than the user’s bid. This paper proposes a resource allocation policy that addresses the problem of running deadline-constrained compute-intensive jobs on a pool composed solely of spot instances, while exploiting variations in price and performance to run applications in a fast and economical way. Our policy relies on job runtime estimations to decide which VM types are best for running each job and when jobs should run. Several estimation methods are evaluated and compared using trace-based simulations, which take as input real price variation traces obtained from Amazon Web Services, as well as an application trace from the Parallel Workload Archive. Results demonstrate the effectiveness of running computational jobs on spot instances, at a fraction (up to 60% lower) of the price that the same workload would cost on fixed-priced resources.
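A minimal sketch of the core selection step, assuming per-type relative speeds, current spot prices, and billing per started hour; the names and catalog values are illustrative, not from the paper.

```python
import math

def pick_vm_type(est_runtime_hours, deadline_hours, vm_types):
    """Scale a runtime estimate (measured on a reference machine) by each
    spot VM type's relative speed, drop types that would miss the
    deadline, and pick the cheapest remaining type at current prices."""
    candidates = []
    for vm in vm_types:
        runtime = est_runtime_hours / vm["speed"]   # faster VM, shorter run
        if runtime <= deadline_hours:
            cost = round(vm["spot_price"] * math.ceil(runtime), 2)
            candidates.append((cost, runtime, vm["name"]))
    return min(candidates) if candidates else None  # cheapest feasible type

vm_types = [
    {"name": "m1.small",  "speed": 1.0, "spot_price": 0.03},
    {"name": "m1.large",  "speed": 4.0, "spot_price": 0.11},
    {"name": "m1.xlarge", "speed": 8.0, "spot_price": 0.25},
]
print(pick_vm_type(est_runtime_hours=10, deadline_hours=4, vm_types=vm_types))
# -> (0.33, 2.5, 'm1.large'): xlarge is faster but dearer; small misses the deadline
```

The quality of the runtime estimate drives everything here, which is why the paper evaluates several estimation methods against real traces.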
Efficient VM Load Balancing Algorithm for Cloud Computing Environment
- in IJCSE, 2012
"... Cloud computing is a fast growing area in computing research and industry today. With the advancement of the Cloud, there are new possibilities opening up on how applications can be built and how different services can be offered to the end user through Virtualization, on the internet. There are the ..."
Abstract
-
Cited by 6 (0 self)
- Add to MetaCart
(Show Context)
Cloud computing is a fast-growing area in computing research and industry today. With the advancement of the Cloud, new possibilities are opening up in how applications can be built and how different services can be offered to the end user over the internet through virtualization. Cloud service providers offer large-scale computing infrastructure priced by usage, and deliver these infrastructure services flexibly so that users can scale them up or down at will. Establishing an effective load-balancing algorithm that uses Cloud resources efficiently is one of the service providers’ ultimate goals. This paper first analyses different Virtual Machine (VM) load-balancing algorithms. It then proposes and implements a new VM load-balancing algorithm, the ‘Weighted Active Monitoring Load Balancing Algorithm’, for an IaaS framework in a simulated cloud computing environment using the CloudSim toolkit: each available virtual machine in the datacenter is assigned a weight, and incoming requests are balanced across the VMs to achieve better performance parameters such as response time and data processing time.
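The abstract gives only the idea of weighting, so the sketch below shows one plausible reading, assuming each VM carries a static weight proportional to its capacity and a new request goes to the VM with the lowest active-load-to-weight ratio. Class and method names are invented for illustration.

```python
class WeightedActiveMonitor:
    """Track active request counts per VM and dispatch each new request
    to the VM whose load-to-weight ratio is currently lowest."""
    def __init__(self, weights):
        self.weights = weights                  # vm_id -> weight (capacity)
        self.active = {vm: 0 for vm in weights}

    def allocate(self, request_id):
        vm = min(self.active, key=lambda v: self.active[v] / self.weights[v])
        self.active[vm] += 1
        return vm

    def release(self, vm):
        self.active[vm] -= 1                    # request finished on this VM

lb = WeightedActiveMonitor({"vm1": 1, "vm2": 2, "vm3": 4})
print([lb.allocate(i) for i in range(7)])
# -> ['vm1', 'vm2', 'vm3', 'vm3', 'vm2', 'vm3', 'vm3']: follows the 1:2:4 weights
```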
Virtual Organization Clusters: Self-provisioned clouds on the grid. Future Generation Computer Systems
"... Virtual Organization Clusters (VOCs) provide a novel architecture for overlaying dedicated cluster systems on existing grid infrastructures. VOCs provide customized, homogeneous execution environments on a per-Virtual Organization basis, without the cost of physical cluster construction or the overh ..."
Abstract
-
Cited by 5 (0 self)
- Add to MetaCart
(Show Context)
Virtual Organization Clusters (VOCs) provide a novel architecture for overlaying dedicated cluster systems on existing grid infrastructures. VOCs provide customized, homogeneous execution environments on a per-Virtual Organization basis, without the cost of physical cluster construction or the overhead of per-job containers. Administrative access and overlay network capabilities are granted to Virtual Organizations (VOs) that choose to implement VOC technology, while the system remains completely transparent to end users and non-participating VOs. Unlike alternative systems that require explicit leases, VOCs are autonomically self-provisioned according to configurable usage policies. As a grid computing architecture, VOCs are designed to be technology agnostic and are implementable by any combination of software and services that follows the Virtual Organization Cluster Model. As demonstrated through simulation testing and evaluation of an implemented prototype, VOCs are a viable mechanism for increasing end-user job compatibility on grid sites. On existing production grids, where jobs are frequently submitted to a small subset of sites and thus experience high queuing delays relative to average job length, the grid-wide addition of VOCs does not adversely […]
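As a loose illustration of policy-driven self-provisioning, the sketch below grows a VO's cluster when its job queue backs up and shrinks it when VMs sit idle. The policy fields and thresholds are assumptions for illustration, not drawn from the VOC model itself.

```python
def adjust_voc(queued_jobs, running_vms, idle_vms, policy):
    """Return the change in VM count for one pass of an autonomic
    provisioning loop: grow when the VO's queue backs up (capped at the
    configured maximum), shrink when the queue is empty and VMs idle."""
    if queued_jobs > policy["grow_threshold"] and running_vms < policy["max_vms"]:
        return min(policy["grow_step"], policy["max_vms"] - running_vms)
    if queued_jobs == 0 and idle_vms > 0:
        return -idle_vms                 # reclaim idle capacity for the grid
    return 0                             # steady state: no change

policy = {"grow_threshold": 5, "grow_step": 2, "max_vms": 20}
print(adjust_voc(queued_jobs=8, running_vms=19, idle_vms=0, policy=policy))
# -> 1 (grow by one VM; capped by max_vms)
```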
Service Level Agreement (SLA) in Utility Computing Systems
"... In recent years, extensive research has been conducted in the area of Service Level Agreement (SLA) for utility computing systems. An SLA is a formal contract used to guarantee that consumers ‟ service quality expectation can be achieved. In utility computing systems, the level of customer satisfact ..."
Abstract
-
Cited by 5 (0 self)
- Add to MetaCart
(Show Context)
In recent years, extensive research has been conducted in the area of Service Level Agreements (SLAs) for utility computing systems. An SLA is a formal contract used to guarantee that consumers’ service quality expectations can be met. In utility computing systems, the level of customer satisfaction is crucial, making SLAs significantly important in these environments. A fundamental aspect is the management of SLAs, including autonomic SLA management and trade-offs among multiple Quality of Service (QoS) parameters. Many SLA languages and frameworks have been developed as solutions; however, there is no overall classification of these extensive works. Therefore, the aim of this chapter is to present a comprehensive survey of how SLAs are created, managed and used in utility computing environments. We discuss existing use cases from Grid and Cloud computing systems to identify the level of SLA realization in state-of-the-art systems and emerging challenges for future research.
Towards a cost model for scheduling scientific workflows activities in cloud environments
- in 2011 IEEE World Congress on Services (SERVICES), 2011
"... Cloud computing has emerged as a new computing model that enables scientists to benefit from several distributed resources such as hardware and software. Clouds are an opportunity for scientists that need high performance computing infrastructure to execute their scientific experiments. Most of the ..."
Abstract
-
Cited by 4 (0 self)
- Add to MetaCart
(Show Context)
Cloud computing has emerged as a new computing model that enables scientists to benefit from several distributed resources, such as hardware and software. Clouds are an opportunity for scientists who need high performance computing infrastructure to execute their scientific experiments. Most experiments modeled as scientific workflows manage the execution of several activities and produce a large amount of data. Since scientists work with large amounts of data, parallel techniques are often a key factor in the experimentation process. However, parallelizing a scientific workflow in a cloud environment is far from trivial. One of the complex tasks is to define the configuration of the environment, i.e. the number and types of virtual machines, and to design the parallel execution strategy. Given the number of options for configuring an environment, doing this manually is hard and may negatively impact performance. This paper proposes a lightweight cost model, based on concepts of quality of service (QoS) in cloud environments, that helps determine an adequate configuration of the environment according to restrictions imposed by scientists.
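One way to picture such a cost model: estimate makespan from a serial/parallel split of the workload, price each candidate configuration, and keep the cheapest one that satisfies the deadline and budget. The formula, names, and catalog below are assumptions for illustration, not the paper's actual model.

```python
def choose_configuration(workload_hours, serial_fraction, deadline, budget, vm_catalog):
    """Enumerate (VM type, instance count) configurations, estimate each
    makespan with a simple serial/parallel split, and return the cheapest
    configuration that meets both QoS restrictions, or None."""
    best = None
    for vm in vm_catalog:
        for n in range(1, vm["max_instances"] + 1):
            serial = workload_hours * serial_fraction / vm["speed"]
            parallel = workload_hours * (1 - serial_fraction) / (n * vm["speed"])
            makespan = serial + parallel
            cost = vm["price"] * n * makespan        # pay for all n VMs throughout
            if makespan <= deadline and cost <= budget:
                if best is None or cost < best[0]:
                    best = (round(cost, 2), n, vm["name"], round(makespan, 2))
    return best

catalog = [
    {"name": "small", "speed": 1.0, "price": 0.10, "max_instances": 8},
    {"name": "large", "speed": 2.0, "price": 0.35, "max_instances": 8},
]
print(choose_configuration(workload_hours=20, serial_fraction=0.1,
                           deadline=6.0, budget=5.0, vm_catalog=catalog))
# -> (2.8, 5, 'small', 5.6): five small VMs beat fewer, faster, dearer ones
```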