Results 1 - 8 of 8
"Control strategies for predictable brownout in Cloud Computing". In: IFAC WC, 2014
Cited by 5 (4 self)
Abstract: Cloud computing is an application hosting model providing the illusion of infinite computing power. However, even the largest datacenters have finite computing capacity, and cloud infrastructures have therefore experienced overload due to overbooking or transient failures. The topic of this paper is the comparison of different control strategies to mitigate overload in datacenters, assuming that the running cloud applications are cooperative and help the infrastructure recover from critical events. Specifically, the paper investigates the behavior of different controllers when they have to keep the average response time of a cloud application below a certain threshold by acting on the probability of serving requests with optional computations disabled, which diminishes the pressure each request exerts on the infrastructure at the expense of user experience.
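The feedback loop the abstract describes can be sketched as a minimal integral controller (a hypothetical illustration under assumed names and gains, not any of the controllers compared in the paper): it raises or lowers the probability of serving requests with optional computations disabled so that the measured average response time tracks a setpoint.

```python
# Hypothetical sketch of a brownout-style controller. An integral
# controller adjusts the probability of disabling optional computations
# so the measured average response time stays near a setpoint.
# Class name, gain, and setpoint are illustrative, not from the paper.

class BrownoutController:
    def __init__(self, setpoint, gain=0.5):
        self.setpoint = setpoint   # target average response time (seconds)
        self.gain = gain           # integral gain
        self.p_disable = 0.0       # probability of serving degraded requests

    def update(self, measured_rt):
        # Positive error => application too slow => degrade more requests.
        error = measured_rt - self.setpoint
        self.p_disable += self.gain * error / self.setpoint
        # Probabilities must stay in [0, 1].
        self.p_disable = min(1.0, max(0.0, self.p_disable))
        return self.p_disable

controller = BrownoutController(setpoint=0.5)
p = controller.update(measured_rt=1.0)   # overloaded: degrade more
```

Under load (measured time above the setpoint) the controller increases the degradation probability; when the system recovers, the negative error winds it back down.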
Consistency in the Cloud: When Money Does Matter!, 2012
Cited by 1 (1 self)
Abstract—With the emergence of cloud computing, many organizations have moved their data to the cloud in order to provide scalable, reliable and highly available services. To meet ever-growing user needs, these services mainly rely on geographically-distributed data replication to guarantee good performance and high availability. However, with replication, consistency comes into question. Service providers in the cloud have the freedom to select the level of consistency according to the access patterns exhibited by the applications. Most optimization efforts then concentrate on how to provide adequate trade-offs between consistency guarantees and performance. However, since the monetary cost depends entirely on the service provider, in this paper we argue that monetary cost should be taken into consideration when evaluating or selecting a consistency level in the cloud. Accordingly, we define a new metric called consistency-cost efficiency. Based on this metric, we present a simple yet efficient economical consistency model, called Bismar, that adaptively tunes the consistency level at run-time in order to reduce the monetary cost while simultaneously maintaining a low fraction of stale reads. Experimental evaluations with the Cassandra cloud storage on a Grid’5000 testbed show the validity of the metric and demonstrate the effectiveness of the proposed consistency model. Keywords: cloud storage; geographical replication; consistency; monetary cost; efficiency.
(2013)" Self-Adaptive Cost-Efficient Consistency Management in the Cloud
, 2013
"... Abstract—Many data-intensive applications and services in the cloud are geo-distributed and rely on geo-replication. Traditional synchronous replication that ensures strong consistency exposes these systems to the bottleneck of wide areas network latencies that affect their performance, availability ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
(Show Context)
Abstract—Many data-intensive applications and services in the cloud are geo-distributed and rely on geo-replication. Traditional synchronous replication that ensures strong consistency exposes these systems to the bottleneck of wide areas network latencies that affect their performance, availability and the monetary cost of running in the cloud. In this context, several weaker consistency models were introduced to hide such effects. However, these solutions may tolerate far too much stale data to be read. In this PhD research, we focus on the investigation of better and efficient ways to manage consistency. We propose self-adaptive methods that tune consistency levels at runtime in order to achieve better performance, availability and reduce the monetary cost without violating the consistency requirements of the application. Furthermore, we introduce a behavior modeling method that automatically analyzes the application and learns its consistency requirements. The set of experimental evaluations on Grid’5000 and Amazon EC2 cloud platforms show the effectiveness of the proposed approaches. I.
To cite this version:
, 2012
"... HAL is a multi-disciplinary open access archive for the deposit and dissemination of sci-entific research documents, whether they are pub-lished or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte p ..."
Abstract
- Add to MetaCart
(Show Context)
HAL is a multi-disciplinary open access archive for the deposit and dissemination of sci-entific research documents, whether they are pub-lished or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et a ̀ la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés.
THEME Distributed and High Performance
"... 4. Application Domains......................................................................4 4.1.1. Joint genetic and neuroimaging data analysis on Azure clouds 4 4.1.2. Structural protein analysis on Nimbus clouds 5 4.1.3. I/O intensive climate simulations for the Blue Waters post-Petascale machin ..."
Abstract
- Add to MetaCart
(Show Context)
4. Application Domains......................................................................4 4.1.1. Joint genetic and neuroimaging data analysis on Azure clouds 4 4.1.2. Structural protein analysis on Nimbus clouds 5 4.1.3. I/O intensive climate simulations for the Blue Waters post-Petascale machine 5 5. Software................................................................................. 6
Author manuscript, published in "CCGRID 2013- 13th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (2013)" Consistency in the Cloud: When Money Does Matter!
, 2013
"... Abstract—With the emergence of cloud computing, many organizations have moved their data to the cloud in order to provide scalable, reliable and highly available services. To meet the ever-growing user needs, these services mainly rely on geographically-distributed data replication to guarantee good ..."
Abstract
- Add to MetaCart
(Show Context)
Abstract—With the emergence of cloud computing, many organizations have moved their data to the cloud in order to provide scalable, reliable and highly available services. To meet the ever-growing user needs, these services mainly rely on geographically-distributed data replication to guarantee good performance and high availability. However, with replication, consistency comes into question. Service providers in the cloud have the freedom to select the level of consistency according to the access patterns exhibited by the applications. Most optimizations efforts then concentrate on how to provide adequate trade-offs between consistency guarantees and performance. However, as the monetary cost completely relies on the service providers, in this paper we argue that monetary cost should be taken into consideration when evaluating or selecting a consistency level in the cloud. Accordingly, we define a new metric called consistency-cost efficiency. Based on this metric, we present a simple, yet efficient economical consistency model, called Bismar, that adaptively tunes the consistency level at runtime in order to reduce the monetary cost while simultaneously maintaining a low fraction of stale reads. Experimental evaluations with the Cassandra cloud storage on the Grid’5000 testbed show the validity of the metric and demonstrate the effectiveness of the proposed consistency model. I.
cENS Cachan-Antenne de Bretagne
"... Multiple Big Data applications are being deployed worldwide to serve a very large number of clients nowadays. These applications vary in their perfor-mance and consistency requirements. Understanding such requirements at the storage system level is not possible. The high level semantics of an appli- ..."
Abstract
- Add to MetaCart
(Show Context)
Multiple Big Data applications are being deployed worldwide to serve a very large number of clients nowadays. These applications vary in their perfor-mance and consistency requirements. Understanding such requirements at the storage system level is not possible. The high level semantics of an appli-cation are not exposed at the system level. In this context, the consequences of a stale read are not the same for all types of applications. In this work, we focus on managing consistency at the application level rather than at the system level. In order to achieve this goal, we propose an offline model-ing approach of the application access behavior that considers its high–level consistency semantics. Furthermore, every application state is automati-cally associated with a consistency policy. At runtime, we introduce the Chameleon approach that leverages the application model to provide a cus-tomized consistency specific to that application. Experimental evaluations show the high accuracy of our modeling approach exceeding 96 % of correct classification of the application states. Moreover, our experiments conducted on Grid’5000 show that Chameleon adapts, for every time period, accord-ing to the application behavior and requirements while providing best-effort performance.