Results 1–10 of 24
Functional Monitoring Without Monotonicity
, 2008
Cited by 31 (4 self)
Abstract
The notion of distributed functional monitoring was recently introduced by Cormode, Muthukrishnan and Yi [CMY08] to initiate a formal study of the communication cost of certain fundamental problems arising in distributed systems, especially sensor networks. In this model, each of k sites reads a stream of tokens and is in communication with a central coordinator, who wishes to continuously monitor some function f of σ, the union of the k streams. The goal is to minimize the number of bits communicated by a protocol that correctly monitors f(σ), to within some small error. As in previous work, we focus on a threshold version of the problem, where the coordinator’s task is simply to maintain a single output bit, which is 0 whenever f(σ) ≤ τ(1−ε) and 1 whenever f(σ) ≥ τ. Following Cormode et al., we term this the (k, f, τ, ε) functional monitoring problem. In previous work, some upper and lower bounds were obtained for this problem, with f being a frequency moment function, e.g., F0, F1, F2. Importantly, these functions are monotone. Here, we further advance the study of such problems, proving three new classes of results. First, we prove new lower bounds on this problem when f = Fp, for several values of p. Second, we study the effect of nonmonotonicity of f on our ability to give nontrivial monitoring protocols, by considering f = Fp with deletions allowed, as well as f = H, the empirical Shannon entropy of a stream. Third, we provide nontrivial monitoring protocols when f is either H, or any of a related class of entropy functions (Tsallis entropies). These are the first nontrivial algorithms for distributed monitoring of nonmonotone functions.
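The output requirement of the (k, f, τ, ε) threshold problem can be stated as a small sketch. This only illustrates the semantics the coordinator must satisfy (including the gap where either answer is acceptable); it is not one of the protocols from the paper, and the function name is hypothetical.

```python
def monitor_output(f_value, tau, eps):
    """Output bit for the (k, f, tau, eps) threshold monitoring problem:
    must be 1 when f(sigma) >= tau, 0 when f(sigma) <= tau*(1-eps);
    in between, either answer is acceptable (returned as None here)."""
    if f_value >= tau:
        return 1
    if f_value <= tau * (1 - eps):
        return 0
    return None  # gap region: any answer is allowed
```

The whole difficulty of the problem lies in maintaining this bit with low communication when f(σ) is not directly available at the coordinator.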
Optimal tracking of distributed heavy hitters and quantiles
 In PODS
, 2009
Cited by 22 (9 self)
Abstract
We consider the problem of tracking heavy hitters and quantiles in the distributed streaming model. The heavy hitters and quantiles are two important statistics for characterizing a data distribution. Let A be a multiset of elements, drawn from the universe U = {1,..., u}. For a given 0 ≤ φ ≤ 1, the φ-heavy hitters are those elements of A whose frequency in A is at least φ|A|; the φ-quantile of A is an element x of U such that at most φ|A| elements of A are smaller than x and at most (1 − φ)|A| elements of A are greater than x. Suppose the elements of A are received at k remote sites over time, and each of the sites has a two-way communication channel to a designated coordinator, whose goal is to track the set of φ-heavy hitters and the φ-quantile of A approximately at all times with minimum communication. We give tracking algorithms with worst-case communication cost O(k/ε · log n) for both problems, where n is the total number of items in A, and ε is the approximation error. This substantially improves upon the previously known algorithms. We also give matching lower bounds on the communication costs for both problems, showing that our algorithms are optimal. We also consider a more general version of the problem where we simultaneously track the φ-quantiles for all 0 ≤ φ ≤ 1.
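The two definitions above can be made concrete with a direct, non-streaming computation. This is only a sketch of what the coordinator is asked to track, assuming a plain list as the multiset; the distributed tracking algorithms of the paper are far more communication-efficient.

```python
from collections import Counter

def heavy_hitters(A, phi):
    """phi-heavy hitters: elements whose frequency in A is at least phi*|A|."""
    counts = Counter(A)
    n = len(A)
    return {x for x, c in counts.items() if c >= phi * n}

def phi_quantile(A, phi):
    """Return one phi-quantile of A: an x with at most phi*|A| elements
    smaller than x and at most (1-phi)*|A| elements greater than x."""
    s = sorted(A)
    idx = min(int(phi * len(s)), len(s) - 1)
    return s[idx]
```

For A = [1, 1, 2, 3, 3, 3, 4, 5], the 0.25-heavy hitters are {1, 3} and a 0.5-quantile is 3.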
Online strategies for intra- and inter-provider service migration in virtual networks
 In Proc. Principles, Systems and Applications of IP Telecommunications (IPTComm)
, 2011
Cited by 9 (7 self)
Abstract
Network virtualization allows one to build dynamic distributed systems in which resources can be dynamically allocated at locations where they are most useful. In order to fully exploit the benefits of this new technology, protocols need to be devised which react efficiently to changes in the demand. This paper argues that the field of online algorithms and competitive analysis provides useful tools to deal with and reason about the uncertainty in the request dynamics, and to design algorithms with provable performance guarantees. As a case study, we describe a system (e.g., a gaming application) where network virtualization is used to support thin client applications for mobile devices to improve their QoS. By decoupling the service from the underlying resource infrastructure, it can be migrated closer to the current client locations while taking into account migration cost. This paper identifies the major cost factors in such a system, and formalizes the corresponding optimization problem. Both randomized and deterministic gravity-center-based online algorithms are presented which achieve a good tradeoff between improved QoS and migration cost in the worst case, both for service migration within an infrastructure provider as well as for networks supporting cross-provider migration. The paper reports on our simulation results and also presents an explicit construction of an optimal offline algorithm which allows one, e.g., to evaluate the competitive ratio empirically.
Competitive analysis for service migration in VNets
, 2010
Cited by 9 (6 self)
Abstract
Network virtualization promises a high flexibility by decoupling services from the underlying substrate network and allowing the virtual network to adapt to the needs of the service, e.g., by migrating servers and/or parts of the network. We study a system (e.g., a gaming application) where network virtualization is used to support thin client applications for mobile devices to improve their QoS. To deal with the dynamics of both the mobile clients as well as the ability to migrate services closer to the client location we advocate, in this paper, the use of competitive analysis. After identifying the parameters that characterize the cost-benefit tradeoff for this kind of application we propose an online migration strategy. The strength of the strategy is that it is robust with regard to any arbitrary request access pattern. In particular, it is close to the optimal offline algorithm that knows the access pattern in advance. In this paper we present both an optimal offline algorithm based on dynamic programming techniques to find the best migration paths for a given request sequence, and an O(µ log n)-competitive migration strategy MIG where µ is the ratio between maximal and minimal link capacity in the substrate network for a simplified model. This is almost optimal for small µ, as we also show that there are networks where no online algorithm can achieve a ratio
Online function tracking with generalized penalties
 In Proc. 12th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT)
, 2010
Cited by 7 (7 self)
Abstract
We attend to the classic setting where an observer needs to inform a tracker about an arbitrary time-varying function f: N0 → Z. This is an optimization problem, where both wrong values at the tracker and sending updates entail a certain cost. We consider an online variant of this problem, i.e., at time t, the observer only knows f(t′) for all t′ ≤ t. In this paper, we generalize existing cost models (with an emphasis on concave and convex penalties) and present two online algorithms. Our analysis shows that these algorithms perform well in a large class of models, and are even optimal in some settings.
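The tracking setting above admits an obvious baseline that clarifies the cost tradeoff: the observer pays for every update it sends, and pays a penalty whenever the tracker's stored value is wrong. A minimal sketch of such a baseline (an illustration only, not one of the paper's two algorithms; the threshold rule is an assumption):

```python
def track(values, delta):
    """Naive online tracking baseline: the observer pushes a new value to
    the tracker whenever the true value drifts more than delta away from
    the tracker's stored value. Returns the update log and final value."""
    updates = []
    tracked = None
    for t, v in enumerate(values):
        if tracked is None or abs(v - tracked) > delta:
            tracked = v
            updates.append((t, v))
    return updates, tracked
```

Choosing delta trades update cost against deviation penalty; the paper's generalized cost models make this tradeoff precise for concave and convex penalty functions.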
On the benefit of virtualization: Strategies for flexible server allocation
 In Proc. USENIX Workshop on Hot Topics in Management of Internet, Cloud, and Enterprise Networks and Services (HotICE)
, 2011
Cited by 6 (3 self)
Abstract
Network virtualization [2] is an intriguing paradigm which loosens the ties between services and physical infrastructure. The gained flexibility promises faster innovations, enabling a more diverse Internet and ensuring
Maintaining Nets and Net Trees under Incremental Motion
Cited by 5 (1 self)
Abstract
The problem of maintaining geometric structures for points in motion has been well studied over the years. Much theoretical work to date has been based on the assumption that point motion is continuous and predictable, but in practice, motion is typically presented incrementally in discrete time steps and may not be predictable. We consider the problem of maintaining a data structure for a set of points undergoing such incremental motion. We present a simple online model in which two agents cooperate to maintain the structure. One defines the data structure and provides a collection of certificates, which guarantee the structure’s correctness. The other checks that the motion over time satisfies these certificates and notifies the first agent of any violations. We present efficient online algorithms for maintaining both nets and net trees for a point set undergoing incremental motion in a space of constant dimension. We analyze our algorithms’ efficiencies by bounding their competitive ratios relative to an optimal algorithm. We prove a constant-factor competitive ratio for maintaining a slack form of nets, and our competitive ratio for net trees is proportional to the square of the tree’s height.
Compressing kinetic data from sensor networks
, 2009
Cited by 4 (3 self)
Abstract
We introduce a framework for storing and processing kinetic data observed by sensor networks. These sensor networks generate vast quantities of data, which motivates a significant need for data compression. We are given a set of sensors, each of which continuously monitors some region of space. We are interested in the kinetic data generated by a finite set of objects moving through space, as observed by these sensors. Our model relies purely on sensor observations; it allows points to move freely and requires no advance notification of motion plans. Sensor outputs are represented as random processes, where nearby sensors may be statistically dependent. We model the local nature of sensor networks by assuming that two sensor outputs are statistically dependent only if the two sensors are among the k nearest neighbors of each other. We present an algorithm for the lossless compression of the data produced by the network. We show that, under the statistical dependence and locality assumptions of our framework, asymptotically this compression algorithm encodes the data to within a constant factor of the information-theoretic lower bound dictated by the joint entropy of the system.
Kinetic convex hulls and Delaunay triangulations in the black-box model
 In Proc. 27th Annu. Sympos. Comput. Geom
, 2011
Cited by 4 (1 self)
Abstract
Over the past decade, the kinetic-data-structures (KDS) framework has become the standard in computational geometry for dealing with moving objects. A fundamental assumption underlying the framework is that the motions of the objects are known in advance. This assumption severely limits the applicability of KDSs. We study KDSs in the black-box model, which is a hybrid of the KDS model and the traditional time-slicing approach. In this more practical model we receive the position of each object at regular time steps and we have an upper bound on dmax, the maximum displacement of any point in one time step. We study the maintenance of the convex hull and the Delaunay triangulation of a planar point set P in the black-box model, under the following assumption on dmax: there is some constant k such that for any point p ∈ P the disk of radius dmax centered at p contains at most k points. We analyze our algorithms in terms of ∆k, the so-called k-spread of P. We show how to update the convex hull at each time step in O(k ∆k log² n) amortized time. For the Delaunay triangulation our main contribution is an analysis of the standard edge-flipping approach; we show that the number of flips is O(k² ∆k²) at each time step.
The Wide-Area Virtual Service Migration Problem: A Competitive Analysis Approach
Cited by 1 (0 self)
Abstract
Today’s trend towards network virtualization and software-defined networking enables flexible new distributed systems where resources can be dynamically allocated and migrated to locations where they are most useful. This article proposes a competitive analysis approach to design and reason about online algorithms that find a good tradeoff between the benefits and costs of a migratable service. A competitive online algorithm provides worst-case performance guarantees under any demand dynamics, and without any information or statistical assumptions on the demand in the future. This is attractive especially in scenarios where the demand is hard to predict and can be subject to unexpected events. As a case study, we describe a service (e.g., an SAP server or a gaming application) that uses network virtualization to improve the Quality-of-Service (QoS) experienced by thin client applications running on mobile devices. By decoupling the service from the underlying resource infrastructure, it can be migrated closer to the current client locations while taking into account migration costs. We identify the major cost factors in such a system, and formalize the wide-area service migration problem. Our main contributions are a randomized and a deterministic online algorithm that achieve a competitive ratio of O(log n) in a simplified scenario, where n is the size of the substrate network. This is almost optimal. We complement our worst-case analysis with simulations in different specific scenarios, and also sketch a migration demonstrator.