Results 1 - 10 of 329
Reputation-based scheduling on unreliable distributed infrastructures
- In Proceedings of the 26th International Conference on Distributed Computing Systems, 2006
"... This paper presents a design and analysis of scheduling techniques to cope with the inherent unreliability and instability of worker nodes in large-scale donation-based distributed infrastructures such as P2P and Grid systems. In particular, we focus on nodes that execute tasks via donated computati ..."
- Cited by 13 (3 self)
Provisioning Heterogeneous and Unreliable Providers for Service Workflows
- Proc. Sixth Int’l Joint Conf. Autonomous Agents and Multiagent Systems (AAMAS ’07), 2007
"... Service-oriented technologies enable software agents to dy-namically discover and provision remote services for their workflows. Current work has typically assumed these ser-vices to be reliable and deterministic, but this is unrealistic in open systems, such as the Web, where they are offered by au ..."
Abstract
-
Cited by 9 (7 self)
- Add to MetaCart
by autonomous agents and are, therefore, inherently unreliable. To address this potential unreliability (in particular, uncertain service durations and failures), we consider the provisioning of abstract workflows, where many heterogeneous providers offer services at differing levels of quality. More
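This entry's idea of provisioning redundant, unreliable providers lends itself to a small illustration. The sketch below is not the paper's mechanism; it only shows, assuming independent failures and known per-provider success probabilities (both assumptions, and all names hypothetical), how providers might be added until one workflow step meets a target reliability.

```python
# Hypothetical sketch (not the AAMAS '07 paper's algorithm): greedily add
# redundant providers, most reliable first, until the probability that at
# least one of them completes the task reaches a target. Assumes independence.

def providers_needed(success_probs, target=0.99):
    """Return the chosen providers' success probabilities and the combined
    probability that at least one of them succeeds."""
    p_all_fail = 1.0
    chosen = []
    for p in sorted(success_probs, reverse=True):
        chosen.append(p)
        p_all_fail *= (1.0 - p)
        if 1.0 - p_all_fail >= target:
            break
    return chosen, 1.0 - p_all_fail

if __name__ == "__main__":
    providers, p_success = providers_needed([0.6, 0.7, 0.8, 0.5], target=0.95)
    print(providers, round(p_success, 3))  # e.g. three providers reach ~0.976
```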
Adaptive Reputation-Based Scheduling on Unreliable Distributed Infrastructures
- 2007
"... This paper addresses the inherent unreliability and instability of worker nodes in large-scale donation-based distributed infrastructures such as P2P and Grid systems. We present adaptive scheduling tech-niques that can mitigate this uncertainty and significantly outperform current approaches. In th ..."
- Cited by 22 (2 self)
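The two reputation-based scheduling entries above rest on the same basic loop: estimate each worker's reliability from observed outcomes and prefer reliable workers when dispatching tasks. Below is a minimal sketch of that loop, assuming a simple exponential-moving-average reputation and greedy dispatch; it is not the adaptive techniques these papers evaluate, and the class and parameter names are hypothetical.

```python
# Hypothetical sketch: track a per-worker reliability estimate and dispatch
# each task to the currently most reputable worker, updating reputations
# from observed successes and failures.
import random

class ReputationScheduler:
    def __init__(self, workers, alpha=0.2):
        # Every worker starts with a neutral reputation of 0.5.
        self.reputation = {w: 0.5 for w in workers}
        self.alpha = alpha  # weight given to the most recent observation

    def pick_worker(self):
        # Greedy choice; a real scheduler would also explore or replicate tasks.
        return max(self.reputation, key=self.reputation.get)

    def report(self, worker, succeeded):
        # Exponential moving average over observed outcomes.
        r = self.reputation[worker]
        self.reputation[worker] = (1 - self.alpha) * r + self.alpha * (1.0 if succeeded else 0.0)

if __name__ == "__main__":
    sched = ReputationScheduler(["w1", "w2", "w3"])
    true_reliability = {"w1": 0.9, "w2": 0.5, "w3": 0.2}  # unknown to the scheduler
    for _ in range(200):
        w = sched.pick_worker()
        sched.report(w, random.random() < true_reliability[w])
    print(sched.reputation)  # the most reliable worker ends up with the highest score
```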
Medians and Beyond: New Aggregation Techniques for Sensor Networks
- 2004
"... Wireless sensor networks offer the potential to span and monitor large geographical areas inexpensively. Sensors, however, have significant power constraint (battery life), making communication very expensive. Another important issue in the context of sensorbased information systems is that individu ..."
Abstract
-
Cited by 190 (6 self)
- Add to MetaCart
is that individual sensor readings are inherently unreliable. In order to address these two aspects, sensor database systems like TinyDB and Cougar enable in-network data aggregation to reduce the communication cost and improve reliability. The existing data aggregation techniques, however, are limited to relatively
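The in-network aggregation this snippet mentions can be sketched generically: nodes forward compact summaries instead of raw readings, and the sink answers an aggregate query from the merged summary. The example below uses plain fixed-bucket histograms to approximate a median; it illustrates the general idea only, not the paper's own summary structure, and the bucket count and value range are assumptions.

```python
# Hypothetical sketch of in-network aggregation: each sensor node sends a
# fixed-bucket histogram of its readings, intermediate nodes merge histograms,
# and the sink estimates the median from the merged histogram.
NUM_BUCKETS = 16
MAX_READING = 100.0

def local_histogram(readings):
    hist = [0] * NUM_BUCKETS
    for r in readings:
        idx = min(int(r / MAX_READING * NUM_BUCKETS), NUM_BUCKETS - 1)
        hist[idx] += 1
    return hist

def merge(h1, h2):
    # Merging summaries is just element-wise addition.
    return [a + b for a, b in zip(h1, h2)]

def approx_median(hist):
    total = sum(hist)
    seen = 0
    for i, count in enumerate(hist):
        seen += count
        if seen * 2 >= total:
            # Midpoint of the bucket that crosses the 50th percentile.
            return (i + 0.5) * MAX_READING / NUM_BUCKETS
    return None

if __name__ == "__main__":
    # Two leaf nodes send histograms; an intermediate node merges them for the sink.
    leaf_a = local_histogram([12.0, 35.5, 40.2, 77.9])
    leaf_b = local_histogram([55.1, 60.0, 91.3])
    print(approx_median(merge(leaf_a, leaf_b)))  # close to the true median 55.1
```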
The Inherent Price of Indulgence
- 2002
"... This paper presents a tight lower bound on the time complexity of indulgent consensus algorithms, i.e., consensus algorithms that use unreliable failure detectors. We state and prove our tight lower bound in the unifying framework of round-by-round fault detectors. ..."
- Cited by 17 (7 self)
Reliable Computing with Unreliable Components
"... How, in the face of both intrinsic and extrinsic volatility, can unconventional computing fabrics store information over arbitrarily long periods? Here, we argue that the predictable structure of many realistic environments, both natural and artificial, can be used to maintain useful categorical bou ..."
Abstract
- Add to MetaCart
boundaries even when the computational fabric itself is inherently volatile and the inputs and outputs are partially stochastic. As a concrete example, we consider the storage of binary classifications in connectionist networks, although the underlying principles should be applicable to other unconventional
The Inherent Price of Indulgence
"... Abstract An indulgent algorithm is a distributed algorithm that tolerates asynchronous periods of the network when process crash detection is unreliable. This paper presents a tight bound on the time complexity of indulgent consensus algorithms. ..."
The Inherent Price of Indulgence
- Proc. 21st ACM Symposium on Principles of Distributed Computing (PODC'02), ACM Press, 2002
"... This paper presents a tight lower bound on the time complexity of indulgent consensus algorithms, i.e., consensus algorithms that use unreliable failure detectors. We state and prove our tight lower bound in the unifying framework of round-by-round fault detectors. ..."
Protecting Free Expression Online with Freenet
- 2002
"... ially hundreds of thousands of desktop computers to create a collaborative virtual file system. To increase network robustness and eliminate single points of failure, Freenet employs a completely decentralized architecture. Given that the P2P environment is inherently untrustworthy and unreliable, w ..."
- Cited by 211 (7 self)
Adaptive cleaning for RFID data streams
- 2006
"... ABSTRACT To compensate for the inherent unreliability of RFID data streams, most RFID middleware systems employ a "smoothing filter", a sliding-window aggregate that interpolates for lost readings. In this paper, we propose SMURF, the first declarative, adaptive smoothing filter for RFID ..."
- Cited by 101 (0 self)
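The "smoothing filter" this snippet describes, the fixed-window baseline that SMURF adapts, is simple enough to sketch: a tag counts as present in an epoch if it was read at least once within the last few epochs, which interpolates over dropped readings. The sketch below is that baseline only, not SMURF itself; the window size and class name are assumptions.

```python
# Hypothetical sketch of a fixed-size RFID smoothing filter: report a tag as
# present if it was observed at least once in the last `window` read epochs.
from collections import deque

class SmoothingFilter:
    def __init__(self, window=3):
        self.recent = deque(maxlen=window)  # one set of observed tag IDs per epoch

    def observe_epoch(self, tag_ids):
        """Record one epoch of raw reads; return the smoothed set of present tags."""
        self.recent.append(set(tag_ids))
        present = set()
        for epoch_reads in self.recent:
            present |= epoch_reads
        return present

if __name__ == "__main__":
    f = SmoothingFilter(window=3)
    # Raw reads drop tagA in several epochs; the filter keeps reporting it
    # until it has been missing for a full window.
    raw_epochs = [{"tagA"}, set(), {"tagA", "tagB"}, set(), set(), set()]
    for reads in raw_epochs:
        print(sorted(f.observe_epoch(reads)))
```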