Results 1–10 of 94
Optimal Clock Synchronization
Journal of the ACM, 2003
Cited by 153 (0 self)
Abstract: We present a simple, efficient, and unified solution to the problems of synchronizing, initializing, and integrating clocks for systems with different types of failures: crash, omission, and arbitrary failures with and without message authentication. This is the first known solution that achieves optimal accuracy: the accuracy of synchronized clocks (with respect to real time) is as good as that specified for the underlying hardware clocks. The solution is also optimal with respect to the number of faulty processes that can be tolerated to achieve this accuracy.
Reaching approximate agreement in the presence of faults
Journal of the ACM, 1986
Cited by 135 (12 self)
Abstract: This paper considers a variant of the Byzantine Generals problem, in which processes start with arbitrary real values rather than Boolean values or values from some bounded range, and in which approximate, rather than exact, agreement is the desired goal. Algorithms are presented to reach approximate agreement in asynchronous as well as synchronous systems. The asynchronous agreement algorithm is an interesting contrast to a result of Fischer et al., who show that exact agreement with guaranteed termination is not attainable in an asynchronous system with as few as one faulty process. The algorithms work by successive approximation, with a provable convergence rate that depends on the ratio between the number of faulty processes and the total number of processes. Lower bounds on the convergence rate for algorithms of this form are proved, and the algorithms presented are shown to be optimal with respect to those bounds.
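The successive-approximation scheme this abstract describes can be sketched in a few lines. The sketch below is a minimal, illustrative single round under the common trim-and-average approach: each process sorts the values it received, discards the t smallest and t largest (the positions up to t Byzantine processes could corrupt), and averages the rest. The function name and the use of a plain mean are assumptions for illustration, not the paper's exact update rule.

```python
def approx_agreement_round(values, t):
    """One round of approximate agreement by successive approximation.

    Sort the received values, discard the t smallest and t largest
    (the positions that up-to-t faulty processes could corrupt), and
    average the remainder. Repeating this round by round shrinks the
    spread of the correct processes' values toward agreement.
    """
    trimmed = sorted(values)[t:len(values) - t]
    return sum(trimmed) / len(trimmed)
```

Note that the result always lies within the range of the correct processes' values: with t = 1 and received values [0, 0, 0, 0, 100], a single faulty outlier of 100 is trimmed away and the round returns 0.0.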
A Comparison of Bus Architectures for Safety-Critical Embedded Systems
2001
Cited by 121 (5 self)
Abstract: Embedded systems for safety-critical applications often integrate multiple "functions" and must generally be fault-tolerant. These requirements lead to a need for mechanisms and services that provide protection against fault propagation and ease the construction of distributed fault-tolerant applications. A number of bus architectures have been developed to satisfy this need. This paper reviews the requirements on these architectures, the mechanisms employed, and the services provided. Four representative architectures (SAFEbus, SPIDER, TTA, and FlexRay) are briefly described.
Hundreds of Impossibility Results for Distributed Computing
Distributed Computing, 2003
Cited by 52 (5 self)
Abstract: We survey results from distributed computing that show tasks to be impossible, either outright or within given resource bounds, in various models. The parameters of the models considered include synchrony, fault-tolerance, different communication media, and randomization. The resource bounds refer to time, space, and message complexity. These results are useful in understanding the inherent difficulty of individual problems and in studying the power of different models of distributed computing.
On the composition of authenticated Byzantine agreement
In 34th Annual ACM Symposium on Theory of Computing (STOC), 2002
The wakeup problem
SIAM Journal on Computing, 1996
Cited by 28 (8 self)
Abstract: We study a new problem, the wakeup problem, that seems to be fundamental in distributed computing. We present efficient solutions to the problem and show how these solutions can be used to solve the consensus problem, the leader election problem, and other related problems. The main question we try to answer is: how much memory is needed to solve the wakeup problem? We assume a model that captures important properties of real systems that have been largely ignored by previous work on cooperative problems.
Gap Theorems for Distributed Computation
SIAM Journal on Computing, 1986
Cited by 28 (2 self)
Keywords: lower bounds, gap theorem.
Abstract: Consider a bidirectional ring of n identical processors that communicate asynchronously. The processors have no identifiers and hence the ring is called anonymous. Each processor receives an input letter, and the ring is to compute a function of the circular input string. If the function value is constant for all input strings, then the processors do not need to send any messages. On the other hand, we prove that any deterministic algorithm that computes any nonconstant function for anonymous rings requires Ω(n log n) bits of communication for some input string. We also exhibit nonconstant functions that require O(n log n) bits of communication for every input string. The same gap for the bit complexity of nonconstant functions remains even if the processors have distinct identifiers, provided that the identifiers are taken from a large enough domain. When the communication is measured in messages rather than bits, the results change. We present a nonconstant function that can be computed with O(n log* n) messages on an anonymous ring.
Lower bounds on implementing robust and resilient mediators
2007
Cited by 28 (7 self)
Abstract: We consider games that have (k, t)-robust equilibria when played with a mediator, where an equilibrium is (k, t)-robust if it tolerates deviations by coalitions of size up to k and deviations by up to t players with unknown utilities. We prove lower bounds that match upper bounds on the ability to implement such mediators using cheap talk (that is, just allowing communication among the players). The bounds depend on (a) the relationship between k, t, and n, the total number of players in the system; (b) whether players know the exact utilities of other players; (c) whether there are broadcast channels or just point-to-point channels; (d) whether cryptography is available; and (e) whether the game has a (k + t)-punishment strategy, that is, a strategy that, if used by all but at most k + t players, guarantees that every player gets a worse outcome than they do with the equilibrium strategy.
Linear Time Byzantine Self-Stabilizing Clock Synchronization
Proceedings of the 7th International Conference on Principles of Distributed Systems (OPODIS 2003), 2003
Cited by 25 (6 self)
Abstract: "... ght pulse synchronization that is uncorrelated to the actual clock values. The synchronized pulses are used as the events for resynchronizing the clock values.

1 Introduction

Overcoming failures that are not predictable in advance is most suitably addressed by tolerating Byzantine faults. It is the preferred fault model in order to seal off unexpected behavior within limitations on the number of concurrent faults. Most distributed tasks require the number of Byzantine faults, f, to abide by the ratio 3f < n, where n is the network size. See [14] for impossibility results on several consensus-related problems such as clock synchronization. Additionally, it makes sense to require such systems to resume operation after serious unpredictable events without the need for outside intervention and/or a restart of the system from scratch. E.g., systems may occasionally experience ..."
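The 3f < n requirement quoted in this entry caps how many Byzantine faults a network of n processes can tolerate for tasks such as clock synchronization. As a quick illustration (the helper name is hypothetical, not from the paper), the largest admissible f is floor((n - 1) / 3):

```python
def max_byzantine_faults(n):
    """Largest f satisfying 3*f < n, i.e. f = floor((n - 1) / 3).

    For n = 4 processes one Byzantine fault is tolerable (3*1 < 4),
    while n = 3 tolerates none, since 3*1 < 3 fails.
    """
    return (n - 1) // 3
```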