Results 1–10 of 21
Error-Correcting Tournaments
, 2008
Cited by 26 (4 self)
We present a family of adaptive pairwise tournaments that are provably robust against large error fractions when used to determine the largest element in a set. The tournaments use nk pairwise comparisons but have only O(k + log n) depth, where n is the number of players and k is the robustness parameter (for reasonable values of n and k). These tournaments also give a reduction from multiclass to binary classification in machine learning, yielding the best known analysis for the problem.
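The core idea — deciding each match by a small majority vote so that isolated comparison errors cancel — can be sketched as follows. This is a simplified illustration, not the paper's construction: a single-elimination bracket where every match is a best-of-(2k+1) vote uses O(nk) comparisons but O(k log n) depth, rather than the O(k + log n) depth claimed above. The function names and error model are ours.

```python
import random

def noisy_compare(a, b, p_err=0.2):
    """Pairwise comparison that errs independently with probability p_err."""
    return (a > b) if random.random() >= p_err else (a <= b)

def robust_max(players, k, p_err=0.2):
    """Single-elimination tournament; each match is a best-of-(2k+1) vote."""
    ring = list(players)
    while len(ring) > 1:
        nxt = []
        for i in range(0, len(ring) - 1, 2):
            a, b = ring[i], ring[i + 1]
            wins_a = sum(noisy_compare(a, b, p_err) for _ in range(2 * k + 1))
            nxt.append(a if wins_a > k else b)
        if len(ring) % 2:          # odd player out gets a bye
            nxt.append(ring[-1])
        ring = nxt
    return ring[0]
```

With p_err = 0 the tournament always returns the true maximum; raising p_err shows how the extra votes per match trade comparisons for robustness.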
Sorting and Searching in the Presence of Memory Faults (without Redundancy)
 Proc. 36th ACM Symposium on Theory of Computing (STOC’04)
, 2004
Cited by 21 (4 self)
We investigate the design of algorithms resilient to memory faults, i.e., algorithms that, despite the corruption of some memory values during their execution, are able to produce a correct output on the set of uncorrupted values. In this framework, we consider two fundamental problems: sorting and searching. In particular, we prove that any O(n log n) comparison-based sorting algorithm can tolerate at most O((n log n)^{1/2}) memory faults. Furthermore, we present one comparison-based sorting algorithm with optimal space and running time that is resilient to O((n log n)^{1/3}) faults. We also prove polylogarithmic lower and upper bounds on fault-tolerant searching.
Optimal resilient sorting and searching in the presence of memory faults
 In Proc. 33rd International Colloquium on Automata, Languages and Programming, volume 4051 of Lecture Notes in Computer Science
, 2006
Cited by 17 (5 self)
We investigate the problem of reliable computation in the presence of faults that may arbitrarily corrupt memory locations. In this framework, we consider the problems of sorting and searching in optimal time while tolerating the largest possible number of memory faults. In particular, we design an O(n log n) time sorting algorithm that can optimally tolerate up to O(√(n log n)) memory faults. In the special case of integer sorting, we present an algorithm with linear expected running time that can tolerate O(√n) faults. We also present a randomized searching algorithm that can optimally tolerate up to O(log n) memory faults in O(log n) expected time, and an almost optimal deterministic searching algorithm that can tolerate O((log n)^{1−ε}) faults, for any small positive constant ε, in O(log n) worst-case time. All these results improve over previous bounds.
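The idea behind resilient searching — taking a majority vote over a window of 2δ+1 consecutive cells, so that at most δ corruptions can never outvote the honest majority — can be sketched as follows. This naive variant costs O(δ log n) per search rather than the paper's O(log n + δ); the function name and layout are ours.

```python
def resilient_search(a, x, delta):
    """Binary search on a sorted array `a` that tolerates up to `delta`
    corrupted cells.  Each step inspects a window of exactly 2*delta+1 keys
    and follows the direction a majority of them indicate; since at most
    delta cells are corrupted, the majority is always trustworthy.
    Returns an index i with a[i] == x, or -1 if no uncorrupted copy exists."""
    lo, hi = 0, len(a) - 1
    while hi - lo + 1 > 2 * delta + 1:
        mid = (lo + hi) // 2
        left = max(lo, min(mid - delta, hi - 2 * delta))
        right = left + 2 * delta          # window of 2*delta+1 cells
        below = 0
        for i in range(left, right + 1):
            if a[i] == x:                 # exact hit on a stored copy
                return i
            if a[i] < x:
                below += 1
        if below >= delta + 1:            # majority says x lies to the right
            lo = right + 1
        else:                             # majority says x lies to the left
            hi = left - 1
    for i in range(lo, hi + 1):           # small remaining range: scan
        if a[i] == x:
            return i
    return -1
```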
Max Algorithms in Crowdsourcing Environments
Cited by 15 (1 self)
Our work investigates the problem of retrieving the maximum item from a set in crowdsourcing environments. We first develop parameterized families of max algorithms that take as input a set of items and output an item from the set that is believed to be the maximum. Such max algorithms could, for instance, select the best Facebook profile that matches a given person or the best photo that describes a given restaurant. Then, we propose strategies that select appropriate max algorithm parameters. Our framework supports various human error and cost models, and we consider many of them in our experiments. We evaluate under many metrics, both analytically and via simulations, the tradeoff between three quantities: (1) quality, (2) monetary cost, and (3) execution time. Also, we provide insights on the effectiveness of the strategies in selecting appropriate max algorithm parameters, as well as guidelines for choosing max algorithms and strategies for each application.
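A toy version of the setup — a parameterized max algorithm that tracks monetary cost, plus a trivial strategy for choosing its parameter under a budget — might look like the following. This is a stand-in for the paper's algorithm families, not their actual design; all names, the worker error model, and the one-unit-per-question cost model are our assumptions.

```python
import random

def ask_crowd(a, b, p_err):
    """Simulated worker answering 'is a better than b?', erring w.p. p_err."""
    return (a > b) if random.random() >= p_err else (a <= b)

def crowd_max(items, votes_per_pair, p_err=0.3):
    """Tournament-style max algorithm.  Each match asks `votes_per_pair`
    workers and keeps the majority winner.  Returns (winner, cost), where
    cost counts one unit per question asked."""
    cost = 0
    ring = list(items)
    while len(ring) > 1:
        nxt = []
        for i in range(0, len(ring) - 1, 2):
            a, b = ring[i], ring[i + 1]
            yes = sum(ask_crowd(a, b, p_err) for _ in range(votes_per_pair))
            cost += votes_per_pair
            nxt.append(a if 2 * yes > votes_per_pair else b)
        if len(ring) % 2:
            nxt.append(ring[-1])
        ring = nxt
    return ring[0], cost

def pick_parameters(budget, n):
    """Trivial strategy: spread the budget evenly over the ~n-1 matches a
    single-elimination tournament needs, rounded down to an odd vote count."""
    per_pair = max(1, budget // max(1, n - 1))
    return per_pair if per_pair % 2 else per_pair - 1
```

Richer strategies would weigh quality against cost and execution time, as the abstract describes.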
Optimal resilient dynamic dictionaries
 In Proceedings of the 15th European Symposium on Algorithms (ESA)
, 2007
Cited by 12 (8 self)
We investigate the problem of computing in the presence of faults that may arbitrarily (i.e., adversarially) corrupt memory locations. In the faulty memory model, any memory cell can get corrupted at any time, and corrupted cells cannot be distinguished from uncorrupted ones. An upper bound δ on the number of corruptions and O(1) reliable memory cells are provided. In this model, we focus on the design of resilient dictionaries, i.e., dictionaries which are able to operate correctly (at least) on the set of uncorrupted keys. We first present a simple resilient dynamic search tree, based on random sampling, with O(log n + δ) expected amortized cost per operation and O(n) space complexity. We then propose an optimal deterministic static dictionary supporting searches in Θ(log n + δ) time in the worst case, and we show how to use it in a dynamic setting in order to support updates in O(log n + δ) amortized time. Our dynamic dictionary also supports range queries in O(log n + δ + t) worst-case time, where t is the size of the output. Finally, we show that every resilient search tree (with some reasonable properties) must take Ω(log n + δ) worst-case time per search.
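A standard building block in this line of work is the "resilient variable": a value replicated across 2δ+1 ordinary (corruptible) cells and decoded by majority, which survives up to δ corruptions. A minimal sketch, with class and method names of our choosing:

```python
from collections import Counter

class ResilientValue:
    """Stores one value in 2*delta+1 ordinary cells; majority decoding
    recovers it as long as at most delta of the copies are corrupted."""
    def __init__(self, value, delta):
        self.delta = delta
        self.cells = [value] * (2 * delta + 1)

    def write(self, value):
        # Rewrite every copy; a later corruption of up to delta cells
        # still leaves a majority holding the new value.
        for i in range(len(self.cells)):
            self.cells[i] = value

    def read(self):
        # Majority vote across the copies.
        return Counter(self.cells).most_common(1)[0][0]
```

Keeping a constant number of such variables (e.g. the tree root and size counters) is how resilient dictionaries protect their critical bookkeeping without blowing up space.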
Designing Reliable Algorithms in Unreliable Memories
Cited by 11 (3 self)
Some of today’s applications run on computer platforms with large and inexpensive memories, which are also error-prone. Unfortunately, the appearance of even very few memory faults may jeopardize the correctness of the computational results. An algorithm is resilient to memory faults if, despite the corruption of some memory values before or during its execution, it is nevertheless able to get a correct output at least on the set of uncorrupted values. In this paper we will survey some recent work on reliable computation in the presence of memory faults.
Sorting and Selection with Imprecise Comparisons
Cited by 9 (1 self)
In experimental psychology, the method of paired comparisons was proposed as a means for ranking the preferences of a human subject amongst n elements. The method requires performing all (n choose 2) comparisons and then sorting elements according to the number of wins. The large number of comparisons is performed to counter the potentially faulty decision-making of the human subject, who acts as an imprecise comparator. We consider a simple model of the imprecise comparisons: there exists some δ > 0 such that when a subject is given two elements to compare, if the values of those elements (as perceived by the subject) differ by at least δ, then the comparison will be made correctly; when the two elements have values that are within δ, the outcome of the comparison is unpredictable. This δ corresponds to the just noticeable difference (JND) unit, or difference threshold, in the psychophysics literature, but does not require the statistical assumptions used to define this value. In this model, the standard method of paired comparisons minimizes the errors introduced by the imprecise comparisons at the cost of (n choose 2) comparisons. We show that the same optimal guarantees can be achieved using 4n^{3/2} comparisons, and we prove the optimality of our method. We then explore the general tradeoff between the guarantees on the error that can be made and the number of comparisons for the problems of sorting, max-finding, and selection. Our results provide close-to-optimal solutions for each of these problems.
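The comparison model above is easy to state in code: a comparator that answers correctly whenever the two values differ by at least δ and arbitrarily otherwise, together with the classical paired-comparisons method it is measured against. A sketch, where the coin flip for within-δ pairs and the function names are our assumptions:

```python
import random

def imprecise_compare(x, y, delta):
    """The paper's comparator model: correct when |x - y| >= delta,
    unpredictable (here: a coin flip) when the values are within delta."""
    if abs(x - y) >= delta:
        return x > y
    return random.random() < 0.5

def paired_comparisons_max(values, delta):
    """Classical method of paired comparisons, specialized to max-finding:
    run all n-choose-2 comparisons and return the element with the most
    wins.  Theta(n^2) comparisons; the paper matches its error guarantee
    with O(n^{3/2}) comparisons."""
    wins = [0] * len(values)
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            if imprecise_compare(values[i], values[j], delta):
                wins[i] += 1
            else:
                wins[j] += 1
    return values[max(range(len(values)), key=wins.__getitem__)]
```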
Reliable Minimum Finding Comparator Networks
 Fundamenta Informaticae
, 2000
Cited by 5 (0 self)
We consider the problem of constructing reliable minimum-finding networks built from unreliable comparators. In case of a faulty comparator, the inputs are directly output without comparison. Our main result is the first nontrivial lower bound on the depth of networks computing the minimum among n > 2 items in the presence of k > 0 faulty comparators. We prove that the depth of any such network is at least max(⌈log n⌉ + 2k, log n + k log(log n/(k+1))). We also describe a network whose depth nearly matches the lower bound. The lower bounds should be compared with the first nontrivial upper bound O(log n + k log(log n/log k)) on the depth of k-fault-tolerant sorting networks that was recently derived by Leighton and Ma [6]. 1 Introduction. Networks built from comparators are commonly used to perform such tasks as selection, sorting and merging. A comparator is a 2-input, 2-output device which sorts two items. Networks of minimum size, i.e. using the minimum number of comparators for a given tas...
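A comparator network and the paper's fault model (a faulty comparator passes its inputs through unchanged) can be simulated directly. The sketch below builds a balanced knockout tree of depth log2 n, which finds the minimum when fault-free; it illustrates the model only, not the near-optimal fault-tolerant network of the paper, and the representation is ours.

```python
def run_network(network, items, faulty=frozenset()):
    """Evaluate a comparator network.  `network` is a list of (i, j) wire
    pairs; comparator k writes (min, max) to wires (i, j) unless k is in
    `faulty`, in which case it passes its inputs through unchanged -- the
    fault model of the paper."""
    wires = list(items)
    for k, (i, j) in enumerate(network):
        if k not in faulty:
            if wires[i] > wires[j]:
                wires[i], wires[j] = wires[j], wires[i]
    return wires

def knockout_min_network(n):
    """Balanced knockout tree on n = 2^d wires: n-1 comparators in log2 n
    parallel rounds; with no faults, wire 0 ends up holding the minimum."""
    network, step = [], 1
    while step < n:
        network += [(i, i + step) for i in range(0, n - step, 2 * step)]
        step *= 2
    return network
```

Marking even a single comparator on the winner's path as faulty can let a non-minimum value reach wire 0, which is why robust networks need the extra depth quantified by the bounds above.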
Highly FaultTolerant Sorting Circuits
 In Proceedings of the 32nd Annual Symposium on Foundations of Computer Science
, 1991
Cited by 4 (3 self)
In this paper, we consider the problem of constructing a sorting circuit that will work well even if a constant fraction of its comparators fail at random. We consider two types of comparator failures: passive failures, which result in no comparison being made (i.e., the items being compared are output in the same order that they are input), and destructive failures, which result in the items being output in the reverse of the correct order. In either scenario, we assume that each comparator is faulty with some constant probability ρ, and we say that a circuit is fault-tolerant if it performs some desired function with high probability given that each comparator fails with probability ρ. Our results include: (i) the construction of a passive-fault-tolerant circuit of depth O(log n log log n) that sorts most permutations of n inputs, (ii) a proof that any destructive-fault-tolerant circuit that is capable of approximate insertion (or many other operations) must have depth Ω(...