Results 1 – 10 of 10
Optimal resilient dynamic dictionaries
In Proceedings of the 15th European Symposium on Algorithms (ESA), 2007
Abstract

Cited by 12 (8 self)
We investigate the problem of computing in the presence of faults that may arbitrarily (i.e., adversarially) corrupt memory locations. In the faulty memory model, any memory cell can get corrupted at any time, and corrupted cells cannot be distinguished from uncorrupted ones. An upper bound δ on the number of corruptions and O(1) reliable memory cells are provided. In this model, we focus on the design of resilient dictionaries, i.e., dictionaries which are able to operate correctly (at least) on the set of uncorrupted keys. We first present a simple resilient dynamic search tree, based on random sampling, with O(log n+δ) expected amortized cost per operation, and O(n) space complexity. We then propose an optimal deterministic static dictionary supporting searches in Θ(log n+δ) time in the worst case, and we show how to use it in a dynamic setting in order to support updates in O(log n + δ) amortized time. Our dynamic dictionary also supports range queries in O(log n+δ+t) worst case time, where t is the size of the output. Finally, we show that every resilient search tree (with some reasonable properties) must take Ω(log n + δ) worst-case time per search.
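The replication-and-majority idea at the heart of such resilient designs can be sketched in a few lines. This is an illustrative toy, not the paper's search tree: a value is written to 2δ+1 cells of unreliable memory, so a majority vote recovers it as long as at most δ cells are corrupted. All names and parameters below are invented.

```python
import random

DELTA = 4                      # assumed upper bound on corruptions
COPIES = 2 * DELTA + 1         # a majority survives up to DELTA corruptions

def reliable_write(memory, base, value):
    """Store `value` in 2*DELTA+1 consecutive cells of unreliable memory."""
    for i in range(COPIES):
        memory[base + i] = value

def reliable_read(memory, base):
    """Recover the value by majority vote; correct as long as at most
    DELTA of the copies were corrupted since the last write."""
    counts = {}
    for i in range(COPIES):
        v = memory[base + i]
        counts[v] = counts.get(v, 0) + 1
    return max(counts, key=counts.get)

memory = [0] * COPIES
reliable_write(memory, 0, 42)
# Adversary corrupts DELTA distinct cells with arbitrary values:
for slot in random.sample(range(COPIES), DELTA):
    memory[slot] = random.randint(0, 10**9)
assert reliable_read(memory, 0) == 42   # at least DELTA+1 copies still agree
```

Note the cost pattern the abstracts describe: every logical access touches Θ(δ) cells, which is where the additive δ terms in the time bounds come from.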
Local dependency dynamic programming in the presence of memory faults
In STACS, volume 9 of LIPIcs, 2011
Dynamic programming in faulty memory hierarchies (cache-obliviously)
Abstract

Cited by 5 (4 self)
Random access memories suffer from transient errors that lead the logical state of some bits to be read differently from how they were last written. Due to technological constraints, caches in the memory hierarchy of modern computer platforms appear to be particularly prone to bit flips. Since algorithms implicitly assume data to be stored in reliable memories, they might easily exhibit unpredictable behaviors even in the presence of a small number of faults. In this paper we investigate the design of dynamic programming algorithms in faulty memory hierarchies. Previous works on resilient algorithms considered a one-level faulty memory model and, with respect to dynamic programming, could address only problems with local dependencies. Our improvement upon these works is twofold: (1) we significantly extend the class of problems that can be solved resiliently via dynamic programming in the presence of faults, settling challenging non-local problems such as all-pairs shortest paths and matrix multiplication; (2) we investigate the connection between resiliency and cache-efficiency, providing cache-oblivious implementations that incur an (almost) optimal number of cache misses. Our approach yields the first resilient algorithms that can tolerate faults at any level of the memory hierarchy, while maintaining cache-efficiency. All our algorithms are correct with high probability and match the running time and cache misses of their standard non-resilient counterparts while tolerating a large (polynomial) number of faults. Our results also extend to Fast Fourier Transform.
1998 ACM Subject Classification: B.8 [Performance and reliability]; F.2 [Analysis of algorithms and problem complexity]; I.2.8 [Dynamic programming].
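The local-dependency case that earlier works could handle can be illustrated with a toy DP (LCS length) where every table cell is kept in replicated form and read back by majority vote, masking a bounded number of corruptions per cell. This sketch is not the paper's cache-oblivious construction; the replication factor and helper names are invented.

```python
def lcs_resilient(a, b, delta=2):
    """Local-dependency DP (longest common subsequence length) where each
    table entry is stored in 2*delta+1 copies; every read is a majority
    vote, so up to `delta` corrupted copies per cell are tolerated."""
    copies = 2 * delta + 1

    def write(cell, v):
        for i in range(copies):
            cell[i] = v

    def read(cell):
        counts = {}
        for v in cell:
            counts[v] = counts.get(v, 0) + 1
        return max(counts, key=counts.get)

    rows, cols = len(a) + 1, len(b) + 1
    table = [[[0] * copies for _ in range(cols)] for _ in range(rows)]
    for i in range(1, rows):
        for j in range(1, cols):
            if a[i - 1] == b[j - 1]:
                write(table[i][j], read(table[i - 1][j - 1]) + 1)
            else:
                write(table[i][j], max(read(table[i - 1][j]),
                                       read(table[i][j - 1])))
    return read(table[-1][-1])

assert lcs_resilient("faulty", "fault", delta=2) == 5   # LCS is "fault"
```

The update of each cell depends only on its neighbors, which is exactly the "local dependency" structure; the non-local problems settled by the paper (e.g., matrix multiplication) do not fit this pattern and need the heavier machinery described in the abstract.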
Lossless Fault-Tolerant Data Structures with Additive Overhead
Abstract

Cited by 3 (0 self)
We develop the first dynamic data structures that tolerate δ memory faults, lose no data, and incur only an Õ(δ) additive overhead in overall space and time per operation. We obtain such data structures for arrays, linked lists, binary search trees, interval trees, predecessor search, and suffix trees. Like previous data structures, δ must be known in advance, but we show how to restore pristine state in linear time, in parallel with queries, making δ just a bound on the rate of memory faults. Our data structures require Θ(δ) words of safe memory during an operation, which may not be theoretically necessary but seems a practical assumption.
Data Structures: Sequence Problems, Range Queries and Fault Tolerance
, 2010
Abstract

Cited by 1 (0 self)
The focus of this dissertation is on algorithms, in particular data structures that give provably efficient solutions for sequence analysis problems, range queries, and fault tolerant computing. The work presented in this dissertation is divided into three parts. In Part I we consider algorithms for a range of sequence analysis problems that have arisen from applications in pattern matching, bioinformatics, and data mining. On a high level, each problem is defined by a function and some constraints, and the job at hand is to locate subsequences that score high with this function and are not invalidated by the constraints. Many variants and similar problems have been proposed, leading to several different approaches and algorithms. We consider problems where the function is the sum of the elements in the sequence and the constraints only bound the length of the subsequences considered. We give optimal algorithms for several variants of the problem based on a simple idea and classic algorithms and data structures. In Part II we consider range query data structures. This is a category of …
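The length-constrained maximum-sum problem described above has a classic O(n) solution via prefix sums and a sliding-window minimum, which is the kind of "simple idea plus classic data structure" the abstract alludes to. A sketch (function name and test values are invented):

```python
from collections import deque

def max_sum_at_most_k(xs, k):
    """Maximum-sum contiguous subsequence of length at most k.
    Uses prefix sums plus a monotone deque holding candidate left
    endpoints in order of increasing prefix value.  O(n) time."""
    prefix = [0]
    for x in xs:
        prefix.append(prefix[-1] + x)
    best = xs[0]
    window = deque()                       # indices into `prefix`
    for j in range(1, len(prefix)):
        # valid left endpoints i satisfy j - k <= i <= j - 1
        while window and window[0] < j - k:
            window.popleft()               # too far left: drop
        while window and prefix[window[-1]] >= prefix[j - 1]:
            window.pop()                   # dominated: drop
        window.append(j - 1)
        best = max(best, prefix[j] - prefix[window[0]])
    return best

assert max_sum_at_most_k([2, -1, 2, -5, 4], 3) == 4    # segment [4]
assert max_sum_at_most_k([-3, -1, -2], 2) == -1        # all-negative input
```

Each index enters and leaves the deque at most once, which gives the linear bound.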
The Price of Resiliency: A Case Study on Sorting with Memory Faults
, 2006
Abstract

Cited by 1 (1 self)
We address the problem of sorting in the presence of faults that may arbitrarily corrupt memory locations, and investigate the impact of memory faults both on the correctness and on the running times of mergesort-based algorithms. To achieve this goal, we develop a software testbed that simulates different fault injection strategies, and perform a thorough experimental study using a combination of several fault parameters. Our experiments give evidence that simple-minded approaches to this problem are largely impractical, while the design of more sophisticated resilient algorithms seems really worth the effort. Another contribution of our computational study is a carefully engineered implementation of a resilient sorting algorithm, which appears robust to different memory fault patterns.
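A minimal stand-in for such a testbed: run bottom-up mergesort while an injector corrupts a few random cells between passes, then measure how unsorted the output is. The injection strategy and all parameters here are invented for illustration; the paper's testbed is far more systematic.

```python
import random

def faulty_mergesort(xs, faults):
    """Bottom-up mergesort over memory where an injector overwrites
    `faults` random cells with arbitrary values after every merge pass."""
    xs = list(xs)
    width = 1
    while width < len(xs):
        nxt = []
        for lo in range(0, len(xs), 2 * width):
            left = xs[lo:lo + width]
            right = xs[lo + width:lo + 2 * width]
            while left and right:
                nxt.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
            nxt.extend(left or right)
        xs = nxt
        for _ in range(faults):            # fault injection between passes
            xs[random.randrange(len(xs))] = random.randint(0, 10**6)
        width *= 2
    return xs

def disorder(xs):
    """Adjacent out-of-order pairs: a crude correctness metric."""
    return sum(1 for a, b in zip(xs, xs[1:]) if a > b)

data = list(range(1024))
random.shuffle(data)
clean = faulty_mergesort(data, faults=0)
assert clean == sorted(data) and disorder(clean) == 0
noisy = faulty_mergesort(data, faults=8)   # typically visibly disordered
```

Because a single corrupted key can be propagated by later merge passes, even a handful of faults usually damages a non-resilient mergesort well beyond the corrupted cells themselves, which is the phenomenon the paper quantifies.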
Resilient dynamic programming
, 2015
Abstract
We investigate the design of dynamic programming algorithms in unreliable memories, i.e., in the presence of errors that lead the logical state of some bits to be read differently from how they were last written. Assuming that a limited number of memory faults can be inserted at runtime by an adversary with unbounded computational power, we obtain the first resilient algorithms for a broad range of dynamic programming problems, devising a general framework that can be applied to both iterative and recursive implementations. Besides all local dependency problems, where updates to table entries are determined by the contents of neighboring cells, we also settle challenging non-local problems, such as all-pairs shortest paths and matrix multiplication. All our algorithms are correct with high probability and match the running time of their standard non-resilient counterparts while tolerating a polynomial number of faults. The recursive algorithms are also cache-efficient and can tolerate faults at any level of the memory hierarchy. Our results exploit a careful combination of data replication, majority techniques, fingerprint computations, and lazy fault detection. To cope with the complex data access patterns induced by some of our algorithms, we also devise amplified fingerprints, which might be of independent interest in the design of resilient algorithms for different problems.
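The fingerprinting ingredient can be illustrated with a plain Karp-Rabin style fingerprint held in safe memory and compared against a fingerprint recomputed from the values read back from unreliable memory. This is a simplified sketch, not the paper's amplified fingerprints; the modulus and names are invented.

```python
import random

P = (1 << 61) - 1                 # Mersenne prime modulus
R = random.randrange(2, P)        # random base, kept in safe memory

def fingerprint(values):
    """Polynomial (Karp-Rabin style) fingerprint of a value sequence:
    h = v0*R^(n-1) + v1*R^(n-2) + ... + v(n-1)  (mod P)."""
    h = 0
    for v in values:
        h = (h * R + v) % P
    return h

written = [3, 1, 4, 1, 5, 9, 2, 6]   # values written to unreliable memory
safe_fp = fingerprint(written)        # maintained in O(1) safe words

read_back = list(written)
assert fingerprint(read_back) == safe_fp    # no corruption: fingerprints match

read_back[4] ^= 1                     # adversary flips one bit of one cell
assert fingerprint(read_back) != safe_fp    # mismatch detected
```

The random base R makes a collision between a corrupted and an uncorrupted sequence unlikely even against the adversary, which is what "correct with high probability" refers to; lazy detection then means the check is only performed at carefully chosen points rather than on every read.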
Evolutionary Algorithms in Unreliable Memory
Abstract
Guaranteeing the underlying reliability of computer memory is becoming more difficult as chip dimensions scale down, and as power limitations make lower voltages desirable. To date, the reliability of memory has been seen as the responsibility of the computer engineer, any underlying unreliability being hidden from programmers. However it may make sense, in future, to shift this balance, optionally exposing the unreliability to programmers, permitting them to choose between higher and lower reliabilities. This is particularly relevant to the data-intensive applications which might potentially provide the "killer apps" for anticipated future many-core architectures. We simulated the effect of unreliable memory on the behaviour of a slightly reprogrammed variant of a typical Genetic Algorithm (GA) on a range of optimisation problems. With only minor change to the code, most variables held in unreliable memory, and error rates up to 10^-3, the memory unreliability had no real effect on the GA behaviour. For higher error rates, the effects became noticeable, and the behaviour of the GA was unacceptable once the error rate reached 10^-2.
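The kind of simulation described can be approximated in a few lines: a toy OneMax GA whose fitness evaluations read genomes through a memory model that flips each bit with probability 10^-3. This is an invented miniature, not the authors' GA or benchmark suite; all names and parameters are assumptions.

```python
import random

ERROR_RATE = 1e-3      # per-bit flip probability per read (the paper's benign regime)

def unreliable_read(word, bits=16, rate=ERROR_RATE):
    """Model a transient read error: each bit of the returned value flips
    independently with probability `rate`; the stored word is untouched."""
    for b in range(bits):
        if random.random() < rate:
            word ^= 1 << b
    return word

def onemax_ga(pop_size=40, genome_bits=16, generations=200):
    """Toy truncation-selection GA on OneMax (maximize popcount) whose
    fitness reads pass through the unreliable memory model above."""
    pop = [random.getrandbits(genome_bits) for _ in range(pop_size)]
    for _ in range(generations):
        scored = [(bin(unreliable_read(g, genome_bits)).count("1"), g)
                  for g in pop]
        scored.sort(reverse=True)
        parents = [g for _, g in scored[:pop_size // 2]]
        children = [p ^ (1 << random.randrange(genome_bits)) for p in parents]
        pop = parents + children
    return max(bin(g).count("1") for g in pop)

best = onemax_ga()     # at this error rate the GA typically still finds strong optima
```

Raising ERROR_RATE toward 10^-2 in this miniature reproduces the qualitative effect the paper reports: noisy fitness readings increasingly mislead selection.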
Data Structures Resilient to Memory Faults: An Experimental Study of Dictionaries
Abstract
We address the problem of implementing data structures resilient to memory faults which may arbitrarily corrupt memory locations. In this framework, we focus on the implementation of dictionaries, and perform a thorough experimental study using a testbed that we designed for this purpose. Our main discovery is that the best-known (asymptotically optimal) resilient data structures have very large space overheads. More precisely, most of the space used by these data structures is not due to key storage. This might not be acceptable in practice since resilient data structures are meant for applications where a huge amount of data (often of the order of terabytes) has to be stored. Exploiting techniques developed in the context of resilient (static) sorting and searching, in combination with some new ideas, we designed and engineered an alternative implementation which, while still guaranteeing optimal asymptotic time and space bounds, performs much better in terms of memory without compromising the time efficiency.