Results 1–10 of 291
Fuzzy extractors: How to generate strong keys from biometrics and other noisy data
, 2008
Abstract

Cited by 527 (38 self)
We provide formal definitions and efficient secure techniques for • turning noisy information into keys usable for any cryptographic application, and, in particular, • reliably and securely authenticating biometric data. Our techniques apply not just to biometric information, but to any keying material that, unlike traditional cryptographic keys, is (1) not reproducible precisely and (2) not distributed uniformly. We propose two primitives: a fuzzy extractor reliably extracts nearly uniform randomness R from its input; the extraction is error-tolerant in the sense that R will be the same even if the input changes, as long as it remains reasonably close to the original. Thus, R can be used as a key in a cryptographic application. A secure sketch produces public information about its input w that does not reveal w, and yet allows exact recovery of w given another value that is close to w. Thus, it can be used to reliably reproduce error-prone biometric inputs without incurring the security risk inherent in storing them. We define the primitives to be both formally secure and versatile, generalizing much prior work. In addition, we provide nearly optimal constructions of both primitives for various measures of “closeness” of input data, such as Hamming distance, edit distance, and set difference.
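The secure-sketch idea for Hamming distance can be illustrated with the code-offset construction: publish the XOR of the input with a random codeword, then correct errors through the code. A minimal sketch, using a 3x repetition code on a 15-bit input (all names and parameters are illustrative, not from the paper; a full fuzzy extractor would additionally apply a randomness extractor to the recovered w):

```python
import secrets

def rep_encode(bits):
    # 5 message bits -> 15 code bits (each bit repeated 3 times)
    return [b for b in bits for _ in range(3)]

def rep_decode(bits):
    # majority vote per 3-bit block; corrects 1 flipped bit per block
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

def sketch(w):
    # pick a random codeword c; the public sketch is s = w XOR c
    c = rep_encode([secrets.randbelow(2) for _ in range(5)])
    return [wi ^ ci for wi, ci in zip(w, c)]

def recover(w_noisy, s):
    # w_noisy XOR s is a noisy codeword: decode it, re-encode,
    # and XOR the sketch back to get the exact original w
    c = rep_encode(rep_decode([wi ^ si for wi, si in zip(w_noisy, s)]))
    return [ci ^ si for ci, si in zip(c, s)]
```

Because s is w masked by a random codeword, recovery succeeds for any w' within the code's error radius, while s alone leaks only limited information about w.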
Collusion-Secure Fingerprinting for Digital Data
 IEEE Transactions on Information Theory
, 1996
Abstract

Cited by 347 (1 self)
This paper discusses methods for assigning codewords for the purpose of fingerprinting digital data (e.g., software, documents, and images). Fingerprinting consists of uniquely marking and registering each copy of the data. This marking allows a distributor to detect any unauthorized copy and trace it back to the user. This threat of detection will hopefully deter users from releasing unauthorized copies. A problem arises when users collude: For digital data, two different fingerprinted objects can be compared and the differences between them detected. Hence, a set of users can collude to detect the location of the fingerprint. They can then alter the fingerprint to mask their identities. We present a general fingerprinting solution which is secure in the context of collusion. In addition, we discuss methods for distributing fingerprinted data. 1 Introduction Fingerprinting is an old cryptographic technique. For instance, several hundred years ago logarithm tables were protec...
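The collusion threat the abstract describes can be made concrete: under the marking assumption, positions where two fingerprinted copies differ must be mark positions, so colluders can find and rewrite them. A toy illustration of the attack (not the paper's secure code construction):

```python
import random

def detectable_positions(copy_a, copy_b):
    # positions where the two copies disagree are exposed mark positions
    return [i for i, (a, b) in enumerate(zip(copy_a, copy_b)) if a != b]

def collude(copy_a, copy_b):
    # colluders rewrite each detectable position with either user's bit,
    # producing a copy that matches neither fingerprint exactly
    out = list(copy_a)
    for i in detectable_positions(copy_a, copy_b):
        out[i] = random.choice((copy_a[i], copy_b[i]))
    return out
```

A collusion-secure code is designed so that even after this rewriting, the distributor can still trace the forged copy to at least one colluder.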
Expander Codes
 IEEE TRANSACTIONS ON INFORMATION THEORY
, 1996
Abstract

Cited by 336 (10 self)
We present a new class of asymptotically good, linear error-correcting codes based upon expander graphs. These codes have linear time sequential decoding algorithms, logarithmic time parallel decoding algorithms with a linear number of processors, and are simple to understand. We present both randomized and explicit constructions for some of these codes. Experimental results demonstrate the extremely good performance of the randomly chosen codes.
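The decoding style behind these codes is a simple flip rule: repeatedly flip the bit involved in the most unsatisfied parity checks. A generic sketch over an arbitrary binary parity-check matrix H (the paper's linear-time guarantees require H to come from a good expander, which this toy does not construct):

```python
def bit_flip_decode(H, y, max_iters=50):
    # H: list of parity-check rows (0/1 lists); y: received 0/1 word
    y = list(y)
    for _ in range(max_iters):
        unsat = [row for row in H
                 if sum(h & b for h, b in zip(row, y)) % 2]
        if not unsat:
            return y  # all checks satisfied: y is a codeword
        # flip the bit participating in the most unsatisfied checks
        counts = [sum(row[j] for row in unsat) for j in range(len(y))]
        y[counts.index(max(counts))] ^= 1
    return y
```

On an expander, each flip strictly reduces the number of unsatisfied checks, which is what yields the linear-time sequential bound.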
Priority Encoding Transmission
 IEEE Transactions on Information Theory
, 1994
Abstract

Cited by 310 (11 self)
We introduce a new method, called Priority Encoding Transmission, for sending messages over lossy packet-based networks. When a message is to be transmitted, the user specifies a priority value for each part of the message. Based on the priorities, the system encodes the message into packets for transmission and sends them to (possibly multiple) receivers. The priority value of each part of the message determines the fraction of encoding packets sufficient to recover that part. Thus, even if some of the encoding packets are lost en route, each receiver is still able to recover the parts of the message for which a sufficient fraction of the encoding packets are received. International Computer Science Institute, Berkeley, California. Research supported in part by National Science Foundation operating grant NCR941610. Computer Science Department, Swiss Federal Institute of Technology, Zurich, Switzerland. Research done while a postdoc at the International Computer Science Institute...
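The per-part guarantee can be sketched with plain replication standing in for the paper's erasure codes: to make a part recoverable from any m of n packets, place a copy in n − m + 1 packets, so by pigeonhole every m-packet subset contains one. All names are illustrative:

```python
import math

def encode(parts, fractions, n_packets):
    # fractions[i] is the fraction of packets sufficient to recover part i
    packets = [{} for _ in range(n_packets)]
    for i, (part, f) in enumerate(zip(parts, fractions)):
        m = math.ceil(f * n_packets)       # packets needed for part i
        for k in range(n_packets - m + 1): # pigeonhole: any m packets hit a copy
            packets[k][i] = part
    return packets

def decode(received_packets):
    # collect whichever parts survived the loss pattern
    out = {}
    for pkt in received_packets:
        out.update(pkt)
    return out
```

Replication is far less space-efficient than the erasure coding PET actually uses, but it shows the interface: lower-priority parts tolerate less loss, higher-priority parts tolerate more.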
Quantum Error Correction Via Codes Over GF(4)
, 1997
Abstract

Cited by 303 (18 self)
The problem of finding quantum error-correcting codes is transformed into the problem of finding additive codes over the field GF(4) which are self-orthogonal with respect to a certain trace inner product. Many new codes and new bounds are presented, as well as a table of upper and lower bounds on such codes of length up to 30 qubits.
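The self-orthogonality condition is easy to check in the standard binary representation: a GF(4) vector maps to a pair of binary vectors (a | b) via 1 → (1,0), ω → (0,1), ω² → (1,1), and the trace inner product becomes the symplectic form a₁·b₂ + a₂·b₁ mod 2. A toy verification routine (not a code-construction algorithm):

```python
def trace_ip(u, v):
    # u, v are (a, b) pairs of 0/1 lists; symplectic form mod 2
    (a1, b1), (a2, b2) = u, v
    return (sum(x & y for x, y in zip(a1, b2)) +
            sum(x & y for x, y in zip(a2, b1))) % 2

def self_orthogonal(generators):
    # an additive code is self-orthogonal iff all generator pairs
    # pairwise vanish under the trace inner product
    return all(trace_ip(u, v) == 0
               for u in generators for v in generators)
```

Vanishing trace inner product corresponds exactly to the associated Pauli stabilizer generators commuting, which is why this condition characterizes valid quantum codes.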
Decoding Reed-Solomon Codes beyond the Error-Correction Bound
, 1997
Abstract

Cited by 273 (18 self)
We present a randomized algorithm which takes as input n distinct points {(x_i, y_i)}, i = 1, ..., n, from F × F (where F is a field) and integer parameters t and d and returns a list of all univariate polynomials f over F in the variable x of degree at most d which agree with the given set of points in at least t places (i.e., y_i = f(x_i) for at least t values of i), provided t = Ω( ...
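The list-decoding task itself can be pinned down with a brute-force reference: interpolate every (d+1)-subset of the points and keep the polynomials that agree in at least t places. This exponential search merely defines the required output; the paper's contribution is computing it efficiently. All names are illustrative, and GF(p) for prime p stands in for the general field F:

```python
from itertools import combinations

def interp_eval(pts, x, p):
    # Lagrange interpolation through pts, evaluated at x, over GF(p)
    total = 0
    for i, (xi, yi) in enumerate(pts):
        num, den = 1, 1
        for j, (xj, _) in enumerate(pts):
            if i != j:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p  # Fermat inverse
    return total

def list_decode(points, d, t, p):
    xs = [x for x, _ in points]
    seen, out = set(), []
    for sub in combinations(points, d + 1):
        # canonical signature: the candidate's values on all input xs
        vals = tuple(interp_eval(list(sub), x, p) for x in xs)
        agree = sum(v == y for v, (_, y) in zip(vals, points))
        if agree >= t and vals not in seen:
            seen.add(vals)
            out.append(vals)
    return out
```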
A tutorial on Reed-Solomon coding for fault-tolerance in RAID-like systems
 Software – Practice & Experience
, 1997
Abstract

Cited by 232 (37 self)
It is well-known that Reed-Solomon codes may be used to provide error correction for multiple failures in RAID-like systems. The coding technique itself, however, is not as well-known. To the coding theorist, this technique is a straightforward extension to a basic coding paradigm and needs no special mention. However, to the systems programmer with no training in coding theory, the technique may be a mystery. Currently, there are no references that describe how to perform this coding that do not assume that the reader is already well-versed in algebra and coding theory. This paper is intended for the systems programmer. It presents a complete specification of the coding algorithm plus details on how it may be implemented. This specification assumes no prior knowledge of algebra or coding theory. The goal of this paper is for a systems programmer to be able to implement Reed-Solomon coding for reliability in RAID-like systems without needing to consult any external references. Problem Specification: Let there be n storage devices, D1, D2, ..., Dn, each of which holds a fixed number of bytes. These are called the “Data Devices.” Let there be m more storage devices
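The single-failure special case of this setup needs no field theory at all: with one checksum device holding the bytewise XOR of the data devices, any one lost device is the XOR of the survivors. A minimal sketch of that special case (Reed-Solomon coding over GF(2^w), the paper's subject, generalizes it to m simultaneous failures):

```python
from functools import reduce

def parity(devices):
    # bytewise XOR down the columns: the checksum device's contents
    return [reduce(lambda a, b: a ^ b, col) for col in zip(*devices)]

def recover(surviving):
    # surviving = every device (data + checksum) except the failed one;
    # XOR of all survivors reproduces the missing device exactly
    return parity(surviving)
```

The same recovery shape carries over to the Reed-Solomon case; only the arithmetic changes from XOR to GF(2^w) operations.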
A Survey of Fast Exponentiation Methods
 JOURNAL OF ALGORITHMS
, 1998
Abstract

Cited by 179 (0 self)
Public-key cryptographic systems often involve raising elements of some group (e.g. GF(2^n), Z/NZ, or elliptic curves) to large powers. An important question is how fast this exponentiation can be done, which often determines whether a given system is practical. The best method for exponentiation depends strongly on the group being used, the hardware the system is implemented on, and whether one element is being raised repeatedly to different powers, different elements are raised to a fixed power, or both powers and group elements vary. This problem has received much attention, but the results are scattered through the literature. In this paper we survey the known methods for fast exponentiation, examining their relative strengths and weaknesses.
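The baseline all surveyed methods refine is left-to-right square-and-multiply, which uses one squaring per exponent bit plus one multiplication per set bit:

```python
def pow_mod(g, e, n):
    # left-to-right binary exponentiation: g**e mod n
    result = 1
    for bit in bin(e)[2:]:            # exponent bits, most significant first
        result = result * result % n  # square at every step
        if bit == '1':
            result = result * g % n   # multiply when the bit is set
    return result
```

Windowing, addition chains, and fixed-base precomputation all trade memory or preprocessing against the multiplication count of this loop, which is why the best choice depends on whether the base, the exponent, or both vary.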
Defending Against Statistical Steganalysis
 10th USENIX Security Symposium
, 2001
Abstract

Cited by 163 (1 self)
The main purpose of steganography is to hide the occurrence of communication. While most methods in use today are invisible to an observer's senses, mathematical analysis may reveal statistical anomalies in the stego medium. These discrepancies expose the fact that hidden communication is happening. This paper presents improved methods for information hiding. One method uses probabilistic embedding to minimize modifications to the cover medium. Another method employs error-correcting codes, which allow the embedding process to choose which bits to modify in a way that decreases the likelihood of being detected. In addition, we can hide multiple data sets in the same cover medium to provide plausible deniability.
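The error-correcting-code idea can be sketched with syndrome coding over the [7,4] Hamming code: three message bits are embedded into seven cover bits by flipping at most one of them, since flipping bit j shifts the syndrome by the binary representation of j. This is a generic matrix-embedding sketch, not the paper's exact scheme:

```python
# parity-check matrix H: column j (1-based) is the binary expansion of j
H = [[(j >> i) & 1 for j in range(1, 8)] for i in range(3)]  # 3 x 7

def syndrome(bits):
    # H * bits mod 2
    return [sum(h & b for h, b in zip(row, bits)) % 2 for row in H]

def embed(cover, msg):
    # flip the single bit whose column equals syndrome(cover) XOR msg
    s = [a ^ b for a, b in zip(syndrome(cover), msg)]
    j = s[0] + 2 * s[1] + 4 * s[2]   # column index to flip; 0 means none
    out = list(cover)
    if j:
        out[j - 1] ^= 1
    return out

def extract(stego):
    # the hidden message is simply the stego block's syndrome
    return syndrome(stego)
```

One changed bit per seven cover bits carries three message bits, which is the kind of modification-rate reduction that lowers detectability under statistical steganalysis.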