Results 1–10 of 12
Tightly-secure signatures from lossy identification schemes
Abstract

Cited by 7 (0 self)
In this paper we present three digital signature schemes with tight security reductions. Our first signature scheme is a particularly efficient version of the short-exponent discrete-log-based scheme of Girault et al. (J. of Cryptology 2006). Our scheme has a tight reduction to the decisional Short Discrete Logarithm problem, while still maintaining the non-tight reduction to the computational version of the problem upon which the original scheme of Girault et al. is based. The second signature scheme we construct is a modification of the scheme of Lyubashevsky (Asiacrypt 2009) that is based on the worst-case hardness of the shortest vector problem in ideal lattices. And the third scheme is a very simple signature scheme that is based directly on the hardness of the Subset Sum problem. We also present a general transformation that converts what we term lossy identification schemes into signature schemes with tight security reductions. We believe that this greatly simplifies the task of constructing and proving the security of such signature schemes.
Towards Super-Exponential Side-Channel Security with Efficient Leakage-Resilient PRFs
 Cryptographic Hardware and Embedded Systems — CHES 2012
, 2012
Abstract

Cited by 7 (1 self)
Abstract. Leakage-resilient constructions have attracted significant attention over the last couple of years. In practice, pseudorandom functions are among the most important such primitives, because they are stateless and do not require a secure initialization as, e.g., stream ciphers do. However, their deployment in actual applications is still limited by security and efficiency concerns. This paper contributes to solving these issues in two directions. On the one hand, we highlight that the condition of bounded data complexity, which is guaranteed by previous leakage-resilient constructions, may not be enough to obtain practical security. We show experimentally that, if implemented in an 8-bit microcontroller, such constructions can actually be broken. On the other hand, we present tweaks for tree-based leakage-resilient PRFs that improve their efficiency and their security by taking advantage of parallel implementations. Our security analyses are based on worst-case attacks in a noise-free setting and suggest that, under reasonable assumptions, the side-channel resistance of our construction grows super-exponentially with a security parameter that corresponds to the degree of parallelism of the implementation. In addition, our analysis shows that standard DPA attacks are not the most relevant tool for evaluating such leakage-resilient constructions and may lead to overestimated security. As a consequence, we investigate more sophisticated tools based on lattice reduction, which turn out to be powerful in the physical cryptanalysis of these primitives. Eventually, we put forward that the AES is not perfectly suited for integration in a leakage-resilient design. This observation raises interesting challenges for developing block ciphers with better properties regarding leakage resilience.
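The tree-based PRFs this abstract refers to build on the classic GGM tree construction. A minimal sketch for orientation only, not the paper's construction: SHA-256 stands in for the length-doubling PRG (an assumption of this sketch), and the function name is illustrative.

```python
import hashlib

def ggm_prf(key: bytes, x_bits) -> bytes:
    """Walk down the GGM binary tree: at each level, expand the
    current 16-byte state with a length-doubling PRG (modelled
    here by SHA-256) and keep the left or right half according to
    the next input bit. Leakage-resilient variants limit how many
    distinct inputs touch each intermediate state."""
    state = key
    for b in x_bits:
        digest = hashlib.sha256(state).digest()  # 32 bytes = two 16-byte halves
        state = digest[:16] if b == 0 else digest[16:]
    return state
```

Parallel implementations, as discussed in the abstract, can process several tree levels (or several trees) at once, which is what drives the degree-of-parallelism security parameter.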
Solving shortest and closest vector problems: The decomposition approach
Abstract

Cited by 4 (1 self)
Abstract. In this paper, we present a heuristic algorithm for solving exact, as well as approximate, SVP and CVP for lattices. This algorithm is based on a new approach which is very different from and complementary to the sieving technique. This new approach frees us from the kissing number bound and allows us to solve SVP and CVP in lattices of dimension n in time 2^(0.377n) using memory 2^(0.292n). The key idea is to no longer work with a single lattice but to move the problems around in a tower of related lattices. We initiate the algorithm by sampling very short vectors in a dense overlattice of the original lattice that admits a quasi-orthonormal basis and hence an efficient enumeration of vectors of bounded norm. Taking sums of vectors in the sample, we construct short vectors in the next lattice of our tower. Repeating this, we climb all the way to the top of the tower and finally obtain solution vector(s) in the initial lattice as a sum of vectors of the overlattice just below it. The complexity analysis relies on the Gaussian heuristic. This heuristic is backed by experiments in low and high dimensions that closely reflect these estimates when solving hard lattice problems in the average case.
On the Complexity of the BKW Algorithm on LWE
Abstract

Cited by 3 (0 self)
Abstract. In this paper we present a study of the complexity of the Blum-Kalai-Wasserman (BKW) algorithm when applied to the Learning with Errors (LWE) problem, by providing refined estimates for the data and computational effort requirements for solving concrete instances of the LWE problem. We apply this refined analysis to suggested parameters for various LWE-based cryptographic schemes from the literature and, as a result, provide new upper bounds for the concrete hardness of these LWE-based schemes.
On computing nearest neighbors with applications to decoding of binary linear codes
 In Advances in Cryptology – Eurocrypt 2015, Lecture Notes in Computer Science
, 2015
Abstract

Cited by 2 (0 self)
Abstract. We propose a new decoding algorithm for random binary linear codes. The so-called information set decoding algorithm of Prange (1962) achieves worst-case complexity 2^(0.121n). In the late 80s, Stern proposed a sort-and-match version of Prange's algorithm, on which all variants of the currently best known decoding algorithms are built. The fastest algorithm, of Becker, Joux, May and Meurer (2012), achieves running time 2^(0.102n) in the full distance decoding setting and 2^(0.0494n) with half (bounded) distance decoding. In this work we point out that the sort-and-match routine in Stern's algorithm is carried out in a non-optimal way, since the matching is done in a two-step manner to realize an approximate matching up to a small number of error coordinates. Our observation is that such an approximate matching can be done by a variant of the so-called High Dimensional Nearest Neighbor Problem. Namely, out of two lists with entries from F_2^m, we have to find a pair with closest Hamming distance. We develop a new algorithm for this problem with subquadratic complexity, which might be of independent interest in other contexts. Using our algorithm for full distance decoding improves Stern's complexity from 2^(0.117n) to 2^(0.114n). Since the techniques of Becker et al. apply to our algorithm as well, we eventually obtain the fastest decoding algorithm for binary linear codes, with complexity 2^(0.097n). In the half distance decoding scenario, we obtain a complexity of 2^(0.0473n).
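The nearest-neighbour subproblem the abstract names can be pinned down in a few lines. This quadratic brute force only states the problem the paper's subquadratic algorithm accelerates; it is not that algorithm, and the function name is illustrative.

```python
def closest_pair_hamming(L1, L2):
    """Given two lists of m-bit vectors encoded as Python ints,
    return (d, x, y) where (x, y) in L1 x L2 minimises the Hamming
    distance d = wt(x XOR y). Quadratic brute force; the paper's
    algorithm solves this in subquadratic time."""
    best = None
    for x in L1:
        for y in L2:
            d = bin(x ^ y).count("1")  # Hamming weight of the XOR
            if best is None or d < best[0]:
                best = (d, x, y)
    return best
```

In the decoding application, the two lists come from splitting candidate error patterns across an information set, and the small residual Hamming distance corresponds to the error coordinates the approximate matching must tolerate.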
New Non-Interactive Zero-Knowledge Subset Sum, Decision Knapsack and Range Arguments
, 2013
Abstract

Cited by 1 (0 self)
Abstract. We propose several new efficient non-interactive zero-knowledge (NIZK) arguments in the common reference string model. The final arguments are based on two building blocks: a more efficient version of Lipmaa's Hadamard product argument from TCC 2012, and a novel shift argument. Based on these two arguments, we speed up the recent range argument by Chaabouni, Lipmaa and Zhang (FC 2012). We also propose efficient arguments for two NP-complete problems, subset sum and decision knapsack, with constant communication, quasilinear prover's computation and linear verifier's computation.
Quantum algorithms for the subset-sum problem
Abstract

Cited by 1 (0 self)
Abstract. This paper introduces a subset-sum algorithm with heuristic asymptotic cost exponent below 0.25. The new algorithm combines the 2010 Howgrave-Graham–Joux subset-sum algorithm with a new streamlined data structure for quantum walks on Johnson graphs.
Decoding Random Binary ... 1 + 1 = 0 Improves Information Set Decoding
Abstract
Decoding random linear codes is a well-studied problem with many applications in complexity theory and cryptography. The security of almost all coding and LPN/LWE-based schemes relies on the assumption that it is hard to decode random linear codes. Recently, there has been progress in improving the running time of the best decoding algorithms for binary random codes. The ball collision technique of Bernstein, Lange and Peters lowered the complexity of Stern's information set decoding algorithm to 2^(0.0556n). Using representations, this bound was improved to 2^(0.0537n) by May, Meurer and Thomae. We show how to further increase the number of representations and propose a new information set decoding algorithm with running time 2^(0.0494n).
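The underlying computational problem here, syndrome decoding, can be written down directly. This exhaustive search is only the naive baseline that information set decoding algorithms improve on, sketched with illustrative names:

```python
from itertools import combinations

def brute_force_decode(H, s, w):
    """Syndrome decoding over F2: find the support (index tuple) of
    an error vector e of weight <= w with H*e = s (mod 2), or None.
    H is a list of rows with 0/1 entries; s is a 0/1 list."""
    n, r = len(H[0]), len(H)
    for weight in range(w + 1):
        for support in combinations(range(n), weight):
            syndrome = [0] * r
            for j in support:            # XOR the chosen columns of H
                for i in range(r):
                    syndrome[i] ^= H[i][j]
            if syndrome == s:
                return support
    return None
```

Information set decoding avoids this full search by repeatedly guessing a coordinate set that is (mostly) error-free and solving a linear system for the rest; the representation technique in the abstract refines how the remaining error weight is split.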
Solving Subset Sum Problems of Density Close to 1 by "Randomized" BKZ-Reduction
, 2012
Abstract
Abstract. Subset sum or knapsack problems of dimension n are known to be hardest for knapsacks of density close to 1. These problems are NP-hard for arbitrary n. One can solve such problems either by lattice basis reduction or by optimized birthday algorithms. Recently, Becker, Coron and Joux [BCJ10] presented a birthday algorithm that follows Schroeppel, Shamir [SS81] and Howgrave-Graham, Joux [HJ10]. This algorithm solves 50 random knapsacks of dimension 80 and density close to 1 in roughly 15 hours on a 2.67 GHz PC. We present an optimized lattice basis reduction algorithm that follows Schnorr, Euchner [SE03], using the pruning of Schnorr, Hörner [SH95], that solves such random knapsacks of dimension 80 on average in less than a minute, and all 50 such problems together about 9.4 times faster and with less space than [BCJ10] on another 2.67 GHz PC.
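The lattice-reduction route starts from the standard embedding of a knapsack instance into a lattice basis, in the spirit of Lagarias-Odlyzko/CJLOSS. A sketch of just the basis construction, with illustrative names; running BKZ on the resulting basis, the paper's actual contribution, is not shown:

```python
def knapsack_lattice_basis(weights, target, N=None):
    """Build a basis (list of rows) of a lattice in which a 0/1
    solution x of sum(x_i * weights[i]) = target appears as the
    short vector sum_{x_i=1} b_i - b_last = (+-1, ..., +-1, 0).
    N scales the last column so that any short reduced vector must
    satisfy the subset-sum equation exactly."""
    n = len(weights)
    if N is None:
        N = n + 1
    basis = []
    for i, a in enumerate(weights):
        row = [0] * (n + 1)
        row[i] = 2                # CJLOSS variant: 2 on the diagonal ...
        row[n] = N * a
        basis.append(row)
    basis.append([1] * n + [N * target])  # ... plus an all-ones row
    return basis
```

A reduction algorithm (LLL/BKZ, possibly with pruning as in the abstract) is then run on this basis; if it finds a vector with all entries ±1 and a zero last coordinate, the signs reveal the solution bits.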
Space–Time Tradeoffs for Subset Sum: An Improved Worst-Case Algorithm
Abstract
Abstract. The technique of Schroeppel and Shamir (SICOMP, 1981) has long been the most efficient way to trade space against time for the Subset Sum problem. In the random-instance setting, however, improved tradeoffs exist. In particular, the recently discovered dissection method of Dinur et al. (CRYPTO 2012) yields a significantly improved space–time tradeoff curve for instances with strong randomness properties. Our main result is that these strong randomness assumptions can be removed, obtaining the same space–time tradeoffs in the worst case. We also show that for small space usage the dissection algorithm can be almost fully parallelized. Our strategy for dealing with arbitrary instances is to instead inject the randomness into the dissection process itself by working over a carefully selected but random composite modulus, and to introduce explicit space–time controls into the algorithm by means of a "bailout mechanism".
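The classical meet-in-the-middle baseline that Schroeppel and Shamir refine can be sketched directly. This version runs in about 2^(n/2) time and space; Schroeppel-Shamir keeps the time but enumerates one side lazily via priority queues to cut space to about 2^(n/4). That refinement, and the dissection method above, are not shown; names are illustrative.

```python
from itertools import combinations

def subset_sum_mitm(weights, target):
    """Meet-in-the-middle subset sum: tabulate all subset sums of
    the first half, then scan subsets of the second half looking
    for a complementary sum. Returns a tuple of chosen indices,
    or None if no subset hits the target."""
    n = len(weights)
    half = n // 2
    left = {}
    for r in range(half + 1):
        for idx in combinations(range(half), r):
            s = sum(weights[i] for i in idx)
            left.setdefault(s, idx)
    for r in range(n - half + 1):
        for idx in combinations(range(half, n), r):
            s = sum(weights[i] for i in idx)
            if target - s in left:       # halves complement each other
                return left[target - s] + idx
    return None
```

The dissection method generalizes this two-way split into a recursive multi-way split with intermediate modular constraints, which is where the random composite modulus in the abstract enters.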