Results 1–10 of 16
Saving Space by Algebraization
2010
Cited by 13 (2 self)
Abstract:
The Subset Sum and Knapsack problems are fundamental NP-complete problems, and the pseudopolynomial-time dynamic programming algorithms for them appear in every algorithms textbook. These algorithms require pseudopolynomial time and space. Since we do not expect polynomial-time algorithms for Subset Sum and Knapsack to exist, a very natural question is whether they can be solved in pseudopolynomial time and polynomial space. In this paper we answer this question affirmatively and give the first pseudopolynomial-time, polynomial-space algorithms for these problems. Our approach is based on algebraic methods and turns out to be useful for several other problems as well. We then show how the framework yields polynomial-space exact algorithms for the classical Traveling Salesman, Weighted Set Cover and Weighted Steiner Tree problems. Our algorithms match the time bounds of the best known pseudopolynomial-space algorithms for these problems.
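The textbook dynamic program this abstract contrasts itself against can be sketched as follows (a minimal illustration of the standard algorithm, not the paper's polynomial-space method; the function name is ours):

```python
def subset_sum(weights, target):
    """Textbook pseudopolynomial DP for Subset Sum: O(n * target) time AND space.

    reachable[s] is True iff some subset of the weights seen so far sums to s.
    The paper's point is that this table costs pseudopolynomial *space*,
    which its algebraic framework reduces to polynomial space.
    """
    reachable = [False] * (target + 1)
    reachable[0] = True  # the empty subset sums to 0
    for w in weights:
        # iterate downwards so each weight is used at most once
        for s in range(target, w - 1, -1):
            if reachable[s - w]:
                reachable[s] = True
    return reachable[target]
```

The space bottleneck is the `reachable` table of size `target + 1`, i.e. pseudopolynomial in the input encoding.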
Improved generic algorithms for hard knapsacks
Cited by 12 (3 self)
Abstract:
At Eurocrypt 2010, Howgrave-Graham and Joux described an algorithm for solving hard knapsacks of density close to 1 in time Õ(2^0.337n) and memory Õ(2^0.256n), thereby improving a 30-year-old algorithm by Shamir and Schroeppel. In this paper we extend the Howgrave-Graham–Joux technique to get an algorithm with running time down to Õ(2^0.291n). An implementation shows the practicability of the technique. Another challenge is to reduce the memory requirement. We describe a constant-memory algorithm based on cycle finding with running time Õ(2^0.72n); we also show a time-memory tradeoff.
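For context, the classic 2^(n/2)-time meet-in-the-middle attack is the baseline that the Schroeppel–Shamir and Howgrave-Graham–Joux lines of work improve on. A minimal sketch (function name and list-splitting choice are ours):

```python
def knapsack_mitm(weights, target):
    """Classic meet-in-the-middle for subset sum: ~2^(n/2) time and memory.

    Split the weights in half, store all subset sums of the left half in a
    set, then for each right-half subset sum s check whether target - s is
    present. This baseline is what the representation technique refines.
    """
    n = len(weights)
    left, right = weights[: n // 2], weights[n // 2 :]

    def all_sums(arr):
        sums = {0}
        for w in arr:
            sums |= {s + w for s in sums}
        return sums

    left_sums = all_sums(left)
    return any(target - s in left_sums for s in all_sums(right))
```

The memory cost of the stored half-sums is exactly the obstacle that the constant-memory, cycle-finding variant mentioned in the abstract avoids.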
Decoding random linear codes in Õ(2^0.054n)
Advances in Cryptology – ASIACRYPT 2011, volume 7073 of LNCS, 2011
Cited by 9 (0 self)
Abstract:
Decoding random linear codes is a fundamental problem in complexity theory and lies at the heart of almost all code-based cryptography. The best attacks on the most prominent code-based cryptosystems, such as McEliece, directly use decoding algorithms for linear codes. The asymptotically best decoding algorithm for random linear codes of length n was for a long time Stern's variant of information-set decoding, running in time Õ(2^0.05563n). Recently, Bernstein, Lange and Peters proposed a new technique called ball-collision decoding, which offers a speedup over Stern's algorithm by improving the running time to Õ(2^0.05558n). In this paper, we present a new algorithm for decoding linear codes that is inspired by a representation technique due to Howgrave-Graham and Joux in the context of subset sum algorithms. Our decoding algorithm offers a rigorous complexity analysis for random linear codes and brings the time complexity down to Õ(2^0.05363n).
On the Complexity of the BKW Algorithm on LWE
Cited by 3 (0 self)
Abstract:
In this paper we present a study of the complexity of the Blum-Kalai-Wasserman (BKW) algorithm when applied to the Learning with Errors (LWE) problem, providing refined estimates of the data and computational effort required to solve concrete instances of the LWE problem. We apply this refined analysis to suggested parameters for various LWE-based cryptographic schemes from the literature and, as a result, provide new upper bounds for the concrete hardness of these LWE-based schemes.
Constructing Carmichael numbers through improved subset-product algorithms
, 2012
Cited by 2 (0 self)
Abstract:
We have constructed a Carmichael number with 10,333,229,505 prime factors, and have also constructed Carmichael numbers with k prime factors for every k between 3 and 19,565,220. These computations are the product of implementations of two new algorithms for the subset product problem that exploit the non-uniform distribution of primes p with the property that p − 1 divides a highly composite Λ.
On computing nearest neighbors with applications to decoding of binary linear codes
In Advances in Cryptology – Eurocrypt 2015, Lecture Notes in Computer Science, 2015
Cited by 2 (0 self)
Abstract:
We propose a new decoding algorithm for random binary linear codes. The so-called information set decoding algorithm of Prange (1962) achieves worst-case complexity 2^0.121n. In the late 80s, Stern proposed a sort-and-match version of Prange's algorithm, on which all variants of the currently best known decoding algorithms are built. The fastest algorithm, by Becker, Joux, May and Meurer (2012), achieves running time 2^0.102n in the full distance decoding setting and 2^0.0494n with half (bounded) distance decoding. In this work we point out that the sort-and-match routine in Stern's algorithm is carried out in a non-optimal way, since the matching is done in a two-step manner to realize an approximate matching up to a small number of error coordinates. Our observation is that such an approximate matching can be done by a variant of the so-called High Dimensional Nearest Neighbor Problem: out of two lists with entries from F_2^m, we have to find a pair with closest Hamming distance. We develop a new algorithm for this problem with subquadratic complexity, which might be of independent interest in other contexts. Using our algorithm for full distance decoding improves Stern's complexity from 2^0.117n to 2^0.114n. Since the techniques of Becker et al. apply to our algorithm as well, we eventually obtain the fastest decoding algorithm for binary linear codes, with complexity 2^0.097n. In the half distance decoding scenario, we obtain a complexity of 2^0.0473n.
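The nearest-neighbor problem this abstract poses (two lists of vectors over F_2^m, find the pair at minimal Hamming distance) has an obvious quadratic baseline; the paper's contribution is a subquadratic algorithm. A brute-force sketch, with vectors encoded as m-bit integers (encoding and function name are ours):

```python
def closest_pair(list_a, list_b, m):
    """Quadratic baseline for the bichromatic nearest-neighbor problem over
    F_2^m: scan all pairs, keeping the one with minimal Hamming distance.
    This only illustrates the problem statement, not the paper's algorithm.
    """
    best = (m + 1, None, None)  # (distance, vector from A, vector from B)
    for a in list_a:
        for b in list_b:
            d = bin(a ^ b).count("1")  # Hamming distance of m-bit vectors
            if d < best[0]:
                best = (d, a, b)
    return best
```

With lists of size 2^(cm), this scan costs 2^(2cm) pair comparisons; the subquadratic routine is what drives the improved decoding exponents quoted above.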
Quantum algorithms for the subset-sum problem
Cited by 1 (0 self)
Abstract:
This paper introduces a subset-sum algorithm with heuristic asymptotic cost exponent below 0.25. The new algorithm combines the 2010 Howgrave-Graham–Joux subset-sum algorithm with a new streamlined data structure for quantum walks on Johnson graphs.
A LOW-MEMORY ALGORITHM FOR FINDING SHORT PRODUCT REPRESENTATIONS IN FINITE GROUPS
Cited by 1 (0 self)
Abstract:
We describe a space-efficient algorithm for solving a generalization of the subset sum problem in a finite group G, using a Pollard-ρ approach. Given an element z and a sequence of elements S, our algorithm attempts to find a subsequence of S whose product in G is equal to z. For a random sequence S of length d log₂ n, where n = #G and d ⩾ 2 is a constant, we find that its expected running time is O(√(n log n)) group operations (we give a rigorous proof for d > 4), and it only needs to store O(1) group elements. We consider applications to class groups of imaginary quadratic fields, and to finding isogenies between elliptic curves over a finite field.
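The O(1)-storage claim rests on cycle finding: iterating a deterministic map on a finite domain must eventually enter a cycle, and a collision can be located without storing the trajectory. A generic Floyd cycle-finding sketch of that engine (not the paper's group-specific walk; the function name is ours):

```python
def floyd_collision(f, x0):
    """Floyd's cycle finding, the constant-memory engine behind Pollard-rho
    style algorithms. Iterating f from x0 on a finite domain must cycle;
    return (mu, lam) = (tail length, cycle length), storing O(1) elements.
    """
    # Phase 1: tortoise moves 1 step per round, hare moves 2; they meet
    # somewhere inside the cycle.
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(f(hare))
    # Phase 2: restart the tortoise at x0; stepping both once per round,
    # they meet at the start of the cycle, giving the tail length mu.
    mu, tortoise = 0, x0
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(hare)
        mu += 1
    # Phase 3: walk the hare once around the cycle to measure its length.
    lam, hare = 1, f(tortoise)
    while tortoise != hare:
        hare = f(hare)
        lam += 1
    return mu, lam
```

In the paper's setting the iterated map walks over products of group elements, so a detected collision yields two subsequences with equal product, i.e. a relation of the desired form.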
A Knapsack-like Code Using Recurrence Sequence Representations
Abstract:
We recently showed that every positive integer can be represented uniquely using a recurrence sequence when certain restrictions on the digit strings are satisfied. We present the details of how such representations can be used to build a knapsack-like public key cryptosystem. We also present new disguising methods, and provide arguments for the security of the code against known methods of attack.
Decoding Random Binary . . . 1 + 1 = 0 Improves Information Set Decoding
Abstract:
Decoding random linear codes is a well-studied problem with many applications in complexity theory and cryptography. The security of almost all coding- and LPN/LWE-based schemes relies on the assumption that it is hard to decode random linear codes. Recently, there has been progress in improving the running time of the best decoding algorithms for binary random codes. The ball-collision technique of Bernstein, Lange and Peters lowered the complexity of Stern's information set decoding algorithm to 2^0.0556n. Using representations, this bound was improved to 2^0.0537n by May, Meurer and Thomae. We show how to further increase the number of representations and propose a new information set decoding algorithm with running time 2^0.0494n.