Results 1–10 of 14
Saving Space by Algebraization, 2010
Cited by 16 (1 self)
Abstract: The Subset Sum and Knapsack problems are fundamental NP-complete problems, and the pseudo-polynomial time dynamic programming algorithms for them appear in every algorithms textbook. These algorithms require pseudo-polynomial time and space. Since we do not expect polynomial-time algorithms for Subset Sum and Knapsack to exist, a very natural question is whether they can be solved in pseudo-polynomial time and polynomial space. In this paper we answer this question affirmatively and give the first pseudo-polynomial time, polynomial-space algorithms for these problems. Our approach is based on algebraic methods and turns out to be useful for several other problems as well. We then show how the framework also yields polynomial-space exact algorithms for the classical Traveling Salesman, Weighted Set Cover, and Weighted Steiner Tree problems. Our algorithms match the time bounds of the best known pseudo-polynomial-space algorithms for these problems.
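For reference, the textbook dynamic program the abstract contrasts against can be sketched in a few lines of Python (the function name is illustrative):

```python
def subset_sum(weights, target):
    """Textbook DP for Subset Sum: O(n * target) time AND space.

    reachable[t] is True iff some subset of the items seen so far
    sums to t.
    """
    reachable = [False] * (target + 1)
    reachable[0] = True            # the empty subset sums to 0
    for w in weights:
        # iterate downwards so each item is used at most once
        for t in range(target, w - 1, -1):
            if reachable[t - w]:
                reachable[t] = True
    return reachable[target]
```

The O(target)-bit table is exactly the pseudo-polynomial space cost that the paper's algebraic method replaces with space polynomial in n and log(target).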
Improved generic algorithms for hard knapsacks
Cited by 13 (3 self)
Abstract: At Eurocrypt 2010, Howgrave-Graham and Joux described an algorithm for solving hard knapsacks of density close to 1 in time Õ(2^{0.337n}) and memory Õ(2^{0.256n}), thereby improving a 30-year-old algorithm by Shamir and Schroeppel. In this paper we extend the Howgrave-Graham–Joux technique to obtain an algorithm with running time down to Õ(2^{0.291n}). An implementation shows the practicability of the technique. Another challenge is to reduce the memory requirement. We describe a constant-memory algorithm based on cycle finding with running time Õ(2^{0.72n}); we also show a time-memory tradeoff.
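The classical meet-in-the-middle baseline that this line of work improves on (Horowitz–Sahni style: time and memory about 2^{n/2}; Schroeppel–Shamir reduce the memory to about 2^{n/4}) can be sketched as follows. This is an illustrative Python sketch of the baseline, not the paper's algorithm:

```python
from itertools import combinations

def knapsack_mitm(weights, target):
    """Meet-in-the-middle for Subset Sum in ~2^(n/2) time and memory.

    Split the items into two halves, enumerate every subset sum of the
    left half into a hash table, then look up target - sum for every
    subset of the right half.
    """
    n = len(weights)
    left, right = weights[:n // 2], weights[n // 2:]

    def all_subset_sums(items):
        sums = {0: ()}              # subset sum -> one witness index set
        for size in range(1, len(items) + 1):
            for combo in combinations(range(len(items)), size):
                sums.setdefault(sum(items[i] for i in combo), combo)
        return sums

    left_sums = all_subset_sums(left)
    for rsum, rcombo in all_subset_sums(right).items():
        if target - rsum in left_sums:
            lcombo = left_sums[target - rsum]
            return ([left[i] for i in lcombo] +
                    [right[i] for i in rcombo])
    return None                     # no subset reaches the target
```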
Decoding random linear codes in Õ(2^{0.054n})
Advances in Cryptology – ASIACRYPT 2011, volume 7073 of LNCS, 2011
Cited by 9 (0 self)
Abstract: Decoding random linear codes is a fundamental problem in complexity theory and lies at the heart of almost all code-based cryptography. The best attacks on the most prominent code-based cryptosystems, such as McEliece, directly use decoding algorithms for linear codes. The asymptotically best decoding algorithm for random linear codes of length n was for a long time Stern's variant of information-set decoding, running in time Õ(2^{0.05563n}). Recently, Bernstein, Lange and Peters proposed a new technique called ball-collision decoding, which offers a speedup over Stern's algorithm by improving the running time to Õ(2^{0.05558n}). In this paper, we present a new algorithm for decoding linear codes that is inspired by a representation technique due to Howgrave-Graham and Joux in the context of subset sum algorithms. Our decoding algorithm offers a rigorous complexity analysis for random linear codes and brings the time complexity down to Õ(2^{0.05363n}).
On the Complexity of the BKW Algorithm on LWE
Cited by 3 (0 self)
Abstract: In this paper we present a study of the complexity of the Blum-Kalai-Wasserman (BKW) algorithm when applied to the Learning with Errors (LWE) problem, providing refined estimates of the data and computational effort required to solve concrete instances of the LWE problem. We apply this refined analysis to suggested parameters for various LWE-based cryptographic schemes from the literature and, as a result, provide new upper bounds for the concrete hardness of these LWE-based schemes.
Constructing Carmichael numbers through improved subset-product algorithms, 2012
Cited by 2 (0 self)
Abstract: We have constructed a Carmichael number with 10,333,229,505 prime factors, and have also constructed Carmichael numbers with k prime factors for every k between 3 and 19,565,220. These computations are the product of implementations of two new algorithms for the subset product problem that exploit the non-uniform distribution of primes p with the property that p − 1 divides a highly composite Λ.
A LOWMEMORY ALGORITHM FOR FINDING SHORT PRODUCT REPRESENTATIONS IN FINITE GROUPS
Cited by 1 (0 self)
Abstract: We describe a space-efficient algorithm for solving a generalization of the subset sum problem in a finite group G, using a Pollard-ρ approach. Given an element z and a sequence of elements S, our algorithm attempts to find a subsequence of S whose product in G is equal to z. For a random sequence S of length d·log₂ n, where n = #G and d ⩾ 2 is a constant, we find that its expected running time is O(√n · log n) group operations (we give a rigorous proof for d > 4), and it only needs to store O(1) group elements. We consider applications to class groups of imaginary quadratic fields, and to finding isogenies between elliptic curves over a finite field.
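The constant-memory primitive underlying such Pollard-ρ methods is cycle finding on an iterated map; Floyd's two-pointer algorithm is the simplest variant. A generic Python sketch (the names are illustrative; this is the cycle-finding idea, not the paper's walk on G):

```python
def floyd_cycle(f, x0):
    """Floyd's tortoise-and-hare: detect a cycle of the iterated map f
    using O(1) memory.  Returns (mu, lam): the tail length and cycle
    length of the sequence x0, f(x0), f(f(x0)), ...
    """
    # Phase 1: advance pointers at speeds 1 and 2 until they meet.
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(f(hare))
    # Phase 2: restart one pointer at x0; they meet at the cycle start.
    mu, tortoise = 0, x0
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(hare)
        mu += 1
    # Phase 3: walk once around the cycle to measure its length.
    lam, hare = 1, f(tortoise)
    while tortoise != hare:
        hare = f(hare)
        lam += 1
    return mu, lam
```

Only two group elements (here, two states) are stored at any time, which is where the O(1)-storage claim in the abstract comes from.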
Quantum algorithms for the subsetsum problem
Cited by 1 (0 self)
Abstract: This paper introduces a subset-sum algorithm with heuristic asymptotic cost exponent below 0.25. The new algorithm combines the 2010 Howgrave-Graham–Joux subset-sum algorithm with a new streamlined data structure for quantum walks on Johnson graphs.
Efficient Dissection of Bicomposite Problems, with Applications to Cryptanalysis, Knapsacks, and Combinatorial Search Problems
Abstract: In this paper we show that a large class of diverse problems have a bicomposite structure which makes it possible to solve them with a new type of algorithm called dissection, which has better time/memory tradeoffs than previously known algorithms. A typical example is the problem of finding the key of multiple encryption schemes with r independent n-bit keys. All previous error-free attacks required time T and memory M satisfying T·M = 2^{rn}, and even if "false negatives" are allowed, no previous attack could achieve T·M < 2^{3rn/4}. Our new technique yields the first algorithm which never errs and finds all the possible keys with a smaller product T·M, such as T = 2^{4n} time and M = 2^n memory for breaking the sequential execution of r = 7 block ciphers. The improvement ratio we obtain increases without bound as r increases, and if we allow algorithms which can sometimes miss solutions, we can get even better tradeoffs by combining our dissection technique with parallel collision search. To demonstrate the generality of the new algorithmic technique, we show how to use it in a generic way to solve, with better time complexities (for small memory complexities), hard combinatorial search problems such as knapsack problems or finding the shortest sequence of face rotations that unscrambles a given state of Rubik's cube.
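The T·M = 2^{rn} baseline for multiple encryption that the abstract mentions is the classical meet-in-the-middle attack on double encryption (r = 2): tabulate one half of the computation, then scan the other half against the table. A toy Python illustration; the 8-bit-key `enc`/`dec` and all constants here are invented for the example, not a real cipher:

```python
MOD = 1 << 16
MUL = 0x6F35                        # odd, hence invertible mod 2^16
MUL_INV = pow(MUL, -1, MOD)

def enc(k, x):
    """Toy 'cipher': XOR in an 8-bit key, then multiply by an odd constant."""
    return ((x ^ k) * MUL) % MOD

def dec(k, y):
    return ((y * MUL_INV) % MOD) ^ k

def mitm_double(pairs):
    """Meet-in-the-middle on c = enc(k2, enc(k1, p)): about 2 * 2^8
    cipher evaluations plus a 2^8-entry table, versus 2^16 for brute
    force -- the T*M tradeoff the dissection technique generalizes.
    """
    (p0, c0), (p1, c1) = pairs
    middle = {enc(k1, p0): k1 for k1 in range(256)}   # forward half
    for k2 in range(256):                             # backward half
        k1 = middle.get(dec(k2, c0))
        if k1 is not None and enc(k2, enc(k1, p1)) == c1:
            return k1, k2          # candidate that survives both pairs
    return None
```

The second plaintext/ciphertext pair filters out accidental middle-value collisions, so any returned key pair is consistent with all the given data.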
A Non-Asymptotic Analysis of Information Set Decoding
Abstract: We propose a non-asymptotic complexity analysis of some variants of information set decoding. In particular, we give this analysis for two recent variants – published by May, Meurer and Thomae in 2011, and by Becker, Joux, May and Meurer in 2012 – for which only an asymptotic analysis was available. The purpose is to provide a simple and accurate estimate of the complexity, to facilitate parameter selection for code-based cryptosystems. We implemented those estimates and give a comparison at the end of the paper. Notation: S_n(0, w) is the sphere of radius w centered at 0 in the Hamming space {0, 1}^n; |X| denotes the cardinality of the set X.
Solving Subset Sum Problems of Density close to 1 by "randomized" BKZ-reduction, 2012
Abstract: Subset sum or knapsack problems of dimension n are known to be hardest for knapsacks of density close to 1. These problems are NP-hard for arbitrary n. One can solve such problems either by lattice basis reduction or by optimized birthday algorithms. Recently, Becker, Coron and Joux [BCJ10] presented a birthday algorithm that follows Schroeppel, Shamir [SS81] and Howgrave-Graham, Joux [HJ10]. This algorithm solves 50 random knapsacks of dimension 80 and density close to 1 in roughly 15 hours on a 2.67 GHz PC. We present an optimized lattice basis reduction algorithm that follows Schnorr, Euchner [SE03], using the pruning of Schnorr, Hörner [SH95], which solves such random knapsacks of dimension 80 in less than a minute on average, and all 50 such problems together about 9.4 times faster, and with less space, than [BCJ10] on another 2.67 GHz PC.