Results 1–10 of 11
Hard Equality Constrained Integer Knapsacks
2005
Abstract

Cited by 16 (0 self)
We consider the following integer feasibility problem: “Given positive integer numbers a0, a1, ..., an, with gcd(a1, ..., an) = 1 and a = (a1, ..., an), does there exist a vector x ∈ Z^n_{≥0} satisfying ax = a0?” We prove that if the coefficients a1, ..., an have a certain decomposable structure, then the Frobenius number associated with a1, ..., an, i.e., the largest value of a0 for which ax = a0 has no nonnegative integer solution, is close to a known upper bound. In the instances we consider, we take a0 to be the Frobenius number. Furthermore, we show that the decomposable structure of a1, ..., an makes the solution of a lattice reformulation of our problem almost trivial, since the number of lattice hyperplanes that intersect the polytope resulting from the reformulation in the direction of the last coordinate is very small. Such instances are difficult for branch-and-bound, since they are infeasible and have large values of a0/ai, 1 ≤ i ≤ n. We illustrate our results with some computational examples.
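The feasibility question and the Frobenius number in this abstract can be made concrete with a small dynamic program. The sketch below is illustrative only: the coefficients {6, 9, 20} and the brute-force bound are our own toy choices, not instances from the paper, which targets much harder decomposable instances.

```python
from math import gcd
from functools import reduce

def representable(coeffs, target):
    """Dynamic program: can `target` be written as a nonnegative
    integer combination of `coeffs`, i.e. does ax = a0 have x >= 0?"""
    reach = [False] * (target + 1)
    reach[0] = True
    for t in range(1, target + 1):
        reach[t] = any(t >= a and reach[t - a] for a in coeffs)
    return reach[target]

def frobenius(coeffs, bound=10**5):
    """Largest a0 with no nonnegative solution; finite iff gcd = 1.
    Brute-force search up to `bound` (fine for tiny toy examples only)."""
    assert reduce(gcd, coeffs) == 1
    reach = [False] * (bound + 1)
    reach[0] = True
    for t in range(1, bound + 1):
        reach[t] = any(t >= a and reach[t - a] for a in coeffs)
    # the largest non-representable value below the bound
    return max(t for t in range(bound + 1) if not reach[t])

print(frobenius([6, 9, 20]))           # 43: the classic "McNugget" Frobenius number
print(representable([6, 9, 20], 43))   # False
print(representable([6, 9, 20], 44))   # True (44 = 20 + 4*6)
```

For the hard instances in the paper, a0 is exactly this Frobenius number, so the DP answer is "infeasible" — the point being that branch-and-bound must do exponential work to certify that.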
Analyzing Blockwise Lattice Algorithms using Dynamical Systems
Proc. 31st Cryptology Conference (CRYPTO), 2011
Abstract

Cited by 15 (1 self)
Strong lattice reduction is the key element of most attacks against lattice-based cryptosystems. Between the strongest but impractical HKZ reduction and the weak but fast LLL reduction, there have been several attempts to find efficient trade-offs. Among them, the BKZ algorithm introduced by Schnorr and Euchner [FCT’91] seems to achieve the best time/quality compromise in practice. However, no reasonable complexity upper bound is known for BKZ, and Gama and Nguyen [Eurocrypt’08] observed experimentally that its practical runtime seems to grow exponentially with the lattice dimension. In this work, we show that BKZ can be terminated long before its completion, while still providing bases of excellent quality. More precisely, we show that if given as input a basis (b_i)_{i≤n} ∈ Q^{n×n} of a lattice L and a block size β, and if terminated after …
Floating-Point LLL: Theoretical and Practical Aspects
Abstract

Cited by 7 (2 self)
The textbook LLL algorithm can be sped up considerably by replacing the underlying rational arithmetic used for the Gram-Schmidt orthogonalisation by floating-point approximations. We review how this modification has been and is currently implemented, both in theory and in practice. Using floating-point approximations seems natural for LLL even from the theoretical point of view: it is the key to reaching a bit complexity that is quadratic in the bit-length of the entries of the input vectors, without fast integer multiplication. The latter bit complexity strengthens the connection between LLL and Euclid’s gcd algorithm. On the practical side, the LLL implementer may weaken the provable variants in order to further improve their efficiency: we emphasise these techniques. We also consider the practical behaviour of the floating-point LLL algorithms, in particular their output distribution, their running time and their numerical behaviour. After 25 years of implementation, many questions motivated by the practical side of LLL remain open.
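As a point of reference for the rational baseline this abstract describes, here is a minimal textbook-style LLL in Python using exact Fraction arithmetic; replacing Fraction by float gives a precision-sensitive floating-point variant of the kind the survey analyses. The parameter δ = 3/4, the naive full GSO recomputation, and the example basis are our own choices, not the survey's algorithms.

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL over exact rationals; inefficient but provably exact."""
    b = [[Fraction(x) for x in v] for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))

    def gso():
        # Gram-Schmidt orthogonalisation: b*[i] and coefficients mu[i][j]
        bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = list(b[i])
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [vi - mu[i][j] * wj for vi, wj in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    k = 1
    while k < n:
        bstar, mu = gso()
        # size-reduce b[k] against b[k-1], ..., b[0]
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                bstar, mu = gso()
        # Lovász condition: accept or swap
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            b[k - 1], b[k] = b[k], b[k - 1]
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
# → [[0, 1, 0], [1, 0, 1], [-1, 0, 2]]
```

The recomputed-from-scratch GSO is what makes the exact version slow; the floating-point variants the survey reviews replace these Fractions with approximate orthogonalisations while controlling the accumulated error.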
Another view of the Gaussian algorithm
In Proceedings of the 2004 Latin American Theoretical Informatics (LATIN 2004), Lecture Notes in Computer Science, 2004
Abstract

Cited by 1 (0 self)
We introduce a rewrite system in the group of unimodular matrices, i.e., matrices with integer entries and determinant equal to ±1. We use this rewrite system to precisely characterize the mechanism of the Gaussian algorithm, which finds shortest vectors in a two-dimensional lattice given by any basis. Combining the algorithmics of lattice reduction with rewrite-system theory, we propose a new worst-case analysis of the Gaussian algorithm. An optimal worst-case bound already exists for a variant of the Gaussian algorithm, due to Vallée [16], who used essentially geometric considerations. Our analysis generalizes her result to the usual Gaussian algorithm. An interesting point of our work is its possible (but not easy) generalization to the same problem in higher dimensions, in order to exhibit a tight upper bound on the number of iterations of LLL-like reduction algorithms in the worst case. Moreover, our method seems applicable to other families of algorithms; as an illustration, the analysis of sorting algorithms is briefly developed in the last section of the paper.
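The Gaussian (Lagrange-Gauss) algorithm this paper analyses fits in a few lines. The sketch below is a standard formulation under our own conventions (integer row vectors, nearest-integer rounding via float division), not the paper's rewrite-system presentation; each step subtracts an integer multiple of the shorter vector, which is exactly a unimodular transformation.

```python
def gauss_reduce(u, v):
    """Lagrange-Gauss reduction: a shortest basis of the 2-D lattice
    spanned by integer vectors u and v."""
    def norm2(w):
        return w[0] * w[0] + w[1] * w[1]

    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]

    if norm2(u) > norm2(v):
        u, v = v, u
    while True:
        # nearest integer to <u, v> / <u, u>; float rounding is fine
        # for small entries, exact integer rounding is needed at scale
        q = round(dot(u, v) / norm2(u))
        v = (v[0] - q * u[0], v[1] - q * u[1])   # unimodular step
        if norm2(v) >= norm2(u):
            return u, v
        u, v = v, u

print(gauss_reduce((1, 0), (7, 1)))   # ((1, 0), (0, 1))
```

Each iteration mimics a division step of Euclid's gcd algorithm, which is why worst-case analyses like the one above resemble continued-fraction arguments.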
Lattices
2008
Abstract

Cited by 1 (0 self)
It occurs frequently in algorithmic number theory that a problem has both a discrete and a continuous component. A typical example is the search for a system of integers that satisfies certain inequalities. A problem of this nature can often be successfully approached by means of the algorithmic theory of lattices, a lattice being a discrete subgroup of a Euclidean vector space. This article provides an introduction to this theory, including a generous …
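A lattice, as defined in this abstract, is the set of integer combinations of a set of basis vectors. A brute-force toy (the basis vectors (2, 1) and (1, 3) are our own choice, not from the article) illustrates the discrete structure by searching for a shortest nonzero lattice point over a small coefficient box.

```python
from itertools import product

# The lattice spanned by (2, 1) and (1, 3): all points x*(2,1) + y*(1,3)
# with integer x, y. Discreteness means a shortest nonzero point exists.
basis = [(2, 1), (1, 3)]

def point(coeffs):
    return tuple(sum(c * b[i] for c, b in zip(coeffs, basis)) for i in range(2))

# enumerate coefficients in a small box, excluding the zero combination
points = [point(c) for c in product(range(-3, 4), repeat=2) if c != (0, 0)]
shortest = min(points, key=lambda p: p[0] ** 2 + p[1] ** 2)
print(shortest, shortest[0] ** 2 + shortest[1] ** 2)
```

For this basis the squared length is 5(x + y)² + 5y², so every nonzero lattice point has squared norm at least 5; exhaustive search only works in tiny dimensions, which is precisely why the lattice algorithms surveyed in these results exist.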
Variants of the LLL . . . Communications: Complexity Analysis and Fixed-Complexity Implementation
2010
Terminating BKZ
Abstract
Strong lattice reduction is the key element of most attacks against lattice-based cryptosystems. Between the strongest but impractical HKZ reduction and the weak but fast LLL reduction, there have been several attempts to find efficient trade-offs. Among them, the BKZ algorithm introduced by Schnorr and Euchner [FCT’91] seems to achieve the best time/quality compromise in practice. However, no reasonable complexity upper bound is known for BKZ, and Gama and Nguyen [Eurocrypt’08] observed experimentally that its practical runtime seems to grow exponentially with the lattice dimension. In this work, we show that BKZ can be terminated long before its completion, while still providing bases of excellent quality. More precisely, we show that if given as input a basis (b_i)_{i≤n} ∈ Q^{n×n} of a lattice L and a block size β, and if terminated after …