Lattice basis reduction: Improved practical algorithms and solving subset sum problems (1994)

by C P Schnorr, M Euchner
Venue: Math. Prog.
Results 1 - 10 of 329

Closest Point Search in Lattices

by Erik Agrell, Thomas Eriksson, Alexander Vardy, Kenneth Zeger - IEEE TRANS. INFORM. THEORY , 2000
"... In this semi-tutorial paper, a comprehensive survey of closest-point search methods for lattices without a regular structure is presented. The existing search strategies are described in a unified framework, and differences between them are elucidated. An efficient closest-point search algorithm, ba ..."
Abstract - Cited by 333 (2 self) - Add to MetaCart
In this semi-tutorial paper, a comprehensive survey of closest-point search methods for lattices without a regular structure is presented. The existing search strategies are described in a unified framework, and differences between them are elucidated. An efficient closest-point search algorithm, based on the Schnorr-Euchner variation of the Pohst method, is implemented. Given an arbitrary point x ∈ R^m and a generator matrix for a lattice Λ, the algorithm computes the point of Λ that is closest to x. The algorithm is shown to be substantially faster than other known methods, by means of a theoretical comparison with the Kannan algorithm and an experimental comparison with the Pohst algorithm and its variants, such as the recent Viterbo-Boutros decoder. The improvement increases with the dimension of the lattice. Modifications of the algorithm are developed to solve a number of related search problems for lattices, such as finding a shortest vector, determining the kissing number, compu...

Citation Context

...exity. This is probably the reason why the two strategies, despite having so much in common, have never been compared and evaluated against each other in the literature. Recently, Schnorr and Euchner [41] suggested an important improvement of the Pohst strategy, based on examining the points inside the aforementioned hypersphere in a different order. The same idea was developed independently by the au...
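The Schnorr-Euchner variation referred to here enumerates the candidate coefficients of each layer in a zig-zag order around the Babai (nearest-plane) choice, so the partial cost of a layer is non-decreasing and a subtree can be abandoned as soon as it exceeds the best distance found so far. The sketch below, in Python/NumPy, illustrates that search pattern; the function name and interface are illustrative, and this is not the authors' implementation.

```python
import numpy as np

def closest_point(B, t):
    """Depth-first closest-point search in the lattice spanned by the rows of B,
    using a Schnorr-Euchner style zig-zag enumeration with pruning (sketch only).

    B : (n, m) array whose rows are linearly independent basis vectors.
    t : (m,) target vector.
    Returns (u, p): integer coefficients u and the lattice point p = u @ B.
    """
    B = np.asarray(B, dtype=float)
    t = np.asarray(t, dtype=float)
    n = B.shape[0]

    # Triangularize: B^T = Q R, so lattice points are Q (R u) and we can
    # minimize ||R u - y|| with y the projection of t onto span(B).
    Q, R = np.linalg.qr(B.T)
    y = Q.T @ t
    best = {"dist": np.inf, "u": None}

    def search(i, u, dist):
        if i < 0:                        # a complete candidate has been reached
            best["dist"], best["u"] = dist, u.copy()
            return
        # Real-valued center for coefficient u_i given the choices u_{i+1}, ..., u_{n-1}.
        e = (y[i] - R[i, i + 1:] @ u[i + 1:]) / R[i, i]
        c = int(round(e))
        step = 1 if e >= c else -1       # start toward the nearer side of e
        cand, delta = c, step
        while True:
            d = dist + (R[i, i] * (cand - e)) ** 2
            # In zig-zag order the layer cost is non-decreasing, so the first
            # candidate exceeding the current record closes this subtree.
            if d >= best["dist"]:
                break
            u[i] = cand
            search(i - 1, u, d)
            cand += delta                # visits c, c+s, c-s, c+2s, c-2s, ...
            delta = -(delta + step) if (delta > 0) == (step > 0) else -delta + step

    search(n - 1, np.zeros(n, dtype=int), 0.0)
    return best["u"], best["u"] @ B
```

The first leaf visited is exactly the Babai nearest-plane point, after which the shrinking record distance prunes the remaining search; for example, closest_point(np.array([[2.0, 0.0], [1.0, 2.0]]), np.array([0.9, 1.8])) should return u = [0, 1] and the lattice point [1.0, 2.0].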

On Maximum-Likelihood Detection and the Search for the Closest Lattice Point

by Mohamed Oussama Damen, Hesham El Gamal, Giuseppe Caire - IEEE TRANS. INFORM. THEORY , 2003
"... Maximum-likelihood (ML) decoding algorithms for Gaussian multiple-input multiple-output (MIMO) linear channels are considered. Linearity over the field of real numbers facilitates the design of ML decoders using number-theoretic tools for searching the closest lattice point. These decoders are colle ..."
Abstract - Cited by 273 (9 self) - Add to MetaCart
Maximum-likelihood (ML) decoding algorithms for Gaussian multiple-input multiple-output (MIMO) linear channels are considered. Linearity over the field of real numbers facilitates the design of ML decoders using number-theoretic tools for searching the closest lattice point. These decoders are collectively referred to as sphere decoders in the literature. In this paper, a fresh look at this class of decoding algorithms is taken. In particular, two novel algorithms are developed. The first algorithm is inspired by the Pohst enumeration strategy and is shown to offer a significant reduction in complexity compared to the Viterbo-Boutros sphere decoder. The connection between the proposed algorithm and the stack sequential decoding algorithm is then established. This connection is utilized to construct the second algorithm which can also be viewed as an application of the Schnorr-Euchner strategy to ML decoding. Aided with a detailed study of preprocessing algorithms, a variant of the second algorithm is developed and shown to offer significant reductions in the computational complexity compared to all previously proposed sphere decoders with a near-ML detection performance. This claim is supported by intuitive arguments and simulation results in many relevant scenarios.
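The real-valued decomposition behind the phrase "linearity over the field of real numbers" can be made concrete as follows. This sketch reuses the closest_point function from the previous sketch, assumes an (unnormalized) QAM constellation on the Gaussian-integer grid, and solves the unconstrained lattice relaxation rather than the exact finite-alphabet ML problem handled by the paper's decoders; all names are illustrative.

```python
import numpy as np

def mimo_lattice_detect(H, y):
    """Map the complex model y = H s + n to a real closest-point problem.

    H : (m, n) complex channel matrix, y : (m,) complex observation.
    Returns a complex integer estimate of s (no constellation boundary control,
    so this is lattice decoding, not exact ML over a finite alphabet).
    """
    n = H.shape[1]
    Hr = np.block([[H.real, -H.imag],
                   [H.imag,  H.real]])        # (2m, 2n) real-valued channel
    yr = np.concatenate([y.real, y.imag])     # (2m,) real-valued observation
    # The noiseless received points Hr @ s form a lattice generated by the
    # columns of Hr, i.e. the rows of Hr.T for the closest_point sketch.
    u, _ = closest_point(Hr.T, yr)
    return u[:n] + 1j * u[n:]
```

Boundary control (restricting each coordinate to the QAM alphabet) and the preprocessing studied in the paper are what separate an actual sphere decoder from this bare relaxation.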

Analysis of PSLQ, An Integer Relation Finding Algorithm

by Helaman R. P. Ferguson, David H. Bailey, Steve Arno - Mathematics of Computation , 1999
"... Let K be either the real, complex, or quaternion number system and let O(K) be the corresponding integers. Let × = (Xl, • • • , ×n) be a vector in K n. The vector × has an integer relation if there exists a vector m = (ml,..., mn) E O(K) n, m = _ O, such that mlx I + m2x 2 +... + mnXn = O. In th ..."
Abstract - Cited by 90 (27 self) - Add to MetaCart
Let K be either the real, complex, or quaternion number system and let O(K) be the corresponding integers. Let x = (x_1, ..., x_n) be a vector in K^n. The vector x has an integer relation if there exists a vector m = (m_1, ..., m_n) ∈ O(K)^n, m ≠ 0, such that m_1 x_1 + m_2 x_2 + ... + m_n x_n = 0. In this paper we define the parameterized integer relation construction algorithm PSLQ(τ), where the parameter τ can be freely chosen in a certain interval. Beginning with an arbitrary vector x = (x_1, ..., x_n) ∈ K^n, iterations of PSLQ(τ) will produce lower bounds on the norm of any possible relation for x. Thus PSLQ(τ) can be used to prove that there are no relations for x of norm less than a given size. Let M_x be the smallest norm of any relation for x. For the real and complex case and each fixed parameter τ in a certain interval, we prove that PSLQ(τ) constructs a relation in less than O(n^3 + n^2 log M_x) iterations.

Citation Context

...tion of some kind. See [21] for a list of various orthogonalization algorithms and their numerical linear algebra differences. PSLQ is of the QR type. HJLS follows the lattice reduction work of [28], [34], and [36], which is classical Gram-Schmidt type, cf. [31] and [11]. This conceptual difference may explain some of the numerical differences observed between PSLQ and HJLS, cf. [2]. Rigorous proofs t...
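As a concrete illustration of the integer relation definition in the abstract (not of the paper's convergence analysis), the snippet below uses the PSLQ implementation shipped with the mpmath library; the golden ratio φ satisfies φ² = φ + 1, so a relation such as (1, 1, −1) for (1, φ, φ²) should be recovered, possibly up to sign.

```python
from mpmath import mp, mpf, sqrt, pslq

mp.dps = 40                      # working precision (decimal digits)
phi = (1 + sqrt(5)) / 2          # golden ratio, a root of x^2 - x - 1
x = [mpf(1), phi, phi**2]

rel = pslq(x)                    # integer vector m with m . x ~ 0, or None
print(rel)                       # expect [1, 1, -1] up to sign
if rel is not None:
    # Residual should vanish to within the working precision.
    print(sum(m_i * x_i for m_i, x_i in zip(rel, x)))
```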

Algorithm and implementation of the K-Best sphere decoding for MIMO detection

by Zhan Guo, Peter Nilsson - IEEE Journal on Selected Areas in Communications , 2006
"... Abstract—K-best Schnorr–Euchner (KSE) decoding algorithm is proposed in this paper to approach near-maximum-likelihood (ML) performance for multiple-input–multiple-output (MIMO) detection. As a low complexity MIMO decoding algorithm, the KSE is shown to be suitable for very large scale integration ( ..."
Abstract - Cited by 88 (1 self) - Add to MetaCart
A K-best Schnorr–Euchner (KSE) decoding algorithm is proposed in this paper to approach near-maximum-likelihood (ML) performance for multiple-input–multiple-output (MIMO) detection. As a low complexity MIMO decoding algorithm, the KSE is shown to be suitable for very large scale integration (VLSI) implementations and to be capable of supporting soft outputs. A modified KSE (MKSE) decoding algorithm is further proposed to improve the performance of the soft-output KSE with minor modifications. Moreover, a VLSI architecture is proposed for both algorithms. There are several low complexity and low-power features incorporated in the proposed algorithms and the VLSI architecture. The proposed hard-output KSE decoder and the soft-output MKSE decoder are implemented for 4×4 16-quadrature amplitude modulation (QAM) MIMO detection in a 0.35-μm and a 0.13-μm CMOS technology, respectively. The implemented hard-output KSE chip core is 5.76 mm² with 91 K gates. The KSE decoding throughput is up to 53.3 Mb/s with a core power consumption of 626 mW at 100 MHz clock frequency and 2.8 V supply. The implemented soft-output MKSE chip can achieve a decoding throughput of more than 100 Mb/s with a 0.56 mm² core area and 97 K gates. The implementation results show that it is feasible to achieve near-ML performance and high detection throughput for a 4×4 16-QAM MIMO system using the proposed algorithms and the VLSI architecture with reasonable complexity. Index Terms—Multiple-input–multiple-output (MIMO), Schnorr–Euchner algorithm, sphere decoder, very large scale integration (VLSI).

Citation Context

..., hence, be used in an iterative MIMO receiver [6]. The lattice decoding algorithms have two kinds of implementation strategies, i.e., Fincke–Pohst strategy [4], [7], [8] and Schnorr–Euchner strategy [9]–[11]. To avoid confusion in this paper, the lattice decoder using the Fincke–Pohst strategy is called SD (sphere decoder), and the lattice decoder using the Schnorr–Euchner strategy is called SE. In ...
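A minimal sketch of the K-best idea described in the abstract: process the layers of the QR-triangularized channel in order, and at each layer keep only the K partial candidates with the smallest accumulated metric, which fixes the complexity per detected vector. The function and variable names are illustrative; this is not the paper's KSE/MKSE design or its VLSI architecture.

```python
import numpy as np

def k_best_detect(H, y, alphabet, K=8):
    """Breadth-first K-best detection for the real model y = H s + n,
    with each s_i drawn from the finite set `alphabet` (sketch only).

    H : (m, n) real matrix with m >= n, y : (m,) observation.
    """
    Q, R = np.linalg.qr(H)            # H = Q R with R upper triangular (n x n)
    z = Q.T @ y
    n = H.shape[1]
    # A survivor is (metric, symbols), where `symbols` holds the decided
    # entries for layers i..n-1 (head of the list is the current layer).
    survivors = [(0.0, [])]
    for i in reversed(range(n)):
        expanded = []
        for metric, tail in survivors:
            interference = sum(R[i, i + 1 + j] * s for j, s in enumerate(tail))
            for a in alphabet:
                inc = (z[i] - R[i, i] * a - interference) ** 2
                expanded.append((metric + inc, [a] + tail))
        survivors = sorted(expanded, key=lambda c: c[0])[:K]   # keep K best
    best_metric, best_symbols = survivors[0]
    return np.array(best_symbols), best_metric
```

For the 4×4 16-QAM system of the paper, one would apply this to the 8×8 real-valued decomposition of the channel with alphabet = [-3, -1, 1, 3] (4-PAM per real dimension), up to the constellation scaling.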

The two faces of lattices in cryptology.

by P Nguyen, J Stern - In Proceedings of CaLC ’01, 2001
"... ..."
Abstract - Cited by 82 (17 self) - Add to MetaCart
Abstract not found

Citation Context

...nces depend on a parameter called the blocksize. These algorithms use some kind of exhaustive search super-exponential in the blocksize. So far, the best reduction algorithms in practice are variants [124, 125] of those BKZ-algorithms, which apply a heuristic to reduce exhaustive search. But little is known on the average-case (and even worst-case) complexity of reduction algorithms. Babai's nearest plane a...
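Since the context breaks off at Babai's nearest plane algorithm, here is a textbook sketch of that step (assuming the basis has already been LLL/BKZ-reduced; the code is illustrative and not taken from the cited works):

```python
import numpy as np

def nearest_plane(B, t):
    """Babai's nearest-plane algorithm: return a lattice vector close to t.

    B : (n, m) array whose rows are a (preferably reduced) lattice basis.
    The quality of the approximation depends directly on how well-reduced
    B is, which is why it is normally preceded by LLL or BKZ reduction.
    """
    B = np.asarray(B, dtype=float)
    t = np.asarray(t, dtype=float)
    n = B.shape[0]

    # Gram-Schmidt orthogonalization of the basis (no normalization).
    Bstar = B.copy()
    for i in range(n):
        for j in range(i):
            Bstar[i] -= (B[i] @ Bstar[j]) / (Bstar[j] @ Bstar[j]) * Bstar[j]

    # Peel off one basis vector per level, rounding the projection each time.
    v = np.zeros_like(t)
    r = t.copy()
    for i in reversed(range(n)):
        c = int(round((r @ Bstar[i]) / (Bstar[i] @ Bstar[i])))
        r -= c * B[i]
        v += c * B[i]
    return v
```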

The Insecurity of the Digital Signature Algorithm with Partially Known Nonces

by Phong Q. Nguyen, Igor E. Shparlinski - Journal of Cryptology , 2000
"... . We present a polynomial-time algorithm that provably recovers the signer's secret DSA key when a few bits of the random nonces k (used at each signature generation) are known for a number of DSA signatures at most linear in log q (q denoting as usual the small prime of DSA), under a reasonabl ..."
Abstract - Cited by 80 (18 self) - Add to MetaCart
We present a polynomial-time algorithm that provably recovers the signer's secret DSA key when a few bits of the random nonces k (used at each signature generation) are known for a number of DSA signatures at most linear in log q (q denoting as usual the small prime of DSA), under a reasonable assumption on the hash function used in DSA. The number of required bits is about log^{1/2} q, and can be further decreased to 2 if one assumes access to ideal lattice basis reduction, namely an oracle for the lattice closest vector problem for the infinity norm. All previously known results were only heuristic, including those of Howgrave-Graham and Smart who recently introduced that topic. Our attack is based on a connection with the hidden number problem (HNP) introduced at Crypto '96 by Boneh and Venkatesan in order to study the bit-security of the Diffie-Hellman key exchange. The HNP consists, given a prime number q, of recovering a number α ∈ F_q such that for many known random t ∈ F_q ...

Citation Context

...y generated parameters (including the prime q and the multipliers of the DSA-HNP). Each trial is referred to as a sample. Using Babai's nearest plane algorithm and Schnorr's Korkine-Zolotarev reduction [23, 25] with blocksize 20, we could break DSA with ℓ as low as ℓ = 4 and d = 70. More precisely, the method always worked for ℓ = 5 (a hundred samples). For ℓ = 4, it worked 90% of the time over 100 samples....
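The lattice step behind this experiment can be sketched as follows. A common way to turn hidden number problem data (t_i, u_i), where u_i approximates α·t_i mod q to within roughly q/2^(ℓ+1), into a closest-vector instance is the Boneh-Venkatesan style embedding below. The exact scaling, and the reduction actually used (the paper combines Babai's nearest plane algorithm with blocksize-20 Korkine-Zolotarev reduction), are simplified here; nearest_plane refers to the earlier sketch, and float64 arithmetic is for illustration only, since DSA-sized q requires exact integer arithmetic.

```python
import numpy as np

def hnp_cvp_instance(q, ts, us, ell):
    """Build a (d+1)-dimensional CVP instance for the hidden number problem
    (one standard embedding; scalings differ between papers).

    Lattice basis (rows):
        [ q  0 ... 0      0        ]
        [ 0  q ... 0      0        ]
        [        ...               ]
        [ t_1 t_2 ... t_d 1/2^(l+1)]
    The hidden vector (alpha*t_1 mod q, ..., alpha*t_d mod q, alpha/2^(l+1))
    is a lattice vector lying close to the target (u_1, ..., u_d, 0).
    """
    d = len(ts)
    B = np.zeros((d + 1, d + 1))
    B[:d, :d] = q * np.eye(d)
    B[d, :d] = ts
    B[d, d] = 1.0 / 2 ** (ell + 1)
    target = np.array(list(us) + [0.0])
    return B, target

# Sketch of the recovery step: find a close lattice vector v (here with the
# nearest_plane sketch above; the paper applies stronger reduction first) and
# read the candidate secret off the last coordinate.
#   B, target = hnp_cvp_instance(q, ts, us, ell)
#   v = nearest_plane(B, target)
#   alpha_candidate = int(round(v[-1] * 2 ** (ell + 1))) % q
```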

A unified framework for tree search decoding: rediscovering the sequential decoder

by A D Murugan, H El Gamal, M O Damen, G Caire - IEEE Transactions on Information Theory, 2006
"... ..."
Abstract - Cited by 80 (6 self) - Add to MetaCart
Abstract not found

Citation Context

...of such decoders are known in the literature as sphere decoders (e.g., [4]–[7]). These decoders typically exploit number-theoretic ideas to efficiently span the space of allowed codewords (e.g., [8], [9]). The complexity of such decoders was shown, via simulation and numerical analysis, to be significantly smaller than the exhaustive ML decoder in many scenarios of practical interest (e.g., [4], [5])...

Lattice Reduction: a Toolbox for the Cryptanalyst

by Antoine Joux, Jacques Stern - Journal of Cryptology , 1994
"... In recent years, methods based on lattice reduction have been used repeatedly for the cryptanalytic attack of various systems. Even if they do not rest on highly sophisticated theories, these methods may look a bit intricate to the practically oriented cryptographers, both from the mathematical ..."
Abstract - Cited by 72 (9 self) - Add to MetaCart
In recent years, methods based on lattice reduction have been used repeatedly for the cryptanalytic attack of various systems. Even if they do not rest on highly sophisticated theories, these methods may look a bit intricate to practically oriented cryptographers, both from the mathematical and the algorithmic point of view. The aim of the present paper is to explain what can be achieved by lattice reduction algorithms, even without an understanding of the actual mechanisms involved. Two examples are given, one of them being the attack devised by the second named author against Knuth's truncated linear congruential generator, which was announced a few years ago and appears here for the first time in journal version.

Attacking the Chor-Rivest Cryptosystem by Improved Lattice Reduction

by C. P. Schnorr, H.H. Hörner , 1995
"... We introduce algorithms for lattice basis reduction that are improvements of the famous L 3 -algorithm. If a random L 3 --reduced lattice basis b1 ; : : : ; bn is given such that the vector of reduced Gram-- Schmidt coefficients (f¯ i;j g 1 j ! i n) is uniformly distributed in [0; 1) ( n 2 ) ..."
Abstract - Cited by 72 (6 self) - Add to MetaCart
We introduce algorithms for lattice basis reduction that are improvements of the famous L³-algorithm. If a random L³-reduced lattice basis b_1, ..., b_n is given such that the vector of reduced Gram-Schmidt coefficients ({μ_{i,j}}, 1 ≤ j < i ≤ n) is uniformly distributed in [0, 1)^{n(n−1)/2}, then the pruned enumeration finds with positive probability a shortest lattice vector. We demonstrate the power of these algorithms by solving random subset sum problems of arbitrary density with 74 and 82 weights, by breaking the Chor-Rivest cryptoscheme in dimensions 103 and 151, and by breaking Damgård's hash function.

Citation Context

... The first vector of a block reduced basis satisfies ‖b_1‖ ≤ γ_β^{(n−1)/(β−1)} λ_1, where γ_β ∼ β/(πe) is the Hermite constant of dimension β. For an implementation of block reduction, see the algorithm BKZ of [SE94]. With block size β = 20 it is only 10 times slower than L³-reduction, but for large block sizes β the delay factor is about β^{O(β)}. This delay factor is the time to construct a shortest vector b̂_i for ...
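For orientation, here is a textbook sketch of the L³ (LLL) reduction that both the abstract's improvements and the BKZ bound quoted above build on; it recomputes the Gram-Schmidt data from scratch after every change, which keeps the code short but is far from the efficient floating-point variants discussed in the paper, and it is not the authors' algorithm.

```python
import numpy as np

def lll_reduce(basis, delta=0.75):
    """Textbook LLL (L^3) reduction of the rows of `basis` (reference sketch)."""
    B = np.array(basis, dtype=float)
    n = len(B)

    def gram_schmidt():
        Bstar = B.copy()
        mu = np.zeros((n, n))
        for i in range(n):
            for j in range(i):
                mu[i, j] = (B[i] @ Bstar[j]) / (Bstar[j] @ Bstar[j])
                Bstar[i] -= mu[i, j] * Bstar[j]
        return Bstar, mu

    Bstar, mu = gram_schmidt()
    k = 1
    while k < n:
        # Size-reduce b_k against b_{k-1}, ..., b_0.
        for j in range(k - 1, -1, -1):
            q = int(round(mu[k, j]))
            if q != 0:
                B[k] -= q * B[j]
                Bstar, mu = gram_schmidt()
        # Lovasz condition: keep b_k, or swap it down and backtrack.
        if Bstar[k] @ Bstar[k] >= (delta - mu[k, k - 1] ** 2) * (Bstar[k - 1] @ Bstar[k - 1]):
            k += 1
        else:
            B[[k, k - 1]] = B[[k - 1, k]]
            Bstar, mu = gram_schmidt()
            k = max(k - 1, 1)
    return B
```

BKZ, as implemented in [SE94], strengthens this by additionally enumerating a shortest vector in each projected block of size β, which is where the ‖b_1‖ ≤ γ_β^{(n−1)/(β−1)} λ_1 bound quoted in the context comes from.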

Better key sizes (and attacks) for LWE-based encryption

by Richard Lindner, Chris Peikert - In CT-RSA , 2011
"... We analyze the concrete security and key sizes of theoretically sound lattice-based encryption schemes based on the “learning with errors ” (LWE) problem. Our main contributions are: (1) a new lattice attack on LWE that combines basis reduction with an enumeration algorithm admitting a time/success ..."
Abstract - Cited by 71 (7 self) - Add to MetaCart
We analyze the concrete security and key sizes of theoretically sound lattice-based encryption schemes based on the “learning with errors” (LWE) problem. Our main contributions are: (1) a new lattice attack on LWE that combines basis reduction with an enumeration algorithm admitting a time/success tradeoff, which performs better than the simple distinguishing attack considered in prior analyses; (2) concrete parameters and security estimates for an LWE-based cryptosystem that is more compact and efficient than the well-known schemes from the literature. Our new key sizes are up to 10 times smaller than prior examples, while providing even stronger concrete security levels.