Results 1–10 of 66
On ideal lattices and learning with errors over rings. In Proc. of EUROCRYPT, volume 6110 of LNCS, 2010.
Abstract

Cited by 126 (18 self)
The “learning with errors” (LWE) problem is to distinguish random linear equations, which have been perturbed by a small amount of noise, from truly uniform ones. The problem has been shown to be as hard as worst-case lattice problems, and in recent years it has served as the foundation for a plethora of cryptographic applications. Unfortunately, these applications are rather inefficient due to an inherent quadratic overhead in the use of LWE. A main open question was whether LWE and its applications could be made truly efficient by exploiting extra algebraic structure, as was done for lattice-based hash functions (and related primitives). We resolve this question in the affirmative by introducing an algebraic variant of LWE called ring-LWE, and proving that it too enjoys very strong hardness guarantees. Specifically, we show that the ring-LWE distribution is pseudorandom, assuming that worst-case problems on ideal lattices are hard for polynomial-time quantum algorithms. Applications include the first truly practical lattice-based public-key cryptosystem with an efficient security reduction; moreover, many of the other applications of LWE can be made much more efficient through the use of ring-LWE.
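The distinguishing task described in the abstract can be sketched in a few lines. All parameters below are illustrative toys, not the paper's (real LWE uses much larger dimension and modulus, and Gaussian rather than bounded-uniform noise):

```python
import random

# Toy parameters (hypothetical, for illustration only).
n, q, noise_bound = 8, 97, 2

def lwe_sample(secret, rng):
    """One LWE sample: (a, <a, s> + e mod q) with small noise e."""
    a = [rng.randrange(q) for _ in range(n)]
    e = rng.randint(-noise_bound, noise_bound)
    b = (sum(ai * si for ai, si in zip(a, secret)) + e) % q
    return a, b

def uniform_sample(rng):
    """A truly uniform pair (a, b); LWE asks to tell these apart."""
    return [rng.randrange(q) for _ in range(n)], rng.randrange(q)

rng = random.Random(0)
s = [rng.randrange(q) for _ in range(n)]
a, b = lwe_sample(s, rng)

# With knowledge of s, the residual b - <a, s> mod q is small;
# without s, LWE samples and uniform samples look alike.
residual = (b - sum(ai * si for ai, si in zip(a, s))) % q
print(min(residual, q - residual) <= noise_bound)  # True
```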
Fully homomorphic encryption without modulus switching from classical GapSVP. In Advances in Cryptology – CRYPTO 2012, volume 7417 of LNCS.
Abstract

Cited by 70 (5 self)
We present a new tensoring technique for LWE-based fully homomorphic encryption. While in all previous works, the ciphertext noise grows quadratically (B → B² · poly(n)) with every multiplication (before “refreshing”), our noise only grows linearly (B → B · poly(n)). We use this technique to construct a scale-invariant fully homomorphic encryption scheme, whose properties only depend on the ratio between the modulus q and the initial noise level B, and not on their absolute values. Our scheme has a number of advantages over previous candidates: it uses the same modulus throughout the evaluation process (no need for “modulus switching”), and this modulus can take arbitrary form. In addition, security can be classically reduced from the worst-case hardness of the GapSVP problem (with quasi-polynomial approximation factor), whereas previous constructions could only exhibit a quantum reduction from GapSVP. Fully homomorphic encryption has been the focus of extensive study since the first candidate scheme was introduced by Gentry [Gen09b]. In a nutshell, fully homomorphic encryption allows one to evaluate arbitrary functions over encrypted data.
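A toy calculation makes the gap between the two noise-growth rates concrete. The base noise, depth, and the stand-in for the poly(n) factor below are arbitrary illustrative choices, not the scheme's actual parameters:

```python
# Compare noise after `levels` multiplications when each multiplication
# squares the noise (B -> B^2 * poly(n), pre-2012 schemes) versus only
# scaling it (B -> B * poly(n), the scale-invariant scheme).
n, B0, levels = 2**10, 4, 5
poly_n = n  # illustrative stand-in for the poly(n) factor

quadratic, linear = B0, B0
for _ in range(levels):
    quadratic = quadratic**2 * poly_n   # quadratic growth
    linear = linear * poly_n            # linear growth

# Quadratic growth is doubly exponential in the depth, so the modulus q
# must be enormous; linear growth keeps log(noise) linear in the depth.
print(quadratic.bit_length(), linear.bit_length())
```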
Lattice Signatures Without Trapdoors
Abstract

Cited by 43 (9 self)
We provide an alternative method for constructing lattice-based digital signatures which does not use the “hash-and-sign” methodology of Gentry, Peikert, and Vaikuntanathan (STOC 2008). Our resulting signature scheme is secure, in the random oracle model, based on the worst-case hardness of the Õ(n^1.5)-SIVP problem in general lattices. The secret key, public key, and the signature size of our scheme are smaller than in all previous instantiations of the hash-and-sign signature, and our signing algorithm is also quite simple, requiring just a few matrix-vector multiplications and rejection samplings. We then also show that by slightly changing the parameters, one can get even more efficient signatures that are based on the hardness of the Learning With Errors problem. Our construction naturally transfers to the ring setting, where the size of the public and secret keys can be significantly shrunk, which results in the most practical to-date provably secure signature scheme based on lattices.
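The rejection-sampling step mentioned in the abstract can be illustrated with a one-dimensional toy using uniform masks and made-up bounds; the actual scheme works over lattices with Gaussian sampling:

```python
import random

# Toy rejection sampling: release the masked value z = y + c*s only if
# it lands in a range reachable for EVERY possible secret, so the
# distribution of released values leaks nothing about s.
Y_BOUND, MAX_CS = 1000, 10   # |y| <= Y_BOUND, |c*s| <= MAX_CS (made up)

def sign_once(cs, rng):
    y = rng.randint(-Y_BOUND, Y_BOUND)      # fresh random mask
    z = y + cs
    if abs(z) <= Y_BOUND - MAX_CS:          # rejection condition
        return z
    return None                             # reject; retry with new y

rng = random.Random(1)
zs = [z for cs in (-10, 0, 10)
      for z in (sign_once(cs, rng) for _ in range(2000))
      if z is not None]

# Every accepted z lies in the same secret-independent range.
print(all(abs(z) <= Y_BOUND - MAX_CS for z in zs))  # True
```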
Pseudorandom Functions and Lattices, 2011.
Abstract

Cited by 35 (10 self)
We give direct constructions of pseudorandom function (PRF) families based on conjectured hard lattice problems and learning problems. Our constructions are asymptotically efficient and highly parallelizable in a practical sense, i.e., they can be computed by simple, relatively small low-depth arithmetic or boolean circuits (e.g., in NC^1 or even TC^0). In addition, they are the first low-depth PRFs that have no known attack by efficient quantum algorithms. Central to our results is a new “derandomization” technique for the learning with errors (LWE) problem which, in effect, generates the error terms deterministically. The past few years have seen significant progress in constructing public-key, identity-based, and homomorphic cryptographic schemes using lattices, e.g., [Reg05, PW08, GPV08, Gen09, CHKP10, ABB10a] and many more. Part of their appeal stems from provable worst-case hardness guarantees (starting with the seminal work of Ajtai [Ajt96]), good asymptotic efficiency and parallelism, and apparent resistance to quantum attacks.
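One way error terms can be generated deterministically is by rounding away the low-order bits of the inner product, as in the learning-with-rounding approach; the sketch below uses toy parameters and is not the paper's exact construction:

```python
import random

# Toy parameters (illustrative only): round from mod-q to mod-p.
n, q, p = 8, 2**16, 2**8

def lwr_sample(secret, rng):
    """Deterministic-error sample: the 'noise' is the rounding error."""
    a = [rng.randrange(q) for _ in range(n)]
    inner = sum(ai * si for ai, si in zip(a, secret)) % q
    b = (inner * p + q // 2) // q % p   # round(inner * p / q) mod p
    return a, b

rng = random.Random(2)
s = [rng.randrange(q) for _ in range(n)]
a, b = lwr_sample(s, rng)

# The implicit error is small and fully determined by a and s:
inner = sum(ai * si for ai, si in zip(a, s)) % q
err = (b * q // p - inner) % q
print(min(err, q - err) <= q // p)  # True
```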
Practical lattice-based cryptography: A signature scheme for embedded systems. CHES 2012, LNCS, 2012.
Abstract

Cited by 28 (6 self)
Nearly all of the currently used and well-tested signature schemes (e.g. RSA or DSA) are based either on the factoring assumption or the presumed intractability of the discrete logarithm problem. Further algorithmic advances on these problems may lead to the unpleasant situation that a large number of schemes have to be replaced with alternatives. In this work we present such an alternative: a signature scheme whose security is derived from the hardness of lattice problems. It is based on recent theoretical advances in lattice-based cryptography and is highly optimized for practicality and use in embedded systems. The public and secret keys are roughly 12000 and 2000 bits long, while the signature size is approximately 9000 bits for a security level of around 100 bits. The implementation results on reconfigurable hardware (Spartan/Virtex 6) are very promising and show that the scheme is scalable, has low area consumption, and even outperforms some classical schemes.
Fully Key-Homomorphic Encryption, Arithmetic Circuit ABE, and Compact Garbled Circuits, 2014.
Abstract

Cited by 19 (2 self)
We construct the first (key-policy) attribute-based encryption (ABE) system with short secret keys: the size of keys in our system depends only on the depth of the policy circuit, not its size. Our constructions extend naturally to arithmetic circuits with arbitrary fan-in gates, thereby further reducing the circuit depth. Building on this ABE system we obtain the first reusable circuit garbling scheme that produces garbled circuits whose size is the same as the original circuit plus an additive poly(λ, d) bits, where λ is the security parameter and d is the circuit depth. Save the additive poly(λ, d) factor, this is the best one could hope for. All previous constructions incurred a multiplicative poly(λ) blowup. As another application, we obtain (single-key secure) functional encryption with short secret keys. We construct our attribute-based system using a mechanism we call fully key-homomorphic encryption, which is a public-key system that lets anyone translate a ciphertext encrypted under a public key x into a ciphertext encrypted under the public key (f(x), f) of the same plaintext, for any efficiently computable f. We show that this mechanism gives an ABE with short keys. Security is based on the subexponential hardness of the learning with errors problem. We also present a second (key-policy) ABE, using multilinear maps, with short ciphertexts: an encryption to an attribute vector x is the size of x plus poly(λ, d) additional bits. This gives a reusable circuit garbling scheme where the size of the garbled input is short, namely the same as that of the original input plus a poly(λ, d) factor.
Faster Gaussian lattice sampling using lazy floating-point arithmetic. Full version of the ASIACRYPT ’12 article, 2013.
Abstract

Cited by 17 (1 self)
Many lattice cryptographic primitives require an efficient algorithm to sample lattice points according to some Gaussian distribution. All algorithms known for this task require long-integer arithmetic at some point, which may be problematic in practice. We study how much lattice sampling can be sped up using floating-point arithmetic. First, we show that a direct floating-point implementation of these algorithms does not give any asymptotic speedup: the floating-point precision needs to be greater than the security parameter, leading to an overall complexity Õ(n^3), where n is the lattice dimension. However, we introduce a laziness technique that can significantly speed up these algorithms. Namely, in certain cases such as NTRUSign lattices, laziness can decrease the complexity to Õ(n^2) or even Õ(n). Furthermore, our analysis is practical: for typical parameters, most of the floating-point operations only require the double-precision IEEE standard.
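The laziness idea (fast low-precision arithmetic, with an exact fallback only when the result is too close to call) can be illustrated on rational comparisons; the threshold and workload below are illustrative choices, not taken from the paper:

```python
from fractions import Fraction
import random

EPS = 1e-9  # illustrative closeness threshold for the fast path

def lazy_less_than(num_a, den_a, num_b, den_b):
    """Compare a = num_a/den_a with b = num_b/den_b lazily."""
    approx = num_a / den_a - num_b / den_b   # fast: double precision
    if abs(approx) > EPS:
        return approx < 0
    # Slow path, taken only near ties: exact rational arithmetic.
    return Fraction(num_a, den_a) < Fraction(num_b, den_b)

rng = random.Random(4)
ok = all(
    lazy_less_than(a, b, c, d) == (Fraction(a, b) < Fraction(c, d))
    for a, b, c, d in ((rng.randint(1, 10**6), rng.randint(1, 10**6),
                        rng.randint(1, 10**6), rng.randint(1, 10**6))
                       for _ in range(1000))
)
print(ok)  # True: the fast path never changes the answer
```

The speedup comes from the slow path being exercised only on a small fraction of inputs, which mirrors how most floating-point operations in the paper's setting need only double precision.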
Hardness of SIS and LWE with Small Parameters, 2013.
Abstract

Cited by 16 (4 self)
The Short Integer Solution (SIS) and Learning With Errors (LWE) problems are the foundations for countless applications in lattice-based cryptography, and are provably as hard as approximate lattice problems in the worst case. An important question from both a practical and theoretical perspective is how small their parameters can be made while preserving their hardness. We prove two main results on SIS and LWE with small parameters. For SIS, we show that the problem retains its hardness for moduli q ≥ β · n^δ for any constant δ > 0, where β is the bound on the Euclidean norm of the solution. This improves upon prior results, which required q ≥ β · √(n log n), and is essentially optimal, since the problem is trivially easy for q ≤ β. For LWE, we show that it remains hard even when the errors are small (e.g., uniformly random from {0, 1}), provided that the number of samples is small enough (e.g., linear in the dimension n of the LWE secret). Prior results required the errors to have magnitude at least √n and to come from a Gaussian-like distribution.
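The abstract's remark that SIS is trivially easy for q ≤ β can be checked directly: the vector (q, 0, ..., 0) is always a nonzero integer solution of norm exactly q. A toy verification, with arbitrary illustrative parameters:

```python
import random

# Tiny illustrative SIS instance: find nonzero z with A*z = 0 (mod q)
# and small norm. When the norm bound beta >= q, this is trivial.
n, m, q = 4, 8, 13
rng = random.Random(5)
A = [[rng.randrange(q) for _ in range(m)] for _ in range(n)]

z = [q] + [0] * (m - 1)   # nonzero over the integers, norm exactly q
Az = [sum(A[i][j] * z[j] for j in range(m)) % q for i in range(n)]
print(Az == [0] * n)  # True: a valid SIS solution whenever beta >= q
```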
Sampling from discrete Gaussians for lattice-based cryptography on a constrained device. Appl. Algebra Eng. Commun. Comput.
Abstract

Cited by 15 (0 self)
Modern lattice-based public-key cryptosystems require sampling from discrete Gaussian (normal) distributions. The paper surveys algorithms to implement such sampling efficiently, with particular focus on the case of constrained devices with small on-board storage and without access to large numbers of external random bits. We review lattice-based encryption schemes and signature schemes and their requirements for sampling from discrete Gaussians. Finally, we make some remarks on challenges and potential solutions for practical lattice-based cryptography.
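A minimal rejection sampler for a centered discrete Gaussian over the integers makes a useful baseline sketch; the sigma value and tail cut below are illustrative, and the survey's algorithms are far more storage-efficient than this:

```python
import math
import random

def discrete_gaussian(sigma, rng, tail=10):
    """Sample x in Z with probability proportional to exp(-x^2 / (2*sigma^2)).

    Rejection sampling: propose uniformly on a tail-cut interval and
    accept with the (unnormalized) Gaussian probability.
    """
    bound = int(math.ceil(tail * sigma))
    while True:
        x = rng.randint(-bound, bound)
        if rng.random() < math.exp(-x * x / (2 * sigma * sigma)):
            return x

rng = random.Random(3)
sigma = 3.2  # illustrative parameter
samples = [discrete_gaussian(sigma, rng) for _ in range(5000)]
mean = sum(samples) / len(samples)
print(abs(mean) < 0.5)  # distribution is centered at 0
```

This sampler wastes many random bits per output, which is exactly the cost the survey's constrained-device algorithms aim to avoid.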
Lattice-Based FHE as Secure as PKE
Abstract

Cited by 13 (1 self)
We show that (leveled) fully homomorphic encryption (FHE) can be based on the hardness of Õ(n^(1.5+ε))-approximation for lattice problems (such as GapSVP) under quantum reductions for any ε > 0 (or Õ(n^(2+ε))-approximation under classical reductions). This matches the best known hardness for “regular” (non-homomorphic) lattice-based public-key encryption up to the ε factor. A number of previous methods had hit a roadblock at quasi-polynomial approximation. (As usual, a circular security assumption can be used to achieve a non-leveled FHE scheme.) Our approach consists of three main ideas: noise-bounded sequential evaluation of high fan-in operations; circuit sequentialization using Barrington’s Theorem; and finally, successive dimension-modulus reduction.