Results 1-10 of 42
On ideal lattices and learning with errors over rings
In Proc. of EUROCRYPT, volume 6110 of LNCS, 2010
Cited by 126 (18 self)
The “learning with errors” (LWE) problem is to distinguish random linear equations, which have been perturbed by a small amount of noise, from truly uniform ones. The problem has been shown to be as hard as worst-case lattice problems, and in recent years it has served as the foundation for a plethora of cryptographic applications. Unfortunately, these applications are rather inefficient due to an inherent quadratic overhead in the use of LWE. A main open question was whether LWE and its applications could be made truly efficient by exploiting extra algebraic structure, as was done for lattice-based hash functions (and related primitives). We resolve this question in the affirmative by introducing an algebraic variant of LWE called ring-LWE, and proving that it too enjoys very strong hardness guarantees. Specifically, we show that the ring-LWE distribution is pseudorandom, assuming that worst-case problems on ideal lattices are hard for polynomial-time quantum algorithms. Applications include the first truly practical lattice-based public-key cryptosystem with an efficient security reduction; moreover, many of the other applications of LWE can be made much more efficient through the use of ring-LWE.
Improvement and Efficient Implementation of a Lattice-based Signature Scheme
, 2013
Cited by 12 (5 self)
Lattice-based signature schemes constitute an interesting alternative to RSA and discrete-logarithm-based systems, which may become insecure in the future, for example due to the possibility of quantum attacks. A particularly interesting scheme in this context is the GPV signature scheme [GPV08] combined with the trapdoor construction from Micciancio and Peikert [MP12], as it admits strong security proofs and is believed to be very efficient in practice. This paper confirms this belief, shows how to improve the GPV scheme in terms of space and running time, and presents an implementation of the optimized scheme. A ring variant of this scheme is also introduced, which leads to a more efficient construction. Experimental results show that GPV with the new trapdoor construction is competitive with the signature schemes that are currently used in practice.
Key Homomorphic PRFs and Their Applications
, 2014
Cited by 10 (1 self)
A pseudorandom function F : K × X → Y is said to be key homomorphic if given F(k1, x) and F(k2, x) there is an efficient algorithm to compute F(k1 ⊕ k2, x), where ⊕ denotes a group operation on k1 and k2 such as xor. Key homomorphic PRFs are natural objects to study and have a number of interesting applications: they can simplify the process of rotating encryption keys for encrypted data stored in the cloud, they give one-round distributed PRFs, and they can be the basis of a symmetric-key proxy re-encryption scheme. Until now, all known constructions for key homomorphic PRFs were only proven secure in the random oracle model. We construct the first provably secure key homomorphic PRFs in the standard model. Our main construction is based on the learning with errors (LWE) problem. In the proof of security we need a variant of LWE where query points are non-uniform, and we show that this variant is as hard as standard LWE. We also construct key homomorphic PRFs based on the decision linear assumption in groups with an ℓ-linear map. We leave as an open problem the question of constructing standard-model key homomorphic PRFs from more general assumptions.
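The key-homomorphic property is easy to see in the classic random-oracle construction F_k(x) = H(x)^k in a group where DDH-type problems are assumed hard; this is the kind of ROM construction the abstract contrasts with its standard-model ones. The sketch below uses a Mersenne prime and SHA-256 as a stand-in random oracle; all parameter choices are illustrative.

```python
import hashlib

# Sketch of a random-oracle key-homomorphic PRF F_k(x) = H(x)^k.
# p and the hash-to-group map are illustrative assumptions.
p = 2**127 - 1                # a Mersenne prime
order = (p - 1) // 2          # order of the quadratic-residue subgroup

def H(x: bytes) -> int:
    """Hash into the quadratic-residue subgroup (stand-in random oracle)."""
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % p
    return pow(h, 2, p)       # squaring lands in the QR subgroup

def F(k: int, x: bytes) -> int:
    return pow(H(x), k, p)

k1, k2, x = 123456789, 987654321, b"hello"
# Key homomorphism: exponents add, so F(k1 + k2, x) = F(k1, x) * F(k2, x).
assert F((k1 + k2) % order, x) == (F(k1, x) * F(k2, x)) % p
```

Here the group operation ⊕ on keys is addition modulo the subgroup order; the combined-key output is computable from the two individual outputs without knowing either key, which is exactly what key rotation and distributed PRF applications exploit.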
Faster bootstrapping with polynomial error
, 2014
Cited by 10 (2 self)
Bootstrapping is a technique, originally due to Gentry (STOC 2009), for “refreshing” ciphertexts of a somewhat homomorphic encryption scheme so that they can support further homomorphic operations. To date, bootstrapping remains the only known way of obtaining fully homomorphic encryption for arbitrary unbounded computations. Over the past few years, several works have dramatically improved the efficiency of bootstrapping and the hardness assumptions needed to implement it. Recently, Brakerski and Vaikuntanathan (ITCS 2014) reached the major milestone of a bootstrapping algorithm based on Learning With Errors for polynomial approximation factors. Their method uses the Gentry-Sahai-Waters (GSW) cryptosystem (CRYPTO 2013) in conjunction with Barrington’s “circuit sequentialization” theorem (STOC 1986). This approach, however, results in very large polynomial runtimes and approximation factors. (The approximation factors can be improved, but at even greater costs in runtime and space.) In this work we give a new bootstrapping algorithm whose runtime and associated approximation factor are both small polynomials. Unlike most previous methods, ours implements an elementary and efficient arithmetic procedure, thereby avoiding the inefficiencies inherent to the use of boolean circuits.
New and improved key-homomorphic pseudorandom functions. Cryptology ePrint Archive, Report 2014/074
, 2014
Cited by 8 (3 self)
A key-homomorphic pseudorandom function (PRF) family {F_s : D → R} allows one to efficiently compute the value F_{s+t}(x) given F_s(x) and F_t(x). Such functions have many applications, such as distributing the operation of a key-distribution center and updatable symmetric encryption. The only known construction of key-homomorphic PRFs without random oracles, due to Boneh et al. (CRYPTO 2013), is based on the learning with errors (LWE) problem and hence on worst-case lattice problems. However, the security proof relies on a very strong LWE assumption (i.e., very large approximation factors), and hence has quite inefficient parameter sizes and runtimes. In this work we give new constructions of key-homomorphic PRFs that are based on much weaker LWE assumptions, are much more efficient in time and space, and are still highly parallel. More specifically, we improve the LWE approximation factor from exponential in the input length to exponential in its logarithm (or less). For input length λ and 2^λ security against known lattice algorithms, we improve the key size from λ^3 to λ bits, the public parameters from λ^6 to λ^2 bits, and the runtime from λ^7 to λ^{ω+1} bit operations (ignoring polylogarithmic factors in λ), where ω ∈ [2, 2.373] is the exponent of matrix multiplication. In addition, we give even more efficient ring-LWE-based constructions whose key sizes, public parameters, and incremental runtimes on consecutive inputs are all quasilinear Õ(λ), which is optimal up to polylogarithmic factors. To our knowledge, these are the first low-depth PRFs (whether key homomorphic or not) enjoying any of these efficiency measures together with non-trivial proofs of 2^λ security under any conventional assumption.
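A detail worth seeing concretely: LWE-based PRFs of this kind output a *rounded* mod-q value, and rounding is only approximately additive, so the resulting PRFs are "almost" key homomorphic, with the two sides agreeing up to a ±1 error per coordinate. The sketch below demonstrates just this rounding property (with illustrative parameters, not any scheme's actual ones).

```python
import random

# Rounding Z_q -> Z_p, the map underlying LWE-based key-homomorphic
# PRFs, is only almost additive: round(x + y) and round(x) + round(y)
# differ by at most 1 (mod p). Parameters are illustrative.
q, p = 2**16, 2**8

def round_qp(x: int) -> int:
    """Scale x by p/q and round to the nearest integer, mod p."""
    return ((x * p + q // 2) // q) % p

random.seed(1)
for _ in range(1000):
    xs, xt = random.randrange(q), random.randrange(q)
    lhs = round_qp((xs + xt) % q)
    rhs = (round_qp(xs) + round_qp(xt)) % p
    # Additive error lies in {-1, 0, 1} modulo p.
    assert (lhs - rhs) % p in (0, 1, p - 1)
```

In the PRF setting, x_s and x_t play the role of the unrounded values under keys s and t; the bounded rounding error is why applications such as updatable encryption must tolerate (or correct) a small per-coordinate discrepancy.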
Lattice-Based Group Signatures with Logarithmic Signature Size
, 2013
Cited by 8 (1 self)
Group signatures are cryptographic primitives where users can anonymously sign messages in the name of a population they belong to. Gordon et al. (Asiacrypt 2010) suggested the first realization of group signatures based on lattice assumptions in the random oracle model. A significant drawback of their scheme is its linear signature size in the cardinality N of the group. A recent extension proposed by Camenisch et al. (SCN 2012) suffers from the same overhead. In this paper, we describe the first lattice-based group signature schemes where the signature and public key sizes are essentially logarithmic in N (for any fixed security level). Our basic construction only satisfies a relaxed definition of anonymity (just like the Gordon et al. system) but readily extends into a fully anonymous group signature (i.e., one that resists adversaries equipped with a signature opening oracle). We prove the security of our schemes in the random oracle model under the SIS and LWE assumptions.
Worst-Case to Average-Case Reductions for Module Lattices
Cited by 7 (1 self)
Most lattice-based cryptographic schemes are built upon the assumed hardness of the Short Integer Solution (SIS) and Learning With Errors (LWE) problems. Their efficiency can be drastically improved by switching the hardness assumptions to the more compact Ring-SIS and Ring-LWE problems. However, this change of hardness assumptions comes along with a possible security weakening: SIS and LWE are known to be at least as hard as standard (worst-case) problems on Euclidean lattices, whereas Ring-SIS and Ring-LWE are only known to be as hard as their restrictions to special classes of ideal lattices, corresponding to ideals of some polynomial rings. In this work, we define the Module-SIS and Module-LWE problems, which bridge SIS with Ring-SIS, and LWE with Ring-LWE, respectively. We prove that these average-case problems are at least as hard as standard lattice problems restricted to module lattices (which themselves generalize arbitrary and ideal lattices). As these new problems enlarge the toolbox of the lattice-based cryptographer, they could prove useful for designing new schemes. Importantly, the worst-case to average-case reductions for the module problems are (qualitatively) sharp, in the sense that there exist converse reductions. This property is not known to hold in the context of Ring-SIS/Ring-LWE: ideal lattice problems could turn out to be easy without impacting the hardness of Ring-SIS/Ring-LWE.
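A Module-LWE sample replaces the single ring element of Ring-LWE with a short vector of ring elements, so the rank d interpolates between plain LWE (degree-1 ring, large d) and Ring-LWE (d = 1, large degree). The sketch below builds one toy sample over Z_q[x]/(x^n + 1); every parameter is illustrative and far too small to be secure.

```python
import random

# Toy Module-LWE sample over R_q = Z_q[x]/(x^n + 1), rank d.
# All parameters are illustrative, not secure.
n, q, d = 8, 97, 3

def negacyclic_mul(a, b):
    """Multiply two degree-<n polynomials modulo x^n + 1 and q."""
    c = [0] * n
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                c[k] = (c[k] + a[i] * b[j]) % q
            else:                     # x^n = -1 wraps with a sign flip
                c[k - n] = (c[k - n] - a[i] * b[j]) % q
    return c

random.seed(0)
s = [[random.randrange(q) for _ in range(n)] for _ in range(d)]  # secret in R_q^d

def mlwe_sample():
    a = [[random.randrange(q) for _ in range(n)] for _ in range(d)]
    e = [random.choice([-2, -1, 0, 1, 2]) % q for _ in range(n)]  # small noise
    b = list(e)
    for i in range(d):                # b = <a, s> + e over R_q
        prod = negacyclic_mul(a[i], s[i])
        b = [(x + y) % q for x, y in zip(b, prod)]
    return a, b

a, b = mlwe_sample()
# Knowing s, the residual b - <a, s> is exactly the small error e.
residual = list(b)
for i in range(d):
    prod = negacyclic_mul(a[i], s[i])
    residual = [(x - y) % q for x, y in zip(residual, prod)]
assert all(min(r, q - r) <= 2 for r in residual)
```

Setting d = 1 in this sketch gives a Ring-LWE sample, and n = 1 collapses it to ordinary LWE in dimension d, which is the bridging role the abstract describes.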
Improved security proofs in lattice-based cryptography: using the Rényi divergence rather than the statistical distance
Cited by 6 (1 self)
The Rényi divergence is a means of measuring the closeness of two distributions. We show that it can often be used as an alternative to the statistical distance in security proofs for lattice-based cryptography. Using the Rényi divergence is particularly suited to security proofs of primitives in which the attacker is required to solve a search problem (e.g., forging a signature). We show that it may also be used in the case of distinguishing problems (e.g., semantic security of encryption schemes) when they enjoy a public sampleability property. The techniques lead to security proofs for schemes with smaller parameters.
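For finite distributions the Rényi divergence of order a > 1 is R_a(P‖Q) = (Σ_x P(x)^a / Q(x)^(a-1))^(1/(a-1)). A small numeric sketch (the example distributions are arbitrary):

```python
import math

# Rényi divergence of order a between finite distributions given as
# probability lists (Q must be positive wherever P is).
def renyi(P, Q, a=2.0):
    assert a > 1
    total = sum(p**a / q**(a - 1) for p, q in zip(P, Q) if p > 0)
    return total ** (1.0 / (a - 1))

P = [0.5, 0.3, 0.2]
Q = [0.4, 0.4, 0.2]
assert math.isclose(renyi(P, P), 1.0)   # identical distributions give 1
assert renyi(P, Q) > 1.0                # R_a is always at least 1
```

Unlike the statistical distance, which is additive, the Rényi divergence is multiplicative over independent samples, which is one reason it can yield tighter parameter choices in search-problem security proofs.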
Lattice Cryptography for the Internet
, 2014
Cited by 6 (1 self)
In recent years, lattice-based cryptography has been recognized for its many attractive properties, such as strong provable security guarantees and apparent resistance to quantum attacks, flexibility for realizing powerful tools like fully homomorphic encryption, and high asymptotic efficiency. Indeed, several works have demonstrated that for basic tasks like encryption and authentication, lattice-based primitives can have performance competitive with (or even surpassing) those based on classical mechanisms like RSA or Diffie-Hellman. However, there has still been relatively little work on developing lattice cryptography for deployment in real-world cryptosystems and protocols. In this work we take a step toward that goal, by giving efficient and practical lattice-based protocols for key transport, encryption, and authenticated key exchange that are suitable as “drop-in” components for proposed Internet standards and other open protocols. The security of all our proposals is provably based (sometimes in the random-oracle model) on the well-studied “learning with errors over rings” problem, and hence on the conjectured worst-case hardness of problems on ideal lattices (against quantum algorithms). One of our main technical innovations (which may be of independent interest) is a simple, low-bandwidth reconciliation technique that allows two parties who “approximately agree” on a secret value to reach exact agreement, a setting common to essentially all lattice-based encryption schemes. Our technique reduces the ciphertext length of prior (already compact) encryption schemes nearly twofold, at essentially no cost.
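The reconciliation idea can be sketched with rounding and cross-rounding over Z_q for q divisible by 8 (the paper also handles odd q via randomized doubling, omitted here; this is a simplified illustration, not the exact protocol specification). Alice holds v, Bob holds w with |w − v| < q/8; Alice publishes a single hint bit, and both derive the same key bit.

```python
# Toy reconciliation: one public hint bit turns approximate agreement
# on v (within q/8) into exact agreement on a key bit. q is illustrative.
q = 1024                       # modulus divisible by 8

def round_bit(v):
    """Alice's key bit: floor(2v/q + 1/2) mod 2."""
    return ((2 * v + q // 2) // q) % 2

def cross_bit(v):
    """Alice's one-bit public hint: floor(4v/q) mod 2."""
    return (4 * v // q) % 2

def rec(w, c):
    """Bob recovers round_bit(v) from his nearby w and the hint c."""
    if c == 0:                 # v lay in [0, q/4) or [q/2, 3q/4)
        return 0 if (w + q // 8) % q < q // 2 else 1
    else:                      # v lay in [q/4, q/2) or [3q/4, q)
        return 0 if (w + 3 * q // 8) % q < q // 2 else 1

# Exact agreement whenever Bob's value is within q/8 of Alice's.
for v in range(q):
    for e in range(-q // 8, q // 8):
        assert rec((v + e) % q, cross_bit(v)) == round_bit(v)
```

The hint narrows v to two opposite quarter-arcs of Z_q without revealing which, so Bob's approximation suffices to pick the right one; sending one bit instead of a full second ciphertext component is the source of the nearly twofold ciphertext savings mentioned above.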