Learning with errors (LWE) is the computational problem of inferring a linear n-ary function f over a finite ring from given samples y_i = f(\mathbf{x}_i), some of which may be erroneous. The LWE problem is conjectured to be hard to solve, and thus to be useful in cryptography.

More precisely, the LWE problem is defined as follows. Let \mathbb{Z}_q denote the ring of integers modulo q and let \mathbb{Z}_q^n denote the set of n-vectors over \mathbb{Z}_q. There exists a certain unknown linear function f:\mathbb{Z}_q^n \rightarrow \mathbb{Z}_q, and the input to the LWE problem is a sample of pairs (\mathbf{x},y), where \mathbf{x}\in \mathbb{Z}_q^n and y \in \mathbb{Z}_q, such that with high probability y=f(\mathbf{x}). Furthermore, the deviation from equality follows some known noise model. The problem calls for finding the function f, or some close approximation thereof, with high probability.

The LWE problem was introduced by Oded Regev in 2005 (who won the 2018 Gödel Prize for this work); it is a generalization of the parity learning problem. Regev showed that the LWE problem is as hard to solve as several worst-case lattice problems. Subsequently, the LWE problem has been used as a hardness assumption to create public-key cryptosystems (Oded Regev, "On lattices, learning with errors, random linear codes, and cryptography," in Proceedings of the thirty-seventh annual ACM symposium on Theory of computing, Baltimore, MD, USA: ACM, 2005, 84–93, http://portal.acm.org/citation.cfm?id=1060590.1060603; Chris Peikert, "Public-key cryptosystems from the worst-case shortest vector problem: extended abstract," in Proceedings of the 41st annual ACM symposium on Theory of computing, Bethesda, MD, USA: ACM, 2009, 333–342, http://portal.acm.org/citation.cfm?id=1536414.1536461), such as the ring learning with errors key exchange by Peikert.


Definition

Denote by \mathbb{T}=\mathbb{R}/\mathbb{Z} the additive group of reals modulo one. Let \mathbf{s} \in \mathbb{Z}_q^n be a fixed vector. Let \phi be a fixed probability distribution over \mathbb{T}. Denote by A_{\mathbf{s},\phi} the distribution on \mathbb{Z}_q^n \times \mathbb{T} obtained as follows.

1. Pick a vector \mathbf{a}\in \mathbb{Z}_q^n from the uniform distribution over \mathbb{Z}_q^n.
2. Pick a number e\in\mathbb{T} from the distribution \phi.
3. Evaluate t=\langle \mathbf{a},\mathbf{s} \rangle /q + e, where \textstyle \langle \mathbf{a},\mathbf{s} \rangle = \sum_{i=1}^n a_i s_i is the standard inner product in \mathbb{Z}_q^n, the division is done in the field of reals (or more formally, this "division by q" is notation for the group homomorphism \mathbb{Z}_q \longrightarrow \mathbb{T} mapping 1 \in \mathbb{Z}_q to 1/q + \mathbb{Z} \in \mathbb{T}), and the final addition is in \mathbb{T}.
4. Output the pair (\mathbf{a},t).

The learning with errors problem \mathrm{LWE}_{q,\phi} is to find \mathbf{s} \in \mathbb{Z}_q^n, given access to polynomially many samples of choice from A_{\mathbf{s},\phi}.

For every \alpha > 0, denote by D_\alpha the one-dimensional Gaussian distribution with zero mean and variance \alpha^2/(2\pi), that is, the density function is D_\alpha(x)=\rho_\alpha(x)/\alpha where \rho_\alpha(x)=e^{-\pi(x/\alpha)^2}, and let \Psi_\alpha be the distribution on \mathbb{T} obtained by considering D_\alpha modulo one. The version of LWE considered in most of the results would be \mathrm{LWE}_{q,\Psi_\alpha}.
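As an illustration, the sampling procedure above can be sketched in Python. This is a toy sketch with arbitrary, far-too-small parameters chosen only for demonstration; the error distribution \Psi_\alpha is approximated by drawing a continuous Gaussian and reducing modulo one.

```python
import math
import random

def lwe_samples(s, q, m, alpha, rng):
    """Draw m samples (a, t) from the LWE distribution A_{s,phi}:
    t = <a, s>/q + e (mod 1), where e is a Gaussian of standard
    deviation alpha/sqrt(2*pi) reduced modulo one (i.e. Psi_alpha)."""
    n = len(s)
    samples = []
    for _ in range(m):
        a = [rng.randrange(q) for _ in range(n)]   # uniform a in Z_q^n
        e = rng.gauss(0.0, alpha / math.sqrt(2 * math.pi))
        t = (sum(ai * si for ai, si in zip(a, s)) / q + e) % 1.0
        samples.append((a, t))
    return samples

# toy parameters (far too small to be secure)
q, alpha = 97, 0.005
s = [11, 25, 3, 60]                                # the unknown secret
samples = lwe_samples(s, q, 8, alpha, random.Random(0))
```

Each output t lies within a small error of \langle \mathbf{a},\mathbf{s} \rangle/q on the torus, which is what makes the samples "noisy linear equations" in the secret.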


Decision version

The LWE problem described above is the ''search'' version of the problem. In the ''decision'' version (DLWE), the goal is to distinguish between noisy inner products and uniformly random samples from \mathbb{Z}_q^n \times \mathbb{T} (practically, some discretized version of it). Regev showed that the ''decision'' and ''search'' versions are equivalent when q is a prime bounded by some polynomial in n.


Solving decision assuming search

Intuitively, if we have a procedure for the search problem, the decision version can be solved easily: just feed the input samples for the decision problem to the solver for the search problem. Denote the given samples by \{(\mathbf{a}_i, b_i)\} \subset \mathbb{Z}^n_q \times \mathbb{T}. If the solver returns a candidate \mathbf{s}, then for all i, calculate \{b_i - \langle \mathbf{a}_i, \mathbf{s} \rangle /q\}. If the samples are from an LWE distribution, then the results of this calculation will be distributed according to \chi; but if the samples are uniformly random, these quantities will be distributed uniformly as well.
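A sketch of this check in Python, assuming a hypothetical search solver has already returned a candidate secret: the residuals b_i − ⟨a_i, s⟩/q (mod 1) cluster near 0 for LWE samples but spread uniformly for random ones. All parameters and the threshold below are arbitrary toy choices.

```python
import random

rng = random.Random(1)
q, n, m = 97, 4, 40
s = [11, 25, 3, 60]      # candidate secret, as returned by a search solver

def residual(a, t, cand):
    """Distance of t - <a, cand>/q to 0 on the torus R/Z."""
    d = (t - sum(ai * ci for ai, ci in zip(a, cand)) / q) % 1.0
    return min(d, 1.0 - d)

def looks_like_lwe(samples, cand, tol=0.1):
    # Under A_{s,chi} almost every residual is near 0; under the
    # uniform distribution only about a 2*tol fraction of them are.
    return sum(residual(a, t, cand) < tol for a, t in samples) > len(samples) // 2

lwe_samples, uniform_samples = [], []
for _ in range(m):
    a = [rng.randrange(q) for _ in range(n)]
    t = (sum(ai * si for ai, si in zip(a, s)) / q + rng.gauss(0, 0.002)) % 1.0
    lwe_samples.append((a, t))
    uniform_samples.append(([rng.randrange(q) for _ in range(n)], rng.random()))
```

With the true secret, `looks_like_lwe` accepts the LWE samples and rejects the uniform ones, which is exactly the distinguishing advantage the reduction needs.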


Solving search assuming decision

For the other direction, given a solver for the decision problem, the search version can be solved as follows: recover \mathbf{s} one coordinate at a time. To obtain the first coordinate, s_1, make a guess k \in \mathbb{Z}_q, and do the following. Choose a number r \in \mathbb{Z}_q uniformly at random. Transform the given samples \{(\mathbf{a}_i, b_i)\} \subset \mathbb{Z}^n_q \times \mathbb{T} as follows: calculate \{(\mathbf{a}_i + (r,0,\ldots,0), b_i + (rk)/q)\}. Send the transformed samples to the decision solver. If the guess k was correct, the transformation takes the distribution A_{\mathbf{s},\chi} to itself; otherwise, since q is prime, it takes it to the uniform distribution. So, given a polynomial-time solver for the decision problem that errs with very small probability, since q is bounded by some polynomial in n, it only takes polynomial time to guess every possible value for k and use the solver to see which one is correct.

After obtaining s_1, we follow an analogous procedure for each other coordinate s_j. Namely, we transform our b_i samples the same way, and transform our \mathbf{a}_i samples by calculating \mathbf{a}_i + (0, \ldots, r, \ldots, 0), where the r is in the j^\text{th} coordinate.

Peikert showed that this reduction, with a small modification, works for any q that is a product of distinct, small (polynomial in n) primes. The main idea is that if q = q_1 q_2 \cdots q_t, then for each q_\ell, guess and check whether s_j is congruent to 0 \bmod q_\ell, and then use the Chinese remainder theorem to recover s_j.
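The coordinate-recovery loop can be sketched for toy parameters. The decision oracle below is only a stand-in (it brute-forces all q^n secrets, viable solely because the parameters are tiny); the point of the sketch is the transformation (a_i, b_i) → (a_i + r·e_j, b_i + rk/q), which preserves the LWE distribution exactly when the guess k equals the j-th coordinate of the secret.

```python
import random
from itertools import product

rng = random.Random(2)
q, n, m = 11, 2, 30          # q prime; tiny toy parameters
s = [7, 4]                   # secret to be recovered

def dist(x):
    x %= 1.0
    return min(x, 1.0 - x)   # distance to 0 on the torus

def gen_samples():
    out = []
    for _ in range(m):
        a = [rng.randrange(q) for _ in range(n)]
        t = (sum(ai * si for ai, si in zip(a, s)) / q + rng.gauss(0, 0.01)) % 1.0
        out.append((a, t))
    return out

def decision_oracle(samples):
    """Stand-in decision solver: brute-forces all q**n candidate secrets
    (only possible at toy sizes) and says 'LWE' if one fits every sample."""
    return any(
        all(dist(t - sum(ai * ci for ai, ci in zip(a, cand)) / q) < 0.1
            for a, t in samples)
        for cand in product(range(q), repeat=n))

def recover_coordinate(j):
    """Guess k = s_j; the map (a, t) -> (a + r*e_j, t + r*k/q) keeps the
    samples LWE-distributed iff the guess is right (q prime), and makes
    them look uniform otherwise."""
    for k in range(q):
        shifted = []
        for a, t in gen_samples():
            r = rng.randrange(q)
            a2 = list(a)
            a2[j] = (a2[j] + r) % q
            shifted.append((a2, (t + r * k / q) % 1.0))
        if decision_oracle(shifted):
            return k
    return None
```

In the real reduction the oracle is the assumed polynomial-time decision solver, and the q guesses per coordinate keep the whole procedure polynomial because q is polynomial in n.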


Average case hardness

Regev showed the random self-reducibility of the LWE and DLWE problems for arbitrary q and \chi. Given samples \{(\mathbf{a}_i, b_i)\} from A_{\mathbf{s},\chi}, it is easy to see that \{(\mathbf{a}_i, b_i + \langle \mathbf{a}_i, \mathbf{t} \rangle /q)\} are samples from A_{\mathbf{s}+\mathbf{t},\chi}.

So, suppose there were some set \mathcal{S} \subset \mathbb{Z}_q^n such that |\mathcal{S}|/|\mathbb{Z}_q^n| = 1/\operatorname{poly}(n), and for distributions A_{\mathbf{s}',\chi} with \mathbf{s}' \leftarrow \mathcal{S}, DLWE was easy. Then there would be some distinguisher \mathcal{A} who, given samples \{(\mathbf{a}_i, b_i)\}, could tell whether they were uniformly random or from A_{\mathbf{s}',\chi}. If we need to distinguish uniformly random samples from A_{\mathbf{s},\chi}, where \mathbf{s} is chosen uniformly at random from \mathbb{Z}_q^n, we could simply try different values \mathbf{t} sampled uniformly at random from \mathbb{Z}_q^n, calculate \{(\mathbf{a}_i, b_i + \langle \mathbf{a}_i, \mathbf{t} \rangle /q)\}, and feed these samples to \mathcal{A}. Since \mathcal{S} comprises a large fraction of \mathbb{Z}_q^n, with high probability, if we choose a polynomial number of values for \mathbf{t}, we will find one such that \mathbf{s} + \mathbf{t} \in \mathcal{S}, and \mathcal{A} will successfully distinguish the samples. Thus, no such \mathcal{S} can exist, meaning LWE and DLWE are (up to a polynomial factor) as hard in the average case as they are in the worst case.
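The re-randomization step can be illustrated directly: shifting each b_i by ⟨a_i, t⟩/q turns samples for secret s into samples for the secret s + t (mod q), without ever knowing s. A toy Python sketch with arbitrary demonstration parameters:

```python
import random

rng = random.Random(3)
q, n, m = 97, 3, 20
s = [5, 40, 88]                       # unknown "worst-case" secret

def lwe(secret):
    out = []
    for _ in range(m):
        a = [rng.randrange(q) for _ in range(n)]
        b = (sum(ai * si for ai, si in zip(a, secret)) / q
             + rng.gauss(0, 0.002)) % 1.0
        out.append((a, b))
    return out

# Re-randomize: (a_i, b_i) -> (a_i, b_i + <a_i, t>/q) are now valid
# samples for the shifted secret s + t mod q.
t = [rng.randrange(q) for _ in range(n)]
shifted = [(a, (b + sum(ai * ti for ai, ti in zip(a, t)) / q) % 1.0)
           for a, b in lwe(s)]
new_secret = [(si + ti) % q for si, ti in zip(s, t)]
```

Since t is uniform, s + t is uniform regardless of s, which is exactly why a distinguisher that works for a noticeable fraction of secrets yields one that works for every secret.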


Hardness results


Regev's result

For an n-dimensional lattice L, let the ''smoothing parameter'' \eta_\varepsilon(L) denote the smallest s such that \rho_{1/s}(L^*\setminus \{\mathbf{0}\}) \leq \varepsilon, where L^* is the dual of L and \rho_\alpha(x)=e^{-\pi(\|x\|/\alpha)^2} is extended to sets by summing over function values at each element in the set. Let D_{L,r} denote the discrete Gaussian distribution on L of width r for a lattice L and real r>0. The probability of each x \in L is proportional to \rho_r(x).

The ''discrete Gaussian sampling problem'' (DGS) is defined as follows: an instance of \mathrm{DGS}_\phi is given by an n-dimensional lattice L and a number r \geq \phi(L). The goal is to output a sample from D_{L,r}. Regev shows that there is a reduction from \operatorname{GapSVP}_{100\sqrt{n}\,\gamma(n)} to \mathrm{DGS}_{\sqrt{n}\,\gamma(n)/\lambda(L^*)} for any function \gamma(n) \ge 1.

Regev then shows that there exists an efficient quantum algorithm for \mathrm{DGS}_{\sqrt{2n}\,\eta_\varepsilon(L)/\alpha} given access to an oracle for \mathrm{LWE}_{q,\Psi_\alpha} for integer q and \alpha \in (0,1) such that \alpha q > 2\sqrt{n}. This implies the hardness of LWE. Although the proof of this assertion works for any q, for creating a cryptosystem the modulus q has to be polynomial in n.


Peikert's result

Peikert proves that there is a probabilistic polynomial-time reduction from the \operatorname{GapSVP}_{\zeta,\gamma} problem in the worst case to solving \mathrm{LWE}_{q,\Psi_\alpha} using \operatorname{poly}(n) samples, for parameters \alpha \in (0,1), \gamma(n)\geq n/(\alpha \sqrt{\log n}), \zeta(n) \geq \gamma(n), and q \geq (\zeta/\sqrt{n})\,\omega(\sqrt{\log n}).


Use in cryptography

The LWE problem serves as a versatile problem used in the construction of several cryptosystems. In 2005, Regev showed that the decision version of LWE is hard assuming quantum hardness of the lattice problems \mathrm{GapSVP}_\gamma (for \gamma as above) and \mathrm{SIVP}_t with t=O(n/\alpha). In 2009, Peikert proved a similar result assuming only the classical hardness of the related problem \mathrm{GapSVP}_{\zeta,\gamma}. The disadvantage of Peikert's result is that it relies on a non-standard version of an easier (when compared to SIVP) problem, GapSVP.


Public-key cryptosystem

Regev proposed a public-key cryptosystem based on the hardness of the LWE problem. The cryptosystem as well as the proof of security and correctness are completely classical. The system is characterized by m, q, and a probability distribution \chi on \mathbb{T}. The setting of the parameters used in proofs of correctness and security is:
* q \geq 2, usually a prime number between n^2 and 2n^2,
* m=(1+\varepsilon)(n+1) \log q for an arbitrary constant \varepsilon,
* \chi=\Psi_{\alpha(n)} for \alpha(n) \in o(1/(\sqrt{n}\log n)), where \Psi_\beta is a probability distribution obtained by sampling a normal variable with mean 0 and standard deviation \frac{\beta}{\sqrt{2\pi}} and reducing the result modulo 1.

The cryptosystem is then defined by:
* ''Private key'': The private key is an \mathbf{s}\in \mathbb{Z}^n_q chosen uniformly at random.
* ''Public key'': Choose m vectors \mathbf{a}_1,\ldots,\mathbf{a}_m \in \mathbb{Z}^n_q uniformly and independently. Choose error offsets e_1,\ldots,e_m \in \mathbb{T} independently according to \chi. The public key consists of (\mathbf{a}_i,b_i=\langle \mathbf{a}_i,\mathbf{s} \rangle/q + e_i)^m_{i=1}.
* ''Encryption'': The encryption of a bit x \in \{0,1\} is done by choosing a random subset S of [m] and then defining \operatorname{Enc}(x) as
:: \left(\sum_{i \in S} \mathbf{a}_i, \frac{x}{2} + \sum_{i \in S} b_i\right)
* ''Decryption'': The decryption of (\mathbf{a},b) is 0 if b-\langle \mathbf{a}, \mathbf{s} \rangle/q is closer to 0 than to \frac{1}{2} (modulo 1), and 1 otherwise.

The proof of correctness follows from the choice of parameters and some probability analysis. The proof of security is by reduction to the decision version of LWE: an algorithm for distinguishing between encryptions (with the above parameters) of 0 and 1 can be used to distinguish between A_{\mathbf{s},\chi} and the uniform distribution over \mathbb{Z}^n_q \times \mathbb{T}.
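A toy implementation of the scheme above can make the encrypt/decrypt cycle concrete. The parameters roughly follow the stated constraints but are far too small to be secure, and the random subset S is chosen here by independent coin flips; both are demonstration choices, not part of the original specification.

```python
import math
import random

rng = random.Random(4)
n = 8
q = 101                     # a prime between n^2 = 64 and 2n^2 = 128
m = 9 * (n + 1)             # roughly (1+eps)(n+1) log q samples
alpha = 0.005               # toy noise rate, chosen small for clarity

def keygen():
    s = [rng.randrange(q) for _ in range(n)]            # private key
    pk = []
    for _ in range(m):
        a = [rng.randrange(q) for _ in range(n)]
        e = rng.gauss(0.0, alpha / math.sqrt(2 * math.pi))
        b = (sum(ai * si for ai, si in zip(a, s)) / q + e) % 1.0
        pk.append((a, b))
    return s, pk

def encrypt(pk, x):
    """Encrypt bit x by summing a random subset of the public samples."""
    S = [i for i in range(m) if rng.random() < 0.5]     # random subset of [m]
    a = [sum(pk[i][0][j] for i in S) % q for j in range(n)]
    b = (x / 2 + sum(pk[i][1] for i in S)) % 1.0
    return a, b

def decrypt(s, ct):
    a, b = ct
    d = (b - sum(ai * si for ai, si in zip(a, s)) / q) % 1.0
    return 0 if min(d, 1.0 - d) < 0.25 else 1           # nearer 0 or 1/2?

sk, pk = keygen()
c0, c1 = encrypt(pk, 0), encrypt(pk, 1)
```

Correctness holds because the decryption residual equals x/2 plus the sum of roughly m/2 small errors, which stays well below 1/4 for this noise rate.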


CCA-secure cryptosystem

Peikert proposed a system that is secure even against any chosen-ciphertext attack.


Key exchange

The idea of using LWE and Ring LWE for key exchange was proposed and filed at the University of Cincinnati in 2011 by Jintai Ding. The idea comes from the associativity of matrix multiplication, and the errors are used to provide the security. The paper appeared in 2012 after a provisional patent application was filed in 2012. The security of the protocol is proven based on the hardness of solving the LWE problem. In 2014, Peikert presented a key-transport scheme following the same basic idea as Ding's, where the new idea of sending an additional 1-bit signal for rounding in Ding's construction is also used. The "New Hope" implementation selected for Google's post-quantum experiment uses Peikert's scheme with a variation in the error distribution.


See also

* Post-quantum cryptography
* Lattice-based cryptography
* Ring learning with errors key exchange
* Short integer solution (SIS) problem

