Computational hardness assumptions

In computational complexity theory, a computational hardness assumption is the hypothesis that a particular problem cannot be solved efficiently (where ''efficiently'' typically means "in polynomial time"). It is not known how to prove (unconditional) hardness for essentially any useful problem. Instead, computer scientists rely on reductions to formally relate the hardness of a new or complicated problem to a computational hardness assumption about a problem that is better understood.

Computational hardness assumptions are of particular importance in cryptography. A major goal in cryptography is to create cryptographic primitives with provable security. In some cases, cryptographic protocols are found to have information-theoretic security; the one-time pad is a common example. However, information-theoretic security cannot always be achieved; in such cases, cryptographers fall back on computational security. Roughly speaking, this means that these systems are secure ''assuming that any adversaries are computationally limited'', as all adversaries are in practice.

Computational hardness assumptions are also useful for guiding algorithm designers: a simple algorithm is unlikely to refute a well-studied computational hardness assumption such as P ≠ NP.


Comparing hardness assumptions

Computer scientists have different ways of assessing which hardness assumptions are more reliable.


Strength of hardness assumptions

We say that assumption A is ''stronger'' than assumption B when A implies B (and the converse is false or not known). In other words, even if assumption A were false, assumption B may still be true, and cryptographic protocols based on assumption B may still be safe to use. Thus when devising cryptographic protocols, one hopes to be able to prove security using the ''weakest'' possible assumptions.


Average-case vs. worst-case assumptions

An average-case assumption says that a specific problem is hard on most instances from some explicit distribution, whereas a worst-case assumption only says that the problem is hard on ''some'' instances. For a given problem, average-case hardness implies worst-case hardness, so an average-case hardness assumption is stronger than a worst-case hardness assumption for the same problem. Furthermore, even for incomparable problems, an assumption like the exponential time hypothesis is often considered preferable to an average-case assumption like the planted clique conjecture. However, for cryptographic applications, knowing that a problem has some hard instance (i.e. that the problem is hard in the worst case) is useless, because it does not provide a way of generating hard instances (J. Katz and Y. Lindell, ''Introduction to Modern Cryptography'', Chapman and Hall/CRC, 2007). Fortunately, many average-case assumptions used in cryptography (including RSA, discrete log, and some lattice problems) can be based on worst-case assumptions via worst-case-to-average-case reductions.


Falsifiability

A desired characteristic of a computational hardness assumption is falsifiability, i.e. that if the assumption were false, then it would be possible to prove it. In particular, Naor (2003) introduced a formal notion of cryptographic falsifiability. Roughly, a computational hardness assumption is said to be falsifiable if it can be formulated in terms of a challenge: an interactive protocol between an adversary and an efficient verifier, where an efficient adversary can convince the verifier to accept if and only if the assumption is false.


Common cryptographic hardness assumptions

There are many cryptographic hardness assumptions in use. This is a list of some of the most common ones, and some cryptographic protocols that use them.


Integer factorization

Given a composite integer n, and in particular one which is the product of two large primes n = p\cdot q, the integer factorization problem is to find p and q (more generally, find primes p_1,\dots,p_k such that n = \prod_i p_i). It is a major open problem to find an algorithm for integer factorization that runs in time polynomial in the size of the representation (\log n). The security of many cryptographic protocols relies on the assumption that integer factorization is hard (i.e. cannot be solved in polynomial time). Cryptosystems whose security is equivalent to this assumption include the Rabin cryptosystem and the Okamoto–Uchiyama cryptosystem. Many more cryptosystems rely on stronger assumptions such as RSA, residuosity problems, and phi-hiding.
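To see why the problem is stated relative to the input size \log n, the following minimal Python sketch (an assumed toy example, not any deployed algorithm) factors by trial division; its running time grows roughly like \sqrt n, which is exponential in the bit-length of n.

<syntaxhighlight lang="python">
def trial_division(n):
    """Factor n by trial division: correct, but may try about sqrt(n)
    candidate divisors, i.e. exponential in the bit-length of n."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# Toy inputs; real cryptographic moduli have thousands of bits,
# which puts sqrt(n) far out of reach for this approach.
print(trial_division(3 * 5 * 7))    # [3, 5, 7]
print(trial_division(101 * 103))    # [101, 103]
</syntaxhighlight>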


RSA problem

Given a composite number n, an exponent e and a number c := m^e \pmod{n}, the RSA problem is to find m. The problem is conjectured to be hard, but becomes easy given the factorization of n. In the RSA cryptosystem, (n,e) is the public key, c is the encryption of the message m, and the factorization of n is the secret key used for decryption.
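The following Python sketch (with assumed, deliberately tiny and insecure parameters) illustrates how the factorization of n acts as a trapdoor for the RSA problem: once p and q are known, the decryption exponent d is easy to compute.

<syntaxhighlight lang="python">
# Toy RSA with tiny primes (assumed illustrative parameters; wholly insecure).
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)        # Euler's totient, known only with the factorization
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent: easy once p and q are known

m = 42
c = pow(m, e, n)               # the RSA problem: recover m from (n, e, c)
assert pow(c, d, n) == m       # trivial with the factorization of n
</syntaxhighlight>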


Residuosity problems

Given a composite number n and integers y,d, the residuosity problem is to determine whether there exists (alternatively, to find) an x such that
: x^d \equiv y \pmod{n}.
Important special cases include the quadratic residuosity problem and the decisional composite residuosity problem. As in the case of RSA, this problem (and its special cases) is conjectured to be hard, but becomes easy given the factorization of n. Some cryptosystems that rely on the hardness of residuosity problems include (a sketch of the quadratic case appears after this list):
* Goldwasser–Micali cryptosystem (quadratic residuosity problem)
* Blum Blum Shub generator (quadratic residuosity problem)
* Paillier cryptosystem (decisional composite residuosity problem)
* Benaloh cryptosystem (higher residuosity problem)
* Naccache–Stern cryptosystem (higher residuosity problem)
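As a minimal sketch of the quadratic special case, the following Python code (assumed toy primes) decides quadratic residuosity modulo n = pq ''given'' the factorization, using Euler's criterion; the quadratic residuosity assumption is that no efficient algorithm can do this from n alone.

<syntaxhighlight lang="python">
def is_qr_mod_prime(y, p):
    """Euler's criterion: y (coprime to p) is a quadratic residue modulo
    an odd prime p iff y^((p-1)/2) == 1 (mod p)."""
    return pow(y % p, (p - 1) // 2, p) == 1

def is_qr_mod_n(y, p, q):
    # y is a square mod n = p*q iff it is a square mod p and mod q.
    return is_qr_mod_prime(y, p) and is_qr_mod_prime(y, q)

p, q = 11, 19                          # assumed toy primes
n = p * q
print(is_qr_mod_n(5 * 5 % n, p, q))    # True: 25 is a square mod n
print(is_qr_mod_n(2, p, q))            # False: 2 is a non-residue mod 11
</syntaxhighlight>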


Phi-hiding assumption

For a composite number m, it is not known how to efficiently compute its Euler's totient function \phi(m). The phi-hiding assumption postulates that it is hard to compute \phi(m), and furthermore that even computing any prime factors of \phi(m) is hard. This assumption is used in the Cachin–Micali–Stadler PIR protocol.
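A minimal sketch of why computing \phi(m) is believed to be as hard as factoring: for a semiprime m = pq, the pair (m, \phi(m)) determines p and q via a quadratic equation (the values below are assumed illustrative ones).

<syntaxhighlight lang="python">
from math import isqrt

# For a semiprime m = p*q, knowing phi(m) = (p-1)(q-1) reveals the
# factorization: p and q are the roots of x^2 - (m - phi + 1)x + m = 0.
def factor_from_phi(m, phi):
    s = m - phi + 1                 # s = p + q
    disc = s * s - 4 * m            # (p - q)^2
    r = isqrt(disc)
    return (s - r) // 2, (s + r) // 2

p, q = 61, 53                       # assumed toy primes
m = p * q
phi = (p - 1) * (q - 1)
print(factor_from_phi(m, phi))      # (53, 61)
</syntaxhighlight>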


Discrete log problem (DLP)

Given elements a and b from a group G, the discrete log problem asks for an integer k such that a = b^k. The discrete log problem is not known to be comparable to integer factorization, but their computational complexities are closely related. Most cryptographic protocols related to the discrete log problem actually rely on the stronger Diffie–Hellman assumption: given group elements g, g^a, g^b, where g is a generator and a,b are random integers, it is hard to find g^{ab}. Examples of protocols that use this assumption include the original Diffie–Hellman key exchange, as well as ElGamal encryption (which relies on the yet stronger Decisional Diffie–Hellman (DDH) variant).
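The following Python sketch (with assumed toy parameters rather than a standardized group) shows the original Diffie–Hellman exchange; the Diffie–Hellman assumption is precisely that an eavesdropper seeing only g, g^a, g^b cannot compute the shared value g^{ab}.

<syntaxhighlight lang="python">
import secrets

# Toy Diffie-Hellman key exchange over the multiplicative group mod p
# (assumed illustrative parameters; real deployments use standardized
# groups or elliptic curves).
p = 2**127 - 1        # a Mersenne prime, large enough for illustration
g = 3                 # a fixed public base

a = secrets.randbelow(p - 2) + 1    # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1    # Bob's secret exponent

A = pow(g, a, p)                    # Alice publishes g^a
B = pow(g, b, p)                    # Bob publishes g^b

# Both parties compute the shared value g^(a*b); an eavesdropper seeing
# only (g, A, B) must break the Diffie-Hellman assumption to recover it.
assert pow(B, a, p) == pow(A, b, p)
</syntaxhighlight>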


Multilinear maps

A multilinear map is a function e: G_1 \times \dots \times G_n \rightarrow G_T (where G_1 ,\dots,G_n,G_T are groups) such that for any g_1 \in G_1, \dots, g_n \in G_n and integers a_1, \dots, a_n,
:e(g_1^{a_1},\dots,g_n^{a_n}) = e(g_1,\dots,g_n)^{a_1 \cdots a_n}.
For cryptographic applications, one would like to construct groups G_1,\dots,G_n,G_T and a map e such that the map and the group operations on G_1,\dots,G_n,G_T can be computed efficiently, but the discrete log problem on G_1,\dots,G_n is still hard. Some applications require stronger assumptions, e.g. multilinear analogs of Diffie–Hellman assumptions. For the special case of n=2, bilinear maps with believable security have been constructed using the Weil pairing and the Tate pairing. For n>2 many constructions have been proposed in recent years, but many of them have also been broken, and currently there is no consensus about a safe candidate. Some cryptosystems that rely on multilinear hardness assumptions include (a toy illustration of the multilinearity law appears after this list):
* Boneh–Franklin scheme (bilinear Diffie–Hellman)
* Boneh–Lynn–Shacham (bilinear Diffie–Hellman)
* Garg–Gentry–Halevi–Raykova–Sahai–Waters candidate for indistinguishability obfuscation and functional encryption (multilinear jigsaw puzzles)
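The multilinearity law itself is easy to state in code. The Python sketch below checks it for a deliberately insecure toy bilinear map (discrete log in the source group is trivial, so the cryptographic requirement fails by design); all parameters are assumed illustrative values.

<syntaxhighlight lang="python">
# A toy "bilinear map" used only to illustrate the multilinearity law,
# NOT a secure construction: here G1 = G2 = (Z_q, +), where discrete log
# is trivial, so the cryptographic requirement fails on purpose.
q = 101                      # order of the source groups
p = 607                      # prime with q dividing p - 1
g = pow(3, (p - 1) // q, p)  # element of order q in Z_p^*; target group G_T = <g>

def e(x, y):
    """Toy pairing e: Z_q x Z_q -> G_T, defined by e(x, y) = g^(x*y) mod p."""
    return pow(g, (x * y) % q, p)

a, b = 17, 29
# Bilinearity: e(a*1, b*1) == e(1, 1)^(a*b)
assert e(a, b) == pow(e(1, 1), a * b, p)
assert e(a, b) == pow(e(a, 1), b, p)
</syntaxhighlight>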


Lattice problems

The most fundamental computational problem on lattices is the shortest vector problem (SVP): given a lattice L, find the shortest non-zero vector v \in L. Most cryptosystems require stronger assumptions on variants of SVP, such as the shortest independent vectors problem (SIVP), GapSVP, or Unique-SVP. The most useful lattice hardness assumption in cryptography is for the learning with errors (LWE) problem: given access to samples (x,y), where y = f(x) for some linear function f(\cdot), it is easy to learn f(\cdot) using linear algebra. In the LWE problem, the input to the algorithm has errors, i.e. for each pair, y\neq f(x) with some small probability. The errors are believed to make the problem intractable (for appropriate parameters); in particular, there are known worst-case to average-case reductions from variants of SVP. For quantum computers, the factoring and discrete log problems are easy, but lattice problems are conjectured to be hard. This makes some lattice-based cryptosystems candidates for post-quantum cryptography. Some cryptosystems that rely on the hardness of lattice problems include (a sketch of LWE sample generation appears after this list):
* NTRU (both NTRUEncrypt and NTRUSign)
* Most candidates for fully homomorphic encryption
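The LWE distribution itself is simple to generate; the following Python sketch (assumed toy parameters, far too small for security) produces samples (a, \langle a, s\rangle + e \bmod q), and the assumption is that recovering s from such samples is hard.

<syntaxhighlight lang="python">
import random

# Minimal LWE sample generation (assumed toy parameters). Each sample is a
# pair (a, b) with b = <a, s> + e (mod q), where s is the secret and e is a
# small error; without the errors, s could be recovered from n independent
# samples by Gaussian elimination.
n, q = 8, 97                                   # dimension and modulus
s = [random.randrange(q) for _ in range(n)]    # the secret vector

def lwe_sample():
    a = [random.randrange(q) for _ in range(n)]
    e = random.choice([-1, 0, 0, 1])           # small, centered error
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

samples = [lwe_sample() for _ in range(20)]
# The (search) LWE assumption: recovering s from many such (a, b) pairs is
# computationally hard for suitable n, q, and error distribution.
</syntaxhighlight>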


Non-cryptographic hardness assumptions

As well as their cryptographic applications, hardness assumptions are used in computational complexity theory to provide evidence for mathematical statements that are difficult to prove unconditionally. In these applications, one proves that the hardness assumption implies some desired complexity-theoretic statement, instead of proving that the statement is itself true. The best-known assumption of this type is the assumption that P ≠ NP, but others include the exponential time hypothesis, the planted clique conjecture, and the unique games conjecture.


''C''-hard problems

Many worst-case computational problems are known to be hard or even complete for some complexity class C, in particular NP-hard (but often also PSPACE-hard, PPAD-hard, etc.). This means that they are at least as hard as any problem in the class C. If a problem is C-hard (with respect to polynomial time reductions), then it cannot be solved by a polynomial-time algorithm unless the computational hardness assumption P \neq C is false.


Exponential time hypothesis (ETH) and variants

The exponential time hypothesis (ETH) is a strengthening of the P \neq NP hardness assumption, which conjectures that not only does the Boolean satisfiability problem (SAT) not have a polynomial time algorithm, but furthermore it requires exponential time (2^{\Omega(n)}). An even stronger assumption, known as the strong exponential time hypothesis (SETH), conjectures that k-SAT requires 2^{(1-\varepsilon_k)n} time, where \lim_{k\to\infty} \varepsilon_k = 0. ETH, SETH, and related computational hardness assumptions allow for deducing fine-grained complexity results, e.g. results that distinguish polynomial time and quasi-polynomial time, or even n^{1.99} versus n^{2}. Such assumptions are also useful in parameterized complexity.
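For contrast with the conjectured lower bounds, here is the trivial 2^n-time brute-force SAT algorithm in Python (the clause encoding is an assumed convention for illustration only); ETH asserts that, for 3-SAT, the exponential dependence on n cannot be avoided altogether.

<syntaxhighlight lang="python">
from itertools import product

# Brute-force SAT: try all 2^n assignments. A clause is a list of literals,
# where the positive integer i denotes variable x_i and -i its negation
# (an assumed encoding for this sketch).
def brute_force_sat(n_vars, clauses):
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(brute_force_sat(3, [[1, 2], [-1, 3], [-2, -3]]))
</syntaxhighlight>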


Average-case hardness assumptions

Some computational problems are assumed to be hard on average over a particular distribution of instances. For example, in the planted clique problem, the input is a random graph generated by sampling an Erdős–Rényi random graph and then "planting" a random k-clique, i.e. connecting k uniformly random nodes (where 2\log_2 n \ll k \ll \sqrt n), and the goal is to find the planted k-clique (which is unique w.h.p.). Another important example is Feige's hypothesis, which is a computational hardness assumption about random instances of 3-SAT (sampled to maintain a specific ratio of clauses to variables). Average-case computational hardness assumptions are useful for proving average-case hardness in applications like statistics, where there is a natural distribution over inputs. Additionally, the planted clique hardness assumption has also been used to distinguish between polynomial and quasi-polynomial worst-case time complexity of other problems, similarly to the exponential time hypothesis.
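Sampling an instance of the planted clique problem is straightforward, as the following Python sketch (assumed toy parameters) shows; the conjectured hardness lies in recovering the planted clique from the resulting graph.

<syntaxhighlight lang="python">
import random
from itertools import combinations

# Generate a planted clique instance: start from an Erdős–Rényi graph
# G(n, 1/2), then make k random vertices pairwise adjacent. The conjecture
# is that for 2*log2(n) << k << sqrt(n), no polynomial-time algorithm
# recovers the planted clique.
def planted_clique_instance(n, k, p=0.5):
    edges = {frozenset(e) for e in combinations(range(n), 2)
             if random.random() < p}
    clique = random.sample(range(n), k)
    edges |= {frozenset(e) for e in combinations(clique, 2)}
    return edges, clique

edges, hidden = planted_clique_instance(n=200, k=15)
# `hidden` is known only to the instance generator; the algorithmic task
# is to recover it from `edges` alone.
</syntaxhighlight>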


Unique games

The unique label cover problem is a constraint satisfaction problem, where each constraint C involves two variables x,y, and for each value of x there is a ''unique'' value of y that satisfies C. Determining whether all the constraints can be satisfied is easy, but the unique game conjecture (UGC) postulates that determining whether almost all the constraints ((1-\varepsilon)-fraction, for any constant \varepsilon>0) can be satisfied or almost none of them (\varepsilon-fraction) can be satisfied is NP-hard. Approximation problems are often known to be NP-hard assuming UGC; such problems are referred to as UG-hard. In particular, assuming UGC there is a semidefinite programming algorithm that achieves optimal approximation guarantees for many important problems.
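The following Python sketch (using an assumed encoding of constraints as permutations) illustrates why the problem is called ''unique'': each constraint is a permutation, so a label for x forces a single satisfying label for y, and checking the satisfied fraction of a given labeling is easy; the UGC concerns the hardness of distinguishing instances where a nearly perfect labeling exists from those where none does.

<syntaxhighlight lang="python">
import random

# Toy unique label cover instance: each constraint on a pair (u, v) is a
# permutation pi of the label set, satisfied iff label[v] == pi[label[u]].
def random_instance(n_vars, n_constraints, n_labels):
    constraints = []
    for _ in range(n_constraints):
        u, v = random.sample(range(n_vars), 2)
        pi = list(range(n_labels))
        random.shuffle(pi)
        constraints.append((u, v, pi))
    return constraints

def satisfied_fraction(constraints, labeling):
    ok = sum(labeling[v] == pi[labeling[u]] for u, v, pi in constraints)
    return ok / len(constraints)

cons = random_instance(n_vars=50, n_constraints=200, n_labels=6)
labeling = [random.randrange(6) for _ in range(50)]
print(satisfied_fraction(cons, labeling))   # evaluating a labeling is easy
</syntaxhighlight>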


Small set expansion

Closely related to the unique label cover problem is the small set expansion (SSE) problem: given a graph G = (V,E), find a small set of vertices (of size n/\log(n)) whose edge expansion is minimal. It is known that if SSE is hard to approximate, then so is unique label cover. Hence, the ''small set expansion hypothesis'', which postulates that SSE is hard to approximate, is a stronger (but closely related) assumption than the unique game conjecture. Some approximation problems are known to be SSE-hard (i.e. at least as hard as approximating SSE).
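Evaluating the edge expansion of one candidate set is easy, as in the Python sketch below (which uses an assumed normalization: edges leaving the set divided by its size); the hardness in the SSE hypothesis lies in approximately minimizing this quantity over all small sets.

<syntaxhighlight lang="python">
# Edge expansion of a vertex set S in a graph given as an adjacency list.
def edge_expansion(adj, S):
    S = set(S)
    crossing = sum(1 for u in S for v in adj[u] if v not in S)
    return crossing / len(S)

# A 6-cycle: the set {0, 1, 2} has only two edges leaving it.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(edge_expansion(adj, {0, 1, 2}))   # 2/3
</syntaxhighlight>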


The 3SUM conjecture

Given a set of n numbers, the 3SUM problem asks whether there is a triplet of numbers whose sum is zero. There is a quadratic-time algorithm for 3SUM, and it has been conjectured that no algorithm can solve 3SUM in "truly sub-quadratic time": the 3SUM conjecture is the computational hardness assumption that there are no O(n^{2-\varepsilon})-time algorithms for 3SUM (for any constant \varepsilon > 0). This conjecture is useful for proving near-quadratic lower bounds for several problems, mostly from computational geometry.
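For reference, the standard quadratic-time 3SUM algorithm (sorting followed by a two-pointer scan) is sketched below in Python; the 3SUM conjecture asserts that the exponent 2 cannot be improved by any constant.

<syntaxhighlight lang="python">
# Classic O(n^2) 3SUM: sort, then for each element scan the remainder
# with two pointers.
def has_three_sum(nums):
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1
            else:
                hi -= 1
    return False

print(has_three_sum([-5, 1, 4, 7, -2]))   # True: e.g. -5 + (-2) + 7 == 0
print(has_three_sum([1, 2, 3]))           # False
</syntaxhighlight>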


See also

* Security level


References
