In computational complexity theory, a computational hardness assumption is the hypothesis that a particular problem cannot be solved efficiently (where ''efficiently'' typically means "in polynomial time"). It is not known how to prove (unconditional) hardness for essentially any useful problem. Instead, computer scientists rely on reductions to formally relate the hardness of a new or complicated problem to a computational hardness assumption about a problem that is better-understood.
Computational hardness assumptions are of particular importance in cryptography. A major goal in cryptography is to create cryptographic primitives with provable security. In some cases, cryptographic protocols are found to have information theoretic security; the one-time pad is a common example. However, information theoretic security cannot always be achieved; in such cases, cryptographers fall back to computational security. Roughly speaking, this means that these systems are secure ''assuming that any adversaries are computationally limited'', as all adversaries are in practice.
Computational hardness assumptions are also useful for guiding algorithm designers: a simple algorithm is unlikely to refute a well-studied computational hardness assumption such as P ≠ NP.
Comparing hardness assumptions
Computer scientists have different ways of assessing which hardness assumptions are more reliable.
Strength of hardness assumptions
We say that assumption A is ''stronger'' than assumption B when A implies B (and the converse is false or not known). In other words, even if assumption A were false, assumption B may still be true, and cryptographic protocols based on assumption B may still be safe to use. Thus when devising cryptographic protocols, one hopes to be able to prove security using the ''weakest'' possible assumptions.
Average-case vs. worst-case assumptions
An average-case assumption says that a specific problem is hard on most instances from some explicit distribution, whereas a worst-case assumption only says that the problem is hard on ''some'' instances. For a given problem, average-case hardness implies worst-case hardness, so an average-case hardness assumption is stronger than a worst-case hardness assumption for the same problem.
Furthermore, even for incomparable problems, an assumption like the Exponential Time Hypothesis is often considered preferable to an average-case assumption like the planted clique conjecture.
Note, however, that in most cryptographic applications, knowing that a problem has some hard instance (i.e. that the problem is hard in the worst case) is useless, because it does not provide us with a way of generating hard instances.[J. Katz and Y. Lindell, Introduction to Modern Cryptography (Chapman and Hall/CRC Cryptography and Network Security Series), Chapman and Hall/CRC, 2007.] Fortunately, many average-case assumptions used in cryptography (including RSA, discrete log, and some lattice problems) can be based on worst-case assumptions via worst-case-to-average-case reductions.
Falsifiability
A desired characteristic of a computational hardness assumption is falsifiability, i.e. that if the assumption were false, then it would be possible to prove it. In particular, Naor (2003) introduced a formal notion of cryptographic falsifiability. Roughly, a computational hardness assumption is said to be falsifiable if it can be formulated in terms of a challenge: an interactive protocol between an adversary and an efficient verifier, where an efficient adversary can convince the verifier to accept if and only if the assumption is false.
Common cryptographic hardness assumptions
There are many cryptographic hardness assumptions in use. This is a list of some of the most common ones, and some cryptographic protocols that use them.
Integer factorization
Given a composite number N, and in particular one which is the product of two large primes N = p·q, the integer factorization problem is to find p and q (more generally, find primes p_1, …, p_k such that N = p_1 ⋯ p_k).
It is a major open problem to find an algorithm for integer factorization that runs in time polynomial in the size of the representation (log N). The security of many cryptographic protocols relies on the assumption that integer factorization is hard (i.e. cannot be solved in polynomial time). Cryptosystems whose security is equivalent to this assumption include the Rabin cryptosystem and the Okamoto–Uchiyama cryptosystem. Many more cryptosystems rely on stronger assumptions such as RSA, Residuosity problems, and Phi-hiding.
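To make the gap concrete, the following minimal Python sketch (toy values, illustrative only) factors N by trial division; the number of steps grows roughly like √N, which is exponential in the bit-length log N, and no known algorithm avoids this exponential behaviour in general.

```python
def trial_division(N):
    """Factor N by trial division: about sqrt(N) steps, i.e. exponential in the bit-length of N."""
    factors = []
    d = 2
    while d * d <= N:
        while N % d == 0:
            factors.append(d)
            N //= d
        d += 1
    if N > 1:
        factors.append(N)
    return factors

# Toy examples; for a 2048-bit modulus the ~2^1024 steps are utterly infeasible.
print(trial_division(3 * 5 * 7))    # [3, 5, 7]
print(trial_division(1009 * 1013))  # [1009, 1013]
```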
RSA problem
Given a composite number N, an exponent e and a number c, the RSA problem is to find m such that m^e ≡ c (mod N).
The problem is conjectured to be hard, but becomes easy given the factorization of N. In the RSA cryptosystem, (N, e) is the public key, c is the encryption of message m, and the factorization of N is the secret key used for decryption.
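A minimal textbook-RSA sketch in Python (toy primes, no padding, not secure) showing that knowledge of the factorization of N makes the RSA problem easy:

```python
from math import gcd

p, q = 1009, 1013                    # toy primes; real keys use primes of ~1024+ bits
N = p * q
e = 65537
assert gcd(e, (p - 1) * (q - 1)) == 1

m = 42                               # message
c = pow(m, e, N)                     # RSA problem: recover m from (N, e, c)

# With the factorization, the private exponent is just a modular inverse (Python 3.8+):
d = pow(e, -1, (p - 1) * (q - 1))
assert pow(c, d, N) == m
```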
Residuosity problems
Given a composite number N and integers y, d, the residuosity problem is to determine whether there exists (alternatively, find an) x such that
:x^d ≡ y (mod N).
Important special cases include the Quadratic residuosity problem and the Decisional composite residuosity problem.
As in the case of RSA, this problem (and its special cases) is conjectured to be hard, but becomes easy given the factorization of N.
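As an illustration of the last point, here is a small Python sketch (toy parameters) of why the factorization trivializes quadratic residuosity: modulo a prime, Euler's criterion decides residuosity directly, so knowing N = p·q reduces the question to two prime-modulus checks.

```python
def is_qr_mod_prime(a, p):
    """Euler's criterion: a nonzero a is a quadratic residue mod prime p iff a^((p-1)/2) == 1 (mod p)."""
    return pow(a % p, (p - 1) // 2, p) == 1

p, q = 1009, 1013
N = p * q
a = 123456 % N

# With the factorization, residuosity mod N is decided directly; without it,
# distinguishing residues from certain non-residues is conjectured to be hard.
print(is_qr_mod_prime(a, p) and is_qr_mod_prime(a, q))
```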
Some cryptosystems that rely on the hardness of residuosity problems include:
* Goldwasser–Micali cryptosystem (quadratic residuosity problem)
* Blum Blum Shub generator (quadratic residuosity problem)
* Paillier cryptosystem (decisional composite residuosity problem)
* Benaloh cryptosystem (higher residuosity problem)
* Naccache–Stern cryptosystem (higher residuosity problem)
Phi-hiding assumption
For a composite number N, it is not known how to efficiently compute its Euler's totient function φ(N). The Phi-hiding assumption postulates that it is hard to compute φ(N), and furthermore that even computing any prime factors of φ(N) is hard. This assumption is used in the Cachin–Micali–Stadler PIR protocol.
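Conversely, φ(N) is easy to compute once the factorization of N is known, as the short sketch below illustrates (toy values); this is why Phi-hiding is at least as strong an assumption as the hardness of factoring.

```python
def phi_from_factorization(prime_powers):
    """Euler's totient of N = prod p^k, given the factorization as (p, k) pairs."""
    result = 1
    for p, k in prime_powers:
        result *= (p - 1) * p ** (k - 1)
    return result

# Toy example: N = 1009 * 1013, so phi(N) = 1008 * 1012.
print(phi_from_factorization([(1009, 1), (1013, 1)]))
```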
Discrete log problem (DLP)
Given elements g and h from a group G, the discrete log problem asks for an integer k such that g^k = h.
The discrete log problem is not known to be comparable to integer factorization, but their computational complexities are closely related.
Most cryptographic protocols related to the discrete log problem actually rely on the stronger Diffie–Hellman assumption: given group elements g, g^a, g^b, where g is a generator and a, b are random integers, it is hard to find g^(ab). Examples of protocols that use this assumption include the original Diffie–Hellman key exchange, as well as the ElGamal encryption (which relies on the yet stronger Decisional Diffie–Hellman (DDH) variant).
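A minimal Python sketch of Diffie–Hellman key exchange under this assumption (the modulus and generator below are toy choices for illustration only; real deployments use standardized large groups or elliptic curves):

```python
import secrets

p = 2**127 - 1                      # a Mersenne prime, used here only as a toy modulus
g = 3                               # assumed generator, for illustration

a = secrets.randbelow(p - 2) + 1    # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1    # Bob's secret exponent

A = pow(g, a, p)                    # Alice sends g^a
B = pow(g, b, p)                    # Bob sends g^b

# Both parties derive g^(ab); the (computational) Diffie-Hellman assumption says that
# an eavesdropper seeing only (g, g^a, g^b) cannot feasibly compute this value.
assert pow(B, a, p) == pow(A, b, p)
```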
Multilinear maps
A multilinear map is a function e : G_1 × ⋯ × G_n → G_T (where G_1, …, G_n, G_T are groups) such that for any g_1, …, g_n in G_1, …, G_n and any integers a_1, …, a_n,
:e(g_1^{a_1}, …, g_n^{a_n}) = e(g_1, …, g_n)^{a_1 ⋯ a_n}.
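For intuition, the defining identity can be checked on a toy two-argument (n = 2) instance built from additive groups, where discrete log is trivial; this makes it cryptographically useless but easy to verify. A minimal Python sketch (all values illustrative):

```python
# Toy bilinear map on additive groups (Z_p, +): e(x, y) = x*y mod p.
# Discrete log is trivial in these groups, which is exactly why such "easy"
# constructions have no cryptographic value; the sketch only checks the identity.
p = 101                      # toy prime modulus

def e(x, y):
    return (x * y) % p

g1, g2 = 5, 7                # "generators" of the additive groups
a1, a2 = 13, 29              # scalars (exponents, written additively)

# Multilinearity, written additively: e(a1*g1, a2*g2) = (a1*a2) * e(g1, g2).
assert e(a1 * g1 % p, a2 * g2 % p) == (a1 * a2 * e(g1, g2)) % p
```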
For cryptographic applications, one would like to construct groups G_1, …, G_n, G_T and a map e such that the map and the group operations on G_1, …, G_n, G_T can be computed efficiently, but the discrete log problem on G_1, …, G_n is still hard. Some applications require stronger assumptions, e.g. multilinear analogs of Diffie–Hellman assumptions.
For the special case of n = 2, bilinear maps with believable security have been constructed using the Weil pairing and the Tate pairing.
For n > 2 many constructions have been proposed in recent years, but many of them have also been broken, and currently there is no consensus about a safe candidate.
Some cryptosystems that rely on multilinear hardness assumptions include:
* Boneh–Franklin scheme (bilinear Diffie–Hellman)
* Boneh–Lynn–Shacham (bilinear Diffie–Hellman)
* Garg–Gentry–Halevi–Raykova–Sahai–Waters candidate for indistinguishability obfuscation and functional encryption (multilinear jigsaw puzzles)
Lattice problems
The most fundamental computational problem on lattices is the Shortest vector problem (SVP): given a lattice L, find the shortest non-zero vector v ∈ L.
Most cryptosystems require stronger assumptions on variants of SVP, such as the Shortest independent vectors problem (SIVP), GapSVP, or Unique-SVP.
The most useful lattice hardness assumption in cryptography is for the Learning with errors (LWE) problem: given samples (x, y), where y = f(x) for some linear function f, it is easy to learn f using linear algebra. In the LWE problem, the input to the algorithm has errors, i.e. for each pair y ≠ f(x) with some small probability. The errors are believed to make the problem intractable (for appropriate parameters); in particular, there are known worst-case to average-case reductions from variants of SVP.
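A small numerical sketch (toy dimensions and modulus, chosen only for illustration) of what an LWE instance looks like, and of why the errors matter:

```python
import numpy as np

# LWE-style instance: secret s in Z_q^n; samples are (a_i, <a_i, s> + e_i mod q)
# for uniformly random a_i and small errors e_i.
rng = np.random.default_rng(0)
n, m, q = 8, 32, 97                      # dimension, number of samples, modulus (toy sizes)

s = rng.integers(0, q, size=n)           # secret vector
A = rng.integers(0, q, size=(m, n))      # public random matrix
e = rng.integers(-2, 3, size=m)          # small errors in {-2, ..., 2}

b_noiseless = (A @ s) % q                # without errors: s is recoverable by linear algebra
b = (A @ s + e) % q                      # LWE samples: recovering s is conjectured to be hard

# With e = 0, Gaussian elimination over Z_q recovers s from (A, b_noiseless);
# the small errors are what is believed to make recovering s intractable.
```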
For quantum computers, the Factoring and Discrete Log problems are easy, but lattice problems are conjectured to be hard.
This makes some lattice-based cryptosystems candidates for post-quantum cryptography.
Some cryptosystems that rely on hardness of lattice problems include:
* NTRU (both NTRUEncrypt and NTRUSign)
* Most candidates for fully homomorphic encryption
Non-cryptographic hardness assumptions
As well as their cryptographic applications, hardness assumptions are used in computational complexity theory to provide evidence for mathematical statements that are difficult to prove unconditionally. In these applications, one proves that the hardness assumption implies some desired complexity-theoretic statement, instead of proving that the statement is itself true. The best-known assumption of this type is the assumption that P ≠ NP, but others include the exponential time hypothesis, the planted clique conjecture, and the unique games conjecture.
''C''-hard problems
Many worst-case computational problems are known to be hard or even complete for some complexity class ''C'', in particular NP-hard (but often also PSPACE-hard, PPAD-hard, etc.). This means that they are at least as hard as any problem in the class ''C''.
If a problem is ''C''-hard (with respect to polynomial time reductions), then it cannot be solved by a polynomial-time algorithm unless the computational hardness assumption P ≠ ''C'' is false.
Exponential Time Hypothesis (ETH) and variants
The Exponential Time Hypothesis (ETH) is a strengthening of the P ≠ NP hardness assumption, which conjectures that not only does the Boolean satisfiability problem (SAT) not have a polynomial time algorithm, it furthermore requires exponential time (2^{Ω(n)}).
An even stronger assumption, known as the Strong Exponential Time Hypothesis (SETH), conjectures that k-SAT requires 2^{(1−ε_k)n} time, where ε_k → 0 as k → ∞.
ETH, SETH, and related computational hardness assumptions allow for deducing fine-grained complexity results, e.g. results that distinguish polynomial time and quasi-polynomial time, or even n^{1.99} versus n^2.
Such assumptions are also useful in parametrized complexity.
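For context, the baseline that ETH says cannot be improved to subexponential time is plain exhaustive search over all 2^n assignments; a minimal Python sketch (the clause encoding here is an illustrative choice):

```python
from itertools import product

def brute_force_sat(num_vars, clauses):
    """Exhaustive search over all 2^n assignments.
    Clauses are lists of non-zero ints: literal v means x_v, and -v means NOT x_v."""
    for bits in product([False, True], repeat=num_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause) for clause in clauses):
            return bits
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(brute_force_sat(3, [[1, 2], [-1, 3], [-2, -3]]))
```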
Average-case hardness assumptions
Some computational problems are assumed to be hard on average over a particular distribution of instances.
For example, in the planted clique problem, the input is a random graph sampled by drawing an Erdős–Rényi random graph G(n, 1/2) and then "planting" a random k-clique, i.e. connecting k uniformly random nodes (where 2 log_2 n ≪ k ≪ √n), and the goal is to find the planted k-clique (which is unique w.h.p.).
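A short Python sketch of sampling such an instance (toy sizes; the parameter choices are illustrative only):

```python
import random

def planted_clique_instance(n, k, seed=0):
    rng = random.Random(seed)
    # Erdos-Renyi G(n, 1/2): each edge present independently with probability 1/2.
    edges = {(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < 0.5}
    # Plant a k-clique on a uniformly random subset of k vertices.
    clique = rng.sample(range(n), k)
    edges |= {(min(u, v), max(u, v)) for u in clique for v in clique if u != v}
    return edges, sorted(clique)

edges, clique = planted_clique_instance(n=50, k=10)
# The algorithmic task is to recover `clique` from `edges` alone.
```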
Another important example is Feige's Hypothesis, which is a computational hardness assumption about random instances of 3-SAT (sampled to maintain a specific ratio of clauses to variables).
Average-case computational hardness assumptions are useful for proving average-case hardness in applications like statistics, where there is a natural distribution over inputs.
Additionally, the planted clique hardness assumption has also been used to distinguish between polynomial and quasi-polynomial worst-case time complexity of other problems, similarly to the Exponential Time Hypothesis.
Unique Games
The Unique Label Cover problem is a constraint satisfaction problem, where each constraint C involves two variables x, y, and for each value of x there is a ''unique'' value of y that satisfies C.
Determining whether all the constraints can be satisfied is easy, but the Unique Game Conjecture (UGC) postulates that determining whether almost all the constraints (a (1−ε)-fraction, for any constant ε > 0) can be satisfied or almost none of them (an ε-fraction) can be satisfied is NP-hard.
Approximation problems are often known to be NP-hard assuming UGC; such problems are referred to as UG-hard.
In particular, assuming UGC there is a semidefinite programming algorithm that achieves optimal approximation guarantees for many important problems.
Small Set Expansion
Closely related to the Unique Label Cover problem is the Small Set Expansion (SSE) problem: given a graph G = (V, E), find a small set of vertices (say, of size δn for a small constant δ) whose edge expansion is minimal.
It is known that if SSE is hard to approximate, then so is Unique Label Cover. Hence, the ''Small Set Expansion Hypothesis'', which postulates that SSE is hard to approximate, is a stronger (but closely related) assumption than the Unique Game Conjecture.
Some approximation problems are known to be SSE-hard (i.e. at least as hard as approximating SSE).
The 3SUM Conjecture
Given a set of n numbers, the 3SUM problem asks whether there is a triplet of numbers whose sum is zero.
There is a quadratic-time algorithm for 3SUM, and it has been conjectured that no algorithm can solve 3SUM in "truly sub-quadratic time":
the 3SUM Conjecture is the computational hardness assumption that there are no O(n^{2−ε})-time algorithms for 3SUM (for any constant ε > 0).
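For reference, the classical quadratic-time algorithm (sort, then a two-pointer scan for each fixed element) is easy to state; the conjecture asserts that the exponent 2 cannot be improved. A minimal Python sketch:

```python
def three_sum_exists(nums):
    """Classical O(n^2) algorithm: sort, then for each element scan with two pointers."""
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1
            else:
                hi -= 1
    return False

print(three_sum_exists([-5, 1, 4, 7, -2]))   # True: -5 + 1 + 4 == 0
print(three_sum_exists([1, 2, 3]))           # False
```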
This conjecture is useful for proving near-quadratic lower bounds for several problems, mostly from computational geometry.
See also
* Security level
References