Computational Security
In computational complexity theory, a computational hardness assumption is the hypothesis that a particular problem cannot be solved efficiently (where ''efficiently'' typically means "in polynomial time"). It is not known how to prove (unconditional) hardness for essentially any useful problem. Instead, computer scientists rely on reductions to formally relate the hardness of a new or complicated problem to a computational hardness assumption about a better-understood problem. Computational hardness assumptions are of particular importance in cryptography. A major goal in cryptography is to create cryptographic primitives with provable security. In some cases, cryptographic protocols are found to have information-theoretic security; the one-time pad is a common example. However, information-theoretic security cannot always be achieved; in such cases, cryptographers fall back to computational security. Roughly speaking, this means that these systems are secure assuming that any adversaries are computationally limited ...
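For contrast with computational security, here is a minimal sketch of the one-time pad's information-theoretic construction (function names are our own, illustrative only):

import secrets

def otp_encrypt(message: bytes) -> tuple[bytes, bytes]:
    # A fresh, truly random key as long as the message; it must never be reused.
    key = secrets.token_bytes(len(message))
    return key, bytes(m ^ k for m, k in zip(message, key))

def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    # XOR with the same key inverts the encryption.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, ct = otp_encrypt(b"attack at dawn")
assert otp_decrypt(key, ct) == b"attack at dawn"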



Computational Complexity Theory
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and relating these classes to each other. A computational problem is a task solved by a computer; it is solvable by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do ...



Lattice Problems
In computer science, lattice problems are a class of optimization problems related to mathematical objects called lattices. The conjectured intractability of such problems is central to the construction of secure lattice-based cryptosystems: lattice problems are an example of NP-hard problems which have been shown to be average-case hard, providing a test case for the security of cryptographic algorithms. In addition, some lattice problems which are worst-case hard can be used as a basis for extremely secure cryptographic schemes. The use of worst-case hardness in such schemes makes them among the very few schemes that are very likely secure even against quantum computers. For applications in such cryptosystems, lattices over a vector space (often \mathbb{R}^n) or free modules (often \mathbb{Z}^n) are generally considered. For all the problems below, assume that we are given (in addition to other more specific inputs) a basis for the vector space ''V'' and a norm ''N''. The norm usually considered is the Euclidean norm ...
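As a concrete instance of such a problem, a minimal statement of the shortest vector problem (SVP), phrased with the basis and norm from the setup above:

: \text{given a basis } \{b_1, \dots, b_n\} \text{ of the lattice } L = \Big\{ \sum_i z_i b_i : z_i \in \mathbb{Z} \Big\}, \text{ find } v \in L \setminus \{0\} \text{ minimizing } N(v).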


Paillier Cryptosystem
The Paillier cryptosystem, invented by and named after Pascal Paillier in 1999, is a probabilistic asymmetric algorithm for public key cryptography. The problem of computing ''n''-th residue classes is believed to be computationally difficult. The decisional composite residuosity assumption is the intractability hypothesis upon which this cryptosystem is based. The scheme is an additively homomorphic cryptosystem; this means that, given only the public key and the encryptions of m_1 and m_2, one can compute the encryption of m_1+m_2. Algorithm The scheme works as follows: Key generation #Choose two large prime numbers p and q randomly and independently of each other such that \gcd(pq, (p-1)(q-1))=1. This property is assured if both primes are of equal length (Jonathan Katz, Yehuda Lindell, ''Introduction to Modern Cryptography: Principles and Protocols'', Chapman & Hall/CRC, 2007). #Compute n=pq and \lambda=\operatorname{lcm}(p-1,q-1), where lcm denotes the least common multiple. #Select a random integer ...
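A minimal end-to-end sketch of the scheme with toy primes, using the common simplified choice g = n + 1 (all parameters are illustrative; real deployments need large random primes):

from math import gcd

p, q = 17, 19                                  # toy primes for illustration only
assert gcd(p * q, (p - 1) * (q - 1)) == 1
n = p * q
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1                                      # common simplified choice of g
n2 = n * n

def L(x):                                      # Paillier's L(x) = (x - 1) / n
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)            # modular inverse (Python 3.8+)

def encrypt(m, r):                             # r must be co-prime to n
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(42, 7), encrypt(100, 11)
assert decrypt(c1) == 42
# Additive homomorphism: multiplying ciphertexts adds the plaintexts mod n.
assert decrypt((c1 * c2) % n2) == (42 + 100) % n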


Blum Blum Shub Generator
Blum Blum Shub (B.B.S.) is a pseudorandom number generator proposed in 1986 by Lenore Blum, Manuel Blum and Michael Shub that is derived from Michael O. Rabin's one-way function. Blum Blum Shub takes the form :x_{n+1} = x_n^2 \bmod M, where ''M'' = ''pq'' is the product of two large primes ''p'' and ''q''. At each step of the algorithm, some output is derived from x_{n+1}; the output is commonly either the bit parity of x_{n+1} or one or more of its least significant bits. The seed x_0 should be an integer that is co-prime to ''M'' (i.e. ''p'' and ''q'' are not factors of x_0) and not 1 or 0. The two primes, ''p'' and ''q'', should both be congruent to 3 (mod 4) (this guarantees that each quadratic residue has one square root which is also a quadratic residue), and should be safe primes with a small gcd((''p''−3)/2, (''q''−3)/2) (this makes the cycle length large). An interesting characteristic of the Blum Blum Shub generator is ...
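A minimal sketch of the generator with toy Blum primes (real use requires large safe primes):

from math import gcd

p, q = 11, 23                 # toy primes, both ≡ 3 (mod 4) and safe
M = p * q

def bbs_bits(seed, count):
    # The seed must be co-prime to M and not 0 or 1.
    assert seed not in (0, 1) and gcd(seed, M) == 1
    x = seed
    bits = []
    for _ in range(count):
        x = (x * x) % M       # x_{n+1} = x_n^2 mod M
        bits.append(x & 1)    # output: least significant bit of x_{n+1}
    return bits

print(bbs_bits(3, 16))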


Goldwasser–Micali Cryptosystem
The Goldwasser–Micali (GM) cryptosystem is an asymmetric key encryption algorithm developed by Shafi Goldwasser and Silvio Micali in 1982. GM has the distinction of being the first probabilistic public-key encryption scheme which is provably secure under standard cryptographic assumptions. However, it is not an efficient cryptosystem, as ciphertexts may be several hundred times larger than the initial plaintext. To prove the security properties of the cryptosystem, Goldwasser and Micali proposed the widely used definition of semantic security. Basis The GM cryptosystem is semantically secure based on the assumed intractability of the quadratic residuosity problem modulo a composite ''N'' = ''pq'' where ''p'', ''q'' are large primes. This assumption states that given (''x'', ''N'') it is difficult to determine whether ''x'' is a quadratic residue modulo ''N'' (i.e., ''x'' = ''y''^2 mod ''N'' for some ''y''), when the Jacobi symbol for ''x'' is +1. The quadratic residuosity problem ...
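A minimal sketch of GM bit-by-bit encryption with toy primes, decrypting via Euler's criterion once the factors are known (all parameters chosen for illustration only):

import random
from math import gcd

p, q = 7, 11                  # toy primes (kept private); N is public
N = p * q
x = 6                         # non-residue mod p and mod q, so its Jacobi symbol mod N is +1

def encrypt(bit):
    r = random.randrange(2, N)
    while gcd(r, N) != 1:
        r = random.randrange(2, N)
    return (pow(x, bit, N) * r * r) % N   # a residue iff bit == 0

def is_residue(c, prime):
    return pow(c, (prime - 1) // 2, prime) == 1   # Euler's criterion

def decrypt(c):
    # With the factors known, c is a residue mod N iff it is one mod both p and q.
    return 0 if is_residue(c, p) and is_residue(c, q) else 1

assert all(decrypt(encrypt(b)) == b for b in (0, 1, 1, 0))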




Decisional Composite Residuosity Assumption
The decisional composite residuosity assumption (DCRA) is a mathematical assumption used in cryptography. In particular, the assumption is used in the proof of the Paillier cryptosystem. Informally, the DCRA states that given a composite n and an integer z, it is hard to decide whether z is an n-residue modulo n^2, i.e., whether there exists a y such that : z \equiv y^n \pmod{n^2}. See also * Quadratic residuosity problem * Higher residuosity problem References * P. Paillier, ''Public-Key Cryptosystems Based on Composite Degree Residuosity Classes'', Eurocrypt 1999.
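A tiny brute-force illustration of the object the assumption concerns, feasible only because n is a toy value: the n-th residues form a subgroup of index n in (\mathbb{Z}/n^2\mathbb{Z})^*, and the DCRA says random members of that subgroup are hard to tell apart from random units:

from math import gcd

n = 15                                 # toy composite n = 3 * 5
n2 = n * n
units = [z for z in range(1, n2) if gcd(z, n) == 1]
nth_residues = {pow(y, n, n2) for y in units}
print(len(nth_residues), len(units))   # 8 120: the residues are a sparse subgroup
assert len(units) == n * len(nth_residues)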


Quadratic Residuosity Problem
The quadratic residuosity problem (QRP) in computational number theory is to decide, given integers a and N, whether a is a quadratic residue modulo N or not. Here N = p_1 p_2 for two unknown primes p_1 and p_2, and a is among the numbers which are not obviously quadratic non-residues (see below). The problem was first described by Gauss in his ''Disquisitiones Arithmeticae'' in 1801. The problem is believed to be computationally difficult, and several cryptographic methods rely on its hardness. An efficient algorithm for the quadratic residuosity problem would immediately imply efficient algorithms for other number-theoretic problems, such as deciding whether a composite N of unknown factorization is the product of 2 or 3 primes. Precise formulation Given integers a and T, a is said to be a ''quadratic residue modulo T'' if there exists an integer b such that :a \equiv b^2 \pmod T. Otherwise we say it is a quadratic non-residue. When T = p is a prime, it is customary to use the Legendre symbol ...
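A toy enumeration of quadratic residues modulo a small composite (with the factorization known this is trivial; the QRP concerns the case where it is not):

from math import gcd

N = 21                          # toy composite, N = 3 * 7
units = [a for a in range(1, N) if gcd(a, N) == 1]
residues = sorted({b * b % N for b in units})
print(residues)                 # [1, 4, 16]: a quarter of the units, as expected for N = p_1 p_2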



Public-key Cryptography
Public-key cryptography, or asymmetric cryptography, is the field of cryptographic systems that use pairs of related keys. Each key pair consists of a public key and a corresponding private key. Key pairs are generated with cryptographic algorithms based on mathematical problems termed one-way functions. Security of public-key cryptography depends on keeping the private key secret; the public key can be openly distributed without compromising security. In a public-key encryption system, anyone with a public key can encrypt a message, yielding a ciphertext, but only those who know the corresponding private key can decrypt the ciphertext to obtain the original message. For example, a journalist can publish the public key of an encryption key pair on a web site so that sources can send secret messages to the news organization in ciphertext. Only the journalist who knows the corresponding private key can decrypt the ciphertexts to obtain the sources' messages ...



RSA Cryptosystem
RSA (Rivest–Shamir–Adleman) is a public-key cryptosystem that is widely used for secure data transmission. It is also one of the oldest. The acronym "RSA" comes from the surnames of Ron Rivest, Adi Shamir and Leonard Adleman, who publicly described the algorithm in 1977. An equivalent system was developed secretly in 1973 at Government Communications Headquarters (GCHQ), the British signals intelligence agency, by the English mathematician Clifford Cocks. That system was declassified in 1997. In a public-key cryptosystem, the encryption key is public and distinct from the decryption key, which is kept secret (private). An RSA user creates and publishes a public key based on two large prime numbers, along with an auxiliary value. The prime numbers are kept secret. Messages can be encrypted by anyone, via the public key, but can only be decoded by someone who knows the prime numbers. The security of RSA relies on the practical difficulty of factoring the product of two large prime numbers, the "factoring problem" ...
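A toy sketch of RSA key generation, encryption and decryption (textbook RSA with tiny primes and no padding, for illustration only):

from math import gcd

p, q = 61, 53                  # the secret primes
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)        # computable only with p and q
e = 17                         # public exponent (the "auxiliary value")
assert gcd(e, phi) == 1
d = pow(e, -1, phi)            # private exponent (Python 3.8+)

m = 65
c = pow(m, e, n)               # anyone can encrypt with (n, e)
assert pow(c, d, n) == m       # only the holder of d can decrypt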


Phi-hiding Assumption
The phi-hiding assumption or Φ-hiding assumption is an assumption about the difficulty of finding small factors of φ(''m'') where ''m'' is a number whose factorization is unknown, and φ is Euler's totient function. The security of many modern cryptosystems comes from the perceived difficulty of certain problems. Since the P versus NP problem is still unresolved, cryptographers cannot be sure that computationally intractable problems exist. Cryptographers thus make assumptions as to which problems are ''hard''. It is commonly believed that if ''m'' is the product of two large primes, then calculating φ(''m'') is currently computationally infeasible; this assumption is required for the security of the RSA cryptosystem. The Φ-hiding assumption is a stronger assumption, namely that if ''p''_1 and ''p''_2 are small primes exactly one of which divides φ(''m''), there is no polynomial-time algorithm which can distinguish which of the primes ''p''_1 and ''p''_2 divides φ(''m'') with probability significantly greater than one-half ...
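A small numeric illustration of the asymmetry the assumption relies on: with p and q in hand, deciding which small prime divides φ(m) is immediate (the toy primes below are our own choice):

p, q = 101, 103                 # toy primes; knowing them makes φ(m) trivial
m = p * q
phi = (p - 1) * (q - 1)         # 100 * 102 = 10200
# Which of the small primes 5 and 7 divides φ(m)? Easy here, because we know p and q;
# the Φ-hiding assumption says this choice is hard to make when only m is given.
print(phi % 5 == 0, phi % 7 == 0)   # True False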




Okamoto–Uchiyama Cryptosystem
The Okamoto–Uchiyama cryptosystem is a public key cryptosystem proposed in 1998 by Tatsuaki Okamoto and Shigenori Uchiyama. The system works in the multiplicative group of integers modulo n, (\mathbb{Z}/n\mathbb{Z})^*, where ''n'' is of the form ''p''^2''q'' and ''p'' and ''q'' are large primes. Operation Like many public key cryptosystems, this scheme works in the group (\mathbb{Z}/n\mathbb{Z})^*. This scheme is homomorphic and hence malleable. Key generation A public/private key pair is generated as follows: # Generate two large primes p and q. # Compute n = p^2 q. # Choose a random integer g \in \{2, \dots, n-1\} such that g^{p-1} \not\equiv 1 \pmod{p^2}. # Compute h = g^n \bmod n. The public key is then (n,g,h) and the private key is (p,q). Encryption A message m < p can be encrypted with the public key (n,g,h) as follows. # Choose a random integer r \in \{1, \dots, n-1\}. # Compute c = g^m h^r \bmod n. The value c is the encryption of m.
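A minimal sketch of the full scheme, including the standard decryption step via L(x) = (x - 1)/p, which the excerpt above omits (toy primes, illustrative only):

import random
from math import gcd

p, q = 11, 7                    # toy primes; real use needs large primes
p2 = p * p
n = p2 * q

g = 2
while pow(g, p - 1, p2) == 1:   # need g^(p-1) ≢ 1 (mod p^2)
    g += 1
assert gcd(g, n) == 1
h = pow(g, n, n)

def L(x):                       # L(x) = (x - 1) / p, for x ≡ 1 (mod p)
    return (x - 1) // p

def encrypt(m, r):              # requires 0 <= m < p
    return (pow(g, m, n) * pow(h, r, n)) % n

def decrypt(c):
    # h^r vanishes mod p^2 after raising to p-1, leaving g^(m(p-1)).
    a = L(pow(c, p - 1, p2))
    b = L(pow(g, p - 1, p2))
    return (a * pow(b, -1, p)) % p

m = 9
assert decrypt(encrypt(m, random.randrange(1, n))) == m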

