Indistinguishability Obfuscation
In cryptography, indistinguishability obfuscation (abbreviated IO or iO) is a type of software obfuscation with the defining property that obfuscating any two programs that compute the same mathematical function results in programs that cannot be distinguished from each other. Informally, such obfuscation hides the implementation of a program while still allowing users to run it. Formally, iO satisfies the property that obfuscations of two circuits of the same size which implement the same function are computationally indistinguishable. Indistinguishability obfuscation has several interesting theoretical properties. Firstly, iO is the "best-possible" obfuscation (in the sense that any secret about a program that can be hidden by any obfuscator at all can also be hidden by iO). Secondly, iO can be used to construct nearly the entire gamut of cryptographic primitives, including both mundane ones such as public-key cryptography and more exotic ones such as deniable encryption and functional encryption.
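As a worked restatement of the formal property above (our notation, not verbatim from the source): an efficient algorithm i\mathcal{O} is an indistinguishability obfuscator if, for every pair of same-size circuits C_0, C_1 computing the same function and every probabilistic polynomial-time distinguisher D,

\[
\bigl|\, \Pr[\, D(i\mathcal{O}(C_0)) = 1 \,] - \Pr[\, D(i\mathcal{O}(C_1)) = 1 \,] \,\bigr| \;\le\; \mathrm{negl}(\lambda),
\]

where \lambda is the security parameter and \mathrm{negl} denotes a negligible function; correctness additionally requires that i\mathcal{O}(C) computes the same function as C.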

Cryptography
Cryptography, or cryptology (from Ancient Greek ''kryptós'', "hidden, secret", and ''graphein'', "to write", or ''-logia'', "study", respectively), is the practice and study of techniques for secure communication in the presence of adversarial behavior. More generally, cryptography is about constructing and analyzing protocols that prevent third parties or the public from reading private messages. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, information security, electrical engineering, digital signal processing, physics, and others. Core concepts related to information security (data confidentiality, data integrity, authentication, and non-repudiation) are also central to cryptography. Practical applications of cryptography include electronic commerce, chip-based payment cards, digital currencies, computer passwords, and military communications. Cryptography prior to the modern age was effectively synonymous with encryption.

Probabilistic Polynomial-time
In complexity theory, PP is the class of decision problems solvable by a probabilistic Turing machine in polynomial time, with an error probability of less than 1/2 for all instances. The abbreviation PP refers to probabilistic polynomial time. The complexity class was defined by Gill in 1977. If a decision problem is in PP, then there is an algorithm for it that is allowed to flip coins and make random decisions. It is guaranteed to run in polynomial time. If the answer is YES, the algorithm will answer YES with probability more than 1/2. If the answer is NO, the algorithm will answer YES with probability less than 1/2. In more practical terms, it is the class of problems that can be solved to any fixed degree of accuracy by running a randomized, polynomial-time algorithm a sufficient (but bounded) number of times. Turing machines that are polynomially bounded and probabilistic are characterized as PPT, which stands for probabilistic polynomial-time machines.
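To make the amplification idea in the last two sentences concrete, here is a minimal Python sketch (ours, not from the source; it assumes the per-run error probability is bounded below 1/2 by a known margin gamma, as in the "fixed degree of accuracy" characterization):

import random

def amplify(decide, x, gamma, trials=None):
    """Majority vote over repeated runs of a randomized decision
    procedure decide(x) -> bool whose error probability is at most
    1/2 - gamma on every instance (a hypothetical interface).
    By a Chernoff bound, ~O(1/gamma^2) independent runs drive the
    error of the majority vote below any fixed constant."""
    if trials is None:
        trials = max(1, int(10 / (gamma ** 2)))  # enough for high confidence
    yes_votes = sum(1 for _ in range(trials) if decide(x))
    return yes_votes * 2 > trials  # strict majority says YES

# Toy example: a noisy test for "is x even" that errs 30% of the time.
def noisy_is_even(x):
    correct = (x % 2 == 0)
    return correct if random.random() < 0.7 else not correct

print(amplify(noisy_is_even, 42, gamma=0.2))  # True with high probability
print(amplify(noisy_is_even, 7, gamma=0.2))   # False with high probability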


Pseudorandom Generator
In theoretical computer science and cryptography, a pseudorandom generator (PRG) for a class of statistical tests is a deterministic procedure that maps a random seed to a longer pseudorandom string such that no statistical test in the class can distinguish between the output of the generator and the uniform distribution. The random seed itself is typically a short binary string drawn from the uniform distribution. Many different classes of statistical tests have been considered in the literature, among them the class of all Boolean circuits of a given size. It is not known whether good pseudorandom generators for this class exist, but it is known that their existence is in a certain sense equivalent to (unproven) circuit lower bounds in computational complexity theory. Hence the construction of pseudorandom generators for the class of Boolean circuits of a given size rests on currently unproven hardness assumptions. Definition: Let \mathcal{A} be a class of functions; these functions are the statistical tests that the pseudorandom generator will try to fool.
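A worked restatement of this definition in standard notation (our formulation, consistent with the prose above): a function G : \{0,1\}^\ell \rightarrow \{0,1\}^n with n > \ell is a pseudorandom generator for \mathcal{A} with bias \epsilon if, for every test A \in \mathcal{A},

\[
\bigl|\, \Pr_{s \sim U_\ell}[\, A(G(s)) = 1 \,] - \Pr_{r \sim U_n}[\, A(r) = 1 \,] \,\bigr| \;\le\; \epsilon,
\]

where U_k denotes the uniform distribution on \{0,1\}^k. The seed s is short and uniform; the output G(s) is longer and "looks uniform" to every test in the class.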


Parity Learning
Parity learning is a problem in machine learning. An algorithm that solves this problem must find a function ''ƒ'', given some samples (''x'', ''ƒ''(''x'')) and the assurance that ''ƒ'' computes the parity of bits at some fixed locations. The samples are generated using some distribution over the input. The problem is easy to solve using Gaussian elimination, provided that a sufficient number of samples (from a distribution which is not too skewed) are provided to the algorithm. Noisy version ("Learning Parity with Noise"): In Learning Parity with Noise (LPN), the samples may contain some error. Instead of samples (''x'', ''ƒ''(''x'')), the algorithm is provided with (''x'', ''y''), where for a random Boolean b \in \{0,1\},

\[
y = \begin{cases} f(x), & \text{if } b \\ 1 - f(x), & \text{otherwise.} \end{cases}
\]

The noisy version of the parity learning problem is conjectured to be hard. See also: Learning with errors (LWE).
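For the noiseless case, the Gaussian-elimination solution mentioned above can be sketched in a few lines of Python (an illustrative sketch; function and variable names are ours):

import random

def solve_parity(samples, n):
    """Recover the secret parity vector from noiseless samples via
    Gaussian elimination over GF(2). Each sample is (x, y) with
    y = <x, secret> mod 2."""
    rows = [list(x) + [y] for x, y in samples]  # augmented matrix [x | y]
    m, row = len(rows), 0
    for col in range(n):
        # Find a pivot row with a 1 in this column.
        piv = next((r for r in range(row, m) if rows[r][col]), None)
        if piv is None:
            continue  # no information about this coordinate yet
        rows[row], rows[piv] = rows[piv], rows[row]
        # Eliminate this column from every other row (row XOR = GF(2) add).
        for r in range(m):
            if r != row and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[row])]
        row += 1
    # Each pivot row is now a unit vector; the secret bit is in the
    # augmented (last) position.
    secret = [0] * n
    for r in rows[:row]:
        col = r.index(1)  # the pivot column
        if col < n:
            secret[col] = r[n]
    return secret

# Demo with a hypothetical secret: y is the parity of the selected bits.
secret = [1, 0, 1, 1]
samples = []
for _ in range(20):
    x = [random.randint(0, 1) for _ in range(4)]
    y = sum(a & b for a, b in zip(x, secret)) % 2
    samples.append((x, y))
print(solve_parity(samples, 4))  # [1, 0, 1, 1] given enough samples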


Learning With Errors
Learning with errors (LWE) is the computational problem of inferring a linear n-ary function f over a finite ring from given samples y_i = f(\mathbf{x}_i), some of which may be erroneous. The LWE problem is conjectured to be hard to solve, and thus to be useful in cryptography. More precisely, the LWE problem is defined as follows. Let \mathbb{Z}_q denote the ring of integers modulo q and let \mathbb{Z}_q^n denote the set of n-vectors over \mathbb{Z}_q. There exists a certain unknown linear function f : \mathbb{Z}_q^n \rightarrow \mathbb{Z}_q, and the input to the LWE problem is a sample of pairs (\mathbf{x}, y), where \mathbf{x} \in \mathbb{Z}_q^n and y \in \mathbb{Z}_q, so that with high probability y = f(\mathbf{x}). Furthermore, the deviation from the equality is according to some known noise model. The problem calls for finding the function f, or some close approximation thereof, with high probability. The LWE problem was introduced by Oded Regev in 2005 (who won the 2018 Gödel Prize for this work); it is a generalization of the parity learning problem.
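A minimal sketch of how LWE samples are generated (our toy parameters; real schemes use much larger n and q and draw the error from a discrete Gaussian):

import random

def lwe_samples(n=8, q=97, num=12, noise=1):
    """Generate LWE samples (x, y) with y = <x, s> + e (mod q).
    Toy parameters for illustration; noise bounds |e|."""
    s = [random.randrange(q) for _ in range(n)]  # the secret
    samples = []
    for _ in range(num):
        x = [random.randrange(q) for _ in range(n)]
        e = random.randint(-noise, noise)        # small error term
        y = (sum(xi * si for xi, si in zip(x, s)) + e) % q
        samples.append((x, y))
    return s, samples

secret, samples = lwe_samples()
# Without the error e, Gaussian elimination over Z_q would recover s,
# exactly as in parity learning; the noise is what makes LWE hard.
x, y = samples[0]
print((y - sum(a * b for a, b in zip(x, secret))) % q)  # e mod q: 0, 1, or q-1 here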


XDH Assumption
The external Diffie–Hellman (XDH) assumption is a computational hardness assumption used in elliptic curve cryptography. The XDH assumption holds that there exist certain subgroups of elliptic curves which have useful properties for cryptography. Specifically, XDH implies the existence of two distinct groups \langle G_1, G_2 \rangle with the following properties:
# The discrete logarithm problem (DLP), the computational Diffie–Hellman problem (CDH), and the computational co-Diffie–Hellman problem are all intractable in G_1 and G_2.
# There exists an efficiently computable bilinear map (pairing) e(\cdot, \cdot) : G_1 \times G_2 \rightarrow G_T.
# The decisional Diffie–Hellman problem (DDH) is intractable in G_1.
The above formulation is referred to as asymmetric XDH. A stronger version of the assumption (symmetric XDH, or SXDH) holds if DDH is ''also'' intractable in G_2. The XDH assumption is used in some pairing-based cryptographic protocols. In certain elliptic curve subgroups, the existence of an efficiently computable pairing can allow for practical solutions to the DDH problem.
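As a worked illustration (ours, not from the excerpt) of why pairings interact with DDH: with a symmetric pairing e : G \times G \rightarrow G_T, DDH in G is easy. Given a DDH instance (g, g^a, g^b, g^c), test

\[
e\!\left(g^a, g^b\right) = e(g,g)^{ab} \;\stackrel{?}{=}\; e\!\left(g, g^c\right) = e(g,g)^{c},
\]

which holds iff c \equiv ab modulo the group order. DDH can therefore only be hard in G_1, as XDH asserts, when no such symmetric pairing (nor an efficiently computable homomorphism from G_1 to G_2) is available.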


Computational Hardness Assumption
In computational complexity theory, a computational hardness assumption is the hypothesis that a particular problem cannot be solved efficiently (where ''efficiently'' typically means "in polynomial time"). It is not known how to prove (unconditional) hardness for essentially any useful problem. Instead, computer scientists rely on reductions to formally relate the hardness of a new or complicated problem to a computational hardness assumption about a problem that is better understood. Computational hardness assumptions are of particular importance in cryptography. A major goal in cryptography is to create cryptographic primitives with provable security. In some cases, cryptographic protocols are found to have information-theoretic security; the one-time pad is a common example. However, information-theoretic security cannot always be achieved; in such cases, cryptographers fall back to computational security. Roughly speaking, this means that these systems are secure ''assuming that all adversaries are computationally limited''.
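The reduction logic mentioned above, stated schematically (our notation): if problem A reduces to problem B in polynomial time,

\[
A \le_p B \quad\Longrightarrow\quad \bigl(B \in \mathrm{P} \Rightarrow A \in \mathrm{P}\bigr),
\]

so, contrapositively, if A is assumed hard then B must be hard too. Cryptographic security proofs run this way: any efficient attacker breaking the scheme (B) could be used to solve the assumed-hard problem (A).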


Provable Security
Provable security refers to any type or level of computer security that can be proved. It is used in different ways by different fields. Usually, this refers to mathematical proofs, which are common in cryptography. In such a proof, the capabilities of the attacker are defined by an adversarial model (also referred to as attacker model): the aim of the proof is to show that the attacker must solve the underlying hard problem in order to break the security of the modelled system. Such a proof generally does not consider side-channel attacks or other implementation-specific attacks, because they are usually impossible to model without implementing the system (and thus, the proof only applies to this implementation). Outside of cryptography, the term is often used in conjunction with secure coding and security by design, both of which can rely on proofs to show the security of a particular approach. As with the cryptographic setting, this involves an attacker model and a model of the system.

P = NP
The P versus NP problem is a major unsolved problem in theoretical computer science. In informal terms, it asks whether every problem whose solution can be quickly verified can also be quickly solved. The informal term ''quickly'', used above, means the existence of an algorithm solving the task that runs in polynomial time, such that the time to complete the task varies as a polynomial function of the size of the input to the algorithm (as opposed to, say, exponential time). The general class of questions for which some algorithm can provide an answer in polynomial time is "P" or "class P". For some questions, there is no known way to find an answer quickly, but if one is provided with information showing what the answer is, it is possible to verify the answer quickly. The class of questions for which an answer can be ''verified'' in polynomial time is NP, which stands for "nondeterministic polynomial time" (a nondeterministic Turing machine can move to a state that is not determined by the previous state).
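The verify-versus-solve asymmetry can be illustrated with subset sum (our choice of example, not from the excerpt): checking a proposed answer takes one pass over the input, while no polynomial-time algorithm is known for finding one.

from itertools import combinations

def verify(nums, target, indices):
    """Polynomial-time verification: given a certificate (a set of
    indices), checking it is a single pass over the input."""
    return sum(nums[i] for i in indices) == target

def solve(nums, target):
    """Subset sum is NP-complete; this brute force tries all 2^n
    subsets, and no polynomial-time method is known."""
    for r in range(len(nums) + 1):
        for idx in combinations(range(len(nums)), r):
            if verify(nums, target, idx):
                return idx
    return None

nums = [3, 34, 4, 12, 5, 2]
cert = solve(nums, 9)               # exponential-time search...
print(cert, verify(nums, 9, cert))  # ...but checking the answer is fast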


Polynomial Hierarchy
In computational complexity theory, the polynomial hierarchy (sometimes called the polynomial-time hierarchy) is a hierarchy of complexity classes that generalize the classes NP and co-NP. Each class in the hierarchy is contained within PSPACE. The hierarchy can be defined using oracle machines or alternating Turing machines. It is a resource-bounded counterpart to the arithmetical hierarchy and analytical hierarchy from mathematical logic. The union of the classes in the hierarchy is denoted PH. Classes within the hierarchy have complete problems (with respect to polynomial-time reductions) which ask if quantified Boolean formulae hold, for formulae with restrictions on the quantifier order. It is known that equality between classes on the same level or consecutive levels in the hierarchy would imply a "collapse" of the hierarchy to that level. Definitions: There are multiple equivalent definitions of the classes of the polynomial hierarchy, among them the oracle definition.
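The oracle definition, restated in standard notation (ours, consistent with the text): the base level and inductive step are

\[
\Sigma_0^p = \Pi_0^p = \Delta_0^p = \mathrm{P},
\qquad
\Sigma_{k+1}^p = \mathrm{NP}^{\Sigma_k^p},
\quad
\Pi_{k+1}^p = \mathrm{coNP}^{\Sigma_k^p},
\quad
\Delta_{k+1}^p = \mathrm{P}^{\Sigma_k^p},
\]

so \Sigma_1^p = \mathrm{NP} and \Pi_1^p = \mathrm{coNP}, and

\[
\mathrm{PH} = \bigcup_{k \ge 0} \Sigma_k^p \subseteq \mathrm{PSPACE}.
\]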


Black-box Obfuscation
In cryptography, black-box obfuscation was a proposed cryptographic primitive which would allow a computer program to be obfuscated in a way such that it was impossible to determine anything about it except its input and output behavior. Black-box obfuscation has been proven to be impossible, even in principle. Impossibility: the unobfuscatable programs. Barak et al. constructed a family of unobfuscatable programs, for which an efficient attacker can always learn more from ''any'' obfuscated code than from black-box access. Broadly, they start by engineering a special pair of programs that cannot be obfuscated together. For some randomly selected strings \alpha, \beta of a fixed, pre-determined length k, define one program to be one that computes

\[
C_{\alpha,\beta}(x) := \begin{cases} \beta & \text{if } x = \alpha \\ 0 & \text{otherwise} \end{cases}
\]

and the other program to be one that computes

\[
D_{\alpha,\beta}(X) := \begin{cases} 1 & \text{if } X(\alpha) = \beta \text{ and } X \text{ runs in time} \le \mathrm{poly}(k) \\ 0 & \text{otherwise.} \end{cases}
\]

(Here, D_{\alpha,\beta} interprets its input X as the code for a program.)
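To see why this pair is unobfuscatable, here is a toy Python sketch (ours; a callable stands in for program code, a simplification of the actual construction):

import secrets

K = 16  # secret length in bytes (128 bits; toy illustration)
alpha = secrets.token_hex(K)  # random secret input
beta = secrets.token_hex(K)   # random secret output

def C(x):
    """Outputs beta only on the secret input alpha."""
    return beta if x == alpha else "0"

def D(program):
    """Takes program code (modeled here as a callable) and checks
    whether it maps alpha to beta."""
    return 1 if program(alpha) == beta else 0

# An attacker holding the obfuscated *code* of C can feed it to D:
# any functionally equivalent program passes, so the link is revealed.
print(D(C))  # 1

# A black-box simulator only sees input/output behavior; it essentially
# never guesses the 128-bit alpha, so it cannot produce any X with
# X(alpha) == beta except with negligible probability.
print(C("some guess"))  # "0" -- black-box queries reveal nothing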


Zero-knowledge Proof
In cryptography, a zero-knowledge proof or zero-knowledge protocol is a method by which one party (the prover) can prove to another party (the verifier) that a given statement is true while the prover avoids conveying any additional information apart from the fact that the statement is indeed true. The essence of zero-knowledge proofs is that it is trivial to prove that one possesses knowledge of certain information by simply revealing it; the challenge is to prove such possession without revealing the information itself or any additional information. If proving a statement requires that the prover possess some secret information, then the verifier will not be able to prove the statement to anyone else without possessing the secret information. The statement being proved must include the assertion that the prover has such knowledge, but without including or transmitting the knowledge itself in the assertion. Otherwise, the statement would not be proved in zero-knowledge, because it would provide the verifier with additional information about the statement by the end of the protocol.
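As an illustrative sketch (ours, not from the excerpt), here is one round of a Schnorr-style interactive proof of knowledge of a discrete logarithm, with unsafely small parameters:

import random

# Toy public parameters (far too small for real use).
p = 467  # prime modulus (p = 2q + 1)
q = 233  # prime order of the subgroup
g = 4    # generator of the order-q subgroup mod p

x = random.randrange(1, q)  # prover's secret (the discrete log)
y = pow(g, x, p)            # public value: y = g^x mod p

r = random.randrange(1, q)  # prover: random commitment exponent
t = pow(g, r, p)            # prover sends commitment t = g^r
c = random.randrange(q)     # verifier sends random challenge c
s = (r + c * x) % q         # prover sends response s = r + c*x mod q

# Verifier checks g^s == t * y^c (mod p): convinced the prover knows x,
# yet the transcript (t, c, s) can be simulated without x, so nothing
# about x leaks.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("verified")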