Pseudorandom Permutation
In cryptography, a pseudorandom permutation (PRP) is a function that cannot be distinguished with practical effort from a random permutation (that is, a permutation selected uniformly at random from the family of all permutations on the function's domain).

Definition

Let ''F'' be a mapping \{0,1\}^n \times \{0,1\}^s \rightarrow \{0,1\}^n. ''F'' is a PRP if and only if:
* For any K \in \{0,1\}^s, F_K is a bijection from \{0,1\}^n to \{0,1\}^n, where F_K(x) = F(x, K).
* For any K \in \{0,1\}^s, there is an "efficient" algorithm to evaluate F_K(x) for any x \in \{0,1\}^n.
* For all probabilistic polynomial-time distinguishers D: \left| \Pr\left(D^{F_K}(1^n) = 1\right) - \Pr\left(D^{f_n}(1^n) = 1\right) \right| < \varepsilon(s), where K \in \{0,1\}^s is chosen uniformly at random and f_n is chosen uniformly at random from the set of permutations on ''n''-bit strings.

A pseudorandom permutation family is a collection of pseudorandom permutations, where a specific p ...
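To make the distinguishing game in the third condition concrete, here is a minimal Python sketch of the experiment. The keyed map `toy_prp` and the distinguisher are hypothetical illustrations of mine, not part of the definition; the toy cipher is deliberately weak, so the distinguisher's advantage comes out large rather than negligible:

```python
import random

N = 256  # domain size: 8-bit strings

def toy_prp(key, x):
    """Hypothetical toy keyed permutation on Z_256: x -> (a*x + b) mod 256.
    The multiplier is forced odd so the map is a bijection. Deliberately weak."""
    a = (key & 0xFF) | 1          # odd multiplier -> invertible mod 2^8
    b = (key >> 8) & 0xFF
    return (a * x + b) % N

def random_permutation(rng):
    """A truly random permutation of the domain, as a lookup table."""
    table = list(range(N))
    rng.shuffle(table)
    return table.__getitem__

def linearity_distinguisher(oracle):
    """Outputs 1 when the oracle looks affine. Succeeds against toy_prp
    every time, but against a random permutation only with chance ~1/256."""
    return (oracle(2) - oracle(1)) % N == (oracle(1) - oracle(0)) % N

def advantage(distinguisher, trials=10_000):
    rng = random.Random(0)
    hits_prp = hits_rand = 0
    for _ in range(trials):
        key = rng.getrandbits(16)
        hits_prp += distinguisher(lambda x, k=key: toy_prp(k, x))   # world 1: F_K
        hits_rand += distinguisher(random_permutation(rng))         # world 2: f_n
    return abs(hits_prp - hits_rand) / trials

print(f"distinguishing advantage ~ {advantage(linearity_distinguisher):.3f}")
```

A secure PRP (for example, a good block cipher) would drive this advantage down to a negligible function of the key length for every efficient distinguisher.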

Cryptography
Cryptography, or cryptology (from Greek ''kryptós'', "hidden, secret", and ''graphein'', "to write", or ''-logia'', "study", respectively), is the practice and study of techniques for secure communication in the presence of adversarial behavior. More generally, cryptography is about constructing and analyzing protocols that prevent third parties or the public from reading private messages. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, information security, electrical engineering, digital signal processing, physics, and others. Core concepts related to information security (data confidentiality, data integrity, authentication, and non-repudiation) are also central to cryptography. Practical applications of cryptography include electronic commerce, chip-based payment cards, digital currencies, computer passwords, and military communications. ...


Randomized Algorithm
A randomized algorithm is an algorithm that employs a degree of randomness as part of its logic or procedure. The algorithm typically uses uniformly random bits as an auxiliary input to guide its behavior, in the hope of achieving good performance in the "average case" over all possible choices of randomness determined by the random bits; thus either the running time, or the output (or both) are random variables. There is a distinction between algorithms that use the random input so that they always terminate with the correct answer, but where the expected running time is finite (Las Vegas algorithms, for example Quicksort), and algorithms which have a chance of producing an incorrect result (Monte Carlo algorithms, for example the Monte Carlo algorithm for the MFAS problem) or fail to produce a result either by signaling a failure or failing to terminate. In some cases, probabilistic algorithms are the only practical means of solving a problem. In common practice, randomized alg ...
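As a concrete illustration of the Las Vegas pattern mentioned above, here is a minimal Python sketch (mine, not from the text) of randomized quicksort: the random pivot choice affects only the running time, never the correctness of the output.

```python
import random

def randomized_quicksort(items):
    """Las Vegas algorithm: always returns a correctly sorted list;
    only the running time depends on the random pivot choices."""
    if len(items) <= 1:
        return list(items)
    pivot = random.choice(items)               # random pivot: expected O(n log n)
    smaller = [x for x in items if x < pivot]
    equal   = [x for x in items if x == pivot]
    larger  = [x for x in items if x > pivot]
    return randomized_quicksort(smaller) + equal + randomized_quicksort(larger)

print(randomized_quicksort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```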


Theory Of Cryptography
A theory is a systematic and rational form of abstract thinking about a phenomenon, or the conclusions derived from such thinking. It involves contemplative and logical reasoning, often supported by processes such as observation, experimentation, and research. Theories can be scientific, falling within the realm of empirical and testable knowledge, or they may belong to non-scientific disciplines, such as philosophy, art, or sociology. In some cases, theories may exist independently of any formal discipline. In modern science, the term "theory" refers to scientific theories, a well-confirmed type of explanation of nature, made in a way consistent with the scientific method, and fulfilling the criteria required by modern science. Such theories are described in such a way that scientific tests should be able to provide empirical support for it, or empirical contradi ...


Format-preserving Encryption
In cryptography, format-preserving encryption (FPE) refers to encrypting in such a way that the output (the ciphertext) is in the same format as the input (the plaintext). The meaning of "format" varies. Typically only finite sets of characters are used: numeric, alphabetic or alphanumeric. For example:
* Encrypting a 16-digit credit card number so that the ciphertext is another 16-digit number.
* Encrypting an English word so that the ciphertext is another English word.
* Encrypting an ''n''-bit number so that the ciphertext is another ''n''-bit number (this is the definition of an ''n''-bit block cipher).
For such finite domains, and for the purposes of the discussion below, the cipher is equivalent to a permutation of ''N'' integers, where ''N'' is the size of the domain.

Motivation

Restricted field lengths or formats

One motivation for using FPE comes from the problems associated with integrating encryption into existing applications with well-defined data models. A typica ...
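One standard way to realize such a permutation on an arbitrary domain of size ''N'' is cycle walking: repeatedly apply a permutation of some larger, convenient domain until the result lands back inside the target domain. The following Python sketch is an illustration of that idea under assumed names; the affine `toy_prp` is a hypothetical, insecure stand-in for a real block cipher.

```python
def cycle_walk_encrypt(prp, x, n):
    """Format-preserving encryption of x in {0, ..., n-1} by cycle walking:
    apply a permutation of a superset domain until the output falls back in
    range. Terminates because prp permutes the superset, so iterating from a
    point inside {0, ..., n-1} must eventually return to that set."""
    assert 0 <= x < n
    y = prp(x)
    while y >= n:
        y = prp(y)
    return y

# Hypothetical toy PRP on 16-bit integers (an affine permutation; NOT secure,
# just enough to demonstrate that the output format matches the input format).
def toy_prp(x):
    return (40503 * x + 1234) % (1 << 16)   # odd multiplier -> bijection mod 2^16

# Encrypt a 4-digit PIN so the ciphertext is another 4-digit value.
print(cycle_walk_encrypt(toy_prp, 4071, 10_000))
```

Decryption walks the cycle in the opposite direction with the inverse permutation, stopping at the first value inside the domain.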

Block Cipher
In cryptography, a block cipher is a deterministic algorithm that operates on fixed-length groups of bits, called ''blocks''. Block ciphers are the elementary building blocks of many cryptographic protocols. They are ubiquitous in the storage and exchange of data, where such data is secured and authenticated via encryption. A block cipher applies an unvarying transformation to each block. Even a secure block cipher is suitable for the encryption of only a single block of data at a time, using a fixed key. A multitude of modes of operation have been designed to allow their repeated use in a secure way to achieve the security goals of confidentiality and authenticity. However, block ciphers may also feature as building blocks in other cryptographic protocols, such as universal hash functions and pseudorandom number generators.

Definition

A block cipher consists of two paired algorithms, one for encryption, ''E'', and the other for decryption, ''D''. Both algorithms accept two inputs: an input ...
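The defining contract between the paired algorithms is that decryption inverts encryption under the same key: D_K(E_K(P)) = P for every key K and block P. A minimal Python sketch of that contract, using a hypothetical single-XOR "cipher" of mine that is cryptographically worthless but satisfies the definition:

```python
BLOCK_BITS = 64
MASK = (1 << BLOCK_BITS) - 1

# Toy block cipher on 64-bit blocks: encryption and decryption are paired,
# key-indexed permutations of the block space (here, just XOR with the key).
def encrypt(key, block):
    return (block ^ key) & MASK

def decrypt(key, block):
    return (block ^ key) & MASK   # XOR is its own inverse

key, plaintext = 0x0123456789ABCDEF, 0xDEADBEEFCAFEF00D
ciphertext = encrypt(key, plaintext)
assert decrypt(key, ciphertext) == plaintext   # D_K(E_K(P)) = P for every key
```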




Advanced Encryption Standard
The Advanced Encryption Standard (AES), also known by its original name Rijndael, is a specification for the encryption of electronic data established by the U.S. National Institute of Standards and Technology (NIST) in 2001. AES is a variant of the Rijndael block cipher developed by two Belgian cryptographers, Joan Daemen and Vincent Rijmen, who submitted a proposal to NIST during the AES selection process. Rijndael is a family of ciphers with different key and block sizes. For AES, NIST selected three members of the Rijndael family, each with a block size of 128 bits, but three different key lengths: 128, 192 and 256 bits. AES has been adopted by the U.S. government. It supersedes the Data Encryption Standard (DES), which was published in 1977. The algorithm described by AES is a symmetric-key algorithm, meaning the same key is used for both encrypting ...
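As a quick illustration of AES's fixed 128-bit block size and symmetric key use, this sketch encrypts and decrypts a single block with a 256-bit key. It assumes the third-party pyca/`cryptography` package (my choice for the example; any AES implementation would do):

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)                      # AES-256 key (one of 128/192/256 bits)
iv = os.urandom(16)                       # 128-bit IV, matching the block size
cipher = Cipher(algorithms.AES(key), modes.CBC(iv))

encryptor = cipher.encryptor()
plaintext = b"sixteen byte msg"           # exactly one 128-bit block, no padding
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

decryptor = cipher.decryptor()            # same key decrypts: symmetric cipher
assert decryptor.update(ciphertext) + decryptor.finalize() == plaintext
```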

Data Encryption Standard
The Data Encryption Standard (DES) is a symmetric-key algorithm for the encryption of digital data. Although its short key length of 56 bits makes it too insecure for modern applications, it has been highly influential in the advancement of cryptography. Developed in the early 1970s at IBM and based on an earlier design by Horst Feistel, the algorithm was submitted to the National Bureau of Standards (NBS) following the agency's invitation to propose a candidate for the protection of sensitive, unclassified electronic government data. In 1976, after consultation with the National Security Agency (NSA), the NBS selected a slightly modified version (strengthened against differential cryptanalysis, but weakened against brute-force attacks), which was published as an official Federal Information Processing Standard (FIPS) for the United States in 1977. The publication of an NSA-approved encryption standard led to its quick international adoption and widespread academic scrutiny. C ...
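DES follows Horst Feistel's network structure, which has the convenient property that decryption is the same process run with the round keys in reverse order, whatever the round function is. A minimal Python sketch of that structure (the round function and keys are toys of my choosing, not DES itself):

```python
def feistel_round(half, round_key):
    """Hypothetical toy round function on 32-bit halves (DES uses expansion,
    S-boxes and permutations here; any function preserves invertibility)."""
    return ((half * 2654435761) ^ round_key) & 0xFFFFFFFF

def feistel_encrypt(block, round_keys):
    """Encrypt a 64-bit block: split into halves, one Feistel swap per key."""
    left, right = block >> 32, block & 0xFFFFFFFF
    for k in round_keys:
        left, right = right, left ^ feistel_round(right, k)
    return (left << 32) | right

def feistel_decrypt(block, round_keys):
    """Decryption: the same rounds with reversed keys and swapped roles."""
    left, right = block >> 32, block & 0xFFFFFFFF
    for k in reversed(round_keys):
        right, left = left, right ^ feistel_round(left, k)
    return (left << 32) | right

keys = [0xA5A5A5A5, 0x3C3C3C3C, 0x0F0F0F0F, 0xF0F0F0F0]
msg = 0x0123456789ABCDEF
assert feistel_decrypt(feistel_encrypt(msg, keys), keys) == msg
```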


Message Authentication Code
In cryptography, a message authentication code (MAC), sometimes known as an authentication tag, is a short piece of information used for authenticating and integrity-checking a message. In other words, it is used to confirm that the message came from the stated sender (its authenticity) and has not been changed (its integrity). The MAC value allows verifiers (who also possess a secret key) to detect any changes to the message content.

Terminology

The term message integrity code (MIC) is frequently substituted for the term ''MAC'', especially in communications, to distinguish it from the use of the latter as ''media access control address'' (''MAC address''). However, some authors use MIC to refer to a message digest, which aims only to uniquely but opaquely identify a single message. RFC 4949 recommends avoiding the term ''message integrity code'' (MIC), and instead using ''checksum'', ''error detection code'', ''hash'', ''keyed hash'' ...
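To make the verifier's role concrete, here is a short sketch using HMAC-SHA256 from Python's standard library (one common MAC construction; the definition above is not tied to it). Sender and verifier share the key; the verifier recomputes the tag and compares in constant time:

```python
import hashlib
import hmac

key = b"shared secret key"            # known to both sender and verifier
message = b"transfer $100 to alice"

# Sender attaches a tag computed from the key and the message.
tag = hmac.new(key, message, hashlib.sha256).digest()

# Verifier recomputes the tag over the received message and compares.
expected = hmac.new(key, message, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)          # authentic and unmodified

tampered = b"transfer $900 to alice"
forged = hmac.new(key, tampered, hashlib.sha256).digest()
assert not hmac.compare_digest(tag, forged)        # any change is detected
```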


Oracle Machine
In complexity theory and computability theory, an oracle machine is an abstract machine used to study decision problems. It can be visualized as a black box, called an oracle, which is able to solve certain problems in a single operation. The problem can be of any complexity class. Even undecidable problems, such as the halting problem, can be used.

Oracles

An oracle machine can be conceived as a Turing machine connected to an oracle. The oracle, in this context, is an entity capable of solving some problem, which for example may be a decision problem or a function problem. The problem does not have to be computable; the oracle is not assumed to be a Turing machine or computer program. The oracle is simply a "black box" that is able to produce a solution for any instance of a given computational problem:
* A decision problem is represented as a set ''A'' of natural numbers (or strings). An instance of the problem is an arbitrary natural number (or string). The solution to t ...
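A loose programming analogy (mine, not part of the formal definition): an oracle machine is like an algorithm that receives a membership-testing black box as a parameter and may call it in a single step, without caring how, or even whether, the box could be computed.

```python
from typing import Callable

def decides_complement(oracle: Callable[[int], bool]) -> Callable[[int], bool]:
    """Given an oracle for membership in a set A of naturals, decide the
    complement of A with one oracle query per instance. The oracle is a
    black box: this works even if A itself is undecidable."""
    return lambda n: not oracle(n)

# Toy oracle: membership in the set of even numbers (a stand-in for any A).
is_even = lambda n: n % 2 == 0
is_odd = decides_complement(is_even)
print(is_odd(7), is_odd(8))   # True False
```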

PP (complexity)
In complexity theory, PP, or PPT, is the class of decision problems solvable by a probabilistic Turing machine in polynomial time, with an error probability of less than 1/2 for all instances. The abbreviation PP refers to probabilistic polynomial time. The complexity class was defined by Gill in 1977. If a decision problem is in PP, then there is an algorithm running in polynomial time that is allowed to make random decisions, such that it returns the correct answer with chance higher than 1/2. In more practical terms, it is the class of problems that can be solved to any fixed degree of accuracy by running a randomized, polynomial-time algorithm a sufficient (but bounded) number of times. Turing machines that are polynomially bounded and probabilistic are characterized as PPT, which stands for probabilistic polynomial-time machines. This characterization of Turing machines does not require a bounded error probability. Hence, PP is the complexity class containing all problems s ...
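To illustrate the amplification idea in the "fixed degree of accuracy" sentence, the following simulation (a toy of mine) takes a randomized decider whose correctness gap is a fixed constant (correct with probability 0.55), runs it repeatedly, and returns the majority vote; the error probability drops rapidly with the number of repetitions.

```python
import random

def noisy_decider(truth: bool, p_correct: float = 0.55) -> bool:
    """Stand-in for a randomized poly-time algorithm that answers
    correctly with probability p_correct > 1/2."""
    return truth if random.random() < p_correct else not truth

def amplified(truth: bool, repeats: int = 1001) -> bool:
    """Run the decider an odd number of times and take the majority vote."""
    votes = sum(noisy_decider(truth) for _ in range(repeats))
    return votes > repeats // 2

trials = 2_000
errors = sum(not amplified(True) for _ in range(trials))
print(f"majority-vote error rate: {errors / trials:.4f}")  # near 0, vs 0.45 per run
```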

Complexity Class
In computational complexity theory, a complexity class is a set of computational problems "of related resource-based complexity". The two most commonly analyzed resources are time and memory. In general, a complexity class is defined in terms of a type of computational problem, a model of computation, and a bounded resource like time or memory. In particular, most complexity classes consist of decision problems that are solvable with a Turing machine, and are differentiated by their time or space (memory) requirements. For instance, the class P is the set of decision problems solvable by a deterministic Turing machine in polynomial time. There are, however, many complexity classes defined in terms of other types of problems (e.g. counting problems and function problems) and using other models of computation (e.g. probabil ...


Polynomial Time
In theoretical computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor. Since an algorithm's running time may vary among different inputs of the same size, one commonly considers the worst-case time complexity, which is the maximum amount of time required for inputs of a given size. Less common, and usually specified explicitly, is the average-case complexity, which is the average of the time taken on inputs of a given size (this makes sense because there are only a finite number of possible inputs of a given size). In both cases, the time complexity is gener ...
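As a small concrete example of counting elementary operations (mine, not from the text): linear search performs at most n comparisons on an n-element list, so its worst-case time complexity is O(n), while the average over uniformly random target positions is about n/2 comparisons, which is still O(n).

```python
def linear_search(items, target):
    """Return (index, comparisons), counting each comparison as one
    elementary operation. Worst case: len(items) comparisons."""
    comparisons = 0
    for i, x in enumerate(items):
        comparisons += 1
        if x == target:
            return i, comparisons
    return -1, comparisons

data = list(range(1000))
print(linear_search(data, 0))     # best case: (0, 1)
print(linear_search(data, 999))   # worst case: (999, 1000)
print(linear_search(data, -1))    # absent target: (-1, 1000), also worst case
```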