Computationally Bounded Adversary
In information theory, the computationally bounded adversary problem is a different way of looking at the problem of sending data over a noisy channel. In earlier models, the best that could be done was to guarantee correct decoding for fewer than ''d''/2 errors, where ''d'' is the minimum Hamming distance of the code. The drawback of this approach is that it takes no account of the computing power actually available to the adversary; it concerns itself only with how many bits of a given code word can change while still letting the message decode properly. In the computationally bounded adversary model, the channel – the adversary – is restricted to performing only a reasonable amount of computation when deciding which bits of the code word to change. In other words, the model need not consider how many errors can possibly be handled, but only how many errors could possibly be introduced given a reasonable amount of computing power on the part of the adversary ...
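The classical guarantee being relaxed here can be made concrete with a toy sketch (the 5-fold repetition code is my choice of example, not the article's): as long as the adversary flips fewer than ''d''/2 bits, nearest-codeword decoding recovers the message no matter which bits were chosen.

    # Unique decoding within half the minimum distance, shown with the
    # 5-fold repetition code (minimum Hamming distance d = 5).
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    codewords = {(0, 0, 0, 0, 0): 0, (1, 1, 1, 1, 1): 1}

    def nearest_codeword(received):
        # Decode to the codeword closest in Hamming distance.
        return min(codewords, key=lambda c: hamming(c, received))

    corrupted = (0, 0, 1, 1, 1)  # adversary flips 2 bits, and 2 < d/2
    assert codewords[nearest_codeword(corrupted)] == 1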



Information Theory
Information theory is the mathematical study of the quantification, storage, and communication of information. The field was established and formalized by Claude Shannon in the 1940s, though early contributions were made in the 1920s through the works of Harry Nyquist and Ralph Hartley. It is at the intersection of electronic engineering, mathematics, statistics, computer science, neurobiology, physics, and electrical engineering. A key measure in information theory is entropy. Entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. For example, identifying the outcome of a fair coin flip (which has two equally likely outcomes) provides less information (lower entropy, less uncertainty) than identifying the outcome of a roll of a die (which has six equally likely outcomes). Some other important measures ...
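The coin/die comparison can be checked directly with Shannon's formula H = -Σ p·log₂ p; a minimal sketch:

    import math

    def entropy(probs):
        # Shannon entropy in bits: H = -sum(p * log2(p)).
        return -sum(p * math.log2(p) for p in probs if p > 0)

    print(entropy([1 / 2, 1 / 2]))  # fair coin flip: 1.0 bit
    print(entropy([1 / 6] * 6))     # fair die roll: about 2.585 bits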




Hamming Distance
In information theory, the Hamming distance between two strings or vectors of equal length is the number of positions at which the corresponding symbols are different. In other words, it measures the minimum number of ''substitutions'' required to change one string into the other, or equivalently, the minimum number of ''errors'' that could have transformed one string into the other. In a more general context, the Hamming distance is one of several string metrics for measuring the edit distance between two sequences. It is named after the American mathematician Richard Hamming. A major application is in coding theory, more specifically to block codes, in which the equal-length strings are vectors over a finite field. Definition: The Hamming distance between two equal-length strings of symbols is the number of positions at which the corresponding symbols are different. Examples: The symbols may be letters, bits, or decimal digits, among others ...
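The definition translates directly into code; a short sketch (the "karolin"/"kathrin" pair is a standard textbook example):

    def hamming_distance(s, t):
        # Count positions where two equal-length sequences differ.
        if len(s) != len(t):
            raise ValueError("sequences must have equal length")
        return sum(a != b for a, b in zip(s, t))

    print(hamming_distance("karolin", "kathrin"))        # 3
    print(hamming_distance([1, 0, 1, 1], [1, 1, 0, 1]))  # 2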


Adversary (cryptography)
In cryptography, an adversary (rarely opponent, enemy) is an entity whose aim is to prevent the users of the cryptosystem from achieving their goal (primarily privacy, integrity, and availability of data), often with malicious intent. An adversary's efforts might take the form of attempting to discover secret data, corrupting some of the data in the system, spoofing the identity of a message sender or receiver, or forcing system downtime. Actual adversaries, as opposed to idealized ones, are referred to as ''attackers''. The former term predominates in the cryptographic and the latter in the computer security literature. Eavesdropper Eve, malicious attacker Mallory, opponent Oscar, and intruder Trudy are all adversarial characters widely used in both types of texts. This notion of an adversary helps both intuitive and formal reasoning about cryptosystems by casting their security analysis as a 'game' between the users and a ''centrally co-ordinated'' enemy. The notion ...


Worst-case Complexity
In computer science (specifically computational complexity theory), the worst-case complexity measures the resources (e.g. running time, memory) that an algorithm requires given an input of arbitrary size (commonly denoted as ''n'' in asymptotic notation). It gives an upper bound on the resources required by the algorithm. In the case of running time, the worst-case time complexity indicates the longest running time performed by an algorithm given ''any'' input of size ''n'', and thus guarantees that the algorithm will finish in the indicated period of time. The order of growth (e.g. linear, logarithmic) of the worst-case complexity is commonly used to compare the efficiency of two algorithms. The worst-case complexity of an algorithm should be contrasted with its average-case complexity, which is an average measure of the amount of resources the algorithm uses on a random input. Definition: Given a model of computation and an algorithm \mathsf{A} that halts on each input ''s'', the mapping ...
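As a hedged illustration (linear search is my example, not one from the excerpt): the worst-case bound is attained by an input that forces the maximum amount of work, here an absent target that must be compared against all ''n'' elements.

    def linear_search(items, target):
        # Worst case: the target is absent, forcing n comparisons.
        comparisons = 0
        for index, value in enumerate(items):
            comparisons += 1
            if value == target:
                return index, comparisons
        return -1, comparisons

    n = 1000
    _, cost = linear_search(list(range(n)), -1)  # absent target
    assert cost == n  # the O(n) worst-case bound is attained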



Quicksort
Quicksort is an efficient, general-purpose sorting algorithm. Quicksort was developed by British computer scientist Tony Hoare in 1959 and published in 1961. It is still a commonly used algorithm for sorting. Overall, it is slightly faster than merge sort and heapsort for randomized data, particularly on larger distributions. Quicksort is a divide-and-conquer algorithm. It works by selecting a "pivot" element from the array and partitioning the other elements into two sub-arrays, according to whether they are less than or greater than the pivot. For this reason, it is sometimes called partition-exchange sort. The sub-arrays are then sorted recursively. This can be done in-place, requiring small additional amounts of memory to perform the sorting. Quicksort is a comparison sort, meaning that it can sort items of any type for which a "less-than" relation (formally, a total order) is defined. It is a comparison-based sort since elements ''a'' and ''b'' are only swapped in ca ...
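A minimal sketch of the algorithm as described above (an out-of-place variant chosen for readability; Hoare's original partition-exchange scheme sorts in place):

    def quicksort(items):
        # Divide and conquer: pick a pivot, partition, recurse.
        if len(items) <= 1:
            return items
        pivot = items[len(items) // 2]
        less = [x for x in items if x < pivot]
        equal = [x for x in items if x == pivot]
        greater = [x for x in items if x > pivot]
        return quicksort(less) + equal + quicksort(greater)

    print(quicksort([3, 6, 1, 8, 2, 9, 4]))  # [1, 2, 3, 4, 6, 8, 9]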



Illustration Of Proof For Computationally Bounded Adversary
An illustration is a decoration, interpretation, or visual explanation of a text, concept, or process, designed for integration in print and digitally published media, such as posters, flyers, magazines, books, teaching materials, animations, video games and films. An illustration is typically created by an illustrator. Digital illustrations are often used to make websites and apps more user-friendly, such as the use of emojis to accompany digital type. Illustration also means providing an example, either in writing or in picture form. The origin of the word "illustration" is late Middle English (in the sense 'illumination; spiritual or intellectual enlightenment'): via Old French from Latin ''illustratio''(n-), from the verb ''illustrare''. Illustration styles: Contemporary illustration uses a wide range of styles and techniques ...


Locally Decodable Code
A locally decodable code (LDC) is an error-correcting code that allows a single bit of the original message to be decoded with high probability by only examining (or querying) a small number of bits of a possibly corrupted codeword (Sergey Yekhanin, ''New locally decodable codes and private information retrieval schemes'', Technical Report ECCC TR06-127, 2006). This property could be useful, say, when information is being transmitted over a noisy channel and only a small subset of the data is required at a particular time, so there is no need to decode the entire message at once. Locally decodable codes are not a subset of locally testable codes, though there is some overlap between the two. Codewords are generated from the original message using an algorithm that introduces a certain amount of redundancy into the codeword; thus, the codeword is always longer than the original message. This redundancy is distributed across the codeword and allows the original message to be recovered ...
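A sketch of one classic 2-query LDC, the Hadamard code (my example, not necessarily the constructions in Yekhanin's report): the codeword lists the inner product of the message with every vector a, so querying positions a and a ⊕ e_i and XORing recovers bit x_i whenever neither query lands on a corruption.

    import random

    def hadamard_encode(x):
        # One codeword bit per vector a in F_2^k: <x, a> mod 2.
        k = len(x)
        return [sum(x[i] & ((a >> i) & 1) for i in range(k)) % 2
                for a in range(2 ** k)]

    def query_bit(codeword, k, i):
        # Two queries: C(a) XOR C(a XOR e_i) equals x_i when neither
        # queried position is corrupted.
        a = random.randrange(2 ** k)
        return codeword[a] ^ codeword[a ^ (1 << i)]

    x = [1, 0, 1]
    cw = hadamard_encode(x)
    cw[5] ^= 1  # corrupt one codeword position
    votes = [query_bit(cw, 3, 0) for _ in range(101)]
    print(max(set(votes), key=votes.count))  # majority vote recovers x[0] = 1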




Private Information Retrieval
In cryptography, a private information retrieval (PIR) protocol is a protocol that allows a user to retrieve an item from a server in possession of a database without revealing which item is retrieved. PIR is a weaker version of 1-out-of-''n'' oblivious transfer, where it is also required that the user should not get information about other database items. One trivial, but very inefficient, way to achieve PIR is for the server to send an entire copy of the database to the user. In fact, this is the only possible protocol (in the classical or the quantum setting) that gives the user information-theoretic privacy for their query in a single-server setting. There are two ways to address this problem: make the server computationally bounded or assume that there are multiple non-cooperating servers, each having a copy of the database. The problem was introduced in 1995 by Chor, Goldreich, Kushilevitz and Sudan in the information-theoretic setting and in 1997 by Kushilevitz and Ostrovsky ...
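A toy sketch of the two-server idea from Chor et al., assuming non-cooperating servers (communication here is linear in ''n''; the real protocols are far more efficient): each server sees only a uniformly random subset of indices, yet the XOR of the two answers is exactly the requested bit.

    import random

    def xor_bits(db, indices):
        # Server's answer: XOR of the requested database bits.
        result = 0
        for j in indices:
            result ^= db[j]
        return result

    def two_server_pir(db, i):
        # A random subset goes to server 1; the same subset with
        # index i toggled goes to server 2. Each set on its own
        # reveals nothing about i.
        n = len(db)
        s1 = {j for j in range(n) if random.random() < 0.5}
        s2 = s1 ^ {i}  # symmetric difference toggles index i
        return xor_bits(db, s1) ^ xor_bits(db, s2)  # all but db[i] cancel

    db = [1, 0, 0, 1, 1, 0, 1, 0]
    assert two_server_pir(db, 3) == db[3]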



Fault-tolerant
Fault tolerance is the ability of a system to maintain proper operation despite failures or faults in one or more of its components. This capability is essential for high-availability, mission-critical, or even life-critical systems. Fault tolerance specifically refers to a system's capability to handle faults without any degradation or downtime. In the event of an error, end-users remain unaware of any issues. Conversely, a system that experiences errors with some interruption in service or graceful degradation of performance is termed 'resilient'. In resilience, the system adapts to the error, maintaining service but acknowledging a certain impact on performance. Typically, fault tolerance describes computer systems, ensuring the overall system remains functional despite hardware or software issues. Non-computing examples include structures that retain their integrity despite damage from fatigue, corrosion or impact. History: The first known fault-tolerant computer was SAPO ...
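One standard masking technique, triple modular redundancy (my illustration; the excerpt does not name a specific mechanism), shows how redundancy can hide a fault entirely: three replicas compute the result and a voter takes the bitwise majority, so a single faulty component causes no visible degradation.

    def tmr_vote(a, b, c):
        # Triple modular redundancy: bitwise majority of three
        # replicas masks any single faulty component.
        return (a & b) | (a & c) | (b & c)

    healthy, faulty = 0b1011, 0b0011  # one replica produces a bad value
    assert tmr_vote(healthy, healthy, faulty) == healthy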


List Decoding
In coding theory, list decoding is an alternative to unique decoding of error-correcting codes for large error rates. The notion was proposed by Elias in the 1950s. The main idea behind list decoding is that the decoding algorithm, instead of outputting a single possible message, outputs a list of possibilities, one of which is correct. This allows for handling a greater number of errors than that allowed by unique decoding. The unique decoding model in coding theory, which is constrained to output a single valid codeword from the received word, cannot tolerate a greater fraction of errors. This resulted in a gap between the error-correction performance for stochastic noise models (proposed by Shannon) and the adversarial noise model (considered by Richard Hamming). Since the mid-1990s, significant algorithmic progress by the coding theory community has bridged this gap. Much of this progress is based on a relaxed error-correction model called list decoding, wherein the decoder outputs ...
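A brute-force sketch of the idea (fine for toy codes; practical list decoders are far more efficient): report every codeword within a chosen radius of the received word, rather than insisting on a single answer.

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def list_decode(codewords, received, radius):
        # Return every codeword within the given Hamming radius;
        # unique decoding is the special case of a radius below d/2.
        return [c for c in codewords if hamming(c, received) <= radius]

    codewords = [(0, 0, 0, 0), (1, 1, 1, 1), (1, 1, 0, 0)]
    print(list_decode(codewords, (1, 0, 0, 0), radius=2))
    # -> [(0, 0, 0, 0), (1, 1, 0, 0)]: a short list containing the answer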


Concrete Security
In cryptography, concrete security or exact security is a practice-oriented approach that aims to give more precise estimates of the computational complexities of adversarial tasks than polynomial equivalence would allow. It quantifies the security of a cryptosystem by bounding the probability of success for an adversary running for a fixed amount of time. Security proofs with precise analyses are referred to as ''concrete''. Traditionally, provable security is asymptotic: it classifies the hardness of computational problems using polynomial-time reducibility. Secure schemes are defined to be those in which the advantage of any computationally bounded adversary is negligible. While such a theoretical guarantee is important, in practice one needs to know exactly how efficient a reduction is because of the need to instantiate the security parameter: it is not enough to know that "sufficiently large" security parameters will do. An inefficient reduction results either in the success ...
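One standard way to state such a bound (a generic template in my notation, not a definition quoted from the article): a scheme is (t, ε)-secure if

    \text{every adversary } A \text{ running in time at most } t
    \text{ has advantage } \mathrm{Adv}(A) \le \varepsilon,
    \qquad \text{e.g. } \mathrm{Adv}(A) = \bigl|\, 2\Pr[A \text{ wins the distinguishing game}] - 1 \,\bigr| .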



Computational Complexity Theory
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and explores the relationships between these classifications. A computational problem is a task solved by a computer and is solvable by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). O ...