Comparison to other models
Worst-case model
At first glance, the worst-case model seems intuitively ideal. The guarantee that an algorithm will succeed no matter what is, of course, highly alluring. However, it demands too much. A real-life adversary cannot spend an indefinite amount of time examining a message in order to find the one error pattern with which an algorithm would struggle. As a comparison, consider the Quicksort algorithm. In the worst case, Quicksort makes O(''n''<sup>2</sup>) comparisons; however, such an occurrence is rare. Quicksort almost invariably makes O(''n'' log ''n'') comparisons instead, and even outperforms other algorithms which can guarantee O(''n'' log ''n'') behavior in the worst case. Suppose an adversary wishes to force the Quicksort algorithm to make O(''n''<sup>2</sup>) comparisons. Then he would have to search all of the ''n''! permutations of the input and test the algorithm on each until he found the one for which the algorithm runs significantly slower. But since this would take O(''n''!) time, it is clearly infeasible for an adversary to do so. Similarly, it is unreasonable to assume an adversary for an encoding and decoding system would be able to test every single error pattern in order to find the most effective one.
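The gap between the worst case and the typical case can be seen concretely in a short Python sketch (illustrative only: the first-element pivot rule and the comparison-counting scheme are assumptions of this sketch, not part of any standard library). A naive Quicksort is run on a random permutation and on an adversarially chosen, already-sorted input:

```python
import random

def quicksort(xs, counter):
    """First-element-pivot Quicksort; counter[0] accumulates comparisons."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    counter[0] += len(rest)  # each remaining element is compared to the pivot once
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left, counter) + [pivot] + quicksort(right, counter)

n = 300
random_input = random.sample(range(n), n)   # a typical, random permutation
adversarial_input = list(range(n))          # already sorted: worst case for this pivot rule

c_rand, c_bad = [0], [0]
quicksort(random_input, c_rand)
quicksort(adversarial_input, c_bad)
print(c_rand[0], c_bad[0])  # the sorted input costs exactly n(n-1)/2 = 44850 comparisons
```

Finding that worst-case input by brute force, however, would mean testing all ''n''! permutations, which is exactly the task the passage above argues is infeasible.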
Stochastic noise model
The stochastic noise model can be described as a kind of "dumb" noise model: it lacks the adaptability to deal with "intelligent" threats. Even if the attacker is computationally bounded, it is still possible that they could overcome the stochastic model with a bit of cleverness, since nothing in the model prevents errors from being chosen deliberately. The stochastic model has no real way to fight against this sort of attack, and as such it is unsuited to dealing with the kind of "intelligent" threats that it would be preferable to have defenses against.

Therefore, a computationally bounded adversarial model has been proposed as a compromise between the two. This forces one to consider that messages may be perverted in conscious, even malicious ways, but without forcing an algorithm designer to worry about rare cases which likely will never occur.
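For concreteness, the simplest stochastic channel, the binary symmetric channel, can be simulated in a few lines of Python (a toy sketch; the function name and parameters are illustrative). Each bit is flipped independently with a fixed probability ''p'', with no regard to the content of the message:

```python
import random

rng = random.Random(42)  # fixed seed so the noise is reproducible

def bsc(bits, p):
    """Binary symmetric channel: flip each bit independently with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

codeword = [1, 0, 1, 1, 0, 0, 1, 0]
print(bsc(codeword, 0.0))  # p = 0: no bit is flipped
print(bsc(codeword, 1.0))  # p = 1: every bit is flipped
```

The point of the paragraph above is that this noise is oblivious: ''p'' does not depend on which codeword is sent, so a channel that can inspect the codeword and choose where to flip is strictly more powerful than anything this model describes.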
Applications
Comparison to stochastic noise channel
Since any computationally bounded adversary could in O(''n'') time flip a coin for each bit, it is intuitively clear that any encoding and decoding system which can work against this adversary must also work in the stochastic noise model. The converse is less simple; however, it can be shown that any system which works in the stochastic noise model can also efficiently encode and decode against a computationally bounded adversary, at only an additional cost which is polynomial in ''n''. The following method to accomplish this was designed by Dick Lipton.

Let ''E'' be an encoder for the stochastic noise model and ''D'' be a simple decoder for the same, each of which runs in polynomial time. Furthermore, let both the sender and receiver share some random permutation function π and a random pattern ''R''.

For encoding:
1. Let ''X'' = ''E''(''M'').
2. Let ''Y'' = π(''X'') ⊕ ''R''.
3. Transmit ''Y''.

Then for decoding:
1. Receive ''Y′''. Compute ''X′'' = π<sup>−1</sup>(''Y′'' ⊕ ''R'').
2. Calculate ''M'' = ''D''(''X′'').

Similarly to the Quicksort comparison above, if the channel wants to do something smart, it must first test all the permutations. However, this is infeasible for a computationally bounded adversary, so the most it can do is make a random error pattern ''e''. But then:
: ''Y′'' = ''Y'' ⊕ ''e'', since by definition the channel adds ''e'' to the transmitted ''Y''.
: π<sup>−1</sup>(''Y′'' ⊕ ''R'') = π<sup>−1</sup>(''Y'' ⊕ ''e'' ⊕ ''R'') = π<sup>−1</sup>(''Y'' ⊕ ''R'') ⊕ π<sup>−1</sup>(''e''), since any permutation is linear with respect to XOR,
: = ''X'' ⊕ π<sup>−1</sup>(''e''), since ''Y'' ⊕ ''R'' = π(''X'') by the definition of ''Y'' above.

Since π is random, π<sup>−1</sup>(''e'') is just random noise and we can use the simple decoder ''D'' to decode the received message and get back ''M''.
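The permute-and-mask scheme can be sketched in Python (a toy illustration under simplifying assumptions: the inner stochastic-model encoder ''E'' and decoder ''D'' are taken to be the identity, and all names are choices of this sketch):

```python
import random

rng = random.Random(7)
n = 16

# Shared secrets between sender and receiver, as in the scheme:
perm = list(range(n))
rng.shuffle(perm)                               # random permutation pi
mask = [rng.randint(0, 1) for _ in range(n)]    # random pattern R

def permute(bits, p):
    """Apply the bit-position permutation p: output bit i takes input bit p[i]."""
    return [bits[p[i]] for i in range(len(bits))]

def invert(p):
    """Inverse of a permutation given as a list of positions."""
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return inv

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

# Encoding. With E taken as the identity, X = E(M) = M.
X = [rng.randint(0, 1) for _ in range(n)]
Y = xor(permute(X, perm), mask)                 # Y = pi(X) XOR R

# The bounded adversary can only pick some error pattern e blindly:
e = [1] + [0] * (n - 1)
Y_recv = xor(Y, e)                              # Y' = Y XOR e

# Decoding: X' = pi^{-1}(Y' XOR R) = X XOR pi^{-1}(e)
X_recv = permute(xor(Y_recv, mask), invert(perm))

print(sum(xor(X, X_recv)))                      # 1: e survives only as a single,
                                                # randomly relocated bit flip
```

Because π is a shared secret, the adversary cannot aim ''e'' at particular codeword positions; after unscrambling, whatever pattern it chose looks like the random noise the simple decoder was designed for.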
Specific applications
By assuming a computationally bounded adversary, it is possible to design a locally decodable code which is both efficient and near-optimal, with a negligible error probability.
See also
References
{{Reflist}}

[[Category:Computational complexity theory]]
[[Category:Coding theory]]