In computational complexity theory, the Sipser–Lautemann theorem or Sipser–Gács–Lautemann theorem states that bounded-error probabilistic polynomial (BPP) time is contained in the polynomial time hierarchy, and more specifically Σ2 ∩ Π2.

In 1983, Michael Sipser showed that BPP is contained in the polynomial time hierarchy. Péter Gács showed that BPP is in fact contained in Σ2 ∩ Π2. Clemens Lautemann contributed by giving a simple proof of BPP's membership in Σ2 ∩ Π2, also in 1983. It is conjectured that in fact BPP = P, which is a much stronger statement than the Sipser–Lautemann theorem.


Proof

Here we present Lautemann's proof. Without loss of generality, a machine ''M'' ∈ BPP with error ≤ 2^−|''x''| can be chosen. (All BPP problems can be amplified to reduce the error probability exponentially.) The basic idea of the proof is to define a Σ2 sentence that is equivalent to stating that ''x'' is in the language, ''L'', defined by ''M'', by using a set of transforms of the random variable inputs. Since the output of ''M'' depends on random input, as well as the input ''x'', it is useful to define which random strings produce the correct output as ''A''(''x'') = {''r'' | ''M''(''x'', ''r'') accepts}. The key to the proof is to note that when ''x'' ∈ ''L'', ''A''(''x'') is very large, and when ''x'' ∉ ''L'', ''A''(''x'') is very small. By using bitwise parity, ⊕, a set of transforms can be defined as ''A''(''x'') ⊕ ''t'' = {''r'' ⊕ ''t'' | ''r'' ∈ ''A''(''x'')}. The first main lemma of the proof shows that the union of a small finite number of these transforms will contain the entire space of random input strings. Using this fact, a Σ2 sentence and a Π2 sentence can be generated that are true if and only if ''x'' ∈ ''L'' (see conclusion).
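The amplification step invoked in parentheses can be made concrete: run ''M'' independently ''k'' times and output the majority answer. If each run is correct with probability 2/3, the majority is wrong with probability exponentially small in ''k''. A minimal sketch in Python (the function name and parameter choices are illustrative, not part of the original proof):

```python
from math import comb

def amplified_error(p_correct: float, k: int) -> float:
    """Probability that a majority vote over k independent runs is wrong,
    given that each run is independently correct with probability p_correct."""
    assert k % 2 == 1, "odd k avoids ties"
    # The majority is wrong exactly when at most k // 2 runs are correct.
    return sum(comb(k, i) * p_correct**i * (1 - p_correct)**(k - i)
               for i in range(k // 2 + 1))

# A BPP machine with error 1/3: majority voting drives the error down fast.
for k in (1, 11, 51):
    print(k, amplified_error(2/3, k))
```

Taking ''k'' proportional to |''x''| pushes the error below 2^−|''x''|, as the proof requires.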


Lemma 1

The general idea of Lemma 1 is to prove that if ''A''(''x'') covers a large part of the random space R = \{0,1\}^{|r|}, then there exists a small set of translations that will cover the entire random space. In more mathematical language:

:''If'' \frac{|A(x)|}{|R|} \ge 1 - \frac{1}{2^{|x|}}, ''then'' \exists t_1,t_2,\ldots,t_{|r|}, ''where'' t_i \in \{0,1\}^{|r|}, ''such that'' \bigcup_i A(x) \oplus t_i = R.

Proof. Randomly pick ''t''1, ''t''2, ..., ''t''|''r''|. Let S=\bigcup_i A(x)\oplus t_i (the union of all transforms of ''A''(''x'')). So, for all ''r'' in ''R'',

:\Pr[r \notin S] = \Pr[r \notin A(x) \oplus t_1] \cdot \Pr[r \notin A(x) \oplus t_2] \cdots \Pr[r \notin A(x) \oplus t_{|r|}] \le \frac{1}{2^{|x| \cdot |r|}}.

The probability that there will exist at least one element in ''R'' not in ''S'' is

:\Pr\Bigl[\bigvee_{r_i \in R} (r_i \notin S)\Bigr] \le \sum_{r_i \in R} \frac{1}{2^{|x| \cdot |r|}} = \frac{2^{|r|}}{2^{|x| \cdot |r|}} < 1.

Therefore

:\Pr[S = R] \ge 1 - \frac{2^{|r|}}{2^{|x| \cdot |r|}} > 0.

Thus there is a selection of t_1,t_2,\ldots,t_{|r|} such that

:\bigcup_i A(x) \oplus t_i = R.
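The covering argument of Lemma 1 can be checked experimentally on a toy random space. In this sketch (the set sizes and names are illustrative assumptions, not from the source), ''R'' is the set of 8-bit strings, ''A'' is a dense accepting set, and a handful of random XOR translations suffices to cover all of ''R'':

```python
import random

def covers(A, shifts, m):
    """Do the XOR-translates A ⊕ t jointly cover the whole space {0,1}^m?"""
    covered = set()
    for t in shifts:
        covered |= {a ^ t for a in A}
    return covered == set(range(2 ** m))

random.seed(0)
m = 8                                  # |r| = 8 random bits, so |R| = 256
A = set(range(2 ** m)) - {0}           # dense accepting set: density 255/256
shifts = [random.randrange(2 ** m) for _ in range(m)]  # m random translations
print(covers(A, shifts, m))
```

Here a point ''r'' is uncovered only if ''r'' ⊕ ''t''i ∉ ''A'' for every ''i'', which for this ''A'' forces ''r'' = ''t''i for all ''i''; as soon as two translations differ, the cover is complete, mirroring the union bound in the lemma.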


Lemma 2

The previous lemma shows that ''A''(''x'') can cover every possible point in the space using a small set of translations. Complementary to this, for ''x'' ∉ ''L'' only a small fraction of the space is covered by S=\bigcup_i A(x)\oplus t_i. We have:

:\frac{|S|}{|R|} \le |r| \cdot \frac{|A(x)|}{|R|} \le |r| \cdot 2^{-|x|} < 1

because |r| is polynomial in |x|.
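The counting bound in Lemma 2 is easy to see at the same toy scale: a union of |''r''| XOR-translates of a small set has at most |''r''| · |''A''(''x'')| elements, so it cannot cover ''R''. A small sketch (the sizes are chosen for illustration only):

```python
m = 8
R_size = 2 ** m                  # |R| = 256
A_small = {0, 1, 2, 3}           # sparse accepting set, standing in for x ∉ L
shifts = list(range(m))          # any m translations at all
covered = set()
for t in shifts:
    covered |= {a ^ t for a in A_small}
# The union has at most |shifts| * |A_small| = 32 points, far below 256.
print(len(covered), R_size)
```

No choice of translations can do better: the size bound holds before any probabilistic reasoning.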


Conclusion

The lemmas show that language membership in BPP can be expressed as a Σ2 expression, as follows:

:x \in L \iff \exists t_1,t_2,\ldots,t_{|r|}\, \forall r \in R \bigvee_{1 \le i \le |r|} (M(x, r \oplus t_i) \text{ accepts}).

That is, ''x'' is in language ''L'' if and only if there exist |''r''| binary vectors ''t''1, ''t''2, ..., ''t''|''r''| such that for all random bit vectors ''r'', TM ''M'' accepts at least one vector ''r'' ⊕ ''t''i. By Lemma 1, such translations exist whenever ''x'' ∈ ''L''; by Lemma 2, no such translations exist when ''x'' ∉ ''L'', since the union of the translates is too small to cover all of ''R''. The above expression is in Σ2 in that it is first existentially and then universally quantified. Therefore BPP ⊆ Σ2. Because BPP is closed under complement, this proves BPP ⊆ Σ2 ∩ Π2.
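For very small parameters, the Σ2 sentence above can be verified by brute force. The following sketch uses a toy machine ''M'' on a 3-bit random tape (the machine and the helper name are illustrative assumptions, not from the source); the sentence holds exactly on the inputs in the language:

```python
from itertools import product

def sigma2_holds(M, x, m):
    """Brute-force the sentence: ∃ t_1..t_m ∀ r ∈ {0,1}^m, ⋁_i M(x, r ⊕ t_i)."""
    R = range(2 ** m)
    return any(all(any(M(x, r ^ t) for t in ts) for r in R)
               for ts in product(R, repeat=m))

def M(x, r):
    # Toy randomized machine with error 1/8 on a 3-bit tape:
    # for x = 1 (in L) it accepts every tape except r = 0;
    # for x = 0 (not in L) it accepts only the single tape r = 5.
    return (r != 0) if x == 1 else (r == 5)

print(sigma2_holds(M, 1, 3), sigma2_holds(M, 0, 3))
```

For ''x'' = 1 the accepting set misses only one point, so almost any three translations cover the tape space (Lemma 1); for ''x'' = 0, three translates of a singleton cover at most three of the eight tapes (Lemma 2), so the universal quantifier fails.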


Stronger version

The theorem can be strengthened to \mathsf{BPP} \subseteq \mathsf{MA} \subseteq \mathsf{S}_2^P \subseteq \Sigma_2 \cap \Pi_2 (see MA, S2P).

