In theoretical computer science, a small-bias sample space (also known as \epsilon-biased sample space, \epsilon-biased generator, or small-bias probability space) is a probability distribution that fools parity functions. In other words, no parity function can distinguish between a small-bias sample space and the uniform distribution with high probability, and hence, small-bias sample spaces naturally give rise to pseudorandom generators for parity functions.

The main useful property of small-bias sample spaces is that they need far fewer truly random bits than the uniform distribution to fool parities. Efficient constructions of small-bias sample spaces have found many applications in computer science, some of which are derandomization, error-correcting codes, and probabilistically checkable proofs. The connection with error-correcting codes is in fact very strong since \epsilon-biased sample spaces are ''equivalent'' to \epsilon-balanced error-correcting codes.


Definition


Bias

Let X be a probability distribution over \{0,1\}^n. The ''bias'' of X with respect to a set of indices I \subseteq \{1,\dots,n\} is defined as

: \text{bias}_I(X) = \left| \Pr_{x \sim X} \left(\sum_{i \in I} x_i = 0\right) - \Pr_{x \sim X} \left(\sum_{i \in I} x_i = 1\right) \right| = \left| 2 \cdot \Pr_{x \sim X} \left(\sum_{i \in I} x_i = 0\right) - 1 \right| \,,

where the sum is taken over \mathbb F_2, the finite field with two elements. In other words, the sum \sum_{i \in I} x_i equals 0 if the number of ones in the sample x \in \{0,1\}^n at the positions defined by I is even, and otherwise, the sum equals 1. For I = \emptyset, the empty sum is defined to be zero, and hence \text{bias}_{\emptyset}(X) = 1.
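
For concreteness, here is a minimal brute-force sketch (not from the sources) that computes \text{bias}_I(X) when X is given as a multiset of samples from \{0,1\}^n; the function and variable names are illustrative.

```python
# A minimal sketch: bias of the uniform distribution over a sample multiset.
from itertools import product

def bias(samples, I):
    """Bias of the uniform distribution over `samples` w.r.t. index set I."""
    p_even = sum(1 for x in samples if sum(x[i] for i in I) % 2 == 0) / len(samples)
    return abs(2 * p_even - 1)

# The uniform distribution over {0,1}^2 has bias 0 for every non-empty I,
# and bias 1 for the empty index set (the empty sum is always 0).
uniform = list(product([0, 1], repeat=2))
print(bias(uniform, {0}), bias(uniform, {0, 1}), bias(uniform, set()))  # 0.0 0.0 1.0
```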


ϵ-biased sample space

A probability distribution X over \{0,1\}^n is called an ''\epsilon-biased sample space'' if \text{bias}_I(X) \leq \epsilon holds for all non-empty subsets I \subseteq \{1,\dots,n\}.


ϵ-biased set

An \epsilon-biased sample space X that is generated by picking a uniform element from a multiset X \subseteq \{0,1\}^n is called an ''\epsilon-biased set''. The ''size'' s of an \epsilon-biased set X is the size of the multiset that generates the sample space.


ϵ-biased generator

An \epsilon-biased generator G:\{0,1\}^\ell \to \{0,1\}^n is a function that maps strings of length \ell to strings of length n such that the multiset X_G = \{G(y) \mid y \in \{0,1\}^\ell\} is an \epsilon-biased set. The ''seed length'' of the generator is the number \ell and is related to the size of the \epsilon-biased set X_G via the equation s = 2^\ell.
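
These definitions can be checked exhaustively for small parameters. The following sketch (illustrative, not from the sources) enumerates all 2^\ell seeds of a candidate generator and all non-empty index sets I to compute the maximal bias; the toy generator is deliberately bad, to show the test detecting a large bias.

```python
# A hedged sketch: maximal bias of a candidate generator G: {0,1}^ell -> {0,1}^n,
# computed by brute force over all seeds and all non-empty index sets.
from itertools import combinations, product

def max_bias(G, ell, n):
    X_G = [G(seed) for seed in product([0, 1], repeat=ell)]  # s = 2^ell samples
    worst = 0.0
    for k in range(1, n + 1):
        for I in combinations(range(n), k):
            p_even = sum(1 for x in X_G if sum(x[i] for i in I) % 2 == 0) / len(X_G)
            worst = max(worst, abs(2 * p_even - 1))
    return worst

# Toy generator: copy the 2-bit seed and append its parity (n = 3).
G = lambda seed: (seed[0], seed[1], seed[0] ^ seed[1])
print(max_bias(G, ell=2, n=3))  # 1.0: the parity over I = {0, 1, 2} is constant
```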


Connection with epsilon-balanced error-correcting codes

There is a close connection between \epsilon-biased sets and ''\epsilon-balanced'' linear error-correcting codes. A linear code C:\{0,1\}^n \to \{0,1\}^s of message length n and block length s is ''\epsilon-balanced'' if the Hamming weight of every nonzero codeword C(x) is between (\tfrac{1}{2}-\epsilon)s and (\tfrac{1}{2}+\epsilon)s. Since C is a linear code, its generator matrix is an (n\times s)-matrix A over \mathbb F_2 with C(x) = x \cdot A. Then it holds that a multiset X \subset \{0,1\}^n is \epsilon-biased if and only if the linear code C_X, whose columns are exactly the elements of X, is \epsilon-balanced.
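
A small sketch of this correspondence, under stated assumptions: the multiset X is given as an ordered list of s vectors in \{0,1\}^n serving as the columns of the generator matrix, so C_X maps a message x to the s inner products of x with the elements of X over \mathbb F_2. The function name and the toy multiset are illustrative, and note that some sources scale \epsilon by a constant factor in this correspondence.

```python
# A sketch: check the eps-balance condition on all nonzero codewords of C_X.
from itertools import product

def is_eps_balanced(X, n, eps):
    """Every nonzero codeword of C_X has weight in [(1/2 - eps)s, (1/2 + eps)s]."""
    s = len(X)
    for x in product([0, 1], repeat=n):
        if not any(x):
            continue  # the zero codeword is exempt
        weight = sum(sum(xi & vi for xi, vi in zip(x, v)) % 2 for v in X)
        if not (0.5 - eps) * s <= weight <= (0.5 + eps) * s:
            return False
    return True

X = [(0, 1), (1, 0), (1, 1), (0, 1)]  # four column vectors for n = 2
print(is_eps_balanced(X, n=2, eps=0.25))  # True for this toy multiset
```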


Constructions of small epsilon-biased sets

Usually the goal is to find \epsilon-biased sets that have a small size s relative to the parameters n and \epsilon. This is because a smaller size s means that the amount of randomness needed to pick a random element from the set is smaller, and so the set can be used to fool parities using few random bits.


Theoretical bounds

The probabilistic method gives a non-explicit construction that achieves size s = O(n/\epsilon^2). The construction is non-explicit in the sense that finding the \epsilon-biased set requires a lot of true randomness, which does not help towards the goal of reducing the overall randomness. However, this non-explicit construction is useful because it shows that such small \epsilon-biased sets exist. On the other hand, the best known lower bound for the size of \epsilon-biased sets is s = \Omega(n/(\epsilon^2 \log(1/\epsilon))); that is, in order for a set to be \epsilon-biased, it must be at least that big.
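
The non-explicit construction is simple to state: draw s = O(n/\epsilon^2) strings uniformly and independently and take the resulting multiset. A sketch follows; the constant c = 16 is purely illustrative, not the constant from the probabilistic-method calculation.

```python
# A sketch of the non-explicit construction: s = c * n / eps^2 uniform strings.
# The probabilistic method shows a suitable constant c makes the result
# eps-biased with positive probability; checking the bias is exponential in n.
import random

def random_candidate(n, eps, c=16, seed=0):
    rng = random.Random(seed)
    s = int(c * n / eps**2)
    return [tuple(rng.randint(0, 1) for _ in range(n)) for _ in range(s)]

X = random_candidate(n=8, eps=0.5)
print(len(X))  # 512 strings of length 8; the bias can be checked exhaustively
```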


Explicit constructions

There are many explicit, i.e., deterministic constructions of \epsilon-biased sets with various parameter settings:
* Naor & Naor (1990) achieve \displaystyle s = \frac{n}{\text{poly}(\epsilon)}. The construction makes use of Justesen codes (which is a concatenation of Reed–Solomon codes with the Wozencraft ensemble) as well as expander walk sampling.
* Alon, Goldreich, Håstad & Peralta (1992) achieve \displaystyle s = O\left(\frac{n}{\epsilon \log(n/\epsilon)}\right)^2. One of their constructions is the concatenation of Reed–Solomon codes with the Hadamard code; this concatenation turns out to be an \epsilon-balanced code, which gives rise to an \epsilon-biased sample space via the connection mentioned above (a toy sketch in the spirit of this construction follows the list).
* Concatenating algebraic geometry codes with the Hadamard code gives an \epsilon-balanced code with \displaystyle s = O\left(\frac{n}{\epsilon^3 \log(1/\epsilon)}\right).
* Ben-Aroya & Ta-Shma (2009) achieve \displaystyle s = O\left(\frac{n}{\epsilon^2 \log(1/\epsilon)}\right)^{5/4}.
* Ta-Shma (2017) achieves \displaystyle s = O\left(\frac{n}{\epsilon^{2+o(1)}}\right), which is almost optimal because of the lower bound above.
These bounds are mutually incomparable. In particular, none of these constructions yields the smallest \epsilon-biased sets for all settings of \epsilon and n.
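
As a concrete illustration, the following sketch follows the well-known "powering" variant related to the Reed–Solomon/Hadamard approach: a seed is a pair (x, y) \in \mathbb F_{2^m}^2 and the i-th output bit is the \mathbb F_2 inner product \langle x^i, y \rangle of bit representations. A standard argument bounds the bias by roughly n/2^m, so m \approx \log_2(n/\epsilon) gives s = 2^{2m} \approx (n/\epsilon)^2 seeds. The field \mathbb F_{16}, the modulus x^4 + x + 1, and all names below are illustrative choices, not taken from the papers.

```python
# A hedged sketch of a "powering"-style small-bias generator over F_{2^M}.
M = 4
IRRED = 0b10011  # x^4 + x + 1, irreducible over F_2, so this models F_16

def gf_mul(a, b):
    """Multiplication in F_{2^M}, elements bit-packed as polynomial coefficients."""
    r = 0
    for _ in range(M):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= IRRED  # reduce modulo the irreducible polynomial
    return r

def powering_sample(x, y, n):
    """Output bits <x^1, y>, ..., <x^n, y>; <a, b> is the parity of popcount(a & b)."""
    bits, p = [], 1
    for _ in range(n):
        p = gf_mul(p, x)
        bits.append(bin(p & y).count("1") % 2)
    return tuple(bits)

# One sample point of the (n = 8)-bit small-bias space, seed (x, y) = (3, 7):
print(powering_sample(3, 7, 8))
```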


Application: almost k-wise independence

An important application of small-bias sets lies in the construction of almost k-wise independent sample spaces.


k-wise independent spaces

A random variable Y over \{0,1\}^n is a ''k-wise independent space'' if, for all index sets I \subseteq \{1,\dots,n\} of size k, the marginal distribution Y|_I is exactly equal to the uniform distribution over \{0,1\}^k. That is, for all such I and all strings z \in \{0,1\}^k, the distribution Y satisfies \Pr_Y (Y|_I = z) = 2^{-k}.
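
This definition can be verified by brute force for toy parameters; the sketch below (illustrative, exponential-time) checks exact k-wise independence of a sample-space multiset.

```python
# A brute-force sketch: exact k-wise independence of a multiset over {0,1}^n.
from itertools import combinations, product

def is_k_wise_independent(samples, n, k):
    for I in combinations(range(n), k):
        for z in product([0, 1], repeat=k):
            p = sum(1 for x in samples if tuple(x[i] for i in I) == z) / len(samples)
            if abs(p - 2**-k) > 1e-12:
                return False
    return True

# Example: {0,1}^2 extended with a parity bit is 2-wise independent over n = 3.
space = [(a, b, a ^ b) for a in (0, 1) for b in (0, 1)]
print(is_k_wise_independent(space, n=3, k=2))  # True
```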


Constructions and bounds

k-wise independent spaces are fairly well understood.
* A simple construction by Joffe (1974) achieves size n^k.
* Alon, Babai & Itai (1986) construct a k-wise independent space whose size is n^{k/2}.
* Chor et al. (1985) prove that no k-wise independent space can be significantly smaller than n^{k/2}.


Joffe's construction

Joffe (1974) constructs a k-wise independent space Y over the finite field with some prime number n > k of elements, i.e., Y is a distribution over \mathbb F_n^n. The initial k marginals of the distribution are drawn independently and uniformly at random:

:(Y_0,\dots,Y_{k-1}) \sim \mathbb F_n^k.

For each i with k \leq i < n, the marginal distribution of Y_i is then defined as

:Y_i = Y_0 + Y_1 \cdot i + Y_2 \cdot i^2 + \dots + Y_{k-1} \cdot i^{k-1}\,,

where the calculation is done in \mathbb F_n. Joffe (1974) proves that the distribution Y constructed in this way is k-wise independent as a distribution over \mathbb F_n^n. The distribution Y is uniform on its support, and hence, the support of Y forms a ''k-wise independent set''. It contains all n^k strings in \mathbb F_n^k that have been extended to strings of length n using the deterministic rule above.
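
A short sketch of this construction, with illustrative parameters (prime n = 5, k = 3); the helper names are not from the paper.

```python
# A sketch of Joffe's construction: draw k uniform field elements, then
# extend them deterministically by polynomial evaluation over F_n.
import random

def joffe_sample(n, k, rng):
    """One draw from the k-wise independent space over F_n^n (n prime, n > k)."""
    y = [rng.randrange(n) for _ in range(k)]   # (Y_0, ..., Y_{k-1}) uniform in F_n
    for i in range(k, n):                      # Y_i = Y_0 + Y_1*i + ... + Y_{k-1}*i^{k-1} (mod n)
        y.append(sum(y[j] * pow(i, j, n) for j in range(k)) % n)
    return tuple(y)

rng = random.Random(0)
print(joffe_sample(5, 3, rng))  # one string of the 3-wise independent space over F_5^5
```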


Almost k-wise independent spaces

A random variable Y over \{0,1\}^n is a ''\delta-almost k-wise independent space'' if, for all index sets I \subseteq \{1,\dots,n\} of size k, the restricted distribution Y|_I and the uniform distribution U_k on \{0,1\}^k are \delta-close in 1-norm, i.e., \Big\| Y|_I - U_k \Big\|_1 \leq \delta.
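
Again this can be checked by brute force for small parameters; the sketch below computes the 1-norm distance of a restriction Y|_I from uniform, for Y given as a sample-space multiset (names illustrative).

```python
# A brute-force sketch of the delta-almost k-wise independence test.
from itertools import combinations, product

def l1_from_uniform(samples, I):
    """1-norm distance between the restriction to I and uniform on {0,1}^|I|."""
    k = len(I)
    counts = {z: 0 for z in product([0, 1], repeat=k)}
    for x in samples:
        counts[tuple(x[i] for i in I)] += 1
    return sum(abs(c / len(samples) - 2**-k) for c in counts.values())

def is_almost_k_wise(samples, n, k, delta):
    return all(l1_from_uniform(samples, I) <= delta
               for I in combinations(range(n), k))

# An exactly 2-wise independent space is 0-almost 2-wise independent.
space = [(a, b, a ^ b) for a in (0, 1) for b in (0, 1)]
print(is_almost_k_wise(space, n=3, k=2, delta=0.0))  # True
```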


Constructions

Naor & Naor (1990) give a general framework for combining small k-wise independent spaces with small \epsilon-biased spaces to obtain \delta-almost k-wise independent spaces of even smaller size. In particular, let G_1:\{0,1\}^h \to \{0,1\}^n be a linear mapping that generates a k-wise independent space and let G_2:\{0,1\}^\ell \to \{0,1\}^h be a generator of an \epsilon-biased set over \{0,1\}^h. That is, when given a uniformly random input, the output of G_1 is a k-wise independent space, and the output of G_2 is \epsilon-biased. Then G : \{0,1\}^\ell \to \{0,1\}^n with G(x) = G_1(G_2(x)) is a generator of a \delta-almost k-wise independent space, where \delta = 2^{k/2} \epsilon. As mentioned above, Alon, Babai & Itai (1986) construct a generator G_1 with h = \tfrac{k}{2} \log n, and Alon, Goldreich, Håstad & Peralta (1992) construct a generator G_2 with \ell = \log s = \log h + O(\log(\epsilon^{-1})). Hence, the concatenation G of G_1 and G_2 has seed length \ell = \log k + \log \log n + O(\log(\epsilon^{-1})). In order for G to yield a \delta-almost k-wise independent space, we need to set \epsilon = \delta 2^{-k/2}, which leads to a seed length of \ell = \log \log n + O(k + \log(\delta^{-1})) and a sample space of total size 2^\ell \leq \log n \cdot \text{poly}(2^k \cdot \delta^{-1}).
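
For completeness, the substitution in the last step can be spelled out; this is a routine calculation, not taken verbatim from the sources:

```latex
% Plug \epsilon = \delta 2^{-k/2} into the seed length bound.
\begin{align*}
\ell &= \log k + \log\log n + O\!\left(\log \tfrac{1}{\epsilon}\right) \\
     &= \log k + \log\log n + O\!\left(\log \tfrac{2^{k/2}}{\delta}\right) \\
     &= \log\log n + O\!\left(k + \log \tfrac{1}{\delta}\right),
\end{align*}
% so that 2^\ell \le \log n \cdot \mathrm{poly}(2^k \delta^{-1}), as quoted above.
```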


References

* Ta-Shma, Amnon (2017), "Explicit, almost optimal, epsilon-balanced codes", ''Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing'', pp. 238–251, doi:10.1145/3055399.3055408, ISBN 9781450345286.