Bernoulli distribution
In probability theory and statistics, the Bernoulli distribution, named after Swiss mathematician Jacob Bernoulli (James Victor Uspensky: ''Introduction to Mathematical Probability'', McGraw-Hill, New York, 1937, p. 45), is the discrete probability distribution of a random variable which takes the value 1 with probability p and the value 0 with probability q = 1 - p. Less formally, it can be thought of as a model for the set of possible outcomes of any single experiment that asks a yes–no question. Such questions lead to outcomes that are boolean-valued: a single bit whose value is success/yes/true/one with probability ''p'' and failure/no/false/zero with probability ''q''. It can be used to represent a (possibly biased) coin toss where 1 and 0 would represent "heads" and "tails", respectively, and ''p'' would be the probability of the coin landing on heads (or vice versa, where 1 would represent tails and ''p'' the probability of tails). In particular, unfair coins would have p \neq 1/2. The Bernoulli distribution is a special case of the binomial distribution where a single trial is conducted (so ''n'' = 1 for such a binomial distribution). It is also a special case of the two-point distribution, for which the possible outcomes need not be 0 and 1.
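To make the coin-toss interpretation concrete, the following Python sketch simulates a biased coin (the names bernoulli_sample, p, and n_flips are illustrative, not taken from any particular library):

```python
import random

def bernoulli_sample(p: float) -> int:
    """Return 1 ("heads") with probability p, else 0 ("tails")."""
    return 1 if random.random() < p else 0

# Simulate an unfair coin (p != 1/2); the empirical frequency of heads
# should be close to p for a large number of flips.
p = 0.3
n_flips = 100_000
heads = sum(bernoulli_sample(p) for _ in range(n_flips))
print(heads / n_flips)  # approximately 0.3
```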


Properties

If X is a random variable with this distribution, then:

:\Pr(X=1) = p = 1 - \Pr(X=0) = 1 - q.

The probability mass function f of this distribution, over possible outcomes ''k'', is

:f(k;p) = \begin{cases} p & \text{if } k = 1, \\ q = 1 - p & \text{if } k = 0. \end{cases}

This can also be expressed as

:f(k;p) = p^k (1-p)^{1-k} \quad \text{for } k \in \{0,1\}

or as

:f(k;p) = pk + (1-p)(1-k) \quad \text{for } k \in \{0,1\}.

The Bernoulli distribution is a special case of the binomial distribution with n = 1. The kurtosis goes to infinity for high and low values of p, but for p = 1/2 the two-point distributions, including the Bernoulli distribution, have a lower excess kurtosis, namely −2, than any other probability distribution. The Bernoulli distributions for 0 \le p \le 1 form an exponential family. The maximum likelihood estimator of p based on a random sample is the sample mean.
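The three equivalent expressions for the probability mass function are easy to check against one another; here is a minimal Python sketch (the function names are made up for illustration):

```python
def pmf_piecewise(k: int, p: float) -> float:
    """f(k; p) as a case split: p if k == 1, and q = 1 - p if k == 0."""
    return p if k == 1 else 1 - p

def pmf_power(k: int, p: float) -> float:
    """f(k; p) = p^k * (1 - p)^(1 - k), for k in {0, 1}."""
    return p ** k * (1 - p) ** (1 - k)

def pmf_linear(k: int, p: float) -> float:
    """f(k; p) = p*k + (1 - p)*(1 - k), for k in {0, 1}."""
    return p * k + (1 - p) * (1 - k)

p = 0.7
for k in (0, 1):
    assert pmf_piecewise(k, p) == pmf_power(k, p) == pmf_linear(k, p)
```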


Mean

The expected value of a Bernoulli random variable X is

:\operatorname{E}[X] = p.

This is due to the fact that for a Bernoulli distributed random variable X with \Pr(X=1) = p and \Pr(X=0) = q we find

:\operatorname{E}[X] = \Pr(X=1)\cdot 1 + \Pr(X=0)\cdot 0 = p \cdot 1 + q \cdot 0 = p.
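As a sanity check of this derivation, the expectation can be computed exactly by summing over the two outcomes (a minimal sketch; the helper name is made up):

```python
def bernoulli_mean(p: float) -> float:
    """E[X] = 1 * Pr(X = 1) + 0 * Pr(X = 0) = p."""
    q = 1 - p
    return 1 * p + 0 * q

assert bernoulli_mean(0.25) == 0.25
```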


Variance

The variance of a Bernoulli distributed X is

:\operatorname{Var}[X] = pq = p(1-p).

We first find

:\operatorname{E}[X^2] = \Pr(X=1)\cdot 1^2 + \Pr(X=0)\cdot 0^2 = p \cdot 1^2 + q \cdot 0^2 = p = \operatorname{E}[X].

From this follows

:\operatorname{Var}[X] = \operatorname{E}[X^2] - \operatorname{E}[X]^2 = \operatorname{E}[X] - \operatorname{E}[X]^2 = p - p^2 = p(1-p) = pq.

With this result it is easy to prove that, for any Bernoulli distribution, its variance will have a value inside [0, 1/4].
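The same outcome-by-outcome enumeration yields E[X^2] and hence the variance; the sketch below (again with illustrative names) also checks numerically that p(1 - p) stays inside [0, 1/4]:

```python
def bernoulli_var(p: float) -> float:
    """Var[X] = E[X^2] - E[X]^2 for a Bernoulli(p) random variable."""
    q = 1 - p
    e_x = 1 * p + 0 * q             # E[X] = p
    e_x2 = 1 ** 2 * p + 0 ** 2 * q  # E[X^2] = p as well
    return e_x2 - e_x ** 2          # p - p^2 = p * (1 - p)

# The variance is maximal at p = 1/2 and lies in [0, 1/4] for all p.
assert all(0 <= bernoulli_var(i / 100) <= 0.25 for i in range(101))
assert bernoulli_var(0.5) == 0.25
```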


Skewness

The skewness is

:\gamma_1 = \frac{q-p}{\sqrt{pq}} = \frac{1-2p}{\sqrt{p(1-p)}}.

When we take the standardized Bernoulli distributed random variable \frac{X - \operatorname{E}[X]}{\sqrt{\operatorname{Var}[X]}} we find that this random variable attains \frac{q}{\sqrt{pq}} with probability p and attains -\frac{p}{\sqrt{pq}} with probability q. Thus we get

:\begin{align} \gamma_1 &= \operatorname{E}\left[\left(\frac{X - \operatorname{E}[X]}{\sqrt{\operatorname{Var}[X]}}\right)^3\right] \\ &= p \cdot \left(\frac{q}{\sqrt{pq}}\right)^3 + q \cdot \left(-\frac{p}{\sqrt{pq}}\right)^3 \\ &= \frac{1}{\sqrt{pq}^3} \left(pq^3 - qp^3\right) \\ &= \frac{pq}{\sqrt{pq}^3} (q-p) \\ &= \frac{q-p}{\sqrt{pq}}. \end{align}
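The closed form can be cross-checked against the outcome enumeration used in the derivation above; a minimal sketch (the helper name is made up):

```python
import math

def bernoulli_skewness(p: float) -> float:
    """gamma_1 = E[((X - E[X]) / sd)^3], computed by enumerating both outcomes."""
    q = 1 - p
    sd = math.sqrt(p * q)
    return p * (q / sd) ** 3 + q * (-p / sd) ** 3

p = 0.2
closed_form = (1 - 2 * p) / math.sqrt(p * (1 - p))
assert math.isclose(bernoulli_skewness(p), closed_form)  # both equal 1.5 here
```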


Higher moments and cumulants

The raw moments are all equal, due to the fact that 1^k = 1 and 0^k = 0:

:\operatorname{E}[X^k] = \Pr(X=1)\cdot 1^k + \Pr(X=0)\cdot 0^k = p \cdot 1 + q \cdot 0 = p = \operatorname{E}[X].

The central moment of order k is given by

:\mu_k = (1-p)(-p)^k + p(1-p)^k.

The first six central moments are

:\begin{align} \mu_1 &= 0, \\ \mu_2 &= p(1-p), \\ \mu_3 &= p(1-p)(1-2p), \\ \mu_4 &= p(1-p)(1-3p(1-p)), \\ \mu_5 &= p(1-p)(1-2p)(1-2p(1-p)), \\ \mu_6 &= p(1-p)(1-5p(1-p)(1-p(1-p))). \end{align}

The higher central moments can be expressed more compactly in terms of \mu_2 and \mu_3:

:\begin{align} \mu_4 &= \mu_2 (1-3\mu_2), \\ \mu_5 &= \mu_3 (1-2\mu_2), \\ \mu_6 &= \mu_2 (1-5\mu_2 (1-\mu_2)). \end{align}

The first six cumulants are

:\begin{align} \kappa_1 &= p, \\ \kappa_2 &= \mu_2, \\ \kappa_3 &= \mu_3, \\ \kappa_4 &= \mu_2 (1-6\mu_2), \\ \kappa_5 &= \mu_3 (1-12\mu_2), \\ \kappa_6 &= \mu_2 (1-30\mu_2 (1-4\mu_2)). \end{align}
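The general formula for \mu_k and the compact expressions for \mu_4 through \mu_6 can be verified against each other numerically; a short sketch (with illustrative names):

```python
import math

def central_moment(k: int, p: float) -> float:
    """mu_k = (1 - p) * (-p)^k + p * (1 - p)^k."""
    return (1 - p) * (-p) ** k + p * (1 - p) ** k

p = 0.3
mu2 = central_moment(2, p)
mu3 = central_moment(3, p)

# Cross-check the compact forms of mu_4 .. mu_6 in terms of mu_2 and mu_3.
assert math.isclose(central_moment(4, p), mu2 * (1 - 3 * mu2))
assert math.isclose(central_moment(5, p), mu3 * (1 - 2 * mu2))
assert math.isclose(central_moment(6, p), mu2 * (1 - 5 * mu2 * (1 - mu2)))
```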


Related distributions

*If X_1, \dots, X_n are independent, identically distributed (i.i.d.) random variables, all Bernoulli trials with success probability ''p'', then their sum is distributed according to a binomial distribution with parameters ''n'' and ''p'' (this is simulated in the sketch after this list):
*:\sum_{k=1}^n X_k \sim \operatorname{B}(n, p) (binomial distribution).
:The Bernoulli distribution is simply \operatorname{B}(1, p), also written as \mathrm{Bernoulli}(p).
*The categorical distribution is the generalization of the Bernoulli distribution for variables with any constant number of discrete values.
*The beta distribution is the conjugate prior of the Bernoulli distribution.
*The geometric distribution models the number of independent and identical Bernoulli trials needed to get one success.
*If Y \sim \mathrm{Bernoulli}\left(\tfrac{1}{2}\right), then 2Y - 1 has a Rademacher distribution.
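Two of these relationships are easy to simulate directly; a minimal Python sketch (the function names are illustrative):

```python
import random

def bernoulli(p: float) -> int:
    """One Bernoulli(p) draw: 1 with probability p, else 0."""
    return 1 if random.random() < p else 0

def binomial_draw(n: int, p: float) -> int:
    """Sum of n i.i.d. Bernoulli(p) trials: one Binomial(n, p) draw."""
    return sum(bernoulli(p) for _ in range(n))

def rademacher_draw() -> int:
    """Map a Bernoulli(1/2) outcome from {0, 1} to {-1, +1}."""
    return 2 * bernoulli(0.5) - 1

print(binomial_draw(10, 0.4))  # an integer between 0 and 10
print(rademacher_draw())       # either -1 or +1
```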


See also

* Bernoulli process, a random process consisting of a sequence of independent Bernoulli trials
* Bernoulli sampling
* Binary entropy function
* Binary decision diagram




External links

* Interactive graphic: Univariate Distribution Relationships