In probability theory and statistics, the Bernoulli distribution, named after Swiss mathematician Jacob Bernoulli, is the discrete probability distribution of a random variable which takes the value 1 with probability p and the value 0 with probability q = 1 - p. Less formally, it can be thought of as a model for the set of possible outcomes of any single experiment that asks a yes–no question. Such questions lead to outcomes that are Boolean-valued: a single bit whose value is success/yes/true/one with probability ''p'' and failure/no/false/zero with probability ''q''. It can be used to represent a (possibly biased) coin toss where 1 and 0 would represent "heads" and "tails", respectively, and ''p'' would be the probability of the coin landing on heads (or vice versa, where 1 would represent tails and ''p'' would be the probability of tails). In particular, unfair coins would have p \neq 1/2.

The Bernoulli distribution is a special case of the binomial distribution where a single trial is conducted (so ''n'' would be 1 for such a binomial distribution). It is also a special case of the two-point distribution, for which the possible outcomes need not be 0 and 1.
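As a concrete illustration, a Bernoulli draw can be simulated by comparing a uniform random number against ''p''. Below is a minimal Python sketch; the function name bernoulli_sample and the value p = 0.7 are illustrative choices, not part of the source:

```python
import random

def bernoulli_sample(p, rng=random):
    """Return 1 with probability p and 0 with probability 1 - p."""
    return 1 if rng.random() < p else 0

# Simulate ten tosses of a biased coin with heads probability 0.7.
p = 0.7
tosses = [bernoulli_sample(p) for _ in range(10)]
print(tosses)  # e.g. [1, 0, 1, 1, 1, 0, 1, 1, 0, 1]
```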


Properties

If X is a random variable with a Bernoulli distribution, then:

:\Pr(X=1) = p, \quad \Pr(X=0) = q = 1 - p.

The probability mass function f of this distribution, over possible outcomes ''k'', is

:f(k;p) = \begin{cases} p & \text{if } k=1, \\ q = 1-p & \text{if } k = 0. \end{cases}

This can also be expressed as

:f(k;p) = p^k (1-p)^{1-k} \quad \text{for } k\in\{0,1\}

or as

:f(k;p) = pk + (1-p)(1-k) \quad \text{for } k\in\{0,1\}.

The Bernoulli distribution is a special case of the binomial distribution with n = 1.

The kurtosis goes to infinity for high and low values of p, but for p=1/2 the two-point distributions, including the Bernoulli distribution, have a lower excess kurtosis, namely −2, than any other probability distribution.

The Bernoulli distributions for 0 \le p \le 1 form an exponential family.

The maximum likelihood estimator of p based on a random sample is the sample mean.
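A direct transcription of the probability mass function, together with the maximum-likelihood estimate, might look as follows; this is a minimal sketch, and the name bernoulli_pmf and the sample data are hypothetical:

```python
def bernoulli_pmf(k, p):
    """f(k; p) = p^k * (1-p)^(1-k) for k in {0, 1}."""
    if k not in (0, 1):
        raise ValueError("Bernoulli outcomes are 0 or 1")
    return p ** k * (1 - p) ** (1 - k)

# The maximum likelihood estimator of p is the sample mean.
sample = [1, 0, 1, 1, 0, 1, 1, 1]      # hypothetical observations
p_hat = sum(sample) / len(sample)      # MLE: 6/8 = 0.75
print(bernoulli_pmf(1, p_hat))         # 0.75
print(bernoulli_pmf(0, p_hat))         # 0.25
```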


Mean

The expected value of a Bernoulli random variable X is

:\operatorname{E}[X] = p.

This is because for a Bernoulli distributed random variable X with \Pr(X=1)=p and \Pr(X=0)=q we find

:\operatorname{E}[X] = \Pr(X=1)\cdot 1 + \Pr(X=0)\cdot 0 = p \cdot 1 + q \cdot 0 = p.


Variance

The variance of a Bernoulli distributed X is

:\operatorname{Var}[X] = pq = p(1-p).

We first find

:\operatorname{E}[X^2] = \Pr(X=1)\cdot 1^2 + \Pr(X=0)\cdot 0^2 = p \cdot 1^2 + q \cdot 0^2 = p = \operatorname{E}[X].

From this follows

:\operatorname{Var}[X] = \operatorname{E}[X^2] - \operatorname{E}[X]^2 = \operatorname{E}[X] - \operatorname{E}[X]^2 = p - p^2 = p(1-p) = pq.

With this result it is easy to prove that, for any Bernoulli distribution, its variance will have a value inside [0, 1/4].
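The mean and variance formulas above can be checked empirically by simulation; a brief sketch under an assumed parameter p = 0.3 and a fixed seed:

```python
import random

p = 0.3
rng = random.Random(42)          # fixed seed for reproducibility
n = 100_000
draws = [1 if rng.random() < p else 0 for _ in range(n)]

mean = sum(draws) / n
var = sum((x - mean) ** 2 for x in draws) / n
print(mean, p)                   # empirical mean ~ p
print(var, p * (1 - p))          # empirical variance ~ p(1-p) = 0.21 <= 1/4
```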


Skewness

The skewness is

:\frac{q-p}{\sqrt{pq}} = \frac{1-2p}{\sqrt{p(1-p)}}.

When we take the standardized Bernoulli distributed random variable \frac{X - \operatorname{E}[X]}{\sqrt{\operatorname{Var}[X]}} we find that this random variable attains \frac{q}{\sqrt{pq}} with probability p and attains -\frac{p}{\sqrt{pq}} with probability q. Thus we get

:\begin{align} \gamma_1 &= \operatorname{E}\left[\left(\frac{X - \operatorname{E}[X]}{\sqrt{\operatorname{Var}[X]}}\right)^3\right] \\ &= p \cdot \left(\frac{q}{\sqrt{pq}}\right)^3 + q \cdot \left(-\frac{p}{\sqrt{pq}}\right)^3 \\ &= \frac{1}{\sqrt{pq}^3} \left(pq^3 - qp^3\right) \\ &= \frac{pq}{\sqrt{pq}^3} \left(q^2 - p^2\right) \\ &= \frac{(q-p)(q+p)}{\sqrt{pq}} \\ &= \frac{q-p}{\sqrt{pq}} = \frac{1-2p}{\sqrt{p(1-p)}}. \end{align}
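The closed form is easy to evaluate; a small sketch (the function name is illustrative):

```python
import math

def bernoulli_skewness(p):
    """Skewness of Bernoulli(p): (1 - 2p) / sqrt(p(1-p))."""
    q = 1 - p
    return (q - p) / math.sqrt(p * q)

print(bernoulli_skewness(0.5))   # 0.0: symmetric
print(bernoulli_skewness(0.2))   # 1.5: mass at 0, long right tail
```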


Higher moments and cumulants

The raw moments are all equal, because 1^k=1 and 0^k=0:

:\operatorname{E}[X^k] = \Pr(X=1)\cdot 1^k + \Pr(X=0)\cdot 0^k = p \cdot 1 + q \cdot 0 = p = \operatorname{E}[X].

The central moment of order k is given by

:\mu_k = (1-p)(-p)^k + p(1-p)^k.

The first six central moments are

:\begin{align} \mu_1 &= 0, \\ \mu_2 &= p(1-p), \\ \mu_3 &= p(1-p)(1-2p), \\ \mu_4 &= p(1-p)(1-3p(1-p)), \\ \mu_5 &= p(1-p)(1-2p)(1-2p(1-p)), \\ \mu_6 &= p(1-p)(1-5p(1-p)(1-p(1-p))). \end{align}

The higher central moments can be expressed more compactly in terms of \mu_2 and \mu_3:

:\begin{align} \mu_4 &= \mu_2 (1-3\mu_2), \\ \mu_5 &= \mu_3 (1-2\mu_2), \\ \mu_6 &= \mu_2 (1-5\mu_2 (1-\mu_2)). \end{align}

The first six cumulants are

:\begin{align} \kappa_1 &= p, \\ \kappa_2 &= \mu_2, \\ \kappa_3 &= \mu_3, \\ \kappa_4 &= \mu_2 (1-6\mu_2), \\ \kappa_5 &= \mu_3 (1-12\mu_2), \\ \kappa_6 &= \mu_2 (1-30\mu_2 (1-4\mu_2)). \end{align}
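The tabulated central moments can be spot-checked against the general formula; a brief sketch with an assumed p = 0.3:

```python
def central_moment(k, p):
    """mu_k = (1-p)(-p)^k + p(1-p)^k."""
    return (1 - p) * (-p) ** k + p * (1 - p) ** k

p = 0.3
print(central_moment(2, p), p * (1 - p))                # both 0.21
print(central_moment(3, p), p * (1 - p) * (1 - 2 * p))  # both 0.084
```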


Entropy and Fisher information


Entropy

Entropy is a measure of uncertainty or randomness in a probability distribution. For a Bernoulli random variable X with success probability p and failure probability q = 1 - p, the entropy H(X) is defined as:

:\begin{align} H(X) &= \operatorname{E}_p\left[\ln \frac{1}{P(X)}\right] = -[P(X = 0) \ln P(X = 0) + P(X = 1) \ln P(X = 1)] \\ &= -(q \ln q + p \ln p), \quad q = P(X = 0),\ p = P(X = 1). \end{align}

The entropy is maximized when p = 0.5, indicating the highest level of uncertainty when both outcomes are equally likely. The entropy is zero when p = 0 or p = 1, where one outcome is certain.
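The behaviour at the endpoints and at p = 0.5 can be seen directly; a minimal sketch (bernoulli_entropy is an illustrative name, using the usual convention 0 ln 0 = 0):

```python
import math

def bernoulli_entropy(p):
    """H(X) = -(q ln q + p ln p), with 0 ln 0 taken as 0."""
    if p in (0, 1):
        return 0.0
    q = 1 - p
    return -(q * math.log(q) + p * math.log(p))

print(bernoulli_entropy(0.5))  # ln 2 ~ 0.693, the maximum
print(bernoulli_entropy(0.9))  # ~ 0.325, less uncertain
print(bernoulli_entropy(1.0))  # 0.0, outcome certain
```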


Fisher information

Fisher information measures the amount of information that an observable random variable X carries about an unknown parameter p upon which the probability of X depends. For the Bernoulli distribution, the Fisher information with respect to the parameter p is given by:

:I(p) = \frac{1}{p(1-p)} = \frac{1}{pq}.

Proof:

*The likelihood function for a Bernoulli random variable X is:
:L(p; X) = p^X (1 - p)^{1-X}.
:This represents the probability of observing X given the parameter p.
*The log-likelihood function is:
:\ln L(p; X) = X \ln p + (1 - X) \ln (1 - p).
*The score function (the first derivative of the log-likelihood with respect to p) is:
:\frac{\partial}{\partial p} \ln L(p; X) = \frac{X}{p} - \frac{1-X}{1-p}.
*The second derivative of the log-likelihood function is:
:\frac{\partial^2}{\partial p^2} \ln L(p; X) = -\frac{X}{p^2} - \frac{1-X}{(1-p)^2}.
*Fisher information is calculated as the negative expected value of the second derivative of the log-likelihood:
:I(p) = -\operatorname{E}\left[\frac{\partial^2}{\partial p^2} \ln L(p; X)\right] = -\left(-\frac{p}{p^2} - \frac{1-p}{(1-p)^2}\right) = \frac{1}{p} + \frac{1}{1-p} = \frac{1}{p(1-p)} = \frac{1}{pq}.

The Fisher information is minimized at p = 0.5, where the variance of X (and hence the uncertainty in each observation) is largest; as p approaches 0 or 1 it grows without bound, since near-deterministic outcomes pin down the parameter ever more precisely.
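The closed form can also be confirmed numerically via the information identity I(p) = E[(\partial_p \ln L)^2]; a sketch with an assumed p = 0.3:

```python
def fisher_information(p):
    """I(p) = 1 / (p(1-p)) for Bernoulli(p)."""
    return 1.0 / (p * (1 - p))

def expected_score_squared(p):
    """E[(d/dp ln L)^2], computed exactly over the two outcomes."""
    score = lambda x: x / p - (1 - x) / (1 - p)
    return p * score(1) ** 2 + (1 - p) * score(0) ** 2

p = 0.3
print(fisher_information(p))      # 4.7619...
print(expected_score_squared(p))  # same value, by the information identity
```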


Related distributions

*If X_1,\dots,X_n are independent, identically distributed (i.i.d.) random variables, all Bernoulli trials with success probability ''p'', then their sum is distributed according to a binomial distribution with parameters ''n'' and ''p'' (see the sketch after this list):
*:\sum_{k=1}^n X_k \sim \operatorname{B}(n,p).
:The Bernoulli distribution is simply \operatorname{B}(1, p), also written as \mathrm{Bernoulli}(p).
*The categorical distribution is the generalization of the Bernoulli distribution for variables with any constant number of discrete values.
*The Beta distribution is the conjugate prior of the Bernoulli distribution.
*The geometric distribution models the number of independent and identical Bernoulli trials needed to get one success.
*If Y \sim \mathrm{Bernoulli}\left(\frac{1}{2}\right), then 2Y - 1 has a Rademacher distribution.
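The first relation above can be checked by simulation; a sketch with the assumed values p = 0.4 and n = 5:

```python
import math
import random
from collections import Counter

rng = random.Random(0)
p, n, trials = 0.4, 5, 50_000

# Each trial: sum n i.i.d. Bernoulli(p) draws; the sum should be Binomial(n, p).
sums = Counter(
    sum(1 if rng.random() < p else 0 for _ in range(n))
    for _ in range(trials)
)
for k in range(n + 1):
    exact = math.comb(n, k) * p**k * (1 - p) ** (n - k)
    print(k, round(sums[k] / trials, 4), round(exact, 4))
```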


See also

*Bernoulli process, a random process consisting of a sequence of independent Bernoulli trials
*Bernoulli sampling
*Binary entropy function
*Binary decision diagram



External links

*Interactive graphic: Univariate Distribution Relationships