Binomial distribution

In probability theory and statistics, the binomial distribution with parameters ''n'' and ''p'' is the discrete probability distribution of the number of successes in a sequence of ''n'' independent experiments, each asking a yes–no question, and each with its own Boolean-valued outcome: ''success'' (with probability ''p'') or ''failure'' (with probability ''q'' = 1 − ''p''). A single success/failure experiment is also called a Bernoulli trial or Bernoulli experiment, and a sequence of outcomes is called a Bernoulli process; for a single trial, i.e., ''n'' = 1, the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the binomial test of statistical significance.

The binomial distribution is frequently used to model the number of successes in a sample of size ''n'' drawn with replacement from a population of size ''N''. If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for ''N'' much larger than ''n'', the binomial distribution remains a good approximation, and is widely used.


Definitions


Probability mass function

If the random variable ''X'' follows the binomial distribution with parameters ''n'' and ''p'', we write ''X'' ~ B(''n'', ''p''). The probability of getting exactly ''k'' successes in ''n'' independent Bernoulli trials (with the same rate ''p'') is given by the probability mass function

: f(k,n,p) = \Pr(X = k) = \binom{n}{k} p^k (1-p)^{n-k}

for ''k'' = 0, 1, 2, ..., ''n'', where

: \binom{n}{k} = \frac{n!}{k!(n-k)!}

is the binomial coefficient. The formula can be understood as follows: p^k (1-p)^{n-k} is the probability of obtaining one particular sequence of ''n'' independent Bernoulli trials in which ''k'' trials are "successes" and the remaining ''n'' − ''k'' trials result in "failure". Since the trials are independent with probabilities remaining constant between them, any sequence of ''n'' trials with ''k'' successes (and ''n'' − ''k'' failures) has the same probability of being achieved (regardless of the positions of the successes within the sequence). There are \binom{n}{k} such sequences, since the binomial coefficient \binom{n}{k} counts the number of ways to choose the positions of the ''k'' successes among the ''n'' trials. The binomial distribution is concerned with the probability of obtaining ''any'' of these sequences, meaning the probability of obtaining one of them, p^k (1-p)^{n-k}, must be added \binom{n}{k} times, hence \Pr(X = k) = \binom{n}{k} p^k (1-p)^{n-k}.

In creating reference tables for binomial distribution probability, usually the table is filled in up to ''n''/2 values. This is because for ''k'' > ''n''/2, the probability can be calculated by its complement as

: f(k,n,p) = f(n-k,n,1-p).

Looking at the expression f(k,n,p) as a function of ''k'', there is a ''k'' value that maximizes it. This ''k'' value can be found by calculating

: \frac{f(k+1,n,p)}{f(k,n,p)} = \frac{(n-k)p}{(k+1)(1-p)}

and comparing it to 1. There is always an integer ''M'' that satisfies

: (n+1)p-1 \leq M < (n+1)p.

f(k,n,p) is monotone increasing for ''k'' < ''M'' and monotone decreasing for ''k'' > ''M'', with the exception of the case where (''n'' + 1)''p'' is an integer. In this case, there are two values for which ''f'' is maximal: (''n'' + 1)''p'' and (''n'' + 1)''p'' − 1. ''M'' is the ''most probable'' outcome (that is, the most likely, although this can still be unlikely overall) of the Bernoulli trials and is called the mode. Equivalently, ''M'' − ''p'' < ''np'' ≤ ''M'' + 1 − ''p''; taking the floor function, ''M'' = \lfloor (n+1)p \rfloor whenever (''n'' + 1)''p'' is not an integer.
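The probability mass function is straightforward to compute directly. The following is a minimal Python sketch (the function name binomial_pmf is illustrative, not from any particular library) implementing the formula above and the complement identity f(k, n, p) = f(n − k, n, 1 − p):

```python
from math import comb

def binomial_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k successes in n independent trials with success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# The complement identity lets reference tables stop at k = n/2:
assert abs(binomial_pmf(7, 10, 0.4) - binomial_pmf(3, 10, 0.6)) < 1e-12

# The probabilities over k = 0..n sum to 1:
assert abs(sum(binomial_pmf(k, 10, 0.4) for k in range(11)) - 1.0) < 1e-12
```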


Example

Suppose a biased coin comes up heads with probability 0.3 when tossed. The probability of seeing exactly 4 heads in 6 tosses is

: f(4,6,0.3) = \binom{6}{4} 0.3^4 (1-0.3)^{6-4} = 0.059535.
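As a quick check of this arithmetic, the same probability can be evaluated with SciPy (assuming the scipy package is available):

```python
from scipy.stats import binom

# P(X = 4) for X ~ B(6, 0.3)
print(binom.pmf(4, 6, 0.3))  # 0.059535
```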


Cumulative distribution function

The cumulative distribution function can be expressed as

: F(k;n,p) = \Pr(X \le k) = \sum_{i=0}^{\lfloor k \rfloor} \binom{n}{i} p^i (1-p)^{n-i},

where \lfloor k\rfloor is the "floor" under ''k'', i.e. the greatest integer less than or equal to ''k''.

It can also be represented in terms of the regularized incomplete beta function, as follows:

: \begin{align} F(k;n,p) & = \Pr(X \le k) \\ &= I_{1-p}(n-k, k+1) \\ & = (n-k) \binom{n}{k} \int_0^{1-p} t^{n-k-1} (1-t)^k \, dt, \end{align}

which is equivalent to the cumulative distribution functions of the beta distribution and of the ''F''-distribution:

: F(k;n,p) = F_{\text{Beta}}\left(x=1-p;\alpha=n-k,\beta=k+1\right)

: F(k;n,p) = F_{F}\left(x=\frac{1-p}{p}\,\frac{k+1}{n-k};\,d_1=2(n-k),\,d_2=2(k+1)\right).

Some closed-form bounds for the cumulative distribution function are given below.
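A minimal Python sketch, assuming SciPy is available, checking the summation form of the CDF against the regularized incomplete beta representation I_{1-p}(n − k, k + 1):

```python
from math import comb
from scipy.special import betainc  # regularized incomplete beta I_x(a, b)

def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ B(n, p), by direct summation of the PMF."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n, p, k = 20, 0.35, 8
direct = binom_cdf(k, n, p)
via_beta = betainc(n - k, k + 1, 1 - p)  # I_{1-p}(n-k, k+1)
assert abs(direct - via_beta) < 1e-10
```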


Properties


Expected value and variance

If ''X'' ~ B(''n'', ''p''), that is, ''X'' is a binomially distributed random variable, ''n'' being the total number of experiments and ''p'' the probability of each experiment yielding a successful result, then the expected value of ''X'' is:

: \operatorname{E}[X] = np.

This follows from the linearity of the expected value along with the fact that ''X'' is the sum of ''n'' identical Bernoulli random variables, each with expected value ''p''. In other words, if X_1, \ldots, X_n are identical (and independent) Bernoulli random variables with parameter ''p'', then X = X_1 + \cdots + X_n and

: \operatorname{E}[X] = \operatorname{E}[X_1 + \cdots + X_n] = \operatorname{E}[X_1] + \cdots + \operatorname{E}[X_n] = p + \cdots + p = np.

The variance is:

: \operatorname{Var}(X) = npq = np(1 - p).

This similarly follows from the fact that the variance of a sum of independent random variables is the sum of the variances.
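A short simulation sketch (using NumPy; the sample size of one million is arbitrary) illustrating that the empirical mean and variance of binomial draws match ''np'' and ''np''(1 − ''p''):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 0.4
samples = rng.binomial(n, p, size=1_000_000)

print(samples.mean(), n * p)             # both approximately 12.0
print(samples.var(), n * p * (1 - p))    # both approximately 7.2
```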


Higher moments

The first 6 central moments, defined as \mu_c = \operatorname{E}\left[(X - \operatorname{E}[X])^c\right], are given by

: \begin{align} \mu_1 &= 0, \\ \mu_2 &= np(1-p),\\ \mu_3 &= np(1-p)(1-2p),\\ \mu_4 &= np(1-p)(1+(3n-6)p(1-p)),\\ \mu_5 &= np(1-p)(1-2p)(1+(10n-12)p(1-p)),\\ \mu_6 &= np(1-p)(1-30p(1-p)(1-4p(1-p))+5np(1-p)(5-26p(1-p))+15n^2 p^2 (1-p)^2). \end{align}

The non-central moments satisfy

: \begin{align} \operatorname{E}[X] &= np, \\ \operatorname{E}[X^2] &= np(1-p)+n^2p^2, \end{align}

and in general

: \operatorname{E}[X^c] = \sum_{k=0}^c \left\{ {c \atop k} \right\} n^{\underline{k}} p^k,

where \textstyle \left\{{c \atop k}\right\} are the Stirling numbers of the second kind, and n^{\underline{k}} = n(n-1)\cdots(n-k+1) is the ''k''th falling power of ''n''. A simple bound follows by bounding the binomial moments via the higher Poisson moments:

: \operatorname{E}[X^c] \le \left(\frac{c}{\ln(c/(np)+1)}\right)^c \le (np)^c \exp\left(\frac{c^2}{2np}\right).

This shows that if c = O(\sqrt{np}), then \operatorname{E}[X^c] is at most a constant factor away from \operatorname{E}[X]^c.
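As a sanity check of the low-order formulas, a short Python sketch computing E[X], E[X^2] and the central moments μ_2, μ_3 exactly from the PMF (the helper name is illustrative):

```python
from math import comb

def moment(n, p, g):
    """Exact expectation E[g(X)] for X ~ B(n, p), by summing over the support."""
    return sum(g(k) * comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))

n, p = 12, 0.3
mean = moment(n, p, lambda k: k)
assert abs(mean - n * p) < 1e-12
assert abs(moment(n, p, lambda k: k**2) - (n * p * (1 - p) + (n * p)**2)) < 1e-9
assert abs(moment(n, p, lambda k: (k - mean)**3) - n * p * (1 - p) * (1 - 2 * p)) < 1e-9
```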


Mode

Usually the mode of a binomial B(''n'', ''p'') distribution is equal to \lfloor (n+1)p\rfloor, where \lfloor\cdot\rfloor is the floor function. However, when (''n'' + 1)''p'' is an integer and ''p'' is neither 0 nor 1, then the distribution has two modes: (''n'' + 1)''p'' and (''n'' + 1)''p'' − 1. When ''p'' is equal to 0 or 1, the mode will be 0 and ''n'' correspondingly. These cases can be summarized as follows:

: \text{mode} = \begin{cases} \lfloor (n+1)\,p\rfloor & \text{if }(n+1)p\text{ is 0 or a noninteger}, \\ (n+1)\,p\ \text{ and }\ (n+1)\,p - 1 &\text{if }(n+1)p\in\{1,\dots,n\}, \\ n & \text{if }(n+1)p = n + 1. \end{cases}

Proof: Let

: f(k)=\binom{n}{k} p^k q^{n-k}.

For p=0 only f(0) has a nonzero value with f(0)=1. For p=1 we find f(n)=1 and f(k)=0 for k\neq n. This proves that the mode is 0 for p=0 and ''n'' for p=1.

Let 0 < p < 1. We find

: \frac{f(k+1)}{f(k)} = \frac{(n-k)p}{(k+1)(1-p)}.

From this follows

: \begin{align} k > (n+1)p-1 &\Rightarrow f(k+1) < f(k) \\ k = (n+1)p-1 &\Rightarrow f(k+1) = f(k) \\ k < (n+1)p-1 &\Rightarrow f(k+1) > f(k) \end{align}

So when (n+1)p-1 is an integer, then (n+1)p-1 and (n+1)p are both modes. In the case that (n+1)p-1\notin \Z, only \lfloor (n+1)p-1\rfloor+1=\lfloor (n+1)p\rfloor is a mode.
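A small Python sketch comparing the closed-form mode \lfloor (n+1)p\rfloor with a brute-force argmax of the PMF (in the two-mode case the brute-force search reports only the smaller maximizer):

```python
from math import comb, floor

def pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def brute_force_mode(n, p):
    return max(range(n + 1), key=lambda k: pmf(k, n, p))

for n, p in [(10, 0.3), (7, 0.5), (20, 0.05)]:
    assert brute_force_mode(n, p) in {floor((n + 1) * p), floor((n + 1) * p) - 1}
```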


Median

In general, there is no single formula to find the median for a binomial distribution, and it may even be non-unique. However, several special results have been established:
* If ''np'' is an integer, then the mean, median, and mode coincide and equal ''np''.
* Any median ''m'' must lie within the interval \lfloor np \rfloor \leq m \leq \lceil np \rceil.
* A median ''m'' cannot lie too far away from the mean: |m-np| \leq \min\{\ln 2, \max\{p, 1-p\}\}.
* The median is unique and equal to ''m'' = round(''np'') when |m-np| \leq \min\{p, 1-p\} (except for the case when p = \tfrac{1}{2} and ''n'' is odd).
* When ''p'' is a rational number (with the exception of p = \tfrac{1}{2} and ''n'' odd) the median is unique.
* When p = \tfrac{1}{2} and ''n'' is odd, any number ''m'' in the interval \tfrac{1}{2}(n-1)\leq m \leq \tfrac{1}{2}(n+1) is a median of the binomial distribution. If p = \tfrac{1}{2} and ''n'' is even, then m = \tfrac{n}{2} is the unique median.
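A brute-force check of the first two bullets above, computing a median as the smallest ''m'' with F(''m'') ≥ 1/2 (a sketch, not a general median-finding routine):

```python
from math import comb, floor, ceil

def cdf(k, n, p):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def smallest_median(n, p):
    return next(m for m in range(n + 1) if cdf(m, n, p) >= 0.5)

for n, p in [(9, 0.5), (15, 0.2), (10, 0.37)]:
    m = smallest_median(n, p)
    assert floor(n * p) <= m <= ceil(n * p)
```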


Tail bounds

For ''k'' ≤ ''np'', upper bounds can be derived for the lower tail of the cumulative distribution function F(k;n,p) = \Pr(X \le k), the probability that there are at most ''k'' successes. Since \Pr(X \ge k) = F(n-k;n,1-p), these bounds can also be seen as bounds for the upper tail of the cumulative distribution function for ''k'' ≥ ''np''.

Hoeffding's inequality yields the simple bound

: F(k;n,p) \leq \exp\left(-2 n\left(p-\frac{k}{n}\right)^2\right),

which is however not very tight. In particular, for ''p'' = 1, we have that ''F''(''k'';''n'',''p'') = 0 (for fixed ''k'', ''n'' with ''k'' < ''n''), but Hoeffding's bound evaluates to a positive constant.

A sharper bound can be obtained from the Chernoff bound:

: F(k;n,p) \leq \exp\left(-nD\left(\frac{k}{n}\parallel p\right)\right)

where D(a \parallel p) is the relative entropy (or Kullback–Leibler divergence) between an ''a''-coin and a ''p''-coin (i.e. between the Bernoulli(''a'') and Bernoulli(''p'') distribution):

: D(a\parallel p)=a\ln\frac{a}{p}+(1-a)\ln\frac{1-a}{1-p}.

Asymptotically, this bound is reasonably tight; see the references for details.

One can also obtain ''lower'' bounds on the tail F(k;n,p), known as anti-concentration bounds. By approximating the binomial coefficient with Stirling's formula it can be shown that

: F(k;n,p) \geq \frac{1}{\sqrt{8n\tfrac{k}{n}(1-\tfrac{k}{n})}} \exp\left(-nD\left(\frac{k}{n}\parallel p\right)\right),

which implies the simpler but looser bound

: F(k;n,p) \geq \frac{1}{\sqrt{2n}} \exp\left(-nD\left(\frac{k}{n}\parallel p\right)\right).

For ''p'' = 1/2 and ''k'' ≥ 3''n''/8 for even ''n'', it is possible to make the denominator constant:

: F(k;n,\tfrac{1}{2}) \geq \frac{1}{15} \exp\left(- 16n \left(\frac{1}{2} -\frac{k}{n}\right)^2\right).
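A minimal Python sketch comparing the exact lower-tail probability with the Hoeffding and Chernoff (relative-entropy) bounds above:

```python
from math import comb, exp, log

def lower_tail(k, n, p):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def kl_bernoulli(a, p):
    """Relative entropy D(a || p) between Bernoulli(a) and Bernoulli(p)."""
    return a * log(a / p) + (1 - a) * log((1 - a) / (1 - p))

n, p, k = 100, 0.9, 80          # k <= np, so these are lower-tail bounds
exact = lower_tail(k, n, p)
hoeffding = exp(-2 * n * (p - k / n) ** 2)
chernoff = exp(-n * kl_bernoulli(k / n, p))
print(exact, chernoff, hoeffding)   # both bounds hold; the Chernoff bound is noticeably tighter here
```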


Statistical inference


Estimation of parameters

When ''n'' is known, the parameter ''p'' can be estimated using the proportion of successes:

: \widehat{p} = \frac{x}{n}.

This estimator is found using maximum likelihood estimation and also the method of moments. It is unbiased and has uniformly minimum variance, as proven using the Lehmann–Scheffé theorem, since it is based on a minimal sufficient and complete statistic (i.e.: ''x'', the number of successes). It is also consistent both in probability and in MSE. This statistic is asymptotically normal thanks to the central limit theorem, because it is the same as taking the mean over Bernoulli samples. It has a variance of \operatorname{var}(\widehat{p}) = \frac{p(1-p)}{n}, a property which is used in various ways, such as in Wald's confidence intervals.

A closed-form Bayes estimator for ''p'' also exists when using the Beta distribution as a conjugate prior distribution. When using a general \operatorname{Beta}(\alpha, \beta) as a prior, the posterior mean estimator is:

: \widehat{p}_b = \frac{x+\alpha}{n+\alpha+\beta}.

The Bayes estimator is asymptotically efficient and as the sample size approaches infinity (''n'' → ∞), it approaches the MLE solution. The Bayes estimator is biased (how much depends on the priors), admissible and consistent in probability. The Bayesian estimator with the Beta distribution can be used with Thompson sampling.

For the special case of using the standard uniform distribution as a non-informative prior, \operatorname{Beta}(\alpha=1, \beta=1) = U(0,1), the posterior mean estimator becomes:

: \widehat{p}_b = \frac{x+1}{n+2}.

(A posterior mode should just lead to the standard estimator.) This method is called the rule of succession, which was introduced in the 18th century by Pierre-Simon Laplace.

When relying on Jeffreys prior, the prior is \operatorname{Beta}(\alpha=\tfrac{1}{2}, \beta=\tfrac{1}{2}), which leads to the estimator:

: \widehat{p}_{\text{Jeffreys}} = \frac{x+\frac{1}{2}}{n+1}.

When estimating ''p'' with very rare events and a small ''n'' (e.g.: if ''x'' = 0), then using the standard estimator leads to \widehat{p} = 0, which sometimes is unrealistic and undesirable. In such cases there are various alternative estimators. One way is to use the Bayes estimator \widehat{p}_b, leading to:

: \widehat{p}_b = \frac{1}{n+2}.

Another method is to use the upper bound of the confidence interval obtained using the rule of three:

: \widehat{p}_{\text{rule of 3}} = \frac{3}{n}.
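A minimal Python sketch collecting these point estimates (the dictionary keys are just descriptive labels, not standard terminology):

```python
def estimators(x: int, n: int) -> dict:
    """Point estimates of p from x successes in n trials."""
    return {
        "mle": x / n,                       # maximum likelihood / method of moments
        "laplace": (x + 1) / (n + 2),       # rule of succession (uniform prior)
        "jeffreys": (x + 0.5) / (n + 1),    # Jeffreys prior Beta(1/2, 1/2)
        "rule_of_three_upper": 3 / n,       # upper bound used when x = 0
    }

print(estimators(x=0, n=25))   # the MLE is 0; the alternatives are small but positive
```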


Confidence intervals for the parameter p

Even for quite large values of ''n'', the actual distribution of the mean is significantly nonnormal. Because of this problem several methods to estimate confidence intervals have been proposed. In the equations for confidence intervals below, the variables have the following meaning:
* ''n''1 is the number of successes out of ''n'', the total number of trials
* \widehat{p} = \frac{n_1}{n} is the proportion of successes
* z is the 1 - \tfrac{\alpha}{2} quantile of a standard normal distribution (i.e., probit) corresponding to the target error rate \alpha. For example, for a 95% confidence level the error \alpha = 0.05, so 1 - \tfrac{\alpha}{2} = 0.975 and z = 1.96.


Wald method

: \widehat{p} \pm z \sqrt{\frac{\widehat{p}(1-\widehat{p})}{n}}.

A continuity correction of 0.5/''n'' may be added.


Agresti–Coull method

: \tilde{p} \pm z \sqrt{\frac{\tilde{p}(1-\tilde{p})}{n + z^2}}

Here the estimate of ''p'' is modified to

: \tilde{p}= \frac{n_1 + \frac{z^2}{2}}{n + z^2}

This method works well for ''n'' > 10 and ''n''1 ≠ 0, ''n''. See here for n\leq 10. For ''n''1 = 0 or ''n''1 = ''n'', use the Wilson (score) method below.


Arcsine method

: \sin^2 \left(\arcsin \left(\sqrt{\widehat{p}}\right) \pm \frac{z}{2\sqrt{n}} \right).


Wilson (score) method

The notation in the formula below differs from the previous formulas in two respects:
* Firstly, z_x has a slightly different interpretation in the formula below: it has its ordinary meaning of 'the ''x''th quantile of the standard normal distribution', rather than being a shorthand for 'the (1 − ''x'')th quantile'.
* Secondly, this formula does not use a plus-minus to define the two bounds. Instead, one may use z = z_{\alpha/2} to get the lower bound, or use z = z_{1-\alpha/2} to get the upper bound. For example: for a 95% confidence level the error \alpha = 0.05, so one gets the lower bound by using z = z_{\alpha/2} = z_{0.025} = -1.96, and one gets the upper bound by using z = z_{1-\alpha/2} = z_{0.975} = 1.96.

: \frac{\widehat{p} + \frac{z^2}{2n} + z\sqrt{\frac{\widehat{p}(1-\widehat{p})}{n} + \frac{z^2}{4n^2}}}{1 + \frac{z^2}{n}}
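A minimal Python sketch, assuming SciPy for the normal quantile, that computes the Wald, Agresti–Coull and Wilson intervals for the same data so the methods can be compared directly:

```python
from math import sqrt
from scipy.stats import norm

def wald(n1, n, alpha=0.05):
    p = n1 / n
    z = norm.ppf(1 - alpha / 2)
    half = z * sqrt(p * (1 - p) / n)
    return p - half, p + half

def agresti_coull(n1, n, alpha=0.05):
    z = norm.ppf(1 - alpha / 2)
    n_adj = n + z**2
    p_adj = (n1 + z**2 / 2) / n_adj
    half = z * sqrt(p_adj * (1 - p_adj) / n_adj)
    return p_adj - half, p_adj + half

def wilson(n1, n, alpha=0.05):
    p = n1 / n
    z = norm.ppf(1 - alpha / 2)
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

print(wald(2, 20), agresti_coull(2, 20), wilson(2, 20))
```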


Comparison

The so-called "exact" (Clopper–Pearson) method is the most conservative. (''Exact'' does not mean perfectly accurate; rather, it indicates that the estimates will not be less conservative than the true value.) The Wald method, although commonly recommended in textbooks, is the most biased.


Related distributions


Sums of binomials

If ''X'' ~ B(''n'', ''p'') and ''Y'' ~ B(''m'', ''p'') are independent binomial variables with the same probability ''p'', then ''X'' + ''Y'' is again a binomial variable; its distribution is ''Z'' = ''X'' + ''Y'' ~ B(''n'' + ''m'', ''p''):

: \begin{align} \operatorname P(Z=k) &= \sum_{i=0}^k\left[\binom{n}{i} p^i (1-p)^{n-i}\right]\left[\binom{m}{k-i} p^{k-i} (1-p)^{m-k+i}\right]\\ &= \binom{n+m}{k} p^k (1-p)^{n+m-k} \end{align}

A binomially distributed random variable ''X'' ~ B(''n'', ''p'') can be considered as the sum of ''n'' Bernoulli distributed random variables. So the sum of two binomially distributed random variables ''X'' ~ B(''n'', ''p'') and ''Y'' ~ B(''m'', ''p'') is equivalent to the sum of ''n'' + ''m'' Bernoulli distributed random variables, which means ''Z'' = ''X'' + ''Y'' ~ B(''n'' + ''m'', ''p''). This can also be proven directly using the addition rule.

However, if ''X'' and ''Y'' do not have the same probability ''p'', then the variance of the sum will be smaller than the variance of a binomial variable distributed as B(''n'' + ''m'', \bar{p}), where \bar{p} is the (weighted) average of the two success probabilities.
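A short simulation sketch (NumPy/SciPy) illustrating that the sum of two independent binomials with the same ''p'' is again binomial:

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(1)
n, m, p, trials = 8, 5, 0.3, 500_000
z = rng.binomial(n, p, trials) + rng.binomial(m, p, trials)

# Empirical frequencies of Z should match the B(n + m, p) PMF.
for k in range(n + m + 1):
    print(k, (z == k).mean(), binom.pmf(k, n + m, p))
```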


Poisson binomial distribution

The binomial distribution is a special case of the Poisson binomial distribution, which is the distribution of a sum of ''n'' independent non-identical Bernoulli trials B(''p''<sub>''i''</sub>).


Ratio of two binomial distributions

This result was first derived by Katz and coauthors in 1978. Let ''X'' ~ B(''n'', ''p''<sub>1</sub>) and ''Y'' ~ B(''m'', ''p''<sub>2</sub>) be independent. Let ''T'' = (''X''/''n'')/(''Y''/''m''). Then log(''T'') is approximately normally distributed with mean log(''p''<sub>1</sub>/''p''<sub>2</sub>) and variance (1/''p''<sub>1</sub> − 1)/''n'' + (1/''p''<sub>2</sub> − 1)/''m''.


Conditional binomials

If ''X'' ~ B(''n'', ''p'') and ''Y'' | ''X'' ~ B(''X'', ''q'') (the conditional distribution of ''Y'', given ''X''), then ''Y'' is a simple binomial random variable with distribution ''Y'' ~ B(''n'', ''pq'').

For example, imagine throwing ''n'' balls to a basket ''U''<sub>''X''</sub> and taking the balls that hit and throwing them to another basket ''U''<sub>''Y''</sub>. If ''p'' is the probability to hit ''U''<sub>''X''</sub> then ''X'' ~ B(''n'', ''p'') is the number of balls that hit ''U''<sub>''X''</sub>. If ''q'' is the probability to hit ''U''<sub>''Y''</sub> then the number of balls that hit ''U''<sub>''Y''</sub> is ''Y'' ~ B(''X'', ''q'') and therefore ''Y'' ~ B(''n'', ''pq'').

Since X \sim B(n, p) and Y \sim B(X, q), by the law of total probability,

: \begin{align} \Pr[Y = m] &= \sum_{k=m}^{n} \Pr[Y = m \mid X = k] \Pr[X = k] \\ &= \sum_{k=m}^n \binom{n}{k} \binom{k}{m} p^k q^m (1-p)^{n-k} (1-q)^{k-m} \end{align}

Since \tbinom{n}{k} \tbinom{k}{m} = \tbinom{n}{m} \tbinom{n-m}{k-m}, the equation above can be expressed as

: \Pr[Y = m] = \sum_{k=m}^{n} \binom{n}{m} \binom{n-m}{k-m} p^k q^m (1-p)^{n-k} (1-q)^{k-m}

Factoring p^k = p^m p^{k-m} and pulling all the terms that don't depend on k out of the sum now yields

: \begin{align} \Pr[Y = m] &= \binom{n}{m} p^m q^m \left( \sum_{k=m}^n \binom{n-m}{k-m} p^{k-m} (1-p)^{n-k} (1-q)^{k-m} \right) \\ &= \binom{n}{m} (pq)^m \left( \sum_{k=m}^n \binom{n-m}{k-m} \left(p(1-q)\right)^{k-m} (1-p)^{n-k} \right) \end{align}

After substituting i = k - m in the expression above, we get

: \Pr[Y = m] = \binom{n}{m} (pq)^m \left( \sum_{i=0}^{n-m} \binom{n-m}{i} (p - pq)^i (1-p)^{n-m-i} \right)

Notice that the sum (in the parentheses) above equals (p - pq + 1 - p)^{n-m} by the binomial theorem. Substituting this in finally yields

: \begin{align} \Pr[Y=m] &= \binom{n}{m} (pq)^m (p - pq + 1 - p)^{n-m}\\ &= \binom{n}{m} (pq)^m (1-pq)^{n-m} \end{align}

and thus Y \sim B(n, pq) as desired.
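A quick simulation sketch (NumPy) of the two-basket example, confirming that ''Y'' behaves like a B(''n'', ''pq'') variable:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, q, trials = 10, 0.6, 0.5, 500_000

x = rng.binomial(n, p, trials)            # balls that hit the first basket
y = rng.binomial(x, q)                    # of those, balls that hit the second basket
direct = rng.binomial(n, p * q, trials)   # draws from B(n, pq) for comparison

print(y.mean(), direct.mean(), n * p * q)                       # all approximately 3.0
print(y.var(), direct.var(), n * p * q * (1 - p * q))
```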


Bernoulli distribution

The Bernoulli distribution is a special case of the binomial distribution, where ''n'' = 1. Symbolically, ''X'' ~ B(1, ''p'') has the same meaning as ''X'' ~ Bernoulli(''p''). Conversely, any binomial distribution, B(''n'', ''p''), is the distribution of the sum of ''n'' independent Bernoulli trials, Bernoulli(''p''), each with the same probability ''p''.


Normal approximation

If ''n'' is large enough, then the skew of the distribution is not too great. In this case a reasonable approximation to B(''n'', ''p'') is given by the normal distribution

: \mathcal{N}(np,\,np(1-p)),

and this basic approximation can be improved in a simple way by using a suitable continuity correction. The basic approximation generally improves as ''n'' increases (at least 20) and is better when ''p'' is not near to 0 or 1. Various rules of thumb may be used to decide whether ''n'' is large enough, and ''p'' is far enough from the extremes of zero or one:

* One rule is that for ''n'' > 5 the normal approximation is adequate if the absolute value of the skewness is strictly less than 0.3; that is, if
*: \frac{|1-2p|}{\sqrt{np(1-p)}}=\frac1{\sqrt{n}}\left|\sqrt{\frac{1-p}{p}}-\sqrt{\frac{p}{1-p}}\,\right|<0.3.
: This can be made precise using the Berry–Esseen theorem.
* A stronger rule states that the normal approximation is appropriate only if everything within 3 standard deviations of its mean is within the range of possible values; that is, only if
*: \mu\pm3\sigma=np\pm3\sqrt{np(1-p)}\in(0,n).
: This 3-standard-deviation rule is equivalent to the following conditions, which also imply the first rule above.
:: n>9 \left(\frac{1-p}{p} \right)\quad\text{and}\quad n>9\left(\frac{p}{1-p}\right).
: The rule np\pm3\sqrt{np(1-p)}\in(0,n) is totally equivalent to requesting that
:: np-3\sqrt{np(1-p)}>0\quad\text{and}\quad np+3\sqrt{np(1-p)}<n.
: Moving terms around yields:
:: np>3\sqrt{np(1-p)}\quad\text{and}\quad n(1-p)>3\sqrt{np(1-p)}.
: Since 0<p<1, we can apply the square power and divide by the respective factors np^2 and n(1-p)^2, to obtain the desired conditions:
:: n>9 \left(\frac{1-p}{p}\right) \quad\text{and}\quad n>9 \left(\frac{p}{1-p}\right).
: Notice that these conditions automatically imply that n>9. On the other hand, apply again the square root and divide by 3,
:: \frac{\sqrt{n}}{3}>\sqrt{\frac{1-p}{p}}>0 \quad \text{and} \quad \frac{\sqrt{n}}{3} > \sqrt{\frac{p}{1-p}}>0.
: Subtracting the second set of inequalities from the first one yields:
:: \frac{\sqrt{n}}{3}>\sqrt{\frac{1-p}{p}}-\sqrt{\frac{p}{1-p}}>-\frac{\sqrt{n}}{3};
: and so, the desired first rule is satisfied,
:: \left|\sqrt{\frac{1-p}{p}}-\sqrt{\frac{p}{1-p}}\,\right|<\frac{\sqrt{n}}{3}.
* Another commonly used rule is that both values np and n(1-p) must be greater than or equal to 5. However, the specific number varies from source to source, and depends on how good an approximation one wants. In particular, if one uses 9 instead of 5, the rule implies the results stated in the previous paragraphs.
: Assume that both values np and n(1-p) are greater than 9. Since 0< p<1, we easily have that
:: np\geq9>9(1-p)\quad\text{and}\quad n(1-p)\geq9>9p.
: We only have to divide now by the respective factors p and 1-p, to deduce the alternative form of the 3-standard-deviation rule:
:: n>9 \left(\frac{1-p}{p}\right) \quad\text{and}\quad n>9 \left(\frac{p}{1-p}\right).

The following is an example of applying a continuity correction. Suppose one wishes to calculate \Pr(X \le 8) for a binomial random variable ''X''. If ''Y'' has a distribution given by the normal approximation, then \Pr(X \le 8) is approximated by \Pr(Y \le 8.5). The addition of 0.5 is the continuity correction; the uncorrected normal approximation gives considerably less accurate results.

This approximation, known as de Moivre–Laplace theorem, is a huge time-saver when undertaking calculations by hand (exact calculations with large ''n'' are very onerous); historically, it was the first use of the normal distribution, introduced in Abraham de Moivre's book ''The Doctrine of Chances'' in 1738. Nowadays, it can be seen as a consequence of the central limit theorem since B(''n'', ''p'') is a sum of ''n'' independent, identically distributed Bernoulli variables with parameter ''p''. This fact is the basis of a hypothesis test, a "proportion z-test", for the value of ''p'' using \widehat{p}, the sample proportion and estimator of ''p'', in a common test statistic.

For example, suppose one randomly samples ''n'' people out of a large population and asks them whether they agree with a certain statement. The proportion of people who agree will of course depend on the sample. If groups of ''n'' people were sampled repeatedly and truly randomly, the proportions would follow an approximate normal distribution with mean equal to the true proportion ''p'' of agreement in the population and with standard deviation

: \sigma = \sqrt{\frac{p(1-p)}{n}}


Poisson approximation

The binomial distribution converges towards the Poisson distribution as the number of trials goes to infinity while the product ''np'' converges to a finite limit. Therefore, the Poisson distribution with parameter λ = ''np'' can be used as an approximation to B(''n'', ''p'') of the binomial distribution if ''n'' is sufficiently large and ''p'' is sufficiently small. According to rules of thumb, this approximation is good if ''n'' ≥ 20 and ''p'' ≤ 0.05 such that ''np'' ≤ 1, or if ''n'' > 50 and ''p'' < 0.1 such that ''np'' < 5, or if ''n'' ≥ 100 and ''np'' ≤ 10.
NIST/SEMATECH, "6.3.3.1. Counts Control Charts", ''e-Handbook of Statistical Methods''.
Concerning the accuracy of Poisson approximation, see Novak, ch. 4, and references therein.
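A brief Python sketch (SciPy) comparing binomial probabilities with the Poisson(λ = ''np'') approximation in a regime where the rule of thumb holds:

```python
from scipy.stats import binom, poisson

n, p = 200, 0.02          # large n, small p, np = 4
for k in range(9):
    print(k, binom.pmf(k, n, p), poisson.pmf(k, n * p))
```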


Limiting distributions

* ''Poisson limit theorem'': As ''n'' approaches ∞ and ''p'' approaches 0 with the product ''np'' held fixed, the Binomial(''n'', ''p'') distribution approaches the Poisson distribution with expected value λ = ''np''.
* ''de Moivre–Laplace theorem'': As ''n'' approaches ∞ while ''p'' remains fixed, the distribution of
*: \frac{X-np}{\sqrt{np(1-p)}}
: approaches the normal distribution with expected value 0 and variance 1. This result is sometimes loosely stated by saying that the distribution of \frac{X-np}{\sqrt{np(1-p)}} is asymptotically normal with expected value 0 and variance 1. This result is a specific case of the central limit theorem.


Beta distribution

The binomial distribution and beta distribution are different views of the same model of repeated Bernoulli trials. The binomial distribution is the PMF of ''k'' successes given ''n'' independent events each with a probability ''p'' of success. Mathematically, when α = ''k'' + 1 and β = ''n'' − ''k'' + 1, the beta distribution and the binomial distribution are related by a factor of ''n'' + 1:

: \operatorname{Beta}(p;\alpha;\beta) = (n+1)B(k;n;p)

Beta distributions also provide a family of prior probability distributions for binomial distributions in Bayesian inference:

: P(p;\alpha,\beta) = \frac{p^{\alpha-1}(1-p)^{\beta-1}}{\mathrm{B}(\alpha,\beta)}.

Given a uniform prior, the posterior distribution for the probability of success ''p'' given ''n'' independent events with ''k'' observed successes is a beta distribution.
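A small sketch (SciPy) of this conjugacy: with a uniform Beta(1, 1) prior and ''k'' observed successes in ''n'' trials, the posterior is Beta(''k'' + 1, ''n'' − ''k'' + 1), whose mean reproduces Laplace's rule of succession:

```python
from scipy.stats import beta

n, k = 20, 3
posterior = beta(k + 1, n - k + 1)            # posterior under a uniform prior
print(posterior.mean(), (k + 1) / (n + 2))    # both equal 4/22, approximately 0.1818
print(posterior.interval(0.95))               # a 95% credible interval for p
```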


Computational methods


Random number generation

Methods for random number generation where the marginal distribution is a binomial distribution are well-established. One way to generate random variate samples from a binomial distribution is to use an inversion algorithm. To do so, one must calculate the probability that Pr(''X'' = ''k'') for all values ''k'' from 0 through ''n''. (These probabilities should sum to a value close to one, in order to encompass the entire sample space.) Then by using a pseudorandom number generator to generate samples uniformly between 0 and 1, one can transform the calculated samples into discrete numbers by using the probabilities calculated in the first step.
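A minimal Python sketch of the inversion algorithm just described (precompute the PMF terms, then map a uniform draw through the cumulative probabilities):

```python
import random
from math import comb

def sample_binomial(n: int, p: float) -> int:
    """Draw one B(n, p) variate by inverting the CDF."""
    u = random.random()                 # uniform sample in [0, 1)
    cumulative = 0.0
    for k in range(n + 1):
        cumulative += comb(n, k) * p**k * (1 - p)**(n - k)
        if u < cumulative:
            return k
    return n                            # guard against floating-point round-off

draws = [sample_binomial(10, 0.3) for _ in range(100_000)]
print(sum(draws) / len(draws))          # approximately np = 3.0
```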


History

This distribution was derived by Jacob Bernoulli. He considered the case where ''p'' = ''r''/(''r'' + ''s''), where ''p'' is the probability of success and ''r'' and ''s'' are positive integers. Blaise Pascal had earlier considered the case where ''p'' = 1/2, tabulating the corresponding binomial coefficients in what is now recognized as Pascal's triangle.


See also

* Logistic regression
* Multinomial distribution
* Negative binomial distribution
* Beta-binomial distribution
* Binomial measure, an example of a multifractal measure. Mandelbrot, B. B., Fisher, A. J., & Calvet, L. E. (1997). A multifractal model of asset returns. ''3.2 The Binomial Measure is the Simplest Example of a Multifractal''
* Statistical mechanics
* Piling-up lemma, the resulting probability when XOR-ing independent Boolean variables


References


Further reading



External links

* Interactive graphic: Univariate Distribution Relationships
* Binomial distribution formula calculator
* Difference of two binomial variables: X-Y or |X-Y|
* Querying the binomial probability distribution in WolframAlpha
* Confidence (credible) intervals for binomial probability, p, available at causaScientia.org