In probability theory, the multinomial distribution is a generalization of the binomial distribution. For example, it models the probability of counts for each side of a ''k''-sided die rolled ''n'' times. For ''n'' independent trials each of which leads to a success for exactly one of ''k'' categories, with each category having a given fixed success probability, the multinomial distribution gives the probability of any particular combination of numbers of successes for the various categories.
When ''k'' is 2 and ''n'' is 1, the multinomial distribution is the Bernoulli distribution. When ''k'' is 2 and ''n'' is bigger than 1, it is the binomial distribution. When ''k'' is bigger than 2 and ''n'' is 1, it is the categorical distribution. The term "multinoulli" is sometimes used for the categorical distribution to emphasize this four-way relationship (so ''n'' determines the suffix, and ''k'' the prefix).
The Bernoulli distribution models the outcome of a single Bernoulli trial. In other words, it models whether flipping a (possibly biased) coin one time will result in either a success (obtaining a head) or a failure (obtaining a tail). The binomial distribution generalizes this to the number of heads from performing ''n'' independent flips (Bernoulli trials) of the same coin. The multinomial distribution models the outcome of ''n'' experiments, where the outcome of each trial has a categorical distribution, such as rolling a (possibly biased) ''k''-sided die ''n'' times.
Let ''k'' be a fixed finite number. Mathematically, we have ''k'' possible mutually exclusive outcomes, with corresponding probabilities ''p''<sub>1</sub>, ..., ''p''<sub>''k''</sub>, and ''n'' independent trials. Since the ''k'' outcomes are mutually exclusive and one must occur, we have ''p''<sub>''i''</sub> ≥ 0 for ''i'' = 1, ..., ''k'' and <math>\sum_{i=1}^k p_i = 1</math>. Then if the random variables ''X''<sub>''i''</sub> indicate the number of times outcome number ''i'' is observed over the ''n'' trials, the vector ''X'' = (''X''<sub>1</sub>, ..., ''X''<sub>''k''</sub>) follows a multinomial distribution with parameters ''n'' and '''p''', where '''p''' = (''p''<sub>1</sub>, ..., ''p''<sub>''k''</sub>). While the trials are independent, their outcomes ''X''<sub>''i''</sub> are dependent because they must sum to ''n''.
Definitions
Probability mass function
Suppose one does an experiment of extracting ''n'' balls of ''k'' different colors from a bag, replacing the extracted balls after each draw. Balls of the same color are equivalent. Denote the variable which is the number of extracted balls of color ''i'' (''i'' = 1, ..., ''k'') as ''X''<sub>''i''</sub>, and denote as ''p''<sub>''i''</sub> the probability that a given extraction will be in color ''i''. The probability mass function of this multinomial distribution is:
: <math>f(x_1,\ldots,x_k; n, p_1,\ldots,p_k) = \Pr(X_1 = x_1 \text{ and } \dots \text{ and } X_k = x_k) = \begin{cases} \dfrac{n!}{x_1! \cdots x_k!} p_1^{x_1} \cdots p_k^{x_k}, & \text{when } \sum_{i=1}^k x_i = n, \\ 0, & \text{otherwise,} \end{cases}</math>
for non-negative integers ''x''<sub>1</sub>, ..., ''x''<sub>''k''</sub>.
The probability mass function can be expressed using the gamma function as:
: <math>f(x_1,\dots,x_k; p_1,\ldots,p_k) = \frac{\Gamma\left(\sum_i x_i + 1\right)}{\prod_i \Gamma(x_i+1)} \prod_{i=1}^k p_i^{x_i}.</math>
This form shows its resemblance to the Dirichlet distribution, which is its conjugate prior.
Example
Suppose that in a three-way election for a large country, candidate A received 20% of the votes, candidate B received 30% of the votes, and candidate C received 50% of the votes. If six voters are selected randomly, what is the probability that there will be exactly one supporter for candidate A, two supporters for candidate B and three supporters for candidate C in the sample?
''Note: Since we're assuming that the voting population is large, it is reasonable and permissible to think of the probabilities as unchanging once a voter is selected for the sample. Technically speaking this is sampling without replacement, so the correct distribution is the multivariate hypergeometric distribution, but the distributions converge as the population grows large in comparison to a fixed sample size.''
: <math>\Pr(A=1, B=2, C=3) = \frac{6!}{1!\, 2!\, 3!} (0.2)^1 (0.3)^2 (0.5)^3 = 0.135.</math>
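This arithmetic is easy to verify numerically. Below is a minimal Python sketch (the helper name multinomial_pmf is illustrative, not from any library) that evaluates the probability mass function defined above:

<syntaxhighlight lang="python">
from math import factorial

def multinomial_pmf(counts, probs):
    """n! / (x_1! ... x_k!) * p_1^x_1 * ... * p_k^x_k."""
    coef = factorial(sum(counts))
    for x in counts:
        coef //= factorial(x)
    prob = 1.0
    for x, p in zip(counts, probs):
        prob *= p ** x
    return coef * prob

# Election example: six voters, p = (0.2, 0.3, 0.5)
print(multinomial_pmf([1, 2, 3], [0.2, 0.3, 0.5]))  # -> 0.135 (up to floating-point rounding)
</syntaxhighlight>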
Properties
Normalization
The multinomial distribution is normalized according to:
: <math>\sum_{\substack{x_1,\ldots,x_k \ge 0 \\ x_1+\cdots+x_k = n}} \frac{n!}{x_1! \cdots x_k!} p_1^{x_1} \cdots p_k^{x_k} = 1,</math>
where the sum is over all vectors of non-negative integer counts <math>(x_1, \ldots, x_k)</math> such that <math>\sum_{i=1}^k x_i = n</math>.
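This identity can be confirmed by brute force for small ''n'' and ''k''. A minimal Python sketch (helper name illustrative):

<syntaxhighlight lang="python">
from itertools import product
from math import factorial

def multinomial_pmf(counts, probs):
    coef = factorial(sum(counts))
    for x in counts:
        coef //= factorial(x)
    prob = 1.0
    for x, p in zip(counts, probs):
        prob *= p ** x
    return coef * prob

# Sum the PMF over every count vector (x_1, ..., x_k) with x_1 + ... + x_k = n
n, p = 4, [0.2, 0.3, 0.5]
total = sum(multinomial_pmf(x, p)
            for x in product(range(n + 1), repeat=len(p)) if sum(x) == n)
print(total)  # -> 1.0 (up to floating-point rounding)
</syntaxhighlight>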
Expected value and variance
The expected number of times the outcome ''i'' was observed over ''n'' trials is
: <math>\operatorname{E}(X_i) = n p_i.</math>
The covariance matrix is as follows. Each diagonal entry is the variance of a binomially distributed random variable, and is therefore
: <math>\operatorname{Var}(X_i) = n p_i (1-p_i).</math>
The off-diagonal entries are the covariances:
: <math>\operatorname{Cov}(X_i, X_j) = -n p_i p_j</math>
for ''i'', ''j'' distinct.
All covariances are negative because for fixed ''n'', an increase in one component of a multinomial vector requires a decrease in another component.
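These moment formulas can be checked against simulation. A sketch using NumPy's multinomial sampler (the sample size and tolerances are arbitrary choices):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n, p = 6, np.array([0.2, 0.3, 0.5])

mean = n * p                              # E[X_i] = n p_i
cov = n * (np.diag(p) - np.outer(p, p))   # diag: n p_i (1 - p_i); off-diag: -n p_i p_j

samples = rng.multinomial(n, p, size=200_000)
print(np.allclose(samples.mean(axis=0), mean, atol=0.05))  # expected: True
print(np.allclose(np.cov(samples.T), cov, atol=0.05))      # expected: True
</syntaxhighlight>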
When these expressions are combined into a matrix with ''i'', ''j'' element <math>\operatorname{cov}(X_i, X_j),</math> the result is a ''k'' × ''k'' positive-semidefinite covariance matrix of rank ''k'' − 1. In the special case where ''k'' = ''n'' and where the ''p''<sub>''i''</sub> are all equal, the covariance matrix is the centering matrix.
The entries of the corresponding correlation matrix are
: <math>\rho(X_i, X_i) = 1,</math>
: <math>\rho(X_i, X_j) = \frac{\operatorname{Cov}(X_i, X_j)}{\sqrt{\operatorname{Var}(X_i)\operatorname{Var}(X_j)}} = \frac{-p_i p_j}{\sqrt{p_i(1-p_i)\, p_j(1-p_j)}} = -\sqrt{\frac{p_i p_j}{(1-p_i)(1-p_j)}}.</math>
Note that the number of trials ''n'' drops out of this expression.
Each of the ''k'' components separately has a binomial distribution with parameters ''n'' and ''p''<sub>''i''</sub>, for the appropriate value of the subscript ''i''.
The support of the multinomial distribution is the set
: <math>\{(n_1,\dots,n_k) \in \mathbb{N}^k \mid n_1 + \cdots + n_k = n\}.</math>
Its number of elements is
: <math>{n+k-1 \choose k-1}.</math>
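This cardinality is the usual stars-and-bars count, which a short brute-force enumeration confirms:

<syntaxhighlight lang="python">
from itertools import product
from math import comb

n, k = 6, 3
support = [x for x in product(range(n + 1), repeat=k) if sum(x) == n]
print(len(support), comb(n + k - 1, k - 1))  # -> 28 28
</syntaxhighlight>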
Matrix notation
In matrix notation,
: <math>\operatorname{E}(\mathbf{X}) = n \mathbf{p},</math>
and
: <math>\operatorname{Var}(\mathbf{X}) = n \lbrace \operatorname{diag}(\mathbf{p}) - \mathbf{p}\mathbf{p}^{\rm T} \rbrace,</math>
with <math>\mathbf{p}^{\rm T}</math> = the row vector transpose of the column vector <math>\mathbf{p}</math>.
Visualization
As slices of generalized Pascal's triangle
Just like one can interpret the binomial distribution as (normalized) one-dimensional (1D) slices of Pascal's triangle, so too can one interpret the multinomial distribution as 2D (triangular) slices of Pascal's pyramid, or 3D/4D/+ (pyramid-shaped) slices of higher-dimensional analogs of Pascal's triangle. This reveals an interpretation of the range of the distribution: discretized equilateral "pyramids" in arbitrary dimension, i.e. a simplex with a grid.
As polynomial coefficients
Similarly, just like one can interpret the binomial distribution as the polynomial coefficients of <math>(p + q)^n</math> when expanded, one can interpret the multinomial distribution as the coefficients of <math>(p_1 + p_2 + \cdots + p_k)^n</math> when expanded, noting that the coefficients must sum up to 1.
Large deviation theory
Asymptotics
By Stirling's formula, at the limit of <math>n, x_1, \ldots, x_k \to \infty</math>, we have
: <math>\ln\left[\binom{n}{x_1, \ldots, x_k} p_1^{x_1} \cdots p_k^{x_k}\right] = -n D_{KL}(\hat{p} \parallel p) - \frac{k-1}{2}\ln(2\pi n) - \frac{1}{2}\sum_{i=1}^k \ln \hat{p}_i + o(1),</math>
where the relative frequencies <math>\hat{p}_i = x_i/n</math> in the data can be interpreted as probabilities from the empirical distribution <math>\hat{p}</math>, and <math>D_{KL}</math> is the Kullback–Leibler divergence.
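The quality of this approximation can be inspected numerically. The following sketch compares the exact log-probability (via log-gamma) with the right-hand side for one large count vector:

<syntaxhighlight lang="python">
import numpy as np
from math import lgamma, log, pi

def exact_log_pmf(x, p):
    n = x.sum()
    return (lgamma(n + 1) - sum(lgamma(xi + 1) for xi in x)
            + (x * np.log(p)).sum())

def stirling_log_pmf(x, p):
    n, k = x.sum(), len(p)
    phat = x / n
    kl = (phat * np.log(phat / p)).sum()  # D_KL(phat || p)
    return -n * kl - (k - 1) / 2 * log(2 * pi * n) - 0.5 * np.log(phat).sum()

p = np.array([0.2, 0.3, 0.5])
x = np.array([150, 250, 600])             # n = 1000
print(exact_log_pmf(x, p), stirling_log_pmf(x, p))  # values agree up to o(1)
</syntaxhighlight>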
This formula can be interpreted as follows.
Consider <math>\Delta_k</math>, the space of all possible distributions over the categories <math>\{1, 2, \ldots, k\}</math>. It is a simplex. After <math>n</math> independent samples from the categorical distribution <math>p</math> (which is how we construct the multinomial distribution), we obtain an empirical distribution <math>\hat{p}</math>.
By the asymptotic formula, the probability that the empirical distribution <math>\hat{p}</math> deviates from the actual distribution <math>p</math> decays exponentially, at a rate <math>n D_{KL}(\hat{p} \parallel p)</math>. The more experiments and the more <math>\hat{p}</math> differs from <math>p</math>, the less likely it is to see such an empirical distribution.
If <math>A</math> is a closed subset of <math>\Delta_k</math>, then by dividing up <math>A</math> into pieces, and reasoning about the growth rate of <math>\Pr(\hat{p} \in A)</math> on each piece, we obtain Sanov's theorem, which states that
: <math>\lim_{n \to \infty} \frac{1}{n} \ln \Pr(\hat{p} \in A) = -\inf_{q \in A} D_{KL}(q \parallel p).</math>
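For <math>k = 2</math> the probability <math>\Pr(\hat{p} \in A)</math> can be computed exactly, so the rate is easy to observe. A sketch with the constructed example <math>A = \{q : q_1 \ge 1/2\}</math> and <math>p_1 = 0.2</math>, where the infimum is attained at <math>q_1 = 1/2</math>:

<syntaxhighlight lang="python">
from math import lgamma, log, exp

def log_binom_pmf(x, n, p):
    return (lgamma(n + 1) - lgamma(x + 1) - lgamma(n - x + 1)
            + x * log(p) + (n - x) * log(1 - p))

p1 = 0.2
# inf_{q1 >= 1/2} D_KL(q || p) is attained at q1 = 1/2
rate = 0.5 * log(0.5 / p1) + 0.5 * log(0.5 / (1 - p1))  # ~0.2231

for n in (100, 500, 2000):
    prob = sum(exp(log_binom_pmf(x, n, p1)) for x in range(n // 2, n + 1))
    print(n, -log(prob) / n, rate)  # first column approaches the rate
</syntaxhighlight>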
Concentration at large ''n''
Due to the exponential decay, at large <math>n</math>, almost all the probability mass is concentrated in a small neighborhood of <math>p</math>. In this small neighborhood, we can take the first nonzero term in the Taylor expansion of <math>D_{KL}</math>, to obtain
: <math>\ln\left[\binom{n}{x_1, \ldots, x_k} p_1^{x_1} \cdots p_k^{x_k}\right] \approx -\frac{n}{2} \sum_{i=1}^k \frac{(\hat{p}_i - p_i)^2}{p_i} = -\frac{1}{2} \sum_{i=1}^k \frac{(x_i - n p_i)^2}{n p_i}.</math>
This resembles the Gaussian distribution, which suggests the following theorem:
Theorem. At the <math>n \to \infty</math> limit, <math>n \sum_{i=1}^k \frac{(\hat{p}_i - p_i)^2}{p_i}</math> converges in distribution to the chi-squared distribution <math>\chi^2(k-1)</math>.
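A simulation sketch of this convergence, comparing the first two moments of the statistic with those of <math>\chi^2(k-1)</math> (which has mean <math>k-1</math> and variance <math>2(k-1)</math>):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, np.array([0.2, 0.3, 0.5])
k = len(p)

counts = rng.multinomial(n, p, size=100_000)
stat = ((counts - n * p) ** 2 / (n * p)).sum(axis=1)  # Pearson statistic

print(stat.mean(), k - 1)       # both near 2
print(stat.var(), 2 * (k - 1))  # both near 4
</syntaxhighlight>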
The space of all distributions over categories <math>\{1, 2, \ldots, k\}</math> is a simplex:
: <math>\Delta_k = \left\{(y_1, \ldots, y_k) \,:\, y_1, \ldots, y_k \geq 0, \; \sum_i y_i = 1\right\},</math>
and the set of all possible empirical distributions after <math>n</math> experiments is a subset of the simplex:
: <math>\Delta_{k,n} = \left\{(x_1/n, \ldots, x_k/n) \,:\, x_1, \ldots, x_k \in \mathbb{N}, \; \sum_i x_i = n\right\}.</math>
That is, it is the intersection between <math>\Delta_k</math> and the lattice <math>(\mathbb{Z}^k)/n</math>.
As <math>n</math> increases, most of the probability mass is concentrated in a subset of <math>\Delta_{k,n}</math> near <math>p</math>, and the probability distribution near <math>p</math> becomes well-approximated by
: <math>\binom{n}{x_1, \ldots, x_k} p_1^{x_1} \cdots p_k^{x_k} \approx \frac{\exp\left(-\frac{n}{2} \sum_i \frac{(\hat{p}_i - p_i)^2}{p_i}\right)}{\sqrt{(2\pi n)^{k-1} \prod_i p_i}}.</math>
From this, we see that the subset upon which the mass is concentrated has radius on the order of <math>1/\sqrt{n}</math>, but the points in the subset are separated by distance on the order of <math>1/n</math>, so at large <math>n</math> the points merge into a continuum.
To convert this from a discrete probability distribution to a continuous probability density, we need to multiply by the volume occupied by each point of <math>\Delta_{k,n}</math> in <math>\Delta_k</math>. However, by symmetry, every point occupies exactly the same volume (except a negligible set on the boundary), so we obtain a probability density <math>\rho(\hat{p}) = C e^{-\frac{n}{2} \sum_i \frac{(\hat{p}_i - p_i)^2}{p_i}}</math>, where <math>C</math> is a constant.
Finally, since the simplex <math>\Delta_k</math> is not all of <math>\mathbb{R}^k</math>, but only within a <math>(k-1)</math>-dimensional plane, we obtain the desired result.
Conditional concentration at large ''n''
The above concentration phenomenon can be easily generalized to the case where we condition upon linear constraints. This is the theoretical justification for Pearson's chi-squared test.
Theorem. Given frequencies <math>x_i</math> observed in a dataset with <math>n</math> points, we impose <math>\ell + 1</math> independent linear constraints
: <math>\sum_i \hat{p}_i = 1, \quad \sum_i a_{1i} \hat{p}_i = b_1, \quad \ldots, \quad \sum_i a_{\ell i} \hat{p}_i = b_\ell</math>
(notice that the first constraint is simply the requirement that the empirical distributions sum to one), such that the empirical <math>\hat{p}_i = x_i/n</math> satisfy all these constraints simultaneously. Let <math>q</math> denote the <math>I</math>-projection of the prior distribution <math>p</math> on the sub-region of the simplex allowed by the linear constraints. At the <math>n \to \infty</math> limit, sampled counts <math>n \hat{p}_i</math> from the multinomial distribution conditional on the linear constraints are governed by
: <math>2n D_{KL}(\hat{p} \parallel q) \approx \sum_i \frac{(n\hat{p}_i - nq_i)^2}{nq_i},</math>
which converges in distribution to the chi-squared distribution <math>\chi^2(k-1-\ell)</math>.
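The theorem can be illustrated by rejection sampling in a simple special case. The sketch below is a constructed example, not from the source: it conditions on one extra constraint <math>\hat{p}_1 + \hat{p}_2 = 0.4</math>, for which the <math>I</math>-projection simply rescales <math>p</math> within the two groups, and checks that the Pearson statistic with respect to <math>q</math> has mean near <math>k - 1 - \ell = 2</math>:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
n = 100
p = np.array([0.1, 0.2, 0.3, 0.4])
k, ell, b = len(p), 1, 0.4

# One extra linear constraint: phat_1 + phat_2 = b. For a group-sum constraint,
# the I-projection q rescales p within each group to match the group masses.
mask = np.array([True, True, False, False])
q = np.where(mask, p * b / p[mask].sum(), p * (1 - b) / p[~mask].sum())

# Rejection-sample multinomial counts that satisfy the constraint exactly
counts = rng.multinomial(n, p, size=500_000)
counts = counts[counts[:, mask].sum(axis=1) == round(n * b)]

stat = ((counts - n * q) ** 2 / (n * q)).sum(axis=1)
print(len(counts), stat.mean(), k - 1 - ell)  # mean should be near 2
</syntaxhighlight>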
An analogous proof applies in this Diophantine problem of coupled linear equations in count variables <math>n \hat{p}_i</math>, but this time <math>\Delta_{k,n}</math> is the intersection of <math>(\mathbb{Z}^k)/n</math> with <math>\Delta_k</math> and <math>\ell</math> hyperplanes, all linearly independent, so the probability density <math>\rho(\hat{p})</math> is restricted to a <math>(k-1-\ell)</math>-dimensional plane. In particular, expanding the KL divergence <math>D_{KL}(\hat{p} \parallel p)</math> around its minimum <math>q</math> (the <math>I</math>-projection of <math>p</math> on the constraint region) in the constrained problem ensures by the Pythagorean theorem for <math>I</math>-divergence that any constant and linear term in the counts <math>n \hat{p}_i</math> vanishes from the conditional probability to multinomially sample those counts.
Notice that by definition, every one of <math>\hat{p}_1, \ldots, \hat{p}_k</math> must be a rational number, whereas <math>p_i</math> may be chosen from any real number in <math>[0, 1]</math>.