In probability theory, the multinomial distribution is a generalization of the binomial distribution. For example, it models the probability of counts for each side of a ''k''-sided die rolled ''n'' times. For ''n'' independent trials each of which leads to a success for exactly one of ''k'' categories, with each category having a given fixed success probability, the multinomial distribution gives the probability of any particular combination of numbers of successes for the various categories.

When ''k'' is 2 and ''n'' is 1, the multinomial distribution is the Bernoulli distribution. When ''k'' is 2 and ''n'' is bigger than 1, it is the binomial distribution. When ''k'' is bigger than 2 and ''n'' is 1, it is the categorical distribution. The term "multinoulli" is sometimes used for the categorical distribution to emphasize this four-way relationship (so ''n'' determines the prefix, and ''k'' the suffix).

The Bernoulli distribution models the outcome of a single Bernoulli trial. In other words, it models whether flipping a (possibly biased) coin one time will result in either a success (obtaining a head) or failure (obtaining a tail). The binomial distribution generalizes this to the number of heads from performing ''n'' independent flips (Bernoulli trials) of the same coin. The multinomial distribution models the outcome of ''n'' experiments, where the outcome of each trial has a categorical distribution, such as rolling a ''k''-sided die ''n'' times.

Let ''k'' be a fixed finite number. Mathematically, we have ''k'' possible mutually exclusive outcomes, with corresponding probabilities ''p''1, ..., ''p''''k'', and ''n'' independent trials. Since the ''k'' outcomes are mutually exclusive and one must occur, we have ''p''''i'' ≥ 0 for ''i'' = 1, ..., ''k'' and \sum_{i=1}^k p_i = 1. Then if the random variables ''X''''i'' indicate the number of times outcome number ''i'' is observed over the ''n'' trials, the vector ''X'' = (''X''1, ..., ''X''''k'') follows a multinomial distribution with parameters ''n'' and p, where p = (''p''1, ..., ''p''''k''). While the trials are independent, their outcomes ''X''''i'' are dependent because they must sum to ''n''.


Definitions


Probability mass function

Suppose one does an experiment of extracting ''n'' balls of ''k'' different colors from a bag, replacing the extracted balls after each draw. Balls of the same color are equivalent. Denote the variable which is the number of extracted balls of color ''i'' (''i'' = 1, ..., ''k'') as ''X''''i'', and denote as ''p''''i'' the probability that a given extraction will be in color ''i''. The probability mass function of this multinomial distribution is:

: \begin{align} f(x_1,\ldots,x_k;n,p_1,\ldots,p_k) & = \Pr(X_1 = x_1 \text{ and } \dots \text{ and } X_k = x_k) \\ & = \begin{cases} \dfrac{n!}{x_1! \cdots x_k!} p_1^{x_1} \cdots p_k^{x_k}, & \text{when } \sum_{i=1}^k x_i = n \\[6pt] 0, & \text{otherwise,} \end{cases} \end{align}

for non-negative integers ''x''1, ..., ''x''''k''. The probability mass function can be expressed using the gamma function as:

:f(x_1,\dots,x_k; p_1,\ldots,p_k) = \frac{\Gamma\left(\sum_i x_i + 1\right)}{\prod_i \Gamma(x_i+1)} \prod_{i=1}^k p_i^{x_i}.

This form shows its resemblance to the Dirichlet distribution, which is its conjugate prior.
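The piecewise definition above translates directly into code. The following is a minimal sketch in Python; the helper name multinomial_pmf is illustrative, not from any particular library:

```python
from math import factorial, prod

def multinomial_pmf(x, n, p):
    """Multinomial PMF for counts x = (x_1, ..., x_k), trials n, probabilities p."""
    if sum(x) != n:
        return 0.0  # the "otherwise" branch of the piecewise definition
    coeff = factorial(n)
    for xi in x:
        coeff //= factorial(xi)  # multinomial coefficient n! / (x_1! ... x_k!)
    return coeff * prod(pi ** xi for pi, xi in zip(p, x))
```

The integer divisions are exact at every step, since each partial quotient is itself a product of binomial coefficients and a factorial.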


Example

Suppose that in a three-way election for a large country, candidate A received 20% of the votes, candidate B received 30% of the votes, and candidate C received 50% of the votes. If six voters are selected randomly, what is the probability that there will be exactly one supporter for candidate A, two supporters for candidate B and three supporters for candidate C in the sample?

''Note: Since we're assuming that the voting population is large, it is reasonable and permissible to think of the probabilities as unchanging once a voter is selected for the sample. Technically speaking this is sampling without replacement, so the correct distribution is the multivariate hypergeometric distribution, but the distributions converge as the population grows large in comparison to a fixed sample size.''

: \Pr(A=1,B=2,C=3) = \frac{6!}{1!\,2!\,3!}(0.2^1)(0.3^2)(0.5^3) = 0.135
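As a quick check, the hypothetical multinomial_pmf sketch from the previous section reproduces this value:

```python
>>> round(multinomial_pmf([1, 2, 3], 6, [0.2, 0.3, 0.5]), 3)
0.135
```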


Properties


Expected value and variance

The expected number of times the outcome ''i'' was observed over ''n'' trials is

:\operatorname{E}(X_i) = n p_i.

The covariance matrix is as follows. Each diagonal entry is the variance of a binomially distributed random variable, and is therefore

:\operatorname{Var}(X_i)=np_i(1-p_i).

The off-diagonal entries are the covariances:

:\operatorname{Cov}(X_i,X_j)=-np_i p_j

for ''i'', ''j'' distinct. All covariances are negative because for fixed ''n'', an increase in one component of a multinomial vector requires a decrease in another component. When these expressions are combined into a matrix with ''i'', ''j'' element \operatorname{Cov}(X_i,X_j), the result is a ''k'' × ''k'' positive-semidefinite covariance matrix of rank ''k'' − 1. In the special case where ''k'' = ''n'' and where the ''p''''i'' are all equal, the covariance matrix is the centering matrix.

The entries of the corresponding correlation matrix are

:\rho(X_i,X_i) = 1,
:\rho(X_i,X_j) = \frac{\operatorname{Cov}(X_i,X_j)}{\sqrt{\operatorname{Var}(X_i)\operatorname{Var}(X_j)}} = \frac{-n p_i p_j}{\sqrt{n p_i(1-p_i)\, n p_j(1-p_j)}} = -\sqrt{\frac{p_i p_j}{(1-p_i)(1-p_j)}}.

Note that the sample size drops out of this expression.

Each of the ''k'' components separately has a binomial distribution with parameters ''n'' and ''p''''i'', for the appropriate value of the subscript ''i''. The support of the multinomial distribution is the set

: \{(x_1,\dots,x_k)\in\mathbb{N}^k \mid x_1+\cdots+x_k=n\}.

Its number of elements is

: \binom{n+k-1}{k-1}.
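The size of the support is the standard stars-and-bars count. A small sketch verifying it by brute force (the function name is illustrative):

```python
from math import comb
from itertools import product

def support_size(n, k):
    """Number of non-negative integer vectors (x_1, ..., x_k) summing to n."""
    return comb(n + k - 1, k - 1)

# Brute-force check for small n and k
n, k = 6, 3
brute = sum(1 for x in product(range(n + 1), repeat=k) if sum(x) == n)
assert brute == support_size(n, k) == 28
```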


Matrix notation

In matrix notation,

:\operatorname{E}(\mathbf{X}) = n \mathbf{p},

and

:\operatorname{Var}(\mathbf{X}) = n \lbrace \operatorname{diag}(\mathbf{p}) - \mathbf{p} \mathbf{p}^{\mathsf{T}} \rbrace,

with \mathbf{p}^{\mathsf{T}} the row vector transpose of the column vector \mathbf{p}.
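These moment formulas are easy to check numerically. A minimal sketch using NumPy's multinomial sampler; the sample averages only approximate the theoretical values:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 6, np.array([0.2, 0.3, 0.5])

mean_theory = n * p                              # E(X) = n p
cov_theory = n * (np.diag(p) - np.outer(p, p))   # Var(X) = n {diag(p) - p p^T}

samples = rng.multinomial(n, p, size=200_000)
print(samples.mean(axis=0), mean_theory)         # both near [1.2, 1.8, 3.0]
print(np.cov(samples, rowvar=False))             # entrywise close to cov_theory
```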


Visualization


As slices of generalized Pascal's triangle

Just like one can interpret the binomial distribution as (normalized) one-dimensional (1D) slices of Pascal's triangle, so too can one interpret the multinomial distribution as 2D (triangular) slices of Pascal's pyramid, or 3D/4D/+ (pyramid-shaped) slices of higher-dimensional analogs of Pascal's triangle. This reveals an interpretation of the range of the distribution: discretized equilateral "pyramids" in arbitrary dimension, i.e. a simplex with a grid.


As polynomial coefficients

Similarly, just like one can interpret the binomial distribution as the polynomial coefficients of (p + q)^n when expanded, one can interpret the multinomial distribution as the coefficients of (p_1 + p_2 + p_3 + \cdots + p_k)^n when expanded, noting that the terms of the expansion must sum up to 1.
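Because p_1 + \cdots + p_k = 1, the terms of the expansion, and hence the PMF values over the whole support, sum to 1. A quick numerical check, re-using the hypothetical multinomial_pmf sketch from above:

```python
from itertools import product

n, p = 4, [0.2, 0.3, 0.5]
total = sum(
    multinomial_pmf(list(x), n, p)               # one term of (p_1 + p_2 + p_3)^n
    for x in product(range(n + 1), repeat=len(p))
    if sum(x) == n
)
print(total)  # ~1.0 up to floating-point error
```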


Related distributions

In some fields such as natural language processing, categorical and multinomial distributions are synonymous and it is common to speak of a multinomial distribution when a categorical distribution is actually meant. This stems from the fact that it is sometimes convenient to express the outcome of a categorical distribution as a "1-of-K" vector (a vector with one element containing a 1 and all other elements containing a 0) rather than as an integer in the range 1 \dots K; in this form, a categorical distribution is equivalent to a multinomial distribution over a single trial.

* When ''k'' = 2, the multinomial distribution is the binomial distribution.
* Categorical distribution, the distribution of each trial; for ''k'' = 2, this is the Bernoulli distribution.
* The Dirichlet distribution is the conjugate prior of the multinomial in Bayesian statistics.
* Dirichlet-multinomial distribution.
* Beta-binomial distribution.
* Negative multinomial distribution.
* Hardy–Weinberg principle (it is a trinomial distribution with probabilities (\theta^2, 2 \theta (1-\theta), (1-\theta)^2)).


Statistical inference


Equivalence tests for multinomial distributions

The goal of equivalence testing is to establish the agreement between a theoretical multinomial distribution and observed counting frequencies. The theoretical distribution may be a fully specified multinomial distribution or a parametric family of multinomial distributions.

Let q denote a theoretical multinomial distribution and let p be a true underlying distribution. The distributions p and q are considered equivalent if d(p,q)<\varepsilon for a distance d and a tolerance parameter \varepsilon>0. The equivalence test problem is H_0=\{d(p,q)\geq\varepsilon\} versus H_1=\{d(p,q)<\varepsilon\}. The true underlying distribution p is unknown. Instead, the counting frequencies p_n are observed, where n is the sample size. An equivalence test uses p_n to reject H_0. If H_0 can be rejected then the equivalence between p and q is shown at a given significance level. The equivalence test for Euclidean distance can be found in the textbook by Wellek (2010). The equivalence test for the total variation distance is developed in Ostrovski (2017). The exact equivalence test for the specific cumulative distance is proposed in Frey (2009).

The distance between the true underlying distribution p and a family of the multinomial distributions \mathcal{M} is defined by d(p, \mathcal{M})=\min_{h\in\mathcal{M}}d(p,h). Then the equivalence test problem is given by H_0=\{d(p,\mathcal{M})\geq\varepsilon\} and H_1=\{d(p,\mathcal{M})<\varepsilon\}. The distance d(p,\mathcal{M}) is usually computed using numerical optimization. The tests for this case were developed recently in Ostrovski (2018).
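As an illustration of the quantities involved, the following sketch computes the plug-in total variation distance between observed frequencies and a fully specified q. This is only the distance computation; the tests in the cited papers additionally account for the sampling error in p_n:

```python
import numpy as np

def total_variation(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

counts = np.array([18, 33, 49])   # observed counts over k = 3 categories
p_n = counts / counts.sum()       # observed counting frequencies
q = np.array([0.2, 0.3, 0.5])     # fully specified theoretical distribution
eps = 0.05                        # tolerance parameter

print(total_variation(p_n, q), total_variation(p_n, q) < eps)
```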


Random variate generation

First, reorder the parameters p_1, \ldots, p_k such that they are sorted in descending order (this is only to speed up computation and not strictly necessary). Now, for each trial, draw an auxiliary variable ''X'' from a uniform (0, 1) distribution. The resulting outcome is the component

: j = \min \left\{ j' \in \{1,\dots,k\} : \left(\sum_{i=1}^{j'} p_i\right) - X \geq 0 \right\}.

Then \{X_j = 1, X_{j'} = 0 \text{ for } j' \neq j\} is one observation from the multinomial distribution with p_1, \ldots, p_k and ''n'' = 1. A sum of independent repetitions of this experiment is an observation from a multinomial distribution with ''n'' equal to the number of such repetitions.
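A minimal sketch of this inversion scheme (the descending-order reordering is omitted since it is only a speed optimization; the clamp guards against floating-point cumulative sums falling just short of 1):

```python
import random
from bisect import bisect_left
from itertools import accumulate

def multinomial_sample(n, p, rng=random):
    """Draw one multinomial observation via n inversions of the trial CDF."""
    cdf = list(accumulate(p))            # p_1, p_1+p_2, ..., ~1.0
    counts = [0] * len(p)
    for _ in range(n):
        x = rng.random()                 # auxiliary uniform(0, 1) draw
        j = bisect_left(cdf, x)          # smallest j' with sum_{i<=j'} p_i >= x
        counts[min(j, len(p) - 1)] += 1  # clamp for floating-point safety
    return counts
```

For example, multinomial_sample(6, [0.2, 0.3, 0.5]) returns a count vector such as [1, 2, 3], with the counts always summing to ''n'' = 6.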

