Multinomial distribution
In probability theory, the multinomial distribution is a generalization of the binomial distribution. For example, it models the probability of counts for each side of a ''k''-sided die rolled ''n'' times. For ''n'' independent trials each of which leads to a success for exactly one of ''k'' categories, with each category having a given fixed success probability, the multinomial distribution gives the probability of any particular combination of numbers of successes for the various categories.

When ''k'' is 2 and ''n'' is 1, the multinomial distribution is the Bernoulli distribution. When ''k'' is 2 and ''n'' is bigger than 1, it is the binomial distribution. When ''k'' is bigger than 2 and ''n'' is 1, it is the categorical distribution. The term "multinoulli" is sometimes used for the categorical distribution to emphasize this four-way relationship (so ''n'' determines the suffix, and ''k'' the prefix).

The Bernoulli distribution models the outcome of a single Bernoulli trial. In other words, it models whether flipping a (possibly biased) coin one time will result in either a success (obtaining a head) or failure (obtaining a tail). The binomial distribution generalizes this to the number of heads from performing ''n'' independent flips (Bernoulli trials) of the same coin. The multinomial distribution models the outcome of ''n'' experiments, where the outcome of each trial has a categorical distribution, such as rolling a (possibly biased) ''k''-sided die ''n'' times.

Let ''k'' be a fixed finite number. Mathematically, we have ''k'' possible mutually exclusive outcomes, with corresponding probabilities ''p''1, ..., ''p''''k'', and ''n'' independent trials. Since the ''k'' outcomes are mutually exclusive and one must occur, we have ''p''''i'' ≥ 0 for ''i'' = 1, ..., ''k'' and \sum_{i=1}^k p_i = 1. Then if the random variables ''X''''i'' indicate the number of times outcome number ''i'' is observed over the ''n'' trials, the vector ''X'' = (''X''1, ..., ''X''''k'') follows a multinomial distribution with parameters ''n'' and p, where p = (''p''1, ..., ''p''''k''). While the trials are independent, their outcomes ''X''''i'' are dependent because they must sum to ''n''.


Definitions


Probability mass function

Suppose one does an experiment of extracting ''n'' balls of ''k'' different colors from a bag, replacing the extracted balls after each draw. Balls of the same color are equivalent. Denote the variable which is the number of extracted balls of color ''i'' (''i'' = 1, ..., ''k'') as ''X''''i'', and denote as ''p''''i'' the probability that a given extraction will be in color ''i''. The probability mass function of this multinomial distribution is:

: \begin{align} f(x_1,\ldots,x_k;n,p_1,\ldots,p_k) &= \Pr(X_1 = x_1 \text{ and } \dots \text{ and } X_k = x_k) \\ &= \begin{cases} \dfrac{n!}{x_1!\cdots x_k!} p_1^{x_1} \cdots p_k^{x_k}, \quad & \text{when } \sum_{i=1}^k x_i = n \\ \\ 0 & \text{otherwise,} \end{cases} \end{align}

for non-negative integers ''x''1, ..., ''x''''k''. The probability mass function can be expressed using the gamma function as:

:f(x_1,\dots, x_k; p_1,\ldots, p_k) = \frac{\Gamma\left(\sum_i x_i + 1\right)}{\prod_i \Gamma(x_i+1)} \prod_{i=1}^k p_i^{x_i}.

This form shows its resemblance to the Dirichlet distribution, which is its conjugate prior.


Example

Suppose that in a three-way election for a large country, candidate A received 20% of the votes, candidate B received 30% of the votes, and candidate C received 50% of the votes. If six voters are selected randomly, what is the probability that there will be exactly one supporter for candidate A, two supporters for candidate B and three supporters for candidate C in the sample?

''Note: Since we're assuming that the voting population is large, it is reasonable and permissible to think of the probabilities as unchanging once a voter is selected for the sample. Technically speaking this is sampling without replacement, so the correct distribution is the multivariate hypergeometric distribution, but the distributions converge as the population grows large in comparison to a fixed sample size.''

: \Pr(A=1,B=2,C=3) = \frac{6!}{1!\,2!\,3!}(0.2^1) (0.3^2) (0.5^3) = 0.135
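This worked example can be checked numerically; the following is a minimal sketch in plain Python (the helper multinomial_pmf is defined ad hoc here, not taken from any library):

from math import factorial

def multinomial_pmf(x, p):
    """PMF of the multinomial distribution at counts x with probabilities p."""
    n = sum(x)
    coef = factorial(n)
    for xi in x:
        coef //= factorial(xi)
    prob = coef
    for xi, pi in zip(x, p):
        prob *= pi ** xi
    return prob

# Election example: n = 6 voters, p = (0.2, 0.3, 0.5), counts (1, 2, 3)
print(multinomial_pmf([1, 2, 3], [0.2, 0.3, 0.5]))  # 0.135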


Properties


Normalization

The multinomial distribution is normalized according to:

:\sum_{x_1+\cdots+x_k=n} f(x_1,\ldots,x_k;n,p_1,\ldots,p_k) = 1,

where the sum is over all combinations of non-negative integer counts x_1, \ldots, x_k such that \sum_{j=1}^k x_j = n.
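A brute-force check of this normalization over all count vectors (an illustrative sketch in plain Python; again the helper multinomial_pmf is ad hoc):

from itertools import product
from math import factorial

def multinomial_pmf(x, p):
    n = sum(x)
    coef = factorial(n)
    for xi in x:
        coef //= factorial(xi)
    prob = coef
    for xi, pi in zip(x, p):
        prob *= pi ** xi
    return prob

n, p = 5, (0.2, 0.3, 0.5)
k = len(p)
# Enumerate every count vector (x_1, ..., x_k) with x_1 + ... + x_k = n
total = sum(multinomial_pmf(x, p)
            for x in product(range(n + 1), repeat=k) if sum(x) == n)
print(total)  # 1.0 up to floating-point rounding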


Expected value and variance

The expected number of times the outcome ''i'' was observed over ''n'' trials is

:\operatorname{E}(X_i) = n p_i.\,

The covariance matrix is as follows. Each diagonal entry is the variance of a binomially distributed random variable, and is therefore

:\operatorname{Var}(X_i) = n p_i(1-p_i).\,

The off-diagonal entries are the covariances:

:\operatorname{Cov}(X_i,X_j) = -n p_i p_j\,

for ''i'', ''j'' distinct. All covariances are negative because for fixed ''n'', an increase in one component of a multinomial vector requires a decrease in another component. When these expressions are combined into a matrix with ''i, j'' element \operatorname{Cov}(X_i,X_j), the result is a ''k'' × ''k'' positive-semidefinite covariance matrix of rank ''k'' − 1. In the special case where ''k'' = ''n'' and where the ''p''''i'' are all equal, the covariance matrix is the centering matrix.

The entries of the corresponding correlation matrix are

:\rho(X_i,X_i) = 1,
:\rho(X_i,X_j) = \frac{\operatorname{Cov}(X_i,X_j)}{\sqrt{\operatorname{Var}(X_i)\operatorname{Var}(X_j)}} = \frac{-n p_i p_j}{\sqrt{n p_i(1-p_i)\, n p_j(1-p_j)}} = -\sqrt{\frac{p_i p_j}{(1-p_i)(1-p_j)}}.

Note that the number of trials ''n'' drops out of this expression.

Each of the ''k'' components separately has a binomial distribution with parameters ''n'' and ''p''''i'', for the appropriate value of the subscript ''i''. The support of the multinomial distribution is the set

: \{(x_1,\dots,x_k) \in \mathbb{Z}_{\geq 0}^{k} : x_1+\cdots+x_k = n\}.\,

Its number of elements is

: \binom{n+k-1}{k-1}.


Matrix notation

In matrix notation,

:\operatorname{E}(\mathbf{X}) = n \mathbf{p},\,

and

:\operatorname{Var}(\mathbf{X}) = n \lbrace \operatorname{diag}(\mathbf{p}) - \mathbf{p} \mathbf{p}^{\rm T} \rbrace ,\,

with \mathbf{p}^{\rm T} = the row vector transpose of the column vector \mathbf{p}.
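These moment formulas can be illustrated numerically; the sketch below assumes NumPy is available and uses numpy.random.Generator.multinomial for the simulated draws:

import numpy as np

p = np.array([0.2, 0.3, 0.5])
n = 100

# Theoretical moments
mean = n * p                                   # E(X) = n p
cov = n * (np.diag(p) - np.outer(p, p))        # Var(X) = n {diag(p) - p p^T}

# Empirical moments from simulated draws
rng = np.random.default_rng(0)
samples = rng.multinomial(n, p, size=200_000)
print(mean, samples.mean(axis=0))              # close to n p
print(cov)
print(np.cov(samples, rowvar=False))           # close to the theoretical covariance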


Visualization


As slices of generalized Pascal's triangle

Just like one can interpret the binomial distribution as (normalized) one-dimensional (1D) slices of Pascal's triangle, so too can one interpret the multinomial distribution as 2D (triangular) slices of Pascal's pyramid, or 3D/4D/+ (pyramid-shaped) slices of higher-dimensional analogs of Pascal's triangle. This reveals an interpretation of the range of the distribution: discretized equilateral "pyramids" in arbitrary dimension, i.e. a simplex with a grid.


As polynomial coefficients

Similarly, just like one can interpret the binomial distribution as the polynomial coefficients of (p + q)^n when expanded, one can interpret the multinomial distribution as the terms of (p_1 + p_2 + p_3 + \cdots + p_k)^n when expanded; since p_1 + \cdots + p_k = 1, these terms sum to 1.
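A small symbolic check of this interpretation, assuming SymPy is available (the symbols and the exponent are chosen to match the election example above):

from sympy import symbols, expand

p1, p2, p3 = symbols('p1 p2 p3')
n = 6

# Expand (p1 + p2 + p3)^6; the coefficient of p1^1 * p2^2 * p3^3 is the
# multinomial coefficient 6!/(1! 2! 3!) = 60 from the election example.
expansion = expand((p1 + p2 + p3) ** n)
print(expansion.coeff(p1, 1).coeff(p2, 2).coeff(p3, 3))  # 60

# Substituting the probabilities recovers the whole distribution summing to 1.
print(expansion.subs({p1: 0.2, p2: 0.3, p3: 0.5}))       # 1.0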


Large deviation theory


Asymptotics

By Stirling's formula, at the limit of n, x_1, \ldots, x_k \to \infty, we have

:\ln \binom{n}{x_1,\ldots,x_k} + \sum_{i=1}^k x_i \ln p_i = -n D_\text{KL}(\hat p \| p) - \frac{k-1}{2} \ln(2\pi n) - \frac 12 \sum_{i=1}^k \ln(\hat p_i) + o(1),

where the relative frequencies \hat p_i = x_i/n in the data can be interpreted as probabilities from the empirical distribution \hat p, and D_\text{KL} is the Kullback–Leibler divergence.

This formula can be interpreted as follows. Consider \Delta_k, the space of all possible distributions over the categories \{1, 2, \ldots, k\}. It is a simplex. After n independent samples from the categorical distribution p (which is how we construct the multinomial distribution), we obtain an empirical distribution \hat p. By the asymptotic formula, the probability that the empirical distribution \hat p deviates from the actual distribution p decays exponentially, at a rate n D_\text{KL}(\hat p \| p). The more experiments and the more different \hat p is from p, the less likely it is to see such an empirical distribution. If A is a closed subset of \Delta_k, then by dividing up A into pieces, and reasoning about the growth rate of \Pr(\hat p \in A_\epsilon) on each piece A_\epsilon, we obtain Sanov's theorem, which states that

:\lim_{n\to\infty} \frac 1n \ln \Pr(\hat p \in A) = - \inf_{\hat p \in A} D_\text{KL}(\hat p \| p).
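The asymptotic expansion can be checked numerically; the sketch below, in plain Python, compares the exact log-probability with the right-hand side for a fixed empirical distribution as n grows (the chosen counts are arbitrary):

from math import lgamma, log, pi

def lhs(x, p):
    """ln C(n; x_1..x_k) + sum_i x_i ln p_i, the exact log-probability."""
    n = sum(x)
    log_coef = lgamma(n + 1) - sum(lgamma(xi + 1) for xi in x)
    return log_coef + sum(xi * log(pi_) for xi, pi_ in zip(x, p))

def rhs(x, p):
    """-n D_KL(phat || p) - (k-1)/2 ln(2 pi n) - 1/2 sum_i ln(phat_i)."""
    n, k = sum(x), len(x)
    phat = [xi / n for xi in x]
    dkl = sum(q * log(q / pi_) for q, pi_ in zip(phat, p))
    return -n * dkl - (k - 1) / 2 * log(2 * pi * n) - 0.5 * sum(log(q) for q in phat)

p = [0.2, 0.3, 0.5]
for n in (10, 100, 10_000):
    x = [round(0.25 * n), round(0.25 * n), n - 2 * round(0.25 * n)]
    print(n, lhs(x, p) - rhs(x, p))  # the gap shrinks toward 0 as n grows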


Concentration at large ''n''

Due to the exponential decay, at large n, almost all the probability mass is concentrated in a small neighborhood of p. In this small neighborhood, we can take the first nonzero term in the Taylor expansion of D_\text{KL}, to obtain

:\ln \left[ \binom{n}{x_1,\ldots,x_k} p_1^{x_1} \cdots p_k^{x_k} \right] \approx -\frac n2 \sum_{i=1}^k \frac{(\hat p_i - p_i)^2}{p_i} = -\frac 12 \sum_{i=1}^k \frac{(x_i - n p_i)^2}{n p_i}.

This resembles the gaussian distribution, which suggests the following theorem:

Theorem. At the n \to \infty limit, n \sum_{i=1}^k \frac{(\hat p_i - p_i)^2}{p_i} = \sum_{i=1}^k \frac{(x_i - n p_i)^2}{n p_i} converges in distribution to the chi-squared distribution \chi^2(k-1).

The space of all distributions over categories \{1, 2, \ldots, k\} is a simplex:

: \Delta_{k} = \left\{(y_1, \ldots, y_k) : y_1, \ldots, y_k \geq 0, \sum_{i=1}^k y_i = 1\right\},

and the set of all possible empirical distributions after n experiments is a subset of the simplex:

: \Delta_{k, n} = \left\{(x_1/n, \ldots, x_k/n) : x_1, \ldots, x_k \in \mathbb{Z}_{\geq 0}, \sum_{i=1}^k x_i = n\right\}.

That is, it is the intersection between \Delta_k and the lattice (\Z^k)/n.

As n increases, most of the probability mass is concentrated in a subset of \Delta_{k, n} near p, and the probability distribution near p becomes well-approximated by

: \binom{n}{x_1,\ldots,x_k} p_1^{x_1} \cdots p_k^{x_k} \approx e^{-\frac n2 \sum_{i=1}^k \frac{(\hat p_i - p_i)^2}{p_i}}.

From this, we see that the subset upon which the mass is concentrated has radius on the order of 1/\sqrt n, but the points in the subset are separated by distance on the order of 1/n, so at large n, the points merge into a continuum. To convert this from a discrete probability distribution to a continuous probability density, we need to multiply by the volume occupied by each point of \Delta_{k, n} in \Delta_k. However, by symmetry, every point occupies exactly the same volume (except a negligible set on the boundary), so we obtain a probability density \rho(\hat p) = C e^{-\frac n2 \sum_{i=1}^k \frac{(\hat p_i - p_i)^2}{p_i}}, where C is a constant.

Finally, since the simplex \Delta_k is not all of \R^k, but only within a (k-1)-dimensional plane, we obtain the desired result.
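A simulation illustrating the theorem (an illustrative sketch assuming NumPy and SciPy; the parameters are arbitrary):

import numpy as np
from scipy import stats

p = np.array([0.2, 0.3, 0.5])
k, n = len(p), 5_000

rng = np.random.default_rng(1)
counts = rng.multinomial(n, p, size=50_000)       # 50,000 multinomial draws

# The statistic sum_i (x_i - n p_i)^2 / (n p_i) for each draw
stat = ((counts - n * p) ** 2 / (n * p)).sum(axis=1)

print(stat.mean())                                # close to k - 1 = 2
qs = [0.5, 0.9, 0.99]
print(np.quantile(stat, qs))                      # empirical quantiles
print(stats.chi2(k - 1).ppf(qs))                  # matching chi-squared(k-1) quantiles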


Conditional concentration at large ''n''

The above concentration phenomenon can be easily generalized to the case where we condition upon linear constraints. This is the theoretical justification for Pearson's chi-squared test.

Theorem. Given frequencies x_i \in \mathbb{N} observed in a dataset with n points, we impose \ell + 1 independent linear constraints

:\begin{cases} \sum_i \hat p_i = 1, \\ \sum_i a_{1i} \hat p_i = b_1, \\ \sum_i a_{2i} \hat p_i = b_2, \\ \cdots, \\ \sum_i a_{\ell i} \hat p_i = b_\ell \end{cases}

(notice that the first constraint is simply the requirement that the empirical distributions sum to one), such that the empirical \hat p_i = x_i/n satisfy all these constraints simultaneously. Let q denote the I-projection of the prior distribution p on the sub-region of the simplex allowed by the linear constraints. At the n \to \infty limit, sampled counts n \hat p_i from the multinomial distribution conditional on the linear constraints are governed by

:2n D_\text{KL}(\hat p \| q) \approx n \sum_i \frac{(\hat p_i - q_i)^2}{q_i},

which converges in distribution to the chi-squared distribution \chi^2(k-1-\ell).

An analogous proof applies in this Diophantine problem of coupled linear equations in count variables n \hat p_i, but this time \Delta_{k, n} is the intersection of (\Z^k)/n with \Delta_k and \ell hyperplanes, all linearly independent, so the probability density \rho(\hat p) is restricted to a (k-\ell-1)-dimensional plane. In particular, expanding the KL divergence D_\text{KL}(\hat p \| p) around its minimum q (the I-projection of p on \Delta_{k, n}) in the constrained problem ensures by the Pythagorean theorem for I-divergence that any constant and linear term in the counts n \hat p_i vanishes from the conditional probability to multinomially sample those counts.

Notice that by definition, every one of \hat p_1, \hat p_2, \ldots, \hat p_k must be a rational number, whereas p_1, p_2, \ldots, p_k may be chosen from any real number in [0, 1] and need not satisfy the Diophantine system of equations. Only asymptotically as n \rightarrow \infty can the \hat p_i's be regarded as probabilities over [0, 1].

Away from empirically observed constraints b_1, \ldots, b_\ell (such as moments or prevalences) the theorem can be generalized:

Theorem.
* Given functions f_1, \ldots, f_\ell, such that they are continuously differentiable in a neighborhood of p, and the vectors (1, 1, \ldots, 1), \nabla f_1(p), \ldots, \nabla f_\ell(p) are linearly independent;
* given sequences \epsilon_1(n), \ldots, \epsilon_\ell(n), such that asymptotically \frac 1n \ll \epsilon_i(n) \ll \frac{1}{\sqrt n} for each i \in \{1, \ldots, \ell\};
* then for the multinomial distribution conditional on constraints f_1(\hat p) \in [f_1(p) - \epsilon_1(n), f_1(p) + \epsilon_1(n)], \ldots, f_\ell(\hat p) \in [f_\ell(p) - \epsilon_\ell(n), f_\ell(p) + \epsilon_\ell(n)], we have the quantity n \sum_i \frac{(\hat p_i - q_i)^2}{q_i} = \sum_i \frac{(x_i - n q_i)^2}{n q_i} converging in distribution to \chi^2(k-1-\ell) at the n \to \infty limit.

In the case that all \hat p_i are equal, the Theorem reduces to the concentration of entropies around the maximum entropy.


Related distributions

In some fields such as natural language processing, categorical and multinomial distributions are synonymous and it is common to speak of a multinomial distribution when a categorical distribution is actually meant. This stems from the fact that it is sometimes convenient to express the outcome of a categorical distribution as a "1-of-k" vector (a vector with one element containing a 1 and all other elements containing a 0) rather than as an integer in the range 1 \dots k; in this form, a categorical distribution is equivalent to a multinomial distribution over a single trial.

* When ''k'' = 2, the multinomial distribution is the binomial distribution.
* Categorical distribution, the distribution of each trial; for ''k'' = 2, this is the Bernoulli distribution.
* The Dirichlet distribution is the conjugate prior of the multinomial in Bayesian statistics.
* Dirichlet-multinomial distribution.
* Beta-binomial distribution.
* Negative multinomial distribution.
* Hardy–Weinberg principle (a trinomial distribution with probabilities (\theta^2, 2 \theta (1-\theta), (1-\theta)^2))


Statistical inference


Equivalence tests for multinomial distributions

The goal of equivalence testing is to establish the agreement between a theoretical multinomial distribution and observed counting frequencies. The theoretical distribution may be a fully specified multinomial distribution or a parametric family of multinomial distributions.

Let q denote a theoretical multinomial distribution and let p be a true underlying distribution. The distributions p and q are considered equivalent if d(p,q) < \varepsilon for a distance d and a tolerance parameter \varepsilon > 0. The equivalence test problem is H_0 = \{d(p,q) \geq \varepsilon\} versus H_1 = \{d(p,q) < \varepsilon\}. The true underlying distribution p is unknown. Instead, the counting frequencies p_n are observed, where n is a sample size. An equivalence test uses p_n to reject H_0. If H_0 can be rejected then the equivalence between p and q is shown at a given significance level. The equivalence test for Euclidean distance can be found in the textbook of Wellek (2010). The equivalence test for the total variation distance is developed in Ostrovski (2017). The exact equivalence test for the specific cumulative distance is proposed in Frey (2009).

The distance between the true underlying distribution p and a family of the multinomial distributions \mathcal{M} is defined by d(p, \mathcal{M}) = \min_{h \in \mathcal{M}} d(p,h). Then the equivalence test problem is given by H_0 = \{d(p,\mathcal{M}) \geq \varepsilon\} and H_1 = \{d(p,\mathcal{M}) < \varepsilon\}. The distance d(p,\mathcal{M}) is usually computed using numerical optimization. The tests for this case were developed recently in Ostrovski (2018).
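The sketch below is only a rough illustration of the idea of such a test, not the procedures of Wellek (2010), Ostrovski (2017) or Frey (2009): it computes the plug-in total variation distance and a bootstrap upper confidence bound, rejecting H_0 when that bound falls below \varepsilon (assumes NumPy; the data are made up):

import numpy as np

def tv_distance(a, b):
    """Total variation distance between two discrete distributions."""
    return 0.5 * np.abs(np.asarray(a) - np.asarray(b)).sum()

def bootstrap_equivalence_test(counts, q, eps, alpha=0.05, n_boot=10_000, seed=0):
    """Reject H0: d(p, q) >= eps if a bootstrap upper confidence bound
    for the plug-in TV distance falls below eps (illustrative only)."""
    counts = np.asarray(counts)
    n = counts.sum()
    p_hat = counts / n
    rng = np.random.default_rng(seed)
    boot = rng.multinomial(n, p_hat, size=n_boot) / n
    dists = np.array([tv_distance(b, q) for b in boot])
    upper = np.quantile(dists, 1 - alpha)     # one-sided upper bound
    return upper < eps, tv_distance(p_hat, q), upper

q = [0.25, 0.25, 0.25, 0.25]
counts = [260, 240, 255, 245]
print(bootstrap_equivalence_test(counts, q, eps=0.05))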


Confidence intervals for the difference of two proportions

In the setting of a multinomial distribution, constructing confidence intervals for the difference between the proportions of observations from two events, p_i - p_j, requires the incorporation of the negative covariance between the sample estimators \hat p_i = \frac{x_i}{n} and \hat p_j = \frac{x_j}{n}.

Some of the literature on the subject focused on the use-case of matched-pairs binary data, which requires careful attention when translating the formulas to the general case of p_i - p_j for any multinomial distribution. Formulas in the current section will be generalized, while formulas in the next section will focus on the matched-pairs binary data use-case.

Wald's standard error (SE) of the difference of proportion can be estimated using:

:\widehat{\operatorname{SE}(\hat p_i - \hat p_j)} = \sqrt{\frac{\hat p_i + \hat p_j - (\hat p_i - \hat p_j)^2}{n}}

For a 100(1 - \alpha)\% approximate confidence interval, the margin of error may incorporate the appropriate quantile from the standard normal distribution, as follows:

:(\hat p_i - \hat p_j) \pm z_{\alpha/2} \cdot \widehat{\operatorname{SE}(\hat p_i - \hat p_j)}

As the sample size (n) increases, the sample proportions will approximately follow a multivariate normal distribution, thanks to the multidimensional central limit theorem (and it could also be shown using the Cramér–Wold theorem). Therefore, their difference will also be approximately normal. Also, these estimators are weakly consistent and plugging them into the SE estimator makes it also weakly consistent. Hence, thanks to Slutsky's theorem, the pivotal quantity \frac{(\hat p_i - \hat p_j) - (p_i - p_j)}{\widehat{\operatorname{SE}(\hat p_i - \hat p_j)}} approximately follows the standard normal distribution. And from that, the above approximate confidence interval is directly derived.

The SE can be constructed using the calculus of the variance of the difference of two random variables:

:\begin{align} \widehat{\operatorname{SE}(\hat p_i - \hat p_j)} & = \sqrt{\frac{\hat p_i (1-\hat p_i)}{n} + \frac{\hat p_j (1-\hat p_j)}{n} + 2\frac{\hat p_i \hat p_j}{n}} \\ & = \sqrt{\frac{\hat p_i + \hat p_j - \hat p_i^2 - \hat p_j^2 + 2 \hat p_i \hat p_j}{n}} \\ & = \sqrt{\frac{\hat p_i + \hat p_j - (\hat p_i - \hat p_j)^2}{n}} \end{align}

A modification which includes a continuity correction adds \frac{1}{n} to the margin of error as follows:

:(\hat p_i - \hat p_j) \pm \left(z_{\alpha/2} \cdot \widehat{\operatorname{SE}(\hat p_i - \hat p_j)} + \frac{1}{n}\right)

Another alternative is to rely on a Bayesian estimator using Jeffreys prior, which leads to using a Dirichlet distribution, with all parameters being equal to 0.5, as a prior. The posterior will be the calculations from above, but after adding 1/2 to each of the ''k'' elements, leading to an overall increase of the sample size by \frac{k}{2}. This approach was originally developed for a multinomial distribution with four events, and is known as ''wald+2'', for analyzing matched pairs data (see the next section for more details). This leads to the following SE:

:\widehat{\operatorname{SE}(\hat p_i - \hat p_j)}_{wald+2} = \sqrt{\frac{\tilde p_i + \tilde p_j - (\tilde p_i - \tilde p_j)^2}{n + \frac{k}{2}}}, \qquad \text{where } \tilde p_i = \frac{x_i + 1/2}{n + \frac{k}{2}}

Which can just be plugged into the original Wald formula as follows:

:(\hat p_i - \hat p_j)\frac{n}{n + \frac{k}{2}} \pm z_{\alpha/2} \cdot \widehat{\operatorname{SE}(\hat p_i - \hat p_j)}_{wald+2}
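A sketch of these interval computations (illustrative Python assuming NumPy and SciPy for the normal quantile; the function names are ad hoc, not from any library):

import numpy as np
from scipy import stats

def wald_ci_diff(counts, i, j, alpha=0.05, continuity=False):
    """Wald CI for p_i - p_j from multinomial counts."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    pi, pj = counts[i] / n, counts[j] / n
    se = np.sqrt((pi + pj - (pi - pj) ** 2) / n)
    z = stats.norm.ppf(1 - alpha / 2)
    moe = z * se + (1 / n if continuity else 0)
    return (pi - pj) - moe, (pi - pj) + moe

def wald_plus2_ci_diff(counts, i, j, alpha=0.05):
    """'wald+2'-style CI: add 1/2 to each of the k cells (Dirichlet(0.5) prior)."""
    counts = np.asarray(counts, dtype=float) + 0.5
    n = counts.sum()                      # original n plus k/2
    pi, pj = counts[i] / n, counts[j] / n
    se = np.sqrt((pi + pj - (pi - pj) ** 2) / n)
    z = stats.norm.ppf(1 - alpha / 2)
    return (pi - pj) - z * se, (pi - pj) + z * se

counts = [25, 35, 40]          # hypothetical counts for k = 3 categories
print(wald_ci_diff(counts, 0, 1))
print(wald_ci_diff(counts, 0, 1, continuity=True))
print(wald_plus2_ci_diff(counts, 0, 1))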


Occurrence and applications


Confidence intervals for the difference in matched-pairs binary data (using multinomial with ''k=4'')

For the case of matched-pairs binary data, a common task is to build the confidence interval of the difference of the proportion of the matched events. For example, we might have a test for some disease, and we may want to check the results of it for some population at two points in time (1 and 2), to check if there was a change in the proportion of the positives for the disease during that time.

Such scenarios can be represented using a two-by-two contingency table with the number of elements that had each of the combination of events. We can use small ''f'' for sampling frequencies: f_{11}, f_{10}, f_{01}, f_{00}, and capital ''F'' for population frequencies: F_{11}, F_{10}, F_{01}, F_{00}. These four combinations could be modeled as coming from a multinomial distribution (with four potential outcomes). The sizes of the sample and population can be ''n'' and ''N'' respectively. And in such a case, there is an interest in building a confidence interval for the difference of proportions from the marginals of the following (sampled) contingency table:

                     Test 2 positive    Test 2 negative    Total
Test 1 positive      f_{11}             f_{10}             f_{11} + f_{10}
Test 1 negative      f_{01}             f_{00}             f_{01} + f_{00}
Total                f_{11} + f_{01}    f_{10} + f_{00}    n

In this case, checking the difference in marginal proportions means we are interested in using the following definitions: p_{1\cdot} = \frac{F_{1\cdot}}{N} = \frac{F_{11} + F_{10}}{N}, p_{\cdot 1} = \frac{F_{\cdot 1}}{N} = \frac{F_{11} + F_{01}}{N}. And the difference we want to build confidence intervals for is:

:p_{1\cdot} - p_{\cdot 1} = \frac{F_{11} + F_{10}}{N} - \frac{F_{11} + F_{01}}{N} = \frac{F_{10}}{N} - \frac{F_{01}}{N} = p_{10} - p_{01}

Hence, a confidence interval for the marginal positive proportions (p_{1\cdot} - p_{\cdot 1}) is the same as building a confidence interval for the difference of the proportions from the secondary diagonal of the two-by-two contingency table (p_{10} - p_{01}).

Calculating a p-value for such a difference is known as McNemar's test. A confidence interval around it can be constructed using the methods described above for the difference of two proportions.

The Wald confidence intervals from the previous section can be applied to this setting, and appear in the literature using alternative notations. Specifically, the SE often presented is based on the contingency table frequencies instead of the sample proportions. For example, the Wald confidence intervals, provided above, can be written as:

:\widehat{\operatorname{SE}(\hat p_{1\cdot} - \hat p_{\cdot 1})} = \widehat{\operatorname{SE}(\hat p_{10} - \hat p_{01})} = \frac{1}{n}\sqrt{f_{10} + f_{01} - \frac{(f_{10} - f_{01})^2}{n}}

Further research in the literature has identified several shortcomings in both the Wald and the Wald with continuity correction methods, and other methods have been proposed for practical application.

One such modification includes ''Agresti and Min's Wald+2'' (similar to some of their other works) in which each cell frequency had an extra \frac{1}{2} added to it. This leads to the ''Wald+2'' confidence intervals. In a Bayesian interpretation, this is like building the estimators taking as prior a Dirichlet distribution with all parameters being equal to 0.5 (which is, in fact, the Jeffreys prior). The ''+2'' in the name ''wald+2'' can now be taken to mean that in the context of a two-by-two contingency table, which is a multinomial distribution with four possible events, since we add 1/2 an observation to each of them, this translates to an overall addition of 2 observations (due to the prior). This leads to the following modified SE for the case of matched pairs data:

:\widehat{\operatorname{SE}(\hat p_{10} - \hat p_{01})}_{wald+2} = \frac{1}{n+2}\sqrt{f_{10} + f_{01} + 1 - \frac{(f_{10} - f_{01})^2}{n+2}}

Which can just be plugged into the original Wald formula as follows:

:(\hat p_{10} - \hat p_{01})\frac{n}{n + 2} \pm z_{\alpha/2} \cdot \widehat{\operatorname{SE}(\hat p_{10} - \hat p_{01})}_{wald+2}

Other modifications include ''Bonett and Price's Adjusted Wald'', and ''Newcombe's Score''.
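A compact sketch of the matched-pairs intervals (illustrative Python assuming NumPy and SciPy; the cell ordering f11, f10, f01, f00 follows the notation above and the counts are made up):

import numpy as np
from scipy import stats

def matched_pairs_ci(f11, f10, f01, f00, alpha=0.05, plus2=False):
    """CI for p10 - p01 from a 2x2 matched-pairs table (Wald or Wald+2 style)."""
    cells = np.array([f11, f10, f01, f00], dtype=float)
    if plus2:
        cells += 0.5                      # Dirichlet(0.5) / Jeffreys-prior adjustment
    n = cells.sum()
    diff = (cells[1] - cells[2]) / n      # (f10 - f01) / n
    se = np.sqrt(cells[1] + cells[2] - (cells[1] - cells[2]) ** 2 / n) / n
    z = stats.norm.ppf(1 - alpha / 2)
    return diff - z * se, diff + z * se

print(matched_pairs_ci(101, 59, 40, 300))              # Wald
print(matched_pairs_ci(101, 59, 40, 300, plus2=True))  # Wald+2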


Computational methods


Random variate generation

First, reorder the parameters p_1, \ldots, p_k such that they are sorted in descending order (this is only to speed up computation and not strictly necessary). Now, for each trial, draw an auxiliary variable ''X'' from a uniform (0, 1) distribution. The resulting outcome is the component

: j = \min \left\{ j' \in \{1,\dots,k\} : \left(\sum_{i=1}^{j'} p_i\right) - X \geq 0 \right\}.

\{X_j = 1, X_{j'} = 0 \text{ for } j' \neq j\} is one observation from the multinomial distribution with p_1, \ldots, p_k and ''n'' = 1. A sum of independent repetitions of this experiment is an observation from a multinomial distribution with ''n'' equal to the number of such repetitions.
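A direct implementation of this per-trial scheme (an illustrative sketch assuming NumPy; in practice one would usually call numpy.random.Generator.multinomial directly):

import numpy as np

def sample_multinomial(n, p, rng=None):
    """Draw one multinomial observation by n repeated categorical draws."""
    rng = rng or np.random.default_rng()
    p = np.asarray(p, dtype=float)
    cdf = np.cumsum(p)                               # running sums p_1, p_1+p_2, ...
    counts = np.zeros(len(p), dtype=int)
    for _ in range(n):
        u = rng.uniform()                            # auxiliary uniform(0, 1) variable
        j = min(np.searchsorted(cdf, u), len(p) - 1) # smallest j' with cumulative sum >= u
        counts[j] += 1
    return counts

print(sample_multinomial(100, [0.2, 0.3, 0.5]))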


Sampling using repeated conditional binomial samples

Given the parameters p_1, p_2, \ldots, p_k and a total for the sample n such that \sum_{i=1}^k X_i = n, it is possible to sample sequentially for the number in an arbitrary state X_i, by partitioning the state space into i and not-i, conditioned on any prior samples already taken, repeatedly.


Algorithm: Sequential conditional binomial sampling

S = n
rho = 1
for i in [1, k-1]:
    if rho != 0:
        X[i] ~ Binom(S, p[i] / rho)
    else:
        X[i] = 0
    S = S - X[i]
    rho = rho - p[i]
X[k] = S

Heuristically, each application of the binomial sample reduces the available number to sample from and the conditional probabilities are likewise updated to ensure logical consistency.
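A runnable version of this sequential scheme (an illustrative sketch assuming NumPy; rng.binomial draws each conditional binomial):

import numpy as np

def conditional_binomial_sample(n, p, rng=None):
    """Sample a multinomial vector via k-1 conditional binomial draws."""
    rng = rng or np.random.default_rng()
    p = np.asarray(p, dtype=float)
    k = len(p)
    x = np.zeros(k, dtype=int)
    s, rho = n, 1.0
    for i in range(k - 1):
        if rho > 0:
            # Number in state i among the s remaining trials,
            # with conditional probability p_i / rho
            x[i] = rng.binomial(s, min(p[i] / rho, 1.0))
        else:
            x[i] = 0
        s -= x[i]
        rho -= p[i]
    x[k - 1] = s        # whatever remains falls in the last state
    return x

print(conditional_binomial_sample(100, [0.2, 0.3, 0.5]))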


Software implementations

* The ''MultinomialCI'' R package allows the computation of simultaneous confidence intervals for the probabilities of a multinomial distribution given a set of observations.


See also

* Additive smoothing


References

