In probability theory and statistics, the Dirichlet-multinomial distribution is a family of discrete multivariate probability distributions on a finite support of non-negative integers. It is also called the Dirichlet compound multinomial distribution (DCM) or multivariate Pólya distribution (after George Pólya). It is a compound probability distribution, where a probability vector p is drawn from a Dirichlet distribution with parameter vector \boldsymbol\alpha, and an observation is drawn from a multinomial distribution with probability vector p and number of trials ''n''. The Dirichlet parameter vector captures the prior belief about the situation and can be seen as a pseudocount: observations of each outcome that occur before the actual data is collected. The compounding corresponds to a Pólya urn scheme. It is frequently encountered in Bayesian statistics, machine learning, empirical Bayes methods and classical statistics as an overdispersed multinomial distribution.
It reduces to the categorical distribution as a special case when ''n'' = 1. It also approximates the multinomial distribution arbitrarily well for large ''α''. The Dirichlet-multinomial is a multivariate extension of the beta-binomial distribution, as the multinomial and Dirichlet distributions are multivariate versions of the binomial distribution and beta distribution, respectively.
Specification
Dirichlet-multinomial as a compound distribution
The Dirichlet distribution is a conjugate distribution to the multinomial distribution. This fact leads to an analytically tractable compound distribution.
For a random vector of category counts \mathbf{x} = (x_1, \dots, x_K), distributed according to a multinomial distribution, the marginal distribution is obtained by integrating on the distribution for \mathbf{p}, which can be thought of as a random vector following a Dirichlet distribution:
: \Pr(\mathbf{x} \mid \boldsymbol\alpha) = \int_{\mathbf{p}} \Pr(\mathbf{x} \mid \mathbf{p}) \Pr(\mathbf{p} \mid \boldsymbol\alpha) \, d\mathbf{p}
which results in the following explicit formula:
: \Pr(\mathbf{x} \mid \boldsymbol\alpha) = \frac{n! \, \Gamma(\alpha_0)}{\Gamma(n + \alpha_0)} \prod_{k=1}^K \frac{\Gamma(x_k + \alpha_k)}{x_k! \, \Gamma(\alpha_k)}
where \alpha_0 is defined as the sum \alpha_0 = \sum_k \alpha_k. Another form for this same compound distribution, written more compactly in terms of the beta function, ''B'', is as follows:
: \Pr(\mathbf{x} \mid \boldsymbol\alpha) = \frac{n \, B(\alpha_0, n)}{\prod_{k : x_k > 0} x_k \, B(\alpha_k, x_k)} .
The latter form emphasizes the fact that zero-count categories can be ignored in the calculation, a useful fact when the number of categories is very large and sparse (e.g. word counts in documents).
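The pmf above can be evaluated numerically with log-gamma functions. The following is a minimal sketch in Python, assuming NumPy and SciPy are available (recent SciPy versions also ship scipy.stats.dirichlet_multinomial, which can serve as a cross-check):
<syntaxhighlight lang="python">
import numpy as np
from scipy.special import gammaln

def dirichlet_multinomial_logpmf(x, alpha):
    """Log-pmf of the Dirichlet-multinomial, first explicit form above.

    x     : non-negative integer counts, shape (K,)
    alpha : positive Dirichlet parameters, shape (K,)
    """
    x = np.asarray(x, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    n, a0 = x.sum(), alpha.sum()
    # log n! + log Gamma(alpha_0) - log Gamma(n + alpha_0)
    out = gammaln(n + 1) + gammaln(a0) - gammaln(n + a0)
    # + sum_k [ log Gamma(x_k + alpha_k) - log x_k! - log Gamma(alpha_k) ]
    out += np.sum(gammaln(x + alpha) - gammaln(x + 1) - gammaln(alpha))
    return out

# Example: K = 3 categories, n = 10 trials
print(np.exp(dirichlet_multinomial_logpmf([3, 2, 5], [1.0, 2.0, 3.0])))
</syntaxhighlight>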
Observe that the pdf is the beta-binomial distribution when K = 2. It can also be shown that it approaches the multinomial distribution as \alpha_0 approaches infinity. The parameter \alpha_0 governs the degree of overdispersion or burstiness relative to the multinomial. Alternative choices to denote \alpha_0 found in the literature are ''S'' and ''A''.
Dirichlet-multinomial as an urn model
The Dirichlet-multinomial distribution can also be motivated via an urn model for positive integer values of the vector \boldsymbol\alpha, known as the Pólya urn model. Specifically, imagine an urn containing balls of ''K'' colors numbering \alpha_i for the ''i''th color, where random draws are made. When a ball is randomly drawn and observed, then two balls of the same color are returned to the urn. If this is performed ''n'' times, then the probability of observing the random vector \mathbf{x} of color counts is a Dirichlet-multinomial with parameters ''n'' and \boldsymbol\alpha.
If the random draws are with simple replacement (no balls over and above the observed ball are added to the urn), then the distribution follows a multinomial distribution, and if the random draws are made without replacement, the distribution follows a multivariate hypergeometric distribution.
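A minimal simulation of this urn scheme, written in Python as a sketch (the function name and interface are illustrative, not from any particular library):
<syntaxhighlight lang="python">
import numpy as np

def polya_urn_sample(alpha, n, rng=None):
    """Draw one vector of color counts via the Pólya urn scheme.

    alpha : initial number of balls of each color (positive integers)
    n     : number of draws
    """
    rng = np.random.default_rng() if rng is None else rng
    urn = np.asarray(alpha, dtype=float).copy()
    counts = np.zeros(len(urn), dtype=int)
    for _ in range(n):
        # Draw a color with probability proportional to its current count,
        # then return the drawn ball plus one extra ball of the same color.
        color = rng.choice(len(urn), p=urn / urn.sum())
        counts[color] += 1
        urn[color] += 1
    return counts

print(polya_urn_sample([1, 2, 3], n=10))
</syntaxhighlight>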
Properties
Moments
Once again, let \alpha_0 = \sum_k \alpha_k and let p_i = \frac{\alpha_i}{\alpha_0}; then the expected number of times the outcome ''i'' was observed over ''n'' trials is
: \operatorname{E}(x_i) = n p_i = n \frac{\alpha_i}{\alpha_0} .
The covariance matrix is as follows. Each diagonal entry is the variance of a beta-binomially distributed random variable, and is therefore
: \operatorname{var}(x_i) = n p_i (1 - p_i) \frac{n + \alpha_0}{1 + \alpha_0} .
The off-diagonal entries are the covariances:
: \operatorname{cov}(x_i, x_j) = -n p_i p_j \frac{n + \alpha_0}{1 + \alpha_0}
for ''i'', ''j'' distinct.
All covariances are negative because for fixed ''n'', an increase in one component of a Dirichlet-multinomial vector requires a decrease in another component.
This is a ''K'' × ''K'' positive-semidefinite matrix of rank ''K'' − 1.
The entries of the corresponding correlation matrix are
: \rho(x_i, x_i) = 1
: \rho(x_i, x_j) = \frac{\operatorname{cov}(x_i, x_j)}{\sqrt{\operatorname{var}(x_i) \operatorname{var}(x_j)}} = -\sqrt{\frac{p_i p_j}{(1 - p_i)(1 - p_j)}} .
The sample size drops out of this expression.
Each of the ''K'' components separately has a beta-binomial distribution.
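To make the moment formulas concrete, the following sketch (Python, assuming NumPy) compares them against a Monte Carlo estimate obtained by compounding Dirichlet and multinomial draws:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([1.0, 2.0, 3.0])
n = 10
a0 = alpha.sum()
p = alpha / a0

# Analytic mean and covariance from the formulas above
mean = n * p
over = (n + a0) / (1 + a0)                 # overdispersion factor
cov = n * (np.diag(p) - np.outer(p, p)) * over

# Monte Carlo: draw p ~ Dirichlet(alpha), then x ~ Multinomial(n, p)
samples = np.array([rng.multinomial(n, rng.dirichlet(alpha))
                    for _ in range(200_000)])
print(mean, samples.mean(axis=0))          # should agree closely
print(cov[0, 1], np.cov(samples.T)[0, 1])  # e.g. the (1,2) covariance
</syntaxhighlight>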
The support of the Dirichlet-multinomial distribution is the set
: \left\{ (n_1, \dots, n_K) \in \mathbb{N}^K : n_1 + \cdots + n_K = n \right\} .
Its number of elements is
: \binom{n + K - 1}{K - 1} = \frac{(n + K - 1)!}{n! \, (K - 1)!} .
Matrix notation
In matrix notation,
: \operatorname{E}(\mathbf{x}) = n \mathbf{p}
and
: \operatorname{var}(\mathbf{x}) = n \left\{ \operatorname{diag}(\mathbf{p}) - \mathbf{p} \mathbf{p}^{\rm T} \right\} \frac{n + \alpha_0}{1 + \alpha_0} ,
with \mathbf{p}^{\rm T} = the row vector transpose of the column vector \mathbf{p}. Letting
: \frac{n + \alpha_0}{1 + \alpha_0} = 1 + \rho^2 (n - 1) , \quad \text{that is,} \quad \rho^2 = \frac{1}{1 + \alpha_0} ,
we can write alternatively
: \operatorname{var}(\mathbf{x}) = n \left\{ \operatorname{diag}(\mathbf{p}) - \mathbf{p} \mathbf{p}^{\rm T} \right\} \left\{ 1 + \rho^2 (n - 1) \right\} .
The parameter \rho is known as the "intra class" or "intra cluster" correlation. It is this positive correlation which gives rise to overdispersion relative to the multinomial distribution.
Aggregation
If
: X = (X_1, \dots, X_K) \sim \operatorname{DM}(\alpha_1, \dots, \alpha_K)
then, if the random variables with subscripts ''i'' and ''j'' are dropped from the vector and replaced by their sum,
: X' = (X_1, \dots, X_i + X_j, \dots, X_K) \sim \operatorname{DM}(\alpha_1, \dots, \alpha_i + \alpha_j, \dots, \alpha_K) .
This aggregation property may be used to derive the marginal distribution of X_i.
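A quick Monte Carlo illustration of the aggregation property (a sketch, not a proof; compound sampling as before):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
n = 8

def sample_dm(alpha, size):
    # Compound sampling: p ~ Dirichlet(alpha), then x ~ Multinomial(n, p)
    return np.array([rng.multinomial(n, rng.dirichlet(alpha))
                     for _ in range(size)])

# Aggregate the first two categories of DM(1, 2, 3) ...
x = sample_dm(np.array([1.0, 2.0, 3.0]), 100_000)
agg = np.column_stack([x[:, 0] + x[:, 1], x[:, 2]])

# ... and compare with direct draws from DM(1 + 2, 3)
y = sample_dm(np.array([3.0, 3.0]), 100_000)
print(agg.mean(axis=0), y.mean(axis=0))  # means should match
print(agg.var(axis=0), y.var(axis=0))    # and variances too
</syntaxhighlight>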
Likelihood function
Conceptually, we are making ''N'' independent draws from a categorical distribution with ''K'' categories. Let us represent the independent draws as random categorical variables z_n for n = 1 \dots N. Let us denote the number of times a particular category ''k'' has been seen (for k = 1 \dots K) among all the categorical variables as n_k, so that \sum_k n_k = N. Then, we have two separate views onto this problem:
# A set of ''N'' categorical variables z_1, \dots, z_N.
# A single vector-valued variable \mathbf{x} = (n_1, \dots, n_K), distributed according to a multinomial distribution.
The former case is a set of random variables specifying each ''individual'' outcome, while the latter is a variable specifying the ''number'' of outcomes of each of the ''K'' categories. The distinction is important, as the two cases have correspondingly different probability distributions.
The parameter of the categorical distribution is \mathbf{p} = (p_1, \dots, p_K), where p_k is the probability to draw value ''k''; \mathbf{p} is likewise the parameter of the multinomial distribution \Pr(\mathbf{x} \mid \mathbf{p}). Rather than specifying \mathbf{p} directly, we give it a conjugate prior distribution, and hence it is drawn from a Dirichlet distribution with parameter vector \boldsymbol\alpha.
By integrating out \mathbf{p}, we obtain a compound distribution. However, the form of the distribution is different depending on which view we take.
For a set of individual outcomes
Joint distribution
For categorical variables \mathbb{Z} = z_1, \dots, z_N, the marginal joint distribution is obtained by integrating out \mathbf{p}:
: \Pr(\mathbb{Z} \mid \boldsymbol\alpha) = \int_{\mathbf{p}} \Pr(\mathbb{Z} \mid \mathbf{p}) \Pr(\mathbf{p} \mid \boldsymbol\alpha) \, d\mathbf{p}
which results in the following explicit formula:
: \Pr(\mathbb{Z} \mid \boldsymbol\alpha) = \frac{\Gamma(\alpha_0)}{\Gamma(N + \alpha_0)} \prod_{k=1}^K \frac{\Gamma(n_k + \alpha_k)}{\Gamma(\alpha_k)}
where \Gamma is the gamma function, with
: \alpha_0 = \sum_k \alpha_k \quad \text{and} \quad N = \sum_k n_k .
Note the absence of the multinomial coefficient: the formula gives the probability of a particular sequence of categorical variables, not a probability over the counts within each category.
Although the variables z_1, \dots, z_N do not appear explicitly in the above formula, they enter in through the n_k values.
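As a sketch of this formula in code (Python, assuming NumPy and SciPy), computing the log-probability of a specific sequence of categorical outcomes from its category counts:
<syntaxhighlight lang="python">
import numpy as np
from scipy.special import gammaln

def categorical_sequence_logprob(z, alpha):
    """Log-probability of a specific sequence z (values in 0..K-1) of
    categorical draws with the shared probability vector integrated out
    against a Dirichlet(alpha) prior. No multinomial coefficient appears,
    since the order of outcomes is fixed."""
    alpha = np.asarray(alpha, dtype=float)
    n_k = np.bincount(np.asarray(z), minlength=len(alpha)).astype(float)
    N, a0 = n_k.sum(), alpha.sum()
    return (gammaln(a0) - gammaln(N + a0)
            + np.sum(gammaln(n_k + alpha) - gammaln(alpha)))

# Two sequences with the same counts have the same probability:
print(categorical_sequence_logprob([0, 1, 1, 2], [1.0, 1.0, 1.0]))
print(categorical_sequence_logprob([1, 2, 0, 1], [1.0, 1.0, 1.0]))
</syntaxhighlight>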
Conditional distribution
Another useful formula, particularly in the context of Gibbs sampling, asks what the conditional density of a given variable z_n is, conditioned on all the other variables (which we will denote \mathbb{Z}^{(-n)}). It turns out to have an extremely simple form:
: \Pr(z_n = k \mid \mathbb{Z}^{(-n)}, \boldsymbol\alpha) \propto n_k^{(-n)} + \alpha_k
where n_k^{(-n)} specifies the number of counts of category ''k'' seen in all variables other than z_n.
It may be useful to show how to derive this formula. In general, conditional distributions are proportional to the corresponding joint distributions, so we simply start with the above formula for the joint distribution of all the z_n values and then eliminate any factors not dependent on the particular z_n in question. To do this, we make use of the notation n_k^{(-n)} defined above, and
: n_k = \begin{cases} n_k^{(-n)} + 1 , & \text{if } z_n = k \\ n_k^{(-n)} , & \text{otherwise.} \end{cases}
We also use the fact that
: \Gamma(n + 1) = n \, \Gamma(n) .
Then:
: \begin{aligned}
\Pr(z_n = k \mid \mathbb{Z}^{(-n)}, \boldsymbol\alpha) &\propto \Pr(z_n = k, \mathbb{Z}^{(-n)} \mid \boldsymbol\alpha) \\
&= \frac{\Gamma(\alpha_0)}{\Gamma(N + \alpha_0)} \prod_{k'=1}^K \frac{\Gamma(n_{k'} + \alpha_{k'})}{\Gamma(\alpha_{k'})} \\
&\propto \prod_{k'=1}^K \Gamma(n_{k'} + \alpha_{k'}) = \left( n_k^{(-n)} + \alpha_k \right) \prod_{k'=1}^K \Gamma\left( n_{k'}^{(-n)} + \alpha_{k'} \right) \\
&\propto n_k^{(-n)} + \alpha_k
\end{aligned}
where the middle equality uses the case analysis above together with \Gamma(n + 1) = n \, \Gamma(n), and the final step drops the product, which does not depend on the choice of ''k''.
In general, it is not necessary to worry about the normalizing constant at the time of deriving the equations for conditional distributions. The normalizing constant will be determined as part of the algorithm for sampling from the distribution (see Categorical distribution#Sampling). However, when the conditional distribution is written in the simple form above, it turns out that the normalizing constant assumes a simple form:
: \sum_{k=1}^K \left( n_k^{(-n)} + \alpha_k \right) = N - 1 + \alpha_0 .
Hence
: \Pr(z_n = k \mid \mathbb{Z}^{(-n)}, \boldsymbol\alpha) = \frac{n_k^{(-n)} + \alpha_k}{N - 1 + \alpha_0} .
This formula is closely related to the Chinese restaurant process, which results from taking the limit as K \to \infty.
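As a concrete sketch of how this conditional drives a collapsed Gibbs sweep (Python; the function and variable names are illustrative):
<syntaxhighlight lang="python">
import numpy as np

def gibbs_sweep(z, alpha, rng):
    """One collapsed Gibbs sweep over categorical variables z (values
    0..K-1) whose shared probability vector has been integrated out
    against a Dirichlet(alpha) prior."""
    K = len(alpha)
    counts = np.bincount(z, minlength=K).astype(float)
    for i in range(len(z)):
        counts[z[i]] -= 1                  # remove z_i from the counts
        probs = counts + alpha             # n_k^{(-i)} + alpha_k
        probs /= probs.sum()               # normalizer is N - 1 + alpha_0
        z[i] = rng.choice(K, p=probs)      # resample z_i
        counts[z[i]] += 1                  # add it back
    return z

rng = np.random.default_rng(2)
z = rng.integers(0, 3, size=20)
print(gibbs_sweep(z, np.array([1.0, 1.0, 1.0]), rng))
</syntaxhighlight>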
In a Bayesian network
In a larger Bayesian network in which categorical (or so-called "multinomial") distributions occur with Dirichlet-distribution priors as part of a larger network, all Dirichlet priors can be collapsed provided that the only nodes depending on them are categorical distributions. The collapsing happens for each Dirichlet-distribution node separately from the others, and occurs regardless of any other nodes that may depend on the categorical distributions. It also occurs regardless of whether the categorical distributions depend on nodes additional to the Dirichlet priors (although in such a case, those other nodes must remain as additional conditioning factors). Essentially, all of the categorical distributions depending on a given Dirichlet-distribution node become connected into a single Dirichlet-multinomial joint distribution defined by the above formula. The joint distribution as defined this way will depend on the parent(s) of the integrated-out Dirichlet prior nodes, as well as any parent(s) of the categorical nodes other than the Dirichlet prior nodes themselves.
In the following sections, we discuss different configurations commonly found in Bayesian networks. We repeat the probability density from above, and define it using the symbol \operatorname{DirMult}(\mathbb{Z} \mid \boldsymbol\alpha):
: \operatorname{DirMult}(\mathbb{Z} \mid \boldsymbol\alpha) = \frac{\Gamma(\alpha_0)}{\Gamma(N + \alpha_0)} \prod_k \frac{\Gamma(n_k + \alpha_k)}{\Gamma(\alpha_k)}
Multiple Dirichlet priors with the same hyperprior
Imagine we have a hierarchical model as follows:
: \boldsymbol\alpha \sim \text{some distribution}
: \boldsymbol\theta_{d=1 \dots M} \sim \operatorname{Dirichlet}_K(\boldsymbol\alpha)
: z_{d=1 \dots M,\, n=1 \dots N_d} \sim \operatorname{Categorical}_K(\boldsymbol\theta_d)
In cases like this, we have multiple Dirichlet priors, each of which generates some number of categorical observations (possibly a different number for each prior). The fact that they are all dependent on the same hyperprior, even if this is a random variable as above, makes no difference. The effect of integrating out a Dirichlet prior links the categorical variables attached to that prior, whose joint distribution simply inherits any conditioning factors of the Dirichlet prior. The fact that multiple priors may share a hyperprior makes no difference:
: \Pr(\mathbb{Z} \mid \boldsymbol\alpha) = \prod_{d=1}^M \operatorname{DirMult}(\mathbb{Z}_d \mid \boldsymbol\alpha)
where \mathbb{Z}_d is simply the collection of categorical variables dependent on prior ''d''.
Accordingly, the conditional probability distribution can be written as follows:
: \Pr(z_{dn} = k \mid \mathbb{Z}^{(-dn)}, \boldsymbol\alpha) \propto n_k^{(-dn),d} + \alpha_k
where n_k^{(-dn),d} specifically means the number of variables ''among the set'' \mathbb{Z}_d, excluding z_{dn} itself, that have the value ''k''.
It is necessary to count only the variables having the value ''k'' that are tied to the variable in question through having the same prior. We do not want to count any other variables also having the value ''k''.
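A small sketch of this bookkeeping (Python; the group-assignment array d below is illustrative), showing that only counts within the same prior group enter the conditional:
<syntaxhighlight lang="python">
import numpy as np

def conditional_weights(z, d, i, alpha):
    """Unnormalized Gibbs weights for variable i, where z holds category
    values and d holds the index of the Dirichlet prior each variable
    depends on. Only variables sharing prior d[i] contribute counts."""
    same_group = (d == d[i])
    same_group[i] = False                          # exclude z_i itself
    counts = np.bincount(z[same_group], minlength=len(alpha))
    return counts + alpha                          # n_k^{(-i),d} + alpha_k

z = np.array([0, 1, 1, 2, 0, 2])
d = np.array([0, 0, 0, 1, 1, 1])                   # two prior groups
print(conditional_weights(z, d, 1, np.ones(3)))    # uses only z[0], z[2]
</syntaxhighlight>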
Multiple Dirichlet priors with the same hyperprior, with dependent children
Now imagine a slightly more complicated hierarchical model as follows:
: \boldsymbol\alpha \sim \text{some distribution}
: \boldsymbol\theta_{d=1 \dots M} \sim \operatorname{Dirichlet}_K(\boldsymbol\alpha)
: z_{d=1 \dots M,\, n=1 \dots N_d} \sim \operatorname{Categorical}_K(\boldsymbol\theta_d)
: w_{d=1 \dots M,\, n=1 \dots N_d} \sim \operatorname{F}(\cdot \mid z_{dn})
This model is the same as above, but in addition, each of the categorical variables has a child variable dependent on it. This is typical of a mixture model.
Again, in the joint distribution, only the categorical variables dependent on the same prior are linked into a single Dirichlet-multinomial:
: \Pr(\mathbb{Z}, \mathbb{W} \mid \boldsymbol\alpha) = \left[ \prod_{d=1}^M \operatorname{DirMult}(\mathbb{Z}_d \mid \boldsymbol\alpha) \right] \prod_{d=1}^M \prod_{n=1}^{N_d} \operatorname{F}(w_{dn} \mid z_{dn})
The conditional distribution of the categorical variables dependent only on their parents and ancestors would have the identical form as above in the simpler case. However, in Gibbs sampling it is necessary to determine the conditional distribution of a given node z_{dn} dependent not only on \mathbb{Z}^{(-dn)} and ancestors such as \boldsymbol\alpha but on ''all'' the other parameters.
The simplified expression for the conditional distribution is derived above simply by rewriting the expression for the joint probability and removing constant factors. Hence, the same simplification would apply in a larger joint probability expression such as the one in this model, composed of Dirichlet-multinomial densities plus factors for many other random variables dependent on the values of the categorical variables.
This yields the following:
: \Pr(z_{dn} = k \mid \mathbb{Z}^{(-dn)}, \mathbb{W}, \boldsymbol\alpha) \propto \left( n_k^{(-dn),d} + \alpha_k \right) \operatorname{F}(w_{dn} \mid z_{dn} = k)
Here the probability density of \operatorname{F} appears directly. To do random sampling over z_{dn}, we would compute the unnormalized probabilities for all ''K'' possibilities for z_{dn} using the above formula, then normalize them and proceed as normal using the algorithm described in the categorical distribution article.
Correctly speaking, the additional factor that appears in the conditional distribution is derived not from the model specification but directly from the joint distribution. This distinction is important when considering models where a given node with Dirichlet-prior parent has multiple dependent children, particularly when those children are dependent on each other (e.g. if they share a parent that is collapsed out). This is discussed more below.
Multiple Dirichlet priors with shifting prior membership
Now imagine we have a hierarchical model as follows:
: \boldsymbol\theta \sim \text{some distribution}
: z_{n=1 \dots N} \sim \operatorname{Categorical}_K(\boldsymbol\theta)
: \boldsymbol\phi_{k=1 \dots K} \sim \operatorname{Dirichlet}_V(\boldsymbol\beta)
: w_{n=1 \dots N} \sim \operatorname{Categorical}_V(\boldsymbol\phi_{z_n})
Here we have a tricky situation where we have multiple Dirichlet priors as before and a set of dependent categorical variables, but the relationship between the priors and dependent variables isn't fixed, unlike before. Instead, the choice of which prior to use is dependent on another random categorical variable. This occurs, for example, in topic models, and indeed the names of the variables above are meant to correspond to those in latent Dirichlet allocation. In this case, the set \mathbb{W} is a set of words, each of which is drawn from one of ''K'' possible topics, where each topic is a Dirichlet prior over a vocabulary of ''V'' possible words, specifying the frequency of different words in the topic. However, the topic membership of a given word isn't fixed; rather, it's determined from a set of latent variables \mathbb{Z}. There is one latent variable per word, a ''K''-dimensional categorical variable specifying the topic the word belongs to.
In this case, all variables dependent on a given prior are tied together (i.e. correlated) in a group, as before; specifically, all words belonging to a given topic are linked. In this case, however, the group membership shifts, in that the words are not fixed to a given topic but the topic depends on the value of a latent variable associated with the word. However, the definition of the Dirichlet-multinomial density doesn't actually depend on the number of categorical variables in a group (i.e. the number of words in the document generated from a given topic), but only on the counts of how many variables in the group have a given value (i.e. among all the word tokens generated from a given topic, how many of them are a given word). Hence, we can still write an explicit formula for the joint distribution:
: \Pr(\mathbb{W} \mid \mathbb{Z}, \boldsymbol\beta) = \prod_{k=1}^K \operatorname{DirMult}(\mathbb{W}_k \mid \boldsymbol\beta)
where \mathbb{W}_k denotes the set of words assigned to topic ''k'' (i.e. those w_n for which z_n = k).
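As a sketch of how this joint distribution is used in a collapsed Gibbs sampler for an LDA-like model (Python; the per-topic word distributions are integrated out against symmetric Dirichlet(beta) priors, \boldsymbol\theta is treated as known for simplicity, and all names are illustrative):
<syntaxhighlight lang="python">
import numpy as np

def resample_topics(w, z, K, V, beta, theta, rng):
    """One collapsed Gibbs sweep over topic assignments z for words w.
    Per-topic word distributions are integrated out against symmetric
    Dirichlet(beta) priors; theta is a fixed topic-prior vector."""
    counts = np.zeros((K, V))                  # n_{kv}: word v in topic k
    for n in range(len(w)):
        counts[z[n], w[n]] += 1
    for n in range(len(w)):
        counts[z[n], w[n]] -= 1                # remove word n from its topic
        # topic weight: prior term times collapsed word likelihood
        weights = theta * (counts[:, w[n]] + beta) \
                        / (counts.sum(axis=1) + V * beta)
        z[n] = rng.choice(K, p=weights / weights.sum())
        counts[z[n], w[n]] += 1                # reinsert under the new topic
    return z

rng = np.random.default_rng(3)
w = rng.integers(0, 5, size=30)                # 30 word tokens, vocab V = 5
z = rng.integers(0, 2, size=30)                # K = 2 topics
print(resample_topics(w, z, K=2, V=5, beta=0.1,
                      theta=np.array([0.5, 0.5]), rng=rng))
</syntaxhighlight>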