In probability and statistics, the Dirichlet distribution (after Peter Gustav Lejeune Dirichlet), often denoted \operatorname{Dir}(\boldsymbol\alpha), is a family of continuous multivariate probability distributions parameterized by a vector \boldsymbol\alpha of positive reals. It is a multivariate generalization of the beta distribution (Chapter 49: Dirichlet and Inverted Dirichlet Distributions), hence its alternative name of multivariate beta distribution (MBD). Dirichlet distributions are commonly used as prior distributions in Bayesian statistics, and in fact, the Dirichlet distribution is the conjugate prior of the categorical distribution and multinomial distribution. The infinite-dimensional generalization of the Dirichlet distribution is the ''Dirichlet process''.


Definitions


Probability density function

The Dirichlet distribution of order ''K'' ≥ 2 with parameters ''α''1, ..., ''α''''K'' > 0 has a probability density function with respect to Lebesgue measure on the Euclidean space R''K''−1 given by

:f\left(x_1,\ldots, x_K; \alpha_1,\ldots, \alpha_K \right) = \frac{1}{\mathrm{B}(\boldsymbol\alpha)} \prod_{i=1}^K x_i^{\alpha_i - 1}

where \{x_i\}_{i=1}^{K} belong to the standard (''K'' − 1)-simplex, or in other words:

:\sum_{i=1}^{K} x_i = 1 \mbox{ and } x_i \in \left[0,1\right] \mbox{ for all } i \in \{1,\ldots,K\}

The normalizing constant is the multivariate beta function, which can be expressed in terms of the gamma function:

:\mathrm{B}(\boldsymbol\alpha) = \frac{\prod_{i=1}^K \Gamma(\alpha_i)}{\Gamma\left(\sum_{i=1}^K \alpha_i\right)},\qquad\boldsymbol\alpha=(\alpha_1,\ldots,\alpha_K).
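The density can be evaluated directly from this definition. Below is a minimal sketch in Python (the function name is ours, not from any particular library):

import math

def dirichlet_logpdf(x, alpha):
    # log B(alpha) = sum_i log Gamma(alpha_i) - log Gamma(sum_i alpha_i)
    log_beta = sum(math.lgamma(a) for a in alpha) - math.lgamma(sum(alpha))
    # The density is only defined on the simplex: x_i > 0 and sum(x) == 1.
    assert math.isclose(sum(x), 1.0)
    return sum((a - 1) * math.log(xi) for a, xi in zip(alpha, x)) - log_beta

print(dirichlet_logpdf([0.2, 0.3, 0.5], [1.0, 2.0, 3.0]))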


Support

The support of the Dirichlet distribution is the set of ''K''-dimensional vectors \boldsymbol x whose entries are real numbers in the interval [0,1] such that \|\boldsymbol x\|_1 = 1, i.e. the sum of the coordinates is equal to 1. These can be viewed as the probabilities of a ''K''-way categorical event. Another way to express this is that the domain of the Dirichlet distribution is itself a set of probability distributions, specifically the set of ''K''-dimensional discrete distributions. The technical term for the set of points in the support of a ''K''-dimensional Dirichlet distribution is the open standard (''K'' − 1)-simplex, which is a generalization of a triangle, embedded in the next-higher dimension. For example, with ''K'' = 3, the support is an equilateral triangle embedded in a downward-angle fashion in three-dimensional space, with vertices at (1,0,0), (0,1,0) and (0,0,1), i.e. touching each of the coordinate axes at a point 1 unit away from the origin.


Special cases

A common special case is the symmetric Dirichlet distribution, where all of the elements making up the parameter vector \boldsymbol\alpha have the same value. The symmetric case might be useful, for example, when a Dirichlet prior over components is called for, but there is no prior knowledge favoring one component over another. Since all elements of the parameter vector have the same value, the symmetric Dirichlet distribution can be parametrized by a single scalar value ''α'', called the concentration parameter. In terms of ''α'', the density function has the form

:f(x_1,\dots, x_{K-1}; \alpha) = \frac{\Gamma(\alpha K)}{\Gamma(\alpha)^K} \prod_{i=1}^K x_i^{\alpha - 1}.

When ''α'' = 1, the symmetric Dirichlet distribution is equivalent to a uniform distribution over the open standard (''K'' − 1)-simplex, i.e. it is uniform over all points in its support. This particular distribution is known as the flat Dirichlet distribution. Values of the concentration parameter above 1 favor dense, evenly distributed variates, i.e. all the values within a single sample are similar to each other. Values of the concentration parameter below 1 favor sparse variates, i.e. most of the values within a single sample will be close to 0, and the vast majority of the mass will be concentrated in a few of the values.

More generally, the parameter vector is sometimes written as the product \alpha \boldsymbol n of a (scalar) concentration parameter ''α'' and a (vector) base measure \boldsymbol n=(n_1,\dots,n_K) where \boldsymbol n lies within the (''K'' − 1)-simplex (i.e.: its coordinates n_i sum to one). The concentration parameter in this case is larger by a factor of ''K'' than the concentration parameter for a symmetric Dirichlet distribution described above. This construction ties in with the concept of a base measure when discussing Dirichlet processes and is often used in the topic modelling literature.

: If we define the concentration parameter as the sum of the Dirichlet parameters for each dimension, the Dirichlet distribution with concentration parameter ''K'', the dimension of the distribution, is the uniform distribution on the (''K'' − 1)-simplex.
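The qualitative effect of the concentration parameter is easy to see by simulation. Below is a small sketch (function name ours) that draws symmetric Dirichlet samples via the standard gamma construction described later in this article:

import random

def sample_symmetric_dirichlet(alpha, K):
    # Normalize K iid Gamma(alpha, 1) draws to obtain one Dir(alpha, ..., alpha) sample.
    ys = [random.gammavariate(alpha, 1.0) for _ in range(K)]
    total = sum(ys)
    return [y / total for y in ys]

print(sample_symmetric_dirichlet(10.0, 5))  # alpha >> 1: entries all near 1/5
print(sample_symmetric_dirichlet(0.1, 5))   # alpha << 1: mass piles onto one or two entries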


Properties


Moments

Let X = (X_1, \ldots, X_K)\sim\operatorname{Dir}(\boldsymbol\alpha). Let

:\alpha_0 = \sum_{i=1}^K \alpha_i.

Then

: \operatorname{E}[X_i] = \frac{\alpha_i}{\alpha_0},

:\operatorname{Var}[X_i] = \frac{\alpha_i (\alpha_0-\alpha_i)}{\alpha_0^2 (\alpha_0+1)}.

Furthermore, if i\neq j

:\operatorname{Cov}[X_i,X_j] = \frac{- \alpha_i \alpha_j}{\alpha_0^2 (\alpha_0+1)}.

The covariance matrix is thus singular. More generally, moments of Dirichlet-distributed random variables can be expressed as

:\operatorname{E}\left[\prod_{i=1}^K X_i^{\beta_i}\right] = \frac{\mathrm{B}\left(\boldsymbol\alpha + \boldsymbol\beta\right)}{\mathrm{B}\left(\boldsymbol\alpha\right)} = \frac{\Gamma\left(\sum_{i=1}^K \alpha_i\right)}{\Gamma\left[\sum_{i=1}^K (\alpha_i+\beta_i)\right]}\times\prod_{i=1}^K \frac{\Gamma(\alpha_i+\beta_i)}{\Gamma(\alpha_i)}.
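These formulas translate directly into code. A minimal sketch (function name ours):

def dirichlet_mean_cov(alpha):
    # Mean E[X_i] = alpha_i / alpha_0; covariance from the two formulas above.
    a0 = sum(alpha)
    mean = [a / a0 for a in alpha]
    denom = a0 * a0 * (a0 + 1)
    cov = [[((a0 * ai if i == j else 0.0) - ai * aj) / denom
            for j, aj in enumerate(alpha)]
           for i, ai in enumerate(alpha)]
    return mean, cov

mean, cov = dirichlet_mean_cov([1.0, 2.0, 3.0])
# Each row of cov sums to 0, confirming that the covariance matrix is singular.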


Mode

The mode of the distribution is the vector (''x''1, ..., ''x''''K'') with

: x_i = \frac{\alpha_i - 1}{\alpha_0 - K}, \qquad \alpha_i > 1.
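As a small illustration (a hypothetical helper, not a library function), the mode is defined only when every \alpha_i > 1:

def dirichlet_mode(alpha):
    if min(alpha) <= 1:
        raise ValueError("the mode requires every alpha_i > 1")
    a0, K = sum(alpha), len(alpha)
    return [(a - 1) / (a0 - K) for a in alpha]

print(dirichlet_mode([2.0, 3.0, 4.0]))  # [1/6, 1/3, 1/2]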


Marginal distributions

The marginal distributions are beta distributions:

:X_i \sim \operatorname{Beta}(\alpha_i, \alpha_0 - \alpha_i).


Conjugate to categorical/multinomial

The Dirichlet distribution is the conjugate prior distribution of the categorical distribution (a generic discrete probability distribution with a given number of possible outcomes) and multinomial distribution (the distribution over observed counts of each possible category in a set of categorically distributed observations). This means that if a data point has either a categorical or multinomial distribution, and the prior distribution of the distribution's parameter (the vector of probabilities that generates the data point) is distributed as a Dirichlet, then the posterior distribution of the parameter is also a Dirichlet. Intuitively, in such a case, starting from what we know about the parameter prior to observing the data point, we then can update our knowledge based on the data point and end up with a new distribution of the same form as the old one. This means that we can successively update our knowledge of a parameter by incorporating new observations one at a time, without running into mathematical difficulties.

Formally, this can be expressed as follows. Given a model

:\begin{array}{rcccl}
\boldsymbol\alpha &=& \left(\alpha_1, \ldots, \alpha_K \right) &=& \text{concentration hyperparameter} \\
\mathbf{p}\mid\boldsymbol\alpha &=& \left(p_1, \ldots, p_K \right ) &\sim& \operatorname{Dir}(K, \boldsymbol\alpha) \\
\mathbb{X}\mid\mathbf{p} &=& \left(\mathbf{x}_1, \ldots, \mathbf{x}_N \right ) &\sim& \operatorname{Cat}(K,\mathbf{p})
\end{array}

then the following holds:

:\begin{array}{rcccl}
\mathbf{c} &=& \left(c_1, \ldots, c_K \right ) &=& \text{number of occurrences of category } i \\
\mathbf{p} \mid \mathbb{X},\boldsymbol\alpha &\sim& \operatorname{Dir}(K,\mathbf{c}+\boldsymbol\alpha) &=& \operatorname{Dir} \left (K,c_1+\alpha_1,\ldots,c_K+\alpha_K \right)
\end{array}

This relationship is used in Bayesian statistics to estimate the underlying parameter p of a categorical distribution given a collection of ''N'' samples. Intuitively, we can view the hyperprior vector α as pseudocounts, i.e. as representing the number of observations in each category that we have already seen. Then we simply add in the counts for all the new observations (the vector c) in order to derive the posterior distribution.

In Bayesian mixture models and other hierarchical Bayesian models with mixture components, Dirichlet distributions are commonly used as the prior distributions for the categorical variables appearing in the models. See the section on applications below for more information.
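Because the update just adds observed counts to the prior parameters, the posterior computation is a one-liner. A minimal sketch (function name ours):

def dirichlet_posterior(alpha, counts):
    # Conjugate update: Dir(alpha) prior + categorical counts c -> Dir(alpha + c).
    return [a + c for a, c in zip(alpha, counts)]

posterior = dirichlet_posterior([1.0, 1.0, 1.0], [10, 2, 5])  # flat prior, 17 observations
p_hat = [a / sum(posterior) for a in posterior]  # posterior mean estimate of p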


Relation to Dirichlet-multinomial distribution

In a model where a Dirichlet prior distribution is placed over a set of categorical-valued observations, the marginal joint distribution of the observations (i.e. the joint distribution of the observations, with the prior parameter marginalized out) is a Dirichlet-multinomial distribution. This distribution plays an important role in hierarchical Bayesian models, because when doing inference over such models using methods such as Gibbs sampling or variational Bayes, Dirichlet prior distributions are often marginalized out. See the article on this distribution for more details.


Entropy

If ''X'' is a Dir(''α'') random variable, the differential entropy of ''X'' (in nat units) is

: h(\boldsymbol X) = \operatorname{E}[-\ln f(\boldsymbol X)] = \ln \operatorname{B}(\boldsymbol\alpha) + (\alpha_0-K)\psi(\alpha_0) - \sum_{j=1}^K (\alpha_j-1)\psi(\alpha_j)

where \psi is the digamma function. The following formula for \operatorname{E}[\ln(X_i)] can be used to derive the differential entropy above. Since the functions \ln(X_i) are the sufficient statistics of the Dirichlet distribution, the exponential family differential identities can be used to get an analytic expression for the expectation of \ln(X_i) and its associated covariance matrix:

: \operatorname{E}[\ln(X_i)] = \psi(\alpha_i)-\psi(\alpha_0)

and

: \operatorname{Cov}[\ln(X_i),\ln(X_j)] = \psi'(\alpha_i) \delta_{ij} - \psi'(\alpha_0)

where \psi is the digamma function, \psi' is the trigamma function, and \delta_{ij} is the Kronecker delta.

The spectrum of Rényi information for values other than \lambda = 1 is given by

:F_R(\lambda) = (1-\lambda)^{-1} \left( - \lambda \log \mathrm{B}(\boldsymbol\alpha) + \sum_{i=1}^K \log \Gamma(\lambda(\alpha_i - 1) + 1) - \log \Gamma(\lambda (\alpha_0 -K)+K ) \right)

and the information entropy is the limit as \lambda goes to 1.

Another related interesting measure is the entropy of a discrete categorical (one-of-''K'' binary) vector \boldsymbol Z with probability-mass distribution \boldsymbol X, i.e., P(Z_i=1, Z_{j \neq i} = 0 \mid \boldsymbol X) = X_i. The conditional information entropy of \boldsymbol Z, given \boldsymbol X is

: S(\boldsymbol X) = H(\boldsymbol Z \mid \boldsymbol X) = \operatorname{E}_{\boldsymbol Z}[-\log P(\boldsymbol Z \mid \boldsymbol X)] = \sum_{i=1}^K - X_i \log X_i

This function of \boldsymbol X is a scalar random variable. If \boldsymbol X has a symmetric Dirichlet distribution with all \alpha_i = \alpha, the expected value of the entropy (in nat units) is

: \operatorname{E}[S(\boldsymbol X)] = \sum_{i=1}^K \operatorname{E}[- X_i \ln X_i] = \psi(K\alpha + 1) - \psi(\alpha + 1)
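The differential entropy formula is straightforward to evaluate numerically; below is a sketch using SciPy's digamma (the function name is ours):

from math import lgamma
from scipy.special import digamma  # the psi function above

def dirichlet_entropy(alpha):
    # h = ln B(alpha) + (alpha_0 - K) psi(alpha_0) - sum_j (alpha_j - 1) psi(alpha_j)
    a0, K = sum(alpha), len(alpha)
    log_beta = sum(lgamma(a) for a in alpha) - lgamma(a0)
    return log_beta + (a0 - K) * digamma(a0) - sum((a - 1) * digamma(a) for a in alpha)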


Aggregation

If

:X = (X_1, \ldots, X_K)\sim\operatorname{Dir}(\alpha_1,\ldots,\alpha_K)

then, if the random variables with subscripts ''i'' and ''j'' are dropped from the vector and replaced by their sum,

:X' = (X_1, \ldots, X_i + X_j, \ldots, X_K)\sim\operatorname{Dir} (\alpha_1, \ldots, \alpha_i + \alpha_j, \ldots, \alpha_K).

This aggregation property may be used to derive the marginal distribution of X_i mentioned above.
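In code, aggregation simply means summing the corresponding sample coordinates; the result behaves like a sample from the lower-dimensional Dirichlet with summed parameters. A small sketch (helper name ours, using the gamma construction shown later):

import random

def dir_sample(alphas):
    ys = [random.gammavariate(a, 1.0) for a in alphas]
    total = sum(ys)
    return [y / total for y in ys]

x = dir_sample([1.0, 2.0, 3.0])
x_agg = [x[0] + x[1], x[2]]  # distributed as Dir(1.0 + 2.0, 3.0)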


Neutrality

If X = (X_1, \ldots, X_K)\sim\operatorname{Dir}(\boldsymbol\alpha), then the vector ''X'' is said to be ''neutral'' in the sense that ''X''''K'' is independent of X^{(-K)} where

:X^{(-K)}=\left(\frac{X_1}{1-X_K},\frac{X_2}{1-X_K},\ldots,\frac{X_{K-1}}{1-X_K} \right),

and similarly for removing any of X_2,\ldots,X_{K-1}. Observe that any permutation of ''X'' is also neutral (a property not possessed by samples drawn from a generalized Dirichlet distribution).

Combining this with the property of aggregation it follows that ''X''''j'' + ... + ''X''''K'' is independent of \left(\frac{X_1}{X_1+\cdots +X_{j-1}},\frac{X_2}{X_1+\cdots +X_{j-1}},\ldots,\frac{X_{j-1}}{X_1+\cdots +X_{j-1}} \right). In fact it is true, further, for the Dirichlet distribution, that for 3\le j\le K-1, the pair \left(X_1+\cdots +X_{j-1}, X_j+\cdots +X_K\right), and the two vectors \left(\frac{X_1}{X_1+\cdots +X_{j-1}},\frac{X_2}{X_1+\cdots +X_{j-1}},\ldots,\frac{X_{j-1}}{X_1+\cdots +X_{j-1}} \right) and \left(\frac{X_j}{X_j+\cdots +X_K},\frac{X_{j+1}}{X_j+\cdots +X_K},\ldots,\frac{X_K}{X_j+\cdots +X_K} \right), viewed as a triple of normalised random vectors, are mutually independent. The analogous result is true for the partition of the indices into any other pair of non-singleton subsets.


Characteristic function

The characteristic function of the Dirichlet distribution is a confluent form of the Lauricella hypergeometric series. It is given by Phillips as

: CF\left(s_1,\ldots,s_{K-1}\right) = \operatorname{E}\left(e^{i\left(s_1 X_1+\cdots+s_{K-1} X_{K-1}\right)} \right) = \Psi^{[K-1]} (\alpha_1,\ldots,\alpha_{K-1};\alpha;is_1,\ldots, is_{K-1})

where \alpha = \alpha_1 + \cdots + \alpha_K and

: \Psi^{[m]} (a_1,\ldots,a_m;c;z_1,\ldots z_m) = \sum\frac{\left(a_1\right)_{k_1}\cdots\left(a_m\right)_{k_m} \, z_1^{k_1}\cdots z_m^{k_m}}{\left(c\right)_k \, k_1!\cdots k_m!}.

The sum is over non-negative integers k_1,\ldots,k_m and k=k_1+\cdots+k_m. Phillips goes on to state that this form is "inconvenient for numerical calculation" and gives an alternative in terms of a complex path integral:

: \Psi^{[m]} = \frac{\Gamma(c)}{2\pi i}\int_L e^t\,t^{a_1+\cdots+a_m-c}\,\prod_{j=1}^m (t-z_j)^{-a_j} \, dt

where ''L'' denotes any path in the complex plane originating at -\infty, encircling in the positive direction all the singularities of the integrand and returning to -\infty.
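Since the characteristic function is just an expectation, it can be checked by Monte Carlo without evaluating the series. A sketch (function name ours; it reuses the gamma-based sampler described later):

import cmath, random

def dirichlet_cf_mc(alphas, s, n=100_000):
    # Estimate E[exp(i * (s_1 X_1 + ... + s_{K-1} X_{K-1}))] for X ~ Dir(alphas).
    acc = 0j
    for _ in range(n):
        ys = [random.gammavariate(a, 1.0) for a in alphas]
        total = sum(ys)
        x = [y / total for y in ys]
        acc += cmath.exp(1j * sum(sj * xj for sj, xj in zip(s, x)))
    return acc / n

print(dirichlet_cf_mc([1.0, 2.0, 3.0], [0.5, -1.0]))  # s has length K - 1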


Inequality

The probability density function f \left(x_1,\ldots, x_{K-1}; \alpha_1,\ldots, \alpha_K \right) plays a key role in a multifunctional inequality which implies various bounds for the Dirichlet distribution.


Related distributions

For ''K'' independently distributed gamma distributions:

: Y_1 \sim \operatorname{Gamma}(\alpha_1, \theta), \ldots, Y_K \sim \operatorname{Gamma}(\alpha_K, \theta)

we have:

:V=\sum_{i=1}^K Y_i\sim\operatorname{Gamma} \left(\sum_{i=1}^K\alpha_i, \theta \right ),

:X = (X_1, \ldots, X_K) = \left(\frac{Y_1}{V}, \ldots, \frac{Y_K}{V} \right)\sim \operatorname{Dir}\left (\alpha_1, \ldots, \alpha_K \right).

Although the ''X''''i''s are not independent from one another, they can be seen to be generated from a set of ''K'' independent gamma random variables. Unfortunately, since the sum ''V'' is lost in forming ''X'' (in fact it can be shown that ''V'' is stochastically independent of ''X''), it is not possible to recover the original gamma random variables from these values alone. Nevertheless, because independent random variables are simpler to work with, this reparametrization can still be useful for proofs about properties of the Dirichlet distribution.


Conjugate prior of the Dirichlet distribution

Because the Dirichlet distribution is an exponential family distribution it has a conjugate prior. The conjugate prior is of the form:

:\operatorname{CD}(\boldsymbol\alpha \mid \boldsymbol{v},\eta) \propto \left(\frac{1}{\operatorname{B}(\boldsymbol\alpha)}\right)^\eta \exp\left(-\sum_k v_k \alpha_k\right).

Here \boldsymbol{v} is a ''K''-dimensional real vector and \eta is a scalar parameter. The domain of (\boldsymbol{v},\eta) is restricted to the set of parameters for which the above unnormalized density function can be normalized. The (necessary and sufficient) condition is:

: \forall k\;\;v_k>0\;\;\;\;\text{and} \;\;\;\;\eta>-1 \;\;\;\;\text{and} \;\;\;\;(\eta\leq0\;\;\;\;\text{or}\;\;\;\;\sum_k \exp\left(-\frac{v_k}{\eta}\right) < 1)

The conjugation property can be expressed as

: if [''prior'': \boldsymbol\alpha\sim\operatorname{CD}(\cdot \mid \boldsymbol{v},\eta)] and [''observation'': \boldsymbol{x}\mid\boldsymbol\alpha\sim\operatorname{Dir}(\cdot \mid \boldsymbol\alpha)] then [''posterior'': \boldsymbol\alpha\mid\boldsymbol{x}\sim\operatorname{CD}(\cdot \mid \boldsymbol{v}-\log \boldsymbol{x}, \eta+1)].

In the published literature there is no practical algorithm to efficiently generate samples from \operatorname{CD}(\boldsymbol\alpha \mid \boldsymbol{v},\eta).
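Even without a sampler, the unnormalized log-density is easy to evaluate directly from the form above; a sketch (function name ours):

from math import lgamma

def conjugate_prior_logpdf_unnorm(alpha, v, eta):
    # log of (1/B(alpha))^eta * exp(-sum_k v_k alpha_k), up to the normalizing constant
    log_b = sum(lgamma(a) for a in alpha) - lgamma(sum(alpha))
    return -eta * log_b - sum(vk * ak for vk, ak in zip(v, alpha))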


Occurrence and applications


Bayesian models

Dirichlet distributions are most commonly used as the prior distribution of categorical variables or multinomial variables in Bayesian mixture models and other hierarchical Bayesian models. (In many fields, such as in natural language processing, categorical variables are often imprecisely called "multinomial variables". Such a usage is unlikely to cause confusion, just as when Bernoulli distributions and binomial distributions are commonly conflated.)

Inference over hierarchical Bayesian models is often done using Gibbs sampling, and in such a case, instances of the Dirichlet distribution are typically marginalized out of the model by integrating out the Dirichlet random variable. This causes the various categorical variables drawn from the same Dirichlet random variable to become correlated, and the joint distribution over them assumes a Dirichlet-multinomial distribution, conditioned on the hyperparameters of the Dirichlet distribution (the concentration parameters). One of the reasons for doing this is that Gibbs sampling of the Dirichlet-multinomial distribution is extremely easy; see that article for more information.


Intuitive interpretations of the parameters


The concentration parameter

Dirichlet distributions are very often used as prior distributions in Bayesian inference. The simplest and perhaps most common type of Dirichlet prior is the symmetric Dirichlet distribution, where all parameters are equal. This corresponds to the case where you have no prior information to favor one component over any other. As described above, the single value ''α'' to which all parameters are set is called the concentration parameter. If the sample space of the Dirichlet distribution is interpreted as a discrete probability distribution, then intuitively the concentration parameter can be thought of as determining how "concentrated" the probability mass of a sample from the Dirichlet distribution is likely to be. With a value much less than 1, the mass will be highly concentrated in a few components, and all the rest will have almost no mass; with a value much greater than 1, the mass will be dispersed almost equally among all the components. See the article on the concentration parameter for further discussion.


String cutting

One example use of the Dirichlet distribution is if one wanted to cut strings (each of initial length 1.0) into ''K'' pieces with different lengths, where each piece had a designated average length, but allowing some variation in the relative sizes of the pieces. Recall that \alpha_0 = \sum_{i=1}^K \alpha_i. The ''α''''i''/''α''0 values specify the mean lengths of the cut pieces of string resulting from the distribution. The variance around this mean varies inversely with ''α''0.
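A small simulation sketch (function name ours) makes the trade-off concrete: the parameter ratios fix the mean piece lengths, while scaling all parameters up shrinks the variation:

import random

def cut_string(alphas, length=1.0):
    # Piece i has mean length length * alphas[i] / sum(alphas).
    ys = [random.gammavariate(a, 1.0) for a in alphas]
    total = sum(ys)
    return [length * y / total for y in ys]

print(cut_string([5.0, 3.0, 2.0]))     # mean lengths 0.5, 0.3, 0.2
print(cut_string([50.0, 30.0, 20.0]))  # same means, much less variation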


Pólya's urn

Consider an urn containing balls of ''K'' different colors. Initially, the urn contains ''α''1 balls of color 1, ''α''2 balls of color 2, and so on. Now perform ''N'' draws from the urn, where after each draw, the ball is placed back into the urn with an additional ball of the same color. In the limit as ''N'' approaches infinity, the proportions of different colored balls in the urn will be distributed as Dir(''α''1,...,''α''''K''). For a formal proof, note that the proportions of the different colored balls form a bounded [0,1]''K''-valued martingale, hence by the martingale convergence theorem, these proportions converge almost surely and in mean to a limiting random vector. To see that this limiting vector has the above Dirichlet distribution, check that all mixed moments agree. Each draw from the urn modifies the probability of drawing a ball of any one color from the urn in the future. This modification diminishes with the number of draws, since the relative effect of adding a new ball to the urn diminishes as the urn accumulates increasing numbers of balls.
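A direct simulation of the urn scheme (function name ours) shows the proportions settling toward a Dir(''α''1,...,''α''''K'') draw:

import random

def polya_urn(alphas, n_draws):
    counts = list(alphas)  # initial ball counts per color
    for _ in range(n_draws):
        r = random.uniform(0.0, sum(counts))
        for i, c in enumerate(counts):
            r -= c
            if r <= 0:
                counts[i] += 1  # return the ball plus one more of its color
                break
    total = sum(counts)
    return [c / total for c in counts]

print(polya_urn([1.0, 2.0, 3.0], 10_000))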


Random variate generation


From gamma distribution

With a source of gamma-distributed random variates, one can easily sample a random vector x=(x_1, \ldots, x_K) from the ''K''-dimensional Dirichlet distribution with parameters (\alpha_1, \ldots, \alpha_K). First, draw ''K'' independent random samples y_1, \ldots, y_K from gamma distributions, each with density

: \operatorname{Gamma}(\alpha_i, 1) = \frac{y_i^{\alpha_i-1} \; e^{-y_i}}{\Gamma (\alpha_i)}, \!

and then set

:x_i = \frac{y_i}{\sum_{j=1}^K y_j}.

The joint distribution of \{y_i\} is given by:

: e^{-\sum_{i=1}^{K} y_i} \prod_{i=1}^{K} \frac{y_i^{\alpha_i-1}}{\Gamma(\alpha_i)}

Next, one uses a change of variables, parametrising \{y_i\} in terms of y_1, y_2, \ldots, y_{K-1} and \sum_{i=1}^{K} y_i, and performs a change of variables from y \to x such that

: x_K = \sum_{i=1}^{K} y_i, \quad x_1 = \frac{y_1}{x_K}, \quad x_2 = \frac{y_2}{x_K}, \quad \ldots, \quad x_{K-1} = \frac{y_{K-1}}{x_K}

One must then use the change of variables formula, P(x) = P(y(x))\bigg|\frac{\partial y}{\partial x}\bigg|, in which \bigg|\frac{\partial y}{\partial x}\bigg| is the transformation Jacobian. Writing y explicitly as a function of x, one obtains

: y_1 = x_1 x_K, \quad y_2 = x_2 x_K, \quad \ldots, \quad y_{K-1} = x_{K-1} x_K, \quad y_K = x_K \left(1-\sum_{i=1}^{K-1} x_i\right)

The Jacobian now looks like

:\begin{vmatrix} x_K & 0 & \ldots & x_1 \\ 0 & x_K & \ldots & x_2 \\ \vdots & \vdots & \ddots & \vdots \\ -x_K & -x_K & \ldots & 1-\sum_{i=1}^{K-1} x_i \end{vmatrix}

The determinant can be evaluated by noting that it remains unchanged if multiples of a row are added to another row, and adding each of the first ''K'' − 1 rows to the bottom row to obtain

:\begin{vmatrix} x_K & 0 & \ldots & x_1 \\ 0 & x_K & \ldots & x_2 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & 1 \end{vmatrix}

which can be expanded about the bottom row to obtain x_K^{K-1}. Substituting for x in the joint pdf and including the Jacobian, one obtains:

:\frac{\prod_{i=1}^{K-1} x_i^{\alpha_i-1} \left(1-\sum_{i=1}^{K-1} x_i\right)^{\alpha_K-1}}{\prod_{i=1}^K \Gamma(\alpha_i)} x_K^{\sum_{i=1}^K \alpha_i - 1} e^{-x_K}

Each of the variables 0 \leq x_1, x_2, \ldots, x_{K-1} \leq 1 and likewise 0 \leq \sum_{i=1}^{K-1} x_i \leq 1. Finally, integrate out the extra degree of freedom x_K and one obtains:

: x_1, x_2, \ldots, x_{K-1} \sim \frac{\Gamma\left(\sum_{i=1}^K \alpha_i\right)}{\prod_{i=1}^K \Gamma(\alpha_i)} \prod_{i=1}^{K-1} x_i^{\alpha_i-1} \left(1-\sum_{i=1}^{K-1} x_i\right)^{\alpha_K-1}

which is equivalent to

: \frac{1}{\mathrm{B}(\boldsymbol\alpha)} \prod_{i=1}^{K} x_i^{\alpha_i-1}

with support \sum_{i=1}^{K} x_i = 1.

Below is example Python code to draw the sample:

import random

params = [a1, a2, ..., ak]  # the Dirichlet parameters alpha_1, ..., alpha_K
sample = [random.gammavariate(a, 1) for a in params]
total = sum(sample)
sample = [v / total for v in sample]

This formulation is correct regardless of how the gamma distributions are parameterized (shape/scale vs. shape/rate) because they are equivalent when scale and rate equal 1.0.


From marginal beta distributions

A less efficient algorithm relies on the univariate marginal and conditional distributions being beta and proceeds as follows. Simulate x_1 from

: \textrm{Beta}\left(\alpha_1, \sum_{i=2}^K \alpha_i \right)

Then simulate x_2, \ldots, x_{K-1} in order, as follows. For j=2, \ldots, K-1, simulate \phi_j from

:\textrm{Beta} \left(\alpha_j, \sum_{i=j+1}^K \alpha_i \right ),

and let

:x_j = \left(1-\sum_{i=1}^{j-1} x_i \right )\phi_j.

Finally, set

:x_K = 1-\sum_{i=1}^{K-1} x_i.

This iterative procedure corresponds closely to the "string cutting" intuition described above. Below is example Python code to draw the sample:

import random

params = [a1, a2, ..., ak]  # the Dirichlet parameters alpha_1, ..., alpha_K
xs = [random.betavariate(params[0], sum(params[1:]))]
for j in range(1, len(params) - 1):
    phi = random.betavariate(params[j], sum(params[j + 1:]))
    xs.append((1 - sum(xs)) * phi)
xs.append(1 - sum(xs))


See also

* Generalized Dirichlet distribution
* Grouped Dirichlet distribution
* Inverted Dirichlet distribution
* Latent Dirichlet allocation
* Dirichlet process
* Matrix variate Dirichlet distribution


References


External links

* Dirichlet Distribution
* How to estimate the parameters of the compound Dirichlet distribution (Pólya distribution) using expectation-maximization (EM)
* Dirichlet Random Measures, Method of Construction via Compound Poisson Random Variables, and Exchangeability Properties of the resulting Gamma Distribution


* R package that contains functions for simulating parameters of the Dirichlet distribution.