Dirichlet distribution

In probability and statistics, the Dirichlet distribution (after Peter Gustav Lejeune Dirichlet), often denoted \operatorname{Dir}(\boldsymbol\alpha), is a family of continuous multivariate probability distributions parameterized by a vector \boldsymbol\alpha of positive reals. It is a multivariate generalization of the beta distribution, (Chapter 49: Dirichlet and Inverted Dirichlet Distributions) hence its alternative name of multivariate beta distribution (MBD). Dirichlet distributions are commonly used as prior distributions in Bayesian statistics, and in fact, the Dirichlet distribution is the conjugate prior of the categorical distribution and multinomial distribution. The infinite-dimensional generalization of the Dirichlet distribution is the ''Dirichlet process''.


Definitions


Probability density function

The Dirichlet distribution of order K \geq 2 with parameters \alpha_1, \ldots, \alpha_K > 0 has a probability density function with respect to Lebesgue measure on the Euclidean space \mathbb{R}^{K-1} given by

f\left(x_1,\ldots, x_K; \alpha_1,\ldots, \alpha_K \right) = \frac{1}{\mathrm{B}(\boldsymbol\alpha)} \prod_{i=1}^K x_i^{\alpha_i - 1}

where \{x_i\}_{i=1}^{K} belong to the standard K-1 simplex, or in other words:

\sum_{i=1}^{K} x_i = 1 \mbox{ and } x_i \in \left[0,1\right] \mbox{ for all } i \in \{1,\ldots,K\}\,.

The normalizing constant is the multivariate beta function, which can be expressed in terms of the gamma function:

\mathrm{B}(\boldsymbol\alpha) = \frac{\prod_{i=1}^K \Gamma(\alpha_i)}{\Gamma\left(\sum_{i=1}^K \alpha_i\right)},\qquad\boldsymbol\alpha=(\alpha_1,\ldots,\alpha_K).
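
To make the definition concrete, here is a minimal sketch of evaluating the log-density in Python, using only the standard library; the function name dirichlet_logpdf is ours, for illustration:

import math

def dirichlet_logpdf(x, alpha):
    """Log-density of Dir(alpha) at a point x on the standard simplex."""
    if abs(sum(x) - 1.0) > 1e-9:
        raise ValueError("x must lie on the standard simplex")
    # log B(alpha) = sum(log Gamma(alpha_i)) - log Gamma(sum(alpha_i))
    log_beta = sum(math.lgamma(a) for a in alpha) - math.lgamma(sum(alpha))
    return sum((a - 1.0) * math.log(xi) for a, xi in zip(alpha, x)) - log_beta

print(dirichlet_logpdf([0.2, 0.3, 0.5], [1.0, 1.0, 1.0]))  # log 2 = 0.693..., the flat Dirichlet for K = 3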


Support

The support of the Dirichlet distribution is the set of K-dimensional vectors \boldsymbol x whose entries are real numbers in the interval [0,1] such that \|\boldsymbol x\|_1 = 1, i.e. the sum of the coordinates is equal to 1. These can be viewed as the probabilities of a K-way categorical event. Another way to express this is that the domain of the Dirichlet distribution is itself a set of probability distributions, specifically the set of K-dimensional discrete distributions. The technical term for the set of points in the support of a K-dimensional Dirichlet distribution is the open standard (K-1)-simplex, which is a generalization of a triangle, embedded in the next-higher dimension. For example, with K = 3, the support is an equilateral triangle embedded in a downward-angle fashion in three-dimensional space, with vertices at (1,0,0), (0,1,0) and (0,0,1), i.e. touching each of the coordinate axes at a point 1 unit away from the origin.


Special cases

A common special case is the symmetric Dirichlet distribution, where all of the elements making up the parameter vector \boldsymbol\alpha have the same value. The symmetric case might be useful, for example, when a Dirichlet prior over components is called for, but there is no prior knowledge favoring one component over another. Since all elements of the parameter vector have the same value, the symmetric Dirichlet distribution can be parametrized by a single scalar value \alpha, called the concentration parameter. In terms of \alpha, the density function has the form

f(x_1,\dots, x_K; \alpha) = \frac{\Gamma(\alpha K)}{\Gamma(\alpha)^K} \prod_{i=1}^K x_i^{\alpha - 1}.

When \alpha = 1, the symmetric Dirichlet distribution is equivalent to a uniform distribution over the open standard (K-1)-simplex, i.e. it is uniform over all points in its support. This particular distribution is known as the flat Dirichlet distribution. Values of the concentration parameter above 1 prefer variates that are dense, evenly distributed distributions, i.e. all the values within a single sample are similar to each other. Values of the concentration parameter below 1 prefer sparse distributions, i.e. most of the values within a single sample will be close to 0, and the vast majority of the mass will be concentrated in a few of the values. When \alpha = 1/2, the distribution is the same as would be obtained by choosing a point uniformly at random from the surface of a K-dimensional unit hypersphere and squaring each coordinate. The symmetric Dirichlet distribution with \alpha = 1/2 is the Jeffreys prior for the parameters of the categorical and multinomial distributions.

More generally, the parameter vector is sometimes written as the product \alpha \boldsymbol n of a (scalar) concentration parameter \alpha and a (vector) base measure \boldsymbol n=(n_1,\dots,n_K) where \boldsymbol n lies within the (K-1)-simplex (i.e.: its coordinates n_i sum to one). The concentration parameter in this case is larger by a factor of K than the concentration parameter for a symmetric Dirichlet distribution described above. This construction ties in with the concept of a base measure when discussing Dirichlet processes and is often used in the topic modelling literature.

: If we define the concentration parameter as the sum of the Dirichlet parameters for each dimension, the Dirichlet distribution with concentration parameter K, the dimension of the distribution, is the uniform distribution on the (K-1)-simplex.
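
As a rough illustration of sparse versus dense behaviour, the following sketch (assuming NumPy is installed; the variable names are ours) draws symmetric Dirichlet samples at two concentrations:

import numpy as np

rng = np.random.default_rng(0)
K = 10
for alpha in (0.1, 10.0):
    sample = rng.dirichlet([alpha] * K)
    # alpha << 1: most mass piles into a few components; alpha >> 1: nearly uniform
    print(f"alpha={alpha:5.1f}  max component={sample.max():.3f}")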


Properties


Moments

Let X = (X_1, \ldots, X_K)\sim\operatorname{Dir}(\boldsymbol\alpha) and let \alpha_0 = \sum_{i=1}^K \alpha_i. Then

\operatorname{E}[X_i] = \frac{\alpha_i}{\alpha_0}, \qquad \operatorname{Var}[X_i] = \frac{\alpha_i (\alpha_0 - \alpha_i)}{\alpha_0^2 (\alpha_0+1)}.

Furthermore, if i\neq j,

\operatorname{Cov}[X_i,X_j] = \frac{-\alpha_i \alpha_j}{\alpha_0^2 (\alpha_0+1)}.

The covariance matrix is singular.

More generally, moments of Dirichlet-distributed random variables can be expressed in the following way. For \boldsymbol t=(t_1,\dotsc,t_K) \in \mathbb{R}^K, denote by \boldsymbol t^{\circ i} = (t_1^i,\dotsc,t_K^i) its i-th Hadamard power. Then,

\operatorname{E}\left[ (\boldsymbol t \cdot \boldsymbol X)^n \right] = \frac{n!}{(\alpha_0)_n} \sum \prod_{i=1}^K \frac{(\alpha_i)_{k_i}\, t_i^{k_i}}{k_i!} = \frac{n!}{(\alpha_0)_n} Z_n(\boldsymbol t^{\circ 1} \cdot \boldsymbol\alpha, \cdots, \boldsymbol t^{\circ n} \cdot \boldsymbol\alpha),

where the sum is over non-negative integers k_1,\ldots,k_K with n=k_1+\cdots+k_K, (\cdot)_n denotes the rising factorial, and Z_n is the cycle index polynomial of the symmetric group of degree n. We have the special case

\operatorname{E}\left[ \boldsymbol t \cdot \boldsymbol X \right] = \frac{\boldsymbol t \cdot \boldsymbol\alpha}{\alpha_0}.

The multivariate analogue \operatorname{E}\left[ (\boldsymbol t_1 \cdot \boldsymbol X)^{n_1} \cdots (\boldsymbol t_q \cdot \boldsymbol X)^{n_q} \right] for vectors \boldsymbol t_1, \dotsc, \boldsymbol t_q \in \mathbb{R}^K can be expressed in terms of a color pattern of the exponents n_1, \dotsc, n_q in the sense of the Pólya enumeration theorem. Particular cases include the simple computation

\operatorname{E}\left[ \prod_{i=1}^K X_i^{\beta_i} \right] = \frac{\mathrm{B}(\boldsymbol\alpha + \boldsymbol\beta)}{\mathrm{B}(\boldsymbol\alpha)} = \frac{\Gamma(\alpha_0)}{\Gamma\left(\alpha_0 + \sum_{i=1}^K \beta_i\right)}\times\prod_{i=1}^K \frac{\Gamma(\alpha_i+\beta_i)}{\Gamma(\alpha_i)}.
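
The first and second moments are easy to sanity-check by simulation; the following sketch (assuming NumPy) compares the sample mean and covariance against the closed forms above:

import numpy as np

alpha = np.array([2.0, 5.0, 3.0])
a0 = alpha.sum()
rng = np.random.default_rng(1)
X = rng.dirichlet(alpha, size=200_000)

mean_exact = alpha / a0
# Cov[X_i, X_j] = (delta_ij * alpha_i * a0 - alpha_i * alpha_j) / (a0^2 (a0 + 1))
cov_exact = (np.diag(alpha) * a0 - np.outer(alpha, alpha)) / (a0**2 * (a0 + 1))

print(np.abs(X.mean(axis=0) - mean_exact).max())  # ~1e-4, Monte Carlo noise
print(np.abs(np.cov(X.T) - cov_exact).max())      # ~1e-4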


Mode

The mode of the distribution is the vector (x_1, \ldots, x_K) with

x_i = \frac{\alpha_i - 1}{\alpha_0 - K}, \qquad \alpha_i > 1.


Marginal distributions

The marginal distributions are beta distributions:

X_i \sim \operatorname{Beta}(\alpha_i, \alpha_0 - \alpha_i).

Also see below.


Conjugate to categorical or multinomial

The Dirichlet distribution is the conjugate prior distribution of the categorical distribution (a generic discrete probability distribution with a given number of possible outcomes) and multinomial distribution (the distribution over observed counts of each possible category in a set of categorically distributed observations). This means that if a data point has either a categorical or multinomial distribution, and the prior distribution of the distribution's parameter (the vector of probabilities that generates the data point) is distributed as a Dirichlet, then the posterior distribution of the parameter is also a Dirichlet. Intuitively, in such a case, starting from what we know about the parameter prior to observing the data point, we then can update our knowledge based on the data point and end up with a new distribution of the same form as the old one. This means that we can successively update our knowledge of a parameter by incorporating new observations one at a time, without running into mathematical difficulties.

Formally, this can be expressed as follows. Given a model

\boldsymbol\alpha = \left(\alpha_1, \ldots, \alpha_K \right) = \text{concentration hyperparameter}
\mathbf p\mid\boldsymbol\alpha = \left(p_1, \ldots, p_K \right ) \sim \operatorname{Dir}(K, \boldsymbol\alpha)
\mathbb X\mid\mathbf p = \left(\mathbf x_1, \ldots, \mathbf x_N \right ) \sim \operatorname{Cat}(K,\mathbf p)

then the following holds:

\mathbf c = \left(c_1, \ldots, c_K \right ) = \text{number of occurrences of category } i
\mathbf p \mid \mathbb X,\boldsymbol\alpha \sim \operatorname{Dir}(K,\mathbf c+\boldsymbol\alpha) = \operatorname{Dir}\left (K,c_1+\alpha_1,\ldots,c_K+\alpha_K \right)

This relationship is used in Bayesian statistics to estimate the underlying parameter \mathbf p of a categorical distribution given a collection of N samples. Intuitively, we can view the hyperprior vector \boldsymbol\alpha as pseudocounts, i.e. as representing the number of observations in each category that we have already seen. Then we simply add in the counts for all the new observations (the vector \mathbf c) in order to derive the posterior distribution. In Bayesian mixture models and other hierarchical Bayesian models with mixture components, Dirichlet distributions are commonly used as the prior distributions for the categorical variables appearing in the models. See the section on applications below for more information.
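
As a minimal numeric sketch of the pseudocount update (plain Python; the function name and example numbers are ours):

from collections import Counter

def dirichlet_posterior(alpha, observations):
    """Posterior Dir(c + alpha) parameters after categorical observations.

    alpha: prior parameters, one per category (pseudocounts).
    observations: iterable of category indices in range(len(alpha)).
    """
    counts = Counter(observations)
    return [a + counts.get(i, 0) for i, a in enumerate(alpha)]

# Prior Dir(1, 1, 1); observe categories 0, 0, 2 -> posterior Dir(3, 1, 2)
print(dirichlet_posterior([1, 1, 1], [0, 0, 2]))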


Relation to Dirichlet-multinomial distribution

In a model where a Dirichlet prior distribution is placed over a set of categorical-valued observations, the marginal joint distribution of the observations (i.e. the joint distribution of the observations, with the prior parameter marginalized out) is a Dirichlet-multinomial distribution. This distribution plays an important role in hierarchical Bayesian models, because when doing inference over such models using methods such as Gibbs sampling or variational Bayes, Dirichlet prior distributions are often marginalized out. See the article on this distribution for more details.


Entropy

If \boldsymbol X is a \operatorname{Dir}(\boldsymbol\alpha) random variable, the differential entropy of \boldsymbol X (in nat units) is

h(\boldsymbol X) = \operatorname{E}[-\ln f(\boldsymbol X)] = \ln \operatorname{B}(\boldsymbol\alpha) + (\alpha_0-K)\psi(\alpha_0) - \sum_{j=1}^K (\alpha_j-1)\psi(\alpha_j)

where \psi is the digamma function.

The following formula for \operatorname{E}[\ln(X_i)] can be used to derive the differential entropy above. Since the functions \ln(X_i) are the sufficient statistics of the Dirichlet distribution, the exponential family differential identities can be used to get an analytic expression for the expectation of \ln(X_i) (see equation (2.62) in ) and its associated covariance matrix:

\operatorname{E}[\ln(X_i)] = \psi(\alpha_i)-\psi(\alpha_0)

and

\operatorname{Cov}[\ln(X_i),\ln(X_j)] = \psi'(\alpha_i)\, \delta_{ij} - \psi'(\alpha_0)

where \psi is the digamma function, \psi' is the trigamma function, and \delta_{ij} is the Kronecker delta.

The spectrum of Rényi information for values other than \lambda = 1 is given by

F_R(\lambda) = (1-\lambda)^{-1} \left( - \lambda \log \mathrm{B}(\boldsymbol\alpha) + \sum_{i=1}^K \log \Gamma(\lambda(\alpha_i - 1) + 1) - \log \Gamma(\lambda (\alpha_0 - K) + K ) \right)

and the information entropy is the limit as \lambda goes to 1.

Another related interesting measure is the entropy of a discrete categorical (one-of-K binary) vector \boldsymbol Z with probability-mass distribution \boldsymbol X, i.e., P(Z_i=1, Z_{j \neq i} = 0 \mid \boldsymbol X) = X_i. The conditional information entropy of \boldsymbol Z, given \boldsymbol X, is

S(\boldsymbol X) = H(\boldsymbol Z \mid \boldsymbol X) = \operatorname{E}_{\boldsymbol Z}[-\log P(\boldsymbol Z \mid \boldsymbol X)] = \sum_{i=1}^K - X_i \log X_i

This function of \boldsymbol X is a scalar random variable. If \boldsymbol X has a symmetric Dirichlet distribution with all \alpha_i = \alpha, the expected value of the entropy (in nat units) is

\operatorname{E}[S(\boldsymbol X)] = \sum_{i=1}^K \operatorname{E}[- X_i \ln X_i] = \psi(K\alpha + 1) - \psi(\alpha + 1)
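
The identity \operatorname{E}[\ln X_i] = \psi(\alpha_i) - \psi(\alpha_0) is straightforward to verify numerically (a sketch assuming SciPy is available for the digamma function):

import numpy as np
from scipy.special import digamma

alpha = np.array([2.0, 5.0, 3.0])
rng = np.random.default_rng(2)
X = rng.dirichlet(alpha, size=200_000)

exact = digamma(alpha) - digamma(alpha.sum())
print(np.log(X).mean(axis=0))  # Monte Carlo estimate
print(exact)                   # closed form; should agree to ~3 decimals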


Aggregation

If X = (X_1, \ldots, X_K)\sim\operatorname{Dir}(\alpha_1,\ldots,\alpha_K) then, if the random variables with subscripts i and j are dropped from the vector and replaced by their sum,

X' = (X_1, \ldots, X_i + X_j, \ldots, X_K)\sim\operatorname{Dir}(\alpha_1, \ldots, \alpha_i + \alpha_j, \ldots, \alpha_K).

This aggregation property may be used to derive the marginal distribution of X_i mentioned above.
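
A quick simulation makes the property plausible (assuming NumPy; a moment check, not a proof):

import numpy as np

rng = np.random.default_rng(3)
X = rng.dirichlet([1.0, 2.0, 3.0, 4.0], size=100_000)
merged = np.column_stack([X[:, 0], X[:, 1] + X[:, 2], X[:, 3]])  # merge components 2 and 3
direct = rng.dirichlet([1.0, 5.0, 4.0], size=100_000)            # Dir with the summed parameter

print(merged.mean(axis=0), direct.mean(axis=0))  # both ~ (0.1, 0.5, 0.4)
print(merged.var(axis=0), direct.var(axis=0))    # variances also agree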


Neutrality

If X = (X_1, \ldots, X_K)\sim\operatorname{Dir}(\boldsymbol\alpha), then the vector X is said to be ''neutral'' in the sense that X_1 is independent of X^{(1)} where

X^{(1)}=\left(\frac{X_2}{1-X_1},\frac{X_3}{1-X_1},\ldots,\frac{X_K}{1-X_1} \right),

and similarly for removing any of X_2,\ldots,X_{K-1}. Observe that any permutation of X is also neutral (a property not possessed by samples drawn from a generalized Dirichlet distribution).

Combining this with the property of aggregation it follows that X_1 + \cdots + X_j is independent of

\left(\frac{X_{j+1}}{1-X_1-\cdots-X_j},\frac{X_{j+2}}{1-X_1-\cdots-X_j},\ldots,\frac{X_K}{1-X_1-\cdots-X_j} \right).

In fact it is true, further, for the Dirichlet distribution, that for 3\le j\le K-1, the pair \left(X_1+\cdots +X_{j-1}, X_j+\cdots +X_K\right), and the two vectors

\left(\frac{X_1}{X_1+\cdots+X_{j-1}},\frac{X_2}{X_1+\cdots+X_{j-1}},\ldots,\frac{X_{j-1}}{X_1+\cdots+X_{j-1}} \right) \quad\text{and}\quad \left(\frac{X_j}{X_j+\cdots+X_K},\frac{X_{j+1}}{X_j+\cdots+X_K},\ldots,\frac{X_K}{X_j+\cdots+X_K} \right),

viewed as a triple of normalised random vectors, are mutually independent. The analogous result is true for partition of the indices \{1,2,\ldots,K\} into any other pair of non-singleton subsets.


Characteristic function

The characteristic function of the Dirichlet distribution is a confluent form of the Lauricella hypergeometric series. It is given by Phillips as

CF\left(s_1,\ldots,s_{K-1}\right) = \operatorname{E}\left(e^{i(s_1 X_1+\cdots+s_{K-1}X_{K-1})} \right) = \Psi^{[K-1]} (\alpha_1,\ldots,\alpha_{K-1};\alpha_0;is_1,\ldots, is_{K-1})

where

\Psi^{[m]} (a_1,\ldots,a_m;c;z_1,\ldots z_m) = \sum\frac{(a_1)_{k_1}\cdots(a_m)_{k_m}\, z_1^{k_1}\cdots z_m^{k_m}}{(c)_k\, k_1!\cdots k_m!}.

The sum is over non-negative integers k_1,\ldots,k_m and k=k_1+\cdots+k_m. Phillips goes on to state that this form is "inconvenient for numerical calculation" and gives an alternative in terms of a complex path integral:

\Psi^{[m]} = \frac{\Gamma(c)}{2\pi i}\int_L e^t\,t^{a_1+\cdots+a_m-c}\,\prod_{j=1}^m (t-z_j)^{-a_j} \, dt

where L denotes any path in the complex plane originating at -\infty, encircling in the positive direction all the singularities of the integrand and returning to -\infty.


Inequality

The probability density function f\left(x_1,\ldots, x_K; \alpha_1,\ldots, \alpha_K \right) plays a key role in a multifunctional inequality which implies various bounds for the Dirichlet distribution. Another inequality relates the moment-generating function of the Dirichlet distribution to the convex conjugate of the scaled reversed Kullback-Leibler divergence:

\log \operatorname{E}\left[\exp\left(\sum_{i=1}^K s_i X_i\right)\right] \leq \sup_{\boldsymbol p} \sum_{i=1}^K \left(p_i s_i - \alpha_i\log\left(\frac{\alpha_i}{\alpha_0 p_i} \right)\right),

where the supremum is taken over \boldsymbol p spanning the (K-1)-simplex.


Related distributions

When \boldsymbol X=(X_1, \ldots,X_K)\sim \operatorname{Dir}\left(\alpha_1, \ldots, \alpha_K \right), the marginal distribution of each component is X_i \sim \operatorname{Beta}(\alpha_i, \alpha_0-\alpha_i), a beta distribution. In particular, if K = 2 then X_1 \sim \operatorname{Beta}(\alpha_1, \alpha_2) is equivalent to \boldsymbol X=(X_1,1-X_1) \sim \operatorname{Dir}\left(\alpha_1, \alpha_2 \right).

For independently distributed gamma distributions:

Y_1 \sim \operatorname{Gamma}(\alpha_1, \theta), \ldots, Y_K \sim \operatorname{Gamma}(\alpha_K, \theta)

we have:

V=\sum_{i=1}^K Y_i\sim\operatorname{Gamma}\left(\alpha_0, \theta \right ),

X = (X_1, \ldots, X_K) = \left(\frac{Y_1}{V}, \ldots, \frac{Y_K}{V} \right)\sim \operatorname{Dir}\left (\alpha_1, \ldots, \alpha_K \right).

Although the X_i are not independent from one another, they can be seen to be generated from a set of K independent gamma random variables. Unfortunately, since the sum V is lost in forming X (in fact it can be shown that V is stochastically independent of X), it is not possible to recover the original gamma random variables from these values alone. Nevertheless, because independent random variables are simpler to work with, this reparametrization can still be useful for proofs about properties of the Dirichlet distribution.


Conjugate prior of the Dirichlet distribution

Because the Dirichlet distribution is an exponential family distribution it has a conjugate prior. The conjugate prior is of the form:

\operatorname{CD}(\boldsymbol\alpha \mid \boldsymbol v,\eta) \propto \left(\frac{1}{\operatorname{B}(\boldsymbol\alpha)}\right)^\eta \exp\left(-\sum_k v_k \alpha_k\right).

Here \boldsymbol v is a K-dimensional real vector and \eta is a scalar parameter. The domain of (\boldsymbol v,\eta) is restricted to the set of parameters for which the above unnormalized density function can be normalized. The (necessary and sufficient) condition is:

\forall k\;\;v_k>0 \quad\text{and}\quad \eta>-1 \quad\text{and}\quad \left(\eta\leq0 \quad\text{or}\quad \sum_k \exp\left(-\frac{v_k}{\eta}\right) < 1\right)

The conjugation property can be expressed as: if [''prior'': \boldsymbol\alpha\sim\operatorname{CD}(\cdot \mid \boldsymbol v,\eta)] and [''observation'': \boldsymbol x\mid\boldsymbol\alpha\sim\operatorname{Dirichlet}(\cdot \mid \boldsymbol\alpha)] then [''posterior'': \boldsymbol\alpha\mid\boldsymbol x\sim\operatorname{CD}(\cdot \mid \boldsymbol v-\log \boldsymbol x, \eta+1)].

In the published literature there is no practical algorithm to efficiently generate samples from \operatorname{CD}(\boldsymbol\alpha \mid \boldsymbol v,\eta).
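
A minimal sketch of the unnormalized log-density and the posterior update rule above (plain Python with math.lgamma; the function names are ours):

import math

def log_multivariate_beta(alpha):
    return sum(math.lgamma(a) for a in alpha) - math.lgamma(sum(alpha))

def cd_unnormalized_logpdf(alpha, v, eta):
    """Unnormalized log CD(alpha | v, eta) = -eta * log B(alpha) - sum_k v_k alpha_k."""
    return -eta * log_multivariate_beta(alpha) - sum(vk * ak for vk, ak in zip(v, alpha))

def cd_posterior_params(v, eta, x):
    """After observing x ~ Dirichlet(alpha): v <- v - log x, eta <- eta + 1."""
    return [vk - math.log(xk) for vk, xk in zip(v, x)], eta + 1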


Generalization by scaling and translation of log-probabilities

As noted above, Dirichlet variates can be generated by normalizing independent gamma variates. If instead one normalizes generalized gamma variates, one obtains variates from the simplicial generalized beta distribution (SGB). On the other hand, SGB variates can also be obtained by applying the softmax function to scaled and translated logarithms of Dirichlet variates. Specifically, let \mathbf x = (x_1, \ldots, x_K)\sim\operatorname{Dir}(\boldsymbol\alpha) and let \mathbf y = (y_1, \ldots, y_K), where applying the logarithm elementwise:

\mathbf y = \operatorname{softmax}(a^{-1}\log\mathbf x + \log\mathbf b)\;\iff\;\mathbf x = \operatorname{softmax}(a\log\mathbf y - a\log\mathbf b)

or

y_k = \frac{b_k x_k^{1/a}}{\sum_{i=1}^K b_i x_i^{1/a}}\; \iff\; x_k = \frac{(y_k/b_k)^a}{\sum_{i=1}^K (y_i/b_i)^a}

where a>0 and \mathbf b = (b_1, \ldots, b_K), with all b_k>0, then \mathbf y\sim\operatorname{SGB}(a, \mathbf b, \boldsymbol\alpha). The SGB density function can be derived by noting that the transformation \mathbf x\mapsto\mathbf y, which is a bijection from the simplex to itself, induces a differential volume change factor of:

R(\mathbf y, a,\mathbf b) = a^{1-K}\prod_{k=1}^K\frac{y_k}{x_k}

where it is understood that \mathbf x is recovered as a function of \mathbf y, as shown above. This facilitates writing the SGB density in terms of the Dirichlet density, as:

f_{\operatorname{SGB}}(\mathbf y\mid a, \mathbf b, \boldsymbol\alpha) = \frac{f_{\operatorname{Dir}}(\mathbf x\mid\boldsymbol\alpha)}{R(\mathbf y, a,\mathbf b)}

This generalization of the Dirichlet density, via a change of variables, is closely related to a normalizing flow, while it must be noted that the differential volume change is not given by the Jacobian determinant of \mathbf x\mapsto\mathbf y:\mathbb R^K\to\mathbb R^K, which is zero, but by the Jacobian determinant of (x_1,\ldots,x_{K-1})\mapsto (y_1,\ldots,y_{K-1}), as explained in more detail at Normalizing flow § Simplex flow.

For further insight into the interaction between the Dirichlet shape parameters \boldsymbol\alpha and the transformation parameters a, \mathbf b, it may be helpful to consider the logarithmic marginals, \log\frac{x_k}{1-x_k}, which follow the logistic-beta distribution, B_\sigma(\alpha_k,\sum_{i\neq k} \alpha_i). See in particular the sections on tail behaviour and generalization with location and scale parameters.
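
A sketch of sampling SGB variates via this transformation (assuming NumPy; the parameter values are arbitrary illustrations):

import numpy as np

def sgb_sample(alpha, a, b, size, rng):
    """Draw SGB(a, b, alpha) variates by transforming Dirichlet variates."""
    x = rng.dirichlet(alpha, size=size)
    z = np.log(x) / a + np.log(b)          # scaled, translated log-probabilities
    z -= z.max(axis=1, keepdims=True)      # stabilize the softmax
    y = np.exp(z)
    return y / y.sum(axis=1, keepdims=True)

rng = np.random.default_rng(4)
y = sgb_sample([2.0, 3.0, 4.0], a=0.5, b=[1.0, 2.0, 1.0], size=5, rng=rng)
print(y)   # each row lies on the simplex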


Application

When b_1=b_2=\cdots=b_K, the transformation simplifies to \mathbf x\mapsto\operatorname{softmax}(a^{-1}\log\mathbf x), which is known as temperature scaling in machine learning, where it is used as a calibration transform for multiclass probabilistic classifiers. Traditionally the temperature parameter (a here) is learnt discriminatively by minimizing multiclass cross-entropy over a supervised calibration data set with known class labels. But the above PDF transformation mechanism can also be used to facilitate the design of generatively trained calibration models with a temperature scaling component.


Occurrence and applications


Bayesian models

Dirichlet distributions are most commonly used as the prior distribution of categorical variables or multinomial variables in Bayesian mixture models and other hierarchical Bayesian models. (In many fields, such as in natural language processing, categorical variables are often imprecisely called "multinomial variables". Such a usage is unlikely to cause confusion, just as when Bernoulli distributions and binomial distributions are commonly conflated.)

Inference over hierarchical Bayesian models is often done using Gibbs sampling, and in such a case, instances of the Dirichlet distribution are typically marginalized out of the model by integrating out the Dirichlet random variable. This causes the various categorical variables drawn from the same Dirichlet random variable to become correlated, and the joint distribution over them assumes a Dirichlet-multinomial distribution, conditioned on the hyperparameters of the Dirichlet distribution (the concentration parameters). One of the reasons for doing this is that Gibbs sampling of the Dirichlet-multinomial distribution is extremely easy; see that article for more information.


Intuitive interpretations of the parameters


The concentration parameter

Dirichlet distributions are very often used as prior distributions in Bayesian inference. The simplest and perhaps most common type of Dirichlet prior is the symmetric Dirichlet distribution, where all parameters are equal. This corresponds to the case where you have no prior information to favor one component over any other. As described above, the single value \alpha to which all parameters are set is called the concentration parameter. If the sample space of the Dirichlet distribution is interpreted as a discrete probability distribution, then intuitively the concentration parameter can be thought of as determining how "concentrated" the probability mass of a sample from the Dirichlet distribution is likely to be. With a value much less than 1, the mass will be highly concentrated in a few components, and all the rest will have almost no mass; with a value much greater than 1, the mass will be dispersed almost equally among all the components. See the article on the concentration parameter for further discussion.


String cutting

One example use of the Dirichlet distribution is if one wanted to cut strings (each of initial length 1.0) into K pieces with different lengths, where each piece had a designated average length, but allowing some variation in the relative sizes of the pieces. Recall that \alpha_0 = \sum_{i=1}^K \alpha_i. The \alpha_i/\alpha_0 values specify the mean lengths of the cut pieces of string resulting from the distribution. The variance around this mean varies inversely with \alpha_0.
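
For instance, cutting strings into three pieces with designated mean lengths 0.5, 0.3 and 0.2 (a sketch assuming NumPy; the scale factor 10 is an arbitrary choice controlling the variance):

import numpy as np

rng = np.random.default_rng(5)
means = np.array([0.5, 0.3, 0.2])
alpha0 = 10.0                        # larger alpha0 -> pieces cluster tighter around the means
pieces = rng.dirichlet(alpha0 * means, size=100_000)

print(pieces.mean(axis=0))           # ~ [0.5, 0.3, 0.2]
print(pieces.std(axis=0))            # shrinks as alpha0 grows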


Pólya's urn

Consider an urn containing balls of K different colors. Initially, the urn contains \alpha_1 balls of color 1, \alpha_2 balls of color 2, and so on. Now perform N draws from the urn, where after each draw, the ball is placed back into the urn with an additional ball of the same color. In the limit as N approaches infinity, the proportions of different colored balls in the urn will be distributed as \operatorname{Dir}(\alpha_1,\ldots,\alpha_K).

For a formal proof, note that the proportions of the different colored balls form a bounded [0,1]^K-valued martingale, hence by the martingale convergence theorem, these proportions converge almost surely and in mean to a limiting random vector. To see that this limiting vector has the above Dirichlet distribution, check that all mixed moments agree.

Each draw from the urn modifies the probability of drawing a ball of any one color from the urn in the future. This modification diminishes with the number of draws, since the relative effect of adding a new ball to the urn diminishes as the urn accumulates increasing numbers of balls.
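
A small simulation of the urn (plain Python; the \alpha values here are illustrative integers):

import random

def polya_urn(alpha, n_draws, rng=random):
    """Simulate n_draws Polya-urn draws; return the final color proportions."""
    counts = list(alpha)                  # current ball counts, one per color
    for _ in range(n_draws):
        total = sum(counts)
        r = rng.uniform(0, total)
        color, acc = 0, counts[0]
        while r > acc:                    # pick a ball with probability counts[i]/total
            color += 1
            acc += counts[color]
        counts[color] += 1                # return it plus one more of the same color
    total = sum(counts)
    return [c / total for c in counts]

print(polya_urn([1, 2, 3], 10_000))       # one draw from (approximately) Dir(1, 2, 3)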


Random variate generation


From gamma distribution

With a source of gamma-distributed random variates, one can easily sample a random vector x=(x_1, \ldots, x_K) from the K-dimensional Dirichlet distribution with parameters (\alpha_1, \ldots, \alpha_K). First, draw K independent random samples y_1, \ldots, y_K from gamma distributions, each with density

\operatorname{Gamma}(\alpha_i, 1) = \frac{y_i^{\alpha_i-1}\, e^{-y_i}}{\Gamma(\alpha_i)}, \!

and then set

x_i = \frac{y_i}{\sum_{j=1}^K y_j}.

The joint distribution of the independently sampled gamma variates, \{y_i\}, is given by the product:

e^{-\sum_{i=1}^K y_i} \prod_{i=1}^{K} \frac{y_i^{\alpha_i-1}}{\Gamma(\alpha_i)}

Next, one uses a change of variables, parametrising \{y_i\} in terms of y_1, y_2, \ldots, y_{K-1} and \sum_{i=1}^K y_i, and performs a change of variables from y \to x such that

\bar x = \sum_{i=1}^K y_i,\quad x_1 = \frac{y_1}{\bar x},\quad x_2 = \frac{y_2}{\bar x},\quad \ldots,\quad x_{K-1} = \frac{y_{K-1}}{\bar x}.

Each of the variables satisfies 0 \leq x_1, x_2, \ldots, x_{K-1} \leq 1 and likewise 0 \leq \sum_{i=1}^{K-1} x_i \leq 1. One must then use the change of variables formula, P(x) = P(y(x))\left|\frac{\partial y}{\partial x}\right|, in which \left|\frac{\partial y}{\partial x}\right| is the transformation Jacobian. Writing y explicitly as a function of x, one obtains

y_1 = \bar x x_1,\quad y_2 = \bar x x_2,\quad \ldots,\quad y_{K-1} = \bar x x_{K-1},\quad y_K = \bar x\left(1-\sum_{i=1}^{K-1} x_i\right).

The Jacobian now looks like

\begin{vmatrix}\bar x & 0 & \ldots & x_1 \\ 0 & \bar x & \ldots & x_2 \\ \vdots & \vdots & \ddots & \vdots \\ -\bar x & -\bar x & \ldots & 1-\sum_{i=1}^{K-1} x_i \end{vmatrix}

The determinant can be evaluated by noting that it remains unchanged if multiples of a row are added to another row, and adding each of the first K-1 rows to the bottom row to obtain

\begin{vmatrix}\bar x & 0 & \ldots & x_1 \\ 0 & \bar x & \ldots & x_2 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & 1 \end{vmatrix}

which can be expanded about the bottom row to obtain the determinant value \bar x^{K-1}. Substituting for x in the joint pdf and including the Jacobian determinant, one obtains:

\begin{align} &\frac{\prod_{i=1}^{K-1}(\bar x x_i)^{\alpha_i-1}\,\left(\bar x\left(1-\sum_{i=1}^{K-1} x_i\right)\right)^{\alpha_K-1}}{\prod_{i=1}^K \Gamma(\alpha_i)}\,\bar x^{K-1} e^{-\bar x} \\ =\ &\frac{\Gamma(\bar\alpha)\,\prod_{i=1}^{K-1} x_i^{\alpha_i-1}\left(1-\sum_{i=1}^{K-1} x_i\right)^{\alpha_K-1}}{\prod_{i=1}^K \Gamma(\alpha_i)}\times\frac{\bar x^{\bar\alpha-1} e^{-\bar x}}{\Gamma(\bar\alpha)} \end{align}

where \bar\alpha=\sum_{i=1}^K\alpha_i. The right-hand side can be recognized as the product of a Dirichlet pdf for the x_i and a gamma pdf for \bar x. The product form shows the Dirichlet and gamma variables are independent, so the latter can be integrated out by simply omitting it, to obtain:

x_1, x_2, \ldots, x_{K-1} \sim \frac{\Gamma(\bar\alpha)\,\prod_{i=1}^{K-1} x_i^{\alpha_i-1}\left(1-\sum_{i=1}^{K-1} x_i\right)^{\alpha_K-1}}{\prod_{i=1}^K \Gamma(\alpha_i)}

which is equivalent to

\frac{\Gamma(\bar\alpha)\,\prod_{i=1}^{K} x_i^{\alpha_i-1}}{\prod_{i=1}^K \Gamma(\alpha_i)} \quad\text{with support}\quad \sum_{i=1}^{K} x_i = 1.

Below is example Python code to draw the sample:

params = [a1, a2, ..., ak]
sample = [random.gammavariate(a, 1) for a in params]
sample = [v / sum(sample) for v in sample]

This formulation is correct regardless of how the gamma distributions are parameterized (shape/scale vs. shape/rate) because they are equivalent when scale and rate equal 1.0.
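
In practice one would usually call a library routine directly; for example, NumPy exposes this gamma-based sampler:

import numpy as np

rng = np.random.default_rng(6)
print(rng.dirichlet([1.0, 2.0, 3.0]))           # one sample
print(rng.dirichlet([1.0, 2.0, 3.0], size=4))   # four samples, one per row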


From marginal beta distributions

A less efficient algorithm relies on the univariate marginal and conditional distributions being beta and proceeds as follows. Simulate x_1 from

\operatorname{Beta}\left(\alpha_1, \sum_{i=2}^K \alpha_i \right).

Then simulate x_2, \ldots, x_{K-1} in order, as follows. For j=2, \ldots, K-1, simulate \phi_j from

\operatorname{Beta}\left(\alpha_j, \sum_{i=j+1}^K \alpha_i \right ),

and let

x_j = \left(1-\sum_{i=1}^{j-1} x_i \right)\phi_j.

Finally, set

x_K = 1-\sum_{i=1}^{K-1} x_i.

This iterative procedure corresponds closely to the "string cutting" intuition described above. Below is example Python code to draw the sample:

params = [a1, a2, ..., ak]
xs = [random.betavariate(params[0], sum(params[1:]))]
for j in range(1, len(params) - 1):
    phi = random.betavariate(params[j], sum(params[j + 1:]))
    xs.append((1 - sum(xs)) * phi)
xs.append(1 - sum(xs))


When each alpha is 1

When \alpha_1 = \cdots = \alpha_K = 1, a sample from the distribution can be found by randomly drawing a set of K-1 values independently and uniformly from the interval [0,1], adding the values 0 and 1 to the set to make it have K+1 values, sorting the set, and computing the difference between each pair of order-adjacent values, to give x_1, \ldots, x_K.
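
A sketch of this uniform-spacings procedure in plain Python:

import random

def flat_dirichlet_sample(K):
    """Sample Dir(1, ..., 1): spacings of K-1 sorted uniforms on [0, 1]."""
    cuts = sorted([0.0, 1.0] + [random.random() for _ in range(K - 1)])
    return [b - a for a, b in zip(cuts, cuts[1:])]

print(flat_dirichlet_sample(4))   # four non-negative values summing to 1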


When each alpha is 1/2 and relationship to the hypersphere

When \alpha_1 = \cdots = \alpha_K = 1/2, a sample from the distribution can be found by randomly drawing K values independently from the standard normal distribution, squaring these values, and normalizing them by dividing by their sum, to give x_1, \ldots, x_K. A point (u_1, \ldots, u_K) can be drawn uniformly at random from the (K-1)-dimensional unit hypersphere (which is the surface of a K-dimensional hyperball) via a similar procedure. Randomly draw K values independently from the standard normal distribution and normalize these coordinate values by dividing each by the constant that is the square root of the sum of their squares.
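
And the corresponding sketch for \alpha = 1/2, again in plain Python:

import random

def half_dirichlet_sample(K):
    """Sample Dir(1/2, ..., 1/2): squared, normalized standard normals."""
    squares = [random.gauss(0.0, 1.0) ** 2 for _ in range(K)]
    total = sum(squares)
    return [s / total for s in squares]

print(half_dirichlet_sample(4))   # typically sparser than the flat Dirichlet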


See also

* Generalized Dirichlet distribution
* Grouped Dirichlet distribution
* Inverted Dirichlet distribution
* Latent Dirichlet allocation
* Dirichlet process
* Matrix variate Dirichlet distribution


References


External links

* Dirichlet Distribution
* How to estimate the parameters of the compound Dirichlet distribution (Pólya distribution) using expectation-maximization (EM)
* Dirichlet Random Measures, Method of Construction via Compound Poisson Random Variables, and Exchangeability Properties of the resulting Gamma Distribution
* R package that contains functions for simulating parameters of the Dirichlet distribution