In statistics, the inverse Wishart distribution, also called the inverted Wishart distribution, is a probability distribution defined on real-valued positive-definite matrices. In Bayesian statistics it is used as the conjugate prior for the covariance matrix of a multivariate normal distribution.
We say \mathbf{X} follows an inverse Wishart distribution, denoted as \mathbf{X} \sim \mathcal{W}^{-1}(\mathbf{\Psi},\nu), if its inverse \mathbf{X}^{-1} has a Wishart distribution \mathcal{W}(\mathbf{\Psi}^{-1},\nu). Important identities have been derived for the inverse-Wishart distribution.
Density
The probability density function of the inverse Wishart is:
: f_{\mathbf{X}}(\mathbf{X}; \mathbf{\Psi}, \nu) = \frac{\left|\mathbf{\Psi}\right|^{\nu/2}}{2^{\nu p/2}\,\Gamma_p\!\left(\frac{\nu}{2}\right)} \left|\mathbf{X}\right|^{-(\nu+p+1)/2} e^{-\frac{1}{2}\operatorname{tr}(\mathbf{\Psi}\mathbf{X}^{-1})}
where \mathbf{X} and \mathbf{\Psi} are p \times p positive definite matrices, |\cdot| is the determinant, and \Gamma_p(\cdot) is the multivariate gamma function.
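As a sanity check, the density above can be evaluated directly and compared against a library implementation. The sketch below assumes SciPy's `invwishart` is available (its `df` and `scale` parameters correspond to \nu and \mathbf{\Psi} here); the particular \mathbf{\Psi}, \mathbf{X} and \nu are arbitrary illustrative values:

```python
import numpy as np
from scipy.special import multigammaln
from scipy.stats import invwishart

def inv_wishart_pdf(X, Psi, nu):
    """Density of W^{-1}(Psi, nu) at X, computed term by term from the formula above."""
    p = X.shape[0]
    _, logdet_Psi = np.linalg.slogdet(Psi)
    _, logdet_X = np.linalg.slogdet(X)
    log_f = (nu / 2) * logdet_Psi \
            - (nu * p / 2) * np.log(2) \
            - multigammaln(nu / 2, p) \
            - ((nu + p + 1) / 2) * logdet_X \
            - 0.5 * np.trace(Psi @ np.linalg.inv(X))
    return np.exp(log_f)

Psi = np.array([[2.0, 0.3], [0.3, 1.0]])
X = np.array([[1.5, -0.2], [-0.2, 0.8]])
nu = 5
manual = inv_wishart_pdf(X, Psi, nu)
reference = invwishart.pdf(X, df=nu, scale=Psi)
```

Working in log space via `slogdet` and `multigammaln` avoids overflow for larger p and \nu.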
Theorems
Distribution of the inverse of a Wishart-distributed matrix
If \mathbf{A} \sim \mathcal{W}(\mathbf{\Sigma},\nu) and \mathbf{A} is of size p \times p, then \mathbf{X} = \mathbf{A}^{-1} has an inverse Wishart distribution \mathbf{X} \sim \mathcal{W}^{-1}(\mathbf{\Sigma}^{-1},\nu).
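This relationship is easy to probe by simulation. The sketch below (assuming SciPy's `wishart` and NumPy) inverts Wishart draws and compares their sample mean with the inverse-Wishart mean \mathbf{\Sigma}^{-1}/(\nu-p-1) given in the Moments section; the scale matrix and degrees of freedom are arbitrary choices for the demonstration:

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(0)
p, nu = 2, 10
Sigma = np.array([[1.0, 0.4], [0.4, 2.0]])

# Draw A ~ W(Sigma, nu), then invert each sample.
A = wishart.rvs(df=nu, scale=Sigma, size=200_000, random_state=rng)
X = np.linalg.inv(A)

# X should follow W^{-1}(Sigma^{-1}, nu), whose mean is Sigma^{-1} / (nu - p - 1).
expected_mean = np.linalg.inv(Sigma) / (nu - p - 1)
sample_mean = X.mean(axis=0)
```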
Marginal and conditional distributions from an inverse Wishart-distributed matrix
Suppose \mathbf{A} \sim \mathcal{W}^{-1}(\mathbf{\Psi},\nu) has an inverse Wishart distribution. Partition the matrices \mathbf{A} and \mathbf{\Psi} conformably with each other
: \mathbf{A} = \begin{bmatrix} \mathbf{A}_{11} & \mathbf{A}_{12} \\ \mathbf{A}_{21} & \mathbf{A}_{22} \end{bmatrix}, \; \mathbf{\Psi} = \begin{bmatrix} \mathbf{\Psi}_{11} & \mathbf{\Psi}_{12} \\ \mathbf{\Psi}_{21} & \mathbf{\Psi}_{22} \end{bmatrix}
where \mathbf{A}_{ij} and \mathbf{\Psi}_{ij} are p_i \times p_j matrices, then we have
# \mathbf{A}_{11} is independent of \mathbf{A}_{11}^{-1}\mathbf{A}_{12} and \mathbf{A}_{22\cdot 1}, where \mathbf{A}_{22\cdot 1} = \mathbf{A}_{22} - \mathbf{A}_{21}\mathbf{A}_{11}^{-1}\mathbf{A}_{12} is the Schur complement of \mathbf{A}_{11} in \mathbf{A};
# \mathbf{A}_{11} \sim \mathcal{W}^{-1}(\mathbf{\Psi}_{11}, \nu - p_2);
# \mathbf{A}_{11}^{-1}\mathbf{A}_{12} \mid \mathbf{A}_{22\cdot 1} \sim MN_{p_1\times p_2}(\mathbf{\Psi}_{11}^{-1}\mathbf{\Psi}_{12}, \mathbf{A}_{22\cdot 1} \otimes \mathbf{\Psi}_{11}^{-1}), where MN_{p\times q}(\cdot,\cdot) is a matrix normal distribution;
# \mathbf{A}_{22\cdot 1} \sim \mathcal{W}^{-1}(\mathbf{\Psi}_{22\cdot 1}, \nu), where \mathbf{\Psi}_{22\cdot 1} = \mathbf{\Psi}_{22} - \mathbf{\Psi}_{21}\mathbf{\Psi}_{11}^{-1}\mathbf{\Psi}_{12} is the Schur complement of \mathbf{\Psi}_{11} in \mathbf{\Psi}.
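Property 2 can be illustrated numerically. The sketch below (assuming SciPy's `invwishart`) checks that the top-left block of inverse-Wishart draws has the mean implied by its marginal law \mathcal{W}^{-1}(\mathbf{\Psi}_{11}, \nu - p_2); the scale matrix, degrees of freedom, and partition sizes are arbitrary choices for the demonstration:

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(1)
nu = 12
Psi = np.array([[4.0, 1.0, 0.5],
                [1.0, 3.0, 0.2],
                [0.5, 0.2, 2.0]])
p1, p2 = 2, 1  # partition sizes, p = p1 + p2

A = invwishart.rvs(df=nu, scale=Psi, size=100_000, random_state=rng)
A11 = A[:, :p1, :p1]  # top-left block of every sample

# Property 2: A11 ~ W^{-1}(Psi_11, nu - p2), so its mean is
# Psi_11 / ((nu - p2) - p1 - 1).
expected = Psi[:p1, :p1] / ((nu - p2) - p1 - 1)
observed = A11.mean(axis=0)
```

Note that ((\nu - p_2) - p_1 - 1) = \nu - p - 1, consistent with the overall mean formula in the Moments section.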
Conjugate distribution
Suppose we wish to make inference about a covariance matrix
whose
prior has a
distribution. If the observations
marginalize out
In probability theory and statistics, the marginal distribution of a subset of a collection of random variables is the probability distribution of the variables contained in the subset. It gives the probabilities of various values of the variable ...
(integrate out) the Gaussian's parameter
\mathbf, using the formula
p(x) = \frac and the linear algebra identity
v^T \Omega v = \text( \Omega v v^T) :
:
f_ (\mathbf x) = \int f_(\mathbf x) f_ (\sigma)\,d\sigma = \frac
(this is useful because the variance matrix
\mathbf is not known in practice, but because
is known ''a priori'', and
can be obtained from the data, the right hand side can be evaluated directly). The inverse-Wishart distribution as a prior can be constructed via existing transferred
prior knowledge.
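The conjugate update is a one-liner in practice. The sketch below is plain NumPy; the function name and the numbers are ours, chosen only to illustrate mapping prior hyperparameters (\mathbf{\Psi}, \nu) and zero-mean observations to the posterior hyperparameters (\mathbf{\Psi} + \mathbf{X}\mathbf{X}^T, \nu + n):

```python
import numpy as np

def invwishart_posterior(Psi, nu, X):
    """Conjugate update for Sigma ~ W^{-1}(Psi, nu) with the columns of X
    i.i.d. N(0, Sigma): the posterior is W^{-1}(Psi + X X^T, nu + n)."""
    n = X.shape[1]
    return Psi + X @ X.T, nu + n

# Example: p = 2 prior updated with n = 5 zero-mean observations.
Psi0, nu0 = np.eye(2), 4.0
X = np.array([[0.5, -1.2, 0.3, 0.0, 0.9],
              [1.1, 0.4, -0.7, 0.2, -0.1]])
Psi_n, nu_n = invwishart_posterior(Psi0, nu0, X)
```

The posterior mean estimate of \mathbf{\Sigma} is then \Psi_n/(\nu_n - p - 1), which shrinks the sample scatter matrix toward the prior scale.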
Moments
The following is based on Press, S. J. (1982) "Applied Multivariate Analysis", 2nd ed. (Dover Publications, New York), after reparameterizing the degree of freedom to be consistent with the p.d.f. definition above.
Let
W \sim \mathcal{W}(\mathbf{\Psi}^{-1}, \nu) with
\nu \ge p and
X \doteq W^{-1}, so that
X \sim \mathcal{W}^{-1}(\mathbf{\Psi}, \nu).
The mean (for \nu > p+1):
: \operatorname{E}(\mathbf{X}) = \frac{\mathbf{\Psi}}{\nu-p-1}.
The variance of each element of \mathbf{X}:
: \operatorname{Var}(x_{ij}) = \frac{(\nu-p+1)\psi_{ij}^2 + (\nu-p-1)\psi_{ii}\psi_{jj}}{(\nu-p)(\nu-p-1)^2(\nu-p-3)}
The variance of the diagonal uses the same formula as above with i=j, which simplifies to:
: \operatorname{Var}(x_{ii}) = \frac{2\psi_{ii}^2}{(\nu-p-1)^2(\nu-p-3)}.
The covariance of elements of \mathbf{X} is given by:
: \operatorname{Cov}(x_{ij},x_{k\ell}) = \frac{2\psi_{ij}\psi_{k\ell} + (\nu-p-1)(\psi_{ik}\psi_{j\ell}+\psi_{i\ell}\psi_{kj})}{(\nu-p)(\nu-p-1)^2(\nu-p-3)}
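These moment formulas can be cross-checked by simulation. The sketch below (assuming SciPy's `invwishart`) compares Monte Carlo estimates against the closed forms for the mean and the diagonal variances; the scale matrix and degrees of freedom are arbitrary illustrative values:

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(2)
p, nu = 2, 20
Psi = np.array([[2.0, 0.6], [0.6, 1.5]])

# Closed-form moments from the formulas above.
mean = Psi / (nu - p - 1)
var_diag = 2 * np.diag(Psi) ** 2 / ((nu - p - 1) ** 2 * (nu - p - 3))

X = invwishart.rvs(df=nu, scale=Psi, size=300_000, random_state=rng)
mc_mean = X.mean(axis=0)
mc_var_diag = X[:, [0, 1], [0, 1]].var(axis=0)  # variance of x_11 and x_22
```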
The same results are expressed in Kronecker product form by von Rosen as follows:
: \begin{align}
\operatorname{E}\left( W^{-1} \otimes W^{-1} \right) & = c_1 \Psi \otimes \Psi + c_2 \operatorname{Vec}(\Psi)\operatorname{Vec}(\Psi)^T + c_2 K_{pp}\,\Psi \otimes \Psi \\
\operatorname{Cov}_\otimes\left( W^{-1}, W^{-1} \right) & = (c_1 - c_3)\,\Psi \otimes \Psi + c_2 \operatorname{Vec}(\Psi)\operatorname{Vec}(\Psi)^T + c_2 K_{pp}\,\Psi \otimes \Psi
\end{align}
where
: \begin{align}
c_2 & = \left[(\nu-p)(\nu-p-1)(\nu-p-3)\right]^{-1} \\
c_1 & = (\nu-p-2)\,c_2 \\
c_3 & = (\nu-p-1)^{-2},
\end{align}
: K_{pp} is a p^2 \times p^2 commutation matrix, and
: \operatorname{Cov}_\otimes\left( W^{-1}, W^{-1} \right) = \operatorname{E}\left( W^{-1} \otimes W^{-1} \right) - \operatorname{E}\left( W^{-1} \right) \otimes \operatorname{E}\left( W^{-1} \right).
There appears to be a typo in the paper whereby the coefficient of K_{pp}\,\Psi \otimes \Psi is given as c_1 rather than c_2, and the expression for the mean square inverse Wishart, corollary 3.1, should read
: \operatorname{E}\left[ W^{-1} W^{-1} \right] = (c_1+c_2)\,\Sigma^{-1}\Sigma^{-1} + c_2\,\Sigma^{-1}\operatorname{tr}(\Sigma^{-1}).
To show how the interacting terms become sparse when the covariance is diagonal, let \Psi = \mathbf{I}_{3\times 3} and introduce some arbitrary parameters u, v, w:
: \operatorname{E}\left( W^{-1} \otimes W^{-1} \right) = u\,\Psi \otimes \Psi + v\,\mathrm{vec}(\Psi)\,\mathrm{vec}(\Psi)^T + w\,K_{pp}\,\Psi \otimes \Psi.
where \mathrm{vec} denotes the matrix vectorization operator. Then the second moment matrix becomes
: \operatorname{E}\left( W^{-1} \otimes W^{-1} \right) = \begin{bmatrix}
u+v+w & \cdot & \cdot & \cdot & v & \cdot & \cdot & \cdot & v \\
\cdot & u & \cdot & w & \cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & u & \cdot & \cdot & \cdot & w & \cdot & \cdot \\
\cdot & w & \cdot & u & \cdot & \cdot & \cdot & \cdot & \cdot \\
v & \cdot & \cdot & \cdot & u+v+w & \cdot & \cdot & \cdot & v \\
\cdot & \cdot & \cdot & \cdot & \cdot & u & \cdot & w & \cdot \\
\cdot & \cdot & w & \cdot & \cdot & \cdot & u & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot & w & \cdot & u & \cdot \\
v & \cdot & \cdot & \cdot & v & \cdot & \cdot & \cdot & u+v+w \\
\end{bmatrix}
which is non-zero only when involving the correlations of diagonal elements of W^{-1}; all other elements are mutually uncorrelated, though not necessarily statistically independent. The variances of the Wishart product are also obtained by Cook et al. in the singular case and, by extension, to the full rank case.
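The sparsity pattern above is deterministic given the three Kronecker terms. The sketch below (plain NumPy, with arbitrary placeholder values for u, v, w) assembles u\,\Psi \otimes \Psi + v\,\mathrm{vec}(\Psi)\mathrm{vec}(\Psi)^T + w\,K_{pp}\,\Psi \otimes \Psi for p=3 and \Psi = \mathbf{I} and recovers exactly the displayed structure:

```python
import numpy as np

p = 3
u, v, w = 2.0, 3.0, 5.0  # arbitrary placeholder values
Psi = np.eye(p)

# Commutation matrix K_{pp}: K @ vec(A) = vec(A^T) for any p x p matrix A.
K = np.zeros((p * p, p * p))
for i in range(p):
    for j in range(p):
        K[i * p + j, j * p + i] = 1.0

vec_Psi = Psi.reshape(-1, 1, order="F")  # column-major vectorization
M = u * np.kron(Psi, Psi) + v * (vec_Psi @ vec_Psi.T) \
    + w * K @ np.kron(Psi, Psi)
```

With \Psi = \mathbf{I}, the first term contributes u on the whole diagonal, the second contributes v at positions indexing pairs of diagonal elements, and the third contributes w at the transpose-swap positions, for 21 non-zero entries in total.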
Muirhead shows in Theorem 3.2.5 that if \mathbf{A} is distributed as \mathcal{W}_m(n,\Sigma) and \mathbf{V} is an arbitrary random vector, independent of \mathbf{A}, then \frac{\mathbf{V}^T \Sigma^{-1} \mathbf{V}}{\mathbf{V}^T \mathbf{A}^{-1} \mathbf{V}} \sim \chi^2_{n-m+1} and it follows that \frac{\mathbf{V}^T \mathbf{A}^{-1} \mathbf{V}}{\mathbf{V}^T \Sigma^{-1} \mathbf{V}} follows an inverse-chi-squared distribution. Setting \mathbf{V} = (1,\,0,\cdots,0)^T, the marginal distribution of the leading diagonal element is thus
: \frac{[\mathbf{A}^{-1}]_{1,1}}{[\Sigma^{-1}]_{1,1}} \sim \text{Inv-}\chi^2(n-m+1) = \frac{2^{-k/2}}{\Gamma(k/2)}\, x^{-k/2-1} e^{-1/(2x)}, \;\; k = n-m+1
and by rotating \mathbf{V} end-around a similar result applies to all diagonal elements [\mathbf{A}^{-1}]_{ii}.
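This marginal law can be checked by simulation. The sketch below (assuming SciPy's `invwishart`) draws from \mathcal{W}^{-1}(\mathbf{I},\nu) and compares the sample mean of a diagonal element with the \text{Inv-}\chi^2(k) mean 1/(k-2), k = \nu - p + 1; the dimension and degrees of freedom are arbitrary choices:

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(3)
p, nu = 3, 10
W = invwishart.rvs(df=nu, scale=np.eye(p), size=200_000, random_state=rng)

# With Psi = I, each diagonal element of W follows Inv-chi^2(k),
# k = nu - p + 1, whose mean is 1/(k - 2).
k = nu - p + 1
diag_mean = W[:, 0, 0].mean()
expected = 1.0 / (k - 2)
```

Note 1/(k-2) = 1/(\nu-p-1), matching the inverse-Wishart mean formula for \psi_{ii} = 1.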
A corresponding result in the complex Wishart case was shown by Brennan and Reed, and the uncorrelated inverse complex Wishart \mathcal{W}^{-1}(\mathbf{I},\nu,p) was shown by Shaman to have diagonal statistical structure in which the leading diagonal elements are correlated, while all other elements are uncorrelated.
Related distributions
* A univariate specialization of the inverse-Wishart distribution is the inverse-gamma distribution. With p=1 (i.e. univariate) and \alpha = \nu/2, \beta = \mathbf{\Psi}/2 and x = \mathbf{X} the probability density function of the inverse-Wishart distribution becomes
:: p(x\mid\alpha, \beta) = \frac{\beta^\alpha\, x^{-\alpha-1}\, e^{-\beta/x}}{\Gamma_1(\alpha)},
: i.e., the inverse-gamma distribution, where \Gamma_1(\cdot) is the ordinary Gamma function.
* The inverse Wishart distribution is a special case of the inverse matrix gamma distribution when the shape parameter \alpha = \frac{\nu}{2} and the scale parameter \beta = 2.
* Another generalization has been termed the generalized inverse Wishart distribution, \mathcal{GW}^{-1}. A p \times p positive definite matrix \mathbf{X} is said to be distributed as \mathcal{GW}^{-1}(\mathbf{\Psi},\nu,\mathbf{S}) if \mathbf{Y} = \mathbf{X}^{1/2}\mathbf{S}^{-1}\mathbf{X}^{1/2} is distributed as \mathcal{W}^{-1}(\mathbf{\Psi},\nu). Here \mathbf{X}^{1/2} denotes the symmetric matrix square root of \mathbf{X}, the parameters \mathbf{\Psi},\mathbf{S} are p \times p positive definite matrices, and the parameter \nu is a positive scalar larger than 2p. Note that when \mathbf{S} is equal to an identity matrix, \mathcal{GW}^{-1}(\mathbf{\Psi},\nu,\mathbf{S}) = \mathcal{W}^{-1}(\mathbf{\Psi},\nu). This generalized inverse Wishart distribution has been applied to estimating the distributions of multivariate autoregressive processes.
* A different type of generalization is the normal-inverse-Wishart distribution, essentially the product of a multivariate normal distribution with an inverse Wishart distribution.
* When the scale matrix is an identity matrix, \mathbf{\Psi} = \mathbf{I}, and \mathbf{\Phi} is an arbitrary orthogonal matrix, replacement of \mathbf{W} by \mathbf{\Phi}\mathbf{W}\mathbf{\Phi}^T does not change the pdf of \mathbf{W}, so \mathcal{W}^{-1}(\mathbf{I},\nu,p) belongs to the family of spherically invariant random processes (SIRPs) in some sense.
: Thus, an arbitrary ''p''-vector \mathbf{V} with l_2 length \mathbf{V}^T\mathbf{V} = 1 can be rotated into the vector \mathbf{\Phi}\mathbf{V} = [1\; 0 \; 0 \cdots]^T without changing the pdf of \mathbf{V}^T \mathbf{W} \mathbf{V}; moreover, \mathbf{\Phi} can be a permutation matrix which exchanges diagonal elements. It follows that the diagonal elements of \mathbf{W} are identically inverse chi squared distributed, with pdf f_{x_{11}} in the previous section, though they are not mutually independent. The result is known in optimal portfolio statistics, as in Theorem 2 Corollary 1 of Bodnar et al., where it is expressed in the inverse form \frac{\mathbf{V}^T \Sigma \mathbf{V}}{\mathbf{V}^T \mathbf{W} \mathbf{V}} \sim \chi^2_{\nu-p+1}.
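The univariate (p=1) specialization listed above is easy to confirm numerically. The sketch below (assuming SciPy) evaluates the one-dimensional inverse-Wishart density alongside the inverse-gamma density with \alpha = \nu/2, \beta = \Psi/2 on a grid of points; the particular \nu and \Psi are arbitrary:

```python
import numpy as np
from scipy.stats import invgamma, invwishart

nu, psi = 7.0, 3.0
alpha, beta = nu / 2, psi / 2

# For p = 1 the two densities should coincide pointwise.
xs = np.linspace(0.1, 5.0, 50)
iw_pdf = invwishart.pdf(xs, df=nu, scale=psi)
ig_pdf = invgamma.pdf(xs, a=alpha, scale=beta)
```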
See also
*Inverse matrix gamma distribution
*Matrix normal distribution
*Wishart distribution
*Complex inverse Wishart distribution
References
{{ProbDistributions|multivariate}}
Continuous distributions
Multivariate continuous distributions
Conjugate prior distributions
Exponential family distributions