Matrix Variate Beta Distribution
In statistics, the matrix variate beta distribution is a generalization of the beta distribution. If U is a p\times p positive definite matrix with a matrix variate beta distribution, and a,b>(p-1)/2 are real parameters, we write U\sim B_p\left(a,b\right) (sometimes B_p^I\left(a,b\right)). The probability density function for U is:
: \left\{\beta_p\left(a,b\right)\right\}^{-1} \det\left(U\right)^{a-(p+1)/2}\det\left(I_p-U\right)^{b-(p+1)/2}.
Here \beta_p\left(a,b\right) is the multivariate beta function:
: \beta_p\left(a,b\right)=\frac{\Gamma_p\left(a\right)\Gamma_p\left(b\right)}{\Gamma_p\left(a+b\right)}
where \Gamma_p\left(a\right) is the multivariate gamma function given by
: \Gamma_p\left(a\right)= \pi^{p(p-1)/4}\prod_{i=1}^p\Gamma\left(a-(i-1)/2\right).
Theorems
Distribution of matrix inverse
If U\sim B_p(a,b) then the density of X=U^{-1} is given by
: \frac{1}{\beta_p(a,b)}\det(X)^{-(a+b)}\det\left(X-I_p\right)^{b-(p+1)/2}
provided that X>I_p and a,b>(p-1)/2.
Orthogonal transform
If U\sim B_p(a,b) and H is a constant p\times p orthogonal matrix, then HUH^T\sim B_p(a,b). Also, if H is a random orthogonal p\times p matrix which is ...
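Draws from B_p(a,b) are easy to construct from Wishart matrices. Below is a minimal sketch in Python (my own illustration, not from the article; the function name sample_matrix_beta and the parameter choices are assumptions), using the classical result that if S_1\sim W_p(2a,I) and S_2\sim W_p(2b,I) are independent and S_1+S_2=TT^T is a Cholesky factorization, then T^{-1}S_1 T^{-T}\sim B_p(a,b).

# A minimal sketch (not from the article): sampling U ~ B_p(a, b) via the
# classical Wishart construction.
import numpy as np
from scipy.stats import wishart

def sample_matrix_beta(p, a, b, rng=None):
    """Draw one U ~ B_p(a, b); requires 2a and 2b to exceed p - 1."""
    rng = np.random.default_rng(rng)
    S1 = wishart.rvs(df=2 * a, scale=np.eye(p), random_state=rng)
    S2 = wishart.rvs(df=2 * b, scale=np.eye(p), random_state=rng)
    T = np.linalg.cholesky(S1 + S2)          # S1 + S2 = T T^T
    Tinv = np.linalg.inv(T)
    return Tinv @ S1 @ Tinv.T                # U = T^{-1} S1 T^{-T}

U = sample_matrix_beta(p=3, a=4.0, b=5.0, rng=0)
print(np.linalg.eigvalsh(U))                 # eigenvalues lie in (0, 1),
                                             # so both U and I - U are positive definite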



Statistics
Statistics (from German ''Statistik'', "description of a state, a country") is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data. In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. Populations can be diverse groups of people or objects such as "all people living in a country" or "every atom composing a crystal". Statistics deals with every aspect of data, including the planning of data collection in terms of the design of surveys and experiments. (Dodge, Y. (2006) ''The Oxford Dictionary of Statistical Terms'', Oxford University Press.) When census data cannot be collected, statisticians collect data by developing specific experiment designs and survey samples. Representative sampling as ...


Rank (linear Algebra)
In linear algebra, the rank of a matrix A is the dimension of the vector space generated (or spanned) by its columns. This corresponds to the maximal number of linearly independent columns of A. This, in turn, is identical to the dimension of the vector space spanned by its rows. Rank is thus a measure of the "nondegenerateness" of the system of linear equations and linear transformation encoded by A. There are multiple equivalent definitions of rank. A matrix's rank is one of its most fundamental characteristics. The rank is commonly denoted by \operatorname{rank}(A) or \operatorname{rk}(A); sometimes the parentheses are not written, as in \operatorname{rank}A. (Alternative notation includes \rho(\Phi).)
Main definitions
In this section, we give some definitions of the rank of a matrix. Many definitions are possible; see Alternative definitions for several of these. The column rank of A is the dimension of the column space of A, while the row rank of A is the dimension of the row space of A. A fundamental result in ...
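As a quick concrete check (an assumed example, not from the article), numpy.linalg.matrix_rank computes the rank via the singular value decomposition, and applying it to a matrix and its transpose illustrates that column rank equals row rank:

import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],    # row 2 = 2 * row 1, so the rows are dependent
              [0., 1., 1.]])

print(np.linalg.matrix_rank(A))      # 2
print(np.linalg.matrix_rank(A.T))    # 2: row rank equals column rank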


Matrix Variate Dirichlet Distribution
In statistics, the matrix variate Dirichlet distribution is a generalization of the matrix variate beta distribution and of the Dirichlet distribution. Suppose U_1,\ldots,U_r are p\times p positive definite matrices with I_p-\sum_{i=1}^r U_i also positive definite, where I_p is the p\times p identity matrix. Then we say that the U_i have a matrix variate Dirichlet distribution, \left(U_1,\ldots,U_r\right)\sim D_p\left(a_1,\ldots,a_r;a_{r+1}\right), if their joint probability density function is
: \left\{\beta_p\left(a_1,\ldots,a_r;a_{r+1}\right)\right\}^{-1}\prod_{i=1}^r\det\left(U_i\right)^{a_i-(p+1)/2}\det\left(I_p-\sum_{i=1}^r U_i\right)^{a_{r+1}-(p+1)/2}
where a_i>(p-1)/2, i=1,\ldots,r+1, and \beta_p\left(\cdots\right) is the multivariate beta function. If we write U_{r+1}=I_p-\sum_{i=1}^r U_i then the PDF takes the simpler form
: \left\{\beta_p\left(a_1,\ldots,a_{r+1}\right)\right\}^{-1}\prod_{i=1}^{r+1}\det\left(U_i\right)^{a_i-(p+1)/2},
on the understanding that \sum_{i=1}^{r+1}U_i=I_p.
Theorems
Generalization of chi square-Dirichlet result
Suppose S_i\sim W_p\left(n_i,\Sigma\right), i=1,\ldots,r+1, are independently distributed Wishart p\times p positive defini ...
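The truncated theorem above is the Wishart construction of this distribution. A hedged sketch in Python (my own illustration; the helper name sample_matrix_dirichlet and the parameter values are assumptions): with independent S_i\sim W_p(n_i,\Sigma), i=1,\ldots,r+1, and S_1+\cdots+S_{r+1}=TT^T, the matrices U_i=T^{-1}S_i T^{-T} are matrix variate Dirichlet with parameters n_i/2.

import numpy as np
from scipy.stats import wishart

def sample_matrix_dirichlet(p, dfs, rng=None):
    """dfs = (n_1, ..., n_{r+1}); returns U_1, ..., U_r (U_{r+1} is implied)."""
    rng = np.random.default_rng(rng)
    S_list = [wishart.rvs(df=n, scale=np.eye(p), random_state=rng) for n in dfs]
    T = np.linalg.cholesky(sum(S_list))      # S_1 + ... + S_{r+1} = T T^T
    Tinv = np.linalg.inv(T)
    U = [Tinv @ S @ Tinv.T for S in S_list]
    return U[:-1]                            # the last one is I_p - sum of the others

U1, U2 = sample_matrix_dirichlet(p=3, dfs=(6, 7, 8), rng=1)
print(np.linalg.eigvalsh(np.eye(3) - U1 - U2))  # all positive: the constraint holds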


Wishart Distribution
In statistics, the Wishart distribution is a generalization to multiple dimensions of the gamma distribution. It is named in honor of John Wishart, who first formulated the distribution in 1928. It is a family of probability distributions defined over symmetric, nonnegative-definite random matrices (i.e. matrix-valued random variables). In random matrix theory, the space of Wishart matrices is called the ''Wishart ensemble''. These distributions are of great importance in the estimation of covariance matrices in multivariate statistics. In Bayesian statistics, the Wishart distribution is the conjugate prior of the inverse covariance-matrix of a multivariate-normal random-vector.
Definition
Suppose G is a p\times n matrix, each column of which is independently drawn from a p-variate normal distribution with zero mean:
: G_i = (g_i^1,\dots,g_i^p)^T\sim \mathcal{N}_p(0,V).
Then the Wishart distribution is the probability distribution of the p\times p random matrix
: S = GG^T = \sum_{i=1}^n G_i G_i^T,
known ...
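The definition translates directly into code. A short sketch (the values of p, n, and V are assumed for the demo): build S = GG^T from n i.i.d. N_p(0, V) columns, and compare with scipy.stats.wishart, which samples the same distribution.

import numpy as np
from scipy.stats import wishart

p, n = 3, 10
V = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 1.5]])

rng = np.random.default_rng(42)
G = rng.multivariate_normal(np.zeros(p), V, size=n).T   # p x n, columns ~ N_p(0, V)
S = G @ G.T                                             # S ~ W_p(n, V) by definition

S2 = wishart.rvs(df=n, scale=V, random_state=rng)       # equivalent draw via scipy
print(S.shape, S2.shape)                                # both (3, 3), symmetric PSD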




Inverted Matrix Variate T Distribution
Inverse or invert may refer to:
Science and mathematics
* Inverse (logic), a type of conditional sentence which is an immediate inference made from another conditional sentence
* Additive inverse (negation), the inverse of a number that, when added to the original number, yields zero
* Compositional inverse, a function that "reverses" another function
* Inverse element
* Inverse function, a function that "reverses" another function
** Generalized inverse, a matrix that has some properties of the inverse matrix but not necessarily all of them
* Multiplicative inverse (reciprocal), a number which when multiplied by a given number yields the multiplicative identity, 1
** Inverse matrix, of an invertible matrix
Other uses
* Invert level, the base interior level of a pipe, trench or tunnel
* ''Inverse'' (website), an online magazine
* An outdated term for an LGBT person; see Sexual inversion (sexology)
See also
* Inversion (disambiguation)
* Inverter (disambiguation)
* Opposite (di ...



Independence (probability)
Independence is a fundamental notion in probability theory, as in statistics and the theory of stochastic processes. Two events are independent, statistically independent, or stochastically independent if, informally speaking, the occurrence of one does not affect the probability of occurrence of the other or, equivalently, does not affect the odds. Similarly, two random variables are independent if the realization of one does not affect the probability distribution of the other. When dealing with collections of more than two events, two notions of independence need to be distinguished. The events are called pairwise independent if any two events in the collection are independent of each other, while mutual independence (or collective independence) of events means, informally speaking, that each event is independent of any combination of other events in the collection. A similar notion exists for collections of random variables. Mutual independence implies pairwise independe ...
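A classic toy example (my own, not from the article) makes the pairwise/mutual distinction concrete: two fair coin flips X, Y and their XOR Z are pairwise independent, yet not mutually independent, since Z is determined by (X, Y).

from itertools import product
from fractions import Fraction

# Sample space: (X, Y, X xor Y), each outcome with probability 1/4.
outcomes = [(x, y, x ^ y) for x, y in product((0, 1), repeat=2)]
pr = lambda pred: Fraction(sum(pred(o) for o in outcomes), len(outcomes))

# Pairwise independence holds: P(X=1, Z=1) == P(X=1) * P(Z=1) (similarly for other pairs).
print(pr(lambda o: o[0] == 1 and o[2] == 1)
      == pr(lambda o: o[0] == 1) * pr(lambda o: o[2] == 1))   # True

# Mutual independence fails: P(X=1, Y=1, Z=1) = 0, but P(X=1)P(Y=1)P(Z=1) = 1/8.
print(pr(lambda o: o == (1, 1, 1)), pr(lambda o: o[0] == 1) ** 3)   # 0 vs 1/8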


Schur Complement
In linear algebra and the theory of matrices, the Schur complement of a block matrix is defined as follows. Suppose ''p'', ''q'' are nonnegative integers, and suppose ''A'', ''B'', ''C'', ''D'' are respectively ''p'' × ''p'', ''p'' × ''q'', ''q'' × ''p'', and ''q'' × ''q'' matrices of complex numbers. Let
: M = \begin{bmatrix} A & B \\ C & D \end{bmatrix}
so that ''M'' is a (''p'' + ''q'') × (''p'' + ''q'') matrix. If ''D'' is invertible, then the Schur complement of the block ''D'' of the matrix ''M'' is the ''p'' × ''p'' matrix defined by
: M/D := A - BD^{-1}C.
If ''A'' is invertible, the Schur complement of the block ''A'' of the matrix ''M'' is the ''q'' × ''q'' matrix defined by
: M/A := D - CA^{-1}B.
In the case that ''A'' or ''D'' is singular, substituting a generalized inverse for the inverses in ''M/A'' and ''M/D'' yields the generalized Schur complement. The Schur complement is named after Issai Schur who used it to prove Schur's lemma, although it had been used previous ...
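A small numeric sketch (matrix entries assumed for illustration): form M/D = A - BD^{-1}C and verify the classical determinant identity det(M) = det(D) det(M/D).

import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])        # p x p
B = np.array([[1.0], [0.0]])                  # p x q
C = np.array([[0.0, 2.0]])                    # q x p
D = np.array([[5.0]])                         # q x q, invertible

M = np.block([[A, B], [C, D]])
schur_D = A - B @ np.linalg.inv(D) @ C        # M/D

print(np.linalg.det(M), np.linalg.det(D) * np.linalg.det(schur_D))  # both 57.0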



Generalized Matrix Variate Beta Distribution
A generalization is a form of abstraction whereby common properties of specific instances are formulated as general concepts or claims. Generalizations posit the existence of a domain or set of elements, as well as one or more common characteristics shared by those elements (thus creating a conceptual model). As such, they are the essential basis of all valid deductive inferences (particularly in logic, mathematics and science), where the process of verification is necessary to determine whether a generalization holds true for any given situation. Generalization can also be used to refer to the process of identifying the parts of a whole, as belonging to the whole. The parts, which might be unrelated when left on their own, may be brought together as a group, hence belonging to the whole by establishing a common relation between them. However, the parts cannot be generalized into a whole—until a common relation is established among ''all'' parts. This does not mean that the p ...





Beta Distribution
In probability theory and statistics, the beta distribution is a family of continuous probability distributions defined on the interval [0, 1] in terms of two positive parameters, denoted by ''alpha'' (''α'') and ''beta'' (''β''), that appear as exponents of the random variable and control the shape of the distribution. The beta distribution has been applied to model the behavior of random variables limited to intervals of finite length in a wide variety of disciplines. The beta distribution is a suitable model for the random behavior of percentages and proportions. In Bayesian inference, the beta distribution is the conjugate prior probability distribution for the Bernoulli, binomial, negative binomial and geometric distributions. The formulation of the beta distribution discussed here is also known as the beta distribution of the first kind, whereas ''beta distribution of the second kind'' is an alternative name for the beta prime distribution. The generalization to mult ...
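The conjugacy mentioned above is easy to demonstrate. A brief sketch (the prior and data values are assumed): observing k successes in n Bernoulli trials updates a Beta(a, b) prior to a Beta(a + k, b + n - k) posterior.

from scipy.stats import beta

a, b = 2.0, 2.0          # prior pseudo-counts
k, n = 7, 10             # observed successes / trials

posterior = beta(a + k, b + n - k)
print(posterior.mean())              # (a + k) / (a + b + n) = 9/14
print(posterior.interval(0.95))      # central 95% credible interval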


Orthogonal Matrix
In linear algebra, an orthogonal matrix, or orthonormal matrix, is a real square matrix whose columns and rows are orthonormal vectors. One way to express this is Q^\mathrm{T} Q = Q Q^\mathrm{T} = I, where Q^\mathrm{T} is the transpose of Q and I is the identity matrix. This leads to the equivalent characterization: a matrix Q is orthogonal if its transpose is equal to its inverse: Q^\mathrm{T}=Q^{-1}, where Q^{-1} is the inverse of Q. An orthogonal matrix Q is necessarily invertible (with inverse Q^{-1}=Q^\mathrm{T}), unitary (Q^{-1}=Q^*), where Q^* is the Hermitian adjoint (conjugate transpose) of Q, and therefore normal (Q^*Q=QQ^*) over the real numbers. The determinant of any orthogonal matrix is either +1 or −1. As a linear transformation, an orthogonal matrix preserves the inner product of vectors, and therefore acts as an isometry of Euclidean space, such as a rotation, reflection or rotoreflection. In other words, it is a unitary transformation. The set of orthogonal matrices, under multiplication, forms the group O(n), known as the orthogonal gr ...
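A quick numeric illustration (the rotation angle is an arbitrary choice): a 2-D rotation matrix satisfies all of the defining properties at once.

import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(Q.T @ Q, np.eye(2)))          # True: columns are orthonormal
print(np.allclose(Q.T, np.linalg.inv(Q)))       # True: transpose equals inverse
print(np.isclose(np.linalg.det(Q), 1.0))        # True: a rotation, det = +1

x, y = np.array([1.0, 2.0]), np.array([-3.0, 0.5])
print(np.isclose(x @ y, (Q @ x) @ (Q @ y)))     # True: inner product preserved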


Multivariate Gamma Function
In mathematics, the multivariate gamma function Γ''p'' is a generalization of the gamma function. It is useful in multivariate statistics, appearing in the probability density function of the Wishart and inverse Wishart distributions, and the matrix variate beta distribution. It has two equivalent definitions. One is given as the following integral over the p \times p positive-definite real matrices:
: \Gamma_p(a)= \int_{S>0} \exp\left(-\operatorname{tr}(S)\right)\, \left|S\right|^{a-(p+1)/2} dS,
where |S| denotes the determinant of S. The other one, more useful for obtaining a numerical result, is:
: \Gamma_p(a)= \pi^{p(p-1)/4}\prod_{j=1}^p \Gamma(a+(1-j)/2).
In both definitions, a is a complex number whose real part satisfies \Re(a) > (p-1)/2. Note that \Gamma_1(a) reduces to the ordinary gamma function. The second of the above definitions allows one to directly obtain the recursive relationships for p\ge 2:
: \Gamma_p(a) = \pi^{(p-1)/2} \Gamma(a) \Gamma_{p-1}(a-\tfrac{1}{2}) = \pi^{(p-1)/2} \Gamma_{p-1}(a) \Gamma(a+(1-p)/2).
Thus
* \Gamma_2(a)=\pi^{1/2}\Gamma(a)\Gam ...
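The second (product) definition is the one implemented in scipy.special.multigammaln, which makes for an easy numerical cross-check. A minimal sketch (the helper name log_multigamma is my own):

import numpy as np
from scipy.special import gammaln, multigammaln

def log_multigamma(a, p):
    """log Gamma_p(a) via pi^{p(p-1)/4} * prod_j Gamma(a + (1 - j)/2)."""
    j = np.arange(1, p + 1)
    return p * (p - 1) / 4.0 * np.log(np.pi) + gammaln(a + (1 - j) / 2.0).sum()

a, p = 5.3, 4
print(log_multigamma(a, p))          # matches the line below
print(multigammaln(a, p))            # scipy's implementation of the same formula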