Scatter Matrix
Scatter Matrix
: ''For the notion in quantum mechanics, see scattering matrix.''

In multivariate statistics and probability theory, the scatter matrix is a statistic that is used to make estimates of the covariance matrix, for instance of the multivariate normal distribution.

Definition
Given ''n'' samples of ''m''-dimensional data, represented as the ''m''-by-''n'' matrix X = [\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n], the sample mean is
:\overline{\mathbf{x}} = \frac{1}{n}\sum_{j=1}^n \mathbf{x}_j
where \mathbf{x}_j is the ''j''-th column of X. The scatter matrix is the ''m''-by-''m'' positive semi-definite matrix
:S = \sum_{j=1}^n (\mathbf{x}_j - \overline{\mathbf{x}})(\mathbf{x}_j - \overline{\mathbf{x}})^T = \sum_{j=1}^n (\mathbf{x}_j - \overline{\mathbf{x}}) \otimes (\mathbf{x}_j - \overline{\mathbf{x}}) = \left( \sum_{j=1}^n \mathbf{x}_j \mathbf{x}_j^T \right) - n\,\overline{\mathbf{x}}\,\overline{\mathbf{x}}^T
where (\cdot)^T denotes the matrix transpose, and multiplication is with respect to the outer product. The scatter matrix may be expressed more succinctly as
:S = X\,C_n\,X^T
where C_n is the ''n''-by-''n'' centering matrix. ...
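As a quick illustration, the three equivalent forms of the scatter matrix above can be checked numerically in Python with NumPy (the data here is random; array shapes and variable names are our own):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 10                           # dimensions, number of samples
X = rng.standard_normal((m, n))        # columns x_1, ..., x_n are the samples

x_bar = X.mean(axis=1, keepdims=True)  # sample mean, an m-by-1 column vector

# Form 1: sum of centered outer products, (X - x_bar)(X - x_bar)^T
S = (X - x_bar) @ (X - x_bar).T

# Form 2: sum x_j x_j^T minus n * x_bar x_bar^T
S_alt = X @ X.T - n * (x_bar @ x_bar.T)

# Form 3: via the n-by-n centering matrix C_n = I - (1/n) * ones
C = np.eye(n) - np.ones((n, n)) / n
S_cm = X @ C @ X.T
```

All three agree, and the eigenvalues of S are non-negative, confirming positive semi-definiteness.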


Scattering Matrix
In physics, the ''S''-matrix or scattering matrix relates the initial state and the final state of a physical system undergoing a scattering process. It is used in quantum mechanics, scattering theory and quantum field theory (QFT). More formally, in the context of QFT, the ''S''-matrix is defined as the unitary matrix connecting sets of asymptotically free particle states (the ''in-states'' and the ''out-states'') in the Hilbert space of physical states. A multi-particle state is said to be ''free'' (non-interacting) if it transforms under Lorentz transformations as a tensor product, or ''direct product'' in physics parlance, of ''one-particle states''. ''Asymptotically free'' then means that the state has this appearance in either the distant past or the distant future. While the ''S''-matrix may be defined for any background (spacetime) that is asymptotically solvable and has no event horizons, it has a simple form in the case of the Minkowsk ...


Outer Product
In linear algebra, the outer product of two coordinate vectors is a matrix. If the two vectors have dimensions ''n'' and ''m'', then their outer product is an ''n'' × ''m'' matrix. More generally, given two tensors (multidimensional arrays of numbers), their outer product is a tensor. The outer product of tensors is also referred to as their tensor product, and can be used to define the tensor algebra. The outer product contrasts with:
* The dot product (a special case of "inner product"), which takes a pair of coordinate vectors as input and produces a scalar
* The Kronecker product, which takes a pair of matrices as input and produces a block matrix
* Standard matrix multiplication

Definition
Given two vectors of size m \times 1 and n \times 1 respectively,
:\mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_m \end{bmatrix}, \quad \mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix},
their outer product, denoted \mathbf{u} \otimes \mathbf{v}, is defined as the m \times n matrix \mathbf{A} obtained by ...
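A minimal sketch of the definition in Python with NumPy (the example vectors are our own), showing that the outer product of an m-vector and an n-vector is an m × n matrix with entries u_i v_j:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])  # m = 3
v = np.array([4.0, 5.0])       # n = 2

# Outer product: A[i, j] = u[i] * v[j], a 3-by-2 matrix
A = np.outer(u, v)

# Equivalent via broadcasting a column against a row
A_bc = u[:, None] * v[None, :]
```

By contrast, the dot product `u @ u` of two vectors of equal length yields a single scalar rather than a matrix.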


Gram Matrix
In linear algebra, the Gram matrix (or Gramian matrix, Gramian) of a set of vectors v_1, \dots, v_n in an inner product space is the Hermitian matrix of inner products, whose entries are given by G_{ij} = \langle v_i, v_j \rangle (p. 441, Theorem 7.2.10). If the vectors v_1, \dots, v_n are the columns of a matrix X, then the Gram matrix is X^* X in the general case that the vector coordinates are complex numbers, which simplifies to X^\top X when the vector coordinates are real numbers. An important application is to compute linear independence: a set of vectors is linearly independent if and only if the Gram determinant (the determinant of the Gram matrix) is non-zero. It is named after Jørgen Pedersen Gram.

Examples
For finite-dimensional real vectors in \mathbb{R}^n with the usual Euclidean dot product, the Gram matrix is G = V^\top V, where V is a matrix whose columns are the vectors v_k and V^\top is its transpose, whose rows are the vectors v_k^\top ...
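The linear-independence test via the Gram determinant can be sketched in Python with NumPy (the example vectors are our own): three vectors in \mathbb{R}^2 are necessarily dependent, so their Gram determinant vanishes, while two of them alone are independent.

```python
import numpy as np

# Columns of V are the real vectors v_1 = (1,0), v_2 = (0,1), v_3 = (1,1)
V = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

# Real case: Gram matrix G = V^T V, with G[i, j] = <v_i, v_j>
G = V.T @ V
det_G = np.linalg.det(G)   # zero: three vectors in R^2 are dependent

# Keeping only v_1 and v_2 gives an independent set
G2 = V[:, :2].T @ V[:, :2]
det_G2 = np.linalg.det(G2)  # non-zero
```

Note that G is symmetric (Hermitian in the complex case) by construction, since \langle v_i, v_j \rangle = \langle v_j, v_i \rangle for real vectors.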






Sample Covariance Matrix
The sample mean (or "empirical mean") and the sample covariance are statistics computed from a sample of data on one or more random variables. The sample mean is the average value (or mean value) of a sample of numbers taken from a larger population of numbers, where "population" indicates not number of people but the entirety of relevant data, whether collected or not. A sample of 40 companies' sales from the Fortune 500 might be used for convenience instead of looking at the population, all 500 companies' sales. The sample mean is used as an estimator for the population mean, the average value in the entire population, where the estimate is more likely to be close to the population mean if the sample is large and representative. The reliability of the sample mean is estimated using the standard error, which in turn is calculated using the variance of the sample. If the sample is random, the standard error falls with the size of the sample and the sample mean's distribution a ...
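The quantities above (sample mean, sample variance, and the standard error of the mean) can be computed directly; a small worked example in Python with NumPy, using made-up data:

```python
import numpy as np

sample = np.array([12.0, 15.0, 9.0, 14.0, 10.0])
n = sample.size

mean = sample.mean()          # sample mean, the estimator of the population mean
var = sample.var(ddof=1)      # unbiased sample variance (divides by n - 1)
std_err = np.sqrt(var / n)    # standard error of the sample mean
```

Because the standard error is \sqrt{s^2 / n}, it falls as the sample size n grows, which is the sense in which a larger random sample makes the sample mean a more reliable estimate.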


Estimation Of Covariance Matrices
In statistics, sometimes the covariance matrix of a multivariate random variable is not known but has to be estimated. Estimation of covariance matrices then deals with the question of how to approximate the actual covariance matrix on the basis of a sample from the multivariate distribution. Simple cases, where observations are complete, can be dealt with by using the sample covariance matrix. The sample covariance matrix (SCM) is an unbiased and efficient estimator of the covariance matrix if the space of covariance matrices is viewed as an extrinsic convex cone in R''p''×''p''; however, measured using the intrinsic geometry of positive-definite matrices, the SCM is a biased and inefficient estimator. In addition, if the random variable has a normal distribution, the sample covariance matrix has a Wishart distribution and a slightly differently scaled version of it is the maximum likelihood estimate. Cases involving missing data, heteroscedasticity, or autocorrelated residuals ...
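The "slightly differently scaled version" mentioned above is the divide-by-''n'' estimator, versus the divide-by-(''n'' − 1) sample covariance matrix. A short sketch in Python with NumPy (random data; variable names are our own) showing the two scalings:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 2, 500
data = rng.standard_normal((n, p))  # rows are observations of a p-variate variable

# Sample covariance matrix: divides by n - 1 (unbiased in the extrinsic sense)
S_unbiased = np.cov(data, rowvar=False)

# Maximum likelihood estimate under normality: divides by n
S_mle = np.cov(data, rowvar=False, ddof=0)
```

The two estimators differ only by the factor n / (n − 1), so they coincide asymptotically as the sample grows.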


