Vectorization (mathematics)
In mathematics, especially in linear algebra and matrix theory, the vectorization of a matrix is a linear transformation which converts the matrix into a column vector. Specifically, the vectorization of an ''m'' × ''n'' matrix ''A'', denoted vec(''A''), is the ''mn'' × 1 column vector obtained by stacking the columns of the matrix ''A'' on top of one another:

:\operatorname{vec}(A) = [a_{1,1}, \ldots, a_{m,1}, a_{1,2}, \ldots, a_{m,2}, \ldots, a_{1,n}, \ldots, a_{m,n}]^\mathrm{T}

Here, a_{i,j} represents the entry A(i,j) and the superscript ^\mathrm{T} denotes the transpose. Vectorization expresses, through coordinates, the isomorphism \mathbf{R}^{m \times n} := \mathbf{R}^m \otimes \mathbf{R}^n \cong \mathbf{R}^{mn} between these (i.e., of matrices and vectors) as vector spaces. For example, for the 2×2 matrix A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, the vectorization is \operatorname{vec}(A) = \begin{bmatrix} a \\ c \\ b \\ d \end{bmatrix}. The connection between the vectorization of ''A'' and the vectorization of its transpose is given by the commutation matrix.

Compatibility with Kronecker products

The vector ...
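
As a concrete illustration (not part of the article), here is a minimal NumPy sketch of column-stacking vectorization, together with one standard way to build the commutation matrix by permuting rows of the identity; the sample matrix and the index bookkeeping are my own choices.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])   # plays the role of [[a, b], [c, d]]

# vec(A): stack the columns on top of one another.  NumPy stores arrays
# row-major, so flatten in Fortran (column-major) order.
vec_A = A.flatten(order="F")
print(vec_A)             # [1 3 2 4], i.e. [a, c, b, d]

# Commutation matrix K with K @ vec(A) == vec(A^T), built by permuting
# the rows of the identity (an illustrative construction, not the only one).
m, n = A.shape
perm = np.arange(m * n).reshape(m, n, order="F").ravel(order="C")
K = np.eye(m * n)[perm]
assert np.array_equal(K @ A.flatten(order="F"), A.T.flatten(order="F"))
```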



Mathematics
Mathematics is an area of knowledge that includes the topics of numbers, formulas and related structures, shapes and the spaces in which they are contained, and quantities and their changes. These topics are represented in modern mathematics with the major subdisciplines of number theory, algebra, geometry, and analysis, respectively. There is no general consensus among mathematicians about a common definition for their academic discipline. Most mathematical activity involves the discovery of properties of abstract objects and the use of pure reason to prove them. These objects consist of either abstractions from nature or, in modern mathematics, entities that are stipulated to have certain properties, called axioms. A ''proof'' consists of a succession of applications of deductive rules to already established results. These results include previously proved theorems, axioms, and, in case of abstraction from nature, some basic properties that are considered true starting points of ...




Algebra Homomorphism
In mathematics, an algebra homomorphism is a homomorphism between two associative algebras. More precisely, if ''A'' and ''B'' are algebras over a field (or commutative ring) ''K'', it is a function F\colon A\to B such that for all ''k'' in ''K'' and ''x'', ''y'' in ''A'':
* F(kx) = kF(x)
* F(x + y) = F(x) + F(y)
* F(xy) = F(x) F(y)
The first two conditions say that ''F'' is a ''K''-linear map (or ''K''-module homomorphism if ''K'' is a commutative ring), and the last condition says that ''F'' is a (non-unital) ring homomorphism. If ''F'' admits an inverse homomorphism, or equivalently if it is bijective, ''F'' is said to be an isomorphism between ''A'' and ''B''.

Unital algebra homomorphisms

If ''A'' and ''B'' are two unital algebras, then an algebra homomorphism F:A\rightarrow B is said to be ''unital'' if it maps the unity of ''A'' to the unity of ''B''. Often the words "algebra homomorphism" are actually used to mean "unital algebra homomorphism", in which case non-unital algebra homomorphisms are excluded. A unital algebra homomorphism is ...
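
For instance (my own illustration, not from the article), evaluation of real polynomials at a fixed point is a unital algebra homomorphism from R[x] to R; the sketch below checks the three conditions numerically with NumPy, and the evaluation point c = 2.0 is an arbitrary choice.

```python
import numpy as np

c = 2.0                                  # arbitrary evaluation point
F = lambda p: np.polyval(p, c)           # F(p) = p(c); coefficients high-to-low

p = np.array([1.0, -3.0, 2.0])           # x^2 - 3x + 2
q = np.array([4.0, 5.0])                 # 4x + 5
k = 7.0

assert np.isclose(F(k * p), k * F(p))                # F(kx) = k F(x)
assert np.isclose(F(np.polyadd(p, q)), F(p) + F(q))  # F(x + y) = F(x) + F(y)
assert np.isclose(F(np.polymul(p, q)), F(p) * F(q))  # F(xy) = F(x) F(y)
assert np.isclose(F(np.array([1.0])), 1.0)           # unital: 1 maps to 1
```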



GNU Octave
GNU Octave is a high-level programming language primarily intended for scientific computing and numerical computation. Octave helps in solving linear and nonlinear problems numerically, and in performing other numerical experiments using a language that is mostly compatible with MATLAB. It may also be used as a batch-oriented language. As part of the GNU Project, it is free software under the terms of the GNU General Public License.

History

The project was conceived around 1988. At first it was intended to be a companion to a chemical reactor design course. Full development was started by John W. Eaton in 1992. The first alpha release dates back to 4 January 1993, and on 17 February 1994 version 1.0 was released. Version 7.1.0 was released on 6 April 2022. The program is named after Octave Levenspiel, a former professor of the principal author. Levenspiel was known for his ability to perform quick back-of-the-envelope calculations.

Development history

Developments

In addition ...



MATLAB
MATLAB (an abbreviation of "MATrix LABoratory") is a proprietary multi-paradigm programming language and numeric computing environment developed by MathWorks. MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages. Although MATLAB is intended primarily for numeric computing, an optional toolbox uses the MuPAD symbolic engine, allowing access to symbolic computing abilities. An additional package, Simulink, adds graphical multi-domain simulation and model-based design for dynamic and embedded systems. As of 2020, MATLAB has more than 4 million users worldwide. They come from various backgrounds of engineering, science, and economics.

History

Origins

MATLAB was invented by mathematician and computer programmer Cleve Moler. The idea for MATLAB was based on his 1960s PhD thesis. Moler became a math professor at the University of New Mexico and starte ...


Elimination Matrix
In mathematics, especially in linear algebra and matrix theory, the duplication matrix and the elimination matrix are linear transformations used for transforming half-vectorizations of matrices into vectorizations or (respectively) vice versa.

Duplication matrix

The duplication matrix D_n is the unique n^2 \times \frac{n(n+1)}{2} matrix which, for any n \times n symmetric matrix A, transforms \operatorname{vech}(A) into \operatorname{vec}(A):

:D_n \operatorname{vech}(A) = \operatorname{vec}(A).

For the 2 \times 2 symmetric matrix A = \begin{bmatrix} a & b \\ b & d \end{bmatrix}, this transformation reads

:D_2 \operatorname{vech}(A) = \operatorname{vec}(A) \implies \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} a \\ b \\ d \end{bmatrix} = \begin{bmatrix} a \\ b \\ b \\ d \end{bmatrix}

The explicit formula for calculating the duplication matrix for an n \times n matrix is:

:D^\mathrm{T}_n = \sum_{i \geq j} u_{ij} (\operatorname{vec}\, T_{ij})^\mathrm{T}

where:
* u_{ij} is a unit vector of order \frac{1}{2} n(n+1) having the value 1 in the position (j-1)n + i - \frac{1}{2} j(j-1) and 0 elsewhere;
* T_{ij} is an n \times n matrix with 1 in positions (i,j) and (j,i) and zero elsewhere ...
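
Below is a small NumPy sketch (my own, following the indexing conventions above) that builds D_n from the explicit formula and checks D_2 vech(A) = vec(A) on a symmetric 2 × 2 example; the helper names duplication_matrix and vech are mine.

```python
import numpy as np

def duplication_matrix(n):
    """Build D_n via the explicit formula D_n^T = sum_{i>=j} u_ij (vec T_ij)^T."""
    m = n * (n + 1) // 2
    D_T = np.zeros((m, n * n))
    for j in range(1, n + 1):              # 1-based indices, as in the formula
        for i in range(j, n + 1):
            u = np.zeros(m)                # unit vector of order n(n+1)/2
            u[(j - 1) * n + i - j * (j - 1) // 2 - 1] = 1.0
            T = np.zeros((n, n))           # T_ij: 1 in positions (i,j) and (j,i)
            T[i - 1, j - 1] = T[j - 1, i - 1] = 1.0
            D_T += np.outer(u, T.flatten(order="F"))
    return D_T.T

def vech(A):
    """Half-vectorization: stack the on-and-below-diagonal parts of each column."""
    return np.concatenate([A[j:, j] for j in range(A.shape[0])])

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])                          # symmetric, [a b; b d]
D2 = duplication_matrix(2)
print(D2)                                           # the 4x3 matrix shown above
assert np.array_equal(D2 @ vech(A), A.flatten(order="F"))   # D_2 vech(A) = vec(A)
```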


Duplication Matrix
In mathematics, especially in linear algebra and matrix theory, the duplication matrix and the elimination matrix are linear transformations used for transforming half-vectorizations of matrices into vectorizations or (respectively) vice versa.

Duplication matrix

The duplication matrix D_n is the unique n^2 \times \frac{n(n+1)}{2} matrix which, for any n \times n symmetric matrix A, transforms \operatorname{vech}(A) into \operatorname{vec}(A):

:D_n \operatorname{vech}(A) = \operatorname{vec}(A).

For the 2 \times 2 symmetric matrix A = \begin{bmatrix} a & b \\ b & d \end{bmatrix}, this transformation reads

:D_2 \operatorname{vech}(A) = \operatorname{vec}(A) \implies \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} a \\ b \\ d \end{bmatrix} = \begin{bmatrix} a \\ b \\ b \\ d \end{bmatrix}

The explicit formula for calculating the duplication matrix for an n \times n matrix is:

:D^\mathrm{T}_n = \sum_{i \geq j} u_{ij} (\operatorname{vec}\, T_{ij})^\mathrm{T}

where:
* u_{ij} is a unit vector of order \frac{1}{2} n(n+1) having the value 1 in the position (j-1)n + i - \frac{1}{2} j(j-1) and 0 elsewhere;
* T_{ij} is an n \times n matrix with 1 in positions (i,j) and (j,i) and zero elsewhere ...


Main Diagonal
In linear algebra, the main diagonal (sometimes principal diagonal, primary diagonal, leading diagonal, major diagonal, or good diagonal) of a matrix A is the list of entries a_{i,j} where i = j. All off-diagonal elements are zero in a diagonal matrix. The following four matrices have their main diagonals indicated by red ones:

:\begin{bmatrix} \color{red}{1} & 0 & 0 \\ 0 & \color{red}{1} & 0 \\ 0 & 0 & \color{red}{1} \end{bmatrix} \qquad \begin{bmatrix} \color{red}{1} & 0 & 0 & 0 \\ 0 & \color{red}{1} & 0 & 0 \\ 0 & 0 & \color{red}{1} & 0 \end{bmatrix} \qquad \begin{bmatrix} \color{red}{1} & 0 & 0 \\ 0 & \color{red}{1} & 0 \\ 0 & 0 & \color{red}{1} \\ 0 & 0 & 0 \end{bmatrix} \qquad \begin{bmatrix} \color{red}{1} & 0 & 0 & 0 \\ 0 & \color{red}{1} & 0 & 0 \\ 0 & 0 & \color{red}{1} & 0 \\ 0 & 0 & 0 & \color{red}{1} \end{bmatrix}

Antidiagonal

The antidiagonal (sometimes counter diagonal, secondary diagonal, trailing diagonal, minor diagonal, off diagonal, or bad diagonal) of an order N square matrix B is the collection of entries b_{i,j} such that i + j = N+1 for all 1 \leq i, j \leq N. That is, it runs from the top right corner to the bottom left corner. ...
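
In NumPy (an illustration of the definitions, not part of the article; the sample matrix B is my own), the main diagonal and the antidiagonal can be extracted like this:

```python
import numpy as np

B = np.arange(1, 17).reshape(4, 4)   # 4x4 matrix with entries 1..16

main = np.diag(B)                    # entries b_ij with i == j
anti = np.diag(np.fliplr(B))         # entries with i + j == N + 1 (1-based)

print(main)                          # [ 1  6 11 16]
print(anti)                          # [ 4  7 10 13], top right to bottom left
```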




Lower Triangular Matrix
In mathematics, a triangular matrix is a special kind of square matrix. A square matrix is called lower triangular if all the entries ''above'' the main diagonal are zero. Similarly, a square matrix is called upper triangular if all the entries ''below'' the main diagonal are zero. Because matrix equations with triangular matrices are easier to solve, they are very important in numerical analysis. By the LU decomposition algorithm, an invertible matrix may be written as the product of a lower triangular matrix ''L'' and an upper triangular matrix ''U'' if and only if all its leading principal minors are non-zero.

Description

A matrix of the form

:L = \begin{bmatrix} \ell_{1,1} & & & & 0 \\ \ell_{2,1} & \ell_{2,2} & & & \\ \ell_{3,1} & \ell_{3,2} & \ddots & & \\ \vdots & \vdots & \ddots & \ddots & \\ \ell_{n,1} & \ell_{n,2} & \ldots & \ell_{n,n-1} & \ell_{n,n} \end{bmatrix}

is called a lower triangular matrix or left triangular matrix, and a ...
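
To illustrate why triangular systems are easy to solve, here is a sketch of forward substitution in NumPy (my own example, not taken from the article):

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L x = b for lower-triangular L, one unknown per row."""
    n = L.shape[0]
    x = np.zeros(n)
    for i in range(n):
        # Row i involves only x[0..i]; the earlier entries are already known.
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

L = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [4.0, 5.0, 6.0]])
b = np.array([2.0, 5.0, 32.0])
x = forward_substitution(L, b)
assert np.allclose(L @ x, b)   # the residual confirms the solution
```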



Symmetric Matrix
In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally, ''A'' is symmetric if A = A^\mathrm{T}. Because equal matrices have equal dimensions, only square matrices can be symmetric. The entries of a symmetric matrix are symmetric with respect to the main diagonal. So if a_{ij} denotes the entry in the ''i''th row and ''j''th column, then a_{ij} = a_{ji} for all indices ''i'' and ''j''. Every square diagonal matrix is symmetric, since all off-diagonal elements are zero. Similarly, in characteristic different from 2, each diagonal element of a skew-symmetric matrix must be zero, since each is its own negative. In linear algebra, a real symmetric matrix represents a self-adjoint operator represented in an orthonormal basis over a real inner product space. The corresponding object for a complex inner product space is a Hermitian matrix with complex-valued entries, which is equal to its conjugate transpose. Therefore, in linear algebra over the complex numbers, it is often assumed that a symmetric matrix refe ...
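
A quick NumPy illustration (mine, not the article's) of the definition, and of the fact that a real skew-symmetric matrix has zero diagonal:

```python
import numpy as np

A = np.array([[1.0, 7.0, 3.0],
              [7.0, 4.0, -5.0],
              [3.0, -5.0, 6.0]])
assert np.array_equal(A, A.T)        # symmetric: entries mirror the main diagonal

# Over the reals, M - M^T is skew-symmetric, so its diagonal must vanish:
M = np.array([[1.0, 2.0],
              [5.0, 3.0]])
K = M - M.T
assert np.allclose(K, -K.T) and np.allclose(np.diag(K), 0.0)
```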


Conjugate Transpose
In mathematics, the conjugate transpose, also known as the Hermitian transpose, of an m \times n complex matrix \boldsymbol{A} is an n \times m matrix obtained by transposing \boldsymbol{A} and applying complex conjugation to each entry (the complex conjugate of a+ib being a-ib, for real numbers a and b). It is often denoted as \boldsymbol{A}^\mathrm{H} or \boldsymbol{A}^* or \boldsymbol{A}'. H. W. Turnbull, A. C. Aitken, "An Introduction to the Theory of Canonical Matrices," 1932. For real matrices, the conjugate transpose is just the transpose: \boldsymbol{A}^\mathrm{H} = \boldsymbol{A}^\mathsf{T}.

Definition

The conjugate transpose of an m \times n matrix \boldsymbol{A} is formally defined by

:\left(\boldsymbol{A}^\mathrm{H}\right)_{ij} = \overline{\boldsymbol{A}_{ji}}

where the subscript ij denotes the (i,j)-th entry, for 1 \le i \le n and 1 \le j \le m, and the overbar denotes a scalar complex conjugate. This definition can also be written as

:\boldsymbol{A}^\mathrm{H} = \left(\overline{\boldsymbol{A}}\right)^\mathsf{T} = \overline{\boldsymbol{A}^\mathsf{T}}

where \boldsymbol{A}^\mathsf{T} denotes the transpose and \overline{\boldsymbol{A}} denotes the ...
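
In NumPy terms (an illustration, with a sample matrix of my own choosing), the conjugate transpose is conjugation composed with transposition, in either order:

```python
import numpy as np

A = np.array([[1 + 2j, 3 - 1j],
              [0 + 4j, 5 + 0j],
              [2 - 3j, 1 + 1j]])         # a 3x2 complex matrix

A_H = A.conj().T                         # A^H, a 2x3 matrix
assert np.array_equal(A_H, A.T.conj())   # (conj A)^T == conj(A^T)

R = np.array([[1.0, 2.0],
              [3.0, 4.0]])               # real matrix: A^H is just A^T
assert np.array_equal(R.conj().T, R.T)
```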



Inner Product
In mathematics, an inner product space (or, rarely, a Hausdorff pre-Hilbert space) is a real vector space or a complex vector space with an operation called an inner product. The inner product of two vectors in the space is a scalar, often denoted with angle brackets such as in \langle a, b \rangle. Inner products allow formal definitions of intuitive geometric notions, such as lengths, angles, and orthogonality (zero inner product) of vectors. Inner product spaces generalize Euclidean vector spaces, in which the inner product is the dot product or ''scalar product'' of Cartesian coordinates. Inner product spaces of infinite dimension are widely used in functional analysis. Inner product spaces over the field of complex numbers are sometimes referred to as unitary spaces. The first usage of the concept of a vector space with an inner product is due to Giuseppe Peano, in ...
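
As a concrete example (mine, not the article's), the dot product on R^3 recovers length and orthogonality; for the complex case, np.vdot conjugates its first argument, matching the usual complex inner product convention:

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([2.0, 0.0, -1.0])

ip = np.dot(a, b)                  # <a, b> = 0: a and b are orthogonal
length_a = np.sqrt(np.dot(a, a))   # length induced by the inner product: 3.0
print(ip, length_a)

u = np.array([1 + 1j, 0 + 2j])     # complex case: <u, u> is real and positive
assert np.isclose(np.vdot(u, u).imag, 0.0) and np.vdot(u, u).real > 0
```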


Hilbert–Schmidt Operator
In mathematics, a Hilbert–Schmidt operator, named after David Hilbert and Erhard Schmidt, is a bounded operator A \colon H \to H that acts on a Hilbert space H and has finite Hilbert–Schmidt norm

:\|A\|^2_{\mathrm{HS}} \ \stackrel{\text{def}}{=}\ \sum_{i \in I} \|Ae_i\|^2_H,

where \{e_i : i \in I\} is an orthonormal basis. The index set I need not be countable. However, the sum on the right must contain at most countably many non-zero terms, to have meaning. This definition is independent of the choice of the orthonormal basis. In finite-dimensional Euclidean space, the Hilbert–Schmidt norm \|\cdot\|_{\mathrm{HS}} is identical to the Frobenius norm.

\|\cdot\|_{\mathrm{HS}} is well defined

The Hilbert–Schmidt norm does not depend on the choice of orthonormal basis. Indeed, if \{e_i\}_{i \in I} and \{f_j\}_{j \in I} are such bases, then

:\sum_i \|Ae_i\|^2 = \sum_{i,j} \left|\langle Ae_i, f_j\rangle\right|^2 = \sum_{i,j} \left|\langle e_i, A^*f_j\rangle\right|^2 = \sum_j \|A^* f_j\|^2.

If e_i = f_i, then \sum_i \|Ae_i\|^2 = \sum_i \|A^* e_i\|^2. As for any bounded operato ...
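
A finite-dimensional NumPy check (my own sketch) that the sum defining the Hilbert–Schmidt norm equals the Frobenius norm and does not depend on the orthonormal basis:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((4, 4))

# Sum over the standard basis e_1, ..., e_4 (rows of the identity):
hs = np.sqrt(sum(np.linalg.norm(A @ e) ** 2 for e in np.eye(4)))
assert np.isclose(hs, np.linalg.norm(A, "fro"))   # equals the Frobenius norm

# Same sum in a different orthonormal basis (columns of an orthogonal Q):
Q, _ = np.linalg.qr(rng.random((4, 4)))
hs_q = np.sqrt(sum(np.linalg.norm(A @ q) ** 2 for q in Q.T))
assert np.isclose(hs, hs_q)                       # basis independence
```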