Operator Monotone Function
In linear algebra, an operator monotone function is an important type of real-valued function, first described by Charles Löwner in 1934. It is closely allied to the operator convex and operator concave functions, is encountered in operator theory and in matrix theory, and led to the Löwner–Heinz inequality.

Definition

A function f : I \to \mathbb{R} defined on an interval I \subseteq \mathbb{R} is said to be operator monotone if, whenever A and B are Hermitian matrices (of any size/dimensions) whose eigenvalues all belong to the domain of f and whose difference A - B is a positive semi-definite matrix, then necessarily f(A) - f(B) \geq 0, where f(A) and f(B) are the values of the matrix function induced by f (which are matrices of the same size as A and B).

Notation

This definition is frequently expressed with the notation that is now defined. Write A \geq 0 to indicate that a matrix A is positive semi-definite and write A \geq B to indicate that the difference A - B of two ...
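As a concrete numerical illustration (a minimal NumPy sketch, not part of the original entry), the pair A \geq B below shows that f(x) = \sqrt{x} respects the Löwner order while f(x) = x^2, although monotone on the reals, does not:

import numpy as np

def psd_sqrt(M):
    # Square root of a symmetric positive semi-definite matrix via its spectral decomposition.
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

# A - B = [[1, 0], [0, 0]] is positive semi-definite, so B <= A in the Loewner order.
A = np.array([[2.0, 1.0], [1.0, 1.0]])
B = np.array([[1.0, 1.0], [1.0, 1.0]])

# f(x) = sqrt(x) is operator monotone: sqrt(A) - sqrt(B) is again positive semi-definite.
print(np.linalg.eigvalsh(psd_sqrt(A) - psd_sqrt(B)))  # both eigenvalues >= 0

# f(x) = x^2 is monotone on [0, oo) but not operator monotone:
print(np.linalg.eigvalsh(A @ A - B @ B))  # one eigenvalue is negative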


Real-valued Function
In mathematics, a real-valued function is a function whose values are real numbers. In other words, it is a function that assigns a real number to each member of its domain. Real-valued functions of a real variable (commonly called ''real functions'') and real-valued functions of several real variables are the main object of study of calculus and, more generally, real analysis. In particular, many function spaces consist of real-valued functions.

Algebraic structure

Let F(X, \mathbb R) be the set of all functions from a set X to the real numbers \mathbb R. Because \mathbb R is a field, F(X, \mathbb R) may be turned into a vector space and a commutative algebra over the reals with the following operations:
*f+g: x \mapsto f(x) + g(x) – vector addition
*\mathbf{0}: x \mapsto 0 – additive identity
*c f: x \mapsto c f(x),\quad c \in \mathbb R – scalar multiplication
*f g: x \mapsto f(x)g(x) – pointwise multiplication
These operations extend to partial functions from X to \mathbb R, with the restricti ...
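The four operations above translate directly into code. A minimal Python sketch (the names add, zero, scale, and mul are illustrative, not from the entry):

import math
from typing import Callable, TypeVar

X = TypeVar("X")
RealFn = Callable[[X], float]

def add(f: RealFn, g: RealFn) -> RealFn:   # vector addition
    return lambda x: f(x) + g(x)

def zero(_: X) -> float:                   # additive identity
    return 0.0

def scale(c: float, f: RealFn) -> RealFn:  # scalar multiplication
    return lambda x: c * f(x)

def mul(f: RealFn, g: RealFn) -> RealFn:   # pointwise multiplication
    return lambda x: f(x) * g(x)

# Example: build h(x) = 2 sin x + cos^2 x from simpler real functions.
h = add(scale(2.0, math.sin), mul(math.cos, math.cos))
print(h(0.5))  # approx. 1.729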


Charles Löwner
Charles Loewner (29 May 1893 – 8 January 1968) was an American mathematician. His name was Karel Löwner in Czech and Karl Löwner in German. Karl Loewner was born into a Jewish family in Lány, about 30 km from Prague, where his father Sigmund Löwner was a store owner. Loewner received his Ph.D. from the University of Prague in 1917 under the supervision of Georg Pick. One of his central mathematical contributions is the proof of the Bieberbach conjecture in the first highly nontrivial case of the third coefficient. The technique he introduced, the Loewner differential equation, has had far-reaching implications in geometric function theory; it was used in the final solution of the Bieberbach conjecture by Louis de Branges in 1985. Loewner worked at the University of Berlin, University of Prague, University of Louisville, Brown University, Syracuse University, and eventually at Stanford University. His students include Lipman Bers, Roger Horn, Adriano Garsia, and P. M. Pu. ...


Operator Theory
In mathematics, operator theory is the study of linear operators on function spaces, beginning with differential operators and integral operators. The operators may be presented abstractly by their characteristics, such as bounded linear operators or closed operators, and consideration may be given to nonlinear operators. The study, which depends heavily on the topology of function spaces, is a branch of functional analysis. If a collection of operators forms an algebra over a field, then it is an operator algebra. The description of operator algebras is part of operator theory.

Single operator theory

Single operator theory deals with the properties and classification of operators, considered one at a time. For example, the classification of normal operators in terms of their spectra falls into this category.

Spectrum of operators

The spectral theorem is any of a number of results about linear operators or about matrices. In broad terms the spectral theorem provides cond ...
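For a finite-dimensional taste of the classification of normal operators by their spectra, the sketch below (an illustrative NumPy check, not from the entry) verifies that a cyclic-shift matrix is normal and computes its spectrum:

import numpy as np

# The cyclic shift is unitary, hence normal: it commutes with its conjugate transpose.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=complex)

print(np.allclose(A @ A.conj().T, A.conj().T @ A))  # True: A is normal
print(np.linalg.eigvals(A))  # spectrum: the three cube roots of unity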



Matrix Theory
In mathematics, a matrix (plural matrices) is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns, which is used to represent a mathematical object or a property of such an object. For example, \begin{bmatrix} 1 & 9 & -13 \\ 20 & 5 & -6 \end{bmatrix} is a matrix with two rows and three columns. This is often referred to as a "two by three matrix", a "2 \times 3 matrix", or a matrix of dimension 2 \times 3. Without further specifications, matrices represent linear maps, and allow explicit computations in linear algebra. Therefore, the study of matrices is a large part of linear algebra, and most properties and operations of abstract linear algebra can be expressed in terms of matrices. For example, matrix multiplication represents composition of linear maps. Not all matrices are related to linear algebra. This is, in particular, the case in graph theory, of incidence matrices, and adjacency matrices. ''This article focuses on matrices related to linear algebra, and, un ...
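One quick numerical check (an illustrative sketch, not from the entry) that matrix multiplication represents composition of linear maps:

import numpy as np

# If f(x) = A x and g(x) = B x, then (f o g)(x) = (A B) x.
A = np.array([[1, 9, -13],
              [20, 5, -6]])   # the 2 x 3 matrix above: a linear map R^3 -> R^2
B = np.array([[1, 0, 2],
              [0, 1, 1],
              [3, -1, 0]])    # an arbitrary 3 x 3 matrix: a linear map R^3 -> R^3
x = np.array([1, 2, 3])

print(np.allclose(A @ (B @ x), (A @ B) @ x))  # True: composing the maps equals multiplying the matrices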


Hermitian Matrix
In mathematics, a Hermitian matrix (or self-adjoint matrix) is a complex square matrix that is equal to its own conjugate transpose, that is, the element in the i-th row and j-th column is equal to the complex conjugate of the element in the j-th row and i-th column, for all indices i and j: a_{ij} = \overline{a_{ji}}, or in matrix form: A \text{ is Hermitian} \quad \iff \quad A = \overline{A^\mathsf{T}}. Hermitian matrices can be understood as the complex extension of real symmetric matrices. If the conjugate transpose of a matrix A is denoted by A^\mathsf{H}, then the Hermitian property can be written concisely as A^\mathsf{H} = A. Hermitian matrices are named after Charles Hermite, who demonstrated in 1855 that matrices of this form share a property with real symmetric matrices of always having real eigenvalues. Other, equivalent notations in common use are A^\mathsf{H} = A^\dagger = A^\ast, although note that in quantum mechanics, A^\ast typically means the complex conjugate only, and not the conjugate transpose.

Alternative characterizations

Hermit ...
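A minimal NumPy sketch (illustrative, not from the entry) that builds a Hermitian matrix and confirms its eigenvalues are real:

import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = (Z + Z.conj().T) / 2  # averaging with the conjugate transpose forces A = A^H

print(np.allclose(A, A.conj().T))  # True: A equals its own conjugate transpose
print(np.linalg.eigvalsh(A))       # the eigenvalues are real, as Hermite demonstrated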



Eigenvalues And Eigenvectors
In linear algebra, an eigenvector or characteristic vector of a linear transformation is a nonzero vector that changes at most by a scalar factor when that linear transformation is applied to it. The corresponding eigenvalue, often denoted by \lambda, is the factor by which the eigenvector is scaled. Geometrically, an eigenvector, corresponding to a real nonzero eigenvalue, points in a direction in which it is stretched by the transformation, and the eigenvalue is the factor by which it is stretched. If the eigenvalue is negative, the direction is reversed. Loosely speaking, in a multidimensional vector space, the eigenvector is not rotated.

Formal definition

If T is a linear transformation from a vector space V over a field F into itself and \mathbf{v} is a nonzero vector in V, then \mathbf{v} is an eigenvector of T if T(\mathbf{v}) is a scalar multiple of \mathbf{v}. This can be written as T(\mathbf{v}) = \lambda \mathbf{v}, where \lambda is a scalar in F, known as the eigenvalue, characteristic value, or characteristic root ass ...
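A short numerical verification of the defining relation A v = \lambda v (an illustrative sketch, not from the entry):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w, V = np.linalg.eig(A)

# Each column of V is an eigenvector: A only scales it by the matching eigenvalue.
for lam, v in zip(w, V.T):
    print(lam, np.allclose(A @ v, lam * v))  # 3.0 True, then 1.0 True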


Positive Semi-definite Matrix
In mathematics, a symmetric matrix M with real entries is positive-definite if the real number z^\mathsf{T} M z is positive for every nonzero real column vector z, where z^\mathsf{T} is the transpose of z. More generally, a Hermitian matrix (that is, a complex matrix equal to its conjugate transpose) is positive-definite if the real number z^* M z is positive for every nonzero complex column vector z, where z^* denotes the conjugate transpose of z. Positive semi-definite matrices are defined similarly, except that the scalars z^\mathsf{T} M z and z^* M z are required to be positive ''or zero'' (that is, nonnegative). Negative-definite and negative semi-definite matrices are defined analogously. A matrix that is not positive semi-definite and not negative semi-definite is sometimes called indefinite. A matrix is thus positive-definite if and only if it is the matrix of a positive-definite quadratic form or Hermitian form. In other words, a matrix is positive-definite if and only if it defines a ...
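As a sketch of how the definition is checked in practice (illustrative NumPy code, not from the entry; positive-definiteness is equivalent to all eigenvalues being positive):

import numpy as np

M = np.array([[2.0, -1.0],
              [-1.0, 2.0]])  # a symmetric matrix with real entries

print(np.linalg.eigvalsh(M))  # [1. 3.]: all positive, so M is positive-definite

z = np.array([0.3, -1.2])
print(z @ M @ z > 0)          # True, as it is for every nonzero real z

np.linalg.cholesky(M)         # succeeds exactly when M is positive-definite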


Matrix Function
In mathematics, every analytic function can be used for defining a matrix function that maps square matrices with complex entries to square matrices of the same size. This is used for defining the exponential of a matrix, which is involved in the closed-form solution of systems of linear differential equations.

Extending scalar functions to matrix functions

There are several techniques for lifting a real function to a square matrix function such that interesting properties are maintained. All of the following techniques yield the same matrix function, but the domains on which the function is defined may differ.

Power series

If the analytic function f has the Taylor expansion f(x) = c_0 + c_1 x + c_2 x^2 + \cdots then a matrix function A \mapsto f(A) can be defined by substituting x by a square matrix: powers become matrix powers, additions become matrix sums, and multiplications by coefficients become scalar multiplications. If the series converges for |x| < r, then the ...
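The power-series construction for the matrix exponential, in a minimal sketch (illustrative code, not from the entry; compared against scipy.linalg.expm as a reference):

import numpy as np
from scipy.linalg import expm

def matrix_exp_series(A, terms=30):
    # Truncated power series exp(A) ~ I + A + A^2/2! + ..., per the construction above.
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k  # term is now A^k / k!
        result = result + term
    return result

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])  # this generator exponentiates to a plane rotation
print(np.allclose(matrix_exp_series(A), expm(A)))  # True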




Spectral Theorem
In mathematics, particularly linear algebra and functional analysis, a spectral theorem is a result about when a linear operator or matrix can be diagonalized (that is, represented as a diagonal matrix in some basis). This is extremely useful because computations involving a diagonalizable matrix can often be reduced to much simpler computations involving the corresponding diagonal matrix. The concept of diagonalization is relatively straightforward for operators on finite-dimensional vector spaces but requires some modification for operators on infinite-dimensional spaces. In general, the spectral theorem identifies a class of linear operators that can be modeled by multiplication operators, which are as simple as one can hope to find. In more abstract language, the spectral theorem is a statement about commutative C*-algebras. See also spectral theory for a historical perspective. Examples of operators to which the spectral theorem appl ...
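The finite-dimensional Hermitian case in a minimal sketch (illustrative NumPy code, not from the entry): the theorem guarantees A = V diag(w) V^H with V unitary, which the code verifies numerically.

import numpy as np

A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])  # Hermitian
w, V = np.linalg.eigh(A)

print(np.allclose(V @ np.diag(w) @ V.conj().T, A))  # True: the diagonalization recovers A
print(np.allclose(V.conj().T @ V, np.eye(2)))       # True: the eigenbasis is orthonormal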