In mathematics, every analytic function can be used for defining a matrix function that maps square matrices with complex entries to square matrices of the same size. This is used for defining the exponential of a matrix, which is involved in the closed-form solution of systems of linear differential equations.


Extending scalar functions to matrix functions

There are several techniques for lifting a real function to a square matrix function such that interesting properties are maintained. All of the following techniques yield the same matrix function, but the domains on which the function is defined may differ.


Power series

If the analytic function f has the Taylor expansion

f(x) = c_0 + c_1 x + c_2 x^2 + \cdots,

then a matrix function A\mapsto f(A) can be defined by substituting x by a square matrix: powers become matrix powers, additions become matrix sums, and multiplications by coefficients become scalar multiplications. If the series converges for |x| < r, then the corresponding matrix series

f(A) = c_0 I + c_1 A + c_2 A^2 + \cdots

converges for matrices A such that \|A\| < r for some matrix norm that satisfies \|AB\| \leq \|A\|\,\|B\|.
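
A minimal computational sketch of this definition, assuming Python with NumPy (the helper name taylor_matrix_exp is illustrative, not a library routine), approximates the matrix exponential by truncating its Taylor series:

<syntaxhighlight lang="python">
import numpy as np

def taylor_matrix_exp(A, terms=30):
    """Approximate exp(A) by a truncated Taylor series.

    Powers of x become matrix powers of A, and the scalar coefficients
    1/k! multiply the resulting matrices, as in the power-series
    definition above.
    """
    n = A.shape[0]
    result = np.zeros_like(A, dtype=float)
    term = np.eye(n)               # A^0 / 0!
    for k in range(terms):
        result += term
        term = term @ A / (k + 1)  # next term: A^(k+1) / (k+1)!
    return result

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])        # exp(A) is a rotation by 1 radian
print(taylor_matrix_exp(A))
print(np.array([[np.cos(1), np.sin(1)],
                [-np.sin(1), np.cos(1)]]))   # closed form for comparison
</syntaxhighlight>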


Diagonalizable matrices

A square matrix A is diagonalizable if there is an invertible matrix P such that D = P^{-1}\,A\,P is a diagonal matrix, that is, D has the shape

D=\begin{pmatrix} d_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & d_n \end{pmatrix}.

As A = P\,D\,P^{-1}, it is natural to set

f(A)=P\, \begin{pmatrix} f(d_1) & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & f(d_n) \end{pmatrix}\,P^{-1}.

It can be verified that the matrix f(A) does not depend on a particular choice of P.

For example, suppose one is seeking \Gamma(A) = (A-1)! for

A = \begin{pmatrix} 1&3\\ 2&1 \end{pmatrix}.

One has

A = P \begin{pmatrix} 1-\sqrt{6}& 0 \\ 0 & 1+ \sqrt{6} \end{pmatrix} P^{-1}~,

for

P= \begin{pmatrix} 1/2 & 1/2 \\ -\frac{1}{\sqrt{6}} &\frac{1}{\sqrt{6}} \end{pmatrix} ~.

Application of the formula then simply yields

\Gamma(A) = \begin{pmatrix} 1/2 & 1/2 \\ -\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} \end{pmatrix} \cdot \begin{pmatrix} \Gamma(1-\sqrt{6}) & 0\\ 0&\Gamma(1+\sqrt{6}) \end{pmatrix} \cdot \begin{pmatrix} 1 & -\sqrt{6}/2 \\ 1 & \sqrt{6}/2 \end{pmatrix} \approx \begin{pmatrix} 2.8114 & 0.4080 \\ 0.2720 & 2.8114 \end{pmatrix} ~.

Likewise,

A^4 = \begin{pmatrix} 1/2 & 1/2 \\ -\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} \end{pmatrix} \cdot \begin{pmatrix} (1-\sqrt{6})^4 & 0\\ 0&(1+\sqrt{6})^4 \end{pmatrix} \cdot \begin{pmatrix} 1 & -\sqrt{6}/2 \\ 1 & \sqrt{6}/2 \end{pmatrix} = \begin{pmatrix} 73 & 84\\ 56 & 73 \end{pmatrix} ~.
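
The example above can be reproduced numerically with the following sketch, assuming Python with NumPy and SciPy (fun_of_diagonalizable is an illustrative name, not a library routine):

<syntaxhighlight lang="python">
import numpy as np
from scipy.special import gamma

def fun_of_diagonalizable(f, A):
    """Evaluate f(A) = P diag(f(d_1), ..., f(d_n)) P^{-1} via an
    eigendecomposition; assumes A is diagonalizable."""
    d, P = np.linalg.eig(A)                      # A = P diag(d) P^{-1}
    return P @ np.diag(f(d)) @ np.linalg.inv(P)

A = np.array([[1.0, 3.0],
              [2.0, 1.0]])
print(fun_of_diagonalizable(gamma, A))           # ~[[2.8114, 0.4080], [0.2720, 2.8114]]
print(fun_of_diagonalizable(lambda d: d**4, A))  # [[73, 84], [56, 73]]
</syntaxhighlight>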


Jordan decomposition

All complex matrices, whether they are diagonalizable or not, have a Jordan normal form A = P\,J\,P^{-1}, where the matrix ''J'' consists of Jordan blocks. Consider these blocks separately and apply the power series to a Jordan block:

f \left( \begin{pmatrix} \lambda & 1 & 0 & \cdots & 0 \\ 0 & \lambda & 1 & \vdots & \vdots \\ 0 & 0 & \ddots & \ddots & \vdots \\ \vdots & \cdots & \ddots & \lambda & 1 \\ 0 & \cdots & \cdots & 0 & \lambda \end{pmatrix} \right) = \begin{pmatrix} \frac{f(\lambda)}{0!} & \frac{f'(\lambda)}{1!} & \frac{f''(\lambda)}{2!} & \cdots & \frac{f^{(n-1)}(\lambda)}{(n-1)!} \\ 0 & \frac{f(\lambda)}{0!} & \frac{f'(\lambda)}{1!} & \vdots & \frac{f^{(n-2)}(\lambda)}{(n-2)!} \\ 0 & 0 & \ddots & \ddots & \vdots \\ \vdots & \cdots & \ddots & \frac{f(\lambda)}{0!} & \frac{f'(\lambda)}{1!} \\ 0 & \cdots & \cdots & 0 & \frac{f(\lambda)}{0!} \end{pmatrix}.

This definition can be used to extend the domain of the matrix function beyond the set of matrices with spectral radius smaller than the radius of convergence of the power series. Note that there is also a connection to divided differences.

A related notion is the Jordan–Chevalley decomposition which expresses a matrix as a sum of a diagonalizable and a nilpotent part.
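
A brief sketch of the Jordan-block formula, assuming Python with NumPy and SciPy (fun_of_jordan_block is an illustrative helper; the derivatives of f are supplied explicitly as callables):

<syntaxhighlight lang="python">
import numpy as np
from math import factorial
from scipy.linalg import expm

def fun_of_jordan_block(derivs, lam, m):
    """Evaluate f on an m-by-m Jordan block with eigenvalue lam.

    derivs[k] is a callable returning the k-th derivative of f; entry
    (i, j) of the result is f^(j-i)(lam) / (j-i)! for j >= i, matching
    the upper-triangular formula above.
    """
    F = np.zeros((m, m))
    for i in range(m):
        for j in range(i, m):
            F[i, j] = derivs[j - i](lam) / factorial(j - i)
    return F

# f = exp, all of whose derivatives are exp; a 3x3 block with eigenvalue 2.
print(fun_of_jordan_block([np.exp] * 3, 2.0, 3))

# Cross-check against the matrix exponential of the same Jordan block.
J = 2.0 * np.eye(3) + np.diag(np.ones(2), 1)
print(expm(J))
</syntaxhighlight>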


Hermitian matrices

A Hermitian matrix has all real eigenvalues and can always be diagonalized by a unitary matrix P, according to the spectral theorem. In this case, the Jordan definition is natural. Moreover, this definition allows one to extend standard inequalities for real functions: if f(a) \leq g(a) for all eigenvalues a of A, then f(A) \preceq g(A). (As a convention, X \preceq Y \Leftrightarrow Y - X is a positive-semidefinite matrix.) The proof follows directly from the definition.
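
The following sketch, assuming Python with NumPy (fun_of_hermitian is an illustrative name), checks such an inequality numerically for f(x) = x and g(x) = e^x - 1, which satisfy f(a) \leq g(a) for every real a:

<syntaxhighlight lang="python">
import numpy as np

def fun_of_hermitian(f, A):
    """Apply a scalar function to a Hermitian matrix through its
    spectral decomposition A = U diag(a) U*."""
    a, U = np.linalg.eigh(A)
    return U @ np.diag(f(a)) @ U.conj().T

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))
A = (X + X.T) / 2                    # a random real symmetric (Hermitian) matrix

# f(x) = x and g(x) = exp(x) - 1 satisfy f(a) <= g(a) for every real a,
# so f(A) should precede g(A) in the semidefinite order.
fA = fun_of_hermitian(lambda a: a, A)
gA = fun_of_hermitian(lambda a: np.exp(a) - 1, A)
print(np.linalg.eigvalsh(gA - fA))   # all eigenvalues should be >= 0
</syntaxhighlight>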


Cauchy integral

Cauchy's integral formula from complex analysis can also be used to generalize scalar functions to matrix functions. Cauchy's integral formula states that for any analytic function f defined on a set D \subset \mathbb{C}, one has

f(x) = \frac{1}{2\pi i} \oint_C \frac{f(z)}{z - x}\, \mathrm{d}z ~,

where C is a closed simple curve inside the domain D enclosing x.

Now, replace x by a matrix A and consider a path C inside D that encloses all eigenvalues of A. One possibility to achieve this is to let C be a circle around the origin with radius larger than \|A\| for an arbitrary matrix norm \|\cdot\|. Then, f(A) is definable by

f(A) = \frac{1}{2\pi i} \oint_C f(z)\left(z I - A\right)^{-1} \mathrm{d}z \,.

This integral can readily be evaluated numerically using the trapezium rule, which converges exponentially in this case. That means that the precision of the result doubles when the number of nodes is doubled. In routine cases, this is bypassed by Sylvester's formula.

This idea applied to bounded linear operators on a Banach space, which can be seen as infinite matrices, leads to the holomorphic functional calculus.
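
A sketch of this quadrature, assuming Python with NumPy and SciPy (contour_fun is an illustrative name; the circle radius of 1.1 times the spectral norm of A is an arbitrary choice that encloses the spectrum):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

def contour_fun(f, A, nodes=64):
    """Approximate f(A) = (1/(2 pi i)) * integral of f(z) (zI - A)^(-1) dz
    over a circle of radius r > ||A||_2 centered at the origin.

    On the circle z_k = r exp(2 pi i k / N) the trapezium rule reduces to
    averaging f(z_k) z_k (z_k I - A)^(-1) over the N nodes.
    """
    n = A.shape[0]
    r = 1.1 * np.linalg.norm(A, 2)        # encloses all eigenvalues of A
    z = r * np.exp(2j * np.pi * np.arange(nodes) / nodes)
    total = np.zeros((n, n), dtype=complex)
    for zk in z:
        total += f(zk) * zk * np.linalg.inv(zk * np.eye(n) - A)
    return total / nodes

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
print(contour_fun(np.exp, A).real)        # compare with the exact expm(A)
print(expm(A))
</syntaxhighlight>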


Matrix perturbations

The above Taylor power series allows the scalar x to be replaced by the matrix. This is not true in general when expanding in terms of A(\eta) = A+\eta B about \eta = 0 unless [A,B] = 0. A counterexample is f(x) = x^3, which has a finite length Taylor series. We compute this in two ways,

* Distributive law: f(A + \eta B) = (A+\eta B)^3 = A^3 + \eta(A^2 B + ABA + BA^2) + \eta^2(AB^2 + BAB + B^2 A) + \eta^3 B^3

* Using scalar Taylor expansion for f(a+\eta b) and replacing scalars with matrices at the end: f(a+\eta b) = f(a) + f'(a)\frac{\eta b}{1!} + f''(a)\frac{(\eta b)^2}{2!} + f'''(a)\frac{(\eta b)^3}{3!} = a^3 + 3a^2(\eta b) + 3a(\eta b)^2 + (\eta b)^3 \to A^3 + 3A^2(\eta B) + 3A(\eta B)^2 + (\eta B)^3

The scalar expression assumes commutativity while the matrix expression does not, and thus they cannot be equated directly unless [A,B] = 0. For some ''f''(''x'') this can be dealt with using the same method as scalar Taylor series. For example, f(x) = \frac{1}{x}. If A^{-1} exists then f(A+\eta B) = f(\mathbb{I} + \eta A^{-1}B)\,f(A). The expansion of the first term then follows the power series given above,

f(\mathbb{I} + \eta A^{-1}B) = \mathbb{I} - \eta A^{-1}B + (-\eta A^{-1}B)^2 + \cdots = \sum_{n=0}^\infty (-\eta A^{-1}B)^n

The convergence criteria of the power series then apply, requiring \Vert \eta A^{-1}B \Vert to be sufficiently small under the appropriate matrix norm. For more general problems, which cannot be rewritten in such a way that the two matrices commute, the ordering of matrix products produced by repeated application of the Leibniz rule must be tracked.
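
The geometric-series expansion for f(x) = 1/x can be checked numerically; the following sketch assumes Python with NumPy, and the matrices A, B and the value of \eta are arbitrary illustrative choices:

<syntaxhighlight lang="python">
import numpy as np

# A minimal numerical check of the expansion of f(x) = 1/x above:
#   (A + eta*B)^(-1) = (sum over n of (-eta * A^(-1) B)^n) * A^(-1),
# valid when ||eta * A^(-1) B|| < 1.
rng = np.random.default_rng(1)
A = np.eye(3) + 0.1 * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
eta = 0.05

Ainv = np.linalg.inv(A)
M = -eta * Ainv @ B
partial_sum = np.zeros((3, 3))
term = np.eye(3)
for _ in range(30):                       # partial sum of the Neumann series
    partial_sum += term
    term = term @ M
approx = partial_sum @ Ainv
exact = np.linalg.inv(A + eta * B)
print(np.linalg.norm(approx - exact))     # small because ||eta A^(-1) B|| < 1
</syntaxhighlight>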


Arbitrary function of a 2×2 matrix

An arbitrary function ''f''(''A'') of a 2×2 matrix A has its Sylvester's formula simplify to

f(A) = \frac{f(\lambda_+) + f(\lambda_-)}{2}\, I + \frac{f(\lambda_+) - f(\lambda_-)}{\lambda_+ - \lambda_-} \left(A - \frac{\lambda_+ + \lambda_-}{2}\, I\right) ~,

where \lambda_\pm are the eigenvalues of its characteristic equation, \det(A - \lambda I) = 0, and are given by

\lambda_\pm = \frac{\operatorname{tr} A}{2} \pm \sqrt{\left(\frac{\operatorname{tr} A}{2}\right)^2 - \det A} ~.
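
A sketch applying this closed form, assuming Python with NumPy and SciPy (fun_2x2 is an illustrative name; the formula as written requires distinct eigenvalues):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

def fun_2x2(f, A):
    """Evaluate f(A) for a 2x2 matrix with distinct eigenvalues using the
    simplified Sylvester formula above."""
    lam_p, lam_m = np.linalg.eigvals(A)
    mean_term = (f(lam_p) + f(lam_m)) / 2
    slope = (f(lam_p) - f(lam_m)) / (lam_p - lam_m)
    return mean_term * np.eye(2) + slope * (A - (lam_p + lam_m) / 2 * np.eye(2))

A = np.array([[1.0, 3.0],
              [2.0, 1.0]])
print(fun_2x2(np.exp, A))      # compare with the exact matrix exponential
print(expm(A))
</syntaxhighlight>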


Examples

* Matrix polynomial
* Matrix root
* Matrix logarithm
* Matrix exponential
* Matrix sign function


Classes of matrix functions

Using the semidefinite ordering (X \preceq Y \Leftrightarrow Y - X is positive-semidefinite and X \prec Y \Leftrightarrow Y - X is positive definite), some of the classes of scalar functions can be extended to matrix functions of Hermitian matrices.


Operator monotone

A function f is called operator monotone if and only if 0 \prec A \preceq H \Rightarrow f(A) \preceq f(H) for all self-adjoint matrices A, H with spectra in the domain of f. This is analogous to a monotone function in the scalar case.


Operator concave/convex

A function f is called operator concave if and only if \tau f(A) + (1-\tau) f(H) \preceq f \left( \tau A + (1-\tau)H \right) for all self-adjoint matrices A, H with spectra in the domain of f and \tau \in [0,1]. This definition is analogous to a concave scalar function. An operator convex function can be defined by switching \preceq to \succeq in the definition above.


Examples

The matrix log is both operator monotone and operator concave. The matrix square is operator convex. The matrix exponential is none of these. Loewner's theorem states that a function on an ''open'' interval is operator monotone if and only if it has an analytic extension to the upper and lower complex half planes so that the upper half plane is mapped to itself.
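
Operator monotonicity of the logarithm can be spot-checked numerically; the sketch below assumes Python with NumPy, and log_of_spd is an illustrative helper built on the Hermitian definition above:

<syntaxhighlight lang="python">
import numpy as np

def log_of_spd(A):
    """Matrix logarithm of a symmetric positive definite matrix via its
    spectral decomposition."""
    a, U = np.linalg.eigh(A)
    return U @ np.diag(np.log(a)) @ U.T

# Spot-check: if 0 < A <= H in the Loewner order, then log(A) <= log(H).
rng = np.random.default_rng(2)
X = rng.standard_normal((4, 4))
A = X @ X.T + np.eye(4)            # positive definite
Y = rng.standard_normal((4, 4))
H = A + Y @ Y.T                    # H - A is positive semidefinite

diff = log_of_spd(H) - log_of_spd(A)
print(np.linalg.eigvalsh(diff))    # should be >= 0 up to rounding
</syntaxhighlight>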


See also

* Algebraic Riccati equation *
Sylvester's formula In matrix theory, Sylvester's formula or Sylvester's matrix theorem (named after J. J. Sylvester) or Lagrange−Sylvester interpolation expresses an analytic function of a matrix as a polynomial in , in terms of the eigenvalues and eigenvectors of ...
*
Loewner order In mathematics, Loewner order is the partial order defined by the convex cone of positive semi-definite matrices. This order is usually employed to generalize the definitions of monotone and concave/convex scalar functions to monotone and concav ...
*
Matrix calculus In mathematics, matrix calculus is a specialized notation for doing multivariable calculus, especially over spaces of matrices. It collects the various partial derivatives of a single function with respect to many variables, and/or of a ...
*
Trace inequalities In mathematics, there are many kinds of inequalities involving matrices and linear operators on Hilbert spaces. This article covers some important operator inequalities connected with traces of matrices.E. Carlen, Trace Inequalities and Quantum Entr ...
*
Trigonometric functions of matrices The trigonometric functions (especially sine and cosine) for real or complex square matrices occur in solutions of second-order systems of differential equations. They are defined by the same Taylor series that hold for the trigonometric functio ...


References

* Higham, Nicholas J. (2008). ''Functions of Matrices: Theory and Computation''. Philadelphia: Society for Industrial and Applied Mathematics. ISBN 9780898717778.