In mathematics, the logarithmic norm is a real-valued functional on operators, and is derived from either an inner product, a vector norm, or its induced operator norm. The logarithmic norm was independently introduced by Germund Dahlquist and Sergei Lozinskiĭ in 1958, for square matrices. It has since been extended to nonlinear operators and unbounded operators as well. The logarithmic norm has a wide range of applications, in particular in matrix theory, differential equations and numerical analysis. In the finite-dimensional setting, it is also referred to as the matrix measure or the Lozinskiĭ measure.


Original definition

Let A be a square matrix and \| \cdot \| be an induced matrix norm. The associated logarithmic norm \mu of A is defined

:\mu(A) = \lim \limits_{h \rightarrow 0^+} \frac{\|I + hA\| - 1}{h}

Here I is the identity matrix of the same dimension as A, and h is a real, positive number. The limit as h\rightarrow 0^- equals -\mu(-A), and is in general different from the logarithmic norm \mu(A), as -\mu(-A) \leq \mu(A) for all matrices.

The matrix norm \|A\| is always positive if A\neq 0, but the logarithmic norm \mu(A) may also take negative values, e.g. when A is negative definite. Therefore, the logarithmic norm does not satisfy the axioms of a norm. The name ''logarithmic norm'', which does not appear in the original reference, seems to originate from estimating the logarithm of the norm of solutions to the differential equation

:\dot x = Ax.

The maximal growth rate of \log \|x\| is \mu(A). This is expressed by the differential inequality

:\frac{\mathrm d}{\mathrm dt^+} \log \|x\| \leq \mu(A),

where \mathrm d/\mathrm dt^+ is the upper right Dini derivative. Using logarithmic differentiation the differential inequality can also be written

:\frac{\mathrm d\|x\|}{\mathrm dt^+} \leq \mu(A)\cdot \|x\|,

showing its direct relation to Grönwall's lemma. In fact, it can be shown that the norm of the state transition matrix \Phi(t, t_0) associated to the differential equation \dot x = A(t)x is bounded by

: \exp\left(-\int_{t_0}^{t} \mu(-A(s))\, \mathrm ds \right) \le \|\Phi(t,t_0)\| \le \exp\left(\int_{t_0}^{t} \mu(A(s))\, \mathrm ds \right)

for all t \ge t_0.
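As an illustration (not part of the original text), the limit definition and the growth bound \|\mathrm e^{tA}\| \leq \mathrm e^{t\mu(A)} discussed below can be checked numerically. The following Python sketch uses NumPy and SciPy; the helper name lognorm_quotient is chosen here for illustration only.

    # Illustrative sketch (not from the article); helper names are our own.
    import numpy as np
    from scipy.linalg import expm   # matrix exponential

    def lognorm_quotient(A, h, p=2):
        """Difference quotient (||I + h*A|| - 1)/h from the definition of mu(A)."""
        return (np.linalg.norm(np.eye(A.shape[0]) + h * A, p) - 1.0) / h

    A = np.array([[-2.0, 1.0],
                  [0.5, -3.0]])

    # For the Euclidean norm the limit is lambda_max of the symmetric part (see below).
    mu2 = np.linalg.eigvalsh((A + A.T) / 2).max()
    for h in (1e-1, 1e-3, 1e-6):
        print(h, lognorm_quotient(A, h), mu2)   # quotient decreases towards mu2 as h -> 0+

    # Growth bound ||exp(tA)|| <= exp(t*mu(A)) in the same (spectral) norm.
    for t in (0.5, 1.0, 2.0):
        assert np.linalg.norm(expm(t * A), 2) <= np.exp(t * mu2) + 1e-12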


Alternative definitions

If the vector norm is an inner product norm, as in a Hilbert space, then the logarithmic norm is the smallest number \mu(A) such that for all x

:\real\langle x, Ax\rangle \leq \mu(A)\cdot \|x\|^2

Unlike the original definition, the latter expression also allows A to be unbounded. Thus differential operators too can have logarithmic norms, allowing the use of the logarithmic norm both in algebra and in analysis. The modern, extended theory therefore prefers a definition based on inner products or duality. Both the operator norm and the logarithmic norm are then associated with extremal values of quadratic forms as follows:

: \|A\|^2 = \sup_{x \neq 0} \frac{\langle Ax, Ax\rangle}{\langle x, x\rangle}\,; \qquad \mu(A) = \sup_{x \neq 0} \frac{\real\langle x, Ax\rangle}{\langle x, x\rangle}


Properties

Basic properties of the logarithmic norm of a matrix include:
# \mu(zI) = \real\,(z)
# \mu(A) \leq \|A\|
# \mu(\gamma A) = \gamma \mu(A)\, for scalar \gamma > 0
# \mu(A+zI) = \mu(A) + \real\,(z)
# \mu(A + B) \leq \mu(A) + \mu(B)
# \alpha(A) \leq \mu(A)\, where \alpha(A) is the maximal real part of the eigenvalues of A
# \|\mathrm e^{tA}\| \leq \mathrm e^{t\mu(A)}\, for t \geq 0
# \mu(A) < 0 \, \Rightarrow \, \|A^{-1}\| \leq -1/\mu(A)
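These properties are straightforward to spot-check numerically for the Euclidean logarithmic norm; the short Python sketch below is an illustration added here (not part of the original text) and verifies properties 2, 5 and 6 for random matrices.

    # Illustrative sketch (not from the article); helper names are our own.
    import numpy as np

    def mu2(A):
        """Euclidean logarithmic norm: lambda_max of the symmetric part."""
        return np.linalg.eigvalsh((A + A.T) / 2).max()

    rng = np.random.default_rng(1)
    tol = 1e-10
    for _ in range(100):
        A = rng.standard_normal((5, 5))
        B = rng.standard_normal((5, 5))
        assert mu2(A) <= np.linalg.norm(A, 2) + tol              # property 2
        assert mu2(A + B) <= mu2(A) + mu2(B) + tol               # property 5
        assert np.linalg.eigvals(A).real.max() <= mu2(A) + tol   # property 6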


Example logarithmic norms

The logarithmic norm of a matrix can be calculated as follows for the three most common norms. In these formulas, a_{ij} represents the element on the ith row and jth column of a matrix A.
* \mu_1(A) = \sup \limits_j \left( \real (a_{jj}) + \sum \limits_{i \neq j} |a_{ij}| \right)
* \displaystyle \mu_2(A) = \lambda_{\max}\left(\frac{A + A^*}{2}\right)
* \mu_\infty(A) = \sup \limits_i \left( \real (a_{ii}) + \sum \limits_{j \neq i} |a_{ij}| \right)
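The three formulas translate directly into code. The following Python sketch (an illustration added here, with helper names of our own choosing) implements them for a real or complex square matrix.

    # Illustrative sketch (not from the article); helper names are our own.
    import numpy as np

    def mu_1(A):
        """Column-wise formula: Re(a_jj) + sum of |a_ij| over i != j."""
        col = np.abs(A).sum(axis=0) - np.abs(np.diag(A)) + np.diag(A).real
        return col.max()

    def mu_2(A):
        """Largest eigenvalue of the Hermitian part (A + A*)/2."""
        return np.linalg.eigvalsh((A + A.conj().T) / 2).max()

    def mu_inf(A):
        """Row-wise formula: Re(a_ii) + sum of |a_ij| over j != i."""
        row = np.abs(A).sum(axis=1) - np.abs(np.diag(A)) + np.diag(A).real
        return row.max()

    A = np.array([[-3.0, 1.0],
                  [2.0, -5.0]])
    print(mu_1(A), mu_2(A), mu_inf(A))   # approximately -1.0, -2.20, -2.0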


Applications in matrix theory and spectral theory

The logarithmic norm is related to the extreme values of the Rayleigh quotient. It holds that

:-\mu(-A) \leq \frac{x^T A x}{x^T x} \leq \mu(A),

and both extreme values are taken for some vectors x\neq 0. This also means that every eigenvalue \lambda_k of A satisfies

:-\mu(-A) \leq \real\, \lambda_k \leq \mu(A).

More generally, the logarithmic norm is related to the numerical range of a matrix.

A matrix with -\mu(-A)>0 is positive definite, and one with \mu(A)<0 is negative definite. Such matrices have inverses. The inverse of a negative definite matrix is bounded by

:\|A^{-1}\| \leq -\frac{1}{\mu(A)}.

Both the bounds on the inverse and on the eigenvalues hold irrespective of the choice of vector (matrix) norm. Some results only hold for inner product norms, however. For example, if R is a rational function with the property

:\real \, (z)\leq 0 \, \Rightarrow \, |R(z)| \leq 1

then, for inner product norms,

:\mu(A)\leq 0 \, \Rightarrow \, \|R(A)\| \leq 1.

Thus the matrix norm and logarithmic norms may be viewed as generalizing the modulus and real part, respectively, from complex numbers to matrices.
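As an added illustration (not part of the original text), the eigenvalue enclosure and the bound on the inverse can be verified for a small negative definite example, using the Euclidean logarithmic norm.

    # Illustrative sketch (not from the article); helper names are our own.
    import numpy as np

    def mu2(A):
        """Euclidean logarithmic norm: lambda_max of the symmetric part."""
        return np.linalg.eigvalsh((A + A.T) / 2).max()

    A = np.array([[-4.0, 1.0],
                  [-1.0, -2.0]])          # mu2(A) = -2 < 0, so A is negative definite

    lo, hi = -mu2(-A), mu2(A)
    eig_real = np.linalg.eigvals(A).real
    assert np.all((lo - 1e-12 <= eig_real) & (eig_real <= hi + 1e-12))

    # Inverse bound ||A^{-1}|| <= -1/mu(A) in the same (spectral) norm.
    assert np.linalg.norm(np.linalg.inv(A), 2) <= -1.0 / mu2(A) + 1e-12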


Applications in stability theory and numerical analysis

The logarithmic norm plays an important role in the stability analysis of a continuous dynamical system \dot x = Ax. Its role is analogous to that of the matrix norm for a discrete dynamical system x_{n+1} = Ax_n.

In the simplest case, when A is a scalar complex constant \lambda, the discrete dynamical system has stable solutions when |\lambda| \leq 1, while the differential equation has stable solutions when \real\,\lambda\leq 0. When A is a matrix, the discrete system has stable solutions if \|A\| \leq 1. In the continuous system, the solutions are of the form \mathrm e^{tA}x(0). They are stable if \|\mathrm e^{tA}\| \leq 1 for all t\geq 0, which follows from property 7 above, if \mu(A)\leq 0. In the latter case, \|x\| is a Lyapunov function for the system.

Runge–Kutta methods for the numerical solution of \dot x = Ax replace the differential equation by a discrete equation x_{n+1} = R(hA)\cdot x_n, where the rational function R is characteristic of the method, and h is the time step size. If |R(z)| \leq 1 whenever \real\,(z)\leq 0, then a stable differential equation, having \mu(A)\leq 0, will always result in a stable (contractive) numerical method, as \|R(hA)\| \leq 1. Runge–Kutta methods having this property are called A-stable.

Retaining the same form, the results can, under additional assumptions, be extended to nonlinear systems as well as to semigroup theory, where the crucial advantage of the logarithmic norm is that it discriminates between forward and reverse time evolution and can establish whether the problem is well posed. Similar results also apply in the stability analysis in control theory, where there is a need to discriminate between positive and negative feedback.
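As an added illustration (not from the original text), consider the implicit Euler method, whose stability function R(z) = 1/(1-z) satisfies |R(z)| \leq 1 for \real\,(z) \leq 0. The Python sketch below, with helper names chosen here, confirms that \|R(hA)\| \leq 1 for every step size when \mu(A) \leq 0.

    # Illustrative sketch (not from the article); helper names are our own.
    import numpy as np

    def mu2(A):
        """Euclidean logarithmic norm: lambda_max of the symmetric part."""
        return np.linalg.eigvalsh((A + A.T) / 2).max()

    def implicit_euler_step_matrix(A, h):
        """Stability function of implicit Euler applied to A: R(hA) = (I - hA)^{-1}."""
        return np.linalg.inv(np.eye(A.shape[0]) - h * A)

    # Skew-dominant, weakly damped system: symmetric part is diag(0, -0.1), so mu2(A) = 0.
    A = np.array([[0.0, 10.0],
                  [-10.0, -0.1]])
    assert mu2(A) <= 0.0                  # the continuous problem is contractive

    # Contractivity ||R(hA)|| <= 1 holds for every step size h, as the method is A-stable.
    for h in (0.01, 1.0, 100.0):
        assert np.linalg.norm(implicit_euler_step_matrix(A, h), 2) <= 1.0 + 1e-12

For comparison, the explicit Euler factor I + hA for this matrix has norm greater than one once h is large, which is precisely the behaviour that A-stability rules out.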


Applications to elliptic differential operators

In connection with differential operators it is common to use inner products and integration by parts. In the simplest case we consider functions satisfying u(0)=u(1)=0 with inner product

:\langle u,v\rangle = \int_0^1 uv\, \mathrm dx.

Then it holds that

:\langle u,u''\rangle = -\langle u',u'\rangle \leq -\pi^2\|u\|^2,

where the equality on the left represents integration by parts, and the inequality to the right is a Sobolev inequality. In the latter, equality is attained for the function \sin\, \pi x, implying that the constant -\pi^2 is the best possible. Thus

:\langle u, Au\rangle \leq -\pi^2 \|u\|^2

for the differential operator A=\mathrm d^2/\mathrm dx^2, which implies that

:\mu(\mathrm d^2/\mathrm dx^2) = -\pi^2.

As an operator satisfying \langle u,Au \rangle > 0 is called elliptic, the logarithmic norm quantifies the (strong) ellipticity of -\mathrm d^2/\mathrm dx^2. Thus, if A is strongly elliptic, then \mu(-A)<0, and A is invertible given proper data.

If a finite difference method is used to solve -u''=f, the problem is replaced by an algebraic equation Tu=f. The matrix T will typically inherit the ellipticity, i.e., -\mu(-T)>0, showing that T is positive definite and therefore invertible. These results carry over to the Poisson equation as well as to other numerical methods such as the finite element method.
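As an added illustration (not part of the original text), the standard three-point finite difference matrix T for -u'' on (0,1) with homogeneous Dirichlet boundary conditions can be built explicitly; -\mu_2(-T) equals the smallest eigenvalue of T and approaches \pi^2 as the grid is refined.

    # Illustrative sketch (not from the article); helper names are our own.
    import numpy as np

    def mu2(A):
        """Euclidean logarithmic norm: lambda_max of the symmetric part."""
        return np.linalg.eigvalsh((A + A.T) / 2).max()

    # Three-point finite difference approximation of -d^2/dx^2 on (0, 1)
    # with homogeneous Dirichlet boundary conditions u(0) = u(1) = 0.
    N = 200                      # number of interior grid points
    h = 1.0 / (N + 1)
    T = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
         - np.diag(np.ones(N - 1), -1)) / h**2

    ellipticity = -mu2(-T)       # equals the smallest eigenvalue of T, since T is symmetric
    print(ellipticity)           # close to pi^2 ~ 9.87, so T is positive definite and invertible
    assert ellipticity > 0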


Extensions to nonlinear maps

For nonlinear operators the operator norm and logarithmic norm are defined in terms of the inequalities

:l(f)\cdot \|u-v\| \leq \|f(u)-f(v)\| \leq L(f)\cdot \|u-v\|,

where L(f) is the least upper bound Lipschitz constant of f, and l(f) is the greatest lower bound Lipschitz constant; and

:m(f)\cdot \|u-v\|^2 \leq \langle u-v, f(u)-f(v)\rangle \leq M(f)\cdot \|u-v\|^2,

where u and v are in the domain D of f. Here M(f) is the least upper bound logarithmic Lipschitz constant of f, and m(f) is the greatest lower bound logarithmic Lipschitz constant. It holds that m(f)=-M(-f) (compare above) and, analogously, l(f)=L(f^{-1})^{-1}, where L(f^{-1}) is defined on the image of f.

For nonlinear operators that are Lipschitz continuous, it further holds that

:M(f) = \lim \limits_{h \rightarrow 0^+} \frac{L(I + hf) - 1}{h}.

If f is differentiable and its domain D is convex, then

:L(f) = \sup_{x \in D} \|f'(x)\| and \displaystyle M(f) = \sup_{x \in D} \mu(f'(x)).

Here f'(x) is the Jacobian matrix of f, linking the nonlinear extension to the matrix norm and logarithmic norm.

An operator having either m(f) > 0 or M(f) < 0 is called uniformly monotone. An operator satisfying L(f) < 1 is called contractive. This extension offers many connections to fixed point theory and critical point theory. The theory becomes analogous to that of the logarithmic norm for matrices, but is more complicated as the domains of the operators need to be given close attention, as in the case with unbounded operators. Property 8 of the logarithmic norm above carries over, independently of the choice of vector norm, and it holds that

:M(f)<0\,\Rightarrow\,L(f^{-1})\leq -\frac{1}{M(f)},

which quantifies the Uniform Monotonicity Theorem due to Browder & Minty (1963).
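As an added illustration (not part of the original text), the formula M(f) = \sup_{x \in D} \mu(f'(x)) can be evaluated for a simple differentiable map; the example map f and the helper names below are chosen here for illustration only.

    # Illustrative sketch (not from the article); the map f and helper names are our own.
    import numpy as np

    def mu2(J):
        """Euclidean logarithmic norm: lambda_max of the symmetric part."""
        return np.linalg.eigvalsh((J + J.T) / 2).max()

    # Example nonlinear map f(u) = B u - u**3 (componentwise cube); B has symmetric part -I.
    B = np.array([[-1.0, 2.0],
                  [-2.0, -1.0]])

    def f(u):
        return B @ u - u**3

    def jacobian(u):
        return B - np.diag(3.0 * u**2)

    rng = np.random.default_rng(2)
    samples = np.vstack([np.zeros((1, 2)), rng.uniform(-2, 2, size=(500, 2))])

    # M(f) = sup_x mu(f'(x)); here the supremum -1 is attained at the origin.
    M = max(mu2(jacobian(u)) for u in samples)
    print(M)                       # -1.0, so f is uniformly monotone (M(f) < 0)

    # Check the defining inequality <u - v, f(u) - f(v)> <= M(f) ||u - v||^2 on random pairs.
    for _ in range(1000):
        u, v = rng.uniform(-2, 2, size=(2, 2))
        w = u - v
        assert w @ (f(u) - f(v)) <= M * (w @ w) + 1e-9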

