Bauer–Fike Theorem
In mathematics, the Bauer–Fike theorem is a standard result in the perturbation theory of the eigenvalues of a complex-valued diagonalizable matrix. In essence, it states an absolute upper bound for the deviation of one perturbed matrix eigenvalue from a properly chosen eigenvalue of the exact matrix. Informally speaking, what it says is that ''the sensitivity of the eigenvalues is estimated by the condition number of the matrix of eigenvectors''. The theorem was proved by Friedrich L. Bauer and C. T. Fike in 1960.

The setup

In what follows we assume that:
* A \in \mathbb{C}^{n \times n} is a diagonalizable matrix;
* V \in \mathbb{C}^{n \times n} is the non-singular eigenvector matrix such that A = V \Lambda V^{-1}, where \Lambda is a diagonal matrix;
* if X \in \mathbb{C}^{n \times n} is invertible, its condition number in the p-norm is denoted by \kappa_p(X) and defined by:
::\kappa_p(X) = \|X\|_p \left\|X^{-1}\right\|_p.

The Bauer–Fike Theorem

:Bauer–Fike Theorem. Let \mu be an eigenvalue of A + \delta A. Then there exists \lambda \in \Lambda(A) (the spectrum of A) such that:
::|\lambda - \mu| \leq \kappa_p(V) \, \|\delta A\|_p

Proof. We can su ...
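The bound is easy to illustrate numerically. The following is a minimal sketch, assuming NumPy; the matrix A and the perturbation delta A are arbitrary illustrative choices. For p = 2 it checks that every eigenvalue of A + delta A lies within \kappa_2(V) \|\delta A\|_2 of some eigenvalue of A.

import numpy as np

# Illustrative diagonalizable matrix and a small perturbation (arbitrary choices)
rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])
dA = 1e-6 * rng.standard_normal((3, 3))

# Eigendecomposition A = V @ diag(lams) @ inv(V)
lams, V = np.linalg.eig(A)
mus = np.linalg.eigvals(A + dA)

# Bauer-Fike bound in the 2-norm: kappa_2(V) * ||dA||_2
bound = np.linalg.cond(V, 2) * np.linalg.norm(dA, 2)

# Each perturbed eigenvalue mu must lie within `bound` of some eigenvalue of A
for mu in mus:
    dist = np.min(np.abs(lams - mu))
    assert dist <= bound
    print(f"mu = {mu:.6f}, nearest-lambda distance = {dist:.2e} <= {bound:.2e}")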



Mathematics
Mathematics is an area of knowledge that includes the topics of numbers, formulas and related structures, shapes and the spaces in which they are contained, and quantities and their changes. These topics are represented in modern mathematics with the major subdisciplines of number theory, algebra, geometry, and analysis, respectively. There is no general consensus among mathematicians about a common definition for their academic discipline. Most mathematical activity involves the discovery of properties of abstract objects and the use of pure reason to prove them. These objects consist of either abstractions from nature or, in modern mathematics, entities that are stipulated to have certain properties, called axioms. A ''proof'' consists of a succession of applications of deductive rules to already established results. These results include previously proved theorems, axioms, and, in the case of abstraction from nature, some basic properties that are considered true starting points of ...


Unitary Matrix
In linear algebra, a complex square matrix U is unitary if its conjugate transpose U^* is also its inverse, that is, if U^* U = UU^* = I, where I is the identity matrix. In physics, especially in quantum mechanics, the conjugate transpose is referred to as the Hermitian adjoint of a matrix and is denoted by a dagger (†), so the equation above is written U^\dagger U = UU^\dagger = I. The real analogue of a unitary matrix is an orthogonal matrix. Unitary matrices have significant importance in quantum mechanics because they preserve norms, and thus, probability amplitudes.

Properties

For any unitary matrix U of finite size, the following hold:
* Given two complex vectors x and y, multiplication by U preserves their inner product; that is, \langle Ux, Uy \rangle = \langle x, y \rangle.
* U is normal (U^* U = UU^*).
* U is diagonalizable; that is, U is unitarily similar to a diagonal matrix, as a consequence of the spectral theorem. Thus, U has a decomposition of the form U = VDV^*, where V is unitary, and D is diagonal and uni ...
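As a minimal numerical sketch, assuming NumPy (building a unitary matrix from a QR factorization of a random complex matrix is just one convenient illustrative construction), the following checks the defining property and the preservation of inner products and norms:

import numpy as np

# Build a (numerically) unitary matrix from a QR factorization (arbitrary construction)
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(M)

I = np.eye(4)
# Defining property: the conjugate transpose is the inverse
assert np.allclose(U.conj().T @ U, I)
assert np.allclose(U @ U.conj().T, I)

# Inner-product and norm preservation: <Ux, Uy> = <x, y>
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)
assert np.isclose(np.vdot(U @ x, U @ y), np.vdot(x, y))
assert np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x))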


Spectral Theory
In mathematics, spectral theory is an inclusive term for theories extending the eigenvector and eigenvalue theory of a single square matrix to a much broader theory of the structure of operators in a variety of mathematical spaces. It is a result of studies of linear algebra and the solutions of systems of linear equations and their generalizations. The theory is connected to that of analytic functions because the spectral properties of an operator are related to analytic functions of the spectral parameter. Mathematical background The name ''spectral theory'' was introduced by David Hilbert in his original formulation of Hilbert space theory, which was cast in terms of quadratic forms in infinitely many variables. The original spectral theorem was therefore conceived as a version of the theorem on principal axes of an ellipsoid, in an infinite-dimensional setting. The later discovery in quantum mechanics that spectral theory could explain features of atomic spectra was therefore ...



Hausdorff Distance
In mathematics, the Hausdorff distance, or Hausdorff metric, also called Pompeiu–Hausdorff distance, measures how far two subsets of a metric space are from each other. It turns the set of non-empty compact subsets of a metric space into a metric space in its own right. It is named after Felix Hausdorff and Dimitrie Pompeiu. Informally, two sets are close in the Hausdorff distance if every point of either set is close to some point of the other set. The Hausdorff distance is the longest distance you can be forced to travel by an adversary who chooses a point in one of the two sets, from where you then must travel to the other set. In other words, it is the greatest of all the distances from a point in one set to the closest point in the other set. This distance was first introduced by Hausdorff in his book ''Grundzüge der Mengenlehre'', first published in 1914, although a very close relative appeared in the doctoral thesis of Maurice Fréchet in 1906, in his study of the space of ...
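For finite point sets, the definition can be evaluated directly as the larger of the two directed distances. A minimal sketch, assuming NumPy and the Euclidean metric, with arbitrary illustrative point sets:

import numpy as np

def hausdorff(A, B):
    # Pairwise Euclidean distances between rows of A and rows of B
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    # max over a of min over b, and max over b of min over a
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# Two small planar sets (arbitrary illustrative data)
A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0], [3.0, 0.0]])
print(hausdorff(A, B))  # the adversary's worst pick is (3, 0), at distance 2 from A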


Non-expansive Function
In the mathematical theory of metric spaces, a metric map is a function between metric spaces that does not increase any distance (such functions are always continuous). These maps are the morphisms in the category of metric spaces, Met (Isbell 1964). They are also called Lipschitz functions with Lipschitz constant 1, nonexpansive maps, nonexpanding maps, weak contractions, or short maps. Specifically, suppose that ''X'' and ''Y'' are metric spaces and ''f'' is a function from ''X'' to ''Y''. Then ''f'' is a metric map when, for any points ''x'' and ''y'' in ''X'',
:d_Y(f(x), f(y)) \leq d_X(x, y).
Here d_X and d_Y denote the metrics on ''X'' and ''Y'' respectively.

Examples

Consider the metric space [0, 1/2] with the Euclidean metric. Then the function f(x) = x^2 is a metric map, since for x \ne y, |f(x) - f(y)| = |x + y| \, |x - y| < |x - y|.
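A quick numerical spot-check of this example, assuming NumPy (the grid of sample points is an arbitrary choice):

import numpy as np

# f(x) = x^2 on [0, 1/2] with the Euclidean metric: |f(x) - f(y)| <= |x - y|,
# since |f(x) - f(y)| = |x + y| |x - y| and |x + y| <= 1 on this interval.
xs = np.linspace(0.0, 0.5, 101)
f = xs ** 2
diff_f = np.abs(f[:, None] - f[None, :])
diff_x = np.abs(xs[:, None] - xs[None, :])
assert np.all(diff_f <= diff_x)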


Category of metric maps

The ...


Spectrum
A spectrum (plural ''spectra'' or ''spectrums'') is a condition that is not limited to a specific set of values but can vary, without gaps, across a continuum. The word was first used scientifically in optics to describe the rainbow of colors in visible light after passing through a prism. As scientific understanding of light advanced, it came to apply to the entire electromagnetic spectrum. It thereby became a mapping of a range of magnitudes (wavelengths) to a range of qualities, which are the perceived "colors of the rainbow" and other properties which correspond to wavelengths that lie outside of the visible light spectrum. Spectrum has since been applied by analogy to topics outside optics. Thus, one might talk about the " spectrum of political opinion", or the "spectrum of activity" of a drug, or the "autism spectrum". In these uses, values within a spectrum may not be associated with precisely quantifiable numbers or definitions. Such uses imply a broad range of condition ...




Weyl's Inequality
In linear algebra, Weyl's inequality is a theorem about the changes to eigenvalues of a Hermitian matrix that is perturbed. It can be used to estimate the eigenvalues of a perturbed Hermitian matrix.

Weyl's inequality about perturbation

Let M = N + R, N, and R be ''n''×''n'' Hermitian matrices, with their respective eigenvalues \mu_i, \nu_i, \rho_i ordered as follows:
:M:\quad \mu_1 \ge \cdots \ge \mu_n,
:N:\quad \nu_1 \ge \cdots \ge \nu_n,
:R:\quad \rho_1 \ge \cdots \ge \rho_n.
Then the following inequalities hold:
:\nu_i + \rho_n \le \mu_i \le \nu_i + \rho_1, \quad i = 1, \dots, n,
and, more generally,
:\nu_j + \rho_k \le \mu_i \le \nu_r + \rho_s, \quad j + k - n \ge i \ge r + s - 1.
In particular, if R is positive definite then plugging \rho_n > 0 into the above inequalities leads to
:\mu_i > \nu_i \quad \forall i = 1, \dots, n.
Note that these eigenvalues can be ordered, because they are real (as eigenvalues of Hermitian matrices).

Weyl's inequality between eigenvalues and singular val ...
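The first pair of inequalities is easy to verify numerically. A minimal sketch, assuming NumPy, with arbitrary randomly generated Hermitian matrices:

import numpy as np

rng = np.random.default_rng(2)

def random_hermitian(n):
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (X + X.conj().T) / 2

# Arbitrary illustrative Hermitian matrices with M = N + R
n = 5
N, R = random_hermitian(n), random_hermitian(n)
M = N + R

# eigvalsh returns ascending eigenvalues; flip to match the descending ordering
mu = np.linalg.eigvalsh(M)[::-1]
nu = np.linalg.eigvalsh(N)[::-1]
rho = np.linalg.eigvalsh(R)[::-1]

# Weyl: nu_i + rho_n <= mu_i <= nu_i + rho_1 for every i (small numerical slack)
assert np.all(nu + rho[-1] <= mu + 1e-12)
assert np.all(mu <= nu + rho[0] + 1e-12)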


Hermitian Matrix
In mathematics, a Hermitian matrix (or self-adjoint matrix) is a complex square matrix that is equal to its own conjugate transpose, that is, the element in the ''i''-th row and ''j''-th column is equal to the complex conjugate of the element in the ''j''-th row and ''i''-th column, for all indices ''i'' and ''j'':
:a_{ij} = \overline{a_{ji}}
or in matrix form:
:A \text{ Hermitian} \quad \iff \quad A = \overline{A^\mathsf{T}}.
Hermitian matrices can be understood as the complex extension of real symmetric matrices. If the conjugate transpose of a matrix A is denoted by A^\mathsf{H}, then the Hermitian property can be written concisely as
:A \text{ Hermitian} \quad \iff \quad A = A^\mathsf{H}.
Hermitian matrices are named after Charles Hermite, who demonstrated in 1855 that matrices of this form share a property with real symmetric matrices of always having real eigenvalues. Other, equivalent notations in common use are A^\mathsf{H} = A^\dagger = A^\ast, although note that in quantum mechanics, A^\ast typically means the complex conjugate only, and not the conjugate transpose.

Alternative characterizations

Hermit ...
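A minimal numerical sketch, assuming NumPy (the 2×2 matrix is an arbitrary illustrative choice), verifying the defining property and the realness of the eigenvalues:

import numpy as np

# A small Hermitian matrix: real diagonal, conjugate-symmetric off-diagonal entries
A = np.array([[2.0,    1 - 1j],
              [1 + 1j, 3.0   ]])

# Defining property: A equals its own conjugate transpose
assert np.allclose(A, A.conj().T)

# Hermite's observation: the eigenvalues are real
eigs = np.linalg.eigvals(A)
assert np.allclose(eigs.imag, 0)
print(eigs.real)  # here: 4 and 1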


Normal Matrix
In mathematics, a complex square matrix A is normal if it commutes with its conjugate transpose A^*:
:A \text{ normal} \iff A^* A = A A^*.
The concept of normal matrices can be extended to normal operators on infinite dimensional normed spaces and to normal elements in C*-algebras. As in the matrix case, normality means commutativity is preserved, to the extent possible, in the noncommutative setting. This makes normal operators, and normal elements of C*-algebras, more amenable to analysis. The spectral theorem states that a matrix is normal if and only if it is unitarily similar to a diagonal matrix, and therefore any matrix satisfying the equation A^* A = A A^* is diagonalizable. The converse does not hold because diagonalizable matrices may have non-orthogonal eigenspaces. The left and right singular vectors in the singular value decomposition of a normal matrix \mathbf{A} = \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^* differ only in complex phase from each other and from the corresponding eigenvectors, since the phase must be factored out ...
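A minimal sketch, assuming NumPy and SciPy (the rotation matrix is an arbitrary illustrative choice of a normal matrix that is neither symmetric nor Hermitian), checking the defining property and the diagonal complex Schur form guaranteed by the spectral theorem:

import numpy as np
from scipy.linalg import schur

# A plane rotation: normal, but not symmetric or Hermitian
t = 0.3
A = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])

# Defining property: A commutes with its conjugate transpose
assert np.allclose(A @ A.conj().T, A.conj().T @ A)

# Spectral theorem: for a normal matrix the complex Schur form A = Q T Q*
# is actually diagonal, i.e. A is unitarily diagonalizable.
T, Q = schur(A.astype(complex), output='complex')
assert np.allclose(T, np.diag(np.diag(T)))
assert np.allclose(Q @ T @ Q.conj().T, A)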


Perturbation Theory
In mathematics and applied mathematics, perturbation theory comprises methods for finding an approximate solution to a problem, by starting from the exact solution of a related, simpler problem. A critical feature of the technique is a middle step that breaks the problem into "solvable" and "perturbative" parts. In perturbation theory, the solution is expressed as a power series in a small parameter The first term is the known solution to the solvable problem. Successive terms in the series at higher powers of \varepsilon usually become smaller. An approximate 'perturbation solution' is obtained by truncating the series, usually by keeping only the first two terms, the solution to the known problem and the 'first order' perturbation correction. Perturbation theory is used in a wide range of fields, and reaches its most sophisticated and advanced forms in quantum field theory. Perturbation theory (quantum mechanics) describes the use of this method in quantum mechanics. The ...
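As a toy worked example (an illustrative choice, not from the article): treat x^2 + \varepsilon x - 1 = 0 as a perturbation of the solvable problem x^2 - 1 = 0. Writing x = x_0 + \varepsilon x_1 + \cdots and matching powers of \varepsilon gives x_0 = 1 and x_1 = -1/2. A minimal sketch, assuming NumPy, comparing the truncated series with the exact root:

import numpy as np

eps = 0.1
exact = -eps / 2 + np.sqrt(1 + eps**2 / 4)  # positive root, computed exactly
first_order = 1 - eps / 2                    # first two terms of the series

# The truncation error is O(eps**2), roughly eps**2 / 8 here
print(exact, first_order, abs(exact - first_order))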




Matrix Norm
In mathematics, a matrix norm is a vector norm in a vector space whose elements (vectors) are matrices (of given dimensions).

Preliminaries

Given a field K of either real or complex numbers, let K^{m \times n} be the K-vector space of matrices with m rows and n columns and entries in the field K. A matrix norm is a norm on K^{m \times n}. This article will always write such norms with double vertical bars (like so: \|A\|). Thus, the matrix norm is a function \|\cdot\| : K^{m \times n} \to \R that must satisfy the following properties: For all scalars \alpha \in K and matrices A, B \in K^{m \times n},
*\|A\| \ge 0 (''positive-valued'')
*\|A\| = 0 \iff A = 0_{m,n} (''definite'')
*\|\alpha A\| = |\alpha| \, \|A\| (''absolutely homogeneous'')
*\|A + B\| \le \|A\| + \|B\| (''sub-additive'' or satisfying the ''triangle inequality'')
The only feature distinguishing matrices from rearranged vectors is multiplication. Matrix norms are particularly useful if they are also sub-multiplicative:
*\|AB\| \le \|A\| \, \|B\| ...
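A minimal numerical sketch of these properties for the spectral norm (the matrices and scalar are arbitrary illustrative choices, assuming NumPy):

import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))
C = rng.standard_normal((3, 4))
alpha = -2.5

def norm(M):
    # Spectral norm: the largest singular value of M
    return np.linalg.norm(M, 2)

assert norm(A) >= 0                                        # positive-valued
assert np.isclose(norm(alpha * A), abs(alpha) * norm(A))   # absolutely homogeneous
assert norm(A + C) <= norm(A) + norm(C) + 1e-12            # triangle inequality
assert norm(A @ B) <= norm(A) * norm(B) + 1e-12            # sub-multiplicative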


Condition Number
In numerical analysis, the condition number of a function measures how much the output value of the function can change for a small change in the input argument. This is used to measure how sensitive a function is to changes or errors in the input, and how much error in the output results from an error in the input. Very frequently, one is solving the inverse problem: given f(x) = y, one is solving for ''x,'' and thus the condition number of the (local) inverse must be used. In linear regression the condition number of the moment matrix can be used as a diagnostic for multicollinearity. The condition number is an application of the derivative, and is formally defined as the value of the asymptotic worst-case relative change in output for a relative change in input. The "function" is the solution of a problem and the "arguments" are the data in the problem. The condition number is frequently applied to questions in linear algebra, in which case the derivative is straightforward but ...
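A minimal sketch, assuming NumPy, of how a large condition number amplifies input error in a linear solve (the nearly singular matrix is an arbitrary illustrative choice):

import numpy as np

# An ill-conditioned system: a small relative error in the data b
# produces a much larger relative error in the solution x.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])
x = np.linalg.solve(A, b)       # exact solution is [1, 1]

db = np.array([0.0, 1e-4])      # tiny perturbation of the data
x_pert = np.linalg.solve(A, b + db)

rel_in = np.linalg.norm(db) / np.linalg.norm(b)
rel_out = np.linalg.norm(x_pert - x) / np.linalg.norm(x)
print(np.linalg.cond(A))        # ~4e4: worst-case amplification factor
print(rel_out / rel_in)         # observed amplification, bounded by cond(A)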