In mathematics, an eigenvalue perturbation problem is that of finding the eigenvectors and eigenvalues of a system Ax=\lambda x that is perturbed from one with known eigenvectors and eigenvalues, A_0 x_0=\lambda_0 x_0 . This is useful for studying how sensitive the original system's eigenvectors and eigenvalues x_{0i}, \lambda_{0i}, i=1, \dots, n , are to changes in the system. This type of analysis was popularized by Lord Rayleigh, in his investigation of harmonic vibrations of a string perturbed by small inhomogeneities. The derivations in this article are essentially self-contained and can be found in many texts on numerical linear algebra or numerical functional analysis. This article focuses on the case of the perturbation of a simple eigenvalue (see multiplicity of eigenvalues).


Why generalized eigenvalues?

In the entry applications of eigenvalues and eigenvectors we find numerous scientific fields in which eigenvalues are used to obtain solutions. Generalized eigenvalue problems are less widespread but are a key in the study of vibrations. They are useful when we use the Galerkin method or Rayleigh–Ritz method to find approximate solutions of partial differential equations modeling vibrations of structures such as strings and plates; the paper of Courant (1943) is fundamental. The finite element method is a widespread particular case. In classical mechanics, we may find generalized eigenvalues when we look for vibrations of multiple degrees of freedom systems close to equilibrium; the kinetic energy provides the mass matrix M , and the potential strain energy provides the rigidity matrix K . For details, see, for example, the first section of the article of Weinstein (1941, in French).

With both methods, we obtain a system of differential equations or matrix differential equation M \ddot x+B \dot x +Kx=0 with the mass matrix M , the damping matrix B and the rigidity matrix K . If we neglect the damping effect, we set B=0 and look for a solution of the form x=e^{i \omega t} u ; we obtain that u and \omega^2 are solutions of the generalized eigenvalue problem -\omega^2 M u+Ku =0 .
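As an illustrative numerical sketch (not part of the original derivation), the undamped problem -\omega^2 M u+Ku=0 can be solved with SciPy's generalized symmetric eigensolver; the two-degree-of-freedom mass and stiffness matrices below are made-up values chosen only for the example.

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical 2-degree-of-freedom spring-mass chain (illustrative values only).
M = np.diag([2.0, 1.0])                 # mass matrix from the kinetic energy
K = np.array([[ 3.0, -1.0],
              [-1.0,  1.0]])            # rigidity (stiffness) matrix from the strain energy

# Generalized symmetric-definite eigenproblem  K u = omega^2 M u.
omega2, U = eigh(K, M)                  # eigenvalues omega^2; columns of U are the modes u

print("natural frequencies:", np.sqrt(omega2))
print("M-orthonormality check:\n", U.T @ M @ U)   # close to the identity matrix
```

Note that eigh normalizes the returned eigenvectors so that U^T M U = I, which is exactly the scaling used as equation (2) in the perturbation analysis below.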


Setting of perturbation for a generalized eigenvalue problem

Suppose we have solutions to the generalized eigenvalue problem,
:\mathbf{K}_0 \mathbf{x}_{0i} = \lambda_{0i} \mathbf{M}_0 \mathbf{x}_{0i}. \qquad (0)
where \mathbf{K}_0 and \mathbf{M}_0 are matrices. That is, we know the eigenvalues \lambda_{0i} and eigenvectors \mathbf{x}_{0i} , i = 1, \dots, N , of equation (0). It is also required that ''the eigenvalues are distinct.'' Now suppose we want to change the matrices by a small amount. That is, we want to find the eigenvalues and eigenvectors of
:\mathbf{K} \mathbf{x}_i = \lambda_i \mathbf{M} \mathbf{x}_i \qquad (1)
where
:\begin{align} \mathbf{K} &= \mathbf{K}_0 + \delta \mathbf{K}\\ \mathbf{M} &= \mathbf{M}_0 + \delta \mathbf{M} \end{align}
with the perturbations \delta\mathbf{K} and \delta\mathbf{M} much smaller than \mathbf{K} and \mathbf{M} respectively. Then we expect the new eigenvalues and eigenvectors to be similar to the original, plus small perturbations:
:\begin{align} \lambda_i &= \lambda_{0i}+\delta\lambda_{i} \\ \mathbf{x}_i &= \mathbf{x}_{0i} + \delta\mathbf{x}_{i} \end{align}


Steps

We assume that the matrices are symmetric and positive definite, and assume we have scaled the eigenvectors such that
: \mathbf{x}_{0j}^\top \mathbf{M}_0\mathbf{x}_{0i} = \delta_{ij}, \quad \mathbf{x}_{j}^\top \mathbf{M} \mathbf{x}_{i}= \delta_{ij} \qquad(2)
where \delta_{ij} is the Kronecker delta. Now we want to solve the equation
:\mathbf{K}\mathbf{x}_i - \lambda_i \mathbf{M} \mathbf{x}_i=0.
In this article we restrict the study to first order perturbation.
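The scaling (2) is easy to enforce in practice. The sketch below is an illustration with assumed matrices (not taken from the article): it rescales eigenvector columns so that x^T M x = 1. Note that scipy.linalg.eigh(K, M) already returns eigenvectors with this scaling, whereas a generic eigensolver may not.

```python
import numpy as np

def m_normalize(X, M):
    """Rescale each column x of X so that x^T M x = 1 (the scaling in equation (2)).

    Assumes M is symmetric positive definite and the columns of X are
    M-orthogonal eigenvectors returned by some eigensolver.
    """
    norms = np.sqrt(np.einsum('ij,jk,ki->i', X.T, M, X))
    return X / norms

# Illustrative check with made-up matrices (assumptions, not from the article):
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
M = A @ A.T + 3 * np.eye(3)                      # symmetric positive definite
K = np.diag([1.0, 2.0, 5.0])                     # symmetric
_, X = np.linalg.eig(np.linalg.solve(M, K))      # generic solver: scaling (2) not guaranteed
X = m_normalize(X, M)
print(np.round(X.T @ M @ X, 8))                  # close to the identity matrix
```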


First order expansion of the equation

Substituting in (1), we get
:(\mathbf{K}_0+\delta \mathbf{K})(\mathbf{x}_{0i} + \delta \mathbf{x}_i) = \left (\lambda_{0i}+\delta\lambda_{i} \right ) \left (\mathbf{M}_0+ \delta \mathbf{M} \right ) \left (\mathbf{x}_{0i}+\delta\mathbf{x}_{i} \right ),
which expands to
:\begin{align} \mathbf{K}_0\mathbf{x}_{0i} &+ \delta \mathbf{K}\mathbf{x}_{0i} + \mathbf{K}_0\delta \mathbf{x}_i + \delta \mathbf{K}\delta \mathbf{x}_i = \\ &\lambda_{0i}\mathbf{M}_0\mathbf{x}_{0i}+\lambda_{0i}\mathbf{M}_0\delta\mathbf{x}_i + \lambda_{0i} \delta \mathbf{M} \mathbf{x}_{0i} +\delta\lambda_i\mathbf{M}_0\mathbf{x}_{0i} + \\ & \quad \lambda_{0i} \delta \mathbf{M} \delta\mathbf{x}_i + \delta\lambda_i \delta \mathbf{M}\mathbf{x}_{0i} + \delta\lambda_i\mathbf{M}_0\delta\mathbf{x}_i + \delta\lambda_i \delta \mathbf{M} \delta\mathbf{x}_i. \end{align}
Canceling from (0) (\mathbf{K}_0 \mathbf{x}_{0i} = \lambda_{0i} \mathbf{M}_0 \mathbf{x}_{0i}) leaves
:\begin{align} \delta \mathbf{K} \mathbf{x}_{0i} + & \mathbf{K}_0\delta \mathbf{x}_i + \delta \mathbf{K}\delta \mathbf{x}_i = \lambda_{0i}\mathbf{M}_0\delta\mathbf{x}_i + \lambda_{0i} \delta \mathbf{M} \mathbf{x}_{0i} + \delta\lambda_i\mathbf{M}_0\mathbf{x}_{0i} + \\ & \lambda_{0i} \delta \mathbf{M} \delta\mathbf{x}_i + \delta\lambda_i \delta \mathbf{M} \mathbf{x}_{0i} + \delta\lambda_i\mathbf{M}_0\delta\mathbf{x}_i + \delta\lambda_i \delta \mathbf{M} \delta\mathbf{x}_i. \end{align}
Removing the higher-order terms, this simplifies to
:\mathbf{K}_0 \delta\mathbf{x}_i+ \delta \mathbf{K} \mathbf{x}_{0i} = \lambda_{0i}\mathbf{M}_0 \delta \mathbf{x}_i + \lambda_{0i}\delta \mathbf{M} \mathbf{x}_{0i} + \delta \lambda_i \mathbf{M}_0\mathbf{x}_{0i}. \qquad(3)
:In other words, \delta \lambda_i no longer denotes the exact variation of the eigenvalue but its first order approximation. As the matrix \mathbf{M}_0 is symmetric, the unperturbed eigenvectors are \mathbf{M}_0 -orthogonal and so we use them as a basis for the perturbed eigenvectors. That is, we want to construct
:\delta \mathbf{x}_i = \sum_{j=1}^N \varepsilon_{ij} \mathbf{x}_{0j} \qquad (4) \quad with \varepsilon_{ij}=\mathbf{x}_{0j}^T \mathbf{M}_0 \delta \mathbf{x}_i ,
where the \varepsilon_{ij} are small constants that are to be determined. In the same way, substituting in (2), and removing higher order terms, we get
:\delta\mathbf{x}_j^T \mathbf{M}_0 \mathbf{x}_{0i} + \mathbf{x}_{0j}^T \mathbf{M}_0 \delta \mathbf{x}_{i} + \mathbf{x}_{0j}^T \delta \mathbf{M} \mathbf{x}_{0i}=0 \qquad (5)
The derivation can go on with two forks.


First fork: get first eigenvalue perturbation


Eigenvalue perturbation

We start with (3):
:\mathbf{K}_0 \delta\mathbf{x}_i+ \delta \mathbf{K} \mathbf{x}_{0i} = \lambda_{0i}\mathbf{M}_0 \delta \mathbf{x}_i + \lambda_{0i}\delta \mathbf{M} \mathbf{x}_{0i} + \delta \lambda_i \mathbf{M}_0\mathbf{x}_{0i};
we left multiply with \mathbf{x}_{0i}^T and use (2) as well as its first order variation (5); we get
: \mathbf{x}_{0i}^T \delta \mathbf{K} \mathbf{x}_{0i} = \lambda_{0i} \mathbf{x}_{0i}^T\delta \mathbf{M} \mathbf{x}_{0i} + \delta \lambda_i
or
: \delta \lambda_i=\mathbf{x}_{0i}^T \delta \mathbf{K} \mathbf{x}_{0i} -\lambda_{0i} \mathbf{x}_{0i}^T\delta \mathbf{M} \mathbf{x}_{0i} .
We notice that it is the first order perturbation of the generalized Rayleigh quotient with fixed x_{0i} : R(K,M;x_{0i})=\frac{x_{0i}^T K x_{0i}}{x_{0i}^T M x_{0i}}, \text{ with } x_{0i}^T M x_{0i}=1. Moreover, for M=I , the formula \delta \lambda_i = x_{0i}^T \delta K x_{0i} should be compared with the Bauer–Fike theorem, which provides a bound for eigenvalue perturbation.
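As a quick numerical check of this first order eigenvalue formula (an illustrative sketch; the random symmetric positive definite matrices below are assumptions, not data from the article):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)

def random_spd(n):
    """Random symmetric positive definite matrix (illustration only)."""
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

n = 4
K0, M0 = random_spd(n), random_spd(n)
dK, dM = 1e-6 * random_spd(n), 1e-6 * random_spd(n)   # small symmetric perturbations

lam0, X0 = eigh(K0, M0)                 # unperturbed problem, X0^T M0 X0 = I
lam, _ = eigh(K0 + dK, M0 + dM)         # exact perturbed eigenvalues

# First order prediction: delta_lambda_i = x0i^T dK x0i - lam0i * x0i^T dM x0i
dlam = np.einsum('ji,jk,ki->i', X0, dK, X0) - lam0 * np.einsum('ji,jk,ki->i', X0, dM, X0)

print("exact change      :", lam - lam0)
print("first order change:", dlam)      # the two agree up to second order terms
```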


Eigenvector perturbation

We left multiply (3) with \mathbf{x}_{0j}^T for j \neq i and get
:\mathbf{x}_{0j}^T\mathbf{K}_0 \delta\mathbf{x}_i+ \mathbf{x}_{0j}^T \delta \mathbf{K} \mathbf{x}_{0i} = \lambda_{0i} \mathbf{x}_{0j}^T \mathbf{M}_0 \delta \mathbf{x}_i + \lambda_{0i} \mathbf{x}_{0j}^T\delta \mathbf{M} \mathbf{x}_{0i} + \delta \lambda_i \mathbf{x}_{0j}^T\mathbf{M}_0\mathbf{x}_{0i}.
We use \mathbf{x}_{0j}^T \mathbf{K}_0=\lambda_{0j} \mathbf{x}_{0j}^T\mathbf{M}_0 \text{ and } \mathbf{x}_{0j}^T\mathbf{M}_0\mathbf{x}_{0i}=0 for j \neq i , so that
:\lambda_{0j} \mathbf{x}_{0j}^T\mathbf{M}_0 \delta\mathbf{x}_i+ \mathbf{x}_{0j}^T \delta \mathbf{K} \mathbf{x}_{0i} = \lambda_{0i} \mathbf{x}_{0j}^T \mathbf{M}_0 \delta \mathbf{x}_i + \lambda_{0i} \mathbf{x}_{0j}^T\delta \mathbf{M} \mathbf{x}_{0i}
or
:(\lambda_{0j}-\lambda_{0i}) \mathbf{x}_{0j}^T\mathbf{M}_0 \delta\mathbf{x}_i+ \mathbf{x}_{0j}^T \delta \mathbf{K} \mathbf{x}_{0i} = \lambda_{0i} \mathbf{x}_{0j}^T\delta \mathbf{M} \mathbf{x}_{0i} .
As the eigenvalues are assumed to be simple, for j \neq i
: \varepsilon_{ij}=\mathbf{x}_{0j}^T\mathbf{M}_0 \delta\mathbf{x}_i =\frac{\mathbf{x}_{0j}^T \delta \mathbf{K} \mathbf{x}_{0i} - \lambda_{0i}\mathbf{x}_{0j}^T \delta \mathbf{M} \mathbf{x}_{0i}}{\lambda_{0i}-\lambda_{0j}} , \quad i=1, \dots, N; \ j=1, \dots, N; \ j \neq i.
Moreover (5) (the first order variation of (2)) yields 2 \varepsilon_{ii}=2 \mathbf{x}_{0i}^T \mathbf{M}_0 \delta \mathbf{x}_i=-\mathbf{x}_{0i}^T \delta \mathbf{M} \mathbf{x}_{0i} . We have obtained all the components of \delta \mathbf{x}_i .


Second fork: Straightforward manipulations

Substituting (4) into (3) and rearranging gives
:\begin{align}
\mathbf{K}_0 \sum_{j=1}^N \varepsilon_{ij} \mathbf{x}_{0j} + \delta \mathbf{K} \mathbf{x}_{0i} &= \lambda_{0i} \mathbf{M}_0 \sum_{j=1}^N \varepsilon_{ij} \mathbf{x}_{0j} + \lambda_{0i} \delta \mathbf{M} \mathbf{x}_{0i} + \delta\lambda_i \mathbf{M}_0\mathbf{x}_{0i} && (5') \\
\sum_{j=1}^N \varepsilon_{ij} \mathbf{K}_0 \mathbf{x}_{0j} + \delta \mathbf{K} \mathbf{x}_{0i} &= \lambda_{0i} \mathbf{M}_0 \sum_{j=1}^N \varepsilon_{ij} \mathbf{x}_{0j} + \lambda_{0i} \delta \mathbf{M} \mathbf{x}_{0i} + \delta\lambda_i \mathbf{M}_0 \mathbf{x}_{0i} && (\text{applying } \mathbf{K}_0 \text{ to the sum}) \\
\sum_{j=1}^N \varepsilon_{ij} \lambda_{0j} \mathbf{M}_0 \mathbf{x}_{0j} + \delta \mathbf{K} \mathbf{x}_{0i} &= \lambda_{0i} \mathbf{M}_0 \sum_{j=1}^N \varepsilon_{ij} \mathbf{x}_{0j} + \lambda_{0i} \delta \mathbf{M} \mathbf{x}_{0i} + \delta\lambda_i \mathbf{M}_0 \mathbf{x}_{0i} && (\text{using equation } (0))
\end{align}
(the first of these equations is labeled (5') to distinguish it from equation (5) above). Because the eigenvectors are \mathbf{M}_0 -orthogonal when \mathbf{M}_0 is positive definite, we can remove the summations by left-multiplying by \mathbf{x}_{0i}^\top:
:\mathbf{x}_{0i}^\top \varepsilon_{ii} \lambda_{0i} \mathbf{M}_0 \mathbf{x}_{0i} + \mathbf{x}_{0i}^\top \delta \mathbf{K} \mathbf{x}_{0i} = \lambda_{0i} \mathbf{x}_{0i}^\top \mathbf{M}_0 \varepsilon_{ii} \mathbf{x}_{0i} + \lambda_{0i}\mathbf{x}_{0i}^\top \delta \mathbf{M} \mathbf{x}_{0i} + \delta\lambda_i\mathbf{x}_{0i}^\top \mathbf{M}_0 \mathbf{x}_{0i}.
By use of equation (0) again:
:\mathbf{x}_{0i}^\top \mathbf{K}_0 \varepsilon_{ii} \mathbf{x}_{0i} + \mathbf{x}_{0i}^\top \delta \mathbf{K} \mathbf{x}_{0i} = \lambda_{0i} \mathbf{x}_{0i}^\top \mathbf{M}_0\varepsilon_{ii} \mathbf{x}_{0i} + \lambda_{0i}\mathbf{x}_{0i}^\top \delta \mathbf{M}\mathbf{x}_{0i} + \delta\lambda_i\mathbf{x}_{0i}^\top \mathbf{M}_0 \mathbf{x}_{0i}. \qquad (6')
The two terms containing \varepsilon_{ii} are equal because left-multiplying (0) by \mathbf{x}_{0i}^\top gives
:\mathbf{x}_{0i}^\top\mathbf{K}_0\mathbf{x}_{0i} = \lambda_{0i}\mathbf{x}_{0i}^\top \mathbf{M}_0 \mathbf{x}_{0i}.
Canceling those terms in (6') leaves
:\mathbf{x}_{0i}^\top \delta \mathbf{K} \mathbf{x}_{0i} = \lambda_{0i} \mathbf{x}_{0i}^\top \delta \mathbf{M} \mathbf{x}_{0i} + \delta\lambda_i \mathbf{x}_{0i}^\top \mathbf{M}_0\mathbf{x}_{0i}.
Rearranging gives
:\delta\lambda_i = \frac{\mathbf{x}_{0i}^\top \left(\delta \mathbf{K} - \lambda_{0i}\delta \mathbf{M}\right)\mathbf{x}_{0i}}{\mathbf{x}_{0i}^\top \mathbf{M}_0\mathbf{x}_{0i}}.
But by (2), this denominator is equal to 1. Thus
:\delta\lambda_i = \mathbf{x}^\top_{0i} \left (\delta \mathbf{K} - \lambda_{0i} \delta \mathbf{M} \right )\mathbf{x}_{0i}.
Then, as \lambda_{0i} \neq \lambda_{0k} for i \neq k (simple eigenvalues assumption), by left-multiplying equation (5') by \mathbf{x}_{0k}^\top:
:\varepsilon_{ik} = \frac{\mathbf{x}^\top_{0k} \left (\delta \mathbf{K} - \lambda_{0i} \delta \mathbf{M} \right )\mathbf{x}_{0i}}{\lambda_{0i}-\lambda_{0k}}, \qquad i\neq k.
Or by changing the name of the indices:
:\varepsilon_{ij} = \frac{\mathbf{x}^\top_{0j} \left (\delta \mathbf{K} - \lambda_{0i} \delta \mathbf{M} \right )\mathbf{x}_{0i}}{\lambda_{0i}-\lambda_{0j}}, \qquad i\neq j.
To find \varepsilon_{ii} , use the fact that
:\mathbf{x}^\top_i \mathbf{M} \mathbf{x}_i = 1
implies
:\varepsilon_{ii}=-\tfrac{1}{2}\mathbf{x}^\top_{0i} \delta \mathbf{M} \mathbf{x}_{0i}.


Summary of the first order perturbation result

In the case where ''all the matrices are Hermitian positive definite and all the eigenvalues are distinct'',
:\begin{align}
\lambda_i &= \lambda_{0i} + \mathbf{x}^\top_{0i} \left (\delta \mathbf{K} - \lambda_{0i}\delta \mathbf{M} \right ) \mathbf{x}_{0i} \\
\mathbf{x}_i &= \mathbf{x}_{0i} \left (1 - \tfrac{1}{2} \mathbf{x}^\top_{0i} \delta \mathbf{M} \mathbf{x}_{0i} \right ) + \sum_{j=1,\, j\neq i}^N \frac{\mathbf{x}^\top_{0j} \left (\delta \mathbf{K} - \lambda_{0i}\delta \mathbf{M} \right ) \mathbf{x}_{0i}}{\lambda_{0i}-\lambda_{0j}} \mathbf{x}_{0j}
\end{align}
for infinitesimal \delta\mathbf{K} and \delta\mathbf{M} (the higher order terms in (3) being neglected). So far, we have not proved that these higher order terms may be neglected. This point may be derived using the implicit function theorem; in the next section, we summarize the use of this theorem in order to obtain a first order expansion.
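The summary formulas can be verified numerically; the sketch below uses hypothetical random symmetric positive definite matrices and compares the first order eigenvector with the exact one (signs are aligned first, since eigenvectors are only defined up to sign):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)

def random_spd(n):
    """Random symmetric positive definite matrix (illustration only)."""
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

n, eps = 4, 1e-6
K0, M0 = random_spd(n), random_spd(n)
dK, dM = eps * random_spd(n), eps * random_spd(n)

lam0, X0 = eigh(K0, M0)                  # columns x0i, with X0^T M0 X0 = I
lam, X = eigh(K0 + dK, M0 + dM)          # exact perturbed problem

for i in range(n):
    x0 = X0[:, i]
    # first order eigenvector: x0i (1 - 1/2 x0i^T dM x0i) + sum_{j != i} eps_ij x0j
    xi = x0 * (1 - 0.5 * x0 @ dM @ x0)
    for j in range(n):
        if j != i:
            xi += (X0[:, j] @ (dK - lam0[i] * dM) @ x0) / (lam0[i] - lam0[j]) * X0[:, j]
    xe = X[:, i] * np.sign(X[:, i] @ M0 @ x0)   # align the sign of the exact eigenvector
    print(i, np.linalg.norm(xi - xe))           # of order eps**2, i.e. much smaller than eps
```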


Theoretical derivation


Perturbation of an implicit function.

In the next paragraph, we shall use the implicit function theorem; we notice that for a continuously differentiable function f:\R^{n+m} \to \R^m, \; f: (x,y) \mapsto f(x,y) , with an invertible Jacobian matrix J_{f,y}(x_0,y_0) , from a point (x_0,y_0) solution of f(x_0,y_0)=0 , we get solutions of f(x,y)=0 with x close to x_0 in the form y=g(x) where g is a continuously differentiable function; moreover the Jacobian matrix of g is provided by the linear system
:J_{f,y}(x,g(x))\, J_{g,x}(x)+J_{f,x}(x,g(x))=0 . \qquad (6)
As soon as the hypothesis of the theorem is satisfied, the Jacobian matrix of g may be computed with a first order expansion of f(x_0+ \delta x, y_0+\delta y)=0 : we get J_{f,x}(x,g(x)) \delta x+ J_{f,y}(x,g(x))\delta y=0 ; as \delta y=J_{g,x}(x) \delta x , this is equivalent to equation (6).


Eigenvalue perturbation: a theoretical basis.

We use the previous paragraph (Perturbation of an implicit function) with somewhat different notations suited to eigenvalue perturbation; we introduce \tilde{f}: \R^{2n^2} \times \R^{n+1} \to \R^{n+1} , with
:\tilde{f} (K,M, \lambda,x)= \binom{f(K,M,\lambda,x)}{f_1(M,x)} \quad \text{with} \quad f(K,M, \lambda,x) =Kx -\lambda Mx, \quad f_1(M,x)=x^T Mx -1.
In order to use the implicit function theorem, we study the invertibility of the Jacobian J_{\tilde{f},(\lambda,x)} (K_0,M_0;\lambda_{0i},x_{0i}) with
:J_{\tilde{f},(\lambda,x)} (K_0,M_0;\lambda_{0i},x_{0i})(\delta \lambda,\delta x)=\binom{-\delta \lambda \, M_0 x_{0i} + (K_0-\lambda_{0i} M_0)\, \delta x}{2 x_{0i}^T M_0\, \delta x}.
Indeed, the solution of J_{\tilde{f},(\lambda,x)} (K_0,M_0;\lambda_{0i},x_{0i})(\delta \lambda_i,\delta x_i)=\binom{y}{y_{n+1}} may be derived with computations similar to the derivation of the expansion:
:\delta \lambda_i= -x_{0i}^T y ; \quad (\lambda_{0j}-\lambda_{0i})\, x_{0j}^T M_0 \delta x_i=x_{0j}^T y , \ j=1, \dots, n, \ j \neq i , \text{ hence } x_{0j}^T M_0 \delta x_i=\frac{x_{0j}^T y}{\lambda_{0j}-\lambda_{0i}} ; \quad \text{and} \quad 2x_{0i}^T M_0 \delta x_i=y_{n+1}.
When \lambda_{0i} is a simple eigenvalue, as the eigenvectors x_{0j}, j=1, \dots,n , form an M_0 -orthonormal basis, for any right-hand side we have obtained one solution; therefore, the Jacobian is invertible. The implicit function theorem provides a continuously differentiable function (K,M) \mapsto (\lambda_i(K,M), x_i(K,M)) , hence the expansion with little-o notation:
:\lambda_i=\lambda_{0i}+ \delta \lambda_i +o(\| \delta K \| +\| \delta M \| ), \qquad x_i=x_{0i}+ \delta x_i +o(\| \delta K \| +\| \delta M \| ),
with
:\delta \lambda_i=\mathbf{x}_{0i}^T \delta \mathbf{K} \mathbf{x}_{0i} -\lambda_{0i} \mathbf{x}_{0i}^T\delta \mathbf{M} \mathbf{x}_{0i} ; \qquad \delta \mathbf{x}_i=\sum_{j=1}^n \left(\mathbf{x}_{0j}^T\mathbf{M}_0 \delta\mathbf{x}_i\right) \mathbf{x}_{0j} , \quad \text{where} \quad \mathbf{x}_{0j}^T\mathbf{M}_0 \delta\mathbf{x}_i =\frac{\mathbf{x}_{0j}^T(\delta \mathbf{K} - \lambda_{0i}\delta \mathbf{M})\mathbf{x}_{0i}}{\lambda_{0i}-\lambda_{0j}} , \ j \neq i , \quad \text{and} \quad 2\,\mathbf{x}_{0i}^T\mathbf{M}_0 \delta\mathbf{x}_i = -\mathbf{x}_{0i}^T \delta \mathbf{M} \mathbf{x}_{0i}.
This is the first order expansion of the perturbed eigenvalues and eigenvectors, which was to be proved.


Results of sensitivity analysis with respect to the entries of the matrices


The results

This means it is possible to efficiently do a sensitivity analysis on \lambda_i as a function of changes in the entries of the matrices. (Recall that the matrices are symmetric and so changing \mathbf{K}_{(k\ell)} will also change \mathbf{K}_{(\ell k)} , hence the (2-\delta_{k\ell}) term.)
:\begin{align}
\frac{\partial \lambda_i}{\partial \mathbf{K}_{(k\ell)}} &= \frac{\partial}{\partial \mathbf{K}_{(k\ell)}}\left(\lambda_{0i} + \mathbf{x}^\top_{0i} \left (\delta \mathbf{K} - \lambda_{0i} \delta \mathbf{M} \right ) \mathbf{x}_{0i} \right) = x_{0i(k)} x_{0i(\ell)} \left (2 - \delta_{k\ell} \right ) \\
\frac{\partial \lambda_i}{\partial \mathbf{M}_{(k\ell)}} &= \frac{\partial}{\partial \mathbf{M}_{(k\ell)}}\left(\lambda_{0i} + \mathbf{x}^\top_{0i} \left (\delta \mathbf{K} - \lambda_{0i} \delta \mathbf{M} \right ) \mathbf{x}_{0i}\right) = - \lambda_{i} x_{0i(k)} x_{0i(\ell)} \left (2- \delta_{k\ell} \right ).
\end{align}
Similarly
:\begin{align}
\frac{\partial\mathbf{x}_i}{\partial \mathbf{K}_{(k\ell)}} &= \sum_{j=1,\, j\neq i}^N \frac{x_{0j(k)} x_{0i(\ell)} \left (2-\delta_{k\ell} \right )}{\lambda_{0i}-\lambda_{0j}}\mathbf{x}_{0j} \\
\frac{\partial \mathbf{x}_i}{\partial \mathbf{M}_{(k\ell)}} &= -\mathbf{x}_{0i}\frac{x_{0i(k)} x_{0i(\ell)}}{2}(2-\delta_{k\ell}) - \sum_{j=1,\, j\neq i}^N \frac{\lambda_{0i} x_{0j(k)} x_{0i(\ell)}}{\lambda_{0i}-\lambda_{0j}}\mathbf{x}_{0j} \left (2-\delta_{k\ell} \right ).
\end{align}
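The eigenvalue sensitivity formula can be checked against a finite difference; the matrices below are assumptions chosen only for illustration, and the entries K_{(k\ell)} and K_{(\ell k)} are bumped together, which is what the (2-\delta_{k\ell}) factor accounts for:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
M = A @ A.T + 4 * np.eye(4)            # assumed symmetric positive definite mass matrix
K = np.diag([1.0, 2.0, 4.0, 8.0])      # assumed symmetric rigidity matrix

lam, X = eigh(K, M)                    # X^T M X = I
i, k, l = 0, 0, 2                      # eigenvalue i, entry (k, l) of K

# Analytic sensitivity: d lambda_i / d K_kl = x_i[k] * x_i[l] * (2 - delta_kl)
analytic = X[k, i] * X[l, i] * (2 - (k == l))

# Finite difference: perturb K_kl and K_lk together so K stays symmetric
h = 1e-7
dK = np.zeros_like(K)
dK[k, l] += h
dK[l, k] += h * (k != l)               # do not double-count a diagonal entry
lam_p, _ = eigh(K + dK, M)
numeric = (lam_p[i] - lam[i]) / h

print(analytic, numeric)               # the two values should be very close
```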


Eigenvalue sensitivity, a small example

A simple case is K=\begin{bmatrix} 2 & b \\ b & 0 \end{bmatrix} with M=I ; one can compute the eigenvalues and eigenvectors by hand or with the help of online tools such as WIMS (WWW Interactive Multipurpose Server) or using SageMath. You get the smallest eigenvalue \lambda=1- \sqrt{b^2 +1} and an explicit computation gives \frac{\partial \lambda}{\partial b}=\frac{-b}{\sqrt{b^2+1}} ; moreover, an associated eigenvector is \tilde x_0=[b ,-(\sqrt{b^2+1}+1)]^T ; it is not a unit vector, so x_{0(1)}x_{0(2)} = \tilde x_{0(1)} \tilde x_{0(2)}/\| \tilde x_0 \|^2 ; we get \| \tilde x_0 \|^2=2 \sqrt{b^2+1}(\sqrt{b^2+1}+1) and \tilde x_{0(1)} \tilde x_{0(2)} =-b (\sqrt{b^2+1}+1) ; hence x_{0(1)} x_{0(2)}=-\frac{b}{2\sqrt{b^2+1}} ; for this example, we have checked that \frac{\partial \lambda}{\partial b}= 2x_{0(1)} x_{0(2)} , or \delta \lambda=2x_{0(1)} x_{0(2)} \,\delta b .
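A short symbolic check of this example with SymPy (a sketch; the eigenvalue and eigenvector expressions are the ones derived above):

```python
import sympy as sp

b = sp.symbols('b', positive=True)
K = sp.Matrix([[2, b], [b, 0]])

lam = 1 - sp.sqrt(b**2 + 1)                            # smallest eigenvalue
assert sp.simplify((K - lam * sp.eye(2)).det()) == 0   # it satisfies det(K - lam*I) = 0

dlam_db = sp.diff(lam, b)                              # -b/sqrt(b**2 + 1)

x_t = sp.Matrix([b, -(sp.sqrt(b**2 + 1) + 1)])         # unnormalized eigenvector for lam
x = x_t / x_t.norm()                                   # normalize so that x^T x = 1
formula = 2 * x[0] * x[1]                              # sensitivity formula 2*x1*x2

# the two expressions agree for every b; checked here at b = 3/4 (both give -3/5)
print(dlam_db.subs(b, sp.Rational(3, 4)), formula.subs(b, sp.Rational(3, 4)))
```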


Existence of eigenvectors

Note that in the above example we assumed that both the unperturbed and the perturbed systems involved symmetric matrices, which guaranteed the existence of N linearly independent eigenvectors. An eigenvalue problem involving non-symmetric matrices is not guaranteed to have N linearly independent eigenvectors, though a sufficient condition is that \mathbf{K} and \mathbf{M} be simultaneously diagonalizable.


The case of repeated eigenvalues

A technical report of Rellich for perturbation of eigenvalue problems provides several examples; the elementary examples are in chapter 2. The report may be downloaded from archive.org. We draw from it an example in which the eigenvectors have a nasty behavior.


Example 1

Consider the following matrix
:B(\epsilon)= \begin{bmatrix} \cos(2/\epsilon) & \sin(2/\epsilon) \\ \sin(2/\epsilon) & -\cos(2/\epsilon) \end{bmatrix} \quad \text{and} \quad A(\epsilon)=I- e^{-1/\epsilon^2} B(\epsilon); \qquad A(0)=I.
For \epsilon \neq 0 , the matrix A(\epsilon) has eigenvectors \Phi^1= [\cos(1/\epsilon), \sin(1/\epsilon)]^T and \Phi^2= [\sin(1/\epsilon), -\cos(1/\epsilon)]^T belonging to eigenvalues \lambda_1= 1-e^{-1/\epsilon^2} and \lambda_2= 1+e^{-1/\epsilon^2} . Since \lambda_1 \neq \lambda_2 for \epsilon \neq 0 , if u^j (\epsilon), j= 1,2 , are any normalized eigenvectors belonging to \lambda_j(\epsilon), j=1,2 , respectively, then u^j(\epsilon)=e^{i\alpha_j(\epsilon)} \Phi^j(\epsilon) where \alpha_j(\epsilon) , j=1,2 , are real for \epsilon \neq 0 . It is obviously impossible to define \alpha_1(\epsilon) , say, in such a way that u^1 (\epsilon) tends to a limit as \epsilon \rightarrow 0 , because |u^1_1(\epsilon)| = |\cos(1/\epsilon)| has no limit as \epsilon \rightarrow 0 . Note in this example that A_{jk} (\epsilon) is not only continuous but also has continuous derivatives of all orders. Rellich draws the following important consequence: << Since in general the individual eigenvectors do not depend continuously on the perturbation parameter even though the operator A(\epsilon) does, it is necessary to work, not with an eigenvector, but rather with the space spanned by all the eigenvectors belonging to the same eigenvalue. >>
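The oscillation can be seen numerically (a small sketch using the matrix A(\epsilon) reconstructed above): as \epsilon decreases, the eigenvector of the smallest eigenvalue keeps turning like (\cos(1/\epsilon), \sin(1/\epsilon)) and never settles, even though A(\epsilon) \to I smoothly.

```python
import numpy as np

def A(eps):
    """Rellich-type example: A(eps) = I - exp(-1/eps**2) * B(eps), with A(0) = I."""
    B = np.array([[np.cos(2 / eps),  np.sin(2 / eps)],
                  [np.sin(2 / eps), -np.cos(2 / eps)]])
    return np.eye(2) - np.exp(-1 / eps**2) * B

for eps in [0.5, 0.4, 0.3, 0.2]:
    w, V = np.linalg.eigh(A(eps))
    # eigenvector of the smaller eigenvalue; its direction does not converge as eps -> 0
    print(f"eps={eps:.1f}  lambda_1={w[0]:.12f}  u1={np.round(V[:, 0], 3)}")
```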


Example 2

This example is less nasty than the previous one. Suppose A_0 is the 2\times 2 identity matrix; any vector is an eigenvector, and then u_0= [1, 1]^T/\sqrt{2} is one possible eigenvector. But if one makes a small perturbation, such as
:A = A_0+ \begin{bmatrix}\epsilon & 0 \\0 & 0 \end{bmatrix} ,
then the eigenvectors are v_1= [1, 0]^T and v_2= [0, 1]^T ; they are constant with respect to \epsilon , so that \| u_0-v_1 \| is constant and does not go to zero.
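Numerically (a minimal sketch of this example), the distance from u_0 to the nearest eigenvector of the perturbed matrix stays bounded away from zero no matter how small \epsilon is:

```python
import numpy as np

u0 = np.array([1.0, 1.0]) / np.sqrt(2)      # one eigenvector of the 2x2 identity matrix

for eps in [1e-1, 1e-4, 1e-8]:
    A = np.eye(2) + np.diag([eps, 0.0])     # perturbed matrix
    _, V = np.linalg.eigh(A)                # columns are the eigenvectors (+/- e1 and e2)
    dist = min(np.linalg.norm(u0 - s * V[:, j]) for j in range(2) for s in (+1, -1))
    print(eps, dist)                        # stays near 0.765 = sqrt(2 - sqrt(2)), not 0
```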


See also

*
Perturbation theory (quantum mechanics)
*
Bauer–Fike theorem


References

* .


Further reading


Books

* Bhatia, R. (1987). Perturbation bounds for matrix eigenvalues. SIAM.


Report

* Rellich, Franz (1954). Perturbation theory of eigenvalue problems. New York: Courant Institute of Mathematical Sciences, New York University.


Journal papers

* Simon, B. (1982). Large orders and summability of eigenvalue perturbation theory: a mathematical overview. International Journal of Quantum Chemistry, 21(1), 3-25.
* Crandall, M. G., & Rabinowitz, P. H. (1973). Bifurcation, perturbation of simple eigenvalues, and linearized stability. Archive for Rational Mechanics and Analysis, 52(2), 161-180.
* Stewart, G. W. (1973). Error and perturbation bounds for subspaces associated with certain eigenvalue problems. SIAM Review, 15(4), 727-764.
* Löwdin, P. O. (1962). Studies in perturbation theory. IV. Solution of eigenvalue problem by projection operator formalism. Journal of Mathematical Physics, 3(5), 969-982.