Tridiagonal matrix
In linear algebra, a tridiagonal matrix is a band matrix that has nonzero elements only on the main diagonal, the subdiagonal (the first diagonal below the main diagonal), and the superdiagonal (the first diagonal above the main diagonal). For example, the following matrix is tridiagonal:
:\begin{pmatrix}
1 & 4 & 0 & 0 \\
3 & 4 & 1 & 0 \\
0 & 2 & 3 & 4 \\
0 & 0 & 1 & 3
\end{pmatrix}.
The determinant of a tridiagonal matrix is given by the ''continuant'' of its elements. An orthogonal transformation of a symmetric (or Hermitian) matrix to tridiagonal form can be done with the Lanczos algorithm.


Properties

A tridiagonal matrix is a matrix that is both upper and lower Hessenberg. In particular, a tridiagonal matrix is a direct sum of ''p'' 1-by-1 and ''q'' 2-by-2 matrices such that ''p'' + 2''q'' = ''n'', the dimension of the tridiagonal matrix. Although a general tridiagonal matrix is not necessarily symmetric or Hermitian, many of those that arise when solving linear algebra problems have one of these properties. Furthermore, if a real tridiagonal matrix ''A'' satisfies ''a''''k'',''k''+1 ''a''''k''+1,''k'' > 0 for all ''k'', so that the signs of its entries are symmetric, then it is similar to a Hermitian matrix by a diagonal change-of-basis matrix; hence its eigenvalues are real. If the strict inequality is relaxed to ''a''''k'',''k''+1 ''a''''k''+1,''k'' ≥ 0, then by continuity the eigenvalues are still guaranteed to be real, but the matrix need no longer be similar to a Hermitian matrix. The set of all ''n'' × ''n'' tridiagonal matrices forms a (3''n'' − 2)-dimensional vector space. Many linear algebra algorithms require significantly less computational effort when applied to diagonal matrices, and this improvement often carries over to tridiagonal matrices as well.


Determinant

The determinant of a tridiagonal matrix ''A'' of order ''n'' can be computed from a three-term recurrence relation. Write ''f''1 = |''a''1| = ''a''1 (i.e., ''f''1 is the determinant of the 1-by-1 matrix consisting only of ''a''1), and let
:f_n = \begin{vmatrix}
a_1 & b_1 \\
c_1 & a_2 & b_2 \\
& c_2 & \ddots & \ddots \\
& & \ddots & \ddots & b_{n-1} \\
& & & c_{n-1} & a_n
\end{vmatrix}.
The sequence (''f''''i'') is called the ''continuant'' and satisfies the recurrence relation
:f_n = a_n f_{n-1} - c_{n-1} b_{n-1} f_{n-2}
with initial values ''f''0 = 1 and ''f''−1 = 0. The cost of computing the determinant of a tridiagonal matrix using this formula is linear in ''n'', while the cost is cubic for a general matrix.
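The three-term recurrence translates directly into a linear-time routine. The following is a minimal Python sketch (the function name `tridiag_det` is illustrative, not from the source):

```python
def tridiag_det(a, b, c):
    """Determinant of the tridiagonal matrix with diagonal a (a_1..a_n),
    super-diagonal b (b_1..b_{n-1}) and sub-diagonal c (c_1..c_{n-1}),
    via the continuant recurrence f_k = a_k f_{k-1} - c_{k-1} b_{k-1} f_{k-2}
    with f_0 = 1 and f_{-1} = 0."""
    f_prev, f_curr = 0.0, 1.0          # f_{-1}, f_0
    for k in range(len(a)):
        f_prev, f_curr = f_curr, (
            a[k] * f_curr - (c[k - 1] * b[k - 1] * f_prev if k > 0 else 0.0)
        )
    return f_curr
```

Applied to the 4-by-4 example from the introduction (diagonal 1, 4, 3, 3; superdiagonal 4, 1, 4; subdiagonal 3, 2, 1), the recurrence gives ''f''1 = 1, ''f''2 = −8, ''f''3 = −26, ''f''4 = −46, using ''n'' multiplications of constant count per step instead of O(''n''3) work.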


Inversion

The inverse of a non-singular tridiagonal matrix ''T''
:T = \begin{pmatrix}
a_1 & b_1 \\
c_1 & a_2 & b_2 \\
& c_2 & \ddots & \ddots \\
& & \ddots & \ddots & b_{n-1} \\
& & & c_{n-1} & a_n
\end{pmatrix}
is given by
:(T^{-1})_{ij} = \begin{cases}
(-1)^{i+j} b_i \cdots b_{j-1} \, \theta_{i-1} \phi_{j+1} / \theta_n & \text{if } i < j \\
\theta_{i-1} \phi_{i+1} / \theta_n & \text{if } i = j \\
(-1)^{i+j} c_j \cdots c_{i-1} \, \theta_{j-1} \phi_{i+1} / \theta_n & \text{if } i > j
\end{cases}
where the ''θ''''i'' satisfy the recurrence relation
:\theta_i = a_i \theta_{i-1} - b_{i-1} c_{i-1} \theta_{i-2}, \qquad i = 2, 3, \ldots, n
with initial conditions ''θ''0 = 1, ''θ''1 = ''a''1, and the ''φ''''i'' satisfy
:\phi_i = a_i \phi_{i+1} - b_i c_i \phi_{i+2}, \qquad i = n-1, \ldots, 1
with initial conditions ''φ''''n''+1 = 1 and ''φ''''n'' = ''a''''n''. Closed-form solutions can be computed for special cases such as symmetric matrices with all diagonal and off-diagonal elements equal, or Toeplitz matrices, and for the general case as well. In general, the inverse of a tridiagonal matrix is a semiseparable matrix and vice versa.
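The θ/φ recurrences and the case split above can be implemented in a few lines. This is a minimal Python sketch (function name `tridiag_inverse` is illustrative; the cumulative products make it O(''n''3) in the worst case, so it is meant to mirror the formula, not to be the fastest route):

```python
def tridiag_inverse(a, b, c):
    """Inverse of a nonsingular tridiagonal matrix with diagonal a (a_1..a_n),
    super-diagonal b (b_1..b_{n-1}) and sub-diagonal c (c_1..c_{n-1}),
    built entry-by-entry from the theta/phi recurrences."""
    n = len(a)
    # theta[i] = theta_i for i = 0..n; theta_0 = 1, theta_1 = a_1
    theta = [0.0] * (n + 1)
    theta[0], theta[1] = 1.0, a[0]
    for i in range(2, n + 1):
        theta[i] = a[i - 1] * theta[i - 1] - b[i - 2] * c[i - 2] * theta[i - 2]
    # phi[i] = phi_i for i = 1..n+1; phi_{n+1} = 1, phi_n = a_n
    phi = [0.0] * (n + 2)
    phi[n + 1], phi[n] = 1.0, a[n - 1]
    for i in range(n - 1, 0, -1):
        phi[i] = a[i - 1] * phi[i + 1] - b[i - 1] * c[i - 1] * phi[i + 2]
    inv = [[0.0] * n for _ in range(n)]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if i == j:
                v = theta[i - 1] * phi[i + 1] / theta[n]
            elif i < j:
                prod = 1.0
                for k in range(i, j):          # b_i * ... * b_{j-1}
                    prod *= b[k - 1]
                v = (-1) ** (i + j) * prod * theta[i - 1] * phi[j + 1] / theta[n]
            else:
                prod = 1.0
                for k in range(j, i):          # c_j * ... * c_{i-1}
                    prod *= c[k - 1]
                v = (-1) ** (i + j) * prod * theta[j - 1] * phi[i + 1] / theta[n]
            inv[i - 1][j - 1] = v
    return inv
```

Note that ''θ''''n'' is exactly the determinant ''f''''n'' from the previous section, so a zero value signals a singular matrix.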


Solution of linear system

A system of equations ''Ax'' = ''b'' for b \in \R^n can be solved by an efficient form of Gaussian elimination when ''A'' is tridiagonal, called the tridiagonal matrix algorithm (Thomas algorithm), requiring only ''O''(''n'') operations.
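The algorithm is a forward elimination sweep followed by back substitution. A minimal Python sketch (function name `thomas_solve` is illustrative; no pivoting, so it assumes the eliminations never hit a zero pivot, e.g. for diagonally dominant systems):

```python
def thomas_solve(sub, diag, sup, rhs):
    """Solve T x = rhs for tridiagonal T with sub-diagonal sub (length n-1),
    diagonal diag (length n) and super-diagonal sup (length n-1),
    in O(n) operations via the Thomas algorithm."""
    n = len(diag)
    cp = [0.0] * (n - 1)   # modified super-diagonal
    dp = [0.0] * n         # modified right-hand side
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):  # forward elimination
        denom = diag[i] - sub[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = sup[i] / denom
        dp[i] = (rhs[i] - sub[i - 1] * dp[i - 1]) / denom
    x = [0.0] * n          # back substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

For example, the system with diagonal (2, 2, 2), off-diagonals all 1, and right-hand side (3, 4, 3) yields the solution (1, 1, 1).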


Eigenvalues

When a tridiagonal matrix is also Toeplitz, there is a simple closed-form solution for its eigenvalues, namely:
: a + 2 \sqrt{bc} \cos \left( \frac{k \pi}{n+1} \right), \qquad k = 1, \ldots, n.
A real symmetric tridiagonal matrix has real eigenvalues, and all the eigenvalues are distinct (simple) if all off-diagonal elements are nonzero. Numerous methods exist for the numerical computation of the eigenvalues of a real symmetric tridiagonal matrix to arbitrary finite precision, typically requiring O(n^2) operations for a matrix of size n \times n, although fast algorithms exist which (without parallel computation) require only O(n \log n). As a side note, an ''unreduced'' symmetric tridiagonal matrix is one whose off-diagonal elements are all nonzero; for such a matrix the eigenvalues are distinct, and the eigenvectors are unique up to a scale factor and mutually orthogonal.
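The Toeplitz closed form can be evaluated directly. A minimal Python sketch (function name `toeplitz_tridiag_eigs` is illustrative; it assumes ''bc'' ≥ 0 so the square root is real):

```python
import math

def toeplitz_tridiag_eigs(a, b, c, n):
    """Eigenvalues a + 2*sqrt(b*c)*cos(k*pi/(n+1)), k = 1..n, of the
    n x n tridiagonal Toeplitz matrix with constant diagonal a,
    super-diagonal b and sub-diagonal c (assumes b*c >= 0)."""
    return [a + 2.0 * math.sqrt(b * c) * math.cos(k * math.pi / (n + 1))
            for k in range(1, n + 1)]
```

For the 2-by-2 matrix with diagonal 2 and off-diagonals −1 this gives 2 + 2 cos(π/3) = 3 and 2 + 2 cos(2π/3) = 1, matching the eigenvalues of [[2, −1], [−1, 2]].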


Similarity to symmetric tridiagonal matrix

For ''unsymmetric'' or ''nonsymmetric'' tridiagonal matrices one can compute the eigendecomposition using a similarity transformation. Given a real tridiagonal, nonsymmetric matrix
:T = \begin{pmatrix}
a_1 & b_1 \\
c_1 & a_2 & b_2 \\
& c_2 & \ddots & \ddots \\
& & \ddots & \ddots & b_{n-1} \\
& & & c_{n-1} & a_n
\end{pmatrix}
where b_i \neq c_i, assume that each product of off-diagonal entries is positive, b_i c_i > 0, and define a transformation matrix D by
:D := \operatorname{diag}(\delta_1, \dots, \delta_n) \quad \text{for} \quad \delta_i := \begin{cases}
1, & i = 1 \\
\sqrt{\dfrac{c_1 \cdots c_{i-1}}{b_1 \cdots b_{i-1}}}, & i = 2, \dots, n.
\end{cases}
The similarity transformation D^{-1} T D yields a ''symmetric'' tridiagonal matrix J:
:J := D^{-1} T D = \begin{pmatrix}
a_1 & \sgn b_1 \, \sqrt{b_1 c_1} \\
\sgn b_1 \, \sqrt{b_1 c_1} & a_2 & \sgn b_2 \, \sqrt{b_2 c_2} \\
& \sgn b_2 \, \sqrt{b_2 c_2} & \ddots & \ddots \\
& & \ddots & \ddots & \sgn b_{n-1} \, \sqrt{b_{n-1} c_{n-1}} \\
& & & \sgn b_{n-1} \, \sqrt{b_{n-1} c_{n-1}} & a_n
\end{pmatrix}.
Note that T and J have the same eigenvalues.
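Since J shares T's diagonal and its off-diagonal entries are just \sgn b_i \sqrt{b_i c_i}, the transformation never needs to form D explicitly. A minimal Python sketch (function name `symmetrize_tridiag` is illustrative):

```python
import math

def symmetrize_tridiag(a, b, c):
    """Given a real tridiagonal T (diagonal a, super-diagonal b,
    sub-diagonal c) with b_i * c_i > 0 for all i, return the diagonal
    and off-diagonal of the symmetric J = D^{-1} T D, which has the
    same eigenvalues as T."""
    off = [math.copysign(math.sqrt(b[i] * c[i]), b[i]) for i in range(len(b))]
    return list(a), off
```

For instance, a matrix with superdiagonal (4, 1) and subdiagonal (1, 4) symmetrizes to off-diagonal (2, 2); the products ''b''''i''''c''''i'' are unchanged, so the continuant recurrence, and hence the characteristic polynomial, is preserved.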


Computer programming

A transformation that reduces a general matrix to Hessenberg form will reduce a Hermitian matrix to tridiagonal form. So, many eigenvalue algorithms, when applied to a Hermitian matrix, reduce the input Hermitian matrix to (symmetric real) tridiagonal form as a first step. A tridiagonal matrix can also be stored more efficiently than a general matrix by using a special storage scheme. For instance, the LAPACK Fortran package stores an unsymmetric tridiagonal matrix of order ''n'' in three one-dimensional arrays: one of length ''n'' containing the diagonal elements, and two of length ''n'' − 1 containing the subdiagonal and superdiagonal elements.
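The three-array storage scheme supports matrix operations without ever materializing the zeros. A minimal Python sketch of a matrix-vector product in this layout (function name `tridiag_matvec` is illustrative):

```python
def tridiag_matvec(dl, d, du, x):
    """y = T x with T stored in three arrays, LAPACK-style:
    dl (sub-diagonal, length n-1), d (diagonal, length n),
    du (super-diagonal, length n-1). O(n) time and O(n) storage."""
    n = len(d)
    y = [d[i] * x[i] for i in range(n)]      # diagonal contribution
    for i in range(n - 1):
        y[i] += du[i] * x[i + 1]             # super-diagonal contribution
        y[i + 1] += dl[i] * x[i]             # sub-diagonal contribution
    return y
```

Applying this to the 4-by-4 example from the introduction with ''x'' = (1, 1, 1, 1) returns the row sums (5, 8, 9, 4), while touching only the 3''n'' − 2 stored entries.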


Applications

The discretization in space of the one-dimensional diffusion or heat equation
:\frac{\partial u(t,x)}{\partial t} = \alpha \frac{\partial^2 u(t,x)}{\partial x^2}
using second-order central finite differences results in
:\begin{pmatrix}
\frac{\partial u_1(t)}{\partial t} \\
\frac{\partial u_2(t)}{\partial t} \\
\vdots \\
\frac{\partial u_N(t)}{\partial t}
\end{pmatrix} = \frac{\alpha}{\Delta x^2} \begin{pmatrix}
-2 & 1 & 0 & \ldots & 0 \\
1 & -2 & 1 & \ddots & \vdots \\
0 & \ddots & \ddots & \ddots & 0 \\
\vdots & & 1 & -2 & 1 \\
0 & \ldots & 0 & 1 & -2
\end{pmatrix} \begin{pmatrix}
u_1(t) \\
u_2(t) \\
\vdots \\
u_N(t)
\end{pmatrix}
with discretization constant \Delta x. The matrix is tridiagonal with a_i = -2 and b_i = c_i = 1. Note: no boundary conditions are used here.


See also

* Pentadiagonal matrix
* Jacobi matrix (operator)


Notes


External links


* Tridiagonal and Bidiagonal Matrices in the LAPACK manual.
* High performance algorithms for reduction to condensed (Hessenberg, tridiagonal, bidiagonal) form
* Tridiagonal linear system solver in C++