Thomas algorithm
In numerical linear algebra, the tridiagonal matrix algorithm, also known as the Thomas algorithm (named after Llewellyn Thomas), is a simplified form of Gaussian elimination that can be used to solve tridiagonal systems of equations. A tridiagonal system for ''n'' unknowns may be written as

: a_i x_{i-1} + b_i x_i + c_i x_{i+1} = d_i,

where a_1 = 0 and c_n = 0.

: \begin{bmatrix} b_1 & c_1 & & & 0 \\ a_2 & b_2 & c_2 & & \\ & a_3 & b_3 & \ddots & \\ & & \ddots & \ddots & c_{n-1} \\ 0 & & & a_n & b_n \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} d_1 \\ d_2 \\ d_3 \\ \vdots \\ d_n \end{bmatrix}.

For such systems, the solution can be obtained in O(n) operations instead of the O(n^3) required by Gaussian elimination. A first sweep eliminates the a_i's, and then an (abbreviated) backward substitution produces the solution. Examples of such matrices commonly arise from the discretization of the 1D Poisson equation and natural cubic spline interpolation.

Thomas' algorithm is not stable in general, but is so in several special cases, such as when the matrix is diagonally dominant (either by rows or columns) or symmetric positive definite; for a more precise characterization of the stability of Thomas' algorithm, see Higham, Theorem 9.12. If stability is required in the general case, Gaussian elimination with partial pivoting (GEPP) is recommended instead.


Method

The forward sweep consists of the computation of new coefficients as follows, denoting the new coefficients with primes:

: c'_i = \begin{cases} \cfrac{c_i}{b_i}, & i = 1, \\ \cfrac{c_i}{b_i - a_i c'_{i-1}}, & i = 2, 3, \dots, n - 1 \end{cases}

and

: d'_i = \begin{cases} \cfrac{d_i}{b_i}, & i = 1, \\ \cfrac{d_i - a_i d'_{i-1}}{b_i - a_i c'_{i-1}}, & i = 2, 3, \dots, n. \end{cases}

The solution is then obtained by back substitution:

: x_n = d'_n,
: x_i = d'_i - c'_i x_{i+1}, \quad i = n - 1, n - 2, \ldots, 1.

The method above does not modify the original coefficient vectors, but must also keep track of the new coefficients. If the coefficient vectors may be modified, then an algorithm with less bookkeeping is:

For i = 2, 3, \dots, n, do

: w = \cfrac{a_i}{b_{i-1}},
: b_i := b_i - w c_{i-1},
: d_i := d_i - w d_{i-1},

followed by the back substitution

: x_n = \cfrac{d_n}{b_n},
: x_i = \cfrac{d_i - c_i x_{i+1}}{b_i} \quad \text{for } i = n - 1, n - 2, \dots, 1.

The implementation in a VBA subroutine without preserving the coefficient vectors:

 Sub TriDiagonal_Matrix_Algorithm(N%, A#(), B#(), C#(), D#(), X#())
     Dim i%, W#
     For i = 2 To N
         W = A(i) / B(i - 1)
         B(i) = B(i) - W * C(i - 1)
         D(i) = D(i) - W * D(i - 1)
     Next i
     X(N) = D(N) / B(N)
     For i = N - 1 To 1 Step -1
         X(i) = (D(i) - C(i) * X(i + 1)) / B(i)
     Next i
 End Sub
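For comparison, the following is a minimal C sketch of the first variant above, which preserves the original coefficient vectors by writing the primed coefficients into scratch arrays. The routine name thomas_solve, the 0-based indexing, and the absence of zero-pivot checks are choices made for this illustration, not part of the algorithm statement:

 #include <stdlib.h>
 
 /* Thomas algorithm, non-destructive variant.  a, b, c are the sub-,
  * main and super-diagonals (a[0] and c[n-1] are unused), d is the
  * right-hand side, and the solution is written to x.  The primed
  * coefficients go into scratch arrays, so the inputs stay untouched. */
 void thomas_solve(int n, const double *a, const double *b,
                   const double *c, const double *d, double *x)
 {
     double *cp = malloc(n * sizeof *cp);   /* c'_i */
     double *dp = malloc(n * sizeof *dp);   /* d'_i */
 
     /* Forward sweep: compute the primed coefficients. */
     cp[0] = c[0] / b[0];
     dp[0] = d[0] / b[0];
     for (int i = 1; i < n; i++) {
         double m = b[i] - a[i] * cp[i - 1];
         cp[i] = c[i] / m;                  /* unused when i == n - 1 */
         dp[i] = (d[i] - a[i] * dp[i - 1]) / m;
     }
 
     /* Back substitution. */
     x[n - 1] = dp[n - 1];
     for (int i = n - 2; i >= 0; i--)
         x[i] = dp[i] - cp[i] * x[i + 1];
 
     free(cp);
     free(dp);
 }

The two scratch vectors are exactly the extra bookkeeping mentioned above; the in-place VBA version avoids them at the cost of overwriting B and D.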


Derivation

The derivation of the tridiagonal matrix algorithm is a special case of Gaussian elimination.

Suppose that the unknowns are x_1, \ldots, x_n, and that the equations to be solved are:

: \begin{alignat}{4} & && b_1 x_1 && + c_1 x_2 && = d_1 \\ & a_i x_{i-1} && + b_i x_i && + c_i x_{i+1} && = d_i \,, \quad i = 2, \ldots, n - 1 \\ & a_n x_{n-1} && + b_n x_n && && = d_n \,. \end{alignat}

Consider modifying the second (i = 2) equation with the first equation as follows:

: (\text{equation 2}) \cdot b_1 - (\text{equation 1}) \cdot a_2

which would give:

: (b_2 b_1 - c_1 a_2) x_2 + c_2 b_1 x_3 = d_2 b_1 - d_1 a_2.

Note that x_1 has been eliminated from the second equation. Using a similar tactic with the modified second equation on the third equation yields:

: (b_3 (b_2 b_1 - c_1 a_2) - c_2 b_1 a_3) x_3 + c_3 (b_2 b_1 - c_1 a_2) x_4 = d_3 (b_2 b_1 - c_1 a_2) - (d_2 b_1 - d_1 a_2) a_3.

This time x_2 was eliminated. If this procedure is repeated until the n-th row, the (modified) n-th equation will involve only one unknown, x_n. This may be solved for and then used to solve the (n-1)-th equation, and so on until all of the unknowns are solved for.

Clearly, the coefficients on the modified equations get more and more complicated if stated explicitly. By examining the procedure, the modified coefficients (notated with tildes) may instead be defined recursively:

: \tilde a_i = 0
: \tilde b_1 = b_1
: \tilde b_i = b_i \tilde b_{i-1} - \tilde c_{i-1} a_i
: \tilde c_1 = c_1
: \tilde c_i = c_i \tilde b_{i-1}
: \tilde d_1 = d_1
: \tilde d_i = d_i \tilde b_{i-1} - \tilde d_{i-1} a_i.

To further hasten the solution process, \tilde b_i may be divided out (if there is no risk of division by zero); the newer modified coefficients, each notated with a prime, will be:

: a'_i = 0
: b'_i = 1
: c'_1 = \frac{c_1}{b_1}
: c'_i = \frac{c_i}{b_i - a_i c'_{i-1}}
: d'_1 = \frac{d_1}{b_1}
: d'_i = \frac{d_i - a_i d'_{i-1}}{b_i - a_i c'_{i-1}}.

This gives the following system with the same unknowns and coefficients defined in terms of the original ones above:

: \begin{alignedat}{2} x_i + c'_i x_{i+1} &= d'_i \,, \qquad & i &= 1, \ldots, n - 1 \\ x_n &= d'_n \,, \qquad & i &= n. \end{alignedat}

The last equation involves only one unknown. Solving it in turn reduces the next-to-last equation to one unknown, so that this backward substitution can be used to find all of the unknowns:

: x_n = d'_n
: x_i = d'_i - c'_i x_{i+1} \qquad ; \ i = n - 1, n - 2, \ldots, 1.
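As a quick sanity check of the primed recursion, consider the following 3 × 3 system (the numbers here are chosen purely for illustration and are not from the original presentation):

: \begin{bmatrix} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 3 \\ 4 \\ 3 \end{bmatrix}.

The forward sweep gives

: c'_1 = \tfrac{1}{2}, \quad d'_1 = \tfrac{3}{2}, \quad c'_2 = \frac{1}{2 - 1 \cdot \tfrac{1}{2}} = \tfrac{2}{3}, \quad d'_2 = \frac{4 - 1 \cdot \tfrac{3}{2}}{2 - 1 \cdot \tfrac{1}{2}} = \tfrac{5}{3}, \quad d'_3 = \frac{3 - 1 \cdot \tfrac{5}{3}}{2 - 1 \cdot \tfrac{2}{3}} = 1,

and back substitution then yields x_3 = 1, x_2 = \tfrac{5}{3} - \tfrac{2}{3} \cdot 1 = 1, and x_1 = \tfrac{3}{2} - \tfrac{1}{2} \cdot 1 = 1, which indeed satisfies all three equations.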


Variants

In some situations, particularly those involving periodic boundary conditions, a slightly perturbed form of the tridiagonal system may need to be solved:

: \begin{alignat}{4} & a_1 x_n && + b_1 x_1 && + c_1 x_2 && = d_1 \\ & a_i x_{i-1} && + b_i x_i && + c_i x_{i+1} && = d_i \,, \quad i = 2, \ldots, n - 1 \\ & a_n x_{n-1} && + b_n x_n && + c_n x_1 && = d_n \,. \end{alignat}

In this case, we can make use of the Sherman–Morrison formula to avoid the additional operations of Gaussian elimination and still use the Thomas algorithm. The method requires solving a modified non-cyclic version of the system for both the input and a sparse corrective vector, and then combining the solutions. This can be done efficiently if both solutions are computed at once, as the forward portion of the pure tridiagonal matrix algorithm can be shared.

If we indicate by:

: A = \begin{bmatrix} b_1 & c_1 & & & a_1 \\ a_2 & b_2 & c_2 & & \\ & a_3 & b_3 & \ddots & \\ & & \ddots & \ddots & c_{n-1} \\ c_n & & & a_n & b_n \end{bmatrix} , \quad x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix} , \quad d = \begin{bmatrix} d_1 \\ d_2 \\ d_3 \\ \vdots \\ d_n \end{bmatrix},

then the system to be solved is:

: Ax = d.

In this case the coefficients a_1 and c_n are, generally speaking, non-zero, so their presence does not allow the Thomas algorithm to be applied directly. We can therefore consider B \in \mathbb{R}^{n \times n} and u, v \in \mathbb{R}^n as follows:

: B = \begin{bmatrix} b_1 - \gamma & c_1 & & & 0 \\ a_2 & b_2 & c_2 & & \\ & a_3 & b_3 & \ddots & \\ & & \ddots & \ddots & c_{n-1} \\ 0 & & & a_n & b_n - \cfrac{c_n a_1}{\gamma} \end{bmatrix}, \quad u = \begin{bmatrix} \gamma \\ 0 \\ \vdots \\ 0 \\ c_n \end{bmatrix} , \quad v = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \\ a_1/\gamma \end{bmatrix},

where \gamma \in \mathbb{R} is a parameter to be chosen. The matrix A can be reconstructed as A = B + u v^\mathsf{T}. The solution is then obtained in the following way: first we solve two tridiagonal systems of equations applying the Thomas algorithm:

: By = d \qquad \qquad Bq = u.

Then we reconstruct the solution x using the Sherman–Morrison formula:

: x = A^{-1} d = (B + u v^\mathsf{T})^{-1} d = B^{-1} d - \frac{B^{-1} u v^\mathsf{T} B^{-1}}{1 + v^\mathsf{T} B^{-1} u} d = y - \frac{v^\mathsf{T} y}{1 + v^\mathsf{T} q} q.
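A minimal C sketch of this approach follows; it reuses the thomas_solve() routine sketched in the Method section and takes \gamma = -b_1, a common choice (any value that keeps B non-singular is admissible). For simplicity it performs two separate Thomas solves rather than sharing the forward sweep:

 #include <stdlib.h>
 
 /* Cyclic (periodic) tridiagonal solve via the Sherman-Morrison formula.
  * In 0-based terms, a[0] couples x[0] to x[n-1] (a_1 above) and c[n-1]
  * couples x[n-1] to x[0] (c_n above). */
 void cyclic_thomas_solve(int n, const double *a, const double *b,
                          const double *c, const double *d, double *x)
 {
     double gamma = -b[0];                /* must be nonzero */
     double *bb = malloc(n * sizeof *bb); /* diagonal of B */
     double *u  = calloc(n, sizeof *u);   /* sparse corrective vector u */
     double *y  = malloc(n * sizeof *y);
     double *q  = malloc(n * sizeof *q);
 
     for (int i = 0; i < n; i++) bb[i] = b[i];
     bb[0]     = b[0] - gamma;                        /* b_1 - gamma        */
     bb[n - 1] = b[n - 1] - c[n - 1] * a[0] / gamma;  /* b_n - c_n a_1/gamma */
     u[0]      = gamma;
     u[n - 1]  = c[n - 1];
 
     /* Solve B y = d and B q = u with the plain Thomas algorithm; the
      * corner entries of B are zero, so a[0] and c[n-1] are ignored. */
     thomas_solve(n, a, bb, c, d, y);
     thomas_solve(n, a, bb, c, u, q);
 
     /* x = y - q (v^T y) / (1 + v^T q), with v = (1, 0, ..., 0, a_1/gamma)^T. */
     double vy = y[0] + a[0] / gamma * y[n - 1];
     double vq = q[0] + a[0] / gamma * q[n - 1];
     for (int i = 0; i < n; i++)
         x[i] = y[i] - vy / (1.0 + vq) * q[i];
 
     free(bb); free(u); free(y); free(q);
 }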
without preserving the coefficient vectors: typedef struct COEFFICIENTS; //Apply Thomas Alg., unknowns x ...,x void ThomasAlg(double x +1COEFFICIENTS* coeff) There is also another way to solve the slightly perturbed form of the tridiagonal system considered above. Let us consider two auxiliary linear systems of dimension (n-1) \times (n-1) : \begin \qquad \ \ \ \ \ b_2 u_ + c_2 u_3 &= d_2 \\ a_3 u_2 + b_3 u_3 + c_3 u_4 &= d_3 \\ a_i u_ + b_i u_i + c_i u_ &= d_i\\ \dots \\ a_n u_+ b_n u_n\qquad &= d_n \,. \end \quad i = 4, \ldots, n - 1 \qquad \qquad \begin \qquad \ \ \ \ \ b_2 v_ + c_2 v_3 &= -a_2 \\ a_3 v_2 + b_3 v_3 + c_3 v_4 &= 0 \\ a_i u_ + b_i u_i + c_i u_ &= 0\\ \dots \\ a_n v_+ b_n v_n\qquad &= -c_n \,. \end \quad i = 4, \ldots, n - 1 For convenience, we additionally define u_1 = 0 and v_1 = 1. We can now find the solutions \ and \ applying Thomas algorithm to the two auxiliary tridiagonal system. The solution \ can be then represented in the form: x_i = u_i + x_1 v_i \qquad i=1,2,\dots, n Indeed, multiplying each equation of the second auxiliary system by x_1, adding with the corresponding equation of the first auxiliary system and using the representation x_i = u_i + x_1 v_i, we immediately see that equations number 2 through n of the original system are satisfied; it only remains to satisfy equation number 1. To do so, consider formula for i=2 and i=n and substitute x_2 = u_2 + x_1 v_2and x_n = u_n + x_1 v_n into the first equation of the original system. This yields one scalar equation for x_1: b_1x_1+c_1(u_2+x_1v_2)+a_1(u_n+x_1v_n) = d_1 As such, we find: x_1 = \frac The implementation in
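A C sketch of this second approach, again reusing the thomas_solve() routine introduced earlier (with 0-based indexing and no zero-pivot checks, as before), might look like:

 #include <stdlib.h>
 
 /* Cyclic tridiagonal solve via two (n-1)x(n-1) auxiliary systems.  The
  * auxiliary systems use rows 2..n of the original matrix, i.e. the
  * subarrays a+1, b+1, c+1 of length n-1. */
 void cyclic_thomas_aux(int n, const double *a, const double *b,
                        const double *c, const double *d, double *x)
 {
     int m = n - 1;                        /* size of the auxiliary systems */
     double *du = malloc(m * sizeof *du);  /* right-hand side for {u_i} */
     double *dv = calloc(m, sizeof *dv);   /* right-hand side for {v_i} */
     double *u  = malloc(m * sizeof *u);
     double *v  = malloc(m * sizeof *v);
 
     for (int i = 0; i < m; i++) du[i] = d[i + 1];
     dv[0]     = -a[1];                    /* -a_2 in the notation above */
     dv[m - 1] = -c[n - 1];                /* -c_n */
 
     thomas_solve(m, a + 1, b + 1, c + 1, du, u);
     thomas_solve(m, a + 1, b + 1, c + 1, dv, v);
 
     /* x_1 from the first equation of the original system. */
     double x1 = (d[0] - c[0] * u[0] - a[0] * u[m - 1])
               / (b[0] + c[0] * v[0] + a[0] * v[m - 1]);
 
     x[0] = x1;                            /* u_1 = 0, v_1 = 1 */
     for (int i = 0; i < m; i++)
         x[i + 1] = u[i] + x1 * v[i];      /* x_i = u_i + x_1 v_i */
 
     free(du); free(dv); free(u); free(v);
 }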
without preserving the coefficient vectors: typedef struct COEFFICIENTS; //Apply Thomas Alg., unknowns x ...,x void ThomasAlg(double x +1COEFFICIENTS* coeff) In both cases the auxiliary systems to be solved are genuinely tri-diagonal, so the overall computational complexity of solving system Ax = d remains linear with the respect to the dimension of the system n , that is O(n) arithmetic operations. In other situations, the system of equations may be block tridiagonal (see
block matrix In mathematics, a block matrix or a partitioned matrix is a matrix that is '' interpreted'' as having been broken into sections called blocks or submatrices. Intuitively, a matrix interpreted as a block matrix can be visualized as the original mat ...
), with smaller submatrices arranged as the individual elements in the above matrix system (e.g., the 2D
Poisson problem Poisson's equation is an elliptic partial differential equation of broad utility in theoretical physics. For example, the solution to Poisson's equation is the potential field caused by a given electric charge or mass density distribution; with ...
). Simplified forms of Gaussian elimination have been developed for these situations. The textbook ''Numerical Mathematics'' by Quarteroni, Sacco and Saleri, lists a modified version of the algorithm which avoids some of the divisions (using instead multiplications), which is beneficial on some computer architectures. Parallel tridiagonal solvers have been published for many vector and parallel architectures, including GPUs For an extensive treatment of parallel tridiagonal and block tridiagonal solvers see


References

* Higham, Nicholas J. (2002). ''Accuracy and Stability of Numerical Algorithms'' (2nd ed.). SIAM.
* Quarteroni, Alfio; Sacco, Riccardo; Saleri, Fausto (2007). ''Numerical Mathematics'' (2nd ed.). Springer.