Biconjugate Gradient Method
In mathematics, more specifically in numerical linear algebra, the biconjugate gradient method is an algorithm to solve systems of linear equations A x = b. Unlike the conjugate gradient method, this algorithm does not require the matrix A to be self-adjoint, but instead one needs to perform multiplications by the conjugate transpose A^*.

The Algorithm

# Choose an initial guess x_0, two other vectors x_0^* and b^*, and a preconditioner M
# r_0 \leftarrow b - A x_0
# r_0^* \leftarrow b^* - x_0^* A^*
# p_0 \leftarrow M^{-1} r_0
# p_0^* \leftarrow r_0^* M^{-1}
# for k = 0, 1, \ldots do
## \alpha_k \leftarrow \frac{r_k^* M^{-1} r_k}{p_k^* A p_k}
## x_{k+1} \leftarrow x_k + \alpha_k \cdot p_k
## x_{k+1}^* \leftarrow x_k^* + \overline{\alpha_k} \cdot p_k^*
## r_{k+1} \leftarrow r_k - \alpha_k \cdot A p_k
## r_{k+1}^* \leftarrow r_k^* - \overline{\alpha_k} \cdot p_k^* A^*
## \beta_k \leftarrow \frac{r_{k+1}^* M^{-1} r_{k+1}}{r_k^* M^{-1} r_k}
## p_{k+1} \leftarrow M^{-1} r_{k+1} + \beta_k \cdot p_k
## p_{k+1}^* \leftarrow r_{k+1}^* M^{-1} + \overline{\beta_k} \cdot p_k^*

In the above formulation, the computed r_k and r_k^* satisfy ...
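As a concrete illustration (added here, not part of the article), the following NumPy sketch implements the iteration above in the unpreconditioned real case, i.e. with M = I and A^* = A^T; the function name bicg and the small test system are made up.

import numpy as np

def bicg(A, b, x0=None, tol=1e-10, max_iter=1000):
    # Unpreconditioned BiCG (M = I) in real arithmetic, so A^* reduces to A.T.
    # The shadow residual is started as r* = r, a common default choice.
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    r = b - A @ x                  # residual of A x = b
    rs = r.copy()                  # shadow residual r* for the adjoint recurrence
    p, ps = r.copy(), rs.copy()    # search directions p and p*
    rho = rs @ r                   # corresponds to r_k^* M^{-1} r_k with M = I
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rho / (ps @ Ap)    # alpha_k
        x += alpha * p
        r -= alpha * Ap
        rs -= alpha * (A.T @ ps)   # shadow update uses the transpose of A
        if np.linalg.norm(r) < tol:
            break
        rho_new = rs @ r
        beta = rho_new / rho       # beta_k
        p = r + beta * p
        ps = rs + beta * ps
        rho = rho_new
    return x

# Quick check on a small nonsymmetric system (made-up values).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
b = np.array([1.0, 2.0])
print(bicg(A, b))            # should match the direct solution below
print(np.linalg.solve(A, b))

For complex A, the shadow recurrences would use the conjugate transpose and conjugated coefficients, exactly as in the update formulas above.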

Mathematics
Mathematics is a field of study that discovers and organizes methods, theories and theorems that are developed and proved for the needs of empirical sciences and mathematics itself. There are many areas of mathematics, which include number theory (the study of numbers), algebra (the study of formulas and related structures), geometry (the study of shapes and spaces that contain them), analysis (the study of continuous changes), and set theory (presently used as a foundation for all mathematics). Mathematics involves the description and manipulation of abstract objects that consist of either abstractions from nature or, in modern mathematics, purely abstract entities that are stipulated to have certain properties, called axioms. Mathematics uses pure reason to prove properties of objects, a ''proof'' consisting of a succession of applications of in ...

Numerical Stability
In the mathematical subfield of numerical analysis, numerical stability is a generally desirable property of numerical algorithms. The precise definition of stability depends on the context: one important context is numerical linear algebra, and another is algorithms for solving ordinary and partial differential equations by discrete approximation. In numerical linear algebra, the principal concern is instabilities caused by proximity to singularities of various kinds, such as very small or nearly colliding eigenvalues. On the other hand, in numerical algorithms for differential equations the concern is the growth of round-off errors and/or small fluctuations in initial data which might cause a large deviation of the final answer from the exact solution. Some numerical algorithms may damp out the small fluctuations (errors) in the input data; others might magnify such errors. Calculations that can be proven not to magnify approximation errors are called ''numerically stable''. One ...
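As a concrete illustration of error magnification (an example added here, not taken from the article), the following Python snippet evaluates the same quantity sqrt(x+1) - sqrt(x) in two algebraically equivalent ways; the direct subtraction suffers catastrophic cancellation for large x, while the rewritten form is numerically stable.

import math

x = 1.0e12
# Unstable: the two square roots agree in many leading digits, so their
# difference cancels most of the significant digits that were computed.
unstable = math.sqrt(x + 1) - math.sqrt(x)
# Stable: an algebraically equivalent rewrite that avoids the cancellation.
stable = 1.0 / (math.sqrt(x + 1) + math.sqrt(x))
print(unstable)   # only a few digits are correct
print(stable)     # accurate to close to full double precision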


Conjugate Gradient Squared Method
In numerical linear algebra, the conjugate gradient squared method (CGS) is an iterative algorithm for solving systems of linear equations of the form A\mathbf{x} = \mathbf{b}, particularly in cases where computing the transpose A^T is impractical. The CGS method was developed as an improvement to the biconjugate gradient method.

Background

A system of linear equations A\mathbf{x} = \mathbf{b} consists of a known matrix A and a known vector \mathbf{b}. To solve the system is to find the value of the unknown vector \mathbf{x}. A direct method for solving a system of linear equations is to take the inverse of the matrix A, then calculate \mathbf{x} = A^{-1}\mathbf{b}. However, computing the inverse is computationally expensive. Hence, iterative methods are commonly used. Iterative methods begin with a guess \mathbf{x}^{(0)}, and on each iteration the guess is improved. Once the difference between successive guesses is sufficiently small, the method has converged to a solution. As with the conjugate gradient method, biconjugate gradient method, and similar i ...
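As a brief sketch of the iterate-until-converged idea (added here; the test matrix and right-hand side are made-up values), SciPy's scipy.sparse.linalg.cgs can be called directly on a small system:

import numpy as np
from scipy.sparse.linalg import cgs   # SciPy's CGS iterative solver

# Small nonsymmetric test system A x = b (illustrative values).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 2.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])

x, info = cgs(A, b)                     # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b))  # residual norm of the returned iterate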

Conjugate Gradient Method
In mathematics, the conjugate gradient method is an algorithm for the numerical solution of particular systems of linear equations, namely those whose matrix is symmetric and positive-definite. The conjugate gradient method is often implemented as an iterative algorithm, applicable to sparse systems that are too large to be handled by a direct implementation or other direct methods such as the Cholesky decomposition. Large sparse systems often arise when numerically solving partial differential equations or optimization problems. The conjugate gradient method can also be used to solve unconstrained optimization problems such as energy minimization. It is commonly attributed to Magnus Hestenes and Eduard Stiefel, who programmed it on the Z4, and extensively researched it. The biconjugate gradient method provides a generalization to non-symmetric matrices. Various nonlinear conjugate gradient methods seek minima of nonlinear optimization problems.

Description of the problem addres ...
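A compact NumPy sketch of the recurrences (added here as an illustration, not the article's pseudocode; the function name and the random SPD test matrix are made up):

import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    # Assumes A is symmetric positive-definite; stops on the residual norm.
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    r = b - A @ x             # residual
    p = r.copy()              # first search direction
    rs_old = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # new direction, conjugate w.r.t. A
        rs_old = rs_new
    return x

# SPD test matrix: M^T M + I is symmetric positive-definite (made-up data).
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M.T @ M + np.eye(5)
b = rng.standard_normal(5)
print(np.allclose(conjugate_gradient(A, b), np.linalg.solve(A, b)))

In exact arithmetic the loop terminates in at most n iterations, which is why max_iter defaults to the system size in this sketch.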


Biconjugate Gradient Stabilized Method
In numerical linear algebra, the biconjugate gradient stabilized method, often abbreviated as BiCGSTAB, is an iterative method developed by H. A. van der Vorst for the numerical solution of nonsymmetric linear systems. It is a variant of the biconjugate gradient method (BiCG) and has faster and smoother convergence than the original BiCG as well as other variants such as the conjugate gradient squared method (CGS). It is a Krylov subspace method. Unlike the original BiCG method, it does not require multiplication by the transpose of the system matrix.

Algorithmic steps

Unpreconditioned BiCGSTAB

In the following sections, (x, y) = x^\mathrm{T} y denotes the dot product of vectors. To solve a linear system A x = b, BiCGSTAB starts with an initial guess x_0 and proceeds as follows:

# r_0 = b - A x_0
# Choose an arbitrary vector \hat{r}_0 such that (\hat{r}_0, r_0) \neq 0, e.g., \hat{r}_0 = r_0
# \rho_0 = (\hat{r}_0, r_0)
# p_0 = r_0
# For i = 1, 2, 3, \ldots
## v = A p_{i-1}
## \alpha = \rho_{i-1} / (\hat{r}_0, v)
## h = x_{i-1} + \alpha p_{i-1}
## s = r_{i-1} - \alpha v
## If h is accurate enough, i.e., if s is small enough, then set x_i = h and quit
## t = A s
## \omega = (t, s) / (t, t)
## x_i = h + \omega s
## r_i = s - \omega t
## If x_i is accurate enough, i.e., if r_i is small enough, then quit
## \rho_i = (\hat{r}_0, r_i) ...
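The steps above translate almost line for line into NumPy. The following sketch (added here; the function name and test system are made up) takes \hat{r}_0 = r_0 and uses the residual norm as the "small enough" test:

import numpy as np

def bicgstab(A, b, x0=None, tol=1e-10, max_iter=1000):
    # Follows the unpreconditioned steps listed above, with r_hat = r_0.
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    r = b - A @ x
    r_hat = r.copy()                  # arbitrary vector with (r_hat, r_0) != 0
    rho = r_hat @ r
    p = r.copy()
    for _ in range(max_iter):
        v = A @ p
        alpha = rho / (r_hat @ v)
        h = x + alpha * p
        s = r - alpha * v
        if np.linalg.norm(s) < tol:   # h is already accurate enough
            return h
        t = A @ s
        omega = (t @ s) / (t @ t)
        x = h + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol:
            return x
        rho_new = r_hat @ r
        beta = (rho_new / rho) * (alpha / omega)
        p = r + beta * (p - omega * v)
        rho = rho_new
    return x

# Nonsymmetric test system (made-up values).
A = np.array([[3.0, 2.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 2.5]])
b = np.array([1.0, -2.0, 3.0])
x = bicgstab(A, b)
print(np.linalg.norm(A @ x - b))      # residual norm should be tiny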


Krylov Subspace
In linear algebra, the order-''r'' Krylov subspace generated by an ''n''-by-''n'' matrix ''A'' and a vector ''b'' of dimension ''n'' is the linear subspace spanned by the images of ''b'' under the first ''r'' powers of ''A'' (starting from A^0=I), that is,

:\mathcal{K}_r(A,b) = \operatorname{span}\,\{b, Ab, A^2 b, \ldots, A^{r-1} b\}.

Background

The concept is named after Russian applied mathematician and naval engineer Alexei Krylov, who published a paper about the concept in 1931.

Properties

* \mathcal{K}_r(A,b), A\,\mathcal{K}_r(A,b) \subset \mathcal{K}_{r+1}(A,b).
* Let r_0 = \dim \operatorname{span}\,\{b, Ab, A^2 b, \ldots\}. Then \{b, Ab, \ldots, A^{r-1} b\} are linearly independent unless r > r_0, \mathcal{K}_r(A,b) \subset \mathcal{K}_{r_0}(A,b) for all r, and \dim \mathcal{K}_{r_0}(A,b) = r_0. So r_0 is the maximal dimension of the Krylov subspaces \mathcal{K}_r(A,b).
* The maximal dimension satisfies r_0 \leq 1 + \operatorname{rank} A and r_0 \leq n.
* Consider \dim \operatorname{span}\,\{I, A, A^2, \ldots\} = \deg\,p(A), where p(A) is the minimal polynomial of A. We have r_0 \leq \deg\,p(A). Moreover, for an ...
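As a small numerical illustration (added here; the matrix, the vector and the helper name are made up), the rank of the matrix with columns b, Ab, ..., A^{r-1} b equals \dim \mathcal{K}_r(A,b), and it stops growing once the maximal dimension r_0 is reached:

import numpy as np

def krylov_matrix(A, b, r):
    # Columns b, Ab, ..., A^(r-1) b spanning the order-r Krylov subspace.
    cols = [b]
    for _ in range(r - 1):
        cols.append(A @ cols[-1])
    return np.column_stack(cols)

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
b = np.array([1.0, 1.0, 1.0])

# dim K_r(A, b) grows with r until it stalls at the maximal dimension r_0 <= n.
for r in range(1, 5):
    print(r, np.linalg.matrix_rank(krylov_matrix(A, b, r)))

For this choice of A and b the printed ranks are 1, 2, 3, 3, so r_0 = n = 3.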




Biorthogonal System
In mathematics, a biorthogonal system is a pair of indexed families of vectors \tilde v_i \text{ in } E \text{ and } \tilde u_i \text{ in } F such that \left\langle\tilde v_i , \tilde u_j\right\rangle = \delta_{i,j}, where E and F form a pair of topological vector spaces that are in duality, \langle \,\cdot, \cdot\, \rangle is a bilinear mapping and \delta_{i,j} is the Kronecker delta. An example is the pair of sets of respectively left and right eigenvectors of a matrix, indexed by eigenvalue, if the eigenvalues are distinct. A biorthogonal system in which E = F and \tilde v_i = \tilde u_i is an orthonormal system.

Projection

Related to a biorthogonal system is the projection P := \sum_{i} \tilde u_i \otimes \tilde v_i, where (u \otimes v)(x) := u \langle v, x \rangle; its image is the linear span of the \tilde u_i. In mathematics, the linear span (also called the linear hull or just span) of a set S of elements of a vector space V is the smallest linear subspace of V that contains S. It is the set of all finite linear ...
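A small NumPy check of the eigenvector example (added here; the matrix is a made-up example): the pairing matrix of left and right eigenvectors is diagonal, and rescaling its diagonal to 1 yields the Kronecker delta.

import numpy as np

# For a matrix with distinct eigenvalues, left and right eigenvectors paired
# by eigenvalue form a biorthogonal system once suitably normalized.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

evals_r, R = np.linalg.eig(A)      # columns of R: right eigenvectors u_j
evals_l, L = np.linalg.eig(A.T)    # columns of L: left eigenvectors v_i of A

# Pair each left eigenvector with the right eigenvector of the same eigenvalue.
order = [int(np.argmin(np.abs(evals_l - lam))) for lam in evals_r]
L = L[:, order]

pairing = L.T @ R                  # entry (i, j) is the pairing <v_i, u_j>
d = np.diag(pairing).copy()        # nonzero diagonal entries <v_i, u_i>
print(np.round(pairing / d, 10))   # rescaled pairing: the identity (Kronecker delta)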


Conjugate Transpose
In mathematics, the conjugate transpose, also known as the Hermitian transpose, of an m \times n complex matrix \mathbf{A} is an n \times m matrix obtained by transposing \mathbf{A} and applying complex conjugation to each entry (the complex conjugate of a+ib being a-ib, for real numbers a and b). There are several notations, such as \mathbf{A}^\mathrm{H} or \mathbf{A}^*, \mathbf{A}', or (often in physics) \mathbf{A}^\dagger. For real matrices, the conjugate transpose is just the transpose, \mathbf{A}^\mathrm{H} = \mathbf{A}^\operatorname{T}.

Definition

The conjugate transpose of an m \times n matrix \mathbf{A} is formally defined by

:(\mathbf{A}^\mathrm{H})_{ij} = \overline{\mathbf{A}_{ji}},

where the subscript ij denotes the (i,j)-th entry (matrix element), for 1 \le i \le n and 1 \le j \le m, and the overbar denotes a scalar complex conjugate. This definition can also be written as

:\mathbf{A}^\mathrm{H} = \left(\overline{\mathbf{A}}\right)^\operatorname{T} = \overline{\mathbf{A}^\operatorname{T}}

where \mathbf{A}^\operatorname{T} denotes the transpose and \overline{\mathbf{A}} denotes the matrix with complex conjugated entries. Other na ...
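A minimal NumPy illustration (added here; the matrices are made-up examples) of the definition: conjugate transposition is a transpose followed by entrywise conjugation, and for real matrices it coincides with the plain transpose.

import numpy as np

A = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 4 + 0j]])

A_H = A.conj().T                        # transpose, then conjugate each entry
print(A_H)
print(np.allclose(A_H, A.T.conj()))     # the two operations commute

B = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(np.array_equal(B.conj().T, B.T))  # for a real matrix it is just the transpose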


Quasi-Newton Method
In numerical analysis, a quasi-Newton method is an iterative numerical method used either to find zeroes or to find local maxima and minima of functions via an iterative recurrence formula much like the one for Newton's method, except using approximations of the derivatives of the functions in place of exact derivatives. Newton's method requires the Jacobian matrix of all partial derivatives of a multivariate function when used to search for zeros or the Hessian matrix when used for finding extrema. Quasi-Newton methods, on the other hand, can be used when the Jacobian matrices or Hessian matrices are unavailable or are impractical to compute at every iteration. Some iterative methods that reduce to Newton's method, such as sequential quadratic programming, may also be considered quasi-Newton methods.

Search for zeros: root finding

Newton's method to find zeroes of a function g of multiple variables is given by x_{n+1} = x_n - [J_g(x_n)]^{-1} g(x_n), where [J_g(x_n)]^{-1} is the left inverse ...
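As an illustration of the idea (added here, not from the article), the following sketch implements Broyden's method, a standard quasi-Newton root finder: it seeds an approximate Jacobian B with finite differences and then replaces exact derivative evaluations by a cheap rank-one update. The function names and the test function are made up.

import numpy as np

def fd_jacobian(g, x, eps=1e-6):
    # Forward-difference Jacobian approximation, used only to seed B below.
    x = np.asarray(x, dtype=float)
    f0 = np.asarray(g(x))
    J = np.empty((f0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (np.asarray(g(xp)) - f0) / eps
    return J

def broyden(g, x0, tol=1e-10, max_iter=50):
    # Quasi-Newton root finding: keep a cheap approximation B of the Jacobian
    # and improve it with a rank-one update instead of recomputing derivatives.
    x = np.array(x0, dtype=float)
    B = fd_jacobian(g, x)                  # initial Jacobian approximation
    fx = np.asarray(g(x))
    for _ in range(max_iter):
        dx = np.linalg.solve(B, -fx)       # quasi-Newton step: B dx = -g(x)
        x_new = x + dx
        f_new = np.asarray(g(x_new))
        if np.linalg.norm(f_new) < tol:
            return x_new
        # Broyden's rank-one update keeps B consistent with the latest step.
        B += np.outer(f_new - fx - B @ dx, dx) / (dx @ dx)
        x, fx = x_new, f_new
    return x

# Hypothetical test: a root of g(x, y) = (x^2 + y^2 - 4, x - y).
g = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[0] - v[1]])
print(broyden(g, [1.0, 2.0]))   # approaches (sqrt(2), sqrt(2))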

Projection (linear Algebra)
In linear algebra and functional analysis, a projection is a linear transformation P from a vector space to itself (an endomorphism) such that P\circ P=P. That is, whenever P is applied twice to any vector, it gives the same result as if it were applied once (i.e. P is idempotent). It leaves its image unchanged. This definition of "projection" formalizes and generalizes the idea of graphical projection. One can also consider the effect of a projection on a geometrical object by examining the effect of the projection on points in the object.

Definitions

A projection on a vector space V is a linear operator P\colon V \to V such that P^2 = P. When V has an inner product and is complete, i.e. when V is a Hilbert space, the concept of orthogonality can be used. A projection P on a Hilbert space V is called an orthogonal projection if it satisfies \langle P \mathbf x, \mathbf y \rangle = \langle \mathbf x, P \mathbf y \rangle for all \mathbf x, \mathbf y \in V. A projecti ...
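A short NumPy check (added here; the matrix and vectors are made-up examples) of the definitions: an orthogonal projection matrix is idempotent, symmetric with respect to the inner product, and leaves its image unchanged.

import numpy as np

# Orthogonal projection onto the column space of a full-column-rank matrix A:
# P = A (A^T A)^{-1} A^T (a standard construction used here for illustration).
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
P = A @ np.linalg.inv(A.T @ A) @ A.T

print(np.allclose(P @ P, P))   # idempotent: P^2 = P
print(np.allclose(P.T, P))     # symmetric, so <Px, y> = <x, Py> for all x, y

x = np.array([1.0, 2.0, 3.0])
print(np.allclose(P @ (P @ x), P @ x))   # vectors in the image are unchanged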

Complex Conjugate
In mathematics, the complex conjugate of a complex number is the number with an equal real part and an imaginary part equal in magnitude but opposite in sign. That is, if a and b are real numbers, then the complex conjugate of a + bi is a - bi. The complex conjugate of z is often denoted as \overline{z} or z^*. In polar form, if r and \varphi are real numbers then the conjugate of r e^{i\varphi} is r e^{-i\varphi}. This can be shown using Euler's formula. The product of a complex number and its conjugate is a real number: a^2 + b^2 (or r^2 in polar coordinates). If a root of a univariate polynomial with real coefficients is complex, then its complex conjugate is also a root.

Notation

The complex conjugate of a complex number z is written as \overline z or z^*. The first notation, a vinculum, avoids confusion with the notation for the conjugate transpose of a matrix, which can be thought of as a generalization of the complex conjugate. The second is preferred in physics, where ...
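A tiny Python illustration (added here) of the statements above, using the built-in complex type:

z = 3 + 4j                 # a + bi with a = 3, b = 4
print(z.conjugate())       # (3-4j): equal real part, imaginary part negated
print(z * z.conjugate())   # (25+0j): equals a**2 + b**2, a real number
print(abs(z) ** 2)         # 25.0: consistent with r^2 in polar form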