BLOPEX
Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) is a matrix-free method for finding the largest (or smallest) eigenvalues and the corresponding eigenvectors of a symmetric generalized eigenvalue problem A x = \lambda B x, for a given pair (A, B) of complex Hermitian or real symmetric matrices, where the matrix B is also assumed positive-definite.

Background

Kantorovich in 1948 proposed calculating the smallest eigenvalue \lambda_1 of a symmetric matrix A by steepest descent using the direction r = Ax - \lambda(x) x, a scaled gradient of the Rayleigh quotient \lambda(x) = (x, Ax)/(x, x) in the scalar product (x, y) = x'y, with the step size computed by minimizing the Rayleigh quotient in the linear span of the vectors x and r, i.e. in a locally optimal manner. Samokish proposed applying a preconditioner T to the residual vector r to generate the preconditioned direction w = T r and derived ...
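The locally optimal recurrence just described is short enough to sketch directly. Below is a minimal single-vector version in NumPy, assuming the standard case B = I: at each step the Rayleigh quotient is minimized over span{x, w} by a 2-by-2 Rayleigh-Ritz projection. The function names and the optional preconditioner callback T are illustrative assumptions, not the full block LOBPCG algorithm.

```python
import numpy as np

def rayleigh_quotient(A, x):
    return x @ (A @ x) / (x @ x)

def locally_optimal_psd(A, x, T=None, tol=1e-8, maxit=2000):
    """Single-vector locally optimal preconditioned steepest descent
    for the smallest eigenvalue of a symmetric matrix A (B = I).
    A sketch, not the full block LOBPCG algorithm."""
    x = x / np.linalg.norm(x)
    for _ in range(maxit):
        lam = rayleigh_quotient(A, x)
        r = A @ x - lam * x                 # residual = scaled Rayleigh-quotient gradient
        if np.linalg.norm(r) < tol:
            break
        w = T(r) if T is not None else r    # preconditioned direction (Samokish)
        V = np.linalg.qr(np.column_stack([x, w]))[0]  # orthonormal basis of span{x, w}
        theta, Y = np.linalg.eigh(V.T @ A @ V)        # 2x2 Rayleigh-Ritz problem
        x = V @ Y[:, 0]                     # Ritz vector of the smallest Ritz value
    return rayleigh_quotient(A, x), x

A = np.diag(np.arange(1.0, 101.0))          # smallest eigenvalue is 1
lam, x = locally_optimal_psd(A, np.ones(100))
```

Supplying a good T (an approximate inverse of A) typically cuts the iteration count sharply; adding the previous step's direction to the trial subspace yields the three-term recurrence of full LOBPCG.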


Matrix-free Methods
In computational mathematics, a matrix-free method is an algorithm for solving a linear system of equations or an eigenvalue problem that does not store the coefficient matrix explicitly, but accesses the matrix by evaluating matrix-vector products. Such methods can be preferable when the matrix is so large that storing and manipulating it would cost a lot of memory and computing time, even with the use of methods for sparse matrices. Many iterative methods allow for a matrix-free implementation, including:
* the power method (see the sketch after this excerpt),
* the Lanczos algorithm,
* the Locally Optimal Block Preconditioned Conjugate Gradient method (LOBPCG),
* Wiedemann's coordinate recurrence algorithm, and
* the conjugate gradient method.
Distributed solutions have also been explored using coarse-grain parallel software systems to achieve homogeneous solutions of linear systems. Matrix-free methods are commonly used in solving non-linear equations, such as the Euler equations in computational fluid dynamics ...
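A minimal illustration of the matrix-free idea, using the power method from the list above: the operator enters only through a user-supplied matvec callback, never as a stored array. The names power_method and the diagonal test operator are illustrative assumptions, and the sketch assumes the dominant eigenvalue is simple.

```python
import numpy as np

def power_method(matvec, n, tol=1e-10, maxit=5000):
    """Power method that touches the operator only through matvec(v);
    no matrix is ever stored."""
    v = np.random.default_rng(0).standard_normal(n)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(maxit):
        w = matvec(v)                  # the only access to the operator
        lam_new = v @ w                # Rayleigh quotient estimate
        v = w / np.linalg.norm(w)
        if abs(lam_new - lam) < tol * max(1.0, abs(lam_new)):
            break
        lam = lam_new
    return lam, v

# Example: apply diag(1, 2, ..., 100) without forming the matrix.
d = np.arange(1.0, 101.0)
lam, v = power_method(lambda v: d * v, 100)   # lam approaches 100.0
```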



Preconditioning
In mathematics, preconditioning is the application of a transformation, called the preconditioner, that conditions a given problem into a form that is more suitable for numerical solution methods. Preconditioning is typically related to reducing a condition number of the problem. The preconditioned problem is then usually solved by an iterative method.

Preconditioning for linear systems

In linear algebra and numerical analysis, a preconditioner P of a matrix A is a matrix such that P^{-1}A has a smaller condition number than A. It is also common to call T = P^{-1} the preconditioner, rather than P, since P itself is rarely explicitly available. In modern preconditioning, the application of T = P^{-1}, i.e., multiplication of a column vector, or a block of column vectors, by T = P^{-1}, is commonly performed in a matrix-free fashion, i.e., where neither P nor T = P^{-1} (and often not even A) are explicitly available in matrix form. Preconditioners are useful in iterative methods to solve a linear ...
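As a small illustration, here is a diagonal (Jacobi) preconditioner applied in exactly this matrix-free fashion: T = P^{-1} with P = diag(A) is handed to SciPy's conjugate gradient as a LinearOperator, i.e. as a function that multiplies a vector by T, so P is never inverted as a matrix. The test matrix and the choice of Jacobi preconditioning are assumptions made for the sketch.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

# Symmetric positive-definite test matrix: shifted 1-D Laplacian.
n = 500
A = sp.diags([-1.0, 2.2, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi preconditioner T = P^{-1}, P = diag(A), applied as a function.
inv_diag = 1.0 / A.diagonal()
T = LinearOperator((n, n), matvec=lambda v: inv_diag * v)

x, info = cg(A, b, M=T)            # preconditioned conjugate gradient
```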


Residual (numerical Analysis)
Loosely speaking, a residual is the error in a result. To be precise, suppose we want to find x such that f(x) = b. Given an approximation x_0 of x, the residual is b - f(x_0), that is, "what is left of the right-hand side" after subtracting f(x_0) (thus the name "residual": what is left, the rest). The error, on the other hand, is x - x_0. If the exact value of x is not known, the residual can be computed, whereas the error cannot.

Residual of the approximation of a function

Similar terminology is used when dealing with differential, integral and functional equations. For the approximation f_\text{a} of the solution f of the equation T(f)(x) = g(x), the residual can either be the function g(x) - T(f_\text{a})(x), or can be said to be the maximum of the norm of this difference, \max_{x \in \mathcal{X}} |g(x) - T(f_\text{a})(x)|, over the domain \mathcal{X}, where the function f_\text{a} is expected to approximate the solution f, or some integral of a function of the differe ...
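A concrete linear example of the distinction: for f(x) = Ax, the residual b - Ax_0 can always be computed, while the error needs the exact solution (obtained below with a direct solve purely for comparison). The matrix and numbers are arbitrary illustrations.

```python
import numpy as np

# Solve f(x) = b with f(x) = A x; compare residual and error.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_exact = np.linalg.solve(A, b)    # known here only for comparison

x0 = np.array([0.1, 0.6])          # some approximation of x
residual = b - A @ x0              # always computable
error = x_exact - x0               # requires the unknown exact solution
```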


Iterative
Iteration is the repetition of a process in order to generate a (possibly unbounded) sequence of outcomes. Each repetition of the process is a single iteration, and the outcome of each iteration is then the starting point of the next iteration. In mathematics and computer science, iteration (along with the related technique of recursion) is a standard element of algorithms.

Mathematics

In mathematics, iteration may refer to the process of iterating a function, i.e. applying a function repeatedly, using the output from one iteration as the input to the next. Iteration of apparently simple functions can produce complex behaviors and difficult problems – for examples, see the Collatz conjecture and juggler sequences. Another use of iteration in mathematics is in iterative methods which are used to produce approximate numerical solutions to certain mathematical problems. Newton's method is an example of an iterative method. Manual calculation of a number's square root is a co ...
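The square-root calculation mentioned above is a classic iteration. The sketch below uses Heron's method, which is Newton's method applied to x^2 = a, so that each output becomes the next input; the function name and tolerance are assumptions of the sketch.

```python
def heron_sqrt(a, tol=1e-12):
    """Heron's method for sqrt(a), a > 0: iterate x -> (x + a/x) / 2.
    Each output is fed back in as the next input."""
    x = a if a > 1 else 1.0        # any positive starting guess works
    while abs(x * x - a) > tol * a:
        x = 0.5 * (x + a / x)
    return x

print(heron_sqrt(2.0))             # 1.4142135623730951
```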



Gradient Descent
In mathematics, gradient descent (also often called steepest descent) is a first-order iterative optimization algorithm for finding a local minimum of a differentiable function. The idea is to take repeated steps in the opposite direction of the gradient (or approximate gradient) of the function at the current point, because this is the direction of steepest descent. Conversely, stepping in the direction of the gradient will lead to a local maximum of that function; the procedure is then known as gradient ascent. Gradient descent is generally attributed to Augustin-Louis Cauchy, who first suggested it in 1847. Jacques Hadamard independently proposed a similar method in 1907. Its convergence properties for non-linear optimization problems were first studied by Haskell Curry in 1944, with the method becoming increasingly well-studied and used in the following decades.

Description

Gradient descent is based on the observation that if the multi-variable function F(\mathbf{x}) is def ...
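A minimal fixed-step version of the procedure just described, assuming a caller-supplied gradient function. The quadratic example, step size, and names are illustrative only; practical codes usually choose the step by a line search.

```python
import numpy as np

def gradient_descent(grad, x0, step=0.1, tol=1e-8, maxit=10_000):
    """Plain gradient descent with a fixed step size; a sketch,
    assuming grad(x) returns the gradient of the objective at x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - step * g           # step against the gradient
    return x

# Example: minimize F(x, y) = (x - 1)**2 + 4*(y + 2)**2; minimum at (1, -2).
x_min = gradient_descent(lambda v: np.array([2 * (v[0] - 1), 8 * (v[1] + 2)]),
                         [0.0, 0.0])
```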


BLAS
Basic Linear Algebra Subprograms (BLAS) is a specification that prescribes a set of low-level routines for performing common linear algebra operations such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. They are the de facto standard low-level routines for linear algebra libraries; the routines have bindings for both C ("CBLAS interface") and Fortran ("BLAS interface"). Although the BLAS specification is general, BLAS implementations are often optimized for speed on a particular machine, so using them can bring substantial performance benefits. BLAS implementations take advantage of special floating-point hardware such as vector registers or SIMD instructions. BLAS originated as a Fortran library in 1979 and its interface was standardized by the BLAS Technical (BLAST) Forum, whose latest BLAS report can be found on the netlib website. This Fortran library is known as the reference implementation (sometimes co ...
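For a quick feel for the interface, SciPy exposes the underlying BLAS routines directly. The sketch below calls the double-precision routines dgemm (level-3 matrix-matrix multiply) and ddot (level-1 dot product); the test data are arbitrary.

```python
import numpy as np
from scipy.linalg import blas

# dgemm computes C := alpha * A @ B (plus beta * C when given).
A = np.asfortranarray(np.random.rand(3, 4))
B = np.asfortranarray(np.random.rand(4, 2))
C = blas.dgemm(alpha=1.0, a=A, b=B)

# ddot computes the dot product of two double-precision vectors.
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
print(blas.ddot(x, y))             # 32.0
```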



Multigrid
In numerical analysis, a multigrid method (MG method) is an algorithm for solving differential equations using a hierarchy of discretizations. Multigrid methods are an example of a class of techniques called multiresolution methods, very useful in problems exhibiting multiple scales of behavior. For example, many basic relaxation methods exhibit different rates of convergence for short- and long-wavelength components, suggesting these different scales be treated differently, as in a Fourier analysis approach to multigrid. MG methods can be used as solvers as well as preconditioners. The main idea of multigrid is to accelerate the convergence of a basic iterative method (known as relaxation, which generally reduces short-wavelength error) by a global correction of the fine-grid solution approximation from time to time, accomplished by solving a coarse problem. The coarse problem, while cheaper to solve, is similar to the fine-grid problem in that it also has short- and long-wavelength errors ...
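The idea is concrete enough to sketch as a single two-grid cycle for the 1-D Poisson equation -u'' = f: damped Jacobi relaxation reduces the short-wavelength error, and a direct solve of the residual equation on a grid twice as coarse supplies the global correction. All names, and the choice of injection and linear interpolation for the grid transfers, are simplifying assumptions.

```python
import numpy as np

def jacobi(u, f, h, sweeps, omega=2/3):
    """Weighted Jacobi relaxation for -u'' = f on a uniform 1-D grid;
    damps short-wavelength error components."""
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (
            u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def two_grid(u, f, h, n_pre=3, n_post=3):
    """One two-grid cycle: relax, solve the residual equation on a
    grid twice as coarse, correct, relax again.  Zero boundary values."""
    u = jacobi(u, f, h, n_pre)                     # pre-smoothing
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2  # residual
    rc = r[::2].copy()                             # restrict by injection
    nc = rc.size
    # Direct solve of the small coarse tridiagonal residual equation.
    Ac = (np.diag(2 * np.ones(nc - 2)) - np.diag(np.ones(nc - 3), 1)
          - np.diag(np.ones(nc - 3), -1)) / (2 * h)**2
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])
    e = np.zeros_like(u)                           # prolong by linear interpolation
    e[::2] = ec
    e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    return jacobi(u + e, f, h, n_post)             # correction + post-smoothing

n = 64                                             # number of intervals (even)
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.sin(np.pi * x)                              # exact solution sin(pi x)/pi**2
u = np.zeros(n + 1)
for _ in range(10):
    u = two_grid(u, f, h)
```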



Domain Decomposition
In mathematics, numerical analysis, and numerical partial differential equations, domain decomposition methods solve a boundary value problem by splitting it into smaller boundary value problems on subdomains and iterating to coordinate the solution between adjacent subdomains. A coarse problem with one or few unknowns per subdomain is used to further coordinate the solution between the subdomains globally. The problems on the subdomains are independent, which makes domain decomposition methods suitable for parallel computing. Domain decomposition methods are typically used as preconditioners for Krylov subspace iterative methods, such as the conjugate gradient method, GMRES, and LOBPCG. In overlapping domain decomposition methods, the subdomains overlap by more than the interface. Overlapping domain decomposition methods include the Schwarz alternating method and the additive Schwarz method. Many domain decomposition methods can be written and analyzed as a special case of the abstra ...
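A sketch of the additive Schwarz method mentioned above: the preconditioner is a sum of independent local solves on overlapping index blocks (so they could run in parallel), used here inside SciPy's conjugate gradient. The sizes, the overlap, and the omission of a coarse problem are simplifying assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

n, block, overlap = 400, 50, 5
A = sp.diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
Ad = A.toarray()                   # dense copy only to slice local blocks

# Precompute a dense inverse of each overlapping diagonal block A_i.
subdomains = []
for start in range(0, n, block):
    idx = np.arange(max(0, start - overlap), min(n, start + block + overlap))
    subdomains.append((idx, np.linalg.inv(Ad[np.ix_(idx, idx)])))

def apply_schwarz(r):
    """One-level additive Schwarz: z = sum_i R_i^T A_i^{-1} R_i r.
    The local solves are independent, hence naturally parallel."""
    z = np.zeros_like(r)
    for idx, Ainv in subdomains:
        z[idx] += Ainv @ r[idx]
    return z

M = LinearOperator((n, n), matvec=apply_schwarz)
x, info = cg(A, np.ones(n), M=M)   # info == 0 signals convergence
```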


Lanczos Method
The Lanczos algorithm is an iterative method devised by Cornelius Lanczos that is an adaptation of power methods to find the m "most useful" (tending towards extreme highest/lowest) eigenvalues and eigenvectors of an n \times n Hermitian matrix, where m is often but not necessarily much smaller than n. Although computationally efficient in principle, the method as initially formulated was not useful, due to its numerical instability. In 1970, Ojalvo and Newman showed how to make the method numerically stable and applied it to the solution of very large engineering structures subjected to dynamic loading. This was achieved using a method for purifying the Lanczos vectors (i.e. by repeatedly reorthogonalizing each newly generated vector with all previously generated ones) to any degree of accuracy; when this purification is not performed, the computed vectors are quickly contaminated by components associated with the lowest natural frequencies. In their original work, these authors ...
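A sketch of the iteration with the full reorthogonalization ("purification") remedy the excerpt describes: each new Lanczos vector is re-orthogonalized against all previously generated ones before being accepted. The function name and the diagonal test matrix are assumptions of the sketch.

```python
import numpy as np

def lanczos(A, m, seed=0):
    """Lanczos iteration with full reorthogonalization.  Returns the
    m x m tridiagonal matrix T whose eigenvalues (Ritz values)
    approximate the extreme eigenvalues of the Hermitian matrix A,
    plus the orthonormal Lanczos basis Q."""
    n = A.shape[0]
    Q = np.zeros((n, m + 1))
    q = np.random.default_rng(seed).standard_normal(n)
    Q[:, 0] = q / np.linalg.norm(q)
    alpha, beta = np.zeros(m), np.zeros(m)
    for j in range(m):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j] + (beta[j - 1] * Q[:, j - 1] if j > 0 else 0)
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization
        beta[j] = np.linalg.norm(w)
        Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    return T, Q[:, :m]

A = np.diag(np.arange(1.0, 201.0))
T, Q = lanczos(A, 30)
print(np.linalg.eigvalsh(T)[-3:])  # approximates the largest eigenvalues of A
```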


Matrix Decomposition
In the mathematical discipline of linear algebra, a matrix decomposition or matrix factorization is a factorization of a matrix into a product of matrices. There are many different matrix decompositions; each finds use among a particular class of problems.

Example

In numerical analysis, different decompositions are used to implement efficient matrix algorithms. For instance, when solving a system of linear equations A\mathbf{x} = \mathbf{b}, the matrix A can be decomposed via the LU decomposition. The LU decomposition factorizes a matrix into a lower triangular matrix L and an upper triangular matrix U. The systems L(U\mathbf{x}) = \mathbf{b} and U\mathbf{x} = L^{-1}\mathbf{b} require fewer additions and multiplications to solve, compared with the original system A\mathbf{x} = \mathbf{b}, though one might require significantly more digits in inexact arithmetic such as floating point. Similarly, the QR decomposition expresses A as QR with Q an orthogonal matrix and R an upp ...
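The reuse pattern the example describes, shown with SciPy's LAPACK-backed lu_factor and lu_solve: factor A once, then solve for several right-hand sides using only triangular solves. The matrix and right-hand sides are arbitrary illustrations.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
lu, piv = lu_factor(A)             # PA = LU, with partial pivoting

for b in (np.array([1.0, 2.0, 3.0]), np.array([0.0, 1.0, 0.0])):
    x = lu_solve((lu, piv), b)     # two triangular solves, no refactorization
    print(np.allclose(A @ x, b))   # True
```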



Factorization
In mathematics, factorization (or factorisation; see English spelling differences) or factoring consists of writing a number or another mathematical object as a product of several factors, usually smaller or simpler objects of the same kind. For example, 3 \times 5 is a factorization of the integer 15, and (x - 2)(x + 2) is a factorization of the polynomial x^2 - 4. Factorization is not usually considered meaningful within number systems possessing division, such as the real or complex numbers, since any x can be trivially written as (xy) \times (1/y) whenever y is not zero. However, a meaningful factorization for a rational number or a rational function can be obtained by writing it in lowest terms and separately factoring its numerator and denominator. Factorization was first considered by ancient Greek mathematicians in the case of integers. They proved the fundamental theorem o ...
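A minimal sketch of integer factorization by trial division, the most elementary way of writing a number as a product of smaller factors; the function name is an assumption of the sketch, and the method is impractical for large inputs.

```python
def trial_division(n):
    """Factor a positive integer into primes by trial division."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:          # divide out each prime factor fully
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                      # whatever remains is itself prime
        factors.append(n)
    return factors

print(trial_division(60))          # [2, 2, 3, 5]
```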