SLEPc
SLEPc is a software library for the parallel computation of eigenvalues and eigenvectors of large, sparse matrices. It can be seen as a module of PETSc (the Portable, Extensible Toolkit for Scientific Computation) that provides solvers for different types of eigenproblems, including linear (standard and generalized) and nonlinear (quadratic, polynomial and general), as well as the singular value decomposition (SVD). Recent versions also include support for matrix functions. It uses the Message Passing Interface (MPI) standard for parallelization. Both real and complex arithmetic are supported, in single, double and quadruple precision. When using SLEPc, the application programmer can use any of PETSc's data structures and solvers. Other PETSc features are incorporated into SLEPc as well, such as command-line option setting, automatic profiling, error checking, portability to virtually all computing platforms, etc ...
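
The workflow below is a minimal sketch of that design, written against the slepc4py Python bindings (an assumption for illustration; the native API is C, and any PETSc-supported language works the same way). It assembles a sparse 1-D Laplacian in a PETSc matrix and asks SLEPc's EPS solver for its eigenvalues; command-line options such as -eps_nev are picked up by setFromOptions().

    # Minimal slepc4py sketch (assumes petsc4py/slepc4py are installed):
    # assemble a sparse 1-D Laplacian in PETSc and solve the Hermitian
    # eigenproblem with SLEPc's EPS object.
    import sys
    import slepc4py
    slepc4py.init(sys.argv)                     # forward command-line options
    from petsc4py import PETSc
    from slepc4py import SLEPc

    n = 100
    A = PETSc.Mat().createAIJ([n, n])           # sparse AIJ storage
    A.setUp()
    rstart, rend = A.getOwnershipRange()        # rows owned by this MPI rank
    for i in range(rstart, rend):
        A[i, i] = 2.0
        if i > 0:
            A[i, i - 1] = -1.0
        if i < n - 1:
            A[i, i + 1] = -1.0
    A.assemble()

    eps = SLEPc.EPS().create()                  # eigenproblem solver context
    eps.setOperators(A)
    eps.setProblemType(SLEPc.EPS.ProblemType.HEP)   # Hermitian problem
    eps.setFromOptions()                        # honor e.g. -eps_nev 10
    eps.solve()

    vr, vi = A.createVecs()
    for i in range(eps.getConverged()):
        lam = eps.getEigenpair(i, vr, vi)
        PETSc.Sys.Print(f"lambda = {lam.real:.6f}")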


BLOPEX
Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) is a matrix-free method for finding the largest (or smallest) eigenvalues and the corresponding eigenvectors of a symmetric generalized eigenvalue problem A x = \lambda B x, for a given pair (A, B) of complex Hermitian or real symmetric matrices, where the matrix B is also assumed positive-definite.

Background

Kantorovich in 1948 proposed calculating the smallest eigenvalue \lambda_1 of a symmetric matrix A by steepest descent using a direction r = Ax - \lambda(x) x, a scaled gradient of the Rayleigh quotient \lambda(x) = (x, Ax)/(x, x) in the scalar product (x, y) = x'y, with the step size computed by minimizing the Rayleigh quotient in the linear span of the vectors x and w, i.e. in a locally optimal manner. Samokish proposed applying a preconditioner T to the residual vector r to generate the preconditioned direction w = T r and derived ...
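
SciPy ships an implementation of this method as scipy.sparse.linalg.lobpcg; the sketch below (an illustration, not BLOPEX itself) applies it to a symmetric pencil A x = \lambda B x with a block of random starting vectors. A preconditioner T could be supplied through the M keyword; it is omitted here.

    # Hedged sketch: SciPy's LOBPCG on A x = lambda B x for a 1-D Laplacian A
    # and B = I; largest=False would request the smallest eigenvalues instead.
    import numpy as np
    from scipy.sparse import diags, identity
    from scipy.sparse.linalg import lobpcg

    n = 500
    A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    B = identity(n, format="csr")

    rng = np.random.default_rng(0)
    X = rng.standard_normal((n, 4))             # block of 4 starting vectors

    vals, vecs = lobpcg(A, X, B=B, tol=1e-8, maxiter=200)
    print(vals)                                 # 4 largest eigenvalues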


Nonlinear Eigenproblem
In mathematics, a nonlinear eigenproblem, sometimes nonlinear eigenvalue problem, is a generalization of the (ordinary) eigenvalue problem to equations that depend nonlinearly on the eigenvalue. Specifically, it refers to equations of the form M(\lambda) x = 0, where x \neq 0 is a vector, and M is a matrix-valued function of the number \lambda. The number \lambda is known as the (nonlinear) eigenvalue, the vector x as the (nonlinear) eigenvector, and (\lambda, x) as the eigenpair. The matrix M(\lambda) is singular at an eigenvalue \lambda.

Definition

In the discipline of numerical linear algebra the following definition is typically used. Let \Omega \subseteq \Complex, and let M : \Omega \rightarrow \Complex^{n \times n} be a function that maps scalars to matrices. A scalar \lambda \in \Complex is called an ''eigenvalue'', and a nonzero vector x \in \Complex^n is called a ''right eigenvector'' if M(\lambda) x = 0. Moreover, a nonzero vector y \in \Complex^n is called a ''left ei ...
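
A concrete special case is the quadratic eigenproblem M(\lambda) = \lambda^2 M_2 + \lambda M_1 + M_0. One standard way to solve it is companion linearization, which turns it into a generalized linear eigenproblem of twice the size; the sketch below (illustrative, with random matrices) checks the defining property that M(\lambda) x = 0 at an eigenpair.

    # Quadratic eigenproblem (lambda^2 M2 + lambda M1 + M0) x = 0 via
    # companion linearization into A z = lambda B z with z = [x; lambda x].
    import numpy as np
    from scipy.linalg import eig

    rng = np.random.default_rng(1)
    n = 4
    M0, M1, M2 = (rng.standard_normal((n, n)) for _ in range(3))

    I, Z = np.eye(n), np.zeros((n, n))
    A = np.block([[Z, I], [-M0, -M1]])          # first companion form
    B = np.block([[I, Z], [Z, M2]])

    eigvals, eigvecs = eig(A, B)                # 2n eigenvalues of M(lambda)
    lam, x = eigvals[0], eigvecs[:n, 0]         # top block is the eigenvector

    # M(lambda) is singular at an eigenvalue: the residual is ~ 0.
    print(np.linalg.norm((lam**2 * M2 + lam * M1 + M0) @ x))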


List Of Numerical Libraries
This is a list of numerical libraries, which are libraries used in software development for performing numerical calculations. It is not a complete listing but is instead a list of numerical libraries with articles on Wikipedia, with few exceptions. The choice of a typical library depends on a range of requirements such as: desired features (e.g. large dimensional linear algebra, parallel computation, partial differential equations), licensing, readability of API, portability or platform/compiler dependence (e.g. Linux, Windows, Visual C++, GCC), performance, ease-of-use, continued support from developers, standard compliance, specialized optimization in code for specific application scenarios or even the size of the code-base to be installed.

Multi-language
C
C++
Delphi
* ALGLIB - an open source numerical analysis library.
.NET Framework languages (C#, F#, VB.NET and PowerShell)
Fortran
Java
* SuanShu is an open-source Java math library. It supports numerical anal ...




ARPACK
ARPACK, the ARnoldi PACKage, is a numerical software library written in FORTRAN 77 for solving large-scale eigenvalue problems in a matrix-free fashion. The package is designed to compute a few eigenvalues and corresponding eigenvectors of large sparse or structured matrices, using the Implicitly Restarted Arnoldi Method (IRAM) or, in the case of symmetric matrices, the corresponding variant of the Lanczos algorithm. It is used by many popular numerical computing environments such as SciPy, Mathematica, GNU Octave and MATLAB to provide this functionality.

Reverse Communication Interface

A powerful matrix-free feature of ARPACK is its ability to use any matrix storage format. This is possible because it doesn't operate on the matrices directly, but instead when a matrix opera ...
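
SciPy's eigsh is one of the wrappers mentioned above; passing it a LinearOperator instead of an explicit matrix mirrors ARPACK's reverse-communication design, since only the matrix-vector product is ever requested. A hedged sketch:

    # Matrix-free use of ARPACK through SciPy: eigsh only ever asks the
    # LinearOperator for products A @ v, never for the entries of A.
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, eigsh

    n = 1000

    def matvec(v):
        # y = A v for the 1-D Laplacian, without forming A
        y = 2.0 * v
        y[:-1] -= v[1:]
        y[1:] -= v[:-1]
        return y

    A = LinearOperator((n, n), matvec=matvec, dtype=float)
    vals, vecs = eigsh(A, k=6, which="LA")      # 6 largest eigenvalues
    print(vals)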


Linux
Linux is a family of open-source Unix-like operating systems based on the Linux kernel, an operating system kernel first released on September 17, 1991, by Linus Torvalds. Linux is typically packaged as a Linux distribution, which includes the kernel and supporting system software and libraries, many of which are provided by the GNU Project. Many Linux distributions use the word "Linux" in their name, but the Free Software Foundation uses the name "GNU/Linux" to emphasize the importance of GNU software, causing some controversy. Popular Linux distributions include Debian, Fedora Linux, and Ubuntu, the latter of which itself consists of many different distributions and modifications, including Lubuntu and Xubuntu. Commercial distributions include Red Hat Enterprise Linux and SUSE Linux Enterprise. Desktop Linux distributions include a windowing system such as X11 or Wayland, and a desktop environment such as GNOME or KDE Plasma. Distributions intended for ser ...


Arnoldi Iteration
In numerical linear algebra, the Arnoldi iteration is an eigenvalue algorithm and an important example of an iterative method. Arnoldi finds an approximation to the eigenvalues and eigenvectors of general (possibly non-Hermitian) matrices by constructing an orthonormal basis of the Krylov subspace, which makes it particularly useful when dealing with large sparse matrices. The Arnoldi method belongs to a class of linear algebra algorithms that give a partial result after a small number of iterations, in contrast to so-called ''direct methods'', which must complete to give any useful results (see, for example, the Householder transformation). The partial result in this case is the first few vectors of the basis the algorithm is building. When applied to Hermitian matrices it reduces to the Lanczos algorithm. The Arnoldi iteration was invented by W. E. Arnoldi in 1951.

Krylov subspaces and the power iteration

An intuitive method for finding the largest (in absolute value) eigenvalu ...
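
A minimal NumPy sketch of the iteration (a textbook version, not production code): it builds the orthonormal Krylov basis Q and the upper Hessenberg matrix H with A Q_m = Q_{m+1} \tilde{H}_m; the eigenvalues of the leading m \times m block of H (the Ritz values) approximate extremal eigenvalues of A.

    # Textbook Arnoldi iteration with modified Gram-Schmidt orthogonalization.
    import numpy as np

    def arnoldi(A, b, m):
        n = b.shape[0]
        Q = np.zeros((n, m + 1))                # orthonormal Krylov basis
        H = np.zeros((m + 1, m))                # upper Hessenberg matrix
        Q[:, 0] = b / np.linalg.norm(b)
        for j in range(m):
            v = A @ Q[:, j]                     # next Krylov direction
            for i in range(j + 1):              # orthogonalize against basis
                H[i, j] = Q[:, i] @ v
                v -= H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(v)
            if H[j + 1, j] < 1e-12:             # happy breakdown: exact
                return Q[:, :j + 1], H[:j + 1, :j + 1]  # invariant subspace
            Q[:, j + 1] = v / H[j + 1, j]
        return Q, H

    rng = np.random.default_rng(2)
    A = rng.standard_normal((200, 200))
    Q, H = arnoldi(A, rng.standard_normal(200), m=40)
    ritz = np.linalg.eigvals(H[:-1, :])         # Ritz values
    print(sorted(ritz, key=abs)[-3:])           # largest in magnitude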


Numerical Libraries
Numerical may refer to:
* Number
* Numerical digit
* Numerical analysis - the study of algorithms that use numerical approximation (as opposed to symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics). It is the study of ...


Bidiagonalization
Bidiagonalization is a unitary (orthogonal) matrix decomposition such that U* A V = B, where U and V are unitary (orthogonal) matrices; * denotes the Hermitian transpose; and B is upper bidiagonal. A is allowed to be rectangular. For dense matrices, the left and right unitary matrices are obtained by a series of Householder reflections alternately applied from the left and right. This is known as Golub-Kahan bidiagonalization. For large matrices, they are calculated iteratively by using the Lanczos method, referred to as the Golub-Kahan-Lanczos method. Bidiagonalization has a very similar structure to the singular value decomposition (SVD). However, it is computed within a finite number of operations, while the SVD requires iterative schemes to find singular values. The latter is because the squared singular values are the roots of the characteristic polynomial ...
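
A hedged dense-matrix sketch of Golub-Kahan bidiagonalization in NumPy (illustrative, assuming m >= n): Householder reflections applied alternately from the left and right, with the reflectors accumulated into U and V so that U^T A V = B.

    # Dense Golub-Kahan bidiagonalization by alternating Householder reflections.
    import numpy as np

    def householder(x):
        # Unit v with (I - 2 v v^T) x = -+ ||x|| e1 (sign chosen for stability)
        v = x.astype(float).copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        nv = np.linalg.norm(v)
        return v / nv if nv > 0 else v

    def golub_kahan(A):
        m, n = A.shape                          # assumes m >= n
        B = A.astype(float).copy()
        U, V = np.eye(m), np.eye(n)
        for k in range(n):
            v = householder(B[k:, k])           # zero entries below diagonal
            B[k:, k:] -= 2.0 * np.outer(v, v @ B[k:, k:])
            U[:, k:] -= 2.0 * np.outer(U[:, k:] @ v, v)
            if k < n - 2:
                w = householder(B[k, k + 1:])   # zero right of superdiagonal
                B[k:, k + 1:] -= 2.0 * np.outer(B[k:, k + 1:] @ w, w)
                V[:, k + 1:] -= 2.0 * np.outer(V[:, k + 1:] @ w, w)
        return U, B, V

    A = np.random.default_rng(3).standard_normal((6, 4))
    U, B, V = golub_kahan(A)
    print(np.allclose(U.T @ A @ V, B))          # B is upper bidiagonal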


Generalized Singular Value Decomposition
In linear algebra, the generalized singular value decomposition (GSVD) is the name of two different techniques based on the singular value decomposition (SVD). The two versions differ because one version decomposes two matrices (somewhat like the higher-order or tensor SVD) and the other version uses a set of constraints imposed on the left and right singular vectors of a single-matrix SVD.

First version: two-matrix decomposition

The generalized singular value decomposition (GSVD) is a matrix decomposition on a pair of matrices which generalizes the singular value decomposition. It was introduced by Van Loan in 1976 and later developed by Paige and Saunders; that later version is described here. In contrast to the SVD, the GSVD simultaneously decomposes a pair of matrices with the same number of columns. The SVD and the GSVD, as well as some other possible generalizations of the SVD, are extensively used in the study of the conditioning and regularization of linear systems with r ...
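
SciPy has, to my knowledge, no direct GSVD routine, but the "quotient" reading of the decomposition is easy to check numerically: for square nonsingular B, the generalized singular values of the pair (A, B) are the singular values of A B^{-1}, and their squares are the eigenvalues of the symmetric pencil (A^T A, B^T B). A small sketch:

    # Quotient interpretation of the GSVD for square nonsingular B:
    # sv(A B^{-1}) matches A^T A x = mu B^T B x with mu = sigma^2.
    import numpy as np
    from scipy.linalg import eigh, svd

    rng = np.random.default_rng(4)
    n = 5
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n)) + n * np.eye(n)  # comfortably nonsingular

    sigma = svd(A @ np.linalg.inv(B), compute_uv=False)
    mu = eigh(A.T @ A, B.T @ B, eigvals_only=True)   # symmetric-definite pencil

    print(np.allclose(np.sort(sigma**2), np.sort(mu)))  # True: same spectrum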


Preconditioners
In mathematics, preconditioning is the application of a transformation, called the preconditioner, that conditions a given problem into a form that is more suitable for numerical solving methods. Preconditioning is typically related to reducing a condition number of the problem. The preconditioned problem is then usually solved by an iterative method.

Preconditioning for linear systems

In linear algebra and numerical analysis, a preconditioner P of a matrix A is a matrix such that P^{-1}A has a smaller condition number than A. It is also common to call T = P^{-1} the preconditioner, rather than P, since P itself is rarely explicitly available. In modern preconditioning, the application of T = P^{-1}, i.e., multiplication of a column vector, or a block of column vectors, by T = P^{-1}, is commonly performed in a matrix-free fashion, i.e., where neither P, nor T = P^{-1} (and often not even A) are explicitly available in matrix form. Preconditioners are useful in iterative methods to solve a linear ...
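
The matrix-free application of T = P^{-1} described above maps directly onto SciPy's iterative solvers: a preconditioner is passed as anything implementing a matrix-vector product. The hedged sketch below builds P from an incomplete LU factorization and hands its solve action to GMRES.

    # Matrix-free preconditioning in SciPy: only the action r -> P^{-1} r is
    # exposed, via a LinearOperator wrapping an incomplete-LU solve.
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import LinearOperator, gmres, spilu

    n = 1000
    A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    ilu = spilu(A, drop_tol=1e-4)               # P ~ A, never formed as P^{-1}
    M = LinearOperator((n, n), matvec=ilu.solve)

    x, info = gmres(A, b, M=M)                  # info == 0 on convergence
    print(info, np.linalg.norm(A @ x - b))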






Lanczos Algorithm
The Lanczos algorithm is an iterative method devised by Cornelius Lanczos that is an adaptation of power methods to find the m "most useful" (tending towards extreme highest/lowest) eigenvalues and eigenvectors of an n \times n Hermitian matrix, where m is often but not necessarily much smaller than n. Although computationally efficient in principle, the method as initially formulated was not useful, due to its numerical instability. In 1970, Ojalvo and Newman showed how to make the method numerically stable and applied it to the solution of very large engineering structures subjected to dynamic loading. This was achieved using a method for purifying the Lanczos vectors (i.e. by repeatedly reorthogonalizing each newly generated vector with all previously generated ones) to any degree of accuracy; when not performed, this produced a series of vectors that were highly contaminated by those associated with the lowest natural frequencies. I ...
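
A minimal NumPy sketch of the iteration with the full reorthogonalization the text describes (every new vector is re-projected against all previous ones): it reduces a Hermitian A to a tridiagonal T whose eigenvalues (Ritz values) approximate the extremal eigenvalues of A.

    # Lanczos iteration with full reorthogonalization (the Ojalvo-Newman remedy).
    import numpy as np

    def lanczos(A, v, m):
        n = v.shape[0]
        Q = np.zeros((n, m))
        alpha, beta = np.zeros(m), np.zeros(m - 1)
        Q[:, 0] = v / np.linalg.norm(v)
        for j in range(m):
            w = A @ Q[:, j]
            alpha[j] = Q[:, j] @ w
            w -= alpha[j] * Q[:, j]
            if j > 0:
                w -= beta[j - 1] * Q[:, j - 1]
            # re-project against all previous Lanczos vectors for stability
            w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
            if j < m - 1:
                beta[j] = np.linalg.norm(w)
                Q[:, j + 1] = w / beta[j]
        T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
        return Q, T

    rng = np.random.default_rng(5)
    M = rng.standard_normal((300, 300))
    A = (M + M.T) / 2                           # symmetric test matrix
    Q, T = lanczos(A, rng.standard_normal(300), m=60)
    ritz = np.linalg.eigvalsh(T)
    print(ritz[:2], ritz[-2:])                  # extremal Ritz values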