This is a list of numerical analysis topics.


General

* Validated numerics — numerics with mathematically strict error control, also called rigorous, verified, or reliable computation
* Iterative method — procedure that generates a sequence of improving approximate solutions, the ''n''-th approximation derived from the preceding ones
* Rate of convergence — the speed at which a convergent sequence approaches its limit
** Order of accuracy — rate at which the numerical solution of a differential equation converges to the exact solution
* Series acceleration — methods to accelerate the speed of convergence of a series
** Aitken's delta-squared process — most useful for linearly converging sequences (see the sketch after this list)
** Minimum polynomial extrapolation — for vector sequences
** Richardson extrapolation
** Shanks transformation — similar to Aitken's delta-squared process, but applied to the partial sums
** Van Wijngaarden transformation — for accelerating the convergence of an alternating series
* Abramowitz and Stegun — book containing formulas and tables of many special functions
** Digital Library of Mathematical Functions — successor of the book by Abramowitz and Stegun
* Curse of dimensionality
* Local convergence and global convergence — whether you need a good initial guess to get convergence
* Superconvergence
* Discretization
* Difference quotient
* Complexity:
** Computational complexity of mathematical operations
** Smoothed analysis — measuring the expected performance of algorithms under slight random perturbations of worst-case inputs
* Symbolic-numeric computation — combination of symbolic and numeric methods
* Cultural and historical aspects:
** History of numerical solution of differential equations using computers
** Hundred-dollar, Hundred-digit Challenge problems — list of ten problems proposed by Nick Trefethen in 2002
** International Workshops on Lattice QCD and Numerical Analysis
** Timeline of numerical analysis after 1945
* General classes of methods:
** Collocation method — discretizes a continuous equation by requiring it only to hold at certain points
** Level-set method
*** Level set (data structures) — data structures for representing level sets
** Sinc numerical methods — methods based on the sinc function, sinc(''x'') = sin(''x'') / ''x''
** ABS methods
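
The sketch promised next to Aitken's delta-squared process above is given here, in Python; the helper name `aitken` and the Leibniz test series are illustrative choices, not from any particular library:

```python
# Aitken's delta-squared process: accelerate a linearly converging sequence
# via x'_n = x_{n+2} - (x_{n+2} - x_{n+1})^2 / (x_{n+2} - 2 x_{n+1} + x_n).
def aitken(seq):
    out = []
    for x0, x1, x2 in zip(seq, seq[1:], seq[2:]):
        denom = x2 - 2 * x1 + x0
        # Fall back to the raw term if the second difference vanishes.
        out.append(x2 - (x2 - x1) ** 2 / denom if denom != 0 else x2)
    return out

# Partial sums of the Leibniz series 1 - 1/3 + 1/5 - ... = pi/4.
partial_sums, s = [], 0.0
for k in range(20):
    s += (-1) ** k / (2 * k + 1)
    partial_sums.append(s)

print(4 * partial_sums[-1])          # ~3.09, still far from pi
print(4 * aitken(partial_sums)[-1])  # much closer to 3.14159...
```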


Error

Error analysis (mathematics) — study of the kind and quantity of error, or uncertainty, that may be present in the solution to a problem
* Approximation
* Approximation error
* Catastrophic cancellation — subtracting good approximations to two nearby numbers may yield a very bad approximation to their difference (see the sketch after this list)
* Condition number
* Discretization error
* Floating-point number
** Guard digit — extra precision introduced during a computation to reduce round-off error
** Truncation — rounding a floating-point number by discarding all digits after a certain digit
** Round-off error
*** Numeric precision in Microsoft Excel
** Arbitrary-precision arithmetic
* Interval arithmetic — represent every number by two floating-point numbers guaranteed to have the unknown number between them
** Interval contractor — maps an interval to a subinterval which still contains the unknown exact answer
** Interval propagation — contracting interval domains without removing any value consistent with the constraints
*** See also: Interval boundary element method, Interval finite element
* Loss of significance
* Numerical error
* Numerical stability
* Error propagation:
** Propagation of uncertainty
*** List of uncertainty propagation software
** Significance arithmetic
** Residual (numerical analysis)
* Relative change and difference — the relative difference between ''x'' and ''y'' is |''x'' − ''y''| / max(|''x''|, |''y''|)
* Significant figures
** False precision — giving more significant figures than appropriate
* Sterbenz lemma — conditions under which floating-point differences are computed exactly
* Truncation error — error committed by doing only a finite number of steps
* Well-posed problem
* Affine arithmetic
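
The sketch promised next to catastrophic cancellation above, in Python; the function (1 − cos ''x'')/''x''² and the value of ''x'' are illustrative assumptions:

```python
import math

x = 1.0e-8
# Naive form: cos(x) rounds to exactly 1.0 in double precision, so the
# subtraction destroys all significant digits and the quotient is 0.0.
naive = (1.0 - math.cos(x)) / x**2
# Rewritten via the half-angle identity 1 - cos(x) = 2 sin^2(x/2);
# no nearby quantities are subtracted, so the result is accurate.
stable = 0.5 * (math.sin(x / 2) / (x / 2)) ** 2
print(naive, stable)  # 0.0 versus approximately 0.5, the correct value
```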


Elementary and special functions

* Unrestricted algorithm
* Summation:
** Kahan summation algorithm (see the sketch after this list)
** Pairwise summation — slightly worse than Kahan summation but cheaper
** Binary splitting
** 2Sum
* Multiplication:
** Multiplication algorithm — general discussion, simple methods
** Karatsuba algorithm — the first algorithm which is faster than straightforward multiplication
** Toom–Cook multiplication — generalization of Karatsuba multiplication
** Schönhage–Strassen algorithm — based on Fourier transform, asymptotically very fast
** Fürer's algorithm — asymptotically slightly faster than Schönhage–Strassen
* Division algorithm — for computing quotient and/or remainder of two numbers
** Long division
** Restoring division
** Non-restoring division
** SRT division
** Newton–Raphson division: uses Newton's method to find the reciprocal of ''D'', and multiplies that reciprocal by ''N'' to find the final quotient ''Q''
** Goldschmidt division
* Exponentiation:
** Exponentiation by squaring
** Addition-chain exponentiation
* Multiplicative inverse algorithms: for computing a number's multiplicative inverse (reciprocal)
** Newton's method
* Polynomials:
** Horner's method
** Estrin's scheme — modification of the Horner scheme with more possibilities for parallelization
** Clenshaw algorithm
** De Casteljau's algorithm
* Square roots and other roots:
** Integer square root
** Methods of computing square roots
** ''n''th root algorithm
** Shifting ''n''th root algorithm — similar to long division
** hypot — the function √(''x''² + ''y''²)
** Alpha max plus beta min algorithm — approximates hypot(''x'', ''y'')
** Fast inverse square root — calculates 1/√''x'' using details of the IEEE floating-point system
* Elementary functions (exponential, logarithm, trigonometric functions):
** Trigonometric tables — different methods for generating them
** CORDIC — shift-and-add algorithm using a table of arc tangents
** BKM algorithm — shift-and-add algorithm using a table of logarithms and complex numbers
* Gamma function:
** Lanczos approximation
** Spouge's approximation — modification of Stirling's approximation; easier to apply than Lanczos
* AGM method — computes arithmetic–geometric mean; related methods compute special functions
* FEE method (Fast E-function Evaluation) — fast summation of series like the power series for e^''x''
* Gal's accurate tables — table of function values with unequal spacing to reduce round-off error
* Spigot algorithm — algorithms that can compute individual digits of a real number
* Approximations of π:
** Liu Hui's π algorithm — first algorithm that can compute π to arbitrary precision
** Leibniz formula for π — alternating series with very slow convergence
** Wallis product — infinite product converging slowly to π/2
** Viète's formula — more complicated infinite product which converges faster
** Gauss–Legendre algorithm — iteration which converges quadratically to π, based on arithmetic–geometric mean
** Borwein's algorithm — iteration which converges quartically to 1/π, and other algorithms
** Chudnovsky algorithm — fast algorithm that calculates a hypergeometric series
** Bailey–Borwein–Plouffe formula — can be used to compute individual hexadecimal digits of π
** Bellard's formula — faster version of the Bailey–Borwein–Plouffe formula
** List of formulae involving π
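
The sketch promised next to the Kahan summation entry above, in Python; the test data is an assumption chosen to make the effect visible, and Python's own `math.fsum` offers a fully accurate alternative:

```python
def kahan_sum(values):
    total = 0.0
    c = 0.0                      # running compensation for lost low-order bits
    for v in values:
        y = v - c                # subtract the error carried from the last step
        t = total + y            # low-order digits of y may be lost here...
        c = (t - total) - y      # ...but this difference recovers them
        total = t
    return total

data = [1.0] + [1.0e-16] * 1_000_000
print(sum(data) - 1.0)           # 0.0: naive summation swallows every tiny term
print(kahan_sum(data) - 1.0)     # ~1e-10, the correct accumulated contribution
```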


Numerical linear algebra

Numerical linear algebra — study of numerical algorithms for linear algebra problems


Basic concepts

* Types of matrices appearing in numerical analysis:
** Sparse matrix — matrix in which most of the elements are zero
*** Band matrix
*** Bidiagonal matrix
*** Tridiagonal matrix
*** Pentadiagonal matrix
*** Skyline matrix
** Circulant matrix
** Triangular matrix
** Diagonally dominant matrix
** Block matrix — matrix composed of smaller matrices
** Stieltjes matrix — symmetric positive definite with non-positive off-diagonal entries
** Hilbert matrix — example of a matrix which is extremely ill-conditioned (and thus difficult to handle)
** Wilkinson matrix — example of a symmetric tridiagonal matrix with pairs of nearly, but not exactly, equal eigenvalues
** Convergent matrix — square matrix whose successive powers approach the zero matrix
* Algorithms for matrix multiplication:
** Strassen algorithm
** Coppersmith–Winograd algorithm
** Cannon's algorithm — a distributed algorithm, especially suitable for processors laid out in a 2D grid
** Freivalds' algorithm — a randomized algorithm for checking the result of a multiplication
* Matrix decompositions (see the sketch after this list):
** LU decomposition — lower triangular times upper triangular
** QR decomposition — orthogonal matrix times triangular matrix
*** RRQR factorization — rank-revealing QR factorization, can be used to compute the rank of a matrix
** Polar decomposition — unitary matrix times positive-semidefinite Hermitian matrix
** Decompositions by similarity:
*** Eigendecomposition — decomposition in terms of eigenvectors and eigenvalues
*** Jordan normal form — bidiagonal matrix of a certain form; generalizes the eigendecomposition
**** Weyr canonical form — permutation of Jordan normal form
*** Jordan–Chevalley decomposition — sum of commuting nilpotent matrix and diagonalizable matrix
*** Schur decomposition — similarity transform bringing the matrix to a triangular matrix
** Singular value decomposition — unitary matrix times diagonal matrix times unitary matrix
* Matrix splitting — expressing a given matrix as a sum or difference of matrices
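
As a quick illustration of the decompositions listed above, the sketch below computes an LU and a QR factorization and verifies each by multiplying back; it assumes NumPy and SciPy are available, and the 2×2 matrix is an arbitrary example:

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])

P, L, U = lu(A)                   # LU with partial pivoting: A = P @ L @ U
Q, R = np.linalg.qr(A)            # QR: Q orthogonal, R upper triangular
print(np.allclose(A, P @ L @ U))  # True
print(np.allclose(A, Q @ R))      # True
```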


Solving systems of linear equations

* Gaussian elimination
** Row echelon form — matrix in which all entries below a nonzero entry are zero
** Bareiss algorithm — variant which ensures that all entries remain integers if the initial matrix has integer entries
** Tridiagonal matrix algorithm — simplified form of Gaussian elimination for tridiagonal matrices
* LU decomposition — write a matrix as a product of an upper- and a lower-triangular matrix
** Crout matrix decomposition
** LU reduction — a special parallelized version of an LU decomposition algorithm
* Block LU decomposition
* Cholesky decomposition — for solving a system with a positive definite matrix
** Minimum degree algorithm
** Symbolic Cholesky decomposition
* Iterative refinement — procedure to turn an inaccurate solution into a more accurate one
* Direct methods for sparse matrices:
** Frontal solver — used in finite element methods
** Nested dissection — for symmetric matrices, based on graph partitioning
* Levinson recursion — for Toeplitz matrices
* SPIKE algorithm — hybrid parallel solver for narrow-banded matrices
* Cyclic reduction — eliminate even or odd rows or columns, repeat
* Iterative methods:
** Jacobi method
** Gauss–Seidel method (see the sketch after this list)
*** Successive over-relaxation (SOR) — a technique to accelerate the Gauss–Seidel method
**** Symmetric successive over-relaxation (SSOR) — variant of SOR for symmetric matrices
*** Backfitting algorithm — iterative procedure used to fit a generalized additive model, often equivalent to Gauss–Seidel
** Modified Richardson iteration
** Conjugate gradient method (CG) — assumes that the matrix is positive definite
*** Derivation of the conjugate gradient method
*** Nonlinear conjugate gradient method — generalization for nonlinear optimization problems
** Biconjugate gradient method (BiCG)
*** Biconjugate gradient stabilized method (BiCGSTAB) — variant of BiCG with better convergence
** Conjugate residual method — similar to CG but only assumes that the matrix is symmetric
** Generalized minimal residual method (GMRES) — based on the Arnoldi iteration
** Chebyshev iteration — avoids inner products but needs bounds on the spectrum
** Stone's method (SIP — Strongly Implicit Procedure) — uses an incomplete LU decomposition
** Kaczmarz method
** Preconditioner
*** Incomplete Cholesky factorization — sparse approximation to the Cholesky factorization
*** Incomplete LU factorization — sparse approximation to the LU factorization
** Uzawa iteration — for saddle node problems
* Underdetermined and overdetermined systems (systems that have no or more than one solution):
** Numerical computation of null space — find all solutions of an underdetermined system
** Moore–Penrose pseudoinverse — for finding a solution with smallest 2-norm (for underdetermined systems) or smallest residual
** Sparse approximation — for finding the sparsest solution (i.e., the solution with as many zeros as possible)
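
The sketch promised next to the Gauss–Seidel entry above, in Python; the small diagonally dominant system (a standard sufficient condition for convergence) and the sweep count are illustrative assumptions:

```python
import numpy as np

def gauss_seidel(A, b, sweeps=50):
    """Solve one unknown at a time, always using the freshest values."""
    x = np.zeros_like(b)
    for _ in range(sweeps):
        for i in range(len(b)):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])   # strictly diagonally dominant
b = np.array([1.0, 2.0, 3.0])
x = gauss_seidel(A, b)
print(np.allclose(A @ x, b))      # True once the iteration has converged
```

Using already-updated components of `x` within each sweep is exactly what distinguishes Gauss–Seidel from the Jacobi method, which would use only the previous sweep's values.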


Eigenvalue algorithms

Eigenvalue algorithm — a numerical algorithm for locating the eigenvalues of a matrix
* Power iteration (see the sketch after this list)
* Inverse iteration
* Rayleigh quotient iteration
* Arnoldi iteration — based on Krylov subspaces
* Lanczos algorithm — Arnoldi, specialized for positive-definite matrices
** Block Lanczos algorithm — for when the matrix is over a finite field
* QR algorithm
* Jacobi eigenvalue algorithm — select a small submatrix which can be diagonalized exactly, and repeat
** Jacobi rotation — the building block, almost a Givens rotation
** Jacobi method for complex Hermitian matrices
* Divide-and-conquer eigenvalue algorithm
* Folded spectrum method
* LOBPCG — Locally Optimal Block Preconditioned Conjugate Gradient Method
* Eigenvalue perturbation — stability of eigenvalues under perturbations of the matrix
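
The sketch promised next to the power iteration entry above, in Python (assuming NumPy); the 2×2 symmetric test matrix, with eigenvalues 3 and 1, and the iteration count are illustrative:

```python
import numpy as np

def power_iteration(A, iters=100, seed=0):
    v = np.random.default_rng(seed).standard_normal(A.shape[0])
    for _ in range(iters):
        v = A @ v                 # amplify the dominant eigendirection
        v /= np.linalg.norm(v)    # renormalize to avoid overflow
    return v @ A @ v, v           # Rayleigh quotient estimates the eigenvalue

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # eigenvalues 3 and 1
lam, v = power_iteration(A)
print(lam)                        # approximately 3.0
```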


Other concepts and algorithms

* Orthogonalization algorithms (see the sketch after this list):
** Gram–Schmidt process
** Householder transformation
*** Householder operator — analogue of the Householder transformation for general inner product spaces
** Givens rotation
* Krylov subspace
* Block matrix pseudoinverse
* Bidiagonalization
* Cuthill–McKee algorithm — permutes rows/columns in a sparse matrix to yield a narrow band matrix
* In-place matrix transposition — computing the transpose of a matrix without using much additional storage
* Pivot element — entry in a matrix on which the algorithm concentrates
* Matrix-free methods — methods that only access the matrix by evaluating matrix-vector products
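
The sketch promised next to the orthogonalization entries above uses the modified Gram–Schmidt variant, which is numerically preferable to the classical formulation; the nearly dependent test matrix is an illustrative assumption:

```python
import numpy as np

def modified_gram_schmidt(A):
    """Return Q with orthonormal columns spanning the columns of A."""
    A = A.astype(float).copy()
    Q = np.empty_like(A)
    for j in range(A.shape[1]):
        Q[:, j] = A[:, j] / np.linalg.norm(A[:, j])
        for k in range(j + 1, A.shape[1]):
            # Project the *updated* later columns against the new direction.
            A[:, k] -= (Q[:, j] @ A[:, k]) * Q[:, j]
    return Q

A = np.array([[1.0, 1.0],
              [1.0e-8, 0.0],
              [0.0, 1.0e-8]])     # nearly linearly dependent columns
Q = modified_gram_schmidt(A)
print(np.allclose(Q.T @ Q, np.eye(2), atol=1e-7))  # True: columns ~orthonormal
```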


Interpolation and approximation

Interpolation — construct a function going through some given data points
* Nearest-neighbor interpolation — takes the value of the nearest neighbor


Polynomial interpolation

Polynomial interpolation — interpolation by polynomials
* Linear interpolation
* Runge's phenomenon
* Vandermonde matrix
* Chebyshev polynomials
* Chebyshev nodes
* Lebesgue constant (interpolation)
* Different forms for the interpolant:
** Newton polynomial
*** Divided differences
*** Neville's algorithm — for evaluating the interpolant; based on the Newton form (see the sketch after this list)
** Lagrange polynomial
** Bernstein polynomial — especially useful for approximation
** Brahmagupta's interpolation formula — seventh-century formula for quadratic interpolation
* Extensions to multiple dimensions:
** Bilinear interpolation
** Trilinear interpolation
** Bicubic interpolation
** Tricubic interpolation
** Padua points — set of points in R² with unique polynomial interpolant and minimal growth of Lebesgue constant
* Hermite interpolation
* Birkhoff interpolation
* Abel–Goncharov interpolation
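
The sketch promised next to Neville's algorithm above, in Python; the three sample points are taken from ''f''(''t'') = ''t''² for illustration, so the degree-2 interpolant reproduces ''f'' exactly:

```python
def neville(xs, ys, x):
    """Evaluate the interpolating polynomial at x without forming coefficients."""
    p = list(ys)
    n = len(xs)
    for level in range(1, n):
        for i in range(n - level):
            p[i] = ((x - xs[i + level]) * p[i]
                    + (xs[i] - x) * p[i + 1]) / (xs[i] - xs[i + level])
    return p[0]

# Three samples of f(t) = t^2; the quadratic interpolant is exact.
print(neville([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))  # 2.25
```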


Spline interpolation

Spline interpolation — interpolation by piecewise polynomials
* Spline (mathematics) — the piecewise polynomials used as interpolants
* Perfect spline — polynomial spline of degree ''m'' whose ''m''th derivative is ±1
* Cubic Hermite spline
** Centripetal Catmull–Rom spline — special case of cubic Hermite splines without self-intersections or cusps
* Monotone cubic interpolation
* Hermite spline
* Bézier curve
** De Casteljau's algorithm (see the sketch after this list)
** Composite Bézier curve
** Generalizations to more dimensions:
*** Bézier triangle — maps a triangle to R³
*** Bézier surface — maps a square to R³
* B-spline
** Box spline — multivariate generalization of B-splines
** Truncated power function
** De Boor's algorithm — generalizes De Casteljau's algorithm
* Non-uniform rational B-spline (NURBS)
** T-spline — can be thought of as a NURBS surface for which a row of control points is allowed to terminate
* Kochanek–Bartels spline
* Coons patch — type of manifold parametrization used to smoothly join other surfaces together
* M-spline — a non-negative spline
* I-spline — a monotone spline, defined in terms of M-splines
* Smoothing spline — a spline fitted smoothly to noisy data
* Blossom (functional) — a unique, affine, symmetric map associated to a polynomial or spline
* See also: List of numerical computational geometry topics
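
The sketch promised next to De Casteljau's algorithm above, in Python: a Bézier curve is evaluated by repeated linear interpolation between control points; the quadratic control polygon is an illustrative choice:

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t by repeated linear interpolation."""
    pts = list(points)
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

control = [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0)]  # quadratic Bezier curve
print(de_casteljau(control, 0.5))               # (0.5, 0.5), the curve's apex
```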


Trigonometric interpolation

Trigonometric interpolation — interpolation by trigonometric polynomials
* Discrete Fourier transform — can be viewed as trigonometric interpolation at equidistant points (see the sketch after this list)
** Relations between Fourier transforms and Fourier series
* Fast Fourier transform (FFT) — a fast method for computing the discrete Fourier transform
** Bluestein's FFT algorithm
** Bruun's FFT algorithm
** Cooley–Tukey FFT algorithm
** Split-radix FFT algorithm — variant of Cooley–Tukey that uses a blend of radices 2 and 4
** Goertzel algorithm
** Prime-factor FFT algorithm
** Rader's FFT algorithm
** Bit-reversal permutation — particular permutation of vectors with 2^''m'' entries used in many FFTs
** Butterfly diagram
** Twiddle factor — the trigonometric constant coefficients that are multiplied by the data
** Cyclotomic fast Fourier transform — for FFT over finite fields
** Methods for computing discrete convolutions with finite impulse response filters using the FFT:
*** Overlap–add method
*** Overlap–save method
* Sigma approximation
* Dirichlet kernel — convolving any function with the Dirichlet kernel yields its trigonometric interpolant
* Gibbs phenomenon
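
The sketch promised next to the discrete Fourier transform entry above, in Python (assuming NumPy): the FFT of equispaced samples yields the coefficients of a trigonometric polynomial that reproduces the data, and the band-limited test signal and evaluation point are illustrative:

```python
import numpy as np

N = 8
t = 2 * np.pi * np.arange(N) / N                 # equidistant sample points
samples = np.sin(t) + 0.5 * np.cos(2 * t)        # band-limited test signal

c = np.fft.fft(samples) / N                      # trigonometric coefficients
k = np.fft.fftfreq(N, d=1.0 / N)                 # frequencies 0, 1, ..., -2, -1

def trig_interp(x):
    """Evaluate the trigonometric interpolant sum_k c_k e^{i k x}."""
    return np.real(np.sum(c * np.exp(1j * k * x)))

x = 0.3                                          # an off-grid point
print(trig_interp(x), np.sin(x) + 0.5 * np.cos(2 * x))  # the values agree
```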


Other interpolants

* Simple rational approximation ** Polynomial and rational function modeling — comparison of polynomial and rational interpolation *
Wavelet A wavelet is a wave-like oscillation with an amplitude that begins at zero, increases or decreases, and then returns to zero one or more times. Wavelets are termed a "brief oscillation". A taxonomy of wavelets has been established, based on the num ...
** Continuous wavelet ** Transfer matrix **See also:
List of functional analysis topics This is a list of functional analysis topics, by Wikipedia page. See also: Glossary of functional analysis. Hilbert space Functional analysis, classic results Operator theory Banach space examples * Lp space *Hardy space *Sobolev space ...
,
List of wavelet-related transforms {{Short description, none A list of wavelet related transforms: * Continuous wavelet transform (CWT) * Discrete wavelet transform (DWT) * Multiresolution analysis (MRA) * Lifting scheme * Binomial QMF (BQMF) * Fast wavelet transform (FWT) * Com ...
* Inverse distance weighting *
Radial basis function A radial basis function (RBF) is a real-valued function \varphi whose value depends only on the distance between the input and some fixed point, either the origin, so that \varphi(\mathbf) = \hat\varphi(\left\, \mathbf\right\, ), or some other fixed ...
(RBF) — a function of the form ƒ(''x'') = ''φ''(, ''x''−''x''0, ) **
Polyharmonic spline In applied mathematics, polyharmonic splines are used for function approximation and data interpolation. They are very useful for interpolating and fitting scattered data in many dimensions. Special cases include thin plate splines and natural cu ...
— a commonly used radial basis function **
Thin plate spline Thin plate splines (TPS) are a spline-based technique for data interpolation and smoothing. They were introduced to geometric design by Duchon. They are an important special case of a polyharmonic spline. Robust Point Matching (RPM) is a common e ...
— a specific polyharmonic spline: ''r''2 log ''r'' ** Hierarchical RBF *
Subdivision surface In the field of 3D computer graphics, a subdivision surface (commonly shortened to SubD surface) is a curved surface represented by the specification of a coarser polygon mesh and produced by a recursive algorithmic method. The curved surface, ...
— constructed by recursively subdividing a piecewise linear interpolant ** Catmull–Clark subdivision surface ** Doo–Sabin subdivision surface ** Loop subdivision surface * Slerp (spherical linear interpolation) — interpolation between two points on a sphere **Generalized quaternion interpolation — generalizes slerp for interpolation between more than two quaternions *
Irrational base discrete weighted transform In mathematics, the irrational base discrete weighted transform (IBDWT) is a variant of the fast Fourier transform using an irrational base; it was developed by Richard Crandall ( Reed College), Barry Fagin ( Dartmouth College) and Joshua Doeni ...
* Nevanlinna–Pick interpolation — interpolation by analytic functions in the unit disc subject to a bound **
Pick matrix Pick may refer to: Places * Pick City, North Dakota, a town in the United States * Pick Lake (Cochrane District, Ontario), a lake in Canada * Pick Lake (Thunder Bay District), a lake in Canada * Pick Mere, a lake in Pickmere, England People w ...
— the Nevanlinna–Pick interpolation has a solution if this matrix is positive semi-definite *
* Multivariate interpolation — the function being interpolated depends on more than one variable
** Barnes interpolation — method for two-dimensional functions using Gaussians common in meteorology
** Coons surface — combination of linear interpolation and bilinear interpolation
** Lanczos resampling — based on convolution with a sinc function
** Natural neighbor interpolation
** Nearest neighbor value interpolation
** PDE surface
** Transfinite interpolation — constructs function on planar domain given its values on the boundary
** Trend surface analysis — based on low-order polynomials of spatial coordinates; uses scattered observations
**Methods based on polynomials are listed under ''Polynomial interpolation''
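To make the radial-basis-function entries concrete, here is a minimal interpolation sketch in Python (function names are ours). A Gaussian kernel is used because its interpolation matrix is positive definite for distinct centers; a thin plate spline would instead use ''r''² log ''r'' plus a low-order polynomial term:

```python
import numpy as np

def rbf_interpolate(centers, values, x, eps=1.0):
    """Radial basis function interpolation with a Gaussian kernel phi(r) = exp(-(eps*r)^2).

    Solves for weights w such that sum_i w_i * phi(|c_j - c_i|) = values_j,
    then evaluates the interpolant at the query point x.
    """
    phi = lambda r: np.exp(-(eps * r) ** 2)
    # Pairwise distances between the interpolation centers.
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    w = np.linalg.solve(phi(d), values)              # interpolation conditions
    return phi(np.linalg.norm(x - centers, axis=-1)) @ w

centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
values = np.array([0.0, 1.0, 1.0, 2.0])              # samples of f(x, y) = x + y
# Exact at the four centers; approximate elsewhere.
print(rbf_interpolate(centers, values, np.array([0.5, 0.5])))
```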


Approximation theory

Approximation theory
* Orders of approximation
* Lebesgue's lemma
* Curve fitting
** Vector field reconstruction
* Modulus of continuity — measures smoothness of a function
* Least squares (function approximation) — minimizes the error in the L2-norm (see the sketch at the end of this section)
* Minimax approximation algorithm — minimizes the maximum error over an interval (the L∞-norm)
** Equioscillation theorem — characterizes the best approximation in the L∞-norm
* Unisolvent point set — function from given function space is determined uniquely by values on such a set of points
* Stone–Weierstrass theorem — continuous functions can be approximated uniformly by polynomials, or certain other function spaces
*Approximation by polynomials:
** Linear approximation
** Bernstein polynomial — basis of polynomials useful for approximating a function
** Bernstein's constant — error when approximating |''x''| by a polynomial
** Remez algorithm — for constructing the best polynomial approximation in the L∞-norm
** Bernstein's inequality (mathematical analysis) — bound on maximum of derivative of polynomial in unit disk
** Mergelyan's theorem — generalization of Stone–Weierstrass theorem for polynomials
** Müntz–Szász theorem — variant of Stone–Weierstrass theorem for polynomials if some coefficients have to be zero
** Bramble–Hilbert lemma — upper bound on Lp error of polynomial approximation in multiple dimensions
** Discrete Chebyshev polynomials — polynomials orthogonal with respect to a discrete measure
** Favard's theorem — polynomials satisfying suitable 3-term recurrence relations are orthogonal polynomials
*Approximation by Fourier series / trigonometric polynomials:
** Jackson's inequality — upper bound for best approximation by a trigonometric polynomial
*** Bernstein's theorem (approximation theory) — a converse to Jackson's inequality
** Fejér's theorem — Cesàro means of partial sums of Fourier series converge uniformly for continuous periodic functions
** Erdős–Turán inequality — bounds distance between probability and Lebesgue measure in terms of Fourier coefficients
*Different approximations:
** Moving least squares
** Padé approximant
*** Padé table — table of Padé approximants
** Hartogs–Rosenthal theorem — continuous functions can be approximated uniformly by rational functions on a set of Lebesgue measure zero
** Szász–Mirakyan operator — approximation by terms e^(−''nx'') ''x''^''k'' on a semi-infinite interval
** Szász–Mirakjan–Kantorovich operator
** Baskakov operator — generalize Bernstein polynomials, Szász–Mirakyan operators, and Lupas operators
** Favard operator — approximation by sums of Gaussians
* Surrogate model — application: replacing a function that is hard to evaluate by a simpler function
* Constructive function theory — field that studies connection between degree of approximation and smoothness
* Universal differential equation — differential–algebraic equation whose solutions can approximate any continuous function
* Fekete problem — find ''N'' points on a sphere that minimize some kind of energy
* Carleman's condition — condition guaranteeing that a measure is uniquely determined by its moments
* Krein's condition — condition that exponential sums are dense in weighted L2 space
* Lethargy theorem — about distance of points in a metric space from members of a sequence of subspaces
* Wirtinger's representation and projection theorem
*Journals:
** Constructive Approximation
** Journal of Approximation Theory
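To make the least-squares entry above concrete, here is a minimal sketch with NumPy (the grid, degree, and target function are our own choices); a minimax fit computed by the Remez algorithm would instead minimize the maximum error and exhibit equioscillation:

```python
import numpy as np

# Sample exp(x) on [-1, 1] and fit a cubic in the least-squares sense;
# np.polyfit solves the normal equations of the discrete L2 problem.
x = np.linspace(-1.0, 1.0, 200)
y = np.exp(x)
p = np.poly1d(np.polyfit(x, y, deg=3))   # highest-degree coefficient first

print("max |exp - p| on the grid:", np.max(np.abs(y - p(x))))
```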


Miscellaneous

* Extrapolation
** Linear predictive analysis — linear extrapolation
* Unisolvent functions — functions for which the interpolation problem has a unique solution
* Regression analysis
** Isotonic regression (a small sketch of the pool-adjacent-violators algorithm follows this list)
* Curve-fitting compaction
* Interpolation (computer graphics)
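The sketch referenced at the isotonic-regression entry: a minimal plain-Python pool-adjacent-violators algorithm (PAVA) for the unweighted, non-decreasing case (the function name is ours):

```python
def pava(y):
    """Pool Adjacent Violators: the isotonic (non-decreasing) least-squares fit.

    Maintains a stack of blocks (sum, count); whenever a new block's mean
    drops below the previous block's mean, the two blocks are merged.
    """
    blocks = []  # each block is [sum, count]; its fitted value is sum / count
    for v in y:
        blocks.append([v, 1])
        # Merge while mean(prev) > mean(last), i.e. s_prev/c_prev > s_last/c_last.
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)
    return out

print(pava([1, 3, 2, 4, 3, 5]))  # -> [1, 2.5, 2.5, 3.5, 3.5, 5]
```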


Finding roots of nonlinear equations

:''See #Numerical linear algebra for linear equations''
Root-finding algorithm — algorithms for solving the equation ''f''(''x'') = 0
*General methods:
** Bisection method — simple and robust; linear convergence (see the sketch after this list, together with Newton's method)
*** Lehmer–Schur algorithm — variant for complex functions
** Fixed-point iteration
** Newton's method — based on linear approximation around the current iterate; quadratic convergence
*** Kantorovich theorem — gives a region around solution such that Newton's method converges
*** Newton fractal — indicates which initial condition converges to which root under Newton iteration
*** Quasi-Newton method — uses an approximation of the Jacobian:
**** Broyden's method — uses a rank-one update for the Jacobian
**** Symmetric rank-one — a symmetric (but not necessarily positive definite) rank-one update of the Jacobian
**** Davidon–Fletcher–Powell formula — update of the Jacobian in which the matrix remains positive definite
**** Broyden–Fletcher–Goldfarb–Shanno algorithm — rank-two update of the Jacobian in which the matrix remains positive definite
**** Limited-memory BFGS method — truncated, matrix-free variant of BFGS method suitable for large problems
*** Steffensen's method — uses divided differences instead of the derivative
** Secant method — based on linear interpolation at last two iterates
** False position method — secant method with ideas from the bisection method
** Muller's method — based on quadratic interpolation at last three iterates
** Sidi's generalized secant method — higher-order variants of secant method
** Inverse quadratic interpolation — similar to Muller's method, but interpolates the inverse
** Brent's method — combines bisection method, secant method and inverse quadratic interpolation
** Ridders' method — fits a linear function times an exponential to last two iterates and their midpoint
** Halley's method — uses ''f'', ''f''′ and ''f''″; achieves cubic convergence
** Householder's method — uses first ''d'' derivatives to achieve order ''d'' + 1; generalizes Newton's and Halley's method
*Methods for polynomials:
** Aberth method
** Bairstow's method
** Durand–Kerner method
** Graeffe's method
** Jenkins–Traub algorithm — fast, reliable, and widely used
** Laguerre's method
** Splitting circle method
*Analysis:
** Wilkinson's polynomial
* Numerical continuation — tracking a root as one parameter in the equation changes
** Piecewise linear continuation
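As a quick illustration of the two workhorse root-finders above, here is a minimal sketch in plain Python (the function names and tolerances are our own choices, not any particular library's API):

```python
def bisect(f, a, b, tol=1e-12):
    """Bisection: halve the bracketing interval [a, b] until it is tiny.

    Requires f(a) and f(b) to have opposite signs; converges linearly.
    """
    fa = f(a)
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:
            b = m          # sign change in [a, m]: root is there
        else:
            a, fa = m, fm  # otherwise the root is in [m, b]
    return (a + b) / 2

def newton(f, df, x, tol=1e-12, max_iter=50):
    """Newton's method: follow the tangent line; quadratic convergence near a simple root."""
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Both solve f(x) = x^2 - 2 = 0, i.e. approximate sqrt(2).
print(bisect(lambda x: x * x - 2, 0.0, 2.0))
print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))
```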


Optimization

Mathematical optimization — algorithm for finding maxima or minima of a given function


Basic concepts

* Active set
* Candidate solution
* Constraint (mathematics)
** Constrained optimization — studies optimization problems with constraints
** Binary constraint — a constraint that involves exactly two variables
* Corner solution
* Feasible region — contains all solutions that satisfy the constraints but may not be optimal
* Global optimum and Local optimum
* Maxima and minima
* Slack variable
* Continuous optimization
* Discrete optimization


Linear programming

Linear programming (also treats ''integer programming'') — objective function and constraints are linear; a tiny worked example follows this section
* Algorithms for linear programming:
** Simplex algorithm
*** Bland's rule — rule to avoid cycling in the simplex method
*** Klee–Minty cube — perturbed (hyper)cube; simplex method has exponential complexity on such a domain
*** Criss-cross algorithm — similar to the simplex algorithm
*** Big M method — variation of simplex algorithm for problems with both "less than" and "greater than" constraints
** Interior point method
*** Ellipsoid method
*** Karmarkar's algorithm
*** Mehrotra predictor–corrector method
** Column generation
** k-approximation of k-hitting set — algorithm for specific LP problems (to find a weighted hitting set)
* Linear complementarity problem
*Decompositions:
** Benders' decomposition
** Dantzig–Wolfe decomposition
** Theory of two-level planning
** Variable splitting
* Basic solution (linear programming) — solution at vertex of feasible region
* Fourier–Motzkin elimination
* Hilbert basis (linear programming) — set of integer vectors in a convex cone which generate all integer vectors in the cone
* LP-type problem
* Linear inequality
* Vertex enumeration problem — list all vertices of the feasible set
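The worked example promised above: a tiny linear program solved with SciPy's linprog (recent SciPy versions default to the HiGHS solvers rather than a textbook simplex tableau); the data are our own toy numbers:

```python
from scipy.optimize import linprog

# maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0.
# linprog minimizes, so the objective is negated.
res = linprog(c=[-3, -2],
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # optimal vertex (4, 0) with objective value 12
```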


Convex optimization

Convex optimization
* Quadratic programming
** Linear least squares (mathematics)
** Total least squares
** Frank–Wolfe algorithm
** Sequential minimal optimization — breaks up large QP problems into a series of smallest possible QP problems
** Bilinear program
* Basis pursuit — minimize L1-norm of vector subject to linear constraints
** Basis pursuit denoising (BPDN) — regularized version of basis pursuit
*** In-crowd algorithm — algorithm for solving basis pursuit denoising
* Linear matrix inequality
* Conic optimization
** Semidefinite programming
** Second-order cone programming
** Sum-of-squares optimization
**Quadratic programming (see above)
* Bregman method — row-action method for strictly convex optimization problems
* Proximal gradient method — use splitting of objective function in sum of possibly non-differentiable pieces (see the sketch after this list)
* Subgradient method — extension of steepest descent for problems with a non-differentiable objective function
* Biconvex optimization — generalization where objective function and constraint set can be biconvex
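The proximal-gradient sketch referenced above: ISTA applied to basis pursuit denoising, i.e. minimizing ½‖y − Ax‖₂² + λ‖x‖₁ by alternating a gradient step on the smooth part with soft-thresholding, the proximal operator of the L1 term (all names and data are illustrative):

```python
import numpy as np

def ista(A, y, lam, steps=500):
    """Proximal gradient (ISTA) for basis pursuit denoising:
       min_x 0.5 * ||y - A x||_2^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - y)              # gradient of the least-squares term
        z = x - g / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]
y = A @ x_true
# Should approximately recover the sparse support {5, 40, 77}.
print(np.nonzero(np.round(ista(A, y, lam=0.1), 2))[0])
```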


Nonlinear programming

Nonlinear programming — the most general optimization problem in the usual framework
*Special cases of nonlinear programming:
**See ''Linear programming'' and ''Convex optimization'' above
** Geometric programming — problems involving signomials or posynomials
*** Signomial — similar to polynomials, but exponents need not be integers
*** Posynomial — a signomial with positive coefficients
** Quadratically constrained quadratic program
** Linear-fractional programming — objective is ratio of linear functions, constraints are linear
*** Fractional programming — objective is ratio of nonlinear functions, constraints are linear
** Nonlinear complementarity problem (NCP) — find ''x'' such that ''x'' ≥ 0, ''f''(''x'') ≥ 0 and ''x''^T ''f''(''x'') = 0
** Least squares — the objective function is a sum of squares
*** Non-linear least squares
*** Gauss–Newton algorithm
**** BHHH algorithm — variant of Gauss–Newton in econometrics
**** Generalized Gauss–Newton method — for constrained nonlinear least-squares problems
*** Levenberg–Marquardt algorithm
*** Iteratively reweighted least squares (IRLS) — solves a weighted least-squares problem at every iteration
*** Partial least squares — statistical techniques similar to principal components analysis
**** Non-linear iterative partial least squares (NIPALS)
** Mathematical programming with equilibrium constraints — constraints include variational inequalities or complementarities
**Univariate optimization:
*** Golden section search (see the sketch after this list)
*** Successive parabolic interpolation — based on quadratic interpolation through the last three iterates
*General algorithms:
**Concepts:
*** Descent direction
*** Guess value — the initial guess for a solution with which an algorithm starts
*** Line search
**** Backtracking line search
**** Wolfe conditions
** Gradient method — method that uses the gradient as the search direction
*** Gradient descent
**** Stochastic gradient descent
*** Landweber iteration — mainly used for ill-posed problems
** Successive linear programming (SLP) — replace problem by a linear programming problem, solve that, and repeat
** Sequential quadratic programming (SQP) — replace problem by a quadratic programming problem, solve that, and repeat
** Newton's method in optimization
***See also under ''Newton algorithm'' in the section ''Finding roots of nonlinear equations''
** Nonlinear conjugate gradient method
**Derivative-free methods:
*** Coordinate descent — move in one of the coordinate directions
**** Adaptive coordinate descent — adapt coordinate directions to objective function
**** Random coordinate descent — randomized version
*** Nelder–Mead method
*** Pattern search (optimization)
*** Powell's method — based on conjugate gradient descent
*** Rosenbrock methods — derivative-free method, similar to Nelder–Mead but with guaranteed convergence
** Augmented Lagrangian method — replaces constrained problems by unconstrained problems with a term added to the objective function
** Ternary search
** Tabu search
** Guided Local Search — modification of search algorithms which builds up penalties during a search
**Reactive search optimization (RSO) — the algorithm adapts its parameters automatically
** MM algorithm — majorize-minimization, a wide framework of methods
** Least absolute deviations
*** Expectation–maximization algorithm
**** Ordered subset expectation maximization
** Nearest neighbor search
** Space mapping — uses "coarse" (ideal or low-fidelity) and "fine" (practical or high-fidelity) models
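The golden-section sketch promised in the univariate-optimization entries above; a minimal plain-Python version (the names and tolerance are our own):

```python
import math

def golden_section(f, a, b, tol=1e-8):
    """Golden-section search: shrink the bracket [a, b] around a minimum of a
    unimodal f, reusing one interior function evaluation per iteration."""
    invphi = (math.sqrt(5) - 1) / 2        # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                        # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                              # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2

print(golden_section(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0))  # about 2.0
```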


Optimal control and infinite-dimensional optimization

Optimal control
* Pontryagin's minimum principle — infinite-dimensional version of Lagrange multipliers
** Costate equations — equation for the "Lagrange multipliers" in Pontryagin's minimum principle
** Hamiltonian (control theory) — minimum principle says that this function should be minimized
*Types of problems:
** Linear-quadratic regulator — system dynamics is a linear differential equation, objective is quadratic
** Linear-quadratic-Gaussian control (LQG) — system dynamics is a linear SDE with additive noise, objective is quadratic
*** Optimal projection equations — method for reducing dimension of LQG control problem
* Algebraic Riccati equation — matrix equation occurring in many optimal control problems (see the LQR sketch at the end of this section)
* Bang–bang control — control that switches abruptly between two states
* Covector mapping principle
* Differential dynamic programming — uses locally-quadratic models of the dynamics and cost functions
* DNSS point — initial state for certain optimal control problems with multiple optimal solutions
* Legendre–Clebsch condition — second-order condition for solution of optimal control problem
* Pseudospectral optimal control
** Bellman pseudospectral method — based on Bellman's principle of optimality
** Chebyshev pseudospectral method — uses Chebyshev polynomials (of the first kind)
** Flat pseudospectral method — combines Ross–Fahroo pseudospectral method with differential flatness
** Gauss pseudospectral method — uses collocation at the Legendre–Gauss points
** Legendre pseudospectral method — uses Legendre polynomials
** Pseudospectral knotting method — generalization of pseudospectral methods in optimal control
** Ross–Fahroo pseudospectral method — class of pseudospectral methods including Chebyshev, Legendre and knotting
* Ross–Fahroo lemma — condition to make discretization and duality operations commute
* Ross' π lemma — there is a fundamental time constant within which a control solution must be computed for controllability and stability
* Sethi model — optimal control problem modelling advertising
Infinite-dimensional optimization
* Semi-infinite programming — infinite number of variables and finite number of constraints, or the other way around
* Shape optimization, Topology optimization — optimization over a set of regions
** Topological derivative — derivative with respect to changes in the shape
* Generalized semi-infinite programming — finite number of variables, infinite number of constraints
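A small sketch connecting the linear-quadratic regulator to the algebraic Riccati equation, as promised above. It uses SciPy's solve_continuous_are; the double-integrator data are our own, and for this classic textbook example the optimal gain is known to be K = [1, √3]:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator: x' = A x + B u, with quadratic cost  integral of (x'Qx + u'Ru) dt.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Stabilizing solution of  A'P + PA - P B R^{-1} B' P + Q = 0.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # optimal state feedback u = -K x
print(K)                          # [[1.0, 1.7320508...]], i.e. [1, sqrt(3)]
```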


Uncertainty and randomness

*Approaches to deal with uncertainty:
** Markov decision process
** Partially observable Markov decision process
** Robust optimization
*** Wald's maximin model
** Scenario optimization — constraints are uncertain
** Stochastic approximation
** Stochastic optimization
** Stochastic programming
** Stochastic gradient descent
* Random optimization algorithms:
** Random search — choose a point randomly in ball around current iterate
** Simulated annealing (see the sketch after this list)
*** Adaptive simulated annealing — variant in which the algorithm parameters are adjusted during the computation
*** Great Deluge algorithm
*** Mean field annealing — deterministic variant of simulated annealing
** Bayesian optimization — treats objective function as a random function and places a prior over it
** Evolutionary algorithm
*** Differential evolution
*** Evolutionary programming
*** Genetic algorithm, Genetic programming
**** Genetic algorithms in economics
*** MCACEA (Multiple Coordinated Agents Coevolution Evolutionary Algorithm) — uses an evolutionary algorithm for every agent
*** Simultaneous perturbation stochastic approximation (SPSA)
** Luus–Jaakola
** Particle swarm optimization
** Stochastic tunneling
** Harmony search — mimics the improvisation process of musicians
**see also the section ''Monte Carlo method''
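The simulated-annealing sketch referenced above; a minimal plain-Python version (the cooling schedule, proposal width, and test objective are our own illustrative choices):

```python
import math
import random

def simulated_annealing(f, x0, steps=5000, t0=1.0, scale=0.5, seed=0):
    """Simulated annealing on the real line: propose a random move, always accept
    improvements, accept uphill moves with probability exp(-delta/T), and cool T."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9          # simple linear cooling schedule
        y = x + rng.gauss(0.0, scale)            # random proposal near x
        fy = f(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
    return best

# Rugged 1-D objective with many local minima; the global minimum is at x = 0.
f = lambda x: x * x + 10 * (1 - math.cos(2 * math.pi * x))
print(simulated_annealing(f, x0=4.5))   # typically lands near 0
```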


Theoretical aspects

* Convex analysis — the study of convex functions: ''f'' such that ''f''(''tx'' + (1 − ''t'')''y'') ≤ ''tf''(''x'') + (1 − ''t'')''f''(''y'') for ''t'' ∈ [0,1]
** Pseudoconvex function — function ''f'' such that ∇''f'' · (''y'' − ''x'') ≥ 0 implies ''f''(''y'') ≥ ''f''(''x'')
** Quasiconvex function — function ''f'' such that ''f''(''tx'' + (1 − ''t'')''y'') ≤ max(''f''(''x''), ''f''(''y'')) for ''t'' ∈ [0,1]
** Subderivative
** Geodesic convexity — convexity for functions defined on a Riemannian manifold
* Duality (optimization)
** Weak duality — dual solution gives a bound on the primal solution
** Strong duality — primal and dual solutions are equivalent (see the numerical check after this list)
** Shadow price
** Dual cone and polar cone
** Duality gap — difference between primal and dual solution
** Fenchel's duality theorem — relates minimization problems with maximization problems of convex conjugates
** Perturbation function — any function which relates to primal and dual problems
** Slater's condition — sufficient condition for strong duality to hold in a convex optimization problem
** Total dual integrality — concept of duality for integer linear programming
** Wolfe duality — for when objective function and constraints are differentiable
* Farkas' lemma
* Karush–Kuhn–Tucker conditions (KKT) — first-order necessary conditions for a solution to be optimal (sufficient under convexity)
** Fritz John conditions — variant of KKT conditions
* Lagrange multiplier
** Lagrange multipliers on Banach spaces
* Semi-continuity
* Complementarity theory — study of problems with constraints of the form ⟨''u'', ''v''⟩ = 0
** Mixed complementarity problem
*** Mixed linear complementarity problem
*** Lemke's algorithm — method for solving (mixed) linear complementarity problems
* Danskin's theorem — used in the analysis of minimax problems
* Maximum theorem — the maximum and maximizer are continuous as function of parameters, under some conditions
* No free lunch in search and optimization
* Relaxation (approximation) — approximating a given problem by an easier problem by relaxing some constraints
** Lagrangian relaxation
** Linear programming relaxation — ignoring the integrality constraints in an integer linear programming problem
* Self-concordant function
* Reduced cost — cost for increasing a variable by a small amount
* Hardness of approximation — computational complexity of getting an approximate solution
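The numerical check promised at the strong-duality entry: a toy primal-dual LP pair solved with SciPy's linprog. Weak duality makes every dual-feasible value a lower bound on the primal; for linear programs, strong duality makes the two optima coincide (the data are our own):

```python
from scipy.optimize import linprog

# Primal:  min c'x  s.t.  Ax >= b, x >= 0.
# Dual:    max b'y  s.t.  A'y <= c, y >= 0.
c = [2.0, 3.0]
A = [[1.0, 1.0], [1.0, 2.0]]
b = [3.0, 4.0]

# linprog handles "<=" constraints, so Ax >= b is rewritten as -Ax <= -b.
primal = linprog(c, A_ub=[[-a for a in row] for row in A], b_ub=[-bi for bi in b])
# The dual maximizes b'y, i.e. minimizes -b'y, subject to (A transpose) y <= c.
dual = linprog([-bi for bi in b], A_ub=list(map(list, zip(*A))), b_ub=c)

print(primal.fun, -dual.fun)   # both equal 7.0: the optima coincide
```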


Applications

*In geometry: **
Geometric median In geometry, the geometric median of a discrete set of sample points in a Euclidean space is the point minimizing the sum of distances to the sample points. This generalizes the median, which has the property of minimizing the sum of distance ...
— the point minimizing the sum of distances to a given set of points (a Weiszfeld-iteration sketch appears at the end of this section) **
Chebyshev center In geometry, the Chebyshev center of a bounded set Q having non-empty interior is the center of the minimal-radius ball enclosing the entire set Q, or alternatively (and non-equivalently) the center of largest inscribed ball of Q. In the field ...
— the centre of the smallest ball containing a given set of points *In statistics: ** Iterated conditional modes — maximizing joint probability of Markov random field **
Response surface methodology In statistics, response surface methodology (RSM) explores the relationships between several explanatory variables and one or more response variables. The method was introduced by George E. P. Box and K. B. Wilson in 1951. The main idea of RSM ...
— used in the design of experiments *
Automatic label placement Automatic label placement, sometimes called text placement or name placement, comprises the computer methods of placing labels automatically on a map or chart. This is related to the typographic design of such labels. The typical features depicted ...
*
Compressed sensing Compressed sensing (also known as compressive sensing, compressive sampling, or sparse sampling) is a signal processing technique for efficiently acquiring and reconstructing a signal, by finding solutions to underdetermined linear systems. This ...
— reconstruct a signal from knowledge that it is sparse or compressible * Cutting stock problem * Demand optimization *
Destination dispatch Destination dispatch is an optimization technique used for multi-elevator installations, in which groups of passengers heading to the same destinations use the same elevators, thereby reducing waiting and travel times. Comparatively, the traditional ...
— an optimization technique for dispatching elevators * Energy minimization * Entropy maximization *
Highly optimized tolerance In applied mathematics, highly optimized tolerance (HOT) is a method of generating power law behavior in systems by including a global optimization principle. It was developed by Jean M. Carlson and John Doyle in the early 2000s. For some syst ...
*
Hyperparameter optimization In machine learning, hyperparameter optimization or tuning is the problem of choosing a set of optimal hyperparameters for a learning algorithm. A hyperparameter is a parameter whose value is used to control the learning process. By contrast, the ...
* Inventory control problem **
Newsvendor model The newsvendor (or newsboy or single-periodWilliam J. Stevenson, Operations Management. 10th edition, 2009; page 581 or salvageable) model is a mathematical model in operations management and applied economics used to determine optimal inventory l ...
**
Extended newsvendor model Extended newsvendor models are variations of the classic newsvendor model involving production capacity constraints, multiple products, multiple production cycles, demand dependent selling price, etc. that appear in the Operations management liter ...
**
Assemble-to-order system
* Linear programming decoding *
Linear search problem In computational complexity theory, the linear search problem is an optimal search problem introduced by Richard E. Bellman and independently considered by Anatole Beck. The problem "An immobile hider is located on the real line according to a ...
— find a point on a line by moving along the line * Low-rank approximation — find the best approximation, subject to the constraint that the rank of some matrix is smaller than a given number * Meta-optimization — optimization of the parameters in an optimization method *
Multidisciplinary design optimization Multi-disciplinary design optimization (MDO) is a field of engineering that uses optimization methods to solve design problems incorporating a number of disciplines. It is also known as multidisciplinary system design optimization (MSDO), and Multi ...
* Optimal computing budget allocation — maximize the overall simulation efficiency for finding an optimal decision *
Paper bag problem In geometry, the paper bag problem or teabag problem is to calculate the maximum possible inflated volume of an initially flat sealed rectangular bag which has the same shape as a cushion or pillow, made out of two pieces of material which can b ...
*
Process optimization Process optimization is the discipline of adjusting a process so as to optimize (make the best or most effective use of) some specified set of parameters without violating some constraint. The most common goals are minimizing cost and maximizing ...
* Recursive economics — individuals make a series of two-period optimization decisions over time. *
Stigler diet The Stigler diet is an optimization problem named for George Stigler, a 1982 Nobel Laureate in economics, who posed the following problem: The nutrient RDAs required to be met in Stigler’s experiment were calories, protein, calcium, iron, as well ...
* Space allocation problem *
Stress majorization Stress majorization is an optimization strategy used in multidimensional scaling (MDS) where, for a set of ''n'' ''m''-dimensional data items, a configuration ''X'' of n points in ''r (\ll m)''-dimensional space is sought that minimizes the so-ca ...
*
Trajectory optimization Trajectory optimization is the process of designing a trajectory that minimizes (or maximizes) some measure of performance while satisfying a set of constraints. Generally speaking, trajectory optimization is a technique for computing an open-loop ...
* Transportation theory * Wing-shape optimization
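As a small illustration of the geometric median entry above, the following Python sketch (NumPy assumed; a minimal version of the classical Weiszfeld iteration, without the safeguards a robust implementation needs) re-weights the sample points by inverse distance until the estimate stabilizes.

# Minimal sketch of Weiszfeld's algorithm for the geometric median:
# iteratively re-weight the points by the inverse of their distance to
# the current estimate. No special handling for an iterate landing
# exactly on a data point.
import numpy as np

def geometric_median(points, iters=100, tol=1e-9):
    x = points.mean(axis=0)                # start from the centroid
    for _ in range(iters):
        d = np.linalg.norm(points - x, axis=1)
        w = 1.0 / np.maximum(d, 1e-12)     # inverse-distance weights
        x_new = (points * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
print(geometric_median(pts))  # pulled toward the cluster near the origin, unlike the mean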


Miscellaneous

*
Combinatorial optimization Combinatorial optimization is a subfield of mathematical optimization that consists of finding an optimal object from a finite set of objects, where the set of feasible solutions is discrete or can be reduced to a discrete set. Typical combi ...
*
Dynamic programming Dynamic programming is both a mathematical optimization method and a computer programming method. The method was developed by Richard Bellman in the 1950s and has found applications in numerous fields, from aerospace engineering to economics. I ...
**
Bellman equation A Bellman equation, named after Richard E. Bellman, is a necessary condition for optimality associated with the mathematical optimization method known as dynamic programming. It writes the "value" of a decision problem at a certain point in time ...
** Hamilton–Jacobi–Bellman equation — continuous-time analogue of Bellman equation **
Backward induction Backward induction is the process of reasoning backwards in time, from the end of a problem or situation, to determine a sequence of optimal actions. It proceeds by examining the last point at which a decision is to be made and then identifying wha ...
— solving dynamic programming problems by reasoning backwards in time ** Optimal stopping — choosing the optimal time to take a particular action *** Odds algorithm ***
Robbins' problem In probability theory, Robbins' problem of optimal stopping, named after Herbert Robbins, is sometimes referred to as the fourth secretary problem or the problem of minimizing the expected rank with full information. Its statement is as follows. ...
*
Global optimization Global optimization is a branch of applied mathematics and numerical analysis that attempts to find the global minima or maxima of a function or a set of functions on a given set. It is usually described as a minimization problem because the max ...
: ** BRST algorithm **
MCS algorithm For mathematical optimization, Multilevel Coordinate Search (MCS) is an efficient algorithm for bound-constrained global optimization ...
*
Multi-objective optimization Multi-objective optimization (also known as multi-objective programming, vector optimization, multicriteria optimization, multiattribute optimization or Pareto optimization) is an area of multiple criteria decision making that is concerned with ...
— there are multiple conflicting objectives ** Benson's algorithm — for linear vector optimization problems * Bilevel optimization — studies problems in which one problem is embedded in another *
Optimal substructure In computer science, a problem is said to have optimal substructure if an optimal solution can be constructed from optimal solutions of its subproblems. This property is used to determine the usefulness of greedy algorithms for a problem. ...
* Dykstra's projection algorithm — finds a point in the intersection of two convex sets *Algorithmic concepts: ** Barrier function ** Penalty method ** Trust region *
Test functions for optimization In applied mathematics, test functions, known as artificial landscapes, are useful to evaluate characteristics of optimization algorithms, such as: * Convergence rate. * Precision. * Robustness. * General performance. Here some test functions are ...
: **
Rosenbrock function In mathematical optimization, the Rosenbrock function is a non-convex function, introduced by Howard H. Rosenbrock in 1960, which is used as a performance test problem for optimization algorithms. It is also known as Rosenbrock's valley or Rose ...
— two-dimensional function with a banana-shaped valley **
Himmelblau's function In mathematical optimization, Himmelblau's function is a multi-modal function, used to test the performance of optimization algorithms. The function is defined by: : f(x, y) = (x^2+y-11)^2 + (x+y^2-7)^2.\quad It has one local maximum at x = - ...
— two-dimensional function with four local minima, defined by f(x, y) = (x^2+y-11)^2 + (x+y^2-7)^2 ** Rastrigin function — two-dimensional function with many local minima ** Shekel function — multimodal and multidimensional (a short minimization sketch for two of these test functions appears after this list) * Mathematical Optimization Society
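The test functions above are easy to experiment with. A minimal Python sketch, assuming SciPy is available, that evaluates and locally minimizes the Rosenbrock and Himmelblau functions:

# Minimal sketch: run a local optimizer on two standard test functions.
# The Rosenbrock minimum is at (1, 1) with value 0; Himmelblau's function
# has four local minima, all with value 0.
import numpy as np
from scipy.optimize import minimize

def rosenbrock(v):
    x, y = v
    return (1 - x)**2 + 100 * (y - x**2)**2

def himmelblau(v):
    x, y = v
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2

print(minimize(rosenbrock, x0=[-1.2, 1.0], method="Nelder-Mead").x)  # ~ [1, 1]
print(minimize(himmelblau, x0=[0.0, 0.0], method="Nelder-Mead").x)   # one of the four minima

The banana-shaped valley of the Rosenbrock function makes progress along the valley floor slow, which is precisely why it is used as a performance test.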


Numerical quadrature (integration)

Numerical integration In analysis, numerical integration comprises a broad family of algorithms for calculating the numerical value of a definite integral, and by extension, the term is also sometimes used to describe the numerical solution of differential equations ...
— the numerical evaluation of an integral *
Rectangle method In mathematics, a Riemann sum is a certain kind of approximation of an integral by a finite sum. It is named after nineteenth century German mathematician Bernhard Riemann. One very common application is approximating the area of functions or l ...
— first-order method, based on (piecewise) constant approximation *
Trapezoidal rule In calculus, the trapezoidal rule (also known as the trapezoid rule or trapezium rule; see Trapezoid for more information on terminology) is a technique for approximating the definite integral. \int_a^b f(x) \, dx. The trapezoidal rule works b ...
— second-order method, based on (piecewise) linear approximation *
Simpson's rule In numerical integration, Simpson's rules are several approximations for definite integrals, named after Thomas Simpson (1710–1761). The most basic of these rules, called Simpson's 1/3 rule, or just Simpson's rule, reads \int_a^b f(x) ...
— fourth-order method, based on (piecewise) quadratic approximation ** Adaptive Simpson's method * Boole's rule — sixth-order method, based on the values at five equidistant points * Newton–Cotes formulas — generalize the above methods *
Romberg's method In numerical analysis, Romberg's method is used to estimate the definite integral \int_a^b f(x) \, dx by applying Richardson extrapolation repeatedly on the trapezium rule or the rectangle rule (midpoint rule). The estimates generate a triang ...
— Richardson extrapolation applied to trapezium rule *
Gaussian quadrature In numerical analysis, a quadrature rule is an approximation of the definite integral of a function, usually stated as a weighted sum of function values at specified points within the domain of integration. (See numerical integration for m ...
— highest possible degree with given number of points (a short sketch comparing several rules appears at the end of this section) ** Chebyshev–Gauss quadrature — extension of Gaussian quadrature for integrals with weight (1 − ''x''^2)^(±1/2) on [−1, 1] ** Gauss–Hermite quadrature — extension of Gaussian quadrature for integrals with weight exp(−''x''^2) on (−∞, ∞) ** Gauss–Jacobi quadrature — extension of Gaussian quadrature for integrals with weight (1 − ''x'')^''α'' (1 + ''x'')^''β'' on [−1, 1] **
Gauss–Laguerre quadrature In numerical analysis Gauss–Laguerre quadrature (named after Carl Friedrich Gauss and Edmond Laguerre) is an extension of the Gaussian quadrature method for approximating the value of integrals of the following kind: :\int_^ e^ f(x)\,dx. In thi ...
— extension of Gaussian quadrature for integrals with weight exp(−''x'') on
[0, ∞)
**
Gauss–Kronrod quadrature formula The Gauss–Kronrod quadrature formula is an adaptive method for numerical integration. It is a variant of Gaussian quadrature, in which the evaluation points are chosen so that an accurate approximation can be computed by re-using the information ...
— nested rule based on Gaussian quadrature ** Gauss–Kronrod rules *
Tanh-sinh quadrature Tanh-sinh quadrature is a method for numerical integration introduced by Hidetoshi Takahashi and Masatake Mori in 1974. It is especially applied where singularities or infinite derivatives exist at one or both endpoints. The method uses hyperbolic ...
— variant of Gaussian quadrature which works well with singularities at the end points * Clenshaw–Curtis quadrature — based on expanding the integrand in terms of Chebyshev polynomials * Adaptive quadrature — adapting the subintervals in which the integration interval is divided depending on the integrand *
Monte Carlo integration In mathematics, Monte Carlo integration is a technique for numerical integration using random numbers. It is a particular Monte Carlo method that numerically computes a definite integral. While other algorithms usually evaluate the integrand at ...
— takes random samples of the integrand **''See also'' the Monte Carlo method section below *
Quantized state systems method The quantized state systems (QSS) methods are a family of numerical integration solvers based on the idea of state quantization, dual to the traditional idea of time discretization. Unlike traditional numerical solution methods, which approach the ...
(QSS) — based on the idea of state quantization *
Lebedev quadrature In numerical analysis, Lebedev quadrature, named after Vyacheslav Ivanovich Lebedev, is an approximation to the surface integral of a function over a three-dimensional sphere. The grid is constructed so to have octahedral rotation and inversion sy ...
— uses a grid on a sphere with octahedral symmetry *
Sparse grid Sparse grids are numerical techniques to represent, integrate or interpolate high dimensional functions. They were originally developed by the Russian mathematician Sergey A. Smolyak, a student of Lazar Lyusternik, and are based on a sparse tens ...
*
Coopmans approximation The Coopmans approximation is a method for approximating a fractional-order integrator in a continuous process with constant space complexity. The most correct and accurate methods for calculating the fractional integral require a record of all p ...
*
Numerical differentiation In numerical analysis, numerical differentiation algorithms estimate the derivative of a mathematical function or function subroutine using values of the function and perhaps other knowledge about the function. Finite differences The simplest ...
— estimating the derivative of a function using values of the function ** Numerical smoothing and differentiation **
Adjoint state method The adjoint state method is a numerical method for efficiently computing the gradient of a function or operator in a numerical optimization problem. It has applications in geophysics, seismic imaging, photonics and more recently in neural ne ...
— approximates gradient of a function in an optimization problem *
Euler–Maclaurin formula In mathematics, the Euler–Maclaurin formula is a formula for the difference between an integral and a closely related sum. It can be used to approximate integrals by finite sums, or conversely to evaluate finite sums and infinite series using ...
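Several of the quadrature rules above can be compared on a simple integral. A minimal Python sketch (NumPy only; illustrative, not taken from the linked articles) estimates the integral of exp(x) over [0, 1], whose exact value is e - 1, using the composite trapezoidal rule, composite Simpson's rule, a 5-point Gauss–Legendre rule, and a crude Monte Carlo estimate:

# Minimal sketch of several quadrature rules on int_0^1 exp(x) dx = e - 1.
import numpy as np

f, exact = np.exp, np.e - 1.0
x = np.linspace(0.0, 1.0, 101)            # 100 subintervals
h = x[1] - x[0]

# Composite trapezoid and composite Simpson (even number of subintervals):
trap = h * (f(x[0]) / 2 + f(x[1:-1]).sum() + f(x[-1]) / 2)
simp = h / 3 * (f(x[0]) + 4 * f(x[1:-1:2]).sum() + 2 * f(x[2:-1:2]).sum() + f(x[-1]))

# 5-point Gauss-Legendre, mapped from [-1, 1] to [0, 1]:
nodes, weights = np.polynomial.legendre.leggauss(5)
gauss = 0.5 * np.dot(weights, f(0.5 * (nodes + 1.0)))

# Monte Carlo: mean of f at uniform random samples.
rng = np.random.default_rng(0)
mc = f(rng.random(10_000)).mean()

for name, val in [("trapezoid", trap), ("Simpson", simp), ("Gauss", gauss), ("Monte Carlo", mc)]:
    print(f"{name:12s} error = {abs(val - exact):.2e}")

The Gauss rule is by far the most accurate per function evaluation, while the Monte Carlo error decays only like the inverse square root of the sample count.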


Numerical methods for ordinary differential equations

Numerical methods for ordinary differential equations — the numerical solution of ordinary differential equations (ODEs) *
Euler method In mathematics and computational science, the Euler method (also called forward Euler method) is a first-order numerical procedure for solving ordinary differential equations (ODEs) with a given initial value. It is the most basic explicit m ...
— the most basic method for solving an ODE (a minimal sketch, alongside a Runge–Kutta comparison, appears at the end of this section) *
Explicit and implicit methods Explicit and implicit methods are approaches used in numerical analysis for obtaining numerical approximations to the solutions of time-dependent ordinary and partial differential equations, as is required in computer simulations of physical proc ...
— implicit methods need to solve an equation at every step *
Backward Euler method In numerical analysis and scientific computing, the backward Euler method (or implicit Euler method) is one of the most basic numerical methods for the solution of ordinary differential equations. It is similar to the (standard) Euler method, but d ...
— implicit variant of the Euler method *
Trapezoidal rule In calculus, the trapezoidal rule (also known as the trapezoid rule or trapezium rule; see Trapezoid for more information on terminology) is a technique for approximating the definite integral. \int_a^b f(x) \, dx. The trapezoidal rule works b ...
— second-order implicit method *
Runge–Kutta methods In numerical analysis, the Runge–Kutta methods ( ) are a family of implicit and explicit iterative methods, which include the Euler method, used in temporal discretization for the approximate solutions of simultaneous nonlinear equations. Th ...
— one of the two main classes of methods for initial-value problems **
Midpoint method In numerical analysis, a branch of applied mathematics, the midpoint method is a one-step method for numerically solving the differential equation, : y'(t) = f(t, y(t)), \quad y(t_0) = y_0 . The explicit midpoint method is given by the formula ...
— a second-order method with two stages ** Heun's method — either a second-order method with two stages, or a third-order method with three stages ** Bogacki–Shampine method — a third-order method with four stages (FSAL) and an embedded second-order method ** Cash–Karp method — a fifth-order method with six stages and an embedded fourth-order method ** Dormand–Prince method — a fifth-order method with seven stages (FSAL) and an embedded fourth-order method ** Runge–Kutta–Fehlberg method — a fifth-order method with six stages and an embedded fourth-order method ** Gauss–Legendre method — family of A-stable methods with optimal order based on Gaussian quadrature ** Butcher group — algebraic formalism involving rooted trees for analysing Runge–Kutta methods ** List of Runge–Kutta methods *
Linear multistep method Linear multistep methods are used for the numerical solution of ordinary differential equations. Conceptually, a numerical method starts from an initial point and then takes a short step forward in time to find the next solution point. The proce ...
— the other main class of methods for initial-value problems **
Backward differentiation formula The backward differentiation formula (BDF) is a family of implicit methods for the numerical integration of ordinary differential equations. They are linear multistep methods that, for a given function and time, approximate the derivative of that f ...
— implicit methods of order 2 to 6; especially suitable for stiff equations ** Numerov's method — fourth-order method for equations of the form y'' = f(t,y) ** Predictor–corrector method — uses one method to approximate the solution and another one to increase accuracy * General linear methods — a class of methods encapsulating linear multistep and Runge–Kutta methods * Bulirsch–Stoer algorithm — combines the midpoint method with Richardson extrapolation to attain arbitrary order *
Exponential integrator Exponential integrators are a class of numerical methods for the solution of ordinary differential equations, specifically initial value problems. This large class of methods from numerical analysis is based on the exact integration of the linear ...
— based on splitting the ODE into a linear part, which is solved exactly, and a nonlinear part *Methods designed for the solution of ODEs from classical physics: ** Newmark-beta method — based on the extended mean-value theorem **
Verlet integration Verlet integration () is a numerical method used to integrate Newton's equations of motion. It is frequently used to calculate trajectories of particles in molecular dynamics simulations and computer graphics. The algorithm was first used in 1791 ...
— a popular second-order method **
Leapfrog integration In numerical analysis, leapfrog integration is a method for numerically integrating differential equations of the form \ddot x = \frac = A(x), or equivalently of the form \dot v = \frac = A(x), \;\dot x = \frac = v, particularly in the case of a ...
— another name for Verlet integration ** Beeman's algorithm — a two-step method extending the Verlet method ** Dynamic relaxation *
Geometric integrator In the mathematical field of numerical ordinary differential equations, a geometric integrator is a numerical method that preserves geometric properties of the exact flow of a differential equation. Pendulum example We can motivate the study of ge ...
— a method that preserves some geometric structure of the equation **
Symplectic integrator In mathematics, a symplectic integrator (SI) is a numerical integration scheme for Hamiltonian systems. Symplectic integrators form the subclass of geometric integrators which, by definition, are canonical transformations. They are widely used in ...
— a method for the solution of Hamilton's equations that preserves the symplectic structure ***
Variational integrator Variational integrators are numerical integrators for Hamiltonian systems derived from the Euler–Lagrange equations of a discretized Hamilton's principle. Variational integrators are momentum-preserving and symplectic. Derivation of a simpl ...
— symplectic integrators derived using the underlying variational principle *** Semi-implicit Euler method — variant of Euler method which is symplectic when applied to separable Hamiltonians **
Energy drift In computer simulations of mechanical systems, energy drift is the gradual change in the total energy of a closed system over time. According to the laws of mechanics, the energy should be a constant of motion and should not change. However, in sim ...
— phenomenon that energy, which should be conserved, drifts away due to numerical errors *Other methods for initial value problems (IVPs): ** Bi-directional delay line ** Partial element equivalent circuit *Methods for solving two-point boundary value problems (BVPs): **
Shooting method In numerical analysis, the shooting method is a method for solving a boundary value problem by reducing it to an initial value problem. It involves finding solutions to the initial value problem for different initial conditions until one finds the ...
** Direct multiple shooting method — divides the interval into several subintervals and applies the shooting method on each subinterval *Methods for solving differential-algebraic equations (DAEs), i.e., ODEs with constraints: **
Constraint algorithm In computational chemistry, a constraint algorithm is a method for satisfying the Newtonian motion of a rigid body which consists of mass points. A restraint algorithm is used to ensure that the distance between mass points is maintained. The gene ...
— for solving Newton's equations with constraints ** Pantelides algorithm — for reducing the index of a DAE *Methods for solving stochastic differential equations (SDEs): ** Euler–Maruyama method — generalization of the Euler method for SDEs ** Milstein method — a method with strong order one ** Runge–Kutta method (SDE) — generalization of the family of Runge–Kutta methods for SDEs *Methods for solving integral equations: ** Nyström method — replaces the integral with a quadrature rule *Analysis: ** Truncation error (numerical integration) — local and global truncation errors, and their relationships *** Lady Windermere's Fan (mathematics) — telescopic identity relating local and global truncation errors *
Stiff equation In mathematics, a stiff equation is a differential equation for which certain numerical methods for solving the equation are numerically unstable, unless the step size is taken to be extremely small. It has proven difficult to formulate a precise ...
— roughly, an ODE for which unstable methods need a very short step size, but stable methods do not ** L-stability — method is A-stable and its stability function vanishes at infinity * Adaptive stepsize — automatically changing the step size when that seems advantageous * Parareal — a parallel-in-time integration algorithm
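A minimal Python sketch (plain NumPy, fixed step size, no error control; illustrative only) contrasting the forward Euler method with the classical fourth-order Runge–Kutta method on y' = -y, y(0) = 1, whose exact solution is exp(-t):

# Forward Euler vs. classical RK4 on y' = -y, integrated from t = 0 to t = 1.
import numpy as np

def euler(f, y0, t0, t1, n):
    h, y, t = (t1 - t0) / n, y0, t0
    for _ in range(n):
        y, t = y + h * f(t, y), t + h
    return y

def rk4(f, y0, t0, t1, n):
    h, y, t = (t1 - t0) / n, y0, t0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y, t = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4), t + h
    return y

f = lambda t, y: -y
exact = np.exp(-1.0)
for n in (10, 20, 40):
    e1 = abs(euler(f, 1.0, 0.0, 1.0, n) - exact)  # halves when n doubles (order 1)
    e4 = abs(rk4(f, 1.0, 0.0, 1.0, n) - exact)    # drops ~16x when n doubles (order 4)
    print(n, e1, e4)

Doubling the number of steps halves the Euler error but reduces the RK4 error by roughly a factor of 16, matching their orders of accuracy.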


Numerical methods for partial differential equations

Numerical partial differential equations Numerical methods for partial differential equations is the branch of numerical analysis that studies the numerical solution of partial differential equations (PDEs). In principle, specialized methods for hyperbolic, parabolic or elliptic part ...
— the numerical solution of partial differential equations (PDEs)


Finite difference methods

Finite difference method In numerical analysis, finite-difference methods (FDM) are a class of numerical techniques for solving differential equations by approximating derivatives with finite differences. Both the spatial domain and time interval (if applicable) are dis ...
— based on approximating differential operators with difference operators *
Finite difference A finite difference is a mathematical expression of the form . If a finite difference is divided by , one gets a difference quotient. The approximation of derivatives by finite differences plays a central role in finite difference methods for t ...
— the discrete analogue of a differential operator ** Finite difference coefficient — table of coefficients of finite-difference approximations to derivatives **
Discrete Laplace operator In mathematics, the discrete Laplace operator is an analog of the continuous Laplace operator, defined so that it has meaning on a graph or a discrete grid. For the case of a finite-dimensional graph (having a finite number of edges and vertices) ...
— finite-difference approximation of the Laplace operator (used in the heat-equation sketch at the end of this section) ***
Eigenvalues and eigenvectors of the second derivative Explicit formulas for eigenvalues and eigenvectors of the second derivative with different boundary conditions are provided both for the continuous and discrete cases. In the discrete case, the standard central difference approximation of the s ...
— includes eigenvalues of discrete Laplace operator *** Kronecker sum of discrete Laplacians — used for Laplace operator in multiple dimensions ** Discrete Poisson equation — discrete analogue of the Poisson equation using the discrete Laplace operator * Stencil (numerical analysis) — the geometric arrangements of grid points affected by a basic step of the algorithm ** Compact stencil — stencil which only uses a few grid points, usually only the immediate and diagonal neighbours *** Higher-order compact finite difference scheme ** Non-compact stencil — any stencil that is not compact ** Five-point stencil — two-dimensional stencil consisting of a point and its four immediate neighbours on a rectangular grid *Finite difference methods for heat equation and related PDEs: ** FTCS scheme (forward-time central-space) — first-order explicit **
Crank–Nicolson method In numerical analysis, the Crank–Nicolson method is a finite difference method used for numerically solving the heat equation and similar partial differential equations. It is a second-order method in time. It is implicit in time, can be wri ...
— second-order implicit *Finite difference methods for hyperbolic PDEs like the wave equation: ** Lax–Friedrichs method — first-order explicit ** Lax–Wendroff method — second-order explicit ** MacCormack method — second-order explicit ** Upwind scheme ***
Upwind differencing scheme for convection The upwind differencing scheme is a method used in numerical methods in computational fluid dynamics for convection– diffusion problems. This scheme is specific for Peclet number greater than 2 or less than −2 Description By taking ...
— first-order scheme for convection–diffusion problems ** Lax–Wendroff theorem — conservative scheme for hyperbolic system of conservation laws converges to the weak solution *
Alternating direction implicit method In numerical linear algebra, the alternating-direction implicit (ADI) method is an iterative method used to solve Sylvester matrix equations. It is a popular method for solving the large matrix equations that arise in systems theory and control, ...
(ADI) — update using the flow in ''x''-direction and then using flow in ''y''-direction *Nonstandard finite difference scheme *Specific applications: **Finite difference methods for option pricing **Finite-difference time-domain method — a finite-difference method for electrodynamics
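A minimal Python sketch of the FTCS scheme above for the one-dimensional heat equation, built from the three-point discrete Laplacian; the restriction r = dt/dx^2 <= 1/2 is the stability condition derived by von Neumann analysis in the analysis section below. NumPy assumed; illustrative only.

# FTCS for u_t = u_xx on [0, 1] with homogeneous Dirichlet boundaries.
# Stable only for r = dt/dx^2 <= 1/2.
import numpy as np

nx, r, steps = 51, 0.4, 200
dx = 1.0 / (nx - 1)
dt = r * dx**2
x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)                  # exact solution: exp(-pi^2 t) sin(pi x)

for _ in range(steps):
    lap = u[:-2] - 2 * u[1:-1] + u[2:]     # discrete Laplacian times dx^2
    u[1:-1] += r * lap                     # FTCS update; boundary values stay 0

exact = np.exp(-np.pi**2 * steps * dt) * np.sin(np.pi * x)
print(np.abs(u - exact).max())         # small; blows up if r > 0.5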


Finite element methods, gradient discretisation methods

Finite element method — based on a discretization of the space of solutions (a one-dimensional sketch appears at the end of this section) *Gradient discretisation method — based on both the discretization of the solution and of its gradient *Finite element method in structural mechanics — a physical approach to finite element methods *Galerkin method — a finite element method in which the residual is orthogonal to the finite element space **Discontinuous Galerkin method — a Galerkin method in which the approximate solution is not continuous *Rayleigh–Ritz method — a finite element method based on variational principles *Spectral element method — high-order finite element methods *hp-FEM — variant in which both the size and the order of the elements are automatically adapted *Examples of finite elements: **Bilinear quadrilateral element — also known as the Q4 element **Constant strain triangle element (CST) — also known as the T3 element **Quadratic quadrilateral element — also known as the Q8 element **Barsoum elements *Direct stiffness method — a particular implementation of the finite element method, often used in structural analysis *Trefftz method *Finite element updating *Extended finite element method — puts functions tailored to the problem in the approximation space *Functionally graded elements — elements for describing functionally graded materials *Superelement — particular grouping of finite elements, employed as a single element *
Interval finite element In numerical analysis, the interval finite element method (interval FEM) is a finite element method that uses interval parameters. Interval FEM can be applied in situations where it is not possible to get reliable probabilistic characteristics ...
method — combination of finite elements with interval arithmetic *Discrete exterior calculus — discrete form of the exterior calculus of differential geometry *Modal analysis using FEM — solution of eigenvalue problems to find natural vibrations *Céa's lemma — solution in the finite-element space is an almost best approximation in that space of the true solution *Patch test (finite elements) — simple test for the quality of a finite element *MAFELAP (MAthematics of Finite ELements and APplications) — international conference held at Brunel University *NAFEMS — not-for-profit organisation that sets and maintains standards in computer-aided engineering analysis *Multiphase topology optimisation — technique based on finite elements for determining optimal composition of a mixture
*Applied element method — for simulation of cracks and structural collapse *Wood–Armer method — structural analysis method based on finite elements used to design reinforcement for concrete slabs *Isogeometric analysis — integrates finite elements into conventional NURBS-based CAD design tools *Loubignac iteration *Stiffness matrix — finite-dimensional analogue of differential operator *Combination with meshfree methods: **Weakened weak form — form of a PDE that is weaker than the standard weak form **G space — functional space used in formulating the weakened weak form **Smoothed finite element method *Variational multiscale method *List of finite element software packages
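A minimal Python sketch of a one-dimensional Galerkin finite element solve of -u'' = f on (0, 1) with u(0) = u(1) = 0, using piecewise-linear "hat" functions on a uniform mesh. NumPy assumed; the stiffness matrix is written down directly rather than assembled element by element, and for f = 1 the exact solution is u(x) = x(1 - x)/2.

# 1D linear FEM for -u'' = 1 on (0, 1) with homogeneous Dirichlet conditions.
import numpy as np

n = 20                                   # number of elements
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

# Stiffness matrix for the interior hat functions: tridiagonal with
# 2/h on the diagonal and -1/h on the off-diagonals.
K = (np.diag(2.0 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
     - np.diag(np.ones(n - 2), -1)) / h
F = np.full(n - 1, h)                    # load vector: each hat integrates to h

u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(K, F)
exact = x * (1 - x) / 2
print(np.abs(u - exact).max())           # for this 1D problem the nodal values are exact up to roundoff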


Other methods

*Spectral method — based on the Fourier transformation **Pseudo-spectral method *Method of lines — reduces the PDE to a large system of ordinary differential equations *Boundary element method (BEM) — based on transforming the PDE to an integral equation on the boundary of the domain **
Interval boundary element method Interval boundary element method is the classical boundary element method with interval parameters. The boundary element method is based on the following integral equation: c\cdot u=\int_{\partial\Omega}\left(G\frac{\partial u}{\partial n} - \frac{\partial G}{\partial n}u\right)dS. The exact interval sol ...
— a version using interval arithmetic *Analytic element method — similar to the boundary element method, but the integral equation is evaluated analytically *Finite volume method — based on dividing the domain into many small subdomains; popular in computational fluid dynamics **Godunov's scheme — first-order conservative scheme for fluid flow, based on piecewise constant approximation (a first-order upwind sketch appears at the end of this section) **MUSCL scheme — second-order variant of Godunov's scheme **AUSM — advection upstream splitting method **Flux limiter — limits spatial derivatives (fluxes) in order to avoid spurious oscillations **Riemann solver — a solver for Riemann problems (a conservation law with piecewise constant data) **Properties of discretization schemes — finite volume methods can be conservative, bounded, etc. *Discrete element method — a method in which the elements can move freely relative to each other **Extended discrete element method — adds properties such as strain to each particle **Movable cellular automaton — combination of cellular automata with discrete elements *Meshfree methods — do not use a mesh, but use a particle view of the field **Discrete least squares meshless method — based on minimization of weighted summation of the squared residual **Diffuse element method **Finite pointset method — represent continuum by a point cloud **Moving Particle Semi-implicit Method **Method of fundamental solutions (MFS) — represents solution as linear combination of fundamental solutions **Variants of MFS with source points on the physical boundary: ***Boundary knot method (BKM) ***Boundary particle method (BPM) ***Regularized meshless method (RMM) ***Singular boundary method (SBM) *Methods designed for problems from electromagnetics: **Finite-difference time-domain method — a finite-difference method **Rigorous coupled-wave analysis — semi-analytical Fourier-space method based on Floquet's theorem **Transmission-line matrix method (TLM) — based on analogy between electromagnetic field and mesh of transmission lines **Uniform theory of diffraction — specifically designed for scattering problems *Particle-in-cell — used especially in fluid dynamics **Multiphase particle-in-cell method — considers solid particles as both numerical particles and fluid *High-resolution scheme *Shock capturing method *Vorticity confinement — for vortex-dominated flows in fluid dynamics, similar to shock capturing *Split-step method *Fast marching method *Orthogonal collocation *Lattice Boltzmann methods — for the solution of the Navier–Stokes equations *Roe solver — for the solution of the Euler equation *Relaxation (iterative method) — a method for solving elliptic PDEs by converting them to evolution equations *Broad classes of methods: **Mimetic methods — methods that respect in some sense the structure of the original problem **Multiphysics — models consisting of various submodels with different physics **Immersed boundary method — for simulating elastic structures immersed within fluids *Multisymplectic integrator — extension of symplectic integrators, which are for ODEs *Stretched grid method — for problems whose solution can be related to the behavior of an elastic grid
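A minimal Python sketch of the first-order upwind scheme mentioned above, applied to linear advection with periodic boundaries (NumPy assumed; illustrative only). The smearing of the bump it produces is the numerical diffusion discussed in the analysis section.

# First-order upwind scheme for u_t + a u_x = 0 with a > 0, periodic boundaries.
# Stable for CFL number a*dt/dx <= 1.
import numpy as np

nx, a, cfl = 200, 1.0, 0.8
dx = 1.0 / nx
dt = cfl * dx / a
x = (np.arange(nx) + 0.5) * dx
u = np.exp(-200 * (x - 0.3)**2)        # smooth initial bump at x = 0.3

for _ in range(int(0.4 / dt)):         # advect for t ~ 0.4
    u -= cfl * (u - np.roll(u, 1))     # upwind difference: left neighbour, since a > 0

# The bump arrives near x ~ 0.7, visibly smeared by first-order numerical diffusion.
print(x[np.argmax(u)], u.max())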


Techniques for improving these methods

*Multigrid method — uses a hierarchy of nested meshes to speed up the methods *Domain decomposition methods — divides the domain into a few subdomains and solves the PDE on these subdomains **Additive Schwarz method **Abstract additive Schwarz method — abstract version of additive Schwarz without reference to geometric information **Balancing domain decomposition method (BDD) — preconditioner for symmetric positive definite matrices **Balancing domain decomposition by constraints (BDDC) — further development of BDD **Finite element tearing and interconnect (FETI) **FETI-DP — further development of FETI **Fictitious domain method — preconditioner constructed with a structured mesh on a fictitious domain of simple shape **Mortar methods — meshes on the subdomains do not need to match **Neumann–Dirichlet method — combines Neumann problem on one subdomain with Dirichlet problem on the other subdomain **Neumann–Neumann methods — domain decomposition methods that use Neumann problems on the subdomains **Poincaré–Steklov operator — maps tangential electric field onto the equivalent electric current **Schur complement method — early and basic method on subdomains that do not overlap **Schwarz alternating method — early and basic method on subdomains that overlap *Coarse space (numerical analysis) — variant of the problem which uses a discretization with fewer degrees of freedom *Adaptive mesh refinement — uses the computed solution to refine the mesh only where necessary *Fast multipole method — hierarchical method for evaluating particle-particle interactions *Perfectly matched layer — artificial absorbing layer for wave equations, used to implement absorbing boundary conditions


Grids and meshes

*Grid classification / Types of mesh: **Polygon mesh — consists of polygons in 2D or 3D **Triangle mesh — consists of triangles in 2D or 3D ***Triangulation (geometry) — subdivision of a given region into triangles, or higher-dimensional analogue ***Nonobtuse mesh — mesh in which all angles are less than or equal to 90° ***Point-set triangulation — triangle mesh such that each point of a given set is a vertex of a triangle ***Polygon triangulation — triangle mesh inside a polygon ***Delaunay triangulation — triangulation such that no vertex is inside the circumcircle of any triangle (a small verification sketch appears at the end of this section) ***Constrained Delaunay triangulation — generalization of the Delaunay triangulation that forces certain required segments into the triangulation ***Pitteway triangulation — for any point, the triangle containing it has the point's nearest neighbour as a vertex ***Minimum-weight triangulation — triangulation of minimum total edge length ***Kinetic triangulation — a triangulation that moves over time ***Triangulated irregular network ***Quasi-triangulation — subdivision into simplices, where vertices are not points but arbitrary sloped line segments **Volume mesh — consists of three-dimensional shapes **Regular grid — consists of congruent parallelograms, or higher-dimensional analogue **Unstructured grid **Geodesic grid — isotropic grid on a sphere *Mesh generation **Image-based meshing — automatic procedure of generating meshes from 3D image data **Marching cubes — extracts a polygon mesh from a scalar field **Parallel mesh generation **Ruppert's algorithm — creates quality Delaunay triangulations from piecewise linear data *Subdivisions: **Apollonian network — undirected graph formed by recursively subdividing a triangle **Barycentric subdivision — standard way of dividing arbitrary convex polygons into triangles, or the higher-dimensional analogue *Improving an existing mesh: **Chew's second algorithm — improves Delaunay triangulations by refining poor-quality triangles **Laplacian smoothing — improves polygonal meshes by moving the vertices *Jump-and-Walk algorithm — for finding the triangle in a mesh that contains a given point *Spatial twist continuum — dual representation of a mesh consisting of hexahedra *Pseudotriangle — simply connected region between any three mutually tangent convex sets *Simplicial complex — all vertices, line segments, triangles, tetrahedra, ..., making up a mesh
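A minimal Python sketch, assuming SciPy is available, that builds a Delaunay triangulation of random points and spot-checks the defining empty-circumcircle property on one triangle:

# Delaunay triangulation with scipy.spatial, plus a circumcircle check.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
pts = rng.random((30, 2))
tri = Delaunay(pts)
print(tri.simplices.shape)        # (number of triangles, 3): vertex indices per triangle

# Check one triangle: no other point may lie strictly inside its circumcircle.
A, B, C = pts[tri.simplices[0]]
# Circumcentre from the perpendicular-bisector equations 2(B-A).c = |B|^2-|A|^2 etc.
M = 2 * np.array([B - A, C - A])
b = np.array([B @ B - A @ A, C @ C - A @ A])
centre = np.linalg.solve(M, b)
radius = np.linalg.norm(pts[tri.simplices[0]] - centre, axis=1).max()
inside = np.linalg.norm(pts - centre, axis=1) < radius - 1e-12
print(inside.sum())               # 0: no point lies strictly inside the circumcircle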


Analysis

*Lax equivalence theorem — a consistent method is convergent if and only if it is stable *Courant–Friedrichs–Lewy condition — stability condition for hyperbolic PDEs *Von Neumann stability analysis — all Fourier components of the error should be stable (a worked example appears after this list) *Numerical diffusion — diffusion introduced by the numerical method, in addition to that which is naturally present **False diffusion *Numerical dispersion *Numerical resistivity — the same, with resistivity instead of diffusion *Weak formulation — a functional-analytic reformulation of the PDE necessary for some methods *Total variation diminishing — property of schemes that do not introduce spurious oscillations *Godunov's theorem — linear monotone schemes can only be of first order *Motz's problem — benchmark problem for singularity problems
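As a worked example of von Neumann stability analysis: for the FTCS heat scheme, substituting the Fourier mode u_j^n = g^n exp(i j theta) into the update gives the amplification factor g(theta) = 1 - 4 r sin^2(theta/2) with r = dt/dx^2, so |g| <= 1 holds for every mode exactly when r <= 1/2. A short numerical check of this in Python (NumPy assumed):

# Amplification factor of FTCS for the heat equation over all Fourier modes.
import numpy as np

theta = np.linspace(0.0, np.pi, 1000)
for r in (0.3, 0.5, 0.7):
    g = 1.0 - 4.0 * r * np.sin(theta / 2) ** 2
    print(r, np.abs(g).max())     # <= 1 for r = 0.3 and 0.5; > 1 (unstable) for r = 0.7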


Monte Carlo method

*Variants of the Monte Carlo method: **Direct simulation Monte Carlo **Quasi-Monte Carlo method **Markov chain Monte Carlo ***Metropolis–Hastings algorithm (a minimal sketch appears at the end of this section) ****Multiple-try Metropolis — modification which allows larger step sizes ****Wang and Landau algorithm — extension of Metropolis Monte Carlo ****Equation of State Calculations by Fast Computing Machines — 1953 article proposing the Metropolis Monte Carlo algorithm ****Multicanonical ensemble — sampling technique that uses Metropolis–Hastings to compute integrals ***Gibbs sampling ***Coupling from the past ***Reversible-jump Markov chain Monte Carlo **Dynamic Monte Carlo method ***Kinetic Monte Carlo ***Gillespie algorithm **Particle filter ***Auxiliary particle filter **Reverse Monte Carlo **Demon algorithm *Pseudo-random number sampling **Inverse transform sampling — general and straightforward method but computationally expensive **Rejection sampling — sample from a simpler distribution but reject some of the samples ***Ziggurat algorithm — uses a pre-computed table covering the probability distribution with rectangular segments **For sampling from a normal distribution: ***Box–Muller transform ***Marsaglia polar method **Convolution random number generator — generates a random variable as a sum of other random variables **Indexed search *Variance reduction techniques: **Antithetic variates **Control variates **Importance sampling **Stratified sampling **VEGAS algorithm *Low-discrepancy sequence **Constructions of low-discrepancy sequences *Event generator *Parallel tempering *Umbrella sampling — improves sampling in physical systems with significant energy barriers *Hybrid Monte Carlo *Ensemble Kalman filter — recursive filter suitable for problems with a large number of variables *Transition path sampling *Walk-on-spheres method — to generate exit points of Brownian motion from bounded domains *Applications: **Ensemble forecasting — produce multiple numerical predictions from slightly different initial conditions or parameters **Bond fluctuation model — for simulating the conformation and dynamics of polymer systems **Iterated filtering **Metropolis light transport **Monte Carlo localization — estimates the position and orientation of a robot **Monte Carlo methods for electron transport **Monte Carlo method for photon transport **Monte Carlo methods in finance ***Monte Carlo methods for option pricing ***Quasi-Monte Carlo methods in finance **Monte Carlo molecular modeling ***Path integral molecular dynamics — incorporates Feynman path integrals **Quantum Monte Carlo ***Diffusion Monte Carlo — uses a Green function to solve the Schrödinger equation ***Gaussian quantum Monte Carlo ***Path integral Monte Carlo ***Reptation Monte Carlo ***Variational Monte Carlo **Methods for simulating the Ising model: ***Swendsen–Wang algorithm — entire sample is divided into equal-spin clusters ***Wolff algorithm — improvement of the Swendsen–Wang algorithm ***Metropolis–Hastings algorithm **Auxiliary field Monte Carlo — computes averages of operators in many-body quantum mechanical problems **Cross-entropy method — for multi-extremal optimization and importance sampling *Also see the list of statistics topics
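A minimal Python sketch of the Metropolis–Hastings algorithm listed above (NumPy assumed; illustrative only): it samples a standard normal target known only up to its normalizing constant, using a symmetric Gaussian random-walk proposal, so the Hastings correction cancels.

# Random-walk Metropolis-Hastings targeting exp(-x^2/2) up to a constant.
import numpy as np

rng = np.random.default_rng(0)
log_target = lambda x: -0.5 * x * x       # log density, normalizing constant dropped

x, samples = 0.0, []
for _ in range(50_000):
    prop = x + rng.normal(scale=1.0)      # symmetric proposal
    if np.log(rng.random()) < log_target(prop) - log_target(x):
        x = prop                           # accept
    samples.append(x)                      # on rejection the current state repeats

samples = np.array(samples[5_000:])        # discard burn-in
print(samples.mean(), samples.std())       # approximately 0 and 1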


Applications

*Computational physics **Computational electromagnetics **Computational fluid dynamics (CFD) ***Numerical methods in fluid mechanics ***Large eddy simulation ***Smoothed-particle hydrodynamics ***Aeroacoustic analogy — used in numerical aeroacoustics to reduce sound sources to simple emitter types ***Stochastic Eulerian Lagrangian method — uses Eulerian description for fluids and Lagrangian for structures ***Explicit algebraic stress model **Computational magnetohydrodynamics (CMHD) — studies electrically conducting fluids **Climate model **Numerical weather prediction ***Geodesic grid **Celestial mechanics ***Numerical model of the Solar System **Quantum jump method — used for simulating open quantum systems, operates on wave function **Dynamic design analysis method (DDAM) — for evaluating effect of underwater explosions on equipment *Computational chemistry **Cell lists **Coupled cluster **Density functional theory **DIIS — direct inversion in (or of) the iterative subspace *Computational sociology *Computational statistics


Software

For a large list of software, see the list of numerical-analysis software.


Journals

*Acta Numerica *Mathematics of Computation (published by the American Mathematical Society) *Journal of Computational and Applied Mathematics *BIT Numerical Mathematics *Numerische Mathematik *Journals from the Society for Industrial and Applied Mathematics **SIAM Journal on Numerical Analysis **SIAM Journal on Scientific Computing


Researchers

*Cleve Moler *Gene H. Golub *James H. Wilkinson *Margaret H. Wright *Nicholas J. Higham *
Nick Trefethen Lloyd Nicholas Trefethen (born 30 August 1955) is an American mathematician, professor of numerical analysis and head of the Numerical Analysis Group at the Mathematical Institute, University of Oxford. Education Trefethen was born 30 August 19 ...
*Peter Lax *Richard S. Varga *Ulrich W. Kulisch *Vladik Kreinovich