List of numerical analysis topics
This is a list of numerical analysis topics.


General

* Validated numerics
* Iterative method
* Rate of convergence — the speed at which a convergent sequence approaches its limit
** Order of accuracy — rate at which numerical solution of differential equation converges to exact solution
* Series acceleration — methods to accelerate the speed of convergence of a series
** Aitken's delta-squared process — most useful for linearly converging sequences (see the sketch after this list)
** Minimum polynomial extrapolation — for vector sequences
** Richardson extrapolation
** Shanks transformation — similar to Aitken's delta-squared process, but applied to the partial sums
** Van Wijngaarden transformation — for accelerating the convergence of an alternating series
* Abramowitz and Stegun — book containing formulas and tables of many special functions
** Digital Library of Mathematical Functions — successor of book by Abramowitz and Stegun
* Curse of dimensionality
* Local convergence and global convergence — whether you need a good initial guess to get convergence
* Superconvergence
* Discretization
* Difference quotient
* Complexity:
** Computational complexity of mathematical operations
** Smoothed analysis — measuring the expected performance of algorithms under slight random perturbations of worst-case inputs
* Symbolic-numeric computation — combination of symbolic and numeric methods
* Cultural and historical aspects:
** History of numerical solution of differential equations using computers
** Hundred-dollar, Hundred-digit Challenge problems — list of ten problems proposed by Nick Trefethen in 2002
** International Workshops on Lattice QCD and Numerical Analysis
** Timeline of numerical analysis after 1945
* General classes of methods:
** Collocation method — discretizes a continuous equation by requiring it only to hold at certain points
** Level-set method
*** Level set (data structures) — data structures for representing level sets
** Sinc numerical methods — methods based on the sinc function, sinc(''x'') = sin(''x'') / ''x''
** ABS methods
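
As a concrete illustration of series acceleration, here is a minimal Python sketch of Aitken's delta-squared process; the helper name and the test sequence are illustrative, not from any standard library.

    # A minimal sketch of Aitken's delta-squared process, assuming `xs`
    # is a list of floats from a linearly converging sequence.
    def aitken(xs):
        """Accelerate xs via x[n+2] - (x[n+2] - x[n+1])**2 / (second difference)."""
        out = []
        for x0, x1, x2 in zip(xs, xs[1:], xs[2:]):
            d2 = x2 - 2.0 * x1 + x0          # second forward difference
            if d2 == 0.0:                    # sequence already numerically converged
                out.append(x2)
            else:
                out.append(x2 - (x2 - x1) ** 2 / d2)
        return out

    # Example: partial sums of the Leibniz series for pi/4 converge slowly;
    # the accelerated sequence is far closer to pi/4 after the same terms.
    sums, s = [], 0.0
    for k in range(10):
        s += (-1) ** k / (2 * k + 1)
        sums.append(s)
    print(sums[-1], aitken(sums)[-1])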


Error

Error analysis (mathematics)
* Approximation
* Approximation error
* Catastrophic cancellation (see the sketch after this list)
* Condition number
* Discretization error
* Floating point number
** Guard digit — extra precision introduced during a computation to reduce round-off error
** Truncation — rounding a floating-point number by discarding all digits after a certain digit
** Round-off error
*** Numeric precision in Microsoft Excel
** Arbitrary-precision arithmetic
* Interval arithmetic — represent every number by two floating-point numbers guaranteed to have the unknown number between them
** Interval contractor — maps interval to subinterval which still contains the unknown exact answer
** Interval propagation — contracting interval domains without removing any value consistent with the constraints
*** See also: Interval boundary element method, Interval finite element
* Loss of significance
* Numerical error
* Numerical stability
* Error propagation:
** Propagation of uncertainty
*** List of uncertainty propagation software
** Significance arithmetic
** Residual (numerical analysis)
* Relative change and difference — the relative difference between ''x'' and ''y'' is |''x'' − ''y''| / max(|''x''|, |''y''|)
* Significant figures
** False precision — giving more significant figures than appropriate
* Sterbenz lemma
* Truncation error — error committed by doing only a finite number of steps
* Well-posed problem
* Affine arithmetic
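
The following is a minimal Python sketch of catastrophic cancellation, assuming IEEE 754 double-precision arithmetic; the expressions are illustrative, chosen so that the subtraction cancels all significant digits.

    import math

    # f(x) = sqrt(x**2 + 1) - x: for large x both terms are nearly equal,
    # so the subtraction cancels almost every significant digit.
    x = 1e8
    naive = math.sqrt(x * x + 1.0) - x            # cancellation: prints 0.0
    # Algebraically equivalent form with no subtraction of nearby numbers:
    stable = 1.0 / (math.sqrt(x * x + 1.0) + x)   # prints ~5e-9, the true value
    print(naive, stable)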


Elementary and special functions

* Unrestricted algorithm
* Summation:
** Kahan summation algorithm (see the sketch after this list)
** Pairwise summation — slightly worse than Kahan summation but cheaper
** Binary splitting
** 2Sum
* Multiplication:
** Multiplication algorithm — general discussion, simple methods
** Karatsuba algorithm — the first algorithm which is faster than straightforward multiplication
** Toom–Cook multiplication — generalization of Karatsuba multiplication
** Schönhage–Strassen algorithm — based on Fourier transform, asymptotically very fast
** Fürer's algorithm — asymptotically slightly faster than Schönhage–Strassen
* Division algorithm — for computing the quotient and/or remainder of two numbers
** Long division
** Restoring division
** Non-restoring division
** SRT division
** Newton–Raphson division — uses Newton's method to find the reciprocal of ''D'', and multiplies that reciprocal by ''N'' to find the final quotient ''Q'' (see the sketch after this list)
** Goldschmidt division
* Exponentiation:
** Exponentiation by squaring
** Addition-chain exponentiation
* Multiplicative inverse algorithms — for computing a number's multiplicative inverse (reciprocal):
** Newton's method
* Polynomials:
** Horner's method (see the sketch after this list)
** Estrin's scheme — modification of the Horner scheme with more possibilities for parallelization
** Clenshaw algorithm
** De Casteljau's algorithm
* Square roots and other roots:
** Integer square root
** Methods of computing square roots
** ''n''th root algorithm
** Shifting ''n''th root algorithm — similar to long division
** hypot — the function (''x''² + ''y''²)^(1/2)
** Alpha max plus beta min algorithm — approximates hypot(''x'', ''y'')
** Fast inverse square root — calculates 1/√''x'' using details of the IEEE floating-point system
* Elementary functions (exponential, logarithm, trigonometric functions):
** Trigonometric tables — different methods for generating them
** CORDIC — shift-and-add algorithm using a table of arc tangents
** BKM algorithm — shift-and-add algorithm using a table of logarithms and complex numbers
* Gamma function:
** Lanczos approximation
** Spouge's approximation — modification of Stirling's approximation; easier to apply than Lanczos
* AGM method — computes the arithmetic–geometric mean; related methods compute special functions
* FEE method (Fast E-function Evaluation) — fast summation of series like the power series for e^''x''
* Gal's accurate tables — table of function values with unequal spacing to reduce round-off error
* Spigot algorithm — algorithms that can compute individual digits of a real number
* Approximations of π:
** Liu Hui's π algorithm — first algorithm that can compute π to arbitrary precision
** Leibniz formula for π — alternating series with very slow convergence
** Wallis product — infinite product converging slowly to π/2
** Viète's formula — more complicated infinite product which converges faster
** Gauss–Legendre algorithm — iteration which converges quadratically to π, based on the arithmetic–geometric mean
** Borwein's algorithm — iteration which converges quartically to 1/π, and other algorithms
** Chudnovsky algorithm — fast algorithm that calculates a hypergeometric series
** Bailey–Borwein–Plouffe formula — can be used to compute individual hexadecimal digits of π
** Bellard's formula — faster version of the Bailey–Borwein–Plouffe formula
** List of formulae involving π
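
A minimal Python sketch of the Kahan summation algorithm referenced above; the helper name and test data are illustrative.

    # Compensated summation: a running correction `c` recovers the
    # low-order bits lost by each floating-point addition.
    def kahan_sum(values):
        total = 0.0
        c = 0.0                    # running compensation for lost low-order bits
        for v in values:
            y = v - c              # apply the previously captured correction
            t = total + y          # low-order digits of y may be lost here
            c = (t - total) - y    # recover exactly what was lost
            total = t
        return total

    # 1.0 dwarfs each small term, so naive summation drops them entirely;
    # compensated summation keeps their combined contribution.
    data = [1.0] + [1e-16] * 10
    print(sum(data), kahan_sum(data))   # 1.0 vs ~1.000000000000001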
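
A minimal Python sketch of Newton–Raphson division as referenced above; it assumes, as hardware dividers do, that the divisor has already been scaled into [0.5, 1), and it uses the classic linear initial estimate 48/17 − 32/17·''D''. The function name and iteration count are illustrative.

    def nr_divide(n, d, iterations=5):
        """Approximate n/d by refining x ~ 1/d with x <- x * (2 - d*x)."""
        x = 48.0 / 17.0 - 32.0 / 17.0 * d   # linear initial estimate of 1/d
        for _ in range(iterations):
            x = x * (2.0 - d * x)           # quadratically convergent Newton step
        return n * x                        # quotient = n * (1/d)

    print(nr_divide(1.0, 0.75), 1.0 / 0.75)   # both ~1.3333333333333333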
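
A minimal Python sketch of Horner's method referenced above; the coefficient ordering (lowest degree first) is an assumption of this sketch.

    # Evaluate p(x) = c[0] + c[1]*x + ... + c[n]*x**n with n multiplications
    # by folding in one coefficient per step, highest degree first.
    def horner(coeffs, x):
        result = 0.0
        for c in reversed(coeffs):
            result = result * x + c
        return result

    # p(x) = 2 + 3x + x^2, so p(4) = 2 + 12 + 16 = 30
    print(horner([2.0, 3.0, 1.0], 4.0))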


Numerical linear algebra

Numerical linear algebra — study of numerical algorithms for linear algebra problems


Basic concepts

* Types of matrices appearing in numerical analysis:
** Sparse matrix
*** Band matrix
*** Bidiagonal matrix
*** Tridiagonal matrix
*** Pentadiagonal matrix
*** Skyline matrix
** Circulant matrix
** Triangular matrix
** Diagonally dominant matrix
** Block matrix — matrix composed of smaller matrices
** Stieltjes matrix — symmetric positive definite with non-positive off-diagonal entries
** Hilbert matrix — example of a matrix which is extremely ill-conditioned (and thus difficult to handle)
** Wilkinson matrix — example of a symmetric tridiagonal matrix with pairs of nearly, but not exactly, equal eigenvalues
** Convergent matrix — square matrix whose successive powers approach the zero matrix
* Algorithms for matrix multiplication:
** Strassen algorithm
** Coppersmith–Winograd algorithm
** Cannon's algorithm — a distributed algorithm, especially suitable for processors laid out in a 2d grid
** Freivalds' algorithm — a randomized algorithm for checking the result of a multiplication
* Matrix decompositions:
** LU decomposition — lower triangular times upper triangular
** QR decomposition — orthogonal matrix times triangular matrix (see the sketch after this list)
*** RRQR factorization — rank-revealing QR factorization, can be used to compute the rank of a matrix
** Polar decomposition — unitary matrix times positive-semidefinite Hermitian matrix
** Decompositions by similarity:
*** Eigendecomposition — decomposition in terms of eigenvectors and eigenvalues
*** Jordan normal form — bidiagonal matrix of a certain form; generalizes the eigendecomposition
**** Weyr canonical form — permutation of Jordan normal form
*** Jordan–Chevalley decomposition — sum of commuting nilpotent matrix and diagonalizable matrix
*** Schur decomposition — similarity transform bringing the matrix to a triangular matrix
** Singular value decomposition — unitary matrix times diagonal matrix times unitary matrix
* Matrix splitting — expressing a given matrix as a sum or difference of matrices
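
As a quick check of what the QR factorization above asserts, here is a minimal NumPy sketch using numpy.linalg.qr; the random test matrix is illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    Q, R = np.linalg.qr(A)
    print(np.allclose(A, Q @ R))            # A is recovered from the factors
    print(np.allclose(Q.T @ Q, np.eye(4)))  # Q has orthonormal columns
    print(np.allclose(R, np.triu(R)))       # R is upper triangular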


Solving systems of linear equations

* Gaussian elimination
** Row echelon form — matrix in which all entries below a nonzero entry are zero
** Bareiss algorithm — variant which ensures that all entries remain integers if the initial matrix has integer entries
** Tridiagonal matrix algorithm — simplified form of Gaussian elimination for tridiagonal matrices (see the sketch after this list)
* LU decomposition — write a matrix as a product of an upper- and a lower-triangular matrix
** Crout matrix decomposition
** LU reduction — a special parallelized version of an LU decomposition algorithm
* Block LU decomposition
* Cholesky decomposition — for solving a system with a positive definite matrix
** Minimum degree algorithm
** Symbolic Cholesky decomposition
* Iterative refinement — procedure to turn an inaccurate solution into a more accurate one
* Direct methods for sparse matrices:
** Frontal solver — used in finite element methods
** Nested dissection — for symmetric matrices, based on graph partitioning
* Levinson recursion — for Toeplitz matrices
* SPIKE algorithm — hybrid parallel solver for narrow-banded matrices
* Cyclic reduction — eliminate even or odd rows or columns, repeat
* Iterative methods:
** Jacobi method (see the sketch after this list)
** Gauss–Seidel method
*** Successive over-relaxation (SOR) — a technique to accelerate the Gauss–Seidel method
**** Symmetric successive over-relaxation (SSOR) — variant of SOR for symmetric matrices
*** Backfitting algorithm — iterative procedure used to fit a generalized additive model, often equivalent to Gauss–Seidel
** Modified Richardson iteration
** Conjugate gradient method (CG) — assumes that the matrix is positive definite
*** Derivation of the conjugate gradient method
*** Nonlinear conjugate gradient method — generalization for nonlinear optimization problems
** Biconjugate gradient method (BiCG)
*** Biconjugate gradient stabilized method (BiCGSTAB) — variant of BiCG with better convergence
** Conjugate residual method — similar to CG but only assumes that the matrix is symmetric
** Generalized minimal residual method (GMRES) — based on the Arnoldi iteration
** Chebyshev iteration — avoids inner products but needs bounds on the spectrum
** Stone's method (SIP — Strongly Implicit Procedure) — uses an incomplete LU decomposition
** Kaczmarz method
** Preconditioner
*** Incomplete Cholesky factorization — sparse approximation to the Cholesky factorization
*** Incomplete LU factorization — sparse approximation to the LU factorization
** Uzawa iteration — for saddle point problems
* Underdetermined and overdetermined systems (systems that have no or more than one solution):
** Numerical computation of null space — find all solutions of an underdetermined system
** Moore–Penrose pseudoinverse — for finding the solution with smallest 2-norm (for underdetermined systems) or smallest residual
** Sparse approximation — for finding the sparsest solution (i.e., the solution with as many zeros as possible)
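
A minimal Python sketch of the tridiagonal matrix (Thomas) algorithm referenced above; the array layout (a = subdiagonal, b = diagonal, c = superdiagonal, with a[0] and c[-1] unused) is an assumption of this sketch, and no pivoting is done.

    def thomas(a, b, c, d):
        n = len(d)
        cp, dp = [0.0] * n, [0.0] * n
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                 # forward elimination sweep
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = [0.0] * n
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):        # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # Solve the 3x3 system with diagonal 2 and off-diagonals -1;
    # the exact solution is [1, 1, 1].
    print(thomas([0.0, -1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0, 0.0], [1.0, 0.0, 1.0]))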
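
A minimal NumPy sketch of the Jacobi method referenced above, assuming a strictly diagonally dominant matrix so that the iteration converges; the test system is illustrative.

    import numpy as np

    def jacobi(A, b, iterations=50):
        D = np.diag(A)                  # diagonal entries
        R = A - np.diagflat(D)          # off-diagonal part
        x = np.zeros_like(b, dtype=float)
        for _ in range(iterations):
            x = (b - R @ x) / D         # solve each row for its diagonal unknown
        return x

    A = np.array([[4.0, 1.0], [2.0, 5.0]])
    b = np.array([1.0, 2.0])
    print(jacobi(A, b), np.linalg.solve(A, b))   # the two agree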


Eigenvalue algorithms

Eigenvalue algorithm — a numerical algorithm for locating the eigenvalues of a matrix
* Power iteration (see the sketch after this list)
* Inverse iteration
* Rayleigh quotient iteration
* Arnoldi iteration — based on Krylov subspaces
* Lanczos algorithm — Arnoldi, specialized for positive-definite matrices
** Block Lanczos algorithm — for when the matrix is over a finite field
* QR algorithm
* Jacobi eigenvalue algorithm — select a small submatrix which can be diagonalized exactly, and repeat
** Jacobi rotation — the building block, almost a Givens rotation
** Jacobi method for complex Hermitian matrices
* Divide-and-conquer eigenvalue algorithm
* Folded spectrum method
* LOBPCG — Locally Optimal Block Preconditioned Conjugate Gradient Method
* Eigenvalue perturbation — stability of eigenvalues under perturbations of the matrix
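
A minimal NumPy sketch of power iteration referenced above, assuming the matrix has a strictly dominant eigenvalue; the test matrix is illustrative.

    import numpy as np

    def power_iteration(A, iterations=100):
        x = np.ones(A.shape[0])
        for _ in range(iterations):
            x = A @ x
            x /= np.linalg.norm(x)        # renormalize to avoid overflow
        # Rayleigh quotient of the (unit) iterate estimates the eigenvalue
        return x @ A @ x, x

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    lam, v = power_iteration(A)
    print(lam, np.linalg.eigvalsh(A).max())   # both ~3.618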


Other concepts and algorithms

* Orthogonalization algorithms (a Gram–Schmidt sketch follows this list):
** Gram–Schmidt process
** Householder transformation
*** Householder operator — analogue of Householder transformation for general inner product spaces
** Givens rotation
* Krylov subspace
* Block matrix pseudoinverse
* Bidiagonalization
* Cuthill–McKee algorithm — permutes rows/columns in a sparse matrix to yield a narrow band matrix
* In-place matrix transposition — computing the transpose of a matrix without using much additional storage
* Pivot element — entry in a matrix on which the algorithm concentrates
* Matrix-free methods — methods that only access the matrix by evaluating matrix-vector products
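As a small illustration of the orthogonalization algorithms above, this sketch implements the modified Gram–Schmidt process with NumPy (the function name and test matrix are invented for the example; modified, rather than classical, Gram–Schmidt is used because it is numerically more stable):

    import numpy as np

    def modified_gram_schmidt(A):
        """Return Q whose orthonormal columns span the columns of A."""
        A = A.astype(float)
        n_rows, n_cols = A.shape
        Q = np.zeros((n_rows, n_cols))
        for j in range(n_cols):
            v = A[:, j]
            for i in range(j):
                # subtract the projection onto each earlier direction in turn
                v = v - (Q[:, i] @ v) * Q[:, i]
            Q[:, j] = v / np.linalg.norm(v)
        return Q

    Q = modified_gram_schmidt(np.array([[1.0, 1.0], [0.0, 1.0], [1.0, 0.0]]))
    # Q.T @ Q is the 2x2 identity, up to rounding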


Interpolation and approximation

Interpolation — construct a function going through some given data points
* Nearest-neighbor interpolation — takes the value of the nearest neighbor (see the sketch below)
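A complete one-dimensional nearest-neighbor interpolant fits in a few lines; this is a minimal sketch with invented names and sample data, not code from the linked article:

    def nearest_neighbor(xs, ys, x):
        """Value of the data point (xs[i], ys[i]) whose xs[i] is closest to x."""
        i = min(range(len(xs)), key=lambda k: abs(xs[k] - x))
        return ys[i]

    # the interpolant is piecewise constant: any x closer to 0.0 than to 1.0 gives 10.0
    print(nearest_neighbor([0.0, 1.0, 2.0], [10.0, 20.0, 30.0], 0.4))  # 10.0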


Polynomial interpolation

Polynomial interpolation — interpolation by polynomials
* Linear interpolation
* Runge's phenomenon
* Vandermonde matrix
* Chebyshev polynomials
* Chebyshev nodes
* Lebesgue constant (interpolation)
* Different forms for the interpolant:
** Newton polynomial
*** Divided differences
*** Neville's algorithm — for evaluating the interpolant; based on the Newton form (see the sketch after this list)
** Lagrange polynomial
** Bernstein polynomial — especially useful for approximation
** Brahmagupta's interpolation formula — seventh-century formula for quadratic interpolation
* Extensions to multiple dimensions:
** Bilinear interpolation
** Trilinear interpolation
** Bicubic interpolation
** Tricubic interpolation
** Padua points — set of points in R² with unique polynomial interpolant and minimal growth of Lebesgue constant
* Hermite interpolation
* Birkhoff interpolation
* Abel–Goncharov interpolation
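Neville's algorithm, referenced in the list above, evaluates the interpolating polynomial at a point by repeatedly combining lower-degree interpolants. A minimal sketch, with invented names and sample data:

    def neville(xs, ys, x):
        """Evaluate at x the unique polynomial through the points (xs[i], ys[i])."""
        p = list(ys)  # degree-0 interpolants: the data values themselves
        n = len(xs)
        for level in range(1, n):
            for i in range(n - level):
                # combine two overlapping interpolants of one degree lower
                p[i] = ((x - xs[i + level]) * p[i] + (xs[i] - x) * p[i + 1]) \
                       / (xs[i] - xs[i + level])
        return p[0]

    # the parabola through (0,0), (1,1), (2,4) is x**2, so this prints 6.25
    print(neville([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 2.5))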


Spline interpolation

Spline interpolation — interpolation by piecewise polynomials
* Spline (mathematics) — the piecewise polynomials used as interpolants
* Perfect spline — polynomial spline of degree ''m'' whose ''m''th derivative is ±1
* Cubic Hermite spline
** Centripetal Catmull–Rom spline — special case of cubic Hermite splines without self-intersections or cusps
* Monotone cubic interpolation
* Hermite spline
* Bézier curve
** De Casteljau's algorithm — recursive method to evaluate Bézier curves (see the sketch after this list)
** Composite Bézier curve
** Generalizations to more dimensions:
*** Bézier triangle — maps a triangle to R³
*** Bézier surface — maps a square to R³
* B-spline
** Box spline — multivariate generalization of B-splines
** Truncated power function
** De Boor's algorithm — generalizes De Casteljau's algorithm
* Non-uniform rational B-spline (NURBS)
** T-spline — can be thought of as a NURBS surface for which a row of control points is allowed to terminate
* Kochanek–Bartels spline
* Coons patch — type of manifold parametrization used to smoothly join other surfaces together
* M-spline — a non-negative spline
* I-spline — a monotone spline, defined in terms of M-splines
* Smoothing spline — a spline fitted smoothly to noisy data
* Blossom (functional) — a unique, affine, symmetric map associated to a polynomial or spline
* See also: List of numerical computational geometry topics
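De Casteljau's algorithm, referenced above, reduces evaluation of a Bézier curve to repeated linear interpolation of the control points. A minimal sketch (function name and control points invented for the example):

    def de_casteljau(points, t):
        """Point at parameter t on the Bezier curve with the given control points."""
        pts = list(points)
        while len(pts) > 1:
            # replace each pair of neighbours by their convex combination
            pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
                   for p, q in zip(pts, pts[1:])]
        return pts[0]

    # quadratic Bezier curve with control points (0,0), (1,2), (2,0); its midpoint is (1.0, 1.0)
    print(de_casteljau([(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)], 0.5))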


Trigonometric interpolation

Trigonometric interpolation — interpolation by trigonometric polynomials
* Discrete Fourier transform — can be viewed as trigonometric interpolation at equidistant points
** Relations between Fourier transforms and Fourier series
* Fast Fourier transform (FFT) — a fast method for computing the discrete Fourier transform (see the sketch after this list)
** Bluestein's FFT algorithm
** Bruun's FFT algorithm
** Cooley–Tukey FFT algorithm
** Split-radix FFT algorithm — variant of Cooley–Tukey that uses a blend of radices 2 and 4
** Goertzel algorithm
** Prime-factor FFT algorithm
** Rader's FFT algorithm
** Bit-reversal permutation — particular permutation of vectors with 2^''m'' entries used in many FFTs
** Butterfly diagram
** Twiddle factor — the trigonometric constant coefficients that are multiplied by the data
** Cyclotomic fast Fourier transform — for FFT over finite fields
** Methods for computing discrete convolutions with finite impulse response filters using the FFT:
*** Overlap–add method
*** Overlap–save method
* Sigma approximation
* Dirichlet kernel — convolving any function with the Dirichlet kernel yields its trigonometric interpolant
* Gibbs phenomenon
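Of the FFT variants above, the radix-2 Cooley–Tukey recursion is compact enough to sketch in full. This is a minimal illustration (function name and test vector are invented; production code would use an optimized library):

    import cmath

    def fft(x):
        """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
        n = len(x)
        if n == 1:
            return list(x)
        even = fft(x[0::2])  # DFT of the even-indexed samples
        odd = fft(x[1::2])   # DFT of the odd-indexed samples
        result = [0j] * n
        for k in range(n // 2):
            # twiddle factor times the odd half, combined in a butterfly
            t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
            result[k] = even[k] + t
            result[k + n // 2] = even[k] - t
        return result

    print(fft([1, 1, 1, 1]))  # [(4+0j), 0j, 0j, 0j], up to rounding

Splitting a length-''n'' DFT into two length-''n''/2 DFTs at every level is what brings the cost down from O(''n''²) to O(''n'' log ''n'').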


Other interpolants

* Simple rational approximation
** Polynomial and rational function modeling — comparison of polynomial and rational interpolation
* Wavelet
** Continuous wavelet
** Transfer matrix
** See also: List of functional analysis topics, List of wavelet-related transforms
* Inverse distance weighting
* Radial basis function (RBF) — a function of the form ƒ(''x'') = ''φ''(|''x''−''x''0|)
** Polyharmonic spline — a commonly used radial basis function
** Thin plate spline — a specific polyharmonic spline: ''r''² log ''r''
** Hierarchical RBF
* Subdivision surface — constructed by recursively subdividing a piecewise linear interpolant
** Catmull–Clark subdivision surface
** Doo–Sabin subdivision surface
** Loop subdivision surface
* Slerp (spherical linear interpolation) — interpolation between two points on a sphere (see the sketch after this list)
** Generalized quaternion interpolation — generalizes slerp for interpolation between more than two quaternions
* Irrational base discrete weighted transform
* Nevanlinna–Pick interpolation — interpolation by analytic functions in the unit disc subject to a bound
** Pick matrix — the Nevanlinna–Pick interpolation has a solution if this matrix is positive semi-definite
* Multivariate interpolation — the function being interpolated depends on more than one variable
** Barnes interpolation — method for two-dimensional functions using Gaussians common in meteorology
** Coons surface — combination of linear interpolation and bilinear interpolation
** Lanczos resampling — based on convolution with a sinc function
** Natural neighbor interpolation
** Nearest neighbor value interpolation
** PDE surface
** Transfinite interpolation — constructs a function on a planar domain given its values on the boundary
** Trend surface analysis — based on low-order polynomials of spatial coordinates; uses scattered observations
** Methods based on polynomials are listed under ''Polynomial interpolation''
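Slerp, referenced above, weights the two endpoints by sines of fractions of the angle between them, which keeps the interpolant on the sphere at constant angular speed. A minimal sketch for unit vectors in R³ (names and data invented; quaternion implementations use the same formula):

    import math

    def slerp(p, q, t):
        """Spherical linear interpolation between unit vectors p and q, 0 <= t <= 1."""
        cos_omega = max(-1.0, min(1.0, sum(a * b for a, b in zip(p, q))))
        omega = math.acos(cos_omega)  # angle between the two endpoints
        if omega < 1e-12:
            return p  # nearly parallel endpoints: fall back to p
        s = math.sin(omega)
        w_p = math.sin((1 - t) * omega) / s  # weight of the start point
        w_q = math.sin(t * omega) / s        # weight of the end point
        return tuple(w_p * a + w_q * b for a, b in zip(p, q))

    # halfway between (1,0,0) and (0,1,0) on the unit sphere:
    print(slerp((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), 0.5))  # about (0.7071, 0.7071, 0.0)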


Approximation theory

Approximation theory
* Orders of approximation
* Lebesgue's lemma
* Curve fitting
** Vector field reconstruction
* Modulus of continuity — measures smoothness of a function
* Least squares (function approximation) — minimizes the error in the L2-norm (see the sketch after this list)
* Minimax approximation algorithm — minimizes the maximum error over an interval (the L∞-norm)
** Equioscillation theorem — characterizes the best approximation in the L∞-norm
* Unisolvent point set — function from given function space is determined uniquely by values on such a set of points
* Stone–Weierstrass theorem — continuous functions can be approximated uniformly by polynomials, or certain other function spaces
* Approximation by polynomials:
** Linear approximation
** Bernstein polynomial — basis of polynomials useful for approximating a function
** Bernstein's constant — error when approximating |''x''| by a polynomial
** Remez algorithm — for constructing the best polynomial approximation in the L∞-norm
** Bernstein's inequality (mathematical analysis) — bound on maximum of derivative of polynomial in unit disk
** Mergelyan's theorem — generalization of Stone–Weierstrass theorem for polynomials
** Müntz–Szász theorem — variant of Stone–Weierstrass theorem for polynomials if some coefficients have to be zero
** Bramble–Hilbert lemma — upper bound on L''p'' error of polynomial approximation in multiple dimensions
** Discrete Chebyshev polynomials — polynomials orthogonal with respect to a discrete measure
** Favard's theorem — polynomials satisfying suitable 3-term recurrence relations are orthogonal polynomials
* Approximation by Fourier series / trigonometric polynomials:
** Jackson's inequality — upper bound for best approximation by a trigonometric polynomial
*** Bernstein's theorem (approximation theory) — a converse to Jackson's inequality
** Fejér's theorem — Cesàro means of partial sums of Fourier series converge uniformly for continuous periodic functions
** Erdős–Turán inequality — bounds distance between probability and Lebesgue measure in terms of Fourier coefficients
* Different approximations:
** Moving least squares
** Padé approximant
*** Padé table — table of Padé approximants
** Hartogs–Rosenthal theorem — continuous functions can be approximated uniformly by rational functions on a set of Lebesgue measure zero
** Szász–Mirakyan operator — approximation by e^(−''nx'') ''x''^''k'' on a semi-infinite interval
** Szász–Mirakjan–Kantorovich operator
** Baskakov operator — generalizes Bernstein polynomials, Szász–Mirakyan operators, and Lupas operators
** Favard operator — approximation by sums of Gaussians
* Surrogate model — application: replacing a function that is hard to evaluate by a simpler function
* Constructive function theory — field that studies connection between degree of approximation and smoothness
* Universal differential equation — differential–algebraic equation whose solutions can approximate any continuous function
* Fekete problem — find ''N'' points on a sphere that minimize some kind of energy
* Carleman's condition — condition guaranteeing that a measure is uniquely determined by its moments
* Krein's condition — condition that exponential sums are dense in weighted L2 space
* Lethargy theorem — about distance of points in a metric space from members of a sequence of subspaces
* Wirtinger's representation and projection theorem
* Journals:
** Constructive Approximation
** Journal of Approximation Theory
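As a small worked example of least-squares function approximation (referenced in the list above), this sketch fits a quadratic to samples of sin(''x'') with NumPy's polyfit, which minimizes the sum of squared errors at the sample points; the interval, degree, and sample count are arbitrary choices for illustration:

    import numpy as np

    # discrete least-squares approximation of sin(x) on [0, pi] by a quadratic
    x = np.linspace(0.0, np.pi, 50)
    y = np.sin(x)
    coeffs = np.polyfit(x, y, 2)       # minimizes the sum of squared errors at the samples
    approx = np.polyval(coeffs, x)
    print(np.max(np.abs(approx - y)))  # worst-case error on the grid, a few hundredths

A minimax approximation of the same degree would instead minimize this last quantity, the maximum error, directly.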


Miscellaneous

* Extrapolation
** Linear predictive analysis — linear extrapolation (see the sketch after this list)
* Unisolvent functions — functions for which the interpolation problem has a unique solution
* Regression analysis
** Isotonic regression
* Curve-fitting compaction
* Interpolation (computer graphics)
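Linear predictive analysis in the sense above amounts to extending the line through the last two observations. A minimal sketch with invented names and data:

    def linear_extrapolate(x0, y0, x1, y1, x):
        """Extend the line through (x0, y0) and (x1, y1) to the point x."""
        slope = (y1 - y0) / (x1 - x0)
        return y1 + slope * (x - x1)

    # data ends at x = 2; predict the next value assuming the recent trend continues
    print(linear_extrapolate(1.0, 10.0, 2.0, 12.0, 3.0))  # 14.0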


Finding roots of nonlinear equations

:''See #Numerical linear algebra for linear equations''
Root-finding algorithm — algorithms for solving the equation ''f''(''x'') = 0
* General methods:
** Bisection method — simple and robust; linear convergence (bisection and Newton's method are sketched after this list)
*** Lehmer–Schur algorithm — variant for complex functions
** Fixed-point iteration
** Newton's method — based on linear approximation around the current iterate; quadratic convergence
*** Kantorovich theorem — gives a region around the solution such that Newton's method converges
*** Newton fractal — indicates which initial condition converges to which root under Newton iteration
*** Quasi-Newton method — uses an approximation of the Jacobian:
**** Broyden's method — uses a rank-one update for the Jacobian
**** Symmetric rank-one — a symmetric (but not necessarily positive definite) rank-one update of the Jacobian
**** Davidon–Fletcher–Powell formula — update of the Jacobian in which the matrix remains positive definite
**** Broyden–Fletcher–Goldfarb–Shanno algorithm — rank-two update of the Jacobian in which the matrix remains positive definite
**** Limited-memory BFGS method — truncated, matrix-free variant of BFGS method suitable for large problems
*** Steffensen's method — uses divided differences instead of the derivative
** Secant method — based on linear interpolation at last two iterates
** False position method — secant method with ideas from the bisection method
** Muller's method — based on quadratic interpolation at last three iterates
** Sidi's generalized secant method — higher-order variants of secant method
** Inverse quadratic interpolation — similar to Muller's method, but interpolates the inverse
** Brent's method — combines bisection method, secant method and inverse quadratic interpolation
** Ridders' method — fits a linear function times an exponential to last two iterates and their midpoint
** Halley's method — uses ''f'', ''f''′ and ''f''″; achieves cubic convergence
** Householder's method — uses first ''d'' derivatives to achieve order ''d'' + 1; generalizes Newton's and Halley's methods
* Methods for polynomials:
** Aberth method
** Bairstow's method
** Durand–Kerner method
** Graeffe's method
** Jenkins–Traub algorithm — fast, reliable, and widely used
** Laguerre's method
** Splitting circle method
* Analysis:
** Wilkinson's polynomial — example showing that the location of polynomial roots can be very sensitive to perturbations of the coefficients
* Numerical continuation — tracking a root as one parameter in the equation changes
** Piecewise linear continuation
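Two of the general methods above are short enough to sketch side by side: bisection, which only needs a sign change, and Newton's method, which also needs the derivative. A minimal illustration with invented names and the test equation ''x''² = 2, not code from the linked articles:

    def bisect(f, a, b, tol=1e-12):
        """Bisection: halve [a, b] while keeping a sign change; linear convergence."""
        fa = f(a)
        while b - a > tol:
            m = (a + b) / 2
            if fa * f(m) <= 0:
                b = m            # a root lies in [a, m]
            else:
                a, fa = m, f(m)  # a root lies in [m, b]
        return (a + b) / 2

    def newton(f, dfdx, x, steps=20):
        """Newton's method: follow the tangent line; quadratic convergence near a root."""
        for _ in range(steps):
            x = x - f(x) / dfdx(x)
        return x

    f = lambda x: x * x - 2
    print(bisect(f, 0.0, 2.0))              # 1.41421356...
    print(newton(f, lambda x: 2 * x, 1.0))  # 1.41421356...

Hybrids such as Brent's method exist precisely to combine the guaranteed convergence of the first sketch with the speed of the second.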


Optimization

Mathematical optimization — algorithm for finding maxima or minima of a given function


Basic concepts

* Active set
* Candidate solution
* Constraint (mathematics)
** Constrained optimization — studies optimization problems with constraints
** Binary constraint — a constraint that involves exactly two variables
* Corner solution
* Feasible region — contains all solutions that satisfy the constraints but may not be optimal
* Global optimum and local optimum
* Maxima and minima
* Slack variable
* Continuous optimization
* Discrete optimization


Linear programming

Linear programming (also treats ''integer programming'') — objective function and constraints are linear
* Algorithms for linear programming:
** Simplex algorithm
*** Bland's rule — rule to avoid cycling in the simplex method
*** Klee–Minty cube — perturbed (hyper)cube; simplex method has exponential complexity on such a domain
*** Criss-cross algorithm — similar to the simplex algorithm
*** Big M method — variation of simplex algorithm for problems with both "less than" and "greater than" constraints
** Interior point method
*** Ellipsoid method
*** Karmarkar's algorithm
*** Mehrotra predictor–corrector method
** Column generation
** k-approximation of k-hitting set — algorithm for specific LP problems (to find a weighted hitting set)
* Linear complementarity problem
* Decompositions:
** Benders' decomposition
** Dantzig–Wolfe decomposition
**
Theory of two-level planning The theory of two-level planning (alternatively, Kornai–Liptak decomposition) is a method that Matrix decomposition , decomposes large problems of Linear programming, linear optimization into sub-problems. This decomposition simplifies the soluti ...
** Variable splitting *
Basic solution (linear programming) In linear programming, a discipline within applied mathematics, a basic solution is any solution of a linear programming problem satisfying certain specified technical conditions. For a polyhedron P and a vector \mathbf^* \in \mathbb^n, \mathbf^* ...
— solution at vertex of feasible region *
Fourier–Motzkin elimination Fourier–Motzkin elimination, also known as the FME method, is a mathematical algorithm for eliminating variables from a system of linear inequalities. It can output real solutions. The algorithm is named after Joseph Fourier who proposed the me ...
*
Hilbert basis (linear programming) The Hilbert basis of a convex cone ''C'' is a minimal set of integer vectors such that every integer vector in ''C'' is a conical combination of the vectors in the Hilbert basis with integer coefficients. Definition Given a lattice L\subset\mat ...
— set of integer vectors in a convex cone which generate all integer vectors in the cone *
LP-type problem In the study of algorithms, an LP-type problem (also called a generalized linear program) is an optimization problem that shares certain properties with low-dimensional linear programs and that may be solved by similar algorithms. LP-type problems i ...
*
Linear inequality In mathematics a linear inequality is an inequality which involves a linear function. A linear inequality contains one of the symbols of inequality:. It shows the data which is not equal in graph form. * greater than * ≤ less than or equal to * ...
*
Vertex enumeration problem In mathematics, the vertex enumeration problem for a polytope, a polyhedral cell complex, a hyperplane arrangement, or some other object of discrete geometry, is the problem of determination of the object's vertices given some formal representation ...
— list all vertices of the feasible set
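
As a concrete illustration of the linear programming entries above, the following minimal sketch solves a small LP with SciPy's linprog routine; this assumes a SciPy recent enough (1.6 or later) to provide the "highs" backend, and the problem data are invented for demonstration, not taken from any entry above.

    # Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
    # linprog minimizes, so the objective is negated.
    from scipy.optimize import linprog

    c = [-3, -2]                    # negated objective coefficients
    A_ub = [[1, 1], [1, 3]]         # coefficient matrix of the "<=" constraints
    b_ub = [4, 6]                   # right-hand sides
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None)], method="highs")
    print(res.x, -res.fun)          # optimal vertex (4, 0), objective value 12

The optimum of an LP is attained at a vertex of the feasible region, which is what a basic solution (see above) encodes.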


Convex optimization

Convex optimization Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets (or, equivalently, maximizing concave functions over convex sets). Many classes of convex optimization probl ...
*
Quadratic programming Quadratic programming (QP) is the process of solving certain mathematical optimization problems involving quadratic functions. Specifically, one seeks to optimize (minimize or maximize) a multivariate quadratic function subject to linear constra ...
**
Linear least squares (mathematics) Linear least squares (LLS) is the least squares approximation of linear functions to data. It is a set of formulations for solving statistical problems involved in linear regression, including variants for ordinary (unweighted), weighted, and g ...
**
Total least squares In applied statistics, total least squares is a type of errors-in-variables regression, a least squares data modeling technique in which observational errors on both dependent and independent variables are taken into account. It is a generaliza ...
**
Frank–Wolfe algorithm The Frank–Wolfe algorithm is an iterative first-order optimization algorithm for constrained convex optimization. Also known as the conditional gradient method, reduced gradient algorithm and the convex combination algorithm, the method was orig ...
**
Sequential minimal optimization Sequential minimal optimization (SMO) is an algorithm for solving the quadratic programming (QP) problem that arises during the training of support-vector machines (SVM). It was invented by John Platt in 1998 at Microsoft Research. SMO is widely ...
— breaks up large QP problems into a series of smallest possible QP problems ** Bilinear program *
Basis pursuit Basis pursuit is the mathematical optimization problem of the form : \min_x \, x\, _1 \quad \text \quad y = Ax, where ''x'' is a ''N''-dimensional solution vector (signal), ''y'' is a ''M''-dimensional vector of observations (measurements), ''A' ...
— minimize L1-norm of vector subject to linear constraints **
Basis pursuit denoising In applied mathematics and statistics, basis pursuit denoising (BPDN) refers to a mathematical optimization problem of the form : \min_x \left(\frac \, y - Ax\, ^2_2 + \lambda \, x\, _1\right), where \lambda is a parameter that controls the trade ...
(BPDN) — regularized version of basis pursuit *** In-crowd algorithm — algorithm for solving basis pursuit denoising *
Linear matrix inequality In convex optimization, a linear matrix inequality (LMI) is an expression of the form : \operatorname(y):=A_0+y_1A_1+y_2A_2+\cdots+y_m A_m\succeq 0\, where * y= _i\,,~i\!=\!1,\dots, m/math> is a real vector, * A_0, A_1, A_2,\dots,A_m are n\times n ...
*
Conic optimization Conic optimization is a subfield of convex optimization that studies problems consisting of minimizing a convex function over the intersection of an affine subspace and a convex cone. The class of conic optimization problems includes some of the ...
**
Semidefinite programming Semidefinite programming (SDP) is a subfield of convex optimization concerned with the optimization of a linear objective function (a user-specified function that the user wants to minimize or maximize) over the intersection of the cone of positive ...
**
Second-order cone programming A second-order cone program (SOCP) is a convex optimization problem of the form :minimize \ f^T x \ :subject to ::\lVert A_i x + b_i \rVert_2 \leq c_i^T x + d_i,\quad i = 1,\dots,m ::Fx = g \ where the problem parameters are f \in \mathbb^n, ...
**
Sum-of-squares optimization A sum-of-squares optimization program is an optimization problem with a linear cost function and a particular type of constraint on the decision variables. These constraints are of the form that when the decision variables are used as coefficien ...
**Quadratic programming (see above) *
Bregman method The Bregman method is an iterative algorithm to solve certain convex optimization problems involving regularization. The original version is due to Lev M. Bregman, who published it in 1967. The algorithm is a row-action method accessing constra ...
— row-action method for strictly convex optimization problems *
Proximal gradient method Proximal gradient methods are a generalized form of projection used to solve non-differentiable convex optimization problems. Many interesting problems can be formulated as convex optimization problems of the form \operatorname\limits_ \sum_^n ...
— use splitting of objective function in sum of possibly non-differentiable pieces (see the sketch at the end of this section) * Subgradient method — extension of steepest descent for problems with a non-differentiable objective function * Biconvex optimization — generalization where objective function and constraint set can be biconvex
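
The sketch promised above: for basis pursuit denoising the non-smooth piece is the L1-norm, whose proximal operator is soft-thresholding, so the proximal gradient method reduces to ISTA. This is a minimal illustrative implementation under invented data and parameter values, not a tuned solver.

    import numpy as np

    def ista(A, y, lam, step, n_iter=500):
        """Proximal gradient (ISTA) for min 0.5*||y - Ax||_2^2 + lam*||x||_1."""
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            z = x - step * (A.T @ (A @ x - y))   # gradient step on the smooth part
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox of lam*||.||_1
        return x

    # Illustrative data: recover a sparse vector from a few random measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 50))
    x_true = np.zeros(50)
    x_true[[3, 17, 30]] = [1.0, -2.0, 1.5]
    y = A @ x_true
    x_hat = ista(A, y, lam=0.1, step=1.0 / np.linalg.norm(A, 2) ** 2)
    print(x_hat[[3, 17, 30]])       # roughly recovers the three nonzero entries

The step size 1/||A||² is the usual convergence-safe choice for the smooth part's Lipschitz constant.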


Nonlinear programming

Nonlinear programming In mathematics, nonlinear programming (NLP) is the process of solving an optimization problem where some of the constraints or the objective function are nonlinear. An optimization problem is one of calculation of the extrema (maxima, minima or sta ...
— the most general optimization problem in the usual framework *Special cases of nonlinear programming: **See ''Linear programming'' and ''Convex optimization'' above **
Geometric programming A geometric program (GP) is an optimization problem of the form : \begin \mbox & f_0(x) \\ \mbox & f_i(x) \leq 1, \quad i=1, \ldots, m\\ & g_i(x) = 1, \quad i=1, \ldots, p, \end where f_0,\dots,f_m are posynomials and g_1,\dots,g_p are monomials. I ...
— problems involving signomials or posynomials ***
Signomial A signomial is an algebraic function of one or more independent variables. It is perhaps most easily thought of as an algebraic extension of multivariable polynomials—an extension that permits exponents to be arbitrary real numbers (rather than j ...
— similar to polynomials, but exponents need not be integers ***
Posynomial A posynomial, also known as a posinomial in some literature, is a function of the form : f(x_1, x_2, \dots, x_n) = \sum_^K c_k x_1^ \cdots x_n^ where all the coordinates x_i and coefficients c_k are positive real numbers, and the exponents a_ are ...
— a signomial with positive coefficients ** Quadratically constrained quadratic program **
Linear-fractional programming In mathematical optimization, linear-fractional programming (LFP) is a generalization of linear programming (LP). Whereas the objective function in a linear program is a linear function, the objective function in a linear-fractional program is a rat ...
— objective is ratio of linear functions, constraints are linear ***
Fractional programming In mathematical optimization, fractional programming is a generalization of linear-fractional programming. The objective function in a fractional program is a ratio of two functions that are in general nonlinear. The ratio to be optimized often desc ...
— objective is ratio of nonlinear functions, constraints are linear **
Nonlinear complementarity problem In applied mathematics Applied mathematics is the application of mathematical methods by different fields such as physics, engineering, medicine, biology, finance, business, computer science, and industry. Thus, applied mathematics is a ...
(NCP) — find ''x'' such that ''x'' ≥ 0, ''f''(''x'') ≥ 0 and ''x''ᵀ ''f''(''x'') = 0 **
Least squares The method of least squares is a standard approach in regression analysis to approximate the solution of overdetermined systems (sets of equations in which there are more equations than unknowns) by minimizing the sum of the squares of the res ...
— the objective function is a sum of squares ***
Non-linear least squares Non-linear least squares is the form of least squares analysis used to fit a set of ''m'' observations with a model that is non-linear in ''n'' unknown parameters (''m'' ≥ ''n''). It is used in some forms of nonlinear regression. The ...
***
Gauss–Newton algorithm The Gauss–Newton algorithm is used to solve non-linear least squares problems, which is equivalent to minimizing a sum of squared function values. It is an extension of Newton's method for finding a minimum of a non-linear function. Since a sum ...
**** BHHH algorithm — variant of Gauss–Newton in econometrics **** Generalized Gauss–Newton method — for constrained nonlinear least-squares problems ***
Levenberg–Marquardt algorithm In mathematics and computing, the Levenberg–Marquardt algorithm (LMA or just LM), also known as the damped least-squares (DLS) method, is used to solve non-linear least squares problems. These minimization problems arise especially in least sq ...
***
Iteratively reweighted least squares The method of iteratively reweighted least squares (IRLS) is used to solve certain optimization problems with objective functions of the form of a ''p''-norm: :\underset \sum_^n \big, y_i - f_i (\boldsymbol\beta) \big, ^p, by an iterative met ...
(IRLS) — solves a weighted least-squares problem at every iteration ***
Partial least squares Partial least squares regression (PLS regression) is a statistical method that bears some relation to principal components regression; instead of finding hyperplanes of maximum variance between the response and independent variables, it finds a li ...
— statistical techniques similar to principal components analysis ****
Non-linear iterative partial least squares Principal component analysis (PCA) is a popular technique for analyzing large datasets containing a high number of dimensions/features per observation, increasing the interpretability of data while preserving the maximum amount of information, and ...
(NIPALS) **
Mathematical programming with equilibrium constraints Mathematical programming with equilibrium constraints (MPEC) is the study of constrained optimization problems where the constraints include variational inequalities or complementarities. MPEC is related to the Stackelberg game. MPEC is used ...
— constraints include variational inequalities or complementarities **Univariate optimization: ***
Golden section search The golden-section search is a technique for finding an extremum (minimum or maximum) of a function inside a specified interval. For a strictly unimodal function with an extremum inside the interval, it will find that extremum, while for an interv ...
***
Successive parabolic interpolation Successive parabolic interpolation is a technique for finding the extremum (minimum or maximum) of a continuous unimodal function by successively fitting parabolas (polynomials of degree two) to a function of one variable at three unique points or ...
— based on quadratic interpolation through the last three iterates *General algorithms: **Concepts: ***
Descent direction In optimization, a descent direction is a vector \mathbf\in\mathbb R^n that, in the sense below, moves us closer towards a local minimum \mathbf^* of our objective function f:\mathbb R^n\to\mathbb R. Suppose we are computing \mathbf^* by an iterati ...
*** Guess value — the initial guess for a solution with which an algorithm starts ***
Line search In optimization, the line search strategy is one of two basic iterative approaches to find a local minimum \mathbf^* of an objective function f:\mathbb R^n\to\mathbb R. The other approach is trust region. The line search approach first finds a d ...
****
Backtracking line search In (unconstrained) mathematical optimization, a backtracking line search is a line search method to determine the amount to move along a given search direction. Its use requires that the objective function is differentiable and that its gradient ...
****
Wolfe conditions In the unconstrained minimization problem, the Wolfe conditions are a set of inequalities for performing inexact line search, especially in quasi-Newton methods, first published by Philip Wolfe in 1969. In these methods the idea is to find ::\m ...
**
Gradient method In optimization (mathematics), optimization, a gradient method is an algorithm to solve problems of the form :\min_\; f(x) with the search directions defined by the gradient of the function at the current point. Examples of gradient methods are t ...
— method that uses the gradient as the search direction ***
Gradient descent In mathematics, gradient descent (also often called steepest descent) is a first-order iterative optimization algorithm for finding a local minimum of a differentiable function. The idea is to take repeated steps in the opposite direction of the ...
****
Stochastic gradient descent Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It can be regarded as a stochastic approximation of ...
***
Landweber iteration The Landweber iteration or Landweber algorithm is an algorithm to solve ill-posed linear inverse problems, and it has been extended to solve non-linear problems that involve constraints. The method was first proposed in the 1950s by Louis Landweber, ...
— mainly used for ill-posed problems **
Successive linear programming Successive Linear Programming (SLP), also known as Sequential Linear Programming, is an optimization technique for approximately solving nonlinear optimization problems. Starting at some estimate of the optimal solution, the method is based on sol ...
(SLP) — replace problem by a linear programming problem, solve that, and repeat **
Sequential quadratic programming Sequential quadratic programming (SQP) is an iterative method for constrained nonlinear optimization. SQP methods are used on mathematical problems for which the objective function and the constraints are twice continuously differentiable. SQP me ...
(SQP) — replace problem by a quadratic programming problem, solve that, and repeat **
Newton's method in optimization In calculus, Newton's method is an iterative method for finding the roots of a differentiable function , which are solutions to the equation . As such, Newton's method can be applied to the derivative of a twice-differentiable function to ...
***See also under ''Newton algorithm'' in the section ''Finding roots of nonlinear equations'' **
Nonlinear conjugate gradient method In numerical optimization, the nonlinear conjugate gradient method generalizes the conjugate gradient method to nonlinear optimization. For a quadratic function \displaystyle f(x) :: \displaystyle f(x)=\, Ax-b\, ^2, the minimum of f is obtained whe ...
**Derivative-free methods ***
Coordinate descent Coordinate descent is an optimization algorithm that successively minimizes along coordinate directions to find the minimum of a function. At each iteration, the algorithm determines a coordinate or coordinate block via a coordinate selection rule, ...
— move in one of the coordinate directions ****
Adaptive coordinate descent Adaptive coordinate descent is an improvement of the coordinate descent algorithm to non-separable optimization by the use of adaptive encoding. The adaptive coordinate descent approach gradually builds a transformation of the coordinate system suc ...
— adapt coordinate directions to objective function ****
Random coordinate descent Randomized (Block) Coordinate Descent Method is an optimization algorithm popularized by Nesterov (2010) and Richtárik and Takáč (2011). The first analysis of this method, when applied to the problem of minimizing a smooth convex function, was pe ...
— randomized version ***
Nelder–Mead method The Nelder–Mead method (also downhill simplex method, amoeba method, or polytope method) is a numerical method used to find the minimum or maximum of an objective function in a multidimensional space. It is a direct search method (based on ...
***
Pattern search (optimization) Pattern search (also known as direct search, derivative-free search, or black-box search) is a family of numerical optimization methods that does not require a gradient. As a result, it can be used on functions that are not continuous or different ...
***
Powell's method Powell's method, strictly Powell's conjugate direction method, is an algorithm proposed by Michael J. D. Powell for finding a local minimum In mathematical analysis, the maxima and minima (the respective plurals of maximum and minimum) of a f ...
— based on conjugate gradient descent ***
Rosenbrock methods Rosenbrock methods refers to either of two distinct ideas in numerical computation, both named for Howard H. Rosenbrock. Numerical solution of differential equations Rosenbrock methods for stiff differential equations are a family of single-step ...
— derivative-free method, similar to Nelder–Mead but with guaranteed convergence **
Augmented Lagrangian method Augmented Lagrangian methods are a certain class of algorithms for solving constrained optimization problems. They have similarities to penalty methods in that they replace a constrained optimization problem by a series of unconstrained problems a ...
— replaces constrained problems by unconstrained problems with a term added to the objective function **
Ternary search A ternary search algorithm is a technique in computer science for finding the minimum or maximum of a unimodal function. A ternary search determines either that the minimum or maximum cannot be in the first third of the domain or that it cannot be ...
**
Tabu search Tabu search is a metaheuristic search method employing local search methods used for mathematical optimization. It was created by Fred W. Glover in 1986 and formalized in 1989. Local (neighborhood) searches take a potential solution to a prob ...
** Guided Local Search — modification of search algorithms which builds up penalties during a search **Reactive search optimization (RSO) — the algorithm adapts its parameters automatically ** MM algorithm — majorize-minimization, a wide framework of methods **
Least absolute deviations Least absolute deviations (LAD), also known as least absolute errors (LAE), least absolute residuals (LAR), or least absolute values (LAV), is a statistical optimality criterion and a statistical optimization technique based minimizing the ''sum o ...
***
Expectation–maximization algorithm In statistics, an expectation–maximization (EM) algorithm is an iterative method to find (local) maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variabl ...
****
Ordered subset expectation maximization In mathematical optimization, the ordered subset expectation maximization (OSEM) method is an iterative method that is used in computed tomography. In applications in medical imaging, the OSEM method is used for positron emission tomography, f ...
**
Nearest neighbor search Nearest neighbor search (NNS), as a form of proximity search, is the optimization problem of finding the point in a given set that is closest (or most similar) to a given point. Closeness is typically expressed in terms of a dissimilarity function ...
**
Space mapping The space mapping methodology for modeling and design optimization of engineering systems was first discovered by John Bandler in 1993. It uses relevant existing knowledge to speed up model generation and design optimization of a system. The know ...
— uses "coarse" (ideal or low-fidelity) and "fine" (practical or high-fidelity) models
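
Several of the concepts above (descent direction, line search, gradient method) combine in a few lines: the sketch below runs steepest descent with an Armijo backtracking line search on the Rosenbrock test function, which is listed under ''Test functions for optimization'' later in this document. It is a sketch only; the constants c and beta are conventional choices, not tuned values.

    import numpy as np

    def steepest_descent(f, grad, x0, c=1e-4, beta=0.5, tol=1e-8, max_iter=50000):
        """Gradient descent; the step along -grad is chosen by Armijo backtracking."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:                      # stationary point reached
                break
            t = 1.0
            while f(x - t * g) > f(x) - c * t * (g @ g):     # Armijo sufficient decrease
                t *= beta
            x = x - t * g
        return x

    # Rosenbrock function: minimum at (1, 1); plain gradient descent is slow here.
    f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                               200*(x[1] - x[0]**2)])
    print(steepest_descent(f, grad, [-1.2, 1.0]))

The slow progress along the banana-shaped valley is precisely why the conjugate gradient and quasi-Newton entries above exist.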


Optimal control and infinite-dimensional optimization

Optimal control Optimal control theory is a branch of mathematical optimization that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized. It has numerous applications in science, engineering and ...
*
Pontryagin's minimum principle Pontryagin's maximum principle is used in optimal control theory to find the best possible control for taking a dynamical system from one state to another, especially in the presence of constraints for the state or input controls. It states that it ...
— infinite-dimensional version of Lagrange multipliers **
Costate equations The costate equation is related to the state equation used in optimal control. It is also referred to as auxiliary, adjoint, influence, or multiplier equation. It is stated as a vector of first order differential equations : \dot^(t)=-\frac where ...
— equation for the "Lagrange multipliers" in Pontryagin's minimum principle **
Hamiltonian (control theory) The Hamiltonian is a function used to solve a problem of optimal control for a dynamical system. It can be understood as an instantaneous increment of the Lagrangian expression of the problem that is to be optimized over a certain time period. Insp ...
— minimum principle says that this function should be minimized *Types of problems: ** Linear-quadratic regulator — system dynamics is a linear differential equation, objective is quadratic ** Linear-quadratic-Gaussian control (LQG) — system dynamics is a linear SDE with additive noise, objective is quadratic *** Optimal projection equations — method for reducing dimension of LQG control problem * Algebraic Riccati equation — matrix equation occurring in many optimal control problems *
Bang–bang control In control theory, a bang–bang controller (2 step or on–off controller), is a feedback controller that switches abruptly between two states. These controllers may be realized in terms of any element that provides hysteresis. They are often use ...
— control that switches abruptly between two states *
Covector mapping principle The covector mapping principle is a special case of Riesz' representation theorem, which is a fundamental theorem in functional analysis. The name was coined by Ross and co-workers,Ross, I. M., “A Historical Introduction to the Covector Mappin ...
* Differential dynamic programming — uses locally-quadratic models of the dynamics and cost functions * DNSS point — initial state for certain optimal control problems with multiple optimal solutions * Legendre–Clebsch condition — second-order condition for solution of optimal control problem * Pseudospectral optimal control ** Bellman pseudospectral method — based on Bellman's principle of optimality ** Chebyshev pseudospectral method — uses Chebyshev polynomials (of the first kind) **
Flat pseudospectral method The flat pseudospectral method is part of the family of the Ross–Fahroo pseudospectral methods introduced by Ross and Fahroo. Ross, I. M. and Fahroo, F., Pseudospectral Methods for the Optimal Motion Planning of Differentially Flat Systems” ...
— combines Ross–Fahroo pseudospectral method with differential flatness **Gauss pseudospectral method — uses collocation at the Legendre–Gauss points **Legendre pseudospectral method — uses Legendre polynomials **Pseudospectral knotting method — generalization of pseudospectral methods in optimal control **Ross–Fahroo pseudospectral method — class of pseudospectral methods including Chebyshev, Legendre and knotting *Ross–Fahroo lemma — condition to make discretization and duality operations commute *Ross' π lemma — there is a fundamental time constant within which a control solution must be computed for controllability and stability *Sethi model — optimal control problem modelling advertising *Infinite-dimensional optimization: **Semi-infinite programming — infinite number of variables and finite number of constraints, or the other way around **Shape optimization, Topology optimization — optimization over a set of regions ***Topological derivative — derivative with respect to a change in the shape **Generalized semi-infinite programming — finite number of variables, infinite number of constraints
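
To illustrate the linear-quadratic regulator and algebraic Riccati equation entries above, the sketch below computes the optimal state feedback for a double integrator; it assumes SciPy's scipy.linalg.solve_continuous_are is available, and the weight matrices are textbook demonstration values.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # LQR for dx/dt = A x + B u with cost integral of (x'Qx + u'Ru) dt.
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])             # double integrator
    B = np.array([[0.0], [1.0]])
    Q = np.eye(2)                          # state cost
    R = np.array([[1.0]])                  # control cost
    P = solve_continuous_are(A, B, Q, R)   # solves A'P + PA - PBR^{-1}B'P + Q = 0
    K = np.linalg.solve(R, B.T @ P)        # optimal feedback gain, u = -K x
    print(K)

The Riccati solution P also gives the optimal cost-to-go xᵀPx, the finite-dimensional analogue of the value function in the Hamilton–Jacobi–Bellman equation listed later.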


Uncertainty and randomness

*Approaches to deal with uncertainty: **Markov decision process **Partially observable Markov decision process **Robust optimization ***Wald's maximin model **Scenario optimization — constraints are uncertain **Stochastic approximation **Stochastic optimization **Stochastic programming **
Stochastic gradient descent Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It can be regarded as a stochastic approximation of ...
*Random optimization algorithms: **Random search — choose a point randomly in ball around current iterate **Simulated annealing (see the sketch at the end of this section) ***Adaptive simulated annealing — variant in which the algorithm parameters are adjusted during the computation ***Great Deluge algorithm ***Mean field annealing — deterministic variant of simulated annealing **Bayesian optimization — treats objective function as a random function and places a prior over it **Evolutionary algorithm ***Differential evolution ***Evolutionary programming ***Genetic algorithm, Genetic programming ****Genetic algorithms in economics ***MCACEA (Multiple Coordinated Agents Coevolution Evolutionary Algorithm) — uses an evolutionary algorithm for every agent ***Simultaneous perturbation stochastic approximation (SPSA) **Luus–Jaakola **Particle swarm optimization **Stochastic tunneling **Harmony search — mimics the improvisation process of musicians **see also the section ''Monte Carlo method''
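
The simulated annealing sketch promised above: uphill moves are accepted with probability exp(−Δf/T), and the temperature T is cooled geometrically. The neighbour distribution, cooling rate and iteration count are arbitrary demonstration choices.

    import math, random

    def simulated_annealing(f, x0, neighbour, T0=1.0, cooling=0.999, n_iter=20000):
        """Minimize f by simulated annealing; returns the best point found."""
        x, fx = x0, f(x0)
        best, fbest, T = x, fx, T0
        for _ in range(n_iter):
            y = neighbour(x)
            fy = f(y)
            # Accept downhill moves always, uphill moves with Boltzmann probability.
            if fy < fx or random.random() < math.exp(-(fy - fx) / T):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
            T *= cooling
        return best, fbest

    # Example: the multimodal one-dimensional Rastrigin function.
    f = lambda x: 10 + x*x - 10*math.cos(2*math.pi*x)
    print(simulated_annealing(f, 3.0, lambda x: x + random.gauss(0.0, 0.5)))

Early on, the high temperature lets the search escape the local minima that trap pure descent methods.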


Theoretical aspects

*Convex analysis — the study of convex functions, i.e., functions ''f'' such that ''f''(''tx'' + (1 − ''t'')''y'') ≤ ''tf''(''x'') + (1 − ''t'')''f''(''y'') for ''t'' ∈ [0,1] **Pseudoconvex function — function ''f'' such that ∇''f'' · (''y'' − ''x'') ≥ 0 implies ''f''(''y'') ≥ ''f''(''x'') **Quasiconvex function — function ''f'' such that ''f''(''tx'' + (1 − ''t'')''y'') ≤ max(''f''(''x''), ''f''(''y'')) for ''t'' ∈ [0,1] **Subderivative **Geodesic convexity — convexity for functions defined on a Riemannian manifold *Duality (optimization) **Weak duality — dual solution gives a bound on the primal solution (worked derivation at the end of this section) **Strong duality — primal and dual solutions are equivalent **Shadow price **Dual cone and polar cone **Duality gap — difference between primal and dual solution **Fenchel's duality theorem — relates minimization problems with maximization problems of convex conjugates **Perturbation function — any function which relates to primal and dual problems **Slater's condition — sufficient condition for strong duality to hold in a convex optimization problem **Total dual integrality — concept of duality for integer linear programming **Wolfe duality — for when objective function and constraints are differentiable *Farkas' lemma *Karush–Kuhn–Tucker conditions (KKT) — first-order necessary conditions (under a constraint qualification) for a solution to be optimal **Fritz John conditions — variant of KKT conditions *Lagrange multiplier **Lagrange multipliers on Banach spaces *Semi-continuity *Complementarity theory — study of problems with constraints of the form ⟨''u'', ''v''⟩ = 0 **Mixed complementarity problem ***Mixed linear complementarity problem ***Lemke's algorithm — method for solving (mixed) linear complementarity problems *Danskin's theorem — used in the analysis of minimax problems *Maximum theorem — the maximum and maximizer are continuous as functions of parameters, under some conditions *No free lunch in search and optimization *Relaxation (approximation) — approximating a given problem by an easier problem by relaxing some constraints **Lagrangian relaxation **Linear programming relaxation — ignoring the integrality constraints in a linear programming problem *Self-concordant function *Reduced cost — cost for increasing a variable by a small amount *Hardness of approximation — computational complexity of getting an approximate solution
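
The worked derivation of weak duality promised above, in standard LP notation (primal: maximize cᵀx subject to Ax ≤ b, x ≥ 0; dual: minimize bᵀy subject to Aᵀy ≥ c, y ≥ 0). For any primal-feasible x and dual-feasible y,

    \[
      c^{\mathsf T} x \;\le\; (A^{\mathsf T} y)^{\mathsf T} x \;=\; y^{\mathsf T} A x \;\le\; y^{\mathsf T} b ,
    \]

where the first inequality uses x ≥ 0 with Aᵀy ≥ c and the second uses y ≥ 0 with Ax ≤ b. Hence every dual-feasible point bounds the primal optimum, and the duality gap is the difference between the two optimal values.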


Applications

*In geometry: **Geometric median — the point minimizing the sum of distances to a given set of points **Chebyshev center — the centre of the smallest ball containing a given set of points *In statistics: **Iterated conditional modes — maximizing joint probability of Markov random field **Response surface methodology — used in the design of experiments *Automatic label placement *Compressed sensing — reconstruct a signal from knowledge that it is sparse or compressible *Cutting stock problem *Demand optimization *Destination dispatch — an optimization technique for dispatching elevators *Energy minimization *Entropy maximization *Highly optimized tolerance *Hyperparameter optimization *Inventory control problem **Newsvendor model **Extended newsvendor model **Assemble-to-order system *Linear programming decoding *Linear search problem — find a point on a line by moving along the line *Low-rank approximation — find best approximation, constraint is that rank of some matrix is smaller than a given number *Meta-optimization — optimization of the parameters in an optimization method *Multidisciplinary design optimization *Optimal computing budget allocation — maximize the overall simulation efficiency for finding an optimal decision *Paper bag problem *Process optimization *Recursive economics — individuals make a series of two-period optimization decisions over time. *Stigler diet *Space allocation problem *Stress majorization *Trajectory optimization *Transportation theory (mathematics), Transportation theory *Wing-shape optimization


Miscellaneous

*Combinatorial optimization *Dynamic programming **Bellman equation **Hamilton–Jacobi–Bellman equation — continuous-time analogue of Bellman equation **Backward induction — solving dynamic programming problems by reasoning backwards in time (see the sketch at the end of this section) **Optimal stopping — choosing the optimal time to take a particular action ***Odds algorithm ***Robbins' problem *Global optimization: **BRST algorithm **MCS algorithm *Multi-objective optimization — there are multiple conflicting objectives **Benson's algorithm — for linear vector optimization problems *Bilevel optimization — studies problems in which one problem is embedded in another *Optimal substructure *Dykstra's projection algorithm — finds a point in the intersection of two convex sets *Algorithmic concepts: **Barrier function **Penalty method **Trust region *Test functions for optimization: **Rosenbrock function — two-dimensional function with a banana-shaped valley **Himmelblau's function — two-dimensional with four local minima, defined by f(x, y) = (x^2+y-11)^2 + (x+y^2-7)^2 **Rastrigin function — two-dimensional function with many local minima **Shekel function — multimodal and multidimensional *Mathematical Optimization Society
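
The backward induction sketch promised above: in the finite-horizon case, dynamic programming for an optimal stopping problem reduces to a single backward pass over the Bellman recursion. The reward sequence below is invented for illustration.

    def backward_induction(rewards, continuation_cost=0.0):
        """Finite-horizon optimal stopping: rewards[t] is the payoff of stopping
        at time t. Returns the value function V, where V[t] is the best payoff
        achievable from time t onward (Bellman recursion, solved backwards)."""
        T = len(rewards)
        V = [0.0] * (T + 1)
        for t in range(T - 1, -1, -1):
            V[t] = max(rewards[t], V[t + 1] - continuation_cost)  # stop vs. continue
        return V

    # Illustrative rewards: the optimal policy waits for the payoff of 5 at t = 3.
    print(backward_induction([1.0, 3.0, 2.0, 5.0, 0.5]))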


Numerical quadrature (integration)

Numerical integration — the numerical evaluation of an integral *Rectangle method — first-order method, based on (piecewise) constant approximation *Trapezoidal rule — second-order method, based on (piecewise) linear approximation *Simpson's rule — fourth-order method, based on (piecewise) quadratic approximation **Adaptive Simpson's method *Boole's rule — sixth-order method, based on the values at five equidistant points *Newton–Cotes formulas — generalize the above methods *Romberg's method — Richardson extrapolation applied to the trapezoidal rule (see the sketch at the end of this section) *Gaussian quadrature — highest possible degree with given number of points **Chebyshev–Gauss quadrature — extension of Gaussian quadrature for integrals with weight (1 − ''x''²)^(±1/2) on [−1, 1] **Gauss–Hermite quadrature — extension of Gaussian quadrature for integrals with weight exp(−''x''²) on (−∞, ∞) **Gauss–Jacobi quadrature — extension of Gaussian quadrature for integrals with weight (1 − ''x'')^''α'' (1 + ''x'')^''β'' on [−1, 1] **Gauss–Laguerre quadrature — extension of Gaussian quadrature for integrals with weight exp(−''x'') on [0, ∞) **Gauss–Kronrod quadrature formula — nested rule based on Gaussian quadrature **Gaussian quadrature, Gauss–Kronrod rules *Tanh-sinh quadrature — variant of Gaussian quadrature which works well with singularities at the end points *Clenshaw–Curtis quadrature — based on expanding the integrand in terms of Chebyshev polynomials *Adaptive quadrature — adapting the subintervals in which the integration interval is divided depending on the integrand *Monte Carlo integration — takes random samples of the integrand **''See also #Monte Carlo method'' *Quantized state systems method (QSS) — based on the idea of state quantization *Lebedev quadrature — uses a grid on a sphere with octahedral symmetry *Sparse grid *Coopmans approximation — for fractional-order integrals *Numerical differentiation **Numerical smoothing and differentiation **Adjoint state method — approximates gradient of a function in an optimization problem *Euler–Maclaurin formula
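
The sketch promised above: Romberg's method is Richardson extrapolation applied to trapezoidal estimates. Below, the first extrapolated column is computed for the test integral of sin over [0, π], whose exact value is 2; grid sizes are arbitrary demonstration values.

    import numpy as np

    def trapezoid(f, a, b, n):
        """Composite trapezoidal rule with n equal subintervals."""
        x = np.linspace(a, b, n + 1)
        y = f(x)
        return (b - a) / n * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

    T1 = trapezoid(np.sin, 0.0, np.pi, 8)
    T2 = trapezoid(np.sin, 0.0, np.pi, 16)   # same rule with halved step size
    # Richardson extrapolation cancels the leading O(h^2) error term:
    print(T1, T2, T2 + (T2 - T1) / 3)        # the last value is far closer to 2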


Numerical methods for ordinary differential equations

Numerical methods for ordinary differential equations — the numerical solution of ordinary differential equations (ODEs) *Euler method — the most basic method for solving an ODE *Explicit and implicit methods — implicit methods need to solve an equation at every step *Backward Euler method — implicit variant of the Euler method *Trapezoidal rule (differential equations), Trapezoidal rule — second-order implicit method *Runge–Kutta methods — one of the two main classes of methods for initial-value problems (see the RK4 sketch at the end of this section) **Midpoint method — a second-order method with two stages **Heun's method — either a second-order method with two stages, or a third-order method with three stages **Bogacki–Shampine method — a third-order method with four stages (FSAL) and an embedded fourth-order method **Cash–Karp method — a fifth-order method with six stages and an embedded fourth-order method **Dormand–Prince method — a fifth-order method with seven stages (FSAL) and an embedded fourth-order method **Runge–Kutta–Fehlberg method — a fifth-order method with six stages and an embedded fourth-order method **Gauss–Legendre method — family of A-stable methods with optimal order based on Gaussian quadrature **Butcher group — algebraic formalism involving rooted trees for analysing Runge–Kutta methods **List of Runge–Kutta methods *Linear multistep method — the other main class of methods for initial-value problems **Backward differentiation formula — implicit methods of order 2 to 6; especially suitable for stiff equations **Numerov's method — fourth-order method for equations of the form y'' = f(t,y) **Predictor–corrector method — uses one method to approximate solution and another one to increase accuracy *General linear methods — a class of methods encapsulating linear multistep and Runge–Kutta methods *Bulirsch–Stoer algorithm — combines the midpoint method with Richardson extrapolation to attain arbitrary order *Exponential integrator — based on splitting the ODE into a linear part, which is solved exactly, and a nonlinear part *Methods designed for the solution of ODEs from classical physics: **Newmark-beta method — based on the extended mean-value theorem **Verlet integration — a popular second-order method **Leapfrog integration — another name for Verlet integration **Beeman's algorithm — a two-step method extending the Verlet method **Dynamic relaxation *Geometric integrator — a method that preserves some geometric structure of the equation **Symplectic integrator — a method for the solution of Hamilton's equations that preserves the symplectic structure ***Variational integrator — symplectic integrators derived using the underlying variational principle ***Semi-implicit Euler method — variant of Euler method which is symplectic when applied to separable Hamiltonians **Energy drift — phenomenon that energy, which should be conserved, drifts away due to numerical errors *Other methods for initial value problems (IVPs): **Bi-directional delay line **Partial element equivalent circuit *Methods for solving two-point boundary value problems (BVPs): **Shooting method **Direct multiple shooting method — divides the interval into several subintervals and applies the shooting method on each subinterval *Methods for solving differential-algebraic equations (DAEs), i.e., ODEs with constraints: **Constraint algorithm — for solving Newton's equations with constraints **Pantelides algorithm — for reducing the index of a DAE *Methods for solving stochastic differential equations (SDEs): **Euler–Maruyama method — generalization of the Euler method for SDEs **Milstein method — a method with strong order one **Runge–Kutta method (SDE) — generalization of the family of Runge–Kutta methods for SDEs *Methods for solving integral equations: **Nyström method — replaces the integral with a quadrature rule *Analysis: **Truncation error (numerical integration) — local and global truncation errors, and their relationships ***Lady Windermere's Fan (mathematics) — telescopic identity relating local and global truncation errors *Stiff equation — roughly, an ODE for which unstable methods need a very short step size, but stable methods do not **L-stability — method is A-stable and stability function vanishes at infinity *Adaptive stepsize — automatically changing the step size when that seems advantageous *Parareal — a parallel-in-time integration algorithm
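
The RK4 sketch promised above: the classical fourth-order Runge–Kutta method applied to the test problem y′ = −y, y(0) = 1, whose exact solution is exp(−t). The step size is an arbitrary demonstration value.

    import math

    def rk4_step(f, t, y, h):
        """One step of the classical fourth-order Runge–Kutta method."""
        k1 = f(t, y)
        k2 = f(t + h/2, y + h/2 * k1)
        k3 = f(t + h/2, y + h/2 * k2)
        k4 = f(t + h, y + h * k3)
        return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

    f = lambda t, y: -y                # test equation y' = -y
    t, y, h = 0.0, 1.0, 0.1
    while t < 1.0 - 1e-12:
        y = rk4_step(f, t, y, h)
        t += h
    print(y, math.exp(-1.0))           # numerical vs exact value at t = 1

Halving h should reduce the error by roughly a factor of sixteen, consistent with the method's fourth order.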


Numerical methods for partial differential equations

Numerical partial differential equations — the numerical solution of partial differential equations (PDEs)


Finite difference methods

Finite difference method — based on approximating differential operators with difference operators *Finite difference — the discrete analogue of a differential operator **Finite difference coefficient — table of coefficients of finite-difference approximations to derivatives **Discrete Laplace operator — finite-difference approximation of the Laplace operator ***Eigenvalues and eigenvectors of the second derivative — includes eigenvalues of discrete Laplace operator ***Kronecker sum of discrete Laplacians — used for Laplace operator in multiple dimensions **Discrete Poisson equation — discrete analogue of the Poisson equation using the discrete Laplace operator *Stencil (numerical analysis) — the geometric arrangements of grid points affected by a basic step of the algorithm **Compact stencil — stencil which only uses a few grid points, usually only the immediate and diagonal neighbours ***Higher-order compact finite difference scheme **Non-compact stencil — any stencil that is not compact **Five-point stencil — two-dimensional stencil consisting of a point and its four immediate neighbours on a rectangular grid *Finite difference methods for heat equation and related PDEs: **FTCS scheme (forward-time central-space) — first-order explicit **Crank–Nicolson method — second-order implicit *Finite difference methods for hyperbolic PDEs like the wave equation: **Lax–Friedrichs method — first-order explicit **Lax–Wendroff method — second-order explicit **MacCormack method — second-order explicit **Upwind scheme ***Upwind differencing scheme for convection — first-order scheme for convection–diffusion problems **Lax–Wendroff theorem — conservative scheme for hyperbolic system of conservation laws converges to the weak solution *Alternating direction implicit method (ADI) — update using the flow in ''x''-direction and then using flow in ''y''-direction *Nonstandard finite difference scheme *Specific applications: **Finite difference methods for option pricing **Finite-difference time-domain method — a finite-difference method for electrodynamics
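
A minimal sketch of the FTCS scheme listed above, applied to the heat equation u_t = α u_xx on [0, 1] with homogeneous Dirichlet boundary conditions. Grid sizes are illustrative, and the time step is chosen to satisfy the stability bound αΔt/Δx² ≤ 1/2 (see the von Neumann analysis worked out in the ''Analysis'' section below).

    import numpy as np

    alpha, nx, nt = 1.0, 51, 500
    dx = 1.0 / (nx - 1)
    dt = 0.4 * dx**2 / alpha                 # respects alpha*dt/dx**2 <= 1/2
    x = np.linspace(0.0, 1.0, nx)
    u = np.sin(np.pi * x)                    # initial condition, zero at both ends
    for _ in range(nt):
        # forward difference in time, central difference in space
        u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    print(u.max())   # decays roughly like exp(-pi**2 * alpha * nt * dt)

Raising dt above the bound makes the highest-frequency mode grow instead of decay, which is the instability the Courant-style condition rules out.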


Finite element methods, gradient discretisation methods

Finite element method — based on a discretization of the space of solutions *Gradient discretisation method — based on both the discretization of the solution and of its gradient *Finite element method in structural mechanics — a physical approach to finite element methods *Galerkin method — a finite element method in which the residual is orthogonal to the finite element space **Discontinuous Galerkin method — a Galerkin method in which the approximate solution is not continuous *Rayleigh–Ritz method — a finite element method based on variational principles *Spectral element method — high-order finite element methods *hp-FEM — variant in which both the size and the order of the elements are automatically adapted *Examples of finite elements: **Bilinear quadrilateral element — also known as the Q4 element **Constant strain triangle element (CST) — also known as the T3 element **Quadratic quadrilateral element — also known as the Q8 element **Barsoum elements *Direct stiffness method — a particular implementation of the finite element method, often used in structural analysis *Trefftz method *Finite element updating *Extended finite element method — puts functions tailored to the problem in the approximation space *Functionally graded elements — elements for describing functionally graded materials *Superelement — particular grouping of finite elements, employed as a single element *
Interval finite element In numerical analysis, the interval finite element method (interval FEM) is a finite element method that uses interval parameters. Interval FEM can be applied in situations where it is not possible to get reliable probabilistic characteristics of ...
method — combination of finite elements with interval arithmetic *Discrete exterior calculus — discrete form of the exterior calculus of differential geometry *Modal analysis using FEM — solution of eigenvalue problems to find natural vibrations *Céa's lemma — solution in the finite-element space is an almost best approximation in that space of the true solution *Patch test (finite elements) — simple test for the quality of a finite element *MAFELAP (MAthematics of Finite ELements and APplications) — international conference held at Brunel University *NAFEMS — not-for-profit organisation that sets and maintains standards in computer-aided engineering analysis *Multiphase topology optimisation — technique based on finite elements for determining optimal composition of a mixture
*Applied element method — for simulation of cracks and structural collapse *Wood–Armer method — structural analysis method based on finite elements used to design reinforcement for concrete slabs *Isogeometric analysis — integrates finite elements into conventional NURBS-based CAD design tools *Loubignac iteration *Stiffness matrix — finite-dimensional analogue of differential operator *Combination with meshfree methods: **Weakened weak form — form of a PDE that is weaker than the standard weak form **G space — functional space used in formulating the weakened weak form **Smoothed finite element method *Variational multiscale method *List of finite element software packages
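
For the finite element entries above, the simplest nontrivial case fits in a few lines: piecewise-linear elements for −u″ = f on (0, 1) with u(0) = u(1) = 0 yield the classical tridiagonal stiffness matrix. In this sketch the load integrals are lumped (approximated by h·f(x_i)) for brevity, a simplification rather than the standard assembly.

    import numpy as np

    n = 20                               # interior nodes on a uniform mesh
    h = 1.0 / (n + 1)
    # Stiffness matrix for linear elements: (1/h) * tridiag(-1, 2, -1).
    K = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
    x = np.linspace(h, 1.0 - h, n)
    f = np.pi**2 * np.sin(np.pi * x)     # chosen so the exact solution is sin(pi x)
    b = h * f                            # lumped load vector
    u = np.linalg.solve(K, b)
    print(np.max(np.abs(u - np.sin(np.pi * x))))   # nodal error, roughly O(h^2)

Halving h (doubling n) should cut the printed error by about a factor of four, the expected second-order behaviour.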


Other methods

*Spectral method — based on the Fourier transformation **Pseudo-spectral method *Method of lines — reduces the PDE to a large system of ordinary differential equations *Boundary element method (BEM) — based on transforming the PDE to an integral equation on the boundary of the domain ** Interval boundary element method — a version using interval arithmetic *Analytic element method — similar to the boundary element method, but the integral equation is evaluated analytically *Finite volume method — based on dividing the domain in many small domains; popular in computational fluid dynamics **Godunov's scheme — first-order conservative scheme for fluid flow, based on piecewise constant approximation **MUSCL scheme — second-order variant of Godunov's scheme **AUSM — advection upstream splitting method **Flux limiter — limits spatial derivatives (fluxes) in order to avoid spurious oscillations **Riemann solver — a solver for Riemann problems (a conservation law with piecewise constant data) **Properties of discretization schemes — finite volume methods can be conservative, bounded, etc. *Discrete element method — a method in which the elements can move freely relative to each other **Extended discrete element method — adds properties such as strain to each particle **Movable cellular automaton — combination of cellular automata with discrete elements *Meshfree methods — do not use a mesh, but use a particle view of the field **Discrete least squares meshless method — based on minimization of weighted summation of the squared residual **Diffuse element method **Finite pointset method — represent continuum by a point cloud **Moving Particle Semi-implicit Method **Method of fundamental solutions (MFS) — represents solution as linear combination of fundamental solutions **Variants of MFS with source points on the physical boundary: ***Boundary knot method (BKM) ***Boundary particle method (BPM) ***Regularized meshless method (RMM) ***Singular boundary method (SBM) *Methods designed for problems from electromagnetics: **Finite-difference time-domain method — a finite-difference method **Rigorous coupled-wave analysis — semi-analytical Fourier-space method based on Floquet's theorem **Transmission-line matrix method (TLM) — based on analogy between electromagnetic field and mesh of transmission lines **Uniform theory of diffraction — specifically designed for scattering problems *Particle-in-cell — used especially in fluid dynamics **Multiphase particle-in-cell method — considers solid particles as both numerical particles and fluid *High-resolution scheme *Shock capturing method *Vorticity confinement — for vortex-dominated flows in fluid dynamics, similar to shock capturing *Split-step method *Fast marching method *Orthogonal collocation *Lattice Boltzmann methods — for the solution of the Navier–Stokes equations *Roe solver — for the solution of the Euler equations *Relaxation (iterative method) — a method for solving elliptic PDEs by converting them to evolution equations *Broad classes of methods: **Mimesis (mathematics), Mimetic methods — methods that respect in some sense the structure of the original problem **Multiphysics — models consisting of various submodels with different physics **Immersed boundary method — for simulating elastic structures immersed within fluids *Multisymplectic integrator — extension of symplectic integrators, which are for ODEs *Stretched grid method — for problems whose solution can be related to the behavior of an elastic grid


Techniques for improving these methods

*Multigrid method — uses a hierarchy of nested meshes to speed up the methods *Domain decomposition methods — divides the domain into a few subdomains and solves the PDE on these subdomains **Additive Schwarz method **Abstract additive Schwarz method — abstract version of additive Schwarz without reference to geometric information **Balancing domain decomposition method (BDD) — preconditioner for symmetric positive definite matrices **BDDC, Balancing domain decomposition by constraints (BDDC) — further development of BDD **FETI, Finite element tearing and interconnect (FETI) **FETI-DP — further development of FETI **Fictitious domain method — preconditioner constructed with a structured mesh on a fictitious domain of simple shape **Mortar methods — meshes on the subdomains do not need to match **Neumann–Dirichlet method — combines Neumann problem on one subdomain with Dirichlet problem on other subdomain **Neumann–Neumann methods — domain decomposition methods that use Neumann problems on the subdomains **Poincaré–Steklov operator — maps tangential electric field onto the equivalent electric current **Schur complement method — early and basic method on subdomains that do not overlap **Schwarz alternating method — early and basic method on subdomains that overlap *Coarse space (numerical analysis), Coarse space — variant of the problem which uses a discretization with fewer degrees of freedom *Adaptive mesh refinement — uses the computed solution to refine the mesh only where necessary *Fast multipole method — hierarchical method for evaluating particle-particle interactions *Perfectly matched layer — artificial absorbing layer for wave equations, used to implement absorbing boundary conditions


Grids and meshes

*Grid classification / Types of mesh: **Polygon mesh — consists of polygons in 2D or 3D **Triangle mesh — consists of triangles in 2D or 3D ***Triangulation (geometry) — subdivision of a given region into triangles, or higher-dimensional analogue ***Nonobtuse mesh — mesh in which all angles are less than or equal to 90° ***Point-set triangulation — triangle mesh such that the points of a given set are all vertices of triangles ***Polygon triangulation — triangle mesh inside a polygon ***Delaunay triangulation — triangulation such that no vertex is inside the circumcircle of any triangle ***Constrained Delaunay triangulation — generalization of the Delaunay triangulation that forces certain required segments into the triangulation ***Pitteway triangulation — for any point, triangle containing it has nearest neighbour of the point as a vertex ***Minimum-weight triangulation — triangulation of minimum total edge length ***Kinetic triangulation — a triangulation that moves over time ***Triangulated irregular network ***Quasi-triangulation — subdivision into simplices, where vertices are not points but arbitrary sloped line segments **Volume mesh — consists of three-dimensional shapes **Regular grid — consists of congruent parallelograms, or higher-dimensional analogue **Unstructured grid **Geodesic grid — isotropic grid on a sphere *Mesh generation **Image-based meshing — automatic procedure of generating meshes from 3D image data **Marching cubes — extracts a polygon mesh from a scalar field **Parallel mesh generation **Ruppert's algorithm — creates quality Delaunay triangulation from piecewise linear data *Subdivisions: *Apollonian network — undirected graph formed by recursively subdividing a triangle *Barycentric subdivision — standard way of dividing arbitrary convex polygons into triangles, or the higher-dimensional analogue *Improving an existing mesh: **Chew's second algorithm — improves Delaunay triangulation by refining poor-quality triangles **Laplacian smoothing — improves polygonal meshes by moving the vertices *Jump-and-Walk algorithm — for finding triangle in a mesh containing a given point *Spatial twist continuum — dual representation of a mesh consisting of hexahedra *Pseudotriangle — simply connected region between any three mutually tangent convex sets *Simplicial complex — all vertices, line segments, triangles, tetrahedra, ..., making up a mesh


Analysis

*Lax equivalence theorem — a consistent method is convergent if and only if it is stable *Courant–Friedrichs–Lewy condition — stability condition for hyperbolic PDEs *Von Neumann stability analysis — all Fourier components of the error should be stable (worked example at the end of this section) *Numerical diffusion — diffusion introduced by the numerical method, above that which is naturally present **False diffusion *Numerical dispersion *Numerical resistivity — the same, with resistivity instead of diffusion *Weak formulation — a functional-analytic reformulation of the PDE necessary for some methods *Total variation diminishing — property of schemes that do not introduce spurious oscillations *Godunov's theorem — linear monotone schemes can only be of first order *Motz's problem — benchmark problem for singularity problems
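
The worked example promised above: von Neumann stability analysis of the FTCS scheme for the heat equation u_t = α u_xx. Substituting a Fourier mode u_j^n = g^n e^{ikjΔx} into the scheme gives the amplification factor

    \[
      g = 1 - 4r\,\sin^2\!\left(\frac{k\,\Delta x}{2}\right),
      \qquad r = \frac{\alpha\,\Delta t}{\Delta x^2},
    \]

and requiring |g| ≤ 1 for every wavenumber k yields the familiar stability condition r ≤ 1/2, the bound used in the FTCS sketch in the finite difference section above.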


Monte Carlo method

*Variants of the Monte Carlo method:
**Direct simulation Monte Carlo
**Quasi-Monte Carlo method
**Markov chain Monte Carlo
***Metropolis–Hastings algorithm (a minimal sketch follows this list)
****Multiple-try Metropolis — modification which allows larger step sizes
****Wang and Landau algorithm — extension of Metropolis Monte Carlo
****Equation of State Calculations by Fast Computing Machines — 1953 article proposing the Metropolis Monte Carlo algorithm
****Multicanonical ensemble — sampling technique that uses Metropolis–Hastings to compute integrals
***Gibbs sampling
***Coupling from the past
***Reversible-jump Markov chain Monte Carlo
**Dynamic Monte Carlo method
***Kinetic Monte Carlo
***Gillespie algorithm
**Particle filter
***Auxiliary particle filter
**Reverse Monte Carlo
**Demon algorithm
*Pseudo-random number sampling
**Inverse transform sampling — general and straightforward method but computationally expensive
**Rejection sampling — sample from a simpler distribution but reject some of the samples
***Ziggurat algorithm — uses a pre-computed table covering the probability distribution with rectangular segments
**For sampling from a normal distribution:
***Box–Muller transform
***Marsaglia polar method
**Convolution random number generator — generates a random variable as a sum of other random variables
**Indexed search
*Variance reduction techniques:
**Antithetic variates
**Control variates
**Importance sampling
**Stratified sampling
**VEGAS algorithm
*Low-discrepancy sequence
**Constructions of low-discrepancy sequences
*Event generator
*Parallel tempering
*Umbrella sampling — improves sampling in physical systems with significant energy barriers
*Hybrid Monte Carlo
*Ensemble Kalman filter — recursive filter suitable for problems with a large number of variables
*Transition path sampling
*Walk-on-spheres method — generates exit points of Brownian motion from bounded domains
*Applications:
**Ensemble forecasting — produce multiple numerical predictions from slightly different initial conditions or parameters
**Bond fluctuation model — for simulating the conformation and dynamics of polymer systems
**Iterated filtering
**Metropolis light transport
**Monte Carlo localization — estimates the position and orientation of a robot
**Monte Carlo methods for electron transport
**Monte Carlo method for photon transport
**Monte Carlo methods in finance
***Monte Carlo methods for option pricing
***Quasi-Monte Carlo methods in finance
**Monte Carlo molecular modeling
***Path integral molecular dynamics — incorporates Feynman path integrals
**Quantum Monte Carlo
***Diffusion Monte Carlo — uses a Green function to solve the Schrödinger equation
***Gaussian quantum Monte Carlo
***Path integral Monte Carlo
***Reptation Monte Carlo
***Variational Monte Carlo
**Methods for simulating the Ising model:
***Swendsen–Wang algorithm — entire sample is divided into equal-spin clusters
***Wolff algorithm — improvement of the Swendsen–Wang algorithm
***Metropolis–Hastings algorithm
**Auxiliary field Monte Carlo — computes averages of operators in many-body quantum mechanical problems
**Cross-entropy method — for multi-extremal optimization and importance sampling
*Also see the list of statistics topics
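As a concrete instance of the Metropolis–Hastings algorithm listed above, here is a minimal random-walk Metropolis sampler (function and parameter names are ours). Because the Gaussian proposal is symmetric, the Hastings correction cancels and only the density ratio enters:

    import math
    import random

    def metropolis_hastings(log_density, x0, n_samples, step=1.0):
        # Random-walk Metropolis: propose x' = x + N(0, step^2) and accept
        # with probability min(1, p(x')/p(x)); on rejection, the current
        # state is repeated in the chain.
        x = x0
        samples = []
        for _ in range(n_samples):
            proposal = x + random.gauss(0.0, step)
            log_ratio = log_density(proposal) - log_density(x)
            if log_ratio >= 0 or random.random() < math.exp(log_ratio):
                x = proposal
            samples.append(x)
        return samples

    # Sample a standard normal via its unnormalized log-density -x^2/2.
    chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_samples=50_000)
    mean = sum(chain) / len(chain)   # should be close to 0

Working with log-densities, as here, is the usual precaution against underflow when the target density is very small.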


Applications

*Computational physics
**Computational electromagnetics
**Computational fluid dynamics (CFD)
***Numerical methods in fluid mechanics
***Large eddy simulation
***Smoothed-particle hydrodynamics
***Aeroacoustic analogy — used in numerical aeroacoustics to reduce sound sources to simple emitter types
***Stochastic Eulerian Lagrangian method — uses an Eulerian description for fluids and a Lagrangian one for structures
***Explicit algebraic stress model
**Computational magnetohydrodynamics (CMHD) — studies electrically conducting fluids
**Climate model
**Numerical weather prediction
***Geodesic grid
**Celestial mechanics
***Numerical model of the Solar System
**Quantum jump method — used for simulating open quantum systems; operates on the wave function
**Dynamic design analysis method (DDAM) — for evaluating the effect of underwater explosions on equipment
*Computational chemistry
**Cell lists
**Coupled cluster
**Density functional theory
**DIIS — direct inversion in (or of) the iterative subspace (a minimal sketch follows this list)
*Computational sociology
*Computational statistics
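Of these, DIIS is simple enough to sketch: given stored iterates x_i and their residuals r_i, it mixes the iterates with coefficients c_i that sum to one and minimize ||Σ c_i r_i||, obtained from a bordered linear system. A minimal sketch under those assumptions (names are ours; in self-consistent field calculations the residual would typically be the Fock–density commutator, but any fixed-point residual r_i = g(x_i) - x_i serves):

    import numpy as np

    def diis_extrapolate(iterates, residuals):
        # Solve the bordered system for coefficients c with sum(c) = 1
        # minimizing || sum_i c_i r_i ||, then return sum_i c_i x_i.
        m = len(residuals)
        B = np.empty((m + 1, m + 1))
        for i, ri in enumerate(residuals):
            for j, rj in enumerate(residuals):
                B[i, j] = np.dot(ri, rj)   # residual overlap matrix
        B[m, :m] = B[:m, m] = -1.0         # border enforcing the constraint
        B[m, m] = 0.0
        rhs = np.zeros(m + 1)
        rhs[m] = -1.0
        c = np.linalg.solve(B, rhs)[:m]
        return sum(ci * xi for ci, xi in zip(c, iterates))

In practice only the last few iterates are kept, since older residuals make the overlap matrix ill-conditioned.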


Software

For a large list of software, see the list of numerical-analysis software.


Journals

*Acta Numerica
*Mathematics of Computation (published by the American Mathematical Society)
*Journal of Computational and Applied Mathematics
*BIT Numerical Mathematics
*Numerische Mathematik
*Journals from the Society for Industrial and Applied Mathematics
**SIAM Journal on Numerical Analysis
**SIAM Journal on Scientific Computing


Researchers

*Cleve Moler
*Gene H. Golub
*James H. Wilkinson
*Margaret H. Wright
*Nicholas J. Higham
*Lloyd N. Trefethen
*Peter Lax
*Richard S. Varga
*Ulrich W. Kulisch
*Vladik Kreinovich