Mathematical optimization (alternatively spelled ''optimisation'') or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries. In the more general approach, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations constitutes a large area of applied mathematics. More generally, optimization includes finding "best available" values of some objective function given a defined domain (or input), including a variety of different types of objective functions and different types of domains.


Optimization problems

Optimization problems can be divided into two categories, depending on whether the variables are continuous or discrete:

* An optimization problem with discrete variables is known as a ''discrete optimization'' problem, in which an object such as an integer, permutation or graph must be found from a countable set.
* A problem with continuous variables is known as a ''continuous optimization'' problem, in which an optimal value from a continuous function must be found. They can include constrained problems and multimodal problems.

An optimization problem can be represented in the following way:

:''Given:'' a function f : A \to \mathbb{R} from some set A to the real numbers
:''Sought:'' an element \mathbf{x}_0 \in A such that f(\mathbf{x}_0) \leq f(\mathbf{x}) for all \mathbf{x} \in A ("minimization") or such that f(\mathbf{x}_0) \geq f(\mathbf{x}) for all \mathbf{x} \in A ("maximization").

Such a formulation is called an optimization problem or a mathematical programming problem (a term not directly related to computer programming, but still in use for example in linear programming – see History below). Many real-world and theoretical problems may be modeled in this general framework. Since the following is valid

:f(\mathbf{x}_0)\geq f(\mathbf{x}) \Leftrightarrow -f(\mathbf{x}_0)\leq -f(\mathbf{x}),

it suffices to solve only minimization problems. However, the opposite perspective of considering only maximization problems would be valid, too. Problems formulated using this technique in the fields of physics may refer to the technique as ''energy minimization'', speaking of the value of the function f as representing the energy of the system being modeled. In machine learning, it is always necessary to continuously evaluate the quality of a data model by using a cost function where a minimum implies a set of possibly optimal parameters with an optimal (lowest) error.

Typically, A is some subset of the Euclidean space \mathbb{R}^n, often specified by a set of ''constraints'', equalities or inequalities that the members of A have to satisfy. The domain A of f is called the ''search space'' or the ''choice set'', while the elements of A are called ''candidate solutions'' or ''feasible solutions''. The function f is called, variously, an ''objective function'', a ''loss function'' or ''cost function'' (minimization), a ''utility function'' or ''fitness function'' (maximization), or, in certain fields, an ''energy function'' or ''energy functional''. A feasible solution that minimizes (or maximizes, if that is the goal) the objective function is called an ''optimal solution''.

In mathematics, conventional optimization problems are usually stated in terms of minimization. A ''local minimum'' \mathbf{x}^* is defined as an element for which there exists some \delta > 0 such that

:\forall\mathbf{x}\in A \; \text{where} \;\left\Vert\mathbf{x}-\mathbf{x}^*\right\Vert\leq\delta,\,

the expression f(\mathbf{x}^*) \leq f(\mathbf{x}) holds; that is to say, on some region around \mathbf{x}^* all of the function values are greater than or equal to the value at that element. Local maxima are defined similarly. While a local minimum is at least as good as any nearby elements, a global minimum is at least as good as every feasible element. Generally, unless the objective function is convex in a minimization problem, there may be several local minima. In a convex problem, if there is a local minimum that is interior (not on the edge of the set of feasible elements), it is also the global minimum, but a nonconvex problem may have more than one local minimum, not all of which need be global minima. A large number of algorithms proposed for solving nonconvex problems – including the majority of commercially available solvers – are not capable of making a distinction between locally optimal solutions and globally optimal solutions, and will treat the former as actual solutions to the original problem. Global optimization is the branch of applied mathematics and numerical analysis that is concerned with the development of deterministic algorithms that are capable of guaranteeing convergence in finite time to the actual optimal solution of a nonconvex problem.
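The reduction of maximization to minimization above lends itself to a short numerical sketch. The following is an illustration only, not part of the original article; it assumes NumPy and SciPy, and the objective f is a hypothetical example. As the preceding paragraph cautions, a local optimizer like this finds local, not necessarily global, optima.

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical nonconvex objective with several local minima.
    def f(x):
        return np.sin(3 * x[0]) + 0.1 * x[0] ** 2

    # Minimization is taken as the canonical form; a maximum of f is found by
    # minimizing -f, since f(x0) >= f(x) is equivalent to -f(x0) <= -f(x).
    res_min = minimize(f, x0=[0.5])                  # a local minimum near the start
    res_max = minimize(lambda x: -f(x), x0=[0.5])    # a local maximum of f
    print(res_min.x, res_min.fun)
    print(res_max.x, -res_max.fun)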


Notation

Optimization problems are often expressed with special notation. Here are some examples:


Minimum and maximum value of a function

Consider the following notation:

:\min_{x\in\mathbb R}\; \left(x^2 + 1\right)

This denotes the minimum value of the objective function x^2 + 1, when choosing x from the set of real numbers \mathbb R. The minimum value in this case is 1, occurring at x = 0. Similarly, the notation

:\max_{x\in\mathbb R}\; 2x

asks for the maximum value of the objective function 2x, where x may be any real number. In this case, there is no such maximum as the objective function is unbounded, so the answer is "infinity" or "undefined".
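A quick numerical check of the first example, assuming SciPy is available:

    from scipy.optimize import minimize_scalar

    res = minimize_scalar(lambda x: x**2 + 1)
    print(res.x, res.fun)   # ~0.0 and 1.0: the minimum value 1 occurs at x = 0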


Optimal input arguments

Consider the following notation:

:\underset{x\in(-\infty,-1]}{\operatorname{arg\,min}} \; \left(x^2 + 1\right),

or equivalently

:\underset{x}{\operatorname{arg\,min}} \; \left(x^2 + 1\right), \; \text{subject to} \; x\in(-\infty,-1].

This represents the value (or values) of the argument x in the interval (-\infty,-1] that minimizes (or minimize) the objective function x^2 + 1 (the actual minimum value of that function is not what the problem asks for). In this case, the answer is x = -1, since x = 0 is infeasible, that is, it does not belong to the feasible set. Similarly,

:\underset{x\in[-5,5],\; y\in\mathbb R}{\operatorname{arg\,max}} \; x\cos y,

or equivalently

:\underset{x,\; y}{\operatorname{arg\,max}} \; x\cos y, \; \text{subject to} \; x\in[-5,5], \; y\in\mathbb R,

represents the pair (or pairs) (x, y) that maximizes (or maximize) the value of the objective function x\cos y, with the added constraint that x lie in the interval [-5,5] (again, the actual maximum value of the expression does not matter). In this case, the solutions are the pairs of the form (5, 2k\pi) and (-5, (2k+1)\pi), where k ranges over all integers. Operators \operatorname{arg\,min} and \operatorname{arg\,max} are sometimes also written as \operatorname{argmin} and \operatorname{argmax}, and stand for ''argument of the minimum'' and ''argument of the maximum''.
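The constrained arg min example can likewise be checked numerically; a minimal sketch assuming SciPy, with the bound encoding x \in (-\infty,-1]:

    from scipy.optimize import minimize

    # arg min of x^2 + 1 subject to x <= -1; the bound makes x = 0 infeasible.
    res = minimize(lambda x: x[0]**2 + 1, x0=[-2.0], bounds=[(None, -1.0)])
    print(res.x)   # ~[-1.0]: the constrained argmin, as derived above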


History

Fermat and Lagrange found calculus-based formulae for identifying optima, while Newton and Gauss proposed iterative methods for moving towards an optimum. The term "linear programming" for certain optimization cases was due to George B. Dantzig, although much of the theory had been introduced by Leonid Kantorovich in 1939. (''Programming'' in this context does not refer to computer programming, but comes from the use of ''program'' by the United States military to refer to proposed training and logistics schedules, which were the problems Dantzig studied at that time.) Dantzig published the Simplex algorithm in 1947, and John von Neumann developed the theory of duality in the same year. Other notable researchers in mathematical optimization include the following:

* Richard Bellman
* Dimitri Bertsekas
* Michel Bierlaire
* Roger Fletcher
* Ronald A. Howard
* Fritz John
* Narendra Karmarkar
* William Karush
* Leonid Khachiyan
* Bernard Koopman
* Harold Kuhn
* László Lovász
* Arkadi Nemirovski
* Yurii Nesterov
* Lev Pontryagin
* R. Tyrrell Rockafellar
* Naum Z. Shor
* Albert Tucker


Major subfields

* Convex programming studies the case when the objective function is convex (minimization) or concave (maximization) and the constraint set is convex. This can be viewed as a particular case of nonlinear programming or as a generalization of linear or convex quadratic programming.
** Linear programming (LP), a type of convex programming, studies the case in which the objective function ''f'' is linear and the constraints are specified using only linear equalities and inequalities. Such a constraint set is called a polyhedron or a polytope if it is bounded. (A small numerical sketch follows at the end of this section.)
** Second-order cone programming (SOCP) is a convex program, and includes certain types of quadratic programs.
** Semidefinite programming (SDP) is a subfield of convex optimization where the underlying variables are semidefinite matrices. It is a generalization of linear and convex quadratic programming.
** Conic programming is a general form of convex programming. LP, SOCP and SDP can all be viewed as conic programs with the appropriate type of cone.
** Geometric programming is a technique whereby objective and inequality constraints expressed as posynomials and equality constraints as monomials can be transformed into a convex program.
* Integer programming studies linear programs in which some or all variables are constrained to take on integer values. This is not convex, and in general much more difficult than regular linear programming.
* Quadratic programming allows the objective function to have quadratic terms, while the feasible set must be specified with linear equalities and inequalities. For specific forms of the quadratic term, this is a type of convex programming.
* Fractional programming studies optimization of ratios of two nonlinear functions. The special class of concave fractional programs can be transformed to a convex optimization problem.
* Nonlinear programming studies the general case in which the objective function or the constraints or both contain nonlinear parts. This may or may not be a convex program. In general, whether the program is convex affects the difficulty of solving it.
* Stochastic programming studies the case in which some of the constraints or parameters depend on random variables.
* Robust optimization is, like stochastic programming, an attempt to capture uncertainty in the data underlying the optimization problem. Robust optimization aims to find solutions that are valid under all possible realizations of the uncertainties defined by an uncertainty set.
* Combinatorial optimization is concerned with problems where the set of feasible solutions is discrete or can be reduced to a discrete one.
* Stochastic optimization is used with random (noisy) function measurements or random inputs in the search process.
* Infinite-dimensional optimization studies the case when the set of feasible solutions is a subset of an infinite-dimensional space, such as a space of functions.
* Heuristics and metaheuristics make few or no assumptions about the problem being optimized. Usually, heuristics do not guarantee that any optimal solution need be found. On the other hand, heuristics are used to find approximate solutions for many complicated optimization problems.
* Constraint satisfaction studies the case in which the objective function ''f'' is constant (this is used in artificial intelligence, particularly in automated reasoning).
** Constraint programming is a programming paradigm wherein relations between variables are stated in the form of constraints.
* Disjunctive programming is used where at least one constraint must be satisfied but not all. It is of particular use in scheduling.
* Space mapping is a concept for modeling and optimization of an engineering system to high-fidelity (fine) model accuracy exploiting a suitable physically meaningful coarse or surrogate model.

In a number of subfields, the techniques are designed primarily for optimization in dynamic contexts (that is, decision making over time):

* Calculus of variations is concerned with finding the best way to achieve some goal, such as finding a surface whose boundary is a specific curve, but with the least possible area.
* Optimal control theory is a generalization of the calculus of variations which introduces control policies.
* Dynamic programming is the approach used to solve stochastic optimization problems involving randomness and unknown model parameters. It studies the case in which the optimization strategy is based on splitting the problem into smaller subproblems. The equation that describes the relationship between these subproblems is called the Bellman equation.
* Mathematical programming with equilibrium constraints is where the constraints include variational inequalities or complementarities.
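As a concrete illustration of linear programming from the list above, here is a minimal sketch assuming SciPy; the LP itself (maximize 3x + 2y under the stated constraints) is a hypothetical example.

    from scipy.optimize import linprog

    # Hypothetical LP: maximize 3x + 2y subject to x + y <= 4, x <= 2, x, y >= 0.
    # linprog minimizes, so the objective is passed negated.
    res = linprog(c=[-3, -2],
                  A_ub=[[1, 1], [1, 0]], b_ub=[4, 2],
                  bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)   # optimal point (2, 2) and maximal objective value 10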


Multi-objective optimization

Adding more than one objective to an optimization problem adds complexity. For example, to optimize a structural design, one would desire a design that is both light and rigid. When two objectives conflict, a trade-off must be created. There may be one lightest design, one stiffest design, and an infinite number of designs that are some compromise of weight and rigidity. The set of trade-off designs that improve upon one criterion at the expense of another is known as the Pareto set. The curve created plotting weight against stiffness of the best designs is known as the Pareto frontier. A design is judged to be "Pareto optimal" (equivalently, "Pareto efficient" or in the Pareto set) if it is not dominated by any other design: if it is worse than another design in some respects and no better in any respect, then it is dominated and is not Pareto optimal. The choice among "Pareto optimal" solutions to determine the "favorite solution" is delegated to the decision maker. In other words, defining the problem as multi-objective optimization signals that some information is missing: desirable objectives are given but combinations of them are not rated relative to each other. In some cases, the missing information can be derived by interactive sessions with the decision maker. Multi-objective optimization problems have been generalized further into vector optimization problems where the (partial) ordering is no longer given by the Pareto ordering.
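The dominance relation just described can be stated compactly in code. The following is a minimal sketch, not from the original article; dominates and pareto_set are hypothetical helper names, and both objectives are taken to be minimized (e.g. weight and compliance).

    # 'a' dominates 'b' if 'a' is no worse in every objective and strictly
    # better in at least one (both objectives minimized here).
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_set(points):
        return [p for p in points if not any(dominates(q, p) for q in points)]

    designs = [(1.0, 9.0), (2.0, 7.0), (3.0, 8.0), (4.0, 4.0)]   # (weight, compliance)
    print(pareto_set(designs))   # (3.0, 8.0) drops out: dominated by (2.0, 7.0)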


Multi-modal or global optimization

Optimization problems are often multi-modal; that is, they possess multiple good solutions. They could all be globally good (same cost function value) or there could be a mix of globally good and locally good solutions. Obtaining all (or at least some of) the multiple solutions is the goal of a multi-modal optimizer. Classical optimization techniques, due to their iterative approach, do not perform satisfactorily when they are used to obtain multiple solutions, since it is not guaranteed that different solutions will be obtained even with different starting points in multiple runs of the algorithm. Common approaches to global optimization problems, where multiple local extrema may be present, include evolutionary algorithms, Bayesian optimization and simulated annealing.


Classification of critical points and extrema


Feasibility problem

The ''satisfiability problem'', also called the ''feasibility problem'', is just the problem of finding any feasible solution at all without regard to objective value. This can be regarded as the special case of mathematical optimization where the objective value is the same for every solution, and thus any solution is optimal. Many optimization algorithms need to start from a feasible point. One way to obtain such a point is to relax the feasibility conditions using a slack variable; with enough slack, any starting point is feasible. Then, minimize that slack variable until the slack is null or negative.
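This slack-variable device can be sketched as follows; a minimal illustration assuming SciPy, with hypothetical constraints g_i(x) \leq 0.

    from scipy.optimize import minimize

    # Feasibility for g1(x) <= 0, g2(x) <= 0 via a slack variable s:
    # minimize s subject to g_i(x) <= s. With s large enough, any starting
    # point is feasible for the relaxed problem; s <= 0 at the optimum
    # certifies feasibility of the original constraints.
    g = [lambda x: x[0] ** 2 + x[1] ** 2 - 4,    # inside a circle of radius 2
         lambda x: 1 - x[0]]                     # x0 >= 1

    cons = [{"type": "ineq", "fun": lambda z, gi=gi: z[2] - gi(z[:2])} for gi in g]
    res = minimize(lambda z: z[2], x0=[0.0, 0.0, 10.0], constraints=cons)
    print(res.x[:2], "slack:", res.x[2])   # slack <= 0 -> a feasible point found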


Existence

The extreme value theorem of Karl Weierstrass states that a continuous real-valued function on a compact set attains its maximum and minimum value. More generally, a lower semi-continuous function on a compact set attains its minimum; an upper semi-continuous function on a compact set attains its maximum. Compactness matters here: for example, f(x) = x attains neither a minimum nor a maximum on the open interval (0, 1), but attains both on [0, 1].


Necessary conditions for optimality

One of Fermat's theorems states that optima of unconstrained problems are found at stationary points, where the first derivative or the gradient of the objective function is zero (see first derivative test). More generally, they may be found at critical points, where the first derivative or gradient of the objective function is zero or is undefined, or on the boundary of the choice set. An equation (or set of equations) stating that the first derivative(s) equal(s) zero at an interior optimum is called a 'first-order condition' or a set of first-order conditions. Optima of equality-constrained problems can be found by the Lagrange multiplier method. The optima of problems with equality and/or inequality constraints can be found using the 'Karush–Kuhn–Tucker conditions'.
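The first-order conditions and the Lagrange multiplier method can be worked symbolically. A minimal sketch assuming SymPy; the problem (minimize x^2 + y^2 subject to x + y = 1) is a hypothetical example.

    import sympy as sp

    x, y, lam = sp.symbols("x y lambda")
    f = x**2 + y**2          # objective
    g = x + y - 1            # equality constraint g = 0
    L = f - lam * g          # Lagrangian

    # First-order conditions: all partial derivatives of L vanish.
    stationary = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True)
    print(stationary)        # [{x: 1/2, y: 1/2, lambda: 1}]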


Sufficient conditions for optimality

While the first derivative test identifies points that might be extrema, this test does not distinguish a point that is a minimum from one that is a maximum or one that is neither. When the objective function is twice differentiable, these cases can be distinguished by checking the second derivative or the matrix of second derivatives (called the Hessian matrix) in unconstrained problems, or the matrix of second derivatives of the objective function and the constraints called the bordered Hessian in constrained problems. The conditions that distinguish maxima, or minima, from other stationary points are called 'second-order conditions' (see 'Second derivative test'). If a candidate solution satisfies the first-order conditions, then the satisfaction of the second-order conditions as well is sufficient to establish at least local optimality.
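The second derivative test can also be carried out symbolically. A minimal sketch assuming SymPy, on a hypothetical unconstrained objective:

    import sympy as sp

    x, y = sp.symbols("x y")
    f = x**3 - 3*x + y**2          # hypothetical twice-differentiable objective

    grad = [sp.diff(f, v) for v in (x, y)]
    H = sp.hessian(f, (x, y))

    for pt in sp.solve(grad, [x, y], dict=True):
        eigs = H.subs(pt).eigenvals()
        kind = ("local minimum" if all(e > 0 for e in eigs)
                else "local maximum" if all(e < 0 for e in eigs)
                else "saddle or degenerate")
        print(pt, kind)
    # (x=1, y=0) is a local minimum; (x=-1, y=0) is a saddle point.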


Sensitivity and continuity of optima

The envelope theorem describes how the value of an optimal solution changes when an underlying parameter changes. The process of computing this change is called comparative statics. The maximum theorem of Claude Berge (1963) describes the continuity of an optimal solution as a function of underlying parameters.


Calculus of optimization

For unconstrained problems with twice-differentiable functions, some critical points can be found by finding the points where the gradient of the objective function is zero (that is, the stationary points). More generally, a zero subgradient certifies that a local minimum has been found for minimization problems with convex functions and other locally Lipschitz functions. Further, critical points can be classified using the definiteness of the Hessian matrix: if the Hessian is ''positive'' definite at a critical point, then the point is a local minimum; if the Hessian matrix is negative definite, then the point is a local maximum; finally, if indefinite, then the point is some kind of saddle point. Constrained problems can often be transformed into unconstrained problems with the help of Lagrange multipliers. Lagrangian relaxation can also provide approximate solutions to difficult constrained problems. When the objective function is a convex function, then any local minimum will also be a global minimum. There exist efficient numerical techniques for minimizing convex functions, such as interior-point methods.


Global convergence

More generally, if the objective function is not a quadratic function, then many optimization methods use other techniques to ensure that some subsequence of iterations converges to an optimal solution. The first and still popular method for ensuring convergence relies on line searches, which optimize a function along one dimension. A second and increasingly popular method for ensuring convergence uses trust regions. Both line searches and trust regions are used in modern methods of non-differentiable optimization. Usually, a global optimizer is much slower than advanced local optimizers (such as BFGS), so often an efficient global optimizer can be constructed by starting the local optimizer from different starting points, as in the sketch below.
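A minimal sketch of that multistart construction, assuming NumPy and SciPy; the multimodal objective is a hypothetical example.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    def f(x):   # hypothetical multimodal objective
        return np.sin(3 * x[0]) + 0.1 * x[0] ** 2

    # Multistart: run a fast local optimizer (BFGS) from several random
    # starting points and keep the best result.
    results = [minimize(f, x0=rng.uniform(-5, 5, size=1), method="BFGS")
               for _ in range(20)]
    best = min(results, key=lambda r: r.fun)
    print(best.x, best.fun)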


Computational optimization techniques

To solve problems, researchers may use algorithms that terminate in a finite number of steps, or iterative methods that converge to a solution (on some specified class of problems), or heuristics that may provide approximate solutions to some problems (although their iterates need not converge).


Optimization algorithms

* Simplex algorithm of George Dantzig, designed for linear programming
* Extensions of the simplex algorithm, designed for quadratic programming and for linear-fractional programming
* Variants of the simplex algorithm that are especially suited for network optimization
* Combinatorial algorithms
* Quantum optimization algorithms


Iterative methods

The iterative methods used to solve problems of nonlinear programming differ according to whether they evaluate Hessians, gradients, or only function values. While evaluating Hessians (H) and gradients (G) improves the rate of convergence, for functions for which these quantities exist and vary sufficiently smoothly, such evaluations increase the computational complexity (or computational cost) of each iteration. In some cases, the computational complexity may be excessively high.

One major criterion for optimizers is just the number of required function evaluations, as this often is already a large computational effort, usually much more effort than within the optimizer itself, which mainly has to operate over the N variables. The derivatives provide detailed information for such optimizers, but are even harder to calculate; e.g., approximating the gradient takes at least N+1 function evaluations (a short sketch of this scheme follows the list below). For approximations of the 2nd derivatives (collected in the Hessian matrix), the number of function evaluations is in the order of N². Newton's method requires the 2nd-order derivatives, so for each iteration, the number of function calls is in the order of N², but for a simpler pure gradient optimizer it is only N. However, gradient optimizers usually need more iterations than Newton's algorithm. Which one is best with respect to the number of function calls depends on the problem itself.

* Methods that evaluate Hessians (or approximate Hessians, using finite differences):
** Newton's method
** Sequential quadratic programming: A Newton-based method for small-medium scale ''constrained'' problems. Some versions can handle large-dimensional problems.
** Interior point methods: This is a large class of methods for constrained optimization, some of which use only (sub)gradient information and others of which require the evaluation of Hessians.
* Methods that evaluate gradients, or approximate gradients in some way (or even subgradients):
** Coordinate descent methods: Algorithms which update a single coordinate in each iteration
** Conjugate gradient methods: Iterative methods for large problems. (In theory, these methods terminate in a finite number of steps with quadratic objective functions, but this finite termination is not observed in practice on finite-precision computers.)
** Gradient descent (alternatively, "steepest descent" or "steepest ascent"): A (slow) method of historical and theoretical interest, which has had renewed interest for finding approximate solutions of enormous problems.
** Subgradient methods: An iterative method for large locally Lipschitz functions using generalized gradients. Following Boris T. Polyak, subgradient-projection methods are similar to conjugate-gradient methods.
** Bundle method of descent: An iterative method for small-medium-sized problems with locally Lipschitz functions, particularly for convex minimization problems (similar to conjugate gradient methods).
** Ellipsoid method: An iterative method for small problems with quasiconvex objective functions and of great theoretical interest, particularly in establishing the polynomial time complexity of some combinatorial optimization problems. It has similarities with Quasi-Newton methods.
** Conditional gradient method (Frank–Wolfe) for approximate minimization of specially structured problems with linear constraints, especially with traffic networks. For general unconstrained problems, this method reduces to the gradient method, which is regarded as obsolete (for almost all problems).
** Quasi-Newton methods: Iterative methods for medium-large problems (e.g. N<1000).
** Simultaneous perturbation stochastic approximation (SPSA) method for stochastic optimization; uses random (efficient) gradient approximation.
* Methods that evaluate only function values: If a problem is continuously differentiable, then gradients can be approximated using finite differences, in which case a gradient-based method can be used.
** Interpolation methods
** Pattern search methods, which have better convergence properties than the Nelder–Mead heuristic (with simplices), which is listed below.
** Mirror descent
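The N+1 evaluation count for a forward-difference gradient approximation, referenced above, can be made explicit. A minimal sketch assuming NumPy; fd_gradient is a hypothetical helper name.

    import numpy as np

    def fd_gradient(f, x, h=1e-6):
        """Forward-difference gradient: exactly N + 1 evaluations of f."""
        fx = f(x)                                # 1 evaluation at the base point
        g = np.empty_like(x)
        for i in range(x.size):                  # N more evaluations, one per variable
            e = np.zeros_like(x)
            e[i] = h
            g[i] = (f(x + e) - fx) / h
        return g

    f = lambda x: x[0] ** 2 + 3 * x[1] ** 2      # hypothetical objective
    print(fd_gradient(f, np.array([1.0, 2.0])))  # ~[2., 12.]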


Heuristics

Besides (finitely terminating) algorithms and (convergent) iterative methods, there are heuristics. A heuristic is any algorithm which is not guaranteed (mathematically) to find the solution, but which is nevertheless useful in certain practical situations. List of some well-known heuristics:

* Differential evolution
* Dynamic relaxation
* Evolutionary algorithms
* Genetic algorithms
* Hill climbing with random restart
* Memetic algorithm
* Nelder–Mead simplicial heuristic: A popular heuristic for approximate minimization (without calling gradients)
* Particle swarm optimization
* Simulated annealing
* Stochastic tunneling
* Tabu search
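As one concrete example from this list, hill climbing with random restart admits a very short sketch; a minimal illustration assuming NumPy, with no guarantee of optimality, as the text notes.

    import numpy as np

    rng = np.random.default_rng(1)

    def f(x):   # hypothetical objective to minimize
        return np.sin(3 * x) + 0.1 * x ** 2

    # Hill climbing with random restarts: accept only improving random steps,
    # restart from a fresh random point, and keep the best result overall.
    best_x, best_f = None, np.inf
    for _ in range(10):                       # restarts
        x = rng.uniform(-5, 5)
        for _ in range(200):                  # local random steps
            cand = x + rng.normal(scale=0.1)
            if f(cand) < f(x):
                x = cand
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    print(best_x, best_f)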


Applications


Mechanics

Problems in rigid body dynamics (in particular articulated rigid body dynamics) often require mathematical programming techniques, since rigid body dynamics can be viewed as attempting to solve an ordinary differential equation on a constraint manifold; the constraints are various nonlinear geometric constraints such as "these two points must always coincide", "this surface must not penetrate any other", or "this point must always lie somewhere on this curve". Also, the problem of computing contact forces can be done by solving a linear complementarity problem, which can also be viewed as a QP (quadratic programming) problem. Many design problems can also be expressed as optimization programs. This application is called design optimization. One subset is engineering optimization, and another recent and growing subset of this field is multidisciplinary design optimization, which, while useful in many problems, has in particular been applied to aerospace engineering problems. This approach may be applied in cosmology and astrophysics.


Economics and finance

Economics is closely enough linked to optimization of agents that an influential definition relatedly describes economics ''qua'' science as the "study of human behavior as a relationship between ends and scarce means" with alternative uses. Modern optimization theory includes traditional optimization theory but also overlaps with game theory and the study of economic equilibria. The ''Journal of Economic Literature'' codes classify mathematical programming, optimization techniques, and related topics under JEL:C61-C63.

In microeconomics, the utility maximization problem and its dual problem, the expenditure minimization problem, are economic optimization problems. Insofar as they behave consistently, consumers are assumed to maximize their utility, while firms are usually assumed to maximize their profit. Also, agents are often modeled as being risk-averse, thereby preferring to avoid risk. Asset prices are also modeled using optimization theory, though the underlying mathematics relies on optimizing stochastic processes rather than on static optimization. International trade theory also uses optimization to explain trade patterns between nations. The optimization of portfolios is an example of multi-objective optimization in economics.

Since the 1970s, economists have modeled dynamic decisions over time using control theory. For example, dynamic search models are used to study labor-market behavior. A crucial distinction is between deterministic and stochastic models. Macroeconomists build dynamic stochastic general equilibrium (DSGE) models that describe the dynamics of the whole economy as the result of the interdependent optimizing decisions of workers, consumers, investors, and governments.


Electrical engineering

Some common applications of optimization techniques in electrical engineering include active filter design, stray field reduction in superconducting magnetic energy storage systems, space mapping design of microwave structures, handset antennas, and electromagnetics-based design. Electromagnetically validated design optimization of microwave components and antennas has made extensive use of an appropriate physics-based or empirical surrogate model and space mapping methodologies since the discovery of space mapping in 1993.


Civil engineering

Optimization has been widely used in civil engineering. Construction management and transportation engineering are among the main branches of civil engineering that heavily rely on optimization. The most common civil engineering problems that are solved by optimization are cut and fill of roads, life-cycle analysis of structures and infrastructures, resource leveling, water resource allocation, traffic management and schedule optimization.


Operations research

Another field that uses optimization techniques extensively is operations research. Operations research also uses stochastic modeling and simulation to support improved decision-making. Increasingly, operations research uses stochastic programming to model dynamic decisions that adapt to events; such problems can be solved with large-scale optimization and stochastic optimization methods.


Control engineering

Mathematical optimization is used in much modern controller design. High-level controllers such as model predictive control (MPC) or real-time optimization (RTO) employ mathematical optimization. These algorithms run online and repeatedly determine values for decision variables, such as choke openings in a process plant, by iteratively solving a mathematical optimization problem including constraints and a model of the system to be controlled.


Geophysics

Optimization techniques are regularly used in geophysical parameter estimation problems. Given a set of geophysical measurements, e.g. seismic recordings, it is common to solve for the physical properties and geometrical shapes of the underlying rocks and fluids. The majority of problems in geophysics are nonlinear, with both deterministic and stochastic methods being widely used.


Molecular modeling

Nonlinear optimization methods are widely used in conformational analysis.


Computational systems biology

Optimization techniques are used in many facets of computational systems biology such as model building, optimal experimental design, metabolic engineering, and synthetic biology. Linear programming has been applied to calculate the maximal possible yields of fermentation products, and to infer gene regulatory networks from multiple microarray datasets as well as transcriptional regulatory networks from high-throughput data. Nonlinear programming has been used to analyze energy metabolism and has been applied to metabolic engineering and parameter estimation in biochemical pathways.




See also

* Brachistochrone
* Curve fitting
* Deterministic global optimization
* Goal programming
* Important publications in optimization
* Least squares
* Mathematical Optimization Society (formerly Mathematical Programming Society)
* Mathematical optimization algorithms
* Mathematical optimization software
* Process optimization
* Simulation-based optimization
* Test functions for optimization
* Variational calculus
* Vehicle routing problem

