In mathematics, computer science and operations research, mathematical
optimization or mathematical programming, alternatively spelled
optimisation, is the selection of a best element (with regard to some
criterion) from some set of available alternatives.[1]
In the simplest case, an optimization problem consists of maximizing
or minimizing a real function by systematically choosing input values
from within an allowed set and computing the value of the function.
The generalization of optimization theory and techniques to other
formulations constitutes a large area of applied mathematics. More
generally, optimization includes finding "best available" values of
some objective function given a defined domain (or input), including a
variety of different types of objective functions and different types
of domains.
Optimization problems[edit]
Main article: Optimization problem
An optimization problem can be represented in the following way:
Given: a function f : A → ℝ from some set A to the real numbers
Sought: an element x0 in A such that f(x0) ≤ f(x) for all x in A ("minimization") or such that f(x0) ≥ f(x) for all x in A ("maximization").
Such a formulation is called an optimization problem or a mathematical
programming problem (a term not directly related to computer
programming, but still in use for example in linear programming –
see History below). Many real-world and theoretical problems may be
modeled in this general framework. Problems formulated using this
technique in the fields of physics and computer vision may refer to
the technique as energy minimization, speaking of the value of the
function f as representing the energy of the system being modeled.
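When the set A is finite, the definition above can be executed directly by exhaustive search. A minimal sketch (the set and objective here are illustrative, not from the article):

```python
def minimize_over(A, f):
    """Return an element x0 of A with f(x0) <= f(x) for all x in A
    ("minimization" in the sense defined above)."""
    best, best_val = None, float("inf")
    for x in A:
        v = f(x)
        if v < best_val:
            best, best_val = x, v
    return best, best_val

# Illustrative finite feasible set and objective function.
x0, v0 = minimize_over([-3, -1, 0, 2, 5], lambda x: x * x + 1)
# x0 == 0, v0 == 1
```

For infinite or continuous A, of course, enumeration is impossible and the iterative methods discussed later in the article are used instead.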
Typically, A is some subset of the Euclidean space ℝⁿ, often specified by a set of constraints, equalities or inequalities that the members of A have to satisfy. A local minimum x* is defined as a point for which there exists some δ > 0 such that for all x with ‖x − x*‖ ≤ δ, the expression f(x*) ≤ f(x) holds; that is to say, on some region around x* all of the function values are greater than or equal to the value at that point. Local maxima are defined similarly.
While a local minimum is at least as good as any nearby points, a global minimum is at least as good as every feasible point. In a convex problem, if there is a local minimum that is interior (not on the edge of the set of feasible points), it is also the global minimum, but a nonconvex problem may have more than one local minimum, not all of which need be global minima. A large number of algorithms proposed for solving nonconvex problems, including the majority of commercially available solvers, are not capable of making a distinction between locally optimal solutions and globally optimal solutions, and will treat the former as actual solutions to the original problem. Global optimization is the branch of applied mathematics and numerical analysis that is concerned with the development of deterministic algorithms that are capable of guaranteeing convergence in finite time to the actual optimal solution of a nonconvex problem.
Notation[edit]
Optimization problems are often expressed with special notation. Here are some examples:
Minimum and maximum value of a function[edit]
Consider the following notation:
min_{x ∈ ℝ} (x² + 1)
This denotes the minimum value of the objective function x² + 1, when choosing x from the set of real numbers ℝ. The minimum value in this case is 1, occurring at x = 0.
Similarly, the notation
max_{x ∈ ℝ} 2x
asks for the maximum value of the objective function 2x, where x may be any real number.
In this case, there is no such maximum as the objective function is unbounded, so the answer is "infinity" or "undefined".
Optimal input arguments[edit]
Main article: Arg max
Consider the following notation:
arg min_{x ∈ (−∞, −1]} (x² + 1),
or equivalently
arg min_x (x² + 1), subject to: x ∈ (−∞, −1].
This represents the value (or values) of the argument x in the interval (−∞, −1] that minimizes (or minimize) the objective function x² + 1 (the actual minimum value of that function is not what the problem asks for). In this case, the answer is x = −1, since x = 0 is infeasible, i.e. does not belong to the feasible set.
Similarly,
arg max_{x ∈ [−5, 5], y ∈ ℝ} x cos(y),
or equivalently
arg max_{x, y} x cos(y), subject to: x ∈ [−5, 5], y ∈ ℝ,
represents the (x, y) pair (or pairs) that maximizes (or maximize) the value of the objective function x cos(y), with the added constraint that x lie in the interval [−5, 5] (again, the actual maximum value of the expression does not matter).
In this case, the solutions are the pairs of the form (5, 2kπ) and
(−5,(2k+1)π), where k ranges over all integers.
arg min and arg max are sometimes also written argmin and argmax, and
stand for argument of the minimum and argument of the maximum.
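The arg min example above can also be found numerically. A small sketch (the solver is a generic golden-section search, not a method the article prescribes; the bounded interval [−10, −1] stands in for the feasible set (−∞, −1]):

```python
import math

def golden_section_argmin(f, a, b, tol=1e-8):
    """Locate the minimizer of a unimodal f on [a, b] by golden-section
    search (re-evaluates f at both probe points each loop for simplicity)."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c              # minimizer lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d              # minimizer lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2

x_star = golden_section_argmin(lambda x: x * x + 1, -10.0, -1.0)
# x_star is approximately -1.0: over (-inf, -1], the minimum of x^2 + 1
# is attained at the boundary, matching the arg min discussion above.
```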
History[edit]
Fermat and Lagrange found calculus-based formulae for identifying
optima, while Newton and Gauss proposed iterative methods for moving
towards an optimum.
The term "linear programming" for certain optimization cases was due
to George B. Dantzig, although much of the theory had been
introduced by Leonid Kantorovich in 1939.
Other notable researchers in mathematical optimization include:
Aharon Ben-Tal, Richard Bellman, Roger Fletcher, Ronald A. Howard, Fritz John, Narendra Karmarkar, William Karush, Leonid Khachiyan, Bernard Koopman, Harold Kuhn, László Lovász, Arkadi Nemirovski, Yurii Nesterov, Boris Polyak, Lev Pontryagin, James Renegar, R. Tyrrell Rockafellar, Cornelis Roos, Naum Z. Shor, Michael J. Todd, and Albert Tucker.
Major subfields[edit]
Disjunctive programming is used where at least one constraint must be
satisfied but not all. It is of particular use in scheduling.
In a number of subfields, the techniques are designed primarily for optimization in dynamic contexts (that is, decision making over time).
Multi-objective optimization[edit]
Main article: Multi-objective optimization
Adding more than one objective to an optimization problem adds
complexity. For example, to optimize a structural design, one would
desire a design that is both light and rigid. When two objectives
conflict, a trade-off must be created. There may be one lightest
design, one stiffest design, and an infinite number of designs that
are some compromise of weight and rigidity. The set of trade-off
designs that cannot be improved upon according to one criterion
without hurting another criterion is known as the Pareto set. The
curve created by plotting weight against stiffness of the best designs
is known as the Pareto frontier.
A design is judged to be "Pareto optimal" (equivalently, "Pareto
efficient" or in the Pareto set) if it is not dominated by any other
design: If it is worse than another design in some respects and no
better in any respect, then it is dominated and is not Pareto optimal.
The choice among "Pareto optimal" solutions to determine the "favorite
solution" is delegated to the decision maker. In other words, defining
the problem as multi-objective optimization signals that some
information is missing: desirable objectives are given but
combinations of them are not rated relative to each other. In some
cases, the missing information can be derived by interactive sessions
with the decision maker.
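The dominance test described above is mechanical, so the Pareto set of a finite list of candidate designs can be computed directly. A sketch with invented example data (compliance, the inverse of stiffness, is used so that both objectives are minimized):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (both objectives are minimized here)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_set(designs):
    """Keep exactly the designs not dominated by any other design."""
    return [d for d in designs if not any(dominates(o, d) for o in designs)]

# Hypothetical (weight, compliance) pairs; lower is better in both.
designs = [(1.0, 9.0), (2.0, 5.0), (3.0, 3.0), (4.0, 3.5), (5.0, 1.0)]
front = pareto_set(designs)
# (4.0, 3.5) is dominated by (3.0, 3.0) and drops out; the other four
# designs are mutual trade-offs and form the Pareto set.
```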
Iterative methods[edit]
Main article: Iterative method
See also: Newton's method in optimization, Quasi-Newton method, Finite difference, Approximation theory, and Numerical analysis
The iterative methods used to solve problems of nonlinear programming differ according to whether they evaluate Hessians, gradients, or only function values. While evaluating Hessians (H) and gradients (G) improves the rate of convergence, for functions for which these quantities exist and vary sufficiently smoothly, such evaluations increase the computational complexity (or computational cost) of each iteration. In some cases, the computational complexity may be excessively high.
One major criterion for optimizers is the number of required function evaluations, as this often is already a large computational effort, usually much more effort than within the optimizer itself, which mainly has to operate over the N variables. The derivatives provide detailed information for such optimizers, but are even harder to calculate; e.g., approximating the gradient takes at least N + 1 function evaluations. For approximations of the second derivatives (collected in the Hessian matrix), the number of function evaluations is on the order of N². Newton's method requires the second-order derivatives, so for each iteration the number of function calls is on the order of N², but for a simpler pure gradient optimizer it is only N. However, gradient optimizers usually need more iterations than Newton's algorithm. Which one is best with respect to the number of function calls depends on the problem itself.
Methods that evaluate Hessians (or approximate Hessians, using finite differences):
Newton's method
Sequential quadratic programming: a Newton-based method for small- to medium-scale constrained problems. Some versions can handle large-dimensional problems.
Interior point methods: a large class of methods for constrained optimization.
Some interior-point methods use only (sub)gradient information; others require the evaluation of Hessians.
Methods that evaluate gradients, or approximate gradients in some way (or even subgradients):
Methods that evaluate only function values: If a problem is continuously differentiable, then gradients can be approximated using finite differences, in which case a gradient-based method can be used.
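The cost accounting above (a gradient approximation needs at least N + 1 function evaluations) can be seen in a minimal forward-difference sketch; the example function is illustrative, not from the article:

```python
def fd_gradient(f, x, h=1e-6):
    """Forward-difference gradient: 1 evaluation at x plus one per
    coordinate, i.e. N + 1 evaluations in total for N variables."""
    fx = f(x)                      # 1 evaluation
    grad = []
    for i in range(len(x)):        # N further evaluations
        xp = list(x)
        xp[i] += h
        grad.append((f(xp) - fx) / h)
    return grad

# Example: f(x) = x1^2 + 3*x2 has gradient (2*x1, 3); at (1, 2) that is (2, 3).
g = fd_gradient(lambda x: x[0] ** 2 + 3 * x[1], [1.0, 2.0])
```

Approximating the Hessian the same way repeats this per coordinate, which is where the order-N² evaluation count mentioned above comes from.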
Global convergence[edit]
More generally, if the objective function is not a quadratic function, then many optimization methods use other methods to ensure that some subsequence of iterations converges to an optimal solution. The first and still popular method for ensuring convergence relies on line searches, which optimize a function along one dimension. A second and increasingly popular method for ensuring convergence uses trust regions. Both line searches and trust regions are used in modern methods of non-differentiable optimization.
Usually a global optimizer is much slower than advanced local optimizers (such as BFGS), so often an efficient global optimizer can be constructed by starting the local optimizer from different starting points.
Heuristics[edit]
Main article: Heuristic algorithm
Besides (finitely terminating) algorithms and (convergent) iterative methods, there are heuristics. A heuristic is any algorithm which is not guaranteed (mathematically) to find the solution, but which is nevertheless useful in certain practical situations. List of some well-known heuristics:
Memetic algorithm
Differential evolution
Evolutionary algorithms
Dynamic relaxation
Genetic algorithms
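The multistart strategy mentioned under Global convergence (run a local optimizer from several starting points and keep the best result) can be sketched as follows; plain gradient descent stands in for an advanced local method such as BFGS, and the two-minimum objective is invented for illustration:

```python
def grad_descent(df, x0, lr=0.01, steps=2000):
    """A bare-bones local optimizer: fixed-step gradient descent."""
    x = x0
    for _ in range(steps):
        x -= lr * df(x)
    return x

f = lambda x: (x * x - 1) ** 2 + 0.2 * x        # two local minima near x = ±1
df = lambda x: 4 * x * (x * x - 1) + 0.2        # its derivative

# Multistart: local runs from several starting points, keep the best.
candidates = [grad_descent(df, x0) for x0 in (-2.0, -0.5, 0.5, 2.0)]
best = min(candidates, key=f)
# best lies near x = -1, the global minimum; the starts at +0.5 and +2.0
# converge only to the inferior local minimum near x = +1.
```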
Applications[edit]
Mechanics[edit]
Problems in rigid body dynamics (in particular articulated rigid body
dynamics) often require mathematical programming techniques, since
rigid body dynamics can be viewed as attempting to solve an ordinary
differential equation on a constraint manifold; the constraints are
various nonlinear geometric constraints such as "these two points must
always coincide", "this surface must not penetrate any other", or
"this point must always lie somewhere on this curve". Also, the
problem of computing contact forces can be done by solving a linear
complementarity problem, which can also be viewed as a QP (quadratic
programming) problem.
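The LCP-as-QP connection mentioned above can be illustrated with toy numbers (this is not a physics engine, and the matrix and vector are invented): the contact LCP, find λ with w = Aλ + b, w ≥ 0, λ ≥ 0, λᵀw = 0, is for symmetric positive definite A equivalent to the QP of minimizing ½λᵀAλ + bᵀλ subject to λ ≥ 0, solved here by projected gradient descent:

```python
# Toy 2-contact problem: A is a symmetric positive definite "effective
# mass" matrix, b the relative velocities before the contact impulse.
A = [[2.0, 0.5],
     [0.5, 1.0]]
b = [-1.0, 0.3]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

lam = [0.0, 0.0]
for _ in range(5000):
    g = [gi + bi for gi, bi in zip(matvec(A, lam), b)]           # gradient A@lam + b
    lam = [max(0.0, li - 0.1 * gi) for li, gi in zip(lam, g)]    # project onto lam >= 0

w = [wi + bi for wi, bi in zip(matvec(A, lam), b)]
# At the solution lam = (0.5, 0): complementarity holds, since each
# component has either lam_i = 0 or w_i = 0 (here w = (0, 0.55)).
```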
Many design problems can also be expressed as optimization programs.
This application is called design optimization. One subset is
engineering optimization, and another recent and growing subset of
this field is multidisciplinary design optimization, which, while
useful in many problems, has in particular been applied to aerospace
engineering problems.
This approach may be applied in cosmology and astrophysics.[4]
See also[edit]
Brachistochrone
Curve fitting
Deterministic global optimization
Goal programming
Important publications in optimization
Least squares
Notes[edit]
^ "The Nature of
Further reading[edit]
Comprehensive[edit]
Undergraduate level[edit]
Bradley, S.; Hax, A.; Magnanti, T. (1977). Applied Mathematical Programming. Addison-Wesley.
Parkinson, A.; Balling, R.; Hedengren, J. (2013). Optimization Methods for Engineering Design. Provo, UT: Brigham Young University.
Rardin, Ronald L. (1997). Optimization in Operations Research. Prentice Hall. p. 919. ISBN 0-02-398415-5.
Strang, Gilbert (1986). Introduction to Applied Mathematics. Wellesley, MA: Wellesley-Cambridge Press. pp. xii+758. ISBN 0-9614088-0-4. MR 0870634.
Graduate level[edit]
Magnanti, Thomas L. (1989). "Twenty years of mathematical programming". In Cornet, Bernard; Tulkens, Henry (eds.). Contributions to Operations Research and Economics: The Twentieth Anniversary of CORE (papers from the symposium held in Louvain-la-Neuve, January 1987). Cambridge, MA: MIT Press. pp. 163–227. ISBN 0-262-03149-3. MR 1104662.
Minoux, M. (1986). Mathematical Programming: Theory and Algorithms (translated by Steven Vajda from the 1983 French ed., Paris: Dunod; foreword by Egon Balas). Chichester: Wiley-Interscience, John Wiley & Sons. pp. xxviii+489. ISBN 0-471-90170-9. MR 2571910. (Second ed., in French: Programmation mathématique: Théorie et algorithmes. Éditions Tec & Doc, Paris, 2008. xxx+711 pp. ISBN 978-2-7430-1000-3. MR 868279.)
Nemhauser, G. L.; Rinnooy Kan, A. H. G.; Todd, M. J., eds. (1989). Optimization. Handbooks in Operations Research and Management Science 1. Amsterdam: North-Holland. pp. xiv+709. ISBN 0-444-87284-1. MR 1105099.
J. E. Dennis, Jr. and Robert B. Schnabel, A view of unconstrained optimization (pp. 1–72);
Shapiro, Jeremy F. (1979). Mathematical Programming: Structures and Algorithms. New York: Wiley-Interscience, John Wiley & Sons. pp. xvi+388. ISBN 0-471-77886-9. MR 0544669.
Spall, J. C. (2003). Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control. Hoboken, NJ: Wiley.
Chong, Edwin K. P.; Żak, Stanislaw H. (2013). An Introduction to Optimization (4th ed.). Hoboken, NJ: John Wiley & Sons. ISBN 9781118279014.
Continuous optimization[edit]
Roger Fletcher (2000). Practical Methods of Optimization. Wiley.
ISBN 978-0-471-49463-8.
Mordecai Avriel (2003). Nonlinear Programming: Analysis and Methods.
Dover Publications. ISBN 0-486-43227-0.
P. E. Gill, W. Murray and M. H. Wright (1982). Practical Optimization.
Emerald Publishing. ISBN 978-0-12-283952-8.
Xin-She Yang (2010). Engineering Optimization: An Introduction with Metaheuristic Applications. Wiley.
Combinatorial optimization[edit]
R. K. Ahuja, Thomas L. Magnanti, and James B. Orlin (1993). Network Flows: Theory, Algorithms, and Applications. Prentice-Hall.
Relaxation (extension method)[edit]
Methods that obtain suitable (in some sense) natural extensions of optimization problems that otherwise lack existence or stability of solutions, yielding problems with guaranteed existence of solutions and stability in some sense (typically under various perturbations of the data), are in general called relaxation. Solutions of such extended (= relaxed) problems in some sense characterize (at least certain features of) the original problems, e.g. as far as their optimizing sequences are concerned. Relaxed problems may also possess their own natural linear structure that may yield specific optimality conditions different from the optimality conditions for the original problems.
H. O. Fattorini: Infinite Dimensional Optimization and Control Theory.
Cambridge Univ. Press, 1999.
P. Pedregal: Parametrized Measures and Variational Principles.
Birkhäuser, Basel, 1997
T. Roubicek: "Relaxation in Optimization Theory and Variational
Calculus". W. de Gruyter, Berlin, 1997. ISBN 3-11-014542-1.
J. Warga: Optimal Control of Differential and Functional Equations. Academic Press, 1972.
Journals[edit] Computational Optimization and Applications
Journal of Computational Optimization in
External links[edit] COIN-OR—Computational Infrastructure for Operations Research
Decision Tree for Optimization Software Links to optimization source
codes
Global optimization