In mathematical optimization, constrained optimization (in some contexts called constraint optimization) is the process of optimizing an objective function with respect to some variables in the presence of constraints on those variables. The objective function is either a cost function or energy function, which is to be minimized, or a reward function or utility function, which is to be maximized. Constraints can be either hard constraints, which set conditions for the variables that are required to be satisfied, or soft constraints, which have some variable values that are penalized in the objective function if, and based on the extent that, the conditions on the variables are not satisfied.


Relation to constraint-satisfaction problems

The constrained-optimization problem (COP) is a significant generalization of the classic constraint-satisfaction problem (CSP) model. COP is a CSP that includes an ''objective function'' to be optimized. Many algorithms are used to handle the optimization part.


General form

A general constrained minimization problem may be written as follows:

: \begin{array}{rlll}
\min & f(\mathbf{x}) & & \\
\mathrm{subject~to} & g_i(\mathbf{x}) = c_i & \text{for } i=1,\ldots,n & \text{(equality constraints)} \\
& h_j(\mathbf{x}) \geq d_j & \text{for } j=1,\ldots,m & \text{(inequality constraints)}
\end{array}

where g_i(\mathbf{x}) = c_i for i=1,\ldots,n and h_j(\mathbf{x}) \ge d_j for j=1,\ldots,m are constraints that are required to be satisfied (these are called hard constraints), and f(\mathbf{x}) is the objective function that needs to be optimized subject to the constraints. In some problems, often called ''constraint optimization problems'', the objective function is actually the sum of cost functions, each of which penalizes the extent (if any) to which a soft constraint (a constraint which is preferred but not required to be satisfied) is violated.
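As an illustrative sketch (not part of the original formulation), a small instance of this general form can be solved numerically with Python's SciPy library; the objective and constraint functions below are invented for the example:

```python
# A minimal sketch of the general form using SciPy (an assumed dependency):
# minimize f(x) subject to one equality and one inequality constraint.
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2   # objective f(x)

constraints = [
    {"type": "eq",   "fun": lambda x: x[0] + x[1] - 3},  # g(x) = c   ->  g(x) - c = 0
    {"type": "ineq", "fun": lambda x: x[0] - 0.5},       # h(x) >= d  ->  h(x) - d >= 0
]

result = minimize(f, x0=np.array([0.0, 0.0]), constraints=constraints)
print(result.x)  # approximately (1, 2), which satisfies both constraints
```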


Solution methods

Many constrained optimization algorithms can be adapted to the unconstrained case, often via the use of a penalty method. However, search steps taken by the unconstrained method may be unacceptable for the constrained problem, leading to a lack of convergence. This is referred to as the Maratos effect.
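A minimal sketch of the quadratic penalty idea, assuming Python with SciPy and a made-up objective and constraint:

```python
# Quadratic-penalty sketch (illustrative): minimize f(x) subject to g(x) = 0
# by solving a sequence of unconstrained problems with a growing weight mu.
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0] ** 2 + x[1] ** 2          # objective
g = lambda x: x[0] + x[1] - 1                # equality constraint g(x) = 0

x = np.array([0.0, 0.0])
for mu in [1.0, 10.0, 100.0, 1000.0]:
    # Unconstrained subproblem: f(x) + mu * g(x)^2
    penalized = lambda x, mu=mu: f(x) + mu * g(x) ** 2
    x = minimize(penalized, x).x             # warm-start from previous solution
print(x)  # approaches (0.5, 0.5) as mu grows
```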


Equality constraints


Substitution method

For very simple problems, say a function of two variables subject to a single equality constraint, it is most practical to apply the method of substitution. The idea is to substitute the constraint into the objective function to create a composite function that incorporates the effect of the constraint. For example, assume the objective is to maximize f(x,y) = x \cdot y subject to x + y = 10. The constraint implies y = 10 - x, which can be substituted into the objective function to create p(x) = x(10 - x) = 10x - x^2. The first-order necessary condition gives \frac{dp}{dx} = 10 - 2x = 0, which can be solved for x = 5 and, consequently, y = 10 - 5 = 5.
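The same worked example can be checked symbolically; the following short sketch assumes the SymPy library:

```python
# The substitution example above, verified with SymPy (an assumed dependency):
# maximize f(x, y) = x*y subject to x + y = 10.
import sympy as sp

x = sp.symbols("x")
p = x * (10 - x)                       # substitute y = 10 - x into f
critical = sp.solve(sp.diff(p, x), x)  # first-order condition dp/dx = 0
print(critical, [10 - c for c in critical])  # x = 5, y = 5
```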


Lagrange multiplier

If the constrained problem has only equality constraints, the method of Lagrange multipliers can be used to convert it into an unconstrained problem whose number of variables is the original number of variables plus the original number of equality constraints. Alternatively, if the constraints are all equality constraints and are all linear, they can be solved for some of the variables in terms of the others, and the former can be substituted out of the objective function, leaving an unconstrained problem in a smaller number of variables.
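As a hedged illustration (assuming SymPy), the Lagrangian of the earlier example can be formed and its stationary points solved for directly:

```python
# Lagrange-multiplier sketch in SymPy (illustrative): maximize
# f(x, y) = x*y subject to x + y = 10, the same example as above.
import sympy as sp

x, y, lam = sp.symbols("x y lambda")
L = x * y - lam * (x + y - 10)        # the Lagrangian
stationary = sp.solve(
    [sp.diff(L, v) for v in (x, y, lam)], (x, y, lam), dict=True
)
print(stationary)  # x = 5, y = 5, lambda = 5
```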


Inequality constraints

With inequality constraints, the problem can be characterized in terms of the geometric optimality conditions, Fritz John conditions and Karush–Kuhn–Tucker conditions, under which simple problems may be solvable.


Linear programming

If the objective function and all of the hard constraints are linear and some hard constraints are inequalities, then the problem is a linear programming problem. This can be solved by the simplex method, which usually works in polynomial time in the problem size but is not guaranteed to, or by interior point methods, which are guaranteed to work in polynomial time.
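A small illustrative instance (data invented for the example) solved with SciPy's linprog routine:

```python
# Linear-programming sketch: maximize x + 2y subject to
# x + y <= 4, x <= 3, x >= 0, y >= 0. linprog minimizes, so negate c.
from scipy.optimize import linprog

res = linprog(
    c=[-1, -2],                 # minimize -(x + 2y)
    A_ub=[[1, 1], [1, 0]],      # x + y <= 4,  x <= 3
    b_ub=[4, 3],
    bounds=[(0, None), (0, None)],
)
print(res.x, -res.fun)          # optimum at (0, 4), value 8
```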


Nonlinear programming

If the objective function or some of the constraints are nonlinear, and some constraints are inequalities, then the problem is a nonlinear programming problem.
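An illustrative sketch with SciPy, using a made-up nonlinear constraint:

```python
# Nonlinear-programming sketch: minimize x^2 + y^2 subject to the
# nonlinear inequality constraint x*y >= 1.
import numpy as np
from scipy.optimize import minimize

res = minimize(
    lambda v: v[0] ** 2 + v[1] ** 2,
    x0=np.array([2.0, 2.0]),
    constraints=[{"type": "ineq", "fun": lambda v: v[0] * v[1] - 1}],
)
print(res.x)  # approximately (1, 1)
```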


Quadratic programming

If all the hard constraints are linear and some are inequalities, but the objective function is quadratic, the problem is a quadratic programming problem. It is one type of nonlinear programming. It can still be solved in polynomial time by the ellipsoid method if the objective function is convex; otherwise the problem may be NP-hard.
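For the equality-constrained convex case, the optimality conditions reduce to a linear system, as the following NumPy sketch (with invented problem data) shows:

```python
# Equality-constrained convex QP solved via its KKT system (a sketch):
# minimize (1/2) x^T Q x + c^T x subject to A x = b. For convex Q the
# KKT conditions are linear:  [Q A^T; A 0] [x; lam] = [-c; b].
import numpy as np

Q = np.array([[2.0, 0.0], [0.0, 2.0]])   # positive definite -> convex problem
c = np.array([-2.0, -5.0])
A = np.array([[1.0, 1.0]])               # single constraint x1 + x2 = 1
b = np.array([1.0])

n, m = Q.shape[0], A.shape[0]
kkt = np.block([[Q, A.T], [A, np.zeros((m, m))]])
sol = np.linalg.solve(kkt, np.concatenate([-c, b]))
x, lam = sol[:n], sol[n:]
print(x, lam)  # x = (-0.25, 1.25), multiplier 2.5
```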


KKT conditions

Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers. It can be applied under differentiability and convexity.
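A minimal numeric illustration (one variable, with a made-up problem) of checking the KKT conditions at a candidate point:

```python
# KKT check sketch: minimize f(x) = (x-2)^2 subject to g(x) = x - 1 <= 0.
# At the candidate x* = 1: stationarity f'(x*) + mu * g'(x*) = 0 with mu >= 0,
# and complementary slackness mu * g(x*) = 0.
x_star = 1.0
grad_f = 2 * (x_star - 2)       # f'(x) = 2(x - 2) = -2 at x* = 1
grad_g = 1.0                    # g'(x) = 1
mu = -grad_f / grad_g           # stationarity gives mu = 2 >= 0
g_val = x_star - 1              # constraint is active: g(x*) = 0
assert mu >= 0 and abs(mu * g_val) < 1e-12   # dual feasibility + slackness
print("KKT conditions hold at x* = 1 with multiplier mu =", mu)
```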


Branch and bound

Constraint optimization can be solved by branch-and-bound algorithms. These are backtracking algorithms that store the cost of the best solution found during execution and use it to avoid part of the search. More precisely, whenever the algorithm encounters a partial solution that cannot be extended to form a solution of better cost than the stored best cost, the algorithm backtracks instead of trying to extend this solution.

Assuming that cost is to be maximized, the efficiency of these algorithms depends on how the cost that can be obtained from extending a partial solution is evaluated. Indeed, if the algorithm can backtrack from a partial solution, part of the search is skipped. The lower the estimated cost, the better the algorithm, as a lower estimate is more likely to fall below the best cost found so far. On the other hand, this estimated cost cannot be lower than the effective cost that can be obtained by extending the solution, as otherwise the algorithm could backtrack while a solution better than the best found so far exists. As a result, the algorithm requires an upper bound on the cost that can be obtained from extending a partial solution, and this upper bound should be as small as possible. A variation of this approach, called Hansen's method, uses interval methods and inherently implements rectangular constraints.
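The following is a compact, illustrative branch-and-bound sketch in Python for a toy maximization problem with soft constraints; the variable domains, constraint tables, and names are all assumptions of the example:

```python
# Branch-and-bound sketch for a toy COP: variables take values in {0, 1},
# the objective is a sum of soft-constraint values, and a partial solution
# is pruned when an optimistic upper bound cannot beat the incumbent.
from itertools import product

DOMAIN = [0, 1]
# Soft constraints: (scope of variable indices, value function).
SOFT = [
    ((0, 1), lambda a, b: 3 if a != b else 0),   # prefer x0 != x1
    ((1, 2), lambda b, c: 2 if b == c else 1),   # prefer x1 == x2
]
N = 3  # number of variables

def value(assignment):
    """Total value of a complete assignment."""
    return sum(f(*(assignment[i] for i in scope)) for scope, f in SOFT)

def upper_bound(partial):
    """Optimistic bound: max of each soft constraint over unassigned vars."""
    total = 0
    for scope, f in SOFT:
        choices = [[partial[i]] if i < len(partial) else DOMAIN for i in scope]
        total += max(f(*vals) for vals in product(*choices))
    return total

best = [float("-inf"), None]  # best value and assignment found so far

def search(partial):
    if len(partial) == N:
        v = value(partial)
        if v > best[0]:
            best[0], best[1] = v, list(partial)
        return
    if upper_bound(partial) <= best[0]:   # cannot beat the incumbent: prune
        return
    for d in DOMAIN:
        search(partial + [d])

search([])
print(best)  # [5, [0, 1, 1]]
```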


First-choice bounding functions

One way of evaluating this upper bound for a partial solution is to consider each soft constraint separately. For each soft constraint, the maximal possible value for any assignment to the unassigned variables is assumed. The sum of these values is an upper bound because the soft constraints cannot assume a higher value. However, it is not exact, because the maximal values of the soft constraints may derive from different evaluations: one soft constraint may be maximal for x=a while another constraint is maximal for x=b.
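A sketch of this bounding function (with invented constraint data), including a case where the bound is strictly loose:

```python
# First-choice bound sketch: each soft constraint is maximized independently
# over the unassigned variables and the maxima are summed. The bound is valid
# but may be loose, since different constraints may attain their maxima at
# incompatible assignments.
from itertools import product

DOMAIN = [0, 1]

def first_choice_bound(soft, partial, domain=DOMAIN):
    total = 0
    for scope, f in soft:
        choices = [[partial[i]] if i in partial else domain for i in scope]
        total += max(f(*vals) for vals in product(*choices))
    return total

# Two constraints maximal at incompatible values of x0: bound 2, true best 1.
soft = [((0,), lambda a: 1 if a == 0 else 0),
        ((0,), lambda a: 1 if a == 1 else 0)]
print(first_choice_bound(soft, partial={}))  # 2 > 1: the bound is not exact
```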


Russian doll search

This method (Verfaillie, Lemaître & Schiex, 1996; see References) runs a branch-and-bound algorithm on n problems, where n is the number of variables. Each such problem is the subproblem obtained by dropping a sequence of variables x_1,\ldots,x_i from the original problem, along with the constraints containing them. After the problem on variables x_{i+1},\ldots,x_n is solved, its optimal cost can be used as an upper bound while solving the other problems. In particular, the cost estimate of a solution having x_{i+1},\ldots,x_n as unassigned variables is added to the cost that derives from the evaluated variables. In effect, this corresponds to ignoring the evaluated variables and solving the problem on the unassigned ones, except that the latter problem has already been solved. More precisely, the cost of soft constraints containing both assigned and unassigned variables is estimated as above (or using an arbitrary other method); the cost of soft constraints containing only unassigned variables is instead estimated using the optimal solution of the corresponding problem, which is already known at this point.

There is a similarity between the Russian doll search method and dynamic programming. Like dynamic programming, Russian doll search solves subproblems in order to solve the whole problem. But, whereas dynamic programming directly combines the results obtained on subproblems to get the result of the whole problem, Russian doll search only uses them as bounds during its search.
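A simplified, illustrative sketch of Russian doll search on the same kind of toy problem as above (all data and names invented for the example): the suffix subproblems are solved from smallest to largest, and each stored optimum serves as part of the bound for the next:

```python
# Russian-doll-search sketch: solve the subproblems on variables
# start..N-1 in order of increasing size; use each stored suffix optimum,
# plus a first-choice estimate for crossing constraints, as the bound.
from itertools import product

DOMAIN = [0, 1]
SOFT = [((0, 1), lambda a, b: 3 if a != b else 0),
        ((1, 2), lambda b, c: 2 if b == c else 1)]
N = 3

def solve_suffix(start, suffix_opt):
    """Optimum of the subproblem on variables start..N-1, using the already
    solved smaller suffix problems (suffix_opt) inside the bound."""
    best = float("-inf")

    def bound(partial):
        j = start + len(partial)
        total = suffix_opt.get(j, 0)   # constraints fully among unassigned vars
        for scope, f in SOFT:
            if not all(i >= start for i in scope):
                continue               # constraint not part of this subproblem
            if all(i >= j for i in scope):
                continue               # already counted via suffix_opt[j]
            choices = [[partial[i - start]] if i < j else DOMAIN for i in scope]
            total += max(f(*vals) for vals in product(*choices))
        return total

    def search(partial):
        nonlocal best
        if start + len(partial) == N:
            best = max(best, bound(partial))  # bound is exact when complete
            return
        if bound(partial) <= best:
            return
        for d in DOMAIN:
            search(partial + [d])

    search([])
    return best

suffix_opt = {N: 0}
for start in range(N - 1, -1, -1):   # solve the n nested ("doll") problems
    suffix_opt[start] = solve_suffix(start, suffix_opt)
print(suffix_opt[0])   # optimum of the full problem: 5
```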


Bucket elimination

The bucket elimination algorithm can be adapted for constraint optimization. A given variable can indeed be removed from the problem by replacing all soft constraints containing it with a new soft constraint. The cost of this new constraint is computed assuming a maximal value for every value of the removed variable. Formally, if x is the variable to be removed, C_1,\ldots,C_n are the soft constraints containing it, and y_1,\ldots,y_m are their variables except x, the new soft constraint is defined by:

: C(y_1=a_1,\ldots,y_m=a_m) = \max_a \sum_i C_i(x=a, y_1=a_1,\ldots,y_m=a_m).

Bucket elimination works with an (arbitrary) ordering of the variables. Every variable is associated with a bucket of constraints; the bucket of a variable contains all constraints in which the variable is the highest in the ordering. Bucket elimination proceeds from the last variable to the first. For each variable, all constraints of its bucket are replaced as above to remove the variable. The resulting constraint is then placed in the appropriate bucket.
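One elimination step can be sketched as follows (illustrative, with made-up constraints and a single remaining variable):

```python
# Bucket-elimination step sketch: the variable x is removed by replacing
# the soft constraints that contain it with one new constraint over the
# remaining variables, taking the max over the values of x.
from itertools import product

DOMAIN = [0, 1]

def eliminate(constraints_with_x, remaining_vars):
    """New constraint C(y...) = max_a sum_i C_i(x=a, y...)."""
    table = {}
    for ys in product(DOMAIN, repeat=len(remaining_vars)):
        env = dict(zip(remaining_vars, ys))
        table[ys] = max(
            sum(f({**env, "x": a}) for f in constraints_with_x)
            for a in DOMAIN
        )
    return table

# Two soft constraints over x and y (invented data):
c1 = lambda e: 1 if e["x"] != e["y"] else 0             # reward x != y
c2 = lambda e: 1 if e["x"] == 1 and e["y"] == 1 else 0  # reward x = y = 1
print(eliminate([c1, c2], ["y"]))  # {(0,): 1, (1,): 1}
```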


See also

* Constrained least squares
* Distributed constraint optimization
* Constraint satisfaction problem (CSP)
* Constraint programming
* Integer programming
* Penalty method
* Superiorization


References

* Verfaillie, Gérard; Lemaître, Michel; Schiex, Thomas. "Russian doll search for solving constraint optimization problems." AAAI/IAAI, Vol. 1, 1996.


Further reading

* Dechter, Rina (2003). ''Constraint Processing''. Morgan Kaufmann. ISBN 1-55860-890-7. https://archive.org/details/constraintproces00rina