Benders decomposition (or Benders' decomposition) is a technique in mathematical programming that allows the solution of very large linear programming problems that have a special block structure. This block structure often occurs in applications such as stochastic programming, as the uncertainty is usually represented with scenarios. The technique is named after Jacques F. Benders.

The strategy behind Benders decomposition can be summarized as ''divide-and-conquer''. That is, in Benders decomposition, the variables of the original problem are divided into two subsets so that a first-stage master problem is solved over the first set of variables, and the values for the second set of variables are determined in a second-stage subproblem for a given first-stage solution. If the subproblem determines that the fixed first-stage decisions are in fact infeasible, then so-called ''Benders cuts'' are generated and added to the master problem, which is then re-solved until no cuts can be generated. Since Benders decomposition adds new ''constraints'' as it progresses towards a solution, the approach is called "''row'' generation". In contrast, Dantzig–Wolfe decomposition uses "''column'' generation".


Methodology

Assume a problem that occurs in two or more stages, where the decisions for the later stages depend on the results of the earlier ones. An attempt at first-stage decisions can be made without prior knowledge of optimality according to later-stage decisions. This first-stage decision is the master problem. Further stages can then be analyzed as separate subproblems. Information from these subproblems is passed back to the master problem. If a subproblem finds that constraints are violated, a corresponding constraint can be added to the master problem, which is then re-solved. The master problem represents an initial convex set which is further constrained by information gathered from the subproblems. Because the feasible space only shrinks as information is added, the objective value for the master problem provides a lower bound on the objective function of the overall problem. Benders decomposition is applicable to problems with a largely block-diagonal structure.


Mathematical Formulation

Assume a problem of the following structure:

: \begin{align} & \text{minimize} && \mathbf{c}^\mathrm{T}\mathbf{x} + \mathbf{d}^\mathrm{T}\mathbf{y} \\ & \text{subject to} && A\mathbf{x} + B\mathbf{y} \geq \mathbf{b} \\ & && \mathbf{y} \in Y \\ & && \mathbf{x} \geq \mathbf{0} \end{align}

where A, B represent the constraints shared by both stages of variables and Y represents the feasible set for \mathbf{y}. Notice that for any fixed \mathbf{y} \in Y, the residual problem is

: \begin{align} & \text{minimize} && \mathbf{c}^\mathrm{T}\mathbf{x} + \mathbf{d}^\mathrm{T}\mathbf{y} \\ & \text{subject to} && A\mathbf{x} \geq \mathbf{b} - B\mathbf{y} \\ & && \mathbf{x} \geq \mathbf{0} \end{align}

The dual of the residual problem is

: \begin{align} & \text{maximize} && (\mathbf{b} - B\mathbf{y})^\mathrm{T}\mathbf{u} + \mathbf{d}^\mathrm{T}\mathbf{y} \\ & \text{subject to} && A^\mathrm{T}\mathbf{u} \leq \mathbf{c} \\ & && \mathbf{u} \geq \mathbf{0} \end{align}

Using the dual representation of the residual problem, the original problem can be rewritten as the equivalent minimax problem

: \min_{\mathbf{y} \in Y} \left[ \mathbf{d}^\mathrm{T}\mathbf{y} + \max_{\mathbf{u} \geq \mathbf{0},\; A^\mathrm{T}\mathbf{u} \leq \mathbf{c}} \left\{ (\mathbf{b} - B\mathbf{y})^\mathrm{T}\mathbf{u} \right\} \right]

Benders decomposition relies on an iterative procedure that chooses successive values of \mathbf{y} without considering the inner problem except through a set of cut constraints that are created through a pass-back mechanism from the maximization problem. Although the minimax formulation is written in terms of (\mathbf{y}, \mathbf{u}), for an optimal \mathbf{y} the corresponding \mathbf{x} can be found by solving the original problem with \mathbf{y} fixed.
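The equivalence between the original problem and its minimax rewriting can be checked by brute force on a tiny instance. The sketch below (all numbers and variable names are illustrative assumptions, not from the article) uses a single x variable, a single constraint, and a small finite Y, so the inner maximum can be evaluated at the two vertices of the one-dimensional dual feasible interval:

```python
# Toy check of the minimax reformulation (illustrative numbers):
# minimize c*x + d*y  subject to  a*x + b_coef*y >= rhs, x >= 0, y in Y.
c, d = 1.0, 2.0
a, b_coef, rhs = 1.0, 1.0, 3.0
Y = [0.0, 1.0, 2.0, 3.0]

# Original problem: for fixed y, the best x is max(0, (rhs - b_coef*y)/a).
primal_value = min(c * max(0.0, (rhs - b_coef * y) / a) + d * y for y in Y)

# Minimax form: the inner maximum over the dual region {u : a*u <= c, u >= 0}
# is attained at one of its two vertices, u = 0 or u = c/a.
dual_vertices = [0.0, c / a]
minimax_value = min(
    d * y + max((rhs - b_coef * y) * u for u in dual_vertices) for y in Y
)

print(primal_value, minimax_value)  # 3.0 3.0 -- the two forms agree
```

Both evaluations give 3.0 here, consistent with strong duality for the inner linear program; a realistic instance would of course call an LP solver rather than enumerate.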


Master Problem Formulation

The decisions for the first-stage problem can be described by the smaller minimization problem

: \begin{align} & \text{minimize} && z \\ & \text{subject to} && \text{(set of cuts)} \\ & && \mathbf{y} \in Y \end{align}

Initially the set of cuts is empty. Solving this master problem will constitute a "first guess" at an optimal solution to the overall problem, with the value of z unbounded below and \mathbf{y} taking on any feasible value. The set of cuts will be filled in a sequence of iterations by solving the inner maximization problem of the minimax formulation. The cuts both guide the master problem towards an optimal \mathbf{y}, if one exists, and ensure that \mathbf{y} is feasible for the full problem. The set of cuts defines the relationship between z, \mathbf{y}, and implicitly \mathbf{x}. Since the value of z starts unconstrained and constraints are only added at each iteration, so that the feasible space can only shrink, the value of the master problem at any iteration provides a lower bound on the solution to the overall problem. If for some \mathbf{y} the objective value of the master problem is equal to the optimal value of the inner problem, then by duality theory the solution is optimal.


Subproblem Formulation

The subproblem considers the suggested solution \mathbf{y} to the master problem and solves the inner maximization problem of the minimax formulation. The inner problem is formulated using the dual representation

: \begin{align} & \text{maximize} && (\mathbf{b} - B\mathbf{y})^\mathrm{T}\mathbf{u} + \mathbf{d}^\mathrm{T}\mathbf{y} \\ & \text{subject to} && A^\mathrm{T}\mathbf{u} \leq \mathbf{c} \\ & && \mathbf{u} \geq \mathbf{0} \end{align}

While the master problem provides a lower bound on the value of the problem, the subproblem is used to get an upper bound. The result of solving the subproblem for any given \mathbf{y} can be a finite optimal value for which an extreme point \mathbf{u}^* can be found, an unbounded objective for which an extreme ray \mathbf{r} in the recession cone can be found, or a finding that the subproblem is infeasible.
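For a one-dimensional toy instance the dual subproblem can be solved by enumerating the vertices of the dual feasible interval (all numbers are illustrative assumptions; a general implementation would call an LP solver, which would also report an extreme ray when the dual is unbounded):

```python
# Dual subproblem sketch: given the master's y, maximize
# (rhs - b_coef*y)*u + d*y over {u : a*u <= c, u >= 0} = [0, c/a].
c, d = 1.0, 2.0
a, b_coef, rhs = 1.0, 1.0, 3.0

def solve_subproblem(y):
    vertices = [0.0, c / a]  # the bounded 1-D region's extreme points
    value, u_star = max(((rhs - b_coef * y) * u + d * y, u) for u in vertices)
    return value, u_star  # value: candidate upper bound; u_star: cut data

print(solve_subproblem(0.0))  # (3.0, 1.0)
print(solve_subproblem(3.0))  # (6.0, 1.0): any vertex is optimal here
```

The returned extreme point u_star supplies the coefficients of a new optimality cut for the master problem, and the returned value updates the upper bound.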


Procedure

At a high level, the procedure iteratively considers the master problem and subproblem. Each iteration provides an updated upper and lower bound on the optimal objective value. The result of the subproblem either provides a new constraint to add to the master problem or a certificate that no finite optimal solution exists. The procedure terminates when it is shown that no finite optimal solution exists or when the gap between the upper and lower bound is sufficiently small. In the latter case, the value of \mathbf{x} is determined by solving the primal residual problem with \mathbf{y} fixed.

Formally, the procedure begins with the lower bound set to -\infty, the upper bound set to \infty, and the set of cuts in the master problem empty. An initial solution is produced by selecting any \mathbf{y} \in Y. The iterative procedure then begins and continues until the gap between the upper and lower bound is at most \epsilon or it is shown that no finite optimal solution exists.

The first step of each iteration is to update the upper bound by solving the subproblem given the most recent value of \mathbf{y}. There are three possible outcomes. In the first case, the objective value of the subproblem is unbounded above. By duality theory, when a dual problem has an unbounded objective the corresponding primal problem is infeasible. This means that the choice of \mathbf{y} does not satisfy A\mathbf{x} + B\mathbf{y} \geq \mathbf{b} for any \mathbf{x} \geq \mathbf{0}. This solution can be excluded from the master problem by taking an extreme ray \mathbf{r} that certifies the subproblem has an unbounded objective and adding to the master problem the feasibility cut (\mathbf{b} - B\mathbf{y})^\mathrm{T}\mathbf{r} \leq 0.

In the second case, the subproblem is infeasible. Since the dual feasible space of the problem is empty, either the original problem is infeasible or there is a ray in the primal problem that certifies the objective value is unbounded below. In either case, the procedure terminates.

In the third case, the subproblem has a finite optimal solution. By duality theory for linear programs, the optimal value of the subproblem is equal to the optimal value of the original problem constrained to the choice of \mathbf{y}. This allows the upper bound to be updated to the value of the subproblem's optimal solution, if it is better than the current upper bound. Given an optimal extreme point \mathbf{u}^*, it also yields the optimality cut z \geq (\mathbf{b} - B\mathbf{y})^\mathrm{T}\mathbf{u}^* + \mathbf{d}^\mathrm{T}\mathbf{y}, which requires the master problem to consider the objective value under this particular dual solution. This will strictly increase the value of z at the solution \mathbf{y} in the master problem if the choice of \mathbf{y} was suboptimal.

Finally, the last step of each iteration is to create a new solution to the master problem by solving it with the new constraint. The new solution (\mathbf{y}, z) is used to update the lower bound. If the gap between the best upper and lower bound is at most \epsilon, then the procedure terminates and the value of \mathbf{x} is determined by solving the primal residual problem with \mathbf{y} fixed. Otherwise, the procedure continues to the next iteration.
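The whole loop can be sketched end to end on a toy one-dimensional instance (all numbers are illustrative assumptions; the master and the dual subproblem are solved by enumeration, and because this instance's dual region is bounded and nonempty, only the optimality-cut case arises — a real implementation would use an LP/MILP solver and also handle feasibility cuts and termination certificates):

```python
# Benders loop sketch: minimize c*x + d*y s.t. a*x + b_coef*y >= rhs,
# x >= 0, y in a small finite Y.
c, d = 1.0, 2.0
a, b_coef, rhs = 1.0, 1.0, 3.0
Y = [0.0, 1.0, 2.0, 3.0]
dual_vertices = [0.0, c / a]          # vertices of {u : a*u <= c, u >= 0}

lower, upper = float("-inf"), float("inf")
cuts, y_k, eps = [], Y[0], 1e-9       # start from an arbitrary y in Y

while upper - lower > eps:
    # Subproblem: bounded here, so each iteration yields an optimality cut.
    value, u_k = max(((rhs - b_coef * y_k) * u + d * y_k, u)
                     for u in dual_vertices)
    upper = min(upper, value)          # best upper bound so far
    cuts.append(lambda y, u=u_k: (rhs - b_coef * y) * u + d * y)
    # Master: minimize z s.t. z >= every cut, y in Y (by enumeration).
    y_k, lower = min(((y, max(cut(y) for cut in cuts)) for y in Y),
                     key=lambda t: t[1])

# Recover x by solving the primal residual problem with y fixed.
x_k = max(0.0, (rhs - b_coef * y_k) / a)
print(y_k, x_k, lower)  # 0.0 3.0 3.0
```

On this instance a single cut closes the gap: the subproblem returns 3.0 at u = 1, the master then certifies 3.0 as a lower bound, and the recovered solution is y = 0, x = 3 with objective 3.0.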


See also

* The FortSP solver uses Benders decomposition for solving stochastic programming problems.


References

* Benders, J. F. (September 1962), "Partitioning procedures for solving mixed-variables programming problems", ''Numerische Mathematik'' 4(3): 238–252.
* Lasdon, Leon S. (2002), ''Optimization Theory for Large Systems'', Mineola, New York: Dover Publications, pp. xiii+523 (reprint of the 1970 Macmillan edition), MR 1888251.