Fundamental Theorem Of Linear Programming

In mathematical optimization, the fundamental theorem of linear programming states, in a weak formulation, that the maxima and minima of a linear function over a convex polygonal region occur at the region's corners. Further, if an extreme value occurs at two corners, then it must also occur everywhere on the line segment between them.
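As a small worked example (not from the original text): maximizing f(x, y) = x + y over the unit square with corners (0, 0), (1, 0), (0, 1) and (1, 1) gives the maximum value 2, attained at the corner (1, 1); maximizing g(x, y) = x instead gives the value 1 at the two corners (1, 0) and (1, 1), and therefore along the entire edge joining them.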


Statement

Consider the optimization problem

:\min c^T x \quad \text{subject to } x \in P,

where P = \{x \in \mathbb{R}^n : Ax \leq b\}. If P is a bounded polyhedron (and thus a polytope) and x^\ast is an optimal solution to the problem, then x^\ast is either an extreme point (vertex) of P, or lies on a face F \subset P of optimal solutions.
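As a numerical illustration of the statement (not part of the original article), the following Python sketch minimizes c^T x over a small two-dimensional polytope P = {x : Ax <= b}, once with an LP solver and once by brute-force enumeration of the vertices; the particular polytope, cost vector, and use of SciPy are assumptions made for the example. The two optimal values agree, as the theorem predicts.

# Illustrative sketch only: the example polytope, cost vector, and tolerances
# are assumptions, not part of the original article.
import itertools
import numpy as np
from scipy.optimize import linprog

# Example polytope P = {x : A x <= b}: the pentagon 0 <= x <= 3, 0 <= y <= 2, x + y <= 4.
A = np.array([[-1.0, 0.0],   # -x <= 0
              [0.0, -1.0],   # -y <= 0
              [1.0, 0.0],    #  x <= 3
              [0.0, 1.0],    #  y <= 2
              [1.0, 1.0]])   #  x + y <= 4
b = np.array([0.0, 0.0, 3.0, 2.0, 4.0])
c = np.array([-1.0, -2.0])   # minimize -x - 2y over P

# Optimal value reported by an LP solver.
res = linprog(c, A_ub=A, b_ub=b, bounds=(None, None), method="highs")

# Candidate vertices: feasible intersections of every pair of constraint lines.
vertices = []
for i, j in itertools.combinations(range(len(b)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue                      # parallel constraints, no intersection point
    v = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ v <= b + 1e-9):     # keep only points that satisfy all constraints
        vertices.append(v)

best_vertex_value = min(c @ v for v in vertices)
print("solver optimum :", res.fun)
print("best vertex    :", best_vertex_value)
# The two values agree (up to numerical tolerance), as the theorem predicts.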


Proof

Suppose, for the sake of contradiction, that x^\ast \in \mathrm{int}(P), and assume c \neq 0 (otherwise every feasible point is optimal and the claim is immediate). Then there exists some \epsilon > 0 such that the ball of radius \epsilon centered at x^\ast is contained in P, that is, B_\epsilon(x^\ast) \subset P. Therefore

:x^\ast - \frac{\epsilon}{2} \frac{c}{\|c\|} \in P

and

:c^T\left( x^\ast - \frac{\epsilon}{2} \frac{c}{\|c\|}\right) = c^T x^\ast - \frac{\epsilon}{2} \frac{c^T c}{\|c\|} = c^T x^\ast - \frac{\epsilon}{2} \|c\| < c^T x^\ast.

Hence x^\ast is not an optimal solution, a contradiction. Therefore, x^\ast must lie on the boundary of P.

If x^\ast is not a vertex itself, it must be a convex combination of vertices of P, say x_1, \ldots, x_t. Then x^\ast = \sum_{i=1}^t \lambda_i x_i with \lambda_i \geq 0 and \sum_{i=1}^t \lambda_i = 1. Observe that

:0 = c^T\left(\left(\sum_{i=1}^t \lambda_i x_i\right) - x^\ast\right) = c^T\left(\sum_{i=1}^t \lambda_i (x_i - x^\ast)\right) = \sum_{i=1}^t \lambda_i \left(c^T x_i - c^T x^\ast\right).

Since x^\ast is an optimal solution, every term in the sum is nonnegative: each \lambda_i \geq 0, and c^T x_i - c^T x^\ast \geq 0 because x^\ast attains the minimum. Since the sum equals zero, each individual term must be zero. Hence c^T x^\ast = c^T x_i for each i, so every x_i is also optimal, and therefore all points on the face whose vertices are x_1, \ldots, x_t are optimal solutions.
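The two steps of the proof can be checked numerically on a concrete example. The sketch below is illustrative only; the pentagon, cost vector, chosen points, and tolerances are assumptions, not part of the original proof. It verifies that a small step from an interior point along -c/||c|| stays feasible and strictly decreases the objective, and that every convex combination of two optimal vertices attains the same optimal value.

# Numerical sanity check of the two proof steps (illustrative sketch; the
# specific polytope, cost vector, points, and tolerances are assumptions).
import numpy as np

# P = {x : A x <= b}: the pentagon 0 <= x <= 3, 0 <= y <= 2, x + y <= 4.
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 3.0, 2.0, 4.0])
c = np.array([-1.0, -1.0])          # minimize -x - y over P

# Step 1: from an interior point, a small step along -c/||c|| stays inside P
# and strictly decreases c^T x, so an interior point cannot be optimal.
x_int = np.array([1.0, 1.0])        # an interior point of P
x_new = x_int - 0.1 * c / np.linalg.norm(c)
assert np.all(A @ x_new <= b)       # still feasible
assert c @ x_new < c @ x_int        # objective strictly smaller

# Step 2: the vertices (3, 1) and (2, 2) are both optimal (c^T x = -4), so
# every convex combination of them attains the same optimal value.
v1, v2 = np.array([3.0, 1.0]), np.array([2.0, 2.0])
for lam in np.linspace(0.0, 1.0, 11):
    x = lam * v1 + (1.0 - lam) * v2
    assert np.isclose(c @ x, c @ v1)    # same optimal value along the edge
print("both proof steps verified numerically on this example")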

