Linear Programming Duality
The dual of a given linear program (LP) is another LP that is derived from the original (the primal) LP in the following schematic way:
* Each variable in the primal LP becomes a constraint in the dual LP;
* Each constraint in the primal LP becomes a variable in the dual LP;
* The objective direction is reversed – maximum in the primal becomes minimum in the dual and vice versa.
The weak duality theorem states that the objective value of the dual LP at any feasible solution is always a bound on the objective of the primal LP at any feasible solution (an upper or lower bound, depending on whether the primal is a maximization or minimization problem). In particular, this bound holds between the optimal values of the dual and primal LPs. The strong duality theorem states that, moreover, if the primal has an optimal solution then the dual has an optimal solution too, ''and the two optima are equal''. These theorems belong to a larger class of duality theorems in optimization ...
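A minimal numerical sketch of this correspondence (added here for illustration, not part of the original article; it assumes SciPy's linprog solver is available). The primal below maximizes 3*x1 + 5*x2 under three ≤-constraints, the dual minimizes b·y over the transposed constraints, and the two optimal values coincide, as strong duality predicts.

from scipy.optimize import linprog

# Primal: maximize 3*x1 + 5*x2  s.t.  x1 <= 4,  2*x2 <= 12,  3*x1 + 2*x2 <= 18,  x >= 0.
# linprog minimizes, so the objective is negated.
c = [3, 5]
A = [[1, 0], [0, 2], [3, 2]]
b = [4, 12, 18]
primal = linprog([-ci for ci in c], A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

# Dual: minimize 4*y1 + 12*y2 + 18*y3  s.t.  A^T y >= c,  y >= 0
# (A^T y >= c is rewritten as -A^T y <= -c for linprog's <= convention).
At = [[A[i][j] for i in range(3)] for j in range(2)]
dual = linprog(b, A_ub=[[-a for a in row] for row in At], b_ub=[-ci for ci in c],
               bounds=[(0, None)] * 3)

print(-primal.fun, dual.fun)  # both print 36.0: the two optimal values are equal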

Linear Program
Linear programming (LP), also called linear optimization, is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements are represented by linear relationships. Linear programming is a special case of mathematical programming (also known as mathematical optimization). More formally, linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints. Its feasible region is a convex polytope, which is a set defined as the intersection of finitely many half spaces, each of which is defined by a linear inequality. Its objective function is a real-valued affine (linear) function defined on this polyhedron. A linear programming algorithm finds a point in the polytope where this function has the smallest (or largest) value if such a point exists. Linear programs are problems that can be expressed in canonical form as
:\begin{align} & \text{maximize} && \mathbf{c}^\mathsf{T} \mathbf{x} \\ & \text{subject to} && A\mathbf{x} \le \mathbf{b} \\ & \text{and} && \mathbf{x} \ge \mathbf{0} \end{align} ...
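For concreteness, here is one small instance of this canonical form (an illustrative example added here, matching the numerical sketch above rather than anything in the excerpt):
:\begin{align} & \text{maximize} && 3x_1 + 5x_2 \\ & \text{subject to} && x_1 \le 4, \quad 2x_2 \le 12, \quad 3x_1 + 2x_2 \le 18 \\ & \text{and} && x_1, x_2 \ge 0, \end{align}
i.e. \mathbf{c} = (3, 5)^\mathsf{T}, A = \begin{pmatrix} 1 & 0 \\ 0 & 2 \\ 3 & 2 \end{pmatrix}, \mathbf{b} = (4, 12, 18)^\mathsf{T}. The feasible region is a polygon whose optimal vertex is (2, 6), with objective value 36.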

Duality (optimization)
In mathematical optimization theory, duality or the duality principle is the principle that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem. If the primal is a minimization problem then the dual is a maximization problem (and vice versa). Any feasible solution to the primal (minimization) problem is at least as large as any feasible solution to the dual (maximization) problem. Therefore, the solution to the primal is an upper bound to the solution of the dual, and the solution of the dual is a lower bound to the solution of the primal. This fact is called weak duality. In general, the optimal values of the primal and dual problems need not be equal. Their difference is called the duality gap. For convex optimization problems, the duality gap is zero under a constraint qualification condition. This fact is called strong duality.
Dual problem
Usually the term "dual problem" refers to the ''Lagrangian dual problem'' but other ...
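A short worked illustration of the Lagrangian dual (added here as an example; not from the excerpt): for the convex problem \min_x x^2 subject to 1 - x \le 0, we have
:L(x,\lambda) = x^2 + \lambda(1 - x), \qquad g(\lambda) = \inf_x L(x,\lambda) = \lambda - \tfrac{\lambda^2}{4}, \qquad d^* = \max_{\lambda \ge 0} g(\lambda) = 1 \text{ at } \lambda = 2,
which matches the primal optimum p^* = 1 attained at x = 1, so the duality gap is zero, as strong duality for convex problems with a constraint qualification predicts.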

Duality Gap
In optimization problems in applied mathematics, the duality gap is the difference between the primal and dual solutions. If d^* is the optimal dual value and p^* is the optimal primal value then the duality gap is equal to p^* - d^*. This value is always greater than or equal to 0 (for minimization problems). The duality gap is zero if and only if strong duality holds. Otherwise the gap is strictly positive and weak duality holds. In general, given two dual pairs of separated locally convex spaces \left(X,X^*\right) and \left(Y,Y^*\right) and a function f: X \to \mathbb{R} \cup \{+\infty\}, we can define the primal problem by
:\inf_{x \in X} f(x).
If there are constraint conditions, these can be built into the function f by letting f = f + I_\mathrm{constraints}, where I_\mathrm{constraints} is the indicator function of the constraint set. Then let F: X \times Y \to \mathbb{R} \cup \{+\infty\} be a perturbation function such that F(x,0) = f(x). The ''duality gap'' is the difference given by
:\inf_{x \in X} F(x,0) - \sup_{y^* \in Y^*} \bigl(-F^*(0,y^*)\bigr),
where F^* is the convex conj ...
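A classical example (added for illustration; it does not appear in the excerpt) of a strictly positive gap, even for a convex problem, when no constraint qualification holds:
:\min_{y > 0,\; x} \; e^{-x} \quad \text{subject to} \quad x^2/y \le 0.
Feasibility forces x = 0, so p^* = 1, while the dual function g(\lambda) = \inf_{y>0,\,x} \bigl(e^{-x} + \lambda x^2/y\bigr) = 0 for every \lambda \ge 0, so d^* = 0 and the duality gap is p^* - d^* = 1.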

Shadow Price
A shadow price is the monetary value assigned to an abstract or intangible commodity which is not traded in the marketplace. This often takes the form of an externality. Shadow prices are also known as the recalculation of known market prices in order to account for the presence of distortionary market instruments (e.g. quotas, tariffs, taxes or subsidies). Shadow prices are the real economic prices given to goods and services after they have been appropriately adjusted by removing distortionary market instruments and incorporating the societal impact of the respective good or service. A shadow price is often calculated from a set of assumptions and estimates because reliable data are lacking, so it is subjective and somewhat inaccurate. The need for shadow prices arises as a result of “externalities” and the presence of distortionary market instruments. An externality is defined as a cost or benefit incurred by a third party as a result of production or consumption of a g ...
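In the linear-programming context of this page, the shadow price of a constraint is its dual variable: the rate at which the optimal objective changes as that constraint's right-hand side is relaxed. A rough numerical sketch (illustrative only, reusing the small LP from above and SciPy's linprog; the shadow prices are estimated by finite differences rather than read from the solver):

from scipy.optimize import linprog

c = [-3, -5]                      # maximize 3*x1 + 5*x2 (negated for linprog)
A = [[1, 0], [0, 2], [3, 2]]
b = [4, 12, 18]

def optimum(rhs):
    res = linprog(c, A_ub=A, b_ub=rhs, bounds=[(0, None)] * 2)
    return -res.fun               # optimal value of the maximization problem

eps = 1e-4
base = optimum(b)
for i in range(3):
    bumped = list(b)
    bumped[i] += eps
    print(f"shadow price of constraint {i}: {(optimum(bumped) - base) / eps:.3f}")
# prints roughly 0.000, 1.500, 1.000 -- the optimal dual variables of this LP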

Simplex Algorithm
In mathematical optimization, Dantzig's simplex algorithm (or simplex method) is a popular algorithm for linear programming. The name of the algorithm is derived from the concept of a simplex and was suggested by T. S. Motzkin. Simplices are not actually used in the method, but one interpretation of it is that it operates on simplicial ''cones'', and these become proper simplices with an additional constraint. The simplicial cones in question are the corners (i.e., the neighborhoods of the vertices) of a geometric object called a polytope. The shape of this polytope is defined by the constraints applied to the objective function.
History
George Dantzig worked on planning methods for the US Army Air Force during World War II using a desk calculator. During 1946 his colleague challenged him to mechanize the planning process to distract him from taking another job. Dantzig formulated the problem as linear inequalities inspired by the work of Wassily Leontief; however, at that t ...
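A compact tableau implementation (a teaching sketch added here, not Dantzig's original formulation; it assumes the problem is given as maximize c^T x subject to Ax <= b with b >= 0, so the slack variables form an initial feasible basis, and it omits anti-cycling safeguards such as Bland's rule):

import numpy as np

def simplex(c, A, b):
    """Maximize c @ x subject to A @ x <= b and x >= 0, assuming b >= 0."""
    m, n = A.shape
    # Tableau: constraint rows plus an objective row; columns are x, slacks, rhs.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -c                        # reduced costs of the original variables
    basis = list(range(n, n + m))         # start from the all-slack basis
    while True:
        j = int(np.argmin(T[-1, :-1]))    # entering column: most negative reduced cost
        if T[-1, j] >= -1e-12:
            break                         # no improving column: current basis is optimal
        ratios = [T[i, -1] / T[i, j] if T[i, j] > 1e-12 else np.inf for i in range(m)]
        i = int(np.argmin(ratios))        # leaving row: minimum-ratio test
        if ratios[i] == np.inf:
            raise ValueError("problem is unbounded")
        T[i] /= T[i, j]                   # pivot on entry (i, j)
        for r in range(m + 1):
            if r != i:
                T[r] -= T[r, j] * T[i]
        basis[i] = j
    x = np.zeros(n + m)
    for i, j in enumerate(basis):
        x[j] = T[i, -1]
    return T[-1, -1], x[:n]

# The small LP used above: optimal value 36 at x = (2, 6).
value, x = simplex(np.array([3.0, 5.0]),
                   np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]),
                   np.array([4.0, 12.0, 18.0]))
print(value, x)

At termination, the objective-row entries under the slack columns (here 0, 1.5 and 1) are the optimal dual variables, i.e. the shadow prices computed numerically in the sketch above.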

Farkas Lemma
In mathematics, Farkas' lemma is a solvability theorem for a finite system of linear inequalities. It was originally proven by the Hungarian mathematician Gyula Farkas. Farkas' lemma is the key result underpinning linear programming duality and has played a central role in the development of mathematical optimization (alternatively, mathematical programming). It is used amongst other things in the proof of the Karush–Kuhn–Tucker theorem in nonlinear programming. Remarkably, in the area of the foundations of quantum theory, the lemma also underlies the complete set of Bell inequalities in the form of necessary and sufficient conditions for the existence of a local hidden-variable theory, given data from any specific set of measurements. Generalizations of Farkas' lemma concern solvability theorems for convex inequalities, i.e., infinite systems of linear inequalities. Farkas' lemma belongs to a class of statements called "theorems of the alternative": a theor ...
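One standard formulation of the lemma (added here for reference, since the excerpt breaks off before stating it): given a matrix A \in \mathbb{R}^{m \times n} and a vector \mathbf{b} \in \mathbb{R}^m, exactly one of the following two systems has a solution:
:\exists\, \mathbf{x} \in \mathbb{R}^n:\; A\mathbf{x} = \mathbf{b},\ \mathbf{x} \ge \mathbf{0} \qquad \text{or} \qquad \exists\, \mathbf{y} \in \mathbb{R}^m:\; A^\mathsf{T}\mathbf{y} \ge \mathbf{0},\ \mathbf{b}^\mathsf{T}\mathbf{y} < 0.
(If both held, then 0 \le (A^\mathsf{T}\mathbf{y})^\mathsf{T}\mathbf{x} = \mathbf{y}^\mathsf{T}A\mathbf{x} = \mathbf{y}^\mathsf{T}\mathbf{b} < 0, a contradiction; the substantial content of the lemma is that at least one of the two systems is always solvable.)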

Max-flow Min-cut Theorem
In computer science and optimization theory, the max-flow min-cut theorem states that in a flow network, the maximum amount of flow passing from the ''source'' to the ''sink'' is equal to the total weight of the edges in a minimum cut, i.e., the smallest total weight of the edges which if removed would disconnect the source from the sink. This is a special case of the duality theorem for linear programs and can be used to derive Menger's theorem and the Kőnig–Egerváry theorem.
Definitions and statement
The theorem equates two quantities: the maximum flow through a network, and the minimum capacity of a cut of the network. To state the theorem, each of these notions must first be defined.
Network
A network consists of
* a finite directed graph G = (V, E), where ''V'' denotes the finite set of vertices and ''E'' is the set of directed edges;
* a source s \in V and a sink t \in V;
* a capacity function, which is a mapping c:E\to\R^+ denoted by c_{uv} or c(u, v) for (u, v) \in E. It represents the maximum amount of flow that ...
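A small check of the statement using the NetworkX library (an illustrative sketch added here; the graph and its capacities are made up):

import networkx as nx

G = nx.DiGraph()
G.add_edge("s", "a", capacity=3)
G.add_edge("s", "b", capacity=2)
G.add_edge("a", "b", capacity=1)
G.add_edge("a", "t", capacity=2)
G.add_edge("b", "t", capacity=3)

flow_value, flow = nx.maximum_flow(G, "s", "t")
cut_value, (reachable, unreachable) = nx.minimum_cut(G, "s", "t")
print(flow_value, cut_value)   # both 5: the maximum flow equals the minimum cut capacity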

Minimax Theorem
In the mathematical area of game theory, a minimax theorem is a theorem providing conditions that guarantee that the max–min inequality is also an equality. The first theorem in this sense is von Neumann's minimax theorem from 1928, which was considered the starting point of game theory. Since then, several generalizations and alternative versions of von Neumann's original theorem have appeared in the literature.
Zero-sum games
The minimax theorem was first proven and published in 1928 by John von Neumann, who is quoted as saying "''As far as I can see, there could be no theory of games ... without that theorem ... I thought there was nothing worth publishing until the Minimax Theorem was proved''". Formally, von Neumann's minimax theorem states: Let X \subset \mathbb{R}^n and Y \subset \mathbb{R}^m be compact convex sets. If f: X \times Y \rightarrow \mathbb{R} is a continuous function that is concave-convex, i.e.
:f(\cdot,y): X \to \mathbb{R} is concave for fixed y, and
:f(x,\cdot): Y \to \mathbb{R} is convex for fixed x, ...
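For a finite two-player zero-sum game, the theorem can be seen computationally: each player's optimal mixed strategy solves a linear program, the two programs are dual to one another, and their common optimal value is the value of the game. A sketch added here for illustration (the payoff matrix is made up), assuming SciPy's linprog:

import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, -1.0], [-1.0, 1.0]])   # payoff to the row player
m, n = A.shape

# Row player: maximize v subject to (p^T A)_j >= v for every column j,
# sum(p) = 1, p >= 0.  Variables are (p_1..p_m, v); linprog minimizes, so use -v.
row = linprog(np.r_[np.zeros(m), -1.0],
              A_ub=np.hstack([-A.T, np.ones((n, 1))]), b_ub=np.zeros(n),
              A_eq=np.r_[np.ones(m), 0.0].reshape(1, -1), b_eq=[1.0],
              bounds=[(0, None)] * m + [(None, None)])

# Column player: minimize w subject to (A q)_i <= w for every row i,
# sum(q) = 1, q >= 0.  Variables are (q_1..q_n, w).
col = linprog(np.r_[np.zeros(n), 1.0],
              A_ub=np.hstack([A, -np.ones((m, 1))]), b_ub=np.zeros(m),
              A_eq=np.r_[np.ones(n), 0.0].reshape(1, -1), b_eq=[1.0],
              bounds=[(0, None)] * n + [(None, None)])

print(-row.fun, col.fun)   # both ~0.2: the max-min value equals the min-max value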

Coefficient
In mathematics, a coefficient is a multiplicative factor in some term of a polynomial, a series, or an expression; it is usually a number, but may be any expression (including variables such as a, b and c). When the coefficients are themselves variables, they may also be called parameters. For example, the polynomial 2x^2-x+3 has coefficients 2, −1, and 3, and the powers of the variable x in the polynomial ax^2+bx+c have coefficient parameters a, b, and c. The constant coefficient is the coefficient not attached to variables in an expression. For example, the constant coefficients of the expressions above are the number 3 and the parameter ''c'', respectively. The coefficient attached to the highest degree of the variable in a polynomial is referred to as the leading coefficient. For example, in the expressions above, the leading coefficients are 2 and ''a'', respectively.
Terminology and definition
In mathematics, a coefficient is a multiplicative factor in some term of a ...