In mathematical optimization, linear-fractional programming (LFP) is a generalization of linear programming (LP). Whereas the objective function in a linear program is a linear function, the objective function in a linear-fractional program is a ratio of two linear functions. A linear program can be regarded as a special case of a linear-fractional program in which the denominator is the constant function 1.

Formally, a linear-fractional program is defined as the problem of maximizing (or minimizing) a ratio of affine functions over a polyhedron,

: \begin{align} \text{maximize} \quad & \frac{\mathbf{c}^T \mathbf{x} + \alpha}{\mathbf{d}^T \mathbf{x} + \beta} \\ \text{subject to} \quad & A\mathbf{x} \leq \mathbf{b}, \end{align}

where \mathbf{x} \in \mathbb{R}^n represents the vector of variables to be determined, \mathbf{c}, \mathbf{d} \in \mathbb{R}^n and \mathbf{b} \in \mathbb{R}^m are vectors of (known) coefficients, A \in \mathbb{R}^{m \times n} is a (known) matrix of coefficients, and \alpha, \beta \in \mathbb{R} are constants. The constraints have to restrict the feasible region to \{\mathbf{x} \mid \mathbf{d}^T\mathbf{x} + \beta > 0\}, i.e. the region on which the denominator is positive. Alternatively, the denominator of the objective function has to be strictly negative in the entire feasible region.
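For concreteness, a small instance of this form (the data are chosen here purely for illustration) is

: \begin{align} \text{maximize} \quad & \frac{3x_1 + x_2}{x_1 + 2x_2 + 1} \\ \text{subject to} \quad & x_1 + x_2 \leq 4, \quad 2x_1 + x_2 \leq 6, \quad x_1 \geq 0, \quad x_2 \geq 0, \end{align}

where the nonnegativity conditions can be written as the rows -x_1 \leq 0 and -x_2 \leq 0 of A\mathbf{x} \leq \mathbf{b}. The denominator satisfies x_1 + 2x_2 + 1 \geq 1 > 0 on the whole feasible region, so the positivity condition above holds. This instance is reused in the computational sketches further below.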


Motivation by comparison to linear programming

Both linear programming and linear-fractional programming represent optimization problems using linear equations and linear inequalities, which for each problem instance define a feasible set. Fractional linear programs have a richer set of objective functions. Informally, linear programming computes a policy delivering the best outcome, such as maximum profit or lowest cost. In contrast, linear-fractional programming is used to achieve the highest ''ratio'' of outcome to cost, the ratio representing the highest efficiency. For example, in the context of LP we maximize the objective function profit = income − cost and might obtain a maximum profit of $100 (= $1100 of income − $1000 of cost); thus, in LP we have an efficiency of $100/$1000 = 0.1. Using LFP we might obtain an efficiency of $10/$50 = 0.2, with a profit of only $10 but requiring only $50 of investment.


Transformation to a linear program

Any linear-fractional program can be transformed into a linear program, assuming that the feasible region is non-empty and bounded, using the Charnes–Cooper transformation. The main idea is to introduce a new non-negative variable t to the program which will be used to rescale the constants involved in the program (\alpha, \beta, \mathbf{b}). This allows us to require that the denominator of the objective function (\mathbf{d}^T \mathbf{x} + \beta) equals 1. (To understand the transformation, it is instructive to consider the simpler special case with \alpha = \beta = 0.)

Formally, the linear program obtained via the Charnes–Cooper transformation uses the transformed variables \mathbf{y} \in \mathbb{R}^n and t \geq 0:

: \begin{align} \text{maximize} \quad & \mathbf{c}^T \mathbf{y} + \alpha t \\ \text{subject to} \quad & A\mathbf{y} \leq \mathbf{b} t \\ & \mathbf{d}^T \mathbf{y} + \beta t = 1 \\ & t \geq 0. \end{align}

A solution \mathbf{x} to the original linear-fractional program can be translated to a solution of the transformed linear program via the equalities

: \mathbf{y} = \frac{1}{\mathbf{d}^T\mathbf{x} + \beta} \cdot \mathbf{x} \quad \text{and} \quad t = \frac{1}{\mathbf{d}^T\mathbf{x} + \beta}.

Conversely, a solution for \mathbf{y} and t of the transformed linear program can be translated to a solution of the original linear-fractional program via

: \mathbf{x} = \frac{1}{t}\mathbf{y}.
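As a minimal computational sketch (not part of the original article), the following Python code applies the Charnes–Cooper transformation to the illustrative instance introduced earlier and solves the resulting linear program with SciPy's linprog, assuming NumPy and SciPy are available; the problem data are the assumed example values, not data from any source.

import numpy as np
from scipy.optimize import linprog

# Illustrative LFP data (assumed for the example, not from a source):
#   maximize (3*x1 + x2) / (x1 + 2*x2 + 1)
#   subject to x1 + x2 <= 4,  2*x1 + x2 <= 6,  x1, x2 >= 0.
A = np.array([[1.0, 1.0], [2.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 6.0, 0.0, 0.0])
c = np.array([3.0, 1.0]); alpha = 0.0
d = np.array([1.0, 2.0]); beta = 1.0   # denominator >= 1 on the feasible set
n = len(c)

# Charnes–Cooper variables (y, t): maximize c^T y + alpha*t
# subject to A y - b t <= 0,  d^T y + beta*t = 1,  t >= 0 (y free).
obj    = -np.append(c, alpha)                  # linprog minimizes, so negate
A_ub   = np.hstack([A, -b[:, None]])           # A y - b t <= 0
b_ub   = np.zeros(len(b))
A_eq   = np.append(d, beta)[None, :]           # d^T y + beta*t = 1
b_eq   = [1.0]
bounds = [(None, None)] * n + [(0, None)]      # y free, t >= 0

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
y, t = res.x[:n], res.x[n]
x = y / t                                      # recover the LFP optimum
print(x, (c @ x + alpha) / (d @ x + beta))     # expected: x = [3, 0], ratio 2.25

The recovery step x = y/t is exactly the reverse translation given above; any other LP solver could be substituted for linprog.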


Duality

Let the dual variables associated with the constraints A\mathbf{y} - \mathbf{b} t \leq \mathbf{0} and \mathbf{d}^T \mathbf{y} + \beta t - 1 = 0 be denoted by \mathbf{u} and \lambda, respectively. Then the dual of the LFP above is

: \begin{align} \text{minimize} \quad & \lambda \\ \text{subject to} \quad & A^T\mathbf{u} + \lambda \mathbf{d} = \mathbf{c} \\ & -\mathbf{b}^T \mathbf{u} + \lambda \beta \geq \alpha \\ & \mathbf{u} \in \mathbb{R}_+^m,\ \lambda \in \mathbb{R}, \end{align}

which is an LP and which coincides with the dual of the equivalent linear program resulting from the Charnes–Cooper transformation.
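The following sketch (again an illustration added here, using the same assumed example data as before) builds this dual LP with SciPy's linprog and checks that its optimal value \lambda matches the optimal primal ratio, as strong LP duality predicts.

import numpy as np
from scipy.optimize import linprog

# Same illustrative data as in the Charnes–Cooper sketch (assumed, not from a source).
A = np.array([[1.0, 1.0], [2.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 6.0, 0.0, 0.0])
c = np.array([3.0, 1.0]); alpha = 0.0
d = np.array([1.0, 2.0]); beta = 1.0
m = len(b)

# Dual LP: minimize lambda  subject to  A^T u + lambda*d = c,
#          -b^T u + lambda*beta >= alpha,  u >= 0, lambda free.
obj    = np.append(np.zeros(m), 1.0)           # objective: lambda
A_eq   = np.hstack([A.T, d[:, None]])          # A^T u + lambda*d = c
b_eq   = c
A_ub   = np.append(b, -beta)[None, :]          # b^T u - lambda*beta <= -alpha
b_ub   = [-alpha]
bounds = [(0, None)] * m + [(None, None)]      # u >= 0, lambda free

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print(res.x[-1])   # optimal lambda; expected to equal the optimal ratio 2.25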


Properties and algorithms

The objective function in a linear-fractional problem is both quasiconcave and quasiconvex (hence quasilinear) with a monotone property, pseudoconvexity, which is a stronger property than quasiconvexity. A linear-fractional objective function is both pseudoconvex and pseudoconcave, hence pseudolinear. Since an LFP can be transformed to an LP, it can be solved using any LP solution method, such as the simplex algorithm (of George B. Dantzig), the criss-cross algorithm, or interior-point methods.

