In mathematical optimization, the ellipsoid method is an iterative method for minimizing convex functions. When specialized to solving feasible linear optimization problems with rational data, the ellipsoid method is an algorithm which finds an optimal solution in a number of steps that is polynomial in the input size. The ellipsoid method generates a sequence of ellipsoids whose volume uniformly decreases at every step, thus enclosing a minimizer of a convex function.


History

The ellipsoid method has a long history. As an iterative method, a preliminary version was introduced by Naum Z. Shor. In 1972, an approximation algorithm for real convex minimization was studied by Arkadi Nemirovski and David B. Yudin (Judin). As an algorithm for solving linear programming problems with rational data, the ellipsoid algorithm was studied by Leonid Khachiyan; Khachiyan's achievement was to prove the polynomial-time solvability of linear programs. This was a notable step from a theoretical perspective: the standard algorithm for solving linear programs at the time was the simplex algorithm, whose running time is ''typically'' linear in the size of the problem, but for which examples exist where it is ''exponential'' in the size of the problem. As such, having an algorithm that is ''guaranteed'' to be polynomial for all cases was a theoretical breakthrough: Khachiyan's work showed, for the first time, that the runtime of an algorithm for solving linear programs can be proven to be polynomial. In practice, however, the ellipsoid method is fairly slow and of little practical interest, though it provided inspiration for later work that turned out to be of much greater practical use. Specifically, Karmarkar's algorithm, an interior-point method, is much faster than the ellipsoid method in practice; Karmarkar's algorithm is also faster in the worst case. The ellipsoid algorithm allows complexity theorists to achieve (worst-case) bounds that depend on the dimension of the problem and on the size of the data, but not on the number of rows, so it remained important in combinatorial optimization theory for many years. (V. Chandru and M.R. Rao, Integer Programming, Chapter 32 in ''Algorithms and Theory of Computation Handbook'', edited by M.J. Atallah, CRC Press 1999, 32-1 to 32-45.) Only in the 21st century have interior-point algorithms with similar complexity properties appeared.


Description

A convex minimization problem consists of the following ingredients:
* A convex function f_0(x): \mathbb{R}^n \to \mathbb{R} to be minimized over the vector ''x'' (containing ''n'' variables);
* Convex inequality constraints of the form f_i(x) \leqslant 0, where the functions f_i are convex; these constraints define a convex set ''Q'';
* Linear equality constraints of the form h_i(x) = 0.

We are also given an initial ellipsoid \mathcal{E}^{(0)} \subset \mathbb{R}^n defined as
:\mathcal{E}^{(0)} = \left\{ z \in \mathbb{R}^n \ : \ \left(z - x_0\right)^T P_{(0)}^{-1} \left(z - x_0\right) \leqslant 1 \right\}
containing a minimizer x^*, where P_{(0)} \succ 0 and x_0 is the center of \mathcal{E}^{(0)}.

Finally, we require the existence of a separation oracle for the convex set ''Q''. Given a point x \in \mathbb{R}^n, the oracle should return one of two answers:
* "The point x is in Q", or
* "The point x is not in Q, and moreover, here is a hyperplane that separates x from Q", that is, a vector c such that c \cdot x < c \cdot y for all y \in Q.

The output of the ellipsoid method is either:
* Any point in the polytope ''Q'' (i.e., any feasible point), or
* A proof that ''Q'' is empty.

Inequality-constrained minimization of a function that is zero everywhere corresponds to the problem of simply identifying any feasible point. It turns out that any linear programming problem can be reduced to a linear feasibility problem (e.g. minimize the zero function subject to some linear inequality and equality constraints). One way to do this is by combining the primal and dual linear programs into one program, and adding the additional (linear) constraint that the value of the primal solution is no worse than the value of the dual solution. Another way is to treat the objective of the linear program as an additional constraint and use binary search to find the optimum value.
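For a set ''Q'' given explicitly by linear inequalities, a separation oracle can be implemented by scanning for a violated row. The following Python sketch is illustrative (the function name and return convention are assumptions, not part of any standard API); it returns a vector c obeying the sign convention c \cdot x < c \cdot y used above.

```python
import numpy as np

def separation_oracle(A, b, x):
    """Separation oracle for Q = {y : A @ y <= b} (illustrative helper).
    Returns (True, None) if x lies in Q; otherwise (False, c) where
    c = -A[j] for a violated row j, so c @ x < c @ y for all y in Q."""
    violations = A @ x - b
    j = int(np.argmax(violations))      # index of the most violated inequality
    if violations[j] <= 0:
        return True, None               # every inequality holds at x
    return False, -A[j]                 # row j yields a separating hyperplane
```

Note the sign flip: the violated row satisfies A[j] @ x > b[j] >= A[j] @ y for every y in Q, so negating it gives the direction c with c @ x strictly smaller than c @ y on Q.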


Unconstrained minimization

At the ''k''-th iteration of the algorithm, we have a point x^{(k)} at the center of an ellipsoid
:\mathcal{E}^{(k)} = \left\{ x \in \mathbb{R}^n \ : \ \left(x - x^{(k)}\right)^T P_{(k)}^{-1} \left(x - x^{(k)}\right) \leqslant 1 \right\}.
We query the cutting-plane oracle to obtain a vector g^{(k)} \in \mathbb{R}^n such that
:g^{(k)T} \left(x^* - x^{(k)}\right) \leqslant 0.
We therefore conclude that
:x^* \in \mathcal{E}^{(k)} \cap \left\{ z \ : \ g^{(k)T} \left(z - x^{(k)}\right) \leqslant 0 \right\}.
We set \mathcal{E}^{(k+1)} to be the ellipsoid of minimal volume containing the half-ellipsoid described above and compute x^{(k+1)}. The update is given by
:\begin{align} x^{(k+1)} &= x^{(k)} - \frac{1}{n+1} P_{(k)} \tilde{g}^{(k)} \\ P_{(k+1)} &= \frac{n^2}{n^2-1} \left(P_{(k)} - \frac{2}{n+1} P_{(k)} \tilde{g}^{(k)} \tilde{g}^{(k)T} P_{(k)} \right) \end{align}
where
:\tilde{g}^{(k)} = \left(\frac{1}{\sqrt{g^{(k)T} P_{(k)} g^{(k)}}}\right) g^{(k)}.
The stopping criterion is given by the property that
:\sqrt{g^{(k)T} P_{(k)} g^{(k)}} \leqslant \epsilon \quad \Rightarrow \quad f \left(x^{(k)} \right) - f \left(x^* \right) \leqslant \epsilon.
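The update above translates almost directly into code. The following Python sketch is illustrative (the function name and interface are assumptions); it requires n \geqslant 2, since the factor n^2/(n^2-1) is undefined for n = 1, and it tracks the best iterate seen, since the ellipsoid method is not a descent method.

```python
import numpy as np

def ellipsoid_minimize(f, grad, x0, P0, eps=1e-6, max_iter=10000):
    """Illustrative sketch of the ellipsoid method for minimizing a convex f
    with subgradient oracle grad, starting from the ellipsoid
    {x : (x - x0)^T P0^{-1} (x - x0) <= 1}.  Assumes n >= 2."""
    n = len(x0)
    x = np.asarray(x0, dtype=float)
    P = np.asarray(P0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for _ in range(max_iter):
        g = grad(x)
        gamma = np.sqrt(g @ P @ g)          # sqrt(g^T P g)
        if not gamma > eps:                 # stopping criterion (or degeneracy)
            break
        Pg = P @ (g / gamma)                # P times the normalized subgradient
        x = x - Pg / (n + 1)                # new center
        P = (n**2 / (n**2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pg, Pg))
        fx = f(x)
        if fx < best_f:                     # iterates need not decrease f
            best_x, best_f = x.copy(), fx
    return best_x, best_f
```

By the stopping criterion, when the loop exits with \sqrt{g^T P g} \leqslant \epsilon the current center, and hence the best iterate, is within \epsilon of the optimal value.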


Inequality-constrained minimization

At the ''k''-th iteration of the algorithm for constrained minimization, we have a point x^{(k)} at the center of an ellipsoid \mathcal{E}^{(k)} as before. We also must maintain a list of values f_{\rm best}^{(k)} recording the smallest objective value of feasible iterates so far. Depending on whether or not the point x^{(k)} is feasible, we perform one of two tasks:
*If x^{(k)} is feasible, perform essentially the same update as in the unconstrained case, by choosing a subgradient g_0 that satisfies
::g_0^T \left(x^* - x^{(k)}\right) + f_0 \left(x^{(k)}\right) - f_{\rm best}^{(k)} \leqslant 0
*If x^{(k)} is infeasible and violates the ''j''-th constraint, update the ellipsoid with a feasibility cut. Our feasibility cut may be a subgradient g_j of f_j which must satisfy
::g_j^T \left(z - x^{(k)}\right) + f_j \left(x^{(k)}\right) \leqslant 0
for all feasible ''z''.
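The two cases above can be sketched as follows. This illustration is a simplification: it cuts through the center, g^T \left(z - x^{(k)}\right) \leqslant 0, rather than using the deeper cuts written above, and all names and interfaces are assumptions.

```python
import numpy as np

def ellipsoid_constrained(f0, g0, fs, gs, x0, P0, max_iter=5000):
    """Illustrative sketch: minimize f0(x) subject to fs[i](x) <= 0, where
    g0 and gs[i] are (sub)gradient oracles for f0 and fs[i].  Applies an
    objective cut at feasible centers and a feasibility cut at infeasible
    ones, both through the center of the current ellipsoid."""
    n = len(x0)
    x = np.asarray(x0, dtype=float)
    P = np.asarray(P0, dtype=float)
    best_x, best_f = None, np.inf
    for _ in range(max_iter):
        violated = [i for i, fi in enumerate(fs) if fi(x) > 0.0]
        if not violated:
            fx = f0(x)                      # feasible: objective cut
            if fx < best_f:
                best_x, best_f = x.copy(), fx
            g = g0(x)
        else:
            g = gs[violated[0]](x)          # infeasible: feasibility cut
        gamma = np.sqrt(g @ P @ g)
        if not gamma > 1e-9:                # converged or degenerate
            break
        Pg = P @ (g / gamma)
        x = x - Pg / (n + 1)
        P = (n**2 / (n**2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pg, Pg))
    return best_x, best_f
```

A feasibility cut discards the half of the ellipsoid on which the violated constraint can only get worse, so the minimizer, if one exists, is never cut away.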


Performance

The ellipsoid method is used on low-dimensional problems, such as planar location problems, where it is numerically stable. Even on "small"-sized problems, it suffers from numerical instability and poor performance in practice. However, the ellipsoid method is an important theoretical technique in combinatorial optimization. In computational complexity theory, the ellipsoid algorithm is attractive because its complexity depends on the number of columns and the digital size of the coefficients, but not on the number of rows. In the 21st century, interior-point algorithms with similar properties have appeared.


Notes


Further reading

* Dmitris Alevras and Manfred W. Padberg, ''Linear Optimization and Extensions: Problems and Extensions'', Universitext, Springer-Verlag, 2001. (Problems from Padberg with solutions.)
* V. Chandru and M.R. Rao, Linear Programming, Chapter 31 in ''Algorithms and Theory of Computation Handbook'', edited by M.J. Atallah, CRC Press 1999, 31-1 to 31-37.
* V. Chandru and M.R. Rao, Integer Programming, Chapter 32 in ''Algorithms and Theory of Computation Handbook'', edited by M.J. Atallah, CRC Press 1999, 32-1 to 32-45.
* George B. Dantzig and Mukund N. Thapa, ''Linear Programming 1: Introduction'', Springer-Verlag, 1997.
* George B. Dantzig and Mukund N. Thapa, ''Linear Programming 2: Theory and Extensions'', Springer-Verlag, 2003.
* L. Lovász, ''An Algorithmic Theory of Numbers, Graphs, and Convexity'', CBMS-NSF Regional Conference Series in Applied Mathematics 50, SIAM, Philadelphia, Pennsylvania, 1986.
* Katta G. Murty, ''Linear Programming'', Wiley, 1983.
* M. Padberg, ''Linear Optimization and Extensions'', Second Edition, Springer-Verlag, 1999.
* Christos H. Papadimitriou and Kenneth Steiglitz, ''Combinatorial Optimization: Algorithms and Complexity'', corrected republication with a new preface, Dover.
* Alexander Schrijver, ''Theory of Linear and Integer Programming'', John Wiley & Sons, 1998.


External links


* EE364b, a Stanford course homepage