Interior-point methods (also referred to as barrier methods or IPMs) are a certain class of algorithms that solve linear and nonlinear convex optimization problems.
An interior point method was discovered by Soviet mathematician I. I. Dikin in 1967 and reinvented in the U.S. in the mid-1980s.
In 1984, Narendra Karmarkar developed a method for linear programming called Karmarkar's algorithm, which runs in provably polynomial time and is also very efficient in practice. It enabled solutions of linear programming problems that were beyond the capabilities of the simplex method. In contrast to the simplex method, which moves along the boundary of the feasible region, Karmarkar's method reaches an optimal solution by traversing its interior. The method can be generalized to convex programming based on a self-concordant barrier function used to encode the convex set.
Any convex optimization problem can be transformed into minimizing (or maximizing) a linear function over a convex set by converting it to the epigraph form.
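As a sketch of this transformation (the notation here is illustrative): for a convex function <math>f</math> and a convex set <math>C</math>,
:<math>\min_{x \in C} f(x) \quad\Longleftrightarrow\quad \min_{(x,t)}\ \{\, t : f(x) \le t,\ x \in C \,\},</math>
where the objective <math>t</math> is linear and the new feasible set <math>\{(x,t) : f(x) \le t,\ x \in C\}</math> (the epigraph of <math>f</math> intersected with <math>C \times \mathbb{R}</math>) is convex.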
The idea of encoding the feasible set using a barrier and designing barrier methods was studied by Anthony V. Fiacco, Garth P. McCormick, and others in the early 1960s. These ideas were mainly developed for general nonlinear programming, but they were later abandoned due to the presence of more competitive methods for this class of problems (e.g. sequential quadratic programming).
Yurii Nesterov and Arkadi Nemirovski came up with a special class of such barriers that can be used to encode any convex set. They guarantee that the number of iterations of the algorithm is bounded by a polynomial in the dimension and accuracy of the solution.
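As a rough illustration of the resulting bounds (constants are omitted and the exact statement depends on the variant of the method): for a <math>\nu</math>-self-concordant barrier, a path-following scheme needs on the order of <math>\sqrt{\nu}\,\log(1/\varepsilon)</math> Newton iterations to reach accuracy <math>\varepsilon</math>; the logarithmic barrier <math>-\sum_{i=1}^m \log(b_i - a_i^T x)</math> for a polyhedron defined by <math>m</math> inequalities is a standard example, with parameter <math>\nu = m</math>.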
Karmarkar's breakthrough revitalized the study of interior-point methods and barrier problems, showing that it was possible to create an algorithm for linear programming that had polynomial complexity and, moreover, was competitive with the simplex method.
Khachiyan's ellipsoid method (1979) was already a polynomial-time algorithm; however, it was too slow to be of practical interest.
The class of primal-dual path-following interior-point methods is considered the most successful.
Mehrotra's predictor–corrector algorithm provides the basis for most implementations of this class of methods.
== Primal-dual interior-point method for nonlinear optimization ==
The primal-dual method's idea is easy to demonstrate for constrained nonlinear optimization. For simplicity, consider the all-inequality version of a nonlinear optimization problem:

:<math>\begin{align} \operatorname{minimize}\quad & f(x) \\ \text{subject to}\quad & c_i(x) \ge 0, \quad i = 1, \ldots, m, \end{align} \qquad (1)</math>

where <math>x \in \mathbb{R}^n</math>, <math>f : \mathbb{R}^n \to \mathbb{R}</math>, and <math>c_i : \mathbb{R}^n \to \mathbb{R}</math>.
This inequality-constrained optimization problem is then solved by converting it into an unconstrained objective function whose minimum we hope to find efficiently. Specifically, the logarithmic barrier function associated with (1) is

:<math>B(x, \mu) = f(x) - \mu \sum_{i=1}^m \log(c_i(x)). \qquad (2)</math>

Here <math>\mu</math> is a small positive scalar, sometimes called the "barrier parameter". As <math>\mu</math> converges to zero, the minimum of <math>B(x, \mu)</math> should converge to a solution of (1).
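As a minimal worked example of this limit (the one-dimensional problem here is purely illustrative), take <math>f(x) = x</math> with the single constraint <math>c_1(x) = x - 1 \ge 0</math>. Then

:<math>B(x, \mu) = x - \mu \log(x - 1), \qquad B'(x, \mu) = 1 - \frac{\mu}{x - 1} = 0 \;\Longrightarrow\; x_\mu = 1 + \mu,</math>

so the barrier minimizer <math>x_\mu</math> tends to the constrained solution <math>x^\ast = 1</math> as <math>\mu \to 0</math>.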
The gradient of the barrier function is

:<math>g_b = g - \mu \sum_{i=1}^m \frac{1}{c_i(x)} \nabla c_i(x), \qquad (3)</math>

where <math>g</math> is the gradient of the original function <math>f(x)</math>, and <math>\nabla c_i</math> is the gradient of <math>c_i</math>.
In addition to the original ("primal") variable <math>x</math>, we introduce a Lagrange multiplier-inspired dual variable <math>\lambda \in \mathbb{R}^m</math> and require

:<math>c_i(x)\,\lambda_i = \mu, \quad i = 1, \ldots, m. \qquad (4)</math>
Equation (4) is sometimes called the "perturbed complementarity" condition, for its resemblance to "complementary slackness" in the KKT conditions.

We try to find those <math>(x_\mu, \lambda_\mu)</math> for which the gradient of the barrier function is zero. Substituting <math>\lambda_i = \mu / c_i(x)</math> from (4) into (3) and setting the result to zero, we get an equation for the gradient:

:<math>g - A^T \lambda = 0, \qquad (5)</math>
where the matrix <math>A</math> is the Jacobian of the constraints <math>c(x)</math>.
The intuition behind (5) is that the gradient of <math>f(x)</math> should lie in the subspace spanned by the constraints' gradients. The "perturbed complementarity" condition (4) with small <math>\mu</math> can be understood as requiring that the solution either lie near the boundary <math>c_i(x) = 0</math>, or that the projection of the gradient <math>\nabla f</math> onto the normal of the constraint <math>c_i(x)</math> be almost zero.
Applying Newton's method to (4) and (5), we get an equation for the <math>(x, \lambda)</math> update <math>(p_x, p_\lambda)</math>:

:<math>\begin{pmatrix} W & -A^T \\ \Lambda A & C \end{pmatrix} \begin{pmatrix} p_x \\ p_\lambda \end{pmatrix} = \begin{pmatrix} -g + A^T \lambda \\ \mu 1 - C \lambda \end{pmatrix},</math>

where <math>W</math> is the Hessian matrix of the Lagrangian <math>f(x) - \sum_{i=1}^m \lambda_i c_i(x)</math>, <math>\Lambda</math> is the diagonal matrix of <math>\lambda</math>, and <math>C</math> is the diagonal matrix with <math>C_{ii} = c_i(x)</math>.
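These two block rows are, respectively, the Newton linearizations of (5) and of (4) about the current iterate:

:<math>W p_x - A^T p_\lambda = -(g - A^T \lambda), \qquad \Lambda A\, p_x + C\, p_\lambda = \mu 1 - C \lambda.</math>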
Because of (1) and (4), the condition

:<math>\lambda \ge 0</math>

should be enforced at each step. This can be done by choosing an appropriate step length <math>\alpha</math>:

:<math>(x, \lambda) \leftarrow (x + \alpha p_x,\ \lambda + \alpha p_\lambda).</math>
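The following is a minimal numerical sketch of this iteration, assuming NumPy; the two-variable objective, the single disk constraint, the fixed halving of <math>\mu</math>, and the backtracking rule for <math>\alpha</math> are all illustrative choices rather than part of the method itself.

<syntaxhighlight lang="python">
import numpy as np

# Toy instance (illustrative only): minimize f(x) = (x1-2)^2 + (x2-1)^2
# subject to the single constraint c(x) = 1 - x1^2 - x2^2 >= 0 (unit disk).
# The unconstrained minimizer (2, 1) is infeasible, so the constraint is
# active at the solution x* = (2, 1)/sqrt(5) ~ (0.894, 0.447).

def f_grad(x):         # g: gradient of f
    return 2.0 * (x - np.array([2.0, 1.0]))

def f_hess(x):         # Hessian of f (constant for this quadratic)
    return 2.0 * np.eye(2)

def c(x):              # constraint values c_i(x), here m = 1
    return np.array([1.0 - x @ x])

def c_jac(x):          # A: Jacobian of the constraints (m x n)
    return np.array([-2.0 * x])

def lag_hess(x, lam):  # W: Hessian of the Lagrangian f(x) - lambda^T c(x)
    return f_hess(x) - lam[0] * (-2.0 * np.eye(2))

x, lam, mu = np.array([0.0, 0.0]), np.ones(1), 1.0  # strictly feasible start

for _ in range(50):
    g, A = f_grad(x), c_jac(x)
    C, Lam = np.diag(c(x)), np.diag(lam)
    W = lag_hess(x, lam)
    # Newton system for the update (p_x, p_lambda), as in the text
    KKT = np.block([[W, -A.T], [Lam @ A, C]])
    rhs = np.concatenate([-g + A.T @ lam, mu * np.ones(1) - C @ lam])
    p = np.linalg.solve(KKT, rhs)
    p_x, p_lam = p[:2], p[2:]
    # Backtrack so that lambda > 0 and c(x) > 0 hold at the new point
    alpha = 1.0
    while np.any(lam + alpha * p_lam <= 0) or np.any(c(x + alpha * p_x) <= 0):
        alpha *= 0.5
    x, lam = x + alpha * p_x, lam + alpha * p_lam
    mu *= 0.5          # drive the barrier parameter toward zero

print(x)               # ~ [0.894, 0.447], i.e. (2, 1)/sqrt(5)
</syntaxhighlight>

Practical implementations typically reduce <math>\mu</math> adaptively, e.g. in proportion to the current complementarity measure <math>c(x)^T \lambda / m</math>, rather than by a fixed factor as in this sketch.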
== See also ==
* Affine scaling
* Augmented Lagrangian method
* Penalty method
* Karush–Kuhn–Tucker conditions