αΒΒ
αΒΒ is a second-order deterministic global optimization algorithm for finding the optima of general, twice continuously differentiable functions.[1] The algorithm is based on creating a relaxation for nonlinear functions of general form by superposing them with a quadratic of sufficient magnitude, called α, such that the resulting superposition is enough to overcome the worst-case non-convexity of the original function. Since a quadratic has a diagonal Hessian matrix, this superposition essentially adds a number to all diagonal elements of the original Hessian, such that the resulting Hessian is positive-semidefinite. Thus, the resulting relaxation is a convex function.
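For a small one-variable illustration of this idea (an example constructed here, not taken from the original paper): consider f(x)=x^4-3x^2 on the box [-2,2]. Its second derivative f''(x)=12x^2-6 reaches -6 at x=0, so f is non-convex there. Superposing the quadratic \alpha(x^L-x)(x^U-x) adds 2\alpha to the second derivative, so with \alpha=3 the relaxation L(x)=f(x)+3(-2-x)(2-x)=x^4-12 has L''(x)=12x^2\ge 0 and is therefore convex on the whole box.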


Theory

Let f(\boldsymbol{x}) be a function of general non-linear, non-convex structure, defined on a finite box X=\{\boldsymbol{x} : \boldsymbol{x}^L \leq \boldsymbol{x} \leq \boldsymbol{x}^U\}. Then, a convex underestimation (relaxation) L(\boldsymbol{x}) of this function can be constructed over X by superposing a sum of univariate quadratics, each of sufficient magnitude to overcome the non-convexity of f(\boldsymbol{x}) everywhere in X, as follows:

:L(\boldsymbol{x})=f(\boldsymbol{x})+\sum_{i=1}^{n}\alpha_i(x_i^L - x_i)(x_i^U - x_i)

L(\boldsymbol{x}) is called the \alpha\text{BB} underestimator for general functional forms. If all \alpha_i are sufficiently large, the new function L(\boldsymbol{x}) is convex everywhere in X. Thus, local minimization of L(\boldsymbol{x}) yields a rigorous lower bound on the value of f(\boldsymbol{x}) in that domain.
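As a minimal sketch of this construction (the function f, the box, and the α values below are assumptions chosen for illustration, not taken from the αBB paper), the underestimator can be coded directly from the formula above. Note that L ≤ f holds inside the box for any non-negative α, while convexity additionally requires the \alpha_i to be large enough:

import numpy as np

def alpha_bb_underestimator(f, alpha, x_low, x_up):
    """alphaBB underestimator L(x) = f(x) + sum_i alpha_i * (x_i^L - x_i) * (x_i^U - x_i)."""
    def L(x):
        x = np.asarray(x, dtype=float)
        return f(x) + np.sum(alpha * (x_low - x) * (x_up - x))
    return L

# Illustrative (assumed) nonconvex function and box, not from the original paper.
f = lambda x: x[0] * np.sin(3.0 * x[0]) + x[0] * x[1]
x_low, x_up = np.array([-1.0, -1.0]), np.array([2.0, 2.0])
alpha = np.array([6.0, 1.0])  # assumed large enough here; rigorous values are derived in the next section

L = alpha_bb_underestimator(f, alpha, x_low, x_up)

# Inside the box each quadratic term is non-positive, so L(x) <= f(x) everywhere in X,
# and L coincides with f at the corners of the box.
grid = np.linspace(-1.0, 2.0, 25)
assert all(L([a, b]) <= f(np.array([a, b])) + 1e-12 for a in grid for b in grid)
print(L([0.5, 0.5]), f(np.array([0.5, 0.5])))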


Calculation of \boldsymbol{\alpha}

There are numerous methods to calculate the values of the \boldsymbol{\alpha} vector. It is proven that when \alpha_i=\max\{0,-\tfrac{1}{2}\lambda_i^{\min}\}, where \lambda_i^{\min} is a valid lower bound on the i-th eigenvalue of the Hessian matrix of f(\boldsymbol{x}), the L(\boldsymbol{x}) underestimator is guaranteed to be convex. One of the most popular methods to obtain these valid bounds on the eigenvalues is the scaled Gerschgorin theorem. Let H(X)=[\underline{h}_{ij},\overline{h}_{ij}] be the interval Hessian matrix of f(\boldsymbol{x}) over the interval X. Then, \forall d_i>0, a valid lower bound on eigenvalue \lambda_i may be derived from the i-th row of H(X) as follows:

:\lambda_i^{\min}=\underline{h}_{ii}-\sum_{j\neq i}\max\left(|\underline{h}_{ij}|,|\overline{h}_{ij}|\right)\frac{d_j}{d_i}
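A minimal sketch of this bound (the interval Hessian entries and the scaling vector d below are placeholder assumptions; in practice \underline{h}_{ij} and \overline{h}_{ij} come from interval arithmetic on the second derivatives of f over X):

import numpy as np

def scaled_gerschgorin_alpha(H_low, H_up, d=None):
    """Compute alpha_i = max(0, -0.5 * lambda_i_min) from an interval Hessian [H_low, H_up],
    using the scaled Gerschgorin lower bound
      lambda_i_min = H_low[i,i] - sum_{j != i} max(|H_low[i,j]|, |H_up[i,j]|) * d_j / d_i.
    """
    H_low, H_up = np.asarray(H_low, float), np.asarray(H_up, float)
    n = H_low.shape[0]
    d = np.ones(n) if d is None else np.asarray(d, float)
    alpha = np.zeros(n)
    for i in range(n):
        radius = sum(max(abs(H_low[i, j]), abs(H_up[i, j])) * d[j] / d[i]
                     for j in range(n) if j != i)
        lam_min = H_low[i, i] - radius
        alpha[i] = max(0.0, -0.5 * lam_min)
    return alpha

# Placeholder interval Hessian of some f over a box X (assumed values, for illustration only).
H_low = np.array([[-4.0, -1.0], [-1.0, 2.0]])
H_up  = np.array([[ 3.0,  2.0], [ 2.0, 6.0]])
print(scaled_gerschgorin_alpha(H_low, H_up))  # alpha = [3., 0.]

With all d_i equal this reduces to the classical Gerschgorin circle bound built from the interval entries; other choices of \boldsymbol{d} can tighten the bound when the Hessian is badly scaled.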


References

[1] Androulakis, I. P., Maranas, C. D., Floudas, C. A. (1995). "αBB: A global optimization method for general constrained nonconvex problems". ''Journal of Global Optimization'', 7(4), 337–363.