The value function of an optimization problem gives the value attained by the objective function at a solution, while only depending on the parameters of the problem. In a controlled dynamical system, the value function represents the optimal payoff of the system over the interval [t, t_1] when started at the time-t state variable x(t)=x. If the objective function represents some cost that is to be minimized, the value function can be interpreted as the cost to finish the optimal program, and is thus referred to as the "cost-to-go function". In an economic context, where the objective function usually represents utility, the value function is conceptually equivalent to the indirect utility function.
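As a concrete illustration (a hypothetical toy example, not drawn from the article): for the parametric problem of minimizing f(x; a) = x^2 - 2ax over x, the minimizer is x^*(a) = a and the value function is V(a) = -a^2, a function of the parameter a alone. A minimal numerical sketch:

```python
# Hypothetical toy example: the value function of the parametric
# minimization problem  min_x  x^2 - 2*a*x.
# Analytically x*(a) = a and V(a) = -a^2; the value function depends
# only on the problem's parameter a, not on the decision variable x.

def objective(x, a):
    return x * x - 2 * a * x

def value_function(a, grid=None):
    """Approximate V(a) = min_x f(x, a) by a crude grid search."""
    if grid is None:
        grid = [i / 100.0 for i in range(-1000, 1001)]  # x in [-10, 10]
    return min(objective(x, a) for x in grid)

for a in [0.0, 1.0, 2.5]:
    print(a, value_function(a), -a * a)  # numeric vs. analytic V(a)
```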
In a problem of optimal control, the value function is defined as the supremum of the objective function taken over the set of admissible controls. Given (t_0, x_0) \in [0, t_1] \times \mathbb{R}^d, a typical optimal control problem is to

:\text{maximize} \quad J(t_0, x_0; u) = \int_{t_0}^{t_1} I(t, x(t), u(t)) \, \mathrm{d}t + \phi(x(t_1))

subject to

:\frac{\mathrm{d}x(t)}{\mathrm{d}t} = f(t, x(t), u(t))

with initial state variable x(t_0) = x_0. The objective function J(t_0, x_0; u) is to be maximized over all admissible controls u \in U[t_0, t_1], where u is a Lebesgue measurable function from [t_0, t_1] to some prescribed arbitrary set in \mathbb{R}^m. The value function is then defined as

:V(t_0, x_0) = \sup_{u \in U[t_0, t_1]} J(t_0, x_0; u)

with V(t_1, x(t_1)) = \phi(x(t_1)), where \phi(x(t_1)) is the "scrap value". If the optimal pair of control and state trajectories is (x^\ast, u^\ast), then V(t_0, x_0) = J(t_0, x_0; u^\ast). The function h that gives the optimal control u^\ast based on the current state x is called a feedback control policy, or simply a policy function.

Bellman's principle of optimality roughly states that any optimal policy at time t, t_0 \leq t \leq t_1, taking the current state x(t) as "new" initial condition must be optimal for the remaining problem. If the value function happens to be
continuously differentiable, this gives rise to an important partial differential equation known as the Hamilton–Jacobi–Bellman equation
,

:-\frac{\partial V(t,x)}{\partial t} = \max_u \left\{ I(t,x,u) + \frac{\partial V(t,x)}{\partial x} f(t, x, u) \right\}

where the maximand on the right-hand side can also be re-written as the Hamiltonian, H(t, x, u, \lambda) = I(t,x,u) + \lambda f(t, x, u), as

:-\frac{\partial V(t,x)}{\partial t} = \max_u H(t,x,u,\lambda)

with \partial V(t,x)/\partial x = \lambda(t) playing the role of the costate variables. Given this definition, we further have \mathrm{d}\lambda(t)/\mathrm{d}t = \partial^2 V(t,x)/\partial x \, \partial t + \partial^2 V(t,x)/\partial x^2 \cdot f(x), and after differentiating both sides of the HJB equation with respect to x,

:-\frac{\partial^2 V(t,x)}{\partial t \, \partial x} = \frac{\partial I}{\partial x} + \frac{\partial^2 V(t,x)}{\partial x^2} f(x) + \frac{\partial V(t,x)}{\partial x} \frac{\partial f}{\partial x}

which after replacing the appropriate terms recovers the
costate equation

:-\dot{\lambda}(t) = \frac{\partial I}{\partial x} + \lambda(t) \frac{\partial f}{\partial x} = \frac{\partial H}{\partial x}

where \dot{\lambda}(t) is Newton notation for the derivative with respect to time. The value function is the unique viscosity solution to the Hamilton–Jacobi–Bellman equation. In an
online closed-loop approximate optimal control scheme, the value function is also a Lyapunov function that establishes global asymptotic stability of the closed-loop system.
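The "cost-to-go" interpretation and Bellman's principle of optimality can be sketched in discrete time. The following is a hypothetical discretized example (not part of the continuous-time development above): a scalar system x_{k+1} = x_k + u_k with stage cost x^2 + u^2 and scrap value x^2, solved by backward induction on a grid.

```python
# Hypothetical discrete-time sketch of the cost-to-go function:
# dynamics x_{k+1} = x_k + u_k, stage cost x^2 + u^2, horizon N,
# scrap value phi(x) = x^2.  Backward induction computes
#   V_k(x) = min_u [ x^2 + u^2 + V_{k+1}(x + u) ],
# which is Bellman's principle applied at each stage; h is the
# resulting feedback policy function.

N = 20
states = list(range(-10, 11))    # integer state grid
controls = list(range(-5, 6))    # integer control grid

V = {x: float(x * x) for x in states}   # terminal ("scrap") value

policy = []  # policy[k][x] = optimal control at stage k, state x
for k in range(N - 1, -1, -1):
    V_new, h = {}, {}
    for x in states:
        best_cost, best_u = float("inf"), None
        for u in controls:
            x_next = x + u
            if x_next not in V:          # keep the trajectory on the grid
                continue
            cost = x * x + u * u + V[x_next]
            if cost < best_cost:
                best_cost, best_u = cost, u
        V_new[x], h[x] = best_cost, best_u
    V, policy = V_new, [h] + policy

# Cost-to-go is zero at the origin and grows with distance from it.
print({x: V[x] for x in (0, 1, 2)})
```

The policy dictionaries realize the feedback control law: at every stage the remaining problem is re-solved from the current state, exactly as Bellman's principle prescribes.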
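The HJB equation can be checked directly on a problem with a known value function. The following is an illustrative scalar linear-quadratic example (a hypothetical sketch, not from the article): maximize J = \int_t^T -(x^2 + u^2) \, \mathrm{d}s subject to \dot{x} = u with scrap value \phi = 0, whose value function is V(t,x) = -\tanh(T-t) \, x^2.

```python
import math

# Hypothetical scalar LQR check of the HJB equation: maximize
#   J = integral of -(x^2 + u^2)  with  dx/ds = u,  scrap value 0.
# The candidate V(t, x) = -p(t) x^2 with p(t) = tanh(T - t) should satisfy
#   -dV/dt = max_u [ -(x^2 + u^2) + (dV/dx) u ].

T = 1.0

def p(t):
    return math.tanh(T - t)

def V(t, x):
    return -p(t) * x * x

def lhs(t, x, h=1e-6):
    # -dV/dt by central difference
    return -(V(t + h, x) - V(t - h, x)) / (2 * h)

def rhs(t, x):
    # maximize the Hamiltonian over a fine control grid
    Vx = -2.0 * p(t) * x                       # dV/dx, the costate
    us = [i / 1000.0 for i in range(-5000, 5001)]
    return max(-(x * x + u * u) + Vx * u for u in us)

# The two sides agree to discretization accuracy at sample points.
for t, x in [(0.2, 1.0), (0.5, -2.0), (0.9, 0.5)]:
    print(t, x, lhs(t, x), rhs(t, x))
```

Substituting the candidate into the HJB equation reduces it to the scalar Riccati ODE \dot{p} = p^2 - 1 with p(T) = 0, which \tanh(T-t) solves; the numerical check above confirms the two sides of the HJB equation match.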


Further reading

* Stengel, Robert F. (1994). "Conditions for Optimality". ''Optimal Control and Estimation''. New York: Dover. pp. 201–222. ISBN 0-486-68200-5. https://www.google.com/books/edition/_/jDjPxqm7Lw0C?hl=en&gbpv=1&pg=PA201