
In control theory, a control-Lyapunov function (CLF) is an extension of the idea of Lyapunov function V(x) to systems with control inputs. The ordinary Lyapunov function is used to test whether a dynamical system is ''(Lyapunov) stable'' or (more restrictively) ''asymptotically stable''. Lyapunov stability means that if the system starts in a state x \ne 0 in some domain ''D'', then the state will remain in ''D'' for all time. For ''asymptotic stability'', the state is also required to converge to x = 0. A control-Lyapunov function is used to test whether a system is ''asymptotically stabilizable'', that is, whether for any state ''x'' there exists a control u(x,t) such that the system can be brought to the zero state asymptotically by applying the control ''u''. The theory and application of control-Lyapunov functions were developed by Zvi Artstein and Eduardo D. Sontag in the 1980s and 1990s.


Definition

Consider an autonomous dynamical system with inputs

: \dot{x} = f(x, u)

where x \in \mathbb{R}^n is the state vector and u \in \mathbb{R}^m is the control vector. Suppose our goal is to drive the system to an equilibrium x_* \in \mathbb{R}^n from every initial state in some domain D \subset \mathbb{R}^n. Without loss of generality, suppose the equilibrium is at x_* = 0 (an equilibrium x_* \neq 0 can be translated to the origin by a change of variables).

Definition. A control-Lyapunov function (CLF) is a function V : D \to \mathbb{R} that is continuously differentiable, positive-definite (that is, V(x) is positive for all x \in D except at x = 0, where it is zero), and such that for all x \in \mathbb{R}^n (x \neq 0) there exists u \in \mathbb{R}^m such that

: \dot{V}(x, u) := \langle \nabla V(x), f(x, u) \rangle < 0,

where \langle u, v \rangle denotes the inner product of u, v \in \mathbb{R}^n.

The last condition is the key one; in words, it says that for each state ''x'' we can find a control ''u'' that will reduce the "energy" ''V''. Intuitively, if in each state we can always find a way to reduce the energy, we should eventually be able to bring the energy asymptotically to zero, that is, to bring the system to a stop. This is made rigorous by Artstein's theorem.

Some results apply only to control-affine systems, i.e., control systems of the form

: \dot{x} = f(x) + \sum_{i=1}^m g_i(x) u_i

where f : \mathbb{R}^n \to \mathbb{R}^n and g_i : \mathbb{R}^n \to \mathbb{R}^n for i = 1, \dots, m.
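The decrease condition above can be checked numerically on a grid of states. The sketch below uses an illustrative scalar control-affine system \dot{x} = -x^3 + u with CLF candidate V(x) = x^2/2 (this system and candidate are assumptions for the demonstration, not taken from the text):

```python
import numpy as np

# Hypothetical scalar control-affine system: x_dot = f(x) + g(x) * u.
# V(x) = x^2 / 2 is the CLF candidate; for each sampled state x != 0 we
# check that some candidate input u makes V_dot = <grad V, f + g*u> < 0.

def f(x):
    return -x**3          # drift term (illustrative choice)

def g(x):
    return 1.0            # input gain (illustrative choice)

def grad_V(x):
    return x              # gradient of V(x) = x^2 / 2

def clf_condition_holds(x, u_candidates):
    """True if some candidate input gives V_dot < 0 at state x."""
    return any(grad_V(x) * (f(x) + g(x) * u) < 0 for u in u_candidates)

u_grid = np.linspace(-5.0, 5.0, 101)
states = [x for x in np.linspace(-2.0, 2.0, 81) if abs(x) > 1e-9]
assert all(clf_condition_holds(x, u_grid) for x in states)
print("V(x) = x^2/2 passes the CLF decrease test on the sampled grid")
```

A grid test like this is only a sanity check, not a proof: the condition must hold at every nonzero state, which here is easy to verify by hand since u = 0 already gives \dot{V} = -x^4 < 0.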


Theorems

E. D. Sontag showed that for a given control system, there exists a continuous CLF if and only if the origin is asymptotically stabilizable. It was later shown by Francis H. Clarke that every asymptotically controllable system can be stabilized by a (generally discontinuous) feedback. Artstein proved that the control-affine system above has a differentiable control-Lyapunov function if and only if there exists a regular stabilizing feedback ''u''(''x'').


Constructing the Stabilizing Input

It is often difficult to find a control-Lyapunov function for a given system, but if one is found, then the feedback stabilization problem simplifies considerably. For the control-affine system above, ''Sontag's formula'' (or ''Sontag's universal formula'') gives the feedback law k : \mathbb{R}^n \to \mathbb{R}^m directly in terms of the derivatives of the CLF (Sontag (1998), ''Mathematical Control Theory'', Equation 5.56). In the special case of a single-input system (m = 1), Sontag's formula is written as

: k(x) = \begin{cases} \displaystyle -\frac{L_f V(x) + \sqrt{\left(L_f V(x)\right)^2 + \left(L_g V(x)\right)^4}}{L_g V(x)} & \text{if } L_g V(x) \neq 0 \\ 0 & \text{if } L_g V(x) = 0 \end{cases}

where L_f V(x) := \langle \nabla V(x), f(x) \rangle and L_g V(x) := \langle \nabla V(x), g(x) \rangle are the Lie derivatives of V along f and g, respectively.

For the general nonlinear system above, the input u can be found by solving a static non-linear programming problem

: u^*(x) = \underset{u}{\operatorname{arg\,min}} \; \nabla V(x) \cdot f(x, u)

for each state ''x''.
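To illustrate the single-input formula, here is a short Python sketch. The system f(x) = x, g(x) = 1 with V(x) = x^2/2 is an illustrative assumption (not from the text); for it, Sontag's formula reduces in closed form to k(x) = -(1 + \sqrt{2})\,x:

```python
import math

# Sketch of Sontag's universal formula for a single-input control-affine
# system x_dot = f(x) + g(x) u with CLF V.  The example system
# f(x) = x, g(x) = 1, V(x) = x^2 / 2 is an illustrative choice.

def sontag_feedback(LfV, LgV):
    """k(x) from Sontag's formula, given the Lie derivatives at state x."""
    if LgV == 0.0:
        return 0.0
    return -(LfV + math.sqrt(LfV**2 + LgV**4)) / LgV

# Lie derivatives for f(x) = x, g(x) = 1, V(x) = x^2 / 2 (grad V = x):
def LfV(x):
    return x * x          # <grad V, f> = x * x

def LgV(x):
    return x              # <grad V, g> = x * 1

for x in (-2.0, -0.3, 0.5, 1.7):
    u = sontag_feedback(LfV(x), LgV(x))
    V_dot = LfV(x) + LgV(x) * u
    assert V_dot < 0      # the feedback renders V strictly decreasing
    # closed loop: x_dot = x + u = -sqrt(2) * x, exponentially stable
print("Sontag feedback gives V_dot < 0 at every sampled nonzero state")
```

Note how the formula automatically handles the unstable drift f(x) = x: wherever L_g V \neq 0, the square-root term dominates L_f V, so the resulting \dot{V} is strictly negative.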


Example

Here is a characteristic example of applying a Lyapunov candidate function to a control problem. Consider the non-linear system, a mass-spring-damper with spring hardening and position-dependent mass, described by

: m(1 + q^2)\ddot{q} + b\dot{q} + K_0 q + K_1 q^3 = u

Now, given the desired state q_d and actual state q, with error e = q_d - q, define a function r as

: r = \dot{e} + \alpha e

A control-Lyapunov candidate is then

: V = \frac{1}{2} r^2

which is positive definite in r (it vanishes only when \dot{e} + \alpha e = 0). Now taking the time derivative of V:

: \dot{V} = r\dot{r}
: \dot{V} = (\dot{e} + \alpha e)(\ddot{e} + \alpha \dot{e})

The goal is to get the time derivative to be

: \dot{V} = -\kappa V

which is globally exponentially stable if V is globally positive definite (which it is). Hence we want the rightmost bracket of \dot{V},

: (\ddot{e} + \alpha \dot{e}) = (\ddot{q}_d - \ddot{q} + \alpha \dot{e})

to fulfill the requirement

: (\ddot{q}_d - \ddot{q} + \alpha \dot{e}) = -\frac{\kappa}{2}(\dot{e} + \alpha e)

which upon substitution of the dynamics, \ddot{q}, gives

: \left(\ddot{q}_d - \frac{u - b\dot{q} - K_0 q - K_1 q^3}{m(1 + q^2)} + \alpha \dot{e}\right) = -\frac{\kappa}{2}(\dot{e} + \alpha e)

Solving for u yields the control law

: u = m(1 + q^2)\left(\ddot{q}_d + \alpha \dot{e} + \frac{\kappa}{2} r\right) + K_0 q + K_1 q^3 + b\dot{q}

with \kappa and \alpha, both greater than zero, as tunable parameters. This control law guarantees global exponential stability, since substitution into the time derivative yields, as expected,

: \dot{V} = -\kappa V

which is a linear first-order differential equation with solution

: V = V(0)\exp(-\kappa t)

Hence the error and error rate, remembering that V = \frac{1}{2}(\dot{e} + \alpha e)^2, decay exponentially to zero.

If you wish to tune a particular response from this, it is necessary to substitute back into the solution derived for V and solve for e. This is left as an exercise for the reader, but the first few steps of the solution are:

: r\dot{r} = -\frac{\kappa}{2} r^2
: \dot{r} = -\frac{\kappa}{2} r
: r = r(0)\exp\left(-\frac{\kappa}{2} t\right)
: \dot{e} + \alpha e = \left(\dot{e}(0) + \alpha e(0)\right)\exp\left(-\frac{\kappa}{2} t\right)

which can then be solved using any linear differential equation methods.
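The derivation above can be checked numerically. The sketch below simulates the mass-spring-damper under the derived control law with a constant setpoint and compares the simulated V against the predicted V(0)\exp(-\kappa t); all parameter values are illustrative assumptions:

```python
import math

# Numerical check of the example's control law (parameter values assumed):
# simulate m(1+q^2) q'' + b q' + K0 q + K1 q^3 = u under
# u = m(1+q^2)(q_d'' + a*e' + (k/2) r) + K0 q + K1 q^3 + b q'
# and verify V = r^2 / 2 decays like V(0) exp(-k t).

m, b, K0, K1 = 1.0, 0.5, 2.0, 0.1   # plant parameters (assumed)
alpha, kappa = 1.0, 4.0             # controller gains (assumed)
qd = 1.0                            # constant setpoint => qd' = qd'' = 0

def step(q, qdot, dt):
    """One forward-Euler step of the closed-loop system."""
    e, edot = qd - q, -qdot
    r = edot + alpha * e
    u = m * (1 + q**2) * (alpha * edot + 0.5 * kappa * r) \
        + K0 * q + K1 * q**3 + b * qdot
    qddot = (u - b * qdot - K0 * q - K1 * q**3) / (m * (1 + q**2))
    return q + dt * qdot, qdot + dt * qddot

q, qdot, dt, T = 0.0, 0.0, 1e-4, 2.0
V0 = 0.5 * (-qdot + alpha * (qd - q))**2   # V(0) = r(0)^2 / 2
for _ in range(int(T / dt)):
    q, qdot = step(q, qdot, dt)
V = 0.5 * (-qdot + alpha * (qd - q))**2
assert abs(V / (V0 * math.exp(-kappa * T)) - 1.0) < 0.02
print("simulated V(T) matches V(0) * exp(-kappa * T) to integrator accuracy")
```

Because the control law exactly cancels the nonlinearities, the closed-loop r-dynamics are linear (\dot{r} = -\tfrac{\kappa}{2} r), so even a plain forward-Euler integrator with a small step reproduces the predicted exponential decay closely.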


References

* {{cite book |last=Sontag |first=Eduardo |author-link=Eduardo D. Sontag |year=1998 |title=Mathematical Control Theory: Deterministic Finite Dimensional Systems. Second Edition |publisher=Springer |url=http://www.sontaglab.org/FTPDIR/sontag_mathematical_control_theory_springer98.pdf |isbn=978-0-387-98489-6}}


See also

* Artstein's theorem
* Lyapunov optimization
* Drift plus penalty