Backstepping

In control theory, backstepping is a technique developed circa 1990 by Petar V. Kokotovic and others for designing stabilizing controls for a special class of nonlinear dynamical systems. These systems are built from subsystems that radiate out from an irreducible subsystem that can be stabilized using some other method. Because of this recursive structure, the designer can start the design process at the known-stable system and "back out" new controllers that progressively stabilize each outer subsystem. The process terminates when the final external control is reached. Hence, this process is known as ''backstepping.''


Backstepping approach

The backstepping approach provides a recursive method for stabilizing the origin of a system in ''strict-feedback form''. That is, consider a system of the form

:\begin{cases} \dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\\ \dot{z}_1 = f_1(\mathbf{x}, z_1) + g_1(\mathbf{x}, z_1) z_2\\ \dot{z}_2 = f_2(\mathbf{x}, z_1, z_2) + g_2(\mathbf{x}, z_1, z_2) z_3\\ \vdots\\ \dot{z}_i = f_i(\mathbf{x}, z_1, \ldots, z_i) + g_i(\mathbf{x}, z_1, \ldots, z_i) z_{i+1} \quad \text{for } 1 \leq i < k-1\\ \vdots\\ \dot{z}_{k-1} = f_{k-1}(\mathbf{x}, z_1, \ldots, z_{k-1}) + g_{k-1}(\mathbf{x}, z_1, \ldots, z_{k-1}) z_k\\ \dot{z}_k = f_k(\mathbf{x}, z_1, \ldots, z_k) + g_k(\mathbf{x}, z_1, \ldots, z_k) u \end{cases}

where
* \mathbf{x} \in \mathbb{R}^n with n \geq 1,
* z_1, z_2, \ldots, z_k are scalars,
* u is a scalar input to the system,
* f_x, f_1, f_2, \ldots, f_k vanish at the origin (i.e., f_i(0, 0, \ldots, 0) = 0),
* g_1, g_2, \ldots, g_k are nonzero over the domain of interest (i.e., g_i(\mathbf{x}, z_1, \ldots, z_i) \neq 0 for 1 \leq i \leq k).

Also assume that the subsystem

:\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x(\mathbf{x})

is stabilized to the origin (i.e., \mathbf{x} = \mathbf{0}) by some known control u_x(\mathbf{x}) such that u_x(\mathbf{0}) = 0. It is also assumed that a Lyapunov function V_x for this stable subsystem is known. That is, the \mathbf{x} subsystem is stabilized by some other method, and backstepping extends its stability to the shell of z states around it.

In systems of this ''strict-feedback form'' around a stable \mathbf{x} subsystem,
* the backstepping-designed control input u has its most immediate stabilizing impact on state z_k,
* the state z_k then acts like a stabilizing control on the state z_{k-1} before it,
* this process continues so that each state z_i is stabilized by the ''fictitious'' "control" z_{i+1}.

The backstepping approach determines how to stabilize the \mathbf{x} subsystem using z_1, and then proceeds with determining how to make the next state z_2 drive z_1 to the control required to stabilize \mathbf{x}. Hence, the process "steps backward" from \mathbf{x} out of the strict-feedback form system until the ultimate control u is designed.


Recursive Control Design Overview

# It is given that the smaller (i.e., lower-order) subsystem
#::\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x(\mathbf{x})
#:is already stabilized to the origin by some control u_x(\mathbf{x}) where u_x(\mathbf{0}) = 0. That is, the choice of u_x to stabilize this subsystem must be made by ''some other method.'' It is also assumed that a Lyapunov function V_x for this stable subsystem is known. Backstepping provides a way to extend the controlled stability of this subsystem to the larger system.
# A control u_1(\mathbf{x}, z_1) is designed so that the system
#::\dot{z}_1 = f_1(\mathbf{x}, z_1) + g_1(\mathbf{x}, z_1) u_1(\mathbf{x}, z_1)
#:is stabilized so that z_1 follows the desired u_x control. The control design is based on the augmented Lyapunov function candidate
#::V_1(\mathbf{x}, z_1) = V_x(\mathbf{x}) + \frac{1}{2} \big( z_1 - u_x(\mathbf{x}) \big)^2
#:The control u_1 can be picked to bound \dot{V}_1 away from zero.
# A control u_2(\mathbf{x}, z_1, z_2) is designed so that the system
#::\dot{z}_2 = f_2(\mathbf{x}, z_1, z_2) + g_2(\mathbf{x}, z_1, z_2) u_2(\mathbf{x}, z_1, z_2)
#:is stabilized so that z_2 follows the desired u_1 control. The control design is based on the augmented Lyapunov function candidate
#::V_2(\mathbf{x}, z_1, z_2) = V_1(\mathbf{x}, z_1) + \frac{1}{2} \big( z_2 - u_1(\mathbf{x}, z_1) \big)^2
#:The control u_2 can be picked to bound \dot{V}_2 away from zero.
# This process continues until the actual u is known, and
#* The ''real'' control u stabilizes z_k to ''fictitious'' control u_{k-1}.
#* The ''fictitious'' control u_{k-1} stabilizes z_{k-1} to ''fictitious'' control u_{k-2}.
#* The ''fictitious'' control u_{k-2} stabilizes z_{k-2} to ''fictitious'' control u_{k-3}.
#* ...
#* The ''fictitious'' control u_2 stabilizes z_2 to ''fictitious'' control u_1.
#* The ''fictitious'' control u_1 stabilizes z_1 to ''fictitious'' control u_x.
#* The ''fictitious'' control u_x stabilizes \mathbf{x} to the origin.

This process is known as ''backstepping'' because it starts with the requirements on some internal subsystem for stability and progressively ''steps back'' out of the system, maintaining stability at each step. Because
* f_i vanish at the origin for 0 \leq i \leq k,
* g_i are nonzero for 1 \leq i \leq k,
* the given control u_x has u_x(\mathbf{0}) = 0,
the resulting system has an equilibrium at the origin (i.e., where \mathbf{x} = \mathbf{0}, z_1 = 0, z_2 = 0, \ldots, z_{k-1} = 0, and z_k = 0) that is globally asymptotically stable.


Integrator Backstepping

Before describing the backstepping procedure for general strict-feedback form dynamical systems, it is convenient to discuss the approach for a smaller class of strict-feedback form systems. These systems connect a series of integrators to the input of a system with a known feedback-stabilizing control law, and so the stabilizing approach is known as ''integrator backstepping.'' With a small modification, the integrator backstepping approach can be extended to handle all strict-feedback form systems.


Single-integrator Equilibrium

Consider the dynamical system

:\begin{cases} \dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\\ \dot{z}_1 = u_1 \end{cases}

where \mathbf{x} \in \mathbb{R}^n and z_1 is a scalar. This system is a cascade connection of an integrator with the \mathbf{x} subsystem (i.e., the input u_1 enters an integrator, and the integral z_1 enters the \mathbf{x} subsystem).

We assume that f_x(\mathbf{0}) = \mathbf{0}, and so if u_1 = 0, \mathbf{x} = \mathbf{0}, and z_1 = 0, then

:\begin{cases} \dot{\mathbf{x}} = f_x(\mathbf{0}) + g_x(\mathbf{0})(0) = \mathbf{0} & \quad (\text{i.e., } \mathbf{x} = \mathbf{0} \text{ is stationary})\\ \dot{z}_1 = 0 & \quad (\text{i.e., } z_1 = 0 \text{ is stationary}) \end{cases}

So the origin (\mathbf{x}, z_1) = (\mathbf{0}, 0) is an equilibrium (i.e., a stationary point) of the system. If the system ever reaches the origin, it will remain there forever after.


Single-integrator Backstepping

In this example, backstepping is used to stabilize the single-integrator system above around its equilibrium at the origin. To be less precise, we wish to design a control law u_1(\mathbf{x}, z_1) that ensures that the states (\mathbf{x}, z_1) return to (\mathbf{0}, 0) after the system is started from some arbitrary initial condition.

* First, by assumption, the subsystem
::\dot{\mathbf{x}} = F(\mathbf{x}) \qquad \text{where} \qquad F(\mathbf{x}) \triangleq f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x(\mathbf{x})
:with u_x(\mathbf{0}) = 0 has a Lyapunov function V_x(\mathbf{x}) > 0 such that
::\dot{V}_x = \frac{\partial V_x}{\partial \mathbf{x}} \big( f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x(\mathbf{x}) \big) \leq -W(\mathbf{x})
:where W(\mathbf{x}) is a positive-definite function. That is, we assume that we have already shown that this existing simpler subsystem is stable (in the sense of Lyapunov). Roughly speaking, this notion of stability means that:
** The function V_x is like a "generalized energy" of the \mathbf{x} subsystem. As the states \mathbf{x} of the system move away from the origin, the energy V_x(\mathbf{x}) also grows.
** By showing that over time the energy V_x(\mathbf{x}(t)) decays to zero, the states \mathbf{x} must decay toward \mathbf{x} = \mathbf{0}. That is, the origin \mathbf{x} = \mathbf{0} will be a stable equilibrium of the system; the states \mathbf{x} will continuously approach the origin as time increases.
** Saying that W(\mathbf{x}) is positive definite means that W(\mathbf{x}) > 0 everywhere except for \mathbf{x} = \mathbf{0}, where W(\mathbf{0}) = 0.
** The statement that \dot{V}_x \leq -W(\mathbf{x}) means that \dot{V}_x is bounded away from zero for all points except where \mathbf{x} = \mathbf{0}. That is, so long as the system is not at its equilibrium at the origin, its "energy" will be decreasing.
** Because the energy is always decaying, the system must be stable; its trajectories must approach the origin.
:Our task is to find a control u_1 that makes our cascaded (\mathbf{x}, z_1) system also stable. So we must find a ''new'' Lyapunov function candidate for this new system. That candidate will depend upon the control u_1, and by choosing the control properly, we can ensure that it is decaying everywhere as well.
* Next, by ''adding'' and ''subtracting'' g_x(\mathbf{x}) u_x(\mathbf{x}) (i.e., we don't change the system in any way because we make no ''net'' change) to the \dot{\mathbf{x}} part of the larger (\mathbf{x}, z_1) system, it becomes
::\begin{cases} \dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1 + \big( g_x(\mathbf{x}) u_x(\mathbf{x}) - g_x(\mathbf{x}) u_x(\mathbf{x}) \big)\\ \dot{z}_1 = u_1 \end{cases}
:which we can re-group to get
::\begin{cases} \dot{\mathbf{x}} = \big( f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x(\mathbf{x}) \big) + g_x(\mathbf{x}) \underbrace{\big( z_1 - u_x(\mathbf{x}) \big)}_{\text{error } e_1}\\ \dot{z}_1 = u_1 \end{cases}
:So our cascaded supersystem encapsulates the known-stable \dot{\mathbf{x}} = F(\mathbf{x}) subsystem plus some error perturbation generated by the integrator.
* We now can change variables from (\mathbf{x}, z_1) to (\mathbf{x}, e_1) by letting e_1 \triangleq z_1 - u_x(\mathbf{x}). So
::\begin{cases} \dot{\mathbf{x}} = \big( f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x(\mathbf{x}) \big) + g_x(\mathbf{x}) e_1\\ \dot{e}_1 = u_1 - \dot{u}_x \end{cases}
:Additionally, we let v_1 \triangleq u_1 - \dot{u}_x so that u_1 = v_1 + \dot{u}_x and
::\begin{cases} \dot{\mathbf{x}} = \big( f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x(\mathbf{x}) \big) + g_x(\mathbf{x}) e_1\\ \dot{e}_1 = v_1 \end{cases}
:We seek to stabilize this error system by feedback through the new control v_1. By stabilizing the system at e_1 = 0, the state z_1 will track the desired control u_x, which will result in stabilizing the inner \mathbf{x} subsystem.
* From our existing Lyapunov function V_x, we define the ''augmented'' Lyapunov function ''candidate''
::V_1(\mathbf{x}, e_1) \triangleq V_x(\mathbf{x}) + \frac{1}{2} e_1^2
:So
::\dot{V}_1 = \dot{V}_x + e_1 \dot{e}_1 = \frac{\partial V_x}{\partial \mathbf{x}} \Big( \underbrace{\big( f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x(\mathbf{x}) \big)}_{F(\mathbf{x})} + g_x(\mathbf{x}) e_1 \Big) + e_1 v_1
:By distributing \partial V_x / \partial \mathbf{x}, we see that
::\dot{V}_1 = \underbrace{\frac{\partial V_x}{\partial \mathbf{x}} F(\mathbf{x})}_{\leq \, -W(\mathbf{x})} + \frac{\partial V_x}{\partial \mathbf{x}} g_x(\mathbf{x}) e_1 + e_1 v_1 \leq -W(\mathbf{x}) + \frac{\partial V_x}{\partial \mathbf{x}} g_x(\mathbf{x}) e_1 + e_1 v_1
:To ensure that \dot{V}_1 \leq -W(\mathbf{x}) < 0 (i.e., to ensure stability of the supersystem), we pick the control law
::v_1 = -\frac{\partial V_x}{\partial \mathbf{x}} g_x(\mathbf{x}) - k_1 e_1
:with k_1 > 0. After distributing the e_1 through,
::\dot{V}_1 = -W(\mathbf{x}) + \frac{\partial V_x}{\partial \mathbf{x}} g_x(\mathbf{x}) e_1 - \frac{\partial V_x}{\partial \mathbf{x}} g_x(\mathbf{x}) e_1 - k_1 e_1^2 = -W(\mathbf{x}) - k_1 e_1^2 \leq -W(\mathbf{x}) < 0
:So our ''candidate'' Lyapunov function V_1 is a true Lyapunov function, and our system is stable under this control law v_1 (which corresponds to the control law u_1 because v_1 \triangleq u_1 - \dot{u}_x). Using the variables from the original coordinate system, the equivalent Lyapunov function is
::V_1(\mathbf{x}, z_1) \triangleq V_x(\mathbf{x}) + \frac{1}{2} \big( z_1 - u_x(\mathbf{x}) \big)^2
:As discussed below, this Lyapunov function will be used again when this procedure is applied iteratively to the multiple-integrator problem.
* Our choice of control v_1 ultimately depends on all of our original state variables. In particular, because \dot{u}_x = \frac{\partial u_x}{\partial \mathbf{x}} \dot{\mathbf{x}} = \frac{\partial u_x}{\partial \mathbf{x}} \big( f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1 \big), the actual feedback-stabilizing control law is
::u_1(\mathbf{x}, z_1) = \underbrace{-\frac{\partial V_x}{\partial \mathbf{x}} g_x(\mathbf{x}) - k_1 \big( z_1 - u_x(\mathbf{x}) \big)}_{v_1} + \underbrace{\frac{\partial u_x}{\partial \mathbf{x}} \big( f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1 \big)}_{\dot{u}_x}
:The states \mathbf{x} and z_1 and the functions f_x and g_x come from the system. The function u_x comes from our known-stable \dot{\mathbf{x}} = F(\mathbf{x}) subsystem. The gain parameter k_1 > 0 affects the convergence rate of our system. Under this control law, our system is stable at the origin (\mathbf{x}, z_1) = (\mathbf{0}, 0).
:Recall that u_1 drives the input of an integrator that is connected to a subsystem that is feedback-stabilized by the control law u_x. Not surprisingly, the control u_1 has a \dot{u}_x term that will be integrated to follow the stabilizing control law u_x plus some offset. The other terms provide damping to remove that offset and any other perturbation effects that would be magnified by the integrator.
:So because this system is feedback stabilized by u_1(\mathbf{x}, z_1) and has Lyapunov function V_1(\mathbf{x}, z_1) with \dot{V}_1(\mathbf{x}, z_1) \leq -W(\mathbf{x}) < 0, it can be used as the upper subsystem in another single-integrator cascade system.
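As a concrete illustration of the single-integrator control law above, the sketch below simulates a hypothetical scalar example (all system choices here are assumptions for illustration, not part of the general theory): \dot{x} = x^2 - x^3 + z_1, \dot{z}_1 = u_1, so f_x = x^2 - x^3 and g_x = 1, with the assumed inner stabilizer u_x(x) = -x^2 - x and Lyapunov function V_x = x^2/2.

```python
# Hypothetical single-integrator strict-feedback example (assumed for illustration):
#   x' = x^2 - x^3 + z1,   z1' = u1
# The inner subsystem x' = x^2 - x^3 + u_x is stabilized by u_x(x) = -x^2 - x
# (then x' = -x - x^3) with Lyapunov function V_x = x^2 / 2.

def f_x(x):  return x**2 - x**3
def g_x(x):  return 1.0
def u_x(x):  return -x**2 - x        # assumed known stabilizer of the inner subsystem
def du_x(x): return -2.0*x - 1.0     # d(u_x)/dx
def dV_x(x): return x                # d(V_x)/dx for V_x = x^2 / 2

def u_1(x, z1, k1=2.0):
    """Backstepping control law:
       u1 = -dVx/dx * g_x - k1*(z1 - u_x) + du_x/dx * (f_x + g_x*z1)."""
    return (-dV_x(x)*g_x(x)
            - k1*(z1 - u_x(x))
            + du_x(x)*(f_x(x) + g_x(x)*z1))

def simulate(x0=1.0, z0=0.0, dt=1e-3, T=20.0):
    """Integrate the closed loop with forward Euler from (x0, z0)."""
    x, z1 = x0, z0
    for _ in range(int(T/dt)):
        u = u_1(x, z1)
        x, z1 = x + dt*(f_x(x) + g_x(x)*z1), z1 + dt*u
    return x, z1

x_f, z_f = simulate()
print(abs(x_f), abs(z_f))  # both near zero: the closed loop is asymptotically stable
```

Along the closed loop, \dot{V}_1 = -x^2 - x^4 - k_1 e_1^2, so the simulated trajectory decays to the origin regardless of the (moderate) initial condition chosen here.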


Motivating Example: Two-integrator Backstepping

Before discussing the recursive procedure for the general multiple-integrator case, it is instructive to study the recursion present in the two-integrator case. That is, consider the dynamical system

:\begin{cases} \dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\\ \dot{z}_1 = z_2\\ \dot{z}_2 = u_2 \end{cases}

where \mathbf{x} \in \mathbb{R}^n and z_1 and z_2 are scalars. This system is a cascade connection of the single-integrator system above with another integrator (i.e., the input u_2 enters through an integrator, and the output of that integrator enters the single-integrator system by its u_1 input). By letting
* \mathbf{y} \triangleq \begin{bmatrix} \mathbf{x} \\ z_1 \end{bmatrix},
* f_y(\mathbf{y}) \triangleq \begin{bmatrix} f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1 \\ 0 \end{bmatrix},
* g_y(\mathbf{y}) \triangleq \begin{bmatrix} \mathbf{0} \\ 1 \end{bmatrix},
the two-integrator system becomes the single-integrator system

:\begin{cases} \dot{\mathbf{y}} = f_y(\mathbf{y}) + g_y(\mathbf{y}) z_2\\ \dot{z}_2 = u_2 \end{cases}

By the single-integrator procedure, the control law u_y(\mathbf{y}) \triangleq u_1(\mathbf{x}, z_1) stabilizes the upper z_2-to-\mathbf{y} subsystem using the Lyapunov function V_1(\mathbf{x}, z_1), and so the result is a new single-integrator system that is structurally equivalent to the original single-integrator system. So a stabilizing control u_2 can be found using the same single-integrator procedure that was used to find u_1.


Many-integrator backstepping

In the two-integrator case, the upper single-integrator subsystem was stabilized yielding a new single-integrator system that can be similarly stabilized. This recursive procedure can be extended to handle any finite number of integrators. This claim can be formally proved with mathematical induction. Here, a stabilized multiple-integrator system is built up from subsystems of already-stabilized multiple-integrator subsystems.

* First, consider the dynamical system
::\dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) u_x
:that has scalar input u_x and output states \mathbf{x} = [x_1, x_2, \ldots, x_n]^{\mathsf{T}} \in \mathbb{R}^n. Assume that
** f_x(\mathbf{0}) = \mathbf{0} so that the zero-input (i.e., u_x = 0) system is stationary at the origin \mathbf{x} = \mathbf{0}. In this case, the origin is called an ''equilibrium'' of the system.
** The feedback control law u_x(\mathbf{x}) stabilizes the system at the equilibrium at the origin.
** A Lyapunov function corresponding to this system is described by V_x(\mathbf{x}).
:That is, if the output states \mathbf{x} are fed back to the input u_x by the control law u_x(\mathbf{x}), then the output states (and the Lyapunov function) return to the origin after a single perturbation (e.g., after a nonzero initial condition or a sharp disturbance). This subsystem is stabilized by the feedback control law u_x.
* Next, connect an integrator to input u_x so that the augmented system has input u_1 (to the integrator) and output states \mathbf{x}. The resulting augmented dynamical system is
::\begin{cases} \dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\\ \dot{z}_1 = u_1 \end{cases}
:This "cascade" system matches the single-integrator form above, and so the single-integrator backstepping procedure leads to the stabilizing control law derived earlier. That is, if we feed back states z_1 and \mathbf{x} to input u_1 according to the control law
::u_1(\mathbf{x}, z_1) = -\frac{\partial V_x}{\partial \mathbf{x}} g_x(\mathbf{x}) - k_1 \big( z_1 - u_x(\mathbf{x}) \big) + \frac{\partial u_x}{\partial \mathbf{x}} \big( f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1 \big)
:with gain k_1 > 0, then the states z_1 and \mathbf{x} will return to z_1 = 0 and \mathbf{x} = \mathbf{0} after a single perturbation. This subsystem is stabilized by the feedback control law u_1, and the corresponding Lyapunov function is
::V_1(\mathbf{x}, z_1) = V_x(\mathbf{x}) + \frac{1}{2} \big( z_1 - u_x(\mathbf{x}) \big)^2
:That is, under feedback control law u_1, the Lyapunov function V_1 decays to zero as the states return to the origin.
* Connect a new integrator to input u_1 so that the augmented system has input u_2 and output states \mathbf{x}.
The resulting augmented dynamical system is
::\begin{cases} \dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\\ \dot{z}_1 = z_2\\ \dot{z}_2 = u_2 \end{cases}
:which is equivalent to the ''single''-integrator system
::\begin{cases} \underbrace{\begin{bmatrix} \dot{\mathbf{x}} \\ \dot{z}_1 \end{bmatrix}}_{\dot{\mathbf{x}}_1} = \underbrace{\begin{bmatrix} f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1 \\ 0 \end{bmatrix}}_{f_1(\mathbf{x}_1)} + \underbrace{\begin{bmatrix} \mathbf{0} \\ 1 \end{bmatrix}}_{g_1(\mathbf{x}_1)} z_2 & \quad (\text{by Lyapunov function } V_1 \text{, subsystem stabilized by } u_1(\mathbf{x}_1))\\ \dot{z}_2 = u_2 \end{cases}
:Using these definitions of \mathbf{x}_1, f_1, and g_1, this system can also be expressed as
::\begin{cases} \dot{\mathbf{x}}_1 = f_1(\mathbf{x}_1) + g_1(\mathbf{x}_1) z_2 & \quad (\text{by Lyapunov function } V_1 \text{, subsystem stabilized by } u_1(\mathbf{x}_1))\\ \dot{z}_2 = u_2 \end{cases}
:This system matches the single-integrator structure, and so the single-integrator backstepping procedure can be applied again. That is, if we feed back states z_1, z_2, and \mathbf{x} to input u_2 according to the control law
::u_2(\mathbf{x}, z_1, z_2) = -\frac{\partial V_1}{\partial \mathbf{x}_1} g_1(\mathbf{x}_1) - k_2 \big( z_2 - u_1(\mathbf{x}_1) \big) + \frac{\partial u_1}{\partial \mathbf{x}_1} \big( f_1(\mathbf{x}_1) + g_1(\mathbf{x}_1) z_2 \big)
:with gain k_2 > 0, then the states z_1, z_2, and \mathbf{x} will return to z_1 = 0, z_2 = 0, and \mathbf{x} = \mathbf{0} after a single perturbation. This subsystem is stabilized by the feedback control law u_2, and the corresponding Lyapunov function is
::V_2(\mathbf{x}, z_1, z_2) = V_1(\mathbf{x}_1) + \frac{1}{2} \big( z_2 - u_1(\mathbf{x}_1) \big)^2
:That is, under feedback control law u_2, the Lyapunov function V_2 decays to zero as the states return to the origin.
* Connect a new integrator to input u_2 so that the augmented system has input u_3 and output states \mathbf{x}.
The resulting augmented dynamical system is
::\begin{cases} \dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\\ \dot{z}_1 = z_2\\ \dot{z}_2 = z_3\\ \dot{z}_3 = u_3 \end{cases}
:which can be re-grouped as the ''single''-integrator system
::\begin{cases} \underbrace{\begin{bmatrix} \dot{\mathbf{x}}_1 \\ \dot{z}_2 \end{bmatrix}}_{\dot{\mathbf{x}}_2} = \underbrace{\begin{bmatrix} f_1(\mathbf{x}_1) + g_1(\mathbf{x}_1) z_2 \\ 0 \end{bmatrix}}_{f_2(\mathbf{x}_2)} + \underbrace{\begin{bmatrix} \mathbf{0} \\ 1 \end{bmatrix}}_{g_2(\mathbf{x}_2)} z_3 & \quad (\text{by Lyapunov function } V_2 \text{, subsystem stabilized by } u_2(\mathbf{x}_2))\\ \dot{z}_3 = u_3 \end{cases}
:Using these definitions of \mathbf{x}_2, f_2, and g_2, this system can also be expressed as
::\begin{cases} \dot{\mathbf{x}}_2 = f_2(\mathbf{x}_2) + g_2(\mathbf{x}_2) z_3 & \quad (\text{by Lyapunov function } V_2 \text{, subsystem stabilized by } u_2(\mathbf{x}_2))\\ \dot{z}_3 = u_3 \end{cases}
:So the re-grouped system has the single-integrator structure, and so the single-integrator backstepping procedure can be applied again. That is, if we feed back states z_1, z_2, z_3, and \mathbf{x} to input u_3 according to the control law
::u_3(\mathbf{x}, z_1, z_2, z_3) = -\frac{\partial V_2}{\partial \mathbf{x}_2} g_2(\mathbf{x}_2) - k_3 \big( z_3 - u_2(\mathbf{x}_2) \big) + \frac{\partial u_2}{\partial \mathbf{x}_2} \big( f_2(\mathbf{x}_2) + g_2(\mathbf{x}_2) z_3 \big)
:with gain k_3 > 0, then the states z_1, z_2, z_3, and \mathbf{x} will return to z_1 = 0, z_2 = 0, z_3 = 0, and \mathbf{x} = \mathbf{0} after a single perturbation. This subsystem is stabilized by the feedback control law u_3, and the corresponding Lyapunov function is
::V_3(\mathbf{x}, z_1, z_2, z_3) = V_2(\mathbf{x}_2) + \frac{1}{2} \big( z_3 - u_2(\mathbf{x}_2) \big)^2
:That is, under feedback control law u_3, the Lyapunov function V_3 decays to zero as the states return to the origin.
* This process can continue for each integrator added to the system, and hence any system of the form
::\begin{cases} \dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1 & \quad (\text{stabilized by } u_x(\mathbf{x}) \text{, with Lyapunov function } V_x)\\ \dot{z}_1 = z_2\\ \dot{z}_2 = z_3\\ \vdots\\ \dot{z}_i = z_{i+1}\\ \vdots\\ \dot{z}_{k-2} = z_{k-1}\\ \dot{z}_{k-1} = z_k\\ \dot{z}_k = u \end{cases}
:has a recursive structure in which each (\mathbf{x}, z_1, \ldots, z_i) subsystem is a single-integrator cascade driven by z_{i+1}, and can be feedback stabilized by finding the feedback-stabilizing control and Lyapunov function for the single-integrator (\mathbf{x}, z_1) subsystem (i.e., with input z_2 and output \mathbf{x}) and iterating out from that inner subsystem until the ultimate feedback-stabilizing control u is known. At iteration i, the equivalent system is
::\begin{cases} \dot{\mathbf{x}}_{i-1} = f_{i-1}(\mathbf{x}_{i-1}) + g_{i-1}(\mathbf{x}_{i-1}) z_i & \quad (\text{stabilized by } u_{i-1}(\mathbf{x}_{i-1}) \text{, with Lyapunov function } V_{i-1})\\ \dot{z}_i = u_i \end{cases}
:where \mathbf{x}_{i-1} \triangleq [\mathbf{x}; z_1; \ldots; z_{i-1}]. The corresponding feedback-stabilizing control law is
::u_i(\mathbf{x}_i) = -\frac{\partial V_{i-1}}{\partial \mathbf{x}_{i-1}} g_{i-1}(\mathbf{x}_{i-1}) - k_i \big( z_i - u_{i-1}(\mathbf{x}_{i-1}) \big) + \frac{\partial u_{i-1}}{\partial \mathbf{x}_{i-1}} \big( f_{i-1}(\mathbf{x}_{i-1}) + g_{i-1}(\mathbf{x}_{i-1}) z_i \big)
:with gain k_i > 0. The corresponding Lyapunov function is
::V_i(\mathbf{x}_i) = V_{i-1}(\mathbf{x}_{i-1}) + \frac{1}{2} \big( z_i - u_{i-1}(\mathbf{x}_{i-1}) \big)^2
:By this construction, the ultimate control u(\mathbf{x}, z_1, z_2, \ldots, z_k) = u_k(\mathbf{x}_k) (i.e., the ultimate control is found at the final iteration i = k). Hence, any system in this special many-integrator strict-feedback form can be feedback stabilized using a straightforward procedure that can even be automated (e.g., as part of an adaptive control algorithm).
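The claim that the iteration can be automated can be illustrated with a small computer-algebra sketch. Here SymPy mechanizes one backstepping iteration (the u_i and V_i formulas above) and applies it twice to a hypothetical two-integrator example; the particular choices f_x = x^2 - x^3, g_x = 1, u_x = -x^2 - x, V_x = x^2/2, and the gains are assumptions made only for this illustration:

```python
import sympy as sp

x, z1, z2 = sp.symbols('x z1 z2')
k1, k2 = sp.Rational(2), sp.Rational(2)   # fixed gains k_i > 0

def backstep(V_prev, u_prev, f_prev, g_prev, states, z_new, k):
    """One backstepping iteration:
       u_i = -(dV/dx) g - k (z_i - u_{i-1}) + (du_{i-1}/dx)(f + g z_i),
       V_i = V_{i-1} + (z_i - u_{i-1})^2 / 2."""
    X = sp.Matrix(states)
    dV = sp.Matrix([V_prev]).jacobian(X)            # 1 x n row vector
    du = sp.Matrix([u_prev]).jacobian(X)            # 1 x n row vector
    xdot = sp.Matrix(f_prev) + sp.Matrix(g_prev)*z_new
    u_new = (-(dV*sp.Matrix(g_prev))[0]
             - k*(z_new - u_prev)
             + (du*xdot)[0])
    return sp.expand(u_new), V_prev + (z_new - u_prev)**2/2

# Inner subsystem (assumed): x' = x^2 - x^3 + z1, stabilized by u_x = -x^2 - x.
f0, g0 = [x**2 - x**3], [sp.Integer(1)]
u0, V0 = -x**2 - x, x**2/2

# Iteration 1: integrator z1' = u1  ->  control u1(x, z1).
u1, V1 = backstep(V0, u0, f0, g0, [x], z1, k1)

# Iteration 2: augmented state x1 = (x, z1) with f1 = [f0 + g0*z1, 0],
# g1 = [0, 1]; integrator z2' = u2  ->  control u2(x, z1, z2).
f1, g1 = [f0[0] + g0[0]*z1, sp.Integer(0)], [sp.Integer(0), sp.Integer(1)]
u2, V2 = backstep(V1, u1, f1, g1, [x, z1], z2, k2)

# Sanity check: dV2/dt along the closed loop is negative away from the origin.
xdot_cl = [f0[0] + z1, z2, u2]
V2dot = sum(sp.diff(V2, s)*d for s, d in zip([x, z1, z2], xdot_cl))
sample = {x: sp.Rational(1, 2), z1: sp.Rational(3, 10), z2: -sp.Rational(1, 5)}
print(sp.simplify(V2dot.subs(sample)))   # an exact negative rational
```

The construction guarantees \dot{V}_2 = -x^2 - x^4 - k_1 e_1^2 - k_2 e_2^2 for this example, so the printed value is negative at any nonzero sample point, and u_2 vanishes at the origin.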


Generic Backstepping

Systems in the special strict-feedback form have a recursive structure similar to the many-integrator system structure. Likewise, they are stabilized by stabilizing the smallest cascaded system and then ''backstepping'' to the next cascaded system and repeating the procedure. So it is critical to develop a single-step procedure; that procedure can be recursively applied to cover the many-step case. Fortunately, due to the requirements on the functions in the strict-feedback form, each single-step system can be rendered by feedback to a single-integrator system, and that single-integrator system can be stabilized using methods discussed above.


Single-step Procedure

Consider the simple strict-feedback system

:\begin{cases} \dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\\ \dot{z}_1 = f_1(\mathbf{x}, z_1) + g_1(\mathbf{x}, z_1) u_1 \end{cases}

where
* \mathbf{x} = [x_1, x_2, \ldots, x_n]^{\mathsf{T}} \in \mathbb{R}^n,
* z_1 and u_1 are scalars,
* for all \mathbf{x} and z_1, g_1(\mathbf{x}, z_1) \neq 0.

Rather than designing the feedback-stabilizing control u_1 directly, introduce a new control u_{a1} (to be designed ''later'') and use the control law

:u_1(\mathbf{x}, z_1) = \frac{1}{g_1(\mathbf{x}, z_1)} \big( u_{a1} - f_1(\mathbf{x}, z_1) \big)

which is possible because g_1 \neq 0. So the system becomes

:\begin{cases} \dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\\ \dot{z}_1 = f_1(\mathbf{x}, z_1) + g_1(\mathbf{x}, z_1) \frac{1}{g_1(\mathbf{x}, z_1)} \big( u_{a1} - f_1(\mathbf{x}, z_1) \big) \end{cases}

which simplifies to

:\begin{cases} \dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1\\ \dot{z}_1 = u_{a1} \end{cases}

This new u_{a1}-to-\mathbf{x} system matches the ''single-integrator cascade system'' treated above. Assuming that a feedback-stabilizing control law u_x(\mathbf{x}) and Lyapunov function V_x(\mathbf{x}) for the upper subsystem are known, the feedback-stabilizing control law from the single-integrator case is

:u_{a1}(\mathbf{x}, z_1) = -\frac{\partial V_x}{\partial \mathbf{x}} g_x(\mathbf{x}) - k_1 \big( z_1 - u_x(\mathbf{x}) \big) + \frac{\partial u_x}{\partial \mathbf{x}} \big( f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1 \big)

with gain k_1 > 0. So the final feedback-stabilizing control law is

:u_1(\mathbf{x}, z_1) = \frac{1}{g_1(\mathbf{x}, z_1)} \left( -\frac{\partial V_x}{\partial \mathbf{x}} g_x(\mathbf{x}) - k_1 \big( z_1 - u_x(\mathbf{x}) \big) + \frac{\partial u_x}{\partial \mathbf{x}} \big( f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1 \big) - f_1(\mathbf{x}, z_1) \right)

with gain k_1 > 0. The corresponding Lyapunov function is

:V_1(\mathbf{x}, z_1) = V_x(\mathbf{x}) + \frac{1}{2} \big( z_1 - u_x(\mathbf{x}) \big)^2

Because this ''strict-feedback system'' has a feedback-stabilizing control and a corresponding Lyapunov function, it can be cascaded as part of a larger strict-feedback system, and this procedure can be repeated to find the surrounding feedback-stabilizing control.
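To see the feedback conversion in action, the sketch below simulates a hypothetical strict-feedback example (all system choices are assumptions for illustration): \dot{x} = -x + z_1, \dot{z}_1 = x z_1 + (1 + z_1^2) u_1, so f_x = -x, g_x = 1, f_1 = x z_1, and g_1 = 1 + z_1^2 \neq 0. The inner subsystem \dot{x} = -x is already stable with u_x = 0 and V_x = x^2/2, so u_{a1} = -x - k_1 z_1, and the final control divides out g_1:

```python
# Hypothetical strict-feedback example (assumed for illustration):
#   x'  = -x + z1                      (f_x = -x,   g_x = 1)
#   z1' = x*z1 + (1 + z1**2) * u1      (f_1 = x*z1, g_1 = 1 + z1**2 != 0)
# The inner subsystem x' = -x is already stable with u_x = 0, V_x = x**2 / 2.

K1 = 2.0  # gain k_1 > 0

def u_a1(x, z1):
    """Single-integrator backstepping law for the converted system:
       u_a1 = -dVx/dx*g_x - k1*(z1 - u_x) + du_x/dx*(f_x + g_x*z1).
       Here u_x = 0, so the last term drops out."""
    return -x - K1*z1

def u_1(x, z1):
    """Final control: cancel f_1 and divide out g_1."""
    return (u_a1(x, z1) - x*z1) / (1.0 + z1**2)

def simulate(x0=2.0, z0=-1.0, dt=1e-3, T=10.0):
    """Integrate the closed loop with forward Euler from (x0, z0)."""
    x, z1 = x0, z0
    for _ in range(int(T/dt)):
        u = u_1(x, z1)
        dx = -x + z1
        dz = x*z1 + (1.0 + z1**2)*u
        x, z1 = x + dt*dx, z1 + dt*dz
    return x, z1

x_f, z_f = simulate()
print(abs(x_f), abs(z_f))  # both near zero
```

After the conversion, the closed loop is exactly \dot{x} = -x + z_1, \dot{z}_1 = -x - k_1 z_1 (the nonlinearity f_1 is cancelled by the 1/g_1 feedback), which is a stable linear system.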


Many-step Procedure

As in many-integrator backstepping, the single-step procedure can be completed iteratively to stabilize an entire strict-feedback system. In each step,
# The smallest "unstabilized" single-step strict-feedback system is isolated.
# Feedback is used to convert the system into a single-integrator system.
# The resulting single-integrator system is stabilized.
# The stabilized system is used as the upper system in the next step.

That is, any ''strict-feedback system''

:\begin{cases} \dot{\mathbf{x}} = f_x(\mathbf{x}) + g_x(\mathbf{x}) z_1 & \quad (\text{stabilized by } u_x(\mathbf{x}) \text{, with Lyapunov function } V_x)\\ \dot{z}_1 = f_1(\mathbf{x}, z_1) + g_1(\mathbf{x}, z_1) z_2\\ \dot{z}_2 = f_2(\mathbf{x}, z_1, z_2) + g_2(\mathbf{x}, z_1, z_2) z_3\\ \vdots\\ \dot{z}_i = f_i(\mathbf{x}, z_1, \ldots, z_i) + g_i(\mathbf{x}, z_1, \ldots, z_i) z_{i+1}\\ \vdots\\ \dot{z}_{k-1} = f_{k-1}(\mathbf{x}, z_1, \ldots, z_{k-1}) + g_{k-1}(\mathbf{x}, z_1, \ldots, z_{k-1}) z_k\\ \dot{z}_k = f_k(\mathbf{x}, z_1, \ldots, z_k) + g_k(\mathbf{x}, z_1, \ldots, z_k) u \end{cases}

has a recursive structure in which each (\mathbf{x}, z_1, \ldots, z_i) subsystem is a single-step strict-feedback system driven by z_{i+1}, and can be feedback stabilized by finding the feedback-stabilizing control and Lyapunov function for the single-step (\mathbf{x}, z_1) subsystem (i.e., with input z_2 and output \mathbf{x}) and iterating out from that inner subsystem until the ultimate feedback-stabilizing control u is known. At iteration i, the equivalent system is

:\begin{cases} \dot{\mathbf{x}}_{i-1} = f_{i-1}(\mathbf{x}_{i-1}) + g_{i-1}(\mathbf{x}_{i-1}) z_i & \quad (\text{stabilized by } u_{i-1}(\mathbf{x}_{i-1}) \text{, with Lyapunov function } V_{i-1})\\ \dot{z}_i = f_i(\mathbf{x}_i) + g_i(\mathbf{x}_i) u_i \end{cases}

where \mathbf{x}_i \triangleq [\mathbf{x}; z_1; \ldots; z_i]. By the single-step procedure, the corresponding feedback-stabilizing control law is

:u_i(\mathbf{x}_i) = \frac{1}{g_i(\mathbf{x}_i)} \left( -\frac{\partial V_{i-1}}{\partial \mathbf{x}_{i-1}} g_{i-1}(\mathbf{x}_{i-1}) - k_i \big( z_i - u_{i-1}(\mathbf{x}_{i-1}) \big) + \frac{\partial u_{i-1}}{\partial \mathbf{x}_{i-1}} \big( f_{i-1}(\mathbf{x}_{i-1}) + g_{i-1}(\mathbf{x}_{i-1}) z_i \big) - f_i(\mathbf{x}_i) \right)

with gain k_i > 0. The corresponding Lyapunov function is

:V_i(\mathbf{x}_i) = V_{i-1}(\mathbf{x}_{i-1}) + \frac{1}{2} \big( z_i - u_{i-1}(\mathbf{x}_{i-1}) \big)^2

By this construction, the ultimate control u(\mathbf{x}, z_1, z_2, \ldots, z_k) = u_k(\mathbf{x}_k) (i.e., the ultimate control is found at the final iteration i = k). Hence, any strict-feedback system can be feedback stabilized using a straightforward procedure that can even be automated (e.g., as part of an adaptive control algorithm).


See also

* Nonlinear control
* Strict-feedback form
* Robust control
* Adaptive control

