Overview
The ''standard'' approach to control systems design is organized in two steps:
# Model identification aims at estimating a nominal model of the system <math>\hat{G} = G(q; \hat{\theta}_N)</math>, where <math>q</math> is the unit-delay operator (for discrete-time transfer function representations) and <math>\hat{\theta}_N</math> is the vector of parameters of <math>G</math> identified on a set of <math>N</math> data. Then, validation consists in constructing the ''uncertainty set'' <math>\Gamma</math> that contains the true system <math>G_0</math> at a certain probability level.
# Controller design aims at finding a controller <math>C</math> achieving closed-loop stability and meeting the required performance with <math>\hat{G}</math>.
Typical objectives of system identification are to have <math>\hat{G}</math> as close as possible to <math>G_0</math>, and to have <math>\Gamma</math> as small as possible. However, from an identification for control perspective, what really matters is the performance achieved by the controller, not the intrinsic quality of the model. One way to deal with uncertainty is to design a controller that has an acceptable performance with all models in <math>\Gamma</math>, including <math>G_0</math>: this is the main idea behind robust control design.
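As a deliberately simplified illustration of this two-step procedure, the sketch below fits a first-order ARX model to simulated input/output data by least squares (step 1) and then places the closed-loop pole of the identified model with a proportional gain (step 2). The plant, the signals and the pole-placement rule are illustrative assumptions, not part of any specific method discussed here.
<syntaxhighlight lang="python">
import numpy as np

# --- Step 1: model identification (least-squares fit of a first-order ARX model) ---
# In practice u and y would be measured on the plant; here they are simulated.
rng = np.random.default_rng(0)
u = rng.standard_normal(200)                      # excitation signal
a_true, b_true = 0.8, 0.5                         # hypothetical plant y(t) = a*y(t-1) + b*u(t-1) + noise
y = np.zeros_like(u)
for t in range(1, len(u)):
    y[t] = a_true * y[t - 1] + b_true * u[t - 1] + 0.02 * rng.standard_normal()

Phi = np.column_stack([y[:-1], u[:-1]])           # regressors [y(t-1), u(t-1)]
theta_hat, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta_hat                          # identified parameter vector

# --- Step 2: controller design on the identified model ---
# Illustrative rule: choose a proportional gain placing the closed-loop pole of the
# identified model at z = 0.5, with the control law u(t) = -Kp*y(t) + r(t).
desired_pole = 0.5
Kp = (a_hat - desired_pole) / b_hat
print(f"identified model: a={a_hat:.3f}, b={b_hat:.3f}; proportional gain Kp={Kp:.3f}")
</syntaxhighlight>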
Indirect and direct methods
There are many methods available for the data-driven design of control systems. The fundamental distinction is between indirect and direct controller design methods. The former group of techniques retains the standard two-step approach, ''i.e.'' first a model is identified, then a controller is tuned based on such a model. The main issue in doing so is that the controller is computed from the estimated model <math>\hat{G}</math> (according to the ''certainty equivalence'' principle), which in practice differs from the true system <math>G_0</math>. Direct methods, instead, map the experimental data directly onto the controller, without identifying a model of the system in between.
Iterative and noniterative methods
Another important distinction is between iterative and noniterative (or one-shot) methods. In the former group, repeated iterations are needed to estimate the controller parameters: at each iteration, new data are collected in closed loop and the optimization problem is solved again, so that the estimate is progressively refined. In the latter group, the controller parametrization is obtained by solving a single optimization problem on a single batch of data; this is important when repeated experiments on the plant are expensive or not allowed.
On-line and off-line methods
Since, in practical industrial applications, open-loop or closed-loop data are often available continuously, on-line data-driven techniques use those data to improve the quality of the identified model and/or the performance of the controller each time new information is collected on the plant. Off-line approaches, instead, work on batches of data, which may be collected only once, or multiple times at regular (but rather long) intervals of time.
Iterative feedback tuning
The iterative feedback tuning (IFT) method was introduced in 1994, starting from the observation that, in identification for control, each iteration is based on the (wrong) certainty equivalence principle.
IFT is a model-free technique for the direct iterative optimization of the parameters of a fixed-order controller; such parameters can be successively updated using information coming from standard (closed-loop) system operation.
Let <math>y^d</math> be the desired output for the reference signal <math>r</math>; the error between the achieved and desired response is <math>\tilde{y}(\rho) = y(\rho) - y^d</math>. The control design objective can be formulated as the minimization of the objective function:
: <math>J(\rho) = \frac{1}{2N} \sum_{t=1}^{N} E\left[\tilde{y}^2(t,\rho)\right].</math>
Given the objective function to minimize, the ''quasi-Newton method'' can be applied, i.e. a gradient-based minimization using a gradient search of the type:
: <math>\rho_{i+1} = \rho_i - \gamma_i R_i^{-1} \operatorname{est}\!\left[\frac{\partial J}{\partial \rho}(\rho_i)\right].</math>
The value <math>\gamma_i</math> is the step size, <math>R_i</math> is an appropriate positive definite matrix and <math>\operatorname{est}\!\left[\frac{\partial J}{\partial \rho}\right]</math> is an approximation of the gradient; the true value of the gradient is given by the following:
: <math>\frac{\partial J}{\partial \rho}(\rho) = \frac{1}{N} \sum_{t=1}^{N} E\left[\tilde{y}(t,\rho)\,\frac{\partial \tilde{y}}{\partial \rho}(t,\rho)\right].</math>
The value of <math>\frac{\partial \tilde{y}}{\partial \rho}(t,\rho)</math> is obtained through the following three-step methodology:
# Normal Experiment: Perform an experiment on the closed loop system with <math>C(\rho)</math> as controller and <math>r</math> as reference; collect <math>N</math> measurements of the output <math>y(\rho)</math>, denoted as <math>y^{(1)}(\rho)</math>.
# Gradient Experiment: Perform an experiment on the closed loop system with <math>C(\rho)</math> as controller and 0 as reference <math>r</math>; inject the signal <math>r - y^{(1)}(\rho)</math> such that it is summed to the control variable output by <math>C(\rho)</math>, going as input into the plant. Collect the output, denoted as <math>y^{(2)}(\rho)</math>.
# Take the following as gradient approximation: <math>\frac{\partial \tilde{y}}{\partial \rho}(t,\rho) \approx \frac{\partial C}{\partial \rho}(\rho)\, y^{(2)}(t,\rho)</math>.
A crucial factor for the convergence speed of the algorithm is the choice of <math>R_i</math>; when <math>\tilde{y}</math> is small, a good choice is the approximation given by the Gauss–Newton direction:
: <math>R_i = \frac{1}{N} \sum_{t=1}^{N} \frac{\partial \tilde{y}}{\partial \rho}(t,\rho_i)\,\frac{\partial \tilde{y}}{\partial \rho}^{T}(t,\rho_i).</math>
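A rough simulation sketch of the procedure above is given below, assuming (purely for illustration) a first-order plant, a pure-gain controller <math>C(\rho) = \rho</math> (so that <math>\partial C / \partial \rho = 1</math>), a step reference, and a desired response that such a gain can actually achieve; in a real IFT application the two experiments are carried out on the physical closed loop rather than in simulation.
<syntaxhighlight lang="python">
import numpy as np

# Hypothetical first-order plant y(t) = 0.9*y(t-1) + 0.1*u(t-1); in IFT proper, this
# simulation is replaced by experiments on the actual closed loop.
def closed_loop(rho, reference, plant_injection=None):
    n = len(reference)
    y, u = np.zeros(n), np.zeros(n)
    for t in range(1, n):
        y[t] = 0.9 * y[t - 1] + 0.1 * u[t - 1]
        u[t] = rho * (reference[t] - y[t])            # controller C(rho) = rho (pure gain)
        if plant_injection is not None:
            u[t] += plant_injection[t]                # signal summed to the controller output
    return y

N = 300
r = np.ones(N)                                        # step reference
y_des = 0.8 * (1.0 - 0.5 ** np.arange(N))             # desired response (achievable with rho = 4)
rho, gamma = 2.0, 0.5                                 # initial controller gain and step size

for i in range(10):
    y1 = closed_loop(rho, r)                          # 1) normal experiment
    y2 = closed_loop(rho, np.zeros(N), r - y1)        # 2) gradient experiment (zero reference)
    dy_drho = y2                                      # 3) dC/drho = 1 for a pure gain, so dy/drho ~ y2
    grad = np.mean((y1 - y_des) * dy_drho)            # estimate of dJ/drho
    R = np.mean(dy_drho ** 2) + 1e-9                  # Gauss-Newton scaling (scalar case)
    rho = rho - gamma * grad / R                      # quasi-Newton update
print(f"tuned gain rho = {rho:.3f}")                  # should approach 4 for this setup
</syntaxhighlight>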
Noniterative correlation-based tuning
Noniterative correlation-based tuning (nCbT) is a noniterative method for data-driven tuning of a fixed-structure controller. It provides a one-shot method to directly synthesize a controller based on a single dataset.
Suppose that <math>G</math> denotes an unknown LTI stable SISO plant, <math>M</math> a user-defined reference model and <math>F</math> a user-defined weighting function. An LTI fixed-order controller is indicated as <math>K(\rho,q) = \beta^{T}(q)\rho</math>, where <math>\rho \in \mathbb{R}^{n}</math> and <math>\beta(q)</math> is a vector of LTI basis functions. Finally, <math>K^{*}</math> is an ideal LTI controller of any structure, guaranteeing a closed-loop function <math>M</math> when applied to <math>G</math>. The goal is to minimize the following objective function:
: <math>J(\rho) = \left\| F \left( K(\rho)\,G\,(1-M) - M \right) \right\|_2^2.</math>
<math>J(\rho)</math> is a convex approximation of the objective function obtained from a model reference problem, supposing that <math>\frac{1}{1+K(\rho)G} \approx \frac{1}{1+K^{*}G} = 1-M</math>. When <math>G</math> is stable and minimum-phase, the approximated model reference problem is equivalent to the minimization of the norm of the error signal <math>\varepsilon(t,\rho)</math> in the corresponding model-reference scheme.
The input signal <math>r(t)</math> is supposed to be a persistently exciting input signal and the disturbance <math>v(t)</math> to be generated by a stable data-generation mechanism. The two signals are thus uncorrelated in an open-loop experiment; hence, the ideal error <math>\varepsilon(t,\rho^{*})</math> is uncorrelated with <math>r(t)</math>. The control objective thus consists in finding <math>\rho</math> such that <math>\varepsilon(t,\rho)</math> and <math>r(t)</math> are uncorrelated. The vector of ''instrumental variables'' <math>\zeta(t)</math> is defined as:
: <math>\zeta(t) = \left[ r_W(t+\ell_1), \ldots, r_W(t), \ldots, r_W(t-\ell_1) \right]^{T}</math>
where <math>\ell_1</math> is large enough and <math>r_W(t) = W(q)\,r(t)</math>, where <math>W</math> is an appropriate filter. The correlation function is:
: <math>f_{N,\ell_1}(\rho) = \frac{1}{N} \sum_{t=1}^{N} \zeta(t)\,\varepsilon(t,\rho)</math>
and the optimization problem becomes:
: <math>\hat{\rho} = \arg\min_{\rho} J_{N,\ell_1}(\rho), \qquad J_{N,\ell_1}(\rho) = \left\| f_{N,\ell_1}(\rho) \right\|_2^2.</math>
Denoting with <math>\Phi_r(\omega)</math> the spectrum of <math>r(t)</math>, it can be demonstrated that, under some assumptions, if <math>W</math> is selected as:
: <math>W(e^{j\omega}) = \frac{1}{\Phi_r(\omega)}</math>
then, the following holds:
: <math>\lim_{N,\ell_1 \to \infty} J_{N,\ell_1}(\rho) = J(\rho).</math>
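To show how the correlation criterion reduces to a least-squares problem when <math>K(\rho)</math> is linearly parametrized, the sketch below applies the construction to simulated open-loop data. The plant, the reference model, the PI basis and the choices <math>F = 1</math> and <math>W = 1</math> (the input being white with unit variance) are illustrative assumptions rather than prescriptions of the method.
<syntaxhighlight lang="python">
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
N, ell = 2000, 10

# Open-loop experiment on a hypothetical plant G (unknown to the design method):
# y = G r + v, with a persistently exciting (white) input r.
r = rng.standard_normal(N)
y = lfilter([0.0, 0.5], [1.0, -0.7], r) + 0.05 * rng.standard_normal(N)

# Reference model M(q) = 0.4 q^-1 / (1 - 0.6 q^-1) and PI basis beta = [1, integrator].
M_num, M_den = [0.0, 0.4], [1.0, -0.6]

def one_minus_M(x):                                    # apply the operator (1 - M) to a signal
    return x - lfilter(M_num, M_den, x)

# epsilon(t, rho) = (1 - M) K(rho) y - M r is affine in rho: epsilon = X rho - m.
X = np.column_stack([
    one_minus_M(y),                                    # basis beta_1 = 1
    one_minus_M(lfilter([1.0], [1.0, -1.0], y)),       # basis beta_2 = 1 / (1 - q^-1)
])
m = lfilter(M_num, M_den, r)

# Instrumental variables built from shifts of r (circular shifts; end effects ignored here).
Z = np.column_stack([np.roll(r, -k) for k in range(-ell, ell + 1)])

# Correlation function f(rho) = (1/N) Z^T (X rho - m); minimizing ||f(rho)||^2 is least squares.
A = Z.T @ X / N
b = Z.T @ m / N
rho_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated controller parameters:", rho_hat)
</syntaxhighlight>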
Stability constraint
There is no guarantee that the controller <math>K(\hat{\rho})</math> that minimizes <math>J_{N,\ell_1}</math> is stabilizing. Instability may occur in the following cases:
* If <math>G</math> is non-minimum phase, <math>K^{*}</math> may lead to cancellations in the right-half complex plane.
* If <math>K^{*}</math> (even if stabilizing) is not achievable with the chosen parametrization, <math>K(\rho)</math> may not be stabilizing.
* Due to measurement noise, even if <math>K(\rho)</math> is stabilizing, the data-estimated <math>K(\hat{\rho})</math> may not be so.
Consider a stabilizing controller <math>K_s</math> and the closed loop transfer function <math>M_s = \frac{K_s G}{1+K_s G}</math>. Define:
: <math>\delta(\rho) := \bigl(K(\rho) - K_s\bigr)\,\frac{G}{1+K_s G}</math>
: <math>\left\| \delta(\rho) \right\|_\infty := \max_{\omega} \left| \delta(\rho, e^{j\omega}) \right|.</math>
:Theorem
:''The controller <math>K(\rho)</math> stabilizes the plant <math>G</math> if''
# ''<math>\delta(\rho)</math> is stable''
# ''<math>\exists\, \delta \in (0,1)</math> s.t. <math>\left\| \delta(\rho) \right\|_\infty \le \delta</math>''
Condition 1. is enforced when:
* <math>K(\rho)</math> is stable
* <math>K(\rho)</math> contains an integrator (it is canceled).
The model reference design with stability constraint becomes:
: <math>\hat{\rho} = \arg\min_{\rho} J_{N,\ell_1}(\rho)</math>
: <math>\text{s.t.}\quad \left\| \delta(\rho) \right\|_\infty \le \delta.</math>
A convex data-driven estimation of <math>\left\| \delta(\rho) \right\|_\infty</math> can be obtained through the discrete Fourier transform of the measured data.
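As an illustration of how such a constraint can be imposed numerically, the sketch below adds a frequency-gridded bound on the magnitude of <math>\delta(\rho)</math> to the correlation criterion and solves the resulting convex program with CVXPY. The affine frequency-domain parametrization of <math>\delta(\rho)</math> and all numerical values are placeholders standing in for quantities that would be estimated from data; they are not prescribed by the nCbT literature.
<syntaxhighlight lang="python">
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n_par, n_freq = 2, 64

# Placeholders standing in for data-estimated quantities:
#  - A_corr, b_corr play the role of A and b from the previous sketch, so that the
#    correlation criterion is ||A_corr @ rho - b_corr||^2;
#  - at each grid frequency omega_k, delta(rho, e^{j omega_k}) is assumed to have been
#    estimated in the affine (complex-valued) form D[k, :] @ rho - d[k].
A_corr = rng.standard_normal((21, n_par))
b_corr = rng.standard_normal(21)
D = rng.standard_normal((n_freq, n_par)) + 1j * rng.standard_normal((n_freq, n_par))
d = 0.1 * (rng.standard_normal(n_freq) + 1j * rng.standard_normal(n_freq))

delta_bound = 0.9                                    # required bound delta < 1 on ||delta(rho)||_inf
rho = cp.Variable(n_par)
objective = cp.Minimize(cp.sum_squares(A_corr @ rho - b_corr))
constraints = [cp.abs(D @ rho - d) <= delta_bound]   # convex bound enforced at every grid point
cp.Problem(objective, constraints).solve()
print("constrained controller parameters:", rho.value)
</syntaxhighlight>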
Virtual reference feedback tuning
Virtual Reference Feedback Tuning (VRFT) is a noniterative method for data-driven tuning of a fixed-structure controller. It provides a one-shot method to directly synthesize a controller based on a single dataset.
VRFT was first proposed by Campi, Lecchini and Savaresi and later extended to LPV systems. VRFT also builds on ideas given in Guardabassi, Guido O., and Sergio M. Savaresi, "Approximate feedback linearization of discrete-time non-linear systems using virtual input direct design", ''Systems & Control Letters'' 32.2 (1997): 63–74.
The main idea is to define a desired closed loop model <math>M</math> and to use its inverse dynamics to obtain a virtual reference <math>r_V(t)</math> from the measured output signal <math>y(t)</math>. The virtual signals are <math>r_V(t) = M^{-1}(q)\,y(t)</math> and <math>e_V(t) = r_V(t) - y(t)</math>.
The optimal controller is obtained from noiseless data by solving the following optimization problem:
: <math>\hat{\rho} = \arg\min_{\rho} J^{VR}_N(\rho)</math>
where the optimization function is given as follows:
: <math>J^{VR}_N(\rho) = \frac{1}{N} \sum_{t=1}^{N} \left( u(t) - K(\rho, q)\, e_V(t) \right)^2.</math>
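A minimal sketch of the virtual-reference construction on a single batch of simulated data is given below, using a first-order reference model and a PI parametrization chosen purely for illustration (the prefilter usually employed in VRFT is omitted for brevity).
<syntaxhighlight lang="python">
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(3)
N = 1000

# One batch of plant input/output data (here simulated from a hypothetical plant;
# in practice u and y are simply measured, and no plant model is ever used).
u = rng.standard_normal(N)
y = lfilter([0.0, 0.5], [1.0, -0.7], u)

# User-chosen reference model M(q) = 0.4 q^-1 / (1 - 0.6 q^-1) (unit DC gain).
# Virtual reference r_V = M^{-1} y, computed offline (a one-step advance is allowed on batch data).
r_v = (y[1:] - 0.6 * y[:-1]) / 0.4
e_v = r_v - y[:-1]                                  # virtual tracking error e_V = r_V - y

# Fixed-structure PI controller K(rho, q) = rho_1 + rho_2 / (1 - q^-1), linear in rho.
Phi = np.column_stack([
    e_v,                                            # proportional channel
    lfilter([1.0], [1.0, -1.0], e_v),               # integral channel
])
rho_hat, *_ = np.linalg.lstsq(Phi, u[:-1], rcond=None)
print("VRFT controller parameters:", rho_hat)
</syntaxhighlight>
For this particular illustrative plant the ideal controller happens to belong to the chosen PI class, so with noiseless data the least-squares solution reproduces it exactly.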
References
{{reflist}}