In statistics and control theory, Kalman filtering (also known as linear quadratic estimation) is an algorithm that uses a series of measurements observed over time, including statistical noise and other inaccuracies, to produce estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by estimating a joint probability distribution over the variables for each time-step. The filter is constructed as a mean squared error minimiser, but an alternative derivation of the filter also shows how it relates to maximum likelihood statistics. The filter is named after Rudolf E. Kálmán.

Kalman filtering has numerous technological applications. A common application is for guidance, navigation, and control of vehicles, particularly aircraft, spacecraft, and dynamically positioned ships. Kalman filtering is also widely applied in time series analysis tasks such as signal processing and econometrics, and it is important for robotic motion planning and control, where it can be used for trajectory optimization. Kalman filtering also works for modeling the central nervous system's control of movement. Because of the time delay between issuing motor commands and receiving sensory feedback, the Kalman filter provides a realistic model for making estimates of the current state of a motor system and issuing updated commands.

The algorithm works via a two-phase process: a prediction phase and an update phase. In the prediction phase, the Kalman filter produces estimates of the current state variables, including their uncertainties. Once the outcome of the next measurement (necessarily corrupted with some error, including random noise) is observed, these estimates are updated using a weighted average, with more weight given to estimates with greater certainty. The algorithm is recursive: it can operate in real time, using only the present input measurements, the previously calculated state, and its uncertainty matrix; no additional past information is required.

Optimality of Kalman filtering assumes that errors have a normal (Gaussian) distribution. In the words of Rudolf E. Kálmán: "The following assumptions are made about random processes: Physical random phenomena may be thought of as due to primary random sources exciting dynamic systems. The primary sources are assumed to be independent gaussian random processes with zero mean; the dynamic systems will be linear." Regardless of Gaussianity, however, if the process and measurement covariances are known, the Kalman filter is the best possible ''linear'' estimator in the minimum mean-square-error sense, although there may be better nonlinear estimators. It is a common misconception (perpetuated in the literature) that the Kalman filter cannot be rigorously applied unless all noise processes are assumed to be Gaussian.

Extensions and generalizations of the method have also been developed, such as the extended Kalman filter and the unscented Kalman filter, which work on nonlinear systems. The basis is a hidden Markov model in which the state space of the latent variables is continuous and all latent and observed variables have Gaussian distributions. Kalman filtering has also been used successfully in multi-sensor fusion and in distributed sensor networks to develop distributed or consensus Kalman filtering.


History

The filtering method is named for Hungarian émigré Rudolf E. Kálmán, although Thorvald Nicolai Thiele and Peter Swerling developed a similar algorithm earlier. Richard S. Bucy of the Johns Hopkins Applied Physics Laboratory contributed to the theory, causing it to be known sometimes as Kalman–Bucy filtering. Kalman was inspired to derive the Kalman filter by applying state variables to the Wiener filtering problem. Stanley F. Schmidt is generally credited with developing the first implementation of a Kalman filter. He realized that the filter could be divided into two distinct parts, with one part for time periods between sensor outputs and another part for incorporating measurements. It was during a visit by Kálmán to the NASA Ames Research Center that Schmidt saw the applicability of Kálmán's ideas to the nonlinear problem of trajectory estimation for the Apollo program, resulting in its incorporation in the Apollo navigation computer. This digital filter is sometimes termed the ''Stratonovich–Kalman–Bucy filter'' because it is a special case of a more general, nonlinear filter developed by the Soviet mathematician Ruslan Stratonovich. In fact, some of the special case linear filter's equations appeared in papers by Stratonovich that were published before the summer of 1961, when Kalman met with Stratonovich during a conference in Moscow. Kalman filtering was first described and partially developed in technical papers by Swerling (1958), Kalman (1960), and Kalman and Bucy (1961).

Kalman filters have been vital in the implementation of the navigation systems of U.S. Navy nuclear ballistic missile submarines, and in the guidance and navigation systems of cruise missiles such as the U.S. Navy's Tomahawk missile and the U.S. Air Force's Air Launched Cruise Missile. They are also used in the guidance and navigation systems of reusable launch vehicles and the attitude control and navigation systems of spacecraft which dock at the International Space Station.


Overview of the calculation

Kalman filtering uses a system's dynamic model (e.g., physical laws of motion), known control inputs to that system, and multiple sequential measurements (such as from sensors) to form an estimate of the system's varying quantities (its state) that is better than the estimate obtained by using any one measurement alone. As such, it is a common sensor fusion and data fusion algorithm.

Noisy sensor data, approximations in the equations that describe the system evolution, and external factors that are not accounted for all limit how well it is possible to determine the system's state. The Kalman filter deals effectively with the uncertainty due to noisy sensor data and, to some extent, with random external factors. The Kalman filter produces an estimate of the state of the system as a weighted average of the system's predicted state and of the new measurement. The purpose of the weights is that values with better (i.e., smaller) estimated uncertainty are "trusted" more. The weights are calculated from the covariance, a measure of the estimated uncertainty of the prediction of the system's state. The result of the weighted average is a new state estimate that lies between the predicted and measured state and has a better estimated uncertainty than either alone. This process is repeated at every time step, with the new estimate and its covariance informing the prediction used in the following iteration. This means that the Kalman filter works recursively and requires only the last "best guess", rather than the entire history, of a system's state to calculate a new state.

The relative certainty of the measurements and the current state estimate is an important consideration, and it is common to discuss the filter's response in terms of the Kalman filter's ''gain''. The Kalman gain is the weight given to the measurements relative to the current-state estimate, and it can be "tuned" to achieve particular performance. With a high gain, the filter places more weight on the most recent measurements, and thus conforms to them more responsively. With a low gain, the filter conforms more closely to the model predictions. At the extremes, a high gain (close to one) will result in a more jumpy estimated trajectory, while a low gain (close to zero) will smooth out noise but decrease responsiveness.

When performing the actual calculations for the filter (as discussed below), the state estimate and covariances are coded into matrices because of the multiple dimensions involved in a single set of calculations. This allows for a representation of linear relationships between different state variables (such as position, velocity, and acceleration) in any of the transition models or covariances.
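
The weighted-average interpretation can be made concrete with a minimal one-dimensional sketch, given here in Python with NumPy. Everything in it (the variable names, the noise variances q and r, the initial guess) is an illustrative assumption, not part of any standard API; it tracks a constant scalar value from noisy measurements.

    import numpy as np

    # One-dimensional Kalman filter for a constant scalar state.
    # q: process-noise variance, r: measurement-noise variance.
    q, r = 1e-5, 0.1**2
    rng = np.random.default_rng(0)
    true_value = 1.0
    measurements = true_value + rng.normal(0.0, 0.1, size=50)

    x_est, p_est = 0.0, 1.0          # initial guess and its variance
    for z in measurements:
        # Predict: the model says "the value stays the same",
        # so only the uncertainty grows by the process noise.
        x_pred, p_pred = x_est, p_est + q
        # Update: the Kalman gain is the weight given to the measurement.
        k = p_pred / (p_pred + r)    # between 0 (ignore z) and 1 (trust z)
        x_est = x_pred + k * (z - x_pred)   # weighted average
        p_est = (1.0 - k) * p_pred          # estimated uncertainty shrinks

    print(x_est, p_est)  # x_est approaches 1.0; p_est becomes small

With a large r (an untrusted sensor) the gain k stays small and the filter leans on its prediction; with a large q it leans on the measurements, matching the high-gain/low-gain behavior described above.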


Example application

As an example application, consider the problem of determining the precise location of a truck. The truck can be equipped with a GPS unit that provides an estimate of the position within a few meters. The GPS estimate is likely to be noisy; readings 'jump around' rapidly, though remaining within a few meters of the real position. In addition, since the truck is expected to follow the laws of physics, its position can also be estimated by integrating its velocity over time, determined by keeping track of wheel revolutions and the angle of the steering wheel. This is a technique known as dead reckoning. Typically, the dead reckoning will provide a very smooth estimate of the truck's position, but it will drift over time as small errors accumulate.

For this example, the Kalman filter can be thought of as operating in two distinct phases: predict and update. In the prediction phase, the truck's old position will be modified according to the physical laws of motion (the dynamic or "state transition" model). Not only will a new position estimate be calculated, but a new covariance will be calculated as well. Perhaps the covariance is proportional to the speed of the truck because we are more uncertain about the accuracy of the dead reckoning position estimate at high speeds but very certain about the position estimate at low speeds. Next, in the update phase, a measurement of the truck's position is taken from the GPS unit. Along with this measurement comes some amount of uncertainty, and its covariance relative to that of the prediction from the previous phase determines how much the new measurement will affect the updated prediction. Ideally, as the dead reckoning estimates tend to drift away from the real position, the GPS measurement should pull the position estimate back toward the real position but not disturb it to the point of becoming noisy and rapidly jumping.


Technical description and context

The Kalman filter is an efficient recursive filter that estimates the internal state of a linear dynamic system from a series of noisy measurements. It is used in a wide range of engineering and econometric applications, from radar and computer vision to estimation of structural macroeconomic models, and is an important topic in control theory and control systems engineering. Together with the linear-quadratic regulator (LQR), the Kalman filter solves the linear–quadratic–Gaussian (LQG) control problem. The Kalman filter, the linear-quadratic regulator, and the linear–quadratic–Gaussian controller are solutions to what are arguably the most fundamental problems of control theory.

In most applications, the internal state is much larger (has more degrees of freedom) than the few "observable" parameters which are measured. However, by combining a series of measurements, the Kalman filter can estimate the entire internal state.

In the Dempster–Shafer theory, each state equation or observation is considered a special case of a linear belief function, and Kalman filtering is a special case of combining linear belief functions on a join-tree or Markov tree. Additional methods include belief filtering, which uses Bayes or evidential updates to the state equations.

A wide variety of Kalman filters exists by now: Kalman's original formulation, now termed the "simple" Kalman filter, the Kalman–Bucy filter, Schmidt's "extended" filter, the information filter, and a variety of "square-root" filters developed by Bierman, Thornton, and many others. Perhaps the most commonly used type of very simple Kalman filter is the phase-locked loop, which is now ubiquitous in radios, especially frequency modulation (FM) radios, television sets, satellite communications receivers, outer space communications systems, and nearly any other electronic communications equipment.


Underlying dynamic system model

Kalman filtering is based on linear dynamic systems discretized in the time domain. They are modeled on a Markov chain built on linear operators perturbed by errors that may include Gaussian noise. The state of the target system refers to the ground truth (yet hidden) system configuration of interest, which is represented as a vector of real numbers. At each discrete time increment, a linear operator is applied to the state to generate the new state, with some noise mixed in, and optionally some information from the controls on the system if they are known. Then, another linear operator mixed with more noise generates the measurable outputs (i.e., observation) from the true ("hidden") state. The Kalman filter may be regarded as analogous to the hidden Markov model, with the difference that the hidden state variables take values in a continuous space, as opposed to the discrete state space of the hidden Markov model. There is a strong analogy between the equations of a Kalman filter and those of the hidden Markov model. A review of this and other models is given in Roweis and Ghahramani (1999) and Hamilton (1994), Chapter 13 (Hamilton, J. (1994), ''Time Series Analysis'', Princeton University Press, Chapter 13, 'The Kalman Filter').

In order to use the Kalman filter to estimate the internal state of a process given only a sequence of noisy observations, one must model the process in accordance with the following framework. This means specifying the following matrices for each time-step ''k'':
* \mathbf{F}_k, the state-transition model;
* \mathbf{H}_k, the observation model;
* \mathbf{Q}_k, the covariance of the process noise;
* \mathbf{R}_k, the covariance of the observation noise;
* and sometimes \mathbf{B}_k, the control-input model as described below; if \mathbf{B}_k is included, then there is also
* \mathbf{u}_k, the control vector, representing the controlling input into the control-input model.

As seen below, it is common in many applications that the matrices \mathbf{F}, \mathbf{H}, \mathbf{Q}, \mathbf{R}, and \mathbf{B} are constant across time, in which case their k index may be dropped.

The Kalman filter model assumes the true state at time k is evolved from the state at k-1 according to
:\mathbf{x}_k = \mathbf{F}_k \mathbf{x}_{k-1} + \mathbf{B}_k \mathbf{u}_k + \mathbf{w}_k
where
* \mathbf{F}_k is the state transition model, which is applied to the previous state \mathbf{x}_{k-1};
* \mathbf{B}_k is the control-input model, which is applied to the control vector \mathbf{u}_k;
* \mathbf{w}_k is the process noise, which is assumed to be drawn from a zero mean multivariate normal distribution, \mathcal{N}, with covariance \mathbf{Q}_k: \mathbf{w}_k \sim \mathcal{N}\left(0, \mathbf{Q}_k\right). If \mathbf{Q} is independent of time, one may, following Roweis and Ghahramani, write \mathbf{w}_\bullet instead of \mathbf{w}_k to emphasize that the noise has no explicit knowledge of time.

At time k an observation (or measurement) \mathbf{z}_k of the true state \mathbf{x}_k is made according to
:\mathbf{z}_k = \mathbf{H}_k \mathbf{x}_k + \mathbf{v}_k
where
* \mathbf{H}_k is the observation model, which maps the true state space into the observed space, and
* \mathbf{v}_k is the observation noise, which is assumed to be zero mean Gaussian white noise with covariance \mathbf{R}_k: \mathbf{v}_k \sim \mathcal{N}\left(0, \mathbf{R}_k\right). Analogously to the situation for \mathbf{w}_k, one may write \mathbf{v}_\bullet instead of \mathbf{v}_k if \mathbf{R} is independent of time.

The initial state, and the noise vectors at each step \{\mathbf{x}_0, \mathbf{w}_1, \dots, \mathbf{w}_k, \mathbf{v}_1, \dots, \mathbf{v}_k\}, are all assumed to be mutually independent.

Many real-time dynamic systems do not exactly conform to this model. In fact, unmodeled dynamics can seriously degrade the filter performance, even when it was supposed to work with unknown stochastic signals as inputs. The reason for this is that the effect of unmodeled dynamics depends on the input and can therefore bring the estimation algorithm to instability (it diverges). On the other hand, independent white noise signals will not make the algorithm diverge. The problem of distinguishing between measurement noise and unmodeled dynamics is a difficult one and is treated as a problem of control theory using robust control.
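
As a sketch of this generative framework, the following Python/NumPy snippet simulates one step of x_k = F x_{k-1} + B u_k + w_k and z_k = H x_k + v_k. The constant-velocity model, the numeric values, and all names here are illustrative assumptions, not part of the theory.

    import numpy as np

    rng = np.random.default_rng(42)
    dt = 1.0

    # Time-invariant model matrices (the k index is dropped).
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: position, velocity
    B = np.array([[0.5 * dt**2], [dt]])     # control input: acceleration command
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = 0.01 * np.eye(2)                    # process-noise covariance
    R = np.array([[0.25]])                  # observation-noise covariance

    x = np.array([0.0, 1.0])                # true (hidden) state
    u = np.array([0.1])                     # known control input

    # One step of the generative model.
    w = rng.multivariate_normal(np.zeros(2), Q)   # process noise w_k ~ N(0, Q)
    x = F @ x + B @ u + w                         # x_k = F x_{k-1} + B u_k + w_k
    v = rng.multivariate_normal(np.zeros(1), R)   # observation noise v_k ~ N(0, R)
    z = H @ x + v                                 # z_k = H x_k + v_k
    print(x, z)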


Details

The Kalman filter is a recursive estimator. This means that only the estimated state from the previous time step and the current measurement are needed to compute the estimate for the current state. In contrast to batch estimation techniques, no history of observations and/or estimates is required. In what follows, the notation \hat{\mathbf{x}}_{n\mid m} represents the estimate of \mathbf{x} at time ''n'' given observations up to and including time ''m'' \le ''n''.

The state of the filter is represented by two variables:
* \hat{\mathbf{x}}_{k\mid k}, the ''a posteriori'' state estimate mean at time ''k'' given observations up to and including time ''k'';
* \mathbf{P}_{k\mid k}, the ''a posteriori'' estimate covariance matrix (a measure of the estimated accuracy of the state estimate).

The algorithm structure of the Kalman filter resembles that of the alpha beta filter. The Kalman filter can be written as a single equation; however, it is most often conceptualized as two distinct phases: "Predict" and "Update". The predict phase uses the state estimate from the previous timestep to produce an estimate of the state at the current timestep. This predicted state estimate is also known as the ''a priori'' state estimate because, although it is an estimate of the state at the current timestep, it does not include observation information from the current timestep. In the update phase, the innovation (the pre-fit residual), i.e. the difference between the current ''a priori'' prediction and the current observation information, is multiplied by the optimal Kalman gain and combined with the previous state estimate to refine the state estimate. This improved estimate based on the current observation is termed the ''a posteriori'' state estimate.

Typically, the two phases alternate, with the prediction advancing the state until the next scheduled observation, and the update incorporating the observation. However, this is not necessary; if an observation is unavailable for some reason, the update may be skipped and multiple prediction procedures performed. Likewise, if multiple independent observations are available at the same time, multiple update procedures may be performed (typically with different observation matrices \mathbf{H}_k).


Predict

* Predicted (''a priori'') state estimate: \hat{\mathbf{x}}_{k\mid k-1} = \mathbf{F}_k \hat{\mathbf{x}}_{k-1\mid k-1} + \mathbf{B}_k \mathbf{u}_k
* Predicted (''a priori'') estimate covariance: \mathbf{P}_{k\mid k-1} = \mathbf{F}_k \mathbf{P}_{k-1\mid k-1} \mathbf{F}_k^\textsf{T} + \mathbf{Q}_k

Update

* Innovation or measurement pre-fit residual: \tilde{\mathbf{y}}_k = \mathbf{z}_k - \mathbf{H}_k \hat{\mathbf{x}}_{k\mid k-1}
* Innovation (or pre-fit residual) covariance: \mathbf{S}_k = \mathbf{H}_k \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\textsf{T} + \mathbf{R}_k
* Optimal Kalman gain: \mathbf{K}_k = \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\textsf{T} \mathbf{S}_k^{-1}
* Updated (''a posteriori'') state estimate: \hat{\mathbf{x}}_{k\mid k} = \hat{\mathbf{x}}_{k\mid k-1} + \mathbf{K}_k \tilde{\mathbf{y}}_k
* Updated (''a posteriori'') estimate covariance: \mathbf{P}_{k\mid k} = \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right) \mathbf{P}_{k\mid k-1}
* Measurement post-fit residual: \tilde{\mathbf{y}}_{k\mid k} = \mathbf{z}_k - \mathbf{H}_k \hat{\mathbf{x}}_{k\mid k}

The formula for the updated (''a posteriori'') estimate covariance above is valid for the optimal gain \mathbf{K}_k that minimizes the residual error, in which form it is most widely used in applications. Proof of the formulae is found in the ''derivations'' section, where the formula valid for any \mathbf{K}_k is also shown. A more intuitive way to express the updated state estimate (\hat{\mathbf{x}}_{k\mid k}) is:
:\hat{\mathbf{x}}_{k\mid k} = \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right) \hat{\mathbf{x}}_{k\mid k-1} + \mathbf{K}_k \mathbf{z}_k
This expression is reminiscent of a linear interpolation, x = (1-t)a + tb for t between [0, 1]. In our case:
* t is the matrix \mathbf{K}_k \mathbf{H}_k, which takes values from \mathbf{0} (high error in the sensor) to \mathbf{I} or a projection (low error);
* a is the internal state \hat{\mathbf{x}}_{k\mid k-1} estimated from the model;
* b is the internal state \mathbf{H}_k^{-1} \mathbf{z}_k estimated from the measurement, assuming \mathbf{H}_k is nonsingular.
This expression also resembles the alpha beta filter update step.
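
The two phases translate directly into code. The following is a compact Python/NumPy sketch of exactly the equations above; the function names `predict` and `update` are illustrative, not a standard API.

    import numpy as np

    def predict(x_est, P_est, F, Q, B=None, u=None):
        """Propagate the state estimate and covariance one step forward."""
        x_pred = F @ x_est
        if B is not None and u is not None:
            x_pred = x_pred + B @ u
        P_pred = F @ P_est @ F.T + Q
        return x_pred, P_pred

    def update(x_pred, P_pred, z, H, R):
        """Fold one measurement z into the a priori estimate."""
        y = z - H @ x_pred                      # innovation (pre-fit residual)
        S = H @ P_pred @ H.T + R                # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)     # optimal Kalman gain
        x_est = x_pred + K @ y                  # a posteriori state estimate
        P_est = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred  # a posteriori covariance
        return x_est, P_est

    # Typical usage per time step:
    #   x, P = predict(x, P, F, Q, B, u)
    #   x, P = update(x, P, z, H, R)

In production code one would typically replace np.linalg.inv(S) with a linear solve and use the Joseph form of the covariance update for numerical robustness (see the derivations and ''Factored form'' sections below).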


Invariants

If the model is accurate, and the values for \hat{\mathbf{x}}_{0\mid 0} and \mathbf{P}_{0\mid 0} accurately reflect the distribution of the initial state values, then the following invariants are preserved:
:\begin{align} \operatorname{E}\left[\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k}\right] &= \operatorname{E}\left[\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k-1}\right] = 0 \\ \operatorname{E}\left[\tilde{\mathbf{y}}_k\right] &= 0 \end{align}
where \operatorname{E}[\xi] is the expected value of \xi. That is, all estimates have a mean error of zero. Also:
:\begin{align} \mathbf{P}_{k\mid k} &= \operatorname{cov}\left(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k}\right) \\ \mathbf{P}_{k\mid k-1} &= \operatorname{cov}\left(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k-1}\right) \\ \mathbf{S}_k &= \operatorname{cov}\left(\tilde{\mathbf{y}}_k\right) \end{align}
so covariance matrices accurately reflect the covariance of estimates.
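
These invariants can be checked empirically. The sketch below (Python/NumPy, purely illustrative) runs many independent simulations of a scalar random-walk system and verifies that the mean estimation error is near zero and that the sample variance of the error matches the filter's reported P.

    import numpy as np

    rng = np.random.default_rng(1)
    q, r = 0.2, 1.0                       # process and measurement noise variances
    n_runs, n_steps = 5000, 30

    errors = []
    for _ in range(n_runs):
        x = rng.normal(0.0, 1.0)          # true initial state ~ N(0, 1)
        x_est, p = 0.0, 1.0               # filter initialized consistently
        for _ in range(n_steps):
            x = x + rng.normal(0.0, np.sqrt(q))   # random-walk dynamics (F = 1)
            z = x + rng.normal(0.0, np.sqrt(r))   # noisy measurement (H = 1)
            p_pred = p + q                        # predict
            k = p_pred / (p_pred + r)             # gain
            x_est = x_est + k * (z - x_est)       # update
            p = (1.0 - k) * p_pred
        errors.append(x - x_est)

    errors = np.asarray(errors)
    print(errors.mean())       # close to 0: estimates are unbiased
    print(errors.var(), p)     # sample error variance matches reported P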


Estimation of the noise covariances Q''k'' and R''k''

Practical implementation of a Kalman filter is often difficult due to the difficulty of getting a good estimate of the noise covariance matrices \mathbf{Q}_k and \mathbf{R}_k. Extensive research has been done to estimate these covariances from data. One practical method of doing this is the ''autocovariance least-squares (ALS)'' technique, which uses the time-lagged autocovariances of routine operating data to estimate the covariances. The GNU Octave and Matlab code used to calculate the noise covariance matrices using the ALS technique is available online under the GNU General Public License. The Field Kalman Filter (FKF), a Bayesian algorithm that allows simultaneous estimation of the state, parameters, and noise covariance, has been proposed. The FKF algorithm has a recursive formulation, good observed convergence, and relatively low complexity, suggesting that it may be a worthwhile alternative to the autocovariance least-squares methods. Another approach is the ''Optimized Kalman Filter'' (''OKF''), which considers the covariance matrices not as representatives of the noise, but rather as parameters aimed at achieving the most accurate state estimation. These two views coincide under the KF assumptions, but often contradict each other in real systems, so OKF's state estimation is more robust to modeling inaccuracies.
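
None of these specific algorithms is reproduced here, but the general idea of treating \mathbf{Q} and \mathbf{R} as tunable parameters can be sketched as follows. This Python/NumPy snippet fits scalar variances (q, r) by maximizing the Gaussian log-likelihood of the innovations over a coarse grid; the grid search and objective are illustrative choices of the author of this sketch, not the ALS, FKF, or OKF procedures themselves.

    import numpy as np

    def innovation_log_likelihood(zs, q, r):
        """Gaussian log-likelihood of measurements under a scalar
        random-walk Kalman filter with noise variances q and r."""
        x_est, p, ll = 0.0, 1.0, 0.0
        for z in zs:
            p_pred = p + q
            s = p_pred + r                 # innovation variance
            y = z - x_est                  # innovation
            ll += -0.5 * (np.log(2 * np.pi * s) + y**2 / s)
            k = p_pred / s
            x_est += k * y
            p = (1 - k) * p_pred
        return ll

    # Simulated data with true q = 0.2, r = 1.0.
    rng = np.random.default_rng(2)
    x, zs = 0.0, []
    for _ in range(500):
        x += rng.normal(0.0, np.sqrt(0.2))
        zs.append(x + rng.normal(0.0, 1.0))

    # Pick the (q, r) pair maximizing the innovation likelihood.
    grid = [0.05, 0.1, 0.2, 0.5, 1.0, 2.0]
    best = max(((q, r) for q in grid for r in grid),
               key=lambda qr: innovation_log_likelihood(zs, *qr))
    print(best)   # tends toward the true (0.2, 1.0) with enough data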


Optimality and performance

The Kalman filter provides an optimal state estimation in cases where a) the model matches the real system perfectly, b) the entering noise is "white" (uncorrelated), and c) the covariances of the noise are known exactly. Correlated noise can also be treated using Kalman filters. Several methods for noise covariance estimation have been proposed during past decades, including ALS, mentioned in the section above. More generally, if the model assumptions do not match the real system perfectly, then optimal state estimation is not necessarily obtained by setting \mathbf{Q}_k and \mathbf{R}_k to the covariances of the noise. Instead, in that case, the parameters \mathbf{Q}_k and \mathbf{R}_k may be set to explicitly optimize the state estimation, e.g., using standard supervised learning.

After the covariances are set, it is useful to evaluate the performance of the filter, i.e., whether it is possible to improve the state estimation quality. If the Kalman filter works optimally, the innovation sequence (the output prediction error) is white noise, so the whiteness property of the innovations measures filter performance. Several different methods can be used for this purpose. If the noise terms are distributed in a non-Gaussian manner, methods for assessing the performance of the filter estimate, using probability inequalities or large-sample theory, are known in the literature.
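
A simple way to probe this whiteness property is to compute the sample autocorrelation of the innovation sequence. The sketch below (Python/NumPy, illustrative only) does this for the scalar random-walk filter used in the earlier examples.

    import numpy as np

    rng = np.random.default_rng(3)
    q, r = 0.2, 1.0

    # Run a scalar random-walk Kalman filter and record the innovations.
    x, x_est, p, innovations = 0.0, 0.0, 1.0, []
    for _ in range(5000):
        x += rng.normal(0.0, np.sqrt(q))
        z = x + rng.normal(0.0, np.sqrt(r))
        p_pred = p + q
        y = z - x_est                      # innovation
        innovations.append(y)
        k = p_pred / (p_pred + r)
        x_est += k * y
        p = (1 - k) * p_pred

    y = np.asarray(innovations)
    y -= y.mean()
    # Normalized autocorrelation at lags 1..5: near zero when the filter
    # is well tuned; systematic structure indicates model mismatch.
    acf = [np.dot(y[:-lag], y[lag:]) / np.dot(y, y) for lag in range(1, 6)]
    print(np.round(acf, 3))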


Example application, technical

Consider a truck on frictionless, straight rails. Initially, the truck is stationary at position 0, but it is buffeted this way and that by random uncontrolled forces. We measure the position of the truck every Δ''t'' seconds, but these measurements are imprecise; we want to maintain a model of the truck's position and velocity. We show here how we derive the model from which we create our Kalman filter.

Since \mathbf{F}, \mathbf{H}, \mathbf{R}, \mathbf{Q} are constant, their time indices are dropped.

The position and velocity of the truck are described by the linear state space
:\mathbf{x}_k = \begin{bmatrix} x \\ \dot{x} \end{bmatrix}
where \dot{x} is the velocity, that is, the derivative of position with respect to time.

We assume that between the (''k''−1) and ''k'' timestep, uncontrolled forces cause a constant acceleration a_k that is normally distributed with mean 0 and standard deviation \sigma_a. From Newton's laws of motion we conclude that
:\mathbf{x}_k = \mathbf{F} \mathbf{x}_{k-1} + \mathbf{G} a_k
(there is no \mathbf{B}\mathbf{u} term since there are no known control inputs; instead, a_k is the effect of an unknown input and \mathbf{G} applies that effect to the state vector) where
:\mathbf{F} = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix}, \qquad \mathbf{G} = \begin{bmatrix} \frac{1}{2}\Delta t^2 \\ \Delta t \end{bmatrix}
so that
:\mathbf{x}_k = \mathbf{F} \mathbf{x}_{k-1} + \mathbf{w}_k
where
:\mathbf{w}_k \sim N(0, \mathbf{Q}), \qquad \mathbf{Q} = \mathbf{G}\mathbf{G}^\textsf{T}\sigma_a^2 = \begin{bmatrix} \frac{1}{4}\Delta t^4 & \frac{1}{2}\Delta t^3 \\ \frac{1}{2}\Delta t^3 & \Delta t^2 \end{bmatrix}\sigma_a^2.
The matrix \mathbf{Q} is not full rank (it is of rank one if \Delta t \neq 0). Hence, the distribution N(0, \mathbf{Q}) is not absolutely continuous and has no probability density function. Another way to express this, avoiding explicit degenerate distributions, is
:\mathbf{w}_k \sim \mathbf{G} \cdot N\left(0, \sigma_a^2\right).

At each time step, a noisy measurement of the true position of the truck is made. Let us suppose the measurement noise v_k is also normally distributed, with mean 0 and standard deviation \sigma_z:
:\mathbf{z}_k = \mathbf{H} \mathbf{x}_k + \mathbf{v}_k
where
:\mathbf{H} = \begin{bmatrix} 1 & 0 \end{bmatrix}, \qquad \mathbf{R} = \mathrm{E}\left[\mathbf{v}_k \mathbf{v}_k^\textsf{T}\right] = \begin{bmatrix} \sigma_z^2 \end{bmatrix}.

If we know the initial starting state of the truck with perfect precision, we initialize
:\hat{\mathbf{x}}_{0\mid 0} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}
and, to tell the filter that we know the exact position and velocity, we give it a zero covariance matrix:
:\mathbf{P}_{0\mid 0} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}
If the initial position and velocity are not known perfectly, the covariance matrix should be initialized with suitable variances on its diagonal:
:\mathbf{P}_{0\mid 0} = \begin{bmatrix} \sigma_x^2 & 0 \\ 0 & \sigma_{\dot{x}}^2 \end{bmatrix}
The filter will then prefer the information from the first measurements over the information already in the model.
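
An end-to-end sketch of this truck model in Python/NumPy follows; the values of Δt, σ_a, and σ_z are arbitrary illustrative choices, not prescribed by the derivation.

    import numpy as np

    rng = np.random.default_rng(7)
    dt, sigma_a, sigma_z = 1.0, 0.2, 3.0

    F = np.array([[1.0, dt], [0.0, 1.0]])
    G = np.array([[0.5 * dt**2], [dt]])
    Q = G @ G.T * sigma_a**2
    H = np.array([[1.0, 0.0]])
    R = np.array([[sigma_z**2]])

    x_true = np.array([0.0, 0.0])   # truck starts at rest at position 0
    x_est = np.array([0.0, 0.0])    # we know this exactly...
    P = np.zeros((2, 2))            # ...so the initial covariance is zero

    for _ in range(100):
        # Truth: a random acceleration buffets the truck.
        a = rng.normal(0.0, sigma_a)
        x_true = F @ x_true + (G * a).ravel()
        z = H @ x_true + rng.normal(0.0, sigma_z, size=1)

        # Predict.
        x_est = F @ x_est
        P = F @ P @ F.T + Q
        # Update.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x_est = x_est + K @ (z - H @ x_est)
        P = (np.eye(2) - K @ H) @ P

    print(x_true, x_est)   # estimated position and velocity track the truth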


Asymptotic form

For simplicity, assume that the control input \mathbf{u}_k = \mathbf{0}. Then the Kalman filter may be written:
:\hat{\mathbf{x}}_{k\mid k} = \mathbf{F}_k \hat{\mathbf{x}}_{k-1\mid k-1} + \mathbf{K}_k\left[\mathbf{z}_k - \mathbf{H}_k \mathbf{F}_k\hat{\mathbf{x}}_{k-1\mid k-1}\right]
A similar equation holds if we include a non-zero control input. Gain matrices \mathbf{K}_k and covariance matrices \mathbf{P}_{k\mid k} evolve independently of the measurements \mathbf{z}_k. From above, the four equations needed for updating the matrices are as follows:
:\begin{align} \mathbf{P}_{k\mid k-1} &= \mathbf{F}_k \mathbf{P}_{k-1\mid k-1} \mathbf{F}_k^\textsf{T} + \mathbf{Q}_k, \\ \mathbf{S}_k &= \mathbf{H}_k \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\textsf{T} + \mathbf{R}_k, \\ \mathbf{K}_k &= \mathbf{P}_{k\mid k-1}\mathbf{H}_k^\textsf{T} \mathbf{S}_k^{-1}, \\ \mathbf{P}_{k\mid k} &= \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right) \mathbf{P}_{k\mid k-1}. \end{align}
Since these depend only on the model, and not the measurements, they may be computed offline. Convergence of the gain matrices \mathbf{K}_k to an asymptotic matrix \mathbf{K}_\infty holds under conditions established in Walrand and Dimakis. If the \mathbf{P}_{k\mid k} series converges, then it converges exponentially to an asymptotic \mathbf{P}_\infty, assuming non-zero plant noise. Recent analysis has shown that the ''rate and nature'' of this convergence can involve multiple coequal modes, including oscillatory components, depending on the eigenstructure of the Jacobian of the above Riccati map evaluated at \mathbf{P}_\infty. For the moving truck example described above, with \Delta t = 1 and \sigma_a^2 = \sigma_z^2 = \sigma_x^2 = \sigma_{\dot{x}}^2 = 1, simulation shows convergence in 10 iterations.

Using the asymptotic gain, and assuming \mathbf{H}_k and \mathbf{F}_k are independent of k, the Kalman filter becomes a linear time-invariant filter:
:\hat{\mathbf{x}}_{k\mid k} = \mathbf{F} \hat{\mathbf{x}}_{k-1\mid k-1} + \mathbf{K}_\infty \left[\mathbf{z}_k - \mathbf{H}\mathbf{F} \hat{\mathbf{x}}_{k-1\mid k-1}\right]
The asymptotic gain \mathbf{K}_\infty, if it exists, can be computed by first solving the following discrete Riccati equation for the asymptotic state covariance \mathbf{P}_\infty:
:\mathbf{P}_\infty = \mathbf{F}\left(\mathbf{P}_\infty - \mathbf{P}_\infty \mathbf{H}^\textsf{T} \left(\mathbf{H}\mathbf{P}_\infty\mathbf{H}^\textsf{T} + \mathbf{R}\right)^{-1} \mathbf{H}\mathbf{P}_\infty\right) \mathbf{F}^\textsf{T} + \mathbf{Q}.
The asymptotic gain is then computed as before:
:\mathbf{K}_\infty = \mathbf{P}_\infty \mathbf{H}^\textsf{T} \left(\mathbf{R} + \mathbf{H} \mathbf{P}_\infty \mathbf{H}^\textsf{T}\right)^{-1}.
Additionally, a form of the asymptotic Kalman filter more commonly used in control theory is given by
:\hat{\mathbf{x}}_{k+1\mid k} = \mathbf{F}\hat{\mathbf{x}}_{k\mid k-1} + \overline{\mathbf{K}}_\infty \left(\mathbf{z}_k - \mathbf{H}\hat{\mathbf{x}}_{k\mid k-1}\right)
where
:\overline{\mathbf{K}}_\infty = \mathbf{F}\mathbf{P}_\infty \mathbf{H}^\textsf{T} \left(\mathbf{R} + \mathbf{H} \mathbf{P}_\infty \mathbf{H}^\textsf{T}\right)^{-1}.
This leads to an estimator of the form
:\hat{\mathbf{x}}_{k+1\mid k} = \left(\mathbf{F} - \overline{\mathbf{K}}_\infty \mathbf{H}\right)\hat{\mathbf{x}}_{k\mid k-1} + \overline{\mathbf{K}}_\infty \mathbf{z}_k.
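
In practice, \mathbf{P}_\infty is often obtained simply by iterating the four matrix recursions to a fixed point, as in the hedged Python/NumPy sketch below; a production implementation might instead call a dedicated solver such as scipy.linalg.solve_discrete_are. The function name and tolerances here are illustrative.

    import numpy as np

    def asymptotic_gain(F, H, Q, R, tol=1e-12, max_iter=10000):
        """Iterate the Riccati recursion until the predicted (a priori)
        covariance converges, then return (K_infinity, P_infinity)."""
        n = F.shape[0]
        P = np.eye(n)                       # any positive-definite start
        for _ in range(max_iter):
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            P_post = (np.eye(n) - K @ H) @ P
            P_next = F @ P_post @ F.T + Q   # next a priori covariance
            if np.max(np.abs(P_next - P)) < tol:
                return K, P_next
            P = P_next
        raise RuntimeError("Riccati iteration did not converge")

    # Moving-truck example with dt = 1 and all variances equal to 1.
    F = np.array([[1.0, 1.0], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    G = np.array([[0.5], [1.0]])
    Q = G @ G.T
    R = np.array([[1.0]])
    K_inf, P_inf = asymptotic_gain(F, H, Q, R)
    print(K_inf)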


Derivations

The Kalman filter can be derived as a generalized least squares method operating on previous data.


Deriving the ''a posteriori'' estimate covariance matrix

Starting with our invariant on the error covariance \mathbf{P}_{k\mid k} as above,
:\mathbf{P}_{k\mid k} = \operatorname{cov}\left(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k}\right)
substitute in the definition of \hat{\mathbf{x}}_{k\mid k}
:\mathbf{P}_{k\mid k} = \operatorname{cov}\left[\mathbf{x}_k - \left(\hat{\mathbf{x}}_{k\mid k-1} + \mathbf{K}_k\tilde{\mathbf{y}}_k\right)\right]
and substitute \tilde{\mathbf{y}}_k
:\mathbf{P}_{k\mid k} = \operatorname{cov}\left[\mathbf{x}_k - \left(\hat{\mathbf{x}}_{k\mid k-1} + \mathbf{K}_k\left(\mathbf{z}_k - \mathbf{H}_k\hat{\mathbf{x}}_{k\mid k-1}\right)\right)\right]
and \mathbf{z}_k
:\mathbf{P}_{k\mid k} = \operatorname{cov}\left[\mathbf{x}_k - \left(\hat{\mathbf{x}}_{k\mid k-1} + \mathbf{K}_k\left(\mathbf{H}_k\mathbf{x}_k + \mathbf{v}_k - \mathbf{H}_k\hat{\mathbf{x}}_{k\mid k-1}\right)\right)\right]
and by collecting the error vectors we get
:\mathbf{P}_{k\mid k} = \operatorname{cov}\left[\left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right)\left(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k-1}\right) - \mathbf{K}_k \mathbf{v}_k\right]
Since the measurement error \mathbf{v}_k is uncorrelated with the other terms, this becomes
:\mathbf{P}_{k\mid k} = \operatorname{cov}\left[\left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right)\left(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k-1}\right)\right] + \operatorname{cov}\left[\mathbf{K}_k \mathbf{v}_k\right]
and by the properties of vector covariance this becomes
:\mathbf{P}_{k\mid k} = \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right)\operatorname{cov}\left(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k-1}\right)\left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right)^\textsf{T} + \mathbf{K}_k\operatorname{cov}\left(\mathbf{v}_k\right)\mathbf{K}_k^\textsf{T}
which, using our invariant on \mathbf{P}_{k\mid k-1} and the definition of \mathbf{R}_k, becomes
:\mathbf{P}_{k\mid k} = \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right) \mathbf{P}_{k\mid k-1} \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right)^\textsf{T} + \mathbf{K}_k \mathbf{R}_k \mathbf{K}_k^\textsf{T}
This formula (sometimes known as the Joseph form of the covariance update equation) is valid for any value of \mathbf{K}_k. It turns out that if \mathbf{K}_k is the optimal Kalman gain, this can be simplified further, as shown below.


Kalman gain derivation

The Kalman filter is a minimum mean-square error (MMSE) estimator. The error in the ''a posteriori'' state estimation is
:\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k}
We seek to minimize the expected value of the square of the magnitude of this vector, \operatorname{E}\left[\left\|\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k}\right\|^2\right]. This is equivalent to minimizing the trace of the ''a posteriori'' estimate covariance matrix \mathbf{P}_{k\mid k}. By expanding out the terms in the equation above and collecting, we get:
:\begin{align} \mathbf{P}_{k\mid k} &= \mathbf{P}_{k\mid k-1} - \mathbf{K}_k \mathbf{H}_k \mathbf{P}_{k\mid k-1} - \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\textsf{T} \mathbf{K}_k^\textsf{T} + \mathbf{K}_k \left(\mathbf{H}_k \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\textsf{T} + \mathbf{R}_k\right) \mathbf{K}_k^\textsf{T} \\ &= \mathbf{P}_{k\mid k-1} - \mathbf{K}_k \mathbf{H}_k \mathbf{P}_{k\mid k-1} - \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\textsf{T} \mathbf{K}_k^\textsf{T} + \mathbf{K}_k \mathbf{S}_k \mathbf{K}_k^\textsf{T} \end{align}
The trace is minimized when its matrix derivative with respect to the gain matrix is zero. Using the gradient matrix rules and the symmetry of the matrices involved, we find that
:\frac{\partial \operatorname{tr}\left(\mathbf{P}_{k\mid k}\right)}{\partial \mathbf{K}_k} = -2\left(\mathbf{H}_k \mathbf{P}_{k\mid k-1}\right)^\textsf{T} + 2\mathbf{K}_k \mathbf{S}_k = 0.
Solving this for \mathbf{K}_k yields the Kalman gain:
:\begin{align} \mathbf{K}_k \mathbf{S}_k &= \left(\mathbf{H}_k \mathbf{P}_{k\mid k-1}\right)^\textsf{T} = \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\textsf{T} \\ \Rightarrow \mathbf{K}_k &= \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\textsf{T} \mathbf{S}_k^{-1} \end{align}
This gain, which is known as the ''optimal Kalman gain'', is the one that yields MMSE estimates when used.
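
This optimality is easy to verify numerically: perturbing the gain away from its optimal value can only increase the trace of the Joseph-form covariance. A hedged Python/NumPy sketch (the specific matrices are arbitrary illustrative choices):

    import numpy as np

    rng = np.random.default_rng(11)

    # An arbitrary symmetric positive-definite a priori covariance,
    # observation model, and measurement-noise covariance.
    A = rng.normal(size=(2, 2))
    P_prior = A @ A.T + np.eye(2)
    H = np.array([[1.0, 0.0]])
    R = np.array([[0.5]])

    S = H @ P_prior @ H.T + R
    K_opt = P_prior @ H.T @ np.linalg.inv(S)

    def joseph_trace(K):
        """Trace of the a posteriori covariance, valid for any gain K."""
        I = np.eye(2)
        P_post = (I - K @ H) @ P_prior @ (I - K @ H).T + K @ R @ K.T
        return np.trace(P_post)

    # Any perturbation of the optimal gain increases the trace.
    for _ in range(5):
        K_bad = K_opt + 0.1 * rng.normal(size=K_opt.shape)
        assert joseph_trace(K_bad) >= joseph_trace(K_opt)
    print(joseph_trace(K_opt))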


Simplification of the ''a posteriori'' error covariance formula

The formula used to calculate the ''a posteriori'' error covariance can be simplified when the Kalman gain equals the optimal value derived above. Multiplying both sides of our Kalman gain formula on the right by \mathbf{S}_k\mathbf{K}_k^\textsf{T}, it follows that
:\mathbf{K}_k \mathbf{S}_k \mathbf{K}_k^\textsf{T} = \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\textsf{T} \mathbf{K}_k^\textsf{T}
Referring back to our expanded formula for the ''a posteriori'' error covariance,
:\mathbf{P}_{k\mid k} = \mathbf{P}_{k\mid k-1} - \mathbf{K}_k \mathbf{H}_k \mathbf{P}_{k\mid k-1} - \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\textsf{T} \mathbf{K}_k^\textsf{T} + \mathbf{K}_k \mathbf{S}_k \mathbf{K}_k^\textsf{T}
we find the last two terms cancel out, giving
:\mathbf{P}_{k\mid k} = \mathbf{P}_{k\mid k-1} - \mathbf{K}_k \mathbf{H}_k \mathbf{P}_{k\mid k-1} = \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right) \mathbf{P}_{k\mid k-1}
This formula is computationally cheaper and thus nearly always used in practice, but it is only correct for the optimal gain. If arithmetic precision is unusually low, causing problems with numerical stability, or if a non-optimal Kalman gain is deliberately used, this simplification cannot be applied; the ''a posteriori'' error covariance formula as derived above (the Joseph form) must be used.
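
The cancellation can be confirmed numerically: with the optimal gain, the Joseph form and the simplified form agree to machine precision. A short Python/NumPy sketch, reusing arbitrary illustrative matrices:

    import numpy as np

    rng = np.random.default_rng(13)
    A = rng.normal(size=(2, 2))
    P_prior = A @ A.T + np.eye(2)        # arbitrary SPD a priori covariance
    H = np.array([[1.0, 0.0]])
    R = np.array([[0.5]])
    I = np.eye(2)

    S = H @ P_prior @ H.T + R
    K = P_prior @ H.T @ np.linalg.inv(S)  # optimal gain

    P_joseph = (I - K @ H) @ P_prior @ (I - K @ H).T + K @ R @ K.T
    P_simple = (I - K @ H) @ P_prior
    print(np.allclose(P_joseph, P_simple))   # True for the optimal gain only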


Sensitivity analysis

The Kalman filtering equations provide an estimate of the state \hat{\mathbf{x}}_{k\mid k} and its error covariance \mathbf{P}_{k\mid k} recursively. The estimate and its quality depend on the system parameters and the noise statistics fed as inputs to the estimator. This section analyzes the effect of uncertainties in the statistical inputs to the filter. In the absence of reliable statistics or the true values of the noise covariance matrices \mathbf{Q}_k and \mathbf{R}_k, the expression
:\mathbf{P}_{k\mid k} = \left(\mathbf{I} - \mathbf{K}_k\mathbf{H}_k\right)\mathbf{P}_{k\mid k-1}\left(\mathbf{I} - \mathbf{K}_k\mathbf{H}_k\right)^\textsf{T} + \mathbf{K}_k\mathbf{R}_k\mathbf{K}_k^\textsf{T}
no longer provides the actual error covariance. In other words, \mathbf{P}_{k\mid k} \neq E\left[\left(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k}\right)\left(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k}\right)^\textsf{T}\right]. In most real-time applications, the covariance matrices that are used in designing the Kalman filter are different from the actual (true) noise covariance matrices. This sensitivity analysis describes the behavior of the estimation error covariance when the noise covariances as well as the system matrices \mathbf{F}_k and \mathbf{H}_k that are fed as inputs to the filter are incorrect. Thus, the sensitivity analysis describes the robustness (or sensitivity) of the estimator to misspecified statistical and parametric inputs.

This discussion is limited to the error sensitivity analysis for the case of statistical uncertainties. Here the actual noise covariances are denoted by \mathbf{Q}^a_k and \mathbf{R}^a_k respectively, whereas the design values used in the estimator are \mathbf{Q}_k and \mathbf{R}_k respectively. The actual error covariance is denoted by \mathbf{P}_{k\mid k}^a, and \mathbf{P}_{k\mid k} as computed by the Kalman filter is referred to as the Riccati variable. When \mathbf{Q}_k \equiv \mathbf{Q}^a_k and \mathbf{R}_k \equiv \mathbf{R}^a_k, this means that \mathbf{P}_{k\mid k} = \mathbf{P}_{k\mid k}^a. While computing the actual error covariance using \mathbf{P}_{k\mid k}^a = E\left[\left(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k}\right)\left(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k}\right)^\textsf{T}\right], substituting for \hat{\mathbf{x}}_{k\mid k} and using the facts that E\left[\mathbf{w}_k\mathbf{w}_k^\textsf{T}\right] = \mathbf{Q}_k^a and E\left[\mathbf{v}_k\mathbf{v}_k^\textsf{T}\right] = \mathbf{R}_k^a results in the following recursive equations for \mathbf{P}_{k\mid k}^a:
:\mathbf{P}_{k\mid k-1}^a = \mathbf{F}_k\mathbf{P}_{k-1\mid k-1}^a \mathbf{F}_k^\textsf{T} + \mathbf{Q}_k^a
and
:\mathbf{P}_{k\mid k}^a = \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right)\mathbf{P}_{k\mid k-1}^a \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right)^\textsf{T} + \mathbf{K}_k \mathbf{R}_k^a \mathbf{K}_k^\textsf{T}
While computing \mathbf{P}_{k\mid k}, the filter implicitly assumes by design that E\left[\mathbf{w}_k\mathbf{w}_k^\textsf{T}\right] = \mathbf{Q}_k and E\left[\mathbf{v}_k\mathbf{v}_k^\textsf{T}\right] = \mathbf{R}_k. The recursive expressions for \mathbf{P}_{k\mid k}^a and \mathbf{P}_{k\mid k} are identical except for the presence of \mathbf{Q}_k^a and \mathbf{R}_k^a in place of the design values \mathbf{Q}_k and \mathbf{R}_k respectively. Research has been done to analyze the robustness of Kalman filter systems.


Factored form

One problem with the Kalman filter is its
numerical stability In the mathematical subfield of numerical analysis, numerical stability is a generally desirable property of numerical algorithms. The precise definition of stability depends on the context: one important context is numerical linear algebra, and ...
. If the process noise covariance Q''k'' is small, round-off error often causes a small positive eigenvalue of the state covariance matrix P to be computed as a negative number. This renders the numerical representation of P indefinite, while its true form is
positive-definite In mathematics, positive definiteness is a property of any object to which a bilinear form or a sesquilinear form may be naturally associated, which is positive-definite. See, in particular: * Positive-definite bilinear form * Positive-definite ...
. Positive definite matrices have the property that they have a factorization into the product of a
non-singular Singular may refer to: * Singular, the grammatical number that denotes a unit quantity, as opposed to the plural and other forms * Singular or sounder, a group of boar, see List of animal names * Singular (band), a Thai jazz pop duo *'' Singular ...
, lower-triangular matrix S and its
transpose In linear algebra, the transpose of a Matrix (mathematics), matrix is an operator which flips a matrix over its diagonal; that is, it switches the row and column indices of the matrix by producing another matrix, often denoted by (among other ...
: P = S·ST . The factor S can be computed efficiently using the Cholesky factorization algorithm. This product form of the covariance matrix P is guaranteed to be symmetric, and for all 1 <= k <= n, the k-th diagonal element Pkk is equal to the
euclidean norm Euclidean space is the fundamental space of geometry, intended to represent physical space. Originally, in Euclid's ''Elements'', it was the three-dimensional space of Euclidean geometry, but in modern mathematics there are ''Euclidean spaces'' ...
of the k-th row of S, which is necessarily positive. An equivalent form, which avoids many of the
square root In mathematics, a square root of a number is a number such that y^2 = x; in other words, a number whose ''square'' (the result of multiplying the number by itself, or y \cdot y) is . For example, 4 and −4 are square roots of 16 because 4 ...
operations involved in the Cholesky factorization algorithm, yet preserves the desirable numerical properties, is the U-D decomposition form, P = U·D·UT, where U is a unit triangular matrix (with unit diagonal), and D is a diagonal matrix. Between the two, the U-D factorization uses the same amount of storage, and somewhat less computation, and is the most commonly used triangular factorization. (Early literature on the relative efficiency is somewhat misleading, as it assumed that square roots were much more time-consuming than divisions, while on 21st-century computers they are only slightly more expensive.) Efficient algorithms for the Kalman prediction and update steps in the factored form were developed by G. J. Bierman and C. L. Thornton. The L·D·LT decomposition of the innovation covariance matrix Sk is the basis for another type of numerically efficient and robust square root filter. The algorithm starts with the LU decomposition as implemented in the Linear Algebra PACKage (
LAPACK LAPACK ("Linear Algebra Package") is a standard software library for numerical linear algebra. It provides routines for solving systems of linear equations and linear least squares, eigenvalue problems, and singular value decomposition. It als ...
). These results are further factored into the L·D·LT structure with methods given by Golub and Van Loan (algorithm 4.1.2) for a symmetric nonsingular matrix. Any singular covariance matrix is pivoted so that the first diagonal partition is nonsingular and
well-conditioned In numerical analysis, the condition number of a function measures how much the output value of the function can change for a small change in the input argument. This is used to measure how sensitive a function is to changes or errors in the inpu ...
. The pivoting algorithm must retain any portion of the innovation covariance matrix directly corresponding to observed state-variables Hk·xk, k-1 that are associated with auxiliary observations in yk. The l·d·lt square-root filter requires
orthogonalization In linear algebra, orthogonalization is the process of finding a set of orthogonal vectors that span a particular subspace. Formally, starting with a linearly independent set of vectors in an inner product space (most commonly the Euclidean s ...
of the observation vector. This may be done with the inverse square-root of the covariance matrix for the auxiliary variables using Method 2 in Higham (2002, p. 263).


Parallel form

The Kalman filter is efficient for sequential data processing on
central processing units A central processing unit (CPU), also called a central processor, main processor, or just processor, is the primary processor in a given computer. Its electronic circuitry executes instructions of a computer program, such as arithmetic, log ...
(CPUs), but in its original form it is inefficient on parallel architectures such as
graphics processing units A graphics processing unit (GPU) is a specialized electronic circuit designed for digital image processing and to accelerate computer graphics, being present either as a discrete video card or embedded on motherboards, mobile phones, personal co ...
(GPUs). It is however possible to express the filter-update routine in terms of an associative operator using the formulation in Särkkä and García-Fernández (2021). The filter solution can then be retrieved by the use of a
prefix sum In computer science, the prefix sum, cumulative sum, inclusive scan, or simply scan of a sequence of numbers is a second sequence of numbers , the summation, sums of Prefix (computer science), prefixes (running totals) of the input sequence: : : : ...
algorithm which can be efficiently implemented on GPU. This reduces the
computational complexity In computer science, the computational complexity or simply complexity of an algorithm is the amount of resources required to run it. Particular focus is given to computation time (generally measured by the number of needed elementary operations ...
from O(N) in the number of time steps to O(\log(N)).


Relationship to recursive Bayesian estimation

The Kalman filter can be presented as one of the simplest dynamic Bayesian networks. The Kalman filter calculates estimates of the true values of states recursively over time using incoming measurements and a mathematical process model. Similarly,
recursive Bayesian estimation In probability theory, statistics, and machine learning, recursive Bayesian estimation, also known as a Bayes filter, is a general probabilistic approach for estimating an unknown probability density function (PDF) recursively over time using in ...
calculates
estimates In the Westminster system of government, the ''Estimates'' are an outline of government spending for the following fiscal year presented by the Cabinet (government), cabinet to parliament. The Estimates are drawn up by bureaucrats in the finance ...
of an unknown
probability density function In probability theory, a probability density function (PDF), density function, or density of an absolutely continuous random variable, is a Function (mathematics), function whose value at any given sample (or point) in the sample space (the s ...
(PDF) recursively over time using incoming measurements and a mathematical process model. In recursive Bayesian estimation, the true state is assumed to be an unobserved
Markov process In probability theory and statistics, a Markov chain or Markov process is a stochastic process describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. Informally, ...
, and the measurements are the observed states of a hidden Markov model (HMM). Because of the Markov assumption, the true state is conditionally independent of all earlier states given the immediately previous state. :p(\mathbf_k\mid \mathbf_0,\dots,\mathbf_) = p(\mathbf_k\mid \mathbf_) Similarly, the measurement at the ''k''-th timestep is dependent only upon the current state and is conditionally independent of all other states given the current state. :p(\mathbf_k\mid\mathbf_0,\dots,\mathbf_k) = p(\mathbf_k\mid \mathbf_k ) Using these assumptions the probability distribution over all states of the hidden Markov model can be written simply as: :p\left(\mathbf_0, \dots, \mathbf_k, \mathbf_1, \dots, \mathbf_k\right) = p\left(\mathbf_0\right)\prod_^k p\left(\mathbf_i \mid \mathbf_i\right)p\left(\mathbf_i \mid \mathbf_\right) However, when a Kalman filter is used to estimate the state x, the probability distribution of interest is that associated with the current states conditioned on the measurements up to the current timestep. This is achieved by marginalizing out the previous states and dividing by the probability of the measurement set. This results in the ''predict'' and ''update'' phases of the Kalman filter written probabilistically. The probability distribution associated with the predicted state is the sum (integral) of the products of the probability distribution associated with the transition from the (''k'' âˆ’ 1)-th timestep to the ''k''-th and the probability distribution associated with the previous state, over all possible x_. :p\left(\mathbf_k \mid \mathbf_\right) = \int p\left(\mathbf_k \mid \mathbf_\right) p\left(\mathbf_ \mid \mathbf_\right)\, d\mathbf_ The measurement set up to time ''t'' is :\mathbf_t = \left\ The probability distribution of the update is proportional to the product of the measurement likelihood and the predicted state. :p\left(\mathbf_k \mid \mathbf_k\right) = \frac The denominator :p\left(\mathbf_k \mid \mathbf_\right) = \int p\left(\mathbf_k \mid \mathbf_k\right) p\left(\mathbf_k \mid \mathbf_\right)\, d\mathbf_k is a normalization term. The remaining probability density functions are :\begin p\left(\mathbf_k \mid \mathbf_\right) &= \mathcal\left(\mathbf_k\mathbf_, \mathbf_k\right) \\ p\left(\mathbf_k \mid \mathbf_k\right) &= \mathcal\left(\mathbf_k\mathbf_k, \mathbf_k\right) \\ p\left(\mathbf_ \mid \mathbf_\right) &= \mathcal\left(\hat_, \mathbf_\right) \end The PDF at the previous timestep is assumed inductively to be the estimated state and covariance. This is justified because, as an optimal estimator, the Kalman filter makes best use of the measurements, therefore the PDF for \mathbf_k given the measurements \mathbf_k is the Kalman filter estimate.


Marginal likelihood

Related to the recursive Bayesian interpretation described above, the Kalman filter can be viewed as a
generative model In statistical classification, two main approaches are called the generative approach and the discriminative approach. These compute classifiers by different approaches, differing in the degree of statistical modelling. Terminology is inconsiste ...
, i.e., a process for ''generating'' a stream of random observations z = (z0, z1, z2, ...). Specifically, the process is # Sample a hidden state \mathbf_0 from the Gaussian prior distribution p\left(\mathbf_0\right) = \mathcal\left(\hat_, \mathbf_\right). # Sample an observation \mathbf_0 from the observation model p\left(\mathbf_0 \mid \mathbf_0\right) = \mathcal\left(\mathbf_0\mathbf_0, \mathbf_0\right). # For k = 1, 2, 3, \ldots, do ## Sample the next hidden state \mathbf_k from the transition model p\left(\mathbf_k \mid \mathbf_\right) = \mathcal\left(\mathbf_k \mathbf_ + \mathbf_k\mathbf_k, \mathbf_k\right). ## Sample an observation \mathbf_k from the observation model p\left(\mathbf_k \mid \mathbf_k\right) = \mathcal\left(\mathbf_k\mathbf_k, \mathbf_k\right). This process has identical structure to the
hidden Markov model A hidden Markov model (HMM) is a Markov model in which the observations are dependent on a latent (or ''hidden'') Markov process (referred to as X). An HMM requires that there be an observable process Y whose outcomes depend on the outcomes of X ...
, except that the discrete state and observations are replaced with continuous variables sampled from Gaussian distributions. In some applications, it is useful to compute the ''probability'' that a Kalman filter with a given set of parameters (prior distribution, transition and observation models, and control inputs) would generate a particular observed signal. This probability is known as the
marginal likelihood A marginal likelihood is a likelihood function that has been integrated over the parameter space. In Bayesian statistics, it represents the probability of generating the observed sample for all possible values of the parameters; it can be under ...
because it integrates over ("marginalizes out") the values of the hidden state variables, so it can be computed using only the observed signal. The marginal likelihood can be useful to evaluate different parameter choices, or to compare the Kalman filter against other models using Bayesian model comparison. It is straightforward to compute the marginal likelihood as a side effect of the recursive filtering computation. By the
chain rule In calculus, the chain rule is a formula that expresses the derivative of the Function composition, composition of two differentiable functions and in terms of the derivatives of and . More precisely, if h=f\circ g is the function such that h ...
, the likelihood can be factored as the product of the probability of each observation given previous observations, :p(\mathbf) = \prod_^T p\left(\mathbf_k \mid \mathbf_, \ldots, \mathbf_0\right), and because the Kalman filter describes a Markov process, all relevant information from previous observations is contained in the current state estimate \hat_, \mathbf_. Thus the marginal likelihood is given by :\begin p(\mathbf) &= \prod_^T \int p\left(\mathbf_k \mid \mathbf_k\right) p\left(\mathbf_k \mid \mathbf_, \ldots,\mathbf_0\right) d\mathbf_k\\ &= \prod_^T \int \mathcal\left(\mathbf_k; \mathbf_k\mathbf_k, \mathbf_k\right) \mathcal\left(\mathbf_k; \hat_, \mathbf_\right) d\mathbf_k\\ &= \prod_^T \mathcal\left(\mathbf_k; \mathbf_k\hat_, \mathbf_k + \mathbf_k \mathbf_ \mathbf_k^\textsf\right)\\ &= \prod_^T \mathcal\left(\mathbf_k; \mathbf_k\hat_, \mathbf_k\right), \end i.e., a product of Gaussian densities, each corresponding to the density of one observation z''k'' under the current filtering distribution \mathbf_k\hat_, \mathbf_k. This can easily be computed as a simple recursive update; however, to avoid numeric underflow, in a practical implementation it is usually desirable to compute the ''log'' marginal likelihood \ell = \log p(\mathbf) instead. Adopting the convention \ell^ = 0, this can be done via the recursive update rule :\ell^ = \ell^ - \frac \left(\tilde_k^\textsf \mathbf^_k \tilde_k + \log \left, \mathbf_k\ + d_y\log 2\pi \right), where d_y is the dimension of the measurement vector. An important application where such a (log) likelihood of the observations (given the filter parameters) is used is multi-target tracking. For example, consider an object tracking scenario where a stream of observations is the input, however, it is unknown how many objects are in the scene (or, the number of objects is known but is greater than one). For such a scenario, it can be unknown apriori which observations/measurements were generated by which object. A multiple hypothesis tracker (MHT) typically will form different track association hypotheses, where each hypothesis can be considered as a Kalman filter (for the linear Gaussian case) with a specific set of parameters associated with the hypothesized object. Thus, it is important to compute the likelihood of the observations for the different hypotheses under consideration, such that the most-likely one can be found.


Information filter

In cases where the dimension of the observation vector y is bigger than the dimension of the state space vector x, the information filter can avoid the inversion of a bigger matrix in the Kalman gain calculation at the price of inverting a smaller matrix in the prediction step, thus saving computing time. Additionally, the information filter allows for system information initialization according to , which would not be possible for the regular Kalman filter. In the information filter, or inverse covariance filter, the estimated covariance and estimated state are replaced by the information matrix and
information Information is an Abstraction, abstract concept that refers to something which has the power Communication, to inform. At the most fundamental level, it pertains to the Interpretation (philosophy), interpretation (perhaps Interpretation (log ...
vector respectively. These are defined as: :\begin \mathbf_ &= \mathbf_^ \\ \hat_ &= \mathbf_^\hat_ \end Similarly the predicted covariance and state have equivalent information forms, defined as: :\begin \mathbf_ &= \mathbf_^ \\ \hat_ &= \mathbf_^\hat_ \end and the measurement covariance and measurement vector, which are defined as: :\begin \mathbf_k &= \mathbf_k^\textsf \mathbf_k^ \mathbf_k \\ \mathbf_k &= \mathbf_k^\textsf \mathbf_k^ \mathbf_k \end The information update now becomes a trivial sum. :\begin \mathbf_ &= \mathbf_ + \mathbf_k \\ \hat_ &= \hat_ + \mathbf_k \end The main advantage of the information filter is that ''N'' measurements can be filtered at each time step simply by summing their information matrices and vectors. :\begin \mathbf_ &= \mathbf_ + \sum_^N \mathbf_ \\ \hat_ &= \hat_ + \sum_^N \mathbf_ \end To predict the information filter the information matrix and vector can be converted back to their state space equivalents, or alternatively the information space prediction can be used. :\begin \mathbf_k &= \left mathbf_k^\right\textsf \mathbf_ \mathbf_k^ \\ \mathbf_k &= \mathbf_k \left mathbf_k + \mathbf_k^\right \\ \mathbf_k &= \mathbf - \mathbf_k \\ \mathbf_ &= \mathbf_k \mathbf_k + \mathbf_k \mathbf_k^ \mathbf_k^\textsf \\ \hat_ &= \mathbf_k \left mathbf_k^\right\textsf \hat_ \end


Fixed-lag smoother

The optimal fixed-lag smoother provides the optimal estimate of \hat_ for a given fixed-lag N using the measurements from \mathbf_1 to \mathbf_k. It can be derived using the previous theory via an augmented state, and the main equation of the filter is the following: : \begin \hat_ \\ \hat_ \\ \vdots \\ \hat_ \\ \end = \begin \mathbf \\ 0 \\ \vdots \\ 0 \\ \end \hat_ + \begin 0 & \ldots & 0 \\ \mathbf & 0 & \vdots \\ \vdots & \ddots & \vdots \\ 0 & \ldots & \mathbf \\ \end \begin \hat_ \\ \hat_ \\ \vdots \\ \hat_ \\ \end + \begin \mathbf^ \\ \mathbf^ \\ \vdots \\ \mathbf^ \\ \end \mathbf_ where: * \hat_ is estimated via a standard Kalman filter; * \mathbf_ = \mathbf_t - \mathbf\hat_ is the innovation produced considering the estimate of the standard Kalman filter; * the various \hat_ with i = 1, \ldots, N-1 are new variables; i.e., they do not appear in the standard Kalman filter; * the gains are computed via the following scheme: *: \mathbf^ = \mathbf^ \mathbf^\textsf \left \mathbf \mathbf \mathbf^\textsf + \mathbf \right :and :: \mathbf^ = \mathbf \left \left( \mathbf - \mathbf \mathbf \right)^\textsf \righti :where \mathbf and \mathbf are the prediction error covariance and the gains of the standard Kalman filter (i.e., \mathbf_ ). If the estimation error covariance is defined so that : \mathbf_i := E \left \left( \mathbf_ - \hat_ \right)^ \left( \mathbf_ - \hat_ \right) \mid z_1 \ldots z_t \right then we have that the improvement on the estimation of \mathbf_ is given by: : \mathbf - \mathbf_i = \sum_^i \left \mathbf^ \mathbf^\textsf \left( \mathbf \mathbf \mathbf^\textsf + \mathbf \right)^ \mathbf \left( \mathbf^ \right)^\textsf \right


Fixed-interval smoothers

The optimal fixed-interval smoother provides the optimal estimate of \hat_ (k < n) using the measurements from a fixed interval \mathbf_1 to \mathbf_n. This is also called "Kalman Smoothing". There are several smoothing algorithms in common use.


Rauch–Tung–Striebel

The Rauch–Tung–Striebel (RTS) smoother is an efficient two-pass algorithm for fixed interval smoothing. The forward pass is the same as the regular Kalman filter algorithm. These ''filtered'' a-priori and a-posteriori state estimates \hat_, \hat_ and covariances \mathbf_, \mathbf_ are saved for use in the backward pass (for
retrodiction Retrodiction is the act of making a prediction about the past. It is also known as postdiction (but this should not be confused with the use of the term in criticisms of parapsychological research). Activity The activity of retrodiction (or po ...
). In the backward pass, we compute the ''smoothed'' state estimates \hat_ and covariances \mathbf_. We start at the last time step and proceed backward in time using the following recursive equations: :\begin \hat_ &= \hat_ + \mathbf_k \left(\hat_ - \hat_\right) \\ \mathbf_ &= \mathbf_ + \mathbf_k \left(\mathbf_ - \mathbf_\right) \mathbf_k^\textsf \end where :\mathbf_k = \mathbf_ \mathbf_^\textsf \mathbf_^. \mathbf_ is the a-posteriori state estimate of timestep k and \mathbf_ is the a-priori state estimate of timestep k + 1. The same notation applies to the covariance.


Modified Bryson–Frazier smoother

An alternative to the RTS algorithm is the modified Bryson–Frazier (MBF) fixed interval smoother developed by Bierman. This also uses a backward pass that processes data saved from the Kalman filter forward pass. The equations for the backward pass involve the recursive computation of data which are used at each observation time to compute the smoothed state and covariance. The recursive equations are :\begin \tilde_k &= \mathbf_k^\textsf \mathbf_k^ \mathbf_k + \hat_k^\textsf \hat_k \hat_k \\ \hat_ &= \mathbf_k^\textsf\tilde_k\mathbf_k \\ \hat_n &= 0 \\ \tilde_k &= -\mathbf_k^\textsf \mathbf_k^ \mathbf_k + \hat_k^\textsf \hat_k \\ \hat_ &= \mathbf_k^\textsf\tilde_k \\ \hat_n &= 0 \end where \mathbf_k is the residual covariance and \hat_k = \mathbf - \mathbf_k \mathbf_k. The smoothed state and covariance can then be found by substitution in the equations :\begin \mathbf_ &= \mathbf_ - \mathbf_\hat_k\mathbf_ \\ \mathbf_ &= \mathbf_ - \mathbf_\hat_k \end or :\begin \mathbf_ &= \mathbf_ - \mathbf_\tilde_k\mathbf_ \\ \mathbf_ &= \mathbf_ - \mathbf_\tilde_k. \end An important advantage of the MBF is that it does not require finding the inverse of the covariance matrix. Bierman's derivation is based on the RTS smoother, which assumes that the underlying distributions are Gaussian. However, a derivation of the MBF based on the concept of the fixed point smoother, which does not require the Gaussian assumption, is given by Gibbs. The MBF can also be used to perform consistency checks on the filter residuals and the difference between the value of a filter state after an update and the smoothed value of the state, that is \mathbf_ - \mathbf_.


Minimum-variance smoother

The minimum-variance smoother can attain the best-possible error performance, provided that the models are linear, their parameters and the noise statistics are known precisely. This smoother is a time-varying state-space generalization of the optimal non-causal
Wiener filter In signal processing, the Wiener filter is a filter used to produce an estimate of a desired or target random process by linear time-invariant ( LTI) filtering of an observed noisy process, assuming known stationary signal and noise spectra, a ...
. The smoother calculations are done in two passes. The forward calculations involve a one-step-ahead predictor and are given by :\begin \hat_ &= (\mathbf_k - \mathbf_k\mathbf_k)\hat_ + \mathbf_k\mathbf_k \\ \alpha_k &= -\mathbf_k^\mathbf_k\hat_ + \mathbf_k^\mathbf_k \end The above system is known as the inverse Wiener-Hopf factor. The backward recursion is the adjoint of the above forward system. The result of the backward pass \beta_k may be calculated by operating the forward equations on the time-reversed \alpha_k and time reversing the result. In the case of output estimation, the smoothed estimate is given by :\hat_ = \mathbf_k - \mathbf_k\beta_k Taking the causal part of this minimum-variance smoother yields :\hat_ = \mathbf_k - \mathbf_k \mathbf_k^ \alpha_k which is identical to the minimum-variance Kalman filter. The above solutions minimize the variance of the output estimation error. Note that the Rauch–Tung–Striebel smoother derivation assumes that the underlying distributions are Gaussian, whereas the minimum-variance solutions do not. Optimal smoothers for state estimation and input estimation can be constructed similarly. A continuous-time version of the above smoother is described in.
Expectation–maximization algorithm In statistics, an expectation–maximization (EM) algorithm is an iterative method to find (local) maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent varia ...
s may be employed to calculate approximate
maximum likelihood In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of an assumed probability distribution, given some observed data. This is achieved by maximizing a likelihood function so that, under the assumed stati ...
estimates of unknown state-space parameters within minimum-variance filters and smoothers. Often uncertainties remain within problem assumptions. A smoother that accommodates uncertainties can be designed by adding a positive definite term to the Riccati equation. In cases where the models are nonlinear, step-wise linearizations may be within the minimum-variance filter and smoother recursions (
extended Kalman filter In estimation theory, the extended Kalman filter (EKF) is the nonlinear version of the Kalman filter which linearizes about an estimate of the current mean and covariance. In the case of well defined transition models, the EKF has been considered t ...
ing).


Frequency-weighted Kalman filters

Pioneering research on the perception of sounds at different frequencies was conducted by Fletcher and Munson in the 1930s. Their work led to a standard way of weighting measured sound levels within investigations of industrial noise and hearing loss. Frequency weightings have since been used within filter and controller designs to manage performance within bands of interest. Typically, a frequency shaping function is used to weight the average power of the error spectral density in a specified frequency band. Let \mathbf - \hat denote the output estimation error exhibited by a conventional Kalman filter. Also, let \mathbf denote a causal frequency weighting transfer function. The optimum solution which minimizes the variance of \mathbf\left(\mathbf - \hat\right) arises by simply constructing \mathbf^ \hat. The design of \mathbf remains an open question. One way of proceeding is to identify a system which generates the estimation error and setting \mathbf equal to the inverse of that system. This procedure may be iterated to obtain mean-square error improvement at the cost of increased filter order. The same technique can be applied to smoothers.


Nonlinear filters

The basic Kalman filter is limited to a linear assumption. More complex systems, however, can be
nonlinear In mathematics and science, a nonlinear system (or a non-linear system) is a system in which the change of the output is not proportional to the change of the input. Nonlinear problems are of interest to engineers, biologists, physicists, mathe ...
. The nonlinearity can be associated either with the process model or with the observation model or with both. The most common variants of Kalman filters for non-linear systems are the Extended Kalman Filter and Unscented Kalman filter. The suitability of which filter to use depends on the non-linearity indices of the process and observation model.


Extended Kalman filter

In the extended Kalman filter (EKF), the state transition and observation models need not be linear functions of the state but may instead be nonlinear functions. These functions are of
differentiable In mathematics, a differentiable function of one real variable is a function whose derivative exists at each point in its domain. In other words, the graph of a differentiable function has a non- vertical tangent line at each interior point in ...
type. :\begin \mathbf_k &= f(\mathbf_, \mathbf_k) + \mathbf_k \\ \mathbf_k &= h(\mathbf_k) + \mathbf_k \end The function ''f'' can be used to compute the predicted state from the previous estimate and similarly the function ''h'' can be used to compute the predicted measurement from the predicted state. However, ''f'' and ''h'' cannot be applied to the covariance directly. Instead a matrix of partial derivatives (the Jacobian) is computed. At each timestep the Jacobian is evaluated with current predicted states. These matrices can be used in the Kalman filter equations. This process essentially linearizes the nonlinear function around the current estimate.


Unscented Kalman filter

When the state transition and observation models—that is, the predict and update functions f and h—are highly nonlinear, the extended Kalman filter can give particularly poor performance. This is because the covariance is propagated through linearization of the underlying nonlinear model. The unscented Kalman filter (UKF)  uses a deterministic sampling technique known as the unscented transformation (UT) to pick a minimal set of sample points (called sigma points) around the mean. The sigma points are then propagated through the nonlinear functions, from which a new mean and covariance estimate are formed. The resulting filter depends on how the transformed statistics of the UT are calculated and which set of sigma points are used. It should be remarked that it is always possible to construct new UKFs in a consistent way. For certain systems, the resulting UKF more accurately estimates the true mean and covariance. This can be verified with Monte Carlo sampling or
Taylor series In mathematics, the Taylor series or Taylor expansion of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point. For most common functions, the function and the sum of its Taylor ser ...
expansion of the posterior statistics. In addition, this technique removes the requirement to explicitly calculate Jacobians, which for complex functions can be a difficult task in itself (i.e., requiring complicated derivatives if done analytically or being computationally costly if done numerically), if not impossible (if those functions are not differentiable).


Sigma points

For a
random In common usage, randomness is the apparent or actual lack of definite pattern or predictability in information. A random sequence of events, symbols or steps often has no order and does not follow an intelligible pattern or combination. ...
vector \mathbf=(x_1, \dots, x_L), sigma points are any set of vectors : \=\bigl\ attributed with * first-order weights W_0^a, \dots, W_N^a that fulfill # \sum_^N W_j^a=1 # for all i=1, \dots, L: E _i\sum_^N W_j^a s_ * second-order weights W_0^c, \dots, W_N^c that fulfill # \sum_^N W_j^c=1 # for all pairs (i,l) \in \^2: E _ix_l\sum_^N W_j^c s_s_ . A simple choice of sigma points and weights for \mathbf_ in the UKF algorithm is :\begin \mathbf_0&=\hat \mathbf_\\ -1& where \hat \mathbf_ is the mean estimate of \mathbf_. The vector \mathbf_j is the ''j''th column of \mathbf where \mathbf_=\mathbf^\textsf. Typically, \mathbf is obtained via
Cholesky decomposition In linear algebra, the Cholesky decomposition or Cholesky factorization (pronounced ) is a decomposition of a Hermitian, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose, which is useful for eff ...
of \mathbf_. With some care the filter equations can be expressed in such a way that \mathbf is evaluated directly without intermediate calculations of \mathbf_. This is referred to as the ''square-root unscented Kalman filter''. The weight of the mean value, W_0, can be chosen arbitrarily. Another popular parameterization (which generalizes the above) is :\begin \mathbf_0&=\hat \mathbf_\\ W_0^a&= \frac\\ W_0^c&= W_0^a + 1-\alpha^2+\beta \\ \mathbf_j&=\hat \mathbf_ + \alpha\sqrt \mathbf_j, \quad j=1, \dots, L\\ \mathbf_&=\hat \mathbf_ - \alpha\sqrt \mathbf_j, \quad j=1, \dots, L\\ W_j^a&=W_j^c=\frac, \quad j=1, \dots, 2L. \end \alpha and \kappa control the spread of the sigma points. \beta is related to the distribution of x. Note that this is an overparameterization in the sense that any one of \alpha, \beta and \kappa can be chosen arbitrarily. Appropriate values depend on the problem at hand, but a typical recommendation is \alpha = 1, \beta = 0, and \kappa \approx 3L/2. If the true distribution of x is Gaussian, \beta = 2 is optimal.


Predict

As with the EKF, the UKF prediction can be used independently from the UKF update, in combination with a linear (or indeed EKF) update, or vice versa. Given estimates of the mean and covariance, \hat\mathbf_ and \mathbf_, one obtains N = 2L+1 sigma points as described in the section above. The sigma points are propagated through the transition function ''f''. :\mathbf_ = f\left(\mathbf_\right) \quad j = 0, \dots, 2L . The propagated sigma points are weighed to produce the predicted mean and covariance. :\begin \hat_ &= \sum_^ W_j^a \mathbf_j \\ \mathbf_ &= \sum_^ W_j^c \left(\mathbf_j - \hat_\right)\left(\mathbf_j - \hat_\right)^\textsf+\mathbf_k \end where W_j^a are the first-order weights of the original sigma points, and W_j^c are the second-order weights. The matrix \mathbf_k is the covariance of the transition noise, \mathbf_k.


Update

Given prediction estimates \hat_ and \mathbf_, a new set of N = 2L+1 sigma points \mathbf_0, \dots, \mathbf_ with corresponding first-order weights W_0^a,\dots W_^a and second-order weights W_0^c,\dots, W_^c is calculated. These sigma points are transformed through the measurement function h. : \mathbf_j=h(\mathbf_j), \,\, j=0,1, \dots, 2L . Then the empirical mean and covariance of the transformed points are calculated. :\begin \hat &= \sum_^ W_j^a \mathbf_j \\ pt \hat_k &= \sum_^ W_j^c (\mathbf_j-\hat)(\mathbf_j-\hat)^\textsf + \mathbf \end where \mathbf_k is the covariance matrix of the observation noise, \mathbf_k. Additionally, the cross covariance matrix is also needed :\begin \mathbf &= \sum_^ W_j^c (\mathbf_j-\hat\mathbf_)(\mathbf_j-\hat\mathbf)^\textsf. \end The Kalman gain is : \begin \mathbf_k=\mathbf\hat_k^. \end The updated mean and covariance estimates are : \begin \hat\mathbf_&=\hat\mathbf_+\mathbf_k(\mathbf_k-\hat\mathbf)\\ \mathbf_&=\mathbf_-\mathbf_k\hat_k\mathbf_k^\textsf. \end


Discriminative Kalman filter

When the observation model p(\mathbf_k\mid\mathbf_k) is highly non-linear and/or non-Gaussian, it may prove advantageous to apply
Bayes' rule Bayes' theorem (alternatively Bayes' law or Bayes' rule, after Thomas Bayes) gives a mathematical rule for inverting conditional probabilities, allowing one to find the probability of a cause given its effect. For example, if the risk of develo ...
and estimate : p(\mathbf_k\mid\mathbf_k) \approx \frac where p(\mathbf_k\mid\mathbf_k) \approx \mathcal(g(\mathbf_k),Q(\mathbf_k)) for nonlinear functions g,Q. This replaces the generative specification of the standard Kalman filter with a
discriminative model Discriminative models, also referred to as conditional models, are a class of models frequently used for classification. They are typically used to solve binary classification problems, i.e. assign labels, such as pass/fail, win/lose, alive/dead or ...
for the latent states given observations. Under a stationary state model : \begin p(\mathbf_1) &= \mathcal(0, \mathbf), \\ p(\mathbf_k\mid\mathbf_) &= \mathcal(\mathbf\mathbf_, \mathbf), \end where \mathbf= \mathbf\mathbf\mathbf^\intercal + \mathbf, if : p(\mathbf_k\mid\mathbf_) \approx \mathcal(\hat_, \mathbf_), then given a new observation \mathbf_k, it follows that : p(\mathbf_\mid\mathbf_) \approx \mathcal(\hat_, \mathbf_) where : \begin \mathbf_ &= \mathbf\mathbf_\mathbf^\intercal + \mathbf, \\ \mathbf_ &= (\mathbf_^ + Q(\mathbf_k)^ - \mathbf^)^, \\ \hat_ &= \mathbf_ (\mathbf_^\mathbf\hat_ + \mathbf_^g(\mathbf_k) ). \end Note that this approximation requires Q(\mathbf_k)^ - \mathbf^ to be positive-definite; in the case that it is not, : \mathbf_ = (\mathbf_^ + Q(\mathbf_k)^)^ is used instead. Such an approach proves particularly useful when the dimensionality of the observations is much greater than that of the latent states and can be used build filters that are particularly robust to nonstationarities in the observation model.


Adaptive Kalman filter

Adaptive Kalman filters allow to adapt for process dynamics which are not modeled in the process model \mathbf(t), which happens for example in the context of a maneuvering target when a constant velocity (reduced order) Kalman filter is employed for tracking.


Kalman–Bucy filter

Kalman–Bucy filtering (named for Richard Snowden Bucy) is a continuous time version of Kalman filtering. It is based on the state space model :\begin \frac\mathbf(t) &= \mathbf(t)\mathbf(t) + \mathbf(t)\mathbf(t) + \mathbf(t) \\ \mathbf(t) &= \mathbf(t) \mathbf(t) + \mathbf(t) \end where \mathbf(t) and \mathbf(t) represent the intensities of the two white noise terms \mathbf(t) and \mathbf(t), respectively. The filter consists of two differential equations, one for the state estimate and one for the covariance: :\begin \frac\hat(t) &= \mathbf(t)\hat(t) + \mathbf(t)\mathbf(t) + \mathbf(t) \left(\mathbf(t) - \mathbf(t)\hat(t)\right) \\ \frac\mathbf(t) &= \mathbf(t)\mathbf(t) + \mathbf(t)\mathbf^\textsf(t) + \mathbf(t) - \mathbf(t)\mathbf(t)\mathbf^\textsf(t) \end where the Kalman gain is given by :\mathbf(t) = \mathbf(t)\mathbf^\textsf(t)\mathbf^(t) Note that in this expression for \mathbf(t) the covariance of the observation noise \mathbf(t) represents at the same time the covariance of the prediction error (or ''innovation'') \tilde(t) = \mathbf(t) - \mathbf(t)\hat(t); these covariances are equal only in the case of continuous time. The distinction between the prediction and update steps of discrete-time Kalman filtering does not exist in continuous time. The second differential equation, for the covariance, is an example of a
Riccati equation In mathematics, a Riccati equation in the narrowest sense is any first-order ordinary differential equation that is quadratic in the unknown function. In other words, it is an equation of the form y'(x) = q_0(x) + q_1(x) \, y(x) + q_2(x) \, y^2( ...
. Nonlinear generalizations to Kalman–Bucy filters include continuous time extended Kalman filter.


Hybrid Kalman filter

Most physical systems are represented as continuous-time models while discrete-time measurements are made frequently for state estimation via a digital processor. Therefore, the system model and measurement model are given by :\begin \dot(t) &= \mathbf(t)\mathbf(t) + \mathbf(t)\mathbf(t) + \mathbf(t), &\mathbf(t) &\sim N\left(\mathbf, \mathbf(t)\right) \\ \mathbf_k &= \mathbf_k\mathbf_k + \mathbf_k, &\mathbf_k &\sim N(\mathbf,\mathbf_k) \end where :\mathbf_k = \mathbf(t_k).


Initialize

:\hat_ = E\left mathbf(t_0)\right \mathbf_ = \operatorname\left mathbf\left(t_0\right)\right/math>


Predict

:\begin \dot(t) &= \mathbf(t) \hat(t) + \mathbf(t) \mathbf(t) \text \hat\left(t_\right) = \hat_ \\ \Rightarrow \hat_ &= \hat\left(t_k\right) \\ \dot(t) &= \mathbf(t)\mathbf(t) + \mathbf(t)\mathbf(t)^\textsf + \mathbf(t) \text \mathbf\left(t_\right) = \mathbf_ \\ \Rightarrow \mathbf_ &= \mathbf\left(t_k\right) \end The prediction equations are derived from those of continuous-time Kalman filter without update from measurements, i.e., \mathbf(t) = 0 . The predicted state and covariance are calculated respectively by solving a set of differential equations with the initial value equal to the estimate at the previous step. For the case of
linear time invariant In system analysis, among other fields of study, a linear time-invariant (LTI) system is a system that produces an output signal from any input signal subject to the constraints of linearity and time-invariance; these terms are briefly defined ...
systems, the continuous time dynamics can be exactly discretized into a discrete time system using
matrix exponential In mathematics, the matrix exponential is a matrix function on square matrix, square matrices analogous to the ordinary exponential function. It is used to solve systems of linear differential equations. In the theory of Lie groups, the matrix exp ...
s.


Update

:\begin \mathbf_k &= \mathbf_\mathbf_k^\textsf \left(\mathbf_k\mathbf_\mathbf_k^\textsf + \mathbf_k\right)^ \\ \hat_ &= \hat_ + \mathbf_k\left(\mathbf_k - \mathbf_k\hat_\right) \\ \mathbf_ &= \left(\mathbf - \mathbf_k\mathbf_k\right)\mathbf_ \end The update equations are identical to those of the discrete-time Kalman filter.


Variants for the recovery of sparse signals

The traditional Kalman filter has also been employed for the recovery of sparse, possibly dynamic, signals from noisy observations. Recent works utilize notions from the theory of
compressed sensing Compressed sensing (also known as compressive sensing, compressive sampling, or sparse sampling) is a signal processing technique for efficiently acquiring and reconstructing a Signal (electronics), signal by finding solutions to Underdetermined s ...
/sampling, such as the restricted isometry property and related probabilistic recovery arguments, for sequentially estimating the sparse state in intrinsically low-dimensional systems.


Relation to Gaussian processes

Since linear Gaussian state-space models lead to Gaussian processes, Kalman filters can be viewed as sequential solvers for
Gaussian process regression In statistics, originally in geostatistics, kriging or Kriging (), also known as Gaussian process regression, is a method of interpolation based on Gaussian process governed by prior covariances. Under suitable assumptions of the prior, kriging g ...
.


Applications

* Attitude and heading reference systems *
Autopilot An autopilot is a system used to control the path of a vehicle without requiring constant manual control by a human operator. Autopilots do not replace human operators. Instead, the autopilot assists the operator's control of the vehicle, allow ...
*
Electric battery An electric battery is a source of electric power consisting of one or more electrochemical cells with external connections for powering electrical devices. When a battery is supplying power, its positive Terminal (electronics), terminal is the ...
state of charge State of charge (SOC) quantifies the remaining capacity available in a battery at a given time and in relation to a given state of ageing. It is usually expressed as percentage (0% = empty; 100% = full). An alternative form of the same measure i ...
(SoC) estimation * Brain–computer interfaces * Tracking and vertex fitting of
charged particles In physics, a charged particle is a particle with an electric charge. For example, some elementary particles, like the electron or quarks are charged. Some composite particles like protons are charged particles. An ion, such as a molecule or atom ...
in
particle detector In experimental and applied particle physics, nuclear physics, and nuclear engineering, a particle detector, also known as a radiation detector, is a device used to detect, track, and/or identify ionizing elementary particle, particles, such as t ...
s * Tracking of objects in
computer vision Computer vision tasks include methods for image sensor, acquiring, Image processing, processing, Image analysis, analyzing, and understanding digital images, and extraction of high-dimensional data from the real world in order to produce numerical ...
*
Dynamic positioning Dynamic positioning (DP) is a computer-controlled system to automatically maintain a vessel's position and heading by using its own propellers and thrusters. Position reference sensors, combined with wind sensors, motion sensors and gyrocompas ...
in shipping *
Economics Economics () is a behavioral science that studies the Production (economics), production, distribution (economics), distribution, and Consumption (economics), consumption of goods and services. Economics focuses on the behaviour and interac ...
, in particular
macroeconomics Macroeconomics is a branch of economics that deals with the performance, structure, behavior, and decision-making of an economy as a whole. This includes regional, national, and global economies. Macroeconomists study topics such as output (econ ...
,
time series analysis In mathematics, a time series is a series of data points indexed (or listed or graphed) in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time. Thus it is a sequence of discrete-time data. ...
, and
econometrics Econometrics is an application of statistical methods to economic data in order to give empirical content to economic relationships. M. Hashem Pesaran (1987). "Econometrics", '' The New Palgrave: A Dictionary of Economics'', v. 2, p. 8 p. 8 ...
*
Inertial guidance system An inertial navigation system (INS; also inertial guidance system, inertial instrument) is a navigation device that uses motion sensors (accelerometers), rotation sensors ( gyroscopes) and a computer to continuously calculate by dead reckoning ...
*
Nuclear medicine Nuclear medicine (nuclear radiology, nucleology), is a medical specialty involving the application of radioactivity, radioactive substances in the diagnosis and treatment of disease. Nuclear imaging is, in a sense, ''radiology done inside out'', ...
– single photon emission computed tomography image restoration *
Orbit determination Orbit determination is the estimation of orbits of objects such as moons, planets, and spacecraft. One major application is to allow tracking newly observed asteroids and verify that they have not been previously discovered. The basic methods wer ...
* Power system state estimation *
Radar tracker A radar tracker is a component of a radar system, or an associated command and control (C2) system, that associates consecutive radar observations of the same target into Track (navigation), tracks. It is particularly useful when the radar system ...
*
Satellite navigation system A satellite or an artificial satellite is an object, typically a spacecraft, placed into orbit around a celestial body. They have a variety of uses, including communication relay, weather forecasting, navigation ( GPS), broadcasting, scientif ...
s *
Seismology Seismology (; from Ancient Greek σεισμός (''seismós'') meaning "earthquake" and -λογία (''-logía'') meaning "study of") is the scientific study of earthquakes (or generally, quakes) and the generation and propagation of elastic ...
* Sensorless control of AC motor
variable-frequency drive A variable-frequency drive (VFD, or adjustable-frequency drive, adjustable-speed drive, variable-speed drive, AC drive, micro drive, inverter drive, variable voltage variable frequency drive, or drive) is a type of AC motor, AC motor drive (sys ...
s *
Simultaneous localization and mapping Simultaneous localization and mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an Intelligent agent, agent's location within it. While this initially ap ...
* Speech enhancement *
Visual odometry The visual system is the physiological basis of visual perception (the ability to detect and process light). The system detects, transduces and interprets information concerning light within the visible range to construct an image and buil ...
*
Weather forecasting Weather forecasting or weather prediction is the application of science and technology forecasting, to predict the conditions of the Earth's atmosphere, atmosphere for a given location and time. People have attempted to predict the weather info ...
*
Navigation system A navigation system is a computing system that aids in navigation. Navigation systems may be entirely on board the vehicle or vessel that the system is controlling (for example, on the ship's bridge) or located elsewhere, making use of radio or oth ...
*
3D modeling In 3D computer graphics, 3D modeling is the process of developing a mathematical coordinate-based Computer representation of surfaces, representation of a surface of an object (inanimate or living) in Three-dimensional space, three dimensions vi ...
*
Structural health monitoring Structural health monitoring (SHM) involves the observation and analysis of a system over time using periodically sampled response measurements to monitor changes to the material and geometric properties of engineering structures such as bridges ...
* Human sensorimotor processing


See also

*
Alpha beta filter An alpha beta filter (also called alpha-beta filter, f-g filter or g-h filterEli Brookner: Tracking and Kalman Filtering Made Easy. Wiley-Interscience, 1st edition, 4 1998.) is a simplified form of State observer, observer for estimation, data smo ...
*
Inverse-variance weighting In statistics, inverse-variance weighting is a method of aggregating two or more random variables to minimize the variance of the weighted average. Each random variable is weighted in inverse proportion to its variance (i.e., proportional to its ...
* Covariance intersection *
Data assimilation Data assimilation refers to a large group of methods that update information from numerical computer models with information from observations. Data assimilation is used to update model states, model trajectories over time, model parameters, and ...
* Ensemble Kalman filter *
Extended Kalman filter In estimation theory, the extended Kalman filter (EKF) is the nonlinear version of the Kalman filter which linearizes about an estimate of the current mean and covariance. In the case of well defined transition models, the EKF has been considered t ...
* Fast Kalman filter *
Filtering problem (stochastic processes) In the theory of stochastic processes, filtering describes the problem of determining the state of a system from an incomplete and potentially noisy set of observations. For example, in GPS navigation, filtering helps estimate a car’s true posit ...
*
Generalized filtering Generalized filtering is a generic Bayesian filtering scheme for nonlinear state-space models. It is based on a variational principle of least action, formulated in generalized coordinates of motion. Note that "generalized coordinates of motion" a ...
* Invariant extended Kalman filter *
Kernel adaptive filter In signal processing, a kernel adaptive filter is a type of nonlinear adaptive filter. An adaptive filter is a filter that adapts its transfer function to changes in signal properties over time by minimizing an error or loss function that character ...
* Masreliez's theorem * Moving horizon estimation *
Particle filter Particle filters, also known as sequential Monte Carlo methods, are a set of Monte Carlo algorithms used to find approximate solutions for filtering problems for nonlinear state-space systems, such as signal processing and Bayesian statistical ...
estimator *
PID controller PID or Pid may refer to: Medicine * Pelvic inflammatory disease or pelvic inflammatory disorder, an infection of the upper part of the female reproductive system * Primary immune deficiency, disorders in which part of the body's immune system is ...
*
Predictor–corrector method In numerical analysis, predictor–corrector methods belong to a class of algorithms designed to integrate ordinary differential equationsto find an unknown function that satisfies a given differential equation. All such algorithms proceed in two ...
*
Recursive least squares filter Recursive least squares (RLS) is an adaptive filter algorithm that recursively finds the coefficients that minimize a weighted linear least squares cost function relating to the input signals. This approach is in contrast to other algorithms such ...
* Schmidt–Kalman filter *
Separation principle In control theory, a separation principle, more formally known as a principle of separation of estimation and control, states that under some assumptions the problem of designing an optimal feedback controller for a stochastic system can be solved b ...
*
Sliding mode control In control systems, sliding mode control (SMC) is a nonlinear control method that alters the dynamic system, dynamics of a nonlinear system by applying a discontinuous control signal (or more rigorously, a set-valued control signal) that forces th ...
* State-transition matrix *
Stochastic differential equation A stochastic differential equation (SDE) is a differential equation in which one or more of the terms is a stochastic process, resulting in a solution which is also a stochastic process. SDEs have many applications throughout pure mathematics an ...
s * Switching Kalman filter


References


Further reading

* * * * * * * * * * * * * * * * * *


External links


A New Approach to Linear Filtering and Prediction Problems
by R. E. Kalman, 1960
Kalman and Bayesian Filters in Python
Open source Kalman filtering textbook.
How a Kalman filter works, in pictures
Illuminates the Kalman filter with pictures and colors {{Authority control Control theory Linear filters Markov models Nonlinear filters Robot control Signal estimation Stochastic differential equations Hungarian inventions