Autoregressive Models

In statistics, econometrics, and signal processing, an autoregressive (AR) model is a representation of a type of
random process
; as such, it can be used to describe certain time-varying processes in nature, economics, behavior, etc. The autoregressive model specifies that the output variable depends linearly on its own previous values and on a
stochastic
term (an imperfectly predictable term); thus the model is in the form of a stochastic difference equation (or recurrence relation) which should not be confused with a differential equation. Together with the moving-average (MA) model, it is a special case and key component of the more general autoregressive–moving-average (ARMA) and
autoregressive integrated moving average
(ARIMA) models of time series, which have a more complicated stochastic structure; it is also a special case of the vector autoregressive model (VAR), which consists of a system of more than one interlocking stochastic difference equation in more than one evolving random variable. Unlike the moving-average (MA) model, the autoregressive model is not always stationary, because it may contain a unit root.
Large language models are called autoregressive, but they are not a classical autoregressive model in this sense because they are not linear.


Definition

The notation AR(''p'') indicates an autoregressive model of order ''p''. The AR(''p'') model is defined as :X_t = \sum_{i=1}^p \varphi_i X_{t-i} + \varepsilon_t where \varphi_1, \ldots, \varphi_p are the ''parameters'' of the model, and \varepsilon_t is
white noise
. This can be equivalently written using the backshift operator ''B'' as :X_t = \sum_{i=1}^p \varphi_i B^i X_t + \varepsilon_t so that, moving the summation term to the left side and using polynomial notation, we have :\phi(B) X_t = \varepsilon_t An autoregressive model can thus be viewed as the output of an all-pole
infinite impulse response
filter whose input is white noise. Some parameter constraints are necessary for the model to remain weak-sense stationary. For example, processes in the AR(1) model with |\varphi_1| \geq 1 are not stationary. More generally, for an AR(''p'') model to be weak-sense stationary, the roots of the polynomial \Phi(z) := 1 - \sum_{i=1}^p \varphi_i z^i must lie outside the
unit circle
, i.e., each (complex) root z_i must satisfy |z_i| > 1 (see pages 89, 92).
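
A minimal Python sketch of this stationarity check and of simulating an AR(''p'') process (the coefficient values below are illustrative, not drawn from any source):

```python
import numpy as np

def is_stationary(phi):
    """Weak-sense stationarity check for an AR(p): all roots of
    Phi(z) = 1 - phi_1 z - ... - phi_p z^p must lie outside the unit circle."""
    phi = np.asarray(phi, dtype=float)
    # np.roots expects coefficients from the highest power down:
    # -phi_p z^p - ... - phi_1 z + 1
    coeffs = np.r_[-phi[::-1], 1.0]
    return bool(np.all(np.abs(np.roots(coeffs)) > 1.0))

def simulate_ar(phi, n, sigma=1.0, seed=0):
    """Simulate n draws of X_t = sum_i phi_i X_{t-i} + eps_t (zero initial values)."""
    rng = np.random.default_rng(seed)
    p = len(phi)
    x = np.zeros(n + p)
    eps = rng.normal(0.0, sigma, n + p)
    for t in range(p, n + p):
        x[t] = np.dot(phi, x[t-p:t][::-1]) + eps[t]  # phi_1 pairs with x[t-1]
    return x[p:]

print(is_stationary([0.5, 0.3]))   # True: both roots outside the unit circle
print(is_stationary([1.1]))        # False: AR(1) with |phi_1| >= 1
x = simulate_ar([0.5, 0.3], n=1000)
```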


Intertemporal effect of shocks

In an AR process, a one-time shock affects values of the evolving variable infinitely far into the future. For example, consider the AR(1) model X_t = \varphi_1 X_{t-1} + \varepsilon_t. A non-zero value for \varepsilon_t at, say, time ''t''=1 affects X_1 by the amount \varepsilon_1. Then by the AR equation for X_2 in terms of X_1, this affects X_2 by the amount \varphi_1 \varepsilon_1. Then by the AR equation for X_3 in terms of X_2, this affects X_3 by the amount \varphi_1^2 \varepsilon_1. Continuing this process shows that the effect of \varepsilon_1 never ends, although if the process is stationary then the effect diminishes toward zero in the limit. Because each shock affects ''X'' values infinitely far into the future from when it occurs, any given value ''X''''t'' is affected by shocks occurring infinitely far into the past. This can also be seen by rewriting the autoregression :\phi(B) X_t = \varepsilon_t (where the constant term has been suppressed by assuming that the variable has been measured as deviations from its mean) as :X_t = \frac{1}{\phi(B)} \varepsilon_t. When the
polynomial division
on the right side is carried out, the polynomial in the backshift operator applied to \varepsilon_t has an infinite order—that is, an infinite number of lagged values of \varepsilon_t appear on the right side of the equation.
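
The geometric decay of a single shock can be checked numerically; a small illustrative sketch (assuming numpy):

```python
import numpy as np

phi, n = 0.8, 20

# Response of X to a single unit shock eps = 1 at the first step, with all
# later shocks zero: the effect at horizon k is phi**k, never exactly zero.
x = np.zeros(n)
x[0] = 1.0                     # the shock enters here
for t in range(1, n):
    x[t] = phi * x[t-1]        # AR(1) recursion with no further noise

print(x[:5])                               # [1.0, 0.8, 0.64, 0.512, 0.4096]
print(np.allclose(x, phi ** np.arange(n))) # True: geometric decay phi**k
```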


Characteristic polynomial

The
autocorrelation function
of an AR(''p'') process can be expressed as :\rho(\tau) = \sum_{k=1}^p a_k y_k^{-|\tau|}, where y_k are the roots of the polynomial :\phi(B) = 1 - \sum_{k=1}^p \varphi_k B^k where ''B'' is the backshift operator, where \phi(\cdot) is the function defining the autoregression, and where \varphi_k are the coefficients in the autoregression. The formula is valid only if all the roots have multiplicity 1. The autocorrelation function of an AR(''p'') process is a sum of decaying exponentials.
* Each real root contributes a component to the autocorrelation function that decays exponentially.
* Similarly, each pair of complex conjugate roots contributes an exponentially damped oscillation.
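
A small numpy sketch of the damped-oscillation case for an illustrative AR(2) with complex-conjugate roots (the coefficients are arbitrary examples):

```python
import numpy as np

phi1, phi2 = 1.0, -0.5          # phi1**2 + 4*phi2 < 0: complex roots

# Roots of phi(B) = 1 - phi1*B - phi2*B^2 (coefficients highest power first)
roots = np.roots([-phi2, -phi1, 1.0])
print(np.abs(roots))            # both moduli > 1: stationary, ACF decays

# ACF by the recursion rho(tau) = phi1*rho(tau-1) + phi2*rho(tau-2)
rho = np.zeros(20)
rho[0] = 1.0
rho[1] = phi1 / (1.0 - phi2)    # from the Yule-Walker equations
for tau in range(2, 20):
    rho[tau] = phi1 * rho[tau-1] + phi2 * rho[tau-2]

# Complex-conjugate roots produce an exponentially damped oscillation:
# rho changes sign while |rho| shrinks roughly like |root|**(-tau).
print(np.round(rho[:8], 3))
```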


Graphs of AR(''p'') processes

The simplest AR process is AR(0), which has no dependence between the terms. Only the error/innovation/noise term contributes to the output of the process, so AR(0) corresponds to white noise. For an AR(1) process with a positive \varphi, only the previous term in the process and the noise term contribute to the output. If \varphi is close to 0, then the process still looks like white noise, but as \varphi approaches 1, the output gets a larger contribution from the previous term relative to the noise. This results in a "smoothing" or integration of the output, similar to a
low-pass filter
. For an AR(2) process, the previous two terms and the noise term contribute to the output. If both \varphi_1 and \varphi_2 are positive, the output will resemble a low pass filter, with the high frequency part of the noise decreased. If \varphi_1 is positive while \varphi_2 is negative, then the process favors changes in sign between terms of the process. The output oscillates. This can be linked to edge detection or detection of change in direction.
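
An illustrative simulation of the three cases (assuming numpy; the coefficients are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
eps = rng.standard_normal(n)

# AR(0): pure white noise
x0 = eps.copy()

# AR(1) with phi near 1: smoothed, low-pass-like output
x1 = np.zeros(n)
for t in range(1, n):
    x1[t] = 0.9 * x1[t-1] + eps[t]

# AR(2) with phi1 > 0, phi2 < 0: oscillatory, favoring changes in direction
x2 = np.zeros(n)
for t in range(2, n):
    x2[t] = 0.5 * x2[t-1] - 0.7 * x2[t-2] + eps[t]

# Lag-1 sample autocorrelations illustrate the smoothing/oscillation:
for name, x in [("AR(0)", x0), ("AR(1)", x1), ("AR(2)", x2)]:
    r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
    print(name, round(r1, 2))   # roughly 0, 0.9, and 0.3 respectively
```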


Example: An AR(1) process

An AR(1) process is given by :X_t = \varphi X_{t-1} + \varepsilon_t where \varepsilon_t is a white noise process with zero mean and constant variance \sigma_\varepsilon^2. (Note: The subscript on \varphi_1 has been dropped.) The process is weak-sense stationary if |\varphi| < 1 since it is obtained as the output of a stable filter whose input is white noise. (If \varphi = 1 then the variance of X_t depends on time t, so that the variance of the series diverges to infinity as t goes to infinity; the process is therefore not weak-sense stationary.) Assuming |\varphi| < 1, the mean \operatorname{E}(X_t) is identical for all values of ''t'' by the very definition of weak-sense stationarity. If the mean is denoted by \mu, it follows from :\operatorname{E}(X_t) = \varphi \operatorname{E}(X_{t-1}) + \operatorname{E}(\varepsilon_t) that \mu = \varphi\mu + 0, and hence :\mu = 0. The
variance
is :\operatorname{var}(X_t) = \operatorname{E}(X_t^2) - \mu^2 = \frac{\sigma_\varepsilon^2}{1-\varphi^2}, where \sigma_\varepsilon is the standard deviation of \varepsilon_t. This can be shown by noting that :\operatorname{var}(X_t) = \varphi^2 \operatorname{var}(X_{t-1}) + \sigma_\varepsilon^2, and then by noticing that the quantity above is a stable fixed point of this relation. The
autocovariance
is given by :B_n = \operatorname{E}(X_{t+n} X_t) - \mu^2 = \frac{\sigma_\varepsilon^2}{1-\varphi^2} \, \varphi^{|n|}. It can be seen that the autocovariance function decays with a decay time (also called
time constant
) of \tau = -1/\ln(\varphi) (to see this, write B_n = K\varphi^{|n|} and match \varphi^{|n|} = e^{|n|\ln\varphi} to the exponential decay law e^{-|n|/\tau}). The
spectral density
function is the
Fourier transform
of the autocovariance function. In discrete terms this will be the discrete-time Fourier transform: :\Phi(\omega) = \frac{1}{\sqrt{2\pi}} \sum_{n=-\infty}^\infty B_n e^{-i\omega n} = \frac{1}{\sqrt{2\pi}} \left( \frac{\sigma_\varepsilon^2}{1 + \varphi^2 - 2\varphi\cos\omega} \right). This expression is periodic due to the discrete nature of the X_j, which is manifested as the cosine term in the denominator. If we assume that the sampling time (\Delta t = 1) is much smaller than the decay time (\tau), then we can use a continuum approximation to B_n: :B(t) \approx \frac{\sigma_\varepsilon^2}{1-\varphi^2} \, \varphi^{|t|} which yields a Lorentzian profile for the spectral density: :\Phi(\omega) = \frac{1}{\sqrt{2\pi}} \, \frac{\sigma_\varepsilon^2}{1-\varphi^2} \, \frac{\gamma}{\pi(\gamma^2 + \omega^2)} where \gamma = 1/\tau is the angular frequency associated with the decay time \tau. An alternative expression for X_t can be derived by first substituting \varphi X_{t-2} + \varepsilon_{t-1} for X_{t-1} in the defining equation. Continuing this process ''N'' times yields :X_t = \varphi^N X_{t-N} + \sum_{k=0}^{N-1} \varphi^k \varepsilon_{t-k}. For ''N'' approaching infinity, \varphi^N will approach zero and: :X_t = \sum_{k=0}^\infty \varphi^k \varepsilon_{t-k}. It is seen that X_t is white noise convolved with the \varphi^k kernel plus the constant mean. If the white noise \varepsilon_t is a
Gaussian process
then X_t is also a Gaussian process. In other cases, the
central limit theorem
indicates that X_t will be approximately normally distributed when \varphi is close to one. For \varepsilon_t = 0, the process X_t = \varphi X_{t-1} will be a
geometric progression
(''exponential'' growth or decay). In this case, the solution can be found analytically: X_t = a\varphi^t, where a is an unknown constant (
initial condition
).
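
The closed-form mean, variance, and autocovariance above can be checked against a simulation; a sketch assuming numpy, with illustrative parameter values:

```python
import numpy as np

phi, sigma_eps, n = 0.7, 1.0, 100_000
rng = np.random.default_rng(1)

x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t-1] + rng.normal(0.0, sigma_eps)

var_theory = sigma_eps**2 / (1 - phi**2)   # sigma_eps^2 / (1 - phi^2) = 1.96
print(np.var(x), var_theory)               # sample variance should be close

# Autocovariance at lag k should be var_theory * phi**k (mean is ~0)
for k in (1, 2, 5):
    b_k = np.mean(x[:-k] * x[k:])
    print(k, round(b_k, 3), round(var_theory * phi**k, 3))
```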


Explicit mean/difference form of AR(1) process

The AR(1) model is the discrete-time analogue of the continuous Ornstein–Uhlenbeck process. It is therefore sometimes useful to understand the properties of the AR(1) model cast in an equivalent form. In this form, the AR(1) model, with process parameter \theta \in \mathbb{R}, is given by :X_{t+1} = X_t + (1-\theta)(\mu - X_t) + \varepsilon_{t+1}, where |\theta| < 1, \mu := \operatorname{E}(X) is the model mean, and \{\varepsilon_t\} is a white-noise process with zero mean and constant variance \sigma^2. By rewriting this as X_{t+1} = \theta X_t + (1-\theta)\mu + \varepsilon_{t+1} and then deriving (by induction) :X_{t+n} = \theta^n X_t + (1-\theta^n)\mu + \sum_{i=1}^{n} \theta^{n-i} \varepsilon_{t+i}, one can show that :\operatorname{E}(X_{t+n} \mid X_t) = \mu\left[1 - \theta^n\right] + X_t \theta^n and :\operatorname{Var}(X_{t+n} \mid X_t) = \sigma^2 \frac{1-\theta^{2n}}{1-\theta^2}.
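
These conditional-moment formulas translate directly into code; a minimal sketch (parameter values are illustrative):

```python
import numpy as np

theta, mu, sigma = 0.9, 5.0, 1.0

def conditional_moments(x_t, n):
    """n-step-ahead conditional mean and variance of the AR(1) model
    X_{t+1} = theta*X_t + (1-theta)*mu + eps_{t+1}."""
    mean = mu * (1 - theta**n) + x_t * theta**n
    var = sigma**2 * (1 - theta**(2*n)) / (1 - theta**2)
    return mean, var

# Starting from x_t = 10, the forecast relaxes toward the mean mu = 5
# while the forecast variance grows toward the stationary variance.
for n in (1, 5, 50):
    print(n, conditional_moments(10.0, n))
```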


Choosing the maximum lag

The partial autocorrelation of an AR(p) process equals zero at lags larger than p, so the appropriate maximum lag p is the one after which the partial autocorrelations are all zero.
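
A sketch of this diagnostic using the ''pacf'' function from ''statsmodels'' (assuming statsmodels is installed; the simulated AR(2) is illustrative):

```python
import numpy as np
from statsmodels.tsa.stattools import pacf

rng = np.random.default_rng(0)
n = 5000
x = np.zeros(n)
for t in range(2, n):                 # simulate an AR(2) process
    x[t] = 0.6 * x[t-1] + 0.2 * x[t-2] + rng.standard_normal()

# Sample partial autocorrelations: roughly zero beyond lag 2,
# suggesting p = 2 as the maximum lag.
print(np.round(pacf(x, nlags=5), 2))
```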


Calculation of the AR parameters

There are many ways to estimate the coefficients, such as the
ordinary least squares
procedure or method of moments (through Yule–Walker equations). The AR(''p'') model is given by the equation :X_t = \sum_{i=1}^p \varphi_i X_{t-i} + \varepsilon_t. It is based on parameters \varphi_i where ''i'' = 1, ..., ''p''. There is a direct correspondence between these parameters and the covariance function of the process, and this correspondence can be inverted to determine the parameters from the autocorrelation function (which is itself obtained from the covariances). This is done using the Yule–Walker equations.


Yule–Walker equations

The Yule–Walker equations, named for
Udny Yule
and Gilbert Walker, are the following set of equations: :\gamma_m = \sum_{k=1}^p \varphi_k \gamma_{m-k} + \sigma_\varepsilon^2 \delta_{m,0}, where m = 0, \ldots, p, yielding p+1 equations. Here \gamma_m is the autocovariance function of X_t, \sigma_\varepsilon is the standard deviation of the input noise process, and \delta_{m,0} is the Kronecker delta function. Because the last part of an individual equation is non-zero only if m = 0, the set of equations can be solved by representing the equations for m > 0 in matrix form, thus getting the equation :\begin{bmatrix} \gamma_1 \\ \gamma_2 \\ \gamma_3 \\ \vdots \\ \gamma_p \end{bmatrix} = \begin{bmatrix} \gamma_0 & \gamma_{-1} & \gamma_{-2} & \cdots \\ \gamma_1 & \gamma_0 & \gamma_{-1} & \cdots \\ \gamma_2 & \gamma_1 & \gamma_0 & \cdots \\ \vdots & \vdots & \vdots & \ddots \\ \gamma_{p-1} & \gamma_{p-2} & \gamma_{p-3} & \cdots \end{bmatrix} \begin{bmatrix} \varphi_1 \\ \varphi_2 \\ \varphi_3 \\ \vdots \\ \varphi_p \end{bmatrix} which can be solved for all \{\varphi_m; m = 1, 2, \dots, p\}. The remaining equation for ''m'' = 0 is :\gamma_0 = \sum_{k=1}^p \varphi_k \gamma_{-k} + \sigma_\varepsilon^2, which, once \{\varphi_m; m = 1, 2, \dots, p\} are known, can be solved for \sigma_\varepsilon^2. An alternative formulation is in terms of the
autocorrelation function
. The AR parameters are determined by the first ''p''+1 elements \rho(\tau) of the autocorrelation function. The full autocorrelation function can then be derived by recursively calculating :\rho(\tau) = \sum_{k=1}^p \varphi_k \rho(k-\tau). Examples for some low-order AR(''p'') processes:
* ''p''=1
** \gamma_1 = \varphi_1 \gamma_0
** Hence \rho_1 = \gamma_1 / \gamma_0 = \varphi_1
* ''p''=2
** The Yule–Walker equations for an AR(2) process are
**: \gamma_1 = \varphi_1 \gamma_0 + \varphi_2 \gamma_{-1}
**: \gamma_2 = \varphi_1 \gamma_1 + \varphi_2 \gamma_0
*** Remember that \gamma_{-k} = \gamma_k
*** Using the first equation yields \rho_1 = \gamma_1 / \gamma_0 = \frac{\varphi_1}{1-\varphi_2}
*** Using the recursion formula yields \rho_2 = \gamma_2 / \gamma_0 = \frac{\varphi_1^2 - \varphi_2^2 + \varphi_2}{1-\varphi_2}
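
A sketch of solving the Yule–Walker system from sample autocovariances, assuming numpy; the helper name ''yule_walker'' and the simulated coefficients are our own (statsmodels also ships a function of the same name):

```python
import numpy as np

def yule_walker(x, p):
    """Solve the Yule-Walker equations using sample autocovariances."""
    x = np.asarray(x) - np.mean(x)
    n = len(x)
    # Sample autocovariances gamma_0 ... gamma_p
    gamma = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
    # Toeplitz matrix with entries gamma_{|i-j|}
    idx = np.arange(p)
    R = gamma[np.abs(idx[:, None] - idx[None, :])]
    phi = np.linalg.solve(R, gamma[1:])
    sigma2 = gamma[0] - phi @ gamma[1:]   # the remaining m = 0 equation
    return phi, sigma2

# Usage on a simulated AR(2) with phi = (0.6, 0.2) and unit noise variance:
rng = np.random.default_rng(3)
x = np.zeros(20_000)
for t in range(2, len(x)):
    x[t] = 0.6 * x[t-1] + 0.2 * x[t-2] + rng.standard_normal()
print(yule_walker(x, 2))   # estimates close to (0.6, 0.2) and 1.0
```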


Estimation of AR parameters

The above equations (the Yule–Walker equations) provide several routes to estimating the parameters of an AR(''p'') model, by replacing the theoretical covariances with estimated values. Some of these variants can be described as follows:
* Estimation of autocovariances or autocorrelations. Here each of these terms is estimated separately, using conventional estimates. There are different ways of doing this and the choice between these affects the properties of the estimation scheme. For example, negative estimates of the variance can be produced by some choices.
* Formulation as a
least squares regression
problem in which an ordinary least squares prediction problem is constructed, basing prediction of values of ''X''''t'' on the ''p'' previous values of the same series. This can be thought of as a forward-prediction scheme. The normal equations for this problem can be seen to correspond to an approximation of the matrix form of the Yule–Walker equations in which each appearance of an autocovariance of the same lag is replaced by a slightly different estimate.
* Formulation as an extended form of ordinary least squares prediction problem. Here two sets of prediction equations are combined into a single estimation scheme and a single set of normal equations. One set is the set of forward-prediction equations and the other is a corresponding set of backward prediction equations, relating to the backward representation of the AR model: :: X_t = \sum_{i=1}^p \varphi_i X_{t+i} + \varepsilon^*_t. Here predicted values of ''X''''t'' would be based on the ''p'' future values of the same series. This way of estimating the AR parameters is due to John Parker Burg, and is called the Burg method: Burg and later authors called these particular estimates "maximum entropy estimates", but the reasoning behind this applies to the use of any set of estimated AR parameters. Compared to the estimation scheme using only the forward prediction equations, different estimates of the autocovariances are produced, and the estimates have different stability properties. Burg estimates are particularly associated with
maximum entropy spectral estimation
. Other possible approaches to estimation include
maximum likelihood estimation
. Two distinct variants of maximum likelihood are available: in one (broadly equivalent to the forward prediction least squares scheme) the likelihood function considered is that corresponding to the conditional distribution of later values in the series given the initial ''p'' values in the series; in the second, the likelihood function considered is that corresponding to the unconditional joint distribution of all the values in the observed series. Substantial differences in the results of these approaches can occur if the observed series is short, or if the process is close to non-stationarity.
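
A sketch of the forward-prediction least squares route (the simplest of the variants above), assuming numpy; the Burg and maximum-likelihood variants are more involved and are available in packages such as statsmodels:

```python
import numpy as np

def ar_ols(x, p):
    """Forward-prediction least squares: regress X_t on (X_{t-1}, ..., X_{t-p})."""
    x = np.asarray(x) - np.mean(x)
    # Row t of the design matrix holds the p values preceding x[t]
    X = np.column_stack([x[p - i:len(x) - i] for i in range(1, p + 1)])
    y = x[p:]
    phi, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = np.var(y - X @ phi)   # residual variance estimates sigma_eps^2
    return phi, sigma2

rng = np.random.default_rng(5)
x = np.zeros(10_000)
for t in range(1, len(x)):
    x[t] = 0.8 * x[t-1] + rng.standard_normal()
print(ar_ols(x, 1))   # phi close to 0.8, sigma2 close to 1.0
```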


Spectrum

The
power spectral density
(PSD) of an AR(''p'') process with noise variance \operatorname{Var}(Z_t) = \sigma_Z^2 is :S(f) = \frac{\sigma_Z^2}{\left| 1 - \sum_{k=1}^p \varphi_k e^{-i 2\pi f k} \right|^2}.
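
A direct numerical evaluation of this formula, assuming numpy; frequencies are in cycles per sample:

```python
import numpy as np

def ar_psd(phi, sigma2, f):
    """PSD of an AR(p) process: S(f) = sigma^2 / |1 - sum_k phi_k e^{-i 2 pi f k}|^2,
    for f in cycles per sample (0 <= f <= 1/2)."""
    phi = np.asarray(phi)
    k = np.arange(1, len(phi) + 1)
    # Evaluate the AR polynomial on the unit circle at each frequency
    denom = np.abs(1 - np.exp(-2j * np.pi * np.outer(f, k)) @ phi) ** 2
    return sigma2 / denom

f = np.linspace(0.0, 0.5, 6)
print(ar_psd([0.9], 1.0, f))   # AR(1) with phi > 0: power concentrated near f = 0
```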


AR(0)

For white noise (AR(0)) : S(f) = \sigma_Z^2.


AR(1)

For AR(1) :S(f) = \frac{\sigma_Z^2}{\left| 1 - \varphi_1 e^{-i 2\pi f} \right|^2} = \frac{\sigma_Z^2}{1 + \varphi_1^2 - 2\varphi_1 \cos 2\pi f}
* If \varphi_1 > 0 there is a single spectral peak at f=0, often referred to as
red noise
. As \varphi_1 becomes nearer 1, there is stronger power at low frequencies, i.e. larger time lags. This is then a low-pass filter; when applied to full-spectrum light, everything except the red light will be filtered.
* If \varphi_1 < 0 there is a minimum at f=0, often referred to as
blue noise
. This similarly acts as a high-pass filter; when applied to full-spectrum light, everything except the blue light will be filtered.


AR(2)

The behavior of an AR(2) process is determined entirely by the roots of its characteristic equation, which is expressed in terms of the
lag operator
as :1 - \varphi_1 B - \varphi_2 B^2 = 0, or equivalently by the poles of its
transfer function
, which is defined in the ''z'' domain by :H(z) = (1 - \varphi_1 z^{-1} - \varphi_2 z^{-2})^{-1}. It follows that the poles are values of z satisfying :1 - \varphi_1 z^{-1} - \varphi_2 z^{-2} = 0, which yields :z_1, z_2 = \frac{1}{2}\left(\varphi_1 \pm \sqrt{\varphi_1^2 + 4\varphi_2}\right). z_1 and z_2 are the reciprocals of the characteristic roots, as well as the eigenvalues of the temporal update matrix: :\begin{bmatrix} \varphi_1 & \varphi_2 \\ 1 & 0 \end{bmatrix} AR(2) processes can be split into three groups depending on the characteristics of their roots/poles:
* When \varphi_1^2 + 4\varphi_2 < 0, the process has a pair of complex-conjugate poles, creating a mid-frequency peak at :f^* = \frac{1}{2\pi}\cos^{-1}\left(\frac{\varphi_1(\varphi_2-1)}{4\varphi_2}\right), with bandwidth about the peak inversely proportional to the moduli of the poles: :|z_1| = |z_2| = \sqrt{-\varphi_2}. The terms involving square roots are all real in the case of complex poles since they exist only when \varphi_2 < 0. Otherwise the process has real roots, and:
* When \varphi_1 > 0 it acts as a low-pass filter on the white noise with a spectral peak at f=0
* When \varphi_1 < 0 it acts as a high-pass filter on the white noise with a spectral peak at f=1/2.
The process is non-stationary when the poles are on or outside the unit circle, or equivalently when the characteristic roots are on or inside the unit circle. The process is stable when the poles are strictly within the unit circle (roots strictly outside the unit circle), or equivalently when the coefficients are in the triangle -1 \le \varphi_2 \le 1 - |\varphi_1|. The full PSD function can be expressed in real form as: :S(f) = \frac{\sigma_Z^2}{1 + \varphi_1^2 + \varphi_2^2 - 2\varphi_1(1-\varphi_2)\cos(2\pi f) - 2\varphi_2\cos(4\pi f)}
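
A numpy sketch computing the poles, peak frequency, and stationarity-triangle check for an illustrative complex-pole AR(2):

```python
import numpy as np

phi1, phi2 = 0.9, -0.6           # phi1**2 + 4*phi2 < 0: complex-conjugate poles

# Poles z of 1 - phi1/z - phi2/z^2 = 0, i.e. roots of z^2 - phi1*z - phi2 = 0
poles = np.roots([1.0, -phi1, -phi2])
print(poles, np.abs(poles))      # moduli sqrt(-phi2) ~ 0.775: stationary

# Mid-frequency spectral peak (in cycles per sample)
f_star = np.arccos(phi1 * (phi2 - 1) / (4 * phi2)) / (2 * np.pi)
print(f_star)                    # ~0.148 for these coefficients

# Stationarity triangle check: -1 <= phi2 <= 1 - |phi1|
print(-1 <= phi2 <= 1 - abs(phi1))
```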


Implementations in statistics packages

* R – the ''stats'' package includes the ''ar'' function; the ''astsa'' package includes the ''sarima'' function to fit various models including AR.
* MATLAB – the Econometrics Toolbox and System Identification Toolbox include AR models.
* MATLAB and Octave – the ''TSA'' toolbox contains several estimation functions for uni-variate, multivariate, and adaptive AR models.
* PyMC3 – the Bayesian statistics and probabilistic programming framework supports AR models with ''p'' lags.
* ''bayesloop'' – supports parameter inference and model selection for the AR-1 process with time-varying parameters.
* Python – the ''statsmodels'' package includes AR models.
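
For example, a minimal fit-and-forecast session with ''statsmodels'' might look like this (a sketch, assuming a recent statsmodels version; the simulated series is illustrative):

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(7)
x = np.zeros(1000)
for t in range(1, len(x)):                      # simulate an AR(1), phi = 0.8
    x[t] = 0.8 * x[t-1] + rng.standard_normal()

res = AutoReg(x, lags=1).fit()
print(res.params)                               # intercept ~0 and phi ~0.8
print(res.predict(start=len(x), end=len(x)+4))  # five out-of-sample forecasts
```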


Impulse response

The
impulse response
of a system is the change in an evolving variable in response to a change in the value of a shock term ''k'' periods earlier, as a function of ''k''. Since the AR model is a special case of the vector autoregressive model, the computation of the impulse response described for vector autoregressions applies here.


''n''-step-ahead forecasting

Once the parameters of the autoregression :X_t = \sum_{i=1}^p \varphi_i X_{t-i} + \varepsilon_t have been estimated, the autoregression can be used to forecast an arbitrary number of periods into the future. First use ''t'' to refer to the first period for which data is not yet available; substitute the known preceding values ''X''''t−i'' for ''i'' = 1, ..., ''p'' into the autoregressive equation while setting the error term \varepsilon_t equal to zero (because we forecast ''X''''t'' to equal its expected value, and the expected value of the unobserved error term is zero). The output of the autoregressive equation is the forecast for the first unobserved period. Next, use ''t'' to refer to the ''next'' period for which data is not yet available; again the autoregressive equation is used to make the forecast, with one difference: the value of ''X'' one period prior to the one now being forecast is not known, so its expected value—the predicted value arising from the previous forecasting step—is used instead. Then for future periods the same procedure is used, each time using one more forecast value on the right side of the predictive equation until, after ''p'' predictions, all ''p'' right-side values are predicted values from preceding steps. There are four sources of uncertainty regarding predictions obtained in this manner: (1) uncertainty as to whether the autoregressive model is the correct model; (2) uncertainty about the accuracy of the forecasted values that are used as lagged values in the right side of the autoregressive equation; (3) uncertainty about the true values of the autoregressive coefficients; and (4) uncertainty about the value of the error term \varepsilon_t for the period being predicted. Each of the last three can be quantified and combined to give a confidence interval for the ''n''-step-ahead predictions; the confidence interval will become wider as ''n'' increases because of the use of an increasing number of estimated values for the right-side variables.
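
A sketch of this iterative procedure, assuming numpy; coefficients and starting values are illustrative:

```python
import numpy as np

def forecast_ar(x, phi, n_ahead):
    """n-step-ahead forecasts from an AR(p): each step sets the error term to
    zero and feeds earlier forecasts back in as lagged values."""
    phi = np.asarray(phi)
    p = len(phi)
    history = list(x[-p:])                   # the last p observed values
    out = []
    for _ in range(n_ahead):
        x_hat = np.dot(phi, history[::-1])   # most recent value pairs with phi_1
        out.append(x_hat)
        history = history[1:] + [x_hat]      # forecast becomes a lagged value
    return np.array(out)

# Forecasts of a stationary AR decay toward the (zero) mean with horizon:
print(forecast_ar([0.5, 1.0, 1.5], [0.6, 0.2], 5))
```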


See also

* Moving average model
* Linear difference equation
* Predictive analytics
* Linear predictive coding
* Resonance
* Levinson recursion
* Ornstein–Uhlenbeck process
* Infinite impulse response



External links


* AutoRegression Analysis (AR), by Paul Bourke
* by Mark Thoma