Hildreth–Lu Estimation
Hildreth–Lu estimation, named for Clifford Hildreth and John Y. Lu, is a method for adjusting a linear model in response to the presence of serial correlation in the error term. It is an iterative procedure related to Cochrane–Orcutt estimation. The idea is to repeatedly apply ordinary least squares to

:y_t - \rho y_{t-1} = \alpha(1-\rho) + (X_t - \rho X_{t-1})\beta + e_t

for different values of \rho between −1 and 1. From all these auxiliary regressions, one selects the value of \rho, together with the associated estimates of \alpha and \beta, that yields the smallest residual sum of squares.

See also: Prais–Winsten estimation.
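The grid search translates directly into code. Below is a minimal sketch in Python with NumPy, assuming y is a length-T response vector and X a T × k regressor matrix; the function name and the grid spacing over \rho are illustrative choices, not part of the original method.

import numpy as np

def hildreth_lu(y, X, rhos=np.arange(-0.99, 1.0, 0.01)):
    """Grid-search rho in (-1, 1); OLS on the quasi-differenced data."""
    best = None
    for rho in rhos:
        # Quasi-difference: y_t - rho*y_{t-1} and X_t - rho*X_{t-1}
        y_star = y[1:] - rho * y[:-1]
        X_star = X[1:] - rho * X[:-1]
        # The intercept column (1 - rho) makes its coefficient equal alpha
        Z = np.column_stack([np.full(len(y_star), 1.0 - rho), X_star])
        coef, *_ = np.linalg.lstsq(Z, y_star, rcond=None)
        resid = y_star - Z @ coef
        rss = float(resid @ resid)
        if best is None or rss < best[0]:
            best = (rss, rho, coef[0], coef[1:])
    rss, rho, alpha, beta = best
    return rho, alpha, beta, rss

Because the transformation drops the first observation, each auxiliary regression uses T − 1 data points; Prais–Winsten estimation (excerpted below) avoids that loss.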


Clifford Hildreth
Clifford George Hildreth (December 8, 1917 – August 15, 1995) was an American econometrician. He was head of the Department of Economics at Michigan State University. A native of McPherson, Kansas, Hildreth earned his bachelor's degree from the University of Kansas before entering Iowa State University for graduate study. After years at the University of Chicago and North Carolina State University, he joined the faculty at Michigan State before moving to the University of Minnesota in 1964, where he held joint appointments in the Department of Economics, the School of Statistics, and the Department of Agricultural and Applied Economics. He retired in 1988. His most notable contribution was a procedure for estimating a linear model in the presence of autocorrelated error terms, known as Hildreth–Lu estimation. In 1960 he was elected a Fellow of the American Statistical Association.


Linear Model
In statistics, the term linear model is used in different ways according to the context. The most common occurrence is in connection with regression models, and the term is often taken as synonymous with linear regression model. However, the term is also used in time series analysis with a different meaning. In each case, the designation "linear" is used to identify a subclass of models for which substantial reduction in the complexity of the related statistical theory is possible.

Linear regression models

For the regression case, the statistical model is as follows. Given a (random) sample (Y_i, X_{i1}, \ldots, X_{ip}), \, i = 1, \ldots, n, the relation between the observations Y_i and the independent variables X_{ij} is formulated as

:Y_i = \beta_0 + \beta_1 \phi_1(X_{i1}) + \cdots + \beta_p \phi_p(X_{ip}) + \varepsilon_i \qquad i = 1, \ldots, n

where \phi_1, \ldots, \phi_p may be nonlinear functions. In the above, the quantities \varepsilon_i are random variables representing errors in the relationship ...
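As a concrete illustration of the formulation above, the sketch below fits such a model by ordinary least squares, using simulated data and the illustrative basis choices \phi_1(x) = x and \phi_2(x) = x^2; the model stays linear in the coefficients even though \phi_2 is nonlinear in x.

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, size=100)
# Simulated data from Y = 1 + 2*phi_1(x) - 0.5*phi_2(x) + noise
y = 1.0 + 2.0 * x - 0.5 * x**2 + rng.normal(scale=0.3, size=100)

# Design matrix: intercept, phi_1(x) = x, phi_2(x) = x^2
Phi = np.column_stack([np.ones_like(x), x, x**2])
beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(beta)  # should be approximately [1.0, 2.0, -0.5]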


Serial Correlation
Autocorrelation, sometimes known as serial correlation in the discrete time case, is the correlation of a signal with a delayed copy of itself as a function of delay. Informally, it is the similarity between observations of a random variable as a function of the time lag between them. The analysis of autocorrelation is a mathematical tool for finding repeating patterns, such as the presence of a periodic signal obscured by noise, or identifying the missing fundamental frequency in a signal implied by its harmonic frequencies. It is often used in signal processing for analyzing functions or series of values, such as time-domain signals. Different fields of study define autocorrelation differently, and not all of these definitions are equivalent. In some fields, the term is used interchangeably with autocovariance. Unit root processes, trend-stationary processes, autoregressive processes, and moving average processes are specific forms of processes with autocorrelation.
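A minimal sketch of the sample autocorrelation at a given lag, assuming a one-dimensional NumPy array and the common estimator that normalizes the lag-h autocovariance by the lag-0 autocovariance; the function name is an illustrative choice.

import numpy as np

def acf(x, lag):
    """Sample autocorrelation of x at the given nonnegative lag."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()              # center the series
    if lag == 0:
        return 1.0
    # Lag-h autocovariance over lag-0 autocovariance
    return float((xc[lag:] @ xc[:-lag]) / (xc @ xc))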




Error Term
In mathematics and statistics, an error term is an additive type of error. Common examples include:
* errors and residuals in statistics, e.g. in linear regression
* the error term in numerical integration


Cochrane–Orcutt Estimation
Cochrane–Orcutt estimation is a procedure in econometrics that adjusts a linear model for serial correlation in the error term. Developed in the 1940s, it is named after statisticians Donald Cochrane and Guy Orcutt.

Theory

Consider the model

:y_t = \alpha + X_t \beta + \varepsilon_t,

where y_t is the value of the dependent variable of interest at time t, \beta is a column vector of coefficients to be estimated, X_t is a row vector of explanatory variables at time t, and \varepsilon_t is the error term at time t. If it is found, for instance via the Durbin–Watson statistic, that the error term is serially correlated over time, then standard statistical inference as normally applied to regressions is invalid because standard errors are estimated with bias. To avoid this problem, the residuals must be modeled. If the process generating the residuals is found to be a stationary first-order autoregressive structure,

:\varepsilon_t = \rho \varepsilon_{t-1} + e_t, \qquad |\rho| < 1, ...
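The procedure iterates between estimating \rho from the residuals and re-running OLS on the quasi-differenced data. Below is a minimal sketch under the same array-layout assumptions as the Hildreth–Lu example above (y of length T, X of shape T × k); the tolerance and names are illustrative.

import numpy as np

def cochrane_orcutt(y, X, tol=1e-6, max_iter=100):
    """Alternate between AR(1) fits to the residuals and transformed OLS."""
    T = len(y)
    Z = np.column_stack([np.ones(T), X])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)  # initial OLS: [alpha, beta]
    rho = 0.0
    for _ in range(max_iter):
        resid = y - Z @ coef
        # AR(1) coefficient of the residuals: eps_t = rho*eps_{t-1} + e_t
        rho_new = float((resid[1:] @ resid[:-1]) / (resid[:-1] @ resid[:-1]))
        # OLS on the transformed model; the (1 - rho) column estimates alpha
        y_star = y[1:] - rho_new * y[:-1]
        X_star = X[1:] - rho_new * X[:-1]
        Zs = np.column_stack([np.full(T - 1, 1.0 - rho_new), X_star])
        coef, *_ = np.linalg.lstsq(Zs, y_star, rcond=None)
        if abs(rho_new - rho) < tol:
            break
        rho = rho_new
    return rho_new, coef[0], coef[1:]  # rho, alpha, beta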


Ordinary Least Squares
In statistics, ordinary least squares (OLS) is a type of linear least squares method for choosing the unknown parameters in a linear regression model by the principle of least squares: minimizing the sum of the squares of the differences between the observed values of the dependent variable in the input dataset and the values predicted by the linear function of the explanatory variables. Geometrically, this is seen as the sum of the squared distances, parallel to the axis of the dependent variable, between each data point in the set and the corresponding point on the regression surface; the smaller the differences, the better the model fits the data. The resulting estimator can be expressed by a simple formula, especially in the case of a simple linear regression, in which there is a single regressor on the right side of the regression equation. The OLS estimator is consistent ...
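For reference, a minimal sketch of the OLS estimator via the normal equations \hat\beta = (X^\top X)^{-1} X^\top y, assuming X already includes an intercept column; np.linalg.lstsq is the numerically safer route in practice.

import numpy as np

def ols(X, y):
    """Solve the normal equations X'X beta = X'y for beta."""
    return np.linalg.solve(X.T @ X, X.T @ y)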


Residual Sum Of Squares
In statistics, the residual sum of squares (RSS), also known as the sum of squared estimate of errors (SSE), is the sum of the squares of residuals (deviations of predicted from actual empirical values of data). It is a measure of the discrepancy between the data and an estimation model, such as a linear regression. A small RSS indicates a tight fit of the model to the data. It is used as an optimality criterion in parameter selection and model selection. In general, total sum of squares = explained sum of squares + residual sum of squares. For a proof of this in the multivariate ordinary least squares (OLS) case, see partitioning in the general OLS model.

One explanatory variable

In a model with a single explanatory variable, RSS is given by

:\operatorname{RSS} = \sum_{i=1}^n (y_i - f(x_i))^2

where y_i is the i-th value of the variable to be predicted, x_i is the i-th value of the explanatory variable, and f(x_i) is the predicted value of y_i (also termed \hat{y}_i) ...
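The formula above translates into a one-liner; a minimal sketch assuming arrays of observed and predicted values:

import numpy as np

def rss(y, y_hat):
    """Residual sum of squares: sum of squared (observed - predicted)."""
    resid = np.asarray(y, dtype=float) - np.asarray(y_hat, dtype=float)
    return float(resid @ resid)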


Prais–Winsten Estimation
In econometrics, Prais–Winsten estimation is a procedure meant to take care of serial correlation of type AR(1) in a linear model. Conceived by Sigbert Prais and Christopher Winsten in 1954, it is a modification of Cochrane–Orcutt estimation in the sense that it does not lose the first observation, which leads to more efficiency as a result and makes it a special case of feasible generalized least squares.

Theory

Consider the model

:y_t = \alpha + X_t \beta + \varepsilon_t,

where y_t is the time series of interest at time t, \beta is a vector of coefficients, X_t is a matrix of explanatory variables, and \varepsilon_t is the error term. The error term can be serially correlated over time:

:\varepsilon_t = \rho \varepsilon_{t-1} + e_t, \qquad |\rho| < 1,

where e_t is white noise. In addition to the Cochrane–Orcutt transformation, ...
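A minimal sketch of the Prais–Winsten transformation, assuming \rho has already been estimated (for example from a Cochrane–Orcutt step); unlike Cochrane–Orcutt, the first observation is retained, scaled by \sqrt{1 - \rho^2}. The function name and array layout are illustrative.

import numpy as np

def prais_winsten_transform(y, X, rho):
    """Quasi-difference y and X, keeping a rescaled first observation."""
    w = np.sqrt(1.0 - rho**2)
    y_star = np.concatenate([[w * y[0]], y[1:] - rho * y[:-1]])
    X_star = np.vstack([w * X[:1], X[1:] - rho * X[:-1]])
    # Intercept column: w for the first row, (1 - rho) for the rest
    const = np.concatenate([[w], np.full(len(y) - 1, 1.0 - rho)])
    return y_star, np.column_stack([const, X_star])

Running OLS on the returned pair then estimates (\alpha, \beta) using all T observations.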



