Best linear unbiased prediction

In statistics, best linear unbiased prediction (BLUP) is used in linear mixed models for the estimation of random effects. BLUP was derived by Charles Roy Henderson in 1950, but the term "best linear unbiased predictor" (or "prediction") seems not to have been used until 1962. "Best linear unbiased predictions" (BLUPs) of random effects are similar to best linear unbiased estimates (BLUEs) of fixed effects (see the Gauss–Markov theorem). The distinction arises because it is conventional to talk not about ''estimating'' fixed effects but rather about ''predicting'' random effects, although the two terms are otherwise equivalent. (This is a bit strange since the random effects have already been "realized"; they already exist. The use of the term "prediction" may be because in the field of animal breeding in which Henderson worked, the random effects were usually genetic merit, which could be used to predict the quality of offspring (Robinson, page 28).) However, the equations for the "fixed" effects and for the random effects are different.

In practice, the parameters associated with the random effect(s) term(s) are often unknown; these parameters are the variances of the random effects and of the residuals. Typically these variances are estimated and plugged into the predictor, leading to the empirical best linear unbiased predictor (EBLUP). Simply plugging estimated parameters into the predictor leaves this additional source of variability unaccounted for, so the resulting prediction variances for the EBLUP are overly optimistic. Best linear unbiased predictions are similar to empirical Bayes estimates of random effects in linear mixed models, except that in the latter case, where weights depend on unknown values of components of variance, these unknown variances are replaced by sample-based estimates.
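
In software, the random effects reported by mixed-model fitting routines are typically EBLUPs, because the variance components are first estimated (usually by maximum likelihood or REML) and then plugged into the predictor. The following minimal sketch illustrates this in Python; the use of the statsmodels library, the simulated data, and the variable names are illustrative assumptions rather than part of the original development.

<syntaxhighlight lang="python">
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate grouped data with one random intercept per group.  Because the
# variance components below are *estimated* (by REML) rather than known,
# the reported random effects are EBLUPs rather than exact BLUPs.
rng = np.random.default_rng(1)
n_groups, n_per_group = 30, 10
group = np.repeat(np.arange(n_groups), n_per_group)
x = rng.normal(size=n_groups * n_per_group)
u = rng.normal(scale=1.0, size=n_groups)              # true random intercepts
y = 2.0 + 0.5 * x + u[group] + rng.normal(scale=0.7, size=x.size)
data = pd.DataFrame({"y": y, "x": x, "group": group})

# Fit a random-intercept linear mixed model.
result = smf.mixedlm("y ~ x", data, groups=data["group"]).fit()

print(result.fe_params)       # estimated fixed effects
print(result.cov_re)          # estimated random-effect variance
print(result.scale)           # estimated residual variance
print(result.random_effects)  # predicted random effects per group (EBLUPs)
</syntaxhighlight>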


Example

Suppose that the model for the observations is written as

:Y_j = \mu + x_j^T\beta + \xi_j + \varepsilon_j, \,

where \mu is the mean of all observations Y, and ''ξj'' and ''εj'' represent the random effect and observation error for observation ''j'', and suppose they are uncorrelated and have known variances \sigma^2_\xi and \sigma^2_\varepsilon, respectively. Further, ''xj'' is a vector of independent variables for the ''j''th observation and \beta is a vector of regression parameters. The BLUP problem of providing an estimate of the observation-error-free value for the ''k''th observation,

:\tilde{Y}_k = \mu + x_k^T\beta + \xi_k ,

can be formulated as requiring that the coefficients of a linear predictor, defined as

:\widehat{Y}_k = \sum_{j=1}^n c_{kj} Y_j ,

should be chosen so as to minimise the variance of the prediction error,

:V = \operatorname{Var}(\tilde{Y}_k - \widehat{Y}_k),

subject to the condition that the predictor is unbiased,

:\operatorname{E}(\tilde{Y}_k - \widehat{Y}_k) = 0 .
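
A standard way to compute BLUPs (and the BLUEs of the fixed effects at the same time) is to solve Henderson's mixed-model equations. The sketch below is only an illustration of that route for the model above, with the intercept \mu absorbed into the fixed-effect design matrix and one random effect per observation, so that the random-effect design matrix is the identity; the synthetic data, the variable names, and the specific variance values are assumptions made for the example.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 2
sigma2_xi, sigma2_eps = 2.0, 1.0          # assumed *known* variance components

# Model: Y = X beta + Z xi + eps, with X including a column of ones for mu
# and Z = I because each observation carries its own random effect xi_j.
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
Z = np.eye(n)
beta_true = np.array([1.0, 0.5, -0.3])
xi = rng.normal(scale=np.sqrt(sigma2_xi), size=n)
y = X @ beta_true + Z @ xi + rng.normal(scale=np.sqrt(sigma2_eps), size=n)

# Henderson's mixed-model equations with R = sigma2_eps * I and G = sigma2_xi * I
# (the common factor 1/sigma2_eps has been cancelled from both sides):
#   [ X'X   X'Z              ] [beta_hat]   [X'y]
#   [ Z'X   Z'Z + lambda * I ] [xi_hat  ] = [Z'y],   lambda = sigma2_eps / sigma2_xi
lam = sigma2_eps / sigma2_xi
lhs = np.block([[X.T @ X, X.T @ Z],
                [Z.T @ X, Z.T @ Z + lam * np.eye(n)]])
rhs = np.concatenate([X.T @ y, Z.T @ y])
sol = np.linalg.solve(lhs, rhs)

beta_hat = sol[:X.shape[1]]    # BLUE of (mu, beta)
xi_hat = sol[X.shape[1]:]      # BLUP of the random effects xi_j

# Predicted observation-error-free values, i.e. estimates of mu + x_k' beta + xi_k:
y_tilde_hat = X @ beta_hat + xi_hat
</syntaxhighlight>

Setting Z to the identity matrix reflects the model above, in which every observation has its own random effect; in a grouped design Z would instead map observations to their group-level effects.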


BLUP vs BLUE

In contrast to the case of best linear unbiased estimation, the "quantity to be estimated", \tilde{Y}_k, not only has a contribution from a random element, but one of the observed quantities, specifically Y_k, which contributes to \widehat{Y}_k, also has a contribution from this same random element. In contrast to BLUE, BLUP takes into account known or estimated variances.
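
To see concretely how the variances enter, suppose for illustration that \mu and \beta were known and that the random effects and errors have mean zero and are uncorrelated across observations (simplifying assumptions not made in the general derivation). The variance-minimising unbiased linear predictor of \tilde{Y}_k then reduces to a shrinkage of the observed residual,

:\widehat{Y}_k = \mu + x_k^T\beta + \frac{\sigma^2_\xi}{\sigma^2_\xi + \sigma^2_\varepsilon}\left(Y_k - \mu - x_k^T\beta\right) ,

so the observed deviation from the fixed-effect part is weighted by the share of the total variance that is due to the random effect. A BLUE-type estimate of a purely fixed quantity involves no such shrinkage, which is one way of seeing why the two constructions lead to different equations.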


History of BLUP in breeding

Henderson explored breeding from a statistical point of view. His work assisted the development of the Selection Index (SI) and Estimated Breeding Value (EBV). These statistical methods influenced the artificial insemination stud rankings used in the United States, and these early methods are often confused with the BLUP now common in livestock breeding. The actual term BLUP originated out of work at the University of Guelph in Canada by Daniel Sorensen and Brian Kennedy, in which they extended Henderson's results to a model that includes several cycles of selection. This model was popularized by the University of Guelph in the dairy industry under the name BLUP. Further work by the university showed BLUP's superiority over EBV and SI, leading to it becoming the primary genetic predictor. There is thus confusion between the BLUP model popularized above and the best linear unbiased prediction statistical method, which was too theoretical for general use. The model was supplied to farmers for use on computers. In Canada, all dairies report nationally, and the genetic data were shared, making it the largest genetic pool and thus a source of improvements. This and BLUP drove a rapid increase in the quality of Holstein cattle.


See also

* Kriging
* Minimum mean square error


References

* {{cite journal |last=Robinson |first=G.K. |year=1991 |title=That BLUP is a Good Thing: The Estimation of Random Effects |journal=Statistical Science |volume=6 |issue=1 |pages=15–32 }}
* {{cite journal |last1=Liu |first1=Xu-Qing |last2=Rong |first2=Jian-Ying |last3=Liu |first3=Xiu-Ying |year=2008 |title=Best linear unbiased prediction for linear combinations in general mixed linear models |journal=Journal of Multivariate Analysis |volume=99 |issue=8 |pages=1503–1517 |doi=10.1016/j.jmva.2008.01.004 |doi-access=free }}