Cook's distance

In statistics, Cook's distance or Cook's ''D'' is a commonly used estimate of the influence of a data point when performing a least-squares regression analysis. In a practical ordinary least squares analysis, Cook's distance can be used in several ways: to indicate influential data points that are particularly worth checking for validity, or to indicate regions of the design space where it would be good to be able to obtain more data points. It is named after the American statistician R. Dennis Cook, who introduced the concept in 1977.


Definition

Data points with large residuals (outliers) and/or high leverage may distort the outcome and accuracy of a regression. Cook's distance measures the effect of deleting a given observation. Points with a large Cook's distance are considered to merit closer examination in the analysis.

For the algebraic expression, first define

: \underset{n \times 1}{\mathbf{y}} = \underset{n \times p}{\mathbf{X}} \ \underset{p \times 1}{\boldsymbol{\beta}} + \underset{n \times 1}{\boldsymbol{\varepsilon}}

where \boldsymbol{\varepsilon} \sim \mathcal{N}\left( 0, \sigma^2 \mathbf{I} \right) is the error term, \boldsymbol{\beta} = \left[ \beta_0 \, \beta_1 \dots \beta_{p-1} \right]^{\mathsf{T}} is the coefficient vector, p is the number of covariates or predictors for each observation, and \mathbf{X} is the design matrix including a constant. The least squares estimator then is \mathbf{b} = \left( \mathbf{X}^{\mathsf{T}} \mathbf{X} \right)^{-1} \mathbf{X}^{\mathsf{T}} \mathbf{y}, and consequently the fitted (predicted) values for the mean of \mathbf{y} are

: \widehat{\mathbf{y}} = \mathbf{X} \mathbf{b} = \mathbf{X} \left( \mathbf{X}^{\mathsf{T}} \mathbf{X} \right)^{-1} \mathbf{X}^{\mathsf{T}} \mathbf{y} = \mathbf{H} \mathbf{y}

where \mathbf{H} \equiv \mathbf{X} \left( \mathbf{X}^{\mathsf{T}} \mathbf{X} \right)^{-1} \mathbf{X}^{\mathsf{T}} is the projection matrix (or hat matrix). The i-th diagonal element of \mathbf{H}, given by h_{ii} \equiv \mathbf{x}_i^{\mathsf{T}} \left( \mathbf{X}^{\mathsf{T}} \mathbf{X} \right)^{-1} \mathbf{x}_i, is known as the leverage of the i-th observation. Similarly, the i-th element of the residual vector \mathbf{e} = \mathbf{y} - \widehat{\mathbf{y}} = \left( \mathbf{I} - \mathbf{H} \right) \mathbf{y} is denoted by e_i.

Cook's distance D_i of observation i (for i = 1, \dots, n) is defined as the sum of all the changes in the regression model when observation i is removed from it:

: D_i = \frac{\sum_{j=1}^{n} \left( \widehat{y}_j - \widehat{y}_{j(i)} \right)^2}{p s^2}

where \widehat{y}_{j(i)} is the fitted response value for observation j obtained when excluding observation i, and s^2 = \frac{\mathbf{e}^{\mathsf{T}} \mathbf{e}}{n - p} is the mean squared error of the regression model. Equivalently, it can be expressed using the leverage h_{ii}:

: D_i = \frac{e_i^2}{p s^2} \left[ \frac{h_{ii}}{\left( 1 - h_{ii} \right)^2} \right] .
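The second expression avoids refitting the model n times. The following is a minimal NumPy sketch (with synthetic data; the variable names are illustrative, not from any particular library) that computes D_i both from the deletion definition and from the leverage formula and checks that they agree:

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 50, 3  # n observations, p columns of X (including the constant)
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
    y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

    # Hat matrix, leverages, residuals and mean squared error of the full fit
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    h = np.diag(H)
    y_hat = H @ y
    e = y - y_hat
    s2 = e @ e / (n - p)

    # Closed form: D_i = e_i^2 / (p s^2) * h_ii / (1 - h_ii)^2
    D = e**2 / (p * s2) * h / (1 - h) ** 2

    # Definition: refit without observation i, compare all n fitted values
    D_def = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        b_i = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
        D_def[i] = np.sum((y_hat - X @ b_i) ** 2) / (p * s2)

    assert np.allclose(D, D_def)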


Detecting highly influential observations

There are different opinions regarding what cut-off values to use for spotting highly influential points. Since Cook's distance is in the metric of an ''F'' distribution with p and n-p (as defined for the design matrix \mathbf{X} above) degrees of freedom, the median point (i.e., F_{0.5}(p, n-p)) can be used as a cut-off. Since this value is close to 1 for large n, a simple operational guideline of D_i > 1 has been suggested. Note that Cook's distance does not always correctly identify influential observations.
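Both cut-offs are straightforward to compute. A sketch continuing the NumPy example from the definition section (SciPy is assumed to be available):

    from scipy import stats

    # Median of the F(p, n-p) distribution as a cut-off
    cutoff = stats.f.ppf(0.5, p, n - p)
    influential = np.where(D > cutoff)[0]

    # Simple operational guideline D_i > 1
    influential_simple = np.where(D > 1.0)[0]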


Relationship to other influence measures (and interpretation)

D_i can be expressed using the leverage (0 \leq h_{ii} \leq 1) and the square of the ''internally'' Studentized residual (0 \leq t_i^2), as follows:

: \begin{align} D_i &= \frac{e_i^2}{p s^2} \left[ \frac{h_{ii}}{\left( 1 - h_{ii} \right)^2} \right] = \frac{1}{p} \frac{e_i^2}{s^2 \left( 1 - h_{ii} \right)} \left[ \frac{h_{ii}}{1 - h_{ii}} \right] \\ &= \frac{1}{p} \, t_i^2 \, \frac{h_{ii}}{1 - h_{ii}} . \end{align}

The benefit of the last formulation is that it clearly shows the relationship of t_i^2 and h_{ii} to D_i (while p and n are the same for all observations). If t_i^2 is large then (for non-extreme values of h_{ii}) it will increase D_i. If h_{ii} is close to 0 then D_i will be small, while if h_{ii} is close to 1 then D_i will become very large (as long as t_i^2 > 0, i.e., the observation i is not exactly on the regression line that was fitted without observation i).
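Continuing the numerical sketch from the definition section, this equivalence is easy to verify with the internally studentized residuals t_i = e_i / (s \sqrt{1 - h_{ii}}):

    # Internally studentized residuals
    t = e / np.sqrt(s2 * (1 - h))

    # D_i = t_i^2 / p * h_ii / (1 - h_ii)
    assert np.allclose(t**2 / p * h / (1 - h), D)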
D_i is related to DFFITS through the following relationship (note that t_{i(i)} is the ''externally'' studentized residual, and \widehat{\sigma}, \widehat{\sigma}_{(i)} are the estimated standard deviations of the error term with all observations included and with observation i excluded, respectively):

: \begin{align} D_i &= \frac{1}{p} \, t_i^2 \, \frac{h_{ii}}{1 - h_{ii}} \\ &= \frac{1}{p} \frac{\widehat{\sigma}_{(i)}^2}{\widehat{\sigma}^2} \, t_{i(i)}^2 \, \frac{h_{ii}}{1 - h_{ii}} = \frac{1}{p} \frac{\widehat{\sigma}_{(i)}^2}{\widehat{\sigma}^2} \left( t_{i(i)} \sqrt{\frac{h_{ii}}{1 - h_{ii}}} \right)^2 \\ &= \frac{1}{p} \frac{\widehat{\sigma}_{(i)}^2}{\widehat{\sigma}^2} \, \text{DFFITS}^2 \end{align}

D_i can be interpreted as the distance one's estimates move within the confidence ellipsoid that represents a region of plausible values for the parameters. This is shown by an alternative but equivalent representation of Cook's distance in terms of changes to the estimates of the regression parameters between the cases where the particular observation is either included in or excluded from the regression analysis.

An alternative to D_i has been proposed. Instead of considering the influence a single observation has on the overall model, the statistic S_i serves as a measure of how sensitive the prediction of the i-th observation is to the deletion of each observation in the original data set. It can be formulated as a weighted linear combination of the D_j's of all data points. Again, the projection matrix is involved in the calculation to obtain the required weights:

: S_i = \frac{\sum_{j=1}^{n} \left( \widehat{y}_i - \widehat{y}_{i(j)} \right)^2}{p s^2 h_{ii}} = \sum_{j=1}^{n} \frac{h_{ij}^2}{h_{ii} h_{jj}} \, D_j = \sum_{j=1}^{n} \rho_{ij}^2 \cdot D_j

In this context, \rho_{ij} (\leq 1) resembles the correlation between the predictions \widehat{y}_i and \widehat{y}_j.
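In the running sketch, S can be obtained directly as this weighted combination, since \rho_{ij}^2 = h_{ij}^2 / (h_{ii} h_{jj}) involves only the hat matrix:

    # rho_ij^2 = h_ij^2 / (h_ii h_jj); S_i = sum_j rho_ij^2 * D_j
    rho2 = H**2 / np.outer(h, h)
    S = rho2 @ D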
In contrast to D_i, the distribution of S_i is asymptotically normal for large sample sizes and models with many predictors. In the absence of outliers the expected value of S_i is approximately p^{-1}. An influential observation can be identified if

: \left| S_i - \operatorname{med}\left( S \right) \right| \geq 4.5 \cdot \operatorname{MAD}\left( S \right)

with \operatorname{med}\left( S \right) as the median and \operatorname{MAD}\left( S \right) as the median absolute deviation of all S-values within the original data set, i.e., a robust measure of location and a robust measure of scale for the distribution of S_i. The factor 4.5 covers approximately 3 standard deviations of S around its centre.
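A sketch of this detection rule, applied to the S vector computed above:

    # Flag observations with |S_i - med(S)| >= 4.5 * MAD(S)
    med = np.median(S)
    mad = np.median(np.abs(S - med))
    flagged = np.where(np.abs(S - med) >= 4.5 * mad)[0]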
When compared to Cook's distance, S_i was found to perform well for high- and intermediate-leverage outliers, even in the presence of masking effects for which D_i failed.
D_i and S_i are closely related because they can both be expressed in terms of the matrix \mathbf{T}, which holds the effects of the deletion of the j-th data point on the i-th prediction:

: \mathbf{T} = \left[ \begin{matrix} \widehat{y}_1 - \widehat{y}_{1(1)} & \widehat{y}_1 - \widehat{y}_{1(2)} & \cdots & \widehat{y}_1 - \widehat{y}_{1(n)} \\ \widehat{y}_2 - \widehat{y}_{2(1)} & \widehat{y}_2 - \widehat{y}_{2(2)} & \cdots & \widehat{y}_2 - \widehat{y}_{2(n)} \\ \vdots & \vdots & \ddots & \vdots \\ \widehat{y}_n - \widehat{y}_{n(1)} & \widehat{y}_n - \widehat{y}_{n(2)} & \cdots & \widehat{y}_n - \widehat{y}_{n(n)} \end{matrix} \right] = \mathbf{H} \mathbf{E} \mathbf{G}

with \mathbf{E} = \operatorname{diag}\left( e_1, \dots, e_n \right) the diagonal matrix of residuals and \mathbf{G} = \operatorname{diag}\left( \frac{1}{1 - h_{11}}, \dots, \frac{1}{1 - h_{nn}} \right). With \mathbf{T} at hand, the vector of Cook's distances \mathbf{D} is given by:

: \mathbf{D} = \left[ D_1 \, D_2 \, \dots \, D_n \right]^{\mathsf{T}} = \frac{1}{p s^2} \operatorname{diag}\left( \mathbf{T}^{\mathsf{T}} \mathbf{T} \right) = \frac{1}{p s^2} \operatorname{diag}\left( \mathbf{G} \mathbf{E} \mathbf{H}^{\mathsf{T}} \mathbf{H} \mathbf{E} \mathbf{G} \right) = \operatorname{diag}\left( \mathbf{M} \right)

where \mathbf{H}^{\mathsf{T}} \mathbf{H} = \mathbf{H} if \mathbf{H} is symmetric and idempotent, which is not necessarily the case. In contrast, the vector \mathbf{S} can be calculated as:

: \mathbf{S} = \left[ S_1 \, S_2 \, \dots \, S_n \right]^{\mathsf{T}} = \frac{1}{p s^2} \mathbf{W} \operatorname{diag}\left( \mathbf{T} \mathbf{T}^{\mathsf{T}} \right) = \frac{1}{p s^2} \mathbf{W} \operatorname{diag}\left( \mathbf{H} \mathbf{E} \mathbf{G} \mathbf{G} \mathbf{E} \mathbf{H}^{\mathsf{T}} \right) = \operatorname{diag}\left( \mathbf{N} \right)

where \mathbf{W} = \operatorname{diag}\left( \frac{1}{h_{11}}, \dots, \frac{1}{h_{nn}} \right) and \operatorname{diag}\left( \mathbf{A} \right) extracts the main diagonal of a square matrix \mathbf{A}. In this context, \mathbf{M} = p^{-1} s^{-2} \mathbf{G} \mathbf{E} \mathbf{H}^{\mathsf{T}} \mathbf{H} \mathbf{E} \mathbf{G} is referred to as the influence matrix whereas \mathbf{N} = p^{-1} s^{-2} \mathbf{W} \mathbf{H} \mathbf{E} \mathbf{G} \mathbf{G} \mathbf{E} \mathbf{H}^{\mathsf{T}} resembles the so-called sensitivity matrix. An eigenvector analysis of \mathbf{M} and \mathbf{N}, which both share the same eigenvalues, serves as a tool in outlier detection, although the eigenvectors of the sensitivity matrix are more powerful.
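The matrix formulation gives a compact way to compute both vectors at once. Continuing the running sketch (where \mathbf{H} is symmetric and idempotent, so the simplification above applies):

    # T_ij = h_ij * e_j / (1 - h_jj): effect of deleting j on prediction i
    E = np.diag(e)
    G = np.diag(1 / (1 - h))
    T = H @ E @ G

    # D from the squared column norms of T, S from its squared row norms
    D_mat = np.diag(T.T @ T) / (p * s2)
    S_mat = np.diag(T @ T.T) / (p * s2 * h)
    assert np.allclose(D_mat, D) and np.allclose(S_mat, S)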


Software implementations

Many programs and statistics packages, such as R and Python, include implementations of Cook's distance.
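In Python, for example, statsmodels exposes Cook's distance through its influence diagnostics; in R, the analogous function is cooks.distance(). A usage sketch with statsmodels, assuming the design matrix X and response y from the earlier examples:

    import statsmodels.api as sm

    results = sm.OLS(y, X).fit()
    cooks_d, pvalues = results.get_influence().cooks_distance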


Extensions

High-dimensional Influence Measure (HIM) is an alternative to Cook's distance for when p > n (i.e., when there are more predictors than observations). While Cook's distance quantifies an individual observation's influence on the least-squares regression coefficient estimates, the HIM measures the influence of an observation on the marginal correlations.


See also

* Outlier
* Leverage (statistics)
* Partial leverage
* DFFITS
* Studentized residual

