In statistics, the coefficient of determination, denoted ''R''2 or ''r''2 and pronounced "R squared", is the proportion of the variation in the dependent variable that is predictable from the independent variable(s). It is a statistic used in the context of statistical models whose main purpose is either the prediction of future outcomes or the testing of hypotheses, on the basis of other related information. It provides a measure of how well observed outcomes are replicated by the model, based on the proportion of total variation of outcomes explained by the model.

There are several definitions of ''R''2 that are only sometimes equivalent. In simple linear regression (which includes an intercept), ''r''2 is simply the square of the sample correlation coefficient (''r'') between the observed outcomes and the observed predictor values. If additional regressors are included, ''R''2 is the square of the coefficient of multiple correlation. In both such cases, the coefficient of determination normally ranges from 0 to 1.

There are cases where ''R''2 can yield negative values. This can arise when the predictions that are being compared to the corresponding outcomes have not been derived from a model-fitting procedure using those data. Even if a model-fitting procedure has been used, ''R''2 may still be negative, for example when linear regression is conducted without including an intercept, or when a non-linear function is used to fit the data. In cases where negative values arise, the mean of the data provides a better fit to the outcomes than do the fitted function values, according to this particular criterion.

The coefficient of determination can be more intuitively informative than MAE, MAPE, MSE, and RMSE in regression analysis evaluation, as the former can be expressed as a percentage, whereas the latter measures have arbitrary ranges. It also proved more robust for poor fits compared to SMAPE on certain test datasets.

When evaluating the goodness-of-fit of simulated (''Y''pred) versus measured (''Y''obs) values, it is not appropriate to base this on the ''R''2 of the linear regression ''Y''obs = ''m''·''Y''pred + ''b''. The ''R''2 quantifies the degree of any linear correlation between ''Y''obs and ''Y''pred, while for the goodness-of-fit evaluation only one specific linear correlation should be taken into consideration: ''Y''obs = 1·''Y''pred + 0 (i.e., the 1:1 line).


Definitions

A data set has ''n'' values marked ''y''1, ..., ''y''''n'' (collectively known as ''y''''i'' or as a vector ''y'' = [''y''1, ..., ''y''''n'']T), each associated with a fitted (or modeled, or predicted) value ''f''1, ..., ''f''''n'' (known as ''f''''i'', or sometimes ''ŷ''''i'', as a vector ''f''). Define the residuals as e_i = y_i - f_i (forming a vector ''e'').

If \bar{y} is the mean of the observed data:

: \bar{y}=\frac{1}{n}\sum_{i=1}^n y_i

then the variability of the data set can be measured with two sums of squares formulas:
* The sum of squares of residuals, also called the residual sum of squares: SS_\text{res}=\sum_i (y_i - f_i)^2=\sum_i e_i^2
* The total sum of squares (proportional to the variance of the data): SS_\text{tot}=\sum_i (y_i - \bar{y})^2

The most general definition of the coefficient of determination is

: R^2 = 1 - \frac{SS_\text{res}}{SS_\text{tot}}

In the best case, the modeled values exactly match the observed values, which results in SS_\text{res}=0 and R^2 = 1. A baseline model, which always predicts \bar{y}, will have R^2 = 0.
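The following is a minimal sketch (using NumPy, with hypothetical observed values y and fitted values f chosen only for illustration) of how the sums of squares and R^2 defined above can be computed:

```python
import numpy as np

# Hypothetical observed values y_i and fitted values f_i from some model
y = np.array([2.0, 3.1, 4.9, 6.2, 7.8])
f = np.array([2.2, 3.0, 4.8, 6.5, 7.5])

e = y - f                               # residuals e_i = y_i - f_i
ss_res = np.sum(e**2)                   # SS_res, residual sum of squares
ss_tot = np.sum((y - y.mean())**2)      # SS_tot, total sum of squares

r_squared = 1 - ss_res / ss_tot         # most general definition of R^2
print(r_squared)
```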


Relation to unexplained variance

In a general form, ''R''2 can be seen to be related to the fraction of variance unexplained (FVU), since the second term compares the unexplained variance (variance of the model's errors) with the total variance (of the data): R^2 = 1 - \text{FVU}


As explained variance

A larger value of ''R''2 implies a more successful regression model. Suppose ''R''2 = 0.49. This implies that 49% of the variability of the dependent variable in the data set has been accounted for, and the remaining 51% of the variability is still unaccounted for. For regression models, the regression sum of squares, also called the explained sum of squares, is defined as

: SS_\text{reg}=\sum_i (f_i -\bar{y})^2

In some cases, as in simple linear regression, the total sum of squares equals the sum of the two other sums of squares defined above:

: SS_\text{res}+SS_\text{reg}=SS_\text{tot}

See Partitioning in the general OLS model for a derivation of this result for one case where the relation holds. When this relation does hold, the above definition of ''R''2 is equivalent to

: R^2 = \frac{SS_\text{reg}}{SS_\text{tot}} = \frac{SS_\text{reg}/n}{SS_\text{tot}/n}

where ''n'' is the number of observations (cases) on the variables. In this form ''R''2 is expressed as the ratio of the explained variance (variance of the model's predictions, which is SS_\text{reg}/n) to the total variance (sample variance of the dependent variable, which is SS_\text{tot}/n).

This partition of the sum of squares holds for instance when the model values ''ƒ''''i'' have been obtained by linear regression. A milder sufficient condition reads as follows: the model has the form

: f_i=\widehat\alpha+\widehat\beta q_i

where the ''q''''i'' are arbitrary values that may or may not depend on ''i'' or on other free parameters (the common choice ''q''''i'' = ''x''''i'' is just one special case), and the coefficient estimates \widehat\alpha and \widehat\beta are obtained by minimizing the residual sum of squares. This set of conditions is an important one and it has a number of implications for the properties of the fitted residuals and the modelled values. In particular, under these conditions:

: \bar{f}=\bar{y}.
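As a numerical check of the partition described above, the sketch below fits a simple linear regression with an intercept to hypothetical data (numpy.polyfit is used here purely as a convenience) and verifies that SS_reg + SS_res = SS_tot, so that both definitions of ''R''2 coincide:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.9, 3.7, 5.8, 8.0, 9.6])       # hypothetical observations

slope, intercept = np.polyfit(x, y, 1)         # least squares fit with intercept
f = intercept + slope * x                      # fitted values f_i

ss_res = np.sum((y - f)**2)
ss_reg = np.sum((f - y.mean())**2)
ss_tot = np.sum((y - y.mean())**2)

print(np.isclose(ss_reg + ss_res, ss_tot))     # True: the partition holds
print(ss_reg / ss_tot, 1 - ss_res / ss_tot)    # both expressions give the same R^2
```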


As squared correlation coefficient

In linear least squares multiple regression (with fitted intercept and slope), ''R''2 equals \rho^2(y,f), the square of the Pearson correlation coefficient between the observed y and modeled (predicted) f data values of the dependent variable. In a linear least squares regression with a single explanator (with fitted intercept and slope), this is also equal to \rho^2(y,x), the squared Pearson correlation coefficient between the dependent variable y and explanatory variable x. It should not be confused with the correlation coefficient between two explanatory variables, defined as

: \rho_{\widehat\alpha,\widehat\beta} = \frac{\operatorname{cov}(\widehat\alpha,\widehat\beta)}{\sigma_{\widehat\alpha}\,\sigma_{\widehat\beta}},

where the covariance between two coefficient estimates, as well as their standard deviations, are obtained from the covariance matrix of the coefficient estimates, (X^T X)^{-1}.

Under more general modeling conditions, where the predicted values might be generated from a model different from linear least squares regression, an ''R''2 value can be calculated as the square of the correlation coefficient between the original y and modeled f data values. In this case, the value is not directly a measure of how good the modeled values are, but rather a measure of how good a predictor might be constructed from the modeled values (by creating a revised predictor of the form \alpha + \beta f_i). According to Everitt, this usage is specifically the definition of the term "coefficient of determination": the square of the correlation between two (general) variables.
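A short sketch, using hypothetical simulated data, of the equivalence just described: for a least squares fit with an intercept, ''R''2 computed from the sums of squares equals the squared Pearson correlation between y and the fitted values f, and also between y and the single regressor x:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2.0 + 1.5 * x + rng.normal(scale=0.5, size=50)   # hypothetical data

slope, intercept = np.polyfit(x, y, 1)               # least squares with intercept
f = intercept + slope * x

r2_def = 1 - np.sum((y - f)**2) / np.sum((y - y.mean())**2)
r2_corr_yf = np.corrcoef(y, f)[0, 1]**2              # squared correlation of y and f
r2_corr_yx = np.corrcoef(y, x)[0, 1]**2              # squared correlation of y and x

print(np.isclose(r2_def, r2_corr_yf), np.isclose(r2_def, r2_corr_yx))   # True True
```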


Interpretation

''R''2 is a measure of the goodness of fit of a model. In regression, the ''R''2 coefficient of determination is a statistical measure of how well the regression predictions approximate the real data points. An ''R''2 of 1 indicates that the regression predictions perfectly fit the data.

Values of ''R''2 outside the range 0 to 1 occur when the model fits the data worse than the worst possible least-squares predictor (equivalent to a horizontal hyperplane at a height equal to the mean of the observed data). This occurs when a wrong model was chosen, or nonsensical constraints were applied by mistake. If equation 1 of Kvålseth is used (this is the equation used most often), ''R''2 can be less than zero. If equation 2 of Kvålseth is used, ''R''2 can be greater than one.

In all instances where ''R''2 is used, the predictors are calculated by ordinary least-squares regression: that is, by minimizing SS_\text{res}. In this case, ''R''2 increases as the number of variables in the model is increased (''R''2 is monotone increasing with the number of variables included; it will never decrease). This illustrates a drawback to one possible use of ''R''2, where one might keep adding variables (kitchen sink regression) to increase the ''R''2 value. For example, if one is trying to predict the sales of a model of car from the car's gas mileage, price, and engine power, one can include probably irrelevant factors such as the first letter of the model's name or the height of the lead engineer designing the car, because the ''R''2 will never decrease as variables are added and will likely experience an increase due to chance alone. This leads to the alternative approach of looking at the adjusted ''R''2. The explanation of this statistic is almost the same as ''R''2 but it penalizes the statistic as extra variables are included in the model.

For cases other than fitting by ordinary least squares, the ''R''2 statistic can be calculated as above and may still be a useful measure. If fitting is by weighted least squares or generalized least squares, alternative versions of ''R''2 can be calculated appropriate to those statistical frameworks, while the "raw" ''R''2 may still be useful if it is more easily interpreted. Values for ''R''2 can be calculated for any type of predictive model, which need not have a statistical basis.


In a multiple linear model

Consider a linear model with more than a single explanatory variable, of the form

: Y_i = \beta_0 + \sum_{j=1}^p \beta_j X_{i,j} + \varepsilon_i,

where, for the ''i''th case, Y_i is the response variable, X_{i,1},\dots,X_{i,p} are ''p'' regressors, and \varepsilon_i is a mean zero error term. The quantities \beta_0,\dots,\beta_p are unknown coefficients, whose values are estimated by least squares. The coefficient of determination ''R''2 is a measure of the global fit of the model. Specifically, ''R''2 is an element of [0, 1] and represents the proportion of variability in ''Y''''i'' that may be attributed to some linear combination of the regressors (explanatory variables) in ''X''.

''R''2 is often interpreted as the proportion of response variation "explained" by the regressors in the model. Thus, ''R''2 = 1 indicates that the fitted model explains all variability in y, while ''R''2 = 0 indicates no 'linear' relationship (for straight line regression, this means that the straight line model is a constant line (slope = 0, intercept = \bar{y}) between the response variable and regressors). An interior value such as ''R''2 = 0.7 may be interpreted as follows: "Seventy percent of the variance in the response variable can be explained by the explanatory variables. The remaining thirty percent can be attributed to unknown, lurking variables or inherent variability."

A caution that applies to ''R''2, as to other statistical descriptions of correlation and association, is that "correlation does not imply causation." In other words, while correlations may sometimes provide valuable clues in uncovering causal relationships among variables, a non-zero estimated correlation between two variables is not, on its own, evidence that changing the value of one variable would result in changes in the values of other variables. For example, the practice of carrying matches (or a lighter) is correlated with incidence of lung cancer, but carrying matches does not cause cancer (in the standard sense of "cause").

In case of a single regressor, fitted by least squares, ''R''2 is the square of the Pearson product-moment correlation coefficient relating the regressor and the response variable. More generally, ''R''2 is the square of the correlation between the constructed predictor and the response variable. With more than one regressor, the ''R''2 can be referred to as the coefficient of multiple determination.
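A brief sketch (hypothetical simulated data; ordinary least squares via numpy.linalg.lstsq) of computing ''R''2 for a model with several regressors and an intercept:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 3
X = rng.normal(size=(n, p))                       # hypothetical regressors
beta = np.array([1.0, -2.0, 0.5])
y = 0.7 + X @ beta + rng.normal(scale=1.0, size=n)

X1 = np.column_stack([np.ones(n), X])             # design matrix with intercept column
b, *_ = np.linalg.lstsq(X1, y, rcond=None)        # least squares coefficient estimates
f = X1 @ b                                        # fitted values

r2 = 1 - np.sum((y - f)**2) / np.sum((y - y.mean())**2)
print(r2)   # proportion of variability in y attributable to the regressors
```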


Inflation of ''R''2

In least squares regression using typical data, ''R''2 is at least weakly increasing with an increase in the number of regressors in the model. Because increases in the number of regressors increase the value of ''R''2, ''R''2 alone cannot be used as a meaningful comparison of models with very different numbers of independent variables. For a meaningful comparison between two models, an F-test can be performed on the residual sum of squares, similar to the F-tests in Granger causality, though this is not always appropriate. As a reminder of this, some authors denote ''R''2 by ''R''''q''2, where ''q'' is the number of columns in ''X'' (the number of explanators including the constant).

To demonstrate this property, first recall that the objective of least squares linear regression is

: \min_b SS_\text{res}(b) \Rightarrow \min_b \sum_i (y_i - X_ib)^2

where ''Xi'' is a row vector of values of explanatory variables for case ''i'' and ''b'' is a column vector of coefficients of the respective elements of ''Xi''. The optimal value of the objective is weakly smaller as more explanatory variables are added and hence additional columns of X (the explanatory data matrix whose ''i''th row is ''Xi'') are added, by the fact that less constrained minimization leads to an optimal cost which is weakly smaller than more constrained minimization does. Given the previous conclusion and noting that SS_\text{tot} depends only on ''y'', the non-decreasing property of ''R''2 follows directly from the definition above.

The intuitive reason that using an additional explanatory variable cannot lower the ''R''2 is this: minimizing SS_\text{res} is equivalent to maximizing ''R''2. When the extra variable is included, the data always have the option of giving it an estimated coefficient of zero, leaving the predicted values and the ''R''2 unchanged. The only way that the optimization problem will give a non-zero coefficient is if doing so improves the ''R''2.

The above gives an analytical explanation of the inflation of ''R''2. Next, an example based on ordinary least squares from a geometric perspective is shown below. A simple case to be considered first:

: Y=\beta_0+\beta_1\cdot X_1+\varepsilon

This equation describes the ordinary least squares regression model with one regressor. The prediction is shown as the red vector in the figure on the right. Geometrically, it is the projection of the true value onto a model space in \mathbb{R} (without intercept). The residual is shown as the red line.

: Y=\beta_0+\beta_1\cdot X_1+\beta_2\cdot X_2 + \varepsilon

This equation corresponds to the ordinary least squares regression model with two regressors. The prediction is shown as the blue vector in the figure on the right. Geometrically, it is the projection of the true value onto a larger model space in \mathbb{R}^2 (without intercept). Noticeably, the values of \beta_0 and \beta_1 are not the same as in the equation for the smaller model space as long as X_1 and X_2 are not zero vectors. Therefore, the equations are expected to yield different predictions (i.e., the blue vector is expected to be different from the red vector). The least squares regression criterion ensures that the residual is minimized. In the figure, the blue line representing the residual is orthogonal to the model space in \mathbb{R}^2, giving the minimal distance from the space.

The smaller model space is a subspace of the larger one, and thereby the residual of the smaller model is guaranteed to be larger. Comparing the red and blue lines in the figure, the blue line is orthogonal to the space, and any other line would be longer than the blue one. Considering the calculation for ''R''2, a smaller value of SS_\text{res} will lead to a larger value of ''R''2, meaning that adding regressors will result in inflation of ''R''2.


Caveats

''R''2 does not indicate whether:
* the independent variables are a cause of the changes in the dependent variable;
* omitted-variable bias exists;
* the correct regression was used;
* the most appropriate set of independent variables has been chosen;
* there is collinearity present in the data on the explanatory variables;
* the model might be improved by using transformed versions of the existing set of independent variables;
* there are enough data points to make a solid conclusion;
* there are a few outliers in an otherwise good sample.


Extensions


Adjusted ''R''2

The use of an adjusted ''R''2 (one common notation is \bar R^2, pronounced "R bar squared"; another is R^2_\text{a} or R^2_\text{adj}) is an attempt to account for the phenomenon of the ''R''2 automatically increasing when extra explanatory variables are added to the model. There are many different ways of adjusting. By far the most used one, to the point that it is typically just referred to as adjusted ''R''2, is the correction proposed by Mordecai Ezekiel. The adjusted ''R''2 is defined as

: \bar R^2 = 1 - \frac{SS_\text{res}/\text{df}_\text{res}}{SS_\text{tot}/\text{df}_\text{tot}}

where df''res'' is the degrees of freedom of the estimate of the population variance around the model, and df''tot'' is the degrees of freedom of the estimate of the population variance around the mean. df''res'' is given in terms of the sample size ''n'' and the number of variables ''p'' in the model, df''res'' = ''n'' − ''p'' − 1. df''tot'' is given in the same way, but with ''p'' being zero for the mean, i.e. df''tot'' = ''n'' − 1. Inserting the degrees of freedom and using the definition of ''R''2, it can be rewritten as:

: \bar R^2 = 1-(1-R^2)\frac{n-1}{n-p-1}

where ''p'' is the total number of explanatory variables in the model (excluding the intercept), and ''n'' is the sample size.

The adjusted ''R''2 can be negative, and its value will always be less than or equal to that of ''R''2. Unlike ''R''2, the adjusted ''R''2 increases only when the increase in ''R''2 (due to the inclusion of a new explanatory variable) is more than one would expect to see by chance. If a set of explanatory variables with a predetermined hierarchy of importance are introduced into a regression one at a time, with the adjusted ''R''2 computed each time, the level at which adjusted ''R''2 reaches a maximum, and decreases afterward, would be the regression with the ideal combination of having the best fit without excess/unnecessary terms.

The adjusted ''R''2 can be interpreted as an instance of the bias-variance tradeoff. When we consider the performance of a model, a lower error represents a better performance. When the model becomes more complex, the variance will increase whereas the square of bias will decrease, and these two metrics add up to the total error. Combining these two trends, the bias-variance tradeoff describes a relationship between the performance of the model and its complexity, which is shown as a U-shaped curve on the right. For the adjusted ''R''2 specifically, the model complexity (i.e. the number of parameters) affects the ''R''2 and the term (''n'' − 1)/(''n'' − ''p'' − 1), and thereby captures their attributes in the overall performance of the model.

''R''2 can be interpreted as the variance of the model, which is influenced by the model complexity. A high ''R''2 indicates a lower bias error because the model can better explain the change of Y with predictors. For this reason, we make fewer (erroneous) assumptions, and this results in a lower bias error. Meanwhile, to accommodate fewer assumptions, the model tends to be more complex. Based on the bias-variance tradeoff, a higher complexity will lead to a decrease in bias and a better performance (below the optimal line). In \bar R^2, the term (1 − ''R''2) will be lower with high complexity, resulting in a higher \bar R^2, consistently indicating a better performance.

On the other hand, the term (''n'' − 1)/(''n'' − ''p'' − 1) is inversely affected by the model complexity: it will increase when regressors are added (i.e. with increased model complexity) and lead to worse performance. Based on the bias-variance tradeoff, a higher model complexity (beyond the optimal line) leads to increasing errors and a worse performance. Considering the calculation of \bar R^2, more parameters will increase the ''R''2 and lead to an increase in \bar R^2. Nevertheless, adding more parameters will also increase the term (''n'' − 1)/(''n'' − ''p'' − 1) and thus decrease \bar R^2. These two trends construct a reverse U-shaped relationship between model complexity and \bar R^2, which is consistent with the U-shaped trend of model complexity versus overall performance. Unlike ''R''2, which will always increase when model complexity increases, \bar R^2 will increase only when the bias eliminated by the added regressor is greater than the variance introduced simultaneously. Using \bar R^2 instead of ''R''2 could thereby prevent overfitting.

Following the same logic, adjusted ''R''2 can be interpreted as a less biased estimator of the population ''R''2, whereas the observed sample ''R''2 is a positively biased estimate of the population value. Adjusted ''R''2 is more appropriate when evaluating model fit (the variance in the dependent variable accounted for by the independent variables) and in comparing alternative models in the feature selection stage of model building.

The principle behind the adjusted ''R''2 statistic can be seen by rewriting the ordinary ''R''2 as

: R^2 = 1 - \frac{\text{VAR}_\text{res}}{\text{VAR}_\text{tot}}

where \text{VAR}_\text{res} = SS_\text{res}/n and \text{VAR}_\text{tot} = SS_\text{tot}/n are the sample variances of the estimated residuals and the dependent variable respectively, which can be seen as biased estimates of the population variances of the errors and of the dependent variable. These estimates are replaced by statistically unbiased versions: \text{VAR}_\text{res} = SS_\text{res}/(n-p) and \text{VAR}_\text{tot} = SS_\text{tot}/(n-1).

Despite using unbiased estimators for the population variances of the error and the dependent variable, adjusted ''R''2 is not an unbiased estimator of the population ''R''2, which results by using the population variances of the errors and the dependent variable instead of estimating them. Ingram Olkin and John W. Pratt derived the minimum-variance unbiased estimator for the population ''R''2, which is known as the Olkin–Pratt estimator. Comparisons of different approaches for adjusting ''R''2 concluded that in most situations either an approximate version of the Olkin–Pratt estimator or the exact Olkin–Pratt estimator should be preferred over the (Ezekiel) adjusted ''R''2.
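A minimal sketch of the Ezekiel adjustment defined above; the function name and the numerical values are illustrative only:

```python
def adjusted_r_squared(r2, n, p):
    """Ezekiel adjustment: penalize R^2 for the number of regressors p
    (excluding the intercept), given sample size n."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Hypothetical values: R^2 = 0.70 from a model with p = 5 regressors and n = 30 cases
print(adjusted_r_squared(0.70, n=30, p=5))   # noticeably smaller than 0.70
```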


Coefficient of partial determination

The coefficient of partial determination can be defined as the proportion of variation that cannot be explained in a reduced model, but can be explained by the predictors specified in a full(er) model. This coefficient is used to provide insight into whether or not one or more additional predictors may be useful in a more fully specified regression model.

The calculation for the partial ''R''2 is relatively straightforward after estimating two models and generating the ANOVA tables for them. The partial ''R''2 is

: \frac{SS_\text{res,reduced} - SS_\text{res,full}}{SS_\text{res,reduced}},

which is analogous to the usual coefficient of determination:

: \frac{SS_\text{tot} - SS_\text{res}}{SS_\text{tot}}.
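The following sketch (hypothetical simulated data; the helper ss_res is an assumption of convenience) estimates a reduced and a full model and computes the partial ''R''2 from their residual sums of squares, as in the formula above:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 80
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 0.8 * x2 + rng.normal(size=n)     # hypothetical data

def ss_res(X, y):
    """Residual sum of squares of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    b, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return np.sum((y - X1 @ b)**2)

ss_reduced = ss_res(x1.reshape(-1, 1), y)               # reduced model: x1 only
ss_full = ss_res(np.column_stack([x1, x2]), y)          # full model: x1 and x2

partial_r2 = (ss_reduced - ss_full) / ss_reduced        # share of the reduced model's
print(partial_r2)                                       # residual variation explained by x2
```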


Generalizing and decomposing ''R''2

As explained above, model selection heuristics such as the adjusted ''R''2 criterion and the F-test examine whether the total ''R''2 sufficiently increases to determine if a new regressor should be added to the model. If a regressor is added to the model that is highly correlated with other regressors which have already been included, then the total ''R''2 will hardly increase, even if the new regressor is of relevance. As a result, the above-mentioned heuristics will ignore relevant regressors when cross-correlations are high.

Alternatively, one can decompose a generalized version of ''R''2 to quantify the relevance of deviating from a hypothesis. As Hoornweg (2018) shows, several shrinkage estimators – such as Bayesian linear regression, ridge regression, and the (adaptive) lasso – make use of this decomposition of ''R''2 when they gradually shrink parameters from the unrestricted OLS solutions towards the hypothesized values. Let us first define the linear regression model as

: y=X\beta+\varepsilon.

It is assumed that the matrix ''X'' is standardized with Z-scores and that the column vector y is centered to have a mean of zero. Let the column vector \beta_0 refer to the hypothesized regression parameters and let the column vector b denote the estimated parameters. We can then define

: R^2=1-\frac{(y-Xb)'(y-Xb)}{(y-X\beta_0)'(y-X\beta_0)}.

An ''R''2 of 75% means that the in-sample accuracy improves by 75% if the data-optimized ''b'' solutions are used instead of the hypothesized \beta_0 values. In the special case that \beta_0 is a vector of zeros, we obtain the traditional ''R''2 again.

The individual effect on ''R''2 of deviating from a hypothesis can be computed with R^\otimes ('R-outer'). This ''p'' times ''p'' matrix is given by

: R^\otimes=(X'\tilde y_0)(X'\tilde y_0)' (X'X)^{-1}(\tilde y_0'\tilde y_0)^{-1},

where \tilde y_0=y-X\beta_0. The diagonal elements of R^\otimes exactly add up to ''R''2. If regressors are uncorrelated and \beta_0 is a vector of zeros, then the j^\text{th} diagonal element of R^\otimes simply corresponds to the ''r''2 value between x_j and y. When regressors x_i and x_j are correlated, R^\otimes_{ii} might increase at the cost of a decrease in R^\otimes_{jj}. As a result, the diagonal elements of R^\otimes may be smaller than 0 and, in more exceptional cases, larger than 1. To deal with such uncertainties, several shrinkage estimators implicitly take a weighted average of the diagonal elements of R^\otimes to quantify the relevance of deviating from a hypothesized value. See the lasso for an example.
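A numerical sketch of this decomposition under the stated assumptions (standardized X, centered y, hypothesized \beta_0 taken as a vector of zeros); the simulated data are illustrative only. It also checks that the diagonal of R^\otimes adds up to ''R''2:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 200, 3
X = rng.normal(size=(n, p))
X = (X - X.mean(axis=0)) / X.std(axis=0)             # standardize regressors (z-scores)
y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(size=n)
y = y - y.mean()                                      # center the response

b, *_ = np.linalg.lstsq(X, y, rcond=None)             # unrestricted OLS estimates
beta0 = np.zeros(p)                                   # hypothesized parameter values

y0 = y - X @ beta0                                    # \tilde y_0
r2 = 1 - np.sum((y - X @ b)**2) / np.sum(y0**2)       # generalized R^2

v = X.T @ y0
R_outer = np.outer(v, v) @ np.linalg.inv(X.T @ X) / (y0 @ y0)

print(np.isclose(np.trace(R_outer), r2))              # True: diagonal elements add up to R^2
```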


''R''2 in logistic regression

In the case of logistic regression, usually fit by maximum likelihood, there are several choices of pseudo-''R''2. One is the generalized ''R''2 originally proposed by Cox & Snell, and independently by Magee:

: R^2 = 1 - \left(\frac{\mathcal{L}(0)}{\mathcal{L}(\widehat\theta)}\right)^{2/n}

where \mathcal{L}(0) is the likelihood of the model with only the intercept, \mathcal{L}(\widehat\theta) is the likelihood of the estimated model (i.e., the model with a given set of parameter estimates) and ''n'' is the sample size. It is easily rewritten to:

: R^2 = 1 - e^{\frac{2}{n}\left(\ln\mathcal{L}(0) - \ln\mathcal{L}(\widehat\theta)\right)} = 1 - e^{-D/n}

where ''D'' is the test statistic of the likelihood ratio test.

Nico Nagelkerke noted that it had the following properties:
# It is consistent with the classical coefficient of determination when both can be computed;
# Its value is maximised by the maximum likelihood estimation of a model;
# It is asymptotically independent of the sample size;
# The interpretation is the proportion of the variation explained by the model;
# The values are between 0 and 1, with 0 denoting that the model does not explain any variation and 1 denoting that it perfectly explains the observed variation;
# It does not have any unit.

However, in the case of a logistic model, where \mathcal{L}(\widehat\theta) cannot be greater than 1, ''R''2 is between 0 and R^2_\max = 1- (\mathcal{L}(0))^{2/n}: thus, Nagelkerke suggested the possibility to define a scaled ''R''2 as ''R''2/''R''2max.
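A rough sketch of the generalized ''R''2 and its Nagelkerke scaling for a logistic model fitted by maximum likelihood; the simple Newton–Raphson fitting routine and the simulated data are illustrative assumptions, not part of the original derivation:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
x = rng.normal(size=n)
p_true = 1 / (1 + np.exp(-(0.5 + 1.5 * x)))          # hypothetical logistic model
y = rng.binomial(1, p_true)

def log_likelihood(beta, X, y):
    eta = X @ beta
    return np.sum(y * eta - np.log1p(np.exp(eta)))   # Bernoulli log-likelihood

def fit_logistic(X, y, iters=25):
    beta = np.zeros(X.shape[1])
    for _ in range(iters):                           # Newton-Raphson iterations
        p = 1 / (1 + np.exp(-(X @ beta)))
        W = p * (1 - p)
        grad = X.T @ (y - p)
        hess = X.T @ (X * W[:, None])
        beta = beta + np.linalg.solve(hess, grad)
    return beta

X = np.column_stack([np.ones(n), x])                 # intercept plus one regressor
X0 = np.ones((n, 1))                                 # intercept-only model

ll_full = log_likelihood(fit_logistic(X, y), X, y)
ll_null = log_likelihood(fit_logistic(X0, y), X0, y)

r2_cs = 1 - np.exp((2 / n) * (ll_null - ll_full))    # Cox & Snell / Magee generalized R^2
r2_max = 1 - np.exp((2 / n) * ll_null)               # upper bound for a logistic model
print(r2_cs, r2_cs / r2_max)                          # generalized and Nagelkerke-scaled R^2
```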


Comparison with residual statistics

Occasionally, residual statistics are used for indicating goodness of fit. The norm of residuals is calculated as the square-root of the sum of squares of residuals (SSR):

: \text{norm of residuals} = \sqrt{SS_\text{res}} = \|e\|.

Similarly, the reduced chi-square is calculated as the SSR divided by the degrees of freedom.

Both ''R''2 and the norm of residuals have their relative merits. For least squares analysis ''R''2 varies between 0 and 1, with larger numbers indicating better fits and 1 representing a perfect fit. The norm of residuals varies from 0 to infinity, with smaller numbers indicating better fits and zero indicating a perfect fit. One advantage and disadvantage of ''R''2 is that the SS_\text{tot} term acts to normalize the value. If the ''yi'' values are all multiplied by a constant, the norm of residuals will also change by that constant but ''R''2 will stay the same. As a basic example, for a linear least squares fit to a set of data: ''R''2 = 0.998, and norm of residuals = 0.302. If all values of ''y'' are multiplied by 1000 (for example, in an SI prefix change), then ''R''2 remains the same, but the norm of residuals becomes 302.

Another single-parameter indicator of fit is the RMSE of the residuals, or standard deviation of the residuals. This would have a value of 0.135 for the above example, given that the fit was linear with an unforced intercept (OriginLab webpage, http://www.originlab.com/doc/Origin-Help/LR-Algorithm, retrieved February 9, 2016).
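The scale-invariance point can be seen in a short sketch (hypothetical data, not the example values quoted above): multiplying y by a constant scales the norm of residuals by that constant while leaving ''R''2 unchanged:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.9, 3.7, 5.8, 8.0, 9.6])              # hypothetical data

def fit_stats(x, y):
    """Return R^2 and the norm of residuals of a linear fit with intercept."""
    slope, intercept = np.polyfit(x, y, 1)
    e = y - (intercept + slope * x)
    r2 = 1 - np.sum(e**2) / np.sum((y - y.mean())**2)
    return r2, np.sqrt(np.sum(e**2))

print(fit_stats(x, y))            # some R^2 and residual norm
print(fit_stats(x, 1000 * y))     # same R^2, residual norm scaled by 1000
```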


History

The creation of the coefficient of determination has been attributed to the geneticist Sewall Wright and was first published in 1921.


See also

* Anscombe's quartet
* Fraction of variance unexplained
* Goodness of fit
* Nash–Sutcliffe model efficiency coefficient (hydrological applications)
* Pearson product-moment correlation coefficient
* Proportional reduction in loss
* Regression model validation
* Root mean square deviation
* Stepwise regression

