In statistics, linear regression is a linear approach for modelling the relationship between a scalar response and one or more explanatory variables (also known as dependent and independent variables). The case of one explanatory variable is called ''simple linear regression''; for more than one, the process is called multiple linear regression. This term is distinct from multivariate linear regression, where multiple correlated dependent variables are predicted, rather than a single scalar variable.

In linear regression, the relationships are modeled using linear predictor functions whose unknown model parameters are estimated from the data. Such models are called linear models. Most commonly, the conditional mean of the response given the values of the explanatory variables (or predictors) is assumed to be an affine function of those values; less commonly, the conditional median or some other quantile is used. Like all forms of regression analysis, linear regression focuses on the conditional probability distribution of the response given the values of the predictors, rather than on the joint probability distribution of all of these variables, which is the domain of multivariate analysis.

Linear regression was the first type of regression analysis to be studied rigorously, and to be used extensively in practical applications. This is because models which depend linearly on their unknown parameters are easier to fit than models which are non-linearly related to their parameters, and because the statistical properties of the resulting estimators are easier to determine.

Linear regression has many practical uses. Most applications fall into one of the following two broad categories:
* If the goal is error reduction in prediction or forecasting, linear regression can be used to fit a predictive model to an observed data set of values of the response and explanatory variables. After developing such a model, if additional values of the explanatory variables are collected without an accompanying response value, the fitted model can be used to make a prediction of the response.
* If the goal is to explain variation in the response variable that can be attributed to variation in the explanatory variables, linear regression analysis can be applied to quantify the strength of the relationship between the response and the explanatory variables, and in particular to determine whether some explanatory variables may have no linear relationship with the response at all, or to identify which subsets of explanatory variables may contain redundant information about the response.

Linear regression models are often fitted using the least squares approach, but they may also be fitted in other ways, such as by minimizing the "lack of fit" in some other norm (as with least absolute deviations regression), or by minimizing a penalized version of the least squares cost function as in ridge regression (''L''2-norm penalty) and lasso (''L''1-norm penalty). Conversely, the least squares approach can be used to fit models that are not linear models. Thus, although the terms "least squares" and "linear model" are closely linked, they are not synonymous.


Formulation

Given a data set \{y_i,\, x_{i1}, \ldots, x_{ip}\}_{i=1}^n of ''n'' statistical units, a linear regression model assumes that the relationship between the dependent variable ''y'' and the ''p''-vector of regressors x is linear. This relationship is modeled through a ''disturbance term'' or ''error variable'' ''ε'', an unobserved random variable that adds "noise" to the linear relationship between the dependent variable and regressors. Thus the model takes the form

: y_i = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_p x_{ip} + \varepsilon_i = \mathbf{x}^\mathsf{T}_i\boldsymbol\beta + \varepsilon_i, \qquad i = 1, \ldots, n,

where T denotes the transpose, so that x''i''T''β'' is the inner product between vectors x''i'' and ''β''.

Often these ''n'' equations are stacked together and written in matrix notation as

: \mathbf{y} = \mathbf{X} \boldsymbol\beta + \boldsymbol\varepsilon, \,

where

: \mathbf{y} = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}, \quad
\mathbf{X} = \begin{pmatrix} \mathbf{x}^\mathsf{T}_1 \\ \mathbf{x}^\mathsf{T}_2 \\ \vdots \\ \mathbf{x}^\mathsf{T}_n \end{pmatrix} = \begin{pmatrix} 1 & x_{11} & \cdots & x_{1p} \\ 1 & x_{21} & \cdots & x_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n1} & \cdots & x_{np} \end{pmatrix},

: \boldsymbol\beta = \begin{pmatrix} \beta_0 \\ \beta_1 \\ \beta_2 \\ \vdots \\ \beta_p \end{pmatrix}, \quad
\boldsymbol\varepsilon = \begin{pmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{pmatrix}.
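The stacked matrix form maps directly onto array code. The following is a minimal sketch (Python with NumPy is assumed here, and the coefficient values and noise scale are invented for illustration) that builds a design matrix with a constant regressor and simulates responses from the model y = Xβ + ε.

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 100, 2                              # n statistical units, p regressors
X_raw = rng.normal(size=(n, p))            # observed regressor values x_i1 ... x_ip
X = np.column_stack([np.ones(n), X_raw])   # prepend the constant regressor x_i0 = 1

beta = np.array([1.5, -2.0, 0.5])          # (p+1)-vector: intercept and slopes (assumed values)
eps = rng.normal(scale=0.3, size=n)        # unobserved disturbance term

y = X @ beta + eps                         # the stacked model y = X beta + epsilon
print(X.shape, y.shape)                    # (100, 3) (100,)
```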


Notation and terminology

* \mathbf{y} is a vector of observed values y_i\ (i=1,\ldots,n) of the variable called the ''regressand'', ''endogenous variable'', ''response variable'', ''measured variable'', ''criterion variable'', or ''dependent variable''. This variable is also sometimes known as the ''predicted variable'', but this should not be confused with ''predicted values'', which are denoted \hat{y}. The decision as to which variable in a data set is modeled as the dependent variable and which are modeled as the independent variables may be based on a presumption that the value of one of the variables is caused by, or directly influenced by, the other variables. Alternatively, there may be an operational reason to model one of the variables in terms of the others, in which case there need be no presumption of causality.
* \mathbf{X} may be seen as a matrix of row vectors \mathbf{x}_i or of ''n''-dimensional column vectors \mathbf{x}_j, which are known as ''regressors'', ''exogenous variables'', ''explanatory variables'', ''covariates'', ''input variables'', ''predictor variables'', or ''independent variables'' (not to be confused with the concept of independent random variables). The matrix \mathbf{X} is sometimes called the design matrix.
** Usually a constant is included as one of the regressors. In particular, x_{i0}=1 for i=1,\ldots,n. The corresponding element of ''β'' is called the ''intercept''. Many statistical inference procedures for linear models require an intercept to be present, so it is often included even if theoretical considerations suggest that its value should be zero.
** Sometimes one of the regressors can be a non-linear function of another regressor or of the data, as in polynomial regression and segmented regression. The model remains linear as long as it is linear in the parameter vector ''β''.
** The values ''x''''ij'' may be viewed either as observed values of random variables ''X''''j'' or as fixed values chosen prior to observing the dependent variable. Both interpretations may be appropriate in different cases, and they generally lead to the same estimation procedures; however, different approaches to asymptotic analysis are used in these two situations.
* \boldsymbol\beta is a (p+1)-dimensional ''parameter vector'', where \beta_0 is the intercept term (if one is included in the model; otherwise \boldsymbol\beta is ''p''-dimensional). Its elements are known as ''effects'' or ''regression coefficients'' (although the latter term is sometimes reserved for the ''estimated'' effects). In simple linear regression, ''p''=1, and the coefficient is known as the ''regression slope''. Statistical estimation and inference in linear regression focuses on ''β''. The elements of this parameter vector are interpreted as the partial derivatives of the dependent variable with respect to the various independent variables.
* \boldsymbol\varepsilon is a vector of values \varepsilon_i. This part of the model is called the ''error term'', ''disturbance term'', or sometimes ''noise'' (in contrast with the "signal" provided by the rest of the model). This variable captures all other factors which influence the dependent variable ''y'' other than the regressors x. The relationship between the error term and the regressors, for example their correlation, is a crucial consideration in formulating a linear regression model, as it will determine the appropriate estimation method.

Fitting a linear model to a given data set usually requires estimating the regression coefficients \boldsymbol\beta such that the error term \boldsymbol\varepsilon=\mathbf{y}- \mathbf{X}\boldsymbol\beta is minimized. For example, it is common to use the sum of squared errors \|\boldsymbol\varepsilon\|_2^2 as a measure of \boldsymbol\varepsilon for minimization.
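As a small illustration of this fitting criterion, the sketch below (Python/NumPy assumed; the data values and candidate coefficients are invented) evaluates the residual vector ε = y − Xβ and its squared Euclidean norm for one candidate coefficient vector.

```python
import numpy as np

# Tiny illustrative data set (arbitrary values): 4 units, an intercept column and one regressor.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([0.1, 1.9, 4.2, 5.8])

beta_candidate = np.array([0.0, 2.0])   # a trial coefficient vector
residuals = y - X @ beta_candidate      # epsilon = y - X beta
sse = residuals @ residuals             # ||epsilon||_2^2, the quantity least squares minimizes
print(sse)
```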


Example

Consider a situation where a small ball is being tossed up in the air and then we measure its heights of ascent ''h''''i'' at various moments in time ''t''''i''. Physics tells us that, ignoring the drag, the relationship can be modeled as

: h_i = \beta_1 t_i + \beta_2 t_i^2 + \varepsilon_i,

where ''β''1 determines the initial velocity of the ball, ''β''2 is proportional to the standard gravity, and ''ε''''i'' is due to measurement errors. Linear regression can be used to estimate the values of ''β''1 and ''β''2 from the measured data. This model is non-linear in the time variable, but it is linear in the parameters ''β''1 and ''β''2; if we take regressors x''i'' = (''x''''i''1, ''x''''i''2) = (''t''''i'', ''t''''i''2), the model takes on the standard form

: h_i = \mathbf{x}^\mathsf{T}_i\boldsymbol\beta + \varepsilon_i.
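As a sketch of how this example could be fit in practice (Python/NumPy assumed; the measurement times and heights below are invented), one builds the regressors (''t''''i'', ''t''''i''2) and solves the resulting least-squares problem:

```python
import numpy as np

# Hypothetical measurements of a tossed ball (times in s, heights in m; invented values).
t = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
h = np.array([0.95, 1.80, 2.55, 3.20, 3.75, 4.22])

# Regressors x_i = (t_i, t_i^2); the model is linear in beta1, beta2
# even though it is non-linear in t.
X = np.column_stack([t, t**2])

beta, *_ = np.linalg.lstsq(X, h, rcond=None)
print(beta)   # beta[0] ~ initial velocity, beta[1] ~ -g/2, up to measurement noise
```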


Assumptions

Standard linear regression models with standard estimation techniques make a number of assumptions about the predictor variables, the response variables and their relationship. Numerous extensions have been developed that allow each of these assumptions to be relaxed (i.e. reduced to a weaker form), and in some cases eliminated entirely. Generally these extensions make the estimation procedure more complex and time-consuming, and may also require more data in order to produce an equally precise model.

The following are the major assumptions made by standard linear regression models with standard estimation techniques (e.g. ordinary least squares):
* Weak exogeneity. This essentially means that the predictor variables ''x'' can be treated as fixed values, rather than random variables. This means, for example, that the predictor variables are assumed to be error-free, that is, not contaminated with measurement errors. Although this assumption is not realistic in many settings, dropping it leads to significantly more difficult errors-in-variables models.
* Linearity. This means that the mean of the response variable is a linear combination of the parameters (regression coefficients) and the predictor variables. Note that this assumption is much less restrictive than it may at first seem. Because the predictor variables are treated as fixed values (see above), linearity is really only a restriction on the parameters. The predictor variables themselves can be arbitrarily transformed, and in fact multiple copies of the same underlying predictor variable can be added, each one transformed differently. This technique is used, for example, in polynomial regression, which uses linear regression to fit the response variable as an arbitrary polynomial function (up to a given degree) of a predictor variable. With this much flexibility, models such as polynomial regression often have "too much power", in that they tend to overfit the data. As a result, some kind of regularization must typically be used to prevent unreasonable solutions coming out of the estimation process. Common examples are ridge regression and lasso regression. Bayesian linear regression can also be used, which by its nature is more or less immune to the problem of overfitting. (In fact, ridge regression and lasso regression can both be viewed as special cases of Bayesian linear regression, with particular types of prior distributions placed on the regression coefficients.)
* Constant variance (a.k.a. homoscedasticity). This means that the variance of the errors does not depend on the values of the predictor variables. Thus the variability of the responses for given fixed values of the predictors is the same regardless of how large or small the responses are. This is often not the case, as a variable whose mean is large will typically have a greater variance than one whose mean is small. For example, a person whose income is predicted to be $100,000 may easily have an actual income of $80,000 or $120,000, i.e., a standard deviation of around $20,000, while another person with a predicted income of $10,000 is unlikely to have the same $20,000 standard deviation, since that would imply their actual income could vary anywhere between −$10,000 and $30,000. (In fact, as this shows, in many cases, often the same cases where the assumption of normally distributed errors fails, the variance or standard deviation should be predicted to be proportional to the mean, rather than constant.) The absence of homoscedasticity is called heteroscedasticity. In order to check this assumption, a plot of residuals versus predicted values (or the values of each individual predictor) can be examined for a "fanning effect" (i.e., increasing or decreasing vertical spread as one moves left to right on the plot). A plot of the absolute or squared residuals versus the predicted values (or each predictor) can also be examined for a trend or curvature. Formal tests can also be used; see Heteroscedasticity. The presence of heteroscedasticity will result in an overall "average" estimate of variance being used instead of one that takes into account the true variance structure. This leads to less precise (but in the case of ordinary least squares, not biased) parameter estimates and biased standard errors, resulting in misleading tests and interval estimates. The mean squared error for the model will also be wrong. Various estimation techniques including weighted least squares and the use of heteroscedasticity-consistent standard errors can handle heteroscedasticity in a quite general way. Bayesian linear regression techniques can also be used when the variance is assumed to be a function of the mean. It is also possible in some cases to fix the problem by applying a transformation to the response variable (e.g., fitting the logarithm of the response variable using a linear regression model, which implies that the response variable itself has a log-normal distribution rather than a normal distribution).
* Independence of errors. This assumes that the errors of the response variables are uncorrelated with each other. (Actual statistical independence is a stronger condition than mere lack of correlation and is often not needed, although it can be exploited if it is known to hold.) Some methods such as generalized least squares are capable of handling correlated errors, although they typically require significantly more data unless some sort of regularization is used to bias the model towards assuming uncorrelated errors. Bayesian linear regression is a general way of handling this issue.
* Lack of perfect multicollinearity in the predictors. For standard least squares estimation methods, the design matrix ''X'' must have full column rank ''p''; otherwise perfect multicollinearity exists in the predictor variables, meaning a linear relationship exists between two or more predictor variables. This can be caused by accidentally duplicating a variable in the data, using a linear transformation of a variable along with the original (e.g., the same temperature measurements expressed in Fahrenheit and Celsius), or including a linear combination of multiple variables in the model, such as their mean. It can also happen if there is too little data available compared to the number of parameters to be estimated (e.g., fewer data points than regression coefficients). Near violations of this assumption, where predictors are highly but not perfectly correlated, can reduce the precision of parameter estimates (see Variance inflation factor and the sketch after this list). In the case of perfect multicollinearity, the parameter vector ''β'' will be non-identifiable: it has no unique solution. In such a case, only some of the parameters can be identified (i.e., their values can only be estimated within some linear subspace of the full parameter space R''p''). See partial least squares regression. Methods for fitting linear models with multicollinearity have been developed, some of which require additional assumptions such as "effect sparsity", namely that a large fraction of the effects are exactly zero. Note that the more computationally expensive iterated algorithms for parameter estimation, such as those used in generalized linear models, do not suffer from this problem.

Beyond these assumptions, several other statistical properties of the data strongly influence the performance of different estimation methods:
* The statistical relationship between the error terms and the regressors plays an important role in determining whether an estimation procedure has desirable sampling properties such as being unbiased and consistent.
* The arrangement, or probability distribution, of the predictor variables x has a major influence on the precision of estimates of ''β''. Sampling and design of experiments are highly developed subfields of statistics that provide guidance for collecting data in such a way as to achieve a precise estimate of ''β''.
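To make the multicollinearity point concrete, the following rough sketch (Python/NumPy assumed; the data are simulated) computes the variance inflation factor VIF_j = 1/(1 − R_j²), where R_j² comes from regressing predictor ''j'' on the remaining predictors; large values flag near-collinearity.

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X (X has no intercept column)."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])        # regress x_j on the other predictors
        coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        resid = X[:, j] - A @ coef
        r2 = 1.0 - resid @ resid / np.sum((X[:, j] - X[:, j].mean()) ** 2)
        out[j] = 1.0 / (1.0 - r2)
    return out

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = x1 + 0.05 * rng.normal(size=200)    # nearly collinear with x1
x3 = rng.normal(size=200)
print(vif(np.column_stack([x1, x2, x3])))   # first two VIFs are large, third is near 1
```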


Interpretation

A fitted linear regression model can be used to identify the relationship between a single predictor variable ''x''''j'' and the response variable ''y'' when all the other predictor variables in the model are "held fixed". Specifically, the interpretation of ''β''''j'' is the expected change in ''y'' for a one-unit change in ''x''''j'' when the other covariates are held fixed, that is, the expected value of the partial derivative of ''y'' with respect to ''x''''j''. This is sometimes called the ''unique effect'' of ''x''''j'' on ''y''. In contrast, the ''marginal effect'' of ''x''''j'' on ''y'' can be assessed using a correlation coefficient or simple linear regression model relating only ''x''''j'' to ''y''; this effect is the total derivative of ''y'' with respect to ''x''''j''.

Care must be taken when interpreting regression results, as some of the regressors may not allow for marginal changes (such as dummy variables, or the intercept term), while others cannot be held fixed (recall the ball-tossing example above: it would be impossible to "hold ''t''''i'' fixed" and at the same time change the value of ''t''''i''2).

It is possible that the unique effect can be nearly zero even when the marginal effect is large. This may imply that some other covariate captures all the information in ''x''''j'', so that once that variable is in the model, there is no contribution of ''x''''j'' to the variation in ''y''. Conversely, the unique effect of ''x''''j'' can be large while its marginal effect is nearly zero. This would happen if the other covariates explained a great deal of the variation of ''y'', but they mainly explain variation in a way that is complementary to what is captured by ''x''''j''. In this case, including the other variables in the model reduces the part of the variability of ''y'' that is unrelated to ''x''''j'', thereby strengthening the apparent relationship with ''x''''j''.

The meaning of the expression "held fixed" may depend on how the values of the predictor variables arise. If the experimenter directly sets the values of the predictor variables according to a study design, the comparisons of interest may literally correspond to comparisons among units whose predictor variables have been "held fixed" by the experimenter. Alternatively, the expression "held fixed" can refer to a selection that takes place in the context of data analysis. In this case, we "hold a variable fixed" by restricting our attention to the subsets of the data that happen to have a common value for the given predictor variable. This is the only interpretation of "held fixed" that can be used in an observational study.

The notion of a "unique effect" is appealing when studying a complex system where multiple interrelated components influence the response variable. In some cases, it can literally be interpreted as the causal effect of an intervention that is linked to the value of a predictor variable. However, it has been argued that in many cases multiple regression analysis fails to clarify the relationships between the predictor variables and the response variable when the predictors are correlated with each other and are not assigned following a study design.
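A brief simulated sketch of the distinction above (Python/NumPy assumed; the data-generating values are arbitrary): the marginal effect is the slope from a simple regression of ''y'' on ''x''''j'' alone, while the unique effect is the coefficient of ''x''''j'' in the multiple regression with the other covariate included.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)                       # x2 is correlated with x1
y = 1.0 + 2.0 * x1 + 0.0 * x2 + rng.normal(scale=0.5, size=n)  # x2 has no unique effect on y

def ols(X, y):
    X = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

print(ols(x2.reshape(-1, 1), y)[1])            # marginal effect of x2: clearly non-zero
print(ols(np.column_stack([x1, x2]), y)[2])    # unique effect of x2: near zero
```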


Group effects

In a multiple linear regression model

: y= \beta_0 + \beta_1 x_1 + \cdots + \beta_p x_p + \varepsilon,

parameter \beta_j of predictor variable x_j represents the individual effect of x_j. It has an interpretation as the expected change in the response variable y when x_j increases by one unit with other predictor variables held constant. When x_j is strongly correlated with other predictor variables, it is improbable that x_j can increase by one unit with other variables held constant. In this case, the interpretation of \beta_j becomes problematic as it is based on an improbable condition, and the effect of x_j cannot be evaluated in isolation.

For a group of predictor variables, say, \{x_1, x_2, \dots, x_q\}, a group effect \xi(\mathbf{w}) is defined as a linear combination of their parameters

: \xi(\mathbf{w}) = w_1\beta_1+w_2\beta_2+\dots+w_q\beta_q,

where \mathbf{w}=(w_1,w_2,\dots,w_q)^\intercal is a weight vector satisfying \sum_{j=1}^q |w_j| =1. Because of the constraint on \mathbf{w}, \xi(\mathbf{w}) is also referred to as a normalized group effect. A group effect \xi(\mathbf{w}) has an interpretation as the expected change in y when variables in the group x_1, x_2,\dots,x_q change by the amount w_1, w_2, \dots, w_q, respectively, at the same time with variables not in the group held constant. It generalizes the individual effect of a variable to a group of variables in that (i) if q=1, then the group effect reduces to an individual effect, and (ii) if w_i=1 and w_j=0 for j\neq i, then the group effect also reduces to an individual effect. A group effect \xi(\mathbf{w}) is said to be meaningful if the underlying simultaneous changes of the q variables (w_1,w_2,\dots, w_q)^\intercal are probable.

Group effects provide a means to study the collective impact of strongly correlated predictor variables in linear regression models. Individual effects of such variables are not well-defined as their parameters do not have good interpretations. Furthermore, when the sample size is not large, none of their parameters can be accurately estimated by the least squares regression due to the multicollinearity problem. Nevertheless, there are meaningful group effects that have good interpretations and can be accurately estimated by the least squares regression. A simple way to identify these meaningful group effects is to use an all positive correlations (APC) arrangement of the strongly correlated variables, under which pairwise correlations among these variables are all positive, and to standardize all p predictor variables in the model so that they all have mean zero and length one.

To illustrate this, suppose that \{x_1, x_2, \dots, x_q\} is a group of strongly correlated variables in an APC arrangement and that they are not strongly correlated with predictor variables outside the group. Let y' be the centred y and x_j' be the standardized x_j. Then, the standardized linear regression model is

: y'= \beta_1' x_1' + \cdots + \beta_p' x_p' + \varepsilon.

Parameters \beta_j in the original model, including \beta_0, are simple functions of \beta_j' in the standardized model. The standardization of variables does not change their correlations, so \{x_1', x_2', \dots, x_q'\} is a group of strongly correlated variables in an APC arrangement and they are not strongly correlated with other predictor variables in the standardized model. A group effect of \{x_1', x_2', \dots, x_q'\} is

: \xi'(\mathbf{w})=w_1\beta_1'+w_2\beta_2'+\dots+w_q\beta_q',

and its minimum-variance unbiased linear estimator is

: \hat{\xi}'(\mathbf{w})=w_1\hat{\beta}_1'+w_2\hat{\beta}_2'+\dots+w_q\hat{\beta}_q',

where \hat{\beta}_j' is the least squares estimator of \beta_j'.

In particular, the average group effect of the q standardized variables is

: \xi_A=\frac{1}{q}(\beta_1'+\beta_2'+\dots+\beta_q'),

which has an interpretation as the expected change in y' when all x_j' in the strongly correlated group increase by (1/q)th of a unit at the same time with variables outside the group held constant. With strong positive correlations and in standardized units, variables in the group are approximately equal, so they are likely to increase at the same time and in similar amount. Thus, the average group effect \xi_A is a meaningful effect. It can be accurately estimated by its minimum-variance unbiased linear estimator \hat{\xi}_A=\frac{1}{q}(\hat{\beta}_1'+\hat{\beta}_2'+\dots+\hat{\beta}_q'), even when individually none of the \beta_j' can be accurately estimated by \hat{\beta}_j'.

Not all group effects are meaningful or can be accurately estimated. For example, \beta_1' is a special group effect with weights w_1=1 and w_j=0 for j\neq 1, but it cannot be accurately estimated by \hat{\beta}'_1. It is also not a meaningful effect. In general, for a group of q strongly correlated predictor variables in an APC arrangement in the standardized model, group effects whose weight vectors \mathbf{w} are at or near the centre of the simplex \sum_{j=1}^q w_j=1 (w_j\geq 0) are meaningful and can be accurately estimated by their minimum-variance unbiased linear estimators. Effects with weight vectors far away from the centre are not meaningful, as such weight vectors represent simultaneous changes of the variables that violate the strong positive correlations of the standardized variables in an APC arrangement. As such, they are not probable. These effects also cannot be accurately estimated.

Applications of the group effects include (1) estimation and inference for meaningful group effects on the response variable, (2) testing for "group significance" of the q variables via testing H_0: \xi_A=0 versus H_1: \xi_A\neq 0, and (3) characterizing the region of the predictor variable space over which predictions by the least squares estimated model are accurate. A group effect of the original variables \{x_1, x_2, \dots, x_q\} can be expressed as a constant times a group effect of the standardized variables \{x_1', x_2', \dots, x_q'\}. The former is meaningful when the latter is. Thus meaningful group effects of the original variables can be found through meaningful group effects of the standardized variables.
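The following rough sketch (Python/NumPy assumed; simulated data with invented values) standardizes two strongly positively correlated predictors to mean zero and length one, fits the standardized model on the centred response, and estimates the average group effect \xi_A by averaging the fitted coefficients, as described above.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
z = rng.normal(size=n)
x1 = z + 0.1 * rng.normal(size=n)   # x1 and x2 are strongly positively correlated (APC arrangement)
x2 = z + 0.1 * rng.normal(size=n)
y = 3.0 + 1.0 * x1 + 1.0 * x2 + rng.normal(scale=0.5, size=n)

def standardize(v):
    v = v - v.mean()
    return v / np.linalg.norm(v)    # mean zero, length one

Xs = np.column_stack([standardize(x1), standardize(x2)])
ys = y - y.mean()                   # centred response

beta_std, *_ = np.linalg.lstsq(Xs, ys, rcond=None)   # coefficients of the standardized model
xi_A_hat = beta_std.mean()                           # estimated average group effect of the two variables
print(beta_std, xi_A_hat)   # under strong collinearity the individual estimates vary more than their average
```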


Extensions

Numerous extensions of linear regression have been developed, which allow some or all of the assumptions underlying the basic model to be relaxed.


Simple and multiple linear regression

The very simplest case of a single scalar predictor variable ''x'' and a single scalar response variable ''y'' is known as ''simple linear regression''. The extension to multiple and/or vector-valued predictor variables (denoted with a capital ''X'') is known as multiple linear regression, also known as multivariable linear regression (not to be confused with ''multivariate linear regression'').

Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is

: Y_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \ldots + \beta_p X_{ip} + \epsilon_i

for each observation ''i'' = 1, ..., ''n''. In the formula above we consider ''n'' observations of one dependent variable and ''p'' independent variables. Thus, ''Y''''i'' is the ''i''th observation of the dependent variable, ''X''''ij'' is the ''i''th observation of the ''j''th independent variable, ''j'' = 1, 2, ..., ''p''. The values ''β''''j'' represent parameters to be estimated, and ''ε''''i'' is the ''i''th independent identically distributed normal error.

In the more general multivariate linear regression, there is one equation of the above form for each of ''m'' > 1 dependent variables that share the same set of explanatory variables and hence are estimated simultaneously with each other:

: Y_{ij} = \beta_{0j} + \beta_{1j} X_{i1} + \beta_{2j} X_{i2} + \ldots + \beta_{pj} X_{ip} + \epsilon_{ij}

for all observations indexed as ''i'' = 1, ..., ''n'' and for all dependent variables indexed as ''j'' = 1, ..., ''m''.

Nearly all real-world regression models involve multiple predictors, and basic descriptions of linear regression are often phrased in terms of the multiple regression model. Note, however, that in these cases the response variable ''y'' is still a scalar. Another term, ''multivariate linear regression'', refers to cases where ''y'' is a vector, i.e., the same as ''general linear regression''.


General linear models

The general linear model considers the situation when the response variable is not a scalar (for each observation) but a vector, y''i''. Conditional linearity of E(\mathbf{y}\mid\mathbf{x}_i)=\mathbf{x}_i^\mathsf{T}B is still assumed, with a matrix ''B'' replacing the vector ''β'' of the classical linear regression model. Multivariate analogues of ordinary least squares (OLS) and generalized least squares (GLS) have been developed. "General linear models" are also called "multivariate linear models". These are not the same as multivariable linear models (also called "multiple linear models").
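A minimal sketch of the multivariate analogue of OLS mentioned above (Python/NumPy assumed; simulated data): with a vector response per observation, the coefficient matrix ''B'' can be estimated column by column, which a single least-squares solve performs at once.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, m = 200, 2, 3                              # n observations, p regressors, m responses
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
B_true = rng.normal(size=(p + 1, m))             # coefficient matrix B (assumed values)
Y = X @ B_true + 0.1 * rng.normal(size=(n, m))   # vector-valued responses

B_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)    # multivariate OLS: one least-squares fit per column of Y
print(np.round(B_hat - B_true, 2))               # estimation error is small
```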


Heteroscedastic models

Various models have been created that allow for heteroscedasticity, i.e. the errors for different response variables may have different variances. For example, weighted least squares is a method for estimating linear regression models when the response variables may have different error variances, possibly with correlated errors. (See also Weighted linear least squares, and Generalized least squares.) Heteroscedasticity-consistent standard errors is an improved method for use with uncorrelated but potentially heteroscedastic errors.
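A short sketch of weighted least squares under the assumption of known, uncorrelated error variances (Python/NumPy; the heteroscedastic variance pattern is invented): each observation is weighted by the inverse of its error variance, giving the estimator β̂ = (XᵀWX)⁻¹XᵀWy.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
x = rng.uniform(1, 10, size=n)
sigma = 0.2 * x                           # error standard deviation grows with x (heteroscedastic)
y = 1.0 + 0.5 * x + rng.normal(scale=sigma)

X = np.column_stack([np.ones(n), x])
W = np.diag(1.0 / sigma**2)               # weights = inverse error variances (assumed known here)

beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_wls, beta_ols)                 # both are unbiased; WLS is the more precise of the two
```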


Generalized linear models

Generalized linear models (GLMs) are a framework for modeling response variables that are bounded or discrete. This is used, for example:
*when modeling positive quantities (e.g. prices or populations) that vary over a large scale, which are better described using a skewed distribution such as the log-normal distribution or Poisson distribution (although GLMs are not used for log-normal data; instead the response variable is simply transformed using the logarithm function);
*when modeling categorical data, such as the choice of a given candidate in an election (which is better described using a Bernoulli distribution/binomial distribution for binary choices, or a categorical distribution/multinomial distribution for multi-way choices), where there are a fixed number of choices that cannot be meaningfully ordered;
*when modeling ordinal data, e.g. ratings on a scale from 0 to 5, where the different outcomes can be ordered but where the quantity itself may not have any absolute meaning (e.g. a rating of 4 may not be "twice as good" in any objective sense as a rating of 2, but simply indicates that it is better than 2 or 3 but not as good as 5).

Generalized linear models allow for an arbitrary ''link function'', ''g'', that relates the mean of the response variable(s) to the predictors: E(Y) = g^{-1}(XB). The link function is often related to the distribution of the response, and in particular it typically has the effect of transforming between the (-\infty,\infty) range of the linear predictor and the range of the response variable.

Some common examples of GLMs are:
* Poisson regression for count data.
* Logistic regression and probit regression for binary data.
* Multinomial logistic regression and multinomial probit regression for categorical data.
* Ordered logit and ordered probit regression for ordinal data.

Single index models allow some degree of nonlinearity in the relationship between ''x'' and ''y'', while preserving the central role of the linear predictor ''β''′''x'' as in the classical linear regression model. Under certain conditions, simply applying OLS to data from a single-index model will consistently estimate ''β'' up to a proportionality constant.
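To illustrate the link-function idea for one common GLM, here is a rough sketch of logistic regression fitted by iteratively reweighted least squares (Newton's method) in plain NumPy; the data are simulated and the implementation is deliberately minimal, not a substitute for a statistics library.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.5])
p_true = 1.0 / (1.0 + np.exp(-(X @ beta_true)))   # inverse logit link: E(Y) = g^{-1}(X beta)
y = rng.binomial(1, p_true)

beta = np.zeros(2)
for _ in range(25):                               # IRLS / Newton iterations
    eta = X @ beta                                # linear predictor
    mu = 1.0 / (1.0 + np.exp(-eta))               # mean via the inverse link
    W = mu * (1.0 - mu)                           # working weights (variance function)
    z = eta + (y - mu) / W                        # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
print(beta)                                       # close to beta_true
```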


Hierarchical linear models

Hierarchical linear models (or ''multilevel regression'') organize the data into a hierarchy of regressions, for example where ''A'' is regressed on ''B'', and ''B'' is regressed on ''C''. They are often used where the variables of interest have a natural hierarchical structure, such as in educational statistics, where students are nested in classrooms, classrooms are nested in schools, and schools are nested in some administrative grouping, such as a school district. The response variable might be a measure of student achievement such as a test score, and different covariates would be collected at the classroom, school, and school district levels.
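A minimal sketch of a two-level (students within schools) random-intercept model follows, assuming statsmodels and pandas are available; the data frame and column names ("score", "ses", "school") are hypothetical, invented for illustration.

```python
# Sketch of a random-intercept model for students nested in schools,
# assuming statsmodels and pandas are available; the data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
schools = np.repeat(np.arange(20), 30)                 # 20 schools, 30 students each
school_effect = rng.normal(0, 2, size=20)[schools]     # school-level random intercepts
ses = rng.normal(size=schools.size)
score = 50 + 3 * ses + school_effect + rng.normal(0, 5, size=schools.size)
df = pd.DataFrame({"score": score, "ses": ses, "school": schools})

model = smf.mixedlm("score ~ ses", data=df, groups=df["school"])
result = model.fit()
print(result.summary())                                # fixed effect near 3, school variance near 4
```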


Errors-in-variables

Errors-in-variables models (or "measurement error models") extend the traditional linear regression model to allow the predictor variables ''X'' to be observed with error. This error causes standard estimators of ''β'' to become biased. Generally, the form of bias is an attenuation, meaning that the effects are biased toward zero; a small simulation illustrating this appears below.
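The following simulation is a sketch (not part of the original article) of the attenuation effect: adding measurement noise to the predictor pulls the OLS slope estimate toward zero. All numbers are illustrative.

```python
# Simulation sketch of attenuation bias: measurement error in x
# shrinks the estimated slope toward zero.
import numpy as np

rng = np.random.default_rng(2)
n, beta = 10_000, 2.0
x_true = rng.normal(size=n)
y = beta * x_true + rng.normal(size=n)

x_observed = x_true + rng.normal(scale=1.0, size=n)    # x measured with error

slope_clean = np.polyfit(x_true, y, 1)[0]
slope_noisy = np.polyfit(x_observed, y, 1)[0]
print(slope_clean)   # close to 2.0
print(slope_noisy)   # roughly 1.0 here, i.e. attenuated toward zero
```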


Others

* In Dempster–Shafer theory, or a linear belief function in particular, a linear regression model may be represented as a partially swept matrix, which can be combined with similar matrices representing observations and other assumed normal distributions and state equations. The combination of swept or unswept matrices provides an alternative method for estimating linear regression models.


Estimation methods

A large number of procedures have been developed for parameter estimation and inference in linear regression. These methods differ in computational simplicity of algorithms, presence of a closed-form solution, robustness with respect to heavy-tailed distributions, and the theoretical assumptions needed to validate desirable statistical properties such as consistency and asymptotic efficiency. Some of the more common estimation techniques for linear regression are summarized below.


Least-squares estimation and related techniques

Assuming that the independent variable is \vec{x_i} = \left[x_1^i, x_2^i, \ldots, x_m^i\right] and the model's parameters are \vec{\beta} = \left[\beta_0, \beta_1, \ldots, \beta_m\right], the model's prediction is
: y_i \approx \beta_0 + \sum_{j=1}^{m} \beta_j \times x_j^i.
If \vec{x_i} is extended to \vec{x_i} = \left[1, x_1^i, x_2^i, \ldots, x_m^i\right], then y_i becomes a dot product of the parameter vector and the extended independent variable, i.e.
: y_i \approx \sum_{j=0}^{m} \beta_j \times x_j^i = \vec{\beta} \cdot \vec{x_i}.
In the least-squares setting, the optimum parameter vector is the one that minimizes the sum of squared losses:
: \vec{\hat\beta} = \underset{\vec\beta}{\operatorname{arg\,min}}\, L\left(D, \vec\beta\right) = \underset{\vec\beta}{\operatorname{arg\,min}} \sum_{i=1}^{n} \left(\vec\beta \cdot \vec{x_i} - y_i\right)^2
Now putting the independent and dependent variables in matrices X and Y respectively, the loss function can be rewritten as:
: \begin{align} L\left(D, \vec\beta\right) &= \left\|X\vec\beta - Y\right\|^2 \\ &= \left(X\vec\beta - Y\right)^\textsf{T} \left(X\vec\beta - Y\right) \\ &= Y^\textsf{T}Y - Y^\textsf{T}X\vec\beta - \vec\beta^\textsf{T}X^\textsf{T}Y + \vec\beta^\textsf{T}X^\textsf{T}X\vec\beta \end{align}
As the loss is convex, the optimum solution lies at gradient zero. The gradient of the loss function is (using the denominator layout convention):
: \begin{align} \frac{\partial L\left(D, \vec\beta\right)}{\partial \vec\beta} &= \frac{\partial \left(Y^\textsf{T}Y - Y^\textsf{T}X\vec\beta - \vec\beta^\textsf{T}X^\textsf{T}Y + \vec\beta^\textsf{T}X^\textsf{T}X\vec\beta\right)}{\partial \vec\beta} \\ &= -2X^\textsf{T}Y + 2X^\textsf{T}X\vec\beta \end{align}
Setting the gradient to zero produces the optimum parameter:
: \begin{align} -2X^\textsf{T}Y + 2X^\textsf{T}X\vec\beta &= 0 \\ \Rightarrow X^\textsf{T}X\vec\beta &= X^\textsf{T}Y \\ \Rightarrow \vec{\hat\beta} &= \left(X^\textsf{T}X\right)^{-1}X^\textsf{T}Y \end{align}
Note: to prove that the \hat\beta obtained is indeed a minimum, one differentiates once more to obtain the Hessian matrix 2X^\textsf{T}X and shows that it is positive definite, which holds whenever X has full column rank. A minimal numerical sketch of this closed-form solution follows the list below. Linear least squares methods include mainly:
* Ordinary least squares
* Weighted least squares
* Generalized least squares
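As mentioned above, the following is a minimal numerical sketch of the closed-form OLS solution \hat\beta = (X^\textsf{T}X)^{-1}X^\textsf{T}Y; the data are simulated and the variable names are chosen only for illustration.

```python
# Sketch of the closed-form OLS solution derived above; data are simulated.
import numpy as np

rng = np.random.default_rng(3)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept + 2 predictors
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

# beta_hat = (X^T X)^{-1} X^T y, computed via a linear solve rather than an explicit inverse
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)                                  # approximately [1.0, 2.0, -0.5]

# In practice a QR/SVD-based solver such as np.linalg.lstsq is preferred numerically:
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
```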


Maximum-likelihood estimation and related techniques

* Maximum likelihood estimation can be performed when the distribution of the error terms is known to belong to a certain parametric family ''ƒθ'' of probability distributions. When ''f''θ is a normal distribution with zero mean and variance θ, the resulting estimate is identical to the OLS estimate. GLS estimates are maximum likelihood estimates when ε follows a multivariate normal distribution with a known covariance matrix.
* Ridge regression and other forms of penalized estimation, such as Lasso regression, deliberately introduce bias into the estimation of ''β'' in order to reduce the variability of the estimate. The resulting estimates generally have lower mean squared error than the OLS estimates, particularly when multicollinearity is present or when overfitting is a problem. They are generally used when the goal is to predict the value of the response variable ''y'' for values of the predictors ''x'' that have not yet been observed. These methods are not as commonly used when the goal is inference, since it is difficult to account for the bias. (A short ridge-regression sketch appears after this list.)
* Least absolute deviation (LAD) regression is a robust estimation technique in that it is less sensitive to the presence of outliers than OLS (but is less efficient than OLS when no outliers are present). It is equivalent to maximum likelihood estimation under a Laplace distribution model for ''ε''.
* Adaptive estimation. If we assume that the error terms are independent of the regressors, \varepsilon_i \perp \mathbf{x}_i, then the optimal estimator is the 2-step MLE, where the first step is used to non-parametrically estimate the distribution of the error term.
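As referenced in the ridge-regression item above, the following is a minimal sketch of ridge (L2-penalized) regression via its closed form \hat\beta = (X^\textsf{T}X + \lambda I)^{-1}X^\textsf{T}y. The data are simulated, the penalty value is arbitrary, and the predictors are standardized and the response centered only to avoid handling an intercept.

```python
# Sketch of ridge regression via its closed form; data are simulated and the
# penalty lam is arbitrary, chosen only to make the shrinkage visible.
import numpy as np

rng = np.random.default_rng(4)
n, p, lam = 200, 5, 10.0
X = rng.normal(size=(n, p))
X = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize predictors
y = X @ rng.normal(size=p) + rng.normal(size=n)
y = y - y.mean()                              # center y so no intercept is needed

beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_ridge)                             # shrunk toward zero relative to beta_ols
print(beta_ols)
```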


Other estimation techniques

* Bayesian linear regression applies the framework of Bayesian statistics to linear regression. (See also Bayesian multivariate linear regression.) In particular, the regression coefficients β are assumed to be random variables with a specified prior distribution. The prior distribution can bias the solutions for the regression coefficients, in a way similar to (but more general than) ridge regression or lasso regression. In addition, the Bayesian estimation process produces not a single point estimate for the "best" values of the regression coefficients but an entire posterior distribution, completely describing the uncertainty surrounding the quantity. This can be used to estimate the "best" coefficients using the mean, mode, median, any quantile (see quantile regression), or any other function of the posterior distribution.
* Quantile regression focuses on the conditional quantiles of ''y'' given ''X'' rather than the conditional mean of ''y'' given ''X''. Linear quantile regression models a particular conditional quantile, for example the conditional median, as a linear function ''β''′''x'' of the predictors.
* Mixed models are widely used to analyze linear regression relationships involving dependent data when the dependencies have a known structure. Common applications of mixed models include analysis of data involving repeated measurements, such as longitudinal data, or data obtained from cluster sampling. They are generally fit as parametric models, using maximum likelihood or Bayesian estimation. In the case where the errors are modeled as normal random variables, there is a close connection between mixed models and generalized least squares. Fixed effects estimation is an alternative approach to analyzing this type of data.
* Principal component regression (PCR) is used when the number of predictor variables is large, or when strong correlations exist among the predictor variables. This two-stage procedure first reduces the predictor variables using principal component analysis, and then uses the reduced variables in an OLS regression fit. While it often works well in practice, there is no general theoretical reason that the most informative linear function of the predictor variables should lie among the dominant principal components of the multivariate distribution of the predictor variables. Partial least squares regression is an extension of PCR that does not suffer from this deficiency.
* Least-angle regression is an estimation procedure for linear regression models that was developed to handle high-dimensional covariate vectors, potentially with more covariates than observations.
* The Theil–Sen estimator is a simple robust estimation technique that chooses the slope of the fit line to be the median of the slopes of the lines through pairs of sample points. It has similar statistical efficiency properties to simple linear regression but is much less sensitive to outliers. (A minimal sketch appears after this list.)
* Other robust estimation techniques, including the α-trimmed mean approach and L-, M-, S-, and R-estimators, have been introduced.
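As referenced in the Theil–Sen item above, the following is a minimal sketch of the estimator for a simple (one-predictor) regression: the slope is the median of the slopes over all pairs of points, and the intercept is the median of y minus slope times x. The data and outliers are simulated for illustration only.

```python
# Sketch of the Theil-Sen estimator for simple linear regression; data are simulated.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, size=50)
y = 3.0 + 0.7 * x + rng.normal(scale=0.5, size=50)
y[:3] += 20                                   # a few gross outliers

pair_slopes = [(y[j] - y[i]) / (x[j] - x[i])
               for i, j in combinations(range(len(x)), 2)
               if x[i] != x[j]]
slope = np.median(pair_slopes)                # median of pairwise slopes
intercept = np.median(y - slope * x)          # median residual intercept
print(slope, intercept)                       # roughly 0.7 and 3, despite the outliers
```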


Applications

Linear regression is widely used in biological, behavioral and social sciences to describe possible relationships between variables. It ranks as one of the most important tools used in these disciplines.


Trend line

A trend line represents a trend, the long-term movement in time series data after other components have been accounted for. It tells whether a particular data set (say GDP, oil prices or stock prices) has increased or decreased over a period of time. A trend line could simply be drawn by eye through a set of data points, but more properly its position and slope are calculated using statistical techniques like linear regression; a minimal fitting sketch follows this paragraph. Trend lines typically are straight lines, although some variations use higher degree polynomials depending on the degree of curvature desired in the line. Trend lines are sometimes used in business analytics to show changes in data over time. This has the advantage of being simple. Trend lines are often used to argue that a particular action or event (such as training, or an advertising campaign) caused observed changes at a point in time. This is a simple technique, and does not require a control group, experimental design, or a sophisticated analysis technique. However, it suffers from a lack of scientific validity in cases where other potential changes can affect the data.
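The following is a minimal sketch of fitting a straight trend line to a time series by least squares; the series is simulated, and in practice the values might be, for example, yearly GDP.

```python
# Sketch of fitting a linear trend line to a (simulated) time series.
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(2000, 2021)                     # years
y = 100 + 2.5 * (t - 2000) + rng.normal(scale=3, size=t.size)

slope, intercept = np.polyfit(t, y, 1)        # degree-1 polynomial = straight trend line
trend = intercept + slope * t                 # fitted trend values
print(slope)                                  # average change per year, roughly 2.5
```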


Epidemiology

Early evidence relating tobacco smoking to mortality and morbidity came from observational studies employing regression analysis. In order to reduce spurious correlations when analyzing observational data, researchers usually include several variables in their regression models in addition to the variable of primary interest. For example, in a regression model in which cigarette smoking is the independent variable of primary interest and the dependent variable is lifespan measured in years, researchers might include education and income as additional independent variables, to ensure that any observed effect of smoking on lifespan is not due to those other socio-economic factors. However, it is never possible to include all possible confounding variables in an empirical analysis. For example, a hypothetical gene might increase mortality and also cause people to smoke more. For this reason, randomized controlled trials are often able to generate more compelling evidence of causal relationships than can be obtained using regression analyses of observational data. When controlled experiments are not feasible, variants of regression analysis such as instrumental variables regression may be used to attempt to estimate causal relationships from observational data.


Finance

The capital asset pricing model uses linear regression as well as the concept of beta for analyzing and quantifying the systematic risk of an investment. This comes directly from the beta coefficient of the linear regression model that relates the return on the investment to the return on all risky assets.
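A minimal sketch of this idea: regress the asset's excess returns on the market's excess returns, and read beta off as the slope. The returns below are simulated, not real market data.

```python
# Sketch of estimating CAPM beta as the slope of a simple regression;
# the return series are simulated for illustration.
import numpy as np

rng = np.random.default_rng(7)
market_excess = rng.normal(0.005, 0.04, size=250)              # daily market excess returns
asset_excess = 0.001 + 1.3 * market_excess + rng.normal(0, 0.02, size=250)

beta, alpha = np.polyfit(market_excess, asset_excess, 1)       # slope = beta, intercept = alpha
print(beta)                                                    # close to the simulated value of 1.3
```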


Economics

Linear regression is the predominant empirical tool in economics. For example, it is used to predict consumption spending, fixed investment spending, inventory investment, purchases of a country's exports, spending on imports, the demand to hold liquid assets, labor demand, and labor supply.


Environmental science

Linear regression finds application in a wide range of environmental science settings. In Canada, the Environmental Effects Monitoring Program uses statistical analyses of fish and benthic surveys to measure the effects of pulp mill or metal mine effluent on the aquatic ecosystem.


Machine learning

Linear regression plays an important role in the subfield of artificial intelligence known as machine learning. The linear regression algorithm is one of the fundamental supervised machine-learning algorithms due to its relative simplicity and well-known properties.
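The following is a minimal sketch of linear regression used as a supervised-learning baseline, assuming scikit-learn is available; the feature matrix and target are simulated.

```python
# Sketch of linear regression as a supervised-learning baseline with scikit-learn;
# data are simulated for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
X = rng.normal(size=(500, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.2, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print(model.coef_, model.intercept_)
print(model.score(X_test, y_test))            # R^2 on held-out data
```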


History

Least squares linear regression, as a means of finding a good rough linear fit to a set of points, was performed by Legendre (1805) and Gauss (1809) for the prediction of planetary movement. Quetelet was responsible for making the procedure well known and for using it extensively in the social sciences.


See also

* Analysis of variance
* Blinder–Oaxaca decomposition
* Censored regression model
* Cross-sectional regression
* Curve fitting
* Empirical Bayes method
* Errors and residuals
* Lack-of-fit sum of squares
* Line fitting
* Linear classifier
* Linear equation
* Logistic regression
* M-estimator
* Multivariate adaptive regression spline
* Nonlinear regression
* Nonparametric regression
* Normal equations
* Projection pursuit regression
* Response modeling methodology
* Segmented linear regression
* Standard deviation line
* Stepwise regression
* Structural break
* Support vector machine
* Truncated regression model
* Deming regression


References


Citations


Sources

* Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). ''Applied multiple regression/correlation analysis for the behavioral sciences'' (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
* Charles Darwin. ''The Variation of Animals and Plants under Domestication'' (1868). ''(Chapter XIII describes what was known about reversion in Galton's time. Darwin uses the term "reversion".)''
* Francis Galton. "Regression Towards Mediocrity in Hereditary Stature," ''Journal of the Anthropological Institute'', 15:246–263 (1886).
* Robert S. Pindyck and Daniel L. Rubinfeld (1998, 4th ed.). ''Econometric Models and Economic Forecasts'', ch. 1 (Intro, incl. appendices on Σ operators & derivation of parameter est.) & Appendix 4.3 (mult. regression in matrix form).


Further reading

* Mathieu Rouaud (2013). ''Probability, Statistics and Estimation''. Chapter 2: Linear Regression, Linear Regression with Error Bars and Nonlinear Regression.


External links


* Least-Squares Regression, PhET Interactive Simulations, University of Colorado at Boulder
* DIY Linear Fit