In statistics, ordinary least squares (OLS) is a type of linear least squares method for choosing the unknown parameters in a linear regression model (with fixed level-one effects of a linear function of a set of explanatory variables) by the principle of least squares: minimizing the sum of the squares of the differences between the observed values of the dependent variable in the input dataset and the values predicted by the linear function of the independent variables.
Geometrically, this is seen as the sum of the squared distances, parallel to the axis of the dependent variable, between each data point in the set and the corresponding point on the regression surface; the smaller the differences, the better the model fits the data. The resulting estimator can be expressed by a simple formula, especially in the case of a simple linear regression, in which there is a single regressor on the right side of the regression equation.
The OLS estimator is consistent for the level-one fixed effects when the regressors are exogenous and there is no perfect multicollinearity (rank condition), and it is consistent for the variance estimate of the residuals when the regressors have finite fourth moments. By the Gauss–Markov theorem, it is optimal in the class of linear unbiased estimators when the errors are homoscedastic and serially uncorrelated. Under these conditions, the method of OLS provides minimum-variance mean-unbiased estimation when the errors have finite variances. Under the additional assumption that the errors are normally distributed with zero mean, OLS is the maximum likelihood estimator and outperforms any non-linear unbiased estimator.
Linear model

Suppose the data consists of <math>n</math> observations <math>\{\mathbf{x}_i, y_i\}_{i=1}^n</math>. Each observation <math>i</math> includes a scalar response <math>y_i</math> and a column vector <math>\mathbf{x}_i</math> of <math>p</math> parameters (regressors), i.e., <math>\mathbf{x}_i = \left[x_{i1}, x_{i2}, \dots, x_{ip}\right]^\operatorname{T}</math>. In a linear regression model, the response variable, <math>y_i</math>, is a linear function of the regressors:
:<math>y_i = \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_p x_{ip} + \varepsilon_i,</math>
or in vector form,
:<math>y_i = \mathbf{x}_i^\operatorname{T} \boldsymbol\beta + \varepsilon_i,</math>
where <math>\mathbf{x}_i</math>, as introduced previously, is a column vector of the <math>i</math>-th observation of all the explanatory variables; <math>\boldsymbol\beta</math> is a <math>p \times 1</math> vector of unknown parameters; and the scalar <math>\varepsilon_i</math> represents unobserved random variables (errors) of the <math>i</math>-th observation. <math>\varepsilon_i</math> accounts for the influences upon the responses <math>y_i</math> from sources other than the explanatory variables <math>\mathbf{x}_i</math>. This model can also be written in matrix notation as
:<math>\mathbf{y} = \mathbf{X}\boldsymbol\beta + \boldsymbol\varepsilon,</math>
where <math>\mathbf{y}</math> and <math>\boldsymbol\varepsilon</math> are <math>n \times 1</math> vectors of the response variables and the errors of the <math>n</math> observations, and <math>\mathbf{X}</math> is an <math>n \times p</math> matrix of regressors, also sometimes called the design matrix, whose row <math>i</math> is <math>\mathbf{x}_i^\operatorname{T}</math> and contains the <math>i</math>-th observations on all the explanatory variables.
As a rule, the constant term is always included in the set of regressors <math>\mathbf{X}</math>, say, by taking <math>x_{i1} = 1</math> for all <math>i = 1, \dots, n</math>. The coefficient <math>\beta_1</math> corresponding to this regressor is called the ''intercept''.
Regressors do not have to be independent: there can be any desired relationship between the regressors (so long as it is not a linear relationship). For instance, we might suspect the response depends linearly both on a value and its square; in that case we would include one regressor whose value is just the square of another regressor. The model would then be ''quadratic'' in the second regressor, but it is nonetheless still considered a ''linear'' model because the model ''is'' still linear in the parameters (<math>\boldsymbol\beta</math>).
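As a concrete sketch of this point (hypothetical data and coefficient values; Python with NumPy is assumed), the design matrix below contains a constant column, a regressor, and its square; the relationship is quadratic in the regressor, yet the coefficients enter linearly and OLS applies unchanged.
<syntaxhighlight lang="python">
import numpy as np

# Hypothetical data: the response depends on x and x**2, plus noise.
rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 50)
y = 1.0 + 0.5 * x - 1.5 * x**2 + rng.normal(scale=0.3, size=x.size)

# Design matrix: intercept column of ones, the regressor, and its square.
# Quadratic in x, but still linear in the parameters beta.
X = np.column_stack([np.ones_like(x), x, x**2])

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)  # estimates of the intercept and the two coefficients
</syntaxhighlight>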
Matrix/vector formulation
Consider an overdetermined system
:<math>\sum_{j=1}^{p} X_{ij}\beta_j = y_i, \qquad (i = 1, 2, \dots, n),</math>
of <math>n</math> linear equations in <math>p</math> unknown coefficients, <math>\beta_1, \beta_2, \dots, \beta_p</math>, with <math>n > p</math>. (Note: for a linear model as above, not all elements in <math>\mathbf{X}</math> contain information on the data points. The first column is populated with ones, <math>X_{i1} = 1</math>. Only the other columns contain actual data. So here <math>p</math> is equal to the number of regressors plus one.) This can be written in matrix form as
:<math>\mathbf{X}\boldsymbol\beta = \mathbf{y},</math>
where
:<math>\mathbf{X} = \begin{bmatrix} X_{11} & X_{12} & \cdots & X_{1p} \\ X_{21} & X_{22} & \cdots & X_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ X_{n1} & X_{n2} & \cdots & X_{np} \end{bmatrix}, \qquad \boldsymbol\beta = \begin{bmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_p \end{bmatrix}, \qquad \mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}.</math>
Such a system usually has no exact solution, so the goal is instead to find the coefficients <math>\boldsymbol\beta</math> which fit the equations "best", in the sense of solving the quadratic minimization problem
:<math>\hat{\boldsymbol\beta} = \underset{\boldsymbol\beta}{\operatorname{arg\,min}}\, S(\boldsymbol\beta),</math>
where the objective function <math>S</math> is given by
:<math>S(\boldsymbol\beta) = \sum_{i=1}^n \left| y_i - \sum_{j=1}^p X_{ij}\beta_j \right|^2 = \left\| \mathbf{y} - \mathbf{X}\boldsymbol\beta \right\|^2.</math>
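As a small numerical sketch of this minimization (hypothetical data; NumPy assumed), the objective <math>S</math> is evaluated at the least-squares solution and at a nearby point; because the system is overdetermined, even the minimizer leaves a nonzero residual.
<syntaxhighlight lang="python">
import numpy as np

# Hypothetical overdetermined system: n equations, p < n unknowns.
rng = np.random.default_rng(1)
n, p = 30, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.2, size=n)

def S(beta):
    """Objective: squared Euclidean norm of the residual y - X beta."""
    r = y - X @ beta
    return r @ r

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(S(beta_hat))        # the minimum of S; positive, since no exact solution exists
print(S(beta_hat + 0.1))  # any perturbed coefficient vector gives a larger value
</syntaxhighlight>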
A justification for choosing this criterion is given in Properties below. This minimization problem has a unique solution, provided that the <math>p</math> columns of the matrix <math>\mathbf{X}</math> are linearly independent, given by solving the so-called ''normal equations'':
:<math>\left(\mathbf{X}^\operatorname{T}\mathbf{X}\right)\hat{\boldsymbol\beta} = \mathbf{X}^\operatorname{T}\mathbf{y}.</math>
The matrix <math>\mathbf{X}^\operatorname{T}\mathbf{X}</math> is known as the ''normal matrix'' or Gram matrix, and the matrix <math>\mathbf{X}^\operatorname{T}\mathbf{y}</math> is known as the moment matrix of regressand by regressors. Finally, <math>\hat{\boldsymbol\beta}</math> is the coefficient vector of the least-squares hyperplane, expressed as
:<math>\hat{\boldsymbol\beta} = \left(\mathbf{X}^\operatorname{T}\mathbf{X}\right)^{-1}\mathbf{X}^\operatorname{T}\mathbf{y}</math>
or
:<math>\hat{\boldsymbol\beta} = \boldsymbol\beta + \left(\mathbf{X}^\operatorname{T}\mathbf{X}\right)^{-1}\mathbf{X}^\operatorname{T}\boldsymbol\varepsilon.</math>
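The normal equations and the closed-form expression can be made concrete with a short sketch (hypothetical data; NumPy assumed): form <math>\mathbf{X}^\operatorname{T}\mathbf{X}</math> and <math>\mathbf{X}^\operatorname{T}\mathbf{y}</math>, solve the resulting <math>p \times p</math> system, and compare with a dedicated least-squares routine.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = rng.normal(size=n)

# Normal equations: (X^T X) beta_hat = X^T y.
XtX = X.T @ X            # the normal (Gram) matrix
Xty = X.T @ y            # the moment matrix of regressand by regressors
beta_hat = np.linalg.solve(XtX, Xty)

# The same estimate from a dedicated least-squares solver.
beta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta_hat, beta_ls))  # True
</syntaxhighlight>
Forming <math>\mathbf{X}^\operatorname{T}\mathbf{X}</math> explicitly squares the condition number of <math>\mathbf{X}</math>, which is why library solvers based on QR or singular value decompositions are usually preferred in practice.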
Estimation
Suppose ''b'' is a "candidate" value for the parameter vector ''β''. The quantity <math>y_i - \mathbf{x}_i^\operatorname{T} b</math>, called the residual for the ''i''-th observation, measures the vertical distance between the data point <math>(\mathbf{x}_i, y_i)</math> and the hyperplane <math>y = \mathbf{x}^\operatorname{T} b</math>, and thus assesses the degree of fit between the actual data and the model. The sum of squared residuals (SSR) (also called the error sum of squares (ESS) or residual sum of squares (RSS)) is a measure of the overall model fit:
:<math>S(b) = \sum_{i=1}^n \left(y_i - \mathbf{x}_i^\operatorname{T} b\right)^2 = (\mathbf{y} - \mathbf{X}b)^\operatorname{T}(\mathbf{y} - \mathbf{X}b),</math>
where ''T'' denotes the matrix transpose, and the rows of ''X'', denoting the values of all the independent variables associated with a particular value of the dependent variable, are <math>X_i = \mathbf{x}_i^\operatorname{T}</math>. The value of ''b'' which minimizes this sum is called the OLS estimator for ''β''. The function ''S''(''b'') is quadratic in ''b'' with positive-definite Hessian, and therefore this function possesses a unique global minimum at <math>b = \hat\beta</math>, which can be given by the explicit formula:
:<math>\hat\beta = \left(\mathbf{X}^\operatorname{T}\mathbf{X}\right)^{-1}\mathbf{X}^\operatorname{T}\mathbf{y}.</math>
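These claims can be checked numerically; in the following sketch (hypothetical data), the Hessian of ''S''(''b''), which equals 2''X''<sup>T</sup>''X'', has strictly positive eigenvalues whenever the columns of ''X'' are linearly independent, so the explicit formula yields the unique global minimizer.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
n, p = 40, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = rng.normal(size=n)

# Explicit OLS formula: beta_hat = (X^T X)^{-1} X^T y.
beta_hat = np.linalg.inv(X.T @ X) @ (X.T @ y)

# The Hessian of S(b) is 2 X^T X; positive eigenvalues mean it is
# positive definite, so beta_hat is the unique global minimum of S.
hessian = 2 * X.T @ X
print(np.linalg.eigvalsh(hessian).min() > 0)  # True for full-column-rank X
</syntaxhighlight>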
The product ''N'' = ''X''<sup>T</sup>''X'' is a Gram matrix, and its inverse, ''Q'' = ''N''<sup>−1</sup>, is the ''cofactor matrix'' of ''β'', closely related to its covariance matrix, ''C''<sub>''β''</sub>.
The matrix (''X''<sup>T</sup>''X'')<sup>−1</sup>''X''<sup>T</sup> = ''Q'' ''X''<sup>T</sup> is called the Moore–Penrose pseudoinverse matrix of ''X''. This formulation highlights the point that estimation can be carried out if, and only if, there is no perfect multicollinearity between the explanatory variables (which would cause the Gram matrix to have no inverse).
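A brief sketch of this observation (hypothetical data): the pseudoinverse reproduces the OLS estimate when the columns of ''X'' are linearly independent, while a perfectly collinear column makes the Gram matrix rank-deficient, so the inverse in the explicit formula no longer exists.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
n = 30
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.normal(size=n)

# pinv(X) equals (X^T X)^{-1} X^T when X has full column rank.
beta_pinv = np.linalg.pinv(X) @ y
beta_norm = np.linalg.solve(X.T @ X, X.T @ y)
print(np.allclose(beta_pinv, beta_norm))  # True

# Perfect multicollinearity: duplicating a column leaves the Gram matrix
# singular, so the explicit formula breaks down.
X_bad = np.column_stack([X, X[:, 1]])
print(np.linalg.matrix_rank(X_bad.T @ X_bad))  # 2, although X_bad has 3 columns
</syntaxhighlight>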
After we have estimated ''β'', the fitted values (or predicted values) from the regression will be
:<math>\hat{\mathbf{y}} = \mathbf{X}\hat\beta = P\mathbf{y},</math>
where ''P'' = ''X''(''X''<sup>T</sup>''X'')<sup>−1</sup>''X''<sup>T</sup> is the projection matrix onto the space ''V'' spanned by the columns of ''X''. This matrix ''P'' is also sometimes called the hat matrix because it "puts a hat" onto the variable ''y''. Another matrix, closely related to ''P'', is the ''annihilator'' matrix ''M'' = ''I''<sub>''n''</sub> − ''P''; this is a projection matrix onto the space orthogonal to ''V''. Both matrices ''P'' and ''M'' are symmetric and idempotent (meaning that ''P''<sup>2</sup> = ''P'' and ''M''<sup>2</sup> = ''M''), and relate to the data matrix ''X'' via the identities ''PX'' = ''X'' and ''MX'' = 0. Matrix ''M'' creates the residuals from the regression:
:<math>\hat{\boldsymbol\varepsilon} = \mathbf{y} - \hat{\mathbf{y}} = \mathbf{y} - \mathbf{X}\hat\beta = M\mathbf{y} = M(\mathbf{X}\boldsymbol\beta + \boldsymbol\varepsilon) = M\boldsymbol\varepsilon.</math>
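The stated properties of ''P'' and ''M'' are easy to verify numerically; a short sketch with hypothetical data:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(5)
n, p = 20, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = rng.normal(size=n)

P = X @ np.linalg.inv(X.T @ X) @ X.T   # projection ("hat") matrix
M = np.eye(n) - P                      # annihilator matrix

print(np.allclose(P @ P, P), np.allclose(M @ M, M))   # idempotent
print(np.allclose(P, P.T), np.allclose(M, M.T))       # symmetric
print(np.allclose(P @ X, X), np.allclose(M @ X, 0))   # P X = X and M X = 0

y_hat = P @ y        # fitted values X beta_hat
resid = M @ y        # residuals y - X beta_hat
print(np.allclose(y_hat + resid, y))  # True
</syntaxhighlight>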
Using these residuals we can estimate the value of ''σ''<sup>2</sup> using the reduced chi-squared statistic:
:<math>s^2 = \frac{\hat{\boldsymbol\varepsilon}^\operatorname{T}\hat{\boldsymbol\varepsilon}}{n-p} = \frac{(M\mathbf{y})^\operatorname{T} M\mathbf{y}}{n-p} = \frac{\mathbf{y}^\operatorname{T} M\mathbf{y}}{n-p} = \frac{S(\hat\beta)}{n-p}, \qquad \hat\sigma^2 = \frac{n-p}{n}\, s^2.</math>
The denominator, ''n'' − ''p'', is the statistical degrees of freedom. The first quantity, ''s''<sup>2</sup>, is the OLS estimate for ''σ''<sup>2</sup>, whereas the second, <math>\hat\sigma^2</math>, is the MLE estimate for ''σ''<sup>2</sup>. The two estimators are quite similar in large samples; the first estimator is always unbiased, while the second estimator is biased but has a smaller mean squared error. In practice ''s''<sup>2</sup> is used more often, since it is more convenient for hypothesis testing. The square root of ''s''<sup>2</sup> is called the regression standard error, standard error of the regression, or standard error of the equation.
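Both variance estimates follow directly from the residuals; a minimal sketch with hypothetical data and a known error standard deviation:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(6)
n, p = 200, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([1.0, 2.0, -1.0, 0.5])
sigma_true = 0.7
y = X @ beta_true + rng.normal(scale=sigma_true, size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat

s2 = resid @ resid / (n - p)     # unbiased OLS estimate of sigma^2
sigma2_mle = resid @ resid / n   # maximum-likelihood estimate (biased)
print(s2, sigma2_mle)            # both close to sigma_true**2 = 0.49
print(np.sqrt(s2))               # the standard error of the regression
</syntaxhighlight>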
It is common to assess the goodness-of-fit of the OLS regression by comparing how much the initial variation in the sample can be reduced by regressing onto ''X''. The