Bayesian linear regression is a type of conditional modeling in which the mean of one variable is described by a linear combination of other variables, with the goal of obtaining the posterior probability of the regression coefficients (as well as other parameters describing the distribution of the regressand) and ultimately allowing the out-of-sample prediction of the regressand (often labelled \mathbf{y}) ''conditional on'' observed values of the regressors (usually \mathbf{X}). The simplest and most widely used version of this model is the ''normal linear model'', in which \mathbf{y} given \mathbf{X} is distributed Gaussian. In this model, and under a particular choice of prior probabilities for the parameters (so-called conjugate priors), the posterior can be found analytically. With more arbitrarily chosen priors, the posteriors generally have to be approximated.
Model setup
Consider a standard linear regression problem, in which for i = 1, \ldots, n we specify the mean of the conditional distribution of y_i given a k \times 1 predictor vector \mathbf{x}_i:

: y_i = \mathbf{x}_i^\mathsf{T} \boldsymbol\beta + \varepsilon_i,

where \boldsymbol\beta is a k \times 1 vector, and the \varepsilon_i are independent and identically normally distributed random variables:

: \varepsilon_i \sim N(0, \sigma^2).

This corresponds to the following likelihood function:

: \rho(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma^2) \propto (\sigma^2)^{-n/2} \exp\left(-\frac{1}{2\sigma^2}(\mathbf{y} - \mathbf{X}\boldsymbol\beta)^\mathsf{T}(\mathbf{y} - \mathbf{X}\boldsymbol\beta)\right).

The ordinary least squares solution is used to estimate the coefficient vector using the Moore–Penrose pseudoinverse:

: \hat{\boldsymbol\beta} = (\mathbf{X}^\mathsf{T}\mathbf{X})^{-1}\mathbf{X}^\mathsf{T}\mathbf{y},

where \mathbf{X} is the n \times k design matrix, each row of which is a predictor vector \mathbf{x}_i^\mathsf{T}, and \mathbf{y} is the column n-vector [y_1 \; \cdots \; y_n]^\mathsf{T}.
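As a concrete illustration, the least-squares estimate can be computed directly with NumPy. The following is a minimal sketch on simulated data; the sample size, coefficient values and noise level are arbitrary choices made for the example:

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated data: n observations of k predictors (values chosen for illustration)
    n, k = 100, 3
    X = rng.normal(size=(n, k))              # design matrix, one predictor vector per row
    beta_true = np.array([1.5, -2.0, 0.5])   # arbitrary "true" coefficients
    y = X @ beta_true + rng.normal(scale=0.8, size=n)

    # Ordinary least squares via the Moore-Penrose pseudoinverse
    beta_hat = np.linalg.pinv(X) @ y         # same solution as solving (X^T X) beta = X^T y
    print(beta_hat)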
This is a frequentist approach, and it assumes that there are enough measurements to say something meaningful about \boldsymbol\beta. In the Bayesian approach, the data are supplemented with additional information in the form of a prior probability distribution. The prior belief about the parameters is combined with the data's likelihood function according to Bayes' theorem to yield the posterior belief about the parameters \boldsymbol\beta and \sigma. The prior can take different functional forms depending on the domain and the information that is available ''a priori''.
Since the data comprise both \mathbf{y} and \mathbf{X}, the focus only on the distribution of \mathbf{y} conditional on \mathbf{X} needs justification. In fact, a "full" Bayesian analysis would require a joint likelihood \rho(\mathbf{y}, \mathbf{X} \mid \boldsymbol\beta, \sigma^2, \gamma) along with a prior \rho(\boldsymbol\beta, \sigma^2, \gamma), where \gamma symbolizes the parameters of the distribution for \mathbf{X}. Only under the assumption of (weak) exogeneity can the joint likelihood be factored into \rho(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma^2)\,\rho(\mathbf{X} \mid \gamma). The latter part is usually ignored under the assumption of disjoint parameter sets. Moreover, under classic assumptions the regressors \mathbf{X} are considered to be chosen (for example, in a designed experiment) and therefore have a known distribution without parameters.
With conjugate priors
Conjugate prior distribution
For an arbitrary prior distribution, there may be no analytical solution for the posterior distribution. In this section, we will consider a so-called conjugate prior for which the posterior distribution can be derived analytically.

A prior \rho(\boldsymbol\beta, \sigma^2) is conjugate to this likelihood function if it has the same functional form with respect to \boldsymbol\beta and \sigma^2. Since the log-likelihood is quadratic in \boldsymbol\beta, the log-likelihood is re-written such that the likelihood becomes normal in (\boldsymbol\beta - \hat{\boldsymbol\beta}). Write

: (\mathbf{y} - \mathbf{X}\boldsymbol\beta)^\mathsf{T}(\mathbf{y} - \mathbf{X}\boldsymbol\beta) = (\mathbf{y} - \mathbf{X}\hat{\boldsymbol\beta})^\mathsf{T}(\mathbf{y} - \mathbf{X}\hat{\boldsymbol\beta}) + (\boldsymbol\beta - \hat{\boldsymbol\beta})^\mathsf{T}(\mathbf{X}^\mathsf{T}\mathbf{X})(\boldsymbol\beta - \hat{\boldsymbol\beta}).

The likelihood is now re-written as

: \rho(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma^2) \propto (\sigma^2)^{-v/2} \exp\left(-\frac{v s^2}{2\sigma^2}\right) (\sigma^2)^{-(n-v)/2} \exp\left(-\frac{1}{2\sigma^2}(\boldsymbol\beta - \hat{\boldsymbol\beta})^\mathsf{T}(\mathbf{X}^\mathsf{T}\mathbf{X})(\boldsymbol\beta - \hat{\boldsymbol\beta})\right),

where

: v s^2 = (\mathbf{y} - \mathbf{X}\hat{\boldsymbol\beta})^\mathsf{T}(\mathbf{y} - \mathbf{X}\hat{\boldsymbol\beta}) \quad \text{and} \quad v = n - k,

where k is the number of regression coefficients.
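The sum-of-squares decomposition above can be checked numerically. The following self-contained sketch (simulated data and an arbitrary evaluation point, mirroring the earlier example) confirms that the cross term vanishes because \mathbf{X}^\mathsf{T}(\mathbf{y} - \mathbf{X}\hat{\boldsymbol\beta}) = 0:

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 100, 3
    X = rng.normal(size=(n, k))
    y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.8, size=n)
    beta_hat = np.linalg.pinv(X) @ y

    # Check (y - Xb)'(y - Xb) = (y - Xb_hat)'(y - Xb_hat) + (b - b_hat)'(X'X)(b - b_hat)
    beta = np.array([0.3, 0.1, -1.2])        # arbitrary point at which to evaluate the likelihood
    lhs = (y - X @ beta) @ (y - X @ beta)
    rhs = ((y - X @ beta_hat) @ (y - X @ beta_hat)
           + (beta - beta_hat) @ (X.T @ X) @ (beta - beta_hat))
    print(np.isclose(lhs, rhs))              # True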
The form of this re-written likelihood suggests a form for the prior:

: \rho(\boldsymbol\beta, \sigma^2) = \rho(\sigma^2)\,\rho(\boldsymbol\beta \mid \sigma^2),

where \rho(\sigma^2) is an inverse-gamma distribution

: \rho(\sigma^2) \propto (\sigma^2)^{-\frac{v_0}{2} - 1} \exp\left(-\frac{v_0 s_0^2}{2\sigma^2}\right).

In the notation introduced in the inverse-gamma distribution article, this is the density of an \text{Inv-Gamma}(a_0, b_0) distribution with a_0 = \tfrac{v_0}{2} and b_0 = \tfrac{1}{2} v_0 s_0^2, with v_0 and s_0^2 as the prior values of v and s^2, respectively. Equivalently, it can also be described as a scaled inverse chi-squared distribution, \text{Scale-inv-}\chi^2(v_0, s_0^2).

Further the conditional prior density \rho(\boldsymbol\beta \mid \sigma^2) is a normal distribution,

: \rho(\boldsymbol\beta \mid \sigma^2) \propto (\sigma^2)^{-k/2} \exp\left(-\frac{1}{2\sigma^2}(\boldsymbol\beta - \boldsymbol\mu_0)^\mathsf{T} \boldsymbol\Lambda_0 (\boldsymbol\beta - \boldsymbol\mu_0)\right).

In the notation of the normal distribution, the conditional prior distribution is \mathcal{N}(\boldsymbol\mu_0, \sigma^2 \boldsymbol\Lambda_0^{-1}).
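To make this prior concrete, a joint draw can be generated by first sampling \sigma^2 from the inverse-gamma factor and then \boldsymbol\beta from the conditional normal. In the sketch below the hyperparameter values (mu_0, Lambda_0, a_0, b_0) are arbitrary illustrative choices:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    k = 3

    # Hyperparameters of the normal-inverse-gamma prior (illustrative values)
    mu_0 = np.zeros(k)            # prior mean of beta
    Lambda_0 = 2.0 * np.eye(k)    # prior precision matrix (scaled by 1/sigma^2)
    a_0, b_0 = 3.0, 1.0           # inverse-gamma shape and scale for sigma^2

    # One draw from rho(beta, sigma^2) = rho(sigma^2) rho(beta | sigma^2)
    sigma2 = stats.invgamma.rvs(a=a_0, scale=b_0, random_state=rng)
    beta = rng.multivariate_normal(mu_0, sigma2 * np.linalg.inv(Lambda_0))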
Posterior distribution
With the prior now specified, the posterior distribution can be expressed as

: \rho(\boldsymbol\beta, \sigma^2 \mid \mathbf{y}, \mathbf{X}) \propto \rho(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma^2)\,\rho(\boldsymbol\beta \mid \sigma^2)\,\rho(\sigma^2)
: \propto (\sigma^2)^{-n/2} \exp\left(-\frac{1}{2\sigma^2}(\mathbf{y} - \mathbf{X}\boldsymbol\beta)^\mathsf{T}(\mathbf{y} - \mathbf{X}\boldsymbol\beta)\right) (\sigma^2)^{-k/2} \exp\left(-\frac{1}{2\sigma^2}(\boldsymbol\beta - \boldsymbol\mu_0)^\mathsf{T}\boldsymbol\Lambda_0(\boldsymbol\beta - \boldsymbol\mu_0)\right) (\sigma^2)^{-(a_0+1)} \exp\left(-\frac{b_0}{\sigma^2}\right).

With some re-arrangement, the posterior can be re-written so that the posterior mean \boldsymbol\mu_n of the parameter vector \boldsymbol\beta can be expressed in terms of the least squares estimator \hat{\boldsymbol\beta} and the prior mean \boldsymbol\mu_0, with the strength of the prior indicated by the prior precision matrix \boldsymbol\Lambda_0:

: \boldsymbol\mu_n = (\mathbf{X}^\mathsf{T}\mathbf{X} + \boldsymbol\Lambda_0)^{-1}(\mathbf{X}^\mathsf{T}\mathbf{X}\hat{\boldsymbol\beta} + \boldsymbol\Lambda_0\boldsymbol\mu_0).

To justify that \boldsymbol\mu_n is indeed the posterior mean, the quadratic terms in the exponential can be re-arranged as a quadratic form in \boldsymbol\beta - \boldsymbol\mu_n:

: (\mathbf{y} - \mathbf{X}\boldsymbol\beta)^\mathsf{T}(\mathbf{y} - \mathbf{X}\boldsymbol\beta) + (\boldsymbol\beta - \boldsymbol\mu_0)^\mathsf{T}\boldsymbol\Lambda_0(\boldsymbol\beta - \boldsymbol\mu_0) = (\boldsymbol\beta - \boldsymbol\mu_n)^\mathsf{T}(\mathbf{X}^\mathsf{T}\mathbf{X} + \boldsymbol\Lambda_0)(\boldsymbol\beta - \boldsymbol\mu_n) + \mathbf{y}^\mathsf{T}\mathbf{y} - \boldsymbol\mu_n^\mathsf{T}(\mathbf{X}^\mathsf{T}\mathbf{X} + \boldsymbol\Lambda_0)\boldsymbol\mu_n + \boldsymbol\mu_0^\mathsf{T}\boldsymbol\Lambda_0\boldsymbol\mu_0.

Now the posterior can be expressed as a normal distribution times an inverse-gamma distribution:

: \rho(\boldsymbol\beta, \sigma^2 \mid \mathbf{y}, \mathbf{X}) \propto (\sigma^2)^{-k/2} \exp\left(-\frac{1}{2\sigma^2}(\boldsymbol\beta - \boldsymbol\mu_n)^\mathsf{T}(\mathbf{X}^\mathsf{T}\mathbf{X} + \boldsymbol\Lambda_0)(\boldsymbol\beta - \boldsymbol\mu_n)\right) (\sigma^2)^{-\frac{n + 2a_0}{2} - 1} \exp\left(-\frac{2b_0 + \mathbf{y}^\mathsf{T}\mathbf{y} - \boldsymbol\mu_n^\mathsf{T}(\mathbf{X}^\mathsf{T}\mathbf{X} + \boldsymbol\Lambda_0)\boldsymbol\mu_n + \boldsymbol\mu_0^\mathsf{T}\boldsymbol\Lambda_0\boldsymbol\mu_0}{2\sigma^2}\right).

Therefore, the posterior distribution can be parametrized as follows:

: \rho(\boldsymbol\beta, \sigma^2 \mid \mathbf{y}, \mathbf{X}) \propto \rho(\boldsymbol\beta \mid \sigma^2, \mathbf{y}, \mathbf{X})\,\rho(\sigma^2 \mid \mathbf{y}, \mathbf{X}),

where the two factors correspond to the densities of \mathcal{N}(\boldsymbol\mu_n, \sigma^2\boldsymbol\Lambda_n^{-1}) and \text{Inv-Gamma}(a_n, b_n) distributions, with the parameters of these given by

: \boldsymbol\Lambda_n = \mathbf{X}^\mathsf{T}\mathbf{X} + \boldsymbol\Lambda_0, \qquad \boldsymbol\mu_n = \boldsymbol\Lambda_n^{-1}(\boldsymbol\Lambda_0\boldsymbol\mu_0 + \mathbf{X}^\mathsf{T}\mathbf{y}),
: a_n = a_0 + \frac{n}{2}, \qquad b_n = b_0 + \frac{1}{2}\left(\mathbf{y}^\mathsf{T}\mathbf{y} + \boldsymbol\mu_0^\mathsf{T}\boldsymbol\Lambda_0\boldsymbol\mu_0 - \boldsymbol\mu_n^\mathsf{T}\boldsymbol\Lambda_n\boldsymbol\mu_n\right),

which can be interpreted as Bayesian learning: the posterior hyperparameters combine the prior hyperparameters with the corresponding quantities from the data.
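These update equations translate directly into code. The following is a minimal sketch (the function name and variable names are mine, not from any particular library) that returns the posterior hyperparameters for given data and normal-inverse-gamma prior hyperparameters:

    import numpy as np

    def conjugate_update(X, y, mu_0, Lambda_0, a_0, b_0):
        """Posterior hyperparameters of the conjugate normal-inverse-gamma model."""
        n = len(y)
        Lambda_n = X.T @ X + Lambda_0
        mu_n = np.linalg.solve(Lambda_n, Lambda_0 @ mu_0 + X.T @ y)
        a_n = a_0 + n / 2.0
        b_n = b_0 + 0.5 * (y @ y + mu_0 @ Lambda_0 @ mu_0 - mu_n @ Lambda_n @ mu_n)
        return mu_n, Lambda_n, a_n, b_n

With a very diffuse prior (\boldsymbol\Lambda_0 close to zero), \boldsymbol\mu_n approaches the least squares estimate \hat{\boldsymbol\beta}; a strong prior pulls it towards \boldsymbol\mu_0.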
Model evidence
The model evidence p(\mathbf{y} \mid m) is the probability of the data given the model m. It is also known as the marginal likelihood, and as the ''prior predictive density''. Here, the model is defined by the likelihood function p(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma^2) and the prior distribution on the parameters, i.e. p(\boldsymbol\beta, \sigma^2). The model evidence captures in a single number how well such a model explains the observations. The model evidence of the Bayesian linear regression model presented in this section can be used to compare competing linear models by Bayesian model comparison. These models may differ in the number and values of the predictor variables as well as in their priors on the model parameters. Model complexity is already taken into account by the model evidence, because it marginalizes out the parameters by integrating p(\mathbf{y}, \boldsymbol\beta, \sigma^2 \mid \mathbf{X}) over all possible values of \boldsymbol\beta and \sigma^2:

: p(\mathbf{y} \mid m) = \int p(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma^2)\,p(\boldsymbol\beta, \sigma^2)\,d\boldsymbol\beta\,d\sigma^2.

This integral can be computed analytically and the solution is given in the following equation:

: p(\mathbf{y} \mid m) = \frac{1}{(2\pi)^{n/2}} \sqrt{\frac{\det(\boldsymbol\Lambda_0)}{\det(\boldsymbol\Lambda_n)}} \cdot \frac{b_0^{a_0}}{b_n^{a_n}} \cdot \frac{\Gamma(a_n)}{\Gamma(a_0)}.

Here \Gamma denotes the gamma function. Because we have chosen a conjugate prior, the marginal likelihood can also be easily computed by evaluating the following equality for arbitrary values of \boldsymbol\beta and \sigma^2:

: p(\mathbf{y} \mid m) = \frac{p(\boldsymbol\beta, \sigma^2 \mid m)\,p(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma^2, m)}{p(\boldsymbol\beta, \sigma^2 \mid \mathbf{y}, \mathbf{X}, m)}.

Note that this equation is nothing but a re-arrangement of Bayes' theorem. Inserting the formulas for the prior, the likelihood, and the posterior and simplifying the resulting expression leads to the analytic expression given above.
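In numerical work the evidence is usually evaluated on the log scale to avoid overflow in the gamma functions and determinants. A sketch under the same hyperparameter notation as the update equations above (the function name is mine):

    import numpy as np
    from scipy.special import gammaln

    def log_evidence(n, Lambda_0, Lambda_n, a_0, b_0, a_n, b_n):
        """Log marginal likelihood log p(y | m) of the conjugate normal linear model."""
        _, logdet_0 = np.linalg.slogdet(Lambda_0)
        _, logdet_n = np.linalg.slogdet(Lambda_n)
        return (-0.5 * n * np.log(2.0 * np.pi)
                + 0.5 * (logdet_0 - logdet_n)
                + a_0 * np.log(b_0) - a_n * np.log(b_n)
                + gammaln(a_n) - gammaln(a_0))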
Other cases
In general, it may be impossible or impractical to derive the posterior distribution analytically. However, it is possible to approximate the posterior by an approximate Bayesian inference method such as Monte Carlo sampling (Carlin and Louis (2008) and Gelman et al. (2003) explain how to use sampling methods for Bayesian linear regression) or variational Bayes.
The special case \boldsymbol\mu_0 = 0, \boldsymbol\Lambda_0 = c\mathbf{I} is called ridge regression.
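In that case, substituting these values into the posterior-mean formula above gives the familiar ridge estimator:

: \boldsymbol\mu_n = (\mathbf{X}^\mathsf{T}\mathbf{X} + c\mathbf{I})^{-1}\mathbf{X}^\mathsf{T}\mathbf{y}.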
A similar analysis can be performed for the general case of multivariate regression; part of this provides for Bayesian estimation of covariance matrices: see Bayesian multivariate linear regression.
See also
* Bayes linear statistics
* Regularized least squares
* Tikhonov regularization
* Spike and slab variable selection
* Bayesian interpretation of kernel regularization
External links
* Bayesian estimation of linear models (R programming wikibook). Bayesian linear regression as implemented in R.