Probit model

In statistics, a probit model is a type of regression where the dependent variable can take only two values, for example married or not married. The word is a portmanteau of ''probability'' and ''unit''. The purpose of the model is to estimate the probability that an observation with particular characteristics will fall into a specific one of the categories; moreover, classifying observations based on their predicted probabilities is a type of binary classification model.

A probit model is a popular specification for a binary response model. As such it treats the same set of problems as does logistic regression, using similar techniques. When viewed in the generalized linear model framework, the probit model employs a probit link function. It is most often estimated using the maximum likelihood procedure; such an estimation is called a probit regression.


Conceptual framework

Suppose a response variable ''Y'' is ''binary'', that is, it can have only two possible outcomes, which we will denote as 1 and 0. For example, ''Y'' may represent presence/absence of a certain condition, success/failure of some device, answer yes/no on a survey, etc. We also have a vector of regressors ''X'', which are assumed to influence the outcome ''Y''. Specifically, we assume that the model takes the form

: P(Y=1 \mid X) = \Phi(X^T\beta),

where ''P'' is the probability and \Phi is the cumulative distribution function (CDF) of the standard normal distribution. The parameters ''β'' are typically estimated by maximum likelihood.

It is possible to motivate the probit model as a latent variable model. Suppose there exists an auxiliary random variable

: Y^\ast = X^T\beta + \varepsilon,

where ''ε'' ~ ''N''(0, 1). Then ''Y'' can be viewed as an indicator for whether this latent variable is positive:

: Y = \begin{cases} 1 & Y^\ast > 0 \\ 0 & \text{otherwise} \end{cases} = \begin{cases} 1 & X^T\beta + \varepsilon > 0 \\ 0 & \text{otherwise} \end{cases}

The use of the standard normal distribution causes no loss of generality compared with the use of a normal distribution with an arbitrary mean and standard deviation, because adding a fixed amount to the mean can be compensated by subtracting the same amount from the intercept, and multiplying the standard deviation by a fixed amount can be compensated by multiplying the weights by the same amount.

To see that the two models are equivalent, note that

: \begin{align}
P(Y = 1 \mid X) &= P(Y^\ast > 0) \\
&= P(X^T\beta + \varepsilon > 0) \\
&= P(\varepsilon > -X^T\beta) \\
&= P(\varepsilon < X^T\beta) && \text{by symmetry of the normal distribution} \\
&= \Phi(X^T\beta)
\end{align}
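
To make the latent-variable formulation concrete, the following Python sketch simulates data from the model and checks that the empirical frequency of ''Y'' = 1 matches \Phi(X^T\beta) on average. The coefficient values and sample size are illustrative assumptions, not taken from any particular study.

```python
# A minimal sketch of the latent-variable view of the probit model:
# simulate Y* = X'beta + eps with eps ~ N(0, 1) and record Y = 1[Y* > 0].
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 10_000
beta = np.array([0.5, -1.0])            # assumed "true" coefficients (intercept, slope)

X = np.column_stack([np.ones(n), rng.normal(size=n)])  # design matrix with intercept
y_star = X @ beta + rng.normal(size=n)  # latent variable Y* = X'beta + eps
y = (y_star > 0).astype(int)            # observed binary outcome

# The empirical frequency of Y = 1 should match Phi(X'beta) on average.
print(y.mean(), norm.cdf(X @ beta).mean())
```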


Model estimation


Maximum likelihood estimation

Suppose the data set \{y_i, x_i\}_{i=1}^n contains ''n'' independent statistical units corresponding to the model above. For a single observation, conditional on the vector of inputs of that observation, we have:

: P(y_i=1 \mid x_i) = \Phi(x_i'\beta)
: P(y_i=0 \mid x_i) = 1 - \Phi(x_i'\beta)

where x_i is a K \times 1 vector of inputs, and \beta is a K \times 1 vector of coefficients. The likelihood of a single observation (y_i, x_i) is then

: \mathcal{L}(\beta; y_i, x_i) = \Phi(x_i'\beta)^{y_i} \left[1 - \Phi(x_i'\beta)\right]^{1-y_i}

In fact, if y_i = 1, then \mathcal{L}(\beta; y_i, x_i) = \Phi(x_i'\beta), and if y_i = 0, then \mathcal{L}(\beta; y_i, x_i) = 1 - \Phi(x_i'\beta). Since the observations are independent and identically distributed, the likelihood of the entire sample, or the joint likelihood, is equal to the product of the likelihoods of the single observations:

: \mathcal{L}(\beta; Y, X) = \prod_{i=1}^n \left( \Phi(x_i'\beta)^{y_i} \left[1 - \Phi(x_i'\beta)\right]^{1-y_i} \right)

The joint log-likelihood function is thus

: \ln\mathcal{L}(\beta; Y, X) = \sum_{i=1}^n \Big( y_i \ln\Phi(x_i'\beta) + (1 - y_i)\ln\!\big(1 - \Phi(x_i'\beta)\big) \Big)

The estimator \hat\beta which maximizes this function will be consistent, asymptotically normal and efficient provided that \operatorname{E}[XX'] exists and is not singular. It can be shown that this log-likelihood function is globally concave in ''β'', and therefore standard numerical algorithms for optimization will converge rapidly to the unique maximum. The asymptotic distribution of \hat\beta is given by

: \sqrt{n}(\hat\beta - \beta)\ \xrightarrow{d}\ \mathcal{N}(0,\, \Omega^{-1}),

where

: \Omega = \operatorname{E}\bigg[ \frac{\varphi^2(X'\beta)}{\Phi(X'\beta)\big(1 - \Phi(X'\beta)\big)} XX' \bigg], \qquad \hat\Omega = \frac{1}{n}\sum_{i=1}^n \frac{\varphi^2(x_i'\hat\beta)}{\Phi(x_i'\hat\beta)\big(1 - \Phi(x_i'\hat\beta)\big)} x_i x_i',

and \varphi = \Phi' is the probability density function (PDF) of the standard normal distribution. Semi-parametric and non-parametric maximum likelihood methods for probit-type and other related models are also available.
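
As a sketch of how this estimation can be carried out in practice, the following Python code minimizes the negative joint log-likelihood on simulated data as in the sketch above. The choice of optimizer (BFGS) and the use of norm.logcdf for numerical stability in the tails are implementation choices, not part of the model.

```python
# A minimal sketch of probit maximum likelihood on simulated data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(beta, y, X):
    """Negative joint log-likelihood: -sum[y ln Phi(x'b) + (1-y) ln(1-Phi(x'b))]."""
    xb = X @ beta
    # norm.logcdf(-xb) equals log(1 - Phi(xb)) by symmetry and is safer in the tails.
    return -np.sum(y * norm.logcdf(xb) + (1 - y) * norm.logcdf(-xb))

rng = np.random.default_rng(0)
n, beta_true = 10_000, np.array([0.5, -1.0])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

# The log-likelihood is globally concave in beta, so any reasonable
# starting point converges to the unique maximum.
result = minimize(neg_log_likelihood, x0=np.zeros(2), args=(y, X), method="BFGS")
print(result.x)  # should be close to beta_true
```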


Berkson's minimum chi-square method

This method can be applied only when there are many observations of the response variable y_i having the same value of the vector of regressors x_i (such a situation may be referred to as "many observations per cell"). More specifically, the model can be formulated as follows. Suppose among ''n'' observations \{y_i, x_i\}_{i=1}^n there are only ''T'' distinct values of the regressors, which can be denoted as \{x_{(1)}, \ldots, x_{(T)}\}. Let n_t be the number of observations with x_i = x_{(t)}, and r_t the number of such observations with y_i = 1. We assume that there are indeed "many" observations per each "cell": for each t, \lim_{n\to\infty} n_t/n = c_t > 0. Denote

: \hat{p}_t = r_t / n_t
: \hat\sigma_t^2 = \frac{1}{n_t} \frac{\hat{p}_t (1 - \hat{p}_t)}{\varphi^2\big(\Phi^{-1}(\hat{p}_t)\big)}

Then Berkson's minimum chi-square estimator is a generalized least squares estimator in a regression of \Phi^{-1}(\hat{p}_t) on x_{(t)} with weights \hat\sigma_t^{-2}:

: \hat\beta = \Bigg( \sum_{t=1}^T \hat\sigma_t^{-2} x_{(t)} x_{(t)}' \Bigg)^{-1} \sum_{t=1}^T \hat\sigma_t^{-2} x_{(t)} \Phi^{-1}(\hat{p}_t)

It can be shown that this estimator is consistent (as ''n'' → ∞ and ''T'' fixed), asymptotically normal and efficient. Its advantage is the presence of a closed-form formula for the estimator. However, it is only meaningful to carry out this analysis when individual observations are not available, only their aggregated counts r_t, n_t, and x_{(t)} (for example in the analysis of voting behavior).
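
A minimal sketch of the closed-form computation follows, assuming hypothetical aggregated counts (r_t, n_t, x_{(t)}) invented purely for illustration.

```python
# Berkson's minimum chi-square estimator from aggregated counts:
# weighted least squares of Phi^{-1}(p_hat_t) on x_(t).
import numpy as np
from scipy.stats import norm

# Hypothetical aggregated data: T = 4 cells, regressors (intercept, x1).
X = np.array([[1.0, -1.0], [1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
n_t = np.array([500, 500, 500, 500])   # observations per cell
r_t = np.array([120, 250, 410, 470])   # number of y = 1 per cell

p_hat = r_t / n_t
z = norm.ppf(p_hat)                                      # Phi^{-1}(p_hat_t)
var_t = p_hat * (1 - p_hat) / (n_t * norm.pdf(z) ** 2)   # sigma_t^2
w = 1.0 / var_t                                          # GLS weights

# beta_hat = (sum_t w_t x_t x_t')^{-1} sum_t w_t x_t z_t
A = (X * w[:, None]).T @ X
b = (X * w[:, None]).T @ z
beta_hat = np.linalg.solve(A, b)
print(beta_hat)
```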


Gibbs sampling

Gibbs sampling of a probit model is possible because regression models typically use normal prior distributions over the weights, and this distribution is conjugate with the normal distribution of the errors (and hence of the latent variables ''Y''*). The model can be described as

: \begin{align}
\boldsymbol\beta &\sim \mathcal{N}(\mathbf{b}_0, \mathbf{B}_0) \\
y_i^\ast \mid \mathbf{x}_i, \boldsymbol\beta &\sim \mathcal{N}(\mathbf{x}'_i\boldsymbol\beta, 1) \\
y_i &= \begin{cases} 1 & \text{if } y_i^\ast > 0 \\ 0 & \text{otherwise} \end{cases}
\end{align}

From this, we can determine the full conditional densities needed:

: \begin{align}
\mathbf{B} &= (\mathbf{B}_0^{-1} + \mathbf{X}'\mathbf{X})^{-1} \\
\boldsymbol\beta \mid \mathbf{y}^\ast &\sim \mathcal{N}\big(\mathbf{B}(\mathbf{B}_0^{-1}\mathbf{b}_0 + \mathbf{X}'\mathbf{y}^\ast), \mathbf{B}\big) \\
y_i^\ast \mid y_i = 0, \mathbf{x}_i, \boldsymbol\beta &\sim \mathcal{N}(\mathbf{x}'_i\boldsymbol\beta, 1)\,[y_i^\ast < 0] \\
y_i^\ast \mid y_i = 1, \mathbf{x}_i, \boldsymbol\beta &\sim \mathcal{N}(\mathbf{x}'_i\boldsymbol\beta, 1)\,[y_i^\ast \ge 0]
\end{align}

The result for ''β'' is given in the article on Bayesian linear regression, although specified with different notation. The only trickiness is in the last two equations. The notation [y_i^\ast < 0] is the Iverson bracket, sometimes written \mathcal{I}(y_i^\ast < 0) or similar. It indicates that the distribution must be truncated within the given range, and rescaled appropriately. In this particular case, a truncated normal distribution arises. Sampling from this distribution depends on how much is truncated. If a large fraction of the original mass remains, sampling can be easily done with rejection sampling: simply sample a number from the non-truncated distribution, and reject it if it falls outside the restriction imposed by the truncation. If sampling from only a small fraction of the original mass, however (e.g. if sampling from one of the tails of the normal distribution, say when \mathbf{x}'_i\boldsymbol\beta is around 3 or more and a negative sample is desired), then this will be inefficient and it becomes necessary to fall back on other sampling algorithms. General sampling from the truncated normal can be achieved using approximations to the normal CDF and the probit function, and R packages such as msm provide a function rtnorm() for generating truncated-normal samples.
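
For concreteness, the following Python sketch implements the data-augmentation Gibbs sampler described above on simulated data, assuming a \mathcal{N}(0, 10I) prior chosen purely for illustration; scipy's truncnorm handles the truncated-normal draws.

```python
# A minimal sketch of the probit Gibbs sampler via data augmentation.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)
n, k = 2_000, 2
beta_true = np.array([0.5, -1.0])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

b0, B0_inv = np.zeros(k), np.eye(k) / 10.0   # prior mean and precision (assumed)
B = np.linalg.inv(B0_inv + X.T @ X)          # posterior covariance (fixed, since Var(eps) = 1)
chol_B = np.linalg.cholesky(B)

beta = np.zeros(k)
draws = []
for it in range(2_000):
    # 1. Sample latent y* from N(x'beta, 1) truncated to (0, inf) if y = 1,
    #    and to (-inf, 0) if y = 0; truncnorm bounds are in standardized units.
    mu = X @ beta
    lo = np.where(y == 1, -mu, -np.inf)
    hi = np.where(y == 1, np.inf, -mu)
    y_star = mu + truncnorm.rvs(lo, hi, random_state=rng)
    # 2. Sample beta from its full conditional N(B(B0^{-1} b0 + X'y*), B).
    mean = B @ (B0_inv @ b0 + X.T @ y_star)
    beta = mean + chol_B @ rng.normal(size=k)
    if it >= 500:                            # discard burn-in
        draws.append(beta)

print(np.mean(draws, axis=0))                # posterior mean, close to beta_true
```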


Model evaluation

The suitability of an estimated binary model can be evaluated by counting the number of observations equal to 1, and the number equal to 0, for which the model assigns the correct predicted classification, treating any estimated probability above 1/2 as a prediction of 1 (and any below 1/2 as a prediction of 0).
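
A minimal sketch of this classification count, assuming fitted coefficients \hat\beta from any of the estimators above:

```python
# Tabulate actual vs. predicted class using the 1/2 probability threshold.
import numpy as np
from scipy.stats import norm

def classification_table(y, X, beta_hat):
    """2x2 counts; rows are actual 0/1, columns are predicted 0/1."""
    pred = (norm.cdf(X @ beta_hat) > 0.5).astype(int)
    table = np.zeros((2, 2), dtype=int)
    for actual, predicted in zip(y, pred):
        table[actual, predicted] += 1
    return table
```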


Performance under misspecification

Consider the latent variable model formulation of the probit model. When the variance of \varepsilon conditional on x is not constant but dependent on x, the heteroskedasticity issue arises. For example, suppose y^\ast = \beta_0 + \beta_1 x_1 + \varepsilon and \varepsilon \mid x \sim N(0, x_1^2), where x_1 is a continuous positive explanatory variable. Under heteroskedasticity, the probit estimator for \beta is usually inconsistent, and most of the tests about the coefficients are invalid. More importantly, the estimator for P(y=1 \mid x) becomes inconsistent, too. To deal with this problem, the original model needs to be transformed to be homoskedastic. For instance, in the same example, 1[\beta_0 + \beta_1 x_1 + \varepsilon > 0] can be rewritten as 1[\beta_0/x_1 + \beta_1 + \varepsilon/x_1 > 0], where \varepsilon/x_1 \mid x \sim N(0, 1). Therefore, P(y=1 \mid x) = \Phi(\beta_1 + \beta_0/x_1), and running probit on the regressors (1, 1/x_1) generates a consistent estimator for the conditional probability P(y=1 \mid x).

When the assumption that \varepsilon is normally distributed fails to hold, a functional form misspecification issue arises: if the model is still estimated as a probit model, the estimators of the coefficients \beta are inconsistent. For instance, if \varepsilon follows a logistic distribution in the true model, but the model is estimated by probit, the estimates will generally be smaller than the true value. However, the inconsistency of the coefficient estimates is practically irrelevant, because the estimates for the partial effects, \partial P(y=1 \mid x)/\partial x_j, will be close to the estimates given by the true logit model.

To avoid the issue of distribution misspecification, one may adopt a general distribution assumption for the error term, such that many different types of distribution can be included in the model. The cost is heavier computation and lower accuracy as the number of parameters increases. In most practical cases where the distribution form is misspecified, the estimators for the coefficients are inconsistent, but estimators for the conditional probability and the partial effects are still very good. One can also take semi-parametric or non-parametric approaches, e.g., via local-likelihood or nonparametric quasi-likelihood methods, which avoid assumptions on the parametric form of the index function and are robust to the choice of the link function (e.g., probit or logit).
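
The heteroskedasticity example above can be checked by simulation. The following sketch, with illustrative parameter values, fits probit on the transformed regressors (1, 1/x_1) and recovers (\beta_1, \beta_0).

```python
# Heteroskedastic latent errors with Var(eps | x) = x1^2: probit on the
# transformed regressors (1, 1/x1) is consistent for P(y=1|x).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def probit_mle(y, X):
    """Probit fit by maximizing the log-likelihood (as in the MLE sketch above)."""
    nll = lambda b: -np.sum(y * norm.logcdf(X @ b) + (1 - y) * norm.logcdf(-(X @ b)))
    return minimize(nll, np.zeros(X.shape[1]), method="BFGS").x

rng = np.random.default_rng(0)
n = 50_000
b0, b1 = 1.0, -0.5                           # assumed true parameters
x1 = rng.uniform(0.5, 3.0, size=n)           # continuous positive regressor
eps = rng.normal(size=n) * x1                # Var(eps | x) = x1^2
y = (b0 + b1 * x1 + eps > 0).astype(int)

# Coefficients on (1, 1/x1) estimate (b1, b0) = (-0.5, 1.0).
beta_hat = probit_mle(y, np.column_stack([np.ones(n), 1 / x1]))
print(beta_hat)
```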


History

The probit model is usually credited to Chester Bliss, who coined the term "probit" in 1934, and to John Gaddum (1933), who systematized earlier work. However, the basic model dates to the Weber–Fechner law by Gustav Fechner, published in 1860, and was repeatedly rediscovered until the 1930s. A fast method for computing maximum likelihood estimates for the probit model was proposed by Ronald Fisher as an appendix to Bliss's work in 1935.


See also

* Generalized linear model
* Limited dependent variable
* Logit model
* Multinomial probit
* Multivariate probit models
* Ordered probit and ordered logit model
* Separation (statistics)
* Tobit model

