Score test

In statistics, the score test assesses constraints on statistical parameters based on the gradient of the likelihood function, known as the ''score'', evaluated at the hypothesized parameter value under the null hypothesis. Intuitively, if the restricted estimator is near the maximum of the likelihood function, the score should not differ from zero by more than sampling error. While the finite-sample distributions of score tests are generally unknown, they have an asymptotic χ²-distribution under the null hypothesis, as first proved by C. R. Rao in 1948, a fact that can be used to determine statistical significance. Since function maximization subject to equality constraints is most conveniently done using a Lagrangean expression of the problem, the score test can be equivalently understood as a test of the magnitude of the Lagrange multipliers associated with the constraints where, again, if the constraints are non-binding at the maximum likelihood, the vector of Lagrange multipliers should not differ from zero by more than sampling error. The equivalence of these two approaches was first shown by S. D. Silvey in 1959, which led to the name ''Lagrange multiplier test'', a name that has become more commonly used, particularly in econometrics, since Breusch and Pagan's much-cited 1980 paper.

The main advantage of the score test over the Wald test and the likelihood-ratio test is that the score test requires computation of only the restricted estimator. This makes testing feasible when the unconstrained maximum likelihood estimate is a boundary point in the parameter space. Further, because the score test requires estimation of the likelihood function only under the null hypothesis, it is less specific than the likelihood-ratio test about the alternative hypothesis.


Single-parameter test


The statistic

Let L be the likelihood function, which depends on a univariate parameter \theta, and let x be the data. The score U(\theta) is defined as

: U(\theta) = \frac{\partial \log L(\theta \mid x)}{\partial \theta}.

The Fisher information is

: I(\theta) = -\operatorname{E} \left[\left. \frac{\partial^2}{\partial\theta^2} \log f(X;\theta) \,\right|\, \theta \right],

where f is the probability density. The statistic to test \mathcal{H}_0:\theta=\theta_0 is

: S(\theta_0) = \frac{U(\theta_0)^2}{I(\theta_0)},

which has an asymptotic distribution of \chi^2_1 when \mathcal{H}_0 is true. While asymptotically identical, calculating the LM statistic using the outer-gradient-product estimator of the Fisher information matrix can lead to bias in small samples.
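
As a concrete illustration, here is a minimal Python sketch of this test for a binomial proportion, where the score and Fisher information have simple closed forms; the function name and the numbers are our own illustrative assumptions, not part of the original article.

    from scipy.stats import chi2

    # Sketch: score test for H0: p = p0 with X ~ Binomial(n, p).
    # log L(p) = x*log(p) + (n - x)*log(1 - p) + const, so the score is
    #   U(p) = x/p - (n - x)/(1 - p)
    # and the Fisher information is
    #   I(p) = n / (p*(1 - p)).
    def score_test_binomial(x, n, p0):
        u = x / p0 - (n - x) / (1 - p0)    # score at the hypothesized value
        i = n / (p0 * (1 - p0))            # Fisher information at p0
        s = u ** 2 / i                     # S(p0) = U(p0)^2 / I(p0)
        return s, chi2.sf(s, df=1)         # asymptotically chi-squared(1) under H0

    # e.g. 62 successes in 100 trials, testing H0: p = 0.5
    s, p_value = score_test_binomial(x=62, n=100, p0=0.5)
    print(f"S = {s:.3f}, p-value = {p_value:.4f}")   # S = 5.760, p ~= 0.016

Note that only the hypothesized value p0 appears in the computation: no unrestricted maximum likelihood fit is needed, which is the advantage highlighted in the introduction.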


Note on notation

Note that some texts use an alternative notation, in which the statistic S^*(\theta) = \sqrt{S(\theta)} is tested against a normal distribution. This approach is equivalent and gives identical results.
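
To see the equivalence (a standard observation, added here as a gloss), note that the signed square root of S is asymptotically standard normal under \mathcal{H}_0:

: S^*(\theta_0) = \frac{U(\theta_0)}{\sqrt{I(\theta_0)}} \xrightarrow{d} N(0,1),

with S^*(\theta_0)^2 = S(\theta_0), so a two-sided normal test on S^* rejects exactly when the \chi^2_1 test on S does.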


As most powerful test for small deviations

The test rejects \mathcal{H}_0 when

: \left(\frac{\partial \log L(\theta \mid x)}{\partial \theta}\right)_{\theta=\theta_0} \geq C,

where L is the likelihood function, \theta_0 is the value of the parameter of interest under the null hypothesis, and C is a constant set depending on the size of the test desired (i.e. the probability of rejecting H_0 if H_0 is true; see Type I error). The score test is the most powerful test for small deviations from H_0. To see this, consider testing \theta=\theta_0 versus \theta=\theta_0+h. By the Neyman–Pearson lemma, the most powerful test has the form

: \frac{L(\theta_0 + h \mid x)}{L(\theta_0 \mid x)} \geq K.

Taking the log of both sides yields

: \log L(\theta_0 + h \mid x) - \log L(\theta_0 \mid x) \geq \log K.

The score test follows on making the substitution (by Taylor series expansion)

: \log L(\theta_0 + h \mid x) \approx \log L(\theta_0 \mid x) + h \times \left(\frac{\partial \log L(\theta \mid x)}{\partial \theta}\right)_{\theta=\theta_0}

and identifying the C above with \log(K)/h (taking h > 0).
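
For h < 0 the same argument reverses the inequality, so the test rejects for small values of the score. Squaring the score and normalizing by the Fisher information, a step left implicit above, combines the two one-sided tests into the two-sided statistic introduced earlier:

: S(\theta_0) = \frac{U(\theta_0)^2}{I(\theta_0)},

which rejects for large values regardless of the sign of the deviation h.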


Relationship with other hypothesis tests

If the null hypothesis is true, the likelihood-ratio test, the Wald test, and the score test are asymptotically equivalent tests of hypotheses. When testing nested models, the statistics for each test converge to a chi-squared distribution with degrees of freedom equal to the difference in degrees of freedom between the two models. If the null hypothesis is not true, however, the statistics converge to a noncentral chi-squared distribution with possibly different noncentrality parameters.


Multiple parameters

A more general score test can be derived when there is more than one parameter. Suppose that \widehat{\theta}_0 is the maximum likelihood estimate of \theta under the null hypothesis H_0, while U and I are, respectively, the score vector and the Fisher information matrix under the alternative hypothesis. Then

: U^T(\widehat{\theta}_0) I^{-1}(\widehat{\theta}_0) U(\widehat{\theta}_0) \sim \chi^2_k

asymptotically under H_0, where k is the number of constraints imposed by the null hypothesis,

: U(\widehat{\theta}_0) = \frac{\partial \log L(\widehat{\theta}_0 \mid x)}{\partial \theta},

and

: I(\widehat{\theta}_0) = -\operatorname{E}\left(\frac{\partial^2 \log L(\widehat{\theta}_0 \mid x)}{\partial \theta \, \partial \theta'} \right).

This can be used to test H_0. The actual formula for the test statistic depends on which estimator of the Fisher information matrix is being used.
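
As a worked multi-parameter example (our own sketch; the model, function name, and numbers are assumptions, not from the article), consider Normal(\mu, \sigma^2) data with both parameters unknown and H_0: \mu = \mu_0. Under H_0 the restricted MLE is \widehat{\sigma}^2_0 = \frac{1}{n}\sum_i (x_i - \mu_0)^2, the \sigma^2 component of the score vanishes at that estimate, and the expected Fisher information is \operatorname{diag}(n/\sigma^2,\; n/(2\sigma^4)).

    import numpy as np
    from scipy.stats import chi2

    def score_test_normal_mean(x, mu0):
        n = len(x)
        sigma2_0 = np.mean((x - mu0) ** 2)    # restricted MLE of sigma^2 under H0
        # Score vector at (mu0, sigma2_0); its sigma^2 component is zero there.
        u = np.array([n * (np.mean(x) - mu0) / sigma2_0, 0.0])
        # Expected Fisher information for (mu, sigma^2) at the restricted fit.
        info = np.diag([n / sigma2_0, n / (2 * sigma2_0 ** 2)])
        s = u @ np.linalg.solve(info, u)      # U^T I^{-1} U
        return s, chi2.sf(s, df=1)            # one constraint, so k = 1

    rng = np.random.default_rng(42)
    x = rng.normal(loc=0.4, scale=1.0, size=30)
    s, p = score_test_normal_mean(x, mu0=0.0)
    print(f"S = {s:.3f}, p = {p:.4f}")        # algebraically S = n*(xbar - mu0)^2 / sigma2_0

Here the statistic collapses to n(\bar{x}-\mu_0)^2 / \widehat{\sigma}^2_0. With the expected information this algebra is exact; an observed-information or outer-product estimate would give a slightly different finite-sample value, illustrating the remark above about the choice of estimator.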


Special cases

In many situations, the score statistic reduces to another commonly used statistic. In linear regression, the Lagrange multiplier test can be expressed as a function of the ''F''-test. When the data follows a normal distribution, the score statistic is the same as the ''t'' statistic. When the data consists of binary observations, the score statistic is the same as the chi-squared statistic in Pearson's chi-squared test.
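
The binary-data case can be checked directly; this small sketch (our own, reusing the earlier binomial numbers) compares the score statistic against scipy's Pearson chi-squared implementation.

    from scipy.stats import chisquare

    # Binary data: 62 successes out of n = 100, testing H0: p = 0.5.
    x, n, p0 = 62, 100, 0.5
    score_stat = (x - n * p0) ** 2 / (n * p0 * (1 - p0))           # score statistic
    pearson = chisquare([x, n - x], f_exp=[n * p0, n * (1 - p0)])  # Pearson chi-squared
    print(score_stat, pearson.statistic)   # both print 5.76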


See also

* Fisher information
* Uniformly most powerful test
* Score (statistics)
* Sup-LM test

