Wilks' Theorem

In statistics, Wilks' theorem offers an asymptotic distribution of the log-likelihood ratio statistic, which can be used to produce confidence intervals for maximum-likelihood estimates or as a test statistic for performing the likelihood-ratio test.

Statistical tests (such as hypothesis testing) generally require knowledge of the probability distribution of the test statistic. This is often a problem for likelihood ratios, where the probability distribution can be very difficult to determine. A convenient result by Samuel S. Wilks says that as the sample size approaches \infty, the distribution of the test statistic -2 \log(\Lambda) asymptotically approaches the chi-squared (\chi^2) distribution under the null hypothesis H_0. Here, \Lambda denotes the likelihood ratio, and the \chi^2 distribution has degrees of freedom equal to the difference in dimensionality of \Theta and \Theta_0, where \Theta is the full parameter space and \Theta_0 is the subset of the parameter space associated with H_0. This result means that for large samples and a great variety of hypotheses, a practitioner can compute the likelihood ratio \Lambda for the data and compare -2\log(\Lambda) to the \chi^2 value corresponding to a desired statistical significance as an approximate statistical test.

The theorem no longer applies when the true value of the parameter is on the boundary of the parameter space: Wilks’ theorem assumes that the ‘true’ but unknown values of the estimated parameters lie within the interior of the supported parameter space. In practice, one will notice the problem if the estimate lies on that boundary. In that event, the likelihood test is still a sensible test statistic and even possesses some asymptotic optimality properties, but the significance (the ''p''-value) cannot be reliably estimated using the chi-squared distribution with the number of degrees of freedom prescribed by Wilks. In some cases, the asymptotic null-hypothesis distribution of the statistic is a mixture of chi-squared distributions with different numbers of degrees of freedom.
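To make the comparison concrete, the following is a minimal Python sketch using SciPy; the observed statistic, significance level, and parameter-space dimensions are hypothetical values chosen for illustration:

```python
from scipy.stats import chi2

# Hypothetical setup: H0 removes 2 free parameters, so
# df = dim(Theta) - dim(Theta_0) = 2, tested at the 5% level.
alpha = 0.05
df = 2

critical_value = chi2.ppf(1.0 - alpha, df)  # chi-squared quantile

neg2_log_lambda = 7.3  # hypothetical observed -2 log(Lambda)
reject_H0 = neg2_log_lambda > critical_value
print(f"critical value = {critical_value:.3f}, reject H0: {reject_H0}")
```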


Use

Each of the two competing models, the null model and the alternative model, is separately fitted to the data and the log-likelihood recorded. The test statistic (often denoted by D) is twice the log of the likelihoods ratio, ''i.e.'', it is twice the difference in the log-likelihoods:

: \begin{align} D & = -2\ln\left( \frac{\text{likelihood for null model}}{\text{likelihood for alternative model}} \right) \\ & = 2\ln\left( \frac{\text{likelihood for alternative model}}{\text{likelihood for null model}} \right) \\ & = 2 \times \left[ \ln(\text{likelihood for alternative model}) - \ln(\text{likelihood for null model}) \right] \end{align}

The model with more parameters (here ''alternative'') will always fit at least as well as the model with fewer parameters (here ''null''), ''i.e.'', it will have the same or greater log-likelihood. Whether the fit is significantly better and should thus be preferred is determined by deriving how likely (the ''p''-value) it is to observe such a difference D by ''chance alone'', if the model with fewer parameters were true. Where the null hypothesis represents a special case of the alternative hypothesis, the probability distribution of the test statistic is approximately a chi-squared distribution with degrees of freedom equal to \,df_\text{alt} - df_\text{null}\,, where df_\text{alt} and df_\text{null} are respectively the numbers of free parameters of the ''alternative'' and ''null'' models.

For example: if the null model has 1 parameter and a log-likelihood of −8024 and the alternative model has 3 parameters and a log-likelihood of −8012, then the probability of this difference is that of a chi-squared value of 2 \times (-8012 - (-8024)) = 24 with 3 - 1 = 2 degrees of freedom, and is equal to 6 \times 10^{-6}.

Certain assumptions must be met for the statistic to follow a chi-squared distribution, but empirical ''p''-values may also be computed if those conditions are not met.
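The worked example above can be checked directly; here is a short Python/SciPy sketch reproducing it:

```python
from scipy.stats import chi2

# Worked example from the text: null model with 1 parameter and
# log-likelihood -8024, alternative model with 3 parameters and
# log-likelihood -8012.
ll_null, df_null = -8024.0, 1
ll_alt, df_alt = -8012.0, 3

D = 2 * (ll_alt - ll_null)  # test statistic: 24
df = df_alt - df_null       # degrees of freedom: 2
p_value = chi2.sf(D, df)    # survival function, i.e. 1 - CDF

print(f"D = {D:.0f}, df = {df}, p = {p_value:.1e}")  # p is about 6e-6
```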


Examples


Coin tossing

An example of Pearson's test is a comparison of two coins to determine whether they have the same probability of coming up heads. The observations can be put into a contingency table with rows corresponding to the coin and columns corresponding to heads or tails. The elements of the contingency table will be the number of times each coin came up heads or tails. The contents of this table are our observations X.

: \begin{array}{c|cc} X & \text{Heads} & \text{Tails} \\ \hline \text{Coin 1} & k_{1\mathrm{H}} & k_{1\mathrm{T}} \\ \text{Coin 2} & k_{2\mathrm{H}} & k_{2\mathrm{T}} \end{array}

Here \Theta consists of the possible combinations of values of the parameters p_{1\mathrm{H}}, p_{1\mathrm{T}}, p_{2\mathrm{H}}, and p_{2\mathrm{T}}, which are the probability that coins 1 and 2 come up heads or tails. In what follows, i = 1, 2 and j = \mathrm{H}, \mathrm{T}. The hypothesis space H is constrained by the usual constraints on a probability distribution, 0 \le p_{ij} \le 1 and p_{i\mathrm{H}} + p_{i\mathrm{T}} = 1. The space of the null hypothesis H_0 is the subspace where p_{1j} = p_{2j}. The dimensionality of the full parameter space is 2 (either of the p_{1j} and either of the p_{2j} may be treated as free parameters under the hypothesis H), and the dimensionality of \Theta_0 is 1 (only one of the p_{1j} may be considered a free parameter under the null hypothesis H_0).

Writing n_{ij} for the best estimates of p_{ij} under the hypothesis H, the maximum likelihood estimate is given by

:n_{ij} = \frac{k_{ij}}{k_{i\mathrm{H}} + k_{i\mathrm{T}}}\,.

Similarly, the maximum likelihood estimates of p_{ij} under the null hypothesis H_0 are given by

:m_{j} = \frac{k_{1j} + k_{2j}}{k_{1\mathrm{H}} + k_{1\mathrm{T}} + k_{2\mathrm{H}} + k_{2\mathrm{T}}}\,,

which does not depend on the coin i.

The hypothesis and null hypothesis can be rewritten slightly so that they satisfy the constraints for the logarithm of the likelihood ratio to have the desired distribution. Since the constraint causes the two-dimensional H to be reduced to the one-dimensional H_0, the asymptotic distribution for the test will be \chi^2(1), the \chi^2 distribution with one degree of freedom.

For the general contingency table, we can write the log-likelihood ratio statistic as

:-2 \log \Lambda = 2\sum_{ij} k_{ij} \log \frac{n_{ij}}{m_{j}}\,.
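A short Python sketch (with hypothetical coin-toss counts) evaluates this statistic for a 2 × 2 table:

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical observations: rows are coins, columns are (heads, tails).
k = np.array([[43, 57],
              [62, 38]])

n = k / k.sum(axis=1, keepdims=True)  # MLEs n_ij under H (per-coin rates)
m = k.sum(axis=0) / k.sum()           # MLEs m_j under H0 (pooled rates)

# -2 log(Lambda) = 2 * sum_ij k_ij * log(n_ij / m_j)
stat = 2.0 * np.sum(k * np.log(n / m))
p_value = chi2.sf(stat, df=1)         # dim(Theta) - dim(Theta_0) = 1

print(f"-2 log(Lambda) = {stat:.3f}, p = {p_value:.4f}")
```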


Invalidity for random or mixed effects models

Wilks’ theorem assumes that the true but unknown values of the estimated parameters are in the interior of the parameter space. This is commonly violated in random or mixed effects models, for example, when one of the variance components is negligible relative to the others. In some such cases, one variance component can be essentially zero relative to the others, so that the true parameter lies on the boundary of the parameter space; in other cases, the models can be improperly nested.

To be clear: these limitations on Wilks’ theorem do ''not'' negate any power properties of a particular likelihood ratio test. The only issue is that a \chi^2 distribution is sometimes a poor choice for estimating the statistical significance of the result.


Bad examples

Pinheiro and Bates (2000) showed that the true distribution of this likelihood ratio chi-squared statistic could be substantially different from the naïve \chi^2, often dramatically so. The naïve assumptions could give significance probabilities (''p''-values) that are, on average, far too large in some cases and far too small in others.

In general, to test random effects, they recommend using restricted maximum likelihood (REML). For fixed-effects testing, they say, “a likelihood ratio test for REML fits is not feasible”, because changing the fixed effects specification changes the meaning of the mixed effects, and the restricted model is therefore not nested within the larger model.

As a demonstration, they set either one or two random effects variances to zero in simulated tests. In those particular examples, the simulated ''p''-values with k restrictions most closely matched a 50–50 mixture of \chi^2(k) and \chi^2(k-1). (With k = 1, \chi^2(0) is 0 with probability 1. This means that a good approximation was \,0.5\,\chi^2(1)\,.)

Pinheiro and Bates also simulated tests of different fixed effects. In one test of a factor with 4 levels (degrees of freedom = 3), they found that a 50–50 mixture of \chi^2(3) and \chi^2(4) was a good match for actual ''p''-values obtained by simulation, and the error in using the naïve \chi^2(3) “may not be too alarming.” However, in another test of a factor with 15 levels, they found a reasonable match to \chi^2(18), 4 more degrees of freedom than the 14 that one would get from a naïve (inappropriate) application of Wilks’ theorem, ''and'' the simulated ''p''-value was several times the naïve \chi^2(14). They conclude that for testing fixed effects, “it's wise to use simulation.”
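For the variance-component case above, the mixture null distribution is straightforward to evaluate; here is a sketch (the observed statistic value is hypothetical):

```python
from scipy.stats import chi2

def mixture_p_value(stat: float, k: int) -> float:
    """p-value under a 50-50 mixture of chi2(k) and chi2(k-1).

    For k = 1 the chi2(0) component is a point mass at zero, so the
    mixture reduces to 0.5 * chi2(1) for any positive statistic.
    """
    upper = chi2.sf(stat, k)
    lower = chi2.sf(stat, k - 1) if k > 1 else 0.0
    return 0.5 * (upper + lower)

# Testing a single variance component (k = 1) with a hypothetical
# observed statistic; the result is half the naive chi2(1) p-value.
print(mixture_p_value(3.2, k=1))
```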


See also

* Bayes factor
* Model selection
* Sup-LR test

