In statistics, Levene's test is an inferential statistic used to assess the equality of variances for a variable calculated for two or more groups. Some common statistical procedures assume that the variances of the populations from which different samples are drawn are equal; Levene's test assesses this assumption. It tests the null hypothesis that the population variances are equal (called ''homogeneity of variance'' or ''homoscedasticity''). If the resulting ''p''-value of Levene's test is less than some significance level (typically 0.05), the obtained differences in sample variances are unlikely to have occurred by random sampling from a population with equal variances. Thus, the null hypothesis of equal variances is rejected and it is concluded that there is a difference between the variances in the population. Some of the procedures that typically assume homoscedasticity, and for which one can use Levene's test, include analysis of variance (ANOVA) and ''t''-tests.

Levene's test is sometimes used before a comparison of means, informing the decision on whether to use a pooled ''t''-test or Welch's ''t''-test. However, it has been shown that such a two-step procedure may markedly inflate the type I error of the subsequent ''t''-test and thus should not be done in the first place; instead, the choice between the pooled and Welch's test should be made a priori based on the study design. Levene's test may also be used as a main test for answering a stand-alone question of whether two sub-samples in a given population have equal or different variances. Levene's test was developed by and named after the American statistician and geneticist Howard Levene.
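
As an illustration, the following is a minimal sketch of how such a test might be run with SciPy's scipy.stats.levene, which implements this test; the group sizes, population variances and the 0.05 cutoff are arbitrary example choices, not part of the test's definition.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import levene

# Two groups drawn from normal populations with clearly different variances.
rng = np.random.default_rng(0)
group_a = rng.normal(loc=5.0, scale=1.0, size=30)   # population variance 1
group_b = rng.normal(loc=5.0, scale=3.0, size=30)   # population variance 9

# Classic Levene's test: deviations are taken from the group means.
stat, p_value = levene(group_a, group_b, center='mean')

if p_value < 0.05:
    print(f"W = {stat:.3f}, p = {p_value:.4f}: reject the null of equal variances")
else:
    print(f"W = {stat:.3f}, p = {p_value:.4f}: no evidence against equal variances")
</syntaxhighlight>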


Definition

Levene's test is equivalent to a one-way between-groups analysis of variance (ANOVA) with the dependent variable being the absolute value of the difference between a score and the mean of the group to which the score belongs (shown below as Z_{ij} = |Y_{ij} - \bar{Y}_{i\cdot}|). The test statistic, W, is equivalent to the F statistic that would be produced by such an ANOVA, and is defined as follows:

: W = \frac{N-k}{k-1} \cdot \frac{\sum_{i=1}^{k} N_i (Z_{i\cdot} - Z_{\cdot\cdot})^2}{\sum_{i=1}^{k} \sum_{j=1}^{N_i} (Z_{ij} - Z_{i\cdot})^2},

where
* k is the number of different groups to which the sampled cases belong,
* N_i is the number of cases in the ith group,
* N is the total number of cases in all groups,
* Y_{ij} is the value of the measured variable for the jth case from the ith group,
* Z_{ij} = \begin{cases} |Y_{ij} - \bar{Y}_{i\cdot}|, & \text{where } \bar{Y}_{i\cdot} \text{ is the mean of the } i\text{th group}, \\ |Y_{ij} - \tilde{Y}_{i\cdot}|, & \text{where } \tilde{Y}_{i\cdot} \text{ is the median of the } i\text{th group}. \end{cases} (Both definitions are in use, though the second one is, strictly speaking, the Brown–Forsythe test – see below for comparison.)
* Z_{i\cdot} = \frac{1}{N_i} \sum_{j=1}^{N_i} Z_{ij} is the mean of the Z_{ij} for group i,
* Z_{\cdot\cdot} = \frac{1}{N} \sum_{i=1}^{k} \sum_{j=1}^{N_i} Z_{ij} is the mean of all Z_{ij}.

The test statistic W is approximately F-distributed with k-1 and N-k degrees of freedom; hence the significance of an observed value w of W is tested against F(1-\alpha; k-1, N-k), where F is a quantile of the F-distribution with k-1 and N-k degrees of freedom, and \alpha is the chosen level of significance (usually 0.05 or 0.01).
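
To make the formula concrete, here is a small sketch that computes W and its ''p''-value directly from the definitions above; the function name levene_statistic and the reliance on NumPy and SciPy's F-distribution are illustrative assumptions rather than part of the original definition.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import f as f_dist

def levene_statistic(groups, center=np.mean):
    """Compute Levene's W and its p-value for a list of 1-D samples.

    Pass center=np.mean for the classic Levene test or center=np.median
    for the Brown-Forsythe variant.
    """
    k = len(groups)
    N = sum(len(g) for g in groups)

    # Z_ij = |Y_ij - center of group i|
    Z = [np.abs(np.asarray(g, dtype=float) - center(g)) for g in groups]
    Z_i = np.array([z.mean() for z in Z])        # per-group means of the Z_ij
    Z_all = np.concatenate(Z).mean()             # grand mean of all Z_ij

    between = sum(len(z) * (zi - Z_all) ** 2 for z, zi in zip(Z, Z_i))
    within = sum(((z - zi) ** 2).sum() for z, zi in zip(Z, Z_i))

    W = (N - k) / (k - 1) * between / within
    p_value = f_dist.sf(W, k - 1, N - k)         # upper tail of F(k-1, N-k)
    return W, p_value
</syntaxhighlight>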


Comparison with the Brown–Forsythe test

The Brown–Forsythe test uses the median instead of the mean in computing the spread within each group (\tilde{Y}_{i\cdot} rather than \bar{Y}_{i\cdot}, above). Although the optimal choice depends on the underlying distribution, the definition based on the median is recommended as the choice that provides good robustness against many types of non-normal data while retaining good statistical power. If one has knowledge of the underlying distribution of the data, this may indicate using one of the other choices. Brown and Forsythe performed Monte Carlo studies indicating that the trimmed mean performed best when the underlying data followed a Cauchy distribution (a heavy-tailed distribution), and that the median performed best when the underlying data followed a chi-squared distribution with four degrees of freedom (a heavily skewed distribution). Using the mean provided the best power for symmetric, moderate-tailed distributions.
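
Assuming SciPy is used, its center argument to scipy.stats.levene selects among the centring choices discussed above ('mean' for the classic test, 'median' for the Brown–Forsythe variant, 'trimmed' for a trimmed mean); the skewed example data below are purely illustrative.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import levene

# Skewed example data: two groups drawn from chi-squared distributions,
# the second with an inflated spread.
rng = np.random.default_rng(1)
group_a = rng.chisquare(df=4, size=40)
group_b = 2.0 * rng.chisquare(df=4, size=40)

# Compare the three centring choices on the same data.
for center in ('mean', 'median', 'trimmed'):
    stat, p_value = levene(group_a, group_b, center=center)
    print(f"center={center:<8s} W = {stat:.3f}  p = {p_value:.4f}")
</syntaxhighlight>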


See also

* Bartlett's test
* F-test of equality of variances
* Box's M test



External links


* Parametric and nonparametric Levene's test in SPSS
* http://www.itl.nist.gov/div898/handbook/eda/section3/eda35a.htm