Extensions Of Fisher's Method
In statistics, extensions of Fisher's method are a group of approaches that allow approximately valid statistical inferences to be made when the assumptions required for the direct application of Fisher's method do not hold. Fisher's method combines the information in the ''p''-values from different statistical tests to form a single overall test; it requires that the individual test statistics (or, more immediately, their resulting ''p''-values) be statistically independent.
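
For reference, here is a minimal Python sketch of the basic method being extended, assuming SciPy is available for the chi-squared tail probability (the function name is illustrative):

```python
import numpy as np
from scipy.stats import chi2

def fishers_method(p_values):
    """Fisher's combined probability test for independent p-values.

    X = -2 * sum(log p_i) follows a chi-squared distribution with
    2k degrees of freedom when the k p-values are independent and
    uniformly distributed under the null.
    """
    p = np.asarray(p_values, dtype=float)
    X = -2.0 * np.sum(np.log(p))
    return chi2.sf(X, df=2 * p.size)  # combined p-value
```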


Dependent statistics

A principal limitation of Fisher's method is that it is designed exclusively to combine independent ''p''-values, which makes it unreliable for combining dependent ''p''-values. To overcome this limitation, a number of methods have been developed to extend its utility.


Known covariance


Brown's method

Fisher showed that the log-sum of ''k'' independent ''p''-values follows a ''χ''2-distribution with 2''k'' degrees of freedom:

: X = -2\sum_{i=1}^k \log_e(p_i) \sim \chi^2(2k) .

When these ''p''-values are not independent, Brown proposed approximating ''X'' by a scaled ''χ''2-distribution, ''cχ''2(''k’''), with ''k’'' degrees of freedom. The mean and variance of this scaled ''χ''2 variable are:

: \operatorname{E}[c\chi^2(k')] = ck' ,
: \operatorname{Var}[c\chi^2(k')] = 2c^2k' ,

where c = \operatorname{Var}(X) / (2\operatorname{E}[X]) and k' = 2(\operatorname{E}[X])^2 / \operatorname{Var}(X). These choices match the first two moments of ''X'', so the approximation is accurate up to two moments.
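
A minimal sketch of Brown's approximation in Python, assuming the covariance matrix of the terms -2 log(''p_i'') is known (the function name and interface are illustrative, not from a published library):

```python
import numpy as np
from scipy.stats import chi2

def browns_method(p_values, cov_X):
    """Combine dependent p-values via Brown's scaled chi-squared
    approximation.

    p_values : array of k p-values.
    cov_X    : assumed-known k x k covariance matrix of the terms
               -2*log(p_i); under independence it is 4*I, since each
               term is chi-squared with 2 df (variance 4).
    """
    p = np.asarray(p_values, dtype=float)
    k = p.size
    X = -2.0 * np.sum(np.log(p))        # Fisher's statistic
    mean_X = 2.0 * k                    # E[X] = 2k under the null
    var_X = np.sum(cov_X)               # Var(X) = sum of all covariances
    c = var_X / (2.0 * mean_X)          # scale factor
    k_prime = 2.0 * mean_X**2 / var_X   # adjusted degrees of freedom
    # X/c is approximately chi-squared with k' degrees of freedom
    return chi2.sf(X / c, df=k_prime)
```

With `cov_X = 4 * np.eye(k)` (independent tests) this gives c = 1 and k' = 2k, recovering Fisher's method exactly.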


Unknown covariance


Harmonic mean ''p''-value

The harmonic mean ''p''-value offers an alternative to Fisher's method for combining ''p''-values when the dependency structure is unknown but the tests cannot be assumed to be independent.
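
A minimal Python sketch of the weighted harmonic mean ''p''-value (HMP); exact calibration of the HMP requires further machinery (the tail of the Landau distribution), which this sketch omits, so the returned value is only an approximately valid combined ''p''-value at small significance thresholds:

```python
import numpy as np

def harmonic_mean_p(p_values, weights=None):
    """Weighted harmonic mean p-value of possibly dependent tests.

    With weights summing to 1 this is sum(w) / sum(w / p); equal
    weights are used if none are given.
    """
    p = np.asarray(p_values, dtype=float)
    w = (np.full(p.size, 1.0 / p.size) if weights is None
         else np.asarray(weights, dtype=float))
    return np.sum(w) / np.sum(w / p)
```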


Kost's method: ''t'' approximation

This method requires the test statistics' covariance structure to be known up to a scalar multiplicative constant.
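
A sketch of one common implementation route, using the cubic polynomial of Kost and McDermott to approximate the covariance of the -2 log(''p_i'') terms from the pairwise correlations of the underlying test statistics, then proceeding as in Brown's method; the polynomial coefficients below are the commonly cited published approximation, but the function itself is illustrative:

```python
import numpy as np
from scipy.stats import chi2

def kost_method(p_values, rho):
    """Combine dependent p-values when the correlation matrix rho of
    the underlying test statistics is known (up to scale).

    cov(-2*log p_i, -2*log p_j) is approximated by the cubic
    polynomial 3.263*r + 0.710*r**2 + 0.027*r**3; at r = 1 this
    gives 4, the variance of a chi-squared variable with 2 df.
    """
    p = np.asarray(p_values, dtype=float)
    rho = np.asarray(rho, dtype=float)
    k = p.size
    X = -2.0 * np.sum(np.log(p))
    cov = 3.263 * rho + 0.710 * rho**2 + 0.027 * rho**3
    # Then apply Brown's scaled chi-squared approximation.
    mean_X = 2.0 * k
    var_X = np.sum(cov)
    c = var_X / (2.0 * mean_X)
    k_prime = 2.0 * mean_X**2 / var_X
    return chi2.sf(X / c, df=k_prime)
```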


Cauchy combination test

This is conceptually similar to Fisher's method: it computes a sum of transformed ''p''-values. Unlike Fisher's method, which uses a log transformation to obtain a test statistic with a chi-squared distribution under the null, the Cauchy combination test uses a tangent transformation to obtain a test statistic whose tail is asymptotic to that of a Cauchy distribution under the null. The test statistic is:

: X = \sum_{i=1}^k \omega_i \tan[(0.5 - p_i)\pi] ,

where the \omega_i are non-negative weights, subject to \sum_{i=1}^k \omega_i = 1. Under the null, the p_i are uniformly distributed, so each \tan[(0.5 - p_i)\pi] is standard Cauchy distributed. Under some mild assumptions, but allowing for arbitrary dependency between the p_i, the tail of the distribution of ''X'' is asymptotic to that of a Cauchy distribution. More precisely, letting ''W'' denote a standard Cauchy random variable:

: \lim_{t \to +\infty} \frac{P(X > t)}{P(W > t)} = 1.

This leads to a combined hypothesis test, in which ''X'' is compared to the quantiles of the Cauchy distribution.
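
A minimal Python sketch of the Cauchy combination test as described above (equal weights by default; the function name is illustrative):

```python
import numpy as np
from scipy.stats import cauchy

def cauchy_combination_test(p_values, weights=None):
    """Cauchy combination test for possibly dependent p-values.

    Each p-value is transformed with tan((0.5 - p) * pi), which is
    standard Cauchy under the null; the combined p-value is read off
    the standard Cauchy tail, to which the weighted sum X is
    asymptotic under the null.
    """
    p = np.asarray(p_values, dtype=float)
    k = p.size
    w = (np.full(k, 1.0 / k) if weights is None
         else np.asarray(weights, dtype=float))
    X = np.sum(w * np.tan((0.5 - p) * np.pi))
    return cauchy.sf(X)  # P(W > X) for a standard Cauchy W
```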

