In statistics, extensions of Fisher's method are a group of approaches that allow approximately valid statistical inferences to be made when the assumptions required for the direct application of Fisher's method
are not valid. Fisher's method is a way of combining the information in the p-values from different statistical tests so as to form a single overall test: this method requires that the individual test statistics (or, more immediately, their resulting p-values) should be statistically independent.
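As a baseline for the extensions below, Fisher's method for independent p-values can be sketched as follows (a minimal illustration using SciPy; the function and variable names are illustrative, not from any particular library):

```python
import numpy as np
from scipy.stats import chi2

def fisher_combine(p_values):
    """Combine independent p-values with Fisher's method.

    X = -2 * sum(ln p_i) follows a chi-squared distribution with
    2k degrees of freedom under the global null hypothesis.
    """
    p = np.asarray(p_values, dtype=float)
    X = -2.0 * np.sum(np.log(p))
    return chi2.sf(X, df=2 * len(p))  # combined p-value

# Example: three independent tests
print(fisher_combine([0.01, 0.20, 0.50]))
```

For a single p-value the method is the identity, since the survival function inverts the log transformation exactly; the extensions below modify the null distribution of X when the p-values are dependent.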
Dependent statistics
A principal limitation of Fisher's method is that it is designed exclusively to combine independent p-values, which makes it unreliable for combining dependent p-values. To overcome this limitation, a number of methods have been developed to extend its utility.
Known covariance
Brown's method
Fisher showed that the log-sum of ''k'' independent ''p''-values follows a ''χ''<sup>2</sup>-distribution with 2''k'' degrees of freedom:
:<math>X = -2\sum_{i=1}^k \log(p_i) \sim \chi^2(2k)</math>
In the case that these p-values are not independent, Brown proposed the idea of approximating ''X'' using a scaled ''χ''<sup>2</sup>-distribution, ''cχ''<sup>2</sup>(''k&prime;''), with ''k&prime;'' degrees of freedom.
The mean and variance of this scaled ''χ''<sup>2</sup> variable are:
:<math>\operatorname{E}[c\chi^2(k')] = ck'</math>
:<math>\operatorname{Var}[c\chi^2(k')] = 2c^2k'</math>
where <math>c = \operatorname{Var}(X)/(2\operatorname{E}[X])</math> and <math>k' = 2(\operatorname{E}[X])^2/\operatorname{Var}(X)</math>. This approximation is accurate up to the first two moments.
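Assuming the covariances cov(−2 ln ''p<sub>i</sub>'', −2 ln ''p<sub>j</sub>'') are already known, Brown's moment-matching step can be sketched as follows (the function and variable names are illustrative):

```python
import numpy as np
from scipy.stats import chi2

def brown_combine(p_values, cov_matrix):
    """Brown's method: combine dependent p-values by moment-matching.

    cov_matrix[i, j] holds the known covariance of -2*ln(p_i) and
    -2*ln(p_j); its diagonal is 4 (the variance of a chi2(2) variable).
    """
    p = np.asarray(p_values, dtype=float)
    k = len(p)
    X = -2.0 * np.sum(np.log(p))
    mean = 2.0 * k                 # E[X]: each -2*ln(p_i) has mean 2
    var = np.sum(cov_matrix)       # Var(X): sum of all pairwise covariances
    c = var / (2.0 * mean)         # scale factor
    k_prime = 2.0 * mean**2 / var  # effective degrees of freedom
    return chi2.sf(X / c, df=k_prime)

# With independent tests (covariance matrix 4*I) this reduces to
# Fisher's method: c = 1 and k' = 2k.
p = [0.01, 0.20, 0.50]
print(brown_combine(p, 4.0 * np.eye(3)))
```

Positive off-diagonal covariance inflates Var(''X''), which lowers the effective degrees of freedom and makes the combined test less anti-conservative than naively applying Fisher's method.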
Unknown covariance
Harmonic mean ''p''-value
The harmonic mean ''p''-value offers an alternative to Fisher's method for combining ''p''-values when the dependency structure is unknown but the tests cannot be assumed to be independent.
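The basic harmonic mean p-value can be sketched as follows. This is the uncorrected version, which omits the asymptotically exact small-p correction based on the Landau distribution; names are illustrative:

```python
import numpy as np

def harmonic_mean_p(p_values, weights=None):
    """Weighted harmonic mean of p-values (uncorrected version).

    Robust to positive dependency between tests, but anti-conservative
    for very small results unless the Landau-based correction is applied.
    """
    p = np.asarray(p_values, dtype=float)
    w = (np.ones_like(p) / len(p)) if weights is None else np.asarray(weights)
    return np.sum(w) / np.sum(w / p)

print(harmonic_mean_p([0.01, 0.20, 0.50]))
```

The statistic is dominated by the smallest p-values, since they contribute the largest terms to the denominator.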
Kost's method: ''t'' approximation
This method requires the test statistics' covariance structure to be known up to a scalar multiplicative constant.
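When only the pairwise correlations ''ρ<sub>ij</sub>'' of the underlying test statistics are available, Kost and McDermott's cubic polynomial approximation to cov(−2 ln ''p<sub>i</sub>'', −2 ln ''p<sub>j</sub>'') can be plugged into Brown's moment-matching. A sketch, assuming the cubic coefficients reported by Kost and McDermott; the function name is illustrative:

```python
def kost_covariance(rho):
    """Approximate cov(-2*ln p_i, -2*ln p_j) from the correlation rho
    of the underlying test statistics (Kost & McDermott's cubic fit)."""
    return 3.263 * rho + 0.710 * rho**2 + 0.027 * rho**3
```

As a consistency check, the approximation returns 0 for uncorrelated statistics and 4 (the variance of a ''χ''<sup>2</sup>(2) variable) for perfectly correlated ones.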
Cauchy combination test
This is conceptually similar to Fisher's method: it computes a sum of transformed ''p''-values. Unlike Fisher's method, which uses a log transformation to obtain a test statistic that has a chi-squared distribution under the null, the Cauchy combination test uses a tan transformation to obtain a test statistic whose tail is asymptotic to that of a Cauchy distribution under the null. The test statistic is:
:<math>T = \sum_{i=1}^k w_i \tan\{(0.5 - p_i)\pi\}</math>
where <math>w_i</math> are non-negative weights, subject to <math>\textstyle\sum_{i=1}^k w_i = 1</math>. Under the null, the <math>p_i</math> are uniformly distributed, therefore each term <math>\tan\{(0.5 - p_i)\pi\}</math> follows a standard Cauchy distribution.
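With equal weights, the Cauchy combination test can be sketched as follows (a SciPy-based illustration; the names are not from any particular library):

```python
import numpy as np
from scipy.stats import cauchy

def cauchy_combine(p_values, weights=None):
    """Cauchy combination test.

    Each term tan((0.5 - p_i)*pi) is standard Cauchy under the null,
    and the tail of the weighted sum is asymptotically Cauchy even
    under dependency between the tests.
    """
    p = np.asarray(p_values, dtype=float)
    w = (np.ones_like(p) / len(p)) if weights is None else np.asarray(weights)
    T = np.sum(w * np.tan((0.5 - p) * np.pi))
    return cauchy.sf(T)  # approximate combined p-value

print(cauchy_combine([0.01, 0.20, 0.50]))
```

Because the Cauchy tail approximation holds regardless of the dependency structure, no covariance information is needed, which is the method's main practical advantage over Brown's and Kost's approaches.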