Holm–Bonferroni Method

In statistics, the Holm–Bonferroni method, also called the Holm method or Bonferroni–Holm method, is used to counteract the problem of multiple comparisons. It is intended to control the family-wise error rate (FWER) and offers a simple test uniformly more powerful than the Bonferroni correction. It is named after Sture Holm, who codified the method, and Carlo Emilio Bonferroni.


Motivation

When considering several hypotheses, the problem of multiplicity arises: the more hypotheses are checked, the higher the probability of obtaining Type I errors (false positives). The Holm–Bonferroni method is one of many approaches for controlling the FWER, i.e., the probability that one or more Type I errors will occur, by adjusting the rejection criteria for each of the individual hypotheses.


Formulation

The method is as follows:
* Suppose you have m ''p''-values, sorted into order lowest-to-highest P_1, \ldots, P_m, and their corresponding hypotheses H_1, \ldots, H_m (null hypotheses). You want the FWER to be no higher than a certain pre-specified significance level \alpha.
* Is P_1 < \alpha/m? If so, reject H_1 and continue to the next step, otherwise EXIT.
* Is P_2 < \alpha/(m-1)? If so, reject H_2 also, and continue to the next step, otherwise EXIT.
* And so on: for each ''p''-value, test whether P_k < \frac{\alpha}{m+1-k}. If so, reject H_k and continue to examine the larger ''p''-values, otherwise EXIT.
This method ensures that the FWER is at most \alpha, in the strong sense.
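The step-down procedure translates directly into code. The following Python sketch (the function name holm_bonferroni and its interface are illustrative, not part of the method's original description) tests the ordered ''p''-values against the thresholds \alpha/m, \alpha/(m-1), \ldots and stops at the first failure:

    def holm_bonferroni(p_values, alpha=0.05):
        """Return a list of booleans: True where the corresponding null
        hypothesis is rejected while controlling the FWER at level alpha."""
        m = len(p_values)
        # Indices of the hypotheses, ordered by ascending p-value.
        order = sorted(range(m), key=lambda i: p_values[i])
        reject = [False] * m
        for step, i in enumerate(order):
            # Compare the (step+1)-th smallest p-value with alpha/(m - step).
            if p_values[i] < alpha / (m - step):
                reject[i] = True
            else:
                break  # stop testing at the first failure to reject
        return reject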


Rationale

The simple Bonferroni correction rejects only null hypotheses with ''p''-value less than \frac{\alpha}{m}, in order to ensure that the FWER, i.e., the risk of rejecting one or more true null hypotheses (i.e., of committing one or more type I errors), is at most \alpha. The cost of this protection against type I errors is an increased risk of failing to reject one or more false null hypotheses (i.e., of committing one or more type II errors).

The Holm–Bonferroni method also controls the FWER at \alpha, but with a lower increase of type II error risk than the classical Bonferroni method. It sorts the ''p''-values from lowest to highest and compares them to nominal alpha levels of \frac{\alpha}{m} to \alpha (respectively), namely the values \frac{\alpha}{m}, \frac{\alpha}{m-1}, \ldots, \frac{\alpha}{2}, \frac{\alpha}{1}. Let k be the minimal index such that P_{(k)} > \frac{\alpha}{m+1-k}.
* The index k identifies the first ''p''-value that is ''not'' low enough to validate rejection. Therefore, the null hypotheses H_{(1)}, \ldots, H_{(k-1)} are rejected, while the null hypotheses H_{(k)}, \ldots, H_{(m)} are not rejected.
* If k = 1 then no ''p''-values were low enough for rejection, therefore no null hypotheses are rejected.
* If no such index k could be found then all ''p''-values were low enough for rejection, therefore all null hypotheses are rejected (none are accepted).
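As a concrete illustration of the nominal levels and the cutoff index k, here is a small Python snippet (the numbers are the sorted ''p''-values used in the example section below):

    alpha, m = 0.05, 4
    p_sorted = [0.005, 0.01, 0.03, 0.04]           # ordered p-values P_(1) .. P_(4)
    levels = [alpha / (m - j) for j in range(m)]   # alpha/4, alpha/3, alpha/2, alpha/1
    # k = first 1-based index whose p-value exceeds its nominal level
    k = next((j + 1 for j in range(m) if p_sorted[j] > levels[j]), None)
    print(levels)  # [0.0125, 0.0167, 0.025, 0.05] (rounded)
    print(k)       # 3 -> reject H_(1) and H_(2), retain H_(3) and H_(4)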


Proof

Holm–Bonferroni controls the FWER as follows. Let H_{(1)}, \ldots, H_{(m)} be a family of hypotheses, and P_{(1)} \leq P_{(2)} \leq \cdots \leq P_{(m)} be the sorted ''p''-values. Let I_0 be the set of indices corresponding to the (unknown) true null hypotheses, having m_0 members.

Let us assume that we wrongly reject a true hypothesis. We have to prove that the probability of this event is at most \alpha. Let h be such that H_{(h)} is the first rejected true hypothesis, in the ordering used during the Holm–Bonferroni test. Then H_{(1)}, \ldots, H_{(h-1)} are all rejected false hypotheses. It then holds that h-1 \leq m - m_0, and therefore \frac{1}{m-h+1} \leq \frac{1}{m_0} (1). Since H_{(h)} is rejected, it must be P_{(h)} \leq \frac{\alpha}{m-h+1} by definition of the testing procedure. Using (1), the right-hand side of this inequality is at most \frac{\alpha}{m_0}. Thus, if we wrongly reject a true hypothesis, there has to be a true hypothesis with ''p''-value at most \frac{\alpha}{m_0}.

So let us define the event A = \left\{ P_i \leq \frac{\alpha}{m_0} \text{ for some } i \in I_0 \right\}. Whatever the (unknown) set of true hypotheses I_0 is, we have \Pr(A) \leq \alpha (by the Bonferroni inequalities). Therefore, the probability to reject a true hypothesis is at most \alpha.
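The last step can be made explicit. Assuming, as usual, that each true null ''p''-value satisfies \Pr(P_i \leq u) \leq u, Boole's inequality gives

: \Pr(A) \leq \sum_{i \in I_0} \Pr\!\left(P_i \leq \frac{\alpha}{m_0}\right) \leq m_0 \cdot \frac{\alpha}{m_0} = \alpha.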


Alternative proof

The Holm–Bonferroni method can be viewed as a closed testing procedure, with the Bonferroni correction applied locally on each of the intersections of null hypotheses. The closure principle states that a hypothesis H_i in a family of hypotheses H_1, \ldots, H_m is rejected – while controlling the FWER at level \alpha – if and only if all the sub-families of the intersections with H_i are rejected at level \alpha. The Holm–Bonferroni method is a ''shortcut procedure'', since it makes m or fewer comparisons, while the number of all intersections of null hypotheses to be tested is of order 2^m. It controls the FWER in the strong sense.

In the Holm–Bonferroni procedure, we first test H_{(1)}. If it is not rejected, then the intersection of all null hypotheses \bigcap_{i=1}^m H_i is not rejected either, so for each elementary hypothesis H_1, \ldots, H_m there exists at least one intersection hypothesis that is not rejected, and thus we reject none of the elementary hypotheses. If H_{(1)} is rejected at level \alpha/m, then all the intersection sub-families that contain it are rejected too, and thus H_{(1)} is rejected. This is because P_{(1)} is the smallest ''p''-value in each of these intersection sub-families and the size of each sub-family is at most m, so the local Bonferroni threshold is at least \alpha/m. The same rationale applies for H_{(2)}: since H_{(1)} is already rejected, it suffices to reject all the intersection sub-families of H_{(2)} that do not contain H_{(1)}. Once P_{(2)} \leq \alpha/(m-1) holds, all those intersections are rejected. The same applies for each H_{(i)}, 1 \leq i \leq m. A brute-force comparison is sketched below.
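To see the shortcut property concretely, one can enumerate all 2^m - 1 intersection hypotheses and apply a local Bonferroni test to each, rejecting H_i only when every intersection containing it is rejected. The following Python sketch (function name illustrative) does exactly that:

    from itertools import combinations

    def closed_testing_bonferroni(p_values, alpha=0.05):
        """Reject H_i iff every intersection hypothesis containing i is
        rejected by a local Bonferroni test. Exponential in len(p_values);
        for illustration only."""
        m = len(p_values)
        reject = []
        for i in range(m):
            ok = True
            for size in range(1, m + 1):
                for subset in combinations(range(m), size):
                    if i in subset:
                        # Local Bonferroni test of the intersection hypothesis:
                        # reject it when its smallest p-value is <= alpha/size.
                        if min(p_values[j] for j in subset) > alpha / size:
                            ok = False
            reject.append(ok)
        return reject

On the four ''p''-values of the example below, this brute-force closure rejects exactly the same hypotheses as the m-step shortcut.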


Example

Consider four null hypotheses H_1, \ldots, H_4 with unadjusted ''p''-values p_1 = 0.01, p_2 = 0.04, p_3 = 0.03 and p_4 = 0.005, to be tested at significance level \alpha = 0.05. Since the procedure is step-down, we first test H_4 = H_{(1)}, the hypothesis with the smallest ''p''-value p_4 = p_{(1)} = 0.005. This ''p''-value is compared to \alpha/4 = 0.0125; the null hypothesis is rejected and we continue to the next one. Since p_1 = p_{(2)} = 0.01 < 0.0167 \approx \alpha/3 we reject H_1 = H_{(2)} as well and continue. The next hypothesis H_3 is not rejected, since p_3 = p_{(3)} = 0.03 > 0.025 = \alpha/2. We stop testing and conclude that H_1 and H_4 are rejected, while H_2 and H_3 are not, controlling the family-wise error rate at level \alpha = 0.05. Note that even though p_2 = p_{(4)} = 0.04 < 0.05 = \alpha, H_2 is not rejected: the testing procedure stops once a failure to reject occurs.
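Running the holm_bonferroni sketch from the formulation section on these values reproduces this outcome:

    >>> holm_bonferroni([0.01, 0.04, 0.03, 0.005], alpha=0.05)
    [True, False, False, True]    # H_1 and H_4 rejected; H_2 and H_3 retained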


Extensions


Holm–Šidák method

When the hypothesis tests are not negatively dependent, it is possible to replace \frac{\alpha}{m}, \frac{\alpha}{m-1}, \ldots, \frac{\alpha}{1} with
: 1-(1-\alpha)^{1/m}, 1-(1-\alpha)^{1/(m-1)}, \ldots, 1-(1-\alpha)^{1/1},
resulting in a slightly more powerful test.
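Under the same illustrative interface as before, the change amounts to swapping the threshold formula (a sketch, not a reference implementation):

    def holm_sidak(p_values, alpha=0.05):
        """Step-down test with Sidak thresholds 1 - (1-alpha)**(1/(m-step))."""
        m = len(p_values)
        order = sorted(range(m), key=lambda i: p_values[i])
        reject = [False] * m
        for step, i in enumerate(order):
            # The Sidak threshold is slightly larger than alpha/(m - step).
            if p_values[i] < 1 - (1 - alpha) ** (1 / (m - step)):
                reject[i] = True
            else:
                break
        return reject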


Weighted version

Let P_{(1)}, \ldots, P_{(m)} be the ordered unadjusted ''p''-values, and let H_{(i)}, with weight w_{(i)} \geq 0, correspond to P_{(i)}. Reject H_{(i)} as long as
: P_{(j)} \leq \frac{w_{(j)}}{\sum_{k=j}^{m} w_{(k)}} \alpha, \quad j = 1, \ldots, i.
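A direct transcription of this condition into the same illustrative interface (with equal weights it reduces to the unweighted procedure):

    def weighted_holm(p_values, weights, alpha=0.05):
        """Weighted step-down test; with equal weights this coincides with
        the ordinary Holm-Bonferroni procedure."""
        m = len(p_values)
        order = sorted(range(m), key=lambda i: p_values[i])
        reject = [False] * m
        for step, i in enumerate(order):
            # Total weight of the hypotheses not yet tested: sum_{k=j}^m w_(k).
            remaining = sum(weights[j] for j in order[step:])
            if p_values[i] <= weights[i] / remaining * alpha:
                reject[i] = True
            else:
                break
        return reject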


Adjusted ''p''-values

The adjusted ''p''-values for the Holm–Bonferroni method are
: \widetilde{p}_{(i)} = \max_{j \leq i} \left\{ (m-j+1)\, p_{(j)} \right\}_1, \text{ where } \{x\}_1 \equiv \min(x, 1).
In the earlier example, the adjusted ''p''-values are \widetilde{p}_1 = 0.03, \widetilde{p}_2 = 0.06, \widetilde{p}_3 = 0.06 and \widetilde{p}_4 = 0.02. Only hypotheses H_1 and H_4 are rejected at level \alpha = 0.05.

Similar adjusted ''p''-values for the Holm–Šidák method can be defined recursively as \widetilde{p}_{(i)} = \max\left\{ \widetilde{p}_{(i-1)},\, 1 - (1 - p_{(i)})^{m-i+1} \right\}, where \widetilde{p}_{(1)} = 1 - (1 - p_{(1)})^{m}. Due to the inequality 1 - (1-\alpha)^{1/n} > \alpha/n for n \geq 2, the Holm–Šidák method is more powerful than the Holm–Bonferroni method.

The weighted adjusted ''p''-values are
: \widetilde{p}_{(i)} = \max_{j \leq i} \left\{ \frac{\sum_{k=j}^{m} w_{(k)}}{w_{(j)}}\, p_{(j)} \right\}_1, \text{ where } \{x\}_1 \equiv \min(x, 1).
A hypothesis is rejected at level \alpha if and only if its adjusted ''p''-value is less than \alpha. In the earlier example using equal weights, the adjusted ''p''-values are 0.03, 0.06, 0.06, and 0.02. This is another way to see that, using \alpha = 0.05, only hypotheses one and four are rejected by this procedure.
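A minimal sketch computing the Holm–Bonferroni adjusted ''p''-values (the cumulative maximum of (m-j+1) p_{(j)}, capped at 1, returned in the original order):

    def holm_adjusted(p_values):
        """Holm-Bonferroni adjusted p-values, in the original order."""
        m = len(p_values)
        order = sorted(range(m), key=lambda i: p_values[i])
        adjusted = [0.0] * m
        running_max = 0.0
        for j, i in enumerate(order):                     # j = 0 .. m-1
            running_max = max(running_max, (m - j) * p_values[i])
            adjusted[i] = min(running_max, 1.0)
        return adjusted

    >>> holm_adjusted([0.01, 0.04, 0.03, 0.005])
    [0.03, 0.06, 0.06, 0.02]

The same numbers can also be obtained from standard statistical libraries, for instance multipletests(p, method='holm') in Python's statsmodels or p.adjust(p, method='holm') in R.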


Alternatives and usage

The Holm–Bonferroni method is "uniformly" more powerful than the classic
Bonferroni correction In statistics, the Bonferroni correction is a method to counteract the multiple comparisons problem. Background The method is named for its use of the Bonferroni inequalities. An extension of the method to confidence intervals was proposed by Oliv ...
, meaning that it is always at least as powerful. There are other methods for controlling the FWER that are more powerful than Holm–Bonferroni. For instance, in the Hochberg procedure, rejection of H_ \ldots H_ is made after finding the ''maximal'' index k such that P_ \leq \frac. Thus, The Hochberg procedure is uniformly more powerful than the Holm procedure. However, the Hochberg procedure requires the hypotheses to be
independent Independent or Independents may refer to: Arts, entertainment, and media Artist groups * Independents (artist group), a group of modernist painters based in the New Hope, Pennsylvania, area of the United States during the early 1930s * Independ ...
or under certain forms of positive dependence, whereas Holm–Bonferroni can be applied without such assumptions. A similar step-up procedure is the Hommel procedure, which is uniformly more powerful than the Hochberg procedure.
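For comparison, a minimal sketch of the Hochberg step-up rule just described (function name illustrative):

    def hochberg(p_values, alpha=0.05):
        """Step-up test: find the maximal k with P_(k) <= alpha/(m+1-k)
        and reject the k hypotheses with the smallest p-values."""
        m = len(p_values)
        order = sorted(range(m), key=lambda i: p_values[i])
        k = 0
        for step, i in enumerate(order):               # step = 0 .. m-1
            if p_values[i] <= alpha / (m - step):
                k = step + 1                           # largest passing index so far
        reject = [False] * m
        for i in order[:k]:
            reject[i] = True
        return reject

On the earlier example, hochberg([0.01, 0.04, 0.03, 0.005]) rejects all four hypotheses (since P_{(4)} = 0.04 \leq \alpha), illustrating the extra power bought by the dependence assumption.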


Naming

Carlo Emilio Bonferroni did not take part in inventing the method described here. Holm originally called the method the "sequentially rejective Bonferroni test", and it became known as Holm–Bonferroni only after some time. Holm's motives for naming his method after Bonferroni are explained in the original paper: ''"The use of the Boole inequality within multiple inference theory is usually called the Bonferroni technique, and for this reason we will call our test the sequentially rejective Bonferroni test."''

