Background

The method is named for its use of the Bonferroni inequalities.[1] Application of the method to confidence intervals was described by Olive Jean Dunn.[2]


Statistical hypothesis testing is based on rejecting the null hypothesis when the likelihood of the observed data would be low if the null hypothesis were true. If multiple hypotheses are tested, the probability of observing a rare event increases, and therefore, the likelihood of incorrectly rejecting a null hypothesis (i.e., making a Type I error) increases.[3]


The Bonferroni correction compensates for that increase by testing each individual hypothesis at a significance level of α/m, where α is the desired overall alpha level and m is the number of hypotheses.[4] For example, if a trial is testing m = 20 hypotheses with a desired overall α = 0.05, then the Bonferroni correction would test each individual hypothesis at α = 0.05/20 = 0.0025. Similarly, when constructing m confidence intervals for m parameters, each individual confidence interval can be computed at the 1 − α/m confidence level to achieve an overall confidence level of 1 − α.
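A minimal sketch of this adjustment in Python, using the same illustrative values m = 20 and α = 0.05 as in the example above:

```python
# Bonferroni alpha adjustment (values are illustrative).
m = 20          # number of hypotheses tested
alpha = 0.05    # desired overall (family-wise) significance level

per_test_alpha = alpha / m            # 0.0025: level for each individual test
per_test_confidence = 1 - alpha / m   # 0.9975: level for each individual confidence interval

print(f"test each hypothesis at alpha = {per_test_alpha}")
print(f"compute each confidence interval at confidence level {per_test_confidence}")
```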


The Bonferroni correction can also be applied as a p-value adjustment: Using that approach, instead of adjusting the alpha level, each p-value is multiplied by the number of tests (with adjusted p-values that exceed 1 then being reduced to 1), and the alpha level is left unchanged. The significance decisions using this approach will be the same as when using the alpha-level adjustment approach.
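A brief sketch of the p-value adjustment approach (the p-values below are hypothetical); the assertion checks that the decisions match those obtained by comparing the raw p-values against α/m:

```python
import numpy as np

alpha = 0.05
p_values = np.array([0.001, 0.012, 0.049, 0.20])  # hypothetical raw p-values
m = len(p_values)

# Bonferroni-adjusted p-values: multiply by the number of tests, cap at 1.
p_adjusted = np.minimum(p_values * m, 1.0)

reject_via_adjusted_p = p_adjusted < alpha          # compare adjusted p-values to alpha
reject_via_adjusted_alpha = p_values < alpha / m    # compare raw p-values to alpha/m

assert (reject_via_adjusted_p == reject_via_adjusted_alpha).all()  # same decisions
print(p_adjusted)            # [0.004 0.048 0.196 0.8  ]
print(reject_via_adjusted_p) # [ True  True False False]
```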

Extensions

Generalization

Rather than testing each hypothesis at the α/m level, the hypotheses may be tested at any other combination of levels that add up to α, provided that the level of each test is decided before looking at the data.[6] For example, for two hypothesis tests, an overall α of 0.05 could be maintained by conducting one test at 0.04 and the other at 0.01.
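A short sketch of this unequal allocation for the two-test example above (the p-values are hypothetical):

```python
# Unequal allocation of the overall alpha across two tests.
alpha = 0.05
per_test_levels = [0.04, 0.01]          # chosen before looking at the data
assert abs(sum(per_test_levels) - alpha) < 1e-12

p_values = [0.03, 0.008]                # hypothetical p-values for the two tests
decisions = [p < level for p, level in zip(p_values, per_test_levels)]
print(decisions)  # [True, True]
```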

Confidence intervals

The procedure proposed by Dunn[2] can be used to adjust confidence intervals. If one establishes m confidence intervals, and wishes to have an overall confidence level of 1 − α, each individual confidence interval can be adjusted to the level of 1 − α/m.[2]
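As a sketch, assuming approximately normal estimates with known standard errors (all numbers below are made up for illustration), z-based intervals at the adjusted level 1 − α/m can be computed as follows:

```python
import numpy as np
from scipy import stats

alpha = 0.05
estimates = np.array([1.2, -0.4, 0.9])   # hypothetical point estimates
std_errors = np.array([0.3, 0.25, 0.5])  # hypothetical standard errors
m = len(estimates)

# Each interval is widened from the 1 - alpha level to the 1 - alpha/m level,
# so that all m intervals cover their parameters simultaneously with
# probability at least 1 - alpha.
z = stats.norm.ppf(1 - alpha / (2 * m))
lower = estimates - z * std_errors
upper = estimates + z * std_errors

for est, lo, hi in zip(estimates, lower, upper):
    print(f"{est:6.2f}: [{lo:6.2f}, {hi:6.2f}]")
```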

Continuous problems

When searching for a signal in a continuous parameter space there can also be a problem of multiple comparisons, or look-elsewhere effect. For example, a physicist might be looking to discover a particle of unknown mass by considering a large range of masses; this was the case during the Nobel Prize-winning detection of the Higgs boson. In such cases, one can apply a continuous generalization of the Bonferroni correction by employing Bayesian logic to relate the effective number of trials, m, to the prior-to-posterior volume ratio.[7]

Criticism

With respect to FWER control, the Bonferroni correction can be conservative if there are a large number of tests and/or the test statistics are positively correlated.[9]


Multiple-testing corrections, including the Bonferroni procedure, increase the probability of Type II errors when null hypotheses are false, i.e., they reduce statistical power.[10][9]
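A small simulation sketch of this loss of power (the effect size, sample size, and number of tests below are illustrative assumptions, not taken from the sources): a single true effect is tested with a one-sample t-test, once against the unadjusted level α and once against the Bonferroni level α/m.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, m = 0.05, 20               # overall level and number of tests (illustrative)
effect, n, reps = 0.5, 30, 2000   # true mean shift, sample size, replications

uncorrected, bonferroni = 0, 0
for _ in range(reps):
    sample = rng.normal(effect, 1.0, n)        # data for the one truly non-null test
    p = stats.ttest_1samp(sample, 0.0).pvalue
    uncorrected += p < alpha                   # reject at the unadjusted level
    bonferroni += p < alpha / m                # reject at the Bonferroni level

print(f"power at level alpha:   {uncorrected / reps:.2f}")
print(f"power at level alpha/m: {bonferroni / reps:.2f}")
```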
