Multiple testing is a statistical inference problem, applied widely in fields such as genomic studies, QTL mapping and national security, where a large number of hypotheses are tested simultaneously. However, it is not always clear whether multiple testing can be carried out successfully for a specific dataset.
To measure whether multiple testing works as desired, we consider the error rate, defined as P(Type I error) + P(Type II error), and use it to compare the performance of different frequentist and Bayesian testing methods. A simulation study is conducted over a grid of all possible combinations of p (the proportion of signal in the data) and τ^2/σ^2 (the signal-to-noise variance ratio), testing a set of hypotheses with the Benjamini-Hochberg procedure and its modified versions, as well as with parametric empirical Bayes and full Bayes methods.
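The simulation described above can be sketched as follows. This is a minimal illustration, not the paper's exact setup: the Gaussian mixture model (signals drawn with variance σ^2 + τ^2, nulls with variance σ^2), the parameter values, and the function names are all assumptions made for the sketch, and only the plain Benjamini-Hochberg procedure is shown.

```python
import numpy as np
from math import erfc, sqrt

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean rejection mask from the BH step-up procedure."""
    m = len(pvals)
    order = np.argsort(pvals)
    sorted_p = pvals[order]
    # Find the largest k with p_(k) <= (k/m) * alpha; reject hypotheses 1..k.
    thresholds = alpha * np.arange(1, m + 1) / m
    below = np.nonzero(sorted_p <= thresholds)[0]
    reject = np.zeros(m, dtype=bool)
    if below.size > 0:
        reject[order[: below[-1] + 1]] = True
    return reject

def simulate_error_rate(n=5000, p=0.1, tau2=4.0, sigma2=1.0,
                        alpha=0.05, seed=0):
    """Empirical P(Type I error) + P(Type II error) for BH at one
    grid point (p, tau2/sigma2), under an assumed two-group model:
    X_i ~ N(0, sigma^2 + tau^2) with probability p (signal),
    X_i ~ N(0, sigma^2) otherwise (null)."""
    rng = np.random.default_rng(seed)
    is_signal = rng.random(n) < p
    sd = np.where(is_signal, sqrt(sigma2 + tau2), sqrt(sigma2))
    x = rng.normal(0.0, sd)
    # Two-sided p-values under the null N(0, sigma^2):
    # 2 * (1 - Phi(|z|)) = erfc(|z| / sqrt(2)).
    z = np.abs(x) / sqrt(sigma2)
    pvals = np.array([erfc(zi / sqrt(2)) for zi in z])
    reject = benjamini_hochberg(pvals, alpha)
    type1 = reject[~is_signal].mean() if (~is_signal).any() else 0.0
    type2 = (~reject[is_signal]).mean() if is_signal.any() else 0.0
    return type1 + type2
```

Sweeping `simulate_error_rate` over a grid of `p` and `tau2 / sigma2` values and plotting the resulting error surface is then what reveals the phase boundary.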
As a result, a sharp phase transition phenomenon in the error rate is observed for each of the inference schemes, indicating the existence of a phase boundary that separates the regions of p and τ^2/σ^2 in which multiple testing is feasible from those in which it is not. This discovery is then also discussed from the point of view of variable selection; Bayesian methods in variable selection are expected to show similar behavior, owing to the well-known connections between multiple testing and model selection.
Furthermore, the outcome of the simulation confirms differences between the performance of full Bayes and empirical Bayes methods that have previously been noted from an asymptotic point of view. This finding is then examined from the perspective of the phase boundary, yielding new ideas on how to avoid this conflict when using parametric empirical Bayes approaches as an approximation to a full Bayes analysis.