TY - JOUR
T1 - A sample size-dependent prior strategy for bridging the Bayesian-frequentist gap in point null hypothesis testing
AU - Zhang, Qiu Hu
AU - Ni, Yi Qing
N1 - Publisher Copyright:
© 2023 The Author(s). Published with license by Taylor & Francis Group, LLC.
PY - 2023
Y1 - 2023
N2 - The Bayes factor, as a measure of the evidence the data provide in favor of one hypothesis over its alternative, can be highly sensitive to the prior distributions of the parameters involved in the hypotheses as well as to the sample size. This may cause a noticeable difference between Bayesian and classical (frequentist) hypothesis testing results. In the worst case, the two results conflict, a situation termed the Jeffreys-Lindley paradox. In this article, we propose a sample size-dependent prior strategy to bridge the Bayesian-frequentist gap from a decision-theoretic perspective. The central idea behind the proposed strategy is to adaptively adjust the prior distributions of the parameters in line with the sample size so that the type I error risk in Bayesian hypothesis testing is controlled at the same level as that prespecified in frequentist hypothesis testing. The proposed strategy is inspired by the work of Maurice Stevenson Bartlett (M. S. Bartlett, A comment on D. V. Lindley’s statistical paradox, Biometrika, 44, 533–534, 1957), who suggested a sample size-dependent prior to make the Bayes factor independent of the sample size. In contrast to his work, our strategy leverages sample size-dependent priors for Bayesian hypothesis testing and for managing risk when deciding between the two hypotheses. To demonstrate the effectiveness of the proposed strategy, normal mean tests are examined in the cases where (i) the variance is known (z-test) and (ii) the variance is unknown (t-test). The Bayesian testing results obtained with the proposed strategy are consistent with their frequentist counterparts, and the Jeffreys-Lindley paradox disappears.
AB - The Bayes factor, as a measure of the evidence the data provide in favor of one hypothesis over its alternative, can be highly sensitive to the prior distributions of the parameters involved in the hypotheses as well as to the sample size. This may cause a noticeable difference between Bayesian and classical (frequentist) hypothesis testing results. In the worst case, the two results conflict, a situation termed the Jeffreys-Lindley paradox. In this article, we propose a sample size-dependent prior strategy to bridge the Bayesian-frequentist gap from a decision-theoretic perspective. The central idea behind the proposed strategy is to adaptively adjust the prior distributions of the parameters in line with the sample size so that the type I error risk in Bayesian hypothesis testing is controlled at the same level as that prespecified in frequentist hypothesis testing. The proposed strategy is inspired by the work of Maurice Stevenson Bartlett (M. S. Bartlett, A comment on D. V. Lindley’s statistical paradox, Biometrika, 44, 533–534, 1957), who suggested a sample size-dependent prior to make the Bayes factor independent of the sample size. In contrast to his work, our strategy leverages sample size-dependent priors for Bayesian hypothesis testing and for managing risk when deciding between the two hypotheses. To demonstrate the effectiveness of the proposed strategy, normal mean tests are examined in the cases where (i) the variance is known (z-test) and (ii) the variance is unknown (t-test). The Bayesian testing results obtained with the proposed strategy are consistent with their frequentist counterparts, and the Jeffreys-Lindley paradox disappears.
KW - Bayes factor
KW - hypothesis testing
KW - Jeffreys-Lindley paradox
KW - risk management
KW - sample size-dependent prior
KW - type I error risk
UR - http://www.scopus.com/inward/record.url?scp=85177059672&partnerID=8YFLogxK
U2 - 10.1080/03610926.2023.2273202
DO - 10.1080/03610926.2023.2273202
M3 - Journal article
AN - SCOPUS:85177059672
SN - 0361-0926
JO - Communications in Statistics - Theory and Methods
JF - Communications in Statistics - Theory and Methods
ER -