Setup: $Y \sim N(\mu, 1)$, so the sample mean $\bar{Y} \sim N(\mu, 1/n)$. Skeptical prior: $\mu \sim N(0, 1/\tau)$, i.e., prior variance $v = 1/\tau$.

Posterior: $\mu \mid \bar{Y} \sim N(a, b^2)$, where

$$b^2 = \frac{1}{\tau + n}, \qquad a = n \bar{Y} b^2$$

Posterior probability that $\mu > 0$:

$$\Pr(\mu > 0 \mid \bar{Y}) = 1 - \Pr(\mu \le 0 \mid \bar{Y}) = 1 - \Pr\left(\frac{\mu - a}{b} \le \frac{0 - a}{b}\right) = 1 - \Phi(-a/b) = \Phi(a/b)$$

Flat prior ($\tau = 0$): $b^2 = 1/n$ and $a = \bar{Y}$, so

$$\Pr(\mu > 0 \mid \bar{Y}) = 1 - \Pr\left(\frac{\mu - \bar{Y}}{1/\sqrt{n}} \le \frac{0 - \bar{Y}}{1/\sqrt{n}}\right) = 1 - \Phi(-\bar{Y}\sqrt{n}) = \Phi(\bar{Y}\sqrt{n})$$

To achieve the same posterior probability under the skeptical prior, find the sample size $m$ for which $a/b$, now with $b^2 = 1/(\tau + m)$, equals the flat-prior value:

$$m \bar{Y} b^2 / b = \bar{Y}\sqrt{n} \;\Rightarrow\; m \bar{Y} b = \bar{Y}\sqrt{n} \;\Rightarrow\; m b = \sqrt{n}$$

(Check: with $\tau = 0$, $b = 1/\sqrt{m}$, giving $m = n$.) Squaring both sides:

$$m^2 b^2 = n \;\Rightarrow\; \frac{m^2}{\tau + m} = n \;\Rightarrow\; m^2 = n(\tau + m) = n\tau + nm \;\Rightarrow\; m^2 - nm - n\tau = 0$$

$$m = \tfrac{1}{2}\left(n + \sqrt{n^2 + 4n\tau}\right)$$

Check: $\tau = 0$ gives $m = \tfrac{1}{2}(n + n) = n$.

```r
require(Hmisc)   # provides labcurve
z <- list()
n <- seq(1, 100, by=2)
for(v in c(.05, .1, .25, .5, 1, 4, 100))   # prior variance v = 1/tau
  z[[paste0('v=', v)]] <- list(x=n, y=0.5 * (n + sqrt(n^2 + 4 * n / v)) - n)
labcurve(z, pl=TRUE, xlab='Sample Size With No Skepticism',
         ylab='Extra Subjects Needed Due to Skepticism')
```

Caption: Effect of discounting by a skeptical prior with mean zero and variance $v$. This is the increase needed in the sample size in order to achieve the same posterior probability of $\mu > 0$ as with the flat (non-informative) prior.
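The formula for $m$ can also be verified numerically: holding $\bar{Y}$ fixed, the posterior probability of $\mu > 0$ from a skeptical prior with $m$ observations should equal the flat-prior probability from $n$ observations. A minimal sketch in R, where the values of `n`, `tau`, and `Ybar` are arbitrary illustrative choices:

```r
# Numerical sanity check of m = 0.5 * (n + sqrt(n^2 + 4*n*tau))
n    <- 50
tau  <- 2                                    # skepticism: prior variance v = 1/tau = 0.5
Ybar <- 0.3                                  # observed sample mean (held fixed)
m    <- 0.5 * (n + sqrt(n^2 + 4 * n * tau))  # inflated sample size under skepticism
b2   <- 1 / (tau + m)                        # posterior variance at sample size m
a    <- m * Ybar * b2                        # posterior mean at sample size m
p.skeptical <- pnorm(a / sqrt(b2))           # Pr(mu > 0 | Ybar), skeptical prior, size m
p.flat      <- pnorm(Ybar * sqrt(n))         # Pr(mu > 0 | Ybar), flat prior, size n
stopifnot(isTRUE(all.equal(p.skeptical, p.flat)))  # the two probabilities agree
```

The check passes for any positive `n`, `tau`, and any `Ybar`, since $m/\sqrt{\tau + m} = \sqrt{n}$ holds exactly at the quadratic's positive root.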