5 Tests and Confidence Regions 223
5.1 Introduction 223
5.2 A first approach to testing theory 224
5.2.1 Decision-theoretic testing 224
5.2.2 The Bayes factor 227
5.2.3 Modification of the prior 229
5.2.4 Point-null hypotheses 230
5.2.5 Improper priors 232
5.2.6 Pseudo-Bayes factors 236
5.3 Comparisons with the classical approach 242
5.3.1 UMP and UMPU tests 242
5.3.2 Least favorable prior distributions 245
5.3.3 Criticisms 247
5.3.4 The p-values 249
5.3.5 Least favorable Bayesian answers 250
5.3.6 The one-sided case 254
5.4 A second decision-theoretic approach 256
5.5 Confidence regions 259
5.5.1 Credible intervals 260
5.5.2 Classical confidence intervals 263
5.5.3 Decision-theoretic evaluation of confidence sets 264
5.6 Exercises 267
5.7 Notes 279
6 Bayesian Calculations 285
6.1 Implementation difficulties 285
6.2 Classical approximation methods 293
6.2.1 Numerical integration 293
6.2.2 Monte Carlo methods 294
6.2.3 Laplace analytic approximation 298
6.3 Markov chain Monte Carlo methods 301
6.3.1 MCMC in practice 302
6.3.2 Metropolis–Hastings algorithms 303
6.3.3 The Gibbs sampler 307
6.3.4 Rao–Blackwellization 309
6.3.5 The general Gibbs sampler 311
6.3.6 The slice sampler 315
6.3.7 The impact on Bayesian Statistics 317
6.4 An application to mixture estimation 318
6.5 Exercises 321
6.6 Notes 334
7 Model Choice 343
7.1 Introduction 343
7.1.1 Choice between models 344
7.1.2 Model choice: motives and uses 347
7.2 Standard framework 348
7.2.1 Prior modeling for model choice 348
7.2.2 Bayes factors 350
7.2.3 Schwarz's criterion 352
7.2.4 Bayesian deviance 354
7.3 Monte Carlo and MCMC approximations 356
7.3.1 Importance sampling 356
7.3.2 Bridge sampling 358
7.3.3 MCMC methods 359
7.3.4 Reversible jump MCMC 363
7.4 Model averaging 366
7.5 Model projections 369
7.6 Goodness-of-fit 374
7.7 Exercises 377
7.8 Notes 386