Statistical Theory
Prof. Gesine Reinert
Aim: To review and extend the main ideas of statistical inference, from both
a frequentist and a Bayesian viewpoint. This course not only serves as
background to other courses, but also provides a basis for developing novel
inference methods when faced with a new situation that involves uncertainty.
Inference here includes estimating parameters and testing hypotheses.
Overview
• Part 1: Frequentist Statistics
– Chapter 1: Likelihood, sufficiency and ancillarity. The Factorization
Theorem. Exponential family models.
– Chapter 2: Point estimation. When is an estimator a good estimator?
Covering bias and variance, information, and efficiency. Methods
of estimation: maximum likelihood estimation, nuisance parameters
and profile likelihood; method of moments estimation. Bias
and variance approximations via the delta method. (A short
illustrative sketch of maximum likelihood estimation follows the overview.)
– Chapter 3: Hypothesis testing. Pure significance tests, significance
level. Simple hypotheses, Neyman-Pearson Lemma. Tests
for composite hypotheses. Sample size calculation. Uniformly
most powerful tests, Wald tests, score tests, generalized likelihood
ratio tests. Multiple tests, combining independent tests.
– Chapter 4: Interval estimation. Confidence sets and their connection
with hypothesis tests. Approximate confidence intervals.
Prediction sets.
– Chapter 5: Asymptotic theory. Consistency. Asymptotic normality
of maximum likelihood estimates, score tests. Chi-square
approximation for generalized likelihood ratio tests. Likelihood
confidence regions. Pseudo-likelihood tests.
• Part 2: Bayesian Statistics
– Chapter 6: Background. Interpretations of probability; the Bayesian
paradigm: prior distribution, posterior distribution, predictive
distribution, credible intervals. Nuisance parameters are easy.
– Chapter 7: Bayesian models. Sufficiency, exchangeability. De
Finetti’s Theorem and its interpretation in Bayesian statistics.
– Chapter 8: Prior distributions. Conjugate priors. Noninformative
priors; Jeffreys priors, maximum entropy priors; posterior summaries.
If there is time: Bayesian robustness. (A short sketch of a conjugate
prior update follows the overview.)
– Chapter 9: Posterior distributions. Interval estimates, asymptotics
(very short).
• Part 3: Decision-theoretic approach
– Chapter 10: Bayesian inference as a decision problem. Decision
theoretic framework: point estimation, loss function, decision
rules. Bayes estimators, Bayes risk. Bayesian testing, Bayes
factor. Lindley’s paradox. Least favourable Bayesian answers.
Comparison with classical hypothesis testing.
– Chapter 11: Hierarchical and empirical Bayes methods. Hierarchical
Bayes, empirical Bayes, James-Stein estimators, Bayesian
computation.
• Chapter 12: Principles of inference. The likelihood principle. The
conditionality principle. The stopping rule principle.
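To make the frequentist part of the outline slightly more concrete, the following is a minimal sketch (not part of the course notes) of maximum likelihood estimation with a Wald-type confidence interval, as touched on in Chapters 2, 4 and 5. It uses Python with NumPy, simulated exponential data and an assumed true rate of 2, so all numerical values are purely illustrative.

```python
# Minimal sketch: MLE for the rate of an exponential distribution,
# with an information-based standard error and a Wald-type interval.
# The data are simulated, so the numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
true_rate = 2.0                                   # assumed value for the simulation
x = rng.exponential(scale=1.0 / true_rate, size=200)

# Log-likelihood for Exp(rate): l(lam) = n*log(lam) - lam*sum(x).
# Solving l'(lam) = n/lam - sum(x) = 0 gives the closed-form MLE 1/mean(x).
n = x.size
rate_mle = 1.0 / x.mean()

# Fisher information for this model is n / lam^2, so the usual asymptotic
# approximation gives standard error lam_hat / sqrt(n).
se = rate_mle / np.sqrt(n)

# Approximate 95% confidence interval from asymptotic normality of the MLE.
ci = (rate_mle - 1.96 * se, rate_mle + 1.96 * se)
print(f"MLE = {rate_mle:.3f}, approx. 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```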
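Similarly, for the Bayesian part, here is a minimal sketch of a conjugate Beta-Binomial update with an equal-tailed credible interval (Chapters 6 and 8). The prior hyperparameters and the data are made up for illustration, and using SciPy for the Beta quantiles is a convenience choice, not something prescribed by the notes.

```python
# Minimal sketch: conjugate Beta-Binomial updating. With a Beta(a, b) prior
# on a success probability and k successes in n Bernoulli trials, the
# posterior is Beta(a + k, b + n - k).
from scipy import stats

a, b = 2.0, 2.0        # prior hyperparameters (assumed values for illustration)
n, k = 20, 14          # observed trials and successes (made-up data)

a_post, b_post = a + k, b + n - k          # conjugate posterior parameters
post_mean = a_post / (a_post + b_post)     # posterior mean of the success probability
print(f"Posterior: Beta({a_post}, {b_post}), mean = {post_mean:.3f}")

# 95% equal-tailed credible interval via the Beta quantile function.
lo, hi = stats.beta.ppf([0.025, 0.975], a_post, b_post)
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```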