Book title: Statistics
Author: Richard Weber (Cambridge)
Who it is for: If a weak foundation in mathematical statistics is worrying you while you study econometrics, and you want to master the basic concepts, theorems, principles, statistics, and standard methods of mathematical statistics in the shortest possible time, then this is an excellent choice.
Highlights: fast, concise, and accurate, without sacrificing mathematical rigor.
Some will say that the appendices in Wooldridge (over 100 pages in the third edition) are good enough. But one look at the table of contents below shows that the coverage here goes well beyond Wooldridge: simple topics such as the p-value of an observation, and more advanced ones such as Bayesian estimation, likelihood ratio tests, generalized likelihood ratio tests, the χ² test of homogeneity, bootstrap estimators, posterior analysis, and hypothesis testing as decision making. Writing a lot is easy; writing concisely and accurately is not. Calling it a book would be no exaggeration, but I prefer to call it a booklet: it packs all of this material, together with plenty of examples, into fewer than 70 pages.
Suggestion: print it out and consult it whenever questions come up while you study econometrics; it has worked very well for me!
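For anyone who has not yet met these topics, here is a rough sense of the flavour: a bootstrap estimator, for example, fits in a dozen lines. The sketch below is my own illustration in Python (not taken from Weber's notes), using only numpy; it estimates the standard error of a sample median by resampling the data with replacement.

import numpy as np

rng = np.random.default_rng(0)
# Illustrative data: 50 draws from an exponential distribution.
x = rng.exponential(scale=2.0, size=50)

# Bootstrap: resample the data with replacement B times and use the spread
# of the statistic across the resamples to estimate its standard error.
B = 2000
medians = np.empty(B)
for b in range(B):
    resample = rng.choice(x, size=x.size, replace=True)
    medians[b] = np.median(resample)

print("sample median:", np.median(x))
print("bootstrap estimate of its standard error:", medians.std(ddof=1))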
Table of contents
1 Parameter estimation
1.1 What is Statistics?
1.2 RVs with values in R^n or Z^n
1.3 Some important random variables
1.4 Independent and IID RVs
1.5 Indicating dependence on parameters
1.6 The notion of a statistic
1.7 Unbiased estimators
1.8 Sums of independent RVs
1.9 More important random variables
1.10 Laws of large numbers
1.11 The Central Limit Theorem
1.12 Poisson process of rate λ
2 Maximum likelihood estimation
2.1 Maximum likelihood estimation
2.2 Sufficient statistics
3 The Rao-Blackwell theorem
3.1 Mean squared error
3.2 The Rao-Blackwell theorem
3.3 Consistency and asymptotic efficiency*
3.4 Maximum likelihood and decision-making
4 Confidence intervals
4.1 Interval estimation
4.2 Opinion polls
4.3 Constructing confidence intervals
4.4 A shortcoming of confidence intervals*
5 Bayesian estimation
5.1 Prior and posterior distributions
5.2 Conditional pdfs
5.3 Estimation within Bayesian statistics
6 Hypothesis testing
6.1 The Neyman–Pearson framework
6.2 Terminology
6.3 Likelihood ratio tests
6.4 Single sample: testing a given mean, simple alternative, known variance (z-test)
7 Further aspects of hypothesis testing
7.1 The p-value of an observation
7.2 The power of a test
7.3 Uniformly most powerful tests
7.4 Confidence intervals and hypothesis tests
7.5 The Bayesian perspective on hypothesis testing
8 Generalized likelihood ratio tests
8.1 The χ² distribution
8.2 Generalised likelihood ratio tests
8.3 Single sample: testing a given mean, known variance (z-test)
8.4 Single sample: testing a given variance, known mean (χ²-test)
8.5 Two samples: testing equality of means, known common variance (z-test)
8.6 Goodness-of-fit tests
9 Chi-squared tests of categorical data
9.1 Pearson's chi-squared statistic
9.2 χ² test of homogeneity
9.3 χ² test of row and column independence
10 Distributions of the sample mean and variance
10.1 Simpson's paradox
10.2 Transformation of variables
10.3 Orthogonal transformations of normal variates
10.4 The distributions of X̄ and S_XX
10.5 Student's t-distribution
11 The t-test
11.1 Confidence interval for the mean, unknown variance
11.2 Single sample: testing a given mean, unknown variance (t-test)
11.3 Two samples: testing equality of means, unknown common variance (t-test)
11.4 Single sample: testing a given variance, unknown mean (χ²-test)
12 The F-test and analysis of variance
12.1 F-distribution
12.2 Two samples: comparison of variances (F-test)
12.3 Non-central χ²
12.4 One way analysis of variance
13 Linear regression and least squares
13.1 Regression models
13.2 Least squares/MLE
13.3 Practical usage
13.4 Data sets with the same summary statistics
13.5 Other aspects of least squares
14 Hypothesis tests in regression models
14.1 Distributions of the least squares estimators
14.2 Tests and confidence intervals
14.3 The correlation coefficient
14.4 Testing linearity
14.5 Analysis of variance in regression models
15 Computational methods
15.1 Analysis of residuals from a regression
15.2 Discriminant analysis
15.3 Principal components / factor analysis
15.4 Bootstrap estimators
16 Decision theory
16.1 The ideas of decision theory
16.2 Posterior analysis
16.3 Hypothesis testing as decision making
16.4 The classical and subjective points of view
Sample pages: