<p>《MATHEMATICAL STATISTICS》</p>
<p>【Author】: Keith Knight, Department of Statistics, University of Toronto, Ontario, Canada</p>
<p>【ISBN】: 1-58488-178-X</p>
<p>【Pages】: 476</p>
<p>【Trim size】:</p>
<p>【Publisher】: CHAPMAN &amp; HALL/CRC</p>
<p>【Publication date】: Nov 24, 1999</p>
<p>【File format】: PDF</p>
<p>【Abstract / Table of Contents】:</p>
<p>Book Description<br/>
This book is intended as a textbook (or reference) for a full-year Master's level (or senior-level undergraduate) course in mathematical statistics aimed at students in statistics, biostatistics, and related fields.</p>
<p>This book grew from lecture notes and handouts that I developed for a course in mathematical statistics that I first taught in 1992-93 at the University of Toronto. In teaching this course, I realized that many students viewed the course as largely irrelevant to their education. To me this seemed strange, since much of mathematical statistics is directly relevant to statistical practice; for example, what statistician has not used a χ² approximation at some point in their life? At the same time, I could also sympathize with their point of view. To a student first encountering the subject, the traditional syllabus of a mathematical statistics course does seem heavily weighed down with optimality theory of various flavours developed in the 1940s and 1950s; while this is interesting (and certainly important), it does leave the impression that mathematical statistics has little to offer beyond some nice mathematics. My main objective in writing this book was to provide a set of useful tools that would allow students to understand the theoretical underpinnings of statistical methodology. At the same time, I wanted to be as mathematically rigorous as possible within certain constraints.</p>
<p>I have devoted a chapter to convergence for sequences of random variables (and random vectors) since, for better or for worse, these concepts play an important role in the analysis of estimation and other inferential procedures in statistics. I have concentrated on inferential procedures within the framework of parametric models; however, in recognition of the fact that models are typically misspecified, estimation is also viewed from a nonparametric perspective by considering estimation of functional parameters (or statistical functionals, as they are often called).</p>
<p>Preface</p>
<p>1 Introduction to Probability<br/>
1.1 Random experiments<br/>
1.2 Probability measures<br/>
1.3 Conditional probability and independence<br/>
1.4 Random variables<br/>
1.5 Transformations of random variables<br/>
1.6 Expected values<br/>
1.7 Problems and complements</p>
<p>2 Random Vectors and Joint Distributions<br/>
2.1 Introduction<br/>
2.2 Discrete and continuous random vectors<br/>
2.3 Conditional distributions and expected values<br/>
2.4 Distribution theory for Normal samples<br/>
2.5 Poisson processes<br/>
2.6 Generating random variables<br/>
2.7 Problems and complements</p>
<p>3 Convergence of Random Variables<br/>
3.1 Introduction<br/>
3.2 Convergence in probability and distribution<br/>
3.3 Weak Law of Large Numbers<br/>
3.4 Proving convergence in distribution<br/>
3.5 Central Limit Theorems<br/>
3.6 Some applications<br/>
3.7 Convergence with probability 1<br/>
3.8 Problems and complements</p>
<p>4 Principles of Point Estimation<br/>
4.1 Introduction<br/>
4.2 Statistical models<br/>
4.3 Sufficiency<br/>
4.4 Point estimation<br/>
4.5 The substitution principle<br/>
4.6 Influence curves<br/>
4.7 Standard errors and their estimation<br/>
4.8 Asymptotic relative efficiency<br/>
4.9 The jackknife<br/>
4.10 Problems and complements</p>
<p>5 Likelihood-Based Estimation<br/>
5.1 Introduction</p>
<p>5.2 The likelihood function<br/>
5.3 The likelihood principle<br/>
5.4 Asymptotic theory for MLEs<br/>
5.5 Misspecified models<br/>
5.6 Non-parametric maximum likelihood estimation<br/>
5.7 Numerical computation of MLEs<br/>
5.8 Bayesian estimation<br/>
5.9 Problems and complements</p>
<p>6 Optimality in Estimation<br/>
6.1 Introduction<br/>
6.2 Decision theory<br/>
6.3 Minimum variance unbiased estimation<br/>
6.4 The Cramér-Rao lower bound<br/>
6.5 Asymptotic efficiency<br/>
6.6 Problems and complements</p>
<p>7 Interval Estimation and Hypothesis Testing<br/>
7.1 Confidence intervals and regions<br/>
7.2 Highest posterior density regions<br/>
7.3 Hypothesis testing<br/>
7.4 Likelihood ratio tests<br/>
7.5 Other issues<br/>
7.6 Problems and complements</p>
<p>8 Linear and Generalized Linear Models<br/>
8.1 Linear models<br/>
8.2 Estimation in linear models<br/>
8.3 Hypothesis testing in linear models<br/>
8.4 Non-normal errors<br/>
8.5 Generalized linear models<br/>
8.6 Quasi-likelihood models<br/>
8.7 Problems and complements</p>
<p>9 Goodness-of-Fit<br/>
9.1 Introduction<br/>
9.2 Tests based on the Multinomial distribution<br/>
9.3 Smooth goodness-of-fit tests<br/>
9.4 Problems and complements</p>
<p>References</p>