From an early age, human beings are experts at inference. It is such a fundamental part of our intelligence that we do it without even thinking about it. We learn to classify objects on the basis of a very limited set of examples. In statistical inference, we go from specific to general via a mathematical model. Our specific observations come from a data set; that is, a collection of numbers, or at least, information that can be represented numerically.
The mathematical models that we use draw on distributions of probability that are described in the companion half course ST3133 Advanced statistics: distribution theory.
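The movement from specific observations to a general statement can be sketched in a few lines of simulation. This is an illustrative aside, not part of the course materials; the population parameters and sample size below are invented for the example.

```python
import random

# Illustrative sketch: inference goes from a specific data set to a
# general statement about the population that generated it, via a
# probability model.
random.seed(0)

# "Population": a quantity modelled as Normal with mean 170 -- in
# practice we never observe the whole population or its parameters.
POPULATION_MEAN = 170.0

# Specific: a limited sample of n = 50 observations.
sample = [random.gauss(POPULATION_MEAN, 10.0) for _ in range(50)]

# General: under the model, the sample mean is our estimate of the
# unknown population mean.
estimate = sum(sample) / len(sample)
print(round(estimate, 1))
```

The estimate will not equal the population mean exactly; quantifying how far off it is likely to be is precisely what the methods in this course address.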
You must pass the following courses before attempting this half course:
- ST104a Statistics 1 and ST104b Statistics 2, and
- MT1174 Calculus, or MT105a Mathematics 1 and MT105b Mathematics 2, or MT1186 Mathematical Methods.
- Data reduction: Sufficiency, minimal sufficiency. Likelihood.
- Point estimation: Bias, consistency, mean squared error. Central limit theorem. Rao-Blackwell theorem. Minimum variance unbiased estimators, Cramér-Rao bound. Properties of maximum likelihood estimators.
- Interval estimation: Pivotal quantities. Size and coverage probability.
- Hypothesis testing: Likelihood ratio test. Most powerful tests. Neyman-Pearson lemma.
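One pair of ideas from the point estimation topic above, bias and mean squared error, can be illustrated with a short simulation. This is a hedged sketch, not course material: it uses the classic example that the sample variance computed by dividing by n is biased, while dividing by n - 1 is unbiased, and all numbers below are invented for the illustration.

```python
import random

random.seed(1)

TRUE_VAR = 4.0   # variance of the simulated Normal(0, 2) population
N = 10           # sample size
REPS = 20000     # Monte Carlo repetitions

biased_sum = unbiased_sum = 0.0
biased_sq = unbiased_sq = 0.0
for _ in range(REPS):
    x = [random.gauss(0.0, 2.0) for _ in range(N)]
    m = sum(x) / N
    ss = sum((xi - m) ** 2 for xi in x)
    b, u = ss / N, ss / (N - 1)   # the two competing estimators
    biased_sum += b
    unbiased_sum += u
    biased_sq += (b - TRUE_VAR) ** 2
    unbiased_sq += (u - TRUE_VAR) ** 2

# Bias = E[estimator] - true value; MSE = E[(estimator - true value)^2].
print("bias (divide by n):    ", round(biased_sum / REPS - TRUE_VAR, 2))
print("bias (divide by n - 1):", round(unbiased_sum / REPS - TRUE_VAR, 2))
print("MSE  (divide by n):    ", round(biased_sq / REPS, 2))
print("MSE  (divide by n - 1):", round(unbiased_sq / REPS, 2))
```

The simulation shows why the course treats bias and mean squared error as separate criteria: the divide-by-n estimator is biased downwards, yet for normal data it can have the smaller mean squared error, so "unbiased" is not automatically "better".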
If you complete the course successfully, you should be able to:
- Explain the principles of data reduction.
- Judge the quality of estimators.
- Choose appropriate methods of inference to tackle real problems.
Assessment is by an unseen written examination (two hours).
- Casella, G. and R.L. Berger. Statistical Inference. Duxbury.
- Hogg, R.V. and E.A. Tanis. Probability and Statistical Inference. Pearson/Prentice Hall.
Course information sheets
Download the course information sheets from the LSE website.