Statistical inference
Statistical inference is the process of using data analysis to infer properties of an underlying probability distribution.[1] Inferential statistical analysis infers properties of a population, for example by testing hypotheses and deriving estimates. It is assumed that the observed data set is sampled from a larger population.
Inferential statistics can be contrasted with descriptive statistics. Descriptive statistics is solely concerned with properties of the observed data, and it does not rest on the assumption that the data come from a larger population. In machine learning, the term inference is sometimes used instead to mean "make a prediction, by evaluating an already trained model";[2] in this context inferring properties of the model is referred to as training or learning (rather than inference), and using a model for prediction is referred to as inference (instead of prediction); see also predictive inference.
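A minimal sketch of this terminology split, using scikit-learn purely as an illustrative assumption (the text above names no library, and the data below are invented): fitting the model is what machine learning calls training or learning, while calling the trained model on new inputs is what it calls inference.

```python
# Illustrative sketch of the terminology described above; the use of
# scikit-learn and all data here are assumptions for demonstration only.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])  # observed inputs
y = np.array([2.1, 3.9, 6.2, 8.1])          # observed outputs

model = LinearRegression()
model.fit(X, y)  # ML usage: "training"/"learning"; a statistician might
                 # instead describe estimating the coefficients as inference

y_new = model.predict(np.array([[5.0]]))  # ML usage: "inference" (prediction)
print(y_new)
```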
Statistical inference makes propositions about a population, using data drawn from the population with some form of sampling. Given a hypothesis about a population, for which we wish to draw inferences, statistical inference consists of (first) selecting a statistical model of the process that generates the data and (second) deducing propositions from the model.[3]
Konishi & Kitagawa state, "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling".[4] Relatedly, Sir David Cox has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis".[5]
The conclusion of a statistical inference is a statistical proposition.[6] Some common forms of statistical proposition are the following:
- a point estimate, i.e., a particular value that best approximates some parameter of interest;
- an interval estimate, e.g., a confidence interval, i.e., an interval constructed so that, under repeated sampling of such datasets, the intervals would contain the true parameter value with the probability at the stated confidence level;
- a credible interval, i.e., a set of values containing, for example, 95% of posterior belief;
- rejection of a hypothesis;
- clustering or classification of data points into groups.
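As a hedged illustration (not drawn from the sources above), the following sketch computes three of these forms of proposition for the mean of a normal model, using SciPy on simulated data; the model, the null value of 165, and all numbers are assumptions chosen for the example.

```python
# Sketch: a point estimate, an interval estimate, and a hypothesis-test
# conclusion for a population mean. The normal model and the numbers are
# illustrative assumptions, not part of the article.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=170.0, scale=8.0, size=50)  # stand-in for observed data

# Point estimate of the population mean.
mu_hat = sample.mean()

# Interval estimate: a 95% confidence interval based on the t distribution.
ci_low, ci_high = stats.t.interval(0.95, df=len(sample) - 1,
                                   loc=mu_hat, scale=stats.sem(sample))

# Rejection (or not) of a hypothesis: H0: mu = 165 at the 5% level.
t_stat, p_value = stats.ttest_1samp(sample, popmean=165.0)

print(f"point estimate: {mu_hat:.2f}")
print(f"95% confidence interval: ({ci_low:.2f}, {ci_high:.2f})")
print(f"reject H0 (mu = 165)? {p_value < 0.05} (p = {p_value:.4f})")
```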
Predictive inference
Predictive inference is an approach to statistical inference that emphasizes the prediction of future observations based on past observations.
Initially, predictive inference was based on observable parameters and was the main purpose of studying probability, but it fell out of favor in the 20th century as a parametric approach came to dominate, one that modeled phenomena as a physical system observed with error (e.g., celestial mechanics). Bruno de Finetti's idea of exchangeability, that future observations should behave like past observations, came to the attention of the English-speaking world with the 1974 translation from French of his 1937 paper,[63] and has since been propounded by such statisticians as Seymour Geisser.[64]
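As a concrete, hedged illustration of predictive inference under exchangeability (the model and prior below are assumptions chosen for the example, not taken from the cited sources), the following sketch computes the posterior predictive probability of the next outcome in an exchangeable sequence of binary trials under a Beta-Binomial model.

```python
# Sketch of predictive inference: with a Beta(a, b) prior on the success
# probability of exchangeable Bernoulli trials, the posterior predictive
# probability that the next trial succeeds has the closed form
# (a + successes) / (a + b + trials); with a = b = 1 this is Laplace's
# rule of succession. Data and prior here are illustrative assumptions.

def predictive_prob_next_success(successes: int, trials: int,
                                 a: float = 1.0, b: float = 1.0) -> float:
    """Posterior predictive P(next trial succeeds | data) for a Beta(a, b)
    prior and a Binomial likelihood."""
    return (a + successes) / (a + b + trials)

# Example: 7 successes in 10 exchangeable trials, uniform Beta(1, 1) prior.
p_next = predictive_prob_next_success(successes=7, trials=10)
print(f"P(next observation is a success) = {p_next:.3f}")  # 8/12, about 0.667
```

Note that the prediction concerns the next observable outcome directly, rather than an unobservable parameter, which is the emphasis that distinguishes predictive inference.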