
Maximum likelihood estimation

In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of an assumed probability distribution, given some observed data. This is achieved by maximizing a likelihood function so that, under the assumed statistical model, the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate.[1] The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference.[2][3][4]
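As a simple illustration, if n independent tosses of a coin with unknown heads-probability p yield k heads, the likelihood is L(p) = p^k (1 − p)^(n−k); setting the derivative of log L(p) to zero gives the maximum likelihood estimate p̂ = k/n, the observed proportion of heads.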


If the likelihood function is differentiable, the derivative test for finding maxima can be applied. In some cases, the first-order conditions of the likelihood function can be solved analytically; for instance, the ordinary least squares estimator for a linear regression model maximizes the likelihood when the random errors are assumed to have normal distributions with the same variance.[5]
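As a concrete illustration of this point, the following minimal Python sketch (not from the article; the simulated data, sample size, and variable names are assumptions made for illustration) numerically maximizes the normal log-likelihood of a simple linear model and compares the result with the ordinary least squares fit:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.5 + 2.0 * x + rng.normal(scale=0.5, size=n)   # simulated data: intercept 1.5, slope 2.0

def neg_log_likelihood(params):
    # Negative log-likelihood of y under i.i.d. normal errors with unknown sigma.
    intercept, slope, log_sigma = params
    sigma = np.exp(log_sigma)                        # keep sigma positive
    resid = y - (intercept + slope * x)
    return 0.5 * n * np.log(2 * np.pi * sigma**2) + np.sum(resid**2) / (2 * sigma**2)

mle = minimize(neg_log_likelihood, x0=np.zeros(3)).x
ols_slope, ols_intercept = np.polyfit(x, y, deg=1)   # OLS fit for comparison

print("MLE intercept, slope:", mle[0], mle[1])
print("OLS intercept, slope:", ols_intercept, ols_slope)

Up to numerical tolerance, the two sets of coefficients agree, which is the equivalence described above.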


From the perspective of Bayesian inference, MLE is generally equivalent to maximum a posteriori (MAP) estimation with uniform prior distributions (or a normal prior distribution with a standard deviation of infinity). In frequentist inference, MLE is a special case of an extremum estimator, with the objective function being the likelihood.
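To make this equivalence concrete (assuming a prior density p(θ) that is constant over the parameter space): argmax_θ p(θ | x) = argmax_θ [p(x | θ) p(θ) / p(x)] = argmax_θ p(x | θ), because the constant prior p(θ) and the evidence p(x) do not depend on θ, so the MAP estimate coincides with the maximum likelihood estimate.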

As the sample size increases, the maximum likelihood estimator possesses, under fairly general conditions, a number of attractive limiting properties:

Consistency: the sequence of MLEs converges in probability to the value being estimated.

Invariance: if θ̂ is the maximum likelihood estimator for θ, and if g(θ) is any transformation of θ, then the maximum likelihood estimator for α = g(θ) is α̂ = g(θ̂). This property is less commonly known as functional equivariance. The invariance property holds for an arbitrary transformation g, although the proof simplifies if g is restricted to one-to-one transformations. (A numerical sketch of this property follows the list.)

Efficiency: the MLE achieves the Cramér–Rao lower bound when the sample size tends to infinity. This means that no consistent estimator has lower asymptotic mean squared error than the MLE (or other estimators attaining this bound), which also means that the MLE has asymptotic normality.

Second-order efficiency after correction for bias.
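A minimal numerical sketch of the invariance property (the normal model, sample size, and variable names below are illustrative assumptions, not taken from the article): for a normal sample, directly maximizing the likelihood over σ gives the same value as taking the square root of the closed-form maximum likelihood estimate of σ².

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
data = rng.normal(loc=3.0, scale=2.0, size=1000)

# Closed-form MLEs for a normal sample: sample mean and variance with divisor n.
mu_hat = data.mean()
var_hat = np.mean((data - mu_hat) ** 2)

def neg_log_lik_sigma(sigma):
    # Negative log-likelihood as a function of sigma, with mu fixed at its MLE.
    return len(data) * np.log(sigma) + np.sum((data - mu_hat) ** 2) / (2 * sigma**2)

sigma_hat = minimize_scalar(neg_log_lik_sigma, bounds=(0.1, 10.0), method="bounded").x

print(sigma_hat, np.sqrt(var_hat))   # the two estimates of sigma agree (invariance)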

See also

Akaike information criterion: a criterion to compare statistical models, based on MLE
Extremum estimator: a more general class of estimators to which MLE belongs
Fisher information: information matrix, and its relationship to the covariance matrix of ML estimates
Mean squared error: a measure of how 'good' an estimator of a distributional parameter is (be it the maximum likelihood estimator or some other estimator)
RANSAC: a method to estimate parameters of a mathematical model given data that contains outliers
Rao–Blackwell theorem: yields a process for finding the best possible unbiased estimator (in the sense of having minimal mean squared error); the MLE is often a good starting place for the process
Wilks' theorem: provides a means of estimating the size and shape of the region of roughly equally probable estimates for the population's parameter values, using the information from a single sample and a chi-squared distribution (a numerical sketch follows this list)
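The following minimal sketch of a Wilks-type interval (the exponential model, confidence level, and search grid are illustrative assumptions, not taken from the article): parameter values whose log-likelihood lies within half the 95% chi-squared(1) quantile of the maximum form an approximate confidence region.

import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
data = rng.exponential(scale=1 / 0.7, size=100)   # simulated sample, true rate 0.7

def log_lik(rate):
    # Log-likelihood of an exponential sample as a function of the rate.
    return len(data) * np.log(rate) - rate * np.sum(data)

rate_mle = 1 / data.mean()                        # closed-form MLE of the rate
cutoff = chi2.ppf(0.95, df=1) / 2                 # Wilks: 2*(max log-lik - log-lik) ~ chi2(1)

grid = np.linspace(0.3, 1.5, 2000)
inside = grid[log_lik(rate_mle) - log_lik(grid) <= cutoff]
print("approximate 95% likelihood-ratio interval:", inside.min(), inside.max())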

Further reading

Cramer, J. S. (1986). Econometric Applications of Maximum Likelihood Methods. New York, NY: Cambridge University Press. ISBN 0-521-25317-9.

Eliason, Scott R. (1993). Maximum Likelihood Estimation: Logic and Practice. Newbury Park: Sage. ISBN 0-8039-4107-2.

King, Gary (1989). Unifying Political Methodology: The Likelihood Theory of Statistical Inference. Cambridge University Press. ISBN 0-521-36697-6.

Le Cam, Lucien (1990). "Maximum likelihood: An Introduction". ISI Review. 58 (2): 153–171. doi:10.2307/1403464. JSTOR 1403464.

Magnus, Jan R. (2017). "Maximum Likelihood". Introduction to the Theory of Econometrics. Amsterdam, NL: VU University Press. pp. 53–68. ISBN 978-90-8659-766-6.

Millar, Russell B. (2011). Maximum Likelihood Estimation and Inference. Hoboken, NJ: Wiley. ISBN 978-0-470-09482-2.

Pickles, Andrew (1986). An Introduction to Likelihood Analysis. Norwich: W. H. Hutchins & Sons. ISBN 0-86094-190-6.

Severini, Thomas A. (2000). Likelihood Methods in Statistics. New York, NY: Oxford University Press. ISBN 0-19-850650-3.

Ward, Michael D.; Ahlquist, John S. (2018). Maximum Likelihood for Social Science: Strategies for Analysis. Cambridge University Press. ISBN 978-1-316-63682-4.

External links

Tilevik, Andreas (2022). "Maximum likelihood vs least squares in linear regression" (video).

"Maximum-likelihood method". Encyclopedia of Mathematics. EMS Press, 2001 [1994].

Purcell, S. "Maximum Likelihood Estimation".

Sargent, Thomas; Stachurski, John. "Maximum Likelihood Estimation". Quantitative Economics with Python.

Toomet, Ott; Henningsen, Arne (2019-05-19). "maxLik: A package for maximum likelihood estimation in R".

Lesser, Lawrence M. (2007). "'MLE' song lyrics". Mathematical Sciences / College of Science, University of Texas, El Paso, TX. Retrieved 2021-03-06.