Maximum likelihood

From WikiMD's Wellness Encyclopedia

Maximum likelihood estimation (MLE) is a method in statistics for estimating the parameters of a statistical model. Many well-known estimators, such as the sample mean for normally distributed data, arise as maximum likelihood estimates, and the method remains applicable in complex models where other estimation approaches are infeasible.

Overview

Maximum likelihood estimation involves defining a likelihood function which measures the plausibility of a parameter value given a sample of data. This function is constructed based on the probability distribution of the data. The estimated parameters are those that maximize this likelihood function.

Mathematical Definition

Given a statistical model with parameters θ and a set of data y, the likelihood function L(θ|y) is defined as the probability of observing the data y under the parameters θ:

L(θ|y) = P(y|θ)

The goal of maximum likelihood estimation is to find the value of θ that maximizes L(θ|y). This is often done by taking the natural logarithm of the likelihood function, known as the log-likelihood, which simplifies the calculations without affecting the location of the maximum:

ℓ(θ|y) = log(L(θ|y))

The values of θ that maximize ℓ(θ|y) are considered the maximum likelihood estimates of the parameters.
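As an illustrative sketch (not part of the original article), the log-likelihood of a simple Bernoulli model can be maximized numerically: scanning a grid of candidate parameter values and keeping the one with the highest log-likelihood recovers the sample proportion of successes. The data values here are hypothetical.

```python
import math

def bernoulli_log_likelihood(p, data):
    """Log-likelihood l(p | data) for i.i.d. Bernoulli observations."""
    return sum(math.log(p) if y == 1 else math.log(1 - p) for y in data)

data = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]  # 7 successes out of 10 (hypothetical)

# Brute-force stand-in for analytic maximization: evaluate the
# log-likelihood on a grid of candidate p values and keep the argmax.
candidates = [k / 1000 for k in range(1, 1000)]
p_hat = max(candidates, key=lambda p: bernoulli_log_likelihood(p, data))

print(p_hat)  # 0.7, the sample proportion of successes
```

In practice the same maximization is done analytically (as in the normal example below) or with a numerical optimizer; the grid scan simply makes the "find the θ that maximizes ℓ" step concrete.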

Properties

Maximum likelihood estimators have several important properties:

  • Consistency: As the sample size increases, the MLE converges in probability to the true value of the parameter.
  • Efficiency: Under regularity conditions, the MLE is asymptotically efficient, attaining the Cramér–Rao lower bound (the lowest possible asymptotic variance among consistent estimators).
  • Asymptotic normality: Under certain conditions, the distribution of the MLE approaches a normal distribution as the sample size increases.
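The consistency property can be illustrated with a small simulation (a sketch with hypothetical values, not part of the article): for i.i.d. exponential data with rate λ, the MLE is λ̂ = n / ∑ y_i = 1 / ȳ, and its error typically shrinks as the sample size grows.

```python
import random

random.seed(0)
true_rate = 2.0  # hypothetical rate parameter lambda of the exponential

def mle_rate(sample):
    """MLE of the exponential rate: lambda_hat = n / sum(y_i) = 1 / mean."""
    return len(sample) / sum(sample)

errors = {}
for n in (10, 1000, 100000):
    sample = [random.expovariate(true_rate) for _ in range(n)]
    errors[n] = abs(mle_rate(sample) - true_rate)

print(errors)  # estimation error typically shrinks as n grows
```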

Applications

Maximum likelihood estimation is widely used in many fields of science and engineering, including econometrics, biostatistics, and machine learning. It is particularly popular in the context of logistic regression, Poisson regression, and other types of generalized linear models.
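To make the logistic regression case concrete, here is a minimal sketch (not from the article) that fits a one-feature logistic model by gradient ascent on its Bernoulli log-likelihood; the "true" coefficients and data are hypothetical, chosen only for illustration.

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Synthetic data from a logistic model with hypothetical coefficients
# b0 = -1 (intercept) and b1 = 2 (slope).
b0_true, b1_true = -1.0, 2.0
xs = [random.uniform(-3, 3) for _ in range(400)]
ys = [1 if random.random() < sigmoid(b0_true + b1_true * x) else 0 for x in xs]

# Gradient ascent on the log-likelihood
#   l(b0, b1) = sum_i [ y_i * log(p_i) + (1 - y_i) * log(1 - p_i) ],
# whose gradient has the simple residual form (y_i - p_i).
b0, b1 = 0.0, 0.0
lr = 0.3
for _ in range(2000):
    g0 = g1 = 0.0
    for x, y in zip(xs, ys):
        r = y - sigmoid(b0 + b1 * x)  # residual drives both gradients
        g0 += r
        g1 += r * x
    b0 += lr * g0 / len(xs)
    b1 += lr * g1 / len(xs)

print(b0, b1)  # estimates should land near the true -1 and 2
```

Statistical software fits generalized linear models with more sophisticated optimizers (e.g., iteratively reweighted least squares), but the objective being maximized is this same likelihood.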

Limitations

While MLE has many desirable properties, it also has limitations:

  • It can be sensitive to the choice of model: if the model assumptions are incorrect, the MLE can be biased or inconsistent.
  • In some cases, especially in complex models, finding the maximum likelihood estimate can be computationally challenging.

Example

Consider a set of data y_i (i=1, ..., n) that are independently and identically distributed following a normal distribution with unknown mean μ and variance σ². The likelihood function for this model is:

L(μ, σ²|y) = ∏_{i=1}^{n} (1/√(2πσ²)) exp(-(y_i - μ)² / (2σ²))

The log-likelihood function is:

ℓ(μ, σ²|y) = -(n/2) log(2π) - (n/2) log(σ²) - 1/(2σ²) ∑_{i=1}^{n} (y_i - μ)²

Setting the partial derivatives of ℓ with respect to μ and σ² to zero gives the maximum likelihood estimates μ̂ = (1/n) ∑ y_i (the sample mean) and σ̂² = (1/n) ∑ (y_i - μ̂)² (the sample variance with divisor n, which is slightly biased in finite samples).
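The closed-form estimates for the normal model can be checked numerically (an illustrative sketch with hypothetical true values μ = 5 and σ = 2):

```python
import random

random.seed(42)
y = [random.gauss(5.0, 2.0) for _ in range(10000)]  # hypothetical data

n = len(y)
mu_hat = sum(y) / n                                   # MLE of the mean
sigma2_hat = sum((yi - mu_hat) ** 2 for yi in y) / n  # MLE of the variance

# Note the divisor n rather than n - 1: the MLE of the variance is the
# population-style estimator, which differs from the usual unbiased
# sample variance by the factor (n - 1) / n.
print(mu_hat, sigma2_hat)  # should be close to 5 and 4
```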

Contributors: Prab R. Tumpati, MD