Metropolis–Hastings algorithm

[Figure: Flowchart of the Metropolis–Hastings (M–H) algorithm for parameter estimation, illustrated on the 3D Rosenbrock function.]

The Metropolis–Hastings algorithm is a Markov chain Monte Carlo (MCMC) method for obtaining a sequence of random samples from a probability distribution for which direct sampling is difficult. This algorithm is named after Nicholas Metropolis and W. K. Hastings.

Overview

The Metropolis–Hastings algorithm generates a sequence of samples from the target distribution by constructing a Markov chain that has the desired distribution as its equilibrium distribution. At each step, the algorithm proposes a move to a new state and then accepts or rejects the move according to an acceptance probability computed from the target and proposal densities.

Algorithm

The algorithm can be described as follows (a minimal code sketch is given after the steps):

1. Initialization: Start with an initial state \( x_0 \).
2. Iteration: For each iteration \( t \):

 a. Propose a new state \( x' \) based on the current state \( x_t \) using a proposal distribution \( q(x'|x_t) \).
 b. Calculate the acceptance ratio:
  \[
  \alpha = \min\left(1, \frac{\pi(x') q(x_t|x')}{\pi(x_t) q(x'|x_t)}\right)
  \]
  where \( \pi \) is the target distribution.
 c. Accept the new state with probability \( \alpha \). If the new state is accepted, set \( x_{t+1} = x' \); otherwise, set \( x_{t+1} = x_t \).
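
Below is a minimal sketch of these steps in Python. It is an illustration, not a reference implementation: the function and argument names (metropolis_hastings, log_pi, propose, log_q, step) are made up for this example, and the demo target is a standard normal. Working with log-densities avoids numerical underflow when \( \pi \) takes very small values.

```python
import numpy as np


def metropolis_hastings(log_pi, propose, log_q, x0, n_samples, rng=None):
    """Generic Metropolis-Hastings sampler (illustrative sketch).

    log_pi(x)       : log of the (possibly unnormalised) target density pi(x)
    propose(x, rng) : draws a candidate x' from the proposal q(. | x)
    log_q(a, b)     : log of the proposal density q(a | b)
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_samples,) + x.shape)
    for t in range(n_samples):
        x_new = propose(x, rng)
        # log of alpha = min(1, pi(x') q(x|x') / (pi(x) q(x'|x)))
        log_alpha = (log_pi(x_new) + log_q(x, x_new)
                     - log_pi(x) - log_q(x_new, x))
        # Accept the proposed state with probability min(1, alpha);
        # otherwise keep the current state.
        if np.log(rng.uniform()) < log_alpha:
            x = x_new
        samples[t] = x
    return samples


# Example: sample from a standard normal target with a Gaussian
# random-walk proposal (symmetric, so the q terms cancel exactly).
step = 0.8
log_pi = lambda x: -0.5 * float(np.sum(x ** 2))
propose = lambda x, rng: x + step * rng.standard_normal(x.shape)
log_q = lambda a, b: -0.5 * float(np.sum((a - b) ** 2)) / step ** 2

draws = metropolis_hastings(log_pi, propose, log_q,
                            x0=np.zeros(1), n_samples=5000)
print("sample mean:", draws[1000:].mean(), "sample variance:", draws[1000:].var())
```

In the symmetric random-walk case the two \( q \) terms cancel and the acceptance ratio reduces to \( \min\left(1, \pi(x')/\pi(x_t)\right) \), the original Metropolis rule; the general form above also covers asymmetric proposals.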

Properties

The Metropolis–Hastings algorithm has several important properties:

  • Ergodicity: Provided the proposal distribution allows the chain to reach the whole support of \( \pi \), the Markov chain is ergodic, meaning that it will eventually explore the entire state space given enough time.
  • Detailed balance: The acceptance rule satisfies the detailed balance condition with respect to \( \pi \), ensuring that the target distribution is a stationary distribution of the Markov chain (see the identity after this list).
  • Convergence: Under these conditions, the distribution of the samples generated by the algorithm converges to the target distribution.
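
Detailed balance can be checked directly from the acceptance ratio. Writing the transition density for a move to a different state as \( P(x \to x') = q(x'|x)\,\alpha(x, x') \), one has

\[
\pi(x)\,P(x \to x') = \pi(x)\,q(x'|x)\,\min\left(1, \frac{\pi(x')\,q(x|x')}{\pi(x)\,q(x'|x)}\right) = \min\bigl(\pi(x)\,q(x'|x),\ \pi(x')\,q(x|x')\bigr),
\]

which is symmetric in \( x \) and \( x' \); hence \( \pi(x)\,P(x \to x') = \pi(x')\,P(x' \to x) \), which is exactly the detailed balance condition.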

Applications

The Metropolis–Hastings algorithm is widely used in fields where a distribution of interest cannot be sampled directly, including:

  • Bayesian statistics: drawing samples from posterior distributions that have no closed form.
  • Statistical physics: simulating systems such as the Ising model.
  • Machine learning: approximate inference in probabilistic models.
  • Computational biology: Bayesian phylogenetic inference.

Variants

Several variants of the Metropolis–Hastings algorithm exist, including:

  • Gibbs sampling: A special case where the proposal distribution is chosen to be the conditional distribution of each variable given the others; with this choice every proposal is accepted (see the derivation after this list).
  • Adaptive Metropolis–Hastings: An extension that adapts the proposal distribution based on the history of the chain.
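
For the Gibbs special case, the acceptance ratio can be evaluated explicitly. Suppose the proposal updates the \( i \)-th coordinate by drawing \( x'_i \) from the full conditional, so \( q(x'|x) = \pi(x'_i \mid x_{-i}) \) and \( x'_{-i} = x_{-i} \). Substituting into the acceptance ratio and factoring \( \pi(x) = \pi(x_i \mid x_{-i})\,\pi(x_{-i}) \) gives

\[
\alpha = \min\left(1, \frac{\pi(x')\,q(x|x')}{\pi(x)\,q(x'|x)}\right)
       = \min\left(1, \frac{\pi(x'_i \mid x_{-i})\,\pi(x_{-i})\,\pi(x_i \mid x_{-i})}{\pi(x_i \mid x_{-i})\,\pi(x_{-i})\,\pi(x'_i \mid x_{-i})}\right) = 1,
\]

so every proposed move is accepted and no explicit accept/reject step is needed.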

See also

References

Contributors: Prab R. Tumpati, MD