Forward–backward algorithm

Forward–backward algorithm is a dynamic programming algorithm for inferring the hidden states in a hidden Markov model (HMM). It is used in a variety of fields, including Bioinformatics, Speech recognition, and Natural language processing. The algorithm computes the posterior probability of each hidden state given the full sequence of observations, making it possible to reason about internal states of the model that are not directly observable.

Overview

The Forward–backward algorithm consists of two parts: the "forward" pass and the "backward" pass. The forward pass computes, for each time step, the joint probability of the observations seen so far and each possible current state. The backward pass computes, for each state at a given time, the probability of the remaining observations from that point onward. Combining and normalizing these two sets of quantities yields the posterior probability of every state at every time step.

Forward Pass

The forward pass computes, for each state and each time t, the joint probability of the observations up to time t and of being in that state at time t. This is done recursively: the forward probability of a state at time t+1 is obtained by summing, over all states at time t, the forward probability of each predecessor multiplied by the transition probability into the new state, and then multiplying the result by the probability of emitting the observation at time t+1. A minimal sketch of this pass is given below.
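The following is a minimal sketch of the forward recursion in Python with NumPy; the function name forward and the parameter layout (pi for the initial distribution, A for transitions, B for emissions, obs for observation indices) are illustrative assumptions rather than any standard API.

    import numpy as np

    def forward(pi, A, B, obs):
        """Return alpha, where alpha[t, i] = P(o_1..o_t, state i at time t).

        pi  : (N,)   initial state distribution
        A   : (N, N) transition probabilities, A[i, j] = a_ij
        B   : (N, M) emission probabilities,  B[j, k] = b_j(k)
        obs : (T,)   observation indices
        """
        T, N = len(obs), len(pi)
        alpha = np.zeros((T, N))
        alpha[0] = pi * B[:, obs[0]]          # initialization: alpha_1(i) = pi_i * b_i(o_1)
        for t in range(1, T):
            # recursion: alpha_{t+1}(j) = [sum_i alpha_t(i) * a_ij] * b_j(o_{t+1})
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        return alpha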

Backward Pass

The backward pass computes, for each state at time t, the probability of observing the remaining data from time t+1 through the end of the sequence, given that state at time t. The recursion mirrors the forward pass but runs in reverse, starting from the end of the observation sequence; a sketch is given below.
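A matching sketch of the backward recursion, under the same illustrative parameter layout as above:

    import numpy as np

    def backward(A, B, obs):
        """Return beta, where beta[t, i] = P(o_{t+1}..o_T | state i at time t)."""
        T, N = len(obs), A.shape[0]
        beta = np.zeros((T, N))
        beta[T - 1] = 1.0                     # initialization: beta_T(i) = 1
        for t in range(T - 2, -1, -1):
            # recursion: beta_t(i) = sum_j a_ij * b_j(o_{t+1}) * beta_{t+1}(j)
            beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        return beta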

Mathematical Formulation

Let O = (o1, o2, ..., oT) be the observed sequence of events, and let S = {s1, s2, ..., sN} be the set of states in the HMM. The forward probability αt(i) is defined as the joint probability of the observations o1, ..., ot and of being in state si at time t. The backward probability βt(i) is the probability of observing the sequence from time t+1 to T, given state si at time t.

The forward and backward probabilities can be calculated using the following recursive formulas:

  • Initialization: α1(i) = πi * bi(o1), where πi is the probability of starting in state i, and βT(i) = 1 for all i
  • Forward: αt+1(j) = [∑i=1^N αt(i) * aij] * bj(ot+1)
  • Backward: βt(i) = ∑j=1^N aij * bj(ot+1) * βt+1(j)

where aij is the transition probability from state i to state j, and bj(ot) is the probability of observing ot given state j. The posterior probability of being in state i at time t, often written γt(i), is obtained by combining the two quantities and normalizing: γt(i) = αt(i) * βt(i) / ∑j=1^N αt(j) * βt(j). The normalizing sum is the same at every time step and equals the likelihood of the whole observation sequence. A sketch of this combination step is given below.
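A minimal sketch of the combination step, assuming alpha and beta are T × N arrays produced by the illustrative forward and backward functions sketched above:

    import numpy as np

    def posteriors(alpha, beta):
        """Combine forward and backward probabilities into state posteriors:
        gamma[t, i] = alpha_t(i) * beta_t(i) / sum_j alpha_t(j) * beta_t(j).
        """
        gamma = alpha * beta
        return gamma / gamma.sum(axis=1, keepdims=True)   # normalize each time step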

Applications

The Forward–backward algorithm is crucial in various applications that involve HMMs. In bioinformatics, it is used for Gene prediction and Protein structure prediction. In speech recognition, it helps decode spoken language by treating acoustic observations as emissions from hidden states corresponding to phonemes or words. Similarly, in natural language processing, it supports tasks such as Part-of-speech tagging and Named entity recognition.

Implementation

Implementing the Forward–backward algorithm involves initializing the forward and backward probabilities, sweeping the recursive formulas across the observation sequence (forwards for α, backwards for β), and finally combining the two to obtain the posterior probability of each state at each time step. A toy end-to-end example is sketched below.
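A toy end-to-end usage sketch, assuming the illustrative forward, backward, and posteriors helpers from the sections above are in scope; the parameter values are made-up numbers for a two-state HMM with three observation symbols.

    import numpy as np

    pi  = np.array([0.6, 0.4])                  # initial state distribution (made-up values)
    A   = np.array([[0.7, 0.3],                 # transition probabilities a_ij
                    [0.4, 0.6]])
    B   = np.array([[0.5, 0.4, 0.1],            # emission probabilities b_j(o)
                    [0.1, 0.3, 0.6]])
    obs = np.array([0, 1, 2, 2])                # observed symbol indices

    alpha = forward(pi, A, B, obs)      # forward pass
    beta  = backward(A, B, obs)         # backward pass
    gamma = posteriors(alpha, beta)     # posterior state probabilities

    print(gamma)             # one row per time step; each row sums to 1
    print(alpha[-1].sum())   # likelihood of the full observation sequence, P(o1, ..., oT)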
