Forward–backward algorithm

The forward–backward algorithm is an inference algorithm for hidden Markov models which computes the posterior marginals of all hidden state variables given a sequence of observations/emissions o_{1:T} := o_1, \dots, o_T, i.e. it computes, for all hidden state variables X_t \in \{X_1, \dots, X_T\}, the distribution P(X_t \mid o_{1:T}). This inference task is usually called smoothing. The algorithm makes use of the principle of dynamic programming to efficiently compute the values that are required to obtain the posterior marginal distributions in two passes. The first pass goes forward in time while the second goes backward in time; hence the name ''forward–backward algorithm''.

The term ''forward–backward algorithm'' is also used to refer to any algorithm belonging to the general class of algorithms that operate on sequence models in a forward–backward manner. In this sense, the descriptions in the remainder of this article refer only to one specific instance of this class.


Overview

In the first pass, the forward–backward algorithm computes a set of forward probabilities which provide, for all t \in \{1, \dots, T\}, the probability of ending up in any particular state given the first t observations in the sequence, i.e. P(X_t \mid o_{1:t}). In the second pass, the algorithm computes a set of backward probabilities which provide the probability of observing the remaining observations given any starting point t, i.e. P(o_{t+1:T} \mid X_t). These two sets of probability distributions can then be combined to obtain the distribution over states at any specific point in time given the entire observation sequence:

:P(X_t \mid o_{1:T}) = P(X_t \mid o_{1:t}, o_{t+1:T}) \propto P(o_{t+1:T} \mid X_t) P(X_t \mid o_{1:t})

The last step follows from an application of Bayes' rule and the conditional independence of o_{t+1:T} and o_{1:t} given X_t.

As outlined above, the algorithm involves three steps:

# computing forward probabilities
# computing backward probabilities
# computing smoothed values.

The forward and backward steps may also be called "forward message pass" and "backward message pass"; these terms are due to the ''message-passing'' used in general belief propagation approaches. At each single observation in the sequence, probabilities to be used for calculations at the next observation are computed. The smoothing step can be calculated simultaneously during the backward pass. This step allows the algorithm to take into account any past observations of output for computing more accurate results.

The forward–backward algorithm can be used to find the most likely state for any point in time. It cannot, however, be used to find the most likely sequence of states (see Viterbi algorithm).
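As a minimal sketch of the combination step, assuming the forward and backward quantities for all t have already been computed as NumPy arrays (the names and array layout here are illustrative choices, not from the original text):

import numpy as np

def combine(forward, backward):
    """Combine forward and backward quantities into smoothed posteriors.

    forward[t, i]  is proportional to P(X_t = x_i | o_1, ..., o_t)
    backward[t, i] is proportional to P(o_{t+1}, ..., o_T | X_t = x_i)
    """
    posterior = forward * backward  # elementwise product, shape (T, S)
    return posterior / posterior.sum(axis=1, keepdims=True)  # normalize each row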


Forward probabilities

The following description will use matrices of probability values rather than probability distributions, although in general the forward–backward algorithm can be applied to continuous as well as discrete probability models. We transform the probability distributions related to a given hidden Markov model into matrix notation as follows. The transition probabilities \mathbf{P}(X_t \mid X_{t-1}) of a given random variable X_t representing all possible states in the hidden Markov model will be represented by the matrix \mathbf{T} where the column index j will represent the target state and the row index i represents the start state. A transition from row-vector state \mathbf{\pi_t} to the incremental row-vector state \mathbf{\pi_{t+1}} is written as \mathbf{\pi_{t+1}} = \mathbf{\pi_t} \mathbf{T}. The example below represents a system where the probability of staying in the same state after each step is 70% and the probability of transitioning to the other state is 30%. The transition matrix is then:

:\mathbf{T} = \begin{pmatrix} 0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix}

In a typical Markov model, we would multiply a state vector by this matrix to obtain the probabilities for the subsequent state. In a hidden Markov model the state is unknown, and we instead observe events associated with the possible states. An event matrix of the form:

:\mathbf{B} = \begin{pmatrix} 0.9 & 0.1 \\ 0.2 & 0.8 \end{pmatrix}

provides the probabilities for observing events given a particular state. In the above example, event 1 will be observed 90% of the time if we are in state 1 while event 2 has a 10% probability of occurring in this state. In contrast, event 1 will only be observed 20% of the time if we are in state 2 and event 2 has an 80% chance of occurring. Given an arbitrary row-vector describing the state of the system (\mathbf{\pi}), the probability of observing event j is then:

:\mathbf{P}(O = j) = \sum_i \pi_i B_{i,j}

The probability of a given state leading to the observed event j can be represented in matrix form by multiplying the state row-vector (\mathbf{\pi}) with an observation matrix (\mathbf{O_j} = \mathrm{diag}(B_{\cdot,j})) containing only diagonal entries. Continuing the above example, the observation matrix for event 1 would be:

:\mathbf{O_1} = \begin{pmatrix} 0.9 & 0.0 \\ 0.0 & 0.2 \end{pmatrix}

This allows us to calculate the new unnormalized probabilities state vector \mathbf{\pi'} through Bayes' rule, weighting by the likelihood that each element of \mathbf{\pi} generated event 1, as:

:\mathbf{\pi'} = \mathbf{\pi} \mathbf{O_1}

We can now make this general procedure specific to our series of observations. Assuming an initial state vector \mathbf{\pi_0} (which can be optimized as a parameter through repetitions of the forward–backward procedure), we begin with \mathbf{f_{0:0}} = \mathbf{\pi_0}, then update the state distribution and weight by the likelihood of the first observation:

:\mathbf{f_{0:1}} = \mathbf{\pi_0} \mathbf{T} \mathbf{O_1}

This process can be carried forward with additional observations using:

:\mathbf{f_{0:t}} = \mathbf{f_{0:t-1}} \mathbf{T} \mathbf{O_t}

This value is the forward unnormalized probability vector. The i'th entry of this vector provides:

:\mathbf{f_{0:t}}(i) = \mathbf{P}(o_1, o_2, \dots, o_t, X_t = x_i \mid \mathbf{\pi_0})

Typically, we will normalize the probability vector at each step so that its entries sum to 1. A scaling factor is thus introduced at each step such that:

:\mathbf{\hat{f}_{0:t}} = c_t^{-1}\ \mathbf{\hat{f}_{0:t-1}} \mathbf{T} \mathbf{O_t}

where \mathbf{\hat{f}_{0:t-1}} represents the scaled vector from the previous step and c_t represents the scaling factor that causes the resulting vector's entries to sum to 1. The product of the scaling factors is the total probability for observing the given events irrespective of the final states:

:\mathbf{P}(o_1, o_2, \dots, o_t \mid \mathbf{\pi_0}) = \prod_{s=1}^t c_s

This allows us to interpret the scaled probability vector as:

:\mathbf{\hat{f}_{0:t}}(i) = \frac{\mathbf{f_{0:t}}(i)}{\prod_{s=1}^t c_s} = \frac{\mathbf{P}(o_1, o_2, \dots, o_t, X_t = x_i \mid \mathbf{\pi_0})}{\mathbf{P}(o_1, o_2, \dots, o_t \mid \mathbf{\pi_0})} = \mathbf{P}(X_t = x_i \mid o_1, o_2, \dots, o_t, \mathbf{\pi_0})

We thus find that the product of the scaling factors provides us with the total probability for observing the given sequence up to time t and that the scaled probability vector provides us with the probability of being in each state at this time.
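A short NumPy sketch of this scaled forward pass, using the example \mathbf{T} and \mathbf{B} above (the function name, array layout, and event encoding are illustrative choices, not part of the original text):

import numpy as np

T = np.array([[0.7, 0.3],
              [0.3, 0.7]])  # transition matrix: row = start state, column = target state
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])  # event matrix: row = state, column = observed event

def forward_pass(observations, pi0):
    """Scaled forward pass: returns the scaled vectors f_hat and scaling factors c."""
    f_hat, c = [], []
    f = pi0
    for o in observations:            # o is an event index (a column of B)
        f = f @ T @ np.diag(B[:, o])  # propagate one step, then weight by likelihood of o
        c.append(f.sum())             # scaling factor c_t
        f = f / c[-1]                 # normalize so the entries sum to 1
        f_hat.append(f)
    return np.array(f_hat), np.array(c)

f_hat, c = forward_pass([0, 0, 1, 0, 0], np.array([0.5, 0.5]))  # the umbrella example below
print(f_hat[-1])   # final filtered state distribution, about [0.8673, 0.1327]
print(np.prod(c))  # total probability of the observation sequence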


Backward probabilities

A similar procedure can be constructed to find backward probabilities. These intend to provide the probabilities:

:\mathbf{b_{t:T}}(i) = \mathbf{P}(o_{t+1}, o_{t+2}, \dots, o_T \mid X_t = x_i)

That is, we now want to assume that we start in a particular state (X_t = x_i), and we are now interested in the probability of observing all future events from this state. Since the initial state is assumed as given (i.e. the prior probability of this state = 100%), we begin with:

:\mathbf{b_{T:T}} = [1\ 1\ \dots\ 1]^T

Notice that we are now using a column vector while the forward probabilities used row vectors. We can then work backwards using:

:\mathbf{b_{t-1:T}} = \mathbf{T} \mathbf{O_t} \mathbf{b_{t:T}}

While we could normalize this vector as well so that its entries sum to one, this is not usually done. Noting that each entry contains the probability of the future event sequence given a particular initial state, normalizing this vector would be equivalent to applying Bayes' theorem to find the likelihood of each initial state given the future events (assuming uniform priors for the final state vector). However, it is more common to scale this vector using the same c_t constants used in the forward probability calculations. \mathbf{b_{T:T}} is not scaled, but subsequent operations use:

:\mathbf{\hat{b}_{t-1:T}} = c_t^{-1} \mathbf{T} \mathbf{O_t} \mathbf{\hat{b}_{t:T}}

where \mathbf{\hat{b}_{t:T}} represents the previous, scaled vector. The result is that the scaled probability vector is related to the backward probabilities by:

:\mathbf{\hat{b}_{t:T}}(i) = \frac{\mathbf{b_{t:T}}(i)}{\prod_{s=t+1}^T c_s}

This is useful because it allows us to find the total probability of being in each state at a given time, t, by multiplying these values:

:\mathbf{\gamma_t}(i) = \mathbf{P}(X_t = x_i \mid o_1, o_2, \dots, o_T, \mathbf{\pi_0}) = \frac{\mathbf{P}(o_1, o_2, \dots, o_T, X_t = x_i \mid \mathbf{\pi_0})}{\mathbf{P}(o_1, o_2, \dots, o_T \mid \mathbf{\pi_0})} = \frac{\mathbf{f_{0:t}}(i) \cdot \mathbf{b_{t:T}}(i)}{\prod_{s=1}^T c_s} = \mathbf{\hat{f}_{0:t}}(i) \cdot \mathbf{\hat{b}_{t:T}}(i)

To understand this, we note that \mathbf{f_{0:t}}(i) \cdot \mathbf{b_{t:T}}(i) provides the probability for observing the given events in a way that passes through state x_i at time t. This probability includes the forward probabilities covering all events up to time t as well as the backward probabilities which include all future events. This is the numerator we are looking for in our equation, and we divide by the total probability of the observation sequence to normalize this value and extract only the probability that X_t = x_i. These values are sometimes called the "smoothed values" as they combine the forward and backward probabilities to compute a final probability.

The values \mathbf{\gamma_t}(i) thus provide the probability of being in each state at time t. As such, they are useful for determining the most probable state at any time. The term "most probable state" is somewhat ambiguous. While the most probable state is the most likely to be correct at a given point, the sequence of individually probable states is not likely to be the most probable sequence. This is because the probabilities for each point are calculated independently of each other. They do not take into account the transition probabilities between states, and it is thus possible to get states at two moments (t and t+1) that are both most probable at those time points but which have very little probability of occurring together, i.e. \mathbf{P}(X_t = x_i, X_{t+1} = x_j) \neq \mathbf{P}(X_t = x_i) \mathbf{P}(X_{t+1} = x_j). The most probable sequence of states that produced an observation sequence can be found using the Viterbi algorithm.
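Continuing the sketch from the forward section, a possible backward pass and smoothing step (again illustrative naming; here b_hat[t] holds the scaled vector \hat{b}_{t:T}):

def backward_pass(observations, c):
    """Scaled backward pass, reusing the forward scaling factors c."""
    n = len(observations)
    b_hat = [None] * (n + 1)
    b = np.ones(T.shape[0])  # b_{T:T} = [1 1 ... 1]^T, left unscaled
    b_hat[n] = b
    for t in range(n - 1, -1, -1):
        # b_hat[t] = c_{t+1}^{-1} T O_{t+1} b_hat[t+1], in 0-based indexing
        b = (T @ np.diag(B[:, observations[t]]) @ b) / c[t]
        b_hat[t] = b
    return np.array(b_hat)

obs = [0, 0, 1, 0, 0]
f_hat, c = forward_pass(obs, np.array([0.5, 0.5]))
b_hat = backward_pass(obs, c)
gamma = f_hat * b_hat[1:]  # smoothed values; b_hat[0] instead pairs with the prior pi_0
print(gamma)               # row t holds P(X_{t+1} = x_i | o_1, ..., o_T)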


Example

This example takes as its basis the umbrella world in Russell & Norvig 2010 Chapter 15 pp. 567, in which we would like to infer the weather given observation of another person either carrying or not carrying an umbrella. We assume two possible states for the weather: state 1 = rain, state 2 = no rain. We assume that the weather has a 70% chance of staying the same each day and a 30% chance of changing. The transition probabilities are then:

:\mathbf{T} = \begin{pmatrix} 0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix}

We also assume each state generates one of two possible events: event 1 = umbrella, event 2 = no umbrella. The conditional probabilities for these occurring in each state are given by the probability matrix:

:\mathbf{B} = \begin{pmatrix} 0.9 & 0.1 \\ 0.2 & 0.8 \end{pmatrix}

We then observe the following sequence of events: {umbrella, umbrella, no umbrella, umbrella, umbrella}, which we will represent in our calculations as:

:\mathbf{O_1} = \begin{pmatrix}0.9 & 0.0 \\ 0.0 & 0.2 \end{pmatrix}~~\mathbf{O_2} = \begin{pmatrix}0.9 & 0.0 \\ 0.0 & 0.2 \end{pmatrix}~~\mathbf{O_3} = \begin{pmatrix}0.1 & 0.0 \\ 0.0 & 0.8 \end{pmatrix}~~\mathbf{O_4} = \begin{pmatrix}0.9 & 0.0 \\ 0.0 & 0.2 \end{pmatrix}~~\mathbf{O_5} = \begin{pmatrix}0.9 & 0.0 \\ 0.0 & 0.2 \end{pmatrix}

Note that \mathbf{O_3} differs from the others because of the "no umbrella" observation.

In computing the forward probabilities we begin with:

:\mathbf{f_{0:0}} = \begin{pmatrix} 0.5 & 0.5 \end{pmatrix}

which is our prior state vector indicating that we don't know which state the weather is in before our observations. While a state vector should be given as a row vector, we will use the transpose of the matrix so that the calculations below are easier to read. Our calculations are then written in the form:

:(\mathbf{\hat{f}_{0:t}})^T = c_t^{-1}\mathbf{O_t}(\mathbf{T})^T(\mathbf{\hat{f}_{0:t-1}})^T

instead of:

:\mathbf{\hat{f}_{0:t}} = c_t^{-1}\mathbf{\hat{f}_{0:t-1}} \mathbf{T} \mathbf{O_t}

Notice that the transformation matrix is also transposed, but in our example the transpose is equal to the original matrix. Performing these calculations and normalizing the results provides:

:(\mathbf{\hat{f}_{0:1}})^T = c_1^{-1}\begin{pmatrix}0.9 & 0.0 \\ 0.0 & 0.2 \end{pmatrix}\begin{pmatrix}0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix}\begin{pmatrix}0.5000 \\ 0.5000 \end{pmatrix} = c_1^{-1}\begin{pmatrix}0.4500 \\ 0.1000\end{pmatrix} = \begin{pmatrix}0.8182 \\ 0.1818 \end{pmatrix}

:(\mathbf{\hat{f}_{0:2}})^T = c_2^{-1}\begin{pmatrix}0.9 & 0.0 \\ 0.0 & 0.2 \end{pmatrix}\begin{pmatrix}0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix}\begin{pmatrix}0.8182 \\ 0.1818 \end{pmatrix} = c_2^{-1}\begin{pmatrix}0.5645 \\ 0.0745\end{pmatrix} = \begin{pmatrix}0.8834 \\ 0.1166 \end{pmatrix}

:(\mathbf{\hat{f}_{0:3}})^T = c_3^{-1}\begin{pmatrix}0.1 & 0.0 \\ 0.0 & 0.8 \end{pmatrix}\begin{pmatrix}0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix}\begin{pmatrix}0.8834 \\ 0.1166 \end{pmatrix} = c_3^{-1}\begin{pmatrix}0.0653 \\ 0.2772\end{pmatrix} = \begin{pmatrix}0.1907 \\ 0.8093 \end{pmatrix}

:(\mathbf{\hat{f}_{0:4}})^T = c_4^{-1}\begin{pmatrix}0.9 & 0.0 \\ 0.0 & 0.2 \end{pmatrix}\begin{pmatrix}0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix}\begin{pmatrix}0.1907 \\ 0.8093 \end{pmatrix} = c_4^{-1}\begin{pmatrix}0.3386 \\ 0.1247\end{pmatrix} = \begin{pmatrix}0.7308 \\ 0.2692 \end{pmatrix}

:(\mathbf{\hat{f}_{0:5}})^T = c_5^{-1}\begin{pmatrix}0.9 & 0.0 \\ 0.0 & 0.2 \end{pmatrix}\begin{pmatrix}0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix}\begin{pmatrix}0.7308 \\ 0.2692 \end{pmatrix} = c_5^{-1}\begin{pmatrix}0.5331 \\ 0.0815\end{pmatrix} = \begin{pmatrix}0.8673 \\ 0.1327 \end{pmatrix}

For the backward probabilities, we start with:

:\mathbf{\hat{b}_{5:5}} = \begin{pmatrix} 1.0 \\ 1.0 \end{pmatrix}

We are then able to compute (using the observations in reverse order and normalizing with different constants):

:\mathbf{\hat{b}_{4:5}} = \alpha\begin{pmatrix}0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix}\begin{pmatrix}0.9 & 0.0 \\ 0.0 & 0.2 \end{pmatrix}\begin{pmatrix}1.0000 \\ 1.0000 \end{pmatrix} = \alpha\begin{pmatrix}0.6900 \\ 0.4100\end{pmatrix} = \begin{pmatrix}0.6273 \\ 0.3727 \end{pmatrix}

:\mathbf{\hat{b}_{3:5}} = \alpha\begin{pmatrix}0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix}\begin{pmatrix}0.9 & 0.0 \\ 0.0 & 0.2 \end{pmatrix}\begin{pmatrix}0.6273 \\ 0.3727 \end{pmatrix} = \alpha\begin{pmatrix}0.4175 \\ 0.2215\end{pmatrix} = \begin{pmatrix}0.6533 \\ 0.3467 \end{pmatrix}

:\mathbf{\hat{b}_{2:5}} = \alpha\begin{pmatrix}0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix}\begin{pmatrix}0.1 & 0.0 \\ 0.0 & 0.8 \end{pmatrix}\begin{pmatrix}0.6533 \\ 0.3467 \end{pmatrix} = \alpha\begin{pmatrix}0.1289 \\ 0.2138\end{pmatrix} = \begin{pmatrix}0.3763 \\ 0.6237 \end{pmatrix}

:\mathbf{\hat{b}_{1:5}} = \alpha\begin{pmatrix}0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix}\begin{pmatrix}0.9 & 0.0 \\ 0.0 & 0.2 \end{pmatrix}\begin{pmatrix}0.3763 \\ 0.6237 \end{pmatrix} = \alpha\begin{pmatrix}0.2745 \\ 0.1889\end{pmatrix} = \begin{pmatrix}0.5923 \\ 0.4077 \end{pmatrix}

:\mathbf{\hat{b}_{0:5}} = \alpha\begin{pmatrix}0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix}\begin{pmatrix}0.9 & 0.0 \\ 0.0 & 0.2 \end{pmatrix}\begin{pmatrix}0.5923 \\ 0.4077 \end{pmatrix} = \alpha\begin{pmatrix}0.3976 \\ 0.2170\end{pmatrix} = \begin{pmatrix}0.6469 \\ 0.3531 \end{pmatrix}

Finally, we will compute the smoothed probability values. These results must also be scaled so that their entries sum to 1, because we did not scale the backward probabilities with the c_t's found earlier. The backward probability vectors above thus actually represent the likelihood of each state at time t given the future observations. Because these vectors are proportional to the actual backward probabilities, the result has to be scaled an additional time.

:(\mathbf{\gamma_0})^T = \alpha\begin{pmatrix}0.5000 \\ 0.5000 \end{pmatrix}\circ \begin{pmatrix}0.6469 \\ 0.3531 \end{pmatrix} = \alpha\begin{pmatrix}0.3235 \\ 0.1765\end{pmatrix} = \begin{pmatrix}0.6469 \\ 0.3531 \end{pmatrix}

:(\mathbf{\gamma_1})^T = \alpha\begin{pmatrix}0.8182 \\ 0.1818 \end{pmatrix}\circ \begin{pmatrix}0.5923 \\ 0.4077 \end{pmatrix} = \alpha\begin{pmatrix}0.4846 \\ 0.0741\end{pmatrix} = \begin{pmatrix}0.8673 \\ 0.1327 \end{pmatrix}

:(\mathbf{\gamma_2})^T = \alpha\begin{pmatrix}0.8834 \\ 0.1166 \end{pmatrix}\circ \begin{pmatrix}0.3763 \\ 0.6237 \end{pmatrix} = \alpha\begin{pmatrix}0.3324 \\ 0.0728\end{pmatrix} = \begin{pmatrix}0.8204 \\ 0.1796 \end{pmatrix}

:(\mathbf{\gamma_3})^T = \alpha\begin{pmatrix}0.1907 \\ 0.8093 \end{pmatrix}\circ \begin{pmatrix}0.6533 \\ 0.3467 \end{pmatrix} = \alpha\begin{pmatrix}0.1246 \\ 0.2806\end{pmatrix} = \begin{pmatrix}0.3075 \\ 0.6925 \end{pmatrix}

:(\mathbf{\gamma_4})^T = \alpha\begin{pmatrix}0.7308 \\ 0.2692 \end{pmatrix}\circ \begin{pmatrix}0.6273 \\ 0.3727 \end{pmatrix} = \alpha\begin{pmatrix}0.4584 \\ 0.1003\end{pmatrix} = \begin{pmatrix}0.8204 \\ 0.1796 \end{pmatrix}

:(\mathbf{\gamma_5})^T = \alpha\begin{pmatrix}0.8673 \\ 0.1327 \end{pmatrix}\circ \begin{pmatrix}1.0000 \\ 1.0000 \end{pmatrix} = \alpha\begin{pmatrix}0.8673 \\ 0.1327\end{pmatrix} = \begin{pmatrix}0.8673 \\ 0.1327 \end{pmatrix}

Notice that the value of \mathbf{\gamma_0} is equal to \mathbf{\hat{b}_{0:5}} and that \mathbf{\gamma_5} is equal to \mathbf{\hat{f}_{0:5}}. This follows naturally because both \mathbf{\hat{f}_{0:5}} and \mathbf{\hat{b}_{0:5}} begin with uniform priors over the initial and final state vectors (respectively) and take into account all of the observations. However, \mathbf{\gamma_0} will only be equal to \mathbf{\hat{b}_{0:5}} when our initial state vector represents a uniform prior (i.e. all entries are equal). When this is not the case \mathbf{\hat{b}_{0:5}} needs to be combined with the initial state vector to find the most likely initial state. We thus find that the forward probabilities by themselves are sufficient to calculate the most likely final state. Similarly, the backward probabilities can be combined with the initial state vector to provide the most probable initial state given the observations. The forward and backward probabilities need only be combined to infer the most probable states between the initial and final points.

The calculations above reveal that the most probable weather state on every day except for the third one was "rain". They tell us more than this, however, as they now provide a way to quantify the probabilities of each state at different times. Perhaps most importantly, our value at \mathbf{\gamma_5} quantifies our knowledge of the state vector at the end of the observation sequence. We can then use this to predict the probability of the various weather states tomorrow as well as the probability of observing an umbrella.
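To make that closing remark concrete, a one-step prediction from \mathbf{\gamma_5} can be sketched with the arrays from the earlier code (the variable names are mine, not the article's):

gamma_5 = np.array([0.8673, 0.1327])  # smoothed state distribution at the end of the sequence
tomorrow = gamma_5 @ T                # predicted weather distribution for day 6
p_umbrella = tomorrow @ B[:, 0]       # probability of observing an umbrella on day 6
print(tomorrow, p_umbrella)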


Performance

The forward–backward algorithm runs with time complexity O(S^2 T) in space O(S T), where T is the length of the time sequence and S is the number of symbols in the state alphabet. The algorithm can also run in constant space with time complexity O(S^2 T^2) by recomputing values at each step. For comparison, a brute-force procedure would generate all possible S^T state sequences and calculate the joint probability of each state sequence with the observed series of events, which would have time complexity O(T \cdot S^T). Brute force is intractable for realistic problems, as the number of possible hidden node sequences typically is extremely high. An enhancement to the general forward–backward algorithm, called the Island algorithm, trades smaller memory usage for longer running time, taking O(S^2 T \log T) time and O(S \log T) memory. Furthermore, it is possible to invert the process model to obtain an O(S) space, O(S^2 T) time algorithm, although the inverted process may not exist or be ill-conditioned. In addition, algorithms have been developed to compute \mathbf{\hat{f}_{0:t+1}} efficiently through online smoothing such as the fixed-lag smoothing (FLS) algorithm. Russell & Norvig 2010 Figure 15.6 pp. 580
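As a rough sketch of the constant-space variant mentioned above (illustrative only; it trades time for space by recomputing each forward vector from scratch, reusing T and B from the earlier code):

def forward_at(t, observations, pi0):
    """Recompute the scaled forward vector at time t from scratch: O(S^2 t) work."""
    f = pi0
    for o in observations[:t]:
        f = f @ T @ np.diag(B[:, o])
        f = f / f.sum()
    return f

# Calling forward_at(t, ...) for each step of a backward sweep stores only O(S)
# values at a time, at the cost of O(S^2 T^2) total time.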


Pseudocode

algorithm forward_backward is
    input: guessState
           int sequenceIndex
    output: result

    if sequenceIndex is past the end of the sequence then
        return 1
    if (guessState, sequenceIndex) has been seen before then
        return saved result

    result := 0

    for each neighboring state n:
        result := result + (transition probability from guessState to n
                            given observation element at sequenceIndex)
                           × Backward(n, sequenceIndex + 1)

    save result for (guessState, sequenceIndex)

    return result
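A direct Python transcription of this memoized recursion might look as follows (a sketch under assumed names: states, num_observations, and transition_prob stand in for the quantities the pseudocode leaves abstract):

from functools import lru_cache

@lru_cache(maxsize=None)
def backward(guess_state, sequence_index):
    """Memoized version of the recursion in the pseudocode above."""
    if sequence_index >= num_observations:  # past the end of the sequence
        return 1.0
    # Sum over neighboring states, weighting each branch by the transition
    # probability given the observation element at this index.
    return sum(
        transition_prob(guess_state, n, sequence_index) * backward(n, sequence_index + 1)
        for n in states
    )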


Python example

Given HMM (just like in the Viterbi algorithm) represented in the Python programming language:

states = ("Healthy", "Fever")
end_state = "E"

observations = ("normal", "cold", "dizzy")

start_probability = {"Healthy": 0.6, "Fever": 0.4}

transition_probability = {
    "Healthy": {"Healthy": 0.69, "Fever": 0.3, "E": 0.01},
    "Fever": {"Healthy": 0.4, "Fever": 0.59, "E": 0.01},
}

emission_probability = {
    "Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
    "Fever": {"normal": 0.1, "cold": 0.3, "dizzy": 0.6},
}

We can write the implementation of the forward–backward algorithm like this:

def fwd_bkw(observations, states, start_prob, trans_prob, emm_prob, end_st):
    """Forward–backward algorithm."""
    # Forward part of the algorithm
    fwd = []
    for i, observation_i in enumerate(observations):
        f_curr = {}
        for st in states:
            if i == 0:
                # base case for the forward part
                prev_f_sum = start_prob[st]
            else:
                prev_f_sum = sum(f_prev[k] * trans_prob[k][st] for k in states)

            f_curr[st] = emm_prob[st][observation_i] * prev_f_sum

        fwd.append(f_curr)
        f_prev = f_curr

    p_fwd = sum(f_curr[k] * trans_prob[k][end_st] for k in states)

    # Backward part of the algorithm
    bkw = []
    for i, observation_i_plus in enumerate(reversed(observations[1:] + (None,))):
        b_curr = {}
        for st in states:
            if i == 0:
                # base case for backward part
                b_curr[st] = trans_prob[st][end_st]
            else:
                b_curr[st] = sum(trans_prob[st][l] * emm_prob[l][observation_i_plus] * b_prev[l]
                                 for l in states)

        bkw.insert(0, b_curr)
        b_prev = b_curr

    p_bkw = sum(start_prob[l] * emm_prob[l][observations[0]] * b_curr[l] for l in states)

    # Merging the two parts
    posterior = []
    for i in range(len(observations)):
        posterior.append({st: fwd[i][st] * bkw[i][st] / p_fwd for st in states})

    assert p_fwd == p_bkw
    return fwd, bkw, posterior

The function fwd_bkw takes the following arguments: observations is the sequence of observations, e.g. ('normal', 'cold', 'dizzy'); states is the set of hidden states; start_prob is the start probability; trans_prob are the transition probabilities; and emm_prob are the emission probabilities. For simplicity of code, we assume that the observation sequence observations is non-empty and that trans_prob[i][j] and emm_prob[i][j] is defined for all states i, j.

In the running example, the forward–backward algorithm is used as follows:

def example():
    return fwd_bkw(
        observations,
        states,
        start_probability,
        transition_probability,
        emission_probability,
        end_state,
    )

>>> for line in example():
...     print(*line)
...


See also

* Baum–Welch algorithm
* Viterbi algorithm
* BCJR algorithm


References

* Lawrence R. Rabiner, "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition". ''Proceedings of the IEEE'', 77 (2), pp. 257–286, February 1989. doi:10.1109/5.18626
* Stuart Russell and Peter Norvig, ''Artificial Intelligence: A Modern Approach'', 3rd edition, Prentice Hall, 2010.


External links


* An interactive spreadsheet for teaching the forward–backward algorithm (spreadsheet and article with step-by-step walk-through)
* Collection of AI algorithms implemented in Java (including HMM and the forward–backward algorithm): http://code.google.com/p/aima-java/