A hidden Markov model (HMM) is a Markov model in which the observations are dependent on a latent (or ''hidden'') Markov process (referred to as $X$). An HMM requires that there be an observable process $Y$ whose outcomes depend on the outcomes of $X$ in a known way. Since $X$ cannot be observed directly, the goal is to learn about the state of $X$ by observing $Y$. By definition of being a Markov model, an HMM has an additional requirement that the outcome of $Y$ at time $t = t_0$ must be "influenced" exclusively by the outcome of $X$ at $t = t_0$, and that the outcomes of $X$ and $Y$ at $t < t_0$ must be conditionally independent of $Y$ at $t = t_0$ given $X$ at time $t = t_0$. Estimation of the parameters in an HMM can be performed using maximum likelihood estimation. For linear chain HMMs, the Baum–Welch algorithm can be used to estimate parameters.
Hidden Markov models are known for their applications to thermodynamics, statistical mechanics, physics, chemistry, economics, finance, signal processing, information theory, and pattern recognition, such as speech, handwriting, gesture recognition, part-of-speech tagging, musical score following, partial discharges, and bioinformatics.
Definition
Let $X_n$ and $Y_n$ be discrete-time stochastic processes and $n \geq 1$. The pair $(X_n, Y_n)$ is a ''hidden Markov model'' if
* $X_n$ is a Markov process whose behavior is not directly observable ("hidden");
* $\operatorname{P}(Y_n \in A \mid X_1 = x_1, \ldots, X_n = x_n) = \operatorname{P}(Y_n \in A \mid X_n = x_n)$,
:for every $n \geq 1$, $x_1, \ldots, x_n$, and every Borel set $A$.
Let $X_t$ and $Y_t$ be continuous-time stochastic processes. The pair $(X_t, Y_t)$ is a ''hidden Markov model'' if
* $X_t$ is a Markov process whose behavior is not directly observable ("hidden");
* $\operatorname{P}(Y_{t_0} \in A \mid \{X_t \in B_t\}_{t \leq t_0}) = \operatorname{P}(Y_{t_0} \in A \mid X_{t_0} \in B_{t_0})$,
:for every $t_0$, every Borel set $A$, and every family of Borel sets $\{B_t\}_{t \leq t_0}$.
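The discrete-time definition corresponds to a simple generative procedure: sample the hidden chain from its transition matrix, then sample each observation from the emission distribution of the current hidden state. A minimal sketch in Python follows; the numeric parameters are illustrative placeholders, not values prescribed by the definition.

import numpy as np

# Illustrative parameters (assumed for this sketch): 2 hidden states, 3 observation symbols.
rng = np.random.default_rng(0)
start = np.array([0.6, 0.4])            # P(X_1)
trans = np.array([[0.7, 0.3],
                  [0.4, 0.6]])          # P(X_{n+1} | X_n), one row per current state
emit = np.array([[0.1, 0.4, 0.5],
                 [0.6, 0.3, 0.1]])      # P(Y_n | X_n), one row per hidden state

def sample_hmm(length):
    """Draw a (hidden states, observations) pair from the generative process."""
    hidden, observed = [], []
    x = rng.choice(2, p=start)
    for _ in range(length):
        hidden.append(x)
        observed.append(rng.choice(3, p=emit[x]))   # Y_n depends only on X_n
        x = rng.choice(2, p=trans[x])               # X_{n+1} depends only on X_n
    return hidden, observed

print(sample_hmm(10))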
Terminology
The states of the process $X_n$ (resp. $X_t$) are called ''hidden states'', and $\operatorname{P}(Y_n \in A \mid X_n = x_n)$ (resp. $\operatorname{P}(Y_t \in A \mid X_t \in B_t)$) is called the ''emission probability'' or ''output probability''.
Examples
Drawing balls from hidden urns

In its discrete form, a hidden Markov process can be visualized as a generalization of the
urn problem with replacement (where each item from the urn is returned to the original urn before the next step). Consider this example: in a room that is not visible to an observer there is a genie. The room contains urns X1, X2, X3, ... each of which contains a known mix of balls, with each ball having a unique label y1, y2, y3, ... . The genie chooses an urn in that room and randomly draws a ball from that urn. It then puts the ball onto a conveyor belt, where the observer can observe the sequence of the balls but not the sequence of urns from which they were drawn. The genie has some procedure to choose urns; the choice of the urn for the ''n''-th ball depends only upon a random number and the choice of the urn for the (''n'' − 1)-th ball. The choice of urn does not directly depend on the urns chosen before this single previous urn; therefore, this is called a Markov process. It can be described by the upper part of Figure 1.
The Markov process cannot be observed, only the sequence of labeled balls, thus this arrangement is called a ''hidden Markov process''. This is illustrated by the lower part of the diagram shown in Figure 1, where one can see that balls y1, y2, y3, y4 can be drawn at each state. Even if the observer knows the composition of the urns and has just observed a sequence of three balls, ''e.g.'' y1, y2 and y3 on the conveyor belt, the observer still cannot be ''sure'' which urn (''i.e.'', at which state) the genie has drawn the third ball from. However, the observer can work out other information, such as the likelihood that the third ball came from each of the urns.
Weather guessing game
Consider two friends, Alice and Bob, who live far apart from each other and who talk together daily over the telephone about what they did that day. Bob is only interested in three activities: walking in the park, shopping, and cleaning his apartment. The choice of what to do is determined exclusively by the weather on a given day. Alice has no definite information about the weather, but she knows general trends. Based on what Bob tells her he did each day, Alice tries to guess what the weather must have been like.
Alice believes that the weather operates as a discrete Markov chain. There are two states, "Rainy" and "Sunny", but she cannot observe them directly, that is, they are ''hidden'' from her. On each day, there is a certain chance that Bob will perform one of the following activities, depending on the weather: "walk", "shop", or "clean". Since Bob tells Alice about his activities, those are the ''observations''. The entire system is that of a hidden Markov model (HMM).
Alice knows the general weather trends in the area, and what Bob likes to do on average. In other words, the parameters of the HMM are known. They can be represented as follows in
Python:
states = ("Rainy", "Sunny")
observations = ("walk", "shop", "clean")
start_probability = {"Rainy": 0.6, "Sunny": 0.4}

transition_probability = {
    "Rainy": {"Rainy": 0.7, "Sunny": 0.3},
    "Sunny": {"Rainy": 0.4, "Sunny": 0.6},
}

emission_probability = {
    "Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
    "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1},
}
In this piece of code, start_probability represents Alice's belief about which state the HMM is in when Bob first calls her (all she knows is that it tends to be rainy on average). The particular probability distribution used here is not the equilibrium one, which is (given the transition probabilities) approximately {"Rainy": 0.57, "Sunny": 0.43}. The transition_probability represents the change of the weather in the underlying Markov chain. In this example, there is only a 30% chance that tomorrow will be sunny if today is rainy. The emission_probability represents how likely Bob is to perform a certain activity on each day. If it is rainy, there is a 50% chance that he is cleaning his apartment; if it is sunny, there is a 60% chance that he is outside for a walk.
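The equilibrium figure quoted above can be checked directly from the transition matrix; the following is a minimal sketch, not part of the original example:

import numpy as np

# Transition matrix in the order (Rainy, Sunny), matching transition_probability above.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# The stationary distribution pi satisfies pi = pi P and sums to 1.
eigenvalues, eigenvectors = np.linalg.eig(P.T)
pi = np.real(eigenvectors[:, np.argmax(np.real(eigenvalues))])
pi /= pi.sum()
print(dict(zip(("Rainy", "Sunny"), np.round(pi, 2))))   # approximately {'Rainy': 0.57, 'Sunny': 0.43}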

''A similar example is further elaborated in the
Viterbi algorithm page.''
Structural architecture
The diagram below shows the general architecture of an instantiated HMM. Each oval shape represents a random variable that can adopt any of a number of values. The random variable ''x''(''t'') is the hidden state at time ''t'' (with the model from the above diagram, ''x''(''t'') ∈ { x1, x2, x3 }). The random variable ''y''(''t'') is the observation at time ''t'' (with ''y''(''t'') ∈ { y1, y2, y3, y4 }). The arrows in the diagram (often called a trellis diagram) denote conditional dependencies.
From the diagram, it is clear that the conditional probability distribution of the hidden variable ''x''(''t'') at time ''t'', given the values of the hidden variable ''x'' at all times, depends ''only'' on the value of the hidden variable ''x''(''t'' − 1); the values at time ''t'' − 2 and before have no influence. This is called the Markov property. Similarly, the value of the observed variable ''y''(''t'') depends only on the value of the hidden variable ''x''(''t'') (both at time ''t'').
In the standard type of hidden Markov model considered here, the state space of the hidden variables is discrete, while the observations themselves can either be discrete (typically generated from a categorical distribution) or continuous (typically from a Gaussian distribution). The parameters of a hidden Markov model are of two types, ''transition probabilities'' and ''emission probabilities'' (also known as ''output probabilities''). The transition probabilities control the way the hidden state at time ''t'' is chosen given the hidden state at time ''t'' − 1.
The hidden state space is assumed to consist of one of $N$ possible values, modelled as a categorical distribution. (See the section below on extensions for other possibilities.) This means that for each of the $N$ possible states that a hidden variable at time ''t'' can be in, there is a transition probability from this state to each of the $N$ possible states of the hidden variable at time $t+1$, for a total of $N^2$ transition probabilities. The set of transition probabilities for transitions from any given state must sum to 1. Thus, the $N \times N$ matrix of transition probabilities is a Markov matrix. Because any transition probability can be determined once the others are known, there are a total of $N(N-1)$ transition parameters.
In addition, for each of the $N$ possible states, there is a set of emission probabilities governing the distribution of the observed variable at a particular time given the state of the hidden variable at that time. The size of this set depends on the nature of the observed variable. For example, if the observed variable is discrete with $M$ possible values, governed by a categorical distribution, there will be $M-1$ separate parameters, for a total of $N(M-1)$ emission parameters over all hidden states. On the other hand, if the observed variable is an $M$-dimensional vector distributed according to an arbitrary multivariate Gaussian distribution, there will be $M$ parameters controlling the means and $M(M+1)/2$ parameters controlling the covariance matrix, for a total of $N\left(M + \frac{M(M+1)}{2}\right) = \frac{NM(M+3)}{2} = O(NM^2)$ emission parameters. (In such a case, unless the value of $M$ is small, it may be more practical to restrict the nature of the covariances between individual elements of the observation vector, e.g. by assuming that the elements are independent of each other, or less restrictively, are independent of all but a fixed number of adjacent elements.)
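For instance, in the weather example above there are $N = 2$ hidden states and $M = 3$ discrete observation symbols, so the model has $N(N-1) = 2$ free transition parameters and $N(M-1) = 4$ free emission parameters, i.e. six free parameters in total.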
Inference
Several inference problems are associated with hidden Markov models, as outlined below.
Probability of an observed sequence
The task is to compute, given the parameters of the model, the probability of a particular output sequence. This requires summation over all possible state sequences:
The probability of observing a sequence
:$Y = y(0), y(1), \dots, y(L-1)$
of length ''L'' is given by
:$P(Y) = \sum_X P(Y \mid X)\, P(X),$
where the sum runs over all possible hidden-node sequences
:$X = x(0), x(1), \dots, x(L-1).$
Applying the principle of dynamic programming, this problem, too, can be handled efficiently using the forward algorithm.
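A minimal Python sketch of the forward algorithm, reusing the dictionary-style parameters of the weather example above (the function name and structure are illustrative, not taken from any particular library):

def forward(observation_seq, states, start_p, trans_p, emit_p):
    """Return P(Y) by summing the forward variables at the final step."""
    # alpha[s] = P(y(0..t), x(t) = s), updated in place as t advances
    alpha = {s: start_p[s] * emit_p[s][observation_seq[0]] for s in states}
    for obs in observation_seq[1:]:
        alpha = {
            s: emit_p[s][obs] * sum(alpha[r] * trans_p[r][s] for r in states)
            for s in states
        }
    return sum(alpha.values())

print(forward(("walk", "shop", "clean"), states,
              start_probability, transition_probability, emission_probability))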
Probability of the latent variables
A number of related tasks ask about the probability of one or more of the latent variables, given the model's parameters and a sequence of observations $y(1), \dots, y(t)$.
Filtering
The task is to compute, given the model's parameters and a sequence of observations, the distribution over hidden states of the last latent variable at the end of the sequence, i.e. to compute $P(x(t) \mid y(1), \dots, y(t))$. This task is used when the sequence of latent variables is thought of as the underlying states that a process moves through at a sequence of points in time, with corresponding observations at each point. Then, it is natural to ask about the state of the process at the end.
This problem can be handled efficiently using the forward algorithm. An example is when the algorithm is applied to a hidden Markov network to determine $\mathrm{P}(h_t \mid v_{1:t})$.
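The filtered distribution is obtained from the same forward pass by normalizing the final forward variables. A small sketch building on the forward recursion above (hypothetical helper, not a library function):

def filter_last_state(observation_seq, states, start_p, trans_p, emit_p):
    """Return P(x(t) | y(1..t)) as a dictionary over hidden states."""
    alpha = {s: start_p[s] * emit_p[s][observation_seq[0]] for s in states}
    for obs in observation_seq[1:]:
        alpha = {
            s: emit_p[s][obs] * sum(alpha[r] * trans_p[r][s] for r in states)
            for s in states
        }
    total = sum(alpha.values())        # this equals P(Y), as in the previous sketch
    return {s: alpha[s] / total for s in states}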
Smoothing
This is similar to filtering but asks about the distribution of a latent variable somewhere in the middle of a sequence, i.e. to compute $P(x(k) \mid y(1), \dots, y(t))$ for some $k < t$. From the perspective described above, this can be thought of as the probability distribution over hidden states for a point in time ''k'' in the past, relative to time ''t''.
The
forward-backward algorithm is a good method for computing the smoothed values for all hidden state variables.
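A compact sketch of the forward-backward smoothing computation, again in terms of the weather parameters above (illustrative code, not a library implementation):

def smooth(observation_seq, states, start_p, trans_p, emit_p):
    """Return, for each time k, the smoothed distribution P(x(k) | y(1..t))."""
    T = len(observation_seq)
    # Forward pass: alpha[k][s] = P(y(1..k), x(k) = s)
    alpha = [None] * T
    alpha[0] = {s: start_p[s] * emit_p[s][observation_seq[0]] for s in states}
    for k in range(1, T):
        alpha[k] = {
            s: emit_p[s][observation_seq[k]]
               * sum(alpha[k - 1][r] * trans_p[r][s] for r in states)
            for s in states
        }
    # Backward pass: beta[k][s] = P(y(k+1..t) | x(k) = s)
    beta = [None] * T
    beta[T - 1] = {s: 1.0 for s in states}
    for k in range(T - 2, -1, -1):
        beta[k] = {
            s: sum(trans_p[s][r] * emit_p[r][observation_seq[k + 1]] * beta[k + 1][r]
                   for r in states)
            for s in states
        }
    # Combine and normalize: P(x(k) | y(1..t)) is proportional to alpha[k][s] * beta[k][s]
    posteriors = []
    for k in range(T):
        norm = sum(alpha[k][s] * beta[k][s] for s in states)
        posteriors.append({s: alpha[k][s] * beta[k][s] / norm for s in states})
    return posteriors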
Most likely explanation
The task, unlike the previous two, asks about the joint probability of the ''entire'' sequence of hidden states that generated a particular sequence of observations (see illustration on the right). This task is generally applicable when HMMs are applied to different sorts of problems from those for which the tasks of filtering and smoothing are applicable. An example is part-of-speech tagging, where the hidden states represent the underlying parts of speech corresponding to an observed sequence of words. In this case, what is of interest is the entire sequence of parts of speech, rather than simply the part of speech for a single word, as filtering or smoothing would compute.
This task requires finding a maximum over all possible state sequences, and can be solved efficiently by the
Viterbi algorithm.
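A minimal Viterbi decoder for the weather example above, returning the most probable hidden-state sequence together with its joint probability (illustrative sketch):

def viterbi(observation_seq, states, start_p, trans_p, emit_p):
    """Return the most probable sequence of hidden states and its joint probability."""
    # best[s] = probability of the best path ending in state s; path[s] = that path
    best = {s: start_p[s] * emit_p[s][observation_seq[0]] for s in states}
    path = {s: [s] for s in states}
    for obs in observation_seq[1:]:
        new_best, new_path = {}, {}
        for s in states:
            prob, prev = max((best[r] * trans_p[r][s] * emit_p[s][obs], r) for r in states)
            new_best[s], new_path[s] = prob, path[prev] + [s]
        best, path = new_best, new_path
    final_prob, final_state = max((best[s], s) for s in states)
    return path[final_state], final_prob

print(viterbi(("walk", "shop", "clean"), states,
              start_probability, transition_probability, emission_probability))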
Statistical significance
For some of the above problems, it may also be interesting to ask about
statistical significance. What is the probability that a sequence drawn from some
null distribution will have an HMM probability (in the case of the forward algorithm) or a maximum state sequence probability (in the case of the Viterbi algorithm) at least as large as that of a particular output sequence? When an HMM is used to evaluate the relevance of a hypothesis for a particular output sequence, the statistical significance indicates the
false positive rate associated with failing to reject the hypothesis for the output sequence.
Learning
The parameter learning task in HMMs is to find, given an output sequence or a set of such sequences, the best set of state transition and emission probabilities. The task is usually to derive the
maximum likelihood estimate of the parameters of the HMM given the set of output sequences. No tractable algorithm is known for solving this problem exactly, but a local maximum likelihood can be derived efficiently using the
Baum–Welch algorithm or the Baldi–Chauvin algorithm. The Baum–Welch algorithm is a special case of the
expectation-maximization algorithm.
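A self-contained sketch of one way to implement Baum–Welch for a single discrete observation sequence with NumPy; it omits the usual numerical scaling, so it is only suitable for short sequences, and the variable names are illustrative:

import numpy as np

def baum_welch(obs, n_states, n_symbols, n_iter=100, seed=0):
    """Locally maximize the likelihood of one discrete observation sequence via EM."""
    rng = np.random.default_rng(seed)
    obs = np.asarray(obs)
    T = len(obs)
    # Random initialization of the start, transition and emission probabilities.
    pi = rng.dirichlet(np.ones(n_states))
    A = rng.dirichlet(np.ones(n_states), size=n_states)
    B = rng.dirichlet(np.ones(n_symbols), size=n_states)
    for _ in range(n_iter):
        # E-step: forward and backward variables (unscaled).
        alpha = np.zeros((T, n_states))
        beta = np.zeros((T, n_states))
        alpha[0] = pi * B[:, obs[0]]
        for t in range(1, T):
            alpha[t] = B[:, obs[t]] * (alpha[t - 1] @ A)
        beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        likelihood = alpha[-1].sum()
        # Expected state occupancies (gamma) and expected transitions (xi).
        gamma = alpha * beta / likelihood
        xi = (alpha[:-1, :, None] * A[None, :, :]
              * (B[:, obs[1:]].T * beta[1:])[:, None, :]) / likelihood
        # M-step: re-estimate the parameters from the expected counts.
        pi = gamma[0]
        A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
        for k in range(n_symbols):
            B[:, k] = gamma[obs == k].sum(axis=0)
        B /= gamma.sum(axis=0)[:, None]
    return pi, A, B

# Example use with observations encoded as integers, e.g. "walk"=0, "shop"=1, "clean"=2.
print(baum_welch([0, 1, 2, 0, 2, 2, 1, 0], n_states=2, n_symbols=3))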
If the HMMs are used for time series prediction, more sophisticated Bayesian inference methods, like Markov chain Monte Carlo (MCMC) sampling, have proven to be favorable over finding a single maximum likelihood model, both in terms of accuracy and stability. Since MCMC imposes a significant computational burden, in cases where computational scalability is also of interest, one may alternatively resort to variational approximations to Bayesian inference. Indeed, approximate variational inference offers computational efficiency comparable to expectation-maximization, while yielding an accuracy profile only slightly inferior to exact MCMC-type Bayesian inference.
Applications

HMMs can be applied in many fields where the goal is to recover a data sequence that is not immediately observable (but other data that depend on the sequence are). Applications include:
* Computational finance
* Single-molecule kinetic analysis
* Neuroscience
* Cryptanalysis
* Speech recognition, including Siri
* Speech synthesis
* Part-of-speech tagging
* Document separation in scanning solutions
* Machine translation
* Partial discharge
* Gene prediction
* Handwriting recognition
* Alignment of bio-sequences
* Time series analysis
* Activity recognition
* Protein folding
* Sequence classification
* Metamorphic virus detection
* Sequence motif discovery (DNA and proteins)
* DNA hybridization kinetics
* Chromatin state discovery
* Transportation forecasting
* Solar irradiance variability
History
Hidden Markov models were described in a series of statistical papers by Leonard E. Baum and other authors in the second half of the 1960s. One of the first applications of HMMs was speech recognition, starting in the mid-1970s. From the linguistics point of view, hidden Markov models are equivalent to stochastic regular grammars.
In the second half of the 1980s, HMMs began to be applied to the analysis of biological sequences, in particular DNA. Since then, they have become ubiquitous in the field of bioinformatics.
Extensions
General state spaces
In the hidden Markov models considered above, the state space of the hidden variables is discrete, while the observations themselves can either be discrete (typically generated from a categorical distribution) or continuous (typically from a Gaussian distribution). Hidden Markov models can also be generalized to allow continuous state spaces. Examples of such models are those where the Markov process over hidden variables is a linear dynamical system, with a linear relationship among related variables and where all hidden and observed variables follow a Gaussian distribution. In simple cases, such as the linear dynamical system just mentioned, exact inference is tractable (in this case, using the Kalman filter); however, in general, exact inference in HMMs with continuous latent variables is infeasible, and approximate methods must be used, such as the extended Kalman filter or the particle filter.
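As an illustration of the continuous-state case, the following is a minimal one-dimensional Kalman filter, the linear-Gaussian analogue of the forward algorithm; all numeric values and parameter names are illustrative assumptions:

def kalman_filter_1d(observation_seq, a, c, q, r, mean0, var0):
    """Exact filtering for x(t) = a*x(t-1) + process noise, y(t) = c*x(t) + observation noise."""
    mean, var = mean0, var0
    filtered = []
    for y in observation_seq:
        # Predict step: propagate the Gaussian belief through the linear dynamics.
        mean, var = a * mean, a * a * var + q
        # Update step: condition on the new observation y.
        gain = var * c / (c * c * var + r)
        mean = mean + gain * (y - c * mean)
        var = (1 - gain * c) * var
        filtered.append((mean, var))
    return filtered

print(kalman_filter_1d([1.1, 0.9, 1.3], a=0.9, c=1.0, q=0.1, r=0.2, mean0=0.0, var0=1.0))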
Nowadays, inference in hidden Markov models is also performed in nonparametric settings, where the dependency structure enables identifiability of the model, while the limits of learnability are still under exploration.
Bayesian modeling of the transition probabilities
Hidden Markov models are generative models, in which the joint distribution of observations and hidden states, or equivalently both the prior distribution of hidden states (the ''transition probabilities'') and the conditional distribution of observations given states (the ''emission probabilities''), is modeled. The above algorithms implicitly assume a uniform prior distribution over the transition probabilities. However, it is also possible to create hidden Markov models with other types of prior distributions. An obvious candidate, given the categorical distribution of the transition probabilities, is the Dirichlet distribution, which is the conjugate prior distribution of the categorical distribution. Typically, a symmetric Dirichlet distribution is chosen, reflecting ignorance about which states are inherently more likely than others. The single parameter of this distribution (termed the ''concentration parameter'') controls the relative density or sparseness of the resulting transition matrix. A choice of 1 yields a uniform distribution. Values greater than 1 produce a dense matrix, in which the transition probabilities between pairs of states are likely to be nearly equal. Values less than 1 result in a sparse matrix in which, for each given source state, only a small number of destination states have non-negligible transition probabilities. It is also possible to use a two-level prior Dirichlet distribution, in which one Dirichlet distribution (the upper distribution) governs the parameters of another Dirichlet distribution (the lower distribution), which in turn governs the transition probabilities. The upper distribution governs the overall distribution of states, determining how likely each state is to occur; its concentration parameter determines the density or sparseness of states. Such a two-level prior distribution, where both concentration parameters are set to produce sparse distributions, might be useful for example in unsupervised part-of-speech tagging, where some parts of speech occur much more commonly than others; learning algorithms that assume a uniform prior distribution generally perform poorly on this task. The parameters of models of this sort, with non-uniform prior distributions, can be learned using Gibbs sampling or extended versions of the expectation-maximization algorithm.
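The effect of the concentration parameter can be seen by sampling transition matrices row by row from a symmetric Dirichlet distribution; this is a small illustrative sketch, not part of any particular learning algorithm:

import numpy as np

rng = np.random.default_rng(1)
n_states = 5

for concentration in (5.0, 1.0, 0.1):
    # Each row of the transition matrix is drawn from a symmetric Dirichlet prior.
    matrix = rng.dirichlet(np.full(n_states, concentration), size=n_states)
    print(f"concentration={concentration}")
    print(matrix.round(2))   # large values give nearly equal rows; small values give sparse rows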
An extension of the previously described hidden Markov models with
Dirichlet priors uses a
Dirichlet process in place of a Dirichlet distribution. This type of model allows for an unknown and potentially infinite number of states. It is common to use a two-level Dirichlet process, similar to the previously described model with two levels of Dirichlet distributions. Such a model is called a ''hierarchical Dirichlet process hidden Markov model'', or ''HDP-HMM'' for short. It was originally described under the name "Infinite Hidden Markov Model" and was further formalized in "Hierarchical Dirichlet Processes".
Discriminative approach
A different type of extension uses a discriminative model in place of the generative model of standard HMMs. This type of model directly models the conditional distribution of the hidden states given the observations, rather than modeling the joint distribution. An example of this model is the so-called ''maximum entropy Markov model'' (MEMM), which models the conditional distribution of the states using logistic regression (also known as a "maximum entropy model"). The advantage of this type of model is that arbitrary features (i.e. functions) of the observations can be modeled, allowing domain-specific knowledge of the problem at hand to be injected into the model. Models of this sort are not limited to modeling direct dependencies between a hidden state and its associated observation; rather, features of nearby observations, of combinations of the associated observation and nearby observations, or in fact of arbitrary observations at any distance from a given hidden state can be included in the process used to determine the value of a hidden state. Furthermore, there is no need for these features to be statistically independent of each other, as would be the case if such features were used in a generative model. Finally, arbitrary features over pairs of adjacent hidden states can be used rather than simple transition probabilities. The disadvantages of such models are: (1) the types of prior distributions that can be placed on hidden states are severely limited; (2) it is not possible to predict the probability of seeing an arbitrary observation. This second limitation is often not an issue in practice, since many common usages of HMMs do not require such predictive probabilities.
A variant of the previously described discriminative model is the linear-chain conditional random field. This uses an undirected graphical model (also known as a Markov random field) rather than the directed graphical models of MEMMs and similar models. The advantage of this type of model is that it does not suffer from the so-called ''label bias'' problem of MEMMs, and thus may make more accurate predictions. The disadvantage is that training can be slower than for MEMMs.
Other extensions
Yet another variant is the ''factorial hidden Markov model'', which allows for a single observation to be conditioned on the corresponding hidden variables of a set of $K$ independent Markov chains, rather than a single Markov chain. It is equivalent to a single HMM, with $N^K$ states (assuming there are $N$ states for each chain), and therefore, learning in such a model is difficult: for a sequence of length $T$, a straightforward Viterbi algorithm has complexity $O(N^{2K} \, T)$. To find an exact solution, a junction tree algorithm could be used, but it results in an $O(N^{K+1} \, K \, T)$ complexity. In practice, approximate techniques, such as variational approaches, could be used.
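For example, taking $K = 3$ chains of $N = 10$ states each, the equivalent single HMM has $N^K = 1000$ states, and the straightforward Viterbi algorithm would require on the order of $N^{2K} T = 10^6\,T$ operations, which motivates the approximate techniques mentioned above.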
All of the above models can be extended to allow for more distant dependencies among hidden states, e.g. allowing for a given state to be dependent on the previous two or three states rather than a single previous state; i.e. the transition probabilities are extended to encompass sets of three or four adjacent states (or in general $K$ adjacent states). The disadvantage of such models is that dynamic-programming algorithms for training them have an $O(N^K \, T)$ running time, for $K$ adjacent states and $T$ total observations (i.e. a length-$T$ Markov chain). This extension has been widely used in bioinformatics, in the modeling of DNA sequences.
Another recent extension is the ''triplet Markov model'', in which an auxiliary underlying process is added to model some data specificities. Many variants of this model have been proposed. One should also mention the interesting link that has been established between the ''theory of evidence'' and triplet Markov models, which allows data to be fused in a Markovian context (M. Y. Boudaren, E. Monfrini, W. Pieczynski, and A. Aissani, "Dempster–Shafer fusion of multisensor signals in nonstationary Markovian context", EURASIP Journal on Advances in Signal Processing, No. 134, 2012) and nonstationary data to be modeled (P. Lanchantin and W. Pieczynski, "Unsupervised restoration of hidden nonstationary Markov chains using evidential priors", IEEE Transactions on Signal Processing, Vol. 53, No. 8, pp. 3091–3098, 2005; M. Y. Boudaren, E. Monfrini, and W. Pieczynski, "Unsupervised segmentation of random discrete data hidden with switching noise distributions", IEEE Signal Processing Letters, Vol. 19, No. 10, pp. 619–622, October 2012). Alternative multi-stream data fusion strategies have also been proposed in recent literature.
Finally, a different rationale towards addressing the problem of modeling nonstationary data by means of hidden Markov models was suggested in 2012.
It consists in employing a small recurrent neural network (RNN), specifically a reservoir network, to capture the evolution of the temporal dynamics in the observed data. This information, encoded in the form of a high-dimensional vector, is used as a conditioning variable of the HMM state transition probabilities. Under such a setup, one eventually obtains a nonstationary HMM, whose transition probabilities evolve over time in a manner that is inferred from the data, in contrast to some unrealistic ad-hoc model of temporal evolution.
In 2023, two algorithms were introduced that compute the posterior distribution of an HMM without explicitly modeling the joint distribution, using only the conditional distributions. Unlike traditional methods such as the forward-backward and Viterbi algorithms, which require knowledge of the joint law of the HMM and can be computationally intensive to learn, the discriminative forward-backward and discriminative Viterbi algorithms circumvent the need for the observation's law. This allows the HMM to be applied as a discriminative model.
A model suitable in the context of longitudinal data is the latent Markov model. The basic version of this model has been extended to include individual covariates, random effects, and more complex data structures such as multilevel data. A complete overview of latent Markov models, with special attention to the model assumptions and to their practical use, is available in the literature.
Measure theory

Given a Markov transition matrix and an invariant distribution on the states, a probability measure can be imposed on the set of subshifts. For example, consider the Markov chain given on the left on the states $A, B_1, B_2$, with its invariant distribution. By ignoring the distinction between $B_1$ and $B_2$, this space of subshifts on $A, B_1, B_2$ is projected into another space of subshifts on $A, B$, and this projection also projects the probability measure down to a probability measure on the subshifts on $A, B$.
The curious thing is that the probability measure on the subshifts on $A, B$ is not created by a Markov chain on $A, B$, not even one of multiple orders. Intuitively, this is because if one observes a long sequence of $B$'s, one becomes increasingly sure about which of the hidden states $B_1, B_2$ the chain is in, meaning that the observable part of the system can be affected by something infinitely in the past.
(Karl Petersen, ''Sofic Measures: Characterizations of Hidden Markov Chains by Linear Algebra, Formal Languages, and Symbolic Dynamics'', Mathematics 210, Spring 2006, University of North Carolina at Chapel Hill.)
Conversely, there exists a space of subshifts on 6 symbols, projected to subshifts on 2 symbols, such that any Markov measure on the smaller subshift has a preimage measure that is not Markov of any order (example 2.6).
See also
* Andrey Markov
* Baum–Welch algorithm
* Bayesian inference
* Bayesian programming
* Richard James Boys
* Conditional random field
* Estimation theory
* HH-suite (HHpred, HHsearch) free server and software for protein sequence searching
* HMMER, a free hidden Markov model program for protein sequence analysis
* Hidden Bernoulli model
* Hidden semi-Markov model
* Hierarchical hidden Markov model
* Layered hidden Markov model
* Sequential dynamical system
* Stochastic context-free grammar
* Time series analysis
* Variable-order Markov model
* Viterbi algorithm
References
External links
Concepts
* ''A Revealing Introduction to Hidden Markov Models'', by Mark Stamp, San Jose State University
* Fitting HMMs with expectation-maximization – complete derivation ''(University of Leeds)''
* A tutorial exposition using basic mathematics
* A tutorial by Narada Warakagoda
* Hidden Markov Models: Fundamentals and Application, Part 1 and Part 2 ''(by V. Petrushin)''
* Lecture on a spreadsheet by Jason Eisner, with video and an interactive spreadsheet