Expectation–maximization algorithm

In statistics, an expectation–maximization (EM) algorithm is an iterative method to find (local) maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables. The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of the log-likelihood evaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the E step. These parameter estimates are then used to determine the distribution of the latent variables in the next E step.


History

The EM algorithm was explained and given its name in a classic 1977 paper by Arthur Dempster, Nan Laird, and Donald Rubin. They pointed out that the method had been "proposed many times in special circumstances" by earlier authors. One of the earliest is the gene-counting method for estimating allele frequencies by Cedric Smith. Another was proposed by H.O. Hartley in 1958, and by Hartley and Hocking in 1977, from which many of the ideas in the Dempster–Laird–Rubin paper originated. Another was given by S.K. Ng, Thriyambakam Krishnan and G.J. McLachlan in 1977. Hartley's ideas can be broadened to any grouped discrete distribution. A very detailed treatment of the EM method for exponential families was published by Rolf Sundberg in his thesis and several papers, following his collaboration with Per Martin-Löf and Anders Martin-Löf. The related references include:

* Rolf Sundberg. 1971. ''Maximum likelihood theory and applications for distributions generated when observing a function of an exponential family variable''. Dissertation, Institute for Mathematical Statistics, Stockholm University.
* Anders Martin-Löf. 1963. "Utvärdering av livslängder i subnanosekundsområdet" ("Evaluation of sub-nanosecond lifetimes"). ("Sundberg formula")
* Per Martin-Löf. 1966. ''Statistics from the point of view of statistical mechanics''. Lecture notes, Mathematical Institute, Aarhus University. ("Sundberg formula" credited to Anders Martin-Löf.)
* Per Martin-Löf. 1970. ''Statistiska modeller (Statistical models): Anteckningar från seminarier läsåret 1969–1970 (Notes from seminars in the academic year 1969–1970), with the assistance of Rolf Sundberg.'' Stockholm University. ("Sundberg formula")
* Martin-Löf, P. "The notion of redundancy and its use as a quantitative measure of the deviation between a statistical hypothesis and a set of observational data." With a discussion by F. Abildgård, A. P. Dempster, D. Basu, D. R. Cox, A. W. F. Edwards, D. A. Sprott, G. A. Barnard, O. Barndorff-Nielsen, J. D. Kalbfleisch and G. Rasch, and a reply by the author. ''Proceedings of Conference on Foundational Questions in Statistical Inference'' (Aarhus, 1973), pp. 1–42. Memoirs, No. 1, Dept. Theoret. Statist., Inst. Math., Univ. Aarhus, Aarhus, 1974.

The Dempster–Laird–Rubin paper in 1977 generalized the method and sketched a convergence analysis for a wider class of problems, establishing the EM method as an important tool of statistical analysis. The convergence analysis of the Dempster–Laird–Rubin algorithm was flawed; a correct convergence analysis was published by C. F. Jeff Wu in 1983. Wu's proof established the EM method's convergence outside of the exponential family, as claimed by Dempster–Laird–Rubin.


Introduction

The EM algorithm is used to find (local) maximum likelihood parameters of a statistical model in cases where the equations cannot be solved directly. Typically these models involve latent variables in addition to unknown parameters and known data observations. That is, either missing values exist among the data, or the model can be formulated more simply by assuming the existence of further unobserved data points. For example, a mixture model can be described more simply by assuming that each observed data point has a corresponding unobserved data point, or latent variable, specifying the mixture component to which each data point belongs.

Finding a maximum likelihood solution typically requires taking the derivatives of the likelihood function with respect to all the unknown values, the parameters and the latent variables, and simultaneously solving the resulting equations. In statistical models with latent variables, this is usually impossible. Instead, the result is typically a set of interlocking equations in which the solution to the parameters requires the values of the latent variables and vice versa, but substituting one set of equations into the other produces an unsolvable equation.

The EM algorithm proceeds from the observation that there is a way to solve these two sets of equations numerically. One can simply pick arbitrary values for one of the two sets of unknowns, use them to estimate the second set, then use these new values to find a better estimate of the first set, and then keep alternating between the two until the resulting values both converge to fixed points. It is not obvious that this will work, but it can be proven in this context. Additionally, it can be proven that the derivative of the likelihood is (arbitrarily close to) zero at that point, which in turn means that the point is either a local maximum or a saddle point. In general, multiple maxima may occur, with no guarantee that the global maximum will be found. Some likelihoods also have singularities in them, i.e., nonsensical maxima. For example, one of the ''solutions'' that may be found by EM in a mixture model involves setting one of the components to have zero variance and the mean parameter for the same component to be equal to one of the data points.
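The alternating scheme described above can be illustrated with a short, self-contained sketch. This is a minimal illustration only, not the method of any particular reference: it uses a hypothetical two-cluster setting with "hard" assignments (each point assigned to its nearest mean) to show the back-and-forth between estimating the latent assignments from the parameters and the parameters from the assignments.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
# Toy data: two 1-D clusters (hypothetical example, not from any reference).
x = np.concatenate([rng.normal(-2.0, 1.0, 100), rng.normal(3.0, 1.0, 100)])

# Start from arbitrary parameter values (here, two means).
means = np.array([-1.0, 1.0])

for _ in range(50):
    # Estimate the latent assignments given the current means
    # (each point is assigned to its nearest mean).
    z = np.argmin(np.abs(x[:, None] - means[None, :]), axis=1)
    # Re-estimate the means given the assignments (group and average).
    new_means = np.array([x[z == k].mean() for k in range(2)])
    if np.allclose(new_means, means):
        break  # the two sets of unknowns have stopped changing
    means = new_means

print(means)  # close to the generating means -2 and 3
</syntaxhighlight>

This hard-assignment variant is essentially the ''k''-means procedure; the EM algorithm proper replaces the hard assignments with the conditional distribution of the latent variables given the current parameters, as described in the following sections.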


Description

Given the statistical model which generates a set \mathbf{X} of observed data, a set of unobserved latent data or missing values \mathbf{Z}, and a vector of unknown parameters \boldsymbol\theta, along with a likelihood function L(\boldsymbol\theta; \mathbf{X}, \mathbf{Z}) = p(\mathbf{X}, \mathbf{Z}\mid\boldsymbol\theta), the maximum likelihood estimate (MLE) of the unknown parameters is determined by maximizing the marginal likelihood of the observed data

:L(\boldsymbol\theta; \mathbf{X}) = p(\mathbf{X}\mid\boldsymbol\theta) = \int p(\mathbf{X},\mathbf{Z} \mid \boldsymbol\theta) \, d\mathbf{Z} = \int p(\mathbf{X} \mid \mathbf{Z}, \boldsymbol\theta) p(\mathbf{Z} \mid \boldsymbol\theta) \, d\mathbf{Z}.

However, this quantity is often intractable since \mathbf{Z} is unobserved and the distribution of \mathbf{Z} is unknown before attaining \boldsymbol\theta.

The EM algorithm seeks to find the MLE of the marginal likelihood by iteratively applying these two steps:

:''Expectation step (E step)'': Define Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) as the expected value of the log likelihood function of \boldsymbol\theta, with respect to the current conditional distribution of \mathbf{Z} given \mathbf{X} and the current estimates of the parameters \boldsymbol\theta^{(t)}:
::Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) = \operatorname{E}_{\mathbf{Z}\sim p(\cdot\mid\mathbf{X},\boldsymbol\theta^{(t)})}\left[ \log L (\boldsymbol\theta; \mathbf{X},\mathbf{Z}) \right]
:''Maximization step (M step)'': Find the parameters that maximize this quantity:
::\boldsymbol\theta^{(t+1)} = \underset{\boldsymbol\theta}{\operatorname{arg\,max}} \ Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})

The typical models to which EM is applied use \mathbf{Z} as a latent variable indicating membership in one of a set of groups:
#The observed data points \mathbf{X} may be discrete (taking values in a finite or countably infinite set) or continuous (taking values in an uncountably infinite set). Associated with each data point may be a vector of observations.
#The missing values (aka latent variables) \mathbf{Z} are discrete, drawn from a fixed number of values, and with one latent variable per observed unit.
#The parameters are continuous, and are of two kinds: parameters that are associated with all data points, and those associated with a specific value of a latent variable (i.e., associated with all data points whose corresponding latent variable has that value).
However, it is possible to apply EM to other sorts of models.

The motivation is as follows. If the value of the parameters \boldsymbol\theta is known, usually the value of the latent variables \mathbf{Z} can be found by maximizing the log-likelihood over all possible values of \mathbf{Z}, either simply by iterating over \mathbf{Z} or through an algorithm such as the Viterbi algorithm for hidden Markov models. Conversely, if we know the value of the latent variables \mathbf{Z}, we can find an estimate of the parameters \boldsymbol\theta fairly easily, typically by simply grouping the observed data points according to the value of the associated latent variable and averaging the values, or some function of the values, of the points in each group. This suggests an iterative algorithm, in the case where both \boldsymbol\theta and \mathbf{Z} are unknown:
#First, initialize the parameters \boldsymbol\theta to some random values.
#Compute the probability of each possible value of \mathbf{Z}, given \boldsymbol\theta.
#Then, use the just-computed values of \mathbf{Z} to compute a better estimate for the parameters \boldsymbol\theta.
#Iterate steps 2 and 3 until convergence.
The algorithm as just described monotonically increases the observed-data log-likelihood; equivalently, it monotonically approaches a local minimum of the corresponding cost function (the negative log-likelihood).
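To make the two steps concrete, the following is a minimal sketch of the generic iteration in Python. The functions ''e_step'', ''m_step'' and ''log_likelihood'' are hypothetical, model-specific placeholders (they are not part of any library): the E step computes the quantities needed to form Q, the M step maximizes it, and the log-likelihood is used only to monitor progress.

<syntaxhighlight lang="python">
import numpy as np

def em(x, theta0, e_step, m_step, log_likelihood, tol=1e-6, max_iter=1000):
    """Generic EM skeleton (illustrative only).

    e_step(x, theta)         -> expected sufficient statistics / responsibilities
    m_step(x, stats)         -> new parameters maximizing the expected log-likelihood Q
    log_likelihood(x, theta) -> observed-data log-likelihood, used to monitor progress
    """
    theta = theta0
    prev_ll = -np.inf
    for _ in range(max_iter):
        stats = e_step(x, theta)     # E step: use the current parameter estimate
        theta = m_step(x, stats)     # M step: maximize the expected log-likelihood
        ll = log_likelihood(x, theta)
        if ll - prev_ll < tol:       # the likelihood is non-decreasing, so this terminates
            break
        prev_ll = ll
    return theta
</syntaxhighlight>

A concrete choice of ''e_step'' and ''m_step'' for a two-component Gaussian mixture is sketched in the Examples section below.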


Properties

Although an EM iteration does increase the observed data (i.e., marginal) likelihood function, no guarantee exists that the sequence converges to a maximum likelihood estimator. For multimodal distributions, this means that an EM algorithm may converge to a local maximum of the observed data likelihood function, depending on starting values. A variety of heuristic or metaheuristic approaches exist to escape a local maximum, such as random-restart hill climbing (starting with several different random initial estimates \boldsymbol\theta^{(0)}), or applying simulated annealing methods.

EM is especially useful when the likelihood is an exponential family: the E step becomes the sum of expectations of sufficient statistics, and the M step involves maximizing a linear function. In such a case, it is usually possible to derive closed-form expressions for the updates in each step, using the Sundberg formula (published by Rolf Sundberg using unpublished results of Per Martin-Löf and Anders Martin-Löf).

The EM method was modified to compute maximum a posteriori (MAP) estimates for Bayesian inference in the original paper by Dempster, Laird, and Rubin.

Other methods exist to find maximum likelihood estimates, such as gradient descent, conjugate gradient, or variants of the Gauss–Newton algorithm. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function.
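Random restarts are one simple way to reduce the risk of ending in a poor local maximum. The sketch below is an illustrative assumption, not a prescribed procedure: it reuses hypothetical helpers in the spirit of the generic skeleton above and simply keeps the run with the highest final observed-data log-likelihood.

<syntaxhighlight lang="python">
import numpy as np

def em_with_restarts(x, init_theta, run_em, log_likelihood, n_restarts=10, seed=0):
    """Run EM from several random initializations and keep the best fit.

    init_theta(rng)          -> a random initial parameter vector (hypothetical helper)
    run_em(x, theta0)        -> parameters returned by a single EM run started at theta0
    log_likelihood(x, theta) -> observed-data log-likelihood of the fitted parameters
    """
    rng = np.random.default_rng(seed)
    best_theta, best_ll = None, -np.inf
    for _ in range(n_restarts):
        theta = run_em(x, init_theta(rng))
        ll = log_likelihood(x, theta)
        if ll > best_ll:                 # keep the restart with the highest likelihood
            best_theta, best_ll = theta, ll
    return best_theta
</syntaxhighlight>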


Proof of correctness

Expectation–maximization works to improve Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) rather than directly improving \log p(\mathbf{X}\mid\boldsymbol\theta). Here it is shown that improvements to the former imply improvements to the latter.

For any \mathbf{Z} with non-zero probability p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta), we can write

: \log p(\mathbf{X}\mid\boldsymbol\theta) = \log p(\mathbf{X},\mathbf{Z}\mid\boldsymbol\theta) - \log p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta).

We take the expectation over possible values of the unknown data \mathbf{Z} under the current parameter estimate \boldsymbol\theta^{(t)} by multiplying both sides by p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}) and summing (or integrating) over \mathbf{Z}. The left-hand side is the expectation of a constant, so we get:

: \begin{align} \log p(\mathbf{X}\mid\boldsymbol\theta) & = \sum_{\mathbf{Z}} p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}) \log p(\mathbf{X},\mathbf{Z}\mid\boldsymbol\theta) - \sum_{\mathbf{Z}} p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}) \log p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta) \\ & = Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}), \end{align}

where H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) is defined by the negated sum it is replacing. This last equation holds for every value of \boldsymbol\theta including \boldsymbol\theta = \boldsymbol\theta^{(t)},

: \log p(\mathbf{X}\mid\boldsymbol\theta^{(t)}) = Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}),

and subtracting this last equation from the previous equation gives

: \log p(\mathbf{X}\mid\boldsymbol\theta) - \log p(\mathbf{X}\mid\boldsymbol\theta^{(t)}) = Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}).

However, Gibbs' inequality tells us that H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) \ge H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}), so we can conclude that

: \log p(\mathbf{X}\mid\boldsymbol\theta) - \log p(\mathbf{X}\mid\boldsymbol\theta^{(t)}) \ge Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}).

In words, choosing \boldsymbol\theta to improve Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) causes \log p(\mathbf{X}\mid\boldsymbol\theta) to improve at least as much.


As a maximization–maximization procedure

The EM algorithm can be viewed as two alternating maximization steps, that is, as an example of coordinate descent. Consider the function:

: F(q,\theta) := \operatorname{E}_q [\log L (\theta ; x,Z)] + H(q),

where ''q'' is an arbitrary probability distribution over the unobserved data ''z'' and ''H(q)'' is the entropy of the distribution ''q''. This function can be written as

: F(q,\theta) = -D_{\mathrm{KL}}\big(q \parallel p_{Z\mid X}(\cdot\mid x;\theta ) \big) + \log L(\theta;x),

where p_{Z\mid X}(\cdot\mid x;\theta ) is the conditional distribution of the unobserved data given the observed data x and D_{\mathrm{KL}} is the Kullback–Leibler divergence.

Then the steps in the EM algorithm may be viewed as:

:''Expectation step'': Choose q to maximize F:
:: q^{(t)} = \underset{q}{\operatorname{arg\,max}} \ F(q,\theta^{(t)})
:''Maximization step'': Choose \theta to maximize F:
:: \theta^{(t+1)} = \underset{\theta}{\operatorname{arg\,max}} \ F(q^{(t)},\theta)
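As an illustration of this view, the following sketch uses a toy model with a single binary latent variable (all numbers are made up for demonstration) to compute F(q, θ) = E_q[log p(x, z | θ)] + H(q) directly and check that it is maximized over q by the posterior p(z | x; θ), in agreement with the decomposition above.

<syntaxhighlight lang="python">
import numpy as np

# Toy joint model over a binary latent z and one fixed observation x (hypothetical numbers):
p_z = np.array([0.3, 0.7])                       # prior p(z = k)
p_x_given_z = np.array([0.8, 0.1])               # likelihood of the observed x under each z
log_joint = np.log(p_z) + np.log(p_x_given_z)    # log p(x, z = k)

def F(q):
    """Free energy F(q) = E_q[log p(x, z)] + H(q)."""
    q = np.asarray(q, dtype=float)
    return float(np.sum(q * log_joint) - np.sum(q * np.log(q)))

# The posterior p(z | x) maximizes F over q, and F at the posterior equals log p(x).
posterior = np.exp(log_joint) / np.exp(log_joint).sum()
log_evidence = np.log(np.exp(log_joint).sum())

print(F(posterior), log_evidence)      # these two agree
print(F([0.5, 0.5]) <= F(posterior))   # any other q gives a smaller F -> True
</syntaxhighlight>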


Applications

EM is frequently used for parameter estimation of mixed models, notably in quantitative genetics.

In psychometrics, EM is an important tool for estimating item parameters and latent abilities of item response theory models.

With the ability to deal with missing data and observe unidentified variables, EM is becoming a useful tool to price and manage risk of a portfolio.

The EM algorithm (and its faster variant ordered subset expectation maximization) is also widely used in medical image reconstruction, especially in positron emission tomography, single-photon emission computed tomography, and x-ray computed tomography. See below for other faster variants of EM.

In structural engineering, the Structural Identification using Expectation Maximization (STRIDE) algorithm is an output-only method for identifying natural vibration properties of a structural system using sensor data (see Operational Modal Analysis).

EM is also used for data clustering. In natural language processing, two prominent instances of the algorithm are the Baum–Welch algorithm for hidden Markov models, and the inside–outside algorithm for unsupervised induction of probabilistic context-free grammars.


Filtering and smoothing EM algorithms

A Kalman filter is typically used for on-line state estimation and a minimum-variance smoother may be employed for off-line or batch state estimation. However, these minimum-variance solutions require estimates of the state-space model parameters. EM algorithms can be used for solving joint state and parameter estimation problems.

Filtering and smoothing EM algorithms arise by repeating this two-step procedure:

;E-step
: Operate a Kalman filter or a minimum-variance smoother designed with current parameter estimates to obtain updated state estimates.
;M-step
: Use the filtered or smoothed state estimates within maximum-likelihood calculations to obtain updated parameter estimates.

Suppose that a Kalman filter or minimum-variance smoother operates on measurements of a single-input-single-output system that possess additive white noise. An updated measurement noise variance estimate can be obtained from the maximum likelihood calculation

: \widehat{\sigma}^2_v = \frac{1}{N} \sum_{k=1}^N (z_k - \widehat{x}_k)^2,

where \widehat{x}_k are scalar output estimates calculated by a filter or a smoother from N scalar measurements z_k. The above update can also be applied to updating a Poisson measurement noise intensity. Similarly, for a first-order auto-regressive process, an updated process noise variance estimate can be calculated by

: \widehat{\sigma}^2_w = \frac{1}{N} \sum_{k=1}^N (\widehat{x}_{k+1} - \widehat{F} \widehat{x}_k)^2,

where \widehat{x}_k and \widehat{x}_{k+1} are scalar state estimates calculated by a filter or a smoother. The updated model coefficient estimate is obtained via

: \widehat{F} = \frac{\sum_{k=1}^N \widehat{x}_{k+1} \widehat{x}_k}{\sum_{k=1}^N \widehat{x}_k^2}.

The convergence of parameter estimates such as those above is well studied.
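As an illustration of the M-step updates above, the following sketch computes the variance and coefficient updates from arrays of measurements and smoothed state estimates. It assumes such estimates are already available from a filtering or smoothing pass; the array names are hypothetical placeholders.

<syntaxhighlight lang="python">
import numpy as np

def m_step_updates(z, x_hat):
    """Parameter updates from measurements z[0..N-1] and state estimates x_hat[0..N].

    z     -- scalar measurements (hypothetical data from a SISO system)
    x_hat -- scalar state/output estimates from a Kalman filter or smoother, with one
             extra trailing element so that x_hat[k+1] is defined for k = N-1.
    """
    N = len(z)
    # Updated measurement-noise variance: mean squared output residual.
    sigma_v2 = np.mean((z - x_hat[:N]) ** 2)
    # Updated AR(1) model coefficient (least-squares / ML estimate from the states).
    F = np.sum(x_hat[1:N + 1] * x_hat[:N]) / np.sum(x_hat[:N] ** 2)
    # Updated process-noise variance using the new coefficient.
    sigma_w2 = np.mean((x_hat[1:N + 1] - F * x_hat[:N]) ** 2)
    return sigma_v2, sigma_w2, F
</syntaxhighlight>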


Variants

A number of methods have been proposed to accelerate the sometimes slow convergence of the EM algorithm, such as those using conjugate gradient and modified Newton's methods (Newton–Raphson). Also, EM can be used with constrained estimation methods.

The ''parameter-expanded expectation maximization (PX-EM)'' algorithm often provides speed-up by "using a 'covariance adjustment' to correct the analysis of the M step, capitalising on extra information captured in the imputed complete data".

''Expectation conditional maximization (ECM)'' replaces each M step with a sequence of conditional maximization (CM) steps in which each parameter ''θ''''i'' is maximized individually, conditionally on the other parameters remaining fixed. It can itself be extended into the ''expectation conditional maximization either (ECME)'' algorithm.

This idea is further extended in the ''generalized expectation maximization (GEM)'' algorithm, in which one seeks only an increase in the objective function ''F'' for both the E step and M step as described in the As a maximization–maximization procedure section above. GEM has also been developed for distributed environments and shows promising results.

It is also possible to consider the EM algorithm as a subclass of the MM (Majorize/Minimize or Minorize/Maximize, depending on context) algorithm, and therefore to use any machinery developed in the more general case.
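For illustration, the ECM idea of cycling through conditional maximizations can be sketched as follows; ''e_step'' and ''conditional_max'' are hypothetical, model-specific routines (the latter maximizes the expected complete-data log-likelihood over one parameter block while the others stay fixed).

<syntaxhighlight lang="python">
def ecm_iteration(x, theta, e_step, conditional_max, blocks):
    """One ECM iteration: a single E step followed by one CM step per parameter block.

    theta           -- dict of parameter blocks, e.g. {"tau": ..., "mu": ..., "sigma": ...}
    e_step          -- hypothetical function returning expected sufficient statistics
    conditional_max -- hypothetical function maximizing Q over one block, others held fixed
    blocks          -- the order in which the parameter blocks are updated
    """
    stats = e_step(x, theta)              # E step with the current parameters
    theta = dict(theta)
    for name in blocks:                   # sequence of CM steps
        theta[name] = conditional_max(x, stats, theta, name)
    return theta
</syntaxhighlight>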


α-EM algorithm

The Q-function used in the EM algorithm is based on the log likelihood. Therefore, it is regarded as the log-EM algorithm. The use of the log likelihood can be generalized to that of the α-log likelihood ratio. Then, the α-log likelihood ratio of the observed data can be exactly expressed as an equality by using the Q-function of the α-log likelihood ratio and the α-divergence. Obtaining this Q-function is a generalized E step. Its maximization is a generalized M step. This pair is called the α-EM algorithm, which contains the log-EM algorithm as its subclass. Thus, the α-EM algorithm by Yasuo Matsuyama is an exact generalization of the log-EM algorithm. No computation of gradient or Hessian matrix is needed. The α-EM shows faster convergence than the log-EM algorithm by choosing an appropriate α. The α-EM algorithm leads to a faster version of the hidden Markov model estimation algorithm α-HMM.


Relation to variational Bayes methods

EM is a partially non-Bayesian, maximum likelihood method. Its final result gives a probability distribution over the latent variables (in the Bayesian style) together with a point estimate for ''θ'' (either a maximum likelihood estimate or a posterior mode). A fully Bayesian version of this may be wanted, giving a probability distribution over ''θ'' and the latent variables. The Bayesian approach to inference is simply to treat ''θ'' as another latent variable. In this paradigm, the distinction between the E and M steps disappears. If using the factorized Q approximation as described above (variational Bayes), solving can iterate over each latent variable (now including ''θ'') and optimize them one at a time. Now, ''k'' steps per iteration are needed, where ''k'' is the number of latent variables. For graphical models this is easy to do as each variable's new ''Q'' depends only on its Markov blanket, so local message passing can be used for efficient inference.


Geometric interpretation

In information geometry, the E step and the M step are interpreted as projections under dual affine connections, called the e-connection and the m-connection; the Kullback–Leibler divergence can also be understood in these terms.


Examples


Gaussian mixture

Let \mathbf{x} = (\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_n) be a sample of n independent observations from a mixture of two multivariate normal distributions of dimension d, and let \mathbf{z} = (z_1,z_2,\ldots,z_n) be the latent variables that determine the component from which the observation originates.

: X_i \mid(Z_i = 1) \sim \mathcal{N}_d(\boldsymbol\mu_1,\Sigma_1) and X_i \mid(Z_i = 2) \sim \mathcal{N}_d(\boldsymbol\mu_2,\Sigma_2),

where

: \operatorname{P} (Z_i = 1 ) = \tau_1 \, and \operatorname{P} (Z_i=2) = \tau_2 = 1-\tau_1.

The aim is to estimate the unknown parameters representing the ''mixing'' value between the Gaussians and the means and covariances of each:

: \theta = \big( \boldsymbol\tau,\boldsymbol\mu_1,\boldsymbol\mu_2,\Sigma_1,\Sigma_2 \big),

where the incomplete-data likelihood function is

: L(\theta;\mathbf{x}) = \prod_{i=1}^n \sum_{j=1}^2 \tau_j \ f(\mathbf{x}_i;\boldsymbol\mu_j,\Sigma_j),

and the complete-data likelihood function is

: L(\theta;\mathbf{x},\mathbf{z}) = p(\mathbf{x},\mathbf{z} \mid \theta) = \prod_{i=1}^n \prod_{j=1}^2 \ \big[ f(\mathbf{x}_i;\boldsymbol\mu_j,\Sigma_j) \, \tau_j \big]^{\mathbb{I}(z_i=j)},

or

: L(\theta;\mathbf{x},\mathbf{z}) = \exp \left\{ \sum_{i=1}^n \sum_{j=1}^2 \mathbb{I}(z_i=j) \big[ \log \tau_j - \tfrac{1}{2} \log |\Sigma_j| - \tfrac{1}{2}(\mathbf{x}_i-\boldsymbol\mu_j)^\top\Sigma_j^{-1} (\mathbf{x}_i-\boldsymbol\mu_j) - \tfrac{d}{2} \log(2\pi) \big] \right\},

where \mathbb{I} is an indicator function and f is the probability density function of a multivariate normal.

In the last equality, for each ''i'', one indicator \mathbb{I}(z_i=j) is equal to zero, and one indicator is equal to one. The inner sum thus reduces to one term.
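To accompany the worked example, the following sketch generates synthetic data from such a two-component mixture; all numbers are arbitrary choices for illustration, and the E-step and M-step sketches below operate on this data.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
n, d = 500, 2

# Arbitrary "true" parameters used only to generate the illustration data.
tau_true = np.array([0.4, 0.6])
mu_true = [np.array([0.0, 0.0]), np.array([4.0, 3.0])]
sigma_true = [np.eye(2), np.array([[1.5, 0.5], [0.5, 1.0]])]

# Draw the latent component labels, then the observations.
z_true = rng.choice(2, size=n, p=tau_true)
x = np.stack([rng.multivariate_normal(mu_true[k], sigma_true[k]) for k in z_true])
</syntaxhighlight>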


E step

Given our current estimate of the parameters ''θ''(''t''), the conditional distribution of the ''Z''''i'' is determined by Bayes' theorem to be the proportional height of the normal density weighted by ''τ'':

: T_{j,i}^{(t)} := \operatorname{P}(Z_i=j \mid X_i=\mathbf{x}_i ;\theta^{(t)}) = \frac{\tau_j^{(t)} \ f(\mathbf{x}_i;\boldsymbol\mu_j^{(t)},\Sigma_j^{(t)})}{\tau_1^{(t)} \ f(\mathbf{x}_i;\boldsymbol\mu_1^{(t)},\Sigma_1^{(t)}) + \tau_2^{(t)} \ f(\mathbf{x}_i;\boldsymbol\mu_2^{(t)},\Sigma_2^{(t)})}.

These are called the "membership probabilities", which are normally considered the output of the E step (although this is not the Q function defined below).

This E step corresponds with setting up this function for Q:

: \begin{align}Q(\theta\mid\theta^{(t)}) &= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X}=\mathbf{x};\theta^{(t)}} [\log L(\theta;\mathbf{x},\mathbf{Z})] \\ &= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X}=\mathbf{x};\theta^{(t)}} \Big[\log \textstyle\prod_{i=1}^{n} L(\theta;\mathbf{x}_i,Z_i)\Big] \\ &= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X}=\mathbf{x};\theta^{(t)}} \Big[\textstyle\sum_{i=1}^n \log L(\theta;\mathbf{x}_i,Z_i)\Big] \\ &= \sum_{i=1}^n\operatorname{E}_{Z_i\mid X_i=\mathbf{x}_i;\theta^{(t)}} [\log L(\theta;\mathbf{x}_i,Z_i)] \\ &= \sum_{i=1}^n \sum_{j=1}^2 P(Z_i =j \mid X_i = \mathbf{x}_i; \theta^{(t)}) \log L(\theta_j;\mathbf{x}_i,j) \\ &= \sum_{i=1}^n \sum_{j=1}^2 T_{j,i}^{(t)} \big[ \log \tau_j - \tfrac{1}{2} \log |\Sigma_j| - \tfrac{1}{2}(\mathbf{x}_i-\boldsymbol\mu_j)^\top\Sigma_j^{-1} (\mathbf{x}_i-\boldsymbol\mu_j) - \tfrac{d}{2} \log(2\pi) \big]. \end{align}

The expectation of \log L(\theta;\mathbf{x}_i,Z_i) inside the sum is taken with respect to the probability density function P(Z_i \mid X_i = \mathbf{x}_i; \theta^{(t)}), which might be different for each \mathbf{x}_i of the training set. Everything in the E step is known before the step is taken except T_{j,i}, which is computed according to the equation at the beginning of the E step section.

This full conditional expectation does not need to be calculated in one step, because ''τ'' and '''μ'''/Σ appear in separate linear terms and can thus be maximized independently.
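A minimal sketch of this E step for the synthetic data generated in the sketch above (using SciPy's multivariate normal density; the initial parameter values are arbitrary):

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import multivariate_normal

def e_step(x, tau, mu, sigma):
    """Return the membership probabilities T[j, i] = P(Z_i = j | x_i; theta)."""
    # Unnormalized weights: tau_j * f(x_i; mu_j, Sigma_j) for each component j.
    weighted = np.stack([
        tau[j] * multivariate_normal.pdf(x, mean=mu[j], cov=sigma[j])
        for j in range(2)
    ])
    return weighted / weighted.sum(axis=0)   # normalize over components

# Arbitrary initial guesses for the parameters.
tau = np.array([0.5, 0.5])
mu = [x[0].copy(), x[1].copy()]              # two data points as initial means
sigma = [np.eye(2), np.eye(2)]

T = e_step(x, tau, mu, sigma)                # shape (2, n): membership probabilities
</syntaxhighlight>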


M step

''Q''(''θ'' | ''θ''(''t'')) being quadratic in form means that determining the maximizing values of ''θ'' is relatively straightforward. Also, ''τ'', ('''μ'''1,''Σ''1) and ('''μ'''2,''Σ''2) may all be maximized independently since they all appear in separate linear terms.

To begin, consider ''τ'', which has the constraint ''τ''1 + ''τ''2 = 1:

: \begin{align}\boldsymbol\tau^{(t+1)} &= \underset{\boldsymbol\tau}{\operatorname{arg\,max}} \ Q(\theta \mid \theta^{(t)} ) \\ &= \underset{\boldsymbol\tau}{\operatorname{arg\,max}} \ \left\{ \sum_{i=1}^n \left[ T_{1,i}^{(t)} \log \tau_1 + T_{2,i}^{(t)} \log \tau_2 \right] \right\}. \end{align}

This has the same form as the MLE for the binomial distribution, so

: \tau^{(t+1)}_j = \frac{\sum_{i=1}^n T_{j,i}^{(t)}}{\sum_{i=1}^n \left(T_{1,i}^{(t)} + T_{2,i}^{(t)}\right)} = \frac{1}{n} \sum_{i=1}^n T_{j,i}^{(t)}.

For the next estimates of ('''μ'''1,''Σ''1):

: \begin{align}(\boldsymbol\mu_1^{(t+1)},\Sigma_1^{(t+1)}) &= \underset{\boldsymbol\mu_1,\Sigma_1}{\operatorname{arg\,max}}\ Q(\theta \mid \theta^{(t)} ) \\ &= \underset{\boldsymbol\mu_1,\Sigma_1}{\operatorname{arg\,max}}\ \sum_{i=1}^n T_{1,i}^{(t)} \left\{ -\tfrac{1}{2} \log |\Sigma_1| - \tfrac{1}{2}(\mathbf{x}_i-\boldsymbol\mu_1)^\top\Sigma_1^{-1} (\mathbf{x}_i-\boldsymbol\mu_1) \right\}. \end{align}

This has the same form as a weighted MLE for a normal distribution, so

: \boldsymbol\mu_1^{(t+1)} = \frac{\sum_{i=1}^n T_{1,i}^{(t)} \mathbf{x}_i}{\sum_{i=1}^n T_{1,i}^{(t)}} and \Sigma_1^{(t+1)} = \frac{\sum_{i=1}^n T_{1,i}^{(t)} (\mathbf{x}_i - \boldsymbol\mu_1^{(t+1)}) (\mathbf{x}_i - \boldsymbol\mu_1^{(t+1)})^\top}{\sum_{i=1}^n T_{1,i}^{(t)}}

and, by symmetry,

: \boldsymbol\mu_2^{(t+1)} = \frac{\sum_{i=1}^n T_{2,i}^{(t)} \mathbf{x}_i}{\sum_{i=1}^n T_{2,i}^{(t)}} and \Sigma_2^{(t+1)} = \frac{\sum_{i=1}^n T_{2,i}^{(t)} (\mathbf{x}_i - \boldsymbol\mu_2^{(t+1)}) (\mathbf{x}_i - \boldsymbol\mu_2^{(t+1)})^\top}{\sum_{i=1}^n T_{2,i}^{(t)}}.
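A sketch of the corresponding M step, continuing the synthetic-data example above; the weighted-average updates mirror the closed-form expressions just derived.

<syntaxhighlight lang="python">
import numpy as np

def m_step(x, T):
    """Return updated (tau, mu, sigma) from data x and membership probabilities T."""
    n = x.shape[0]
    tau = T.sum(axis=1) / n                        # mixing proportions
    mu, sigma = [], []
    for j in range(2):
        w = T[j]                                   # weights for component j
        mu_j = (w[:, None] * x).sum(axis=0) / w.sum()
        diff = x - mu_j
        # Weighted outer products give the weighted covariance estimate.
        sigma_j = (w[:, None, None] * np.einsum('ni,nj->nij', diff, diff)).sum(axis=0) / w.sum()
        mu.append(mu_j)
        sigma.append(sigma_j)
    return tau, mu, sigma

tau, mu, sigma = m_step(x, T)                      # one M step after the E step above
</syntaxhighlight>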


Termination

Conclude the iterative process if \operatorname{E}_{Z\mid\theta^{(t)},\mathbf{x}}[\log L(\theta^{(t)};\mathbf{x},\mathbf{Z})] \leq \operatorname{E}_{Z\mid\theta^{(t-1)},\mathbf{x}}[\log L(\theta^{(t-1)};\mathbf{x},\mathbf{Z})] + \varepsilon for \varepsilon below some preset threshold.
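Putting the pieces together, a complete EM loop for the two-component example might look like the following sketch, which reuses the ''e_step'' and ''m_step'' functions and the data from the sketches above and stops when the observed-data log-likelihood improves by less than a small threshold (a common practical proxy for the criterion above).

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import multivariate_normal

def log_likelihood(x, tau, mu, sigma):
    """Observed-data log-likelihood of the two-component mixture."""
    dens = sum(tau[j] * multivariate_normal.pdf(x, mean=mu[j], cov=sigma[j])
               for j in range(2))
    return np.sum(np.log(dens))

eps, prev_ll = 1e-6, -np.inf
for _ in range(200):
    T = e_step(x, tau, mu, sigma)        # E step (defined in the sketch above)
    tau, mu, sigma = m_step(x, T)        # M step (defined in the sketch above)
    ll = log_likelihood(x, tau, mu, sigma)
    if ll - prev_ll <= eps:              # terminate once the improvement is tiny
        break
    prev_ll = ll

print(tau, mu[0], mu[1])                 # estimates close to the generating parameters
</syntaxhighlight>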


Generalization

The algorithm illustrated above can be generalized for mixtures of more than two multivariate normal distributions.


Truncated and censored regression

The EM algorithm has been implemented in the case where an underlying linear regression model exists explaining the variation of some quantity, but where the values actually observed are censored or truncated versions of those represented in the model. Special cases of this model include censored or truncated observations from one normal distribution.


Alternatives

EM typically converges to a local optimum, not necessarily the global optimum, with no bound on the convergence rate in general. It can be arbitrarily poor in high dimensions, and there can be an exponential number of local optima. Hence, a need exists for alternative methods with guaranteed learning, especially in the high-dimensional setting. Alternatives to EM exist with better guarantees for consistency, termed ''moment-based approaches'' or the so-called ''spectral techniques''. Moment-based approaches to learning the parameters of a probabilistic model have attracted increasing interest, since they enjoy guarantees such as global convergence under certain conditions, unlike EM, which is often plagued by the issue of getting stuck in local optima. Algorithms with guarantees for learning can be derived for a number of important models such as mixture models, HMMs, etc. For these spectral methods, no spurious local optima occur, and the true parameters can be consistently estimated under some regularity conditions.


See also

* mixture distribution
* compound distribution
* density estimation
* principal component analysis
* total absorption spectroscopy
* The EM algorithm can be viewed as a special case of the majorize-minimization (MM) algorithm.



External links

* Various 1D, 2D and 3D demonstrations of EM together with mixture modeling are provided as part of the paired SOCR activities and applets. These applets and activities show empirically the properties of the EM algorithm for parameter estimation in diverse settings.
* Class hierarchy in C++ (GPL) including Gaussian Mixtures.
* The on-line textbook ''Information Theory, Inference, and Learning Algorithms'' by David J.C. MacKay includes simple examples of the EM algorithm such as clustering using the soft ''k''-means algorithm, and emphasizes the variational view of the EM algorithm, as described in Chapter 33.7 of version 7.2 (fourth edition).
* ''Variational Algorithms for Approximate Bayesian Inference'', by M. J. Beal, includes comparisons of EM to Variational Bayesian EM and derivations of several models including Variational Bayesian HMMs.
* ''The Expectation Maximization Algorithm: A short tutorial'': a self-contained derivation of the EM algorithm by Sean Borman.
* ''The EM Algorithm'', by Xiaojin Zhu.
* ''EM algorithm and variants: an informal tutorial'', by Alexis Roche. A concise and very clear description of EM and many interesting variants.