Expectation–maximization algorithm

In statistics, an expectation–maximization (EM) algorithm is an iterative method to find (local) maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables. The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of the log-likelihood evaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the ''E'' step. These parameter estimates are then used to determine the distribution of the latent variables in the next E step. It can be used, for example, to estimate a mixture of Gaussians, or to solve the multiple linear regression problem.


History

The EM algorithm was explained and given its name in a classic 1977 paper by Arthur Dempster, Nan Laird, and Donald Rubin. They pointed out that the method had been "proposed many times in special circumstances" by earlier authors. One of the earliest is the gene-counting method for estimating allele frequencies by Cedric Smith. Another was proposed by H.O. Hartley in 1958, and by Hartley and Hocking in 1977, from which many of the ideas in the Dempster–Laird–Rubin paper originated. Another was given by S.K. Ng, Thriyambakam Krishnan and G.J. McLachlan in 1977. Hartley's ideas can be broadened to any grouped discrete distribution.

A very detailed treatment of the EM method for exponential families was published by Rolf Sundberg in his thesis and several papers, following his collaboration with Per Martin-Löf and Anders Martin-Löf:
* Rolf Sundberg. 1971. ''Maximum likelihood theory and applications for distributions generated when observing a function of an exponential family variable''. Dissertation, Institute for Mathematical Statistics, Stockholm University.
* Per Martin-Löf. 1966. ''Statistics from the point of view of statistical mechanics''. Lecture notes, Mathematical Institute, Aarhus University. (The "Sundberg formula", credited to Anders Martin-Löf.)
* Per Martin-Löf. 1970. ''Statistiska Modeller (Statistical Models): Anteckningar från seminarier läsåret 1969–1970'' (Lecture notes 1969–1970), with the assistance of Rolf Sundberg. Stockholm University.
* Martin-Löf, P. 1974. The notion of redundancy and its use as a quantitative measure of the deviation between a statistical hypothesis and a set of observational data. With a discussion by F. Abildgård, A. P. Dempster, D. Basu, D. R. Cox, A. W. F. Edwards, D. A. Sprott, G. A. Barnard, O. Barndorff-Nielsen, J. D. Kalbfleisch and G. Rasch, and a reply by the author. ''Proceedings of Conference on Foundational Questions in Statistical Inference'' (Aarhus, 1973), pp. 1–42. Memoirs, No. 1, Dept. Theoret. Statist., Inst. Math., Univ. Aarhus.

The Dempster–Laird–Rubin paper in 1977 generalized the method and sketched a convergence analysis for a wider class of problems, establishing the EM method as an important tool of statistical analysis; see also Meng and van Dyk (1997). The convergence analysis of Dempster, Laird, and Rubin was flawed, and a correct convergence analysis was published by C. F. Jeff Wu in 1983. Wu's proof established the EM method's convergence also outside of the exponential family, as claimed by Dempster–Laird–Rubin.


Introduction

The EM algorithm is used to find (local) maximum likelihood parameters of a statistical model in cases where the equations cannot be solved directly. Typically these models involve latent variables in addition to unknown parameters and known data observations. That is, either missing values exist among the data, or the model can be formulated more simply by assuming the existence of further unobserved data points. For example, a mixture model can be described more simply by assuming that each observed data point has a corresponding unobserved data point, or latent variable, specifying the mixture component to which it belongs.

Finding a maximum likelihood solution typically requires taking the derivatives of the likelihood function with respect to all the unknown values, the parameters and the latent variables, and simultaneously solving the resulting equations. In statistical models with latent variables, this is usually impossible. Instead, the result is typically a set of interlocking equations in which the solution to the parameters requires the values of the latent variables and vice versa, but substituting one set of equations into the other produces an unsolvable equation.

The EM algorithm proceeds from the observation that there is a way to solve these two sets of equations numerically. One can simply pick arbitrary values for one of the two sets of unknowns, use them to estimate the second set, then use these new values to find a better estimate of the first set, and keep alternating between the two until the resulting values both converge to fixed points. It is not obvious that this will work, but it can be proven in this context. Additionally, it can be proven that the derivative of the likelihood is (arbitrarily close to) zero at that point, which in turn means that the point is either a local maximum or a saddle point. In general, multiple maxima may occur, with no guarantee that the global maximum will be found. Some likelihoods also have singularities in them, i.e., nonsensical maxima. For example, one of the ''solutions'' that may be found by EM in a mixture model involves setting one of the components to have zero variance and the mean parameter for the same component equal to one of the data points.


Description


The symbols

Given the statistical model which generates a set \mathbf{X} of observed data, a set of unobserved latent data or missing values \mathbf{Z}, and a vector of unknown parameters \boldsymbol\theta, along with a likelihood function L(\boldsymbol\theta; \mathbf{X}, \mathbf{Z}) = p(\mathbf{X}, \mathbf{Z}\mid\boldsymbol\theta), the maximum likelihood estimate (MLE) of the unknown parameters is determined by maximizing the marginal likelihood of the observed data
:L(\boldsymbol\theta; \mathbf{X}) = p(\mathbf{X}\mid\boldsymbol\theta) = \int p(\mathbf{X},\mathbf{Z} \mid \boldsymbol\theta) \, d\mathbf{Z} = \int p(\mathbf{X} \mid \mathbf{Z}, \boldsymbol\theta) \, p(\mathbf{Z} \mid \boldsymbol\theta) \, d\mathbf{Z}.
However, this quantity is often intractable since \mathbf{Z} is unobserved and the distribution of \mathbf{Z} is unknown before attaining \boldsymbol\theta.


The EM algorithm

The EM algorithm seeks to find the maximum likelihood estimate of the marginal likelihood by iteratively applying these two steps:
:''Expectation step (E step)'': Define Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) as the expected value of the log likelihood function of \boldsymbol\theta, with respect to the current conditional distribution of \mathbf{Z} given \mathbf{X} and the current estimates of the parameters \boldsymbol\theta^{(t)}:
::Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) = \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}}\left[ \log p(\mathbf{X},\mathbf{Z}\mid\boldsymbol\theta) \right]
:''Maximization step (M step)'': Find the parameters that maximize this quantity:
::\boldsymbol\theta^{(t+1)} = \underset{\boldsymbol\theta}{\operatorname{arg\,max}} \ Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})
More succinctly, we can write the two steps as one equation:
::\boldsymbol\theta^{(t+1)} = \underset{\boldsymbol\theta}{\operatorname{arg\,max}} \ \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}}\left[ \log p(\mathbf{X},\mathbf{Z}\mid\boldsymbol\theta) \right]
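
The two steps can be written as a short generic loop. The following Python sketch is illustrative only: ''e_step'' and ''m_step'' are hypothetical, model-specific callables that must return, respectively, the summaries of p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}) needed to form Q, and the parameter vector maximizing that Q.

import numpy as np

def em(x, theta0, e_step, m_step, max_iter=100, tol=1e-6):
    """Generic EM loop: alternate E and M steps until the parameters stabilise.

    e_step(x, theta) must return whatever summaries of p(Z | X, theta) the
    model needs to form Q(. | theta); m_step(x, expectations) must return the
    parameter vector maximizing that Q.  Both are supplied by the caller.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        expectations = e_step(x, theta)                               # E step
        theta_new = np.asarray(m_step(x, expectations), dtype=float)  # M step
        if np.max(np.abs(theta_new - theta)) < tol:                   # crude convergence test
            theta = theta_new
            break
        theta = theta_new
    return theta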


Interpretation of the variables

The typical models to which EM is applied use \mathbf{Z} as a latent variable indicating membership in one of a set of groups:
#The observed data points \mathbf{X} may be discrete (taking values in a finite or countably infinite set) or continuous (taking values in an uncountably infinite set). Associated with each data point may be a vector of observations.
#The missing values (aka latent variables) \mathbf{Z} are discrete, drawn from a fixed number of values, and with one latent variable per observed unit.
#The parameters are continuous, and are of two kinds: parameters that are associated with all data points, and those associated with a specific value of a latent variable (i.e., associated with all data points whose corresponding latent variable has that value).
However, it is possible to apply EM to other sorts of models.

The motivation is as follows. If the value of the parameters \boldsymbol\theta is known, usually the value of the latent variables \mathbf{Z} can be found by maximizing the log-likelihood over all possible values of \mathbf{Z}, either simply by iterating over \mathbf{Z} or through an algorithm such as the Viterbi algorithm for hidden Markov models. Conversely, if we know the value of the latent variables \mathbf{Z}, we can find an estimate of the parameters \boldsymbol\theta fairly easily, typically by simply grouping the observed data points according to the value of the associated latent variable and averaging the values, or some function of the values, of the points in each group. This suggests an iterative algorithm, in the case where both \boldsymbol\theta and \mathbf{Z} are unknown:
# First, initialize the parameters \boldsymbol\theta to some random values.
# Compute the probability of each possible value of \mathbf{Z}, given \boldsymbol\theta.
# Then, use the just-computed values of \mathbf{Z} to compute a better estimate for the parameters \boldsymbol\theta.
# Iterate steps 2 and 3 until convergence.
The algorithm as just described monotonically approaches a local minimum of the cost function.


Properties

Although an EM iteration does increase the observed data (i.e., marginal) likelihood function, no guarantee exists that the sequence converges to a maximum likelihood estimator. For multimodal distributions, this means that an EM algorithm may converge to a local maximum of the observed data likelihood function, depending on starting values. A variety of heuristic or metaheuristic approaches exist to escape a local maximum, such as random-restart hill climbing (starting with several different random initial estimates of \boldsymbol\theta), or applying simulated annealing methods; a minimal sketch of the random-restart strategy is given at the end of this section.

EM is especially useful when the likelihood is an exponential family; see Sundberg (2019, Ch. 8) for a comprehensive treatment: the E step becomes the sum of expectations of sufficient statistics, and the M step involves maximizing a linear function. In such a case, it is usually possible to derive closed-form expression updates for each step, using the Sundberg formula (proved and published by Rolf Sundberg, based on unpublished results of Per Martin-Löf and Anders Martin-Löf).

The EM method was modified to compute maximum a posteriori (MAP) estimates for Bayesian inference in the original paper by Dempster, Laird, and Rubin.

Other methods exist to find maximum likelihood estimates, such as gradient descent, conjugate gradient, or variants of the Gauss–Newton algorithm. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function.
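
A minimal sketch of the random-restart strategy, assuming hypothetical model-specific callables ''run_em'' (one EM run from a random start) and ''log_likelihood'' (the observed-data log-likelihood of a fit):

import numpy as np

def em_with_restarts(x, run_em, log_likelihood, n_restarts=10, seed=0):
    """Run EM from several random starting points and keep the best fit.

    run_em(x, rng) -> theta and log_likelihood(x, theta) -> float are
    hypothetical, model-specific callables supplied by the caller.
    """
    rng = np.random.default_rng(seed)
    best_theta, best_ll = None, -np.inf
    for _ in range(n_restarts):
        theta = run_em(x, rng)          # one EM run from a random initial estimate
        ll = log_likelihood(x, theta)   # observed-data log-likelihood of this fit
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta, best_ll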


Proof of correctness

Expectation-Maximization works to improve Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) rather than directly improving \log p(\mathbf{X}\mid\boldsymbol\theta). Here it is shown that improvements to the former imply improvements to the latter.

For any \mathbf{Z} with non-zero probability p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta), we can write
: \log p(\mathbf{X}\mid\boldsymbol\theta) = \log p(\mathbf{X},\mathbf{Z}\mid\boldsymbol\theta) - \log p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta).
We take the expectation over possible values of the unknown data \mathbf{Z} under the current parameter estimate \boldsymbol\theta^{(t)} by multiplying both sides by p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}) and summing (or integrating) over \mathbf{Z}. The left-hand side is the expectation of a constant, so we get:
: \begin{align} \log p(\mathbf{X}\mid\boldsymbol\theta) & = \sum_{\mathbf{Z}} p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}) \log p(\mathbf{X},\mathbf{Z}\mid\boldsymbol\theta) - \sum_{\mathbf{Z}} p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}) \log p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta) \\ & = Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}), \end{align}
where H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) is defined by the negated sum it is replacing. This last equation holds for every value of \boldsymbol\theta including \boldsymbol\theta = \boldsymbol\theta^{(t)},
: \log p(\mathbf{X}\mid\boldsymbol\theta^{(t)}) = Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}),
and subtracting this last equation from the previous equation gives
: \log p(\mathbf{X}\mid\boldsymbol\theta) - \log p(\mathbf{X}\mid\boldsymbol\theta^{(t)}) = Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}).
However, Gibbs' inequality tells us that H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) \ge H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}), so we can conclude that
: \log p(\mathbf{X}\mid\boldsymbol\theta) - \log p(\mathbf{X}\mid\boldsymbol\theta^{(t)}) \ge Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}).
In words, choosing \boldsymbol\theta to improve Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) causes \log p(\mathbf{X}\mid\boldsymbol\theta) to improve at least as much.
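
The decomposition and the resulting bound can be verified numerically whenever the latent space is small. The following sketch uses a deliberately tiny, made-up model (a binary latent variable with fixed emission probabilities and a single observation) purely to check the identities above; all numbers are illustrative.

import numpy as np

# Tiny illustrative model: latent z in {0, 1} with p(z = 1 | theta) = theta,
# fixed emission probabilities p(x = 1 | z), and a single observation x = 1.
EMIT = np.array([0.2, 0.9])           # p(x = 1 | z = 0), p(x = 1 | z = 1)

def joint(theta):
    """p(x = 1, z | theta) for z = 0, 1."""
    return np.array([1.0 - theta, theta]) * EMIT

def check(theta, theta_t):
    p_t = joint(theta_t)
    post = p_t / p_t.sum()                       # p(z | x, theta_t)
    p = joint(theta)
    q_val = post @ np.log(p)                     # Q(theta | theta_t)
    h_val = -(post @ np.log(p / p.sum()))        # H(theta | theta_t)
    log_marg = np.log(p.sum())                   # log p(x | theta)
    assert np.isclose(log_marg, q_val + h_val)   # log p(x | theta) = Q + H
    q_t = post @ np.log(p_t)                     # Q(theta_t | theta_t)
    log_marg_t = np.log(p_t.sum())
    # improvement in the log-likelihood is at least the improvement in Q:
    assert log_marg - log_marg_t >= q_val - q_t - 1e-12

check(theta=0.7, theta_t=0.3)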


As a maximization–maximization procedure

The EM algorithm can be viewed as two alternating maximization steps, that is, as an example of coordinate descent. Consider the function:
: F(q,\theta) := \operatorname{E}_q [ \log L (\theta ; x,Z) ] + H(q),
where ''q'' is an arbitrary probability distribution over the unobserved data ''z'' and ''H(q)'' is the entropy of the distribution ''q''. This function can be written as
: F(q,\theta) = -D_{\mathrm{KL}}\big(q \parallel p_{Z\mid X}(\cdot\mid x;\theta ) \big) + \log L(\theta;x),
where p_{Z\mid X}(\cdot\mid x;\theta ) is the conditional distribution of the unobserved data given the observed data x and D_{\mathrm{KL}} is the Kullback–Leibler divergence.

Then the steps in the EM algorithm may be viewed as:
:''Expectation step'': Choose q to maximize F:
:: q^{(t)} = \underset{q}{\operatorname{arg\,max}} \ F(q,\theta^{(t)})
:''Maximization step'': Choose \theta to maximize F:
:: \theta^{(t+1)} = \underset{\theta}{\operatorname{arg\,max}} \ F(q^{(t)},\theta)
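
This view can also be checked numerically. The sketch below uses the same kind of tiny, made-up model as in the previous section (a binary latent variable with fixed emission probabilities and one observation) and confirms that maximizing F over q recovers the posterior p(z\mid x,\theta), at which point F equals \log L(\theta;x).

import numpy as np

# Tiny illustrative model: latent z in {0, 1} with p(z = 1 | theta) = theta,
# fixed emissions p(x = 1 | z), and one observation x = 1 (numbers illustrative).
EMIT = np.array([0.2, 0.9])

def joint(theta):
    """p(x = 1, z | theta) for z = 0, 1."""
    return np.array([1.0 - theta, theta]) * EMIT

def free_energy(r, theta):
    """F(q, theta) for q = Bernoulli(r) over z: E_q[log p(x, z | theta)] + H(q)."""
    q = np.array([1.0 - r, r])
    return q @ np.log(joint(theta)) - q @ np.log(q)

theta = 0.4
grid = np.linspace(1e-6, 1 - 1e-6, 100001)
r_best = grid[np.argmax([free_energy(r, theta) for r in grid])]
posterior = joint(theta)[1] / joint(theta).sum()        # p(z = 1 | x, theta)
assert abs(r_best - posterior) < 1e-3                   # E step: the best q is the posterior
assert np.isclose(free_energy(posterior, theta),
                  np.log(joint(theta).sum()))           # there, F equals log L(theta; x)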


Applications

*EM is frequently used for parameter estimation of mixed models, notably in quantitative genetics.
*In psychometrics, EM is an important tool for estimating item parameters and latent abilities of item response theory models.
*With the ability to deal with missing data and observe unidentified variables, EM is becoming a useful tool to price and manage risk of a portfolio.
*The EM algorithm (and its faster variant ordered subset expectation maximization) is also widely used in medical image reconstruction, especially in positron emission tomography, single-photon emission computed tomography, and x-ray computed tomography. See below for other faster variants of EM.
*In structural engineering, the Structural Identification using Expectation Maximization (STRIDE) algorithm is an output-only method for identifying natural vibration properties of a structural system using sensor data (see Operational Modal Analysis).
*EM is also used for data clustering. In natural language processing, two prominent instances of the algorithm are the Baum–Welch algorithm for hidden Markov models, and the inside–outside algorithm for unsupervised induction of probabilistic context-free grammars.
*In the analysis of intertrade waiting times, i.e. the time between subsequent trades in shares of stock at a stock exchange, the EM algorithm has proved to be very useful.


Filtering and smoothing EM algorithms

A Kalman filter is typically used for on-line state estimation and a minimum-variance smoother may be employed for off-line or batch state estimation. However, these minimum-variance solutions require estimates of the state-space model parameters. EM algorithms can be used for solving joint state and parameter estimation problems.

Filtering and smoothing EM algorithms arise by repeating this two-step procedure:
;E-step
: Operate a Kalman filter or a minimum-variance smoother designed with current parameter estimates to obtain updated state estimates.
;M-step
: Use the filtered or smoothed state estimates within maximum-likelihood calculations to obtain updated parameter estimates.

Suppose that a Kalman filter or minimum-variance smoother operates on measurements of a single-input-single-output system that possess additive white noise. An updated measurement noise variance estimate can be obtained from the maximum likelihood calculation
: \widehat{\sigma}^2_v = \frac{1}{N} \sum_{k=1}^N (z_k - \widehat{x}_k)^2,
where \widehat{x}_k are scalar output estimates calculated by a filter or a smoother from N scalar measurements z_k. The above update can also be applied to updating a Poisson measurement noise intensity. Similarly, for a first-order auto-regressive process, an updated process noise variance estimate can be calculated by
: \widehat{\sigma}^2_w = \frac{1}{N} \sum_{k=1}^N (\widehat{x}_{k+1} - \widehat{F}\widehat{x}_k)^2,
where \widehat{x}_k and \widehat{x}_{k+1} are scalar state estimates calculated by a filter or a smoother. The updated model coefficient estimate is obtained via
: \widehat{F} = \frac{\sum_{k=1}^N \widehat{x}_{k+1}\widehat{x}_k}{\sum_{k=1}^N \widehat{x}_k^2}.
The convergence of parameter estimates such as those above are well studied.
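
The M-step updates above can be sketched as follows, assuming the E step has already produced scalar state estimates from a Kalman filter or smoother; for simplicity the sketch ignores the smoothed state covariances that a full derivation would carry along.

import numpy as np

def m_step_updates(z, x_hat):
    """M-step parameter updates from filtered/smoothed scalar state estimates.

    z: N scalar measurements; x_hat: N + 1 scalar state estimates, so that
    x_hat[k + 1] is defined for k = 0, ..., N - 1.
    """
    z = np.asarray(z, dtype=float)
    x_hat = np.asarray(x_hat, dtype=float)
    N = len(z)
    sigma2_v = np.mean((z - x_hat[:N]) ** 2)                    # measurement noise variance
    F = x_hat[1:N + 1] @ x_hat[:N] / (x_hat[:N] @ x_hat[:N])    # AR(1) model coefficient
    sigma2_w = np.mean((x_hat[1:N + 1] - F * x_hat[:N]) ** 2)   # process noise variance
    return sigma2_v, sigma2_w, F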


Variants

A number of methods have been proposed to accelerate the sometimes slow convergence of the EM algorithm, such as those using conjugate gradient and modified Newton's methods (Newton–Raphson). Also, EM can be used with constrained estimation methods.

The ''parameter-expanded expectation maximization (PX-EM)'' algorithm often provides speed up by "using a `covariance adjustment' to correct the analysis of the M step, capitalising on extra information captured in the imputed complete data".

''Expectation conditional maximization (ECM)'' replaces each M step with a sequence of conditional maximization (CM) steps in which each parameter ''θ''''i'' is maximized individually, conditionally on the other parameters remaining fixed. It can itself be extended into the ''expectation conditional maximization either (ECME)'' algorithm.

This idea is further extended in the ''generalized expectation maximization (GEM)'' algorithm, in which only an increase in the objective function ''F'' is sought for both the E step and M step, as described in the As a maximization–maximization procedure section. GEM is further developed in a distributed environment and shows promising results.

It is also possible to consider the EM algorithm as a subclass of the MM (Majorize/Minimize or Minorize/Maximize, depending on context) algorithm, and therefore use any machinery developed in the more general case.


α-EM algorithm

The Q-function used in the EM algorithm is based on the log likelihood. Therefore, it is regarded as the log-EM algorithm. The use of the log likelihood can be generalized to that of the α-log likelihood ratio. Then, the α-log likelihood ratio of the observed data can be exactly expressed as an equality by using the Q-function of the α-log likelihood ratio and the α-divergence. Obtaining this Q-function is a generalized E step. Its maximization is a generalized M step. This pair is called the α-EM algorithm, which contains the log-EM algorithm as its subclass. Thus, the α-EM algorithm by Yasuo Matsuyama is an exact generalization of the log-EM algorithm. No computation of gradient or Hessian matrix is needed. The α-EM shows faster convergence than the log-EM algorithm by choosing an appropriate α. The α-EM algorithm leads to a faster version of the hidden Markov model estimation algorithm, the α-HMM.


Relation to variational Bayes methods

EM is a partially non-Bayesian, maximum likelihood method. Its final result gives a probability distribution over the latent variables (in the Bayesian style) together with a point estimate for ''θ'' (either a maximum likelihood estimate or a posterior mode). A fully Bayesian version of this may be wanted, giving a probability distribution over ''θ'' and the latent variables. The Bayesian approach to inference is simply to treat ''θ'' as another latent variable. In this paradigm, the distinction between the E and M steps disappears. If using the factorized Q approximation as described above (variational Bayes), solving can iterate over each latent variable (now including ''θ'') and optimize them one at a time. Now, ''k'' steps per iteration are needed, where ''k'' is the number of latent variables. For graphical models this is easy to do as each variable's new ''Q'' depends only on its Markov blanket, so local message passing can be used for efficient inference.


Geometric interpretation

In information geometry, the E step and the M step are interpreted as projections under dual affine connections, called the e-connection and the m-connection; the Kullback–Leibler divergence can also be understood in these terms.


Examples


Gaussian mixture

Let \mathbf{x} = (\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_n) be a sample of n independent observations from a mixture of two multivariate normal distributions of dimension d, and let \mathbf{z} = (z_1,z_2,\ldots,z_n) be the latent variables that determine the component from which the observation originates.
: X_i \mid(Z_i = 1) \sim \mathcal{N}_d(\boldsymbol\mu_1,\Sigma_1) and X_i \mid(Z_i = 2) \sim \mathcal{N}_d(\boldsymbol\mu_2,\Sigma_2),
where
: \operatorname{P}(Z_i = 1) = \tau_1 \, and \operatorname{P}(Z_i=2) = \tau_2 = 1-\tau_1.
The aim is to estimate the unknown parameters representing the ''mixing'' value between the Gaussians and the means and covariances of each:
: \theta = \big( \boldsymbol\tau,\boldsymbol\mu_1,\boldsymbol\mu_2,\Sigma_1,\Sigma_2 \big),
where the incomplete-data likelihood function is
: L(\theta;\mathbf{x}) = \prod_{i=1}^n \sum_{j=1}^2 \tau_j \ f(\mathbf{x}_i;\boldsymbol\mu_j,\Sigma_j),
and the complete-data likelihood function is
: L(\theta;\mathbf{x},\mathbf{z}) = p(\mathbf{x},\mathbf{z} \mid \theta) = \prod_{i=1}^n \prod_{j=1}^2 \ \big[ f(\mathbf{x}_i;\boldsymbol\mu_j,\Sigma_j) \, \tau_j \big]^{\mathbb{I}(z_i=j)},
or
: L(\theta;\mathbf{x},\mathbf{z}) = \exp \left\{ \sum_{i=1}^n \sum_{j=1}^2 \mathbb{I}(z_i=j) \big[ \log \tau_j -\tfrac{1}{2} \log |\Sigma_j| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol\mu_j)^\top\Sigma_j^{-1} (\mathbf{x}_i-\boldsymbol\mu_j) -\tfrac{d}{2} \log(2\pi) \big] \right\},
where \mathbb{I} is an indicator function and f is the probability density function of a multivariate normal.

In the last equality, for each ''i'', one indicator \mathbb{I}(z_i=j) is equal to zero, and one indicator is equal to one. The inner sum thus reduces to one term.


E step

Given our current estimate of the parameters ''θ''(''t''), the conditional distribution of the ''Z''''i'' is determined by Bayes' theorem to be the proportional height of the normal density weighted by ''τ'':
: T_{j,i}^{(t)} := \operatorname{P}(Z_i=j \mid X_i=\mathbf{x}_i ;\theta^{(t)}) = \frac{\tau_j^{(t)} \ f(\mathbf{x}_i;\boldsymbol\mu_j^{(t)},\Sigma_j^{(t)})}{\tau_1^{(t)} \ f(\mathbf{x}_i;\boldsymbol\mu_1^{(t)},\Sigma_1^{(t)}) + \tau_2^{(t)} \ f(\mathbf{x}_i;\boldsymbol\mu_2^{(t)},\Sigma_2^{(t)})}.
These are called the "membership probabilities", which are normally considered the output of the E step (although this is not the Q function of below).

This E step corresponds with setting up this function for Q:
: \begin{align}Q(\theta\mid\theta^{(t)}) &= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\theta^{(t)}} [\log L(\theta;\mathbf{x},\mathbf{Z}) ] \\ &= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\theta^{(t)}} [\log \prod_{i=1}^{n}L(\theta;\mathbf{x}_i,Z_i) ] \\ &= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\theta^{(t)}} [\sum_{i=1}^n \log L(\theta;\mathbf{x}_i,Z_i) ] \\ &= \sum_{i=1}^n\operatorname{E}_{Z_i\mid X_i,\theta^{(t)}} [\log L(\theta;\mathbf{x}_i,Z_i) ] \\ &= \sum_{i=1}^n \sum_{j=1}^2 P(Z_i =j \mid X_i = \mathbf{x}_i; \theta^{(t)}) \log L(\theta_j;\mathbf{x}_i,j) \\ &= \sum_{i=1}^n \sum_{j=1}^2 T_{j,i}^{(t)} \big[ \log \tau_j -\tfrac{1}{2} \log |\Sigma_j| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol\mu_j)^\top\Sigma_j^{-1} (\mathbf{x}_i-\boldsymbol\mu_j) -\tfrac{d}{2} \log(2\pi) \big]. \end{align}
The expectation of \log L(\theta;\mathbf{x}_i,Z_i) inside the sum is taken with respect to the probability density function P(Z_i \mid X_i = \mathbf{x}_i; \theta^{(t)}), which might be different for each \mathbf{x}_i of the training set. Everything in the E step is known before the step is taken except T_{j,i}, which is computed according to the equation at the beginning of the E step section.

This full conditional expectation does not need to be calculated in one step, because ''τ'' and μ/Σ appear in separate linear terms and can thus be maximized independently.


M step

Q(\theta \mid \theta^{(t)} ) being quadratic in form means that determining the maximizing values of \theta is relatively straightforward. Also, \tau, (\boldsymbol\mu_1,\Sigma_1) and (\boldsymbol\mu_2,\Sigma_2) may all be maximized independently since they all appear in separate linear terms.

To begin, consider \tau, which has the constraint \tau_1+\tau_2=1:
: \begin{align}\boldsymbol\tau^{(t+1)} &= \underset{\boldsymbol\tau}{\operatorname{arg\,max}} \ Q(\theta \mid \theta^{(t)} ) \\ &= \underset{\boldsymbol\tau}{\operatorname{arg\,max}} \ \left\{ \left[ \sum_{i=1}^n T_{1,i}^{(t)} \right] \log \tau_1 + \left[ \sum_{i=1}^n T_{2,i}^{(t)} \right] \log \tau_2 \right\}. \end{align}
This has the same form as the maximum likelihood estimate for the binomial distribution, so
: \tau^{(t+1)}_j = \frac{\sum_{i=1}^n T_{j,i}^{(t)}}{\sum_{i=1}^n \big(T_{1,i}^{(t)} + T_{2,i}^{(t)}\big)} = \frac{1}{n} \sum_{i=1}^n T_{j,i}^{(t)}.
For the next estimates of (\boldsymbol\mu_1,\Sigma_1):
: \begin{align}(\boldsymbol\mu_1^{(t+1)},\Sigma_1^{(t+1)}) &= \underset{\boldsymbol\mu_1,\Sigma_1}{\operatorname{arg\,max}}\ Q(\theta \mid \theta^{(t)} ) \\ &= \underset{\boldsymbol\mu_1,\Sigma_1}{\operatorname{arg\,max}}\ \sum_{i=1}^n T_{1,i}^{(t)} \left\{ -\tfrac{1}{2} \log |\Sigma_1| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol\mu_1)^\top\Sigma_1^{-1} (\mathbf{x}_i-\boldsymbol\mu_1) \right\}. \end{align}
This has the same form as a weighted maximum likelihood estimate for a normal distribution, so
: \boldsymbol\mu_1^{(t+1)} = \frac{\sum_{i=1}^n T_{1,i}^{(t)} \mathbf{x}_i}{\sum_{i=1}^n T_{1,i}^{(t)}} and \Sigma_1^{(t+1)} = \frac{\sum_{i=1}^n T_{1,i}^{(t)} (\mathbf{x}_i-\boldsymbol\mu_1^{(t+1)}) (\mathbf{x}_i-\boldsymbol\mu_1^{(t+1)})^\top}{\sum_{i=1}^n T_{1,i}^{(t)}}
and, by symmetry,
: \boldsymbol\mu_2^{(t+1)} = \frac{\sum_{i=1}^n T_{2,i}^{(t)} \mathbf{x}_i}{\sum_{i=1}^n T_{2,i}^{(t)}} and \Sigma_2^{(t+1)} = \frac{\sum_{i=1}^n T_{2,i}^{(t)} (\mathbf{x}_i-\boldsymbol\mu_2^{(t+1)}) (\mathbf{x}_i-\boldsymbol\mu_2^{(t+1)})^\top}{\sum_{i=1}^n T_{2,i}^{(t)}}.


Termination

Conclude the iterative process if
: E_{\mathbf{Z}\mid\theta^{(t)},\mathbf{x}}[\log L(\theta^{(t)};\mathbf{x},\mathbf{Z})] \leq E_{\mathbf{Z}\mid\theta^{(t-1)},\mathbf{x}}[\log L(\theta^{(t-1)};\mathbf{x},\mathbf{Z})] + \varepsilon
for \varepsilon below some preset threshold.
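
The E step, M step, and termination rule above translate directly into code. The following NumPy/SciPy sketch covers the two-component case; the initial values, tolerance, and simulated data are illustrative, and a practical implementation would also guard against the degenerate zero-variance solutions mentioned in the Introduction.

import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(x, mu, sigma, tau, max_iter=200, tol=1e-6):
    """EM for a two-component d-dimensional Gaussian mixture (illustrative sketch)."""
    n = x.shape[0]
    prev_ll = -np.inf
    for _ in range(max_iter):
        # E step: membership probabilities T[i, j] = P(Z_i = j | x_i; theta^(t))
        weighted = np.column_stack([
            tau[j] * multivariate_normal.pdf(x, mean=mu[j], cov=sigma[j])
            for j in range(2)
        ])
        ll = np.sum(np.log(weighted.sum(axis=1)))            # observed-data log-likelihood
        T = weighted / weighted.sum(axis=1, keepdims=True)

        # M step: closed-form weighted maximum likelihood updates
        for j in range(2):
            nj = T[:, j].sum()
            tau[j] = nj / n                                  # mixing weight
            mu[j] = T[:, j] @ x / nj                         # weighted mean
            diff = x - mu[j]
            sigma[j] = (T[:, j, None] * diff).T @ diff / nj  # weighted covariance

        if ll - prev_ll < tol:                               # termination rule
            break
        prev_ll = ll
    return mu, sigma, tau

# Example usage on simulated data (illustrative numbers):
rng = np.random.default_rng(0)
x = np.vstack([rng.normal([0.0, 0.0], 1.0, size=(150, 2)),
               rng.normal([4.0, 4.0], 1.5, size=(100, 2))])
mu0 = [x[rng.integers(len(x))].copy(), x[rng.integers(len(x))].copy()]
sigma0 = [np.eye(2), np.eye(2)]
mu, sigma, tau = em_gmm(x, mu0, sigma0, tau=np.array([0.5, 0.5]))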


Generalization

The algorithm illustrated above can be generalized for mixtures of more than two multivariate normal distributions.


Truncated and censored regression

The EM algorithm has been implemented in the case where an underlying linear regression model exists explaining the variation of some quantity, but where the values actually observed are censored or truncated versions of those represented in the model. Special cases of this model include censored or truncated observations from one normal distribution.


Alternatives

EM typically converges to a local optimum, not necessarily the global optimum, with no bound on the convergence rate in general. It can be arbitrarily poor in high dimensions, and there can be an exponential number of local optima. Hence, a need exists for alternative methods with guaranteed learning, especially in the high-dimensional setting. Alternatives to EM exist with better guarantees for consistency, termed ''moment-based approaches'' or so-called ''spectral techniques''. Moment-based approaches to learning the parameters of a probabilistic model enjoy guarantees such as global convergence under certain conditions, unlike EM, which is often plagued by the issue of getting stuck in local optima. Algorithms with guarantees for learning can be derived for a number of important models such as mixture models, HMMs, etc. For these spectral methods, no spurious local optima occur, and the true parameters can be consistently estimated under some regularity conditions.


See also

* mixture distribution
* compound distribution
* density estimation
* Principal component analysis
* total absorption spectroscopy
* The EM algorithm can be viewed as a special case of the majorize-minimization (MM) algorithm.


Further reading

* Gives an easier explanation of the EM algorithm in terms of lower-bound maximization.
* A well-written short book on EM, including detailed derivation of EM for GMMs, HMMs, and Dirichlet.
* Includes a simplified derivation of the EM equations for Gaussian Mixtures and Gaussian Mixture Hidden Markov Models.


External links

* Various 1D, 2D and 3D demonstrations of EM together with Mixture Modeling are provided as part of the paired SOCR activities and applets. These applets and activities show empirically the properties of the EM algorithm for parameter estimation in diverse settings.
* Class hierarchy in C++ (GPL) including Gaussian Mixtures.
* The on-line textbook ''Information Theory, Inference, and Learning Algorithms'' by David J.C. MacKay includes simple examples of the EM algorithm such as clustering using the soft ''k''-means algorithm, and emphasizes the variational view of the EM algorithm, as described in Chapter 33.7 of version 7.2 (fourth edition).
* ''Variational Algorithms for Approximate Bayesian Inference'' by M. J. Beal includes comparisons of EM to Variational Bayesian EM and derivations of several models including Variational Bayesian HMMs.
* ''The Expectation Maximization Algorithm: A short tutorial'', a self-contained derivation of the EM algorithm by Sean Borman.
* ''The EM Algorithm'' by Xiaojin Zhu.
* ''EM algorithm and variants: an informal tutorial'' by Alexis Roche. A concise and very clear description of EM and many interesting variants.