Expectation-maximization algorithm
In statistical computing, the expectation-maximization (EM) algorithm is an algorithm for finding maximum likelihood estimates of parameters in probabilistic models, where the model depends on unobservable latent variables. EM is frequently used for data clustering in machine learning and computer vision. The algorithm alternates between two steps: the expectation (E) step computes the expected value of the likelihood by including the latent variables as if they could be observed, and the maximization (M) step computes maximum likelihood estimates of the parameters by maximizing the expected likelihood found in the E step. The parameters found in the M step are then used in another E step, and the two steps alternate until convergence.
Description of the EM procedure
We use $\mathbf{y}$ to denote the incomplete data whose values can be observed, and $\mathbf{z}$ to denote the values that cannot be observed, so that $\mathbf{y}$ and $\mathbf{z}$ together make up the complete data. $\mathbf{z}$ may be actual measurements that are missing, or it may be a hidden variable that would simplify the problem if its value were known. For example, in a mixture model the maximum likelihood formula becomes much more tractable if the mixture component that "produced" each sample is known (see the example below).
Estimating the unobserved data
Let $p(\mathbf{y}, \mathbf{z} \mid \theta)$ denote the probability distribution (continuous case) or probability mass function (discrete case) of the complete data, parameterized by the vector $\theta$; the complete-data likelihood follows directly from this function. Moreover, given the observed data, the conditional distribution of the unobserved data can be written as

$$p(\mathbf{z} \mid \mathbf{y}, \theta) = \frac{p(\mathbf{y}, \mathbf{z} \mid \theta)}{p(\mathbf{y} \mid \theta)} = \frac{p(\mathbf{y} \mid \mathbf{z}, \theta)\, p(\mathbf{z} \mid \theta)}{\int p(\mathbf{y} \mid \hat{\mathbf{z}}, \theta)\, p(\hat{\mathbf{z}} \mid \theta)\, d\hat{\mathbf{z}}}$$
when using Bayes' rule and the law of total probability. This formulation only requires knowledge of the observation likelihood given the unobservable data, $p(\mathbf{y} \mid \mathbf{z}, \theta)$, as well as the probability of the unobservable data, $p(\mathbf{z} \mid \theta)$.
Maximize the expected log-likelihood for the complete dataset
An EM algorithm will then iteratively improve an initial estimate $\theta_0$ and construct new estimates $\theta_1, \theta_2, \ldots$. An individual re-estimation step that derives $\theta_{n+1}$ from $\theta_n$ takes the following form:

$$\theta_{n+1} = \arg\max_{\theta}\, E_{\mathbf{z}}\!\left[ \log p(\mathbf{y}, \mathbf{z} \mid \theta) \,\middle|\, \mathbf{y}, \theta_n \right]$$
where $E_{\mathbf{z}}[\,\cdot \mid \mathbf{y}, \theta_n]$ denotes the conditional expectation taken over $\mathbf{z}$, with the parameter of the conditional distribution of $\mathbf{z}$ given $\mathbf{y}$ fixed at $\theta_n$. The log-likelihood is often used instead of the true likelihood because it leads to easier formulas, but still attains its maximum at the same point as the likelihood.
In other words, $\theta_{n+1}$ is the value that maximizes (M) the conditional expectation (E) of the complete-data log-likelihood given the observed variables under the previous parameter value. This expectation is usually denoted $Q(\theta)$. In the continuous case, it would be given by

$$Q(\theta) = \int_{-\infty}^{\infty} p(\mathbf{z} \mid \mathbf{y}, \theta_n)\, \log p(\mathbf{y}, \mathbf{z} \mid \theta)\, d\mathbf{z}$$
Speaking of an expectation (E) step is a bit of a misnomer. What is calculated in the first step are the fixed, data-dependent parameters of the function Q. Once the parameters of Q are known, it is fully determined and is maximized in the second (M) step of an EM algorithm.
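The division of labour between the two steps can be sketched in code. The following is a minimal illustrative sketch (the model, data, and function names are our own, not from the article): a two-component mixture of unit-variance Gaussians with known means 0 and 4, where EM estimates only the mixing weight. The E step computes the responsibilities, which are precisely the fixed, data-dependent parameters of Q; the M step then maximizes Q in closed form.

```python
import math

def em(theta0, e_step, m_step, n_iter=100):
    """Generic EM skeleton: alternately compute Q's parameters and maximize Q."""
    theta = theta0
    for _ in range(n_iter):
        theta = m_step(e_step(theta))  # E step output feeds the M step
    return theta

# Toy instance (hypothetical data): mixture of N(0, 1) and N(4, 1) with
# unknown mixing weight pi for the second component.
data = [0.1, -0.3, 0.2, 3.9, 4.2, 4.1, 0.0, 3.8]

def phi(x, mu):
    """Unit-variance normal density centered at mu."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def e_step(pi):
    # Responsibilities P(component 2 | x_j): the data-dependent parameters of Q
    return [pi * phi(x, 4) / (pi * phi(x, 4) + (1 - pi) * phi(x, 0))
            for x in data]

def m_step(resp):
    # Maximizing Q in closed form: the new pi is the average responsibility
    return sum(resp) / len(resp)

pi_hat = em(0.5, e_step, m_step)
```

Since half the toy data lie near 4, the estimated weight settles near 0.5; swapping in a different model only changes `e_step` and `m_step`, not the skeleton.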
Properties
It can be shown that an EM iteration does not decrease the observed data likelihood function. However, there is no guarantee that the sequence converges to a maximum likelihood estimator. For multimodal distributions, this means that an EM algorithm will converge to a local maximum (or saddle point) of the observed data likelihood function, depending on starting values. There are a variety of heuristic approaches for escaping a local maximum such as using several different random initial estimates, θ0, or applying simulated annealing.
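The random-restart heuristic can be sketched as follows (an illustrative construction, not from the article; the model, data, and function names are our own): EM for the two means of an equal-weight, unit-variance mixture, run from several random initial estimates $\theta_0$, keeping the result with the highest observed-data log-likelihood.

```python
import math
import random

def phi(x, mu):
    """Unit-variance normal density centered at mu."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def loglik(data, mu1, mu2):
    """Observed-data log-likelihood of the equal-weight two-component mixture."""
    return sum(math.log(0.5 * phi(x, mu1) + 0.5 * phi(x, mu2)) for x in data)

def em_means(data, mu1, mu2, n_iter=50):
    """EM updating only the two component means."""
    for _ in range(n_iter):
        # E step: responsibility of component 2 for each point
        r = [phi(x, mu2) / (phi(x, mu1) + phi(x, mu2)) for x in data]
        # M step: responsibility-weighted means
        mu1 = sum((1 - ri) * x for ri, x in zip(r, data)) / sum(1 - ri for ri in r)
        mu2 = sum(ri * x for ri, x in zip(r, data)) / sum(r)
    return mu1, mu2

def em_restarts(data, n_starts=10, seed=0):
    """Run EM from several random initial estimates; keep the best by log-likelihood."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_starts):
        mu1, mu2 = em_means(data, rng.uniform(-5, 10), rng.uniform(-5, 10))
        ll = loglik(data, mu1, mu2)
        if best is None or ll > best[0]:
            best = (ll, mu1, mu2)
    return best
```

Ranking runs by the observed-data log-likelihood is what makes this a principled tiebreaker: EM never decreases it, so the best run is the one that escaped the worst local maxima.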
EM is particularly useful when maximum likelihood estimation of a complete data model is easy. If closed-form estimators exist, the M step is often trivial. A classic example is maximum likelihood estimation of a finite mixture of Gaussians, where each component of the mixture can be estimated trivially if the mixing distribution is known.
"Expectation-maximization" is a description of a class of related algorithms, not a specific algorithm; EM is a recipe or meta-algorithm which is used to devise particular algorithms. The Baum-Welch algorithm is an example of an EM algorithm applied to hidden Markov models. Another example is the EM algorithm for fitting a mixture density model.
An EM algorithm can also find maximum a posteriori (MAP) estimates, by performing MAP estimation in the M step, rather than maximum likelihood.
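As a sketch of the MAP variant (again an illustrative construction with a hypothetical model, not from the article): with a conjugate Beta(a, b) prior on a mixing weight, the M step maximizes the expected complete-data log-posterior rather than the log-likelihood, which for this prior simply adds pseudocounts to the closed-form update.

```python
import math

def phi(x, mu):
    """Unit-variance normal density centered at mu."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def em_map_weight(data, a=2.0, b=2.0, pi=0.5, n_iter=100):
    """MAP-EM for the weight of the N(4,1) component in a two-component
    mixture with N(0,1), under a Beta(a, b) prior on the weight
    (hypothetical model for illustration)."""
    m = len(data)
    for _ in range(n_iter):
        # E step: responsibilities, exactly as in maximum likelihood EM
        r = [pi * phi(x, 4) / (pi * phi(x, 4) + (1 - pi) * phi(x, 0))
             for x in data]
        # M step: maximize E[log-likelihood] + log prior; the Beta(a, b)
        # prior contributes (a-1) and (b-1) pseudocounts
        pi = (sum(r) + a - 1) / (m + a + b - 2)
    return pi
```

Only the M step changes; the E step is untouched, and a = b = 1 (a flat prior) recovers the maximum likelihood update.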
There are other methods for finding maximum likelihood estimates, such as gradient descent, conjugate gradient or variations of the Gauss-Newton method. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function.
Incremental versions
The classic EM procedure is to replace both Q and θ with their optimal possible (argmax) values at each iteration. However, it can be shown (see Neal & Hinton, 1999) that simply finding Q and θ that give some improvement over their current values will also ensure successful convergence.
For example, to improve Q, we could restrict the space of possible functions to a computationally simple distribution such as a factorial distribution,

$$Q(\mathbf{z}) = \prod_i Q_i(z_i)$$
Thus at each E step we compute the variational approximation of Q.
To improve θ, we could use any hill-climbing method, and not worry about finding the optimal θ, just some improvement. This method is also known as Generalized EM (GEM).
Relation to variational Bayes methods
EM is a partially non-Bayesian, maximum likelihood method. Its final result gives a probability distribution over the latent variables (in the Bayesian style) together with a point estimate for θ (either a maximum likelihood estimate or a posterior mode). We may want a fully Bayesian version of this, giving a probability distribution over θ as well as the latent variables. In fact the Bayesian approach to inference is simply to treat θ as another latent variable. In this paradigm, the distinction between the E and M steps disappears. If we use the factorized Q approximation as described above (variational Bayes), we may iterate over each latent variable (now including θ) and optimize them one at a time. There are now k steps per iteration, where k is the number of latent variables. For graphical models this is easy to do as each variable's new Q depends only on its Markov blanket, so local message passing can be used for efficient inference.
Example: Gaussian mixture
Assume that the samples $x_1, \ldots, x_m$, where $x_j \in \mathbb{R}^n$, are drawn from the Gaussians $N_1, \ldots, N_k$, such that

$$P(x \mid N_i) = \frac{1}{(2\pi)^{n/2} |\Sigma_i|^{1/2}} \exp\!\left( -\tfrac{1}{2} (x - \mu_i)^{\mathrm{T}} \Sigma_i^{-1} (x - \mu_i) \right)$$

The model you are trying to estimate is

$$\theta = (\pi_1, \ldots, \pi_k, \mu_1, \ldots, \mu_k, \Sigma_1, \ldots, \Sigma_k), \qquad P(x \mid \theta) = \sum_{i=1}^{k} \pi_i\, P(x \mid N_i)$$
E-step
Estimation of the unobserved event (which Gaussian is used), conditioned on the observation, using the values from the last maximisation step:

$$E[z_{ij}] = P(z_j = i \mid x_j, \theta_t) = \frac{\pi_i^t\, P(x_j \mid \mu_i^t, \Sigma_i^t)}{\sum_{l=1}^{k} \pi_l^t\, P(x_j \mid \mu_l^t, \Sigma_l^t)}$$
M-step
You want to maximise the expected log-likelihood of the joint event:

$$Q(\theta) = \sum_{j=1}^{m} \sum_{i=1}^{k} E[z_{ij}] \log\!\left( \pi_i\, P(x_j \mid \mu_i, \Sigma_i) \right)$$

If we expand the probability of the joint event, we get

$$Q(\theta) = \sum_{j=1}^{m} \sum_{i=1}^{k} E[z_{ij}] \left( \log \pi_i + \log P(x_j \mid \mu_i, \Sigma_i) \right)$$
You have the constraint

$$\sum_{i=1}^{k} \pi_i = 1$$

If we add a Lagrange multiplier and expand the pdf, we get

$$Q'(\theta) = \sum_{j=1}^{m} \sum_{i=1}^{k} E[z_{ij}] \left( \log \pi_i - \tfrac{1}{2} \log\!\left( (2\pi)^n |\Sigma_i| \right) - \tfrac{1}{2} (x_j - \mu_i)^{\mathrm{T}} \Sigma_i^{-1} (x_j - \mu_i) \right) - \lambda \left( \sum_{i=1}^{k} \pi_i - 1 \right)$$
To find the new estimate $\theta^{t+1}$, you find a maximum where $\frac{\partial Q'}{\partial \theta} = 0$.

New estimate for the mean (using some differentiation rules from matrix calculus):

$$\mu_i^{t+1} = \frac{\sum_{j=1}^{m} E[z_{ij}]\, x_j}{\sum_{j=1}^{m} E[z_{ij}]}$$

New estimate for the covariance:

$$\Sigma_i^{t+1} = \frac{\sum_{j=1}^{m} E[z_{ij}]\, (x_j - \mu_i^{t+1})(x_j - \mu_i^{t+1})^{\mathrm{T}}}{\sum_{j=1}^{m} E[z_{ij}]}$$

New estimate for the class probability:

$$\frac{\partial Q'}{\partial \pi_i} = \sum_{j=1}^{m} \frac{E[z_{ij}]}{\pi_i} - \lambda = 0 \quad\Rightarrow\quad \pi_i = \frac{1}{\lambda} \sum_{j=1}^{m} E[z_{ij}]$$

Inserting into the constraint (using $\sum_{i=1}^{k} E[z_{ij}] = 1$ for each $j$):

$$\sum_{i=1}^{k} \pi_i = \frac{1}{\lambda} \sum_{i=1}^{k} \sum_{j=1}^{m} E[z_{ij}] = \frac{m}{\lambda} = 1 \quad\Rightarrow\quad \lambda = m$$

Inserting $\lambda$ into our estimate:

$$\pi_i^{t+1} = \frac{1}{m} \sum_{j=1}^{m} E[z_{ij}]$$
These estimates now become our $\theta^{t+1}$, to be used in the next estimation step.
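The full update cycle above can be collected into a short implementation. This is a minimal sketch: the function name, the farthest-point initialization of the means, and the small ridge added to the covariances are our own choices, and no convergence test is included.

```python
import numpy as np

def em_gmm(X, k, n_iter=50):
    """EM for a k-component Gaussian mixture, following the updates derived above.

    X: (m, n) data matrix.  Returns (pi, mu, sigma)."""
    m, n = X.shape
    pi = np.full(k, 1.0 / k)

    # Greedy farthest-point heuristic for the initial means (our own choice)
    mu = [X[0]]
    for _ in range(k - 1):
        dist = np.min([np.sum((X - c) ** 2, axis=1) for c in mu], axis=0)
        mu.append(X[np.argmax(dist)])
    mu = np.array(mu)
    sigma = np.array([np.cov(X.T) + 1e-6 * np.eye(n) for _ in range(k)])

    for _ in range(n_iter):
        # E step: responsibilities E[z_ij] = P(class i | x_j, theta_t)
        r = np.empty((m, k))
        for i in range(k):
            d = X - mu[i]
            quad = np.sum(d @ np.linalg.inv(sigma[i]) * d, axis=1)
            norm = (2 * np.pi) ** (-n / 2) * np.linalg.det(sigma[i]) ** -0.5
            r[:, i] = pi[i] * norm * np.exp(-0.5 * quad)
        r /= r.sum(axis=1, keepdims=True)

        # M step: the closed-form maximizers derived above
        nk = r.sum(axis=0)                  # sum_j E[z_ij]
        pi = nk / m                         # new class probabilities
        mu = (r.T @ X) / nk[:, None]        # new means
        for i in range(k):
            d = X - mu[i]
            sigma[i] = (r[:, i, None] * d).T @ d / nk[i] + 1e-6 * np.eye(n)
    return pi, mu, sigma
```

Each iteration performs one E step (the responsibilities) followed by one M step applying exactly the closed-form updates for $\mu_i^{t+1}$, $\Sigma_i^{t+1}$, and $\pi_i^{t+1}$.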
References
- Arthur Dempster, Nan Laird, and Donald Rubin. "Maximum likelihood from incomplete data via the EM algorithm". Journal of the Royal Statistical Society, Series B, 39(1):1–38, 1977.
- Robert Hogg, Joseph McKean and Allen Craig. Introduction to Mathematical Statistics. pp. 359-364. Upper Saddle River, NJ: Pearson Prentice Hall, 2005.
- Radford Neal, Geoffrey Hinton. "A view of the EM algorithm that justifies incremental, sparse, and other variants". In Michael I. Jordan (editor), Learning in Graphical Models pp 355-368. Cambridge, MA: MIT Press, 1999.
- The on-line textbook: Information Theory, Inference, and Learning Algorithms, by David J.C. MacKay includes simple examples of the E-M algorithm such as clustering using the soft K-means algorithm, and emphasizes the variational view of the E-M algorithm.
- A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models, by J. Bilmes includes a simplified derivation of the EM equations for Gaussian Mixtures and Gaussian Mixture Hidden Markov Models.
See also
- Estimation theory
- Data clustering