The expectation-maximization (EM) method facilitates computation of maximum likelihood (ML) and maximum penalized likelihood (MPL) solutions. The procedure requires specification of unobservable complete data that augment the measured, or incomplete, data. This specification defines the conditional expectation of the complete-data log-likelihood, the function Q(θ) computed in the E-step. The EM algorithm is most effective when maximizing Q(θ) is easier than maximizing the likelihood function directly. The Monte Carlo EM (MCEM) algorithm of Wei & Tanner (1990) was introduced for problems where computation of Q is difficult or intractable. However, Monte Carlo sampling can be computationally expensive, e.g. in signal processing applications involving large numbers of parameters. We provide another approach: a modification of the standard EM algorithm that avoids computation of conditional expectations. The new algorithm uses the score function of the incomplete data. The role of augmented data is retained in our algorithm, named augmented data scoring (ADS): ADS determines the step sizes of a gradient-type algorithm through the complete-data information matrix. An algorithm of Titterington (1984) for sequentially acquired data makes similar use of augmented data and Fisher information, but in a sequential context. We establish a close connection between the ADS and EM algorithms. By substituting a Fisher scoring iteration for the M-step optimization, ADS also circumvents difficulties in the M-step, another common obstacle in applying an EM algorithm.
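To make the abstract's idea concrete, the update it describes has the form θ ← θ + I_c(θ)⁻¹ s(θ), where s is the incomplete-data score and I_c is the complete-data Fisher information. The following is a minimal illustrative sketch, not code from the paper: the model (an exponential rate λ estimated from right-censored lifetimes), the variable names, and the stopping rule are all assumptions chosen so that both the score and the complete-data information have simple closed forms.

```python
import numpy as np

# Illustrative model (an assumption, not the paper's example):
# exponential lifetimes, right-censored at a fixed point c.
rng = np.random.default_rng(0)
lam_true, n, c = 2.0, 500, 0.6
lifetimes = rng.exponential(1.0 / lam_true, n)
obs = np.minimum(lifetimes, c)          # observed (possibly censored) times
d = int((lifetimes < c).sum())          # number of uncensored observations
T = obs.sum()                           # total time on test

lam = 1.0                               # starting value
for _ in range(200):
    score = d / lam - T                 # incomplete-data score s(lam)
    info_c = n / lam**2                 # complete-data Fisher information I_c(lam)
    lam = lam + score / info_c          # ADS-style step: theta + I_c^{-1} s(theta)

# For this model the incomplete-data MLE is available in closed form (d / T),
# so the iteration can be checked against it.
print(lam, d / T)
```

In this toy model the iteration contracts toward the MLE at linear rate 1 − d/n, i.e. the fraction of censored (missing) observations, mirroring the well-known link between EM's convergence rate and the fraction of missing information.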
Number of pages: 16
Journal: Communications in Statistics - Theory and Methods
Publication status: Published - 1998
- Data augmentation
- EM algorithm
- Fisher scoring
- Observed and expected information