ML methods are based on the assumption that the missingness mechanism is at least MAR, though not necessarily MCAR; they additionally assume that the data are normally distributed.

Non-Bayesian methods: the maximum likelihood (ML) approach with the EM algorithm. The formulation of the EM algorithm made it feasible to compute ML estimates in many missing-data problems. Because the EM algorithm only produces correlation and mean parameters that must subsequently serve as input for the structural equation model, this technique is considered an indirect ML procedure, in contrast with the FIML approach, which can estimate latent variable models directly from raw data. Maximum-likelihood (ML) methods, particularly expectation-maximization (EM), are increasingly seen as the methods of choice for analyses of incomplete data (e.g., Schafer, 1997), although their underlying assumption of normally distributed data would seem to limit their applicability. If direct ML (FIML) and EM yield equivalent parameter estimates, one may question the utility of a procedure that requires an additional analytic step to preprocess the missing data.

MAR is satisfied when missingness on a variable x is unrelated to the underlying values of x but is related to another measured variable, say y. The important point here is that MAR only holds if the cause of missingness (e.g., y) is included in the substantive model. If y is a measured variable that is not included in the substantive model, MAR does not hold, and biased parameter estimates may result from direct ML. Unfortunately, SEM software packages that implement direct ML do not currently offer options for readily incorporating substantively irrelevant (i.e., auxiliary) variables into the ultimate model.
It is quite easy to incorporate information from auxiliary variables when using the EM algorithm, thereby increasing the plausibility of MAR. When preprocessing the data, one simply runs EM on a superset of the variables included in the ultimate analysis. A subset of the EM covariance matrix can then be extracted for input into an SEM software package. For example, reconsider the situation where missingness on x is related to y, but y is not in the substantive model. In this case, the initial EM analysis would include both x and y, but the SEM model need not include y, because the covariance matrix elements involving x were already conditioned on y.

The FIML estimator implemented in AMOS and Mplus yields point estimates of model parameters identical to those that would be obtained using the EM algorithm. However, a drawback of EM is that standard errors of regression model parameters are not obtained directly from the procedure and require additional analytic steps (e.g., bootstrapping). Standard errors would appear on computer output when using the EM covariance matrix as input for further analyses (e.g., SEM, multiple regression), but these standard errors would be based on the wrong sample size and are therefore incorrect; as noted above, correct standard errors are obtained directly when using the FIML estimator offered in SEM packages.

Distinction between the EM algorithm and the direct ML approach (i.e., FIML): although both produce ML estimates, the EM algorithm does not impose the restrictions on the covariance matrix implied by the structural model. Enders (2001) suggested that an advantage of the EM algorithm over direct ML estimation is its ability to incorporate variables into the missing-data treatment that are not part of the substantive model being tested (i.e., auxiliary variables).
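The extraction step described above can be sketched in a few lines. This is a minimal illustration with hypothetical numbers, not output from an actual EM run: the substantive model uses x1 and x2, while y is an auxiliary variable related to the missingness on x1.

```python
import numpy as np
import pandas as pd

# Hypothetical EM-estimated covariance matrix over a SUPERSET of variables.
# The values below are made up for illustration.
em_cov = pd.DataFrame(
    [[1.00, 0.30, 0.45],
     [0.30, 1.00, 0.25],
     [0.45, 0.25, 1.00]],
    index=["x1", "x2", "y"], columns=["x1", "x2", "y"],
)
em_means = pd.Series([0.0, 0.1, 0.2], index=["x1", "x2", "y"])

# Extract only the substantive-model variables for input to the SEM package.
# The x1/x2 elements were already conditioned on y during the EM run,
# so y itself need not appear in the structural model.
model_vars = ["x1", "x2"]
sub_cov = em_cov.loc[model_vars, model_vars]
sub_means = em_means[model_vars]
```

The submatrix `sub_cov` (and `sub_means`) would then be supplied, together with the sample size, as summary-data input to the SEM software.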
The EM algorithm can provide ML estimates of the means and correlations based on a large set of variables (both central and auxiliary) that may be suspected to produce missingness, while only a subset of these variables is used in the substantive model of interest. FIML, as generally applied, gains protection under MAR only when the variables supposed to produce the missingness (or correlated with the variables containing missingness) are included in the model being tested. Recent work has begun to address the incorporation of auxiliary variables into FIML approaches. When using direct ML (FIML) under MAR conditions, the cause of the missingness must be included in the substantive model; if a measured variable, say y, is related to the missingness but is not included in the ultimate model, then MAR does not hold. Currently, SEM software packages offer no default options for including auxiliary variables in the model, although the analyst can accomplish this with some extra effort.

Expectation-maximization is an iterative process with an E step and an M step. The E step finds the conditional expectation of the missing data given the observed values and the current parameter estimates; these expectations are then substituted for the missing values. The M step then uses maximum likelihood to estimate the parameters as though the missing data had been filled in. Computer or user criteria determine when convergence has occurred. More formally, the EM algorithm generates a sequence of parameter estimates by cycling iteratively between an expectation (E) step and a maximization (M) step: in the E step, the conditional expectation of the complete-data log-likelihood is computed given the observed data and the current parameter estimates; in the M step, that expected log-likelihood is maximized to obtain updated parameter estimates.
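The E and M steps described above can be sketched for the multivariate-normal case. This is a minimal illustration, not production code: the E step regresses each row's missing values on its observed values under the current estimates (plus the conditional-covariance correction that distinguishes EM from naive mean substitution), and the M step recomputes complete-data ML estimates.

```python
import numpy as np

def em_mvn(X, max_iter=200, tol=1e-6):
    """EM estimates of the mean vector and covariance matrix for a
    multivariate normal sample X (n x p), with np.nan marking missing cells.
    A minimal sketch; real software adds safeguards and ridging."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    # Start from observed-value means; fill missing cells with those means.
    mu = np.nanmean(X, axis=0)
    sigma = np.cov(np.where(np.isnan(X), mu, X), rowvar=False)
    for _ in range(max_iter):
        X_hat = np.where(np.isnan(X), mu, X)
        correction = np.zeros((p, p))
        for i in range(n):
            m = np.isnan(X[i])          # missing indices for this row
            if not m.any():
                continue
            o = ~m
            # E step: conditional expectation of the missing values given
            # the observed values and the current (mu, sigma).
            reg = sigma[np.ix_(m, o)] @ np.linalg.inv(sigma[np.ix_(o, o)])
            X_hat[i, m] = mu[m] + reg @ (X[i, o] - mu[o])
            # Conditional covariance of the missing values; accumulating it
            # is what keeps the variances from being biased downward.
            correction[np.ix_(m, m)] += sigma[np.ix_(m, m)] - reg @ sigma[np.ix_(o, m)]
        # M step: complete-data ML estimates from the filled-in data.
        mu_new = X_hat.mean(axis=0)
        diff = X_hat - mu_new
        sigma_new = (diff.T @ diff + correction) / n
        converged = (np.max(np.abs(mu_new - mu)) < tol
                     and np.max(np.abs(sigma_new - sigma)) < tol)
        mu, sigma = mu_new, sigma_new
        if converged:
            break
    return mu, sigma

# Usage with simulated MAR data: x2 goes missing when x1 is large.
rng = np.random.default_rng(0)
true_mu = np.array([0.0, 1.0])
true_sigma = np.array([[1.0, 0.5], [0.5, 2.0]])
X = rng.multivariate_normal(true_mu, true_sigma, size=500)
X_miss = X.copy()
X_miss[X[:, 0] > 1.0, 1] = np.nan
mu_hat, sigma_hat = em_mvn(X_miss)
```

The resulting `mu_hat` and `sigma_hat` are the kind of EM summary statistics that would be passed on to an SEM package in the two-step approach discussed here.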
The iteration process stops when the parameter estimates converge to some pre-established criterion. The results from EM should be very similar indeed to those obtained by the AMOS FIML method. Although NORM is a program designed for multiple imputation, it will in fact allow you to obtain EM estimates without using multiple imputation: run the EM step in NORM and then go to "impute from parameters," and it will output a file containing these values, just as you would get from SPSS.

Many SEM analysts have used the means and covariance matrix produced by the EM algorithm as input to SEM software. However, this two-step approach is less than ideal, for two reasons. First, when the SEM to be estimated is just-identified (i.e., the model implies no restrictions on the covariance matrix), the resulting parameter estimates are true ML estimates; but in the more usual case where the SEM is overidentified, the resulting estimates are not true ML estimates and are generally less efficient (although the loss of efficiency is likely to be small). Second, the standard errors reported by SEM software under this two-step method will not be consistent estimates of the true standard errors.
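The standard-error problem in the two-step approach comes down to the sample size the software assumes. A toy illustration with hypothetical numbers: a summary-data analysis that treats the EM covariance matrix as if it came from a complete sample at the full n reports standard errors that shrink with that n, even though the missing cells contribute less information than that.

```python
import math

# Hypothetical figures: full sample size vs. the (smaller) amount of
# information actually present after missingness. The SE of a mean with
# known sd illustrates the direction of the bias.
sd = 1.0
n_full, n_effective = 500, 350
se_full = sd / math.sqrt(n_full)          # what summary-data SEM reports
se_effective = sd / math.sqrt(n_effective)  # closer to the truth
```

Because `se_full < se_effective`, the reported standard errors overstate precision, which is why FIML (or bootstrapped EM) standard errors are preferred.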