% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/MxCompute.R
\name{mxComputeEM}
\alias{mxComputeEM}
\alias{MxComputeEM-class}
\title{Fit a model using DLR's (1977) Expectation-Maximization (EM) algorithm}
\usage{
mxComputeEM(
  expectation = NULL,
  predict = NA_character_,
  mstep,
  observedFit = "fitfunction",
  ...,
  maxIter = 500L,
  tolerance = 1e-09,
  verbose = 0L,
  freeSet = NA_character_,
  accel = "varadhan2008",
  information = NA_character_,
  infoArgs = list(),
  estep = NULL
)
}
\arguments{
\item{expectation}{a vector of expectation names \lifecycle{deprecated}}

\item{predict}{what to predict from the observed data \lifecycle{deprecated}}

\item{mstep}{a compute plan to optimize the completed data model}

\item{observedFit}{the name of the observed data fit function (defaults to "fitfunction")}

\item{...}{Not used. Forces remaining arguments to be specified by name.}

\item{maxIter}{maximum number of iterations}

\item{tolerance}{optimization is considered converged when the maximum relative change in fit is less than tolerance}

\item{verbose}{integer. Level of run-time diagnostic output. Set to zero to disable}

\item{freeSet}{names of matrices containing free variables}

\item{accel}{name of the acceleration method ("varadhan2008" or "ramsay1975")}

\item{information}{name of the information matrix approximation method}

\item{infoArgs}{arguments to control the information matrix method}

\item{estep}{a compute plan to perform the expectation step}
}
\description{
The EM algorithm consists of the following steps: start with an
initial parameter vector; predict the missing data to form a
completed data model; optimize the completed data model to obtain
a new parameter vector; repeat these steps until the convergence
criteria are met.
}
\details{
The arguments to this function have evolved. The old style
\code{mxComputeEM(e, p, mstep=m)} is equivalent to the new style
\code{mxComputeEM(estep=mxComputeOnce(e, p), mstep=m)}. This change
allows the API to match the literature on the EM method more
closely. You might use \code{mxAlgebra(..., recompute='onDemand')} to
contain the results of the E-step and then cause this algebra to be
recomputed using \code{mxComputeOnce}.

This compute plan does not work with every expectation. It requires
a special kind of expectation that can predict its missing data to
create a completed data model.

The EM algorithm does not itself produce a parameter covariance
matrix for standard errors. Two methods to obtain one are included:
the Oakes (1999) direct method and S-EM, an implementation of
Meng & Rubin (1991).
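The information matrix methods can be followed by post-processing
steps in a compute sequence. A sketch, assuming E-step and M-step
plans \code{est} and \code{mst} have already been defined and that
the observed data fit function is named \code{'fitfunction'}; the
exact \code{infoArgs} depend on the model:

\preformatted{
plan <- mxComputeSequence(list(
  mxComputeEM(estep=est, mstep=mst,
              information='oakes1999',
              infoArgs=list(fitfunction='fitfunction')),
  mxComputeStandardError(),
  mxComputeReportDeriv()))
}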

Of the two acceleration methods, Ramsay (1975) was recommended by
Bock, Gibbons, & Muraki (1988); Varadhan & Roland (2008) is the
default.
}
\examples{
library(OpenMx)

set.seed(190127)

# Simulate a two-component Gaussian mixture: half the sample from
# N(0,1) and half from N(3,1).
N <- 200
x <- matrix(c(rnorm(N/2, 0, 1),
              rnorm(N/2, 3, 1)),
            ncol=1, dimnames=list(NULL, "x"))
data4mx <- mxData(observed=x, type="raw")

# Per-class model: univariate normal with free mean and variance.
class1 <- mxModel(
  "Class1",
  mxMatrix(type="Full", nrow=1, ncol=1, free=TRUE, values=0, name="Mu"),
  mxMatrix(type="Full", nrow=1, ncol=1, free=TRUE, values=4, name="Sigma"),
  mxExpectationNormal(covariance="Sigma", means="Mu", dimnames="x"),
  mxFitFunctionML(vector=TRUE))

class2 <- mxRename(class1, "Class2")

mm <- mxModel(
  "Mixture", data4mx, class1, class2,
  mxAlgebra((1-Posteriors) * Class1.fitfunction, name="PL1"),
  mxAlgebra(Posteriors * Class2.fitfunction, name="PL2"),
  mxAlgebra(PL1 + PL2, name="PL"),
  # Posterior class probabilities; recomputed only when the E-step runs.
  mxAlgebra(PL2 / PL, recompute='onDemand',
            initial=matrix(runif(N, .4, .6), nrow=N, ncol=1),
            name="Posteriors"),
  mxAlgebra(-2*sum(log(PL)), name="FF"),
  mxFitFunctionAlgebra(algebra="FF"),
  mxComputeEM(
    estep=mxComputeOnce("Mixture.Posteriors"),
    mstep=mxComputeGradientDescent(fitfunction="Mixture.fitfunction")))

mm <- mxOption(mm, "Max minutes", 1/20)  # remove this line to find the optimum

mmfit <- mxRun(mm)
summary(mmfit)
}
\references{
Bock, R. D., Gibbons, R., & Muraki, E. (1988). Full-information
item factor analysis. \emph{Applied Psychological Measurement,
6}(4), 431-444.

Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Maximum likelihood from
incomplete data via the EM algorithm. \emph{Journal of the Royal Statistical Society.
Series B (Methodological)}, 1-38.

Meng, X.-L. & Rubin, D. B. (1991). Using EM to obtain asymptotic variance-covariance
matrices: The SEM algorithm. \emph{Journal of the American Statistical Association,
86}(416), 899-909.

Oakes, D. (1999). Direct calculation of the information matrix via
the EM algorithm. \emph{Journal of the Royal Statistical Society:
Series B (Statistical Methodology), 61}(2), 479-482.

Ramsay, J. O. (1975). Solving implicit equations in psychometric data analysis.
\emph{Psychometrika, 40}(3), 337-360.

Varadhan, R. & Roland, C. (2008). Simple and globally convergent
methods for accelerating the convergence of any EM
algorithm. \emph{Scandinavian Journal of Statistics, 35}, 335-353.
}
\seealso{
\link[=mxAlgebra]{MxAlgebra}, \link{mxComputeOnce}
}