% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/loo_compare.R, R/loo_compare.psis_loo_ss_list.R
\name{loo_compare}
\alias{loo_compare}
\alias{loo_compare.default}
\alias{print.compare.loo}
\alias{print.compare.loo_ss}
\title{Model comparison}
\usage{
loo_compare(x, ...)

\method{loo_compare}{default}(x, ...)

\method{print}{compare.loo}(x, ..., digits = 1, simplify = TRUE)

\method{print}{compare.loo_ss}(x, ..., digits = 1, simplify = TRUE)
}
\arguments{
\item{x}{An object of class \code{"loo"} or a list of such objects. If a list is
used then the list names will be used as the model names in the output. See
\strong{Examples}.}
\item{...}{Additional objects of class \code{"loo"}, if not passed in as a single
list.}
\item{digits}{For the print method only, the number of digits to use when
printing.}
\item{simplify}{For the print method only, should only the essential columns
of the summary matrix be printed? The entire matrix is always returned, but
by default only the most important columns are printed.}
}
\value{
A matrix with class \code{"compare.loo"} that has its own
print method. See the \strong{Details} section.
}
\description{
Compare fitted models based on \link[=loo-glossary]{ELPD}.

By default the print method shows only the most important information. Use
\code{print(..., simplify=FALSE)} to print a more detailed summary.
}
\details{
When comparing two fitted models, we can estimate the difference in their
expected predictive accuracy by the difference in
\code{\link[=loo-glossary]{elpd_loo}} or \code{elpd_waic} (or multiplied by \eqn{-2}, if
desired, to be on the deviance scale).

When using \code{loo_compare()}, the returned matrix will have one row per model
and several columns of estimates. The values in the
\code{\link[=loo-glossary]{elpd_diff}} and \code{\link[=loo-glossary]{se_diff}} columns of the
returned matrix are computed by making pairwise comparisons between each
model and the model with the largest ELPD (the model in the first row). For
this reason the \code{elpd_diff} column will always have the value \code{0} in the
first row (i.e., the difference between the preferred model and itself) and
negative values in subsequent rows for the remaining models.
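
As a hedged illustration (using the \code{loo} objects created in the
\strong{Examples} below; \code{loo_compare()} performs the equivalent
computation internally), the \code{elpd_diff} column can be recovered from
the per-model \code{elpd_loo} estimates:

\preformatted{
# Sketch only: relate the elpd_diff column to the per-model estimates
elpds <- sapply(list(loo1, loo2, loo3),
                function(l) l$estimates["elpd_loo", "Estimate"])
elpds - max(elpds)        # the elpd_diff column (0 for the best model)
-2 * (elpds - max(elpds)) # the same differences on the deviance scale
}
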
To compute the standard error of the difference in \link[=loo-glossary]{ELPD} ---
which should not be expected to equal the difference of the standard errors
--- we use a paired estimate to take advantage of the fact that the same
set of \eqn{N} data points was used to fit both models. These calculations
should be most useful when \eqn{N} is large, because then non-normality of
the distribution is not such an issue when estimating the uncertainty in
these sums. These standard errors, for all their flaws, should give a
better sense of uncertainty than what is obtained using the current
standard approach of comparing differences of deviances to a Chi-squared
distribution, a practice derived for Gaussian linear models or
asymptotically, and which only applies to nested models in any case.
Sivula et al. (2025) discuss the conditions under which the normal
approximation used for the SE and \code{se_diff} estimates is accurate.
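
A minimal sketch of this paired calculation for two models, assuming both
\code{"loo"} objects were computed from the same \eqn{N} data points (each
object stores its pointwise \code{elpd_loo} values in its \code{pointwise}
matrix):

\preformatted{
# Sketch only: paired estimate of the ELPD difference and its SE
diff_i <- loo1$pointwise[, "elpd_loo"] - loo2$pointwise[, "elpd_loo"]
sum(diff_i)                       # elpd_diff of loo1 relative to loo2
sqrt(length(diff_i)) * sd(diff_i) # se_diff: paired, not a difference of SEs
}
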
If more than \eqn{11} models are compared, we internally recompute the model
differences using the median model by ELPD as the baseline model. We then
estimate whether the differences in predictive performance are potentially
due to chance as described by McLatchie and Vehtari (2024). This will flag
a warning if it is deemed that there is a risk of over-fitting due to the
selection process. In that case users are recommended to avoid model
selection based on LOO-CV, and instead to favor model averaging/stacking or
projection predictive inference.
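
The following simulation is purely illustrative (it is not the package's
internal procedure) of why such differences can arise by chance: even when
all \eqn{K} candidate models have identical true predictive performance,
the apparently best model beats the median model by noise alone:

\preformatted{
# Illustration only: K models whose true ELPDs are all equal
set.seed(123)
K <- 20
noisy_elpds <- rnorm(K, mean = 0, sd = 4) # noise in the ELPD estimates
max(noisy_elpds) - median(noisy_elpds)    # spurious "improvement"
}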
}
\examples{
# very artificial example, just for demonstration!
LL <- example_loglik_array()
loo1 <- loo(LL)     # should be the worst model when compared
loo2 <- loo(LL + 1) # should be the second-best model when compared
loo3 <- loo(LL + 2) # should be the best model when compared
comp <- loo_compare(loo1, loo2, loo3)
print(comp, digits = 2)
# show more details with simplify=FALSE
# (will be the same for all models in this artificial example)
print(comp, simplify = FALSE, digits = 3)
# can use a list of objects with custom names
# will use apple, banana, and cherry as the model names in the output
loo_compare(list("apple" = loo1, "banana" = loo2, "cherry" = loo3))
\dontrun{
# works for waic (and kfold) too
loo_compare(waic(LL), waic(LL - 10))
}
}
\references{
Vehtari, A., Gelman, A., and Gabry, J. (2017). Practical Bayesian model
evaluation using leave-one-out cross-validation and WAIC.
\emph{Statistics and Computing}. 27(5), 1413--1432. doi:10.1007/s11222-016-9696-4
(\href{https://link.springer.com/article/10.1007/s11222-016-9696-4}{journal version},
\href{https://arxiv.org/abs/1507.04544}{preprint arXiv:1507.04544}).

Vehtari, A., Simpson, D., Gelman, A., Yao, Y., and Gabry, J. (2024).
Pareto smoothed importance sampling. \emph{Journal of Machine Learning Research},
25(72), 1--58.
\href{https://jmlr.org/papers/v25/19-556.html}{PDF}

Sivula, T., Magnusson, M., Matamoros, A. A., and Vehtari, A. (2025).
Uncertainty in Bayesian leave-one-out cross-validation based model
comparison. \emph{Bayesian Analysis}. \doi{10.1214/25-BA1569}

McLatchie, Y., and Vehtari, A. (2024). Efficient estimation and
correction of selection-induced bias with order statistics.
\emph{Statistics and Computing}. 34(132). \doi{10.1007/s11222-024-10442-4}
}
\seealso{
\itemize{
\item The \href{https://mc-stan.org/loo/articles/online-only/faq.html}{FAQ page} on
the \strong{loo} website for answers to frequently asked questions.
}
}