File: equivalence_test.lm.Rd

% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/equivalence_test.R
\name{equivalence_test.lm}
\alias{equivalence_test.lm}
\alias{equivalence_test.ggeffects}
\title{Equivalence test}
\usage{
\method{equivalence_test}{lm}(
  x,
  range = "default",
  ci = 0.95,
  rule = "classic",
  effects = "fixed",
  vcov = NULL,
  vcov_args = NULL,
  verbose = TRUE,
  ...
)

\method{equivalence_test}{ggeffects}(
  x,
  range = "default",
  rule = "classic",
  test = "pairwise",
  verbose = TRUE,
  ...
)
}
\arguments{
\item{x}{A statistical model.}

\item{range}{The range of practical equivalence of an effect. May be
\code{"default"}, to automatically define this range based on properties of the
model's data, or a numeric vector of length two (e.g., \code{c(-0.1, 0.1)}),
defining the lower and upper limits.}

\item{ci}{Confidence Interval (CI) level. Defaults to \code{0.95} (\verb{95\%}).}

\item{rule}{Character, indicating the rules when testing for practical
equivalence. Can be \code{"bayes"}, \code{"classic"} or \code{"cet"}. See 'Details'.}

\item{effects}{Should parameters for fixed effects (\code{"fixed"}), random
effects (\code{"random"}), both fixed and random effects (\code{"all"}), or the
overall (sum of fixed and random) effects (\code{"random_total"}) be returned?
Only applies to mixed models. May be abbreviated. If the calculation of
random effects parameters takes too long, you may use \code{effects = "fixed"}.}

\item{vcov}{Variance-covariance matrix used to compute uncertainty estimates
(e.g., for robust standard errors). This argument accepts a covariance
matrix, a function which returns a covariance matrix, or a string which
identifies the function to be used to compute the covariance matrix.
\itemize{
\item A covariance matrix
\item A function which returns a covariance matrix (e.g., \code{stats::vcov()})
\item A string which indicates the kind of uncertainty estimates to return.
\itemize{
\item Heteroskedasticity-consistent: \code{"HC"}, \code{"HC0"}, \code{"HC1"}, \code{"HC2"},
\code{"HC3"}, \code{"HC4"}, \code{"HC4m"}, \code{"HC5"}. See \code{?sandwich::vcovHC}
\item Cluster-robust: \code{"CR"}, \code{"CR0"}, \code{"CR1"}, \code{"CR1p"}, \code{"CR1S"},
\code{"CR2"}, \code{"CR3"}. See \code{?clubSandwich::vcovCR}
\item Bootstrap: \code{"BS"}, \code{"xy"}, \code{"residual"}, \code{"wild"}, \code{"mammen"},
\code{"fractional"}, \code{"jackknife"}, \code{"norm"}, \code{"webb"}. See
\code{?sandwich::vcovBS}
\item Other \code{sandwich} package functions: \code{"HAC"}, \code{"PC"}, \code{"CL"}, \code{"OPG"},
\code{"PL"}.
}
}}

\item{vcov_args}{List of arguments to be passed to the function identified by
the \code{vcov} argument. This function is typically supplied by the
\strong{sandwich} or \strong{clubSandwich} packages. Please refer to their
documentation (e.g., \code{?sandwich::vcovHAC}) to see the list of available
arguments. If no estimation type (argument \code{type}) is given, the default
type for \code{"HC"} equals the default from the \strong{sandwich} package; for type
\code{"CR"}, the default is set to \code{"CR3"}.}

\item{verbose}{Toggle warnings and messages.}

\item{...}{Arguments passed to or from other methods.}

\item{test}{Hypothesis test for computing contrasts or pairwise comparisons.
See \href{https://strengejacke.github.io/ggeffects/reference/test_predictions.html}{\code{?ggeffects::test_predictions}}
for details.}
}
\value{
A data frame.
}
\description{
Compute the (conditional) equivalence test for frequentist models.
}
\details{
In classical null hypothesis significance testing (NHST) within a
frequentist framework, it is not possible to accept the null hypothesis, H0 -
unlike in Bayesian statistics, where such probability statements are
possible. "[...] one can only reject the null hypothesis if the test
statistics falls into the critical region(s), or fail to reject this
hypothesis. In the latter case, all we can say is that no significant effect
was observed, but one cannot conclude that the null hypothesis is true."
(\emph{Pernet 2017}). One way to address this issue without Bayesian methods is
\emph{Equivalence Testing}, as implemented in \code{equivalence_test()}. While in
NHST you can either reject the null hypothesis or obtain an inconclusive
result, the equivalence test - according to \emph{Pernet} - adds a third
category, \emph{"accept"}. Roughly speaking, the idea behind equivalence
testing in a
frequentist framework is to check whether an estimate and its uncertainty
(i.e. confidence interval) falls within a region of "practical equivalence".
Depending on the rule for this test (see below), statistical significance
does not necessarily indicate whether the null hypothesis can be rejected or
not, i.e. the classical interpretation of the p-value may differ from the
results returned from the equivalence test.
\subsection{Calculation of equivalence testing}{
\itemize{
\item "bayes" - Bayesian rule (Kruschke 2018)

This rule follows the "HDI+ROPE decision rule" (\emph{Kruschke, 2014, 2018}) used
for the \code{\link[bayestestR:equivalence_test]{Bayesian counterpart}}. This
means that if the confidence intervals are completely outside the ROPE, the
"null hypothesis" for this parameter is "rejected". If the ROPE
completely covers the CI, the null hypothesis is accepted. Otherwise, it is
undecided whether to accept or reject the null hypothesis. Desirable
results are low proportions inside the ROPE (the closer to zero the
better).
\item "classic" - The TOST rule (Lakens 2017)

This rule follows the "TOST rule", i.e. a two one-sided test procedure
(\emph{Lakens 2017}). Following this rule...
\itemize{
\item practical equivalence is assumed (i.e. H0 \emph{"accepted"}) when the narrow
confidence intervals are completely inside the ROPE, no matter if the
effect is statistically significant or not;
\item practical equivalence (i.e. H0) is \emph{rejected} when the coefficient is
statistically significant and the narrow confidence intervals
(i.e. \code{1-2*alpha}) are \emph{not fully covered} by the ROPE, no matter
whether they include or exclude the ROPE boundaries;
\item else the decision whether to accept or reject practical equivalence is
undecided (i.e. when effects are \emph{not} statistically significant \emph{and}
the narrow confidence intervals overlap the ROPE).
}
\item "cet" - Conditional Equivalence Testing (Campbell/Gustafson 2018)

This rule follows the Conditional Equivalence Testing approach described by
\emph{Campbell and Gustafson 2018}. According to this rule, practical
equivalence is rejected when the coefficient is statistically significant.
When the
effect is \emph{not} significant and the narrow confidence intervals are
completely inside the ROPE, we accept (i.e. assume) practical equivalence,
else it is undecided.
}
}

\subsection{Levels of Confidence Intervals used for Equivalence Testing}{

For \code{rule = "classic"}, "narrow" confidence intervals are used for
equivalence testing. "Narrow" means that the interval level is not
\code{1 - alpha}, but \code{1 - 2 * alpha}. Thus, if \code{ci = 0.95}, alpha is
assumed to be 0.05 and internally a CI level of 0.90 is used.
\code{rule = "cet"} uses both regular and narrow confidence intervals, while
\code{rule = "bayes"} only uses the regular intervals.
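
This relationship can be sketched in a few lines of R, assuming the default
\code{ci = 0.95}:

\if{html}{\out{<div class="sourceCode">}}\preformatted{# level of the "narrow" interval used by rule = "classic"
ci <- 0.95
alpha <- 1 - ci
1 - 2 * alpha
#> [1] 0.9
}\if{html}{\out{</div>}}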
}

\subsection{p-Values}{

The equivalence p-value is the area of the (cumulative) confidence
distribution that is outside of the region of equivalence. It can be
interpreted as p-value for \emph{rejecting} the alternative hypothesis and
\emph{accepting} the "null hypothesis" (i.e. assuming practical equivalence). That
is, a high p-value means we reject the assumption of practical equivalence
and accept the alternative hypothesis.
}

\subsection{Second Generation p-Value (SGPV)}{

Second generation p-values (SGPV) were proposed as a statistic that
represents \emph{the proportion of data-supported hypotheses that are also null
hypotheses} \emph{(Blume et al. 2018, Lakens and Delacre 2020)}. It represents the
proportion of the \emph{full} confidence interval range (assuming a normally or
t-distributed, equal-tailed interval, based on the model) that is inside the
ROPE. The SGPV ranges from zero to one. Higher values indicate that the
effect is more likely to be practically equivalent ("not of interest").

Note that the assumed interval, which is used to calculate the SGPV, is an
estimation of the \emph{full interval} based on the chosen confidence level. For
example, if the 95\% confidence interval of a coefficient ranges from -1 to 1,
the underlying \emph{full (normally or t-distributed) interval} approximately
ranges from -1.9 to 1.9, as the following code illustrates:

\if{html}{\out{<div class="sourceCode">}}\preformatted{# simulate full normal distribution
out <- bayestestR::distribution_normal(10000, 0, 0.5)
# range of "full" distribution
range(out)
# range of 95\% CI
round(quantile(out, probs = c(0.025, 0.975)), 2)
}\if{html}{\out{</div>}}

This ensures that the SGPV always refers to the general compatible parameter
space of coefficients, independent of the confidence interval chosen for
testing practical equivalence. Therefore, the SGPV of the \emph{full interval} is
similar to the ROPE coverage of Bayesian equivalence tests, as the following
code shows:

\if{html}{\out{<div class="sourceCode">}}\preformatted{library(bayestestR)
library(brms)
m <- lm(mpg ~ gear + wt + cyl + hp, data = mtcars)
m2 <- brm(mpg ~ gear + wt + cyl + hp, data = mtcars)
# SGPV for frequentist models
equivalence_test(m)
# similar to ROPE coverage of Bayesian models
equivalence_test(m2)
# similar to ROPE coverage of simulated draws / bootstrap samples
equivalence_test(simulate_model(m))
}\if{html}{\out{</div>}}
}

\subsection{ROPE range}{

Some attention is required for finding suitable values for the ROPE limits
(argument \code{range}). See 'Details' in \code{\link[bayestestR:rope_range]{bayestestR::rope_range()}}
for further information.
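
As a minimal sketch, the default limits can be inspected with
\code{\link[bayestestR:rope_range]{bayestestR::rope_range()}}, and custom
limits can be passed to \code{range} as a vector of length two:

\if{html}{\out{<div class="sourceCode">}}\preformatted{data(qol_cancer)
model <- lm(QoL ~ time + age + education, data = qol_cancer)
# limits that are used when range = "default"
bayestestR::rope_range(model)
# user-defined ROPE limits
equivalence_test(model, range = c(-1, 1))
}\if{html}{\out{</div>}}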
}
}
\note{
There is also a \href{https://easystats.github.io/see/articles/parameters.html}{\code{plot()}-method}
implemented in the \href{https://easystats.github.io/see/}{\strong{see}-package}.
}
\section{Statistical inference - how to quantify evidence}{

There is no standardized approach to drawing conclusions based on the
available data and statistical models. A frequently chosen but also much
criticized approach is to evaluate results based on their statistical
significance (\emph{Amrhein et al. 2017}).

A more sophisticated way would be to test whether estimated effects exceed
the "smallest effect size of interest", to avoid even the smallest effects
being considered relevant simply because they are statistically significant,
but clinically or practically irrelevant (\emph{Lakens et al. 2018, Lakens 2024}).

A rather unconventional approach, which is nevertheless advocated by various
authors, is to interpret results from classical regression models either in
terms of probabilities, similar to the usual approach in Bayesian statistics
(\emph{Schweder 2018; Schweder and Hjort 2003; Vos 2022}), or in terms of a
relative measure of "evidence" or "compatibility" with the data (\emph{Greenland
et al. 2022; Rafi and Greenland 2020}), which nevertheless comes close to a
probabilistic
interpretation.

A more detailed discussion of this topic is found in the documentation of
\code{\link[=p_function]{p_function()}}.

The \strong{parameters} package provides several options or functions to aid
statistical inference. These are, for example:
\itemize{
\item \code{\link[=equivalence_test.lm]{equivalence_test()}}, to compute the (conditional)
equivalence test for frequentist models
\item \code{\link[=p_significance.lm]{p_significance()}}, to compute the probability of
\emph{practical significance}, which can be conceptualized as a unidirectional
equivalence test
\item \code{\link[=p_function]{p_function()}}, or \emph{consonance function}, to compute p-values and
compatibility (confidence) intervals for statistical models
\item the \code{pd} argument (setting \code{pd = TRUE}) in \code{model_parameters()} includes
a column with the \emph{probability of direction}, i.e. the probability that a
parameter is strictly positive or negative. See \code{\link[bayestestR:p_direction]{bayestestR::p_direction()}}
for details. If plotting is desired, the \code{\link[=p_direction.lm]{p_direction()}}
function can be used, together with \code{plot()}.
\item the \code{s_value} argument (setting \code{s_value = TRUE}) in \code{model_parameters()}
replaces the p-values with their related \emph{S}-values (\emph{Rafi and Greenland 2020})
\item finally, it is possible to generate distributions of model coefficients by
generating bootstrap-samples (setting \code{bootstrap = TRUE}) or simulating
draws from model coefficients using \code{\link[=simulate_model]{simulate_model()}}. These samples
can then be treated as "posterior samples" and used in many functions from
the \strong{bayestestR} package.
}
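
A minimal sketch of how the \code{model_parameters()} options listed above
are called:

\if{html}{\out{<div class="sourceCode">}}\preformatted{model <- lm(mpg ~ gear + wt, data = mtcars)
# add the probability of direction to the parameters table
model_parameters(model, pd = TRUE)
# replace p-values by their related S-values
model_parameters(model, s_value = TRUE)
# bootstrap samples, which can be treated like posterior draws
model_parameters(model, bootstrap = TRUE)
}\if{html}{\out{</div>}}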

Most of the options or functions shown above derive from methods originally
implemented for Bayesian models (\emph{Makowski et al. 2019}). However, assuming
that model assumptions are met (i.e. the model fits the data well and the
chosen model appropriately reflects the data generating process, including
the distributional model family), it seems appropriate to interpret results
from classical frequentist models in a "Bayesian way" (for more details, see
the documentation of \code{\link[=p_function]{p_function()}}).
}

\examples{
\dontshow{if (requireNamespace("sandwich")) (if (getRversion() >= "3.4") withAutoprint else force)(\{ # examplesIf}
data(qol_cancer)
model <- lm(QoL ~ time + age + education, data = qol_cancer)

# default rule
equivalence_test(model)

# using heteroscedasticity-robust standard errors
equivalence_test(model, vcov = "HC3")

# conditional equivalence test
equivalence_test(model, rule = "cet")

# plot method
if (require("see", quietly = TRUE)) {
  result <- equivalence_test(model)
  plot(result)
}
\dontshow{\}) # examplesIf}
}
\references{
\itemize{
\item Amrhein, V., Korner-Nievergelt, F., and Roth, T. (2017). The earth is
flat (p > 0.05): Significance thresholds and the crisis of unreplicable
research. PeerJ, 5, e3544. \doi{10.7717/peerj.3544}
\item Blume, J. D., D'Agostino McGowan, L., Dupont, W. D., and Greevy, R. A.
(2018). Second-generation p-values: Improved rigor, reproducibility, and
transparency in statistical analyses. PLOS ONE, 13(3), e0188299.
\doi{10.1371/journal.pone.0188299}
\item Campbell, H., and Gustafson, P. (2018). Conditional equivalence
testing: An alternative remedy for publication bias. PLOS ONE, 13(4),
e0195145. \doi{10.1371/journal.pone.0195145}
\item Greenland S, Rafi Z, Matthews R, Higgs M. To Aid Scientific Inference,
Emphasize Unconditional Compatibility Descriptions of Statistics. (2022)
https://arxiv.org/abs/1909.08583v7 (Accessed November 10, 2022)
\item Kruschke, J. K. (2014). Doing Bayesian data analysis: A tutorial with
R, JAGS, and Stan. Academic Press
\item Kruschke, J. K. (2018). Rejecting or accepting parameter values in
Bayesian estimation. Advances in Methods and Practices in Psychological
Science, 1(2), 270-280. \doi{10.1177/2515245918771304}
\item Lakens, D. (2017). Equivalence Tests: A Practical Primer for t Tests,
Correlations, and Meta-Analyses. Social Psychological and Personality
Science, 8(4), 355–362. \doi{10.1177/1948550617697177}
\item Lakens, D. (2024). Improving Your Statistical Inferences (Version v1.5.1).
Retrieved from https://lakens.github.io/statistical_inferences/.
\doi{10.5281/ZENODO.6409077}
\item Lakens, D., and Delacre, M. (2020). Equivalence Testing and the Second
Generation P-Value. Meta-Psychology, 4. \doi{10.15626/MP.2018.933}
\item Lakens, D., Scheel, A. M., and Isager, P. M. (2018). Equivalence Testing
for Psychological Research: A Tutorial. Advances in Methods and Practices
in Psychological Science, 1(2), 259–269. \doi{10.1177/2515245918770963}
\item Makowski, D., Ben-Shachar, M. S., Chen, S. H. A., and Lüdecke, D. (2019).
Indices of Effect Existence and Significance in the Bayesian Framework.
Frontiers in Psychology, 10, 2767. \doi{10.3389/fpsyg.2019.02767}
\item Pernet, C. (2017). Null hypothesis significance testing: A guide to
commonly misunderstood concepts and recommendations for good practice.
F1000Research, 4, 621. \doi{10.12688/f1000research.6963.5}
\item Rafi Z, Greenland S. Semantic and cognitive tools to aid statistical
science: replace confidence and significance by compatibility and surprise.
BMC Medical Research Methodology (2020) 20:244.
\item Schweder T. Confidence is epistemic probability for empirical science.
Journal of Statistical Planning and Inference (2018) 195:116–125.
\doi{10.1016/j.jspi.2017.09.016}
\item Schweder T, Hjort NL. Frequentist analogues of priors and posteriors.
In Stigum, B. (ed.), Econometrics and the Philosophy of Economics: Theory
Data Confrontation in Economics, pp. 285-317. Princeton University Press,
Princeton, NJ, 2003
\item Vos P, Holbert D. Frequentist statistical inference without repeated sampling.
Synthese 200, 89 (2022). \doi{10.1007/s11229-022-03560-x}
}
}
\seealso{
For more details, see \code{\link[bayestestR:equivalence_test]{bayestestR::equivalence_test()}}. Further
readings can be found in the references. See also \code{\link[=p_significance]{p_significance()}} for
a unidirectional equivalence test.
}