File: p_significance.lm.Rd

package info (click to toggle)
r-cran-parameters 0.24.2-2
  • links: PTS, VCS
  • area: main
  • in suites: forky, sid, trixie
  • size: 3,852 kB
  • sloc: sh: 16; makefile: 2
file content (251 lines) | stat: -rw-r--r-- 13,151 bytes parent folder | download
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/p_significance.R
\name{p_significance.lm}
\alias{p_significance.lm}
\title{Practical Significance (ps)}
\usage{
\method{p_significance}{lm}(
  x,
  threshold = "default",
  ci = 0.95,
  vcov = NULL,
  vcov_args = NULL,
  verbose = TRUE,
  ...
)
}
\arguments{
\item{x}{A statistical model.}

\item{threshold}{The threshold value that separates significant from
negligible effects, which can have the following possible values:
\itemize{
\item \code{"default"}, in which case the range is set to \code{0.1} if input is a vector,
and based on \code{\link[bayestestR:rope_range]{rope_range()}} if a (Bayesian) model is provided.
\item a single numeric value (e.g., 0.1), which is used as a symmetric range
around zero (i.e., the threshold range is set to -0.1 and 0.1)
\item a numeric vector of length two (e.g., \code{c(-0.2, 0.1)}), useful for
asymmetric intervals
\item a list of numeric vectors, where each vector corresponds to a parameter
\item a list of \emph{named} numeric vectors, where names correspond to parameter
names. In this case, all parameters that have no matching name in \code{threshold}
will be set to \code{"default"}.
}}

\item{ci}{Confidence Interval (CI) level. Defaults to \code{0.95} (\verb{95\%}).}

\item{vcov}{Variance-covariance matrix used to compute uncertainty estimates
(e.g., for robust standard errors). This argument accepts a covariance
matrix, a function which returns a covariance matrix, or a string which
identifies the function to be used to compute the covariance matrix.
\itemize{
\item A covariance matrix
\item A function which returns a covariance matrix (e.g., \code{stats::vcov()})
\item A string which indicates the kind of uncertainty estimates to return.
\itemize{
\item Heteroskedasticity-consistent: \code{"HC"}, \code{"HC0"}, \code{"HC1"}, \code{"HC2"},
\code{"HC3"}, \code{"HC4"}, \code{"HC4m"}, \code{"HC5"}. See \code{?sandwich::vcovHC}
\item Cluster-robust: \code{"CR"}, \code{"CR0"}, \code{"CR1"}, \code{"CR1p"}, \code{"CR1S"},
\code{"CR2"}, \code{"CR3"}. See \code{?clubSandwich::vcovCR}
\item Bootstrap: \code{"BS"}, \code{"xy"}, \code{"residual"}, \code{"wild"}, \code{"mammen"},
\code{"fractional"}, \code{"jackknife"}, \code{"norm"}, \code{"webb"}. See
\code{?sandwich::vcovBS}
\item Other \code{sandwich} package functions: \code{"HAC"}, \code{"PC"}, \code{"CL"}, \code{"OPG"},
\code{"PL"}.
}
}}

\item{vcov_args}{List of arguments to be passed to the function identified by
the \code{vcov} argument. This function is typically supplied by the
\strong{sandwich} or \strong{clubSandwich} packages. Please refer to their
documentation (e.g., \code{?sandwich::vcovHAC}) to see the list of available
arguments. If no estimation type (argument \code{type}) is given, the default
type for \code{"HC"} equals the default from the \strong{sandwich} package; for type
\code{"CR"}, the default is set to \code{"CR3"}.}

\item{verbose}{Toggle warnings and messages.}

\item{...}{Arguments passed to other methods.}
}
\value{
A data frame with columns for the parameter names, the confidence
intervals and the values for practical significance. Higher values indicate
more practical significance (upper bound is one).
}
\description{
Compute the probability of \strong{Practical Significance} (\emph{ps}),
which can be conceptualized as a unidirectional equivalence test. It returns
the probability that an effect is above a given threshold corresponding to a
negligible effect in the median's direction, considering a parameter's \emph{full}
confidence interval. In other words, it returns the probability of a clear
direction of an effect, which is larger than the smallest effect size of
interest (e.g., a minimal important difference). Its theoretical range is
from zero to one, but the \emph{ps} is typically larger than 0.5 (to indicate
practical significance).

In comparison to the \code{\link[=equivalence_test]{equivalence_test()}} function, where the \emph{SGPV}
(second generation p-value) describes the proportion of the \emph{full} confidence
interval that is \emph{inside} the ROPE, the value returned by \code{p_significance()}
describes the \emph{larger} proportion of the \emph{full} confidence interval that is
\emph{outside} the ROPE. This makes \code{p_significance()} comparable to
\code{\link[bayestestR:p_direction]{bayestestR::p_direction()}}; however, while \code{p_direction()} compares to a
point-null by default, \code{p_significance()} compares to a range-null.
}
\details{
\code{p_significance()} returns the proportion of the \emph{full} confidence
interval range (assuming a normally or t-distributed, equal-tailed interval,
based on the model) that is outside a certain range (the negligible effect,
or ROPE, see argument \code{threshold}). If there are values of the distribution
both below and above the ROPE, \code{p_significance()} returns the higher
probability of a value being outside the ROPE. Typically, this value should
be larger than 0.5 to indicate practical significance. However, if the range
of the negligible effect is rather large compared to the range of the
confidence interval, \code{p_significance()} will be less than 0.5, which
indicates no clear practical significance.
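
For illustration, the following sketch contrasts this proportion with the SGPV
from \code{\link[=equivalence_test]{equivalence_test()}} (the share of the interval \emph{inside} the ROPE),
using the \code{qol_cancer} data from the examples below and a purely illustrative
threshold of \code{0.5}:

\if{html}{\out{<div class="sourceCode">}}\preformatted{data(qol_cancer)
m <- lm(QoL ~ time + age + education, data = qol_cancer)
# proportion of the full CI outside the ROPE (-0.5, 0.5)
p_significance(m, threshold = 0.5)
# proportion of the full CI inside the same ROPE (the SGPV)
equivalence_test(m, range = c(-0.5, 0.5))
}\if{html}{\out{</div>}}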

Note that the assumed interval, which is used to calculate the practical
significance, is an estimation of the \emph{full interval} based on the chosen
confidence level. For example, if the 95\% confidence interval of a
coefficient ranges from -1 to 1, the underlying \emph{full (normally or
t-distributed) interval} approximately ranges from -1.9 to 1.9, as the
following code illustrates:

\if{html}{\out{<div class="sourceCode">}}\preformatted{# simulate full normal distribution
out <- bayestestR::distribution_normal(10000, 0, 0.5)
# range of "full" distribution
range(out)
# range of 95\% CI
round(quantile(out, probs = c(0.025, 0.975)), 2)
}\if{html}{\out{</div>}}

This ensures that the practical significance always refers to the general
compatible parameter space of coefficients. Therefore, the \emph{full interval} is
similar to the posterior distribution of an equivalent Bayesian model, as the
following code shows:

\if{html}{\out{<div class="sourceCode">}}\preformatted{library(bayestestR)
library(brms)
m <- lm(mpg ~ gear + wt + cyl + hp, data = mtcars)
m2 <- brm(mpg ~ gear + wt + cyl + hp, data = mtcars)
# probability of significance (ps) for frequentist model
p_significance(m)
# similar to ps of Bayesian models
p_significance(m2)
# similar to ps of simulated draws / bootstrap samples
p_significance(simulate_model(m))
}\if{html}{\out{</div>}}
}
\note{
There is also a \href{https://easystats.github.io/see/articles/bayestestR.html}{\code{plot()}-method}
implemented in the \href{https://easystats.github.io/see/}{\strong{see}-package}.
}
\section{Statistical inference - how to quantify evidence}{

There is no standardized approach to drawing conclusions based on the
available data and statistical models. A frequently chosen but also much
criticized approach is to evaluate results based on their statistical
significance (\emph{Amrhein et al. 2017}).

A more sophisticated way is to test whether estimated effects exceed the
"smallest effect size of interest", to avoid effects being considered relevant
simply because they are statistically significant, even though they are
clinically or practically irrelevant (\emph{Lakens et al. 2018, Lakens 2024}).

A rather unconventional approach, which is nevertheless advocated by various
authors, is to interpret results from classical regression models either in
terms of probabilities, similar to the usual approach in Bayesian statistics
(\emph{Schweder 2018; Schweder and Hjort 2003; Vos 2022}), or in terms of a relative
measure of "evidence" or "compatibility" with the data (\emph{Greenland et al. 2022;
Rafi and Greenland 2020}), which also comes close to a probabilistic
interpretation.

A more detailed discussion of this topic is found in the documentation of
\code{\link[=p_function]{p_function()}}.

The \strong{parameters} package provides several options or functions to aid
statistical inference. These are, for example:
\itemize{
\item \code{\link[=equivalence_test.lm]{equivalence_test()}}, to compute the (conditional)
equivalence test for frequentist models
\item \code{\link[=p_significance.lm]{p_significance()}}, to compute the probability of
\emph{practical significance}, which can be conceptualized as a unidirectional
equivalence test
\item \code{\link[=p_function]{p_function()}}, or \emph{consonance function}, to compute p-values and
compatibility (confidence) intervals for statistical models
\item the \code{pd} argument (setting \code{pd = TRUE}) in \code{model_parameters()} includes
a column with the \emph{probability of direction}, i.e. the probability that a
parameter is strictly positive or negative. See \code{\link[bayestestR:p_direction]{bayestestR::p_direction()}}
for details. If plotting is desired, the \code{\link[=p_direction.lm]{p_direction()}}
function can be used, together with \code{plot()}.
\item the \code{s_value} argument (setting \code{s_value = TRUE}) in \code{model_parameters()}
replaces the p-values with their related \emph{S}-values (\emph{Rafi and Greenland 2020})
\item finally, it is possible to generate distributions of model coefficients by
drawing bootstrap samples (setting \code{bootstrap = TRUE}) or simulating
draws from model coefficients using \code{\link[=simulate_model]{simulate_model()}}. These samples
can then be treated as "posterior samples" and used in many functions from
the \strong{bayestestR} package.
}
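
As a minimal sketch, using the model from the examples below, the \code{pd} and
\code{s_value} options and the simulation-based approach mentioned above can be
requested as follows (the calls are illustrative, not exhaustive):

\if{html}{\out{<div class="sourceCode">}}\preformatted{data(qol_cancer)
model <- lm(QoL ~ time + age + education, data = qol_cancer)
# add the probability of direction and replace p-values by S-values
model_parameters(model, pd = TRUE, s_value = TRUE)
# treat simulated draws from the model as "posterior samples"
bayestestR::p_direction(simulate_model(model))
}\if{html}{\out{</div>}}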

Most of the options and functions shown above derive from methods originally
implemented for Bayesian models (\emph{Makowski et al. 2019}). However, assuming
that model assumptions are met (i.e., the model fits the data well and the
chosen model correctly reflects the data-generating process, including the
distributional model family), it seems appropriate to interpret results from
classical frequentist models in a "Bayesian way" (for details, see the
documentation of \code{\link[=p_function]{p_function()}}).
}

\examples{
\dontshow{if (requireNamespace("bayestestR") && packageVersion("bayestestR") > "0.14.0" && requireNamespace("sandwich")) (if (getRversion() >= "3.4") withAutoprint else force)(\{ # examplesIf}
data(qol_cancer)
model <- lm(QoL ~ time + age + education, data = qol_cancer)

p_significance(model)
p_significance(model, threshold = c(-0.5, 1.5))
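
# a named list sets parameter-specific thresholds; unnamed parameters fall
# back to "default" (the parameter name and range here are purely illustrative)
p_significance(model, threshold = list(time = c(-0.25, 0.25)))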

# based on heteroscedasticity-robust standard errors
p_significance(model, vcov = "HC3")
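
# instead of a string, a covariance function and its arguments can be
# supplied (sketch; uses the sandwich package required above)
p_significance(model, vcov = sandwich::vcovHC, vcov_args = list(type = "HC2"))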

if (require("see", quietly = TRUE)) {
  result <- p_significance(model)
  plot(result)
}
\dontshow{\}) # examplesIf}
}
\references{
\itemize{
\item Amrhein, V., Korner-Nievergelt, F., and Roth, T. (2017). The earth is
flat (p > 0.05): Significance thresholds and the crisis of unreplicable
research. PeerJ, 5, e3544. \doi{10.7717/peerj.3544}
\item Greenland S, Rafi Z, Matthews R, Higgs M. To Aid Scientific Inference,
Emphasize Unconditional Compatibility Descriptions of Statistics. (2022)
https://arxiv.org/abs/1909.08583v7 (Accessed November 10, 2022)
\item Lakens, D. (2024). Improving Your Statistical Inferences (Version v1.5.1).
Retrieved from https://lakens.github.io/statistical_inferences/.
\doi{10.5281/ZENODO.6409077}
\item Lakens, D., Scheel, A. M., and Isager, P. M. (2018). Equivalence Testing
for Psychological Research: A Tutorial. Advances in Methods and Practices
in Psychological Science, 1(2), 259–269. \doi{10.1177/2515245918770963}
\item Makowski, D., Ben-Shachar, M. S., Chen, S. H. A., and Lüdecke, D. (2019).
Indices of Effect Existence and Significance in the Bayesian Framework.
Frontiers in Psychology, 10, 2767. \doi{10.3389/fpsyg.2019.02767}
\item Rafi Z, Greenland S. Semantic and cognitive tools to aid statistical
science: replace confidence and significance by compatibility and surprise.
BMC Medical Research Methodology (2020) 20:244.
\item Schweder T. Confidence is epistemic probability for empirical science.
Journal of Statistical Planning and Inference (2018) 195:116–125.
\doi{10.1016/j.jspi.2017.09.016}
\item Schweder T, Hjort NL. Frequentist analogues of priors and posteriors.
In Stigum, B. (ed.), Econometrics and the Philosophy of Economics: Theory-Data
Confrontation in Economics, pp. 285-317. Princeton University Press,
Princeton, NJ, 2003.
\item Vos P, Holbert D. Frequentist statistical inference without repeated sampling.
Synthese 200, 89 (2022). \doi{10.1007/s11229-022-03560-x}
}
}
\seealso{
For more details, see \code{\link[bayestestR:p_significance]{bayestestR::p_significance()}}. See also
\code{\link[=equivalence_test]{equivalence_test()}}, \code{\link[=p_function]{p_function()}} and \code{\link[bayestestR:p_direction]{bayestestR::p_direction()}}
for functions related to checking effect existence and significance.
}