Some code in this vignette requires marginaleffects version 0.7.1 or the development version hosted on Github.
All the standard errors generated by the marginaleffects(), comparisons(), and deltamethod() functions of this package are estimated using the delta method. Mathematical treatments of this method can be found in most statistics textbooks and on Wikipedia. Roughly speaking, the delta method allows us to approximate the distribution of a smooth function of an asymptotically normal estimator.

Concretely, this allows us to generate standard errors around functions of a model's coefficient estimates. Predictions, contrasts, marginal effects, and marginal means are all functions of the coefficients, so we can use the delta method to estimate standard errors around all of those quantities. Since mathematical treatments are widely available elsewhere, this vignette focuses on the "implementation" in marginaleffects.
Consider the case of the marginalmeans() function. When a user calls this function, they obtain a vector of marginal means. To estimate standard errors around this vector:

1. Compute the vector of marginal means using the model's estimated coefficients.
2. Compute the Jacobian of the marginal means with respect to the coefficients, that is, numerically differentiate each marginal mean with respect to each coefficient.
3. Apply the delta method formula: multiply the Jacobian by the model's variance-covariance matrix and by the transposed Jacobian, then take the square root of the diagonal to obtain standard errors.
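For intuition, here is a minimal sketch of those three steps (illustrative only, not the package's actual implementation). For brevity, the function of the coefficients is exp() of a slope rather than a vector of marginal means, but the same logic applies to any smooth function of the coefficients:

# Sketch of the three delta method steps for a scalar function of the coefficients.
mod <- lm(mpg ~ hp, data = mtcars)
b <- coef(mod)
V <- vcov(mod)

# Step 1: the quantity of interest, as a function of the coefficients
g <- function(beta) exp(beta["hp"])

# Step 2: numerical Jacobian of g() with respect to the coefficients
eps <- 1e-6
J <- sapply(seq_along(b), function(i) {
    bp <- b
    bp[i] <- bp[i] + eps
    (g(bp) - g(b)) / eps
})

# Step 3: delta method formula, J V J', then square root to get a standard error
sqrt(J %*% V %*% J)

The deltamethod() function automates this kind of computation for arbitrary functions of the coefficients.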
The main function used to compute standard errors in marginaleffects is here: https://github.com/vincentarelbundock/marginaleffects/blob/main/R/get_se_delta.R
marginaleffects() and comparisons()

All standard errors for the marginaleffects() and comparisons() functions are computed using the delta method, as described above.
The summary() and tidy() functions compute the average marginal effect (or contrast) when they are applied to an object produced by marginaleffects() (or comparisons()). This is done in 3 steps:

1. Compute the unit-level marginal effects and their Jacobian with respect to the model coefficients.
2. Average the unit-level marginal effects, and average the unit-level Jacobians.
3. Apply the delta method formula with the averaged Jacobian to obtain the standard error of the average marginal effect.
As explained succinctly on Stack Exchange:
we want the variance of the Average Marginal Effect (AME) and hence our transformed function is:

\(AME = \frac{1}{N} \sum_{i=1}^N g(x_i,\hat{\beta})\)

Then, using the delta method, we have:

\(Var\left(g(\hat{\beta})\right) = J_g' \Omega_{\hat{\beta}} J_g\)

where \(\Omega_{\hat{\beta}} = Var(\hat{\beta})\) and

\(J_g' = \frac{\partial\left[\frac{1}{N}\sum_{i=1}^N g(x_i,\hat{\beta})\right]}{\partial \hat\beta} = \frac{1}{N}\left[\sum_{i=1}^N \frac{\partial g(x_i,\hat{\beta})}{\partial \hat\beta}\right]\)

which justifies using the "average Jacobian" in the delta method to calculate the variance of the AME.
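To make this concrete, here is a minimal sketch (again illustrative, not the package's internal code) that applies the delta method to the AME in a logistic regression, where the unit-level marginal effect of hp is \(g(x_i,\hat{\beta}) = \mu_i (1 - \mu_i) \hat{\beta}_{hp}\):

# Sketch: delta method standard error for the AME of `hp` in a logit model.
mod <- glm(am ~ hp, data = mtcars, family = binomial)
b <- coef(mod)
V <- vcov(mod)
X <- model.matrix(mod)

# The AME as a function of the coefficients
ame_fun <- function(beta) {
    mu <- plogis(X %*% beta)         # predicted probabilities
    mean(mu * (1 - mu) * beta["hp"]) # average of the unit-level slopes
}

# Numerical Jacobian of the AME. Since differentiation is linear, this equals
# the average of the unit-level Jacobians, which is the point of the quote above.
eps <- 1e-6
J <- sapply(seq_along(b), function(i) {
    bp <- b
    bp[i] <- bp[i] + eps
    (ame_fun(bp) - ame_fun(b)) / eps
})

# Delta method: Var(AME) = J' Omega J
sqrt(J %*% V %*% J)

The resulting standard error should be close to the one reported by summary(marginaleffects(mod, variables = "hp")).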
marginalmeans()

The marginalmeans() function can compute confidence intervals in two ways. If the following conditions hold:

1. type = "response"
2. the transform_post argument is NULL

then marginalmeans() will first compute the marginal means on the link scale, and then back-transform them using the inverse link function supplied by insight::link_inverse(model).
In all other cases, standard errors are computed using the delta method as described above.
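For example, in a sketch like the following (hypothetical model, output omitted), the defaults type = "response" and transform_post = NULL should trigger the back-transformation route, so the intervals respect the [0, 1] probability bounds:

# Sketch: with the defaults, marginalmeans() should compute marginal means on
# the link scale and then back-transform them with the inverse link function.
dat <- mtcars
dat$cyl <- factor(dat$cyl)
mod <- glm(am ~ cyl + hp, data = dat, family = binomial)
marginalmeans(mod, variables = "cyl")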
predictions()

The marginaleffects package handles all the calculations needed to compute standard errors, t statistics, p values, and confidence intervals for the contrasts (comparisons()), marginal effects (marginaleffects()), and marginal means (marginalmeans()) functions.
When possible, however, the calculation of standard errors and confidence intervals for the output of predictions() is outsourced to the insight package. The benefit of this is that, for many popular models, insight can compute confidence intervals via back-transformation, which gives them certain nice properties. For example, this ensures that confidence intervals around the predictions of a logistic regression model will remain between 0 and 1.
When insight does not support a model, marginaleffects computes standard errors using the delta method, as described above. In those cases, confidence intervals are not created automatically, but users can easily compute them manually, as usual, by multiplying the standard error by a critical value, as shown in the sketch below. The standard errors thus estimated have desirable properties under the usual normality assumptions, but since there is no back-transformation, it is not always advisable to use them to construct symmetric confidence intervals around adjusted predictions.
One special case is when we use the tidy() or summary() functions to compute average predictions, or the by argument to compute average predictions by group. In those cases, it may be a good idea to compute predicted values on the link scale, average the predictions, and only then back-transform. This can be done using the transform_post and transform_avg arguments. In this example, we use the link_inverse() function from the insight package to get the inverse link function:
library(marginaleffects)
library(insight)

# simulate data
set.seed(1024)
N <- 25
dat <- data.frame(
    y = rbinom(N, 1, prob = .9),
    x = rnorm(N),
    groupid = rbinom(N, 1, prob = .5))

# estimate model
mod <- glm(y ~ x + groupid, family = binomial, data = dat)

# average group-wise predictions
predictions(mod, by = "groupid")
#>       type groupid predicted  std.error
#> 1 response       1 0.8888889 0.10282561
#> 2 response       0 0.8750000 0.07697122
Note that if we simply add 1.96*SE to the predictions, the upper bound of the confidence interval will exceed the logical limit of 1. Instead, one possibility would be to estimate the predicted values on the link scale, and then transform the results:
predictions(
    mod,
    by = "groupid",
    type = "link",
    transform_post = link_inverse(mod))
#>   type groupid predicted    p.value  conf.low conf.high
#> 1 link       1 0.9135928 0.03809366 0.5323305 0.9899205
#> 2 link       0 0.9229036 0.02478264 0.5780974 0.9905287
We can do something similar with average predictions using the tidy() or summary() functions:
p <- predictions(
    mod,
    type = "link")
summary(p, transform_avg = link_inverse(mod))
#>   Predicted  Pr(>|z|) CI low CI high
#> 1    0.9197 0.0061301  0.667  0.9849
#>
#> Model type: glm
#> Prediction type: link
Warning: users should be aware that these alternative approaches will not in general yield the same results, because the mean of a transformation is not always equal to the transformation of the mean:
x <- rnorm(100)
mean(link_inverse(mod)(x))
#> [1] 0.4706487
link_inverse(mod)(mean(x))
#> [1] 0.4665151
It is easy to use the bootstrap as an alternative strategy to compute standard errors and confidence intervals. Several R packages can help us achieve this, including the long-established boot package:
library(boot)
set.seed(123)

bootfun <- function(data, indices, ...) {
    d <- data[indices, ]
    mod <- lm(mpg ~ am + hp + factor(cyl), data = d)
    cmp <- comparisons(mod, newdata = d, vcov = FALSE, variables = "am")
    tidy(cmp)$estimate
}

b <- boot(data = mtcars, statistic = bootfun, R = 1000)
b
#>
#> ORDINARY NONPARAMETRIC BOOTSTRAP
#>
#>
#> Call:
#> boot(data = mtcars, statistic = bootfun, R = 1000)
#>
#>
#> Bootstrap Statistics :
#>     original       bias    std. error
#> t1* 4.157856 0.01543426    1.003461
boot.ci(b, type = "perc")
#> BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
#> Based on 1000 bootstrap replicates
#>
#> CALL :
#> boot.ci(boot.out = b, type = "perc")
#>
#> Intervals :
#> Level Percentile
#> 95% ( 2.240, 6.277 )
#> Calculations and Intervals on Original Scale
Note that, in the code above, we set vcov = FALSE to avoid computation of delta method standard errors and speed things up.
Compare to the delta method standard errors:
mod <- lm(mpg ~ am + hp + factor(cyl), data = mtcars)
comparisons(mod, variables = "am") |> summary()
#>   Term Contrast Effect Std. Error z value   Pr(>|z|) 2.5 % 97.5 %
#> 1   am    1 - 0  4.158      1.257   3.309 0.00093648 1.695  6.621
#>
#> Model type: lm
#> Prediction type: response
All the functions in the marginaleffects package can compute robust standard errors on the fly, for any model type supported by the sandwich package. The vcov argument accepts string shortcuts like "HC3", one-sided formulas to request clustered standard errors, variance-covariance matrices, or functions which return such matrices. Here are a few examples.
Adjusted predictions with classical or heteroskedasticity-robust standard errors:
library(marginaleffects)
library(patchwork)

mod <- lm(mpg ~ hp, data = mtcars)

p <- predictions(mod)
head(p, 2)
#>   rowid     type predicted std.error statistic       p.value conf.low conf.high mpg  hp
#> 1     1 response  22.59375 0.7772744  29.06792 9.135725e-186 21.00634  24.18116  21 110
#> 2     2 response  22.59375 0.7772744  29.06792 9.135725e-186 21.00634  24.18116  21 110

p <- predictions(mod, vcov = "HC3")
head(p, 2)
#>   rowid     type predicted std.error statistic       p.value conf.low conf.high mpg  hp
#> 1     1 response  22.59375 0.8629746  26.18125 4.345995e-151 20.83132  24.35618  21 110
#> 2     2 response  22.59375 0.8629746  26.18125 4.345995e-151 20.83132  24.35618  21 110
Marginal effects with cluster-robust standard errors:
mfx <- marginaleffects(mod, vcov = ~cyl)
summary(mfx)
#>   Term   Effect Std. Error z value   Pr(>|z|)   2.5 %   97.5 %
#> 1   hp -0.06823    0.01868  -3.653 0.00025909 -0.1048 -0.03162
#>
#> Model type: lm
#> Prediction type: response
Comparing adjusted predictions with classical and robust standard errors:
p1 <- plot_cap(mod, condition = "hp")
p2 <- plot_cap(mod, condition = "hp", vcov = "HC3")
p1 + p2
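The examples above used a string shortcut and a one-sided formula. As noted earlier, the vcov argument also accepts variance-covariance matrices and functions which return them; here is a sketch using the sandwich package:

# Sketch: passing a matrix and a function to the vcov argument.
library(sandwich)
mod <- lm(mpg ~ hp, data = mtcars)
mfx1 <- marginaleffects(mod, vcov = vcovHC(mod)) # a variance-covariance matrix
mfx2 <- marginaleffects(mod, vcov = vcovHAC)     # a function returning such a matrix
summary(mfx1)
summary(mfx2)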
For linear mixed effects models we can apply the Satterthwaite and Kenward-Roger corrections in the same way as above:
library(marginaleffects)
library(patchwork)
library(lme4)

dat <- mtcars
dat$cyl <- factor(dat$cyl)
dat$am <- as.logical(dat$am)
mod <- lmer(mpg ~ hp + am + (1 | cyl), data = dat)
Marginal effects at the mean with classical standard errors and z-statistic:
marginaleffects(mod, newdata = "mean")
#> type term contrast dydx std.error statistic p.value conf.low conf.high
#> 1 response hp dY/dX -0.05184187 0.01146238 -4.522784 6.103158e-06 -0.07430773 -0.02937602
#> 2 response am TRUE - FALSE 4.66614142 1.13425639 4.113833 3.891429e-05 2.44303975 6.88924310
Marginal effects at the mean with Kenward-Roger adjusted variance-covariance and degrees of freedom:
marginaleffects(mod,
    newdata = "mean",
    vcov = "kenward-roger")
#> type term contrast dydx std.error statistic p.value conf.low conf.high df
#> 1 response hp dY/dX -0.05184187 0.01518879 -3.413167 0.09642979 -0.1305545 0.02687074 1.682575
#> 2 response am TRUE - FALSE 4.66614142 1.28244270 3.638479 0.08741990 -1.9798401 11.31212296 1.682575
We can use the same option in any of the package’s core functions, including:
plot_cap(mod, condition = "hp", vcov = "satterthwaite")
See the brms vignette for a discussion of Bayesian estimates and credible intervals.