The goal of
partR2 is to partition the variance explained in generalized linear mixed models (GLMMs) into variation unique to and shared among predictors. This can be done using semi-partial (or ‘part’) \(R^2\) and inclusive \(R^2\). But the package also does a few other things. Here is a quick summary of what the package calculates:
- marginal and conditional \(R^2\) for the full model, with parametric bootstrap confidence intervals
- part (semi-partial) \(R^2\) for individual predictors and for combinations of predictors
- inclusive \(R^2\)
- structure coefficients
- beta weights (standardised model estimates)
To install the development version of partR2, use remotes::install_github() with the package's GitHub repository; the stable version is available from CRAN.
In order to use the package appropriately, it is important to know about the key quantities and how they are calculated. One brief clarification up front: the beta weights reported by partR2 are simply standardised model estimates; packages such as effectsize (Makowski & Lüdecke, 2019) offer more flexible ways to estimate beta weights.
partR2: ratios of variance components
The rptR package (Stoffel et al., 2017) and the
partR2 package go hand in hand. The intra-class coefficient or repeatability calculated in
rptR is the proportion of variance explained by random effects while the
partR2 package estimates the proportion of variance explained by fixed effects (or fixed plus random effects when
R2_type = "conditional"). Both the repeatability \(R\) and the coefficient of determination \(R^2\) are therefore ratios of variance components. However, we now changed the design of the
partR2 package compared to
partR2 user now fits the model first in
lme4 so that any modeling problem can be recognized straight away. We think this is generally preferable as it separates the modeling part from the rest of the calculations and allows the user to focus on specifying an appropriate model first.
The workhorse of the package is a single function,
partR2(). It takes a fitted model from
lme4 (Bates et al. 2015), with either a Gaussian, Poisson or binomial family. The function returns a
partR2 object. The package includes print(), summary() and
forestplot() functions to display the results, and a helper function
mergeR2 that is useful for models with interactions (see below).
Before we go through some examples, we load the
biomass dataset. This is a simulated dataset that aims to mimic a study on biomass production in grasslands. In a nutshell, virtual invertebrates were sampled once every year over 10 consecutive years (Year) from 20 different virtual populations (Population). Temperature and Precipitation were measured and overall species diversity (SpeciesDiversity) and Biomass were recorded for each Population in each Year.
library(partR2)
library(lme4)

data("biomass")
head(biomass)
#>   Temperature Precipitation Population SpeciesDiversity Year  Biomass
#> 1    24.62971      399.7463          1               24 1991 41.61498
#> 2    21.61168      371.0526          1               24 1992 37.78420
#> 3    23.28266      414.3659          1               24 1993 40.21902
#> 4    27.53635      416.6633          1               24 1994 44.45499
#> 5    26.67594      403.2696          1               24 1995 43.63841
#> 6    29.43292      410.0026          1               24 1996 44.34646
#>   ResilienceLat Recovered NotRecovered ExtinctionLat Extinction
#> 1     0.8095387        38           12      5.274812          4
#> 2     0.6994134        40           10      3.068951          5
#> 3     0.9060063        47            3      5.437968          7
#> 4     0.9915898        50            0      5.460337          4
#> 5     0.9178274        46            4      6.174574          5
#> 6     0.9025914        45            5      5.076269          4
First we fit a linear mixed model in
lme4. We assume, that Biomass depends on effects of Year, Temperature, Precipitation and SpeciesDiversity (fitted as fixed effects) and Population (fitted as a random effect).
modBM <- lmer(Biomass ~ Year + Temperature + Precipitation + SpeciesDiversity + (1|Population), data = biomass)
Now we would usually do the standard model checks and evaluations to ensure that the model works well. For the sake of simplicity (and because we simulated the data from Gaussian distributions), we skip this step here and go straight into the analysis with
partR2. We first calculate the overall marginal \(R^2\) and use parametric bootstrapping to estimate confidence intervals. Marginal \(R^2\) refers to the variance explained by fixed effect predictors relative to the total variance in the response. Alternatively, we can estimate conditional \(R^2\) (by setting
R2_type ="conditional"), which is the variance explained by fixed effects and random effects together relative to the total variance in the response.
Note that we supply the fitted
merMod object (the
lme4 output) and the original dataset used to fit the model. If no data is provided, partR2 will try to fetch it, so it is usually not necessary to provide the data. There is one important thing to pay attention to: If there are missing observations for some of the predictors/response,
lmer (or glmer) will remove all rows containing
NA, which will result in a mismatch between the data in the
data object and the data used to fit the model. In order to avoid complications, it is advisable to remove rows with missing data prior to fitting the model.
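A minimal sketch of this pre-cleaning step (using base R on the biomass dataset, which is in fact complete, so the snippet is purely illustrative):

# Keep only rows that are complete for all variables used in the model
vars <- c("Biomass", "Year", "Temperature", "Precipitation",
          "SpeciesDiversity", "Population")
biomass_cc <- biomass[complete.cases(biomass[, vars]), ]
# modBM could then be refitted with data = biomass_cc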
R2_BM <- partR2(modBM, data = biomass, R2_type = "marginal", nboot = 10)
R2_BM
#> 
#> 
#> R2 (marginal) and 95% CI for the full model: 
#>       R2 CI_lower CI_upper nboot ndf
#>   0.6006   0.5606   0.6385    10   5
#> 
#> ----------
#> 
#> Part (semi-partial) R2:
#>  "No partitions selected."
Marginal \(R^2\) is around 60% and confidence intervals are fairly narrow. Temperature and Precipitation are highly correlated in the dataset (as they often are in real-life situations) and we want to know how much each of them uniquely explains and what they explain together.
R2_BMa <- partR2(modBM, partvars = c("Temperature", "Precipitation"),
                 R2_type = "marginal", nboot = 10)
R2_BMa
#> 
#> 
#> R2 (marginal) and 95% CI for the full model: 
#>       R2 CI_lower CI_upper nboot ndf
#>   0.6006   0.5276   0.6706    10   5
#> 
#> ----------
#> 
#> Part (semi-partial) R2:
#>                Predictor(s)     R2 CI_lower CI_upper nboot ndf
#>                       Model 0.6006   0.5276   0.6706    10   5
#>                 Temperature 0.0427   0.0000   0.1839    10   4
#>               Precipitation 0.0830   0.0170   0.2190    10   4
#>   Temperature+Precipitation 0.3913   0.3245   0.4880    10   3
So Temperature and Precipitation each uniquely explain only a small proportion of the variation in Biomass (around 4% and 9%, respectively). Together, however, they explain around 39% of the variation. This situation is typical for correlated predictors: part \(R^2\) is the variance uniquely explained by each predictor, while here a large part of the variance is explained jointly by both predictors. If Temperature is removed from the model, the variance explained is only marginally reduced, because Precipitation still explains a large part of the variance in Biomass (and vice versa).
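To make this intuition concrete, here is a sketch of the logic behind a part \(R^2\): refit the model without Temperature and compare the marginal \(R^2\) of the reduced model with that of the full model. This mirrors how the part \(R^2\) is defined and is meant only as an illustration, not a replacement for the partR2 call above.

# Reduced model without Temperature
modBM_noTemp <- lmer(Biomass ~ Year + Precipitation + SpeciesDiversity +
                       (1|Population), data = biomass)
R2_noTemp <- partR2(modBM_noTemp, data = biomass, R2_type = "marginal", nboot = 10)
# The drop in marginal R2 (full model minus reduced model) approximates the part
# R2 of Temperature; it is small here because Precipitation still explains much
# of the same variance.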
Besides part \(R^2\),
partR2 also estimates inclusive \(R^2\), standardized model estimates (beta weights) and structure coefficients. These are shown when calling the summary() function.
summary(R2_BMa)
#> 
#> 
#> R2 (marginal) and 95% CI for the full model: 
#>       R2 CI_lower CI_upper ndf
#>   0.6006   0.5276   0.6706   5
#> 
#> ----------
#> 
#> Part (semi-partial) R2:
#>                Predictor(s)     R2 CI_lower CI_upper ndf
#>                       Model 0.6006   0.5276   0.6706   5
#>                 Temperature 0.0427   0.0000   0.1839   4
#>               Precipitation 0.0830   0.0170   0.2190   4
#>   Temperature+Precipitation 0.3913   0.3245   0.4880   3
#> 
#> ----------
#> 
#> Inclusive R2 (SC^2 * R2):
#>           Predictor    IR2 CI_lower CI_upper
#>                Year 0.0219   0.0118   0.0433
#>         Temperature 0.3530   0.2839   0.4027
#>       Precipitation 0.3749   0.3142   0.4778
#>    SpeciesDiversity 0.1873   0.1381   0.2516
#> 
#> ----------
#> 
#> Structure coefficients r(Yhat,x):
#>           Predictor     SC CI_lower CI_upper
#>                Year 0.1912   0.1478   0.2548
#>         Temperature 0.7667   0.6899   0.8072
#>       Precipitation 0.7901   0.7057   0.8568
#>    SpeciesDiversity 0.5585   0.4651   0.6286
#> 
#> ----------
#> 
#> Beta weights (standardised estimates)
#>           Predictor     BW CI_lower CI_upper
#>                Year 0.1140   0.0752   0.1716
#>         Temperature 0.2855   0.1733   0.3854
#>       Precipitation 0.4004   0.2908   0.5238
#>    SpeciesDiversity 0.3986   0.3368   0.4749
#> 
#> ----------
Beta weights (BW) show that all four predictors seem to have an effect on Biomass, because none of the confidence intervals overlaps zero. The effect of Precipitation appears largest. Structure coefficients (SC) tell us that both Temperature and Precipitation are quite strongly correlated with the overall model prediction. Structure coefficients are correlations, and by squaring them and multiplying by the marginal \(R^2\) of the full model, we get the inclusive \(R^2\) (\(IR^2 = SC^2 * R^2\)). SC and \(IR^2\) are large for Temperature and Precipitation because both are highly correlated with the predicted response, but their part \(R^2\) values (the variance that each uniquely explains) are small due to their collinearity.
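As a quick arithmetic check of the \(IR^2 = SC^2 * R^2\) relationship, using the point estimates printed above (exact values will differ between bootstrap runs):

SC_temp <- 0.7667    # structure coefficient for Temperature (from the summary)
R2_full <- 0.6006    # marginal R2 of the full model
SC_temp^2 * R2_full  # ~0.353, matching the reported inclusive R2 for Temperature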
partR2 calculates part \(R^2\) for all specified predictors individually and in all possible combinations. The number of combinations increases exponentially, with \(2^n-1\) combinations for \(n\) predictors (four predictors, for example, already yield 15 combinations), and computation time increases accordingly. There are two options to control the number of part \(R^2\) values to be calculated. Sometimes we want only the part \(R^2\) for each predictor, but not for all their combinations. We can specify this with the argument
max_level = 1.
R2_BMb <- partR2(modBM,
                 partvars = c("Temperature", "Precipitation", "Year", "SpeciesDiversity"),
                 R2_type = "marginal", max_level = 1, nboot = 10)
R2_BMb
#> 
#> 
#> R2 (marginal) and 95% CI for the full model: 
#>       R2 CI_lower CI_upper nboot ndf
#>   0.6006   0.5206   0.6496    10   5
#> 
#> ----------
#> 
#> Part (semi-partial) R2:
#>        Predictor(s)     R2 CI_lower CI_upper nboot ndf
#>               Model 0.6006   0.5206   0.6496    10   5
#>         Temperature 0.0427   0.0000   0.1042    10   4
#>       Precipitation 0.0830   0.0000   0.1428    10   4
#>                Year 0.0125   0.0000   0.0752    10   4
#>    SpeciesDiversity 0.1806   0.0358   0.2364    10   4
Another option is offered by the partbatch argument.
partbatch takes a list of character vectors that contain predictor names. The terms in each character vector are then always fitted or removed together. This is most useful for models with many variables, when using dummy coding or when predictors are otherwise linked in some meaningful way. We illustrate the use of
partbatch here by requesting part \(R^2\) for the two climatic variables (Temperature and Precipitation) in combination.
R2_BMc <- partR2(modBM, partbatch = list(c("Temperature", "Precipitation")),
                 R2_type = "marginal", nboot = 10)
R2_BMc
#> 
#> 
#> R2 (marginal) and 95% CI for the full model: 
#>       R2 CI_lower CI_upper nboot ndf
#>   0.6006   0.5417   0.6594    10   5
#> 
#> ----------
#> 
#> Part (semi-partial) R2:
#>                Predictor(s)     R2 CI_lower CI_upper nboot ndf
#>                       Model 0.6006   0.5417   0.6594    10   5
#>   Temperature+Precipitation 0.3913   0.3228   0.4761    10   3
partbatch can be used instead of or in combination with
partvars. For convenience it is also possible to name the elements in the list given to
partbatch. The output will then show the name of the batch instead of all list elements. We now request part \(R^2\) for the two climatic variables (now named ClimateVars) along with SpeciesDiversity.
R2_BMd <- partR2(modBM, partvars = c("SpeciesDiversity"),
                 partbatch = list(ClimateVars = c("Temperature", "Precipitation")),
                 R2_type = "marginal", nboot = 10)
R2_BMd
#> 
#> 
#> R2 (marginal) and 95% CI for the full model: 
#>       R2 CI_lower CI_upper nboot ndf
#>   0.6006   0.5576    0.665    10   5
#> 
#> ----------
#> 
#> Part (semi-partial) R2:
#>                    Predictor(s)     R2 CI_lower CI_upper nboot ndf
#>                           Model 0.6006   0.5576   0.6650    10   5
#>                     ClimateVars 0.3913   0.3655   0.4766    10   3
#>                SpeciesDiversity 0.1806   0.1579   0.2868    10   4
#>    SpeciesDiversity+ClimateVars 0.5786   0.5374   0.6453    10   2
We can plot the results as forestplots, powered by
ggplot2 (Wickham, 2016). The output of
forestplot is a
ggplot object. We can customize the object using
ggplot syntax or assemble multiple plots with packages like
patchwork (Pedersen, 2019). In the example here, we added a few arguments for a nicer appearance (see
?forestplot for more information). What is plotted is controlled by the
type argument, which can be
type = "R2" (the default) for part \(R^2\),
type = "IR2" for inclusive \(R^2\),
type = "SC" for structure coefficients,
type = "BW" for beta weights and
type = "Ests" for raw model estimates.
library(patchwork)
p1 <- forestplot(R2_BMb, type = "R2", text_size = 10)
p2 <- forestplot(R2_BMb, type = "IR2", text_size = 10)
p3 <- forestplot(R2_BMb, type = "SC", text_size = 10)
p4 <- forestplot(R2_BMb, type = "BW", text_size = 10)
(p1 + p2) / (p3 + p4) + plot_annotation(tag_levels = "A")
However, for a publication we might want to extract the results, either for more customized plotting or to present them in a table. Here is how the results can be retrieved from a partR2 object.
# An overview
str(R2_BMb, max.level = 1)
#> List of 17
#>  $ call         : language lmer(formula = Biomass ~ Year + Temperature + Precipitation + SpeciesDiversity + (1 | Population), data = biomass)
#>  $ R2_type      : chr "marginal"
#>  $ R2           : tibble [5 × 5] (S3: tbl_df/tbl/data.frame)
#>  $ SC           : tibble [4 × 4] (S3: tbl_df/tbl/data.frame)
#>  $ IR2          : tibble [4 × 4] (S3: tbl_df/tbl/data.frame)
#>  $ BW           : tibble [4 × 4] (S3: tbl_df/tbl/data.frame)
#>  $ Ests         : tibble [4 × 4] (S3: tbl_df/tbl/data.frame)
#>  $ R2_boot      : tibble [5 × 2] (S3: tbl_df/tbl/data.frame)
#>  $ SC_boot      : tibble [4 × 2] (S3: tbl_df/tbl/data.frame)
#>  $ IR2_boot     : tibble [4 × 2] (S3: tbl_df/tbl/data.frame)
#>  $ BW_boot      : tibble [4 × 2] (S3: tbl_df/tbl/data.frame)
#>  $ Ests_boot    : tibble [4 × 2] (S3: tbl_df/tbl/data.frame)
#>  $ partvars     : chr [1:4] "Temperature" "Precipitation" "Year" "SpeciesDiversity"
#>  $ partbatch    : logi NA
#>  $ CI           : num 0.95
#>  $ boot_warnings: List of 10
#>  $ boot_messages: List of 10
#>  - attr(*, "class")= chr "partR2"
The list shows the key elements of a
partR2 object, which contain (a) the point estimates and confidence intervals and (b) the bootstrap replicates. Note that each bootstrap replicate refers to a model fit to a simulated dataset (data simulated based on the model estimates). Bootstrap estimates are stored in list columns, where every element contains a vector with all bootstrap estimates for a given term.
# (a) point estimates and confidence intervals
R2_BMb$R2    # R2s
R2_BMb$SC    # Structure coefficients
R2_BMb$IR2   # inclusive R2s
R2_BMb$BW    # Standardised model estimates
R2_BMb$Ests  # Model estimates

# (b) bootstrap replicates
R2_BMb$R2_boot
R2_BMb$SC_boot
R2_BMb$IR2_boot
R2_BMb$BW_boot
R2_BMb$Ests_boot
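Here is a minimal sketch of working with these list columns. The exact column layout is an assumption here, so check str(R2_BMb$R2_boot) for the structure in your version of the package.

# Inspect the layout of the bootstrap tibble first
str(R2_BMb$R2_boot)
# Assuming the second column is the list column of bootstrap vectors, unlist the
# first element to get the replicates for the full model ...
boot_full <- unlist(R2_BMb$R2_boot[[2]][1])
# ... and summarise them, e.g. with a simple percentile interval
quantile(boot_full, probs = c(0.025, 0.975))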
Parametric bootstrapping works by simulating new response variables based on the model estimates and then refitting the model to these simulated responses. This regularly causes warnings, usually about convergence problems or singular fits. Such warnings often occur when one of the variance components is close to zero (often one of the random effects), and removing those components often prevents them. However, if the cause of the warnings is simply that some component is estimated at the boundary, this does not constitute a major problem: it is more important that the original model fit is scrutinized for validity than every single bootstrap iteration. Nevertheless,
partR2 captures warnings (and messages) and saves them in the return object. Let's have a look at the warnings and messages for the first two simulations, though it turns out there are none in this analysis.
R2_BMb$boot_warnings[1:2]
#> $sim_1
#> NULL
#> 
#> $sim_2
#> NULL

R2_BMb$boot_messages[1:2]
#> $sim_1
#> NULL
#> 
#> $sim_2
#> NULL
We generally advise doing all variable transformations before fitting the model. Otherwise it is important to specify the variable in the
partvars (or partbatch) argument exactly (!) as it was specified in the original model fit. Here is an example where we fit an additional term for Precipitation^2. A model with Precipitation and Precipitation^2 produces warning messages because of the strong correlation between the linear and the squared term. This can be solved by centering before squaring (see Schielzeth 2010).
biomass$PrecipitationC <- biomass$Precipitation - mean(biomass$Precipitation)

modBMC <- lmer(Biomass ~ Temperature + PrecipitationC + I(PrecipitationC^2) +
                 (1|Population), data = biomass)
R2_BMe <- partR2(modBMC, partvars = c("Temperature"),
                 partbatch = list(c("PrecipitationC", "I(PrecipitationC^2)")),
                 nboot = 10)
R2_BMe
#> 
#> 
#> R2 (marginal) and 95% CI for the full model: 
#>       R2 CI_lower CI_upper nboot ndf
#>   0.4365    0.351   0.4975    10   4
#> 
#> ----------
#> 
#> Part (semi-partial) R2:
#>                                     Predictor(s)     R2 CI_lower CI_upper nboot ndf
#>                                            Model 0.4365   0.3510   0.4975    10   4
#>               PrecipitationC+I(PrecipitationC^2) 0.0997   0.0039   0.1693    10   2
#>                                      Temperature 0.0497   0.0000   0.1237    10   3
#>   Temperature+PrecipitationC+I(PrecipitationC^2) 0.4365   0.3510   0.4975    10   1
More generally, we have previously had problems with partial matching of term names (e.g. the terms female and male in the same model). Although this problem should now be fixed, it is worth being aware that unusual names may cause complications; renaming usually offers an easy solution.
Parametric bootstrapping is an inherently slow process, because it involves repeated fitting of mixed-effects models. Furthermore, computation time increases exponentially with the number of terms requested, as all of them are tested in isolation and in combination. It is therefore advisable to run preliminary analyses with a low number of bootstraps first, just to check that things work and make sense in principle. The final analysis should then be done with a large number of bootstraps (at least 100, better 1000). This can take time.
A simple way to save runtime is to distribute the bootstrap iterations across multiple cores.
partR2 parallelises with the
future (Bengtsson, 2019) and
furrr (Vaughan & Dancho, 2018) packages. This gives the user flexibility in how exactly to parallelise, and it should work on whatever OS you run your analyses on, be it Mac, Windows or Linux.
We illustrate the parallel option of
partR2 with the
modBM fitted above. First we specify how we want to parallelise using future’s
plan() function. Check out
?future::plan for more information. Generally, if the analysis is run in RStudio, it is recommended to use
plan(multisession). After specifying the plan, you need to set the
parallel = TRUE argument when calling
partR2() and bootstrapping will run in parallel. If there is no specified plan,
partR2 will still run sequentially.
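A minimal sketch of the parallel workflow, assuming the future package is installed and reusing modBM from above (the number of workers is just an example):

library(future)
# Set up a parallel plan; multisession is a good default, especially in RStudio
plan(multisession, workers = 4)
R2_BM_par <- partR2(modBM, data = biomass,
                    partvars = c("Temperature", "Precipitation"),
                    R2_type = "marginal", nboot = 100, parallel = TRUE)
# Switch back to sequential processing afterwards
plan(sequential)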
Partitioning \(R^2\) in Poisson and binomial models works with the same
partR2 function, in the same way as for Gaussian data. Here, we simulated a Poisson distributed response variable Extinction for the biomass dataset.
First we fit a Poisson model using
lme4::glmer with Year, Temperature, Precipitation and SpeciesDiversity as fixed effect predictors and Population as a random effect. Then we request part \(R^2\) for Temperature and Precipitation.
Since a model with raw predictors produces warning messages (including the recommendation to scale variables), we scale Temperature and Precipitation and center Year and SpeciesDiversity.
biomass$YearC <- biomass$Year - mean(biomass$Year)
biomass$TemperatureS <- scale(biomass$Temperature)
biomass$PrecipitationS <- scale(biomass$Precipitation)
biomass$SpeciesDiversityC <- biomass$SpeciesDiversity - mean(biomass$SpeciesDiversity)

modExt <- glmer(Extinction ~ YearC + TemperatureS + PrecipitationS +
                  SpeciesDiversityC + (1|Population),
                data = biomass, family = "poisson")
R2_Ext <- partR2(modExt, partvars = c("TemperatureS", "PrecipitationS"),
                 R2_type = "marginal", nboot = 10)
#> An observational level random-effect has been fitted
#> to account for overdispersion.

print(R2_Ext, round_to = 3) # rounding decimals in print and summary
#> 
#> 
#> R2 (marginal) and 95% CI for the full model: 
#>      R2 CI_lower CI_upper nboot ndf
#>   0.077    0.043    0.192    10   5
#> 
#> ----------
#> 
#> Part (semi-partial) R2:
#>                  Predictor(s)    R2 CI_lower CI_upper nboot ndf
#>                         Model 0.077    0.043    0.192    10   5
#>                  TemperatureS 0.002    0.000    0.129    10   4
#>                PrecipitationS 0.043    0.005    0.163    10   4
#>   TemperatureS+PrecipitationS 0.058    0.018    0.176    10   3
We also simulated proportion-scale response variables, Recovered and NotRecovered, for the biomass dataset. Again, we first fit a binomial model using
lme4::glmer with Year, Temperature, Precipitation and SpeciesDiversity as fixed effect predictors and Population as a random effect. Then we calculate part \(R^2\) for Temperature and Precipitation. Again, we use centered/scaled predictors to facilitate model convergence.
modRecov <- glmer(cbind(Recovered, NotRecovered) ~ YearC + TemperatureS +
                    PrecipitationS + SpeciesDiversityC + (1|Population),
                  data = biomass, family = "binomial")
R2_Recov <- partR2(modRecov, partvars = c("TemperatureS", "PrecipitationS"),
                   R2_type = "marginal", nboot = 10)
summary(R2_Recov, round_to = 3)
#> 
#> 
#> R2 (marginal) and 95% CI for the full model: 
#>      R2 CI_lower CI_upper ndf
#>   0.104    0.078    0.192   5
#> 
#> ----------
#> 
#> Part (semi-partial) R2:
#>                  Predictor(s)    R2 CI_lower CI_upper ndf
#>                         Model 0.104    0.078    0.192   5
#>                  TemperatureS 0.008    0.000    0.102   4
#>                PrecipitationS 0.006    0.000    0.099   4
#>   TemperatureS+PrecipitationS 0.044    0.013    0.135   3
#> 
#> ----------
#> 
#> Inclusive R2 (SC^2 * R2):
#>           Predictor   IR2 CI_lower CI_upper
#>               YearC 0.015    0.009    0.022
#>        TemperatureS 0.048    0.034    0.074
#>      PrecipitationS 0.041    0.031    0.059
#>   SpeciesDiversityC 0.045    0.022    0.130
#> 
#> ----------
#> 
#> Structure coefficients r(Yhat,x):
#>           Predictor    SC CI_lower CI_upper
#>               YearC 0.376    0.267    0.469
#>        TemperatureS 0.680    0.497    0.794
#>      PrecipitationS 0.629    0.419    0.757
#>   SpeciesDiversityC 0.656    0.496    0.837
#> 
#> ----------
#> 
#> Beta weights (standardised estimates)
#>           Predictor    BW CI_lower CI_upper
#>               YearC 0.264    0.190    0.337
#>        TemperatureS 0.307    0.174    0.454
#>      PrecipitationS 0.247    0.161    0.347
#>   SpeciesDiversityC 0.477    0.317    0.859
#> 
#> ----------
Both Poisson and binomial models give low \(R^2\) values, even though the data have been generated with similar parameter settings as the Gaussian data. This situation is common and originates from the inherent rounding when traits are expressed as discrete or binary numbers. Fixed and random effects explain variation only at the latent scale, while the inherent distribution-specific variance contributes to the residual variance. This leads to a large residual variance and hence relatively lower explained variance components.
One thing to keep in mind when using GLMMs is observation-level random effects (OLRE, see Harrison 2014).
partR2 fits an OLRE internally to quantify overdispersion, but the function will recognise an OLRE if it is already fitted in the model. Fitting of the OLRE can be suppressed with
olre = FALSE in case this is required. (Note that we do not fit an OLRE for binomial models with binary responses, because in this case there is no overdispersion.)
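For example, fitting of the OLRE could be suppressed for the Poisson model above like this (a sketch; note that overdispersion would then no longer be accounted for):

R2_Ext_noOLRE <- partR2(modExt, partvars = c("TemperatureS", "PrecipitationS"),
                        R2_type = "marginal", olre = FALSE, nboot = 10)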
It is possible to estimate part \(R^2\) for models with interaction terms, but this requires some thought. Recall that the part \(R^2\) of a predictor is calculated as the reduction in the variance explained after the predictor is removed from the model. Interactions are the product of two predictors. When raw covariates are multiplied with each other, the product (= the interaction) is usually highly correlated with each of the main effects. Hence, interactions and main effects tend to have a large overlap in the variance that they explain in the response, which can lead to erroneous conclusions. A generally useful strategy when dealing with models with interactions is to center all covariates, because this usually mitigates the correlations and makes main effects independent of interactions (Schielzeth 2010). However, even with centered covariates there are different options for how to partition the variance explained.
We will introduce these options with the
guinea pig dataset (Mutwill et al. in prep.). The dataset contains testosterone measurements of 31 male guinea pigs, each measured at 5 time points. We will analyze log-transformed testosterone titers (Testo) and fit male identity (MaleID) as a random effect. As covariates the dataset contains Time (time point of measurement) and Rank (a rank index quantified from behavioral observations).
data(GuineaPigs)
head(GuineaPigs)
#>   MaleID Time Rank Testo
#> 1   M64B    1 0.10  4.52
#> 2   M44D    1   NA 12.60
#> 3   M42C    1 0.07  4.88
#> 4   M65B    1 0.03  7.92
#> 5   M74A    1 0.10  7.83
#> 6   M45D    1 0.27  7.00

GuineaPigs <- subset(GuineaPigs, !is.na(Testo) & !is.na(Rank))
GuineaPigs$TestoTrans <- log(GuineaPigs$Testo)
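If we wanted to follow the centering advice from above before fitting the interaction model, it could look like the following sketch (the example below proceeds with the raw covariates):

# Optional: center the covariates involved in the interaction (see Schielzeth 2010)
GuineaPigs$RankC <- GuineaPigs$Rank - mean(GuineaPigs$Rank)
GuineaPigs$TimeC <- GuineaPigs$Time - mean(GuineaPigs$Time)
# A centered version of the model would then be:
# modGPc <- lmer(TestoTrans ~ RankC * TimeC + (1|MaleID), data = GuineaPigs)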
One option is to estimate the variance explained by main effects irrespective of whether there are interactions. We can do this, as usual, by specifying main effects (and interactions) in the partvars argument.
partR2 will then iteratively remove each of those predictors and will treat interactions and main effects equally.
modGP <- lmer(TestoTrans ~ Rank * Time + (1|MaleID), data = GuineaPigs)
R2_modGPa <- partR2(modGP, partvars = c("Rank", "Time", "Rank:Time"), nboot = 10)
R2_modGPa
#> 
#> 
#> R2 (marginal) and 95% CI for the full model: 
#>       R2 CI_lower CI_upper nboot ndf
#>   0.1634   0.0879   0.2214    10   4
#> 
#> ----------
#> 
#> Part (semi-partial) R2:
#>          Predictor(s)     R2 CI_lower CI_upper nboot ndf
#>                 Model 0.1634   0.0879   0.2214    10   4
#>                  Rank 0.0900   0.0186   0.1596    10   3
#>                  Time 0.0051   0.0000   0.0880    10   3
#>             Rank:Time 0.0297   0.0000   0.1087    10   3
#>             Rank+Time 0.1241   0.0509   0.1884    10   2
#>        Rank+Rank:Time 0.1623   0.0869   0.2205    10   2
#>        Time+Rank:Time 0.0496   0.0000   0.1255    10   2
#>   Rank+Time+Rank:Time 0.1634   0.0879   0.2214    10   1
A conceptual Venn diagram shows what has been estimated. Y illustrates the total variance of the response (here TestoTrans), YRE the variance explained by random effects (here MaleID) and YR the residual variance. The other fields are the variance explained by the main effects (X1 = Rank and X2 = Time) and their interaction (Xint). Intersections show parts of the phenotypic variance explained by multiple components. What we have estimated is the variance uniquely explained by the main effects and the variance uniquely explained by the interaction. Variance that is shared between main effects and the interaction is attributed neither to the main effects nor to the interaction. We have thus estimated the variance Yx1 (horizontal stripes, Rank) and Yx2 (vertical stripes, Time), while Yint (dotted) is attributed to the interaction (Rank:Time).