Minimal Example

Nicole Erler

2018-08-14

In this vignette, we use the NHANES data that are part of the JointAI package. For more information on these data, see the help file for the NHANES data, the web page of the National Health and Nutrition Examination Survey (NHANES), and the vignette Visualizing Incomplete Data, in which the NHANES data are explored.

Fitting a linear regression model

Fitting a linear regression model with JointAI is straightforward with the function lm_imp():

lm1 <- lm_imp(SBP ~ gender + age + race + WC + alc + educ + albu + bili,
              data = NHANES, n.iter = 500, progress.bar = 'none')

The specification of lm_imp() is similar to the specification of a linear regression model for complete data using lm().1 In this minimal example the only difference is that for lm_imp() the number of iterations n.iter has to be specified. Of course, there are many more parameters that can or should be specified; many of them are explained in detail in the vignette Model Specification.

n.iter specifies the length of the Markov chain, i.e., the number of draws from the posterior distribution of the parameters and unobserved values. How many iterations are necessary depends on the data and the complexity of the model, and can vary from as few as 100 up to thousands or millions.

One important criterion is that the Markov chains need to have converged. This can be evaluated visually with a traceplot.

Traceplot

traceplot(lm1)

The function traceplot() produces a plot of the sampled values across iterations per parameter. By default, three2 Markov chains are produced for each parameter and represented by different colors.

When the sampler has converged, the chains show a horizontal band, as in the figure above. Conversely, when the traces show a trend, convergence has not been reached and more iterations are necessary (e.g., using add_samples()).
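For instance, the chains of lm1 could be extended by another 500 iterations and the traceplot inspected again (a sketch, assuming add_samples() returns the updated model object):

lm2 <- add_samples(lm1, n.iter = 500)
traceplot(lm2)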

When convergence has been achieved, we can obtain the result of the model from the model summary.

Model Summary

Results from a JointAI model can be printed using

summary(lm1)
#> 
#>  Linear model fitted with JointAI 
#> 
#> Call:
#> lm_imp(formula = SBP ~ gender + age + race + WC + alc + educ + 
#>     albu + bili, data = NHANES, n.iter = 500, progress.bar = "none")
#> 
#> Posterior summary:
#>                           Mean       SD      2.5%    97.5% tail-prob.
#> (Intercept)            60.6010 22.39656  18.23292 106.3354   0.004000
#> genderfemale           -3.0673  2.23899  -7.33642   1.3586   0.182667
#> age                     0.3638  0.07148   0.23097   0.5050   0.000000
#> raceOther Hispanic      0.8612  5.21985  -9.07720  11.3548   0.885333
#> raceNon-Hispanic White -1.2823  3.18466  -7.39218   5.1783   0.664000
#> raceNon-Hispanic Black  9.0326  3.57249   2.01145  15.8427   0.009333
#> raceother               3.8476  3.56645  -3.06812  10.8745   0.266667
#> educhigh               -3.4014  2.12949  -7.64901   0.7094   0.100000
#> WC                      0.2380  0.07998   0.08064   0.3977   0.001333
#> albu                    5.3024  4.08731  -3.43691  13.0637   0.185333
#> bili                   -5.5319  4.87784 -15.35527   4.1577   0.250667
#> alc>=1                  7.4365  2.25109   2.85509  11.6432   0.001333
#> 
#> Posterior summary of residual std. deviation:
#>            Mean     SD  2.5% 97.5%
#> sigma_SBP 13.17 0.7285 11.83 14.58
#> 
#> 
#> MCMC settings:
#> Iterations = 101:600
#> Sample size per chain = 500 
#> Thinning interval = 1 
#> Number of chains = 3 
#> 
#> Number of observations: 186

The output gives the posterior summary, i.e., the summary of the MCMC (Markov Chain Monte Carlo) sample (which consists of all Markov chains combined).

By default, summary() will only print the posterior summary for the main model parameters of the analysis model. How to select which parameters are shown is described in the vignette Selecting Parameters.

The summary consists of the posterior mean, the standard deviation and the 2.5% and 97.5% quantiles of the MCMC sample, and the tail probability. The tail probability is a measure of how likely the value 0 is under the estimated posterior distribution, and is calculated as \[2\times\min\left\{Pr(\theta > 0), Pr(\theta < 0)\right\}\] (where \(\theta\) is the parameter of interest).
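As an illustration, the tail probability of a single coefficient could be computed by hand from the combined MCMC sample. This is a hypothetical sketch: it assumes the fitted object stores its chains in an element MCMC (an mcmc.list), and uses the coefficient of age as an example.

theta <- do.call(rbind, lm1$MCMC)[, "age"]   # draws for the parameter of interest
2 * min(mean(theta > 0), mean(theta < 0))    # tail probability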

In the following graphics, the shaded areas represent the minimum of \(Pr(\theta > 0)\) and \(Pr(\theta < 0)\):

Additionally, some important characteristics of the MCMC samples on which the summary is based are given: the range and number of iterations (= Sample size per chain), the thinning interval, and the number of chains.

Furthermore, the number of observations (the sample size of the data) is given.

With the arguments start, end and thin it is possible to select which iterations from the MCMC sample are included in the summary.

For example, when the traceplot shows that the chains only converged after 1500 iterations, start = 1500 should be specified in summary().
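In that hypothetical case (a model run long enough that the first 1500 iterations should be discarded), the call would be:

summary(lm1, start = 1500)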

Plot of the posterior distributions

The posterior distributions can be visualized using the function densplot():

densplot(lm1)

By default, densplot() plots the empirical distribution of each of the chains separately. When joined = TRUE, the distribution of the combined chains is plotted.
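For example, to plot one density per parameter based on all chains combined:

densplot(lm1, joined = TRUE)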


  1. Specification for generalized linear models follows the specification of glm(), specification of linear mixed models follows lme() from package nlme.

  2. The number of chains can be changed with the argument n.chain.