Additive Information & Details of Evidence Synthesis 

February 3, 2024
The aides package, an R package, is a collection of functions designed to provide supplementary information and details for the critical processes of data synthesis and evidence evaluation. In the realm of evidence-based decision-making, these processes are pivotal, shaping the foundation upon which informed conclusions are drawn. The aides package, essentially a toolkit for pooled analysis of aggregated data, is meticulously crafted to enhance the inclusivity and depth of this decision-making approach.
Developed with core values of flexibility, ease of use, and comprehensibility, aides plays a crucial role in simplifying the often complex analysis process. This accessibility extends to both seasoned professionals and the broader public, fostering a more widespread engagement with synthesized evidence. The significance of such engagement cannot be overstated, as it empowers individuals to navigate through the intricacies of data, promoting a better understanding of the evidence at hand.
Moreover, aides is committed to staying at the forefront of advances in the methodology of data synthesis and evidence evaluation. This commitment ensures that users have access to advanced methods, further enhancing the robustness and reliability of their decision-making processes. In the long term, the overarching goal of the aides package is to contribute to knowledge translation, enabling individuals to make decisions based on a comprehensive understanding of the evidence. In essence, aides serves as a beacon, guiding users through the complex terrain of data synthesis and evidence evaluation, ultimately facilitating informed and impactful decision-making.
Users are suggested to use functions in aides by calling the library with the following syntax:
library(aides)
Briefly, aides currently consists of three focuses as follows:
Disparity: a newly proposed assumption regarding disparities in sample size analysis.
Discordance: a newly proposed assumption regarding discordance in rank of study size analysis.
Sequential analysis: a method to examine the sufficiency of information size.
Users can import their data and perform relevant tests or produce graphics using functions in the aides package. The present package consists of eight functions, listed as follows:
Disparity test:
PlotDistrSS() - in the section of Disparity test (Step 3)
TestDisparity() - in the section of Disparity test (Step 4)
PlotDisparity() - in the section of Disparity test (Step 5)

Discordance test:
TestDiscordance() - in the section of Discordance test (Step 3)

Sequential analysis:
DoSA() - in the section of Sequential analysis (Step 2)
DoOSA() - in the section of Sequential analysis (additional)
PlotOSA() - in the section of Sequential analysis (additional)
PlotPower() - in the section of Sequential analysis (additional)
The following steps and syntax demonstrate how users can carry out the disparity test. Users can test the distribution of sample sizes among studies with visualization (Figure 2.1) in order to choose an appropriate method for the disparity test. Figure 2.2 visualizes the test based on the excessive cases of outlier(s).
library(meta)
data("Olkin1995")
dataOlkin1995 <- Olkin1995
dataOlkin1995$n <- dataOlkin1995$n.exp + dataOlkin1995$n.cont
Using the function shapiro.test() is a simple way to test whether sample sizes are normally distributed, and further visualization with statistics can be carried out using the function PlotDistrSS().
shapiro.test(dataOlkin1995$n)
PlotDistrSS(dataOlkin1995$n)
#>
#> Shapiro-Wilk normality test
#>
#> data: dataOlkin1995$n
#> W = 0.2596, p-value < 2.2e-16
If users would like to check normality using the Kolmogorov-Smirnov test, they can set the parameter method with the argument "ks" in the function PlotDistrSS().
PlotDistrSS(n = n,
data = dataOlkin1995,
study = author,
time = year,
method = "ks")
TestDisparity(n = n,
data = dataOlkin1995,
study = author,
time = year)
#> Summary of disparities in sample size test:
#> Number of outliers = 13 (Excessive cases = 36509; P-value < 0.001)
#> Variability = 3.658 (P-value < 0.001)
#>
#> Outlier detection method: MAD
#> Variability detection method: CV
TestDisparity(n = n,
data = dataOlkin1995,
study = author,
time = year,
plot = TRUE)
Due to the non-normal distribution of the study sizes, as shown in Step 3 (Figure 2.1; see also the result of the Shapiro-Wilk test), a robust method is recommended for testing variability, which can be carried out by the following syntax:
rsltDisparity <- TestDisparity(n = n,
data = dataOlkin1995,
study = author,
time = year,
vrblty = "MAD")
#> Summary of disparities in sample size test:
#> Number of outliers = 13 (Excessive cases = 36509; P-value < 0.001)
#> Variability = 0.951 (P-value < 0.001)
#>
#> Outlier detection method: MAD
#> Variability detection method: MAD
The following syntax, instead of that shown in Step 5 above, is recommended for illustrating the disparity plot of variability based on the robust coefficient of variation:
PlotDisparity(rsltDisparity,
which = "CV",
szFntAxsX = 1)
The following steps and syntax demonstrate how users can carry out the discordance test. Figure 2.4 visualizes the test.
library(meta)
data("Fleiss1993bin")
dataFleiss1993bin <- Fleiss1993bin
dataFleiss1993bin$n <- dataFleiss1993bin$n.asp + dataFleiss1993bin$n.plac
dataFleiss1993bin$se <- sqrt((1 / dataFleiss1993bin$d.asp) - (1 / dataFleiss1993bin$n.asp) + (1 / dataFleiss1993bin$d.plac) - (1 / dataFleiss1993bin$n.plac))
TestDiscordance(n = n,
se = se,
study = study,
data = dataFleiss1993bin)
#> Summary of discordance in ranks test:
#> Statistics (Bernoulli exact): 2
#> P-value: 0.423
#> Note: No significant finding in the test of discordance in study size ranks.
TestDiscordance(n = n,
se = se,
study = study,
data = dataFleiss1993bin,
plot = TRUE)
Sequential analysis in evidence-based medicine is a statistical approach applied in clinical trials and meta-analyses to continually assess accumulating data, allowing for interim decisions on the effectiveness or safety of medical interventions (Kang, 2021; Jennison & Turnbull, 2005; Wetterslev et al., 2017; Wetterslev et al., 2008). In contrast to traditional methods that wait for all data to be collected, sequential analysis enables periodic assessments during a trial. This method is especially valuable when ongoing assessment is ethically or practically necessary. It seeks to balance the need for robust statistical inference with the ethical duty to ensure participant safety and prompt decision-making that can influence clinical practice. Proper planning and prespecification in study protocols are crucial for maintaining the integrity of sequential analysis (Thomas et al., 2019).
This method has the capability to manage the overall Type I error in clinical trials and cumulative meta-analyses through the utilization of an alpha spending function, exemplified by the approach introduced by O'Brien & Fleming (1979). The alpha spending function allocates less significance level to early interim analyses, demanding more substantial evidence for declaring statistical significance. Critical boundaries for significance are determined by this function, becoming less stringent as more data accumulate. Sequential analyses use these boundaries to decide at each interim analysis, declaring statistical significance if the cumulative Z-score crosses them. This approach minimizes the risk of false positives in sequential analyses, where multiple interim analyses are conducted, while maintaining the ability to detect true effects.
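To give a concrete sense of how an O'Brien-Fleming-type alpha-spending function behaves, the following sketch in plain R (an illustration, not code from aides; the four information fractions are arbitrary assumptions) computes the cumulative alpha spent and the corresponding two-sided critical boundaries at interim looks:

```r
# Sketch of O'Brien-Fleming-type alpha spending (Lan-DeMets form);
# illustrative only, not the internal implementation of aides.
alpha <- 0.05                       # overall two-sided type I error
frac  <- c(0.25, 0.50, 0.75, 1.00)  # assumed information fractions at each look

# Cumulative alpha spent at each look: 2 * (1 - pnorm(z_{1-alpha/2} / sqrt(t)))
spent <- 2 * (1 - pnorm(qnorm(1 - alpha / 2) / sqrt(frac)))

# Approximate two-sided critical boundaries: very stringent early,
# relaxing toward the conventional 1.96 at full information
bound <- qnorm(1 - alpha / 2) / sqrt(frac)

round(spent, 5)
round(bound, 3)
```

Note that almost no alpha is spent at the first look, and the boundary at full information equals the conventional critical value for the chosen alpha.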
Thus, the essentials of sequential analysis of a cumulative meta-analysis encompass study time, sample size, cumulative Z-score, effects of intervention or exposure, variance of the effects, the assumed probability of false positive (type I error; \(\alpha\)), the assumed probability of false negative (type II error; \(\beta\)), the required information size (RIS), and alpha-spending monitoring boundaries. The basic formula for calculating the RIS is as follows:
\[ RIS = 2 \times (Z_{1-\alpha/2} + Z_{1-\beta})^2 \times 2 \times \sigma^2 / \delta^2 \]
where \(\alpha\) and \(\beta\) represent the assumed overall probabilities of false positive and false negative, respectively. Additionally, \(\delta\) and \(\sigma^2\) refer to the assumed effects, representing either the minimal or expected meaningful effects, and the associated variance.
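For a concrete sense of the formula, the following sketch computes the unadjusted RIS with conventional error levels; the effect and variance values are illustrative assumptions (chosen to resemble the presumed effect of 0.025 and variance of 0.099 reported by DoSA() in this document), not output from aides:

```r
# Worked example of the RIS formula above (illustrative inputs, not aides output)
alpha  <- 0.05    # assumed overall type I error
beta   <- 0.20    # assumed type II error (power = 0.8)
delta  <- 0.025   # assumed minimal meaningful effect
sigma2 <- 0.099   # assumed variance of the effect

RIS <- 2 * (qnorm(1 - alpha / 2) + qnorm(1 - beta))^2 * 2 * sigma2 / delta^2
ceiling(RIS)      # about 4974 before any heterogeneity adjustment
```

As described in the DoSA() output further below, the package then multiplies such a quantity by a diversity-based adjustment factor to obtain the heterogeneity-adjusted requirement.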
The following steps and syntax demonstrate how users can carry out sequential analysis. Figure 2.5 shows the sequential analysis plot.
library(meta)
data("Fleiss1993bin")
dataFleiss1993bin <- Fleiss1993bin
DoSA(Fleiss1993bin,
source = study,
time = year,
r1 = d.asp,
n1 = n.asp,
r2 = d.plac,
n2 = n.plac,
measure = "RR",
PES = 0.1,
RRR = 0.2,
group = c("Aspirin", "Placebo"))
#> Summary of sequential analysis (main information)
#> Acquired sample size: 28003
#> Required sample size (heterogeneity adjusted): 20874
#> Cumulative z score: 2.035
#> Alpha-spending boundary: -1.692 and 1.692
#> Adjusted confidence interval is not necessary to be performed.
#>
#> Summary of sequential analysis (additional information)
#> 1. Assumed information
#> 1.1. Defined type I error: 0.05
#> 1.2. Defined type II error: 0.2
#> 1.3. Defined power: 0.8
#> 1.4. Presumed effect: 0.025
#> (risks in group 1 and 2 were 9.87315825% (expected) and 12.34144781% respectively; RRR = 0.2)
#> 1.5. Presumed variance: 0.099
#>
#> 2. Meta-analysis
#> 2.1. Setting of the meta-analysis
#> Data were pooled using inverse variance approach in random-effects model with DL method.
#> 2.2. Result of the meta-analysis
#> Log RR: -0.113 (95% CI: -0.222 to -0.004)
#>
#> 3. Adjustment factor
#> The required information size is calculated with an adjustment factor based on diversity (D-squared). Relevant parameters are listed as follows.
#> 3.1. Heterogeneity (I-squared): 39.6%
#> 3.2. Diversity (D-squared): 76%
#> 3.3. Adjustment factor: 4.103
DoSA(Fleiss1993bin,
source = study,
time = year,
r1 = d.asp,
n1 = n.asp,
r2 = d.plac,
n2 = n.plac,
measure = "RR",
PES = 0.1,
RRR = 0.2,
group = c("Aspirin", "Placebo"),
plot = TRUE)
Observed sequential analysis is recommended for those pooled analyses without pre-specified parameters for sequential analysis. In this situation, Step 2 should instead use the following syntax:
DoOSA(Fleiss1993bin,
source = study,
time = year,
r1 = d.asp,
n1 = n.asp,
r2 = d.plac,
n2 = n.plac,
measure = "RR",
group = c("Aspirin", "Placebo"))
#> Summary of observed sequential analysis (main information)
#> Acquired sample size: 28003
#> Optimal sample size (heterogeneity adjusted): 36197
#> Cumulative z score: 2.035
#> Alpha-spending boundary: -2.228 and 2.228
#> Adjusted confidence interval is suggested to be performed.
#>
#> Adjusted confidence interval based on type I error 0.0129289672426074:
#> -0.252 to 0.025
#>
#> Summary of observed sequential analysis (additional information)
#> 1. Observed information
#> 1.1. Defined type I error: 0.05
#> 1.2. Defined type II error: 0.2
#> 1.3. Defined power: 0.8
#> 1.4. Observed effect size: 0.019
#> (risks in group 1 and 2 were 10.44608758% and 12.34144781% respectively; RRR = 0.181)
#> 1.5. Observed variance: 0.101
#>
#> 2. Meta-analysis
#> 2.1. Setting of the meta-analysis
#> Data were pooled using inverse variance approach in random-effects model with DL method.
#> 2.2. Result of the meta-analysis
#> Log RR: -0.113 (95% CI: -0.222 to -0.004)
#>
#> 3. Adjustment factor
#> The optimal information size is calculated with an adjustment factor based on diversity (D-squared). Relevant parameters are listed as follows.
#> 3.1. Heterogeneity (I-squared): 39.6%
#> 3.2. Diversity (D-squared): 76%
#> 3.3. Adjustment factor: 4.103
Observed sequential analysis is illustrated using the same function (DoOSA()) with the argument TRUE for the parameter plot, and a plot of sequential-adjusted power can serve as an alternative graphic for observed sequential analysis. These analyses and graphics can be carried out by the following two steps with syntax:
output <- DoOSA(Fleiss1993bin,
source = study,
time = year,
r1 = d.asp,
n1 = n.asp,
r2 = d.plac,
n2 = n.plac,
measure = "RR",
group = c("Aspirin", "Placebo"),
plot = TRUE,
SAP = TRUE)
PlotPower(output)
Jennison, C., & Turnbull, B. W. (2005). Meta-analyses and adaptive group sequential designs in the clinical development process. Journal of Biopharmaceutical Statistics, 15(4), 537-558. https://doi.org/10.1081/BIP-200062273
Kang, H. (2021). Trial sequential analysis: novel approach for meta-analysis. Anesthesia and Pain Medicine, 16(2), 138-150. https://doi.org/10.17085/apm.21038
O'Brien, P. C., & Fleming, T. R. (1979). A multiple testing procedure for clinical trials. Biometrics, 35(3), 549-556. ftp://ftp.biostat.wisc.edu/pub/chappell/641/papers/paper34.pdf
Thomas, J., Askie, L. M., Berlin, J. A., Elliott, J. H., Ghersi, D., Simmonds, M., Takwoingi, Y., Tierney, J. F., & Higgins, J. P. T. (2019). Prospective approaches to accumulating evidence. In Higgins, J. P. T., & Green, S. (Eds.), Cochrane Handbook for Systematic Reviews of Interventions. Chichester (UK): John Wiley & Sons. https://training.cochrane.org/handbook/archive/v6/chapter22
Wetterslev, J., Thorlund, K., Brok, J., & Gluud, C. (2008). Trial sequential analysis may establish when firm evidence is reached in cumulative meta-analysis. Journal of Clinical Epidemiology, 61(1), 64-75. https://doi.org/10.1016/j.jclinepi.2007.03.013
Wetterslev, J., Jakobsen, J. C., & Gluud, C. (2017). Trial sequential analysis in systematic reviews with meta-analysis. BMC Medical Research Methodology, 17(1), 1-18. https://doi.org/10.1186/s12874-017-0315-7