Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Session
S28: Meta-analysis and systematic reviews I
Time:
Tuesday, 05/Sept/2023:
11:00am - 12:40pm

Session Chair: Sebastian Dragos Weber
Session Chair: Kristina Weber
Location: Seminar Room U1.195 (hybrid)


Presentations
11:00am - 11:20am

LFK index does not reliably detect bias in meta-analysis

Guido Schwarzer, Gerta Rücker

Institute of Medical Biometry and Statistics, Faculty of Medicine and Medical Center - University of Freiburg

Background: The LFK index has been introduced and promoted as a new, improved quantitative method to detect bias in meta-analysis (Furuya-Kanamori et al., 2018). Its putative main advantage over established tests for funnel plot asymmetry (Sterne et al., 2011) is that its performance does not depend on the number of studies in the meta-analysis (Furuya-Kanamori et al., 2020). To our knowledge, an independent evaluation of the LFK index has not been conducted.

Methods: We conducted a simulation study under the null hypothesis of no bias in meta-analysis with a continuous, normally distributed outcome, comparing the LFK index test to three standard tests for funnel plot asymmetry (Sterne et al., 2011). In total, 108 scenarios were evaluated by varying the number of studies, the mean and standard deviation in the experimental group, and the between-study heterogeneity. In addition, two settings with smaller or larger group sample sizes were considered.
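As an illustration of this kind of simulation (a minimal sketch, not the authors' exact design; all numeric settings are assumptions), the following Python snippet estimates the empirical type I error of Egger's regression test under the null hypothesis of no bias for meta-analyses of a continuous outcome:

```python
# Sketch: empirical type I error of Egger's regression test under "no bias".
# All numeric settings below are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulate_meta(k, n_per_group=25, mu=0.0, sd=1.0, tau=0.0):
    """Simulate k two-group trials with no true effect and no small-study bias."""
    theta = rng.normal(0.0, tau, size=k)          # between-study heterogeneity
    y, se = np.empty(k), np.empty(k)
    for i in range(k):
        x1 = rng.normal(mu + theta[i], sd, n_per_group)   # experimental group
        x0 = rng.normal(mu, sd, n_per_group)              # control group
        y[i] = x1.mean() - x0.mean()                      # mean difference
        se[i] = np.sqrt(x1.var(ddof=1) / n_per_group + x0.var(ddof=1) / n_per_group)
    return y, se

def egger_pvalue(y, se):
    """Egger's test: t-test of the intercept in the regression y/se ~ 1/se."""
    X = np.column_stack([np.ones_like(se), 1.0 / se])
    beta, ssr, *_ = np.linalg.lstsq(X, y / se, rcond=None)
    k = len(y)
    cov = (ssr[0] / (k - 2)) * np.linalg.inv(X.T @ X)
    t = beta[0] / np.sqrt(cov[0, 0])
    return 2.0 * stats.t.sf(abs(t), df=k - 2)     # two-sided p-value

n_sim, alpha = 1000, 0.10
for k in (10, 30, 100):
    rejections = sum(egger_pvalue(*simulate_meta(k)) < alpha for _ in range(n_sim))
    print(f"k={k:3d}: empirical type I error = {rejections / n_sim:.3f}")
```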

Results: In general, the type I error of the LFK index test showed a massive dependency on the number of studies k, ranging from 30 percent (k=10) to about 2 percent (k=100) for smaller group sample sizes and from 19 percent (k=10) to 0 percent (k=100) for larger group sample sizes. Egger's test adhered well to the prespecified significance level of 10 percent only under homogeneity, but was too liberal (smaller groups) or too conservative (larger groups) under heterogeneity. The rank test was too conservative for most simulation scenarios. The Thompson-Sharp test was too conservative under homogeneity, but adhered well to the significance level in the case of heterogeneity.

Conclusion: The LFK index in its current implementation should not be used to assess bias in meta-analysis. The Thompson-Sharp test shows the best performance in heterogeneous meta-analyses.

References:

Furuya-Kanamori, L., Barendregt, J. J. & Doi, S. A. R. A new improved graphical and quantitative method for detecting bias in meta-analysis. International Journal of Evidence-Based Healthcare 16, 195–203 (2018).

Furuya-Kanamori, L. et al. P value–driven methods were underpowered to detect publication bias: analysis of Cochrane review meta-analyses. Journal of Clinical Epidemiology 118, 86–92 (2020).

Sterne, J. A. C. et al. Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ 343, d4002 (2011).



11:20am - 11:40am

A REML method for the evidence-splitting model in network meta-analysis

Hans-Peter Piepho1, Johannes Forkman2, Waqas Malik1

1University of Hohenheim, Germany; 2Swedish University of Agricultural Sciences, Uppsala, Sweden

Checking for possible inconsistency between direct and indirect evidence is an important component of network meta-analysis. Recently, an evidence-splitting model has been proposed that allows direct and indirect evidence in a network to be separated and hence inconsistency to be assessed. A salient feature of this model is that the variance for heterogeneity appears in both the mean and the variance structure. Thus, full maximum likelihood (ML) has been proposed for estimating the parameters of this model. ML is known to yield biased variance component estimates in linear mixed models. The purpose of the present paper, therefore, is to propose a method based on residual maximum likelihood (REML). Our simulation shows that this new method is quite competitive with methods based on full ML in terms of bias and mean squared error. In addition, some limitations of the evidence-splitting model are discussed. While this model splits direct and indirect evidence, it is not a plausible model for the cause of inconsistency.
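To illustrate the ML/REML distinction in a much simpler setting than the evidence-splitting model itself (a minimal sketch with hypothetical data), the following Python snippet estimates the between-study variance of a standard random-effects meta-analysis by maximizing either the full or the restricted log-likelihood:

```python
# Sketch: ML vs REML estimation of the between-study variance tau^2 in a
# standard random-effects meta-analysis (toy data, not the evidence-splitting model).
import numpy as np
from scipy.optimize import minimize_scalar

y = np.array([0.32, 0.15, -0.05, 0.41, 0.22, 0.10])   # hypothetical study effects
v = np.array([0.04, 0.09, 0.06, 0.12, 0.05, 0.08])    # within-study variances

def neg_loglik(tau2, restricted):
    w = 1.0 / (v + tau2)
    mu_hat = np.sum(w * y) / np.sum(w)                 # profiled overall effect
    ll = -0.5 * (np.sum(np.log(v + tau2)) + np.sum(w * (y - mu_hat) ** 2))
    if restricted:
        ll -= 0.5 * np.log(np.sum(w))                  # REML correction term
    return -ll

for restricted in (False, True):
    fit = minimize_scalar(neg_loglik, bounds=(1e-8, 2.0), args=(restricted,),
                          method="bounded")
    print("REML" if restricted else "ML  ", "tau^2 estimate:", round(fit.x, 4))
```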



11:40am - 12:00pm

Random-effects meta-analysis of subgroup specific effects and treatment-by-subgroup interactions

Renato Valladares Panaro, Christian Röver, Tim Friede

University Medical Center Göttingen, Germany


Contrast-based meta-analysis investigates whether a treatment is more effective than a reference across a set of controlled trials. In addition to the main effect analysis, studies commonly report effect estimates by subgroups based on baseline characteristics (e.g., sex or age groups). Meta-analysis methods that consider subgroup information may thus be able to identify whether, and which, patient subgroups benefit most from an intervention by explicitly accounting for treatment-by-subgroup interaction effects.

Several meta-analytic approaches have been proposed in this context, modeling main effects and interactions jointly or separately (Godolphin et al., 2023; van Houwelingen et al., 2002). When this is done separately, standard methods for meta-analysis can be applied; however, the results might not be consistent across the analyses. To avoid such inconsistencies, an estimation matching strategy has recently been proposed that first analyzes subgroup contrasts and subsequently derives reference subgroup effects from the interaction residuals (Godolphin et al., 2023). However, estimation matching has some disadvantages when it comes to subgroup effect estimation, especially when not all subgroups are represented in all trials, or when heterogeneity in treatment-by-subgroup interactions is considered. This work investigates and compares the different subgroup estimators regarding their statistical properties and operational performance in simulation studies. The methods are motivated and illustrated using recent treatment trials in COVID-19 (WHO 2020, 2021).
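A simplified sketch of the within-trial idea (hypothetical numbers, fixed-effect pooling only, not the estimators compared in this work): estimate the treatment-by-subgroup interaction inside each trial and pool the per-trial interactions with inverse-variance weights, so that the pooled interaction avoids across-trial aggregation bias.

```python
# Sketch: within-trial treatment-by-subgroup interactions pooled across trials
# (hypothetical numbers, fixed-effect pooling only).
import numpy as np

# Subgroup-specific treatment effects (e.g., log odds ratios) and standard errors,
# reported separately by each of three trials.
effect_a = np.array([-0.30, -0.25, -0.40])   # subgroup A
se_a     = np.array([ 0.10,  0.12,  0.15])
effect_b = np.array([-0.10, -0.05, -0.20])   # subgroup B
se_b     = np.array([ 0.11,  0.13,  0.16])

# Within-trial interaction: difference of subgroup effects; variances add.
interaction = effect_a - effect_b
var_int     = se_a ** 2 + se_b ** 2

# Inverse-variance (fixed-effect) pooling of the per-trial interactions.
w = 1.0 / var_int
pooled    = np.sum(w * interaction) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
print(f"pooled interaction = {pooled:.3f} (SE {pooled_se:.3f})")
```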

Keywords: Clinical trials, COVID-19, between-trial heterogeneity, hierarchical models, random-effects, subgroups

References:

Godolphin PJ, White IR, Tierney JF, Fisher DJ. Estimating interactions and subgroup-specific treatment effects in meta-analysis without aggregation bias: A within-trial framework. Res Syn Meth. 2023;14(1):68-78. doi:10.1002/jrsm.1590

The WHO Rapid Evidence Appraisal for COVID-19 Therapies (REACT) Working Group. Association Between Administration of Systemic Corticosteroids and Mortality Among Critically Ill Patients With COVID-19: A Meta-analysis. JAMA. 2020;324(13):1330–1341. doi:10.1001/jama.2020.17023

The WHO Rapid Evidence Appraisal for COVID-19 Therapies (REACT) Working Group. Association Between Administration of IL-6 Antagonists and Mortality Among Patients Hospitalized for COVID-19: A Meta-analysis. JAMA. 2021;326(6):499–518. doi:10.1001/jama.2021.11330

van Houwelingen HC, Arends LR, Stijnen T. Advanced methods in meta-analysis: multivariate approach and meta-regression. Stat Med. 2002 Feb 28;21(4):589-624. doi: 10.1002/sim.1040.



12:00pm - 12:20pm

Nonproportional Hazards in Network Meta-Analysis: Efficient Strategies for Model Building and Analysis

Anna Wiksten1, Tiina Kirsilä2, Hans-Peter Piepho3, Zehua Zhou1

1Bristol Myers Squibb, Switzerland; 2Novartis Pharma AG, Switzerland; 3University of Hohenheim, Germany

Cancer therapies with different modes of action often show different times of efficacy onset. This leads to survival data that violate the proportional hazards assumption, and network meta-analysis (NMA) of such data should acknowledge this. Two suitable approaches are fractional polynomial (FP) models and piece-wise constant (PWC) models. These types of models can be difficult to fit in practice, and there is a need for efficient model building and selection strategies.
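As an illustration of how fractional polynomials relax the proportional hazards assumption (a sketch with assumed, not fitted, coefficients), the following Python snippet evaluates first-order FP log-hazards for two arms; because the treatment effect acts on both FP coefficients, the log hazard ratio varies with follow-up time:

```python
# Sketch: first-order fractional-polynomial log-hazards for two arms with
# assumed (not fitted) coefficients; the log hazard ratio varies with time.
import numpy as np

def fp_transform(t, p):
    """Fractional-polynomial transform; by convention, p = 0 means log(t)."""
    return np.log(t) if p == 0 else t ** p

t = np.linspace(0.5, 36.0, 72)      # months of follow-up
p = 0                                # power chosen from {-2, -1, -0.5, 0, 0.5, 1, 2, 3}

beta_ctrl = (-2.0, 0.10)             # (beta0, beta1) for the control arm
d         = (-0.50, 0.15)            # treatment effect on (beta0, beta1)

log_h_ctrl = beta_ctrl[0] + beta_ctrl[1] * fp_transform(t, p)
log_h_trt  = (beta_ctrl[0] + d[0]) + (beta_ctrl[1] + d[1]) * fp_transform(t, p)

log_hr = log_h_trt - log_h_ctrl      # time-varying log hazard ratio
print("log HR at 3, 12 and 36 months:",
      np.round(np.interp([3.0, 12.0, 36.0], t, log_hr), 3))
```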

In this presentation we will present a structured model building strategy for network meta-analysis of digitized Kaplan-Meier curves. We will show how the initial model selection can be performed using frequentist modelling with an arm-based NMA parameterization, and how the selected model and its uncertainty can then be evaluated using Bayesian methods as presented in Jansen (2011). The proposed model building strategy was published in Wiksten et al. (2020).

We will also present technical solutions showing how new studies can easily be added to an existing network using an R Markdown template.



12:20pm - 12:40pm

Statistical considerations on the coverage probability of a confidence interval when sequentially combining n-of-1 studies in a cumulative meta-analysis

Eleonora Carrozzo1, Georg Zimmermann2, Arne C. Bathke3, Daniel Neunhaeuserer4, Josef Niebauer1, Stefan Tino Kulnik1

1Ludwig Boltzmann Institute for Digital Health and Prevention, Salzburg, Austria; 2Department of Research and Innovation, Paracelsus Medical University, Salzburg, Austria; 3Department of Artificial Intelligence and Human Interfaces, University of Salzburg, Austria; 4Sport and Exercise Medicine Division, Department of Medicine, University of Padova, Italy

N-of-1 randomised clinical trials are receiving broader attention in healthcare research for assessing the effects of interventions. The conventional method to establish an intervention effect is a two-arm randomised controlled trial (RCT), in which participants are randomised to receive an experimental intervention or a control condition. In contrast, in an N-of-1 design, the individual acts as their own control. N-of-1 trials might lead to a higher quality of patient care while identifying the effectiveness of an intervention at the individual level.

However, the implementation of N-of-1 trials still presents methodological issues and barriers, such as a general lack of procedural knowledge.

We previously investigated whether sequentially combining the results of single N-of-1 trials in a random-effects meta-analysis allows us to detect statistically significant intervention effects with fewer participants than in a traditional, prospectively powered two-arm RCT. Using data from a crossover RCT, we showed that the same statistical inference as in the full RCT was reached, but with fewer participants.
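A minimal sketch of such a sequential (cumulative) procedure, using hypothetical per-participant effect estimates and DerSimonian-Laird random-effects pooling, with no adjustment for the repeated looks:

```python
# Sketch: cumulative DerSimonian-Laird random-effects meta-analysis of N-of-1
# results (hypothetical estimates; no adjustment for repeated looks).
import numpy as np

effects = np.array([0.6, 0.2, 0.8, 0.5, 0.1, 0.7, 0.4])   # per-participant estimates
ses     = np.array([0.4, 0.5, 0.4, 0.3, 0.6, 0.4, 0.3])   # their standard errors

def dl_meta(y, se):
    """DerSimonian-Laird pooled estimate and standard error."""
    w = 1.0 / se ** 2
    mu_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fixed) ** 2)                    # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)                # between-study variance
    w_star = 1.0 / (se ** 2 + tau2)
    mu = np.sum(w_star * y) / np.sum(w_star)
    return mu, np.sqrt(1.0 / np.sum(w_star))

for k in range(2, len(effects) + 1):
    mu, se_mu = dl_meta(effects[:k], ses[:k])
    lo, hi = mu - 1.96 * se_mu, mu + 1.96 * se_mu
    print(f"after {k} participants: {mu:.2f} [{lo:.2f}, {hi:.2f}]")
    if lo > 0 or hi < 0:                                   # 95% CI excludes zero
        print("CI excludes zero -> stop adding participants")
        break
```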

However, it is well known that under a random-effects model the actual coverage probability of the confidence interval for the overall effect size systematically falls below the nominal confidence level. Given the promising previous results, further investigation into the methodological properties of the sequential procedure is needed. In the present study, we performed a simulation study both under the null hypothesis and under the alternative (to assess power), in order to understand how and when this procedure may best be adopted in practice.


