Conference Agenda

Session Overview
Session
S34: Meta-analysis and systematic reviews II
Time:
Tuesday, 05/Sept/2023:
2:00pm - 3:40pm

Session Chair: Dominic Magirr
Session Chair: Jelena Cuklina
Location: Seminar Room U1.197 hybrid


Presentations
2:00pm - 2:20pm

IPW-based publication bias adjustment in network meta-analysis with clinical trial registries

Ao Huang1, Yi Zhou2, Satoshi Hattori3

1Department of Medical Statistics, University Medical Center Goettingen; 2Beijing International Center for Mathematical Research, Peking University; 3Department of Biomedical Statistics, Graduate School of Medicine, Integrated Frontier Research for Medical Science Division, Institute for Open and Transdisciplinary Research Initiatives (OTRI), Osaka University

Network meta-analysis is an extension of standard pairwise meta-analysis that enables us to compare multiple treatments simultaneously and efficiently by synthesizing studies of various combinations of comparative treatments. As in standard pairwise meta-analysis, its validity may be threatened by publication bias. Although sensitivity analysis methods for publication bias based on selection functions have been developed for pairwise meta-analysis, extending them to network meta-analysis is not an easy task due to its complexity. Utilizing clinical trial registries, we propose a simple publication bias adjustment method based on inverse probability weighting. Our method can easily handle selective publication processes determined by the t-type statistic of the primary outcome in each study; this is appealing compared with existing sensitivity analysis methods based on Heckman-type selection functions, since the t-type statistic is plausibly what drives publication. Specifically, using external information from clinical trial registries, we propose multiple estimating equations to estimate the selective publication probability (propensity score) of each study. The maximum likelihood estimation is then inversely weighted by the estimated publication probability to correct the bias. Numerical studies revealed that the proposed method successfully eliminates publication bias. In addition, an adjusted P-score is proposed for ranking different interventions in the presence of selective publication.
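As a rough illustration of the inverse-probability-weighting idea only (not the authors' actual estimating-equation procedure), the sketch below assumes a hypothetical logistic selection function in which publication probability depends on each study's t-statistic; the selection parameters `a` and `b` are invented here, whereas in the proposed method they would be estimated from registry data:

```python
import math

def publication_prob(t, a=-1.0, b=1.5):
    """Hypothetical logistic selection function: studies with larger
    absolute t-statistics are assumed more likely to be published.
    The parameters a and b are illustrative, not estimated."""
    return 1.0 / (1.0 + math.exp(-(a + b * abs(t))))

def ipw_pooled_estimate(effects, ses):
    """Inverse-probability-weighted fixed-effect pooled estimate.
    Each published study i contributes with weight
    (1 / variance_i) / p_i, where p_i is its estimated publication
    probability, so studies unlikely to be published are upweighted."""
    num = den = 0.0
    for y, se in zip(effects, ses):
        t = y / se
        w = (1.0 / se ** 2) / publication_prob(t)
        num += w * y
        den += w
    return num / den

# Toy data: three published studies (effect estimates and standard errors)
effects = [0.40, 0.55, 0.10]
ses = [0.15, 0.20, 0.25]
print(round(ipw_pooled_estimate(effects, ses), 3))  # prints 0.329
```

Because the small-t study is upweighted, the IPW estimate is pulled below the naive inverse-variance pooled estimate, which is the direction one expects when publication favours significant results.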



2:20pm - 2:40pm

Implementation of the anchor-based indirect comparison method for equivalence margin derivation in biosimilar development

Claudia Hemmelmann1, Jessie Wang2, Rachid El Galta1

1Hexal AG, Germany; 2Sandoz Pharmaceutical

The standard approach for equivalence margin (EQM) derivation is to perform a "classical" meta-analysis on direct-comparison data and to use the 95% confidence interval of the pooled effect, retaining a certain fraction of the treatment effect relative to placebo. However, treatment regimens in many indications have become more complex (e.g., combination treatments), and for most of these clinical settings direct comparisons are not available. In some situations, study data are available for the comparison of treatment A vs treatment B as well as treatment B vs treatment C.

An anchor-based indirect comparison can then be applied to estimate the treatment effect of treatment A vs treatment C: this effect (A vs C) is estimated as the difference of the two treatment effects, and its variance is the sum of the two variances. The 95% confidence interval of this estimated treatment effect can then be used to derive the EQM.
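The calculation above can be sketched in a few lines; the numbers are purely illustrative, and the normal-approximation 95% quantile of 1.96 is assumed:

```python
import math

def indirect_comparison(effect_ab, se_ab, effect_cb, se_cb):
    """Anchor-based indirect comparison of A vs C via the common
    comparator B: the effect is the difference of the two direct
    effects, and its variance is the sum of the two variances."""
    effect_ac = effect_ab - effect_cb
    se_ac = math.sqrt(se_ab ** 2 + se_cb ** 2)
    lower = effect_ac - 1.96 * se_ac  # 95% CI, normal approximation
    upper = effect_ac + 1.96 * se_ac
    return effect_ac, (lower, upper)

# Illustrative inputs: A vs B effect 0.5 (SE 0.10), C vs B effect 0.2 (SE 0.15)
effect, ci = indirect_comparison(0.5, 0.10, 0.2, 0.15)
print(round(effect, 3), tuple(round(x, 3) for x in ci))
# prints: 0.3 (-0.053, 0.653)
```

The confidence interval from such an indirect comparison is necessarily wider than either direct comparison's interval, since the variances add; this is one reason the sensitivity analyses mentioned below matter.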

However, the assumptions of transitivity and consistency need to be fulfilled. Neither assumption can be statistically tested, especially when only two studies are available. To support the use of this anchor-based indirect comparison, different sensitivity analyses were performed.

In this presentation we show the derivation of the EQM using the anchor-based indirect comparison, along with the sensitivity analyses, for a planned efficacy trial in the biosimilar setting. This approach was successfully applied and agreed upon with regulatory agencies.



2:40pm - 3:00pm

Prognostic models for disease progression in people with multiple sclerosis – a systematic review and assessment of current methodological challenges

Begum Irmak On*1, Kelly A. Reeve*2, Joachim Havla3, Jacob Burns1, Martina Gosteli4, Ulrich Mansmann1, Ulrike Held2

1Biometrics and Bioinformatics, Ludwig-Maximilians University, Germany; 2Department Biostatistics, University of Zurich, Switzerland; 3Institute of Clinical Neuroimmunology, LMU Hospital, Ludwig-Maximilians University Munich, Germany; 4University Library, University of Zurich, Switzerland; * equal contribution

Systematic reviews of prognostic models have revealed poor reporting quality, high risk of bias, and a lack of external validation studies [1, 2]. This holds also for studies published after the release of the TRIPOD reporting guideline [3]. In a Cochrane systematic review, we aimed to identify and summarize multivariable prognostic models for quantifying the risk of clinical disease progression, worsening, and activity in adults with multiple sclerosis (MS) [4]. Relevant databases were searched from January 1996, when an important tutorial on multivariable prognostic models was published [5], to July 2021. Studies evaluating model performance (i.e., validation studies) were also included.

More than 13,000 records were identified through the database search; of these, 57 studies were included, reporting on 75 model developments. Of these, 35 models were developed using traditional statistical methods and the remainder using machine learning (ML) methods. Only two of these models were evaluated externally more than once, and none of the validations were performed by researchers independent of those who developed the model. Over half (52%) of the developed models were not accompanied by model coefficients, tools, or instructions, which hinders their application, independent validation, or reproduction. All but one of the model developments or validations were rated as having high overall risk of bias, mainly because of the statistical methods used for developing or evaluating the models. Over time, we observed an increase in the percentage of participants on treatment, diversification of the diagnostic criteria used, increasing consideration of biomarkers or treatment as predictors, and increased use of ML methods, with the first ML-based model published in 2009. Major reporting deficiencies were observed; they were more pronounced in the studies using ML.
We conclude that current evidence is not sufficient for recommending the use of any of the published prognostic models for people with MS. The MS prognostic research community should adhere to the current reporting and methodological guidelines and conduct many more state-of-the-art external validation studies for the existing or newly developed models. Gaps in methodological guidance were identified regarding the assessment of models developed using complex ML methods.

References:

1. Wynants, L., et al., Prediction models for diagnosis and prognosis of covid-19: systematic review and critical appraisal. BMJ, 2020. 369: p. m1328.

2. Van Grootven, B., et al., Prediction models for hospital readmissions in patients with heart disease: a systematic review and meta-analysis. BMJ Open, 2021. 11(8): p. e047576.

3. Collins, G.S., et al., Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD statement. BMJ, 2015. 350: p. g7594.

4. On Seker, B.I., Reeve, K., Havla, J., Burns, J., Gosteli, M., Lutterotti, A., Schippling, S., Mansmann, U., and Held, U., Prognostic models for predicting clinical disease progression, worsening and activity in people with multiple sclerosis. Cochrane Library Protocol, 2020.

5. Harrell, F.E., Jr., K.L. Lee, and D.B. Mark, Multivariable prognostic models: issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors. Stat Med, 1996. 15(4): p. 361-87.



3:00pm - 3:20pm

Investigating the heterogeneity between "study twins"

Christian Röver, Tim Friede

University Medical Center Göttingen, Germany

Meta-analyses are commonly performed based on random-effects models, while in certain cases one might argue in favour of a common-effect (or fixed-effect) model. One such case is the example of two "study twins" performed according to a common protocol (or at least very similar study protocols; see Bender et al., 2018), which could be considered replications of the same experiment. Here we investigate the particular case of meta-analysis of a pair of randomized controlled trials, focusing on the extent to which homogeneity or heterogeneity is actually discernible from the data, and including an empirical investigation of published ("twin") pairs of studies. On the one hand, heterogeneity is hard to establish based on only a pair of studies; on the other hand, an empirical sample of "study twins" in fact appears very homogeneous (although selection effects might also play a role). Recommendations for meta-analyses of "study twins" will be provided.
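For two studies, the standard heterogeneity statistics reduce to a very simple form, which helps explain why heterogeneity is hard to detect from a pair alone. A minimal sketch (assuming known within-study standard errors and the usual DerSimonian-Laird moment estimator; not the authors' specific analysis):

```python
def two_study_heterogeneity(y1, se1, y2, se2):
    """Cochran's Q and DerSimonian-Laird tau^2 for a pair of studies.
    With k = 2, Q has one degree of freedom under homogeneity and
    equals z^2 with z = (y1 - y2) / sqrt(se1^2 + se2^2), so the test
    is just a two-sample comparison of the two effect estimates."""
    w1, w2 = 1.0 / se1 ** 2, 1.0 / se2 ** 2
    ybar = (w1 * y1 + w2 * y2) / (w1 + w2)        # inverse-variance pooled mean
    q = w1 * (y1 - ybar) ** 2 + w2 * (y2 - ybar) ** 2
    # DerSimonian-Laird moment estimator, truncated at zero
    c = (w1 + w2) - (w1 ** 2 + w2 ** 2) / (w1 + w2)
    tau2 = max(0.0, (q - 1.0) / c)
    return q, tau2

# Illustrative pair: effects 0.3 (SE 0.10) and 0.5 (SE 0.12)
q, tau2 = two_study_heterogeneity(0.3, 0.1, 0.5, 0.12)
print(round(q, 3), round(tau2, 4))  # prints: 1.639 0.0078
```

With one degree of freedom, Q stays below the 5% critical value (3.84) unless the two estimates differ by nearly two pooled standard errors, illustrating the low power to detect heterogeneity from a single pair of studies.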

Reference:

R. Bender, T. Friede, A. Koch, O. Kuss, P. Schlattmann, G. Schwarzer, G. Skipka. Methods for evidence synthesis in the case of very few studies. Research Synthesis Methods, 9(3):382–392, 2018.



 
Contact and Legal Notice · Contact Address:
Privacy Statement · Conference: CEN 2023
Conference Software: ConfTool Pro 2.6.149+TC
© 2001–2024 by Dr. H. Weinreich, Hamburg, Germany