Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Session
S41: Use of external data
Time: Tuesday, 05/Sept/2023, 4:10pm - 5:50pm

Session Chair: Nicole H Augustin
Session Chair: Dominik Heinzmann
Location: Seminar Room U1.197 (hybrid)


Presentations
4:10pm - 4:30pm

Estimation of treatment effects in early phase randomized clinical trials involving external control data

Heiko Götte1, Marietta Kirchner2, Johannes Krisam2, Arthur Allignol3, Armin Schüler1, Meinhard Kieser2

1Merck Healthcare KGaA, Germany; 2Institute of Medical Biometry, University of Heidelberg, Germany; 3Daiichi Sankyo Europe GmbH

Randomized controlled trials (RCTs) are the gold standard design even for early phase development, as they provide unbiased treatment effect estimates. However, the small sample sizes in these settings lead to high variability of the treatment effect estimate. This variability can be reduced by adding external control data (augmented RCT), which may introduce bias in return. For the common setting in which suitable subject-level control group data are available from only one external (clinical trial or real-world) data source, we evaluate different analysis options for estimating the marginal treatment effect in the target, i.e. RCT, population via hazard ratios. These analysis options have in common that the contribution of the external control data is usually driven by its level of similarity with the current RCT data. This similarity can be assessed via comparisons of outcome and/or baseline covariate data. We provide an overview of existing methods, focusing on Bayesian hierarchical models (BHM) for the outcome and propensity score (PS) based approaches for baseline covariates.

We propose a novel option which includes an outcome and a baseline data component: a model averaging-based combination of BHM estimates and PS-based estimates. The relative contribution of the estimates is determined by a model averaging weight which reflects the likelihood contributions of the RCT subjects after fitting the BHM and PS models to the complete data set. The reasoning behind this approach is twofold: the different ways of handling external data make it difficult to compare the likelihood contributions of external subjects between BHM and PS models, and a better model fit that is driven solely by the external control subjects would carry a higher risk of introducing bias.
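
As a rough illustration of the combination step (the abstract does not spell out the exact weight definition), the sketch below forms a model-averaged log hazard ratio from hypothetical BHM and PS-based estimates, with the weight driven only by the RCT subjects' log-likelihood contributions; all names and the softmax-type weighting rule are assumptions for illustration.

```python
import numpy as np

def model_averaged_log_hr(loghr_bhm, loghr_ps, loglik_rct_bhm, loglik_rct_ps):
    """Combine a BHM-based and a PS-based log hazard ratio estimate.

    loghr_bhm, loghr_ps : point estimates (log hazard ratio) from the two
        models, both fitted to the complete data (RCT plus external controls).
    loglik_rct_bhm, loglik_rct_ps : per-subject log-likelihood contributions
        of the RCT subjects only, under each fitted model.
    """
    # Restrict the model comparison to RCT subjects, as motivated in the
    # abstract: a better fit driven only by the external controls should not
    # increase a model's weight.
    ll_bhm = np.sum(loglik_rct_bhm)
    ll_ps = np.sum(loglik_rct_ps)
    # Softmax-type weight on the summed RCT log-likelihoods (shift by the
    # maximum for numerical stability); one plausible choice, not the authors'.
    m = max(ll_bhm, ll_ps)
    w_bhm = np.exp(ll_bhm - m) / (np.exp(ll_bhm - m) + np.exp(ll_ps - m))
    return w_bhm * loghr_bhm + (1.0 - w_bhm) * loghr_ps, w_bhm
```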

In a simulation study based on a time-to-event model, with varying assumptions regarding the distributions of observable and unobservable confounders, we compare a selection of existing methods as well as the proposed approach. Our simulation scenarios also reflect the differences between external clinical trial and real-world data.

One of our main findings is that only very few analysis options for augmented RCTs perform better than the RCT estimate, which ignores the external control data. “Better” means substantially lower variability (in terms of root mean square error, RMSE) with low bias across various scenarios. The non-favorable options either introduce too much bias in certain scenarios (simple PS weighting), achieve only limited RMSE reduction with moderate bias (unadjusted BHM), or increase the RMSE (PS matching before fitting a hierarchical model) compared to the RCT estimate. Only two analysis options which combine outcome and baseline covariate data perform better than the RCT estimate: a marginalized estimate from a covariate-adjusted hierarchical model (AHM) and our proposal, which may use the AHM as its outcome component.



4:30pm - 4:50pm

Robust incorporation of external information in two-arm trial hypothesis testing

Silvia Calderazzo, Manuel Wiesenfarth, Annette Kopp-Schneider

German Cancer Research Center, Germany

When designing a clinical trial, external information on the trial’s parameters of interest is sometimes available. The Bayesian approach allows borrowing external information through the adoption of informative prior distributions. In two-arm trials, external information may be available separately for the treatment arm and/or the control arm mean. A difficulty in this situation is that the type I error rate under the informative prior analysis typically depends on the (unknown) true common mean under the null hypothesis, and its maximum may even reach 1. We propose a compromise approach to testing which aims at achieving type I error rates intermediate between the no-borrowing (i.e., frequentist) and full-borrowing approaches, according to a pre-specified weight parameter. While dependence on the true unknown control parameter value leads to only an approximate correspondence between the targeted and the realized type I error compromise, an explicit upper bound on the type I error rate can still be enforced. Such an upper bound may be advantageous in a regulatory setting and improve transparency in communicating the trial design. A dynamic method tailored to hypothesis testing is also proposed to adaptively estimate the weight parameter from the observed data. Simulations are performed to show the properties of the approach under various prior-data conflict and prior informativeness configurations.
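
The dependence of the type I error rate on the unknown true common mean, and the idea of a capped, weighted compromise, can be illustrated with a simple sketch for a two-arm trial with a normal outcome, known variance, and an informative prior on the control mean. This is not the authors' implementation; the prior, sample size, weight and cap below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def tie_full_borrowing(mu_true, n=50, sigma=1.0, m0=0.0, s0=0.2, alpha=0.025):
    """Type I error of a one-sided test of theta = mu_t - mu_c > 0 when the
    control mean gets an informative N(m0, s0^2) prior and the treatment mean
    a flat prior; outcome SD sigma known, n patients per arm."""
    se2 = sigma**2 / n
    k = s0**2 / (s0**2 + se2)          # shrinkage weight on the observed control mean
    post_var_c = k * se2               # posterior variance of the control mean
    # Reject H0 if ybar_t - (k*ybar_c + (1-k)*m0) > z_{1-alpha} * sqrt(se2 + post_var_c).
    crit = norm.ppf(1 - alpha) * np.sqrt(se2 + post_var_c)
    # Under H0 both true means equal mu_true; the test statistic is normal.
    mean_stat = mu_true - (k * mu_true + (1 - k) * m0)
    sd_stat = np.sqrt(se2 + k**2 * se2)
    return 1 - norm.cdf((crit - mean_stat) / sd_stat)

def tie_compromise_target(mu_true, w=0.5, cap=0.10, **kwargs):
    """Weighted compromise between alpha (no borrowing) and the full-borrowing
    type I error, with an explicit upper bound `cap`."""
    alpha = kwargs.get("alpha", 0.025)
    return min((1 - w) * alpha + w * tie_full_borrowing(mu_true, **kwargs), cap)

for mu in (0.0, 0.3, 0.6):   # prior-data conflict grows as mu moves away from m0
    print(mu, round(tie_full_borrowing(mu), 4), round(tie_compromise_target(mu), 4))
```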



4:50pm - 5:10pm

Augmenting randomized trials with real-world data: a simulation study evaluating methods for hybrid control arm analyses

Rafael Sauter1, Benjamin Ackerman1, Martina Fontana1, Ignazio Craparo2, Brian Hennessy1

1Johnson & Johnson; 2Alira Health, Italy

Randomized controlled trials (RCTs) are considered the gold standard for estimating causal effects of new therapies and interventions, yet statistical challenges remain in detecting treatment effects among rare disease populations, particularly when serious outcomes also occur infrequently. In such cases, innovative approaches exist to supplement trials with evidence from real-world data (RWD) by constructing a hybrid control arm, retaining the benefits of randomization while increasing study sample size and power. Identifying suitable RWD for this use case is critical, and data appropriateness is highly dependent on the design of the candidate RCT, its study population, and its primary outcome measure. Even with high-quality RWD, differences in study populations may exist and must be properly accounted for to ensure comparability. In this work, we present propensity score-type weighting methods to align study populations when conducting hybrid control arm analyses.

When augmenting an RCT control arm with RWD in a hybrid control analysis, it is important to first identify any baseline patient covariate imbalances. When prognostic factors of the outcome are imbalanced between the two studies, naively pooling the control arms together without any population adjustment could result in a biased treatment effect estimate. Such imbalances can be accounted for with propensity scores by modeling the probability of study membership conditional on the observed prognostic factors and weighting the RWD patients by the odds of their propensity score. This makes the distribution of covariates in the weighted RWD sample more similar to the demographic profile of the RCT. The outcome model is then fit on the combined data of RCT and weighted RWD patients. When all prognostic factors are accounted for and the propensity score model is correctly specified, the augmented treatment effect estimate is unbiased.

In our proposed analysis method, the hybrid control group consists of both unit-weighted RCT patients and propensity score-weighted RWD patients. The scale of the RWD weights is a function of the study sample sizes, and if one study is much larger than the other, inflated variability in the combined control arm could result in reduced power and affect the type I error rate. To address this, we assess methods to rescale the RWD weights.
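
A minimal sketch of the weighting step is given below; the column names, the logistic propensity model, and the effective-sample-size rescaling rule are illustrative assumptions rather than the exact specification studied here.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def hybrid_control_weights(rct_controls: pd.DataFrame,
                           rwd_controls: pd.DataFrame,
                           covariates: list[str],
                           rescale: bool = True) -> pd.DataFrame:
    """Return the pooled control data with analysis weights attached."""
    data = pd.concat([rct_controls.assign(in_rct=1),
                      rwd_controls.assign(in_rct=0)], ignore_index=True)
    # Propensity score: probability of RCT membership given baseline covariates.
    ps_model = LogisticRegression(max_iter=1000).fit(data[covariates], data["in_rct"])
    ps = ps_model.predict_proba(data[covariates])[:, 1]
    # RCT controls keep unit weight; RWD controls are weighted by the odds
    # ps/(1-ps), pulling the RWD covariate profile towards that of the RCT.
    w = np.where(data["in_rct"] == 1, 1.0, ps / (1.0 - ps))
    if rescale:
        # One possible rescaling (an assumption here): make the RWD weights sum
        # to their effective sample size, limiting variance inflation when the
        # RWD source is much larger than the trial.
        is_rwd = (data["in_rct"] == 0).to_numpy()
        w_rwd = w[is_rwd]
        ess = w_rwd.sum() ** 2 / np.sum(w_rwd ** 2)
        w[is_rwd] = w_rwd * ess / w_rwd.sum()
    return data.assign(weight=w)
```

The weighted outcome model would then be fit on the returned data using the `weight` column, alongside the (unit-weighted) treated RCT patients.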

Operating characteristics of the proposed methods are established in a simulation study, where RCT and RWD samples are generated with baseline covariates that are increasingly prognostic of the outcome and increasingly imbalanced between the studies. In doing so, we illustrate conditions under which the proposed methods for augmenting trials with RWD yield unbiased estimates with greater precision than RCT-only analyses while maintaining the type I error rate. We provide guidance on how to adequately quantify covariate imbalance and how to justify the appropriateness of a propensity score approach accordingly. We highlight key criteria for selecting suitable RWD based on study population comparability and discuss practical considerations and limitations when implementing the proposed methods for hybrid control arm analyses.



5:10pm - 5:30pm

Dynamic borrowing to minimize mean squared error and inference with Bayesian bootstrap

Jixian Wang1, Ram Tiwari2

1BMS, Switzerland; 2BMS, US

Dynamic borrowing to augment the control arm of a small RCT by leveraging external data sources has been increasingly used. The key step in dynamic borrowing is determining the amount of borrowing based on the similarity of the controls from the trial and from the external data sources. A simple approach to this task is the empirical Bayes approach, which maximizes the marginal likelihood (maxML) over the amount of borrowing. We propose an alternative that determines the amount of borrowing to minimize the mean squared error (minMSE) of the estimator combining both sources. We derive a simple algorithm using the Bayesian bootstrap for determining the amount of borrowing that minimizes the posterior MSE based on posterior means and variances. It can also be used with a pre-adjustment of the external controls for population differences between the two sources using, e.g., inverse probability weighting. We show that the minMSE rule has similar asymptotic properties to the maxML rule, which leads to either full or no borrowing, depending on whether the (adjusted) mean control outcomes from the two sources are the same or not. Statistical inference can be made using the Bayesian bootstrapped posterior sample or an approximate asymptotic normal distribution. A simulation study is performed to compare the minMSE rule with the maxML rule. Generally, the minMSE rule leads to a smaller MSE than the maxML rule, but its coverage of the frequentist 95 percent confidence interval may be better or worse than that of the maxML rule, depending on multiple factors. Our approach is very easy to implement and computationally efficient. The approach is illustrated by an example of borrowing controls for an acute myeloid leukemia trial from another study.
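
A simplified reconstruction of such a minMSE rule, assuming normal control outcomes and using Bayesian bootstrap draws of the two control means, could look like the sketch below; the exact algorithm and MSE approximation in the work may differ, and the data are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(2023)

def bb_mean_draws(y, n_draws=4000):
    """Bayesian bootstrap draws of the mean of y (Dirichlet(1,...,1) weights)."""
    w = rng.dirichlet(np.ones(len(y)), size=n_draws)
    return w @ np.asarray(y, dtype=float)

def min_mse_borrowing(y_rct_ctrl, y_ext_ctrl):
    """Choose the amount of borrowing a in [0, 1] minimizing an estimated MSE
    of the combined control-mean estimator (1-a)*rct + a*ext.

    Posterior means and variances come from the Bayesian bootstrap, and MSE(a)
    is approximated by a^2*(difference in means)^2 + (1-a)^2*var_rct + a^2*var_ext.
    """
    d_rct, d_ext = bb_mean_draws(y_rct_ctrl), bb_mean_draws(y_ext_ctrl)
    m_rct, v_rct = d_rct.mean(), d_rct.var()
    m_ext, v_ext = d_ext.mean(), d_ext.var()
    bias2 = (m_ext - m_rct) ** 2
    a = v_rct / (v_rct + v_ext + bias2)      # closed-form minimizer of MSE(a)
    combined = (1 - a) * d_rct + a * d_ext   # posterior-type sample for inference
    return a, combined.mean(), np.quantile(combined, [0.025, 0.975])

# Example with simulated control outcomes (hypothetical data).
a, est, ci = min_mse_borrowing(rng.normal(0.0, 1.0, 40), rng.normal(0.2, 1.0, 400))
print(a, est, ci)
```

In this sketch, a large estimated mean difference between the sources drives the borrowing amount towards zero, while comparable means and a noisy trial estimate drive it towards full borrowing, mirroring the asymptotic behavior described above.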



5:30pm - 5:50pm

Partial extrapolation in pediatric drug development using robust meta-analytic predictive priors, tipping point analysis and expert elicitation

Florian Voß1, Morten Dreher2, Elvira Erhardt2, Heiko Müller1, Oliver Sailer2, Christian Stock1

1Boehringer Ingelheim Pharma GmbH & Co. KG, Ingelheim, Germany; 2Boehringer Ingelheim Pharma GmbH & Co. KG, Biberach, Germany

Drug development in pediatric patients is often challenging. Pediatric trials typically have small sample sizes due to ethical considerations and feasibility reasons, particularly for rare diseases. As a result, they are often not powered to evaluate efficacy but focus on safety aspects and pharmacology, and it is therefore difficult to reach conclusions on efficacy based solely on the pediatric trial. On the other hand, clinical development in pediatrics is initiated after clinical trials in adults have shown a positive benefit-risk profile. This makes it possible to extrapolate from the results obtained in adults and use them to strengthen the evidence in the pediatric population (Gamalo et al., 2022). However, it is important to pre-specify how the external data from adults are used. A transparent and scientifically rigorous way to define the weight given to the external data and to assess the sensitivity of the results to the chosen weight is desirable. We show how Bayesian dynamic borrowing with robust meta-analytic predictive (MAP) priors, visualizations of tipping point analyses and expert elicitation can achieve this goal.

Our approach is motivated and illustrated by a real example in a rare pediatric disease with a continuous endpoint and several available trials in related adult indications. During the planning stage, formal prior elicitation is conducted with a panel of clinical experts. Results of a meta-analysis in adult patients are shown to them in a first step to assess the robustness across adult trials. A tipping point analysis, as used by Best et al. (2021) with slight modification, visualizes, for a wide range of hypothetical results of the pediatric trial and different one-sided evidence levels, how much weight the informative component of the robust MAP prior needs in order to conclude that the treatment is efficacious. Next, the panel is asked what conclusions they would draw from the available evidence in different scenarios and what weights they would put on the evidence from the trials in adults, considering their belief in the applicability of the adult data, the tipping point analyses and other operating characteristics. The pre-specified weight is then derived via a roulette method (Gosling, 2018) based on the experts’ opinions and used for the primary analysis, while the tipping point analysis serves as a sensitivity analysis for the impact of the chosen weight on the conclusion.
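
For intuition, the sketch below shows a generic tipping point search for a normal endpoint under a two-component robust MAP prior (a single informative normal component plus a vague component). It is not the tipmap implementation, and the prior parameters, evidence level and numerical values are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def posterior_prob_efficacy(y_hat, se, w, map_mean, map_sd,
                            vague_mean=0.0, vague_sd=10.0, threshold=0.0):
    """P(theta > threshold | data) under the robust MAP prior
    w*N(map_mean, map_sd^2) + (1-w)*N(vague_mean, vague_sd^2),
    for a normal estimate y_hat with standard error se."""
    comps = [(w, map_mean, map_sd), (1 - w, vague_mean, vague_sd)]
    post = []
    for pw, m, s in comps:
        # Conjugate update of each normal component.
        post_var = 1.0 / (1.0 / s**2 + 1.0 / se**2)
        post_mean = post_var * (m / s**2 + y_hat / se**2)
        # Prior-predictive density of y_hat updates the mixture weight.
        marg = norm.pdf(y_hat, loc=m, scale=np.sqrt(s**2 + se**2))
        post.append((pw * marg, post_mean, np.sqrt(post_var)))
    total = sum(p[0] for p in post)
    return sum((pw / total) * (1 - norm.cdf(threshold, pm, ps))
               for pw, pm, ps in post)

def tipping_point(y_hat, se, map_mean, map_sd, evidence=0.95):
    """Smallest prior weight on a grid for which efficacy is concluded at the
    given one-sided evidence level; None if no grid weight is sufficient."""
    for w in np.linspace(0.0, 1.0, 201):
        if posterior_prob_efficacy(y_hat, se, w, map_mean, map_sd) >= evidence:
            return w
    return None

# Hypothetical numbers: adult MAP prior N(0.4, 0.15^2), pediatric estimate 0.25 (SE 0.2).
print(tipping_point(y_hat=0.25, se=0.2, map_mean=0.4, map_sd=0.15))
```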

We also discuss our approach in the context of the new draft guidance on pediatric extrapolation (ICH, 2022) and introduce a publicly available R package called “tipmap” which facilitates the implementation of the described approach.

References:

Best N, et al. Assessing efficacy in important subgroups in confirmatory trials: An example using Bayesian dynamic borrowing. Pharm Stat. 2021;20(3):551-562.

Gamalo M, et al. Extrapolation as a Default Strategy in Pediatric Drug Development. Ther Innov Regul Sci. 2022;56:883–894.

Gosling JP. SHELF: The Sheffield Elicitation Framework. In: Dias L, Morton A, Quigley J (eds) Elicitation. International Series in Operations Research & Management Science, vol 261. Springer, Cham; 2018. https://doi.org/10.1007/978-3-319-65052-4_4

ICH. Guideline E11A on pediatric extrapolation (draft). 2022. URL: https://www.ema.europa.eu/en/documents/scientific-guideline/draft-ich-guideline-e11a-pediatric-extrapolation-step-2b_en.pdf


