Conference Agenda

Overview and details of the sessions of this conference.

 
 
Session Overview
Session
S44: Online hypothesis testing and subgroup analyses in complex innovative designs
Time:
Wednesday, 06/Sept/2023:
8:30am - 10:10am

Session Chair: Thomas Asendorf
Session Chair: Marta Bofill Roig
Location: Lecture Room U1.131 hybrid


Presentations
8:30am - 9:10am

Online error rate control for platform trials

David Robertson1, James Wason2, Franz König3, Martin Posch3, Thomas Jaki1,4

1MRC Biostatistics Unit, University of Cambridge, United Kingdom; 2Newcastle University, United Kingdom; 3Medical University of Vienna, Austria; 4University of Regensburg, Germany

Platform trials evaluate multiple experimental treatments under a single master protocol, with new treatment arms added to the trial over time. Given the multiple treatment comparisons, there is the potential for inflation of the overall type I error rate, complicated by the fact that the hypotheses are tested at different times and are not necessarily pre-specified. Online error rate control methodology offers a possible solution to the multiplicity problem in platform trials where a relatively large number of hypotheses are expected to be tested over time. In the online multiple hypothesis testing framework, hypotheses are tested one by one over time: at each time step, an analyst decides whether to reject the current null hypothesis without knowledge of future tests, based solely on past decisions. Methodology has recently been developed for online control of the false discovery rate as well as the familywise error rate. In this talk, we describe how to apply online error rate control in the platform trial setting, present extensive simulation results, and give recommendations for the use of this new methodology in practice. We also illustrate how online error rate control would have affected a currently ongoing platform trial.
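The online testing framework described above can be illustrated with a minimal sketch of LOND (Javanmard and Montanari), one well-known online FDR-controlling rule from this literature. This is an illustrative example, not the specific procedure presented in the talk; the choice of the summable sequence gamma_i proportional to 1/i² is an assumption.

```python
import math

def lond(p_values, alpha=0.05):
    """Sketch of the LOND online FDR rule: hypothesis i is tested at
    level alpha * gamma_i * (number of discoveries so far + 1), with a
    fixed summable sequence gamma_i proportional to 1/i**2."""
    # normalise by sum_{i>=1} 1/i^2 = pi^2/6 so the gammas sum to <= 1
    gammas = [(1.0 / (i + 1) ** 2) / (math.pi ** 2 / 6)
              for i in range(len(p_values))]
    decisions, discoveries = [], 0
    for gamma_i, p in zip(gammas, p_values):
        level = alpha * gamma_i * (discoveries + 1)
        reject = p <= level  # decision uses only past outcomes
        decisions.append(reject)
        discoveries += int(reject)
    return decisions
```

Each decision depends only on previously observed p-values and past decisions, matching the online setting in which future hypotheses are unknown at test time.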



9:10am - 9:30am

Multiple testing of partial conjunction null hypotheses, with application to replicability analysis of high-dimensional studies

Thorsten Dickhaus1, Ruth Heller2, Anh-Tuan Hoang1, Anna Vesely1

1University of Bremen, Germany; 2Tel Aviv University, Israel

The partial conjunction null hypothesis is tested in order to discover a signal that is present in multiple (sub-)studies. The standard approach of carrying out a multiple test procedure on the partial conjunction p-values can be extremely conservative. We suggest alleviating this conservativeness by eliminating many of the conservative partial conjunction p-values prior to applying a multiple test procedure. This leads to the following two-step procedure: first, select the set of hypotheses with partial conjunction p-values below a selection threshold; second, within the selected set only, apply a familywise error rate or false discovery rate controlling procedure to the conditional partial conjunction p-values, where conditioning is on the selection event. We discuss theoretical properties of the proposed procedure and demonstrate its performance on simulated and real data.
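The two-step idea can be sketched as follows. The Bonferroni-style partial conjunction p-value (in the spirit of Benjamini and Heller) and the Benjamini-Hochberg step on the conditional p-values p/tau are illustrative choices, as are the function names and default thresholds; this is not the authors' exact implementation.

```python
def pc_pvalue(study_pvals, u):
    """Bonferroni-style partial conjunction p-value for 'signal present
    in at least u of the n studies': (n - u + 1) times the u-th
    smallest p-value, capped at 1."""
    s = sorted(study_pvals)
    return min(1.0, (len(s) - u + 1) * s[u - 1])

def two_step(pc_pvals, tau=0.5, alpha=0.05):
    """Step 1: select the PC p-values below the threshold tau.
    Step 2: run Benjamini-Hochberg on the conditional p-values p / tau
    within the selected set only; returns rejected indices."""
    selected = [(i, p) for i, p in enumerate(pc_pvals) if p < tau]
    m = len(selected)
    order = sorted(selected, key=lambda pair: pair[1])
    k = 0
    for rank, (_, p) in enumerate(order, start=1):
        if p / tau <= rank * alpha / m:  # BH step-up condition
            k = rank
    return sorted(i for i, _ in order[:k])
```

Dividing by tau is the conditional p-value under the selection event when the underlying p-value is uniform, which is what allows error control to be preserved after the first-step screening.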



9:30am - 9:50am

Graphical procedures for online error control

Lasse Fischer1, Marta Bofill Roig2, Werner Brannath1

1University of Bremen; 2Medical University of Vienna

Bretz et al. (2009) proposed representing multiple testing procedures by directed graphs, in which the null hypotheses are nodes carrying their individual significance levels, connected by weighted directed edges that describe how significance level is redistributed when a hypothesis is rejected. Such graphical procedures are becoming increasingly popular, since they make the calculation of individual significance levels easy to follow and thus facilitate communication with users. In addition, graphical procedures are often very general and therefore include several other procedures as special cases. In many contemporary applications, hypotheses are tested in an online manner: the hypotheses are tested one at a time, without access to future hypotheses and decisions. At each step, a type I error rate, such as the familywise error rate (FWER) or the false discovery rate (FDR), must remain controlled. In this presentation, we focus on the construction of graphical procedures providing error control in the online setting.
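The graph-updating algorithm of Bretz et al. (2009) can be sketched as follows: when hypothesis j is rejected, its level is passed along the outgoing edge weights and the remaining edges are rewired. This is a minimal illustration; variable names and the Holm-type example below are assumptions.

```python
def graphical_procedure(pvals, weights, G, alpha=0.05):
    """Sketch of the graphical procedure of Bretz et al. (2009).
    Hypothesis j is rejected when p_j <= its local level; the level is
    then redistributed along the outgoing edge weights G[j][i], and the
    graph is rewired via
    g_ab <- (g_ab + g_aj * g_jb) / (1 - g_aj * g_ja)."""
    m = len(pvals)
    levels = [alpha * w for w in weights]   # initial level allocation
    active = set(range(m))
    rejected = set()
    progress = True
    while progress:
        progress = False
        for j in sorted(active):
            if pvals[j] <= levels[j]:
                active.discard(j)
                rejected.add(j)
                for i in active:            # redistribute j's level
                    levels[i] += G[j][i] * levels[j]
                newG = [[0.0] * m for _ in range(m)]
                for a in active:            # rewire the remaining edges
                    for b in active:
                        if a != b:
                            denom = 1 - G[a][j] * G[j][a]
                            if denom > 0:
                                newG[a][b] = (G[a][b] + G[a][j] * G[j][b]) / denom
                G = newG
                progress = True
                break
    return sorted(rejected)
```

With two hypotheses, equal initial weights, and edges passing the full level to the other node, this reduces to the Holm procedure: rejecting the hypothesis with the smaller p-value raises the other hypothesis's level from alpha/2 to alpha.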

An extension of the classical graphical procedure of Bretz et al. (2009) to the online framework has recently been proposed. In this context, the graph illustrates how significance level is distributed over time. However, previous work has shown that this Online-Graph has low power when the number of hypotheses is large, which is why other approaches, such as Adaptive-Discard (ADDIS) procedures (Tian and Ramdas, 2019, 2021), are preferred in the online setting.

In this talk, we present a new online procedure that combines the concepts of adaptivity and discarding with the Online-Graph. The resulting ADDIS-Graph controls the FWER when the p-values are independent. We show that it can also be adapted to a local dependence structure and to an asynchronous testing setup, resulting in power superior to current state-of-the-art methods. Furthermore, we extend the approach to construct an FDR-ADDIS-Graph with similar advantages. We illustrate the gain in power of the ADDIS-Graph over previous procedures in a simulation study. The combination of easy interpretability and high online power makes these graphical ADDIS approaches well suited for a multitude of online applications, including complex trial designs such as platform trials, as well as large-scale testing designs such as those encountered in genomics research.



9:50am - 10:10am

Multi-stage adaptive enrichment designs with BSSR

Marius Placzek, Tim Friede

University Medical Center Göttingen, Germany

Adaptive enrichment designs offer the possibility of selecting promising subgroups at (unblinded) interim analyses and reallocating sample size in subsequent stages. We present a design that additionally implements a blinded sample size recalculation (BSSR) in an internal pilot study, with the aim of improving the timing of the interim analysis. To this end, we investigate the influence of the time point of the sample size review and of the interim analysis. For normally distributed endpoints, we propose a strategy combining blinded sample size recalculation and adaptive enrichment at an interim analysis: at an early time point, nuisance parameters are re-estimated and the sample size is adjusted, while subgroup selection and enrichment are performed later. Implications of different scenarios, including multiple interim analyses (i.e. multiple stages) and type I error rate control, are discussed.
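The blinded re-estimation step can be illustrated with the standard two-group normal-approximation sample size formula, with the nuisance variance estimated from the pooled interim data while ignoring group labels. The function name, the effect size delta, and the defaults are illustrative assumptions, not the authors' exact method.

```python
import math
from statistics import NormalDist, variance

def bssr_n_per_group(pooled_interim_data, delta, alpha=0.05, power=0.8):
    """Blinded sample size recalculation sketch: re-estimate the
    nuisance variance from the pooled (blinded) data, then plug it into
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2."""
    sigma2 = variance(pooled_interim_data)  # one-sample variance, blinded
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    return math.ceil(2 * (za + zb) ** 2 * sigma2 / delta ** 2)
```

Because the variance estimate never uses the treatment assignments, the review can be performed without unblinding, which is what distinguishes this step from the (unblinded) subgroup selection at the later interim analysis.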



 
Conference: CEN 2023