Conference Agenda

Session Overview
Session
S26: Multiple testing
Time:
Tuesday, 05/Sept/2023:
11:00am - 12:40pm

Session Chair: Ursula Becker
Session Chair: Jiawei Wei
Location: Seminar Room U1.191 (hybrid)


Presentations
11:00am - 11:20am

multiCASANOVA - Multiple group comparisons for non-proportional hazard settings

Ina Dormuth1, Frank Konietschke2, Carolin Herrmann2, Markus Pauly1, Marc Ditzhaus3

1TU Dortmund University, Germany; 2Charité – Universitätsmedizin Berlin, Germany; 3Otto von Guericke University Magdeburg, Germany

Comparing multiple groups based on time-to-event data is a common subject of interest in clinical studies. The log-rank test is commonly used to assess differences between groups (e.g., treatment groups) and is optimal under the assumption of proportional hazards. However, when this assumption is violated, the log-rank test loses power dramatically. For comparisons of two groups, various methods that are more robust to violations of the proportional hazards assumption have already been proposed. One promising approach is the combination of several weighted log-rank tests, as in CASANOVA [1]. Nevertheless, when multiple groups are compared to one another, it is often of interest not only to detect a globally significant difference but also to determine its origin. This makes it necessary to test multiple individual hypotheses. To obtain test decisions for the individual comparisons, the log-rank test requires an adjustment for multiple testing, such as the Bonferroni correction. Such corrections control the familywise type I error rate but are usually conservative. We propose a new multiple contrast test based on the CASANOVA approach [1]. This makes the proposed test more powerful under non-proportional hazards and, at the same time, renders p-value corrections obsolete. We evaluate the performance of the test in extensive Monte Carlo simulation studies covering both proportional and non-proportional hazard settings.

[1] Ditzhaus, M., Genuneit, J., Janssen, A., & Pauly, M. (2021). CASANOVA: Permutation inference in factorial survival designs. Biometrics.
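For orientation, here is a minimal Python sketch of the conventional baseline that the proposed test is contrasted with: all pairwise log-rank tests with a Bonferroni adjustment, using the lifelines library on hypothetical data. The multiCASANOVA contrast test itself is not implemented here.

    # Baseline contrasted in the abstract: pairwise log-rank tests with a
    # Bonferroni correction; hypothetical data, NOT the multiCASANOVA test.
    from itertools import combinations

    import numpy as np
    from lifelines.statistics import logrank_test

    rng = np.random.default_rng(1)
    # Three hypothetical treatment arms: (survival times, event indicators)
    groups = {
        name: (rng.exponential(scale, 50), rng.integers(0, 2, 50))
        for name, scale in [("A", 1.0), ("B", 1.3), ("C", 1.6)]
    }

    alpha, pairs = 0.05, list(combinations(groups, 2))
    for g1, g2 in pairs:
        (t1, e1), (t2, e2) = groups[g1], groups[g2]
        res = logrank_test(t1, t2, event_observed_A=e1, event_observed_B=e2)
        # Bonferroni: compare each p-value to alpha / number of comparisons
        print(f"{g1} vs {g2}: p = {res.p_value:.3f}, "
              f"reject at FWER {alpha}: {res.p_value < alpha / len(pairs)}")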



11:20am - 11:40am

Simultaneous confidence intervals for an extended Koch-Röhmel design in three-arm non-inferiority trials

Martin Scharpenberg, Werner Brannath

University of Bremen, Germany

Three-arm "gold-standard" non-inferiority trials are recommended for indications where only unstable reference treatments are available and the use of a placebo group can be justified ethically. For such trials, several study designs have been suggested that use the placebo group to test "assay sensitivity", i.e. the ability of the trial to replicate efficacy. Should the reference fail in the given trial, non-inferiority could also be shown with an ineffective experimental treatment, and the non-inferiority claim would hence be useless. In this talk we extend the so-called Koch-Röhmel design, in which a proof of efficacy for the experimental treatment is required in order to qualify the non-inferiority test. While efficacy of the experimental treatment is an indication of assay sensitivity, it does not guarantee that the reference is sufficiently efficacious for the non-inferiority claim to be meaningful. It has therefore been suggested to adaptively test non-inferiority only if the reference demonstrates superiority to placebo, and otherwise to test δ-superiority of the experimental treatment over placebo, where δ is chosen such that it provides proof of non-inferiority with regard to the reference's historical effect. We extend this previous work by complementing the adaptive test with compatible simultaneous confidence intervals.
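To fix ideas, the testing hierarchy just described can be formalized roughly as follows; this is a LaTeX sketch under assumed notation (means \mu_E, \mu_R, \mu_P for experimental, reference, and placebo; non-inferiority margin \Delta; shift \delta; larger means indicate better outcomes):

    % Illustrative formalization of the extended Koch-Röhmel hierarchy:
    H_0^{\mathrm{eff}} : \mu_E \le \mu_P          % efficacy of the experimental
                                                  % treatment, tested first
    H_0^{\mathrm{NI}}  : \mu_E \le \mu_R - \Delta % non-inferiority, tested if the
                                                  % reference beats placebo
    H_0^{\delta}       : \mu_E \le \mu_P + \delta % delta-superiority over placebo,
                                                  % tested otherwise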

Confidence intervals are commonly used and are suggested by regulatory guidelines for non-inferiority trials. We show how to adapt different approaches to simultaneous confidence intervals from the literature to the setting of three-arm non-inferiority trials and compare these methods in a simulation study. Finally, we apply the methods to a real clinical trial example.



11:40am - 12:00pm

Control of essential type I error rates in clinical trials with multiple hypotheses

Werner Brannath1, Frank Bretz2

1University of Bremen, Germany; 2Novartis AG, Switzerland

The talk is about alternatives to the control of the familywise error rate in clinical trials with multiple hypotheses. The focus will be on concepts that control type I error rates only insofar as they are relevant to patients outside of and after the trial. Focusing on studies with multiple populations, the familywise expected loss (FWEL; Maurer et al., 2023) and the population-wise error rate (PWER; Brannath et al., 2023) will be introduced as examples. Furthermore, focusing on multi-arm and platform trials with the possibility of dropping treatments mid-trial, it will be discussed how one could account for a mid-trial reduction of the post-trial risks when dropping a treatment. The solutions will be motivated by independent clinical trials, for which no multiplicity adjustment is required. The talk will end with a discussion and an outlook on further questions and future research concerning the control of essential type I error rates.
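As a rough guide to these error rates, let V denote the total number of true null hypotheses that are rejected, let the strata J partition the overall patient population with prevalences \pi_J, and let V_J count the falsely rejected hypotheses relevant to patients in stratum J. A schematic LaTeX rendering, under notation assumed here rather than taken verbatim from the cited papers, is:

    \mathrm{FWER} = P(V \ge 1), \qquad
    \mathrm{PWER} = \sum_{J} \pi_J \, P(V_J \ge 1)

The PWER thus weights type I errors by the fraction of patients they actually affect; since each V_J \ge 1 implies V \ge 1 and the prevalences sum to one, controlling the PWER is a weaker requirement than controlling the FWER.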

Literature

Brannath, W., Hillner, C., & Kornelius, R. (2023). The population-wise error rate for clinical trials with overlapping populations. Statistical Methods in Medical Research, 32(2), 334-352.

Maurer, W., Bretz, F., & Xun, X. (2023). Optimal test procedures for multiple hypotheses controlling the familywise expected loss. Biometrics, to appear.

Brannath, W. (2023). Discussion on "Optimal test procedures for multiple hypotheses controlling the familywise expected loss" by Willi Maurer, Frank Bretz, and Xiaolei Xun. Biometrics, to appear.



12:00pm - 12:20pm

Statistical calibration for infinitely many future values in linear regression: simultaneous or pointwise tolerance intervals, or what else?

Yang Han1, Yujia Sun1, Lingjiao Wang1, Wei Liu2, Frank Bretz3

1Department of Mathematics, University of Manchester, UK; 2School of Mathematical Sciences & Southampton Statistical Sciences Research Institute, University of Southampton, UK; 3Novartis Pharma AG, Basel, Switzerland

Statistical calibration using regression is a useful tool with many applications. For confidence sets for x-values associated with infinitely many future y-values, there is a consensus in the statistical literature that the constructed confidence sets should guarantee a key property. While it is well known that confidence sets based on simultaneous tolerance intervals (STIs) guarantee this key property conservatively, it is desirable to construct confidence sets that satisfy the property exactly. There is also a misconception that confidence sets based on pointwise tolerance intervals (PTIs) guarantee this property as well. This paper constructs weighted simultaneous tolerance intervals (WSTIs) so that the confidence sets based on the WSTIs satisfy the property exactly when the future observations have x-values distributed according to a known distribution F(·). Through the lens of the WSTIs, convincing counterexamples are also provided to demonstrate that confidence sets based on the PTIs do not guarantee the key property in general and so should not be used. The WSTIs have been applied to real data examples to show that they can produce more accurate calibration intervals than the STIs and PTIs.
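For context, the guarantee that simultaneous tolerance intervals provide is often written as follows; this is a LaTeX sketch under assumed notation, with calibration intervals I(x), content level \gamma, and confidence level 1 - \alpha:

    % With data-confidence at least 1 - \alpha, the intervals I(x) cover at
    % least a fraction \gamma of future responses at every x simultaneously:
    P_{\mathrm{data}}\Big( \inf_{x}\, P_{Y}\big( Y(x) \in I(x) \mid \mathrm{data} \big) \ge \gamma \Big) \ge 1 - \alpha

On one reading of the abstract, the WSTIs replace the worst case over x (the infimum) by a weighting with respect to the known covariate distribution F(·), which is where the conservatism of the STIs is removed.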



12:20pm - 12:40pm

Online multiple testing with heterogeneous data

Sebastian Doehler1, Iqraa Meah1,2, Etienne Roquain2

1Darmstadt University of Applied Sciences, Germany; 2Sorbonne Université, France

Online multiple testing refers to the setting where a possibly infinite number of hypotheses are tested and the p-values become available sequentially, one at a time. This differs from classical multiple testing, where the number of tested hypotheses is finite and known beforehand, and the p-values are available simultaneously.

It is well known that existing methods for online multiple testing can suffer from a significant loss of power if the null p-values are conservative. In this work, we extend previously introduced methodology to obtain more powerful procedures for the case of super-uniformly distributed p-values. Such p-values arise in important settings, e.g., when discrete hypothesis tests are performed or when the p-values are weighted. To this end, we introduce the method of superuniformity reward (SUR), which incorporates information about the individual null cumulative distribution functions. Our approach yields several new 'rewarded' procedures that offer uniform power improvements over known procedures and come with mathematical guarantees for controlling online error criteria based either on the familywise error rate (FWER) or the marginal false discovery rate (mFDR).
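As a toy illustration only, and not the authors' SUR procedure, the Python sketch below implements a basic online Bonferroni (alpha-spending) rule that tests the i-th hypothesis at level alpha_i with sum_i alpha_i <= alpha. For a superuniform null with CDF F_i we have P(p_i <= alpha_i) = F_i(alpha_i) <= alpha_i, so the printed slack alpha_i - F_i(alpha_i) is the kind of unused budget a reward scheme could redistribute; the function names and the spending sequence are assumptions.

    # Toy online Bonferroni (alpha-spending) rule; NOT the SUR procedure of
    # the abstract, only an illustration of the budget superuniform nulls waste.
    import math

    def spending_levels(alpha):
        """Yield levels alpha_i with sum_i alpha_i = alpha (6/(pi^2 i^2) weights)."""
        i = 1
        while True:
            yield alpha * 6.0 / (math.pi ** 2 * i ** 2)
            i += 1

    def online_bonferroni(p_values, alpha=0.05, null_cdfs=None):
        """Test p-values one by one; FWER <= sum_i F_i(alpha_i) <= alpha."""
        decisions = []
        for i, (p, a_i) in enumerate(zip(p_values, spending_levels(alpha))):
            decisions.append(p <= a_i)
            if null_cdfs is not None:
                # For a superuniform null, F_i(a_i) <= a_i: the difference is
                # budget that a superuniformity reward could recycle.
                slack = a_i - null_cdfs[i](a_i)
                print(f"test {i}: level {a_i:.4f}, unused budget {slack:.4f}")
        return decisions

    # Hypothetical discrete test: the null p-value only takes values 0.2 and 1.0
    discrete_cdf = lambda t: (1.0 if t >= 1.0 else 0.2) if t >= 0.2 else 0.0
    print(online_bonferroni([0.001, 0.5], null_cdfs=[discrete_cdf, discrete_cdf]))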



 