Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Session: S4: Biometrical Journal Showcase
Time: Monday, 04/Sept/2023, 11:00am - 12:40pm

Session Chair: Arne Bathke
Session Chair: Matthias Schmid
Location: Lecture Room U1.101 (hybrid)


Presentations
11:00am - 11:20am

On the logic of collapsibility for causal effect measures

Vanessa Didelez1, Mats Stensrud2

1Leibniz Institute for Prevention Research and Epidemiology - BIPS, Germany; 2EPFL

There is a long history of confusing “non-collapsibility” and “confounding”, and an equally long history of attempts to clarify the distinction. The topic has received renewed attention in the context of subgroup analyses in randomized trials, together with the issue of choosing an estimand in view of intercurrent events. The problem is compounded by the fact that the typical examples of non-collapsible measures, odds ratios and hazard ratios, also have a problematic causal interpretation, which is again a separate issue from whether they are affected by confounding in a given study. We discuss these issues from a causal point of view, separating them from the question of whether the basis of inference is trial or real-world data.

The key messages are: (1) To avoid misunderstandings, associational concepts of dependence should be clearly and formally distinguished from causal contrasts. (2) Confounding and non-collapsibility are separate issues and should be kept apart; an RCT does guarantee non-confounding at baseline (but can suffer from many other problems). (3) Odds ratios and especially hazard ratios are problematic as causal contrasts, the latter due to inherent conditioning on survival, and also, e.g., for transportability. However, collapsibility does not in itself guarantee a meaningful causal contrast, as the example of hazard differences shows. Moreover, there is empirical evidence that patients and domain experts prefer causal contrasts in terms of absolute risk; their advantages and computation will be illustrated in this talk.
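A minimal numeric sketch of message (3) may help; it is not taken from the talk, and all parameter values are invented for illustration. Treatment A is randomized and the binary covariate Z is independent of A, so there is no confounding; the stratum-specific odds ratio equals 2 in both strata, yet the marginal odds ratio is attenuated, while the risk difference collapses exactly (Python):

    import numpy as np

    def expit(x):
        return 1.0 / (1.0 + np.exp(-x))

    def odds(p):
        return p / (1.0 - p)

    # Randomized treatment A, balanced covariate Z independent of A: no confounding.
    pZ = 0.5
    alpha, beta, gamma = -1.0, np.log(2.0), 2.0  # conditional odds ratio for A is exactly 2

    risk = {(a, z): expit(alpha + beta * a + gamma * z) for a in (0, 1) for z in (0, 1)}

    # Stratum-specific odds ratios: identical by construction.
    or_z0 = odds(risk[1, 0]) / odds(risk[0, 0])
    or_z1 = odds(risk[1, 1]) / odds(risk[0, 1])

    # Marginal risks, standardized over Z (valid because Z is independent of A).
    p1 = pZ * risk[1, 1] + (1 - pZ) * risk[1, 0]
    p0 = pZ * risk[0, 1] + (1 - pZ) * risk[0, 0]

    print(or_z0, or_z1)            # both 2.0
    print(odds(p1) / odds(p0))     # ~1.73: attenuated, despite no confounding
    # Risk difference is collapsible: marginal RD equals the Z-average of conditional RDs.
    print(p1 - p0, pZ * (risk[1, 1] - risk[0, 1]) + (1 - pZ) * (risk[1, 0] - risk[0, 0]))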



11:20am - 11:40am

Randomized p-values in replicability analysis

Thorsten Dickhaus, Anh-Tuan Hoang

University of Bremen, Germany

We will be concerned with testing replicability hypotheses for many endpoints simultaneously. This constitutes a multiple test problem with composite null hypotheses. Traditional p-values, which are computed under least favourable parameter configurations (LFCs), are over-conservative in the case of composite null hypotheses. As demonstrated in prior work, this poses severe challenges in the multiple testing context, especially when one goal of the statistical analysis is to estimate the proportion π0 of true null hypotheses. To address this issue, we will discuss the application of randomized p-values in replicability analysis. By means of theoretical considerations as well as computer simulations, we will demonstrate that their usage typically leads to a much more accurate estimation of π0 than the LFC-based approach. Furthermore, we will draw connections to other recently proposed methods for dealing with conservative p-values in the multiple testing context. Finally, we will present a real data example from genomics.
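The construction presented in the talk concerns composite nulls; as a self-contained illustration of the same phenomenon, the following hypothetical Python sketch uses the classic randomized p-value for a discrete test statistic, P(T > t) + U·P(T = t) with U ~ Uniform(0,1), which is exactly uniform under the null, whereas the non-randomized p-value P(T ≥ t) is super-uniform and inflates a Schweder–Spjøtvoll-type estimate of π0:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    m, n, p0_null, p0_alt, pi0 = 10_000, 10, 0.5, 0.8, 0.8
    is_null = rng.random(m) < pi0
    T = rng.binomial(n, np.where(is_null, p0_null, p0_alt))

    # Conservative p-values: P(T >= t) under H0 (super-uniform due to discreteness).
    p_cons = stats.binom.sf(T - 1, n, p0_null)
    # Randomized p-values: P(T > t) + U * P(T = t), exactly Uniform(0,1) under H0.
    U = rng.random(m)
    p_rand = stats.binom.sf(T, n, p0_null) + U * stats.binom.pmf(T, n, p0_null)

    lam = 0.5  # estimator: #{p_i > lam} / (m * (1 - lam))
    for name, p in [("conservative", p_cons), ("randomized", p_rand)]:
        print(name, np.mean(p > lam) / (1 - lam))  # randomized lands near pi0 = 0.8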



11:40am - 12:00pm

Missing data imputation in clinical trials using recurrent neural network facilitated by clustering and oversampling

Halimu Haliduola1, Frank Bretz2,3, Ulrich Mansmann4

1Alvotech Germany GmbH, Germany; 2Novartis Pharma AG, Basel, Switzerland; 3Section for Medical Statistics, Medical University of Vienna, Vienna, Austria; 4Institute for Medical Information Processing, Biometry and Epidemiology (IBE), LMU Munich, Munich, Germany

In clinical practice, the composition of missing data may be complex, for example, a mixture of missing at random (MAR) and missing not at random (MNAR) mechanisms. Many methods are available under the MAR assumption. Under the MNAR assumption, likelihood-based methods require specification of the joint distribution of the data, and models incorporating the missingness mechanism have been introduced as sensitivity analyses. These classic models rely heavily on the underlying assumptions and, in many realistic scenarios, can produce unreliable estimates. In this paper, we develop a machine learning based missing data prediction framework with the aim of handling more realistic missing data scenarios. We use an imbalanced learning technique (i.e., oversampling of the minority class) to handle the MNAR data. To implement oversampling for longitudinal continuous variables, we first cluster the trajectories via k-means and then use a recurrent neural network (RNN) to model the longitudinal data. Furthermore, we apply bootstrap aggregating to improve the accuracy of prediction and to account for the uncertainty of a single prediction. We evaluate the proposed method using simulated data, assessing the predictions both at the individual patient level and at the overall population level. We demonstrate the powerful predictive capability of RNNs for longitudinal data and their flexibility for nonlinear modeling. Overall, the proposed method provides accurate individual predictions for both MAR and MNAR data and reduces the bias of missing data in treatment effect estimation when compared to standard methods and classic models. Finally, we apply the proposed method to a real dataset from an antidepressant clinical trial. In summary, this paper offers an opportunity to encourage the integration of machine learning strategies for the handling of missing data in the analysis of randomized clinical trials.
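The following is a heavily simplified Python sketch of such a pipeline, not the authors' implementation: the data, network size and training schedule are invented, and the bootstrap-aggregating step is omitted for brevity. It clusters complete trajectories with k-means, oversamples the minority cluster, trains a GRU for one-step-ahead prediction, and imputes a monotone-dropout patient by rolling predictions forward:

    import numpy as np
    import torch
    import torch.nn as nn
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    N, T = 400, 6

    # Simulated trial outcomes: a majority class improving over time and a
    # minority class of non-responders (the group most often lost under MNAR dropout).
    improver = rng.random(N) < 0.85
    slope = np.where(improver, -1.0, 0.5)
    Y = 10 + slope[:, None] * np.arange(T) + rng.normal(0, 0.5, (N, T))

    # Step 1: cluster the complete trajectories via k-means.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Y)

    # Step 2: oversample the minority cluster so the RNN sees it often enough.
    minority = np.flatnonzero(labels == np.argmin(np.bincount(labels)))
    Y_train = np.vstack([Y, Y[rng.choice(minority, size=4 * minority.size)]])

    # Step 3: a GRU that predicts visit t from visits 1..t-1.
    class Imputer(nn.Module):
        def __init__(self, hidden=16):
            super().__init__()
            self.gru = nn.GRU(1, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):          # x: (batch, time, 1)
            h, _ = self.gru(x)
            return self.head(h)        # one-step-ahead predictions

    model = Imputer()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    x = torch.tensor(Y_train, dtype=torch.float32).unsqueeze(-1)
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x[:, :-1]), x[:, 1:])
        loss.backward()
        opt.step()

    # Step 4: impute a patient who dropped out after visit 3 by rolling predictions.
    traj = Y[0, :3].tolist()
    for _ in range(T - 3):
        inp = torch.tensor(traj, dtype=torch.float32).view(1, -1, 1)
        traj.append(model(inp)[0, -1, 0].item())
    print("observed + imputed trajectory:", np.round(traj, 2))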



12:00pm - 12:20pm

Missing data: A statistical framework for practice

James Robert Carpenter1,2

1London School of Hygiene & Tropical Medicine, United Kingdom; 2MRC Clinical Trials Unit at UCL, UK

Missing data are ubiquitous in medical research, yet there is still uncertainty among many analysts over when restricting the analysis to the complete records is likely to be acceptable, when more complex methods (e.g. maximum likelihood, multiple imputation and Bayesian methods) should be used, how these methods relate to each other, and what role sensitivity analysis should play.

Based on Carpenter and Smuk (2021), this talk seeks to equip practitioners to address the issues mentioned above by presenting a framework for analysis of partially observed data and illustrative examples, alongside an overview of how the various missing data methodologies in the literature relate. In particular, we describe how multiple imputation can be readily used for sensitivity analyses, which are still infrequently performed.

The ideas are illustrated with a cohort study, a multi-centre case control study and a randomised clinical trial.
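One common way to use multiple imputation for sensitivity analysis is delta-adjustment: impute under MAR, shift the imputed values by a range of offsets delta to represent MNAR departures, and pool with Rubin's rules. The following Python sketch is hypothetical (invented data; simple stochastic-regression imputation rather than fully proper MI), intended only to show the shape of such an analysis:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    x = rng.normal(size=n)
    y = 2 + x + rng.normal(size=n)                 # true population mean of y is 2
    miss = rng.random(n) < 1 / (1 + np.exp(-x))    # MAR: dropout depends on observed x
    y_obs = np.where(miss, np.nan, y)
    print("complete-records mean:", round(np.nanmean(y_obs), 3))  # biased below 2

    def impute_once(delta):
        """One stochastic-regression imputation of y | x, shifted by delta (MNAR)."""
        X = np.column_stack([np.ones(n - miss.sum()), x[~miss]])
        beta, res = np.linalg.lstsq(X, y[~miss], rcond=None)[:2]
        sigma = np.sqrt(res[0] / (X.shape[0] - 2))
        y_imp = y_obs.copy()
        y_imp[miss] = beta[0] + beta[1] * x[miss] + rng.normal(0, sigma, miss.sum()) + delta
        return y_imp

    M = 20
    for delta in (0.0, -0.5, -1.0):                # delta = 0 is MAR; others probe MNAR
        qs, ws = [], []
        for _ in range(M):
            y_imp = impute_once(delta)
            qs.append(y_imp.mean())
            ws.append(y_imp.var(ddof=1) / n)
        Tvar = np.mean(ws) + (1 + 1 / M) * np.var(qs, ddof=1)  # Rubin's rules
        print(f"delta={delta:5.1f}: mean={np.mean(qs):.3f}  SE={np.sqrt(Tvar):.3f}")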

Reference:

Carpenter, JR, Smuk, M. Missing data: A statistical framework for practice. Biometrical Journal. 2021; 63: 915–947. https://doi.org/10.1002/bimj.202000196


