Conference Agenda

Overview and details of the sessions of this conference. Select a date or location to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

 
 
Session Overview
Session
S53: Real-world evidence
Time:
Wednesday, 06/Sept/2023:
10:40am - 12:20pm

Session Chair: Nathalie Barbier
Session Chair: Maria Geers
Location: Lecture Room U1.101 hybrid


Presentations
10:40am - 11:00am

EUnetHTA 21 methods guidelines for EU HTA: The good, the bad, and the ugly - an industry perspective.

Sandro Gsteiger, Maximo Carreras, Stefanie Hieke-Schulz, Kaspar Rufibach, Alex Simpson, Hannah Staunton, Anna Steenrod, Lutz Westermann

F. Hoffmann-La Roche Ltd.

In less than two years, oncology and advanced therapy medicinal products will undergo the joint European HTA process defined in the EU HTA Regulation (HTAR), adopted by the European Parliament in December 2021 [1]. EUnetHTA 21, a consortium of 13 national HTA organizations, has developed proposals for methodological guidelines as part of a service contract for the European Commission [2]. The final methodological guidelines will be adopted by the EU HTA Coordination Group, composed of representatives of all 27 Member States, before January 2025. However, some uncertainty remains regarding the final methods for European HTA, though they will likely rely heavily on the proposals developed by EUnetHTA 21.

From a pharmaceutical industry perspective, the EUnetHTA 21 documents present a mixed picture. Positive elements include the clear discussion of the types of clinical outcome assessments (in line with, for example, the FDA categorization of COAs), the description of “classical” evidence synthesis methods such as pairwise meta-analysis and (frequentist and Bayesian) network meta-analysis, and the inclusion of the estimands framework (at least in part). Nevertheless, the proposed guidelines also have several shortcomings: the critical view of the role of non-randomised evidence in decision making, the omission of methods such as target trial emulation and the use of external controls, the unresolved problem of how to address the multiplicity of PICOs, and the relatively strong focus on hypothesis testing.

Overall, we see a risk that the guidelines proposed by EUnetHTA 21 fail to increase the harmonization of methods and approaches used for the clinical assessment of health technologies. The reduction of duplication and fragmentation in the HTA landscape sought by the EU HTAR may not be achieved, which may in turn harm patients who rely on timely access to innovative therapies. In our view, methodological guidelines should seek to establish a common understanding of a pan-EU HTA approach, and the scientific foundations of Health Technology Assessment should be harmonized beyond what EUnetHTA 21 currently proposes. Without this directional change in the methodological guidelines, the novel EU Joint Clinical Assessment may add a layer of complexity instead of reducing duplication and may therefore fail to improve patient access in Europe.

We will briefly introduce the key pillars of the EU HTAR, summarize the main elements of the EUnetHTA 21 methodological guideline proposals, point out what we consider the main issues from a pharmaceutical industry perspective, propose some of the directional changes needed to achieve the objective of faster and sustainable patient access set out by the EU HTAR, and elaborate on the role statisticians can play in this process.

[1] European Commission. “Regulation (EU) 2021/2282 of the European Parliament and of the Council of 15 December 2021 on Health Technology Assessment and Amending Directive 2011/24/EU (Text with EEA Relevance),” 458 OJ L § (2021), http://data.europa.eu/eli/reg/2021/2282/oj/eng.

[2] EUnetHTA. Joint HTA Work. https://www.eunethta.eu/jointhtawork/.



11:00am - 11:20am

Quantifying and comparing the impact of different sources of uncertainty in the analysis of electronic health records: An application to intraoperative hyperoxemia

Maximilian Michael Mandl1,5, Andrea Becker-Pennrich1,3, Ludwig Christian Hinske3,4, Sabine Hoffmann1,2, Anne-Laure Boulesteix1,5

1Institute for Medical Information Processing, Biometry, and Epidemiology, Ludwig-Maximilians-Universität München; 2Department of Statistics, Ludwig-Maximilians-Universität München; 3Department of Anesthesiology, Ludwig-Maximilians-Universität München; 4Institute for Digital Medicine, University Hospital of Augsburg; 5Munich Center for Machine Learning (MCML)

In recent years, the scientific community has become aware that analytical variability is high when different researchers study the same research question on the same data set. Combined with selective reporting, this phenomenon may lead to an increased rate of false-positive results, inflated effect sizes, and overoptimistic measures of predictive performance. Hoffmann et al. (2021) argued that this analytical variability can be explained by six sources of uncertainty that are omnipresent in empirical research, regardless of discipline: sampling, measurement, model, parameter, data pre-processing, and method uncertainty. Failure to take this variety of uncertainties into account may lead to unstable, supposedly precise, and overoptimistic results, and ultimately to research findings that are not replicable. In the long run, this has devastating consequences for the credibility of research findings, in particular when independent teams of researchers publish contradictory results on the same data set.

In medicine, the increasing accessibility of large data sets that were not originally collected for research purposes, such as electronic health records or administrative claims data, elicits optimism and expectations of “real-world” evidence and individualized treatment protocols. These optimistic prospects, however, are paralleled by a mounting awareness that such data confront us with an even greater number of analytical choices than conventional observational studies do.

A recent study by Becker-Pennrich et al. (2022) used routinely collected data on adult craniotomies to develop a random forest regressor for imputing perioperative paO2 values. These imputations may then feed into statistical inference on the impact of perioperative paO2 on, e.g., in-hospital mortality. Failure to account for the uncertainty of the first-stage estimates (i.e., the imputation of paO2 values) may lead to multiple non-congruent results in the second stage (i.e., the statistical inference) of the analysis.

In the present project, we extend this study to identify and quantify the different sources of uncertainty and thus assess the stability of results on a routinely collected data set, considering both machine learning methods and classical statistical inference in this nested analysis setting. Our focus lies on evaluating the impact of the multiple choices made throughout the analysis (“researcher degrees of freedom”), including data pre-processing and model selection decisions.
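To illustrate how such researcher degrees of freedom can be made visible, a minimal multiverse-style sketch in Python follows. The synthetic data, the pre-processing variants, and their thresholds are hypothetical and are not taken from the project; the idea is simply to run the same estimation under several defensible pre-processing choices and report the spread of the resulting estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=300)
y = 0.3 * x + rng.normal(size=300)
y[:6] += 10.0  # a handful of implausible values, as in messy routine data

def slope(xs, ys):
    # effect estimate of interest: slope of a simple linear fit
    return np.polyfit(xs, ys, 1)[0]

lo, hi = np.quantile(y, [0.01, 0.99])
keep = np.abs(y - np.median(y)) < 3 * np.std(y)

# Three defensible pre-processing choices for the same research question
multiverse = {
    "raw": slope(x, y),
    "winsorized at 1%/99%": slope(x, np.clip(y, lo, hi)),
    "outliers removed": slope(x[keep], y[keep]),
}
spread = max(multiverse.values()) - min(multiverse.values())
```

Reporting the whole dictionary, rather than one conveniently chosen entry, is the point: the spread quantifies how much the conclusion depends on an analytical decision that is rarely pre-registered.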

References

Andrea S Becker-Pennrich, Maximilian M Mandl, Clemens Rieder, Dominik J Hoechter, Konstantin Dietz, Benjamin P Geisler, Anne-Laure Boulesteix, Roland Tomasi, and Ludwig C Hinske. Comparing supervised machine learning algorithms for the prediction of partial arterial pressure of oxygen during craniotomy. medRxiv, 2022. doi:10.1101/2022.06.07.22275483.

Sabine Hoffmann, Felix Schönbrodt, Ralf Elsas, Rory Wilson, Ulrich Strasser, and Anne-Laure Boulesteix. The multiplicity of analysis strategies jeopardizes replicability: lessons learned across disciplines. Royal Society Open Science, 8(4):201925, 2021.



11:20am - 11:40am

On selecting a parametric model to predict long-term survival to support health technology assessment

X. Gregory Chen, Sajjad Rafiq

Biostatistics and Research Decision Sciences, MSD, Switzerland

Long-term extrapolations from parametric survival models fitted to clinical time-to-event data are routinely used as inputs for cost-effectiveness analyses in support of health technology assessment.

For any given dataset, several parametric models are typically fitted. Some make a general distributional assumption for the survival time (typically including exponential, Weibull, Gompertz, log-logistic, log-normal, generalized gamma, and generalized F), while others allow a more flexible specification (e.g., the Royston-Parmar natural cubic spline model or two-piece parametric models).

After model fitting, statisticians therefore face the crucial task of selecting the best predictive model, or at least of adequately evaluating predictive performance across all fitted models, given the available data and information, some of which may not have been used in the fitting. The standard approach to model selection in practice relies on ranking goodness-of-fit criteria (e.g., AIC, BIC) and/or on visual comparison of the Kaplan-Meier curve with the fitted curves.

In this talk/poster,

  • Firstly, we discuss an alternative procedure that uses pairwise likelihood ratio tests for forward model selection (from the simplest model to more complex ones) and takes into account the interdependency of the seven common distributional assumptions. This interdependency guides the pairwise comparisons (not every pair of the seven distributions needs to be tested). We will evaluate the performance of the proposed procedure via simulation.
  • Secondly, we consider more prediction-focused evaluation criteria for any parametric model and propose a graphical approach to screen the internal and external validity of fitted models simultaneously, better informing the model selection. External validity could be evaluated using real-world data from a relevant general population, or approximated via a cross-validation procedure.
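For concreteness, here is a minimal sketch in Python (NumPy/SciPy) of the nesting idea behind such pairwise comparisons; the data are simulated and the model pair is illustrative, not the talk's full procedure. The exponential model is the Weibull model with shape fixed at 1, so the two can be compared both by AIC ranking and by a likelihood ratio test:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(1)
true_t = rng.weibull(1.5, 300) * 10.0    # simulated event times
cens = rng.uniform(0, 20, 300)           # independent censoring times
obs = np.minimum(true_t, cens)
event = (true_t <= cens).astype(float)   # 1 = event observed, 0 = censored

def negll_weibull(params):
    # parameters on the log scale keep the optimisation unconstrained
    k, lam = np.exp(params)
    log_h = np.log(k) - np.log(lam) + (k - 1.0) * (np.log(obs) - np.log(lam))
    log_s = -(obs / lam) ** k
    # events contribute log hazard + log survival; censored only log survival
    return -np.sum(event * log_h + log_s)

fit_weib = minimize(negll_weibull, x0=[0.0, 1.0])                    # 2 free parameters
fit_expo = minimize(lambda p: negll_weibull([0.0, p[0]]), x0=[1.0])  # shape fixed at 1

aic = {"weibull": 2 * 2 + 2 * fit_weib.fun,
       "exponential": 2 * 1 + 2 * fit_expo.fun}

# Likelihood ratio test, valid because the exponential is nested in the Weibull
lr_stat = 2.0 * (fit_expo.fun - fit_weib.fun)
p_value = chi2.sf(lr_stat, df=1)
```

With a non-exponential true shape, the nested test and the AIC ranking typically agree; the interdependency structure mentioned above simply extends this logic along the chains of nested distributions, so that not every pair has to be tested.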


11:40am - 12:00pm

Rule-based estimation of lines of therapy (LoT) from oncological registry data: the SAKK 80/19 AlpineTIR registry

Alfonso Rojas Mora1, Caroline Stepniewski1, Ulf Petrausch2, Stefanie Hayoz1

1Competence Center of Swiss Group for Clinical Cancer Research (SAKK), Bern, Switzerland; 2Onkozentrum Zürich, Zürich, Switzerland

In oncology, determining the number of prior lines of therapy (LoT) is critical for optimizing future treatments, assessing eligibility for clinical trials, and estimating treatment costs. Furthermore, the definition of LoT can have a large effect on the calculation of time-to-event endpoints from clinical data. Due to the complexity of solid cancer treatments, it is challenging to formulate a general framework for LoT sequences.

In the SAKK 80/19 AlpineTIR oncological immunotherapy registry, the definition of LoT was extremely heterogeneous across the participating sites, leading to inconsistent and clinically implausible LoT assignments. We therefore developed a rule-based approach to determine the LoT for the 702 patients enrolled in the AlpineTIR registry, who had received more than 3500 therapies over their oncological medical history.

A first approach grouped cancer therapies administered within the same period into one LoT, with each newly administered group of treatments opening a new LoT. These simple rules were further refined to treat, for example, treatment interruptions and sequentially administered drugs as part of a single LoT. We assessed the accuracy of the algorithm on a subset of 352 patients and more than 870 LoT for which the AlpineTIR coordinating investigator had defined the LoT. Additionally, in a subset of 225 patients we compared the lines previously defined by the sites with those produced by the algorithm.
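A toy version of such grouping rules can be sketched in Python. The 28-day window, the drug names, and the continuation rule below are hypothetical illustrations, far simpler than the registry's refined rule set:

```python
WINDOW_DAYS = 28  # hypothetical grouping window, not the registry's actual rule

def assign_lines(administrations):
    """Group (drug, start_day) records into lines of therapy (LoT).

    Drugs starting within WINDOW_DAYS of the current line's first
    administration form one combination regimen; re-administration of a
    drug already in the current line counts as continuation; any other
    new drug opens a new line.
    """
    lines = []
    for drug, day in sorted(administrations, key=lambda a: a[1]):
        if lines and (day - lines[-1]["start"] <= WINDOW_DAYS
                      or drug in lines[-1]["drugs"]):
            lines[-1]["drugs"].add(drug)
        else:
            lines.append({"start": day, "drugs": {drug}})
    return lines

# Illustrative history: a combination regimen, a continuation, then a switch
history = [("FOLFOX", 0), ("bevacizumab", 10), ("FOLFOX", 60), ("pembrolizumab", 200)]
lot = assign_lines(history)
```

In this example the FOLFOX re-administration at day 60 does not open a new line, whereas the switch to pembrolizumab does, mirroring the intent of the refinement for sequentially administered drugs.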

Our simple algorithm predicted the LoT defined by the coordinating investigator with more than 80% accuracy, and the refined algorithm achieved an accuracy above 90%. To our knowledge, this is the first time LoT have been predicted with such accuracy, providing a good framework for determining LoT from larger and more complex data sets.



12:00pm - 12:20pm

Extended Excess Hazard Models for Spatially Dependent Survival Data

André Victor Ribeiro Amaral1, Francisco Javier Rubio2, Manuela Quaresma3, Francisco J. Rodríguez-Cortés4, Paula Moraga1

1King Abdullah University of Science and Technology, Saudi Arabia; 2University College London; 3London School of Hygiene & Tropical Medicine; 4Universidad Nacional de Colombia

Relative survival represents the preferred framework for the analysis of population-based cancer survival data. The aim is to model the survival probability associated with cancer in the absence of information about the cause of death. Recent data linkage developments have allowed the place of residence, or the place where patients receive treatment, to be incorporated into population cancer databases; however, modeling this spatial information has received little attention in the relative survival setting. We propose a flexible parametric class of spatial excess hazard models (along with inference tools), named "Relative Survival Spatial General Hazard" (RS-SGH), that allows for the inclusion of fixed and spatial effects in both the time-level and hazard-level components. We illustrate the performance of the proposed model in an extensive simulation study and provide guidelines on the interplay of sample size, censoring, and model misspecification. We also present two case studies, using real data from colon cancer patients in England, aimed at answering epidemiological questions that require a spatial model. These case studies illustrate how a spatial model can identify geographical areas with low cancer survival, and how such a model can be summarized through marginal survival quantities and spatial effects.
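The excess hazard decomposition underlying this framework can be sketched in generic notation (the specific spatial parameterisation of RS-SGH is given in the authors' work; the symbols below are illustrative):

```latex
% Observed hazard = expected population hazard + cancer-related excess hazard
h(t; \mathbf{x}_i) = h_P(\mathrm{age}_i + t) + h_E(t; \mathbf{x}_i)
% General hazard (GH) structure for the excess part, with covariate effects
% entering at both the time level and the hazard level:
h_E(t; \mathbf{x}_i) = h_0\!\left(t \, e^{\tilde{\mathbf{x}}_i^\top \boldsymbol{\alpha}}\right) e^{\mathbf{x}_i^\top \boldsymbol{\beta}}
```

The spatial extension then augments these linear predictors with area-level random effects, which is what allows geographical variation in cancer survival to be estimated and mapped.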



 
Conference: CEN 2023