Conference Agenda

Overview and details of the sessions of this conference. Select a date or location to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

 
 
Session Overview
Session
S61: Statistical strategies in toxicology
Time:
Thursday, 07/Sept/2023:
8:30am - 10:10am

Session Chair: Bernd-Wolfgang Igl
Session Chair: Frank Konietschke
Location: Seminar Room U1.191 hybrid


Presentations
8:30am - 9:10am

The joint analysis of multiple sources of multiplicity in the evaluation of regulatory toxicology bioassays

Ludwig A. Hothorn

retired from Leibniz University Hannover, Germany

The principal contradiction in regulatory toxicology is that primarily the false negative error rate should be controlled, i.e. a proof of safety, e.g. using simultaneous non-inferiority tests versus control. In routine analysis, however, a proof of hazard is commonly used instead, i.e. tests for differences versus control at level alpha.
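To make the contrast concrete, the following Python sketch compares the two test directions on simulated data (the talk itself works in R; the effect sizes, the sample sizes and the non-inferiority margin below are hypothetical, and a plain normal approximation stands in for the exact tests):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
control = rng.normal(0.0, 1.0, 20)   # hypothetical control group
treated = rng.normal(0.4, 1.0, 20)   # hypothetical treated group, small shift

diff = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / 20 + control.var(ddof=1) / 20)

# Proof of hazard: two-sided test of H0: diff = 0 (rejection claims an effect).
p_hazard = 2 * (1 - NormalDist().cdf(abs(diff) / se))

# Proof of safety: non-inferiority test of H0: diff >= margin versus
# H1: diff < margin, for a pre-specified relevance margin (hypothetical).
margin = 1.0
p_safety = NormalDist().cdf((diff - margin) / se)

print(round(p_hazard, 3), round(p_safety, 3))
```

The two p-values answer different questions: a non-significant proof-of-hazard test does not by itself establish safety, which is exactly why the direction of error control matters.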

In various bioassays, many tests are performed, each at the elementary level alpha: i) for different doses, ii) for different times, iii) for different endpoints, on the same scale or, more often, differently scaled, iv) for different sexes, v) for different effect sizes (e.g., additive, multiplicative), vi) for different lm/glm/glmm models (e.g., with or without baseline covariates), vii) for different dose metameters in the Tukey trend test (arithmetic, ordinal, ari-log) (Tukey, 1985; Schaarschmidt, 2022), viii) for different tuning parameters in the poly-k test for the mortality-adjusted analysis of tumor proportions, etc.
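As one concrete example of the dose-metameter choice in the Tukey trend test, the same design can be scored on three scales. A minimal Python sketch (the dose values are hypothetical, and the handling of the zero-dose control on the log scale shown here, extrapolating the log spacing of the two lowest positive doses, is one common convention, not the only one):

```python
import numpy as np

dose = np.array([0.0, 10.0, 50.0, 250.0])   # hypothetical dose groups

arithmetic = dose                            # arithmetic metameter
ordinal = np.arange(len(dose), dtype=float)  # ordinal metameter: 0, 1, 2, 3

# log metameter: the zero control is replaced by extrapolating the
# log10 spacing of the two lowest positive doses (one convention;
# the exact handling of zero varies between implementations).
logd = np.log10(dose, where=dose > 0, out=np.zeros_like(dose))
logd[0] = logd[1] - (logd[2] - logd[1])
print(arithmetic, ordinal, logd)
```

Each scoring yields its own trend statistic, which is precisely the source of multiplicity listed above.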

The first extreme approach, testing each hypothesis at level alpha, generates unacceptable false positive error rates. The second extreme approach, controlling the global FWER by multiplicity adjustment over all of these sources, generates unacceptable false negative error rates. Therefore, a compromise is needed.

In this talk, the maxT test, a union–intersection test (UIT), is proposed as such a compromise, using the empirical variance–covariance matrix of the estimates from multiple marginal models (Pipper, 2012) (note: the correlation between the test statistics, not between the data!). I will demonstrate and interpret simultaneous confidence intervals for several toxicological assays across selected models/conditions/parameters using the CRAN packages multcomp, MCPAN and tukeytrend.
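The core of the single-step maxT adjustment can be sketched in a few lines of Python (the talk uses the R packages named above; the estimates and covariance matrix below are hypothetical, and a normal approximation replaces the exact multivariate t distribution):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical estimates of the same dose effect under three different
# marginal models, together with the joint covariance matrix of the
# estimates as delivered by the multiple-marginal-models approach.
est = np.array([0.80, 1.10, 0.50])
cov = np.array([[0.040, 0.030, 0.020],
                [0.030, 0.050, 0.025],
                [0.020, 0.025, 0.040]])
se = np.sqrt(np.diag(cov))
corr = cov / np.outer(se, se)

# Single-step maxT adjustment: the critical value is the 95% quantile of
# max_k |Z_k| with Z ~ N(0, corr), so all three confidence intervals
# hold simultaneously at the 95% level.
draws = rng.multivariate_normal(np.zeros(3), corr, size=100_000)
crit = np.quantile(np.abs(draws).max(axis=1), 0.95)

lower, upper = est - crit * se, est + crit * se
for k in range(3):
    print(f"model {k}: estimate {est[k]:.2f}, CI [{lower[k]:.3f}, {upper[k]:.3f}]")
```

Because the critical value exploits the correlation between the estimates, it lies between the unadjusted z quantile (1.96) and the Bonferroni value for three two-sided tests (about 2.39), which is the sense in which the maxT test is a compromise.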

The pros and cons of such approaches are discussed for routine analysis and guidances in regulatory toxicology. Moreover, the above approach also argues for avoiding games with p-values (FWER, FDR, g…) and instead considering the underlying marginal models of pre-clinical interest and interpreting them accordingly via well-chosen effect sizes.

References

C. B. Pipper, C. Ritz, and H. Bisgaard. A versatile method for confirmatory evaluation of the effects of a covariate in multiple models. Journal of the Royal Statistical Society Series C-Applied Statistics, 61:315–326, 2012.

J. W. Tukey, J. L. Ciminera, and J. F. Heyse. Testing the statistical certainty of a response to increasing doses of a drug. Biometrics, 41(1):295–301, 1985.

F. Schaarschmidt, C. Ritz, and L. A. Hothorn. The Tukey trend test: Multiplicity adjustment using multiple marginal models. Biometrics, 78:789–797, 2022.



9:10am - 9:30am

An integrated data-driven approach for drug safety prediction

Fetene Tekle1, Vahid Nassiri2, Kanaka Tatikola1, Helena Geys1

1Janssen R&D, Belgium; 2Open Analytics, Belgium

Predicting drug-induced organ injury is a multi-dimensional problem that requires consideration of multifaceted assays and additional compound-related information. Chemical properties of compounds, together with multiple in-vitro assay endpoints and exposure parameters such as dose, are promising markers for detecting drug-induced organ injury in early drug development. Tools that help integrate data from different sources can make the decision-making process faster, data-driven, and more efficient. In this talk, I will describe one such tool for predicting drug-induced liver injury (DILI) and discuss how such approaches can also help predict other organ injuries. The proposed methodology will be validated by implementing a statistical model on three publicly available datasets. Model-based probabilities will be used to classify compounds into predicted DILI risk classes, which are compared against the true DILI severity. Classification methods such as the Winner's rule (maximum probability), LDA (linear discriminant analysis) and QDA (quadratic discriminant analysis) will be discussed and compared.
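The Winner's rule step can be illustrated with a small Python sketch (the probabilities, class labels and "true" severities below are entirely hypothetical; in practice the probabilities would come from a fitted model such as LDA or QDA):

```python
import numpy as np

# Hypothetical model-based probabilities for 5 compounds over three
# DILI risk classes; each row sums to 1.
proba = np.array([
    [0.70, 0.20, 0.10],
    [0.15, 0.60, 0.25],
    [0.05, 0.30, 0.65],
    [0.40, 0.35, 0.25],
    [0.10, 0.15, 0.75],
])
classes = np.array(["no", "less", "most"])
true_class = np.array(["no", "less", "most", "less", "most"])

# Winner's rule: assign each compound to the class with the
# maximum predicted probability.
pred = classes[np.argmax(proba, axis=1)]
accuracy = np.mean(pred == true_class)
print(pred, accuracy)
```

The fourth compound illustrates the rule's weakness: with probabilities 0.40 vs 0.35 the winner is barely ahead, yet the rule commits to a single class, which motivates comparing it against fully model-based classifiers such as LDA and QDA.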



9:30am - 9:50am

Statistics in a validation process in toxicology

Tina Lang

Bayer AG, Germany

During the early phases of drug development, the assessment of the safety of candidate compounds is vital for any further progress. A well-established, guideline-driven apparatus of tests is in place to check the different aspects of safety and toxicity (see, e.g., OECD Guideline 487 (2016) and ICH S2(R1) (2012) for the in vitro micronucleus test).

These guidelines focus primarily on the experimental conduct of the preclinical study and require adherence to GLP (Good Laboratory Practice). Besides the experimental part, the analysis environment and the data analysis procedures must also be validated according to quality assurance procedures.

We have been involved, together with our colleagues from genetic toxicology and quality assurance, in the establishment of such a validation process. In addition to the statistical evaluation, the validation and qualification also encompassed the statistical software and the IT infrastructure.

In this presentation we want to share our experience from the process validation, including the observation that "validation" has different meanings in different contexts. Thus, as is often the case, defining technical terms to establish common ground between statisticians, toxicologists and quality experts is an important basis for the successful integration of statistical analysis in regulatory toxicology studies.

We will present our journey, obstacles, and the common goal we achieved as a team in the end.

References:

ICH S2(R1) (2012), Harmonized Tripartite Guideline, “Guidance on genotoxicity testing and data interpretation for pharmaceuticals intended for human use”.

OECD/OCDE TG 487 (2016): “OECD Guideline for the testing of chemicals – In Vitro Mammalian Cell Micronucleus Test”.



9:50am - 10:10am

The comet assay in vivo – a review of known properties and new findings

Timur Tug1, Julia Duda1, Bernd-Wolfgang Igl2, Katja Ickstadt1

1Department of Statistics, TU Dortmund University, Dortmund, Germany; 2Boehringer Ingelheim Pharma GmbH & Co. KG, Biberach an der Riss, Germany

The in vivo alkaline comet assay is a widely used and regulatory-relevant test in genotoxicity. It is a sensitive and fast method to detect both DNA strand break induction and DNA repair at the single-cell level.

After a brief description of the biological background and the technical performance of the in vivo test, several fundamental statistical aspects of Comet data will be examined. This includes the description of the data distribution, the handling of zeros, the interaction of negative and positive controls, the inclusion of historical negative control data and the influence of the chosen slide summary on the final test outcome.

Based on a large data set, we were able to validate the treatment of zero values suggested in OECD 489 and to endorse the median as the proposed summary measure. We also offer advice on how to compare negative and positive controls when assessing the validity of a study. Moreover, a variance decomposition analysis offers insightful information on the origin of noise at the cell, slide and animal levels, which may influence the experimental design.

For this purpose, simple hierarchical mixed-effects models were set up, and more complex models were additionally used to improve the understanding of the structure of the comet assay. Nevertheless, open points to be considered in the future will be sketched.
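The hierarchical structure behind such a variance decomposition can be sketched in Python (the talk fits proper mixed-effects models; the balanced design, the variance components and the method-of-moments estimators below are a simplified, hypothetical illustration):

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate a balanced comet-assay-like design (hypothetical sizes):
# 10 animals x 3 slides x 50 cells, with a random effect on each level.
n_animal, n_slide, n_cell = 10, 3, 50
sd_animal, sd_slide, sd_cell = 2.0, 1.0, 5.0

animal = rng.normal(0, sd_animal, n_animal)[:, None, None]
slide = rng.normal(0, sd_slide, (n_animal, n_slide))[:, :, None]
cell = rng.normal(0, sd_cell, (n_animal, n_slide, n_cell))
y = 10.0 + animal + slide + cell   # e.g. % tail intensity per cell

# Method-of-moments variance decomposition for the balanced case:
# subtract the contribution of the lower levels from each level's
# observed variance of means.
var_cell = y.var(axis=2, ddof=1).mean()
slide_means = y.mean(axis=2)
var_slide = slide_means.var(axis=1, ddof=1).mean() - var_cell / n_cell
animal_means = slide_means.mean(axis=1)
var_animal = (animal_means.var(ddof=1)
              - var_slide / n_slide
              - var_cell / (n_slide * n_cell))
print(var_cell, var_slide, var_animal)
```

With the simulated components (25, 1 and 4), most of the noise sits at the cell level, which is the kind of finding that feeds back into how many cells, slides and animals an experiment should allocate.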



 
Conference: CEN 2023