Conference Agenda
Session
WS 3a - Reproducible Benchmarking and Multi-Omics Integration Using the Multiverse Framework
Session Abstract
Brief Description and Outline: This workshop will focus on standardized benchmarking and reproducible evaluation of multi-omics integration methods in bioinformatics, using Multiverse as a unifying framework rather than as the sole focus. The session will begin with a 15-minute conceptual overview of multimodal integration challenges, model diversity, and current limitations in benchmarking practices. We will then dedicate 15 minutes to a structured review of representative integration approaches, emphasizing their assumptions, strengths, and limitations in biological contexts.

The core of the workshop will consist of two hands-on blocks. The first 30-minute block will guide participants through running a standardized benchmarking workflow: selecting curated datasets, configuring preprocessing pipelines, training multiple integration models, performing hyperparameter optimization, and evaluating results with a shared set of integration metrics (a schematic sketch of such a loop follows this outline). The second 60-minute block will focus on extensibility and community benchmarking: integrating a custom model within a containerized setup and performing comparative analysis across methods. We will explicitly demonstrate how containerization resolves dependency conflicts and ensures reproducibility across heterogeneous tools. Overall, the tutorial balances theoretical context, coverage of representative methodological families, and practical reproducible workflows, ensuring relevance beyond a single framework while providing concrete skills for multi-omics integration research.
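As a rough illustration of what the first hands-on block covers, the minimal sketch below shows the shape of such a benchmarking loop. It is built on scikit-learn with synthetic data and is not the Multiverse API: the dataset, the two integration methods (early_integration, late_integration), the hyperparameter grid, and the metric choices are hypothetical stand-ins, meant only to convey the workflow of datasets -> preprocessing -> models -> hyperparameter search -> shared metrics.

    # Illustrative sketch of a standardized multi-omics benchmarking loop.
    # NOT the Multiverse API: methods, grid, and metrics here are
    # hypothetical stand-ins built on scikit-learn and synthetic data.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans
    from sklearn.metrics import adjusted_rand_score, silhouette_score

    rng = np.random.default_rng(0)

    # Synthetic "multi-omics" dataset: two modalities (RNA-like and
    # protein-like features) driven by the same three latent groups.
    n_per_group, n_groups = 100, 3
    labels = np.repeat(np.arange(n_groups), n_per_group)
    centers_rna = rng.normal(0, 4, size=(n_groups, 50))
    centers_prot = rng.normal(0, 4, size=(n_groups, 20))
    rna = centers_rna[labels] + rng.normal(size=(len(labels), 50))
    prot = centers_prot[labels] + rng.normal(size=(len(labels), 20))

    def early_integration(mods, n_components):
        """Concatenate modalities first, then embed jointly with PCA."""
        joint = np.hstack([StandardScaler().fit_transform(m) for m in mods])
        return PCA(n_components=n_components).fit_transform(joint)

    def late_integration(mods, n_components):
        """Embed each modality separately, then concatenate embeddings."""
        parts = [PCA(n_components=n_components).fit_transform(
            StandardScaler().fit_transform(m)) for m in mods]
        return np.hstack(parts)

    methods = {"early_pca": early_integration, "late_pca": late_integration}
    grid = [2, 5, 10]  # toy hyperparameter grid: embedding dimensionality

    # Every method and setting is scored on the same data with the same
    # metrics, which is the core idea of standardized benchmarking.
    for name, method in methods.items():
        for n_components in grid:
            emb = method([rna, prot], n_components)
            pred = KMeans(n_clusters=n_groups, n_init=10,
                          random_state=0).fit_predict(emb)
            print(f"{name:10s} dim={n_components:2d} "
                  f"ARI={adjusted_rand_score(labels, pred):.3f} "
                  f"silhouette={silhouette_score(emb, pred):.3f}")

The design point that carries over to the real framework is that all methods and hyperparameter settings are evaluated on identical data with identical metrics, which is what makes cross-method comparisons fair; in the workshop setting, containerization additionally isolates each method's dependencies, as described above.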
Goals: The primary goal of this tutorial is to equip participants with a rigorous and practical understanding of how to benchmark multi-omics integration methods in a standardized and reproducible manner. Participants will learn how to design fair comparisons across heterogeneous models, select evaluation metrics aligned with biological objectives, and interpret performance trade-offs between modality alignment, clustering quality, and biological signal preservation. A second goal is to provide hands-on experience with reproducible computational workflows: participants will gain practical skills in running containerized pipelines, managing model dependencies, performing systematic hyperparameter optimization, and executing end-to-end benchmarking experiments across multiple datasets and methods. A third goal is to foster extensibility and community-driven benchmarking: attendees will learn how to integrate new datasets, add custom models, and implement additional evaluation metrics within a unified framework, enabling them to adapt benchmarking workflows to their own research questions. By the end of the tutorial, participants will be able to critically assess integration methods and build reproducible, extensible benchmarking pipelines for multi-omics data analysis.

Presenters’ Experience: Anis Ismail is a second-year PhD researcher at the Laboratory of Multi-Omic Integrative Bioinformatics, working on explainable multi-omic representation learning models. He is currently President of the ISCB Regional Student Group Belgium and co-organized the Interuniversity Belgian Biohackathon (September 2025). He leads the Data for Good Challenge at Emergent, designing and organizing Leuven’s biggest data science challenge, and has substantial teaching and outreach experience, including workshops on AI and data science at Beirut AI, Zaka, SE Factory, and the Lebanese American University. He will lead the workshop, focusing on multi-omics representation learning concepts, benchmarking methodology, and demonstrations of containerized workflows.

Saptarshi Chakrabarti is a data engineer at the Leuven Institute of Single Cell Omics, where he develops scalable machine learning infrastructure for computational biology. He contributed to the MLOps and workflow automation components of the Multiverse framework and has expertise in reproducible research software engineering, pipeline deployment, and maintainable ML systems in academic environments. He has mentored undergraduate programming courses at KU Leuven and presented a technical demo at RSE Day 2025 at KU Leuven. In the workshop he will focus on the practical implementation aspects of benchmarking multi-omics models and on technical support for attendees.

Target Audience: The target audience includes master’s students, PhD students, and researchers working in single-cell and multi-modal bioinformatics. Participants are expected to have a basic understanding of machine learning concepts and familiarity with biological data analysis. Practical experience with Python and computational workflows is recommended, as the tutorial will involve hands-on benchmarking, model training, and evaluation of multi-omics integration methods. Prior knowledge of deep learning is helpful but not strictly required, as core methodological ideas will be introduced during the session.

Keywords: Multi-omics integration, Benchmarking, Deep learning, Containerization, Reproducibility, Model evaluation, Bioinformatics tools
