Conference Agenda
Overview and details of this conference's sessions. Select a date or location to show only the sessions for that day or location; select a single session for a detailed view (with abstracts and downloads, if available).
Please note that all times are shown in the conference time zone (CEST).
Agenda Overview

Session
WS 3b - Responsible AI in Industrial Production: Practices and Methods for Predictive Models

Session Abstract
Brief Description and Outline:
This workshop addresses how ethical, fairness-related, and sustainability considerations can be systematically embedded into industrial AI projects. While industrial AI systems increasingly rely on predictive models and uncertainty-aware decision support, ethical aspects are often addressed late, implicitly, or inconsistently. The workshop focuses on practical, method-oriented approaches that help integrate responsibility into AI projects without slowing down innovation. Rather than presenting specific software or algorithms, the session emphasizes transferable concepts such as uncertainty-aware decision-making, data governance, and Corporate Digital Responsibility (CDR). Participants will work with representative industrial AI scenarios involving tabular and time-series data, such as predictive models in production contexts. The workshop treats Responsible AI as a set of practices and methods that shape how predictive models are designed, deployed, and embedded in organizational contexts.

Outline (2 hours):
- Goals: The goal of this workshop is to provide participants with practical orientation on how to integrate ethics, fairness, and sustainability into industrial AI projects. The session aims to demonstrate that ethical reflection can function as a structuring element for clarity, robustness, and trust in data-driven decision-making. The workshop contributes to HAICON26 by bridging conceptual frameworks with real-world AI practice and fostering dialogue between researchers and industry practitioners. Responsible AI is conceptualized not as a checklist, but as a structured approach to shaping practices and methods in industrial AI projects.
- Presenters' Experience:
  Dr. Christopher Koska: Senior Researcher and coordinator of the work package “Ethics” within the transfer project VoMoPro at the Centre for Future Production, University of Augsburg. His work focuses on Responsible AI as an application-oriented framework for shaping ethical and organizational practices in socio-technical contexts and supporting knowledge transfer between research and practice. He holds a PhD in philosophy with a monograph on the ethics of algorithms and has over 15 years of experience in designing and facilitating workshops and training formats on Corporate Digital Responsibility, with a focus on data and algorithmic ethics, in both academic and industry settings.
  Markus Schatzl: Director of the Innovation Lab at senswork, responsible for innovation processes and the professional development of employees. His work focuses on translating AI-based technologies into organizational practice, with particular attention to innovation processes, responsibility, and human-centered design. In addition to internal training formats, he regularly designs and conducts workshops and educational programs for external audiences, including students and pupils, addressing digital technologies, innovation, and responsible technology use.
Target Audience: The workshop targets researchers and practitioners working on industrial AI systems, including machine learning researchers, data scientists, engineers, and project managers. A basic understanding of AI concepts and data-driven decision-making is expected; no prior expertise in AI ethics is required.
Keywords: Responsible AI; Ethically Aligned Design; Embedded Ethics; Corporate Digital Responsibility; Industrial AI; Uncertainty Quantification; Data Governance; Decision Support.
