Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

 
 
Session Overview

Session: (Papers) Responsibility
Time: Friday, 27 June 2025, 8:45am – 10:00am
Session Chair: Jordi Viader Guerrero
Location: Auditorium 2


Presentations

Beyond acceptance: expanding the desirability assessment of potable water reuse

Karen Moesker, Udo Pesch

TU Delft, The Netherlands

Potable water reuse is increasingly taken up as a technological response to water scarcity. Yet such systems remain inherently controversial, and implementation projects often face strong public resistance. As a result, there are growing efforts to foster the social acceptance of potable reuse. To date, these acceptance-enhancing strategies have focused primarily on public outreach campaigns. While such campaigns can help address misconceptions and build public trust, they frequently rely on the information deficit model and neglect broader ethical dimensions inherent to these technologies.

Scholars in the ethics of technology argue that addressing social acceptance alone is insufficient; responsible innovation must also consider broader normative aspects that may not surface in public debate but carry long-term implications. The focus should therefore move from social acceptance alone towards incorporating ethical acceptability alongside it. Yet assessing the ethical implications of technology development remains inherently challenging, as it lacks the clear methods and empirical benchmarks that social acceptance frameworks employ. We therefore propose grounding ethical acceptability in a reflexive stance, making social acceptance and ethical acceptability complementary approaches that compensate for each other’s weaknesses.

To pursue such a combination of approaches, this paper introduces a typology linking the substantive and procedural dimensions of social acceptance and ethical acceptability. On the substantive dimension, social acceptance needs to identify public concerns and integrate them into decision-making, while ethical acceptability needs to ensure that these concerns are addressed in a morally sound manner and to identify additional issues that might not surface through public discourse alone. On the procedural dimension, social acceptance strategies focus on how best to address socially relevant concerns, while ethical acceptability assesses the appropriateness and moral desirability of the processes used to address them.

We will showcase the workings of this typology in the case of potable water reuse. We find that, on the substantive level, current acceptance-enhancing strategies often consider a narrow and locally confined problem space, thereby overlooking the international and intergenerational implications of potable reuse. Procedurally, it becomes evident that already marginalized groups and their interests (i.e., future generations, the environment, and vulnerable communities) remain under-addressed, although we identified increasing efforts to overcome this issue.



Responsibility Gaps and Engineers’ Obligations in the Design of AI Systems

Yutaka Akiba

Nagoya University, Japan

Autonomous AI systems, such as automated vehicles and Lethal Autonomous Weapons Systems (LAWS), are rapidly advancing and becoming increasingly prevalent in our society. While these technologies offer numerous benefits, they also pose serious ethical challenges. One prominent issue is the so-called “Responsibility Gap”: situations in which no stakeholder (designers, developers, deployers, policymakers, end-users, or even the systems themselves) can be held responsible for the actions or consequences of these technologies. Most existing research on Responsibility Gaps focuses on backward-looking responsibilities, such as liability or culpability. In contrast, forward-looking responsibilities, particularly obligations, have received relatively little attention. Some authors mention forward-looking responsibilities, but their focus is often limited to fostering responsible development cultures or promoting educational programs for engineers (Bonnefon et al., 2020; Santoni de Sio & Mecacci, 2021).

In this presentation, I will argue that engineers can still fulfill their obligations to mitigate harm caused by autonomous AI systems, that is, address the “Obligation Gap” (Nyholm, 2020), even though predicting these systems’ behavior remains inherently difficult due to their technological properties. Moving beyond cultural or educational perspectives, I propose that engineers can enhance their performance by employing “moral imagination”: the ability to envision situations in which technological mediations occur, feed this insight back into the design process, and eliminate morally problematic elements while fostering morally desirable ones (Verbeek, 2006).

This presentation is structured in four parts. First, I will briefly review recent developments in autonomous AI systems and provide an overview of existing discussions on responsibility gaps. I will then position the focus of this presentation within the broader landscape of the various types of responsibility gaps.

Next, I will examine the obligations of engineers across different technological domains, drawing on established research in engineering ethics. In particular, I will highlight the concept of “Preventive Ethics” (Harris et al., 2019) and present cases in which engineers have successfully fulfilled their obligations to avoid harm.

Following this, I will clarify the concept of the obligation gap, comparing it with related terms such as the “Active Responsibility Gap” (Santoni de Sio & Mecacci, 2021). I will refine the definition of the obligation gap by incorporating detailed theoretical frameworks; Zimmerman’s (2014) distinction among objective, subjective, and prospective moral obligations provides a useful foundation for this effort.

Finally, I will propose a potential solution to the obligation gap through design methodologies. While existing studies address backward-looking responsibility gaps by proposing meaningful human control in design (Santoni de Sio & van den Hoven, 2018), I aim to articulate specific design requirements that prevent harm, using moral imagination. Predicting harmful scenarios in deployment contexts can be operationalized through scenario-making and ethical assessments involving diverse stakeholders. When integrated with iterative design processes, these tools should establish a routine obligation for AI engineers, ultimately helping to bridge the obligation gap.

Bonnefon, J.-F., Černý, D., Danaher, J., Devillier, N., Johansson, V., Kovacikova, T., Martens, M., Mladenovic, M. N., Palade, P., Reed, N., Santoni de Sio, F., Tsinorema, S., Wachter, S., & Zawieska, K. (2020). Ethics of connected and automated vehicles: Recommendations on road safety, privacy, fairness, explainability and responsibility. European Commission.

Harris, C. E., Pritchard, M. S., Rabins, M. J., James, R. W., & Englehardt, E. E. (2019). Engineering ethics: Concepts and cases (6th ed.). Belmont, CA: Wadsworth.

Nyholm, S. (2020). Humans and robots: Ethics, agency, and anthropomorphism. Rowman & Littlefield Publishers.

Santoni de Sio, F., & van den Hoven, J. (2018). Meaningful human control over autonomous systems: A philosophical account. Frontiers in Robotics and AI, 5, 15.

Santoni de Sio, F., & Mecacci, G. (2021). Four responsibility gaps with artificial intelligence: Why they matter and how to address them. Philosophy & Technology, 34(4), 1057–1084.

Verbeek, P. P. (2006). Materializing morality: Design ethics and technological mediation. Science, Technology, & Human Values, 31(3), 361–380.

Zimmerman, M. J. (2014). Ignorance and moral obligation. Oxford University Press.



Mapping the ethics landscape: moral distance in geospatial AI research

Peter Darch

University of Illinois at Urbana-Champaign, United States of America

Embedding ethical principles into the workflows of academic researchers using AI systems remains a critical yet under-addressed challenge. These systems, shaped by the interplay of human and technical factors, generate ethical and social impacts stemming from distributed processes rather than singular decisions. This diffusion of responsibility complicates accountability by spreading ethical obligations across teams, organizations, and workflows (Floridi, 2016). Compounding this is moral distance, where physical, temporal, cultural, and bureaucratic separations reduce individuals' sense of responsibility, undermining ethical engagement (Vanhee & Borit, 2023).

This paper examines the interplay between moral distance and distributed responsibility in shaping researchers’ ethical accountability, using a longitudinal case study of the Institute for Geospatial Understanding through an Integrative Discovery Environment (I-GUIDE). I-GUIDE, a five-year, $16 million US National Science Foundation-funded initiative, is building an AI platform that enables researchers from diverse disciplines to mine and integrate geospatial datasets to address sustainability challenges. The case study expands existing frameworks by emphasizing two underexplored aspects: the role of pragmatic considerations and the significance of proximity to AI system production.

Findings reveal how Vanhee and Borit’s (2023) dimensions of moral distance manifest in multidisciplinary, AI-based academic research. Cultural distance, influenced by disciplinary training, significantly shaped ethical engagement: researchers with technical backgrounds prioritized computational efficiency and technical rigor, while those in the social sciences or geography engaged more with societal impacts. Bureaucratic distance, driven by career hierarchies, further complicated accountability, with early-career researchers deferring ethical considerations to senior colleagues or institutional frameworks. Proximity distance also influenced accountability: researchers who perceived their work as contributing to societal impacts, such as policymaking, displayed greater ethical awareness than those focused on academic outputs like dissertations.

This paper extends proximity distance to include proximity to system production processes. Researchers directly involved in data collection or model development were more attuned to ethical challenges due to their awareness of ad hoc decisions and quality compromises inherent in these processes. In contrast, those relying on external datasets or models deferred responsibility to data or model producers, emphasizing the role of trust in external sources.

Additionally, the study highlights the contingent nature of moral distance, shaped by pragmatic constraints such as publication pressures, funding requirements, and tight project deadlines. These constraints often led researchers to deprioritize ethical engagement in favor of meeting immediate goals. This finding challenges the notion of moral distance as fixed, demonstrating its dependence on context and situational pressures.

Floridi, L. (2016). Faultless responsibility: On the nature and allocation of moral responsibility for distributed moral actions. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), 20160112.

Vanhee, L., & Borit, M. (2023). Moral distance, AI, and the ethics of care: A framework for understanding ethical responsibilities in sociotechnical systems. AI & Society, 38(1), 13–26.



 