Conference Agenda

Overview and details of the sessions of this conference.

 
 
Session Overview
Session: (Papers) Decision-making
Time: Thursday, 26 June 2025, 5:20pm - 6:35pm
Session Chair: Bouke van Balen
Location: Auditorium 6


Presentations

Two’s company, three’s a crowd: theoretical considerations for shared decision-making in AI-assisted healthcare

Emma-Jane Spencer¹, Cathleen Parsons², Stefan Buijsman³

¹Erasmus MC, TU Delft; ²TU Delft; ³TU Delft

Despite the growing acceptance of patient involvement in shared decision-making (SDM), the design of artificial intelligence clinical decision support systems (AI-CDSS) for healthcare has thus far been predominantly user-facing, which is to say, clinician-centred. This means that while continuous thought is being given to the design requirements necessary for empowering clinicians, the requisite attention has not yet been given to the design requirements necessary for empowering patients alongside their clinicians.

In this paper, we explore the SDM paradigm, arguing that while it remains the preferred approach to clinical decision-making, the shortcomings of the current theoretical approaches do not allow for the seamless integration of AI-CDSS. Most critically, the current approaches are outdated in the sense that they do not make clear where in the patient-physician dynamic AI systems ought to be positioned and, moreover, what role(s) they ought to play. Furthermore, we argue that contemporary approaches to SDM often result in insubstantial attempts at patient involvement in the decision-making process, such as tokenism. Thus, we argue that the absence of a clear SDM framework which includes AI-CDSS poses a substantial risk to the autonomy of both physicians and patients. This can be seen in the recent suggestion that AI ought to be accommodated as a third participant in the SDM dynamic, such that the doctor-patient relationship ought to become a doctor-patient-AI triad (Lorenzini et al., 2023). We are especially cautious of this proposed dynamic, as we argue it involves an implicit assumption that AI systems can function as agents in a participative, autonomous sense. Rather than promoting the agential status of AI systems, we will suggest that AI systems can instead, when designed appropriately, promote the autonomy of patients and physicians. Given that clinical AI systems are already being developed without proper regard for patient values, we view the newly invigorated discourse around SDM as an opportunity to intervene in the design and development pipelines of current AI-CDSS models, to ensure that patients and clinicians are not epistemically defeated by AI in their decision-making, but are rather empowered by it.

Thus, to refine the existing SDM paradigm, our paper proposes a new model which we argue better accommodates the integration of clinical AI into the decision-making process. This model, which we will call the contemplative model, places more emphasis on patient-centred values, elongating the pre-intervention stage of clinical treatment by first fully eliciting patient values and preferences and subsequently exploring treatment options through the use of AI-CDSS. Moreover, in the exploration of treatment options, the contemplative model advocates that treatment pathways ought to be presented in a dialogical fashion, with a patient-friendly model design, so that SDM can be ongoing and iterative, rather than based on a singular input-output interaction. The term “contemplative” refers to the fact that such an approach aims to create an SDM process which simulates an open-ended, ongoing conversation. This paper thus discusses the various conditions necessary for adhering to the contemplative model of SDM.

References

Lorenzini, G., Arbelaez Ossa, L., Shaw, D. M., & Elger, B. S. (2023). Artificial intelligence and the doctor-patient relationship expanding the paradigm of shared decision making. Bioethics, 37(5), 424-429. https://doi.org/10.1111/bioe.13158



On the philosophical limits of artificially intelligent decisions

Samuele Murtinu

Utrecht University, The Netherlands

Philosophy has extensively examined what constitutes artificial intelligence (AI) and how the integration of AI into decision processes impacts organizations and societies (e.g., Carter, 2007; Copeland, 1993; Pollock, 1990; Schiaffonati, 2003). In the introduction to a special volume of Minds and Machines (2012), Vincent Mueller summarizes the classical philosophical debates surrounding AI (e.g., “Can AI machines think?”), as well as the limits of AI as evidenced by the declining consensus around the ‘computationalist’ view of AI. The latter equates cognition with “just” computation over representations, which could thus be implemented in any artificial system. However, while computation is digital, representation is strongly intertwined with cognition, which relies on consciousness, free will, action, meaning, intuition, intentionality and interaction, none of which are digital or can be automated.

In organizations, AI is increasingly integrated into decision-making (Panico et al., 2024), driven by its capacity to process vast amounts of data (Hilbert and López, 2011) and to generate knowledge worker-like outputs. While some organizations view AI-based decision-making as superior to human judgment, full automation of decisions remains rare (Benbya et al., 2020). Instead, many decisions are made through collaborations between humans and AI under labor specialization (e.g., Agrawal et al., 2019) or via human-AI ensembles, where human and algorithmic decisions are aggregated (e.g., Choudhary et al., 2023). This may reflect a transitional phase in which AI complements rather than replaces human cognition. However, questions remain about AI’s ability to simulate human-like decision-making and consciousness.

This paper emphasizes the need to critically assess three philosophical limitations of AI in decision-making. First, when faced with undecidable (in a Gödelian sense) problems, AI may introduce randomness or utilize unknown methods to make decisions, posing challenges to human comprehension of its processes. This underscores the importance of human oversight to ensure transparency and accountability in critical decisions, especially when these affect society.

Second, semantically, AI lacks the ability to ascribe genuine meaning to concepts or sentiments like empathy. Its operations are limited to processing symbols and patterns, falling short of human cognition, which inherently combines syntax with semantics. As illustrated by Searle’s “Chinese Room” argument, AI simulates but does not replicate human understanding. This limitation raises concerns about AI’s ability to grasp abstract concepts like civic values, which are deeply tied to human experience and social interaction.

Third, AI systems are also inherently shaped by human perceptions, which are subjective and influenced by biases or evolving paradigms. Examples include how time and space are conceptualized or how cultural constructs influence AI training datasets. While AI can simulate human rationality, it cannot emulate human irrationality, imagination, or the capacity for paradigm shifts that drive scientific and societal progress. This makes AI ill-suited to discover entirely new paradigms or explore the deeper ontology of reality.

Ultimately, the essay emphasizes the need to understand AI’s processes, boundaries, and limitations for ensuring ethical and effective human-AI collaboration. Humans must retain oversight to prevent decisions that could harm society, due to the inherent gaps in AI’s ability to simulate human cognition and morality.

References

Agrawal, A., Gans, J. S., & Goldfarb, A. (2019). Artificial intelligence: the ambiguous labor market impact of automating prediction. Journal of Economic Perspectives, 33(2), 31-50.

Benbya, H., Davenport, T. H., & Pachidi, S. (2020). Artificial intelligence in organizations: current state and future opportunities. MIS Quarterly Executive, 19(4).

Carter, M. (2007). Minds and computers: An introduction to the philosophy of artificial intelligence. Edinburgh University Press.

Choudhary, V., Marchetti, A., Shrestha, Y. R., & Puranam, P. (2023). Human-AI ensembles: When can they work? Journal of Management, 01492063231194968.

Copeland, J. (1993). Artificial intelligence: A philosophical introduction. John Wiley & Sons.

Hilbert, M., & López, P. (2011). The world’s technological capacity to store, communicate, and compute information. Science, 332(6025), 60-65.

Mueller, V. C. (2012). Introduction: philosophy and theory of artificial intelligence. Minds and Machines, 22(2), 67-69.

Panico, C., Murtinu, S., Cennamo, C. (2024). How do humans and algorithms interact? Augmentation, automation, and co-specialization for greater precision in decision-making. Mimeo.

Pollock, J. (1990). Philosophy and artificial intelligence. Philosophical Perspectives, 4, 461-498.

Schiaffonati, V. (2003). A Framework for the Foundation of the Philosophy of Artificial Intelligence. Minds and Machines, 13, 537-552.



Shaping technology with society's voice: measuring gut feelings and values

Marieke van Vliet, Linda Hofman, Anika Kok, Fleur van Liesdonk, Bart Wernaart

Fontys, The Netherlands

The intimate technological revolution is changing how we connect with technology on a deeply personal level. Technology is no longer just around us; it is within us, between us, and learning from us in ways we have never experienced before. As these technologies become deeply intertwined with our daily experiences, they transform our identities and values, not just our behaviours. This intimate integration raises urgent questions about how we can understand and shape the values that should guide our technological future.

Research has consistently shown that the topics we choose to speak about reflect what occupies our minds and what we consider important, revealing our unique perspectives and underlying identity (Pennebaker et al., 2003; Pennebaker, 2011). How can we use this idea to uncover the gut feelings and values of society and align the development of emerging technology with what truly matters to people? Our work explores this question through an innovative method: the Moral Data City Hunt (van Veen & Wernaart, 2022; Wernaart, 2021).

In this interactive moral lab, citizens are confronted with scenarios about technologies that they actively shape through moral choices. "How do we balance the benefits and drawbacks of emerging technologies that promise progress but disrupt current ways of life, such as delivery drones offering convenience while raising privacy concerns or lab-grown meat making food production more sustainable while raising questions about the artificial nature of food?" Through dynamic interviews that explore participants' reasoning, we create a rich context for exploring values about their future with this technology.

Our methodology is grounded in Schwartz's Value Theory, which has emerged as the most influential framework for understanding personal values and their interrelationships (Schwartz, 2012). Research has shown that the words people use in natural conversation reflect their underlying values more accurately than self-reporting methods (Boyd et al., 2015). By analyzing participants' language through the Personal Value Dictionary (Ponizovskiy et al., 2020) - a validated tool linking words to Schwartz's value framework - we can capture the subtle ways values emerge in discussions about technological futures. This approach allows us to identify not just explicit moral statements, but also the implicit value patterns that emerge when people envision and discuss their desired relationship with technology.
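
As a rough, hypothetical illustration of the kind of dictionary-based scoring this analysis relies on, the Python sketch below counts how often an interview answer uses words associated with a few of Schwartz's values and normalizes by answer length. The word lists and the value_profile function are invented for illustration; they are not taken from the validated Personal Values Dictionary.

    from collections import Counter
    import re

    # Hypothetical, illustrative word lists: NOT the validated Personal Values
    # Dictionary (Ponizovskiy et al., 2020), which maps far more words to each
    # of Schwartz's basic values.
    VALUE_LEXICON = {
        "security": {"safe", "privacy", "protect", "stable"},
        "universalism": {"sustainable", "environment", "equality", "nature"},
        "self-direction": {"freedom", "choose", "independent", "curious"},
        "hedonism": {"enjoy", "pleasure", "fun", "convenience"},
    }

    def value_profile(transcript: str) -> dict:
        """Share of words in the transcript that match each value's word list."""
        words = re.findall(r"[a-z']+", transcript.lower())
        counts = Counter(words)
        total = sum(counts.values()) or 1  # avoid division by zero on empty input
        return {value: sum(counts[w] for w in lexicon) / total
                for value, lexicon in VALUE_LEXICON.items()}

    # Example: a short answer in the style of a Moral Data City Hunt interview
    answer = ("I enjoy the convenience of delivery drones, but I worry about "
              "privacy and whether my street stays safe.")
    print(value_profile(answer))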

This work aims to explore which interview techniques best facilitate linguistic analysis of values. The key question we address is: how can we structure a five-minute conversation to elicit linguistic patterns rich enough for automated value analysis through tools like the Personal Value Dictionary? Answering this question unlocks a powerful middle ground between traditional research approaches: capturing authentic societal gut feelings that surveys and focus groups often miss, while avoiding the contextual limitations of mining online linguistic data. This combination of real-world engagement and linguistic analysis provides a uniquely nuanced window into what truly drives society's relationship with technology.

References

Boyd, R., Wilson, S., Pennebaker, J., Kosinski, M., Stillwell, D., & Mihalcea, R. (2015). Values in Words: Using Language to Evaluate and Understand Personal Values. Proceedings of the International AAAI Conference on Web and Social Media, 9(1), 31-40. https://doi.org/10.1609/icwsm.v9i1.14589

Pennebaker, J. W. (2011). The secret life of pronouns. New Scientist, 211(2828), 42-45. https://doi.org/10.1016/S0262-4079(11)62167-2

Pennebaker, J. W., Mehl, M. R., & Niederhoffer, K. G. (2003). Psychological Aspects of Natural Language Use: Our Words, Our Selves. Annual Review of Psychology, 54, 547-577. https://doi.org/10.1146/annurev.psych.54.101601.145041

Ponizovskiy, V., Ardag, M., Grigoryan, L., Boyd, R., Dobewall, H., & Holtz, P. (2020). Development and Validation of the Personal Values Dictionary: A Theory-Driven Tool for Investigating References to Basic Human Values in Text. European Journal of Personality, 34(5), 885-902. https://doi.org/10.1002/per.2294

Schwartz, S. H. (2012). An Overview of the Schwartz Theory of Basic Values. Online Readings in Psychology and Culture, 2(1). https://doi.org/10.9707/2307-0919.1116

van Veen, M., & Wernaart, B. (2022). Building a techno-moral city – Reconciling public values, the ethical city committee and citizens’ moral gut feeling in techno-moral decision making by local governments.

Wernaart, B. (2021). Developing a roadmap for the moral programming of smart technology. Technology in Society, 64, 101466. https://doi.org/10.1016/j.techsoc.2020.101466



 