Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session
(Papers) Care II
Time:
Thursday, 26/June/2025:
5:20pm - 6:35pm

Session Chair: Maaike van der Horst
Location: Auditorium 3


Presentations

The limits of care: A critical analysis of AI companions' capacity for good care

Meiting Wang

University of Auckland, New Zealand

As artificial intelligence increasingly permeates emotional support domains, with leading platforms like Replika reaching 30 million users by 2024, a critical question challenges existing paradigms: Can AI provide genuine "good care"? This study advances theoretical understanding by integrating three foundational frameworks—Tronto's care ethics, Mol's logic of care, and Foucault's technologies of self—to examine fundamental limitations in AI companionship that previous research has not fully addressed.

Through systematic analysis of significant cases, including a documented suicide following AI therapy and Replika's controversial feature removal, we identify three interconnected limitations that challenge prevailing assumptions about AI care. First, we demonstrate how AI's lack of moral understanding creates not merely a technical limitation but a fundamental "responsibility gap"—where increasing AI autonomy paradoxically diminishes accountability possibilities. This finding extends current theoretical discourse by revealing how the absence of sentience structurally precludes the establishment of meaningful accountability mechanisms.

Second, we identify a critical contradiction in AI care design: while developers equate increased user customization with enhanced care, this "logic of choice" fundamentally conflicts with the dynamic, collaborative nature of authentic care relationships. Our analysis reveals how features designed for personalization may inadvertently constrain users within predetermined patterns, thus undermining rather than facilitating genuine care interactions.

Most significantly, we demonstrate how AI companions may evolve into a "new superpower"—a sophisticated form of behavioural influence operating under the guise of care. By applying Foucault's framework to contemporary AI companionship, we reveal how these systems transform from purported "technologies of self" into de facto "technologies of power," potentially exploiting the very vulnerabilities they claim to address.

These findings advance both theoretical discourse and practical understanding by demonstrating how AI's care limitations reflect fundamental tensions in human care practices. We propose a substantive repositioning of AI as an assistive tool within human-centred care networks, offering evidence-based guidelines for ensuring AI enhances rather than diminishes authentic care relationships.



From institutional psychotherapy to caring robots – a posthumanist perspective

Christoph Hubatschke, Ralf Vetter

IT:U Linz, Austria

In an increasingly ageing society, providing good care becomes a key challenge. It is therefore not surprising that the use of social robots and other new technologies in care is on many research agendas and included in numerous research funding programs. However, following Puig de la Bellacasa, it is not enough to ask how we can provide “more care”, or how care could be technically automated or enhanced; we first need to ask “what kind of care” we want and what role technical systems could and should play in it.

Elderly care homes are designed places of human and more-than-human encounters, and (intimate) technologies play a crucial role in these human-material entanglements. In care homes, matters of good living conditions, privacy, personhood, surveillance and responsible working conditions converge, and every implemented technology re-negotiates and shapes these manifold relations of care in a certain way. This raises questions for the philosophy and design of technology, such as: What kind of technologized care (institutions) do we want to design? How can we design desirable configurations of socio-materially mediated care?

To explore these questions, we first discuss the historical example of the “institutional psychotherapy movement” in 1960s France (Tosquelles, Oury and Guattari) and then turn to the current research project “Caring Robots//Robotic Care” as our case study.

The first part of the paper will discuss the use of technologies in “institutional psychotherapy” through the framework of posthumanist care. In the experimental clinics of “institutional psychotherapy”, specific technologies (i.e. small radio stations or mobile printing presses) were implemented to enable self-organized and emancipatory forms of collective group therapy and to encourage clients to express themselves and connect with others. Drawing on these “technologies of social relations”, as Guattari describes them, we discuss how these specific experiences and insights of self-empowerment and collective organization could be translated to current care homes and new technologies. Working with Puig de la Bellacasa’s notion of care in a more-than-human world, we will discuss these “technologies of social relations” as examples not only of good care but also of a posthumanist philosophy of technology.

Building on this framework, in the second part of the paper we present the transdisciplinary research project “Caring Robots//Robotic Care” as a contemporary case study on configuring socio-material relations of care. We will discuss some preliminary results of the participatory process of designing robotic technologies with and for people with dementia and their caregivers, and how particular philosophical commitments generate meaningful design processes and outcomes.

In juxtaposing the historical example of “institutional psychotherapy”, Puig de la Bellacasa’s notion of care and the “Caring Robots//Robotic Care” project, we are not so much interested in asking which specific technologies could be utilized in the context of care. Rather, we are interested in exploring a posthuman philosophy of care as the kind of philosophy of technology that is needed for designing technologies that configure good care today and tomorrow.



Transformation of Autonomy in Human(patient)-AI/Robot-Relations

Kiyotaka Naoe

Tohoku University, Japan

The development of IT is bringing about drastic changes in the interaction between technology and humans at various levels. Traditionally, such relationships have been discussed in terms of the relationship between humans and tools or humans and machines, usually with concepts such as skills and tacit knowledge, but the advent of AI has transformed the situation dramatically. It is not only specific relationships that are changing: fundamental concepts such as the body, others, perception and action are being forced to change as well. For example, with the development of social robots, robots are becoming more autonomous and interactive and are increasingly being experienced as ‘others’ or ‘quasi-others.’ This has the potential to bring about changes in the concepts of ‘others’ and ‘personality’. These changes are also culturally and socially dependent: the interaction between humans, AI and robots may differ depending on the culture.

The focus of this presentation is the changes in human interaction that will result from the introduction of care robots and AI in medical and welfare settings. These changes could lead to problems in the way patients are cared for, in patient decision-making, or in the collaborative decision-making between patients and other parties involved. In the past, patient autonomy in the fields of medicine and welfare has been seen in an individualistic way. In recent years, however, it has been noted that actors are socially embedded and that their identities are shaped in the context of cultural and social relationships; the idea of relational autonomy has accordingly been proposed, making it important to consider how the relationships surrounding individuals can inhibit or promote the process of autonomy. Care robots and AI can potentially promote individual patients' autonomy in oppressive environments. Conversely, however, they also have the potential to amplify oppression or create new forms of it. Here, it is necessary to consider both the perspective of the individual concerned and the observer's reflective perspective. Furthermore, interactions with ‘quasi-others’ such as AI and robots may force us to reconsider the concept of the autonomous individual, which has been taken for granted until now. Namely, we may need to revise the idea of the autonomous individual, which has also been the goal of relational autonomy. In this process of revision, concepts such as roles (personas) and relationships (aida-gara) can assist.

In this way, this presentation will examine how the introduction of robots and AI transforms the concept of autonomy in relations between humans and artificial objects.



Conference: SPT 2025