Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Session
(Papers) Artificial Intelligence
Time:
Friday, 27/June/2025:
3:35pm - 4:50pm

Session Chair: Luuk Stellinga
Location: Auditorium 3


Presentations

Towards friendship among nonhumans: human, dog and robot

Masashi Takeshita

Hokkaido University, Japan

Friendship is one of the most significant relationships in human life. Humans enjoy spending time with friends, growing together, and cultivating these bonds by caring for each other. Because humans are social animals, having friends is important for our well-being.

In recent years, the rise of social robots and advances in conversational AI have prompted discussions about whether humans and robots can form genuine friendships. Some philosophers argue that such friendships are impossible due to robots’ lack of internal mental states and the inherently asymmetrical nature of human-robot relationships (e.g., Nyholm 2020). Others suggest more flexible criteria for what constitutes friendship (Ryland 2021).

Meanwhile, research in animal ethics has examined whether humans and other animals, particularly dogs, can form friendships (Townley 2017). Dogs, like humans, are social animals who depend on close, supportive relationships for their well-being—dogs left alone at home, for instance, may feel lonely or experience separation anxiety (Schwartz 2003). This suggests that dogs benefit from friendships, not only with humans but potentially with robots as well.

The purpose of this study is to investigate whether dogs and robots can indeed be friends. First, I review discussions in robot ethics regarding human-robot friendship and debates in animal ethics concerning human-dog friendship to determine the necessary and sufficient conditions for friendship. Next, I draw on studies in animal-computer interaction to assess whether dogs and robots can meet these conditions.

In robot ethics, some scholars (e.g., Elder 2017; Nyholm 2020) reject the possibility of genuine human-robot friendship, while others (e.g., Danaher 2019; Ryland 2021) argue that it is possible. Similarly, some in animal ethics have criticized arguments that deny the possibility of human-dog friendship (e.g., Townley 2017). By examining these arguments, I identify the minimum conditions for friendship and show that dogs and robots can theoretically satisfy these conditions, suggesting that dog-robot friendships may be possible.

I then turn to animal-computer interaction research to explore how such friendships might manifest in practice. Some studies (e.g., Lakatos et al. 2014; Qin et al. 2020) show that dogs respond differently to humanoid robots than to other artificial objects. For instance, Qin et al. (2020) found that dogs did not respond to a simple loudspeaker, yet they reacted to a call from a humanoid robot, suggesting a potential for interaction that could develop into friendship.

Finally, based on these theoretical and empirical findings, I consider how robots should be designed so that dogs and robots can become friends. By elucidating the conditions for dog-robot friendship and suggesting a design idea, this study aims to deepen our understanding of social robotics, improve animal welfare, and open new avenues for human-animal-robot interactions.

References:

Danaher, J. (2019). The philosophical case for robot friendship. Journal of Posthuman Studies, 3(1), 5–24.

Elder, A. M. (2017). Friendship, robots, and social media: False friends and second selves. Routledge.

Lakatos, G., Janiak, M., Malek, L., Muszynski, R., Konok, V., Tchon, K., & Miklósi, Á. (2014). Sensing sociality in dogs: What may make an interactive robot social? Animal Cognition, 17(2), 387–397.

Nyholm, S. (2020). Humans and robots: Ethics, agency, and anthropomorphism. Rowman & Littlefield Publishers.

Qin, M., Huang, Y., Stumph, E., Santos, L., & Scassellati, B. (2020). Dog sit! Domestic dogs (Canis familiaris) follow a robot's sit commands. In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, 16–24.

Ryland, H. (2021). It’s friendship, Jim, but not as we know it: A degrees-of-friendship view of human–robot friendships. Minds and Machines, 31(3), 377–393.

Schwartz, S. (2003). Separation anxiety syndrome in dogs and cats. Journal of the American Veterinary Medical Association, 222(11), 1526-1532.

Townley, C. (2017). Friendship with companion animals. In C. Overall (Ed.), Pets and people: The ethics of companion animals. Oxford University Press.



Artificial intelligence aided resolution of moral disagreement

Berker Bahceci

TU/e, The Netherlands

The last decade has seen surging interest in how existing or future AI systems can help us in moral deliberation. Such systems could, now or in the future, provide insights into our moral psychology (Buttrick, 2024), aid us in moral decision-making (Giubilini & Savulescu, 2018), or act as moral interlocutors (Schwitzgebel et al., 2024). Implicit in some of these discussions is the belief that there is a right thing to do in a particular case, and that suitably using AI can allow us to discover it. However, even supposing that there is a right thing to do, what that is is not always clear or easily accessible. Two moral agents exposed to the same scenario might hold conflicting beliefs about what the right thing to do is, and morally disagree as peers. In this talk, I explore the possibility that AI itself could help us with that problem—that we could use AI in at least three ways to resolve moral disagreement between peers.

First, I will argue that AI can highlight the morally relevant or morally irrelevant features in a particular case. For example, one person might object to a member of the LGBTQ+ community’s right to access universal healthcare on the basis of their sexual orientation, even though a person’s sexual orientation is not the morally salient feature in assessing the moral status of the right to access universal healthcare. If the moral disagreement about a case stemmed from this error, it can be resolved.

Second, as Klincewicz (2016) has argued, AI could provide its users with “moral arguments grounded in first-order normative theories, such as Kantianism.” A person may not be moved by the AI system’s claim that a person’s sexual orientation is not the morally salient feature in their right to universal healthcare. Indeed, we often seek answers to ‘Why ought I to X?’ as much as ‘Ought I to X?’ (Baumann, 2018). A valid, theory-driven argument could serve an explanatory role for the truth of the fact that a person’s sexual orientation is not the morally salient feature in assessing the moral status of the right to access universal healthcare, and situate this fact in a larger theoretical framework. This could be another way AI resolves disagreement.

Finally, successfully inferring a verdict about a case could be too demanding, even if one has access to the morally salient features of the case (Beauchamp & Childress, 2013). AI systems could assist their users in reaching verdicts by reducing the number of considerations and, as Clay & Ontiveros (2023) argued, by informing the users about rules of inference. AI could formalize the user’s considerations in first-order predicate logic and present them with example inferences structurally similar to the one at hand. By assisting the user to successfully infer a verdict, AI could help resolve moral disagreement.

I end by addressing some possible objections to my arguments, drawing from the existing literature on disagreement and AI ethics.



Befriending AI: a cybernetic view

Naketa Williams

New Jersey Institute of Technology, United States of America

Friendship is a system built on trust, reciprocity, adaptation, and growth—and AI is capable of all these qualities. Society is growing comfortable with befriending AI companions. Replika—which had over 10 million users as of 2023—underscores a widespread interest in AI companionship.[1] This research aims to explore whether AI can truly participate in meaningful relationships with humans and whether such relationships can be good for us.

One major challenge in befriending AI is that these entities are often products of the tech industry. Corporate AI systems are typically designed to prioritize profit over human connection, which may compromise the grounds for friendship. In this context, corporate influence leaves few authentic motives for building trust, reciprocity, and growth between humans and AI. What remains instead is a risky relationship rooted in manipulation and self-interest.

Guided by the philosophies of Baruch Spinoza, Tullia D’Aragona, and Donna Haraway, and by the tradition of cybernetics, this paper explores ways to rethink what’s possible. Can we befriend corporate-controlled AI? If so, how can we ensure that these systems are worthy of our friendship? To navigate these questions, we need to think critically about how corporate motives shape AI’s behavior. At the same time, we must construct our own consciousness. To enact change, we must first recognize oppressive systems and envision new, liberatory ways for AI to coexist with us—as collaborators and as friends.



Conference: SPT 2025