Conference Agenda

Overview and details of the sessions of this conference. Select a date or location to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Session
(Papers) Anthropomorphism
Time: Thursday, 26 June 2025, 5:20pm - 6:35pm

Session Chair: Ibo van de Poel
Location: Auditorium 4


Presentations

Anthropomorphism, false beliefs and conversational AIs

Beatrice Marchegiani

University of Oxford, United Kingdom

Abstract:

Conversational AIs (CAIs) are autonomous systems capable of engaging in natural language interactions with users. Recent advances have enabled CAIs to hold conversations that are virtually indistinguishable from human interactions: the latest generation of Large Language Models can sustain detailed, coherent, and context-aware exchanges, often making it hard for users to tell them apart from human interlocutors. These new abilities, combined with the anthropomorphic cues present in recent models, pose a substantive risk of users forming anthropomorphic false beliefs about CAIs. For the purposes of this paper, I define an anthropomorphic false belief as a mistaken belief that an entity possesses human-like traits when, in fact, it does not. Such false beliefs can arise when the CAI’s nature is not disclosed and users mistakenly believe they are interacting with a human, or, even when the CAI is disclosed, through subconscious anthropomorphism. Existing literature on anthropomorphism and AI addresses the instrumental harms associated with anthropomorphism, but there has been little discussion of the relationship between anthropomorphism and autonomy, or of how anthropomorphism might be bad in itself, particularly through its impact on user autonomy. This paper aims to address this gap by arguing that anthropomorphic false beliefs undermine users' autonomy. Throughout, I assume that autonomy holds intrinsic value.

The core argument is structured as follows:

P1: (Empirical) Interactions with CAIs are likely to cause users to falsely believe that CAIs have some human-like attributes (form anthropomorphic false beliefs).

P2: Anthropomorphic false beliefs undermine users' autonomy.

Conclusion: Interactions with CAIs are likely to undermine users’ autonomy.

The paper is organised into six sections. In part 1, I begin by justifying an autonomy-based approach to analysing anthropomorphic false beliefs in the context of CAIs, and I briefly mention two alternative ways in which such false beliefs can be criticised: as disconnecting us from reality, or through the lens of deception, as exemplified by the existing literature on social robots. In part 2, I outline two mechanisms that lead users to form anthropomorphic false beliefs in the context of CAIs: the first is a lack of explicit disclosure, and the second is subconscious anthropomorphism. In part 3, I explore how some false beliefs can undermine an agent’s autonomy, and I propose a characterization, "the intention test", to identify which false beliefs undermine autonomy. In part 4, I apply the intention test to the case of anthropomorphic false beliefs and CAIs, demonstrating that such false beliefs undermine autonomy. In part 5, I address two objections to my argument: first, whether the loss of autonomy is significant enough to pose a serious threat, and second, cases where it might be best for users to form false beliefs about CAIs. Finally, in part 6, I conclude by discussing practical ways to minimise the autonomy-eroding potential of CAIs.



What's the problem with anthropomorphising AI-driven systems?

Giles Howdle

Utrecht University, The Netherlands

It is uncontroversial that we commonly anthropomorphise AI-driven systems, particularly social AI-driven systems, such as humanoid robots and chatbots. Indeed, the field of human-robot interaction (HRI) is replete with empirical studies that, their authors claim, show that we do (Aienti, 2018; Damholdt et al., 2023; Duffy, 2003; Li & Suh, 2022; Salles et al., 2020).

According to ‘a widespread view’ (Coghlan, 2024), this anthropomorphic way of thinking and talking about AI-driven systems is a mistake of some kind. I first distinguish two interpretations of the supposed anthropomorphic mistake, metaphysical and pragmatic. I object to the metaphysical interpretation and develop the pragmatic interpretation.

On the metaphysical interpretation (section 2), the mistake we make when we anthropomorphise AI-driven systems is that our thoughts and utterances carry a commitment to ontological falsehoods, for example to the existence of (non-existent) artificial minds.

I provide two objections to this metaphysical interpretation (section 3). First, we may be using non-literal or metaphorical anthropomorphic ascriptions that do not carry an ontological commitment. Second, a ‘companions-in-guilt’ objection: if we are committing ourselves to ontological falsehoods when talking and thinking about AI, then we are also doing so when we talk about corporations and thermostats. But this is implausible.

The objections to the metaphysical interpretation motivate an alternative, pragmatic interpretation of the anthropomorphic mistake (section 4). It is not that our AI-related thought and talk fail to correspond with reality; rather, we are adopting a way of thinking and speaking that can get us into trouble. I articulate this pragmatic interpretation via Daniel Dennett’s ‘intentional stance.’ The mistake is that thinking and talking anthropomorphically about AI-driven systems leads to (vulnerability to) predictive error, which can have negative downstream consequences, including leading us to make poor inferences.

I further distinguish two kinds of pragmatic mistake we might be making by anthropomorphising AI. The first is the more fundamental mistake of adopting the intentional stance toward a system that is not the right kind of system for that stance. The second is adopting the intentional stance toward a system that could warrant it, but doing so poorly or naively—for example, misattributing a specific belief to the system.

Coghlan, S. (2024). Anthropomorphizing Machines: Reality or Popular Myth? Minds and Machines, 34, 1-25.

Damholdt, M.F., Quick, O.S., Seibt, J., Vestergaard, C., & Hansen, M. (2023). A Scoping Review of HRI Research on ‘Anthropomorphism’: Contributions to the Method Debate in HRI. International Journal of Social Robotics, 15, 1203-1226.

Duffy, B.R. (2003). Anthropomorphism and the social robot. Robotics and Autonomous Systems, 42, 177-190.

Li, M., & Suh, A. (2022). Anthropomorphism in AI-enabled technology: A literature review. Electronic Markets, 32, 2245-2275.

Placani, A. (2024). Anthropomorphism in AI: hype and fallacy. AI Ethics, 4, 691-698.

Salles, A., Evers, K., & Farisco, M. (2020). Anthropomorphism in AI. AJOB Neuroscience, 11, 88-95.


