It is uncontroversial that we commonly anthropomorphise AI-driven systems, particularly social AI-driven systems such as humanoid robots and chatbots. Indeed, the field of human-robot interaction (HRI) is replete with empirical studies whose authors claim to show that we do (Airenti, 2018; Damholdt et al., 2023; Duffy, 2003; Li & Suh, 2022; Salles et al., 2020).
According to ‘a widespread view’ (Coghlan, 2024), this anthropomorphic way of thinking and talking about AI-driven systems is a mistake of some kind. I first distinguish two interpretations of the supposed anthropomorphic mistake, metaphysical and pragmatic. I object to the metaphysical interpretation and develop the pragmatic interpretation.
On the metaphysical interpretation (section 2), the mistake we make when we anthropomorphise AI-driven systems is that our thoughts and utterances carry a commitment to ontological falsehoods, for example to the existence of (non-existent) artificial minds.
I provide two objections to this metaphysical interpretation (section 3). First, we may be using non-literal or metaphorical anthropomorphic ascriptions that do not carry an ontological commitment. Second, a ‘companions-in-guilt’ objection: if we are committing ourselves to ontological falsehoods when talking and thinking about AI, then we are also doing so when we talk about corporations and thermostats (for example, when we say that a thermostat ‘thinks’ the room is too cold). But it is implausible that such everyday talk carries any ontological commitment.
The objections to the metaphysical interpretation motivate an alternative, pragmatic interpretation of the anthropomorphic mistake (section 4). The problem is not that our AI-related thought and talk fail to correspond with reality; rather, we are adopting a way of thinking and speaking that can get us into trouble. I articulate this pragmatic interpretation via Daniel Dennett’s ‘intentional stance.’ The mistake is that thinking and talking anthropomorphically about AI-driven systems leaves us vulnerable to predictive error, which can have negative downstream consequences, including leading us to draw poor inferences.
I further distinguish two kinds of pragmatic mistake we might be making by anthropomorphising AI. The first is the more fundamental mistake of adopting the intentional stance toward a system that is not the right kind of system for that stance. The second is adopting the intentional stance toward a system that could warrant it, but doing so poorly or naively—for example, misattributing a specific belief to the system.
Coghlan, S. (2024). Anthropomorphizing Machines: Reality or Popular Myth? Minds and Machines, 34, 1-25.
Damholdt, M.F., Quick, O.S., Seibt, J., Vestergaard, C., & Hansen, M. (2023). A Scoping Review of HRI Research on ‘Anthropomorphism’: Contributions to the Method Debate in HRI. International Journal of Social Robotics, 15, 1203-1226.
Duffy, B.R. (2003). Anthropomorphism and the social robot. Robotics and Autonomous Systems, 42, 177-190.
Li, M., & Suh, A. (2022). Anthropomorphism in AI-enabled technology: A literature review. Electronic Markets, 32, 2245-2275.
Placani, A. (2024). Anthropomorphism in AI: hype and fallacy. AI and Ethics, 4, 691-698.
Salles, A., Evers, K., & Farisco, M. (2020). Anthropomorphism in AI. AJOB Neuroscience, 11, 88-95.