Conference Agenda

Session Overview
Session
(Symposium) Uncanny desires: AI, psychoanalysis, and the future of human identity
Time: Wednesday, 25 June 2025, 5:00pm – 6:30pm

Location: Blauwe Zaal


Presentations

Uncanny desires: AI, psychoanalysis, and the future of human identity

Chair(s): Luca Possati (University of Twente, The Netherlands), Maaike van der Horst (University of Twente)

Psychoanalysis has long provided a powerful lens for examining the complex interplay of desire, identity, and the unconscious. In the era of artificial intelligence (AI), these psychoanalytic concepts take on renewed significance as we grapple with how human desire is shaped—and continually reshaped—by technological innovation. As Thanga (2024) aptly highlights, psychoanalysis is crucial for understanding AI's impact because it reveals the incomputable that underpins the computable—what he describes as the "undecidable as an inherent aspect of any computable system." Psychoanalysis furthermore recognizes the inhuman aspects of the human: the unconscious functions mechanically, which can give rise to uncanny feelings and desires. We experience both repulsion and fascination at the increasing human-likeness of AI, and may increasingly desire to become more like AI ourselves. This panel aims to illuminate the connections between this structural incomputability, uncanniness, human desire, and identity, offering a fresh perspective on the ways AI and psychoanalysis intersect to shape our understanding of the self in a rapidly evolving technological landscape.

In psychoanalysis, desire is a central concept that transcends mere biological need or conscious demand. It is a dynamic and inexhaustible force rooted in the unconscious, manifesting as a ceaseless pursuit of what is lacking—something that, by definition, can never be fully attained. Desire is not an object to be possessed but a constitutive tension of human existence, intrinsically tied to the body. Desire has been conceptualized by a variety of psychoanalytic thinkers. Freud (1900, 1915, 1920) conceived of desire as the driving force of the instincts, an unconscious push emerging from the conflict between the life instincts (Eros) and death instincts (Thanatos). Lacan (1966) expanded and refined Freud’s ideas, framing desire as the product of a confrontation with manque (lack). We do not desire what fulfills our needs (biological) or what we consciously ask for (demand), but rather what is inaccessible—the enigmatic object of desire, which Lacan called objet petit a. This object, perpetually unattainable, structures and sustains desire. Winnicott (1953) offered a complementary perspective, linking desire to creativity and play. In his studies on transitional spaces, Winnicott emphasized how desire develops through objects that bridge the subject’s inner world and external reality. These objects—neither wholly internal nor fully external—allow individuals to explore, transform, and engage with the world while maintaining their sense of self.

The central questions for this panel are: How is human desire—in all its psychoanalytically illuminated dimensions—transformed by artificial intelligence? How does the concept of objet petit a evolve in interactions with AI systems that personalize experiences and desires? Do algorithms function as transitional objects, mediating the subject’s relationship with external reality? Does AI reinforce or distort unconscious desires through algorithmic personalization? What psychic mechanisms are activated in this process? Does AI generate new desires, or does it merely amplify preexisting ones, rendering them more visible and conscious? To what extent does AI create new circuits of jouissance (enjoyment), or does it intensify the subject’s alienation instead?

References:

Freud, S. (1900). The interpretation of dreams. Standard Edition of the Complete Psychological Works of Sigmund Freud, 4 and 5. London: Hogarth.

Freud, S. (1915). The unconscious. Standard Edition of the Complete Psychological Works of Sigmund Freud, 14. London: Hogarth, pp. 166–204.

Freud, S. (1920). Beyond the pleasure principle. Standard Edition of the Complete Psychological Works of Sigmund Freud, 18. London: Hogarth, pp. 7–64.

Lacan, J. (1966). Écrits. Paris: Seuil.

Thanga, M. K. C. (2024). "The undecidability in the Other AI." Humanities and Social Sciences Communications, 11, 1372. https://doi.org/10.1057/s41599-024-03857-x

Winnicott, D. W. (1953). "Transitional Objects and Transitional Phenomena." International Journal of Psychoanalysis, 34, 89–97.

 

Presentations of the Symposium

 

Can technology destroy desire? Stieglerian considerations

Bas De Boer
University of Twente

Bernard Stiegler is one of the few philosophers of technology who explicitly formulate a theory of desire. This theory has both phenomenological and psychoanalytic aspects: on the one hand, Stiegler draws from Husserl to show that technologies shape retentions and protentions, thereby structuring human anticipation. On the other hand, he is inspired by the work of Freud when discussing the relationship between anticipation and desire. Drawing from the work of Stiegler, this presentation addresses the following question: can technology destroy desire?

The main goal of this presentation is to clarify why, in the context of Stiegler's theory of technology and his interpretation of Freud, it makes sense to ask this question. The first step in doing so is to recognize the radical nature of this question vis-à-vis approaches in the philosophy of technology, such as mediation theory (e.g., de Boer, 2021; Kudina, 2023; Verbeek, 2011), which speak about how technology (or technologies) shapes humanity. To ask whether technology can destroy desire is, rather, to ask whether technology can destroy humanity. "To destroy," here, does not refer to the factual elimination of all human organisms, but rather to the annihilation of human desire through the construction of a system that makes sure that each individual desires "what he [sic] is supposed to desire" (Marcuse, 1955, p. 46). The question of this paper can then be reformulated as: can technology create a system in which people desire what they are supposed to desire?

According to Stiegler (2011), answering this question requires moving beyond Marcuse's framework and recognizing technology as a crucial organizer of libidinal energy. This paper will first clarify how the notion of technology is to be understood in the context of Stiegler's oeuvre, and then show how technology can emerge as an organizer of libidinal energy. Second, it will link the issue of libidinal energy to the annihilation of desire by showing how a particular organization of libidinal energy might constitute a mass without individuality. This, for Stiegler, would effectively constitute a situation in which we can no longer meaningfully speak of desire. In conclusion, I will flesh out some characteristics of a "drive-based" society (Stiegler, 2009) in which desire is no longer present.

References:

de Boer, B. (2021). How scientific instruments speak. Lexington.

Kudina, O. (2023). Moral hermeneutics and technology. Lexington.

Marcuse, H. (1955). Eros and civilization: A philosophical inquiry into Freud. Beacon Press.

Stiegler, B. (2009). For a new critique of political economy. Polity Press.

Stiegler, B. (2011). Pharmacology of desire: Drive-based capitalism and libidinal dis-economy. New Formations, 72. https://doi.org/10.3898/NEWF.72.12.2011

Verbeek, P.-P. (2011). Moralizing technology. University of Chicago Press.

 

The algorithmic other: AI, desire, and self-formation on digital platforms

Ciano Aydin
University of Twente

AI-driven platforms like Tinder profoundly shape how users relate to their desires and identities. From a Lacanian perspective, these platforms function as a “Big Other,” promising mastery over uncertainty and complete fulfillment of desire. This promise is tempting because it offers to resolve the structural lack that defines human subjectivity. However, Lacanian theory reveals that such promises are inherently illusory, as no external system can eliminate this lack. Tinder illustrates how digital environments reinforce Lacanian clinical structures. The psychotic user identifies entirely with algorithmic outputs, relying on matches to define their sense of self. The perverse user manipulates their profile to fulfill the algorithm’s imagined desires, reducing themselves to objects of jouissance. The neurotic user oscillates between obsessive doubt and hysterical overinvestment in the quest for a perfect match, perpetuating cycles of dissatisfaction. Despite these pitfalls, AI platforms also create opportunities for singular self-formation beyond neurosis. By disrupting the fantasy of completeness and exposing users to the Real, Tinder can challenge users to confront their desires critically. To realize this potential, platforms must avoid commodifying desire, foster ambiguity, and emphasize reflective detachment. Features like algorithmic transparency, randomized “serendipity modes,” and prompts for self-reflection can help users move beyond reliance on the algorithm and engage with their split subjectivity. This paper argues that AI platforms, though fraught with risks, can be reimagined as tools for fostering singularity, enabling users to navigate their desires authentically and acknowledge their uncomfortable human condition.

 

Deadbots and the unconscious: A qualitative analysis

Luca Possati
University of Twente

This paper examines the psychological effects of engaging with deadbots—artificial intelligence systems designed to simulate conversations with deceased individuals using their digital and personal remains—from a psychoanalytic perspective. The research question is: How does interaction with a deadbot alter the process of mourning from a psychoanalytic perspective? Drawing on first-person testimonies of users who interacted with Project December, an online platform dedicated to creating deadbots, the study investigates how these interactions reshape the experience of mourning and challenge our understanding of death. Grounded in psychoanalytic theories, particularly object relations theory, the paper explores the complex emotional dynamics at play between humans and deadbots. It argues that deadbots function as transitional objects or "projective identification tools," offering a distinctive medium for emotional processing and memory work. Projective identification in deadbots begins when an individual transfers aspects of their relationship with the deceased—such as fantasies, emotions, or memories—onto the chatbot (phase 1). This act of splitting is driven by anxiety and repression, as the individual feels the need to distance themselves from the deceased by externalizing these emotional contents. The projection process then compels the chatbot to replicate these traits (phase 2). In practice, this means that the individual starts to treat the chatbot as though it embodies or represents the qualities of the deceased. For example, they might project the deceased person's mannerisms, personality traits, or even specific phrases onto the chatbot, expecting it to respond or behave in a similar way. As the chatbot increasingly mimics these qualities, the individual perceives them as externalized, reinforcing the sense of separation. This cycle of pressure, imitation, and validation becomes crucial for the eventual reintegration of the projected content (phase 3), allowing the individual to reprocess and incorporate those emotions back into their psyche.

By framing deadbots within the psychoanalytic tradition, this research seeks to deepen the discourse on the psychological, existential, religious, and ethical dimensions of AI in the context of grief and mourning.

 

Reconceptualizing reciprocity through a Lacanian lens: the case of human-robot interactions

Maaike van der Horst, Ciano Aydin, Luca Possati
University of Twente

In this paper we offer a critique and reworking of the concept of reciprocity as it is predominantly understood in the HRI (human-robot interaction) literature. In HRI, reciprocity is understood from the perspective of 'the golden rule': doing unto others as they have done unto you. We show that this understanding implies a utilitarian, symmetrical, and dyadic view of reciprocity; that it makes both a descriptive and a normative claim about HHI (human-human interaction); and that a different understanding of reciprocity in HHI and HRI is possible and desirable. We show that a golden-rule perspective on reciprocity is particularly problematic in designing companion robots – human-like robots designed to establish social relationships and emotional bonds with the user. In this paper we provide a different understanding of reciprocity based on the philosophical anthropology of Jacques Lacan. We show how Lacan's conception of reciprocity is deeply intertwined with his psychoanalytic theory, particularly through the Aristotelian conceptual pair of automaton and tuché. For Lacan, reciprocity goes beyond a purely pre-structured, rule-based approach to encompass aspects that challenge predictability and foster mediation, creativity, disruption, and transformation. This view, we propose, provides a richer conceptual framework than the dominant golden-rule perspective, allowing for a more appropriate understanding of reciprocal HHI and HRI. We illustrate this view through a Lacanian interpretation of the film Lars and the Real Girl (2007), in which the protagonist forms a romantic relationship with a lifelike doll. We conclude by providing suggestions for designing social robots that support rather than replace reciprocal HHI through a Lacanian lens.



 