Conference Agenda
Overview and details of the sessions of this conference.
Time: Thursday, 26/June/2025: 8:45am - 10:00am · Session Chair: Philip Antoon Emiel Brey · Location: Blauwe Zaal
Time: Thursday, 26/June/2025: 8:45am - 10:00am · Session Chair: Julia Hermann · Location: Auditorium 2
Time: Thursday, 26/June/2025: 8:45am - 10:00am · Session Chair: Luca Possati · Location: Auditorium 4
Time: Thursday, 26/June/2025: 10:05am - 11:20am · Session Chair: Robin Hillenbrink · Location: Auditorium 8
Time: Thursday, 26/June/2025: 10:05am - 11:20am · Session Chair: Hans Voordijk · Location: Auditorium 6
Time: Thursday, 26/June/2025: 10:05am - 11:20am · Session Chair: Wouter Eggink · Location: Auditorium 1
Time: Thursday, 26/June/2025: 11:50am - 1:05pm · Session Chair: Sage Cammers-Goodwin · Location: Auditorium 5
Time: Thursday, 26/June/2025: 11:50am - 1:05pm · Session Chair: Maaike van der Horst · Location: Auditorium 3
Time: Thursday, 26/June/2025: 11:50am - 1:05pm · Session Chair: Maren Behrensen · Location: Auditorium 7
Time: Thursday, 26/June/2025: 3:35pm - 4:50pm · Session Chair: Nolen Gertz · Location: Auditorium 4
Time: Thursday, 26/June/2025: 3:35pm - 4:50pm · Session Chair: Luca Possati · Location: Auditorium 2
Time: Thursday, 26/June/2025: 5:20pm - 6:35pm · Session Chair: Maaike van der Horst · Location: Auditorium 3
Time: Thursday, 26/June/2025: 5:20pm - 6:35pm · Session Chair: Julia Hermann · Location: Auditorium 2
Time: Friday, 27/June/2025: 10:05am - 11:20am · Session Chair: Hans Voordijk · Location: Auditorium 3
Time: Friday, 27/June/2025: 10:05am - 11:20am · Session Chair: Michael Nagenborg · Location: Auditorium 6
Time: Friday, 27/June/2025: 3:35pm - 4:50pm · Session Chair: Sage Cammers-Goodwin · Location: Auditorium 5
Time: Friday, 27/June/2025: 3:35pm - 4:50pm · Session Chair: Maren Behrensen · Location: Auditorium 2
Time: Saturday, 28/June/2025: 2:20pm - 3:45pm · Session Chair: Wouter Eggink · Location: Auditorium 13
Uncanny desires: AI, psychoanalysis, and the future of human identity
Psychoanalysis has long provided a powerful lens for examining the complex interplay of desire, identity, and the unconscious. In the era of artificial intelligence (AI), these psychoanalytic concepts take on renewed significance as we grapple with how human desire is shaped—and continually reshaped—by technological innovation. As Thanga (2024) aptly highlights, psychoanalysis is crucial for understanding AI's impact because it reveals the incomputable that underpins the computable—what he describes as the "undecidable as an inherent aspect of any computable system." Psychoanalysis furthermore recognizes the inhuman aspects of the human, as the unconscious functions mechanically. This can cause uncanny feelings and desires. We experience repulsion yet fascination at the increasing human-likeness of AI and might increasingly desire to become more like AI. This panel aims to illuminate the connections between this structural incomputability, uncanniness, human desire, and identity, offering a fresh perspective on the ways AI and psychoanalysis intersect to shape our understanding of the self in a rapidly evolving technological landscape.
In psychoanalysis, desire is a central concept that transcends mere biological need or conscious demand. It is a dynamic and inexhaustible force rooted in the unconscious, manifesting as a ceaseless pursuit of what is lacking—something that, by definition, can never be fully attained. Desire is not an object to be possessed but a constitutive tension of human existence, intrinsically tied to the body. Desire has been conceptualized by a variety of psychoanalytic thinkers. Freud (1900, 1915, 1920) conceived of desire as the driving force of the instincts, an unconscious push emerging from the conflict between the life instincts (Eros) and death instincts (Thanatos). Lacan (1966) expanded and refined Freud’s ideas, framing desire as the product of a confrontation with manque (lack). We do not desire what fulfills our needs (biological) or what we consciously ask for (demand), but rather what is inaccessible—the enigmatic object of desire, which Lacan called objet petit a. This object, perpetually unattainable, structures and sustains desire. Winnicott (1953) offered a complementary perspective, linking desire to creativity and play. In his studies on transitional spaces, Winnicott emphasized how desire develops through objects that bridge the subject’s inner world and external reality. These objects—neither wholly internal nor fully external—allow individuals to explore, transform, and engage with the world while maintaining their sense of self.
The central questions for this panel are: How is human desire—in all its psychoanalytically illuminated dimensions—transformed by artificial intelligence? How does the concept of objet petit a evolve in interactions with AI systems that personalize experiences and desires? Do algorithms function as transitional objects, mediating the subject’s relationship with external reality? Does AI reinforce or distort unconscious desires through algorithmic personalization? What psychic mechanisms are activated in this process? Does AI generate new desires, or does it merely amplify preexisting ones, rendering them more visible and conscious? To what extent does AI create new circuits of jouissance (enjoyment), or does it intensify the subject’s alienation instead?
References:
Freud, S. (1900). The interpretation of dreams. Standard Edition of the Complete Psychological Works of Sigmund Freud, Vols. 4–5. London: Hogarth.
Freud, S. (1915). The unconscious. Standard Edition of the Complete Psychological Works of Sigmund Freud, Vol. 14. London: Hogarth, pp. 166–204.
Freud, S. (1920). Beyond the pleasure principle. Standard Edition of the Complete Psychological Works of Sigmund Freud, Vol. 18. London: Hogarth, pp. 7–64.
Lacan, J. (1966). Écrits. Paris: Seuil.
Thanga, M.K.C. (2024). "The undecidability in the Other AI." Humanities and Social Sciences Communications, 11, 1372. https://doi.org/10.1057/s41599-024-03857-x
Winnicott, D.W. (1953). "Transitional objects and transitional phenomena." International Journal of Psychoanalysis, 34, 89–97.
Presentations of the Symposium
Can technology destroy desire? Stieglerian considerations
Bernard Stiegler is one of the few philosophers of technology who explicitly formulate a theory of desire. This theory has both phenomenological and psychoanalytic aspects: on the one hand, Stiegler draws on Husserl to show that technologies shape retentions and protentions, thereby structuring human anticipation; on the other hand, he is inspired by the work of Freud when discussing the relationship between anticipation and desire. Drawing on the work of Stiegler, this presentation addresses the following question: can technology destroy desire?
The main goal of this presentation is to clarify why, in the context of Stiegler’s theory of technology and his interpretation of Freud, it makes sense to ask this question. The first step in doing so is to recognize the radical nature of this question vis-à-vis approaches in the philosophy of technology, such as mediation theory (e.g., de Boer, 2021; Kudina, 2023; Verbeek, 2011), that speak about how technology (or technologies) shapes humanity. Asking whether technology can destroy desire amounts to asking whether technology can destroy humanity. “To destroy,” here, does not refer to the factual elimination of all human organisms, but rather to the annihilation of human desire through the construction of a system that makes sure that each individual desires “what he [sic] is supposed to desire” (Marcuse, 1955, p. 46). The question of this paper can then be reformulated as: can technology create a system in which people desire what they are supposed to desire?
According to Stiegler (2011), answering this question requires moving beyond Marcuse’s framework and recognizing technology as a crucial organizer of libidinal energy. This paper will first clarify how the notion of technology is to be understood in the context of Stiegler’s oeuvre, and subsequently show how technology can emerge as an organizer of libidinal energy. Secondly, it will link the issue of libidinal energy to the annihilation of desire by showing how a particular organization of libidinal energy might constitute a mass without individuality. This, for Stiegler, would effectively constitute a situation in which we can no longer meaningfully speak of desire. In conclusion, I will flesh out some characteristics of a “drive-based” society (Stiegler, 2009) in which desire is no longer present.
References:
de Boer, B. (2021). How scientific instruments speak. Lexington.
Kudina, O. (2023). Moral hermeneutics and technology. Lexington.
Marcuse, H. (1955). Eros and civilization: A philosophical inquiry into Freud. Beacon Press.
Stiegler, B. (2009). For a new critique of political economy. Polity Press.
Stiegler, B. (2011). Pharmacology of desire: Drive-based capitalism and libidinal dis-economy. New Formations, 72. https://doi.org/10.3898/NEWF.72.12.2011
Verbeek, P.-P. (2011). Moralizing technology. University of Chicago Press.
The algorithmic other: AI, desire, and self-formation on digital platforms
AI-driven platforms like Tinder profoundly shape how users relate to their desires and identities. From a Lacanian perspective, these platforms function as a “Big Other,” promising mastery over uncertainty and complete fulfillment of desire. This promise is tempting because it offers to resolve the structural lack that defines human subjectivity. However, Lacanian theory reveals that such promises are inherently illusory, as no external system can eliminate this lack. Tinder illustrates how digital environments reinforce Lacanian clinical structures. The psychotic user identifies entirely with algorithmic outputs, relying on matches to define their sense of self. The perverse user manipulates their profile to fulfill the algorithm’s imagined desires, reducing themselves to objects of jouissance. The neurotic user oscillates between obsessive doubt and hysterical overinvestment in the quest for a perfect match, perpetuating cycles of dissatisfaction. Despite these pitfalls, AI platforms also create opportunities for singular self-formation beyond neurosis. By disrupting the fantasy of completeness and exposing users to the Real, Tinder can challenge users to confront their desires critically. To realize this potential, platforms must avoid commodifying desire, foster ambiguity, and emphasize reflective detachment. Features like algorithmic transparency, randomized “serendipity modes,” and prompts for self-reflection can help users move beyond reliance on the algorithm and engage with their split subjectivity. This paper argues that AI platforms, though fraught with risks, can be reimagined as tools for fostering singularity, enabling users to navigate their desires authentically and acknowledge their uncomfortable human condition.
Deadbots and the unconscious: A qualitative analysis
This paper examines the psychological effects of engaging with deadbots – artificial intelligence systems designed to simulate conversations with deceased individuals using their digital and personal remains – from a psychoanalytic perspective. The research question is: How does interaction with a deadbot alter the process of mourning, from a psychoanalytic perspective? Drawing on first-person testimonies of users who interacted with Project December, an online platform dedicated to creating deadbots, the study investigates how these interactions reshape the experience of mourning and challenge our understanding of death. Grounded in psychoanalytic theories, particularly object relations theory, the paper explores the complex emotional dynamics at play between humans and deadbots. It argues that deadbots function as transitional objects or “projective identification tools”, offering a distinctive medium for emotional processing and memory work. Projective identification in deadbots begins when an individual transfers aspects of their relationship with the deceased – such as fantasies, emotions, or memories – onto the chatbot (phase 1). This act of splitting is driven by anxiety and repression, as the individual feels the need to distance themselves from the deceased by externalizing these emotional contents. The projection process then compels the chatbot to replicate these traits (phase 2). In practice, this means that the individual starts to treat the chatbot as though it embodies or represents the qualities of the deceased. For example, they might project the deceased person's mannerisms, personality traits, or even specific phrases onto the chatbot, expecting it to respond or behave in a similar way. As the chatbot increasingly mimics these qualities, the individual perceives them as externalized, reinforcing the sense of separation.
This cycle of pressure, imitation, and validation becomes crucial for the eventual reintegration of the projected content (phase 3), allowing the individual to reprocess and incorporate those emotions back into their psyche.
By framing deadbots within the psychoanalytic tradition, this research seeks to deepen the discourse on the psychological, existential, religious, and ethical dimensions of AI in the context of grief and mourning.
Reconceptualizing reciprocity through a Lacanian lens: the case of human-robot interaction
In this paper we offer a critique and reworking of the concept of reciprocity as it is predominantly understood in the HRI (human-robot interaction) literature. In HRI, reciprocity is understood from the perspective of ‘the golden rule’: doing unto others as they have done unto you. We show that this understanding implies a utilitarian, symmetrical and dyadic view of reciprocity, that it lays both a descriptive and a normative claim on HHI (human-human interaction), and that a different understanding of reciprocity in HHI and HRI is possible and desirable. We show that a golden-rule perspective on reciprocity is particularly problematic in designing companion robots, i.e. human-like robots designed to establish social relationships and emotional bonds with the user. In this paper we provide a different understanding of reciprocity based on the philosophical anthropology of Jacques Lacan. We show how Lacan's conception of reciprocity is deeply intertwined with his psychoanalytic theory, particularly through the Aristotelian conceptual pair automaton and tuché. For Lacan, reciprocity goes beyond a purely pre-structured, rule-based approach to encompass aspects that challenge predictability and foster mediation, creativity, disruption and transformation. This view, we propose, provides a richer conceptual framework than the dominant golden-rule perspective, allowing for a more appropriate understanding of reciprocal HHI and HRI. We illustrate this view through a Lacanian interpretation of the film Lars and the Real Girl (2007), in which the protagonist forms a romantic relationship with a lifelike doll. We conclude by providing suggestions for designing social robots that support rather than replace reciprocal HHI, viewed through a Lacanian lens.
Session Details:
(Symposium) Uncanny desires: AI, psychoanalysis, and the future of human identity
Time: 25/June/2025: 5:00pm-6:30pm · Location: Blauwe Zaal
Artificial Intelligence in design engineering practice
University of Twente, The Netherlands
Artificial Intelligence through machine learning is becoming increasingly important in civil engineering practice and has been applied, among other areas, to the structural design of building infrastructure, the time and cost planning of large projects, and risk quantification. Methods based on machine learning (ML) use large volumes of stored data and identify patterns or relationships within these datasets through a self-learning process. ML technologies “learn” these relationships from training data, without being explicitly programmed.
In essence, the majority of ML technologies describe patterns and real-world phenomena in a fashion that is not comprehensible, intelligible, or at the very least rationalizable to humans (i.e., black-box solutions). Given the ever-increasing accuracy of ML technologies in civil engineering practice, human reliance and dependence on ML-based solutions increase. This can create situations where users of ML technologies depend on systems whose workings remain opaque to them. When ML is introduced in decision-making, final decisions become the result of a complex interplay between designers, users and technology (Fritz, Brandt, Gimpel, & Bayer, 2020; Redaelli, 2022; Verbeek, 2008). A major question is whether one can speak of a hybrid agency between these actors. Can one speak of a dialogue between these actors? Under what conditions can AI become smarter than its designer or user? Does AI also learn from its user?
To address these questions, an empirical case study was carried out in which ML technology was applied to the design optimization of wind turbine foundations, with the aim of reducing the overall design time without compromising accuracy. Because an actual design process involves an extensive number of design variables, it is essential to verify and determine the most influential ones. Using ML, the likely influential design variables are determined. But can AI in this use practice become smarter than the design engineer by revealing influential design variables the engineer has not seen? And which actor determines that an influential variable is missing? Does ML provide designers with a better understanding of the importance of each design variable and of how a certain design variable influences the behavior of the wind turbine foundation?
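The abstract does not specify which ML method was used for this variable screening. As a purely illustrative sketch (the surrogate model, variable names, weights, and ranges below are invented for illustration and are not taken from the case study), a simple shuffle-based sensitivity screen over candidate design variables might look like this:

```python
import random

# Hypothetical surrogate for a wind turbine foundation response.
# The variables and weights are invented for illustration only.
def surrogate(diameter, depth, soil_stiffness):
    return 4.0 * diameter + 2.0 * depth + 0.1 * soil_stiffness

random.seed(0)
# Synthetic "design database": random samples of the three design variables.
samples = [(random.uniform(4, 8),      # diameter
            random.uniform(20, 40),    # depth
            random.uniform(10, 100))   # soil stiffness
           for _ in range(500)]
outputs = [surrogate(*s) for s in samples]

def shuffle_sensitivity(index):
    """Mean squared change in the surrogate output when one design
    variable is shuffled across the samples (others held fixed)."""
    column = [s[index] for s in samples]
    random.shuffle(column)
    perturbed = [surrogate(*(s[:index] + (v,) + s[index + 1:]))
                 for s, v in zip(samples, column)]
    return sum((a - b) ** 2 for a, b in zip(outputs, perturbed)) / len(outputs)

# Higher score = more influential design variable under this toy model.
scores = [shuffle_sensitivity(i) for i in range(3)]
```

Under these invented weights and ranges, depth would register as the most influential variable; in the actual case study the screening method, the variables, and their ranking would of course be determined by the engineering data.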
It is shown that designing and using AI systems in design engineering involve many actors. Because there is a web of responsibilities, it is impossible to hold one actor accountable. Concepts from postphenomenology (Verbeek, 2008) may clarify this perceived hybrid agency between users, designers and AI in design engineering practice. By using these concepts, the increasingly close relationship between users, designers and AI in design engineering practice can be examined. In responding to the call of Leiringer and Dainty (2023), applying these concepts has, more generally, the potential to increase the maturity of civil engineering research.
Session Details:
(Papers) Engineering ethics
Time: 26/June/2025: 8:45am-10:00am · Location: Auditorium 7
Human-technology relations down to earth
Saxion University of Applied Sciences, The Netherlands; University of Twente, The Netherlands
In this paper we will discuss how philosophy of technology can address, and hopefully help advance, a much-needed reorientation within design of our relation to nature and the earth. There is an ecological crisis, and technology has everything to do with it. The human application of technology therefore urgently needs to be adjusted to ecology in a sounder way. For this, both design approaches and philosophy of technology need a more explicit orientation towards nature and the earth (Latour, 2017; Lemmens, Blok & Zwier, 2017).
Using the philosophy of human-technology relations for the present-day call to preserve the earth against damaging technology is complicated, for at the heart of this philosophy lies a philosophical questioning of the meaning of nature and technology, and a blurring of the distinction between them. Exemplary is Latour’s claim, in his earlier work, that all things and humans are ‘hybrids’ (We Have Never Been Modern). While modernity can be characterized by a technological flight lifting us up from our natural condition, we have drifted too far away, and it is now time for a reappraisal of our bonds to the earth and to nature. The challenge for the philosophy of technical mediation and of human-technology relations is therefore to consider how to advance a mediation approach to the complex of humans, technology, and nature in such a way that it averts a forgetting of nature and instead acknowledges earth/nature in the right way. How can human-technology relations remain, or be brought back, ‘down to earth’ (Latour, 2018)?
To answer this question we engage in design research and design philosophy, following an approach from the practical turn in philosophy of technology (anonymized for review, 2018b; 2021): firstly, by examining the work of Koert van Mensvoort, who applies philosophy of technology to design and advances the notion of ‘next nature’ (van Mensvoort & Gerritzen, 2005; van Mensvoort & Grievink, 2015); secondly, by discussing a project by design students about design and the relation to nature in the context of food (anonymized for review, 2023). Next, we research the place of earth/nature in the (post)phenomenological framework of “human-technology-world relations” (Ihde, 1990, 1993; Verbeek, 2005, 2015) and in the “Product Impact Tool”, which offers a practical elaboration of the idea of technical mediation (anonymized for review, 2014; 2017).
The philosophical analysis of the relations between technology and humans has proven useful in design practice with respect to improving human-technology interaction (anonymized for review, 2018; 2020; 2021) and considering social effects (anonymized for review, 2020a; 2020b; 2020c; 2021). However, the discussed design case emphasizes that the role of nature/earth is underexposed in this human-technology relations approach. Therefore, building on our analysis of the framework of “human-technology-world relations”, we will present a revised design of the Product Impact Tool: a Product Impact Tool ‘down to earth’ (anonymized for review, 2022).
References:
Ihde, D. (1990). Technology and the lifeworld: From garden to earth. Bloomington: Indiana University Press.
Ihde, D. (1993). Postphenomenology: Essays in the Postmodern Context. Evanston: Northwestern University Press.
Latour, B. (2017). Facing Gaia: Eight lectures on the new climatic regime. John Wiley & Sons.
Latour, B. (2018). Down to earth: Politics in the new climatic regime. John Wiley & Sons.
Lemmens, P., V. Blok and J. Zwier (2017). Toward a terrestrial turn in philosophy of technology. Techné: Research in Philosophy and Technology 21(2/3): 114-126. DOI: https://doi.org/10.5840/techne2017212/363
van Mensvoort, K. and M. Gerritzen, Eds. (2005). Next Nature. Rotterdam: BIS.
van Mensvoort, K. and H.-J. Grievink, Eds. (2015). Next Nature Catalog; Nature changes along with us. Barcelona: Actar.
Verbeek, P.-P. (2005). What Things Do: Philosophical Reflections on Technology, Agency, and Design. University Park, PA: Penn State University Press.
Verbeek, P.-P. (2015). Beyond Interaction; a short introduction to mediation theory. Interactions 22(3): 26-31.
Session Details:
(Papers) Human - Technology
Time: 26/June/2025: 8:45am-10:00am · Location: Auditorium 2
The new stage of democracy. A call for regulation of social media platforms based on theater theory
University of Twente, The Netherlands
Social networking systems like Facebook, TikTok and Instagram have become an essential part of everyday life. Unlike traditional media gatekeepers, these platforms allow users to search for information in an unmediated way, following a market-like logic rather than a truth-centered one. To snowball, content needs to gather likes and views, which are independent of its epistemic reliability. Influencers carefully study their posts and adapt them to the audience they intend to reach, staging a show with measured choices of words, lighting and framing. Following this trend, politicians have increasingly used these platforms to gain visibility, adapting their communication to this market-based, non-epistemic logic. Political scientists have described this shift in the balance of democracy with the category of “post-truth”, highlighting the rising role of appeals to emotion in deciding whom to vote for. In spite of this, the CEOs of social media platforms do not acknowledge having any significant societal influence, thus upholding an image of neutrality for their businesses. Philosophers of technology, on the other hand, have argued that social media platforms should be regulated more responsibly, since their current design might bring their users not only positive opportunities but also undesirable, harmful consequences that corporate executives should anticipate. This paper aims to show that aesthetic reflections on theater can help back up this claim.
This paper will be structured as follows. First, I will give an overview of William Dutton’s suggestion that the Internet should be described as the fifth estate of democracy, focusing on the case of social media platforms. Second, I will argue that Erving Goffman’s sociological framework allows for a description of social networking systems as theatrical contexts. Third, I will argue that Bertolt Brecht’s theater theory allows us to understand that communication on social media platforms is centered on non-epistemic values, such as representativeness and appeals to emotion. This will also provide the justification for distinguishing social networking platforms from informational institutions. Fourth, I will back up this claim by borrowing the concept of post-truth from political science and analyzing it in light of the idea of the suspension of disbelief. Finally, I will conclude that social media platforms have a decisive influence on the functioning of the public sphere and that the emergence of the fifth estate as an unprecedented social force can be framed as the start of a new chapter in the history of democracy. This will show that the supposed neutrality of social networking systems is a myth, and that philosophy of technology would benefit from adopting an interdisciplinary framework. Insights from sociology, theater theory and political science would provide useful analyses and labels to make the multifaceted influence of social media platforms on democracy more visible, thus making a stronger case for their regulation.
Session Details:
(Papers) Democracy
Time: 26/June/2025: 10:05am-11:20am · Location: Auditorium 4
Queering the sex robot: insights from queer Lacanian psychoanalysis and new materialism
University of Twente, The Netherlands
Human-like sex robots have the potential to mediate intimate relationships and sexuality (Frank and Nyholm, 2017). However, sex robots are rarely imagined or used in ways that can be considered queer. Historically and currently, sex robots have predominantly been depicted and used by heterosexual men as idealized women-like objects, and they seem mainly to reinforce heteronormative and masculine ideals of sexual and romantic companionship. In this paper we explore the possibility of queering the sex robot. By queering the sex robot, we mean imagining, designing and using sex robots in ways that disrupt heteronormative, binary and particularly masculine frameworks of ‘good’ sexual and romantic companionship (Ahmed, 2006).
We do so through two distinct yet overlapping critical theoretical lenses: Queer Lacanian Psychoanalysis and New Materialism. New Materialism offers an understanding of human (and gender) identity as fluid and has introduced the figure of the cyborg as a queer concept and method. It focuses on relationships with the non-human (nature and technology) and, from a feminist perspective, highlights problematic power relations. Queer Lacanian Psychoanalysis critically analyzes the philosophical anthropology of the psychoanalyst and philosopher Jacques Lacan through a queer lens. Lacan, for instance, highlighted the idea that ‘the sexual relation does not exist’, thereby criticizing dominant heterosexual ideals. We identify several overlaps between these two perspectives on sexual relations: both 1) emphasize the importance of non-human relationality, 2) demonstrate how sexual identity and orientation are fundamentally fluid, and 3) critique ideals of purity and harmony in sexual relations.
Based on these findings, in the first part of the paper we aim to highlight how the insights from these theories offer potential for queering sex robots, including how sex robots can be critiqued from these perspectives. In the second part of the talk, we take a more practical approach and explore what queering the sex robot could look like in practice – a rather challenging endeavor. Drawing on our previous collaborations with designers and artists, we will generate some concrete ideas.
References:
Ahmed, S. (2006). Queer Phenomenology: Orientations, Objects, Others. London: Duke University Press.
Frank, L., & Nyholm, S. (2017). Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable? Artificial Intelligence and Law, 25(3), 305-323. https://doi.org/10.1007/s10506-017-9212-y
Session Details:
(Papers) Sex robots
Time: 26/June/2025: 3:35pm-4:50pm · Location: Blauwe Zaal
Technology as a constellation: The challenges of doing ethics on enabling technologies
University of Twente
While it is tempting to think that technologies as artefacts have an author (Franssen et al., 2024), new technologies rarely emerge from a vacuum. Rather, new and emerging technologies are a summation of prior innovations, a cocktail of science, invention, and discovery. Like the ship of Theseus, elements can be swapped out and reconfigured while the intention of the entity remains the same. And these elements, too, are technologies with their own constellations of elements that can be adapted. What technologies are metaphysically can shift with a swap of a processor, update of a lens, frying of a cable, or runtime reduction of an algorithm.
Technology as an entity is fickle. While “phone” has remained a steady term over the past century, how it works and what it is capable of have shifted. This revolution is in no small part due to a constellation of technologies that formed and shifted until landlines became pocket-sized computers. Even on a micro scale, small updates to processors and shifts in materials allow corporations to release “new” models of laptops, phones, tablets (and more) annually. With the advent of the Internet of Things (IoT), allowing for and encouraging connections between disparate technologies, the breadth of entanglement between technologies is bound to make their capabilities even less concrete.
Philip Brey (2017) described enabling technologies as “technologies that provide innovation across a range of products, industrial sectors, and social domains,” adding that “they combine with a large number of other technologies to yield innovative products and services.” Given the trend to connect technologies, the glorification of historical data collection, and the explosion of machine learning and AI services, this understanding of “enabling technologies” might be too limited. As opportunities for connections grow, so too does the scope of enablement.
As the night sky of technology grows increasingly dense, the opportunities for new Technology Constellations increase. These could form fractals of enablement: infinity loops such as AI learning by reading its own blogs, or multiple supposedly privacy-preserving tools joining together to form a system of surveillance. A culture of connection through open data and APIs, currently pushed as a new ethical model to encourage innovation, makes it challenging to predict ethical challenges, as technologies themselves and what they enable can so readily change.
In our paper, we will use a concrete case of advanced radio frequency sensing to explore the nuances of enabling technologies by considering them as Technology Constellations and uncovering the related issues we need to be prepared for with the expansion of IoT and machine learning.
Works Cited
Brey, P. A. E. (2017). Ethics of Emerging Technologies. In S. O. Hansson (Ed.), The Ethics of Technology: Methods and Approaches (pp. 175-192). (Philosophy, Technology and Society). Rowman & Littlefield International.
Franssen, Maarten, Gert-Jan Lokhorst, and Ibo van de Poel (2024). Philosophy of Technology. In: Edward N. Zalta & Uri Nodelman (eds.), The Stanford Encyclopedia of Philosophy (Fall 2024 Edition), URL = <https://plato.stanford.edu/archives/fall2024/entries/technology/>.
Session Details:
(Papers) Machine Learning
Time: 26/June/2025: 3:35pm-4:50pm · Location: Auditorium 5
Personal and intimate relationships with AI: an assessment of their desirability
University of Twente, The Netherlands
Artificial intelligence has reached the social integration stage: a stage at which AI systems are no longer mere tools but function as entities that engage in social relationships with humans. This development has been made possible by the rise of Large Language Models (LLMs), in particular, due to their capacity to process, generate, and interpret human language in ways that simulate meaningful interactions. Their ability to understand context, recognize emotional cues, and respond coherently enables them to participate in conversational exchanges that mimic relational dynamics, including empathy, collaboration, and trust-building. Complementary technologies that support social relationships include emotion recognition, personalization algorithms, and multimodal integration (e.g., combining text with voice or visual data). AI designed to engage in social interactions and relationships with humans may be called relational AI.
Social interactions and relationships with AI can be personal or impersonal. Impersonal interactions and relationships are instrumental in nature and focus on task performance. An example is an interaction with a chatbot aimed at finding information. Personal interactions and relationships involve direct, individualized interactions in which the AI tailors its responses to the unique characteristics, needs, or preferences of a specific human. They have emotional or relational depth, involving a simulation of empathy, care, or meaningful engagement. They also involve relational continuity, in which a history is built with an individual. They frequently also involve the AI having a personality, which helps to build trust, relatability, and emotional connectedness. Relational AI that engages in personal relationships with humans may be called personalized relational AI.
Personalized relational AI is finding a place in three kinds of AI applications: those with a focus on learning, self-improvement, and professional growth, on personal assistance, and on companionship. For each type, I will assess the benefits and drawbacks of having personalized relational AI, and I will assess whether and under what conditions such AI systems are desirable.
It will be argued that personalized relational AI offers significant benefits by tailoring interactions to individual needs and preferences. It can also improve learning outcomes, offer empathetic emotional support, and combat loneliness, particularly for vulnerable populations. However, it also presents notable drawbacks, most importantly the risk of emotional attachment to artificial systems that cannot genuinely reciprocate feelings, as well as a resulting weakening of human relationships. In addition, there are major privacy risks associated with the use of these systems, their commercial nature raises the potential for manipulative interactions that prioritize profit over user well-being, and there is an issue of accountability when errors or harm occur, due to the lack of moral agency of these systems. It will be argued that the use of personalized relational AI in learning and self-improvement is defensible, but that its use in personal assistance and companionship may come at a cost that is often too high.
Session Details:
(Papers) Intimacy II
Time: 26/June/2025: 5:20pm-6:35pm · Location: Blauwe Zaal
Techsploitation cinema: how movies shaped our technological world
University of Twente, The Netherlands
There is clearly an audience for movies about machines, for movies where flesh and blood has been replaced by metal and circuitry. So the question that I want to ask in this project is: Why? Why does this audience exist? Why do people want to see movies about humans fighting robots (Terminator), about robots fighting robots (Terminator 2), about robots fighting to protect humans so they can grow up to fight robots (Terminator 3)? If these movies are indeed exploiting the audience’s desires, then what are the desires that are being exploited in these movies? If a traditional sexploitation movie is about offering audiences a way to watch pornographic sex scenes surrounded by just enough plot to avoid the accusation of being a pervert, and if a traditional blaxploitation movie is about offering audiences a way to watch racist stereotypes surrounded by just enough plot to avoid the accusation of being a racist, then what is it that a “techsploitation” movie would be offering audiences?
To answer these questions, this project will explore the movies that I have already identified and many, many others that similarly seem to cater to audiences who want nothing more than to see machines try to kill humans (e.g., Westworld), machines try to enslave humans (e.g., The Matrix), and machines try to become humans (e.g., Demon Seed). This exploration, though, won’t just be about investigating the technological world of cinema, but also about investigating the technological world in which we live. For just as exploring sexploitation movies and blaxploitation movies has helped us to better understand gender dynamics and racial dynamics at play in society, I believe that exploring techsploitation movies will likewise help us to better understand the dynamics at play in society in the relationship between humanity and technology.
Session Details:
(Papers) Philosophy of technology III
Time: 26/June/2025: 5:20pm-6:35pm · Location: Auditorium 1
Ontic capture and technofascism
University of Twente, The Netherlands
In “Dear Octavia Butler,” written as a letter to the late science-fiction author that interrogates central ideas from her novels and stories, Kristie Dotson develops the concept of gestative capture. The concept describes an ideological mandate for “survival at all costs” that reduces human beings capable of bearing children to their potential role in biological reproduction. It captures these “bearers” in their reproductive essence, ignoring their dynamic existence.
Dotson connects the concept of gestative capture to a demographic trend that currently occupies the minds of the far-right in Europe and North America: sharply declining birth rates. The far-right uses this demographic trend to conjure a fight for the “survival of the West” – an obvious racist dogwhistle. This “fight” is then thought to justify the rollback of rights and access to technologies that allow “bearers” to escape gestative capture – the overturning of Roe v Wade is just the most obvious example.
Dotson’s letter is not anti-natalist – her point is that the choice to bear, nurse, and raise children has become increasingly unattractive as means to opt out of reproduction have become more and more accessible. Instead of making this choice more attractive, the political response – which so far has largely come from the far-right – pushes for the expansion of gestative capture. In my contribution, I explore the relation between this specific development and the larger context of far-right politics and “big tech” – inspired by Elon Musk’s obsessive tweeting about declining birth rates, but not limited to him or his companies.
I argue that what Dotson calls gestative capture is part of a broader phenomenon that can be described as ontic capture: the reduction of human beings and their dynamic existence to a fixed essence. This essence can be defined in reproductive terms, but it can also be sexual, ethnic or racial, religious, or economic. Ontic capture is not a new phenomenon in that all social, political, and legal classification systems depend on it to some extent – classification systems which usually have their own survival as their chief ideological mandate.
However, the entrenching of corporate “big tech” in civil society and government – again, Musk is only the most conspicuous example – threatens to render ontic capture overpowered and ungovernable. Especially where so-called “artificial intelligence” is involved, the products of “big tech” tend to be comprehensive systems of classification, surveillance, and social control – from social credit scoring and predictive policing to “social media,” they are designed to capture persons in some predictive, quantifiable essence.
While ontic capture is a part of all classification systems, this technologically overpowered version of it – especially when combined with matching commitments from corporate and political leaders – easily slips into technofascism: the outsourcing of truth to opaque technologies, and the replacement of history and politics with raw predictive power. Historical fascists used friend-foe-propaganda and the radio to will their ideas into existence, current fascists can rely on an entire arsenal of technologies to turbocharge their projects.
Bibliography:
Behrensen, Maren: The State and the Self. Identity and Identities, Rowman & Littlefield 2017.
Dotson, Kristie: “Dear Octavia Butler,” in the Proceedings of the Aristotelian Society 123 (2023), 327-346.
Jenkins, Katharine: “Ontic Injustice,” in the Journal of the American Philosophical Association 6 (2020), 188-205.
McElroy, Erin: Silicon Valley Imperialism. Techno Fantasies and Frictions in Postsocialist Times, Duke University Press 2024.
Saul, Jennifer: Dogwhistles & Figleaves. How Manipulative Language Spreads Racism and Falsehood, Oxford University Press 2024.
Schmitt, Carl: Der Begriff des Politischen, Duncker & Humblot 1932.
Stanley, Jason: “Democratic Lies and Fascist Lies,” in Melissa Schwartzberg and Philip Kitcher (eds.): Truth and Evidence, New York University Press 2021, 209-222.
Teixeira Pinto, Ana: Capitalism with a Transhuman Face. The Afterlife of Fascism and the Digital Frontier, in Third Text 33 (2019), 315-336.
Session Details:
(Papers) Politics I
Time: 27/June/2025: 10:05am-11:20am · Location: Auditorium 6
Welcoming the other: More-than-human agency in regenerative design
1University of Twente; 2Wageningen University; 3Delft University of Technology
A central problem in technology development is that technologies require valuable resources from the environment, such as energy and raw materials, whose extraction and production negatively affect ecosystem health. While there have long been demands and approaches in ethics to design technologies in a more sustainable and environmentally friendly way, the approach of regenerative design has recently gained attention (Pedersen Zari 2018, Hecht et al. 2023). Regenerative design seeks to move beyond net-zero sustainability by designing technologies and infrastructures that actively contribute to restoring the capacity of ecosystems to function at optimal health (Reed 2007, Wahl 2016). The promise of regenerative design is to overcome anthropocentric relations and practices by supporting both human and non-human thriving.
In our paper, we explore this promise by examining the concept of agency in the context of regenerative design. Whereas traditional approaches argue that only humans possess agency, more relational approaches – such as postphenomenology, actor-network theory, and new materialism – attribute a form of agency to non-human entities such as technology and nature. We propose the thesis that human agency often acts as a disruptive force within nature, and that acknowledging the agency of non-human entities may provoke a shift towards less anthropocentric ways of being. Thus, our paper aims to bridge environmental philosophy and the philosophy of technology and design. Philosophy of technology has only recently started to take more seriously the material preconditions of technologies, and more research is needed to fill this gap (Kaplan 2017, Thompson 2020). Regenerative design presents an ideal case study by bringing together design practices and care for nature.
By adopting relational and more-than-human perspectives, we examine how regenerative design transforms these relationships and challenges anthropocentric frameworks. We argue that regenerative design prompts a rethinking of human-non-human relationships, emphasizing the entangled position of humans as part of an ecosystem. Key questions we address include: How should we understand human-non-human relationships within regenerative design? How are these relationships transformed in practice? What does it mean to design for relationships, and how can this be implemented effectively? To what extent does regenerative design risk recreating paternalistic forms of relation towards nature? Through this discussion, we aim to demonstrate how regenerative design fosters new relational paradigms that integrate humans, technology, and the environment in mutually beneficial ways.
References
Hecht, K., et al. (2023). Buildings as Living Systems—Towards a Tangible Framework for Ecosystem Services Design. Design for Climate Adaptation, Cham, Springer International Publishing.
Kaplan, D. M. (2017). Philosophy, technology, and the environment. Cambridge, Massachusetts, The MIT Press.
Pedersen Zari, M. (2018). Regenerative Urban Design and Ecosystem Biomimicry, Routledge.
Reed, B. (2007). "Shifting from ‘sustainability’ to regeneration." Building Research & Information 35(6): 674-680.
Thompson, P. B. (2020). Food and Agricultural Biotechnology in Ethical Perspective, Springer.
Wahl, D. C. (2016). Designing regenerative cultures. Axminster, England, Triarchy Press.
Session Details:
(Papers) Agency I
Time: 27/June/2025: 10:05am-11:20am · Location: Auditorium 7
Epistemological imbalances in assessment of surveillance technologies: what CCTV cameras show us
University of Twente, The Netherlands
Surveillance technologies are used worldwide as mechanisms to safeguard security and maximize citizen wellbeing, but as a consequence privacy is often negatively affected as a trade-off. A paradigmatic example of a surveillance technology that tries to contribute to society by making it safer is the CCTV (Closed-Circuit TeleVision) camera, which is often used for crime prevention. But CCTV systems have the small inconvenience of not being especially effective at preventing crime (citation). Why, given that it is not obvious how they contribute to the general wellbeing, do we still invest millions in their development? In this paper, I will argue that the reasons for the adoption of surveillance technologies are often based on biased evidence that comes from the overrepresentation of certain values when assessing how these technologies contribute to wellbeing. In other words, wellbeing can be improved by realising different values in society (safety, autonomy, privacy, freedom of speech, etc.), but the influence of a technology on some of these values is easier to prove than its impact on others. This epistemological imbalance can be found in CCTV cameras: the cameras’ contributions to security are easily proven by comparing different timelines of conflictive areas, but the impact the cameras have on the privacy of citizens is often overlooked. Compared to numbers and crime statistics, the methods for evaluating CCTV’s impact on privacy (interviews, case studies, etc.) are less “objective” and do not make good headlines.
This epistemological imbalance is not sufficient to justify why we keep installing CCTV cameras, as it is not even clear that they contribute to preventing crime, but the overrepresentation of the “goodness” of CCTV cameras in crime statistics and quick advertisement is easily weaponized by political parties and interested stakeholders. CCTV cameras are a powerful weapon for instigating feelings of insecurity among the population, making them a tool of political mobilization in difficult times. To conclude the paper, I point out that this difficulty of proving a technology’s influence on the different values that contribute to wellbeing may be a general issue among surveillance technologies, which often carry the same structure of trade-offs between security and privacy.
Session Details:
(Papers) Epistemology I
Time: 27/June/2025: 10:05am-11:20am · Location: Auditorium 4
On Escape: breaking free from technological mediation
University of Twente, The Netherlands
Postphenomenology’s recognition that technological artifacts play an active role in our lives by mediating our experiences and actions in the world has proven a powerful perspective for analyzing what things do. However, the relational ontology (and subject-object co-constitutionality) that undergirds this perspective has important implications for the way in which we consider ourselves and our relation to our environment. One of the more pertinent implications concerns our (notion of) freedom: from a postphenomenological perspective, there is no escape from technological mediation. Even if Ihde describes some rare situations as unmediated I-World relations (see his example of sitting on the beach (Ihde, 1990 p. 45)), notions of sedimentation (Aagaard, 2021; Rosenberger, 2012) and the technological gaze (Lewis, 2020) put such a characterization as unmediated into question. In sum, we may never ‘be free’ from mediation’s influence.
As such, a notion of freedom that builds on a modern, autonomous subject will no longer do. This has not, however, led postphenomenologists to disavow the notion of freedom but to transform it through a Foucaultian lens, a freedom always within and in relation to the things that shape us: a freedom without escape. This rehabilitation of freedom without escape in turn prompted some (e.g., Dorrestijn, 2012; Verbeek, 2011) to propose a Foucaultian ethics of technologically mediated subjectivation wherein such a mediated freedom is central. However, this presupposes 1) that there is indeed no escape, and 2) that, given the former, ethics must be grounded within mediation. This paper will contest 1) by exploring two possible escape routes or ‘outsides’ of mediation in which we may find forms of freedom and some possible ethical grounds. Interestingly, those two escape routes point us in radically different directions.
The first is characterized by transcendence, by an orientation outward towards height: it is the encounter with the Other as it appears in the work of Emmanuel Levinas. The Other presents itself, breaks through my Being-at-home-with-myself and calls me to responsibility “from beyond Being” (Levinas, 1998 p. 11). While my experience of the other may be technologically mediated, the Other’s infinity is what (or better put, who) escapes mediation. This encounter is what grants me a paradoxical freedom by constituting self-consciousness: it is the possibility to turn my back on the very Other that needs me.
Where Levinas finds escape ‘towards God’, our second escape route proclaims God is dead. That is, we may find a form of escape that is oriented inwards, to a freedom in becoming rather than being. Specifically, we mean a fundamental engagement in embodied movement inspired by the role(s) that dancing (Tanz) play(s) in Nietzsche, from pedagogically valuable to achieving a self-sufficient freedom in ‘joyous creation’. To dance is to engage the wisdom and values of one’s body and to engage in a most fundamental process of bodily becoming, of self-creation on a level that may sometimes escape mediation.
If these escape routes prove successful, we may yet find some freedom and ethics for our technological lives outside of mediation.
References:
Dorrestijn, S. (2012). The design of our own lives: Technical mediation and subjectivation after Foucault [University of Twente].
Ihde, D. (1990). Technology and the Lifeworld. Indiana University Press.
Levinas, E. (1998). Secularization and Hunger. Graduate Faculty Philosophy Journal, 20/21(2/1), 3–12.
Lewis, R. S. (2020). Technological Gaze: Understanding How Technologies Transform Perception. In A. Daly, F. Cummins, J. Jardine, & D. Moran (Eds.), Perception and the Inhuman Gaze (pp. 128–142). Routledge.
Rosenberger, R. (2012). Embodied technology and the dangers of using the phone while driving. Phenomenology and the Cognitive Sciences, 11(1), 79–94.
Verbeek, P.-P. (2011). Moralizing Technology: Understanding and Designing the Morality of Things. The University of Chicago Press.
Session Details:
(Papers) Mediation III
Time: 27/June/2025: 3:35pm-4:50pm · Location: Auditorium 2
The ‘Technological Environmentality Compass’: factors to consider when designing technological mediations across humans, technologies, and the environment.
University of Twente, The Netherlands
As the world we live in is increasingly shaped by our own technological creations, notions of ecosystem integrity and ‘nature’ fall short in describing the quality of contemporary human-environment relations. While humans have always altered their surroundings, the current trends in domestication, management, and digitalization of the environment by technological means represent a significant shift in our relationship with the world around us. The biosphere itself, at levels from the genetic to the landscape, is increasingly a human product.
The aim of this paper is to propose the notion of a Technological Environmentality Compass to guide discussions towards desirable technological mediations across humans, technologies, and the environment. To do so, I will first analyze how technologies shape our possibilities for action in the environment and our possibilities of experiencing the environment. To achieve this, I will draw on postphenomenology (Ihde 1990; Verbeek 2006) to illuminate the non-neutral role of everyday tools, digital media, infrastructure, and technological systems. Specifically, the concept of Technological Mediation will be used to explain both the existential dimension (how humans exist and behave in the world) and the hermeneutic dimension (how humans perceive and interpret the world) of human-technology-environment relations. Furthermore, I will build on the notion of ‘value-ladenness’ in technological design (van de Poel 2021) to claim that the non-neutral role of technologies translates into them having empowering and limiting roles. On the basis of their enabling and disabling characters, I will utilize the Capability Approach (Nussbaum 2011; Oosterlaken 2013) to sketch a compass for desirable technological mediations based on a naturalized understanding of homo faber as understood by Material Engagement Theory (Malafouris 2013), the Skilled Intentionality Framework (Rietveld 2014, 2021), and Niche Construction Theory (Laland 2016).
To envision such a Technological Environmentality Compass, I will:
1. First, explain that capabilities – namely real, substantive freedoms or opportunities to choose to act in a specific area of life deemed valuable – are fundamentally shaped by specific bio-cultural environments, which imply not only ‘natural goods and ecological services’, but also the enabling or disabling relations –both voluntary and involuntary– mediated by everyday tools, digital media, infrastructure, and technological systems;
2. After laying out the co-constitutional relationship between humans, technologies, and the environment, my main goal is to highlight the importance of adopting a critical perspective towards the technologies we aspire to develop, as well as to critically examine the recursive effect that human-made environments have on us. For this purpose I will build on Nussbaum’s (2011) ‘Control Over One’s Environment’ capability to include ‘Technological Environmentality’ (Aydin, González Woge, and Verbeek 2019) as a crucial dimension of our contemporary life-world;
3. Finally, I will expand on Holland’s (2008) work on ‘Sustainable Ecological Capacity’ as a Meta-Capability and highlight that the accumulation of altered environmental characteristics, whether deliberate or accidental, significantly impacts anthropogenic practices over time and causes biological adaptations to emerge from the reciprocal interactions between humans and the technological environments. Furthermore, this dynamic relationship also shapes future generations of humans, their cultural activities, and other organisms.
To conclude, the Technological Environmentality Compass will be based on the Capability Approach as a non-essentialist, dynamic framework that allows for the analysis of co-evolutionary relations between humans, technologies and the environment. My ultimate objective is to enrich the discourse on identifying and prioritizing the human capabilities that we should aim to preserve, sustain, modify, design, and create as we continue to advance and engineer both our environment and ourselves.
References
- Aydin, C., González Woge, M., and Verbeek, P-P. (2019): “Technological Environmentality: Conceptualizing Technology as a Mediating Milieu”. Philosophy & Technology, 32(2), 321-338.
- Holland, B. (2008): “Justice and the Environment in Nussbaum's Capabilities Approach: Why Sustainable Ecological Capacity Is a Meta-Capability” in Political Research Quarterly, Vol. 61, No. 2, pp. 319-332.
- Ihde, D. (1990): Technology and the Lifeworld: from Garden to Earth. Indiana Series in the Philosophy of Technology.
- Laland, K., Matthews, B., and Feldman, MW. (2016). “An introduction to niche construction theory” in Evolutionary Ecology 30:191-202.
- Malafouris, L. (2013). How Things Shape the Mind: A Theory of Material Engagement. MIT Press.
- Oosterlaken, I. (2013): Taking a Capability Approach to Technology and its Design: A Philosophical Exploration. Netherlands, Simon Stevin Series in the Ethics of Technology, 3TU.
- Nussbaum, M. (2011): Creating Capabilities: The Human Development Approach. Cambridge, Harvard University Press.
- Van Dijk, L., and Rietveld, E. (2017). “Foregrounding Sociomaterial Practice in Our Understanding of Affordances: The Skilled Intentionality Framework” in Frontiers in Psychology, Cognitive Science, Volume 7 - 2016.
- Van de Poel, I. (2021). “Design for value change” in Ethics and Information Technology 23, 27–31.
- Verbeek, P-P. (2006). “Materializing Morality: Design Ethics and Technological Mediation” in Science, Technology, & Human Values, 31: 36.
Session Details:
(Papers) Mediation III
Time: 27/June/2025: 3:35pm-4:50pm · Location: Auditorium 2
Driving for Values: Exploring the experience of autonomy with speculative design
1Eindhoven University of Technology; 2University of Twente
Newly proposed technological solutions to societal problems may face the challenge of not being accepted by users or of not being morally acceptable. A key concept that can help to ensure user acceptance of ethically driven technology design is the consideration of users' autonomy, i.e. allowing users to control their interactions with the system, to understand the implications of their choices, and to make decisions that align with their own values and preferences. In this paper, we explore how value experiences of users can be collected and used in approaches that integrate values of moral importance in design, such as value sensitive design (VSD; Friedman & Hendry, 2019) or design for values (van den Hoven et al., 2015). Using a research-through-design approach (Stappers & Giaccardi, 2017), we investigated a smart system that suggests navigation routes based on collective values such as safety, sustainability, and economic flourishing (the so-called Driving for Values system).
We focus on the experience of autonomy, as there may be concerns that such a system manipulates users to take alternative routes. A system that respects individual autonomy is more likely to be adopted by users and will be seen as fairer and thus more acceptable from a broader societal and ethical standpoint. We understand autonomy as involving two main components: i) the ability to freely choose among different options, and ii) the availability of meaningful options, i.e., options that enable the agent to decide and act on the basis of their own reasoned values and commitments (Blöser et al., 2010; Vugts et al., 2020).
We conducted 18 semi-structured interviews to collect insights on participants’ experiences and concerns, making use of speculative design to elicit emotions in people. Emotions play an important role in value experiences, which can be understood as experiences of what is good and desirable, or bad and undesirable, in relation to specific situations, actions, or objects. During the interviews, we showed two early system versions to each participant and asked participants to click through them and think out loud. When asking questions about autonomy, we presented participants with various definitions of autonomy and autonomy statements to explore how well they could relate to them and connect different statements to different system versions.
We found that a transparent and trustworthy system that offers a meaningful choice between value-driven route options enhances drivers’ acceptance and personal sense of autonomy. As anticipated, the interaction with the speculative design elicited emotional reactions such as delight, positive excitement, and irritation in participants, which can be interpreted as indicators of an autonomy experience. While most participants found it rather difficult to express what they take autonomy to mean when asked directly, it was easy for them to connect the presented autonomy statements with different system versions. This exercise revealed that participants preferred system versions that they felt enhanced their autonomy and that the availability of meaningful options increased their feeling of being autonomous.
REFERENCES
Blöser, C., Schöpf, A., & Willaschek, M. (2010). Autonomy, experience, and reflection: On a neglected aspect of personal autonomy. Ethical Theory and Moral Practice, 13, 239–253. https://doi.org/10.1007/s10677-009-9205-3
Friedman, B., & Hendry, D. G. (2019). Value Sensitive Design: Shaping technology with moral imagination. MIT Press.
Stappers, P. J., & Giaccardi, E. (2017). Research through design. In The encyclopedia of human-computer interaction (pp. 1–94). The Interaction Design Foundation.
van den Hoven, J., Vermaas, P. E., & van de Poel, I. (Eds.). (2015). Handbook of ethics, values, and technological design: Sources, theory, values and application domains. Springer Science+Business Media. https://doi.org/10.1007/978-94-007-6970-0
Vugts, A., Van Den Hoven, M., De Vet, E., & Verweij, M. (2020). How autonomy is understood in discussions on the ethics of nudging. Behavioural Public Policy, 4(1), 108–123. https://doi.org/10.1017/bpp.2018.5
Session Details:
(Papers) Autonomy
Time: 27/June/2025: 3:35pm-4:50pm · Location: Auditorium 7
All in on AI: A critical look at the effects of creating with AI-powered tools
University of Twente, The Netherlands
Session Details:
Poster session
Time: 28/June/2025: 10:50am-11:50am · Location: Senaatszaal
Speculative Ethics; Practicing philosophy of technology in design education
University of Twente, The Netherlands
Session Details:
Poster session
Time: 28/June/2025: 10:50am-11:50am · Location: Senaatszaal
Postphenomenology III: new theoretical horizons
Postphenomenology is a methodological approach that seeks to understand human-technology relations by analyzing the multiple ways in which technologies mediate human experiences and practices. Postphenomenology seeks to continuously update itself in light of technological developments, arising socio-political issues, and emerging theoretical issues. The three papers in this panel explore new theoretical horizons and investigate how postphenomenological research can broaden its scope and respond to emerging socio-technical challenges.
One of the key accomplishments of postphenomenology is the development of a vocabulary for analyzing technologies in use, by focusing on how they become part of human embodiment, give rise to particular forms of sedimentation and resulting habits, or more generally make users perceive the world in a particular way. The conceptual repertoire of postphenomenology is heavily shaped by the work of Edmund Husserl, Martin Heidegger, Maurice Merleau-Ponty, and more recently also by that of Bruno Latour. The rationale behind this panel is that a more explicit engagement with other thinkers expands postphenomenology’s conceptual repertoire, enables responses to some recurring criticisms of postphenomenology, and allows an analysis of technologies beyond their direct usage.
The three papers in this panel each engage with a different thinker to expand postphenomenology. The first paper compares Ihde’s theory of technological mediation with Hegel’s theory of mutual recognition. It is argued that Hegel’s account of intersubjectivity could present a critical expansion to Ihde’s account of technology-mediated intentionality, equipping postphenomenology with a better answer to the recurrent critique of its inattention to the socio-historical dimension of technology. The second paper mobilizes the work of Jean-Paul Sartre to analyze the phenomenon of griefbots that can provide a post-mortem ‘digital self’ with which others can interact. Using Sartre’s understanding of death, the paper asks: Do griefbots create new ways of grieving and controlling one’s legacy, or do they rather make explicit existing tensions in how we approach our legacy and relate to the dead? The third paper shows how the notion of tertiary retentions, as developed in the work of Bernard Stiegler, can help postphenomenology develop an account of temporality that it currently lacks. It is argued that developing this account is especially relevant for postphenomenological analyses of digital technologies.
Presentations of the Symposium
The technical artefact mediating between Hegel and Ihde
In this presentation, I expand postphenomenology’s concept of technology-mediated intentionality, pioneered by Don Ihde, by engaging with G.W.F. Hegel’s concept of mutual recognition. Hegel’s social and political philosophy has gained increasing interest in the contemporary philosophy of technology, particularly for examining human relations with artificial intelligence and other automated technologies. Postphenomenology, in turn, is a well-established post-Heideggerian perspective that emphasizes the non-neutral agency of human-made artifacts in shaping experiences of the world and the self.
I demonstrate how both Ihde and Hegel are concerned with how “otherness” mediates humans’ subjective experiences. But while Ihde focuses on the non-human other (i.e., technical artifact), Hegel highlights the human other (i.e., self-consciousness).
For Ihde, my analysis centers on his interpretation of the Husserlian concept of intentionality and development of a variational methodology to analyze multistable visual phenomena in Experimental Phenomenology (1986). These contributions laid the groundwork for his later phenomenological descriptions of the mediating role of technical artifacts in Technics and Praxis (1979), Technology and the Lifeworld (1990), and other works.
For Hegel, I focus on his accounts of mutual recognition in the Phenomenology of Spirit (1807) and other posthumously published writings, with particular attention to the passage commonly known as the “master-slave” (or lord-bondsman) dialectic. Contrary to dominant interpretations of this passage, I argue that Hegel develops an early account of technology-mediated recognition through the slave’s activity of self-objectification (i.e., work) under coercion from the master. According to this original interpretation, self-consciousness ultimately evolves through the mediation of an opposing self-consciousness, but this experience is indispensably shaped by the jointly formed technical object.
I conclude my talk by addressing past critiques of postphenomenological research, particularly its alleged inadequacy in engaging with the political and historical dimensions of human-technology relations. I argue that many of these critiques stem from the absence of an intersubjective foundation for examining the interplay between human intentionality and technological mediation. Efforts to establish such a foundation are currently being pursued by various researchers and perspectives. A Hegelian perspective on recognition could contribute to this endeavor by elucidating how human subjectivity is transformed through technologically mediated encounters with other humans.
My life continues without me: Sartre on death and personally-curated griefbots
There are several companies offering to produce a ‘digital twin’ of you: data is collected through interviews or questionnaires and then algorithmically collated into a digital version of your personality that can interact with others through text, voice, and/or video. While these twins are marketed as having multiple uses, the salient use is to provide a post-mortem ‘digital self’ with which others can interact. These personally-curated griefbots are the focus of this paper. In keeping with the conference theme, one could hardly consider a more intimate technology than an ‘algorithmic echo’ of one’s personality with which others interact in meaningful ways after one’s death.
This paper will examine personally-curated griefbots by considering Sartre’s understanding of death as presented in Being and Nothingness, a work that he calls “An Essay in Phenomenological Ontology.” Bringing Sartre’s ontology of the self into postphenomenological analyses provides another lens through which to examine certain technologies, particularly griefbots. Sartre’s ontology of the self, centered on freedom and facticity, emphasizes how others fundamentally shape our being. Upon recognizing "the look" of another, I must acknowledge my own objectification – my freedom becomes alienated by “the other.” It is not just that I can never know what others truly think of me. That is certainly true whether one has read Sartre or not. For Sartre, what makes my objectification by others so challenging to my freedom is that I cannot deny and must accept that my ‘being-for-others’ is a constitutive aspect of my being. This permanent alienation at the heart of my existence initiates a range of irremediable tensions in human relationships: from antagonistic negotiations (at best) to interminable conflict (at worst).
While living, my ‘being-for-others’ is an ontological dimension of my existence that, while alienating, can be transcended. Given the ontological structure of the self for Sartre, however, death marks the final and complete triumph of my being-for-others over my being-for-itself. In death, I become nothing more than an object for others to determine in their stories, memories, and beliefs about me. In short, upon death, my being is exhausted in my being-for-others, which is the ultimate and final alienation of my freedom.
Personally-curated griefbots appear to offer some control over one’s complete and total objectification in death. Using Sartre's understanding of death, this paper asks: Do these technologies create novel ways of anticipating and controlling one’s legacy and of grieving for others? Or rather, do they make explicit existing tensions in how we approach our own legacy and how we relate to the dead?
Postphenomenology and temporality: digital technologies and tertiary retentions
The hypothesis of this paper is that postphenomenology lacks an account of temporality, and hence is unable to analyze how technologies mediate human-world relations over time. Although there is some work on the relations between technologies, sedimentation, and habit formation, temporality is not thematized in itself. This talk shows how the work of Bernard Stiegler can form a starting-point for articulating the temporal dimension of human-technology relations. I will specifically focus on the temporal dimension of digital media: the network of technologies that enables the transmission of digital content (e.g., social media, mobile applications).
For Stiegler, technologies essentially are mnemotechnologies that are constitutive of memory. Digital media shape what Stiegler calls tertiary retentions: they give rise to particular ways of anticipation and perception. Focusing on this notion reveals the close connection between Stiegler and Husserl’s phenomenology of time consciousness, shows how Stiegler’s analysis of technics finds its basis in phenomenology, and clarifies its relevance for understanding the temporality of technics. In this talk, I suggest that Stiegler’s approach to the temporality of technics forms an important addition to postphenomenology for two reasons: (1) it enables us to recognize that technological artefacts are often part of larger technological infrastructures that structure temporality, and (2) it helps articulate why specific technological infrastructures might have undesirable consequences, for instance by pointing to what Stiegler has called the industrialization of memory.
The talk is structured as follows. First, I argue that the issue of temporality is typically neglected in postphenomenology and show why this is a problem. Second, I will outline the basics of Stiegler’s understanding of technics, particularly focusing on how he conceptualizes the relationship between technics and memory. Third, I show how he updates Husserl’s analysis of time-consciousness through the introduction of the notion of tertiary retention. Fourth, I argue that, in updating Husserl in this way, Stiegler’s work enables a phenomenological analysis of the temporality involved in contemporary digital media. Fifth, I show how Stiegler’s analysis can augment the postphenomenological approach to analyzing human-technology relations.
Session Details:
(Symposium) Postphenomenology III: new theoretical horizons
Time: 28/June/2025: 11:50am-12:50pm · Location: Auditorium 14
Technological predictions: rethinking design through active inference and the free energy principle
University of Twente, The Netherlands
This paper explores human-technology relationships through the lens of active inference and the Free Energy Principle (FEP). Active inference, rooted in Bayesian brain theory, suggests that the brain generates predictions about sensory inputs and updates beliefs to minimize surprise or prediction errors, enabling organisms to reduce uncertainty and optimize interactions with their environment. The FEP, introduced and developed by neuroscientist Karl Friston, expands this idea, proposing that biological systems aim to minimize free energy—a measure of the discrepancy between expected and actual sensory input—to sustain homeostasis (Friston 2013; Parr et al. 2022). These frameworks can provide a novel perspective on human-technology interactions.
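The Bayesian belief update at the core of active inference can be illustrated with a minimal numerical sketch. The code below is not from the paper or the DAU tool; it is an illustrative toy model (all state counts, probabilities, and function names are assumptions) showing how a prior belief over hidden states is updated by an observation and how the surprisal -ln p(o), the quantity bounded by variational free energy, shrinks as beliefs come to match the world.

```python
import numpy as np

# Toy sketch of the Bayesian update behind active inference
# (illustrative assumptions only; not the paper's DAU tool).

def update_belief(prior, likelihood, observation):
    """Posterior p(s | o) over hidden states after seeing `observation`.

    prior:      p(s), shape (n_states,)
    likelihood: p(o | s), shape (n_obs, n_states)
    """
    posterior = likelihood[observation] * prior
    return posterior / posterior.sum()

def surprise(prior, likelihood, observation):
    """Surprisal -ln p(o): large when the observation was unexpected."""
    evidence = likelihood[observation] @ prior
    return -np.log(evidence)

# Two hidden states, two possible observations.
prior = np.array([0.5, 0.5])
likelihood = np.array([[0.9, 0.2],   # p(o=0 | s)
                       [0.1, 0.8]])  # p(o=1 | s)

posterior = update_belief(prior, likelihood, observation=0)
# Belief shifts toward the state that best explains the observation;
# repeating the update with consistent observations lowers surprisal.
```

On this reading, "minimizing surprise" is just iterated belief revision: each observation sharpens the generative model so that future sensory input is better predicted.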
At the heart of this paper is a straightforward yet powerful idea: every artifact embodies a set of predictions—not only about how its user will interact with it but also about the environment in which it operates. At the same time, the artifact reflects the expectations of its designer, who conceived and built it based on assumptions about its purpose, functionality, and intended user behavior. In this sense, artifacts act as mediators, encoding and enabling the interaction of predictive models from multiple agents: the designer, the artifact itself, and the user. This perspective positions artifacts as dynamic networks of predictions, where human-technology interactions are shaped by the continuous coordination and adaptation of these models over time.
The inquiry centers on two primary questions:
How do active inference and the FEP extend to artifacts? Artifacts can be seen as mediators of predictions through three mechanisms: precision crafting, curiosity sculpting, and prediction embedding. Precision crafting directs attention to specific environmental features, aiding users in managing their inferential load. Curiosity sculpting enables exploration and uncertainty reduction, refining user predictive models. Prediction embedding encapsulates the artifact’s own predictive capacity, shaping and reflecting its intended use. These mechanisms, though interconnected, can operate independently or progressively.
Can active inference and the FEP inform UX design? By conceptualizing the relationship between designer, artifact, and user as a triad of generative models, this approach provides tools to address challenges in UX design, such as enhancing user engagement and optimizing functionality. It offers a dynamic framework that goes beyond static models, capturing the evolving interactions within the system.
To operationalize this framework, this paper introduces the Designer-Artifact-User (DAU) tool, a software platform developed to simulate and analyze artifact-based interactions. The DAU tool leverages the formalism of active inference and the FEP to model how predictions evolve across the triad of designer, artifact, and user, facilitating the refinement of design processes. By employing advanced computational models, the tool provides a powerful resource for exploring the dynamic interactions between these entities. It is specifically designed for researchers, designers, and engineers seeking to deepen their understanding of complex socio-technical systems. The framework's practical application is illustrated through a case study of the smartphone. This analysis examines how smartphones embody and influence the expectations of both users and designers, demonstrating how active inference can enhance interactions and align design intentions with user behavior.
References
Friston, K. (2013). “Life as we know it.” Journal of the Royal Society Interface 10(86): 20130475.
Parr, T., Pezzulo, G., and Friston, K. (2022). Active Inference. Cambridge, MA: MIT Press.
Session Details:
(Papers) Prediction
Time: 28/June/2025: 2:20pm-3:45pm · Location: Auditorium 13