Personal and intimate relationships with AI: an assessment of their desirability
Philip Antoon Emiel Brey
University of Twente, The Netherlands
Artificial intelligence has reached the social integration stage: a stage at which AI systems are no longer mere tools but function as entities that engage in social relationships with humans. This development has been made possible by the rise of Large Language Models (LLMs), in particular, due to their capacity to process, generate, and interpret human language in ways that simulate meaningful interactions. Their ability to understand context, recognize emotional cues, and respond coherently enables them to participate in conversational exchanges that mimic relational dynamics, including empathy, collaboration, and trust-building. Complementary technologies that support social relationships include emotion recognition, personalization algorithms, and multimodal integration (e.g., combining text with voice or visual data). AI designed to engage in social interactions and relationships with humans may be called relational AI.
Social interactions and relationships with AI can be personal or impersonal. Impersonal interactions and relationships are instrumental in nature and focus on task performance. An example is an interaction with a chatbot aimed at finding information. Personal interactions and relationships involve direct, individualized exchanges in which the AI tailors its responses to the unique characteristics, needs, or preferences of a specific human. They have emotional or relational depth, involving simulated empathy, care, or meaningful engagement. They also involve relational continuity, in which a history is built with an individual. They frequently also involve the AI having a personality, which helps to build trust, relatability, and emotional connectedness. Relational AI that engages in personal relationships with humans may be called personalized relational AI.
Personalized relational AI is finding a place in three kinds of AI applications: those with a focus on learning, self-improvement, and professional growth, on personal assistance, and on companionship. For each type, I will assess the benefits and drawbacks of having personalized relational AI, and I will assess whether and under what conditions such AI systems are desirable.
It will be argued that personalized relational AI offers significant benefits by tailoring interactions to individual needs and preferences. It can also improve learning outcomes, offer empathetic emotional support, and combat loneliness, particularly for vulnerable populations. However, it also presents notable drawbacks, most importantly the risk of emotional attachment to artificial systems that cannot genuinely reciprocate feelings, as well as the resulting weakening of human relationships. In addition, there are major privacy risks associated with the use of these systems; their commercial nature raises the potential for manipulative interactions that prioritize profit over user well-being; and there is an issue of accountability when errors or harm occur, due to the lack of moral agency of these systems. It will be argued that the use of personalized relational AI in learning and self-improvement is defensible, but that its use in personal assistance and companionship may come at a cost that is often too high.
Hybrid family – intimate life with artificial intelligence
Miroslav Vacura
Prague University of Economics and Business, Czech Republic
While artificial intelligence is considered by some authors as just one more modern technology (after the car, radio, television, PC, Internet, mobile phone, etc.), the author argues in this paper that artificial intelligence using large language models (LLMs) represents a fundamental change even in the most intimate spheres of society. This new technology will not be just another human tool, but will enter the social, corporate, familial and political spheres, resulting in a hybrid society in which, for the first time in history, the actors will not only be humans, but also other intelligent entities.
Against this background, the paper focuses on the integration of artificial intelligence not only in society in general but also in family life. What are the ethical and general philosophical issues related to artificial intelligence in the role of intimate partner, friend, caregiver, or even, in the future, surrogate parent? What requirements must such AI meet? Are there differences between a purely virtual AI and an AI that is embodied and, as an intelligent robot, has a physical body and a presence in physical reality that gives it the ability to directly influence that reality?
As part of the exploration of AI integration, the author addresses the transformative nature of the inclusion of these new elements in family life. The inclusion of AI-enabled systems in the family context has the potential to redefine intimacy and emotional connection; the result is a hybrid family whose internal dynamics have a different structure from those of the traditional family. Are the emotions manifested by AI real or merely feigned? If they are not real, how does this affect the intimacy of family life? What are the implications for the ethical responsibility of human family members if artificial intelligence is a participant in the structure of human interaction, in the raising of children, and in the care of elderly grandparents?
Philosophical discourse and scientific research will thus have to grapple with the challenge of how to make these intimate interactions between humans and AI enrich, rather than dilute, human relationships and family emotional dynamics, and design sufficient safeguards that will be necessary to responsibly manage these unprecedented dynamics.
One such safeguard, for example, is that when designing AI-equipped systems to actively participate in family interactions, emphasis should also be placed on the emotional attunement that their actions exhibit. A system that exhibits symptoms resembling illness, fatigue, or depression may induce a negative or depressive mood when interacting socially with a human, for example in the role of a co-worker, friend, or caregiver. This social transmission of depression is referred to as emotional contagion or social contagion.
At the same time, the transformation of society as a whole needs to be considered: if AI is present in all of society's interactions and processes, how will this affect questions of equity and justice, especially when not everyone will have the same access to AI technology?
(Don’t) come closer: Excentric design for intimate technologies
Esther L.O. Keymolen
Tilburg University, The Netherlands
Intimate technologies are “in us, between us, about us and just like us” (van Est 2014, p.10). The convergence of various technological and scientific disciplines has made technologies “smaller, smarter and more personalized” (ibid, p.12). Electronic implants, high-tech computers worn as watches, and AI applications that mediate communication raise profound questions.
If technologies fundamentally serve to bridge a gap, an "ontological distance" between ourselves, others, and the world around us (author), can we say that intimate technologies go a step further? Instead of bridging, do they start dissolving the distance between humans, technology, and the world? And if so, could intimate technologies also come too close, challenging what it means to be human?
In response to the rise of intimate technologies, scholars have revisited key concepts and frameworks in the domain of philosophy of technology. Verbeek (2015) extends postphenomenological theory (Ihde 1990) to address their new mediating roles, introducing human-technology relations of “immersion” and “augmentation” (Verbeek 2015, p.218-219). Immersion refers to technologies like ambient and smart systems that merge with their environments and interact proactively with users, while augmentation describes technologies that overlay additional information, creating a dual relationship with the world (e.g., augmented reality). De Mul (2003) sees in the impact of intimate technologies such as virtual reality and robotic bodies sufficient reason to rethink the human position, proposing a “poly-eccentric” understanding of being-in-the-world.
In this paper, I argue that most intimate technologies remain tele-technologies (Weibel 1992): tools that bridge the hiatus central to human existence (author 2016). Even as phenomenological boundaries between humans, technology, and the world blur, this does not necessarily signify a shift in human ontology.
However, some technologies may come too close, threatening the openness and variability of human life. Drawing on Helmuth Plessner’s concept of humans as “excentric,” “playful,” and “artificial by nature” (2019 [1928]), I propose excentric design strategies for intimate technologies. These strategies aim to preserve ambiguity, malleability, and change; qualities essential for a full and meaningful life in intimate technological times.
References
Est, R. van, with assistance of V. Rerimassie, I. van Keulen & G. Dorren. 2014. Intimate technology: The battle for our body and behaviour. (Rathenau Instituut: The Hague).
Ihde, Don. 1990. Technology and the lifeworld: From garden to earth. (Indiana University Press: Bloomington).
Plessner, Helmuth. 2019 [1928]. Levels of Organic Life and the Human (Fordham University Press).
Verbeek, P.-P. 2015. 'Designing the public sphere: Information technologies and the politics of mediation.' in L. Floridi (ed.), The Onlife Manifesto. Being human in a hyperconnected era (Springer: Cham).
Weibel, P. 1992. 'New space in the electronic age.' in E. Bolle (ed.), Book for the unstable media (V2: Den Bosch).