Emotional expressions, informational opacity, and technology: On the necessity of overt emotional expressions in social life
Alexandra Prégent
Leiden University, NL
The rise of sociotechnical systems and practices has impacted our lives by enmeshing people and technology in webs of information, reshaping social dynamics and expectations. A surge in affective computing and emotion recognition technology (ERT) in the last decade has exposed a general eagerness to understand and access the ‘inner affective life’ of others. While previous criticism and regulation have focused on unimodal ERTs using facial features, multimodal ERTs have shown surprisingly high levels of accuracy in the last few years, putting their development back on the radar of philosophical analysis of new and emerging technologies. At the intersection of neurorights and public privacy issues, emotional expressions are a particularly challenging and interesting topic for research on sociotechnical systems and practices.
This paper attempts to both map and forecast the social implications of the use of what I call ‘ideal’ ERTs, with a particular focus on privacy. ‘Ideal’ ERTs are emotion recognition technologies that have overcome the technical limitations of their former versions. As such, their level of accuracy is satisfactory, and they can distinguish between a variety of different types of emotional expression (EE), as well as indicate whether the expression is involuntary or intentional. Emotional expressions are a well-known source of information, mostly studied in nonverbal communication theories.
Drawing on empirical research, I argue that EEs are a reliable source of social information that sustains and regulates the social fabric of life (van Kleef 2016). Emotional expressions often tell us a lot about others, allowing us not only to understand their intentions, motivations and opinions, but also to predict and anticipate their behaviours. Affective communication channels are used in social interactions and can convey more or less information to the perceiver, depending on the context and her background knowledge of the emoter (Scarantino 2017, 2019). EEs are the main carriers of information in affective communication channels. Philosophers and social psychologists usually distinguish between two different types of EEs: involuntary EEs (e.g. blushing, sweating, shaking, widening of the eyes, accelerated breathing, etc.) and intentional EEs, the latter being emotional expressions that we produce voluntarily. What particularly interests me in this paper is the role of intentional EEs in the regulation of social life. It seems that both intentional expressions and perceptions of intentional EEs facilitate interactions and play a key role in the construction and flourishing of social relationships.
Given that 1) intentional EEs convey relevant social information that contributes to the regulation of social life, and that 2) the ‘ideal’ ERT can discriminate between information conveyed by involuntary EEs and information conveyed by intentional EEs, I argue that ERTs can threaten affective communication channels by reducing the informational opacity that is naturally present in them. While informational opacity can be the cause of miscommunication and other types of communication failures, I argue that the presence of some degree of informational opacity is a necessary condition for the success of intentional EEs, which in turn are necessary for the regulation of social life. Thus, by reducing informational opacity, ERTs can disrupt and prevent the transmission of information carried by intentional EEs in affective communication channels. I show how, contrary to our intuitions, reducing informational opacity may not be desirable for communication. The successful communication of intentional EEs over involuntary EEs, I postulate, is fundamental to keeping social friction(s) at reasonable levels, as this type of communication is a pillar of social norms and behaviours that help us to successfully navigate and regulate our interactions.
I conclude with a proposal for the future regulation of ‘ideal’ ERTs and a critical rationale for why current regulatory approaches, largely driven by the EU AI Act, may prove deleterious in the long run.
(Post)emotions in care: AI, mechanization, and emotional practices in the age of efficiency
Eliana Bergamin
Erasmus University Rotterdam, The Netherlands
The increasing integration of artificial intelligence (AI) technologies into healthcare systems is reshaping not only the delivery of care but also the values underpinning medical and care practices. Longstanding principles that prioritize interpersonal relationships, human touch, and emotional practices in caregiving are increasingly overshadowed by values such as efficiency, quantification, and algorithmic logic. This shift raises critical questions about what is changing and what is lost in the pursuit of technological innovation, particularly in contexts where emotional resonance and relational care are central to the well-being of patients and medical staff. To explore these tensions, this paper draws on the work of Jacques Ellul and Stjepan Meštrović, whose critiques of technological determinism and the prioritization of efficiency over emotional and moral frameworks can provide insights into the implications of AI's integration into emotional and care practices.
In his seminal work The Technological Society, Ellul highlights how the relentless pursuit of the value of efficiency pervades not only technological domains but every aspect of human existence (Ellul, 1967). His insight into the mechanization of societal paradigms underscores how the drive for efficiency reshapes human values and experiences, conditioning human behavior to conform to technological systems rather than vice versa. Ellul's brief mention of emotions’ instrumentalization in favor of efficiency finds expansion in Stjepan Meštrović's exploration of the transformation of emotions in contemporary society (Meštrović & Riesman, 1997). Meštrović illustrates how the rationalized, mechanized way of thinking of postmodern society redefines human emotions, reducing them to manufactured emotional attachments or ‘postemotions’, detached from their original essence. Meštrović affirms that, in today’s society, emotions have not disappeared, but rather have been transformed into vicarious emotions. Ghosts of their original selves, they are used as tools to serve the efficiency and rationality-driven purposes of an increasingly mechanized world.
In navigating Ellul's and Meštrović's insights, this paper seeks to delve deeper into the profound impact of technological mechanization on human emotions – focusing specifically on the case of Artificial Intelligence in healthcare practices – shining a light on how the pursuit of efficiency molds emotional experiences within societal, material, and experiential frameworks. This exploration aims to examine the intellectualization and abstraction of emotions in the postemotional era, where they are portrayed as tools to be manipulated within the efficiency-driven narrative of contemporary technological society. As AI works by means of labelling and generalization, the nuanced emotional world that humans experience is reduced to datasets and generalized classifications (Habehh & Gohel, 2021).
Building on the perspectives of Meštrović and Ellul, this research examines how AI technologies can materially influence emotional practices in care settings. By exploring this reconfiguration, it highlights how the value of efficiency, traditionally associated with industrial and administrative domains, is increasingly pervading areas of human experience—such as care—where it was previously peripheral (Alexander, 2008). This approach provides a lens to understand the material and procedural changes in emotional practices, as they intersect with technological systems, while also offering insights into the ethical and societal dimensions of these transformations, particularly as they relate to the emergence of what might be termed an efficiency-driven, "postemotional" landscape.
Affective injustice and affective artificial intelligence
Kris Goffin¹, Alfred Archer²
¹Maastricht University; ²Tilburg University
Affective Artificial Intelligence is on the rise. Some AI applications, such as HireVue, Affectiva, Clearview, Seeing AI and Azure, are programmed to recognize your emotions. For example, if you scan your face, emotion recognition software provides an emotion label, such as anger. A more subtle form of affective AI consists of applications programmed to be empathetic. For example, an AI chatbot tries to react to your input by considering your emotions and guessing your emotional state so that it can respond accurately.
Affective AI is already being used in a range of contexts. Emotion recognition software has been developed to “teach” autistic people to recognize their own and other people’s emotions. Similarly, therapy bots are used to stand in for therapists and help users analyze and regulate their emotions. Companion bots help people to meet affective needs that are otherwise unfulfilled, and grief bots help people come to terms with the loss of loved ones.
Existing work has identified one or more of these uses of affective AI as a form of affective scaffolding (Fabry & Alfano 2024) or affective niche construction (Krueger & Osler 2019). Building on this work, we will argue that affective AI can serve as a form of affective and cognitive scaffolding that helps users to:
- Recognize one’s own and other people’s emotions
- Regulate one’s own and other people’s emotions
However, while affective AI can be a useful tool for achieving these purposes, we will argue that there are two major ethical risks with this kind of application. The first is the risk of alienation from our emotions. By offloading emotional labor to AI, one loses an essential aspect of the human experience: understanding one's emotions. Emotional expressions are more than just unambiguous signals of internal states. When one expresses emotions, one also tries to understand them, which is a way of interpreting and constructing a sense of who we are and what we value (Brewer 2011). Interpreting the emotions of other human beings is also an essential way in which we relate to other people and develop a shared sense of meaning with them (Campbell 1997). By offloading the interpretation and regulation of emotion to AI, we run the risk of alienating ourselves from this key meaning-making process.
The second risk is that of emotional imperialism, which occurs when a dominant group imposes their emotional norms and practices on a marginalized group, whilst marking out the emotional norms of the marginalized as deviant and inferior (Archer & Matheson 2022; 2023). Rather than helping autistic people interpret emotions in ways that fit with their own emotional norms, needs and desires, there is good reason to worry that affective AI will actually serve to breed conformity and to encourage autistic people to conform to the emotional norms of non-autistic people. While particularly clear in this case, we will argue that this worry is one that applies more generally to affective AI systems.