“Who” is silenced when AI does the talking? Philosophical implications of using LLMs in relational settings
Tara Miranovic, Katleen Gabriels
Maastricht University, The Netherlands
Generative Artificial Intelligence (GenAI) is increasingly stepping into the most intimate areas of our relationships. Personalised GPTs, such as the Breakup Text Assistant(1) or the Wedding Vows GPT(2), highlight how GenAI is becoming a proxy for navigating complex emotional terrain. The use of Large Language Models (LLMs) ranges from machine translation and brainstorming to the outsourcing of cognitive tasks, thereby significantly reducing the investment of personal resources such as time, emotions, energy, and mental effort. Drawing on Hannah Arendt’s interrelated concepts, including the interplays between 1) who (uniqueness) and what (external qualities), 2) speech and action, and 3) thinking and morality, we examine the epistemic and moral implications of (partly) delegating complex intellectual and emotional tasks to GenAI.
As Arendt (1958/1998) writes, our who “can be hidden only in complete silence and perfect passivity” (p. 179), making reliance on tools like the Breakup Text Assistant particularly troubling. By replacing the vulnerable process of revealing oneself with automated outputs, these tools risk silencing the who, particularly in emotional communication. Arendt distinguishes the who (one’s uniqueness, revealed through one’s actions and words) from the what, which encompasses external qualities, talents, and shortcomings that can be intentionally displayed or concealed. Human plurality, Arendt argues, is defined by equality (enabling mutual understanding) and distinction (the uniqueness of each individual). Through speech and action, individuals reveal their who, not merely their what, thereby disclosing their distinctiveness and enabling meaningful relationships. Speech articulates the meaning of action, while action gives substance to speech, making both essential for expressing individuality. Without speech, action becomes incoherent and loses its revelatory quality; without action, speech may lack integrity and sincerity. Even though some people feel heard by AI chatbots (see Yin et al., 2024), delegating emotional expression to GenAI risks reducing communication to impersonal outputs that fail to reflect the individuality inherent in human speech and action.
GenAI, designed to simplify and optimise mental effort, including thinking, risks hindering rather than supporting these fundamental human activities. Philosophy, and thinking more broadly, does not seek to ease mental effort. Following Arendt (1978; 2003), thinking is an open-ended process aimed at understanding and ascribing meaning, rather than at accumulating knowledge. It requires constant practice and is integral to moral judgement and critical self-awareness. For Arendt, morality is the internal conversation that I have with myself (the two-in-one): “Morality concerns the individual in his singularity” (Arendt, 2003, p. 97). Thinking fosters resistance to oversimplification and can prevent individuals from succumbing to conformity. It demands engagement with complexity, which is essential for moral responsibility. If a GenAI tool is involved, it should make thinking harder, not easier, and offer ‘resistance’ and complexity rather than calculate an answer. There are some interesting attempts on the market, such as MAGMA learning(3), developed to stimulate learning and creativity.
In the presentation, we will further elaborate on Arendt’s interrelated concepts and connect them to present-day discourse on AI, including the difference between (human) self-expression and (AI) mechanical creation (Vallor, 2024), and what we risk when companies keep seducing us with the latter(4).
List of references
Arendt, H. (1998). The Human Condition. The University of Chicago Press. (Original work published 1958)
Arendt, H. (1978). The Life of the Mind: Volume One, Thinking. Harcourt Brace Jovanovich.
Arendt, H. (2003). Responsibility and Judgment (J. Kohn, Ed.). Schocken Books.
Vallor, S. (2024). The AI Mirror: How to Reclaim Our Humanity in the Age of Machine Thinking. Oxford University Press.
Yin, Y., Jia, N., & Wakslak, C. J. (2024). AI can help people feel heard, but an AI label diminishes this impact. Proceedings of the National Academy of Sciences, 121(14). https://doi.org/10.1073/pnas.2319112121
Links
(1) https://galaxy.ai/ai-breakup-text-generator
(2) https://chatgpt.com/g/g-ZcFaw73hO-wedding-vows
(3) https://www.magmalearning.com/home
(4) For instance, this advertisement promotes the outsourcing of intellectual tasks to Apple devices (Apple Intelligence), https://www.youtube.com/watch?v=3m0MoYKwVTM
Connecting Dots: Political and Ethical Considerations on the Centralization of Knowledge and Information in Data Platforms and LLMs
Anne-Marie McManus
Forum Transregionale Studien, Germany
The paper presents the case study of a data platform and planned LLM for The Lab for the Study of Mass Violence in Syria (“The Lab”). This research cluster, of which the author is a member, is mapping relationships between previously disconnected datasets on violence committed during the Syrian War (2011-2024). With large quantities of information ranging from sensitive testimonies by former prisoners and massacre survivors to publicly available GIS imagery of property damage, this case study raises exceptionally stark ethical questions around privacy; the impacts of digitally driven epistemologies on societies; and the risks and possibilities of technological citizenship in the aftermath of displacement and violence. Without downplaying the specificities of the Syrian case, these questions have implications for global debates on ethics and technology, which have to date been conducted primarily in relation to the Global North. For reasons including climate change, the rise of the far right, and ICTs themselves, wealthier societies are not insulated from civil strife, social polarization, and/or the (de-)siloing of knowledge and information. The Lab is, moreover, based in Germany but directs its outputs to both Middle Eastern societies and diasporic communities in Europe.
The scandal of the Netflix Prize epitomized emergent ethical risks in technologies that connect and combine even anonymized datasets (Kearns & Roth, 2021). When these risks are understood exclusively in terms of individual identification, it seems sufficient to support strategies like differential privacy (Dwork & Roth, 2014). Yet a defense of individual privacy alone does not help us evaluate the sociopolitical benefits and risks of traditional and AI-driven ICTs that combine, and democratize access to tools that combine, knowledge bases that were previously fragmented, restricted, or even undocumented. On the one hand, these tools offer unprecedented possibilities for technological citizenship, democratizing memory culture and promoting transitional justice. On the other, they pose uncharted harms, including social polarization and political manipulation. Expanding on Nissenbaum’s concept of contextual integrity and Huffer’s call for a political philosophy of technology, the paper explores these epistemological developments through The Lab case study. It addresses key themes of:
• scale (i.e., the centralization of large quantities of sensitive data);
• analysis and combination (i.e., the politics and ethics of new analytical possibilities offered notably by LLMs; Floridi, 2014);
• and access (e.g., how do we update the ethics of informed consent in light of ICT-driven epistemologies?).
In its conclusion, the paper evaluates the frameworks under which Syrian stakeholders – at an historical moment of sociopolitical transition – might develop new models of technological citizenship through the shaping and oversight of knowledge production with ICTs.
LLMs and Testimonial Injustice
William James Victor Gopal
University of Glasgow, United Kingdom
Recently, testimonial injustice (TI) has been applied to AI systems, such as decision-making-support systems within healthcare (Byrnes, 2023; Proost, 2023; Walmsley, 2023), the COMPAS recidivism algorithm (Symons, 2022), and generative AI models (Kay, 2024). Extant accounts identify the morally problematic epistemic issue as arising when the user of an AI system mistakenly assumes that the system is epistemically advantaged in contrast to another human, such that the human is assigned a credibility deficit; call these Mistaken Assumption Accounts. There are two species of Mistaken Assumption Accounts: Specified Mistaken Belief and Unspecified Mistaken Belief accounts. Specified Mistaken Belief is as follows:
Ceteris paribus, in HAIH, an instance of algorithmic TI is inflicted on a human speaker, S, iff:
(i) A human user/hearer, H, mistakenly assigns an AI system, C, a credibility excess, thereby deflating the credibility of S such that S is assigned a credibility deficit [CONTRASTIVE CREDIBILITY DEFICIT]
(ii) CONTRASTIVE CREDIBILITY DEFICIT iff H mistakenly takes C to be in an epistemic position superior to that of a speaker S, such that the output of C that-p is in better epistemic standing than S’s testimony just because that-p is the result of an algorithmic process [SPECIFIED MISTAKEN BELIEF], and
(iii) SPECIFIED MISTAKEN BELIEF for no other reason than C “being a computer” vs S “being human” [IDENTITY PREJUDICE].
Unspecified Mistaken Belief Accounts do not specify the content of the user’s mistaken assumption leading to CONTRASTIVE CREDIBILITY DEFICIT, only that a hearer incorrectly identifies the epistemic position of an AI system. Proponents argue that these beliefs are mistaken by appealing to (i) the literature on data bias, which shows that an AI system will not necessarily be less biased or prone to error than human reasoning, or (ii) the literature on opacity, which shows that the trust placed in such systems is unjustified.
In this paper, I focus on how algorithmic TI emerges in quotidian uses of LLMs, such as ChatGPT. In the pars destruens, I offer a series of counterexamples to Mistaken Assumption Accounts, highlighting their extensional inadequacy. Then, I argue that the picture of identity prejudice in Specified Mistaken Belief is overly broad insofar as the relevant social identities for prejudicial credibility assessments are “being a human” and “being a computer”. In the pars construens, I provide an alternative which fares better: Undue Acknowledgement as Testifier. I argue that (i) when an LLM is taken to be a genuine member of minimally equal standing to humans in an epistemic community, an LLM is assigned a credibility excess such that a human suffers a credibility deficit, (ii) these credibility assessments are driven by implicit comparative credibility assessments based on the anthropomorphised “identity” of an LLM, and (iii) identity prejudice influences these credibility assessments. To achieve this, I draw upon work in HCI and feminist STS to show how user-experience design and the social imaginary surrounding AI contribute to TI. Consequently, this paper shifts the current focus in the literature from a discussion of how the proposed conditions for TI emerge from issues of bias within training data or the opacity of AI systems to how interactive relationships between humans and LLMs enable TI.