Human Trust and Artificial Intelligence: Is an Alignment Possible?
Angelo Tumminelli1,3, Federica Russo2,3, Calogero Caltagirone3, Dolores Sanchez3, Antonio Estella3, Livio Fenga3
1Lumsa University, Italy; 2Utrecht University; 3Solaris Project
This contribution aims to delve into the challenging relationship between Artificial Intelligence and trust, where trust is understood as one of the most crucial and defining activities in human relationships. This collaborative effort is the result of a multidisciplinary approach intertwining ethical-philosophical, legal and technical aspects. The question of whether we humans can trust machines is not new, but the debate is taking on a whole new dimension because of the newest generation of AI systems, also called generative AI, with extraordinary capabilities in terms of computational power, mostly related to the ability to generate novel outputs from a given prompt. We will not delve into the most technical issues associated with generative artificial intelligence models, but what they generate, and how they generate it, calls for a re-assessment of the notion of trust and of the corresponding characteristic of ‘trustworthiness’ that is required by an increasing number of legal acts and norms, not least the EU AI Act.
With the present contribution, we intend to explore the possibility of aligning the anthropological category of ‘trust’ with the human-machine relationship and, consequently, to clarify whether, from an ethical and scientific point of view, it is possible to extend the experience of trust to interactions between individual subjects and artificial intelligences. In the interweaving of philosophical knowledge, normative approaches and statistical models, the idea of technological trust is rendered here in all its conundrums but also in all its potential for the expression of the techno-human condition. From the point of view of this paper, we speak of “trust” only for the interpersonal relationship; in the human-AI relationship it is better to speak of “reliability”. This is to be understood both in a performative sense (an artefact is reliable if it works well) and in another sense: reliability can also be an extension of trust (trustworthiness), i.e. in using the machine or the AI I place trust not in the object itself but in the person who designed it, who is a human being. In this sense, we understand trustworthiness as an extension of interpersonal trust.
A Foul Stain? Trust in digital data reconsidered with Zuboff and Kant
Esther Oluffa Pedersen
Roskilde University, Denmark
In her seminal article “Surveillance Capitalism or Democracy?” from 2022, Shoshana Zuboff points to Google’s invention in 2001 of the extraction of data from users’ actions online as the “illegitimate, illicit and perfectly legal foundation of a new economic order.” As Google turned its search engine into a data extraction machine and other tech companies soon followed suit, a new double-faced digital economy opened in which commercial data brokering (a 389-billion-U.S.-dollar market in 2024) and state surveillance (think Snowden 2013) have ever since gone hand in hand. The extraction of innocuous data from individual users’ movements on web pages can be accumulated to create large-scale models of human behavior, to be either sold as information on consumer behavior or used by state intelligence agencies for surveillance of citizen behavior. This is the core of surveillance capitalism. It is based on data extraction as “the original sin of secret theft”. States have avoided regulating data extraction as it apparently offered a convenience to users, who were met with the ease of personalized consumption, while boosting the digital economy and providing state intelligence agencies with new and better tools to ensure security in the post-9/11 world of terrorism.
In the presentation I employ a Kantian conception of trust (see O’Neill 2002, Pedersen 2013, Myskja 2024) to argue that unregulated data extraction has corroded citizens’ possibility for moral trust online. The secret theft of user data makes up a foul stain corrupting the trustworthiness of private tech companies as well as states. The dubious trustworthiness of the providers of the digital infrastructure entails a pragmatic foundation of trust relations in the online realm, in which self-interest predominates, often in the form of economic advantage. As citizens we are recommended and even forced to undertake important tasks in our lives online, and are thus required to leave data trails and contribute “free oil” to run the motor of the digital economy. This, so I will argue in my oral presentation, leaves citizens either in a state of digital resignation (Draper and Turow 2019), in more or less frantic attempts at obfuscating our data trails (Brunton and Nissenbaum 2015), or confined to lazily trusting (Pedersen 2023) that the data extracted from our online interactions are handled in ways that are not detrimental to our autonomy.
Literature:
Brunton, Finn, and Helen Nissenbaum. Obfuscation: A User’s Guide for Privacy and Protest. MIT Press, 2015.
Draper, Nora A., and Joseph Turow. “The Corporate Cultivation of Digital Resignation.” New Media & Society 21, no. 8 (2019): 1824-1839.
Myskja, Bjørn. “Public Trust in Technology – A Moral Obligation?” Sats. Northern European Journal of Philosophy 23, no. 1 (2024): 11-128.
O’Neill, Onora. Autonomy and Trust in Bioethics. Cambridge: Cambridge University Press, 2002.
Pedersen, Esther Oluffa. “A Kantian Conception of Trust.” Sats. Northern European Journal of Philosophy 13, no. 2 (2013): 147-169.
Pedersen, Esther Oluffa. “The Obligation to be Trustworthy and the Ability to Trust: An Investigation into Kant’s Scattered Remarks on Trust.” In Perspectives on Trust in the History of Philosophy, ed. David Collins et al. (2023): 133-156.
Zuboff, Shoshana. “Surveillance Capitalism or Democracy? The Death Match of Institutional Orders and the Politics of Knowledge in Our Information Civilization.” Organization Theory (2022): 1-79.
A network approach to public trust in generative AI
Andrew McIntyre1, Federica Russo2, Lucy Conover2
1University of Amsterdam; 2Utrecht University
A foundation of European AI policy and legislation, the Trustworthy AI framework presents a holistic approach to addressing the challenges posed by AI while promoting public trust and confidence in the technology. Primarily, the framework aims to promote the trustworthiness of all the actors and processes involved in the AI lifecycle by introducing robust ethical requirements for businesses, as well as technical and non-technical strategies to ensure these requirements are implemented. While the framework succeeds in promoting trustworthy industrial practices, this paper argues that it is limited in scope and does not sufficiently account for the increasingly significant social role that AI technologies play in our daily lives. Generative AI technologies are now capable of convincingly replicating modes of human communication and can thus contribute to our collective knowledge by building new socio-political narratives, altering our interpretations of events, and shaping our values through discussion and argumentation. As such, these technologies are not simply industrial products or services to be regulated; rather, they exist as active social actors that interact with human actors in unprecedented ways. To better account for the social role of generative AI, this paper develops a network approach to public trust in AI that is based on the philosophy of information and Actor-Network Theory (ANT). Moving away from traditional notions of interpersonal trust, this paper argues that trust is established and maintained first and foremost by the material interactions between the social actors involved in a network. From this perspective, trust in generative AI depends upon a vast and precarious network of social actors that extends far beyond the AI lifecycle to include more diverse actors such as government institutions, media organizations, and public officials.
As such, promoting public trust in AI is no longer solely a matter of establishing trustworthy industrial practices, and this network approach would enable policymakers to identify novel policies and initiatives that build upon and augment the Trustworthy AI framework. Notably, this paper highlights how public trust in AI is fundamentally linked to trust in our broader information environment and is thus threatened by the current post-truth political crisis. The paper concludes that to effectively promote public trust in AI and to reap the societal benefits of this technology, we must first seek to establish a more trustworthy information environment and to restore public confidence in democratic institutions and processes.
References:
AI HLEG (2019) "Ethics Guidelines for Trustworthy AI" European Commission High-Level Expert Group on Artificial Intelligence.
Bisconti P, McIntyre A, and Russo F (2024) "Synthetic socio-technical systems: poiêsis as meaning making" Philosophy & Technology 37.
Latour B (2005) Reassembling the Social: An Introduction to Actor-Network Theory. Oxford University Press.
Russo F (2022) Techno-Scientific Practices: An Informational Approach. Rowman & Littlefield.