AI outsourcing and the value of autonomy
Eleonora Catena
Centre for Philosophy and AI Research (PAIR), Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Germany
Our relations with so-called “intimate technologies” bring about both opportunities and risks for personal and social life (van Est et al., 2014; van Riemsdijk, 2018). This paper contributes to this debate by addressing how the integration of AI systems into daily life (e.g., recommender systems, chatbots, artificial assistants) affects the value of human autonomy.
While the relevant literature (Laitinen & Sahlgren, 2021; Bonicalzi et al., 2023; Prunkl, 2024) has mostly focused on the threats AI systems pose to the exercise of personal autonomy, I focus on their impact on, and the consequent threats to, its value. More precisely, I argue that AI outsourcing, i.e., offloading decisions and actions to AI systems (Danaher, 2018), challenges and changes the intrinsic and instrumental value of personal autonomy.
By definition, AI outsourcing implies offloading control over some processes (Process Control) to achieve certain outcomes (Outcome Control) (see also Di Nucci, 2020, and Constantinescu, 2024, on the “control paradox”). I show that Process Control and Outcome Control map onto two main components of personal autonomy: on the one hand, the deliberative-decisional capacity of forming one’s motives, based on internal properties and processes (Process Autonomy); on the other, the condition of enacting one’s motives and goals, also based on external factors (Outcome Autonomy). Given this relation between autonomy and control, trading Process Control for Outcome Control corresponds to a reduction of Process Autonomy in favour of Outcome Autonomy.
I make this case for three emblematic examples of AI integration and outsourcing in daily life: driving automation, algorithmic recommendations, and co-creation with generative models. All these cases entail a partial or full transfer of control over a process (e.g., driving decision-making, information filtering, creative production) to the AI system in exchange for more or better opportunities to realize one’s goals (e.g., mobility options, personal decisions, creative content). I argue that this trade-off has implications for the evaluation of personal autonomy. On the one hand, it implies a reprioritization of Outcome Autonomy over Process Autonomy. On the other hand, it challenges the (intrinsic and instrumental) value of Process Autonomy: being in control of one’s processes no longer seems relevant as long as one is still, or better, able to realize one’s goals or other goods.
I conclude with the ethical implications of undervaluing Process Autonomy, such as increased exposure to, and acceptance of, manipulation. The disruption of personal autonomy, understood especially as control over one’s own processes, is thus a form of vulnerability entailed by the intimate integration of, and outsourcing to, AI systems.
References
van Est, R., Rerimassie, V., van Keulen, I., & Dorren, G. (2014). Intimate technology: The battle for our body and behaviour. Rathenau Institute.
van Riemsdijk, M. B. (2018). Intimate computing. Abstract for the philosophy conference “Dimensions of Vulnerability”.
Bonicalzi, S., De Caro, M., & Giovanola, B. (2023). Artificial intelligence and autonomy: On the ethical dimension of recommender systems. Topoi, 42, 819–832. https://doi.org/10.1007/s11245-023-09922-5
Laitinen, A., & Sahlgren, O. (2021). AI systems and respect for human autonomy. Frontiers in Artificial Intelligence, 4. https://doi.org/10.3389/frai.2021.705164
Prunkl, C. (2024). Human autonomy at risk? An analysis of the challenges from AI. Minds and Machines, 34, 26. https://doi.org/10.1007/s11023-024-09665-1
Danaher, J. (2018). Toward an ethics of AI assistants: An initial framework. Philosophy & Technology, 31, 629–653. https://doi.org/10.1007/s13347-018-0317-3
Di Nucci, E. (2020). The control paradox: From AI to populism. Rowman & Littlefield.
Constantinescu, M. (2024, November 15). Generative AI avatars and responsibility gaps. Uppsala Vienna AI Colloquium.
Analysis of the problem of human autonomy from the perspective of authenticity ethics
Guihong Zhang
University of Science and Technology of China, People's Republic of China
With the rapid development of artificial intelligence, the relationship between technology and human autonomy has become an important topic in philosophy. This paper explores the multidimensional impact of artificial intelligence on human autonomy from the perspective of authenticity, aiming to reveal the challenges that AI technology poses to individual decision-making freedom and the process of will formation. The study considers the two major approaches to authenticity, internalism and externalism, analyzes the core elements of authenticity, and extracts four dimensions: critical reflection, independent decision-making ability, informed and adequate choice, and a supportive environment. Based on this analytical framework, the paper systematically examines the potential impact of AI technology on each of these four dimensions: through recommendation algorithms, the reshaping of decision-making architectures, "black box" opacity, and social bias, AI technology may seriously threaten the authenticity of human autonomy. In response to these challenges, the study develops a multi-level safeguard framework spanning the levels of design, user, and supervision. At the design level, it proposes a system architecture that promotes critical thinking by introducing evaluation tools such as the METUX model; at the user level, it recommends enhancing system transparency and protecting users' right to informed decision-making; at the social level, it advocates establishing systematic evaluation standards and information disclosure mechanisms to create an institutional environment conducive to the development of individual autonomy. The study thereby provides a more comprehensive and in-depth approach to understanding human autonomy in the era of artificial intelligence.
Driving for Values: Exploring the experience of autonomy with speculative design
Kathrin Bednar1, Julia Hermann2
1Eindhoven University of Technology; 2University of Twente
Newly proposed technological solutions for societal problems may face the challenge of not being accepted by users or not being morally acceptable. A key concept that can help ensure user acceptance of ethically driven technology design is the consideration of users' autonomy, i.e., allowing users to control their interactions with the system, to understand the implications of their choices, and to make decisions that align with their own values and preferences. In this paper, we explore how users' value experiences can be collected and used in approaches that integrate morally important values in design, such as value sensitive design (VSD; Friedman & Hendry, 2019) or design for values (van den Hoven et al., 2015). Using a research-through-design approach (Stappers & Giaccardi, 2017), we investigated a smart system that suggests navigation routes based on collective values such as safety, sustainability, and economic flourishing (the so-called Driving for Values system).
We focus on the experience of autonomy, as there may be concerns that such a system manipulates users to take alternative routes. A system that respects individual autonomy is more likely to be adopted by users and will be seen as fairer and thus more acceptable from a broader societal and ethical standpoint. We understand autonomy as involving two main components: i) the ability to freely choose among different options, and ii) the availability of meaningful options, i.e., options that enable the agent to decide and act on the basis of their own reasoned values and commitments (Blöser et al., 2010; Vugts et al., 2020).
We conducted 18 semi-structured interviews to collect insights into participants’ experiences and concerns, making use of speculative design to elicit emotional responses. Emotions play an important role in value experiences, which can be understood as experiences of what is good and desirable, or bad and undesirable, in relation to specific situations, actions, or objects. During the interviews, we showed two early system versions to each participant and asked participants to click through them and think out loud. When asking about autonomy, we presented participants with various definitions of autonomy and autonomy statements to explore how well they could relate to them and connect different statements to different system versions.
We found that a transparent and trustworthy system that offers a meaningful choice between value-driven route options enhances drivers’ acceptance and personal sense of autonomy. As anticipated, the interaction with the speculative design elicited emotional reactions such as delight, positive excitement, and irritation in participants, which can be interpreted as indicators of an autonomy experience. While most participants found it rather difficult to express what they take autonomy to mean when asked directly, it was easy for them to connect the presented autonomy statements with different system versions. This exercise revealed that participants preferred system versions that they felt enhanced their autonomy and that the availability of meaningful options increased their feeling of being autonomous.
REFERENCES
Blöser, C., Schöpf, A., & Willaschek, M. (2010). Autonomy, experience, and reflection: On a neglected aspect of personal autonomy. Ethical Theory and Moral Practice, 13, 239–253. https://doi.org/10.1007/s10677-009-9205-3
Friedman, B., & Hendry, D. G. (2019). Value Sensitive Design: Shaping technology with moral imagination. MIT Press.
Stappers, P. J., & Giaccardi, E. (2017). Research through design. In The encyclopedia of human-computer interaction (pp. 1–94). The Interaction Design Foundation.
van den Hoven, J., Vermaas, P. E., & van de Poel, I. (Eds.). (2015). Handbook of ethics, values, and technological design: Sources, theory, values and application domains. Springer Science+Business Media. https://doi.org/10.1007/978-94-007-6970-0
Vugts, A., Van Den Hoven, M., De Vet, E., & Verweij, M. (2020). How autonomy is understood in discussions on the ethics of nudging. Behavioural Public Policy, 4(1), 108–123. https://doi.org/10.1017/bpp.2018.5