Phronesis for AI systems: conceptual foundations
Roman Krzanowski, Paweł Polak
Pontifical University of John Paul II in Krakow, Poland
Current ethical frameworks for AI agents, robotics, and software systems (collectively known as Artificial Moral Agents, or AMAs) are insufficient to ensure their safety and alignment with human values. In response, we propose integrating Aristotelian phronesis—practical wisdom—into AI systems to provide the ethical guidance necessary for them to function safely in human environments. This paper explores how to develop and implement phronesis in AI systems, ensuring they make decisions that align with human well-being and ethical values.
We argue that one critical difference between AI systems and humans lies in the method of ethical decision-making, or the "inference/ascent method." This method involves ascending to an ethical decision based on the facts at hand, the objectives of the action, and an understanding of the ethical implications of past experiences and their outcomes. To enhance the moral capacities of AMAs, a new approach to AI ethics is needed: a paradigm shift, rather than simply a refinement of existing models.
We present four key arguments for designing AMAs with phronetic principles:
1. Human alignment: To be human-aligned or human-safe at an ethical level, AI systems must share the same moral grounding, ontology, and worldview as humans. In other words, AI systems must possess specific human-like capacities to be truly human-centric.
2. Limitations of current models: Existing ethical models for AI systems do not meet these requirements.
3. Phronetic advantage: AI systems that incorporate Aristotelian phronesis will be safer and more ethically grounded than those using other proposed frameworks.
4. Simulating phronesis: We claim that key elements of phronesis can be effectively simulated in AMA systems, contributing to improved ethical decision-making.
We also propose a design for a phronetic Artificial Moral Agent. This design simplifies the original concept of phronesis into a "weak" phronetic system while preserving its core features. In this system, decisions are made by evaluating past use cases (UCs) and comparing them to the current situation. The decision is based on selecting the most "ethically proximal" UC, defined by its outcome. However, determining what constitutes an "ethically proximal case" is a significant challenge that requires a clear and operationalizable definition. The phronetic ethical decision-making process, termed phronetic ascent, focuses on evaluating past decisions and real-world cases rather than relying on general principles, abstract norms, or algorithmic procedures. This model emphasizes context-specific ethical reasoning, where each decision is grounded in the unique characteristics of the situation rather than in pre-existing rules.
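For illustration only, the case-selection step of such a weak phronetic system could be sketched as follows. This is a minimal sketch under strong simplifying assumptions: situations are encoded as feature vectors, and "ethical proximity" is reduced to a similarity measure weighted by the ethical quality of a case's outcome. The names UseCase, ethical_proximity, and phronetic_ascent are our placeholders, not the authors' implementation.

from dataclasses import dataclass

@dataclass
class UseCase:
    features: list[float]   # simplified encoding of the past situation
    action: str             # action taken in that case
    outcome_score: float    # ethical evaluation of the outcome, in [0, 1]

def ethical_proximity(case: UseCase, situation: list[float]) -> float:
    # One possible operationalization: similarity decays with Euclidean distance.
    dist = sum((a - b) ** 2 for a, b in zip(case.features, situation)) ** 0.5
    return 1.0 / (1.0 + dist)

def phronetic_ascent(case_base: list[UseCase], situation: list[float]) -> str:
    # Select the most "ethically proximal" past case, weighting situational
    # similarity by how well that case's outcome was evaluated, and reuse its action.
    best = max(case_base, key=lambda c: ethical_proximity(c, situation) * c.outcome_score)
    return best.action

# Example: the nearer past case with the better outcome drives the decision.
cases = [
    UseCase(features=[0.9, 0.1], action="intervene", outcome_score=0.8),
    UseCase(features=[0.2, 0.7], action="defer", outcome_score=0.6),
]
print(phronetic_ascent(cases, situation=[0.8, 0.2]))  # -> intervene

Any workable definition of ethical_proximity would, of course, need to capture the thick, context-specific features of situations that the model emphasizes, not merely geometric distance.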
Finally, we propose formalizing phronetic principles within AI systems using the framework of situation theory (Devlin, 1991). Situation theory models the interaction between cognitive agents, their environment, and the flow of information. We outline how this theory can be used to represent phronetic ethics in AI systems.
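As a hedged illustration of what such a representation could look like (the relation and type names below are our placeholders, not drawn from Devlin), situation theory encodes items of information as infons, lets situations support infons, and links situation types by constraints:

\[
  \sigma \;=\; \langle\!\langle\, \mathit{harms},\; a,\; b,\; l,\; t;\; 1 \,\rangle\!\rangle
  \qquad \text{(infon: the relation \textit{harms} holds of $a$ and $b$ at location $l$ and time $t$)}
\]
\[
  s \models \sigma
  \qquad \text{(the situation $s$ supports the infon $\sigma$)}
\]
\[
  S_{\mathrm{harm}} \Rightarrow S_{\mathrm{avoid}}
  \qquad \text{(constraint: situations of type $S_{\mathrm{harm}}$ call for situations of type $S_{\mathrm{avoid}}$)}
\]

On such a reading, a phronetic AMA could be understood as an agent that becomes attuned to ethically relevant constraints through its accumulated case base rather than through explicitly programmed rules.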
One possible definition of technology - an approach from Don Ihde
Yingke Wang
Nagoya University, Japan
The philosopher of technology Don Ihde proposes four classifications of the human-technology relationship with respect to technological artifacts in his theory of technological intentionality. In this paper, I aim to provide a concise summary of Ihde's approach to defining technological artifacts, with the objective of expanding the understanding of technology from a phenomenological perspective. To this end, the paper first offers a brief overview of Ihde's ideas concerning the definition of technology. These ideas comprise four types of human-technology interaction (the embodiment, hermeneutic, alterity, and background relationships) and two types of technological intentionality (simple and complex intentionality). The distinction between the two kinds of technological intentionality depends on the degree of intentional integration in the context of technology use, whereas the distinction between the four human-technology relationships is determined by the position technology itself occupies as humans attempt to comprehend the world through technological means.

Secondly, the paper proposes three innovative definitions of technological artifacts based on the ideas of human-technology relations and technological intentionality: technical objects, technified objects, and sheer technology objects. The technical object, in the sense of traditional instrumentalism, is characterized by simplicity, transparency, and inter-referentiality. Technified objects, by contrast, are distinguished by the integration of complex intentionality and are characterized by complexity, alterity, and multi-directionality. Sheer technology objects are primarily non-entities and are characterized by embodiment, linguisticality, and duality.

It is important to note that Ihde's definition of technological artifacts exhibits both progress and limitations. The progressive aspects are evident primarily in the breakthrough beyond traditional instrumentalism (the introduction of technological intentionality brings the subjectivity of technology from the background to the foreground), the breakthrough beyond teleology (embodiment, as an intrinsic characteristic of technology, makes the performance of technology possible), and the innovation in the method of definition (a definition built upon the research method). The limitations are evident primarily in the theory of technological mediation (whether the mediation framework can define the essence of a category), in content pragmatism (limiting technology itself to being an object of empirical practice only), and in interpretive limits (the language that makes the hermeneutic relationship possible). In the final section of the paper, I analyze in depth the progressiveness and limitations of Ihde's definition of technology in order to further future academic research.
Queering 'the Times of AI'
Judith Campagne
Vrije Universiteit Brussel, Belgium
Increasingly, the times we live in are referred to as ‘the times of AI’. However, such a phrasing encourages the temporal logics of algorithmic time (e.g., categorization, efficiency, prediction, and linearity) to overtake the more circular, messy, and intimate experiences of human temporalities (Goldberg, 2019). Insisting that our times are ‘times of AI’ means leaving increasingly unchallenged the effects of logics of efficiency, instrumentalization, and prediction on human experiences of the temporalities of daily life. Additionally, the phrasing ‘times of AI’ takes a specific understanding of ‘AI’ as its starting point, thereby limiting the coming into being of different iterations of such technologies. The ‘of’ in categorizations like ‘the times of AI’ must therefore be challenged. What does it mean to encourage, alongside technologies such as AI, space for the messiness of human temporal experiences?
In this presentation, I start from the works of artist Patricia Domínguez. Domínguez materializes the tension between human and technological memory, thereby showing that the logics of linearity and instrumentality in algorithmic machines (and thus in algorithmic time) can be challenged through circular relations between past, present, and future. To give this further impetus, I stage an encounter between these artworks and queer theory as understood by José Esteban Muñoz, which “can be best described as a backward glance that enacts a future vision” (2009, p. 4). Queer theory here is a constant movement between past, present, and future in ways that are messy and simultaneously hopeful. Algorithmic technologies share a way of looking forward that is grounded in a specific past. Consequently, this seems an especially rich entry point for challenging the linear temporal logics that a phrase such as ‘times of AI’ encourages.
Queering ‘the times of AI’ is thus a conceptual critique of categorizations that wish to subsume all of human temporal experience within logics of linearity and efficiency. This presentation is therefore not a complaint against big data technologies like AI as such, but against perspectives that subsume all of human experience under the same logics. To queer the phrase ‘the times of AI’ is to challenge the idea that logics of linearity are the dominant temporal experience of life, and thereby to open up space, also within algorithmic technologies themselves, for more circular and messy experiences of time.
References
Domínguez, Patricia and Treister, Suzanne. (September 2024). ‘Dreaming at CERN.’ Burlington Contemporary. https://contemporary.burlington.org.uk/articles/articles/dreaming-at-cern
Goldberg, David Theo. (2019). ‘Coding Time.’ Critical Times 2(3). https://doi.org/10.1215/26410478-7862517
Muñoz, José Esteban. (2009). Cruising Utopia: The Then and There of Queer Futurity. New York University Press.