Conference Agenda

Overview and details of the sessions of this conference.

 
 
Session Overview
Session: (Symposium) Virtue ethics (SPT Special Interest Group on virtue ethics)

Time: Thursday, 26 June 2025, 8:45am – 10:00am

Location: Atlas 2.215


Presentations

Virtue ethics (SPT Special Interest Group on virtue ethics)

Chair(s): Marc Steen (TNO), Zoe Robaey (Wageningen University & Research)

Rationale and goal: To further develop the field of virtue ethics in the context of technology, design, engineering, innovation, and professionalism. We understand virtue ethics broadly: as diverse efforts to help people cultivate relevant virtues so that they can flourish and live different versions of ‘the good life’ with technology, and to help create structures and institutions that enable people to collectively find ways to live well together. We envision research in the following themes:

• Citizens and practices: E.g., study how technologies can help or hinder people in cultivating specific virtues (Vallor 2016), such as how a social media app can corrode one’s self-control, and how we can envision alternative designs that instead help people to cultivate self-control.

• Professionals and institutions: E.g., view the work of technologists and other professionals through a virtue ethics lens (Steen 2022). We are also interested in various institutions, e.g., for governance or oversight. This theme also relates to education and training (see the next theme).

• Education and training: E.g., design and implement education and training programs. This offers opportunities to study, e.g., how students or professionals cultivate virtues. This will likely involve cultivating practical wisdom, or reflexivity, as a pivotal virtue (Steen et al. 2021).

• Traditions and cultures: E.g., study and appreciate various ‘non-Western’ virtue ethics traditions, such as Confucianism and Buddhism (Ess 2006; Vallor 2016), or Indigenous cultures (Steen 2022b). We can also turn to feminist ethics, or study virtues in specific domains, like health care or the military.

 

Presentations of the Symposium

 

Internal conflicts among moral obligations: pursuing a quest for the good as innovators

Marco Innocenti
University of Milan, UNIMI

Developing a new technology involves assuming various roles, each of which carries moral obligations extending beyond the professional responsibilities within a team. Innovators influence human and non-human entities, from small-scale communities to broader societal and environmental impacts. These roles often conflict, as each suggests a distinct good to be pursued. For example, an engineer or designer may experience conflicting obligations towards their team, clients, local community, or environmental health. While these roles are interconnected, each may propose a different version of the good to be pursued, creating internal tensions. In other words, drawing on van de Poel’s (2015) description of responsibilities as moral obligations, each role indicates that we should ‘see to it that’ something is the case, and the different ‘somethings’ may practically exclude each other. The question then arises: how can innovators address these moral conflicts in a coherent way? The challenge lies not simply in minimising harm to relevant stakeholders, but in actively pursuing positive outcomes across the different spheres of influence. It is therefore crucial to understand how internal conflicts among these roles can be reconciled so that moral obligations are upheld. While compromise may seem an obvious solution, it often fails to actually ‘satisfy’ these obligations. In light of this, what alternative approaches might better address these conflicts?

This presentation addresses two central questions. First, how does the practice of developing technology give rise to the recognition of different moral responsibilities as moral obligations? Second, how can these moral obligations be integrated into the technology development process in a way that provides a coherent framework for action? To answer these questions, I engage with Alasdair MacIntyre’s (2007) concept of the quest for the good, which seeks to order diverse goods in a comprehensive way. This idea offers a way to address the internal conflicts innovators face by framing technology development as a shared pursuit of the common good. I argue that MacIntyre’s virtue ethics framework allows for the ethical integration of diverse roles, providing a path toward moral coherence in the development process in small R&D teams. Another point of reference is the concept of ‘decompartmentalization’ in MacIntyre (1999; 2016), which highlights the difference between the present technological situation and the more distinctly ‘modern’ one. Drawing on this notion, I propose a framework that guides teams in structuring their efforts around their different understandings of the good(s), as informed by their moral obligations. This framework encourages collective deliberation, helping to identify synergies between different moral obligations and providing a pathway to address internal conflicts constructively and procedurally. By framing technology development as a shared quest for the good that starts from individuals’ internal conflicts, this approach aims to reconcile moral obligations across team members while ensuring that ethical reflection plays a central role in the innovation process.

Bibliography

• MacIntyre, A. (1999). Social Structures and Their Threats to Moral Agency. Philosophy, 74(289), 311–329.

• MacIntyre, A. (2007). After virtue: A study in moral theory (3rd ed.). Notre Dame, IN: University of Notre Dame Press.

• MacIntyre, A. (2016). Ethics in the Conflicts of Modernity: An Essay on Desire, Practical Reasoning, and Narrative. Cambridge: Cambridge University Press. https://doi.org/10.1017/9781316816967

• van de Poel, I. (2015). Moral Responsibility. In I. van de Poel, L. Royakkers, & S. D. Zwart (Eds.), Moral Responsibility and the Problem of Many Hands (pp. 12–43). Routledge. https://doi.org/10.4324/9781315734217

 

Artificial virtues and hermeneutic harm

Andrew Rebera
KU Leuven

Virtue-based approaches to AI development are becoming increasingly popular, at least in the philosophical literature. One approach focuses on the role of human virtues (the virtues of developers, regulators, users, and so on) in ensuring that AI is responsibly designed and deployed. A second approach is concerned with the possibility of artificial virtues: virtues that AI systems themselves might have or exemplify. A burgeoning philosophical literature debates which virtues are in question, what their nature is, and how they might be inculcated in “artificial moral agents” (AMAs). Attempts to implement virtuous behaviour in AMAs tend to leverage “bottom-up” rather than “top-down” strategies, exploiting the apparent affinity between, on the one hand, virtue ethics’ traditional emphasis on education in the virtues through habituation and the imitation of exemplars and, on the other hand, the training of AI models through reinforcement learning, imitation learning, and other machine learning techniques in which models learn behaviours by interacting with environments, observing patterns, or optimising for desired outcomes. However, some authors have argued that such approaches fundamentally misunderstand the nature of virtue and its relationship to moral agency. On one line of argument, AMAs are at best able to behave in conformity with virtue, but they cannot act from virtue (Constantinescu & Crisp, 2022). On another line of argument, even bottom-up learning approaches cannot ensure that AMAs are fully embedded in the “forms of life” against which genuine moral action takes place (Graff, 2024). Neither critique fundamentally undermines, or is intended to undermine, virtue-based approaches to AI development. Yet both indicate certain limitations that shape how such approaches should be implemented.

In this paper I revisit these two kinds of critique. While sympathetic to both, I suggest that they overlook important connections and parallels between the virtues and the reactive attitudes (Strawson, 2008). When we recognise virtues in others, we rely not only on observation of their outward behaviour but also “see through” their actions to their underlying moral character. This recognition process is inseparably tied to the feeling and regulation of reactive attitudes such as gratitude, resentment, and indignation. In recent work, the regulation of reactive attitudes in response to harms caused by AI agents has been argued to give rise to “hermeneutic harm”, i.e. the emotional and psychological pain caused by a prolonged inability to make sense of an event (or events) in one’s life (Rebera, 2024). In this presentation, I argue that virtue-based approaches to AI development may actually exacerbate this problem by creating a form of “virtue theatre” that makes it harder for humans to properly make sense of, and respond to, AI behaviour. Like the above-mentioned critics of virtue-based approaches, I do not claim that the failure of virtue-based approaches to resolve the problem of hermeneutic harm means that they ought to be abandoned. But the argument does indicate an urgent need to better understand the extent and nature of AMAs’ participation in our networks of moral relationships and reactive attitudes.

Bibliography

• Constantinescu, M., & Crisp, R. (2022). Can Robotic AI Systems Be Virtuous and Why Does This Matter? International Journal of Social Robotics, 14(6), 1547–1557. https://doi.org/10.1007/s12369-022-00887-w.

• Graff, J. (2024). Moral sensitivity and the limits of artificial moral agents. Ethics and Information Technology, 26(1), 13. https://doi.org/10.1007/s10676-024-09755-9.

• Rebera, A. P. (2024). Reactive Attitudes and AI-Agents – Making Sense of Responsibility and Control Gaps. Philosophy & Technology, 37(4) https://doi.org/10.1007/s13347-024-00808-x.

• Strawson, P. F. (2008). Freedom and Resentment. In Freedom and resentment and other essays (pp. 3–28). Routledge.



 