A Political (Re-)Turn in the Philosophy of Engineering and Technology
Chair(s): Michael W. Schmidt (Karlsruhe Institute of Technology)
Technological and engineering choices increasingly determine our world. Presently, this affects not only our individual well-being and autonomy but also our political and collective self-constitution. Consider digital technologies such as social media and their combination with AI, the corresponding echo chambers and filter bubbles, deepfakes, and their bearing on the current state of liberal democracy and the rise of authoritarian governments. Although nation states must reframe sovereignty in a globalised world (Miller, 2022), there remains potential for impactful collective action with regard to technological choices and practices of engineering, so that a simple form of technological determinism is to be discarded. In this light, the current focus of ethically normative philosophy of technology on individual action and character is alarmingly narrow (Mitcham, 2024). We urgently need a political (re-)turn in the philosophy of engineering and technology and, correspondingly, a turn towards engineering and technology in disciplines that reflect on the political sphere (Coeckelbergh, 2022).
To foster such a political (re-)turn in the philosophy of engineering and technology, we propose a panel at the SPT 2025 conference that brings together different theoretical perspectives and approaches reflecting the necessary diversity of such a turn. We aim both to examine the contribution of applied political philosophy (e.g. political liberalism; Straussian political philosophy) to the question of technological disruption and to offer a roadmap for an explicitly political philosophy of technology that engages, for example, with the ways that AI will change the nature of political concepts such as democracy and rights (Coeckelbergh, 2022; Lazar, 2024). With global AI frameworks already shaping the political horizon, it is pertinent to acknowledge and assess the current relationship between engineering, technology and politics. The panel might also be the first meeting of a newly forming SPT SIG on the Political Philosophy of Engineering and Technology, which will be proposed to the SPT steering committee.
References
Coeckelbergh, M. (2022). The Political Philosophy of AI: An Introduction (1st ed.). Polity.
Lazar, S. (2024). Power and AI: Nature and Justification. In J. B. Bullock, Y.-C. Chen, J. Himmelreich, V. M. Hudson, A. Korinek, M. M. Young, & B. Zhang (Eds.), The Oxford Handbook of AI Governance. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780197579329.013.12
Miller, G. (2022). Toward a More Expansive Political Philosophy of Technology. NanoEthics, 16(3), 347–349. https://doi.org/10.1007/s11569-022-00433-y
Mitcham, C. (2024). Brief for Political Philosophy of Engineering and Technology. NanoEthics, 18(3), 14. https://doi.org/10.1007/s11569-024-00463-8
Presentations of the Symposium
Toward a robust political philosophy of technology: aiming to transform and transcend regionalizations
Glenn Miller, Texas A&M University
This presentation offers a brief historical narrative of the state of political philosophy of technology, on the one hand, and of political philosophy as it engages technology, on the other, and sketches some nascent transformative lines of scholarship from philosophers of technology and political philosophers that bridge their usual disciplinary separation.
As philosophy of technology moved from its “classical period” – technology with a capital T, to use Don Ihde’s phrase – to its “empirical turn,” most of the field’s energy has been directed toward technological transformations of individual and communal experience and action, with an eye toward immediate outcomes. The “political” dimension of philosophy of technology usually examines experienced or predicted social consequences that accompany technological adoption, evaluates or makes policy recommendations, usually from a Western democratic perspective, or offers critiques of globalization and capitalism.
Over the same period, political science has trended toward quantitative analysis using a social science approach and, in the process, relegated political theorists asking fundamental political questions to the sidelines in many universities. When work in political philosophy, whether quantitative or qualitative, explores technology, it tends to treat it as a background condition or as a driver of technological, military, or economic competitiveness, rather than as a topic that itself deserves philosophic reflection, and nearly always without reference to research in philosophy and technology.
Human physical and cognitive activity is increasingly mediated by mechanical and digital technologies, these technologies are becoming more powerful, and the political beliefs, norms, and institutions that arose in less technological societies, functioning more or less independently, no longer seem adequate. Political philosophy of technology must therefore be transformative – extending, synthesizing, and refashioning existing scholarship – and must transcend disciplinary specializations and other limiting regionalizations. This demands theoretical work by philosophers of technology on the structures, components, and processes of political institutions, on the informal modes of interaction that complement, support, or weaken these institutions, and on the foundational concepts on which they are built.
To catalyze more transformative and transcending work in this area, the presentation provides brief sketches of recent scholarship by philosophers of technology and political philosophers from a variety of specializations. Among the former, one can look to my symposium colleagues, perhaps most prominently Carl Mitcham, but also to Mark Coeckelbergh, Peter-Paul Verbeek, Yuk Hui, and others. Among the latter, Jürgen Habermas identifies concerns with social media platforms and the public sphere; Joshua Cohen and Archon Fung explore the presuppositions of modern democracy challenged by digital technology and provide some initial policy recommendations; and Timothy Burns, a political philosopher inspired by Strauss, aims to develop insights on democracy, technology, and education. The paper concludes with a brief explanation of the opportunities for scholars interested in contributing to a new Special Collection in the journal NanoEthics on “Political Philosophy of Technology.”
Artificial intelligence and common goods: an uneasy relationship
Avigail Ferdman, Technion – Israel Institute of Technology
Artificial Intelligence (AI) is potentially the most disruptive technology in human history. Political philosophers have warned against AI’s erosion of democracy, freedom, rights and justice. Other philosophers, such as Albert Borgmann (1984), have worried that technology might profoundly change us as human beings. The ‘AI virtue ethics’ literature has responded by urging us to cultivate techno-moral virtues and to reaffirm and reclaim our humanity (Vallor 2016; 2024). To date, however, there is no philosophical account of the moral principles for collective action necessary to ensure that we continue to flourish as humans in the age of AI. As a result, scholarship tends to emphasize an individual-responsibility perspective, missing important structural dimensions of the problem with AI, that is, collective responsibility towards flourishing. Without a political philosophy of flourishing in the age of AI, we lack a theoretical account of what we owe to each other in terms of the conditions for living well-rounded, flourishing lives.
In response, I argue that to obtain a comprehensive account of the obligations we have to one another, we must better understand the conditions for human flourishing, given that AI stands to transform concepts such as human agency and the common good. Specifically, the concept of the ‘common good’ plays a critical role as a bridge between the ethical and the political. Drawing on Alasdair MacIntyre (1984; 1998; 2017) and Charles Taylor (1995), I aim to show that common goods are both constitutive of flourishing and threatened by AI. The paper demonstrates the threats that AI might pose to common goods by analysing two concepts: knowledge and moral decision-making.
Knowledge can be understood as an ‘epistemic commons’: shared participation in the production of knowledge as a common good. Knowledge acquired by interacting with AI could accelerate a ‘tragedy of the epistemic commons’ by undermining the social and relational capacities associated with the generation of knowledge, insofar as it replaces attention-sharing with AI-generated information that lacks joint action. AI might also create ‘epistemic distance’ between persons, undermining reciprocity.
Growing reliance on Artificial Moral Advisors for moral decision-making, and the mediation of social relations through AI, may erode the relational dimensions of joint action by transforming the goods in question from common to individual, thereby foreclosing the opportunity to develop joint commitments. AI mediation might create epistemic distance that makes genuine engagement with others difficult. Underlying this is a process of “ir-reciprocity” (Yeung et al. 2023), whereby reliance on AI assistants makes persons suspicious of other persons’ willingness to engage in relationships of trust. Thus, the discussion of the common good in the age of AI must account for how the conditions for reciprocity are affected by AI.
Building on this analysis of how AI might undermine common goods, the paper proposes criteria for determining the conditions under which AI environments could promote common goods that are constitutive of human flourishing, including joint commitment, joint action, reciprocity, relationality and trust. The discussion will contribute to a better understanding of our collective obligations towards AI environments that are conducive to human flourishing.
References
Borgmann, Albert. 1984. Technology and the Character of Contemporary Life: A Philosophical Inquiry. Chicago, IL: University of Chicago Press.
MacIntyre, Alasdair. 1984. After Virtue. 2nd edition. Notre Dame, Indiana: University of Notre Dame Press.
———. 1998. “Politics, Philosophy and the Common Good.” In The MacIntyre Reader, edited by Kelvin Knight, 235–52. University of Notre Dame Press. https://doi.org/10.2307/j.ctv19m62gb.17.
———. 2017. “Common Goods, Frequent Evils.” Presented at The Common Good as Common Project, University of Notre Dame, March 26. https://www.youtube.com/watch?v=9nx0Kvb5U04.
Taylor, Charles. 1995. “Irreducibly Social Goods.” In Philosophical Arguments, 127–45. Cambridge, Mass.: Harvard University Press.
Vallor, Shannon. 2016. Technology and the Virtues. New York: Oxford University Press.
———. 2024. The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking. Oxford University Press. https://doi.org/10.1093/oso/9780197759066.001.0001.
Yeung, Lorraine K. C., Cecilia S. Y. Tam, Sam S. S. Lau, and Mandy M. Ko. 2023. “Living with AI Personal Assistant: An Ethical Appraisal.” AI & SOCIETY. https://doi.org/10.1007/s00146-023-01776-0.
Explainable AI as a rhetorical technology
Wessel Reijers, Tobias Matzner, Suzana Alpsancar, Paderborn University, Germany
This paper argues that explainable AI should be considered a rhetorical rather than an epistemic technology and outlines the normative implications of this perspective. Explainable AI captures both a set of technological solutions and a normative ideal to address the ‘black box’ problem of AI systems based on layered networks of artificial neurons. This problem implies that the outputs of these systems cannot be adequately explained because the process that produced them is fundamentally opaque. Technical and policy discourses have put explainable AI forward as an epistemic-normative ideal, closely related to the notion that ‘explaining’ AI outputs would lead to transparency. Thus positioned, explainable AI is treated as an epistemic technology that enables a form of truth-finding.
In this paper, we argue that this view of explainable AI is mistaken, and that it should rather be considered a rhetorical technology. This is not to say that explainable AI cannot play an epistemic role, for instance in the natural sciences, but rather that when it is considered as a normative principle it appeals to a rhetorical rather than an epistemic ideal. From the outset, it appears that ‘explainability’ of AI is not always normatively relevant (e.g., in the context of discovering new astronomical phenomena) but becomes relevant in the context of decision-making in the realm of human affairs, which triggers a responsibility requirement. In this context, following Hannah Arendt, we deal with the exchange of (considered) opinions rather than with scrutinizing epistemic truths. Such truths, when placed in the context of human decision-making, may even gain a despotic character. Instead, we need to cultivate a sensus communis, a rhetorical discourse that cultivates the virtues of those engaged in the exchange of opinions.
Hence, when we put forward the normative requirement that AI systems be explainable, we consider them in the context of decision-making, supporting a ‘good’ rhetorical exchange. Following Aristotle, we understand rhetoric as the faculty of observing, in a given situation, the possible means of persuasion. This situation is institutionally and technologically mediated; for instance, a court (including its buildings and procedures) mediates the rhetorical capacities of jury members asked to consider a verdict in a case. Similarly, explainable AI configures the setting of rhetorical discourse, for instance when it plays a role in determining a person’s credit score. It does not help unearth a hidden ‘truth’ about the credit score but rather contributes to the exchange of considered opinions concerning why a credit score is justified (or not).
This new perspective has several normative implications. First, it answers some of the valuable criticisms of explainable AI, which consider it a rhetorical foil or ‘fool’s gold.’ Second, it requires us to look beyond the AI system as an ‘autonomous’ agent making decisions to the whole decision-making context. Third, it urges us to consider how explainable AI should support a rhetorical discourse that is conducive to, rather than detrimental to, the virtues of the people it interacts with.