A Political (Re-)Turn in the Philosophy of Engineering and Technology - Political liberal philosophy
Chair(s): Lukas Fuchs (University of Stirling)
Technological and engineering choices increasingly determine our world. Presently, this affects not only our individual well-being and autonomy but also our political and collective self-constitution. Think of digital technologies such as social media and their combination with AI, the corresponding echo chambers and filter bubbles, deepfakes, and the current state of liberal democracy amid the rise of authoritarian governments. Even though nation states have to reframe sovereignty in a globalised world (Miller, 2022), there remains potential for impactful collective action with regard to technological choices and practices of engineering, so that a simple form of technological determinism is to be discarded. In this light, the current focus of ethically normative philosophy of technology on individual action and character is alarmingly narrow (Mitcham, 2024). We urgently need a political (re-)turn in the philosophy of engineering and technology and, correspondingly, a turn towards engineering and technology in disciplines that reflect on the political sphere (Coeckelbergh, 2022).
To foster such a political (re-)turn in the philosophy of engineering and technology, we propose a panel at the SPT 2025 conference that brings together different theoretical perspectives and approaches reflecting the necessary diversity of such a political (re-)turn. We aim both to examine the contribution of applied political philosophy (e.g. political liberalism; Straussian political philosophy) to the question of technological disruption and to offer a roadmap for an explicitly political philosophy of technology that engages, for example, with the ways that AI will change the nature of political concepts (e.g. democracy, rights) (Coeckelbergh, 2022; Lazar, 2024). With global AI frameworks already shaping the global political horizon, it is pertinent to acknowledge and assess the current relationship between engineering, technology and politics. The panel might also be the first meeting of a newly forming SPT SIG on the Political Philosophy of Engineering and Technology, which will be proposed to the SPT steering committee.
References
Coeckelbergh, M. (2022). The Political Philosophy of AI: An Introduction (1st ed.). Polity.
Lazar, S. (2024). Power and AI: Nature and Justification. In J. B. Bullock, Y.-C. Chen, J. Himmelreich, V. M. Hudson, A. Korinek, M. M. Young, & B. Zhang (Eds.), The Oxford Handbook of AI Governance. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780197579329.013.12
Miller, G. (2022). Toward a More Expansive Political Philosophy of Technology. NanoEthics, 16(3), 347–349. https://doi.org/10.1007/s11569-022-00433-y
Mitcham, C. (2024). Brief for Political Philosophy of Engineering and Technology. NanoEthics, 18(3), 14. https://doi.org/10.1007/s11569-024-00463-8
Presentations of the Symposium
Reciprocity & Reasonability in the Age of AI
Paige Benton, University of Johannesburg
Scholars often emphasise that a shortcoming of John Rawls's theory of justice is that he did not address the role of technology in shaping future citizens (Risse 2023). Technology has been shaping citizens for decades, but the rise of AI poses a greater threat to how citizens and society may be shaped by it. Algorithmic amplification of polarising content, the erosion of mutual trust, and strained public discourse have contributed to democratic instability. In this talk, I argue that Rawls's framework of justice offers civic virtues that, when cultivated, can provide a normative safeguard for democracy in the Digital Age. Specifically, the virtues of reasonability—the capacity to engage in fair dialogue and consider diverse perspectives—and reciprocity—mutual trust and cooperation—are essential for a stable liberal democratic society to be possible (Rawls 2005, 48–50, 213).
I claim that these virtues act as a kind of political inoculation, helping citizens resist the divisive and polarising content amplified by AI-driven digital platforms. The political virtues of reasonability and reciprocity foster open dialogue and trust, two elements necessary for stability in liberal constitutional democracies. Cultivating these virtues could, in theory, help reduce the amplification of distrust between citizens that undermines civic friendship. Reasonability and reciprocity protect democratic institutions from the destabilising effects of digital dissent, ensuring that AI technology serves democratic ends rather than undermining them. They do so, I claim, because citizens who have developed these capacities possess the sense of justice needed to seek political consensus with those with whom they are in moral conflict. Given this, I argue that liberal constitutional governments have a moral imperative to cultivate these two political virtues within the public sphere. This imperative falls to governments as a matter of domestic justice, since governmental inaction on cultivating civic virtues has long-term consequences that can undermine the preservation of democratic stability.
Technologies as Promoters of Justice: A Capability-Based Framework
Daniel Lara De La Fuente, University of Málaga
Any political-philosophical approach to technology and engineering should incorporate justice concerns, understood as the distributional, recognitional, and procedural aspects involved in the implementation of technical innovations in a given political community. Yet a question remains to be answered: how is this broad commitment to be specified? In this paper, I provide a theoretical framework to assess under what conditions technological innovations can be considered just. Grounded in the capability approach, I argue that technological innovations are just and suited to facing socioecological challenges if they are effective environmental conversion factors, net ecological stabilizers, and protectors of central human capabilities at levels of sufficiency. The article illustrates these criteria through a case study of the role of Artificial Intelligence in increasing the reliability, safety, and scalability of nuclear fission power reactors. Overall, this theoretical assessment can inform practical purposes such as evaluating social, economic, and environmental costs and benefits in policy making.
A Rawlsian Philosophy of Technology and Engineering?
Michael W. Schmidt, Karlsruhe Institute of Technology
Historically, Rawlsian ideas have been prominent in the philosophy of technology and engineering (Mitcham 2024). Recently, especially with regard to AI-related topics, Rawls's theory has gained renewed attention. Some go so far as to state that Rawls is "AI's favorite philosopher" (Procaccia 2019; see Franke 2021). So, how could and should a Rawlsian philosophy of technology and engineering be pursued?
Indeed, one of the most common approaches to incorporating Rawlsian ideas is at least problematic: applying Rawls's original position – a hypothetical situation in which one decides without knowledge of one's personal identity, social situation, and personal features (the "veil of ignorance") and is thus forced into an impartial perspective – to any issue at hand. This is problematic because the original position, with the important detail of risk-averse agents, is justified, according to Rawls, only for decisions about the basic structure of society. Correspondingly, a general application of the derived difference principle (maximin rule) is at least questionable (Rawls 1999, 133). Of course, one could work with Rawls's other – and lexically prior – principles of justice, which can also be derived from the original position. Serious basic-rights issues raised by technologies and engineering can be tackled in this way, as can other issues of inequality. If one wants to defend the use of the original position, one might explore what Rawls calls the four-stage sequence: a gradual lifting of the veil of ignorance so that information about the specific socio-technical setting enters the deliberation of the agents in the original position. Whether one should accept this new hypothetical decision situation as an ethically normative guide, however, depends on its justifiability via Rawls's method of reflective equilibrium.
In this method, especially its collective or public form that aims at what Rawls called "full reflective equilibrium" (Rawls 2001, 31f.), I see another promising way to incorporate a basic Rawlsian idea into the philosophy of technology and engineering: the normative idea that decisions affecting the basic structure of society must be justified to all reasonable citizens. To provide such a justification, one aims at full reflective equilibrium by systematizing shared political beliefs and tries to show that the accepted decision is part of the most plausible systematization. Since it is actually an empirical question which beliefs are shared and how they are weighed, a full reflective equilibrium cannot be pursued from the armchair and calls for innovative participatory and transdisciplinary research.
I would like to end by highlighting an issue a Rawlsian approach should tackle in the current historical situation: in light of the growth of various forms of anti-egalitarian authoritarianism and doubts about the capacity of capitalist democracy for genuine reform, any egalitarian approach must help to provide attractive alternative political visions. A focus on utopian thinking in a Rawlsian philosophy of technology and engineering thus seems warranted (e.g. Sand 2025), but the reflection on or creation of technological utopias should be integrated into holistic utopian thinking in order to provide what Rawls called "realistic utopias".
References
Franke, Ulrik. 2021. “Rawls’s Original Position and Algorithmic Fairness.” Philosophy & Technology, November. https://doi.org/10.1007/s13347-021-00488-x.
Mitcham, Carl. 2024. “Brief for Political Philosophy of Engineering and Technology.” NanoEthics 18 (3): 14. https://doi.org/10.1007/s11569-024-00463-8.
Procaccia, Ariel. 2019. "AI Researchers Are Pushing Bias Out of Algorithms." Bloomberg.com, March 7, 2019. https://www.bloomberg.com/opinion/articles/2019-03-07/ai-researchers-are-pushing-bias-out-of-algorithms, accessed June 30, 2021.
Rawls, John. 1999. A Theory of Justice: Revised Edition. Cambridge, Massachusetts: Belknap Press.
———. 2001. Justice as Fairness: A Restatement. Edited by Erin Kelly. Cambridge, Massachusetts: Harvard University Press.
Sand, Martin. 2025. Technological Utopianism and the Idea of Justice. Cham: Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-75945-1.