Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session: (Papers) Ethics V
Time: Friday, 27 June 2025, 8:45am – 10:00am

Session Chair: Andrea Gammon
Location: Auditorium 7


Presentations

Transcendental Technology Ethics

Donovan van der Haak

Tilburg University, The Netherlands

Before the dominance of the empirical understanding of technology (i.e., technology as technological artifact), transcendental perspectives dominated philosophy of technology, represented by thinkers such as Martin Heidegger, Herbert Marcuse, and Adorno and Horkheimer. They adopted a transcendental perspective on technology, exploring the a priori conditions and structures that steer and shape how technology is developed, understood, and used. During the empirical turn, Science and Technology Studies and a variety of philosophers of technology provided arguments to abandon transcendental perspectives as overly pessimistic and deterministic, and as overlooking the relevant differences between technological artifacts and what those artifacts actually do (Verbeek, 2021). The empirical turn was later accompanied by an ethical turn, during which technology ethics developed in both a descriptive and a normative direction, exploring how technological artifacts mediate and change morality (descriptive) and how ethics can be used to steer the design and use of technological artifacts (normative). Recently, calls have emerged to move beyond the empirical turn (Brey, 2010; Franssen et al., 2016), including calls for a return to the transcendental perspective (Coeckelbergh, 2022). Contemporary transcendental thinkers have argued that technology ethics lacks sufficient self-critique and has become industrially embedded, conformist, and co-opted by the tech industry (Lemmens, 2021). A substantial incorporation of the transcendental perspective into technology ethics nevertheless remains notably absent from the current literature.

Despite the merits of these transcendental critiques, a transcendental perspective within technology ethics must also do justice to the critiques and insights of the empirical perspective. In line with contemporary calls for a return to transcendental thought, I argue that a new ethical framework is needed that is compatible with ethical reflection on both technological artifacts and the transcendental horizons surrounding these artifacts (i.e., transcendental technology ethics), as valuable insights of the transcendental perspective have been lost since the empirical and ethical turns. Descriptively, transcendental philosophers of technology deserve more recognition within the literature on, for instance, technomoral change, as they have already provided a variety of insights into technological value change. Normatively, I argue that technology ethics should utilize the transcendental perspective to explore and challenge its own presuppositions. I draw on the transcendental perspective to challenge technology ethics’ exclusive understanding of technology as technological artifact.

Finally, I seek to exemplify the value of transcendental technology ethics through a paradigmatic case study that explores the construction of public values within Dutch municipalities that employ algorithms. Through participant observation, analyses of policy documents, and semi-structured interviews with experts and stakeholders within Dutch local governments, the case study explores how public values are perceived, protected, and undergo change. Unlike most approaches in technology ethics that centralize the artifact (i.e., data and AI), transcendental technology ethics uniquely allows us to better understand how technological ways of thinking and rationalizing emerge within these and partnering institutions, and how these rationalities lead to ethical arguments in favor of the use of algorithms.

References

Brey, P. (2010). Philosophy of technology after the empirical turn. Techné: Research in Philosophy and Technology, 14(1), 36–48. https://doi.org/10.5840/techne20101416

Coeckelbergh, M. (2022). Earth, technology, language: A contribution to holistic and transcendental revisions after the artifactual turn. Foundations of Science, 27(2), 259–270. https://doi.org/10.1007/s10699-020-09730-9

Franssen, M., Vermaas, P. E., Kroes, P., & Meijers, A. W. M. (Eds.) (2016). Philosophy of technology after the empirical turn (Philosophy of Engineering and Technology). Springer. https://doi.org/10.1007/978-3-319-33717-3

Lemmens, P. (2021). Technologizing the Transcendental, not Discarding it. Foundations of Science, 27(4), 1307–1315. https://doi.org/10.1007/s10699-020-09742-5

Verbeek, P.-P. (2021). The empirical turn. In S. Vallor (Ed.), The Oxford Handbook of Philosophy of Technology (pp. 1–21). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190851187.001.0001



Sustainable AI and the third wave of AI ethics: a structural turn

Larissa Bolte, Aimee van Wynsberghe

University of Bonn, Germany

The notion of Sustainable Artificial Intelligence (Sustainable AI), and with it considerations of the environmental impact of AI technologies, has gradually made its entrance into AI ethics discussions. It has recently been suggested that this constitutes a new, “third wave of AI ethics” [1]. The idea of a “wave” is suggestive: it implies a new, distinct phase within the discipline. In this paper, we ask what Sustainable AI entails that should warrant such special accentuation.

We begin with an exploration of the landscape of AI ethics. We argue that three approaches, or waves, can be distinguished: a first approach, which is concerned with the unforeseeable but possibly substantial long-term risks of AI development (e.g., existential threat [2] or machine consciousness [3]); a second, mainstream approach, which engages with particular, existing AI technologies and their ethical design [4], [5]; and a third, emerging approach, which performs a structural turn. This turn is structural in two senses: for one, it deals with systemic issues that cannot be described at the level of individual artefacts, such as particular algorithms or AI applications; for another, this systemic focus is often paired with an analysis of the power structures that prevent these issues from being uncovered.

We argue that work on Sustainable AI increasingly instantiates this third approach. Such work does not consider particular AI applications, but rather the entirety of the material and social infrastructure that constitutes the preconditions for AI’s very existence: material hardware, energy consumption, conditions along the AI supply chain, the effects of AI on society, and so on (see, e.g., [6], [7]). What is more, some authors in Sustainable AI pair this structural analysis with a political outlook. They are concerned with the higher-level question of why an algorithm-centred perspective is favoured by regulators and in public debate, and which issues this perspective obscures (see, e.g., [8], [9]).

Finally, we broaden our own perspective and find that the third, structural approach to AI ethics is not the prerogative of work on Sustainable AI alone. In fact, other subfields of AI ethics have performed that same shift in recent years. We present literature on AI bias and fairness as an illustrative example. Hence, what started out as an investigation into the particularity of Sustainable AI concludes with a much broader outlook. We suggest that we might be looking at a more pervasive structural turn in AI ethics as a whole.

References

[1] van Wynsberghe, A.: Sustainable AI: AI for sustainability and the sustainability of AI. AI Ethics. 1(3), 213–218 (2021).

[2] Müller, V.C., Bostrom, N.: Future progress in artificial intelligence: A survey of expert opinion. In: Müller, V.C. (ed.) Fundamental Issues of Artificial Intelligence. Synthese Library, pp. 553–571. Springer, Berlin (2016)

[3] Parthemore, J., Whitby, B.: What makes any agent a moral agent? Reflections on machine consciousness and moral agency. Int. J. Mach. Conscious. 5(2), 105–129 (2013)

[4] Tsamados, A., Aggarwal, N., Cowls, J., Morley, J., Roberts, H., Taddeo, M., Floridi, L.: The ethics of algorithms: key problems and solutions. In: Floridi, L. (ed.) Ethics, Governance, and Policies in Artificial Intelligence, pp. 97–123. Springer, Cham (2021)

[5] Corrêa, N.K., Galvão, C., Santos, J.W., Del Pino, C., Pinto, E.P., Barbosa, C., Massmann, D., Mambrini, R., Galvão, L., Terem, E., de Oliveira, N.: Worldwide AI ethics: a review of 200 guidelines and recommendations for AI governance. Patterns 4(10), 100857 (2023)

[6] Crawford, K.: The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press, New Haven (2021)

[7] Sætra, H.S.: AI in context and the sustainable development goals: factoring in the unsustainability of the sociotechnical system. Sustainability 13(4), 1738 (2021)

[8] Dauvergne, P.: AI in the Wild: Sustainability in the Age of Artificial Intelligence. MIT Press, Cambridge (2020)

[9] Becker, C.: Insolvent: How to Reorient Computing for Just Sustainability. MIT Press, Cambridge, Massachusetts (2023)



Against an ideal theory of justice for AI ethics

Alice Rangel Teixeira

Universitat Autònoma de Barcelona, Spain

Arguments for the adoption of a Rawlsian theory of justice have become prominent in current debates on Artificial Intelligence (AI) ethics (1–7). Some proponents suggest that, while the fair use of AI is central to AI ethics, the prevailing focus on fairness reflects a narrow view, stemming from political theory’s failure to critically engage with the complex interplay between technology and social institutions (2,4). Others critique the dominant principlist approach in AI ethics, contending that principles are ineffective without clear and practical guidance (1,8).

This failure to address the political dimensions of technology or to provide actionable guidance has drawn criticism of principle-based frameworks for enabling techno-solutionism. An overemphasis on algorithmic de-biasing reduces fairness to statistical parity (3,9) and overlooks the role of AI systems in shaping the ‘basic structure’ of society (1,2), consequently failing to account for the broader socio-political implications of AI systems (1). Proponents argue that Rawls’s “fair equality of opportunity” and the “difference principle” provide a robust ethical foundation for AI. Fair equality of opportunity ensures equitable access to opportunities in decision-making (2,3,5), while the difference principle prioritizes the welfare of the least advantaged (1,6,7). Embedding these principles in AI design and governance aims to advance justice across societal structures, aligning with Rawls’s vision of a “well-ordered society.”

While a Rawlsian theory of justice is presented as an alternative to the dominant principlist approach in AI ethics, one that provides clearer guidance for the application of principles, it nevertheless presents the same problem in its idealized conception of justice (10). Rawls’s principles were designed for hypothetical, well-ordered societies and fail to address historical and structural injustices, such as gender and racial oppression (11,12). This idealization limits their applicability to the complexities and inequities of real-world contexts. Moreover, the focus on distribution neglects instances of injustice that are not representable as goods or end-states, such as epistemic injustice. Critiques of ideal theory highlight its inability to adequately account for lived experience (13), arguing that the ahistoricism and distributive focus that characterize idealized conceptions of justice obscure historical injustice (11,14) and prioritize resources and goods over the reality of people’s lives (13), thus operating to reinforce oppression (10,12,14).

Incorporating an ideal theory of justice into AI ethics raises profound questions about the adequacy of such theories for addressing social realities that are structurally unjust. This underscores the need for alternative ethical frameworks that prioritize lived experience and historical context and are attentive to the power asymmetries that permeate our society.

References:

1. Westerstrand S. Reconstructing AI Ethics Principles: Rawlsian Ethics of Artificial Intelligence. Sci Eng Ethics. 2024 Oct 9;30(5):46.

2. Gabriel I. Toward a Theory of Justice for Artificial Intelligence. Daedalus. 2022 May 1;151(2):218–31.

3. Franke U. Rawlsian Algorithmic Fairness and a Missing Aggregation Property of the Difference Principle. Philos Technol. 2024 Jul 13;37(3):87.

4. Franke U. Rawls’s Original Position and Algorithmic Fairness. Philos Technol. 2021 Dec 1;34(4):1803–17.

5. Heidari H, Loi M, Gummadi KP, Krause A. A Moral Framework for Understanding Fair ML through Economic Models of Equality of Opportunity. In: Proceedings of the Conference on Fairness, Accountability, and Transparency [Internet]. New York, NY, USA: Association for Computing Machinery; 2019 [cited 2025 Jan 8]. p. 181–90. (FAT* ’19). Available from: https://doi.org/10.1145/3287560.3287584

6. Peng K. Affirmative Equality: A Revised Goal of De-bias for Artificial Intelligence Based on Difference Principle. In: 2020 International Conference on Artificial Intelligence and Computer Engineering (ICAICE) [Internet]. 2020 [cited 2025 Jan 14]. p. 15–9. Available from: https://ieeexplore.ieee.org/document/9361347

7. Leben D. A Rawlsian algorithm for autonomous vehicles. Ethics Inf Technol. 2017 Jun 1;19(2):107–15.

8. Munn L. The uselessness of AI ethics. AI Ethics. 2023 Aug 1;3(3):869–77.

9. Lin TA, Chen PHC. Artificial Intelligence in a Structurally Unjust Society. Fem Philos Q [Internet]. 2022 Dec 21 [cited 2024 Jul 19];8(3/4). Available from: https://ojs.lib.uwo.ca/index.php/fpq/article/view/14191

10. Fourie C. “How Could Anybody Think That This is the Appropriate Way to Do Bioethics?” Feminist Challenges for Conceptions of Justice in Bioethics. In: Rogers WA, Mills C, Scully JL, Carter SM, Entwistle V, editors. The Routledge Handbook of Feminist Bioethics [Internet]. Routledge; 2022 [cited 2025 Jan 13]. p. 27–42. Available from: https://philarchive.org/rec/FOUHCA

11. Mills CW. “Ideal Theory” as Ideology. Hypatia. 2005;20(3):165–84.

12. Jaggar AM. L’Imagination au Pouvoir: Comparing John Rawls’s Method of Ideal Theory with Iris Marion Young’s Method of Critical Theory. In: Tessman L, editor. Feminist Ethics and Social and Political Philosophy: Theorizing the Non-Ideal [Internet]. Dordrecht: Springer Netherlands; 2009 [cited 2025 Jan 14]. p. 59–66. Available from: https://doi.org/10.1007/978-1-4020-6841-6_4

13. Sen A. Equality of What? In: Tanner Lectures on Human Values. Cambridge: Cambridge University Press; 1980.

14. Tronto JC. Moral Boundaries: A Political Argument for an Ethic of Care. Psychology Press; 1993. 244 p.


