Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session
(Papers) Ethics II
Time:
Thursday, 26/June/2025:
11:50am - 1:05pm

Session Chair: Maren Behrensen
Location: Auditorium 7


Presentations

Considering the social and economic sustainability of AI

Rosalie Waelen, Aimee Van Wynsberghe

University of Bonn, Germany

Van Wynsberghe (2021) distinguishes three waves of AI ethics. The first ‘wave’ of AI ethics was predominantly concerned with far-future scenarios about the existential threat of superintelligence. The second wave of AI ethics focused on the shorter-term implications of specific AI applications and brought forward ethical guidelines and value-sensitive-design methods as tools to prevent or mediate AI’s ethical implications. Some ethical issues that are central to this second wave of AI ethics are bias and discrimination, trust and transparency, and responsibility. Van Wynsberghe (2021) proposes that there is a need for a third wave of AI ethics, where sustainability is a, if not the, central concern. This third wave is on its way. Pioneered by the work of Strubell, Ganesh and McCallum (2019), the AI Now Institute (Crawford & Joler, 2018; Dobbe & Whittaker, 2019), Bender and colleagues (2021), Crawford (2021), Brevini (2022), and others, attention to the environmental costs and material reality of AI is growing. While AI has long been seen as something untouchable that exists only ‘in the cloud’, it is now increasingly acknowledged that AI has a material dimension that consumes significant amounts of energy and water (Brevini, 2022; Strubell et al., 2019; van Wynsberghe, 2021).

The aim of this presentation will be to discuss recent research done on the concept of sustainable AI, and particularly its relation to debates about AI ethics and AI governance. Research on sustainable AI, so far, has been predominantly concerned with the material cost of AI, that is, the environmental sustainability of AI. We argue that focusing solely on the environmental sustainability of AI is too narrow. The concept of sustainability is commonly understood as having three dimensions: environmental, social, and economic sustainability. These three pillars also apply in the AI context, because there are not only environmental costs and considerations involved in AI development, but also social and economic ones. Moreover, as we will show, the environmental, social, and economic dimensions of AI development are intimately related. Through a discussion of recent literature on sustainable AI, we argue that sustainable AI and the third wave of AI ethics should therefore be about all three pillars of sustainability.



Synthetic socio-technical systems: poiêsis as meaning making

Federica Russo1, Andrew McIntyre2

1Utrecht University, The Netherlands; 2University of Amsterdam

With the recent renewed interest in AI, the field has made substantial advancements, particularly in generative systems. Increased computational power and the availability of very large datasets have enabled systems such as ChatGPT to effectively replicate aspects of human social interactions, such as verbal communication, thus bringing about profound changes in society.

Building on a Latourian approach that counts human, natural, and artificial entities alike as legitimate actants in a network, we explain why we need to take this conceptualisation a step further. We argue that the arrival of generative AI systems marks a shift from ‘interacting through’ to ‘interacting with’ technologies, and calls for a reconceptualization of socio-technical systems as we currently understand them. While a large part of analogue technologies, and of digital ones too, are artefacts or systems we interact *through*, with generative AI systems we also interact *with*. This means that the process of meaning making is no longer a peculiarity of human agents. Granted, generative AI systems do not *understand*, but they still partake in this process. We dub this new generation of socio-technical systems *synthetic* to signal the increased interactions between human and artificial agents, and, in the footsteps of philosophers of information, we cash out agency and meaning making in terms of the concept of ‘poiêsis’ (see Floridi 2013, Russo 2022).

We close the presentation with a discussion of the potential policy implications of synthetic socio-technical systems and the need to adopt an 'epistemology-cum-ethics' approach in AI.

References

Floridi, L. (2013). The ethics of information. Oxford University Press.

Russo, F. (2022). Techno-scientific practices: An informational approach. Rowman & Littlefield.



Exploring Kantian Part-Representation and Self-Setting Concepts in the Age of Artificial Intelligence

Pan Deng

Shenzhen University, China

This paper presents a philosophical inquiry into Immanuel Kant’s concepts of representation (Vorstellung) and part-representation (Teilvorstellung) through the lens of contemporary advancements in artificial intelligence (AI). By revisiting Kant’s Critique of Pure Reason and Opus Postumum, the study explores the hierarchical relationship between intuitive perceptions and conceptual functions, drawing parallels to the mechanisms of pattern recognition and machine learning.

Central to Kantian epistemology is the distinction between two modes of representation: intuition (Anschauung) and concept (Begriff). These are associated respectively with the faculties of sensibility and understanding. Kant posits that sensory intuitions provide immediate representations of objects, while concepts mediate these representations through logical functions. This dual structure forms a hierarchy of cognition where part-representations serve as essential building blocks for constructing comprehensive conceptual understandings.

In the context of AI, the hierarchical synthesis of part-representations parallels the process by which neural networks identify features and construct models for data classification. Just as Kant describes the formation of general concepts from particular intuitions, machine learning algorithms synthesize data inputs into abstract feature representations that inform decision-making processes. This paper investigates how Kant’s notion of part-representations as markers of sensory input relates to feature extraction methods in machine learning.

Further exploration is given to Kant’s theory of Selbstsetzung (self-setting), a concept developed in his Opus Postumum, where the mind actively constitutes the conditions for experience and cognition. The theory aligns intriguingly with modern discussions about the autonomy of artificial systems and their capacity for self-optimization. Kant’s insights on the interplay between logical functions and imagination in synthesizing knowledge offer valuable perspectives on how AI systems might simulate forms of cognitive self-regulation and adaptive learning.

The investigation also engages with the limitations of current AI models when contrasted with Kant’s philosophical framework. While AI systems operate based on numerical and probabilistic models devoid of self-awareness, Kantian thought emphasizes the primacy of self-consciousness (transzendentale Apperzeption) in unifying cognitive processes. By juxtaposing these perspectives, the paper raises questions about the possibility and limitations of achieving AI systems that genuinely emulate human cognitive faculties.

In conclusion, this study proposes that Kant’s hierarchical model of part-representations and his self-setting theory provide a robust framework for analyzing the epistemic functions of AI. The paper seeks to contribute to the ongoing dialogue between philosophy and technology by offering a nuanced understanding of machine cognition through Kantian epistemology.



 
Conference: SPT 2025