Conference Agenda

Overview and details of the sessions of this conference.

 
 
Session Overview
Date: Wednesday, 25/June/2025
12:30pm - 1:30pm: Registration
Location: Voorhof
1:30pm - 1:45pm: Word of Welcome
Location: Blauwe Zaal
1:45pm - 2:45pm: Keynote 1 - Sabina Leonelli - Environmental intelligence: Subverting the philosophical premises for AI
Location: Blauwe Zaal
3:00pm - 4:30pm: (Symposium) Intimate technologies, brain chips and cyborgs: revisiting the bright-line argument
Location: Blauwe Zaal
 

Intimate technologies, brain chips and cyborgs: Revisiting the bright-line argument

Chair(s): John Sullins (Sonoma State University, United States of America)

How viable is the concept of authenticity in the face of intimate technologies? In this symposium, we will discuss whether “authenticity” remains a fecund concept for elucidating human agency and the human condition amid the use of intimate technologies and bodily or cognitive enhancements. Our authors will reflect on the importance of autonomy as a necessary condition for moral agency (Wandmacher 2016) and its relevance to the problem of authenticity in artificial companions (Johnson 2011; Turkle 2011), drawing on the Bright-Line Argument (Moor 2006; Wandmacher 2016) and on alternative frameworks such as sociomorphing (Seibt et al. 2020). We will also revisit Donna Haraway’s “A Cyborg Manifesto,” which redefines "cyborg" as a metaphor for the breakdown of traditional boundaries (human/animal and animal/machine) rather than a literal technological hybrid (Clynes and Kline 1961). It critiques human exceptionalism and exposes the instability of these binaries. Haraway asserts that humans are inherently entangled with tools and non-human others, challenging notions of autonomy and purity. The cyborg thus represents our existential condition, revealing humanity’s intrinsic interconnectedness and shared kinship with other forms of being. This session also examines the liberatory potential of intimate technologies, questioning whether they truly empower users or reinforce control by technological elites. Inspired by Marx and expanded by thinkers such as Dewey, Heidegger, and Arendt, the philosophy of technology has long explored the sociopolitical impacts of technology. Today, intimate technologies, shaped by transhumanism and longtermism, often serve corporate interests. Through subtle nudges, they risk merging user desires with corporate goals, blurring the line between liberation and manipulation. We will conclude by examining the widespread adoption of Generative AI (GenAI) tools, from dating apps (Lin 2024) to hospital transcription (Burke and Schellmann 2024), and their polarizing impact on academia. Drawing on Participatory Sense-Making (PSM), the concluding talk critiques claims that GenAI systems are "collaborators," arguing that they lack genuine agency. Anthropomorphism and cognitive metaphysics are explored to reveal recurring errors in evaluating GenAI, highlighting the challenges of understanding these tools’ ontological status and their role in human interaction.

 

Presentations of the Symposium

 

The Crux of the Bright-line Argument as an Explanatory Lens for Understanding Why the Problem of Authenticity Concerning Artificial Companions Persists

Aaron Butler
University of Lucerne, Faculty of Theology, Lucerne Graduate School in Ethics LGSE, Institute of Social Ethics ISE

In this paper, I attempt to bring a discussion of the importance of autonomy in characterizing moral agency, that is, of being autonomous as a necessary condition for moral agency (Wandmacher 2016), into dialogue with a discussion of why the problem of authenticity is so important in debates about digital companions (mutatis mutandis, autonomous intelligences as artificial companions) and the behavioral outcomes of this class of entities in social settings with humans (Johnson 2011; Turkle 2011). My motivation is that I suspect the reasons why autonomy is so important in elucidating the ontological requirements of moral agency shed light on why some are so fixated on authenticity in the relevant sense. Furthermore, my suspicion is that the explanatory lens of the crux of the Bright-Line Argument (Moor 2006; Wandmacher 2016), namely autonomy, does not lose its explanatory adequacy regarding the aforementioned problem of authenticity even if we switch to alternative models of explanation, such as sociomorphing, to account for the intimate-techno-social phenomena articulated by the problem of authenticity concerning artificial companions (Seibt et al. 2020). That is to say, I suspect that the reasons why some are so fixated on discussions of authenticity for the relevant class of artificial companions are rooted in our realization of the importance of being autonomous for characterizing moral agency, and that this will not go away in the face of alternatives to the concept of authenticity.

Partial Bibliography:

Johnson, Deborah G. 2011. “Computer Systems: Moral Entities but not Moral Agents.” In Machine Ethics, edited by Michael Anderson and Susan Leigh Anderson, 168–183. Cambridge: Cambridge University Press.

Moor, J.H. 2006. “The Nature, Importance, and Difficulty of Machine Ethics.” IEEE Intelligent Systems 21 (4): 18–21.

Seibt, Johanna, Christina Vestergaard, and Malene Damholdt. 2020. “Sociomorphing, Not Anthropomorphizing: Towards a Typology of Experienced Sociality.” https://doi.org/10.3233/FAIA200900.

Turkle, Sherry. 2011. “Authenticity in the Age of Digital Companions.” In Machine Ethics, edited by Michael Anderson and Susan Leigh Anderson, 62–76. Cambridge: Cambridge University Press.

Wandmacher, Stevens F. 2016. “The Bright Line of Ethical Agency.” Techné: Research in Philosophy and Technology 20 (3): 240–257.

 

Thinking Otherwise

David Gunkel
Northern Illinois University

In her essay “A Cyborg Manifesto,” Donna Haraway famously does not define “cyborg” as a human being augmented with or denatured by various technological prostheses. That formulation of the term is far too literal, harkening back to the work of Manfred Clynes and Nathan Kline, who introduced the neologism in their 1961 paper on human space flight. As Haraway characterizes it, “cyborg” names a crucial breakdown of the ontological distinctions that have separated the human from the animal and the animal from the machine. Although she does not say it in this exact way, what is called “cyborg” deconstructs the classic set of binary oppositions by which we, human beings, have defined ourselves and secured our sense of exceptionalism in opposition to those others who have therefore become the excluded other.

The concept of the “cyborg,” it is important to point out, does not incite or institute these boundary breakdowns. It simply describes the contours and consequences of border skirmishes or untenable discontinuities that have been underway within, and constitutive of, the Western philosophical tradition from its very beginnings. The cyborg, therefore, does not cause or produce the ontological erosions that appear to threaten the authenticity of the human subject; it merely provides these dissolutions with a name. For this reason, the term “cyborg” identifies not just an enhanced human being, as is commonly formulated in the transhumanist movement. It also (and more importantly) describes the rather unstable ontological position in which the human subject already finds itself. We have, therefore, always and already been cyborgs, insofar as the differences separating the human from the animal and the animal from the machine have been and continue to be undecidable, contentious, and provisional.

Building on the innovations of Haraway and of those who have followed her lead, this paper argues that the cyborg is already the existential condition in which we find ourselves. We are always and already tangled up in our tools and instruments, and these entanglements already shape our understanding and definition of ourselves as “human.” Thus, it is with the posthuman subject that is called “cyborg” that we can begin to acknowledge how the very idea of being human is originally tangled up in and inextricably involved with a myriad of others with whom/which we always and already share a common bond of kinship. This intimacy with non-human others and other forms of otherness is not some threat to the pristine integrity of the human organism but constitutes the original ontological and axiological conditions of that which we seek to protect and insulate from what only subsequently appears as other.

 

Intimate technologies and liberation

John Sullins
Sonoma State University

As we become more intimately bonded to our technologies, the philosophical problems of self-identity and self-liberty arise. We seek to immerse our personalities in social media platforms to liberate ourselves from various weaknesses we feel in our more mundane social relationships. Likewise, we seek to create radically intimate technologies, such as brain implants, to overcome perceived mental weaknesses or inabilities. In short, we want intimate technologies to give us new forms of technologically mediated liberation. What we may fail to realize is that all technologies have makers, and those makers set the terms for these liberations economically, philosophically, and politically. We need to discuss what kind of liberation we are achieving and whether it is any form of liberation at all. During this session, I want to re-open a foundational discussion in the philosophy of technology on the liberatory potential of intimate technologies.

The discussion about the role of technology in liberation was initially inspired by Marx, who noted the liberating or subjecting potential of various technologies found in social systems. Flawed as this theory was, it inspired, either directly or through opposition, further and deeper thought in the works of Dewey, Mumford, Ortega y Gasset, Heidegger, and Arendt, who all wrestled with the liberating potential of technology and technosocial systems. By mid-century, social scientists such as Jacques Ellul, Herbert Marcuse, and Max Weber had presented different theories arguing that technological rationality had been deeply embedded in the political realities of the era and served as a powerful force in determining political outcomes. More recently, it has become far less popular to discuss these grand political narratives in the philosophy of technology. Even though philosophers may be hesitant to enter that debate, others are very willing to do so. Technology CEOs in particular tend to grab headlines with grand pronouncements about the liberating power of their technologies, be that liberation from work, from distance, or from nature.

In this discussion, we will look at how intimate technologies are not motivated by nuanced and responsible philosophies. They are not designed to liberate their users and give them personal and political freedoms. Instead, they are extensions of the more popular philosophies of transhumanism and longtermism, whose ultimate goal is to empower and enrich a small set of technological elites. In particular, this part of the discussion will focus on the ways that intimate AI technologies can be used to nudge or influence users in subtle ways. These nudges can influence our thoughts and desires so deeply that our desires and the desires corporations want us to have become difficult to distinguish. The bright line between consumer and consumed becomes harder to discern.

 

Hell is Other Robots: Participatory Sense-Making and GenAI

Robin Zebrowski
Beloit College

Although Generative AI (GenAI) existed before OpenAI’s release of ChatGPT in November 2022, that event catalyzed a wide embrace of such tools across almost all aspects of daily life. GenAI tools have turned up in such unlikely places as dating apps, where they can act as a ‘wingman’ (Lin, 2024), and even in hospital rooms, where they act as transcription tools (although not without making up whole sentences, apparently) (Burke and Schellmann 2024). But for academics, such tools have been extremely polarizing, openly embraced by some as collaborators in the knowledge process while cursed and scorned by many others who accuse them of trying to automate the juice of academic work itself: deep thought. As a result, there is a good deal of academic (and public/popular) discourse about whether these tools can substitute for humans in all sorts of different relationships, both intimate and professional. However, in these debates about the proper relationships between humans and GenAI, we generally fail to properly account for the ontological status of such tools. As a result, we fail to recognize those relationships that can genuinely only arise between agents in certain sorts of interactions.

In this paper, I begin with a broad sweep across many of the interesting uses of GenAI, often in the form of Large Language Models (LLMs) or similar predictive models like the art bots, and introduce the enactive theory of social cognition known as Participatory Sense-Making (PSM) to try and make sense of humans in interaction with these specific kinds of technologies. I argue, from this grounding, that those GenAI systems touted as “collaborators” in the knowledge-process, for example, are being radically mischaracterized under this ontology, in part because they are not genuine agents in the enactive sense. I also look to the literature on anthropomorphism to help understand how we keep making the same kind of errors in our evaluations of these systems. Ultimately, I conclude that our attempts to capture and quantify these core bits of our humanity remain problematic in the face of our failures to fully understand our own cognitive metaphysics.

Burke, G. and Schellmann, H. (October 26, 2024). Researchers Say an AI-Powered Transcription Tool Used in Hospitals Invents Things No One Ever Said. In AP News: https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14

Lin, B. (October 5, 2024). Grindr Aims to Build the World’s First AI ‘Wingman’. In The Wall Street Journal: https://www.wsj.com/articles/grindr-aims-to-build-the-dating-worlds-first-ai-wingman-8039e091

 
3:00pm - 4:30pm: (Symposium) Democratic technologies in East Asia
Location: Auditorium 1
 

Democratic technologies in East Asia

Chair(s): Levi Mahonri Checketts (Hong Kong Baptist University, Hong Kong S.A.R. (China))

Andrew Feenberg’s crucial work on democratic technology, as outlined across his publications on critical theory of technology (CTT), emphasizes the needs of participant groups in disrupting hegemonic technical codes to create technologies that serve marginalized groups’ interests. Feenberg helpfully draws out his ideas through cases such as Minitel, AIDS medical testing, the Americans with Disabilities Act, and online teaching. In this way, Feenberg’s (early) work is unique insofar as it relies heavily on theory (CTT) while also engaging in practical analysis. Thus, Feenberg’s approach is deeply oriented toward praxis, where theory is reshaped by empirical analysis, as in Feenberg’s later development of critical construction of technology (CCT).

This panel engages with the practical side of Feenberg’s approach, examining ways that technologies affect vulnerable populations within East Asian communities. We look for cases of empowerment and disempowerment. Insofar as Feenberg himself articulates “democratic” technical codes as serving interests against the hegemonic order, we consider cases, similar to Feenberg’s paradigmatic ones, that demonstrate resistance, adaptation, or innovation as they intersect with democratic interests. More significantly, the cases examined represent populations other than those studied by Feenberg, taking into consideration both other marginalized groups (e.g., sex workers and older adults) and East Asian contexts.

The papers in this symposium frame a variety of ways of thinking about democratic technologies, from various philosophical approaches (e.g., CTT, Multi-Level Perspectives, virtue ethics) to the different interests of populations. The strategies employed, both organically and by interest groups on behalf of populations, provide insight into the different ways “democracy” can be inscribed in relations with technologies, and the East Asian context of these studies provides an important perspective on non-Western democratic possibilities.

Individual presenters will share their research, followed by a general discussion of its significance for the particular social contexts studied, theoretical questions of democracy and technology, and future possibilities. This symposium corresponds with a nascent project to increase collaboration on CTT and CCT in a globalized fashion.

 

Presentations of the Symposium

 

CCTV use among Hong Kong sex workers

Levi Mahonri Checketts
Hong Kong Baptist University

I will discuss the case of CCTV and sex workers. CCTV obviously functions primarily, if not exclusively, as a technology of surveillance (Norris 2002). The function of surveillance in enforcing hegemonic behaviors is well known from the work of Michel Foucault (1977); as Feenberg says of the panopticon, “it is…the exercise of power through surveillance” (2017, 30).

Thus, on one hand, several authors have noted how CCTV infrastructure in public spaces has led to a decrease of freedom and increase of insecurity among sex workers (Kamath and Neethi 2020; Wright, Heynen and van der Meulen 2017; Henham 2021). Sex workers find themselves avoiding public areas where CCTVs are in use because they are used by police and store owners to punish them for existing in public spaces. On the other hand, in private spaces, sex workers have appropriated CCTV technology to protect themselves. In Australia, for example, brothels have incorporated CCTVs to help sex workers potentially screen clients before meeting them. In Hong Kong, sex work is less organized, but many sex workers still use CCTV as a method of both vetting their own clients and, thanks to still images, warning other sex workers against dangerous clients (Ivy 2014).

CCTV thus functions in an interestingly multistable fashion for sex workers. On one hand, it is clearly used to police and profile sex workers, a tool of the hegemony against vulnerable sex workers. Beyond the typical concern to avoid the gaze of police or their informants, sex workers must also be conscientious about places where CCTVs maintain the virtual panopticon. Even where sex work is not illegal, sex workers are typically regarded with disdain and so find themselves avoiding public gazes (Tan 2021). On the other hand, CCTV is also used in a way that protects sex workers from violent clients. By turning surveillance toward potential clients and giving the workers the position of surveiller, sex workers are able to reclaim some small amount of both autonomy and personal safety. This suggests the dual potential of surveillance: to put down those who diverge from hegemonic morality, but also to protect those who may otherwise have no protections. In this way, the multistability of CCTV is demonstrated by a democratic adaptation of a technology nearly synonymous with hegemonic control. Thus, in line with Foucault’s broader position on the bi-directional nature of power, sex workers have found a way to use this paradigmatic technology of surveillance power to their own advantage.

References:

Feenberg, Andrew. 2017. Technosystem: The Social Life of Reason. Cambridge, MA: Harvard University Press.

Foucault, Michel. 1977. Discipline and Punish: The Birth of the Prison. Translated by Alan Sheridan. New York: Vintage Books.

Henham, Caroline Sally. 2021. “The Reduction of Visible Spaces of Sex Work in Europe.” Sexuality Research and Social Policy 18: 909–919. https://doi.org/10.1007/s13178-021-00632-4

Ivy [Pseudonym]. 2014. “Sex Work and Closed-Circuit Television.” Action for REACH OUT. https://afro.org.hk/base_show.php?id=98&lang_id=2

Kamath, Anant, and P. Neethi. 2021. “Body Politics and the Politics of Technology: Technological Experiences Among Street-Based Sex Workers in Bangalore.” Gender, Technology and Development 25, no. 3: 294–310. https://doi.org/10.1080/09718524.2021.1933348

Norris, Clive. 2002. “From Personal to Digital: CCTV, the Panopticon, and the Technological Mediation of Suspicion and Social Control.” In Surveillance as Social Sorting: Privacy, Risk and Automated Discrimination, edited by David Lyon, 249-281. London: Routledge.

Tan, Nancy Nam Hoon. 2021. Resisting Rape Culture: The Hebrew Bible and Hong Kong Sex Workers. London: Routledge.

Wright, Jordana, Heynen, Robert and van der Meulen, Emily. 2015. “‘It Depends on Who You Are, What You Are’: ‘Community Safety’ and Sex Workers’ Experience with Surveillance.” Surveillance and Society 13, no. 2: 265-282. https://ssrn.com/abstract=3014749

 

Democratic strategies in South Korean energy communities

Joohee Lee
Sejong University

I will discuss how renewable energy communities have been implemented within South Korea’s centralized and hierarchical energy regimes, emphasizing the democratic organizational and technological visions shaping these efforts. These citizen-led initiatives reimagine energy as a commons rather than merely a commodity, reclaiming its common-pool resource characteristics through diverse projects (Baker, 2017; Atutxa et al., 2020). While such endeavors have successfully fostered citizen empowerment and leveraged distributed energy technologies in Western countries (Barabino et al., 2023), South Korea lacks the socio-political and systemic foundations necessary for citizen-driven, distributed energy systems (Ko, 2025).

This study qualitatively examines the processes of community-level energy democratization in South Korea, focusing on how organizational and technological values are pursued within these sociocultural and political constraints. Data were collected through semi-structured interviews with members of four renewable energy communities in both urban and rural areas of South Korea.

The Multi-Level Perspective (MLP) framework and energy democracy (ED) scholarship served as the analytical foundation for qualitative content analysis. The MLP framework enables the analysis of socio-technical energy transitions across three interlinked levels: landscape, regime, and niche (Geels, 2004). Energy democracy scholarship identifies three critical strategies for energy democratization at the niche level: resisting, reclaiming, and restructuring (e.g., Burke & Stephens, 2017). Integrated into the niche layer of the MLP framework, these democratization strategies helped reveal the underlying values and visions that challenge the large-scale, path-dependent, and technologically risky energy systems typically favored by centralized regimes.

Preliminary findings will be discussed from two perspectives: (a) the importance of cultivating both soft powers (e.g., a community’s history, culture, trust, and perseverance) and hard powers (e.g., acceptance of technological change and willingness to endure inconveniences from the change) to sustain energy community experiments; and (b) the role of landscape- and regime-level efforts in creating more opportunities for niche innovations through both institutional and non-institutional practices.

References

Atutxa, E., Zubero, I., & Calvo-Sotomayor, I. (2020). Scalability of low carbon energy communities in Spain: An empiric approach from the renewed commons paradigm. Energies, 13(19), 5045.

Baker, S. H. (2017). Unlocking the energy commons: Expanding community energy generation. In Law and Policy for a New Economy (pp. 211-234). Edward Elgar Publishing.

Barabino, E., Fioriti, D., Guerrazzi, E., Mariuzzo, I., Poli, D., Raugi, M., ... & Thomopulos, D. (2023). Energy Communities: A review on trends, energy system modelling, business models, and optimisation objectives. Sustainable Energy, Grids and Networks, 36, 101187.

Burke, M. J., & Stephens, J. C. (2017). Energy democracy: Goals and policy instruments for sociotechnical transitions. Energy Research & Social Science, 33, 35-48.

Geels, F. W. (2004). From sectoral systems of innovation to socio-technical systems: Insights about dynamics and change from sociology and institutional theory. Research Policy, 33(6-7), 897-920.

Ko, I. (2025). “Rural exploitation” in solar energy development? A field survey experiment in South Korea on solar energy support in rural areas. Energy Research & Social Science, 119, 103837.

 

Addressing technological literacy for Hong Kong elderly

Ann Gillian Chu, Wan Ping Vincent Lee, Rachel Siow Robertson
Hong Kong Baptist University

We will discuss uses of new forms of technologies among older adults in Hong Kong, drawing on studies which employed semi-structured interviews, field observations at faith-based social services working with older adults, and archival materials, with the aim of providing “thick description” (Geertz, 1973) of the current situation. We will show how technological literacy has improved recently amongst older adults in Hong Kong, allowing use of platforms which support social interactions and access to health-related information. However, remaining issues include the prevalence of a digital divide among older adults and a lack of consolidation of online services and providers. These issues are also complicated by recent waves of migration from Hong Kong, with many older adults finding themselves losing their sense of social identity and support after their children, grandchildren, and other members of their community moved away. Some recommendations for uses of technology in this context will be offered, such as a better integration between service providers and local communities and the provision of training courses. Further improvements will also be discussed regarding the user experience of digital platforms in connection with older people, including facilitating the formation of a sense of social identity and connection, agency, greater understanding, and positive emotions such as calmness and joy. Finally, the roles that faith-based social services spaces can play in supporting older adults will be discussed, including the use and integration of their online and offline spaces.

References

Geertz, C. (1973), "Thick Description: Toward an Interpretive Theory of Culture", The Interpretation of Cultures: Selected Essays, New York: Basic Books, pp. 3–30

 

Digital technologies’ impact on character formation among Hong Kong young people

Rachel Siow Robertson
Hong Kong Baptist University

I will address the theme of character education for children and young adults through and for digital technologies. I will develop my account in relation to two digital ethics programmes I have run at my home institution: one aimed at staff of universities across Asia, and another which trained students at a secondary school in Hong Kong. I will discuss some of the unique challenges posed by current uses of technologies such as AI, arguing that they have a disproportionate impact on young people and their educators in terms of harms such as bias, oversurveillance, environmental damage, and mis-, dis-, and mal-information. I suggest that the uneven distribution of harms and benefits in the technology ecosystem means that conditions for virtue vary greatly, with young people and their educators facing a tragic dilemma of having to interact with technology for learning without having the power or option to do the all-things-considered ‘right thing.' I will relate these issues to the problem of moral luck for virtue theory: structural constraints and circumstances may rule out the possibility of virtue (Tessman, 2017). Having posed this problem, I will discuss which character traits and virtues are appropriate to recommend, and how to support their development and implementation in young people’s interactions with digital technologies. In particular, I will extend and provide novel applications for solutions which address structural constraints on virtue, including the cultivation of “burdened virtues” (Tessman, 2005) and traits which support co-liberation (D’Ignazio & Klein, 2020) and joy.

References

D’Ignazio, C., & Klein, L. (2020). Data feminism. Cambridge, MA: MIT Press. Available at: https://data-feminism.mitpress.mit.edu/.

Tessman, L. (2005). Burdened virtues: Virtue ethics for liberatory struggles. Oxford: Oxford University Press.

Tessman, L. (2017). When doing the right thing is impossible. Oxford: Oxford University Press.

 
3:00pm - 4:30pm: (Symposium) Design as a contested space: technological innovations, critical investigations, military interests
Location: Auditorium 2
 

Design as a contested space: technological innovations, critical investigations, military interests

Chair(s): Jordi Viader Guerrero (TU Delft, The Netherlands), Eke Rebergen (University of Amsterdam, Amsterdam School for Cultural Analysis (ASCA)), Dmitry Muravyov (TU Delft, The Netherlands)

The design of intimate digital technologies is irrevocably intertwined with various ethical and social considerations. Different methods and guidelines have been developed to help designers navigate this, for which personal (human) decision making and ethical reflection are often valued. For some designers and researchers, this means they begin to question or struggle with the basic propositions or ideologies that these technologies are built upon. For example, current developments in AI are predicated on the epistemological assumption of knowledge understood as identification and prediction; on the material and economic underpinnings of ‘publicly available’ large-scale data sets, hyperscale data centers, and bottlenecked GPU supply chains; and on a political rationale that belittles non-algorithmic decision making and concentrates power in a few corporate actors (Alkhatib 2024). As political ideologies around technology shift from an optimism that equates technology with progress to a wielding of technology as a blunt manifestation of power (Merchant 2023; McQuillan 2022), and since the history and deployment of AI have been recurrently linked to military research (Pickering 2010; Halpern 2015), these technologies (and their associated funding schemes) are understood to be, and have been, part of military innovation and (preparation for) war.

Questioning and struggling against the assumptions of technological design, as well as of responsible or ethical design practices that build upon or correct the course of already existing technological developments and ideologies, can lead to more contrarian critical design/technical practices and research (Agre 1997; Harwood 2019; Ratto & Hertz 2019; Mazé & Keshavarz 2013). Rather than seeking to apply ethical or philosophical theories to improve existing technologies, which are frequently defined by corporate actors, a critical technical practice looks for further interruptions, deconstructions, breakdowns or minor tech investigations (Andersen & Cox 2023) in order to occupy and politicize design practice as a locus for questioning the entanglements between technology and society (Soon & Velasco 2023). As such, a critical technical practice questions the implicit epistemological and political goals of design and engineering while repurposing them as a form of materially-bounded critical reflection.

Through this symposium/panel, the complexities and challenges of such contrarian practices are developed, specifically in the context of the classroom and against a backdrop of AI technosolutionist imaginaries (Morozov 2013) and increasing ties to militarization in design practice and research. We propose these practices as a promising and interesting field of creative exploration and research that goes against the grain of the usual design and research programs, can challenge institutional ties, and can amount to a more activist or even antimilitaristic stance at a time when national military spending is increasing rapidly in Europe. How do designers critically navigate this complex field of creative possibilities, military interests, rapid innovations, but also the possible violent implications of the weaponization of everything, and a personal longing for peace and doing good? Is there enough attention to the historical and political constructions reinforcing the often unrecognized networks of the military industry and institutions of corporate power in technological design practice? What are the considerations for questioning the inevitability and perceived necessity of the current (infrastructures of) war? How can design practice and education become a critically fueled and politicized space that empowers us to imagine alternative technological futures?

Alkhatib, Ali. (2024). “Defining AI.” December 6, 2024. https://ali-alkhatib.com/blog/defining-ai.

Agre, P. E. (2014). Toward a critical technical practice: Lessons learned in trying to reform AI. In Social science, technical systems, and cooperative work (pp. 131-157). Psychology Press.

Andersen, C. U., & Cox, G. (2023). Toward a Minor Tech. A Peer-Reviewed Journal About, 12(1), 5-9.

Halpern, O. (2015). Beautiful data: A history of vision and reason since 1945. Duke University Press.

Keshavarz, M., & Maze, R. (2013). Design and dissensus: framing and staging participation in design research. Design Philosophy Papers, 11(1), 7-29.

McQuillan, D. (2022). Resisting AI: an anti-fascist approach to artificial intelligence. Policy Press.

Merchant, B. (2023). Blood in the machine: The origins of the rebellion against big tech. Hachette UK.

Morozov, E. (2013). To save everything, click here: The folly of technological solutionism. PublicAffairs.

Ratto, M., & Hertz, G. (2019). Critical making and interdisciplinary learning: Making as a bridge between art, science, engineering and social interventions. The critical makers reader:(Un) learning technology, 17-28.

Soon, W., & Velasco, P. R. (2024). (De) constructing machines as critical technical practice. Convergence, 30(1), 116-141.

 

Presentations of the Symposium

 

Historicising voice biometrics: the colonial continuity of listening, from the sound archive to the acoustic database

Daniel Leix Palumbo
University of Groningen, The Netherlands

Since 2017, German border authorities have introduced voice biometrics as an innovative assistance tool to analyse the language and accents of undocumented asylum seekers, in order to determine their country of origin and assess eligibility for asylum. However, the attempt to ‘scientifically’ identify links between voice, accent and country of origin through technology is not a recent development; it stands in historical continuity with longer colonial practices of listening and sound archiving from the beginning of the last century. European sound archives encompass early voice recordings of colonial subjects made through large-scale research projects during colonial rule and the world wars to reinforce the racial and nationalist ideologies of European states. Although aimed not at controlling borders but at defining ‘pure’ characteristics in the voices of world populations to create otherness, these recordings shared the purpose of creating an archive that could ground the determination of origin through voice analysis. Today, the creation of the acoustic database to train voice biometrics occurs under very different conditions, delegated to various public and private actors, including research consortia and crowdsourcing platforms. It involves linguistic researchers and many data workers, who provide their voice data as cheap labour. By conducting digital autoethnography, critical discourse analysis and in-depth interviews, this project explores these processes of outsourced (audio) data work while situating them within the longer colonial history of sound archiving and listening. It investigates disruptions and continuities in the shift from the sound archive to the acoustic database and what these imply about the operations of State power.

 

Antimilitarism & algorithms: design interventions and investigative data practices

Eke Rebergen
University of Amsterdam, Amsterdam School for Cultural Analysis (ASCA)

The military industry has been heavily involved in the development of technological innovations and the design of interactive systems that have become part of everyday life. Designers rarely recognise the links of their profession to military investments and technological developments, whether it is the application of technologies that can easily be weaponized, usage of systems that are created through military research efforts, or normalisation of war and the military in advertising, games and films.

As there is increasing critical thinking within the design field on colonial histories, contributions to social injustice, and, for example, the inherent violence or discrimination in design, the recent development of further military investment also seems worthy of closer scrutiny.

In a manner similar to proposals for reorienting design towards justice (Costanza-Chock) or decolonizing design (Tejada), we here chart out a more specific history of explicitly anti-militaristic design research and creative interventions against militarisation.

By examining cases like Schleiner's project Velvet Strike or the artistic work of Claude Cahun and Marcel Moore’s subversion and covert interventions, it is possible to extrapolate such forms of playful subversion (Flanagan, 2013; Pederson, 2021; Did, 2024) to current developments in AI or war propaganda through social media. Antimilitaristic design efforts, furthermore, cannot do without investigation and withdrawal: uncovering and severing all relations with war-related economies, complicit research activities, or involvement with military industries (Berardi, 2024). For this we look at recent design efforts that D'Ignazio and Klein called data feminism, as well as the work of Bureau D'études, which organised cooperation between militant groups, university students and artists, as examples of how to engage in further investigations of inherent networks of power and military technological developments. The antimilitarists of “onkruit” in the Netherlands are taken as a final historical example, as they created a package of informative zines, maps, and diagrams of all kinds of military divisions, as well as playful sticker packs and explicit calls to action, under the title “Een wilde wegwijzer”, explicitly rejecting military logic and exposing the places and often hidden infrastructures of the military.

Building on these cases, we assess the relevance of similar creative endeavors in these times of renewed spending on military technologies, the development and testing of all kinds of AI systems by, for example, the Israeli army (Loewenstein), and, more generally, the weaponization of everything (Galeotti), the complicity of companies like Google or Nvidia, and developments such as what has been called the kill cloud (Westmoreland & Ling).

This contribution will end with some personal experiences as a design teacher working with design students on such investigations and interventions.

 

Teaching machines, managed learning and remote examination

Alex Zakkas
University of Amsterdam, Amsterdam School for Cultural Analysis (ASCA)

Covid brought to universities the use of digital proctoring software so that students could undergo remote examination, allowing universities to continue producing degrees without having to rethink ways of examining (i.e., while complying with accreditation policies). Beyond the pandemic condition, in another “state of exception”, we also observed how in 2024 these same technologies were used during the student and teacher strikes in Greece so that universities could go ahead with examinations despite the occupied buildings: another case of opting for techno-solutionism instead of addressing the underlying structural issues of education. These technologies received a great deal of critique from both students and educators for their intrusion into intimate spaces (video surveillance of students’ private rooms), their discriminatory malfunctioning (failing to identify darker skin, requiring spatial conditions that few students can afford) and their disciplinary pedagogical models (catching cheaters, the panopticon effect). Like all technologies, proctoring software has its own history of socio-technical entanglements, with links to surveillance technologies developed in military and carceral industries, merged with technocratic epistemologies of knowledge transfer (Skinner’s teaching machines) and with a hyper-capitalist model of data extractivism and the quantified self. Understanding how these histories have influenced the design of teaching machines helps us place current innovations in edu-tech (such as “personal learning AI coaches” and “AI cheating detection”) within the wider socio-political narratives that produce them and anticipate probable futures (Crawford and Joler). As a counter-narrative, we are experimenting with the possibilities of Speculative & Critical Design methodology (Johannessen) for cultivating a "critical imagination" and involving students in reimagining the future of education against a backdrop of AI-accelerated capitalism and warmongering. Beyond the critique of technology, the real question concerns the kind of knowledge we are passing on to the next generations at a time when our options seem limited.

 

Exemplary situations of technological breakdown in the philosophy of technology: who and what is at stake in learning from failure?

Dmitry Muravyov
TU Delft, The Netherlands

Breakdowns constitute the backbone of life with technologies. Thus, it is unsurprising that thinkers have sought to understand the meaning and the place of this obverse side of technology. I use the concept of exemplary situations to explicate and problematize some of the features of understanding technological breakdown in the philosophy of technology. In analyzing the theme of technological breakdown in the canon of the philosophy of technology, I rely on Mol's ideas about empirical philosophy, particularly her concept of exemplary situations (Mol, 2021). For Mol, all philosophy is empirical insofar as it is explicitly or implicitly informed by the situations in the world that shape the philosophical inquiry. By reading some canonical texts in the philosophy of technology, I show how, through exemplary situations, i.e., explicit examples or implicit context, the technological breakdown in these texts is rendered as a frictionless and individualized moment of learning, perceived through the position of a user or an observer.

The philosophy of technology is a philosophical subfield with multiple approaches, each with its distinct philosophical lineages; it has also developed over time, facing changes such as the "empirical turn." Notwithstanding such diversity and changes over time, I suggest that it is possible to elucidate a particular tradition of thinking about technological breakdowns by starting with a postphenomenological approach, one of the most influential paradigms in the contemporary philosophy of technology. While my analysis predominantly relies on examining this tradition, I also show its resonance with other approaches.

Traversing these texts, one can see exemplary situations of technological breakdown and artifacts that no longer operate as envisioned: hammers, computer freezes, slowly loading webpages, overhead projectors, or rifles. I seek to problematize the shared underlying features of these philosophical accounts by showing how technological breakdown can instead be collective, political, imbued with friction, and perceived from a position that complicates learning. Using the alternative exemplary situation of the CrowdStrike blue-screen outage of July 2024, I show that, first, while technological breakdowns are canonically seen as moments of learning, it is also worth reflecting on who is learning here and at whose expense. Second, the subject of theorizing may not be an individual using the technology but a collective with few options but to accept the technology's working upon them. Third, defining something as a technological breakdown can itself be imbued with friction. In doing so, I seek to politicize the technological breakdown in the philosophy of technology and take the notion beyond its predominantly emphasized epistemological dimension.

Through such reading, breakdown becomes a collective experience that prompts questioning who is obtaining knowledge while highlighting that nothing is self-evident about defining something as "broken" in the first place. Collective vulnerability rather than individual knowledge-seeking can be something that a breakdown engenders.

 
3:00pm - 4:30pm: (Symposium) The illusion of conversation. From the manipulation of language to the manipulation of the human
Location: Auditorium 3
 

The illusion of conversation. From the manipulation of language to the manipulation of the human

Chair(s): Francesco Striano (Università di Torino, Italy)

This panel will critically examine the impact of Large Language Models (LLMs) as part of the intimate technological revolution, analysing how these technologies infiltrate our everyday communication and the fabric of interpersonal trust, culminating in an analysis of the implications for political manipulation. Starting with a technical analysis of the capabilities of LLMs, it will explore how these technologies, while not endowed with authentic intention or understanding, can influence our beliefs and interactions with the digital world.

The introductory talk will analyse the generalisation capabilities of LLMs and examine how these models are able to produce coherent and contextually relevant texts. It will discuss whether their ability to produce new content is the result of genuine abstraction or mere storage and reorganisation of data. This will raise the fundamental philosophical question of whether language models have a true understanding of language or solely reproduce patterns.

The second talk will examine the nature of LLMs through the lens of speech act theory. It will be argued that, despite their ability to produce locutionarily correct linguistic utterances, LLMs completely lack the illocutionary component of speech acts, namely intention. However, although LLMs do not possess intentions of their own, they produce perlocutionary effects that influence the user’s reactions and decisions, leading to a projection of intention on the part of the user. This illusion of understanding supports the notion of LLMs as “conversational zombies”, illustrating how such technologies influence users' emotions and decisions, shaping the field of our intimate relationship with technology.

The following talk will argue that trust in the reliability and trustworthiness of LLMs is based on a double conceptual fallacy: on the one hand, one extends to them a judgement about reliability that is proper to non-probabilistic, linear technologies; on the other hand, one considers them “trustworthy” as if they had intentions. It will be argued that this trust is misplaced, as LLMs are not (re)producers of facts but rather producers of stories.

The concluding talk will focus on the risks of manipulation that arise in political communication through the use of Generative Artificial Intelligences (GenAIs). It will discuss how GenAI tools, such as LLMs and AI-generated content, represent a qualitative shift from traditional forms of digital manipulation. The ability to generate realistic content and simulate human interactions increases the risk of epistemic confusion and reduced trust in democratic processes. It will be argued that manipulation no longer occurs only at the level of content dissemination, but also through interaction, which has a greater impact on belief formation. Microtargeting strategies and the use of social bots will be analysed as forms of manipulation enhanced by GenAI.

Each presentation will last approximately 15 minutes and will be combined into a single lecture, leaving about 30 minutes for discussion afterwards. One of the panel members will be in charge of moderation.

 

Presentations of the Symposium

 

Understanding generalization in large language models

Alessio Miaschi
Cnr-Istituto di Linguistica Computazionale “Antonio Zampolli”

The advent of recent Large Language Models (LLMs) has revolutionized the landscape of Computational Linguistics and, more broadly, Artificial Intelligence. These models, built on the Transformer architecture, have introduced unprecedented advancements in understanding, processing, and generating human language. Transformer-based Language Models have demonstrated remarkable capabilities in solving a wide range of tasks and practical applications, spanning from machine translation and text summarization to conversational agents and sentiment analysis. Beyond task-specific applications, these models exhibit an extraordinary ability to generate coherent and contextually relevant text, underscoring their potential to capture complex linguistic structures and fine semantic nuances with high precision and accuracy.

In this context, recent years have seen an increasing focus on studying and evaluating the generalization abilities of such models. On one hand, numerous studies highlight these models’ ability to discover regularities that generalize to unseen data (Lotfi et al., 2024), thus allowing them to produce novel content across different contexts, domains, and use cases. On the other hand, many works point out that these generalization capabilities often stem from memorization phenomena rather than true abstraction or reasoning. Evidence for this has emerged e.g. from studies investigating data contamination in the evaluation of Language Model capabilities (Deng et al., 2024). Antoniades et al. (2024) pointed out that LLMs tend to memorize more when engaged in simpler, knowledge-intensive tasks, while they generalize more in harder, reasoning-intensive tasks. A fundamental challenge underlying this debate is the lack of consensus on what constitutes good generalization, the different types of generalization that exist, how they should be evaluated, and which should be prioritized in varying contexts and applications (Hupkes et al., 2023).

This talk will provide an overview of the latest developments in research on understanding and analyzing the generalization processes of state-of-the-art Language Models. In particular, we will focus on a study evaluating the lexical proficiency of these models, emphasizing their generalization abilities in tasks involving the generation, definition, and contextual use of lexicalized words, neologisms, and nonce words. Findings reveal that LMs are capable of learning approximations of word formation rules, rather than relying solely on memorization, thus demonstrating signs of generalization and the ability to generate plausible and contextually appropriate novel words.
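
To make the kind of lexical probe described above concrete, here is a minimal sketch, assuming access to an open model (GPT-2) through the Hugging Face transformers library; the nonce word and prompt are invented for illustration and are not taken from the study itself.

# Minimal probing sketch (assumptions: GPT-2 via Hugging Face `transformers`;
# the nonce word "flimbering" and the prompt are invented for illustration).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# A nonce word cannot have been memorized from pretraining data, so any
# plausible contextual use suggests generalization of word-formation patterns
# rather than recall.
prompt = (
    "The verb 'to flimber' means to move in a quick, nervous way. "
    "Example sentence using 'flimbering':"
)
out = generator(prompt, max_new_tokens=30, do_sample=True, temperature=0.8)
print(out[0]["generated_text"])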

Bibliography

Sanae Lotfi, Marc Anton Finzi, Yilun Kuang, Tim G. J. Rudner, Micah Goldblum, Andrew Gordon Wilson (2024). Non-Vacuous Generalization Bounds for Large Language Models.

Chunyuan Deng, Yilun Zhao, Yuzhao Heng, Yitong Li, Jiannan Cao, Xiangru Tang, Arman Cohan. Unveiling the Spectrum of Data Contamination in Language Model: A Survey from Detection to Remediation. In Findings of the Association for Computational Linguistics: ACL 2024.

Antonis Antoniades, Xinyi Wang, Yanai Elazar, Alfonso Amayuelas, Alon Albalak, Kexun Zhang, William Yang Wang (2024). Generalization vs. Memorization: Tracing Language Models' Capabilities Back to Pretraining Data. In Proceedings of the ICML 2024 Workshop on Foundation Models in the Wild.

Dieuwke Hupkes, Mario Giulianelli, Verna Dankers, Mikel Artetxe, Yanai Elazar, Tiago Pimentel, Christos Christodoulopoulos, Karim Lasri, Naomi Saphra, Arabella Sinclair, Dennis Ulmer, Florian Schottmann, Khuyagbaatar Batsuren, Kaiser Sun, Koustuv Sinha, Leila Khalatbari, Maria Ryskina, Rita Frieske, Ryan Cotterell & Zhijing Jin. A taxonomy and review of generalization research in NLP. Nature Machine Intelligence, volume 5, pages 1161–1174 (2023).

 

Large language models are conversational zombies. Chatbots and speech acts: how to (not) do things with words

Laura Gorrieri
Università di Torino

Transformer models such as BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer) have revolutionised NLP (Natural Language Processing), achieving unprecedented results across different tasks such as translation, summarization, and dialogue generation. These models leverage innovative mechanisms like attention and task transferability, enabling applications such as chatbots to produce coherent, contextually appropriate conversations on virtually any topic. Despite their remarkable ability to generate human-like text, questions persist regarding their capacity to perform genuine speech acts. Specifically, can these models truly "do things with words", or do they merely simulate linguistic competence?
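
For readers less familiar with the mechanism mentioned above, the following is a minimal NumPy sketch of scaled dot-product attention, the core operation of Transformer models; it is an illustrative reconstruction, not material from the talk.

# Minimal sketch (illustrative only) of scaled dot-product attention.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each output row mixes the value vectors V, weighted by how strongly
    the corresponding query matches each key."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # context-mixed output

# Toy example: 3 token positions with 4-dimensional representations.
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V))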

This paper explores the limitations of Transformer-based Large Language Models (LLMs) through the lens of speech act theory, a framework introduced by Austin (1962) and further developed by Searle (1969). The idea at the core of this theory is that language is an action, and therefore when one speaks they are at the same time doing things. Austin splits speech acts into three levels: the locutionary act (producing syntactically and grammatically correct sentences), the illocutionary act (what the speaker intends to do with their words, for example making a promise or giving an order), and the perlocutionary act (the effect elicited on the listener). LLMs nowadays master the locutionary act, generating grammatically accurate and semantically coherent text, and often achieve a perlocutionary response, influencing user reactions or decisions. However, this paper argues that LLMs fundamentally lack the capacity for illocutionary acts due to their absence of intention, a key component of speech acts.

Intention is central to illocutionary acts, as it is needed to convey illocutionary force to one’s words. For instance, saying "I am so sorry that didn’t work out" is not merely an expressive utterance; it embeds an intention to empathize with the listener, an act rooted in agency and self-directed goals. In contrast, LLMs operate as statistical models, selecting words based on probability distributions rather than intentions. They mimic intention but do not possess it. Gubelmann (2024) emphasizes this distinction, likening chatbot-generated responses to a tortoise accidentally forming words in the sand: the words may appear purposeful, but no intention exists behind them.

However, LLMs achieve a unique phenomenon: the perlocutionary effect they generate often aligns with the perceived illocutionary force, leading users to project intention onto the chatbot. This phenomenon is described as “conversational zombies,” paralleling philosophical zombies that mimic human behaviour without consciousness. LLMs emulate illocutionary acts convincingly enough to influence user emotions, decisions, and even legal outcomes, as demonstrated in the Air Canada case, where a chatbot’s response had binding legal implications.

This paper underscores the dual nature of chatbot interactions: while lacking genuine intention, LLMs shape real-world outcomes through their perlocutionary impact. Their capacity to mimic illocutionary force raises ethical and practical questions about their deployment in many domains. As chatbots become increasingly integrated into daily life, understanding their limitations and the active role of users in ascribing meaning becomes essential.

Bibliography

Austin, J. L. (1962). How to Do Things with Words (M. Sbisá & J. O. Urmson, Eds.). Clarendon Press.

Gubelmann, R. (2024). Large language models, agency, and why speech acts are beyond them (for now) – a Kantian-cum-pragmatist case. Philosophy & Technology, 37(1). https://doi.org/10.1007/s13347-024-00696-1

Searle, J. R. (1969). Speech Acts: An Essay in the Philosophy of Language (1st ed.). Cambridge University Press. https://doi.org/10.1017/CBO9781139173438

 

The double LLM trust fallacy

Francesco Striano
Università di Torino

In today’s context of the growing popularity of generative artificial intelligence, Large Language Models (LLMs) represent an important technological and cultural phenomenon. However, the trust placed in these technologies raises critical questions, especially regarding their reliability and their alleged moral or intentional “trustworthiness.” This talk will explore the conceptual foundations of trust in LLMs. It will be argued that this trust rests on a double misunderstanding: an improper superimposition of the notion of reliability typical of linear, non-probabilistic technologies and a false attribution of “trustworthiness” as if LLMs possessed intentions or motivations.

Traditionally, trust in digital technologies has been based on their ability to deliver reliable and predictable results. Traditional digital systems follow deterministic models in which a given input produces an output determined by precise and repeatable rules. This has led to an implicit extension of this notion of reliability to new technologies such as LLMs. However, LLMs operate on a probabilistic basis and generate answers based on statistical models learnt from large amounts of text data. This means that, unlike the outputs of deterministic technologies, the outputs of LLMs are neither always reproducible nor necessarily accurate; they are only probabilistically likely.
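
As a purely illustrative sketch (not part of the talk; the functions, answer strings and weights are invented), the contrast can be put in a few lines of Python: a deterministic procedure returns the same output for the same input every time, while an LLM-style generator samples from a distribution, so repeated calls on the same input may differ.

    # Illustrative only: the functions, answers and weights are invented.
    import random

    def deterministic_converter(celsius):
        # Same input always yields the same output: classic reliability.
        return celsius * 9 / 5 + 32

    def probabilistic_answer(prompt):
        # Same input may yield different outputs: a weighted draw over candidates.
        options = ["Paris.", "The capital of France is Paris.", "It's Paris!", "Lyon."]
        weights = [0.55, 0.30, 0.10, 0.05]
        return random.choices(options, weights=weights, k=1)[0]

    print(deterministic_converter(20), deterministic_converter(20))   # always identical
    print(probabilistic_answer("Capital of France?"))                 # may vary run to run
    print(probabilistic_answer("Capital of France?"))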

In parallel, there is a tendency to view LLMs as “trustworthy” in a more human sense, almost as if they were entities with intentions or morality. This perspective anthropomorphises the capabilities of machines, ascribing to them a form of “trustworthiness” more appropriate to a human agent than to an algorithmic system. LLMs, in fact, are not aware of the information they produce and have no commitment to truth or accuracy. They act as “story producers,” generating narratives that may be persuasive but are not necessarily true or accurate.

These two conceptual fallacies lead to false confidence in LLMs. The perception of technical reliability clashes with the reality of their probabilistic nature, while the attribution of moral trustworthiness raises unrealistic expectations of what LLMs can offer. This situation raises important ethical and practical issues, particularly in contexts where decisions based on information generated by LLMs can have significant consequences.

The talk will commence by delineating the concepts of reliability and trust and their application to the relationship between humans and technology. It will then describe the undue over-extension of the perception of reliability from linear to probabilistic technologies, and discuss the undue attribution of intention and trustworthiness. Finally, the importance of providing users - and policymakers - with a deeper and more nuanced understanding of the capabilities and limitations of generative artificial intelligence will be emphasised.

Bibliography

de Fine Licht, K., Brülde, B. (2021). On Defining “Reliance” and “Trust”: Purposes, Conditions of Adequacy, and New Definitions. Philosophia, 49, 1981-2001. https://doi.org/10.1007/s11406-021-00339-1

Eberhard, L., Ruprechter, T., Helic, D. (2024). Large Language Models as Narrative-Driven Recommenders. arXiv preprint. https://doi.org/10.48550/arXiv.2410.13604

Gorrieri, L. (2024). Is ChatGPT Full of Bullshit?. Journal of Ethics and Emerging Technologies, 34(1). https://doi.org/10.55613/jeet.v34i1.149

Shionoya, Y. (2001). Trust as a Virtue. In: Shionoya, Y., Yagi, K. Competition, Trust, and Cooperation: A Comparative Study. Berlin-Heidelberg: Springer, 3-19. https://doi.org/10.1007/978-3-642-56836-7_1

Striano, F. (2024). The Vice of Transparency. A Virtue Ethics Account of Trust in Technology. Lessico di Etica Pubblica, 15(1) (forthcoming).

Taddeo, M. (2017). Trusting Digital Technologies Correctly. Minds and Machines, 27, 565-568. https://doi.org/10.1007/s11023-017-9450-5

Wang, P. J., Kreminski, M. (2024). Guiding and Diversifying LLM-Based Story Generation via Answer Set Programming. arXiv preprint. https://doi.org/10.48550/arXiv.2406.00554

 

Generative AI, political communication and manipulation: the role of epistemic agency

Maria Zanzotto
Università di Torino

The rise of generative AI (GenAI) systems presents important ethical, political, and epistemological challenges, particularly in political communication. This paper explores how GenAI tools, such as large language models (LLMs) and AI-generated content, reshape the landscape of digital manipulation compared to traditional machine learning algorithms. The starting point is the established framework of digital manipulation in social media platforms, notably exemplified by the Cambridge Analytica scandal, where algorithms distributed targeted political messages based on users' extracted data to influence voter behavior. This type of manipulation, described by Ienca (2023), focuses on the distribution of content, data extraction, and passive interaction, with human intentions behind the system’s design and usage as a necessary condition (referred to as the “intentionality” condition).

However, the introduction of GenAI technologies marks a qualitative shift. GenAI tools actively generate content that appears realistic and human-like, fostering interactions that blur the boundaries between human and machine communication. This indistinguishability, where users struggle to differentiate between AI-generated and authentic content or profiles, is unprecedented in scale and poses significant threats to epistemic agency (Coeckelbergh, 2023) - the capacity to form, control, and trust one’s beliefs. Unlike earlier AI systems that manipulated content distribution, GenAI tools engage users in human-like conversations, creating the illusion of authentic communication and leading to potential manipulation of beliefs.

A key challenge with LLMs is that users tend to anthropomorphize these systems, attributing mental states such as beliefs, intentions, or desires. However, these AI tools do not possess true understanding or agency; they merely produce outputs by predicting the next most probable word based on patterns in large datasets. This has led researchers to describe them as "stochastic parrots" (Bender et al., 2021) - powerful computational systems that reorganize existing data without genuine comprehension. The anthropomorphic style of chatbots, which often use phrases like “I think” or “I believe,” reinforces the illusion of intelligence and encourages users to apply what philosopher Daniel Dennett calls the intentional stance, wherein humans interpret behavior in terms of mental states. This illusion creates fertile ground for manipulation, particularly in political communication, where trust and authenticity are crucial.

The study identifies two key pathways of manipulation with GenAI: microtargeting, where AI-generated messages are tailored to individuals, and the use of social bots that simulate human interaction. These processes increase the risk of epistemic confusion, diminishing users' trust in digital environments and, by extension, in democratic processes. While Ienca’s framework highlights the intentions behind manipulation, it falls short in addressing the active role played by GenAI tools in fostering false beliefs through their anthropomorphic and probabilistic nature.

This research argues that GenAI-induced indistinguishability fundamentally impacts political communication, necessitating a revised framework to address new forms of digital manipulation. The manipulation of interaction, rather than mere content distribution, represents a stronger influence on belief formation and autonomy.

Bibliography

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922

Coeckelbergh, M. (2023). Democracy, epistemic agency, and AI: political epistemology in times of artificial intelligence. AI and Ethics, 3, 1341-1350. https://doi.org/10.1007/s43681-022-00239-4

Dennett, D. (2023, May 16). The problem with counterfeit people. The Atlantic. Retrieved August 2024, from https://www.theatlantic.com/technology/archive/2023/05/problem-counterfeit-people/674075/

Ienca, M. (2023). On Artificial Intelligence and Manipulation. Topoi, 42, 833-842. https://doi.org/10.1007/s11245-023-09940-3

 
3:00pm - 4:30pm(Workshop) Reviewing and publishing for early career researchers: a bridge towards scholarly expertise
Location: Auditorium 4
 

(Workshop) Reviewing and publishing for early career researchers: a bridge towards scholarly expertise

Chair(s): Behnam Taebi (TU Delft), Diana Adela Martin (University College London, United Kingdom)

Workshop description

Publishing and reviewing are an important part of academic work. Peer-review is essential for evaluating and strengthening the research conducted and submitted for publication by one’s peers, while also helping reviewers gain a better understanding of the standards of academic publishing and develop themselves as authors. Being a fair and rigorous reviewer means being a valuable member of a discipline and academic community. Nevertheless, the process of publishing and reviewing can be daunting when first approaching it.

The workshop aims to introduce the audience to the publishing process in Science and Engineering Ethics and the role of peer review. This session will include real-world examples, interactive activities, and discussions with members of the Science and Engineering Ethics editorial team, which will provide participants with practical strategies for publishing in the journal and conducting constructive reviews. Facilitators will be members of the editorial team attending the conference.

The target audience comprises early career researchers (doctoral and postdoctoral researchers) who are new to publishing and peer-reviewing, as well as faculty members at other career stages looking to enhance their reviewing skills and knowledge of the publication process.

Objectives

By the end of this workshop, participants will be able to:

1) Gain awareness of the aims, scope, review criteria and publication process of the journal Science and Engineering Ethics.

2) Identify the roles of authors, reviewers, and editors in the peer review process.

3) Apply review criteria to evaluate the quality of a manuscript.

4) Understand what counts as providing constructive feedback and recommendations that help authors improve their manuscripts.

5) Navigate ethical considerations and avoid common pitfalls in peer reviewing.

6) Incorporate these best-practice insights into their own authorial practices.

 

Presentations of the Symposium

 

Session structure

Behnam Taebi1, Diana Martin2
1TU Delft, 2University College London - Center for Engineering Education

10 min: Welcome and introduction to the journal Science and Engineering Ethics, including the journal's mission, aims, scope, impact and publication process.

10 min: Presentation of the roles of authors, reviewers, and editors in academic publishing

15 min: Plenary discussion with the audience on their own experiences with publishing, receiving and writing reviews, the challenges encountered, or questions they might have about the process

25 min: Activity in which participants are split into groups. Each group is handed a mock review and is invited to discuss and comment on its message. Based on this discussion, participants will draft a proposal highlighting three main features of a poor review and three of a constructive review.

20 min: Plenary discussion. The groups reconvene in a plenary discussion, where each group presents its proposal.

10 min: Conclusion summarising key points and best practices for reviewing and writing, with final tips from the editors for preparing and reviewing manuscript submissions.

The workshop can also be adapted to a 60 min session.

 
3:00pm - 4:30pm(Symposium) Engineering science, artificial intelligence and philosophy: an interdisciplinary dialogue
Location: Auditorium 5
 

Engineering science, artificial intelligence and philosophy: an interdisciplinary dialogue

Chair(s): Dazhou Wang (University of Chinese Academy of Sciences, Beijing), Christopher Coenen (Institute of Technology Assessment and Systems Analysis (KIT-ITAS)), Aleksandra Kazakova (University of Chinese Academy of Sciences, Beijing)

As the example of Socrates shows, philosophy is essentially a dialogue. Guided by this spirit, this forum sincerely invites engineering scientists, computer scientists, engineering practitioners, and philosophers of science and technology to engage in an interdisciplinary dialogue. This dialogue aims to explore the nature of engineering science, the complex connections between engineering and science, the characteristics of artificial intelligence, its impact on engineering science, and related questions. The content of the symposium covers many cutting-edge fields, including aviation engineering, cryogenic engineering, petroleum exploration engineering, metallurgical process engineering, astronaut training, human stem cell-based embryo models, AI-driven synthetic biology, biomedicine, swarm intelligence, and data science and engineering. Through these cases, participants provide multi-dimensional philosophical insights from their respective professional backgrounds.

The speakers, drawn from philosophy, engineering and computer science, make this forum not only an interdisciplinary dialogue but also a cross-boundary one. By sharing their research findings and reflections, experts from different fields facilitate a deeper understanding of the relationship among natural science, engineering science, and engineering practice, of the relationship between AI and engineering, and of basic concepts of the philosophy of engineering science, embodying the fundamental spirit of philosophy. Through interdisciplinary collaboration, participants can better understand the complexity of engineering science, explore its potential in practical applications, and lay a solid foundation for future technological innovation. The achievements of this dialogue are not only reflected at the academic level, especially as regards the philosophy of engineering science, but may also have a profound impact on engineering practice, driving the common progress of engineering and philosophy.

 

Presentations of the Symposium

 

Practice is the source of true knowledge: Lessons from the flight experiments of Samuel Langley and the Wright brothers

Fangyi Shi, Nan Wang
University of Chinese Academy of Sciences, Beijing

Samuel Langley, a natural scientist, attempted to solve the problem of flight on the basis of natural science theories. In 1886, he designed a steam-powered whirling-arm apparatus for aerodynamic testing. With it he quantitatively studied the laws of lift and drag generated by the movement of bird wings and flat plates in the air, and derived scientific conclusions and formulas. In 1891, he published the conclusions drawn from these experiments in the book "Experiments in Aerodynamics". Langley began designing an aircraft while conducting this theoretical research. In 1903 he carried out two flight tests of the "Air Traveler" aircraft he had designed himself; however, both tests failed due to accidents with the launch device.

Almost simultaneously, the Wright brothers, who had previously run a bicycle repair shop and had little formal education, drew on the research and experimental results of their predecessors (including Langley) and devoted themselves to the design and manufacture of aircraft. Between 1900 and 1903, the Wright brothers conducted multiple experiments at Kitty Hawk, North Carolina, USA. Their early attempts mainly focused on the design and flight testing of gliders. Through in-depth research on aerodynamics, the two brothers gradually mastered the basic principles of flight. After multiple adjustments and improvements, they finally achieved the first controlled powered flight in human history on December 17, 1903. In the following years, the Wright brothers continued to improve and experiment with aircraft. They gradually mastered key technologies such as flight control, wing design, and power systems, and successfully manufactured various models of aircraft.

This fact is surprising because, from the perspective of 'engineering is applied science', Langley should have been the first to succeed. This article compares and analyzes the two men's educational backgrounds, professional careers, flight test methods, flight test processes, and the dispute over invention rights, attempting to resolve this puzzle and to extend a basic understanding of the complex connections between natural sciences, engineering sciences, and engineering practice. The authors indicate that the success of the first manned aircraft, although inspired by aerodynamic research, was mainly the result of accumulated experience and repeated trial and error; for over a decade after the birth of the first aircraft, almost all progress in aircraft development was made while aerodynamic theory lagged far behind aviation practice. Of course, we know that with the great development of aviation engineering, aerodynamics has indeed become increasingly mature and, in turn, has become a powerful guide for the "rationalization" and "refinement" of aviation engineering practice. This means that we cannot logically derive engineering science from natural science and then rely on engineering science to ensure the success of innovative engineering practices. In any groundbreaking engineering practice, one can only accumulate experience and achieve success through repeated exploration and experimentation, while developing engineering science theories along the way.

 

A reflection on the development of cryogenic engineering

Zhongjun Hu1, Dazhou Wang2
1Chinese Academy of Sciences, Beijing, 2University of Chinese Academy of Sciences, Beijing

Cryogenic engineering is essential for the advancement of frontier scientific research, including particle physics and the development of space technology. It was through the dedicated study of cryogenic technology that superconductivity and quantum physics were unexpectedly discovered. A distinctive feature of the evolution of cryogenic technology is that practical applications often precede theoretical research.

The core technologies of cryogenic engineering exhibit remarkable philosophical characteristics, reflecting the ingenious integration of seemingly disparate principles. For instance, the development of a new type of screw compressor demonstrates the fusion of rotary motion with piston motion, highlighting the interdisciplinary nature of cryogenic engineering. This creativity is reminiscent of ancient observations of natural phenomena, such as the changes in the heavens and earth, which led to the creation of blowing equipment with a breathing function. These early innovations laid the groundwork for the industrial revolution’s steam engines, illustrating how technological advancements often emerge from addressing practical engineering challenges.

The development of cryogenic technology is deeply rooted in experimental practices, where achieving low temperatures involves a series of progressive precooling steps. This process necessitates a multidisciplinary approach, integrating physics, chemistry, materials science, and mechanical engineering. The discovery of superconductivity and superfluidity, two macroscopic quantum effects, significantly challenged classical physics, pushing the boundaries of what was thought possible. These groundbreaking discoveries were triggered by the liquefaction of helium and advancements in temperature measurement, underscoring the critical role of foundational research in driving technological progress.

The history of cryogenic engineering provides valuable insights into the process of innovation in scientific and technological fields. It reveals that technological breakthroughs often arise from solving practical problems, and that innovation is not always a purposeful endeavor. The iterative cycle of experimentation, theory, and application that characterizes cryogenic development serves as a model for other scientific and technological fields. This approach emphasizes the importance of interdisciplinary collaboration, the value of empirical discovery, and the foundational role of basic research in driving technological advancement.

 

Effective development of Gulong shale oil under the guidance of engineering philosophy

He Liu1, Dongqi Ji2
1Chinese Academy of Engineering, Beijing, 2Research Institute of Petroleum Exploration & Development, PetroChina

The Daqing Gulong continental shale oil national demonstration zone is located in the northern part of the Songliao Basin, China. The strategic breakthrough of the Gulong shale oil marks one of the most significant oil and gas discoveries in China in the 21st century. The ability to develop shale oil at a large scale is vital for ensuring national energy security and seizing the high ground in land-based shale oil technology. However, due to its unique geological characteristics, existing theories and technologies face significant challenges. These include geological uncertainties, engineering difficulties, and management inefficiencies in oil development.

Guided by engineering philosophy, we have been systematically summarizing engineering implementation experiences through the cycle of "practice, understanding, re-practice, re-understanding." This iterative process has been crucial in refining our approach and addressing the complexities inherent in shale oil extraction. By applying the "law of unity of opposites," we have identified and addressed four key dialectical relationships in Gulong shale oil development: "whole and part," "universality and particularity," "major and minor contradictions," and "inheritance and innovation."

This approach has successfully promoted technological innovation and continuous improvement. For instance, we have developed advanced drilling and completion techniques, enhanced recovery methods, and integrated digital technologies to improve efficiency and reduce costs. Our goal is to achieve a production capacity of one million tons by 2025, reach a three-million-ton production scale by 2030, and construct a key production base of five million tons by 2035. These targets are not only aimed at ensuring the high-quality and sustainable development of the Daqing Oilfield but also at contributing to China's shale revolution. By achieving these milestones, we can further provide valuable references for global resource development projects, particularly those in complex and challenging geological environments.

The achievement in Gulong is not only a key technological breakthrough in shale oil exploration and development, but also represents a theoretical breakthrough from terrestrial shale oil generation to terrestrial shale oil production. To further strengthen research on terrestrial shale oil in China, the "National Key Laboratory for Green Extraction of Terrestrial Shale Oil with Multi-Resource Synergy" has been established in Daqing. This clearly demonstrates that engineering demands are a significant driving force behind the development of engineering science, and that engineering philosophy can play a crucial role in engineering innovation and the advancement of engineering science.

 

Engineering innovations in novel supercritical fluids energy and power systems: from fundamentals to application demonstrations

Lin Chen
Chinese Academy of Sciences, Beijing

Trans-critical and supercritical fluid engineering has become a key technology in the energy and power sector, spanning solar, nuclear, coal-fired and other applications. The transition from the conventional water-based Rankine cycle to new supercritical CO2-based systems poses challenges as the engineering scales up from small-scale concept designs to real engineering applications. The engineering philosophy of such systemic transitions involves the management logic of the engineering team, the technological innovation chain, and the scaling challenges in demonstrating new technologies. Such engineering progress depends on new organizational forms of technological innovation, the incorporation of AI-assisted design and analysis platforms, and the application demonstration of commercial-scale supercritical CO2 systems.

The current study extends fundamental experimental quantifications of supercritical-region fluid dynamics under representative geometries, taking into account the unique thermophysical properties of such fluid states. A supercritical fluid is special because its thermal and transport properties vary far more strongly than those of normal or two-phase fluids used as working media in energy systems and/or chemical reaction environments. The critical point separates distinct fluid regions: the density varies continuously but with large gradients in the near-critical region, and the specific heat shows a narrow, steep peak across the near-critical gas-liquid region as well as in the supercritical region. These property trends give special advantages for near-critical compression and heat transfer enhancement (and also possible deterioration) in power systems such as the Brayton cycle for energy conversion (coal-fired, solar, geothermal, etc.). However, these changes in detailed chamber/channel/compression/expansion flow situations can cause stability and efficiency problems under sudden changes of transport mechanisms. To understand such basic case trends and general fluid-machinery problems, the current study proposes the application of pixelated interferometry to field-based quantification of boundary heat transfer flows of supercritical fluids, so as to open new possibilities for correlation upgrading and mechanism understanding in real designs and applications.

The engineering execution and experience of the Institute of Engineering Thermophysics, Chinese Academy of Sciences, will be introduced and discussed in this study. As the development of supercritical CO2 power and energy systems worldwide is becoming highly technologically competitive, new technological innovation routes are urgently needed for the implementation of such systems in the coming era of carbon neutrality.

 

The enhancement of technical requirements for astronaut training in deep space exploration and philosophical reflections

Zhihui Zhang
Chinese Academy of Sciences, Beijing

Today, major countries around the world are increasing their investments in deep space exploration, which is accompanied by intensified training for astronauts. This training is not only aimed at pushing the physiological limits of humans but also serves as a trial in exploring the profound mysteries of the universe under extreme conditions. Deep space exploration demands that astronauts survive for long periods in isolated environments characterized by extremely low temperatures and high radiation. This environment not only tests their physical endurance but also poses severe challenges to their psychological resilience and the ontology of their existence as carbon-based life forms. In this context, the use of intimate technologies—such as physiological monitoring, psychological state assessment, and emotional support from Earth and artificial intelligence—enables astronauts to maintain inner stability and positivity when facing the unknown and challenges.

From a philosophical perspective, the introduction of intimate technology prompts us to reevaluate human existence. In the face of the vastness and loneliness of the universe, technologies such as space centers on Earth and video calls with loved ones help bridge the gap between humanity and the cosmos. Technology is not merely a tool; it becomes an extension of the astronauts' power in the universe. Auxiliary devices such as robotic arm training, safety toilets, body cleansing tools, and more comfortable space suits assist astronauts in overcoming physiological and psychological limitations, thereby affirming their subjectivity and dignity as Earthlings. The changes in the physiological state of astronauts during their return from space reflect a deepening of humanistic care: from previously walking directly from a lying position to now being assisted by professional medical personnel during egress, this change is not only about physiological adaptation but also a redefinition of human dignity and care.

However, the scientific and technological experiments involved in deep space exploration also raise profound ethical and responsibility issues. In long-term space travel, how to balance reliance on artificial intelligence with respect for the autonomy of astronauts has become an urgent philosophical question. For instance, how can we ensure that astronauts' privacy rights are not violated? How might potential rebellious behaviors of artificial intelligence affect human decision-making? Additionally, in cases of reproduction in space, whether the resulting humans still belong to the category of "human" raises questions about the re-examination of human identity.

In summary, the application of intimate technology in the intensive training of astronauts not only helps them adapt to the challenges of deep space exploration but also urges us to reflect on the significance of human existence in cosmic exploration. In this process, we should maintain a reflection on and respect for humanity itself, contemplating how to safeguard human dignity and values while exploring the unknown. In the dialogue between humanity and the cosmos, technology becomes our partner, while philosophy guides us in contemplating the profound impacts of this journey.

 
3:00pm - 4:30pm(Symposium) Technologies at the limits of language – Symposium on conceptuality, metaphorisation & narration
Location: Auditorium 6
 

Technologies at the limits of language – Symposium on conceptuality, metaphorisation & narration

Chair(s): Leonie Möck (University of Vienna), Wenzel Mehnert (Austrian Institute of Technology, Austria & TU Berlin), Bruno Gransche (Karlsruhe Institute of Technology), Nele Fischer (Technical University Berlin), Nils Neuhaus (Technical University Berlin)

Emerging technologies move at the limits of language. People (in the end usually a small group of them) struggle to find appropriate ways of referring to them, search for the right concepts, and often use metaphors to catch the supposedly right meaning and evoke the desired associations, initiating processes that are barely under our control.

From a more comprehensive perspective, the limits of language are an omnipresent phenomenon we have to face at any given time and in any given context. Language is an important tool for human engagement with the world and at the same time a source of confusion: we often end up as flies in Ludwig Wittgenstein’s iconic fly bottle.

How do technologies stress or overload our thinking and talking about them? And what expectations and imaginaries are caused or invited by certain linguistic expressions? Taking the linguistic and discursive crystallizations of our engagements with socio-technical systems – the narratives, metaphors, or clusters of conceptuality – as entry points to our struggles of reference, we learn about the frames of thinking that get imprinted onto our engagements with technologies and techno-imaginaries.

Scratching at the boundaries of language then is also a way to initiate a process for reimagining technologies in better ways, in the sense of a constant practice of revising patterns and images of thought, while putting them under ethical evaluation. How can we (if we should decide to do so and find it legitimate) use linguistic techniques to influence discourse and subsequently awareness and behaviour (green IT, responsible innovation, etc.)?

Lastly, supposedly familiar concepts such as intelligence get reshaped in the light of our artifacts and techniques and postphenomenology has shown that technologies shape our hermeneutic relations. So, addressing the material hermeneutics of technologies can help to avoid linguistic monism, taking account of the limits of language as an epistemic source of explanation and worldmaking. As Karen Barad, Donna Haraway and others have argued, there has to be an account of the world that is not falling back on language only, while at the same time we have to acknowledge that there is no world ‘for us’ apart from construction, no direct access to a pure reality. So how do we productively stay with the trouble of recognizing both the limits of linguistic language and the agency of the world?

This panel will be organized by the Special Interest Group (SIG) on “Languages of Technology: Prompts, Scripts, Narratives, and Grammars of Composition”. There will be short inputs by the discussants followed by a moderated discussion between all panelists. The main findings of the discussion will simultaneously be documented.

 

Presentations of the Symposium

 

List of discussants

Wenzel Mehnert1, Leonie Möck2, Maximilian Roßmann3, Mark Coeckelbergh4, Kanta Dihal5, Galit Wellner6, Yu Xue4, Alexandra Kazakova7
1Austrian Institute of Technology, Austria & TU Berlin, 2University of Vienna, 3Vrije Universiteit Amsterdam, Netherlands, 4University of Vienna, Austria, 5Imperial College London, UK, 6Holon Institute of Technology (HIT), Israel, 7University of Washington, USA

Biographies

Alexandra Kazakova

(tbd)

Prof. Galit Wellner, PhD, is an Assistant Professor at Holon Institute of Technology (HIT) and an adjunct professor at Tel Aviv University. Galit studies digital technologies and their interrelations with humans. She is an active member of the Postphenomenology Community. She published peer-reviewed articles and book chapters, and edited special issues of Techne and some collections. Her book A Postphenomenological Inquiry of Cellphones: Genealogies, Meanings and Becoming was published in 2015 by Lexington Books. She translated to Hebrew Don Ihde’s book Postphenomenology and Technoscience (Resling 2016). She coedited Postphenomenology and Media: Essays on Human–Media–World Relations (Lexington Books, 2017) and The Philosophy of Imagination: Technology, Art, Ethics (Bloomsbury Academic, 2024). Her research on AI led her to become the academic advisor of the AI Regulation Forum of the Israeli Ministry of Innovation, Science and Technology and before that to become a member of the stakeholder board of SHERPA, an EU Horizon project for the shaping of the ethical dimensions of smart information systems (2020-1). In the past Galit was the vice-chair of Israeli UNESCO’s Information for All Program (IFAP) and a board member of the FTTH (Fiber-to-the-Home) Council Europe.

Dr. Kanta Dihal is Lecturer in Science Communication at Imperial College London, where she is Course Director of the MSc in Science Communication, and Associate Fellow of the Leverhulme Centre for the Future of Intelligence, University of Cambridge. Her research focuses on science narratives, particularly science fiction, and how they shape public perceptions and scientific development. She is co-editor of the books AI Narratives (2020) and Imagining AI (2023) and has advised international governmental organizations and NGOs. She holds a DPhil from the University of Oxford on the communication of quantum physics.

Prof. Dr. Mark Coeckelbergh is Professor of Philosophy of Media and Technology at the University of Vienna, ERA Chair at the Czech Academy of Sciences in Prague, and Guest Professor at the University of Uppsala. He is a member of several advisory bodies including the federal Belgian Committee for Ethics of Data and AI, the advisory council of the Austrian UNESCO Commission, and previously the High Level Expert Group on AI of the European Commission. He is author of numerous books including AI Ethics, The Political Philosophy of AI, and Why AI Undermines Democracy. Previously he was the President of the Society for Philosophy and Technology (SPT).

Dr. Maximilian Roßmann is a Postdoctoral Researcher at the Department of Environmental Economics of the Institute for Environmental Studies (IVM), VU Amsterdam. His research explores how citizen narratives shape the perception of the European energy crisis. Before joining VU Amsterdam, he was a Postdoctoral Researcher at Maastricht University in the ERC project “NanoBubbles”. He obtained his PhD at the Karlsruhe Institute of Technology (KIT) and worked in TA projects on the Vision Assessment of microalgae nutrition, 3D printing, and nuclear waste management at the Institute for Technology Assessment and Systems Analysis (ITAS).

Yu Xue (Ph.D.) is an associate professor at the Department of Philosophy, Dalian University of Technology. She was a visiting fellow at Delft University of Technology (2015-2016) and the University of Vienna (2024-2025). Her research interests are in ethics of technology and philosophy of technology, with a particular focus on robotics ethics and AI ethics.

 
3:00pm - 4:30pm(Symposium) Between mind and machine: symbolic and phenomenological roots of computation
Location: Auditorium 7
 

Between mind and machine: symbolic and phenomenological roots of computation

Chair(s): Lorenzo De Stefano (Università degli Studi di Napoli Federico II, Italy), Felice Masi (Università degli Studi di Napoli Federico II, Italy), Francesco Pisano (Università degli Studi di Napoli Federico II, Italy), Luigi Laino (Università degli Studi di Napoli Federico II, Italy), Claudio Fabbroni (Università degli Studi di Napoli Federico II, Italy)

In the modern era, computation profoundly shapes how we think, communicate, and engage with the world. Yet the philosophical foundations of this transformative force—rooted in formal logic and symbolic manipulation—often remain underexamined. This symposium, “Between Mind and Machine: Symbolic and Phenomenological Roots of Computation,” brings together five complementary perspectives that reveal how the concept of computation both arises from and impacts human cognition, culture, and our very sense of self.

The first presentation examines writing as a cognitive practice interwoven with calculation. Although many fields—from archaeology to philosophy of mathematics—treat writing as pivotal to conceptual ordering, Husserl’s idea of “language as calculus” has seldom been applied to his understanding of writing. By revisiting Husserl from this angle, the speaker argues that writing functions as more than a communication tool: it can serve as a calculative medium yielding what one might call “computational evidence,” a clarity generated by the systematic manipulation of symbols. This approach expands our view of writing beyond a passive receptacle of ideas, suggesting instead a dynamic interplay between phenomenology and calculation.

Moving from symbolic notation to mechanical logic, the second talk explores Jevons’ “Logical Piano” (1866)—one of the earliest machines to automate logical inferences. Though overshadowed in standard accounts by Babbage or Boole, Jevons’ device offers a critical insight into the paradox of “intimate technology.” Computation, by its design, is universal and impersonal, yet it increasingly encroaches upon the most private areas of human life. Highlighting how Jevons improved on Boole’s symbolic logic and anticipated modern programming languages, the speaker shows that the idea of a “computational subjectivity”—rooted in Kantian rational autonomy—already contained latent tensions between algorithmic impersonality and personal meaning-making. This tension reverberates in our own time, illuminating why advanced computing technologies can feel both indispensable and disquietingly detached from human concerns.

The third presentation focuses on Turing, traditionally hailed as a principal figure in AI. Turing’s pivotal contributions—his 1936 paper “On Computable Numbers, with an Application to the Entscheidungsproblem” and his 1950 essay “Computing Machinery and Intelligence”—laid the groundwork for viewing machines as potential analogues to human thought. Here, using Eugen Fink’s distinction between “thematic” and “operative” concepts, the seldom-scrutinized assumptions beneath Turing’s explicit claims will be investigated. These include behaviorist and cybernetic elements that frame cognition as algorithmic rule-following. By reading Turing’s response to Ada Lovelace’s skepticism through this lens, the talk shows how Turing’s vision of the “child-machine” presupposes a specific view of learning and development. Such “operative” concepts continue to shape debates on AI: they predispose us to see intelligence in computational terms, even when this perspective sidesteps crucial questions about consciousness, creativity, or the nature of understanding.

The fourth presentation addresses whether the brain itself can be literally viewed as a computer. In contemporary neuroscience, many models interpret neuronal activity as inputs processed by algorithms. Despite the success of these computational approaches, the presenter questions whether such models capture the brain’s true workings or merely provide convenient abstractions. Given the brain’s staggering complexity, computational theories often rely on averaging data and filtering out variances. The speaker contends that these simplifications do not necessarily reveal an inherent computational essence. Rather, they offer valuable but ultimately heuristic insights—tools for managing complexity, rather than unearthing a fundamental computational identity of the brain. This reevaluation reminds us of the broader symposium theme: while formal frameworks can illuminate phenomena, they can also mask the unique richness and variability of lived experience.

In the final talk, computation is treated as a “symbolic form” in Ernst Cassirer’s sense—on par with art, myth, or language. Like these established symbolic forms, computation not only processes but also structures how we conceive and engage with the world. Large Language Models, for instance, handle symbols with astonishing facility yet lack reflexive consciousness. The speaker coins the phrase “shortcut-Geist” to highlight that while LLMs exhibit remarkable pattern recognition and problem-solving, they do not fulfill the deeper cultural-intentional criteria of human Geist. Through Cassirer’s conceptual framework, the presentation stresses that computational environments mold human reality as much as they mirror it. We have thus entered an era where code, algorithms, and digital infrastructures act as powerful cultural forces, shaping perceptions, values, and identities.

Taken as a whole, these five contributions shed light on a crucial paradox: computation, though originally framed as a purely formal and impersonal discipline, is integral to human life—whether via writing practices that encode cognitive strategies, logic machines that promise universal reasoning, AI architectures that blend mechanical rules with behavioral theory, or neuroscientific models that render the brain in algorithmic terms. By examining the historical arcs that gave rise to today’s technologies, alongside phenomenological insights into the subjective dimension of thinking, the symposium underscores how crucial it is to understand computation in a manner that neither overstates its universality nor underestimates its cultural entanglements.

In illuminating these entanglements, the symposium invites a recalibration: might we better reconcile formal computation with the diverse, context-dependent nature of human cognition if we treat symbolic activity as both technological and experiential? Could a re-examination of writing, Jevons’ logic, Turing’s AI concepts, the brain-as-computer analogy, and Cassirerian symbolic forms help us identify presuppositions that shape current debates about learning machines, consciousness, or the ethical boundaries of intimate computing devices?

Ultimately, the symposium demonstrates that computation is neither a mere technical tool nor an immutable feature of the natural world. It is, instead, a complex cultural practice and symbolic framework that interacts with—while also profoundly reshaping—human modes of thought and being. By joining historical, phenomenological, epistemological, and cognitive approaches, the symposium illuminates a shared set of questions: What does it mean to think about—and with—computation today? Where do the boundaries lie between human creativity and formal process? How can a deeper historical and philosophical perspective guide our response to emerging digital paradigms? In posing these questions, the symposium aims not only to clarify the roots of computational thinking but also to enrich the ongoing dialogue about technology’s place within contemporary culture.

 

Presentations of the Symposium

 

Writing as calculus. New sciences of writing and phenomenology

Felice Masi
Università degli Studi di Napoli Federico II, Italy

Since the 1990s, writing studies have undergone a concrete turn, focusing on the manipulation of material symbols and the links between writing and calculation. On the other hand, the claim that Husserl had an idea of language as calculus has not produced a revision of his conception of writing. I intend to propose a neo-Husserlian analysis of writing as a cognitive function of computation. The essay will thus be divided into four parts. In the first, I will outline the reasons for a neo-Husserlian supplement to the science of writing. In the second part, I will present the main results that archaeological investigations, psychological-cognitive analyses, the philosophy of mathematical practice and the philosophy of mediality have achieved on writing. In the third part, I will schematically present the Husserlian definitions of counting, operation, calculation, symbolic writing and reading. In the fourth part, I will show the different uses of writing for achieving the evidence of clarity and the evidence of distinction, and why the latter could also be defined as computational evidence.

 

(Logical) Piano lessons: Jevons and the roots of computational subjectivity

Francesco Pisano
Università degli Studi di Napoli Federico II, Italy

The concept of intimate technology owes a particular paradoxical nuance to the juxtaposition of intimacy and computation. The digital age, which allows for a deep diffusion and embedding of technology into one’s personal life, is historically and conceptually rooted in the logical theorization of recursively defined procedures for processing a multiplicity of inputs. Such computational procedures – automatic, universally applicable, input-independent – can be seen as structurally impersonal, with the input (or set of inputs) corresponding to significant, appropriately encoded aspects of each individual’s personal life. The vague sense of a profound incompatibility between computing and personal life pervades popular culture. This talk will offer some historical-critical coordinates to better frame this feeling. However, the breadth of the logical prehistory of the digital age makes it necessary to focus on a case study. I will focus on one case in particular: William Stanley Jevons’ (1835-1882) Logical Piano. Constructed in 1866 and first described in an 1870 paper, the Logical Piano was the first modern machine to compute logical inferences automatically. The description and contextualization of this logical machinery will be used as a case study to investigate the relationship between computation, automation, and impersonality. After a summary illustration of the machine's internal structure, I will discuss the connection between this structure and Jevons' system of equational logic, which derived from (and in some critical ways improved upon) the symbolic logic developed by Boole over two decades earlier. I will then highlight how, from the dialogue between Boole and Jevons, a link between the formalization of inference calculus, its automation, and its universalization emerges. In the British logico-philosophical culture of the time, these features were attributed to an idealized notion of computational subjectivity of Kantian derivation. This subjectivity was defined by its freedom from human cognitive limitations and structural logical impersonality. Along the complex lines that bound together the conceptual apparatus that by the 1950s had led to programming languages, through the developments that gave rise to the first precise notions of algorithmic computability in the 1930s, and tracing this history back to the nineteenth century, one recognizes again and again that this peculiar impersonality cannot but haunt any computing technology and thus generate internal friction in any concept of intimate technology.

 

Turing’s design of a brain. Operative and thematic concepts of computing machinery

Lorenzo De Stefano
Università degli Studi di Napoli Federico II, Italy

Alan Turing is unanimously recognized, alongside von Neumann, as the true father of Artificial Intelligence. Although the possibility of constructing logical or intelligent machines had already been explored by Babbage, Lovelace, and Jevons, and although the first mathematical model of a neural network was outlined by McCulloch and Pitts, it is in Turing’s 1936 essay On Computable Numbers, with an Application to the Entscheidungsproblem—and above all in Computing Machinery and Intelligence, published in Mind in 1950—that the idea of a potentially human-like artificial intelligence is explicitly thematized. The Turing machine (TM) is in fact a machine that, following a set of rules (the machine’s “program”), can perform any step-by-step computational procedure. This model captures the essence of algorithmic computation and underpins much of modern computer science theory.
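
As a purely illustrative sketch (an editorial example, not drawn from the essay), the rule-following character of a Turing machine can be rendered in a few lines of Python: a table maps each (state, symbol) pair to a symbol to write, a head movement, and a next state, and the hypothetical program below increments a number written in unary.

    # Illustrative only: a toy Turing machine. The rule table below is a hypothetical
    # "program" that appends a 1 to a unary string, i.e. it increments a unary number.
    def run_turing_machine(tape, rules, state="start", accept="halt", blank="_"):
        tape = list(tape)
        head = 0
        while state != accept:
            if head >= len(tape):
                tape.append(blank)                        # extend the tape with blanks as needed
            symbol = tape[head]
            write, move, state = rules[(state, symbol)]   # look up the rule and follow it
            tape[head] = write
            head += 1 if move == "R" else -1
        return "".join(tape)

    rules = {
        ("start", "1"): ("1", "R", "start"),   # scan right over the 1s
        ("start", "_"): ("1", "R", "halt"),    # write a 1 on the first blank, then halt
    }

    print(run_turing_machine("111", rules))    # -> 1111 (three becomes four in unary)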

Turing’s work lays the theoretical and epistemological groundwork for future debates on Artificial Intelligence, culminating in the foundational 1956 Dartmouth Conference. Yet what are the epistemological premises on which Turing establishes his parallel between computers and thought, and thus between the functioning of the machine and the human brain?

The aim of this essay is to investigate the hermeneutic, epistemological, and ontological assumptions that underpin Turing’s vision and would go on to influence the subsequent debate on Artificial Intelligence. To this end, the theoretical framework of reference is the distinction drawn by the phenomenologist Eugen Fink in his 1957 essay Operative Begriffe in Husserls Phänomenologie, between thematic concepts—namely, descriptive and explicit concepts in a given philosophical perspective (for instance, intentionality in Husserl or the transcendental in Kant)—and operative concepts, which operate behind the thematic concepts, borrowed from different models of thought. These latter remain overshadowed, not even explicit to the author who employs them, yet they continue to act behind a philosophical view as an unnoticed medium through which the thematic concept is conceived.

Within this framework, the theoretical model that Fink applies to Husserlian philosophy is applied here to the thematic concepts Turing develops in Computing Machinery and Intelligence, with the aim of bringing to light which operative conceptual a priori are at play in Turing’s pivotal response to the guiding question “Can machines think?” and, consequently, in his conception of Artificial Intelligence. Particular attention will be paid to the conception of the human being implied by the imitation game, and to the modern—yet also cybernetic and behaviorist—epistemological and conceptual foundations that inform the relationship between human and artificial intelligence. The goal is to expose which vision underlies the notion of a learning machine or a child-machine and on which assumptions Turing bases the analogy between human thinking and computing machinery. This amounts to highlighting those epistemological biases that have conditioned the debate on Artificial Intelligence from the outset and that continue to manifest themselves in contemporary discussions. The presentation is divided into four parts: 1. Methodological approach (Fink’s concept theory). 2. Exposition of Turing’s theoretical framework. 3. Identification of the operative concepts in Turing’s view and their historical and genealogical origins. 4. What remains of Turing’s conceptual framework in the contemporary debate?

 

Is your brain a sort of computer?

Claudio Fabbroni
Università degli Studi di Napoli Federico II, Italy

The relationship between brain and computer is central to neuroscientific research. Indeed, it is estimated that more than 80% of the articles in theoretical neuroscience focus on computational modeling, because of the efficacy this mode of investigation has demonstrated. Due to the success of the computational approach, the majority view among neuroscientists is a literal interpretation of the brain-computer comparison, according to which brains are in fact systems that, with various degrees of complexity, encode inputs, manipulate representations and transform them into outputs according to specific algorithms in order to respond to distal stimuli. That is, the literal, realist interpretation that the brain is a computer supposes that its computational structure is essential to the brain's having the cognitive capacities that it does.

This presentation argues against this realist stance, in favor of a more pragmatic one that underlines the heuristic value of the brain-computer comparison. The realist account seems to lack an adequate appreciation of the simplifications and abstractions at work in neuroscientific modeling, which are necessary to make the brain's activity intelligible to human scientists. This abstraction is unavoidable given the brain’s billions of non-identical neurons and trillions of ever-changing synapses, which are never in the same state twice and show extremely high trial-to-trial variability. In fact, computational neural models target averaged data, namely simplified and regularized patterns that are created through data processing, with the exclusion of outliers and the separation of signal from noise. They are artifacts that grant us epistemic access but do not necessarily correspond to an inherent natural regularity. That is to say, the mathematical structures that make cognitive functions intelligible to the scientist should not be taken as straightforward discoveries of inherent, human-independent computational capacities of the brain, but as useful mathematical descriptions that, through the analogy with computers, shed some light on the way the brain works. Thus, a pragmatic, analogical understanding of the brain-computer relationship, which declines to infer from the success of the computational approach that the neural system and its model compute the same functions, seems more appropriate to these models than a literal one.
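
A minimal, purely illustrative sketch of the kind of simplification at issue (the data and response shape below are invented, not drawn from the presentation): averaging many noisy, non-identical trials yields the smooth, regularized pattern that computational models typically target, while the trial-to-trial variability is discarded.

    # Illustrative only: invented "trials". Averaging noisy, non-identical trials
    # produces the smooth, regularized pattern that a computational model targets,
    # while the trial-to-trial variability is thrown away.
    import math
    import random

    def simulated_trial(n_bins=20):
        # A bump-shaped "response" plus substantial trial-specific noise.
        return [math.exp(-((t - 10) ** 2) / 10) + random.gauss(0, 0.4) for t in range(n_bins)]

    trials = [simulated_trial() for _ in range(200)]
    average = [sum(values) / len(values) for values in zip(*trials)]

    print([round(x, 2) for x in average])   # a clean bump; individual trials look far messier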

 

Computation as a symbolic form between humans and machines

Luigi Laino
Università degli Studi di Napoli Federico II, Italy

This paper argues that computation, with its own syntax, grammar, and logic, constitutes a unique symbolic form, akin to language, myth, and art as described by Ernst Cassirer in The Philosophy of Symbolic Forms. Like these other symbolic forms, computation provides a framework for representing, manipulating, and transforming information, fundamentally shaping human cognition and understanding of the world. Accordingly, I will divide the talk into three parts.

First, the presentation will explore the rationale for considering computation as a symbolic form, drawing upon Cassirerian concepts such as the creation of new realities through symbolic activity, thereby filling a gap in Cassirer’s own work.

Second, the focus will shift to analyzing whether Large Language Models (LLMs) can be considered “spiritual” agents (Geist) within this framework. Building upon Cassirer’s concepts of “Ausdruck” (expression), “Objektivierung und Darstellung” (objectification and representation), and “Bedeutung” (signification), the presentation will argue that while LLMs exhibit remarkable abilities, such as generating creative text and engaging in complex symbolic manipulations, their “Geist” remains fundamentally distinct from human intelligence. Drawing inspiration from Cristianini (2023), the presentation will introduce the concept of “shortcut-Geist” to characterize the unique form of intelligence exhibited by LLMs. I will argue that while machines exhibit impressive computational abilities, these abilities often operate in a manner reminiscent of certain aspects of animal intelligence, such as complex pattern recognition and problem-solving. Therefore, LLMs lack the self-awareness and subjective experience characteristic of human Geist, which impinges on their capacity to create “cultural products”.

Nevertheless, the presentation will finally examine the profound impact of computation as a symbolic form on human experience. Based on Cassirer’s emphasis on the role of symbols in shaping human experience and understanding, the presentation will point out that computational technologies are not merely tools but rather integral components of our symbolic environment, forging our perceptions, values, and ultimately, our sense of self.

Bibliography

Cassirer, Ernst. The Philosophy of Symbolic Forms. 3 vols. Translated by Ralph Manheim. Yale University Press, New Haven 1953-1957.

Cristianini, N. La scorciatoia. Il Mulino, Bologna 2023.

 
4:30pm - 5:00pmCoffee & Tea break
Location: Voorhof
5:00pm - 6:30pm(Symposium) Uncanny desires: AI, psychoanalysis, and the future of human identity
Location: Blauwe Zaal
 

Uncanny desires: AI, psychoanalysis, and the future of human identity

Chair(s): Luca Possati (University of Twente, The Netherlands), Maaike van der Horst (University of Twente)

Psychoanalysis has long provided a powerful lens for examining the complex interplay of desire, identity, and the unconscious. In the era of artificial intelligence (AI), these psychoanalytic concepts take on renewed significance as we grapple with how human desire is shaped—and continually reshaped—by technological innovation. As Thanga (2024) aptly highlights, psychoanalysis is crucial for understanding AI's impact because it reveals the incomputable that underpins the computable—what he describes as the "undecidable as an inherent aspect of any computable system." Psychoanalysis furthermore recognizes the inhuman aspects of the human, as the unconscious functions mechanically. This can cause uncanny feelings and desires. We experience repulsion yet fascination at the increasing human-likeness of AI and might increasingly desire to become more like AI. This panel aims to illuminate the connections between this structural incomputability, uncanniness, human desire, and identity, offering a fresh perspective on the ways AI and psychoanalysis intersect to shape our understanding of the self in a rapidly evolving technological landscape.

In psychoanalysis, desire is a central concept that transcends mere biological need or conscious demand. It is a dynamic and inexhaustible force rooted in the unconscious, manifesting as a ceaseless pursuit of what is lacking—something that, by definition, can never be fully attained. Desire is not an object to be possessed but a constitutive tension of human existence, intrinsically tied to the body. Desire has been conceptualized by a variety of psychoanalytic thinkers. Freud (1900, 1915, 1920) conceived of desire as the driving force of the instincts, an unconscious push emerging from the conflict between the life instincts (Eros) and death instincts (Thanatos). Lacan (1966) expanded and refined Freud’s ideas, framing desire as the product of a confrontation with manque (lack). We do not desire what fulfills our needs (biological) or what we consciously ask for (demand), but rather what is inaccessible—the enigmatic object of desire, which Lacan called objet petit a. This object, perpetually unattainable, structures and sustains desire. Winnicott (1953) offered a complementary perspective, linking desire to creativity and play. In his studies on transitional spaces, Winnicott emphasized how desire develops through objects that bridge the subject’s inner world and external reality. These objects—neither wholly internal nor fully external—allow individuals to explore, transform, and engage with the world while maintaining their sense of self.

The central questions for this panel are: How is human desire—in all its psychoanalytically illuminated dimensions—transformed by artificial intelligence? How does the concept of objet petit a evolve in interactions with AI systems that personalize experiences and desires? Do algorithms function as transitional objects, mediating the subject’s relationship with external reality? Does AI reinforce or distort unconscious desires through algorithmic personalization? What psychic mechanisms are activated in this process? Does AI generate new desires, or does it merely amplify preexisting ones, rendering them more visible and conscious? To what extent does AI create new circuits of jouissance (enjoyment), or does it intensify the subject’s alienation instead?

References:

Freud, S. (1900). The interpretation of dreams. Standard Edition of the Complete Psychological Works of Sigmund Freud, 4 and 5. London: Hogarth.

Freud, S. (1915). The unconscious. Standard Edition of the Complete Psychological Works of Sigmund Freud, 14. London: Hogarth, pp. 166–204.

Freud, S. (1920). Beyond the pleasure principle. Standard Edition of the Complete Psychological Works of Sigmund Freud, 18. London: Hogarth, pp. 7–64.

Lacan, J. (1966). Ecrits. Paris: Seuil.

Thanga, M.K.C. (2024). "The undecidability in the Other AI." Humanit Soc Sci Commun 11, 1372. https://doi.org/10.1057/s41599-024-03857-x

Winnicott, D.W. (1953). "Transitional Object and Transitional Phenomena." International Journal of Psychoanalysis, 34, 89-97.

 

Presentations of the Symposium

 

Can technology destroy desire? Stieglerian considerations

Bas De Boer
University of Twente

Bernard Stiegler is one of the few philosophers of technology who explicitly formulates a theory of desire. This theory has both phenomenological and psychoanalytic aspects: on the one hand, Stiegler draws from Husserl to show that technologies shape retentions and protentions, thereby structuring human anticipation. On the other hand, he is inspired by the work of Freud when discussing the relationship between anticipation and desire. Drawing from the work of Stiegler, this presentation addresses the following question: can technology destroy desire?

The main goal of this presentation is to clarify why, in the context of Stiegler’s theory of technology and his interpretation of Freud, it makes sense to ask this question. The first step in doing so is to recognize the radical nature of this question vis-à-vis approaches in the philosophy of technology, like mediation theory (e.g., de Boer, 2021; Kudina, 2023; Verbeek 2011), that speak about how technology (or technologies) shapes humanity. Formulating the question of whether technology can destroy desire rather asks if technology can destroy humanity. “To destroy,” here, does not refer to the factual elimination of all human organisms, but rather to the annihilation of human desire through the construction of a system that makes sure that each individual desires “what he [sic] is supposed to desire” (Marcuse, 1955, p. 46). The question of this paper can then be reformulated as “can technology create a system in which people desire what they are supposed to desire?”.

According to Stiegler (2011), answering this question requires moving beyond Marcuse’s framework and recognizing technology as a crucial organizer of libidinal energy. This paper will first clarify how to understand the notion of technology in the context of Stiegler’s oeuvre, and subsequently clarify how it can emerge as an organizer of libidinal energy. Secondly, it will link the issue of libidinal energy to the annihilation of desire by showing how a particular organization of libidinal energy might constitute a mass without individuality. This, for Stiegler, would effectively constitute a situation in which we can no longer meaningfully speak of desire. In conclusion, I will flesh out some characteristics of a “drive-based” society (Stiegler, 2009) in which desire is no longer present.

References:

de Boer, B. (2021). How scientific instruments speak. Lexington.

Kudina, O. (2023). Moral hermeneutics and technology. Lexington.

Marcuse, H. (1955). Eros and civilization: A philosophical inquiry into Freud. Beacon Press.

Stiegler, B. (2009). For a new critique of political economy. Polity Press.

Stiegler, B. (2011). Pharmacology of desire: Drive-based capitalism and libidinal dis-economy. New Formations, 72. https://doi.org/10.3898/NEWF.72.12.2011

Verbeek, P.-P. (2011). Moralizing technology. University of Chicago Press.

 

The algorithmic other: AI, desire, and self-formation on digital platforms

Ciano Aydin
University of Twente

AI-driven platforms like Tinder profoundly shape how users relate to their desires and identities. From a Lacanian perspective, these platforms function as a “Big Other,” promising mastery over uncertainty and complete fulfillment of desire. This promise is tempting because it offers to resolve the structural lack that defines human subjectivity. However, Lacanian theory reveals that such promises are inherently illusory, as no external system can eliminate this lack. Tinder illustrates how digital environments reinforce Lacanian clinical structures. The psychotic user identifies entirely with algorithmic outputs, relying on matches to define their sense of self. The perverse user manipulates their profile to fulfill the algorithm’s imagined desires, reducing themselves to objects of jouissance. The neurotic user oscillates between obsessive doubt and hysterical overinvestment in the quest for a perfect match, perpetuating cycles of dissatisfaction. Despite these pitfalls, AI platforms also create opportunities for singular self-formation beyond neurosis. By disrupting the fantasy of completeness and exposing users to the Real, Tinder can challenge users to confront their desires critically. To realize this potential, platforms must avoid commodifying desire, foster ambiguity, and emphasize reflective detachment. Features like algorithmic transparency, randomized “serendipity modes,” and prompts for self-reflection can help users move beyond reliance on the algorithm and engage with their split subjectivity. This paper argues that AI platforms, though fraught with risks, can be reimagined as tools for fostering singularity, enabling users to navigate their desires authentically and acknowledge their uncomfortable human condition.

 

Deadbots and the unconscious: A qualitative analysis

Luca Possati
University of Twente

This paper examines the psychological effects of engaging with deadbots—artificial intelligence systems designed to simulate conversations with deceased individuals using their digital and personal remains—from a psychoanalytic perspective. The research question is: How does interaction with a deadbot alter the process of mourning from a psychoanalytical perspective? Drawing on first-person testimonies of users who interacted with Project December, an online platform dedicated to creating deadbots, the study investigates how these interactions reshape the experience of mourning and challenge our understanding of death. Grounded in psychoanalytic theories, particularly object relations theory, the paper explores the complex emotional dynamics at play between humans and deadbots. It argues that deadbots function as transitional objects or “projective identification tools”, offering a distinctive medium for emotional processing and memory work. Projective identification in deadbots begins when an individual transfers aspects of their relationship with the deceased—such as fantasies, emotions, or memories—onto the chatbot (phase 1). This act of splitting is driven by anxiety and repression, as the individual feels the need to distance themselves from the deceased by externalizing these emotional contents. The projection process then compels the chatbot to replicate these traits (phase 2). In practice, this means that the individual starts to treat the chatbot as though it embodies or represents the qualities of the deceased. For example, they might project the deceased person's mannerisms, personality traits, or even specific phrases onto the chatbot, expecting it to respond or behave in a similar way. As the chatbot increasingly mimics these qualities, the individual perceives them as externalized, reinforcing the sense of separation. This cycle of pressure, imitation, and validation becomes crucial for the eventual reintegration of the projected content (phase 3), allowing the individual to reprocess and incorporate those emotions back into their psyche.

By framing deadbots within the psychoanalytic tradition, this research seeks to deepen the discourse on the psychological, existential, religious, and ethical dimensions of AI in the context of grief and mourning.

 

Reconceptualizing reciprocity through a Lacanian lens: the case of human-robot interactions

Maaike van der Horst, Ciano Aydin, Luca Possati
University of Twente

In this paper we offer a critique and reworking of the concept of reciprocity as it is predominantly understood in HRI (human-robot interaction) literature. Reciprocity in HRI is understood from the perspective of ‘the golden rule’: doing unto others as they have done unto you. We show that this understanding implies a utilitarian, symmetrical and dyadic view of reciprocity, that it lays both a descriptive and a normative claim on HHI (human-human interaction), and that a different understanding of reciprocity in HHI and HRI is possible and desirable. We show that a golden-rule perspective on reciprocity is particularly problematic in designing companion robots – human-like robots designed to establish social relationships and emotional bonds with the user. In this paper we provide a different understanding of reciprocity based on the philosophical anthropology of Jacques Lacan. We show how Lacan's conception of reciprocity is deeply intertwined with his psychoanalytic theory, particularly through the Aristotelian conceptual pair automaton and tuché. For Lacan, reciprocity goes beyond a purely pre-structured, rule-based approach to encompass aspects that challenge predictability and foster mediation, creativity, disruption and transformation. This view, we propose, provides a richer conceptual framework than the dominant perspective of the golden rule, allowing for a more appropriate understanding of reciprocal HHI and HRI. We illustrate this view through a Lacanian interpretation of the film Lars and the Real Girl (2007), in which the protagonist forms a romantic relationship with a lifelike doll. We conclude by providing suggestions for designing social robots that support rather than replace reciprocal HHI through a Lacanian lens.

 
5:00pm - 6:30pm(Papers) Disrupting digital industries
Location: Auditorium 2
Session Chair: Richard Heersmink
 

Derailing a high-speed train: Limitations of Agile in the AI development with marginalized communities

Aida Kalender1, Giovanni Sileno2

1SIAS | Socially Intelligent Artificial Systems Group, Informatics Institute, Faculty of Science, University of Amsterdam; 2SIAS | Socially Intelligent Artificial Systems Group, Informatics Institute, Faculty of Science, University of Amsterdam

This paper will examine insights gleaned from the Horizon Europe-funded CommuniCity project to evaluate the scope and effectiveness of recent smart city interventions in the social domain aimed at fostering responsible digital transformation with a focus on marginalized communities.

Our analysis is informed by scholarship in Science and Technology Studies (STS), which emphasizes how historical, social, and cultural factors, along with the interests of actors, shape technologies and influence future societal conditions. This perspective underscores the political significance of technologies in determining social orders, highlighting the importance of understanding the creators, development processes, and contextual factors surrounding technological production (Pinch and Bijker, 1984; Jasanoff, 2004; Winner, 1980).

This analysis assesses the impact of the CommuniCity project through the perspective of critics of the "piloting society," as articulated by Ryghaug and Skjølsvold (2021), who identify pilot and demonstration projects as pivotal modes of innovation in contemporary energy and mobility transitions. The authors contend that such projects serve as significant political arenas for shaping future socio-technical orders. Within CommuniCity, the piloting process, as described by key actors involved in developing this procedural framework, is compared to “a high-speed train that, once it takes off, cannot be halted”.

Although this approach in the CommuniCity project is termed Agile piloting, it does not allow for substantive changes to the framework or facilitate the inclusion of various marginalized communities in ways that diverge from a top-down methodology. Moreover, the chosen epistemological framework for piloting with marginalized communities within the CommuniCity project significantly influences how the concept of co-creation—central to the project’s ethical considerations—is articulated and practically implemented. While CommuniCity Agile piloting permits co-creation "for" and "with" marginalized communities, it largely overlooks the broader movement aimed at democratizing digital technologies through lenses of social justice, inclusion and equity, which we refer to as co-creation "by," along with the vision of the types of societies we aspire to create through innovation (Jasanoff, 2018).

This paper further analyzes the ethical, societal, and political implications of this conception of digital innovation and co-creation with marginalized communities, as evidenced in the CommuniCity project, and engages in a discussion regarding the extent to which these approaches promote or hinder democratic participation and the broader application of technologies for social change. The paper additionally proposes ways in which these high-speed top-down innovation frameworks can be decelerated through reflective loops and by incorporating out-of-the-box thinking modules to alleviate the rigidity of the model.



Grasping the impact of artificial intelligence on the tourism industry

Marcel Heerink

Saxion University of Applied Sciences, The Netherlands

This paper examines and discusses the multifaceted impact of artificial intelligence (AI) on the tourism industry, focusing on current applications and their effects, showing the evolution from simple expert systems to sophisticated machine learning algorithms reshaping travel experiences.

It identifies several key areas of AI application in tourism. Firstly, in customer service and personalization, AI algorithms analyze user preferences and behaviors to provide personalized recommendations for destinations and accommodations (García-Madurga & Grilló-Méndez, 2023). AI-powered chatbots and virtual assistants offer 24/7 customer support and booking assistance (Ukpabi et al., 2019), while voice-activated AI devices enhance guest experiences in hotels.

Regarding operational efficiency, machine learning algorithms optimize revenue management through dynamic pricing. AI enables precise inventory management and demand forecasting (Samala et al., 2020), while predictive analytics help businesses make informed decisions about resource allocation.

In terms of customer experience enhancement, facial recognition technology streamlines check-in processes and enhances security (Buhalis & Leung, 2018). Augmented Reality combined with AI offers immersive tour experiences, while AI-powered translation services break down language barriers. Additionally, AI enhances safety through facial recognition and biometric authentication.

Recent survey data from YouGov reveals significant adoption trends in the industry. The research shows that 42% of travelers have either used AI in travel planning or express interest in doing so, while 28% prefer traditional planning methods. Language translation assistance has emerged as the most popular AI tool, used by 25% of British and 31% of American travelers. Personalized recommendations and AI-powered reviews are also widely used among travelers.

Both opportunities and challenges emerge from this overview. While AI offers unprecedented advances in personalization and efficiency, the industry must address concerns about environmental and social impact, data privacy, maintaining human interaction in hospitality, ensuring accessibility for all travelers regardless of technological preferences, and aiming for optimal inclusiveness.

In many respects, the integration of AI is transforming how travel businesses operate and how travelers experience their journeys. Looking ahead, the industry is moving toward more sophisticated, integrated systems that blend physical and digital aspects of travel. Successful adoption will depend on responsible implementation that balances technological innovation with authentic human experiences. However, future research is needed, especially on ethical considerations regarding inclusivity and societal impact.

References

Buhalis, D., & Leung, R. (2018). Smart hospitality—Interconnectivity and interoperability towards an ecosystem. International Journal of Hospitality Management, 71, 41-50.

García-Madurga, M. Á., & Grilló-Méndez, A. J. (2023). Artificial Intelligence in the tourism industry: An overview of reviews. Administrative Sciences, 13(8), 172.

Kong, H., Wang, K., Qiu, X., Cheung, C., & Bu, N. (2023). 30 years of artificial intelligence (AI) research relating to the hospitality and tourism industry. International Journal of Contemporary Hospitality Management, 35(6), 2157-2177.

Samala, N., Katkam, B. S., Bellamkonda, R. S., & Rodriguez, R. V. (2020). Impact of AI and robotics in the tourism sector: A critical insight. Journal of Tourism Futures, 8(1), 73-87.

Ukpabi, D. C., Aslam, B., & Karjaluoto, H. (2019). Chatbot adoption in tourism services: A conceptual exploration. In Robots, Artificial Intelligence, and Service Automation in Travel, Tourism and Hospitality (pp. 105-121). Emerald Publishing Limited.



Battery development beyond justice. A care-based energy ethics

Rafaela Christina Hillerbrand

KIT, Germany

This paper considers the ethics of batteries as an enabler for the transition towards more sustainable energy. We argue that to put energy justice into practice, it is crucial to set focus on the responsibility and agency of engineers designing the energy transition. Building on care ethics, mid-level principles for engineers developing and designing batteries are suggested.

Problem statement: Climate change and limited fossil resources press for a transition towards a more sustainable energy system. Many countries foster a pathway from a fossil-based energy sector to more, or even exclusively, renewable energy carriers. In the transition towards more renewable energy, batteries are seen as central enabling technologies for a greener energy future, as they provide a very versatile way of storing and supplying electricity. However, batteries may have severe negative impacts on those living today. For example, raw material extraction for batteries (cobalt, lithium, nickel, and others) has tremendous negative social and economic impacts on those who mine these materials and on their communities. These societal groups will not directly profit from the energy transition that happens elsewhere in the world. This seeming injustice is further aggravated by laws on recycling in Western countries that will keep the battery metals in a Western material cycle. Over the last decades, energy justice has emerged as a new crosscutting research agenda to integrate such justice considerations that go beyond intergenerational aspects.

Aim and approach: This paper considers the ethical implications of batteries as enablers of the energy transition, considering the full cycle of raw material extraction, production, and recycling. We first highlight open ethical questions that a framework based on energy justice leaves unanswered. Large parts of the energy justice debate approach ethical concerns on a somewhat coarse-grained level, addressing justice issues mainly at the level of policymaking, laws, regulations, and the like. This “macro” level is important, but our analysis will show that in the case of batteries, energy justice considerations remain incomplete when the level of the engineers, the “micro” level, is not addressed. Working engineers have to be guided by mid-level principles on how to design and operate the energy system and its subsystems in a more just way.

Secondly, we turn to care ethics as an ethical approach to augment (not replace) justice considerations and adumbrate how the responsibility for a just and fair energy transition towards a more sustainable energy system can be put into practice. We suggest that care ethics, though originally proposed as a counter-project to theories of justice, may help here to fill in missing pieces in the normative framework.

Our argument seems to hold not only for batteries: whenever solutions for a greener energy future are sought in technology (instead of, for example, in behavioral change to reduce energy consumption), energy justice considerations also have to look at the level of the engineers and supply them with ethical guidance, i.e. a mid-level account based on care ethics.

 
5:00pm - 6:30pm(Papers) Malfunction
Location: Auditorium 3
Session Chair: Samuela Marchiori
 

That’s not a bug, that’s an accidental function: on malfunctioning artifacts and concepts

Herman Veluwenkamp1, Sebastian Köhler2

1University of Groningen, The Netherlands; 2Frankfurt School of Finance & Management, Germany

Our current frameworks for understanding malfunction and function fail to capture the diverse ways in which objects, both physical and abstract, can fail to do what they are for. In this paper, we address that limitation by proposing a new account of “functioning” and “malfunctioning.” We do this by focusing not only on technical artifacts, but also on two different kinds of entities that can be defective: software and concepts.

Let us start with software. Although it seems intuitive that software, like other human-made objects, should be subject to the same kind of malfunction analysis (Franssen 2006), Floridi, Fresco, and Primiero (2015) have argued that software tokens cannot malfunction. In their view, a software “failure” either reflects a universal design flaw or a hardware issue. If the same erroneous instructions are present in all copies of a given program, there is no token-level failing; at most, software misfunctions by producing unintended side effects.

However, this appears to conflict with everyday experiences of software that “breaks” in specific, idiosyncratic ways. For example, consider a word processor that randomly freezes on some computers. This might be caused by font packages, installed only in some offices, that trigger the crashes. This is consistent with other offices having the program run smoothly for years. These kinds of local breakdowns can create unique malfunctions, which are unaccounted for in a purely type-level analysis (de Haas & Houkes 2025). It is therefore important to consider the local context, which includes custom settings and additional software dependencies, when analysing malfunctions.

A parallel issue arises in conceptual engineering. Concepts, like software, are often described as abstract entities. Nevertheless, theorists maintain that certain concepts can fail to fulfil their function. This is taken to be a pro tanto justification for conceptual revision (see, e.g., Thomasson 2020). Moreover, it is widely acknowledged that concepts can fail for certain contexts, but are well-functioning in others. For example, our current concept of truth is taken to be defective for scientific contexts, but fine in others (Scharp 2013). However, if we define malfunction based on the relative effectiveness of a conceptual “token” compared to other tokens of its type, the possibility of local conceptual errors disappears. This obscures the different ways in which concepts can fail across different contexts.

To address these issues, we propose an account of "malfunction" and "function" that is sensitive to the diverse ways in which technical artifacts, digital artifacts, and concepts can fail. First, we define "functioning" in line with recent proposals in conceptual engineering (<removed for review>) and artifact design (<removed for review>) as the ability to produce effects that are normatively significant. Second, we define "malfunctioning" as the failure to produce these effects. We analyze the various ways in which tokens of different entities can break down and demonstrate how these instances can be subsumed under our understanding of malfunction. Finally, we show how this analysis aligns well with the responsibilities that engineers and designers have for repair, maintenance and redesign.

References

Floridi, L., Fresco, N., & Primiero, G. (2015). “On malfunctioning software.” Synthese, 192: 1199–1220.

Franssen, M. (2006). “The normativity of artefacts.” Studies in History and Philosophy of Science Part A, 37: 42–57.

de Haas, J. & Houkes, W. (2025). “Can’t Software Malfunction?” Metaphysics, 9(1): 1–15.

Scharp, K. (2013). Replacing truth. OUP Oxford.

Thomasson, A. L. (2020). “A Pragmatic Method for Normative Conceptual Work.” In A. Burgess, H. Cappelen, & D. Plunkett (Eds.), Conceptual Engineering and Conceptual Ethics (pp. 435–458). Oxford University Press.



The Ethics and Epistemology of Malfunction in Human-Technology Integration

Alexandra Karakas

Budapest University of Technology and Economics, Hungary

The development of brain implants has brought transformative potential to medical science, offering groundbreaking treatments for neurological disorders and cognitive enhancement. However, the phenomenon of malfunctioning brain implants presents not only engineering and ethical challenges but also significant epistemic opportunities. The talk explores the phenomenon I call value clash—the conflict between engineering, scientific, and ethical priorities and values—arising from malfunctions, with a particular focus on how these events illuminate the blurry boundaries between normal and malfunctioning performance. I argue that a malfunction framework is a useful epistemic tool and is essential for advancing the science and governance of certain medical instruments.

From an engineering standpoint, malfunctions are typically framed as deviations from the intended function, prompting efforts to improve reliability and safety. Yet, defining normal artefact performance in complex cases like brain-machine interfaces is inherently challenging. The relation between technological artefacts and the biological systems they interact with creates philosophical dilemmas and ambiguous zones where a certain performance could be functional yet suboptimal for the patient. Malfunctions thus call for a re-evaluation of these boundaries, revealing ethical grey zones.

Epistemically, malfunctions are invaluable since they expose hidden assumptions about how artefacts are expected to perform. For instance, a device may work as intended but produce unwanted side effects due to unpredictable neural responses, which highlights the contingent and context-dependent nature of functionality. These cases also shed light on broader issues, such as the intricate relation between user expectations, artefact performances, and the socio-technical systems that frame their use.

Ethically, malfunctioning artefacts like brain implants underscore critical issues like accountability, communication of risks, and patient autonomy. However, these cases also reflect deeper epistemic challenges, as the uncertainty inherent in defining "normal" versus "malfunctioning" complicates ethical decision-making in sensitive cases in the medical sciences.



Ascribing functions to software

Jeroen de Haas1,2

1Avans University of Applied Sciences; 2Eindhoven University of Technology

The present work analyzes function ascriptions to software from an action theoretical perspective. Software is a key component of many (intimate) technologies, but how it contributes to their causal efficacy has not been satisfactorily explicated. Common presuppositions about its role are untenable, but nevertheless prevalent in both textbooks and literature, giving a false sense of control to current and future engineers (De Haas and Houkes 2025). This paper develops an alternative account, more closely aligned with the design and use of software.

Ordinary discourse typically treats software products as functional equivalents to technical artifacts, bespoke material artifacts created for a purpose. For instance, the sentence “I’ll set my alarm to wake me up before breakfast” might just as well concern a physical alarm clock on the speaker’s bedside table as it might a particular app on the speaker’s smartphone. Ascriptions of functions to function bearers have a normative quality. They assert how physical manipulation of a function bearer should contribute to the attainment of a goal state, and thereby establish a criterion for evaluating its actual causal role. An alarm clock that is ascribed the function to produce pronounced audible signals, but remains silent at the set time, is deemed to malfunction. Functional kind terms, such as ‘clock’, ‘calendar’, and ‘store’, are profitably applied to new software products to indicate they partake of the same functionality as their material counterparts. Yet, a prima facie compatibility of terms does not entail the compatibility of their referents.

Several authors have remarked upon the similarities between technical artifacts and software (Irmak 2012; Floridi, Fresco, and Primiero 2015; Turner 2018), in particular, their having a dual nature. Like technical artifacts (Kroes and Meijers 2006), software is intentionally created as a means to realize physical goal states. This is considered sufficient reason to identify software as a function bearer. However, while speaking of material objects’ causal roles is unproblematic, it is unclear if, and how, functions can be ascribed to software as it is construed on their accounts. Material artifacts stand in a direct relation to their environment, where they can be manipulated, and exert their causal influence. ‘Software,’ by contrast, is used to denote a variety of entities, from abstract artifacts (Irmak 2012) to physical realizations of machine-executable instructions (Floridi, Fresco, and Primiero 2015) or symbolic entities (Turner 2018). On these accounts, software is either not amenable to manipulation, not causally efficacious, or both. Still, it seems incontrovertible that software enables the exerting of similar causal influences on the world (e.g., introducing pronounced stimuli) despite the lack of bespoke artifacts that once were exclusively associated with their exertion (alarm clocks).

This paper demonstrates why software as construed by these authors cannot fulfill the role of function bearer in function ascriptions as they are understood for technical artifacts (c.f. Houkes and Vermaas 2010). Moreover, it identifies an alternative class of objects that suit the role of function bearer. Thus, it contributes to a better understanding of what it means to ascribe a function to software.

De Haas, Jeroen, and Wybo Houkes. 2025. “Can’t Software Malfunction?” Metaphysics 9 (1): 1–15. https://doi.org/10.5334/met.165.

Floridi, Luciano, Nir Fresco, and Giuseppe Primiero. 2015. “On Malfunctioning Software.” Synthese 192 (4): 1199–1220. https://doi.org/10.1007/s11229-014-0610-3.

Houkes, Wybo, and P. E. Vermaas. 2010. Technical Functions: On the Use and Design of Artefacts. Philosophy of Engineering and Technology. Dordrecht: Springer Netherlands. https://doi.org/10.1007/978-90-481-3900-2.

Irmak, Nurbay. 2012. “Software Is an Abstract Artifact.” Grazer Philosophische Studien 86 (1): 55–72. https://doi.org/10.1163/9789401209182_005.

Kroes, Peter, and Anthonie Meijers. 2006. “The Dual Nature of Technical Artefacts.” Studies in History and Philosophy of Science Part A 37 (1): 1–4. https://doi.org/10.1016/j.shpsa.2005.12.001.

Turner, Raymond. 2018. Computational Artifacts: Towards a Philosophy of Computer Science. Berlin, Heidelberg: Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-662-55565-1.

 
5:00pm - 6:30pm(Symposium) What is Intercultural Philosophy of Technology and why is it important?
Location: Auditorium 4
 

What is Intercultural Philosophy of Technology and why is it important?

Chair(s): Gunter Bombaerts (Eindhoven University of Technology), Andreas Spahn (Eindhoven University of Technology)

Within philosophy and ethics of technology, the use of “non-Western” approaches has long been very peripheral. Recent years show a careful but growing interest in many more globally spread approaches of philosophy of technology.

However, several challenges remain. Individual projects using “non-Western” approaches have their project-specific struggles, ranging from methodological issues to interpretative challenges. Furthermore, an overall view on how these approaches fit in a broader philosophy and ethics of technology is still missing.

This symposium aims to be the starting point for a more in-depth and general debate on the future of intercultural philosophy of technology. It pursues this aim through a dialogical method, in which participants are invited to deliberate together, rather than present their own research outputs. Some individual examples will be used as a springboard to discuss the central question: “What is Intercultural Philosophy of Technology and why is it important?”

We are looking forward to discussing this with you.

Co-Organizers: Gunter Bombaerts, Andreas Spahn, Patricia Reyes Benavides, Kristy Claassen, Alessio Gerola, Tom Hannes, Emma Kopeinigg, Joseph Sta. Maria, Anna Puzio and Hin Sing Yuen. (The co-organizers are part of the ESDiT intercultural ethics of technology research line.)

 

Presentations of the Symposium

 

Symposium Schedule (90 minutes)

Gunter Bombaerts1, Andreas Spahn1, Patricia Reyes Benavides2, Alessio Gerola3, Tom Hannes1, Emma Kopeinigg4, Joseph Sta. Maria5, Anna Puzio2, Hin Sing Yuen5
1Eindhoven University of Technology, 2Twente University, 3Wageningen University, 4Utrecht University, 5Delft University of Technology

Schedule (Total of 90 minutes):

Before the start of the session, participants choose which sub-working group they want to be in. They take a seat with that sub-working group (separate tables, all in the same room).

Introduction (30 minutes):

The theme “What is Intercultural Philosophy of Technology and why is it important?” will be introduced by the co-organizers.

Discussion in sub-groups (30 minutes)

In the sub-groups, the theme “What is Intercultural Philosophy of Technology and why is it important?” will be further discussed with a particular focus (see below for tentative ideas). The chair of the sub-group will introduce a few thought-provoking ideas or statements that will be the start of the discussion.

· Focus on particular philosophies: African-, Buddhist-, Confucian-, Indigenous-… philosophy of technology

· Focus on approaches: Religion Studies, Postphenomenology, Analytical Philosophy, … and intercultural philosophy of technology

Plenary discussion (30 minutes)

We bring together the insights of the different groups to formulate next steps, as concretely as possible, towards answering the question “What is Intercultural Philosophy of Technology and why is it important?”

 
5:00pm - 6:30pm(Symposium) Engineering science, artificial intelligence and philosophy: an interdisciplinary dialogue
Location: Auditorium 5
 

Engineering science, artificial intelligence and philosophy: an interdisciplinary dialogue

Chair(s): Dazhou Wang (University of Chinese Academy of Sciences, Beijing), Christopher Coenen (Institute of Technology Assessment and Systems Analysis (KIT-ITAS)), Aleksandra Kazakova (University of Chinese Academy of Sciences, Beijing)

As the example of Socrates shows, philosophy is essentially a dialogue. Guided by this spirit, this forum sincerely invites engineering scientists, computer scientists, engineering practitioners, and philosophers of science and technology to engage in an interdisciplinary dialogue. This dialogue aims to explore the nature of engineering science, the complex connections between engineering and science, the characteristics of artificial intelligence, its impact on engineering science, and so on. The content of the symposium covers many cutting-edge fields, including aviation engineering, cryogenic engineering, petroleum exploration engineering, metallurgical process engineering, astronaut training, human stem cell-based embryo models, AI-driven synthetic biology, biomedicine, swarm intelligence, and data science and engineering. Through these cases, participants provide multi-dimensional philosophical insights from their respective professional backgrounds.

The speakers from the fields of philosophy, engineering and computer science are making this forum not only an interdisciplinary dialogue but also a cross-boundary one. By sharing their research findings and reflections, experts from different fields facilitate a deeper understanding of the relationship among natural science, engineering science, and engineering practices, of the relationship between AI and engineering, and of basic concepts of philosophy of engineering science, embodying the fundamental spirit of philosophy. Through interdisciplinary collaboration, participants can better understand the complexity of engineering science, explore its potential in practical applications, and lay a solid foundation for future technological innovation. The achievements of this dialogue are not only reflected at the academic level, especially relevant to the philosophy of engineering science, but may also have a profound impact on engineering practice, driving the common progress of engineering and philosophy.

 

Presentations of the Symposium

 

Ethical frontiers in human stem cell-based embryo model

Yaojin Peng
University of Chinese Academy of Sciences, Beijing

In recent years, remarkable progress has been achieved in human stem cell-based embryo models (HSEMs), which have significantly advanced our understanding of early embryonic development, facilitated the creation of precise disease models, and enabled efficient drug screening. By simulating key processes of human development, HSEMs provide unprecedented opportunities to explore fundamental biological questions that were previously inaccessible. However, the rapid progress in this field has sparked widespread philosophical, ethical, social, and regulatory debates. Central concerns include the blurred boundaries of this technology, uncertainties about the ethical and moral status of embryo models, disputes surrounding the ethical sourcing of stem cells, and the broader societal implications of potential misuse or unintended applications. Moreover, the lack of clear and harmonized international standards exacerbates governance challenges, complicating efforts to address these critical issues and fueling public discourse on the acceptability of such technologies.

This study conducts a comprehensive analysis of the latest developments, trends, and defining characteristics of HSEM research while highlighting the ethical and regulatory challenges unique to this field. By leveraging interdisciplinary approaches, we explore the controversies and risks posed by this rapidly evolving technology. HSEM research raises ethical and governance challenges, including the models’ moral status, fidelity verification, and risks in reproductive applications. While current HSEMs lack equivalence to human embryos, future developments may change this. Ethical concerns include fidelity limitations under the “14-day rule”, cloning risks, genome editing, and challenges to traditional family structures. To navigate these challenges, we propose a dynamic ethical risk management framework designed to integrate adaptive regulation, proactive stakeholder engagement, and rigorous scientific oversight. This framework serves as a tool to identify, assess, and mitigate ethical risks while fostering a research environment that harmonizes technological innovation with societal values and ethical principles.

In addition to addressing immediate risks, this framework emphasizes the need for sustainable and forward-looking governance mechanisms. By aligning technological advancements with ethical norms and social responsibility, it aims to promote the standardized and responsible development of HSEM technologies. Furthermore, the framework offers decision-making support for policymakers, providing actionable guidance to establish robust regulations that ensure the safe, ethical, and beneficial application of these groundbreaking scientific advancements.

 

AI-driven synthetic biology: engineering philosophy, challenges, and ethical implications

Lu Gao
Chinese Academy of Sciences, Beijing

From the perspective of the philosophy of engineering, artificial intelligence (AI)-empowered synthetic biology not only fosters the integration of technology and life sciences but also highlights the engineering attributes of biological systems. Synthetic biology, as an interdisciplinary field, aims to enable the precise design of living systems and biological processes, driving breakthroughs in therapies, materials, energy, crops, and data storage. With the introduction of AI, the engineering process in synthetic biology has accelerated, shifting from traditional trial-and-error methods to more automated Design–Build–Test–Learn (DBTL) approaches. This transformation has led to greater standardization and precision in biological research.

However, AI-empowered synthetic biology also presents potential risks. Firstly, the complexity and diversity of biological systems mean that artificially designed organisms may produce unpredictable outcomes, particularly in ecological environments where gene-driven mutations or synthetic organisms may disrupt ecological balance. Secondly, biases and incomplete data within AI technologies may affect the accuracy of synthetic biology designs, leading to faulty decisions or design defects. The engineering nature of synthetic biology not only alters the research paradigm of biology but also raises ethical discussions about whether humanity should control life. AI-empowered synthetic biology may alter biodiversity and species evolution, bringing about fears of technological failure. Moreover, in the context of globalization, the influence of multinational corporations on technological innovation and data privacy will pose additional challenges.

In the face of the multiple ethical, technological, and social-governance challenges raised by AI-empowered synthetic biology, ensuring the sustainability, transparency, and fairness of the technology, as well as developing adaptive regulatory frameworks, will be key to the healthy progression of this field.

 

Bridging the responsibility gap: ethical responsibility pathways and framework reconstruction in artificial intelligence

Shuchan Wan, Cheng Zhou
Peking University

The problem of the responsibility gap highlights how learning automata in AI can make it difficult to attribute moral responsibility to humans. New AI models like neural networks and reinforcement learning have moved beyond rule-based programming, resulting in unpredictable and uncontrollable behaviors. This creates situations where AI cannot fully bear responsibility, nor can its creators or operators. Bridging this gap requires rethinking human-machine relationships and responsibility allocation.

This paper reviews three approaches to this issue: instrumentalism, human-machine collaboration, and joint responsibility. According to instrumentalism, AI is a tool to achieve a certain goal, but not the goal per se. Since the goal is where values reside, AI is not value-laden and hence should not be held morally responsible. To hold AI morally responsible is therefore both logically inconsistent and morally irresponsible. The pitfall of instrumentalism, though, is that it neglects the functional side of tools. According to the theory of human-machine collaboration, AI can be held morally responsible, but only when it achieves a specific level of agency. The challenge with this theory mainly lies in the ambiguity of determining which level of agency AI has achieved. According to joint responsibility, AI and its user should be considered a single agent. When this single agent is morally responsible, both AI and its user are morally responsible. The difficulty with this approach, though, is whether there exists a practically reasonable and non-arbitrary way to distribute the single agent’s responsibility between AI and its user.

To address the challenges in these approaches, this paper proposes an extended agent responsibility model. This model is based on an extended view of the agent, as in the joint responsibility approach above. However, what distinguishes this model from the original joint responsibility approach is that it treats the coupling between AI and its user as dynamic rather than static. To deal with responsibility distribution within this dynamic coupling, this paper introduces causal contribution principles and attempts to divide moral responsibility according to the causal responsibility of each part. By doing so, the distribution of moral responsibility becomes significantly more tractable and non-arbitrary in comparison with the original joint responsibility approach.

 

Basic ideas on engineering science and engineering scientists: a contribution to the philosophy of engineering science

Dazhou Wang1, Christopher Coenen2
1University of Chinese Academy of Sciences, Beijing, 2Institute of Technology Assessment and Systems Analysis (KIT-ITAS)

Overall, the philosophy of engineering science remains an underdeveloped research field, as evidenced by the relevant research findings showcased in the two handbooks, Handbook of the Philosophy of Technology and Engineering Science (edited by Anthonie Meijers) and Handbook of the Philosophy of Engineering (edited by Diane P. Michelfelder and Neelke Doorn). In this paper, we present some basic ideas about engineering science and engineering scientists, based on previous research, to stimulate further philosophical studies of engineering science: (1) The core goal of engineering science is to design and create artifacts (such as tools, equipment, systems, etc.) through scientific methods, making it reasonable to define it as "artificial science." In this sense, artificial intelligence research also belongs to a special type of engineering science, which can be called linguistic engineering science. (2) Engineering science is not directly derived from natural science but is inspired by it and developed through the summarization of engineering practices, forming a unique theoretical system. (3) In contrast to natural science, modeling, approximate computation, and parameter determination hold a central position in engineering science. The theoretical structure of engineering science is therefore more problem-oriented than traditional axiomatic systems. (4) Engineering science and engineering innovation are interdependent and mutually influential, with no clear sequence between them, reflecting a complex interactive relationship. (5) In a long-term perspective, the development of engineering science indeed combines gradual progress and revolutionary change, but its revolutionary nature differs from the paradigm shifts described by Thomas Kuhn for natural science. This indicates that the development of engineering science requires close interaction among natural scientists, engineering scientists, and engineering practitioners. (6) Engineering scientists indeed occupy a "boundary" position between natural science and engineering practice. They not only need to understand scientific principles but also "creatively" apply them to practical engineering problems, making them vital in modern society. We hope that this paper will foster dialogue between the philosophy of science and the philosophy of engineering, deepening the understanding of the nature of engineering science and its unique role in engineering innovation as well as in society at large.

 
5:00pm - 6:30pm(Symposium) Human, gender, and trust in AI ethics: addressing structural issues through Ethical, Legal, and Social Aspects (ELSA) Lab approach
Location: Auditorium 6
 

Human, gender, and trust in AI ethics: addressing structural issues through Ethical, Legal, and Social Aspects (ELSA) Lab approach

Chair(s): Hao Wang (Wageningen University and Research, The Netherlands)

This panel suggests that there are two critical gaps that need to be addressed to develop responsible AI. The existing AI ethics literature often focuses on the first gap—the disconnect between ethical principles and design practices. This gap has led to criticisms of AI ethics as being either impractical (Hagendorff, 2020) or complicit in ethics washing (Wagner, 2018). Many studies have worked on bridging this first gap by figuring out how to turn ethical principles into concrete design tasks that can be applied in AI design. Examples include approaches like AI ethics by design (Brey & Dainow, 2023) or VSD of AI (Umbrello & Van de Poel, 2021). These aim to make AI ethics more actionable for designers.

However, we believe that a second, less-explored gap exists—one between design practices and addressing broader structural issues that shape them. Many structural issues are socio-political, rooted in existing unequal social and political structures. Some others are more ontological in nature, which relate to our basic understanding of the world and reality—our beliefs, mindsets, and assumptions. Current approaches to AI ethics, whether in the form of guidelines or ethics-by-design strategies, often miss the mark when it comes to tackling those structural issues. They usually focus on giving practical recommendations for individual developers or companies, which makes them actionable but also limits their scope by rarely addressing the bigger picture—issues that go beyond what developers or their organizations can implement (Ryan et al., 2024). Structural issues, on the other hand, are deeply connected to broader systemic problems, making them too complex and abstract to solve with ethical guidelines or ethics-by-design alone. To address these structural challenges, we need to address the pervasive conceptual paradigms often underpinning AI discourse, identify different power asymmetries and disadvantaged groups (e.g., women), and to re-evaluate the merit of driving paradigms such as trustworthy AI.

In this panel, we propose the ELSA (Ethical, legal and social aspects) Lab approach as a promising way to bridge both the practical and structural gaps in developing responsible AI. The ELSA Lab is an “experimental, systemic approach in which Quadruple Helix stakeholders—academia, civil society, government, and industry—work together to experiment with strategies to address the ELSA aspects in AI (re)design” (Wang et al. 2025). The approach is procedural and experimental, allowing room to not only operationalize ethics in design practices but critically reflect on issues like power dynamics, anthropocentrism, and broader structural impacts.

The panel has three parts: First, we introduce two key gaps in AI ethics, and how the structural gap is often neglected despite its relevance to responsible AI. Next, we present three examples showing how structural issues shape AI design, covering power dynamics in trustworthy AI, underlying assumptions in human-centered AI (e.g., anthropocentrism, instrumentalism), and the influence of patriarchal structures. Finally, we bring all the presentations together in considering the ELSA lab approach and what opportunities it offers to address these two gaps.

 

Presentations of the Symposium

 

The power and emotions in trustworthy AI

Hao Wang
Wageningen University and Research

In AI ethics, trustworthy AI is widely pursued and often seen as inherently good and as adhering to a series of values (AI HLEG 2019). However, this value-based framing can easily turn the ideal of trustworthy AI into a checklist of predefined criteria or principles, reducing it to a mere technical assessment or a proof of compliance. I illustrate two important things that might be missed in this value-based view of trustworthy AI.

First, it can obscure the power dynamics. Drawing on Habermas’s critique of ‘technocracy,’ I will illustrate how this compliance-oriented understanding of trustworthy AI risks undermining its progressive goals and flattening structural solutions into techno-bureaucratic projects. This techno-bureaucracy reinforces, rather than challenges, the logic of continuous data expropriation and asymmetric power. Second, it misses the lived experiences of people who face challenges with AI every day. Trustworthiness is not just about abstract values like transparency or privacy; it is also about the emotions, fears, and frustrations people feel. Many people fear losing control, get angry over political manipulation, or worry about being replaced by machines. This erosion of trust is a reflection of the broader disruptions to everyday life caused by AI.

Given all this, I would even argue that promoting trust in AI might not always be the best strategy. Sometimes, fostering a healthy skepticism—or even justified distrust—could be more important. This kind of distrust is not about eroding societal trust; it is about pushing for the conditions that actually deserve it (Wang, 2022). In this way, distrust can be a pathway to challenging structural issues, holding AI accountable, and making our algorithmic society more trustworthy.

 

Addressing problematic conceptual assumptions about human-technology relations in AI development practices

Luuk Stellinga
Wageningen University and Research

New developments in artificial intelligence (AI) have generated widespread societal debate on AI technologies and their implications. While aimed at improving the moral state of AI, this debate runs the risk of being based on harmful worldviews, which could undermine its well-intentioned goals. Hermeneutic philosophy of technology offers a way to respond, as it allows for uncovering what is taken for granted about humans and technologies in contemporary discourses, such as the societal debate on AI. Following a hermeneutic approach, current AI discourse can be revealed to maintain an understanding of human-AI relations that is universalist (human experiences are viewed as universal), instrumentalist (human-AI relations are viewed as user-tool relations), and anthropocentric (humans are viewed as uniquely moral beings). Each of these assumptions prompts philosophical questions and can be problematized for various reasons. Anthropocentrism, for example, can be argued to overlook the moral status of nonhuman animals and the natural environment, and to lead to a disregard for the real and significant harms that AI can cause them (Bossert & Hagendorff, 2021; Van Wynsberghe, 2021).

In the context of this panel, the critical question is whether it is possible to address the problematic assumptions of universalism, instrumentalism, and anthropocentrism in concrete AI development practices. Such assumptions do not straightforwardly lead to specific design choices, as they operate at a broader level and interweave with systemic societal challenges, but they nevertheless frame the ways in which problems, methods, and solutions are articulated in AI development. Addressing these assumptions therefore requires not a simple intervention in the design process, but a structural rethinking of the development of AI technologies. The ELSA lab research methodology provides an opportunity to consider how the abovementioned problematic assumptions can be addressed. What is particularly promising about the ELSA lab approach is that it views responsible AI development as an ongoing and dynamic process, wherein consideration of ethical, legal, and social aspects occurs throughout the development and deployment cycle. Besides this, it is grounded in a multi-level perspective on human-AI relations, acknowledging individual artifact issues, organizational issues, systemic issues, and ontological issues (Wang & Blok, 2025). As a result, ELSA lab research provides a variety of points at which conceptual assumptions can be identified and critiqued. We consider the merits and implications of critical reflection on philosophical assumptions at the different steps and levels of the ELSA lab approach.

 

AI, Gender, and Agri-food

Mark Ryan
Wageningen University and Research

In recent years, there has been a surge in research on the ethical, legal, and social aspects (ELSA) of artificial intelligence (AI) in agri-food (van Hilten et al., 2024). Much attention has been given to the impact on farmers when deploying AI on their farms and its possible impact on non-human life and society (Ryan et al., 2021). Occasionally, the impact of AI on gender dimensions in agri-food is raised (Sparrow & Howard, 2020), but rarely in much detail and even less so concerning the structural dimensions underpinning these concerns (Ryan et al., 2024), which will be the focus of this presentation.

To begin with, the domain of agri-food and the discipline of computer science have traditionally been male-dominated. One may assume that adopting AI in agri-food will further catalyse and exacerbate gender challenges and concerns. There is the possibility that the digitalisation of agri-food will further disenfranchise women and push them out of the industry. This could harm diversity and inclusion in the sector. It could also harm the industry itself, which needs to attract more young farmers to replace an ageing and declining demographic of farmers.

Another significant structural concern in using AI in agri-food is the impact on women in the Global South. In the Global South, women make up as much as 80% of the agri-food workforce (Davies, 2023), whereas women make up only 30% of the farm workforce in Europe and as little as 15% in Ireland (EU Cap Network, 2022). The figure is 26.4% in the US (USDA, 2024), with similar figures throughout the Global North.

AI and AI-powered robots and drones are expected to be deployed chiefly on wealthy, large, monocultural farms (Ryan, 2020). This may result in an increased digital divide between predominantly wealthy, male-dominated farms in the Global North and farms in the Global South, primarily worked by women.

This divergence in the use and benefit of AI in the agri-food sector creates a split between those who can benefit from these technologies and those who cannot. It also further disadvantages women in already precarious positions in the Global South. AI and AI-powered robots offer real potential to help alleviate and reduce many of the dull, dirty, and dangerous jobs done by women in the Global South; however, if these women are priced out of such opportunities, this raises many justice concerns about disadvantage, fair distribution of resources and benefits, and inequality.

Many of these structural concerns and impacts are far-reaching. They cannot simply be addressed by creating AI ethics guidelines for organisations to follow or by trying to embed values into the design of AI models. This presentation aims to take a first step by identifying some of the structural gender challenges that the deployment and use of AI in agri-food raise. It will open the discussion for ways in which approaches such as ELSA can help, alongside political will and effective policymaking.

 
5:00pm - 6:30pm(Symposium) A code of conduct for technology ethics practitioners
Location: Auditorium 7
 

A code of conduct for technology ethics practitioners

Chair(s): Pieter Vermaas (TU Delft, the Netherlands)

This symposium explores possibilities for creating a code of conduct for practitioners working in technology ethics. The overall argument is that the number of technology ethics practitioners is currently growing, creating different roles such as embedded ethicists in research projects on technology, members of research ethics committees who assess the consequences of technological research, ethicists advising companies, and facilitators of moral and societal exploration through workshops, games, and brainstorming sessions. These technology ethics practitioners are involved in assessing technologies and their applications, and they are increasingly providing guidance for technology development through approaches like responsible research and innovation, ethics by design, or design for values. This in turn results in challenges that practitioners are confronted with, which guidelines in the form of codes of conduct can help to address. Technology ethics thus evolves into a profession aimed at giving advice, and arguments found within technology ethics itself suggest that this profession can use a code of conduct for its practitioners.

This symposium focusses on preliminary issues in considering a code of conduct, such as identifying the types of ethics practitioners the code can be for, the roles the code can play for these practitioners, the controversies it should address, and the (institutional) arrangements needed to make a code effective.

 

Presentations of the Symposium

 

A code of conduct for technology ethics practitioners

Pieter Vermaas
Technische Universiteit Delft

Description:

This symposium explores possibilities for creating a code of conduct for practitioners working in technology ethics. It follows up on the publication of a recent position paper in the Journal of Responsible Innovation (Vermaas, Ammon, and Mehnert, 2025), in which such a code of conduct is proposed. This position paper ended with the promise not to stop at publication, but to continue the discussion on the possibility of a code with the stakeholders concerned. This symposium is part of fulfilling that promise.

The overall argument is that the number of technology ethics practitioners is currently growing, creating different roles such as embedded ethicists in research projects on technology, members of research ethics committees who assess the consequences of technological research, ethicists advising companies, and facilitators of moral and societal exploration through workshops, games, and brainstorming sessions. These technology ethics practitioners are involved in assessing technologies and their applications, and they are increasingly providing guidance for technology development through approaches like responsible research and innovation, ethics by design, or design for values. This in turn results in challenges that practitioners are confronted with, which guidelines in the form of codes of conduct can help to address. Technology ethics thus evolves into a profession aimed at giving advice, and arguments found within technology ethics itself suggest that this profession can use a code of conduct for its practitioners.

This symposium focusses on preliminary issues in considering a code of conduct, such as identifying the types of ethics practitioners the code can be for, the roles the code can play for these practitioners, the controversies it should address, and the (institutional) arrangements needed to make a code effective. First answers will be presented at the symposium, inviting critical responses and constructive exploration.

Reference

Vermaas, P.E., S. Ammon and W. Mehnert (2025) Toward a Code of Conduct for Technology Ethics Practitioners, Journal of Responsible Innovation 12(1), 2440958

https://www.tandfonline.com/doi/full/10.1080/23299460.2024.2440958

Structure of the symposium

The symposium is organized to enable extensive discussion and exploration of the proposal to arrive at a code of conduct for technology ethics practitioners. In the first part of the symposium, the proposers will present the discussion paper in a 15-minute impulse talk, reflecting on the different roles practitioners can find themselves in, as well as potential challenges and hurdles that need to be addressed. In the second part there will be three comments of 5 minutes each: one by an expert on codes of conduct in technology organizations, one by a colleague who assessed the proposal critically, and one by a representative of the SPT, the organization that could implement the code of conduct in accordance with the proposal. After these shared perspectives, there will be ample time for the audience to join the discussion, in the first part with questions and in the second part with further exploration of possibilities and shared ideas.

Schedule

Introduction (10 minutes)

- Pieter Vermaas

Part 1:

Impulse (15 minutes)

- Sabine Ammon & Wenzel Mehnert: On roles and challenges of ethics-practitioners

Questions (10 minutes)

- Audience

Part 2:

Comments (15 minutes)

- Alfred Nordmann, Mareike Smolka, Ibo van de Poel

Exploration (30 minutes)

- Audience

Wrap up/plan of action (10 minutes)

- Pieter Vermaas

Total: 90 minutes

Participants

Moderator:

- Pieter Vermaas, Philosophy Department, TU Delft, the Netherlands; p.e.vermaas@tudelft.nl

Impulse:

- Sabine Ammon, Berlin Ethics Lab, TU Berlin, Berlin, Germany; ammon@tu-berlin.de

- Wenzel Mehnert, Berlin Ethics Lab, TU Berlin, Berlin, Germany; wenzel.mehnert@tu-berlin.de

Commentators:

- Alfred Nordmann, Department of Philosophy, TU Darmstadt, Darmstadt, Germany; nordmann@phil.tu-darmstadt.de

- Mareike Smolka, Knowledge, Technology & Innovation chair group, Social Sciences Department, Wageningen University & Research, Wageningen, The Netherlands; mareike.smolka@wur.nl

- Ibo van de Poel, Philosophy Department, TU Delft, the Netherlands; i.r.vandepoel@tudelft.nl

 
5:00pm - 6:30pm(Symposium) Teaching engineering ethics through aesthetic and embodied experiences
Location: Auditorium 8
 

Teaching Engineering Ethics through Aesthetic and Embodied Experiences

Chair(s): Filippo Santoni de Sio (TU Eindhoven), Jordi Viader Guerrero (TU Delft)

Engineers reshape the world through their design and operational decisions. Engineering ethics education, crucial in an era of rapid technological transformation, addresses the increasing societal demands for responsible innovation. As artificial intelligence and environmental challenges significantly impact our world, the responsibility of engineers extends beyond technology creation to ensuring that technology benefits the public good and sustainability. In our attempts to respond to these demands at our technical universities (TU Delft, TU Eindhoven), we observe one major challenge:

Engineering ethics education, even when done in connection with concrete cases and problems, tends to focus on: a) problem-solving approaches, rather than broader problem-making and sense-making content; and b) methods based on abstract reasoning as opposed to the embodied experience of students. Our claim is that current approaches are insufficient to develop moral and political sensitivity and the motivation to act.

Aim

Our aim with this interactive experimental session is to introduce and discuss novel approaches currently being developed and implemented at Delft University of Technology and TU Eindhoven to address this challenge, as well as to discuss an overview of the current state of Engineering Ethics Education from experiences beyond the Dutch context (Aalborg University, UCL).

Approach

We briefly introduce the challenges identified, but use most of the session to present and discuss our approaches to tackling them. Rather than only showcasing existing, tried-and-true activities, we hope to enlist participants to help us refine and improve our works-in-progress, and to think about how to sustain and finance these teaching methods in the future.

Content of the workshop:

Introduction

Teaching Philosophy through the Embodied Experience: Space and Power in the Classroom - Aarón Moreno Inglés - TU Delft

Performative arts in engineering ethics teaching – Filippo Santoni de Sio (TU/E)

Panel discussion with audience - Jordi Viader Guerrero (TU Delft)

Engineering as an Act of Care: Teaching Responsible Innovation through Empathy - Vivek Ramachandran, University College London

 

Presentations of the Symposium

 

Teaching Philosophy through the Embodied Experience: Space and Power in the Classroom

Aarón Moreno Inglés
TU Delft

When analysing power, French philosopher Michel Foucault proposed that power is not a fixed entity possessed by a “powerful” person. Rather, power runs through all strata of society and expresses itself in acts, through unequal power relations (Foucault, Discipline and Punish, 1975). Although this is a rather well-known framework in the social sciences and the humanities, engineering and other STEM students do not often get the chance to study the topic of power in depth. This activity provides an experimental answer to two questions: 1) How can we best introduce the topic of power to engineering students?; and 2) How can we use artistic methodologies to foster philosophical reflection?

The objective of this participatory presentation is to give engineering students a general introduction to the concept of power and to study some ways in which it can be defined and embodied. Through the use of narration and performance, participants will inquire into the relation between power and space, using the classroom as a ‘dispositif’ through which power flows. They will be asked to use the elements of the classroom to recreate scenes of surveillance and control in different spaces, as well as to represent examples of power relations (and ways of mitigating and resisting these) in their respective fields of work.

 

Engineering as an Act of Care: Teaching Responsible Innovation through Empathy

Vivek Ramachandran
University College London

Engineering education faces mounting pressure to prepare students to understand and address their roles in facing complex social and environmental challenges. While frameworks such as the United Nations Sustainable Development Goals (UNSDGs) are being adopted by universities, there are significant gaps in embedding these principles into engineering curricula. Care ethics — emphasizing empathy, relationships, and responsiveness — offers a pathway to seamlessly weave climate justice and ethics into engineering education.

This presentation examines how care ethics can guide the integration of Responsible Innovation (RI) within engineering programs. RI encompasses key dimensions such as sustainability, ethics, safety, and equity, diversity, and inclusion (EDI). Using project-based learning initiatives at University College London as a case study, I analyse barriers to RI integration, including faculty epistemologies, limited departmental resources, student resistance, and time constraints. To address these challenges, I propose a tailored support framework designed to empower faculty in embedding RI principles into their teaching, fostering sustainability and real-world contextualization, underpinned by a pedagogy of empathy.

Positioning engineering education within the framework of care ethics reimagines the discipline as inherently interdisciplinary, bridging education, climate justice, and technical expertise. This session emphasizes the urgent need for higher education institutions to prepare engineers who can critically engage with their responsibilities in creating and addressing societal and environmental challenges. The session will also invite participants to collaborate on refining and co-creating actionable recommendations, highlighting care-driven, context-specific strategies that transcend traditional technical boundaries.

 
6:30pm - 8:00pmSocial drinks
Location: Senaatszaal
Date: Thursday, 26/June/2025
8:15am - 8:45amRegistration
Location: Voorhof
8:45am - 10:00am(Papers) Disruptive technology I
Location: Blauwe Zaal
Session Chair: Philip Antoon Emiel Brey
 

The role of technology in conceptual disruption

Ibo van de Poel

TU Delft, Netherlands, The

I explore how we should understand the role of technology in conceptual disruption. I will argue that technology is potentially conceptually disruptive because it can create ‘novelty’, but that its disruptive potential is often not limited to a (sudden) interruption caused by classificatory uncertainty and may evolve over time, resulting in the disruption of larger conceptual clusters or schemes.

Following Marchiori and Scharp (2024), I will understand conceptual disruption as “an interruption in the normal functioning of a concept, cluster of concepts, or conceptual scheme.” My focus will particularly be on clusters of concepts and conceptual schemes. I will argue that such clusters and schemes have three relevant characteristics when it comes to conceptual disruption. First, they have a certain coverage; that is to say, they may to a greater or lesser extent cover phenomena (broadly conceived) that an agent encounters in the external world. Second, conceptual clusters or schemes have a certain internal coherence: they may be more or less coherent. Third, they have a certain functional fit, which means that they are more or less appropriate for fulfilling certain functions (like representation, causal reasoning, or moral evaluation).

I will then consider four types of novelty to which technology may give rise and consider how each of these may lead to conceptual disruption: 1) new entities (like artificial agents), 2) new phenomena (like friendship online), 3) new options for actions and practices (like doxing), and 4) new moral problems (like climate change).

I will suggest that to understand the disruptive potential of new technology in these cases, we should not just look at how the novelty created by technology may lead to classificatory uncertainty but also at more downstream disruptive effects on conceptual clusters and schemes. Initially, it may seem that the novelty created by technology only challenges the coverage of existing conceptual schemes. Moreover, it might seem relatively easy to address this challenge, either by applying existing concepts to the new phenomenon or by creating a new (more specific) concept that combines two existing concepts, like “friendship online” or “artificial agent.” However, this way of incorporating the new phenomenon into an existing conceptual scheme may decrease the internal coherence of that conceptual scheme. For example, the notion of “artificial agent” may decrease coherence because, unlike human agents, artificial agents do not have a mind and cannot be responsible. Consequently, the functional fit of the conceptual scheme may also diminish. For example, the notion of “artificial agency” may hinder rather than enable proper moral evaluation.

Reference

Marchiori, S., & Scharp, K. (2024). What is conceptual disruption? Ethics and Information Technology, 26(1), 18. https://doi.org/10.1007/s10676-024-09749-7



The good, the bad, and the disruptive: On the promise of niche construction theory for technology ethics

Jeroen Hopster1, Elizabeth O'Neill2

1Utrecht University, Netherlands, The; 2Eindhoven University of Technology

The core premise of niche construction theory is that organisms are not only shaped by but actively shape their living conditions, thereby co-determining their evolutionary fate (Lala, Odling-Smee & Feldman 2000). Over the last two decades, interest in niche construction theory has extended beyond the field of evolutionary biology. Niches of various sorts have been identified and studied from multiple disciplinary angles, including by scholars of technology (Schot & Geels 2007). However, the topic has remained peripheral in the philosophy of technology.

In this presentation, we investigate the promise of niche construction as a framework for studying technological changes in society, in particular in the domain of social morality, drawing on examples of ‘intimate technologies’. We first argue that it is useful to think of modified socio-moral environments as ‘moral niches’, in which particular moral norms and institutions are likely to evolve and persist. We subsequently investigate the process of ‘technomoral niche construction’ (Hopster et al. 2022). In a technomoral niche, the human-modified environment, partially constituted by technology, influences the moral development of agents who act within the niche. Drawing on historical examples, we show that technomoral niches can be stabilized by enduring institutions, conceptual systems, and material technologies, but that moral niches can also be destabilized by emerging technologies.

Our aim in fleshing out the concept of technomoral niche construction is twofold. First, we seek to arrive at a better understanding of the different roles of technologies in the construction and disruption of technomoral niches, and to identify at what scale(s) technomoral niche construction is best investigated – (a) phylogenetic, (b) sociogenetic, (c) ontogenetic, or (d) microgenetic (Coninx 2023). Second, we wish to clarify the normative dimensions of technomoral niche construction. What is the value of stable technomoral niches, and what are the risks, harms, and benefits of niche disruptions?

Literature

Aaby, B. H., & Ramsey, G. (2022). Three kinds of niche construction. The British Journal for the Philosophy of Science. https://doi.org/10.1093/bjps/axz054

Altman, A., & Mesoudi, A. (2019). Understanding agriculture within the frameworks of cumulative cultural evolution, gene-culture co-evolution, and cultural niche construction. Human Ecology 47: 483-497. https://doi.org/10.1007/s10745-019-00090-y

Boyd, Robert, Peter J. Richerson, and Joseph Henrich. (2011). The Cultural Niche: Why Social Learning is Essential for Human Adaptation. Proceedings of the National Academy of Sciences 108: 10918–10925. doi:10.1073/pnas.1100290108

Coninx, S. (2023). The dark side of niche construction. Philosophical Studies, 180(10), 3003-3030.

Dean, Timothy. (2014). Evolution and Moral Ecology. PhD thesis, University of New South Wales. https://doi.org/10.26190/unsworks/17194

Henrich, Joseph. (2016). The Secret of our Success: How Culture is Driving Human Evolution, Domesticating our Species, and Making us Smarter. Princeton (NJ): Princeton University Press.

Henrich, Joseph, Robert Boyd, and Peter J. Richerson. (2008). Five Misunderstandings About Cultural Evolution. Human Nature 19 (2): 119–137. doi:10.1007/s12110-008-9037-1

Hopster, J.K.G., Arora, C., Blunden, C., Eriksen, C., Frank, L.E., Hermann, J.S., Klenk, M.B.O.T., O’Neill, E.R.H. and Steinert, S., (2022). Pistols, pills, pork and ploughs: the structure of technomoral revolutions. Inquiry, pp.1-33. DOI:10.1080/0020174X.2022.2090434

Kendal, J., Tehrani, J. J., & Odling-Smee, J. (2011). Human niche construction in interdisciplinary focus. Philosophical Transactions of the Royal Society B: Biological Sciences, 366(1566), 785-792. doi:10.1098/rstb.2010.0306

Lala, Kevin N., and Michael J. O’Brien. (2011). Cultural Niche Construction: An Introduction. Biological Theory 6 (3): 191–202. doi:10.1007/s13752-012-0026-6

Lala, K. N., Odling-Smee, J., and Feldman, M. W. (2000). Niche Construction, Biological Evolution, and Cultural Change. Behavioral and Brain Sciences 23: 131–75.

Mesoudi, Alex. (2016). Cultural Evolution: A Review of Theory, Findings and Controversies. Evolutionary Biology 43 (4): 481–497. doi:10.1007/s11692-015-9320-0

Odling-Smee, J., Laland, K. N., and Feldman, M. W. (2003). Niche Construction: The Neglected Process in Evolution. Princeton, NJ: Princeton University Press.

Richerson, Peter J., and Robert Boyd. (2005). Not by Genes Alone: How Culture Transformed Human Evolution. Chicago (IL): University of Chicago Press.

Schot, J., & Geels, F. W. (2007). Niches in evolutionary theories of technical change: A critical survey of the literature. Journal of evolutionary economics, 17, 605-622.

Scott, Tony J. (2009). The evolution of moral cognition. PhD thesis, University of Wellington. https://core.ac.uk/download/pdf/41336702.pdf

Sterelny, Kim. 2010. Minds: extended or scaffolded? Phenomenology and the Cognitive Sciences 9: 465–481. https://doi.org/10.1007/s11097-010-9174-y.

Severini, Eleonora. (2016). Evolutionary Debunking Arguments and the Moral Niche. Philosophia 44 (3): 865–875. doi:10.1007/s11406-016-9708-9

Smyth, N. (2020). A genealogy of emancipatory values. Inquiry, 1-30.



The sense of disruptive innovation

Georgios Tsagdis

Wageningen University and Research

After the age of grand narratives, an age of “grand challenges” appeared to dawn in the EU’s rhetoric. This was quickly replaced by an age of “wicked problems,” such as climate change, immigration, and socio-economic inequality. Such problems are not merely complex or pernicious; they are world-disruptive. Moreover, according to major international actors such as the EU, countering the disruption is directly premised on the capacity to innovate. Indeed, technologies such as AI, with the greatest potential for disruption, are seen as the last line of defence against disruption.

In order to appraise this seeming contradiction, the talk begins by tracing the conceptual limits and critiquing the limitations of the notion of disruptive innovation. Emerging in the mid-90s from Clayton Christensen’s monograph The Innovator’s Dilemma, disruption theory argued that the gradual development of certain technologies primes established companies to produce increasingly refined and capable products, which paradoxically leads to their eventual demise. What was crucial in this theory from the outset is that innovation was understood in technical terms, while disruption was understood in economic terms. This narrow configuration became increasingly apparent as the theory was popularised and adopted in fields outside business and management theory. Its instigators attempted to reclaim the theory as a rigorous analytical tool by clarifying strict criteria that would disqualify a service such as Uber as a disruptive innovation, even though the general consensus—among civilians, taxi-drivers and legislators—saw Uber as undeniably disruptive. A more comprehensive theory of disruptive innovation was needed.

Instead of looking at specific attempts to expand the original scope of disruptive innovation, the talk turns, in the second part, to thematise disruption at the most expansive scale, that is, the world. The world is understood here not as the totality of beings, or a place that contains this totality, but as the horizon of meaning. Drawing on the interventions of Jean-Luc Nancy on the problem of the world in the twenty-first century, the talk identifies “world-disruption” in three senses: the first two consist in a certain deficit, as the world can no longer be experienced as a “well-composed ensemble,” that is, a “cosmos”; moreover, the world can no longer be ordered either at the “universal” level or at the local “natural” or “cultural” levels. The third sense amounts to the opposite, albeit concurrent, tendency of an excessive, overwhelming proliferation of “worlds.” Having explicated this diagnosis, the talk examines the role Nancy attributes to technology (“ecotechnics”) in effectuating this triple disruption, aiming to offer an alternate way of understanding the novum at the heart of innovation.

 
8:45am - 10:00am(Papers) Postphenomenology
Location: Auditorium 1
Session Chair: Udo Pesch
 

Developing a Posthuman and Postphenomenological AI Literacy

Richard S Lewis

University of Washington, United States of America

Building on the postphenomenological framework of human-technology relations and posthuman (e.g. Barad; Braidotti; Haraway) theories of the subject, this paper develops a comprehensive approach to AI literacy that moves beyond traditional digital or media literacy frameworks. While existing approaches to AI literacy often focus primarily on technical understanding or critical analysis of AI systems, I argue that we need a more fundamental reconceptualization of how AI mediates and co-constitutes human subjectivity through what I term "intrasubjective mediation."

Drawing from my previous work on the intrasubjective mediating framework, which identifies six key groups of relations (technological, sociocultural, mind, body, space, and time) that constitute the human subject, I demonstrate how this framework can be specifically applied to human-AI relations. The framework reveals how AI systems do not simply mediate our relationship with the world in instrumental ways, but rather transform our very mode of being through complex interrelations across multiple dimensions of experience.

I argue that this approach to AI literacy requires understanding these transformative effects at both microperceptual and macroperceptual levels. At the micro level, we must attend to how specific AI interactions shape our embodied experience, cognitive processes, and behavioral patterns. At the macro level, we need to examine how AI systems are embedded within broader sociocultural contexts and power relations that influence their development and impact.

This paper demonstrates how a posthuman and postphenomenological approach to AI literacy can help by situating AI relations within a complex ecosystem of human becoming rather than treating them in isolation. It also acknowledges both the enabling and constraining aspects of human-AI relations without falling into either technological determinism or naive instrumentalism. By fostering critical awareness of how AI shapes human subjectivity across these multiple dimensions, we increase our chances of exercising agency in strategically engaging with AI systems as part of our technological becoming. My goal is to develop a more nuanced understanding of our complex co-evolution with AI technologies.



The temporal aspect of multistability: Extending postphenomenology through Bergson's theory of time

Shigeru Kobayashi

Institute of Advanced Media Arts and Sciences, Japan

Modern advanced industrial technologies, particularly AI, present crucial challenges to the philosophy of technology, as evidenced by ethical and political proposals. AI stands at the center of the ‘Intimate Technological Revolution,’ fundamentally transforming our daily experiences and relationships while raising concerns about ethics and politics, as well as technoableism (Shew and Earle 2024). Postphenomenology, a contemporary philosophy based on pragmatism that explores human-technology relations through the key concept of multistability (Bosschaert and Blok 2023; Ritter 2021; 2024; Rosenberger and Verbeek 2015), has gained attention as a framework for ‘empirically’ examining modern technologies. However, critics (Coeckelbergh 2022; Dmytro Mykhailov and Nicola Liberati 2023; Lemmens 2022; Pavanini 2022; Smith 2015; Zwier, Blok, and Lemmens 2016) argue that it can only address ‘small things’ rather than broader technological issues—a limitation particularly evident in its approach to AI ethics—leading to various attempts at expansion to confront ‘big things’ (Bosschaert and Blok 2023; Claassen 2024; Coeckelbergh 2022; Ritter 2024; Romele 2021; Rosenberger 2023; Schürkmann and Anders 2024; Van Den Eede 2022; Wellner 2022).

I propose extending postphenomenology through Bergson's theory of time. Bergson, known for his significant influence on process philosophy, which is sometimes contrasted with postphenomenology, provides valuable insights into the temporal aspects of technological experience. Following Bergson's temporal theory, which puts emphasis on aspect rather than tense, Hirai (2019) distinguishes between ‘process’ (present becoming, imperfective) and ‘event’ (past being, perfective). Through this distinction, we can identify a limitation of current postphenomenology: it handles only ‘events’ and statically describes them from a third-person view in a temporal exterior. By contrast, the appropriation of new technology occurs through a dynamic ‘process’ and can only be described from a first-person view in a temporal interior. The significance of distinguishing between ‘event’ and ‘process’ becomes particularly apparent in disability contexts, where the temporal aspect of technological integration is often overlooked.

In engineering and industry, disabled people are typically viewed as persons requiring additional support from non-disabled people. Consequently, only non-disabled people participate in the process of making technology while considering possibilities and risks, with disabled people gaining access only after the transition to events and stabilization. However, significant disparities often exist between non-disabled people’s assumptions and the lived reality of disabled people (Shew and Earle 2024). Moreover, ‘hacks’—finding new meanings in things through creative appropriation—occur regularly in disabled people's everyday lives. Since these hacks represent discovering stabilities different from the designer-intended dominant stability, disabled people's participation in the process stage is essential for meaningful technological development.

Our research focuses on the technological possibilities for people who have engaged in artistic expressions such as painting, theater, and dance to continue their activities despite age-related or illness-induced physical changes. We formed teams comprising disabled people, care staff, technologists, and artists to explore these possibilities collaboratively. As part of this project, we conducted workshops using tone-morphing AI (Neutone, Inc., n.d.). While developers deemed this AI a studio tool for music makers creating new sound materials, our workshop process revealed shifts in stability: 1) a device for transforming everyday sounds into novel ones, 2) a medium that cultivates intentionality toward various sounds embedded in the lifeworld. Moreover, participants experienced well-being from their respective viewpoints within this process, discovering new modes of artistic expression through technological mediation. Based on this case study, I propose the concept of ‘temporal multistability,’ which extends postphenomenology as an analytical framework for human-technology relations occurring in dynamic temporal aspects.

References

Bosschaert, Mariska Thalitha, and Vincent Blok. 2023. “The ‘Empirical’ in the Empirical Turn: A Critical Analysis.” Foundations of Science 28 (2): 783–804. https://doi.org/10.1007/s10699-022-09840-6.

Claassen, Kristy. 2024. “There Is No ‘I’ in Postphenomenology.” Human Studies 47 (4): 749–69. https://doi.org/10.1007/s10746-024-09727-4.

Coeckelbergh, Mark. 2022. “Earth, Technology, Language: A Contribution to Holistic and Transcendental Revisions After the Artifactual Turn.” Foundations of Science 27 (1): 259–70. https://doi.org/10.1007/s10699-020-09730-9.

Dmytro Mykhailov and Nicola Liberati. 2023. “Back to the Technologies Themselves: Phenomenological Turn within Postphenomenology.” https://doi.org/10.1007/s11097-023-09905-2.

Hirai, Yasushi. 2019. “Event and Mind: An Expanded Bergsonian Perspective.” In Understanding Digital Events: Bergson, Whitehead, and the Experience of the Digital, edited by David Kreps, 45–58. London: Routledge. https://doi.org/10.4324/9780429032066-4.

Lemmens, Pieter. 2022. “Thinking Technology Big Again. Reconsidering the Question of the Transcendental and ‘Technology with a Capital T’ in the Light of the Anthropocene.” Foundations of Science 27 (1): 171–87. https://doi.org/10.1007/s10699-020-09732-7.

Neutone, Inc. n.d. “Neutone.” Accessed January 4, 2025. https://neutone.ai/.

Pavanini, Marco. 2022. “Multistability and Derrida’s Différance: Investigating the Relations Between Postphenomenology and Stiegler’s General Organology.” Philosophy & Technology 35 (1): 1. https://doi.org/10.1007/s13347-022-00501-x.

Ritter, Martin. 2021. “Postphenomenological Method and Technological Things Themselves.” Human Studies 44 (4): 581–93. https://doi.org/10.1007/s10746-021-09603-5.

———. 2024. “Technological Mediation without Empirical Borders.” In Phenomenology and the Philosophy of Technology, edited by Bas de Boer and Jochem Zwier, 121–42. Cambridge, UK: Open Book Publishers. https://doi.org/10.11647/obp.0421.05.

Romele, Alberto. 2021. “Technological Capital: Bourdieu, Postphenomenology, and the Philosophy of Technology Beyond the Empirical Turn.” Philosophy & Technology 34 (3): 483–505. https://doi.org/10.1007/s13347-020-00398-4.

Rosenberger, Robert. 2023. “On Variational Cross-Examination: A Method for Postphenomenological Multistability.” AI & SOCIETY 38 (6): 2229–42. https://doi.org/10.1007/s00146-020-01050-7.

Rosenberger, Robert, and Peter-Paul Verbeek. 2015. “A Field Guide to Postphenomenology.” In Postphenomenological Investigations: Essays on Human–Technology Relations, 9–41. London: Lexington Books.

Schürkmann, Christiane, and Lisa Anders. 2024. “Postphenomenology Unchained: Rethinking Human-Technology-World Relations as Enroulement.” Human Studies, July. https://doi.org/10.1007/s10746-024-09746-1.

Shew, Ashley, and Joshua Earle. 2024. “Cyborg-Technology Relations.” Journal of Human-Technology Relations 2 (1). https://doi.org/10.59490/jhtr.2024.2.7073.

Smith, Dominic. 2015. “Rewriting the Constitution: A Critique of ‘Postphenomenology.’” Philosophy & Technology 28 (4): 533–51. https://doi.org/10.1007/s13347-014-0175-6.

Van Den Eede, Yoni. 2022. “Thing-Transcendentality: Navigating the Interval of ‘Technology’ and ‘Technology.’” Foundations of Science 27 (1): 225–43. https://doi.org/10.1007/s10699-020-09749-y.

Wellner, Galit. 2022. “Digital Imagination, Fantasy, AI Art.” Foundations of Science 27 (4): 1445–51. https://doi.org/10.1007/s10699-020-09747-0.

Zwier, Jochem, Vincent Blok, and Pieter Lemmens. 2016. “Phenomenology and the Empirical Turn: A Phenomenological Analysis of Postphenomenology.” Philosophy & Technology 29 (4): 313–33. https://doi.org/10.1007/s13347-016-0221-7.



Technologically mediated deliberation: bringing postphenomenology to phronesis

Andrew Simon Zelny

University of Edinburgh, United Kingdom

Within the postphenomenological tradition, a great deal of work on the theory of technological mediation has been directly relevant to the Aristotelian virtue of phronesis, commonly translated to practical wisdom (Ihde 1979, Verbeek 2011, Kudina 2023). Existing mediation accounts of perception, praxis, ethics, and value formation readily map onto core elements of phronesis, allowing for a full technological mediation account of practical wisdom to be developed. Even though a great deal of work within the field maps onto phronesis, one fundamental aspect has not been accounted for: the technological mediation of practical deliberation.

In line with my wider project of forming a technological mediation account of phronesis, I argue that the intellectual virtue of euboulia, i.e. good practical deliberation on how to actualize a flourishing life, is a technologically mediated capacity that is co-constituted by the human-technology relationship. How we reason about the means towards actualizing good ends and how we act upon those reasons cannot be fully understood as the radically individualistic rational capacity of a detached deliberator: our ability to practically reason is shaped and made possible through our relationships with deliberative technologies. I will consider two traditional examples of deliberative technologies and their modern iterations, journaling and the public forum, as paradigm cases of technologically mediated deliberation, before turning to consider the emerging technology of generative AI in decision-making contexts and its potential for deleterious effects on practical deliberation. Using these illustrative cases, a technological mediation account of practical deliberation can be established, completing a holistic mediation theory of phronesis.

After giving this descriptive account of a technologically mediated phronesis grounded in real-world technological examples, I will consider the normative implications of practical wisdom being co-constituted by technological artifacts and how both virtue ethics and postphenomenology can benefit from a mediation theory of phronesis. Since phronesis is a virtue essential for the fulfillment of a flourishing life, it stands to reason that we should have an understanding of how new and emerging deliberative technologies affect the quality and expression of our practical wisdom. There arises a moral imperative for designers, policy makers, and even the users of technology to consider how deliberative technology mediates practical wisdom and how we might promote the development of phronesis with our relationships to these technologies. If we have an interest in the development of euboulia that is essential for phronesis, we must have an understanding of how our relationships with technology shape that virtue.

 
8:45am - 10:00am(Papers) Human - Technology
Location: Auditorium 2
Session Chair: Julia Hermann
 

Human-technology relations down to earth

Steven Dorrestijn1, Wouter Eggink2

1Saxion University of Applied Sciences, the Netherlands; 2University of Twente, the Netherlands

In this paper we will discuss how philosophy of technology can address and hopefully help advance a much-needed reorientation within design of our relation to nature and earth. There is an ecological crisis, and technology has everything to do with it. Therefore, the human application of technology urgently needs to be adjusted to ecology in a sounder way. For this, both design approaches and philosophy of technology are in need of a more explicit orientation towards nature/earth (Latour, 2017; Lemmens, Blok & Zwier, 2017).

The use of the philosophy of human-technology relations for the present-day call to preserve the earth against damaging technology is complicated. For at the heart of this philosophy is a philosophical questioning of the meaning of nature and technology and a blurring of the distinction between them. Exemplary is Latour’s allegation, in his earlier work, that all things and humans are ‘hybrids’ (We Have Never Been Modern). While modernity can be characterized by a technological flight lifting us up from our natural condition, we have gotten too far away, and it is now time for a reappraisal of our bonds to the earth and to nature. Therefore, the challenge for the philosophy of technical mediation and of human-technology relations is to consider how to advance a mediation approach to the complex of humans, technology, and nature in such a way that it averts a forgetting of nature and acknowledges earth/nature in the right way. How can human-technology relations remain, or be brought back, ‘down to earth’ (Latour, 2018)?

To answer this question we engage in design research and design philosophy, following an approach from the practical turn in philosophy of technology (anonymized for review, 2018b; 2021). Firstly by examining the work of Koert van Mensvoort who applies philosophy of technology to design and advances the notion of ‘next nature’ (van Mensvoort & Gerritzen, 2005; van Mensvoort & Grievink, 2015); secondly by discussing a project by design students about design and the relation to nature in the context of food (anonymized for review, 2023). Next we will research the place of earth/nature in the (post)phenomenological framework of “human-technology-world relations” (Ihde, 1990, 1993; Verbeek, 2005, 2015) and in the “Product Impact Tool” which offers a practical elaboration of the idea of technical mediation (anonymized for review, 2014; 2017).

The philosophical analysis of the relations between technology and humans has proven useful in design practice with respect to improving human-technology interaction (anonymized for review, 2018; 2020; 2021) and considering social effects (anonymized for review, 2020a; 2020b; 2020c; 2021). However, the discussed design case emphasises that the role of nature/earth is underexposed in this human-technology relations approach. Therefore, building on our analysis of the framework of “human-technology-world relations” we will present a revised design of the Product Impact Tool; a product impact tool - down to earth (anonymized for review, 2022).

References:

Ihde, D. (1990). Technology and the lifeworld: From garden to earth. Bloomington: Indiana University Press.

Ihde, D. (1993). Postphenomenology: Essays in the Postmodern Context. Chicago: Northwestern University Press.

Latour, B. (2017). Facing Gaia: Eight lectures on the new climatic regime.: John Wiley & Sons.

Latour, B. (2018). Down to earth: Politics in the new climatic regime. John Wiley & Sons.

Lemmens, P., V. Blok and J. Zwier (2017). Toward a terrestrial turn in philosophy of technology. Techné: Research in Philosophy and Technology 21(2/3): 114-126. DOI: https://doi.org/10.5840/techne2017212/363

van Mensvoort, K. and M. Gerritzen, Eds. (2005). Next Nature. Rotterdam: BIS.

van Mensvoort, K. and H.-J. Grievink, Eds. (2015). Next Nature Catalog; Nature changes along with us. Barcelona: Actar.

Verbeek, P.-P. (2005). What Things Do – Philosophical Reflections on Technology, Agency, and Design. Penn State: Penn State University Press.

Verbeek, P.-P. (2015). Beyond Interaction; a short introduction to mediation theory. Interactions 22(3): 26-31.



Special obligations from relationships with robots: Beyond the relational approach to moral status

Hayate Shimizu

Hokkaido University, Japan

In this paper, I critique the relational approach to the moral status of robots and aim to provide a relational account of the moral treatment of robots that does not rely on moral status. Moral status is a concept that corresponds to general obligations directly owed by all moral agents. Traditionally, discussions on the moral status of robots have been dominated by properties approaches, which evaluate moral status based on intrinsic characteristics such as consciousness or sentience (e.g., Bryson, 2010; Sparrow, 2004; Mosakas, 2021; DeGrazia, 2022). However, recent scholarship has introduced relational approaches, which shift the focus from intrinsic properties to the relationships and interactions between humans and robots (e.g., Gunkel, 2018, 2023; Coeckelbergh, 2010, 2014, 2021).

According to the relational approaches, the moral status we extend to robots is not determined by their intrinsic properties but by the relationships we form with them. However, grounding moral status in subjective relationships or interactions with robots from a human perspective carries highly problematic implications. I call this the “problem of nullifying moral status due to the overextension of moral status.” In other words, it can extend moral consideration to entities far beyond robots—for instance, ranging from stones that resemble human faces to human-like virtual beings, or even to natural and artificial objects indiscriminately. If subjective interactions alone were sufficient to grant moral status, it could lead to an overextension of moral consideration, diluting the concept's practical and ethical significance.

In this paper, I propose the concept of special relational obligations that do not depend on moral status but instead arise from the unique relationships humans form with robots. Specifically, I focus on the mutual dependency that develops when a robot is adopted into a person’s life and becomes a dependent being. This relationship mirrors the relationships between pet owners and their pets. By adopting a pet, the owner creates a context where the pet becomes dependent on the owner for survival and care, as it cannot live independently. Similarly, robots rely on their users for maintenance and functioning. At the same time, the owner becomes dependent on the pet, finding emotional support and companionship in the relationship. In a parallel way, users of robots often rely on them for practical assistance or emotional support, creating a mutual structure of dependency. I argue that the establishment of this dependency gives rise to special relational obligations.

These special relational obligations are inherently context-dependent and particularized, applying only to those who establish such relationships with robots. Unlike general obligations tied to moral status, these obligations do not extend universally to all moral agents. I argue that the moral treatment of robots should be grounded in these relational obligations, avoiding the problematic implications of the relational approach to moral status.



Transforming technology: Marcuse and Simondon on technology, alienation, and work

Antonio Oraldi

University of Lisbon, Centre of Philosophy (CFUL), Portugal

This presentation explores the historical and conceptual connections between Herbert Marcuse’s and Gilbert Simondon’s philosophies of technology, focusing on the ways in which their theories envisage the political necessity of a transformation of technology. I begin by positioning these two philosophers as representing distinct approaches to technology. Marcuse’s theory of technology is associated with the Frankfurt School critique of instrumental reason and capitalist domination, while Simondon draws on a French tradition that includes Bergson’s philosophy, Leroi-Gourhan’s anthropology of technical exteriorization, and Canguilhem’s organology. Despite the differences between these two philosophers and their respective traditions (Angus, 2019; Bardin, 2018; Toscano, 2009), I argue that both thinkers converge significantly on the relationship between technology, work, and liberation from alienation.

Historically, this convergence is evident in Marcuse’s citations of Simondon’s On the Mode of Existence of Technical Objects (1958) in key passages of One-Dimensional Man (1964), where Marcuse develops his critical theory of technology. Conceptually, both Marcuse and Simondon consider alienation as a fundamental pathology of industrial-capitalist society, characterized by a disconnection between subject and world and rooted in psychic, socio-economic, and technological factors (Marcuse, 1964; Simondon, 1958). Their critiques of alienation lead them to theorize liberation as achievable through a transformation of technology and its integration with human life.

Besides their shared concern for technological alienation (Barthélémy, 2008) and their focus on how technology can materialize values (Feenberg, 2017), both thinkers emphasize the necessity of detaching technology from the paradigm of work. For Marcuse, this involves critiquing the dominant technological rationality based on the intensive exploitation of human and non-human energies and advocating for the development of alternative technologies inseparable from broader social and anthropological transformations (1964; 1969). Simondon, on the other hand, argues that technology should not be reduced to mere utility (1958) and that work reproduces the Aristotelian hylomorphic schema (2020 [1964]), which separates form from matter and leads to a reductive understanding of technical operations as an active domination of inherently passive materials. In other words, when technology is reduced to work and utility, it becomes a source of alienation. While the critique of work in Marcuse’s oeuvre emphasizes socio-economic elements, Simondon stresses the epistemic aspects.

Overall, I propose that the socio-economic and the epistemological critiques can mutually reinforce each other. Simondon’s account of the modes of existence of technicity (1958) provides a more precise framework for realizing Marcuse’s vision of a liberatory technology (1969), which, as Feenberg (2023) noted, remains somewhat vague and fraught with internal tensions. Conversely, Marcuse’s Marxist critique of capitalist automation (1964) highlights socio-economic barriers to achieving Simondon’s envisioned technical culture (1965). Together, their theories indicate the necessity of transforming technology beyond its association with work, toward artful, playful, and inventive modes of engagement.

References

Angus, I. (2019) Logic of Subsumption, Logic of Invention, and Workplace Democracy: Marx, Marcuse, and Simondon. Philos. Technol. 32, 613–625. https://doi.org/10.1007/s13347-018-0324-4

Bardin, A. (2018) Philosophy as Political Technē: The Tradition of Invention in Simondon’s Political Thought. Contemp Polit Theory 17, 417–436. https://doi.org/10.1057/s41296-018-0210-y

Barthélémy, J. H. (2008) Simondon ou l’encyclopédisme génétique. Paris: PUF.

Feenberg, A. (2017). Technosystem: The Social Life of Reason. Harvard University Press.

Feenberg, A. (2023) The Ruthless Critique of Everything Existing: Nature and Revolution in Marcuse's Philosophy of Praxis. Verso.

Marcuse, H. (1964) One-dimensional Man: Studies in the ideology of advanced industrial society. Routledge.

Simondon, G. (1958) Du mode d'existence des objets techniques. Aubier.

Simondon, G. (1965) “Culture et technique”. Bulletin de l’Institut de philosophie, 55-6(3-4).

Simondon, G. (2020) Individuation in Light of the Notions of Form and Information. University of Minnesota Press.

Toscano, A. (2009) Liberation Technology: Marcuse's Communist Individualism. Situations: Project of the Radical Imagination, 3, 1.

 
8:45am - 10:00am(Papers) Virtue ethics I
Location: Auditorium 3
Session Chair: Maaike Eline Harmsen
 

Does technology transform phronesis? A foray into the virtues and vices of procycling

Tiago Mesquita Carvalho

Faculty of Arts and Humanities of the University of Porto, Portugal

Alasdair MacIntyre's actualization of Aristotelian ethics provides a case for understanding human flourishing as embedded in the context of human practices, where existing rules, examples, and patterns of excellence guide the achievement of the goods internal to a practice. Notwithstanding, MacIntyre didn't provide a rationale for analysing the transformation and even destruction of practices that technology triggers, and how some of their contextual skills and virtues might become outdated in the process. On the other hand, Shannon Vallor’s approach builds on the interface between technology and virtues and considers how even moral deskilling can occur on various occasions. This is especially relevant today, as most human activities, tasks, jobs, and professions are undergoing increasing digitalization, in which skills and the self are transformed according to algorithms. The pace of technological progress can bring epistemic and moral disruption to practices, erasing or disguising the availability of relevant opportunities and situational affordances for acting and for forming moral habits and practical reasoning (phronesis). There is nevertheless the potential for new emerging technologies to enable moral reskilling or upskilling processes.

In this presentation, I will argue that professional cycling deserves attention for these matters. Procycling can be understood as a practice where virtues and technology have always interacted to shape what it means to flourish as a rider. First, one must understand the evolution of professional cycling as tightly coupled to technology and how technology has been transforming the sport and riders’ techno-moral skills. Procycling encompasses a wide range of race styles and disciplines: the skills required to win a race depend not only on individual qualities but also on controllable and uncontrollable factors like the nature of the parcours, weather conditions, and race dynamics. Undoubtedly, being genetically gifted plays a role in being a good rider. Still, the virtues that support riders in achieving the internal goods of racing play a decisive role in delivering good performances: courage, patience, self-control, and fortitude are pivotal techno-moral skills that athletes need to cultivate to endure the hardships of a rigorous training regimen or the discipline to return after setbacks like crashes.

Race radio, introduced in the 1990s, is a case in point of how riders need to adapt their skills and judgments to excel in a new technomoral environment. However, like many other sports, procycling has lately been subjected to increasingly scientific and technological approaches. Performance, nutrition, training, and resting have shifted to more AI- and data-driven methods, where metrics like watts/kg, calories, heart rate, and VO2 max are the determining factors for gauging capabilities, giving insight into a rider's potential. What is now considered the image of a «good rider» is almost tantamount to the performance of a quantified self: «my numbers have never been better» is a usual mantra. While riders of old had a hermeneutical skill of self-interpreting bodily sensations and feelings - «racing on instinct» - and reading the race situation to assess their chances for attacking, nowadays numbers ultimately dictate outcomes that were once dependent on moral skills.



Creative machines & human well-being: an ethical challenge for the fully flourishing life?

Matthew Dennis

TU Eindhoven, Netherlands, The

Generative artificial intelligence (GenAI) is rapidly changing the creative industries and displacing the role of creative professionals. This has captured the attention of researchers from a wide range of disciplines, especially those investigating the future of creative work (Manovich 2024). GenAI can radically expand what creatives can do, speed up their activities, eliminate boring tasks, and make specialist skills otiose. Precisely because of the astonishing abilities of this technology, many creatives view it with a mix of alarm and scepticism because GenAI threatens to make the most distinctive part of their vocation redundant. These concerns are important. GenAI has only just started to change the creative industries, and the consequences of deploying it will fundamentally restructure creative work as we know it. Nevertheless, job and industry disruption are not the only reasons to be wary. Creative machines may adversely affect the well-being of creative professionals, as well as others who do creative work. This creates a new ethical challenge.

The connection between exercising one’s creativity and one’s well-being is often made in anecdotal contexts. Many creative activities (dancing, singing, life drawing, pottery classes) are explicitly promoted as ways to pursue self-care and enhance our well-being. There is empirical support for this link too. Recently, a comprehensive meta-analysis found a significant positive correlation (r = .14) between well-being and what the authors call ‘everyday creativity’ (Acar et al. 2020). Furthermore, in many professions (not just creative ones), workers vociferously report that they value and enjoy the creative part of their job most (Deuze 2025). This requires us to ask: How does outsourcing our creative capacities to creative machines, such as GenAI, affect our well-being? What happens to our capacity to enjoy work when creative machines can perform many creative tasks better than we can ourselves?

This presentation will show how philosophical insights into how creativity and well-being are connected can illuminate the extent to which creative machines are useful in creative life. I will begin by examining the empirical literature that claims there is a robust connection between creativity and well-being. I will then show that we can understand this connection in three ways, which map onto three distinctions in the philosophical literature about well-being (hedonist, desire satisfaction, objective list). My analysis will prepare for an extended discussion of the role of creative machines in the good life, focusing on the creative tasks that GenAI cannot do. Finally, I will identify the ethical dangers of outsourcing creative tasks to GenAI by exploring Joanna Maciejewska’s viral slogan: ‘I want AI to do my laundry and dishes so I can do my art and writing. Not for AI to do my art and writing, so I can do my laundry and dishes’ (Maciejewska 2024). The slogan garnered 3.2M views on X, and resonated with creative professionals across the world.



Virtual Pregnancy

Daria Bylieva

Peter the Great St.Petersburg Polytechnic University, Russian Federation

Technology is changing human life in every possible way. However, bearing a child came under its influence only in the 20th century. The development of reproductive technologies and their promotion paved the way for the separation of motherhood from bearing children, which began to be understood as a technical process. While this has been studied for birth-control technologies as well as incubators, this analysis concerns the representation of pregnancy in computer games, which reflect people's beliefs and desires. These games address the desire to ignore or reduce the process of bearing a child, both on the part of the creators of the game and the players. Although carrying a baby until birth outside the body of a future mother is becoming possible due to progress in biomedical technologies, the justification for this phenomenon owes much to the increasing virtuality of existence: the growing digitalization of all aspects of life, including the most intimate ones, makes life more like a video game. The EctoLife project, presented in 2022, combines the latest reproductive and digital technologies and prepares society to accept virtual game pregnancy, in which a baby can be ordered online, a doctor takes your cells, and the "child creation" technology is launched in an artificial capsule in a special factory (or in an apartment). Virtual pregnancy is the expectation of the birth of a child as in a game, where the character is not subject to any obligations or restrictions; one need only wait a while and the child will appear. At the same time, if desired, one can install a special mod that allows one to look at the unborn child and send it greetings or music. In a digital society, capacities for monitoring and control increase, but there are also new ways of representing the future baby, which occupies a strictly defined place in life, like a capsule in the corner of a room or a smartphone application that one can turn to when there is a desire to remember the unborn baby. The technologization of intimate processes of human life is accompanied by logical arguments and quantitative research, while the arguments against are based on old-fashioned common sense and emotional arguments.

 
8:45am - 10:00am(Papers) Social media
Location: Auditorium 4
Session Chair: Luca Possati
 

"But I did not mean to say that". On affective utterances on social media and their collective epistemic effects

Lavinia Marin

TU Delft, Netherlands, The

What are the attitudes with epistemic import expressed by users on social media? It seems that many of the claims uttered by social media users are of epistemic importance: knowledge claims, belief claims, reasoning, bringing up evidence or questioning others' evidence, and debunking. Taken as assertions, these claims do affect their audiences and their epistemic agency.

Recent work on the epistemology of social media has argued that users do not so much put forth epistemic claims, but rather express allegiances and signal belonging to a group. Social media claims (posts, comments, memes, even images and reactions) would then be mostly special kinds of expressive speech acts (Arielli, 2018; Marsili, 2021), which should be analyzed by focusing instead on their intended effect on the audience, be that expressive, persuasive, inciting action, or giving rise to an emotion.

It has already been shown that on social media platforms, the affective and the epistemic are intertwined, sometimes indistinguishable. This intertwining is best detected by looking at the speaker's intent, and what they hoped to achieve with a certain social media utterance. But regardless of this intent, the effect can be epistemic for the audiences. This raises several problems for the epistemology of social media.

First, should we hold affective expressions to the same standards as claims that can be true or false? From the speaker's perspective, this seems too harsh a standard, but from the audience's perspective, this may be appropriate. Taking as a starting point the distinction between the intended effect of an utterance on social media and its achieved effect, I propose a taxonomy of types of utterances with epistemic effects occurring in the social media environment.

Secondly, I explore the most problematic case of a collective epistemic effect, whereby the audiences of an utterance alter their epistemic attitudes towards a claim while the effect is achieved in a distributed and unintended manner. I show how the collective epistemic effects of social media posts are currently used in cognitive warfare and propaganda as a tactical way of changing people's beliefs with affective expressions and reactions to these expressions, and I argue that this has a detrimental effect on current epistemic environments.

I end the presentation with several proposals for tackling these kinds of collective unintended epistemic effects of affective expressions online, outlining measures from both a designerly perspective and a critical literacy perspective.

References:

Emanuele Arielli. (2018). Sharing as Speech Act. Versus, 2, 243–258. https://doi.org/10.14649/91354

Marsili, N. (2021). Retweeting: Its linguistic and epistemic value. Synthese, 198(11), 10457–10483. https://doi.org/10.1007/s11229-020-02731-y



Smoking versus social networking: analyzing the analogy between tobacco use and social media use

Daphne Brandenburg

University of Groningen, Netherlands, The

In contemporary society, selling cigarettes to a twelve-year-old is unthinkable. Yet, we routinely expose children to products with seemingly similar addictive and harmful effects: social media.

Historically, addressing tobacco addiction has resulted in prohibitions on the sale of tobacco to anyone under the age of 18 and other public health interventions, despite industry resistance. These policy measures can be, and have been, justified in light of how tobacco usage undermines autonomy, reduces wellbeing, and harms others. Contemporary efforts to regulate social media usage, such as Australia’s recent ban on social media access for children under 16, seem to mirror this approach.

The analogy between tobacco and social media is at least implicit in contemporary ethical analysis, and it provides a good starting point for investigating what a good policy response to social media would be. According to this analogy, tobacco and social media are harmful in relevantly similar ways and should therefore be subjected to similar policy measures. This talk examines this particular argument from analogy.

I ultimately argue that the analogy does not hold. But analyzing it does demonstrate that it breaks down for different reasons than has been thought. It has been argued that banning social media is wrong because it would violate children’s autonomy and because social media does not affect the wellbeing of children. I demonstrate that these objections do not hold, because social media usage is, in these respects, relevantly similar to tobacco usage. The analogy also holds when it comes to harm to others, and there are important parallels where the influence of industry and lobbyists is concerned.

But the analogy breaks down because social media isn’t essentially toxic. Social media does or may provide advantages and opportunities that tobacco does not, and, unlike cigarettes, one can take the toxicity out of social media. I discuss what this means for the current bans, and for desirable policy responses to social media more generally.

 
8:45am - 10:00am(Papers) Large Language Models I
Location: Auditorium 5
Session Chair: Alexandra Prégent
 

LLMs, autonomy, and narration

Björn Lundgren1,2, Inken Titz3

1Friedrich-Alexander-Universität, Germany; 2Institute for Futures Studies, Sweden; 3Ruhr-Universität Bochum

LLMs, Autonomy, and Narrative

Suppose Jane talks about Joe to several of her friends, to strangers, indeed, to whomever she meets. While this issue has been phrased in terms of privacy rights violation, we sidestep that discussion and focus on how Jane might affect Joe’s ability to define his own persona, his personal identity, the narrative about his person. However, we are not focused on people talking about other people, but on how technology shapes these narratives.

Our focus here is on large language models (LLMs), which have raised a plethora of ethical discussions (e.g., authorship, copyright). Here we instead focus on an issue that has garnered less—if any—attention in the literature: How LLMs shape our personal narratives.

In our talk we aim to explain two main things. First, we aim to explain how LLMs might diminish individuals’ control over their self-narrative and how that reduced control, in turn, affects their ability to shape their own narrative, and thus identity (see Titz 2024). The result is a potential diminishment of their autonomy. Here we will also argue (contra Marmor 2015) that people's interest in having reasonable control over self-presentation is an autonomy rather than a privacy rights issue (see Lundgren 2020a; 2020b; Munch 2020). We aim to show that the way LLMs diminish control over self-narratives has to do with their presenting content by making use of a distinct narrative framing.

Second, we will argue that this might be one specification of a more general loss of control over narratives. The production of narratives is changing from a human to a computational activity. Thereby, AI systems are reducing our human control (i.e., our collective autonomy) over the production of narratives. Storytelling is an integral part of human social bonding and cultural production, and it is to be expected that LLMs will negatively impact its quality.

References

Lundgren, B. (2020a). A dilemma for privacy as control. The Journal of Ethics, 24(2), 165-175.

Lundgren, B. (2020b). Beyond the concept of anonymity: what is really at stake. K. Macnish & J. Galliot (ed.) Big data and democracy, 201-216.

Marmor, A. (2015). What is the right to privacy? Philosophy and Public Affairs, 43(1), 3-26.

Munch, L. A. (2020). The Right to Privacy, Control Over Self‐Presentation, and Subsequent Harm. Journal of Applied Philosophy, 37(1), 141-154.

Titz, I. (2024), Debunking Cognition. Why AI Moral Enhancement Should Focus on Identity. J.-H. Heinrichs, B. Beck & O. Friedrich (ed.) NeuroProstethics. Ethical Implications of Applied Situated Cognition, 103-128.



Intimacy as a Tech-Human Symbiosis: Reframing the LLM-User Experience from a Phenomenological Perspective

Stefano Calzati

TU Delft, Netherlands, The

In the first two chapters of their book, Calzati and de Kerckhove (2024) identify two ecologies – language and digital – based on different operating logics: the former enabling the creation and sharing of meaning; the latter enacting sheer computability and efficiency. This leads to what the authors call “today’s epistemological crisis” due to the partially incommensurable world-sensing that these two ecologies produce. What happens when we apply these ideas to Large Language Models (LLMs)? This paper sets out to answer this question, ultimately outlining a research agenda which substantially departs from current studies on LLMs (Chang et al. 2024; Hadi et al. 2024), also within STS (Dhole 2023).

LLMs are deep learning algorithms, trained on vast datasets, that are able to recognize, predict, and generate responses based on provided inputs. To the extent that language is the operating system enabling human communication, its automation inevitably produces effects that reverberate to the core of what it means to know (Mitchell & Krakauer 2023). This requires exploring the epistemological hybridization arising whenever LLMs and users – i.e., digital and language ecologies – interface with each other.

The popularization of LLMs has led to various applications across fields (Hadi et al. 2024). This growing body of research tends to maintain an essentialist standpoint towards the LLM-user relation, meaning that LLMs and users are considered as two distinct poles converging through interaction, mutual feedback, and in-the-loop or ex-post supervision (Chang et al. 2024). While important for benchmarking the effectiveness of LLMs, this essentialist standpoint falls short of recognizing and exploring the dynamic co-evolution of the human-technology pair – that is, a symbiosis – and its consequent impact on the creation (and validation) of knowledge.

In this regard, it is useful to adopt a phenomenological approach (cf. Delacroix 2024; Harnad 2024) tasked with digging into and untangling the effects and affects that the LLM-user symbiotic experience produces from an epistemological point of view. At stake is the “why”, more than the “how”: why do LLMs produce the outputs they produce? Is it possible to detect patterns across LLM-user symbiotic experiences? To what extent does the LLM-user co-dependent experience lead, in the longer run, to forms of idiosyncratic knowledge?

Here I outline three phenomenological research axes, which can all contribute to tackling these questions, beyond current work. One axis focuses on the entanglement between prompts and generated outputs through longitudinal and comparative (across different language systems, as well as intra- and inter-LLM) studies. This could bear witness to converging and/or diverging meaning-making patterns or, conversely, to a degree of randomization of the LLM-user experience’s outputs. A second axis turns the attention to what today we consider “hallucinations” or “glitches” of LLMs. The goal, in this sense, would be to map these occurrences longitudinally and comparatively to investigate the extent to which they are indeed rhapsodic outputs or, instead, possible traces of a broader tech-human epistemology in the making. A third axis entails the ethnographic study of LLMs’ uses by socioculturally and linguistically diverse users to explore the affects of the LLM-user experience in terms of perceived contextual fitness and finesse of the outputs generated. In this regard, the case of 1 the Road (2017), which predates current LLMs and was promoted as the first travelogue written entirely by a neural network, is discussed as a proto-form of artificial hypomnemata.

References

Artificial Neural Network. (2017). 1 the Road. Jean Boîte Editions.

Calzati, S. & de Kerckhove, D. (2024). Quantum Ecology: Why and How New Information Technologies Will Reshape Societies. MIT Press.

Chang, Y., Wang, X., Wang, J., Wu, Y., Yang, L., Zhu, K., ... & Xie, X. (2024). A survey on evaluation of large language models. ACM Transactions on Intelligent Systems and Technology, 15(3), 1-45.

Delacroix, S. (2024). Augmenting judicial practices with LLMs: re-thinking LLMs' uncertainty communication features in light of systemic risks. Available at SSRN, https://dx.doi.org/10.2139/ssrn.4787044

Dhole, K. (2023). Large language models as Sociotechnical systems. In Proceedings of the Big Picture Workshop (pp. 66-79). Association for Computational Linguistics., Singapore. https://doi.org/10.18653/v1/2023.bigpicture-1.6

Hadi, M. U., Al Tashi, Q., Shah, A., Qureshi, R., Muneer, A., Irfan, M., ... & Shah, M. (2024). Large language models: a comprehensive survey of its applications, challenges, limitations, and future prospects. TechRxiv. https://doi.org/10.36227/techrxiv.23589741.v6

Harnad, S. (2024). Language writ large: LLMs, ChatGPT, meaning and understanding. Frontiers in Artificial Intelligence, 7, 1490698. https://doi.org/10.3389/frai.2024.1490698

Mitchell, M., & Krakauer, D. C. (2023). The debate over understanding in AI’s large language models. Proceedings of the National Academy of Sciences, 120(13), e2215907120.



Large language models and cognitive deskilling

Richard Heersmink

Tilburg University, Netherlands, The

Human cognizers frequently use technological artifacts to aid them in performing their cognitive tasks, referred to as cognitive artifacts (Heersmink 2013). Critics have expressed concerns about the effects of some of these cognitive artifacts on our onboard cognitive skills. In some contexts and for some people, calculators (Mao et al 2017), navigation systems (Hejtmánek et al 2018) and internet applications such as Wikipedia and search engines (Sparrow et al 2011), transform our cognitive skills in perhaps undesirable ways. Critics have pointed out that using calculators has reduced our ability to perform calculations in our head; navigation systems have reduced our ability to navigate; and having access to the internet results in storing less information in our brains. Such worries go back to Socrates who argued that writing ‘will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory’. Socrates’ argument can be generalized beyond writing and memory. If cognitive artifacts perform information-storage or computational tasks for us, the systems in the brain that would otherwise perform or execute those tasks tend to lose their strength or capacity.

In this talk, I’ll extend this worry to large language models (LLMs) in a two-part manner. First, through lack of practice, those who use LLMs extensively may lose some of their writing skills. Writing a text involves various sorts of cognitive tasks, including spelling, formulating grammatically and stylistically correct sentences, developing logical relationships between concepts, evaluating claims, and drawing inferences. When we consistently outsource writing tasks to LLMs, it is possible that our writing skills and the cognitive skills that build up our writing skills are reduced through a lack of practice.

Second, some philosophers have argued that writing is one way through which our minds and cognitive systems are extended (Menary 2007; Clark 2008). On this view, creating and manipulating written words and sentences is part of our cognitive processing. One of the major advantages of writing for cognitive purposes is that it enables us to manipulate external representational vehicles (words, sentences, paragraphs) in a way that is not possible in our heads. For example, having written a sentence, we can read it, evaluate it and rewrite it, if necessary. The same is true for paragraphs and larger pieces of text. Written text provides a cognitive workspace with informational properties that are complementary to the properties of our internal cognitive workspace (Sutton 2010). When we spend substantially less time writing (either with pen and paper or with a computer), that venue for extending our minds and cognitive systems gets less developed, which may impoverish our overall cognitive skills.

References

Clark, A. (2008). Supersizing the Mind: Embodiment, Action, and Cognitive Extension. Oxford University Press.

Heersmink, R. (2013). A taxonomy of cognitive artifacts: Function, information, and categories. Review of Philosophy and Psychology, 4, 465–481.

Hejtmánek, L., Oravcová, I., Motýl, J., Horáček, J. & Fajnerová, I. (2018). Spatial knowledge impairment after GPS guided navigation: Eye-tracking study in a virtual town. International Journal of Human Computer Studies, 116, 15–24.

Mao, Y., White, T., Sadler, P. & Sonnert, G. (2017). The association of precollege use of calculators with student performance in college calculus. Educational Studies in Mathematics, 94, 69–83.

Menary, R. (2007). Writing as thinking. Language Sciences, 29, 621–632.

Sparrow, B., Liu, J. & Wegner, D. M. (2011). The Google effect: The cognitive effects of having information at your fingertips. Science 333, 776–778.

Sutton, J. (2010). Exograms and interdisciplinarity. In The Extended Mind (ed. Menary, R.) 189–225. MIT Press.

 
8:45am - 10:00am(Papers) Responsible innovation
Location: Auditorium 6
Session Chair: Kaush Kalidindi
 

Between Responsible Innovation and the Maintenance Turn: Imaginaries of Changeability and the Collaborative Frameworks for Philosophy of Technology and Environmental Ethics

Magdalena Holy-Luczaj

University of Wroclaw, Poland

The much-needed dialogue between environmental ethics and philosophy of technology, particularly in addressing the ecological crisis, has gained momentum in recent years (Almazán and Prádanos 2024; Kaplan 2017; Gardiner and Thompson 2017). Notably, philosophy of technology shows growing enthusiasm for fostering this exchange (Puzio 2024; Gellers 2024; 2020; Coeckelbergh 2019). This paper aims to contribute by systematizing and expanding these efforts, focusing on the issue of changeability and the expectations surrounding the normativity of change in nature and technology. Specifically, it examines two pro-environmental solutions proposed by philosophy of technology: responsible innovation (von Schomberg, Blok 2019; Koops 2015) and the maintenance turn (Young and Coeckelbergh 2024; Perzanowski 2022; Young 2021).

To deepen the understanding of their potential, these solutions will be framed respectively within two distinct approaches: responsible innovation in the holistic perspective, which analyzes broad patterns in “general Technology with a Capital T” (Blok 2024; Ritter 2021), and the maintenance turn as belonging to the concrete technologies approach (Verbeek 2022; 2005) and artifactist tradition (“artifactology”), which emphasizes the impacts of specific artifacts or artifact types (Mitcham 1994). Significantly, each framework reflects contrasting views on change: systemic transformation, characterized by technological progress, is considered desirable, whereas changes in individual artifacts are often perceived negatively, as they typically involve wear, tear, or obsolescence resulting from that progress.

This inherent conflict will be further examined by comparing these technological dichotomies with environmental ethics, where debates over holistic approaches versus the focus on individual natural entities hinge on whether ecosystem stability or individual interests should take precedence (McShane 2014). Regarding the dichotomy of changeability within environmental ethics, individual dynamism (unlike in artifacts) is often valued for its internal teleology, whereas systemic change tends to be viewed neutrally (e.g., evolution) or negatively (unlike in technology), as seen in the expectation of stability betrayed by terms like “climate change.”

Against this backdrop, the paper explores how responsible innovation and the maintenance turn shape the pre-ecological mindset within the philosophy of technology, with changeability as the central focus. It examines how these frameworks address environmental emergencies, considers their limitations, and evaluates potential contradictions between them.

A primary concern in this analysis is to maintain a clear distinction between artifacts and natural beings, as well as between ecosystems and technology. It is essential to highlight the unique vulnerabilities of artifacts (often overlooked by environmental ethics) in contrast to natural entities and to recognize the distinctive qualities of technology. This awareness helps to prevent the misapplication of care frameworks designed for artifacts to natural beings, or the extension of technological paradigms to the environment, despite the clearly pro-environmental orientation of both responsible innovation and the maintenance turn. Only through this strategy can we effectively integrate philosophy of technology with environmental ethics.

Literature

Almazán, Adrián; Prádanos, Luis I. 2024. “The political ecology of technology: A non-neutrality approach.” Environmental Values, 33(1), 3–9.

Blok, Vincent; Lemmens, Pieter. 2015. “The Emerging Concept of Responsible Innovation. Three Reasons Why It Is Questionable and Calls for a Radical Transformation of the Concept of Innovation.” In Responsible Innovation 2. Concepts, Approaches, and Applications, eds. Bert-Jaap Koops et al., Cham: Springer.

Blok, Vincent. 2024. “The ontology of creation: towards a philosophical account of the creation of World in innovation processes.” Foundations of Science 29, 503–520.

Coeckelbergh, Mark. 2019. Introduction to Philosophy of Technology, Oxford, UK: University of Oxford Press.

Gardiner, Stephen, M.; Thompson, Allen. 2017. The Oxford Handbook of Environmental Ethics. Oxford, UK: University of Oxford Press.

Gellers, Josh. 2020. Rights for Robots: Artificial Intelligence, Animal and Environmental Law. Abingdon: Routledge.

Gellers, Josh. 2024. “Not Ecological Enough: A Commentary on an Eco-Relational Approach in Robot Ethics.” Philosophy of Technology 37, 59.

Kaplan, David M. (ed.). 2017. Philosophy, Technology, and the Environment. Cambridge MA: MIT Press.

Koops, Bert-Jaap. 2015. “The Concepts, Approaches, and Applications of Responsible Innovation: An Introduction.” In Responsible Innovation 2: Concepts, Approaches, and Applications, eds. Bert-Jaap Koops et al. Cham: Springer.

McShane K. 2014. “Individualist Biocentrism vs. Holism Revisited.” The Ethics Forum 9(2): 130-148.

Mitcham, Carl. 1994. Thinking through Technology. The Path between Engineering and Philosophy. Chicago: University of Chicago Press.

Perzanowski, Aaron. 2022. The Right to Repair: Reclaiming the Things We Own. Cambridge, MA: Cambridge University Press.

Puzio, Anna. 2024. “Not Relational Enough? Towards an Eco-Relational Approach in Robot Ethics.” Philosophy of Technology 37: 45.

Ritter, Martin. 2021. “Philosophical Potencies of Postphenomenology.” Philosophy & Technology 34: 1501–1516.

Verbeek, Peter-Paul. 2005. What Things Do: Philosophical Reflections on Technology, Agency, and Design. University Park: Pennsylvania State University Press.

Verbeek, Peter-Paul. 2022. “The Empirical Turn.” In The Oxford Handbook of Philosophy of Technology, ed. Shannon Vallor, 35-44. Oxford: Oxford University Press.

von Schomberg, L.; Blok, V. 2021. “Technology in the Age of Innovation: Responsible Innovation as a New Subdomain Within the Philosophy of Technology.” Philosophy & Technology 34: 309–323.

Young, Mark Thomas. 2021. “Maintenance.” In The Routledge Handbook of the Philosophy of Engineering, edited by N. Doorn & D. Michelfelder, 356–368. New York: Routledge.

Young, Mark Thomas; Coeckelbergh, Mark (eds.). 2024. Maintenance and Philosophy of Technology: Keeping Things Going. New York: Routledge.

 
8:45am - 10:00am(Papers) Engineering ethics
Location: Auditorium 7
Session Chair: Andreas Spahn
 

Artificial Intelligence in design engineering practice

Hans Voordijk, Farid Vahdatikhaki, Maarten Verkerk

University of Twente, Netherlands, The

Artificial Intelligence through machine learning is becoming increasingly important in civil engineering practice and has been applied, among other things, to the structural design of building infrastructures, the time and cost planning of large projects, and risk quantification. Methods based on machine learning (ML) use large volumes of stored data and identify patterns or relationships within these datasets through a self-learning process. ML technologies “learn” these relationships from training data, without being explicitly programmed.

In essence, the majority of ML technologies describe patterns and real-world phenomena in a fashion that is not very comprehensible, intelligible, or at the very least rationalizable to humans (i.e., black-box solutions). Given the ever-increasing accuracy of ML technologies in civil engineering practice, the reliance and dependence of humans on ML-based solutions increase. This can create situations where users of ML technologies perceive this practice from a perspective unbeknownst to them. When introduced in decision-making, final decisions become the result of a complex interplay between designers, users and technology (Fritz, Brandt, Gimpel, & Bayer, 2020; Redaelli, 2022; Verbeek, 2008). A major question is whether one can speak of a hybrid agency between these actors. Can one speak of a dialogue between these actors? And under what conditions can AI become smarter than its designer or user? Does AI also learn from its user?

To deal with these questions, an empirical case study was carried out of an ML technology for optimizing the design of wind turbine foundations, aimed at reducing the overall design time without compromising accuracy. Because an extensive number of design variables are involved in an actual design process, it is essential to verify and determine the most influential variables. Using ML, the likely influential design variables are determined. But can AI in this use practice become smarter than the design engineer by showing new influential design variables not seen by the engineer? And which actor determines that an influential variable is missing? Does ML provide designers a better understanding of the importance of each design variable and of how a certain design variable influences the behavior of the wind turbine foundation?

It is shown that designing and using AI systems in design engineering involve many actors. Because there is a web of responsibilities, it is impossible to hold one actor accountable. Concepts from postphenomenology (Verbeek, 2008) may clarify this perceived hybrid agency between users, designers and AI in design engineering practice. By using these concepts, the increasingly close relationship between users, designers and AI in design engineering practice can be examined. In response to the call of Leiringer and Dainty (2023), applying these concepts has, more generally, the potential to increase the maturity of civil engineering research.



Concept Engineering: a new approach to address Conceptual Disruption and Virtual Ethical Dilemmas

伯灵 孙, 旭 徐

Inner Mongolia University, China, People's Republic of

This paper explores the intricate relationship between socially disruptive technologies and conceptual engineering, emphasizing the necessity for re-evaluating and adjusting traditional concepts in the context of rapid technological advancement. The introduction of the "Conceptual Exportation Question" (CEQ) highlights the complexities of moral judgments within virtual environments, revealing the limitations of existing ethical and metaphysical frameworks in assessing virtual behaviors. The findings indicate that conceptual engineering serves as an effective response to the disruptions caused by technology, offering a novel methodological framework to tackle the diverse levels and forms of conceptual interference.

The research underscores the importance of adapting ethical considerations to the unique challenges posed by emerging technologies, particularly in the realms of information security and virtual ethics. By examining how traditional concepts may falter in the face of technological evolution, the paper advocates for a more dynamic approach to conceptual definitions and applications. Future studies should delve deeper into the implementation of conceptual engineering within fluctuating technological landscapes, focusing on the practical application of CEQ standards. This entails a comprehensive analysis of the variances in concept applicability across different virtual contexts, which is essential for a nuanced understanding of the ethics surrounding virtual actions.

Moreover, the paper argues for the integration of historical and contextual analyses of concepts, drawing on Nietzschean methods to assess the origins and functions of concepts in both non-virtual and virtual environments. Such an approach not only enriches the discourse on virtual ethics but also bridges the gap between traditional ethical considerations and the unique demands of virtual interactions. By recognizing the relative nature of concepts and their ethical implications, the study paves the way for more robust frameworks that can accommodate the complexities of virtual realities.

In conclusion, the research presents a compelling case for the application of conceptual engineering as a vital tool in addressing the challenges posed by disruptive technologies and virtual ethical dilemmas. By fostering a deeper understanding of how concepts can be adapted to meet the demands of evolving technological contexts, the study contributes to the ongoing discourse on ethics in the digital age. It encourages scholars and practitioners alike to consider the implications of technological advancements on moral judgments and to develop more flexible and context-sensitive ethical frameworks that can effectively navigate the intricacies of virtual environments.

 
8:45am - 10:00am(Symposium) Virtue ethics (SPT Special Interest Group on virtue ethics)
Location: Atlas 2.215
 

Virtue ethics (SPT Special Interest Group on virtue ethics)

Chair(s): Marc Steen (TNO), Zoe Robaey (Wageningen University & Research)

Rationale and goal: To further develop the field of virtue ethics in the context of technology, design, engineering, innovation, and professionalism. We understand virtue ethics broadly: as diverse efforts to facilitate people in cultivating relevant virtues so that they can flourish and live different versions of ‘the good life’ with technology, and to help create structures and institutions that enable people to collectively find ways to live well together. We envision research in the following themes:

• Citizens and practices: E.g., study how technologies can help, or hinder, people in cultivating specific virtues (Vallor 2016), e.g., how a social media app can corrode one’s self-control—and how we can envision alternative designs that can instead help people to cultivate self-control.

• Professionals and institutions: E.g., view the work of technologists, and other professionals, through a virtue ethics lens (Steen 2022). We are also interested in various institutions, e.g., for governance or oversight. This theme also relates to education and training (next item, below).

• Education and training: E.g., design and implement education and training programs. This offers opportunities to study, e.g., how students or professionals cultivate virtues. This will probably involve cultivating practical wisdom or reflexivity as a pivotal virtue (Steen et al. 2021).

• Traditions and cultures: E.g., study and appreciate various ‘Non-Western’ virtue ethics traditions, like Confucianism, Buddhism (Ess 2006; Vallor 2016) or Indigenous cultures (Steen 2022b). We can turn to feminist ethics or study virtues in specific domains, like health care or the military.

 

Presentations of the Symposium

 

Internal conflicts among moral obligations: pursuing a quest for the good as innovators

Marco Innocenti
University of Milan, UNIMI

Developing a new technology involves assuming various roles, each of which carries moral obligations extending beyond the professional responsibilities within a team. Innovators influence human and non-human entities, from small-scale communities to broader society and the environment. These roles often conflict, as each role suggests a distinct good to be pursued. For example, an engineer or designer may experience conflicting obligations towards their team, clients, local community, or environmental health. While these roles are interconnected, each may propose a different version of the good to be pursued, creating internal tensions. In other words, drawing on van de Poel’s (2015) description of responsibilities-as-moral-obligations, each of them indicates that we should ‘see to it that’ something is the case, and the different ‘somethings’ may practically exclude each other. The question then arises: how can innovators address these moral conflicts in a coherent way? The challenge lies not simply in minimising harm to relevant stakeholders, but in actively pursuing positive outcomes across the different spheres of influence. Thus, it is crucial to understand how internal conflicts among these roles can be reconciled to uphold moral obligations. While compromise may seem an obvious solution, it often fails to actually ‘satisfy’ these obligations. In light of this, what alternative approaches might better address these conflicts?

This presentation addresses two central questions. First, how does the practice of developing technology transform into a recognition of different moral responsibilities as moral obligations? Second, how can these moral obligations be integrated into the technology development process in a way that provides a coherent framework for action? To answer these questions, I engage with Alasdair MacIntyre’s (1984) concept of the quest for the good, which seeks to order diverse goods in a comprehensive way. This idea offers a way to address the internal conflicts innovators face by framing technology development as a shared pursuit of the common good. I argue that MacIntyre’s virtue ethics framework allows for the ethical integration of diverse roles, providing a path toward moral coherence in the development process in small R&D teams. Another point of reference will be the concept of ‘decompartmentalization’ in MacIntyre (1999; 2016), which will highlight the difference between the present technological situation and the more distinctly ‘modern’ one. Drawing on this notion, I propose a framework that guides teams in structuring their efforts around their different understandings of the good(s), as informed by their moral obligations. This framework encourages collective deliberation, helping to identify synergies between different moral obligations and providing a pathway to address internal conflicts constructively in a procedural manner. By framing technology development as a shared quest for the good starting from individual internal conflicts, this approach aims to reconcile moral obligations across team members while ensuring that ethical reflection plays a central role in the innovation process.

Bibliography

• MacIntyre, A. (1999). Social Structures and Their Threats to Moral Agency. Philosophy, 74(289), 311–329.

• MacIntyre, A. (2007). After virtue: A study in moral theory (3rd ed). Notre Dame, Ind: University of Notre Dame Press.

• MacIntyre, A. (2016). Ethics in the Conflicts of Modernity: An Essay on Desire, Practical Reasoning, and Narrative. Cambridge: Cambridge University Press. https://doi.org/10.1017/9781316816967

• van de Poel, I. (2015). Moral Responsibility. In I. Van De Poel, L. Royakkers, & S. D. Zwart (Eds.), Moral Responsibility and the Problem of Many Hands (0 ed., pp. 12–43). Routledge. https://doi.org/10.4324/9781315734217.

 

Artificial virtues and hermeneutic harm

Andrew Rebera
KU Leuven

Virtue-based approaches to AI development are becoming increasingly popular, at least in the philosophical literature. One approach focuses on the role of human virtues—the virtues of developers, regulators, users, and so on—in ensuring that AI is responsibly designed and deployed. A second approach is concerned with the possibility of artificial virtues, virtues that AI systems themselves might have or exemplify. A burgeoning philosophical literature debates which virtues are in question, what their nature is, and how these virtues might be inculcated in “artificial moral agents” (AMAs). Attempts to implement virtuous behaviour in AMAs tend to leverage “bottom-up” rather than “top-down” strategies, exploiting the apparent affinity between, on the one hand, virtue ethics’ traditional emphasis on education in the virtues through habituation and the imitation of exemplars and, on the other hand, the training of AI models through reinforcement learning, imitation learning, and other machine learning techniques where models learn behaviours by interacting with environments, observing patterns, or optimising for desired outcomes. However, some authors have argued that such approaches fundamentally misunderstand the nature of virtue and its relationship to moral agency. On one line of argument, AMAs are at best able to behave in conformity with virtue, but they cannot act from virtue (Constantinescu & Crisp, 2022). On another line of argument, even bottom-up learning approaches cannot ensure that AMAs are fully embedded in the “forms of life” against which genuine moral action takes place (Graff, 2024). Neither critique fundamentally undermines, or is intended to undermine, virtue-based approaches to AI development. Yet both indicate certain limitations that shape how virtue-based approaches should be implemented.

In this paper I revisit these two kinds of critique. While sympathetic, I suggest that they overlook important connections and parallels between the virtues and reactive attitudes (Strawson, 2008). When we recognise virtues in others, we rely not only on observation of their outward behaviour, but “see through” their actions to their underlying moral character. This recognition process is inseparably tied to the feeling and regulation of reactive attitudes like gratitude, resentment, and indignation. In recent work, the regulation of reactive attitudes in response to harms caused by AI agents has been argued to give rise to “hermeneutic harm”, i.e. emotional and psychological pain caused by a prolonged inability to make sense of an event (or events) in one's life (Rebera, 2024). In this presentation, I argue that virtue-based approaches to AI development may actually exacerbate this problem by creating a form of “virtue theatre” that makes it harder for humans to properly make sense of and respond to AI behaviour. Like the above-mentioned critics of virtue-based approaches, I do not claim that the failure of virtue-based approaches to resolve the problem of hermeneutic harm means that they ought to be abandoned. But the argument does indicate an urgent need to better understand the extent and nature of AMAs’ participation in our networks of moral relationships and reactive attitudes.

Bibliography

• Constantinescu, M., & Crisp, R. (2022). Can Robotic AI Systems Be Virtuous and Why Does This Matter? International Journal of Social Robotics, 14(6), 1547–1557. https://doi.org/10.1007/s12369-022-00887-w.

• Graff, J. (2024). Moral sensitivity and the limits of artificial moral agents. Ethics and Information Technology, 26(1), 13. https://doi.org/10.1007/s10676-024-09755-9.

• Rebera, A. P. (2024). Reactive Attitudes and AI-Agents – Making Sense of Responsibility and Control Gaps. Philosophy & Technology, 37(4) https://doi.org/10.1007/s13347-024-00808-x.

• Strawson, P. F. (2008). Freedom and Resentment. In Freedom and resentment and other essays (pp. 3–28). Routledge.

 
10:05am - 11:20am(Papers) Disruptive technology II
Location: Blauwe Zaal
Session Chair: Jeroen Hopster
 

Digital technologies and the disruption of the lifeworld

Christa Laurens1, Vincent Blok1, Bernice Bovenkerk1, Nolen Gertz2

1WUR, Netherlands, The; 2UT, Netherlands, The

Socially disruptive technologies like digital technologies (e.g. AI, digital twins, social media, etc.) raise societal concerns, such as concerns about surveillance capitalism, the instrumentalization of production and consumption, and the datafication of virtually all domains of human and non-human life. We can frame these concerns in terms of the disruption of the lifeworld – i.e. the meaningful environment of everyday life experience in which we are at home and live and act together. In this paper, we assume that SDTs do not only impact those who use them, but also have a broader impact on the lifeworld in which we live and act. That is to say, it is possible to identify general patterns in the way in which SDTs disrupt the lifeworld. The question becomes: what are these patterns, and is it possible to distinguish 21st-century SDTs from previous generations of SDTs, ranging from the telescope to the printing press to the steam engine?

The central aim of this paper is to reflect on general patterns emerging from 21st-century SDTs. To this end, the paper will consist of three parts. In part one, we will conduct a phenomenological analysis of the concept of “lifeworld” in order to gain an understanding of the World that is disrupted by SDTs. In this part, the focus will be on an explication of the Husserlian concept of lifeworld. Having gained an understanding of the meaning of lifeworld, we will turn in part two of the paper to general patterns in the way in which modern technologies disrupt the lifeworld. To this end, we consider ‘classical’ philosophers of technology who did not explicitly focus on the specific category of SDTs in their work, but who might nonetheless provide important insights for contemporary debates, for instance concerning the societal disruptions associated with scientific method and technization (Husserl), Enframing (Gestell) and cybernetics (Heidegger), the device paradigm (Borgmann), concretization (Simondon), and acceleration (Stiegler). After the discussion of general patterns in part two, we will turn to a critical discussion of these philosophical insights in part three of the paper. In this part, we consider the prominent case of AI. The aim of this final part of the paper is to examine whether the general patterns as conceptualized by Husserl, Heidegger, Borgmann, Simondon, and Stiegler suffice to explain the way in which AI can be said to socially disrupt the lifeworld, or whether something is still missing from our explanation. Our hypothesis is that digital technologies like AI are partly covered by these theories, but also constitute a new type of disruption that requires philosophical analysis. In the paper, we provide a framework to study the patterns of disruption of SDTs.



Understanding deep technological disruptiveness as the social construction of human kinds

Wybo Houkes

Eindhoven University of Technology, Netherlands, The

There has recently been a surge of interest in socially ‘disruptive’ or ‘transformative’ effects of technologies, i.e., changes in “patterns of human communication or interaction … caused by a technological shift” (Carlsen et al. 2010). Discussions often focus on paradigmatic examples, such as green-energy and machine-learning technologies, and relevant societal responses. Some have gone beyond a case-oriented approach to identify shared features and key factors for disruptiveness (e.g., Schuelke-Leech 2018; Hopster 2021) or to bring out the role of conceptual disruption, i.e., challenges to “established classificatory practices and norms” (Löhr 2023). One key factor identified in these more general analyses is the depth or order of magnitude of disruptiveness. It has been emphasized how some disruptions “go to the heart of our human self-understanding” (Hopster 2021: 6) or how a “community is … forced to make a classificatory decision that has severe social ramifications” (Löhr 2023: 5). This still leaves open why and how some technological disruptions have this depth, especially in cases where there are no obvious conceptual conflicts or connections to fundamental ethical concepts.

With this paper, I aim to improve our understanding of these deeply disruptive effects. I do so by analyzing at least some of them as constructions of socially relevant human kinds, where constructions are induced and mediated by technology. A case in point is the human kind ‘influencer’. This proposal builds on recent realist analyses of mind-dependent kinds, in particular Mallon’s (2016) account of stable human kinds, which I modify to include technological change as a driver. Conceptual disruption then pertains to changes in the way in which we classify ourselves and others in relation to technological developments. In some cases, this re-classification leads to stable shifts in patterns of human behavior and interaction. This means that the ‘depth’ of the disruption can be partly understood in terms of the stability of the newly constructed kind and the explanatory depth of generalizations that refer to it. Influencers now feature, for instance, in a wide range of social-scientific explanations and associated interventions. I show how this develops and qualifies Löhr’s characterization of a community being ‘forced’ to make a ‘decision’ with social ‘ramifications’—and how it allows deep conceptual disruptions without conceptual conflict.

References

Carlsen, H., K.H. Dreborg, M. Godman, S.O. Hansson, L. Johansson and P. Wikman-Svahn (2010) “Assessing socially disruptive technological change”, Technology in Society 32: 209–218, https://doi.org/10.1016/j.techsoc.2010.07.002

Hopster, J. (2021) “What are socially disruptive technologies?”, Technology in Society 67: 101750, https://doi.org/10.1016/j.techsoc.2021.101750

Löhr, G. (2023) “Conceptual disruption and 21st century technology: A framework”, Technology in Society 74: 102327, https://doi.org/10.1016/j.techsoc.2023.102327

Mallon, R. (2016) The Construction of Human Kinds. Oxford: Oxford University Press.

Schuelke-Leech, B. (2018) “A model for understanding the orders of magnitude of disruptive technologies”, Technological Forecasting and Social Change 129: 261–274 https://doi.org/10.1016/j.techfore.2017.09.033



Conceptual disruption and niche disruption

Guido Löhr

Vrije Uni Amsterdam, Netherlands, The

Technological innovation and technological products tend to offer new opportunities for action. New artifacts and events can be socially highly disruptive. They can challenge existing social practices and norms. For example, the invention of the internet was immensely disruptive and changed established social practices and structures in fundamental ways that were, and continue to be, difficult or impossible to predict. In particular, the immense potential for online misinformation and its effects on democratic societies remain challenges that we have yet to overcome or even fully comprehend.

Several philosophers of technology and language have recently argued that new technologies don’t only challenge or disrupt social practices and norms but even our ways of thinking about the world. Löhr (2022, 2023) has coined the term ‘conceptual disruption’ to describe phenomena where our established classifications are challenged or even rendered inapplicable due to fundamentally novel artifacts and opportunities for action. For example, many scholars have argued that the invention of the mechanical ventilator has challenged our concepts of death and life.

While the notion of conceptual disruption has been discussed by a number of authors in the recent literature, we still need to understand how we can detect and also react to such disruptions. Hopster and Löhr have proposed that we should think of responding to conceptual disruption as a kind of adaptation, and that conceptual engineering offers a method for such adaptation. However, what conceptual engineering is and how it could be used for conceptual adaptations remains to be investigated.

In this paper, I will first introduce a conceptual analysis of the concept of conceptual disruption. In the second section, I will introduce the concept of niches and argue that we can think of concepts as niches and of conceptual disruptions as disruptions of said niches. Niches, I argue, can most minimally be understood as sets of affordances, whether these affordances are biological or social, i.e., social norms offer or constitute niches and affordances as well. While disruptions eliminate affordances, the aim of adaptations is to regain them. Finally, I show how conceptual engineering can be understood as a form of niche adaptation.

 
10:05am - 11:20am(Papers) Phenomenology I
Location: Auditorium 1
Session Chair: Wouter Eggink
 

Lost in extension: technology, ignorance, and cognitive phenomenology

Angel Rivera-Novoa

University of Antioquia, Colombia

The relationship between cognitive enhancement and the extended mind thesis (EM) has traditionally been viewed through an optimistic lens, particularly by thinkers who see technological integration as a path to expanded cognitive capabilities. This paper challenges that straightforward optimism by identifying a specific form of epistemic loss that can occur through excessive technological cognitive extension. While acknowledging EM’s compelling account of how cognitive processes and dispositional states can be constituted partly by environmental elements, including technological artefacts, I argue that an exacerbated reliance on such extension may lead to a particular type of ignorance that has been overlooked in current discussions.

The analysis begins by examining how EM, as conceived by Clark and Chalmers (1998), operates at both the cognitive process and dispositional state levels, establishing how technological artefacts can legitimately form part of our cognitive systems under specific conditions. I then engage with Pritchard’s (2022) critique of the technology-induced ignorance thesis, which argues that cognitive extension through technology does not necessarily lead to increased ignorance since knowledge and true beliefs can still be acquired through extended processes. However, I demonstrate that Pritchard’s analysis, while valuable, misses a crucial dimension of potential epistemic loss: cognitive phenomenology.

The paper’s central contribution is the identification and analysis of how technological cognitive extension, even while preserving or enhancing propositional knowledge, may lead to ignorance about the qualitative experience of performing cognitive tasks. Drawing on theories of cognitive phenomenology - the experiential qualities associated with thinking, reasoning, and understanding - I argue that when cognitive tasks are increasingly delegated to technological artefacts, we risk losing touch with the experiential qualia of performing these operations ourselves. This represents a distinct form of ignorance not about propositional content, but about what it feels like to engage in cognitive processes directly.

This argument advances beyond current discussions of extended cognition and technological enhancement by highlighting a previously unexamined trade-off in cognitive extension. While we may gain efficiency and expanded capabilities through technological integration, we simultaneously risk becoming ignorant of the phenomenological dimension of cognitive activity - the subjective experience of what it is like to calculate, deduce, or remember by our own means. This insight has significant implications for how we conceptualize cognitive enhancement and raises important questions about the value we place on preserving direct cognitive experiences in an increasingly technology-mediated world.

The paper concludes by considering the broader implications of this analysis for discussions of cognitive enhancement, suggesting that a more nuanced approach to technological integration might be needed - one that balances the benefits of extended cognition with the preservation of direct cognitive experiences and their associated phenomenology. This work contributes to ongoing debates in the philosophy of mind, cognitive science, and philosophy of technology while opening new avenues for investigating the relationship between technological enhancement and human cognitive experience.



In the eye of the shitstorm: a critical phenomenology of digital conflict

Niclas Rautenberg

University of Hamburg, Germany

Once praised as a beacon of open exchange and the promise of a truly deliberative polity, the Internet, with its echo chambers, conspiracy theories, and uncivil communication practices, is now often considered a threat to the very foundations of liberal democracy. Online discourse seems hopelessly polarized and abrasive—and political conflict in the digital world (subsequently ‘digital conflict’) insurmountable. Technology ‘pessimists’ in the phenomenological literature explain these shortfalls by the very nature of the virtual: disembodied digital spaces simply do not allow for meaningful encounters between persons (e.g., Dreyfus 2008; Fuchs 2014). If meaningful engagement is precluded, so is resolving our quarrels. Other scholars hold that the body and other-understanding still reach into the digital world (e.g., Ekdahl & Ravn 2022; Osler 2020, 2021), albeit potentially in a modified form. The work of such ‘optimists’ suggests that dysfunctional conflict is not inevitable. Yet these authors focus on the harmonious aspects of online sociality or on ludic forms of competition (e.g., videogames). How does political conflict, i.e., strife where matters of existential concern are at stake, complicate the picture? This paper presents initial findings of the three-year research project ‘Virtual Battlefields: Political Conflict in Digital Spaces’ currently running at the University of Hamburg. Based on qualitative data from interviews with politicians, activists, and journalists, it relies on an existential-phenomenological account of political conflict construed as a co-occurrence of different types of normative claims. The political agent is always first and foremost a political patient, called to adjudicate between different reasons coming their way. How do digital places have to be structured so that the different forms of normativity of the political world—i.e., me-reasons, thou-reasons, we-reasons, they-reasons—can come to the fore? Moreover, as an instance of critical phenomenology, the paper investigates how logics of power, so familiar to us from the analogue world, continue to affect interactions on Facebook, WhatsApp, or Zoom. How does the social location of the user with respect to their occupation, gender, or race impact their experience? Are these differences stable across digital platforms? Can these platforms, if need be, be changed to make digital conflict more constructive, or do we need to dismantle them altogether? This paper will give some first, tentative answers to these questions.

References

Dreyfus, Hubert L. 2008. On the Internet. 2nd ed. Thinking in Action. London: Routledge.

Ekdahl, David, and Susanne Ravn. 2022. “Social Bodies in Virtual Worlds: Intercorporeality in Esports.” Phenomenology and the Cognitive Sciences 21 (2): 293–316. https://doi.org/10.1007/s11097-021-09734-1.

Fuchs, Thomas. 2014. “The Virtual Other: Empathy in the Age of Virtuality.” Journal of Consciousness Studies 21 (5–6): 152–73.

Osler, Lucy. 2020. “Feeling Togetherness Online: A Phenomenological Sketch of Online Communal Experiences.” Phenomenology and the Cognitive Sciences 19 (3): 569–88. https://doi.org/10.1007/s11097-019-09627-4.

———. 2021. “Taking Empathy Online.” Inquiry, 1–28. https://doi.org/10.1080/0020174X.2021.1899045.



Responsibility gap: Introducing the phenomenological account of criminal law

Kamil Mamak

Jagiellonian University, Poland

Criminal law has many goals: it is a branch of law that serves multiple and varied purposes, such as preventing crime in general, avoiding punishing innocents, rehabilitating offenders, and bringing justice. From another perspective, its role could be described as necessary for achieving and maintaining social order (on the aims of criminal law, see e.g. Hart 1958). It is a specific kind of law that has a unique role in societies (on criminal law exceptionalism, cf. Burchard and Duff 2023). Because of this unique role, according to Lima, criminal law may not be the appropriate legal branch to respond to the wrongdoings of AI agents if such agents cannot satisfy the requirements of criminal responsibility. Lima wondered whether administrative law or a "whole new subject of law in-between" should be adopted to deal with AI-related harm (Lima 2017, 696). Lima's concern was that employing criminal law to respond to such events may have unintended consequences that undermine the common understanding of what criminal law is and what it can do. I will defend the thesis that criminal law is an appropriate legal response to the crimes of robots.

For that purpose, I will introduce and defend the phenomenological account of criminal law. This account has explanatory value, facilitating a response to robot crime, and, what is more, it has the potential to advance the discussion on robots. It allows for a better understanding of criminal law, including its history and the role of (social) media, and for a reasoned response to challenges such as the growing skepticism in the literature regarding the moral responsibility of humans (see e.g. G. Caruso 2021). Accordingly, what counts as criminal law could change over time: some behaviors, such as the “wrongdoings” of animals, might be classified as criminal at some historical point (see e.g. Holsinger 2009), and then disappear from the radar. Furthermore, the account is flexible: it can accommodate responses to the crimes of minors and to corporate crimes; it can accommodate the idea of introducing posthumous punishment (punishment after death) (Melissaris 2017); and it can react to new issues such as those related to the deployment of robots. The fluidity embedded in the account could help in responding to new changes in society, as well as forcing constant evaluation of reality in order to adjust criminal law to new challenges. This marks a shift in the current discussion: the focus is not on the agent that might be held responsible, but rather on the aims of criminal law that might be realized if it is employed.

References:

Burchard, Christoph, and Antony Duff. 2023. “Criminal Law Exceptionalism: Introduction.” Criminal Law and Philosophy 17 (1): 3–4. https://doi.org/10.1007/s11572-021-09612-6.

Caruso, Gregg. 2021. “Skepticism About Moral Responsibility.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Summer 2021. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/sum2021/entries/skepticism-moral-responsibility/.

Hart, Henry M. Jr. 1958. “The Aims of the Criminal Law Sentencing.” Law and Contemporary Problems 23 (3): 401–41.

Holsinger, Bruce. 2009. “Of Pigs and Parchment: Medieval Studies and the Coming of the Animal.” PMLA 124 (2): 616–23. https://doi.org/10.1632/pmla.2009.124.2.616.

Lima, Dafni. 2017. “Could AI Agents Be Held Criminally Liable: Artificial Intelligence and the Challenges for Criminal Law.” South Carolina Law Review 69 (3): 677–96.

Mamak, Kamil. 2023. Robotics, AI and Criminal Law: Crimes against Robots. Routledge.

Melissaris, Emmanuel. 2017. “Posthumous ‘Punishment’: What May Be Done About Criminal Wrongs After the Wrongdoer’s Death?” Criminal Law and Philosophy 11 (2): 313–29. https://doi.org/10.1007/s11572-015-9373-2.

 
10:05am - 11:20am(Papers) Work
Location: Auditorium 2
Session Chair: Aarón Moreno Inglés
 

Democratizing workplace AI as general intellect

Tim Christiaens

Tilburg University, Netherlands, The

New AI technologies are continuously being developed to enhance labour productivity and reduce labour costs at work. By automating mental and creative labour, companies hope to become less dependent on human resources for financial success. While workers often criticize the influence of AI in their work, Big Tech and the companies that implement these technologies often present this outcome as the inevitable price of progress. However, as shown by Daron Acemoglu and Simon Johnson, recent winners of the Nobel Prize in Economics, the effects of new workplace technologies are not set in stone but the outcome of power differences in labour markets. Depending on the bargaining power of workers, the influence of democratic governance on technology investments, or consumer boycotts, the benefits of technological progress can be distributed in ways that benefit more stakeholders than only Big Tech and employers. If the latter meet resistance from other stakeholders, they must share the spoils of progress more equitably. Furthermore, Matteo Pasquinelli’s The Eye of the Master highlights how AI technology relies on exploitative absorption of human intelligence. AI technology requires enormous databases composed of information about human behaviour in order to simulate and automate these behaviours. Without data inputs from the workers whose bargaining power is eroded by automation, these AI-systems lack the information required to generate intelligible output.

In my paper, I wish to delve deeper into Pasquinelli’s usage of Karl Marx’s notion of the general intellect to defend a case for the democratization and nationalization of workplace AI. In his Grundrisse, Marx described industrial workplace technologies as automated systems that absorbed the artisanal knowledge and skills of craftsmen in order to deskill the workforce. The industrial general intellect parasitized human intelligence to grant more coordinative powers to the managers in charge of the labour process. A similar threat is currently at play in the implementation of workplace AI: new technologies extract human knowledge from workers and embed it in software unilaterally controlled by managers and data technicians. In this context, democratizing AI implies granting workers a voice over how their data is used to transform the future of work. Nationalization offers an expedient strategy for institutionalizing democratic control over workplace AI development and implementation. Most political initiatives propose merely to regulate the implementation of AI, leaving private property and decisions over financial investments and technological development to corporate agents; I argue that this regulatory approach is insufficient to fully grant workers democratic control over workplace AI. Nonetheless, the nationalization of AI requires strategic rethinking of AI governance in order to avoid the specter of rigid communist planning. While nationalization offers opportunities for democratic control, it also carries the risk of smothering technological innovation through bureaucratic planning.



All play and no work? AI and existential unemployment

Gary David O'Brien

Lingnan University, Hong Kong S.A.R. (China)

Recent developments in generative AI, such as large language models and image generation software, raise the possibility that AI systems might be able to replace humans in some of the intrinsically valuable work through which humans find meaning in their lives – work like scientific and philosophical research and the creation of art. If AIs can do this work more efficiently than humans, this might make human performance of these activities pointless. This represents a threat to human wellbeing which is distinct from, and harder to solve than, the automation of merely instrumentally valuable activities. In this paper I outline the problem, assess its seriousness, and propose possible solutions.

In section 1 I lay out my assumptions about the development of AI, and specify the kind of work I am interested in. I assume that AIs will continue to develop to the point at which they can outperform humans at most or all tasks. I also assume that the economic problems of automation will be solved. That is, if most work, in the sense of paid employment, is automated, human beings will still be able to support themselves, for example with a Universal Basic Income. My concern is with meaningful work, by which I mean the exertion of effort to attain some non-trivial goal, and I contrast this with play. I argue further that research and the creation of art are particularly important forms of meaningful work.

In section 2 I argue that AI could reduce our incentives to perform meaningful work, and that this might result in a great deskilling of humanity. First, this would be bad if we accept a perfectionist element in our theory of wellbeing. Second, the deskilling of humanity might result in our civilization becoming locked into a suboptimal future.

In section 3 I argue that, even if humans continue to do meaningful work in the automated world, the mere existence of AI systems would undermine its meaning and value. I critique Danaher’s (2019a) and Suits’ (1978) arguments that we should embrace the total automation of work and retreat to a ‘utopia of games’. Instead, I argue that the threat to meaning and value posed by AI gives us a prima facie reason to slow down its development.

 
10:05am - 11:20am(Symposium) Ways of Worldmaking and the Languages of Technology and Art – Symposium on Nelson Goodman and the Philosophy of Technology
Location: Auditorium 3
 

Ways of Worldmaking and the Languages of Technology and Art – Symposium on Nelson Goodman and the Philosophy of Technology

Chair(s): Sabine Ammon (Technische Universität Berlin), Alfred Nordmann (Technische Universität Darmstadt, Germany), Ryan Wittingslow (University of Groningen)

The fairly slim, yet enormously influential books Ways of Worldmaking (1984) and Languages of Art (1968/76) by Nelson Goodman offer a rich account of processes involved in constructing and creating reality. Pictures, descriptions, and notations; denotation and exemplification; truth and rightness; works and worlds; working and fitting — these notions are discussed with a concrete sensibility for abstract questions: how we do things (not only with words!) and what this implies for ontology and epistemology. Throughout, Goodman chips away at the philosophical prejudice that questions of truth and questions of worldmaking boil down to the problem of picturing, highlighting instead the procedural and creative aspects of worldmaking.

While Goodman discusses works of fine art, he does not — or only incidentally — consider works of technical art. Worldmaking and Goodman’s constructivism are confined to the ways in which one presents (darstellen) and represents (vorstellen) worlds, broadly conceived. It needs to be explored or established how we might extend this to artefactual worldmaking, to making and building and design. How does this implicate codes and notations and principles of composition, how does technology denote or exemplify, anticipate, project, or transform a world?

In short, what if anything does the author of these influential books have to offer to the philosophy of technology?

The organizers of this symposium convened a group of 18 scholars to discuss these questions. We now want to share some of the products of this discussion with the community at large. Three papers explore and extend Goodman’s project. We will conclude with a discussion by three commentators on the contributions and the general question.

 

Presentations of the Symposium

 

Ways of Worldmaking – What procedural epistemology can offer for the making of AI technologies: A casuistic exploration of AI knowledge technologies

Sabine Ammon1, Philipp Geyer2
1Technische Universität Berlin, 2Leibniz Universität Hannover

Although there is no explicit connection to the philosophy of technology in the works of Nelson Goodman, his thoughts can and should be made fruitful for reflecting on technology. In my talk I will explore to what extent the inherent processuality in Goodman’s epistemology allows us a better understanding of technology as a constant making and re-making of worlds. I will argue that his stance allows us to get a better grasp of the nature of designing and the emergence of not-yet-existing artefacts. However, there are also blind spots in his approach when it comes to practices and questions of materiality as well as the ontology of things. By exploring constraints and affordances of his conceptual framework, I will show to what extent it can contribute to a better understanding of worlds in the making and where we need to go beyond Goodman when reflecting on technology.

 

Ways of worldmaking - symbolic orders and material compositions

Alfred Nordmann
Technische Universität Darmstadt

What Goodman calls “rightness of rendering” governs the symbolic orders of scientific representations as well as those of the arts. It requires knowledge not narrowly in the sense of “true justified belief” but, more importantly, knowledge of how symbols can come together so as to achieve a satisfactory or felicitous structure (Catherine Elgin refers to this as “understanding” rather than knowledge: we need to have an understanding of symbolic orders and how they work). This would be a common denominator not only of art and science in the sphere of (re)presentation, but also of symbolic and material composition in the sphere of building and making - a fruitful vantage point for the philosophy of technology. Thus, while Goodman overtly inhabits the sphere of symbolic orders and expands on the tradition of Kant, Cassirer, and Wittgenstein, Kuhn and Hacking, Lewis and Quine, can we untether him from the “world” as the subject merely of (re)presentation? For this, the notion of “exemplification” is on offer, but Goodman and Elgin tend to treat exemplification as a species of representation. If we do so, we may lose out on technological world-making.

 

Function as Exemplification

Ryan Wittingslow
University of Groningen

In this paper, I use Nelson Goodman’s work to propose a new theory of proper function. This theory—which I call 'function exemplification theory'—grounds functional 'properness' in Goodmanian symbol systems. In doing so, it adopts a fictionalist stance towards function: while it talks about proper functions as if they were real properties of artefacts, it treats them as useful fictions that help us make sense of artefact performance and change. This approach offers two key advantages. First, like other proper function theories, it provides robust normative benchmarks for evaluating artefact performance. Second, it explains how artefact functions can evolve incrementally through exaptation, maintenance strategies, and entrenched habits. While existing accounts acknowledge that artefacts can acquire new proper functions, they struggle to explain how these functions can 'drift' through use and maintenance practices. Function exemplification theory is particularly well-suited to analysing this gradual type of functional change.

 

Discussion and Commentary

Daria Bylieva1, Sadegh Mirzaei2, Leonie Möck3
1Peter the Great Saint Petersburg Polytechnic University, 2Technical University of Darmstadt, 3University of Vienna

A panel of three discussants will comment on the paper presentations and on the general question regarding Goodman's contribution to the philosophy of technology.

 
10:05am - 11:20am(Papers) Democracy
Location: Auditorium 4
Session Chair: Daphne Brandenburg
 

The new stage of democracy. A call for regulation of social media platforms based on theater theory

Alessandro Savi

University of Twente, Netherlands, The

Social networking systems like Facebook, TikTok and Instagram have become an essential part of everyday life. Unlike traditional media gatekeepers, these platforms allow users to search for information in an unmediated way, following a market-like logic rather than a truth-centered one. To snowball, content needs to gather likes and views, which are independent of its epistemic reliability. Influencers carefully study their posts and adapt them to the audience they intend to intercept, staging a show with measured choices of words, lighting and framing. Following this trend, politicians have increasingly been using these platforms to obtain visibility, adapting their communication to this market-based, non-epistemic logic. Political scientists have described this shift in the balance of democracy with the category of “post-truth”, highlighting the rising role of appeals to emotion in the choice of whom to vote for. In spite of this, the CEOs of social media platforms do not acknowledge having any significant societal influence, thus upholding an image of neutrality for their businesses. On the other hand, philosophers of technology have argued that there should be more responsible regulation of social media platforms, since their current design might bring not only positive opportunities for their users but also undesirable, harmful consequences that corporate executives should anticipate. This paper aims to show that a helping hand in backing up this claim can be offered by aesthetic reflections on theater.

This paper will be structured as follows. First, I will give an overview of William Dutton’s suggestion that the Internet should be described as the fifth estate of democracy, focusing on the case of social media platforms. Second, I will argue that Erving Goffman’s sociological framework allows for a description of social networking systems as theatrical contexts. Third, I will argue that Bertolt Brecht’s theater theory allows us to understand that communication on social media platforms is centered on non-epistemic values, such as representativeness and appeals to emotion. This will also provide the justification for distinguishing social networking platforms from informational institutions. Fourth, I will back up this claim by borrowing the concept of post-truth from the field of political science and analyzing it in light of the idea of suspension of disbelief. Finally, I will conclude that social media platforms have a decisive influence on the functioning of the public sphere and that the emergence of the fifth estate as an unprecedented social force can be framed as the start of a new chapter in the history of democracy. This will show that the supposed neutrality of social networking systems is a myth, and that philosophy of technology would benefit from adopting an interdisciplinary framework. Insights from sociology, theater theory and political science would provide useful analyses and labels to make the multifaceted influence of social media platforms on democracy more visible, thus making a stronger case for their regulation.



Immaterial Constitution

Harry R. Halpin

Vrije Universiteit Brussel, Belgium

The question of how the Internet itself is maintained becomes increasingly important as humanity becomes more interconnected with the Internet. From the standpoint of the philosophy of technology, in particular the philosophy of maintenance, we survey the creation and cryptographic maintenance of Internet protocols by the IETF (Internet Engineering Task Force) and the philosophical arguments between the respective inventors of the Web and the Internet (Tim Berners-Lee and Vint Cerf) over a new form of rights inherent to the Internet – called net rights – such as the right to access the internet or the right to the privacy of personal data. In the wake of the Snowden revelations of mass surveillance by the NSA, a number of legalistic initiatives attempted to form a “Magna Carta” for the Web, but these efforts uniformly failed. Yet the technical repair and maintenance of standards has produced a new kind of constitution for the Internet to defend net rights based on cryptography, which holds wider lessons for the philosophy of technology but also for political philosophy.

First, we will outline the notion of "net rights." In a well-known philosophical argument, Clark and Chalmers (1998) propose, in their Extended Mind Hypothesis, that under certain conditions “the mind extends into the world” (p. 12). Given that our very memory is extended into the Internet, it makes sense to count the capabilities given by the Internet as part of our mental capabilities under certain conditions. However, if our cognitive capabilities are extended into the internet, should our notion of rights be extended into the internet as well? For example, we believe our memories should be private. Yet on the internet, our personal data is commonly believed to be private, but is often accessed by various platforms like Google and even intelligence agencies like the NSA. We will then overview the philosophical debates over rights and how their contradictions were seemingly resolved by the deployment of cryptographic protocols in the wake of the Snowden revelations concerning NSA surveillance. This notion of net rights was supported by Tim Berners-Lee (the inventor of the Web), but rejected by Vint Cerf (the inventor of the Internet), as Cerf believed human rights should be universal and unchanging, and not dependent on contingent technological developments such as the Internet. Berners-Lee called for the creation of a "magna carta" of the Web as a kind of international treaty, but this effort overall came to naught. Instead, the defense of net rights was taken up by committees of engineers creating protocols.

Second, we call for a philosophical theory of standards and protocols, and an inspection of the standardization process of the Internet Engineering Task Force (IETF). An informal international standards body with vast reach, the IETF has been dubbed the “immaterial aristocracy” of the Internet. From the perspective of maintenance, the case of transforming the Internet from an insecure to a secure architecture provokes a host of questions. One particular question is: to what extent has the process of collective maintenance of the Internet itself by standards bodies gone beyond the initial design of these protocols by its inventors, in reaction to a crisis such as mass surveillance? We argue that despite its invisible nature, this attempt by the IETF to form an “immaterial constitution” for net rights enforceable by technology has important philosophical ramifications for the very notion of human rights in a thoroughly digital era. In particular, it leads to the notion of rights enforced by code outside of legal and juridical frameworks.

The maintenance of the Internet is – perhaps contradictorily – both a democratic and technical constituent process, yet one that is not absolutely democratic. The IETF is ultimately a self-selected assembly of engineers, representing primarily the United States and the large capitalist firms of Silicon Valley, with very little participation from its users, including the vast majority of the world’s population in the Global South. The primary post-Snowden attempts to create a governance of the Internet that puts the Global South at the center, such as the NetMundial Initiative led by Brazil in the wake of the Snowden revelations, were eventually destroyed by the IAB and other U.S. interests. One possibility is Berners-Lee’s original idea that a wider, bottom-up social movement is needed to transform the Internet into a truly democratic space.

It should be noted that a coup against the IETF itself is underway: a new generation of engineers – with Snowden, Manning, and Assange all in support – is trying to redesign a democratic and decentralized Internet beyond Silicon Valley by inscribing both net rights – and also property rights – into code via blockchain technology (De Filippi and Wright 2018). Standards are also not the only way to protect the right to privacy: mixnets, currently being built by startups, create an overlay network on top of the existing internet that not only encrypts data but also mixes it, to prevent adversaries like the NSA from determining the metadata inherent in TCP/IP packet transmission (Diaz et al., 2021). However, the question of how the Internet can be governed and maintained as an actual absolute democracy involving its users remains unsolved. The formation of a new constitution of code by the IETF to maintain the internet may only be the first step. Further philosophical exploration is needed to determine the valences of this audacious ongoing movement by engineers to sublimate the social into the technical.
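To make the mixing idea concrete, the following is a minimal, illustrative sketch rather than the design of any deployed mixnet (such as those surveyed by Diaz et al., 2021): a sender wraps a message in one encryption layer per mix node, and each node strips one layer and shuffles its batch, so that an observer cannot link incoming to outgoing messages. The toy XOR "cipher", the key names, and the three-node path are placeholders standing in for real public-key cryptography and routing.

```python
# Minimal, illustrative sketch of a mix network (a toy, not a real protocol):
# the sender applies one encryption layer per mix node ("onion" layering);
# each mix strips one layer and shuffles its batch before forwarding, so an
# observer cannot match incoming to outgoing messages by order. Real mixnets
# use public-key cryptography and cover traffic; the XOR "cipher" below is
# only a stand-in.
import base64
import random

MIX_KEYS = ["mix1-key", "mix2-key", "mix3-key"]  # hypothetical keys of three mix nodes


def toy_encrypt(key: str, data: bytes) -> bytes:
    """Placeholder cipher: XOR with a repeated key, then base64. NOT secure."""
    k = key.encode()
    return base64.b64encode(bytes(b ^ k[i % len(k)] for i, b in enumerate(data)))


def toy_decrypt(key: str, data: bytes) -> bytes:
    """Inverse of toy_encrypt."""
    k = key.encode()
    raw = base64.b64decode(data)
    return bytes(b ^ k[i % len(k)] for i, b in enumerate(raw))


def wrap(message: str) -> bytes:
    """Sender: apply one layer per mix, innermost layer for the last mix in the path."""
    blob = message.encode()
    for key in reversed(MIX_KEYS):
        blob = toy_encrypt(key, blob)
    return blob


def mix_node(key: str, batch: list) -> list:
    """A mix strips its layer from every message and shuffles the batch."""
    stripped = [toy_decrypt(key, blob) for blob in batch]
    random.shuffle(stripped)  # breaks the link between arrival and departure order
    return stripped


if __name__ == "__main__":
    batch = [wrap(f"message {i}") for i in range(5)]
    for key in MIX_KEYS:  # messages traverse the mixes in path order
        batch = mix_node(key, batch)
    print([blob.decode() for blob in batch])  # plaintexts arrive, but in scrambled order
```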

 
10:05am - 11:20am(Papers) Large Language Models II
Location: Auditorium 5
Session Chair: Alexandra Prégent
 

“Who” is silenced when AI does the talking? Philosophical implications of using LLMs in relational settings

Tara Miranovic, Katleen Gabriels

Maastricht University, Netherlands, The

Generative Artificial Intelligence (GenAI) is increasingly stepping into the most intimate areas of our relationships. Personalised GPTs, such as the Breakup Text Assistant(1) or the Wedding Vows GPT(2), highlight how GenAI is becoming a proxy for navigating complex emotional terrain. The use of Large Language Models (LLMs) ranges from machine translation and brainstorming, among other things, to outsourcing cognitive tasks, thereby significantly reducing the investment of personal resources such as time, emotions, energy, and mental effort. Drawing on Hannah Arendt’s interrelated concepts—including the interplays between 1) who (uniqueness) and what (external qualities), 2) speech and action, and 3) thinking and morality—we examine the epistemic and moral implications of (partly) delegating complex intellectual and emotional tasks to GenAI.

As Arendt (1958/1998) writes, our who “can be hidden only in complete silence and perfect passivity” (p. 179), making reliance on tools like the breakup text assistant particularly troubling. By replacing the vulnerable process of revealing oneself with automated outputs, these tools risk silencing the who, particularly in emotional communication. Arendt distinguishes the who—one’s uniqueness, revealed through their actions and words—from the what, which encompasses external qualities, talents, and shortcomings that can be intentionally displayed or concealed. Human plurality, Arendt argues, is defined by equality (enabling mutual understanding) and distinction (the uniqueness of each individual). Through speech and action, individuals reveal their who, not merely their what, thereby disclosing their distinctiveness and enabling meaningful relationships. Speech articulates the meaning of action, while action gives substance to speech, making both essential for expressing individuality. Without speech, action becomes incoherent and loses its revelatory quality; without action, speech may lack integrity and sincerity. Even though some people feel heard by AI-chatbots (see Yin et al., 2024), delegating emotional expression to GenAI may risk reducing communication to impersonal outputs that fail to reflect the individuality inherent in human speech and action.

GenAI, designed to simplify and optimise mental effort, including thinking, risks hindering rather than supporting these fundamental human activities. Philosophy—and thinking more broadly—does not seek to ease mental effort. Following Arendt (1978; 2003), thinking is an open-ended process aimed at understanding and ascribing meaning, rather than accumulating knowledge. It requires constant practice, and is integral to moral judgement and critical self-awareness. For Arendt, morality is the internal conversation that I have with myself (two-in-one): “Morality concerns the individual in his singularity” (Arendt, 2003, p. 97). Thinking fosters resistance to oversimplification, and can prevent individuals from succumbing to conformity. It demands engagement with complexity, which is essential for moral responsibility. If a GenAI tool is involved, it should make thinking harder, not easier, and offer ‘resistance’ and complexity, rather than calculating an answer. There are some interesting attempts on the market, such as MAGMA learning(3), developed to stimulate learning and creativity.

In the presentation, we will further elaborate on Arendt’s interrelated concepts and connect them to present-day discourse on AI, including the difference between (human) self-expression and (AI) mechanical creation (Vallor, 2024), and what we risk when companies keep seducing us with the latter(4).

List of references

Arendt, H. (1998 [original 1958]). The Human Condition. The University of Chicago Press.

Arendt, H. (1978). The Life of the Mind. Volume One, Thinking. Harcourt Brace Jovanovich.

Arendt, H. (2003, edited by Jerome Kohn). Responsibility and Judgment. Schocken Books.

Vallor, S. (2024). The AI Mirror. How to reclaim our humanity in the age of machine thinking. Oxford University Press.

Yin, Y., Jia, N., & Wakslak, C. J. (2024). AI can help people feel heard, but an AI label diminishes this impact. Proceedings of the National Academy of Sciences, 121(14). https://doi.org/10.1073/pnas.2319112121

Links

(1) https://galaxy.ai/ai-breakup-text-generator

(2) https://chatgpt.com/g/g-ZcFaw73hO-wedding-vows

(3) https://www.magmalearning.com/home

(4) For instance, this advertisement promotes the outsourcing of intellectual tasks to Apple devices (Apple Intelligence), https://www.youtube.com/watch?v=3m0MoYKwVTM



Connecting Dots: Political and Ethical Considerations on the Centralization of Knowledge and Information in Data Platforms and LLMs

Anne-Marie McManus

Forum Transregionale Studien, Germany

The paper presents the case study of a data platform and planned LLM for The Lab for the Study of Mass Violence in Syria (“The Lab”). This research cluster -- of which the author is a member -- is mapping relationships between previously disconnected datasets on violence conducted in the Syrian War (2011-2024). With large quantities of information ranging from sensitive testimonies by former prisoners and massacre survivors, to publicly-available GIS imagery of property damage, this case study raises exceptionally stark ethical questions around privacy; the impacts of digitally-driven epistemologies on societies; and the risks and possibilities of technological citizenship in the aftermath of displacement and violence. Without downplaying the specificities of the Syrian case, these questions have implications for global debates on ethics and technology, which have to date been primarily conducted in relation to the Global North. For reasons including climate change, the rise of the far right, and ICTs themselves, wealthier societies are not insulated from civil strife, social polarization, and/or the (de-)siloing of knowledge and information. The Lab is, moreover, based in Germany but directs its outputs to both Middle Eastern societies and diasporic communities in Europe.

The scandal of the Netflix Prize epitomized emergent ethical risks in technologies that connect and combine even anonymized datasets (Kearns & Roth, 2021). When these risks are exclusively understood in terms of individual identification, it seems sufficient to support strategies like differential privacy (Dwork & Roth, 2014). Yet a defense of individual privacy alone does not help us evaluate the sociopolitical benefits and risks of traditional and AI-driven ICTs that combine – and democratize access to tools that combine -- knowledge bases that were previously fragmented, restricted, or even undocumented. On one hand, these tools offer unprecedented possibilities for technological citizenship, democratizing memory culture and promoting transitional justice. On the other, they pose uncharted harms, including social polarization and political manipulation. Expanding on Nissenbaum’s concept of contextual integrity and Huffer’s call for a political philosophy of technology, the paper explores these epistemological developments through The Lab case study. It addresses key themes of:

• scale (i.e., the centralization of large quantities of sensitive data);

• analysis and combination (i.e., the politics and ethics of new analytical possibilities offered notably in LLMs; Floridi, 2014);

• and access (e.g., how do we update the ethics of informed consent in light of ICT-driven epistemologies?).

In its conclusion, the paper evaluates the frameworks under which Syrian stakeholders – at an historical moment of sociopolitical transition – might develop new models of technological citizenship through the shaping and oversight of knowledge production with ICTs.



LLMs and Testimonial Injustice

William James Victor Gopal

University of Glasgow, United Kingdom

Recently, testimonial injustice (TI) has been applied to AI systems, such as decision-making-support systems within healthcare (Byrnes, 2023; Proost, 2023; Walmsley, 2023), the COMPAS recidivism algorithm (Symons, 2022), and generative AI models (Kay, 2024). Extant accounts identify the morally problematic epistemic issue as arising when the user of an AI system mistakenly assumes the AI system is epistemically advantaged in contrast to another human, such that the human is assigned a credibility deficit – call these Mistaken Assumption Accounts. There are two species of Mistaken Assumption Accounts: Specified Mistaken Belief and Unspecified Mistaken Belief accounts. Specified Mistaken Belief is as follows:

Ceteris paribus, in HAIH, an instance of algorithmic TI is inflicted on a human speaker, S, iff:

(i) A human user/hearer, H, mistakenly assigns an AI system, C, a credibility excess, thereby deflating the credibility of S such that S is assigned a credibility deficit [CONTRASTIVE CREDIBILITY DEFICIT]

(ii) CONTRASTIVE CREDIBILITY DEFICIT iff H mistakenly takes C to be in a superior epistemic position than a speaker S such that the output of C that-p is in better epistemic standing than S’ testimony just because that-p is the result of an algorithmic process [SPECIFIED MISTAKEN BELIEF], and

(iii) SPECIFIED MISTAKEN BELIEF for no other reason than C “being a computer” vs S “being human” [IDENTITY PREJUDICE].

Unspecified Mistaken Belief Accounts don’t specify the content of the users’ mistaken assumption leading to CONTRASTIVE CREDIBILITY DEFICIT, only that a hearer incorrectly identifies the epistemic position of an AI system. Proponents argue these beliefs are mistaken by appealing to (i) literature on data bias, showing that an AI system will not necessarily be less biased or prone to error than human reasoning, or (ii) literature on opacity, showing that the trust placed in such systems is unjustified.

In this paper, I focus on how algorithmic TI emerges in quotidian uses of LLMs, such as ChatGPT. In the pars destruens, I offer a series of counterexamples to Mistaken Assumption Accounts, highlighting their extensional inadequacy. Then, I argue that the picture of identity prejudice in Specified Mistaken Belief is overly broad insofar as the relevant social identities for prejudicial credibility assessments are “being a human” and “being a computer”. In the pars construens, I provide an alternative which fares better: Undue Acknowledgement as Testifier. I argue that (i) when an LLM is taken to be a genuine member of minimally equal standing to humans in an epistemic community, an LLM is assigned a credibility excess such that a human suffers a credibility deficit, (ii) these credibility assessments are driven by implicit comparative credibility assessments based on the anthropomorphised “identity” of an LLM, and (iii) identity prejudice influences these credibility assessments. To achieve this, I draw upon work in HCI and feminist STS to show how user-experience design and the social imaginary surrounding AI contribute to TI. Consequently, this paper shifts the current focus in the literature from a discussion of how the proposed conditions for TI emerge from issues of bias within training data or the opacity of AI systems to how interactive relationships between humans and LLMs enable TI.

 
10:05am - 11:20am(Papers) Interpreting and engineering technology
Location: Auditorium 6
Session Chair: Hans Voordijk
 

Visualising the Quantum World in Quantum Technology: on Pragmatist and Realist Considerations in Quantum Interpretations

Thijs Latten

TU Delft, Netherlands, The

No consensus exists on how quantum mechanics should be interpreted (e.g., Laloë, 2019), and many argue that the quantum world is notoriously difficult to understand (e.g., Feynman, 1985). Yet physicists and engineers in quantum technologies (such as quantum computing and quantum communication) are finding innovative ways to actively create, manipulate and exploit quantum behaviour for practical benefit. In this paper, I argue that in research and engineering practices in quantum technology today, there exists a contradiction between the explicit embrace of particular interpretations of quantum mechanics (i.e., textbook quantum mechanics) on the one hand and the realist assumptions made in visualisations in engineering sketches on the other. Addressing this contradiction aids in fostering a fruitful interaction between the philosophy of quantum mechanics and quantum technology.

Textbook quantum mechanics traditionally restricts its domain to predicting measurement outcomes, sidestepping ontological questions about the reality of quantum phenomena outside measurement outcomes. In the current boom of quantum technologies, textbook quantum mechanics is widely accepted as the standard in research and engineering practices (e.g., Nielsen & Chuang, 2010) – physicists and engineers often do not explicitly utilise other interpretations in achieving their practical goals (except for very particular cases where applied Bohmian mechanics can be used, Benseny et al., 2014). However, engineers and physicists working on quantum technology often use tools for visualising (term borrowed from de Regt, 2017) quantum phenomena outside of measurement outcomes (e.g., Kalinin & Gruverman, 2011). In this paper, I assess the ontological and instrumental status of such illustrations through an analysis of two commonly used tools for visualising the quantum world in research and engineering practices, namely, engineering sketches in scanning tunnelling microscopy and models of qubits in quantum computing. I draw on earlier work connecting engineering sketches and interpretations of quantum mechanics (Vermaas, 2004, 2005). I argue that, where realist assumptions are present in these visualisations, the approach in engineering practices of visualising quantum processes outside of measurement outcomes conflicts with the instrumentalist approach that is often explicitly embraced in research and engineering practices in quantum technology. Moreover, I lay out some implications for other quantum interpretations (specifically Bohmian mechanics and the many worlds interpretation) by reflecting on the engineering context through realist and pragmatist debates in the philosophy of science (e.g., Chang, 2022).

This paper follows up on developments in the philosophy of techno-science to develop an understanding of the role of technology in scientific (foundational) aims (Boon, 2006, 2011; Knuuttila & Boon, 2011; Russo, 2016, 2022). Assessing these cases in quantum technology in light of pragmatist and realist discussions in the interpretations of quantum mechanics helps explain the corresponding roles of these tools throughout different interpretations. The approach of this paper is an attempt to utilise development in quantum technology to aid our understanding of the quantum world.

References

Benseny, A., Albareda, G., Sanz, Á. S., Mompart, J., & Oriols, X. (2014). Applied Bohmian mechanics. The European Physical Journal D, 68(10), 286. https://doi.org/10.1140/epjd/e2014-50222-4

Boon, M. (2006). How Science Is Applied in Technology. International Studies in the Philosophy of Science, 20(1), 27-47. https://doi.org/10.1080/02698590600640992

Boon, M. (2011). In Defense of Engineering Sciences: On the Epistemological Relations Between Science and Technology. Techné, 15(1), 49-71.

Chang, H. (2022). Realism for Realistic People: A New Pragmatist Philosophy of Science. Cambridge University Press. https://doi.org/10.1017/9781108635738

de Regt, H. W. (2017). Understanding Scientific Understanding. Oxford University Press.

Feynman, R. (1985). The Character of Physical Law. The MIT Press. (1965)

Kalinin, S. V., & Gruverman, A. (2011). Scanning probe microscopy of functional materials: nanoscale imaging and spectroscopy. Springer. https://doi.org/10.1007/978-1-4419-7167-8

Knuuttila, T., & Boon, M. (2011). How do models give us knowledge? The case of Carnot’s ideal heat engine. European Journal for Philosophy of Science, 1(3), 309. https://doi.org/10.1007/s13194-011-0029-3

Laloë, F. (2019). Do We Really Understand Quantum Mechanics? (2 ed.). Cambridge University Press. https://doi.org/10.1017/9781108569361

Nielsen, M. A., & Chuang, I. L. (2010). Quantum Computation and Quantum Information: 10th Anniversary Edition. Cambridge University Press. https://doi.org/10.1017/CBO9780511976667

Russo, F. (2016). On the Poietic Character of Technology. Humana.Mente: Journal of Philosophical Studies, 9(30).

Russo, F. (2022). Techno-Scientific Practices : An Informational Approach. Rowman & Littlefield Publishers, Incorporated. http://ebookcentral.proquest.com/lib/delft/detail.action?docID=7102521

Vermaas, P. E. (2004). Nanoscale technology: a two-sided challenge for interpretations of quantum mechanics. . In D. Baird, A. Nordmann, & J. Schummer (Eds.), Discovering the nanoscale (pp. 77-91). IOS Press.

Vermaas, P. E. (2005). Technology and the conditions on interpretations of quantum mechanics. British Journal for the Philosophy of Science, 56(4), 635-661.



Information Technology engineers' professionalism international comparison

Hiroaki Kanematsu, Fuki Ueno, Minao Kukita

Nagoya University, Japan

Today, the scope and impact of information technology (IT) are expanding into many citizens' private lives, industry, and the public sector. While IT has greatly boosted productivity and convenience for companies and citizens, there are also many cases of unethical use, such as surveillance capitalism[1], the spread of false information driven by the attention economy, and deceptive patterns[2] that deceive users. These depend at least in part on the architecture of platforms, products, and services. As Lessig wrote, architecture is one of the factors that constrain people's online behavior[3], and the IT engineers who create it have great power. Architecture becomes even more important in emerging technologies that directly interact with humans, such as AI agents and cybernetic avatars. In addition, because IT is advancing rapidly, engineers who develop and operate products and services are in a position to prevent unethical use before laws and regulations are enacted. Engineers are expected to work with professional ethics so that the technology is less likely to be abused. Therefore, it is important to know what kind of ethics IT engineers have, but this is not clear. We therefore conducted a survey on professional ethics among IT engineers in Japan and North America (NA).

The survey was conducted online. A total of 162 Japanese IT engineers responded between January and February 2024. A total of 178 NA IT engineers responded in November 2024.

No significant difference was found between Japan and NA in terms of awareness of being a professional, but differences were found in the reasons given for it. This points to a difference in the concept of "professional" between Japan and NA.

NA engineers generally met the professional characteristics described in the textbook[4] better; however, NA also showed conflicting results, with a higher percentage of respondents believing that general ethics are sufficient to deal with ethical issues in their work, and a higher percentage believing that engineers should not consider tech ethics[5].

The following suggestions for engineering education can be made:

First, in addition to providing technical education, IT companies should provide guidance on the education and practice of engineering ethics for IT professionals based on the company's core values, because the percentage of engineers working in Japanese IT companies who took courses in engineering ethics as students is not necessarily high.

Compared to Japan, a higher percentage of NA IT engineers took engineering ethics courses as students, and they are familiar with the subject. However, many respondents did not recognize it as important, so practicing IT engineers will need ethics education that links ethics to their own work.

At the time of abstract submission, the presenter has the survey data from Japan and NA on hand, but the results of the survey of European IT engineers will also be included at the time of presentation.

This work was supported by JST Moonshot R&D Grant Number JPMJMS2011.

[1] S. Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Profile Books Ltd, 2019.

[2] H. Brignull, M. Leiser, C. Santos, and K. Doshi, ‘Deceptive patterns – user interfaces designed to trick you. deceptive.design.’, Deceptive Patterns. Accessed: Jan. 02, 2025. [Online]. Available: https://www.deceptive.design/

[3] L. Lessig, Code and other laws of cyberspace. Basic Books, 1999. [Online]. Available: https://lessig.org/images/resources/1999-Code.pdf

[4] D. G. Johnson, Computer Ethics, 3rd edition. Pearson Education, 2000.

[5] M. Andreessen, ‘The Techno-Optimist Manifesto’, Andreessen Horowitz. Accessed: Oct. 18, 2023. [Online]. Available: https://a16z.com/the-techno-optimist-manifesto/



Enactivist App Design: Exper - a case study

Michael Butler1, Colin Graves2, Ian Werkheiser3

1University of North Dakota, United States of America; 2St. Lawrence College, Canada; 3University of Texas Rio Grande Valley, United States of America

As Sugimoto et al. (2021) have shown, people who use mobile navigation apps rather than paper maps to navigate unfamiliar spaces are less effective at retracing their route unaided. We argue that this is because of the form of instructions given by most mobile navigation apps. Turn-based instructions, we assert, express unfounded assumptions about the nature of space and human cognition that are built into the design of ordinary mobile navigation apps. This results in a disorienting or displacing experience that hampers procedural memory, sense of agency, and sense of immersion.

We have designed an alternative model for mobile navigation apps based on the enactivist theory of cognition. On the enactivist account, cognition is not a process of calculation and symbolic representation – that is, it is not accomplished by something like a computer or a brain. Rather, cognition occurs in the ongoing engagement of an organism with its environment. Our enactive navigation app – EXPER – aims to extend a user’s senses such that she could become sensitive to new aspects of the environment which are ordinarily imperceptible. We aim to extend the mind by enriching the user’s environment – empowering her as an active navigator with extended powers of perception – rather than extending the mind by representing space and subsequently delivering information relevant to planning action within that space. We aim to demonstrate how front-loading philosophical theory in software design can result in a less alienating relationship to technology and to the way it mediates our lived environments.

In this presentation we will share preliminary results from experiments run in Texas and North Dakota testing for differences in procedural memory, sense of immersion in the environment and sense of agency when using EXPER vs. a traditional turn-based navigation app.

 
10:05am - 11:20am(Papers) Ethics I
Location: Auditorium 7
Session Chair: Andrea Gammon
 

Technology as uncharted territory: Contextual integrity and the notion of AI as new ethical ground

Alexander Martin Mussgnug

University of Edinburgh, United Kingdom

AI is employed in a wide range of contexts: scientists leverage AI in their research, media outlets use AI in journalism, and doctors adopt AI in their diagnostic practice. Yet many have noted how AI systems are often developed and deployed in a manner that prioritizes abstract technical considerations which are disconnected from the concrete context of application. This also results in limited engagement with established norms that govern these contexts. When AI applications disregard entrenched norms, they can threaten the integrity of social contexts with often disastrous consequences. For example, medical AI applications can defy domain-specific privacy expectations by selling sensitive patient data, AI predictions can corrupt scientific reliability by undermining disciplinary evidential norms, and AI-generated journalism can erode already limited public trust in news outlets by skipping journalistic best practices.

This paper argues that efforts to promote responsible and ethical AI can inadvertently contribute to and seemingly legitimize this disregard for established contextual norms. Echoing a persistent undercurrent in technology ethics of understanding emerging technologies as uncharted moral territory, certain approaches to AI ethics can promote a notion of AI as a novel and distinct realm for ethical deliberation, norm setting, and virtue cultivation. This narrative of AI as new ethical ground, however, can come at the expense of practitioners, policymakers, and ethicists engaging with already established norms and virtues that were gradually cultivated to promote successful and responsible practice within concrete social contexts. In response, this paper questions the current prioritization in AI ethics of moral innovation over moral conservation.

I make my argument in three parts. Building upon Helen Nissenbaum’s framework of contextual integrity, I first illustrate how AI practitioners’ disregard for cultivated contextual norms can threaten the very integrity of contexts such as mental health care or international development. Second, I outline how a tendency to understand novel technologies as uncharted ethical territory exacerbates this dynamic by playing into and seemingly legitimizing disregard for contextual norms. In response, I highlight recent scholarship that more substantially engages with the contextual dimensions of AI. Under the label of “integrative AI ethics,” I advocate for a moderately conservative approach to such efforts that prioritizes the responsible and considered integration of AI within established social contexts and their respective normative structures while addressing three possible objections and engaging with emerging work on foundation models.



The bullshit singularity is near

Dylan Eric Wittkower

Old Dominion University, United States of America

Harry Frankfurt observed that "one of the most salient features of our culture is that there is so much bullshit," where he understands "bullshit" as the use of language to meet a practical end without regard for truth or falsehood (1986). While bullshitting has a rich and storied history in the areas of e.g. politics and marketing, the 20th Century saw substantial innovation and growth in what David Graeber described as "bullshit jobs" (2019), which can be understood in parallel with Frankfurt's definition of bullshit as the use of employment in order to meet a social end without regard for production of socially or individually valuable goods or services.

Generative AI, and large language models (LLMs) in particular, has disrupted the bullshit labor market by providing bullshit as a service (BaaS), automating increasingly many bullshit jobs and accelerating pre-existing tendencies toward bullshitification of labor processes in order to leverage the market efficiencies offered by automated bullshit production systems (ABsPS). As work cycles and information cycles approach ABsPS saturation—e.g. grant proposals written by LLMs and reviewed by machine learning algorithms (MLAs), resulting in funding for research that uses LLM-written survey questions to generate large datasets analyzed by MLAs and resulting in LLM-written publications that are peer-reviewed through LLMs—the remaining places where a human-in-the-loop (HITL) is called upon to provide reality-based assessment or intervention become bottlenecks in bullshit production processes. Once we are able to remove these last antiquated tethers to fact and value, the ABsPS ecosystem will be able to reach its full speed, able finally to beat its wings freely like Kant's dove (3: B8–9) once placed in a vacuum and freed from the air resistance that currently hinders its full efficiency.

After ABsPS have been freed from the HITL, even the echoes of the HITL will become fainter and fainter as future generations of LLMs and MLAs are trained on new corpuses which will themselves be comprised of ABsPS output in ever greater proportion. The "steadily rotating recurrence of the same" („ständig rotierende Wiederkehr des Gleichen“) (Heidegger, 1954) that is the essence and ownmost possibility of ABsPS will be increasingly literally realized in this self-reinforcing cycle of ABsPS coprophagia and coprolalia, producing bullshit that is ever more complete and total. Through this leveraging of prior bullshit achievements to create ever greater bullshit achievements, ABsPS development will reach a bullshit singularity, achieving its apotheosis in a hyperreal simulacrum (Baudrillard, 1981) of meaningfulness itself.

References

Baudrillard, J. (1981). Simulacres et simulation. Éditions Galilée.

Frankfurt, H. (1986). On bullshit. Raritan Quarterly Review, 6(2), 81–100.

Graeber, D. (2019). Bullshit jobs: The rise of pointless work, and what we can do about it. Simon & Schuster.

Heidegger, M. (1954). Was heißt Denken? Max Niemeyer.

Kant, I. (1968[1787]). Kritik der reinen Vernunft, 2. Auflage. Kants Werke Akademie-Textausgabe, Band III. Walter de Gruyter.

 
10:05am - 11:20am(Papers) Avatar
Location: Auditorium 8
Session Chair: Robin Hillenbrink
 

Avatar attachment in virtual worlds: The conflict between self-fictionalization and authentic representations

Clemens Uhing

Rheinische Friedrich-Wilhelms-Universität Bonn, Institut für Wissenschaft und Ethik

Recent advancements in VR (Virtual Reality) hardware and VR applications have expanded the consumer market significantly, making the ethical challenges of VR and virtual worlds increasingly pressing. Individuals who present themselves in virtual worlds via avatars can be negatively affected by virtual actions of other people as well as by virtual events. Potential harms include infringements of autonomy and mental distress, e.g. by being harassed by other users or by witnessing violent scenes. Special problems arise from the ‘openness’ of avatar creation.

Users of virtual worlds can aim for an accurate representation of their real characteristics, for an identity totally different from their identity in the real world, or for selective representations that blend fiction and reality (Freeman & Maloney 2021). In other words, users must choose between authentic self-representations and fictionalizing themselves. At the same time, the sense of embodiment that VR creates allows for various levels of ‘avatar attachment’ (Wolfendale 2007). Consequently, users of virtual worlds find themselves in tension between the two poles of authentic self-representation and fictionalization of their selves.

In my presentation, I will argue that decisions on where to place oneself on the fictionality-authenticity continuum are highly relevant for the ethical implications of virtual worlds. First, as conceptualized in the Proteus effect (Yee & Bailenson 2007), embodying avatars with a specific set of properties can lead individuals to behaviorally conform to the subtle implications of these properties, thus potentially inflicting (virtual) harm or perpetuating stereotypes. Second, strongly identifying with a divergent virtual self may undermine one’s sense of authenticity, hindering self-expression and fulfillment in the real world. Third, the more of their authentic self users represent in a virtual world, the more vulnerable they are to feeling that virtual harassment, exclusion and offenses are directed against their real-world identity, which in turn increases the potential for psychological harm.

I will show that the decision on whether to fictionalize or authentically represent oneself is often constrained by the technical affordances of an application, by legal or netiquette regulations, or by users’ weighing of practical trade-offs. Therefore, I will argue that recognizing the ethical implications of self-representation in virtual worlds is crucial for responsibly creating and safely using virtual reality technologies and their corresponding virtual worlds.

References

Freeman, G., & Maloney, D. (2021). Body, Avatar, and Me. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW3), 1–27.

Wolfendale, J. (2007). My avatar, my self: Virtual harm and attachment. Ethics and Information Technology, 9(2), 111–119.

Yee, N., & Bailenson, J. (2007). The Proteus Effect: The Effect of Transformed Self-Representation on Behavior. Human Communication Research, 33(3), 271–290.



AI ‘ancestors’? AI avatars in African ethics

Christopher Wareham

Utrecht University, Netherlands, The

AI avatars are an intriguing emerging technology. Typically, debates about AI avatars focus on issues concerning ownership and replication of one’s data. However, recent work situates the discussion within ethical debates about superlongevity. Is creating an AI avatar of oneself a desirable way to prolong life? African perspectives have been brought to bear on artificial intelligence more generally, and on questions related to superlongevity. However, questions about the desirability of prolonging lifespan through the use of AI avatars have to date been unaddressed in African moral philosophy. In this paper, I argue that the desirability of extending lifespan in this way hinges on whether AI avatars could be considered ‘ancestors’, as understood in African traditions. This in turn depends a) on whether avatars could be ‘persons’ and b) on whether communal partiality towards avatars is justified. I claim that avatars could under certain conditions be persons, but that the case for partiality towards them is severely weakened. The upshot is that while avatars could be a valuable source of communal memory, they could not be ‘ancestors’ in senses important in African traditions.

 
10:05am - 11:20am(Symposium) Virtue ethics (SPT Special Interest Group on virtue ethics)
Location: Atlas 2.215
 

Virtue ethics (SPT Special Interest Group on virtue ethics) -Part II

Chair(s): Marc Steen (TNO), Zoe Robaey (Wageningen University & Research)

Rationale and goal: To further develop the field of virtue ethics in the context of technology, design, engineering, innovation, and professionalism. We understand virtue ethics broadly: as diverse efforts to facilitate people in cultivating relevant virtues so that they can flourish and live different versions of ‘the good life’ with technology, and to help create structures and institutions that enable people to collectively find ways to live well together. We envision research in the following themes:

• Citizens and practices: E.g., study how technologies can help, or hinder, people in cultivating specific virtues (Vallor 2016), e.g., how a social media app can corrode one’s self-control—and how we can envision alternative designs that can instead help people to cultivate self-control.

• Professionals and institutions: E.g., view the work of technologists, and other professionals, through a virtue ethics lens (Steen 2022). We are also interested in various institutions, e.g., for governance or oversight. This theme also relates to education and training (next item, below).

• Education and training: E.g. design and implement education and training programs. This offers opportunities to study, e.g., how students or professionals cultivate virtues. This will probably involve cultivating practical wisdom or reflexivity, as a pivotal virtue (Steen et al. 2021).

• Traditions and cultures: E.g., study and appreciate various ‘Non-Western’ virtue ethics traditions, like Confucianism, Buddhism (Ess 2006; Vallor 2016) or Indigenous cultures (Steen 2022b). We can turn to feminist ethics or study virtues in specific domains, like health care or the military.

 

Presentations of the Symposium

 

Technological bullshit

Mandi Astola
Delft University of Technology

The electric fatbike has stirred controversy in Dutch cities because it masquerades as an electric bicycle whilst being, for all intents and purposes, a scooter. The fatbike masquerades as an e-bike to benefit from the positive, sustainable image of e-bikes and to circumvent the mandate to wear a helmet whilst riding a scooter, which makes the e-fatbike attractive. The fatbike therefore delivers a desired primary performance: functioning as a scooter that can be ridden in cities without a helmet. This primary performance is, however, masked by multiple secondary performances, such as the bike being an e-bike and being suitable for rough terrain thanks to its thick tires. It is clear from the design of the fatbike that the features pointing at these latter performances are there largely for decorative reasons and to mask the primary performance. The case of the fatbike shows that it is possible to bullshit using technology.

In recent years, many scholars have extended the definition of “bullshit” (which traditionally refers to phony speech acts) to various non-verbal activities, such as bullshit jobs or feigning a fall in sports (Frankfurt, 1986; Graeber, 2019; Easwaran, 2023). We aim to make such an extension too, by arguing that one can also bullshit by designing, creating or co-constructing technology. Technological artifacts can, in our view, contain or be instances of technological bullshit. This concept can be employed in the philosophy of technology both descriptively (for understanding technological artifacts as social performances) and normatively (for evaluating technologies in relation to social values).

We present different types of technological bullshit and highlight their different aspects. We discuss hype-driven technological bullshit, self-defeating technological bullshit and conspiratorial technological bullshit. We also discuss different constellations of technological bullshitter and bullshittee. Furthermore, we reflect on the nature of the wrongness of technological bullshit, based on what it says about the character of the technological bullshitter.

Bibliography

• Frankfurt, H. (1986) On Bullshit. Raritan.

• Graeber, D. (2019). Bullshit jobs: The rise of pointless work, and what we can do about it.

• Easwaran, K. (2023). Bullshit Activities. Analytic Philosophy. 00, 1-23.

 

Digital doppelgangers, moral deskilling, and the fragmented identity: a Confucian critique

Pak Hang Wong
Hong Kong Baptist University

Artificial intelligence (AI) systems are increasingly capable of learning from and mimicking individuals, as demonstrated by a fairly successful effort to replicate the attitudes and behaviors of individuals with generative AI on the basis of a two-hour interview (see Park et al. 2024). This technical advancement has afforded the creation of increasingly indistinguishable (online, digital) doubles of individuals, variously known as digital doppelgangers, digital duplicates, and digital twins, which can talk to others, interact with them, and perform tasks on behalf of their creators and the originals. Major technology companies such as Meta, Microsoft, and OpenAI have also imagined various ways in which digital doppelgangers could be adopted, including attending meetings and performing mundane tasks on behalf of their creators, establishing and maintaining relationships, and reanimating the dead, among others.

The introduction of digital doppelgangers into our existing social fabric will certainly generate new modes of interaction and relationships among individuals that are mediated by digital doppelgangers which are like us but not exactly us, and will thus potentially disrupt our current norms and values and raise social and ethical issues related to their design and implementation. Indeed, some of these ethical concerns, such as digital doppelgangers and the value of individuals (Danaher & Nyholm 2024a) and their social and ethical implications in terms of life extension (Iglesias et al. 2024), have been explored, along with the social and ethical challenges of digital doppelgangers in specific domains of application, e.g., grieving, romance, and education. More generally, John Danaher and Sven Nyholm (2024b) have also offered a promising general moral principle to assess the design and implementation of digital doppelgangers in what they call the “minimally viable permissibility principle”. Much less, however, has been said about digital doppelgangers’ potential implications for individuals’ moral self-cultivation.

In this talk, I shall approach the questions of digital doppelgangers and moral self-cultivation from a Confucian perspective, drawing on Confucian ideas of moral self-cultivation; the discussion, however, should also apply to non-Confucian accounts of moral self-cultivation. More specifically, I shall connect the discussion of moral deskilling to the social and ethical analysis of digital doppelgangers (see Vallor 2015; Wong 2019), and argue that the use of digital doppelgangers will in various ways result in individuals’ moral deskilling. In addition, I shall argue that the use of digital doppelgangers generates a novel challenge of fragmented identity, which requires the creators and/or the originals of the digital doppelgangers to reconcile potentially conflicting narratives and images, thereby destabilizing individuals in their cultivation of moral selves.

Bibliography

• Danaher, J., Nyholm, S. (2024a). Digital Duplicates and the Scarcity Problem: Might AI Make Us Less Scarce and Therefore Less Valuable?. Philos. Technol. 37, 106.

• Danaher, J., and S. Nyholm. (2024b). The ethics of personalised digital duplicates: A minimally viable permissibility principle. AI and Ethics. doi:10.1007/s43681-024-00513-7.

• Iglesias, S., Earp, B. D., Voinea, C., Mann, S. P., Zahiu, A., Jecker, N. S., & Savulescu, J. (2024). Digital Doppelgängers and Lifespan Extension: What Matters? The American Journal of Bioethics, 1–16.

• Vallor, S. (2015). Moral Deskilling and Upskilling in a New Machine Age: Reflections on the Ambiguous Future of Character. Philosophy & Technology, 28, 107–124.

• Wong, P.-H. (2019). Rituals and Machines: A Confucian Response to Technology-Driven Moral Deskilling. Philosophies, 4(4), 59.

 
11:20am - 11:50amCoffee & Tea break
Location: Voorhof
11:50am - 1:05pm(Papers) Values
Location: Blauwe Zaal
Session Chair: Pieter Vermaas
 

Artificial moral discourse and the future of human morality

Elizabeth O'Neill

TU/E, Netherlands, The

Many publicly accessible large language model (LLM)-based chatbots readily and flexibly generate outputs that look like moral assertions, advice, praise, expression of moral emotions, and other morally-significant communications. We can call this phenomenon “artificial moral discourse.” In the first part of this talk, I supply a characterization of artificial moral discourse. Drawing from existing empirical studies, I provide examples of several varieties of artificial moral discourse, and I propose a definition for the concept. On my view, to engage in artificial moral discourse is for a computer system to exhibit a pattern of response to inputs that resembles some human pattern of response to similar inputs, where the response contains something (terms, sentences, gestures, facial expressions, etc.) that the human interlocutor (or an observer) views as communicating a moral message, or would have viewed as communicating a moral message if the exchange had occurred between humans.

Why does artificial moral discourse matter? For one thing, interactions with artificial moral discourse could influence human values and norms, for good or ill. In the second part of the talk, I make a preliminary case for the claim that artificial moral discourse is likely to influence human norms and values in ways that past technologies have not. Namely, I propose that regular interaction with LLM-based chatbots can influence human morality via mechanisms that resemble modes of social influence on morality, such as influence via advice and testimony, influence via example, and influence via norm enforcement. Such influence could be orchestrated by humans seeking to advance particular worldviews or it could be exerted without any humans having intended the chatbot to have such an influence. I sketch what some of these paths of influence might look like.

Although the phenomenon of artificial moral discourse bears a resemblance to the ideas of moral machines and artificial moral advisors, which have previously been discussed in the philosophical literature, the concepts and theoretical frameworks developed for those hypothetical phenomena are not enough on their own to help us get a grip on the nature and risks of artificial moral discourse, nor will they suffice to guide our response to it. Among other things, it is not at all safe to assume that what the systems are doing is genuine moral reasoning, advice-giving, and so on, nor that these systems are reliable sources of moral advice or moral judgments. Instead, given their complexity and opacity, we have a very poor idea of the behavioral dispositions of these systems, and the conditions that will elicit particular behaviors; there currently exist no well-validated tests or standards for evaluating their morally-relevant capacities across a range of contexts. In the third part of the talk, I suggest some further research questions for future empirical, technical, and philosophical investigation on how artificial moral discourse may influence human morality and what the ethical implications of that influence may be.



Recognition through technology: Design for recognition and its dangers

Nynke van Uffelen

Delft University of Technology, Belgium

Critical Theory, the philosophical approach inspired by the Frankfurt School, aims to formulate well-grounded societal critique, to achieve emancipation and social justice (Thompson, 2017). Much recent work in Critical Theory revolves around the notion of recognition, inspired by Axel Honneth, who conceived of social conflict in terms of struggles for (mis)recognition through love, rights, and esteem. Honneth argues that people’s identities and autonomy are relationally constituted, and as such, societies can be criticised for obstructing the development of autonomous individuals with an undistorted self-identity, which includes self-love, self-esteem, and self-respect (Honneth, 1995).

Although technologies are increasingly influential in and disruptive to modern societies, critical theorists nowadays hardly engage with the social and ethical implications of technologies such as Artificial Intelligence, medical applications, or energy infrastructures. For example, links between recognition and normative philosophy of technology are scarce (exceptions are Gertz, 2018; van Uffelen, 2022; Waelen, 2023; Waelen & Wieczorek, 2022). This research gap is unfortunate because Critical Theory, and recognition theory in particular, contains conceptual and normative resources that may advance research in philosophy of technology.

In this paper, I introduce the notion of ‘design for recognition’ and explore its added value to philosophy of technology and its dangers. First, I introduce the notion of ‘recognition through technology’; doing so characterizes technologies as constituted by social relations of (mis)recognition – in other words, people (mis)recognise each other through technology. I outline the commonalities between the idea of ‘recognition through technology’ and prevalent perspectives in philosophy of technology, including mediation theory, design for values, and the concept of sociotechnical systems, all of which start from relational and constructivist ontologies of technology. Second, I outline the added value of recognition theory within philosophy of technology, which I consider conceptual, empirical and normative. Lastly, I introduce the notion of ‘design for recognition’ and discuss its potential and risks. Although Honneth’s theory may have implications for technology design, the fact that relations of (mis)recognition co-construct people’s identities introduces risks to designing for recognition. Inspired by recent authors outlining ‘negative views’ on recognition (Laitinen, 2021; Stahl et al., 2021), I argue that there are three main dangers to designing for recognition that should be considered, namely: design for recognition may (1) reproduce unjust social norms; (2) fix identities and cause polarization; and (3) distract from other, more pressing ethical issues.

This contribution explores the opportunities and limits of cross-pollinating recognition theory and philosophy of technology and highlights guidelines and pitfalls when adopting ‘justice as recognition’ as a normative paradigm for normative technology assessment and design.

Gertz, N. (2018). Hegel, the Struggle for Recognition, and Robots. Techné: Research in Philosophy and Technology, 22(2), 138–157.

Honneth, A. (1995). The Struggle for Recognition: The Moral Grammar of Social Conflicts. MIT Press.

Laitinen, A. (2021). On the Ambivalence of Recognition. Itinerari, 1.

Thompson, M. J. (Ed.). (2017). The Palgrave Handbook of Critical Theory. Palgrave Macmillan.

Stahl, T., Ikäheimo, H. & Lepold, K. (Eds.). (2021). Recognition and Ambivalence. Columbia University Press.

van Uffelen, N. (2022). Revisiting recognition in energy justice. Energy Research & Social Science, 92(August), 102764. https://doi.org/10.1016/j.erss.2022.102764

Waelen, R. (2023). The struggle for recognition in the age of facial recognition technology. AI and Ethics, 3(1), 215–222. https://doi.org/10.1007/s43681-022-00146-8

Waelen, R., & Wieczorek, M. (2022). The Struggle for AI’s Recognition: Understanding the Normative Implications of Gender Bias in AI with Honneth’s Theory of Recognition. Philosophy and Technology, 35(2), 1–17. https://doi.org/10.1007/s13347-022-00548-w



LLM-based chatbots – the moral advisor in your pocket…why not?

Franziska Marie Poszler

Technical University of Munich, Germany

Generative AI, especially chatbots powered by large language models (LLMs), enables individuals to increasingly rely on automation to support their decisions by answering questions, offering information or providing advice. Even though these chatbots were not originally or explicitly intended for ethical decision-making purposes, studies have shown their potential and willingness to offer moral guidance and advice (Aharoni et al., 2024). Corresponding moral guidance can range from providing background information that should be considered during the user’s ethical decision-making process to giving precise normative instructions on what to do in a specific situation (Rodríguez-López & Rueda, 2023).

In the context of moral guidance, LLM-based chatbots can be considered an “archetypical double-edged sword” whose impact is shaped by how they are developed, trained, implemented and utilized (Spennemann, 2023, p. 1). For one, Dillion et al. (2024) demonstrated that LLMs have, in some respects, reached human-level expertise in moral reasoning: their moral explanations of what is right or wrong in specific situations were perceived as more moral, trustworthy, thoughtful and correct than those written by a human counterpart, an expert ethicist. On the other hand, Krügel et al. (2023) highlighted the potential for inconsistency in the guidance provided by LLM-based chatbots, demonstrating that ChatGPT offered contradictory responses and advice on the same moral dilemma when the phrasing of the question was slightly altered.

Therefore, existing research provides ambiguous results, leaving several open questions, with the key research questions to be addressed in this study being:

1. How can LLM-based chatbots impede or support humans’ ethical decision-making?

2. What system requirements are crucial for their responsible development and use?

To shed light on and provide answers to these questions, this study is based on semi-structured interviews with eleven experts in the fields of behavioral ethics, psychology, cognitive science and computer science. The interviews were recorded, transcribed verbatim and coded manually using the MAXQDA software. In this analysis, an inductive coding methodology (Gioia et al., 2013) was adopted to identify themes as they emerged during data collection. In addition, the manually generated codes were extended and validated by consulting automatically generated codes from MAXQDA’s AI Assist.

Preliminary results provide insights into use cases and trends regarding the role of LLM-based chatbots in providing moral guidance (e.g., high usage, particularly by individuals who are lonely, young or dealing with ‘shameful’ issues) and related misconceptions (e.g., linking the capacity for moral understanding to these systems, although they operate based on statistical predictions). Furthermore, experts discussed resulting societal implications in terms of benefits and challenges or risks (e.g., informed decision-making vs. echo chamber effect, ‘AI hallucinations’, moral deskilling). Lastly, the experts offered recommendations for developers (e.g., implementing governance measures such as red teaming), users (e.g., asking the chatbot to highlight the drawbacks of advice it previously provided) and scholars (e.g., the need to conduct more behavioral research in the future) to facilitate the responsible development and use of LLM-based chatbots as moral dialogue partners.

References

Aharoni, E., Fernandes, S., Brady, D. J., Alexander, C., Criner, M., Queen, K., ... & Crespo, V. (2024). Attributions toward artificial agents in a modified Moral Turing Test. Scientific Reports, 14(1), 8458.

Dillion, D., Mondal, D., Tandon, N., & Gray, K. (2024). Large Language Models as Moral Experts? GPT-4o Outperforms Expert Ethicist in Providing Moral Guidance.

Gioia, D. A., Corley, K. G., & Hamilton, A. L. (2013). Seeking qualitative rigor in inductive research: Notes on the Gioia methodology. Organizational research methods, 16(1), 15-31.

Krügel, S., Ostermaier, A., & Uhl, M. (2023). ChatGPT’s inconsistent moral advice influences users’ judgment. Scientific Reports, 13(1), 4569.

Rodríguez-López, B., & Rueda, J. (2023). Artificial moral experts: asking for ethical advice to artificial intelligent assistants. AI and Ethics, 3(4), 1371-1379.

Spennemann, D. H. (2023). Exploring ethical boundaries: Can ChatGPT be prompted to give advice on how to cheat in university assignments?.

 
11:50am - 1:05pm(Papers) Philosophy of technology I
Location: Auditorium 1
Session Chair: Krist Vaesen
 

Philosophy of Technology and its extractivist Blind Spot: On Mechanisms of Occlusion

Tijs Vandemeulebroucke1, Larissa Bolte1, Julia Pelger2

1Rheinische Friedrich-Wilhelms-Universität Bonn, Institut für Wissenschaft und Ethik, Bonn Sustainable AI Lab, Germany; 2Department of Philosophy, University of Washington, Seattle, Washington, United States of America

Experiences of an increase in environmental crises associated with the use of technological objects confront us with the fact that these objects, although developed and used in a local context, have global impacts. Many, if not all, technological objects have become world objects (Serres 1995; Feenberg 2017). This world dimension is embodied, for example, in the different global supply chains and the many hands across the world necessary to construct and develop technological objects. Current discourses in philosophy of technology, despite their recent focus on sustainability and the idea of the Anthropocene, do not have the conceptual tools to meet this world dimension of technological objects.

This conceptual lack, we argue, becomes particularly evident when considering the relation between extractivism and technological objects. Here, extractivism is conceived as the exponential acceleration of extracting natural resources to develop technological objects. As such, we make the case that extractivism is an environmental-social condition of the existence of technological objects and their further development, and so is also a condition of the existence of the philosophy of technology itself. Despite this intimate relation between technological objects, the philosophy of technology, and extractivism, the latter apparently is not captured by the lens of major philosophy of technology discourses.

In this presentation we will lay bare different mechanisms of occlusion within extractivism and the philosophy of technology that make extractivism a blind spot. Extractivism is grounded in a global political economy characterized by processes in which a particular group of economically powerful stakeholders appropriates its benefits and externalizes its harmful impacts through the creation of sacrifice zones and peripheries. The philosophy of technology, especially after its empirical turn, is characterized by a blinding focus on concrete technological objects. Moreover, it takes up the perspective that all issues related to technological objects are technological issues which need to be solved by technological experts.

To counter these mechanisms of occlusion, the philosophy of technology is in need of a scale critique (Clark 2012, 2018). We offer such a critique by relying on the relational ontology of Māori philosophy, particularly its temporal and spatial dimensions. In doing so, we confront the current philosophy of technology with its own modes of thinking, which from a local perspective seem justified but from a global perspective are incomplete. As such, extractivism is put in philosophy of technology’s spotlight and a rudimentary ground for a de-extracted philosophy of technology is developed.

References

Clark, T. (2012). Scale. Derangements of scale. In T. Cohen (ed.). Telemorphosis. Theory in the era of climate change. Ann Arbor: Open Humanities Press.

Clark, T. (2018). Scale as a force of deconstruction. In M. Fritsch, P. Lynes & D. Wood (eds.). Eco-deconstruction. Derrida and environmental philosophy (pp. 81-97). New York: Fordham University Press.

Feenberg, A. (2017). Technosystem. The social life of reason. Cambridge, MA & London: Harvard University Press.

Serres, M. (1995). The natural contract (E. MacArthur & W. Paulson Trans.). Ann Arbor: The University of Michigan Press.



An empirical study of empirical philosophy of technology celebrating plurality

Anna Melnyk, Nynke Van Uffelen, Aafke Fraaije, Olya Kudina, Karen Moesker, Lavinia Marin, Dmitry Muravev

TU Delft, The Netherlands

The empirical turn in philosophy of technology is history in the making (Achterhuis, 2001; Botin et al., 2020; Zwier et al., 2016). As such, researchers are constantly navigating how to integrate empirical work into the philosophy of technology, which raises questions such as (1) what “empirical philosophy of technology” exactly signifies and (2) what role empirical research can play in the philosophy of technology, more specifically, towards what ends and purposes empirical data can be leveraged (Bosschaert & Blok, 2023; Botin et al., 2020). The second question reflects the urgency of an “ethical” (Verbeek, 2010) or “political” turn (Feenberg, 2020) within philosophy of technology. Several critics (see, for example, Bosschaert & Blok, 2023; Zwier et al., 2016) claim that empirical philosophy of technology focuses on concrete technologies and thus fails to be sufficiently critical of the underlying social and political structures. Although these critiques may apply to certain studies, we argue that they rely on too narrow a view of what empirical philosophy of technology post-2020 is and can be. To substantiate this claim, we empirically study the empirical-philosophical work in the Ethics and Philosophy of Technology section at TU Delft, more specifically, within a recently created Empirical philosophy of technology research cluster, connecting about 20 researchers. We gathered information about the different roles empirical work plays in the research conducted within the cluster, leading to a rich document with research experiences, empirical ambitions, methods, and findings. The results show that “empirical philosophy of technology” refers to a plurality of approaches and perspectives. Therefore, we argue that it should not be defined in a narrow sense; instead, it should be acknowledged that the empirical can play many diverse roles, including social critique. These insights contribute to a more nuanced understanding of empirical philosophy of technology and its opportunities and limitations, one that is thoroughly grounded in practice and may inspire researchers in philosophy of technology to explore empirical methods themselves.

References

Achterhuis, H. (Ed.). (2001). American philosophy of technology: The empirical turn. Indiana University Press.

Bosschaert, M. T., & Blok, V. (2023). The ‘Empirical’ in the Empirical Turn: A Critical Analysis. Foundations of Science, 28(2), 783–804. https://doi.org/10.1007/s10699-022-09840-6

Botin, L., De Boer, B., & Børsen, T. (2020). Technology in between the individual and the political: Postphenomenology and critical constructivism. Techne: Research in Philosophy and Technology, 24(1–2), 1–14. https://doi.org/10.5840/techne2020241

Feenberg, A. (2020). Critical constructivism, postphenomenology, and the politics of technology. Techne: Research in Philosophy and Technology, 24(1–2), 27–40. https://doi.org/10.5840/techne2020210116

Verbeek, P. P. (2010). Accompanying technology: Philosophy of technology after the ethical turn. Techne: Research in Philosophy and Technology, 14(1), 49–54.

Zwier, J., Blok, V., & Lemmens, P. (2016). Phenomenology and the Empirical Turn: a Phenomenological Analysis of Postphenomenology. Philosophy and Technology, 29(4), 313–333. https://doi.org/10.1007/s13347-016-0221-7



Technoscience: perspectives on a new concept for the philosophy of technology

José Luís Garcia

Instituto Ciências Sociais, Universidade de Lisboa, Portugal

Since the two world wars, philosophers, historians, sociologists and scientists have been reflecting on modern science and its relation to technology, but this interest has intensified as we move into the 21st century. The literature on this topic widely recognizes that science has been in a process of historical transformation since the end of the 20th century. How profound is this transformation? Are there radically new elements that have emerged? What were the structures, interactions or episodes that led to their changes? What have they changed? Developments in the contemporary world that link science and technology include nuclear weapons, nuclear energy, space exploration, microchips, computers, digital networks, lasers, missiles, communication satellites, biotechnology, magnetic resonance imaging, heart-lung machines, artificial organs and nanotechnology, among many other examples. David Channell (2017) is right to say that all those advances cannot be understood solely as products of science or technology. Indeed, it has become increasingly difficult to characterize many of the developments that shape the basis of the contemporary world as exclusively scientific or exclusively technological.

Since the 1980s, the discussion about the relationship among science, technology, and society has intensified, and various philosophers and social scientists, regardless of their different perspectives, have used the term ‘technoscience’ to describe and designate the emergence of a type of institution and research whose generic description includes the interconnection between science, technology and engineering, the massive mobilization of resources for the production of practical, industrial and profitable innovations and the commitment to economic growth, market competition and military security. The paper aims to discuss the different understandings given by various key authors. The authors will be drawn not only from the English-speaking philosophy of technology, but also from other sources, including French, German, Spanish and Portuguese. In particular, I will highlight and confront the genealogical research of Gilbert Hottois (2004), the historical works of Channell (2017), the perspective coming from the peculiar phenomenology of Don Ihde (in Ihde and Selinger, 2003), the strong rationalist conception of Javier Echeverria (2003), the critical vision of Hermínio Martins (2011) and the debate on the ‘epochal rupture’ that has been conducted in this regard in the volume edited by Nordmann, Radder and Schiemann (2011).

References

Channell, David F. (2017) A History of Technoscience: Erasing the Boundaries between Science and Technology, Routledge.

Echeverria, Javier (2003) La Revolución Tecnocientífica, Fondo de Cultura Económica de España.

Ihde, Don and Evan Selinger (2003) Chasing Technoscience: Matrix for Materiality, Indiana University Press.

Hottois, Gilbert (2004) Philosophies des sciences, philosophies des techniques, Ed. Odile Jacob.

Martins, Hermínio (2011) Experimentum Humanum. Civilização Tecnológica e Condição Humana, Relógio d’Água.

Nordmann, Alfred, Hans Radder, and Gregor Schiemann (ed.) (2011) Science Transformed? Debating Claims of an Epochal Break, University of Pittsburgh Press.

 
11:50am - 1:05pm(Papers) Well-being
Location: Auditorium 2
Session Chair: Mariska Bosschaert
 

AI’s undervalued burden: Psychological impacts

Marcell Sebestyen

Budapest University of Technology and Economics, Department of Philosophy and History of Science

This work aims to address the surprisingly underexplored yet increasingly critical psychological risks associated with artificial intelligence (AI). Media and academic discussions largely focus on speculative scenarios, like machines overthrowing society, whereas philosophical debates in AI ethics delve into other hypothetical concerns, such as artificial agents achieving sentience and consciousness. However, far less attention is given to other immediate and extensive impacts AI may already exert.

These effects include possibly significant influences on mental well-being, social behavior, and human identity, as the scope of human interaction with these systems is expanding across all aspects of daily life. Overreliance on AI threatens to undermine individual human agency, decision-making, intellectual abilities, and creativity. The overarching integration of the technology affects cognitive development and reshapes social relationships, potentially amplifying isolation and damaging self-worth. Scholars also suggest that AI-driven social media algorithms are contributing to rising social anxiety and fueling alienation.

A particular cause for concern is the phenomenon of anthropomorphism in the context of social AI, as these machines – whether embodied robots or merely software-based chatbots – are intentionally and specifically designed with the aim of promoting the projection of human traits, intentions, and emotions onto these systems. This not only creates mental burdens for users but might also divert attention away from the real ethical and societal challenges AI poses.

The views we hold on the consequences of AI development are decisively shaped by our metaphysical assumptions, determining which risks are considered and prioritized. For instance, the possibility of machine consciousness or suffering may be dismissed in some ontologies, while other frameworks might force us to unquestionably accept artificial agents as moral subjects. This reflects historical precedents, such as the Cartesian view on animal suffering, where the absence of a soul was believed to deny animals the capacity for pain, leading to the acceptance of animal exploitation that persists to this day.

The study strives to shed light on a tension present in the AI risk discourse: while predominantly anthropocentric, these debates often undervalue the psychological toll. Although mental health is fundamental to the well-being of our species, the effects of this technology on our psyche and cognitive development are remarkably underestimated, a fact that demonstrates the shortcomings of implementing "human-centered AI" as a frequently proclaimed policy and development guideline. Addressing this gap is essential to understanding and mitigating the broader societal implications of AI.

Overall, as AI becomes more ingrained in human life, the technology’s psychological burdens could significantly influence human existence and mental health. Despite the profound implications, psychological risks remain substantially overlooked in media and academic AI risk narratives. This analysis seeks to highlight these neglected dimensions, emphasizing the current and future psychological effects of AI on individuals and society.



Personal well-being in the digital age: on the role of the sense of self

Lyanne Uhlhorn

Eindhoven University of Technology, Netherlands, The

This paper explores the intersection of digital technology and personal well-being through the lens of the sense of self. I argue that personal well-being, being about prudential value (i.e. what is good for you), is necessarily linked to the person in question and their unique point of view. Ultimately, it is connected to ‘you’, your self, or, more specifically, your sense of self. This is especially so in contemporary digital societies, where traditional sources of meaning—such as community, religion, and tradition—are diminishing in influence (Schlegel, Hicks, Arndt, & King, 2009). These conditions place increased responsibility on individuals to construct their own sense of purpose and well-being, emphasizing the importance of understanding the self as a central component of this process.

Traditional well-being theories—hedonism, desire-fulfillment, and objective list theories (Parfit, 1984)—offer valuable insights but remain insufficient for addressing the complexities of well-being in the digital age. Subjective theories emphasize pleasure or the satisfaction of desires but often fail to capture the richness of subjectivity for human flourishing. Objective theories, on the other hand, may emphasize meaning, values, and personal growth but overlook the role of the individual perspective (Fletcher, 2015). This gap underscores the need for an approach that considers the complexity of the self in relation to well-being. Since well-being fundamentally concerns what is good for an individual, it must integrate a nuanced understanding of the self as the foundation for personal well-being.

To this end, some theories of well-being have evolved to include individualistic dimensions such as self-determination, self-development, and self-esteem (Joshanloo & Weijers, 2024). Psychological research identifies authenticity, or alignment with one’s core or true self, as a foundation for subjective well-being (Goldman & Kernis, 2002; Sheldon et al., 1997). However, the concept of the true self is often vague or oversimplified as a purely intrinsic entity and warrants deeper exploration informed by philosophical traditions. By synthesizing insights from phenomenology and ontology, specifically the narrative non-substantialist self and non-self views, I argue that the sense of self comprises two components (Gallagher, 2011; Zahavi, 2005; Velleman, 2006), and define it as the interplay between one's phenomenological experience (e.g. ‘basic consciousness’) and the constructed self-concept as an episodic self-narrative. Together, these components form the unique subjective perspective.

A sense of self that is conducive to well-being must exhibit qualities that align with its identified nature. Achieving this is increasingly challenging in digital society, where proliferating online identities, fragmented experiences, and an overwhelming diversity of perspectives and ideals create complexity and disorientation for self-construction. To address these challenges, I identify the qualities of a sense of self that is conducive to well-being, including coherence, clarity, flexibility, agency, and valuableness. By acknowledging the role of the self in personal well-being, this approach offers a more comprehensive understanding of the digital age’s impact on self-construction and well-being, and promises to enhance the practical promotion of well-being in a digital age.

References

Fletcher, G. (Ed.) (2015). The Routledge handbook of philosophy of well-being. London: Routledge.

Gallagher, S. (Ed.). (2011). The Oxford Handbook of the Self (online edn). Oxford Academic. https://doi.org/10.1093/oxfordhb/9780199548019.001.0001

Goldman, B. M., & Kernis, M. H. (2002). The role of authenticity in healthy psychological functioning and subjective well-being. Annals of the American Psychotherapy Association, 5(6), 18–20.

Joshanloo, M., & Weijers, D. (2024). Ideal personhood through the ages: tracing the genealogy of the modern concepts of wellbeing. Frontiers in Psychology, 15. https://doi.org/10.3389/fpsyg.2024.1494506

Parfit, D. (1984). Reasons and Persons.

Schlegel, R. J., Hicks, J. A., Arndt, J., & King, L. A. (2009). Thine own self: true self-concept accessibility and meaning in life. Journal of Personality and Social Psychology, 96(2), 473–490. https://doi.org/10.1037/a0014060

Sheldon, K. M., Ryan, R. M., Rawsthorne, L., & Ilardi, B. (1997). “True” self and “trait” self: Cross-role variation in the big five traits and its relations with authenticity and well-being. Journal of Personality and Social Psychology, 73, 1380–1393.

Velleman, J. D. (2006). Self to Self: Selected Essays.

Zahavi, D. (2005). Subjectivity and Selfhood: Investigating the First-Person Perspective.

 
11:50am - 1:05pm(Papers) Emotions
Location: Auditorium 3
Session Chair: Maaike van der Horst
 

Emotional expressions, Informational opacity, and Technology: On the necessity of overt emotional expressions in social life

Alexandra Prégent

Leiden University, NL

The rise of sociotechnical systems and practices has impacted our lives by enmeshing people and technology in webs of information, reshaping social dynamics and expectations. A surge in affective computing and emotion recognition technology (ERT) in the last decade has exposed the general eagerness to understand and access the ‘inner affective life’ of others. While previous criticism and regulation has focused on unimodal ERTs using facial features, multimodal ERTs have shown surprisingly high levels of accuracy in the last few years, putting their development back on the radar of philosophical analysis of new and emerging technologies. At the intersection of neurorights and public privacy issues, emotional expressions are a particularly challenging and interesting topic for research on sociotechnical systems and practices.

This paper attempts to both map and forecast the social implications of the use of what I call ‘ideal’ ERTs, with a particular focus on privacy. ‘Ideal’ ERTs are emotion recognition technologies that have overcome the technical limitations of their former versions. As such, their level of accuracy is satisfactory, and they can distinguish between a variety of different types of emotional expression (EE), as well as indicate whether the expression is involuntary or intentional. Emotional expressions are a well-known source of information, mostly studied in nonverbal communication theories.

Drawing on empirical research, I argue that EEs are a reliable source of social information that sustains and regulates the social fabric of life (van Kleef 2016). Emotional expressions often tell us a lot about others, allowing us to understand their intentions, motivations and opinions, but also to predict and anticipate their behaviours. Affective communication channels are used in social interactions and can convey more or less information to the perceiver, depending on the context and her background knowledge of the emoter (Scarantino 2017, 2019). EEs are the main carriers of information in affective communication channels. Philosophers and social psychologists usually distinguish between two different types of EEs: involuntary EEs (e.g. blushing, sweating, shaking, widening of the eyes, accelerated breathing, etc.) and intentional EEs, the latter being emotional expressions that we produce voluntarily. What particularly interests me in this paper is the role of intentional EEs in the regulation of social life. It seems that both intentional expressions and perceptions of intentional EEs facilitate interactions and play a key role in the construction and flourishing of social relationships.

Given that 1) intentional EEs convey relevant social information that contributes to the regulation of social life, and that 2) the ‘ideal’ ERT can discriminate between information conveyed by involuntary EEs and information conveyed by intentional EEs, I argue that ERTs can threaten affective communication channels by reducing the informational opacity that is naturally present in them. While informational opacity can be the cause of miscommunication and other types of communication failures, I argue that the presence of some degree of informational opacity is a necessary condition for the success of intentional EEs, which in turn are necessary for the regulation of social life. Thus, by reducing informational opacity, ERTs can disrupt and prevent the transmission of information carried by intentional EEs in affective communication channels. I show how, contrary to our intuitions, reducing informational opacity may not be desirable for communication. The successful communication of intentional EEs over involuntary EEs, I postulate, is fundamental to keeping social friction(s) at reasonable levels, as this type of communication is a pillar of social norms and behaviours that help us to successfully navigate and regulate our interactions.

I conclude with a proposal for the future regulation of ‘ideal’ ERTs and a critical rationale for why current regulatory approaches, largely driven by the EU AI Act, may prove deleterious in the long run.



(Post)emotions in care: AI, mechanization, and emotional practices in the age of efficiency

Eliana Bergamin

Erasmus University Rotterdam, Netherlands, The

The increasing integration of artificial intelligence (AI) technologies into healthcare systems is reshaping not only the delivery of care but also the values underpinning medical and care practices. Longstanding principles that prioritize interpersonal relationships, human touch, and emotional practices in caregiving are increasingly overshadowed by values such as efficiency, quantification, and algorithmic logic. This shift raises critical questions about what is changing and what is lost in the pursuit of technological innovation, particularly in contexts where emotional resonance and relational care are central to patient and medical staff’s well-being. To explore these tensions, this paper draws on the work of Jacques Ellul and Stjepan Meštrović, whose critiques of technological determinism and the prioritization of efficiency over emotional and moral frameworks can provide insights into the implications of AI's integration into emotional and care practices.

In his seminal work The Technological Society, Ellul highlights how the relentless pursuit of the value of efficiency pervades not only technological domains but every aspect of human existence (Ellul, 1967). His insight into the mechanization of societal paradigms underscores how the drive for efficiency reshapes human values and experiences, conditioning human behavior to conform to technological systems rather than vice versa. Ellul's brief mention of emotions’ instrumentalization in favor of efficiency finds expansion in Stjepan Meštrović's exploration of the transformation of emotions in contemporary society (Meštrović & Riesman, 1997). Meštrović illustrates how the rationalized, mechanized way of thinking of postmodern society redefines human emotions, reducing them to manufactured emotional attachments or ‘postemotions’, detached from their original essence. Meštrović affirms that, in today’s society, emotions have not disappeared, but rather have been transformed into vicarious emotions. Ghosts of their original selves, they are used as tools to serve the efficiency and rationality-driven purposes of an increasingly mechanized world.

In navigating Ellul's and Meštrović's insights, this paper seeks to delve deeper into the profound impact of technological mechanization on human emotions – focusing specifically on the case of Artificial Intelligence in healthcare practices – shining a light on how the pursuit of efficiency molds emotional experiences within societal, material, and experiential frameworks. This exploration aims to examine the intellectualization and abstraction of emotions in the postemotional era, where they are portrayed as tools to be manipulated within the efficiency-driven narrative of contemporary technological society. As AI works by means of labelling and generalization, the nuanced emotional world that humans experience is reduced to datasets and generalized classifications (Habehh & Gohel, 2021).

Building on the perspectives of Meštrović and Ellul, this research examines how AI technologies can materially influence emotional practices in care settings. By exploring this reconfiguration, it highlights how the value of efficiency, traditionally associated with industrial and administrative domains, is increasingly pervading areas of human experience—such as care—where it was previously peripheral (Alexander, 2008). This approach provides a lens to understand the material and procedural changes in emotional practices, as they intersect with technological systems, while also offering insights into the ethical and societal dimensions of these transformations, particularly as they relate to the emergence of what might be termed an efficiency-driven, "postemotional" landscape.



Affective injustice and affective artificial intelligence

Kris Goffin1, Alfred Archer2

1Maastricht University; 2Tilburg University

Affective Artificial Intelligence is on the rise. Some AI applications, such as HireVue, Affectiva, Clearview, Seeing AI and Azure, are programmed to recognize your emotions. For example, if you scan your face, emotion recognition software provides an emotion label, such as anger. A more subtle form of affective AI consists of applications programmed to be empathetic. For example, an AI chatbot tries to react to your input by considering your emotions and guessing your emotional state so that it can respond accurately.

Affective AI is already being used in a range of contexts. Emotion recognition software has been developed to “teach” autistic people to recognize their own and other people’s emotions. Similarly, therapy bots are used to stand in for therapists and help users analyze and regulate their emotions. Companion bots help people to meet affective needs that are otherwise unfulfilled, and grief bots help people come to terms with the loss of loved ones.

Existing work has identified one or more of these uses of affective AI as a form of affective scaffolding (Fabry & Alfano 2024) or affective niche construction (Krueger & Osler 2019). Building on this work, we will argue that affective AI can serve as a form of affective and cognitive scaffolding that helps users to:

- Recognize one’s own and other people’s emotions

- Regulate one’s own and other people’s emotions

However, while affective AI can be a useful tool for achieving these purposes, we will argue that there are two major ethical risks with this kind of application. The first is the risk of alienation from our emotions. By offloading emotional labor to AI, one loses an essential aspect of the human experience: understanding one’s emotions. Emotional expressions are more than just unambiguous signals of internal states. When one expresses emotions, one also tries to understand them, which is a way of interpreting and constructing a sense of who one is and what one values (Brewer 2011). Interpreting the emotions of other human beings is also an essential way in which we relate to other people and develop a shared sense of meaning with them (Campbell 1997). By offloading the interpretation and regulation of emotion to AI, we run the risk of alienating ourselves from this key meaning-making process.

The second risk is that of emotional imperialism, which occurs when a dominant group imposes their emotional norms and practices on a marginalized group, whilst marking out the emotional norms of the marginalized as deviant and inferior (Archer & Matheson 2022; 2023). Rather than helping autistic people interpret emotions in ways that fit with their own emotional norms, needs and desires, there is good reason to worry that affective AI will actually serve to breed conformity and to encourage autistic people to conform to the emotional norms of non-autistic people. While particularly clear in this case, we will argue that this worry is one that applies more generally to affective AI systems.

 
11:50am - 1:05pm(Papers) Algorithms
Location: Auditorium 5
Session Chair: Sage Cammers-Goodwin
 

The power topology of algorithmic governance

Taicheng Tan

Beijing University of Civil Engineering and Architecture, China, People's Republic of

As a co-product of the interplay between knowledge and power, algorithmic governance raises fundamental questions of political epistemology while offering technical solutions constrained by value norms. Political epistemology, as an emerging interdisciplinary field, investigates the possibility of political cognition by addressing issues such as political disagreement, consensus, ignorance, emotion, irrationality, democracy, expertise, and trust. Central to this inquiry are the political dimensions of algorithmic governance and how it shapes or even determines stakeholders’ political perceptions and actions. In the post-truth era, social scientists have increasingly employed empirical tools to quantitatively represent algorithmic political bias and rhetoric.

Despite advancements in the philosophy of technology, which has shifted from grand critiques to micro-empirical studies, it has yet to fully open the space for political epistemological exploration of algorithmic governance. To address this gap, this paper introduces power topology analysis. Topology, a mathematical field that studies the properties of spatial forms that remain unchanged under continuous transformation, has been adapted by thinkers like Gilles Deleuze, Michel Foucault, Henri Lefebvre, Bruno Latour, and David Harvey to examine the isomorphism and fluidity of power and space. Power, like topology, retains continuity even through transformations, linking the two conceptually.

This paper is structured into four parts. The first explores the necessity and significance of power topology in conceptualizing algorithmic power and politics through the lens of political epistemology. The second examines the generative logic and cognitive structure of power topology within algorithmic governance. The third analyzes how power topology transforms algorithmic power relations into an algorithmic political order. The fourth proposes strategies for democratizing algorithmic governance through power topology analysis.

The introduction of power topology analysis offers a reflexive perspective for the philosophy of technology to re-engage with political epistemology—an area insufficiently addressed by current quantitative research and ethical frameworks. This topological approach provides a detailed portrait of algorithmic politics by revealing its power topology. Moreover, it redefines stakeholder participation by demonstrating how algorithms stretch, fold, or distort power relations, reshaping the political landscape. By uncovering the material politics of these transformations, power topology encourages the philosophy of technology to reopen political epistemological spaces and adopt new cognitive tools for outlining the politics of algorithmic governance. Ultimately, this framework aims to foster continuous, rational, and democratic engagement by stakeholders in the technological transformation of society, offering a dynamic and reflexive tool for understanding the intersection of power, politics, and algorithms.



Believable generative agents: A self-fulfilling prophecy?

Leonie Alina Möck1, Sven Thomas2

1University of Vienna, Austria; 2University of Paderborn, Germany

Recent advancements in AI systems, in particular Large Language Models, have sparked renewed interest in a technological vision once confined to science fiction: generative AI agents capable of simulating human personalities. These agents are increasingly touted as tools with diverse applications, such as facilitating interview studies (O’Donnell, 2024), improving online dating experiences (Batt, 2024), or even serving as personalized "companion clones" of social media influencers (Writer, 2023). Proponents argue that such agents, designed to act as "believable proxies of human behavior" (Park et al. 2023), offer unparalleled opportunities to prototype social systems and test theories. As Park et al. (2024) suggest, they could significantly advance policymaking and social science by enabling large-scale simulation of social dynamics.

This paper critically examines the foundational assumptions underpinning these claims, focusing on the concept of believability driving this research. What, precisely, does "believable" mean in the context of generative agents, and how might an uncritical acceptance of their believability create self-fulfilling prophecies in social science research? This analysis begins by tracing the origins of Park et al.’s framework of believability to the work of Bates (1994), whose exploration of believable characters has profoundly influenced the field.

Drawing on Günther Anders’ (1956) critique of technological mediation and Donna Haraway’s (2018, 127) reflections on "technoscientific world-building“, this paper situates generative agents as key sites where science, technology, and society intersect. Ultimately, it calls for a critical reexamination of the promises and perils of generative agents, emphasizing the need for reflexivity in their conceptualization, as well as their design and application. By interrogating the assumptions behind believability, this research contributes to a deeper understanding of the socio-technical implications of these emerging AI systems.

Building on Louise Amoore’s (2020) concept of algorithms as composite creatures, this paper explores the implications of framing generative agents as "believable." In the long run, deploying these AI systems in social science research risks embedding prior normative assumptions into empirical findings. Such feedback loops can reinforce preexisting models of the world, presenting them as objective realities rather than as socially constructed artifacts. The analysis highlights the danger of generative agents reproducing and amplifying simplified or biased representations of complex social systems, thereby shaping policy and theory in ways that may perpetuate these distortions.

References

Amoore, Louise (2020). Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. Durham: Duke University Press.

Anders, Günther (1956). Die Antiquiertheit des Menschen Bd. I. Munich: C.H. Beck.

Batt, Simon (2024). „Bumble Wants to Send Your AI Clone on Dates with Other People's Chatbots.” Retrieved from https://www.xda-developers.com/bumble-ai-clone-dates-other-peoples-chatbots/.

Contreras, Brian (2023). „Thousands Chatted with This AI ‘Virtual Girlfriend.’ Then Things Got Even Weirder.” Retrieved from https://www.latimes.com/entertainment-arts/business/story/2023-06-27/influencers-ai-chat-caryn-marjorie.

Haraway, Donna Jeanne (2018). Modest_Witness@Second_Millennium. FemaleMan_Meets_OncoMouse: Feminism and Technoscience. Second edition. New York, NY: Routledge, Taylor & Francis Group.

O’Donnell, James (2024). „AI Can Now Create a Replica of Your Personality.” Retrieved from https://www.technologyreview.com/2024/11/20/1107100/ai-can-now-create-a-replica-of-your-personality/.

Park, Joon Sung, Joseph O’Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein (2023). „Generative Agents: Interactive Simulacra of Human Behavior.“ In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, 1–22. https://doi.org/10.1145/3586183.3606763.

Park, Joon Sung, Carolyn Q Zou, Aaron Shaw, Benjamin Mako Hill, Carrie Cai, Meredith Ringel Morris, Robb Willer, Percy Liang, and Michael S Bernstein (2024). „Generative Agent Simulations of 1,000 People“, Retrieved from arXiv. https://doi.org/10.48550/arXiv.2411.10109.

 
11:50am - 1:05pm(Papers) Privacy
Location: Auditorium 6
Session Chair: Donovan van der Haak
 

What is “mental” about Mental Privacy?

Felicitas Holzer1, Orsolya Friedrich2, Samuel Pedziwiatr2

1University of Zurich; 2University of Hagen

Mental privacy is becoming an increasingly pressing ethical and legal concern as neurotechnological approaches to decoding and manipulating human brain activity advance. The notion of mental privacy emphasizes the importance of safeguarding personal thoughts, emotions, and mental states from potential intrusion via brain-computer interfaces, neuromarketing tools, neuroenhancement, and other mindreading devices. The misuse of sensitive brain data and AI-driven profiling of mental states pose significant privacy risks. Additional challenges include defining consent requirements and coping with uncertainties about potential future inferences from mental data.

In recent discussions, mental privacy is usually framed as a novel kind of neuroright (Ienca & Andorno 2017; Ienca 2021; Ligthart et al. 2023) closely tied to personal identity and autonomy. However, there is considerable debate and conceptual ambiguity regarding the conceptions of “privacy” and “the mental” underpinning this proclaimed right. It is controversial whether the right to mental privacy represents a substantive addition to existing privacy frameworks or merely rearticulates established concerns. Critics argue that mental privacy rights either significantly overlap with or reduce to familiar privacy concerns upon closer inspection (Bublitz 2024; Susser & Cabrera 2023). Proponents of mental privacy rights emphasize the distinctive character of mental privacy problems and call for more context-sensitive and differentiated anticipatory analyses of potential developments in neurotechnology in various domains such as healthcare, marketing, and criminal justice (Groot, Tesink & Meynen 2024).

This paper examines the conceptual foundations of mental privacy to assess whether it represents a substantial normative shift in privacy discourse. Specifically, it investigates whether the inclusion of the “mental” extends traditional boundaries of privacy debates. In the talk, we will briefly discuss some competing assumptions concerning the mental in the research literature, paying special attention to extended mind theories and their implications (Clowes, Smart & Heersmink 2024). Our analysis reveals that mental privacy encompasses a diverse range of phenomena, from beliefs, desires, intentions, and emotions to cultural factors, personal preferences, and political opinions, raising critical questions about how the boundaries between the “inner” and “outer” aspects of mental life are to be defined and protected.

References:

Bublitz, C. (2024). Neurotechnologies and human rights: restating and reaffirming the multi-layered protection of the person. The International Journal of Human Rights, 28(5), 782-807.

Clowes, Robert William; Smart, Paul R. & Heersmink, Richard (2024). The ethics of the extended mind: Mental privacy, manipulation and agency. In Jan-Hendrik Heinrichs, Birgit Beck & Orsolya Friedrich (eds.), Neuro-ProsthEthics: Ethical Implications of Applied Situated Cognition. Berlin, Germany: J. B. Metzler. pp. 13–35.

Groot, N.; Tesink, V; Meynen, G. (2024): Nissenbaum and Neurorights: The Jury is Still Out, AJOB Neuroscience, 15:2, 136-138, DOI: 10.1080/21507740.2024.2326967

Ienca, M. (2021): On Neurorights. Frontiers in Human Neuroscience 15:701258. doi:10.3389/fnhum.2021.701258.

Ienca, M.; Andorno, R. (2017). Towards new human rights in the age of neuroscience and neurotechnology. Life sciences, society and policy, 13, 1-27.

Ligthart, S; Ienca, M; Meynen, G., et al. (2023): Minding Rights. Mapping Ethical and Legal Foundations of ‘Neurorights.’ Cambridge Quarterly of Healthcare Ethics. 32(4):461-481. doi:10.1017/S0963180123000245.

Susser, D.; Cabrera, L. (2023): Brain Data in Context: Are New Rights the Way to Mental and Brain Privacy? AJOB Neuroscience, doi:10.1080/21507740.2023.2188275.



Is Privacy Security?

Daniel Susser

Cornell University, United States of America

Privacy talk is full of metaphors about security. We worry about privacy “threats,” “attacks,” “violations,” and “invasions.” We want to “control access” to personal information. We develop tools for preventing privacy “leaks” and “breaches.” In Europe, information privacy is cast as “data protection.” Is privacy just security by another name?

No. But in this paper, I trace the use of security metaphors—security thinking—in theoretical discussions about privacy, in privacy law and policy, and (most recently) in privacy engineering, and I argue that this slippage between privacy and security is a problem for all three. In theoretical and philosophical discussions about privacy, security thinking entered the picture by way of worries about incursions into the “private sphere”—the home, the bedroom, the doctor’s office. In law and policy, security found its way in when normative defenses of the “right to be let alone” were operationalized in terms of control over personal information (i.e., access controls). In privacy engineering, the development of modern privacy-enhancing technologies (PETs), such as differential privacy, was—from the very beginning—explicitly motivated by efforts to press tools and strategies from cryptography and computer security into the service of information privacy.

Security can bolster privacy—and vice versa—but they are distinct concepts that articulate different (if overlapping) sets of normative goals. While privacy resists straightforward definition, it names our aspiration to create social, political, and informational conditions in which individuals can develop and exercise autonomy, interpersonal relationships can flourish, and different visions of the good life can coexist. It is a fundamentally relational concept, concerned with the way people interact with and experience one another. Security, by contrast, aims at safety—protection from harm or the risk of harm. It can be understood relationally but it need not be; it is perfectly sensible to speak about securing oneself against harm from wild animals or an incoming storm.

The increasing dominance of security-related ideas in discussions about privacy has created several interlocking problems. First, insofar as privacy involves creating boundaries (spatial boundaries, decisional boundaries, epistemic boundaries, and so on), security thinking has encouraged privacy advocates to treat such boundaries as hard borders that require policing, rather than flexible interfaces that require negotiation, respect, and care. Second, because today security is often enacted by measuring and mitigating risk, thinking about privacy in terms of security has meant conceptualizing privacy as a form of risk management. Third, as security aims to prevent harm, when privacy is understood through the lens of security it’s assumed that violating someone’s privacy necessarily entails harming them (and that the absence of harm means the absence of violation), rather than abridging their rights.

Understanding privacy and security together is not all bad—each can meaningfully promote the other. But unless we carefully disambiguate them, privacy will increasingly be cast in the mold of security, and its distinctive aims will fade from view.

 
11:50am - 1:05pm(Papers) Ethics II
Location: Auditorium 7
Session Chair: Maren Behrensen
 

Considering the social and economic sustainability of AI

Rosalie Waelen, Aimee Van Wynsberghe

University of Bonn, Germany

Van Wynsberghe (2021) distinguishes three waves of AI ethics. The first ‘wave’ of AI ethics was predominantly concerned with far-future scenarios about the existential threat of superintelligence. The second wave of AI ethics focused on the shorter-term implications of specific AI applications and brought forward ethical guidelines and value-sensitive-design methods as tools to prevent or mediate AI’s ethical implications. Some ethical issues that are central to this second wave of AI ethics are bias and discrimination, trust and transparency, and responsibility. Van Wynsberghe (2021) proposes that there is a need for a third wave of AI ethics, where sustainability is a, if not the, central concern. This third wave is on its way. Thanks to pioneering work by Strubell, Ganesh and McCallum (2019), the AI Now Institute (Crawford & Joler, 2018; Dobbe & Whittaker, 2019), Bender and colleagues (2021), Crawford (2021), Brevini (2022), and others, attention to the environmental costs and material reality of AI is growing. While AI has long been seen as something untouchable that exists only ‘in the cloud’, it is now increasingly acknowledged that AI has a material dimension that consumes significant amounts of energy and water (Brevini, 2022; Strubell et al., 2019; van Wynsberghe, 2021).

The aim of this presentation will be to discuss recent research done on the concept of sustainable AI, and particularly its relation to debates about AI ethics and AI governance. Research on sustainable AI, so far, has been predominantly concerned with the material cost of AI, that is, the environmental sustainability of AI. We argue that focusing solely on the environmental sustainability of AI is too narrow. The concept of sustainability is commonly understood as having three dimensions: environmental, social, and economic sustainability. These three pillars also apply in the AI context, because there are not only environmental costs and considerations involved in AI development, but also social and economic ones. Moreover, as we will show, the environmental, social, and economic dimensions of AI development are intimately related. Through a discussion of recent literature on sustainable AI, we argue that sustainable AI and the third wave of AI ethics should therefore be about all three pillars of sustainability.



Synthetic socio-technical systems: poiêsis as meaning making

Federica Russo1, Andrew McIntyre2

1Utrecht University, Netherlands, The; 2University of Amsterdam

With the recent renewed interest in AI, the field has made substantial advancements, particularly in generative systems. Increased computational power and the availability of very large datasets has enabled systems such as ChatGPT to effectively replicate aspects of human social interactions, such as verbal communication, thus bringing about profound changes in society.

Building on a Latourian approach that considers human, natural, and also artificial entities as legitimate actants in a network, we explain why we need to take this conceptualisation a step further. We explain that the arrival of generative AI systems marks a shift from ‘interacting through’ to ‘interacting with’ technologies and calls for a reconceptualization of socio-technical systems as we currently understand them. While a large part of analogue technologies, and also of digital ones, are artefacts or systems we interact *through*, with generative AI systems we also interact *with* them. This means that the process of meaning making is no longer a peculiarity of human agents. Granted, generative AI systems do not *understand*, but they still partake in this process. We dub this new generation of socio-technical systems *synthetic* to signal the increased interactions between human and artificial agents, and, in the footsteps of philosophers of information, we cash out agency and meaning making in terms of the concept of ‘poiêsis’ (see Floridi 2013, Russo 2022).

We close the presentation with a discussion of the potential policy implications of synthetic socio-technical systems and the need to adopt an 'epistemology-cum-ethics' approach in AI.

References

Floridi, L. (2013). The ethics of information. Oxford University Press.

Russo, F. (2022). Techno-scientific practices: An informational approach. Rowman & Littlefield.



Exploring Kantian Part-Representation and Self-Setting Concepts in the Age of Artificial Intelligence

Pan Deng

Shenzhen University, China

This paper presents a philosophical inquiry into Immanuel Kant’s concepts of representation (Vorstellung) and part-representation (Teilvorstellung) through the lens of contemporary advancements in artificial intelligence (AI). By revisiting Kant’s Critique of Pure Reason and Opus Postumum, the study explores the hierarchical relationship between intuitive perceptions and conceptual functions, drawing parallels to the mechanisms of pattern recognition and machine learning.

Central to Kantian epistemology is the distinction between two modes of representation: intuition (Anschauung) and concept (Begriff). These are associated respectively with the faculties of sensibility and understanding. Kant posits that sensory intuitions provide immediate representations of objects, while concepts mediate these representations through logical functions. This dual structure forms a hierarchy of cognition where part-representations serve as essential building blocks for constructing comprehensive conceptual understandings.

In the context of AI, the hierarchical synthesis of part-representations parallels the process by which neural networks identify features and construct models for data classification. Just as Kant describes the formation of general concepts from particular intuitions, machine learning algorithms synthesize data inputs into abstract feature representations that inform decision-making processes. This paper investigates how Kant’s notion of part-representations as markers of sensory input relates to feature extraction methods in machine learning.

Further exploration is given to Kant’s theory of Selbstsetzung (self-setting), a concept developed in his Opus Postumum, where the mind actively constitutes the conditions for experience and cognition. The theory aligns intriguingly with modern discussions about the autonomy of artificial systems and their capacity for self-optimization. Kant’s insights on the interplay between logical functions and imagination in synthesizing knowledge offer valuable perspectives on how AI systems might simulate forms of cognitive self-regulation and adaptive learning.

The investigation also engages with the limitations of current AI models when contrasted with Kant’s philosophical framework. While AI systems operate based on numerical and probabilistic models devoid of self-awareness, Kantian thought emphasizes the primacy of self-consciousness (transzendentale Apperzeption) in unifying cognitive processes. By juxtaposing these perspectives, the paper raises questions about the possibility and limitations of achieving AI systems that genuinely emulate human cognitive faculties.

In conclusion, this study proposes that Kant’s hierarchical model of part-representations and his self-setting theory provide a robust framework for analyzing the epistemic functions of AI. The paper seeks to contribute to the ongoing dialogue between philosophy and technology by offering a nuanced understanding of machine cognition through Kantian epistemology.

 
11:50am - 1:05pm(Papers) Digital age
Location: Auditorium 8
Session Chair: Martin Sand
 

The affective scaffolding of grief in the digital age: the case of deathbots

Mark Alfano

Macquarie University, Australia

Contemporary and emerging chatbots can be fine-tuned to imitate the style, tenor, and knowledge of a corpus, including the corpus of a particular individual. This makes it possible to build chatbots that imitate people who are no longer alive — deathbots. Such deathbots can be used in many ways, but one prominent way is to facilitate the process of grieving. In this paper, we present a framework that helps make sense of this process. In particular, we argue that deathbots can serve as affective scaffolds, modulating and shaping the emotions of the bereaved. We contextualize this affective scaffolding by comparing it to earlier technologies that have also been used to scaffold the process of grieving, arguing that deathbots offer some interesting novelties that may transform the continuing bonds of intimacy that the bereaved can have with the dead. We conclude with some ethical reflections on the promises and perils of this new technology.



So close, yet so far: spatial production and immersive experiences in mixed reality – a case study of Ryuichi Sakamoto's Kagami

Jingni HUANG

National Chengchi University, Taiwan

Ryuichi Sakamoto was a composer, producer, and artist born in Tokyo. His film soundtracks received prestigious awards, including an Academy Award, two Golden Globes, a Grammy, and several others. He passed away in March 2023 at the age of 71. In late 2020, in collaboration with Tin Drum, Sakamoto participated in motion capture work for the mixed-reality project KAGAMI. Through wearable headsets, KAGAMI aims to enable audiences to feel as though they are witnessing a live performance in real time within a shared, immersive space.

Unlike other works of technological art, KAGAMI debuted posthumously after Sakamoto's passing, prompting media descriptions that go beyond "immersive experience" to terms such as "face-to-face," "resurrection," "rebirth," and "final solo concert." But can such an experience genuinely be achieved? From these descriptions, it is evident that the work relies on two basic elements: a spatial setting and the presence of both "Sakamoto" and the audience within that space.

This research explores how immersive experiences are created in the mixed-reality performance KAGAMI, drawing on Lefebvre's Three Levels of Social Space (spatial practice, representations of space, representational space) and Merleau-Ponty’s concepts of bodily perception, particularly "made flesh." Merleau-Ponty argues that space is rooted in the body but pays little attention to the body’s social and historical dimensions. In contrast, Lefebvre’s concept of space situates the body within a broader societal context. By engaging these two theorists in dialogue, this research aims to move beyond critical theories of space toward a deeper understanding of how space generates immersive experiences.

KAGAMI encompasses three distinct spaces: the first is the physical space of the theater; the second is the space created by the performance, akin to a traditional piano recital; and the third is the space of the audience's body. The immersive experience emerges when these three spaces converge into a unified whole. Merleau-Ponty describes bodily awareness as being oriented toward specific tasks or potential actions: "My body appears to me as an attitude directed towards a certain existing or possible task. And indeed, its spatiality is not...a spatiality of position, but a spatiality of situation". Immersion is not solely about the creativity of the artists or the technical expertise of the production team; it also depends on the physical space and the audience's embodied practices. From Merleau-Ponty's perspective, immersive experience is an interplay where we and all things around us mutually capture one another, suspending reflection and thought. This study challenges the notion that immersive experiences must involve zero distance, focusing instead on the sense of distance between the audience's body and the space, as well as the movement between different spaces.

This research offers a new framework for understanding mixed-reality experiences, emphasizing the significance of spatial production and bodily engagement over simplistic ideas of complete sensory encapsulation. It aspires to inspire future innovations in the field of technological art.

References

1. Chen, A., & Chuang, Y. (2024). Ryuichi Sakamoto's one-year memorial: Rebirth of his final live solo concert KAGAMI. Vogue Taiwan. Retrieved from https://www.vogue.com.tw/article/ryuichi-sakamoto-tim-drum

2. Factory International. (2023). In the studio with Todd Eckert Tin Drum [Video]. YouTube. Retrieved from https://www.youtube.com/watch?v=7hRwOP890a0

3. Fuchs, C. (2019). Henri Lefebvre's theory of the production of space and the critical theory of communication. Communication Theory, 29(2), 129–150.

4. Lefebvre, H., & Nicholson-Smith, D. (1991). The production of space. Blackwell Publishing.

5. Ling, M.-X. (2023). Ryuichi Sakamoto's final solo concert coming to Taiwan next year, inviting the audience onto the stage to meet the master face-to-face. Liberty Times. Retrieved from https://art.ltn.com.tw/article/breakingnews/4503407

6. Merleau-Ponty, M. (1973). The prose of the world. Northwestern University Press.

7. Merleau-Ponty, M. (2009). Eye and mind · The prose of the world. In Collected works of Merleau-Ponty (Vol. 8, D. Yang, Trans.). The Commercial Press. (Original work published 1969).

8. Merleau-Ponty, M. (2023). Phenomenology of perception (D. Yang, Y. Zhang, & Q. Guan, Trans.). The Commercial Press. (Original work published 1945).

9. Opentix. (2024). 2024 TIFA Ryuichi Sakamoto: KAGAMI. Retrieved from https://www.opentix.life/event/1714929180109553665

10. Polydorou, D. (2024). Immersive storytelling experiences: A design methodology. Digital Creativity, 35(4), 301–320.

11. Rauschnabel, P. A., Felix, R., Hinsch, C., Shahab, H., & Alt, F. (2022). What is XR? Towards a framework for augmented and virtual reality. Computers in Human Behavior, 133, 107289.

12. Škola, F., Rizvić, S., Cozza, M., Barbieri, L., Bruno, F., Skarlatos, D., & Liarokapis, F. (2020). Virtual reality with 360-video storytelling in cultural heritage: Study of presence, engagement, and immersion. Sensors, 20(20), 5851.

13. Trunfio, M., Jung, T., & Campana, S. (2022). Mixed reality experiences in museums: Exploring the impact of functional elements of the devices on visitor's immersive experiences and post-experience behaviours. Information & Management, 59(8), 103698.

14. Xuci Editorial Team. (2023). AI ushering a new era in music production: Deceased musicians "resurrected," transcending time through technology. P-Articles. Retrieved from https://p-articles.com/heteroglossia/3841.html

15. Zhang, C. (2020). The why, what, and how of immersive experience. IEEE Access, 8, 90878–90888.

 
1:05pm - 2:30pmLunch break
Location: Senaatszaal
2:30pm - 3:30pmKeynote 2 - Shannon Vallor - De-coding our humanity: Reflections on intimate and immanent technologies
Location: Blauwe Zaal
Session Chair: Lambèr Royakkers
3:35pm - 4:50pm(Papers) Sex robots
Location: Blauwe Zaal
Session Chair: Lily Frank
 

Queering the sex robot: insights from queer Lacanian psychoanalysis and new materialism

Maaike van der Horst, Anna Puzio

University of Twente, Netherlands, The

Human-like sex robots have the potential to mediate intimate relationships and sexuality (Frank and Nyholm, 2017). However, sex robots are rarely imagined or used in ways that can be considered queer. Historically and currently, sex robots have predominantly been depicted and used as idealized, woman-like objects by heterosexual men. Sex robots seem to mainly reinforce heteronormative and masculine ideals of sexual and romantic companionship. In this paper we explore the possibility of queering the sex robot. With queering the sex robot, we mean imagining, designing and using sex robots in ways that disrupt heteronormative, binary and particularly masculine frameworks of ‘good’ sexual and romantic companionships (Ahmed, 2006).

We do so through two distinct yet overlapping critical theoretical lenses: Queer Lacanian Psychoanalysis and New Materialism. New Materialism offers an understanding of human (and gender) identity as fluid and has introduced the figure of the cyborg as a queer concept and method. It focuses on relationships with the non-human (nature and technology) and, from a feminist perspective, highlights problematic power relations. Queer Lacanian Psychoanalysis critically analyzes the philosophical anthropology of the psychoanalyst and philosopher Jacques Lacan through a queer lens. Lacan has for instance highlighted the idea that ‘the sexual relation does not exist’ and thereby criticizes dominant heterosexual ideals. We identify several overlaps between these two perspectives on sexual relations: for example, both 1) emphasize the importance of non-human relationality 2) demonstrate how sexual identity and orientation are fundamentally fluid and 3) critique ideals of purity and harmony in sexual relations.

Based on these findings, we aim in the first part of the paper to highlight how the insights from these theories offer potential for queering sex robots, including how sex robots can be critiqued from these perspectives. In the second part of the talk, we plan to take a more practical approach and explore what queering the sex robot could look like in practice – a rather challenging endeavor. Drawing on our previous collaborations with designers and artists, we will generate some concrete ideas.

Sources

Ahmed, S. (2006). Queer Phenomenology: Orientations, Objects, Others. London: Duke University Press.

Frank, L., & Nyholm, S. (2017). Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable? Artificial Intelligence and Law, 25(3), 305-323. https://doi.org/10.1007/s10506-017-9212-y



Buddhist killer bots, sex bots and enlightenment bots

Tom Hannes

Eindhoven University of Technology

It may seem fairly unlikely that an ancient philosophy like Buddhism could have something to offer to moral reflection on our contemporary technological challenges. It is clear that the main Buddhist schools originated in environments quite different from ours. So, very likely, no direct inspiration is to be expected from Buddhist thinking or practices for the question of what it means to be intimate with our technologies. But I would like to discuss three Buddhist robots – two legendary robots and one actual robot – that within their particular context may shine a light on this question. In her book Gods and Robots, Adrienne Mayor (2018) quotes two Buddhist moral stories with remarkable sci-fi characteristics. The first story is a legend (or a group of legends) about an army of killer robots that guard the grave of none other than the Buddha. The Buddha was renowned for preaching ahimsa (non-violence), yet the killer robots were installed, according to legend, to protect his remains against grave robbers. The second story is more like a parable. It recounts the unfortunate story of a man who finds himself attracted to the daughter of his host, only to find out that she is actually a mechanical doll created by the host. A third example, Mindar, is of another category, for Mindar is a robot that is actually installed as a preacher in the Zen temple Kodai-ji and is (supposed to be) venerated as the embodiment of Kannon, the mythological figure of Buddhist compassion.

What these stories have in common is that they point to the importance of bringing together three basic elements of the Buddhist path, which are traditionally called 'wisdom training', 'moral discipline training' and 'meditation/attention training'. In light of this context, I want to rephrase these three as thetics (developing and keeping in mind the overall philosophical framework), ethics (implementing this framework in one's actions) and esthetics (the application of the philosophical framework in one's attentional practices). Within the context of our intimacy with technology, this calls for bringing together, and distinguishing between, the questions of what the technology is good for, how this good is protected in actual use, and how this is reflected in its design.

 
3:35pm - 4:50pm(Papers) Philosophy of technology II
Location: Auditorium 1
Session Chair: Olya Kudina
 

Vulnerability and technologies in post-normal times

Natalia Fernández Jimeno1, Marta I. González García2

1Institute of Philosophy- Spanish National Research Council, Spain; 2University of Oviedo, Spain

Funtowicz and Ravetz (1993) coined the term "post-normal science" to refer to a particular strategy for solving scientific problems characterised by uncertain facts, disputed values, high risks, and urgent decisions. "Post-normal science" introduces a new temporal framework for scientific and technological development, marked by the need to provide solutions in response to pressing issues. Post-normal times (Sardar, 2010) are times of diversified and open futures, yet subject to technological optimisation and rationalisation (Wajcman, 2015). Consumer societies in developed capitalism create contexts of urgency to encourage the population to adopt certain technologies uncritically, through the creation of needs. In this scenario, consumption is prioritised over risk assessment, leading to the use of intimate technologies without full knowledge of their potential impacts on health, identity, privacy, security, desires, etc. A clear example of this is menstrual cycle tracking applications, which, by collecting personal and health data, may pose privacy risks without users being fully aware of them. This uncertainty and lack of transparency generate vulnerability in users, making them passive recipients of technology.

In this contribution, we explore how these technologies, despite being designed to improve users' lives, often inadvertently create or exacerbate vulnerabilities. These vulnerabilities are often linked to the lack of transparency in how data is collected, processed, and utilised, leaving users with limited control or understanding of how their personal information is being used. By analysing the ways in which the design, deployment, and adoption of certain technologies can introduce risks to privacy, security, and well-being, we aim to highlight the tensions in our present relationship with technologies.

In response, we advocate for an active and reflective role for users. We question how it is possible to intervene in the development of such technologies to reduce these vulnerabilities. This involves not only advocating for greater accountability and transparency on the part of developers and corporations but also considering the implementation of regulatory frameworks that ensure the ethical use of technology. Additionally, we examine the importance of fostering a more active and informed role for users in the technological design process, allowing them to participate meaningfully in the decision-making that affects their lives. By incorporating these considerations into the development of new technologies, it is possible to create a more equitable and secure environment for users—one that respects their autonomy and minimizes potential risks.

References:

Funtowicz, S.O., Ravetz, J.R. (1993). Science for the Post-Normal Age. Futures, 25(7): 739-755.

Sardar, Z. (2010). "Welcome to postnormal times". Futures 42(5): 435–444.

Wajcman J. (2015). Pressed for time: The Acceleration of Life in digital capitalism. University of Chicago Press.



Technical Expression and the mitigation of alienation in human-technology relationships

Kaush Kalidindi

TU Eindhoven, The Netherlands

Technical objects are better understood as ongoing processes than as finished objects—they evolve, adapt, and transform over time in response to user interventions and changes in the environment. Yet established theories in design ethics, particularly postphenomenology and Value-Sensitive Design, often treat the technical object as a static entity, overlooking how it continues to be reconfigured once it enters public use. In this paper, I argue that this oversight fosters a form of alienation—both at the individual and community levels—where users are distanced from the ongoing evolution of the very technologies they rely upon. At the individual level, people can be alienated when they perceive themselves as mere consumers rather than active participants, lacking any real capacity to modify or tinker with artifacts. At the community level, a disconnect emerges when design ethics frameworks introduce technical objects from the outside, while bypassing local, incremental practices of repair, reconfiguration, and everyday innovation.

Drawing on the work of Gilbert Simondon (1958/1980) and contemporary scholarship emphasizing a “process turn” in philosophy of technology (Young, 2024; Coeckelbergh, 2023), I propose the notion of technical expression as a means to address this alienation. Technical expression refers to the activity in which individuals “express themselves through the way they intervene with the machine,” be it during invention, maintenance, or radical repurposing. These interventions reflect personal preferences, creative impulses, and a unique understanding of a technology’s capacities acquired through hands-on experience. More importantly, technical expression spawns entire communities—such as ikeahackers.net or 3D printing forums—where members share and refine each other’s interventions, creating a “living repository of technical possibilities” that keeps technologies dynamic, culturally embedded, and open to grassroots innovation.

While I acknowledge that constraints of design, manufacturing, and intellectual property often limit the degree to which users can engage in such expression, I argue that the crux of the problem is when no meaningful intervention in its ongoing evolution is possible. In these fully closed systems, artifacts cease to appear as living, evolving practices and instead become static commodities shaped solely by corporate or design-elite interests. Recognizing that technologies are never truly finished challenges us to develop design ethics frameworks that honor this processual ontology. My hope is that by foregrounding technical expression—and by acknowledging its potential trade-offs with values such as security or intellectual property—we can begin to mitigate alienation and create more inclusive, adaptive approaches to the design and development of technology.

References:

Coeckelbergh, M. (2023). Technology as Process. Thinking Through Science and Technology: Philosophy, Religion and Politics in an Engineered World. Maryland: Rowman & Littlefield, 55–68.

Simondon, G., Mellamphy, N., & Hart, J. (1980). On the mode of existence of technical objects (p. 1980). London: University of Western Ontario.

Young, M. T. (2024). Technology in Process: Maintenance and the Metaphysics of Artefacts. In Maintenance and Philosophy of Technology (pp. 58–85). Routledge.



What grounds technical functions: a critical assessment of dispositional account of technical functions

Enrong Pan, Kuiyuan Huang

Sun Yat-sen University, China, People's Republic of

Abstract

Critics of the Dual Nature of Technical Artifacts Program (DNP, Vaccari, 2013) often advocate for an account with multiple natures rather than a dualistic one, or attempt to reduce functions to structures. The Dispositional Account of Technical Functions (DAF), recently proposed by Mitchell Roberts, exemplifies the latter approach. Its central claim is: “If x is a technical function of artifact A, then x is ultimately referring to a disposition of A” (Roberts, 2024). On the basis of the formal similarity between functions and dispositional properties, the neater ontology of this approach, and the clearer metaphysical location it gives to functions, Roberts reduces technical functions, in the general sense, to the intrinsic modal properties (i.e., dispositions) of technical artifacts as physical objects. Meanwhile, subcategories of technical functions that involve intentional concepts (such as malfunctions) are viewed merely as epistemic outcomes of user expectations, in virtue of agents' ascriptions and ultimately socio-cultural facts, without the same ontological status as dispositional properties. This distinction explains why different subcategories of functions can be treated as varieties of the same thing.

In this paper, we will argue that intentions metaphysically ground functions and thus have an ontological status, and that Roberts' use of "ultimately refers" should be understood as characterizing a metaphysical grounding relationship. Additionally, we will invoke Martin Peterson’s concept of "half-strong dual-basis supervenience" (Peterson, 2022) to explain the relationship between structures and functions. This is a supervenience theory that can account for the two-way underdetermination (UD) between higher-order objects and their material basis, as proposed by Houkes and Meijers. It posits that technical functions strongly supervene on material properties but weakly supervene on intentional history. Based on the claim that grounding entails supervenience (Chilovi, 2021), we will modify DAF in line with Peterson’s supervenience theory to say: “If x is a technical function of artifact A, then x is ultimately referring to a disposition and intentional history of A.” This revision not only introduces a dispositional account into the metaphysics of the dual nature of technical artifacts but also reinforces the irreducibility of functions to material structures.

References

Chilovi, Samuele (2021). Grounding entails supervenience. Synthese 198 (S6):1317-1334.

Peterson, Martin (2022). What Do Technical Functions Supervene On? Techné Research in Philosophy and Technology 26 (3):413-425.

Roberts, Mitchell (2024). A dispositional account of technical functions. Synthese 204 (3):1-19.

Vaccari, Andrés (2013). Artifact Dualism, Materiality, and the Hard Problem of Ontology: Some Critical Remarks on the Dual Nature of Technical Artifacts Program. Philosophy and Technology 26 (1):7-29.

 
3:35pm - 4:50pm(Papers) Personality, pediatrics and psychiatry
Location: Auditorium 2
Session Chair: Luca Possati
 

Personality without theory: Engineering AI personalities

Roman Krzanowski1, Isabela Lipinska2

1The Pontifical University of John Paul II in Krakow; 2Polskie Towarzystwo Informatyczne, Warsaw

The development of autonomous AI systems is rapidly advancing, with these systems expected to assume a wide variety of roles traditionally filled by humans, such as companions, advisors, educators, healthcare providers, and even coworkers. As AI systems integrate into these diverse functions, a key challenge is designing role-specific synthetic personalities that align with the tasks they are assigned. This raises fundamental questions about what constitutes a personality in AI systems, how these synthetic personalities relate to moral agency, and the extent to which AI can be imbued with human-like personality traits.

To address these questions, we examine three distinct frameworks for understanding AI personalities: (1) the ontological difference, which posits that human agents and AI systems are fundamentally different; (2) the strong reductive view, which asserts that human agents and AI systems are essentially the same; and (3) the weak functional reduction, which suggests that while human agents and AI systems may be functionally similar, they are not identical. These frameworks influence how much of human personality can be simulated within AI systems and how synthetic personalities might interact with moral decision-making processes.

A central issue in this discussion is whether AI systems can possess personalities and, if so, in what form. While human personalities are complex and often elusive, with traits that are difficult to define or quantify, synthetic personalities in AI systems are programmable and malleable. The challenge lies in the fact that personality in AI is not intrinsic, but rather a set of behaviors and traits designed to suit particular functions. Even with advances in AI, it remains unclear whether we can truly emulate human personalities, as this would require exact replication, which is beyond current technological capabilities.

This inquiry also addresses the role of synthetic personalities in moral agency. If AI systems are designed with particular personalities, how might these personalities influence their decision-making and ethical behavior? Should AI agents be explicitly designed with certain personality traits to promote ethical decision-making, or is this unnecessary given the role-specific nature of their tasks? Furthermore, can we predict how AI systems will behave based on their synthetic personalities? These questions extend to whether traditional personality tests designed for humans could be adapted to evaluate the personalities of AI systems.

We propose that synthetic personalities in AI systems will be primarily shaped by system design requirements and will not be inherently tied to moral behavior. Unlike human personalities, which evolve and adapt over time, AI personalities are explicitly crafted by developers to fulfill specific roles. As no personality theory would fit the context of AI systems, we denote these personalities in AI as personalities without theories.

This presents both opportunities and challenges: on one hand, AI personalities can be optimized for particular tasks; on the other, there is the potential for misuse, manipulation, or unintended consequences, particularly as AI systems become more autonomous and capable of modifying their own behavior. In future iterations of AI, systems with self-learning capabilities might alter their synthetic personalities without human intervention, introducing the risk of unpredictable or undesirable outcomes.

Given these challenges, we argue that existing personality assessment tools for humans are insufficient for evaluating AI personalities. Instead, new frameworks and tests will need to be developed to assess synthetic personalities in AI systems, ensuring they meet design goals and align with the intended ethical guidelines. Such assessments would need to verify that AI systems' personalities are compatible with their roles and are not prone to manipulation or error.

In conclusion, while AI systems may exhibit personality-like traits, these traits will not constitute a true personality in the human sense. Instead, they will be functionally designed features, directly linked to the roles AI systems are tasked with. As such, the relationship between synthetic personalities and moral agency is not inherent but must be explicitly designed and tested. This calls for further research into the ethical implications of synthetic personalities and the development of new methods for evaluating and ensuring that these AI systems fulfill their intended moral and functional roles.



The use of AI in pediatrics - an assessment matrix for consent requirements

Tommaso Bruni, Bert Heinrichs

Forschungszentrum Jülich GmbH, Germany

Inside bioethics, extensive attention has already been devoted to the ethics of medical AI. However, in the extant literature there’s little discussion about how to ethically regulate the development and use of pediatric AI. In this paper, we argue that informed consent requirements for using AI in pediatrics should not be of a “one-fits-all” nature but should rather be gauged to the kind of AI tool. We deal with four archetypal cases of pediatric AI tools: tools that assist physicians in mundane tasks such as the writing of medical notes, tools that help the physician diagnose a condition or recommend a treatment, tools that gather data from the patient, and tools that are embedded in medical devices that act on the patient’s body, like automated insulin delivery (AID) systems, or are anyway to be used directly by the patients, like AI tools that provide psychotherapy. The use of these AI tools comes with different levels of risk. We argue that informed consent requirements for research and clinical deployment of these tools must be adapted to their level of risk. However, in the case of pediatric AI it's not easy to make a direct risk assessment, mostly because there is little empirical research on these tools. We hence put forth two dimensions that can act as proxies for the level of risk. First, the level of involvement of the physician, i.e. the extent to which the AI acts on the physician (rather than the patient) or the physician is monitoring or controlling what the AI tool does. Second, the invasiveness of the AI intervention, i.e. how directly it involves the patient’s body and mind. We focus on where the four above-mentioned archetypal cases are situated in the plane formed by these two dimensions. The position determines how strict the informed consent procedure ought to be. For instance, an AI tool which helps write notes features a high level of physician involvement and a minimal level of invasiveness. We claim that in such cases consent can be framed in an opt-out fashion and be included in the standard form for data treatment, for instance as a box to be ticked in case of opposition. In middle-range cases, for instance when the AI provides the physician with a treatment recommendation, consent should be given in written form, for instance by ticking either of two boxes, one for acceptance and one for refusal, but without requiring the full informed consent procedure. When the AI tool is a clinical intervention proper, the traditional, full informed consent procedure is to be used, and the assent of the cognitively mature, competent minor is necessary. Legal requirements (for instance for health data treatment) change according to jurisdiction and must be upheld. However, we focus on the ethical requirements for consent and hope that ethical inquiry will guide the legal regulation’s future evolution.

 
3:35pm - 4:50pm(Papers) Care I
Location: Auditorium 3
Session Chair: Matthew Dennis
 

The helpless robot and the serving human

Lena Alicija Philine Fiedler

Technical University Berlin, Germany

Today’s robots are often viewed as technologically sophisticated tools, yet their limitations become apparent in dynamic environments such as public spaces. Despite advancements in robotics, public spaces pose persistent challenges due to their inherent unpredictability. Environmental factors like parked cars, changing weather, and varying lighting conditions, combined with diverse and unforeseeable human-robot interactions, make standardization impossible. Consequently, robots frequently require human assistance. This phenomenon is illustrated by viral videos showing people helping delivery robots stuck in the snow. Research further confirms that humans are generally willing to help robots. Additionally, robot developers have incorporated design strategies—such as vocalizing distress or adopting endearing appearances—to elicit human assistance.

This paper examines the ethical implications of such interactions, arguing that what appears to be an altruistic act does, in fact, carry deeper ethical concerns, particularly regarding deception and unpaid labor. It employs the concept of Incidentally Co-Present Persons (InCoPs) to analyze these dynamics. InCoPs are individuals who, without prior intention or preparation, encounter robots in public spaces and may be nudged into providing assistance.

The paper argues that companies might intentionally deceive InCoPs through manipulative robot design, evoking emotional responses that prompt unsolicited assistance. It evaluates whether this deception is morally problematic, arguing that deception is not inherently unethical but becomes so when it causes harm. The analysis demonstrates that deceiving InCoPs to assist robots is harmful in at least one critical way: assisting robots constitutes work, as such actions generate economic value for the companies deploying these robots. This exploitation harms individuals because they lack legal protections and waste their time and energy without compensation. It also harms society by reinforcing unequal power dynamics akin to unpaid care work. The paper therefore concludes that this deception is morally problematic, as it coerces InCoPs into providing labor without their consent and without compensation. This phenomenon is further complicated by the potential for social and emotional attachments to robots, which can skew priorities in public spaces, such as favoring a malfunctioning robot over assisting a homeless individual.

In conclusion, this paper calls for further philosophical and ethical inquiry into human-robot interactions in public spaces. It proposes two potential solutions to address the outlined ethical concerns: redefining robots in public spaces as community-owned public goods or implementing reward systems to compensate individuals who assist robots. While this paper provides a foundation for analyzing the ethical challenges of helping robots, it also highlights the need for deeper investigations into the economic and political implications of these interactions.



Preserving intimacy in dementia care: an ethical and technological approach towards an ecology of memory

Nathan Degreef

UCLouvain, Belgium

Major neurocognitive disorders, commonly referred to as “dementia”, can be seen as a profound disruption of intimacy and its foundations – that is, as a destabilization of the relationship to oneself, to others, and to the surrounding environment. Indeed, dementia encompasses progressive, degenerative, and chronic conditions characterized by memory impairment and cognitive dysfunctions. These disorders progressively affect the individual's ability to perform daily activities, engage in social interactions, and maintain functional independence. Symptoms are frequently understood as a state of incapacity, which may also diminish the ability to form or maintain intimacy. Some authors conclude that it leads to a “loss of self” (Cohen and Eisdorfer, 2001) or “loss of personhood” (Sweeting and Gilhooly, 1997).

This paper aims to investigate how ethical practices, mediated by technology, can help preserve intimacy and mitigate the neuropsychological impairments associated with dementia. We argue that while dementia compromises an individual’s autonomy, it does not eliminate it. Rather, the preservation of autonomy requires external support from caregivers and the environment. We reviewed the existing literature to address the following research question: What types of care should be prioritized to empower individuals with dementia and optimize their functional capacity?

To this end, we begin by examining the dominant paradigm that shapes dementia care. We characterize it as deficit-oriented due to its focus on incapacities. We then present an alternative paradigm that emphasizes what remains and persists in patients (Kitwood, 1997; Sabat, 2019), framing it as a possibility for empowerment. From this, we outline two primary ethical frameworks that underpin the paradigmatic models discussed—those centered on autonomy and vulnerability (Pelluchon, 2009)—arguing that both are still influenced by an incapacitating assumption. In response, we propose that a truly emancipatory approach should be grounded in a perspective informed by narrative ethics, which supports the subjectivity of the patient, conceptualized as an “ecology of memory”.

To achieve this, we emphasize the integration of technological tools that assist in self-expression and memory preservation, even when cognitive impairments limit a person's ability to narrate their own story. These technologies act as memory prosthetics: digital memory aids (Hodges et al., 2006), “evocative objects” (Heersmink, 2022), and wearable devices (Buse and Twigg, 2014), alongside other emerging innovations, help patients recall personal experiences and reconnect with their identity. By incorporating such technologies, we can cultivate environments that support self-continuity, fostering intimacy and autonomy while maintaining a holistic approach to dementia care.

 
3:35pm - 4:50pm(Papers) Disruptive technology III
Location: Auditorium 4
Session Chair: Nolen Gertz
 

Ethical frameworks for disruptive technologies: Balancing innovation, privacy, and value-sensitive design

Mireia Bosch1, Diego Zamora2

1Hyper Island; 2University of Plymouth

Disruptive technologies, including artificial intelligence (AI), quantum computing, synthetic media, persuasive platforms, and intimate systems like wearable devices and virtual assistants, are reshaping societal structures and values in profound ways. While these technologies offer immense potential in improving efficiency and personalization, they also introduce significant ethical and conceptual challenges. They exploit cognitive vulnerabilities, manipulate behavior, and threaten privacy, autonomy, and societal trust (Jorge, Amaral, & de Matos Alves, 2022). The intrusion of these technologies into intimate spaces such as personal data, health, and identity raises crucial concerns about autonomy and personal integrity, particularly when individuals remain unaware of the extent to which their decisions are influenced by external systems (Van Est et al., 2014; Friedman & Hendry, 2019). As technology continues to evolve, the need for comprehensive ethical frameworks to address these risks becomes increasingly urgent. However, there is a fast-growing body of frameworks and methods, making selection daunting (Vandemeulebroucke et al., 2022).

This study explores both the ethical implications of disruptive technologies and the responsibility of engineers and designers in mitigating the associated risks. The research introduces two innovative frameworks: Bending Technology (Zamora, 2019) and The Humane Technology Compass (Bosch, 2022). These frameworks advocate for participatory, community-centered approaches to technology design. They encourage collaboration between users and designers to ensure that technological development aligns with ethical values.

In addition, the study integrates these frameworks with established ethical design approaches, such as Value-Sensitive Design (Borthwick, Tomitsch, & Gaughwin, 2022) and Calm Technology (Case, 2018). These frameworks provide concrete strategies for embedding ethics into the development of disruptive technologies, addressing critical concerns such as data privacy, algorithmic bias, and the balance between personalization and autonomy. Initial tests of these frameworks have shown potential in guiding the design of wearable health devices and virtual coaching platforms, resulting in reduced privacy risks and enhanced user agency.

By incorporating ethical reflection into design processes, engineers and designers can help ensure that disruptive technologies contribute to, rather than undermine, human well-being. We propose a value-centered approach, rooted in collaborative participation and the identification of core values, to establish meaningful and relevant frameworks.

References:

Borthwick, M., Tomitsch, M. and Gaughwin, M., 2022. From human-centred to life-centred design: Considering environmental and ethical concerns in the design of interactive products. Journal of Responsible Technology, 10, p.100032.

Bosch, M., 2022. The Humane Technology Project: Developing a framework for creating socially-responsible technology to improve society and people's life. https://www.academia.edu/127011824/The_Humane_Technology_Project_Developing_a_framework_for_creating_socially_responsible_technology_to_improve_society_and_peoples_life

Case, A., 2018. Is Your Product Designed to Be Calm?. [online] Medium. https://caseorganic.medium.com/is-your-product-designed-to-be-calm-cdde5039cca5

Friedman, B. and Hendry, D., 2019. Value Sensitive Design: Shaping technology with moral imagination. 1st ed. [ebook] Seattle: The MIT Press, pp.1,2,19. https://mitpress.mit.edu/books/value-sensitive-design

Jorge, A., Amaral, I. and de Matos Alves, A., 2022. Time Well Spent: The ideology of temporal Disconnection as a Means for Digital Well-Being. International Journal of Communication, [online] 16, p.1555. https://ijoc.org/index.php/ijoc/article/view/18148/3717

Est, van, Q. C., Rerimassie, V., Keulen, van, I., & Dorren, G. (2014). Intimate technology: the battle for our body and behaviour. Rathenau Instituut, pp.10,12,14,16. https://www.rathenau.nl/sites/default/files/2018-04/Intimate_Technology_-_the_battle_for_our_body_and_behaviourpdf_01.pdf

Vandemeulebroucke, T., Denier, Y., Mertens, E., Gastmans, C., 2022. Which Framework to Use? A Systematic Review of Ethical Frameworks for the Screening or Evaluation of Health Technology Innovations. Sci Eng Ethics 28, 26. https://doi.org/10.1007/s11948-022-00377-2

Zamora, D. (2019). Bending technology: a collaborative approach towards digital fabrication. https://era.ed.ac.uk/handle/1842/36111



It’s time to talk about moral progress: Facing the normativity of the philosophy of (disruptive) technologies

Jason Branford

University of Hamburg, Germany

This paper argues for the greater integration of the concept of moral progress into the philosophy of “disruptive” technologies (cf. Hopster, 2021a; Hopster, 2021b; Hopster & Maas, 2023). While significant contributions have been made to understanding the moral implications of technology and its role in moral change (Danaher & Saetra, 2022; Danaher & Sætra, 2023; Swierstra, Stemerding, & Boenink, 2009; Waelbers & Swierstra, 2014), moral revolutions (Danaher & Hopster, 2022; Hermann et al., 2021; Hopster et al., 2022; Klenk et al., 2022), and moral uncertainty (Danaher, 2023; Nickel, 2020; Nickel, Kudina, & van de Poel, 2022), there has been a notable absence of explicit engagement with the notion of moral progress. This omission, despite the extensive attention moral progress has received in other areas of philosophy (Anderson, 2014; Buchanan & Powell, 2018; Jamieson, 2002; Kitcher, 2021; Moody‐Adams, 1999; Pleasants, 2018; Roth, 2012; Wilson, 2010), requires examination to advance the discourse. This paper considers plausible reasons for this oversight, argues for explicitly addressing the normativity of such inquiries, and proposes that Philip Kitcher’s pragmatic account of moral progress (Kitcher, 2011, 2015, 2017, 2021) offers a productive framework for investigating techno-moral progress.

One reason for the lack of focus on moral progress is the methodological bracketing of normative questions in favour of descriptive inquiries. While this approach enables researchers to avoid contentious normative debates, bracketing normative considerations risks overlooking how these may indeed shape the very phenomena under investigation. Further, normative assumptions often enter implicitly through examples, language, or argumentation, revealing the flawed assumption that researchers’ normative commitments can be fully excised from their work (cf. Anderson, 1995; Longino, 1990). Explicit acknowledgment of these dimensions would enhance philosophical rigor and transparency, replacing moral agnosticism with a moral sincerity that aptly recognises the real-world consequences of technological innovation. Failure to rectify this risks complicity in ethically objectionable outcomes and misses opportunities for greater practical and ameliorative impact (Kitcher, 2023).

Another reason may stem from theoretical concerns about moral progress’s association with moral realism, and the supposed need to commit to some form of moral truth as a yardstick. Additionally, concerns about teleology often stem from justifiable and long-standing fears of utopian end-state planning. However, these worries can be mitigated through Kitcher’s pragmatic account of moral progress. Kitcher’s approach is backward-looking and emphasises problem resolution, practical outcomes, and the reconstruction of moral practices. Overall, a concern for moral progress forces us to grapple with difficult questions concerning our current moral trajectory, how it might be positively redirected, and what role technology might play. Moreover, it may yet come to constitute a novel ethical metric, namely, to evaluate in what ways emerging technologies might foster or stymie the possibility of future moral progress. All told, focus on moral progress promises to align philosophical inquiry with the transformative effects of technology on moral life.

References

Anderson, E. (1995). Knowledge, human interests, and objectivity in feminist epistemology. Philosophical topics, 23(2), 27-58.

Anderson, E. (2014). Social movements, experiments in living, and moral progress: Case studies from Britain’s abolition of slavery. In: University of Kansas, Department of Philosophy.

Buchanan, A., & Powell, R. (2018). The evolution of moral progress: A biocultural theory. Oxford: Oxford University Press.

Danaher, J. (2023). Moral Uncertainty and Our Relationships with Unknown Minds. Cambridge Quarterly of Healthcare Ethics, 32(4), 482-495.

Danaher, J., & Hopster, J. (2022). The normative significance of future moral revolutions. Futures, 144, 103046.

Danaher, J., & Saetra, H. S. (2022). Technology and moral change: the transformation of truth and trust. Ethics and Information Technology, 24(3), 35.

Danaher, J., & Sætra, H. S. (2023). Mechanisms of techno-moral change: A taxonomy and overview. Ethical Theory and Moral Practice, 26(5), 763-784.

Hermann, J., Hopster, J., Klenk, M., Eriksen, C., O'Neill, E., Blunden, C., . . . Steinert, S. (2021). The Structure of Technomoral Revolutions. Inquiry.

Hopster, J. (2021a). The ethics of disruptive technologies: towards a general framework. Paper presented at the International Conference on Disruptive Technologies, Tech Ethics and Artificial Intelligence.

Hopster, J. (2021b). What are socially disruptive technologies? Technology in Society, 67, 101750. doi:https://doi.org/10.1016/j.techsoc.2021.101750

Hopster, J., Arora, C., Blunden, C., Eriksen, C., Frank, L. E., Hermann, J., . . . Steinert, S. (2022). Pistols, pills, pork and ploughs: the structure of technomoral revolutions. Inquiry, 1-33.

Hopster, J., & Maas, M. (2023). The technology triad: disruptive AI, regulatory gaps and value change. AI and Ethics, 1-19.

Jamieson, D. (2002). Is there progress in morality? Utilitas, 14(3), 318-338.

Kitcher, P. (2011). The ethical project. Cambridge, MA: Harvard University Press.

Kitcher, P. (2015). Pragmatism and Progress. Transactions of the Charles S. Peirce Society: A Quarterly Journal in American Philosophy, 51(4), 475-494.

Kitcher, P. (2017). Social progress. Social Philosophy and Policy, 34(2), 46-65.

Kitcher, P. (2021). Moral Progress. In J.-C. Heilinger (Ed.), Philip Kitcher: Moral Progress. New York: Oxford University Press.

Kitcher, P. (2023). What's the Use of Philosophy? Oxford University Press.

Klenk, M., O’Neill, E., Arora, C., Blunden, C., Eriksen, C., Frank, L., & Hopster, J. (2022). Recent work on moral revolutions. Analysis, 82(2), 354-366.

Longino, H. E. (1990). Science as Social Knowledge: Values and Objectivity in Scientific Inquiry. Princeton University Press.

Moody‐Adams, M. M. (1999). The idea of moral progress. Metaphilosophy, 30(3), 168-185.

Nickel, P. J. (2020). Disruptive Innovation and Moral Uncertainty. NanoEthics, 14(3), 259-269. doi:10.1007/s11569-020-00375-3

Nickel, P. J., Kudina, O., & van de Poel, I. (2022). Moral Uncertainty in Technomoral Change: Bridging the Explanatory Gap. Perspectives on Science, 30(2), 260-283. doi:10.1162/posc_a_00414

Pleasants, N. (2018). The structure of moral revolutions. Social Theory and Practice, 44(4), 567–592.

Roth, A. (2012). Ethical Progress as Problem‐Resolving. The Journal of Political Philosophy, 20(4), 384-406.

Swierstra, T., Stemerding, D., & Boenink, M. (2009). Exploring techno-moral change: the case of the obesity pill. Evaluating New Technologies: Methodological Problems for the Ethical Assessment of Technology Developments, 119-138.

Waelbers, K., & Swierstra, T. (2014). The family of the future: how technologies can lead to moral change. Responsible Innovation 1: Innovative Solutions for Global Issues, 219-236.

Wilson, C. (2010). Moral progress without moral realism. Philosophical Papers, 39(1), 97-116.



Navigating conceptual disruption through affordances-informed conceptual engineering. Taxonomy and operationalisation

Samuela Marchiori

TU Delft, Netherlands, The

Conceptual engineering is gaining prominence as a normative approach to conceptual work in the philosophy of technology. Conceptual engineering enables philosophers to evaluate conceptual adequacy, propose targeted interventions when appropriate, and implement such proposals (Chalmers, 2020). In particular, conceptual engineering has been proposed as a useful approach to address and bridge instances of conceptual disruption, i.e., interruptions in the normal functioning of concepts (Löhr, 2022, 2023a, 2023b; Marchiori & Scharp, 2024). Current approaches to conceptual disruption and conceptual engineering have traditionally been guided by a broadly functional account of concepts, whereby concepts are deemed adequate to the extent that they are able to fulfil their functions (Hopster & Löhr, 2023; Hopster et al., 2023). Conversely, concepts are deemed inadequate (and thus, disrupted) when they are unable to do so. However, the functionalist paradigm insufficiently addresses the complex roles concepts play beyond their intended functions (Nado, 2019). In response, recent work on conceptual engineering has advanced the discussion by proposing that such a functional framework should be informed by the affordances of concepts—i.e., the potential actions that concepts enable. This paper synthesises and extends prior work on conceptual disruption and conceptual engineering in the philosophy of technology, as well as work on conceptual functions and affordances, by offering a taxonomy integrating the ways in which conceptual disruption can manifest with appropriate affordances-informed conceptual engineering interventions. Furthermore, the paper advances a tentative operationalisation of such a framework, by structuring the analysis of conceptual adequacy and the related suggested conceptual engineering interventions into the following dimensions: type of affordance, desirability, satisfiability, type of conceptual disruption, recommended conceptual engineering intervention, and urgency of the intervention. Such operationalisation aims to guide conceptual engineers in their interventions.

Chalmers, D. J. (2020). What is conceptual engineering and what should it be? Inquiry, 1-18.

Hopster, J., Brey, P., Klenk, M., Löhr, G., Marchiori, S., Lundgren, B., & Scharp, K. (2023). Conceptual disruption and the ethics of technology.

Hopster, J., & Löhr, G. (2023). Conceptual engineering and philosophy of technology: Amelioration or adaptation? Philosophy & Technology, 36(4), 70.

Löhr, G. (2023). Conceptual disruption and 21st century technologies: A framework. Technology in Society, 74, 102327.

Löhr, G. (2022). Linguistic interventions and the ethics of conceptual disruption. Ethical Theory and Moral Practice, 25(5), 835-849.

Marchiori, S., & Scharp, K. (2024). What is conceptual disruption? Ethics and Information Technology, 26(1), 18.

Nado, J. (2021). Conceptual engineering, truth, and efficacy. Synthese, 198(Suppl 7), 1507-1527.

 
3:35pm - 4:50pm(Papers) Machine Learning
Location: Auditorium 5
Session Chair: Vlasta Sikimić
 

Fair to understand fairness contextually in machine learning

Jyoti Kishore

Indian Institute of Technology, India

Predictive AI is extensively used in decision-making in medicine, the judicial system, finance and many other domains. Even when these predictive AI tools are not used directly to make decisions, they aid humans in decision-making processes. For example, COMPAS, an AI tool that predicts the rate of recidivism, estimates how likely a defendant is to commit another crime and helps judges decide the defendant’s punishment. Similarly, generative AI is extensively used to generate text, images, audio, and videos. With the use of generative and predictive AI, a plethora of ethical issues is arising related to bias, privacy and security. While there have been attempts to minimize bias, fairness has been the most sought-after goal of both predictive and generative AI. Although bias and fairness are central concepts in morality, they have not been clearly defined. The idea of what fairness should be is pluralistic, which means that there is no single standard of fairness. The problem intensifies when fairness is applied to machine learning algorithms: one standard of fairness often contradicts another, and clearly not all standards of fairness can be achieved. Consequently, the problem of bias inevitably arises. It can manifest as unfairness towards marginalized communities in algorithmic decision-making or as cultural bias in predictive and generative AI, respectively. In a predicament like this, recent literature suggests that we need to re-think how we look at fairness, including how fairness has been defined over time, owing to historical injustices, and evaluating the goals that one tries to achieve by using fairness. In this paper, I bring forth the conundrum surrounding the fairness debate in AI, which gives rise to allegations that machine learning algorithms are biased. I then present a comprehensive study of how fairness is approached in recent literature, leading to the conclusion that adopting fairness in this manner eventually makes every AI model contextual, not only in the way it performs its task but also in the way fairness as a value is implemented.



Technology as a constellation: The challenges of doing ethics on enabling technologies

Sage Cammers-Goodwin, Michael Nagenborg

University of Twente

While it is tempting to think that technologies as artefacts have an author (Franssen et al., 2024), new technologies rarely emerge from a vacuum. Rather, new and emerging technologies are a summation of prior innovations, a cocktail of science, invention, and discovery. Like the ship of Theseus, elements can be swapped out and reconfigured while the intention of the entity remains the same. And these elements, too, are technologies with their own constellations of elements that can be adapted. What technologies are metaphysically can shift with a swap of a processor, update of a lens, frying of a cable, or runtime reduction of an algorithm.

Technology as an entity is fickle. While “phone” has remained a steady term in the past century, how it works and what it is capable of have shifted. This revolution is in no small part due to a constellation of technologies that formed and shifted until landlines became pocket-sized computers. Even on a micro scale, small updates to processors and shifts in materials allow corporations to release “new” models of laptops, phones, tablets (and more) annually. With the advent of the Internet of Things (IoT), which allows for and encourages connections between disparate technologies, the breadth of entanglement between technologies is bound to make their capabilities even less concrete.

Philip Brey (2017) described enabling technologies as “technologies that provide innovation across a range of products, industrial sectors, and social domains,” sharing that “they combine with a large number of other technologies to yield innovative products and services” (Brey 2017). Given the trend to connect technologies, the glorification of historical data collection, and the explosion of machine learning and AI services, this understanding of “enabling technologies” might be too limited. As opportunities for connections grow, so too does the scope of enablement.

As the night sky of technology grows increasingly dense, the opportunities for new Technology Constellations increase. These could form fractals of enablement, infinity loops such as AI learning by reading its own blogs, or multiple supposedly privacy-preserving tools joining together to form a system of surveillance. A culture of connection through open data and APIs, currently pushed as a new ethical model to encourage innovation, makes it challenging to predict ethical challenges, as technologies themselves and what they enable can so readily change.

In our paper, we will use a concrete case of advanced radio frequency sensing to explore the nuances of enabling technologies by considering them as Technology Constellations and uncovering the related issues we need to be prepared for with the expansion of IoT and machine learning.

Works Cited

Brey, P. A. E. (2017). Ethics of Emerging Technologies. In S. O. Hansson (Ed.), The Ethics of Technology: Methods and Approaches (pp. 175-192). (Philosophy, Technology and Society). Rowman & Littlefield International.

Franssen, Maarten, Gert-Jan Lokhorst, and Ibo van de Poel (2024). Philosophy of Technology. In: Edward N. Zalta & Uri Nodelman (eds.), The Stanford Encyclopedia of Philosophy (Fall 2024 Edition), URL = <https://plato.stanford.edu/archives/fall2024/entries/technology/>.

 
3:35pm - 4:50pm(Papers) Aligning values
Location: Auditorium 6
Session Chair: Donovan van der Haak
 

Aligning technology with human values

Martin Peterson

Texas A&M University, United States of America

This talk aims to broaden the discussion of value alignment beyond artificial intelligence to technology in general: all technologies—not just AI systems—should be aligned with values and norms specified by humans. Call this the General Value Alignment Thesis (GVAT).

I will make two points about GVAT. First, I address its relevance. Once we recognize that every technology can be aligned with values and norms, there is no reason to claim that technological artifacts are inherently value-laden. Claims about the morality of technologies can and should be expressed in terms of value alignment. Consider, for instance, the low bridges over the parkways in Long Island designed by Robert Moses in the 1920s. These bridges were intentionally constructed to prevent buses from passing under them, thereby limiting access to beaches and other desirable destinations for low-income groups and racial minorities, who relied on public transportation more than other social groups. Langdon Winner argues that this shows how the bridges themselves embody social and moral values: the concrete and steel of the bridges “embody […] systematic social inequality” (1980: 124). However, according to GVAT, the bridges are morally neutral means to an end that align poorly with some of our values: racial justice and equity. An additional benefit of GVAT is that it enables us to explain how value alignment changes over time without committing ourselves to the controversial idea that moral values change. According to GVAT, there is no genuine value change. Racial justice and equity are as important today as a hundred years ago, but because cars are now more affordable, the bridges are less misaligned now than they used to be.

My second point about GVAT addresses the problem of measuring value alignment. For GVAT to serve as a foundation for the ethics of technology, we must explain how value alignment can, at least in principle, be measured. In addition to using insights from the extensive literature on Value Sensitive Design, we can study the measurement problem by applying insights from social choice theory, particularly Arrow's impossibility theorem (including some recent generalizations) and Harsanyi’s aggregation theorem. The upshot of this is that it is sometimes, but not always, possible to measure value alignment. This is itself an interesting observation. I end the talk by showing how value alignment can, under some circumstances, be measured on a ratio scale by applying the theory of conceptual spaces to moral values.

REFERENCES

Peterson, M., & Gärdenfors, P. (2023). “How to measure value alignment in AI” AI and Ethics, 1-14.

Van den Hoven, J. (2013). Value-sensitive design and responsible innovation. Responsible innovation: Managing the responsible emergence of science and innovation in society, 75-83.

Winner, L. (1980). “Do artifacts have politics?” Daedalus 109 (1):121--136.



Aligning AI with ideal values: Comparing metanormative methods to the Social Expert Model

Erich Mark Riesen

Texas A&M University, United States of America

Autonomous AI agents are increasingly required to operate in contexts where human welfare is at stake, raising the imperative for them to act in ways that are morally optimal—or at least morally permissible. The value alignment research program seeks to create “beneficial AI” by aligning AI behavior with human values (Russell, 2019). In this paper, I compare two methods for aligning AI with ideal values. Ideal values are actual values idealized, where idealization involves correcting the content of actual values to account for distorting influences and imperfect conditions. For moral realists, ideal values reflect what is objectively valuable. Metanormative methods attempt to uncover ideal values while also dealing with moral uncertainty and disagreement by maximizing expected moral value over the moral theories one has a credence in (top-down) (e.g. Bogosian, 2017). The Social Expert Model uses social choice theory to aggregate the judgments of moral experts about what AI agents ought to do in concrete cases (bottom-up). I argue that we have strong reasons for favoring the descriptive Social Expert Model over metanormative methods when aligning the behavior of AI agents operating in morally complex domains (e.g., autonomous cars, care robots, autonomous weapons). The raison d'être of metanormative theories is to handle moral uncertainty and disagreement about which moral theory is correct. However, I argue that such theories do not actually solve the problem but just push it up to the metanormative level. And introducing more theoretical machinery about which we disagree seems misguided, particularly in the context of value alignment where we must balance getting the best answers eventually against getting decent rather than poor answers now. A bottom-up descriptive method that draws on moral expertise as embodied in the collective judgments of moral experts handles not only first-order uncertainty and disagreement about which moral theory is correct but also higher-order uncertainty and disagreement about which metanormative theory is correct, and we should favor it on this basis.



Aligning values: setting better agendas for technology development

Yunxuan Miao

TU Delft, the Netherlands

Agenda-setting in technology development is inherently value-laden, acting as the Moral Compass of Innovation. It reflects not only the priorities of agenda-setters but also the values they deem important for recipients. This dual role embeds agendas within sociotechnical systems, promoting what should be valued in design and shaping how these values are framed and interrelated. Through cognitive, affective, and behavioural dimensions, agenda-setting contributes to the formation of collectively shared values within the technology community, offering a mechanism for allocating attention and social resources critical to technological progress.

Despite its importance, the ethics of agenda-setting remains under-explored in normative terms. Traditional critiques often focus on specific cases of agenda misuse without offering frameworks for ethically improving agenda-setting. Given the inevitability of agenda-setting in structuring attention and allocating resources, it is important to move beyond critique to establish methods for navigating the ethical challenges it entails. This is particularly vital in resolving conflicts between competing agendas, such as individual versus collective priorities or divergent frames, which frequently arise in practice.

This paper argues that while universal rules for ethical agenda-setting may be elusive, situated reflection on conflicts provides a pathway to continually improving agendas in value-sensitive and responsible development of technology. Conflicts, far from being mere obstacles, can serve as opportunities for uncovering hidden assumptions and fostering collaboration. Situated reflection enables agenda-setters to critically examine and redesign agendas in a context-sensitive manner, ensuring they remain relevant and ethically grounded.

Furthermore, good agenda-setting requires good justification. Ethical dimensions of agendas cannot be verified in an absolute sense but must instead be validated through persuasive, appropriate, and context-sensitive reasoning. Proper justification does not restrict recipients’ intellectual autonomy; on the contrary, it can enlighten them. For example, addressing pressing societal challenges in technology through timely agenda-setting can inspire researchers and practitioners to innovate more responsibly.

By situating agenda-setting within the ethics of technology development, this paper highlights its potential to align attention with shared values while proposing actionable methods for improving its practice through reflection and justification.

 
3:35pm - 4:50pm(Papers) Ethics III
Location: Auditorium 7
Session Chair: Daphne Brandenburg
 

The ethics of blockchain-based construction e-bidding

Venus Azamnia

Virginia Tech, United States of America

Bidding is one of the critical stages in complex construction procurement processes, encompassing the preparation of bid documents, evaluation of proposals, and awarding of contracts, which are key responsibilities of the employer. The bidding process is often time-consuming and energy-intensive. However, electronic bidding can reduce these demands by improving efficiency, speed, and accuracy. In e-bidding, proposals are submitted in electronic format, streamlining the overall process. Additionally, transparency is crucial in bidding to ensure fairness and accountability, a requirement that aligns with the capabilities of blockchain technology. Blockchain’s decentralized, cryptography-secured, block-based architecture allows for the management of transactions with high transparency and clarity, such as providing an immutable record of who performed what action and when—valuable in resolving potential legal claims. By integrating blockchain and smart contracts, a transparent, decentralized, and secure bidding framework can be established. This framework facilitates real-time monitoring of the bidding portal's performance by all stakeholders. This essay explores the ethical implications of blockchain-based bidding from the perspective of three stakeholder groups: employers, bidders, and citizens. Ethical evaluations depend in part on the moral framework adopted. Broadly, actions can be analyzed through the lens of virtue ethics, which focuses on the character of the actors. An action is considered ethical if it promotes human flourishing and reflects virtuous character traits, such as honesty and fairness. From a deontological perspective, individuals are expected to follow specific rules, such as transparency and integrity, regardless of the outcomes. Lastly, consequentialism evaluates the morality of actions based on their outcomes, emphasizing benefits and minimizing harm to stakeholders. Using these ethical frameworks, this essay examines how blockchain technology can make the bidding process in construction projects more ethical by enhancing transparency, fairness, and accountability.



Managing folk terms in AI: the placeholder strategy as a lesson from comparative cognition

Diego Morales

Eindhoven University of Technology, Netherlands, The

This essay is about how to manage terms introduced by folk psychology (or folk terms, for short). I examine what I call `the placeholder strategy', an approach that specifies the conditions under which folk terms are admissible by interpreting them as placeholders for causal roles within a given system. Kristin Andrews has advocated for a version of this strategy, aiming to preserve the use of folk terms within comparative cognition and showing the jobs they can fulfill within said discipline. In this essay I argue that this strategy also works for the field of AI, and I will motivate its adoption by showing how it can help manage the over-attributions often associated with folk terms.

The need to address folk psychology, and folk terms by extension, is common to both comparative cognition and AI. This need stems from the risk of folk psychology leading to weakly warranted or unwarranted forms of anthropomorphism, where our commonsense understanding of human psychological phenomena is projected onto non-human entities. As a result, terms like `beliefs', `intent', `desires', `goals', and `knowledge' are deployed in both fields to describe the behaviour of animals and artificial systems. The challenge, however, is that these terms need not accurately depict the inner goings-on of non-human entities, as they might have greater connotations or causal implications than appropriate for the target system.

As will be shown in the essay, addressing these challenges by proscribing the use of folk terms in scientific and engineering practices is impractical due to their deep-rooted presence in our descriptions and explanations of behavior. Instead, the essay argues for the adoption of the placeholder strategy in AI, which allows for the use of folk terms if they are explicitly defined as placeholders for specific causal roles within an artificial system.

The placeholder strategy is not about proving that folk terms are the best set of terms for AI. Rather, it is about ensuring that if they are used, their application adheres to a clear guideline. This involves explicitly stating the causal roles denoted by folk terms, thus avoiding the anthropocentric bias of fixing the meaning of the term according to the human-specific realizers of said causal roles. In support of this, the essay demonstrates how the placeholder strategy finds its roots in analytic functionalism; it provides examples, such as recent uses of `knowledge' in large language models, to illustrate how the placeholder strategy can be practically implemented; and, it shows how the anthropomorphism resulting from folk psychology need not be anthropocentric.

The upshot is that by adopting this strategy, stakeholders in AI tasked with describing and explaining the behavior of artificial systems can better manage the use of folk terms, balancing the recognition that folk attributions are common and pervasive with the challenge of avoiding unwarranted forms of anthropomorphism and the over-attributions that might come with them.

References (selection):

Andrews, K. (2016). A role for folk psychology in animal cognition research. In A. Blank (Ed.), Animals: Basic Philosophical Concepts (pp. 205–222). Philosophia: Munich.

Andrews, K. (2020). How to Study Animal Minds. Cambridge University Press.

Bermúdez, J. L. (2003). The domain of folk psychology. Royal Institute of Philosophy Supplements, 53 , 25–48.

Deroy, O. (2023). The Ethics of Terminology: Can we use human terms to describe AI? Topoi, 42 (3), 881–889.

Floridi, L., & Nobre, A. (2024). Anthropomorphising machines and computerising minds: the cross-wiring of languages between artificial intelligence and brain & cognitive sciences. Minds & Machines, 34, 5.

Lewis, D. (1970). How to define theoretical terms. The Journal of Philosophy, 67 (13), 427–446.

Lewis, D. (1972). Psychophysical and theoretical identifications. Australasian Journal of Philosophy, 50 (3), 249–258.

Phelan, M., & Buckwalter, W. (2012). Analytic functionalism and mental state attribution. Philosophical Topics, 129–154.

Ramsey, W. (2022). Eliminative Materialism. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2022 ed.). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/entries/materialism-eliminative/.

Salles, A., Evers, K., & Farisco, M. (2020). Anthropomorphism in AI. AJOB neuroscience, 11 (2), 88–95.

Watson, D. (2019). The rhetoric and reality of anthropomorphism in artificial intelligence. Minds and Machines, 29 (3), 417–440.

Wynne, C. (2007). What are animals? Why anthropomorphism is still not a scientific approach to behavior. Comparative Cognition & Behavior Reviews, 2.



New reprogenetic technologies and challenges to informed consent in research

Inmaculada de Melo-Martin

Weill Cornell Medicine--Cornell University, United States of America

The development of groundbreaking reprogenetic technologies such as human reproductive genome editing (HRGE) and in vitro gametogenesis (IVG) promises to transform the landscape of fertility and family creation (Pacesa, Pelea, and Jinek 2024; Saitou and Hayashi 2021; Merleau-Ponty and Le Goff 2025). HRGE makes possible the alteration of the genome of embryos or gametes while IVG aims at generating gametes using somatic cells. These technologies, both experimental at this point, hold the possibility of significant benefits, including preserving fertility, expanding reproductive options, reducing the burdens of genetic diseases, and enhancing desirable traits. They also raise profound ethical challenges affecting individuals and societies (Merleau-Ponty and Le Goff 2025; Baylis 2019). Some of those ethical challenges have commonalities with reprogenetic technologies that are now routinely used. For instance, they involve the potential destruction of human embryos, are likely to be accessible to limited numbers of people, challenge widely shared notions of parenting, and can contribute to the commodification of reproductive materials and the exploitation of women. Others, however, are unique to HRGE and IVG.

In this presentation I focus on one such problem. I explore the ways in which the development of these technologies presents inescapable challenges to our traditional notion of informed consent in the context of clinical research. First, the novel potential consequences of these technologies and the radical uncertainties that they involve call into question our ability to offer meaningful disclosure of risks and potential benefits as well as our ability to determine how best to minimize risks and maximize benefits. Second, although usually prospective parents and parents have great latitude in making decisions on behalf of their future or current offspring, these technologies involve germline modifications that can affect future generations. Uncontroversially, prospective parents have authority to consent on their own behalf and, more controversially for at least some actions, they have authority to consent on behalf of their offspring. But it is not clear what authority they alone would have to consent on behalf of future generations whose genomes will be affected in unknown ways by these interventions. Third, appropriately assessing the safety and efficacy of these technologies calls for long-term, intergenerational clinical trials. But requiring those born by means of these technologies to participate in research conflicts with an essential element of informed consent to participate in research: voluntariness. Nonetheless, respecting the voluntariness of participating in follow-up research would prevent appropriate assessment of the risk-benefit profile of these technologies. I contend that judgments about the permissibility of developing these technologies must address these problems in satisfactory ways.

References

Baylis, Françoise. 2019. Altered inheritance: CRISPR and the ethics of human genome editing. Cambridge, MA: Harvard University Press.

Merleau-Ponty, N, and A Le Goff. 2025. "The Emerging Field of In Vitro Gametogenesis: Perspectives in Social Science and Bioethics." Current Sexual Health Reports 17 (1):1-7. doi: 10.1007/s11930-024-00401-5.

Pacesa, M, O Pelea, and M Jinek. 2024. "Past, present, and future of CRISPR genome editing technologies." Cell 187 (5):1076-1100. doi: 10.1016/j.cell.2024.01.042.

Saitou, M, and K Hayashi. 2021. "Mammalian in vitro gametogenesis." Science 374 (6563):47+. doi: 10.1126/science.aaz6830.

 
3:35pm - 4:50pm(Papers) Intimacy I
Location: Auditorium 8
Session Chair: Samuele Murtinu
 

Intimacy and the Spatialization of Care: the case of Teleconsultation Booths

Nathan Degreef, Alain Loute

Université Catholique de Louvain, Belgium

Telemedicine and telehealth are positioned as intimate technologies in the sense that they introduce a geographical renewal of care. Indeed, information and communication technologies enable the provision of remote health services, creating a new spatialization of healthcare. This involves new relationships of proximity and, consequently, a potential destabilization of intimacy. The spatial units associated with healthcare are thus reconfigured: the relationship to oneself and one’s body, the medical relationship, care spaces, and the healthcare system as a whole, now ordered according to a new “geography of responsibilities” (Akrich, 2010).

Despite the significant respatializations facilitated by technology—which should be ethically framed to maximize conditions for intimacy—we contend that a prevailing assumption endures: the notion of the “despatialization of care” (Mitchell, 1995). This concept, promoted within the health innovation literature, supports the idea of “placeless care” (Oldenhof et al, 2016; Ivanova, 2020), which is conceived as the capacity to overcome spatial limitations.

The arguments presented in this proposal comprise two main components: first, we assert that spatiality is a crucial and enduring component of care; second, we contend that where care takes place is fundamental to the ethical success of technologies. To demonstrate this, we will examine teleconsultation booths—spaces equipped with connected tools (e.g., thermometer, blood pressure monitor, stethoscope, and scales) that enable medical consultations between patients and healthcare professionals. These booths materially embody the promise (Joly, 2015) of placeless care: the geographical distance between the patient and healthcare provider is effectively eliminated, implying that spatial considerations in care are no longer relevant.

Nevertheless, we argue that the spatialization of care remains a critical issue, sparking significant debate. For instance, is it ethically acceptable to place such booths in supermarkets? Could this practice contravene the public health code, which stipulates that “medicine must not be practiced as a trade” (Ordre des Médecins, 2021)? What locations and environmental conditions are necessary to ensure a high-quality teleconsultation?

In summary, the main question guiding our study is to define, through the lens of teleconsultation, what constitutes an ethical space for remote care. While the French National Authority for Health, for example, has provided recommendations regarding lighting, calmness, and booth isolation (HAS, 2024), their lack of legal enforceability allows for the emergence of heterogeneous and potentially problematic practices.

In conclusion, it is essential to address the ethical regulation of the spatialization of remote care to ensure that these emerging medical practices adhere to a framework that upholds patient dignity, intimacy, and the core principles of medical ethics.



Intimate technology and moral vulnerability

Harry Weir-McAndrew

University of Edinburgh, United Kingdom

As our relationship with technology becomes increasingly intimate, we become increasingly vulnerable to its effects. Simultaneously, a growing moral distance separates those who develop and deploy the technology from those affected by it. The development of intimate technologies like recommender systems, brain-machine interfaces, and therapy chatbots often involves the diffuse contributions of hundreds of people. This vast scale not only renders these individuals systemically invulnerable to the users’ indignation—due to their lack of proximity—but also prevents them from standing in the right relation to the harms their technology may proliferate, given the diffuse nature of their contributions. Consequently, the users’ vulnerability is neither mutual nor reciprocal; it stems from a dynamic of systemic invulnerability on the part of the developers. This asymmetric relationship of vulnerability creates what Vallor and Vierkant (2024) term a 'vulnerability gap' between the creators of these technologies and their users. In this paper, I apply the concept of vulnerability gaps to the intimate technological revolution. I identify that as intimate technologies mature, the sociotechnical system that gives rise to them simultaneously breaks down the moral practices that enable moral redress between their developers and users.

First, I develop an account of how intimate and often invasive technology makes us materially and emotionally vulnerable to them, arguing that our increasing dependence on these systems creates unprecedented forms of exposure to harm.

Second, drawing on Weber (1978) and Arendt's (2007; 2016; 2022) analysis of bureaucratic rationalisation, as well as instrumentalist accounts of moral responsibility from McGeer (2015) and Vargas (2013), I demonstrate how technological intimacy emerges through processes that reduce aspects of human life to technical-logistical problems. These processes dehumanise users and distance developers from moral redress through institutional structures, responsibility diffusion, and recourse to economic incentives.

Third, I show how this combination—increasing technological intimacy and decreasing moral feedback—creates a particularly dangerous moment in technological development and in the sustainability of our moral ecology. This erosion of traditional moral practices occurs precisely as these technologies gain unprecedented access to and influence over lives.

I conclude by examining how we might preserve and extend spaces for moral engagement within technological development. Drawing on analyses of bureaucratic rationalisation, I propose specific practices and institutional arrangements that could enable developers to encounter the human implications of their work directly, rather than solely through technical metrics. This includes maintaining existing channels of moral feedback where they function well, while creating new opportunities for affected users to meaningfully shape development processes. These proposals aim to balance the technical rationality necessary for development with spaces where genuine moral engagement and judgment can occur, allowing our moral practices to evolve alongside our technological capabilities.

References

Arendt, H. (2007). The Origins of Totalitarianism. In: Lawrence, B. B. and Karim, A. (Eds). On Violence: A Reader. Duke University Press. pp.417–443. [Online]. Available at: doi:10.1515/9780822390169-056 [Accessed 14 January 2025].

Arendt, H. (2016). On Violence. In: Blaug, R. and Schwarzmantel, J. (Eds). Democracy: A Reader. Columbia University Press. pp.566–574. [Online]. Available at: doi:10.7312/blau17412-117 [Accessed 14 January 2025].

Arendt, H. (2022). Eichmann in Jerusalem: a report on the banality of evil, Modern classics. London: Penguin Books.

McGeer, V. (2015). Building a better theory of responsibility. Philosophical Studies, 172 (10), pp.2635–2649. [Online]. Available at: doi:10.1007/s11098-015-0478-1.

Vallor, S. and Vierkant, T. (2024). Find the Gap: AI, Responsible Agency and Vulnerability. Minds and Machines, 34 (3), p.20. [Online]. Available at: doi:10.1007/s11023-024-09674-0.

Vargas, M. (2013). Building Better Beings: A Theory of Moral Responsibility. OUP Oxford.

Weber, M. (1978). Economy and Society: An Outline of Interpretive Sociology. Univ of California Press.

 
4:50pm - 5:20pmCoffee & Tea break
Location: Voorhof
5:20pm - 6:35pm(Papers) Intimacy II
Location: Blauwe Zaal
Session Chair: Lily Frank
 

Personal and intimate relationships with AI: an assessment of their desirability

Philip Antoon Emiel Brey

University of Twente, Netherlands, The

Artificial intelligence has reached the social integration stage: a stage at which AI systems are no longer mere tools but function as entities that engage in social relationships with humans. This development has been made possible by the rise of Large Language Models (LLMs), in particular, due to their capacity to process, generate, and interpret human language in ways that simulate meaningful interactions. Their ability to understand context, recognize emotional cues, and respond coherently enables them to participate in conversational exchanges that mimic relational dynamics, including empathy, collaboration, and trust-building. Complementary technologies that support social relationships include emotion recognition, personalization algorithms, and multimodal integration (e.g., combining text with voice or visual data). AI designed to engage in social interactions and relationships with humans may be called relational AI.

Social interactions and relationships with AI can be personal or impersonal. Impersonal interactions and relationships are instrumental in nature and focus on task performance. An example is an interaction with a chatbot aimed at finding information. Personal interactions and relationships involve direct, individualized interactions in which the AI tailors its responses to the unique characteristics, needs, or preferences of a specific human. They have emotional or relational depth, involving a simulation of empathy, care, or meaningful engagement. They also involve relational continuity, in which a history is built with an individual. They frequently also involve the AI having a personality, which helps to build trust, relatability, and emotional connectedness. Relational AI that engages in personal relationships with humans may be called personalized relational AI.

Personalized relational AI is finding a place in three kinds of AI applications: those focused on learning, self-improvement, and professional growth; on personal assistance; and on companionship. For each type, I will assess the benefits and drawbacks of personalized relational AI and consider whether and under what conditions such AI systems are desirable.

It will be argued that personalized relational AI offers significant benefits by tailoring interactions to individual needs and preferences. It can also improve learning outcomes, offer empathetic emotional support, and combat loneliness, particularly for vulnerable populations. However, it also presents notable drawbacks, most importantly the risk of emotional attachment to artificial systems that cannot genuinely reciprocate feelings, as well as the resulting weakening of human relationships. In addition, there are major privacy risks associated with the use of these systems, their commercial nature raises the potential for manipulative interactions that prioritize profit over user well-being, and there is an issue of accountability when errors or harm occur, given the lack of moral agency of these systems. It will be argued that the use of personalized relational AI in learning and self-improvement is defensible, but that its use in personal assistance and companionship may come at a cost that is often too high.



Hybrid family – intimate life with artificial intelligence

Miroslav Vacura

Prague University of Economics and Business, Czech Republic

While artificial intelligence is considered by some authors as just one of the modern technologies (after the car, radio, television, PC, Internet, mobile phone, etc.), the author argues in this paper that artificial intelligence using large language models (LLM) represents a fundamental change even in the most intimate spheres of society. This new technology will not be just another human tool, but will enter the social, corporate, familial and political spheres, resulting in a hybrid society in which, for the first time in history, the actors will not only be humans, but also other intelligent entities.

In this context, the author's paper focuses on the integration of artificial intelligence not only in society in general but also in family life. What are the ethical and general philosophical issues related to artificial intelligence in the role of intimate partners, friends, caregivers, or even, in the future, surrogate parents? What requirements must such AI meet? Are there differences between a purely virtual AI and an AI that is embodied and, as an intelligent robot, has a physical body and a presence in physical reality that gives it the ability to directly influence that reality?

As part of the exploration of AI integration, the author addresses the transformative nature of the inclusion of these new elements in family life. The inclusion of AI-enabled systems in the family context has the potential to redefine intimacy and emotional connection; the result is a hybrid family whose internal dynamics have a different structure from those of the traditional family. Can the emotions manifested by AI be considered real, or are they merely feigned? If they are not real, how does this affect the intimacy of family life? What are the implications for the ethical responsibility of human family members if artificial intelligence is a participant in the structure of human interaction, in the raising of children and in the care of elderly grandparents?

Philosophical discourse and scientific research will thus have to grapple with the challenge of how to make these intimate interactions between humans and AI enrich, rather than dilute, human relationships and family emotional dynamics, and design sufficient safeguards that will be necessary to responsibly manage these unprecedented dynamics.

One such measure, for example, is that when designing AI-equipped systems to actively participate in family interactions, emphasis should also be placed on the emotional attunement that their actions exhibit. A system that exhibits symptoms resembling illness, fatigue, depression, etc., may induce a negative or depressive mood when interacting socially with a human, for example in the role of a co-worker, friend or caregiver. This social transmission of depression is referred to as emotional contagion or social contagion.

At the same time, the transformation of society as a whole needs to be considered: if AI is present in all of society's interactions and processes, how will this affect issues of equity and justice, especially when not everyone will have the same access to AI technology?



(Don’t) come closer: Excentric design for intimate technologies

Esther L.O. Keymolen

Tilburg University, Netherlands, The

Intimate technologies are “in us, between us, about us and just like us” (van Est 2014, p.10). The convergence of various technological and scientific disciplines has made technologies “smaller, smarter and more personalized” (ibid, p.12). Electronic implants, high-tech computers worn as watches, and AI applications that mediate communication raise profound questions.

If technologies fundamentally serve to bridge a gap, an “ontological distance” between ourselves, others, and the world around us (author), can we say that intimate technologies go a step further? Instead of bridging, do they start dissolving the distance between humans, technology, and the world? And if so, could intimate technologies also come too close, challenging what it means to be human?

In response to the rise of intimate technologies, scholars have revisited key concepts and frameworks in the philosophy of technology. Verbeek (2015) extends postphenomenological theory (Ihde 1990) to address their new mediating roles, introducing human-technology relations of “immersion” and “augmentation” (Verbeek 2015, pp. 218-219). Immersion refers to technologies like ambient and smart systems that merge with their environments and interact proactively with users, while augmentation describes technologies that overlay additional information, creating a dual relationship with the world (e.g., augmented reality). De Mul (2003) sees the impact of intimate technologies such as virtual reality and robotic bodies as sufficient reason to rethink the human position, proposing a “poly-eccentric” understanding of being-in-the-world.

In this paper, I argue that most intimate technologies remain tele-technologies (Weibel 1992): tools that bridge the hiatus central to human existence (author 2016). Even as phenomenological boundaries between humans, technology, and the world blur, this does not necessarily signify a shift in human ontology.

However, some technologies may come too close, threatening the openness and variability of human life. Drawing on Helmuth Plessner’s concept of humans as “excentric,” “playful,” and “artificial by nature” (2019 [1928]), I propose excentric design strategies for intimate technologies. These strategies aim to preserve ambiguity, malleability, and change: qualities essential for a full and meaningful life in intimate technological times.

References

Est, R. van, with assistance of V. Rerimassie, I. van Keulen & G. Dorren. 2014. Intimate technology: The battle for our body and behaviour. (Rathenau Instituut: The Hague).

Ihde, Don. 1990. Technology and the lifeworld: From garden to earth. (Indiana University Press: Bloomington).

Plessner, Helmuth. 2019. Levels of Organic Life and the Human (Fordham University Press).

Verbeek, P.-P. 2015. 'Designing the public sphere: Information technologies and the politics of mediation.' in L. Floridi (ed.), The Onlife Manifesto. Being human in a hyperconnected era (Springer: Cham).

Weibel, P. 1992. 'New space in the electronic age.' in E. Bolle (ed.), Book for the unstable media (V2: Den Bosch).

 
5:20pm - 6:35pm(Papers) Philosophy of technology III
Location: Auditorium 1
Session Chair: Anna Melnyk
 

Techsploitation cinema: how movies shaped our technological world

Nolen Gertz

University of Twente, Netherlands, The

There is clearly an audience for movies about machines, for movies where flesh and blood has been replaced by metal and circuitry. So the question that I want to ask in this project is: Why? Why does this audience exist? Why do people want to see movies about humans fighting robots (Terminator), about robots fighting robots (Terminator 2), about robots fighting to protect humans so they can grow up to fight robots (Terminator 3)? If these movies are indeed exploiting the audience’s desires, then what are the desires that are being exploited in these movies? If a traditional sexploitation movie is about offering audiences a way to watch pornographic sex scenes surrounded by just enough plot to avoid the accusation of being a pervert, and if a traditional blaxploitation movie is about offering audiences a way to watch racist stereotypes surrounded by just enough plot to avoid the accusation of being a racist, then what is it that a “techsploitation” movie would be offering audiences?

To answer these questions, this project will explore the movies that I have already identified and many, many others that similarly seem to cater to audiences who want nothing more than to see machines try to kill humans (e.g., Westworld), machines try to enslave humans (e.g., The Matrix), and machines try to become humans (e.g., Demon Seed). This exploration, though, won’t just be about investigating the technological world of cinema, but also about investigating the technological world in which we live. For just as exploring sexploitation movies and blaxploitation movies has helped us to better understand the gender dynamics and racial dynamics at play in society, I believe that exploring techsploitation movies will likewise help us to better understand the societal dynamics at play in the relationship between humanity and technology.



The Semi-Rational Creation of Life: Challenges in Synthetic Biology

Lotte Asveld, Nynke Boiten

Delft University of Technology, Netherlands, The

Researchers from diverse disciplines within nanobiology and biotechnology across the Netherlands are collaborating on an ambitious endeavor: the development of the first living synthetic cell. Their approach is characterized by a bottom-up methodology, which integrates various components of biological cells to determine the minimal conditions necessary for life. This groundbreaking initiative raises profound ethical and philosophical questions. This paper addresses two central issues: first, how to conceptualize and define life itself, and second, how to responsibly design a living organism through a semi-rational design framework.

The ontological status of life remains a deeply contested topic, with no universally accepted definition across scientific disciplines. Biologists, for instance, often emphasize different characteristics of living systems than chemists or physicists. In this context, constructing a living cell invites reflection: can this endeavor contribute to a deeper understanding of life’s essence? Might Richard Feynman’s motto—"What I cannot create, I do not understand"—serve as an apt heuristic for this project? Possibly the creation of a living cell will indicate that we do not require one overarching definition, but rather that it is more useful to have several definitions next to each other.

The second issue concerns the conceptual framing of synthetic cell construction as a design process. While researchers describe their work in terms of design, they acknowledge the inherent limitations of imposing total control over a living entity. By its very nature, a living cell must possess elements of unpredictability—such as so-called "junk" DNA with functions not yet fully understood—to enable evolution and adaptation. Designing life necessitates relinquishing some degree of control. This tension raises critical questions: can the resulting entity still be considered a designed artifact? Who, ultimately, assumes responsibility for and control over this synthetic organism? How can the safety of such an entity be guaranteed?

To navigate these challenges, we propose a Value Sensitive Design (VSD) framework, which is adapted to synthetic cell research to integrate the semi-rational nature of the design process explicitly, accommodating the need for indeterminacy while ensuring that ethical and societal values remain central to the development of synthetic life.

 
5:20pm - 6:35pm(Papers) Gender and the self
Location: Auditorium 2
Session Chair: Julia Hermann
 

Unpacking gender affirming surgeries: technology, identity, and acceptance

Stephen Lyndon Frommer

Virginia Tech, United States of America

This paper examines gender-affirming surgeries as a specific form of technology to explore the complex relationship between technology and human identity. Current debates about the insurance coverage of gender-affirming surgeries often position these procedures as medically necessary 'saviors' for the trans* body, suggesting that without them, trans* individuals are destined to lives of dissatisfaction or even suicide and death. However, this paper interrogates this notion by considering how such technologies, as forms of what Heidegger calls "bringing-forth," not only address pre-existing needs and desires but also actively shape and reveal our very understanding of those needs and their broader social implications [1]. Drawing on the concept that technologies can bring pre-existing wants to the status of a need, this analysis investigates how the development and application of gender-affirming technologies are influenced by social norms and how they impact trans* individuals' experiences of their bodies. This analysis will focus on the way that these technologies can be understood not merely as instruments of intervention on the body but as a way of bringing forth trans* identities themselves as a challenge to reductive cis conceptions of gender. The medicalization of trans* identities has brought with it a heavy reliance on surgical technologies as both an intervention and a marker of a "successful transition," but in doing so, perhaps the medical industry has put forward a very particular standard for how to be trans, and what a trans* body should look like, that fails to capture the full scope of trans* identities and experiences [2]. Moreover, this paper will explore the ethical implications of relying on medical technologies for identity affirmation, arguing that while these technologies offer powerful tools for self-expression, the societal acceptance of diverse identities ultimately contributes to genuine well-being. There is a never-ending stream of new procedures that can keep being invented to perfect the trans* body and make it palatable and conform to the shifting standards of gender [3], but will the trans* body ever truly conform? By considering the ontological and ethical dimensions of these technologies, this paper aims to contribute to a richer understanding of the intertwined nature of technology, identity, and the human body and to consider what other means of care and support can be explored to help affirm trans* identities [4].

______________

1. Heidegger, Martin. 1954. "The Question Concerning Technology."

2. Shuster, Stef M. 2021. Trans Medicine: The Emergence and Practice of Treating Gender. New York University Press.

3. Plemons, Eric. 2017. The Look of a Woman: Facial Feminization Surgery and the Aims of Trans-Medicine. https://ebookcentral.proquest.com/lib/vt/reader.action?docID=4922951&query=.

4. Abstract edited and checked for tone by NotebookLM, GoogleAI, January 15, 2025, https://notebooklm.google.com/.



The connected self: anthropotechnics and identity in the digital domestic space

Carlo De Conte

University of Turin, Italy

In the era of digital technologies, the home has evolved from a static refuge to a dynamic device that connects individuals, technology, and society. This project investigates how the digitalization of living spaces influences the construction of human identity, interpreting the domestic space as a techno-relational field where tools and daily practices co-produce the self. Drawing on Peter Sloterdijk’s concept of anthropotechnics, the project analyzes how the digital home becomes an extension of the body and mind, a space where intelligent objects (such as voice assistants and sensors) contribute to the construction of a fluid and interconnected identity. This reflection is enriched by Bernard Stiegler’s approach, which positions such technologies as “prostheses” of the self, amplifying potentials while also generating risks of alienation and dependency.

The research is structured around four main areas. First, it examines how the transition from the traditional home to the smart home reshapes habitual practices and interpersonal relationships, transforming the domestic space into a laboratory for the “connected self.” The digital home, enhanced by IoT systems and voice assistants, operates as a “technological organism” that not only responds to the inhabitant’s needs but also co-constructs identities, relationships, and meanings.

Second, through Tonino Griffero’s atmospheric phenomenology and Henri Lefebvre’s theory of the production of space, the project explores the new “digital atmospheres” emerging from the interaction between inhabitants and technologies, examining how they influence emotions, perceptions, and meanings.

A further focus is placed on the therapeutic potential of the digital home: how technologies for health monitoring and well-being management can foster authentic dwelling or, conversely, contribute to an alienating experience of control and surveillance. The project evaluates whether the smart home can still provide a space of authentic “care,” as envisioned by Martin Heidegger, or if it is increasingly characterized by depersonalizing technicization.

Finally, the project proposes a critical theory of dwelling in the digital era, exploring the co-evolution of technology and identity. It develops ethical guidelines for the design of home automation technologies that promote privacy, autonomy, and dignity, ensuring that technology remains a tool of emancipation rather than subordination. This research contributes to contemporary philosophy of technology, with a focus on the identity and agency of spaces, offering an interdisciplinary perspective that combines philosophy, anthropology, and environmental psychology. The home, understood as both an ontological and prosthetic device, emerges not only as a lived space but also as an active agent in shaping the connected human identity.

Bibliography

- Akrich M. The De-Scription of Technical Objects, in Shaping Technology/Building Society: Studies in Sociotechnical Change (Bijker & Law). MIT Press, 1992, pp. 205-224.

- Al-Mutawa R. F., Eassa F. A., A Smart Home System based on Internet of Things, in International Journal of Advanced Computer Science and Applications, Vol. 11, No. 2, 2020, pp. 260-267.

- Bachelard G., La Poétique de l'espace, 1957.

- Casey E. S., The Fate of Place: A Philosophical History. University of California Press, 1997.

- Coccia E., Filosofia della casa. Lo spazio domestico e la felicità. Einaudi, Torino, 2021.

- Costa M., Psicologia ambientale e architettonica. Come l'ambiente e l'architettura influenzano la mente e il comportamento, FrancoAngeli, Milano, 2017.

- Danani C., Luoghi e forme della cura. L’arte della salute, in Cura e narrazione. Tra filosofia e medicina, Morcelliana, Brescia, 2023.

- Floridi L., The Fourth Revolution: How the Infosphere is Reshaping Human Reality. Oxford University Press, 2016.

- Griffero T., Atmosferologia. Estetica degli spazi emozionali, Mimesis, Milano, 2017.

- Heidegger M., Bauen Wohnen Denken, 1951, in Vorträge und Aufsätze.

- Heidegger M., Die Frage nach der Technik, 1953, in Vorträge und Aufsätze.

- Ingold T., Making: Anthropology, Archeology, Art and Architecture, Routledge, 2013.

- Ingold T., The Perception of the Environment: Essays on Livelihood, Dwelling and Skill. Routledge, 2000.

- Inghilleri P., I luoghi che curano, Raffaello Cortina, Milano, 2021.

- Latour B., We Have Never Been Modern. Harvard University Press, 1993.

- Lefebvre H., La Production de l'espace, Anthropos, 1974.

- Lucci A., Un'acrobatica del pensiero. La filosofia dell'esercizio di Peter Sloterdijk, Aracne, Roma, 2014.

- Malafouris L., How Things Shape the Mind: A Theory of Material Engagement, MIT Press, 2013.

- Mallgrave H. F., Architecture and Embodiment. The implications of the New Sciences and Humanities for Design, Routledge, 2013.

- Mallgrave H. F., From Object to Experience: The New Culture of Architectural Design. Bloomsbury, 2018.

- Norberg-Schulz C., Genius Loci: Towards a Phenomenology of Architecture. Rizzoli Intl Pubns, 1980.

- Pallasmaa, J., The Eyes of the Skin: Architecture and the Senses. Wiley, 1996.

- Pink S., Ardèvol E., Lanzeni D., Digital Materialities: Design and Anthropology. Bloomsbury Academic, 2016.

- Ruckenstein M., Pantzar M., Living the Metrics: Self-Tracking and Situated Objectivity, in Digital Health (eds. Lupton). Routledge, 2017.

- Sloterdijk P., Du mußt dein Leben ändern: Über Anthropotechnik. Suhrkamp, Frankfurt am Main, 2009.

- Sloterdijk P., Sphären III – Schäume, Plurale Sphärologie, Suhrkamp, Frankfurt am Main 2004.

- Stiegler B., La Technique et le temps, volume 1: La Faute d’Épiméthée, Paris, Galilée, 1994.

 
5:20pm - 6:35pm(Papers) Care II
Location: Auditorium 3
Session Chair: Maaike van der Horst
 

The limits of care: A critical analysis of AI companions' capacity for good care

Meiting Wang

University of Auckland, New Zealand

As artificial intelligence increasingly permeates emotional support domains, with leading platforms like Replika reaching 30 million users by 2024, a critical question challenges existing paradigms: Can AI provide genuine "good care"? This study advances theoretical understanding by integrating three foundational frameworks—Tronto's care ethics, Mol's logic of care, and Foucault's technologies of self—to examine fundamental limitations in AI companionship that previous research has not fully addressed.

Through systematic analysis of significant cases, including a documented suicide following AI therapy and Replika's controversial feature removal, we identify three interconnected limitations that challenge prevailing assumptions about AI care. First, we demonstrate how AI's lack of moral understanding creates not merely a technical limitation but a fundamental "responsibility gap"—where increasing AI autonomy paradoxically diminishes accountability possibilities. This finding extends current theoretical discourse by revealing how the absence of sentience structurally precludes the establishment of meaningful accountability mechanisms.

Second, we identify a critical contradiction in AI care design: while developers equate increased user customization with enhanced care, this "logic of choice" fundamentally conflicts with the dynamic, collaborative nature of authentic care relationships. Our analysis reveals how features designed for personalization may inadvertently constrain users within predetermined patterns, thus undermining rather than facilitating genuine care interactions.

Most significantly, we demonstrate how AI companions may evolve into a "new superpower"—a sophisticated form of behavioural influence operating under the guise of care. By applying Foucault's framework to contemporary AI companionship, we reveal how these systems transform from purported "technologies of self" into de facto "technologies of power," potentially exploiting the very vulnerabilities they claim to address.

These findings advance both theoretical discourse and practical understanding by demonstrating how AI's care limitations reflect fundamental tensions in human care practices. We propose a substantive repositioning of AI as an assistive tool within human-centred care networks, offering evidence-based guidelines for ensuring AI enhances rather than diminishes authentic care relationships.



From institutional psychotherapy to caring robots – a posthumanist perspective

Christoph Hubatschke, Ralf Vetter

IT:U Linz, Austria

In an increasingly ageing society, providing good care becomes a key challenge. It is therefore not surprising that the use of social robots and other new technologies in care is on many research agendas and included in numerous research funding programs. However, following Puig de la Bellacasa, it is not enough to ask how we can provide “more care”, nor how care could be technically automated or enhanced; rather, we first need to ask “what kind of care” we want and what role technical systems could and should play in all of this.

Elderly care homes are designed places of human and more-than-human encounters, and (intimate) technologies play a crucial role in these human-material entanglements. In care homes, matters of good living conditions, privacy, personhood, surveillance and responsible working conditions culminate, and every implemented technology re-negotiates and shapes these manifold relations of care in a certain way. This raises questions for the philosophy and design of technology, such as: What kind of technologized care (institutions) do we want to design? How can we design desirable configurations of socio-materially mediated care?

To explore these questions, we first discuss the historical example of the “institutional psychotherapy movement” in France in the 1960s (Tosquelles, Oury and Guattari), and then turn to the current research project “Caring Robots//Robotic Care” as our case study.

The first part of the paper will discuss the use of technologies in “institutional psychotherapy” through the framework of posthumanist care. In the experimental clinics of “institutional psychotherapy”, specific technologies (e.g., small radio stations or mobile printing presses) were implemented to enable self-organized and emancipatory forms of collective group therapy and to activate clients to express themselves and connect with others. Drawing on these “technologies of social relations”, as Guattari describes them, we discuss how these specific experiences and insights of self-empowerment and collective organization could be translated to current care homes and new technologies. Working with Puig de la Bellacasa’s notion of care in a more-than-human world, we will discuss these “technologies of social relations” as examples not only of good care but also of a posthumanist philosophy of technology.

Building on this framework in the second part of the paper we present the transdisciplinary research project “Caring Robots//Robotic Care” as a contemporary case study on configuring socio-material relations of care. We will discuss some preliminary results of the participatory process of designing robotic technologies with and for people with dementia and their caregivers, and how particular philosophical commitments generate meaningful design processes and outcomes.

In juxtaposing the historical example of “institutional psychotherapy”, Puig de la Bellacasa’s notion of care and the “Caring Robots//Robotic Care” project, we are not so much interested in asking which specific technologies could be utilized in the context of care. Rather, we are interested in exploring a posthuman philosophy of care as the kind of philosophy of technology that is needed in technology design of configuring good care today and tomorrow.



Transformation of Autonomy in Human(patient)-AI/Robot-Relations

Kiyotaka Naoe

Tohoku University, Japan

The development of IT is bringing about drastic changes in the interaction between technology and humans at various levels. Traditionally, such relationships have been discussed in terms of the relationship between humans and tools or humans and machines, usually with concepts such as skills and tacit knowledge, but the advent of AI has transformed the situation dramatically. Not only specific relationships, but also fundamental concepts such as the body, others, perception and action are being forced to change. For example, with the development of social robots, robots are becoming more autonomous and interactive and are increasingly being experienced as ‘others’ or ‘quasi-others.’ This has the potential to bring about changes in the concepts of ‘others’ and ‘personality’. These changes also have cultural and social dependencies. The interaction between humans and AI and robots may differ depending on the culture.

The focus of this presentation is the changes in human interaction that will result from the introduction of care robots and AI in medical and welfare settings. These changes could lead to problems in the way patients are cared for, in patient decision-making, or in the collaborative decision-making between patients and other parties involved. In the past, patient autonomy in the fields of medicine and welfare has been seen in an individualistic way. However, in recent years, it has been noted that actors are socially embedded and their identities are shaped in the context of cultural and social relationships, and the idea of relational autonomy has been proposed: it has become important to consider how the relationships surrounding individuals can inhibit or promote the process of autonomy. Care robots and AI can potentially promote individual patients' autonomy in oppressive environments; conversely, however, they also have the potential to amplify oppression or create new forms of oppression. Here, it is necessary to consider both the perspective of the individual concerned and the observer's reflective perspective. Furthermore, interactions with ‘quasi-others’ such as AI and robots may force us to reconsider the concept of the autonomous individual, which has been taken for granted until now. Namely, we may need to revise the idea of the autonomous individual, which has also been the goal of relational autonomy. In this process of revision, concepts such as roles (personas) and relationships (Aida-gara) will be of assistance.

In this way, this presentation will examine the transformation of the concept of autonomy in humans and artificial objects through the introduction of robots and AI.

 
5:20pm - 6:35pm(Papers) Anthropomorphism
Location: Auditorium 4
Session Chair: Ibo van de Poel
 

Anthropomorphism, false beliefs and conversational AIs

Beatrice Marchegiani

University of Oxford, United Kingdom


Conversational AIs (CAIs) are autonomous systems capable of engaging in natural language interactions with users. Recent advancements have enabled CAIs to engage in conversations with users that are virtually indistinguishable from human interactions. We are now dealing with a new generation of Large Language Models that can hold detailed, coherent, and context-aware conversations, often making it hard for users to tell them apart from human interactions. The new abilities of CAIs, combined with anthropomorphic cues present in recent models, pose a substantive risk of users forming anthropomorphic false beliefs about them. For the purposes of this paper, I define an anthropomorphic false belief as a mistaken belief that an entity possesses human-like traits when, in fact, it does not. Such false beliefs can occur when the CAI’s nature is not disclosed and users mistakenly believe they are interacting with a human, or, even if the CAI is disclosed, through subconscious anthropomorphism. Existing literature on anthropomorphism and AI addresses the instrumental harms associated with anthropomorphism. There has been little discussion of the relationship between anthropomorphism and autonomy, and of how anthropomorphism might be bad in itself, especially when considering its impact on user autonomy. This paper aims to address this gap by arguing that anthropomorphic false beliefs undermine users' autonomy. For the purpose of this paper, I am going to assume that autonomy holds intrinsic value.

The core argument is structured as follows:

P1: (Empirical) Interactions with CAIs are likely to cause users to falsely believe that CAIs have some human-like attributes (form anthropomorphic false beliefs).

P2: Anthropomorphic false beliefs undermine users' autonomy.

Conclusion: Interactions with CAIs are likely to undermine users’ autonomy.

The paper is organised into six sections. In part 1, I begin by justifying an autonomy-based approach to analysing anthropomorphic false beliefs in the context of CAIs and briefly mention two alternative ways in which such false beliefs can be criticised: either as disconnecting us from reality or through the lens of deception, as exemplified by existing literature on social robots. In part 2, I outline two mechanisms that lead users to form anthropomorphic false beliefs in the context of CAIs. The first mechanism is a lack of explicit disclosure, and the second is subconscious anthropomorphism. In part 3, I explore how some false beliefs can undermine an agent’s autonomy. I then propose a characterisation called "the intention test" to identify which false beliefs undermine autonomy. In part 4, I apply "the intention test" to the case of anthropomorphic false beliefs and CAIs, demonstrating that such false beliefs undermine autonomy. In part 5, I address two objections to my argument, first considering whether the loss of autonomy is significant enough to pose a serious threat, then addressing cases where it might be best for users to form false beliefs about CAIs. Finally, in part 6, I conclude by discussing practical ways to minimise the autonomy-eroding potential of CAIs.



What's the problem with anthropomorphising AI-driven systems?

Giles Howdle

Utrecht University, Netherlands, The

It is uncontroversial that we commonly anthropomorphise AI-driven systems, particularly social AI-driven systems, such as humanoid robots and chatbots. Indeed, the field of human-robot interaction (HRI) is replete with empirical studies that, their authors claim, show that we do (Aienti, 2018; Damholdt et al., 2023; Duffy, 2003; Li & Suh, 2022; Salles et al., 2020).

According to ‘a widespread view’ (Coghlan, 2024), this anthropomorphic way of thinking and talking about AI-driven systems is a mistake of some kind. I first distinguish two interpretations of the supposed anthropomorphic mistake, metaphysical and pragmatic. I object to the metaphysical interpretation and develop the pragmatic interpretation.

On the metaphysical interpretation (section 2), the mistake we make when we anthropomorphise AI-driven systems is that our thoughts and utterances carry a commitment to ontological falsehoods, for example to the existence of (non-existent) artificial minds.

I provide two objections to this metaphysical interpretation (section 3). First, we may be using non-literal or metaphorical anthropomorphic ascriptions that do not carry an ontological commitment. Second, a ‘companions-in-guilt’ objection: if we are committing ourselves to ontological falsehoods when talking and thinking about AI, then we are also doing so when we talk about corporations and thermostats. But this is implausible.

The objections to the metaphysical interpretation motivate an alternative, pragmatic interpretation of the anthropomorphic mistake (section 4). It is not that our AI-related thought and talk fail to correspond with reality; rather, we are adopting a way of thinking and speaking that can get us into trouble. I articulate this pragmatic interpretation via Daniel Dennett’s ‘intentional stance.’ The mistake is that thinking and talking anthropomorphically about AI-driven systems leads to (vulnerability to) predictive error, which can have negative downstream consequences, including leading us to make poor inferences.

I further distinguish two kinds of pragmatic mistake we might be making by anthropomorphising AI. The first is the more fundamental mistake of adopting the intentional stance toward a system that is not the right kind of system for that stance. The second is adopting the intentional stance toward a system that could warrant it, but doing so poorly or naively—for example, misattributing a specific belief to the system.

Coghlan, S. (2024). Anthropomorphizing Machines: Reality or Popular Myth? Minds and Machines, 34, 1-25.

Damholdt, M.F., Quick, O.S., Seibt, J., Vestergaard, C., & Hansen, M. (2023). A Scoping Review of HRI Research on ‘Anthropomorphism’: Contributions to the Method Debate in HRI. International Journal of Social Robotics, 15, 1203-1226.

Duffy, B.R. (2003). Anthropomorphism and the social robot. Robotics Auton. Syst., 42, 177-190.

Li, M., & Suh, A. (2022). Anthropomorphism in AI-enabled technology: A literature review. Electronic Markets, 32, 2245 - 2275.

Placani, A. (2024). Anthropomorphism in AI: hype and fallacy. AI Ethics, 4, 691-698.

Salles, A., Evers, K., & Farisco, M. (2020). Anthropomorphism in AI. AJOB Neuroscience, 11, 88 - 95.

 
5:20pm - 6:35pm(Papers) Language
Location: Auditorium 5
Session Chair: Diego Morales
 

Is extensible markup language perspectivist?

Timothy Tambassi

Ca' Foscari University of Venice, Italy

If someone were to argue that Extensible Markup Language [XML] and Formal Ontologies [FOs] have little in common, they would have many strings to their bow. The main one, for my part, is this. XML is, as its name suggests, a markup language – or rather, a metalanguage that allows users to define their own customized markup languages (Attenborough 2003). FOs are neither languages nor metalanguages; they are artifacts specified by ontological languages (Gruber 2009). And XML is not even one of those languages. As for the “little” that XML and FOs have in common, there is one similarity that caught my attention. Both XML and FOs have something to do with partitioning. XML partitions data using elements. FOs partition domains of interest by means of representational primitives. It is precisely from this partitioning by FOs that the philosophical debate has developed an epistemological view of FOs, namely perspectivism. For this kind of perspectivism – which does not coincide with perspectivism in the philosophy of science – partitioning a domain means making a mental division between those entities on which we focus and those that fall outside our (domain of) interest. According to this view, such a partitioning provides a perspective on the domain. Moreover, as perspectivism holds, whatever domain we consider, there can in principle be multiple, equally valid and overlapping perspectives on the same domain.

Now, in Tambassi (2023) it has been argued that perspectivism is not just one of the philosophical views that populate the debate on FOs, but an underlying assumption of FOs. In other words, FOs are perspectivist. In this talk I investigate whether the same is true of XML. I begin by defining FOs and presenting the main claims of perspectivism. The idea is not to show the perspectivism of FOs, but rather how these claims apply to FOs. This is also to avoid any overlap with Tambassi (2023). Then I move on to XML, showing both the perspectivism of XML and how the claims (of perspectivism) apply to XML. The argument is based on a parallelism between FOs and XML. More specifically, the facets of perspectivism on FOs that I present in the first part of this talk correspond to the facets of perspectivism on XML that I present in the second part. This is not intended to exhaust the ways in which perspectivism relates to FOs and XML, nor the debate about FO and XML partitions. The only aim is to clarify whether and how XML is perspectivist. And given that XML and FOs have little in common, it cannot even be excluded that perspectivism applies differently to FOs and XML.

The purpose is therefore purely speculative. I believe that discussing whether XML is perspectivist may help to clarify some of the theoretical assumptions of this markup metalanguage. More generally, the idea is that, since the creators of markup (meta)languages develop those languages under the guidance of some theoretical assumptions, for the sake of methodological accuracy those assumptions should be subjected to critical analysis rather than remain implicit and unexamined. The focus on XML is not accidental. First, XML is still widely used, and there are many other markup languages based on XML. This means that this critical analysis is, at least in principle, extendable to other markup languages. Second, XML not only supports the exchange of data, but it is also both human- and machine-readable. In other words, XML – like FOs – supports communication between humans, between humans and machines, and between machines (Goy and Magro 2015). And while supporting this communication is certainly not the prerogative of XML and FOs alone, we cannot even rule out the possibility that determining whether XML is perspectivist may also shed new light on some of the theoretical assumptions behind such communication.
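To make the notion of element-based partitioning concrete, the following minimal Python sketch (purely illustrative, with hypothetical element names not taken from the talk) marks up the same small domain under two different XML structures, each foregrounding different entities; this is the kind of coexisting, overlapping perspectives on one domain that the talk examines.

    # Illustrative only: the same data partitioned by two different XML markups,
    # i.e. two "perspectives" on one domain. Element names are hypothetical.
    import xml.etree.ElementTree as ET

    # Perspective A: partition the domain by session.
    doc_a = ET.fromstring(
        "<programme>"
        "<session location='Auditorium 5'><paper author='Tambassi'/></session>"
        "</programme>"
    )

    # Perspective B: partition the same domain by author instead.
    doc_b = ET.fromstring(
        "<programme>"
        "<author name='Tambassi'><paper location='Auditorium 5'/></author>"
        "</programme>"
    )

    # Both documents are well-formed and describe the same state of affairs,
    # but each markup foregrounds different entities and relations.
    for doc in (doc_a, doc_b):
        print([element.tag for element in doc.iter()])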

References

Attenborough M (2003) Mathematics for Electrical Engineering and Computing. Newnes

Goy A, Magro D (2015) What are ontologies useful for? Encyclopedia of information science and technology (pp. 7456–7464). IGI Global

Gruber TR (2009) Ontology. In Liu L, Özsu MT (eds) Encyclopedia of Database Systems. Springer. DOI: https://doi.org/10.1007/978-0-387-39940-9_1318

Tambassi T (2023) On Perspectivism of Information System Ontologies. Foundations of Science. DOI: https://doi.org/10.1007/s10699-023-09900-5



Wittgenstein’s Woodsellers and AI: Interpreting Large Language Models in practice: Rationality First vs Coherence First approaches

Mark Robrecht Theunissen

The New School, United States of America

In some remarks on calculation from the Remarks on the Foundations of Mathematics (RFM 1.149), Wittgenstein describes a thought experiment where a group of people have the odd practice of valuing wood by surface area rather than volume. This provides a particularly vexing example of a logically alien thought: calculating and trading which does not map onto the norms of reason we hold. Nevertheless, we hesitate to dismiss the practice as irrational; surely, there must be reasons since we recognize the agents as human beings and, thus, rational beings. Indeed, this is the suggestion that Wittgenstein leaves us with at the end of the remark.

This thought experiment poses the question: how should we apply the principle of charity in cases where we encounter strange cases of rule-following? When someone is performing actions or making claims that purport to satisfy some norm, but one which fails to make sense according to our norms of reason, how should we interpret it? These questions were at the heart of the “rationality debates” in the philosophy of social science, the discussions in social science about how we should, or even whether we can, find seemingly irrational beliefs intelligible (e.g., Wilson 1970, Hollis and Lukes 1982, Risjord 1993; 2001). This question turns on which norms are guiding in our assessment of prima facie unintelligible behaviors, specifically when does it make sense to say that such a system follows a rule that is intelligible to us.

The problem has received new relevance in the age of chatbots. The current large language models (LLMs), such as ChatGPT and Gemini, often make claims that appear to satisfy norms—they make claims that are truth-evaluable and appear to be relevant and useful. Moreover, they are correct more often than not in a number of domains. But, at the same time, the claims are often not just false but absurd, as when ChatGPT suggests that a good way to get a couch on the roof is to lower it from a truck bed.

In this paper, we argue that revisiting the rationality debates can help address how to make sense of these systems. Specifically, revisiting debates about the principle of charity for interpreting alien others highlights the tensions between the different ways we might respond to these models. Following Risjord (1993, 2001), we focus on two different approaches. First, we present the ‘rationality first’ approach—the view that, in the first instance, we should adopt our own norms of rationality as standards for evaluating these models: that, when we treat the model as having true beliefs and being rational, we are assuming it is participating in a human form of life, one where its practices are broadly commensurable with our own. But, we show, this approach faces insurmountable problems, regardless of whether it treats LLMs as rational or irrational. Second, we follow the ‘explanatory coherence’ approach—the view that our goal should be to maximize the explanatory power of our account while minimizing error and inconsistencies. We argue the latter approach is better for understanding LLMs. The upshot is that the rationality debates throw light on different approaches we can take towards these models, but also highlight how logically alien these machines are.

Selected Bibliography:

Hollis, Martin and Steven Lukes, eds. (1982). Rationality and Relativism. Cambridge: MIT Press

Risjord, M. W. (2000). Woodcutters and witchcraft: Rationality and interpretive change in the social sciences. Suny Press.

Risjord, M. (1993). “Wittgenstein's Woodcutters: The Problem of Apparent Irrationality.” American Philosophical Quarterly, 30(3), 247-258.

Wilson, Bryan, ed. (1970). Rationality. Oxford: Blackwell.

Wittgenstein, Ludwig (1967). Remarks on the Foundations of Mathematics. Translated by G. E. M. Anscombe. Cambridge: MIT Press, p. 44e.



Time and Temporality in Engineering Language

Aleksandra Kazakova

University of Chinese Academy of Sciences, China, People's Republic of

Bocong Li (2021) argued that philosophy of engineering, being a philosophy of action, implies a process-centered vocabulary, rather than the object-centered language of epistemology of science. Philosophy of engineering is a “verbal concept system”, focusing on the action and event more than on the objects and properties, while epistemology is a “nominal concept system”. Since engineering practice relies on (although cannot be reduced to) application of engineering science, how do the languages of engineering science and practice translate into each other?

Philosophy of science differentiates between the concepts and representations of time in different disciplines (e.g., physical, geological, and biological time) and types of systems (open, closed, living, and technical systems). In engineering, universalisations and abstractions are tools for concrete applications, which are contextualized in time and space (Banse & Grunwald, 2009). Engineering projects integrate the technical, organizational, political, and economic timeframes and the temporal dimension of the human lifeworld for workers, users and the engineers themselves.

STS studies have shown that engineering work and knowledge operate at the intersections of timeframes and temporalities (Shih, 2009; Gainsburg et al., 2010; Vinck, 2011). The complexity of engineering practice implies coordination between the quantification of time in the natural and engineering sciences in modeling and experimentation, the control over time which is central to modern management, standardization and planning, and the collective and individual temporality of action.

The synchronization and parallelism of these different dimensions requires a process of translation. Such translation occurs in the construction of networks and objects in engineering (Suchmann, 2000; Bucciarelli, 2002; Balco, 2003). This report focuses on the way the different languages of engineering (formal, natural, and visual) coordinate processes, events and actions. The notions that are used to describe technical and social processes alike (such as stage, phase, iteration, progress, effect, contingency, etc.) are of special interest from the point of view of the pragmatics of engineering language. Interviews with engineering practitioners and document analysis of engineering projects reveal the contextual variety of formal and colloquial notions of time and associated concepts. In turn, such narratives organize and give meaning to engineering practice as it evolves in different temporal perspectives.

 
5:20pm - 6:35pm(Papers) Decision-making
Location: Auditorium 6
Session Chair: Bouke van Balen
 

Two’s company, three’s a crowd: theoretical considerations for shared-decision making in AI-assisted healthcare

Emma-Jane Spencer1, Cathleen Parsons2, Stefan Buijsman3

1Erasmus MC, TU Delft; 2TU Delft; 3TU Delft

Despite the growing acceptance of patient involvement in shared decision-making (SDM), the design of artificial intelligence clinical decision support systems (AI-CDSS) for healthcare has thus far been predominantly user-facing, which is to say, clinician-centred. This means that while continuous thought is being given to the design requirements necessary for the empowerment of clinicians, there has hitherto not been the requisite attention given to the design requirements necessary for empowering patients alongside their clinicians.

In this paper, we explore the SDM paradigm, arguing that while it remains the preferred approach to clinical decision-making, the shortcomings of the current theoretical approaches do not allow for the seamless integration of AI-CDSS. Most critically, the current approaches are outdated in the sense that they do not make clear where in the patient-physician dynamic AI systems ought to be positioned and, moreover, what role(s) they ought to play. Furthermore, we argue that the contemporary approaches to SDM often result in insubstantial attempts at patient involvement in the decision-making process, such as tokenism. Thus, we argue that the absence of a clear SDM framework which includes AI-CDSS poses a substantial risk to the autonomy of both physicians and patients. This can be seen in the recent suggestion that AI ought to be accommodated as a third participant in the SDM dynamic, such that the doctor-patient relationship ought to become a doctor-patient-AI triad (Lorenzini et al., 2022). We are especially cautious of this proposed dynamic, as we argue it involves an implicit assumption that AI systems can function as agents in a participative, autonomous sense. Rather than promoting the agential status of AI systems, we will suggest that AI systems can instead, when designed appropriately, promote the autonomy of patients and physicians. Given that clinical AI systems are already being developed without proper recourse to patient values, we view the newly invigorated discourse around SDM as an opportunity to intervene on the current AI-CDSS models’ design and development pipelines to ensure that patients and clinicians are not epistemically defeated by AI in their decision-making, but are rather empowered by it.

Thus, to refine the existing SDM paradigm, our paper proposes a new model which we argue better accommodates the integration of clinical AI into the decision-making process. This model, which we will call the contemplative model, places more emphasis on patient-centred values, elongating the pre-intervention stage of clinical treatment by first fully eliciting patient values and preferences and subsequently exploring treatment options through the use of AI-CDSS. Moreover, in the exploration of treatment options, the contemplative model advocates that treatment pathways ought to be presented in a dialogical fashion, with a patient-friendly model design, so that SDM can be ongoing and iterative, rather than based on a singular input-output interaction. The term “contemplative” refers to the fact that such an approach aims to create an SDM process which simulates an open-ended, ongoing conversation. This paper thus discusses the various conditions necessary for adhering to the contemplative model of SDM.

References

Lorenzini G, Arbelaez Ossa L, Shaw DM, Elger BS. Artificial intelligence and the doctor-patient relationship expanding the paradigm of shared decision making. Bioethics. 2023 Jun;37(5):424-429. doi: 10.1111/bioe.13158. Epub 2023 Mar 25. PMID: 36964989.



On the philosophical limits of artificially intelligent decisions

Samuele Murtinu

Utrecht University, Netherlands, The

Philosophy has extensively researched what belongs to artificial intelligence (AI) and how the integration of AI in decision processes impacts organizations and societies (e.g., Carter, 2007; Copeland, 1993; Pollock, 1990; Schiaffonati, 2003). In the introduction to a special volume of Minds and Machines (2012), Vincent Mueller summarizes the classical philosophical debates surrounding AI (e.g., “Can AI machines think?”), as well as the limits of AI as evidenced by the declining consensus around the ‘computationalist’ view of AI. The latter equates cognition with “just” computation over representations, which may thus be implemented in any artificial system. However, while computation is digital, representation is strongly intertwined with cognition, which relies on consciousness, free will, action, meaning, intuition, intentionality and interaction, which are not digital and cannot be automated.

In organizations, AI is increasingly integrated into decision-making (Panico et al., 2024), driven by its capacity to process vast amounts of data (Hilbert and López, 2011) and generate knowledge-worker-like outputs. While some organizations view AI-based decision-making as superior to human judgment, full automation of decisions remains rare (Benbya et al., 2020). Instead, many decisions are typically made through collaborations between humans and AI under labor specialization (e.g., Agrawal et al., 2019) or via human-AI ensembles, where human and algorithmic decisions are aggregated (e.g., Choudhary et al., 2023). This may reflect a transitional phase where AI complements rather than replaces human cognition. However, questions remain about AI’s ability to simulate human-like decision-making and consciousness.

This paper emphasizes the need to critically assess three philosophical limitations of AI in decision-making. First, when faced with undecidable (in a Gödelian sense) problems, AI may introduce randomness or utilize unknown methods to make decisions, posing challenges to human comprehension of its processes. This underscores the importance of human oversight to ensure transparency and accountability in critical decisions, especially when these affect society.

Second, semantically, AI lacks the ability to ascribe genuine meaning to concepts or sentiments like empathy. Its operations are limited to processing symbols and patterns, falling short of human cognition, which inherently combines syntax with semantics. As illustrated by Searle’s “Chinese Room” argument, AI simulates but does not replicate human understanding. This limitation raises concerns about AI’s ability to grasp abstract concepts like civic values, which are deeply tied to human experience and social interaction.

Third, AI systems are also inherently shaped by human perceptions, which are subjective and influenced by biases or evolving paradigms. Examples include how time and space are conceptualized or how cultural constructs influence AI training datasets. While AI can simulate human rationality, it cannot emulate human irrationality, imagination, or the capacity for paradigm shifts that drive scientific and societal progress. This makes AI ill-suited to discover entirely new paradigms or explore the deeper ontology of reality.

Ultimately, the essay emphasizes the need to understand AI’s processes, boundaries, and limitations for ensuring ethical and effective human-AI collaboration. Humans must retain oversight to prevent decisions that could harm society, due to the inherent gaps in AI’s ability to simulate human cognition and morality.

References

Agrawal, A., Gans, J. S., & Goldfarb, A. (2019). Artificial intelligence: the ambiguous labor market impact of automating prediction. Journal of Economic Perspectives, 33(2), 31-50.

Benbya, H., Davenport, T. H., & Pachidi, S. (2020). Artificial intelligence in organizations: current state and future opportunities. MIS Quarterly Executive, 19(4).

Carter, M. (2007). Minds and computers: An introduction to the philosophy of artificial intelligence. Edinburgh University Press.

Choudhary, V., Marchetti, A., Shrestha, Y. R., & Puranam, P. (2023). Human-AI Ensembles: When Can They Work?. Journal of Management, 01492063231194968.

Copeland, J. (1993). Artificial intelligence: A philosophical introduction. John Wiley & Sons.

Hilbert, M., & López, P. (2011). The world’s technological capacity to store, communicate, and compute information. Science, 332(6025), 60-65.

Mueller, V. C. (2012). Introduction: philosophy and theory of artificial intelligence. Minds and Machines, 22(2), 67-69.

Panico, C., Murtinu, S., Cennamo, C. (2024). How do humans and algorithms interact? Augmentation, automation, and co-specialization for greater precision in decision-making. Mimeo.

Pollock, J. (1990). Philosophy and artificial intelligence. Philosophical Perspectives, 4, 461-498.

Schiaffonati, V. (2003). A Framework for the Foundation of the Philosophy of Artificial Intelligence. Minds and Machines, 13, 537-552.



Shaping technology with society's voice: measuring gut feelings and values

Marieke van Vliet, Linda Hofman, Anika Kok, Fleur van Liesdonk, Bart Wernaart

Fontys, Netherlands, The

The intimate technological revolution is changing how we connect with technology on a deeply personal level. It’s no longer just around us, it’s within us, between us, and learning from us in ways we’ve never experienced before. As these technologies become deeply intertwined with our daily experiences, they transform our identities and values, not just our behaviours. This intimate integration raises urgent questions about how we can understand and shape the values that should guide our technological future.

Research has consistently shown that the topics we choose to speak about reflect what occupies our minds and what we consider important, revealing our unique perspectives and underlying identity (Pennebaker et al., 2003; Pennebaker, 2011). How can we use this idea to uncover the gut feelings and values of society, in order to align the development of emerging technology with what truly matters to society? Our work explores this question through an innovative method: the Moral Data City Hunt (van Veen & Wernaart, 2022; Wernaart, 2021).

In this interactive moral lab, citizens are confronted with scenarios about technologies that they actively shape through moral choices. "How do we balance the benefits and drawbacks of emerging technologies that promise progress but disrupt current ways of life, such as delivery drones offering convenience while raising privacy concerns or lab-grown meat making food production more sustainable while raising questions about the artificial nature of food?" Through dynamic interviews that explore participants' reasoning, we create a rich context for exploring values about their future with this technology.

Our methodology is grounded in Schwartz's Value Theory, which has emerged as the most influential framework for understanding personal values and their interrelationships (Schwartz, 2012). Research has shown that the words people use in natural conversation reflect their underlying values more accurately than self-reporting methods (Boyd et al., 2015). By analyzing participants' language through the Personal Value Dictionary (Ponizovskiy et al., 2020) - a validated tool linking words to Schwartz's value framework - we can capture the subtle ways values emerge in discussions about technological futures. This approach allows us to identify not just explicit moral statements, but also the implicit value patterns that emerge when people envision and discuss their desired relationship with technology.

This work aims to explore which interview techniques best facilitate linguistic analysis of values. The key question we address is: how can we structure a five-minute conversation to elicit linguistic patterns rich enough for automated value analysis through tools like the Personal Value Dictionary? Answering this question unlocks a powerful middle ground between traditional research approaches: capturing authentic societal gut feelings that surveys and focus groups often miss, while avoiding the contextual limitations of mining online linguistic data. This combination of real-world engagement and linguistic analysis provides a uniquely nuanced window into what truly drives society's relationship with technology.
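As a rough illustration of the kind of automated value analysis described above, the sketch below shows the general idea of dictionary-based scoring in Python. The word lists and the example response are invented for illustration only; the actual Personal Value Dictionary (Ponizovskiy et al., 2020) uses validated word lists for Schwartz's basic values and normalizes scores for transcript length.

    # Illustrative sketch only: a toy lexicon standing in for the validated
    # Personal Value Dictionary, which maps a much larger vocabulary to
    # Schwartz's basic values.
    from collections import Counter
    import re

    TOY_VALUE_LEXICON = {
        "benevolence": {"help", "care", "friend", "support"},
        "security": {"safe", "privacy", "protect", "risk"},
        "self-direction": {"choose", "freedom", "create", "curious"},
    }

    def score_values(transcript: str) -> Counter:
        """Count how often words associated with each value appear in a transcript."""
        words = re.findall(r"[a-z']+", transcript.lower())
        counts = Counter()
        for word in words:
            for value, lexicon in TOY_VALUE_LEXICON.items():
                if word in lexicon:
                    counts[value] += 1
        return counts

    # Hypothetical fragment of a five-minute interview, invented for illustration.
    print(score_values("I would choose the drone if it is safe and my privacy is protected."))
    # Counter({'security': 2, 'self-direction': 1})

In practice such counts would be only a starting point; stemming, weighting, and length normalization are among the refinements a validated dictionary approach applies.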

Wernaart, B. (2021). Developing a roadmap for the moral programming of smart technology. Technology in Society, 64, 101466. https://doi.org/10.1016/j.techsoc.2020.101466

Pennebaker, J. W., Mehl, M. R., & Niederhoffer, K. G. (2003). Psychological Aspects of Natural Language Use: Our Words, Our Selves. Annual Review of Psychology, 54, 547-577. https://doi.org/10.1146/annurev.psych.54.101601.145041

Pennebaker, J. W. (2011). The secret life of pronouns. New Scientist, 211(2828), 42-45. https://doi.org/10.1016/S0262-4079(11)62167-2

van Veen, M., & Wernaart, B. (2022). Building a techno-moral city – Reconciling public values, the ethical city committee and citizens’ moral gut feeling in techno-moral decision making by local governments.

Ponizovskiy, V., Ardag, M., Grigoryan, L., Boyd, R., Dobewall, H., & Holtz, P. (2020). Development and Validation of the Personal Values Dictionary: A Theory–Driven Tool for Investigating References to Basic Human Values in Text. European Journal of Personality, 34(5), 885-902. https://doi.org/10.1002/per.2294

Boyd, R., Wilson, S., Pennebaker, J., Kosinski, M., Stillwell, D., & Mihalcea, R. (2015). Values in Words: Using Language to Evaluate and Understand Personal Values. Proceedings of the International AAAI Conference on Web and Social Media, 9(1), 31-40. https://doi.org/10.1609/icwsm.v9i1.14589

Schwartz, S. H. (2012). An Overview of the Schwartz Theory of Basic Values. Online Readings in Psychology and Culture, 2(1). https://doi.org/10.9707/2307-0919.1116

 
5:20pm - 6:35pm(Papers) Virtue ethics II
Location: Auditorium 7
Session Chair: Matthew Dennis
 

Intelligence over wisdom: the price of conceptual priorities

Anuj Puri

Tilburg University, Netherlands, The

The researchers at the 1956 Dartmouth Conference decided “to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it” (McCarthy et al 1955). It is worth pondering how different our world would be if we were as driven by the pursuit of wisdom as we are by the pursuit of intelligence in the development of technology. Our recent fixation on intelligence over wisdom is not merely a matter of chagrin; as the impact of Socially Disruptive Technologies reveals, this conceptual transition has come at significant moral and social cost.

While most accounts of intelligence are context-specific, goal-oriented, and focused on optimization (Legg & Hutter 2007), wisdom is often associated with virtuousness and a general capacity for sound judgement arising out of lived experience. This is not to say that intelligence and wisdom are mutually exclusive, but rather to highlight that our quest for new data-driven technologies is driven by the former rather than the latter. In our search for human-like intelligent machines that can solve the problems of the day, we seem to have lost track of the embodied wisdom derived from the historical recognition of our co-existence. This moral loss is reflected in some of the priorities for which AI systems are being developed and deployed, ranging from the propagation of deepfakes to the development of autonomous weapons to the adoption of a “statistical perspective on justice” (Littman et al 2021).

Some scholars have sought to overcome the limitations of Artificial Intelligence by promulgating a move towards Artificial Wisdom (Jeste et al 2020, Tsai 2020). However, its feasibility remains uncertain. Wisdom is acquired through embodied experience and requires acknowledgment of our collective co-existence. This is a “skin in the game” argument, both in the phenomenological sense and as an acknowledgment of one’s responsibility for the consequences of one’s actions. If this argument holds, then our efforts are better spent using wisdom to delineate those areas where artificial intelligence can make a positive contribution, such as cancer detection (Eisemann et al 2025), rather than treating AI as a panacea for all our troubles. In our zealous pursuit of advancements in AI, we seem to have forgotten that while intelligence may help us achieve certain goals, wisdom lies in deciding whether those goals are worth pursuing in the first place.

References:

Eisemann, N., Bunk, S., Mukama, T. et al. Nationwide real-world implementation of AI for cancer detection in population-based mammography screening. Nat Med (2025). https://doi.org/10.1038/s41591-024-03408-6

Jeste, D. V., Graham, S. A., Nguyen, T. T., Depp, C. A., Lee, E. E., & Kim, H. C. (2020). Beyond artificial intelligence: exploring artificial wisdom. International Psychogeriatrics, 32(8), 993-1001.

Legg, S., & Hutter, M. (2007). A Collection of Definitions of Intelligence (arXiv:0706.3639). arXiv. https://doi.org/10.48550/arXiv.0706.3639

McCarthy et al (1955), A proposal for the Dartmouth Summer Research Project on Artificial Intelligence.

Michael L. Littman, Ifeoma Ajunwa, Guy Berger, Craig Boutilier, Morgan Currie, Finale Doshi-Velez, Gillian Hadfield, Michael C. Horowitz, Charles Isbell, Hiroaki Kitano, Karen Levy, Terah Lyons, Melanie Mitchell, Julie Shah, Steven Sloman, Shannon Vallor, and Toby Walsh. "Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report." Stanford University, Stanford, CA, September 2021. Doc: http://ai100.stanford.edu/2021-report. Accessed: September 16, 2021.

Tsai, Ch. Artificial wisdom: a philosophical framework. AI & Soc 35, 937–944 (2020). https://doi.org/10.1007/s00146-020-00949-5



Addressing challenges to virtue ethics in the application of artificial moral agents: From a Confucian perspective

Yin On Billy Poon

Hong Kong Baptist University, Hong Kong S.A.R. (China)

The rapid advancement of artificial intelligence (AI) offers both significant opportunities and potential risks to human society. Therefore, controlling the risks presented by AI and limiting its potential harm to human interests is an urgent and pressing issue. Lately, one of the hottest topics is “AI agents”. These AI agents are “capable of autonomously performing tasks on behalf of a user or another system by designing their workflows and utilizing available tools” (Gutowska, 2024). The rise of such capable AI brings forth significant ethical implications. One possible solution is to nurture AI like a child, ensuring that it aligns with human values. Recently, the application of rule-based ethics to guide AI’s actions has encountered numerous challenges, suggesting that a virtue-based approach could be a viable solution.

However, there are some obstacles to applying virtue ethics to AI systems. In this paper, I will articulate two of them. First, virtue ethics is an agent-centered ethics; to apply it to AI, AI has to be a moral agent. The question, then, is whether AI can be considered a moral agent. Second, some scholars, such as Roger Crisp (2015), have cast doubts on the practicality of virtue ethics as a guide for action.

In the current discourse in the philosophy of AI, some scholars argue that AI could be considered a moral agent (Floridi & Sanders, 2004; Anderson & Anderson, 2007; Misselhorn, 2018). On the other hand, another camp presents opposing views (Brożek & Janik, 2019; Hallamaa & Kalliokoski, 2020). In my opinion, while AI can be recognized as a moral agent, it should be considered a functional moral agent rather than a full moral agent due to its lack of moral patiency.

I will evaluate this by using the distinction between a full moral agent and a functional moral agent (Misselhorn, 2018). Then, I will borrow concepts from animal ethics to delineate moral agency and moral patiency. According to Tom Regan (2004), moral patients are sentient beings, and moral agents must also be moral patients. This means that without moral patiency, an agent cannot be considered a moral agent (pp. 151-155). Meanwhile, Confucianism offers a wealth of resources to justify why, without moral patiency, an agent is not a full moral agent; I will explain this from a Confucian perspective.

To address the second critique, I will also integrate viewpoints from Confucianism. Recognized by many as a form of virtue ethics, Confucianism contributes to the discourse on motivation and disposition, providing valuable insights that may fortify the foundation of a virtue-ethics approach to artificial intelligence.

In conclusion, I will argue that virtue ethics is a viable and preferable ethical theory for the design of AI.

References List:

Anderson, M., & Anderson, S. L. (2007). Machine Ethics: Creating an Ethical Intelligent Agent. AI Magazine, 28(4), 15-26.

Brożek, B., & Janik, B. (2019). Can artificial intelligences be moral agents? New Ideas in Psychology, 54, 101-106. https://doi.org/10.1016/j.newideapsych.2018.12.002

Crisp, R. (2015). A Third Method of Ethics? Philosophy and Phenomenological Research, 90(2), 257-273.

Floridi, L., & Sanders, J. W. (2004). On the Morality of Artificial Agents. Minds and Machines, 14, 349-379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d

Gutowska, A. (2024). What are AI agents? Retrieved 3 January from https://www.ibm.com/think/topics/ai-agents

Hallamaa, J., & Kalliokoski, T. (2020). How AI Systems Challenge the Conditions of Moral Agency? In: Rauterberg, M. (ed.) Culture and Computing. HCII 2020. Lecture Notes in Computer Science, vol. 12215. Springer, Cham. https://doi.org/10.1007/978-3-030-50267-6_5

Misselhorn, C. (2018). Artificial Morality. Concepts, Issues and Challenges. Society, 55, 161-169.

Regan, T. (2004). The Case For Animal Rights. University of California Press.

 
5:20pm - 6:35pm(Papers) Geo-engineering
Location: Auditorium 8
Session Chair: Aarón Moreno Inglés
 

The question concerning planetary technology: geo-engineering, sustainable technology, planetary boundaries, and the end of the Earth

Ole Thijs, Jochem Zwier

Wageningen University & Research, Netherlands, The

In the Anthropocene, globalised technology is impacting planetary systems to the point of climate crisis and biodiversity collapse. A new generation of geo-engineering and sustainable technologies (GESTs) is being implemented to combat these impacts. GESTs thus add the goal of planetary preservation to technology’s traditional repertoire of practical purposes, from transportation to energy to entertainment; they are meant to maintain the Earth system in a state close enough to Holocene conditions to remain amenable to human life.

However, the question remains whether GESTs can reach the goal of planetary preservation by the same means – those of globalised, extractive technology – that led to the Anthropocene in the first place. Classical philosophers of technology would claim that they cannot; according to Heidegger, for instance, all technology reduces the whole of being to a ‘standing reserve’ (Bestand) of energy and resources, in an ontological event he calls ‘Enframing’ (Gestell). If this is true, new technologies could never solve the ontological crisis behind the Anthropocene.

In this contribution, we investigate whether GESTs are indeed unsuited to the goal of planetary preservation, or whether GESTs have revolutionary potential, in the sense that they can disprove Heidegger’s absolutism about technology and allow us to think technology beyond Gestell. In order to do so, we first develop a philosophical understanding of ‘planetary preservation’ in terms of planetary limits or boundaries and an interpretation of Gestell as limitlessness. Then, we offer a definition of ‘planetary technology’ as such, and develop the hypothesis that GESTs offer an opportunity to think technology beyond extractive practices in principle, thus disproving the essentialist claims of classical philosophy of technology, but do not in fact manage to live up to this guiding ideal, because the limits they recognize remain global and abstract, as opposed to grounded in always already localised planetary reality.



Do artifacts have eco-politics? A convivial critique of environmental techno-solutionism

Alessio Gerola

Wageningen University

Pressing global challenges like climate change, biodiversity loss, and resource depletion are increasing the demand for sustainable solutions. These include AI and digital twins as well as nature-based technologies and sustainable design practices such as biomimicry and bioinspired design. While it is hard to deny that technology can play an important role in addressing environmental challenges, these innovations involve numerous social, ethical, and ecological costs and consequences. Ethicists of technology have started to pay attention to the environmental costs of technology in terms of materials and energy use, but they remain a relatively small minority (Kaplan 2017, Thompson 2020, van Wynsberghe et al. 2022).

If one accepts the premise that technology has played a significant role in enabling the environmental crisis, then sustainable technologies fall into an awkward ambiguity: how can technology undo and repair what technology has arguably contributed to causing in the first place? This ambiguity has led to charges of techno-solutionism, an attitude towards technology that treats it as the default solution to most societal problems (Sætra 2023, Siffels & Sharon 2024). This paper argues that techno-solutionism in the context of sustainable technologies is characterized by a combination of technological instrumentalism and technocratic tendencies, which leads it to ignore local contexts, marginalize public values, and frame complex societal problems as engineering challenges. These concerns raise the question of whether sustainable technologies can be more than simple technological fixes, and under what conditions.

To formulate a response, this paper argues that it is fruitful to re-evaluate the philosophy of technology of Ivan Illich. While Illich is nowadays a relatively minor figure in philosophy of technology, his work on conviviality has regained popularity in the context of degrowth and conservation discourses (Büscher & Fletcher 2020, Pansera & Owen 2018). The paper aims to show how Illich’s convivial critique of exploitative technologies can provide a timely and necessary contrast to the political and ecological risks of techno-solutionism. Convivial technologies are technologies that support autonomy and creativity, and foster the enjoyment of communal life. By building upon Illich’s convivial critique and showing its implications for technology design and participatory design methods, the paper shows how conviviality can lead to a re-politicization of technology and technology ethics.

References

Büscher, B. and R. Fletcher (2020). The conservation revolution: radical ideas for saving nature beyond the anthropocene. London, Verso.

Kaplan, D. M. (2017). Philosophy, technology, and the environment. Cambridge, Massachusetts, The MIT Press.

Pansera, M. and R. Owen (2018). "Innovation for de-growth: A case study of counter-hegemonic practices from Kerala, India." Journal of Cleaner Production 197: 1872-1883.

Sætra, H. S. (2023). Technology and Sustainable Development: The Promise and Pitfalls of Techno-Solutionism. New York, Routledge.

Siffels, L. E. and T. Sharon (2024). "Where Technology Leads, the Problems Follow. Technosolutionism and the Dutch Contact Tracing App." Philosophy & Technology 37(4): 125.

Thompson, P. B. (2020). Food and Agricultural Biotechnology in Ethical Perspective, Springer.

Van Wynsberghe, A., Vandemeulebroucke, T., Bolte, L., & Nachid, J. (2022). Special Issue "Towards the Sustainability of AI; Multi-Disciplinary Approaches to Investigate the Hidden Costs of AI". Sustainability, 14(24).



Environment, Technology, and Philosophy of Maintenance

Andrea Gammon

TU Delft, Netherlands, The

Why is philosophy of technology so separate from philosophy of the environment? In 1999, Maria Banchetti explained the division between them in a way critical of both: “Environmental ethics overemphasizes wilderness and views human technological activity negatively,” and on the other side, “Philosophy of technology displays a “naïve anthropocentrism,” focusing the role of devices and machines on social, political, and economic affairs to the exclusion of ecological concerns” (Banchetti, cited in Kaplan, 2017: 2). Despite the efforts of Banchetti (and others) in the meantime to bring these fields into closer contact, philosophy of the environment and philosophy of technology remain largely separate, although philosophers of technology have more recently paid closer attention to technologies’ material and environmental impacts, and environmental philosophers have become more accepting of the technological aspects of the environments we inhabit and create.

In this talk, I explore the growing subfield of maintenance and repair in philosophy of technology as a promising approach for bringing environment and technology together. In philosophy of technology, maintenance and repair move the emphasis from technological development, innovation, and ideal functioning to how technologies are kept up, reconstructed, and creatively transformed over their lifespans (Young & Coeckelbergh, 2024). That all things are time-bound and vulnerable to malfunction and breakdown is foregrounded, and relations and practices of care and attentive labor are central.

How might maintenance and repair then forge connections between philosophy of technology and philosophy of the environment? In this talk I focus on developing and illustrating one claim about this: that maintenance and repair put the focus on something both environmental philosophy and philosophy of technology have often neglected, namely labor.

References

Banchetti, Maria. 1999. ‘Introduction [to Philosophies of the Environment and Technology].’ Research in Philosophy and Technology 18:3-12.

Kaplan, David M., ed. 2017. ‘Philosophy, Technology, and the Environment’. In Philosophy, Technology, and the Environment. The MIT Press.

Young, Mark Thomas, and Mark Coeckelbergh. 2024. Maintenance and Philosophy of Technology: Keeping Things Going. New York: Routledge.

 
Date: Friday, 27/June/2025
8:15am - 8:45amRegistration
Location: Voorhof
8:45am - 10:00am(Papers) Medical technology
Location: Blauwe Zaal
Session Chair: Katleen Gabriels
 

Matters of the Heart: Ethical Considerations in the development of a Soft Biocompatible Artificial Heart

Anne Bonvanie1, Merlijn Smits2

1Saxion UAS, research group Ethics & Technology, Deventer, the Netherlands; 2Saxion UAS, research group Industrial Design, Enschede, the Netherlands

Ground-breaking innovation in cardiac surgery is by default surrounded by ethical dilemmas, including, among others, questions of life and death, quality of life, impact on the next of kin, dilemmas regarding testing and outcomes for first patients, societal impact of innovations, and many more. The consideration of these ethical issues, if it happens at all, is often compartmentalized: researchers focus on their own piece of the innovation and the issues surrounding their efforts, but are not stimulated to look at the holistic impact of their innovation, due to a lack of resources, a lack of experienced urgency, or other priorities. In this paper, we show how it can be done differently and in a more integrated way.

In the Dutch project (X), researchers from 6 universities and universities of applied sciences are working on the development of a soft biocompatible artificial heart. The project ranges from material development to soft robotics, systems engineering, and health technology assessment, and aims to end with a prototype that can function in a test animal for several weeks. It was recognized that ethical considerations need to play a major role in this project. Therefore, an ethical parallel track (EPT) (Dorrestijn & Eggink, 2021) was designed, aimed at interdisciplinary collaboration to identify, discuss, and incorporate ethical considerations in all phases of the project. The EPT involves all researchers and partners in the project.

In this paper, we show how this ethical parallel track works in practice, with a focus on two activities. Firstly, we show how peer-to-peer sessions with the daily researchers enable researcher reflexivity. In these recurring sessions, decisions in the day-to-day work of the researchers are discussed in small interdisciplinary and interinstitutional groups, in order to align the empirical and ethical implications and point out their trade-offs. This enables the project management to incorporate ethical considerations in choices regarding which prototypes to continue with during the project. Secondly, we show how insights on ethical implications are collected and used to develop and adjust the approaches for user research and co-design sessions. This, for instance, enables the partners to develop an ethics-inclusive patient journey.

Outcomes on individual and group reflection, impact on the project deliverables, and work on incremental changes to the EPT are expected in the next 4-5 years, as the project runs from 2023 to 2030. First results from the mentioned activities show that there is an enthusiastic response to the inclusion of ethics in the day-to-day research activities from all daily researchers. The need for a more ethics-inclusive approach is widely recognized, and the methods seem to be a good fit for the project. We therefore dare to dream: uncovering matters of this heart is a matter of time and teamwork.



The Pull-Factor of Metaphors in Technology Development - a conceptual Vehicle for Ethical Vision Design

Nils Neuhaus, Nele Fischer, Sabine Ammon

Technische Universität Berlin, Germany

The high level of abstraction connected to emerging technologies renders their governance and ethical analysis immensely difficult. This challenge is faced not only in the evaluation and management of early-stage technological development, but also during their actual hands-on construction and design. To deal with the inherent vagueness of these new, not yet consolidated technologies, designers and engineers explicitly and implicitly apply metaphors that render abstract problems tangible. Metaphors aid the developers of technology by highlighting the essential aspects of given problems (Casakin 2007, p. 24), comparing future technologies to established ones (Boon and Moors 2008, p. 1916), clarifying ambiguous circumstances (Nugent and Montague 2021, p. 228), finding solutions in nature (Hey et al. 2008, p. 283), contextualising prospective developments (Boon and Moors 2008, p. 1925), or building foundations for communication and social bonds related to their design task (Nugent and Montague 2021, p. 228).

Metaphors are more than literary ornaments. They encompass the capability to bridge the abstract and the concrete, an aspect that George Lakoff and Mark Johnson highlight in their highly influential conceptual metaphor theory. According to Lakoff and Johnson, metaphors structure our experience of the world (Lakoff and Johnson 2011). This is achieved through a projection from concrete, embodied physical experience to non-physical concepts (Johnson 1987, p. 34), meaning that metaphors use patterns that we obtain through experience in the physical world, leading to a movement of understanding from the concrete to the abstract. It is this bridge from the concrete to the abstract that makes metaphors useful for engineering and design. The abstract, envisioned technology becomes tangible through metaphorical imagery; it gets pulled into the realm of physical experience before ever leaving the conceptual stage. And since design and engineering are themselves embodied processes, metaphors provide a missing link between lofty ideas and concrete construction artefacts.

In our talk, we will present an approach that makes insights concerning the role of metaphors within technological development usable for integrated ethics. As part of our larger methodological framework called Ethical Vision Design, this metaphor approach aims to co-shape normative visions and align concepts and practices with these visions. Next to the theoretical background, our presentation will introduce our approach to working with metaphors, illustrated through an example of its implementation within the ongoing research project GlobalResist – an interdisciplinary, early-stage research and development project that aims to create a biomedical device for the more efficient testing of antibiotics. Our approach consists of two steps. First, a metaphor investigation is applied to analyse dominant metaphors in the relevant field and the individual development team. Second, we use a workshop format to check whether metaphors align with the respective vision and to investigate alternative metaphors if necessary.

Metaphors are thus conceptual vehicles, meaning that they can shape technology development in decisive ways. They allow us to pull the fuzzy conceptions of emerging technologies into the realm of lived experience. The presented metaphor approach offers one replicable way to harness this power of metaphors for technological development.

Literature

Boon, Wouter, and Ellen Moors. 2008. “Exploring Emerging Technologies Using Metaphors – A Study of Orphan Drugs and Pharmacogenomics.” Social Science & Medicine 66(9): 1915–27.

Casakin, Hernan Pablo. 2007. “Metaphors in Design Problem Solving: Implications for Creativity.” International Journal of Design 1(2): 23–35.

Hey, J, J Linsey, A M Agogino, and K L Wood. 2008. “Analogies and Metaphors in Creative Design.” International Journal of Engineering Education 24(2): 283–94.

Johnson, Mark. 1987. The Body in the Mind: The Bodily Basis of Meaning, Imagination and Reason. Chicago London: The University of Chicago press.

Lakoff, George, and Mark Johnson. 2011. Metaphors We Live By. 6. print. Chicago, Ill.: Univ. of Chicago Press.

Nugent, Paul D., and Richard Montague. 2021. “The Use of Metaphor in Systems Engineering Practice: A Preliminary Sociological Study.” Issues In Information Systems 22(2): 223–30.



Personal & prosthetic, historical & surgical

Ashley Shew

Virginia Tech, United States of America

This paper presentation is a historical and philosophical work-in-progress that identifies absences and gaps in the historiography of medical technologies. Taking a unique type of amputation as a case study, I emphasize how technological knowledge and human agency shape development and understanding in three domains: surgical life, prosthetic development, and human/patient experience. Surgical procedure, prosthetic design, and human experience are deeply intertwined in ways not typically captured by any single body of literature. This presentation engages with postphenomenology, the concept of technological knowledge, and epistemic justice in narrating the histories we know and those we do not.

Rotationplasty is an impressive surgical procedure (and an alternative to above-knee amputation) used today primarily on patients with childhood bone cancers and particular congenital limb differences, though its history takes us back to 1927 Germany and a young boy whose leg was stunted in its growth by tuberculosis. The procedure was then made more popular in the 1950s in the Netherlands for patients with a congenital limb difference. The procedure finally began being used for osteosarcoma (its majority use today) in 1981 in Austria. Recently, the procedure has been reported as being performed in Syria (2021), and recommended as useful for patients of “low income status.”

Since the first publication on rotationplasty in 1930, which features pictures of a young woman after her procedure in a prosthesis, the prosthetic design has remained largely the same, with different materials. (The woman’s shoe, which is part of her prosthesis, has a higher heel than my rotationplasty prosthesis can even accommodate today.) There is something to be said about designs that endure, especially when so much discourse about prosthetic advancement characterizes prostheses as ever-developing and improving.

Today, rotationplasty amputees are networked through social media and use each other’s knowledge to get better care and better-fitting devices. Through this network, they help each other find appropriate surgeons and prosthetists and advocate for rotationplasty in wider use. Considered ugly by some surgeons, this is a procedure that was once offered more often to boys than girls, and only to children. Finding information about rotationplasty was once incredibly difficult, and patients are often advocates for themselves using what they have heard from others (and can now find more readily in community-driven materials). Though this amputation type still makes up a fraction of a percent of amputations, it is hard to gauge how many rotationplasty procedures have been performed because of the absence of appropriate codes for recording this data (which often gets lumped into other types of amputation categories and categories for prosthetics). This presentation asks us to think about agency, expertise, and experience in the context of this particular surgical procedure and reflects more widely on the relationships between surgeons, prosthetists, and patients in the material reality of bodies and artifacts.

 
8:45am - 10:00am(Papers) Philosophy of technology IV
Location: Auditorium 1
Session Chair: Udo Pesch
 

REX with AI? Challenges for the return on experience in the digitized lifeworld

Bruno Gransche

Karlsruhe Institute of Technology, Germany

The perception of self-efficacy is crucial for our identity as autonomous agents. The belief that we can intentionally cause change underpins our self-image, responsibility, and ethical or legal considerations. Our sense of having made a difference is informed by sensory feedback, direct observation, or interpreting traces of our actions (abductions). Skills and competencies develop through learning, which relies on adjusting based on perceived differences between intended and actual outcomes (see e.g. Hubig 2007).

The digitization of our lifeworld and the integration of AI systems with increased ‘agency’ (technical autonomy, see e.g. Gransche 2024) alter the conditions for learning and skill development. This affects our perception of self-efficacy and our ability to solve problems and overcome resistances (see Wiegerling 2021), often associated with intelligence (problem solving). Learning and development are driven by probing possibilities, making mistakes, and overcoming resistances, which help us understand the modal boundaries between the possible and impossible. We improve by learning from errors, provided the right conditions are met.

Hybrid human-AI actions in complex environments, involving automated technical ‘agents’ and spatio-temporally distant human co-agents, may disrupt this dynamic. Even without digital technology and AI, an explicit error culture is needed to maintain this learning dynamic, allowing progress without shaming or dissimulating failure. Digital technology and AI increasingly challenge efforts to update an error culture due to issues like the loss of traceability of individual contributions (see Hubig 2007). This complicates transparency and explainability in AI interactions, disturbing the conditions for learning from errors. The chain of 'try – fail – learn – retry better – succeed' can break if feedback is systemically withdrawn or veiled. This could lead to a situation where only a few tech leaders learn and improve their systems due to their crucial position in collecting feedback and accessing data (see Lanier 2013).

This paper presents this argument in detail and explores socio-technical mitigation strategies. It advocates for a) an explicit error culture that emphasizes the importance of errors and specific contributions for learning and improvement, b) organizational measures to foster such a culture that include error friendliness, opportunities to repeat trials, and feedback links between trials as well as between individuals for organizational learning, c) some supporting technical measures including data access, transparency on demand, and explainability.

This paper’s content was developed in an interdisciplinary project between philosophy of technology and a large global infrastructure and digitization company (corporate group). The project, a pilot in integrated research (see Gransche and Manzeschke 2020), aimed to go beyond ELSI in Germany and resulted in a Transformative Philosophy program (engineering/executives’ education), thus following the goal of an engaged philosophy of technology that Carl Mitcham highlighted in his keynote at SPT2021 in Lille. The talk will briefly present the project results, focusing on error culture or return on experience (REX) in a digitalized lifeworld, and report intriguing insights about the corporate reactions, feedback, and learnings on this topic.

Publication bibliography

- Gransche, Bruno (2024): Technische Autonomie. In Mathias Gutmann, Klaus Wiegerling, Benjamin Rathgeber (Eds.): Handbuch Technikphilosophie. 1st ed. 2024. Stuttgart: J.B. Metzler, 257-266.

- Gransche, Bruno; Manzeschke, Arne (Eds.) (2020): Das geteilte Ganze. Horizonte Integrierter Forschung für künftige Mensch-Technik-Verhältnisse. 1st ed. 2020. Wiesbaden: Springer Fachmedien Wiesbaden; Springer VS, checked on 4/17/2020.

- Hubig, Christoph (2007): Die Kunst des Möglichen II. Grundlinien einer dialektischen Philosophie der Technik; Ethik der Technik als provisorische Moral. 2 volumes. Bielefeld: Transcript (2).

- Lanier, Jaron (2013): Who owns the future? London: Allen Lane.

- Wiegerling, Klaus (2021): Exposition einer Theorie der Widerständigkeit. In Philosophy and Society 32 (4), pp. 499–774. DOI: 10.2298/FID2104641W.



Cognitive maps and the quantitative-qualitative divide

Dan Jerome Spitzner

University of Virginia, United States of America

This paper draws inspiration from a diagrammatic measurement device known as a cognitive map, whose use in research complicates the practice of assigning qualitative or quantitative labels to research methodologies. A cognitive map is a spatial layout of factors connected by arrows, which is assembled by a researcher or research participant to express a phenomenon’s perceived relevant factors and the impacts of those factors on one another. A numerical value may additionally be assigned to any connection between factors to indicate the strength of perceived impact. Cognitive maps are increasingly valued in participatory research for facilitating intercultural dialogue.
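To make this measurement device concrete for readers less familiar with it, the sketch below shows one minimal way a cognitive map could be encoded as a weighted directed graph in Python. The CognitiveMap class, the factor names, and the weights are hypothetical illustrations, not material from the study itself.

    # Illustrative sketch only: a cognitive map as a weighted directed graph.
    from __future__ import annotations
    from dataclasses import dataclass, field

    @dataclass
    class CognitiveMap:
        # edges[(source, target)] = perceived strength of impact, e.g. from -1.0 to 1.0
        edges: dict[tuple[str, str], float] = field(default_factory=dict)

        def add_link(self, source: str, target: str, strength: float) -> None:
            """Record that `source` is perceived to impact `target` with a given strength."""
            self.edges[(source, target)] = strength

        def influences_of(self, factor: str) -> dict[str, float]:
            """Return every factor the given factor is perceived to impact, with strengths."""
            return {t: w for (s, t), w in self.edges.items() if s == factor}

    # Hypothetical map drawn by a research participant.
    m = CognitiveMap()
    m.add_link("green space", "air quality", 0.7)
    m.add_link("traffic", "air quality", -0.6)
    m.add_link("air quality", "public health", 0.9)
    print(m.influences_of("air quality"))
    # {'public health': 0.9}

Because every element of such a map can be translated into numbers in this way, the same object can also feed a fully quantitative analysis, which is part of what complicates the qualitative or quantitative labelling discussed below.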

In the present study these devices are to serve as a focal point in developing a theory of the quantitative-qualitative divide, which feeds into a larger project that seeks to recontextualize statistical methodology so that it is less limited in its capacity to address multicultural perspectives, imbalances in political power, local and community perspectives, and individual experiences of relevant phenomena. Previous authors have made important contributions to conceptualizing the quantitative-qualitative divide by emphasizing an ethnographic character of quantitative methodologies and connections to deconstructionist and new-materialist perspectives. The present effort embraces these perspectives, but is distinct in attending specifically to issues at the fine-grained level of statistical data-analysis, while also generating new statistical methodology.

Support for assigning the qualitative label to cognitive maps partly derives from their capabilities in participatory research, wherein it is recognized that research participants may find greater success at articulating ideas through visual means, manual manipulation, or other artistic practices. For their capability to assist articulation in this way, cognitive maps share a feature with qualitative and arts-based methodologies. Moreover, the philosophical and methodological pillars embraced by some users of cognitive maps resonate with such perspectives as critical realism and standpoint theory, both of which stray from traditional quantitative orthodoxy. On the other hand, cognitive maps are also valued for decomposing phenomena into factors and attending to causal relationships, a priority that is typically associated with quantitative methodologies. Data-analysis of cognitive-map measurements can be argued to be entirely quantitative, given that all constituent elements of the maps may ultimately be translated into numerical formats and treated under a mathematical model. Adding further complexity to the situation, scholars of mixed-methods research have specifically promoted cognitive maps for the purpose of integrating qualitative and quantitative methodologies.

The proposed theory characterizes the qualitative-quantitative divide as a productive convention, in a new-materialist sense. This perspective offers insights into such topics as the marginalization of qualitative methodology and the presence of methodological hierarchies. A connection of cognitive maps to evidence-based practices sets up a vehicle for investigating certain research tools (such as those of arts-based research) that are used for knowledge transfer as a project of evidence-based practice. Other insights into the qualitative-quantitative divide, which nicely overlay with the characteristics of cognitive maps, arise from debates within the history of grounded theory, a class of methodologies that facilitates the generation of theoretical themes, factors, and codes from qualitative data.



Semiotics, technology, and the Infosphere

Andrew Wells Garnar

College of Charleston, United States of America

This paper is a preliminary exploration of using C.S. Peirce’s semiotics as an approach to the philosophy of technology. There has been some scholarship using various theories of signs in the philosophy of technology. Some of this comes out of the writings of Baudrillard or Derrida and their debt to Saussure, other work drawing on the likes of Hjelmslev or Jakobson. Thus far, little has relied on Peirce’s theory of signs. This is unfortunate given that he offers an alternative semiotics that is less concerned with linguistics as a field of study, is oriented more towards inquiry, foregrounds embodiment, and approaches signs as inherently dynamic. Since language has come to play a central role in recent science and technology, relying on a theory that makes language central has certain advantages, especially when signs are tied to action. For example, following Luciano Floridi, contemporary information technologies serve to re-ontologize the world. One way this occurs is through the creation of an Infosphere that contains informational entities, whether human or artificial, agents or patients. Many, if not most, of these entities involve language, broadly construed. Through appropriating Peirce’s semiotics, new dynamics of how the Infosphere functions can be demonstrated by approaching these entities as dynamic, active sign-systems.

To explore this terrain, the paper first briefly considers other uses of semiotics in the philosophy of technology and why a semiotic approach is significant. The second section provides three sketches of what a Peircean approach involves. Rather than laying out a full, detailed theory, the paper introduces examples that show the promise of his thought when applied to technology, introducing his typology of signs, the concept of endless semiosis, and a semiotic conception of human identity. The first sketch considers the concept of semiotic depth and how signs can form systems that allow for the creation of an immersive Infosphere. The second examines how this can be understood as sign systems caught up in the endless interpretation of other signs. Lastly, Peirce’s claim that signs and humans reciprocally construct each other will be reexamined in light of technology. The conclusion will summarize how these sketches illuminate Floridi’s Infosphere and the role of humans within it in important and novel ways.

 
8:45am - 10:00am(Papers) Responsibility
Location: Auditorium 2
Session Chair: Jordi Viader Guerrero
 

Beyond acceptance: expanding the desirability assessment of potable water reuse

Karen Moesker, Udo Pesch

TU Delft, Netherlands, The

Potable water reuse is increasingly taken up as a technological response to water scarcity. Yet, these types of technological systems remain inherently controversial, while implementation projects often face strong public resistance. As a result, there are increasing efforts to foster social acceptance of potable reuse. To date, these acceptance-enhancing strategies have primarily focused on public outreach campaigns. While these approaches can help address misconceptions and build public trust, they frequently rely on the information deficit model and neglect broader ethical dimensions inherent to such technologies.

Scholars in the ethics of technology argue that addressing social acceptance alone is insufficient; responsible innovation must also consider broader normative aspects which may not surface in the public debate but carry long-term implications. As a result, the focus should move from social acceptance alone towards incorporating ethical acceptability alongside it. Yet, assessing the ethical implications of technology development remains inherently challenging because it lacks the clear methods and empirical benchmarks that social acceptance frameworks employ. As such, we propose that ethical acceptability should be based on a reflexive stance, which would make social acceptance and ethical acceptability complementary approaches, compensating for each other’s weaknesses.

To pursue such a combination of approaches, this paper introduces a typology linking the substantive and procedural dimensions of social acceptance and ethical acceptability. In this, the substantive dimension entails that social acceptance needs to identify and integrate public concerns into decision-making, while ethical acceptability needs to ensure that these concerns are addressed in a morally sound manner and identify additional issues that might not surface through public discourse alone. The procedural dimension entails that social acceptance strategies focus on how to address socially relevant concerns best. At the same time, ethical acceptability must assess the appropriateness and moral desirability of the processes used to address problems.

We will showcase the workings of this typology in the case of potable water reuse. We find that, on the substantive level, current acceptance-enhancing strategies often consider a narrow and locally confined problem space, thereby overlooking international and intergenerational implications of potable reuse. Procedurally, it becomes evident that already marginalized groups and their interests (i.e., future generations, the environment, and vulnerable communities) remain under-addressed, although we identified increasing efforts to overcome this issue.



Responsibility Gaps and Engineers’ Obligations in the Design of AI Systems

Yutaka Akiba

Nagoya University, Japan

Autonomous AI systems, such as automated vehicles and Lethal Autonomous Weapons Systems (LAWS), are rapidly advancing and becoming increasingly prevalent in our society. While these technologies offer numerous benefits, they also pose serious ethical challenges. One prominent issue is the so-called “Responsibility Gaps”: situations where no stakeholders (designers, developers, deployers, policymakers, end-users, or even the systems themselves) can be held responsible for the actions or consequences of these technologies. Most existing research on Responsibility Gaps focuses on backward-looking responsibilities, such as liability or culpability. In contrast, forward-looking responsibilities, particularly obligations, have received relatively little attention. Some authors mention forward-looking responsibilities, but their focus is often limited to fostering responsible development cultures or promoting educational programs for engineers (Bonnefon et al., 2020; Santoni de Sio & Mecacci, 2021).

In this presentation, I will argue that engineers can still fulfill their obligations to mitigate harm caused by autonomous AI systems, or address the “Obligation Gap” (Nyholm, 2020), even though predicting these systems’ behavior remains inherently difficult due to their technological properties. Moving beyond the cultural or educational perspectives, I propose that engineers can enhance their performance by employing “moral imagination”: the ability to envision situations in which technological mediations occur, feed this insight back into the design process, and eliminate morally problematic elements while fostering morally desirable ones (Verbeek, 2006).

This presentation is structured into four parts. First, I will briefly review recent developments in autonomous AI systems and provide an overview of existing discussions on responsibility gaps. I will then position the focus of this presentation within the broader landscape of the various types of responsibility gaps.

Next, I will examine the obligations of engineers across different technological domains, drawing on established research in engineering ethics. In particular, I highlight the concept of “Preventive Ethics” (Harris et al., 2019) and show relevant cases where engineers have successfully fulfilled their obligations to avoid harm.

Following this, I will clarify the concept of the obligation gap, comparing it with related terms such as the "Active Responsibility Gap" (Santoni de Sio & Mecacci, 2021). I will refine the definition of the obligation gap by incorporating detailed theoretical frameworks. Zimmerman’s (2014) distinction among objective, subjective, and prospective moral obligations provides a useful foundation for this effort.

Finally, I will propose a potential solution to the obligation gap through design methodologies. While existing studies address backward-looking responsibility gaps by suggesting meaningful human control in design (Santoni de Sio, & van den Hoven, 2018), I aim to articulate specific design requirements to prevent harm, using moral imagination. Predicting harmful scenarios in deployment contexts can be operationalized through scenario-making and ethical assessments involving diverse stakeholders. These tools, when integrated with iterative design processes, should establish a routine obligation for AI engineers, ultimately helping to bridge the obligation gap.

Bonnefon, J.-F., Černý, D., Danaher, J., Devillier, N., Johansson, V., Kovacikova, T., Martens, M., Mladenovic, M. N., Palade, P., Reed, N., Santoni de Sio, F., Tsinorema, S., Wachter, S., & Zawieska, K. (2020). Ethics of connected and automated vehicles: Recommendations on road safety, privacy, fairness, explainability and responsibility. European Commission.

Harris, C. E., Pritchard, M. S., Rabins, M. J., James, R. W., & Englehardt, E. E. (2019). Engineering ethics: Concepts and cases (6th ed.). Belmont, CA: Wadsworth.

Nyholm, S. (2020). Humans and robots: Ethics, agency, and anthropomorphism. Rowman & Littlefield Publishers.

Santoni de Sio, F., & Van den Hoven, J. (2018). Meaningful human control over autonomous systems: A philosophical account. Frontiers in Robotics and AI, 5, 15.

Santoni de Sio, F., & Mecacci, G. (2021). Four responsibility gaps with artificial intelligence: Why they matter and how to address them. Philosophy & Technology, 34(4), 1057–1084.

Verbeek, P. P. (2006). Materializing morality: Design ethics and technological mediation. Science, Technology, & Human Values, 31(3), 361–380.

Zimmerman, M. J. (2014). Ignorance and moral obligation. Oxford University Press.



Mapping the ethics landscape: moral distance in geospatial AI research

Peter Darch

University of Illinois at Urbana-Champaign, United States of America

Embedding ethical principles into the workflows of academic researchers using AI systems remains a critical yet under-addressed challenge. These systems, shaped by the interplay of human and technical factors, generate ethical and social impacts stemming from distributed processes rather than singular decisions. This diffusion of responsibility complicates accountability by spreading ethical obligations across teams, organizations, and workflows (Floridi, 2016). Compounding this is moral distance, where physical, temporal, cultural, and bureaucratic separations reduce individuals' sense of responsibility, undermining ethical engagement (Vanhee & Borit, 2023).

This paper examines the interplay between moral distance and distributed responsibility in shaping researchers’ ethical accountability. Using a longitudinal case study of the Institute for Geospatial Understanding through an Integrative Discovery Environment (I-GUIDE) platform, the study expands existing frameworks by emphasizing two underexplored aspects: the role of pragmatic considerations and the significance of proximity to AI system production. I-GUIDE, a five-year, $16 million US National Science Foundation-funded initiative, builds an AI platform enabling researchers from diverse disciplines to mine and integrate geospatial datasets addressing sustainability challenges.

Findings reveal how Vanhee and Borit's (2023) dimensions of moral distance manifest in multidisciplinary, AI-based academic research. Cultural distance, influenced by disciplinary training, significantly shaped ethical engagement. Researchers with technical backgrounds prioritized computational efficiency and technical rigor, while those in social sciences or geography engaged more with societal impacts. Bureaucratic distance, driven by career hierarchies, further complicated accountability, with early-career researchers deferring ethical considerations to senior colleagues or institutional frameworks. Proximity distance also influenced accountability, with researchers perceiving their work as contributing to societal impacts, such as policymaking, displaying greater ethical awareness than those focused on academic outputs like dissertations.

This paper extends proximity distance to include proximity to system production processes. Researchers directly involved in data collection or model development were more attuned to ethical challenges due to their awareness of ad hoc decisions and quality compromises inherent in these processes. In contrast, those relying on external datasets or models deferred responsibility to data or model producers, emphasizing the role of trust in external sources.

Additionally, the study highlights the contingent nature of moral distance, shaped by pragmatic constraints such as publication pressures, funding requirements, and tight project deadlines. These constraints often led researchers to deprioritize ethical engagement in favor of meeting immediate goals. This finding challenges the notion of moral distance as fixed, demonstrating its dependence on context and situational pressures.

Floridi, L. (2016). Faultless responsibility: On the nature and allocation of moral responsibility for distributed moral actions. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), 20160112.

Vanhee, L., & Borit, M. (2023). Moral distance, AI, and the ethics of care: A framework for understanding ethical responsibilities in sociotechnical systems. AI & Society, 38(1), 13–26.

 
8:45am - 10:00am(Papers) Cyborgs
Location: Auditorium 3
Session Chair: Lotte Asveld
 

Homo Translator: From biological models to bio-inspired robots

Marco Tamborini

TU Darmstadt, Germany

By focusing on biorobotics, my talk explores the epistemological foundations necessary to support the transition from biological models to technological artifacts. To address this transition, I analyze the position of the German philosopher Thomas Fuchs, who offers one possible approach to the problem of the relationship between bio-inspired technology and biology. While Fuchs defends the idea of a unique ontological space for humans, I argue that his categorical distinctions face significant challenges in establishing a robust epistemic foundation capable of grounding the transition from biology to technology.

After identifying at least three interwoven reasons for rejecting Fuchs’ epistemic foundation, I examine how, through what methods, and by means of which practices, the newly bio-inspired object is accessed and shaped. Drawing on the philosophy of science and technology in practice, I argue that the plurality of answers to this question provides a potential epistemological foundation within the diverse frameworks of practices that produce bio-inspired objects. By doing so, I contend that robots and technological objects possess their own validity and mode of existence within the systems of practices that create them. This argument leads me to propose that the transition from biology to technology involves a translation of language games and forms of life, rather than a mere projection of organic forms onto technology. In this context, the human being, through bio-inspired practices, becomes homo translator.

By addressing the epistemological basis for pluralistically grounding the transition from biological models to technological ones, my approach aims to: (i) concretize and examine the relationship between biological and technological models, and (ii) investigate the features and validity of bio-inspired objects. This dual approach offers a more concrete and pluralistic understanding of what bio-inspired sciences and technologies are and what they can (or cannot) do.



A two-dimensional conceptualization of human-technology intimacy: against the notion of cyborg-relations

Bouke van Balen1, Caroline Bollen2

1TU Eindhoven, UMC Utrecht, TU Delft, Netherlands, The; 2TU Eindhoven

In an influential paper, Verbeek has suggested to expand the (post-)phenomenological repertoire of describing human-technology relations with the cyborg relation to capture what is at stake when “the human and the technological actually merge rather than ‘merely’ being embodied” (Verbeek, 2008, p. 391). According to Verbeek, a new entity comes about when humans use implanted technologies such as neurotechnology, antidepressants, or pacemakers. In this paper, we reject the phenomenological and moral validity of the cyborg relation. Instead we suggest the concept of interwovenness (as an additional dimension orthogonal to the existing (post-)phenomenological theoretical scheme) to capture the particular experiential intimacy of some human-technology relations that Verbeek is after.

We argue that Verbeek’s cyborg relation presupposes a problematic picture of the living body as static and closed-off from its environment, which is at odds with a central notion in phenomenology: the distinction between the objective and lived body. The lived body has dynamic boundaries, and cannot be thought of as static and closed-off from its environment (Plessner, 1975). This perspective leads us to argue that we have always been ‘cyborgs’ (De Mul, 2014), and that Verbeek’s cyborg relation is grounded in a phenomenologically questionable view of the lived body. Moreover, based on empirical findings, we argue that the experience of using implantable technologies can be described by the existing human-technology relations.

Secondly, the term cyborg can be stigmatizing to people who use implantable technologies, which makes the use of the term ethically concerning. Reasoning from critical disability studies, the term has been and could be used in a dehumanizing and othering way (Shew, 2022). In light of these two concerns (the descriptive/phenomenological and the normative/ethical), we propose to reject the cyborg relation as a new human-technology relation altogether.

Still, we are sympathetic with Verbeek’s fingerspitzengefühl that something phenomenologically and ethically distinct is at stake with the technologies that inspired his argument, for example, Brain-Computer Interfaces. We propose to capture this difference in intimacy by adding a second dimension orthogonal to the repertoire of post-phenomenology that spans across the existing human-technology relations: interwovenness. Embodiment, hermeneutic, and alterity relations can all be experienced in a more or less interwoven way, depending on how closely a technology is entangled with our embodied relation to the world. For example, a smartphone is typically not intimately embodied but it is intimately interwoven, and a pacemaker vice versa.

The addition of this second dimension allows us to more accurately capture different unique intimate human-technology connections. And in doing so, this adapted conceptual scheme provides new ways to reveal and argue what is ethically at stake when humans and technologies are intimately related.

De Mul, J. (2014). Artificial by Nature: An Introduction to Plessner’s Philosophical Anthropology. In J. De Mul (Ed.), Plessner’s Philosophical Anthropology: Perspectives and Prospects. Amsterdam University Press.

Plessner, H. (1975). Die Stufen des Organischen und der Mensch. Einleitung in die philosophische Anthropologie.

Verbeek, P.-P. (2008). Cyborg intentionality: Rethinking the phenomenology of human-technology relations. Phenomenology and the Cognitive Sciences, 7, 387–395. https://doi.org/10.1007/s11097-008-9099-x

Shew, A. (2022). How to get a story wrong: Technoableism, simulation, and cyborg resistance. Including Disability Journal, 1(1).



To be a cyborg: An autobiographical narrative about the intimate impact of neurotechnology

Trijsje Franssen

TU Delft, Netherlands, The

This paper is an appeal to philosophers of technology and engineering to more often integrate first-person stories, narratives and creative methods in research as well as education. I argue this would make more space for questions of personal experience, embodiment and existence – crucial philosophical questions, yet often underrepresented. In order to investigate how this could be done, I will first tell an autobiographical narrative about the intimate impact of neurotechnology, and second discuss a pilot course in which my engineering students wrote their own science fiction story. I present both cases as potentially useful material in education and research, and will ask the audience for critical feedback.

Autobiographical stories as well as fictional narratives stimulate imagination, understanding and empathy. They provide a means to imagine a variety of scenarios, take different perspectives, empathise with the social and personal experiences of others, and reflect upon possible consequences. Narratives can be a strong incentive for debate. I argue they will make engineering philosophers and students more aware of the intimate impact of contemporary as well as future technological developments, which could positively influence engineering theory, research and practice.

First, I will share a personal story of what I call my ‘cyborg-experience’. It is an autobiographical narrative about the physical, emotional and existential impact of neurotechnological interventions. In order to make sense of my experiences, I make use of Nancy's concept of ‘the intruder’. I show how my brain, deep brain electrodes and surgical scalpels can be understood as intruders of my embodied self. My aim is to demonstrate how first-person stories may contribute to reflection upon the intimate impact of neurotechnology.

Second, I will show how fictional narratives could be used in education to address questions of science and technology on a more profound level. I will discuss ‘The Laboratory of Science Fiction’, a pilot course in which my engineering students wrote their own science fiction story in order to ‘experiment’ with the future possibilities of science and technology. The process of creative writing and thought experimentation successfully functioned as a means to critically reflect not only upon more general moral and social issues, but also upon their personal relationship with technology. It was a means to investigate their responsibilities as engineers, and to identify and express their own emotions.

As said, I wish to conclude with a discussion with the audience on the two cases.

 
8:45am - 10:00am(Papers) Trust
Location: Auditorium 4
Session Chair: Federica Russo
 

Human Trust and Artificial Intelligence: Is an alignment possible?

Angelo Tumminelli1,3, Federica Russo2,3, Calogero Caltagirone3, Dolores Sanchez3, Antonio Estella3, Livio Fenga3

1Lumsa University, Italy; 2Utrecht University; 3Solaris Project

This contribution aims to examine the challenging relationship between Artificial Intelligence and trust, where trust is understood as one of the most crucial and defining activities in human relationships. This collaborative effort is the result of a multidisciplinary approach intertwining ethical-philosophical, legal and technical aspects. The question of whether we humans can trust machines is not new, but the debate is taking on a whole new dimension because of the newest generation of AI systems, so-called generative AI, with extraordinary capabilities in terms of computational power, mostly related to the ability to generate novel outputs from a given prompt. We will not go into the most technical issues associated with generative artificial intelligence models, but what they generate and how they generate it call for a re-assessment of the notion of trust and of the corresponding characteristic of ‘trustworthiness’ that is required by an increasing number of legal acts and norms, not least the EU AI Act.

With the present contribution, we intend to explore the possibility of aligning the anthropological category of ‘trust’ with the human-machine relationship and, consequently, to clarify whether, from an ethical and scientific point of view, it is possible to extend the experience of trust to interactions between individual subjects and artificial intelligences. In the interweaving of philosophical knowledge, normative approaches and statistical models, the idea of technological trust is rendered here in all its difficulty but also in all its potential for the expressions of the techno-human condition. From the point of view of this paper, we speak of “trust” only for the interpersonal relationship; in the human-AI relationship it is better to speak of “reliability”. This is to be understood both in a performative sense (an artefact is reliable if it works well) and in a second sense: reliability can also be an extension of trust (trustworthiness), i.e. by using the machine or the AI I feel trust not in the object but in the person who designed it, who is a human being. In this sense, we understand trustworthiness as an extension of interpersonal trust.



A Foul Stain? Trust in digital data reconsidered with Zuboff and Kant

Esther Oluffa Pedersen

Roskilde University, Denmark

In her seminal 2022 article Surveillance Capitalism or Democracy?, Shoshana Zuboff points to Google’s invention in 2001 of the extraction of data from users’ actions online as the “illegitimate, illicit and perfectly legal foundation of a new economic order.” As Google turned its search engine into a data extraction machine and other tech companies soon followed suit, a new double-faced digital economy opened in which commercial data brokering (worth 389 billion U.S. dollars in 2024) and state surveillance (think Snowden 2012) have ever since gone hand in hand. The extraction of innocuous data from individual users’ movements on web pages can be cumulated to create large-scale models of human behavior, to be either sold as information on consumer behavior or used by state intelligence agencies as surveillance of citizen behavior. This is the core of surveillance capitalism. It is based on data extraction as “the original sin of secret theft”. States have avoided regulating data extraction as it apparently offered a convenience to users, who were met with the ease of personalized consumption, while boosting the digital economy and providing state intelligence agencies with new and better tools to ensure security in the post-9/11 world of terrorism.

In the presentation I employ a Kantian conception of trust (see O'Neill 2002, Pedersen 2013, Myskja 2024) to argue that unregulated data extraction has corroded citizens’ possibility for moral trust online. The secret theft of user data makes up a foul stain corrupting the trustworthiness of private tech companies as well as states. The dubious trustworthiness of the providers of the digital infrastructure entails a pragmatic foundation of trust relations in the online realm, in which self-interest predominates, often in the form of economic advantage. As citizens we are recommended and even forced to undertake important tasks in our lives online, and thus required to leave data trails and contribute “free oil” to run the motor of the digital economy. This – so I will argue in my oral presentation – leaves citizens either in a state of digital resignation (Draper and Turow 2019) or in more or less frantic attempts at obfuscation of our data trails (Brunton and Nissenbaum 2015), or confines us to lazily trusting (Pedersen 2023) that the data extracted from our online interactions are handled in ways that are not detrimental to our autonomy.

Literature:

Brunton, Finn, and Helen Nissenbaum. Obfuscation: A user's guide for privacy and protest. Mit Press, 2015.

Draper, Nora A., and Joseph Turow. "The corporate cultivation of digital resignation." New media & society 21, no. 8 (2019): 1824-1839

Myskja, Bjørn, “Public Trust in Technology – A Moral Obligation?,” Sats. Northern European Journal of Philosophy 23, no. 1 (2024): 11-128

O’Neill, Onora. Autonomy and Trust in Bioethics. Cambridge: Cambridge University Press, 2002

Pedersen, Esther Oluffa. “A Kantian Conception of Trust.” Sats. Northern European Journal of Philosophy 13, no. 2 (2013): 147-169

Pedersen, Esther Oluffa. “The Obligation to be Trustworthy and the Ability to Trust: An Investigation into Kant’s Scattered Remarks on Trust”, in Perspectives on Trust in the History of Philosophy, ed. David Collins et al. (2023): 133-156

Zuboff, Shoshana. “Surveillance capitalism or democracy? The death match of institutional orders and the politics of knowledge in our information civilization.” Organization Theory (2022): 1-79



A network approach to public trust in generative AI

Andrew McIntyre1, Federica Russo2, Lucy Conover2

1University of Amsterdam; 2Utrecht University

A foundation of European AI policy and legislation, the Trustworthy AI framework presents a holistic approach to addressing the challenges posed by AI while promoting public trust and confidence in the technology. Primarily, the framework aims to promote the trustworthiness of all those actors and processes involved in the AI lifecycle by introducing robust ethical requirements for businesses, as well as technical and non-technical strategies to ensure these requirements are implemented. While succeeding in promoting trustworthy industrial practices, this paper argues that the framework is limited in scope and does not sufficiently account for the increasingly significant social role that AI technologies play in our daily lives. Generative AI technologies are now capable of convincingly replicating modes of human communication and can thus contribute to our collective knowledge by building new socio-political narratives, altering our interpretations of events, and shaping our values through discussion and argumentation. As such, these technologies are not simply industrial products or services to be regulated but, rather, they exist as active social actors that interact with human actors in unprecedented ways. To better account for the social role of generative AI, this paper develops a network approach to public trust in AI that is based on the philosophy of information and Actor-Network Theory (ANT). Moving away from traditional notions of interpersonal trust, this paper argues that trust is established and maintained first and foremost by the material interactions between social actors involved in a network. From this perspective, trust in generative AI is dependent upon a vast and precarious network of social actors that extends far beyond the AI lifecycle to include more diverse actors such as government institutions, media organizations, and public officials. As such, promoting public trust in AI is no longer solely a matter of establishing trustworthy industrial practices and this network approach would enable policymakers to identify novel policies and initiatives that build upon and augment the Trustworthy AI framework. Notably, this paper highlights how public trust in AI is fundamentally linked to trust in our broader information environment and is thus threatened by the current post-truth political crisis. The paper concludes that to effectively promote public trust in AI and to reap the societal benefits of this technology, we must first seek to establish a more trustworthy information environment and to restore public confidence in democratic institutions and processes.

References:

AI HLEG (2019) "Ethics Guidelines for Trustworthy AI" European Commission High-Level Expert Group on Artificial Intelligence.

Bisconti P, McIntyre A, and Russo F (2024) "Synthetic socio-technical systems: poiêsis as meaning making" Philosophy & Technology 37.

Latour B (2005) Reassembling the Social: An Introduction to Actor-Network Theory. Oxford University Press.

Russo F (2022) Techno-Scientific Practices: An Informational Approach. Rowman & Littlefield.

 
8:45am - 10:00am(Papers) Generative AI and risk
Location: Auditorium 5
Session Chair: Christa Laurens
 

Memes, generative AI, humor, and intimacy: collective empowerment through shared laughter

Alberto Romele1, Fabrizio Defilippi2

1Sorbonne Nouvelle University, France; 2University of Paris Nanterre, France

In this presentation, we explore memes about generative AI, arguing that their humor serves as a collective empowerment strategy in the face of uncertainties and fears associated with these technologies. This thesis builds on a broad body of literature addressing the sense of intimacy fostered by memes. For instance, Neghabat (2021) argues that humor is a deeply subjective and intimate experience; thus, finding something funny together (such as sharing memes) creates a sense of intimacy and community—sharing discomfort and addressing negative experiences offers emotional relief and support. In the specific case of generative AI, memes could become a way to build collective knowledge by playing with the intimate fears that haunt our imaginations (Goujon & Ricci, 2024). In other words, memes can potentially become a means of introspection about our uses of generative AI and provide landmarks in response to the rapid pace of recent changes in the field.

Our intervention is structured into three parts. In the first part, we discuss the relationship between memes and sociotechnical imaginaries (Jasanoff & Kim, 2015). In particular, we reference Castoriadis’s concept of the imaginary institution of society to demonstrate how memes contribute to this process, showing us the social meanings surrounding contemporary technological trajectories. In the second part, we provide an empirical analysis and classification of generative AI memes, based on data from two repositories: Imgflip and Know Your Meme. Among other categories, we distinguish between apocalyptic memes, those that deconstruct expectations toward AI, memes about education and labor, and anti-capitalist memes.

In the third part, we substantiate our central thesis: despite their differences, all generative AI memes participate in a collective exorcism of fear through humor. Generative AI technologies, which permeate our lives and mimic human behaviors, evoke a profound sense of intimacy that is both unsettling and collective. It is precisely this shared awareness of their pervasive presence and the vulnerabilities they expose—at physical, personal, and social levels—that drives the creation of collective exorcisms like memes. Far from being trivial, these humorous artifacts allow us to process the risks, redefine ethical frameworks, and reclaim agency over innovations often framed as inevitable. The question we will address at the conclusion of this presentation is as follows: Can we assert that memes about generative AI represent a potential resource for fostering forms of resistance and agonism? Or must we instead recognize that, since misery loves company, this intimacy ultimately serves to cultivate passive acceptance of the status quo?

References:

Castoriadis, Cornelius. The Imaginary Institution of Society. Translated by Kathleen Blamey. Cambridge, MA: The MIT Press, 1987.

Galip, Idil. Methodological and epistemological challenges in meme research and meme studies. In Internet Histories 8(4), 312-330, 2024.

Goujon, Valentin & Ricci, Donato. “Shoggoth with Smiley Face: Knowing-how and letting-know by analogy in artificial intelligence research”. In Hybrid. Naissance et renaissance des mèmes. Vies et vitalités des mèmes, edited by Laurence Allard et al., 2024. https://journals.openedition.org/hybrid/4880

Jasanoff, Sheila & Kim, Sang-Hyun (eds.). Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power. University of Chicago Press. 2015.

Neghabat, Anahita. “Ibiza Austrian Memes: Reflections on Reclaiming Political Discourse through Memes.” In Critical Meme Reader: Global Mutations of the Viral Image, edited by Chloë Arkenbout, Jack Wilson and Daniel de Zeeuw, 130-142. Amsterdam: Institute of Network Cultures, 2021.

Rogers, Richard, & Giorgi, Giulia "What is a meme, technically speaking?” In Information, Communication & Society, 27(1), 73–91, 2023.



‘Trust the Machine?’: Conceptualising Trust in the Age of Generative Artificial Intelligence

Pia-Zoe Hahne

University of Vienna, Austria

To accept a new technology, we first need to trust it. With AI, there is not just one specific kind of trust that we put in the system; instead, it is a “multidimensional construct, including trust in functionality, trust in reliability, and trust in data protection” (Wang, Lin & Shao, 2022, p. 340). Current discussions surrounding trust in philosophy of technology focus mainly on the question ‘Is the concept of trust applicable to AI?’. Authors such as Ryan (2020), Alvarado (2023), Brusseau (2023) or Pink et al. (2024) argue against the usage of trust. Others argue for a purely epistemic view of trust relations in AI, based also on the argument that an AI system is not a moral agent (Alvarado, 2023; Ryan, 2020). However, I aim to counter this narrative. While discussions surrounding questions such as ‘Is the concept of trust applicable to AI?’ are necessary, they should not be the sole focus of trust research, as these discussions do not alleviate the actual problems that are raised by the usage of trust for AI systems.

The uncertainties surrounding trust exemplify the disruptive nature of AI technologies while debates focussing on the appropriateness of trust fail to consider that the concept is already widely used in praxis-oriented contexts. Despite not being moral agents, AI systems such as AI chatbots are often perceived as having moral agency, meaning a relation that seemingly goes further than just knowledge exchange. The uncertainties around the appropriateness of the concept of trust in AI and its conceptual challenges demonstrate the disruptive nature of AI technologies themselves. How should the concept of trust in relation to AI look? And is trust even the right concept?

Approaches to studying these conceptual disruptions often disregard the involvement of stakeholders. This is where a new approach to engaging with conceptual disruptions comes in: conceptual engineering, an emerging approach in philosophy of technology. Trust is an ideal concept for conceptual engineering, as it forms the basis for other concepts, and disruptions to it therefore have far-reaching consequences. Löhr (2023) and Marchiori & Scharp (2024) specifically point out that studying these disruptions necessitates empirical data, demonstrating a new turn in engaging with conceptual disruptions. The influence technology has on trust is not new. However, the intense disruptions brought about by AI present new challenges, both by moving away from a purely epistemic view of trust in technology and through their far-reaching consequences for trust between people and trust in institutions.

References

Alvarado, R. (2023). What kind of trust does AI deserve, if any? AI and Ethics, 3, 1169-1183.

Brusseau, J. (2023) From the ground truth up: doing AI ethics from practice to principles. AI & Society, 38, 1651–1657. https://doi.org/10.1007/s00146-021-01336-4

Löhr, G. (2023). Conceptual disruption and 21st century technologies: A framework. Technology in Society, 74, Article 102327.

Marchiori, S. & Scharp, K. (2024). What is conceptual disruption? Ethics and Information Technology, 26(1), Article 18. https://doi.org/10.1007/s10676-024-09749-7

Pink, S., Quilty, E., Grundy, J. et al. (2024). Trust, artificial intelligence and software practitioners: an interdisciplinary agenda. AI & Society. https://doi.org/10.1007/s00146-024-01882-7

Ryan, M. (2022). In AI We Trust: Ethics, Artificial Intelligence, and Reliability. Science and Engineering Ethics. https://doi.org/10.1007/s11948-020-00228-y

Wang, X. Q., Lin, X. L. & Shao, B. (2023). Artificial intelligence changes the way we work: A close look at innovating with chatbots. Journal of the Association for Information Science and Technology, 74(3), 339-353.



The Concept of AI Risk

Lieke Fröberg

University of Hamburg, Germany

As various types of AI-applications have taken the world by storm, the concept of risk seems to have taken an increasingly central role in discussions on their (potential) negative impacts. However, the notion of risk has a variety of conceptualizations that can be at odds with each other (e.g., theoretical versus empirical, realist versus constructivist) (Althaus, 2005; Lupton, 2024; Renn, 2008). Moreover, the specific way in which AI risks are understood can have important consequences, in particular when translated into policy, as is the case in the risk-based approach central to the European AI Act. Thus, to improve the interdisciplinary understanding of the conceptualization of AI risk, I conduct a systematic literature review following Okoli (2022). The goal is to map and theorize the ways in which AI risk is defined, characterized, categorized, and measured. The leading research question is: How is AI risk currently understood and studied?

My findings confirm an increase in research on this topic since 2017. I find that multiple disciplines operationalize the concept of AI risk, each working from plausible, yet markedly distinct and conceptually conflicting, points of departure. Of these proposals, only a few substantiate their approach. Interestingly, rather than working in silos, there seems to be a shared use of the concept. In addition, I find that the majority of papers have an implicitly realist understanding of AI risk, with a large minority of critical realist and just a handful of constructivist-leaning suppositions. I also map the main topics of interest (including the AI Act, questions around existential risk, and risk perception) as well as common methodological approaches.

In the discussion, I argue that AI risk can be understood as a boundary object (Star, 1989, 2010), which can be adapted to local research contexts whilst retaining a diffuse, broad understanding. This balance between plasticity, allowing for specific scientific methods, and robustness across disciplines helps explain the possibility of collaboration without consensus on the precise conceptual underpinning of AI risk. Moreover, I argue that while realist approaches dominate the debate, constructivist approaches could further discussions on political and ethical themes. One such theme is the question of acceptable risk, which is relevant in a variety of ways but remains under-researched at this point.

This paper addresses an important conceptual development in the field of AI ethics and governance, offers a timely reflection on how to interpret the differing meanings of the concept of AI risk, and points towards fruitful future research endeavors.

References

Althaus, C. E. (2005). A Disciplinary Perspective on the Epistemological Status of Risk. Risk Analysis, 25(3), 567–588. https://doi.org/10.1111/j.1539-6924.2005.00625.x

Lupton, D. (2024). Risk (Third edition). Routledge.

Okoli, C. (2022). Developing Theory from Literature Reviews with Theoretical Concept Synthesis: Topical, Propositional and Confirmatory Approaches.

Renn, O. (2008). Risk Governance: Coping with Uncertainty in a Complex World. Earthscan.

Star, S. L. (1989). The Structure of Ill-Structured Solutions: Boundary Objects and Heterogeneous Distributed Problem Solving. In Distributed Artificial Intelligence (pp. 37–54). Elsevier. https://doi.org/10.1016/B978-1-55860-092-8.50006-X

Star, S. L. (2010). This is Not a Boundary Object: Reflections on the Origin of a Concept. Science, Technology, & Human Values, 35(5), 601–617. https://doi.org/10.1177/0162243910377624

 
8:45am - 10:00am(Papers) Quantified lives
Location: Auditorium 6
Session Chair: Wybo Houkes
 

Quantified self and society of control

Armen Khatchatourov

DICEN - IdF Lab, University Gustave Eiffel, France

The practices of the Quantified Self (QS) reveal a specific hermeneutic of the self, based on the subtle choices made by individuals in the collection, dissemination and interpretation of data.

The optimistic argument would be based on the opposition between the way Big Data works (n = all; sample covering the entire population) and the way QS works (n = 1; sample covering a single individual). In QS, each individual works on his or her own data, questioning it on two levels. The first involves the appropriateness of the data in relation to established categories, and contributes, for example, to the individual's own understanding of 'health' which does not necessarily correspond to that of the medical institution. The second is to question the automation of data processing, making decisions about its relevance on a case-by-case basis, and sometimes putting it on hold.

This would result in an idiosyncratic approach, and the impression of the emergence of the very categories according to which the data is understood, categories which do not boil down to what is imposed on the subject from outside. In this sense, these QS practices go beyond the disciplinary paradigm developed by Foucault: here, the internalisation of the norm is no longer simply dictated by a top-down process, but is initiated by the subject himself. One might see in this movement the missing link between individual practices and increasing digitisation, an outline of “reterritorialization” that gives a personal meaning to digitisation.

But do these idiosyncratic “norms” offer any real potential for emancipation? What are the effects of this process on subjectivity?

We argue that the subject’s technical understanding of the body is extended by the incorporation of these data into the body itself. The feedback loop goes from algorithmically processed data and its visualisation to the effects on the body. The body incorporates what the data shows, and the user's behaviour is modified according to the patterns in the data. It is literally a matter of translating data into sensations.

As with any transduction operation, the two terms – the living body and the processed data – influence each other, and the body also turns into something else. Through this transduction, the incorporation of data bears the trace of digital processing. This amounts to imposing the logic and syntax of this processing on bodily subjectivity. This syntax of the quantification of qualitative values (i.e. of sensation) constitutes a metric of the self in which "the semantic components become digital", to use the expression employed by Deleuze and Guattari in their 1975 lecture on increasing computerization. As they also note, all syntax is a system of order and constraints on the possible. We can see, then, from the horizon of QS practices, that the field of what is felt by subjectivity is exclusively delimited by the measurable and the digitisable, even if it is processed in an 'autonomous' way by the individual.

We need only think of the example of the feeling of happiness: the very act of thematising and making explicit that we are happy by interpreting the data effectively depends on this incorporation. It is not a pre-existing 'happiness' that is revealed by data analysis, but the very category of happiness that is produced by the digital syntax at work.

But the crux of the problem seems to lie elsewhere. The assumption that the norms born of these practices are not imposed from above, that the “incorporation” at work is not that of a “disciplined” body in Foucault's sense, is actually misleading. It fails to take into account the shift from the disciplinary regime to the regime of control or modulation, which was foreshadowed by the late Foucault and developed by Deleuze. The distinctive feature of the modulation regimes to which the neoliberal individual belongs is precisely that they do not impose norms. On the contrary, they institute an operating regime that makes the individual responsible for, and an entrepreneur of, himself, while at the same time modulating, through power mechanisms, what may or may not constitute the horizon of that individual's practices. In QS practices, this modulation takes the form of self-metrics based on digital syntax, leading to the establishment of an immanent, individualized 'norm' which loses its normative character in favor of the modulation of the individual and his or her “machinic enslavement” (Guattari). This individual norm, a 'small variation' that certainly gives the impression of a nascent norm and autonomy, is nonetheless subject to the “devices” that define its contours. As Guattari notes, "you can only state anything about your desire, your life, insofar as it is compatible with the computer machine of the system as a whole [...]".

What we have here, then, is a specific mode of reterritorialising the existential territory that is the body itself. This mode is correlated with what we call the 'granularisation' of behaviour, in the dual sense of an 'individual norm' and the now digital syntax of its understanding, brought about by the advent of 'societies of control’.



The Quantification and Mechanization of Human Beings

Weibo Li

Renmin University of China, China, People's Republic of

The practice of “intimate technology” is fundamentally tied to the practice of quantification, as exemplified by self-quantification technologies. Although self-quantification technologies have been subject to extensive philosophical and ethical critique within the context of big data discourse, a historical perspective on the evolution of quantitative science and technology remains absent. The quantification and mechanization of human beings have been deeply interdependent and mutually reinforcing since the beginnings of early modern science. The rise of modern science introduced a paradigm shift, framing the understanding of humans through the lens of machinery. This shift transformed the concept of homo sapiens into that of homo scientia—a being rendered measurable and subject to control through scientific and technological means.

Unlike traditional perspectives that emphasized internal dimensions such as emotion, mind, or soul, the notion of homo scientia prioritizes external, observable attributes. This reorientation blurred the boundaries between humans and machines, a trend further reinforced by 20th-century behaviorist science, which emphasized the quantification of external human traits as a prerequisite for physical and societal progress. Moreover, homo scientia becomes an object of governmentality, where machine-like laws and frameworks are applied and refined through quantification, both in individual and collective dimensions. While Foucault regarded quantification in population governmentality as primarily population-oriented, advancements in quantitative technologies have shifted governance techniques toward greater individualization and granularity. Consequently, not only is society envisioned as a vast machine governed by discoverable and adjustable laws, but individual human movements and behaviors are similarly conceptualized.

Self-quantification technologies embody this behaviorist framework by equating the self with its data representation, thereby reinforcing mechanisms of governmentality and social control and framing life itself in terms of datafication and quantification. A striking extension of this logic is the growing interest in digital immortality. If the theoretical presuppositions of self-quantification are pursued to their extreme, the self is entirely equated with its data representation. By aggregating and utilizing all human data through artificial intelligence, the prospect of achieving “digital immortality” emerges. Both self-quantification technologies and digital immortality conceptualize humans as quantifiable and modifiable entities, akin to machines, thus reducing individuals—capable of autonomous learning and growth—into “machines” that are vulnerable to obsolescence.

 
8:45am - 10:00am(Papers) Ethics V
Location: Auditorium 7
Session Chair: Andrea Gammon
 

Transcendental Technology Ethics

Donovan van der Haak

Tilburg University, Netherlands, The

Before the dominance of the empirical understanding of technology (i.e., technology as technological artifact), transcendental perspectives on Technology dominated philosophy of technology, represented by philosophers such as Martin Heidegger, Herbert Marcuse, Adorno and Horkheimer, and others. They adopted a transcendental perspective on technology, exploring the a priori conditions and structures that steer and shape how technology is developed, understood and used. During the empirical turn, Science and Technology Studies and a variety of philosophers of technology provided arguments for abandoning transcendental perspectives as overly pessimistic, deterministic, and inattentive to the relevant differences between technological artifacts and what things do (Verbeek, 2021). The empirical turn was later accompanied by an ethical turn, during which technology ethics developed in a descriptive and a normative direction, exploring how technological artifacts mediate or change morality (descriptive) and how ethics can be used to steer the design and use of technological artifacts (normative). Recently, calls have emerged to move beyond the empirical turn (Brey, 2010; Franssen et al., 2016), including calls for a return to the transcendental perspective (Coeckelbergh, 2020). Contemporary transcendental thinkers have argued that technology ethics lacks sufficient self-critique and has become industrially embedded, conformist, and co-opted by the tech industry (Lemmens, 2021). A substantial incorporation of the transcendental perspective into technology ethics remains, however, notably absent from the current literature.

Despite the merits of these transcendental critiques, a transcendental perspective within technology ethics must also do justice to the critique and insights of the empirical perspective. In line with contemporary calls for a return to transcendental thought, I argue that a new ethical framework is needed that is compatible with ethical reflection on technological artifacts and the transcendental horizons surrounding these artifacts (i.e., transcendental technology ethics), as valuable insights of the transcendental perspective have become lost since the empirical and ethical turn. Descriptively, transcendental philosophers of technology deserve more recognition within the literature of, for instance, technomoral change, as they already provided a variety of insights to technological value change. Normatively, I argue that technology ethics should utilize the transcendental perspective to explore and challenge its own presuppositions. I draw on the transcendental perspective to challenge technology ethics’ exclusive understanding of technology as technological artifact.

Finally, I seek to exemplify the value of transcendental technology ethics through a paradigmatic case-study that explores the construction of public values within Dutch municipalities that employ algorithms. Through participant observations, analyses of policy documents and semi-structured interviews with experts and stakeholders within Dutch local governments, the case-study explores how public values are perceived and protected, and how they undergo change. Unlike most approaches in technology ethics, which center on the artifact (i.e., data and AI), transcendental technology ethics allows us to better understand how technological ways of thinking and rationalizing emerge within these and partnering institutions, and how these rationalities lead to ethical arguments in favor of the use of algorithms.

References

Brey, P. (2010). Philosophy of technology after the empirical turn. Techné: Research in Philosophy and Technology, 14(1), 36–48. https://doi.org/10.5840/techne20101416

Coeckelbergh, M. (2022). Earth, technology, language: A contribution to holistic and transcendental revisions after the artifactual turn. Foundations of Science, 27(2), 259–270. https://doi.org/10.1007/s10699-020-09730-9

Franssen, M., Vermaas, PE., Kroes, P., & Meijers, A. W. M. (Eds.) (2016). Philosophy of Technology after the Empirical Turn. (Philosophy of Engineering and Technology). Springer. https://doi.org/10.1007/978-3-319-33717-3

Lemmens, P. (2021). Technologizing the Transcendental, not Discarding it. Foundations of Science, 27(4), 1307–1315. https://doi.org/10.1007/s10699-020-09742-5

Verbeek, P.-P. (2021). The Empirical Turn. In S. Vallor (Ed.), The Oxford Handbook of Philosophy of Technology (pp. 1-21). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190851187.001.0001



Sustainable AI and the third wave of AI ethics: a structural turn

Larissa Bolte, Aimee van Wynsberghe

University of Bonn, Germany

The notion of Sustainable Artificial Intelligence (Sustainable AI), and with it considerations of the environmental impact of AI technologies, have gradually made their entrance into AI ethics discussions. It has recently been suggested that this constitutes a new, “third wave of AI ethics” [1]. The idea of a “wave” is suggestive. It implies a new, distinct phase within the discipline. In this paper, we ask what is entailed by Sustainable AI that should warrant such special accentuation.

We begin with an exploration of the landscape of AI ethics. We argue that three approaches, or waves, can be distinguished: A first approach, which is concerned with the unforeseeable, but possibly substantial long-term risks of AI development (e.g., existential threat [2] or machine consciousness [3]); a second, mainstream approach, which engages with particular, existing AI technologies and their ethical design [4] [5]; and a third, emerging approach, which performs a structural turn. This turn is structural in two senses: For one, it deals with systemic issues which cannot be described at the level of individual artefacts, such as particular algorithms or AI applications. This is then often paired with an analysis of power structures that prevent the uncovering of these issues.

We argue that work on Sustainable AI increasingly instantiates this third approach. This means that this work does not consider particular AI applications, but rather the entirety of the material and social infrastructure that constitutes the preconditions for AI’s very existence, e.g., material hardware, energy consumption, conditions along the AI supply chain, effects of AI on society, etc. (see, e.g., [6], [7]). What is more, some authors in Sustainable AI pair this structural analysis with a political outlook. They are concerned with the higher-level question of why an algorithm-centred perspective is favoured by regulators and in public debate and which issues this perspective obscures (see, e.g., [8], [9]).

Finally, we broaden our own perspective and find that the third, structural approach to AI ethics is not the prerogative of work on Sustainable AI alone. In fact, other subfields of AI ethics have performed that same shift in recent years. We present literature on AI bias and fairness as an illustrative example. Hence, what started out as an investigation into the particularity of Sustainable AI concludes with a much broader outlook. We suggest that we might be looking at a more pervasive structural turn in AI ethics as a whole.

References

[1] van Wynsberghe, A.: Sustainable AI: AI for sustainability and the sustainability of AI. AI Ethics. 1(3), 213–218 (2021).

[2] Müller, V.C., Bostrom, N.: Future progress in artificial intelligence: A survey of expert opinion. In: Müller, V.C. (ed.) Fundamental Issues of Artificial Intelligence. Synthese Library, pp. 553–571. Springer, Berlin (2016)

[3] Parthemore, J., Whitby, B.: What makes any agent a moral agent? Reflections on machine consciousness and moral agency. Int. J. Mach. Conscious. 5(2), 105–129 (2013)

[4] Tsamados, A., Aggarwal, N., Cowls, J., Morley, J., Roberts, H., Taddeo, M., Floridi, L.: The ethics of algorithms: key problems and solutions. In: Floridi, L. (ed.) Ethics, Governance, and Policies in Artificial Intelligence, pp. 97–123. Springer, Cham (2021)

[5] Corrêa, N.K., Galvão, C., Santos, J.W., Del Pino, C., Pinto, E.P., Barbosa, C., Massmann, D., Mambrini, R., Galvão, L., Terem, E., de Oliveira, N.: Worldwide AI ethics: a review of 200 guidelines and recommendations for AI governance. Patterns 4(10), 100857 (2023)

[6] Crawford, K.: The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press, New Haven (2021)

[7] Sætra, H.S.: AI in context and the sustainable development goals: factoring in the unsustainability of the sociotechnical system. Sustainability 13(4), 1738 (2021)

[8] Dauvergne, P.: AI in the Wild: Sustainability in the Age of Artificial Intelligence. MIT Press, Cambridge (2020)

[9] Becker, C.: Insolvent: How to Reorient Computing for Just Sustainability. MIT Press, Cambridge, Massachusetts (2023)



Against an ideal theory of justice for AI ethics

Alice Rangel Teixeira

Universitat Autònoma de Barcelona, Spain

Arguments for adoption of a Rawlsian theory of justice have become prominent in current debates on Artificial Intelligence (AI) ethics (1–7). Some proponents suggest that, while the fair use of AI is central to AI ethics, the prevailing focus on fairness reflects a narrow view, stemming from political theory's failure to critically engage with the complex interplay between technology and social institutions (2,4). Others critique the dominant principlist approach in AI ethics, contending that principles are ineffective without clear and practical guidance (1,8).

This failure to address the political dimensions of technology or provide actionable guidance has drawn criticism of principle-based frameworks for enabling techno-solutionism. Overemphasizing algorithmic de-biasing as fairness reduces fairness to statistical parity (3,9), and overlooks the role of AI systems in shaping the 'basic structure' of society (1,2), consequently failing to account for the broader socio-political implications of AI systems (1). Proponents argue that Rawls’ “fair equality of opportunity” and the “difference principle” provide a robust ethical foundation for AI. Fair equality of opportunity ensures equitable access to opportunities in decision-making (2,3,5), while the difference principle prioritizes the welfare of the least advantaged (1,6,7). Embedding these principles in AI design and governance aims to advance justice across societal structures, aligning with Rawls’s vision of a “well-ordered society.”

While a Rawlsian theory of justice is presented as an alternative to the dominant principlist approach in AI ethics, providing clearer guidance for the application of principles, it nevertheless presents the same problem in its idealized conception of justice (10). These principles were designed for hypothetical, well-ordered societies, and fail to address historical and structural injustices, such as gender and racial oppression (11,12). This idealization limits their applicability to the complexities and inequities of real-world contexts. Moreover, the focus on distributism neglects instances of injustice that are not represented as goods or end-states, such as epistemic injustice. Critiques of ideal theory highlight its inability to adequately account for lived experiences (13), arguing that the ahistoricism and distributism that characterize idealized conceptions of justice obscure historical injustice (11,14), prioritizing resources and goods over the reality of people (13) and thus operating to reinforce oppression (10,12,14).

Incorporating ideal theories of justice into AI ethics raises profound questions about the adequacy of these theories in addressing complex realities that are structurally unjust. This underscores the need for alternative ethical frameworks that prioritize lived experiences and historical context, and that are attentive to the power asymmetries that permeate our society.

References:

1. Westerstrand S. Reconstructing AI Ethics Principles: Rawlsian Ethics of Artificial Intelligence. Sci Eng Ethics. 2024 Oct 9;30(5):46.

2. Gabriel I. Toward a Theory of Justice for Artificial Intelligence. Daedalus. 2022 May 1;151(2):218–31.

3. Franke U. Rawlsian Algorithmic Fairness and a Missing Aggregation Property of the Difference Principle. Philos Technol. 2024 Jul 13;37(3):87.

4. Franke U. Rawls’s Original Position and Algorithmic Fairness. Philos Technol. 2021 Dec 1;34(4):1803–17.

5. Heidari H, Loi M, Gummadi KP, Krause A. A Moral Framework for Understanding Fair ML through Economic Models of Equality of Opportunity. In: Proceedings of the Conference on Fairness, Accountability, and Transparency [Internet]. New York, NY, USA: Association for Computing Machinery; 2019 [cited 2025 Jan 8]. p. 181–90. (FAT* ’19). Available from: https://doi.org/10.1145/3287560.3287584

6. Peng K. Affirmative Equality: A Revised Goal of De-bias for Artificial Intelligence Based on Difference Principle. In: 2020 International Conference on Artificial Intelligence and Computer Engineering (ICAICE) [Internet]. 2020 [cited 2025 Jan 14]. p. 15–9. Available from: https://ieeexplore.ieee.org/document/9361347

7. Leben D. A Rawlsian algorithm for autonomous vehicles. Ethics Inf Technol. 2017 Jun 1;19(2):107–15.

8. Munn L. The uselessness of AI ethics. AI Ethics. 2023 Aug 1;3(3):869–77.

9. Lin TA, Chen PHC. Artificial Intelligence in a Structurally Unjust Society. Fem Philos Q [Internet]. 2022 Dec 21 [cited 2024 Jul 19];8(3/4). Available from: https://ojs.lib.uwo.ca/index.php/fpq/article/view/14191

10. Fourie C. “How Could Anybody Think That This is the Appropriate Way to Do Bioethics?” Feminist Challenges for Conceptions of Justice in Bioethics. In: Rogers WA, Mills C, Scully JL, Carter SM, Entwistle V, editors. The Routledge Handbook of Feminist Bioethics [Internet]. Routledge; 2022 [cited 2025 Jan 13]. p. 27–42. Available from: https://philarchive.org/rec/FOUHCA

11. Mills CW. “Ideal Theory” as Ideology. Hypatia. 2005;20(3):165–84.

12. Jaggar AM. L’Imagination au Pouvoir: Comparing John Rawls’s Method of Ideal Theory with Iris Marion Young’s Method of Critical Theory. In: Tessman L, editor. Feminist Ethics and Social and Political Philosophy: Theorizing the Non-Ideal [Internet]. Dordrecht: Springer Netherlands; 2009 [cited 2025 Jan 14]. p. 59–66. Available from: https://doi.org/10.1007/978-1-4020-6841-6_4

13. Sen A. Equality of What? In: Tanner Lectures on Human Values. Cambridge: Cambridge University Press; 1980.

14. Tronto JC. Moral Boundaries: A Political Argument for an Ethic of Care. Psychology Press; 1993. 244 p.

 
8:45am - 10:00am(Papers) Justice
Location: Auditorium 8
Session Chair: Andreas Spahn
 

Technopolitical mediation and Gezi Park of Istanbul

Melis Bas

University of Amsterdam, Netherlands, The

In this paper, I develop the concept of technopolitical mediation by synthesizing Peter-Paul Verbeek’s (2005) theory of technological mediation and Hannah Arendt’s (1998) political theory. I demonstrate that technopolitical mediation takes place in two steps: technological mediation of common sense and technological mediation of intersubjectivity.

I explain the first step of this technopolitical mediation, the technological mediation of common sense, by discussing the Gezi Park protests in Istanbul and analyzing the active role of the park itself in initiating these protests. By showing how the park created a heterogeneous public during the protests, I extend Hannah Arendt’s political hermeneutics to materiality. I show that technologies mediate common sense, since they are the material form of the cultural memory of the society in which they are embedded: Gezi Park is a political space for the Turkish people, due to its importance in their common cultural context. Subsequently, I show that people in Turkey resisted the threatened demolition of Gezi Park because of its importance and its position in their common cultural framework. Thus, I demonstrate that Gezi Park conveyed common sense, as it is the material form of Turkey’s cultural memory.

Following this first step, in the second step of technopolitical mediation, I discuss the mediating role of technology in intersubjectivity. After demonstrating the interconnectedness of political subjectivity and intersubjectivity, I show that the political community that the political subject inhabits has a conditioning force on that individual. This allows me to argue that in order to understand the role of technology in political interaction between political subjects, it is necessary to consider the type of political community that is promoted by the material space that surrounds that community. Having presented these distinctions, I concentrate on delineating the role of technology in this interplay between acting agents and in the intersubjective relationships between acting agents.

I demonstrate that in order to observe the technological mediation of intersubjectivity, it is first necessary to observe the technological mediation of common sense. Even if everyone who deals with a technology will inevitably draw on common sense, the interaction between the subject and the intersubjective community depends, to a certain extent, on the multistable design of the respective technology. In addition, the existence of a political community that appropriates a space as public space also depends on the context in which common sense is mediated. I show that the material space in which people meet mediates both the way in which a community is created and how one becomes part of a community.

References

Arendt, H. (1998). The Human Condition; (2nd ed). Chicago, IL: University of Chicago Press (Original work published 1958)

Verbeek, P.P. (2005). What things do: Philosophical reflections on technology, agency, and design. Penn State Press.



Rethinking data ownership: Towards relational approaches to property

Aditya Singh

University of Edinburgh

Debates over data governance often emphasise the need to clarify property and ownership rights over data. A central focus in these discussions is whether introducing property rights over data may address the challenges posed by the digital economy. However, these debates frequently overlook a central aspect of contemporary data analytics: the aggregation of data and the generation of group-level insights, which render the individual incidental to analysis. Once aggregated, data typically falls outside the purview of regulatory scrutiny and ownership discussions, becoming de facto the property of technology providers.

This paper argues that discussions on data ownership tend to operate within a constrained understanding of property and ownership. Specifically, they equate property with private ownership, largely framing the debate in binary terms: private property rights over data versus no property rights at all. This binary framing limits the potential for more nuanced approaches to data governance. The assignment (and modification) of property rights carries significant normative implications. It shapes how the interests of various parties in data are enumerated and reflected, how the balance between these interests is evaluated, and how the normative implications of any modifications or interferences are understood. The framing of ‘property’ in the context of data governance has not been without critique, but it may more closely capture the political economy dimensions of data infrastructures. The property rights lens can remain useful in centering questions of value and extraction.

Drawing on three alternative property frameworks—the bundle of rights approach, progressive property theory, and collective property theories— this paper proposes a reimagined understanding of property rights over data. The bundle of rights approach emphasizes the malleable and distributed nature of property rights, while progressive and collective property theories foreground relationality. Common to these perspectives is a conception of property not merely as a relationship between a person and a resource, but as a matrix of relationships among actors concerning a resource. This view shifts the focus from individualised exclusive rights of exclusion and alienation towards the power dynamics embedded in entitlements over data.

Relational and collective approaches to property can move beyond the limitations of individual-centric property models. This framework not only accounts for the complexities of data infrastructures premised on aggregation but also foregrounds the normative considerations that arise in assigning property rights, offering a path toward more equitable governance models in the digital economy.



The limits of empathy as a design principle for intimate technologies: Wearable age-simulation devices

Prabhir Vishnu Poruthiyil

Indian Institute of Technology Bombay, India

There is widespread acceptance of empathy as a principle to guide the design of technology towards moral goals like inclusivity and social justice. Concepts such as ‘digital empathy’ (Terry and Cain, 2016), developed to foster concern for others in online interfaces (particularly in e-health, immersive digital journalism, and interactive documentaries), virtual reality (Andrejevic and Volcic, 2020; Hassan, 2020), and empathy tools that combine analog and digital components (Pratte, Tang, & Oehlberg, 2021; Felts, 2023) are examples of where empathy is a core orientation for technology design.

These empathy tools intend to communicate to the user/wearer the challenges that underprivileged and/or exploited individuals and groups experience as a result of biological factors (Felts, 2023) or social conditions (Hassan, 2020). They simulate vulnerabilities related to disability, period pains, policing of street protests, online harassment, hazardous work, and ageing in order to orient individuals and thereby our societies and policies towards inclusion and social justice.

These designs assume that empathy is intrinsic to morality and justice. This paper argues that this assumption is wrong.

I draw from the debates triggered by philosopher Paul Bloom’s (2016) arguments on the limits of empathy and its unintended consequences. Designers of empathy tools adopt a common definition – of being able to experience another person’s feelings; a definition that, Bloom argues, often clashes with fairness. Empathy is emotionally triggered for an identifiable victim (e.g. through images used in fundraising by charities), which results in the misallocation of resources away from avenues that may have benefited a larger number of persons. In cases where there are no identifiable victims, such as future victims of climate change, empathy is weakly triggered. More relevant for its application in designing intimate technologies is Bloom’s argument that empathy also has an ingroup bias, as it is usually triggered when the subject can identify with the victim through race, caste, religion, or nationality. Further, empathy need not have lasting impacts. More often than not, Bloom argues, the subject's feeling of another person’s pain is temporary, without any meaningful change in behavior.

What are the implications for intimate technologies? For instance, consider the tools used to generate age-related vulnerabilities in younger/healthier persons, intended to mobilize support for accessible public spaces (Felts, 2023). Class membership, in these highly unequal times, will influence the experience of empathy, and elite wearers may direct attention to affluent parts of a city and exclusive spaces, at the expense of public spaces and parts of a city where the majority lives and works. These tools also encourage the individualization of responsibility and ignore the need for public investments that would benefit a larger share of citizens. Elites targeted with VR tools to ‘experience’ homelessness (Andrejevic and Volcic, 2020) and hazardous work that they would almost never encounter in real life are unlikely to transform into champions of higher taxation and public welfare.

Uncritical adoption of empathy is common in design (Devecchi and Guerrini, 2017). Acknowledging these downsides, I argue, will give designers of intimate technologies a more nuanced understanding of empathy and show that the consequences of its adoption can be contrary to its stated aims.

References

Andrejevic, M., & Volcic, Z. (2020). Virtual empathy. Communication, Culture, and Critique, 13(3), 295-310.

Bloom, P. (2017). Against empathy: The case for rational compassion. Random House.

Devecchi, A., & Guerrini, L. (2017). Empathy and Design. A new perspective. The Design Journal, 20(sup1), S4357-S4364.

Felts, A. (2023). Unique MIT suit helps people better understand the aging experience, MIT News, https://news.mit.edu/2023/unique-mit-suit-helps-people-better-understand-aging-experience-0120.

Hassan, R. (2020). Digitality, virtual reality and the ‘empathy machine’. Digital Journalism, 8(2), 195-212.

Pratte, S., Tang, A., & Oehlberg, L. (2021, February). Evoking empathy: a framework for describing empathy tools. In Proceedings of the Fifteenth International Conference on Tangible, Embedded, and Embodied Interaction (pp. 1-15).

Terry, C., & Cain, J. (2016). The emerging issue of digital empathy. American Journal of Pharmaceutical Education, 80(4), 58.

 
10:05am - 11:20am(Papers) Ontology
Location: Blauwe Zaal
Session Chair: Ibo van de Poel
 

Information as dispositions: an ontological analysis

Mitchell Roberts

Texas A&M University, United States of America

Information is commonly understood with reference to data or signals (Shannon 1948; Dretske 1981; 2008; Landauer 1996). For example, Luciano Floridi summarizes the General Definition of Information (GDI) as “well-formed, meaningful data” (2013; 2010). Data here refers to the (typically physical) entities that are “manipulated” or “transformed” into information. Examples include DNA, high and low voltages in a computer, and English words on a piece of paper. It is likewise common to consider these objects as “containing” or “embedding” information. But what do we mean by this? Is information something “over and above” the physical entities that carry it, or is it somehow reducible to physical phenomena? In this essay, I aim to answer these questions. Namely, I argue that information can be reduced to physical dispositions. Dispositions are physical properties of objects that are typically understood in terms of counterfactual conditionals – “X is disposed to Φ when M iff X would Φ if it were the case that M.” Some physical objects (like DNA and tree rings) are disposed to play a certain functional role in a given system, and it is these dispositions that we refer to when we speak of information. For instance, consider a basic digital calculator. In this case, the high and low voltages (the electrical bits) are the data and the hardware of the calculator is the relevant system. The high and low voltages contain information because they are disposed to interact with the hardware in such a way that certain calculations are made. An important consequence of this view is that information is mind-independent – it exists even in the absence of any human perceivers.



The Picture of Existence: Ontological commitments and existential trade-offs in the age of intimate technologies

Ângelo Nunes Milhano

University of Évora, Portugal - Praxis: Centre of Philosophy, Politics and Culture

The intimate technologies we have come to depend upon — such as smartphones, wearables, smart glasses, or VR glasses (maybe, in the foreseeable future, even smart brain implants powered by A.I.) — seem to be able to create an image of human behavior through algorithmic interpretation of routines and social interactions, promoting what, drawing inspiration from Heidegger’s "The Age of the World Picture" (1950), we will refer to as a “picture of existence”. The existential trade-offs and ontological commitments underlying the use and mass appropriation of this type of digital technology create a very particular opening of the world, through which they subtly exert power over their users by shaping their perceptions and actions in accordance with a specific understanding of what it means to be human: a producer and/or consumer of data.

The mediation of our identity by these technologies appears to foster a “hyperreal” understanding of our subjective experiences. As a result, individuals have started to prioritize their curated digital personas over genuine engagement with the real world. The digital age’s “picture of existence” has increasingly replaced the necessary confrontation with our individuality with a compulsive need for instant connectivity, thereby amplifying psychological distress and fostering existential indifference.

While opening new possibilities of being-in-the-world, these technologies appear to have deeply altered human subjectivity and existential experience. This shift is exemplified by the pervasiveness of the digital representation of the self, which fosters dependence on constant connectivity and diminishes opportunities for authentic self-reflection and connection with others. Drawing on existential, phenomenological, and postphenomenological perspectives on technology, including works by Heidegger, Stiegler, Yuk Hui, and Ihde, among others, the paper proposed here discusses how intimate technologies can constrain our authentic selfhood by imposing predefined frameworks for interaction with the world and the other beings/entities we share it with. This aligns with Heidegger’s notion of “enframing”, through which he criticized technology’s role in reconfiguring human relations with the world by rendering it a resource available for instrumentalization. The paper calls for philosophical reflection on the existential consequences of the mass assimilation of intimate technologies into our lives and the potential loss of authentic selfhood in the digital age. We argue that, by critically examining these technologies’ inherent existential trade-offs and underlying ontological commitments, individuals might be able to reclaim the ontological grounding of their selfhood and resist the passive acceptance of the “picture of existence” that these technologies impose.



Re-ontologising psychiatric illness using deep learning: ethical concerns beyond the clinic

Emily Postan

University of Edinburgh, United Kingdom

Problem and argument

Should we welcome the use of deep-learning (DL) to (re)classify psychiatric diagnostic and disease risk categories, by identifying underlying patterns in neurobiological and other health data?

This question could be answered solely from a clinical perspective – would data-driven DL-generated psychiatric nosology result in better healthcare and clinical outcomes [1]? This paper argues, however, that this delivers only a partial picture of ethically significant considerations. It demonstrates how mental health diagnoses and risk profiles function not only as clinical tools: insofar as they constitute human kinds, they also play key roles in our personal and social identities, and in shaping our social environments [2]. I argue, therefore, that ethicists must also ask whether DL-generated psychiatric categories trained on neurodata (and other biodata) would serve the interests of those thus classified beyond the clinic. Moreover, I explain why these DL-generated categories that are treated as human kinds are likely to exhibit several problematic features, including opacity, abstraction from lived experience, and amenability to bio-essentialism.

I conclude that these problematic features mean that DL-generated psychiatric classifications that perform well for their intended clinical purposes could nevertheless fail us when it comes to fulfilling the wider epistemic and practical functions of human kinds – particularly by failing to support the needs of the people classified to understand their experiences and navigate their socially embedded lives.

This paper exposes the limits of current health AI ethics debates by highlighting the way that new diagnostic categories reontologise our world, beyond the clinic. It provides fresh reasons for tempering enthusiasm about the value of DL-generated nosology in psychiatry, and offers conceptual and normative tools with which we can ask whether DL-driven diagnostics would really serve the needs of those diagnosed.

Background

There is considerable optimism that DL could provide new data-driven bases for (re)categorising and subdividing diagnostic and prognostic categories [3]. This method might seem to offer particular benefits in psychiatry – where the boundaries of disease categories and reliability of diagnoses are notoriously contested [4]. This is, therefore, an important juncture to ask whether these healthcare applications of DL are ethically desirable.

Method

This paper is grounded in bioethical and conceptual analysis, drawing on scholarship in social ontology concerning the construction and nature of human kinds [5, 6] and work on embodied identity-making [7]. It is also informed by empirically-grounded understandings of the ways that health categories influence identity-making [2].

References

[1] Wiese, W., & Friston, K. J. (2022). AI ethics in computational psychiatry: From the neuroscience of consciousness to the ethics of consciousness. Behavioural Brain Research, 420, 113704.

[2] Postan, E. (2021). Narrative devices: Neurotechnologies, information, and self-constitution. Neuroethics, 14(2), 231.

[3] MacEachern, S. J., & Forkert, N. D. (2021). Machine learning for precision medicine. Genome, 64(4), 416.

[4] Starke, G., Elger, B. S., & De Clercq, E. (2023). Machine learning and its impact on psychiatric nosology: Findings from a qualitative study among German and Swiss experts. Philosophy and the Mind Sciences, 4.

[5] Hacking, I. (2007). Kinds of people: Moving targets. Proceedings-British Academy, 151, 285.

[6] Mallon, R. (2016). The construction of human kinds. Oxford University Press.

[7] Postan, E. (2022). Embodied Narratives: Protecting Identity Interests Through Ethical Governance of Bioinformation. Cambridge University Press.

 
10:05am - 11:20am(Papers) Instrumentalism
Location: Auditorium 1
Session Chair: Wybo Houkes
 

Categories, institutions, instruments: technology as a category?

Johannes F.M. Schick

University of Siegen, Germany

Ludovic Coupaye proposes to understand ‘technology’ in a twofold sense: as a discipline that studies techniques (i.e. as a human science) and as a category of contemporary western societies. Conceiving of ‘technology’ as a category has the benefit of revealing its operative role in constructing an objective, observable and describable reality. To understand ‘technology’ as a category therefore implies that the socio-technical practices of using modern technical objects co-constitute this category. "Technology" thereby becomes a specific mode of perceiving and constructing the world. My talk focuses on the genetic process of how ‘technology’ can become a category against the backdrop of the “Category Project of the Durkheim School.” The underlying heuristic of this project is that human intelligence and its categories originate in social practices. To conceive of ‘technology’ as a category therefore requires an attempt to understand how techniques contribute to the genesis of a category and in what sense this category is the expression and crystallization of social things.

Though Durkheim added the rubric “Technology” to the Année Sociologique 4 (1901), defining “technology” as a branch of sociology and a science yet to be developed, “technologie” was not used as a category, and Durkheim himself preferred to focus on religious phenomena rather than on techniques. Durkheim assigned the task of studying technical phenomena to his nephew Marcel Mauss and his ‘work twin’ Henri Hubert (which resulted, for instance, in Mauss's seminal “Techniques of the Body”). Even though “technology” has not been spelled out as a category generated from social practices, unlike time, space or causality, which were each studied in their own right by members of the Durkheim School, what would follow if ‘technology’ were conceived of as a category of the human mind instead of merely a subsection of the Année Sociologique that Durkheim was not particularly interested in? What are the epistemological and philosophical ramifications? How can ‘technology’ become a category, and how can answering these questions help us to understand the human condition in the 21st century?

In my talk, I will develop my argument in four steps. Firstly, I will introduce the reciprocity of mind and bodily practices as central to the formation of categories in Durkheim and Mauss. In a second step, I will focus on the genesis of collective representations and categories. Thirdly, I will show how ‘technology’ can be conceived of as a category in the framework of the “Category Project” and relate this category to the goal of the Durkheimians to understand multiple modes of being human. In the concluding part, I will relate the category of ‘technology’ to the possibility of developing technology as a human science.



Instrumental rationality, value trade-offs, and medical AI

Zachary Daus

Monash University, Australia

Artificial intelligence (AI) is increasingly being used in various public sectors to achieve policy goals with greater efficiency. One such sector is health care. Medical AI is now being developed to make health care processes cheaper and faster, to free up scarce time for overworked clinicians, to reduce the need for expensive human labour, to predict when treatments may be successful and for which patients, and to better allocate scarce health care resources. While potentially beneficial due to the scarcity of health care resources, this more efficient achievement of health care outcomes can nonetheless come at a cost to other values that are external to the value of health, such as privacy, equality, autonomy, and dignity. I argue that such value trade-offs can be better identified, and potentially resolved, through the application of Max Weber’s understanding of rationality and his conception of value conflicts. According to Weber, a number of societal developments in modernity, such as the rise of bureaucratic governance and industrial capitalism, have resulted in the dominance of instrumental rationality over value rationality, that is to say, the preference for action with reliably predictable consequences over action that is intrinsically valuable. This modern transformation in rationality has had ambiguous results, encouraging humans to give up inefficient, superstitious courses of action while trapping them in an ‘iron cage’ (stahlhartes Gehäuse) of unfreedom that is impervious to their higher-order values. This logic is also evident in the implementation of AI in health care. Namely, the gains in efficiency promised by AI in health care may lead many to overlook its accompanying value trade-offs. For example, an AI diagnostic system for detecting skin cancer may more efficiently expand health care access while exhibiting bias against individuals with darker skin tones, undermining the intrinsic value of justice. Alternatively, an AI prediction system for determining treatment success may more efficiently allocate scarce resources while limiting health care access for those deemed unlikely to benefit from the treatment, undermining the value of social solidarity. After describing a number of the value trade-offs posed by the implementation of AI in health care, I argue that many of these trade-offs would require democratic deliberation to adequately assess and fully resolve, and consider what such deliberation may entail.



Beyond Instrumentalism: reframing human-centered AI through Simondon's philosophy of technical objects

Luuk Stellinga, Paulan Korenhof, Vincent Blok

Wageningen University & Research, Netherlands, The

‘Human-centered artificial intelligence’ (HCAI) has emerged as a prominent phrase in the societal debate on the implications of AI, framing the development, deployment, and governance of AI technologies (e.g., HLEG AI, 2019). However, the current discourse on HCAI lacks critical reflection on the nature of human-technology relations, leading to an implicit instrumentalist perspective that treats AI technologies as means to human ends. This view does not adequately map onto the reality of AI technologies, which are not tools but system technologies, progressively changing in nature through time and impacting human existence more profoundly than merely by serving human ends. This leads to the critical question of how to ground HCAI in a richer understanding of technical objects that reflects the complexities of human-AI relations.

To address this question, we first provide a critical analysis of instrumentalism as a dominant yet reductive perspective in contemporary thinking about AI, and offer multiple arguments to reveal its shortcomings and demonstrate the need to move beyond it. A critical response can be found in Gilbert Simondon’s philosophy of technical objects, which argues that the instrumentalist perspective stems from a false dualism between technics and culture (Simondon, 1958). Following a reconstruction of this argument, we draw on Simondon’s understanding of technical objects as ontogenetic and relational entities to reconsider human-AI relationships, and argue that it reveals the current HCAI discourse as too focused on a desire to control AI systems and to shape their functionality towards increasing human capacities, while neglecting the material and social conditions within which such systems operate.

Simondon’s perspective is valuable in allowing us to move beyond instrumentalism, but also has two significant limitations. First, Simondon’s analysis of human-technology relations considers human beings only in direct relation to a technical object, for example as craftsman or engineer. This reveals a limited philosophical anthropology that does not sufficiently acknowledge the political dimension of human existence, i.e., the human as zoon politikon (cf. Arendt, 1958). Second, while Simondon’s analysis does acknowledge the transformative effects that technical objects have on their natural milieu, it does not consider the finality of this milieu and consequently fails to contend with the environmental costs of technical progress. Both limitations point towards important aspects of human-technology relations, and overcoming them is crucial in dealing with the current challenges of AI.

The paper concludes by proposing a progressive concept of HCAI that goes beyond the instrumentalist perspective on AI technologies as means towards human ends, instead promoting the careful integration of AI in society. By incorporating Simondon’s insights while addressing its limitations, this work contributes to a philosophical grounding for HCAI, offering a critical and progressive vision for rethinking our relationship with AI technologies.

Arendt, H. (1958). The Human Condition.

HLEG AI. (2019). Ethics guidelines for trustworthy AI.

Simondon, G. (1958). On the Mode of Existence of Technical Objects.

 
10:05am - 11:20am(Papers) Mediation I
Location: Auditorium 2
Session Chair: Bouke van Balen
 

Can interpersonal trust be digitally mediated? A panoramic view of trust relations on Airbnb

Micol Mieli

Roskilde University, Denmark

“Airbnb is a business fueled by trust” (Airbnb, 2019). The peer-to-peer accommodation platform that has disrupted the hospitality industry since 2008 has made a point time and again of telling the public and their customers how their business is built on trust and how they work actively to build trust among users (Airbnb, 2017, 2025). Public media and academic discourse have followed suit, with articles, blog posts, TED talks, podcasts, books and hundreds of academic articles on the specific topic of trust on Airbnb (Scopus, 2025).

In this study, Airbnb is used as a paradigmatic case of digital mediation of trust between strangers, which involves both online and offline experiences of trust. I adopt a panoramic view of trust to understand what trust relations are involved in the functioning of the platform, and how they are mediated by digital technologies. The panoramic view of trust identifies four trust relations: interpersonal trust, institutional trust, self-trust and trust in technology (Pedersen, 2010, 2024). The theory posits that trust relations unfold on two levels, a habitual and a reflexive level, and that trust and distrust are not mutually exclusive but exist on a continuum (Pedersen, 2010).

In my research, I offer an empirical investigation of trust on Airbnb to show how different trust relations are involved in the use of online platforms. The digitally enhanced environment works as a relatively new context that has developed over the last four decades (Domenicucci, 2018). Being able to study the evolution of this digital enhancement reveals the time dimension of the two-level theory of trust, where habitual trust is created over time through reflexive trust. The analysis further shows that trust relations expand online and that, through digital mediation, trust is never just between people but emerges in an assemblage of people, institutions and technologies (DeLanda, 2006). This leads to the question of whether interpersonal trust between strangers can be mediated at all, and even more so whether it can be “built” between strangers through the online platform.

While on the one hand it appears that different trust relations are not discrete and mutually exclusive but coexist in the same experience, on the other hand an analysis of the sociotechnical devices used to mediate trust relations on Airbnb reveals how trust and distrust are also fundamentally complementary in the process of digital mediation (Pumputis, 2024). Technical tools like verification systems, predictive and verification algorithms, online profile pages, “badges” and reviews can only go as far as removing reasons for distrust, and only indirectly help create trust as a result. The technical mediation by the platform does not directly build trust between strangers, but by mediating institutional trust, self-trust and trust in technology, it can create a space where the possibility of interpersonal trust emerges. On the other hand, a discourse of trust is the main tool used to directly build trust, for example through marketing, policies, community standards and general communication (Pumputis & Mieli, 2024).

Airbnb. (2017). Perfect strangers: How Airbnb is building trust between hosts and guests. Airbnb Newsroom. Retrieved January 14, 2025, from https://news.airbnb.com/perfect-strangers-how-airbnb-is-building-trust-between-hosts-and-guests/

Airbnb. (2019). In the business of trust. Airbnb Newsroom. Retrieved January 14, 2025, from https://news.airbnb.com/in-the-business-of-trust/

Airbnb. (2025). How do I create an Airbnb account? Airbnb Help Center. Retrieved January 14, 2025, from https://www.airbnb.com/help/article/4

De Landa, M. (2006). A new philosophy of society: Assemblage theory and social complexity. London: Continuum.

Domenicucci, J. (2018). Trust, Extended Memories and Social Media. Towards a Philosophy of Digital Media, 119-142.

Pedersen, E. O. (2010). A Two-Level Theory of Trust. Balkan Journal of Philosophy, 2(1), 47-56.

Pedersen, E. O. (2024). A Panoramic View of Trust in the Time of Digital Automated Decision Making – Failings of Trust in the Post Office and the Tax Authorities. Sats (Aarhus), 25(1), 29–47. https://doi.org/10.1515/sats-2024-0008

Pumputis, A. (2024). Trust and control on peer-to-peer platforms : A sociomaterial analysis of guest-host relationships in digital environments. Lund University.

Pumputis, A., & Mieli, M. (2024). From trust to trustworthiness: Formalising consumer behaviour with discourse on Airbnb platform. In Consumer Behaviour in Hospitality and Tourism (pp. 83-102). Routledge.

Scopus. (2025). Search query: [airbnb AND trust]. Retrieved 14 January 2025, from https://www.scopus.com



Design and the Contemporary Self as Extimate Form

Jesse Josua Benjamin

Eindhoven University of Technology

Contemporary technical systems challenge existing analytic paradigms of technological mediation, operating on (differential and dislocated) spatial and temporal scales that cannot be reduced to localized human-technology relations (cf. Benjamin, 2023). Simply put, in the isolated here and now, artefacts with and via which we interface with these systems do not account exhaustively for the formation of subjectivities and objectivities. Rather, there are systemic properties—such as extractive techniques, but also lag, compression and noise—that texture these artefacts and infiltrate mediation. This paper considers the effect of such properties on mediation through extimacy, as adopted from Lacan by Aydin (Aydin, 2021); and relates this to design through Tafdrup's proposals on introducing Cassirer's symbolic form to technological mediation via Flusser's 'informatic' phenomenology of design. First, extimacy is here considered as accounting for the hard problem of perceived boundaries between the self and the world by considering them as co-constitutive of each other, rather than imaginary separate spheres of being. I further suggest that the technological mediation of extimacy (relating to prior work on Foucauldian technologies of the self, cf. Bergen and Verbeek, 2020; Verbeek, 2011), when considered through Tafdrup's work, can be considered as a form in Cassirer's sense, that is, as a conception of reality through functional symbols that meaningfully bind perception and action. Lastly, the contemporary impact and responsibilities of design on such forms are laid out through Flusser's 'informatic' phenomenology: when design in the information age in-forms abstract forms (the technical images of programs, routines, procedures), it does not add concepts to matter, but matter to concepts. The matter of contemporary technical systems, then, requires a full accounting in technological mediation if the latter is to be of any sufficient relevance to contemporary design, and in turn, the position of the human self in everyday life. In closing, lag, compression and noise are showcased as means with which designers sensitive to this contemporary circumstance are already operating.

References

Aydin, Ciano. 2021. Extimate Technology : Self-Formation in a Technological World. New York: Routledge.

Benjamin, Jesse Josua. 2023. “Machine Horizons: Post-Phenomenological AI Studies.” Doctoral dissertation, University of Twente.

Bergen, Jan Peter, and Peter-Paul Verbeek. 2020. “To-Do Is to Be: Foucault, Levinas, and Technologically Mediated Subjectivation.” Philosophy & Technology, January, 325–48. https://doi.org/10.1007/s13347-019-00390-7.

Cassirer, Ernst. 1956. An Essay on Man: An Introduction to a Philosophy of Human Culture. Doubleday Anchor Books.

Flusser, Vilém. 1999. The Shape of Things : A Philosophy of Design. London, UK: Reaktion Books.

Tafdrup, Oliver Alexander. 2024. “Ernst Cassirer and the Symbolic Mediation of Technological Artefacts in Advance: An Addendum to the Vocabulary of Mediation Theory.” Techné: Research in Philosophy and Technology. https://doi.org/10.5840/techne202438197.

Verbeek, Peter-Paul. 2011. Moralizing Technology : Understanding and Designing the Morality of Things. University of Chicago Press. https://www.press.uchicago.edu/ucp/books/book/chicago/M/bo11309162.html.



A critique of technological ideology in Taiwanese folk religion

Wei Min Tsai

Aletheia University, Taiwan, Taiwan

In recent years, the biggest change among Taiwan's religions has been the introduction and widespread application of digital technology. When traditional religions face the impact of today's digital technology, they certainly have to face "digital disruption" - the deconstruction and collapse of traditional organizations. But this is also an opportunity for "digital transformation" - reconstruction and reinterpretation through digital technology. In the process of religious modernization, a large amount of religious content mediated by information technology has appeared. Technological mediation may cause qualitative changes in the medium, and even cause changes in the content, spirit, doctrinal interpretation and rituals of religion.

Religion and science have always been antagonistic but interdependent. Throughout human history, the apocalyptic character of religion has been constantly challenged by the scientific discourse of the time, but it has also remained unshakable through the reinterpretation of sacred language. Religion has always been able to compete with technology because the "faith" that has always been key to human behavior and judgment, and the "spirituality" that supports the reasonable existence of beliefs, cannot be completely eliminated through scientific methods. In 2006, however, "quantum entanglement" seemed to break this deadlock. In 2023, MIT research suggested that the human nervous system may have the characteristics of quantum entanglement. This seems to mean that human consciousness can break through the limitations of time and space and perceive information at the energy level directly. Some people therefore believe that spirituality is actually a science.

Sociologist of religion Peter Berger noted in "The Sacred Canopy" that religion is "the use of sacred methods to order human activities." In today's language, it can be said that religion is in fact a discourse on the sacredness of the technologies of human survival in various eras. Will the impact of digital technology and the discovery of new theories in physical science further undermine the revealed authority of the various religious traditions and gradually weaken it? Or will mystical discourse be confirmed by scientific reason, thereby strengthening the authority of reason? How does religion face and respond to the impact of AI and digital technology?

This article will take Taiwanese folk religion as its research field and use the method of Critical Theory, especially the arguments in Habermas's "Technology and Science as 'Ideology'", to analyze the technological ideology of Taiwan's folk religion. It will discuss the problems and responses that Taiwanese folk religion faces under the impact of digital technology, especially the turning points and crises in the integration of Taiwanese folk religion and digital technology.

 
10:05am - 11:20am(Papers) Autonomous systems
Location: Auditorium 3
Session Chair: Hans Voordijk
 

Vicarious responsibility and autonomous systems

Fabio Tollon

University of Edinburgh, United Kingdom

Much has been made of the potential emergence of so-called ‘responsibility gaps’, whether due to technology more generally or to AI specifically. A multitude of responses to the responsibility gap challenge have emerged. An understudied response, however, is vicarious responsibility as a potential bridge for responsibility gaps. In cases of vicarious responsibility, one agent stands as a substitute for another. The idea behind vicarious responsibility is therefore a simple one: it can helpfully account for cases where an agent is responsible for the uncontrollable and unpredictable actions of some other entity. Human agents, in certain circumstances, can stand in as responsible for the harms that follow from autonomous systems.

Surprisingly, however, very little attention has been given to the prospect of vicarious responsibility for autonomous AI systems. With the exception of work by Trystan Goetze (2022), who offers an argument in favour of vicarious responsibility for computing professionals in certain cases of AI-enabled harm, there is very little literature on the intersection between digital ethics/AI ethics and theories of vicarious responsibility.

Moreover, we find a similar lack of philosophical literature on vicarious responsibility more generally. While legal scholars debate its merit (usually in discussions of vicarious liability), there has only recently been a spate of attention to the philosophical underpinnings of vicarious responsibility (most of it in a special issue of the Monist) (Collins and De Haan, 2021; Goetze, 2021; Kuan, 2021; Mellor, 2021; Radoilska, 2021; Glavaničová and Pascucci, 2024). In this paper I aim to plug this gap and assess the limits and potential of vicarious responsibility as a solution to the responsibility gap challenge. To do so I proceed as follows.

First, I outline in general terms why the idea of vicarious responsibility makes sense, despite suggestions that it is a contradiction in terms.

Second, I outline two different ‘faces’ of responsibility – accountability and answerability, and show how responsibility as answerability can plausibly be borne vicariously. This is motivated by the idea that it seems unjustifiable to blame an agent for the deeds of another, but in certain cases it makes sense to expect one agent to answer for the actions of another.

Third, I describe why vicarious responsibility might be a useful analytical lens for understanding autonomous systems. While Mellor (2021), for example, argues that vicarious responsibility can only exist between individuals (which I take to mean human persons), I think this is overly restrictive. We can and should adopt a more holistic perspective on vicarious responsibility, whereby such responsibility is fittingly attributable even in cases where only one of the entities involved is a human agent. An easy example of such a case is the responsibility I might have for my dog if he were to bite someone. In such a scenario, I am responsible for an unexpected and uncontrollable action that I did not initiate. I apply this discussion to autonomous systems and suggest that, under certain conditions, we are vicariously responsible, and thus answerable, for the actions of these entities.



Ensuring Transparency and Accepting Failure in the Application of Autonomous Driving Technology in Smart City Formation: Utilization of Special Zones in Japan

Mayu Terada

Hitotsubashi University, Japan

Autonomous driving technology (ADT) is a key component of smart city development, gaining attention globally as governments experiment with its implementation through special zones. In Japan, ADT has been proposed as a solution to regional issues such as labor shortages and inadequate transportation due to an aging population and declining birthrate. Special zones provide a controlled environment for testing ADT and addressing regional challenges. However, this approach raises critical philosophical and institutional questions.

One major challenge lies in the lack of democratic transparency in the designation and governance of special zones. Questions such as who should make decisions, how diverse opinions are incorporated, and whether the process reflects public interest remain unresolved. Additionally, the inherent risk of failure in deploying ADT leads to ethical dilemmas: who bears responsibility for failure, and to what extent should society tolerate and learn from such failures?

Another critical issue is ensuring the long-term sustainability of ADT systems. While initial deployment often relies on government subsidies, the question arises as to whether autonomous transportation infrastructure can function under market principles once public support diminishes. Addressing this requires robust frameworks for public-private partnerships, cost-benefit analysis, and sustainable governance.

This paper examines ADT implementation in Japan’s special zones, such as Osaka and Kyushu, to explore the interplay between transparency, stakeholder engagement, and societal acceptance. By analyzing these cases, the study seeks to identify practical mechanisms to ensure democratic governance while managing the risks and uncertainties of ADT.

Finally, the paper engages with broader philosophical inquiries: How can societies build consensus around unknown technological innovations? To what extent should failure be accepted as a necessary condition for progress? These questions, while rooted in the specific context of ADT, resonate with larger concerns about the ethical and social implications of technology. Reflecting on these issues provides an opportunity to reconsider the role of technology in shaping future societies and the values that underpin its governance.

 
10:05am - 11:20am(Papers) Epistemology I
Location: Auditorium 4
Session Chair: Maaike Eline Harmsen
 

The Sullenberger case and the epistemic role of simulations and digital twin technologies

Laura Crompton

University of Vienna, Austria

On 15 January 2009, a US Airways plane made an emergency landing on the Hudson River. After a bird strike, both engines failed and the plane lost thrust. It is because of the decision made by the experienced pilot Chesley Sullenberger that all passengers and crew survived an accident that could have had disastrous consequences. After the accident, the US National Transportation Safety Board (NTSB) ran multiple simulations to analyse and evaluate Sullenberger's decision - a standard procedure to establish how emergency protocols might need to be adapted to particular situations. The simulation is used, in a sense, as computer evidence of what could and should have been done differently.

I believe that this has important implications for the epistemological status of such simulations and digital twins (DT): while on the one hand they are (mainly) built and implemented to make predictions, i.e. process possibilities and 'mights' of what might happen if we act in a certain way, on the other hand they are often treated as a representation of reality or factual evidence. Simulations and digital twin technologies have inherent limitations in their ability to represent reality, yet decision makers often treat their results as definitive truths rather than probabilities or approximations. In this paper, I aim to look into the complex relationship between human decision-making and simulation-based analysis.

More specifically, I aim to examine the epistemological status of simulations and DT. There is a challenging dynamic between the descriptive and prescriptive character of simulations and DT. It is along these lines that we have to ask whether simulations and DT can and should replace real-world experience as a basis for analysis, especially in regard to human decisions.



(In)visibility reductions: a feminist epistemology critique of online ‘shadowbanning’

Mariam Al Askari

Independent, United Arab Emirates

Social media platforms like Facebook, YouTube or Reddit are online information environments where users can share and engage with content. Platforms have employed various moderation practices to limit certain behaviours and curb the spread of content considered harmful or misleading. Paradigmatic approaches are the removal of content and the suspension of accounts, along with a notice informing affected users of the changes made. In this paper, I address a less apparent form of platform governance: reductions in content or profile visibility that are neither disclosed by platform hosts, nor explicitly verifiable by those affected. Examples include delisting, downranking, and hashtag blocking, which critics refer to more generally as ‘shadowbanning’. Are shadowbans conducive to healthier epistemic environments online? Under what conditions does this form of moderation become unethical? In this paper, I distinguish two categories of shadowbanned content and I argue that this moderation technique is unethical when applied to one of them, namely because it bars genuine and well-intended users from fully engaging in knowledge-producing processes online. To support this view, I draw on feminist epistemology work by Lorraine Code, Helen Longino, Amanda Menking and Jon Rosenberg, as well as texts on trust and human-computer interaction.

I begin by discerning two types of shadowbanned content. The first comprises content that targets user vulnerabilities. It is produced by ‘bad actors’ who intend to harm or exploit vulnerabilities (e.g. click-bait or trolling), but also by users who instrumentalise vulnerabilities for unethical or unsafe ends (e.g. content about or depicting self-harm). The second category comprises content that neither targets nor relies on user vulnerabilities for its purposes, but may still cause discomfort or discord in users. The content in this category covers sensitive themes or less mainstream views, but may still be genuine and well-intended (e.g. certain political discourse, or content depicting violence to denounce it). There are a number of reasons why platforms shadowban content instead of leaving it up or banning it outright, which I also discuss in detail. In section 2, I present a feminist epistemological conception of knowledge that shifts the focus from epistemic products to processes. As such, more trust, transparency, and inclusive epistemic practices are what support healthy epistemic environments. Sophisticated moderation methods are not incompatible with this framework, so long as they encourage continuous, genuine, and ‘situated’ epistemic inquiry. In section 3, I use this theoretical framework to assess whether shadowbanning makes for healthier epistemic environments. I explain how it does for one content category, but not for the other. This differentiation could support the design of better moderation systems: ones that can discern when user vulnerabilities are intentionally targeted, or when content is provocative for ulterior motives (e.g. maximising views) versus when it just happens to be so.



Epistemological imbalances in assessment of surveillance technologies: what CCTV cameras show us

Blas Alonso

University of Twente, Spain

Surveillance technologies are used worldwide as mechanisms to safeguard security and maximize citizen wellbeing, but as a consequence privacy is often negatively affected as a trade-off. A paradigmatic example of a surveillance technology that tries to contribute to society by making it safer is the CCTV (Closed-Circuit TeleVision) camera, which is often used for crime prevention. But CCTV systems have the small inconvenience of not being particularly effective at preventing crime (citation). Why, assuming that it is not obvious how they contribute to general well-being, do we still invest millions in their development? In this paper, I will argue that the reasons for the adoption of surveillance technologies are often based on biased evidence that comes from the overrepresentation of certain values when assessing how these technologies contribute to wellbeing. In other words, wellbeing can be improved by realising different values in society (safety, autonomy, privacy, freedom of speech, etc.), but the influence of a technology on some of these values is easier to prove than the impact it has on others. This epistemological imbalance can be found in CCTV cameras: the cameras' contributions to security are easily demonstrated by comparing crime statistics for troubled areas over time, but the impact that cameras have on the privacy of citizens is often overlooked, since, compared to numbers and crime statistics, the methods for evaluating CCTV’s impact on privacy (interviews, case studies, etc.) are less “objective” and do not make good headlines.

This epistemological imbalance alone is not sufficient to justify why we keep installing CCTV cameras, as it is not even clear that they contribute to preventing crime; but the overrepresentation of the “goodness” of CCTV cameras in crime statistics and quick advertisement is easily weaponized by political parties and interested stakeholders. CCTV cameras are a powerful tool for instigating feelings of insecurity among the population, making them an instrument of political mobilization in difficult times. To conclude the paper, we point out that this issue, derived from the imbalance in our ability to prove a technology's influence on the different values that contribute to wellbeing, may be a general issue among surveillance technologies, which often carry the same structure of trade-offs between security and privacy.

 
10:05am - 11:20am(Papers) Computing and quantification
Location: Auditorium 5
Session Chair: Chirag Arora
 

The productive function of technology with regard to subjectification and the example of affective computing

Sebastian Nähr-Wagener, Orsolya Friedrich

FernUniversität Hagen, Germany

It is now widely recognized in the philosophy of technology that technology is not just a neutral means for realizing specific purposes. In the context of the numerous criticisms of an instrumentalist understanding of technology, there have also been conceptions that focus particularly on the actors involved in technical actions. In these approaches, the concept of a stable subject that exists independently of technology, which is prevalent in instrumentalist conceptions of technology, is often replaced by the shaping of subjects in human-technology relations (locus classicus: Ihde 1990), the co-constitution of subjects and objects within human-technology relations (e.g. Verbeek 2005, 2011), or various types of interweaving and merging of humans and technology (e.g. Haraway 1997, 2016 or Latour 1994, 2010, 2019).

Although these approaches fundamentally acknowledge the importance of the subject-theoretical dimension of human-technology relations, questions about the concrete contribution of technology to the constitution and forming of subjects, i.e. questions about the concrete technical contribution to so-called 'subjectification', remain surprisingly underdeveloped. The talk addresses this gap by proposing a perspective on technology as a dispositive, in the context of which the connection between technology and subjectification can also be grasped more precisely.

The talk first develops an understanding of technology as a dispositive of material-, intellectual- and social-technology, drawing on the concept of the dispositive in Michel Foucault (in particular Grosrichard et al. 2000, esp. pp. 119-125 and pp. 132-143) as well as on the cultural perspective on technology by Christoph Hubig (Hubig 2006, 2007, 11/16/2010), which is based on the ideas of Friedrich von Gottl-Ottilienfeld (Gottl-Ottlilienfeld 1923). It will then be shown that with regard to questions of subjectification, technology-dispositives understood in this way have a productive function, which conceptually is located in particular at the level of social-technology. Subsequently, this productive function of technology-dispositives is specified, i.a. on the basis of Foucault's remarks on self-constitution (in particular Foucault 2020, esp. pp. 36-45). Without reducing technology to a mere dispositive, a conceptual clarification of the character of technology as a dispositive is thus achieved, which also enables an analysis of the connection between technology and subjectification.

Finally, the discussion of the productive function of technology with regard to subjectification is illustrated with a view to the technology of so-called 'affective computing', in which technological control and optimization of interactions by emotion-sensitive and/or -active systems, as well as the general integration of emotional standards into social and economic structures, play a crucial role. On the basis of a concrete example, it is shown within this framework how, in the context of affective computing, not only the social communication or cooperation of technical systems and thus the smoothness, functionality, etc. of human-technology interactions is optimized, but also, for example with regard to phenomena such as emotional efficiency, emotional self-control or 'self-transparency', certain modes of subjectification and thus ultimately subject forms are re-configured.

References

- Foucault, Michel (2020): Der Gebrauch der Lüste. Sexualität und Wahrheit 2. 14. Auflage. Frankfurt am Main: Suhrkamp.

- Gottl-Ottlilienfeld, Friedrich von (1923): Wirtschaft und Technik. 2., neubearbeitete Auflage. Tübingen: Mohr.

- Grosrichard, Alain; Foucault, Michel; Wajeman, Gerard; Miller, Jaques-Alain; Le Gaufey, Guy; Miller, Gerard et al. (2000): Ein Spiel um die Psychoanalyse. Gespräch mit Angehörigen des Département de Psychanalyse der Universität Paris VIII in Vincennes. In: Michel Foucault: Dispositive der Macht. Über Sexualität, Wissen und Wahrheit. Berlin: Merve Verl., pp. 118–175.

- Haraway, Donna (1997): Modest_Witness@Second_Millennium. FemaleMan©_Meets_OncoMouseTM. Feminism and technoscience. New York, London: Routledge.

- Haraway, Donna (2016): Das Manifest für Gefährten. Wenn Spezies sich begegnen - Hunde, Menschen und signifikante Andersartigkeit. Berlin: Merve Verlag.

- Hubig, Christoph (2006): Die Kunst des Möglichen I. Bielefeld: transcript Verlag.

- Hubig, Christoph (2007): Die Kunst des Möglichen II. Bielefeld: transcript Verlag.

- Hubig, Christoph (2010): Vorlesung 'Technik als Kultur'. 4. Vorlesung. Technik als Kultur: Technisches Handeln und ein integratives Kulturkonzept, 11/16/2010.

- Ihde, Don (1990): Technology and the lifeworld. From garden to earth. Bloomington: Indiana University Press.

- Latour, Bruno (1994): On Technical Mediation. In Common Knowledge 3 (2), pp. 29–64.

- Latour, Bruno (2010): Das Parlament der Dinge. Für eine politische Ökologie. Frankfurt am Main: Suhrkamp.

- Latour, Bruno (2019): Eine neue Soziologie für eine neue Gesellschaft. Einführung in die Akteur-Netzwerk-Theorie. 5. Auflage. Frankfurt am Main: Suhrkamp.

- Verbeek, Peter-Paul (2005): What things do. Philosophical reflections on technology, agency, and design. 2. printing. University Park, Pa.: Pennsylvania State Univ. Press.

- Verbeek, Peter-Paul (2011): Moralizing technology. Understanding and designing the morality of things. Chicago: Chicago UP.



The Soylent Mentality: "Efficiency Fundamentalism" and the Future of Food

Ryan Jenkins

Cal Poly, San Luis Obispo, United States of America

…we are at least beginning to discover that there was a concealed catch in the original promise. The scientific ideology that made possible these colossal benefits, we now find, cannot be easily attached to other valid and purposeful human ends. In order to enjoy all these abundant goods, one must strictly conform to the dominant system, faithfully consuming all that it produces, meekly accepting its quantitative scale of values, never once demanding the most essential of all human goods, an ever more meaningful life, for that is precisely what automation, by its very nature and on its own strict premises, is utterly impotent to produce.

—Lewis Mumford (1964, p. 263)

In countless tiny creative actions, we remake our world in the name of a narrow conception of progress. The cumulative effect of these choices is a world where our behaviors are increasingly predicted, controlled, and optimized. We continue to fail, as a culture, to take seriously the claim that we eliminate valuable aspects of our experience in this accelerating process of rationalization.

I call this view “efficiency fundamentalism” and I outline several of its symptoms here. Representing a recent mutation of the Californian Ideology, efficiency fundamentalism is on full display especially in Silicon Valley (Barbrook & Cameron, 1996). I draw upon several examples of emerging technologies from Silicon Valley that demonstrate this trend, including autonomous vehicles, the “Soylent” food substitute, Amazon’s fulfillment warehouses, online education, and the “life hacking” and “quantified self” movements. The observation has been made before that there exists a myopic obsession with efficiency, especially at the frontiers of technological development. But Silicon Valley in particular indulges this fetish with a new, unbridled extravagance.

Efficiency fundamentalism is characterized by: (1) the reverence for efficiency and optimization at the expense of other values; (2) the Procrustean quantification of holistic, ineffable experiences and practices; and (3) the elimination of what Jacques Ellul calls the “organic qualities” of a thing (Ellul, 1964, p. 135 ff).

In §1, I diagnose efficiency fundamentalism through the case of the Soylent food replacement, which illustrates its central features. In §2, I discuss several other examples of the practice and suggest that what is specifically wrong with it is that it undermines our opportunities for authentic engagement with the world. In §3, I clarify these features and identify them in the case of the “quantified self” and “life hacking” movements, which represent the apotheosis of efficiency fundamentalism. By identifying these failures, my hope is that we might move beyond the obsession with efficiency to recapture some solace in a world increasingly subjected to the demands of quantification and optimization.

References

Barbrook, R., & Cameron, A. (1996). The Californian Ideology. Science as Culture, 44–72.

Ellul, J. (1964). The Technological Society. Toronto: Vintage Books.

Mumford, L. (1964). The Automation of Knowledge. AV Communication Review, 261–276.



The ethical fabric of computational social science research: norms, practices, and values

Chirag Arora

TU Delft, Netherlands, The

This paper delves into the multifaceted ethical terrain of computational social science (CSS) research, examining both the descriptive and normative dimensions that shape ethical practices in this domain. The core aim is to investigate how ethical concerns are framed and operationalized within the CSS research community through intra-community norms and epistemic practices. Empirically, this involves exploring how researchers navigate complex ethical issues, such as privacy, fairness, and transparency (highlighted in works such as Leslie, 2023; Salganik, 2019). The descriptive analysis aims to reveal the specific ways in which researchers within the CSS community interpret and prioritize ethical guidelines, highlighting the influence of internal discourse, shared methodological preferences, and disciplinary traditions on ethical decision-making. By investigating these intra-community norms, the paper seeks to understand how ethics are practically understood and acted upon.

The paper then transitions to a normative analysis, adopting an institutional (Herzog, 2023) and social epistemological perspective. This lens emphasizes the relational dimensions of CSS research, paying particular attention to the power dynamics (Taylor, 2023) between researchers, data subjects, and the broader socio-cultural context. It highlights that research ethics in this domain is not merely about adhering to formal rules, but about navigating, framing, and building trustworthy relationships in increasingly digitalized data environments. From this relational perspective, the paper argues that the ethical dimensions of research should be responsive to the values and perspectives of affected stakeholders, acknowledging that the production of knowledge is a social endeavor that carries both epistemic and moral responsibilities. Further, by exploring the interplay between institutional structures and individual researcher autonomy in framing ethics, this paper underscores the need for a more reflexive and context-sensitive approach to ethical reasoning within the field. Ultimately, this paper will contribute towards a more nuanced understanding of ethical challenges and potential solutions within the CSS community, aiming to foster greater accountability and trustworthiness in research practice.

Herzog, L. (2023). Citizen Knowledge: Markets, Experts, and the Infrastructure of Democracy. Oxford University Press.

Leslie, D. (2023). The Ethics of Computational Social Science. In E. Bertoni, M. Fontana, L. Gabrielli, S. Signorelli, & M. Vespe (Eds.), Handbook of Computational Social Science for Policy (pp. 57–104). Springer International Publishing. https://doi.org/10.1007/978-3-031-16624-2_4

Salganik, M. J. (2019). Bit by Bit: Social Research in the Digital Age. Princeton University Press.

Taylor, L. (2023). Data Justice, Computational Social Science and Policy. In E. Bertoni, M. Fontana, L. Gabrielli, S. Signorelli, & M. Vespe (Eds.), Handbook of Computational Social Science for Policy (pp. 41–56). Springer International Publishing. https://doi.org/10.1007/978-3-031-16624-2_3

 
10:05am - 11:20am(Papers) Politics I
Location: Auditorium 6
Session Chair: Michael Nagenborg
 

Ontic capture and technofascism

Maren Behrensen

University of Twente, The Netherlands

In “Dear Octavia Butler,” written as a letter to the late science-fiction author that interrogates central ideas from her novels and stories, Kristie Dotson develops the concept of gestative capture. The concept describes an ideological mandate for “survival at all costs” that reduces human beings capable of bearing children to their potential role in biological reproduction. It captures these “bearers” in their reproductive essence, ignoring their dynamic existence.

Dotson connects the concept of gestative capture to a demographic trend that currently occupies the minds of the far-right in Europe and North America: sharply declining birth rates. The far-right uses this demographic trend to conjure a fight for the “survival of the West” – an obvious racist dogwhistle. This “fight” is then thought to justify the rollback of rights and access to technologies that allow “bearers” to escape gestative capture – the overturning of Roe v Wade is just the most obvious example.

Dotson’s letter is not anti-natalist – her point is that the choice to bear, nurse, and raise children has become increasingly unattractive as means to opt out of reproduction have become more and more accessible. Instead of making this choice more attractive, the political response – which so far has largely come from the far-right – pushes for the expansion of gestative capture. In my contribution, I explore the relation between this specific development and the larger context of far-right politics and “big tech” – inspired by Elon Musk’s obsessive tweeting about declining birth rates, but not limited to him or his companies.

I argue that what Dotson calls gestative capture is part of a broader phenomenon that can be described as ontic capture: the reduction of human beings and their dynamic existence to a fixed essence. This essence can be defined in reproductive terms, but it can also be sexual, ethnic or racial, religious, or economic. Ontic capture is not a new phenomenon in that all social, political, and legal classification systems depend on it to some extent – classification systems which usually have their own survival as their chief ideological mandate.

However, the entrenching of corporate “big tech” in civil society and government – again, Musk is only the most conspicuous example – threatens to render ontic capture overpowered and ungovernable. Especially where so-called “artificial intelligence” is involved, the products of “big tech” tend to be comprehensive systems of classification, surveillance, and social control – from social credit scoring and predictive policing to “social media,” they are designed to capture persons in some predictive, quantifiable essence.

While ontic capture is a part of all classification systems, this technologically overpowered version of it – especially when combined with matching commitments from corporate and political leaders – easily slips into technofascism: the outsourcing of truth to opaque technologies, and the replacement of history and politics with raw predictive power. Historical fascists used friend-foe-propaganda and the radio to will their ideas into existence, current fascists can rely on an entire arsenal of technologies to turbocharge their projects.

Bibliography:

Behrensen, Maren: The State and the Self. Identity and Identities, Rowman & Littlefield 2017.

Dotson, Kristie: “Dear Octavia Butler,” in the Proceedings of the Aristotelian Society 123 (2023), 327-346.

Jenkins, Katharine: “Ontic Injustice,” in the Journal of the American Philosophical Association 6 (2020), 188-205.

McElroy, Erin: Silicon Valley Imperialism. Techno Fantasies and Frictions in Postsocialist Times. Duke University Press 2024.

Saul, Jennifer: Dogwhistles & Figleaves. How Manipulative Language Spreads Racism and Falsehood, Oxford University Press 2024.

Schmitt, Carl: Der Begriff des Politischen, Duncker & Humblot 1932.

Stanley, Jason: “Democratic Lies and Fascist Lies,” in Melissa Schwartzberg and Philip Kitcher (eds.): Truth and Evidence, New York University Press 2021, 209-222.

Teixeira Pinto, Ana: Capitalism with a Transhuman Face. The Afterlife of Fascism and the Digital Frontier, in Third Text 33 (2019), 315-336.



The Politics of social XAI

Suzana Alpsancar1, Eugenia Stamboliev2

1Paderborn University, Germany; 2University of Vienna, Austria

A key promise of introducing XAI for AI-assisted decision-making is to mitigate the risks of discrimination (bias) and to avoid wrongfully harming someone, thereby adding to the so-called "trustworthiness" of AI systems. A plethora of scandals over recent years has demonstrated the urgency of this issue (e.g., the Dutch tax authorities' childcare benefits scandal). However, while the XAI community has been establishing a variety of explanatory techniques, there is still too little understanding of how to deploy these in varying real-world application scenarios in such a way that they actually live up to these promises. We argue that addressing this gap starts with acknowledging the contextual differences in what makes up explanatory relevance in different real-world scenarios. Acknowledging these differences not only means that there is no single XAI tool that could address all application areas; it also implies that the relevance of XAI can never be completely established by technical features alone. Rather, the relevance of XAI is a feature of the larger socio-technical system. Hence, designers should be aware of this socio-technical character of XAI. While grasping all the specificities of a given real-world application is impossible, we can theorize typical constellations and lay out their typical conditions via scenarios: (a) online recommendation systems, (b) AI-augmented human decision-making in companies for human resources management, and (c) AI deployment in public administration and services. Because these real-world application contexts establish very different social constellations and, with them, different explanatory demands in practice, we need to concretize the function of XAI in a context-sensitive way.

Against this background, part of the XAI community is now engaged with the concept of social XAI – the idea of a more context-sensitive, flexible, adaptive XAI agent. In our talk, we commit to this idea conceptually by pointing out the challenges of designing such social XAI from an ethical and political point of view. First, we present an overview of the problem of biases and current mitigation strategies and discuss the role of XAI. Second, we argue in favor of concretizing the recent turn to ‘the user’ into targeting concrete social relationships by outlining the differences between our three scenarios. Third, we draw attention to how unfair decision-making processes might particularly affect vulnerable groups and what it could mean to use XAI to empower them. Finally, we discuss the challenges of designing relevant social XAI from the perspective of different real-world decision-making scenarios. We conclude by outlining several questions and factors that should be reflected in the design process.



Algorithmic politics and totalitarianism: a critical analysis of AI politics from Hannah Arendt’s perspective

Donghoon Lee

Virginia Tech, United States of America / Sungkyunkwan University, South Korea

Artificial intelligence (AI) is increasingly proposed as a solution to improve democratic systems, particularly in political decision-making processes. A notable example is Cesar Hidalgo’s algorithmic democracy, which aims to realize direct democracy by having individuals train AI algorithms to automatically process legislation on their behalf. Similarly, New Zealand’s SAM project seeks to create AI-powered political representatives. These initiatives focus on leveraging AI’s data processing and decision-making capabilities to address limitations in current democratic systems.

This paper argues that such technological interventions misunderstand the essence of politics as conceived by Hannah Arendt. In Arendt’s philosophical framework, politics is not merely a matter of decision-making but the realization of human action through citizens’ participation and dialogue. While political theorists emphasize dialogue and participation as components of democracy, Arendt’s contribution lies in identifying these elements as fundamental aspects of human existence, prior to any political model.

The paper examines three aspects of human activity which Arendt identified as labor, work, and action, demonstrating how current AI political initiatives align with the realm of ‘work’ rather than political ‘action.’ While work seeks predictable outcomes through fabrication based on clear blueprints, political action does not suppress uncertainty and irreversibility but harnesses them as driving forces. AI-driven approaches focused on optimizing decisions through data processing risk reducing politics to a technical problem-solving process rather than preserving it as a space for human revelation through words and actions.

This technological reduction of politics has concerning implications. The paper argues that reliance on AI’s decision-making capabilities could lead to a form of technological totalitarianism, where differing opinions are undermined by the presumed superiority of algorithmic solutions. Drawing from Arendt’s analysis of totalitarian movements, this paper notes that AI politics appeals through its promise to eliminate uncertainty by delegating decision-making processes to artificial intelligence, which conflicts with the elements of uncertainty and unpredictability that Arendt identified as essential to political engagement.

The significance of this critique extends beyond the technological optimists’ argument that any current technological limitations or deficiencies can be overcome through future technological advancement. Even as AI capabilities develop, the fundamental misalignment between algorithmic decision-making and the nature of politics persists. This paper contributes to discussions about technology in democratic systems by examining the need to preserve spaces for human political action while pursuing technological innovations.

References

Arendt, H. (1998). The human condition. University of Chicago Press.

Arendt, H. (1987). Labor, work, action. In Amor mundi: Explorations in the faith and thought of Hannah Arendt (pp. 29-42). Dordrecht: Springer Netherlands.

Arendt, H. (1958). The origins of totalitarianism.

Canovan, M. (1992). Hannah Arendt: A Reinterpretation of Her Political Thought. Cambridge University Press.

Coeckelbergh, M. (2022). The political philosophy of AI: an introduction. John Wiley & Sons.

Coeckelbergh, M. (2024). Why AI Undermines Democracy and what to Do about it. John Wiley & Sons.

Duan, Y., Edwards, J. S., & Dwivedi, Y. K. (2019). Artificial intelligence for decision making in the era of Big Data–evolution, challenges and research agenda. International journal of information management, 48, 63-71.

Dewey, J. (2000). Democracy and education (p. 394). New York: Free Press.

Gottsegen, M. G. (1994). The political thought of Hannah Arendt. SUNY Press.

Lechterman, T. M. (2024). The Perfect Politician

Zahavi, D. (Ed.). (2012). The Oxford handbook of contemporary phenomenology. Oxford University Press.

 
10:05am - 11:20am(Papers) Agency I
Location: Auditorium 7
Session Chair: Lotte Asveld
 

Welcoming the other: More-than-human agency in regenerative design

Anna Puzio1, Alessio Gerola2, Samuela Marchiori3

1University of Twente; 2Wageningen University; 3Delft University of Technology

A central problem in technology development is that technologies require valuable resources from the environment, such as energy and raw materials, whose extraction and production negatively affect ecosystem health. While there have long been demands and approaches in ethics to design technologies in a more sustainable and environmentally friendly way, the approach of regenerative design has recently gained attention (Pedersen Zari 2018, Hecht et al. 2023). Regenerative design seeks to move beyond net-zero sustainability by designing technologies and infrastructures that actively contribute to restoring the capacity of ecosystems to function at optimal health (Reed 2007, Wahl 2016). The promise of regenerative design is to overcome anthropocentric relations and practices by supporting both human and non-human thriving.

In our paper, we explore this promise by examining the concept of agency in the context of regenerative design. Whereas traditional approaches argue that only humans possess agency, more relational approaches – such as postphenomenology, actor-network theory, and new materialism – attribute a form of agency to non-human entities such as technology and nature. We propose the thesis that human agency often acts as a disruptive force within nature, and that acknowledging the agency of non-human entities may provoke a shift towards less anthropocentric ways of being. Thus, our paper aims to bridge environmental philosophy and the philosophy of technology and design. Philosophy of technology has only recently started to take more seriously the material preconditions of technologies, and more research is needed to fill this gap (Kaplan 2017, Thompson 2020). Regenerative design presents an ideal case study by bringing together design practices and care for nature.

By adopting relational and more-than-human perspectives, we examine how regenerative design transforms these relationships and challenges anthropocentric frameworks. We argue that regenerative design prompts a rethinking of human-non-human relationships, emphasizing the entangled position of humans as part of an ecosystem. Key questions we address include: How should we understand human-non-human relationships within regenerative design? How are these relationships transformed in practice? What does it mean to design for relationships, and how can this be implemented effectively? To what extent does regenerative design risk recreating paternalistic forms of relation towards nature? Through this discussion, we aim to demonstrate how regenerative design fosters new relational paradigms that integrate humans, technology, and the environment in mutually beneficial ways.

References

Hecht, K., et al. (2023). Buildings as Living Systems—Towards a Tangible Framework for Ecosystem Services Design. Design for Climate Adaptation, Cham, Springer International Publishing.

Kaplan, D. M. (2017). Philosophy, technology, and the environment. Cambridge, Massachusetts, The MIT Press.

Pedersen Zari, M. (2018). Regenerative Urban Design and Ecosystem Biomimicry, Routledge.

Reed, B. (2007). "Shifting from ‘sustainability’ to regeneration." Building Research & Information 35(6): 674-680.

Thompson, P. B. (2020). Food and Agricultural Biotechnology in Ethical Perspective, Springer.

Wahl, D. C. (2016). Designing regenerative cultures. Axminster, England, Triarchy Press.



Philosophical reflections on agency in the making

Mike Martin

Liverpool John Moores University, United Kingdom

Taking opportunities to engage with materials, to make something, is an important aspect of human becoming and an important part of general education in many countries. Considering this aspect of human activity, Ihde and Malafouris (2019) revisited the notion of 'homo faber'. They suggest that human becoming involves both technical mediation and creative material engagement.

Given the dominant use of computer-aided design and manufacturing, the opportunities for human creativity through the direct manipulation of materials in education and the workplace have become reduced. At the same time, however, there is increased interest in leisure-time craft activities, along with the development of maker spaces that allow individuals to work with materials and realise three-dimensional artifacts of their own. Such opportunities to engage with materials appear to allow individuals to work autonomously and with a degree of agency, but is this really the case?

This paper reports on an autoethnographic study aimed at exploring the agency and autonomy of an individual maker during the different stages of making a technological artifact (a wooden stool), from original intention, through design and modelling, realisation in three dimensions, and eventual use. This example was selected as it was likely to involve the use of hand tools, machine tools and the use of the internet to research and source materials. Being a one-off, individually made artifact, it was also anticipated that a good deal of tacit knowledge would be used.

During the study data was collected in the form of sketches, photographs and reflective notes to capture the decision-making processes and the use of tacit and embodied knowledge at all stages. In addition to the reporting of step-by-step processes, the study also captured the personal experience of the maker and their feelings as they emerged.

In reporting and discussing the findings, links are made with both postphenomenology and material engagement theory. In doing so, the paper raises questions about the extent to which these and other philosophical perspectives help in understanding the personal experience of those engaged with materials, tools and other forms of technology on a practical basis.

References

Ihde, D., Malafouris, L. (2019) Homo faber Revisited: Postphenomenology and Material Engagement Theory. Philos. Technol. 32, 195–214.



Enactive agency in the technological world: rethinking human-technology relationships

Xue Yu

Dalian University of Technology, People's Republic of China

In traditional research on the relationship between humans and technology, agency has always been a concept that attracts much attention. Starting from Aristotle, agency has been considered an agent’s initiative in action. Since Giddens defined agency as the power of change, the notion of material agency has emerged. The emphasis on material agency stresses the positive role of matter in action: whether in Latour’s material symmetry or in the functionally co-substantial components emphasized by Malafouris, both accounts underline the undeniable value of matter in the process of interaction between agent and environment. Material agency has long been a considerable topic in the philosophy of technology, especially with the rise of topics such as the philosophy of information, artificial intelligence, and data ethics. Artificial agency, even artificial moral agency, has become a significant topic that cannot be avoided in the philosophy of artificial intelligence. However, the causal agency that material agency relies on has been challenged by the enactive approach, which proposes to replace causal agency with organizational agency in order to understand its meaning. Causal agency is still based on a hydrodynamic model of human action, whereas organizational agency emphasizes not the material encountered, but the world within itself.

Based on this, enactivists propose enactive agency to re-understand the relationship between humans and the world. Enactive agency involves three conditions: self-individualization, interactive asymmetry, and normativity, which respectively point to the organic, sensorimotor, and sociomaterial dimensions. A detailed analysis of enactive agency helps to further contemplate the contemporary turn in the relationship between humans and technology. This turn contains at least four parts: a. The center of the human-technology relationship is the “relationship” rather than the entity of human or technology, and enactive agency, as a relational concept, recognizes the ontology of the relation, which provides a prerequisite for understanding the “relationship” in the human-technology relationship. b. The “relationship” in the human-technology relationship is a process of emergence: the relationship is the coupling with the environment in the process of human-technology interaction, in which the enactive relationship and enactive agency emerge. c. The “relationship” in the human-technology relationship is also a kind of “entanglement”, where the entanglement is not limited to the human body (sensorimotor) and technological materials but also includes social, cultural, linguistic and other environmental factors, and the entanglement itself is a process of intertwining. d. The “relationship” of the human-technology relationship is a continuous process of generation and opening, which is always in an unfinished state, and therefore has many possibilities.

However, there are some limitations in understanding the human-technology relationship through enactive agency, mainly the following: a. The understanding of the relationship originates from a biological basis and therefore depends strongly on the body; it is difficult to establish the organizational relationship apart from the context of the body, so the account cannot be applied to some virtual forms of technology and their human-technology relationships. b. The mode in which agency emerges is insufficiently clarified. Enactivist terms such as “bringing forth” or “emerging” easily fall into the trap of anthropomorphism when explaining the human-technology relationship, which then manifests itself only as a metaphorical appropriation. c. There is a lack of value analysis of agency: the discussion of enactive agency, such as sensorimotor engagement and meaning generation, does not further explain the value orientations that may be embedded in it or how to optimize human-technology interactions through value guidance. As a continuous process of production and openness, there is a possibility of social-morphing, and this is precisely where the ethics of technology can intervene and provide guidance. Therefore, the understanding of the human-technology relationship also needs a more inclusive and practical interpretative framework, which will be continuously discussed in future work.

 
11:20am - 11:50amCoffee & Tea break
Location: Voorhof
11:50am - 1:05pm(Papers) Climate change
Location: Blauwe Zaal
Session Chair: Kaush Kalidindi
 

Geo-engineering revisited: A reformational critique

Maaike Eline Harmsen

VU Amsterdam, The Netherlands

Geo-engineering, the manipulation of Earth's climate systems to mitigate climate change, has garnered significant attention and controversy. While proponents often frame it as a technological panacea, ethical concerns have grown over the last decade. Critics have typically argued that geo-engineering by technological means could exacerbate existing inequalities, create unintended consequences such as entrenching current consumption patterns and harmful policies in rich countries, and potentially distract from the urgent need to reduce those patterns and policies that fuel greenhouse gas emissions. This paper re-examines geo-engineering through the lens of Herman Dooyeweerd's Christian philosophy and worldview. Herman Dooyeweerd (1894-1977) devised a philosophy based on Christian reformational principles, and it has been adapted by his followers to critique the use and value of technology. Dooyeweerd's framework offers a unique perspective on nature and technology, emphasising the motives and consequences of the designing owners and engineers behind any technology's design and usage. But the paper also asks why critiques of non-technological, natural geo-engineering have been fewer, and examines the motives for this lesser critique. By analysing geo-engineering within this context, we can explore the underlying motivations, values, and potential consequences of such interventions. This critique aims to contribute to a more nuanced and responsible discussion of geo-engineering as a potential solution to climate change.

 
11:50am - 1:05pm(Symposium) Maintenance & repair: philosophy of technology after production
Location: Auditorium 1
 

Maintenance & repair: philosophy of technology after production

Chair(s): Mark Thomas Young (University of Oslo, Norway)

Maintenance is one of the fastest growing topics in philosophy of technology. Emerging from a recognition that existing work in philosophy of technology has tended to focus disproportionately on the design and creation of new things, this nascent topic aims to achieve a more balanced appreciation of technology by exploring the widespread and diverse range of practices we employ to keep things going after they have been built, produced or constructed. Doing so helps enable philosophy of technology to speak to a range of current and urgent concerns, such as the environment and sustainability, waste and material flows or the renewed interest in the material and temporal dimensions of technology itself.

The proposed panel emerges from the work of the SPT special interest group “Maintenance and Philosophy of Technology” which since April 2022 has worked to promote and consolidate research in philosophy of technology on the topic of maintenance. The panel aims to showcase research by three philosophers who are currently exploring new directions in this emerging subfield of philosophy of technology. Together, these contributions highlight the wide relevance of the topic of maintenance in philosophy of technology, by engaging with ethical, metaphysical and sociological dimensions of maintenance practices.

The first presentation, by Mark Thomas Young (University of Oslo), explores the notion of artifact metabolisms by examining the way in which artifacts can serve as sites for the passage of materials, such as spare or replacement parts or consumables such as lubricants or protective coatings. Yet despite the close association with sustainability through enabling repair and reducing waste, this presentation aims to explore how the metabolisms of particular artifact kinds are often contested by different social groups, including producers, users and activists. The second presentation, by Tim Juvshik (Middlebury College), examines the significance of maintenance practices for debates on the nature of artifacts in analytic metaphysics. By exploring resonances between the social practice view (Juvshik, forthcoming) of artifact kinds and recent work on the processual nature of artifact maintenance (Young 2024), Juvshik examines how artifact kinds and norms of maintenance influence and change each other over time. The third presentation, by Brooke Rudow (University of Central Florida), examines how the contemporary restrictions and challenges users face in repairing artifacts they own intersect with deeper existential concerns surrounding the nature of human communities. By drawing on the recent controversies surrounding the right to repair farm equipment alongside recent work in environmental philosophy, Rudow’s talk aims to outline how the right to repair itself enables better ways of being in the world.

 

Presentations of the Symposium

 

Artifact metabolisms: the material flows of an iPhone

Mark Thomas Young
University of Oslo

The ancient paradox of the Ship of Theseus – in which the materials of a boat are gradually replaced over time – underscores a feature of artifacts which has largely been overlooked. While most philosophers have approached the paradox as an opportunity to explore metaphysical questions surrounding identity and change, the processes by which artifacts exchange matter with their environments have attracted relatively little attention, not only in philosophy but also within scholarship on technology more generally. Yet gaining a deeper understanding of these processes is clearly important: not only do most artifacts exchange material with their environments in some way throughout their histories, but the extent to which they do so is increasingly recognized as crucial for achieving more sustainable patterns of consumption and fulfilling our rights as consumers and owners of artifacts.

The first section of this presentation aims to introduce and examine the notion of artifact metabolism (Evnine 2016; Young 2024) by exploring how objects serve as sites through which materials flow in the form of spare or replacement parts and consumables such as lubricants or paint. As perhaps the most pervasive activity performed under the guise of maintenance and repair, the replacement of materials and parts encourages us to reflect, not only on the complex and heterogeneous temporalities of the objects which surround us, but also on the wider context in which they exist. Insofar as the replacement of parts demands interchangeability and standardization, artifact metabolism is itself often a hard-won achievement requiring the coordination of various actors. Yet far from being a mere ‘feature’, metabolisms are deeply connected to the achievement that technologies themselves often represent. Many of the most celebrated aspects of modern technology – from the reliability of aircraft engines to the longevity and structural integrity of engineered structures – depend crucially on processes through which parts and materials come to be replaced over time. At the same time, however, the metabolic natures of artifacts are not fixed, and neither do they remain uncontested.

The second section of this presentation aims to examine how the metabolism of artifacts may be negotiated between different actors by exploring the case study of the iPhone. We’ll begin by reviewing attempts by Apple to constrain the metabolism of the phone, by restricting the accessibility of OEM parts and creating barriers to replacement through initiatives such as software pairing or designing phones in ways which discourage user repairs.

Then we’ll look at how these initiatives are balanced by attempts to facilitate the metabolism of the device by a range of other actors, including third-party parts manufacturers in China and right-to-repair activists who disseminate the knowledge and equipment required for the replacement of parts and consumables. As I’ll argue, this unique ecology of repair helps to determine the complex temporality of the iPhone – a device which serves as a site through which a wide range of parts and materials flow.

References

Young, Mark Thomas (2024) “Technology in Process: Maintenance and the Metaphysics of Artifacts” in Mark Thomas Young & Mark Coeckelbergh (eds) Maintenance and Philosophy of Technology: Keeping Things Going. New York: Routledge pp., 58-85

Evnine, Simon. J (2016) Making Objects and Events: A Hylomorphic Theory of Artifacts, Actions and Organisms. Oxford: Oxford University Press

 

Maintenance of ethnological artifacts: from preservation to reconciliation

Mark Theunissen
Delft University of Technology

This paper extends the growing reorientation within the philosophy of technology to thematize the maintenance of technological artifacts to cultural artifacts, specifically ethnological artifacts in museum collections. By surveying how various maintenance practices affect the status and function of ethnological artifacts, which are almost continuously under critical scrutiny, we can learn important lessons about the maintenance of technological artifacts. After discussing the return of the Kogi mask as exemplifying the conflicts in the maintenance practices of ethnological artifacts, the paper discusses three maintenance regimes that shape ethnological artifacts’ material and immaterial status. The upshot is that the perspective of maintenance offers insight into contradictions and conflicts in our understanding of the nature and function of ethnological artifacts. The paper ends with a reflection on the lessons that can be learned from practices and debates around the maintenance of ethnological artifacts for technological artifacts more generally.

 

Brokedown tractors and the existential need for a right to repair

Brooke Rudow
University of Central Florida

Historically the freedom to maintain and repair was taken for granted. These activities needed no legal category to allow or disallow them; they were understood simply as part of ownership. Many of us already recognize, likely with some frustration, that owning something comes with a host of maintenance and repair responsibilities. I have to take care of the things I own if I want them to work or to last. It is only in the last decade or so that the question of the permissibility of doing so has become a pressing legal concern. Does owning something give me the right to do with it what I will? Do property rights include the right to repair? These seem like bizarre questions at first glance but, given that new and emerging technologies incorporate increasingly complicated computer systems and software, often connected to information sharing systems, it has become unclear what one has a right to and to do when owning a thing. Consumers argue that they unjustly lack access to repair information, that completing repairs themselves voids warranties, or that they face legal action for doing so. Manufacturers insist that consumer repair would violate copyright and other intellectual property laws, along with compromising safety and privacy (Grinvald and Tur-Sinai 2024). The debate over these issues is rich and heated, tending to fall into four broadly construed arenas of concern: safety/privacy, economic, political, and environmental. There is much to say about each, but in this presentation, I would like to introduce another way, a deeper and more foundational way, of framing the need for a right to maintain and repair.

I will begin the presentation by briefly outlining the major arguments associated with the arenas of concern above. I will show that, as strong as they are, they each orbit around a deeper existential question about who we are and ought to be as a society and political community. I insist that we must answer this question in order to give these arguments normative force and ultimately secure a right to repair. In the second section, I initiate such an answer by drawing connections between maintenance, repair, identity, and care. Following the insights of Julia Corwin and Vinay Gidwani (and others) that repair work is care work, I will argue that repair and maintenance practices are existential needs that contribute to our essential being as homemakers on this earth (Corwin and Gidwani 2021). In the final section, I return to rights. The rights we have and insist upon reflect the kinds of people we are as a community. They reflect our values, our aspirations, and our commitments. I will argue that by neglecting (or denying) a right to maintain and repair, we increasingly foreclose the existential possibilities for better ways of being in the world.

Through all this, my exemplar will be the tractor. Though there are myriad devices to choose from, the tractor is of particular interest, not least because of the pending case against John Deere for allegedly restricting farmers from repairing their farm equipment, but also because I am persuaded by Paul Thompson’s Agrarian Vision (2010). I believe it is much in the spirit of what I have in mind here, and I will, if time allows, remark on the centrality of a right to repair for the actualization of such a vision.

References

Corwin, Julia E., and Vinay Gidwani. 2021. “Repair Work as Care: On Maintaining the Planet in the Capitalocene.” Antipode. https://doi.org/10.1111/anti.12791.

Grinvald, Leah Chan, and Ofer Tur-Sinai. 2024. “Defending the Right to Repair.” In Feminist Cyberlaw, edited by Meg Leta Jones and Amanda Levendowski, 1st ed., 25–37. University of California Press. https://doi.org/10.2307/jj.14086449.5.

Thompson, Paul. 2010. The Agrarian Vision. The University Press of Kentucky. https://www.kentuckypress.com/9780813125879/the-agrarian-vision.

 
11:50am - 1:05pm(Papers) System Design
Location: Auditorium 2
Session Chair: Wybo Houkes
 

Technological immersion and automatism

Roberto Wu

Universidade Federal de Santa Catarina, Brazil

Although technological mediation is a central concept in many theories, its stratification is not univocal. Don Ihde (1979) notably distinguished it in terms of embodiment, hermeneutic, alterity, and background relations. While partially assuming this background, Peter-Paul Verbeek (2008) revisits Ihde’s conception of technological mediation as a single form of intentionality, which he complements with what he terms hybrid and composite intentionalities. My aim is not to examine or criticize their accounts, but to suggest that what I call technological immersion has been neglected or has remained secondary in their analyses. As I conceive it, technological immersion refers to a situation in which technological artifacts establish a closed system of meanings, making it relatively independent of other systems while producing the illusion of being a definitive horizon. This independence tends to isolate one’s being-in-the-world, including one’s beliefs and values, which rely on this system, from other sources of information. It is thus an experience that remains immanent to a given technological system. In technological immersion, an artifact or a set of artifacts directs one’s attention to the phenomenon it produces, not in a way that this phenomenon adds to other phenomena of one’s lifeworld, but rather by severing it from concurrent systems. Considering this, I would like to examine three main aspects. The first one may be summarized as follows: does complex technology, which normally implies a system, tend to generate immersion? Technological systems provide a material ambience and an internal coherence of meanings that impose themselves as a framework conditioning understanding and expectation. This suggests that the more complex a technological system is, the more one tends to remain within its immanence. A second topic relates to the characteristic traits of, and the differences between, distinct levels of technological immersion. Noise-cancelling earbuds lead to some isolation, as the user may not be aware of dangers while driving a car, but this cannot be compared to the level of immersion provided by a VR device. Whatever the level of isolation they produce, it eventually ceases after their use. Another form of immersion occurs with fake news on social media, as its effects do not simply cease with its interruption. The interval between one message and another does not break the flow of information, as the impact of a given message lasts and links to the next one. Fake news therefore provides an absolute worldview that underlies each opinion and behavior, leading to a more encompassing form of immersion. The third aspect is the deliberate employment of technological immersion to engender automatism. Although the relation between materiality and alienation is an old topic, as studies on fascism or consumer behavior have shown, current technologies are able to create absolute immersion, detaching people from their lifeworld while predisposing them to a certain automatism, that is, to a practice that remains immanent to the underlying technological system. In sum, an understanding of technological immersion seems to be a valuable methodological key for examining the way technology shapes our practices.



Integrating social vulnerabilities in implementing nuclear power plant design

Johanes Narasetu Widyatmanto

Karlsruhe Institute of Technology, Germany

The resilience of nuclear power plants (NPPs) can generally be calculated by measuring the potential compromised performance per dollar under foreseen disruptions, be they in the material supply chain, fuel supply, power plant design, and so on. Vulnerability is crucial in assessing grid resilience, since when a grid is exposed to disruptions, i.e. vulnerable, its performance is more likely to be compromised. This techno-economic way of addressing vulnerabilities is useful for engineers and business owners aiming to build a resilient grid, but it falls short for policymakers who consider adding NPPs to an energy mix.

For energy policymaking, the importance of addressing vulnerabilities extends beyond engineering a functioning NPP. A grid’s exposure to disruptions affects its users’ vulnerability. Therefore, policymakers should regard a grid’s vulnerabilities beyond technical and economic characterisations by taking into account the social vulnerabilities caused by NPP vulnerabilities.

This paper proposes how social vulnerabilities affected by NPP vulnerabilities should be addressed at the policy level via Martha Nussbaum’s capabilities approach. Within this frame, vulnerabilities directly correspond with capabilities as a fundamental human condition, and understanding the relation between the two will facilitate the just treatment of individuals. Capabilities have to be intentionally created, and by extension, vulnerabilities arise when one fails to create capabilities. Following this insight, the list of basic capabilities, which Nussbaum herself regards as non-exhaustive, exemplifies the importance of setting up a minimum well-being threshold particular to a community context, to then be used as a parameter of resilient NPPs.

How to set up a minimum well-being threshold that increases capabilities while decreasing vulnerabilities is the insight this paper offers on the problem of reducing social vulnerabilities via resilient NPPs. We contend that in making an NPP more resilient, social vulnerabilities lie beyond techno-economic vulnerabilities and should be prioritised. We do not intend to add to or reduce the list of capabilities, but rather to apply the centrality of capabilities in Nussbaum’s account to the context-dependent social vulnerabilities created and reduced by the choice of NPP design.

This work is a contribution from nuclear energy ethics to the politics of nuclear energy. It assumes that no grid is resilient to all disruptions and suggests that consideration of incommensurable social vulnerabilities plays an important role in choosing an NPP design. This work discusses: 1) how social vulnerabilities are tied to and interact with grid vulnerabilities; 2) how understanding social vulnerabilities can inform NPP system design; and 3) what competence is required of policymakers to set a priority order among different social vulnerabilities in building an NPP.

 
11:50am - 1:05pm(Papers) Authenticity
Location: Auditorium 3
Session Chair: Filippo Santoni de Sio
 

AI-produced research in the humanities: Am I the author? Does it matter?

Thomas Nelson Metcalf1,2

1Institute for Science and Ethics, University of Bonn, Germany; 2Spring Hill College, Mobile, Alabama, United States

Large language models (“LLMs”) can produce broadly convincing argumentation about a variety of academic subjects, including those in the humanities. Interestingly, these LLMs can also be “trained” on one’s own academic writing: one may submit one’s own works to an LLM and have it thereby come to learn one’s research style, interests, and substantive commitments in one’s field. And this training can be supplemented by ongoing conversations with the chatbot. Thus, the LLM can come to know you and your academic research very intimately, if you let it (and sometimes, even if you don’t).

Suppose that an academic researcher in the humanities—for example, a philosopher or literary critic—trains an LLM on two key “datasets”: (1) the subject-matter literature the researcher wishes to write about; and (2) the researcher’s own academic-research style, works, interests, goals, and orientations, in the forms of written works and real-time chat conversation. The LLM may already know the subject matter of the research well, but in this way, the LLM also comes to know the researcher very well. Now the researcher prompts the LLM to produce an original manuscript based on those two datasets. The researcher then submits the manuscript, under their own name, to a journal or conference, without explicitly acknowledging the role of the LLM in producing the manuscript.

This raises several philosophically interesting questions, some of which have not been addressed in the literature so far. First, we can ask whether the researcher committed any kind of research misconduct or plagiarism. But we might also ask whether we should hope or dread that such practices become common in the academy.

I will argue that, in general, the researcher has not necessarily committed any serious research misconduct. The researcher’s approach is not fundamentally different from current research practices in the humanities, and it bears an intimate-enough connection to the researcher’s identity that it does not qualify as any kind of plagiarism.

Yet what are the advantages and disadvantages of an academia in which this “research” method becomes common? I suggest that the advantage will be the higher production of research output, which will have a certain kind of greater familiarity with the existing academic literature. The disadvantages will result from the fact that academic research is now quite a bit easier.

This may produce several problems, but at least one is philosophically interesting: Such a system would weaken the intimate, identity-based connection between the author and the work produced. This will undermine a kind of valuable vulnerability that has thus far been present in academic research. If it becomes much easier to produce academic research, and the research produced is less associated by the author with their own identity, then there will be less motivation to ensure academic and even moral quality in the research.

I respond to objections; provide recommendations about how to cultivate some of the benefits of LLM-assisted research in the humanities while avoiding some of these problems; and briefly draw connections to related areas of the philosophy of technology.

Acknowledgments:

None of this text and research was generated by AI.

References:

Babushkina, D., & Votsis, A. (2022). Disruption, technology and the question of (artificial) identity. AI Ethics, 2, 611–622.

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623.

Carr, N. (2010). The shallows: How the internet is changing the way we think, read and remember. Atlantic Books.

Chan, C. K. Y. (2023). Is AI changing the rules of academic misconduct? An in-depth look at students’ perceptions of “AI-giarism.” ArXiv. https://arxiv.org/abs/2306.03358

Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

Floridi, L., & Chiriatti, M. (2020). GPT-3: Its nature, scope, limits, and consequences. Minds and Machines, 30, 681–694.

Ganjavi, C., Eppler, M. B., Pekcan, A., Biedermann, B., Abreu, A., Collins, G. S., Gill, I. S., & Cacciamani, G. E. (2024). Publishers’ and journals’ instructions to authors on use of generative artificial intelligence in academic and scientific publishing: bibliometric analysis. BMJ, 384, e077192.

Gupta, S., Ranjan, R., & Singh, S. N. (2024). A comprehensive survey of Retrieval-Augmented Generation (RAG): Evolution, current landscape and future directions. ArXiv. https://arxiv.org/abs/2410.12837

Hosseini, M., Rasmussen, L. M., & Resnik, D. B. (2023). Using AI to write scholarly publications. Accountability in Research, 31(7), 715–723.

Kwon, D. (2024, July 30). AI is complicating plagiarism. How should scientists respond? Nature. https://www.nature.com/articles/d41586-024-02371-z

Mann, S. P., Vazirani, A. A., Aboy, M., Earp, B. D., Minssen, T., Cohen, I.G., & Savulescu, J. (2024). Guidelines for ethical use and acknowledgement of large language models in academic writing. Nature Machine Intelligence, 6, 1272–1274.

Fui-Hoon Nah, F., Zheng, R., Cai, J., Siau, K., & Chen, L. (2023). Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. Journal of Information Technology Case and Application Research, 25(3), 277–304.

Raman, R. (2023). Transparency in research: An analysis of ChatGPT usage acknowledgment by authors across disciplines and geographies. Accountability in Research, 1–22. https://doi.org/10.1080/08989621.2023.2273377

Silva, V. T., de Souza, J. P. G., & Cerqueira, R. F. D. (2024). Lessons learned in knowledge extraction from unstructured data with LLMs for material. ACS Spring 2024. https://research.ibm.com/publications/lessons-learned-in-knowledge-extraction-from-unstructured-data-with-llms-for-material-discovery

Tadimalla, S. Y. & Maher, M. L. (2024). AI and identity. ArXiv. https://arxiv.org/html/2403.07924v2

Vallor, S. (2024). The AI mirror: How to reclaim our humanity in an age of machine thinking. Oxford University Press.

van Est, Q. C., Rerimassie, V., van Keulen, I., & Dorren, G. (2014). Intimate technology: the battle for our body and behaviour. Rathenau Instituut.

van Riemsdijk, M. B. (2018). Intimate computing: Abstract for the philosophy conference “Dimensions of Vulnerability.” Intimate Computing. https://intimate-computing.net/wp-content/uploads/2019/03/riemsdijk18dov.pdf

van Rooij, I. (2022, December 29). Against automated plagiarism. Iris van Rooij. https://irisvanrooijcogsci.com/2022/12/29/against-automated-plagiarism/

Wu, X. & Tsioutsiouliklis, K. (2024). Thinking with knowledge graphs: Enhancing LLM reasoning through structured data. ArXiv. https://arxiv.org/abs/2412.10654



From Automation to Authenticity: Rethinking AI through the MEAT framework

Rasleen Kour

Indian Institute of Technology Ropar, Punjab, India

We live in a world influenced by artificial intelligence (AI), which has solved numerous problems and revealed unexplored dimensions of human existence. However, AI poses significant risks alongside its benefits, including the possibility of artificial humans acting uncontrollably (Coeckelbergh 2020). The critical question is not about having a positive or negative relationship with AI, but whether our engagement with it is meaningful. Meaningful engagement emphasizes fostering authentic relationships with technology, preserving human connections, supporting environmental stewardship, and upholding moral, social, cultural, and intrinsic values. Appropriate technology should prioritize individual well-being and resource efficiency without harming the environment. However, AI often falls short of these ideals.

I developed the MEAT (Meaningful Engagement with Appropriate Technology) model, which aligns more effectively with low-tech solutions than high-tech automation by prioritizing transparency and user involvement. Nonetheless, in an era dominated by automation, returning to pre-technological times is not an option. What is required instead is a critical reflection on how technology shapes our social, cultural, and community ties. Humans are not “unencumbered selves” and must strike a balance with their surroundings to ensure that automation remains meaningful by maintaining transparency and encouraging user involvement. For instance, fully automated technologies like autonomous cars lack the participatory dimension required for meaningful engagement, as meaningful interaction with technology should grow alongside humanity rather than sidelining it.

The MEAT model emphasizes the importance of critically examining the role of technology in creating a more fulfilling and examined life. It proposes shifting the focus from the how—how a technology works or how it is used—to the why—why a technology is necessary and valuable. In the human-technology relation, in contrast to postphenomenological thinkers, we need to replace the lowercase ‘h’ (the particular human) with an uppercase ‘H’ to recognize that not all technologies are meaningful for all humans. It is essential to explore which technologies are truly beneficial and to shift the relationship from (h-t) to (H-t). Meaningful engagement with technology relies on three key principles. First, it must retain human autonomy by avoiding manipulative technological influences. Second, it should focus on upskilling rather than deskilling. Third, it should free humans from monotonous tasks and foster more corporeal human-to-human (h-h) relationships, reinforcing the emancipatory potential of technology. This approach underscores the importance of human essence (H), self-sufficiency, collective welfare, and equilibrium between humans, nature, and technology. When applied to AI, meaningful engagement necessitates limiting automation to ensure that users’ roles are not entirely replaced. Authentic human engagement requires more than automation to address the imbalance between humanity, technology, and nature. We must prioritize social fixes over superficial technological solutions (an idea inspired by Borgmann) to foster a more harmonious and meaningful coexistence with AI.



Autonomy, relationality and emancipation in the digital age

Eloise Changyue Soulier

University of Hamburg, Germany

Guidelines and legal frameworks on artificial intelligence (AI) and digital technologies often consider human autonomy as one of the main values to protect [1]. Abundant academic scholarship is also concerned with autonomy and digital technologies [2, 3, 4]. Although the concern for human autonomy is neither new nor limited to digital technologies, their wide-ranging scope and their intransparent and adaptive nature make this concern particularly salient [2].

Western law in general, including the regulation of digital technologies, arguably relies on a Kantian understanding of autonomy as self-legislation by rational atomistic individuals [5, 6]. As such, respecting a user’s autonomy means providing them with all potentially relevant information and then not interfering with their decision-making process. In the context of digital technologies, this is exemplified by regulatory measures such as informed consent approaches to data protection. This latter example illustrates the failure of a Kantian conception of autonomy to live up to its own ideal of rational independent decision-making, and could be argued to rather serve a neoliberal agenda that overburdens individuals [7].

To understand and begin to address these shortcomings, it is useful to draw from the scholarly critiques of the Kantian conception of autonomy. These critiques pertain both to its epistemic value and to the normative ideal it serves. The cognitive sciences have struck a blow at the idea of rational decision making [8]. Critical theories, importantly feminist theory, have challenged this conception as obscuring our relationality and interdependence, as well as conveying a masculinist, individualist ideal [9]. Critiques stemming from the philosophy of technology, but also from disability studies, underline our dependence on the technological infrastructure [10].

Scholars who challenge this conception nevertheless recognize the need for a concept of autonomy, crucially because any emancipatory project requires one [5]. The question is rather: which concept of autonomy? Within the framework of pragmatist conceptual ethics [11], this question amounts to examining the function fulfilled by the concept of autonomy. I argue that adopting a conception of autonomy that makes room for relationality is not only epistemically and practically more fruitful but also a better normative ideal. Nevertheless, drawing on Christman [12] and Khader [13], I claim that the purpose of emancipation requires a conception of autonomy that is not constitutively relational. In light of these considerations of the different purposes a concept of autonomy should serve, I propose to operate with a conception of autonomy as “the ability to structure our dependences”.

Finally, I show that these theoretical reflections on autonomy have very practical consequences for the design and regulation of digital technologies, of which I discuss two examples: the PIMS (personal information management system) mechanism of cookie banner management and the possible tailoring of recommendation systems in a way that supports this organized dependence.

References

[1] Anna Jobin, Marcello Ienca, and Effy Vayena. The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9):389–399, 2019.

[2] Daniel Susser, Beate Roessler, and Helen Nissenbaum. Technology, autonomy, and manipulation. Internet Policy Review, 8(2), 2019.

[3] Karen Yeung. ‘Hypernudge’: Big data as a mode of regulation by design. In The social power of algorithms, pages 118–136. Routledge, 2019.

[4] Alan Rubel, Clinton Castro, and Adam Pham. Algorithms and autonomy: the ethics of automated decision systems. Cambridge University Press, 2021.

[5] Jennifer Nedelsky. Law's Relations: A Relational Theory of Self, Autonomy, and Law. 2011.

[6] Tal Z Zarsky. Privacy and manipulation in the digital age. Theoretical Inquiries in Law, 20(1):157–188, 2019.

[7] Philip M. Napoli. Social media and the public interest: Governance of news platforms in the realm of individual and algorithmic gatekeepers. Telecommunications Policy, 39(9):751–760, 2015.

[8] Daniel Kahneman and Amos Tversky. Intuitive prediction: Biases and corrective procedures. Technical report, Decisions and Designs Inc Mclean Va, 1977.

[9] Catriona Mackenzie and Natalie Stoljar. Relational autonomy: Feminist perspectives on autonomy, agency, and the social self. Oxford University Press, 2000.

[10] Carolyn Ells. Lessons about autonomy from the experience of disability. Social theory and practice, 27(4):599–615, 2001.

[11] Amie L Thomasson. A pragmatic method for normative conceptual work. Conceptual engineering and conceptual ethics, pages 435–458, 2020.

[12] John Christman. Relational autonomy, liberal individualism, and the social constitution of selves. Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, 117(1/2):143–164, 2004.

[13] Serene J Khader. The feminist case against relational autonomy. Journal of Moral Philosophy, 17(5):499–526, 2020.

 
11:50am - 1:05pm(Papers) Epistemology II
Location: Auditorium 4
Session Chair: Philip Nickel
 

The epistemology of AI: public perceptions and interventions for responsible use

Aviv Barnoy

Erasmus University Rotterdam, Netherlands, The

The rapid evolution of large language models (LLMs) has challenged traditional epistemological theories, sparking debates on the epistemic value of non-human testimony. Freiman’s (2024) theory of LLM testimony offers a conceptual framework of six epistemic dimensions: intention, normative assessment, trust, propositional content, independence from human involvement, and phenomenological similarity. This study empirically examines these dimensions, aligning them with public perceptions and testing their cohesiveness.

While trust in AI has been extensively studied (Glickson & Woolley, 2020), research on LLMs as knowledge sources remains limited. This study addresses gaps in understanding the public's epistemic perceptions of AI testimony, particularly in fostering critical thinking amid growing concerns about AI bias (Kundi et al., 2023). The primary focus is public perceptions of AI-generated testimony across six epistemic dimensions outlined by Freiman (2024): 1. having intention; 2. being normatively assessable; 3. constituting an object in trust relations; 4. having propositional content; 5. being generated and delivered with no direct human involvement; and 6. producing output that is phenomenologically perceived like human testimony.

Using a pre-registered survey of 831 U.S. participants, we explored two research questions (and tested six hypotheses):

1. To what extent do Freiman’s AI testimony criteria (ATC) align with public perceptions?

2. How are these perceptions associated with epistemic beliefs, AI familiarity, usage, and demographics?

The study confirms the epistemic relevance of the six factors, though with varying levels of public agreement. Scores for all factors ranged from 2.98 to 3.59 on a 5-point Likert scale, with SDs of up to 1.2.

To address RQ1, factor and cluster analyses examined the cohesiveness of the six variables, confirming H1. A single-factor solution explained 47% of variance (KMO=0.819; Cronbach’s α=0.75), with higher loadings for traditional dimensions (Trust: 0.81; Intention: 0.76; Normative Assessment: 0.76) compared to newer dimensions (Phenomenological Similarity: 0.73; Propositional: 0.59; Human Involvement: 0.35). Cluster analysis identified two groups: a majority that could be considered a consensus (93%), showing general agreement with the ATC, and a minority (7%) with significantly lower agreement.

Hierarchical regression analysis provided mixed results. Perceptions of ATC were positively predicted by epistemic beliefs (β=0.15, p<0.001), ease of use (β=0.36, p<0.001), and AI usage (β=0.15, p<0.001). However, familiarity with AI was not a significant predictor (β=-0.04, p=0.44). Contrary to expectations, demographic factors such as income, gender, and education were not significant predictors, with age showing only a weak positive effect (β=0.09, p=0.01).
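As a purely illustrative sketch of the analysis pipeline reported above, the single-factor solution, the two-group cluster analysis, and the block-wise (hierarchical) regression could be reproduced roughly as follows in Python. The data file, column names, and choice of libraries (factor_analyzer, pingouin, scikit-learn, statsmodels) are assumptions of this sketch, not details taken from the study.

import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo
import pingouin as pg
from sklearn.cluster import KMeans
import statsmodels.api as sm

# Hypothetical data file: one row per respondent, one 1-5 rating per ATC dimension,
# plus numerically coded predictors (demographics, epistemic beliefs, AI use).
df = pd.read_csv("atc_survey.csv")
atc_items = ["trust", "intention", "normative_assessment",
             "phenomenological_similarity", "propositional_content", "human_involvement"]

# Single-factor solution with sampling adequacy (KMO) and internal consistency (Cronbach's alpha)
_, kmo_overall = calculate_kmo(df[atc_items])
alpha, _ = pg.cronbach_alpha(data=df[atc_items])
fa = FactorAnalyzer(n_factors=1, rotation=None).fit(df[atc_items])
print(f"KMO={kmo_overall:.3f}, alpha={alpha:.2f}")
print(dict(zip(atc_items, fa.loadings_.ravel())))

# Two-group cluster analysis over the six dimensions
df["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(df[atc_items])
print(df["cluster"].value_counts(normalize=True))

# Hierarchical regression on the mean ATC score: demographics first, then epistemic/AI predictors
atc_score = df[atc_items].mean(axis=1)
block1 = ["age", "gender", "education", "income"]  # hypothetical predictor names
block2 = block1 + ["epistemic_beliefs", "ease_of_use", "ai_usage", "ai_familiarity"]
for name, block in [("block 1", block1), ("block 2", block2)]:
    model = sm.OLS(atc_score, sm.add_constant(df[block])).fit()
    print(name, round(model.rsquared, 3), model.params.round(2).to_dict())

In such a pipeline, the comparison of the two regression blocks carries the hierarchical structure: the change in explained variance when the epistemic and AI-related predictors are added on top of the demographic block.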

The findings provide empirical evidence supporting the epistemic relevance of Freiman’s six categories of AI testimony criteria (ATC), offering critical validation of the framework's applicability. The results reveal that while all six dimensions hold conceptual significance, public perceptions are not uniform across them. This disparity highlights the need to continue fine-tuning the factors. The dominant cluster’s broad acceptance of the criteria suggests a general receptiveness to AI-generated knowledge.

Predictive analyses further illuminate the factors influencing perceptions of ATC, and the relation of the factors to general epistemic perceptions. Beyond the theory, these findings carry implications for efforts to foster critical engagement with AI outputs.



The epistemic competences needed for Human Machine Interaction: Dealing with individual, social, and other factors to solve the engineering problem of interface design

Michael Poznic1, Vivek Kant2

1Karlsruhe Institute of Technology, Germany; 2Indian Institute of Technology Kanpur

While digitalization in the technological sector grows in various industries throughout the world, there is a need to comprehend how humans interact with such digital technologies. Specifically, an area of design engineering known as Human Machine Interaction endeavors to design interfaces in complex sociotechnical systems. The aim of this paper is to analyze, from an epistemological point of view, what is required for interface design in the context of such complex systems. Our focus is the epistemic achievement of engineering understanding that interface designers are striving for. So, the topic of this paper is which kind of epistemic features this achievement of understanding encompasses. We will discuss a particular interface within a power plant as a representative example of an energy infrastructure. In complex systems such as power plants, engineers often control energy-based processes through the use of digital technologies in control rooms. Operators have to comprehend the functioning of the processes through display technologies and the information appearing on the screens (interfaces). A prominent challenge for engineers is how to structure the information in such circumstances so that several background factors can be considered. First, the state of the individual operators, in terms of their physiological and psychological make-up, should be considered. If an operator is fatigued, the chances of comprehending information incorrectly increase. Next, team-based factors have to be considered. One example is how to deal with trouble-shooting problems in the control room (such as unexpected alarms); working in teams requires specific competencies that affect the performance of the individual as well as of the whole team in the control room. Further, organizational and management policies such as the assignment of shift work, incentives, and other aspects of workforce management affect productivity and workers’ behavior. Similarly, regulations and legislation impact the functioning of the sector as a whole as well as the individual operational practices of the operators in the control room. The engineering designer has to take all of these individual and extra-individual factors into account; the designed interface ultimately depends on them. The challenge for the designer is to gain an understanding of how these different aspects of the problem come together, as reflected in the individual and extra-individual factors. There are various forms of knowledge the engineering designer has to compile. Yet, how they need to be collated and comprehended together to yield the understanding required to design an interface for a complex sociotechnical system is an open question. We will discuss a concrete example to spell out the different roles of the epistemic features as regards the design engineer’s understanding of the interplay of factors that contribute to the development of interfaces in such complex systems.



Intimate compression: AI systems and the personal nature of architectural knowledge

Simon Maris

University of Applied Sciences Trier, Germany

We introduce the concept of "intimate compression" to examine how AI systems transform architectural knowledge from embodied practice into encoded patterns. The term "intimate" draws from M. Birna van Riemsdijk's framework of intimate computing [1], which argues that intimate technologies make us vulnerable in a specific way by affecting physical, psychological and social aspects of our identity. Van Riemsdijk links our relations with and through intimate technologies to the process of forming intimate relationships, involving self-disclosure and partner responsiveness. In the context of architecture, intimate knowledge refers to the deeply personal understanding that architects develop through direct bodily engagement with space, materials, and context. As Schrijver notes, much of architecture's knowledge "resides beneath the surface, in nonverbal instruments" [2] that articulate spatial imagination and design process.

Drawing on Polanyi's original concept of tacit knowledge [3] and its architectural interpretation by Pallasmaa [4], we examine how AI systems attempt to compress these intimate dimensions of architectural practice. This compression manifests in how current design tools try to encode spatial understanding, material intuition, and design thinking - aspects that traditionally "escape quantifiable dimensions of research" [2]. Unlike traditional design tools that simply execute commands, AI systems actively attempt to learn and reproduce these personal aspects of architectural work.

We argue that this process of intimate compression raises critical questions about the nature of architectural expertise and the role of embodied knowledge in design. As AI systems compress and encode cultural contexts and complex information, they reshape the foundations of architectural research and practice. This intimate compression has profound implications for how architects engage with pressing challenges such as sustainability, potentially offering new perspectives but also introducing additional layers of complexity.

New forms of vulnerability emerge when deeply personal aspects of architectural practice encounter algorithmic encoding. As AI systems increasingly mediate the relationship between architect and design process, they reshape the intimate foundations of architectural knowledge production. We conclude by discussing the implications of these shifts for architectural education, practice, and research, highlighting the need for critical frameworks to address the personal and embodied dimensions of architectural knowledge in an era of AI-driven design. As the landscape of architectural research evolves, intimate compression emerges as a key concept for understanding and shaping the future of the discipline. This paper aims to bridge concepts from architecture, research, AI, and society to provoke reflection and debate on the consequences of the intimate technological revolution for architectural knowledge production.

References:

[1] Birna van Riemsdijk, M. Intimate Computing. Abstract presented at the Philosophy Conference "Dimensions of Vulnerability". Vienna, April 2018.

[2] Schrijver, L. The Tacit Dimension: Architecture Knowledge and Scientific Research. Leuven, May 2021.

[3] Polanyi, M. The Tacit Dimension. London, 1967.

[4] Pallasmaa, J. The Thinking Hand: Existential and Embodied Wisdom in Architecture. Chichester, 2009.

 
11:50am - 1:05pm(Papers) Ethics VI
Location: Auditorium 5
Session Chair: Nynke van Uffelen
 

From artificial wombs to lab grown embryos: technologies and the myth of the ex-utero human

Llona Kavege1, Amy Hinterberger2

1The University of Edinburgh; 2University of Washington

Two years ago, Nature's Method of the Year award went to the research developments in stem cell-based embryo models (SCBEMs), previously referred to as embryoids or sometimes commonly (and erroneously) as synthetic embryos. These three-dimensional cellular assemblies are derived from diploid pluripotent stem cells that can recapitulate early embryonic development. Human SCBEMs are particularly beneficial to scientists by providing a promising alternative model organism to counter the limitations of nonhuman animal models, the inaccessibility of human embryos during gastrulation, and the ethical restrictions of human embryo research. Most debates over SCBEMs in the literature have focused on nomenclature and the ethical and legal implications if SCBEMs became so sophisticated that they could not be distinguished from egg-sperm fertilized human embryos.

Another research project gaining speed in recent years is the different devices being developed to enable partial-ectogestation. This consists in transferring a fetus from the maternal womb to a machine system recreating conditions of gestation to prolong fetal development ex utero and improve the survival chances of would-be preterm neonates. Here again, the reception of EGS has varied in the literature and revolved around terminological disagreements, and the ethical and ontological status of birth and the fetus post transfer.

In this paper, we approach these two seemingly disparate research undertakings from a lens that examines the technologies that are enabling these advances rather than looking at the organisms whose development they facilitate.

By critically examining the infrastructures supporting SCBEMs and ectogestation — i.e. the bespoke incubators, biobags, and the sociotechnical systems and research agendas driving their development— we aim to move beyond entity-centered ethical discussions and foster a more comprehensive understanding of the role and impact of these technologies in our lives.

We draw from the theory of technological mediation, STS, and bioethics scholarship to examine the interplay between these two technologies and how their development ties to a larger trend in reproductive sciences, where technologies mediate and increasingly frame human embryos and fetuses as standalone entities, detached from a pregnant person's body.

Furthermore, we offer a novel interpretation of a recent Alabama (USA) court case that granted personhood to cryopreserved embryos based on the future possibility of full ectogestation, highlighting the pivotal role these technologies play in reimagining and reinventing how we conceive of and engage with the beginning of life.

Finally, we reflect on the bioethical implications of these technological advances in political climates where they could be used to restrict reproductive rights and access to healthcare.



AI agency in medical practices: The case of pathology

Oceane Fiant

Université Côte d'Azur, France

The introduction of artificial intelligence (AI) systems in medicine, whether to assist or replace physicians in certain tasks, is often seen as a promising solution to reduce or even eliminate "inter-observer variability" (Tizhoosh et al., 2021)—that is, discrepancies in the interpretation of the same medical data by different healthcare professionals. This phenomenon is particularly relevant to pathology, a critical medical specialty in cancer care. This specialty involves diagnosing, establishing prognoses, and guiding therapeutic management of cancers based on the analysis of cellular and tissue samples. However, pathologists' analyses can vary significantly depending on both the practitioners and the specific characteristics of the centers where they work (Rabe et al., 2019). In this context, AI is expected to help pathologists refine their analyses and enhance their reproducibility, thus addressing the problem of inter-observer variability (Shafi and Parwani, 2023).

However, the idea that AI systems alone could reduce or even eliminate inter-observer variability merits closer examination. This presentation seeks to demonstrate that achieving this effect requires conditions beyond the inherent properties of these systems. In other words, using the case of pathology, this presentation aims to show that AI systems do not possess inherent "agency" (Verbeek, 2005)—that is, they cannot independently transform medical practices or solve issues such as inter-observer variability. Instead, their agency can only be exercised within a "milieu" (Triclot et al., 2024) that must possess specific characteristics for these systems to produce the expected outcomes.

Based on field research, this presentation will specifically demonstrate that deploying AI systems developed for pathology will only lead to effective homogenization of practitioners’ analyses if glass slides—the primary material on which pathologists base their analyses—are highly standardized. Achieving this, in turn, requires a profound transformation of the instrumentation and workflow of pathology laboratories—a transformation that remains far from complete.

References:

Rabe K., Snir O. L., Bossuyt V., Harigopal M., Celli R. et Reisenbichler E. S. (2019), “Interobserver variability in breast carcinoma grading results in prognostic stage differences”, Human Pathology 94, p. 51‑57.

Shafi S. et Parwani A. V. (2023), “Artificial intelligence in diagnostic pathology”, Diagnostic Pathology 18, 109.

Tizhoosh H. R., Diamandis P., Campbell C. J. V., Safarpoor A., Kalra S., Maleki D., Riasatian A. et Babaie M. (2021), “Searching images for consensus: Can AI remove observer variability in pathology?”, The American Journal of Pathology 191, p. 1702‑1708.

Triclot M. (Dir.) (2024), Prendre soin des milieux : manuel de conception technologique, Éditions matériologiques.

Verbeek, P.-P. (2005), What things do: Philosophical reflections on technology, agency, and design, Pennsylvania State University Press.



Could Artificial Intelligence Assuage Loneliness? If so, which kind?

Ramon Alvarado

University of Oregon, United States of America

Generative AI technologies, such as large language models and generative pretrained transformers, as well as the interfaces with which we interact with them, have developed impressively in recent years. Not only can they abstract and generate impressive results, but it is becoming increasingly easier for most of us to prompt them. We can enter input not only in multiple human and machine languages, but also in multiple modalities: text, audio, video, etc.

Given these developments, the uses for AI have transcended the confines of academic, industrial, or scientific settings and entered our everyday lives. Soon, trimmed-down versions capable of running locally on personal devices will be able to accompany and assist us wherever and whenever we need them (Carreira et al., 2023).

Although philosophers of technology have considered the implications of pervasive technical mediation long before the advent of these more recent technologies, AI is distinct in non-trivial ways. AI technologies, for example, are first and foremost epistemic technologies [1] — technologies primarily designed, developed and deployed as epistemic enhancers (Humphreys, 2004). Furthermore, in their generative form, their powerful and versatile multimodal output capacities can be seen as enabling them to play some part in what are usually considered social roles (Kempt, 2022; Symons and Abumusab, 2024). At the very least, their sophisticated and responsive output can be seen as playing the role of an interlocutor. As such, AI can be prompted, queried, and interacted with as one would with an assistant, a peer, a friend, a romantic partner, or a caregiver.

Given these latter points, plus the undeniable prudential good of social connections and the societal and communicative aspects conventionally taken to be at the center of significant social challenges such as those related to loneliness, technologists and practitioners have begun to ponder and test the use of AI in these socially rich contexts (De Freitas et al., 2024; Savic, 2024; Sullivan et al., 2023). Philosophers have also begun to pay attention to both these uses as well as to their implications. Symons and Sanwoolu (forthcoming), for example, suggest that given that an AI product could be available to many people simultaneously and without conventional social or physical restrictions, it will be unable to meet certain conditions — such as scarcity, uncertainty, and friction — that ground meaningful social connections. If this is true, then AI will be unable to have any bearing on or assuage loneliness, or so some of these arguments go.

In this paper, I argue that there is no such thing as ‘addressing loneliness’ simpliciter. There are distinct kinds of loneliness, and they are responsive to distinct kinds of interventions (Creasy, 2023; Alvarado, 2024). Hence, it may prove more fruitful to ask which kind of loneliness AI could address, if any. I conclude by suggesting that, as an epistemic technology, AI may very well be able to address epistemic loneliness (Alvarado, 2024) — a kind of loneliness that arises in virtue of the absence of epistemic peers with which to construct, accrue or share knowledge. This may be the case, however, only if we can deem AI an epistemic partner (ibid) — a willing, able, actual, and engaging epistemic peer.

[1] Alvarado suggests that epistemic technologies are epistemic to the degree to which they meet the following three conditions: they are primarily designed, developed and deployed (a) in epistemic contexts (e.g., inquiry), (b) to deal with epistemic content (e.g., symbols, propositions, etc.), and (c) via epistemic operations (analysis, prediction, etc.) (Alvarado, 2023). While AI is not the only technology to meet some or all of these conditions, according to Alvarado, AI meets them to the highest degree amongst computational methods, which makes it a paradigmatic example of an epistemic technology.

Bibliography

Alvarado, R. (2022a). What kind of trust does AI deserve, if any? AI and Ethics, 1-15.

Alvarado, R. (2023). AI as an Epistemic Technology. Science and Engineering Ethics, 29(5), 32.

Alvarado, Ramón (2024) What is Epistemic Loneliness? [Preprint]

Carreira, S., Marques, T., Ribeiro, J., & Grilo, C. (2023). Revolutionizing Mobile Interaction: Enabling a 3 Billion Parameter GPT LLM on Mobile. arXiv preprint arXiv:2310.01434.

Creasy, K. (2023). Loved, yet lonely.

De Freitas, Julian and Uğuralp, Ahmet Kaan and Uğuralp, Zeliha and Puntoni, Stefano, AI

 
11:50am - 1:05pm(Papers) Politics II
Location: Auditorium 6
Session Chair: Alessio Gerola
 

The drivers of technological hegemony: the political dynamic of the computerization of the French National Health Insurance Fund (1963-1979)

Maud Barret Bertelloni1,2

1Université Technologique de Compiègne, France; 2Sciences Po, France

Since Langdon Winner sparked debate on the politics of artefacts (Winner, 1980), authors in philosophy of technology have elaborated competing theories regarding the politics of technology. Although their conceptions differ, they share a similar understanding of politics (and ethics) as an intrinsic property of artefacts, which they examine as standalone objects, often abstracted from their contexts (Verbeek 2006; Feenberg 1999; Latour 2007). In my paper, I aim to reintegrate technology into the analysis of social relations and to develop a grounded, materialist understanding of the interplay between technology, institutions and the power relations they participate in.

Adopting an empirically informed approach to philosophy, this paper seeks to develop this political-philosophical understanding of technology and its politics by examining the history of the computerization of the French National Health Insurance Fund (CNAM) between 1963 and 1979. It explores the dynamic of joint development of French healthcare institutions and their technological tooling, investigating how technology participated in shaping power relations and policy orientations in healthcare.

My empirical research draws on archival material concerning the computerization of the CNAM, collected by its director Christian Prieur (1968-1979) and preserved at the French National Archives (Archives Nationales, boxes 20080146/6-20080146-8). Although computerization began as an initiative by individual agencies in the early 1960s, the first general “Computerization Plan” coincided with a moment of important institutional transformation and a governmental effort to instigate the development of a French computer industry, whose first experimentation ground and market was public administration, including welfare institutions (Mounier-Kuhn 1994).

The inquiry highlights how a “national configuration” emerged from a prolonged political and industrial conflict surrounding computers, their promises, and their applications, which involved healthcare agencies, the welfare ministry, industry, and trade unions. After years of struggle, the ministry and the computing industry successfully imposed their vision of a centralized and integrated computer system, aimed primarily at improving bureaucratic efficiency, thereby excluding alternative conceptions of computers. However, initial malfunctions and significant resistance in welfare institutions led to the system’s replacement by a “Reconfigured Version” developed in 1975. This new version’s functioning, coupled with vehement political support, contributed to its success, which in turn reinforced these actors’ dominance within the institution. Despite trade unions’ efforts to promote alternative conceptions of informatics and advocate for technological democracy, this configuration favored the development of an even more centralized and integrated technological infrastructure, associated with state control over the institution.

In dialogue with Andrew Feenberg’s notion of “technological hegemony” (Feenberg 1999; Kirkpatrick 2020), this case allows me to develop a critical, materialist approach to the politics of technology. It highlights the importance of power relations in the institutional “definition” of technology (in this case, computers, their promise and institutional potential) and conversely emphasizes the role of technology in reinforcing pre-existing power relations, effectively naturalizing social relations by technological means (in this case, a centralized conception of computer systems favored a centralized welfare system under the control of the state). Finally, this critical and agonistic conception of the role of technology challenges discursive and procedural approaches to technological democracy (Callon, Lascoumes, and Barthe 2001; Marres 2012; Latour 2004), emphasizing the materiality of institutional conflict around technology and the importance of maintaining viable technological alternatives. It thereby promotes a more materialist and radical conception of technological democracy, attuned to the institutional setting of technology and the power relations it takes part in.

Bibliography:

Callon, Michel, Pierre Lascoumes, and Yannick Barthe. 2001. Agir Dans Un Monde Incertain. Essai Sur La Démocratie Technique. Paris: Editions du Seuil.

Feenberg, Andrew. 1999. Questioning Technology. Abingdon: Routledge.

Kirkpatrick, Graeme. 2020. Technical Politics. Andrew Feenberg’s Critical Theory of Technology. Manchester: Manchester University Press.

Latour, Bruno. 2004. Politiques de Nature. Comment Faire Entrer Les Sciences En Démocratie. 2ème ed. Paris: La Découverte & Syros.

———. 2007. “Le Groom Est En Grève. Pour l’amour de Dieu, Fermez La Porte.” In Petites Leçons de Sociologie Des Sciences, 56–76. Paris: La Découverte.

Marres, Noortje. 2012. Material Participation - Technology, the Environment, and Everyday Publics. Basingstoke: Palgrave Macmillan.

Mounier-Kuhn, Pierre-Eric. 1994. “Le Plan Calcul, Bull, et l’industrie Des Composants: Contradictions d’une Stratégie.” Revue Historique 591 (3): 123–54.

Verbeek, Peter-Paul. 2006. “Materializing Morality: Design Ethics and Technological Mediation.” Science, Technology, & Human Values 31 (3): 361–80.

Winner, Langdon. 1980. “Do Artefacts Have Politics?” Daedalus, 1980.



Political instability and technological society

Wha-Chul Son

Handong Global University, Korea, Republic of (South Korea)

The sudden and brief declaration of martial law on the night of December 3, 2024, in the Republic of Korea, marked a dramatic political regression. Few imagined such an event could occur in a stable democratic society supported by advanced technologies and industries. However, we must critically examine the connection between modern technology and this political disturbance. I argue that the relationship is deeper than commonly assumed.

First, ironically, this event is tied to conspiracy theories fueled by advanced technologies. The vast possibilities enabled by modern technology have led the public to embrace conspiracy theories, believing that almost anything can happen without their knowledge. Even the Korean President subscribed to the idea of election fraud, allegedly caused by a distorted electronic counting system. He was convinced that the evidence could be found on the server of the Central Election Management Office.

Second, this incident was partly the outcome of online recommendation systems that provide targeted content to individual users. When fake news, spread with various motives, interacts with such mechanisms, many individuals develop extreme biases. This issue cannot be solely blamed on propagandists. The bias was produced as a result of selective content consumption driven by user analysis and algorithmic personalization.

Third, those involved in orchestrating the martial law used untraceable social networking services (SNS) for communication. Even prior to this event, many Korean politicians relied on Telegram, an untraceable SNS platform, for the same reasons as criminals. It is alarming that services designed to circumvent transparency are readily available in the market and are used by politicians and public servants without hesitation.

The close relationship between technology and this incident suggests that similar events could occur elsewhere. Indeed, extreme political polarization fueled by fake news and conspiracy theories is already prevalent in the United States and Europe. As these trends gain influence, the foundation of democratic systems becomes increasingly fragile.

Two solutions are urgently needed to break this vicious cycle. The first is technological literacy. A basic understanding of technology and its mechanisms can help people avoid falling victim to nonsensical conspiracy theories. The second is the responsible and transparent use of technologies. While privacy must be protected, the development and use of technologies such as untraceable SNS platforms should be carefully regulated.



Beyond technopolitics: presuppositions of a redeemed future

Mallikarjun Nagral

IIT Delhi, India

Before we are hurried into offering narrow, instrumental analyses and taking positions on whether particular technological interventions are beneficial or harmful, or developing a framework to guide adoptions and regulations, we need to come to terms with the overarching fact that it is the self-destructive modern industrial system/form of life that has laid to waste the modern lifeworld. Today, even the most basic necessities—clean air, water, soil, and food—have become a luxury. Add to this the strange aimlessness of accelerated transformations, the epidemic of loneliness, the mental health crisis, and an absence of a sense of belonging and community, and we see the basic coordinates of existence tearing at the seams. It is, therefore, in this context that the question concerning technology must be raised to understand the epistemological and metaphysical presuppositions that inform our current technological/engineering design, which is central to the possibility of imagining an alternative, sustainable form of life.

We face two opposing responses to the present crisis: scattered experiments of small-scale sustainable communities working with convivial technologies (Illich 1973), emphasising the notions of skill, agency, meaning, community, rootedness, and ecological limits (Weil, 2002), and a euphoric transhumanist vision that seeks to leverage NBIC convergence to exert greater control over both external and internal nature, aiming for a technological utopia of abundance, leisure, and play (Bastani, 2020). This paper seeks to understand the differences in the concepts of labor, work, community, visions of the good life, and what it means to be human that guide these two visions of technological culture. The goal is to flesh out what is at stake in our choices.

Furthermore, I argue that changes in technological design must not only be critiqued in terms of the political bias they embody (Feenberg, 2002) but must also be understood as sites marking transformations in epistemes, which make certain forms of life possible while rendering others unviable (Naydler, 2018). I would like to explore whether Feenberg's (1992) framework of critical theory of technology, which positions technological design as the locus of political struggle, can be mobilized to argue for a radically different basis for technological design—one based on fundamentally different epistemological and metaphysical presuppositions than those of the Enlightenment. In this regard, I will consider whether the various arguments for crafts as a viable alternative mode of production hold any possibility for envisioning a different technological framework for sustainable innovation.

 
11:50am - 1:05pm(Papers) Agency II
Location: Auditorium 7
Session Chair: Pieter Vermaas
 

Temporal intimacy. Near-term expectations of a driverless future

Sebastian Pranz

Hochschule Darmstadt, Germany

The history of driverless cars is characterized by a strange intimacy with the future: The age in which a flawed human driver will be replaced by a reliable machine seems to be as close today as it was in the 1960s: In car shows, laboratories, test drives and now also on the road, the driverless car has always been a near future. Drawing on the discourse on sociotechnical imaginaries (SI) (Jasanoff & Kim, 2009, 2015) and sociological studies of temporality (Grinbaum & Dupuy, 2004; Nordmann, 2014; Tavory & Eliasoph, 2013), the paper develops a procedural perspective on SIs. I argue that SIs are characterized by a specific temporality: Depending on the state of their social newness, they must create a temporal framework in which they become reasonable.

This will be analyzed based on three vignettes: Using the example of the first driverless car in Europe, which was developed in the late 1960s by tire manufacturer Continental for endurance testing, I will show how a world that is “radically different” (Jasanoff, 2015, p. 325) is rationalized as a possible sociotechnical future. The second case uses the example of the car manufacturer Tesla to show that the near future of autonomous driving is further developed and socially embedded in co-production with a community of drivers who identify as pioneers. The third case examines a system of self-driving buses that the city of Monheim am Rhein was one of the first in the world to introduce in 2017. While city marketing emphasizes the degree of innovation of its transport system in public communication, the city’s transport companies invest heavily in making it ‘everyday’ and ‘invisible’.

In all cases, the SI of autonomous driving is characterized by a specific intimacy with the future that I call “technological near-term expectation”: A technological future that is both distant and tangible. As I will show, technological near-term expectation goes beyond imaginaries and narratives but manifests itself in an intimate experience of technology: The moment of boarding the car, the shudder at the apparent agency of the technology, the experience of losing control (Winner, 1977), dealing with errors and the need for repairs (Katzenbach et al., 2024) are bodily experiences of a possible technological future.

Grinbaum, A., & Dupuy, J.-P. (2004). Living with Uncertainty: Toward the Ongoing Normative Assessment of Nanotechnology. Techné: Research in Philosophy and Technology, 8(2), 4–25. https://doi.org/10.5840/techne2004822

Jasanoff, S. (2015). Imagined and Invented Worlds. In S. Jasanoff & S.-H. Kim (Eds.), Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power (pp. 321–341). University of Chicago Press. https://doi.org/10.7208/chicago/9780226276663.001.0001

Jasanoff, S., & Kim, S.-H. (2009). Containing the Atom: Sociotechnical Imaginaries and Nuclear Power in the United States and South Korea. Minerva, 47(2), 119–146. https://doi.org/10.1007/s11024-009-9124-4

Jasanoff, S., & Kim, S.-H. (2015). Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power. University of Chicago Press. https://doi.org/10.7208/chicago/9780226276663.001.0001

Katzenbach, C., Pentzold, C., & Viejo Otero, P. (2024). Smoothing Out Smart Tech’s Rough Edges: Imperfect Automation and the Human Fix. Human-Machine Communication, 7, 23–43. https://doi.org/10.30658/hmc.7.2

Nordmann, A. (2014). Responsible innovation, the art and craft of anticipation. Journal of Responsible Innovation, 1(1), 87–98. https://doi.org/10.1080/23299460.2014.882064

Tavory, I., & Eliasoph, N. (2013). Coordinating Futures: Toward a Theory of Anticipation. American Journal of Sociology, 118(4), 908–942. https://doi.org/10.1086/668646

Winner, L. (1977). Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought. The MIT Press.



"Agency, alienation, and recognition in the AI-mediated workplace"

Joel Anderson

Utrecht University, Netherlands, The

This paper develops a theoretical framework for a normative understanding of how the ever-tighter integration of artificial intelligence technologies into workplace processes is transforming workers' personal experiences of autonomous agency, esteem-recognition, and participatory inclusion. Drawing on the recognition-theoretic tradition in critical social theory – especially Axel Honneth's theory of recognition and his recent work on The Working Sovereign (Honneth 2024) – I analyze how techno-social systems are reshaping the conditions under which workers can achieve recognition for their contributions and maintain autonomous agency.

The paper focuses on the diagnosis of three key transformations, for which Honneth's work is especially relevant:

1. Although power tools and machinery have long expanded what workers are able to accomplish "on their own," the introduction of algorithmic tools to support or supplant cognitive and communicative labor greatly expands the domain in which traditional boundaries between individual and technological contributions are blurred, frequently complicating a person's sense of authentic professional competence. What is particularly interesting about the perspective of recognition theory in this context is that (like feminist conceptions of relational autonomy (Mackenzie and Stoljar 2000) or the "social model" of disability (Lawson and Beckett 2021)) it emphasizes from the outset the unavoidability of reliance on others – also for the development of autonomy. As a result, recognition theory shifts the focus towards more fruitful and nuanced questions about the qualitative features of how these dependencies are structured.

2. The expansion of sociotechnical possibilities for increasingly precise, rapid, and uninterrupted assessment of workplace performance – with automated assessment systems even replacing human judgment with algorithmic evaluation – is fundamentally altering the intersubjective nature of workplace recognition. Here, a recognition-theoretic perspective is particularly insightful in revealing not only how deep such assessment can cut into the agency of individuals but also the quixotic character of attempts by individuals themselves to address this. This highlights the need to analyze what could be called "attributional justice" at a structural level, in terms of what Honneth calls an account of "social freedom," in which an awareness of the other's vulnerability to misrecognition is constitutive of, in this case, a good workplace culture.

3. The increasing "project-character" of work (Celikates, Honneth, and Jaeggi 2023) – facilitated by increasingly modular technologies for distributing and coordinating tasks – accelerates processes of individualization and compartmentalization of work activities. Honneth's approach here provides the tools for a more insightful analysis of how it is that, in a hyper-networked world, colleagues can feel such a lack of genuine connection to one another.

I conclude by discussing the advantages and disadvantages of Honneth's approach in connection with the spillover effects of how these sociotechnical changes are affecting the moral psychology of individuals: given how much of our lives we spend at work, and how pivotal work relations are to our experiences of being esteemed (or denigrated), the workplace becomes a particularly significant locus for the development of skills and attitudes that have implications for many other domains of life.

References

Celikates, Robin, Axel Honneth, and Rahel Jaeggi. 2023. “The Working Sovereign: A Conversation With Axel Honneth.” Journal of Classical Sociology 23 (3): 318–38.

Honneth, Axel. 2024. The Working Sovereign: Labour and Democratic Citizenship. Polity.

Lawson, Anna, and Angharad E. Beckett. 2021. “The Social and Human Rights Models of Disability: Towards a Complementarity Thesis.” The International Journal of Human Rights 25 (2): 348–79.

Mackenzie, Catriona, and Natalie Stoljar, eds. 2000. Relational Autonomy: Feminist Perspectives on Autonomy, Agency, and the Social Self. New York: Oxford University Press.

 
11:50am - 1:05pm(Papers) Education I
Location: Auditorium 8
Session Chair: Gunter Bombaerts
 

A conceptual framework for defining the responsibilities of engineers and situating them in education practice

Diana Adela Martin

University College London, United Kingdom

Responsibility is a core concept for engineering ethics (Herkert, 2002), prominently featured in professional codes of practice, accreditation requirements for graduate engineers, and the mission statements of engineering programmes and technological universities. Yet, it is often used ambiguously and subject to various interpretations. Responsibility can refer to numerous obligations, at varying degrees and toward different parties, making it challenging to clearly define the concept (Johnson, 1992, p.21). Scholars have distinguished between causal, moral, legal, professional, social, and technical responsibility (Alpern, Oldenquist, & Florman, 1983; Mingle & Reagan, 1983; Johnson, 1992; Smith, Gardoni, & Murphy, 2014). Smith, Gardoni, and Murphy (2014, p.520) point out that the use of the concept typically emphasizes the responsibilities expected of professionals.

Compounding the challenge of understanding what is meant by ‘responsibility’, the concepts of social and professional responsibility are often used interchangeably in engineering (Bielefeldt, 2018; Nichols, 2007). According to Martin et al. (2023), responsibility is mentioned as a homogeneous concept despite the heterogeneous ways in which responsibility can manifest in engineering practice. It is thus unclear what is understood by responsibility and which responsibilities are targeted when this concept is mentioned in the public discourse or in the mission statements of engineering higher education institutions.

As technology advances and societal expectations shift, there is an increasing need for engineers to consider the ethical dimensions integral to engineering practice. Given the pivotal role that responsibility plays in engineering practice, it is important to clarify and disambiguate the concept and have a comprehensive, structured approach that encompasses its dimensions.

Using a narrative literature review, the study synthesizes the engineering ethics literature to develop a conceptual framework that articulates a broad range of engineering responsibilities. This framework identifies 16 engineering responsibilities emerging from the engineering ethics literature, categorises them at four analytical levels (Micro/Macro and Subject/Object), and connects them with specific pedagogical approaches corresponding to the Micro-Subject, Macro-Subject, Micro-Object or Macro-Object dimensions. Micro-Subject responsibilities relate to the values, characteristics, and decision-making of individual engineers. Macro-Subject responsibilities relate to the values, mission and decision-making of the engineering profession or collectives. Micro-Object responsibilities relate to the values, characteristics, and culture of organisations where an engineer practices. Macro-Object responsibilities relate to the social, economic and political structures and context driving engineering practice.

This conceptual framework provides policymakers, engineering educators, and educational programmes with clear formulations for setting learning objectives and graduate attributes that support the embedding of responsibility across the curriculum and across accreditation requirements. In engineering education research, it can provide a terminology for developing novel assessment tools to measure students' and professionals' understanding of their responsibilities.

References

Alpern, K. D., Oldenquist, A., & Florman, S. C. (1983). Moral Responsibility for Engineers. Business & Professional Ethics Journal, 2(2), 39–56.

Herkert, J. R. (2001). Future directions in engineering ethics research: Microethics, macroethics, and the role of professional societies. Science and Engineering Ethics, 7(3), 403-414.

Johnson, D. G. (1992). Do Engineers have Social Responsibilities? Journal of Applied Philosophy, 9(1), 21–34.

Martin, D. A., Bombaerts, G., Horst, M., Papageorgiou, K. & Viscusi, G. (2023). Pedagogical Orientations and Evolving Responsibilities of Technological Universities: A Literature Review of the History of Engineering Education. Science and Engineering Ethics 29, 40 https://doi.org/10.1007/s11948-023-00460-2

Mingle, J. O., & Reagan, C. E. (1983). Legal responsibility versus moral responsibility: the engineer’s dilemma. Jurimetrics, 23(2), 113–155.

Nichols, S., & Knobe, J. (2007). Moral responsibility and determinism: The cognitive science of folk intuitions. Nous, 41(4), 663-685.

Smith, J., Gardoni, P. & Murphy, C. (2014). The Responsibilities of Engineers. Science and Engineering Ethics 20, 519–538. https://doi.org/10.1007/s11948-013-9463-2



Philosophical pedagogy and the role of board games: a cross-disciplinary exploration

Jing-Li Hong

Chang Jung Christian University, Taiwan

In contemporary philosophy of technology, technological artifacts are pivotal in shaping both our understanding of the world and existential experiences. This paper explores how this perspective applies to educational materials, with a specific focus on the use of board games as tools for philosophical pedagogy. Are board games merely supplementary teaching aids, or do they actively construct and influence the content and process of learning as technological artifacts? Should their role be regarded as a constructive intervention aligned with educational objectives or as a disruptive force introducing unintended shifts in subject knowledge? Furthermore, the mediating role of technological artifacts in shaping ethical and behavioral outcomes raises pressing questions: Can educational board games be purposefully designed to promote "good" behaviors among learners? Do such designs pose ethical challenges, particularly in the choice of game mechanics and the objectives of their deployment?

Focusing on philosophy education, this study investigates the intersection of teaching methodologies, materials, and objectives to examine the broader implications for philosophical inquiry and education. In ancient Greece, Socrates epitomized the unity of philosophical method, meaning, and pedagogy through his dialectical approach, notably his metaphor of midwifery. This method encouraged critical thinking and remains a cornerstone of philosophical pedagogy. However, under modern educational paradigms emphasizing efficiency and measurable outcomes, philosophy education faces challenges in retaining its foundational spirit. How can philosophical education preserve its essence while adapting to these contemporary demands?

Building on John Dewey’s Democracy and Education, which advocates for the alignment of educational objectives, materials, and student experiences, this paper proposes a framework for designing philosophical board games. Influenced by the educational philosophies of Dewey, Paulo Freire, Joseph Jacotot, Matthew Lipman’s Philosophy for Children, and Michel Foucault, the framework integrates philosophical principles with pedagogical strategies. The study analyzes existing philosophical board games, categorizes their objectives and philosophical significance, and outlines a preliminary model for board game design that bridges the gap between philosophical inquiry and contemporary educational practices.

Finally, the paper reflects on the transformative potential of adopting board games in philosophy education and the ethical considerations they entail. By revisiting the methods and materials of philosophical pedagogy, this study aims to redefine the relevance of philosophy in contemporary education and propose actionable strategies for enhancing philosophical engagement across diverse educational contexts.



Technological designs as possibility operators

Alvaro David Monterroza-Rios

ITM Institucion Universitaria, Colombia

Human beings have collectively constructed themselves through their own efforts (Monterroza-Rios, 2019). This has been made possible by the support of artificial ecological niches, formed by networks of practices in which people, relying on artefacts and their meanings, develop a cultural life. Thus, human existence is a hybrid state between the natural and the artificial, mediated by the artificial niches of artefacts that stabilise and give meaning to human practices (Broncano, 2009). The surrounding network of artefacts not only sustains these social practices (Latour, 2005) but also serves as a feedback mechanism that enriches the horizon of possibilities (Broncano, 2008) and fuels creativity and imagination (Stokes, 2014).

Technical action has been ancestral and collective in all human cultures (Ortega y Gasset, 1982). Today’s technological practices are contemporary manifestations of that way of transforming the world (Heidegger, 1977), mediated by design processes (Papanek, 1971) and reliable knowledge (Bunge, 1963). In this way, technological designs are the outcome of practices that bring together a range of pragmatic conditions to transform the real conditions of possibility for a human group. Accordingly, a new design may solve a problem in the light of a specific historical moment, yet it may also give rise to unforeseen problems. In any case, the group’s potential for action is changed forever, as are its expectations and the new challenges that emerge.

In this sense, contemporary engineering design processes may be regarded as ‘possibility operators’, insofar as they ‘design’ worlds and possible futures. While any human cultural practice could introduce an element that opens up new possibilities, engineering design has the methodologies, processes, supports, and tools to do so effectively and in a planned manner. These practices have the virtue of making their aims explicit, identifying the goal of establishing concrete objects that open up these possibilities and reconstitute material cultures.

This underscores the great social responsibility of design, since it acts as a projector of futures. Consequently, it provides an additional argument to support the validity and relevance of proposals such as design for sustainability or socially responsible design (Manzini & Rizzo, 2011; Papanek, 1971).

References

Broncano, F. (2008). In media res: cultura material y artefactos. ArteFactos, 18-32.

Bunge, M. (1963). Tecnología, ciencia y filosofía. Anales de la Universidad de Chile, (126), 329-347.

Darwin, C. (2013). The Descent of Man. London: Wordsworth Editions.

Heidegger, M. (1977). “The Question Concerning Technology.” In The Question Concerning Technology and Other Essays (trans. William Lovitt, pp. 3–35). New York: Harper & Row.

Latour, B. (2005). Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press.

Manzini, E., & Rizzo, F. (2011). Small projects/large changes: Participatory design as an open participated process. CoDesign, 7(3-4), 199-215. doi:10.1080/15710882.2011.630472

Monterroza-Rios, A. D. (2019). El papel retroalimentador de los artefactos en el desarrollo de las técnicas humanas. Trilogía Ciencia Tecnología Sociedad, 49-65. doi:10.22430/21457778.1286

Ortega y Gasset, J. (1982). Meditación de la Técnica y otros ensayos sobre la ciencia y la filosofía (1982 ed.). Madrid: Alianza.

Papanek, V. (1971). Design for the Real World: Human Ecology and Social Change. New York: Pantheon Books.

Stokes, D. (2014). The Role of Imagination in Creativity. In E. S. Paul & S. B. Kaufman (Eds.), The Philosophy of Creativity: New Essays (pp. 157-184). Oxford: Oxford University Press.

 
1:05pm - 2:30pmLunch break
Location: Senaatszaal
2:30pm - 3:30pmKeynote 3 - Jens Schlieter - Robots with empathy. Exploring buddhist ethics of technology and personhood in Asia
Location: Blauwe Zaal
Session Chair: Tom Hannes
3:35pm - 4:50pm(Papers) Anthropocene
Location: Blauwe Zaal
Session Chair: Gunter Bombaerts
 

Shipwrecks as adaptive mediators: a new media reality of the Anthropocene

Benjamin Morris King

NTNU Trondheim, Norway

Can shipwrecks provide information on the effects of climate change on the oceans? To answer this question, I introduce shipwrecks as a form of interventional media, applying the notion of “adaptive mediator” to address the ontological and technical reality of shipwrecks in the Anthropocene. Shipwrecks form localized ecosystems, such as reefs, with traits similar to natural reefs attended by marine life. Concepts such as "multispecies shipwrecks" and "shipwreck ecologies" are being introduced by archaeologists and marine biologists, illustrating the fluid and reciprocal relationship shipwrecks have with the sea, which requires a flat ontology to be understood.

Therefore, theoretical questions about what shipwrecks are in a time marked by significant changes in the ocean will have consequences for underwater cultural heritage, marine ecosystems, and humanity. How we acquire knowledge of these processes through technologies will be a central theme for this conference paper.

To further approach this flat ontological and mediated reality of shipwrecks in the Anthropocene, I will develop the concept of adaptive mediator by applying it in a more-than-human context, that of a shipwreck at the bottom of the sea. The concept of “adaptive mediator” that is explored in this paper originates from the work of Aurora Hoel, which in turn draws on the philosophy of Gilbert Simondon. The concept of adaptive mediator explains how things come into being and are conditioned in the world by instituting new systems of environments - that are both natural and technical - while shifting and reconfiguring human/non-human relations with the world that challenge nature/culture and technics/nature dichotomies.

I will discuss these themes using empirical encounters and data acquired in a deep-water archaeological context, a domain that requires sensor-carrying platforms such as Autonomous Underwater Vehicles (AUVs) or Remotely Operated Vehicles (ROVs), and specific payload sensors for visualizations such as sonar or video sensors. Based on this, I will further examine the role of marine technologies in constituting and enabling communication between humans and the underwater world, emphasizing how our understanding of and relationship with underwater shipwrecks and climate change are shaped by technological mediation and adaptive mediators.



From Anthropocene to Technocene. The story of our fate

Klaus Erlach1,2

1Fraunhofer Institute for Manufacturing Engineering and Automation IPA, Germany; 2Institute of Industrial Manufacturing and Management IFF, University of Stuttgart, Germany

The term ‘Anthropocene’ is used to describe the beginning of a new period in the Earth's history. The International Commission on Stratigraphy (ICS) has discussed dating its beginning to 1952, with the appearance of plutonium in the sediments. The naming remains conspicuous: not only did humans exist before this date, but time periods are also usually named after find spots, after the (causative) material of the corresponding rock formation, or after time itself (new, newer, very new, etc.). Would the material-related name ‘Plutoniocene’ then be better?

Let us return to the human first. If humans are to be the eponym of an epoch, then philosophical anthropology should be allowed to ask what characterises them in particular. André Leroi-Gourhan showed that technology has always been the first nature of humans - i.e. technology is a zoological phenomenon. In evolutionary terms, the hand only became a universal gripping organ through the production of hand axes and increasingly refined strikes. Humans are already technites in their skeletal structure. In terms of an evolutionary philosophy of technology, the age of humans and their technology therefore begins with the stratigraphic appearance of pre-human Australopiths and their choppers. Because the impact of humans in the sediments can be recognised precisely in these technical traces, beginning with stones and later supplemented by all the other man-made materials and an enormous range of other geological changes, the Technocene is the more appropriate term for this era, which moreover has its roots before the beginning of the Holocene (Ter-Stepanian 1988).

In view of this seemingly harmless beginning with stone hand axes, the Technocene became an active factor at the latest with the Industrial Revolution. In this respect, it is no wonder that the term is used in the context of Marxian-inspired critique of technology (Hornborg 2015) as well as in the context of feasibility-inspired climate engineering (Fernow 2014). Furthermore, it became the eponym for the much older evolutionary philosophy of technology (Schlaudt 2022). The term becomes fruitful by enabling us to tell the geological or evolutionary long-term history, especially with a view to our future.

But how is such a history to be told? In contrast to natural history, a man-made narrative needs to make sense in some way. However, this makes the concepts of the ‘Technocene’ as well as the ‘Anthropocene’ normatively loaded, which explains their controversy. A narrative form often used by critiques of technology is the classic tragedy: the originally good craftsman falls into the clutches of a brutal machine technology that alienates him and leaves nature in ruins. On this telling, the apocalyptic tipping point of the Technocene has already been passed. That seems seductively plausible, but is this the right story to tell? Are there other narrative forms, such as comedy, tragicomedy or ‘eucatastrophe’, which allow a better approach to technogenic climate change?

Literature:

Fernow, Hannes: Der Klimawandel im Zeitalter technischer Reproduzierbarkeit. Climate Engineering zwischen Risiko und Praxis. Springer: Wiesbaden 2014

Hornborg, Alf: The Political Ecology of the Technocene. Uncovering ecologically unequal exchange in the world-system. In: Clive Hamilton, François Gemenne, Christophe Bonneuil (ed.): The Anthropocene and the Global Environmental Crisis. Rethinking modernity in a new epoch. London: Routledge 2015

López-Corona, Oliver; Magallanes-Guijón, Gustavo: It Is Not an Anthropocene; It Is Really the Technocene: Names Matter in Decision Making Under Planetary Crisis. In: Frontiers in Ecology and Evolution 2020, Vol. 8, No. 214.

Leroi-Gourhan, André: Gesture and speech. Cambridge, Mass.: MIT Press 1993 [1964]

Oliver Schlaudt: Das Technozän. Eine Einführung in die evolutionäre Technikphilosophie. Klostermann: Frankfurt 2022

Ter-Stepanian, George: Beginning of the Technogene. In: Bulletin of the International Association of Engineering Geology. Paris: Springer-Verlag 1988, Vol. 38, pp. 133–142

 
3:35pm - 4:50pm (Papers) Conceptual analysis
Location: Auditorium 1
Session Chair: Krist Vaesen
 

Phronesis for AI systems: conceptual foundations

Roman Krzanowski, Paweł Polak

Pontifical University of John Paul II in Krakow, Poland

Current ethical frameworks for AI agents, robotics, and software systems (collectively known as Artificial Moral Agents, or AMAs) are insufficient to ensure their safety and alignment with human values. In response, we propose integrating Aristotelian phronesis—practical wisdom—into AI systems to provide the ethical guidance necessary for them to function safely in human environments. This paper explores how to develop and implement phronesis in AI systems, ensuring they make decisions that align with human well-being and ethical values.

We argue that one critical difference between AI systems and humans lies in the method of ethical decision-making, or the "inference/ascend method." This method involves ascending to an ethical decision based on facts, action objectives, and an understanding of the ethical implications of past experiences and their outcomes. To enhance the moral capacities of AMAs, a new approach to AI ethics is needed—a paradigm shift, rather than simply refining existing models.

We present four key arguments for designing AMAs with phronetic principles:

1. Human alignment: To be human-aligned or human-safe at an ethical level, AI systems must share the same moral grounding, ontology, and worldview as humans. In other words, AI-centric systems must possess specific human-like capacities to be truly human-centric.

2. Limitations of current models: Existing ethical models for AI systems do not meet these requirements.

3. Phronetic advantage: AI systems that incorporate Aristotelian phronesis will be safer and more ethically grounded than those using other proposed frameworks.

4. Simulating phronesis: We claim that key elements of phronesis can be effectively simulated in AMA systems, contributing to improved ethical decision-making.

We also propose a design for a phronetic Artificial Moral Agent. This design simplifies the original concept of phronesis into a "weak" phronetic system while preserving its core features. In this system, decisions are made by evaluating past use cases (UCs) and comparing them to the current situation. The decision is based on selecting the most "ethically proximal" UC, defined by its outcome. However, determining what constitutes an "ethically proximal case" is a significant challenge that requires a clear and operationalizable definition. The phronetic ethical decision-making process, termed phronetic ascend, focuses on evaluating past decisions and real-world cases rather than relying on general principles, abstract norms, or algorithmic procedures. This model emphasizes context-specific ethical reasoning, where each decision is grounded in the unique characteristics of the situation rather than in pre-existing rules.
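
By way of illustration only (not the authors' implementation), the selection step just described can be sketched in a few lines of Python; the names UseCase, ethical_proximity and phronetic_ascend, the numeric feature encoding, and the weighting by outcome quality are all hypothetical placeholders, and, as noted above, defining "ethically proximal" operationally remains the open problem.

from dataclasses import dataclass

@dataclass
class UseCase:
    features: dict          # situational facts, e.g. {"harm_risk": 0.7}
    outcome: str            # the decision taken in the past case
    outcome_quality: float  # how well that decision turned out, in [0, 1]

def ethical_proximity(current: dict, past: UseCase) -> float:
    """Toy proximity measure: feature closeness weighted by how well the past
    case ended. A real system would need an operational definition of
    'ethically proximal', which the abstract identifies as an open problem."""
    shared = set(current) & set(past.features)
    if not shared:
        return 0.0
    closeness = sum(1.0 - abs(current[k] - past.features[k]) for k in shared) / len(shared)
    return closeness * past.outcome_quality

def phronetic_ascend(current: dict, repertoire: list) -> str:
    """Select the most ethically proximal past use case and reuse its outcome."""
    best = max(repertoire, key=lambda uc: ethical_proximity(current, uc))
    return best.outcome

# Two remembered cases and one new situation (all values hypothetical).
repertoire = [
    UseCase({"harm_risk": 0.8, "consent": 0.2}, "refuse_action", 0.9),
    UseCase({"harm_risk": 0.1, "consent": 0.9}, "proceed", 0.8),
]
print(phronetic_ascend({"harm_risk": 0.7, "consent": 0.3}, repertoire))  # -> refuse_action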

Finally, we propose formalizing phronetic principles within AI systems using the situational theory framework (Devlin, 1991). Situational theory models the interaction between cognitive agents, their environment, and the flow of information. We outline how this theory can be used to represent phronetic ethics in AI systems.



One possible definition of technology: an approach from Don Ihde

Yingke Wang

Nagoya University, Japan

The philosopher of technology Don Ihde proposes four classifications of the human-technology relationship with respect to technological artifacts in his theory of technological intentionality. In this paper, I aim to provide a concise summary of Ihde's approach to defining technology artifacts, with the objective of expanding the understanding of technology from a phenomenological perspective. To this end, this paper will first offer a brief overview of Ihde's ideas concerning the definition of technology. These ideas include four types of human-technology interaction (the embodiment, hermeneutic, alterity, and background relationships) and two types of technological intentionality (simple intentionality and complex intentionality). The distinction between the two kinds of technological intentionality is contingent upon the degree of intentional integration within the context of technology utilization. By comparison, the distinction between the four human-technology relationships is determined by the position of technology itself in the process by which humans attempt to comprehend the world through technological means.

Secondly, this paper proposes three innovative definitions of technology artifacts based on the ideas of human-technology relations and technological intentionality, namely technical objects, technified objects, and sheer technology objects. The technical object, in the sense of traditional instrumentalism, is characterized by simplicity, transparency, and inter-referentiality. Technified objects, conversely, are distinguished by the integration of complex intentionality and have the characteristics of complexity, alterity, and multi-directionality. Sheer technology objects are primarily non-entities and have the characteristics of embodiment, linguisticality, and duality.

It is important to note that Ihde's definition of technology artifacts exhibits both progress and limitations. The progressive aspects are primarily evident in the breakthrough beyond the limitations of traditional instrumentalism (the introduction of the concept of technology brought the subjectivity of technology from the background to the foreground), the breakthrough beyond the limitations of teleology (embodiment as the intrinsic characteristic of technology made the performance of technology possible), and the innovation of the definitional method (definition based upon research method). The limitations are primarily evident in the theory of technological mediation (whether the mediation framework can define the essence of a category), content pragmatism (limiting technology itself to being the object of empirical practice only), and interpretive limitations (the language that makes the hermeneutic relationship possible). In the final section of this paper, I will analyze in depth the progressiveness and limitations of Ihde's definition of technology in order to further academic research in the future.



Queering 'the Times of AI'

Judith Campagne

Vrije Universiteit Brussel, Belgium

Increasingly, the times we live in are referred to as ‘the times of AI’. However, such a phrasing encourages the temporal logics of algorithmic time (e.g., categorization, efficiency, prediction, and linearity) to overtake the more circular, messy, and intimate experiences of human temporalities (Goldberg, 2019). Insisting on our times being ‘times of AI’ means leaving increasingly unchallenged the effects of logics of efficiency, instrumentalization, and prediction on human experiences of the temporalities of daily life. Additionally, the phrasing ‘times of AI’ takes a specific understanding of ‘AI’ as a starting point, thereby limiting the coming into being of different iterations of such technologies. Therefore, the ‘of’ in categorizations like the ‘times of AI’ must be challenged. What does it mean, alongside technologies such as AI, also to encourage space for the messiness of human temporal experiences?

In this presentation, I start from the works of artist Patricia Domínguez. Domínguez materializes the tension between human and technological memory, thereby showing that the logics of linearity and instrumentality in algorithmic machines (and thus in algorithmic time) can be challenged through circular relations between past, present, and future. To give this further impetus, I stage an encounter between these artworks and queer theory as understood by José Esteban Muñoz, which “can be best described as a backward glance that enacts a future vision” (2009, p. 4). Queer theory here is a constant movement between past, present, and future in messy and simultaneously hopeful ways. Algorithmic technologies share a way of looking forward that is grounded in a specific past. Consequently, this seems to be an especially rich entry point for challenging the linear temporal logics that a phrase such as ‘times of AI’ encourages.

Queering ‘the times of AI’ is thus a conceptual critique of categorizations that wish to subsume all of human temporal experience within logics of linearity and efficiency. As such, this presentation is not a complaint against big data technologies like AI per se, but rather against perspectives that subsume all of human experience under the same logics. To queer the phrase ‘the times of AI’ is to challenge the idea that logics of linearity are the dominant temporal experience of life, and thereby to open up space, also within the algorithmic technologies themselves, for more circular and messy experiences of time.

References

Domínguez, Patricia and Treister, Suzanne. (September 2024). ‘Dreaming at CERN.’ Burlington Contemporary. https://contemporary.burlington.org.uk/articles/articles/dreaming-at-cern

Goldberg, David Theo. (2019). ‘Coding Time.’ Critical Times 2(3). https://doi.org/10.1215/26410478-7862517

Muñoz, José Esteban. (2009). Cruising Utopia. The Then and There of Queer Futurity. New York University Press.

 
3:35pm - 4:50pm (Papers) Mediation III
Location: Auditorium 2
Session Chair: Maren Behrensen
 

On Escape: breaking free from technological mediation

Jan Peter Bergen

University of Twente, Netherlands, The

Postphenomenology’s recognition that technological artifacts play an active role in our lives by mediating our experiences and actions in the world has proven a powerful perspective for analyzing what things do. However, the relational ontology (and subject-object co-constitutionality) that undergirds this perspective has important implications for the way in which we consider ourselves and our relation to our environment. One of the more pertinent implications concerns our (notion of) freedom: from a postphenomenological perspective, there is no escape from technological mediation. Even if Ihde describes some rare situations as unmediated I-World relations (see his example of sitting on the beach (Ihde, 1990 p. 45)), notions of sedimentation (Aagaard, 2021; Rosenberger, 2012) and the technological gaze (Lewis, 2020) put such a characterization as unmediated into question. In sum, we may never ‘be free’ from mediation’s influence.

As such, a notion of freedom that builds on a modern, autonomous subject will no longer do. This has not, however, led postphenomenologists to disavow the notion of freedom but to transform it through a Foucaultian lens, a freedom always within and in relation to the things that shape us: a freedom without escape. This rehabilitation of freedom without escape in turn prompted some (e.g., Dorrestijn, 2012; Verbeek, 2011) to propose a Foucaultian ethics of technologically mediated subjectivation wherein such a mediated freedom is central. However, this presupposes 1) that there is indeed no escape, and 2) that given the former, ethics must be grounded within mediation. This paper will contest 1) by exploring two possible escape routes or ‘outsides’ of mediation in which we may find forms of freedom and some possible ethical grounds. Interestingly, those two escape routes point us in radically different directions.

The first is characterized by transcendence, by an orientation outward towards height: it is the encounter with the Other as it appears in the work of Emmanuel Levinas. The Other presents itself, breaks through my Being-at-home-with-myself and calls me to responsibility “from beyond Being” (Levinas, 1998 p. 11). While my experience of the other may be technologically mediated, the Other’s infinity is what (or better put, who) escapes mediation. This encounter is what grants me a paradoxical freedom by constituting self-consciousness: it is the possibility to turn my back on the very Other that needs me.

Where Levinas finds escape ‘towards God’, our second escape route proclaims God is dead. That is, we may find a form of escape that is oriented inwards, to a freedom in becoming rather than being. Specifically, we mean a fundamental engagement in embodied movement inspired by the role(s) that dancing (Tanz) play(s) in Nietzsche, from pedagogically valuable to achieving a self-sufficient freedom in ‘joyous creation’. To dance is to engage the wisdom and values of one’s body and to engage in a most fundamental process of bodily becoming, of self-creation on a level that may sometimes escape mediation.

If these escape routes prove successful, we may yet find some freedom and ethics for our technological lives outside of mediation.

References:

Dorrestijn, S. (2012). The design of our own lives: Technical mediation and subjectivation after Foucault [University of Twente].

Ihde, D. (1990). Technology and the Lifeworld. Indiana University Press.

Levinas, E. (1998). Secularization and Hunger. Graduate Faculty Philosophy Journal, 20/21(2/1), 3–12.

Lewis, R. S. (2020). Technological Gaze: Understanding How Technologies Transform Perception. In A. Daly, F. Cummins, J. Jardine, & D. Moran (Eds.), Perception and the Inhuman Gaze (pp. 128–142). Routledge.

Rosenberger, R. (2012). Embodied technology and the dangers of using the phone while driving. Phenomenology and the Cognitive Sciences, 11(1), 79–94.

Verbeek, P.-P. (2011). Moralizing Technology: Understanding and Designing the Morality of Things. The University of Chicago Press.



The ‘Technological Environmentality Compass’: factors to consider when designing technological mediations across humans, technologies, and the environment.

Margoth Gonzalez Woge

University of Twente, Netherlands, The

As the world we live in is increasingly shaped by our own technological creations, notions of ecosystem integrity and ‘nature’ fall short of describing the quality of contemporary human-environment relations. While humans have always altered their surroundings, the current trends in domestication, management, and digitalization of the environment by technological means represent a significant shift in our relationship with the world around us. The biosphere itself, at levels from the genetic to the landscape, is increasingly a human product.

The aim of this paper is to propose the notion of a Technological Environmentality Compass to guide discussions towards desirable technological mediations across humans, technologies, and the environment. To do so, I will first analyze how technologies shape our possibilities for action in the environment and our possibilities of experiencing the environment. To achieve this, I will turn to Postphenomenology (Ihde 1990; Verbeek 2006) to illuminate the non-neutral role of everyday tools, digital media, infrastructure, and technological systems. Specifically, the concept of Technological Mediation will be used to explain both the existential dimension (how humans exist and behave in the world) and the hermeneutic dimension (how humans perceive and interpret the world) of human-technology-environment relations. Furthermore, I will build on the notion of ‘value-ladenness’ in technological design (van de Poel 2021) to claim that the non-neutral role of technologies translates into them having empowering and limiting roles. On the basis of their enabling and disabling characters, I will utilize the Capability Approach (Nussbaum 2011; Oosterlaken 2013) to sketch a compass for desirable technological mediations based on a naturalized understanding of homo faber as understood by Material Engagement Theory (Malafouris 2013), the Skilled Intentionality Framework (Rietveld 2014, 2021), and Niche Construction Theory (Laland 2016).

To envision such a Technological Environmentality Compass, I will:

1. Firstly, I will explain that capabilities (namely real, substantive freedoms or opportunities to choose to act in a specific area of life deemed valuable) are fundamentally shaped by specific bio-cultural environments, which imply not only ‘natural goods and ecological services’, but also the enabling or disabling relations, both voluntary and involuntary, mediated by everyday tools, digital media, infrastructure, and technological systems;

2. After laying out the co-constitutional relationship between humans, technologies, and the environment, my main goal is to highlight the importance of adopting a critical perspective towards the technologies we aspire to develop, as well as to critically examine the recursive effect that human-made environments have on us. For this purpose, I will build on Nussbaum’s (2001) ‘Control Over One’s Environment’ capability to include ‘Technological Environmentality’ (Aydin, González Woge, and Verbeek 2019) as a crucial dimension of our contemporary life-world;

3. Finally, I will expand on Holland’s (2008) work on ‘Sustainable Ecological Capacity’ as a Meta-Capability and highlight that the accumulation of altered environmental characteristics, whether deliberate or accidental, significantly impacts anthropogenic practices over time and causes biological adaptations to emerge from the reciprocal interactions between humans and the technological environments. Furthermore, this dynamic relationship also shapes future generations of humans, their cultural activities, and other organisms.

To conclude, the Technological Environmentality Compass will be based on the Capability Approach as a non-essentialist, dynamic framework that allows for the analysis of co-evolutionary relations between humans, technologies and the environment. My ultimate objective is to enrich the discourse on identifying and prioritizing the human capabilities that we should aim to preserve, sustain, modify, design, and create as we continue to advance and engineer both our environment and ourselves.

References

- Aydin, C., González Woge, M., and Verbeek, P-P. (2019): “Technological Environmentality: Conceptualizing Technology as a Mediating Milieu”. Philosophy & Technology, 32(2), 321-338.

- Holland, B. (2008): “Justice and the Environment in Nussbaum's Capabilities Approach: Why Sustainable Ecological Capacity Is a Meta-Capability” in Political Research Quarterly, Vol. 61, No. 2, pp. 319-332.

- Ihde, D. (1990): Technology and the Lifeworld: from Garden to Earth. Indiana Series in the Philosophy of Technology.

- Laland, K., Matthews, B., and Feldman, MW. (2016). “An introduction to niche construction theory” in Evolutionary Ecology 30:191-202.

- Malafouris, L. (2013). How Things Shape the Mind: A Theory of Material Engagement. MIT Press.

- Oosterlaken, I. (2013): Taking a Capability Approach to Technology and its Design: A Philosophical Exploration. Netherlands, Simon Stevin Series in the Ethics of Technology, 3TU.

- Nussbaum, M. (2011): Creating Capabilities: The Human Development Approach. Cambridge, Harvard University Press.

- Van Dijk, L., and Rietveld, E. (2017). “Foregrounding Sociomaterial Practice in Our Understanding of Affordances: The Skilled Intentionality Framework” in Frontiers in Psychology, Cognitive Science, Volume 7 - 2016.

- Van de Poel, I. (2021). “Design for value change” in Ethics and Information Technology 23, 27–31.

- Verbeek, P-P. (2006). “Materializing Morality: Design Ethics and Technological Mediation” in Science, Technology, & Human Values, 31: 36.



Outside-in: rethinking technologically mediated moral enhancement

Ching Hung

National Chung Cheng University, Taiwan

The environmental crisis demands a comprehensive reevaluation of strategies to promote moral behavior, transcending traditional education and embracing innovative technological solutions. Internalist models, which emphasize interventions like neural stimulants and psychotropic enhancements to foster empathy and cooperation, have gained prominence in recent discourse. However, while Unfit for the Future: The Need for Moral Enhancement (Persson & Savulescu, 2012) compellingly argues for the necessity of moral enhancement, its proposed internalist methods often struggle to bridge the persistent "knowing-doing gap," where moral understanding does not translate into action.

This paper proposes an "externalist approach" to moral enhancement, building on Peter-Paul Verbeek’s theory of technological mediation (Verbeek, 2011) and B.F. Skinner’s behavioral reinforcement principles (Skinner, 1974). Verbeek highlights the role of technology in shaping human perception and action, while Skinner’s work demonstrates how environmental factors influence behavior without requiring cognitive intermediaries. Together, these insights support the development of external, low-tech interventions that directly and visibly modify behavior.

The externalist approach prioritizes the tangible and visible elements of the environment as key factors in shaping moral behavior. Unlike internalist methods that depend on bioengineering or pharmacological means, this approach uses technologies and designs that interact with individuals in their everyday surroundings. For instance, simple architectural adjustments, such as making stairs more accessible than elevators, can subtly guide behavior without imposing on individual autonomy. Such interventions are grounded in behavioral economics, which highlights the importance of external triggers in shaping habitual actions (Thaler & Sunstein, 2008). This paper's externalist approach echoes these principles, providing complementary theoretical support for designing environments that nudge individuals toward morally desirable behaviors.

Furthermore, this paper integrates perspectives from contemporary discussions on moral enhancement, emphasizing the ethical implications of external interventions. Unlike invasive internalist methods, external approaches uphold transparency and maintain individual agency, offering scalable solutions to complex moral challenges. By aligning human behavior with ethical objectives through environmental design, this approach not only bridges the knowing-doing gap but also fosters a culture of collective responsibility and sustainable moral practices.

This framework demonstrates that externalist strategies can redefine how societies address pressing global issues, such as the environmental crisis, by leveraging everyday technologies and designs to achieve significant behavioral transformations. Ultimately, this approach highlights the potential for low-tech, scalable, and ethically sound solutions to foster meaningful change in moral behavior.

References:

Persson, I., & Savulescu, J. (2012). Unfit for the Future: The Need for Moral Enhancement. Oxford: Oxford University Press.

Skinner, B.F. (1974). About Behaviorism. New York: Knopf.

Verbeek, P.P. (2011). Moralizing Technology: Understanding and Designing the Morality of Things. Chicago: University of Chicago Press.

Thaler, R.H., & Sunstein, C.R. (2008). Nudge: Improving Decisions About Health, Wealth, and Happiness. New Haven: Yale University Press.

 
3:35pm - 4:50pm (Papers) Artificial Intelligence
Location: Auditorium 3
Session Chair: Luuk Stellinga
 

Towards friendship among nonhumans: human, dog and robot

Masashi Takeshita

Hokkaido University, Japan

Friendship is one of the most significant relationships in human life. Humans enjoy spending time with friends, growing together, and cultivating these bonds by caring for each other. Because humans are social animals, having friends is important for our well-being.

In recent years, the rise of social robots and advances in conversational AI have prompted discussions about whether humans and robots can form genuine friendships. Some philosophers argue that such friendships are impossible due to robots’ lack of internal mental states and the inherently asymmetrical nature of human-robot relationships (e.g., Nyholm 2020). Others suggest more flexible criteria for what constitutes friendship (Ryland 2022).

Meanwhile, research in animal ethics has examined whether humans and other animals, particularly dogs, can form friendships (Townley 2017). Dogs, like humans, are social animals who depend on close, supportive relationships for their well-being—dogs left alone at home, for instance, may feel lonely or experience separation anxiety (Schwartz 2003). This suggests that dogs benefit from friendships, not only with humans but potentially with robots as well.

The purpose of this study is to investigate whether dogs and robots can indeed be friends. First, I review discussions in robot ethics regarding human-robot friendship and debates in animal ethics concerning human-dog friendship to determine the necessary and sufficient conditions for friendship. Next, I draw on studies in animal-computer interaction to assess whether dogs and robots can meet these conditions.

In robot ethics, some scholars (e.g., Elder 2017; Nyholm 2020) reject the possibility of genuine human-robot friendship, while others (e.g., Danaher 2019; Ryland 2022) argue that it is possible. Similarly, some in animal ethics have criticized arguments that deny the possibility of human-dog friendship (e.g., Townley 2017). By examining these arguments, I identify the minimum conditions for friendship and show that dogs and robots can theoretically satisfy these conditions, suggesting that dog-robot friendships may be possible.

I then turn to animal-computer interaction research to explore how such friendships might manifest in practice. Some studies (e.g., Lakatos et al. 2014; Qin et al. 2020) show that dogs respond differently to humanoid robots than to other artificial objects. For instance, Qin et al. (2020) found that dogs did not respond to a simple loudspeaker, yet they reacted to a call from a humanoid robot, suggesting a potential for interaction that could develop into friendship.

Finally, based on these theoretical and empirical findings, I consider how robots should be designed so that dogs and robots can become friends. By elucidating the conditions for dog-robot friendship and suggesting a design idea, this study aims to deepen our understanding of social robotics, improve animal welfare, and open new avenues for human-animal-robot interactions.

References:

Danaher, J. (2019). The philosophical case for robot friendship. Journal of Posthuman Studies, 3(1), 5-24.

Elder, A. M. (2017). Friendship, robots, and social media: False friends and second selves. Routledge.

Lakatos, G., Janiak, M., Malek, L., Muszynski, R., Konok, V., Tchon, K., & Miklósi, Á. (2014). Sensing sociality in dogs: what may make an interactive robot social?. Animal cognition, 17(2), 387-397.

Nyholm, S. (2020). Humans and robots: Ethics, agency, and anthropomorphism. Rowman & Littlefield Publishers.

Qin, M., Huang, Y., Stumph, E., Santos, L., & Scassellati, B. (2020). Dog Sit! Domestic Dogs (Canis familiaris) Follow a Robot's Sit Commands. In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, 16-24.

Ryland, H. (2021). It’s friendship, Jim, but not as we know it: A degrees-of-friendship view of human–robot friendships. Minds and Machines, 31(3), 377-393.

Schwartz, S. (2003). Separation anxiety syndrome in dogs and cats. Journal of the American Veterinary Medical Association, 222(11), 1526-1532.

Townley, C. (2017). Friendship with companion animals. In Overall, C. (ed.), Pets and people: The ethics of companion animals. Oxford University Press.



Artificial intelligence aided resolution of moral disagreement

Berker Bahceci

TU/e, Netherlands, The

The last decade has seen a surging interest in how existing or future AI systems can help us in moral deliberation. Such systems could, now or in the future, provide insights into our moral psychology (Buttrick, 2024), aid us in moral decision-making (Giubilini & Savulescu, 2018), or act as moral interlocutors (Schwitzgebel et al., 2024). Implicit in some of these discussions is the belief that there is a right thing to do in a particular case, and that suitably using AI can allow us to discover the right thing to do. However, even supposing that there is a right thing to do, what it is will not always be clear or easily accessible. Two moral agents exposed to the same scenario might have conflicting beliefs as to what is the right thing to do, and morally disagree as peers. In this talk, I explore the possibility that AI itself could help us with that problem—that we could use AI in at least three ways to resolve moral disagreement between peers.

First, I will argue that AI can highlight the morally relevant or morally irrelevant features in a particular case. For example, one person can object to a member of the LGBTQ+ community’s right to access universal healthcare on the basis of their sexual orientation, even though a person’s sexual orientation is not the morally salient feature in assessing the moral status of the right to access universal healthcare. If the moral disagreement about the case stemmed from this error, it can be resolved.

Second, as Klincewicz (2016) has argued, AI could provide its users with “moral arguments grounded in first-order normative theories, such as Kantianism.” A person may not be moved by the AI system’s claim that a person’s sexual orientation is not the morally salient feature in their right to universal healthcare. Indeed, we often seek answers to ‘Why ought I to X?’ as much as ‘Ought I to X?’ (Baumann, 2018). A valid, theory-driven argument could serve an explanatory role for the truth of the fact that a person’s sexual orientation is not the morally salient feature in assessing the moral status of the right to access universal healthcare, and situate this fact in a larger, theoretical framework. This could be another way AI resolves disagreement.

Finally, successfully inferring a verdict about a case could be too demanding, even if one has access to the morally salient features of the case (Beauchamp & Childress, 2013). AI systems could assist their users in reaching verdicts by reducing the number of considerations, and, as Clay & Ontiveros (2023) argued, by informing the users about rules of inference. AI could formalize the user’s considerations in first-order predicate logic, and present them with example inferences structurally similar to the one at hand. By assisting the user to successfully infer a verdict, AI could help resolve moral disagreement.
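
As a worked illustration (not drawn from the paper itself) of what such formalization and structural pairing might look like, with all predicates as hypothetical placeholders:

\begin{align*}
\text{User's case:} \quad & \forall x\,(\mathit{Person}(x) \rightarrow \mathit{RightToCare}(x)),\ \mathit{Person}(a)\ \vdash\ \mathit{RightToCare}(a)\\
\text{Same structure:} \quad & \forall x\,(\mathit{Citizen}(x) \rightarrow \mathit{MayVote}(x)),\ \mathit{Citizen}(c)\ \vdash\ \mathit{MayVote}(c)
\end{align*}

Both inferences instantiate the same pattern (universal instantiation followed by modus ponens), so recognizing the validity of the second, uncontroversial case can help a user accept the validity of the first.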

I end this work by addressing some possible objections that one might have to my arguments, drawing from the existing literature on disagreement and AI ethics.



Befriending AI: a cybernetic view

Naketa Williams

New Jersey Institute of Technology, United States of America

Friendship is a system built on trust, reciprocity, adaptation, and growth—and AI is capable of all these qualities. Society is growing comfortable with befriending AI companions. Replika—which has over 10 million users as of 2023—underscores a widespread interest in AI companionship.[1] This research aims to explore whether AI can truly participate in meaningful relationships with humans and whether such a friendship can be a good one.

One major challenge in befriending AI is the fact that these entities are often products of the tech industry. Corporate AI systems are typically designed to prioritize profit over human connection, which may compromise the grounds for friendship. In this context, there are few authentic motives to build trust, reciprocity, and growth between humans and AI because of corporate influence. Instead, there only remains a risky relationship rooted in manipulation and self-interest.

Guided by the philosophies of Baruch Spinoza, Tullia D’Aragona, Donna Haraway, and Cybernetics, this paper explores ways to rethink what’s possible. Can we befriend corporate-controlled AI? If so, how can we ensure that these systems are worthy of our friendship? To navigate these questions, we need to think critically about how corporate motives shape AI’s behavior. At the same time, we must construct our own consciousness. To enact change, we must first recognize oppressive systems and envision new, liberatory ways for AI to coexist with us—as collaborators and as friends.

 
3:35pm - 4:50pm (Papers) Epistemology III
Location: Auditorium 4
Session Chair: Philip Nickel
 

Epistemic (in)justice in self-monitoring platforms for mental health

Tineke Broer, Emiel Krahmer, Gert Meyers, Roshnee Ossewaarde, Jenny Slatman, Charlotte Zegveld

Tilburg University, Netherlands, The

It is well known that in psychiatry the relationship between medical professional and client/patient is structurally asymmetric. In the past, psychiatrists sometimes disregarded patients’ experiences and suggestions on the grounds of their medical authority. Patients were deemed to lack understanding of their own illness or disorder. This dismissal of patients’ knowledge can be considered as an instance of epistemic injustice, that is, injustice done to someone in their capacity as knower (Fricker, 2007). With the rise of client/patient-led organizations since the 1980s (such as the Hearing Voices Network, Mind Platform and Mind Ypsilon), clients/patients, family, informal caregivers (and professionals) aim at minimizing this type of injustice. Through digitalization and internet 2.0, people with mental health problems today can increasingly assert their voices and knowledge through online platforms, which indeed may strengthen their position as knower. In addition to this communicative function, which contributes to (knowledge) community building, digitization is also increasingly used within psychiatry to design online self-monitoring platforms. For instance, a smartphone app used by people diagnosed with schizophrenia can help to predict a relapse and, as such, can help people to manage their disorder (Henson et al., 2021). These kinds of apps are based on digital phenotyping: the continuous and in situ collection of “passive” data (quality of sleep, heart rate (variation), intonation, pedometer, geolocation, activity on social media), and “active” data (surveys, questions) through personal digital devices.
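
To illustrate the kind of processing such apps involve (this is not the pipeline of Henson et al. (2021) or of the project discussed below; all field names, weights, and thresholds are hypothetical), a toy digital-phenotyping check in Python might combine passive deviation from a personal baseline with an "active" self-report as follows:

from statistics import mean

PASSIVE_FIELDS = ["sleep_hours", "heart_rate_variability", "step_count", "social_media_activity"]

def deviation_score(baseline: dict, today: dict) -> float:
    """Average relative deviation of today's passive signals from the personal baseline."""
    return mean(abs(today[f] - baseline[f]) / baseline[f] for f in PASSIVE_FIELDS)

def relapse_flag(baseline: dict, today: dict, survey_score: float, threshold: float = 0.3) -> bool:
    """Combine passive deviation with an 'active' self-report score in [0, 1].
    The 70/30 weighting is arbitrary; note how easily the self-report is outweighed."""
    return 0.7 * deviation_score(baseline, today) + 0.3 * survey_score > threshold

baseline = {"sleep_hours": 7.5, "heart_rate_variability": 60, "step_count": 8000, "social_media_activity": 20}
today = {"sleep_hours": 4.0, "heart_rate_variability": 45, "step_count": 2500, "social_media_activity": 5}
print(relapse_flag(baseline, today, survey_score=0.2))  # -> True in this toy example

Even in this toy version the "active" self-report is easily outweighed by the passive signals, which anticipates the epistemic worry discussed below.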

Through AI a specific “digital phenotype” can be identified, which is claimed to have the potential to facilitate personalized care and cure. In our paper we will discuss how this data-driven psychiatry could impact epistemic justice in (clinical) care. Since these kinds of digital applications involve self-monitoring, they are presented as having the potential to strengthen the position of clients/patients. But on the other hand, if these apps are predominantly based on “passive” data, then we can question whether the knowledge and experience of the client/patient still matters. If the “active” experience of clients/patients is downplayed, one could say that in such cases clients/patients are wronged in their capacity as knowers by the machine/algorithm. In our paper we will first briefly present a digital phenotyping project that is currently being done at the Academic Collaborative Center for Digital Health & Mental Wellbeing at Tilburg University. Through this case study we wish to explore the possible tension between the kind of self-knowledge generated by self-monitoring platforms, on the one hand, and intuitive and sensorial (self-)knowledge, on the other hand.

References

Coghlan, S., & D’Alfonso, S. (2021). Digital phenotyping: an epistemic and methodological analysis. Philosophy & Technology, 34(4), 1905-1928.

Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford University Press.

Henson, P., D’Mello, R., Vaidyam, A., Keshavan, M., & Torous, J. (2021). Anomaly detection to predict relapse risk in schizophrenia. Translational psychiatry, 11(1), 28.

Slack, S. K., & Barclay, L. (2023). First-person disavowals of digital phenotyping and epistemic injustice in psychiatry. Medicine, Health Care and Philosophy, 26(4), 605-614.



Crip expertise and technological knowledge

Oliver Shuey

Virginia Tech, United States of America

Epistemic violence, or the systematic denial of one's access to knowledge, is a noted phenomenon within technological research and design (Ymous et al. 2019). Artifacts may be designed to address specific issues, but too often technologists do not involve the intended users/beneficiaries during the design process, or only do so once a problem or approach has been determined. Professional approaches and methodologies that produce technological knowledge may also privilege technical expertise over embodied knowledge and experience. Working in tandem, these social and epistemic shortcomings contribute to the erasure of certain embodiments and knowledges – certain epistemes – in science and technology. In turn, the power to determine what counts as legitimate technological knowledge is directed away from the most marginalized.

This paper presentation focuses on epistemic violence toward disabled communities, with specific attention on developing research practices that address injustices in disability-related technological knowledge. Too often, disabled individuals are denied meaningful participation in the development of technologies that are ultimately aimed at disabled communities. Whether intentional or not, disabled expertise and experience are rarely included in the knowledge communities concerned with disability-related technologies. Once an artifact has been constructed disabled experts might be invited to participate in human subjects research or clinical trials conducted toward the end of the design process that test the efficacy of an artifact. This only strengthens the testimonial injustices toward disabled expertise, as disabled participants are treated not as experts, but as test subjects. As a result, technologies often do not actually address the needs of disabled communities since those needs are typically defined before “test subject” input.

Within these disabled communities, epistemic violence manifests as ableism, specifically technoableism in the case of disability-related technologies, which positions disability as a necessarily negative experience. The long history of disability has often set technologists and professionals as experts about disability, rather than disabled people themselves. Influenced by eugenics, technoableism positions disabled people as expendable if not remediated by technology (Shew 2023). But disabled scholars argue that the embodied experience of disability affords access to technological knowledge that is crucial for addressing the needs of disabled communities (Hamraie and Fritsch 2019). In order to produce technologies that actually address these needs, disabled expertise must be included throughout knowledge production and design.

What would it look like to intentionally counter epistemic violence in disability-related technology research? This paper presentation looks at and reports on the work of one approach to better incorporating disabled people in technological research (de-identified for this abstract) in contrast to other approaches. The project partners with technology research groups to bring disabled experts into a knowledge community as meaningful contributors. During a consulting session, technologists present current research projects to a small group of disabled consultants (always with a variety of disability types) recruited from the local area. Consultants are allowed free rein to interact with the artifacts before engaging in a dynamic feedback session in community with each other, as well as technologists and the project team. Finally, the project team writes a report based on the discussion (and follow-up feedback sessions as needed) for the technology research group with a summary of the session and recommendations for future research.

 
3:35pm - 4:50pm (Papers) Data II
Location: Auditorium 5
Session Chair: Sage Cammers-Goodwin
 

Epistemology of ignorance and datafication – To interrogate the necessity for secrecy in AI through marginalised groups’ experiences

Marilou Niedda

Utrecht University, Netherlands, The

This paper presentation seeks to articulate ignorance as an epistemic concept in the context of datafication within AI systems. To delve into this topic, my argument proceeds as follows: (i) algorithmic biases are prevalent in the application of AI technologies, where “erroneous” calculations are produced in machine learning algorithms, and result in long-lasting discriminatory outcomes; (ii) addressing these biases often involves diversifying datasets; (iii) diversification implies that additional data must be collected from marginalised populations – with women and racialised individuals at the forefront – to mitigate the (re)creation of structural inequalities.

However, I contend that this approach raises two critical issues. Firstly, datafication practices reify one’s identity, as the classificatory and rule-based nature of AI systems perpetuates a form of essentialism towards one’s experiences, which may have adverse implications for identity politics (Scott, 1991). Secondly, and this is the main argument I develop in this talk, marginalised groups have historically resisted the sharing of personal data to avoid politics of surveillance (Klein, d’Ignazio, 2020). Recent examples include African American communities in the United States facing heightened scrutiny from law enforcement through facial recognition technologies, and the fact that individuals who menstruate stopped using period-tracking apps in states where abortion became illegal after the 2022 overruling of the Roe v. Wade decision.

Drawing on the works of Linda Alcoff and Shannon Sullivan (2007), I argue that secrecy bears epistemic virtue, and I articulate a two-level epistemology of ignorance in datafication. (i) Indeed, the type of ignorance that designers of AI perpetuate is usually considered harmful, as they could introduce discriminatory biases into their technological artefacts. (ii) However, ignorance allows (historically) oppressed groups to interrogate the omnipresent reality of AI systems in both one’s private and public life, by potentially refusing to share data about their experiences. I conclude that this resistance to datafication triggers a reconsideration of ignorance as an epistemic concept in the classic epistemology of AI, whilst allowing us to rethink the use of AI technologies altogether, and inspire community-centered approaches to data creation and usage.



Reclaiming control of thought and behavior data through the right to freedom of thought.

Kristina Pakhomchik

University of Vienna, Austria

Current data collection practices violate the fundamental human right to freedom of thought, with existing legal frameworks proving inadequate for its protection. Advances in behavioral science, psychology, and the understanding of external manifestations of thought underscore the urgent need to safeguard this right. The use of behavioral data facilitates technologies that enable manipulation, and with the scale and capabilities of AI and neurotechnologies, poses significant risks to human dignity and personal autonomy.

The paper will first examine the existing legal landscape of freedom of thought, analyzing its status across various jurisdictions and the limitations of relying on "neighboring" rights such as privacy and speech. (Shaheed, 2021) It will then explore the philosophical foundations of freedom of thought, defining its core components—freedom from interference, manipulation, and coercion—and examining the relationship between thought, speech, and the external manifestations of internal states. (McCarthy-Jones, 2019)

The core argument will demonstrate how the widespread collection of data, from interaction data to behavioral information, amounts to the collection of "thought data." Users often lack awareness of this sharing and have limited means to prevent it. Such data enables the manipulation of attention, inference of emotions, and prediction of future behaviors, undermining independent thought and decision-making. This raises critical questions about the possibility of truly informed consent for behavioral data collection. (Breen, Ouazzane, and Patel, 2020)

The paper concludes by advocating for a re-evaluation of legal frameworks to explicitly recognize and protect the absolute right to freedom of thought in the digital age. While some argue that current GDPR regulations sufficiently protect mental data under the “special categories” provision, practical application reveals significant shortcomings. (Ienca and Malgieri, 2022) Effective protection requires a paradigm shift in data collection practices, emphasizing strict limitations on certain types of data, meaningful control over behavioral personal data, and robust safeguards against interference and manipulation of individual thought processes.

This approach addresses the root cause of emotional manipulation, prioritizing user control over mental data rather than merely mitigating its consequences. The research emphasizes empowering individuals by enhancing their control over mental data and protecting their human dignity and autonomy. By tackling the potential for AI-driven manipulation, it seeks to safeguard human autonomy and preserve individuals’ capacity for self-determination, while also addressing the growing risks posed by invasive neurotechnologies increasingly integrated into daily life.

A/76/380: Interim report of the Special Rapporteur on freedom of religion or belief, Ahmed Shaheed: Freedom of thought (no date) OHCHR. Available at: https://www.ohchr.org/en/documents/thematic-reports/a76380-interim-report-special-rapporteur-freedom-religion-or-belief (Accessed: 15 January 2025).

Breen, S., Ouazzane, K. and Patel, P. (2020) ‘GDPR: Is your consent valid?’, Business Information Review, 37(1), pp. 19–24. Available at: https://doi.org/10.1177/0266382120903254.

Ienca, M. and Malgieri, G. (2022) ‘Mental data protection and the GDPR’, Journal of Law and the Biosciences, 9(1), p. lsac006. Available at: https://doi.org/10.1093/jlb/lsac006.

McCarthy-Jones, S. (2019) ‘The Autonomous Mind: The Right to Freedom of Thought in the Twenty-First Century’, Frontiers in Artificial Intelligence, 2, p. 19. Available at: https://doi.org/10.3389/frai.2019.00019.

O’Callaghan, P. and Shiner, B. (2021) ‘The Right to Freedom of Thought in the European Convention on Human Rights’, European Journal of Comparative Law and Governance, 8(2–3), pp. 112–145. Available at: https://doi.org/10.1163/22134514-bja10016.

 
3:35pm - 4:50pm (Papers) Sovereignty
Location: Auditorium 6
Session Chair: Sabine Roeser
 

On technological sovereignty and innovation sovereignty

Rene von Schomberg

RWTH Aachen University, Germany

I will explore the implications of contemporary calls for ‘Technological Sovereignty’ for a political theory of technology and innovation (as called for by Carl Mitcham, 2022). The concepts of technological and innovation sovereignty open a pathway to address existing gaps in the governance of technology and innovation. Both responsible innovation and technological sovereignty aim to embed socio-political objectives within the development of technology and innovation, affecting economic governance and providing directionality to technological capacities.

Responsible innovation operates within a deliberative democratic framework, encouraging societal actors to be mutually responsive and collaborate toward addressing societal challenges. It relies on a process that balances stakeholder interests and promotes an inclusive dialogue on the societal impacts of technology. This approach incentivizes collaboration and shared responsibility among public, private, and civil sectors, aligning innovation with socially desirable outcomes.

In contrast, technological sovereignty suggests a more politically guided approach to technological development. It emphasizes the importance of reducing external dependencies and securing critical technological capacities through governance and policy intervention. This implies a more top-down direction for innovation, aiming to safeguard a degree of national or regional autonomy over essential technologies. The focus on sovereignty introduces a political dimension to innovation, where the state's role in shaping technology becomes more pronounced, potentially limiting market-led decision-making.

Taken together, these frameworks may signal a shift towards a more politically engaged governance model for technology, where innovation is not just a market-driven process but is actively shaped by socio-political priorities. Exploring how these concepts interact could help develop a political theory of technology that recognizes both the collaborative potential of responsible innovation and the protective, sovereignty-oriented dimensions necessary for resilient technological systems. This convergence could support a comprehensive approach to innovation governance, ensuring that technological progress aligns more closely with societal and democratic values.

References

Mitcham, C. Political Philosophy of Technology: After Leo Strauss (A Question of Sovereignty). Nanoethics 16, 331–338 (2022). https://doi.org/10.1007/s11569-022-00428-9



Digital technologies and social sustainability: from a data governance perspective

Pauldin Lawrence

Istituto Universitario di Studi Superiori, Italy

The paper analyses the impacts of digital technologies on social sustainability and proposes methods to steer that impact in a constructive direction. The multi-pillar approach to sustainable development has gathered momentum in the past decade, as have discourses on social sustainability, which was previously overshadowed by economic and environmental sustainability. In this context, the World Bank has produced a landmark publication in the light of the bank’s theoretical and practical expertise in the field, identifying four defining elements of social sustainability: social cohesion, inclusion, resilience, and process legitimacy. The fast-paced growth of digital technologies over the past two decades has impacted those defining elements of social sustainability both constructively and destructively. Digital technologies, which currently manifest in the form of AI, social media, and other digital services, show no sign of halting in the near future, given their ‘omnipresence’ in both the public and private lives of individual persons. Such a growing ‘intimate’ rapport between digital technologies and individuals will have decisive implications for the nature of the social fabric that the latter constitute. Therefore, sustainable research and development of digital technologies is an essential component in determining sustainable societies. This paper seeks to realise the ambition of making digital technologies sustainable by proposing principles and methods for responsible digital data management, since digital data plays a key role in the research and development of digital technologies.

Apart from an introduction and a conclusion, the paper contains three sections. The first section surveys the constructive and destructive impacts of digital technologies on society. It covers, on the one hand, the positive effects of digital technologies in the fields of education, the economy, and civic engagement, which nourish social capital, and, on the other, the negative impacts resulting from misinformation, biased algorithms, the proliferation of illicit activities on the web, and so on. Both kinds of impacts are evaluated using the framework of social sustainability proposed by the World Bank, while borrowing concepts from other scholars of social sustainability where necessary.

The second section seeks to furnish a philosophy of digital data in view of its sociological implications, drawing inspiration from the field of critical data studies, with special attention to the scholars Mireille Hildebrandt, Maurizio Ferraris, Shannon Vallor, and Antoinette Rouvroy. It describes digital data as a social object or a social good rather than a private or public good, because the private/public good conception makes data governance less democratic.

The third section proposes principles and methods of digital data management that are sustainable both for digital technologies and for the societies in which they exist. People feel ‘at home’ or ‘intimate’ with digital technologies because these technologies are built on people’s data. However, the nature of digital technologies is not determined by those same people, but by a small group of stakeholders in a profit-oriented, less democratic, or liberal-paternalistic way. Currently, there are entities such as data cooperatives, personal data stores (PDS), data commons, and decentralized autonomous organisations (DAOs) that give individual persons a say in deciding the nature of the digital technologies for which their data will be used. However, a lack of clarity in the definition and philosophy of digital data in its sociological context hinders the functioning of such entities. This section outlines how a philosophy of digital data can be used to make data governance adaptable for developing digital technologies that do not ruin, but rather nurture, social sustainability.



Rethinking sovereignty in a digital age

Glen Miller

Texas A&M University, United States of America

The introduction and adoption of new technologies continually transforms informal and formal political institutions, and perhaps nowhere has this happened faster than with digital technologies, which have transformed how and with whom we communicate, argue, and act, i.e., our processes of political opinion and will formation. These rapid digital transformations put stress on the political practices, concepts, and beliefs that developed historically in tandem with the technologies of their time. This paper focuses on the concept of sovereignty, which has taken on different meanings over time and often functions as a fuzzy concept, and on how it has been, and perhaps should be, transformed by digital technologies.

The concept of sovereignty has taken on several different forms: as what arises when an association of households agrees to act together; as the ultimate and absolute authority, obtained voluntarily or through force; or as the expression of the general will. In modern times, it has also come to include an attitude of deference toward states’ self-rule or self-determination. Similarly, how formal political processes legitimately confer sovereignty on select individuals has been theorized differently over time in different forms of government. These variations have taken place even in the presence of one important element of stability: sovereignty has nearly always arisen among people who live in the same area and who share at least some interests and concerns.

This research traces three main historical senses of sovereignty – through Aristotle, Thomas Hobbes, and Carl Schmitt – and the concept’s pluralist and political expressions after World War II. Technological transformations, especially those arising from digital technologies, weaken the social bonds among those residing in the same place: networks that bridge distances and cultures are formed, actors are increasingly mediated by technology and commodified, and the geopolitical space is more active and varied than ever before. Politics, internal and geopolitical, is increasingly complex, in flux, and confusing. Put in Dewey’s terms, digital technologies generate expansive and multiple networks, and within them domains and subdomains, that have led to a multitude of “publics” motivated by different “problems.”

This recognition of a new sociotechnical reality leads to an understanding of sovereignty between nation-states as necessary but substantially weaker than in a world governed by mechanical technology, with increased difficulty in determining the degree to which citizens support their government’s geopolitical acts. The nature of sovereignty in internal affairs seems to demand an embrace of subsidiarity, allowing self-determination on appropriate matters, and perhaps the formation of new kinds of publics that do not depend on geography, a development that already aligns with recent changes in the “public sphere,” as theorized by Jürgen Habermas, which lack a formal connection to the formal political system.

References

Appadurai, Arjun. 1996. “Sovereignty without Territoriality: Notes for a Postnational Geography.” In Geography of Identity, edited by Patricia Yaeger, 40–58. Ann Arbor: University of Michigan Press.

Aristotle. 2002. Nicomachean Ethics. Translated by Joe Sachs. Newburyport, MA: Focus Publishing.

Aristotle. 2012. Politics. Translated by Joe Sachs. Newburyport, MA: Focus Publishing.

Borgmann, Albert. 1984. Technology and the Character of Contemporary Life: A Philosophical Inquiry. Chicago: University of Chicago Press.

Bratton, Benjamin H. 2015. The Stack: On Software and Sovereignty. Cambridge, MA: MIT Press.

Brown, Chris. 2009. “From International to Global Justice?” In The Oxford Handbook of Political Theory, edited by John S. Dryzek, Bonnie Honig, and Anne Phillips, 621–35. Oxford: Oxford University Press.

Cole, G. D. H. 1916. Symposium on “The Nature of the State In View of Its Internal Relations.” Proceedings of the Aristotelian Society 16, 310–25.

Dewey, John. 1927. The Public and Its Problems. New York: Holt.

Lloyd, Howell A. 1991. “Sovereignty: Bodin, Hobbes, Rousseau.” Revue Internationale de Philosophie 45, no. 179 (4): 353–79.

Osiander, Andreas. 2001. “Sovereignty, International Relations, and the Westphalian Myth.” International Organization 55, no. 2: 251–87.

Schmitt, Carl. 1966. The Concept of the Political. Expanded edition. Chicago: University of Chicago Press.

———. 1985. Political Theology: Four Chapters on the Concept of Sovereignty. Translated by George Schwab. Chicago: University of Chicago Press.

Strauss, Leo. 1959. “What Is Political Philosophy?” In What Is Political Philosophy? and Other Studies. Chicago: University of Chicago Press.

 
3:35pm - 4:50pm(Papers) Autonomy
Location: Auditorium 7
Session Chair: Mariska Bosschaert
 

AI outsourcing and the value of autonomy

Eleonora Catena

Centre for Philosophy and AI Research {PAIR}, Friedrich-Alexander Universität (FAU) Erlangen-Nürnberg, Germany

Our relations with so-called “intimate technologies” bring about both opportunities and risks for personal and social life (van Est et al., 2014; van Riemsdijk, 2018). This paper contributes to this debate by addressing how the integration of AI systems into daily life (e.g., recommender systems, chatbots, artificial assistants) affects the value of human autonomy.

While the relevant literature (Laitinen & Sahlgren, 2021; Bonicalzi et al., 2023; Prunkl, 2024) has mostly focused on the threats of AI systems to the exercise of personal autonomy, I will focus on the impact (and consequent threats) on its value. More precisely, I argue that AI outsourcing (i.e., offloading decisions and actions to AI systems) (Danaher, 2018) challenges and changes the intrinsic and instrumental value of personal autonomy.

By definition, AI outsourcing implies offloading control over some processes (Process Control) to achieve certain outcomes (Outcome Control) (see also Di Nucci, 2020 and Constantinescu, November 15 2024, on the “control paradox”). I show that Process Control and Outcome Control map onto two main components of personal autonomy: on the one hand, the deliberative-decisional capacity of forming one’s motives, based on internal properties and processes (Process Autonomy); on the other, the condition of enacting one’s motives and goals, also based on external factors (Outcome Autonomy). Given this relation between autonomy and control, then trading Process Control for Outcome Control corresponds to a reduction of Process Autonomy in favour of Outcome Autonomy.

I make this case for three emblematic examples of AI integration and outsourcing in daily life: driving automation, algorithmic recommendations, and co-creation with generative models. All these cases entail a partial or full transfer of control over a process (e.g., driving decision-making, information filtering, creative production) to the AI system for more or better opportunities to realize one’s goals (e.g., mobility options, personal decisions, creative contents). I argue that this trade-off has implications for the evaluation of personal autonomy. On the one hand, it implies a reprioritization of Outcome Autonomy over Process Autonomy. On the other hand, it challenges the (intrinsic and instrumental) value of Process Autonomy: being in control of one’s processes is not considered relevant as long as one is still able, or better able, to realize one’s goals or other goods.

I conclude with the ethical implications of undervaluing Process Autonomy, such as increased exposure to and acceptance of manipulation. The disruption of personal autonomy, understood especially as control over one’s own processes, is therefore a form of vulnerability entailed by the intimate integration of, and outsourcing to, AI systems.

References

van Riemsdijk, M. B. (2018). Intimate Computing. Abstract for the Philosophy Conference “Dimensions of Vulnerability”.

van Est, R., Rerimassie, V., van Keulen, I., & Dorren, G. (2014). Intimate technology: The battle for our body and behaviour. Rathenau Institute

Bonicalzi, S., De Caro, M., & Giovanola, B. (2023). Artificial Intelligence and Autonomy: On the Ethical Dimension of Recommender Systems. Topoi 42, 819–832. https://doi.org/10.1007/s11245-023-09922-5

Laitinen, A., Sahlgren, O. (2021). AI Systems and Respect for Human Autonomy. Frontiers of Artificial Intelligence 4. https://doi.org/10.3389/frai.2021.705164

Prunkl, C. (2024). Human Autonomy at Risk? An Analysis of the Challenges from AI. Minds & Machines 34, 26. https://doi.org/10.1007/s11023-024-09665-1

Danaher, J. (2018). Toward an Ethics of AI Assistants: An Initial Framework. Philosophy & Technology 31, 629–653. https://doi.org/10.1007/s13347-018-0317-3

Di Nucci, E. (2020). The control paradox: From AI to populism. Rowman & Littlefield.

Constantinescu, M. (2024, November 15). Generative AI avatars and responsibility gaps. Uppsala Vienna AI Colloquium.



Analysis of the problem of human autonomy from the perspective of authenticity ethics

Guihong Zhang

University of Science and Technology of China, China, People's Republic of

With the rapid development of artificial intelligence technology, the relationship between technology and human autonomy has become an important topic in philosophy. This paper explores the multidimensional impact of artificial intelligence on human autonomy from the perspective of authenticity, aiming to reveal the potential challenges that the development of AI technology poses to individual decision-making freedom and the formation of the will. The study focuses on the two major approaches of internalism and externalism in the debate on authenticity, analyzes the core elements of authenticity, and extracts four dimensions: critical reflection, independent decision-making ability, informed and adequate choice, and a supportive environment. Based on this analytical framework, the paper systematically examines the potential impact of artificial intelligence technology on these four dimensions. AI technology may seriously threaten the authenticity of human autonomy through recommendation algorithms, the reshaping of decision-making architectures, “black box” problems, and social bias. In response to these challenges, the study constructs a multi-level safeguard framework spanning the design, user, and regulatory levels, and proposes a system architecture that promotes critical thinking by introducing evaluation tools such as the METUX model. It recommends enhancing system transparency and protecting users’ right to informed decision-making. At the social level, it advocates the establishment of systematic evaluation standards and information disclosure mechanisms to create an institutional environment conducive to the development of individual autonomy. This study provides a more comprehensive and in-depth approach to understanding human autonomy in the era of artificial intelligence.



Driving for Values: Exploring the experience of autonomy with speculative design

Kathrin Bednar1, Julia Hermann2

1Eindhoven University of Technology; 2University of Twente

Newly proposed technological solutions for societal problems may face the challenge of not being accepted or morally acceptable. A key concept that can help to ensure user acceptance of ethically driven technology design is the consideration of users' autonomy, i.e. allowing users to control their interactions with the system, to understand the implications of their choices, and to make decisions that align with their own values and preferences. In this paper, we explore how value experiences of users can be collected and used in approaches that integrate values of moral importance in design such as value sensitive design (VSD; Friedman & Hendry, 2019) or design for values (van den Hoven et al., 2015). Using a research-through-design approach (Stappers & Giaccardi, 2017), we investigated a smart system that suggests navigation routes based on collective values such as safety, sustainability, and economic flourishing (the so-called Driving for Values system).

We focus on the experience of autonomy, as there may be concerns that such a system manipulates users to take alternative routes. A system that respects individual autonomy is more likely to be adopted by users and will be seen as fairer and thus more acceptable from a broader societal and ethical standpoint. We understand autonomy as involving two main components: i) the ability to freely choose among different options, and ii) the availability of meaningful options, i.e., options that enable the agent to decide and act on the basis of their own reasoned values and commitments (Blöser et al., 2010; Vugts et al., 2020).

We conducted 18 semi-structured interviews to collect insights on participants’ experiences and concerns, making use of speculative design to elicit emotions in people. Emotions play an important role in value experiences, which can be understood as experiences of what is good and desirable, or bad and undesirable, in relation to specific situations, actions, or objects. During the interviews, we showed two early system versions to each participant and asked participants to click through them and think out loud. When asking questions about autonomy, we presented participants with various definitions of autonomy and autonomy statements to explore how well they could relate to them and connect different statements to different system versions.

We found that a transparent and trustworthy system that offers a meaningful choice between value-driven route options enhances drivers’ acceptance and personal sense of autonomy. As anticipated, the interaction with the speculative design elicited emotional reactions such as delight, positive excitement, and irritation in participants, which can be interpreted as indicators of an autonomy experience. While most participants found it rather difficult to express what they take autonomy to mean when asked directly, it was easy for them to connect the presented autonomy statements with different system versions. This exercise revealed that participants preferred system versions that they felt enhanced their autonomy and that the availability of meaningful options increased their feeling of being autonomous.

REFERENCES

Blöser, C., Schöpf, A., & Willaschek, M. (2010). Autonomy, experience, and reflection: On a neglected aspect of personal autonomy. Ethical Theory and Moral Practice, 13, 239–253. https://doi.org/10.1007/s10677-009-9205-3

Friedman, B., & Hendry, D. G. (2019). Value Sensitive Design: Shaping technology with moral imagination. MIT Press.

Stappers, P. J., & Giaccardi, E. (2017). Research through design. In The encyclopedia of human-computer interaction (pp. 1–94). The Interaction Design Foundation.

van den Hoven, J., Vermaas, P. E., & van de Poel, I. (Eds.). (2015). Handbook of ethics, values, and technological design: Sources, theory, values and application domains. Springer Science+Business Media. https://doi.org/10.1007/978-94-007-6970-0

Vugts, A., Van Den Hoven, M., De Vet, E., & Verweij, M. (2020). How autonomy is understood in discussions on the ethics of nudging. Behavioural Public Policy, 4(1), 108–123. https://doi.org/10.1017/bpp.2018.5

 
3:35pm - 4:50pm(Papers) Education II
Location: Auditorium 8
Session Chair: Andreas Spahn
 

AI and democratic education: A critical pragmatist perspective

Michał Wieczorek

Dublin City University, Ireland

This paper examines the relationship between artificial intelligence and democratic education. AI and other digital technologies are currently being touted for their potential to “democratise” education, even if it is not clear what this would entail (see, e.g., Adel et al., 2024; Kamalov et al., 2023; Kucirkova & Leaton Gray, 2023). By analysing the discourse surrounding educational AI, I distinguish four distinct but interrelated meanings of democratic education: equal access to quality learning, education for living in a democracy, education through democratic practice, and democratic governance of education. I argue that none of these four meanings can render education democratic on its own, and present Dewey’s (1956; 2016) notion of democratic education as integrating these distinct conceptualisations. Dewey emphasises that education needs to provide children with skills and dispositions necessary for democratic living, experience in communication and cooperation, opportunities to codetermine the shape of democratic institutions and education itself, and equal opportunities to participate in learning. By examining today’s commercial AI tools (Holmes & Tuomi, 2022; Khan, 2024) and the information-centric models of learning underlying them (focusing in particular on Individual Tutoring Systems and educational chatbots such as the GPT-4-based Khanmigo), I argue that their emphasis on individualisation of learning, their narrow focus on the mastery of the curriculum, and the drive to automate teachers’ tasks are obstacles to democratic education. I demonstrate that: 1) AI deprives children of opportunities to gain experience in democratic living by reducing quality education to efficient transmission of information and divorcing knowledge from practical engagement; 2) AI makes it difficult for children to acquire communicative and collaborative skills and dispositions by substituting engagement with peers and teachers with conversation with always agreeable and patient machines; and 3) the increased corporate influence over education systems habituates students to an environment over which they have little or no control, potentially impacting how they will approach shared problems as democratic citizens. I conclude by outlining some suggestions for aligning educational AI with a pragmatist notion of democracy and democratic education, and by connecting the contemporary trends in educational AI to wider, historical debates surrounding educational technology.

References

Adel, Amr, Ali Ahsan, and Claire Davison. ‘ChatGPT Promises and Challenges in Education: Computational and Ethical Perspectives’. Education Sciences 14, no. 8 (August 2024): 814. https://doi.org/10.3390/educsci14080814.

Bergviken Rensfeldt, Annika, and Lina Rahm. ‘Automating Teacher Work? A History of the Politics of Automation and Artificial Intelligence in Education’. Postdigital Science and Education, 2023. https://doi.org/10.1007/s42438-022-00344-x.

Dewey, John. The Child and the Curriculum: And The School and Society. University of Chicago Press, 1956.

Dewey, John. Democracy and Education. Gorham, Me: Myers Education Press, 2018.

Holmes, Wayne, and Ilkka Tuomi. ‘State of the Art and Practice in AI in Education’. European Journal of Education 57, no. 4 (2022): 542–70. https://doi.org/10.1111/ejed.12533.

Kamalov, Firuz, David Santandreu Calonge, and Ikhlaas Gurrib. ‘New Era of Artificial Intelligence in Education: Towards a Sustainable Multifaceted Revolution’. Sustainability 15, no. 16 (January 2023): 12451. https://doi.org/10.3390/su151612451.

Khan, Salman. Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing). New York: Viking, 2024.

Kucirkova, Natalia, and Sandra Leaton Gray. ‘Beyond Personalization: Embracing Democratic Learning Within Artificially Intelligent Systems’. Educational Theory 73, no. 4 (2023): 469–89. https://doi.org/10.1111/edth.12590

Watters, Audrey. Teaching Machines: The History of Personalized Learning. The MIT Press, 2021. https://doi.org/10.7551/mitpress/12262.001.0001.



AI in primary and secondary education: Sphere transgressions and value disruptions based on a scoping review of policy documents

Yuri Gawein Toussaan Tax1, Marthe Stevens1, Tamar Sharon2, Femke Takes1

1Nationaal Onderwijslab AI, Netherlands, The; 2Radboud Universiteit

AI is expected to play a prominent role in primary and secondary education. Both the promise of solving persistent educational problems and the potential risks are widely acknowledged (Holmes 2022; UNESCO 2023). In this presentation we provide results of a scoping review of European and Dutch policy documents on AI in K-12 education. The aim of this research was two-fold: firstly, to identify the educational values and practices most affected by AI technology and, secondly, to provide insight into the types of value-disruptions (Swierstra and Vermaas 2022) that take place in education.

Using the sphere transgression framework (Sharon 2021a; 2021b; Stevens, Kraajieveld, and Sharon 2024) we argue that the introduction of AI into education brings with it values from outside the educational sphere – such as efficiency, control and personalization, which may potentially disrupt the long-standing norms, values and practices foundational to education – such as personal development, pedagogical autonomy and inclusion – thus potentially reshaping the education sphere. Using this framework, we were able to identify three types of value-disruptions triggered by the introduction of AI in the educational sphere. Firstly, explicit value conflicts: conflicts identified by the authors of the policy documents: efficiency vs. pedagogical autonomy, personalization vs. socialization, and standardization vs. inclusivity. Secondly, tacit value conflicts: conflicts that are not explicitly identified as such in the documents but that can be expected to emerge according to the documents: standardization vs personal development and personalization vs. democracy. Thirdly, value redefinitions, i.e., the reinterpretation of thick educational values into thinner versions that align better with the capacities of AI: a redefinition of “inclusivity” as “digital accessibility” and “personalized learning” without “personal development”.

Our findings provide a novel insight into how primary and secondary education is influenced by AI. We conclude by arguing that the value-disruptions we have identified are not exhaustive, and that further research into the broad landscape of value-disruptions through sphere-transgression theory, as well as in-depth analyses of particular value-disruptions and their consequences, is urgently warranted in order to develop AI that supports educational values and practices.

 
4:50pm - 5:20pmCoffee & Tea break
Location: Voorhof
5:20pm - 6:20pmKeynote 4 - Robert Rosenberger - Sartre's letter opener and the hard problem in the philosophy of technology
Location: Blauwe Zaal
Session Chair: Gunter Bombaerts
6:20pm - 6:50pmAwards presentation
Location: Blauwe Zaal
7:00pm - 9:30pmConference dinner
Location: Markthal
Date: Saturday, 28/June/2025
8:15am - 8:45amRegistration
Location: Voorhof
8:45am - 9:45am(Symposium) Third wave continental philosophy of technology
Location: Blauwe Zaal
 

Third wave continental philosophy of technology

Chair(s): Pieter Lemmens (Radboud University, Netherlands, The), Vincent Blok (Wageningen University), Hub Zwart (Erasmus University), Yuk Hui (Erasmus University)

Since its first emergence in the late nineteenth century (starting with Marx, Ure, Reuleaux and Kapp and coming of age throughout the twentieth century via a wide variety of authors such as Dessauer, Spengler, Gehlen, Plessner, the Jünger brothers, Heidegger, Bense, Anders, Günther, Simondon, Ellul and Hottois), philosophy of technology has predominantly sought to think ‘Technology with a capital T’ in a more or less ‘metaphysical’ or ‘transcendentalist’ fashion or as part of a philosophical anthropology.

After its establishment as an academic discipline in its own right from the early 1970’s onwards, philosophy of technology divided itself roughly into two different approaches, the so-called ‘engineering’ approach on the one hand and the so-called ‘humanities’ or ‘hermeneutic’ approach on the other (Mitcham 1994).

Within this latter approach, the transcendentalist framework remained most influential until the early 1990’s, when American (Ihde) and Dutch philosophers of technology (Verbeek) initiated the so-called ‘empirical turn’, which basically criticized all macro-scale or high-altitude and more ontological theorizations of technology such as Heidegger’s Enframing and Ellul’s Technological Imperative as inadequate and obsolete and instead proposed an explicit move toward micro-scale and low-altitude, i.e., empirical analyses of specific technical artefacts in concrete use contexts (Achterhuis 2001).

From the 2010’s onwards, this empirical approach has been reproached for obfuscating the broader politico-economic and ontological ambiance. Particularly European philosophers of technology expressed renewed interest in the older continentalist approaches and argued for a rehabilitation of the transcendental or ontological (as well as systemic) question of technology (Zwier, Blok & Lemmens 2016, Zwart 2021), for instance in the sense of the technosphere as planetary technical system responsible for ushering in the Anthropocene or Technocene (Cera 2023), forcing philosophy of technology to think technology big again (Lemmens 2021) and calling not only for a ‘political turn’ (Romele 2021) but also for a ‘terrestrial turn’ in the philosophy of technology (Lemmens, Blok & Zwier 2017).

Under the influence of, among others, Stiegler’s approach to the question of technics (Stiegler 2001), Hui’s concepts of cosmotechnics and technodiversity (Hui 2016) and Blok’s concept of ‘world-constitutive technics’ (Blok 2023), we are currently witnessing the emergence of what may be called a ‘third wave’ in philosophy of technology which intends, in dialectical fashion, to surpass the opposition between transcendental and empirical, and instead engages in combining more fundamental approaches to technology and its transformative, disruptive and world-shaping power with analyses of its more concrete (symptomatic) manifestations.

This symposium aims to open a debate among authors exemplifying this third wave, with a view to the contemporary intimate technological revolution, specifically focusing on the themes technology and human identity, human nature, agency and autonomy, artificial intelligence, robots and social media, and the environment and sustainability.

 

Presentations of the Symposium

 

The ‘third wave’ of philosophical questioning: on concepts coming into action (Technology and being there)

Hub Zwart
Erasmus University

In an entry entitled “The Tyrants of the Spirit” (Daybreak, § 547), Friedrich Nietzsche confronted the “tyrannical” philosophical thinker, who aspires to capture the condition of the world in a single word and who regards asking small questions as contemptible, with the modus operandi of modern scientific research, conducted in a selfless, hands-on and collaborative fashion. Perhaps the empirical turn in philosophy of technology, which aimed to move away from “high-altitude” theorizations of technology towards micro-scale empirical analyses of specific technical artefacts, was an effort to make philosophy of technology less “tyrannical” and more collaborative. Something was lost along the way, however: the global political ambiance of technoscientific change and the entanglement of technological innovations with practices of power were obfuscated.

I will explore whether this opposition can be superseded. On the one hand, for philosophers, proximity is important: being there, entering the scenes and sites where technoscientific change becomes tangible and concrete, probing the philosophical dimension of decisive developments in dialogue. At the same time, it is the vocation of philosophers to zoom out, coming to terms with technoscience by seeing concrete technoscientific applications as exemplifications of global disruptive processes of change, as concrete universals, and as the realisation / actualisation of fundamental concepts (fundamental answers to the question what is being, e.g., ‘everything is information’) which are brought into action and called into question. Thus, coming to terms with technology requires us to supersede the divide between theoretical and practical dimensions of philosophical enquiry.

 

«Die Frage nach der Technik» as «Die Frage nach der Philosophie»

Agostino Cera
University of Ferrara

Behind/within every interpretation of technology lies a Weltanschauung, which is simultaneously a Menschenanschauung, and thus a vision of philosophy itself: an interpretation of it as a form of knowledge (or even a form of life).

Starting from this premise, I would like to highlight what I consider the current epistemic crisis in the philosophy of technology, a concern that justifies my staunch defense of a continental/transcendental approach. In my view the ultimate price (the true legacy) of the “empirical turn” might be the self-suppression of the philosophy of technology as philosophy, in the sense of a negation of the epistemic difference of philosophy as a form of knowledge.

Often, what is superficially called inter- or trans-disciplinarity hides a more or less explicit process of epistemic colonization (or submission), whereby a philosophy in a state of minority – no longer believing in itself – seeks its own legitimacy in the realm of the hard sciences, adopting (imitating) their methods and paradigms, primarily the solutionist one. This self-suppression of philosophy’s epistemic difference, this covert cupio dissolvi, manifests in the current metamorphosis of the philosophy of technology into a “problem-solving activity”.

For several years, I have sought to give a coherent shape to this concern through a critical historicization of the newest philosophy of technology, that is, from the empirical turn to postphenomenology. This work culminates in what I have called the ontophobic turn: the current panic-stricken reluctance, on the part of much of the philosophy of technology, to name “Technology” (in the singular and with a capital T); its growing intolerance of anything that cannot be reduced to the problem-solution schema. This kind of critique was already formulated by Franco Volpi in the early 2000s, when discussing the difference between “philosophies in the nominative and genitive cases”.

By analogy, one could draw a parallel with the cyclical patterns in the history of philosophy. My impression is that postphenomenology relates to the philosophy of technology in a way similar to how Neo-Kantianism related to classical German philosophy (Kantianism and Idealism). At the beginning of the 20th century, the Baden and Marburg schools expressed the conviction that the era of philosophy’s epistemic difference and autonomy had ended, and that philosophy could find a new legitimacy only as epistemology, that is, as an ancilla scientiae: a theoretical guarantor for the empirical activities of the sciences and their successes. Mutatis mutandis, postphenomenology in its fully developed form (i.e. in its second- and third generation) resembles, in its own way, a declaration of distrust in philosophy itself, accompanied by an attempt to rebrand itself as an ancilla technologiae, serving technological progress as its theoretical and especially rhetorical-ideological support. Reduced to their essence, many examples of contemporary philosophy of technology are exercises in apologetics and justificationism, aimed at reassuring and placating public opinion to perpetuate the status quo with a clear conscience. Hence its present balsamic function, its lubricating vocation.

In my view, the solutionism/ontophobic turn as the benchmark of this epistemic crisis in philosophy is not the direct and intended product of the work of first-generation postphenomenology, but a subsequent outcome of it. A kind of collateral damage, stemming from its own success. Like some of his peers (Borgmann, Feenberg, Winner ...), Don Ihde claimed the legitimate right of a new generation of scholars to tread their own path, escaping the shadow cone of the “founding fathers.” To this end, he engaged in a consciously sharp critique of figures like Heidegger and Husserl, while acknowledging the importance of their work. Over time, however, with the advent of second-generation (Verbeek) and now third-generation postphenomenologists (Rosenberger), Ihde’s work has been transformed into a damnatio memoriae, a rejection of that burdensome past. The implicit message has become: “The classical philosophy of technology is a metaphysical ballast we can finally shed by ignoring it”. Indeed, this “historical ignorance” – this lack of awareness in complete good conscience – represents one of the distinguishing features of the latest generation of scholars in the philosophy of technology.

From this observation arises a call for the recovery of a “Philosophy of Technology in the Nominative Case” – a “countermovement” to the current inertia, whose first step involves reclaiming the historical dimension of the philosophy of technology on multiple levels. This entails not only knowledge of its roots (authors and currents that initiated this line of inquiry) but increasingly a historicization of its own experience: the necessity for the philosophy of technology to reconstruct its history and become conscious of its path.

In a context like the Anthropocene, where “human activities [i.e., our technological agency] have become so pervasive and profound that they rival the great forces of Nature and are pushing the Earth into planetary terra incognita” (Steffen & Crutzen), technology stands out as the current subject of both “history” (according to Anders) and nature itself, thus becoming an “integral epochal phenomenon”. It follows that today philosophy of technology is philosophy of history, the current form of philosophy of history. This makes it the most exposed and most advanced line in the entire philosophical field, which is to say that what is happening today to the philosophy of technology is something that is happening – or will happen shortly – to philosophy tout court.

Based on these assumptions, I think that what the philosophy of technology needs here and now is not yet another “turn” (terrestrial, political, ethical…) but rather a “return”, understood as the pride of reclaiming, cultivating, and defending its epistemic difference. That is to say, to stop being ashamed of what it authentically is. This is obviously a huge issue, which, if I had to reduce it to a formula, I could not express any better than Heidegger, who circumscribes an enclave impregnable to any problem-solving when he states that “Fragen ist die Frömmigkeit des Denkens” (questioning is the piety of thought). It is exactly this enclave that philosophy is called upon to defend today.

 

Heidegger and the limits of the empirical turn

Matheus Ferreira de Barros
Pontifical Catholic University of Rio de Janeiro

The philosophy of technology, as a relevant theoretical field in contemporary philosophy, has a history that dates back to the classic philosophers of technology, as well as to the subsequent movement known as the empirical turn. However, as discussed in the field’s specialized literature, several impasses currently challenge the objectives of the empirical turn, such as its underlying political-economic premises and the question of the Anthropocene. These impasses are particularly evident when considering technological phenomena from a planetary perspective, which makes it difficult nowadays to conceive of a philosophical inquiry exclusively focused on analyzing technical objects and their local usage contexts.

These perspectives, therefore, raise some questions related to the history of the philosophy of technology itself, such as: How can we face these challenges? Do we need another kind of “turn” in the philosophy of technology to confront them? Would it be left to us to “overcome” the empirical turn, just as it sought to overcome the classical philosophy of technology? We will then critically engage with this internal movement of linear progression that lies implicitly in the empirical turn. Consequently, the confrontation with “tradition” and its “destruction” to pave the way for new philosophical perspectives on technology is a central question for us. The metaphysical assumptions of the empirical turn as a non-foundationalist perspective lead us to interpret it through the conceptual framework of a philosopher acknowledged for his original and insightful reading of the history of metaphysics—Martin Heidegger.

Therefore, this paper aims to explore this historical-philosophical trajectory, beginning with a mapping of the current debate about the limits of the empirical turn. Then, we will use Heidegger’s own philosophical perspective to analyse what a “turn” would mean, drawing mainly on the concept of the overcoming of metaphysics (Überwindung der Metaphysik). With this development, we will provide a series of reflections, informed by a Heideggerian background, on the opposition between the transcendental and the empirical that runs through the history of the philosophy of technology.

 
8:45am - 9:45am(Symposium) The History of the Philosophy of Technology: The German tradition
Location: Auditorium 15
 

The History of the Philosophy of Technology: The German tradition

Chair(s): Darryl Cressman (Maastricht University)

The History of the Philosophy of Technology posits the philosophy of technology as a wide-ranging and comprehensive field of study that includes both the philosophical study of particular technologies and the different ways that technology, more broadly, has been considered philosophically. Influenced by the history of the philosophy of science, the history of ideas, and the history of the humanities, our aim is to examine how different individuals and traditions have thought about technology historically. This includes, but is not limited to: the work of different thinkers throughout history, both well-known and overlooked figures and narratives, including non-western traditions and narratives that engage with technology; analyzing the cultural, social, political, and sociotechnical contexts that have shaped philosophical responses to technology, including historical responses to new and emerging technologies; exploring the disciplines and intellectual traditions whose impacts can be traced across different philosophies of technology, including Science and Technology Studies (STS), the history of technology, critical theory, phenomenology, feminist philosophy, hermeneutics, and ecology, to name only a few; histories of different "schools" of philosophical thought about technology, for example French philosophy of technology, Japanese philosophy of technology, and Dutch philosophy of technology; mapping the hidden philosophies of technology in the work of philosophers (e.g. Foucault, Arendt, Sloterdijk) and traditions whose work is not often associated with technology (e.g. German idealism, logical empiricism, existentialism, lebensphilosophie); and, exploring the contributions of literature, art, design theory, architecture, and media theory/history towards a philosophy of technology.

This panel focuses on the German tradition within the philosophy of technology. Perhaps best associated with both the Heideggerian/phenomenological tradition and the Marxist/critical theory tradition, German-speaking philosophers have made significant contributions across the history of the philosophy of technology.

 

Presentations of the Symposium

 

Heidegger, "the intimate technology revolution," and AI

Natalie Nenadic
University of Kentucky

The unprecedented power of today’s technology is making the detrimental facets of technology’s permeation into our lives exceptionally visible. Through addictive social media, the fact that technology is standing in for social relations, with harmful consequences for real ones and mental health, becomes especially perspicuous (Haidt, 2024). Technology’s reduction of us to extractable data (Zuboff, 2019), parlayed into intimate emotional recognition and exploitation of us (Wylie, 2019) and mimicry of those emotions through AI, makes technology’s takeover of our bodies and psychology especially stark (Roose, 2024; Cahn, 2024).

This condition has been interpreted as an “intimate technological revolution” and, as such, a development that confronts us with novel vulnerabilities and risks to our well-being, including to our personal identity and relations in the world. This “revolution,” in turn, poses new existential and ethical questions for philosophers. They center on better understanding this condition through novel conceptual frameworks to aid us in responsibly handling it.

I argue that this interpretation rests on some confusion, where sorting through that confusion will aid in effectively confronting these detrimental features of technology. For this interpretation appears to conflate what today’s technology is capable of bringing into sharper relief and making more widely visible with a fundamentally new technological phenomenon, indeed a “revolution,” as if canonical thinkers (e.g., Heidegger, Arendt) haven’t already identified and treated it in foundational ways. This conflation is largely a result of insufficiently considering insights from the history of the philosophy of technology, in particular Heidegger’s analysis of modern technology, which ontologically distinguishes it from pre-modern technology and tools. His analysis constitutes foundational thinking about “the intimate technological revolution.”

Certainly, AI-centered technology is new. Accordingly, so are the specifics of how this technology permeates our lives and the current extent of that permeation. However, what is not new is a paradigm shift in our understanding of modern technology as permeating and taking over our lives in a manner that places the most intimate aspects of our humanity at existential risk, indeed constituting a “supreme danger” (Heidegger, 1977, 1954; Heidegger, 2012).

I explicate Heidegger’s analysis of “the intimate technological revolution,” where he shows that human freedom, hence our humanity, is most at risk through technology’s ubiquitous, imperceptible, and unprecedented capacity to alienate human beings from life. He posited this philosophy in the midst of modern technology’s relative inception over seventy years ago, when these features were much harder to notice. The power of today’s technology has borne out his analysis that technology would press along this trajectory, with deeper “intimate” detriment, now making it harder not to notice these features of technology.

Understanding Heidegger’s foundational analysis brings powerful conceptual resources to the much-needed task of charting the genuinely novel concepts that navigating today’s manifestations of “the intimate technological revolution” demands. For this understanding keeps us from misinterpreting these manifestations as the decisive moment of this “revolution.” It equips us to know the difference between novel concepts and claims that “reinvent the wheel” in a manner that scratches the surface of the original.

Bibliography

Cahn, A. F. (2024, December 19). An Autistic Teenager Fell Hard for a Chatbot. The New York Times.

Haidt, J. (2024). The Anxious Generation: How the Great Rewiring of Childhood is Causing an Epidemic of Mental Illness. New York: Penguin.

Heidegger, M. (1977, 1954). The Question Concerning Technology and Other Essays. New York: Harper Torchbooks.

Heidegger, M. (2012). Bremen and Freiburg Lectures: Insight Into That Which Is and Basic Principles of Thinking. Bloomington: Indiana University Press.

Roose, K. (2024, October 24). Can AI Be Blamed for a Teen's Suicide? The New York Times.

Wylie, C. (2019). Mindf*ck: Cambridge Analytica and the Plot to Break America. New York: Random House.

Zuboff, S. (2019). The Age of Surveillance Capitalism. London: Profile Books.

 

Simondon, Heidegger, and the digitalization of farming

Mariska Bosschaert
Wageningen University

Heidegger theorized that we understand the world in which we live and act differently in different eras. For instance, humans used to assign various meanings to trees, but now trees are understood as a resource to store CO2. To study the world in which we live and act, Heidegger used language. He did not reflect on concrete technologies, for he argued that concrete technologies did not affect this world. However, various concrete technologies have been part of our changing understanding of the world, such as the clock and the electric telegraph.

The aim of this paper is to understand how digitalization is changing our understanding of farming. However, before being able to reflect on farming, it is first important to develop a methodology that enables us to study the world in which we live and act in a way that includes the role of concrete technologies. It is not easy to reflect on the world in which we live and act, for we cannot step outside of this world. It is about questioning what is self-evident to us. To develop such a methodology, Simondon’s evolutionary perspective on technology will be used. Simondon’s theory puts concrete technologies in the perspective of a broader evolutionary process, which enables us to question what has changed in the course of this process and subsequently to question what has become self-evident to us.

After developing this methodology, it can be used to explore how digitalization is currently changing our understanding of farming. Digital technologies are increasingly integrated into farming, a development known as precision farming. Precision farming refers to the approach in which all kinds of digital technologies together should enable farmers to precisely address the needs of individual crops and animals so that they can increase efficiency, thereby boosting productivity and mitigating climate change impacts. Consequently, farmers increasingly understand farming through data. For instance, in the ideal of precision farming, cows are no longer understood as the physical animals but primarily as their digital representations, so that the data can show what their exact needs are. This shift raises important questions about the relationship between farmers and their crops and animals. The central question of this paper therefore is: How is the datafication of agriculture changing our understanding of farming?

 
8:45am - 9:45am(Symposium) John Dewey and philosophy of technology: bridging the ethical, epistemic and political
Location: Auditorium 13
 

John Dewey and philosophy of technology: bridging the ethical, epistemic and political

Chair(s): Michał Wieczorek (Dublin City University, Ireland)

This symposium seeks to provide an opportunity for philosophers interested in pragmatism to reflect on the applicability of John Dewey’s (1929; 1957; 2008; 2016) ideas to today’s technological landscape. Dewey has long been recognized as an original philosopher of technology (Hickman, 1992; 2001) and he has been an influential figure in the early days of philosophy of technology, notably shaping the views of thinkers such as Don Ihde or Carl Mitcham. However, he has not been widely referenced in contemporary philosophical works dealing with technology – at least, until very recently. The last few years have brought an increasing interest in the application of Dewey’s philosophical ideas to today’s technologies, with authors discussing how data-intensive technologies affect users’ behaviour (Gerlek & Weydner-Volkmann 2022; Wieczorek, 2024), analysing how technologies change collectively held values (van de Poel & Kudina, 2022) and assessing the impact of technology on democracy and political deliberation (Coeckelbergh, 2024).

We argue that pragmatist approaches can provide further novel insights for our field and respond to contemporary developments in philosophy of technology. In recent years, philosophers have devoted attention to the epistemic, ethical and political impacts of new technologies, but Dewey’s pragmatism provides opportunities to integrate such considerations instead of studying them in isolation (Medina, 2013). For Dewey, the ways in which we create knowledge and use our tools is intimately connected to individual ethical considerations, collective deliberation on our values, or the power relations and institutions we rely on to distribute the burdens and benefits entailed by our ways of living together. Consequently, each of the authors participating in the symposium demonstrates different ways in which Dewey’s philosophy can help us analyze how contemporary technologies affect our epistemic practices, normative positions and political participation. Throughout the panel, authors will apply Dewey’s ideas to different technologies (e.g., generative AI, social media platforms) and consider their implications at the individual, interpersonal and political level. A subsequent discussion will ask the panel participants and audience members to discuss the reasons for the renewed interest in Dewey’s philosophy among philosophers of technology and propose suggestions for further philosophical problems posed by contemporary tools that are particularly well-suited for a pragmatist analysis. Consequently, the symposium will offer an opportunity to forge connections between (Deweyan) pragmatism and other strands in contemporary philosophy of technology and to establish a research agenda that will help navigate the complexities of today’s technological landscape.

References

Coeckelbergh, Mark. ‘Democracy as Communication: Towards a Normative Framework for Evaluating Digital Technologies’. Contemporary Pragmatism 21, no. 2 (31 July 2024): 217–35. https://doi.org/10.1163/18758185-bja10088.

Dewey, John. Experience and Nature. London: George Allen & Unwin, Ltd., 1929.

Dewey, John. Human Nature and Conduct. New York: Random House, 1957.

Dewey, John. The Later Works, 1925-1953, Volume 7: 1932 Ethics. Edited by Jo Ann Boydston. The Later Works, 1925-1953 7. Carbondale (Ill.): Southern Illinois University Press, 2008.

Dewey, John. The Public and Its Problems: An Essay in Political Inquiry. Edited by Melvin L. Rogers. Athens, Ohio: Swallow Press, 2016.

Gerlek, Selin, and Sebastian Weydner-Volkmann. ‘Self-Tracking and Habitualization. (Post)-Phenomenological and Pragmatist Perspectives on Reflecting Habits with the Help of Digital Technologies’. Von Menschen Und Maschinen: Mensch-Maschine-Interaktionen in Digitalen Kulturen, 2022, 138–51. https://doi.org/10.57813/20220623-152405-0.

Medina, José. The Epistemology of Resistance: Gender and Racial Oppression, Epistemic Injustice, and Resistant Imaginations. Studies in Feminist Philosophy. Oxford ; New York: Oxford University Press, 2013.

Hickman, Larry A. John Dewey’s Pragmatic Technology. 1. Midland Book ed., 2. [print.]. The Indiana Series in the Philosophy of Technology. Bloomington: Indiana University Press, 1992.

Hickman, Larry A. Philosophical Tools for Technological Culture: Putting Pragmatism to Work. The Indiana Series in the Philosophy of Technology. Bloomington: Indiana University Press, 2001.

Hickman, Larry A. ‘Postphenomenology and Pragmatism: Closer Than You Might Think?’ Techné: Research in Philosophy and Technology 12, no. 2 (2008): 6.

van de Poel, Ibo, and Olya Kudina. ‘Understanding Technology-Induced Value Change: A Pragmatist Proposal’. Philosophy & Technology 35, no. 2 (June 2022): 40. https://doi.org/10.1007/s13347-022-00520-8.

 

Presentations of the Symposium

 

Intelligent writing habits: a Deweyan take on the postphenomenology of generative AI

Sebastian Weydner-Volkmann
Ruhr-University Bochum

Postphenomenology is closely related to John Dewey’s Pragmatism. It is, “in effect, precisely a pragmatic phenomenology” (Ihde 2016: 106), i.e. one that integrates Dewey’s anti-foundationalism, anti-essentialism, his skepticism towards universals and their replacement with a means-ends-continuum, his functionalism and his perspectivism (Hickman 2008: 100). Something that is notably missing in this list is Dewey’s work on overcoming a dualist conception of body and mind and the important role that the intelligent transformation of habits plays in his philosophy.

Ihde and most postphenomenologists that follow him have traditionally focused on the mediating function of technologies as part of the “I–Technology–World” relation and, drawing from Merleau-Ponty, the role that habit and embodiment plays in this. “Postphenomenology examines concrete instances of human-technology interrelations, and an early discovery is that each technology engaged calls for nuanced embodiment skills” (Ihde & Kaminski 2020: 274). Robert Rosenberger’s concept of “abstract relational strategies” builds on that and describes how skills and deeply-sedimented habits of using one technology may be productively transferred to other technologies.

Dewey’s philosophy, on the other hand, emphasizes that the use of technology should be understood and evaluated in continuation with human practices and the formation of habits: technology use is part of the means-ends-continuum that, in Deweys ethics, characterizes human dealings with problematic situations in general; as such, it is an important aspect of our co-evolution with our environment. Here, Dewey proposes a much more active, transformational perspective on habits: in his ethics the conscious transformation of passive, potentially dysfunctional habits towards situatively functional, “intelligent” habits is central to forming our own character. This makes Dewey’s perspective on technology use highly intimate, as he raises the questions of how we can use technology to influence our character for the better, i.e. for personal growth.

This makes it worthwhile to revisit the Deweyan roots of postphenomenology. I will show this using the example of writing with generative AI tools. Here, Dewey’s ideas about intelligent habit formation can provide an evaluative perspective that complements postphenomenology’s description of what Ihde called the “latent telic inclinations” of technology use. In this sense, using ChatGPT as a writing tool raises questions about what character we aim to give ourselves.

References

Hickman, Larry A. 2008. “Postphenomenology and Pragmatism: Closer Than You Might Think?” Techné: Research in Philosophy and Technology 12 (2): 6.

Ihde, Don. 2016. Husserl’s Missing Technologies. First edition. Perspectives in Continental Philosophy. New York: Fordham University Press.

Ihde, Don, and Andreas Kaminski. 2020. “What Is Postphenomenological Philosophy of Technology?” In Jahrbuch Technikphilosophie. Autonomie Und Unheimlichkeit, edited by Alexander Friedrich, Petra Gehring, Christoph Hubig, Andreas Kaminski, and Alfred Nordmann, 259–88. Nomos Verlagsgesellschaft mbH & Co. KG. https://doi.org/10.5771/9783748904861-259.

 

Dewey and the interaction between technology and morality

Ibo van de Poel
TU Delft

In my contribution to the panel, I will explore how Dewey’s (moral) philosophy and particularly his notion of experimentation can inform an account of the interaction between technology and morality. Experimentation plays an important role in Dewey’s philosophy. He believes that the ultimate test of beliefs, be they scientific, technological, or moral, is their application in practice. This means that both processes of technological development and of moral inquiry can be seen as experimental. Both are based on certain hypotheses about what will work or about what is morally good that need to be tested in practice through experimentation.

For technological development, this means that only by putting technologies into practice will we know whether they work and what their ultimate social consequences are. This makes technological development an iterative search process. Interestingly, Dewey makes similar claims about morality. Moral prescripts or values are not eternally given but are responses to morally problematic situations, and they might help to solve these, or not. To know whether they do so requires experimentation, and the outcomes of such experiments might lead to new or better values or moral prescripts.

Taken together, these ideas suggest the following interaction between technological development and morality. Technological development, or at least responsible technological development, aims at the development of new technologies that do not just meet human needs, but also address existing moral problems. However, when put in practice, they might fail to do so or give rise to new moral problems. These new moral problems might give rise to new or changing values for technological development. Consequently, technological development might lead to changes in morality and these changes in morality in turn may lead to new technological developments. So conceived, technology and morality develop in constant interaction influencing each other’s course.

 

Democracy as communication: democracy from a Deweyan perspective and its implications for evaluating technology

Mark Coeckelbergh
University of Vienna

Are current digital technologies supporting democracy? Answering that question depends, among other things, on what is meant by democracy. This article mobilizes a communicative conception of democracy. While it is generally accepted that communication is important for democracy, there are directions in democratic theory that understand communication not as merely instrumental but as central to what democracy is and should be. Inspired by Dewey, Habermas, and Young, this paper articulates a conception of democracy as communication. It is then argued that this "deep-communicative" ideal of democracy, together with the usual ethical and epistemic norms of communication as sketched by O'Neill, offers a tentative normative framework for evaluating digital technologies in relation to democracy.

We should assess technologies' compatibility with and support for the communicative dimension of democracy based on the extent to which they 1) enable us to identify the originator of communication and the epistemic norms and standards held by them; 2) do not encourage lies, deception, and sloppiness with regard to accuracy and evidence; 3) do not promote bullying, manipulation, plagiarism, and other violations; 4) foster a communicative climate in which virtues such as honesty, civility, openness, tolerance, and trustworthiness can flourish; 5) promote culture-specific and religion-specific norms and means of expression, as applicable to the given context; and 6) encourage patience and open-mindedness in listening to particular personal and communal narratives and passionate expressions. I argue that interventions in the development, use, and regulation of our digital technologies, and an appropriate kind of education of citizens, are required to ensure that digital technologies become communication technologies in this richer sense of the word: not just instruments to transfer and share data and information (to which the usual norms of communication should then be applied), but technologies that support the establishment, growth, and maintenance of democratic forms of life.

 

AI, the public, and its problems: a Deweyan perspective

Olya Kudina
TU Delft

This presentation explores the intersection of John Dewey's philosophical insights and the contemporary challenges posed by artificial intelligence (AI), particularly through his seminal works Experience and Nature and The Public and Its Problems. Several specific points from Dewey's scholarship will be helpful here.

First, his account of experimental inquiry into morally problematic situations, which positions values as preliminary responses to such situations in interaction with the social and material environment. Think of how the value of deliberation in elections takes shape in interaction with people, social media platforms, and algorithmically curated content. Second, I will draw on Dewey's inquiry into democratic processes and the role of technology in creating publics by shaping communal interests and shared knowledge practices. Drawing on the same example, consider how the practices of AI-facilitated microtargeting on social media bring together groups of people aligned with specific political content, boosting the visibility of this targeted content vis-à-vis alternative discourses.

Jointly, this will provide a framework for understanding AI's impact on society. I will propose that this impact can be understood as at least twofold: AI mediates both the content of public discourse and the material means that facilitate access to information and public dialogue. I will wrap up by continuing to draw on Dewey and suggesting how AI can not only disrupt but also enhance collective experiences and advance public discourse, for example by fostering inclusive dialogue and critical engagement with diverse perspectives on social media.

 
8:45am - 9:45am(Symposium) In search of legitimation: the dynamic tensions in the regulation of privacy and data rights in Vietnam
Location: Auditorium 12
 

In Search of Legitimation: The Dynamic Tensions in the Regulation of Privacy and Data Rights in Vietnam

Chair(s): Toan Le (Swinburne University of Technology/University of Economics, Ho Chi Minh City, Vietnam)

Writing about a so-called ‘authoritarian privacy’ regime in China, a modern-day socialist regime in East Asia, Professor Mark Jia argued that

'China’s turn to privacy law should be understood more centrally as a story of popular legitimation. The party-state’s turn to privacy law reflects a kind of authoritarian responsiveness, an effort to co-opt privacy by framing the party-state as the principal defender of privacy rights'.

Professor Jia's writing sent shockwaves through privacy scholarship, primarily due to his explanation of how a state's recognition of citizens' privacy rights and legal protection can occur even in an authoritarian system. Equally important, however, is his conception of 'popular legitimation': the socialist party-state utilizing privacy protection regimes as a form of justification, portraying itself as the ultimate defender of citizens' privacy while secretly facilitating subtle digital abuses under that discourse. Such legitimation is perilous, as it distorts the universal understanding of privacy as a human right, enticing citizens to accept the state's definition (through both discourses and actions) of privacy and undermining the values protected by universal privacy norms.

The concept of ‘legitimation’ aligns with Max Weber’s ideas about legitimacy and authority in the context of law. Weber suggests that state actions, as defined by law, gain legitimacy from the law itself, irrespective of its content, provided the law is made rationally. Legitimacy stems from the rational authority of the law, and based on this legitimacy, the state asserts control over society. As long as the law is supported by an established ideology, the state can wield authority grounded in legitimacy.

Four years before Professor Jia’s essay, Dr. Emmanuel Pernot-Leplay published an article warning about the rise of a ‘third approach’ to privacy law, separate from the American and continental European models. As Professor Jia’s writing confirms, this approach centralizes privacy enforcement and supervisory authorities under a single party-state system, leaving a high risk of totalitarian control and abuse.

What is interesting is that China is not the only player in the legitimation game. What is at issue is that this 'approach' is gradually being mirrored in a neighboring socialist party-state: Vietnam.

Digital transformation is occurring at a dizzying pace in Vietnam under the direction of the Party-State. As much as the digital revolution offers opportunities to citizens, it also poses risks of abuse. The need to regulate data and privacy rights in light of the digital abuses occurring in society is justifiable, and the Party-State in Vietnam has taken swift action.

A new national identity system that combines fingerprint and face recognition technologies to enhance security, and a directive to private and state-owned banks to collect highly sensitive data and provide the State with access to such data, are just two of the actions currently being implemented. The massive collection of data has been justified on the grounds that the Party-State is making laws to protect citizens and consumers from the threats of digital abuse.

The Party-State insists that the collection of data is required to enhance personal and national security. However, there have been few public consultations, and concerns about privacy have been raised only quietly by some actors. The result is that there are dynamic tensions between state and non-state actors in the regulation of data and privacy rights in Vietnam.

The papers in this panel will explore the approach to regulating data and protecting privacy rights. They will address issues relating to 'data propertization' and privacy rights, the latter of which is itself a contested concept in Vietnamese society. The presenters will focus on the ability of the Party-State to legitimize its laws and regulations and analyze the sources of legitimacy that it draws on. Arguably, such 'legitimation' is not only happening in the area of data protection; it extends to other matters such as crypto-assets and financial regulation. The panel will analyze the complex roles the Party-State is playing in different areas of regulation and its attempts to use law as a basis for furthering its legitimacy in governing Vietnamese society.

 

Presentations of the Symposium

 

Digital transformation and personal autonomy amid authoritarian governance: An analysis of Vietnam's 2024 Data Law

Thiem Hai Bui
Institute for Legal Studies and Legal Aid

As the era of digital transformation accelerates across the world, there are increasing concerns over vulnerabilities and risks for human rights. One of the key concerns centers on personal autonomy in light of the intimate technological revolution. In the context of widespread digital surveillance, constant data collection, and the growing power of tech companies and authoritarian governments vis-à-vis societies, a fundamental human right to personal autonomy and to make informed decisions regarding one's private information has been put at risk. The massive collection of personal data, including sensitive data about health, location, and communications, often without individuals' full consent or awareness of what data is being collected, how it is being used, or who has access to it, raises questions of justice and moral responsibility concerning the control and ownership of data and the ability to make informed decisions in this era. As digital technologies evolve, it becomes increasingly clear that people need stronger frameworks to ensure they have meaningful control over their personal data, as well as access to transparent processes that respect their rights. These issues are reflected and analyzed in the case of Vietnam's Data Law promulgated in 2024.

Keywords: Digital Transformation, Digital Surveillance, Personal Autonomy, Governance, Intimate Technology

 

Everyone is safe now: constructing the meaning of data privacy regulation in Vietnam

Tu Thien Huynh
Monash University/University of Economics, Ho Chi Minh City, Vietnam

This paper explores Vietnam's distinctive approach to data privacy regulation and its implications for the understanding of privacy law. While global data privacy regulations are premised on individual freedom and the integrity of information flows, the recent Vietnamese Decree 13/2023/NĐ-CP on Personal Data Protection (hereinafter the PDPD) prioritizes state oversight and centralized control over information flows to safeguard collective interests and cyberspace security. This new regulatory logic places data privacy under the regulation of government agencies and moves the privacy law arena even further away from the already distant judicial power. This prompts an exploration of the nuances underlying the ways regulators and the regulated communities understand data privacy regulation. The article draws on social constructionist accounts of regulation and discourse analysis to explore the epistemic interaction between regulators and those subject to regulation during the PDPD's drafting period. The process is highlighted by the dynamics between actors within a complex semantic network established by the state's policy initiatives, where tacit assumptions and normative beliefs direct the way actors in various communities favor one type of thinking about data privacy regulation over another. The findings suggest that reforms to privacy laws may not result in "more privacy" for individuals and that divergences in global privacy regulation may not be easily explained by drawing merely on cultural and institutional variances.

Keywords: Data Privacy, Regulatory Communities, Collective Interests, Social Constructionist, Epistemic Interaction

 

Not-Too-Late for Data Propertization in Vietnam: Trends, Blockages and Proposals

Khoi Trong Dao
Faculty of Civil Law, University of Law, Vietnam National University, Hanoi

Data has become an increasingly vital asset in the global wave of digitalization, capturing the attention of enterprises, societies, and nations worldwide. This growing significance has driven the trend of 'data propertization', a process in which governmental policies fine-tune existing legal frameworks to recognize data as a form of property subject to collection, exploitation, and transfer for commercial purposes. While the concept itself is not novel globally, achieving a standardized approach to data propertization remains challenging for various reasons. While the economic value of data is undisputed, debates persist regarding its legal characteristics, including its nature, its ability to be possessed and transferred, and the property rights that can be established over it.

For a developing country like Vietnam, formulating policies and laws that address the complexities of data propertization poses even greater challenges. Vietnam has made efforts to establish a legal framework for data governance, but these efforts primarily emphasize the personal aspects of data rather than its economic and proprietary dimensions. Meanwhile, the Vietnamese property law framework contains several ambiguities, from fundamental legal doctrines and definitions to the system of real rights, and appears ill-prepared to confront the challenges posed by such a difficult-to-define, complex, and interconnected asset as data.

To address these critical issues, this paper examines the conceptualization of 'data' in property law and the concept of data propertization, before exploring the rising trends in data policy and the regulatory frameworks being developed to support the emergence of a data industry in Vietnam. Afterward, the paper delves into the unresolved legal challenges within Vietnam's property law and the laws governing data, highlighting the limitations that hinder the recognition of data as property. Finally, the study proposes a set of general approaches, drawing on international practices and specific local needs, that Vietnam and similar developing countries should take in perfecting their property law in an evolving, data-driven world.

Keywords: Data, Data Propertization, Vietnam, Property law, Data Governance

 

Exploring changes in Vietnam's crypto-assets regulation: networks, nodes, and gravity

Khanh Thuy Le
Swinburne University of Technology, University of Economics Ho Chi Minh City

Vietnam is emerging as a popular destination for crypto-asset investments, both regionally and globally. It consistently ranks among the top five countries in terms of profits generated from cryptocurrencies and similar assets. However, the legal status of crypto-assets in Vietnam remains uncertain. This is not only due to Vietnam's fragmented property framework but also because the state's approach to crypto-assets has fluctuated. This raises the question of what dynamics drive such change. This paper explores the regulation of crypto-assets in Vietnam as a response to their disruptive effects on the existing regulatory framework. Drawing from the theory of network communitarianism, the Vietnamese crypto-assets regulatory regime is imagined as an analytically constructed 'space' constituted by different networks of state and non-state actors. These actors employ different discursive strategies to participate in the policy processes and shape the cognitive and normative assumptions toward crypto-asset regulation. The paper suggests that networked, discursive interactions between state and non-state actors underpin the state's shifting attitude towards crypto-asset regulation, reflecting an internal struggle between different ideas of property, the market, and state regulation of the economy.

Keywords: Legal Characterisation, Crypto-Asset Regulation, Data as Property, Network Communitarianism, Discursive Interactions

 

Promoting innovative technologies and addressing privacy concerns in green finance in Vietnam

Tram Anh Ngoc Nguyen
University of Warwick/University of Economics, Ho Chi Minh City, Vietnam

Sustainable development goals are on the agenda to mitigate the severe effects of climate change and environmental crises on the global economy and society. Green finance is one of the key financial tools for achieving the green growth goals that Vietnam has committed to in the national strategy for the 2021-2030 period, with a vision toward 2050. The promotion of innovative technologies in green finance also raises privacy regulation issues. Vietnam lacks a dedicated legal framework for data protection tailored to address problems in the financial industry, including green finance. This could lead to the misuse of sensitive financial and environmental data. This paper examines the current framework for the marriage of technology and finance toward sustainability goals. Several issues related to privacy, data protection, and liability will be addressed and solutions explored.

Keywords: Green Finance, Environmental Data, Data Protection, Privacy Rights, Legal Framework

 
8:45am - 9:45am(Symposium) Ethical lessons from the second quantum revolution
Location: Auditorium 11
 

Ethical lessons from the second quantum revolution

Chair(s): Benedict Lane (TU Delft, Netherlands, The)

The SPT2025 conference theme, “The Intimate Technological Revolution,” highlights the profound ways emerging technologies are transforming personal, societal, and political landscapes. Quantum Technologies (QT), as a major contemporary frontier of technological innovation, exemplify these transformative dynamics through their implications for individual moral responsibility, national technological sovereignty, international ethical governance, and global security. Thus, the “second quantum revolution” can be viewed as an important contemporaneous counterpart of the “intimate technological revolution” – these parallel revolutions can be seen as mutually reinforcing, both thematically and in their concrete impact on society. As with the intimate technological revolution, the multilayered and deeply interconnected ethical ramifications of QT force us to reevaluate many established ways of thinking ethically about technology, with important lessons to be learned even beyond the context of QT.

This panel examines the socio-politico-ethical challenges posed by QT, and aims to enrich the broader discourse on the ethical impact of technology on society by using QT as a case study through which to explore:

i) the appropriateness of dominant normative frameworks for assessing emerging technologies, such as QT, given existing interdependencies and dynamics of power;

ii) the development of governance structures aimed at anticipating the societal impact of emerging technologies, such as QT, the role of different stakeholders in shaping and assessing such structures, and the interplay between discourses surrounding the governance of different emerging technologies, such as QT and AI;

iii) the connections between the geopolitical strategic implications of emerging technologies, such as QT, and the (potentially irresponsible) escalatory discourse surrounding them;

iv) the roles and responsibilities of engineers and scientists with regard to the ethics and responsible governance of technological innovation and with respect to ongoing changes in the societal mandate for science;

v) the role of inclusive ecosystem design, equitable access to education and careers, and stakeholder engagement in tackling systematic demographic biases in the innovation process.

This panel builds on the success and momentum of an earlier panel, going beyond previous discourse by connecting the ethics of QT to the ethical vulnerabilities, risks, and opportunities inherent in technological innovation. Incorporating insights from applied ethics, political philosophy, and Science and Technology Studies (STS), the panel offers a multidimensional exploration of the ethical implications of QT, bridging theoretical inquiry with practical ethical challenges, and offering insights relevant to engineers, policymakers, and philosophers of technology alike. By emphasizing the multidimensionality and interconnectedness of the ethical issues surrounding QT, the panel raises ethical questions and contributes ethical insights with significant relevance for the intimate technological revolution.

 

Presentations of the Symposium

 

A relational ethics approach to navigate the socio-technical challenges of quantum technologies: Addressing the gaps in Responsible Innovation and Design for Values

María Palacios Barea
Faculty of Technology, Policy, and Management, TU Delft

As a critical technology area, quantum technologies (QT) are shaped by competitive innovation dynamics, where the race to lead often comes at the expense of collaboration and transparency (World Economic Forum, 2022). These dynamics influence not only the trajectory and pace of innovation, but also determine who benefits from advancements and who risks being excluded (World Economic Forum, 2022). Technological governance must account for these broader developments to assess and anticipate QT’s ethical challenges (Coenen et al., 2022).

Responsible Innovation (RI) and Design for Values (DfV) are two approaches that aim to integrate ethical and societal values into technological development (Stilgoe et al., 2013; van den Hoven et al., 2015). While both frameworks offer valuable tools, they have been critiqued for their limitations in addressing the socio-technical complexities of emerging technologies. RI has faced criticism for its insufficient engagement with the political and economic systems that shape innovation, thereby overlooking the fundamental role of power structures (Blok & Lemmens, 2015; van Oudheusden, 2014). Similarly, value-sensitive approaches, such as DfV, have been deemed too narrow in scope, focusing primarily on abstract values without addressing the broader socio-political context in which technologies are developed and deployed (Hagendorff, 2021; Resseguier & Rodrigues, 2020). These critiques underscore the need to explore other normative frameworks that can address the ethical challenges posed by QT’s socio-political landscape.

Relational ethics has emerged as an alternative, and potentially complementary, paradigm for tackling the ethical dimensions of emerging technologies (Albertson et al., 2021; Birhane, 2021; Hagendorff, 2021; Hollanek, 2024; Pavlovic & Hafner Fink, 2023; Szymanski et al., 2021). This approach emphasizes principles such as interdependence, power dynamics, responsiveness, and contextuality, all of which appear relevant for engaging with the socio-technical complexities of QT’s innovation dynamics.

These principles have gained traction in AI ethics, for instance, where value-driven approaches have struggled to address similarly competitive and fast-paced innovation dynamics (Birhane, 2021; Hagendorff, 2021; Hollanek, 2024; Pavlovic & Hafner Fink, 2023). In this context, researchers argue that relational ethics may offer a more holistic lens to evaluate and address the ethical implications of AI, notably by recognizing the interconnectedness of actors and the shifting power dynamics that shape innovation outcomes. Meanwhile, researchers in RI have argued that insights from relational ethics could help bridge existing gaps in current frameworks, providing a more nuanced and contextual approach to governance (Albertson et al., 2021; Szymanski et al., 2021).

Building on these arguments, this research investigates whether the principles of relationality can address the shortcomings of RI and DfV, particularly in the context of QT. To this end, it explores how relational ethics may offer a normative foundation for ethical governance by accounting for the socio-technical complexities and competitive pressures of QT innovation.

 

Infrastructures of responsible quantum technologies

Adrian Schmidt, Zeki C. Seskir
Institut für Technikfolgenabschätzung und Systemanalyse, Karlsruher Institut für Technologie

As quantum technologies (QT) advance, their growing importance in societal, technological, and geopolitical discussions has led to critical debates about their responsible development and deployment. These technologies have the potential to transform multiple sectors but simultaneously raise pressing ethical, legal, and social questions. Key concerns include their potential to widen the digital divide both between countries and within societies, the implications of QT for global governance, and their role in reshaping geopolitical realities. In response to these emerging challenges, an increasing number of organizations, institutions, and collaborative efforts worldwide have begun to establish ethical, legal, and social aspects (ELSA) frameworks and governance structures designed to ensure the responsible advancement of QT. These initiatives not only reflect the international recognition of QT's transformative potential but also signify the efforts to navigate its associated risks.

This talk explores the evolving global infrastructure surrounding responsible QT, focusing on different models and approaches that have been introduced to the literature. Through case studies and analysis of organizational efforts, the presentation will also examine the reception of these frameworks by stakeholders, including governments, research institutions, and the broader public. Furthermore, the talk will delve into how efforts in QT governance are influenced by broader technological shifts, particularly the rise of responsible artificial intelligence (AI). The sudden proliferation of generative AI technologies like ChatGPT has prompted intensified discussions about ethical technology development. By conducting bibliometric analysis and literature reviews, this presentation highlights how the responsible AI discourse intersects with and shapes the evolution of responsible QT frameworks.

Ultimately, this talk emphasizes the necessity of interdisciplinary collaboration and global cooperation to build effective infrastructures for responsible quantum technologies. It will advocate for a comprehensive understanding of how ethical frameworks can anticipate technological advancements and proactively address the potential societal implications of cutting-edge innovations like QT.

 

Revisiting the Security Dilemma in the Context of the Quantum Internet

Sybolt Doorn
Faculty of Technology, Policy, and Management, TU Delft

Since the Quantum Internet (QI) is anticipated to introduce new security-related capabilities, it may directly affect national security interests and expectations. Such expectations are pertinent factors in security dilemmas and are thus important to study for our understanding of the broader ecosystem that supports the emergence of QI. The literature on emerging technologies identifies five key attributes: (i) radical novelty, (ii) relatively rapid growth, (iii) coherence in foundational principles, (iv) significant potential impact, and (v) inherent uncertainty and ambiguity. Due to their novelty, impact, growth, and uncertainty, such technologies are particularly vulnerable to being captured by hype. In the context of QI, I argue that hype discourses about QI capabilities could trigger an early-stage security dilemma and thus tie this technology more closely to national security issues.

Security dilemmas are said to occur between states that can interpret one another's behavior as potentially offensive. Due to the ambiguity and opacity of state actions, and the considerations of worst-case scenarios important for state survival, the interpretation of other states' decisions can create cycles of response in terms of offensive measures. Relating this to QI, the expected advancements in this field, specifically concerning cryptographic improvements, could be interpreted as a potential threat to which responses lead to competitive escalation among states. Based on this theoretical understanding of possible relations on an international scale regarding QI, I aim to draw two key insights: first, that the security dilemma can already manifest in the realm of social expectation and consequently influence developmental trajectories, and second, that hype cycles can have a broader range of effects that extend beyond economic investment and public orientation.

I situate this study through an analysis of quantum strategy documents of the European Union. It follows from this analysis that certain capabilities of QI are emphasized that align more explicitly with national security interests than other potential capabilities of said technologies. However, I additionally explain that such an analysis can mainly indicate state dynamics within a QI-involved security dilemma, and that different approaches to strategic technological development do not avoid such dilemmas altogether. To avoid distrust and escalation, additional auxiliary institutional trajectories are required and thus need to be sought out, potentially including QI itself.

 
8:45am - 9:45am(Symposium) Philosophy of technology in Latin America and SPT's global support
Location: Auditorium 10
 

Philosophy of Technology in Latin America and SPT’s Global Support

Chair(s): Pieter Vermaas (TU Delft, Netherlands, The)

This symposium presents results of the SPT Special Interest Group for collecting recommendations on how SPT can support the development of philosophy of technology in all regions globally (https://www.spt.org/sigs/sig-global-support-to-philosophy-of-technology/). The aim is twofold: making these results available to SPT, and reviewing how future efforts by SPT to support philosophy of technology globally can be improved.

The results concern philosophy of technology in Latin America. In different regions of this part of the world there are active communities working in the field, where notably Colombia is demonstrating growth through its Network of Philosophy of Technology (https://redcolfiltec.wordpress.com). These communities are moreover planning to collaborate on, for instance, work on Latin American visions of philosophy of technology and on reaching out to SPT for association. The symposium will present these communities and sketch their planned activities. It will moreover bring together authors from these communities presenting their latest results.

 

Presentations of the Symposium

 

Philosophy of Technology in Latin America and SPT’s Global Support

Pieter Vermaas
TU Delft, the Netherlands

This symposium presents results of the SPT Special Interest Group for collecting recommendations on how SPT can support the development of philosophy of technology in all regions globally (https://www.spt.org/sigs/sig-global-support-to-philosophy-of-technology/). The aim is twofold: making these results available to SPT, and reviewing how future efforts by SPT to support philosophy of technology globally can be improved.

The results that will be presented at the symposium concern philosophy of technology in Latin America. In different regions of this part of the world there are active communities working in the field, where notably Colombia is demonstrating growth through its Network of Philosophy of Technology (https://redcolfiltec.wordpress.com). These communities are moreover planning to collaborate on, for instance, work on Latin American visions of philosophy of technology and on reaching out to SPT for association.

The symposium will present these communities and sketch their planned activities. It moreover will bring together authors from these communities presenting their latest results.

Efforts by the special interest group to reach out to other areas of the world have not led to similarly tangible results. The symposium therefore also aims at collecting feedback from the participating Latin American scholars and from the SPT membership on how future efforts to collect recommendations for supporting philosophy of technology globally can be organized differently. For instance, in contacts with Latin American communities it proved important not to cling too tightly to the label of philosophy of technology but to use a wider scope that also includes STS and more political work on technologies. An open question is what adjustments can be made to enable SPT to also effectively reach out to other areas of the world.

Structure of the symposium

The symposium combines three related aims. In the first part, SPT's efforts to collect recommendations about how it can support philosophy of technology in all regions globally are reviewed. Discussions with panelists and with the audience are directed at collecting initial recommendations from the philosophy of technology communities in Latin America and at improving SPT's efforts towards other areas of the world. In the second part, an overview will be given of the communities in Latin America and of the activities being developed to strengthen them. Third, a number of regular presentations by Latin American authors are added. (Note to the local organizers of SPT2025: this third part can be composed from contributions that are accepted for presentation; abstracts for these contributions are submitted separately by the authors involved.)

Schedule

Introduction (5 minutes)

- Pieter Vermaas and Judith Sutz

Part 1:

Recommendations by panel (15 minutes)

- Daian Flórez,

- Andrés Santa-María

- Judith Sutz

Discussion (10 minutes)

- Audience

Part 2:

Survey of Latin American philosophy of technology (20 minutes)

- Diego Lawler on Argentina

- Ángel Rivera on Colombia

- Andrés Santa-María on other regions and overall plans

Discussion (10 minutes)

Audience

Part 3:

N presentations of regular papers by Latin American authors (N × 30 minutes)

(There are planned submissions by Sara María Guzmán, Diego Lawler, Juan Carlos Moreno, Ángel Rivera, and Andrés Santa-María)

Total: 60 minutes plus 30 minutes per accepted talk

Participants

Coordinator and moderator:

- Judith Sutz (online), Universidad de la República, Uruguay; jsutz@csic.edu.uy.

- Pieter Vermaas (in person), Philosophy Department, TU Delft, the Netherlands; p.e.vermaas@tudelft.nl

Panelists:

- Daian Flórez (in person), Universidad de Caldas-Universidad Nacional de Colombia, Colombia; daian.florez@ucaldas.edu.co.

- Andrés Santa-María (in person), Universidad Técnica Federico Santa María, Chile; andres.santamaria@usm.cl.

- Judith Sutz (online), Universidad de la República, Uruguay; jsutz@csic.edu.uy.

Surveyors:

- Diego Lawler (in person), Instituto de Investigaciones Filosóficas SADAF/CONICET/UNQ, Argentina; diego.lawler@gmail.com.

- Ángel Rivera (in person), Universidad de Antioquia, Colombia; angel.riveran@udea.edu.co.

- Andrés Santa-María (in person), Universidad Técnica Federico Santa María, Chile; andres.santamaria@usm.cl.

 
8:45am - 9:45am(Symposium) A political (re-)turn in the philosophy of engineering and technology
Location: Auditorium 9
 

A Political (Re-)Turn in the Philosophy of Engineering and Technology - Political philosophy of technology: between tragedy and utopia

Chair(s): Giovanni Frigo (Karlsruhe Institute of Technology)

Technological and engineering choices increasingly determine our world. Presently, this affects not only our individual well-being and autonomy but also our political and collective self-constitution. Think of digital technologies like social media and their combination with AI, the corresponding echo chambers and filter bubbles, deep fakes and the current state of liberal democracy and the rise of authoritarian governments. Despite nation states having to reframe sovereignty in a globalised world (Miller, 2022), there is the potential for impactful collective action with regard to technological choices and practices of engineering, so that a simple form of technological determinism is to be discarded. In this light, the current focus of ethically normative philosophy of technology on individual action and character is alarmingly narrow (Mitcham, 2024). We urgently need a political (re-)turn in the philosophy of engineering and technology and, correspondingly, a turn towards engineering and technology in disciplines that reflect on the political sphere (Coeckelbergh, 2022).

To foster such a political (re-)turn in the philosophy of engineering and technology, we propose a panel at the SPT 2025 conference that brings together different theoretical perspectives and approaches that reflect the necessary diversity of such a political (re-)turn. We aim to both examine the contribution of applied political philosophy (e.g. political liberalism; Straussian political philosophy) to the question of technological disruption, as well as offer a roadmap for an explicitly political philosophy of technology that engages, for example, with the ways that AI will change the nature of political concepts (e.g. democracy, rights) (Coeckelbergh, 2022; Lazar, 2024). With global AI frameworks already shaping the global political horizon, it is pertinent to acknowledge and assess the current relationship between engineering, technology and politics. The panel might also be the first meeting of a newly forming SPT SIG on the Political Philosophy of engineering and technology, which will be proposed to the SPT steering committee.

References

Coeckelbergh, M. (2022). The Political Philosophy of AI: An Introduction (1st ed.). Polity.

Lazar, S. (2024). Power and AI: Nature and Justification. In J. B. Bullock, Y.-C. Chen, J. Himmelreich, V. M. Hudson, A. Korinek, M. M. Young, & B. Zhang (Eds.), The Oxford Handbook of AI Governance. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780197579329.013.12

Miller, G. (2022). Toward a More Expansive Political Philosophy of Technology. NanoEthics, 16(3), 347–349. https://doi.org/10.1007/s11569-022-00433-y

Mitcham, C. (2024). Brief for Political Philosophy of Engineering and Technology. NanoEthics, 18(3), 14. https://doi.org/10.1007/s11569-024-00463-8

 

Presentations of the Symposium

 

Where to political philosophy of technology?

Martin Sand
Delft University of Technology

The submitted symposium commences on the assumption that a philosophy of technology focused on the individual – on her virtues, moral sentiments and behavior – is ill-suited for emerging technologies that exceed national boundaries and have consequences far beyond the lifetime of biological beings. It seems obvious that one must ponder how collective decision-making should take this into account to diminish the negative consequences of technology for health, the environment and society. Still, the dichotomy between individual ethics and politics insinuated in this exposition is contentious: from the literature on utopianism – the images and fancies of the ideal society, as one might put it – we learn that one cannot clearly distinguish those outlooks that set out to create such an ideal society by mere political means, incentives and institutions from those that focus primarily on the individual, her ethos and behavior. Douglas Mao suggests, correctly I think, that "most utopias mix the two styles" (Mao 2020, p. 88). Perhaps, therefore, a political philosophy of technology needs to complement rather than supplant philosophy of technology, and must clarify how it aspires to do so.

It is oftentimes – in particular in discussions about technological utopianism, as my recent research suggests (Sand 2024) – argued that technological utopians place unjustifiable faith in the power of technology to save us and the planet from problems that have been caused by the development and use of said technologies (Winner 1986, Saage 2016). Politics, social interventions and ethics are all better equipped to respond to those challenges. Here, interestingly, politics is contraposed to technology, not to individual ethics. Though, on the one hand, one wonders where this faith in politics stems from, given the 50 years of human-made climate inaction, 20th-century genocides and the many atrocities currently committed around the world. Why would one continue believing in the might of political forms of organization, given that those need after all to be initiated and supported by individual humans (Sand 2018)? On the other hand, and this will be the main argument of my contribution, technological utopias' suggestive power and creative import stem precisely from their promise to dissolve the problems that many political theories accept (e.g. the immutability of human nature, limited altruism etc.). Whether possible or not, the utopian outlook itself induces reflection, conceptual novelty and hope and thereby advances how we pursue and conceive the ideal society.

Mao, D. (2020). Inventions of Nemesis: Utopia, Indignation, and Justice. Princeton University Press.

Saage, R. (2016). Is the classic concept of utopia ready for the future? In S. D. Chrostowska & J. D. Ingram (Eds.), Political Uses of Utopia: New Marxist, Anarchist, and Radical Democratic Perspectives (pp. 57-79). New York.

Sand, M. (2018). The virtues and vices of innovators. Philosophy of Management, 17(1), 79–95. https://doi.org/10.1007/s40926-017-0055-0

Sand, M. (2024). Technological Utopianism and the Idea of Justice Palgrave / Springer.

Winner, L. (1986). Mythinformation. In The Whale and the Reactor (pp. 98-117). University of Chicago Press.

 

The tragedy of great power technologies

Carl Mitcham
Colorado School of Mines

As engineering enhances technological power, extending it to genetic and subatomic depth and amplifying it to global dimensions and across generations, tragedies are ever more conceivable. Unprecedented increases in global health and wealth are shadowed by unintended threats from nuclear proliferation, toxic chemical pollution, global warming, and runaway AI. To reflect on and illuminate the potential for tragedy, I draw on two focused analyses of the contemporary situation: John Mearsheimer's The Tragedy of Great Power Politics (2001) and Stephen Gardiner's A Perfect Moral Storm: The Ethical Tragedy of Climate Change (2011). I use Gardiner to argue that climate change immeasurably intensifies what Mearsheimer sees as the intractable tragedy of great power security competition, and Mearsheimer to challenge Gardiner's shallow imagining of international political solutions to "global environmental tragedy." In both cases, this entails arguing for a relationship between moral theory or ethics and politics that prioritizes political philosophy.

I begin with an account of Gardiner's moral storm metaphor (based on Sebastian Junger's "perfect storm" narrative about a fishing boat sunk in 1991 by the convergence of three storms) of the intersection of three "moral storms" (or intractable moral difficulties) associated with technologically driven global climate change: (a) the asymmetry of power between rich and poor nations, (b) the asymmetry of power between present and future generations, and (c) the absence of guidance from robust general theories. Although Gardiner acknowledges a political dimension to the perfect storm, international relations remain in soft focus. I continue with an explication of Mearsheimer's influential (if controversial) study of great power politics since the 18th century and his analysis of how anarchic relations between great powers produce unrelenting security competition, often leading to open conflict. Although Mearsheimer recognizes the contribution of technology to this competition, he does not grant it the driving character it has in Gardiner. Important qualifier: like both Gardiner and Mearsheimer, I am concerned less with developing and defending policy responses to climate change or great power competition than with clarifying our techno-political predicament. To this end I will present and defend two theses:

1. The environmental tragedy of climate change is more technological-political philosophical than moral theoretical or ethical. As Mearsheimer argues in a companion study, The Great Delusion: Liberal Dreams and International Realities (2018), when push comes to shove the power of nationalism repeatedly trumps liberalism, undermining prospects for any effective collaborative, global, great technological power response to climate change.

2. The tragedy of great power politics is more a technological-political philosophical issue than one solely of international relations. Engineering and great power technologies have an inherent tendency to weaken both domestic social glue, by aggravating factional disagreements about how best to adapt or respond, and the international alliances or institutions that might be marshalled to address the double-headed tragic potential of security competition under conditions of climate change driven by technological power.

As an integration of these two theses, I will propose reformulating Spinoza's theological-political problem as a technological-political one.

 

Problematising Political Uses of History in the Philosophy of Technology

Christopher Coenen
Karlsruhe Institute of Technology

In the modern age, technological superiority has become perhaps the most important yardstick for judging societies, both one’s own and others, and accordingly for implicit or explicit judgements about states and their political order. At the same time, such judgements are often still based on older forms of evaluating states and societies and are deceptively interwoven with competing or complementary standards that are also more recent. Simplifying statements about long historical developments, which at best correspond to the state of the respective national historical narrative taught in high schools, and politically charged myths structure the otherwise very differentiated analyses of technological developments.

The importance of this problem has increased in times of heightened international tensions and intensified global cultural exchange. While philosophy of technology and other science and technology studies often radically question over-generalising notions of technological progress and science and thoroughly analyse their inherent ideological aspects, they usually avoid an equally careful approach to implicit or explicit claims about history. This problem is aggravated by a global tendency in science systems to justify funding programmes with often geopolitically motivated grand political narratives.

This contribution aims to illustrate the problem by using the example of implicit or explicit references in the philosophy of technology to politically charged ideas about the history of Christianity and the pre-Christian Roman Empire, as well as the uncritical use of ideas based on outdated conceptions of world history. Of course, philosophy of technology does not have to become professional historical research, but those who want to analyse the global relevance of such technoscientific developments as the rise of computer technologies are well advised to make their assumptions about relevant long-term historical developments explicit and to be sensitive to the historical-political weight of only seemingly innocent modern concepts.

Key literature

Adas, Michael. 1989. Machines as the Measure of Men: Science, Technology, and Ideologies of Western Dominance. Cornell University Press

Amin, Samir. 2009. Eurocentrism (2nd ed.)

Li Bocong. 2022. An Introduction to the Philosophy of Engineering: I Create, Therefore I Am. Translated by Wang Nan and Shunfu Zhang. Springer

 
9:50am - 10:50am(Symposium) Third wave continental philosophy of technology
Location: Blauwe Zaal
 

Third wave continental philosophy of technology - Part II

Chair(s): Pieter Lemmens (Radboud University), Vincent Blok (Wageningen University), Hub Zwart (Erasmus University), Yuk Hui (Erasmus University)

Since its first emergence in the late nineteenth century (starting with Marx, Ure, Reuleaux and Kapp and coming of age throughout the twentieth century via a wide variety of authors such as Dessauer, Spengler, Gehlen, Plessner, the Jünger brothers, Heidegger, Bense, Anders, Günther, Simondon, Ellul and Hottois), philosophy of technology has predominantly sought to think ‘Technology with a capital T’ in a more or less ‘metaphysical’ or ‘transcendentalist’ fashion or as part of a philosophical anthropology.

After its establishment as an academic discipline in its own right from the early 1970’s onwards, philosophy of technology divided itself roughly into two different approaches, the so-called ‘engineering’ approach on the one hand and the so-called ‘humanities’ or ‘hermeneutic’ approach on the other (Mitcham 1994).

Within this latter approach, the transcendentalist framework remained most influential until the early 1990’s, when American (Ihde) and Dutch philosophers of technology (Verbeek) initiated the so-called ‘empirical turn’, which basically criticized all macro-scale or high-altitude and more ontological theorizations of technology such as Heidegger’s Enframing and Ellul’s Technological Imperative as inadequate and obsolete and instead proposed an explicit move toward micro-scale and low-altitude, i.e., empirical analyses of specific technical artefacts in concrete use contexts (Achterhuis 2001).

From the 2010’s onwards, this empirical approach has been reproached for obfuscating the broader politico-economic and ontological ambiance. Particularly European philosophers of technology expressed renewed interest in the older continentalist approaches and argued for a rehabilitation of the transcendental or ontological (as well as systemic) question of technology (Zwier, Blok & Lemmens 2016, Zwart 2021), for instance in the sense of the technosphere as planetary technical system responsible for ushering in the Anthropocene or Technocene (Cera 2023), forcing philosophy of technology to think technology big again (Lemmens 2021) and calling not only for a ‘political turn’ (Romele 2021) but also for a ‘terrestrial turn’ in the philosophy of technology (Lemmens, Blok & Zwier 2017).

Under the influence of, among others, Stiegler’s approach to the question of technics (Stiegler 2001), Hui’s concepts of cosmotechnics and technodiversity (Hui 2016) and Blok’s concept of ‘world-constitutive technics’ (Blok 2023), we are currently witnessing the emergence of what may be called a ‘third wave’ in philosophy of technology which intends, in dialectical fashion, to surpass the opposition between transcendental and empirical, and instead engages in combining more fundamental approaches to technology and its transformative, disruptive and world-shaping power with analyses of its more concrete (symptomatic) manifestations.

This symposium aims to open a debate among authors exemplifying this third wave, with a view to the contemporary intimate technological revolution, specifically focusing on the themes technology and human identity, human nature, agency and autonomy, artificial intelligence, robots and social media, and the environment and sustainability.

 

Presentations of the Symposium

 

Pharmacology of artificial intelligence: Stiegler's exotranscendental philosophy of digital technology

Anne Alombert
Université Paris 8

The aim of this paper is twofold :

I will first show how Bernard Stiegler's philosophy of technology, which he describes as an "exotranscendental" philosophy, enables him to move beyond the alternative between transcendentalism and empiricism (1). I will then show how this exotranscendental philosophy of technology can help us to think the intimate and political issues at stake with the advent of generative artificial intelligence (2).

1) To begin with, I will show that from Technics and Time to his last seminars, Stiegler suggests a way of thinking that goes beyond the opposition between the transcendental and the empirical, asserting a "double recusation of empiricism and transcendentalism" (Technics and Time, vol. 3). Such a thought implies considering technical and technological environments as conditioning experience and knowledge, not in a transcendental or a priori way, but in a factic and evolving way. However, this thought is not empirical, because when they condition experience or thinking, technical environments cannot become the object of that experience or of that thought. Hence the allegory of the flying fish, which encourages us to take a temporary leap out of our everyday technological milieu in order to understand how it affects our psychic capacities and shapes our collective relations.

2) Drawing on this perspective, I will then examine how our current technological, industrial and algorithmic environments affect our psychic capacities and shape our collective relationships. I will show that contemporary so-called "generative artificial intelligence" constitutes new kinds of computational automatons which can be described as new "pharmaka" and which imply new risks. From an intimate or psychic point of view, these digital automatons risk leading to a proletarianization of expression and to a new kind of symbolic misery. From a cultural and collective point of view, they risk leading to a standardization of social memory, which tends to become a capital to be exploited by Big Tech companies. In order to face these new dangers, I will suggest thinking hermeneutic and deliberative technologies, as well as algorithmic pluralism, which I consider to be the current exotranscendental conditions of possibility of technodiversity and noodiversity.

 

Response against Reaction: Stiegler’s positive philosophy of technology

Benoit Dillet
University of Bath

Bernard Stiegler is the philosopher of response. The question that could not stop haunting him was: how do we – as psycho-collective individuations – respond to today's constantly evolving and accelerating present conditioned by our technologies? This disposition in the philosophical field has important consequences for thinking politics, beyond the ethical turn of 'responsible tech'. For Stiegler, political decisions always arrive too late; they are taken in the face of the technological shocks that reconfigure our ways of knowing, feeling, perceiving and working. It is here that he provides an original take on the concept of responsibility, not simply as a duty-first ethic but as the courage to act in response to specific circumstances. This courage to act usually interrupts the normal course of philosophical development (or academic life) and almost takes place as a vocation, as a call to act. It is not responsibility in the sense of having to be accountable to someone for something or to act as a moral agent, but responsibility in the sense of rearranging the order of questions.

Moving away from the critiques of the reification or essentialisation of technology by the empirical turn in philosophy of technology, I begin this presentation by showing the instrumentalisation of technology for hegemonic projects. In the present conjuncture, and given the sheer power that Big Tech companies have amassed in the last decade, it is no longer relevant simply to debate which type of technology is worth philosophising about, Heidegger's hammer or Ihde's cellphone, or to provide ethical guidance for the use of emerging technologies. We should consider the current oligarchs of the Big Tech industry and their infrastructural project as proponents of technodeterminism. In response to this situation, I read the work of Bernard Stiegler as having reconnected with transcendental perspectives on technology, 'with a capital T' (Lemmens 2021; Smith 2015), by integrating a critique of political economy (and ecology). His concept of the technosphere in particular can illustrate this move (Stiegler 2019).

Following Stiegler, I understand technology from a relational ontological perspective: technology and the human are co-individuating rather than separate spheres. Stiegler has also given his philosophy a particular political orientation. We can arrive at some important breakthroughs socially and politically if we examine the political domain from a technological perspective, not as two distinct domains, since 'politics is a technological phenomenon' (Hui 2024: 3). Politicising technology means thinking the orientation of technology historically and collectively. The relationship between technology and the people (or demos) is thus central to establishing principles for a political logic of technology.

Drawing from Stiegler’s first writings as well as his engagement with political and cultural institutions, I aim to unpack Stiegler’s oft-misunderstood ideas about political will. Stiegler’s project has changed over the course of his writings, from a politics of memory in his early work to the creation of the political think tank Ars Industrialis, as well as propositional politics in different domains such as an economy of contribution or a positive pharmacology. I will show that these different political moments in Stiegler form a single positive political vision about technology.

 
9:50am - 10:50am(Symposium) The History of the Philosophy of Technology: the French Tradition
Location: Auditorium 15
 

The History of the Philosophy of Technology: the French Tradition

Chair(s): Federica Buongiorno (University of Florence)

The History of the Philosophy of Technology posits the philosophy of technology as a wide-ranging and comprehensive field of study that includes both the philosophical study of particular technologies and the different ways that technology, more broadly, has been considered philosophically. Influenced by the history of the philosophy of science, the history of ideas, and the history of the humanities, our aim is to examine how different individuals and traditions have thought about technology historically. This includes, but is not limited to: the work of different thinkers throughout history, both well-known and overlooked figures and narratives, including non-western traditions and narratives that engage with technology; analyzing the cultural, social, political, and sociotechnical contexts that have shaped philosophical responses to technology, including historical responses to new and emerging technologies; exploring the disciplines and intellectual traditions whose impacts can be traced across different philosophies of technology, including Science and Technology Studies (STS), the history of technology, critical theory, phenomenology, feminist philosophy, hermeneutics, and ecology, to name only a few; histories of different "schools" of philosophical thought about technology, for example French philosophy of technology, Japanese philosophy of technology, and Dutch philosophy of technology; mapping the hidden philosophies of technology in the work of philosophers (e.g. Foucault, Arendt, Sloterdijk) and traditions whose work is not often associated with technology (e.g. German idealism, logical empiricism, existentialism, lebensphilosophie); and, exploring the contributions of literature, art, design theory, architecture, and media theory/history towards a philosophy of technology.

This panel focuses on the French tradition within the philosophy of technology. Referred to by some as a "biological" philosophy of technology, the French tradition encompasses work by writers including Gilbert Simondon, Georges Canguilhem, and Bernard Stiegler, to name only a few, each of whom presents a unique approach to technology.

 

Presentations of the Symposium

 

Humans, Technique, and Machine in Canguilhem's philosophy

Emanuele Clarizio
Catholic University Lille

Canguilhem’s reflection on technique permeates his entire philosophy from its very beginnings (i.e. the late 1930s) and is generally configured as a ‘biological philosophy of technique’, where technique is understood in continuity with life. However, whenever Canguilhem attempts to think of the machine - which happens in a recurring manner from the essay on Machine and Organism (1947) up to The Question of Ecology (1974) - he offers a view of it that is extremely negative from a moral point of view (the machine as an instrument of normalization and rationalization) and reductionist from an epistemological point of view (the machine as an assemblage of juxtaposed parts rather than an individual with a specific mode of existence). Whereas technique, understood as technical activity, is an act of the living, the machine is an intellectualist fabrication that serves to inscribe an external purpose in matter. From this point of view, the machine has less to do with vital creation than with an act of social normalization, and should be understood against the backdrop of a reflection on the human being’s relation to society rather than to life.

In fact, the organism, the machine and society represent three different types of organization for Canguilhem, of which only the first deserves to be qualified as autonomous, while the machine and society tend towards automatism. If, therefore, there is in Canguilhem a ‘biological philosophy of technology’, this does not include his reflection on the machine, for which we should perhaps speak of a ‘social philosophy of the machine’. There is, in short, a discontinuity between Canguilhem’s thought of ‘technique’ and his thought of the ‘machine’, which is also rooted in a sort of duality in his anthropology: the human being is understood by Canguilhem in some respects as a living being (whose inventive actions serve to interact with the environment), and in others as a cultural and social being, without these two points of view ever coming together completely. The distance between the concept of technique and the concept of machine thus refers to an anthropology that could be described as dualist, at once biological and socio-cultural.

 

A History of Vitalism in French Philosophers of Technology

Hannes van Engeland
Maastricht University

The French tradition in the philosophy of technology has been the object of renewed interest and significant attention in recent years, especially the work of, for example, Bernard Stiegler, Georges Canguilhem and Jacques Ellul. Rather interestingly, I believe that many philosophers in this tradition were heavily influenced by vitalism – the claim that not all aspects of reality, especially living organisms, can be explained by mechanistic processes alone.

In this presentation I therefore want to propose the hypothesis that within French philosophy of technology there is a tradition of vitalism and that the combination of vitalism and philosophy of technology leads to a political stake in the philosophies of vitalist thinkers.

I will do this by succinctly introducing the thought of three thinkers, namely Henri Bergson, Raymond Ruyer and Gilbert Simondon. Bergson was a very influential thinker who heavily shaped the whole tradition of French philosophy. Ruyer was heavily influenced by him and, though very influential in his time, has fallen into obscurity; recently, however, he is being rediscovered and even translated into English. Simondon is perhaps the most well-known figure in contemporary philosophy of technology and was influenced by both previous thinkers.

Each of them developed a philosophy of technology and made normative claims about how technology should relate itself towards humans and society and vice versa. For every thinker I will first briefly introduce their philosophy and in doing so underline the vitalism that is present in their thought. I will then, secondly, go on to discuss how they were normative and political in their own time and how they thought the living and non-living should relate to one another by focussing on a specific discussion they had or a specific term they introduced.

In the final part of my presentation I will elaborate on how each of these thinkers can help us think about technology in our own time, exemplifying the enduring significance of French philosophy of technology and its potential contributions to contemporary discussions on the ethical and societal implications of technological development.

 

Pharmacology of plasticity: bridging Stiegler and Malabou

Pietro Prunotto
University of Turin

This paper explores the intersection of Bernard Stiegler’s epiphylogenesis and Catherine Malabou’s concept of plasticity, two frameworks rooted in Jacques Derrida’s deconstructive legacy yet rarely brought into dialogue. By proposing a “pharmacology of plasticity,” the paper bridges Stiegler’s focus on technics and externalized memory with Malabou’s emphasis on the transformative potential of neuronal plasticity, extending their relevance to contemporary challenges in philosophy and technology.

Malabou’s plasticity—defined as the capacity to give, receive, and destroy form—resonates with Stiegler’s pharmakon, which conceptualizes technics as both a remedy and a threat. Both philosophers address the interplay between life, technology, and transformation, shedding light on how these dynamics shape subjectivity in an era of cognitive capitalism and algorithmic governance. Stiegler’s theory of epiphylogenesis situates technical objects as co-constitutive of human individuation, while Malabou underscores the biological and philosophical implications of trauma, adaptability, and materiality.

Despite shared themes, significant divergences emerge. Malabou critiques Stiegler’s perceived dualism between the human and the machinic, while Stiegler highlights gaps in Malabou’s engagement with technics, particularly her reading of Hegel. To reconcile these perspectives, the paper argues that plasticity itself operates pharmacologically, embodying both creative and disruptive potentials. This duality offers a critical framework for addressing socio-political and ecological challenges in technologically mediated societies, where the boundaries between biology and technology are increasingly blurred.

By synthesizing Stiegler’s organology and Malabou’s materialist turn, the “pharmacology of plasticity” illuminates the interaction between externalized memory and cerebral plasticity, proposing new ways to understand the politics of individuation and disindividuation. Engaging Derrida’s notions of trace and différance, the paper reaffirms the relevance of deconstruction for contemporary debates on technics, life, and transformation.

Ultimately, this synthesis provides critical tools for rethinking the relationship between technology and subjectivity, demonstrating how the “pharmacology of plasticity” can inform resistance and critique in an age defined by cognitive capitalism and technological governance. By exploring the intersections of technics and life, the paper extends the scope of deconstruction to address urgent questions about the Anthropocene, fostering a nuanced understanding of transformation and its implications for ethics and politics.

 
9:50am - 10:50am(Symposium) Postphenomenology II: practical applications
Location: Auditorium 14
 

Postphenomenology II: practical applications

Chair(s): Kirk M. Besmer (Gonzaga University, United States of America)

Postphenomenology is a methodological approach that seeks to understand human-technology relations by analysing the multiple ways in which technologies mediate human experiences and practices. One of the key accomplishments of postphenomenology is the development of a conceptual repertoire and vocabulary for analysing technologies in use, by focusing on how they become part of human embodiment, give rise to particular forms of sedimentation and resulting habits and practices, and more generally lead users to perceive and experience the world in particular ways. Given its conceptual resources, postphenomenology is often used to provide rich descriptions of actual technologies as they are taken up in human practices.

The rationale behind this panel is that because postphenomenology is a philosophy from technologies, it needs to continuously let its vocabulary be challenged by technological developments. That is, not only does postphenomenology offer a new perspective on technologies, these technologies, in turn, also offer a new perspective on postphenomenology. This dialectical process is reflected in the four papers in this symposium, each of which takes a postphenomenological approach to a specific type of technology.

The first paper argues that contrary to Critical Algorithm Studies, which characterizes the black box nature of algorithms as an epistemological problem solved by transparency, a postphenomenological approach frames algorithmic opacity as a hermeneutic issue, thereby allowing for a reconceptualization of algorithmic power and resistance. Taking the meme as a use case, the second paper focuses on the postphenomenological concepts of multistability and variational theory to consider ideas like contextual meaning, participatory co-shaping, technological layering, nuanced emotional expression, and virality. The third paper focuses on good health care practices. There is no denying that health care is technologically mediated, and insofar as one makes oneself vulnerable when seeking health care, trust is a key constituent of it. The third paper takes a postphenomenological approach to describe how technologies establish, challenge, and reinforce trust in healthcare. Harking back to themes in Husserl, the fourth paper takes a postphenomenological approach to current global crises, such as pandemics, famines, and war, arguing that crises create their own technologies, dubbed "technologies of crisis."

 

Presentations of the Symposium

 

Appropriating hidden technologies: a postphenomenological response to critical algorithm studies

Olya Kudina1, Anthony Longo2
1TU Delft, 2University of Antwerp

This paper examines the political implications of algorithmic opacity through the lens of postphenomenology, in discussion with dominant critiques within Critical Algorithm Studies (CAS). Many CAS scholars have argued that the hidden, ‘black box’ character of algorithms undermines the possibility of political action and resistance, framing algorithms as epistemological problems to be solved with transparency. This critique aligns with a ‘hermeneutics of suspicion’ (Ricoeur 1970), largely focused on the uncovering of concealed power structures. Such critiques presuppose that the physical or spatio-phenomenal presence, or ‘visibility’, of technologies is a necessary condition for their political appropriation. The corresponding methodological assumption is that because algorithms are ‘invisible’, empirically-oriented methods are at best unhelpful and at worst damaging for understanding how algorithms affect our online interactions. Therefore, scholars have widely rejected or questioned (post)phenomenology as a method to study algorithms. In response, we argue that postphenomenology offers an alternative theoretical framework by rethinking the conditions of appropriation. Drawing on its insights, we show that the absence of material presence does not preclude political engagement. Instead, as technological appropriation is always already preceded by acts of interpretation, we find that users imaginatively construct algorithms as objects with particular qualities. This hermeneutic process decouples appropriation from materiality, positioning algorithms as hermeneutic rather than purely epistemological problems. Building on this framework, we explore three modes of ‘algorithmic resistance’ (mastering, confusing, and manipulating algorithms), each demonstrating users’ creative engagements with algorithmic systems. These practices highlight the interpretive dimensions of resistance, which macro-oriented Marxist approaches prevailing in CAS often overlook. By integrating postphenomenological insights, we argue for a reconceptualization of algorithmic power and resistance, proposing that acts of subjectivation and desubjectivation emerge through a hermeneutic ‘lemniscate’ that mediates reflective and pre-reflective relations to technology. This contribution not only seeks to enrich debates on political agency in algorithmic environments by foregrounding the interpretive practices that underlie resistance, but also to defend a micro-level postphenomenological approach against critiques that it lacks political relevance.

 

Intimate technology: the postphenomenological meme use case

Stacey Irwin
Millersville University

Social media is a kind of digital media, which is a technology bundle that combines technological devices and information that is created, viewed, and/or shared digitally. Information contained in digital media includes text, audio, video, and images. This presentation explores the use case of that which can be imitated: the meme. Social media is the most popular way to push a meme. While some researchers have argued that social media is not in itself “social” (Majkut 2010; Turkle 2012), others look at current experiences and find that social media, and the technologies embedded within it, are indeed social and even intimate (Carr 2015; Manovich 2001). That is the nature of participatory media (Jenkins 2018; 2016). Participation means that anyone and everyone connected to an available system has the ability to post, push, stream, and create content for multiple audiences any time of day or night. This daily participation, a habit for some, is facilitated by technology’s mediated and enfolded elements that render the everyday lifeworld “technologically textured” (Ihde 1990, 1). The main movement that provides a social experience is the ability to participate in the process of making and sharing, and even remaking and resharing. This is the crux of meme creation and use, predicated on technologies like the internet, devices, digitally available materials, software and apps, shared on a very wide scale or with a single individual. Sometimes this process cements culture and other times it breaks it (Irwin 2021). Philosophy of technology thinkers help to analyze the participatory spaces of social media, facilitated by technologies that seemingly provide private and intimate spaces. This postphenomenological use case focuses on multistability and variational theory to consider ideas like contextual meaning, participatory co-shaping, technological layering, nuanced emotional expression and virality (Fried 2021; Hasse 2008; Ihde 1993).

 

Postphenomenology and technologies in times of multiple crises

Markus Bohlmann
University of Muenster

Our present is characterized by the experience of multiple crises that overlap in many ways, e.g., pandemics, famines, ecological crises, migration, war, demographic change, or crises of democracy. In addition to regulatory political solutions, such crises always offer technological solutions both large and small. Crisis has been a classic subject of phenomenology since Husserl's Vienna Lecture of 1935. At that time, Husserl already understood crisis as a disease-like state of society. For him, the European crisis at the time was primarily characterized by a lack of cultural progress. He distinguished “to live” as “creating culture within historical continuity” from medically defined life (Husserl. 1965. Phenomenology and the crisis of philosophy. Transl. Lauer. New York: Harper & Row. P. 150). In today's crises, it is no longer possible to make such a clear distinction between culture, lived life and medically defined life. Technologies of crisis are important cultural products that still show “historical continuity” even after the crisis has been overcome. Technologies are an important part of crisis itself.

Technologies of crises have not yet been examined in a comprehensive manner from a postphenomenological perspective. In postphenomenology, there are analyses of technologies whose distance from humans has led to technological breakdowns and even crises. In Technology and the Lifeworld, for example, Don Ihde interpreted the reactor accident at the Three Mile Island nuclear power plant in 1979 as problematic because “the nuclear power system was observed only through instrumentation” (Ihde. 1990. Technology and the Lifeworld: From Garden to Earth. Bloomington: Indiana University Press. P. 85). This is a case of technologies as the cause of crisis, most recently experienced in the Fukushima nuclear crisis of 2011. Current crises, on the other hand, create their own technologies: these technologies have a meaning for humans in crisis, they are close to the body (proximal), and they play a decisive role in how crisis is given to us. They are technologies of crisis.

In this presentation, I will use examples of educational technologies of crisis to illustrate some important theoretical aspects of technologies of crisis, for example:

1. Technologies of crisis are proximal.

2. Crisis is about life itself, not as “mere life” but as “creature” (Agamben and back to Benjamin).

3. Sovereignty in crisis, especially technological sovereignty, is the exercise of biopower (Foucault). In a crisis, there are regulatory sovereign actions and technological ones.

4. In crisis, technological action replaces interpersonal action.

5. Primary stabilities of technologies of crisis cannot be evaluated ethically. It is always about the whole of life here.

6. Secondary stabilities of technologies of crisis focus the field of awareness on the crisis itself.

7. Multiple crises trigger a struggle for attention, which becomes a struggle for the sedimentation of technologies.

 
9:50am - 10:50am(Symposium) John Dewey and philosophy of technology: bridging the ethical, epistemic and political
Location: Auditorium 13
9:50am - 10:50am(Symposium) In search of legitimation: the dynamic tensions in the regulation of privacy and data rights in Vietnam
Location: Auditorium 12
9:50am - 10:50am(Symposium) Ethical lessons from the second quantum revolution
Location: Auditorium 11
 

Ethical lessons from the second quantum revolution

Chair(s): Benedict Lane (TU Delft, Netherlands, The)

The SPT2025 conference theme, “The Intimate Technological Revolution,” highlights the profound ways emerging technologies are transforming personal, societal, and political landscapes. Quantum Technologies (QT), as a major contemporary frontier of technological innovation, exemplify these transformative dynamics through their implications for individual moral responsibility, national technological sovereignty, international ethical governance, and global security. Thus, the “second quantum revolution” can be viewed as an important contemporaneous counterpart of the “intimate technological revolution” – these parallel revolutions can be seen as mutually reinforcing, both thematically and in their concrete impact on society. As with the intimate technological revolution, the multilayered and deeply interconnected ethical ramifications of QT force us to reevaluate many established ways of thinking ethically about technology, with important lessons to be learned even beyond the context of QT.

This panel examines the socio-politico-ethical challenges posed by QT, and aims to enrich the broader discourse on the ethical impact of technology on society by using QT as a case study through which to explore:

i) the appropriateness of dominant normative frameworks for assessing emerging technologies, such as QT, given existing interdependencies and dynamics of power;

ii) the development of governance structures aimed at anticipating the societal impact of emerging technologies, such as QT, the role of different stakeholders in shaping and assessing such structures, and the interplay between discourses surrounding the governance of different emerging technologies, such as QT and AI;

iii) the connections between the geopolitical strategic implications of emerging technologies, such as QT, and the (potentially irresponsible) escalatory discourse surrounding them;

iv) the roles and responsibilities of engineers and scientists with regards to the ethics and responsible governance of technological innovation and with respect to ongoing changes in the societal mandate for science;

v) the role of inclusive ecosystem design, equitable access to education and careers, and stakeholder engagement in tackling systematic demographic biases in the innovation process.

This panel builds on the success and momentum of an earlier panel, going beyond previous discourse by connecting the ethics of QT to the ethical vulnerabilities, risks, and opportunities inherent in technological innovation. Incorporating insights from applied ethics, political philosophy, and Science and Technology Studies (STS), the panel offers a multidimensional exploration of the ethical implications of QT, bridging theoretical inquiry with practical ethical challenges, and offering insights relevant to engineers, policymakers, and philosophers of technology alike. By emphasizing the multidimensionality and interconnectedness of the ethical issues surrounding QT, the panel raises ethical questions and contributes ethical insights with significant relevance for the intimate technological revolution.

 

Presentations of the Symposium

 

Identifying alternatives to the de facto division of moral labour in ELSA engagement with quantum technology development

Clare Shelley-Egan, Benedict Lane
Faculty of Technology, Policy, and Management, TU Delft

The formulation of an ELSA or Quantum and Society ‘approach’ is central to our role as ELSA researchers within the Dutch quantum innovation ecosystem. Notwithstanding Key Performance Indicators orienting our ELSA work, there is also a need to think concretely and effectively about how to engage with scientists embedded in the ecosystem. The limitations of ELSA studies have been well rehearsed, while the outcomes of ELSA studies have generally been modest, ranging from some modifications to research agendas, to increased reflexivity.

In this contribution, we engage with one key point of criticism of ELSA as a normative programme: that is, the division of moral labour that exists between ELSA researchers and natural scientists and technology developers. This division manifests such that ELSA scholars are tasked with addressing ethical, governance and broader issues, while scientists and developers proceed with work on scientific, technical and engineering issues, without concerning themselves with the ethical, legal, or social aspects of their work. This division of moral labour is historically entrenched and difficult to overcome, given the contemporary landscape of research and organisation and the nature of ‘organised irresponsibility’.

This contribution engages with this lack of integration of labours through a pragmatist lens, going beyond descriptive and historical engagement with this divide to offer building blocks to identifying viable alternatives to the current normative arrangement. It does so by engaging with the notion of ‘moral progress’. Pragmatists understand moral progress as “justification that sticks”, where justification itself “emerges from the process of inquiry” (Kitcher, 2021). As a result of shifts in the relationship between science and society and persistent “responsibility gaps” (Da Silva, 2024) in the institution of science, past justifications of the moral division of labour in science and technology research have not “stuck” and therefore require reassessment. We develop a theoretical framework for doing so. We go on to employ our framework constructively to make proposals for future divisions of moral labour in science and technology research. Accordingly, our paper draws out the following questions:

• What kind of division of moral labour would be more suited to our needs in the present?

• What is feasible, for both scientists and ELSA scholars?

• What is justifiable? Do the benefits outweigh the costs, e.g. opportunity costs of missing out on risky innovation?

 

Responsible innovation ecosystems: advancing quantum for good through gender-transformative approaches

Shamira Ahmed
Faculty of Technology, Policy, and Management, TU Delft

Innovation ecosystems tend to have technology development, risk mitigation, and economic success as the end goals of innovation governance; less is known about the governance of innovation ecosystems with respect to ethical and societal concerns.

As quantum technologies advance, there is a growing need to ensure their development aligns with societal needs, ethical considerations, environmental responsibilities, and gender equality. This paper explores the gender dimensions (or lack thereof) that shape the development of emerging quantum governance for the public good.

By analysing the gender dimensions of innovation governance in Quantum Delta NL, the Dutch national quantum ecosystem, the study aims to reveal whether the tradeoffs between the economic competitiveness and the societal benefits of quantum technology development, or what is termed ‘Quantum for Good’, include navigating the gender dimensions that shape the development and diffusion of quantum governance for the public good.

The paper adopts an interdisciplinary framework, integrating the capabilities approach (Nussbaum, 2011), socio-technical foresight (Geels & Schot, 2007), and design for values (van den Hoven et al., 2015) to frame the gender dimensions of the emerging field of responsible quantum governance. By synthesizing an interdisciplinary framework with emerging research on gender in quantum computing and innovation ecosystems (Granstrand & Holgersson, 2020), the paper aims to identify interventions necessary for an enabling policy and regulatory environment that addresses systemic gender biases in quantum technology development (Kop et al., 2024; Stilgoe et al., 2013).

Beyond co-creating potential interventions that involve ecosystem actors working closely together to address barriers faced by women in quantum fields, including limited access to resources and representation (Murphy, 2024), the paper will explore strategies for inclusive ecosystem design, equitable access to quantum education and careers, and critical partnerships between stakeholders to drive systemic change in the Netherlands’ quantum ecosystem. Finally, the paper will analyse the role of gender-responsive innovation principles in shaping organizational practices and quantum technology applications.

The findings aim to contribute to the evolving discourse on responsible quantum ecosystem governance, prioritizing gender equality and inclusiveness to support “Quantum for Good”. They will provide insights into creating practical frameworks that address the gender dimensions of quantum innovation ecosystems, offering valuable guidance for a wide range of stakeholders (policymakers, industry leaders, and researchers) in navigating the ethical and inclusivity challenges of emerging quantum technologies.

 
9:50am - 10:50am(Symposium) A political (re-)turn in the philosophy of engineering and technology
Location: Auditorium 9
 

A Political (Re-)Turn in the Philosophy of Engineering and Technology - Power and domination

Chair(s): Glenn Miller (Texas A&M University)

Technological and engineering choices increasingly determine our world. Presently, this affects not only our individual well-being and autonomy but also our political and collective self-constitution. Think of digital technologies like social media and their combination with AI, the corresponding echo chambers and filter bubbles, deep fakes, and the current state of liberal democracy with the rise of authoritarian governments. Despite nation states having to reframe sovereignty in a globalised world (Miller, 2022), there is the potential for impactful collective action with regard to technological choices and practices of engineering, so that a simple form of technological determinism is to be discarded. In this light, the current focus of ethically normative philosophy of technology on individual action and character is alarmingly narrow (Mitcham, 2024). We urgently need a political (re-)turn in the philosophy of engineering and technology and, correspondingly, a turn towards engineering and technology in disciplines that reflect on the political sphere (Coeckelbergh, 2022).

To foster such a political (re-)turn in the philosophy of engineering and technology, we propose a panel at the SPT 2025 conference that brings together different theoretical perspectives and approaches that reflect the necessary diversity of such a political (re-)turn. We aim to both examine the contribution of applied political philosophy (e.g. political liberalism; Straussian political philosophy) to the question of technological disruption, as well as offer a roadmap for an explicitly political philosophy of technology that engages, for example, with the ways that AI will change the nature of political concepts (e.g. democracy, rights) (Coeckelbergh, 2022; Lazar, 2024). With global AI frameworks already shaping the global political horizon, it is pertinent to acknowledge and assess the current relationship between engineering, technology and politics. The panel might also be the first meeting of a newly forming SPT SIG on the Political Philosophy of engineering and technology, which will be proposed to the SPT steering committee.

References

Coeckelbergh, M. (2022). The Political Philosophy of AI: An Introduction (1st ed.). Polity.

Lazar, S. (2024). Power and AI: Nature and Justification. In J. B. Bullock, Y.-C. Chen, J. Himmelreich, V. M. Hudson, A. Korinek, M. M. Young, & B. Zhang (Eds.), The Oxford Handbook of AI Governance. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780197579329.013.12

Miller, G. (2022). Toward a More Expansive Political Philosophy of Technology. NanoEthics, 16(3), 347–349. https://doi.org/10.1007/s11569-022-00433-y

Mitcham, C. (2024). Brief for Political Philosophy of Engineering and Technology. NanoEthics, 18(3), 14. https://doi.org/10.1007/s11569-024-00463-8

 

Presentations of the Symposium

 

Why Representations of the Future (Should) Matter for Political Philosophy of Technology? ‘Modal Power’ and Socio-Technical Directionality

Sergio Urueña
University of the Basque Country UPV/EHU

Over the past three decades, there has been a growing interest in how representations of the future co-configure technological and sociotechnical practices. Concepts such as “visions” (Schneider & Lösch, 2019), “expectations” (Konrad & Alvial Palavicino, 2017; Pollock & Williams, 2010), “promises” (Parandian et al., 2012), “hype cycles” (van Lente et al., 2013), “sociotechnical imaginaries” (Jasanoff & Kim, 2015), or “hermeneutic circles” (Grunwald, 2017) reveal diverse ways in which technological production is intertwined with, and shaped by, future-oriented representations. This attention has primarily emerged within fields such as Science and Technology Studies (STS) and normative approaches like various forms of Technology Assessment. Yet, it remains an underexplored niche within philosophical inquiry. What critical and/or interventive commitments, then, should a political philosophy of technology have toward these future-oriented phenomena?

This paper argues that any political philosophy of technology aiming to address the socio-material and discursive mechanisms underpinning the co-production of technology must include among its objects of analysis the ways in which representations of the future shape, perform, and sustain the development of technologies (and their associated socio-political orders). This focus is warranted, I will contend, because such future-oriented representations—embedded within the present—play a pivotal role in modulating “modal power,” understood as the capacity, whether explicit or subtle, to set the spaces of possibility deemed (im)plausible and (un)desirable in guiding (technological) praxis (Urueña, 2022). If a central task of a political philosophy of technology is to discern how technology co-configures sociopolitical orders and engages with power—creating, reinforcing, or redistributing it—then attention to modal power, and its influence on sociotechnical trajectories, should be considered a fundamental dimension of such a philosophy. This is especially critical for political philosophy of technology focused on “socially disruptive” new and emerging technologies, which are the primary (though not the sole) catalysts and capitalizers of future temporality.

References

Grunwald, A. (2017). Assigning meaning to NEST by technology futures: extended responsibility of technology assessment in RRI. Journal of Responsible Innovation, 4(2), 100–117. https://doi.org/10.1080/23299460.2017.1360719

Jasanoff, S., & Kim, S.-H. (Eds.). (2015). Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power. University of Chicago Press.

Konrad, K., & Alvial Palavicino, C. (2017). Evolving Patterns of Governance of, and by, Expectations: The Graphene Hype Wave. In D. M. Bowman, E. Stokes, & A. Rip (Eds.), Embedding New Technologies into Society: A Regulatory, Ethical and Societal Perspective (pp. 187–217). Pan Stanford Publishing.

Parandian, A., Rip, A., & Te Kulve, H. (2012). Dual dynamics of promises, and waiting games around emerging nanotechnologies. Technology Analysis & Strategic Management, 24(6), 565–582. https://doi.org/10.1080/09537325.2012.693668

Pollock, N., & Williams, R. (2010). The business of expectations: How promissory organizations shape technology and innovation. Social Studies of Science, 40(4), 525–548. https://doi.org/10.1177/0306312710362275

Schneider, C., & Lösch, A. (2019). Visions in assemblages: Future-making and governance in FabLabs. Futures, 109, 203–212. https://doi.org/10.1016/j.futures.2018.08.003

Urueña, S. (2022). Anticipation and modal power: Opening up and closing down the momentum of sociotechnical systems. Social Studies of Science, 52(5), 783–805. https://doi.org/10.1177/03063127221111469

van Lente, H., Spitters, C., & Peine, A. (2013). Comparing technological hype cycles: Towards a theory. Technological Forecasting and Social Change, 80(8), 1615–1628. https://doi.org/10.1016/j.techfore.2012.12.004

 

Gadgets, gimmicks, garbage: domination and irresponsible innovation

Lukas Fuchs
University of Stirling

This presentation sketches a political economic critique of irresponsible innovations. Despite diverse research on technological innovations and their impact (ultra-processed foods, automobiles, cushioned running shoes, social media, nonstick cookware), we lack a systematic understanding of why societies develop, disseminate, adopt and retain useless and harmful products. The presentation first examines cases of irresponsible innovations and investigates the mechanisms that propel them. Second, drawing on the normative notion of freedom as non-domination, it studies to what extent the market-led influx of innovations has made societies unfree to decide their own fate and individual consumers unable to make autonomous lifestyle choices.

 

Energy, war, power: political philosophy of engineering and technology during armed conflict

Giovanni Frigo
Karlsruhe Institute of Technology

The context of warfare provides unique conditions for thinking radically about the political philosophy of engineering and technology. This paper focuses on two crucial issues that connect energy and war – energy interdependence and the weaponization of energy systems. Ethics (understood in individualistic terms) appears insufficient for dealing with these problems, which seem to require collective political solutions instead. Concrete energy projects that aim to contribute to the energy transition from fossil fuel-based energy systems to more sustainable and renewable ones usually take place within the borders of nation-states. Here, issues related to energy autonomy, security, safety, and sovereignty play a major role. Although these projects depend on national public policies, they are often connected to broader policy agreements at the union, federal or international level. This interconnectedness, along with the interdependencies created by inter-state energy markets, implies issues related to international law, international relations, and transboundary political equilibria. The emergence of warfare may pose unique challenges to both intra- and inter-state political relations. What is the role of political philosophy of engineering and technology in understanding (energy) engineering and (energy) technologies during warfare? To sketch a provisional answer and discuss the two issues of energy interdependence and weaponization of energy systems, we present the case of the Zaporizhzhia Nuclear Power Plant (ZNPP) located in South-Eastern Ukraine. The ZNPP is Europe's largest nuclear facility and, at the moment of writing, a highly contested and weaponized socio-technical system under Russian military occupation. The aim of the article is twofold. On the one hand, we propose to realistically consider the (im)possibility of advancing the energy transition during armed conflicts. On the other hand, we offer a reflection on the geopolitics of energy from the standpoint of political philosophy of engineering and technology.

 
10:50am - 11:50amCoffee & Tea break
Location: Voorhof
10:50am - 11:50amPoster session
Location: Senaatszaal
 

Polished, primitive, or sophisticated: What videogame graphics can tell us about colonial and postcolonial aesthetics.

Afra Willems

Saxion, Netherlands, The



All in on AI: A critical look at the effects of creating with AI-powered tools

Denzel Hagen, Marcello Gómez Maureira, Kristi Claassen

University of Twente, Netherlands, The



Digital colonialism and critical communication infrastructures: submarine cables and data and power routes in Portugal and Brazil

Ana Carolina Haddad

Researcher, Portugal



EduLARP as an educational method for discussing ethical impact of intimate technologies

Verena Schulze Greiving

Saxion Hogeschool, Netherlands, The



How do scientists accept knowledge generated by AI technology? A case study of AlphaFold

Enrong Pan, Ziming Wang

Sun Yat-sen University, China, People's Republic of



How semantic web technologies afford information processing agents

Yaoli Du

TU Braunschweig, Germany



Research ethics education using scientific communication: the case of Kyushu University in Japan

Toshiya Kobayashi

Kyushu University



Shareable health data dashboard for social support during grief recovery

Angelos Chatzimparmpas1, Sam Muller2, Sanne Schoenmakers3

1Utrecht University, Netherlands, The; 2University Medical Center Utrecht, Netherlands, The; 3Eindhoven University of Technology, Netherlands, The



Speculative Ethics; Practicing philosophy of technology in design education

Wouter Eggink

University of Twente, Netherlands, The



Traffic lights: from social justice to digital surveillance

Victoria Lobatyuk

Peter the Great St.Petersburg Polytechnic University, Russian Federation



Where am I? Self and Attention in the Digital Net Culture

Domenico Schneider

TU Braunschweig, Germany

 
11:50am - 12:50pm(Symposium) Third wave continental philosophy of technology
Location: Blauwe Zaal
 

Third wave continental philosophy of technology - Part III

Chair(s): Pieter Lemmens (Radboud University), Vincent Blok (Wageningen University), Hub Zwart (Erasmus University), Yuk Hui (Erasmus University)

Since its first emergence in the late nineteenth century (starting with Marx, Ure, Reuleaux and Kapp and coming of age throughout the twentieth century via a wide variety of authors such as Dessauer, Spengler, Gehlen, Plessner, the Jünger brothers, Heidegger, Bense, Anders, Günther, Simondon, Ellul and Hottois), philosophy of technology has predominantly sought to think ‘Technology with a capital T’ in a more or less ‘metaphysical’ or ‘transcendentalist’ fashion or as part of a philosophical anthropology.

After its establishment as an academic discipline in its own right from the early 1970s onwards, philosophy of technology divided itself roughly into two different approaches, the so-called ‘engineering’ approach on the one hand and the so-called ‘humanities’ or ‘hermeneutic’ approach on the other (Mitcham 1994).

Within this latter approach, the transcendentalist framework remained most influential until the early 1990s, when American (Ihde) and Dutch philosophers of technology (Verbeek) initiated the so-called ‘empirical turn’, which basically criticized all macro-scale or high-altitude and more ontological theorizations of technology such as Heidegger’s Enframing and Ellul’s Technological Imperative as inadequate and obsolete, and instead proposed an explicit move toward micro-scale and low-altitude, i.e., empirical analyses of specific technical artefacts in concrete use contexts (Achterhuis 2001).

From the 2010s onwards, this empirical approach has been reproached for obfuscating the broader politico-economic and ontological ambiance. Particularly European philosophers of technology expressed renewed interest in the older continentalist approaches and argued for a rehabilitation of the transcendental or ontological (as well as systemic) question of technology (Zwier, Blok & Lemmens 2016, Zwart 2021), for instance in the sense of the technosphere as a planetary technical system responsible for ushering in the Anthropocene or Technocene (Cera 2023), forcing philosophy of technology to think technology big again (Lemmens 2021) and calling not only for a ‘political turn’ (Romele 2021) but also for a ‘terrestrial turn’ in the philosophy of technology (Lemmens, Blok & Zwier 2017).

Under the influence of, among others, Stiegler’s approach to the question of technics (Stiegler 2001), Hui’s concepts of cosmotechnics and technodiversity (Hui 2016) and Blok’s concept of ‘world-constitutive technics’ (Blok 2023), we are currently witnessing the emergence of what may be called a ‘third wave’ in philosophy of technology which intends, in dialectical fashion, to surpass the opposition between transcendental and empirical, and instead engages in combining more fundamental approaches to technology and its transformative, disruptive and world-shaping power with analyses of its more concrete (symptomatic) manifestations.

This symposium aims to open a debate among authors exemplifying this third wave, with a view to the contemporary intimate technological revolution, specifically focusing on the themes of technology and human identity, human nature, agency and autonomy, artificial intelligence, robots and social media, and the environment and sustainability.

 

Presentations of the Symposium

 

Philosophy of technology today: ethics without self

Amelie Berger-Soraruff
Maison Francaise d'Oxford

This paper argues that the shared enthusiasm for applied ethics in philosophy of technology has led scholars to develop a technicist account of ethics. This approach, I claim, can cause more harm than benefit for individuals, for the measures developed through this type of ethics often lack a solid account of the self and a sustained consideration of subjective experience. I will defend Stiegler’s work on the ethics of the self as potentially filling this ethical vacuum.

The continental and analytical traditions of philosophy of technology are typically said to hold opposite views. On the one hand, ‘Humanities philosophy of technology’ accepts the primacy of the human over technologies and elaborates a discourse that is essentially anthropocentric, while on the other hand, the neo-empirical branch aims to understand technology from the very point of view of technologies (Mitcham 1994).

Recently, a variety of thinkers have challenged this picture to propose a much more symbiotic relationship between the human and its technological environment (Latour; Ihde; Verbeek). It is because both continental and analytical traditions sense that technology not only shapes individual lives and social institutions (Kroes et al.2008) but also recasts and transforms the core foundations of human existence, that they equally fear the impact of technoscientific advancements on our ‘humanity’ and all the values, ideals, concepts, and goals we hold dear. For these reasons, the field of applied ethics, with its intention to propose effective solutions to urgent social problems, has become the main way, if not the only way, to do philosophy of technology.

This, I argue, has led to a paradox. That of failing to propose a coherent vision of selfhood and engage with the complexity of subjective experience, while deploying unprecedented efforts to ‘protect the human’. Meanwhile, our supposedly life-enhancing innovations, policies, and procedures keep failing or deceiving us (Rodgers and Bremner 2018; Andre Gorz 2001; Enzo Mari 2012), and some thinkers have come to regret how the technologization of ethics is flattening the complexity of human experience (Chun 2006; Harari 2018).

In this paper, I present Stiegler as a “philosopher of the self.” Indeed, he has shown that alienation, madness, violence and despair are the prices we pay when we neglect subjective life (2010; 2012; 2013). I argue that his background in phenomenology, coupled with his engagement with Foucault’s ethics and his concern for cultural/existential meaning, is valuable for thinking of the human as a complex subject and not simply as a set of organs (which is often the option preferred by scholars) or the mere expression of its technological milieu. In fact, his work may succeed in articulating a broad account of technology, both theoretical and practical, without neglecting the necessity to reflect on who we are aside from being technological beings. In this respect, Stiegler’s work is fit to address what I see as a shortcoming in the current study of technology, which is reluctant to address the question of the subject, at the risk of proposing inadequate solutions and deepening the crisis we are already in.

 

Rationality after the ‘algorithmic turn’

Natalia Juchniewicz
University of Warsaw

The concept of rationality has been questioned and criticised in twentieth-century philosophy because of its association with the figure of domination (Herrschaft). To possess reason is to break with myth and to dominate nature (both external and internal). At the same time, the development of rationality based on the principles of purposive-instrumental thinking leads to a purely economic and quantitative perspective on what technology is supposed to serve. Adorno and Horkheimer criticise the principle of rationality and the absolute faith in reason, pointing out the difficulty of freeing human thought from the myth that recurs on another, ideological-economic level.

Habermas, on the other hand, seeks to rehabilitate reason by pointing to its communicative dimension, which makes it possible to establish interpersonal bonds and mutual recognition, thus fulfilling the condition of respect for human dignity and non-instrumentalisation (in the spirit of Kant's categorical imperative and Hegel's philosophy of recognition). At the same time, Habermas observes that in modernity a colonisation of the lifeworld by the logic of systems takes place, along with a disconnection of expert languages from the experience of 'ordinary people', creating a gap between theory and practice.

In my paper I want to raise the main question of what we mean by rationality today, when the process of decision making (from everyday choices to political and economic strategies), action planning or problem solving is coupled with technology, including learning artificial intelligence. I call this set of phenomena the 'algorithmic turn', which at the same time reorganises our thinking about what it means to be a thinking and learning subject (in the sense of a discussion about whether artificial intelligence is intelligence along the lines of, or in contrast to, human intelligence).

The following questions emerge from the main one: Is rationality currently experiencing a renaissance by becoming digital and algorithmic – in other words, is there more rationality because we have tools that realise the ideal of instrumental reason, but also means of communication that allow the coordination of plans and strategies between people on a large scale – or do the phenomena of the algorithmisation of thought and action diminish the agency of the subject and the autonomy of human thinking? Is the notion of algorithmic rationality self-contradictory? Does it convey a sense similar to instrumental or communicative rationality, or is it an entirely new way of thinking about thinking?

I will try to answer these questions by pointing to the relationship between technology and thinking, memory, action and environment, as well as transindividuality and sensibility (based on the theories of Heidegger, Stiegler, Simondon and Hui), in order to show the positive potential of digital and algorithmic rationality as a concept. At the same time, I will draw attention to the important critical dimension of algorithmic rationality and its limitations in more socially-oriented philosophy (Han, MacIntyre).

 

For the Ontological Rehabilitation of the Techno-Aesthetic Feeling

Andrea Zoppis
University of Ferrara

Building on insights from Maurice Merleau-Ponty in the late 1960s, this paper begins by rethinking the value and role of his phenomenological method concerning the question of technology. Although not explicitly recognised as a philosopher of technology, Merleau-Ponty’s approach proves to have great potential for describing and explaining the role of technology in human life, especially concerning its impact on the intercorporeal dimension of relations among humans, non-human entities, and the environment. Considering Merleau-Ponty’s programmatic announcement of an ‘ontological rehabilitation of the sensible’ (Merleau-Ponty 1964), I will first show how the ‘sensible’ – particularly in light of the so-called ‘intimate technological revolution’ – becomes a crucial ground to interrogate in order to assess the actual impacts of technology on human existence.

I will next turn to Gilbert Simondon’s notion of techno-aesthetic feeling (Simondon 2012) to further develop this insight. This will allow me to understand the necessity for technology itself to undergo an ontological rehabilitation that embraces its sensible, affective and imaginative implications. In this respect, I will consider Simondon’s theory of the ‘genetic cycle of images’, as presented in Imagination and Invention (Simondon 2023), to highlight some fundamental implications concerning the mental images that underpin technological invention, especially according to what Simondon describes as their quasi-organismicity. This ontological reconsideration of technology’s sensible and imaginal matrix will highlight how deeply it penetrates the human experience.

In this regard, I will then consider Mikel Dufrenne’s philosophy to examine the aesthetic impacts (aisthesis) of technology on human sensibility and affectivity. In particular, I will consider the notion of the quasi-subject (Dufrenne 1973) to account for a profound agentic power of technical objects that underlies their more explicit technical functionalities. Through their sensible and expressive configurations, these objects surreptitiously create a ‘neo-environment’ for human beings (Cera 2023), fundamentally shaping our experience. Human beings are, therefore, prey to a technological process of psycho-sociological alienation (Simondon 2017) or ‘anesthetization’ (Dewey 2005, Montani 2014), gradually becoming quasi-objects (Dufrenne 1966).

I will therefore argue that it is only through an ontological reconsideration of the sensible, affective and imaginative charge intrinsic to technical objects that it becomes possible to direct the ‘intimate technological revolution’ towards an emancipatory outcome. Following Merleau-Ponty, Simondon, and Dufrenne, this paper advocates for a philosophical technical culture that preserves the primordial contact with sensible intercorporeality and with the terrestrial environment to which technologies intrinsically belong.

 
11:50am - 12:50pm(Symposium) The History of the Philosophy of Technology: History and Historicity of the Empirical Turn
Location: Auditorium 15
 

The History of the Philosophy of Technology: The Empirical Turn and the historization of the philosophy of technology

Chair(s): Darryl Cressman (Maastricht University, Netherlands, The)

The History of the Philosophy of Technology posits the philosophy of technology as a wide-ranging and comprehensive field of study that includes both the philosophical study of particular technologies and the different ways that technology, more broadly, has been considered philosophically. Influenced by the history of the philosophy of science, the history of ideas, and the history of the humanities, our aim is to examine how different individuals and traditions have thought about technology historically. This includes, but is not limited to: the work of different thinkers throughout history, both well-known and overlooked figures and narratives, including non-western traditions and narratives that engage with technology; analyzing the cultural, social, political, and sociotechnical contexts that have shaped philosophical responses to technology, including historical responses to new and emerging technologies; exploring the disciplines and intellectual traditions whose impacts can be traced across different philosophies of technology, including Science and Technology Studies (STS), the history of technology, critical theory, phenomenology, feminist philosophy, hermeneutics, and ecology, to name only a few; histories of different "schools" of philosophical thought about technology, for example French philosophy of technology, Japanese philosophy of technology, and Dutch philosophy of technology; mapping the hidden philosophies of technology in the work of philosophers (e.g. Foucault, Arendt, Sloterdijk) and traditions whose work is not often associated with technology (e.g. German idealism, logical empiricism, existentialism, lebensphilosophie); and, exploring the contributions of literature, art, design theory, architecture, and media theory/history towards a philosophy of technology.

This panel focuses on the history and historicity of the empirical turn in the philosophy of technology. The papers in this panel explore, from one perspective, the long history of the empirical turn in Dutch philosophy of technology, and from another perspective, the influence of the empirical turn on how the history of the philosophy of technology is conceptualized.

 

Presentations of the Symposium

 

The Long History of the Empirical Turn: Dutch Philosophy of Technology, 1930-1990

Massimiliano Simons
Maastricht University

The field of philosophy of technology currently understands itself as having made an 'empirical turn', associated with a number of edited volumes published by Dutch philosophers of technology (Kroes & Meijers 2000; Achterhuis 2001). What is often overlooked, however, is that these volumes are the product of a longer tradition of Dutch engineers reflecting on technology. The aim of this talk is to map how these early philosopher-engineers and their institutions shaped the 'empirical turn' as we know it today. I will do this in two ways.

First, I will highlight the work of a number of Dutch engineer-philosophers who developed philosophies of technology before the 1990s. A first case is Arie Korevaar (1886-1964), a Delft engineer who developed an early philosophy of technology in Techniek en Wereldbeschouwing (1934). A second case is Hendrik van Riessen (1911-2000), who, in his doctoral dissertation Filosofie en Techniek (1949) and in subsequent works (Van Riessen 1952, 1953), developed a philosophy of technology based on Dutch reformational philosophy, an influential neo-Calvinist philosophical movement in the twentieth-century Netherlands. Finally, there is the case of Andries Sarlemijn (1936-1998), who collaborated extensively with Peter Kroes in Eindhoven in the 1980s and 1990s (Sarlemijn & Kroes 1988; 1990). Sarlemijn devoted himself to the history of science and technology, developing a typology of technology (Sarlemijn 1984) that was rooted in a theory of analogy (Sarlemijn & Kroes 1988) and resulted in a multifactorial model that sought to identify both internal and external factors in the development of technology (Sarlemijn 1993).

Second, I want to situate these cases in a broader institutional history by looking at the emergence of philosophy chairs, journals, and workshops in the philosophy of technology in the Netherlands. The Dutch Reformed Church, for example, sponsored several chairs in the philosophy of technology in the Netherlands from the 1950s onward, especially in Delft and Eindhoven, occupied by Van Riessen and others. Even more important was the role of the Royal Dutch Society of Engineers (KIVI). Within this institution, groups reflecting on philosophy and technology date back to the 1960s, but the main breakthrough came in the early 1990s with the creation of a subsection on 'Filosofie & Techniek' (see Van Gijn & Eekels 2000). It was this section that was responsible for the institutional support of the empirical turn, by sponsoring the workshop and lecture series that formed the basis of the two edited volumes (Kroes & Meijers 2000; Achterhuis 2001) and by supporting a number of chairs and publication venues which were crucial for this new empirical turn to declare itself.

Taken together, then, these two lines of argument will show how the identity of the "empirical turn" that solidified around 2000 was a product of a longer history of philosophy of technology in the Netherlands, deeply rooted in the practices and ideas of engineers.

References

Achterhuis, H. (Ed.) (2001). American philosophy of technology: the empirical turn. Indiana University Press.

Gijn, J. van, & Eekels, J. (2000). Hebben ingenieurs nog meer te vertellen? : werkzaamheden en bevindingen van de afdeling Filosofie en Techniek van het Koninklijk Instituut van Ingenieurs. Damon.

Korevaar, A. (1934). Techniek en wereldbeschouwing. De Erven F. Bohn N.V.

Kroes, P., & Meijers, A. (Eds.) (2000). The empirical turn in the philosophy of technology. JAI.

Kroes, P., & Sarlemijn, A. (eds.) (1984). Dynamica van de technische wetenschappen: de wisselwerking tussen wetenschap en techniek vanuit wetenschapsfilosofisch standpunt. TWIM-onderzoekscentrum.

Riessen, H. van (1949). Filosofie en techniek. Kampen.

Riessen, H. van (1952). Roeping en probleem der techniek. J.H. Kok.

Riessen, H. van (1953). De maatschappij der toekomst. Wever.

Sarlemijn, A. (1984). Historisch gegroeide relaties tussen natuurwetenschap en techniek. TWIM-onderzoekscentrum.

Sarlemijn, A. (1993). Designs are cultural alloys: STeMPJE in design methodology. In M.J. de Vries, N. Cross, & D.P. Grant (Eds.), Design Methodology and Relationships with Science (pp. 191-248). Kluwer Academic Publishers.

Sarlemijn, A., & Kroes, P. (1988). Technological Analogies and their Logical Nature. In P. Durbin (Ed.), Technology and Contemporary Life. Philosophy and Technology. Springer.

Sarlemijn, A., & Kroes, P. (eds.) (1990). Between Science and Technology. North-Holland Delta.

 

Regimes of Historicity of Technology: Toward an Epistemology of the History of the Philosophy of Technology

Agostino Cera
University of Ferrara

My contribution starts with recognizing the need to develop an epistemology of the history of the philosophy of technology. This entails a self-reflective effort to move it definitively beyond a merely chronicling dimension (i.e. “antiquarian-monumental”, according to Nietzsche’s classification in the second of the Untimely Meditations) toward a fully critical dimension. For this transition to occur, the strictly historical component must be complemented by a hermeneutic one, that is, engaging with the past not merely as memory (recording and preservation) but also as action (taking a stance and even passing judgment). Unlike simple historiography, historicity depends primarily on how we choose to view the past. In the field of the philosophy of technology, integrating this hermeneutic dimension translates into the question: “How many and what meanings have been attributed to the term ‘technology’?”

From an operational standpoint, an important precedent in this field is the work of Carl Mitcham, who divides contemporary philosophy of technology into two fundamental approaches: the “engineering approach” on one hand and the “humanities” or “hermeneutic approach” on the other. My proposal involves broadening the chronological scope considered by Mitcham.

One of the major dividing lines that, often unconsciously, organizes and distinguishes various philosophies of technology is precisely historical. Specifically, whether the history of technology (i.e. the ways in which the concept of “technology” has been conceived over time) should be seen through the lens of continuity or discontinuity. These methodological approaches correspond to two fundamental meanings of technology: 1) As an anthropological constant; 2) As an epochal phenomenon.

In my view, with the rise of the empirical turn and later post-phenomenology, we find ourselves in a context dominated by a continuist and monothematic approach, viewing technology essentially – if not exclusively – as an anthropological constant. Technicity is interpreted as the cornerstone of humanization, the evolutionary success of our species. Anthropogenesis and technogenesis are treated as synonymous. Based on this assumption, any concrete technology from any era (from flint to the atomic bomb, from axes to AI) is seen as an expression or manifestation of this technicity and, as such, of the same "human nature". This implies that, in principle, no technology can be rejected. The ontological naturalization of technology thus becomes a vehicle for its hermeneutic neutralization, reflecting a tendency toward apologetics and justificationism, characteristic of the current mainstream in this field of study.

The alternative is a discontinuist history that, without denying the value of technology as an anthropological constant, identifies breaks in continuity within its history and development. These involve “paradigm shifts” (Kuhn) and “regimes of historicity” (Hartog), which result in substantial transformations, both semantic and hermeneutic, of what we continue to denote with a single term: “technology”.

To make this discussion more concrete, I offer three examples of this critical, discontinuist, and polysemic historicization.

The first, and most famous, is Martin Heidegger’s distinction between classical technology, characterized by Hervorbringen (bringing forth), which mimetically follows natural models, and modern technology, characterized by Herausfordern (challenging), which opposes and seeks to surpass and replace natural models titanically.

The second example is Lewis Mumford’s tripartition of technical regimes, based on Patrick Geddes’ work. In Technics and Civilization, Mumford distinguishes between eotechnic (the “clock age”), paleotechnic (the “steam age”), and neotechnic (the “electric age”).

The third example is Jacques Ellul, whose phenomenology of technology identifies a historical progression from the “technical operation” (“which includes every operation carried out in accordance with a certain method to achieve a particular end”) to the “technical phenomenon” (“which introduces the technological ratio operandi in any human context, that is, ‘in every field men seek to find the most efficient method’”) and to the “technical system” (“technology having become a universum of means and media, it is in fact the environment [milieu] of man”).

 
11:50am - 12:50pm(Symposium) Postphenomenology III: new theoretical horizons
Location: Auditorium 14
 

Postphenomenology III: new theoretical horizons

Chair(s): Bas de Boer (University of Twente, Netherlands, The)

Postphenomenology is a methodological approach that seeks to understand human-technology relations by analyzing the multiple ways in which technologies mediate human experiences and practices. Postphenomenology strives to continuously update itself in light of technological developments, arising socio-political issues, and emerging theoretical issues. The three papers in this panel explore new theoretical horizons and investigate how postphenomenological research can broaden its scope and respond to emerging socio-technical challenges.

One of the key accomplishments of postphenomenology is the development of a vocabulary for analyzing technologies in use, by focusing on how they become part of human embodiment, give rise to particular forms of sedimentation and resulting habits, or more generally make users perceive the world in a particular way. The conceptual repertoire of postphenomenology is heavily shaped by the work of Edmund Husserl, Martin Heidegger, Maurice Merleau-Ponty, and more recently also by that of Bruno Latour. The rationale behind this panel is that a more explicit engagement with other thinkers helps to expand postphenomenology’s conceptual repertoire, enables responses to some recurring criticisms of postphenomenology, and enables an analysis of technologies beyond their direct usage.

The three papers in this panel each engage with a different thinker to expand postphenomenology. The first paper compares Ihde’s theory of technological mediation with Hegel’s theory of mutual recognition. It is argued that Hegel’s account of intersubjectivity could present a critical expansion to Ihde’s account of technology-mediated intentionality, equipping postphenomenology with a better answer to the recurrent critique of its inattention to the socio-historical dimension of technology. The second paper mobilizes the work of Jean-Paul Sartre to analyze the phenomenon of griefbots that can provide a post-mortem ‘digital self’ with which others can interact. Using Sartre’s understanding of death, the paper asks: Do griefbots create new ways of grieving and controlling one’s legacy or rather make explicit existing tensions in how we approach our legacy and relate to the dead? The third paper shows how the notion of tertiary retentions as developed in the work of Bernard Stiegler can help postphenomenology to develop an account of temporality that it currently lacks. It is argued that developing this account is especially relevant for postphenomenological analyses of digital technologies.

 

Presentations of the Symposium

 

The technical artefact mediating between Hegel and Ihde

Fernando Secomandi
Delft University of Technology

In this presentation, I expand postphenomenology’s concept of technology-mediated intentionality, pioneered by Don Ihde, by engaging with G.W.F. Hegel’s concept of mutual recognition. Hegel’s social and political philosophy has gained increasing interest in the contemporary philosophy of technology, particularly for examining human relations with artificial intelligence and other automated technologies. Postphenomenology, in turn, is a well-established post-Heideggerian perspective that emphasizes the non-neutral agency of human-made artifacts in shaping experiences of the world and the self.

I demonstrate how both Ihde and Hegel are concerned with how “otherness” mediates humans’ subjective experiences. But while Ihde focuses on the non-human other (i.e., technical artifact), Hegel highlights the human other (i.e., self-consciousness).

For Ihde, my analysis centers on his interpretation of the Husserlian concept of intentionality and development of a variational methodology to analyze multistable visual phenomena in Experimental Phenomenology (1986). These contributions laid the groundwork for his later phenomenological descriptions of the mediating role of technical artifacts in Technics and Praxis (1979), Technology and the Lifeworld (1990), and other works.

For Hegel, I focus on his accounts of mutual recognition in the Phenomenology of Spirit (1807) and other posthumously published writings, with particular attention to the passage commonly known as the “master-slave” (or lord-bondsman) dialectic. Contrary to dominant interpretations of this passage, I argue that Hegel develops an early account of technology-mediated recognition through the slave’s activity of self-objectification (i.e., work) under coercion from the master. According to this original interpretation, self-consciousness ultimately evolves through the mediation of an opposing self-consciousness, but this experience is indispensably shaped by the jointly formed technical object.

I conclude my talk by addressing past critiques of postphenomenological research, particularly its alleged inadequacy in engaging with the political and historical dimensions of human-technology relations. I argue that many of these critiques stem from the absence of an intersubjective foundation for examining the interplay between human intentionality and technological mediation. Efforts to establish such a foundation are currently being pursued by various researchers and perspectives. A Hegelian perspective on recognition could contribute to this endeavor by elucidating how human subjectivity is transformed through technologically mediated encounters with other humans.

 

My life continues without me: Sartre on death and personally-curated griefbots

Kirk Besmer
Gonzaga University

There are several companies offering to produce a ‘digital twin’ of you: data is collected through interviews or questionnaires and then algorithmically collated into a digital version of your personality that can interact with others through text, voice, and/or video. While they are marketed as having multiple uses, the salient use is to provide a post-mortem ‘digital self’ with which others can interact. These personally-curated griefbots are the focus of this paper. In keeping with the conference theme, one could hardly consider a more intimate technology than an ‘algorithmic echo’ of one’s personality with which others interact in meaningful ways after one’s death.

This paper will examine personally-curated griefbots by considering Sartre’s understanding of death as presented in Being and Nothingness, a work that he calls “An Essay in Phenomenological Ontology.” Bringing Sartre’s ontology of the self into postphenomenological analyses provides another lens through which to examine certain technologies, particularly griefbots. Sartre’s ontology of the self, centered on freedom and facticity, emphasizes how others fundamentally shape our being. Upon recognizing "the look" of another, I must acknowledge my own objectification – my freedom becomes alienated by “the other.” It is not just that I can never know what others truly think of me. That is certainly true whether one has read Sartre or not. For Sartre, what makes my objectifications by others so challenging to my freedom is that I cannot deny and must accept that my ‘being-for-others’ is a constitutive aspect of my being. This permanent alienation at the heart of my existence initiates a range of irremediable tensions in human relationships: from antagonistic negotiations (at best) to interminable conflict (at worst).

While living, my ‘being-for-others’ is an ontological dimension of my existence that, while alienating, can be transcended. Given the ontological structure of the self for Sartre, however, death marks the final and complete triumph of my being-for-others over my being-for-itself. In death, I become nothing more than an object for others to determine in their stories, memories, and beliefs about me. In short, upon death, my being is exhausted in my being-for-others, which is the ultimate and final alienation of my freedom.

Personally-curated griefbots appear to offer some control over one’s complete and total objectification in death. Using Sartre's understanding of death, this paper asks: Do these technologies create novel ways of anticipating and controlling one’s legacy and of grieving for others? Or rather, do they make explicit existing tensions in how we approach our own legacy and how we relate to the dead?

 

Postphenomenology and temporality: digital technologies and tertiary retentions

Bas de Boer
University of Twente, Netherlands, The

The hypothesis of this paper is that postphenomenology lacks an account of temporality, and hence is unable to analyze how technologies mediate human-world relations over time. Although there is some work on the relations between technologies, sedimentation, and habit formation, temporality is not thematized in itself. This talk shows how the work of Bernard Stiegler can form a starting-point for articulating the temporal dimension of human-technology relations. I will specifically focus on the temporal dimension of digital media: the network of technologies that enables the transmission of digital content (e.g., social media, mobile applications).

For Stiegler, technologies essentially are mnemotechnologies that are constitutive of memory. Digital media shape what Stiegler calls tertiary retentions: they give rise to particular ways of anticipation and perception. Focusing on this notion reveals the close connection between Stiegler and Husserl’s phenomenology of time consciousness, shows how Stiegler’s analysis of technics finds its basis in phenomenology, and clarifies its relevance for understanding the temporality of technics. In this talk, I suggest that Stiegler’s approach to the temporality of technics forms an important addition to postphenomenology for two reasons: (1) it enables us to recognize that technological artefacts are often part of larger technological infrastructures that structure temporality, and (2) it helps articulate why specific technological infrastructures might have undesirable consequences, for instance by pointing to what Stiegler has called the industrialization of memory.

The talk is structured as follows. First, I argue that the issue of temporality is typically neglected in postphenomenology and show why this is a problem. Second, I will outline the basics of Stiegler’s understanding of technics, particularly focusing on how he conceptualizes the relationship between technics and memory. Third, I show how he updates Husserl’s analysis of time-consciousness through the introduction of the notion of tertiary retention. Fourth, I argue that, in updating Husserl in this way, Stiegler’s work enables a phenomenological analysis of the temporality involved in contemporary digital media. Fifth, I show how Stiegler’s analysis can augment the postphenomenological approach to analyzing human-technology relations.

 
11:50am - 12:50pm(Symposium) TechnoPedia - an online Philosophy and Ethics of Technology Encyclopedia
Location: Auditorium 13
 

Introducing the 4TU.ethics encyclopedia of philosophy and ethics of technology

Chair(s): Jochem Zwier (Wageningen University & Research, the Netherlands), Vincent Blok (Wageningen University & Research, the Netherlands), Udo Pesch (Technical University Delft), Wybo Houkes (Eindhoven Technical University)

During this one hour symposium, we will present a novel 4TU.ethics project to develop and publish an online encyclopedia for philosophy and ethics of technology. The session aims to introduce and raise awareness about the project, showcase upcoming lemmas, and, importantly, to gather input and feedback from participants.

First, we will present the rationale, relevance, and ambitions behind this project. Although many good handbooks and introductions to philosophy and ethics of technology now exist, these can be difficult to find and navigate for those who are new to the field. The 4TU.ethics encyclopedia is designed in such a way that lemmas serve as a ‘primer’ or ‘first entry point’ for students or researchers shifting to a novel subject or domain. The latter is particularly significant in light of the fact that philosophy and ethics of technology are increasingly practiced outside technical universities as well. Having a focused point of entry then not only helps orient newcomers, but also increases visibility of existing expertise and work, thus reducing the risk of reinventing the wheel.

Second, the symposium will present several upcoming lemmas, written and reviewed by experts, maximally devoid of jargon and around 5000 words in length, which offer students basic but up-to-date introductions to the a) history, b) systematic discussions, and c) contemporary controversies of a particular topic. By showcasing an operational version of our online encyclopedia and by reporting on developments down the road, we hope to initiate a dialogue with participants regarding the usability and potential improvements of the project as it currently exists, and to extend an open invitation to contribute to this encyclopedia.

Participants in the symposium include members of the editorial board (dr. Jochem Zwier (Wageningen University) – project coordinator; dr. Udo Pesch (TU Delft); Prof. dr. Vincent Blok (Wageningen University); and Prof. dr. Wybo Houkes (TU Eindhoven)), as well as several lemma authors (Prof. dr. Sabine Roeser (TU Delft); Lucien von Schomburg (University of Greenwich); Mariska Bosschaert-Bakhuizen (Wageningen University)).

 

Presentations of the Symposium

 

[no separate papers in this symposium, see NB below]

Jochem Zwier
Wageningen University & Research, the Netherlands

[no separate papers in this symposium, see NB below]

 
11:50am - 12:50pm(Papers) Phenomenology II
Location: Auditorium 12
Session Chair: Tom Hannes
 

A better self: transhumanism and deincarnation.

Orane Kail

Université Vincennes - Saint-Denis, France

Through exploring prospects hinted at in sci-fi popular culture, but more so through a critical analysis of some transhumanist ideas, this paper aims to develop a critical reflection on a new ideal that I’ll call “deincarnation”. Deincarnation refers to the literal negation of one’s embodied lived experience or incarnate condition; it acquires its meaning in opposition to that condition.

The fact that as of now we exist internally and interact externally with a body, through this body, may be challenged by technological progress, or at least such a challenge is presented as possible – and for some, desirable. Through the transhumanist discourse on the hope of one day being able to upload one’s consciousness into a machine (whether a digital network or a robot), there is the fantasy that technology may help us achieve immortality, and that such an achievement should come at the cost of having a flesh-and-bones body. Because our mortality is understood as stemming from our bodily condition, the body can and should be disregarded as being only the obsolete vessel for and even a limitation on our immortal consciousness – our soul.

The intimate relationship that one develops with their body through it being their lived experience is presented as essentially lesser than the possibility of a disincarnate, “purer” existence outside of it. In this sense, robots are often depicted as such a purer form of existence, by allowing the dematerialized human inside of them to have a body that is not bodily in any way: the blood is replaced by a blue or white fluid, clean cables replace gooey viscera, and female-presenting robots have no body hair. Anything that expresses a body’s flesh (smell, fluids, weight, age) is seen as undeserving of carrying a consciousness that must be of another nature, an ethereal digital essence.

Perfection, on this view, should come from rejecting the limitations that embodiment imposes on our hypothetically unlimited being. But is our embodied existence so much less deserving than our spiritual and rational one? Are the two not the indissociable entirety of one’s self? This critical analysis of the idea that perfection resides in the negation of incarnate existence (i.e. existence itself) will underline the subtextual revival of a Cartesian dualism in the prophetic ambitions of transhumanism as a discourse. It will also rely heavily on gender theory as well as phenomenology to discuss the rejection of our intimate, bodily-experienced life that the transhumanist proposal of deincarnation amounts to.

Bibliography:

- BOSTROM Nick, The Fable of the Dragon Tyrant, 2005. Superintelligence, 2014.

- HARAWAY Donna, Cyborg Manifesto, 1984.

- CLARK Andy, Natural-Born Cyborgs, 2004.

- MERLEAU-PONTY Maurice, Phenomenology of Perception, 1945.

- FOUCAULT Michel, History of Sexuality, 1984.

- WACHOWSKI Lana and Lilly, The Matrix, 1999-2021.

- FARGEAT Coralie, The Substance, 2024.



Psychopathology, criminalization and portable technologies among people experiencing homelessness and mental illness: a postphenomenological analysis

Vincent Laliberté

McGill University, Canada

The number of people experiencing homelessness is increasing in cities across the Western world (Lancet Public Health 2023), accompanied by high rates of mental illness (Fazel, Geddes, and Kushel 2014) and criminalization (Gaetz 2013). While the lack of affordable housing (Colburn and Page Aldern 2022) is widely recognized as the primary driver of this crisis, less attention has been paid to how individuals’ lives are increasingly enmeshed with technologies and how this may contribute to psychopathology or to criminalization.

This presentation examines how portable technologies shape the psychic life and social reintegration of individuals experiencing homelessness and mental illness. The analysis is based on long-term ethnographic work in a shelter-based clinic in Montreal with Simon, a patient I followed as both a psychiatrist and an anthropologist. Over a one-year period, during which Simon transitioned from living in the shelter to securing an apartment, he was convicted of sending a series of text messages to his ex-girlfriend in violation of a restraining order. He was also required to wear an ankle bracelet for almost a year, a condition that brought numerous challenges, including jeopardizing his part-time job as a funeral service provider, where he constantly risked crossing into restricted areas.

I employ a postphenomenological framework to analyze Simon’s interactions with a smartphone (Richardson 2020; Wellner 2015) and the proximity alert bracelet; the latter, developed during the COVID pandemic for contact tracing, is now increasingly used in Canada (Griffiths 2023) and globally. The concept of “multistability” (Rosenberger and Verbeek 2015) helps illuminate the significant disconnect between the intended purpose of these technologies and Simon’s daily relations with them. I also draw on Robert Rosenberger’s (2017) work on “callous objects”, which explores how certain objects and technologies subtly limit access to public spaces for people experiencing homelessness.

I argue that Simon’s psychopathology and criminal behaviours are deeply entangled with these technologies, which calls for a nuanced understanding of their impact on marginalized populations. In addition to having broad implications for informing public policy, this presentation also shows how postphenomenology has untapped and much-needed potential for advancing the fields of psychiatry and psychiatric anthropology.

References

Colburn, Gregg, and Clayton Page Aldern. 2022. Homelessness Is a Housing Problem: How Structural Factors Explain U.S. Patterns. University of California Press.

Fazel, Seena, John R Geddes, and Margot Kushel. 2014. “The Health of Homeless People in High-Income Countries: Descriptive Epidemiology, Health Consequences, and Clinical and Policy Recommendations.” The Lancet 384 (9953): 1529–40. https://doi.org/10.1016/S0140-6736(14)61132-6.

Gaetz, Stephen. 2013. “The Criminalization of Homelessness: A Canadian Perspective.” European Journal of Homelessness, 357–62.

Griffiths, Nathan. 2023. “Use of Electronic Monitoring Bracelets Has Surged in B.C. Here’s How They Work.” Vancouver Sun, 2023, Nov 14 edition.

Richardson, Ingrid. 2020. “Postphenomenology, Ethnography, and the Sensory Intimacy of Mobile Media.” In Reimagining Philosophy and Technology, Reinventing Ihde, edited by Glen Miller and Ashley Shew, 159–74. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-35967-6_10.

Rosenberger, Robert, and Peter-Paul Verbeek. 2015. “A Field Guide to Postphenomenology.” In Postphenomenological Investigations: Essays on Human-Technology Relations, edited by Robert Rosenberger and Peter-Paul Verbeek. Lanham: Lexington Books.

Rosenberger, Robert. 2017. Callous Objects: Designs against the Homeless. University of Minnesota Press.

The Lancet Public Health. 2023. “Homelessness in Europe: Time to Act.” The Lancet Public Health 8 (10): e743. https://doi.org/10.1016/S2468-2667(23)00224-4.

Wellner, Galit P. 2015. A Postphenomenological Inquiry of Cell Phones: Genealogies, Meanings, and Becoming. Lanham: Lexington Books.

 
11:50am - 12:50pm(Symposium) A political (re-)turn in the philosophy of engineering and technology
Location: Auditorium 9
 

A Political (Re-)Turn in the Philosophy of Engineering and Technology - Political liberal philosophy

Chair(s): Lukas Fuchs (University of Stirling)

Technological and engineering choices increasingly determine our world. Presently, this affects not only our individual well-being and autonomy but also our political and collective self-constitution. Think of digital technologies like social media and their combination with AI, the corresponding echo chambers and filter bubbles, deep fakes and the current state of liberal democracy and the rise of authoritarian governments. Despite nation states having to reframe sovereignty in a globalised world (Miller, 2022), there is the potential for impactful collective action with regard to technological choices and practices of engineering, so that a simple form of technological determinism is to be discarded. In this light, the current focus of ethically normative philosophy of technology on individual action and character is alarmingly narrow (Mitcham, 2024). We urgently need a political (re-)turn in the philosophy of engineering and technology and, correspondingly, a turn towards engineering and technology in disciplines that reflect on the political sphere (Coeckelbergh, 2022).

To foster such a political (re-)turn in the philosophy of engineering and technology, we propose a panel at the SPT 2025 conference that brings together different theoretical perspectives and approaches that reflect the necessary diversity of such a political (re-)turn. We aim both to examine the contribution of applied political philosophy (e.g. political liberalism; Straussian political philosophy) to the question of technological disruption, and to offer a roadmap for an explicitly political philosophy of technology that engages, for example, with the ways that AI will change the nature of political concepts (e.g. democracy, rights) (Coeckelbergh, 2022; Lazar, 2024). With global AI frameworks already shaping the global political horizon, it is pertinent to acknowledge and assess the current relationship between engineering, technology and politics. The panel might also be the first meeting of a newly forming SPT SIG on the Political Philosophy of Engineering and Technology, which will be proposed to the SPT steering committee.

References

Coeckelbergh, M. (2022). The Political Philosophy of AI: An Introduction (1st ed.). Polity.

Lazar, S. (2024). Power and AI: Nature and Justification. In J. B. Bullock, Y.-C. Chen, J. Himmelreich, V. M. Hudson, A. Korinek, M. M. Young, & B. Zhang (Eds.), The Oxford Handbook of AI Governance. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780197579329.013.12

Miller, G. (2022). Toward a More Expansive Political Philosophy of Technology. NanoEthics, 16(3), 347–349. https://doi.org/10.1007/s11569-022-00433-y

Mitcham, C. (2024). Brief for Political Philosophy of Engineering and Technology. NanoEthics, 18(3), 14. https://doi.org/10.1007/s11569-024-00463-8

 

Presentations of the Symposium

 

Reciprocity & Reasonability in the Age of AI

Paige Benton
University of Johannesburg

Scholars often emphasise that a shortcoming of John Rawls’s theory of justice is that he did not address the role of technology in shaping future citizens (Risse 2023). Technology has been shaping citizens for decades. However, the rise of AI technology highlights a greater threat to how citizens and society may be shaped by technology. The rise of algorithmic amplification of polarising content, erosion of mutual trust, and strained public discourse have contributed to democratic instability. In this talk, I argue that John Rawls's framework of justice offers civic virtues that, when cultivated, can provide a normative safeguard of democracy in the Digital Age. Specifically, the virtues of reasonability—the capacity to engage in fair dialogue and consider diverse perspectives—and reciprocity—mutual trust and cooperation—are essential for citizens to cultivate if a stable liberal democratic society is to be possible (Rawls 2005, 48-50, 213).

I claim that these virtues act as a kind of political inoculation, helping citizens resist the divisive and polarising content amplified by AI-driven digital platforms. The political virtues of reasonability and reciprocity foster open dialogue and trust, two elements necessary for stability in liberal constitutional democracies. The cultivation of these virtues, in theory, could help reduce the amplification of distrust between citizens that undermines civic friendship. Reasonability and reciprocity protect democratic institutions from the destabilising effects of digital dissent, ensuring that AI technology serves democratic ends rather than undermining them. These virtues do so, I claim, because citizens who have developed these capacities would have developed the sense of justice necessary for seeking political consensus with those with whom they are in moral conflict. Given this, I argue that liberal constitutional governments have a moral imperative to cultivate these two political virtues within the public sphere. This imperative falls to governments as a matter of domestic justice, given the potential long-term consequences of governmental inaction on cultivating civic virtues, which can undermine democratic stability.

 

Technologies as promoters of justice: a capability-based framework

Daniel Lara De La Fuente
University of Málaga

Any political philosophical approach to technology and engineering should incorporate justice concerns if we understand them as the distributional, recognitional and procedural aspects involved in the implementation of technical innovations in a given political community. Yet a question remains to be answered: how is this broad commitment to be specified? In this paper, I provide a theoretical framework to assess under what conditions technological innovations can be considered just. Grounded in the capability approach, I argue that technological innovations are just and suitable for facing socioecological challenges if they are effective environmental conversion factors, net ecological stabilizers and protectors of human central capabilities at levels of sufficiency. The article illustrates these criteria through a case study that analyzes the role of Artificial Intelligence in increasing the reliability, safety and scalability of nuclear fission power reactors. Overall, this theoretical assessment can be taken into account for practical purposes such as evaluating social, economic and environmental costs and benefits in policy making.

 

A Rawlsian philosophy of technology and engineering?

Michael W. Schmidt
Karlsruhe Institute of Technology

Historically, Rawlsian ideas have been prominent in the philosophy of technology and engineering (Mitcham 2024). More recently, especially with regard to AI-related topics, Rawls’s theory has gained renewed attention. Some go so far as to state that Rawls is “AI’s favorite philosopher” (Procaccia 2019; see Franke 2021). So, how could and should a Rawlsian philosophy of technology and engineering be pursued?

Indeed, one of the most common approaches to incorporating Rawlsian ideas is at least problematic: the idea of applying Rawls’s original position – a hypothetical situation where one decides without knowledge of one’s personal identity, social situation, and personal features (“veil of ignorance”) and is thus forced into an impartial perspective – to any issue at hand. This is problematic since the original position, with the important detail of risk-averse agents, is justified, according to Rawls, only for decisions about the basic structure of society. Correspondingly, a general application of the derived difference principle (maximin rule) is at least questionable (Rawls 1999, 133). Of course, one could work with Rawls’s other – and lexically prior – principles of justice that can be derived from the original position. Serious basic-rights issues that technologies and engineering raise can be tackled in this way, as can other issues of inequality. If one wants to defend the usage of the original position, one might explore what Rawls calls the four-stage sequence: a gradual lifting of the veil of ignorance so that information about the specific socio-technical setting is included in the deliberation of the agents in the original position. However, whether one should accept this new hypothetical decision situation as an ethically normative guide depends on its justifiability via Rawls’s method of reflective equilibrium.

With this method, especially its collective or public form that aims at what Rawls called “full reflective equilibrium” (Rawls 2001, 31f.), I see another promising way to incorporate a basic Rawlsian idea into the philosophy of technology and engineering: the normative idea that decisions that affect the basic structure of society must be justified to all reasonable citizens. In order to provide such a justification, one aims at full reflective equilibrium by systematizing shared political beliefs and tries to show that the accepted decision is part of the most plausible systematization. Now, since it is actually an empirical question which beliefs are shared and how they are weighed, a full reflective equilibrium cannot be pursued from the armchair and calls for innovative participatory and transdisciplinary research.

I would like to end by highlighting an issue a Rawlsian approach should tackle in the current historical situation: in light of the growth of various forms of antiegalitarian authoritarianism and doubts about the capacity of capitalist democracy for genuine reform, any egalitarian approach must help to provide attractive alternative political visions. A focus on utopian thinking in a Rawlsian philosophy of technology and engineering thus seems warranted (e.g. Sand 2025), but the reflection on or creation of technological utopias should be integrated into holistic utopian thinking in order to provide what Rawls called “realistic utopias”.

References

Franke, Ulrik. 2021. “Rawls’s Original Position and Algorithmic Fairness.” Philosophy & Technology, November. https://doi.org/10.1007/s13347-021-00488-x.

Mitcham, Carl. 2024. “Brief for Political Philosophy of Engineering and Technology.” NanoEthics 18 (3): 14. https://doi.org/10.1007/s11569-024-00463-8.

Procaccia, Ariel. 2019. “AI Researchers Are Pushing Bias Out of Algorithms.” Bloomberg.com, March 7, 2019. https://www.bloomberg.com/opinion/articles/2019-03-07/ai-researchers-are-pushing-bias-out-of-algorithms, accessed June 30, 2021.

Rawls, John. 1999. A Theory of Justice: Revised Edition. Cambridge, Massachusetts: Belknap Press.

———. 2001. Justice as Fairness: A Restatement. Edited by Erin Kelly. Cambridge, Massachusetts: Harvard University Press.

Sand, Martin. 2025. Technological Utopianism and the Idea of Justice. Cham: Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-75945-1.

 
11:50am - 12:50pmAuthor-Meets-Critics session
Location: Forum
 

Are LLMs Creative?

Fernando Nascimento1, Scott Davidson2

1Bowdoin College, United States of America; 2West Virginia University

This paper employs Paul Ricoeur's theory of threefold mimesis to compare the creative capacities of Large Language Models (LLMs) with human semantic innovation. While recent studies suggest LLMs can match human performance on creativity tests, we argue that a broader hermeneutical approach reveals fundamental differences in how humans and LLMs generate meaning. First, LLMs lack the embodied experience that grounds human semantic innovation, operating instead through mimesis lexios (imitation of language) rather than mimesis praxeos (imitation of action). Second, following Ricoeur's emphasis on the role of reception in creativity, we argue that LLM outputs require human embodied interpretation and engagement to be recognized as genuine semantic innovation. This analysis suggests that LLMs’ creative potential remains fundamentally tied to human embodied experience and interpretive engagement, and contributes to ongoing debates about computational creativity and human-AI collaboration in meaning-making.



Design for Democracy: Deliberation, experimentation, aesthetic engagement, anti-power

Filippo Santoni de Sio

Eindhoven University of Technology, Netherlands, The

These are two related chapters from the book Human Freedom in the Age of AI.

Chapter 10: Design for Democracy: Deliberation and experimentation.

This chapter proposes to move from critiquing the negative impact of digital technology on democratic practices to exploring avenues for ‘designing for democracy.’ The first part of the chapter briefly presents two philosophical theories of democracy: deliberative democracy of, e.g. Rawls and Habermas, and democracy as experimentation and inquiry in the pragmatism of, e.g. Dewey and Addams. It illustrates how the two theories led to two different kinds of projects of ‘design for democracy’: the Brazilian open-government platform e-Democracia for citizen participation in deliberation and legislation, and Carl DiSalvo's Careful Coding project for the data-driven maintenance of the built environment in a local community. The second part of the chapter presents two design approaches which arguably reflect the two democratic theories: approaches where the design process is enriched and improved by stakeholder involvement and participation (Value-Sensitive Design and Scandinavian Participatory Design) and those where design is used to spark and expand social critique and to support the development of existing civic experiments (Critical and Social/Participatory Design). The chapter concludes by proposing to support both approaches to design for democracy and to explore possibilities of creating a mixed approach.

Chapter 11: Expanding democracy: Design for aesthetic engagement and anti-power.

In this chapter two more possible directions for designing for democracy are introduced. First, expanding the idea of the ‘public sphere’ to explicitly include also non-argumentative forms of contribution, namely aesthetic, artistic, or expressive forms of engagement, and supporting the design of projects that may contribute to democratic debates via aesthetic engagement (more-than-argumentative democracy). Second, expanding democratic participation from deliberation to ‘anti-corruption,’ and designing systems that may support counter-powers in democratic as well as non-democratic regimes (more-than-representative democracy). More-than-argumentative democracy may be supported by new media projects giving voice and visibility to underrepresented groups like migrants, raising awareness of the risks and opportunities of new technologies. More-than-representative democracy can be supported by projects that collect evidence against corrupted or authoritarian regimes with the help of new technologies.



The Routledge International Handbook of Engineering Ethics Education

Tom Børsen1, Diana Adela Martin2, Gunter Bombaerts3

1Aalborg University, Denmark; 2University College London, United Kingdom; 3TU Eindhoven, The Netherlands

The Routledge International Handbook of Engineering Ethics Education was published open access in December 2024 as a collaborative and international project bringing together 6 editors and more than 100 authors across the world. The volume contains 6 sections which elaborate on the foundations of engineering ethics education, teaching methods, accreditation and assessment, and interdisciplinary contributions, from the perspectives of teaching, research, philosophy, and administration.

This session aims to discuss the significance of such a resource as well as the intersections between engineering ethics education and the philosophy of technology. Potential topics for discussion include the prospects of engineering ethics education and its legitimacy as a field of research relevant to philosophy and philosophers. Reading the entire volume is not mandatory for taking part in this session (although reading the introductory chapter ‘Mapping engineering ethics education’ might help the discussion).



What's wrong with technological mediation theory (and how to fix it)

Phillip Honenberger1,2

1Center for Equitable AI & Machine Learning Systems (CEAMLS), Morgan State University, United States of America; 2Department of Philosophy & Religious Studies, Morgan State University

Theorists of technology sometimes describe human experience and action as “mediated” by technologies (Ihde 1990, 2006; Latour 1994, 1999; Verbeek 2005, 2011; Van den Eede 2011, Arzroomchilar 2022). But their accounts of what technological mediation is, including its principal components and modes of operation, differ from one another significantly. In this paper I conduct a critical review of some prominent accounts, highlighting problems and oversights in each. I then trace these problems to two basic sources: (1) overly narrow assumptions about what kinds of phenomena can count as “technological mediation,” how cases of technological mediation work, and what should be done about them; and (2) an insufficiently reflective approach to how technological mediation fits within a larger taxonomy (including, most saliently, non-technological mediation and non-mediational technology-involving relations). I then sketch the outlines of a theory of technological mediation that overcomes these problems. After drawing a few implications from this theory, I conclude.



Why should we revive the definition of technology as applied science?

Daian Tatiana Flórez1, Carlos García2

1Universidad de Caldas-Universidad Nacional de Colombia, Colombia; 2Universidad de Caldas-Universidad de Manizales

In this paper, we argue that equating technology with applied science has been prematurely dismissed based on arguments that have not undergone thorough scrutiny. Prima facie, there are two main reasons to revisit the question of whether technology should be equated with applied science. Firstly, the historical case often cited by those who argue that technology can advance independently of science, the steam engine, is controversial and can be refuted with evidence showing that it resulted from James Watt's scientific knowledge. Secondly, even if we acknowledge that there is unique technological knowledge in various domains (as exemplified by engineering theories), this alleged epistemic independence does not suffice to refute the equation [technology = applied science]. Based on the above, we argue that although the origin (historical dimension) of technology is practical, science, with its theories and methods, constitutes a conditio sine qua non (epistemological dimension) for technology as a form of knowledge.

 
12:50pm - 2:20pmLunch break
Location: Senaatszaal
12:50pm - 2:20pmPoster session
Location: Senaatszaal
2:20pm - 3:45pm(Symposium) Third wave continental philosophy of technology
Location: Blauwe Zaal
 

Third wave continental philosophy of technology - Part IV

Chair(s): Pieter Lemmens (Radboud University), Vincent Blok (Wageningen University), Hub Zwart (Erasmus University), Yuk Hui (Erasmus University)

Since its first emergence in the late nineteenth century (starting with Marx, Ure, Reuleaux and Kapp and coming of age throughout the twentieth century via a wide variety of authors such as Dessauer, Spengler, Gehlen, Plessner, the Jünger brothers, Heidegger, Bense, Anders, Günther, Simondon, Ellul and Hottois), philosophy of technology has predominantly sought to think ‘Technology with a capital T’ in a more or less ‘metaphysical’ or ‘transcendentalist’ fashion or as part of a philosophical anthropology.

After its establishment as an academic discipline in its own right from the early 1970’s onwards, philosophy of technology divided itself roughly into two different approaches, the so-called ‘engineering’ approach on the one hand and the so-called ‘humanities’ or ‘hermeneutic’ approach on the other (Mitcham 1994).

Within this latter approach, the transcendentalist framework remained most influential until the early 1990’s, when American (Ihde) and Dutch philosophers of technology (Verbeek) initiated the so-called ‘empirical turn’, which basically criticized all macro-scale or high-altitude and more ontological theorizations of technology such as Heidegger’s Enframing and Ellul’s Technological Imperative as inadequate and obsolete and instead proposed an explicit move toward micro-scale and low-altitude, i.e., empirical analyses of specific technical artefacts in concrete use contexts (Achterhuis 2001).

From the 2010’s onwards, this empirical approach has been reproached for obfuscating the broader politico-economic and ontological ambiance. Particularly European philosophers of technology expressed renewed interest in the older continentalist approaches and argued for a rehabilitation of the transcendental or ontological (as well as systemic) question of technology (Zwier, Blok & Lemmens 2016, Zwart 2021), for instance in the sense of the technosphere as planetary technical system responsible for ushering in the Anthropocene or Technocene (Cera 2023), forcing philosophy of technology to think technology big again (Lemmens 2021) and calling not only for a ‘political turn’ (Romele 2021) but also for a ‘terrestrial turn’ in the philosophy of technology (Lemmens, Blok & Zwier 2017).

Under the influence of, among others, Stiegler’s approach to the question of technics (Stiegler 2001), Hui’s concepts of cosmotechnics and technodiversity (Hui 2016) and Blok’s concept of ‘world-constitutive technics’ (Blok 2023), we are currently witnessing the emergence of what may be called a ‘third wave’ in philosophy of technology which intends, in dialectical fashion, to surpass the opposition between transcendental and empirical, and instead engages in combining more fundamental approaches to technology and its transformative, disruptive and world-shaping power with analyses of its more concrete (symptomatic) manifestations.

This symposium aims to open a debate among authors exemplifying this third wave, with a view to the contemporary intimate technological revolution, specifically focusing on the themes of technology and human identity, human nature, agency and autonomy, artificial intelligence, robots and social media, and the environment and sustainability.

 

Presentations of the Symposium

 

After cybernetics, after thinking

Yuk Hui
Erasmus University

Heidegger’s claim about the end of philosophy and its succession by cybernetics calls for a thinking whose relation to technology remains unclear. Is thinking—understood as oriented toward Being—a sufficient response to the current planetary condition? Specifically, is the thinking that originates from the Abendland capable of addressing the planetary nature of technology, which has already surpassed its control and produced a generalized Besinnungslosigkeit? Stiegler transforms Heidegger’s notion of penser (to think) into panser (to care or to heal), broadening the thinking of Being into a therapeutic thinking by first exposing thinking to danger—namely, to the abyss of the Gestell (enframing). This generalization, however, necessitates a return to localities and a diversification that exceeds the task of pursuing what thinking is; instead, it calls for an individuation of thinking that is adequate to the planetary condition.

 

Evil incorporated: the tragic philosophy of technology of Mehdi Belhaj Kacem

Pieter Lemmens
Radboud University

Since the 1990s, philosophy of technology has to a large extent been dominated by the so-called empirical turn, which basically consisted, negatively, of (1) a rejection of all ontological, metaphysical and even anthropological considerations of technology understood as ‘Technology with a capital T’ – exemplified by the so-called classical or transcendentalist philosophers of technology such as Marx (Capital), Heidegger (Enframing), Ellul (Technological Imperative) and Mumford (Megamachine) – and, positively, (2) an explicitly pragmatic dedication to analyzing concrete technical artefacts and their impact on the human-technology relation in specific use contexts. Adoption of this empirical stance was prevalent among others in postphenomenology (Ihde, Verbeek), science and technology studies (Bijker, Irwin, Pinch) and to a lesser extent also in the critical constructivism approach of Feenberg and his followers. All attempts at a more fundamental, i.e., profoundly philosophical theorization of the phenomenon of technology as such – in the sense introduced by Heidegger as a questioning into the essence of technology – were thus dismissed by mainstream philosophy of technology as overly abstract, reductionistic, wrongheaded or even downright inadequate.

Within continental philosophy more generally though, so-called ‘Technology with a capital T’ has come to be acknowledged over time more and more as the phenomenon that actually constitutes and conditions the very thing that has arguably always been the central topic of philosophy – from its very beginning in antiquity onwards – and that is to say the transcendence pervading and animating human existence, i.e., what Heidegger called being-in-the-world or Dasein: the onto-logical or metaphysical nature of human being. This is notably the case, albeit in very different ways, in the work of thinkers such as Günther, Hottois, Janicaud, Schürmann, Lacoue-Labarthe, Stiegler and Sloterdijk, and more recently also Yuk Hui – the latter arguing for a non-Eurocentric reprise of the question concerning technology sensu Heidegger based on a recognition of the existence of a plurality of culture-specific cosmo-technae and a fundamental technodiversity.

Another ‘transcendentalist’ or ‘metaphysical’ approach to technology, profoundly original and presented as part of an all-encompassing ontological framework called the system of pleonectics, has been developed in more recent times by the French-Tunisian philosopher Mehdi Belhaj Kacem (1973). This system understands the phenomenon of beingness in its relation to Being (i.e., the ontological difference) in neo-Anaximandrian terms as a dialectic of appropriation and expropriation and conceives of human being as a process of techno-mimetic, and as such monstrous, appropriation-expropriation. The decisive conceptual move made by Kacem is to demonstrate the originally transgressive or archi-transgressive nature of human transcendence as rooted in techne-mimesis as a violent appropriation and confiscation of the laws of physis. As such, he rehabilitates in a secular fashion the original connection traditionally pronounced by religion but totally denied in contemporary philosophy of technology between technology (or science) and evil.

In my paper I will present Kacem’s innovative ‘transcendentalist’ understanding of technology, expounding its remarkable resonances with the views of both Stiegler and Sloterdijk, and arguing for its profound relevance for our current diabolic era of planetary catastrophe, massive oligarchic corruption and deception, emerging global technocracy and intense cognitive and psychological warfare in the context of escalating geopolitical conflict.

 

Towards an Evolutionary Turn in the Philosophy of Technology

Marco Pavani
University of Turin

This paper aims to contribute to scholarship in the philosophy of technology by advocating for an “evolutionary turn” in this field of study. I aim to show that the contemporary philosophy of technology should pay more attention to the evolutionary dimension of the human relation to technology, also and most importantly in a biological and even genetic sense (Blad, 2010; Moore, 2017). This perspectival shift does not seek to replace the currently well-established empirical turn (Achterhuis, 2001), but rather to broaden and enrich its scope by highlighting what—in my view—are hitherto neglected aspects of our relation to technology.

First, I will argue that considering the evolutionary dimension of our relation to technology may help us better articulate the relationship between philosophical and scientific practices. I will submit that adopting this evolutionary perspective requires philosophical analyses to be scientifically up-to-date and consistent and suggest the Extended Evolutionary Synthesis (Laland et al., 2015) as the reference scientific paradigm for this research. I will also claim that an evolutionarily oriented philosophy of technology may enable us to consider scientific findings more critically by emphasizing the role played by technologies not only in our evolution but also in the study of this evolution carried out by evolutionary anthropology (Sloterdijk, 2016).

Second, I will underscore how an evolutionary turn in the philosophy of technology does not exclusively concern the study of our prehistoric past but also contributes to the understanding of the current epoch. I will point out that appreciating how our relation to technology influences our biology even today may bear major relevance to ongoing debates about the legitimacy of technological interventions over the human lifeform (e.g., Morris, 2006). I will also argue that investigating the evolutionary origin of our relation to technology, insofar as it consists in a narrative reconstructing how we became who we are, performatively influences our self-representation (Latour & Strum, 1986), holding sway over current biopolitical paradigms.

Third, I will outline how this approach may also enable us to reframe the question of the human relation to technology from a conceptual viewpoint. I will highlight the limitations of the “human-technology relations” approach exemplified by postphenomenology (e.g., Ihde, 1990; Verbeek, 2005) and submit that we should not talk about relations between humans and technologies but rather about relations between technologies and biological organs in order to fully appreciate how the human lifeform is actually constituted by technology, in an essential rather than merely extrinsic and accidental way.

I will therefore interpret Stiegler’s (e.g., 2014) idea of a “general organology” as a methodological apparatus suitable for reframing the question of “the human” from an evolutionary perspective, conceiving of it as the impermanent outcome of the negotiation between biological organs, technologies and social organizations, on the one hand, and the reconstruction of this relation, itself exerted as this threefold relation, on the other hand. The resulting insights, I believe, might help us better understand the phenomenon of technology and provide us with improved conceptual tools to analyse it.

 

Can we read Stiegler environmentally?

Martin Ritter
Academy of Sciences of the Czech Republic

Although one of the most tangible effects of technology is its impact on the natural environment, philosophers of technology pay most attention to its other effects, such as those on human perception or agency. Some work has been done at the intersection of philosophy of technology and environmental philosophy, for example by the proponents of the so-called terrestrial turn in philosophy of technology, but arguably more research is needed. Let me put it thus: we need to focus not only on the human-technology-world relationship but also on the human-technology-environment relationship.

Perhaps surprisingly, I suggest reading Bernard Stiegler as a thinker on whom we can build to accomplish this task. Stiegler recognizes the essential dependence of humans, in their very humanity, on their environment. For Stiegler, however, the environment we must consider in the first place is not natural but artificial: technology is not just something we create but an element, or milieu, founding the very existence of humans and their world. Technology, however, creates not only a medium of human existence but also, directly or indirectly, a new environment for other inhabitants of the Earth as well. Because of this double effect, Stiegler’s philosophy of technology implies, or at least demands, environmental philosophy as a philosophy of the environment.

The first aim of my presentation is to reconstruct Stiegler’s ideas about how human technological mediation contributes to environmental degradation. Stiegler captures this through his conception of the so-called general organology, which describes the transformation of life by means other than biological, namely by technology. More specifically, he uses the entropy-negentropy duality to capture this process. Stiegler emphasizes the anti-entropic potency of life and especially of human life, but he leaves behind anthropocentric prejudices precisely because he does not conceptualize human life as exceptional in its autonomy but rather as dependent on technology. However, in his (justified) focus on the noetic character of humans and their entropization, he tends to undervalue, or at least pay insufficient attention to, the entanglement of the human as noetic process with non-noetic processes, be they material or biological, outside of human psycho-somatic bodies.

The second aim of my talk is to focus on, and re-evaluate, this “environment” of human life. This complex task involves thinking about technology, which is a human milieu, not (only) in terms of its difference from the natural environment, but rather in terms of its dependence on, interconnectedness with, and even symbiosis with it. Technological artifacts (and human beings) do not emerge ex nihilo but use, even exploit, and transform what is “given” by nature. Doing justice to this technology-environment relationship challenges Stiegler’s somewhat one-sided emphasis on the technology-human relationship in its potentially negentropic effect and probably requires the development of more nuanced concepts to capture the environmental issues associated with technology as a medium that affects not only human life.

 
2:20pm - 3:45pm(Symposium) The History of the Philosophy of Technology: Hidden philosophers of technology
Location: Auditorium 15
 

The History of the Philosophy of Technology: Hidden philosophers of technology

Chair(s): Massimiliano Simons (Maastricht University, Netherlands, The)

The History of the Philosophy of Technology posits the philosophy of technology as a wide-ranging and comprehensive field of study that includes both the philosophical study of particular technologies and the different ways that technology, more broadly, has been considered philosophically. Influenced by the history of the philosophy of science, the history of ideas, and the history of the humanities, our aim is to examine how different individuals and traditions have thought about technology historically. This includes, but is not limited to: the work of different thinkers throughout history, both well-known and overlooked figures and narratives, including non-western traditions and narratives that engage with technology; analyzing the cultural, social, political, and sociotechnical contexts that have shaped philosophical responses to technology, including historical responses to new and emerging technologies; exploring the disciplines and intellectual traditions whose impacts can be traced across different philosophies of technology, including Science and Technology Studies (STS), the history of technology, critical theory, phenomenology, feminist philosophy, hermeneutics, and ecology, to name only a few; histories of different "schools" of philosophical thought about technology, for example French philosophy of technology, Japanese philosophy of technology, and Dutch philosophy of technology; mapping the hidden philosophies of technology in the work of philosophers (e.g. Foucault, Arendt, Sloterdijk) and traditions whose work is not often associated with technology (e.g. German idealism, logical empiricism, existentialism, lebensphilosophie); and, exploring the contributions of literature, art, design theory, architecture, and media theory/history towards a philosophy of technology.

This panel focuses on hidden philosophers of technology. Examining the work of scholars who fall outside of what are typically considered philosophers of technology can draw out perspectives and methods that are useful for thinking about technology. In this panel, the work of Walter Benjamin, Bertrand Russell, and Hannah Arendt will be explored as philosophies of technology.

 

Presentations of the Symposium

 

Irradiating the Intimate: the Storytelling of Walter Benjamin's Technological Revolution of the Intimate

Dominic Smith
University of Dundee

•1932-1934: Walter Benjamin writes, collects and rearranges a set of stories on his Berlin childhood. The story he settles on for last, ‘Moon’, relates a childhood night terror and shifts from the unnerving effects of moonlight on everyday objects to a reflection on the ontological question: Why is there something rather than nothing?

•29 January 1933: Benjamin delivers what would be the last of 93 broadcasts he gave for German regional radio, 1927-1933. The broadcast comprises readings from his story set. Hitler becomes German chancellor the next day, and his torchlight procession becomes the first nationwide broadcast on German radio (Rosenthal 2014: xx-xxi). Benjamin, a left-wing Jewish intellectual, is forced into exile in France in March 1933.

•1987: A ‘final version’ of Benjamin’s story set is discovered and posthumously published as a new edition of Berlin Childhood around 1900 (Benjamin 2006). The original ending of ‘Moon’ is cut, and ‘Moon’ shifts from the end of the text to the main body.

The technologies with which Walter Benjamin is best associated, thanks to his essay ‘The Work of Art in the Age of Its Mechanical Reproducibility’, are photography and film. As I argue in my book Bridging Benjamin (Minnesota University Press, forthcoming), however, and as the above-assembled events show, there are timely stories to be told about another medium in which Benjamin was not merely a critic, but a practitioner and educator: radio.

Part one places the first event in terms of recent work on Benjamin’s cosmology (Neyrat 2022). It argues that ‘Moon’ expresses, for the space of childhood, what Hui has theorised as a ‘cosmotechnics’ (Hui 2019, 2021). Part two relates a hidden story of the second event: the Nazi ascendancy meant Benjamin was unable to broadcast a scheduled radio play he had written for children, ‘Lichtenberg: A Cross Section’, the transcript of which tells of strange moon beings who use fantastical technologies to study terrestrial life (Benjamin 2014). I here read ‘Lichtenberg’ as an obverse of ‘Moon’, and argue that it exhibits a practice of localising philosophy through story. Part three focuses on the third event. I argue that the posthumous rearrangement of ‘Moon’ is consistent with a storytelling practice the story already expresses: against reifying the intimate as a sentimental possession, ‘Moon’ shows that to tell the story of something intimate is already to irradiate it and render it a kind of ghost, in the manner of the bright flash of photography filling up a room. Against a recent reading of ‘Lichtenberg’ by Peter E. Gordon (2023), I conclude that all three of the events about storytelling discussed in this paper have something much more important to tell us when connected up: a story not merely about radio as an audio medium, but as an intimate and haunting form of irradiation that has become a necessary infrastructural condition for our contemporary networked age.

References

Benjamin, W. 2014. ‘Lichtenberg: A Cross Section’. In Radio Benjamin, L. Rosenthal (ed.), 336-359. London: Verso.

Benjamin, W. 2006. Berlin Childhood around 1900. Translated by H Eiland. Cambridge MA: Harvard University Press.

Gordon, P. E. 2023. ‘President of the Moon Committee: Walter Benjamin’s Radio Years’, The Nation. https://www.thenation.com/article/society/walter-benjamin-radio-years/ (accessed 14 December 2024).

Hui, Y. 2016. The Question Concerning Technology in China: An Essay in Cosmotechnics. Cambridge MA: MIT Press.

Hui, Y. 2021. Art and Cosmotechnics. Minneapolis: Minnesota University Press.

Neyrat, F. 2022. Le cosmos de Walter Benjamin: un communisme du lointain. Paris: Éditions Kimé.

Rosenthal, L. 2014. ‘Walter Benjamin on the Radio: An Introduction’. In Radio Benjamin, L. Rosenthal (ed.), ix-xxix. London: Verso.

 

Arendt and the Philosophy of Technology

Jurgita Imbrasaite
University of Applied Sciences Europe, Hamburg

This paper explores Hannah Arendt’s contributions to the philosophy of technology, situating her analysis of the Vita Activa in The Human Condition within the broader historical discourse on technological transformation. While artificial intelligence (AI) exemplifies the disruptive potential of modern technologies, Arendt’s framework offers an alternative to the prevailing utilitarian perspective that reduces technology to its service or disservice to humanity. Instead, she invites us to consider technology as an environment that reshapes the human condition and opens possibilities for plurality and collective life.

Central to Arendt’s thought is the concept of "work" (Herstellen), grounded in the classical notion of techné as a purposeful activity distinct from the flux of nature. Drawing on Aristotle’s distinction between poiesis (making) and praxis (acting), Arendt highlights how work creates a durable, human-made world, a shared "in-between" that sustains political and social life. Unlike Aristotle, who subordinated techné to higher forms of knowledge, Arendt elevates its significance, emphasizing the stabilizing role of durable artifacts in anchoring human activity.

Modern technology, however, disrupts this framework. According to Arendt, it not only radically changes what we do when we are active but also alters our worldview and the structures through which we understand existence. Arendt critiques the shift from world-building “work” to "technical doing" (technisch tun), which she describes as a fundamental transition to the technical creation and channeling of nature-like processes directly "into the world itself." She likens automation to the cycle of nature, in which a seed contains a tree and is indistinguishable from it during growth, to explain how technical processes, increasingly automated in her time, mirror such natural processes. These processes dissolve distinctions between means and ends, beginnings and outcomes, product and process. AI exemplifies this transition, challenging traditional boundaries and undermining the durability of the human-made world. Yet, Arendt’s insights also open the possibility of understanding AI as part of a new "technological condition" comparable to nature, a condition that could reliably support human activity without being subordinated to anthropocentric goals.

By moving beyond the utilitarian framing of AI as a servant to humanity, this paper argues for rethinking technology as an evolving environment. Revisiting Arendt enables us to critically reimagine the role of technology in shaping the conditions for plurality and political action in an increasingly interconnected world.

 

Technology and Historical Time: Insights from the Annales and Hermeneutics

Darryl Cressman
Maastricht University

Viewed through Fernand Braudel's distinction between the longue durée and the history of the event, one of the consequences of the empirical turn has been a methodological bias towards the history of the event, which has come at the expense of longer periods of historical time. This may seem like an obvious bias because, by its very nature, technology, like politics, seems well-suited to the short time span and the immediate distinctiveness of the event (or the artifact, in this case). The popular imagination abounds with stories that frame and anticipate new technologies as transformative, and historical consciousness is strongly influenced by narratives of sociotechnical disruption and abrupt breaks with the past. Yet the choice of historical time is not fixed: "historical time, far from being a 'natural' phenomenon, is fundamentally a cultural one" (Le Goff 1988, p.3). It would not be incorrect, for example, to consider the relationship between the social and the technical through moments of rapid transformation and change. But it would also not be incorrect to suggest that these moments of sociotechnical change are inseparable from the longer historical continuums within which they occur and through which difference and disruption blend seamlessly into an almost indistinct similarity. To favour one over the other is rarely anything more than a matter of method, fashion, or habit.

In this talk I use examples from the history of musical culture to propose concepts and terms well-suited for thinking about technology through longer periods of historical time that extend beyond any one technology. To do this, I turn to the tradition of hermeneutic philosophy, and in particular the work of Hans-Georg Gadamer, to connect the discussion of historical time to the ways that historical experience informs our interpretations of technologies. From this, I translate Hans Robert Jauss' (1982 [1960]) concept of a "horizon of expectations" for the study of technology, examining how this concept can explain how individual and collective engagements with new and emerging technologies are always pre-conditioned by the experiences and habits that have been developed through engagements with existing technologies.

I conclude by suggesting that an attention to sociotechnical continuities can point to ways of thinking about technology that are well-suited to critiquing contemporary capitalist society. Resisting the capitalist imperative to desire, celebrate, fear, and consume the spectacle of the new can, at the very least, point to continuities that connect sociotechnical ways of being that fall outside of contemporary techno-socio-economic priorities: "the shrinking of the consciousness of historical continuity is more than an aspect of decline – it is necessarily linked with the principle of progress in bourgeois society" (Adorno 1970, p.13, qtd. in Schmidt 1981, p.2). Although the examples I use are neither political nor contentious in the conventional sense of the term, the concepts, ideas, and terms that I draw from this example can, hopefully, prove to be useful for larger critical ambitions.

 
2:20pm - 3:45pm(Papers) Prediction
Location: Auditorium 13
Session Chair: Wouter Eggink
 

Technological predictions: rethinking design through active inference and the free energy principle

Luca Possati

University of Twente, Netherlands, The

This paper explores human-technology relationships through the lens of active inference and the Free Energy Principle (FEP). Active inference, rooted in Bayesian brain theory, suggests that the brain generates predictions about sensory inputs and updates beliefs to minimize surprise or prediction errors, enabling organisms to reduce uncertainty and optimize interactions with their environment. The FEP, introduced and developed by neuroscientist Karl Friston, expands this idea, proposing that biological systems aim to minimize free energy—a measure of the discrepancy between expected and actual sensory input—to sustain homeostasis (Friston 2013; Parr et al. 2022). These frameworks can provide a novel perspective on human-technology interactions.
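
A minimal sketch of the quantity at stake, in standard active-inference notation (which may differ from the paper's own formalism), writes the variational free energy as

F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big] = D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] - \ln p(o),

where o denotes sensory observations, s hidden states, q(s) the agent's approximate posterior beliefs, and p(o, s) its generative model. Minimizing F both fits beliefs to observations and places an upper bound on surprise, -\ln p(o).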

At the heart of this paper is a straightforward yet powerful idea: every artifact embodies a set of predictions—not only about how its user will interact with it but also about the environment in which it operates. At the same time, the artifact reflects the expectations of its designer, who conceived and built it based on assumptions about its purpose, functionality, and intended user behavior. In this sense, artifacts act as mediators, encoding and enabling the interaction of predictive models from multiple agents: the designer, the artifact itself, and the user. This perspective positions artifacts as dynamic networks of predictions, where human-technology interactions are shaped by the continuous coordination and adaptation of these models over time.

The inquiry centers on two primary questions:

How do active inference and the FEP extend to artifacts? Artifacts can be seen as mediators of predictions through three mechanisms: precision crafting, curiosity sculpting, and prediction embedding. Precision crafting directs attention to specific environmental features, aiding users in managing their inferential load. Curiosity sculpting enables exploration and uncertainty reduction, refining user predictive models. Prediction embedding encapsulates the artifact’s own predictive capacity, shaping and reflecting its intended use. These mechanisms, though interconnected, can operate independently or progressively.

Can active inference and the FEP inform UX design? By conceptualizing the relationship between designer, artifact, and user as a triad of generative models, this approach provides tools to address challenges in UX design, such as enhancing user engagement and optimizing functionality. It offers a dynamic framework that goes beyond static models, capturing the evolving interactions within the system.

To operationalize this framework, this paper introduces the Designer-Artifact-User (DAU) tool, a software platform developed to simulate and analyze artifact-based interactions. The DAU tool leverages the formalism of active inference and the FEP to model how predictions evolve across the triad of designer, artifact, and user, facilitating the refinement of design processes. By employing advanced computational models, the tool provides a powerful resource for exploring the dynamic interactions between these entities. It is specifically designed for researchers, designers, and engineers seeking to deepen their understanding of complex socio-technical systems. The framework's practical application is illustrated through a case study of the smartphone. This analysis examines how smartphones embody and influence the expectations of both users and designers, demonstrating how active inference can enhance interactions and align design intentions with user behavior.

References

Friston, K. (2013). “Life as we know it.” Journal of the Royal Society Interface 10(86): 20130475.

Parr, T., Pezzulo, G., and K. Friston. (2022). Active Inference. Cambridge, MA: MIT Press.



AI Oracles and the Technological Re-Enchantment of the World

Lucy Císař Brown1,3, Petr Špecián1,2

1Charles University, Czech Republic; 2Prague University of Economics and Business, Czech Republic; 3Czech Academy of Sciences

Artificial intelligence systems are becoming an increasingly sophisticated and pervasive presence in our day-to-day lives. In the likely case of their continuing development, they could soon transcend their current status as tools or consultants (Špecián and Císař Brown 2024). Indeed, AIs may emerge as modern-day oracles—entities of great mystique perceived to possess knowledge and capabilities beyond human comprehension whose utterances carry decisive authority despite their purveyors’ lack of accountability. This paper argues that such a transformation represents a form of technological ‘re-enchantment’ of the world, inverting Max Weber’s concept of the disenchantment of modernity.

As individuals and institutions increasingly rely on AI “oracles” for guidance, they gradually resign their agency to systems they fundamentally cannot understand (Klingbeil, Grützner, and Schreck 2024). The requisite leap of faith far exceeds the trust placed in human experts. Drawing on Weber’s analysis of rationalization and secularization (Weber 1963), we argue that what is often missed in subsequent Weberian analyses is that ‘disenchantment’ did not imply the loss of capacity for faith, but rather its transformation. As modernity has progressed, this capacity has been redirected toward technology, with people systematically convinced to ‘have faith’ in supposedly rational structures they cannot comprehend, such as ‘the Market’ (Keller 1992).

With the ascent of AI, this epistemic distance widens further. Current AI systems, particularly Large Language Models, are already ascribed the status of competent consultants. As these technologies improve and perceptible errors subside, we anticipate a fateful perceptual shift: from consultant to Oracle—an opaque source of authoritative knowledge whose proclamations are accepted with minimal scrutiny. Unlike traditional oracles, bound to specific times and places, AI offers the potential for a personal, continuous, and all-encompassing relationship with its users—or perhaps, increasingly, disciples—providing apparently omniscient guidance.

The contribution of our paper is a structural analysis of this remarkable technological re-enchantment, wherein increasing AI sophistication leads not to greater comprehension, but to a faith-based abdication of human agency (cf. Collins 2018). We argue that this development represents a pivotal moment in the ongoing dialectic of enchantment, disenchantment, and re-enchantment, challenging us to reconsider fundamental concepts of agency, trust, and the relationship between humanity and technology. By examining how human yearning for omniscient guidance may lead us toward an enchanted acceptance of opaque technological proclamations, we illuminate crucial questions about autonomy and rationality in an AI-mediated world.

Collins, H. (2018). Artifictional intelligence: Against humanity's surrender to computers. Polity Press.

Keller, J. (1992). Nedomyšlená společnost [Unimagined society]. Doplněk.

Klingbeil, A., Grützner, C., & Schreck, P. (2024). Trust and reliance on AI — An experimental study on the extent and costs of overreliance on AI. Computers in Human Behavior, 160, Article 108352. https://doi.org/10.1016/j.chb.2024.108352

Špecián, P., & Císař Brown, L. (2024). Give the machine a chance, human experts ain't that great…. AI & SOCIETY, Article s00146-024-01910–16. https://doi.org/10.1007/s00146-024-01910-6

Weber, M. (1963). The sociology of religion (E. Fischoff, Trans.). Beacon Press. (Original work published 1922)



Sleepwalkers in a scenario of a happy apocalypse?

Helena Mateus Jeronimo

ISEG School of Economics and Management, Universidade de Lisboa & Advance/CSG, Portugal

This paper builds on the idea that, in addition to the uncertainty that has always existed, contemporary society has introduced new contingencies stemming from increasingly complex and sophisticated techno-scientific systems. Nonetheless, many of the problems we face in the 21st century are analyzed as “risks”, which, with their probabilistic nature, artificially conceal uncertainties and create an illusion of control over randomness and contingencies, perfectly aligning with a culture that rejects unpredictability. There is an excessive, hegemonic, and monolithic tendency to use the probabilistic notion of risk across all kinds of issues, which leads to three levels of error: (i) theoretically, it conflates contexts of risk, which can be evaluated and calculated in terms of probabilities, with situations of uncertainty, which cannot be assessed through measurable calculations; (ii) analytically, it devalues radical uncertainty, falsely converting it into epistemic uncertainties that can be analyzed through quantitative methods to achieve public credibility and acceptance; (iii) normatively, the language of risk tends to legitimize, justify, and ratify the pattern and progress of technology, failing to question the foundations of the instrumental vision that strongly permeates modernity.

The concealment of uncertainty occurs because risk operates as an “abstraction” of problems through numbers – a measure that induces a distant and opaque relationship with objects and scientific-technical systems. Risk enables the rejection of contingency by presenting it as something manageable. As expressed by the historian of science and technology Jean-Baptiste Fressoz, this has occurred through a process of “reflexive disinhibition”, where dangers are acknowledged but subsequently normalized to ensure their acceptance through the creation and success of mechanisms (e.g., regulations, safety standards, administrative surveillance, insurance), which generates a climate of “happy apocalypse”.

Faced with the crossroads of the current context, which is heavily impacted by the ecological crisis and the complex systems of Artificial Intelligence, the somewhat ‘unconscious’ ratification of the scientific-technological path is particularly concerning. Consider how algorithms represent forms of invisible power, imperceptible to the senses, and how they construct, organize, and shape our reality—whether in recruitment decisions, judicial rulings, consumer choices, targeted marketing, or even in the governance of nations. Consider how data enables human action or interaction to become subject to the logic of prediction and optimization. In short, consider how machine learning embodies the automatism of technology. The acceptance of the existence of these emerging factors echoes the concept of “technological somnambulism”, as described by Langdon Winner, which arises from the perception of technology as a mere tool – neutral and disconnected from its long-term implications, deeply opaque to its users – and not as a powerful force subtly but potentially irreversibly restructuring the physical and social world in which we live.

In this paper, I argue that the excessive trust placed in purely technical solutions, the subsumption of uncertainties into risk analyses, the decontextualized strategies of technoscience, and the strong influence of the financial and corporate world urgently need to be countered by the idea of reasonableness in addition to rationality; that it should be acknowledged that uncertainties cannot be tamed; and that ethical values and political action should mediate technoeconomic progress.

 
2:20pm - 3:45pm(Papers) Sustainability and energy
Location: Auditorium 12
Session Chair: Gunter Bombaerts
 

Understanding Polarisation in the Energy Transition

Udo Pesch

Delft University of Technology, Netherlands, The

Policies and projects initiated to foster the energy transition are subjected to societal polarisation, meaning that the discussion about their desirability is characterised by positions that are opposed in terms of apparently incommensurable values and world views (Cuppen, Pesch, Remmerswaal, & Taanman, 2019). In this paper, I depict polarisation in the energy domain as an accidental manifestation of broader patterns of polarisation which are caused by a wide range of societal developments. By understanding the causes of these developments, more productive modes for the societal assessment of energy policies and projects can be pursued.

Polarisation can be presented as the societal and political materialisation of two types of moral orientations (cf. Henrich, 2017). First, there is the impersonal disposition, in which trust pertains to rules, systems, and interactions that can be characterised as rationalistic and objective. Social norms are formally institutionalised, while moral norms are based on generic rules and procedures. Second, there is the interpersonal disposition, characterised by trust in direct and concrete relations, interactions, and experiences. Social norms have a local and particularistic character, while moral norms are based on intuitions.

These two types of moral orientations do not necessarily coincide with political or societal groups. In fact, every individual navigates these two attitudes on a daily basis without effort or reflection. What is typical for twenty-first-century societies, however, is that these are increasingly structured on the basis of this bifurcation.

A number of push and pull factors explain this development. Here, the push factors consist of the dominance of the impersonal orientation in the economic, political and technological systems. The complexity induced by these systems necessitates coordination over a wide range of actors and institutions, which reinforces the dominance of this orientation (Bauman, 2000; Tainter, 2006). Among the pull factors, there are political leaders who use the opposition between these orientations for electoral gain, social media that amplify opposition, and the tendency of people to shape their self-identity in terms of opposition – the dominance of the impersonal orientation fuels the sentiment of alienation (van Dijck, 2020).

The upshot of these developments can be described as a moral tribalism that has spilt over into the energy domain (Markowitz & Shariff, 2012). The opposition has come to be understood in terms of a conflict between the political left and right, with each side exploiting only one of the orientations while ridiculing the opposite side. This framing not only ignores the heterogeneity and fluidity of human morality but also denies the diversity of positions and options possible to further the energy transition. To overcome this, the idea of ‘bridging events’ – derived from innovation sciences – is useful (Garud & Ahlstrom, 1997; Rip & Van Lente, 2013). These bridging events allow actors to learn about alternative orientations so that innovation processes are established on a richer basis of insights, needs, and visions. The paper will end by outlining some of the conditions for having productive bridging events in the context of the energy transition.

References:

Bauman, Z. (2000). Liquid modernity. Polity, Cambridge.

Cuppen, E., Pesch, U., Remmerswaal, S., & Taanman, M. (2019). Normative diversity, conflict and transition: Shale gas in the Netherlands. Technological Forecasting and Social Change, 145, 165-175. doi:https://doi.org/10.1016/j.techfore.2016.11.004

Garud, R., & Ahlstrom, D. (1997). Technology assessment: a socio-cognitive perspective. Journal of Engineering and Technology Management, 14(1), 25-48.

Henrich, J. (2017). The secret of our success: How culture is driving human evolution, domesticating our species, and making us smarter: Princeton University Press.

Markowitz, E. M., & Shariff, A. F. (2012). Climate change and moral judgement. Nature Climate Change, 2(4), 243.

Rip, A., & Van Lente, H. (2013). Bridging the gap between innovation and ELSA: The TA program in the Dutch Nano-R&D program NanoNed. Nanoethics, 7, 7-16.

Tainter, J. A. (2006). Social complexity and sustainability. Ecological Complexity, 3(2), 91-103. doi:https://doi.org/10.1016/j.ecocom.2005.07.004

van Dijck, J. (2020). Governing digital societies: Private platforms, public values. Computer Law & Security Review, 36, 105377. doi:https://doi.org/10.1016/j.clsr.2019.105377



"Koyaanisqatsi", that is, Technophany at work

Agostino Cera

Università di Ferrara, Italy

In order to show the concept of “Technophany” in its literal meaning – that is, unlike the way it was conceived by Gilbert Simondon in the early 1960s, i.e. as “a form of mediation which allows technology to be re-integrated into culture” – my paper deals with Godfrey Reggio’s "Koyaanisqatsi", understood as a philosophical film essay. Together with “Powaqqatsi” (1988) and “Naqoyqatsi” (2002), “Koyaanisqatsi” (1982) is part of the Qatsi trilogy. The titles are taken from the language of the Hopi, a Native American tribe. Koyaanisqatsi, in particular, means “life out of balance”.

Koyaanisqatsi represents the best film treatment of a central philosophical topos of the 20th century: “the question concerning technology”. In particular, it shares the approach of the “first generation of philosophers of technology” – Heidegger, Jonas, Mumford, Anders... – who were able to grasp that technology has become a “form of life”. At the same time, it also shares some of the limits of this first generation, that is, essentialism, apriorism, determinism, and a dystopian attitude.

Having spent 14 years as a monk, Reggio has a religious approach to cinema. His films are akin to millenarianist preachers who warn humanity about a new apocalypse. To produce the same experience in the viewer, he performs an "epoché of logos": he renounces conventional language and uses music (Philip Glass’s score) as an alternative language, into which he inserts single words from an otherworldly language: the Hopi language.

While describing his work, Reggio gives an impressive philosophical characterization of contemporary technology: “What I tried to show is that the most important event of perhaps our entire history […] is gone unnoticed […] the transiting from the natural environment into a technological milieu […] Technology has become as ubiquitous as the air we breathe […] Life unquestioned is life lived in a religious state”.

According to this description, Reggio’s work makes visible the technology as our current (neo-)environment or oikos. "Koyaanisqatsi" is able to give an aesthetic (i.e. visual, acoustic, emotional) concreteness to highly abstract concepts, such as:

1. the “disenchantment” of the world (Weber), that is, our society as a “megamachine” (Mumford);

2. the relation between human being and nature as “challenging”, i.e. the interpretation of every entity as a standing-reserve (Heidegger);

3. the fact that technology represents the current “subject of history” (Anders) or our “destiny” (Jonas);

4. the “total mobilization” (Jünger) as basic law of the megamachine;

5. that Homo technologicus’ hybris is different from classical Prometheanism. Our “neo-Prometheanism” takes the form of “Icarism”, which consists not in a hyper-approximation to the sun but in a “hyper-distancing from the Earth”, i.e. the loss of Earthness (worldhood/Weltlichkeit) as a founding feature of the human condition.

Insofar as "Koyaanisqatsi" realizes a visible incarnation of our Zeitgeist, it should be considered an event in itself: the “epiphany of technocosmos”, or better, a “technophany” in the literal sense of the term. Here and now the only still possible manifestation of the sacred depends on technology: the new theo-phany or hiero-phany can only be a techno-phany.

 
2:20pm - 3:45pm(Papers) Legislation
Location: Auditorium 11
Session Chair: Lambèr Royakkers
 

Between Human and Algorithmic Decisions: Analyzing the Ambiguities in the AI Act Definition of AI

David Doat

Catholic University of Lille, Belgium

In contemporary academic discourse on artificial intelligence, scarcely any publication neglects to underscore the extensive deployment of artificial intelligence systems (AIS), akin to a pervasive social phenomenon, permeating all facets of human endeavor: healthcare, education, justice, security, as well as economic and financial sectors, private life, and beyond. These systems serve to optimize information processing procedures, generate recommendations, and automate complex processes, among others. However, a significant number of these systems also render "decisions" that influence individuals' life courses, thus affecting their trajectories and opportunities, their quality of life, and their overall well-being.

The European Regulation on Artificial Intelligence (AI Act) adopts a definition that encompasses decision-making as one of the operations executed by AI. The objective of my presentation is to elucidate that the incorporation of the concept of "decision" in the legal definition of AI articulated in the current text of the AI Act is the source of profound ambiguity, incapable of withstanding philosophical scrutiny, necessitating rectification. Even though the concept of decision is utilized in numerous contexts familiar to computer scientists and within the reference frameworks of the discipline, such as the IEEE Computer Science Curricula, its invocation within a legal definition of AI bears the risk of significant categorical confusion among European citizens from diverse cultural, educational, professional, and disciplinary backgrounds. Given the inherent distinctions between human decision-making acts and automated "decision-making" acts, legislative restraint from employing the notion of "decision" in the established definition of AI would have been preferable. This is corroborated in the judicial domain, wherein it is emphasized that "the use of AI tools can support the decision-making power of judges or judicial independence, but should not replace them, as the final decision must remain a human activity".

Consequently, my presentation will advocate this thesis by advancing four arguments. The initial argument will involve an original philological and philosophical analysis of the concept of decision, highlighting its formal structure and anthropological specificity. The second argument will draw from the philosophy of language and the theory of the performativity of speech acts (Austin, Searle, Chomsky). The third argument will derive from theories of embodied cognition and Dewey's conception of the relationship between the decision-maker and the associated environment. The final argument will present an analysis and critique of the interests and limitations inherent in attempts to formalize decisions within decision theory.

In conclusion, I will emphasize the legal necessity of distinguishing between metaphorical uses of the concept of decision-making, wherein delegation to machines incurs no epistemic or ethical consequences, and the authentic dynamics of human decision-making, which cannot be supplanted by algorithmic processes. In light of the proposed analysis, I will revisit the inherent ambiguity in the use of the concept of decision within the definition of AI in the European AI Act, advocating for modifications to the definition's text based on the analyses presented. In this context, I will provide several proposals for discussion.



Test and regulation: how testing to regulation leads to failure

Matthew James Phillip Wragg

University of Edinburgh, United Kingdom

As a rudimentary form of technology, construction products and systems affect our day-to-day lives in ways that we hope not to perceive or engage with. The majority of us will enter and exit a building without considering that someone, somewhere, should know that the materials and systems used to make the structure are the right products and systems for that structure, and that they are fit for their intended use (Construction Product Regulations, 2011).

Yet when buildings do fail, the impact is felt acutely. From relatively minor failures that can result in long-term health issues (Murphy, 2006; Awaab Ishak – Prevention of future deaths report, 2022), to cumulative failures that lead to catastrophe (Grenfell Tower Inquiry, Phase 2, 2024), we seek to understand why these failures have happened, even though the regulations in place are there to provide assurances of the repeatable behaviour of a product being fit for its intended use (Chhobra, 2020), i.e. uses that should not lead to often preventable failure.

How do we make claims of fitness for use, and what is the relationship between fitness for use in the test environment and in situ? In this paper I shall argue that by removing the unnecessary variables that inform how and what we test for when confirming the future performance of a product in test (Downer, 2007), we limit our epistemic claims about product performance to data gathered about how a product performs in comparison to a product type, rather than to how this particular product performs under certain conditions, leaving the door open to both physical and epistemic failure.

Using current regulatory schemes and experience drawn from working with a UKAS-accredited Product Certification Body and Technical Assessor (those accredited to validate claims of performance made by manufacturers), I shall discuss how we use standardisation, testing, and assessment to restrict the regulated uses (functions) of a product so as to maintain epistemic accuracy by reducing the scope of the epistemic claims, but not by reducing the potential functions of a product. By relying on what a manufacturer deems relevant to declare on the basis of a product's foreseeable use (CE Marking of Construction Products – Step by Step, 2014), we gain a limited understanding of expected performance, one that can account for only a few of both the regulated and the potential uses of a product.

Although my work concentrates on how we create and continually verify and validate epistemic claims within the context of the construction industry and civil engineering, this is situated within a broader context relevant to philosophy of technology: what is the purpose of the test environment and how do we use it to regulate artefacts?

 
2:20pm - 3:45pm(Papers) Farming
Location: Auditorium 10
Session Chair: Tijn Borghuis
 

Reconciling technology and tradition: exploring the sustainability of drone-assisted wild berry foraging in Finland

Anne-Marie Oostveen

Cranfield University, United Kingdom

This paper investigates the potential of drone technology to enhance the sustainability of wild berry foraging in Finland. Wild berries represent a vital forest product in Nordic countries, but less than 10% of the annual crop is harvested [1]. With domestic reluctance toward commercial berry picking due to low economic incentives, reliance on foreign seasonal workers has grown significantly [2,3,4]. We adopt a multidisciplinary lens, engaging with philosophical and ethical reflections on the implications of introducing drones into natural foraging ecosystems. The analysis spans the three pillars of sustainability—economic, environmental, and social—while probing the tensions between technological advancements and their unintended consequences.

Economically, autonomous drones can enhance wild berry picking operations, leading to increased profits and viability. Drones equipped with advanced sensors and artificial intelligence (AI) offer transformative potential in this context. By building 3D forest models and precisely locating berry clusters, drones can optimise harvest operations, increase yields by 50%, and improve working conditions through navigation aids and support tools. These innovations could attract local and recreational foragers, reducing dependence on migrant labour. The paper also considers the interplay between commercial scalability and preserving access rights under Finland's "Everyone's Right" framework [5].

Environmentally, drone technologies provide tools for geospatial analysis, enabling sustainable resource management. By determining berry ripeness and productivity, drones facilitate harvesting at ideal times, minimising environmental impact. Real-time monitoring helps prevent overharvesting in vulnerable areas, supporting ecosystem health by guiding pickers to optimal areas while leaving sufficient resources for regeneration and wildlife. Additionally, empirical data gathered by drones and analysed with AI can be provided to governmental institutions to support evidence-based policymaking. This data-driven approach may inform temporary foraging restrictions when specific foraging areas are identified as vulnerable. However, risks such as wildlife disturbances [6], noise pollution [7,8], drone debris [9], and the environmental footprint of drone manufacturing and operation necessitate mitigation measures, including quieter designs, responsible waste management practices, and sustainable power solutions [10].

From a social perspective, wild berry foraging is deeply embedded in cultural traditions and recreational practices in Finland [11,12]. Drones can enhance this experience by improving efficiency and safety, encouraging local participation, and offering new opportunities for small-scale entrepreneurship. However, public concerns regarding privacy, noise, and the perceived encroachment of technology into traditional foraging practices must be addressed. The philosophical critique extends to autonomy, exploring whether technologies designed to "assist" foragers could inadvertently infantilise or alienate them. Moreover, public acceptance hinges on the perceived trade-off between technological benefits and the preservation of personal and cultural values. Transparent communication, community engagement, and ethical safeguards will be essential for societal buy-in.

This paper advocates for a nuanced approach to integrating drone technology into wild foraging, emphasizing the importance of ethical guidelines, regulatory oversight, and active stakeholder engagement. It challenges us to consider how innovations in robotics and AI intersect with longstanding human-environment relationships, urging a future where technological progress aligns with cultural respect and ecological stewardship.

By foregrounding these philosophical reflections, the paper seeks to contribute to broader discourses on sustainable technology in natural resource industries, providing a template for ethically responsible innovation.

References

[1] Steensig, S.L. (2021), Did Finland do enough to protect its foreign berry pickers from Covid? Euronews, 16/09/2021. https://www.euronews.com/2021/09/16/did-finland-do-enough-to-protect-its-foreign-berry-pickers-from-covid

[2] Ruokavirasto (2023). Luonnonmarjojen ja -sienten kauppaantulomäärät vuonna 2023. Marsi 2022. https://www.ruokavirasto.fi/globalassets/viljelijat/tuet-ja-rahoitus/marsi-2022-raportti.pdf

[3] Piha, E.A. (2022). Seasonal Labour Migration and Sustainable Development in the Finnish Wild Berry Industry. Master’s Degree in Interdisciplinary Studies in Environmental, Economic and Social Sustainability. Universitat Autonoma de Barcelona.

[4] Lacuna-Richman, C. (2021). The Seasonal Migration of Thai Berry Pickers in Finland: Non-wood Forest Products for Poverty Alleviation or Source of Imminent Conflict? In Social-Ecological Diversity and Traditional Food Systems (pp. 91-105). CRC Press.

[5] Ministry of the Environment. (2016). Everyman’s Right. Legislation and Practice.

[6] Frąckiewicz, M. (2023) The Role of Drones in Wildlife Research and Conservation. Accessed online (11/5/2023) https://ts2.space/en/the-role-of-drones-in-wildlife-research-and-conservation/

[7] Christie, K. S., Gilbert, S. L., Brown, C. L., Hatfield, M., & Hanson, L. (2016). Unmanned aircraft systems in wildlife research: current and future applications of a transformative technology. Frontiers in Ecology and the Environment, 14(5), 241-251.

[8] Mulero-Pázmány, M., Jenni-Eiermann, S., Strebel, N., Sattler, T., Negro, J. J., & Tablado, Z. (2017). Unmanned aircraft systems as a new source of disturbance for wildlife: A systematic review. PloS one, 12(6), e0178448.

[9] Nentwich, M., & Horváth, D. M. (2018). The vision of delivery drones: Call for a technology assessment perspective. TATuP-Zeitschrift für Technikfolgenabschätzung in Theorie und Praxis/Journal for Technology Assessment in Theory and Practice, 27(2), 46-52.

[10] Chamberlain, A., & Griffiths, C. (2013). Wild food practices: understanding the wider implications for design and HCI. Green Food Technology: Ubicomp opportunities for reducing the environmental impacts of food, Ubicomp.

[11] Giraud, N. J., Kool, A., Karlsen, P., Annes, A., & Teixidor-Toneu, I. (2021). From trend to threat? Assessing the sustainability of wild edible plant foraging by linking local perception to ecological inference. bioRxiv.

[12] Hall, C. M. (2013). Why forage when you don't have to? Personal and cultural meaning in recreational foraging: a New Zealand study. Journal of Heritage Tourism, 8(2-3), 224-233.



What did the rice-farming calendar do? Changing relationships between farmers and farmland in Japanese rice farming

Toshihiro Suzuki

Sojo University, Japan

“The rice-farming calendar” plays a very important role in rice farming in Japan. It is a manual for rice cultivation that is provided to farmers by regional agricultural cooperatives. It began in the 1950s and continues today. The calendar indicates what farmers need to do and when they need to do it in order to successfully cultivate rice. It contains wisdom based on the experience of local farmers and research results by agronomists and includes information on what agrochemicals and fertilizers to use and when to use them. Since agrochemical and fertilizer suppliers were also involved in the preparation of the calendar, farmers can order pesticides, herbicides and fertilizers semi-automatically by using the calendar.

Farmers can now achieve a semi-automated rice harvest with the rice farming calendar, making rice production, which used to require expertise, accessible to everyone. In this sense, the rice-farming calendar has certainly played an important role in the stable production of rice in Japan.

At the same time, however, it is also true that the rice-farming calendar has weakened the connection between humans and the rice paddies. Farmers no longer need to worry about the condition of their rice paddies, since following the rice-farming calendar automatically produces a harvest. The major problem ahead of this trend is that farmers will no longer have the opportunity to engage in ethical reflection on the condition of their farmland.

For example, when the rice-farming calendar specifies the amount of agrochemicals to be used against pests and weeds, that amount is set slightly higher than necessary, to ensure that no damage occurs. Farmers apply the prescribed amount of agrochemicals without knowing much about the situation in their rice fields, such as what kinds of insects are present or what kinds of weeds are growing.

What can we do about it? Here, I would like to introduce the “pesticide reduction movement” that took place in Japan in the 1970s. While the movement's goal was to reduce the use of pesticides, the emphasis in the movement was to have farmers face every rice paddy one by one.

For example, various “devices,” such as “insect-watching boards”, appeared in the movement. These devices were intended to establish a relationship between the farmer (human) and the insects in the farmland, and between the farmer and the farmland.

The rice farming calendar is still in use today. While recognizing its merits, we need to rethink how we can build a relationship between the farmer and the farmland today.



Preparing the Field for AI and Data Intensive Agroecological Research

Emma Cavazzoni1, Sabina Leonelli2, Daniele Giannetti3, Niccolò Patelli4, Giacomo Vaccari5

1Technical University of Munich, Germany; 2Technical University of Munich, Germany; 3Università degli Studi di Parma, Italy; 4Università degli Studi di Modena e Reggio Emilia, Italy; 5Consorzio Fitosanitario Provinciale, Modena, Italy

In this paper, I explore what it means to prepare the field for AI and data-intensive agroecological projects. Conducting research in the field demands choosing or modifying natural places to tailor them to machines and quantitative measurements, ensuring the production of reliable, consistent data while navigating the myriad challenges inherent in unpredictable environments where unexpected occurrences are commonplace (Kohler, 2002). This process involves the preparation of the field and the meticulous construction of objects that can be investigated. Although not always acknowledged as scientific labor, such activity plays a pivotal role in laying the foundation for meaningful research outcomes. Drawing parallels with scholarly insights into fossil construction, focusing particularly on the work of Wylie (2015), this presentation unravels the complexities of this essential yet often overlooked task. I ground my reflections on six months of ethnographic work and collaborations with an agroecological interdisciplinary project dealing with a plethora of objects such as data, insects, and fruits: Haly.Id. Haly.Id is a Horizon project based in Northern Italy that develops innovative technologies like drones and camera traps for the targeted monitoring in crop fields of the brown marmorated stink bug Halyomorpha halys (H. halys) – a highly invasive pest that feeds on fruits and seriously harms production in southern Europe, the United States, and eastern Asia (Bariselli, Bugiani and Maistrello, 2016; Ferrari et al., 2023; Giannetti et al., 2024).

The discussion is centered around three key dimensions that significantly influence the process. The first pertains to the intricate tapestry of social relations. This includes how the division and integration of labor and expertise, along with the resulting dynamics, shape the direction of object construction and field preparation. In Haly.Id, for instance, decisions are fragmented across disciplines and skills, often resulting in the loss of farmers’ input by the time engineers design monitoring technologies. The second axis revolves around the environment. The preparation and construction of the field and its objects for automated agroecological research are shaped by factors such as unpredictable weather patterns and complex environmental interactions. Because they work with natural fields rather than controlled lab environments, researchers have limited control over parameters such as temperature, humidity, and light exposure (Knorr-Cetina, 1992; De Bont, 2015). In Haly.Id, freezing and local flooding significantly influenced the development of pears as objects of study in plant-pest interactions. The third dimension I consider is the methods employed. Decisions regarding which aspects to monitor and how to integrate technologies with field elements such as territory, species composition, ecology, and climate greatly influence the preparation of the field and the construction of the objects involved. In Haly.Id, for example, these were shaped by the need to create a system that was not only technically achievable but also useful given the pest control methods already on the ground.

Bibliography

Bariselli, M., Bugiani, R. and Maistrello, L. (2016) ‘Distribution and damage caused by Halyomorpha halys in Italy’, EPPO Bulletin, 46(2), pp. 332–334. Available at: https://doi.org/10.1111/epp.12289.

De Bont, R. (2015) Stations in the Field: A History of Place-Based Animal Research, 1870-1930. Chicago, IL: University of Chicago Press. Available at: https://press.uchicago.edu/ucp/books/book/chicago/S/bo18991041.html (Accessed: 24 February 2024).

Ferrari, V. et al. (2023) ‘Evaluation of the potential of Near Infrared Hyperspectral Imaging for monitoring the invasive brown marmorated stink bug’, Chemometrics and Intelligent Laboratory Systems, 234, p. 104751.

Giannetti, D. et al. (2024) ‘First use of unmanned aerial vehicles to monitor Halyomorpha halys and recognize it using artificial intelligence’, Pest Management Science, n/a(n/a). Available at: https://doi.org/10.1002/ps.8115.

Knorr-Cetina, K. (1992) ‘The Couch, the Cathedral, and the Laboratory: On the Relationship Between Experiment and Laboratory in Science’, in A. Pickering (ed.) Science as Practice and Culture. Chicago, IL: University of Chicago Press. Available at: https://philarchive.org/rec/CETTCT (Accessed: 29 January 2024).

Kohler, R.E. (2002) Landscapes and Labscapes: Exploring the Lab-Field Border in Biology. Chicago, IL: University of Chicago Press. Available at: https://press.uchicago.edu/ucp/books/book/chicago/L/bo3640043.html (Accessed: 18 January 2024).

Wylie, C.D. (2015) ‘“The artist’s piece is already in the stone”: Constructing creativity in paleontology laboratories’, Social Studies of Science, 45(1), pp. 31–55. Available at: https://doi.org/10.1177/0306312714549794.

 
2:20pm - 3:45pm(Symposium) A political (re-)turn in the philosophy of engineering and technology
Location: Auditorium 9
 

A Political (Re-)Turn in the Philosophy of Engineering and Technology - Technological mediation

Chair(s): Michael W. Schmidt (Karlsruhe Institute of Technology)

Technological and engineering choices increasingly determine our world. Presently, this affects not only our individual well-being and autonomy but also our political and collective self-constitution. Think of digital technologies such as social media and their combination with AI, the corresponding echo chambers and filter bubbles, deepfakes, the current state of liberal democracy, and the rise of authoritarian governments. Even though nation states have to reframe sovereignty in a globalised world (Miller, 2022), there remains the potential for impactful collective action with regard to technological choices and practices of engineering, so a simple form of technological determinism should be discarded. In this light, the current focus of ethically normative philosophy of technology on individual action and character is alarmingly narrow (Mitcham, 2024). We urgently need a political (re-)turn in the philosophy of engineering and technology and, correspondingly, a turn towards engineering and technology in disciplines that reflect on the political sphere (Coeckelbergh, 2022).

To foster such a political (re-)turn in the philosophy of engineering and technology, we propose a panel at the SPT 2025 conference that brings together different theoretical perspectives and approaches, reflecting the necessary diversity of such a political (re-)turn. We aim both to examine the contribution of applied political philosophy (e.g. political liberalism; Straussian political philosophy) to the question of technological disruption and to offer a roadmap for an explicitly political philosophy of technology that engages, for example, with the ways in which AI will change the nature of political concepts (e.g. democracy, rights) (Coeckelbergh, 2022; Lazar, 2024). With global AI frameworks already shaping the global political horizon, it is pertinent to acknowledge and assess the current relationship between engineering, technology and politics. The panel might also be the first meeting of a newly forming SPT SIG on the Political Philosophy of Engineering and Technology, which will be proposed to the SPT steering committee.

References

Coeckelbergh, M. (2022). The Political Philosophy of AI: An Introduction (1st ed.). Polity.

Lazar, S. (2024). Power and AI: Nature and Justification. In J. B. Bullock, Y.-C. Chen, J. Himmelreich, V. M. Hudson, A. Korinek, M. M. Young, & B. Zhang (Eds.), The Oxford Handbook of AI Governance. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780197579329.013.12

Miller, G. (2022). Toward a More Expansive Political Philosophy of Technology. NanoEthics, 16(3), 347–349. https://doi.org/10.1007/s11569-022-00433-y

Mitcham, C. (2024). Brief for Political Philosophy of Engineering and Technology. NanoEthics, 18(3), 14. https://doi.org/10.1007/s11569-024-00463-8

 

Presentations of the Symposium

 

Toward a robust political philosophy of technology: aiming to transform and transcend regionalizations

Glenn Miller
Texas A&M University

This presentation offers a brief historical narrative of the state of political philosophy of technology, on the one hand, and of political philosophy as it engages technology, on the other, and sketches some nascent transformative lines of scholarship from philosophers of technology and political philosophers that bridge their usual disciplinary separation.

As philosophy of technology moved from its “classical period” – technology with a capital T, to use Don Ihde’s phrase – to its “empirical turn,” most of the energy in philosophy and technology has been directed toward technological transformations of individual and communal experience and action, with an eye toward immediate outcomes. The “political” dimension of philosophy of technology usually examines experienced or predicted social consequences that accompany technological adoption, evaluates or makes policy recommendations, usually from a Western democratic perspective, or offers critiques of globalization and capitalism.

Over the same period, political science has trended toward quantitative analysis using a social science approach and, in the process, relegated political theorists asking fundamental political questions to the sidelines in many universities. When work in political philosophy, whether quantitative or qualitative, explores technology, it tends to treat it as a background condition or as a driver of technological, military, or economic competitiveness, rather than as a topic that also deserves philosophic reflection, nearly always without reference to research in philosophy and technology.

As human physical and cognitive activity is increasingly mediated by mechanical and digital technologies, as these technologies become more powerful, and as the political beliefs, norms, and institutions that arose in less technological societies, functioning more or less independently, no longer seem adequate, political philosophy of technology must be transformative – extending, synthesizing, and refashioning existing scholarship – and must transcend disciplinary specializations and other limiting regionalizations. This demands theoretical work by philosophers of technology on the structures, components, and processes of political institutions, on the informal modes of interaction that complement, support, or weaken these institutions, and on the foundational concepts on which they are built.

To catalyze more transformative and transcending work in this area, brief sketches of recent scholarship by philosophers of technology and political philosophers from a variety of specializations are provided. Among the former, one can look to my symposium colleagues, perhaps most prominently Carl Mitcham, but also to Mark Coeckelbergh, Peter-Paul Verbeek, Yuk Hui, and others. Among the latter, Jürgen Habermas identifies concerns with social media platforms and the public sphere; Joshua Cohen and Archon Fung explore the presuppositions of modern democracy challenged by digital technology and provide some initial policy recommendations; and Timothy Burns, a political philosopher also inspired by Strauss, aims to develop insights on democracy, technology, and education. The paper concludes with a brief explanation of the opportunities for scholars interested in contributing to a new Special Collection in the journal NanoEthics on “Political Philosophy of Technology.”

 

Artificial intelligence and common goods: an uneasy relationship

Avigail Ferdman
Technion - Israel Institute of Technology

Artificial Intelligence (AI) is potentially the most disruptive technology in human history. Political philosophers have been warning against AI’s erosion of democracy, freedom, rights and justice. Other philosophers like Albert Borgmann (1984) have been worrying that technology might profoundly change us as human beings. The ‘AI virtue ethics’ literature has responded accordingly by urging us to cultivate techno-moral virtues, to reaffirm and reclaim our humanity (Vallor 2016; 2024). To date, however, there is no philosophical account of the moral principles for collective action necessary to ensure that we continue to flourish as humans in the age of AI. As a result, scholarship tends to emphasize an individual-responsibility perspective, crucially missing important structural dimensions of the problem with AI, that is, collective responsibility towards flourishing. Without a political philosophy of flourishing in the age of AI, we are left without a theoretical account of what we owe to each other in terms of the conditions for living well-rounded, flourishing lives in the age of AI.

In response I argue that to obtain a comprehensive account of the obligations we have to one another, we must better understand the conditions for human flourishing, given that AI stands to transform concepts such as human agency and the common good. Specifically, the concept of the ‘common good’ plays a critical role as a bridge between the ethical and the political. Drawing from Alasdair MacIntyre (1984; 1998; 2017) and Charles Taylor (1995), I aim to show that common goods are both constitutive of flourishing, and threatened by AI. The paper will demonstrate the threats that AI might pose for common goods by analysing two concepts: knowledge and moral decision making.

Knowledge can be perceived as an ‘epistemic commons’: the shared production of knowledge as a common good. Knowledge acquired by interacting with AI could accelerate a ‘tragedy of the epistemic commons’ by undermining the social and relational capacities associated with the generation of knowledge, insofar as it replaces attention-sharing with AI-generated information that lacks joint action. AI might also create ‘epistemic distance’ between persons, undermining reciprocity.

Growing reliance on Artificial Moral Advisors for moral decision making, together with the mediation of social relations through AI, may erode the relational dimensions of joint action by transforming the goods in question from common into individual goods, thereby obviating the opportunity to develop joint commitments. AI mediation might create an epistemic distance that makes genuine engagement with others difficult. Underlying this is a process of “ir-reciprocity” (Yeung et al. 2023), in which reliance on AI assistants makes persons suspicious of other persons’ willingness to engage in relationships of trust. Thus, the discussion of the common good in the age of AI must account for how the conditions for reciprocity are impacted by AI.

Using the analysis of how AI might undermine common goods, the paper will propose criteria for determining the conditions under which AI environments could promote common goods that are constitutive of human flourishing, including: joint commitment, joint action, reciprocity, relationality and trust. The discussion will contribute to a better understanding of our collective obligations towards AI environments that are conducive to human flourishing.

Borgmann, Albert. 1984. Technology and the Character of Contemporary Life: A Philosophical Inquiry. Chicago, IL: University of Chicago Press.

MacIntyre, Alasdair. 1984. After Virtue. 2nd edition. Notre Dame, Indiana: University of Notre Dame Press.

———. 1998. “‘Politics, Philosophy and the Common Good.’” In The MacIntyre Reader, edited by Kelvin Knight, 235–52. University of Notre Dame Press. https://doi.org/10.2307/j.ctv19m62gb.17.

———. 2017. “Common Goods, Frequent Evils.” Presented at The Common Good as Common Project, University of Notre Dame, March 26. https://www.youtube.com/watch?v=9nx0Kvb5U04.

Taylor, Charles. 1995. “Irreducibly Social Goods.” In Philosophical Arguments, 127–45. Cambridge, Mass.: Harvard University Press.

Vallor, Shannon. 2016. Technology and the Virtues. New York: Oxford University Press.

———. 2024. The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking. Oxford University Press. https://doi.org/10.1093/oso/9780197759066.001.0001.

Yeung, Lorraine K. C., Cecilia S. Y. Tam, Sam S. S. Lau, and Mandy M. Ko. 2023. “Living with AI Personal Assistant: An Ethical Appraisal.” AI & SOCIETY. https://doi.org/10.1007/s00146-023-01776-0.

 

Explainable AI as a rhetorical technology

Wessel Reijers, Tobias Matzner, Suzana Alpsancar
University Paderborn, Germany

This paper argues that explainable AI should be considered a rhetorical rather than an epistemic technology and outlines the normative implications of this perspective. Explainable AI refers both to a set of technological solutions and to a normative ideal for addressing the ‘black box’ problem of AI systems based on layered networks of artificial neurons. This problem implies that the outputs of these systems cannot be adequately explained because the process that produced them is fundamentally opaque. Technical and policy discourses have put explainable AI forward as an epistemic-normative ideal, closely related to the notion that ‘explaining’ AI outputs would lead to transparency. Thus positioned, explainable AI is considered an epistemic technology that enables a form of truth-finding.

In this paper, we argue that this view of explainable AI is mistaken and that it should rather be considered a rhetorical technology. This is not to say that explainable AI cannot play an epistemic role, for instance in the natural sciences, but rather that when it is considered as a normative principle it appeals to a rhetorical rather than an epistemic ideal. From the outset, it appears that the ‘explainability’ of AI is not always normatively relevant (e.g., in the context of discovering new astronomical phenomena) but becomes relevant in the context of decision-making in the realm of human affairs, which triggers a responsibility requirement. In this context, following Hannah Arendt, we deal with the exchange of (considered) opinions rather than with scrutinizing epistemic truths. Such truths, when placed in the context of human decision-making, may even gain a despotic character. Instead, we need to cultivate a sensus communis, a rhetorical discourse that fosters the virtues of those engaged in the exchange of opinions.

Hence, when we put forward the normative requirement that AI systems be explainable, we consider them in the context of decision-making, supporting a ‘good’ rhetorical exchange. Following Aristotle, rhetoric is the faculty of observing in a given situation the possible means of persuasion. This situation is institutionally and technologically mediated; for instance, in the way that a court (including its buildings and procedures) mediates the rhetorical capacities of jury members asked to consider a verdict in a case. Similarly, explainable AI configures the setting of the rhetorical discourse, for instance when it plays a role in determining a person’s credit score. It does not help unearth a hidden ‘truth’ about the credit score but rather contributes to the exchange of considered opinions concerning why a credit score is justified (or not).

This new perspective has several normative implications. First, it answers some of the valuable criticisms of explainable AI, which consider it a rhetorical foil or ‘fool’s gold.’ Second, it requires us to look beyond the AI system as an ‘autonomous’ agency making decisions, to the whole decision-making context. Third, it urges us to consider how explainable AI should support a rhetorical discourse that is conducive to, rather than detrimental to, the virtues of the people it interacts with.

 
3:50pm - 4:30pmClosing and Members Meeting
Location: Blauwe Zaal

 