Conference Agenda

Overview and details of the sessions of this conference. Select a date or location to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, where available).

Session Overview
Location: Auditorium 5
Date: Wednesday, 25/June/2025
3:00pm - 4:30pm (Symposium) Engineering science, artificial intelligence and philosophy: an interdisciplinary dialogue
Location: Auditorium 5
 

Engineering science, artificial intelligence and philosophy: an interdisciplinary dialogue

Chair(s): Dazhou Wang (University of Chinese Academy of Sciences, Beijing), Christopher Coenen (Institute of Technology Assessment and Systems Analysis (KIT-ITAS)), Aleksandra Kazakova (University of Chinese Academy of Sciences, Beijing)

 

Presentations of the Symposium

 

Basic ideas on engineering science and engineering scientists: a contribution to the philosophy of engineering science

Dazhou Wang1, Christopher Coenen2
1University of Chinese Academy of Sciences, Beijing, 2Institute of Technology Assessment and Systems Analysis (KIT-ITAS)

 

Practice is the source of true knowledge: Lesson from the flight experiments of Samuel Langley and the Wright brothers

Fangyi Shi, Nan Wang
University of Chinese Academy of Sciences, Beijing

 

A reflection on the development of cryogenic engineering

Zhongjun Hu1, Dazhou Wang2
1Chinese Academy of Sciences, Beijing, 2University of Chinese Academy of Sciences, Beijing

 

Effective development of gulong shale oil under the guidance of engineering philosophy

He Liu
Chinese Academy of Engineering, Beijing

 

Yin Ruiyu and metallurgical process engineering: a philosophical reflection

Anjun Xu1, Zhifeng Cui2
1School of Metallurgical and Ecological Engineering, Beijing, 2University of Science and Technology Beijing, Beijing

 

Engineering innovations in novel supercritical fluids energy and power systems: from fundamentals to application demonstrations

Lin Chen
Chinese Academy of Sciences, Beijing

 

The enhancement of technical requirements for astronaut training in deep space exploration and philosophical reflections

Zhihui Zhang
Chinese Academy of Sciences, Beijing

 

Ethical frontiers in human stem cell-based embryo model

Yaojin Peng
Chinese Academy of Sciences, Beijing

 

AI-driven synthetic biology: engineering philosophy, challenges, and ethical implications

Lu Gao
Chinese Academy of Sciences, Beijing

 

Rethinking numbers, data, and algorithms from philosophical perspective

Tiejian Luo
University of Chinese Academy of Sciences, Beijing

 

Bridging the responsibility gap: ethical responsibility pathways and framework reconstruction in artificial intelligence

Shuchan Wan, Cheng Zhou
Peking University

 

Refusal to grant AI subject qualification: reasons and practical approaches

Dongming Cao, Xiaohui Jiang, Junjie Wu
Northeastern University, Shenyang

 
5:00pm - 6:30pm (Symposium) Engineering science, artificial intelligence and philosophy: an interdisciplinary dialogue
Location: Auditorium 5
Date: Thursday, 26/June/2025
8:45am - 10:00am (Papers) Large Language Models I
Location: Auditorium 5
 

How LLMs diminish our autonomy

Björn Lundgren1,2, Inken Titz3

1Friedrich-Alexander-Universität, Germany; 2Institute for Futures Studies, Sweden; 3Ruhr-Universität Bochum



Intimacy as a Tech-Human Symbiosis: Reframing the LLM-User Experience from a Phenomenological Perspective

Stefano Calzati

TU Delft, The Netherlands



Large language models and cognitive deskilling

Richard Heersmink

Tilburg University, The Netherlands

 
10:05am - 11:20am (Papers) Large Language Models II
Location: Auditorium 5
 

“Who” is silenced when AI does the talking? Philosophical implications of using LLMs in relational settings

Tara Miranovic, Katleen Gabriels

Maastricht University, The Netherlands



Connecting Dots: Political and Ethical Considerations on the Centralization of Knowledge and Information in Data Platforms and LLMs

Anne-Marie McManus

Forum Transregionale Studien, Germany



LLMs and Testimonial Injustice

William James Victor Gopal

University of Glasgow, United Kingdom

 
11:50am - 1:05pm (Papers) Algorithms
Location: Auditorium 5
 

Algorithms, abortion, and making decisions

Hannah Steinhauer

Virginia Tech, United States of America

Abortion is usually framed in the context of decisions: someone who supports abortion rights is often referred to as "pro-choice." In the digital age, it is notable that algorithms, the sets of instructions that shape the technologies we use to communicate every day, are also described in the language of decisions, as exemplified by the recent emergence of the "decision sciences." I argue that both abortion and algorithms are framed as decisions in ways that are misleading. Abortion is usually framed as a personal decision for a pregnant person, abstracted from the sociopolitical and cultural context in which that person lives. The language of "choice" falsely assumes that everybody has equal access to abortion, which feminist theorists have pointed out is not the case: even before the overturn of Roe, abortion in the United States was often inaccessible, particularly to marginalized groups. Algorithms, on the other hand, are framed as decisions made solely by computers, which obscures the human bias embedded in these technologies; digital studies scholars have shown that algorithms are a result of human decision-making and replicate and reproduce human biases. Furthermore, algorithms in the digital age are a necessary component of abortion access: Google searches and other internet platforms can lead someone to medically accurate, accessible information about abortion, but they can also lead to misinformation that could ultimately prevent a wanted abortion from happening. I argue that, in the post-Roe United States, and in the digital age generally, algorithmic information technologies will play a central role in reproductive justice. To get the full picture of how digital technology relates to and enables abortion access, it is therefore necessary to understand the dynamics of computer and human decision-making, which are in fact both forms of human decision-making.



The power topology of algorithmic governance

Taicheng Tan

Beijing University of Civil Engineering and Architecture, People's Republic of China

As a co-product of the interplay between knowledge and power, algorithmic governance raises fundamental questions of political epistemology while offering technical solutions constrained by value norms. Political epistemology, as an emerging interdisciplinary field, investigates the possibility of political cognition by addressing issues such as political disagreement, consensus, ignorance, emotion, irrationality, democracy, expertise, and trust. Central to this inquiry are the political dimensions of algorithmic governance and how it shapes or even determines stakeholders’ political perceptions and actions. In the post-truth era, social scientists have increasingly employed empirical tools to quantitatively represent algorithmic political bias and rhetoric.

Despite advancements in the philosophy of technology, which has shifted from grand critiques to micro-empirical studies, it has yet to fully open the space for political epistemological exploration of algorithmic governance. To address this gap, this paper introduces power topology analysis. Topology, a mathematical field that studies the properties of spatial forms that remain unchanged under continuous transformation, has been adapted by thinkers like Gilles Deleuze, Michel Foucault, Henri Lefebvre, Bruno Latour, and David Harvey to examine the isomorphism and fluidity of power and space. Power, like topology, retains continuity even through transformations, linking the two conceptually.

This paper is structured into four parts. The first explores the necessity and significance of power topology in conceptualizing algorithmic power and politics through the lens of political epistemology. The second examines the generative logic and cognitive structure of power topology within algorithmic governance. The third analyzes how power topology transforms algorithmic power relations into an algorithmic political order. The fourth proposes strategies for democratizing algorithmic governance through power topology analysis.

The introduction of power topology analysis offers a reflexive perspective for the philosophy of technology to re-engage with political epistemology—an area insufficiently addressed by current quantitative research and ethical frameworks. This topological approach provides a detailed portrait of algorithmic politics by revealing its power topology. Moreover, it redefines stakeholder participation by demonstrating how algorithms stretch, fold, or distort power relations, reshaping the political landscape. By uncovering the material politics of these transformations, power topology encourages the philosophy of technology to reopen political epistemological spaces and adopt new cognitive tools for outlining the politics of algorithmic governance. Ultimately, this framework aims to foster continuous, rational, and democratic engagement by stakeholders in the technological transformation of society, offering a dynamic and reflexive tool for understanding the intersection of power, politics, and algorithms.



Believable generative agents: A self-fulfilling prophecy?

Leonie Alina Möck1, Sven Thomas2

1University of Vienna, Austria; 2University of Paderborn, Germany

Recent advancements in AI systems, in particular Large Language Models, have sparked renewed interest in a technological vision once confined to science fiction: generative AI agents capable of simulating human personalities. These agents are increasingly touted as tools with diverse applications, such as facilitating interview studies (O’Donnell, 2024), improving online dating experiences (Batt, 2024), or even serving as personalized "companion clones" of social media influencers (Contreras, 2023). Proponents argue that such agents, designed to act as "believable proxies of human behavior" (Park et al., 2023), offer unparalleled opportunities to prototype social systems and test theories. As Park et al. (2024) suggest, they could significantly advance policymaking and social science by enabling large-scale simulation of social dynamics.

This paper critically examines the foundational assumptions underpinning these claims, focusing on the concept of believability driving this research. What, precisely, does "believable" mean in the context of generative agents, and how might an uncritical acceptance of their believability create self-fulfilling prophecies in social science research? This analysis begins by tracing the origins of Park et al.’s framework of believability to the work of Bates (1994), whose exploration of believable characters has profoundly influenced the field.

Drawing on Günther Anders’ (1956) critique of technological mediation and Donna Haraway’s (2018, 127) reflections on "technoscientific world-building", this paper situates generative agents as key sites where science, technology, and society intersect. Ultimately, it calls for a critical reexamination of the promises and perils of generative agents, emphasizing the need for reflexivity in their conceptualization, as well as their design and application. By interrogating the assumptions behind believability, this research contributes to a deeper understanding of the socio-technical implications of these emerging AI systems.

Building on Louise Amoore’s (2020) concept of algorithms as composite creatures, this paper explores the implications of framing generative agents as "believable." In the long run, deploying these AI systems in social science research risks embedding prior normative assumptions into empirical findings. Such feedback loops can reinforce preexisting models of the world, presenting them as objective realities rather than as socially constructed artifacts. The analysis highlights the danger of generative agents reproducing and amplifying simplified or biased representations of complex social systems, thereby shaping policy and theory in ways that may perpetuate these distortions.

References

Amoore, Louise (2020). Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. Durham: Duke University Press.

Anders, Günther (1956). Die Antiquiertheit des Menschen Bd. I. Munich: C.H. Beck.

Batt, Simon (2024). "Bumble Wants to Send Your AI Clone on Dates with Other People's Chatbots." Retrieved from https://www.xda-developers.com/bumble-ai-clone-dates-other-peoples-chatbots/.

Contreras, Brian (2023). "Thousands Chatted with This AI 'Virtual Girlfriend.' Then Things Got Even Weirder." Retrieved from https://www.latimes.com/entertainment-arts/business/story/2023-06-27/influencers-ai-chat-caryn-marjorie.

Haraway, Donna Jeanne (2018). Modest_Witness@Second_Millennium. FemaleMan_Meets_OncoMouse: Feminism and Technoscience. Second edition. New York, NY: Routledge, Taylor & Francis Group.

O’Donnell, James (2024). "AI Can Now Create a Replica of Your Personality." Retrieved from https://www.technologyreview.com/2024/11/20/1107100/ai-can-now-create-a-replica-of-your-personality/.

Park, Joon Sung, Joseph O’Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein (2023). "Generative Agents: Interactive Simulacra of Human Behavior." In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, 1–22. https://doi.org/10.1145/3586183.3606763.

Park, Joon Sung, Carolyn Q. Zou, Aaron Shaw, Benjamin Mako Hill, Carrie Cai, Meredith Ringel Morris, Robb Willer, Percy Liang, and Michael S. Bernstein (2024). "Generative Agent Simulations of 1,000 People." Retrieved from arXiv. https://doi.org/10.48550/arXiv.2411.10109.

 
3:35pm - 4:50pm (Papers) Machine Learning
Location: Auditorium 5
 

“Does it really hurt that much?” The Ethical Implications of Epistemically Unjust Practices in Machine Learning-Based Migraine Assessments

Sasha Lee Smit

University of Edinburgh, United Kingdom



Fair to understand fairness contextually in machine learning

Jyoti Kishore

Indian Institute of Technology, India



Technology as a constellation: The challenges of doing ethics on enabling technologies

Sage Cammers-Goodwin, Michael Nagenborg

University of Twente

 
5:20pm - 6:35pm (Papers) Language
Location: Auditorium 5
 

Is extensible markup language perspectivist?

Timothy Tambassi

Ca' Foscari University of Venice, Italy



Wittgenstein’s Woodsellers and AI: Interpreting Large Language Models in practice: Rationality First vs Coherence First approaches

Mark Robrecht Theunissen

The New School, United States of America



Time and Temporality in Engineering Language

Aleksandra Kazakova

University of Chinese Academy of Sciences, People's Republic of China

 
Date: Friday, 27/June/2025
8:45am - 10:00am (Papers) Generative AI
Location: Auditorium 5
 

A network approach to public trust in generative AI

Andrew McIntyre1, Federica Russo2, Lucy Conover2

1University of Amsterdam; 2Utrecht University



Memes, generative AI, humor, and intimacy: collective empowerment through shared laughter

Alberto Romele1, Fabrizio Defilippi2

1Sorbonne Nouvelle University, France; 2University of Paris Nanterre, France



Creative AI and human achievement

Alice Courtney Helliwell

Northeastern University London, United Kingdom

 
10:05am - 11:20am (Papers) Computing and quantification
Location: Auditorium 5
 

The productive function of technology with regard to subjectification and the example of affective computing

Sebastian Nähr-Wagener, Orsolya Friedrich

FernUniversität Hagen, Germany



The Soylent Mentality: "Efficiency Fundamentalism" and the Future of Food

Ryan Jenkins

Cal Poly, San Luis Obispo, United States of America



The ethical fabric of computational social science research: norms, practices, and values

Chirag Arora

TU Delft, The Netherlands

 
11:50am - 1:05pm (Papers) Ethics VI
Location: Auditorium 5
 

From artificial wombs to lab grown embryos: technologies and the myth of the ex-utero human

Llona Kavege1, Amy Hinterberger2

1The University of Edinburgh; 2University of Washington



AI agency in medical practices: The case of pathology

Océane Fiant

Université Côte d'Azur, France



Could Artificial Intelligence Assuage Loneliness? If so, which kind?

Ramon Alvarado

University of Oregon, United States of America

 
3:35pm - 4:50pm (Papers) Data II
Location: Auditorium 5
 

Rediscover Bodily Experience in the Era of Digital Intelligence through Data Privacy Issues

Zhengyang Zhou

Fudan University, People's Republic of China



Epistemology of ignorance and datafication – To interrogate the necessity for secrecy in AI through marginalised groups’ experiences

Marilou Niedda

Utrecht University, The Netherlands



Reclaiming control of thought and behavior data through the right to freedom of thought

Kristina Pakhomchik

University of Vienna, Austria

 
Date: Saturday, 28/June/2025
8:45am - 9:45am (Symposium) Virtue ethics (SPT Special Interest Group on virtue ethics)
Location: Auditorium 5
9:50am - 10:50am (Symposium) Virtue ethics (SPT Special Interest Group on virtue ethics)
Location: Auditorium 5
11:50am - 12:50pm (Symposium) TechnoPedia - an online Philosophy and Ethics of Technology Encyclopedia
Location: Auditorium 5
 

TechnoPedia - an online Philosophy and Ethics of Technology Encyclopedia

Chair(s): Jochem Zwier (Wageningen University & Research, the Netherlands), Vincent Blok (Wageningen University & Research, the Netherlands), Udo Pesch (Delft University of Technology), Wybo Houkes (Eindhoven University of Technology)

 

Presentations of the Symposium

 

[no separate papers in this symposium, see NB below]

Jochem Zwier
Wageningen University & Research, the Netherlands

 
2:20pm - 3:45pm (Papers) Prediction
Location: Auditorium 5
 

Technological predictions: rethinking design through active inference and the free energy principle

Luca Possati

University of Twente, The Netherlands



AI Oracles and the Technological Re-Enchantment of the World

Lucy Císař Brown1,3, Petr Špecián1,2

1Charles University, Czech Republic; 2Prague University of Economics and Business, Czech Republic; 3Czech Academy of Sciences



Sleepwalkers in a scenario of a happy apocalypse?

Helena Mateus Jeronimo

ISEG School of Economics and Management, Universidade de Lisboa & Advance/CSG, Portugal

 

 
Conference: SPT 2025 · Conference Software: ConfTool Pro 2.6.153
© 2001–2025 by Dr. H. Weinreich, Hamburg, Germany