Conference Agenda

Session Overview
(Symposium) Engineering science, artificial intelligence and philosophy: an interdisciplinary dialogue
Time: Wednesday, 25 June 2025, 5:00pm - 6:30pm

Location: Auditorium 5


Presentations

Engineering science, artificial intelligence and philosophy: an interdisciplinary dialogue

Chair(s): Dazhou Wang (University of Chinese Academy of Sciences, Beijing), Christopher Coenen (Institute of Technology Assessment and Systems Analysis (KIT-ITAS)), Aleksandra Kazakova (University of Chinese Academy of Sciences, Beijing)

As the example of Socrates shows, philosophy is essentially a dialogue. Guided by this spirit, this forum sincerely invites engineering scientists, computer scientists, engineering practitioners, and philosophers of science and technology to engage in an interdisciplinary dialogue. The dialogue aims to explore the nature of engineering science, the complex connections between engineering and science, the characteristics of artificial intelligence, and its impact on engineering science. The symposium covers many cutting-edge fields, including aviation engineering, cryogenic engineering, petroleum exploration engineering, metallurgical process engineering, astronaut training, human stem cell-based embryo models, AI-driven synthetic biology, biomedicine, swarm intelligence, and data science and engineering. Through these cases, participants provide multi-dimensional philosophical insights from their respective professional backgrounds.

With speakers drawn from philosophy, engineering, and computer science, the forum is not only an interdisciplinary dialogue but also a cross-boundary one. By sharing their research findings and reflections, experts from different fields facilitate a deeper understanding of the relationships among natural science, engineering science, and engineering practice, of the relationship between AI and engineering, and of the basic concepts of the philosophy of engineering science, embodying the fundamental spirit of philosophy. Through interdisciplinary collaboration, participants can better understand the complexity of engineering science, explore its potential in practical applications, and lay a solid foundation for future technological innovation. The achievements of this dialogue are reflected not only at the academic level, especially in the philosophy of engineering science, but may also have a profound impact on engineering practice, driving the common progress of engineering and philosophy.

 

Presentations of the Symposium

 

Ethical frontiers in human stem cell-based embryo models

Yaojin Peng
University of Chinese Academy of Sciences, Beijing

In recent years, remarkable progress has been achieved in human stem cell-based embryo models (HSEMs), which have significantly advanced our understanding of early embryonic development, facilitated the creation of precise disease models, and enabled efficient drug screening. By simulating key processes of human development, HSEMs provide unprecedented opportunities to explore fundamental biological questions that were previously inaccessible. However, the rapid progress in this field has sparked widespread philosophical, ethical, social, and regulatory debates. Central concerns include the blurred boundaries of this technology, uncertainties about the ethical and moral status of embryo models, disputes surrounding the ethical sourcing of stem cells, and the broader societal implications of potential misuse or unintended applications. Moreover, the lack of clear and harmonized international standards exacerbates governance challenges, complicating efforts to address these critical issues and fueling public discourse on the acceptability of such technologies.

This study conducts a comprehensive analysis of the latest developments, trends, and defining characteristics of HSEM research while highlighting the ethical and regulatory challenges unique to this field. By leveraging interdisciplinary approaches, we explore the controversies and risks posed by this rapidly evolving technology. HSEM research raises ethical and governance challenges, including the moral status of the models, fidelity verification, and risks in reproductive applications. While current HSEMs lack equivalence to human embryos, future developments may change this. Ethical concerns include fidelity limitations under the “14-day rule”, cloning risks, genome editing, and challenges to traditional family structures. To navigate these challenges, we propose a dynamic ethical risk management framework designed to integrate adaptive regulation, proactive stakeholder engagement, and rigorous scientific oversight. This framework serves as a tool to identify, assess, and mitigate ethical risks while fostering a research environment that harmonizes technological innovation with societal values and ethical principles.

In addition to addressing immediate risks, this framework emphasizes the need for sustainable and forward-looking governance mechanisms. By aligning technological advancements with ethical norms and social responsibility, it aims to promote the standardized and responsible development of HSEM technologies. Furthermore, the framework offers decision-making support for policymakers, providing actionable guidance to establish robust regulations that ensure the safe, ethical, and beneficial application of these groundbreaking scientific advancements.

 

AI-driven synthetic biology: engineering philosophy, challenges, and ethical implications

Lu Gao
Chinese Academy of Sciences, Beijing

From the perspective of the philosophy of engineering, artificial intelligence (AI)-empowered synthetic biology not only fosters the integration of technology and life sciences but also highlights the engineering attributes of biological systems. Synthetic biology, as an interdisciplinary field, aims to enable the precise design of living systems and biological processes, driving breakthroughs in therapies, materials, energy, crops, and data storage. With the introduction of AI, the engineering process in synthetic biology has accelerated, shifting from traditional trial-and-error methods to more automated Design–Build–Test–Learn (DBTL) approaches. This transformation has led to greater standardization and precision in biological research.

However, AI-empowered synthetic biology also presents potential risks. Firstly, the complexity and diversity of biological systems mean that artificially designed organisms may produce unpredictable outcomes, particularly in ecological environments where gene-driven mutations or synthetic organisms may disrupt ecological balance. Secondly, biases and incomplete data within AI technologies may affect the accuracy of synthetic biology designs, leading to faulty decisions or design defects. The engineering nature of synthetic biology not only alters the research paradigm of biology but also raises ethical discussions about whether humanity should control life. AI-empowered synthetic biology may alter biodiversity and species evolution, bringing fears of technological failure. Moreover, in the context of globalization, the influence of multinational corporations on technological innovation and data privacy will pose additional challenges.

In the face of the multiple ethical, technological, and governance challenges raised by AI-empowered synthetic biology, ensuring the sustainability, transparency, and fairness of the technology, as well as developing adaptive regulatory frameworks, will be key to the healthy progression of this field.

 

Bridging the responsibility gap: ethical responsibility pathways and framework reconstruction in artificial intelligence

Shuchan Wan, Cheng Zhou
Peking University

The problem of the responsibility gap highlights how learning automata in AI can make it difficult to attribute moral responsibility to humans. New AI models such as neural networks and reinforcement learning have moved beyond rule-based programming, resulting in unpredictable and uncontrollable behaviors. This creates situations where AI cannot fully bear responsibility, nor can its creators or operators. Bridging this gap requires rethinking human-machine relationships and responsibility allocation.

This paper reviews three approaches to this issue: instrumentalism, human-machine collaboration, and joint responsibility. According to instrumentalism, AI is a tool for achieving a certain goal, not the goal per se. Since values reside in the goal, AI is not value-laden and hence should not be held morally responsible; to hold AI morally responsible is therefore both logically inconsistent and morally irresponsible. The pitfall of instrumentalism, though, is that it neglects the functional side of tools. According to the theory of human-machine collaboration, AI can be held morally responsible, but only when it achieves a specific level of agency. The challenge for this theory lies mainly in the ambiguity of determining which level of agency AI has achieved. According to the joint responsibility approach, AI and its user should be considered a single agent; when this single agent is morally responsible, both AI and its user are morally responsible. The difficulty with this approach, though, is whether there is a practically reasonable and non-arbitrary way to distribute the single agent’s responsibility between AI and its user.

To address the challenges facing these approaches, this paper proposes an extended agent responsibility model. The model is based on an extended view of the agent, as in the joint responsibility approach above. What distinguishes it from the original joint responsibility approach, however, is that it treats the coupling between AI and its user as dynamic rather than static. To deal with responsibility distribution within this dynamic coupling, the paper introduces causal contribution principles and attempts to divide moral responsibility according to the causal responsibility of each party. By doing so, the distribution of moral responsibility becomes significantly more tractable and non-arbitrary than in the original joint responsibility approach.

 

Basic ideas on engineering science and engineering scientists: a contribution to the philosophy of engineering science

Dazhou Wang1, Christopher Coenen2
1University of Chinese Academy of Sciences, Beijing, 2Institute of Technology Assessment and Systems Analysis (KIT-ITAS)

Overall, the philosophy of engineering science remains an underdeveloped research field, as evidenced by the relevant research findings showcased in two handbooks, the Handbook of the Philosophy of Technology and Engineering Science (edited by Anthonie Meijers) and the Handbook of the Philosophy of Engineering (edited by Diane P. Michelfelder and Neelke Doorn). In this paper, we present some basic ideas about engineering science and engineering scientists, based on previous research, to stimulate further philosophical studies of engineering science: (1) The core goal of engineering science is to design and create artifacts (such as tools, equipment, and systems) through scientific methods, making it reasonable to define it as "artificial science." In this sense, artificial intelligence research also belongs to a special type of engineering science, which can be called linguistic engineering science. (2) Engineering science is not directly derived from natural science but is inspired by it and developed through the summarization of engineering practices, forming a unique theoretical system. (3) In contrast to natural science, modeling, approximate computation, and parameter determination hold a central position in engineering science; its theoretical structure is therefore more problem-oriented than a traditional axiomatic system. (4) Engineering science and engineering innovation are interdependent and mutually influential, with no clear sequence between them, reflecting a complex interactive relationship. (5) From a long-term perspective, the development of engineering science combines gradual progress and revolutionary change, but its revolutionary nature differs from the paradigm shifts Thomas Kuhn described for natural science. This indicates that the development of engineering science requires close interaction among natural scientists, engineering scientists, and engineering practitioners. (6) Engineering scientists occupy a "boundary" position between natural science and engineering practice: they not only need to understand scientific principles but also "creatively" apply them to practical engineering problems, making them vital in modern society. We hope that this paper will foster dialogue between the philosophy of science and the philosophy of engineering, deepening the understanding of the nature of engineering science and its unique role in engineering innovation as well as in society at large.



 