Conference Agenda


 
 
Session Overview

Session: (Papers) Ethics VI
Time: Friday, 27 June 2025, 11:50am – 1:05pm
Session Chair: Nynke van Uffelen
Location: Auditorium 5


Presentations

From artificial wombs to lab-grown embryos: technologies and the myth of the ex-utero human

Llona Kavege1, Amy Hinterberger2

1The University of Edinburgh; 2University of Washington

Two years ago, Nature's Method of the Year award went to research developments in stem cell-based embryo models (SCBEMs), previously referred to as embryoids and sometimes commonly (and erroneously) as synthetic embryos. These three-dimensional cellular assemblies are derived from diploid pluripotent stem cells and can recapitulate early embryonic development. Human SCBEMs are particularly beneficial to scientists, providing a promising alternative model that counters the limitations of nonhuman animal models, the inaccessibility of human embryos during gastrulation, and the ethical restrictions on human embryo research. Most debates over SCBEMs in the literature have focused on nomenclature and on the ethical and legal implications should SCBEMs become so sophisticated that they could not be distinguished from egg-sperm-fertilized human embryos.

Another line of research gaining speed in recent years is the development of devices to enable partial ectogestation. This consists of transferring a fetus from the maternal womb to a machine system that recreates the conditions of gestation, prolonging fetal development ex utero and improving the survival chances of would-be preterm neonates. Here again, the reception of ectogestation systems (EGS) has varied in the literature, revolving around terminological disagreements and the ethical and ontological status of birth and of the fetus post transfer.

In this paper, we approach these two seemingly disparate research undertakings from a lens that examines the technologies that are enabling these advances rather than looking at the organisms whose development they facilitate.

By critically examining the infrastructures supporting SCBEMs and ectogestation — i.e. the bespoke incubators, biobags, and the sociotechnical systems and research agendas driving their development — we aim to move beyond entity-centered ethical discussions and foster a more comprehensive understanding of the role and impact of these technologies in our lives.

We draw on the theory of technological mediation, STS, and bioethics scholarship to examine the interplay between these two technologies and how their development ties to a larger trend in the reproductive sciences, whereby technologies mediate and increasingly frame human embryos and fetuses as standalone entities, detached from a pregnant person's body.

Furthermore, we offer a novel interpretation of a recent Alabama (USA) court case that granted personhood to cryopreserved embryos based on the future possibility of full ectogestation, highlighting the pivotal role these technologies play in reimagining and reinventing how we conceive of and engage with the beginning of life.

Finally, we reflect on the bioethical implications of these technological advances in political climates where they could be used to restrict reproductive rights and access to healthcare.



AI agency in medical practices: The case of pathology

Océane Fiant

Université Côte d'Azur, France

The introduction of artificial intelligence (AI) systems in medicine, whether to assist or replace physicians in certain tasks, is often seen as a promising solution to reduce or even eliminate "inter-observer variability" (Tizhoosh et al., 2021)—that is, discrepancies in the interpretation of the same medical data by different healthcare professionals. This phenomenon is particularly relevant to pathology, a critical medical specialty in cancer care. This specialty involves diagnosing, establishing prognoses, and guiding therapeutic management of cancers based on the analysis of cellular and tissue samples. However, pathologists' analyses can vary significantly depending on both the practitioners and the specific characteristics of the centers where they work (Rabe et al., 2019). In this context, AI is expected to help pathologists refine their analyses and enhance their reproducibility, thus addressing the problem of inter-observer variability (Shafi and Parwani, 2023).

However, the idea that AI systems alone could reduce or even eliminate inter-observer variability merits closer examination. This presentation seeks to demonstrate that achieving this effect requires conditions beyond the inherent properties of these systems. In other words, using the case of pathology, this presentation aims to show that AI systems do not possess inherent "agency" (Verbeek, 2005)—that is, they cannot independently transform medical practices or solve issues such as inter-observer variability. Instead, their agency can only be exercised within a "milieu" (Triclot et al., 2024) that must possess specific characteristics for these systems to produce the expected outcomes.

Based on field research, this presentation will specifically demonstrate that deploying AI systems developed for pathology will only lead to effective homogenization of practitioners’ analyses if glass slides—the primary material on which pathologists base their analyses—are highly standardized. Achieving this, in turn, requires a profound transformation of the instrumentation and workflow of pathology laboratories—a transformation that remains far from complete.

References:

Rabe, K., Snir, O. L., Bossuyt, V., Harigopal, M., Celli, R., & Reisenbichler, E. S. (2019), “Interobserver variability in breast carcinoma grading results in prognostic stage differences”, Human Pathology 94, pp. 51–57.

Shafi, S., & Parwani, A. V. (2023), “Artificial intelligence in diagnostic pathology”, Diagnostic Pathology 18, 109.

Tizhoosh, H. R., Diamandis, P., Campbell, C. J. V., Safarpoor, A., Kalra, S., Maleki, D., Riasatian, A., & Babaie, M. (2021), “Searching images for consensus: Can AI remove observer variability in pathology?”, The American Journal of Pathology 191, pp. 1702–1708.

Triclot, M. (Ed.) (2024), Prendre soin des milieux : manuel de conception technologique [Taking care of milieus: a manual of technological design], Éditions Matériologiques.

Verbeek, P.-P. (2005), What things do: Philosophical reflections on technology, agency, and design, Pennsylvania State University Press.



Could Artificial Intelligence Assuage Loneliness? If so, which kind?

Ramon Alvarado

University of Oregon, United States of America

Generative AI technologies, such as large language models and generative pretrained transformers, as well as the interfaces through which we interact with them, have developed impressively in recent years. Not only can they abstract and generate impressive results, but it is becoming increasingly easy for most of us to prompt them. We can enter input not only in multiple human and machine languages, but also in multiple modalities: text, audio, video, etc.

Given these developments, the uses for AI have transcended the confines of academic, industrial, or scientific settings and entered our everyday lives. Soon, trimmed-down versions capable of running locally on personal devices will be able to accompany and assist us wherever and whenever we need them (Carreira et al., 2023).

Although philosophers of technology have considered the implications of pervasive technical mediation since long before the advent of these more recent technologies, AI is distinct in non-trivial ways. AI technologies, for example, are first and foremost epistemic technologies [1]: technologies primarily designed, developed, and deployed as epistemic enhancers (Humphreys, 2004). Furthermore, in their generative form, their powerful and versatile multimodal output capacities can be seen as enabling them to play some part in what are usually considered social roles (Kempt, 2022; Symons and Abumusab, 2024). At the very least, their sophisticated and responsive output can be seen as playing the role of an interlocutor. As such, AI can be prompted, queried, and interacted with as one would with an assistant, a peer, a friend, a romantic partner, or a caregiver.

Given these latter points, the undeniable prudential good of social connections, and the societal and communicative aspects conventionally taken to be at the center of significant social challenges such as those related to loneliness, technologists and practitioners have begun to ponder and test the use of AI in these socially rich contexts (De Freitas et al., 2024; Savic, 2024; Sullivan et al., 2023). Philosophers have also begun to pay attention to these uses as well as their implications. Symons and Sanwoolu (forthcoming), for example, suggest that because an AI product could be available to many people simultaneously and without conventional social or physical restrictions, it will be unable to meet certain conditions, such as scarcity, uncertainty, and friction, that ground meaningful social connections. If this is true, then AI will be unable to have any bearing on or assuage loneliness, or so some of these arguments go.

In this paper, I argue that there is no such thing as ‘addressing loneliness’ simpliciter. There are distinct kinds of loneliness, and they are responsive to distinct kinds of interventions (Creasy, 2023; Alvarado, 2024). Hence, it may prove more fruitful to ask which kind of loneliness AI could address, if any. I conclude by suggesting that, as an epistemic technology, AI may very well be able to address epistemic loneliness (Alvarado, 2024), a kind of loneliness that arises in virtue of the absence of epistemic peers with whom to construct, accrue, or share knowledge. This may be the case, however, only if we can deem AI an epistemic partner (ibid.): a willing, able, actual, and engaging epistemic peer.

[1] Alvarado suggests that epistemic technologies are epistemic to the degree to which they meet three conditions: they are primarily designed, developed, and deployed (a) in epistemic contexts (e.g., inquiry), (b) to deal with epistemic content (e.g., symbols, propositions), and (c) via epistemic operations (e.g., analysis, prediction) (Alvarado, 2023). While AI is not the only technology to meet some or all of these conditions, according to Alvarado, AI meets them to the highest degree among computational methods, which makes it a paradigmatic example of an epistemic technology.

Bibliography

Alvarado, R. (2022a). What kind of trust does AI deserve, if any? AI and Ethics, 1–15.

Alvarado, R. (2023). AI as an Epistemic Technology. Science and Engineering Ethics, 29(5), 32.

Alvarado, R. (2024). What is Epistemic Loneliness? [Preprint]

Carreira, S., Marques, T., Ribeiro, J., & Grilo, C. (2023). Revolutionizing Mobile Interaction: Enabling a 3 Billion Parameter GPT LLM on Mobile. arXiv preprint arXiv:2310.01434.

Creasy, K. (2023). Loved, yet lonely.

De Freitas, Julian and Uğuralp, Ahmet Kaan and Uğuralp, Zeliha and Puntoni, Stefano, AI



 
Conference: SPT 2025
Conference Software: ConfTool Pro 2.6.154
© 2001–2025 by Dr. H. Weinreich, Hamburg, Germany