Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session: (Papers) Ontology
Time: Friday, 27 June 2025, 10:05am - 11:20am

Session Chair: Ibo van de Poel
Location: Blauwe Zaal


Presentations

Information as dispositions: an ontological analysis

Mitchell Roberts

Texas A&M University, United States of America

Information is commonly understood with reference to data or signals (Shannon 1948; Dretske 1981; 2008; Landauer 1996). For example, Luciano Floridi summarizes the General Definition of Information (GDI) as “well-formed, meaningful data” (2013; 2010). Data here refers to the (typically physical) entities that are “manipulated” or “transformed” into information. Examples include DNA, high and low voltages in a computer, and English words on a piece of paper. It is likewise common to consider these objects as “containing” or “embedding” information. But what do we mean by this? Is information something “over and above” the physical entities that carry it, or is it somehow reducible to physical phenomena? In this essay, I aim to answer these questions. Namely, I argue that information can be reduced to physical dispositions. Dispositions are physical properties of objects that are typically understood in terms of counterfactual conditionals – “X is disposed to Φ when M iff X would Φ if it were the case that M.” Some physical objects (like DNA and tree rings) are disposed to play a certain functional role in a given system, and it is these dispositions that we refer to when we speak of information. For instance, consider a basic digital calculator. In this case, the high and low voltages (the electrical bits) are the data and the hardware of the calculator is the relevant system. The high and low voltages contain information because they are disposed to interact with the hardware in such a way that certain calculations are made. An important consequence of this view is that information is mind-independent – it exists even in the absence of any human perceivers.



The Picture of Existence: Ontological commitments and existential trade-offs in the age of intimate technologies

Ângelo Nunes Milhano

University of Évora, Portugal - Praxis: Centre of Philosophy, Politics and Culture

The intimate technologies we have come to depend upon — such as smartphones, wearables, smart glasses, or VR headsets (and perhaps, in the foreseeable future, even AI-powered brain implants) — seem able to create an image of human behavior through algorithmic interpretation of routines and social interactions, promoting what, drawing inspiration from Heidegger’s "The Age of the World Picture" (1950), we will refer to as a “picture of existence”. The existential trade-offs and ontological commitments underlying the use and mass appropriation of these digital technologies create a very particular opening of the world, through which they subtly exert power over their users by shaping their perceptions and actions in accordance with a specific understanding of what it means to be human: a producer and/or consumer of data.

The mediation of our identity performed by these technologies appears to foster a “hyperreal” understanding of our subjective experiences. As a result, individuals have begun to prioritize their curated digital personas over genuine engagement with the real world. The digital age’s “picture of existence” has increasingly replaced the necessary confrontation with our individuality with a compulsive need for instant connectivity, thereby amplifying psychological distress and fostering existential indifference.

While opening new possibilities of being-in-the-world, these technologies appear to have deeply altered human subjectivity and existential experience. This shift can be exemplified by the pervasiveness of the digital representation of the self, which fosters dependence on constant connectivity and diminishes opportunities for authentic self-reflection and connection with others. Drawing from existential, phenomenological, and postphenomenological perspectives on technology, including works by Heidegger, Stiegler, Yuk Hui, and Ihde, among others, the paper proposed here discusses how intimate technologies can constrain our authentic selfhood by imposing predefined frameworks for interaction with the world and the other beings/entities we share it with. This aligns with Heidegger’s notion of “enframing”, through which he criticized technology’s role in reconfiguring human relations with the world by perceiving it through the lens of a resource available for instrumentalization. This paper calls for philosophical reflection on the existential consequences of the mass assimilation of intimate technologies into our lives and the potential loss of authentic selfhood in the digital age. We argue that, by critically examining these technologies’ inherent existential trade-offs and underlying ontological commitments, individuals might be able to reclaim the ontological grounding of their selfhood and resist the passive acceptance of the “picture of existence” that these technologies impose.



Re-ontologising psychiatric illness using deep learning: ethical concerns beyond the clinic

Emily Postan

University of Edinburgh, United Kingdom

Problem and argument

Should we welcome the use of deep learning (DL) to (re)classify psychiatric diagnostic and disease-risk categories by identifying underlying patterns in neurobiological and other health data?

This question could be answered solely from a clinical perspective – would data-driven, DL-generated psychiatric nosology result in better healthcare and clinical outcomes [1]? This paper argues, however, that this delivers only a partial picture of the ethically significant considerations. It demonstrates how mental health diagnoses and risk profiles function not only as clinical tools: insofar as they constitute human kinds, they also play key roles in our personal and social identities and in shaping our social environments [2]. I argue, therefore, that ethicists must also ask whether DL-generated psychiatric categories trained on neurodata (and other biodata) would serve the interests of those thus classified beyond the clinic. Moreover, I explain why DL-generated categories that are treated as human kinds are likely to exhibit several problematic features, including opacity, abstraction from lived experience, and amenability to bio-essentialism.

I conclude that these problematic features mean that DL-generated psychiatric classifications that perform well for their intended clinical purposes could nevertheless fail us when it comes to fulfilling the wider epistemic and practical functions of human kinds – particularly by failing to support the needs of the people classified to understand their experiences and navigate their socially embedded lives.

This paper exposes the limits of current health AI ethics debates by highlighting the way that new diagnostic categories re-ontologise our world beyond the clinic. It provides fresh reasons for tempering enthusiasm about the value of DL-generated nosology in psychiatry, and offers conceptual and normative tools with which we can ask whether DL-driven diagnostics would really serve the needs of those diagnosed.

Background

There is considerable optimism that DL could provide new data-driven bases for (re)categorising and subdividing diagnostic and prognostic categories [3]. This method might seem to offer particular benefits in psychiatry – where the boundaries of disease categories and reliability of diagnoses are notoriously contested [4]. This is, therefore, an important juncture to ask whether these healthcare applications of DL are ethically desirable.

Method

This paper is grounded in bioethical and conceptual analysis, drawing on scholarship in social ontology concerning the construction and nature of human kinds [5, 6] and work on embodied identity-making [7]. It is also informed by empirically-grounded understandings of the ways that health categories influence identity-making [2].

References

[1] Wiese, W., & Friston, K. J. (2022). AI ethics in computational psychiatry: From the neuroscience of consciousness to the ethics of consciousness. Behavioural Brain Research, 420, 113704.

[2] Postan, E. (2021). Narrative devices: Neurotechnologies, information, and self-constitution. Neuroethics, 14(2), 231.

[3] MacEachern, S. J., & Forkert, N. D. (2021). Machine learning for precision medicine. Genome, 64(4), 416.

[4] Starke, G., Elger, B. S., & De Clercq, E. (2023). Machine learning and its impact on psychiatric nosology: Findings from a qualitative study among German and Swiss experts. Philosophy and the Mind Sciences, 4.

[5] Hacking, I. (2007). Kinds of people: Moving targets. Proceedings of the British Academy, 151, 285.

[6] Mallon, R. (2016). The construction of human kinds. Oxford University Press.

[7] Postan, E. (2022). Embodied Narratives: Protecting Identity Interests Through Ethical Governance of Bioinformation. Cambridge University Press.



 
Conference: SPT 2025
Conference Software: ConfTool Pro 2.6.154
© 2001–2025 by Dr. H. Weinreich, Hamburg, Germany