Problem and argument
Should we welcome the use of deep learning (DL) to (re)classify psychiatric diagnostic and disease-risk categories by identifying underlying patterns in neurobiological and other health data?
This question could be answered solely from a clinical perspective: would a data-driven, DL-generated psychiatric nosology result in better healthcare and clinical outcomes [1]? This paper argues, however, that such an approach delivers only a partial picture of the ethically significant considerations. It demonstrates that mental health diagnoses and risk profiles function not only as clinical tools: insofar as they constitute human kinds, they also play key roles in our personal and social identities and in shaping our social environments [2]. I argue, therefore, that ethicists must also ask whether DL-generated psychiatric categories trained on neurodata (and other biodata) would serve the interests of those thus classified beyond the clinic. Moreover, I explain why DL-generated categories that come to function as human kinds are likely to exhibit several problematic features, including opacity, abstraction from lived experience, and amenability to bio-essentialism.
I conclude that, because of these problematic features, DL-generated psychiatric classifications that perform well for their intended clinical purposes could nevertheless fail us when it comes to fulfilling the wider epistemic and practical functions of human kinds, particularly by failing to support the capacity of those classified to understand their experiences and to navigate their socially embedded lives.
This paper exposes the limits of current debates in health AI ethics by highlighting the ways in which new diagnostic categories reontologise our world beyond the clinic. It provides fresh reasons for tempering enthusiasm about the value of DL-generated nosology in psychiatry, and offers conceptual and normative tools with which to ask whether DL-driven diagnostics would really serve the needs of those diagnosed.
Background
There is considerable optimism that DL could provide new data-driven bases for (re)categorising and subdividing diagnostic and prognostic categories [3]. This method might seem to offer particular benefits in psychiatry, where the boundaries of disease categories and the reliability of diagnoses are notoriously contested [4]. This is, therefore, an important juncture at which to ask whether these healthcare applications of DL are ethically desirable.
Method
This paper is grounded in bioethical and conceptual analysis, drawing on scholarship in social ontology concerning the construction and nature of human kinds [5, 6] and on work on embodied identity-making [7]. It is also informed by empirically grounded understandings of the ways in which health categories influence identity-making [2].
References
[1] Wiese, W., & Friston, K. J. (2022). AI ethics in computational psychiatry: From the neuroscience of consciousness to the ethics of consciousness. Behavioural Brain Research, 420, 113704.
[2] Postan, E. (2021). Narrative devices: Neurotechnologies, information, and self-constitution. Neuroethics, 14(2), 231.
[3] MacEachern, S. J., & Forkert, N. D. (2021). Machine learning for precision medicine. Genome, 64(4), 416.
[4] Starke, G., Elger, B. S., & De Clercq, E. (2023). Machine learning and its impact on psychiatric nosology: Findings from a qualitative study among German and Swiss experts. Philosophy and the Mind Sciences, 4.
[5] Hacking, I. (2007). Kinds of people: Moving targets. Proceedings of the British Academy, 151, 285.
[6] Mallon, R. (2016). The construction of human kinds. Oxford University Press.
[7] Postan, E. (2022). Embodied Narratives: Protecting Identity Interests Through Ethical Governance of Bioinformation. Cambridge University Press.