Conference Agenda

Overview and details of the sessions of this conference.

 
 
Session Overview
Session
(Papers) Personality, pediatrics and psychiatry
Time:
Thursday, 26/June/2025:
3:35pm - 4:50pm

Session Chair: Luca Possati
Location: Auditorium 2


Presentations

Personality without theory: Engineering AI personalities

Roman Krzanowski1, Isabela Lipinska2

1The Pontifical University of John Paul II in Krakow; 2Polskie Towarzystwo Informatyczne, Warsaw

The development of autonomous AI systems is rapidly advancing, with these systems expected to assume a wide variety of roles traditionally filled by humans, such as companions, advisors, educators, healthcare providers, and even coworkers. As AI systems integrate into these diverse functions, a key challenge is designing role-specific synthetic personalities that align with the tasks they are assigned. This raises fundamental questions about what constitutes a personality in AI systems, how these synthetic personalities relate to moral agency, and the extent to which AI can be imbued with human-like personality traits.

To address these questions, we examine three distinct frameworks for understanding AI personalities: (1) the ontological difference, which posits that human agents and AI systems are fundamentally different; (2) the strong reductive view, which asserts that human agents and AI systems are essentially the same; and (3) the weak functional reduction, which suggests that while human agents and AI systems may be functionally similar, they are not identical. These frameworks influence how much of human personality can be simulated within AI systems and how synthetic personalities might interact with moral decision-making processes.

A central issue in this discussion is whether AI systems can possess personalities and, if so, in what form. While human personalities are complex and often elusive, with traits that are difficult to define or quantify, synthetic personalities in AI systems are programmable and malleable. The challenge lies in the fact that personality in AI is not intrinsic, but rather a set of behaviors and traits designed to suit particular functions. Even with advances in AI, it remains unclear whether we can truly emulate human personalities, as this would require exact replication, which is beyond current technological capabilities.

This inquiry also addresses the role of synthetic personalities in moral agency. If AI systems are designed with particular personalities, how might these personalities influence their decision-making and ethical behavior? Should AI agents be explicitly designed with certain personality traits to promote ethical decision-making, or is this unnecessary given the role-specific nature of their tasks? Furthermore, can we predict how AI systems will behave based on their synthetic personalities? These questions extend to whether traditional personality tests designed for humans could be adapted to evaluate the personalities of AI systems.

We propose that synthetic personalities in AI systems will be primarily shaped by system design requirements and will not be inherently tied to moral behavior. Unlike human personalities, which evolve and adapt over time, AI personalities are explicitly crafted by developers to fulfill specific roles. Because no existing personality theory fits the context of AI systems, we refer to these AI personalities as "personalities without theories".

This presents both opportunities and challenges: on one hand, AI personalities can be optimized for particular tasks; on the other, there is the potential for misuse, manipulation, or unintended consequences, particularly as AI systems become more autonomous and capable of modifying their own behavior. In future iterations of AI, systems with self-learning capabilities might alter their synthetic personalities without human intervention, introducing the risk of unpredictable or undesirable outcomes.

Given these challenges, we argue that existing personality assessment tools for humans are insufficient for evaluating AI personalities. Instead, new frameworks and tests will need to be developed to assess synthetic personalities in AI systems, ensuring they meet design goals and align with the intended ethical guidelines. Such assessments would need to verify that AI systems' personalities are compatible with their roles and are not prone to manipulation or error.

In conclusion, while AI systems may exhibit personality-like traits, these traits will not constitute a true personality in the human sense. Instead, they will be functionally designed features, directly linked to the roles AI systems are tasked with. As such, the relationship between synthetic personalities and moral agency is not inherent but must be explicitly designed and tested. This calls for further research into the ethical implications of synthetic personalities and the development of new methods for evaluating and ensuring that these AI systems fulfill their intended moral and functional roles.



The use of AI in pediatrics - an assessment matrix for consent requirements

Tommaso Bruni, Bert Heinrichs

Forschungszentrum Jülich GmbH, Germany

Within bioethics, extensive attention has already been devoted to the ethics of medical AI. However, the extant literature contains little discussion of how to ethically regulate the development and use of pediatric AI. In this paper, we argue that informed consent requirements for using AI in pediatrics should not be of a "one-size-fits-all" nature but should instead be tailored to the kind of AI tool. We consider four archetypal cases of pediatric AI tools: tools that assist physicians in mundane tasks such as writing medical notes; tools that help the physician diagnose a condition or recommend a treatment; tools that gather data from the patient; and tools that are embedded in medical devices acting on the patient's body, such as automated insulin delivery (AID) systems, or that are otherwise used directly by patients, such as AI tools that provide psychotherapy.

The use of these AI tools comes with different levels of risk, and we argue that informed consent requirements for research and clinical deployment must be adapted to that level of risk. In the case of pediatric AI, however, a direct risk assessment is not easy, mostly because there is little empirical research on these tools. We therefore put forward two dimensions that can act as proxies for the level of risk. First, the level of involvement of the physician, i.e. the extent to which the AI acts on the physician (rather than the patient) or the physician monitors or controls what the AI tool does. Second, the invasiveness of the AI intervention, i.e. how directly it involves the patient's body and mind. We examine where the four archetypal cases are situated in the plane formed by these two dimensions; their position determines how strict the informed consent procedure ought to be.

For instance, an AI tool that helps write notes features a high level of physician involvement and a minimal level of invasiveness. We claim that in such cases consent can be framed in an opt-out fashion and included in the standard form for data processing, for instance as a box to be ticked in case of opposition. In middle-range cases, for instance when the AI provides the physician with a treatment recommendation, consent should be given in written form, for instance by ticking one of two boxes, one for acceptance and one for refusal, but without requiring the full informed consent procedure. When the AI tool is a clinical intervention proper, the traditional, full informed consent procedure is to be used, and the assent of the cognitively mature, competent minor is necessary. Legal requirements (for instance for the processing of health data) vary by jurisdiction and must be upheld; our focus, however, is on the ethical requirements for consent, in the hope that ethical inquiry will guide the future evolution of legal regulation.
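[Editor's illustrative sketch, not part of the submission.] The abstract's two proxy dimensions and three consent levels can be read as a simple lookup over a two-dimensional matrix. The following minimal Python sketch shows one way such a mapping might look; the numeric thresholds, the [0, 1] scaling of the dimensions, and the function and label names are assumptions introduced purely for illustration.

```python
# Illustrative sketch of the consent-requirement matrix described in the abstract.
# Thresholds, scaling, and names are assumed for illustration only.

from enum import Enum


class Consent(Enum):
    OPT_OUT = "opt-out box on the standard data-processing form"
    WRITTEN = "written accept/refuse consent, short of the full procedure"
    FULL = "full informed consent, plus assent of the competent minor"


def consent_requirement(physician_involvement: float, invasiveness: float) -> Consent:
    """Map the two proxy dimensions (assumed to lie in [0, 1]) to a consent level."""
    if invasiveness >= 0.7:
        # Tool acts directly on the patient's body or mind (e.g. AID systems, AI psychotherapy).
        return Consent.FULL
    if physician_involvement >= 0.7 and invasiveness <= 0.3:
        # Physician fully in the loop, minimal invasiveness (e.g. note-writing assistants).
        return Consent.OPT_OUT
    # Middle-range cases, e.g. diagnostic or treatment recommendations to the physician.
    return Consent.WRITTEN


# Example: a treatment-recommendation tool with moderate involvement and low invasiveness.
print(consent_requirement(0.5, 0.2))  # Consent.WRITTEN
```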



 