Conference Agenda

Overview and details of the sessions of this conference.
Session Overview
Session
(Papers) Large Language Models I
Time:
Thursday, 26/June/2025:
8:45am - 10:00am

Session Chair: Alexandra Prégent
Location: Auditorium 5


Presentations

LLMs, autonomy, and narration

Björn Lundgren1,2, Inken Titz3

1Friedrich-Alexander-Universität, Germany; 2Institute for Futures Studies, Sweden; 3Ruhr-Universität Bochum

Suppose Jane talks about Joe to several of her friends, to strangers, indeed, to whomever she meets. While such talk has typically been framed as a violation of privacy rights, we sidestep that discussion and focus on how Jane might affect Joe’s ability to define his own persona, his personal identity, and the narrative about his person. Our concern, however, is not with people talking about other people, but with how technology shapes these narratives.

Our focus is on large language models (LLMs), which have raised a plethora of ethical discussions (e.g., about authorship and copyright). We instead address an issue that has garnered little, if any, attention in the literature: how LLMs shape our personal narratives.

In our talk we aim to do two things. First, we explain how LLMs might diminish individuals’ control over their self-narratives and how that reduced control, in turn, affects their ability to shape their own narrative, and thus their identity (see Titz 2024). The result is a potential diminishment of their autonomy. Here we also argue (contra Marmor 2015) that people’s interest in having reasonable control over self-presentation is an issue of autonomy rather than of privacy rights (see Lundgren 2020a, 2020b; Munch 2020). We aim to show that the way LLMs diminish control over self-narratives has to do with their presenting content through a distinct narrative framing of their own.

Second, we will argue that this may be one instance of a more general loss of control over narratives. The production of narratives is shifting from a human to a computational activity, and AI systems are thereby reducing our human control (i.e., our collective autonomy) over that production. Storytelling is an integral part of human social bonding and cultural production, and it is to be expected that LLMs will negatively affect its quality.

References

Lundgren, B. (2020a). A dilemma for privacy as control. The Journal of Ethics, 24(2), 165-175.

Lundgren, B. (2020b). Beyond the concept of anonymity: What is really at stake? In K. Macnish & J. Galliott (Eds.), Big Data and Democracy (pp. 201–216).

Marmor, A. (2015). What is the right to privacy? Philosophy and Public Affairs, 43(1), 3-26.

Munch, L. A. (2020). The Right to Privacy, Control Over Self‐Presentation, and Subsequent Harm. Journal of Applied Philosophy, 37(1), 141-154.

Titz, I. (2024). Debunking cognition: Why AI moral enhancement should focus on identity. In J.-H. Heinrichs, B. Beck & O. Friedrich (Eds.), Neuro-ProsthEthics: Ethical Implications of Applied Situated Cognition (pp. 103–128).



Intimacy as a Tech-Human Symbiosis: Reframing the LLM-User Experience from a Phenomenological Perspective

Stefano Calzati

TU Delft, The Netherlands

In the first two chapters of their book, Calzati and de Kerckhove (2024) identify two ecologies – language and digital – based on different operating logics: the former enabling the creation and sharing of meaning; the latter enacting sheer computability and efficiency. This leads to what the authors call “today’s epistemological crisis” due to the partially incommensurable world-sensing that these two ecologies produce. What happens when we apply these ideas to Large Language Models (LLMs)? This paper sets out to answer this question, ultimately outlining a research agenda which substantially departs from current studies on LLMs (Chang et al. 2024; Hadi et al. 2024), also within STS (Dhole 2023).

LLMs are deep learning models trained on vast datasets, able to recognize, predict, and generate text in response to provided inputs. To the extent that language is the operating system enabling human communication, its automation inevitably produces effects that reverberate to the core of what it means to know (Mitchell & Krakauer 2023). This requires exploring the epistemological hybridization that arises whenever LLMs and users – i.e., digital and language ecologies – interface with each other.

The popularization of LLMs has led to various applications across fields (Hadi et al. 2024). This growing body of research tends to adopt an essentialist standpoint towards the LLM-user relation, meaning that LLMs and users are considered two distinct poles converging through interaction, mutual feedback, and in-the-loop or ex-post supervision (Chang et al. 2024). While important for benchmarking the effectiveness of LLMs, this essentialist standpoint falls short of capturing the dynamic co-evolution of the human-technology pair – that is, a symbiosis – and its consequent impact on the creation (and validation) of knowledge.

In this regard, it is useful to adopt a phenomenological approach (cf. Delacroix 2024; Harnad 2024) tasked with digging into and untangling the effects and affects that the LLM-user symbiotic experience produces from an epistemological point of view. At stake is the “why” more than the “how”: why do LLMs produce the outputs they produce? Is it possible to detect patterns across LLM-user symbiotic experiences? To what extent does the LLM-user co-dependent experience lead, in the longer run, to forms of idiosyncratic knowledge?

Here I outline three phenomenological research axes, all of which can contribute to tackling these questions beyond current work. One axis focuses on the entanglement between prompts and generated outputs through longitudinal and comparative (across different language systems, as well as intra- and inter-LLM) studies. This could bear witness to converging and/or diverging meaning-making patterns or, conversely, to a degree of randomization in the outputs of the LLM-user experience. A second axis turns attention to what we today consider “hallucinations” or “glitches” of LLMs. The goal here would be to map these occurrences longitudinally and comparatively in order to investigate the extent to which they are indeed rhapsodic outputs or, instead, possible traces of a broader tech-human epistemology in the making. A third axis entails the ethnographic study of LLM use by socioculturally and linguistically diverse users, exploring the affects of the LLM-user experience in terms of the perceived contextual fitness and finesse of the generated outputs. In this regard, the case of 1 the Road (2017), which predates current LLMs and was promoted as the first travelogue written entirely by a neural network, is discussed as a proto-form of artificial hypomnemata.

References

Artificial Neural Network. (2017). 1 the Road. Jean Boîte Editions.

Calzati, S. & de Kerckhove, D. (2024). Quantum Ecology: Why and How New Information Technologies Will Reshape Societies. MIT Press.

Chang, Y., Wang, X., Wang, J., Wu, Y., Yang, L., Zhu, K., ... & Xie, X. (2024). A survey on evaluation of large language models. ACM Transactions on Intelligent Systems and Technology, 15(3), 1-45.

Delacroix, S. (2024). Augmenting judicial practices with LLMs: re-thinking LLMs' uncertainty communication features in light of systemic risks. Available at SSRN, https://dx.doi.org/10.2139/ssrn.4787044

Dhole, K. (2023). Large language models as sociotechnical systems. In Proceedings of the Big Picture Workshop (pp. 66-79). Association for Computational Linguistics, Singapore. https://doi.org/10.18653/v1/2023.bigpicture-1.6

Hadi, M. U., Al Tashi, Q., Shah, A., Qureshi, R., Muneer, A., Irfan, M., ... & Shah, M. (2024). Large language models: a comprehensive survey of its applications, challenges, limitations, and future prospects. TechRxiv. https://doi.org/10.36227/techrxiv.23589741.v6

Harnad, S. (2024). Language writ large: LLMs, ChatGPT, meaning and understanding. Frontiers in Artificial Intelligence, 7, 1490698. https://doi.org/10.3389/frai.2024.1490698

Mitchell, M., & Krakauer, D. C. (2023). The debate over understanding in AI’s large language models. Proceedings of the National Academy of Sciences, 120(13), e2215907120.



Large language models and cognitive deskilling

Richard Heersmink

Tilburg University, The Netherlands

Human cognizers frequently use technological artifacts, referred to as cognitive artifacts, to aid them in performing their cognitive tasks (Heersmink 2013). Critics have expressed concerns about the effects of some of these cognitive artifacts on our onboard cognitive skills. In some contexts and for some people, calculators (Mao et al 2017), navigation systems (Hejtmánek et al 2018), and internet applications such as Wikipedia and search engines (Sparrow et al 2011) transform our cognitive skills in perhaps undesirable ways. Critics have pointed out that using calculators has reduced our ability to perform calculations in our heads; navigation systems have reduced our ability to navigate; and having access to the internet results in storing less information in our brains. Such worries go back to Socrates, who argued that writing ‘will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory’. Socrates’ argument can be generalized beyond writing and memory: if cognitive artifacts perform information-storage or computational tasks for us, the systems in the brain that would otherwise perform or execute those tasks tend to lose their strength or capacity.

In this talk, I’ll extend this worry to large language models (LLMs) in two steps. First, through lack of practice, those who use LLMs extensively may lose some of their writing skills. Writing a text involves various cognitive tasks, including spelling, formulating grammatically and stylistically correct sentences, developing logical relationships between concepts, evaluating claims, and drawing inferences. When we consistently outsource writing tasks to LLMs, it is possible that our writing skills, and the cognitive skills that underpin them, are reduced through lack of practice.

Second, some philosophers have argued that writing is one way through which our minds and cognitive systems are extended (Menary 2007; Clark 2008). On this view, creating and manipulating written words and sentences is part of our cognitive processing. One of the major advantages of writing for cognitive purposes is that it enables us to manipulate external representational vehicles (words, sentences, paragraphs) in a way that is not possible in our heads. For example, having written a sentence, we can read it, evaluate it and rewrite it, if necessary. The same is true for paragraphs and larger pieces of text. Written text provides a cognitive workspace with informational properties that are complementary to the properties of our internal cognitive workspace (Sutton 2010). When we spend substantially less time writing (either with pen and paper or with a computer), that venue for extending our minds and cognitive systems gets less developed, which may impoverish our overall cognitive skills.

References

Clark, A. (2008). Supersizing the Mind: Embodiment, Action, and Cognitive Extension. Oxford University Press.

Heersmink, R. (2013). A taxonomy of cognitive artifacts: Function, information, and categories. Review of Philosophy and Psychology, 4, 465–481.

Hejtmánek, L., Oravcová, I., Motýl, J., Horáček, J. & Fajnerová, I. (2018). Spatial knowledge impairment after GPS guided navigation: Eye-tracking study in a virtual town. International Journal of Human Computer Studies, 116, 15–24.

Mao, Y., White, T., Sadler, P. & Sonnert, G. (2017). The association of precollege use of calculators with student performance in college calculus. Educational Studies in Mathematics, 94, 69–83.

Menary, R. (2007). Writing as thinking. Language Sciences, 29, 621–632.

Sparrow, B., Liu, J. & Wegner, D. M. (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science, 333, 776–778.

Sutton, J. (2010). Exograms and interdisciplinarity. In R. Menary (Ed.), The Extended Mind (pp. 189–225). MIT Press.


