The epistemology of AI: public perceptions and interventions for responsible use
Aviv Barnoy
Erasmus University Rotterdam, The Netherlands
The rapid evolution of large language models (LLMs) has challenged traditional epistemological theories, sparking debates on the epistemic value of non-human testimony. Freiman’s (2024) theory of LLM testimony offers a conceptual framework of six epistemic dimensions: intention, normative assessment, trust, propositional content, independence from human involvement, and phenomenological similarity. This study empirically examines these dimensions, aligning them with public perceptions and testing their cohesiveness.
While trust in AI has been extensively studied (Glikson & Woolley, 2020), research on LLMs as knowledge sources remains limited. This study addresses gaps in understanding the public's epistemic perceptions of AI testimony, particularly in fostering critical thinking amid growing concerns about AI bias (Kundi et al., 2023). The primary focus is public perceptions of AI-generated testimony across the six epistemic dimensions outlined by Freiman (2024): 1. having intention; 2. being normatively assessable; 3. constituting an object in trust relations; 4. having propositional content; 5. being generated and delivered without direct human involvement; and 6. being phenomenologically perceived like human testimony.
Using a pre-registered survey of 831 U.S. participants, we explored two research questions (and tested six hypotheses):
1. To what extent do Freiman’s AI testimony criteria (ATC) align with public perceptions?
2. How are these perceptions associated with epistemic beliefs, AI familiarity, usage, and demographics?
The study confirms the epistemic relevance of the six factors, though with varying levels of public agreement. Mean scores across the factors ranged from 2.98 to 3.59 on a 5-point Likert scale, with standard deviations of up to 1.2.
To address RQ1, factor and cluster analyses confirmed the cohesiveness of the six variables, supporting H1. A single-factor solution explained 47% of the variance (KMO = 0.819; Cronbach's α = 0.75), with higher loadings for the traditional dimensions (Trust: 0.81; Intention: 0.76; Normative Assessment: 0.76) than for the newer ones (Phenomenological Similarity: 0.73; Propositional Content: 0.59; Human Involvement: 0.35). Cluster analysis identified two groups: a consensus majority (93%) showing general agreement with the ATC, and a minority (7%) with significantly lower agreement.
Hierarchical regression analysis provided mixed results. Perceptions of ATC were positively predicted by epistemic beliefs (β=0.15, p<0.001), ease of use (β=0.36, p<0.001), and AI usage (β=0.15, p<0.001). However, familiarity with AI was not a significant predictor (β=-0.04, p=0.44). Contrary to expectations, demographic factors such as income, gender, and education were not significant predictors, with age showing only a weak positive effect (β=0.09, p=0.01).
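The internal-consistency statistic reported above (Cronbach's α = 0.75 across six items) can be illustrated with a short sketch. The data below are synthetic, generated from a single shared latent construct purely for demonstration; the variable names and sample size mirror the study only loosely.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Synthetic illustration: 831 respondents rating six items that all
# reflect one latent construct plus independent noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(831, 1))
noise = rng.normal(size=(831, 6))
scores = latent + noise

print(round(cronbach_alpha(scores), 2))
```

With equal latent and noise variances, the theoretical α for six items is about 0.86; weaker inter-item correlations, as in the study's newer dimensions, pull the coefficient down toward the reported 0.75.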
The findings provide empirical evidence supporting the epistemic relevance of Freiman’s six categories of AI testimony criteria (ATC), offering critical validation of the framework's applicability. The results reveal that while all six dimensions hold conceptual significance, public perceptions are not uniform across them. This disparity highlights the need to continue fine-tuning the factors. The dominant cluster’s broad acceptance of the criteria suggests a general receptiveness to AI-generated knowledge.
Predictive analyses further illuminate the factors influencing perceptions of ATC, and the relation of the factors to general epistemic perceptions. Beyond the theory, these findings carry implications for efforts to foster critical engagement with AI outputs.
The epistemic competences needed for Human Machine Interaction: Dealing with individual, social, and other factors to solve the engineering problem of interface design
Michael Poznic1, Vivek Kant2
1Karlsruhe Institute of Technology, Germany; 2Indian Institute of Technology Kanpur
While digitalization grows across industries throughout the world, there is a need to comprehend how humans interact with such digital technologies. Specifically, an area of design engineering known as Human Machine Interaction endeavors to design interfaces for complex sociotechnical systems. The aim of this paper is to analyze, from an epistemological point of view, what is required for interface design in the context of such complex systems. Our focus is the epistemic achievement of engineering understanding that interface designers strive for; the topic of this paper is which epistemic features this achievement of understanding encompasses. We will discuss a particular interface within a power plant as a representative example of an energy infrastructure. In complex systems such as power plants, engineers often control energy-based processes through digital technologies in control rooms. Operators have to comprehend the functioning of these processes through display technologies and the information appearing on the screens (interfaces). A prominent challenge for engineers is how to structure the information in such circumstances so that several background factors can be considered. First, the state of the individual operators, in terms of their physiological and psychological make-up, should be considered: a fatigued operator is more likely to comprehend information incorrectly. Next, team-based factors have to be considered. One example is how to deal with troubleshooting problems in the control room (such as unexpected alarms); working in teams requires specific competencies that affect the performance of the individual, as well as of the whole team in the control room.
Further, organizational and management policies such as the assignment of shift work, incentives, and other aspects of workforce management affect productivity and workers' behavior. Similarly, regulations and legislation impact the functioning of the sector as a whole, as well as the individual operational practices of the operators in the control room. The engineering designer has to take all of these individual and extra-individual factors into account: the design of the interface ultimately depends on them. The challenge for the designer is to gain an understanding of how these different aspects of the problem come together, as reflected in the individual and extra-individual factors. There are various forms of knowledge the engineering designer has to compile. Yet how these need to be collated and comprehended together to yield the understanding required to design an interface for a complex sociotechnical system is an open question. We will discuss a concrete example to spell out the different roles of the epistemic features with regard to the design engineer's understanding of the interplay of factors that contributes to the development of interfaces in such complex systems.
Intimate compression: AI systems and the personal nature of architectural knowledge
Simon Maris
University of Applied Sciences Trier, Germany
We introduce the concept of "intimate compression" to examine how AI systems transform architectural knowledge from embodied practice into encoded patterns. The term "intimate" draws from M. Birna van Riemsdijk's framework of intimate computing [1], which argues that intimate technologies make us vulnerable in a specific way by affecting physical, psychological and social aspects of our identity. Van Riemsdijk links our relations with and through intimate technologies to the process of forming intimate relationships, involving self-disclosure and partner responsiveness. In the context of architecture, intimate knowledge refers to the deeply personal understanding that architects develop through direct bodily engagement with space, materials, and context. As Schrijver notes, much of architecture's knowledge "resides beneath the surface, in nonverbal instruments" [2] that articulate spatial imagination and design process.
Drawing on Polanyi's original concept of tacit knowledge [3] and its architectural interpretation by Pallasmaa [4], we examine how AI systems attempt to compress these intimate dimensions of architectural practice. This compression manifests in how current design tools try to encode spatial understanding, material intuition, and design thinking, aspects that traditionally "escape quantifiable dimensions of research" [2]. Unlike traditional design tools that simply execute commands, AI systems actively attempt to learn and reproduce these personal aspects of architectural work.
We argue that this process of intimate compression raises critical questions about the nature of architectural expertise and the role of embodied knowledge in design. As AI systems compress and encode cultural contexts and complex information, they reshape the foundations of architectural research and practice. This intimate compression has profound implications for how architects engage with pressing challenges such as sustainability, potentially offering new perspectives but also introducing additional layers of complexity.
New forms of vulnerability emerge when deeply personal aspects of architectural practice encounter algorithmic encoding. As AI systems increasingly mediate the relationship between architect and design process, they reshape the intimate foundations of architectural knowledge production. We conclude by discussing the implications of these shifts for architectural education, practice, and research, highlighting the need for critical frameworks to address the personal and embodied dimensions of architectural knowledge in an era of AI-driven design. As the landscape of architectural research evolves, intimate compression emerges as a key concept for understanding and shaping the future of the discipline. This paper aims to bridge concepts from architecture, research, AI, and society to provoke reflection and debate on the consequences of the intimate technological revolution for architectural knowledge production.
References:
[1] Birna van Riemsdijk, M. Intimate Computing. Abstract presented at the Philosophy Conference "Dimensions of Vulnerability". Vienna, April 2018.
[2] Schrijver, L. The Tacit Dimension: Architecture Knowledge and Scientific Research. Leuven, May 2021.
[3] Polanyi, M. The Tacit Dimension. London, 1967.
[4] Pallasmaa, J. The Thinking Hand: Existential and Embodied Wisdom in Architecture. Chichester, 2009.