Technological predictions: rethinking design through active inference and the free energy principle
Luca Possati
University of Twente, The Netherlands
This paper explores human-technology relationships through the lens of active inference and the Free Energy Principle (FEP). Active inference, rooted in Bayesian brain theory, suggests that the brain generates predictions about sensory inputs and updates beliefs to minimize surprise or prediction errors, enabling organisms to reduce uncertainty and optimize interactions with their environment. The FEP, introduced and developed by neuroscientist Karl Friston, expands this idea, proposing that biological systems aim to minimize free energy—a measure of the discrepancy between expected and actual sensory input—to sustain homeostasis (Friston 2013; Parr et al. 2022). These frameworks can provide a novel perspective on human-technology interactions.
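For readers unfamiliar with the formalism, the standard definition from the active-inference literature (e.g., Parr et al. 2022), not specific to this paper, is as follows: variational free energy F scores how poorly an agent's approximate beliefs q(s) about hidden states s account for observations o under its generative model p(o, s), and minimizing it bounds surprise:

    F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o,s)\big] = D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] - \ln p(o) \;\ge\; -\ln p(o)

Since the surprise -\ln p(o) itself is intractable, driving F down serves as a tractable proxy for driving surprise down.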
At the heart of this paper is a straightforward yet powerful idea: every artifact embodies a set of predictions—not only about how its user will interact with it but also about the environment in which it operates. At the same time, the artifact reflects the expectations of its designer, who conceived and built it based on assumptions about its purpose, functionality, and intended user behavior. In this sense, artifacts act as mediators, encoding and enabling the interaction of predictive models from multiple agents: the designer, the artifact itself, and the user. This perspective positions artifacts as dynamic networks of predictions, where human-technology interactions are shaped by the continuous coordination and adaptation of these models over time.
The inquiry centers on two primary questions:
How do active inference and the FEP extend to artifacts? Artifacts can be seen as mediators of predictions through three mechanisms: precision crafting, curiosity sculpting, and prediction embedding. Precision crafting directs attention to specific environmental features, aiding users in managing their inferential load (a schematic update is sketched after these questions). Curiosity sculpting enables exploration and uncertainty reduction, refining user predictive models. Prediction embedding encapsulates the artifact’s own predictive capacity, shaping and reflecting its intended use. These mechanisms, though interconnected, can operate independently or progressively.
Can active inference and the FEP inform UX design? By conceptualizing the relationship between designer, artifact, and user as a triad of generative models, this approach provides tools to address challenges in UX design, such as enhancing user engagement and optimizing functionality. It offers a dynamic framework that goes beyond static models, capturing the evolving interactions within the system.
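As a schematic illustration of precision crafting, consider the standard predictive-coding update from the active-inference literature (again, not a formula taken from this paper): a belief \mu is revised by a prediction error weighted by its precision \pi,

    \mu \leftarrow \mu + \eta\,\pi\,\big(o - g(\mu)\big)

where o is the sensory input, g(\mu) the prediction it is compared against, and \eta a step size. An artifact that raises the precision \pi assigned to one of its cues thereby increases how strongly that cue drives the user's belief updating, lightening the inferential load carried by other features.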
To operationalize this framework, this paper introduces the Designer-Artifact-User (DAU) tool, a software platform developed to simulate and analyze artifact-based interactions. The DAU tool leverages the formalism of active inference and the FEP to model how predictions evolve across the triad of designer, artifact, and user, facilitating the refinement of design processes. By employing advanced computational models, the tool provides a powerful resource for exploring the dynamic interactions between these entities. It is specifically designed for researchers, designers, and engineers seeking to deepen their understanding of complex socio-technical systems. The framework's practical application is illustrated through a case study of the smartphone. This analysis examines how smartphones embody and influence the expectations of both users and designers, demonstrating how active inference can enhance interactions and align design intentions with user behavior.
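The abstract does not specify the DAU tool's internals; the following minimal Python sketch is therefore only a hypothetical illustration of the triad-of-generative-models idea, using scalar beliefs and the precision-weighted update above. All names, values, and dynamics are invented for illustration.

    # Hypothetical toy model of the Designer-Artifact-User (DAU) triad.
    # Not the paper's DAU tool: a minimal sketch of coupled predictive
    # models interacting through precision-weighted updates.
    import random

    def update(belief, observation, precision, rate=0.1):
        # Precision-weighted prediction-error update: the belief moves toward
        # the observation in proportion to the precision assigned to it.
        return belief + rate * precision * (observation - belief)

    designer, artifact, user = 0.8, 0.8, 0.2  # beliefs about one latent "intended use"
    artifact_precision = 2.0   # precision crafting: artifact cues weighted strongly
    feedback_precision = 1.0   # weight the designer gives observed user behavior

    for step in range(50):
        # Prediction embedding: the artifact exposes its built-in prediction
        # as a (noisy) cue, and the user updates toward it.
        cue = artifact + random.gauss(0, 0.05)
        user = update(user, cue, artifact_precision)

        # The designer observes (noisy) user behavior, revises expectations,
        # and pushes a design revision back into the artifact.
        behavior = user + random.gauss(0, 0.05)
        designer = update(designer, behavior, feedback_precision)
        artifact = update(artifact, designer, feedback_precision)

    print(f"designer={designer:.2f} artifact={artifact:.2f} user={user:.2f}")

Running the loop shows the three beliefs converging toward mutual agreement, a toy analogue of the coordination and adaptation of predictive models the paper describes.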
References
Friston, K. (2013). “Life as we know it.” Journal of the Royal Society Interface 10(86): 20130475.
Parr, T., Pezzulo, G., and Friston, K. (2022). Active Inference: The Free Energy Principle in Mind, Brain, and Behavior. Cambridge, MA: MIT Press.
AI Oracles and the Technological Re-Enchantment of the World
Lucy Císař Brown1,3, Petr Špecián1,2
1Charles University, Czech Republic; 2Prague University of Economics and Business, Czech Republic; 3Czech Academy of Sciences
Artificial intelligence systems are becoming an increasingly sophisticated and pervasive presence in our day-to-day lives. If their development continues on its present course, they may soon transcend their current status as tools or consultants (Špecián and Císař Brown 2024). Indeed, AIs may emerge as modern-day oracles—entities of great mystique perceived to possess knowledge and capabilities beyond human comprehension, whose utterances carry decisive authority despite their purveyors’ lack of accountability. This paper argues that such a transformation represents a form of technological ‘re-enchantment’ of the world, inverting Max Weber’s concept of the disenchantment of modernity.
As individuals and institutions increasingly rely on AI “oracles” for guidance, they gradually surrender their agency to systems they fundamentally cannot understand (Klingbeil, Grützner, and Schreck 2024). The requisite leap of faith far exceeds the trust placed in human experts. Drawing on Weber’s analysis of rationalization and secularization (Weber 1963), we argue that what is often missed in subsequent Weberian analyses is that ‘disenchantment’ did not imply the loss of the capacity for faith, but rather its transformation. As modernity has progressed, this capacity has been redirected toward technology, with people systematically convinced to ‘have faith’ in supposedly rational structures, such as ‘the Market,’ that they cannot comprehend (Keller 1992).
With the ascent of AI, this epistemic distance widens further. Current AI systems, particularly Large Language Models, are already ascribed the status of competent consultants. As these technologies improve and perceptible errors subside, we anticipate a fateful perceptual shift: from consultant to Oracle—an opaque source of authoritative knowledge whose proclamations are accepted with minimal scrutiny. Unlike traditional oracles, bound to specific times and places, AI offers the potential for a personal, continuous, and all-encompassing relationship with its users—or perhaps, increasingly, disciples—providing apparently omniscient guidance.
The contribution of our paper is a structural analysis of this remarkable technological re-enchantment, wherein increasing AI sophistication leads not to greater comprehension, but to a faith-based abdication of human agency (cf. Collins 2018). We argue that this development represents a pivotal moment in the ongoing dialectic of enchantment, disenchantment, and re-enchantment, challenging us to reconsider fundamental concepts of agency, trust, and the relationship between humanity and technology. By examining how human yearning for omniscient guidance may lead us toward an enchanted acceptance of opaque technological proclamations, we illuminate crucial questions about autonomy and rationality in an AI-mediated world.
Collins, H. (2018). Artifictional intelligence: Against humanity's surrender to computers. Polity Press.
Keller, J. (1992). Nedomyšlená společnost [Unimagined society]. Doplněk.
Klingbeil, A., Grützner, C., & Schreck, P. (2024). Trust and reliance on AI — An experimental study on the extent and costs of overreliance on AI. Computers in Human Behavior, 160, Article 108352. https://doi.org/10.1016/j.chb.2024.108352
Špecián, P., & Císař Brown, L. (2024). Give the machine a chance, human experts ain't that great…. AI & Society. https://doi.org/10.1007/s00146-024-01910-6
Weber, M. (1963). The sociology of religion (E. Fischoff, Trans.). Beacon Press. (Original work published 1922)
Sleepwalkers in a scenario of a happy apocalypse?
Helena Mateus Jeronimo
ISEG School of Economics and Management, Universidade de Lisboa & Advance/CSG, Portugal
This paper builds on the idea that, in addition to the uncertainty that has always existed, contemporary society has introduced new contingencies stemming from increasingly complex and sophisticated techno-scientific systems. Nonetheless, many of the problems we face in the 21st century are analyzed as “risks”, which, with their probabilistic nature, artificially conceal uncertainties and create an illusion of control over randomness and contingency, perfectly aligning with a culture that rejects unpredictability. There is an excessive, hegemonic, and monolithic tendency to apply the probabilistic notion of risk to all kinds of issues, which leads to errors on three levels: (i) theoretically, it conflates contexts of risk, which can be evaluated and calculated in terms of probabilities, with situations of uncertainty, which cannot be assessed through measurable calculations; (ii) analytically, it devalues radical uncertainty, falsely converting it into epistemic uncertainties that can be analyzed through quantitative methods to achieve public credibility and acceptance; and (iii) normatively, the language of risk tends to legitimize, justify, and ratify the pattern and progress of technology, failing to question the foundations of the instrumental vision that strongly permeates modernity.
The concealment of uncertainty occurs because risk operates as an “abstraction” of problems through numbers – a measure that induces a distant and opaque relationship with objects and scientific-technical systems. Risk enables the rejection of contingency by presenting it as something manageable. As the historian of science and technology Jean-Baptiste Fressoz puts it, this has occurred through a process of “reflexive disinhibition”: dangers are acknowledged but subsequently normalized through the creation and success of mechanisms (e.g., regulations, safety standards, administrative surveillance, insurance) that secure their acceptance, generating a climate of “happy apocalypse”.
At the crossroads of the current context, heavily marked by the ecological crisis and the complex systems of Artificial Intelligence, the somewhat ‘unconscious’ ratification of the scientific-technological path is particularly concerning. Consider how algorithms represent forms of invisible power, imperceptible to the senses, and how they construct, organize, and shape our reality—whether in recruitment decisions, judicial rulings, consumer choices, targeted marketing, or even the governance of nations. Consider how data enables human action and interaction to become subject to the logic of prediction and optimization. In short, consider how machine learning embodies the automatism of technology. The acceptance of these emerging factors echoes the concept of “technological somnambulism” described by Langdon Winner, which arises from the perception of technology as a mere tool – neutral, disconnected from its long-term implications, and deeply opaque to its users – rather than as a powerful force subtly but potentially irreversibly restructuring the physical and social world in which we live.
In this paper, I argue that the excessive trust placed in purely technical solutions, the subsumption of uncertainties into risk analyses, the decontextualized strategies of technoscience, and the strong influence of the financial and corporate world urgently need to be countered by the idea of reasonableness, in addition to rationality; that it should be acknowledged that uncertainties cannot be tamed; and that ethical values and political action should mediate technoeconomic progress.