ISTP 2026 Conference
“Theorizing in Dark Times – Art, Narrative, Politics”
June 8 – June 12, 2026 | Brooklyn, NY, USA
Conference Agenda
Overview and details of the sessions of this conference. Select a date or location to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).
Session Overview

Panel: AI politics

Presentations
The necessity of misunderstanding: How unconscious mistranslations generate what AI cannot
The New School, United States of America

This paper investigates the phenomenon of Model Collapse in artificial intelligence through the lens of psychoanalytic theory. Model Collapse is observed when Large Language Models (LLMs) are trained recursively on AI-generated data, after which models progressively lose the ability to generate meaningful text. This reveals an empirical difference between human and machine-generated discourse that psychoanalytic theory may help explain. While AI systems can produce text often indistinguishable from human writing, they are fundamentally unable to produce what the French psychoanalyst Jean Laplanche terms "enigmatic signifiers": the productive failures of comprehension that paradoxically enable human subjects to generate genuinely new meanings. Laplanche's framework uniquely explains both clinical phenomena and the qualitative difference between human and machine-generated discourse. His account centers on how children's incomplete comprehension of messages from caregivers creates repressed signifiers that constitute the unconscious. These mistranslations are not deficits but productive failures that generate what, from the perspective of LLM Model Collapse, we might term "surplus variance": a mathematical property present in human discourse but absent in synthetic text. This analysis reveals that AI models are paradoxically too successful at integrating training data. They lack the unconscious dimension created by primary repression, in which enigmatic signifiers remain partially untranslated and continue to affect discourse production through slips, errors, and creative transformations.
This has implications for AI development and theoretical psychology, suggesting that more sophisticated language models may require incorporating controlled imperfections that mirror the generative role of repression in human psychology: productive misunderstandings rather than improved pattern recognition.

AI, Subjection, and the Politics of the Informational World
PUC Campinas, Brazil

In the so-called informational age, artificial intelligence has become a central dispositive in the organization of everyday life, crystallizing what Milton Santos described as the globalitarian regime: an asymmetric technical-scientific-informational environment that concentrates power, accelerates dispossession, and reshapes the conditions of human becoming. Far from a neutral tool, AI operates as a political-epistemic force that formats perception, narrows agency, and amplifies forms of dependency that must be theorized critically. Drawing on the cultural-historical psychology of Vigotski and Luria, as well as on critical psychology (Holzkamp; Parker), this paper examines how AI can induce cognitive, physiological, and behavioral dependencies that reorganize the very architecture of subjectivity. These processes align with broader neoliberal rationalities that transform individuals into optimized data-subjects: predictable, governable, and increasingly detached from collective forms of meaning-making. From a political standpoint, AI intensifies what Zuboff calls surveillance capitalism and what Morozov characterizes as the illusion of techno-solutionism, converting theory itself into a battleground. In dark times, marked by disinformation, authoritarian drift, social inequality, and the erosion of public spheres, critical psychological theory becomes a site of resistance capable of unveiling how power infiltrates the micro-dynamics of sense-making.
The idea is not merely to critique technology, but to theorize how subjects can reclaim agency under conditions of epistemic saturation and algorithmic governance. By articulating psychological theory with political economy and critical geography, the paper argues that theorizing remains a vital political practice.

On the dangers of anthropomorphizing artificial intelligence
Roskilde University, Denmark

In the current discourse on artificial intelligence (AI), generative AI, and chatbots, the boundaries between human subjects and sophisticated technology are increasingly, and often intentionally, blurred. This contribution argues that the conception of the human subject, and how it is historically, socially, and discursively produced, is central to critical contemporary AI debates. In business narratives, computer science, and popular media, the achievement of "general artificial intelligence" is presented as almost inevitable. Accordingly, new tests and benchmarks constantly emerge that claim to make "intelligence" measurable. These practices often draw uncritically on dualistic psychological traditions that separate subject and world, mind and materiality, cognition and practice. It is argued that these divisions make it possible to describe machines through anthropomorphic analogies and, at the same time, to conceptualize humans through machine metaphors. Terms such as "understanding," "reasoning," and "learning" circulate as seemingly context-free abilities of generative AI systems, rather than being understood as deeply relational, embodied, and socially situated practices. This conceptual distortion is politically and economically situated: it is closely interwoven with promises of growth, funding logics, and capitalist narratives of the future, while questions of subjectivity and sustainability are systematically treated as secondary. This contribution advocates for a critical, dialectical research practice in current debates on AI's capabilities.
Only through conceptual transparency and reflexive research practices will it be possible to adequately investigate both the actual technical potential and the societal significance of AI.

