Between mind and machine: symbolic and phenomenological roots of computation
Chair(s): Lorenzo De Stefano (Università degli Studi di Napoli Federico II, Italy), Felice Masi (Università degli Studi di Napoli Federico II, Italy), Francesco Pisano (Università degli Studi di Napoli Federico II, Italy), Luigi Laino (Università degli Studi di Napoli Federico II, Italy), Claudio Fabbroni (Università degli Studi di Napoli Federico II, Italy)
In the modern era, computation profoundly shapes how we think, communicate, and engage with the world. Yet the philosophical foundations of this transformative force—rooted in formal logic and symbolic manipulation—often remain underexamined. This symposium, “Between Mind and Machine: Symbolic and Phenomenological Roots of Computation,” brings together five complementary perspectives that reveal how the concept of computation both arises from and impacts human cognition, culture, and our very sense of self.
The first presentation examines writing as a cognitive practice interwoven with calculation. Although many fields—from archaeology to philosophy of mathematics—treat writing as pivotal to conceptual ordering, Husserl’s idea of “language as calculus” has seldom been applied to his understanding of writing. By revisiting Husserl from this angle, the speaker argues that writing functions as more than a communication tool: it can serve as a calculative medium yielding what one might call “computational evidence,” a clarity generated by the systematic manipulation of symbols. This approach expands our view of writing beyond a passive receptacle of ideas, suggesting instead a dynamic interplay between phenomenology and calculation.
Moving from symbolic notation to mechanical logic, the second talk explores Jevons’ “Logical Piano” (1866)—one of the earliest machines to automate logical inferences. Though overshadowed in standard accounts by Babbage or Boole, Jevons’ device offers a critical insight into the paradox of “intimate technology.” Computation, by its design, is universal and impersonal, yet it increasingly encroaches upon the most private areas of human life. Highlighting how Jevons improved on Boole’s symbolic logic and anticipated modern programming languages, the speaker shows that the idea of a “computational subjectivity”—rooted in Kantian rational autonomy—already contained latent tensions between algorithmic impersonality and personal meaning-making. This tension reverberates in our own time, illuminating why advanced computing technologies can feel both indispensable and disquietingly detached from human concerns.
The third presentation focuses on Turing, traditionally hailed as a principal figure in AI. Turing’s pivotal contributions—his 1936 paper “On Computable Numbers, with an Application to the Entscheidungsproblem” and his 1950 essay “Computing Machinery and Intelligence”—laid the groundwork for viewing machines as potential analogues to human thought. Here, Eugen Fink’s distinction between “thematic” and “operative” concepts is used to investigate the seldom-scrutinized assumptions beneath Turing’s explicit claims. These include behaviorist and cybernetic elements that frame cognition as algorithmic rule-following. By reading Turing’s response to Ada Lovelace’s skepticism through this lens, the talk shows how Turing’s vision of the “child-machine” presupposes a specific view of learning and development. Such “operative” concepts continue to shape debates on AI: they predispose us to see intelligence in computational terms, even when this perspective sidesteps crucial questions about consciousness, creativity, or the nature of understanding.
The fourth presentation addresses whether the brain itself can be literally viewed as a computer. In contemporary neuroscience, many models interpret neuronal activity as inputs processed by algorithms. Despite the success of these computational approaches, the presenter questions whether such models capture the brain’s true workings or merely provide convenient abstractions. Given the brain’s staggering complexity, computational theories often rely on averaging data and filtering out variances. The speaker contends that these simplifications do not necessarily reveal an inherent computational essence. Rather, they offer valuable but ultimately heuristic insights—tools for managing complexity, rather than unearthing a fundamental computational identity of the brain. This reevaluation reminds us of the broader symposium theme: while formal frameworks can illuminate phenomena, they can also mask the unique richness and variability of lived experience.
In the final talk, computation is treated as a “symbolic form” in Ernst Cassirer’s sense—on par with art, myth, or language. Like these established symbolic forms, computation not only processes but also structures how we conceive and engage with the world. Large Language Models, for instance, handle symbols with astonishing facility yet lack reflexive consciousness. The speaker coins the phrase “shortcut-Geist” to highlight that while LLMs exhibit remarkable pattern recognition and problem-solving, they do not fulfill the deeper cultural-intentional criteria of human Geist. Through Cassirer’s conceptual framework, the presentation stresses that computational environments mold human reality as much as they mirror it. We have thus entered an era where code, algorithms, and digital infrastructures act as powerful cultural forces, shaping perceptions, values, and identities.
Taken as a whole, these five contributions shed light on a crucial paradox: computation, though originally framed as a purely formal and impersonal discipline, is integral to human life—whether via writing practices that encode cognitive strategies, logic machines that promise universal reasoning, AI architectures that blend mechanical rules with behavioral theory, or neuroscientific models that render the brain in algorithmic terms. By examining the historical arcs that gave rise to today’s technologies, alongside phenomenological insights into the subjective dimension of thinking, the symposium underscores how crucial it is to understand computation in a manner that neither overstates its universality nor underestimates its cultural entanglements.
In illuminating these entanglements, the symposium invites a recalibration: might we better reconcile formal computation with the diverse, context-dependent nature of human cognition if we treat symbolic activity as both technological and experiential? Could a re-examination of writing, Jevons’ logic, Turing’s AI concepts, the brain-as-computer analogy, and Cassirerian symbolic forms help us identify presuppositions that shape current debates about learning machines, consciousness, or the ethical boundaries of intimate computing devices?
Ultimately, the symposium demonstrates that computation is neither a mere technical tool nor an immutable feature of the natural world. It is, instead, a complex cultural practice and symbolic framework that interacts with—while also profoundly reshaping—human modes of thought and being. By joining historical, phenomenological, epistemological, and cognitive approaches, the symposium illuminates a shared set of questions: What does it mean to think about—and with—computation today? Where do the boundaries lie between human creativity and formal process? How can a deeper historical and philosophical perspective guide our response to emerging digital paradigms? In posing these questions, the symposium aims not only to clarify the roots of computational thinking but also to enrich the ongoing dialogue about technology’s place within contemporary culture.
Presentations of the Symposium
Writing as calculus. New sciences of writing and phenomenology
Felice Masi, Università degli Studi di Napoli Federico II, Italy
Since the 1990s, writing studies have undergone a concrete turn, focusing on the manipulation of material symbols and the links between writing and calculation. On the other hand, the claim that Husserl had an idea of language as calculus has not produced a revision of his conception of writing. I intend to propose a neo-Husserlian analysis of writing as a cognitive function of computation. The essay will thus be divided into four parts. In the first, I will outline the reasons for a neo-Husserlian supplement to the science of writing. In the second part, I will present the main results that archaeological investigations, psychological-cognitive analyses, philosophy of mathematical practice and philosophy of mediality have achieved on writing. In the third part, I will schematically present the Husserlian definitions of counting, operation, calculation, symbolic writing and reading. In the fourth part, I will show the different uses of writing for the achievement of the evidence of clarity and the evidence of distinction, and why the latter could also be defined as computational evidence.
(Logical) Piano lessons: Jevons and the roots of computational subjectivity
Francesco Pisano, Università degli Studi di Napoli Federico II, Italy
The concept of intimate technology owes a particular paradoxical nuance to the juxtaposition of intimacy and computation. The digital age, which allows for a deep diffusion and embedding of technology into one’s personal life, is historically and conceptually rooted in the logical theorization of recursively defined procedures for processing a multiplicity of inputs. Such computational procedures – automatic, universally applicable, input-independent – can be seen as structurally impersonal, even as the input (or set of inputs) corresponds to significant, appropriately encoded aspects of each individual’s personal life. The vague sense of a profound incompatibility between computing and personal life pervades popular culture. This talk will offer some historical-critical coordinates to better frame this feeling. However, the breadth of the logical prehistory of the digital age makes it necessary to narrow the scope, and I will focus on one case in particular: William Stanley Jevons’ (1835-1882) Logical Piano. Constructed in 1866 and first described in an 1870 paper, the Logical Piano was the first modern machine to compute logical inferences automatically. The description and contextualization of this logical machinery will be used as a case study to investigate the relationship between computation, automation, and impersonality. After a summary illustration of the machine’s internal structure, I will discuss the connection between this structure and Jevons’ system of equational logic, which derived from (and in some critical ways improved upon) the symbolic logic developed by Boole over two decades earlier. I will then highlight how, from the dialogue between Boole and Jevons, a link between the formalization of inference calculus, its automation, and its universalization emerges. In the British logico-philosophical culture of the time, these features were attributed to an idealized notion of computational subjectivity of Kantian derivation. This subjectivity was defined by its freedom from human cognitive limitations and its structural logical impersonality. Following the complex lines that bind this nineteenth-century history to the developments that gave rise to the first precise notions of algorithmic computability in the 1930s, and onward to the conceptual apparatus that by the 1950s had led to programming languages, one recognizes again and again that this peculiar impersonality cannot but haunt any computing technology and thus generate internal friction in any concept of intimate technology.
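To give a rough sense of the kind of procedure the Logical Piano mechanized, the following sketch reconstructs Jevons-style indirect inference in modern code: enumerate the “logical alphabet” of all combinations of terms and mechanically discard those inconsistent with the premises. This is an illustrative reconstruction under stated assumptions, not a description of the machine’s actual keyboard mechanism, and the example premises are invented.

```python
# Illustrative reconstruction (in modern code) of Jevons-style indirect inference:
# enumerate the "logical alphabet" of all truth-combinations of the terms and
# mechanically strike out those inconsistent with the premises.
# A sketch for exposition only, not a model of the Logical Piano's mechanism.
from itertools import product

terms = ["A", "B", "C"]

# Invented example premises: "All A are B" and "No B is C".
premises = [
    lambda row: (not row["A"]) or row["B"],   # A -> B
    lambda row: not (row["B"] and row["C"]),  # not (B and C)
]

# Every assignment of true/false to the terms (the "logical alphabet").
alphabet = [dict(zip(terms, values))
            for values in product([True, False], repeat=len(terms))]

# Mechanical elimination: keep only the combinations consistent with all premises.
consistent = [row for row in alphabet if all(p(row) for p in premises)]

for row in consistent:
    print(row)
# Reading off the surviving combinations yields the consequence that no A is C.
```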
Turing’s design of a brain. Operative and thematic concepts of computing machinery
Lorenzo De Stefano, Università degli Studi di Napoli Federico II, Italy
Alan Turing is unanimously recognized, alongside von Neumann, as a founding father of Artificial Intelligence. Although the possibility of constructing logical or intelligent machines had already been explored by Babbage, Lovelace, and Jevons, and although the first mathematical model of a neural network was outlined by McCulloch and Pitts, it is in Turing’s 1936 essay On Computable Numbers, with an Application to the Entscheidungsproblem—and above all in Computing Machinery and Intelligence, published in Mind in 1950—that the idea of a potentially human-like artificial intelligence is explicitly thematized. The Turing Machine (TM) is in fact a machine that, by following a set of rules (the machine’s “program”), can perform any step-by-step computational procedure. This model captures the essence of algorithmic computation and underpins much of modern computer science theory.
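To make the rule-following model concrete, here is a minimal, self-contained sketch of a Turing Machine simulator; the transition table (a program that increments a binary number) is invented purely for illustration and is not drawn from Turing’s paper.

```python
# Minimal Turing Machine simulator: a finite table of rules ("the program"), read
# against the symbol under the head, fully determines each step of the computation.
# Illustrative sketch only; the increment program below is invented for the example.

def run_tm(tape, transitions, state="start", blank="_", max_steps=1000):
    """Run a one-tape Turing Machine until it halts or max_steps is reached."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        # Look up the rule for (current state, scanned symbol).
        new_symbol, move, state = transitions[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    span = range(min(cells), max(cells) + 1)
    return "".join(cells.get(i, blank) for i in span).strip(blank)

# Program: walk to the rightmost digit, then add 1 to a binary number.
increment = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),  # 1 plus carry -> 0, carry moves left
    ("carry", "0"): ("1", "R", "halt"),   # absorb the carry
    ("carry", "_"): ("1", "R", "halt"),   # overflow: write a leading 1
}

print(run_tm("1011", increment))  # prints "1100" (11 + 1 = 12 in binary)
```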
Turing’s work lays the theoretical and epistemological groundwork for future debates on Artificial Intelligence, culminating in the foundational 1956 Dartmouth Conference. Yet what are the epistemological premises on which Turing establishes his parallel between computers and thought, and thus between the functioning of the machine and the human brain?
The aim of this essay is to investigate the hermeneutic, epistemological, and ontological assumptions that underpin Turing’s vision and would go on to influence the subsequent debate on Artificial Intelligence. To this end, the theoretical framework of reference is the distinction drawn by the phenomenologist Eugen Fink in his 1957 essay Operative Begriffe in Husserls Phänomenologie between thematic concepts—namely, the descriptive and explicit concepts of a given philosophical perspective (for instance, intentionality in Husserl or the transcendental in Kant)—and operative concepts, which operate behind the thematic concepts and are often borrowed from different models of thought. The latter remain in the background, not even explicit to the author who employs them, yet they continue to act behind a philosophical view as an unnoticed medium through which the thematic concepts are conceived.
Within this framework, the theoretical model that Fink applies to Husserlian philosophy is applied here to the thematic concepts Turing develops in Computing Machinery and Intelligence, with the aim of bringing to light which operative conceptual a priori are at play in Turing’s pivotal response to the guiding question “Can machines think?” and, consequently, in his conception of Artificial Intelligence. Particular attention will be paid to the conception of the human being implied by the imitation game, and to the modern—yet also cybernetic and behaviorist—epistemological and conceptual foundations that inform the relationship between human and artificial intelligence. The goal is to expose which vision underlies the notion of a learning machine or a child-machine, and on which assumptions Turing bases the analogy between human thinking and computing machinery. This amounts to highlighting those epistemological biases that have conditioned the debate on Artificial Intelligence from the outset and that continue to manifest themselves in contemporary discussions. The presentation is divided into four parts: 1. Methodological approach (Fink’s concept-theory). 2. Exposition of Turing’s theoretical framework. 3. Identification of the operative concepts in Turing’s view and their historical and genealogical origins. 4. What remains of Turing’s conceptual framework in the contemporary debate?
Is your brain a sort of computer?
Claudio Fabbroni, Università degli Studi di Napoli Federico II, Italy
The relationship between brain and computer is central to neuroscientific research. Indeed, it is estimated that more than 80% of the articles in theoretical neuroscience focus on computational modeling, because of the efficacy this mode of investigation has demonstrated. Owing to the success of the computational approach, the majority view among neuroscientists is a literal interpretation of the brain-computer comparison, according to which brains are in fact systems that, at various degrees of complexity, encode inputs, manipulate representations, and transform them into outputs according to specific algorithms in order to respond to distal stimuli. That is, the literal, realist interpretation that the brain is a computer supposes that computational structure is essential to the brain having the cognitive capacities that it does.
This presentation argues against this realist stance, in favor of a more pragmatic one that underlines the heuristic value of the brain-computer comparison. The realist account seems to lack an adequate appreciation of the simplifications and abstractions at work in neuroscientific modeling, which are necessary to make the brain’s activity intelligible to human scientists. Indeed, this abstraction is unavoidable given the brain’s billions of non-identical neurons and trillions of ever-changing synapses, which are never in the same state twice and exhibit extremely high trial-to-trial variability. In fact, computational neural models target averaged data, namely simplified and regularized patterns that are created through data processing, with the exclusion of outliers and the filtering of noise from signal. They are artifacts that grant us epistemic access but do not reflect an inherent natural regularity. That is to say, the mathematical structures that make cognitive functions intelligible to the scientist should not be taken as straightforward discoveries of inherent, human-independent computational capacities of the brain, but as useful mathematical descriptions that, through the analogy with computers, shed some light on the way the brain works. Thus, a pragmatic, analogical understanding of the brain-computer relationship, which declines to infer from the success of the computational approach that the neural system and its model compute the same functions, seems to be a more appropriate understanding of these models than a literal one.
Computation as a symbolic form between humans and machines
Luigi Laino, Università degli Studi di Napoli Federico II, Italy
This paper argues that computation, with its own syntax, grammar, and logic, constitutes a unique symbolic form, akin to language, myth, and art as described by Ernst Cassirer in The Philosophy of Symbolic Forms. Like these other symbolic forms, computation provides a framework for representing, manipulating, and transforming information, fundamentally shaping human cognition and understanding of the world. Accordingly, I will divide the talk into three parts.
First, the presentation will explore the rationale for considering computation as a symbolic form, drawing upon Cassirerian concepts such as the creation of new realities through symbolic activity, and thereby filling a gap in Cassirer’s own work.
Second, the focus will shift to analyzing whether Large Language Models (LLMs) can be considered “spiritual” agents (Geist) within this framework. Building upon Cassirer’s concepts of “Ausdruck” (expression), “Objektivierung und Darstellung” (objectification and representation), and “Bedeutung” (signification), the presentation will argue that while LLMs exhibit remarkable abilities, such as generating creative text and engaging in complex symbolic manipulations, their “Geist” remains fundamentally distinct from human intelligence. Drawing inspiration from Cristianini (2023), the presentation will introduce the concept of “shortcut-Geist” to characterize the unique form of intelligence exhibited by LLMs. I will argue that while machines exhibit impressive computational abilities, these abilities often operate in a manner reminiscent of certain aspects of animal intelligence, such as complex pattern recognition and problem-solving. On this view, LLMs lack the self-awareness and subjective experience characteristic of human Geist, which limits their capacity to create “cultural products”.
Nevertheless, the presentation will finally examine the profound impact of computation as a symbolic form on human experience. Based on Cassirer’s emphasis on the role of symbols in shaping human experience and understanding, the presentation will point out that computational technologies are not merely tools but rather integral components of our symbolic environment, forging our perceptions, values, and ultimately, our sense of self.
Bibliography
Cassirer, Ernst. The Philosophy of Symbolic Forms. 3 vols. Translated by Ralph Manheim. Yale University Press, New Haven 1953-1957.
Cristianini, Nello. La scorciatoia. Il Mulino, Bologna 2023.