Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session
(Papers) Philosophy of technology IV
Time:
Friday, 27/June/2025:
8:45am - 10:00am

Session Chair: Udo Pesch
Location: Auditorium 1


Presentations

REX with AI? Challenges for the return on experience in the digitized lifeworld

Bruno Gransche

Karlsruhe Institute of Technology, Germany

The perception of self-efficacy is crucial for our identity as autonomous agents. The belief that we can intentionally cause change underpins our self-image, responsibility, and ethical or legal considerations. Our sense of having made a difference is informed by sensory feedback, direct observation, or interpreting traces of our actions (abductions). Skills and competencies develop through learning, which relies on adjusting based on perceived differences between intended and actual outcomes (see e.g. Hubig 2007).

The digitization of our lifeworld and the integration of AI systems with increased ‘agency’ (technical autonomy, see e.g. Gransche 2024) alter the conditions for learning and skill development. This affects our perception of self-efficacy and our ability to solve problems and overcome resistances (see Wiegerling 2021), a capacity often associated with intelligence as problem solving. Learning and development are driven by probing possibilities, making mistakes, and overcoming resistances, which help us understand the modal boundaries between the possible and impossible. We improve by learning from errors, provided the right conditions are met.

Hybrid human-AI actions in complex environments, involving automated technical ‘agents’ and spatio-temporally distant human co-agents, may disrupt this dynamic. Even without digital technology and AI, an explicit error culture is needed to maintain this learning dynamic, allowing progress without shaming or dissimulating failure. Digital technology and AI increasingly challenge efforts to update an error culture due to issues like the loss of traceability of individual contributions (see Hubig 2007). This complicates transparency and explainability in AI interactions, disturbing the conditions for learning from errors. The chain of 'try – fail – learn – retry better – succeed' can break if feedback is systemically withdrawn or veiled. This could lead to a situation where only a few tech leaders learn and improve their systems due to their crucial position in collecting feedback and accessing data (see Lanier 2013).

This paper presents this argument in detail and explores socio-technical mitigation strategies. It advocates for a) an explicit error culture that emphasizes the importance of errors and specific contributions for learning and improvement, b) organizational measures to foster such a culture that include error friendliness, opportunities to repeat trials, and feedback links between trials as well as between individuals for organizational learning, c) some supporting technical measures including data access, transparency on demand, and explainability.

This paper’s content was developed in an interdisciplinary project between philosophy of technology and a large global infrastructure and digitization company (corporate group). The project, a pilot in integrated research (see Gransche and Manzeschke 2020), aimed to go beyond ELSI in Germany and resulted in a Transformative Philosophy program (engineering/executives’ education), thus pursuing the goal of an engaged philosophy of technology that Carl Mitcham highlighted in his keynote at SPT2021 in Lille. The talk will briefly present the project results, focusing on error culture or return on experience (REX) in a digitalized lifeworld, and report intriguing insights about the corporate reactions, feedback, and lessons learned on this topic.

Publication bibliography

- Gransche, Bruno (2024): Technische Autonomie. In Mathias Gutmann, Klaus Wiegerling, Benjamin Rathgeber (Eds.): Handbuch Technikphilosophie. 1st ed. Stuttgart: J.B. Metzler, pp. 257–266.

- Gransche, Bruno; Manzeschke, Arne (Eds.) (2020): Das geteilte Ganze. Horizonte Integrierter Forschung für künftige Mensch-Technik-Verhältnisse. 1st ed. Wiesbaden: Springer VS.

- Hubig, Christoph (2007): Die Kunst des Möglichen II. Grundlinien einer dialektischen Philosophie der Technik; Ethik der Technik als provisorische Moral. 2 volumes. Bielefeld: Transcript (2).

- Lanier, Jaron (2013): Who owns the future? London: Allen Lane.

- Wiegerling, Klaus (2021): Exposition einer Theorie der Widerständigkeit. In Philosophy and Society 32 (4), pp. 499–774. DOI: 10.2298/FID2104641W.



Cognitive maps and the quantitative-qualitative divide

Dan Jerome Spitzner

University of Virginia, United States of America

This paper draws inspiration from a diagrammatic measurement device known as a cognitive map, whose use in research complicates the practice of assigning qualitative or quantitative labels to research methodologies. A cognitive map is a spatial layout of factors connected by arrows, which is assembled by a researcher or research participant to express a phenomenon’s perceived relevant factors and the impacts of those factors on one another. A numerical value may additionally be assigned to any connection between factors to indicate the strength of perceived impact. Cognitive maps are increasingly valued in participatory research for facilitating intercultural dialogue.

In the present study, these devices serve as a focal point for developing a theory of the quantitative-qualitative divide, which feeds into a larger project that seeks to recontextualize statistical methodology so that it is less limited in its capacity to address multicultural perspectives, imbalances in political power, local and community perspectives, and individual experiences of relevant phenomena. Previous authors have made important contributions to conceptualizing the quantitative-qualitative divide by emphasizing an ethnographic character of quantitative methodologies and connections to deconstructionist and new-materialist perspectives. The present effort embraces these perspectives, but is distinct in attending specifically to issues at the fine-grained level of statistical data-analysis, while also generating new statistical methodology.

Support for assigning the qualitative label to cognitive maps partly derives from their capabilities in participatory research, wherein it is recognized that research participants may find greater success at articulating ideas through visual means, manual manipulation, or other artistic practices. For their capability to assist articulation in this way, cognitive maps share a feature with qualitative and arts-based methodologies. Moreover, the philosophical and methodological pillars embraced by some users of cognitive maps resonate with such perspectives as critical realism and standpoint theory, both of which stray from traditional quantitative orthodoxy. On the other hand, cognitive maps are also valued for decomposing phenomena into factors and attending to causal relationships, a priority that is typically associated with quantitative methodologies. Data-analysis of cognitive-map measurements can be argued to be entirely quantitative, given that all constituent elements of the maps may ultimately be translated into numerical formats and treated under a mathematical model. Adding further complexity to the situation, scholars of mixed-methods research have specifically promoted cognitive maps for the purpose of integrating qualitative and quantitative methodologies.

The proposed theory characterizes the qualitative-quantitative divide as a productive convention, in a new-materialist sense. This perspective offers insights into such topics as the marginalization of qualitative methodology and the presence of methodological hierarchies. Connecting cognitive maps to evidence-based practices provides a vehicle for investigating certain research tools (such as those of arts-based research) used for knowledge transfer within evidence-based practice. Other insights into the qualitative-quantitative divide, which align well with the characteristics of cognitive maps, arise from debates within the history of grounded theory, a class of methodologies that facilitates the generation of theoretical themes, factors, and codes from qualitative data.



Semiotics, technology, and the Infosphere

Andrew Wells Garnar

College of Charleston, United States of America

This paper is a preliminary exploration of using C.S. Peirce’s semiotics as an approach to the philosophy of technology. There has been some scholarship using various theories of signs in the philosophy of technology. Some of it comes out of the writings of Baudrillard or Derrida and their debt to Saussure; other work draws on the likes of Hjelmslev or Jakobson. Thus far, little has relied on Peirce’s theory of signs. This is unfortunate, given that he offers an alternative semiotics that is less concerned with linguistics as a field of study, is oriented more towards inquiry, foregrounds embodiment, and approaches signs as inherently dynamic. Since language has come to play a central role in recent science and technology, relying on a theory that makes language central has certain advantages, especially when signs are tied to action. For example, following Luciano Floridi, contemporary information technologies serve to re-ontologize the world. One way this occurs is through the creation of an Infosphere that contains informational entities, whether human or artificial, agents or patients. Many, if not most, of these entities involve language, broadly construed. By appropriating Peirce’s semiotics, new dynamics of how the Infosphere functions can be demonstrated by approaching these entities as dynamic, active sign-systems.

To explore this terrain, the paper first briefly considers other uses of semiotics in the philosophy of technology and why a semiotic approach is significant. The second section provides three sketches of what a Peircian approach involves. Rather than laying out a full, detailed theory, the paper introduces examples that show the promise of his thought when applied to technology by introducing his typology of signs, the concept of endless semiosis, and his semiotic conception of human identity. The first sketch considers the concept of semiotic depth and how signs can form systems that allow for the creation of an immersive Infosphere. The second examines how this can be understood as sign systems caught up in the endless interpretation of other signs. Lastly, Peirce’s claim that signs and humans reciprocally construct each other will be reexamined in light of technology. The conclusion will summarize how these sketches illuminate Floridi’s Infosphere and the role of humans within it in important and novel ways.



Conference: SPT 2025