Conference Agenda

Session Overview
Session: (Papers) Generative AI and risk
Time: Friday, 27 June 2025, 8:45am – 10:00am
Session Chair: Christa Laurens
Location: Auditorium 5


Presentations

Memes, generative AI, humor, and intimacy: collective empowerment through shared laughter

Alberto Romele¹, Fabrizio Defilippi²

¹Sorbonne Nouvelle University, France; ²University of Paris Nanterre, France

In this presentation, we explore memes about generative AI, arguing that their humor serves as a collective empowerment strategy in the face of uncertainties and fears associated with these technologies. This thesis builds on a broad body of literature addressing the sense of intimacy fostered by memes. For instance, Neghabat (2021) argues that humor is a deeply subjective and intimate experience; thus, finding something funny together (such as sharing memes) creates a sense of intimacy and community—sharing discomfort and addressing negative experiences offers emotional relief and support. In the specific case of generative AI, memes could become a way to build collective knowledge by playing with the intimate fears that haunt our imaginations (Goujon & Ricci, 2024). In other words, memes can potentially become a means of introspection about our uses of generative AI and provide landmarks in response to the rapid pace of recent changes in the field.

Our presentation is structured in three parts. In the first part, we discuss the relationship between memes and sociotechnical imaginaries (Jasanoff & Kim, 2015). In particular, we draw on Castoriadis’s concept of the imaginary institution of society to demonstrate how memes contribute to this process, revealing the social meanings surrounding contemporary technological trajectories. In the second part, we provide an empirical analysis and classification of generative AI memes, based on data from two repositories: Imgflip and Know Your Meme. Among other categories, we distinguish apocalyptic memes, memes that deconstruct expectations toward AI, memes about education and labor, and anti-capitalist memes.

In the third part, we substantiate our central thesis: despite their differences, all generative AI memes participate in a collective exorcism of fear through humor. Generative AI technologies, which permeate our lives and mimic human behaviors, evoke a profound sense of intimacy that is both unsettling and collective. It is precisely this shared awareness of their pervasive presence and the vulnerabilities they expose—at physical, personal, and social levels—that drives the creation of collective exorcisms like memes. Far from being trivial, these humorous artifacts allow us to process the risks, redefine ethical frameworks, and reclaim agency over innovations often framed as inevitable. The question we will address at the conclusion of this presentation is as follows: Can we assert that memes about generative AI represent a potential resource for fostering forms of resistance and agonism? Or must we instead recognize that, since misery loves company, this intimacy ultimately serves to cultivate passive acceptance of the status quo?

References:

Castoriadis, Cornelius. The Imaginary Institution of Society. Translated by Kathleen Blamey. Cambridge, MA: The MIT Press, 1987.

Galip, Idil. “Methodological and Epistemological Challenges in Meme Research and Meme Studies.” Internet Histories 8, no. 4 (2024): 312–330.

Goujon, Valentin, and Donato Ricci. “Shoggoth with Smiley Face: Knowing-How and Letting-Know by Analogy in Artificial Intelligence Research.” In Hybrid. Naissance et renaissance des mèmes. Vies et vitalités des mèmes, edited by Laurence Allard et al., 2024. https://journals.openedition.org/hybrid/4880

Jasanoff, Sheila, and Sang-Hyun Kim, eds. Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power. Chicago: University of Chicago Press, 2015.

Neghabat, Anahita. “Ibiza Austrian Memes: Reflections on Reclaiming Political Discourse through Memes.” In Critical Meme Reader: Global Mutations of the Viral Image, edited by Chloë Arkenbout, Jack Wilson, and Daniel de Zeeuw, 130–142. Amsterdam: Institute of Network Cultures, 2021.

Rogers, Richard, and Giulia Giorgi. “What Is a Meme, Technically Speaking?” Information, Communication & Society 27, no. 1 (2023): 73–91.



‘Trust the Machine?’: Conceptualising Trust in the Age of Generative Artificial Intelligence

Pia-Zoe Hahne

University of Vienna, Austria

To accept a new technology, we first need to trust it. With AI, there is not just one specific kind of trust that we place in the system; instead, trust is a “multidimensional construct, including trust in functionality, trust in reliability, and trust in data protection” (Wang, Lin & Shao, 2023, p. 340). Current discussions of trust in philosophy of technology focus mainly on the question ‘Is the concept of trust applicable to AI?’. Authors such as Ryan (2020), Alvarado (2023), Brusseau (2023), and Pink et al. (2024) argue against the use of trust. Others argue for a purely epistemic view of trust relations in AI, likewise based on the argument that an AI system is not a moral agent (Alvarado, 2023; Ryan, 2020). I aim to counter this narrative: while discussions of whether the concept of trust is applicable to AI are necessary, they should not be the sole focus of trust research, as they do not address the actual problems raised by the use of trust for AI systems.

The uncertainties surrounding trust exemplify the disruptive nature of AI technologies, while debates focussing on the appropriateness of trust fail to consider that the concept is already widely used in praxis-oriented contexts. Despite not being moral agents, AI systems such as chatbots are often perceived as having moral agency, implying a relation that seemingly goes beyond mere knowledge exchange. How, then, should the concept of trust in relation to AI look? And is trust even the right concept?

Approaches to studying these conceptual disruptions often disregard the involvement of stakeholders. This is where conceptual engineering, an emerging approach in philosophy of technology, comes in. Trust is an ideal concept for conceptual engineering, as it forms the basis for other concepts, and disruptions to it therefore have far-reaching consequences. Löhr (2023) and Marchiori & Scharp (2024) specifically point out that studying these disruptions requires empirical data, marking a new turn in engaging with conceptual disruptions. The influence of technology on trust is not new. However, the intense disruptions driven by AI present new challenges: they move beyond a purely epistemic view of trust in technology and have far-reaching consequences for trust between people and trust in institutions.

References

Alvarado, R. (2023). What kind of trust does AI deserve, if any? AI and Ethics, 3, 1169-1183.

Brusseau, J. (2023). From the ground truth up: Doing AI ethics from practice to principles. AI & Society, 38, 1651–1657. https://doi.org/10.1007/s00146-021-01336-4

Löhr, G. (2023). Conceptual disruption and 21st century technologies: A framework. Technology in Society, 74, Article 102327.

Marchiori, S. & Scharp, K. (2024). What is conceptual disruption? Ethics and Information Technology, 26(1), Article 18. https://doi.org/10.1007/s10676-024-09749-7

Pink, S., Quilty, E., Grundy, J. et al. (2024). Trust, artificial intelligence and software practitioners: an interdisciplinary agenda. AI & Society. https://doi.org/10.1007/s00146-024-01882-7

Ryan, M. (2020). In AI we trust: Ethics, artificial intelligence, and reliability. Science and Engineering Ethics, 26, 2749–2767. https://doi.org/10.1007/s11948-020-00228-y

Wang, X. Q., Lin, X. L. & Shao, B. (2023). Artificial intelligence changes the way we work: A close look at innovating with chatbots. Journal of the Association for Information Science and Technology, 74(3), 339-353.



The Concept of AI Risk

Lieke Fröberg

University of Hamburg, Germany

As various types of AI applications have taken the world by storm, the concept of risk seems to have taken an increasingly central role in discussions of their (potential) negative impacts. However, the notion of risk has a variety of conceptualizations that can be at odds with each other (e.g., theoretical versus empirical, realist versus constructivist) (Althaus, 2005; Lupton, 2024; Renn, 2008). Moreover, the specific way in which AI risks are understood can have important consequences, in particular when translated into policy, as in the risk-based approach central to the European AI Act. Thus, to improve the interdisciplinary understanding of the conceptualization of AI risk, I conduct a systematic literature review following Okoli (2022). The goal is to map and theorize the ways in which AI risk is defined, characterized, categorized, and measured. The leading research question is: How is AI risk currently understood and studied?

My findings confirm an increase in research on this topic since 2017. I find that multiple disciplines operationalize the concept of AI risk, each working from plausible yet markedly distinct and conceptually conflicting points of departure. Of these proposals, only a few substantiate their approach. Interestingly, rather than disciplines working in silos, there seems to be a shared use of the concept. In addition, I find that the majority of papers have an implicitly realist understanding of AI risk, with a large minority taking a critical realist view and just a handful adopting constructivist-leaning suppositions. I also map the main topics of interest (including the AI Act, questions around existential risk, and risk perception) as well as common methodological approaches.

In the discussion, I argue that AI risk can be understood as a boundary object (Star, 1989, 2010): a concept that can be adapted to local research contexts while retaining a diffuse, broad shared meaning. This balance between plasticity, which allows for specific scientific methods, and robustness across disciplines helps explain how collaboration is possible without consensus on the precise conceptual underpinning of AI risk. Moreover, I argue that while realist approaches dominate the debate, constructivist approaches could further discussions of political and ethical themes. One such theme is the question of acceptable risk, which is relevant in a variety of ways but remains under-researched.

This paper addresses an important conceptual development in the field of AI ethics and governance, offers a timely reflection on how to interpret the differing meanings of the concept of AI risk, and points towards fruitful future research endeavors.

References

Althaus, C. E. (2005). A Disciplinary Perspective on the Epistemological Status of Risk. Risk Analysis, 25(3), 567–588. https://doi.org/10.1111/j.1539-6924.2005.00625.x

Lupton, D. (2024). Risk (3rd ed.). Routledge.

Okoli, C. (2022). Developing Theory from Literature Reviews with Theoretical Concept Synthesis: Topical, Propositional and Confirmatory Approaches.

Renn, O. (2008). Risk Governance: Coping with Uncertainty in a Complex World. Earthscan.

Star, S. L. (1989). The Structure of Ill-Structured Solutions: Boundary Objects and Heterogeneous Distributed Problem Solving. In Distributed Artificial Intelligence (pp. 37–54). Elsevier. https://doi.org/10.1016/B978-1-55860-092-8.50006-X

Star, S. L. (2010). This is Not a Boundary Object: Reflections on the Origin of a Concept. Science, Technology, & Human Values, 35(5), 601–617. https://doi.org/10.1177/0162243910377624



 