Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session: (Papers) Ethics I
Time: Thursday, 26/June/2025, 10:05am–11:20am

Session Chair: Andrea Gammon
Location: Auditorium 7


Presentations

Technology as uncharted territory: Contextual integrity and the notion of AI as new ethical ground

Alexander Martin Mussgnug

University of Edinburgh, United Kingdom

AI is employed in a wide range of contexts: scientists leverage AI in their research, media outlets use AI in journalism, and doctors adopt AI in their diagnostic practice. Yet many have noted how AI systems are often developed and deployed in a manner that prioritizes abstract technical considerations disconnected from the concrete context of application. This also results in limited engagement with the established norms that govern these contexts. When AI applications disregard entrenched norms, they can threaten the integrity of social contexts, often with disastrous consequences. For example, medical AI applications can defy domain-specific privacy expectations by selling sensitive patient data, AI predictions can corrupt scientific reliability by undermining disciplinary evidential norms, and AI-generated journalism can erode already limited public trust in news outlets by skipping journalistic best practices.

This paper argues that efforts to promote responsible and ethical AI can inadvertently contribute to and seemingly legitimize this disregard for established contextual norms. Echoing a persistent undercurrent in technology ethics of understanding emerging technologies as uncharted moral territory, certain approaches to AI ethics can promote a notion of AI as a novel and distinct realm for ethical deliberation, norm setting, and virtue cultivation. This narrative of AI as new ethical ground, however, can come at the expense of practitioners, policymakers, and ethicists engaging with already established norms and virtues that were gradually cultivated to promote successful and responsible practice within concrete social contexts. In response, this paper questions the current prioritization in AI ethics of moral innovation over moral conservation.

I make my argument in three parts. Building upon Helen Nissenbaum’s framework of contextual integrity, I first illustrate how AI practitioners’ disregard for cultivated contextual norms can threaten the very integrity of contexts such as mental health care or international development. Second, I outline how a tendency to understand novel technologies as uncharted ethical territory exacerbates this dynamic by playing into and seemingly legitimizing disregard for contextual norms. In response, I highlight recent scholarship that engages more substantially with the contextual dimensions of AI. Under the label of “integrative AI ethics,” I advocate for a moderately conservative approach to such efforts, one that prioritizes the responsible and considered integration of AI within established social contexts and their respective normative structures. I close by addressing three possible objections and engaging with emerging work on foundation models.



The bullshit singularity is near

Dylan Eric Wittkower

Old Dominion University, United States of America

Harry Frankfurt observed that "one of the most salient features of our culture is that there is so much bullshit," where he understands "bullshit" as the use of language to meet a practical end without regard for truth or falsehood (1986). While bullshitting has a rich and storied history in areas such as politics and marketing, the 20th century saw substantial innovation and growth in what David Graeber described as "bullshit jobs" (2019), which can be understood in parallel with Frankfurt's definition of bullshit: the use of employment to meet a social end without regard for the production of socially or individually valuable goods or services.

Generative AI, and large language models (LLMs) in particular, has disrupted the bullshit labor market by providing bullshit as a service (BaaS), automating increasingly many bullshit jobs and accelerating pre-existing tendencies toward the bullshitification of labor processes in order to leverage the market efficiencies offered by automated bullshit production systems (ABsPS). As work cycles and information cycles approach ABsPS saturation (e.g., grant proposals written by LLMs and reviewed by machine learning algorithms (MLAs), resulting in funding for research that uses LLM-written survey questions to generate large datasets analyzed by MLAs, and culminating in LLM-written publications that are peer-reviewed through LLMs), the remaining places where a human-in-the-loop (HITL) is called upon to provide reality-based assessment or intervention become bottlenecks in bullshit production processes. Once we are able to remove these last antiquated tethers to fact and value, the ABsPS ecosystem will be able to reach its full speed, able finally to beat its wings freely like Kant's dove (3: B8–9) once placed in a vacuum and freed from the air resistance that currently hinders its full efficiency.

After ABsPS have been freed from the HITL, even the echoes of the HITL will grow fainter and fainter as future generations of LLMs and MLAs are trained on new corpora that will themselves consist of ABsPS output in ever greater proportion. The "steadily rotating recurrence of the same" („ständig rotierende Wiederkehr des Gleichen“) (Heidegger, 1954) that is the essence and ownmost possibility of ABsPS will be realized ever more literally in this self-reinforcing cycle of ABsPS coprophagia and coprolalia, producing bullshit that is ever more complete and total. Through this leveraging of prior bullshit achievements to create ever greater bullshit achievements, ABsPS development will reach a bullshit singularity, achieving its apotheosis in a hyperreal simulacrum (Baudrillard, 1981) of meaningfulness itself.

References

Baudrillard, J. (1981). Simulacres et simulation. Éditions Galilée.

Frankfurt, H. (1986). On bullshit. Raritan Quarterly Review, 6(2), 81–100.

Graeber, D. (2019). Bullshit jobs: The rise of pointless work, and what we can do about it. Simon & Schuster.

Heidegger, M. (1954). Was heißt Denken? Max Niemeyer.

Kant, I. (1968 [1787]). Kritik der reinen Vernunft (2. Auflage). Kants Werke, Akademie-Textausgabe, Band III. Walter de Gruyter.



 