Ontic capture and technofascism
Maren Behrensen
University of Twente, Germany
In “Dear Octavia Butler,” written as a letter to the late science-fiction author that interrogates central ideas from her novels and stories, Kristie Dotson develops the concept of gestative capture. The concept describes an ideological mandate for “survival at all costs” that reduces human beings capable of bearing children to their potential role in biological reproduction. It captures these “bearers” in their reproductive essence, ignoring their dynamic existence.
Dotson connects the concept of gestative capture to a demographic trend that currently occupies the minds of the far-right in Europe and North America: sharply declining birth rates. The far-right uses this trend to conjure a fight for the “survival of the West” – an obvious racist dogwhistle. This “fight” is then taken to justify the rollback of rights and of access to technologies that allow “bearers” to escape gestative capture – the overturning of Roe v. Wade is only the most obvious example.
Dotson’s letter is not anti-natalist – her point is that the choice to bear, nurse, and raise children has become increasingly unattractive as means to opt out of reproduction have become more and more accessible. Instead of making this choice more attractive, the political response – which so far has largely come from the far-right – pushes for the expansion of gestative capture. In my contribution, I explore the relation between this specific development and the larger context of far-right politics and “big tech” – inspired by Elon Musk’s obsessive tweeting about declining birth rates, but not limited to him or his companies.
I argue that what Dotson calls gestative capture is part of a broader phenomenon that can be described as ontic capture: the reduction of human beings and their dynamic existence to a fixed essence. This essence can be defined in reproductive terms, but it can also be sexual, ethnic or racial, religious, or economic. Ontic capture is not a new phenomenon: all social, political, and legal classification systems depend on it to some extent – and such systems usually have their own survival as their chief ideological mandate.
However, the entrenching of corporate “big tech” in civil society and government – again, Musk is only the most conspicuous example – threatens to render ontic capture overpowered and ungovernable. Especially where so-called “artificial intelligence” is involved, the products of “big tech” tend to be comprehensive systems of classification, surveillance, and social control – from social credit scoring and predictive policing to “social media,” they are designed to capture persons in some predictive, quantifiable essence.
While ontic capture is a part of all classification systems, this technologically overpowered version of it – especially when combined with matching commitments from corporate and political leaders – easily slips into technofascism: the outsourcing of truth to opaque technologies, and the replacement of history and politics with raw predictive power. Historical fascists used friend-foe propaganda and the radio to will their ideas into existence; current fascists can rely on an entire arsenal of technologies to turbocharge their projects.
Bibliography:
Behrensen, Maren: The State and the Self. Identity and Identities, Rowman & Littlefield 2017.
Dotson, Kristie: “Dear Octavia Butler,” in the Proceedings of the Aristotelian Society 123 (2023), 327-346.
Jenkins, Katharine: “Ontic Injustice,” in the Journal of the American Philosophical Association 6 (2020), 188-205.
McElroy, Erin: Silicon Valley Imperialism. Techno Fantasies and Frictions in Postsocialist Times, Duke University Press 2024.
Saul, Jennifer: Dogwhistles & Figleaves. How Manipulative Language Spreads Racism and Falsehood, Oxford University Press 2024.
Schmitt, Carl: Der Begriff des Politischen, Duncker & Humblot 1932.
Stanley, Jason: “Democratic Lies and Fascist Lies,” in Melissa Schwartzberg and Philip Kitcher (eds.): Truth and Evidence, New York University Press 2021, 209-222.
Teixeira Pinto, Ana: “Capitalism with a Transhuman Face. The Afterlife of Fascism and the Digital Frontier,” in Third Text 33 (2019), 315-336.
The Politics of social XAI
Suzana Alpsancar1, Eugenia Stamboliev2
1Paderborn University, Germany; 2University of Vienna, Austria
A key promise of introducing XAI into AI-assisted decision-making is to mitigate the risks of discrimination (bias) and to avoid wrongful harm, thereby adding to the so-called "trustworthiness" of AI systems. A plethora of scandals over recent years has demonstrated the urgency of the matter (e.g., the Dutch tax authorities' childcare benefits scandal). However, while the XAI community has established a variety of explanatory techniques, there is still too little understanding of how to deploy them in varying real-world application scenarios in a way that actually lives up to these promises. We argue that addressing this gap starts with acknowledging the contextual differences in what constitutes explanatory relevance in different real-world scenarios. Acknowledging these differences means not only that no single XAI tool could address all application areas, but also that the relevance of XAI can never be completely established by technical features alone. Rather, the relevance of XAI is a feature of the larger socio-technical system, and designers should be aware of this socio-technical character of XAI. While grasping all the specificities of a given real-world application is impossible, we can theorize typical constellations and lay out their typical conditions via scenarios: (a) online recommendation systems, (b) AI-augmented human decision-making in companies for human resources management, and (c) AI deployment in public administration and services. Because these application contexts establish very different social constellations, and hence different explanatory demands in practice, we need to concretize the function of XAI in a context-sensitive way.
Against this background, part of the XAI community is now engaged with the concept of social XAI – the idea of a more context-sensitive, flexible, adaptive XAI agent. In our talk, we engage with this idea conceptually by pointing out the challenges of designing such social XAI from an ethical and political point of view. First, we present an overview of the problem of biases and current mitigation strategies and discuss the role of XAI. Second, we argue in favor of concretizing the recent turn to ‘the user’ by targeting concrete social relationships, outlining the differences between our three scenarios. Third, we draw attention to how unfair decision-making processes might particularly affect vulnerable groups and what it could mean to use XAI to empower them. Finally, we discuss the challenges of designing relevant social XAI from the perspective of different real-world decision-making scenarios. We conclude by outlining several questions and factors that should be reflected in the design process.
Algorithmic politics and totalitarianism: a critical analysis of AI politics from Hannah Arendt’s perspective
Donghoon Lee
Virginia Tech, United States of America / Sungkyunkwan University, South Korea
Artificial intelligence (AI) is increasingly proposed as a solution to improve democratic systems, particularly in political decision-making processes. A notable example is Cesar Hidalgo’s algorithmic democracy, which aims to realize direct democracy by having individuals train AI algorithms to automatically process legislation on their behalf. Similarly, New Zealand’s SAM project seeks to create AI-powered political representatives. These initiatives focus on leveraging AI’s data processing and decision-making capabilities to address limitations in current democratic systems.
This paper argues that such technological interventions misunderstand the essence of politics as conceived by Hannah Arendt. In Arendt’s philosophical framework, politics is not merely a matter of decision-making but the realization of human action through citizens’ participation and dialogue. While political theorists emphasize dialogue and participation as components of democracy, Arendt’s contribution lies in identifying these elements as fundamental aspects of human existence, prior to any political model.
The paper examines the three aspects of human activity that Arendt identified as labor, work, and action, demonstrating how current AI political initiatives align with the realm of ‘work’ rather than political ‘action.’ While work seeks predictable outcomes through fabrication based on clear blueprints, political action does not suppress uncertainty and irreversibility but harnesses them as driving forces. AI-driven approaches focused on optimizing decisions through data processing risk reducing politics to a technical problem-solving process rather than preserving it as a space for human disclosure through words and actions.
This technological reduction of politics has concerning implications. The paper argues that reliance on AI’s decision-making capabilities could lead to a form of technological totalitarianism, where differing opinions are undermined by the presumed superiority of algorithmic solutions. Drawing from Arendt’s analysis of totalitarian movements, this paper notes that AI politics appeals through its promise to eliminate uncertainty by delegating decision-making processes to artificial intelligence, which conflicts with the elements of uncertainty and unpredictability that Arendt identified as essential to political engagement.
The significance of this critique extends beyond the technological optimists’ argument that any current technological limitations or deficiencies can be overcome through future technological advancement. Even as AI capabilities develop, the fundamental misalignment between algorithmic decision-making and the nature of politics persists. This paper contributes to discussions about technology in democratic systems by examining the need to preserve spaces for human political action while pursuing technological innovations.
References
Arendt, H. (1998). The human condition. University of Chicago Press.
Arendt, H. (1987). Labor, work, action. In Amor mundi: Explorations in the faith and thought of Hannah Arendt (pp. 29-42). Dordrecht: Springer Netherlands.
Arendt, H. (1958). The origins of totalitarianism.
Canovan, M. (1992). Hannah Arendt: A Reinterpretation of Her Political Thought. Cambridge University Press.
Coeckelbergh, M. (2022). The political philosophy of AI: an introduction. John Wiley & Sons.
Coeckelbergh, M. (2024). Why AI Undermines Democracy and what to Do about it. John Wiley & Sons.
Duan, Y., Edwards, J. S., & Dwivedi, Y. K. (2019). Artificial intelligence for decision making in the era of Big Data – evolution, challenges and research agenda. International Journal of Information Management, 48, 63-71.
Dewey, J. (2000). Democracy and education (p. 394). New York: Free Press.
Gottsegen, M. G. (1994). The political thought of Hannah Arendt. SUNY Press.
Lechterman, T. M. (2024). The Perfect Politician.
Zahavi, D. (Ed.). (2012). The Oxford handbook of contemporary phenomenology. Oxford University Press.