Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Session: (Papers) Algorithms
Time: Thursday, 26 June 2025, 11:50am - 1:05pm
Location: Auditorium 5


Presentations

Algorithms, abortion, and making decisions

Hannah Steinhauer

Virginia Tech, United States of America

Abortion is usually framed in the context of decisions: someone who supports abortion rights is often described as "pro-choice." In the digital age, it is notable that algorithms, the sets of instructions underlying the technologies we use to communicate every day, are also described in the language of decisions, as exemplified by the recent emergence of the "decision sciences." I argue that both abortion and algorithms are framed as decisions in ways that are misleading. Abortion is usually framed as a personal decision made by a pregnant person, abstracted from the sociopolitical and cultural context in which that person lives. The language of "choice" falsely assumes that everyone has equal access to abortion, which feminist theorists have pointed out is not the case: even before the overturn of Roe, abortion in the United States was often inaccessible, particularly for marginalized groups. Algorithms, on the other hand, are framed as decisions made solely by computers, a framing that obscures the human bias embedded in these technologies; digital studies scholars have shown that algorithms are the result of human decision-making and replicate and reproduce human biases. Furthermore, in the digital age, algorithms are a necessary component of abortion access: Google searches and other internet platforms can lead someone to medically accurate, accessible information about abortion, but they can also lead to misinformation that may ultimately prevent a wanted abortion. I argue that in the post-Roe United States, and in the digital age generally, algorithmic information technologies will play a central role in reproductive justice. Understanding the dynamics of computer and human decision-making, both of which are ultimately forms of human decision-making, is therefore necessary to get the full picture of how digital technology relates to and enables abortion access.



The power topology of algorithmic governance

Taicheng Tan

Beijing University of Civil Engineering and Architecture, People's Republic of China

As a co-product of the interplay between knowledge and power, algorithmic governance raises fundamental questions of political epistemology while offering technical solutions constrained by value norms. Political epistemology, as an emerging interdisciplinary field, investigates the possibility of political cognition by addressing issues such as political disagreement, consensus, ignorance, emotion, irrationality, democracy, expertise, and trust. Central to this inquiry are the political dimensions of algorithmic governance and how it shapes or even determines stakeholders’ political perceptions and actions. In the post-truth era, social scientists have increasingly employed empirical tools to quantitatively represent algorithmic political bias and rhetoric.

Although the philosophy of technology has advanced, shifting from grand critiques to micro-empirical studies, it has yet to fully open a space for the political-epistemological exploration of algorithmic governance. To address this gap, this paper introduces power topology analysis. Topology, the mathematical study of the properties of spatial forms that remain unchanged under continuous transformation, has been adapted by thinkers such as Gilles Deleuze, Michel Foucault, Henri Lefebvre, Bruno Latour, and David Harvey to examine the isomorphism and fluidity of power and space. Like a topological space, power retains its continuity through transformation, which links the two conceptually.

This paper is structured into four parts. The first explores the necessity and significance of power topology in conceptualizing algorithmic power and politics through the lens of political epistemology. The second examines the generative logic and cognitive structure of power topology within algorithmic governance. The third analyzes how power topology transforms algorithmic power relations into an algorithmic political order. The fourth proposes strategies for democratizing algorithmic governance through power topology analysis.

The introduction of power topology analysis offers a reflexive perspective for the philosophy of technology to re-engage with political epistemology—an area insufficiently addressed by current quantitative research and ethical frameworks. This topological approach provides a detailed portrait of algorithmic politics by revealing its power topology. Moreover, it redefines stakeholder participation by demonstrating how algorithms stretch, fold, or distort power relations, reshaping the political landscape. By uncovering the material politics of these transformations, power topology encourages the philosophy of technology to reopen political epistemological spaces and adopt new cognitive tools for outlining the politics of algorithmic governance. Ultimately, this framework aims to foster continuous, rational, and democratic engagement by stakeholders in the technological transformation of society, offering a dynamic and reflexive tool for understanding the intersection of power, politics, and algorithms.



Believable generative agents: A self-fulfilling prophecy?

Leonie Alina Möck1, Sven Thomas2

1University of Vienna, Austria; 2University of Paderborn, Germany

Recent advancements in AI systems, in particular Large Language Models, have sparked renewed interest in a technological vision once confined to science fiction: generative AI agents capable of simulating human personalities. These agents are increasingly touted as tools with diverse applications, such as facilitating interview studies (O'Donnell, 2024), improving online dating experiences (Batt, 2024), or even serving as personalized "companion clones" of social media influencers (Contreras, 2023). Proponents argue that such agents, designed to act as "believable proxies of human behavior" (Park et al., 2023), offer unparalleled opportunities to prototype social systems and test theories. As Park et al. (2024) suggest, they could significantly advance policymaking and social science by enabling large-scale simulation of social dynamics.

This paper critically examines the foundational assumptions underpinning these claims, focusing on the concept of believability driving this research. What, precisely, does "believable" mean in the context of generative agents, and how might an uncritical acceptance of their believability create self-fulfilling prophecies in social science research? This analysis begins by tracing the origins of Park et al.’s framework of believability to the work of Bates (1994), whose exploration of believable characters has profoundly influenced the field.

Building on Louise Amoore's (2020) concept of algorithms as composite creatures, this paper explores the implications of framing generative agents as "believable." In the long run, deploying these AI systems in social science research risks embedding prior normative assumptions into empirical findings. Such feedback loops can reinforce preexisting models of the world, presenting them as objective realities rather than as socially constructed artifacts. The analysis highlights the danger of generative agents reproducing and amplifying simplified or biased representations of complex social systems, thereby shaping policy and theory in ways that perpetuate these distortions.

Drawing on Günther Anders' (1956) critique of technological mediation and Donna Haraway's (2018, 127) reflections on "technoscientific world-building," this paper situates generative agents as key sites where science, technology, and society intersect. Ultimately, it calls for a critical reexamination of the promises and perils of generative agents, emphasizing the need for reflexivity in their conceptualization, design, and application. By interrogating the assumptions behind believability, this research contributes to a deeper understanding of the socio-technical implications of these emerging AI systems.

References

Amoore, Louise (2020). Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. Durham: Duke University Press.

Anders, Günther (1956). Die Antiquiertheit des Menschen, Bd. I. Munich: C.H. Beck.

Bates, Joseph (1994). "The Role of Emotion in Believable Agents." Communications of the ACM 37 (7): 122–125.

Batt, Simon (2024). "Bumble Wants to Send Your AI Clone on Dates with Other People's Chatbots." Retrieved from https://www.xda-developers.com/bumble-ai-clone-dates-other-peoples-chatbots/.

Contreras, Brian (2023). "Thousands Chatted with This AI 'Virtual Girlfriend.' Then Things Got Even Weirder." Retrieved from https://www.latimes.com/entertainment-arts/business/story/2023-06-27/influencers-ai-chat-caryn-marjorie.

Haraway, Donna Jeanne (2018). Modest_Witness@Second_Millennium. FemaleMan_Meets_OncoMouse: Feminism and Technoscience. Second edition. New York, NY: Routledge, Taylor & Francis Group.

O'Donnell, James (2024). "AI Can Now Create a Replica of Your Personality." Retrieved from https://www.technologyreview.com/2024/11/20/1107100/ai-can-now-create-a-replica-of-your-personality/.

Park, Joon Sung, Joseph O'Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein (2023). "Generative Agents: Interactive Simulacra of Human Behavior." In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, 1–22. https://doi.org/10.1145/3586183.3606763.

Park, Joon Sung, Carolyn Q. Zou, Aaron Shaw, Benjamin Mako Hill, Carrie Cai, Meredith Ringel Morris, Robb Willer, Percy Liang, and Michael S. Bernstein (2024). "Generative Agent Simulations of 1,000 People." arXiv preprint. https://doi.org/10.48550/arXiv.2411.10109.


