Recent advancements in AI systems, in particular Large Language Models, have sparked renewed interest in a technological vision once confined to science fiction: generative AI agents capable of simulating human personalities. These agents are increasingly touted as tools with diverse applications, such as facilitating interview studies (O'Donnell, 2024), improving online dating experiences (Batt, 2024), or even serving as personalized "companion clones" of social media influencers (Contreras, 2023). Proponents argue that such agents, designed to act as "believable proxies of human behavior" (Park et al., 2023), offer unparalleled opportunities to prototype social systems and test theories. As Park et al. (2024) suggest, they could significantly advance policymaking and social science by enabling large-scale simulation of social dynamics.
This paper critically examines the foundational assumptions underpinning these claims, focusing on the concept of believability that drives this research. What, precisely, does "believable" mean in the context of generative agents, and how might an uncritical acceptance of their believability create self-fulfilling prophecies in social science research? The analysis begins by tracing the origins of Park et al.'s framework of believability to the work of Bates (1994), whose exploration of believable characters has profoundly influenced the field.
Drawing on Günther Anders' (1956) critique of technological mediation and Donna Haraway's (2018, 127) reflections on "technoscientific world-building," this paper situates generative agents as key sites where science, technology, and society intersect.
Building on Louise Amoore's (2020) concept of algorithms as composite creatures, it explores the implications of framing generative agents as "believable." In the long run, deploying these AI systems in social science research risks embedding prior normative assumptions into empirical findings. Such feedback loops can reinforce preexisting models of the world, presenting them as objective realities rather than as socially constructed artifacts. The analysis highlights the danger of generative agents reproducing and amplifying simplified or biased representations of complex social systems, thereby shaping policy and theory in ways that perpetuate these distortions. Ultimately, the paper calls for a critical reexamination of the promises and perils of generative agents, emphasizing the need for reflexivity in their conceptualization, design, and application. By interrogating the assumptions behind believability, this research contributes to a deeper understanding of the socio-technical implications of these emerging AI systems.
References
Amoore, Louise (2020). Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. Durham: Duke University Press.
Anders, Günther (1956). Die Antiquiertheit des Menschen Bd. I. Munich: C.H. Beck.
Bates, Joseph (1994). "The Role of Emotion in Believable Agents." Communications of the ACM 37(7): 122–125.
Batt, Simon (2024). "Bumble Wants to Send Your AI Clone on Dates with Other People's Chatbots." Retrieved from https://www.xda-developers.com/bumble-ai-clone-dates-other-peoples-chatbots/.
Contreras, Brian (2023). "Thousands Chatted with This AI 'Virtual Girlfriend.' Then Things Got Even Weirder." Retrieved from https://www.latimes.com/entertainment-arts/business/story/2023-06-27/influencers-ai-chat-caryn-marjorie.
Haraway, Donna Jeanne (2018). Modest_Witness@Second_Millennium. FemaleMan_Meets_OncoMouse: Feminism and Technoscience. Second edition. New York, NY: Routledge, Taylor & Francis Group.
O’Donnell, James (2024). "AI Can Now Create a Replica of Your Personality." Retrieved from https://www.technologyreview.com/2024/11/20/1107100/ai-can-now-create-a-replica-of-your-personality/.
Park, Joon Sung, Joseph O’Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein (2023). "Generative Agents: Interactive Simulacra of Human Behavior." In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, 1–22. https://doi.org/10.1145/3586183.3606763.
Park, Joon Sung, Carolyn Q. Zou, Aaron Shaw, Benjamin Mako Hill, Carrie Cai, Meredith Ringel Morris, Robb Willer, Percy Liang, and Michael S. Bernstein (2024). "Generative Agent Simulations of 1,000 People." Retrieved from arXiv: https://doi.org/10.48550/arXiv.2411.10109.