Conference Agenda

Session Overview

Session: (Papers) Authenticity
Time: Friday, 27 June 2025, 11:50am - 1:05pm
Session Chair: Filippo Santoni de Sio
Location: Auditorium 3


Presentations

AI-produced research in the humanities: Am I the author? Does it matter?

Thomas Nelson Metcalf1,2

1Institute for Science and Ethics, University of Bonn, Germany; 2Spring Hill College, Mobile, Alabama, United States

Large language models (“LLMs”) can produce broadly convincing argumentation about a variety of academic subjects, including those in the humanities. Interestingly, these LLMs can also be “trained” on one’s own academic writing: one may submit one’s own works to an LLM and have it thereby come to learn one’s research style, interests, and substantive commitments in one’s field. And this training can be supplemented by ongoing conversations with the chatbot. Thus, the LLM can come to know you and your academic research very intimately, if you let it (and sometimes, even if you don’t).

Suppose that an academic researcher in the humanities—for example, a philosopher or literary critic—trains an LLM on two key “datasets”: (1) the subject-matter literature the researcher wishes to write about; and (2) the researcher’s own academic-research style, works, interests, goals, and orientations, in the forms of written works and real-time chat conversation. The LLM may already know the subject matter of the research well, but in this way, the LLM also comes to know the researcher very well. Now the researcher prompts the LLM to produce an original manuscript based on those two datasets. The researcher then submits the manuscript, under their own name, to a journal or conference, without explicitly acknowledging the role of the LLM in producing the manuscript.

This raises several philosophically interesting questions, some of which have not been addressed in the literature so far. First, we can ask whether the researcher committed any kind of research misconduct or plagiarism. But we might also ask whether we should hope or dread that such practices become common in the academy.

I will argue that, in general, the researcher has not necessarily committed any serious research misconduct. The researcher’s approach is not fundamentally different from current research practices in the humanities, and it bears an intimate-enough connection to the researcher’s identity that it does not qualify as any kind of plagiarism.

Yet what are the advantages and disadvantages of an academia in which this “research” method becomes common? I suggest that the advantage will be greater research output, produced with a closer familiarity with the existing academic literature. The disadvantages will result from the fact that academic research becomes considerably easier to produce.

This may produce several problems, but at least one is philosophically interesting: such a system would weaken the intimate, identity-based connection between the author and the work produced. This will undermine a kind of valuable vulnerability that has thus far been present in academic research. If academic research becomes much easier to produce, and the resulting work is less closely tied to the author’s own identity, then there will be less motivation to ensure its academic and even moral quality.

I respond to objections; provide recommendations about how to cultivate some of the benefits of LLM-assisted research in the humanities while avoiding some of these problems; and briefly draw connections to related areas of the philosophy of technology.

Acknowledgments:

None of this text and research was generated by AI.




From Automation to Authenticity: Rethinking AI through the MEAT framework

Rasleen Kour

Indian Institute of Technology Ropar, Punjab, India

We live in a world influenced by artificial intelligence (AI), which has solved numerous problems and revealed unexplored dimensions of human existence. However, AI poses significant risks alongside its benefits, including the possibility of artificial humans acting uncontrollably (Coeckelbergh 2020). The critical question is not about having a positive or negative relationship with AI, but whether our engagement with it is meaningful. Meaningful engagement emphasizes fostering authentic relationships with technology, preserving human connections, supporting environmental stewardship, and upholding moral, social, cultural, and intrinsic values. Appropriate technology should prioritize individual well-being and resource efficiency without harming the environment. However, AI often falls short of these ideals.

I developed the MEAT (Meaningful Engagement with Appropriate Technology) model, which aligns more effectively with low-tech solutions than high-tech automation by prioritizing transparency and user involvement. Nonetheless, in an era dominated by automation, returning to pre-technological times is not an option. What is required instead is a critical reflection on how technology shapes our social, cultural, and community ties. Humans are not “unencumbered selves”; they must strike a balance with their surroundings, and automation remains meaningful only when it maintains transparency and encourages user involvement. For instance, fully automated technologies like autonomous cars lack the participatory dimension required for meaningful engagement, as meaningful interaction with technology should grow alongside humanity rather than sidelining it.

The MEAT model emphasizes the importance of critically examining the role of technology in creating a more fulfilling and examined life. It proposes shifting the focus from the how—how a technology works or how it is used—to the why—why a technology is necessary and valuable. In the human-technology relation, unlike postphenomenological thinkers, we need to replace the lowercase ‘h’ (a particular human) with an uppercase ‘H’ to recognize that not all technologies are meaningful for all humans. It is essential to explore which technologies are truly beneficial and to shift the relationship from (h-t) to (H-t). Meaningful engagement with technology relies on three key principles. First, it must retain human autonomy by avoiding manipulative technological influences. Second, it should focus on upskilling rather than deskilling. Third, it should free humans from monotonous tasks and foster more corporeal human-to-human (h-h) relationships, reinforcing the emancipatory potential of technology. This approach underscores the importance of human essence (H), self-sufficiency, collective welfare, and equilibrium between humans, nature, and technology. When applied to AI, meaningful engagement necessitates limiting automation to ensure that users’ roles are not entirely replaced. Authentic human engagement requires more than automation to address the imbalance between humanity, technology, and nature. We must prioritize social fixes over superficial technological solutions (an idea inspired by Borgmann) to foster a more harmonious and meaningful coexistence with AI.



Autonomy, relationality and emancipation in the digital age

Eloise Changyue Soulier

University of Hamburg, Germany

Guidelines and legal frameworks on artificial intelligence (AI) and digital technologies often consider human autonomy as one of the main values to protect [1]. Abundant academic scholarship is also concerned with autonomy and digital technologies [2, 3, 4]. Although the concern for human autonomy is neither new nor limited to digital technologies, their wide-ranging scope and their opaque and adaptive nature make this concern particularly salient [2].

Western law in general, including the regulation of digital technologies, arguably relies on a Kantian understanding of autonomy as self-legislation by rational atomistic individuals [5, 6]. As such, respecting a user’s autonomy means providing them with all potentially relevant information and then not interfering with their decision-making process. In the context of digital technologies, this is exemplified by regulatory measures such as informed consent approaches to data protection. This latter example illustrates the failure of a Kantian conception of autonomy to live up to its own ideal of rational independent decision-making, and could be argued instead to serve a neoliberal agenda that overburdens individuals [7].

To understand and begin to address these shortcomings, it is useful to draw from the scholarly critiques of the Kantian conception of autonomy. These critiques pertain both to its epistemic value and to the normative ideal it serves. The cognitive sciences have struck a blow at the idea of rational decision-making [8]. Critical theories, notably feminist theory, have challenged this conception as obscuring our relationality and interdependence, as well as conveying a masculinist, individualist ideal [9]. Critiques stemming from the philosophy of technology, but also from disability studies, underline our dependence on technological infrastructure [10].

Scholars who challenge this conception of autonomy nevertheless recognize the need for such a concept, crucially because any emancipatory project requires one [5]. The question is rather: which concept of autonomy? Within the framework of pragmatist conceptual ethics [11], this question amounts to examining the function fulfilled by the concept of autonomy. I argue that adopting a conception of autonomy that makes room for relationality is not only epistemically and practically more fruitful but also a better normative ideal. Nevertheless, drawing on Christman [12] and Khader [13], I claim that the purpose of emancipation requires a conception of autonomy that is not constitutively relational. In light of these considerations of the different purposes a concept of autonomy should serve, I propose to operate with a conception of autonomy as “the ability to structure our dependences”.

Finally, I show that these theoretical reflections on autonomy have very practical consequences for the design and regulation of digital technologies, of which I discuss two examples: the PIMS (personal information management system) mechanism for managing cookie banners, and the possible tailoring of recommendation systems in a way that supports this organized dependence.

References

[1] Anna Jobin, Marcello Ienca, and Effy Vayena. The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9):389–399, 2019.

[2] Daniel Susser, Beate Roessler, and Helen Nissenbaum. Technology, autonomy, and manipulation. Internet Policy Review, 8(2), 2019.

[3] Karen Yeung. ‘Hypernudge’: Big data as a mode of regulation by design. In The Social Power of Algorithms, pages 118–136. Routledge, 2019.

[4] Alan Rubel, Clinton Castro, and Adam Pham. Algorithms and autonomy: the ethics of automated decision systems. Cambridge University Press, 2021.

[5] Jennifer Nedelsky. Law’s Relations: A Relational Theory of Self, Autonomy, and Law. Oxford University Press, 2011.

[6] Tal Z Zarsky. Privacy and manipulation in the digital age. Theoretical Inquiries in Law, 20(1):157–188, 2019.

[7] Philip M. Napoli. Social media and the public interest: Governance of news platforms in the realm of individual and algorithmic gatekeepers. Telecommunications Policy, 39(9):751–760, 2015.

[8] Daniel Kahneman and Amos Tversky. Intuitive prediction: Biases and corrective procedures. Technical report, Decisions and Designs Inc., McLean, VA, 1977.

[9] Catriona Mackenzie and Natalie Stoljar. Relational autonomy: Feminist perspectives on autonomy, agency, and the social self. Oxford University Press, 2000.

[10] Carolyn Ells. Lessons about autonomy from the experience of disability. Social Theory and Practice, 27(4):599–615, 2001.

[11] Amie L. Thomasson. A pragmatic method for normative conceptual work. In Conceptual Engineering and Conceptual Ethics, pages 435–458, 2020.

[12] John Christman. Relational autonomy, liberal individualism, and the social constitution of selves. Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, 117(1/2):143–164, 2004.

[13] Serene J Khader. The feminist case against relational autonomy. Journal of Moral Philosophy, 17(5):499–526, 2020.



 