Conference Agenda

Overview and details of the sessions of this conference. Select a date or location to show only the sessions held on that day or at that location, or select a single session for a detailed view (with abstracts and downloads, if available).

 
 
Session Overview
Session
OS-7: Artificial Academia: Exploring the Risks and Hopes for Artificial Intelligence in Science
Time:
Wednesday, 25/June/2025:
10:00am - 11:40am

Location: Room A

Session Topics:
Artificial Academia: Exploring the Risks and Hopes for Artificial Intelligence in Science

Presentations
10:00am - 10:20am

Research Networks of Artificial Intelligence

Oscar M. Granados

Universidad Jorge Tadeo Lozano, Colombia

The evolution of Artificial Intelligence research has been consolidated through the interaction of scientists operating as a social network. This evolution has involved a set of people and groups, represented by institutions, each of which had connections of some kind with some of the others. AI research has thus consolidated knowledge networks in the same way as other scientific fields that work through interconnected scientists, although the structure and dynamics of its evolution may differ.

In some growing complex social networks, one may expect the preferential attachment scheme to be effective. In such networks, the addition of a new agent to the system (e.g., a scientific field) may be driven by its pursuit of more influential agents capable of connecting it to a center that expedites its objectives. The sought-after centrality of the new agent does not exclusively correspond to the highest degree within the network; rather, it can manifest through various forms of centrality, in particular eigenvector centrality and betweenness centrality. The first is defined as the summation of the centralities of the neighbors of a given vertex; a high eigenvector centrality does not necessarily translate into a high degree centrality, but it means that the vertex in focus is very well connected. The second is defined as the quality of a vertex of being “in-between” communities.
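The two centrality notions above can be illustrated on a toy graph. The sketch below, using the `networkx` library (an illustration only, not part of the study), builds two small communities joined by a bridge vertex: the bridge has low degree but the highest betweenness, showing how the two metrics can diverge.

```python
import networkx as nx

# Toy co-authorship graph: two triangles joined by a bridge vertex D.
G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("A", "C"), ("B", "C"),   # community 1
    ("A", "D"), ("D", "E"),               # D bridges the communities
    ("E", "F"), ("E", "G"), ("F", "G"),   # community 2
])

# Eigenvector centrality: each vertex's score is proportional to the
# sum of its neighbours' scores.
eig = nx.eigenvector_centrality(G, max_iter=1000)

# Betweenness centrality: the fraction of shortest paths passing
# through a vertex -- the "in-between communities" quality.
btw = nx.betweenness_centrality(G)

# D has degree 2 only, yet every shortest path between the two
# communities runs through it, so its betweenness is the highest.
print(max(btw, key=btw.get))  # 'D'
```

Here a vertex with many ties inside one dense community scores well on eigenvector centrality, while the sparse bridge dominates betweenness, which is exactly the distinction the abstract draws.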

Traditionally, a co-authorship network constantly expands with the addition of new authors and of new edges between old and new authors; its topological properties are determined by the dynamical properties of the network, especially by preferential attachment. I consider knowledge networks as networks characterized by their dynamics and evolution, but in AI, knowledge networks can have a limit: scientists have a maximum number of co-authorships and, in some cases, interactions are not exclusively the result of preferential attachment. This argument is aligned with the idea that the scale-free property is not prevalent in all complex networks and that the degree-driven preferential attachment model has limitations in describing social networks.

In this study, I present two models. The first is a growth model in which preferential attachment is a linear function of the vertices’ eigenvector centrality rather than their degree centrality. This model exhibits similarities with a winner-takes-all scenario, identified by the fact that one vertex captures a large proportion of all links. The second shows that vertex betweenness is power-law distributed and correlated with the link weight distribution. This empirical evidence suggests that, for co-authorship networks, vertex degree is not the main driver of preferential attachment, and that other metrics may better explain what attracts new ties. I conclude that vertex betweenness and, to some extent, eigenvector centrality are the key metrics behind new social ties, as opposed to vertex degree or other centrality metrics. The empirical findings align partially with some research and, in particular cases, align well with previous research. However, methods such as clique identification reveal other features and patterns of this scientific field.
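The first model described above can be sketched as a simulation: a network grows one vertex at a time, and each newcomer attaches to existing vertices with probability proportional to their eigenvector centrality rather than their degree. This is an illustrative reconstruction under stated assumptions (seed clique of 3, one edge per newcomer), not the author's actual code.

```python
import random
import networkx as nx

def grow_by_eigenvector(n_final, m=1, seed=0):
    """Growth model sketch: new vertices attach with probability
    proportional to eigenvector centrality (not degree)."""
    rng = random.Random(seed)
    G = nx.complete_graph(3)                  # small seed clique
    while G.number_of_nodes() < n_final:
        cent = nx.eigenvector_centrality(G, max_iter=5000)
        nodes, weights = zip(*cent.items())
        targets = set()
        while len(targets) < m:               # m distinct attachment targets
            targets.add(rng.choices(nodes, weights=weights, k=1)[0])
        new = G.number_of_nodes()
        G.add_edges_from((new, t) for t in targets)
    return G

G = grow_by_eigenvector(60)
deg = dict(G.degree())
# Winner-takes-all tendency: the top vertex accumulates far more
# links than the average vertex.
```

Because leaves inherit tiny eigenvector scores, attachment keeps concentrating on the well-connected core, which is the winner-takes-all behaviour the abstract describes.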



10:20am - 10:40am

AI-Assisted Tools in Bibliometric Network Analysis

Daria Maltseva

National Research University Higher School of Economics, Russian Federation

Over the last decades, various bibliometric analysis tools have been developed to study scientific disciplines and their evolution. Bibliometric network analysis involves analyzing networks of co-authorship, citation, co-citation, bibliographic coupling, and co-occurrence of bibliometric units. Research typically follows three stages: 1) creating a bibliographic database, 2) preprocessing and constructing bibliographic networks, and 3) analyzing these networks. Tools like VOSviewer, CitNetExplorer, Bibliometrix, and Biblioshiny offer diverse solutions, catering to different research needs and user expertise levels.
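Stage 2 of the workflow above (constructing bibliographic networks from a database) can be sketched with standard-library Python on toy records. The record fields and author names here are invented for illustration; real pipelines would pull them from a bibliographic database.

```python
from itertools import combinations
from collections import Counter

# Toy bibliographic records (illustrative only, not real data).
records = [
    {"id": "p1", "authors": ["Ana", "Ben", "Chen"]},
    {"id": "p2", "authors": ["Ana", "Chen"]},
    {"id": "p3", "authors": ["Ben", "Dana"]},
]

# Build a weighted co-authorship edge list: the weight of an edge is
# the number of papers the two authors wrote together.
edges = Counter()
for rec in records:
    for a, b in combinations(sorted(rec["authors"]), 2):
        edges[(a, b)] += 1

print(edges[("Ana", "Chen")])  # 2 joint papers
```

The same co-occurrence pattern generalizes to citation, co-citation, and keyword networks by swapping the `authors` field for references or terms; tools like VOSviewer automate exactly this construction at scale.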

In recent years, Artificial Intelligence (AI) technologies have advanced rapidly, transforming research practices by enabling new tools for data analysis, hypothesis generation, process optimization, and result interpretation. However, integrating AI faces technical, organizational, educational, and psychological challenges. The intersection of network analysis and AI offers promising opportunities for mutual enhancement. AI can be applied in bibliometrics for tasks like automated data collection, citation analysis, author disambiguation, co-authorship analysis, research impact assessment, text mining, and recommender systems (Saeidnia et al., 2024). Despite its potential, there is a need for a systematic overview of these currently fragmented practices and tools.

Our study explores how AI techniques and tools can enhance bibliometric network research across its three main stages. Using the OpenAlex database, we analyze English-language publications (2015-2025) on AI applications in bibliometric analysis. Through a quantitative approach, we identify key areas for AI integration, and a qualitative case study of selected sources highlights practical applications for researchers. The findings are validated via interviews with experts in the field of bibliometric network analysis.



10:40am - 11:00am

Patterns of bibliographic diversity in post-2022 publications on "LLMs" and "ChatGPT"

Moses Boudourides1, Evan Piepho2, Amin Gino Fabbrucci Barbagli3

1Northwestern University, United States of America; 2Arizona State University, United States of America; 3Università degli studi di Trieste, Italy

We compiled a dataset of 71,717 publications from the Dimensions.ai database using three keyword searches—“ChatGPT,” “large language model,” and “LLM”—covering the period from January 2022 to January 2025. After performing standard preprocessing, we extracted the author and publication metadata fields. The author fields include first name, last name, and gender, and the publication fields include publication id, date, type and venue of publication, concepts, research areas (from the category_for field in Dimensions), times cited, supporting grant ids, funding USD, and countries of funders. To categorize research areas, we used the ANZSRC 2020 (Australian and New Zealand Standard Research Classification).

The gender of authors was identified using the Namsor algorithm, which recognizes morphemes—the smallest units of construction within languages that help comprise words—to classify a name’s gender, ethnic origin, and other information. The accuracy of Namsor’s model has been verified by multiple studies and audits, and it is used frequently within academic and international institutions, particularly in the context of examining gender disparities. Our primary research questions center on analyzing patterns of gender diversity in post-2022 publications on large language models and ChatGPT, applying pre-existing scholarship to a rapidly emerging and highly publicized research domain. After identifying the gender of authors using Namsor, we evaluated the distribution of male and female authors across interdisciplinary research, journals, open access types of publications, number of citations, grant support, and countries of the funders.

Finally, we examined four types of networks derived from our dataset: co-authorship networks, citation networks, networks of shared concepts, and networks of shared fields of research. Furthermore, we conducted several statistical analyses using the Relational Hyperevent Model (RHEM), a family of statistical models designed to assess the likelihood of continued interactions among actors over time. RHEMs are particularly well suited for handling fine-grained, time-stamped events, such as those found in co-authorship networks. In this study, we applied RHEM to evaluate the likelihood of authors continuing their collaborations and to determine how these collaborations are influenced by the presence of grants and the sharing of common research concepts and fields.



11:00am - 11:20am

The ethos of science as a category that controls the development of AI

Magdalena Zdun

Cracow University of Economics, Poland

The research tradition on innovation, including the achievements of anthropologists and sociologists, shows that innovation is subject to double legitimization: cognitive (the usefulness of the innovation, etc.) and axiological (its fit with the system of norms and values). In addition, research conducted at the beginning of the twentieth century by anthropologists on the diffusion of innovation demonstrates the importance of the axiological legitimization of novelty. In this context, Linton wrote about the "troublesomeness of innovation," and Rogers pointed to different levels of acceptance of novelty: individual, collective, and the level of authority. These findings encourage evaluating AI (as a technology used in modern science) on the basis of axiological criteria. The aim of the presentation is to identify factors influencing the axiological legitimization of AI within the modern university. Two concepts serve this diagnosis: innovation and the ethos of science. The first allows us to indicate the dimensions of AI: technical, interactive, and normative. All these dimensions must be seen in relation to the key paradigms of the theory of innovation. The concept of ethos, in turn, allows us to add axiological duties to these dimensions. Finally, AI will be inscribed into a scheme of axiological assumptions, and ethos can be recognized as a category conditioning the further development of this technology. The method of analysis is theoretical discussion in the context of sociological theory. The result will be an AI legitimization scheme.



11:20am - 11:40am

The Influence of Trust Networks on Students' Perceptions of the Proficiency of Artificially Intelligent Assistants

Yutong Bu, Andrew Melatos, Robin Evans

The University of Melbourne, Australia

The increasing integration of artificial intelligence (AI) tools in educational settings has sparked extensive debate. Educators use AI to design lesson plans, provide feedback, and grade assessments, while students leverage these tools for background research and, in some cases, to directly generate answers to assessment tasks. This growing adoption has prompted a multifaceted conversation about ethics, philosophy, fairness and bias, educational goals, and adoption patterns. A particularly pressing issue is academic integrity, as AI tools enable students to complete assessments with minimal human input, prompting the development of policy frameworks to address this risk.

A critical question in AI-assisted education is how effectively AI tools perform assessment tasks, as evaluated by human experts. This question has two dimensions: intrinsic proficiency and perceived proficiency. Intrinsic proficiency, which has been widely studied, refers to the AI's measurable capability in both objective tasks (e.g., solving mathematical problems) and subjective tasks (e.g., writing essays). Perceived proficiency, however, remains underexplored. While often assumed to correlate with intrinsic proficiency, perceived proficiency is a socially emergent construct that directly influences the rate and equity of AI adoption across educational contexts.

This study examines how students in networked cohorts form perceptions of AI tool proficiency. Using Monte Carlo multi-agent simulations, we investigate how students’ trust ties and personal observations interact to shape their opinions. A probabilistic opinion dynamics model is employed, wherein each student's perception is represented by a probability density function (PDF) updated iteratively through independent observations and peer influence modulated by trust relationships.
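The probabilistic update described above can be sketched on a discretized belief: each student's perception of the AI tool's proficiency is a PDF over a grid of candidate values, sharpened by independent observations (a Bayesian update) and blended with neighbours' PDFs according to trust. The specific log-linear pooling rule and parameter choices below are modelling assumptions for illustration, not necessarily the authors' exact update rule.

```python
import numpy as np

# Grid of candidate proficiency values in [0, 1]; a student's belief is
# a discretized probability density over this grid.
theta = np.linspace(0.0, 1.0, 101)

def observe(pdf, success):
    """Bayesian update from one independent observation of the tool:
    a success has likelihood theta, a failure 1 - theta."""
    post = pdf * (theta if success else 1.0 - theta)
    return post / post.sum()

def peer_update(pdf_i, pdf_j, trust):
    """Peer influence via log-linear pooling of a neighbour's belief,
    weighted by a trust coefficient in [0, 1] (modelling assumption)."""
    post = pdf_i * pdf_j ** trust
    return post / post.sum()

belief = np.full_like(theta, 1.0 / theta.size)   # flat prior
for success in (True, True, False, True):        # mostly positive evidence
    belief = observe(belief, success)

mean = float((theta * belief).sum())  # posterior mean rises above 0.5
```

With `trust = 0` a neighbour's belief has no effect, while `trust = 1` weighs it as heavily as the student's own; iterating both updates over a trust network is what the Monte Carlo simulations in the study explore at scale.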

Our model generalises previous, non-probabilistic work, which restricts agents to holding a single belief at a time, by allowing agents to maintain a spectrum of uncertain beliefs at any given time. We also consider antagonistic interactions between agents by introducing negative ties within the network.

The findings underscore the role of trust and peer dynamics in shaping perceived proficiency, which can diverge from intrinsic performance. We compute students' asymptotic learning time as a function of the number of AI users in different types of networks. We find that in high-trust networks, students are able to infer the AI tool's proficiency correctly, while in low-trust networks, most agents infer the proficiency incorrectly. Disturbing the network with even one partisan (an obdurate agent who refuses to change its opinion, regardless of external inputs or peer pressure) makes students' opinions fluctuate indefinitely. We also explore the role of teachers in shaping students' opinions of the AI tool's proficiency. Finally, we discuss the implications for the design of policies governing AI use in education. Specifically, we highlight potential unintended and inequitable outcomes stemming from counterintuitive network effects, emphasising the need for strategies to ensure fair and effective adoption of AI tools.



 
Conference: INSNA Sunbelt 2025
Conference Software: ConfTool Pro 2.6.153+TC
© 2001–2025 by Dr. H. Weinreich, Hamburg, Germany