Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

 
 
Session Overview
Session
OS-9: Beyond detection: disinformation and the amplification of toxic content in the age of social media
Time:
Wednesday, 25/June/2025:
8:00am - 9:40am

Location: Room 109

Session Topics:
Beyond detection: disinformation and the amplification of toxic content in the age of social media

Presentations
8:00am - 8:20am

Cognitive Warfare on Social Networks

André Carvalho, Aimãn Mourad, Maria Conejero

Centro Universitário FEI, Brazil

Cognitive warfare is an expanding phenomenon that operates within the domain of the human mind and is transforming how information is shared and interpreted on social networks. The dissemination of disinformation and fake news, driven by advanced technologies, particularly generative artificial intelligence, heightens the complexity of these strategies, whose primary objective is to manipulate individuals' thoughts and behaviors. The analysis of this emerging form of warfare and its global implications guides this study, which aims to provide valuable insights for the formulation of public policies to counter threats to the psychosocial expression of democracy resulting from the amplification of such influence on social platforms.

To achieve this aim, a mixed-methods approach was employed, incorporating exploratory and systematic literature reviews, case studies, and semi-structured interviews. This research stands out not only for the innovative nature of its subject matter but also for its complexity, evidenced by the integration of knowledge from diverse fields. The findings reveal that the manipulation of information and public perceptions on social networks is a central weapon of cognitive warfare, exploiting brain vulnerabilities to influence emotions, shape behaviors, and destabilize democratic societies.

Recent events illustrate how this dynamic unfolds, exposing significant vulnerabilities in the psychosocial expression of democracy. In light of the growing influence of cognitive warfare on social media and its global ramifications, this study offers valuable contributions toward the development of public policies. Moreover, its recommendations have concrete applicability, outlining a set of actionable measures.



8:20am - 8:40am

A Data-Driven Adaptive Approach to Supporting Fact-Checking and Mitigating Mis/Disinformation Through Domain Quality Evaluation

Kaveh Kadkhoda1, Anna Bertani1,2, Thomas Louf1, Riccardo Gallotti1

1Fondazione Bruno Kessler, Italy; 2University of Trento, Italy

Misinformation spreads rapidly on social media, harming public debate, democracy, and social stability. To address this concern, we propose a real-time machine learning system that predicts website trustworthiness by analyzing 48 domain features, including PageRank, Domain Authority, and Spam Score. Our system achieves a mean absolute error of 0.12, demonstrating high accuracy in estimating website reliability. It adapts to the changing online environment and enables researchers, media agencies, and policymakers to identify suspicious domains more efficiently.

Existing tools such as NewsGuard and Media Bias Fact Check offer valuable assessments but only cover a limited number of sites and often require paid subscriptions. These constraints make it challenging for large-scale efforts to reduce misinformation. Our approach overcomes these limitations by using a comprehensive dataset of domains, gathered from six expert sources. Missing data was handled using advanced imputation, and a unified trustworthiness score between 0 and 1 was assigned to each site. Higher scores indicate more reliable domains, while lower scores suggest questionable sources.

We follow a streamlined workflow that includes data preparation, splitting, model training, performance evaluation, and new domain assessment. Because the system updates as the internet evolves, it supports fact-checking organizations, social media platforms, and other stakeholders in making timely, informed decisions about untrustworthy sites. Fact-checkers can focus on highly suspicious domains, and social media companies can label or prioritize content more effectively. In summary, our system provides a scalable, adaptive, and cost-effective solution to evaluating domain credibility, meeting an urgent need to mitigate the spread of misinformation.
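As a rough illustration of the kind of workflow described above (data preparation, splitting, training, evaluation), the sketch below fits a regressor on domain features to predict a 0–1 trustworthiness score. The file name, column names, imputation method, and choice of model are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: map numeric domain features (e.g. PageRank, Domain
# Authority, Spam Score) to a trustworthiness score in [0, 1].
# The CSV path, column names, and model choice are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.impute import KNNImputer
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("domain_features.csv")          # hypothetical file: one row per domain
X = df.drop(columns=["domain", "trust_score"])   # 48 numeric domain features
y = df["trust_score"]                            # expert-derived score in [0, 1]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Impute missing feature values, then fit a regressor on the expert scores.
model = make_pipeline(KNNImputer(n_neighbors=5),
                      GradientBoostingRegressor(random_state=0))
model.fit(X_train, y_train)

pred = model.predict(X_test).clip(0, 1)
print(f"MAE: {mean_absolute_error(y_test, pred):.3f}")
```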



8:40am - 9:00am

A temporal-network perspective on the longitudinal analysis of online coordinated behaviour

Luca Rossi1, Matteo Magnani2

1IT University Copenhagen; 2Uppsala University, Sweden

Network-based methods have proven successful in detecting coordination in social media, for example to identify deceptive attempts to increase the visibility of social media posts and websites. The typical approach consists of constructing networks of social media accounts, where edges indicate a sign of possible coordination (for example, the simultaneous sharing of the same URL by two accounts), and then looking for dense subnetworks (communities) potentially indicating coordinated botnets. In this approach, time is fundamental, as it is used to select and/or weight edges; yet little attention has been dedicated to the longitudinal aspects of coordination.

In this work we examine the suitability of different temporal network analysis methods for the longitudinal study of online coordination, where the network is split into temporal slices and communities are traced over time. In particular, we focus on two methodological questions: (1) How should the network be sliced so that analytically interesting communities emerge? Here we consider calendar-based, data-based, and community-based slicing. (2) Which temporal community detection methods provide the most informative results? Here we test both multilayer methods (simultaneously looking for communities across all slices) and layer-by-layer methods (looking for communities in individual slices and tracing their temporal evolution).

For our empirical study we use a climate-change-communication dataset of 64,500 Facebook posts published between 2023 and 2024 by 470 manually selected actors (294) and counter-actors (176) active in the climate debate. Our results confirm the intuition, recently suggested in the literature, that a longitudinal approach can find more traces of coordination and reveal dynamic patterns that are otherwise lost in a static analysis. At the same time, we raise questions about the validity of existing design choices and their consequences, and suggest associated guidelines. A noteworthy result of our experimental study is the observation that the most common community detection approach used in the literature to identify coordination risks identifying groups for which there is no strong evidence of actual coordination, and that this problem is exacerbated when a longitudinal perspective is adopted.
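For readers unfamiliar with the layer-by-layer approach, the sketch below shows one way it could look: calendar-based (monthly) slicing of a coordination network, community detection within each slice, and tracing communities across slices by Jaccard overlap. The data format, slicing granularity, and overlap threshold are assumptions, not the authors' pipeline.

```python
# Illustrative sketch only: calendar-based slicing plus layer-by-layer community
# detection, with communities traced across consecutive slices via Jaccard overlap.
from collections import defaultdict
import networkx as nx

def jaccard(a, b):
    return len(a & b) / len(a | b)

# events: (account_u, account_v, timestamp) tuples marking a coordination signal,
# e.g. two accounts sharing the same URL within a short time window.
def slice_and_detect(events):
    slices = defaultdict(nx.Graph)
    for u, v, ts in events:                      # ts assumed to be a datetime
        key = (ts.year, ts.month)                # calendar-based (monthly) slicing
        g = slices[key]
        if g.has_edge(u, v):
            g[u][v]["weight"] += 1
        else:
            g.add_edge(u, v, weight=1)

    return {key: [set(c) for c in
                  nx.community.louvain_communities(slices[key], weight="weight", seed=0)]
            for key in sorted(slices)}

def trace_communities(detected, min_overlap=0.3):
    """Link each community to its best-matching successor in the next slice."""
    keys = sorted(detected)
    links = []
    for prev, nxt in zip(keys, keys[1:]):
        for i, c in enumerate(detected[prev]):
            candidates = detected[nxt]
            if not candidates:
                continue
            best = max(range(len(candidates)), key=lambda j: jaccard(c, candidates[j]))
            if jaccard(c, candidates[best]) >= min_overlap:
                links.append(((prev, i), (nxt, best)))
    return links
```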



9:00am - 9:20am

How algorithms recommend political content on social networks

Tim Faverjon1, Pedro Ramaciotti2

1Sciences Po Paris, France; 2Institut des Systèmes Complexes de Paris Ile-de-France CNRS, France

Recommendation algorithms are widely used on social media platforms and play a crucial role in shaping users' information environments. These algorithms leverage traces of online behavior—such as likes, shares, and social ties—to predict user preferences. Given that such behaviors correlate with political attitudes, to what extent do recommendation algorithms learn and leverage users' political leanings? And how would recommendations change if algorithms were prevented from using this information?

Using algorithmic explanation methods, we identified political information within the representation learning space of recommendation models. We then manually reduced the influence of this information and examined its effects on political bias and content diversity.

In a case study based on URL-sharing data from X (formerly Twitter), we trained a recommendation algorithm and used political attitude estimates to analyze how political patterns emerge in the model’s hidden layers. Our findings show that even when trained solely on behavioral data, the algorithm captures users’ political traits, steering recommendations toward partisan content. We observed that ignoring political information effectively reduces political bias in recommendations but also diminishes content diversity as recommendations shift toward mainstream sources.
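One simple way to picture the "identify and reduce political information in the embedding space" step is a linear probe: fit external political-attitude estimates against user embeddings, treat the coefficient vector as a political axis, and project it out. This is a minimal sketch under that assumption; the embedding matrix, attitude scores, and the linear-probe choice are illustrative stand-ins, not the authors' explanation method.

```python
# Illustrative sketch only: probe a recommender's learned user embeddings for a
# "political leaning" direction and project it out before recommending.
import numpy as np
from sklearn.linear_model import LinearRegression

def political_direction(user_emb, leaning):
    """Fit leaning ~ embedding; the normalized coefficient vector is the political axis."""
    reg = LinearRegression().fit(user_emb, leaning)
    w = reg.coef_
    return w / np.linalg.norm(w)

def remove_direction(user_emb, direction):
    """Project embeddings onto the subspace orthogonal to the political axis."""
    return user_emb - np.outer(user_emb @ direction, direction)

rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 64))            # hypothetical 64-d user embeddings
leaning = rng.uniform(-1, 1, size=1000)      # hypothetical left-right attitude estimates
axis = political_direction(emb, leaning)
emb_debiased = remove_direction(emb, axis)   # recommendations would then use these
```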

This study proposes a method to assess the impact of political information in recommendation systems, opening new avenues for understanding the role of recommender systems in shaping and amplifying online political discourse.



9:20am - 9:40am

Studying information segregation on YouTube: Structural differences in the recommendation graph

Marijn A. Keijzer1, Lukas Erhard2, Zarine Kharazian3, Manika Lamba4

1Institute for Advanced Study in Toulouse, France; 2University of Stuttgart, Germany; 3University of Washington, USA; 4University of Oklahoma, USA

The impact of YouTube's recommendation algorithm has been the subject of debate. Case studies and anecdotal evidence have suggested that algorithmic recommendations may lure the platform's users into rabbit holes where they are exposed to biased content and misinformation. Empirical research on rabbit holes, however, has been challenging to conduct because of the difficulty of separating the effects of the algorithm from user behavior. In this study, we examine algorithmic recommendations separately from user behavior through a comparative network approach. For a range of political and non-political issues, we collect data on which videos the YouTube algorithm recommends after a given video has been watched. The resulting networks of YouTube recommendations between videos on different issues are then compared on structural network characteristics. If the algorithm creates rabbit holes, one would expect the recommendation networks of videos with conspiratorial content to be characterized by stronger modularity.

To test this hypothesis, we analyzed 15,455 videos and 154,311 recommendation links on 40 topics, each belonging to one of four main categories: news, science, conspiracy, and non-controversial. On the macro level, we find that conspiratorial video networks are small, sparse, and centralized, with their density partly explained by lower view counts and network size. They often link to leftist partisan channels, but not more so than other networks. The sentiment of their videos is negative on average, like news networks but unlike science and non-controversial networks. Finally, on the micro level, we find that sentiment does not predict in-degree; recommendations are driven mainly by view count.
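The comparison of recommendation networks on structural characteristics could be computed along the lines of the sketch below, which collects a few per-topic statistics (density, modularity, a simple centralization proxy). The edge-list input and the specific statistics chosen are assumptions for illustration, not the study's exact measures.

```python
# Illustrative sketch only: compare topic-level recommendation networks on basic
# structural statistics. Edge lists per topic are hypothetical inputs.
import networkx as nx

def network_stats(edges):
    """edges: iterable of (source_video, recommended_video) pairs for one topic."""
    g = nx.DiGraph(edges)
    und = g.to_undirected()
    communities = nx.community.louvain_communities(und, seed=0)
    return {
        "n_videos": g.number_of_nodes(),
        "n_recommendations": g.number_of_edges(),
        "density": nx.density(g),
        "modularity": nx.community.modularity(und, communities),
        # crude centralization proxy: share of recommendations pointing to the top video
        "max_in_degree_share": max(d for _, d in g.in_degree()) / g.number_of_edges(),
    }

# Usage: stats_by_topic = {topic: network_stats(edges) for topic, edges in topic_edges.items()}
```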



9:40am - 10:00am

The COVID-19 Infodemic on Twitter: Exploring Patterns and Dynamics across Countries

Anna Bertani1,2, Alessandro Cortese1, Federico Pilati4, Pierluigi Sacco3, Riccardo Gallotti1

1Fondazione Bruno Kessler, Italy; 2University of Trento; 3University of Chieti-Pescara; 4University of Bologna

The COVID-19 pandemic was accompanied by a wide spread of false and misleading information on online social media (the so-called infodemic). Having reliable indicators of the extent of the infodemic is crucial to enable targeted interventions, protect public health, and promote accurate information dissemination. In this study, we validate the three infodemic metrics of the FBK COVID-19 Infodemics Observatory (Gallotti et al., 2020), elaborated on a large dataset of over 1.3 billion tweets, by assessing their degree of correlation with a set of 20 country-level socioeconomic indicators for 37 OECD countries. Using dimensionality reduction techniques such as Uniform Manifold Approximation and Projection (UMAP), we project socioeconomic indicators and countries into a two-dimensional space to identify underlying structures in the data. Our findings reveal distinct clusters of countries based on their infodemic risk index and socioeconomic characteristics. Countries with stronger democratic institutions, higher education levels, and diverse media environments exhibited lower infodemic risks, while those with greater political and social polarization were more vulnerable to misinformation. Additionally, we examine the evolution of Infodemic Risk over time, identifying shifts in misinformation dynamics through k-means clustering and principal component analysis. Furthermore, we analyze the role of media diversity (Bertani et al., 2024) in shaping a country's resilience against misinformation. Our results indicate a positive correlation between media pluralism and lower infodemic risks, emphasizing the importance of a diverse news ecosystem in mitigating misinformation spread. These insights provide valuable implications for policymakers and researchers aiming to combat digital misinformation and enhance public trust in information sources.
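The UMAP-plus-clustering step described above could look roughly like the sketch below, which projects standardized country-level indicators into two dimensions and groups countries with k-means. The input file, number of neighbors, and number of clusters are illustrative assumptions, not the study's settings.

```python
# Illustrative sketch only: 2-D UMAP projection of country-level socioeconomic
# indicators followed by k-means clustering of countries.
import pandas as pd
import umap                                    # package: umap-learn
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("oecd_indicators.csv", index_col="country")   # hypothetical file
X = StandardScaler().fit_transform(df)                          # one column per indicator

embedding = umap.UMAP(n_components=2, n_neighbors=10, random_state=42).fit_transform(X)
clusters = KMeans(n_clusters=4, random_state=42, n_init="auto").fit_predict(embedding)

result = pd.DataFrame(embedding, columns=["umap_1", "umap_2"], index=df.index)
result["cluster"] = clusters
print(result.sort_values("cluster"))
```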



10:00am - 10:20am

The Diffusion of Propaganda on Social Media: Analyzing Russian and Chinese Influence on X (Twitter) during Xi Jinping's visit to Moscow in 2023

Iuliia Alieva

University of Stuttgart, Germany

Propaganda and disinformation have become central tools for both the Kremlin and China in advancing their political goals and strategic narratives. The rise of social media and the computational power of the Internet have significantly amplified these efforts, enabling the widespread dissemination of political messaging designed to shape and control public discourse.

Prior research has documented Kremlin-affiliated disinformation operations, such as those conducted by the Internet Research Agency (IRA), which has sought to influence political and social discourse in multiple countries. The IRA has been identified as a primary source of malicious online activity, using divisive messaging on social media to manipulate public opinion, promote strategic narratives, and foster destabilization, polarization, information disorder, and societal distrust. Notable instances include interference in the 2016 U.S. presidential election, the 2016 Brexit referendum in the United Kingdom, and other socio-political events (Badawy et al., 2019; Bastos & Mercea, 2018; Linvill & Warren, 2020).

Scholars have also examined propaganda and disinformation narratives surrounding Russia’s invasion of Ukraine (Alieva et al., 2024; Geissler et al., 2023). Additionally, researchers at the European Union Disinformation Lab (EU DisinfoLab) identified "Operation Doppelgänger," a 2022 Russian disinformation campaign that created fake websites mimicking legitimate news outlets to spread pro-Russian narratives and undermine support for Ukraine. Investigations by the U.S. Department of Justice further exposed the campaign's covert methods and infrastructure (EU DisinfoLab, 2022; U.S. Department of Justice, 2024).

This study contributes to the existing research by analyzing the propaganda strategies employed by Chinese and Russian state actors, identifying key users and the main narratives they propagate. Specifically, it examines discourse on X (formerly Twitter) surrounding Chinese leader Xi Jinping's visit to Moscow in March 2023 to meet with Russian President Vladimir Putin, focusing on the diffusion of narratives promoted by Russian and Chinese actors.

References:

Alieva, I., Kloo, I., & Carley, K. M. (2024). Analyzing Russia’s propaganda tactics on Twitter using mixed methods network analysis and natural language processing: A case study of the 2022 invasion of Ukraine. EPJ Data Science, 13(42). https://doi.org/10.1140/epjds/s13688-024-00479-w

Badawy, A., Addawood, A., Lerman, K., & Ferrara, E. (2019). Characterizing the 2016 Russian IRA influence campaign. Social Network Analysis and Mining, 9, 1–11.

Bastos, M., & Mercea, D. (2018). The public accountability of social platforms: Lessons from a study on bots and trolls in the Brexit campaign. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2128), 20180003.

EU DisinfoLab. (2022). Operation Doppelgänger. https://www.disinfo.eu

Geissler, D., Bar, D., Prollochs, N., & Feuerriegel, S. (2023). Russian propaganda on social media during the 2022 invasion of Ukraine. EPJ Data Science, 12(1), 35.

Linvill, D. L., & Warren, P. L. (2020). Troll factories: Manufacturing specialized disinformation on Twitter. Political Communication, 37(4), 447–467.

U.S. Department of Justice. (2024). Justice Department disrupts covert Russian government-sponsored foreign malign influence campaign. https://www.justice.gov



10:20am - 10:40am

The role of moral values in the social media debate

Giulio Prevedello1,2, Emanuele Brugnoli2,3,4, Pietro Gravino1,2, D. Ruggiero Lo Sardo2,3,4, Vittorio Loreto2,3,4,5

1Sony CSL - Paris, France; 2Enrico Fermi’s Research Center, Italy; 3Sony CSL - Rome, Italy; 4Sapienza University of Rome, Italy; 5Complexity Science Hub Vienna, Austria

Social media platforms serve as digital arenas for public discourse, shaped by news providers, political entities, and user interactions. Within this space, leader-follower relationships influence debate dynamics, often affected by polarization, misinformation, and toxicity.

Our research examines the role of moral values in shaping engagement and toxicity in online discussions. We focus on Moral Foundations Theory (MFT), which defines five moral dyads: care/harm, fairness/cheating, loyalty/betrayal, authority/subversion, and purity/degradation.

We analyzed immigration-related tweets (2018–2022) from 516 Italian news providers and political figures, along with follower interactions. Using a fine-tuned deep learning model, we identified the primary moral dyad in each tweet and combined this information with toxicity scores from Google’s Perspective API.
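Combining a moral-dyad label with a Perspective API toxicity score could be done along the lines of the sketch below. The classifier model path is a placeholder for a fine-tuned model (not the authors' checkpoint), the API key is a placeholder, and the request format follows the publicly documented Perspective API endpoint.

```python
# Illustrative sketch only: score a tweet's toxicity with Google's Perspective API
# and pair it with a moral-dyad label from a (hypothetical) fine-tuned classifier.
import requests
from transformers import pipeline

PERSPECTIVE_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
                   "comments:analyze?key=YOUR_API_KEY")          # placeholder key

def toxicity(text, lang="it"):
    body = {"comment": {"text": text},
            "languages": [lang],
            "requestedAttributes": {"TOXICITY": {}}}
    resp = requests.post(PERSPECTIVE_URL, json=body, timeout=10).json()
    return resp["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Hypothetical fine-tuned moral-dyad classifier (model path is a placeholder).
moral_clf = pipeline("text-classification", model="path/to/moral-dyad-model")

def annotate(tweet):
    return {"text": tweet,
            "moral_dyad": moral_clf(tweet)[0]["label"],   # e.g. "care/harm"
            "toxicity": toxicity(tweet)}
```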

Our findings show that fairness/cheating and authority/subversion correlate with engagement, while purity/degradation and care/harm correlate and anti-correlate with toxicity, respectively. Community analysis based on retweets of moral content provided a finer-grained segmentation of the Italian political landscape than standard methods. The progressive or conservative leaning of political accounts within the same group aligned with their mentions of moral values, as expected under MFT. Finally, we found evidence of in-group bias, with followers engaging more toxically when interacting with out-group communities.

These insights suggest that leveraging moral alignment between users and content from opposing communities could help design interventions that foster healthier discourse between polarized groups.



10:40am - 11:00am

Unveiling emerging moderation dynamics in Mastodon’s federated instance network

Beatriz Arregui Garcia1, Lucio La Cava2, Anees Baqir3, Riccardo Gallotti1, Sandro Meloni4

1Fondazione Bruno Kessler, Italy; 2Universita di Calabria; 3Northeastern University London; 4Centro Studi e Ricerche “Enrico Fermi”

Mastodon, a decentralized online social network (DOSN), has experienced rapid user migration from traditional social media platforms. As a microblogging platform within the Fediverse, Mastodon operates through independent instances that communicate with each other. This decentralized nature reshapes network structures and alters information flow, presenting new challenges in moderation and the management of harmful content.

This study investigates the relationship between Mastodon’s friendship network, based on follow relationships, and its moderation mechanisms, which define inter-instance restrictions. By analyzing structural changes in this signed and directed network over a year, we identify evolving moderation actors while observing persistent large-scale patterns. The banning-banned network naturally divides into two groups: a majority of banned instances and a smaller, highly active minority responsible for most bannings.

Using an information diffusion model, we analyze how these structures influence the spread of information. Our findings reveal that the minority group predominantly shares information internally, while the majority group demonstrates less cohesion. Additionally, cross-group information flow is asymmetrical, with the majority group becoming rapidly isolated, whereas the minority retains greater resilience in spreading information. An echo-chamber effect emerges, reinforcing the separation of the minority from untrusted instances.
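The abstract does not specify which diffusion model is used; a generic stand-in such as an independent-cascade simulation on the directed instance network, as sketched below, conveys the idea of comparing how far information seeded in each group travels. The graph, seed sets, and spreading probability are hypothetical.

```python
# Illustrative sketch only: independent-cascade spreading on a directed
# instance-level follow network, to compare reach from different seed groups.
import random
import networkx as nx

def simulate_spread(g, seeds, p=0.05, seed=0):
    """Returns the set of instances reached from the given seed instances."""
    rng = random.Random(seed)
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for node in frontier:
            for neigh in g.successors(node):
                if neigh not in active and rng.random() < p:
                    active.add(neigh)
                    nxt.append(neigh)
        frontier = nxt
    return active

# Usage: compare average reach when seeding in the "banning" minority versus the
# banned majority, e.g. simulate_spread(follow_graph, minority_instances).
```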

Understanding these mechanisms is critical to mitigating the spread of harmful content and fostering healthy, diverse digital ecosystems. This study provides insights into moderation dynamics in decentralized networks, offering implications for platform governance and information integrity.



11:00am - 11:20am

Amplifying Extremism: Network Dynamics of Conspiratorial and Toxic Content in the Canadian Freedom Convoy Movement

Deena Abul-Fottouh1, Jan Eckardt2, Rachel McLay1, Mathieu Turgeon2

1Dalhousie University, Canada; 2University of Western Ontario

Crises often fuel conspiracy theories, which can serve as radicalizing mechanisms and pathways to extremism. This was evident during the COVID-19 pandemic, as anti-vaccine groups advanced narratives suggesting the virus was engineered or that vaccines were a coordinated scheme between pharmaceutical companies and governments. The 2022 Canadian Freedom Convoy protests emerged from such sentiments, evolving into a movement where right-wing extremists played a key role in spreading conspiracy theories and mobilizing digital activism.

This study examines how conspiratorial and extremist narratives propagate through online networks. Using discussions from three pro-convoy X (formerly Twitter) hashtags, we employ large language models to detect and classify conspiracy theories, analyzing how they spread through network structures. Moving beyond individual-level characteristics of conspiracy theorists, we investigate the role of network positions in amplifying toxic content. Applying community detection methods, we assess whether conspiratorial discourse is confined within echo chambers or bridges broader audiences. Additionally, we conduct network analysis of the most shared URL domains to evaluate their ideological bias and role in spreading conspiracy theories and right-wing extremism.
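As a hedged illustration of combining automated conspiracy labeling with community detection, the sketch below uses an off-the-shelf zero-shot classifier as a stand-in for the large language models mentioned above and measures the share of conspiratorial tweets per retweet-network community. The model, labels, and threshold are assumptions, not the authors' setup.

```python
# Illustrative sketch only: label tweets for conspiratorial content with a
# zero-shot classifier and summarize label prevalence per retweet community.
import networkx as nx
from transformers import pipeline

clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
LABELS = ["conspiracy theory", "news or opinion"]

def is_conspiratorial(text, threshold=0.7):
    out = clf(text, candidate_labels=LABELS)
    return out["labels"][0] == "conspiracy theory" and out["scores"][0] >= threshold

def conspiracy_share_by_community(retweet_graph, tweets_by_user):
    """Fraction of conspiratorial tweets in each retweet-network community."""
    und = retweet_graph.to_undirected()
    shares = {}
    for i, comm in enumerate(nx.community.louvain_communities(und, seed=0)):
        texts = [t for u in comm for t in tweets_by_user.get(u, [])]
        flags = [is_conspiratorial(t) for t in texts]
        shares[i] = sum(flags) / len(flags) if flags else 0.0
    return shares
```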

By examining the lifecycle of conspiracy theories—tracking their reach, speed of proliferation, and engagement—we provide insights into the structural mechanisms that sustain digital extremism. This research highlights how online networks facilitate the amplification of toxic content, offering broader implications for understanding the intersection of digital activism, misinformation, and radicalization.



11:20am - 11:40am

Streamwork Makes the Dream Work! Cross-Platform Collaboration and Community-Building Among Far-Right and Conspiracy-Ideologist Actors on Telegram and YouTube.

Harald Sick1, Pablo Jost1, Michael Schmidt2, Christian Donner2

1Johannes Gutenberg University Mainz, Germany; 2Institute for Democracy and Civil Society Jena, Germany

How do political actors build communities across platforms? This question arises when trying to understand how digital counter-publics function in far-right extremist and conspiracy-ideological milieus. To generate attention, actors have to network well and navigate their audiences across platforms. YouTube has become a key platform for far-right, conspiracy-ideological groups and channels spreading (coronavirus) disinformation, influencing discourse from the margins to the mainstream (Baele et al., 2023; Knüpfer et al., 2023). These actors view YouTube as a vital space for networking and constructing an alternative public sphere (Munger & Phillips, 2022): YouTubers can benefit from the followership of prominent figures in the scene through mutual invitations, interviews, and joint video podcasts (Lewis, 2018).

This is where our contribution comes in. Based on data from around 2,000 German Telegram channels from far-right and conspiracist milieus, we examined cross-platform use and identified which actors use Telegram and YouTube in parallel (RQ1). Additionally, we analyzed their collaborative behavior on YouTube to identify the factors underlying potential collaborations between actors (RQ2) and to assess whether these community-building efforts are successful (RQ3).

By analyzing the bimodal, cross-platform link network, we were able to find 470 actors who operate channels on both platforms. Using a custom-built retrieval augmented generation (RAG) system, we were then able to identify their collaborations through shared appearances in their 77,770 videos. Exponential random graph models of the collaboration network on YouTube show that ideological similarity is the main driver of collaborations and that conspiracy ideologues in particular build bridges across milieu boundaries. Finally, we will assess the success of their community-building efforts by using Relational Event Models to analyze how collaborations influence audience viewing behavior, as reflected in the video comments.



11:40am - 12:00pm

The resilience of conspiracy theory networks on social media: from COVID-19 to the Russian invasion of Ukraine

Antti Gronow1, Arttu Malkamäki2, Pamela Mullo1

1University of Helsinki, Finland; 2Aalto University, Finland

Digital communication and social media increasingly act as sources of information and news. At the same time, this shift has facilitated the spread of misinformation and conspiracy theories, with social media coming to play a central role in their diffusion, particularly during crises. The COVID-19 pandemic saw an explosion of misinformation and conspiratorial narratives, with social media networks of conspiracy theorists reinforcing and amplifying these claims. While previous research has found that conspiracy theories can act as monological belief systems, such that believing in one theory predicts believing in another, less is known about whether social media conspiracy theory networks are resilient to sudden changes in the underlying societal crises. We argue that the historical and geopolitical context may mediate the extent to which a changing societal crisis acts as a challenge to the resilience of conspiracy theory networks on social media.

In this paper, we examine the extent to which the social media networks of conspiracy theorists are resilient to changing crises, shifting their focus from one issue to another while retaining their network structure and the positions within those structures. We analyze Finnish Twitter data on conspiracy theory networks associated with COVID-19 and determine what proportion of accounts started spreading Ukraine-related conspiracy theories and what their positions are in both networks. We also examine how resilient the networks around these topics are to the spread of conspiracy theories. The Finnish context makes the case a "hard" test of conspiracy theory resilience because of a history of conflictual relations with Russia and because belief in the Russian narrative was marginal in Finland compared to many other countries. The results show that conspiracy networks are relatively resilient, especially the core group of users, but also that their influence outside of their own epistemic bubbles remains limited.



 