Session
OS-117: Beyond detection: disinformation and the amplification of toxic content in the age of social media 2

Session Topics: Beyond detection: disinformation and the amplification of toxic content in the age of social media

Presentations
10:00am - 10:20am
The COVID-19 Infodemic on Twitter: Exploring Patterns and Dynamics across Countries
1Fondazione Bruno Kessler, Italy; 2University of Trento; 3University of Chieti-Pescara; 4University of Bologna
The COVID-19 pandemic was accompanied by the widespread circulation of false and misleading information on online social media (the so-called infodemic). Reliable indicators of the extent of the infodemic are crucial for enabling targeted interventions, protecting public health, and promoting the dissemination of accurate information. In this study, we validate the three infodemic metrics of the FBK COVID-19 Infodemics Observatory (Gallotti et al., 2020), computed on a large dataset of over 1.3 billion tweets, by assessing their degree of correlation with a set of 20 country-level socioeconomic indicators for 37 OECD countries. Using dimensionality reduction techniques such as Uniform Manifold Approximation and Projection (UMAP), we project socioeconomic indicators and countries into a two-dimensional space to identify underlying structures in the data. Our findings reveal distinct clusters of countries based on their infodemic risk index and socioeconomic characteristics. Countries with stronger democratic institutions, higher education levels, and diverse media environments exhibited lower infodemic risks, while those with greater political and social polarization were more vulnerable to misinformation. Additionally, we examine the evolution of infodemic risk over time, identifying shifts in misinformation dynamics through k-means clustering and principal component analysis. Furthermore, we analyze the role of media diversity (Bertani et al., 2024) in shaping a country's resilience against misinformation. Our results indicate that greater media pluralism is associated with lower infodemic risk, emphasizing the importance of a diverse news ecosystem in mitigating misinformation spread. These insights provide valuable implications for policymakers and researchers aiming to combat digital misinformation and enhance public trust in information sources.
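A minimal sketch of the projection-and-clustering pipeline this abstract describes, using umap-learn and scikit-learn. The indicator matrix is synthetic, and all parameter choices (number of neighbours, number of clusters) are illustrative assumptions rather than the authors' settings:

```python
# Sketch of the UMAP + k-means + PCA pipeline from the abstract.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
import umap  # pip install umap-learn

rng = np.random.default_rng(0)
# Hypothetical data: 37 OECD countries x 20 socioeconomic indicators.
X = rng.normal(size=(37, 20))

# Standardize indicators so no single scale dominates the embedding.
X_std = StandardScaler().fit_transform(X)

# Project countries into 2D with UMAP to expose cluster structure.
embedding = umap.UMAP(n_components=2, n_neighbors=10,
                      random_state=42).fit_transform(X_std)

# Group countries in the embedded space with k-means (k is an assumption).
labels = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(embedding)

# PCA on the standardized indicators, as used for the temporal analysis.
pcs = PCA(n_components=2).fit_transform(X_std)

print(labels)
```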

10:20am - 10:40am
The Diffusion of Propaganda on Social Media: Analyzing Russian and Chinese Influence on X (Twitter) during Xi Jinping's visit to Moscow in 2023
University of Stuttgart, Germany
Propaganda and disinformation have become central tools for both the Kremlin and China in advancing their political goals and strategic narratives. The rise of social media and the computational power of the Internet have significantly amplified these efforts, enabling the widespread dissemination of political messaging designed to shape and control public discourse. Prior research has documented Kremlin-affiliated disinformation operations, such as those conducted by the Internet Research Agency (IRA), which has sought to influence political and social discourse in multiple countries. The IRA has been identified as a primary source of malicious online activity, using divisive messaging on social media to manipulate public opinion, promote strategic narratives, and foster destabilization, polarization, information disorder, and societal distrust. Notable instances include interference in the 2016 U.S. presidential election, the 2016 Brexit referendum in the United Kingdom, and other socio-political events (Badawy et al., 2019; Bastos & Mercea, 2018; Linvill & Warren, 2020). Scholars have also examined propaganda and disinformation narratives surrounding Russia's invasion of Ukraine (Alieva et al., 2024; Geissler et al., 2023). Additionally, researchers at the European Union Disinformation Lab (EU DisinfoLab) identified "Operation Doppelgänger," a 2022 Russian disinformation campaign that created fake websites mimicking legitimate news outlets to spread pro-Russian narratives and undermine support for Ukraine. Investigations by the U.S. Department of Justice further exposed the campaign's covert methods and infrastructure (EU DisinfoLab, 2022; U.S. Department of Justice, 2024). This study contributes to the existing research by analyzing the propaganda strategies employed by Chinese and Russian state actors, identifying key users and the main narratives they propagate. Specifically, it examines discourse on X (formerly Twitter) surrounding Chinese leader Xi Jinping's visit to Moscow in March 2023 to meet with Russian President Vladimir Putin, focusing on the diffusion of narratives promoted by Russian and Chinese actors.
References:
Alieva, I., Kloo, I., & Carley, K. M. (2024). Analyzing Russia's propaganda tactics on Twitter using mixed methods network analysis and natural language processing: A case study of the 2022 invasion of Ukraine. EPJ Data Science, 13(42). https://doi.org/10.1140/epjds/s13688-024-00479-w
Badawy, A., Addawood, A., Lerman, K., & Ferrara, E. (2019). Characterizing the 2016 Russian IRA influence campaign. Social Network Analysis and Mining, 9, 1–11.
Bastos, M., & Mercea, D. (2018). The public accountability of social platforms: Lessons from a study on bots and trolls in the Brexit campaign. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2128), 20180003.
EU DisinfoLab. (2022). Operation Doppelgänger. https://www.disinfo.eu
Geissler, D., Bär, D., Pröllochs, N., & Feuerriegel, S. (2023). Russian propaganda on social media during the 2022 invasion of Ukraine. EPJ Data Science, 12(1), 35.
Linvill, D. L., & Warren, P. L. (2020). Troll factories: Manufacturing specialized disinformation on Twitter. Political Communication, 37(4), 447–467.
U.S. Department of Justice. (2024). Justice Department disrupts covert Russian government-sponsored foreign malign influence campaign. https://www.justice.gov
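The abstract does not specify how key users are identified; one common approach in diffusion studies is to rank accounts by PageRank over a directed retweet graph, sketched below with a hypothetical edge list:

```python
# Sketch of surfacing "key users" via PageRank on a retweet network.
import networkx as nx

# Hypothetical edges: (retweeter, original_author) pairs from the corpus.
retweets = [
    ("user_a", "state_media_1"),
    ("user_b", "state_media_1"),
    ("user_b", "diplomat_1"),
    ("user_c", "diplomat_1"),
]

# Direct edges from retweeter to author so PageRank flows toward sources.
G = nx.DiGraph()
G.add_edges_from(retweets)

# High PageRank scores mark accounts that originate widely shared content.
scores = nx.pagerank(G, alpha=0.85)
for user, score in sorted(scores.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{user}: {score:.3f}")
```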

10:40am - 11:00am
The role of moral values in the social media debate
1Sony CSL - Paris, France; 2Enrico Fermi's Research Center, Italy; 3Sony CSL - Rome, Italy; 4Sapienza University of Rome, Italy; 5Complexity Science Hub Vienna, Austria
Social media platforms serve as digital arenas for public discourse, shaped by news providers, political entities, and user interactions. Within this space, leader-follower relationships influence debate dynamics, often affected by polarization, misinformation, and toxicity. Our research examines the role of moral values in shaping engagement and toxicity in online discussions. We focus on Moral Foundations Theory (MFT), which defines five moral dyads: care/harm, fairness/cheating, loyalty/betrayal, authority/subversion, and purity/degradation. We analyzed immigration-related tweets (2018–2022) from 516 Italian news providers and political figures, along with follower interactions. Using a fine-tuned deep learning model, we identified the primary moral dyad in each tweet and combined this information with toxicity scores from Google's Perspective API. Our findings show that fairness/cheating and authority/subversion correlate with engagement, while purity/degradation correlates with toxicity and care/harm anti-correlates with it. Community analysis based on retweets of moral content provided a finer-grained segmentation of the Italian political landscape than standard methods. The progressive or conservative leaning of political accounts within the same group aligned with their mentions of moral values, as expected under MFT. Finally, we found evidence of in-group bias, with followers engaging more toxically when interacting with out-group communities. These insights suggest that leveraging moral alignment between users and content from opposing communities could help design interventions that foster healthier discourse between polarized groups.
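A minimal sketch of the final correlation step, assuming the per-tweet moral dyad labels (from the fine-tuned classifier) and toxicity scores (from the Perspective API) have already been obtained; the data frame here is hypothetical:

```python
# Sketch: relate the presence of each moral dyad to per-tweet toxicity.
import pandas as pd
from scipy.stats import pointbiserialr

# Hypothetical per-tweet annotations (labels and scores precomputed).
df = pd.DataFrame({
    "dyad": ["care/harm", "purity/degradation", "fairness/cheating",
             "care/harm", "purity/degradation", "authority/subversion"],
    "toxicity": [0.10, 0.72, 0.35, 0.08, 0.66, 0.40],
})

# For each dyad, correlate its presence (0/1) with toxicity across tweets.
for dyad in df["dyad"].unique():
    indicator = (df["dyad"] == dyad).astype(int)
    r, p = pointbiserialr(indicator, df["toxicity"])
    print(f"{dyad:22s} r={r:+.2f} p={p:.2f}")
```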

11:00am - 11:20am
Unveiling emerging moderation dynamics in Mastodon's federated instance network
1Fondazione Bruno Kessler, Italy; 2Università della Calabria; 3Northeastern University London; 4Centro Studi e Ricerche "Enrico Fermi"
Mastodon, a decentralized online social network (DOSN), has experienced rapid user migration from traditional social media platforms. As a microblogging platform within the Fediverse, Mastodon operates through independent instances that communicate with each other. This decentralized nature reshapes network structures and alters information flow, presenting new challenges in moderation and the management of harmful content. This study investigates the relationship between Mastodon's friendship network, based on follow relationships, and its moderation mechanisms, which define inter-instance restrictions. By analyzing structural changes in this signed and directed network over a year, we identify evolving moderation actors while observing persistent large-scale patterns. The banning-banned network naturally divides into two groups: a majority of banned instances and a smaller, highly active minority responsible for most bannings. Using an information diffusion model, we analyze how these structures influence the spread of information. Our findings reveal that the minority group predominantly shares information internally, while the majority group demonstrates less cohesion. Additionally, cross-group information flow is asymmetrical, with the majority group becoming rapidly isolated, whereas the minority retains greater resilience in spreading information. An echo-chamber effect emerges, reinforcing the separation of the minority from untrusted instances. Understanding these mechanisms is critical to mitigating the spread of harmful content and fostering healthy, diverse digital ecosystems. This study provides insights into moderation dynamics in decentralized networks, offering implications for platform governance and information integrity.
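The abstract leaves the diffusion model unspecified; a standard choice for such analyses is the independent cascade model, sketched here on a synthetic directed instance network (graph, seed set, and spread probability are all illustrative assumptions):

```python
# Sketch of an independent-cascade simulation over a directed network.
import random
import networkx as nx

def independent_cascade(G, seeds, p=0.1, seed=0):
    """Each newly active node gets one chance to activate each successor."""
    rng = random.Random(seed)
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in G.successors(u):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

# Hypothetical follow network between instances.
G = nx.gnp_random_graph(200, 0.03, directed=True, seed=1)
reached = independent_cascade(G, seeds=[0, 1], p=0.1)
print(f"information reached {len(reached)} of {G.number_of_nodes()} instances")
```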

11:20am - 11:40am
Amplifying Extremism: Network Dynamics of Conspiratorial and Toxic Content in the Canadian Freedom Convoy Movement
1Dalhousie University, Canada; 2University of Western Ontario
Crises often fuel conspiracy theories, which can serve as radicalizing mechanisms and pathways to extremism. This was evident during the COVID-19 pandemic, as anti-vaccine groups advanced narratives suggesting the virus was engineered or that vaccines were a coordinated scheme between pharmaceutical companies and governments. The 2022 Canadian Freedom Convoy protests emerged from such sentiments, evolving into a movement where right-wing extremists played a key role in spreading conspiracy theories and mobilizing digital activism. This study examines how conspiratorial and extremist narratives propagate through online networks. Using discussions from three pro-convoy X (formerly Twitter) hashtags, we employ large language models to detect and classify conspiracy theories, analyzing how they spread through network structures. Moving beyond individual-level characteristics of conspiracy theorists, we investigate the role of network positions in amplifying toxic content. Applying community detection methods, we assess whether conspiratorial discourse is confined within echo chambers or bridges broader audiences. Additionally, we conduct network analysis of the most shared URL domains to evaluate their ideological bias and role in spreading conspiracy theories and right-wing extremism. By examining the lifecycle of conspiracy theories (tracking their reach, speed of proliferation, and engagement), we provide insights into the structural mechanisms that sustain digital extremism. This research highlights how online networks facilitate the amplification of toxic content, offering broader implications for understanding the intersection of digital activism, misinformation, and radicalization.
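One way to operationalize the echo-chamber question this abstract raises is to combine community detection with Krackhardt's E-I index; the sketch below uses Louvain communities on a synthetic retweet graph, which may differ from the authors' method:

```python
# Sketch: do conspiratorial retweets stay inside communities or bridge them?
import networkx as nx

# Hypothetical undirected retweet graph with two planted communities.
G = nx.planted_partition_graph(l=2, k=50, p_in=0.2, p_out=0.01, seed=7)

# Detect communities and build a node -> community lookup.
communities = nx.community.louvain_communities(G, seed=7)
membership = {n: i for i, c in enumerate(communities) for n in c}

# E-I index: (external - internal ties) / all ties; -1 means a perfect
# echo chamber, +1 means all engagement bridges communities.
external = sum(membership[u] != membership[v] for u, v in G.edges())
internal = G.number_of_edges() - external
ei_index = (external - internal) / G.number_of_edges()
print(f"E-I index: {ei_index:+.2f}")
```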