Session
OS-9: Beyond detection: disinformation and the amplification of toxic content in the age of social media
Presentations
8:00am - 8:20am
Cognitive Warfare on Social Networks
Centro Universitário FEI, Brazil
Cognitive warfare is an expanding phenomenon, operating within the domain of the human mind, that is revolutionizing how information is shared and interpreted on social networks. The dissemination of disinformation and fake news, driven by advanced technologies, particularly generative artificial intelligence, heightens the complexity of these strategies, whose primary objective is to manipulate individuals' thoughts and behaviors. The analysis of this emerging form of warfare and its global implications guides this study, which aims to provide valuable insights for the formulation of public policies to counter threats to the psychosocial expression of democracy resulting from the amplification of such influence on social platforms. To achieve this aim, a mixed-methods approach was employed, incorporating exploratory and systematic literature reviews, case studies, and semi-structured interviews. This research stands out not only for the innovative nature of its subject matter but also for its complexity, evidenced by the integration of knowledge from diverse fields. The findings reveal that the manipulation of information and public perceptions on social networks is a central weapon of cognitive warfare, exploiting brain vulnerabilities to influence emotions, shape behaviors, and destabilize democratic societies. Recent events illustrate how this dynamic unfolds, exposing significant vulnerabilities in the psychosocial expression of democracy. In light of the growing influence of cognitive warfare on social media and its global ramifications, this study offers valuable contributions toward the development of public policies. Moreover, its recommendations are concretely applicable, outlining a set of actionable measures.
8:20am - 8:40am
A Data-Driven Adaptive Approach to Supporting Fact-Checking and Mitigating Mis/Disinformation Through Domain Quality Evaluation
1Fondazione Bruno Kessler, Italy; 2University of Trento, Italy
Misinformation spreads rapidly on social media, harming public debate, democracy, and social stability. To address this concern, we propose a real-time machine learning system that predicts website trustworthiness by analyzing 48 domain features, including PageRank, Domain Authority, and Spam Score. Our system achieves a mean absolute error of 0.12, demonstrating high accuracy in estimating website reliability. It adapts to the changing online environment and enables researchers, media agencies, and policymakers to identify suspicious domains more efficiently. Existing tools such as NewsGuard and Media Bias Fact Check offer valuable assessments but cover only a limited number of sites and often require paid subscriptions. These constraints make large-scale efforts to reduce misinformation challenging. Our approach overcomes these limitations by using a comprehensive dataset of domains gathered from six expert sources. Missing data were handled using advanced imputation, and a unified trustworthiness score between 0 and 1 was assigned to each site. Higher scores indicate more reliable domains, while lower scores suggest questionable sources. We follow a streamlined workflow that includes data preparation, splitting, model training, performance evaluation, and new domain assessment. Because the system updates as the internet evolves, it supports fact-checking organizations, social media platforms, and other stakeholders in making timely, informed decisions about untrustworthy sites. Fact-checkers can focus on highly suspicious domains, and social media companies can label or prioritize content more effectively. In summary, our system provides a scalable, adaptive, and cost-effective solution for evaluating domain credibility, meeting an urgent need to mitigate the spread of misinformation.
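A minimal sketch of a workflow of this kind, assuming scikit-learn-style tooling; the model family (a random forest), the imputation method, and the data below are illustrative placeholders, since the abstract does not specify the authors' choices:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import KNNImputer
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for the real dataset: 48 domain features (e.g. PageRank,
# Domain Authority, Spam Score) with some values missing, plus an
# expert-derived trustworthiness score in [0, 1].
X = rng.random((1000, 48))
X[rng.random(X.shape) < 0.1] = np.nan   # simulate missing data
y = rng.random(1000)                    # simulated expert scores

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Stand-in for the "advanced imputation"; the abstract does not name the method.
imputer = KNNImputer(n_neighbors=5)
X_train = imputer.fit_transform(X_train)
X_test = imputer.transform(X_test)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))

# New-domain assessment: score a freshly crawled feature vector.
new_domain = imputer.transform(rng.random((1, 48)))
print("predicted trustworthiness:", model.predict(new_domain)[0])
```

Because the imputer and model can simply be refit on updated crawls, a pipeline of this shape can keep scoring newly observed domains as the web evolves.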
8:40am - 9:00am
A temporal-network perspective on the longitudinal analysis of online coordinated behaviour
1IT University Copenhagen; 2Uppsala University, Sweden
Network-based methods have proven successful in detecting coordination on social media, for example in identifying deceptive attempts to increase the visibility of social media posts and websites. The typical approach consists of constructing networks of social media accounts, where edges indicate a sign of possible coordination (for example, the simultaneous sharing of the same URL by two accounts), and then looking for dense subnetworks (communities) that potentially indicate coordinated botnets. In this approach, time is fundamental, as it is used to select and/or weight edges; yet little attention has been dedicated to the longitudinal aspects of coordination. In this work we examine the suitability of different temporal network analysis methods for the longitudinal study of online coordination, where the network is split into temporal slices and communities are traced over time. In particular, we focus on two methodological questions: (1) How should the network be sliced so that analytically interesting communities emerge? Here we consider calendar-based, data-based, and community-based slicing. (2) Which temporal community detection methods provide the most informative results? Here we test both multilayer methods (which simultaneously look for communities across all slices) and layer-by-layer methods (which look for communities in individual slices and trace their temporal evolution). For our empirical study we use a climate-change-communication dataset of 64,500 Facebook posts, published between 2023 and 2024 by 470 manually selected actors (294) and counter-actors (176) active in the climate debate. Our results confirm the intuition, recently suggested in the literature, that a longitudinal approach can find more traces of coordination and reveal dynamic patterns that are otherwise lost in a static analysis. At the same time, we raise questions about the validity of existing design choices and their consequences, and we suggest associated guidelines. A noteworthy result of our experimental study is the observation that the most common community detection approach used in the literature to identify coordination risks identifying groups for which there is no strong evidence of actual coordination, and that this problem is exacerbated when a longitudinal perspective is adopted.
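The layer-by-layer variant described above can be illustrated with a short sketch; the co-sharing window, the calendar-based slicing, and the toy share records are assumptions for demonstration, not the paper's parameters:

```python
from collections import defaultdict
from datetime import datetime
from itertools import combinations

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy (account, url, timestamp) share records standing in for real data.
shares = [
    ("a1", "http://example.org/x", datetime(2023, 5, 1, 10, 0)),
    ("a2", "http://example.org/x", datetime(2023, 5, 1, 10, 2)),
    ("a3", "http://example.org/x", datetime(2023, 5, 1, 10, 3)),
    ("a1", "http://example.org/y", datetime(2023, 6, 2, 9, 0)),
    ("a2", "http://example.org/y", datetime(2023, 6, 2, 9, 1)),
]

WINDOW = 600  # co-shares within 10 minutes count as "simultaneous" (assumed)

# Calendar-based slicing: one network layer per month.
slices = defaultdict(list)
for account, url, ts in shares:
    slices[(ts.year, ts.month)].append((account, url, ts))

for key in sorted(slices):
    layer = nx.Graph()
    by_url = defaultdict(list)
    for account, url, ts in slices[key]:
        by_url[url].append((account, ts))
    # Edge = two accounts sharing the same URL within the window;
    # edge weights count repeated co-shares.
    for posts in by_url.values():
        for (u, tu), (v, tv) in combinations(posts, 2):
            if u != v and abs((tu - tv).total_seconds()) <= WINDOW:
                w = layer.get_edge_data(u, v, default={"weight": 0})["weight"]
                layer.add_edge(u, v, weight=w + 1)
    if layer.number_of_edges() == 0:
        continue
    # Layer-by-layer community detection; tracing communities across
    # slices (the matching step) is omitted here.
    comms = greedy_modularity_communities(layer, weight="weight")
    print(key, [sorted(c) for c in comms])
```

A multilayer method would instead feed all slices into one joint community detection step rather than detecting and matching per slice.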
9:00am - 9:20am
How algorithms recommend political content on social networks
1Sciences Po Paris, France; 2Institut des Systèmes Complexes de Paris Ile-de-France CNRS, France
Recommendation algorithms are widely used on social media platforms and play a crucial role in shaping users' information environments. These algorithms leverage traces of online behavior, such as likes, shares, and social ties, to predict user preferences. Given that such behaviors correlate with political attitudes, to what extent do recommendation algorithms learn and leverage users' political leanings? And how would recommendations change if algorithms were prevented from using this information? Using algorithmic explanation methods, we identified political information within the representation learning space of recommendation models. We then manually reduced the influence of this information and examined the effects on political bias and content diversity. In a case study based on URL-sharing data from X (formerly Twitter), we trained a recommendation algorithm and used political attitude estimates to analyze how political patterns emerge in the model's hidden layers. Our findings show that even when trained solely on behavioral data, the algorithm captures users' political traits, steering recommendations toward partisan content. We observed that ignoring political information effectively reduces political bias in recommendations but also diminishes content diversity, as recommendations shift toward mainstream sources. This study proposes a method to assess the impact of political information in recommendation systems, opening new avenues for understanding the role of recommenders in shaping and amplifying online political discourse.
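One common way to realize the probe-and-ablate idea sketched in this abstract is linear concept erasure; the following is a minimal illustration under that assumption, with simulated embeddings, and is not the authors' actual explanation or ablation method:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated stand-ins: 64-d user embeddings from a trained recommender and
# external estimates of political leaning (0 = left, 1 = right), with a
# political signal planted along one embedding direction.
E = rng.normal(size=(500, 64))
leaning = (E[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)

# 1. Linear probe: how recoverable is leaning from the embeddings?
probe = LogisticRegression(max_iter=1000).fit(E, leaning)
print("probe accuracy:", probe.score(E, leaning))

# 2. Reduce the political information by projecting every embedding onto
#    the orthogonal complement of the probe's direction.
w = probe.coef_[0] / np.linalg.norm(probe.coef_[0])
E_ablated = E - np.outer(E @ w, w)

# 3. Leaning should now be much harder to recover.
probe2 = LogisticRegression(max_iter=1000).fit(E_ablated, leaning)
print("post-ablation probe accuracy:", probe2.score(E_ablated, leaning))

# Recommendations recomputed from E_ablated could then be compared with
# the originals on political bias and content diversity.
```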
9:20am - 9:40am
Studying information segregation on YouTube: Structural differences in the recommendation graph
1Institute for Advanced Study in Toulouse, France; 2University of Stuttgart, Germany; 3University of Washington, USA; 4University of Oklahoma, USA
The impact of YouTube's recommendation algorithm has been the subject of debate. Case studies and anecdotal evidence have suggested that algorithmic recommendations may lure the platform's users into rabbit holes where they are exposed to biased content and misinformation. Empirical research on rabbit holes, however, has been challenging to conduct because of the difficulty of separating the effects of the algorithm from user behavior. In this study, we examine algorithmic recommendations separately from user behavior through a comparative network approach. For a range of political and non-political issues, we collect data on which videos the YouTube algorithm recommends after a given video has been watched. The resulting networks of YouTube recommendations between videos on different issues are then compared on structural network characteristics. If the algorithm creates rabbit holes, one would expect the recommendation networks of conspiratorial videos to be characterized by stronger modularity. To test this hypothesis, we analyzed 15,455 videos and 154,311 recommendation links on 40 topics, each belonging to one of four main categories: news, science, conspiracy, and non-controversial. On the macro level, we find that conspiratorial video networks are small, sparse, and centralized, with their density partly explained by lower view counts and network size. They often link to leftist partisan channels, though no more than other networks do. The sentiment of their videos is negative on average, as in news networks but unlike science and non-controversial networks. Finally, on the micro level, we find that sentiment does not predict in-degree; recommendations are driven mainly by view count.
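As an illustration of the comparative-network step, the sketch below computes the kind of structural statistics being compared (size, density, centralization, modularity); the toy graphs, the community detection method, and the centralization proxy are assumptions, not the paper's measures:

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities, modularity

def structure_report(g: nx.DiGraph) -> dict:
    """Structural statistics for one topic's recommendation network."""
    und = g.to_undirected()
    comms = louvain_communities(und, seed=0)
    indeg = nx.in_degree_centrality(g)
    return {
        "nodes": g.number_of_nodes(),
        "edges": g.number_of_edges(),
        "density": round(nx.density(g), 4),
        # Simple centralization proxy: gap between the most-recommended
        # video and the average in-degree centrality.
        "centralization": round(
            max(indeg.values()) - sum(indeg.values()) / len(indeg), 4
        ),
        "modularity": round(modularity(und, comms), 4),
    }

# Toy stand-ins for two topic networks (edge: video -> recommended video).
conspiracy = nx.gnp_random_graph(50, 0.02, directed=True, seed=1)
news = nx.gnp_random_graph(300, 0.01, directed=True, seed=2)

for name, g in [("conspiracy", conspiracy), ("news", news)]:
    print(name, structure_report(g))
```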