Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Session
(Papers) Epistemology I
Time:
Friday, 27/June/2025:
10:05am - 11:20am

Session Chair: Maaike Eline Harmsen
Location: Auditorium 4


Presentations

The Sullenberger case and the epistemic role of simulations and digital twin technologies

Laura Crompton

University of Vienna, Austria

On 15 January 2009, a US Airways plane made an emergency landing on the Hudson River. After a bird strike, both engines failed and the plane lost thrust. It is because of the decision made by the experienced pilot Chesley Sullenberger that all passengers and crew survived an accident that could have had disastrous consequences. After the accident, the US National Transportation Safety Board (NTSB) ran multiple simulations to analyse and evaluate Sullenberger's decision - a standard procedure for establishing how emergency protocols might need to be adapted to certain situations. The simulation is used, in a sense, as computer evidence of what could and should have been done differently.

I believe that this has important implications for the epistemological status of such simulations and digital twins (DT): while on the one hand they are (mainly) built and implemented to make predictions, i.e. to process possibilities and 'mights' of what could happen if we act in a certain way, on the other hand they are often treated as a representation of reality or factual evidence. Simulations and digital twin technologies have inherent limitations in their ability to represent reality, yet decision-makers often treat their results as definitive truths rather than probabilities or approximations. In this paper, I aim to look into the complex relationship between human decision-making and simulation-based analysis.

There is a challenging dynamic between the descriptive and the prescriptive character of simulations and DT. It is along these lines that we have to ask whether simulations and DT can and should replace real-world experience as a basis for analysis, especially with regard to human decisions.



(In)visibility reductions: a feminist epistemology critique of online ‘shadowbanning’

Mariam Al Askari

Independent, United Arab Emirates

Social media platforms like Facebook, YouTube or Reddit are online information environments where users can share and engage with content. Platforms have employed various moderation practices to limit certain behaviours and curb the spread of content considered harmful or misleading. Paradigmatic approaches are the removal of content and the suspension of accounts, along with a notice informing affected users of the changes made. In this paper, I address a less apparent form of platform governance: reductions in content or profile visibility that are neither disclosed by platform hosts, nor explicitly verifiable by those affected. Examples include delisting, downranking, and hashtag blocking, which critics refer to more generally as ‘shadowbanning’. Are shadowbans conducive to healthier epistemic environments online? Under what conditions does this form of moderation become unethical? In this paper, I distinguish two categories of shadowbanned content and I argue that this moderation technique is unethical when applied to one of them, namely because it bars genuine and well-intended users from fully engaging in knowledge-producing processes online. To support this view, I draw on feminist epistemology work by Lorraine Code, Helen Longino, Amanda Menking and Jon Rosenberg, as well as texts on trust and human-computer interaction.

I begin by discerning two types of shadowbanned content. The first comprises content that targets user vulnerabilities. It is produced by ‘bad actors’ who intend to harm or exploit vulnerabilities (e.g. click-bait or trolling), but also by users who instrumentalise vulnerabilities for unethical or unsafe ends (e.g. content about or depicting self-harm). The second category comprises content that neither targets nor relies on user vulnerabilities for its purposes, but may still cause discomfort or discord in users. The content in this category covers sensitive themes or less mainstream views, but may still be genuine and well-intended (e.g. certain political discourse, or content depicting violence in order to denounce it). There are a number of reasons why platforms shadowban content instead of leaving it up or banning it outright, which I also discuss in detail. In section 2, I present a feminist epistemology conception of knowledge that shifts the focus from epistemic products to processes. On this view, greater trust, transparency, and inclusive epistemic practices are what support healthy epistemic environments. Sophisticated moderation methods are not incompatible with this framework, so long as they encourage continuous, genuine, and ‘situated’ epistemic inquiry. In section 3, I use this theoretical framework to assess whether shadowbanning makes for healthier epistemic environments. I explain how it does for one content category, but not for the other. This differentiation could support the design of better moderation systems: ones that can discern when user vulnerabilities are intentionally targeted, or when content is provocative for ulterior motives (e.g. maximising views) versus when it just happens to be so.



Epistemological imbalances in assessment of surveillance technologies: what CCTV cameras show us

Blas Alonso

University of Twente, The Netherlands

Surveillance technologies are used worldwide as mechanisms to safeguard security and maximize citizen wellbeing but, as a consequence, privacy is often negatively affected as a trade-off. A paradigmatic example of a surveillance technology that tries to contribute to society by making it safer is the CCTV (Closed-Circuit Television) camera, which is often used for crime prevention. But CCTV systems have the small inconvenience of not being especially effective at preventing crime (citation). Why, given that it is not obvious how they contribute to general wellbeing, do we still invest millions in their development? In this paper, I will argue that the reasons for the adoption of surveillance technologies are often based on biased evidence that comes from the overrepresentation of certain values when assessing how these technologies contribute to wellbeing. In other words, wellbeing can be improved by realising different values in society (safety, autonomy, privacy, freedom of speech, etc.), but the influence of a technology on some of these values is easier to prove than the impact it has on others. This epistemological imbalance can be found in CCTV cameras: the cameras’ contributions to security are easily demonstrated by comparing crime statistics for conflictive areas over time, but the impact that cameras have on the privacy of citizens is often overlooked. Compared to numbers and crime statistics, the methods for evaluating CCTV’s impact on privacy (interviews, case studies, etc.) are less “objective” and do not make good headlines.

This epistemological imbalance is not sufficient to justify why we keep installing CCTV cameras, as it is not even clear that they help prevent crime; rather, the overrepresentation of the “goodness” of CCTV cameras in crime statistics and quick advertising is easily weaponized by political parties and interested stakeholders. CCTV cameras are a powerful means of instigating feelings of insecurity among the population, making them a tool of political mobilization in difficult times. To conclude the paper, we point out that this imbalance in proving a technology’s influence on the different values that contribute to wellbeing may be a general issue among surveillance technologies, which often carry the same structure of trade-offs between security and privacy.



 
Conference: SPT 2025