Conference Agenda

Overview and details of the sessions of this conference.

 
 
Session Overview
Session
(Symposium) Human, gender, and trust in AI ethics: addressing structural issues through the Ethical, Legal, and Social Aspects (ELSA) Lab approach
Time:
Wednesday, 25/June/2025:
5:00pm - 6:30pm

Location: Auditorium 6


Presentations

Human, gender, and trust in AI ethics: addressing structural issues through the Ethical, Legal, and Social Aspects (ELSA) Lab approach

Chair(s): Hao Wang (Wageningen University and Research, The Netherlands)

This panel suggests that there are two critical gaps that need to be addressed to develop responsible AI. The existing AI ethics literature often focuses on the first gap—the disconnect between ethical principles and design practices. This gap has led to criticisms of AI ethics as being either impractical (Hagendorff, 2020) or complicit in ethics washing (Wagner, 2018). Many studies have worked on bridging this first gap by figuring out how to turn ethical principles into concrete design tasks that AI designers can apply. Examples include approaches such as AI ethics by design (Brey & Dainow, 2023) or value sensitive design (VSD) of AI (Umbrello & Van de Poel, 2021). These aim to make AI ethics more actionable for designers.

However, we believe that a second, less-explored gap exists—one between design practices and the broader structural issues that shape them. Many structural issues are socio-political, rooted in existing unequal social and political structures. Others are more ontological in nature, relating to our basic understanding of the world and reality—our beliefs, mindsets, and assumptions. Current approaches to AI ethics, whether in the form of guidelines or ethics-by-design strategies, often miss the mark when it comes to tackling these structural issues. They usually focus on giving practical recommendations to individual developers or companies, which makes them actionable but also limits their scope by rarely addressing the bigger picture—issues that go beyond what developers or their organizations can implement (Ryan et al., 2024). Structural issues, on the other hand, are deeply connected to broader systemic problems, making them too complex and abstract to solve with ethical guidelines or ethics-by-design alone. To address these structural challenges, we need to examine the pervasive conceptual paradigms underpinning AI discourse, identify different power asymmetries and disadvantaged groups (e.g., women), and re-evaluate the merit of driving paradigms such as trustworthy AI.

In this panel, we propose the ELSA (Ethical, Legal, and Social Aspects) Lab approach as a promising way to bridge both the practical and structural gaps in developing responsible AI. The ELSA Lab is an “experimental, systemic approach in which Quadruple Helix stakeholders—academia, civil society, government, and industry—work together to experiment with strategies to address the ELSA aspects in AI (re)design” (Wang et al., 2025). The approach is procedural and experimental, allowing room not only to operationalize ethics in design practices but also to critically reflect on issues like power dynamics, anthropocentrism, and broader structural impacts.

The panel has three parts. First, we introduce the two key gaps in AI ethics and explain how the structural gap is often neglected despite its relevance to responsible AI. Next, we present three examples showing how structural issues shape AI design, covering power dynamics in trustworthy AI, underlying assumptions in human-centered AI (e.g., anthropocentrism, instrumentalism), and the influence of patriarchal structures. Finally, we bring the presentations together by considering the ELSA Lab approach and the opportunities it offers to address these two gaps.

 

Presentations of the Symposium

 

The power and emotions in trustworthy AI

Hao Wang
Wageningen University and Research

In AI ethics, trustworthy AI is widely pursued and often seen as inherently good and as adhering to a series of values (AI HLEG, 2019). However, this value-based framing can easily turn the ideal of trustworthy AI into a checklist of predefined criteria or principles, reducing it to a mere technical assessment or a proof of compliance. I illustrate two important things that this value-based view of trustworthy AI might miss.

First, it can obscure power dynamics. Drawing on Habermas’s critique of ‘technocracy,’ I will illustrate how this compliance-oriented understanding of trustworthy AI risks undermining its progressive goals and flattening structural solutions into techno-bureaucratic projects. This techno-bureaucracy reinforces, rather than challenges, the logic of continuous data expropriation and asymmetric power. Second, it misses the lived experiences of people who face challenges with AI every day. Trustworthiness is not just about abstract values like transparency or privacy; it is also about the emotions, fears, and frustrations people feel. Many people fear losing control, get angry over political manipulation, or worry about being replaced by machines. This erosion of trust reflects the broader disruptions to everyday life caused by AI.

Given all this, I would even argue that promoting trust in AI might not always be the best strategy. Sometimes, fostering a healthy skepticism—or even justified distrust—could be more important. This kind of distrust is not about eroding societal trust; it is about pushing for the conditions that actually deserve trust (Wang, 2022). In this way, distrust can be a pathway to challenging structural issues, holding AI accountable, and making our algorithmic society more trustworthy.

 

Addressing problematic conceptual assumptions about human-technology relations in AI development practices

Luuk Stellinga
Wageningen University and Research

New developments in artificial intelligence (AI) have generated widespread societal debate on AI technologies and their implications. While aimed at improving the moral state of AI, this debate runs the risk of being based on harmful worldviews, which could undermine its well-intentioned goals. Hermeneutic philosophy of technology offers a way to respond, as it allows for uncovering what is taken for granted about humans and technologies in contemporary discourses, such as the societal debate on AI. Following a hermeneutic approach, current AI discourse can be revealed to maintain an understanding of human-AI relations that is universalist (human experiences are viewed as universal), instrumentalist (human-AI relations are viewed as user-tool relations), and anthropocentric (humans are viewed as uniquely moral beings). Each of these assumptions prompts philosophical questions and can be problematized for various reasons. Anthropocentrism, for example, can be argued to overlook the moral status of nonhuman animals and the natural environment, and to lead to a disregard for the real and significant harms that AI can cause them (Bossert & Hagendorff, 2021; Van Wynsberghe, 2021).

In the context of this panel, the critical question is whether it is possible to address the problematic assumptions of universalism, instrumentalism, and anthropocentrism in concrete AI development practices. Such assumptions do not straightforwardly lead to specific design choices, as they operate at a broader level and interweave with systemic societal challenges, but they nevertheless frame the ways in which problems, methods, and solutions are articulated in AI development. Addressing these assumptions therefore requires not a simple intervention in the design process, but a structural rethinking of the development of AI technologies. The ELSA Lab research methodology provides an opportunity to consider how these problematic assumptions can be addressed. What is particularly promising about the ELSA Lab approach is that it views responsible AI development as an ongoing and dynamic process, wherein consideration of ethical, legal, and social aspects occurs throughout the development and deployment cycle. Besides this, it is grounded in a multi-level perspective on human-AI relations, acknowledging individual artifact issues, organizational issues, systemic issues, and ontological issues (Wang & Blok, 2025). As a result, ELSA Lab research provides a variety of points at which conceptual assumptions can be identified and critiqued. We consider the merits and implications of critical reflection on philosophical assumptions at the different steps and levels of the ELSA Lab approach.

 

AI, Gender, and Agri-food

Mark Ryan
Wageningen University and Research

In recent years, there has been a surge in research on the ethical, legal, and social aspects (ELSA) of artificial intelligence (AI) in agri-food (van Hilten et al., 2024). Much attention has been given to the impact on farmers when deploying AI on their farms and to its possible impact on non-human life and society (Ryan et al., 2021). Occasionally, the impact of AI on gender dimensions in agri-food is raised (Sparrow & Howard, 2020), but rarely in much detail and even less so concerning the structural dimensions underpinning these concerns (Ryan et al., 2024), which will be the focus of this presentation.

To begin with, the domain of agri-food and the discipline of computer science have traditionally been male-dominated. One may assume that adopting AI in agri-food will further catalyse and exacerbate gender challenges and concerns. There is the possibility that the digitalisation of agri-food will further disenfranchise women and push them out of the industry. This could harm diversity and inclusion in the sector. It could also harm the industry itself, which needs to attract more young farmers to replace an ageing, declining demographic of farmers.

Another significant structural concern in using AI in agri-food is the impact on women in the Global South. In the Global South, women make up as much as 80% of the agri-food workforce (Davies, 2023), whereas women make up only 30% of the workforce on farms in Europe and as low as 15% in Ireland (EU CAP Network, 2022). The figure is 26.4% in the US (USDA, 2024), with similar figures throughout the Global North.

AI and AI-powered robots and drones are expected to be deployed chiefly on wealthy, large, monocultural farms (Ryan, 2020). This may result in an increased digital divide between predominantly wealthy, male-dominated farms in the Global North and farms in the Global South, which are primarily worked by women.

This divergence in the use and benefit of AI in the agri-food sector creates a split between those who can benefit from these technologies and those who cannot, and it further disadvantages women in already precarious positions in the Global South. AI and AI-powered robots offer real potential to alleviate and reduce many of the dull, dirty, and dangerous jobs done by women in the Global South; however, if these women are priced out of such opportunities, this raises many justice concerns about disadvantage, fair distribution of resources and benefits, and inequality.

Many of these structural concerns and impacts are far-reaching. They cannot simply be addressed by creating AI ethics guidelines for organisations to follow or by trying to embed values into the design of AI models. This presentation aims to take a first step by identifying some of the structural gender challenges that the deployment and use of AI in agri-food raise. It will open the discussion on ways that approaches such as ELSA can help, alongside political will and effective policymaking.



 