Conference Program

Overview and details of the sessions of this conference.

Session Overview
Session
I.05: Navigating Techno-Futures in Education: Artificial Intelligence and/for Social Justice
Time:
Tuesday, 04/June/2024:
11:15am - 1:00pm

Location: Room 6

Building A, Viale Sant’Ignazio 70-74-76


Convenors: Leonardo Piromalli (IREF - Istituto di Ricerche Educative e Formative); Danilo Taglietti (University of Naples “Federico II”, Italy)


Presentations

Educational Robotics Timescapes: An Analysis of the EdTech Imaginary

Emiliano Grimaldi, Jessica Parola

University of Naples Federico II, Italy

In contemporary debates, AI and robotics are presented as technologies that will revolutionise the future of education. Promoted by an increasingly powerful industry, iterative cycles of hype and hope are fuelling an imaginary that makes their introduction into the field of education a ‘desirable necessity’. This presentation analyses this imaginary in order to understand the different educational timescapes that are enacted through it. Our analysis focuses on the envisioning of AI-based educational robotics within that industry. Theoretically, we draw on Taylor’s notion of imaginary and Kitchin’s analysis of digital timescapes to explore emerging forms of robotically mediated educational temporalities. Following Kitchin, we explore educational robotics timescapes by mapping out fluctuations in pace, tempo, rhythm and synchronicity. Accordingly, our research questions are:

  • What are the forms of temporality that are enacted in the imaginary of robotics in education?
  • What kind of pace, tempo, rhythm and synchronicity are distinctive of those forms of temporality?
  • What relations and ethics can be detected in those forms of temporality?

To address them, we analyse EdTech companies’ work of envisioning through a composite quantitative and qualitative methodology, mapping and interpreting the social making of the temporalities embedded in the emerging imaginary. Specifically, we combine Network Text Analysis (NTA), used to extract semantic networks/galaxies and to identify the influential pathways for the production of meaning within texts, with a qualitative interpretation of these networks through a time-conceptual grid inspired by Kitchin’s work on digital timescapes.

Methodologically, we selected a corpus of emerging EdTech companies providing AI-based robotics services, together with the related communication, marketing and technical materials available on their public websites. Data were initially extracted using the T-LAB software. The textual material was first normalized and then imported into Gephi. NTA, and specifically a community detection algorithm based on the Louvain method, was carried out to map distinct clusters. This procedure allowed us to explore specific semantic networks, which were then interpreted as time-conceptual cores.
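The pipeline described above (corpus collection, normalization, co-occurrence network construction, Louvain community detection) can be sketched in miniature. The snippet below is purely illustrative: it stands in for the T-LAB/Gephi workflow using the networkx library and an invented three-document toy corpus, not the authors' actual data or tooling. It builds a weighted word co-occurrence network and partitions it into communities, the mechanical analogue of the semantic clusters discussed here.

```python
# Illustrative sketch of a co-occurrence + Louvain pipeline (assumption:
# a stand-in for the T-LAB/Gephi workflow described in the abstract; the
# corpus below is invented for demonstration only).
from itertools import combinations

import networkx as nx

corpus = [
    "adaptive robots personalise the pace of learning",
    "automation promises efficiency and improvement in learning",
    "robots adapt automatically to each student",
]

# Build a weighted word co-occurrence network: words appearing in the
# same document are linked, and repeated co-occurrence increases weight.
G = nx.Graph()
for doc in corpus:
    words = sorted(set(doc.lower().split()))
    for a, b in combinations(words, 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Louvain community detection (networkx >= 2.8); each community is a set
# of words that can be read as a candidate semantic cluster.
communities = nx.community.louvain_communities(G, weight="weight", seed=42)
for i, community in enumerate(communities):
    print(f"cluster {i}: {sorted(community)}")
```

In the study itself, clusters obtained this way are then read qualitatively through Kitchin's time-conceptual grid; the sketch only reproduces the computational step.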

The presentation discusses five heterogeneous traits of an envisioned robotically mediated educational temporality that are enacted in the educational robotics imaginary. Specifically, the NTA allowed us to identify the centrality of five temporal concepts in the emerging educational robotics imaginary, namely potentiality, adaptiveness, automation, improvement, and efficiency, together with a set of related semantic networks. We will show how each of these semantic networks, combined with a qualitative interpretation of the texts, allows us to discuss the rhythms of such an envisaged temporality, the forms of calculation of time, the temporal relations that are designed, and the enacted modalities that establish a particular relation between present, past and future. Finally, we discuss how the various forms of temporality linked to the educational robotics imaginary have significant cultural implications for how educational time is mediated, embodied, placed and experienced by teachers and students. We also reflect on how this temporal envisioning can be related to similarly paradoxical educational problematisations, promises, solutions, and goals.



The Anthill Model of Collective Intelligence in AI systems: some critical concerns for Social Justice and Democratic Education

Pietro Corazza

University of Bologna, Italy

Recent developments in AI technologies are deeply transforming the processes of knowledge production and circulation, including learning. This intervention therefore starts by considering the following questions: “Where is the most significant and influential learning happening in our societies, and what kind of systems are undertaking learning? How is ‘our’ learning (as citizens, students, workers) intermingled with the ways that machines learn? Who is ultimately benefiting from the outcomes?” (Selwyn et al., 2020, p. 3).

Such questions will be addressed by referring to the concept of “collective intelligence”, which has recently acquired a significant role in the debate around AI, connected to the claim that machine learning systems would be able to generate an innovative and extremely powerful form of collective intelligence (Mulgan, 2018). However, the concept of collective intelligence is in itself quite vague; it is therefore crucial to analyse how it is interpreted and materialised in actual AI systems.

In particular, I will claim that today’s AI systems, and the tech companies that control them, in most cases appear to embody a conception of collective intelligence that could be defined as the “Anthill Model” (Corazza, 2022). This model consists of a system that, as a whole, exhibits intelligent behaviour, even though the individual participants who contribute to it remain largely unaware of the way the system functions and of the role they play in it. The fundamental objective of the anthill model is not the learning or the personal growth of its members, but rather the continuous improvement of the centralised processes of data collection and analysis, which are closely linked to strategies of economic exploitation of collective intelligence.

The prospect of the expansion and strengthening of such a model entails some deeply problematic implications in terms of social justice and democratic education. An education system coherent with the anthill model would not be oriented towards the promotion of critical and autonomous thinking; on the contrary, individuals would be encouraged to delegate more and more decisions to AI systems, since these are considered capable of elaborating knowledge that is ‘superior’ to that generated by human beings. Moreover, the increasing automation of manual and cognitive labour makes the majority of people appear less and less useful in the eyes of the anthill model: this would encourage providing high-quality education only to the minority entrusted with the management of digital platforms, while substantially reducing the investment in the education of the majority of the population, whose education no longer appears indispensable. The pursuit of such a vision would obviously outline a scenario in which inequalities widen dramatically.

The contribution will therefore conclude by attempting to sketch some answers to the following questions: is the scenario presented here an inevitable fate, or is it still possible to act in order to promote a different future? Is it possible to use digital technologies to design forms of collective intelligence that are conceived not as an anthill, but as a dialogic community?



Reframing AI in Education: A Social Justice Approach to Technological Mediations

Valeria Cesaroni

Università di Perugia, Italy

The current debate on the relationship between Artificial Intelligence and Education (AIED) revolves around pivotal themes such as the potential for personalized learning (Tapalova 2022), aiding educators in diversifying instructional materials for inclusivity (Mehta et al. 2023), and enhancing student performance assessment and monitoring, including predictive analysis (Mouta et al. 2023). These topics are often examined through two theoretical lenses: instrumentalism (Pitt 2014), which views technology as neutral, devoid of moral or political values and influence, and technological determinism, which attributes to technological artifacts an inherent power to steer innovation. Under these lenses, technologies are treated as mere teaching aids and are automatically associated with innovation, or with improving or worsening teaching (MIUR 2015). Closely related to this view is the idea that the only role envisaged for teachers is to be trained in the use of such technologies, within a markedly technicist perspective (Gonnet 2001) that neglects training on the ethical, social, and psychological impacts of technologies (Miao et al. 2021).

This contribution, drawing on a postphenomenological perspective on technology (Verbeek 2006, Winner 2017, Coeckelbergh 2019), aims to reframe the AIED debate within a social justice framework (Fraser 2008b). By understanding technologies as "political phenomena", fully comprehensible only by considering the sociopolitical assemblages of technologies, humans, organizations, and processes (Latour 2005, Winner 2014), it offers a deeper insight into the socio-technical transformations of education and argues for transformative democratic participation in the governance of these technologies (Giroux 1986).

To clarify this perspective, the contribution examines the concept of personalized learning, a term denoting various machine learning-based techniques, from customized interfaces to "adaptive tutors" and learning management systems (Bulger 2016). Current discourse often perceives these technologies as drivers for inclusive education, promoting equal access to learning opportunities and democratizing education. By exploring the operational characteristics and underlying epistemological assumptions of these technologies, this contribution argues that the promoted "personalization" aligns with a neoliberal governmental vision of knowledge and education (Foucault 1991) rather than with a democratic and inclusive view. The socio-technical imaginary of "one-to-one education" or "an Aristotle for every Alexander" (Hillis 2000) indeed embodies the educational ideal of an individual whose understanding would be objectively ensured through statistical data analysis, shaping education around a corporate, instrumental mindset rather than a democratic, inclusive ethos (Brisset, Mitter 2017).
Therefore, by conceptualizing technologies as socio-political mediators, the contribution will propose to understand this AI-based personalisation of education and learning as individualized standardization through the datafication of education, a concept that also sheds light on the evolving nature of the teaching profession.

Hence, the contribution emphasizes that an approach to AI grounded in socio-political mediation theory fosters the need for democratic governance of such technologies, advocating a social justice approach understood as participatory parity (Fraser 2008a) and calling on the entire educational system (Mitchell 2018) to devise transformative methods for the democratic utilization of these technologies, which could serve as a valuable support for inclusion if governed democratically and responsibly.



Augmented Teachers for Augmented Students: Preparing Educators and Innovating Education for a Symbiotic Future with AI

Cristina Maria Roberta Pozzi

Edulia, Italy

Enhancing humans’ physical and cognitive abilities is an ancient dream, rooted in our culture, that has shaped the imagination and development of computers and AI.

Since computers appeared in society and in the workplace, many have expressed the desire to exploit their potential in the educational field, giving rise to new tools and teaching approaches with a dual objective: to best prepare the workers of tomorrow and to increase the productivity and personalization of the educational sector (1, 2). With the increasing discussion about the digital school, debates framed from the companies' point of view, rather than that of the educating community, are flourishing.

However, education cannot be measured in terms of productivity.

It is a complex relationship that aims to give individuals the opportunity to grow, to develop attitudes and skills, and to build the character needed to understand the world in which they are situated, locally and globally, thus enabling future citizens to exercise their freedom within the space and time they live in (3).

Based on these considerations, we can hypothesize which tools to use in the classroom and how to use them.

When we talk about AI, and genAI in particular, there are still grey areas that call for a slow pace: hallucinations, biases, security, privacy, copyright and incompleteness are just some of the issues that risk making these tools harmful, both from a learning point of view and in terms of social justice. To this is added the problem of accessibility: to the tool itself and to the knowledge needed to use it. As hoped for by Engelbart (4), we must aspire to a system with adequately trained humans embedded. That is why some scholars are proposing to slow AIED (5).

It is critical to balance the use of AI in education so as to protect individuals and their growth paths, prevent inequalities, and prepare youngsters to face the present and the future, including a dimension that is now part of our lives.

The question arising is epochal: how can pedagogical models be rethought to make them fit for digitally augmented students and their need for a «bilingual» brain able to tackle both analogue and digital contexts (6)?

Drawing on our experiences in Edulia since 2021, and the initiatives we've embarked upon, I aim to explore and provide insights into some potential solutions for this complex issue. We have encountered numerous lessons and challenges that have shaped our approach. We will delve into these learnings to outline possible directions and answer the difficult questions at hand. As we still navigate uncertain waters, having only just begun to skim the surface of the impact of digital media on brains (especially the symbiosis with AI), we can only proceed with caution. The answer must emerge incrementally from an active approach that focuses on the two terms of the relationship: students and teachers, or, better said in this context, the new augmented students and their need to interact with trained augmented teachers.



Digital Citizenship and Data Literacy. The Challenges of the Artificial Intelligence Era

Veronica Punzo

Università di Pisa, Italy

The future is a promise, unpredictable by definition; in the face of some tangible and manifest trends, however, we can learn how to adapt our choices and organize our behaviors to move forward, and even shape what the future holds (Margiotta, 2019).

Artificial Intelligence, with its benefits and threats, is currently the innovation that impacts the present more than any other.

The AI-driven technological revolution is characterized by an unusual speed of technical progress and the pervasiveness of its use in every aspect of political and social life.

In particular, the techniques that support Generative AI (GenAI) can influence real and virtual contexts through predictions, suggestions, and decisions based on human-defined goals.

Within this framework lies the relationship between AI and education (UNESCO, 2019), which can be articulated in three dimensions:

- educating with AI, using it to support teachers and to interact with students;

- educating AI, empowering the programmer in the training of the model and inserting criteria that allow the algorithm to act in a fair manner;

- educating to AI, developing critical thinking and promoting knowledge and use of AI languages and logic (Panciroli & Rivoltella, 2023, pp. 7-9).

The issue is crucial in the European Commission's policy agenda, as reflected in the "Digital Education Action Plan" (2021-2027) and in the "Digital Compass 2030: The European Model for the Digital Decade".

By adopting the "School Plan 4.0", which interprets the European frameworks on computational thinking, AI, and robotics in learning, Italy has relaunched the school curriculum to include the field of digital technologies, with a focus on the implications of AI for the education of students and teachers.

A new literacy is required to affirm the inescapability of human intentionality as a key factor in interaction with AI, and to guide teachers and students towards the correct and fruitful use of the languages associated with the flourishing of new technologies.

To reach this new literacy goal, educational action must sensitize teachers and students to the issues of ownership and protection of the data included in the information collected and re-used by AI applications. This is necessary to confront digital educational poverty: digital skills are the new alphabets (Pasta & Rivoltella, 2022) needed in postmedial society to analyze and master the production and enjoyment of different digital contents, and their acquisition is aimed at the full exercise of rights and the enjoyment of equal opportunities, in both the digital and the physical dimension, promoting every potential and resource for autonomous and individual growth.

The proposed case study relates to the experimental introduction of AI as a discipline within the Computing and Telecommunications track, taught for one hour a week in the fourth and fifth years at the Marconi-Pieralisi Secondary High School in Jesi (AN, Italy).

The survey, still in progress, follows a qualitative approach and focuses on some transversal and recurring focal points relating to the development of plural competencies in science, technology, and engineering, but also to the intersection between AI and the humanistic disciplines around education to rights and social responsibility.



Non-humans at School. From Blackboards to Robots

Assunta Viteritti1, Letizia Zampino2, Leonardo Piromalli3

1Sapienza -Università di Roma, Italy; 2University of Trento, Italy; 3IREF (Istituto Ricerche Educative e Formative)

Educational places are relational and material spaces (Nespor, 2012; Roehl, 2012; Fenwick and Landri, 2014; Landri and Viteritti, 2016; Viteritti, 2020), unstable and changing sociomaterial settings (Orlikowski, 2007) produced by the interaction between humans and non-humans. This entanglement is inextricable and requires conceptual tools beyond the humanist perspective of education (Braidotti, 2013; Ferrante, 2016; Landri, 2018). The relational field is enacted within sociomaterial ensembles that shape and distribute effects and consequences in time and space. Humans are part of these entanglements and are contained and produced by and with them: specific and plural co-acting interweavings that cannot be analysed a priori as matters of fact, but only in their relational and performative emergence. In the educational field, spaces, objects and technologies have long played a minor role, but the need to encourage a change of perspective has recently arisen. Humans and non-humans are analysed as reticular textures, heterogeneous networks that act and incorporate changes in practices and policies. The missing masses placed at the centre by Latour (2006) demand a voice and co-construct educational action in every sphere. Looking at objects, material and digital, does not mean considering them as substitutes for the human, but as participants in human action, capable of intervening in the premises and consequences of actions. This paper is in line with STS (Science and Technology Studies) and the study of materiality in education (Fenwick & Edwards, 2012; Decuypere, 2019; Gorur et al., 2019). In this vision, space, desks, blackboards, and technological objects such as computers, electronic registers, interactive whiteboards, platforms, and robots are placed at the centre of the analysis, as a glance at materiality enlightens and makes visible the articulated and dense daily work of educational worlds.
The power of these material relations acts on many levels: on policies, since there is a widening, multiplication and differentiation of the institutional arenas that place the private actors of the technology market in the central zone; on curricula, which incorporate new knowledge and new standards; on everyday learning practices, increasingly influenced by the relationship with objects and devices; on spaces, continuously subject to structural and cultural transformations and arrangements; on management processes; and so on. By adopting the perspective of sociomateriality, it is possible to observe how these objects establish and prescribe old and new moralities, social and political orders, power relations, and new and old forms of inequality and (in)justice. The contribution proposes an empirical reflection on a plurality of objects that act simultaneously, feeding, amplifying, diverting and transforming educational action. We will see in action more traditional objects, such as the blackboard and the class register, following their digital transformation, and new objects that are helping to redefine the educational environment in schools and universities: robots and digital platforms. In the conclusions, we present some theoretical reflections on how this object-centred perspective could enrich interdisciplinary research in education.