ISTP 2026 Conference
“Theorizing in Dark Times – Art, Narrative, Politics”
June 8 – June 12, 2026 | Brooklyn, NY, USA
Conference Agenda
Overview and details of the sessions of this conference. Select a date or location to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).
Session Overview
Symposium: Are Dark Times Looming for Universities? Student Dialogue with AI Chatbots and Its Implications for Learning and Critical Education
Presentations
AI chatbots, such as ChatGPT, are introducing a new form of dialogue to learning and education. Students are increasingly conversing with AI devices to develop their learning and understanding. While this offers new learning opportunities, it also poses risks, such as exacerbating inequalities, intensifying instrumental modes of learning, and fostering one-sided and reduced ways of understanding. This symposium will examine how students learn with generative AI and what this new form of dialogue means for the activity of learning. The focus will be on preceding learning: how students formulate questions, interpret responses, and construct meaning through dialogue with chatbots. We will also explore how students understand and engage with chatbot responses as a novel form of digitally generated text in their learning activities. Furthermore, we will explore how an AI-driven “dialogue” risks narrowing student agency, in contrast to activist dialogical learning spaces that expand it. Building on participatory, subjectivity- and world-centered conceptions of learning, the symposium will contribute to rethinking learning and education in the context of AI by analyzing ways to engage with AI critically and constructively in the practice of learning in higher education, as well as how AI can connect with a transformative developmental pedagogy instead of reinforcing instrumental modes of learning.

Presentations of the Symposium

Cultivating Relational Socratic Ignorance (RSI) in Learning with Generative AI

Research shows that Relational Socratic Ignorance (RSI) can be mobilized as a critical pedagogical stance when students and educators engage with generative AI. RSI is developed from our long-term work on preceding learning and friction in cultural learning ecologies, where human sense-making is always shaped by material and technological relations.
Rather than treating AI as a neutral instrument for learning, RSI foregrounds how every educational encounter with large language models involves interpretive negotiations between students’ culturally situated learning histories and the algorithmic logics of the models themselves. In this perspective, students’ ignorance is not a deficit but a relational awareness: a recognition that what we do not know is always defined within specific cultural, disciplinary, and technological frames. By cultivating this awareness, students can learn to identify how their prompts, questions, and interpretations are co-structured by both human and machine. The friction that emerges when these interpretive frames collide becomes a site for reflection and critique, opening possibilities for a more conscious and dialogical engagement with technology. RSI thus contributes to critical education by inviting educators and students to sustain curiosity through relational not-knowing rather than algorithmic certainty. It offers a way to keep open the question of what counts as relevant knowledge when learning with AI, and of how educational practices can remain ethically and culturally responsive in a technologically mediated world.

From Moral Panic to Transformative Activist Pedagogy: The Role of Student Agency in Reimagining Education in an Era of AI

In these dark times, an unprecedented uproar over generative AI in education has fueled strident debates between boosters, who envision a technological revolution, and knockers, who warn of an impending doomsday, resulting in polarized calls to either hastily adopt or outright ban generative AI tools. Amidst this moral panic, students have already taken up these writing tools in their coursework even as institutions scramble to develop policies for responsible AI use.
While this debate has foregrounded ethical concerns such as integrity, fairness, equity, and accessibility, it has tended to sideline their connection to different forms of pedagogy. This paper seeks to contribute to the debate about the use of generative AI tools in higher education by bringing student voices from a U.S. community college into dialogue with their psychology professor, exploring how their experiences with different modes of instruction shape their motives, understanding, and stance toward AI writing tools. As confronting concerns over fraudulent written assignments has become inescapable and reformulations of assessment tasks ensue, engaging students in this dialogue not only brings much-needed student perspectives to these issues but also counters the risk that these concerns will cement both outdated deficit views of non-traditional college students and traditional, top-down forms of teaching and learning. The paper concludes with a discussion of how a transformative activist pedagogy that positions students as agents of their learning and of other community and social practices can create meaningful contexts for the use of AI tools.

Tentacular Learning and the Missing Word-to-World Connection of AI-Generated Text

Learning and knowing unfold through dialogue with oneself, with others, and with the learning matter. Generative AI systems, such as ChatGPT, constitute a new form of dialogue. Today, many learners, including university students, use generative AI in their learning activities. They use it not only for practical, operative aspects, such as searching for and accessing learning materials, but also for creative, content-related, and world-engaging learning, such as explaining concepts, interpreting text passages, or providing insights into particular aspects of the world.
Building on a theory of learning as a transformative, worlding, and tentacular practice, this talk explores the significance of dialogue with AI chatbots in human learning. The focus is on the system’s responses and the specific kind of synthetic text generated by AI. Are these responses meaningful and knowledgeable statements, or merely strings of words produced by sophisticated statistical calculations with no word-to-world connection? The paper explores how AI systems generate their responses and reflects on the implications of this novel kind of text for tentacular, world-engaged learning and knowing.

