ID: 334 / MCI-Demo Session: 1
MCI: Demos: Interactive Systems or Demonstrators
Human-Computer-Interaction: Prototyping and Interaction Modelling, User Experience Design, Tangible Interaction, Haptics, Touch and Gestures, Digital Humanities and UX, Gamification and Serious Games, Affect, Aesthetics, Emotion, Inclusion, Fairness, Ethics
Keywords: Information Design, Physical Artifacts, Accessibility
(be)greifbar – A Physical Interaction Object for Conveying Abstract Relationships
Margit Rosenburg
Hochschule Merseburg, Germany
This research develops an analog interaction object that makes the complex effects of artificial intelligence (AI) in sociotechnical systems on the three dimensions of ecology, society, and economy visible and tangible. The goal is to foster critical awareness of the multilayered trade-offs in sociotechnical systems. The demonstrator uses a balance metaphor in which a fictional company falls into an "imbalance", physically represented by an unstably mounted game board. By implementing AI and compensating measures, players can restore the balance and, in doing so, haptically experience the complex interactions between the three dimensions. A field study (n=16) showed consistently positive responses to the balance metaphor. After overcoming an initial interaction threshold, participants developed high motivation and creative solution approaches, leading to intensive dialogues and varied reflections. The work contributes to communication by showing how metaphors in physical interaction objects can make multidimensional relationships accessible. It also contributes to the discourse on Digital Diversity: the analog implementation creates new communication spaces in which people without high digital affinity or access can participate in socially relevant discussions about digitalization and AI.
ID: 331 / MCI-Demo Session: 2
MCI: Demos: Interactive Systems or Demonstrators
Human-Computer-Interaction: Mobile and Ubiquitous Interaction, Assistive Technologies, Applications, Health and Wellbeing
Keywords: pose estimation, mobile fitness, digital health, mobile interaction, home workout systems
WorkoutBuddy: AI-Based Virtual Fitness Coach for Home Workouts
David Schwarz1,2, Tobias Breitenauer1,2, Daniel Matt2
1University of Augsburg, Germany; 2Technical University of Applied Sciences Augsburg, Germany
We introduce WorkoutBuddy, a mobile-based virtual fitness coach that supports home workouts through AI-powered exercise tracking and real-time feedback. Our system leverages the ubiquity of smartphones to make guided exercise accessible to everyone, including users with limited mobility, older adults, or those unable to visit gyms. Using only a phone's built-in camera, the app detects user movements in real time, providing feedback on repetitions and exercise execution. No extra hardware or setup is required, drastically lowering cost and effort barriers. The UX is intentionally simple, including tap-to-start, pose detection via on-device models, and multimodal feedback. This demonstration highlights how inclusive design and mobile AI can expand access to digital fitness experiences.
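As an illustration of the kind of on-device processing such a coach needs, the sketch below counts repetitions from pose keypoints with a simple angle-threshold state machine. The joint names, thresholds, and the assumption of a MediaPipe-style landmark stream are ours; the abstract does not specify WorkoutBuddy's actual model or logic.

```python
# Hypothetical sketch: counting squat repetitions from pose keypoints.
# Assumes an upstream pose estimator (e.g. MediaPipe-style) yields (x, y)
# coordinates for hip, knee, and ankle each frame; thresholds are invented.
import math

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by segments b->a and b->c."""
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0]) -
        math.atan2(a[1] - b[1], a[0] - b[0]))
    return abs(ang) if abs(ang) <= 180 else 360 - abs(ang)

class RepCounter:
    """Two-state machine: a rep = dropping below the 'down' threshold
    and returning above the 'up' threshold."""
    def __init__(self, down_deg=100.0, up_deg=160.0):
        self.down_deg, self.up_deg = down_deg, up_deg
        self.is_down, self.reps = False, 0

    def update(self, hip, knee, ankle):
        angle = joint_angle(hip, knee, ankle)
        if not self.is_down and angle < self.down_deg:
            self.is_down = True            # user reached the bottom position
        elif self.is_down and angle > self.up_deg:
            self.is_down = False           # back up: one full repetition
            self.reps += 1
        return self.reps
```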
ID: 306 / MCI-Demo Session: 3
MCI: Demos: Interactive Systems or Demonstrators
Human-Computer-Interaction: Interaction Techniques, Wearable and Nomadic Computing
Keywords: Breathing, Physiology, Biosignals, Respiration, Embodied interaction
BreathClip: A Wearable Respiration Sensor for Interaction Design
Iddo Wald1, Amber Maimon2,3, Shiyao Zhang1, Rainer Malaka1
1University of Bremen, Germany; 2University of Haifa, Israel; 3Ben Gurion University, Israel
We present BreathClip, a lightweight, clip-on respiration sensor that enables accessible, embodied interactions based on respiration. The sensor captures subtle torso movements using an IMU and streams breathing data wirelessly in real time. BreathClip attaches easily to everyday clothing with either a clip or magnet, avoiding the discomfort and setup challenges of commonly used belts. With low-cost off-the-shelf hardware, and open-source software, it lowers the barrier for integrating live breathing signals into interaction design. Inspired by the theme of Digital Diversity, we demonstrate an interactive multi-user demo that visualizes each participant’s unique breathing pattern as an evolving digital trace, highlighting how diverse inner signals can shape shared interactive experiences. BreathClip enables harnessing physiological signals for richer, more versatile interaction design.
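A minimal sketch of how a breathing rate might be derived from such an IMU stream, assuming one acceleration axis roughly follows chest expansion; the band-pass range and peak spacing are assumed values, not BreathClip's published pipeline.

```python
# Illustrative breathing-rate estimate from a single IMU acceleration axis.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def breaths_per_minute(accel_z, fs=50.0):
    """accel_z: 1-D array of acceleration samples recorded at fs Hz."""
    # Keep only the 0.1-0.7 Hz band (~6-42 breaths/min), removing the
    # gravity offset and higher-frequency motion noise.
    b, a = butter(2, [0.1 / (fs / 2), 0.7 / (fs / 2)], btype="band")
    breathing = filtfilt(b, a, accel_z)
    # Each inhalation appears as one peak; enforce >= 1.5 s spacing.
    peaks, _ = find_peaks(breathing, distance=int(1.5 * fs))
    duration_min = len(accel_z) / fs / 60.0
    return len(peaks) / duration_min if duration_min > 0 else 0.0
```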
ID: 281 / MCI-Demo Session: 4
MCI: Demos: Interactive Systems or Demonstrators
Human-Computer-Interaction: Tangible Interaction, Haptics, Touch and Gestures
Keywords: tangibles, inflatables, pneumatic, tools, methods
HugSense: Exploring the Sensing Capabilities of Inflatables
Klaus Stephan, Maximilian Eibl, Albrecht Kurze
TU Chemnitz, Germany
What information can we get using inflatables as sensors? While using inflatables as actuators for various interactions has been widely adopted in the HCI community, using the sensing capabilities of inflatables is much less common. Almost all inflatable setups include air pressure sensors as part of the automation when pressurizing or deflating, but the full potential of those sensors is rarely explored. This paper shows how to turn a complete pillow into a force sensor using an inflatable and a simple pneumatic setup including an air pressure sensor. We will show that this setup yields accurate and interesting data that warrants further exploration, and we elaborate on the potential for practical applications.
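The underlying idea can be sketched as a simple calibration problem: pressing on a closed air volume raises its internal pressure, so force can be estimated from the pressure rise. The linear model and the example readings below are illustrative assumptions, not the authors' calibration procedure.

```python
# Sketch of the pressure-to-force idea behind an inflatable used as a sensor.
import numpy as np

def calibrate(pressures_pa, known_forces_n):
    """Fit force = k * (p - p_rest) + f0 from a few known weights."""
    p = np.asarray(pressures_pa, dtype=float)
    f = np.asarray(known_forces_n, dtype=float)
    k, f0 = np.polyfit(p - p[0], f, 1)   # linear model is an assumption
    return lambda pressure: k * (pressure - p[0]) + f0

# Example calibration: pressure read at rest and under 10 N / 20 N loads.
to_force = calibrate([101_300, 101_450, 101_600], [0.0, 10.0, 20.0])
print(round(to_force(101_520), 1), "N")  # interpolated force estimate
```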
ID: 298 / MCI-Demo Session: 5
MCI: Demos: Interactive Systems or Demonstrators
Human-Computer-Interaction: Prototyping and Interaction Modelling, User Experience Design, Interaction Techniques, Human-Robot Interaction, Multimodal Interfaces, Applications, CSCW and Social Computing, Digital Humanities and UX
Keywords: Multilingual communication, embodied conversational agent, humanoid social robot interpreter, speech translation, human-robot interaction, real-time translation, interaction design
Worldhat: A Humanoid Social Robot Interpreter for Multilingual Dyadic Conversations
Sandra Müller, Martin Feick, Alexander Mädche
Karlsruhe Institute of Technology, Germany
Language barriers continue to hinder non-native speakers’ access to essential public services, where effective communication is crucial. While mobile translation apps are common, they often fall short in socially complex settings due to the lack of non-verbal cues, disrupting conversational flow and weakening interpersonal connection. This demonstration introduces Worldhat, a humanoid robot interpreter that supports real-time multilingual communication through speech translation and embodied interaction. Built on the Furhat SDK, Worldhat integrates speech recognition, machine translation, and speech output, simulating the social behaviors of a human interpreter. It highlights how embodied interaction can enhance translation systems. Future work includes automatic language detection and deeper exploration of robot embodiment.
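Conceptually, the interpreter alternates between listening in one language and speaking in the other. The sketch below outlines that turn-taking loop; recognize_speech, translate, and speak are hypothetical placeholders, not calls from the Furhat SDK.

```python
# Conceptual interpreter loop for a dyadic conversation: listen in the
# current speaker's language, translate, speak to the listener, then
# hand over the turn. All helper functions are placeholders.
def recognize_speech(language: str) -> str: ...
def translate(text: str, source: str, target: str) -> str: ...
def speak(text: str, language: str) -> None: ...

def interpret_dyad(lang_a: str, lang_b: str, turns: int = 10) -> None:
    speaker, listener = lang_a, lang_b
    for _ in range(turns):
        utterance = recognize_speech(speaker)           # ASR in speaker's language
        if utterance:
            rendered = translate(utterance, speaker, listener)
            speak(rendered, listener)                   # embodied speech output
        speaker, listener = listener, speaker           # turn-taking
```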
ID: 259 / MCI-Demo Session: 6
MCI: Demos: Interactive Systems or Demonstrators
Human-Computer-Interaction: Prototyping and Interaction Modelling, Interaction Techniques, Virtual, Mixed and Extended Realities
Keywords: Virtual Reality, Breathing Interaction
Breathe Me Up, Scotty: A Virtual Reality Free-Fall Experience to Explore Breathing Rate as a Measure and Interaction Modality in Stressful Situations
Niklas Pfützenreuter1,3, Daniel Zielasko2, Uwe Gruenefeld3
1University of Duisburg-Essen, Germany; 2Trier University, Germany; 3GENERIO, Germany
Measuring the breathing rate makes it possible to draw conclusions about the user's current mental stress level or physical load. This enables novel opportunities to enhance interactions, treating breathing as an additional input modality. It can support applications designed to actively reduce stress, promote mindfulness, and improve overall well-being. However, it is still unclear how breathing rate can be used as an input modality for adaptive interfaces, especially with the goal of achieving greater mental stress reduction. This demo takes a first step in this direction by analyzing how the breathing rate changes when users are exposed to mental stress in an immersive setup. We isolate mental stress from other types of stress by creating a Virtual Reality free-fall tower application that triggers mental stress through a feeling of height. We selected a seated scenario to minimize the user's movement as a source of sensor noise, a known limitation of many current sensors. We aim to use the measured data to create a Cross-Reality system that uses breath-controlled transitions between stressful and relaxed environments to enable faster stress reduction.
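One plausible way such breath-controlled transitions could work is a dwell rule: once the measured breathing rate stays below a relaxation threshold for a few seconds, the system switches to the calm environment. The threshold and timing below are assumptions for illustration, not values from the demo.

```python
# Illustrative breath-controlled scene transition (dwell-based rule).
class BreathTransition:
    def __init__(self, relaxed_bpm=10.0, hold_seconds=5.0):
        self.relaxed_bpm = relaxed_bpm    # assumed relaxation threshold
        self.hold_seconds = hold_seconds  # required dwell time below it
        self.calm_time = 0.0

    def update(self, breathing_rate_bpm, dt):
        """Call once per frame; returns True when the calm scene should load."""
        if breathing_rate_bpm <= self.relaxed_bpm:
            self.calm_time += dt
        else:
            self.calm_time = 0.0          # relaxation streak broken
        return self.calm_time >= self.hold_seconds
```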
ID: 261 / MCI-Demo Session: 7
MCI: Demos: Interactive Systems or Demonstrators
Human-Computer-Interaction: Interaction Techniques, Multimodal Interfaces
Keywords: image captioning, interactive machine learning, contextualisation, personalisation
CUTIE: A human-in-the-loop interface for the generation of personalised and contextualised image captions
Aliki Anagnostopoulou1,2, Sara-Jane Bittner1, Lavanya Govindaraju1, Hasan Md Tusfiqur Alam1, Daniel Sonntag1,2
1DFKI Deutsches Forschungszentrum für Künstliche Intelligenz, Germany; 2Carl-von-Ossietzky Universität Oldenburg, Applied Artificial Intelligence, Germany
Image captioning is an AI-complete task that bridges computer vision and natural language processing. Its goal is to generate textual descriptions for a given image. However, general-purpose image captioning often does not capture contextual information, such as information about the people present or the location where the image was shot. To address this challenge, we propose a web-based tool that leverages automated image captioning, large foundation models, and additional deep learning modules such as object recognition and metadata analysis to accelerate the process of generating contextualised and personalised image captions. The tool allows users to create personalised and contextualised image captions efficiently. User interactions and feedback given to the various components are stored and later used for domain adaptation of the respective components. Our ultimate goal is to improve the efficiency and accuracy of creating personalised and contextualised image captions.
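As a rough illustration of the contextualisation and feedback-logging steps, the sketch below enriches a generic caption with people and location metadata and stores the user's correction for later adaptation; the class, field names, and example data are hypothetical, not CUTIE's implementation.

```python
# Hypothetical sketch of caption contextualisation plus feedback logging.
from dataclasses import dataclass, field

@dataclass
class CaptionSession:
    feedback_log: list = field(default_factory=list)

    def contextualise(self, generic_caption, people=None, location=None):
        parts = [generic_caption.rstrip(".")]
        if people:
            parts.append("showing " + " and ".join(people))
        if location:
            parts.append("in " + location)
        return ", ".join(parts) + "."

    def record_feedback(self, proposed, corrected):
        # Pairs of (model output, user edit) can later feed domain adaptation.
        self.feedback_log.append({"proposed": proposed, "corrected": corrected})

session = CaptionSession()
draft = session.contextualise("A group of hikers on a ridge",
                              people=["Anna", "Ben"], location="the Harz mountains")
session.record_feedback(draft, "Anna and Ben hiking a ridge in the Harz mountains.")
```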
ID: 330 / MCI-Demo Session: 8
MCI: Demos: Interactive Systems or Demonstrators
Human-Computer-Interaction: User Experience Design, Mobile and Ubiquitous Interaction, Virtual, Mixed and Extended Realities, Gamification and Serious Games, Health and Wellbeing
Keywords: breathing training, immersion, digital health, mobile health intervention, biofeedback, breathing detection
Demonstrating BREEZE-VR: A Gamified Virtual Reality Biofeedback Breathing Training to Strengthen Mental Resilience and Reduce Acute Stress
Tobias Kowatsch1,2,3, Lola Jo Ackermann2, Helen Galliker2, Yanick Xavier Lukic4
1Institute for Implementation Science in Health Care, University of Zürich, Zürich, Switzerland; 2School of Medicine, University of St.Gallen, St.Gallen, Switzerland; 3Department of Management, Technology, and Economics, Eidgenössische Technische Hochschule Zürich, Zürich, Switzerland; 4Institute of Computer Science, School of Engineering, Zurich University of Applied Sciences, Winterthur, Switzerland
Noncommunicable diseases (NCDs) pose a significant challenge to public health and the economy, underscoring the need for effective strategies in prevention and management. Evidence shows that slow-paced breathing holds promise in strengthening mental resilience and reducing acute stress, both relevant for the prevention and management of NCDs. Biofeedback can help individuals adopt slow-paced breathing patterns, thereby enhancing the effectiveness of breathing training. Gamified immersive environments may further amplify these effects. Against this background, we developed BREEZE-VR, a gamified virtual reality biofeedback breathing training. In this paper, we describe the development of the first prototype of BREEZE-VR and provide a video clip demonstrating its use. During the conference, attendees are invited to try out BREEZE-VR. In our future work, we will evaluate the effectiveness and acceptability of BREEZE-VR in a laboratory study.
ID: 299 / MCI-Demo Session: 9
MCI: Demos: Interactive Systems or Demonstrators
Human-Computer-Interaction: User Experience Design, Interaction Techniques, Human-Robot Interaction, Health and Wellbeing, Affect, Aesthetics, Emotion
Keywords: Human-Robot Interaction, Active Behavior, Zoomorphic Robot
DoggoRoomie - Examining Zoomorphic Robot Interactions For Promoting Active Behavior In A Comfortable Setting
Anika Bork, Christopher Kröger, Ivana Žemberi, Lars Hurrelbrink, Srujana Madam Sampangiramu, Yuliya Litvin, Rachel Ringe, Bastian Dänekas, Rainer Malaka
University of Bremen, Germany
This demo explores how zoomorphic social robot interactions can be designed to encourage physical activity. By employing animal-like nudges with varying levels of intrusiveness, the robot prompts people to adopt more active behaviors. The aim is to develop an innovative approach to tackling the widespread challenge of sedentary lifestyles.
ID: 275 / MCI-Demo Session: 10
MCI: Demos: Interactive Systems or Demonstrators
Human-Computer-Interaction: Interaction Techniques, Mobile and Ubiquitous Interaction
Keywords: Virtual reality, human-in-the-loop, research, gesture recognition
Integrating Human Feedback in VR – A Human-in-the-Loop Approach to Real-Time Gesture Recognition
Mathias Haimerl1,2, Andreas Riener1
1Human-Computer Interaction Group (HCIG), Faculty of Computer Science, Technische Hochschule Ingolstadt, Germany; 2Johannes Kepler Universität Linz, Austria
Studies in virtual reality (VR) often suffer from missing participant feedback due to limited sensor data available on VR devices. However, some experiments require direct feedback to be evaluated in the VR scene immediately, which tends to be challenging. Our setup coordinates hardware and software components to provide a feedback loop from participants' gestures into the VR scene, so entities can react to people's actions while data is simultaneously collected from all connected systems. This demo showcases a study setup including (a) a central data broker with low- and high-frequency data storage, (b) a gesture recognition and publishing system, (c) a VR system to collect entity movement data and voice transcriptions, and (d) a command-line interface for managing the system. This should facilitate study execution by limiting waiting times for participants and improving the overall data collection process.
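To make the event flow concrete, the sketch below shows how a recognized gesture might be timestamped and pushed to a central broker so the VR scene and the loggers share one event stream. The JSON-over-UDP transport, endpoint, and message fields are assumptions; the paper does not specify the actual protocol.

```python
# Illustrative gesture-publishing sketch: recognized gestures are
# timestamped and sent to a hypothetical broker endpoint as JSON.
import json, socket, time

BROKER_ADDR = ("127.0.0.1", 9000)       # hypothetical broker endpoint

def publish_gesture(label: str, confidence: float, sock: socket.socket) -> None:
    event = {
        "type": "gesture",
        "label": label,                 # e.g. "thumbs_up"
        "confidence": confidence,       # recognizer score in [0, 1]
        "timestamp": time.time(),       # for alignment with high-frequency logs
    }
    sock.sendto(json.dumps(event).encode("utf-8"), BROKER_ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
publish_gesture("thumbs_up", 0.93, sock)
```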
ID: 271 / MCI-Demo Session: 11
MCI: Demos: Interactive Systems or Demonstrators
Human-Computer-Interaction: Mobile and Ubiquitous Interaction, Interaction with Embedded and Ambient Systems, Learning Technology
Keywords: HCI, Human-Computer Interaction, Home, Smart Home, Internet of Things, IoT, Sensor Data
Sensorkit: A Toolkit for Simple Sensor Data in Research, Education and Beyond
Albrecht Kurze, Christin Reuter, Andy Börner
TU Chemnitz, Germany
The interaction with and implications of simple sensor data play an increasingly important role in the design and use of smart interactive systems, e.g. in the home. We present an updated version of our Sensorkit, a tool for collecting, processing, visualizing, and interacting with simple sensor data. The Sensorkit is an open tool that comes with everything needed 'in a box' and works 'out of the box'. We present recent new features, such as new sensors and an integrated annotation tool that is aimed at lay users but also flexibly supports researchers. We show the Sensorkit's great flexibility as a research tool for exploratory and creative sensor uses and for privacy research in the home. We have also gained valuable experience with educational uses in a wide variety of settings. We hope to inspire similar or other creative uses in research, education, and beyond.
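As a toy example of the kind of logging and lay-user annotation such a toolkit supports, the sketch below appends sensor readings and free-text annotations to one time-stamped CSV file; the file layout and field names are illustrative, not the Sensorkit's actual format.

```python
# Illustrative logging of simple sensor events with optional annotations.
import csv, time

LOG_FILE = "sensorkit_log.csv"          # hypothetical log file

def log_row(sensor: str, value: float, annotation: str = "") -> None:
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow([time.time(), sensor, value, annotation])

log_row("door_contact", 1.0)                          # raw sensor event
log_row("temperature_c", 22.5)
log_row("door_contact", 0.0, "came home from work")   # lay-user annotation
```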
ID: 318 / MCI-Demo Session: 12
MCI: Demos: Interactive Systems or Demonstrators
Human-Computer-Interaction: Interaction Techniques, Tangible Interaction, Haptics, Touch and Gestures
Keywords: Tangible user interfaces, Multi-touch interaction, Map navigation, Multi-user
Setting the Stage for Collaboration: A Multi-View Table for Touch and Tangible Map Interaction
Erich Querner1,2, Philipp Ewerling1, Martin Christof Kindsmüller2
1Interactive Scape GmbH, Germany; 2Technische Hochschule Brandenburg, Germany
We present a multi-user map application based on a novel multi-view concept that enables simultaneous and independent interaction with shared geospatial content. Each user operates a personal View Finder and Focus View, color-coded for clarity, while a shared, immutable Context View provides a common reference frame. The system supports both touch-based and tangible interaction techniques, including gesture control, virtual joysticks, and physical objects. Users can flexibly arrange their workspaces on a multi-touch table, supporting both individual exploration and collaborative tasks.
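The core geometric mapping can be sketched as follows: a View Finder rectangle placed on the shared Context View is converted into the map extent rendered in that user's Focus View. The coordinate conventions and example numbers are illustrative assumptions, not the system's implementation.

```python
# Illustrative mapping from a View Finder rectangle (table pixels on the
# Context View) to the geographic extent shown in a personal Focus View.
def finder_to_extent(finder_px, context_extent, context_size_px):
    """finder_px: (x, y, w, h) in pixels on the Context View.
    context_extent: (lon_min, lat_min, lon_max, lat_max) shown there."""
    x, y, w, h = finder_px
    lon_min, lat_min, lon_max, lat_max = context_extent
    width_px, height_px = context_size_px
    sx = (lon_max - lon_min) / width_px     # degrees per pixel (x)
    sy = (lat_max - lat_min) / height_px    # degrees per pixel (y)
    return (lon_min + x * sx, lat_min + y * sy,
            lon_min + (x + w) * sx, lat_min + (y + h) * sy)

# A 200x150 px finder at (400, 300) on a 1920x1080 Context View of Europe.
print(finder_to_extent((400, 300, 200, 150), (-10.0, 35.0, 30.0, 60.0), (1920, 1080)))
```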
ID: 290 / MCI-Demo Session: 13
MCI: Demos: Interactive Systems or Demonstrators
Human-Computer-Interaction: Haptics, Touch and Gestures, Gamification and Serious Games, Fairness, Ethics
Keywords: deceptive patterns, dark patterns, countermeasures, interactivity
Whack-a-Pattern: Fighting Deceptive Patterns With a Hammer
René Schäfer, Lennart Becker, Adrian Wagner, Kevin Fiedler, Paul Miles Preuschoff, Sophie Hahn, Jan Borchers
RWTH Aachen University, Germany
Deceptive patterns are interface design strategies that manipulate users in their decision-making against their best interests. They are common in websites and apps, cost users time, money, or personal data, and often lead to anger and frustration. This makes sensitizing people about them increasingly important. This is the goal of Whack-a-Pattern. Inspired by the classic arcade game Whack-a-Mole, Whack-a-Pattern has players shatter as many deceptive patterns as they can using a (soft) physical hammer while sparing fair designs, in a fast-paced, high-energy game. Whack-a-Pattern approaches the rather dystopian topic of deceptive patterns from a new and irreverent, fun perspective. It is accompanied by a version of the game that can be played online to educate and spark interest in deceptive patterns among the research community and beyond.
ID: 280 / MCI-Demo Session: 14
MCI: Demos: Interactive Systems or Demonstrators
Human-Computer-Interaction: Tangible Interaction, Digital Humanities and UX, Reflection and Perspectives: Individual and Society
Keywords: Arts and humanities, Human computer interaction, Hacking
Write Again(st) the Machine. Reanimating a GDR-Era Typewriter as a Reflective Interface for Human-AI Dialogue
Karola Köpferl, Albrecht Kurze
TU Chemnitz, Germany
We present a hacked 1980s East German typewriter repurposed as a screenless interface for large language models (LLMs) such as ChatGPT. Typed prompts are transmitted via a WiFi-enabled microcontroller; responses return slowly, audibly, and irreversibly, letter by letter, in ink on paper. There is no screen, no cursor, no delete key. This deliberately frictional interaction reimagines AI not as a seamless productivity tool, but as an embodied, material encounter. Demonstrated in public settings, the system transforms digital language into a tangible trace, sparking reflection, curiosity, and intergenerational conversation. Its mechanical rhythm and nostalgic presence stand in stark contrast to contemporary AI norms. In the context of Chemnitz as European Capital of Culture 2025, the project becomes both a technical prototype and a site of cultural inquiry. It links the city’s computing legacy with today’s algorithmic imagination: a conversation piece for the age of artificial intelligence.
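The interaction loop can be sketched as: collect a typed prompt, query the model over WiFi, and drive the typewriter one character at a time with a deliberate delay. query_llm and type_character are hypothetical placeholders for the API call and the print-mechanism actuation, which the abstract does not detail.

```python
# Conceptual sketch of the typewriter's prompt/response loop.
import time

def query_llm(prompt: str) -> str:
    raise NotImplementedError("HTTP call to the language-model API goes here")

def type_character(ch: str) -> None:
    raise NotImplementedError("actuate the typewriter's print mechanism here")

def respond_on_paper(prompt: str, chars_per_second: float = 4.0) -> None:
    reply = query_llm(prompt)
    for ch in reply:
        type_character(ch)                  # ink on paper, no undo
        time.sleep(1.0 / chars_per_second)  # slow, audible, deliberate pace
```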