
Human-technology interactions are simultaneously becoming more immersive and more mobile. Consequently, discomfort and sickness induced by the use of these technologies have emerged as critical challenges. From traditional motion sickness to visually induced motion sickness (VIMS), cybersickness, and discomfort from everyday mobile device use, the spectrum of technology-induced symptoms continues to grow. This workshop explores the underlying causes of these issues, sensing techniques, and real-time mitigation strategies for intersensory discomfort in mobile extended reality (XR) contexts. Additionally, we discuss how tools based on artificial intelligence (AI) could enhance user comfort by detecting and classifying discomfort and personalizing mitigation techniques. Participants will contribute by presenting specific challenges related to various forms of discomfort or motion sickness in their work. By discussing these challenges and exploring possible solution strategies through interdisciplinary collaboration in breakout groups, the workshop aims to bring together researchers interested in the topic, foster exchange on current issues, and inspire ideas that participants might pursue further in their work.
Schedule
09:00 | Speed dating
09:15 | Keynote by Stephen Brewster
10:30 | Coffee break |
11:00 | Presentations of position papers |
12:30 | Lunch break |
13:30 | Input from organizers on cybersickness sensing and mitigation |
14:30 | Demo session, including "Haptic Vest for Motion Cues to Mitigate Motion Sickness" by Katharina Pöhlmann and "Visual Techniques to Mitigate Cybersickness in VR" by Luca Hilbrich
15:00 | Coffee break |
15:15 | Design sessions on specific problems in breakout groups |
16:30 | Presentations of design session outcomes |
17:00 | Wrap-up |
Participation
We invite you to submit a 2–4 page position paper (using the ACM two-column format) that shares your research in mobile XR and highlights specific problems or challenges you’ve encountered related to discomfort and sickness. Each submission will be reviewed, and you’ll receive constructive feedback from at least one of the workshop organizers. Submissions do not have to be anonymous.
If your paper is accepted, it will be published on our workshop website and help shape the conversations during the event. We also encourage you to bring along any prototypes or demos related to your work for our hands-on session—it’s a great chance to get feedback, spark meaningful discussions, and inspire new ideas.
Important Dates
Submission deadline: June 20, 2025
Acceptance notification: July 11, 2025
Camera ready: August 1, 2025
Contributions
You’ll find the accepted position papers here.
Organizers

Luca Hilbrich is a Ph.D. student at the Berlin University of Applied Sciences (BHT). He completed his master’s degree in Audio Technology at the Technical University of Berlin, focusing on the parallels between visual and auditory perception through cross-modal correspondences in his thesis. Currently, Luca’s research centers on advancing multimodal sensory illusions using haptics, graphics, and audio technologies. His work aims to alleviate cybersickness and enhance user comfort in extended reality applications as part of the "Real-time Intersensory Discomfort Compensation" research project.

Yannick Weiss is a Ph.D. student at LMU Munich. His research interests surround haptics in and for extended reality, from understanding the underlying processes of haptic perception to creating and adapting haptic experiences for virtual and augmented reality interfaces. He received a master’s degree in Human-Computer Interaction from LMU Munich in 2021 and is currently working on the research project "Real-time Intersensory Discomfort Compensation", where he is investigating real-time sensing and mitigation of sensory conflicts.

Katharina Pöhlmann is a postdoctoral research fellow in the Human-Computer Interaction group in the School of Computing Science at the University of Glasgow. Her research focuses on the detection and mitigation of cyber- and motion sickness using multisensory cue strategies. Katharina received her PhD from the University of Lincoln in 2022 and has since worked as a postdoctoral researcher at the University of Glasgow and the KITE Research Institute. She has served as Paper Chair at AutoUI 2024 and Mensch & Computer 2024 and as General Chair at VIMS 2025.

Zhanyan Qiu is a Ph.D. student at the University of Glasgow. His research focuses on virtual reality technology and mitigating motion sickness in in-vehicle VR applications. He is dedicated to designing innovative visual cues for VR that reduce the occurrence of motion sickness during in-vehicle VR use, while simultaneously minimizing potential distractions caused by these visual cues.

Kevin Fiedler is a Ph.D. student at RWTH Aachen University, where he previously received a master’s degree in computer science in 2023. His current research focuses on improving accessibility in extended reality and the real world by utilizing extended reality and multimodal input techniques, including mitigating motion sickness during in-car mixed reality use.

Albrecht Schmidt is a professor of computer science at Ludwig-Maximilians-Universität (LMU) in Munich, where he holds a chair for Human-Centered Ubiquitous Media. His teaching and research interests include human-centered artificial intelligence, intelligent interactive systems, ubiquitous computing, multimodal user interfaces, and digital media technologies. He studied computer science at the University of Ulm (Germany) and at Manchester Metropolitan University, where he wrote his M.Sc. thesis in 1996, titled "A modular neural network architecture with additional generalization abilities for high dimensional input vectors". In 2003, he completed his Ph.D. at Lancaster University. Albrecht chaired the technical program of ACM SIGCHI 2014, serves on the editorial board of the ACM TOCHI journal, and is a co-founder of the ACM conferences TEI and Automotive User Interfaces. In 2018, he was inducted into the ACM SIGCHI Academy, and in 2020 he was elected to Leopoldina, the German National Academy of Sciences.

Katrin Wolf is a professor of Human-Computer Interaction at the Berlin University of Applied Sciences and Technology (BHT). Her research interests lie at the intersection of human-computer interaction and interaction design, focusing on how to make novel technologies more usable and useful. To date, Katrin’s research has focused on technologies and domains including mobile and wearable systems; virtual, augmented, and mixed reality; as well as interactive exhibitions. Katrin is actively involved in the research community. Most notably, she was General Chair of the German conference Mensch & Computer (MuC) 2019, Program Chair at TEI 2021, AHs 2019, and MUM 2018, as well as Late-Breaking Work Chair at CHI 2021 and Student Research Competition Chair at CHI 2019 and CHI 2020.

Stephen Brewster is a Professor of Human-Computer Interaction in the School of Computing Science at the University of Glasgow. There, he leads the Multimodal Interaction Group, which is very active and has a strong international reputation in HCI (http://mig.dcs.gla.ac.uk). His research focuses on multimodal HCI, or using multiple sensory modalities and control mechanisms (particularly audio, haptics, and gesture) to create a rich, natural interaction between human and computer. His work has a strong experimental focus, applying perceptual research to practical situations. A long-term focus has been on mobile interaction and how we can design better user interfaces for users who are on the move. Other areas of interest include VR/AR, wearable devices, and in-car interaction. He pioneered the study of non-speech audio and haptic interaction for mobile devices, with work starting in the 1990s. He was a General Chair of CHI 2019 in Glasgow, CHI papers chair in 2013 and 2014, and has previously chaired MobileHCI, EuroHaptics, and TEI. He is a member of the ACM SIGCHI Academy, an ACM Distinguished Speaker, and a Fellow of the Royal Society of Edinburgh.