Multimodal + Education + UX/UI

M.Y. Harmony in Motion

A Multimodal Interface for Guided Yoga Exploration that provides practitioners with feedback on correct posture to maximize results and minimize the risk of injury

DURATION:

1 Month

SKILLS:

Haptics, Design Research, Prototyping, UI/UX, Learning

TEAM:

Helena Linder Miñambres, Saga Oldenburg

M.Y. is a prototype developed to enhance solo yoga practice by integrating multimodal feedback: visual, auditory, and haptic cues. The approach addresses common challenges faced by people practicing yoga alone, such as uncertainty about pose execution, transition timing, and effective breathing. The ultimate goal is a supportive environment that fosters confidence and safety during practice.

Research Objectives

The evaluation sought to answer the following research questions:

RQ1: How effective is the device in guiding users through yoga poses with a combination of multimodal feedback compared to visual-only instruction?

RQ2: To what extent do users feel confident while learning and performing a yoga pose with the device's guidance?

RQ3: How safe do users feel during practice, and to what extent does the interface contribute to the feeling of safety?

DESIGN

M.Y. is a wearable guide that uses computer vision to provide real-time visual, haptic, and auditory feedback for yoga practitioners at home. The device comprises six haptic bands that fasten around specific body parts, letting users personalize each session to their individual needs. Users set parameters through a graphical user interface (GUI), which records their inputs. Visual cards serve as cues throughout the session, while computer vision tracks the user's form via a camera. The system computes corrections by comparing the user's pose to the cued target pose and delivers localized haptic feedback to guide adjustments. The auditory component, paced to the user's flow, promotes mindfulness and reduces anxiety during practice, creating a more engaging experience than traditional video-based instruction.
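
To make the correction step concrete, here is a minimal sketch of how a joint-angle comparison could work, assuming 2D landmark coordinates from the pose tracker. The target angle, tolerance, and function names are illustrative, not the prototype's actual implementation:

```python
import math

# Hypothetical target: the knee angle (hip-knee-ankle) for a bent-knee pose,
# and a tolerance within which no correction is sent.
TARGET_KNEE_ANGLE = 90.0   # degrees (illustrative value)
TOLERANCE = 10.0           # degrees

def joint_angle(a, b, c):
    """Angle at point b formed by segments b->a and b->c, in degrees."""
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0])
        - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return 360.0 - ang if ang > 180.0 else ang

def correction(hip, knee, ankle):
    """Return a signed angle error, or None when the pose is close enough."""
    error = joint_angle(hip, knee, ankle) - TARGET_KNEE_ANGLE
    if abs(error) <= TOLERANCE:
        return None  # correct position: no haptic cue needed
    return error

# Example: landmarks as normalized (x, y) pairs from the pose tracker.
print(correction((0.42, 0.55), (0.48, 0.70), (0.47, 0.88)))
```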

Input Modalities

  • Before the Session: The GUI allows users to input preferences such as breathing pace, transition speed, and time spent on each pose, with plans to expand personalization options to include various yoga styles and asanas (a sketch of such a configuration follows this list).

  • During the Session: The MediaPipe Pose landmark detection framework tracks the user's positioning in real time, providing immediate feedback on their form (see the tracking sketch after this list).
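
As an illustration of the pre-session step, a minimal sketch of the kind of parameters the GUI might record; the field names and default values are assumptions, not the prototype's actual schema:

```python
from dataclasses import dataclass

@dataclass
class SessionPreferences:
    """Illustrative parameters a practitioner might set before a session."""
    breaths_per_minute: float = 6.0   # breathing pace
    transition_seconds: float = 5.0   # speed of transitions between poses
    hold_seconds: float = 30.0        # time spent in each pose
    poses: tuple = ("Vrikshasana", "Virabhadrasana II")

prefs = SessionPreferences(breaths_per_minute=5.0, hold_seconds=45.0)
```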
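
For the in-session tracking, a minimal sketch of reading MediaPipe Pose landmarks from a webcam. The choice of left-leg landmarks is illustrative; the extracted points would feed an angle comparison like the one sketched earlier:

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

# Stream webcam frames through MediaPipe Pose; stop with Ctrl+C
# or when the camera stops delivering frames.
cap = cv2.VideoCapture(0)
with mp_pose.Pose(min_detection_confidence=0.5,
                  min_tracking_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV delivers BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            lm = results.pose_landmarks.landmark
            hip = (lm[mp_pose.PoseLandmark.LEFT_HIP].x,
                   lm[mp_pose.PoseLandmark.LEFT_HIP].y)
            knee = (lm[mp_pose.PoseLandmark.LEFT_KNEE].x,
                    lm[mp_pose.PoseLandmark.LEFT_KNEE].y)
            ankle = (lm[mp_pose.PoseLandmark.LEFT_ANKLE].x,
                     lm[mp_pose.PoseLandmark.LEFT_ANKLE].y)
            # hip/knee/ankle would feed the joint-angle correction above.
cap.release()
```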


Output Modalities

  • Visual Feedback: Static visual cards illustrate each yoga pose step by step, with images and text descriptions that guide users while keeping cognitive load low, encouraging a focus on bodily movement rather than the screen.

  • Haptic Feedback: The wearable bands, made from stretchy material for comfort, integrate haptic actuators that guide movement through vibrotactile signals. Feedback is localized to specific body parts and ceases once the user reaches the correct position, inspired by natural principles of equilibrium (a sketch of this stop-when-correct logic follows the list).

  • Auditory Feedback: An interactive soundscape developed with SuperCollider dynamically adjusts pitch based on the user's movements, enhancing body perception and flexibility. An auditory guide also regulates breathing, syncing with the haptic feedback to support mindfulness (see the OSC sketch after this list).
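
A minimal sketch of the stop-when-correct haptic behaviour described above; drive_actuator and the error source are hypothetical stand-ins for the prototype's actual hardware interface:

```python
import time

def drive_actuator(band_id: str, intensity: float) -> None:
    """Hypothetical hardware call; intensity 0.0 silences the band."""
    print(f"band={band_id} intensity={intensity:.2f}")

def haptic_loop(band_id, read_error, tolerance=10.0, period=0.1):
    """Vibrate one band in proportion to pose error; fall silent at rest."""
    while True:
        error = read_error()  # signed joint-angle error in degrees, or None
        if error is None or abs(error) <= tolerance:
            drive_actuator(band_id, 0.0)   # correct position: cue ceases
        else:
            # Scale vibration with how far the joint is from the target.
            drive_actuator(band_id, min(abs(error) / 45.0, 1.0))
        time.sleep(period)
```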
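
And a sketch of how movement data could drive a SuperCollider soundscape over OSC from Python, using the python-osc library. The /my/pitch address and the movement-to-pitch mapping are assumptions; SuperCollider's language client listens on UDP port 57120 by default:

```python
from pythonosc.udp_client import SimpleUDPClient

# Send pitch updates to a SuperCollider synth listening for OSC messages.
client = SimpleUDPClient("127.0.0.1", 57120)

def movement_to_pitch(vertical_velocity: float) -> float:
    """Map normalized movement speed onto a bounded frequency range (Hz)."""
    base, span = 220.0, 440.0
    return base + span * max(0.0, min(abs(vertical_velocity), 1.0))

def on_motion(vertical_velocity: float) -> None:
    # Illustrative OSC address; the actual synth definition would live
    # on the SuperCollider side.
    client.send_message("/my/pitch", movement_to_pitch(vertical_velocity))
```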


Evaluation Setup

Participants: Six female students aged 22-26 from KTH Royal Institute of Technology, with yoga experience ranging from less than a week to over a year. Participants typically practiced on their own, following videos or attending online classes, and rated their yoga proficiency between 2 and 4 out of 5.

Methodology: A within-groups evaluation strategy was adopted, involving two tasks:

Task 1: Participants practiced the Vrikshasana (Tree Pose) and Virabhadrasana II (Warrior II Pose) using only visual guidance from graphic cards. This task focused on one body part at a time to simplify instructions.

Task 2: The same poses were performed with the addition of the haptic wearable and auditory feedback. A Wizard-of-Oz (WoZ) approach was used: team members controlled the prototype's feedback without participants' knowledge.

Data Collection: Two Google Forms were used: one for participant consent and background information (Pre-Study) and another for note-takers to record observations and participant responses after each testing session. Each session lasted approximately 30 minutes, with moderators noting verbal and non-verbal cues for later analysis.

User Feedback

  • Satisfaction: Participants found M.Y. engaging and more enjoyable than traditional video guides.

  • Injury Prevention: Participants believed the system could help prevent long-term injuries, though they had concerns about complex poses and fast-paced styles.

  • Confidence: Users, especially beginners, felt reassured by the system, likening it to having a personal instructor.

  • Comprehension: Visual cues were effective reminders for pose execution.

  • Visual Output: Remained the primary source of guidance, even when other feedback modalities were present.

  • Haptic Output: Helped with accuracy, but some participants found it overwhelming; preferences for actuator placement varied.

  • Auditory Output: Positive responses to the soundscape; suggestions for adding a vocal guide to improve breathing instructions.

    For more details and quantitative feedback, reach out!

The M.Y. project highlights the effectiveness of a multimodal yoga guide in enhancing user confidence and engagement during solo practice. Future efforts will focus on technological improvements, user customization, and refining feedback modalities to cater to a diverse user base. This research offers valuable insights into designing interactive systems for physical activities and supports the integration of advanced technologies in personal wellness.
