Abstract
Human pose estimation plays a vital role in various domains, including fitness tracking, physiotherapy, sports performance analysis, and human-computer interaction. Accurate posture detection is essential to prevent injuries, improve physical activity performance, and aid rehabilitation processes. This research presents a real-time human pose detection and feedback system leveraging the MoveNet deep learning model and TensorFlow. The system captures live video streams using OpenCV, processes the frames with MoveNet to extract key joint positions, and applies an angle calculation module to evaluate movement accuracy. To enhance accessibility and usability, the system integrates a graphical user interface (GUI) built with Tkinter and a text-to-speech feedback mechanism to provide real-time guidance.
The effectiveness of the system is validated through comparative analysis with standard pose models, ensuring that users receive real-time feedback on their posture deviations. The experimental results demonstrate high detection accuracy, rapid processing speeds, and enhanced user engagement, making it a viable solution for automated fitness coaching, physiotherapy monitoring, and interactive learning applications. Additionally, the system reduces the reliance on human instructors by offering automated posture correction, thereby democratizing access to professional-level movement assessment.
Introduction
Overview
Human pose estimation plays a critical role in domains like fitness, healthcare, and rehabilitation. Traditional motion capture systems are accurate but impractical due to high costs and equipment requirements. This research introduces a real-time, lightweight, markerless yoga posture detection and feedback system using the MoveNet deep learning model, aimed at improving form, reducing injuries, and offering accessibility for home fitness and physiotherapy users.
Problem Statement
Many individuals practice yoga or fitness without professional supervision, leading to improper postures and increased injury risk. Existing vision-based models (e.g., OpenPose, HRNet) are accurate but computationally heavy and lack real-time feedback capabilities for consumer devices. This study addresses the need for a resource-efficient, feedback-integrated pose correction system using MoveNet, optimized for low-latency performance and immediate corrective feedback.
Literature Review
Real-time pose feedback significantly improves posture accuracy and reduces injuries, especially in yoga.
Models like OpenPose and CNN-based systems can detect keypoints with high accuracy (~91%).
Visual (color-coded skeletons) and audio feedback (via text-to-speech) enhance user comprehension and engagement.
Combining deep learning with real-time assessment surpasses traditional methods in precision and user support.
Methodology
A. System Architecture
User Authentication: Secure login, with personal details and injury history stored in a MySQL database.
Pose Selection: Users choose yoga poses from a list.
Live Pose Execution: Camera feed and reference pose shown simultaneously.
Pose Detection: MoveNet extracts body keypoints and calculates joint angles.
Pose Evaluation: Deep learning model compares user pose with ideal pose.
Corrective Feedback: Text and voice feedback suggest specific posture adjustments.
Continuous Monitoring: System loops through detection and correction throughout the session.
Data Logging: Post-session summary including pose accuracy and correction history.
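The keypoint-to-angle step in the architecture above can be sketched as follows. MoveNet returns 17 keypoints, each a (y, x, confidence) triple; the helper below (the function name and collinearity examples are illustrative, not taken from the original system) computes the interior angle at a joint from three such points:

```python
import math

def joint_angle(a, b, c):
    """Interior angle at joint b (degrees), formed by keypoints a-b-c.

    Each point is a (y, x) pair, matching MoveNet's output ordering.
    """
    ang = math.degrees(
        math.atan2(a[0] - b[0], a[1] - b[1])
        - math.atan2(c[0] - b[0], c[1] - b[1])
    )
    ang = abs(ang)
    return 360.0 - ang if ang > 180.0 else ang

# A fully extended joint (three collinear points) gives 180 degrees;
# a right-angle bend gives 90 degrees.
print(joint_angle((0, 0), (0, 1), (0, 2)))  # 180.0
print(joint_angle((0, 0), (1, 0), (1, 1)))  # 90.0
```

In practice the confidence score of each keypoint would be checked first, so that low-confidence detections do not produce spurious angles.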
B. Process Flow
The process flows from the splash screen through user login, pose selection, real-time pose detection, and corrective feedback to session completion; each stage is designed for seamless interaction and continuous improvement.
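The corrective-feedback stage can be illustrated with a minimal rule. The tolerance threshold and the message wording below are hypothetical, not taken from the original system; the returned string would be displayed and passed to a text-to-speech engine such as pyttsx3:

```python
def correction_message(joint, measured, ideal, tolerance=10.0):
    """Return a displayed/spoken correction, or None if within tolerance.

    Angles are in degrees; `tolerance` is a hypothetical threshold.
    """
    deviation = measured - ideal
    if abs(deviation) <= tolerance:
        return None  # pose segment is acceptable, no correction needed
    direction = "straighten" if deviation < 0 else "bend"
    return f"Please {direction} your {joint} by about {abs(deviation):.0f} degrees."

print(correction_message("left knee", 150.0, 175.0))
# -> Please straighten your left knee by about 25 degrees.
print(correction_message("left knee", 172.0, 175.0))  # -> None
```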
C. Dataset
Based on the Yoga-82 dataset, enhanced with additional open-source and manually annotated data.
Keypoints and angle annotations tailored for the MoveNet framework.
Augmented with rotations, lighting changes, etc., for better real-world performance.
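Rotation augmentation must transform the keypoint annotations consistently with the rotated image. A minimal sketch of that coordinate update (pure math, independent of any image library; the function name is illustrative) is:

```python
import math

def rotate_keypoint(y, x, cy, cx, degrees):
    """Rotate a (y, x) keypoint about the center (cy, cx) by `degrees`."""
    r = math.radians(degrees)
    dy, dx = y - cy, x - cx
    return (cy + dy * math.cos(r) - dx * math.sin(r),
            cx + dy * math.sin(r) + dx * math.cos(r))

# A keypoint one unit to the right of the center, rotated 90 degrees:
ny, nx = rotate_keypoint(0.0, 1.0, 0.0, 0.0, 90.0)
print(round(ny, 6), round(nx, 6))  # -1.0 0.0
```

Lighting-change augmentations, by contrast, leave the annotations untouched, since keypoint positions do not move.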
D. Performance Evaluation
Model Test Accuracy: 99.33%
Precision/Recall/F1-Score: All above 0.99
Pose-wise Accuracy: 96.5–100% across 10 yoga poses
Low misclassification rate: only 1 incorrect prediction in 149 test samples
Real-time feedback through visual and auditory cues enhances user experience and posture alignment.
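The headline figures above are internally consistent: with 1 misclassification in 149 test samples, the overall accuracy works out to the reported 99.33%. A short sketch of the computation (per-class precision, recall, and F1 follow the standard definitions from confusion-matrix counts):

```python
def accuracy(correct, total):
    """Fraction of correctly classified samples."""
    return correct / total

def precision_recall_f1(tp, fp, fn):
    """Standard per-class metrics from confusion-matrix counts."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f1 = 2 * p * r / (p + r)
    return p, r, f1

# 148 of 149 test samples correct, as reported:
print(round(accuracy(148, 149) * 100, 2))  # 99.33
```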
Key Results
The system delivers real-time feedback with minimal latency and high pose recognition accuracy.
Pose-specific accuracy is excellent, with several poses achieving 100% classification accuracy.
Integration of MoveNet ensures smooth performance even on consumer-grade hardware.
The feedback mechanism significantly improves user form and engagement during practice.
Text-to-speech and visual cues make the system accessible for all skill levels and abilities.
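Real-time claims like these are typically verified by timing the per-frame pipeline. A minimal sketch, with a cheap stand-in for the actual MoveNet inference call (which is not shown in the original):

```python
import time

def measure_fps(process_frame, n_frames=50):
    """Average frames-per-second of `process_frame` over n_frames calls."""
    start = time.perf_counter()
    for _ in range(n_frames):
        process_frame()
    elapsed = time.perf_counter() - start
    return n_frames / elapsed

# Hypothetical stand-in for per-frame capture + inference + feedback:
fps = measure_fps(lambda: sum(range(1000)))
print(fps > 0)  # True
```

In the real system, `process_frame` would wrap frame capture via OpenCV, MoveNet inference, and feedback generation, and the measured rate would be compared against the camera's frame rate.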
Conclusion
The Yoga Pose Detection and Feedback System achieves its goal of providing real-time posture analysis and corrective feedback to users practicing yoga. By leveraging MoveNet for keypoint detection and a deep learning-based evaluation model, the system classifies yoga poses with high accuracy and efficiency. The results show an overall accuracy of 99.33%, with several poses classified with 100% accuracy, ensuring a reliable and precise evaluation mechanism.
The integration of text-to-speech guidance and visual feedback enhances user engagement, making it easier for individuals to adjust their poses in real time. The system is designed to be adaptable for yoga practitioners of all skill levels, providing a user-friendly experience that promotes safe and effective yoga practice.
Future enhancements could include expanding the dataset to incorporate more diverse body types and postures, integrating edge computing solutions for deployment on mobile devices, and improving the feedback mechanism with AI-driven adaptive learning. These improvements would further enhance accessibility, usability, and real-time performance, making the system a valuable tool for fitness training, physiotherapy, and wellness applications.
Overall, the proposed system provides a technologically advanced, accessible, and practical solution for individuals looking to refine their yoga practice with automated, real-time posture correction. The combination of deep learning, real-time video processing, and interactive feedback makes it a significant contribution to the field of computer vision-based fitness applications.