Abstract
This paper presents a novel approach to real-time yoga posture detection and correction using the YOLOv8 object detection algorithm. The system aims to assist practitioners in performing yoga poses correctly, thereby reducing the risk of injuries and enhancing the effectiveness of their practice. The system works by analyzing live video feeds of practitioners and comparing their poses to a database of correct poses. If discrepancies are detected, real-time feedback is provided to guide the practitioner in adjusting their posture. The system also tracks the practitioner's progress over time, allowing for personalized feedback and recommendations. Experimental results demonstrate the effectiveness of the system in accurately detecting and correcting yoga postures, highlighting its potential to revolutionize the way yoga is practiced and taught.
Introduction
Yoga is a holistic practice that combines physical postures, breathing, and meditation to enhance physical, mental, and emotional well-being. However, incorrect practice can cause injuries and reduce its benefits. To help practitioners perform yoga poses accurately, this work develops a real-time yoga posture detection and correction system based on YOLOv8, a fast and accurate object detection algorithm.
This system captures live video of a person doing yoga, detects key body points (joints and body parts), and compares them to a database of correct poses. It then provides real-time feedback via visual or audio cues to correct posture, improving practice and preventing injuries.
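A minimal sketch of this capture, detect, compare, and feedback loop is given below. It assumes the ultralytics package and a pretrained YOLOv8 pose model (`yolov8n-pose.pt`); the reference keypoint file, the normalization scheme, and the deviation threshold are illustrative assumptions rather than the paper's exact implementation.

```python
# Sketch of the live pipeline: webcam frame -> YOLOv8 pose keypoints ->
# comparison with stored reference keypoints -> on-screen feedback.
# "reference_pose.npy" and the 0.1 threshold are hypothetical placeholders.
import cv2
import numpy as np
from ultralytics import YOLO

model = YOLO("yolov8n-pose.pt")            # pretrained pose model (17 COCO keypoints)
reference = np.load("reference_pose.npy")  # hypothetical (17, 2) array of target keypoints

def normalize(kpts: np.ndarray) -> np.ndarray:
    """Center on the keypoint centroid and scale by its spread so poses
    can be compared independently of position and camera distance."""
    centered = kpts - kpts.mean(axis=0)
    return centered / (np.linalg.norm(centered) + 1e-6)

cap = cv2.VideoCapture(0)                  # live webcam feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = model(frame, verbose=False)[0]
    if result.keypoints is not None and len(result.keypoints.xy) > 0:
        kpts = result.keypoints.xy[0].cpu().numpy()   # (17, 2) detected joints
        error = np.linalg.norm(normalize(kpts) - normalize(reference), axis=1)
        worst = int(error.argmax())
        msg = "Pose OK" if error.max() < 0.1 else f"Adjust joint #{worst}"
        cv2.putText(frame, msg, (20, 40), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("Yoga posture feedback", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

In practice the visual overlay could be replaced or supplemented by audio cues, as described above, without changing the detection and comparison steps.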
This paper also reviews related literature on AI- and computer vision-based fitness and yoga solutions, covering methods and technologies such as OpenCV, MediaPipe, CNNs, and deep learning for pose estimation and correction. These studies reflect the growing use of AI for personalized exercise guidance and injury prevention.
For the system’s development, a dataset of annotated yoga images with labeled key points was created using tools like CVAT. The data is split into training and validation sets for model training.
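The sketch below illustrates one way this preparation step might look, assuming CVAT annotations have already been converted to YOLO-format label files. The directory layout, the 80/20 split ratio, and the dataset configuration file `yoga-pose.yaml` are assumptions for illustration, not the paper's exact setup.

```python
# Illustrative train/validation split of annotated images, followed by
# fine-tuning a pretrained YOLOv8 pose model on the resulting dataset.
import random
import shutil
from pathlib import Path

from ultralytics import YOLO

random.seed(0)
images = sorted(Path("dataset/images").glob("*.jpg"))
random.shuffle(images)
split = int(0.8 * len(images))             # assumed 80/20 train/val split

for subset, files in (("train", images[:split]), ("val", images[split:])):
    img_dir = Path(f"dataset/{subset}/images")
    lbl_dir = Path(f"dataset/{subset}/labels")
    img_dir.mkdir(parents=True, exist_ok=True)
    lbl_dir.mkdir(parents=True, exist_ok=True)
    for img in files:
        shutil.copy(img, img_dir / img.name)
        label = Path("dataset/labels") / f"{img.stem}.txt"  # CVAT export converted to YOLO format
        if label.exists():
            shutil.copy(label, lbl_dir / label.name)

# Fine-tune a pretrained pose model on the custom dataset described by yoga-pose.yaml
model = YOLO("yolov8n-pose.pt")
model.train(data="yoga-pose.yaml", epochs=100, imgsz=640)
```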
The methodology centers on leveraging YOLOv8’s speed and accuracy for real-time pose detection and correction, enabling more accessible and effective yoga practice for all skill levels.
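One simple way to turn the detected keypoints into corrective feedback, not necessarily the method used in this work, is to compare joint angles against the corresponding angles in the reference pose. The COCO keypoint indices and the 15-degree tolerance below are assumptions for illustration.

```python
# Joint-angle comparison between a detected pose and a reference pose.
# Keypoint indices follow the COCO convention used by YOLOv8 pose models.
import numpy as np

# (joint name, indices of the three keypoints forming the angle at the middle point)
ANGLES = {
    "left elbow":  (5, 7, 9),     # shoulder - elbow - wrist
    "right elbow": (6, 8, 10),
    "left knee":   (11, 13, 15),  # hip - knee - ankle
    "right knee":  (12, 14, 16),
}

def joint_angle(kpts: np.ndarray, a: int, b: int, c: int) -> float:
    """Angle in degrees at keypoint b, formed by segments b->a and b->c."""
    v1, v2 = kpts[a] - kpts[b], kpts[c] - kpts[b]
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-6)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def feedback(detected: np.ndarray, reference: np.ndarray, tol: float = 15.0) -> list[str]:
    """Corrective hints for joints whose angle deviates by more than `tol` degrees."""
    hints = []
    for name, (a, b, c) in ANGLES.items():
        diff = joint_angle(detected, a, b, c) - joint_angle(reference, a, b, c)
        if abs(diff) > tol:
            direction = "straighten" if diff < 0 else "bend"
            hints.append(f"{direction} your {name} by about {abs(diff):.0f} degrees")
    return hints
```

Because both the detection and the angle comparison run per frame, the feedback keeps pace with the live video, which is what makes YOLOv8's real-time throughput central to the approach.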
Conclusion
To sum up, the use of YOLOv8 for estimating yoga poses is a noteworthy development in the computer vision methods applied to yoga analysis. YOLOv8's real-time performance allows easy integration into a variety of applications, giving practitioners instant feedback while they practice yoga. The ability to correct and refine positions in real time is invaluable for supporting the learning process.
Furthermore, YOLOv8's high precision in identifying and localizing yoga positions enables accurate feedback on the alignment and execution of poses. By reliably locating key body joints and landmarks, YOLOv8 supports detailed analysis of postures, helping practitioners improve their technique and prevent injuries. Moreover, YOLOv8's computational efficiency makes it suitable for deployment on devices with limited resources, including wearables, smartphones, and fitness monitors. This increases the accessibility of yoga pose estimation, allowing a larger group of users to benefit from tailored advice and feedback during practice.
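For such resource-constrained deployments, one plausible route, shown as a brief sketch below, is to export the pose model to a lightweight runtime format. The choice of ONNX and TFLite targets here is an assumption, not the deployment described in this work.

```python
# Exporting the YOLOv8 pose model for edge devices using standard
# ultralytics export formats; runtime choices are illustrative only.
from ultralytics import YOLO

model = YOLO("yolov8n-pose.pt")   # smallest pose variant, suited to edge hardware
model.export(format="onnx")       # e.g. for ONNX Runtime on mobile or embedded CPUs
model.export(format="tflite")     # e.g. for Android / TFLite interpreters
```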
It is important to recognize the remaining obstacles and opportunities for improvement in YOLOv8-based yoga pose estimation. Enhancing the model's robustness and generalization abilities will require addressing variables