
Automated Detection and Analysis of Pull-Up Performance Using Pose Estimation and Time-Series Modeling

Recent advances in real-time pose estimation, driven by frameworks such as MediaPipe, OpenPose, and ARKit, have opened up new possibilities for motion tracking using only consumer-grade cameras (e.g., webcams or smartphones). These systems detect human body landmarks with increasing precision and speed, even without markers or specialized hardware.
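
As a minimal sketch of what such landmark extraction could look like, the snippet below reads frames from a webcam or video file and yields per-frame vertical coordinates of two body landmarks. It assumes the legacy mp.solutions.pose API of MediaPipe together with OpenCV; the function name landmark_trajectory and the choice of LEFT_WRIST and NOSE as tracked landmarks are illustrative, not prescribed by the project.

```python
# Sketch: per-frame landmark extraction with MediaPipe Pose (legacy solutions API).
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose


def landmark_trajectory(video_source=0):
    """Yield (frame_index, left_wrist_y, nose_y) in normalized image coordinates."""
    cap = cv2.VideoCapture(video_source)
    with mp_pose.Pose(model_complexity=1, min_detection_confidence=0.5) as pose:
        frame_idx = 0
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV delivers BGR frames.
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_landmarks:
                lm = results.pose_landmarks.landmark
                yield (frame_idx,
                       lm[mp_pose.PoseLandmark.LEFT_WRIST].y,
                       lm[mp_pose.PoseLandmark.NOSE].y)
            frame_idx += 1
    cap.release()
```

In a full pipeline, these per-frame coordinates would be collected into a time series that the later analysis steps (repetition detection, range-of-motion estimation, fatigue modeling) operate on.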

This progress is creating opportunities to build lightweight, vision-based tools for monitoring physical activity and analyzing exercise performance. In the fitness and rehabilitation domains, such tools can support users in tracking progress, maintaining good form, and detecting fatigue or movement degradation over time — all without wearables or expensive motion capture setups.

Pull-ups, as a form of bodyweight resistance training, offer a controlled and repetitive movement pattern that is ideal for studying automatic repetition detection, range-of-motion analysis, and early indicators of fatigue. However, pose-based tracking of pull-up performance is non-trivial: small variations in camera angle, landmark jitter, body sway, or partial occlusion can impact the accuracy and reliability of movement interpretation. This thesis will explore how pose estimation and time-series analysis can be combined to build a robust system for tracking pull-up performance and inferring training dynamics from movement signals.

A variety of methods may be applied or compared in this project, ranging from classical signal processing techniques (e.g., smoothing, differentiation, peak detection) to machine learning models for time-series segmentation, motion classification, and fatigue detection. Students may also explore sensor fusion approaches, experiment with different pose estimation frameworks, or analyze model robustness across users and recording conditions. An additional line of inquiry could involve comparing 2D and 3D pose estimation models in terms of their accuracy, sensitivity to camera placement, and suitability for movement quality assessment in constrained settings like home workouts.
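
To make the classical signal-processing route concrete, the sketch below smooths a vertical landmark trajectory and counts repetitions via peak detection with SciPy. It is only one possible baseline: the function name count_reps, the use of the nose's vertical coordinate as the repetition signal, the 0.5 s smoothing window, the 0.05 prominence threshold, and the 1 s minimum spacing between repetitions are all illustrative assumptions.

```python
# Sketch: smoothing + peak detection as a baseline pull-up repetition counter.
import numpy as np
from scipy.signal import savgol_filter, find_peaks


def count_reps(nose_y, fps=30.0):
    """nose_y: per-frame normalized vertical nose position (image y grows downward,
    so the top of each repetition appears as a local minimum of the signal)."""
    y = np.asarray(nose_y, dtype=float)
    # Savitzky-Golay smoothing suppresses landmark jitter while preserving rep shape.
    # Assumes the recording is longer than the ~0.5 s window (forced to be odd).
    window = max(5, int(fps * 0.5) | 1)
    smooth = savgol_filter(y, window_length=window, polyorder=2)
    # Detect 'chin over bar' minima as peaks of the inverted signal; a peak must
    # stand out (prominence) and consecutive reps must be at least ~1 s apart.
    peaks, _ = find_peaks(-smooth, prominence=0.05, distance=int(fps * 1.0))
    return len(peaks), peaks
```

A learned approach (e.g., a time-series segmentation model) would replace the hand-tuned thresholds here, which is one natural axis of comparison for the project.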

The project will be supervised by Yuliya Shapovalova from Radboud University.

Contact: Yuliya Shapovalova

References:

Saraswat, S., & Malathi, G. (2024, April). Pose estimation based fall detection system using MediaPipe. In 2024 10th International Conference on Communication and Signal Processing (ICCSP) (pp. 1733-1738). IEEE.

Badiola-Bengoa, A., & Mendez-Zorrilla, A. (2021). A systematic review of the application of camera-based human pose estimation in the field of sport and physical exercise. Sensors, 21(18), 5996.

Wang, J., Qiu, K., Peng, H., Fu, J., & Zhu, J. (2019, October). AI coach: Deep human pose estimation and analysis for personalized athletic training assistance. In Proceedings of the 27th ACM International Conference on Multimedia (pp. 374-382).