Development of Fast Posture Classification System for Table Tennis Robot

  • Jin, Seongho (Department of Electrical and Electronic Engineering, Pusan National University) ;
  • Kwon, Yongwoo (Department of Electrical and Electronic Engineering, Pusan National University) ;
  • Kim, Yoonjeong (Department of Electrical and Electronic Engineering, Pusan National University) ;
  • Park, Miyoung (Department of Electrical and Electronic Engineering, Pusan National University) ;
  • An, Jaehoon (Department of Electrical and Electronic Engineering, Pusan National University) ;
  • Kang, Hosun (Department of Electrical and Electronic Engineering, Pusan National University) ;
  • Choi, Jiwook (Department of Electrical and Electronic Engineering, Pusan National University) ;
  • Lee, Inho (Department of Electronics Engineering, Pusan National University)
  • Received : 2022.06.24
  • Accepted : 2022.08.16
  • Published : 2022.11.30

Abstract

In this paper, we propose a posture classification system, built on a cooperative robot, as a step toward a table tennis robot that can be used for practice resembling a real game. The ideal table tennis robot would combine high joint driving speed with many degrees of freedom. We therefore use a cooperative robot, which provides sufficient degrees of freedom; however, cooperative robots suffer from slow joint driving speed. We compensate for this limitation through fast recognition: by classifying the opponent's posture quickly, the robot gains time to react despite its slower joints. To this end, we trained on designated dynamic postures using image data as input, built three classification models, and compared and evaluated them experimentally. The comparative results show that the model using an MLP (Multi-Layer Perceptron) achieves both the highest classification accuracy and the fastest classification speed, demonstrating the validity of the proposed approach.
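The abstract does not specify the classifier's architecture, so the following is a minimal sketch, in PyTorch, of what an MLP posture classifier over 2D pose keypoints could look like. The keypoint layout (an OpenPose-style BODY_25 skeleton), the four-frame window used to represent a dynamic posture, the hidden-layer width, and the number of posture classes are all illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch, not the authors' implementation: an MLP that maps a short
# stack of 2D pose keypoints to posture-class logits. All sizes below are
# illustrative assumptions.
import torch
import torch.nn as nn

NUM_KEYPOINTS = 25       # assumed OpenPose BODY_25-style keypoint layout
FRAMES_PER_SAMPLE = 4    # assumed short temporal window for a dynamic posture
NUM_CLASSES = 3          # assumed number of designated postures (not stated)

class PostureMLP(nn.Module):
    """Flattens a (frames, keypoints, xy) stack into one vector and classifies it."""
    def __init__(self, in_dim=NUM_KEYPOINTS * 2 * FRAMES_PER_SAMPLE,
                 hidden=128, num_classes=NUM_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x):
        # x: (batch, frames, keypoints, 2) with normalized (x, y) coordinates
        return self.net(x.flatten(start_dim=1))

model = PostureMLP().eval()
dummy = torch.rand(1, FRAMES_PER_SAMPLE, NUM_KEYPOINTS, 2)  # stand-in input
with torch.no_grad():
    logits = model(dummy)
print(logits.argmax(dim=1))  # index of the predicted posture class
```

Because inference here reduces to a few small matrix multiplications, an MLP of this kind can classify a posture in well under a millisecond on commodity hardware, which is consistent with the paper's motivation of trading joint speed for recognition speed.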

Acknowledgement

This work was supported by the Police-Lab 2.0 Program (www.kipot.or.kr) funded by the Ministry of Science and ICT (MSIT, Korea) and the Korean National Police Agency (KNPA, Korea) [Project Name: Development and demonstration of unmanned patrol robot system for local police support / Project Number: 210121M05]. This work was also supported by a National Research Foundation of Korea (NRF) grant funded by the Korea government (No. 2021R1C1C1009989).
