Action Recognition Method in Sports Video Shear Based on Fish Swarm Algorithm

  • Jie Sun (Institute of Physical Education, China University of Geoscience) ;
  • Lin Lu (Institute of Physical Education, China University of Geoscience)
  • Received : 2022.07.01
  • Accepted : 2023.04.25
  • Published : 2023.08.31

Abstract

To address the low accuracy of existing sports video action recognition methods, this study proposes an action recognition approach for sports video clips based on the fish swarm algorithm. A modified fish swarm algorithm is used to construct invariant features and reduce their dimensionality, and local and global features are then classified on this basis. Experimental results on a standard sports action data set show that the dimensionality-reduced, fused invariant features retain the key details of sports actions. For walking, running, squatting, sitting, and bending, the proposed method achieves an average recognition time of less than 326 seconds and an average recognition rate above 94%, demonstrating that it can significantly improve the performance and efficiency of online sports video action recognition.
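To make the abstract's pipeline more concrete, the sketch below shows how a generic artificial fish swarm optimizer can be applied to feature-subset selection (i.e., dimensionality reduction) before classification. This is a minimal illustration of the standard prey/swarm behaviors, not the authors' modified algorithm; the fitness criterion, function names, and parameters (`visual`, `try_times`, `crowd`) are all illustrative assumptions.

```python
import numpy as np

def fitness(mask, X, y, dim_penalty=0.01):
    """Illustrative fitness: class separability of the selected features minus
    a penalty on how many features are kept (a stand-in, not the paper's criterion)."""
    if mask.sum() == 0:
        return -np.inf
    Xs = X[:, mask.astype(bool)]
    classes = np.unique(y)
    centroids = np.array([Xs[y == c].mean(axis=0) for c in classes])
    between = np.linalg.norm(centroids - Xs.mean(axis=0), axis=1).sum()
    within = sum(np.linalg.norm(Xs[y == c] - centroids[i], axis=1).mean()
                 for i, c in enumerate(classes))
    return between / (within + 1e-9) - dim_penalty * mask.sum()

def random_neighbor(mask, visual):
    """Flip up to `visual` randomly chosen bits -- the fish's 'visual range'."""
    new = mask.copy()
    idx = np.random.choice(len(mask), size=np.random.randint(1, visual + 1), replace=False)
    new[idx] ^= 1
    return new

def fish_swarm_select(X, y, n_fish=20, visual=3, try_times=5, iters=50, crowd=0.6):
    """Binary feature-selection with prey and swarm behaviors of a basic fish swarm."""
    d = X.shape[1]
    school = np.random.randint(0, 2, size=(n_fish, d))
    best_mask, best_fit = None, -np.inf
    for _ in range(iters):
        for i in range(n_fish):
            cur_fit = fitness(school[i], X, y)
            cand = school[i]
            # Prey: probe the visual range a few times and move to a better state.
            for _ in range(try_times):
                trial = random_neighbor(school[i], visual)
                if fitness(trial, X, y) > cur_fit:
                    cand = trial
                    break
            # Swarm: move toward the school centre if it is better and not crowded.
            centre = (school.mean(axis=0) > 0.5).astype(int)
            if fitness(centre, X, y) > cur_fit and school.mean() < crowd:
                cand = centre
            school[i] = cand
            f = fitness(school[i], X, y)
            if f > best_fit:
                best_fit, best_mask = f, school[i].copy()
    return best_mask, best_fit

if __name__ == "__main__":
    # Toy usage: pick a compact feature subset from synthetic "action descriptors".
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 30))
    y = rng.integers(0, 5, size=200)   # e.g. walk / run / squat / sit / bend
    mask, score = fish_swarm_select(X, y)
    print("selected features:", np.flatnonzero(mask), "score:", round(score, 3))
```

In this sketch the "fish" are binary masks over the feature vector; the prey step does a local search within the visual range, and the swarm step pulls fish toward the school centre unless the school is too crowded, which is the usual mechanism for escaping local optima in fish swarm optimization.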

Keywords
