Acknowledgement
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2019R1A2C1003257).
References
- Bagate A and Shah M (2019). Human activity recognition using RGB-D sensors. In Proceedings of 2019 International Conference on Intelligent Computing and Control Systems, Madurai, India, 902-905.
- Chaaraoui AA, Padilla-Lopez JR, and Florez-Revuelta F (2015). Abnormal gait detection with RGB-D devices using joint motion history features. In Proceedings of 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, Ljubljana, Slovenia, 1-6.
- Cho B, Jang H, and Zhang B (2012). Motion recognition and classification using Kinect sensor data, KIISE, 39, 318-320.
- Cho K and Chen X (2014). Classifying and visualizing motion capture sequences using deep neural networks. In Proceedings of 2014 International Conference on Computer Vision Theory and Applications, Lisbon, Portugal, 122-130.
- Du G, Zhang P, Mai J, and Li Z (2012). Markerless Kinect-based hand tracking for robot teleoperation, International Journal of Advanced Robotic Systems, 9, 36. https://doi.org/10.5772/50093
- Du Y, Wang W, and Wang L (2015). Hierarchical recurrent neural network for skeleton based action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 1110-1118.
- Jalal A, Uddin MZ, and Kim TS (2012). Depth video-based human activity recognition system using translation and scaling invariant features for life logging at smart home, IEEE Transactions on Consumer Electronics, 58, 863-871. https://doi.org/10.1109/TCE.2012.6311329
- Jin X, Yao Y, Jiang Q, Huang X, Zhang J, Zhang X, and Zhang K (2015). Virtual personal trainer via the Kinect sensor. In Proceedings of 2015 IEEE 16th International Conference on Communication Technology, Hangzhou, China, 406-463.
- Kim D, Kim W, and Park KS (2022). Effects of exercise type and gameplay mode on physical activity in exergame, Electronics, 11, 3086. https://doi.org/10.3390/electronics11193086
- Lee I, Kim D, Kang S, and Lee S (2017). Ensemble deep learning for skeleton-based action recognition using temporal sliding LSTM networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 1012-1020.
- Li C, Zhong Q, Xie D, and Pu S (2018). Co-occurrence feature learning from skeleton data for action recognition and detection with hierarchical aggregation, arXiv preprint arXiv:1804.06055.
- Lin BS, Wang LY, Hwang YT, Chiang PY, and Chou WJ (2018). Depth camera based system for estimating energy expenditure of physical activities in gyms, IEEE Journal of Biomedical and Health Informatics, 23, 1086-1095. https://doi.org/10.1109/JBHI.2018.2840834
- Maat S (2020). Clustering gestures using multiple techniques (Doctoral dissertation), Tilburg University, Tilburg, Netherlands.
- Park K (2016). Development of Kinect-based pose recognition model for exercise game, KIPS, 5, 303-310. https://doi.org/10.3745/KTCCS.2016.5.10.303
- Patsadu O, Nukoolkit C, and Watanapa B (2012). Human gesture recognition using Kinect camera. In Proceedings of 2012 Ninth International Conference on Computer Science and Software Engineering (JCSSE), Bangkok, Thailand, 28-32.
- Reddy VR and Chattopadhyay T (2014). Human activity recognition from kinect captured data using stick model. In Proceedings of International Conference on Human-Computer Interaction, Heraklion, Crete, Greece, 305-315.
- Shin BG, Kim UH, Lee SW, Yang JY, and Kim W (2021). Fall detection based on 2-Stacked Bi-LSTM and human-skeleton keypoints of RGBD camera, KIPS Transactions on Software and Data Engineering, 10, 491-500. https://doi.org/10.3745/KTSDE.2021.10.11.491
- Taha A, Zayed HH, Khalifa ME, and El-Horbaty ESM (2015). Human activity recognition for surveillance applications. In Proceedings of the 7th International Conference on Information Technology, Amman, Jordan, 577-586.
- Tao W, Liu T, Zheng R, and Feng H (2012). Gait analysis using wearable sensors, Sensors, 12, 2255-2283. https://doi.org/10.3390/s120202255
- Wang J, Liu Z, Wu Y, and Yuan J (2012). Mining actionlet ensemble for action recognition with depth cameras. In Proceedings of 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 1290-1297.
- Yang Y, Yan H, Dehghan M, and Ang MH (2015). Real-time human-robot interaction in complex environment using Kinect v2 image recognition. In Proceedings of 2015 IEEE 7th International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Siem Reap, Cambodia, 112-117.
- Zhu Y, Chen W, and Guo G (2013). Fusing spatiotemporal features and joints for 3D action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Portland, OR, USA, 486-491.