Human activity recognition with analysis of angles between skeletal joints using a RGB-depth sensor

  • Ince, Omer Faruk (Center for Intelligent and Interactive Robotics, Korea Institute of Science and Technology) ;
  • Ince, Ibrahim Furkan (Department of Electronics Engineering, Kyungsung University) ;
  • Yildirim, Mustafa Eren (Department of Electronics Engineering, Kyungsung University) ;
  • Park, Jang Sik (Department of Electronics Engineering, Kyungsung University) ;
  • Song, Jong Kwan (Department of Electronics Engineering, Kyungsung University) ;
  • Yoon, Byung Woo (Department of Electronics Engineering, Kyungsung University)
  • Received : 2018.10.24
  • Accepted : 2019.07.29
  • Published : 2020.02.07

Abstract

Human activity recognition (HAR) has become an effective computer-vision tool for video surveillance systems. In this paper, a novel biometric system that can detect human activities in 3D space is proposed. To implement HAR, joint angles obtained using an RGB-depth sensor are used as features. Because HAR operates in the time domain, the angle information is stored using a sliding-kernel method. The Haar-wavelet transform (HWT) is applied to preserve the information in the features before the data dimension is reduced. Dimension reduction using an averaging algorithm is then applied to decrease the computational cost, providing faster performance while maintaining high accuracy. Before classification, a proposed thresholding method with the inverse HWT is applied to extract the final feature set. Finally, the k-nearest neighbor (k-NN) algorithm is used to recognize the activity from the given data. The proposed method compares favorably with other machine-learning algorithms.
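
To make the pipeline concrete, the following is a minimal Python sketch of the steps described above: joint angles computed from 3D joint positions, a sliding kernel over the angle time series, a one-level Haar-wavelet transform, block averaging for dimension reduction, detail-coefficient thresholding with an inverse HWT, and k-NN classification. The window, step, block, and threshold values, and all function names, are illustrative assumptions rather than the authors' implementation.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def joint_angle(a, b, c):
        """Angle (degrees) at joint b formed by 3D joint positions a-b-c."""
        v1, v2 = a - b, c - b
        cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    def haar_1d(x):
        """One-level Haar wavelet transform: approximation/detail coefficients."""
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)
        return approx, detail

    def inverse_haar_1d(approx, detail):
        """Inverse of haar_1d."""
        x = np.empty(2 * len(approx))
        x[0::2] = (approx + detail) / np.sqrt(2)
        x[1::2] = (approx - detail) / np.sqrt(2)
        return x

    def extract_features(angles, window=64, step=8, block=4, thresh=0.1):
        """Sliding kernel -> HWT -> block averaging -> thresholding -> inverse HWT."""
        feats = []
        for start in range(0, len(angles) - window + 1, step):
            approx, detail = haar_1d(angles[start:start + window])
            # dimension reduction: average the coefficients in blocks of `block`
            approx = approx.reshape(-1, block).mean(axis=1)
            detail = detail.reshape(-1, block).mean(axis=1)
            # thresholding: discard small detail coefficients as noise
            detail = np.where(np.abs(detail) >= thresh * np.abs(detail).max(),
                              detail, 0.0)
            # inverse HWT of the reduced coefficients gives the final feature vector
            feats.append(inverse_haar_1d(approx, detail))
        return np.asarray(feats)

    # e.g., angle at the elbow from shoulder, elbow, and wrist positions (90 deg here)
    print(joint_angle(np.array([0., 0., 0.]),
                      np.array([0., -1., 0.]),
                      np.array([1., -1., 0.])))

    # toy usage: two synthetic "activities" with different angle dynamics
    rng = np.random.default_rng(0)
    t = np.linspace(0, 20, 400)
    walk = 90 + 30 * np.sin(2 * np.pi * t) + rng.normal(0, 2, t.size)   # periodic
    stand = 90 + rng.normal(0, 2, t.size)                               # static
    X = np.vstack([extract_features(walk), extract_features(stand)])
    y = np.array([0] * (len(X) // 2) + [1] * (len(X) - len(X) // 2))
    knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
    print(knn.score(X, y))

On synthetic data, this toy example only sanity-checks the plumbing; the paper evaluates the features on real skeleton sequences captured with an RGB-depth sensor.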

Cited by

  1. Performance Boosting of Scale and Rotation Invariant Human Activity Recognition (HAR) with LSTM Networks Using Low Dimensional 3D Posture Data in Egocentric Coordinates, vol. 10, no. 23, 2020, https://doi.org/10.3390/app10238474
  2. Modeling Two-Person Segmentation and Locomotion for Stereoscopic Action Identification: A Sustainable Video Surveillance System, vol. 13, no. 2, 2020, https://doi.org/10.3390/su13020970