LoS/NLoS Identification-based Human Activity Recognition System Using Channel State Information

  • 권혁돈 (School of Software, Hallym University) ;
  • 권정혁 (Smart Computing Laboratory, Hallym University) ;
  • 이솔비 (Smart Computing Laboratory, Hallym University) ;
  • 김의직 (School of Software, Hallym University)
  • Received : 2024.05.15
  • Accepted : 2024.06.10
  • Published : 2024.06.30

Abstract

In this paper, we propose a Line-of-Sight (LoS)/Non-Line-of-Sight (NLoS) identification-based Human Activity Recognition (HAR) system using Channel State Information (CSI) to improve the accuracy of HAR, which changes dynamically depending on the reception environment. To account for the reception environment, the proposed system operates in three phases: a preprocessing phase, a classification phase, and an activity recognition phase. In the preprocessing phase, the amplitude is extracted from the raw CSI data and the noise in the extracted amplitude is removed. In the classification phase, the reception environment is categorized as either LoS or NLoS, and the HAR model is selected according to the categorization result. Finally, in the activity recognition phase, human activities are classified into sitting, walking, standing, and absent using the selected HAR model. To demonstrate the superiority of the proposed system, we implemented it experimentally and compared its accuracy with that of an existing HAR system. The results showed that the proposed system achieved 16.25% higher accuracy than the existing system.
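The pipeline below is a minimal Python sketch of the three phases described in the abstract, intended only as an illustration. It assumes complex CSI samples arranged as a (packets × subcarriers) NumPy array and a separate Keras-style HAR model per reception environment; the moving-average denoising and the variance-based LoS/NLoS rule are hypothetical placeholders, not the methods actually used in the paper.

```python
import numpy as np

# Activity classes used by the proposed system.
ACTIVITIES = ["sitting", "walking", "standing", "absent"]

def preprocess(csi_raw: np.ndarray, window: int = 5) -> np.ndarray:
    """Preprocessing phase: extract the amplitude from complex CSI samples
    (packets x subcarriers) and suppress noise. The moving-average filter
    is an illustrative placeholder for the paper's denoising step."""
    amplitude = np.abs(csi_raw)
    kernel = np.ones(window) / window
    # Smooth each subcarrier stream along the time (packet) axis.
    return np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode="same"), axis=0, arr=amplitude
    )

def classify_environment(amplitude: np.ndarray, threshold: float = 1.0) -> str:
    """Classification phase: identify the reception environment as LoS or NLoS.
    The variance threshold below is a hypothetical stand-in for the paper's
    classifier."""
    return "LoS" if amplitude.var(axis=0).mean() < threshold else "NLoS"

def recognize_activity(amplitude: np.ndarray, models: dict) -> str:
    """Activity recognition phase: run the HAR model selected for the
    identified reception environment and return the predicted activity."""
    env = classify_environment(amplitude)
    model = models[env]                                # one HAR model per environment
    scores = model.predict(amplitude[np.newaxis, ...]) # e.g., a Keras-style model
    return ACTIVITIES[int(np.argmax(scores))]
```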

Acknowledgement

This research was supported by the Technology Development Program of the Ministry of SMEs and Startups (MSS, Korea) in 2022 [S3278476]. This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education in 2020 (No. 2020R1I1A3052733). This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (Ministry of Science and ICT) (No. 2021R1C1C2095696). This research is a result of the third phase of the Leaders in INdustry-university Cooperation (LINC 3.0) program funded by the Ministry of Education and the National Research Foundation of Korea.
