
Predicting Traffic Accident Risk based on Driver Abnormal Behavior and Gaze

  • Ji-Woong Yang (Dept. of Artificial Intelligence Semiconductor Engineering, Hanyang University) ;
  • Hyeon-Jin Jung (Dept. of Computer Science, Yonsei University) ;
  • Han-Jin Lee (Dept. of Computer Science, Yonsei University) ;
  • Tae-Wook Kim (Division of Software, Yonsei University) ;
  • Ellen J. Hong (Division of Software, Yonsei University)
  • Received : 2024.06.11
  • Accepted : 2024.07.24
  • Published : 2024.08.30

Abstract

In this paper, we propose a new approach that assesses and predicts the risk of traffic accidents by analyzing driver behavior and gaze changes inside the vehicle in real time. Using data analysis and machine learning algorithms, the proposed method precisely measures drivers' abnormal behaviors and gaze-movement patterns in real time and aggregates the resulting risk scores into an overall Risk Score that evaluates the likelihood of a traffic accident. This work underscores the significance of internal, in-vehicle factors that prior research has largely left unexplored, offering a novel perspective in the field of traffic safety research. This approach suggests the feasibility of developing real-time predictive models for traffic accident prevention and safety enhancement, and is expected to provide important foundational data for future accident-prevention strategies and policy formulation.

In contrast to prior studies that have focused mainly on external factors such as the physical condition of the road and the road environment, this study presents a new approach that measures and predicts traffic accident risk by analyzing, in real time, the driver's behavior and gaze changes occurring inside the vehicle. The driver's abnormal behaviors and gaze-movement patterns are precisely measured in real time, and the risk scores derived from each are summed to evaluate the overall risk of a traffic accident. This study emphasizes the importance of internal factors that previous research has not addressed, providing a new perspective on traffic safety research. This innovative approach suggests the feasibility of developing real-time predictive models for traffic accident prevention and safety improvement, and is expected to provide important foundational data for future traffic accident prevention strategies and policy formulation.
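The abstract describes summing per-factor risk scores (abnormal behavior and gaze) into an overall Risk Score. A minimal sketch of that aggregation step is shown below; the weighting scheme, the clipping to [0, 1], the function names, and the alert thresholds are all illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the Risk Score aggregation described in the abstract.
# Weights and thresholds below are assumed for illustration only.

def overall_risk_score(behavior_score: float, gaze_score: float,
                       w_behavior: float = 0.6, w_gaze: float = 0.4) -> float:
    """Combine the two per-factor risk scores with a weighted sum,
    clipped to the range [0, 1]."""
    score = w_behavior * behavior_score + w_gaze * gaze_score
    return max(0.0, min(1.0, score))

def risk_level(score: float) -> str:
    """Map the aggregate score to a coarse alert level (thresholds assumed)."""
    if score >= 0.7:
        return "high"
    if score >= 0.4:
        return "medium"
    return "low"
```

In a real-time pipeline, `behavior_score` and `gaze_score` would be produced per frame (or per short window) by the respective recognition models, and the aggregate would be recomputed continuously as new frames arrive.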

Keywords

Acknowledgement

This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korea government (2022R1F1A1074273).

References

  1. World Health Organization, "Global status report on road safety 2023," Geneva: World Health Organization, 2023. 
  2. A. R. Quimby, G. Maycock, I. D. Carter, R. Dixon, and J. G. Wall, Perceptual Abilities of Accident Involved Drivers, Report No. RR 27, 1986. 
  3. I. S. Kim, "The Study on Risk of Distracted Driving," 2008 Annual Conference of the Korean Psychological Association, pp. 136-137, Aug 2008. 
  4. J. C. Stutts, H. F. Herman, and W. W. William, "Cell phone use while driving in North Carolina: 2002 update report," 2002. 
  5. D. L. Strayer, F. A. Drews, and D. J. Crouch, "Fatal distraction? A comparison of the cell-phone driver and the drunk driver," Driving Assessment Conference, Vol. 2, University of Iowa, 2003. 
  6. J. H. Cho, I. S. Song, and T. J. Song, "Concept and Relationship of Driver's Inattention and Distraction in Traffic Accident," Proceedings of the KOR-KST Conference, No. 73, pp. 156-161, Oct 2015. 
  7. S. C. Lee, "The Effects of Driving Behavior Determinants on Dangerous Driving and Traffic Accidents in the Reckless Drivers Group: A Path Analysis Study," Journal of Korean Society of Transportation, Vol. 25, No. 2, pp. 95-105, Apr 2007. 
  8. J. S. Cho, H. S. Lee, J. Y. Lee, and D. K. Kim, "The Hazardous Expressway Sections for Drowsy Driving Using Digital Tachograph in Truck," Journal of Korean Society of Transportation, Vol. 35, No. 2, pp. 160-168, Apr 2017. 
  9. S. Y. Lee and S. C. Lee, "The effects of driving confidence level on drunken drivers: A path analysis study," Korean Journal of Industrial and Organizational Psychology, Vol. 20, No. 1, pp. 43-55, 2007. 
  10. Y. J. Song, "Real-time driver behavior recognition system using a CNN-LSTM model," Master's Thesis, Hanyang University, 2021. 
  11. Y. Xing, C. Lv, H. Wang, D. Cao, E. Velenis, and F. Y. Wang, "Driver Activity Recognition for Intelligent Vehicles: A Deep Learning Approach," IEEE Transactions on Vehicular Technology, Vol. 68, No. 6, pp. 5379-5390, June 2019, doi: 10.1109/TVT.2019.2908425. 
  12. C. Zhang, R. Li, W. Kim, D. Yoon, and P. Patras, "Driver Behavior Recognition via Interwoven Deep Convolutional Neural Nets With Multi-Stream Inputs," IEEE Access, Vol. 8, pp. 191138-191151, Oct 2020, doi: 10.1109/ACCESS.2020.3032344. 
  13. L. Yang, H. Yang, H. Wei, Z. Hu, and C. Lv, "Video-Based Driver Drowsiness Detection With Optimised Utilization of Key Facial Features," IEEE Transactions on Intelligent Transportation Systems, Vol. 25, No. 7, pp. 6938-6950, July 2024, doi: 10.1109/TITS.2023.3346054. 
  14. Z. Li and D. Jin, "Driver anomalous driving behavior detection based on supervised contrastive learning," 4th International Conference on Computer Vision and Data Mining (ICCVDM 2023), Vol. 13063, pp. 667-674, Sep 2024. 
  15. F. Tango and M. Botta, "Real-Time Detection System of Driver Distraction Using Machine Learning," IEEE Transactions on Intelligent Transportation Systems, Vol. 14, No. 2, pp. 894-905, June 2013, doi: 10.1109/TITS.2013.2247760. 
  16. H. K. Kim and S. K. Ryu, "Study of MTCNN-based driver gaze information visualization," Journal of Digital Art Engineering and Multimedia, Vol. 9, No. 4, pp. 431-440, Dec 2022, doi: 10.29056/jdaem.2022.12.09. 
  17. Y. Guan, Z. Chen, W. Zeng, Z. Cao, and Y. Xiao, "End-to-End Video Gaze Estimation via Capturing Head-Face-Eye Spatial-Temporal Interaction Context," IEEE Signal Processing Letters, Vol. 30, pp. 1687-1691, Nov 2023, doi: 10.1109/LSP.2023.3332569. 
  18. L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang, and L. Van Gool, "Temporal Segment Networks: Towards Good Practices for Deep Action Recognition," European Conference on Computer Vision, pp. 20-36, Sep 2016, doi: 10.1007/978-3-319-46484-8_2. 
  19. L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang, and L. Van Gool, "Temporal Segment Networks for Action Recognition in Videos," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 41, No. 11, pp. 2740-2755, Nov 2019.