Development on Identification Algorithm of Risk Situation around Construction Vehicle using YOLO-v3

  • Seung-Bo Shim (Next-Generation Infrastructure Research Center, Korea Institute of Civil Engineering and Building Technology)
  • Sang-Il Choi (Next-Generation Infrastructure Research Center, Korea Institute of Civil Engineering and Building Technology)
  • Received : 2019.05.13
  • Accepted : 2019.07.05
  • Published : 2019.07.31

Abstract

Recently, the Korean government has been taking new approaches to reduce the accident rate and accident death rate of the construction industry, which remain high relative to industry as a whole. In particular, in line with the 4th Industrial Revolution, it is investing heavily in the development of construction technology fused with ICT. Against this background, this paper proposes a concept in which the construction machine operator and the surrounding workers recognize and share work-situation information, in order to enhance safety wherever construction machines are operated. To realize part of this concept, we applied camera-based image processing technology using artificial intelligence to earth-moving work. Specifically, through experiments with compaction equipment, we implemented a YOLO-v3-based image processing algorithm that recognizes the circumstances of nearby workers during earthwork and identifies risk situations. The algorithm processes 15.06 frames per second of video and recognizes danger situations around a construction machine with an accuracy of 90.48%. We expect this technology to contribute to the prevention of safety accidents at construction sites in the future.
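
Since the abstract only outlines the pipeline, the following is a minimal Python sketch of the general approach, not the authors' published code: YOLO-v3 detects people in each video frame, each detection is tested against a danger-region boundary, and throughput is measured. The model file names, the 0.5 confidence threshold, the input video, and the in_danger_region helper are illustrative assumptions (a line-based generalization of the boundary test is sketched under Fig. 6 below).

```python
# Minimal sketch, not the authors' code: YOLO-v3 person detection on video,
# assuming a pretrained Darknet model loaded through OpenCV's DNN module.
# File names, thresholds, and the input video are illustrative assumptions.
import time

import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")  # assumed paths
out_layers = net.getUnconnectedOutLayersNames()

def detect_people(frame, conf_threshold=0.5):
    """Return [x, y, w, h] boxes for COCO class 0 ('person'). NMS omitted for brevity."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    boxes = []
    for output in net.forward(out_layers):
        for det in output:              # det = [cx, cy, bw, bh, objectness, 80 class scores]
            scores = det[5:]
            if np.argmax(scores) == 0 and scores[0] > conf_threshold:
                cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
    return boxes

def in_danger_region(x, y):
    # Placeholder boundary test (assumed): the danger region is the lower part
    # of the image; a line-based generalization is sketched under Fig. 6 below.
    return y > 400

cap = cv2.VideoCapture("construction_site.mp4")  # assumed input
frames, start = 0, time.time()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for x, y, bw, bh in detect_people(frame):
        # Test the worker's foot point against the danger region.
        risky = in_danger_region(x + bw // 2, y + bh)
        color = (0, 0, 255) if risky else (0, 255, 0)
        cv2.rectangle(frame, (x, y), (x + bw, y + bh), color, 2)
    frames += 1
cap.release()
print(f"{frames / (time.time() - start):.2f} FPS")  # the paper reports 15.06 FPS
```

The loop mirrors the flow sketched in Fig. 7: detect objects per frame, classify each detection as safe or dangerous, and report throughput; matching the reported 15.06 FPS would in practice require GPU-backed inference.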

Keywords

Fig. 1. Hazard rate (%)[1]

Fig. 2. Accident death rate per 10,000 persons[1]

Fig. 3. Construction safety system flow

Fig. 4. Part of the construction safety system for recognizing surrounding objects and assessing the situation using a camera.

Fig. 5. The YOLO Detection System[16]

Fig. 6. Boundary between the safe and dangerous regions, and positions of the detected objects in the image
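
Given this caption, the risk decision plausibly reduces to testing on which side of a boundary drawn in the image a detected worker stands. As a hedged illustration (the paper does not publish this code), the placeholder in_danger_region from the sketch above can be generalized to an arbitrary straight line:

```python
# Illustrative only: model the danger region as one side of a straight line
# a*x + b*y + c = 0 drawn in image coordinates around the machine.
# The coefficients below are assumptions, not values from the paper.
A_COEF, B_COEF, C_COEF = 0.0, 1.0, -400.0  # example: horizontal line at y = 400 px

def in_danger_region(x, y):
    """True if the point (e.g., a worker's foot position) falls on the danger side."""
    return A_COEF * x + B_COEF * y + C_COEF > 0
```

Any convex region test (for example, a point-in-polygon check around the machine) could replace the line test without changing the rest of the pipeline.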

Fig. 7. Flowchart of detection algorithm for risk situation in construction sites

Fig. 8. Images for validation of detection algorithm

Fig. 9. Results of detection algorithm

Table 1. Analysis of the sensor’s function for construction safety[2]

Table 2. Accuracy of detection algorithm for risk situation in construction sites

References

  1. Ministry of Land, Infrastructure and Transport. (Oct. 31, 2018). Smart Construction Technology Road Map. Available from: http://www.molit.go.kr/USR/NEWS/m_71/dtl.jsp?id=95081506 (accessed Feb. 8, 2019)
  2. B. Jo, Y. Lee, D. Kim, J. Kim, P. Choi, "Image-based proximity warning system for excavator of construction sites", Journal of the Korea Contents Association, vol. 16, no. 10, pp. 588-597, 2016. DOI: http://dx.doi.org/10.5392/JKCA.2016.16.10.588
  3. J. Y. Soh, J. Lee, C. H. Han, "Development of Omnidirectional Object Detecting Technology for a safer excavator", Journal of the Korea Institute of Building Construction, vol. 10, no. 4, pp. 105-112, 2010. DOI: https://doi.org/10.5345/JKIC.2010.10.4.105
  4. J. Seo, S. Han, S. Lee, H. Kim, "Computer vision techniques for construction safety and health monitoring", Advanced Engineering Informatics, vol. 29, pp. 239-251, 2015. DOI: http://dx.doi.org/10.1016/j.aei.2015.02.001
  5. J. Na, S. Lee, C. Kim, H. Son, C. Kim, "Real-time vision-based proximity detection for improved worker safety in construction equipment operation", Proc. of the Architectural Institute of Korea, vol. 35, no. 2, pp. 31-32, 2015.
  6. S. Han, S. Lee, "A vision-based motion capture and recognition framework for behavior-based safety management", Automation in Construction, vol. 35, pp. 131-141, 2013. DOI: http://dx.doi.org/10.1016/j.autcon.2013.05.001
  7. H. Kim, H. Kim, Y. W. Hong, H. Byun, "Detecting construction equipment using a region-based fully convolutional network and transfer learning", Journal of Computing in Civil Engineering, vol. 32, no. 2, 04017082, 2018. DOI: https://doi.org/10.1061/(ASCE)CP.1943-5487.0000731
  8. S. Ren, K. He, R. Girshick, J. Sun, "Faster R-CNN: towards real-time object detection with region proposal networks", Proc. of Advances in Neural Information Processing Systems 28 (NIPS 2015), Montreal, Canada, pp. 91-99, 2015.
  9. O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, L. Fei-Fei, "ImageNet large scale visual recognition challenge", International Journal of Computer Vision, vol. 115, no. 3, pp. 211-252, 2015. DOI: https://doi.org/10.1007/s11263-015-0816-y
  10. R. Girshick, J. Donahue, T. Darrell, J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation", Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, Ohio, pp. 580-587, 2014.
  11. R. Girshick, "Fast R-CNN", Proc. of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, pp. 1440-1448, 2015.
  12. K. He, G. Gkioxari, P. Dollár, R. Girshick, "Mask R-CNN", Proc. of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, pp. 2980-2988, 2017.
  13. X. Glorot, Y. Bengio, "Understanding the difficulty of training deep feedforward neural networks", Proc. of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS), Sardinia, Italy, pp. 249-256, 2010.
  14. J. R. Uijlings, K. E. Van De Sande, T. Gevers, A. W. Smeulders, "Selective Search for Object Recognition", International Journal of Computer Vision, vol. 104, no. 2, pp. 154-171, 2013. DOI: https://doi.org/10.1007/s11263-013-0620-5
  15. W. Kim, S. Park, R. Lee, J. Seo, "A case study on the application of machine guidance in construction field", Journal of the Korean Society of Civil Engineers, vol. 38, no. 5, pp. 721-731, 2018. DOI: https://doi.org/10.12652/Ksce.2018.38.5.0721
  16. J. Redmon, S. Divvala, R. Girshick, A. Farhadi, "You only look once: Unified, real-time object detection", Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, Nevada, pp. 779-788, 2016.
  17. J. Redmon, A. Farhadi, "YOLOv3: An incremental improvement", arXiv preprint arXiv:1804.02767, 2018.