Development of a deep-learning based automatic tracking of moving vehicles and incident detection processes on tunnels

  • Lee, Kyu Beom (Department of Future Technology and Convergence Research, Korea Institute of Civil Engineering and Building Technology; Smart City and Construction Convergence, University of Science & Technology (UST)) ;
  • Shin, Hyu Soung (Department of Future Technology and Convergence Research, Korea Institute of Civil Engineering and Building Technology) ;
  • Kim, Dong Gyu (Department of Infrastructure Safety Research, Korea Institute of Civil Engineering and Building Technology)
  • Received : 2018.10.10
  • Accepted : 2018.11.14
  • Published : 2018.11.30

Abstract

Driving in a road tunnel limits a driver's field of view, so an incident can easily be followed by a large secondary accident. An incident therefore needs to be detected automatically the moment it occurs so that a rapid initial response can be made. Automatic incident detection systems have existed, but they have failed to detect incidents reliably because of the poor quality of CCTV images captured in the harsh environment of enclosed tunnels. To overcome this limitation, a deep-learning-based automatic tunnel incident detection system has been under development, and a study on a deep-learning object detection network reported in November 2017 already showed excellent object detection performance. However, because object detection is performed on still images, the moving direction and speed of vehicles cannot be identified, which makes it difficult to judge incidents defined by movement characteristics, such as stopping and reverse driving. This paper proposes a process that automatically tracks the movement of detected vehicles by applying a separate object tracking method to the bounding-box information produced by the object detector. Based on the moving direction and speed obtained from tracking, an algorithm that discriminates stopping and reverse driving was developed, completing the deep-learning-based automatic tunnel incident detection system. The detection performance was then verified on videos containing incident situations. In the verification experiments, fire, stopping, and reverse-driving situations were all detected at the 100% level, whereas pedestrian situations showed a relatively low detection rate of 78.5%. Nevertheless, it is expected that the detection performance can be improved by continuously enlarging the incident video big data and periodically retraining the network.
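The tracking step described above (and shown in Fig. 1) associates the bounding boxes returned by the object detector across consecutive frames. Purely as an illustration, and not the authors' implementation, the following minimal Python sketch matches boxes between two frames greedily by their intersection over union (IoU); the function names and the IoU threshold are assumptions.

```python
# Minimal sketch (not the paper's implementation): greedy IoU matching of
# detected bounding boxes between consecutive frames to build vehicle tracks.

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def match_boxes(prev_boxes, curr_boxes, iou_threshold=0.3):
    """Greedily pair previous-frame boxes with current-frame boxes by IoU.

    Unmatched current boxes would start new tracks; unmatched previous
    boxes end their tracks.
    """
    candidates = [(iou(p, c), i, j)
                  for i, p in enumerate(prev_boxes)
                  for j, c in enumerate(curr_boxes)]
    candidates.sort(reverse=True)            # best overlaps first
    matches, used_prev, used_curr = [], set(), set()
    for score, i, j in candidates:
        if score < iou_threshold:
            break
        if i in used_prev or j in used_curr:
            continue
        matches.append((i, j))
        used_prev.add(i)
        used_curr.add(j)
    return matches
```

The frame-to-frame displacement of a matched track's box centroids then yields the moving direction and speed used in the stopping/reverse-driving decision.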

Keywords


Fig. 1. Object tracking process using bounding box information


Fig. 2. Reverse driving-stopping detection process using tracking bounding boxes
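
Fig. 2 covers the decision step. Purely as an illustration of the idea stated in the abstract (direction and speed derived from tracked bounding boxes), the following sketch classifies a track as stopping or reverse driving from its centroid displacement; the thresholds, the assumed normal driving direction vector, and all names are hypothetical, not the paper's algorithm.

```python
# Illustrative sketch only: label a tracked vehicle from the displacement of its
# bounding-box centroids. The normal traffic direction in the image is assumed
# to be known for the camera view (hypothetical setup, not the paper's method).

def centroid(box):
    """Center point of a box given as (x1, y1, x2, y2)."""
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

def classify_track(track_boxes, normal_direction, fps,
                   stop_speed=1.0, reverse_margin=0.0):
    """Return 'stopping', 'reverse' or 'normal' for one vehicle track.

    track_boxes      : boxes of one vehicle over consecutive frames
    normal_direction : unit vector (dx, dy) of normal traffic flow in the image
    fps              : video frame rate, to convert displacement to pixels/s
    """
    if len(track_boxes) < 2:
        return "normal"
    (x0, y0), (x1, y1) = centroid(track_boxes[0]), centroid(track_boxes[-1])
    seconds = (len(track_boxes) - 1) / fps
    vx, vy = (x1 - x0) / seconds, (y1 - y0) / seconds    # pixels per second
    speed = (vx ** 2 + vy ** 2) ** 0.5
    if speed < stop_speed:
        return "stopping"
    # Velocity component along the normal flow: negative means driving backwards.
    along_flow = vx * normal_direction[0] + vy * normal_direction[1]
    return "reverse" if along_flow < reverse_margin else "normal"
```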


Fig. 3. Concept of IoL (Intersection over Line)
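
The paper introduces its own IoL (Intersection over Line) measure in Fig. 3; its exact definition is not reproduced on this page. As a loose illustration only, the sketch below assumes IoL compares the overlap of two boxes along a single image axis to the extent of one of them; this reading and the function name are assumptions, not the paper's definition.

```python
# Assumed reading of IoL, for illustration only: overlap of two boxes along one
# image axis, divided by the extent of the first box along that axis.

def iol(a, b, axis=0):
    """1-D overlap ratio of boxes a, b (x1, y1, x2, y2) along axis 0 (x) or 1 (y)."""
    lo_a, hi_a = a[axis], a[axis + 2]
    lo_b, hi_b = b[axis], b[axis + 2]
    overlap = max(0.0, min(hi_a, hi_b) - max(lo_a, lo_b))
    extent = hi_a - lo_a
    return overlap / extent if extent > 0 else 0.0
```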


Fig. 4. Deep learning and tracking based incident detection processes


Fig. 5. Three types of evaluation of the tunnel incident detection system


Fig. 6. Test results of Faster R-CNN model


Fig. 7. Composition of the tunnel incident detection system


Fig. 8. Composition of deep learning based incident inference module


Fig. 9. Multitasking process of inference core module

Table 1. Object tracking success or failure with respect to video frame rate


Table 2. Composition of tunnel incident video big data


Table 3. Tunnel incident detection results


References

  1. Bewley, A., Ge, Z., Ott, L., Ramos, F., Upcroft, B. (2016), "Simple online and realtime tracking", Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), pp. 3464-3468.
  2. Kim, D.G., Shin, Y.W., Shin, Y.S. (2012), "Section enlargement by reinforcement of shotcrete lining on the side wall of operating road tunnel", Journal of Korean Tunnelling and Underground Space Association, Vol. 14, No. 6, pp. 637-652. https://doi.org/10.9711/KTAJ.2012.14.6.637
  3. Kim, T.B. (2016), "The national highway, expressway tunnel video incident detection system performance analysis and reflect attributes for double deck tunnel in great depth underground space", Journal of the Korea Institute of Information and Communication Engineering, Vol. 20, No. 7, pp. 1325-1334. https://doi.org/10.6109/JKIICE.2016.20.7.1325
  4. Lee, J.S., Lee, S.K., Kim, D.W., Hong, S.J., Yang, S.I. (2018), "Trends on object detection techniques based on deep learning", Electronics and Telecommunications Trends, Vol. 33, No. 4, pp. 23-32. https://doi.org/10.22648/ETRI.2018.J.330403
  5. Ministry of Land, Infrastructure and Transport (MOLIT) (2016), "Attempt for faultless safety system of road tunnels". Press Release.
  6. Ministry of Land, Infrastructure and Transport (MOLIT) (2016), "Guideline of installation of disaster prevention facilities on road tunnels".
  7. Ren, S., He, K., Girshick, R., Sun, J. (2015), "Faster R-CNN: Towards real-time object detection with region proposal networks", Proceedings of the Advances in Neural Information Processing Systems, pp. 91-99.
  8. Roh, C.G., Park, B.J., Kim, J.S. (2016), "A study on the contents for operation of tunnel management systems using a view synthesis technology", The Journal of the Korea Contents Association, Vol. 16, No. 6, pp. 507-515. https://doi.org/10.5392/JKCA.2016.16.06.507
  9. Shin, H.S., Kim, D.K., Yim, M.J., Lee, K.B., Oh, Y.S. (2017), "A preliminary study for development of an automatic incident detection system on CCTV in tunnels based on a machine learning algorithm", Journal of Korean Tunnelling and Underground Space Association, Vol. 19, No. 1, pp. 95-107. https://doi.org/10.9711/KTAJ.2017.19.1.095
  10. Shin, H.S., Lee, K.B., Yim, M.J., Kim, D.K. (2017), "Development of a deep-learning based tunnel incident detection system on CCTVs", Journal of Korean Tunnelling and Underground Space Association, Vol. 19, No. 6, pp. 915-936. https://doi.org/10.9711/KTAJ.2017.19.6.915
  11. Yilmaz, A., Javed, O., Shah, M. (2006), "Object tracking: A survey", ACM Computing Surveys (CSUR), Vol. 38, No. 4, Article No. 13.
  12. Zhu, M. (2004), "Recall, precision and average precision", Department of Statistics and Actuarial Science, University of Waterloo, Waterloo 2: 30.
  13. Zitnick, C.L., Dollar, P. (2014), "Edge boxes: Locating object proposals from edges", Proceedings of the European Conference on Computer Vision, pp. 391-405.