Object detection and distance measurement system with sensor fusion

  • Lee, Tae-Min (Dept. of Electronic Engineering, Hanyang University) ;
  • Kim, Jung-Hwan (Dept. of Electronic Engineering, Hanyang University) ;
  • Lim, Joonhong (Dept. of Electronic Engineering, Hanyang University)
  • Received : 2020.03.06
  • Accepted : 2020.03.18
  • Published : 2020.03.31

Abstract

In this paper, we propose an efficient sensor fusion method for object recognition and distance measurement in autonomous vehicles. The sensors typically used in autonomous vehicles are radar, lidar, and camera. Among these, the lidar sensor is used to build a map of the vehicle's surroundings; however, its performance degrades in adverse weather and the sensor is very expensive. To compensate for these shortcomings, we measure distance and monitor the vehicle's surroundings with a radar sensor, which is relatively inexpensive and unaffected by snow, rain, and fog. A camera sensor, which offers a high object recognition rate, is fused with the radar to recognize objects and measure their distances. The fused video is transmitted to a smartphone in real time through an IP server, so it can serve as an autonomous driving assistance system that assesses the current vehicle situation from both inside and outside the vehicle.
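The abstract describes the pipeline only at a high level: camera-based object detection, FMCW radar ranging, and real-time streaming through an IP server (cf. refs. 5-8). The Python sketch below illustrates one plausible way to wire these pieces together. The YOLOv3 model files, the read_radar_targets() helper, the assumed 60° camera field of view, and the Flask MJPEG route are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a camera + radar fusion and streaming pipeline of the kind
# the abstract describes. Model files, read_radar_targets(), CAM_FOV_DEG and
# the Flask route are assumptions for illustration only.
import cv2
import numpy as np
from flask import Flask, Response

app = Flask(__name__)

# Camera branch: YOLOv3 detector loaded through OpenCV's DNN module (cf. ref. 7).
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
out_names = net.getUnconnectedOutLayersNames()
CAM_FOV_DEG = 60.0  # assumed horizontal field of view of the camera

def detect_objects(frame, conf_thr=0.5, nms_thr=0.4):
    """Run YOLOv3 and return a list of (x, y, w, h) boxes after NMS."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    boxes, scores = [], []
    for out in net.forward(out_names):
        for det in out:
            score = float(det[5:].max())
            if score > conf_thr:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                scores.append(score)
    keep = cv2.dnn.NMSBoxes(boxes, scores, conf_thr, nms_thr)
    return [boxes[i] for i in np.array(keep).flatten().astype(int)]

def read_radar_targets():
    """Hypothetical radar interface: returns [(azimuth_deg, range_m), ...]
    from the FMCW radar's baseband signal processor (cf. refs. 5, 6)."""
    return []

def fuse(frame, boxes, targets):
    """Associate each radar target with the detection box whose horizontal
    span contains the target azimuth, then overlay the measured range."""
    w = frame.shape[1]
    for az, rng in targets:
        u = int((az / CAM_FOV_DEG + 0.5) * w)  # azimuth -> pixel column
        for (x, y, bw, bh) in boxes:
            if x <= u <= x + bw:
                cv2.rectangle(frame, (x, y), (x + bw, y + bh), (0, 255, 0), 2)
                cv2.putText(frame, f"{rng:.1f} m", (x, y - 5),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
                break
    return frame

def mjpeg_stream():
    """Yield fused frames as an MJPEG stream (IP server branch, cf. ref. 8)."""
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = fuse(frame, detect_objects(frame), read_radar_targets())
        ok, jpg = cv2.imencode(".jpg", frame)
        yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n" + jpg.tobytes() + b"\r\n")

@app.route("/video")
def video():
    return Response(mjpeg_stream(), mimetype="multipart/x-mixed-replace; boundary=frame")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

A smartphone on the same network could then view the fused stream by opening http://<server-ip>:5000/video in a browser. Associating radar targets with detection boxes purely by azimuth, as done here, is the simplest possible data association and would need refinement (e.g., temporal tracking) in a real system.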

Keywords

References

  1. Alex Krizhevsky, Ilya Sutskever and Geoffrey E. Hinton, "ImageNet classification with deep convolutional neural networks," Advances in neural information processing systems, 2012. DOI: 10.1145/3065386
  2. Ross Girshick, Jeff Donahue, Trevor Darrell and Jitendra Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," Proceedings of the IEEE conference on computer vision and pattern recognition, 2014. DOI: 10.1109/cvpr.2014.81
  3. Shaoqing Ren, Kaiming He, Ross Girshick and Jian Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.39, No.6, pp.1137-1149, 2017. DOI: 10.1109/TPAMI.2016.2577031
  4. Kaiming He, Georgia Gkioxari, Piotr Dollar and Ross Girshick, "Mask R-CNN," Proceedings of the IEEE international conference on computer vision, 2017. DOI: 10.1109/iccv.2017.322
  5. A. G. Stove, "Linear FMCW radar techniques," IEE Proceedings F-Radar and Signal Processing, Vol.139, pp.343-350, 1992. DOI: 10.1049/ip-f-2.1993.0019
  6. Jau-Jr Lin, Yuan-Ping Li, Wei-Chiang Hsu and Ta-Sung Lee, "Design of an FMCW radar baseband signal processing system for automotive application," SpringerPlus, Vol.5, pp.42, 2016. DOI: 10.1186/s40064-015-1583-5
  7. Joseph Redmon and Ali Farhadi, "YOLOv3: An incremental improvement," arXiv preprint arXiv:1804.02767, 2018.
  8. Gareth Dwyer, Jack Stouffer and Shalabh Aggarwal, Flask: Building Python Web Services, Packt Publishing, 2017.
  9. Hyunggi Cho, Young-Woo Seo, B.V.K. Vijaya Kumar and Ragunathan Raj Rajkumar, "A multi-sensor fusion system for moving object detection and tracking in urban driving environments," 2014 IEEE International Conference on Robotics and Automation (ICRA), pp.1836-1843, 2014. DOI: 10.1109/ICRA.2014.6907100
  10. Ricardo Omar Chavez-Garcia and Olivier Aycard, "Multiple Sensor Fusion and Classification for Moving Object Detection and Tracking," IEEE Transactions on Intelligent Transportation Systems, Vol.17, No.2, pp.525-534, 2016. DOI: 10.1109/TITS.2015.2479925