Aerial Object Detection and Tracking based on Fusion of Vision and Lidar Sensors using Kalman Filter for UAV

  • Park, Cheonman (Department of Aeronautical Systems Engineering, Hanseo University) ;
  • Lee, Seongbong (Department of Unmanned Aircraft Systems, Hanseo University) ;
  • Kim, Hyeji (Department of Aeronautical Systems Engineering, Hanseo University) ;
  • Lee, Dongjin (Department of Unmanned Aircraft Systems, Hanseo University)
  • Received : 2020.08.13
  • Accepted : 2020.08.26
  • Published : 2020.09.30

Abstract

In this paper, we study an aerial object detection and position estimation algorithm for the safety of UAVs flying beyond visual line of sight (BVLOS). We use a vision sensor and a LiDAR to detect objects: a CNN-based YOLOv2 architecture detects objects in the 2D image, and a clustering method detects objects in the point cloud acquired from the LiDAR. With a single sensor, the detection rate can degrade in specific situations depending on the sensor's characteristics, so when the single-sensor detection result is missing or false, the detection accuracy must be complemented. To complement the accuracy of the single-sensor detection algorithms, we apply a Kalman filter and fuse the single-sensor results, improving overall detection accuracy. We then estimate the 3D position of the object from its pixel position and the distance measured by the LiDAR. We verified the performance of the proposed fusion algorithm through simulation in the Gazebo simulator.
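The fusion and position-estimation steps described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the constant-velocity motion model, the noise values, and the camera intrinsics (fx, fy, cx, cy) are all assumptions. The Kalman filter tracks the object's pixel position across frames, so a prediction can bridge frames where one sensor misses the object, and each available detection (vision, or a LiDAR cluster projected into the image) is applied as a separate measurement update; the back-projection function then recovers a 3D position in the camera frame from the fused pixel position and the LiDAR-measured range.

```python
import numpy as np

class FusionKalman:
    """Constant-velocity Kalman filter over image-plane position (u, v).

    State x = [u, v, du, dv]. Sketch only: model and noise values are
    illustrative assumptions, not the paper's tuned parameters."""

    def __init__(self, dt=0.1):
        self.x = np.zeros(4)                      # [u, v, du, dv]
        self.P = np.eye(4) * 1e3                  # large initial uncertainty
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt          # position += velocity * dt
        self.Q = np.eye(4) * 1e-2                 # process noise
        self.H = np.array([[1., 0., 0., 0.],
                           [0., 1., 0., 0.]])     # we observe (u, v) only

    def predict(self):
        """Propagate the state; used alone when both sensors miss."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z, meas_var):
        """Fuse one detection (vision or LiDAR) as a pixel measurement."""
        R = np.eye(2) * meas_var
        y = np.asarray(z, dtype=float) - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + R
        K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

def pixel_to_3d(u, v, distance, fx, fy, cx, cy):
    """Back-project pixel (u, v) with a LiDAR range into a 3D point in
    the camera frame, using the pinhole model (intrinsics assumed)."""
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    ray /= np.linalg.norm(ray)    # unit ray through the pixel
    return distance * ray         # scale by the measured distance

# Per frame: predict, then update with whichever detections are present.
kf = FusionKalman()
kf.predict()
kf.update([320.0, 240.0], meas_var=4.0)   # vision (YOLOv2) detection
kf.update([322.0, 238.0], meas_var=9.0)   # LiDAR cluster projected to image
p = pixel_to_3d(kf.x[0], kf.x[1], 10.0, 500.0, 500.0, 320.0, 240.0)
```

Sequentially applying the two updates is equivalent to a joint update when the sensor noises are independent, which is what makes this a simple way to fuse the two detection streams.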

References

  1. J.W. Lim and S.S. Rhim, "Estimation of Human Position and Velocity in Collaborative Robot System Using Visual Object Detection Algorithm and Kalman Filter," 2020 17th International Conference on Ubiquitous Robots (UR), June 2020. DOI: https://doi.org/10.1109/ur49135.2020.9144888.
  2. Y. Wu, Y. Sui, and G. Wang, "Vision-Based Real-Time Aerial Object Localization and Tracking for UAV Sensing System," IEEE Access, Vol. 5, pp. 23969-23978, October 2017. DOI: https://doi.org/10.1109/access.2017.2764419.
  3. K. Kidono, T. Miyasaka, A. Watanabe, T. Naito, and J. Miura, "Pedestrian recognition using high-definition LIDAR," 2011 IEEE Intelligent Vehicles Symposium (IV), June 2011. DOI: https://doi.org/10.1109/ivs.2011.5940433.
  4. B. Li, T. Zhang, and T. Xia, "Vehicle Detection from 3D Lidar Using Fully Convolutional Network," Robotics: Science and Systems XII, August 2016. DOI: https://doi.org/10.15607/RSS.2016.XII.042.
  5. J.S. Lee, M.G. Kim, and H.I. Kim, "Camera and LiDAR Sensor Fusion for Improving Object Detection," Journal of Broadcast Engineering (JBE), Vol. 24, No. 4, July 2019. DOI: https://doi.org/10.5909/JBE.2019.24.4.580.
  6. C. Premebida, G. Monteiro, U. Nunes, and P. Peixoto, "A Lidar and Vision-based Approach for Pedestrian and Vehicle Detection and Tracking," 2007 IEEE Intelligent Transportation Systems Conference, pp. 1044-1049, October 2007. DOI: https://doi.org/10.1109/itsc.2007.4357637.
  7. J. Redmon and A. Farhadi, "YOLO9000: Better, Faster, Stronger," 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017. DOI: https://doi.org/10.1109/cvpr.2017.690.
  8. N.A. Thacker and A.J. Lacey, Tutorial: The Kalman Filter, Tina Memo No. 1996-002, 1998.
  9. D.G. Yoo, T.L. Song, and D.S. Kim, "Track-to-Track Information Fusion using 2D and 3D Radars," Journal of Institute of Control, Robotics and Systems, Vol. 18, No. 9, pp. 863-870, September 2012. DOI: https://doi.org/10.5302/j.icros.2012.18.9.863.