
A CPU-GPU Hybrid System of Environment Perception and 3D Terrain Reconstruction for Unmanned Ground Vehicle

  • Song, Wei (Dept. of Digital Media Technology, North China University of Technology)
  • Zou, Shuanghui (Dept. of Digital Media Technology, North China University of Technology)
  • Tian, Yifei (Dept. of Computer and Information Science, University of Macau)
  • Sun, Su (Dept. of Digital Media Technology, North China University of Technology)
  • Fong, Simon (Dept. of Computer and Information Science, University of Macau)
  • Cho, Kyungeun (Dept. of Multimedia Engineering, Dongguk University)
  • Qiu, Lvyang (Dept. of Multimedia Engineering, Dongguk University)
  • Received : 2018.08.31
  • Accepted : 2018.10.22
  • Published : 2018.12.31

Abstract

Environment perception and three-dimensional (3D) reconstruction tasks provide unmanned ground vehicles (UGVs) with driving awareness interfaces. The speed of obstacle segmentation and surrounding terrain reconstruction crucially influences decision making in UGVs. To increase the processing speed of environment information analysis, we develop a CPU-GPU hybrid system for automatic environment perception and 3D terrain reconstruction based on the integration of multiple sensors. The system consists of three functional modules: multi-sensor data collection and pre-processing, environment perception, and 3D reconstruction. To integrate the individual datasets collected from the different sensors, the pre-processing function registers the sensed LiDAR (light detection and ranging) point clouds, video sequences, and motion information into a global terrain model after filtering redundant and noisy data according to a redundancy removal principle. In the environment perception module, the registered discrete points are clustered into the ground surface and individual objects by a ground segmentation method and a connected component labeling algorithm. The estimated ground surface and non-ground objects indicate the traversable terrain and the obstacles in the environment, thus creating driving awareness. The 3D reconstruction module calibrates the projection matrix between the mounted LiDAR and cameras to map the local point clouds onto the captured video images. Textured meshes and colored particle models are used to reconstruct the ground surface and the objects of the 3D terrain model, respectively. To accelerate the proposed system, we use GPU parallel computation to execute the computer graphics and image processing algorithms in parallel.
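
As a concrete illustration of the per-point work that the environment perception module parallelizes, the following CUDA sketch labels each LiDAR point as ground or non-ground with one GPU thread per point. This is a minimal sketch under strong assumptions, not the authors' implementation: the flat-ground height test, the tolerance value, and all names (Point, classifyGround, groundZ, tol) are hypothetical stand-ins for the paper's ground segmentation method.

#include <cstdio>
#include <cuda_runtime.h>

// A LiDAR return reduced to its 3D coordinates.
struct Point { float x, y, z; };

// One thread per point: mark a point as ground (1) when its height lies
// below an assumed flat ground level plus a tolerance. The paper's actual
// ground segmentation method is more elaborate; this threshold test is
// only a placeholder for whatever per-point classifier is used.
__global__ void classifyGround(const Point* pts, int n, float groundZ,
                               float tol, int* isGround)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        isGround[i] = (pts[i].z < groundZ + tol) ? 1 : 0;
}

int main()
{
    // Four synthetic points: two near the ground plane, two elevated.
    const int n = 4;
    Point hostPts[n] = {{0,0,0.02f}, {1,0,0.05f}, {2,0,1.5f}, {3,0,0.8f}};
    int hostLabels[n];

    Point* devPts = nullptr;
    int* devLabels = nullptr;
    cudaMalloc((void**)&devPts, n * sizeof(Point));
    cudaMalloc((void**)&devLabels, n * sizeof(int));
    cudaMemcpy(devPts, hostPts, n * sizeof(Point), cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all points.
    classifyGround<<<(n + 255) / 256, 256>>>(devPts, n, 0.0f, 0.1f, devLabels);
    cudaMemcpy(hostLabels, devLabels, n * sizeof(int), cudaMemcpyDeviceToHost);

    for (int i = 0; i < n; ++i)
        printf("point %d: %s\n", i, hostLabels[i] ? "ground" : "non-ground");

    cudaFree(devPts);
    cudaFree(devLabels);
    return 0;
}

In the full pipeline described in the abstract, such per-point labels would then feed the connected component labeling pass that clusters the non-ground points into individual obstacle candidates.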



Fig. 1. The proposed system framework of environment perception and reconstruction.


Fig. 2. CPU-GPU sequential diagram of the proposed environment perception module.


Fig. 3. CPU-GPU sequential diagram of the proposed 3D reconstruction module.


Fig. 4. Multiple sensors mounted on UGV. (a) LiDAR, (b) CCD camera, (c) IMU, and (d) multiple sensor integration.


Fig. 5. Segmentation result of ground and non-ground points.


Fig. 6. Results of ground segmentation and object clustering in LiDAR point clouds.


Fig. 7. Object segmentation speed of the proposed CPU-GPU hybrid system compared with the CPU-based method.


Fig. 8. High-resolution terrain reconstruction results using a textured mesh (a) and colored particles (b).

Table 1. Explanation of variables in Fig. 2


Table 2. Explanation of variables in Fig. 3

