Acknowledgement
This work was supported by an Institute for Information and Communications Technology Promotion (IITP) grant funded by the Korean government (MSIP) (No. 2018-0-00327, Development of fully autonomous driving navigation AI technology in high-precision map shadow environment).
References
- E. Yurtsever et al., A survey of autonomous driving: Common practices and emerging technologies, IEEE Access 8 (2020), 58443-58469. https://doi.org/10.1109/access.2020.2983149
- S.-J. Han et al., Robust ego-motion estimation and map matching technique for autonomous vehicle localization with high definition digital map, in Proc. Int. Conf. Inf. Commun. Technol. Convergence (Jeju, Rep. of Korea), Oct. 2018, pp. 630-635.
- G. Grisetti et al., A tutorial on graph-based SLAM, IEEE Intell. Transp. Syst. Mag. 2 (2010), no. 4, 31-43. https://doi.org/10.1109/MITS.2010.939925
- J. Zhang and S. Singh, Low-drift and real-time lidar odometry and mapping, Auton. Robots 41 (2017), no. 2, 401-416. https://doi.org/10.1007/s10514-016-9548-2
- A. Geiger, P. Lenz, and R. Urtasun, Are we ready for autonomous driving? The KITTI vision benchmark suite, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (Providence, RI, USA), June 2012, pp. 3354-3361.
- C. Forster, M. Pizzoli, and D. Scaramuzza, SVO: Fast semi-direct monocular visual odometry, in Proc. IEEE Int. Conf. Robot. Autom. (Hong Kong, China), May 2014, pp. 15-22.
- J. Engel, V. Koltun, and D. Cremers, Direct sparse odometry, IEEE Trans. Pattern Anal. Mach. Intell. 40 (2017), no. 3, 611-625. https://doi.org/10.1109/TPAMI.2017.2658577
- J. Zhu, Image gradient-based joint direct visual odometry for stereo camera, in Proc. Int. Joint Conf. Artif. Intell. (Melbourne, Australia), Aug. 2017, pp. 4558-4564.
- Y. Ma et al., An Invitation to 3-D Vision: From Images to Geometric Models, vol. 26, Springer, NY, USA, 2012.
- A. Censi, An ICP variant using a point-to-line metric, in Proc. IEEE Int. Conf. Robot. Autom. (Pasadena, CA, USA), May 2008, pp. 19-25.
- K. L. Low, Linear least-squares optimization for point-to-plane ICP surface registration, Tech. Rep. TR04-004, Department of Computer Science, University of North Carolina at Chapel Hill, 2004, pp. 1-3.
- A. Segal, D. Haehnel, and S. Thrun, Generalized-ICP, in Proc. Robot.: Sci. Syst. (Seattle, WA, USA), June 2009.
- J. E. Deschaud, IMLS-SLAM: Scan-to-model matching based on 3D data, in Proc. IEEE Int. Conf. Robot. Autom. (Brisbane, Australia), May 2018, pp. 2480-2485.
- J. Behley and C. Stachniss, Efficient surfel-based SLAM using 3D laser range data in urban environments, in Proc. Robot.: Sci. Syst. (Pittsburgh, PA, USA), June 2018.
- X. Chen et al., SuMa++: Efficient LiDAR-based semantic SLAM, in Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst. (Macau, China), Nov. 2019, pp. 4530-4537.
- G. Chen et al., PSF-LO: Parameterized semantic features based lidar odometry, arXiv preprint, 2020, arXiv:2010.13355.
- J. Saarinen et al., Normal distributions transform occupancy maps: Application to large-scale online 3D mapping, in Proc. IEEE Int. Conf. Robot. Autom. (Karlsruhe, Germany), May 2013, pp. 2233-2238.
- C. Schulz and A. Zell, Real-time graph-based SLAM with occupancy normal distributions transforms, in Proc. IEEE Int. Conf. Robot. Autom. (Paris, France), May 2020, pp. 3106-3111.
- K. Ji et al., CPFG-SLAM: A robust simultaneous localization and mapping based on LIDAR in off-road environment, in Proc. IEEE Intell. Veh. Symp. (Changshu, China), June 2018, pp. 650-655.
- Y. S. Shin, Y. S. Park, and A. Kim, Direct visual SLAM using sparse depth for camera-lidar system, in Proc. IEEE Int. Conf. Robot. Autom. (Brisbane, Australia), May 2018, pp. 5144-5151.
- L. Sun et al., DLO: Direct LiDAR odometry for 2.5D outdoor environment, in Proc. IEEE Intell. Veh. Symp. (Changshu, China), June 2018, pp. 1-5.
- J. Li et al., DL-SLAM: Direct 2.5D LiDAR SLAM for autonomous driving, in Proc. IEEE Intell. Veh. Symp. (Paris, France), June 2019, pp. 1205-1210.
- A. Nicolai et al., Deep learning for laser based odometry estimation, in Proc. RSS Workshop on Limits and Potentials of Deep Learning in Robotics (Ann Arbor, MI, USA), June 2016.
- W. Wang et al., DeepPCO: End-to-end point cloud odometry through deep parallel neural network, in Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst. (Macau, China), Nov. 2019, pp. 3248-3254.
- Y. Cho, G. Kim, and A. Kim, Unsupervised geometry-aware deep lidar odometry, in Proc. IEEE Int. Conf. Robot. Autom. (Paris, France), May 2020, pp. 2145-2152.
- Z. J. Yew and G. H. Lee, 3DFeat-Net: Weakly supervised local 3D features for point cloud registration, in Proc. Eur. Conf. Comput. Vis. (Munich, Germany), Sept. 2018, pp. 607-623.
- Y. Zhou and O. Tuzel, VoxelNet: End-to-end learning for point cloud based 3D object detection, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (Salt Lake City, UT, USA), June 2018, pp. 4490-4499.
- W. Lu et al., DeepVCP: An end-to-end deep neural network for point cloud registration, in Proc. IEEE/CVF Int. Conf. Comput. Vis. (Seoul, Rep. of Korea), Oct. 2019, pp. 12-21.
- J. L. Blanco, A tutorial on SE(3) transformation parameterizations and on-manifold optimization, Tech. Rep. 012010, University of Malaga, 2010.
- W. H. Press et al., Numerical Recipes 3rd Edition: The Art of Scientific Computing, Cambridge University Press, Cambridge, UK, 2007.
- H. Badino et al., Fast and accurate computation of surface normals from range images, in Proc. IEEE Int. Conf. Robot. Autom. (Shanghai, China), May 2011, pp. 3084-3091.
- H. Farid and E. P. Simoncelli, Differentiation of discrete multidimensional signals, IEEE Trans. Image Process. 13 (2004), no. 4, 496-508. https://doi.org/10.1109/TIP.2004.823819
- J. Han, M. Choi, and Y. Kwon, 40-TFLOPS artificial intelligence processor with function-safe programmable many-cores for ISO26262 ASIL-D, ETRI J. 42 (2020), no. 4, 468-479. https://doi.org/10.4218/etrij.2020-0128