Omni-directional Vision SLAM using a Motion Estimation Method based on Fisheye Image


  • Choi, Yun Won (Automotive IT Platform Research Team, ETRI) ;
  • Choi, Jeong Won (Department of Automatic Electrical Engineering, Yeungnam College of Science & Technology) ;
  • Dai, Yanyan (Department of Electrical Engineering, Yeungnam University) ;
  • Lee, Suk Gyu (Department of Electrical Engineering, Yeungnam University)
  • Received : 2014.02.05
  • Accepted : 2014.06.03
  • Published : 2014.08.01

Abstract

This paper proposes a novel mapping algorithm for Omni-directional Vision SLAM based on obstacle feature extraction using Lucas-Kanade Optical Flow (LKOF) motion detection and images obtained through fish-eye lenses mounted on a robot. Omni-directional image sensors suffer from distortion because they use a fish-eye lens or mirror, but they enable real-time image processing for mobile robots because they capture all the information around the robot at once. Previous Omni-directional Vision SLAM research corrected the entire fish-eye image before extracting feature points, whereas the proposed algorithm corrects only the feature points of obstacles, which yields faster processing. The core of the proposed algorithm may be summarized as follows. First, we capture instantaneous $360^{\circ}$ panoramic images around the robot through fish-eye lenses mounted facing downward. Second, we remove the feature points of the floor surface using a histogram filter and label the extracted obstacle candidates. Third, we estimate the locations of obstacles from motion vectors computed with LKOF. Finally, we estimate the robot's position using an Extended Kalman Filter based on the obstacle positions obtained by LKOF, and create a map. We confirm the reliability of the proposed mapping algorithm by comparing the maps it produces with real maps.
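The Lucas-Kanade step at the heart of the pipeline (the third step above) solves a small least-squares system over an image window to recover a per-feature motion vector. The sketch below is not the authors' implementation; it is a minimal NumPy illustration of the basic LKOF idea, and the window size, gradient scheme, and synthetic test image are assumptions made for the example:

```python
import numpy as np

def lucas_kanade(prev, curr, x, y, win=7):
    """Estimate optical flow (dx, dy) at pixel (x, y) between two frames.

    Solves the Lucas-Kanade least-squares system
        [Ix Iy] [dx dy]^T = -It
    over a win x win window, assuming brightness constancy and small motion.
    """
    h = win // 2
    # Spatial gradients of the previous frame (np.gradient returns d/dy, d/dx).
    Iy, Ix = np.gradient(prev.astype(float))
    # Temporal gradient between the two frames.
    It = curr.astype(float) - prev.astype(float)
    # Collect gradient samples from the window around (x, y).
    sl = np.s_[y - h:y + h + 1, x - h:x + h + 1]
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    # Least-squares solution gives the motion vector for this feature.
    (dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dx, dy

if __name__ == "__main__":
    # Synthetic check: a smooth blob shifted one pixel to the right
    # should produce a flow vector close to (1, 0).
    yy, xx = np.mgrid[0:40, 0:40]
    prev = np.exp(-((xx - 20.0) ** 2 + (yy - 20.0) ** 2) / 25.0)
    curr = np.roll(prev, 1, axis=1)
    print(lucas_kanade(prev, curr, 23, 20))
```

In a full system, this per-feature flow would be run only on the labeled obstacle candidates (after the floor points are removed), and the resulting motion vectors would feed the obstacle-position estimates used by the EKF.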
