Visual-Attention Using Corner Feature Based SLAM in Indoor Environment

  • Shin, Yong-Min (Department of Intelligent Robot Engineering, Hanyang University) ;
  • Yi, Chu-Ho (Division of Electrical Computer Engineering, Hanyang University) ;
  • Suh, Il-Hong (College of Information and Communications, Hanyang University) ;
  • Choi, Byung-Uk (College of Information and Communications, Hanyang University)
  • Received : 2011.09.09
  • Accepted : 2012.06.07
  • Published : 2012.07.25

Abstract

Landmark selection is crucial to the successful performance of SLAM (Simultaneous Localization and Mapping) with a monocular camera. In unknown environments in particular, landmarks must be selected automatically, since no prior information about them is available. In this paper, a visual attention system modeled on the human vision system is used to select landmarks automatically. In previous visual attention systems, the edge feature is one of the most important elements of attention. However, when the edge feature is used in complex indoor areas, the responses in complex regions cancel out and disappear, while the responses between flat surfaces grow. Moreover, the computational cost increases with the dimensionality, since edge responses for four orientations must be computed. This paper proposes using a corner feature instead, which avoids these problems. The corner feature also increases the accuracy of data association by concentrating attention on regions that are more complex and informative in indoor environments. Finally, experiments demonstrate that a visual attention system based on the corner feature is more effective for SLAM than the previous edge-based method.
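The corner feature advocated here is typically computed with the Harris detector (reference 3). As a rough illustration of the abstract's argument, the sketch below is a minimal NumPy implementation of the Harris response, not the authors' code; the window radius and the constant k = 0.04 are conventional assumptions. The response R = det(M) - k·tr(M)² is strongly positive only where the local gradient varies in two directions (corners), near zero on flat surfaces, and negative along straight edges, which is why a corner-based saliency cue concentrates on complex, informative regions.

```python
import numpy as np

def harris_response(img, k=0.04, r=1):
    """Per-pixel Harris corner response R = det(M) - k * trace(M)^2,
    where M is the structure tensor summed over a (2r+1)x(2r+1) window."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)            # central-difference image gradients

    def box(a):
        """Sum of a over a (2r+1)x(2r+1) window around each pixel."""
        p = np.pad(a, r)
        h, w = a.shape
        out = np.zeros_like(a)
        for dy in range(2 * r + 1):
            for dx in range(2 * r + 1):
                out += p[dy:dy + h, dx:dx + w]
        return out

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2           # product of the tensor's eigenvalues
    trace = Sxx + Syy                    # sum of the eigenvalues
    return det - k * trace ** 2

# A bright square on a dark background: the square's corners respond
# positively, while the midpoints of its straight edges respond negatively.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
print(R[5, 5] > 0, R[5, 10] < 0)         # corner vs. edge midpoint: True True
```

A single scalar response per pixel replaces the four oriented edge maps the abstract criticizes, which is the source of the claimed reduction in dimensionality and computation.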

References

  1. D. Lowe, "Object recognition from local scale-invariant features," Proceedings of the International Conference on Computer Vision, vol. 2, pp. 1150-1157, 1999.
  2. H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "Speeded-Up Robust Features (SURF)," Computer Vision and Image Understanding, vol. 110, pp. 346-359, 2008. https://doi.org/10.1016/j.cviu.2007.09.014
  3. C. Harris and M. Stephens, "A combined corner and edge detector," Proceedings of the 4th Alvey Vision Conference, pp. 147-151, 1988.
  4. L. Itti, C. Koch, and E. Niebur, "A model of saliency-based visual attention for rapid scene analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 20, no. 11, pp. 1254-1259, 1998.
  5. J. Daugman, "Two-dimensional spectral analysis of cortical receptive field profiles," Vision Research, vol. 20, no. 10, pp. 847-856, 1980. https://doi.org/10.1016/0042-6989(80)90065-6
  6. A. Davison, "Mobile Robot Navigation Using Active Vision," PhD thesis, University of Oxford, UK, 1999.
  7. A. Argyros, C. Bekris, and S. Orphanoudakis, "Robot homing based on corner tracking in a sequence of panoramic images," Computer Vision and Pattern Recognition Conference (CVPR), pp. 11-13, 2001.
  8. RAWSEEDS: Robotics Advancement through Web-publishing of Sensorial and Elaborated Extensive Data Sets, "Bicocca_2009-02-25b," http://www.rawseeds.org/rs/capture_sessions/view/5, 2009.
  9. S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics, MIT Press, 2005.
  10. S. Frintrop, "VOCUS: A Visual Attention System for Object Detection and Goal-directed Search," PhD thesis, University of Bonn, 2006.
  11. S. Frintrop, M. Klodt, and E. Rome, "A Real-time Visual Attention System Using Integral Images," Proceedings of the 5th International Conference on Computer Vision Systems (ICVS 2007), 2007.
  12. J. Harel, C. Koch, and P. Perona, "Graph-based visual saliency," Advances in Neural Information Processing Systems, vol. 19, pp. 545-552, 2007.
  13. C. Siagian and L. Itti, "Biologically Inspired Mobile Robot Vision Localization," IEEE Transactions on Robotics, Vol. 25, No. 4, pp. 861-873, 2009. https://doi.org/10.1109/TRO.2009.2022424
  14. C. Siagian and L. Itti, "Rapid biologically inspired scene classification using features shared with visual attention," IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), pp. 300-312, 2007.
  15. S. Frintrop and P. Jensfelt, "Attentional Landmarks and Active Gaze Control for Visual SLAM," IEEE Transactions on Robotics, Special Issue on Visual SLAM, vol. 24, no. 5, Oct. 2008.
  16. S. Frintrop, P. Jensfelt, and H. Christensen, "Attentional Landmark Selection for Visual SLAM," Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'06), 2006.
  17. Y.J. LEE, "Indoor SLAM Using Entropy-based Visual Saliency and Outdoor SLAM Using Rotation Invariant Descriptors of Salient Regions," PhD thesis, Korea University, 2011.
  18. S. Engel, X. Zhang, and B. Wandell, "Colour tuning in human visual cortex measured with functional magnetic resonance imaging," Nature, vol. 388, no. 6637, pp. 68-71, 1997. https://doi.org/10.1038/40398
  19. M. Livingstone and D. Hubel, "Anatomy and physiology of a color system in the primate visual cortex," Journal of Neuroscience, vol. 4, pp. 309-356, 1984.
  20. T. Bailey and G. Dissanayake, "An efficient multiple hypotheses filter for bearing-only SLAM," Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'04), pp. 736-741, 2004.