Object Recognition-based Global Localization for Mobile Robots

이동로봇의 물체인식 기반 전역적 자기위치 추정 (Korean title: Object Recognition-based Global Self-Localization for Mobile Robots)

  • 박순용 (KIST / Yonsei University, University-Institute Cooperative Program) ;
  • 박민용 (Department of Electrical and Electronic Engineering, Yonsei University) ;
  • 박성기 (Center for Cognitive Robotics Research, KIST)
  • Published : 2008.02.29

Abstract

We present a new object recognition-based global localization method for robot navigation. To do this, we model an indoor environment with a stereo camera using two visual cues: view-based image features for object recognition and their 3D positions for object pose estimation. In addition, we use the depth information along the horizontal centerline of the image, through which the optical axis passes; this data is similar to that of a 2D laser range finder. This allows us to build a hybrid local node for a topological map, composed of a metric map of the indoor environment and an object location map. Based on this modeling, we propose a coarse-to-fine strategy for estimating the global pose of a mobile robot. A coarse pose is obtained by means of object recognition and SVD-based least-squares fitting, and the pose is then refined with a particle filtering algorithm. Real-world experiments show that the proposed method is an effective vision-based global localization algorithm.
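The coarse pose step relies on SVD-based least-squares fitting between the 3D positions of recognized objects in the map and the same points observed from the robot. The sketch below is a minimal, illustrative version of that closed-form rigid alignment (a Kabsch/Umeyama-style solution); the function name svd_rigid_fit and the synthetic test data are assumptions for illustration, not code from the paper.

    # Minimal sketch: SVD-based least-squares rigid fit between matched 3D points.
    # Given points of recognized objects in the map frame and the corresponding
    # points observed in the robot/camera frame, recover (R, t) with R @ obs + t ~= map.
    import numpy as np

    def svd_rigid_fit(map_pts, obs_pts):
        """map_pts, obs_pts: (N, 3) arrays of corresponding 3D points."""
        mu_map = map_pts.mean(axis=0)
        mu_obs = obs_pts.mean(axis=0)

        # Cross-covariance of the centered correspondences.
        H = (obs_pts - mu_obs).T @ (map_pts - mu_map)

        U, _, Vt = np.linalg.svd(H)
        # Correct for a possible reflection so that det(R) = +1.
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_map - R @ mu_obs
        return R, t

    if __name__ == "__main__":
        # Synthetic check (hypothetical data): recover a known planar rotation and translation.
        rng = np.random.default_rng(0)
        obs = rng.uniform(-2.0, 2.0, size=(6, 3))
        theta = np.deg2rad(30.0)
        R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                           [np.sin(theta),  np.cos(theta), 0.0],
                           [0.0,            0.0,           1.0]])
        t_true = np.array([1.5, -0.7, 0.0])
        map_pts = obs @ R_true.T + t_true
        R, t = svd_rigid_fit(map_pts, obs)
        print(np.allclose(R, R_true), np.allclose(t, t_true))

In the paper's pipeline, a coarse pose of this kind would then serve as the starting distribution for the particle filter that produces the refined estimate.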

Keywords