Title/Summary/Keyword: Segment Calibration


RSSI-based Location Determination via Segmentation-based Linear Spline Interpolation Method

  • Lau, Erin-Ee-Lin;Chung, Wan-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2007.10a
    • /
    • pp.473-476
    • /
    • 2007
  • Location determination of a mobile user via the RSSI approach has received ample attention from researchers lately. However, it remains a challenging issue due to the complexities of RSSI signal propagation characteristics, which are easily exacerbated by user mobility. Hence, a segmentation-based linear spline interpolation method is proposed to cater for the dynamic fluctuation pattern of the radio signal in complex environments. This optimization algorithm is proposed in addition to the current radiolocation algorithm (CC2431, Chipcon, Norway), which runs on the IEEE 802.15.4 standard. The enhancement algorithm involves four phases. The first phase consists of a calibration model in which RSSI values at different static locations are collected and processed to obtain the mean and standard deviation for each predefined distance. An RSSI smoothing algorithm is proposed to minimize the dynamic fluctuation of the radio signal received from each reference node while the user is moving. Distances are computed using the segmentation formula obtained in the first phase. In situations where an RSSI value falls into more than one segment, the distance ambiguity is resolved by a probability approach: the probability density function (pdf) of each candidate distance is computed, and the distance with the highest pdf at a particular RSSI is taken as the estimated distance. Finally, with the distances obtained from each reference node, an iterative trilateration algorithm is used for position estimation. Experimental results position the proposed algorithm as a viable alternative for location tracking.

  • PDF
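The pipeline in the abstract above can be sketched in two pieces: a piecewise-linear (linear spline) inversion of the RSSI-vs-distance calibration curve, and an iterative trilateration step. This is a minimal illustration, not the paper's implementation; the calibration values and the damped-gradient update are hypothetical placeholders.

```python
# Sketch of segmentation-based RSSI-to-distance mapping plus iterative
# trilateration. CALIB values are hypothetical, not from the paper.
import math

# (distance_m, mean_rssi_dbm) calibration points from the static phase
CALIB = [(1.0, -45.0), (2.0, -52.0), (4.0, -60.0), (8.0, -68.0)]

def rssi_to_distance(rssi):
    """Piecewise-linear (linear spline) inversion of the calibration curve."""
    for (d0, r0), (d1, r1) in zip(CALIB, CALIB[1:]):
        if r1 <= rssi <= r0:            # RSSI falls inside this segment
            t = (rssi - r0) / (r1 - r0)
            return d0 + t * (d1 - d0)
    # outside the calibrated range: clamp to the nearest endpoint
    return CALIB[0][0] if rssi > CALIB[0][1] else CALIB[-1][0]

def trilaterate(anchors, dists, iters=20):
    """Iterative trilateration from >= 3 reference nodes (gradient descent)."""
    x = sum(a[0] for a in anchors) / len(anchors)
    y = sum(a[1] for a in anchors) / len(anchors)
    for _ in range(iters):
        gx = gy = 0.0
        for (ax, ay), d in zip(anchors, dists):
            r = math.hypot(x - ax, y - ay) or 1e-9
            e = r - d                    # range residual for this anchor
            gx += e * (x - ax) / r
            gy += e * (y - ay) / r
        x -= 0.5 * gx / len(anchors)     # damped gradient step
        y -= 0.5 * gy / len(anchors)
    return x, y
```

An RSSI that falls into more than one segment would, per the abstract, be disambiguated by comparing the pdfs of the candidate distances before calling `trilaterate`.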

Vision-based Mobile Robot Localization and Mapping using Fisheye Lens

  • Lee Jong-Shill;Min Hong-Ki;Hong Seung-Hong
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.5 no.4
    • /
    • pp.256-262
    • /
    • 2004
  • A key component of an autonomous mobile robot is to localize itself and build a map of the environment simultaneously. In this paper, we propose a vision-based localization and mapping algorithm for a mobile robot using a fisheye lens. To acquire high-level features with scale invariance, a camera with a fisheye lens facing toward the ceiling is attached to the robot. These features are used in map building and localization. As preprocessing, the input image from the fisheye lens is calibrated to remove radial distortion, and then labeling and convex hull techniques are used to segment the ceiling and wall regions of the calibrated image. During the initial map building process, features are calculated for each segmented region and stored in the map database. Features are continuously calculated for sequential input images and matched against the map. When some features are not matched, those features are added to the map. This map matching and updating process continues until map building is finished. Localization is used both during map building and when searching for the location of the robot on the map. The features calculated at the robot's position are matched against the existing map to estimate the real position of the robot, and the map database is updated at the same time. With the proposed method, the elapsed time for map building is within 2 minutes for a 50 m² region, the positioning accuracy is ±13 cm, and the error in the robot's heading angle is ±3 degrees.

  • PDF
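The preprocessing step described above (labeling bright ceiling pixels, then taking their convex hull to separate ceiling from wall) can be sketched as follows. This is an illustrative reconstruction under assumed inputs, not the authors' code; the brightness threshold and the toy image format (a list of rows of intensities) are hypothetical.

```python
# Sketch of ceiling segmentation: label bright pixels, then compute their
# convex hull. Threshold and image representation are hypothetical.
def convex_hull(points):
    """Andrew's monotone-chain convex hull; points are (x, y) tuples."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:                        # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):              # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]       # endpoints shared, drop duplicates

def label_ceiling(image, threshold=200):
    """Collect pixel coordinates brighter than the (hypothetical) threshold."""
    return [(x, y) for y, row in enumerate(image)
                   for x, v in enumerate(row) if v >= threshold]
```

In the paper's setting this would run on the radially undistorted fisheye image; pixels inside the hull would be treated as ceiling, the rest as wall.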

Mobile Robot Localization and Mapping using Scale-Invariant Features

  • Lee, Jong-Shill;Shen, Dong-Fan;Kwon, Oh-Sang;Lee, Eung-Hyuk;Hong, Seung-Hong
    • Journal of IKEEE
    • /
    • v.9 no.1 s.16
    • /
    • pp.7-18
    • /
    • 2005
  • A key component of an autonomous mobile robot is to localize itself accurately and build a map of the environment simultaneously. In this paper, we propose a vision-based mobile robot localization and mapping algorithm using scale-invariant features. A camera with a fisheye lens facing toward the ceiling is attached to the robot to acquire high-level features with scale invariance. These features are used in the map building and localization process. As preprocessing, input images from the fisheye lens are calibrated to remove radial distortion, and then labeling and convex hull techniques are used to segment the ceiling region from the wall region. During the initial map building process, features are calculated for the segmented regions and stored in the map database. Features are continuously calculated from sequential input images and matched against the existing map until map building is finished. If features are not matched, they are added to the existing map. Localization is performed simultaneously with feature matching during map building: when features are matched with the existing map, the robot's position is estimated and the map database is updated at the same time. The proposed method can build a map of a 50 m² area in 2 minutes. The positioning accuracy is ±13 cm, and the average error in the robot's heading angle is ±3 degrees.

  • PDF
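The match-then-extend map update described in the abstract above can be sketched as a nearest-neighbor match with an append for unmatched features. This is a simplified illustration under assumed conventions: the descriptor form (plain coordinate tuples), the Euclidean distance metric, and the `max_dist` threshold are all hypothetical, not from the paper.

```python
# Sketch of the map matching-and-updating loop: match each frame feature to
# its nearest map feature; append features with no match as new landmarks.
import math

def match_and_update(map_features, frame_features, max_dist=0.5):
    """Return matches as (frame_idx, map_idx); append unmatched to the map."""
    matches = []
    for i, f in enumerate(frame_features):
        best_j, best_d = None, max_dist
        for j, m in enumerate(map_features):
            d = math.dist(f, m)          # Euclidean descriptor distance
            if d < best_d:
                best_j, best_d = j, d
        if best_j is None:
            map_features.append(f)       # new landmark: extend the map
        else:
            matches.append((i, best_j))
    return matches
```

In the paper's pipeline the matched pairs would then feed the position estimate, so localization and map updating happen in the same pass, as the abstract states.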