• Title/Summary/Keyword: relative pose estimation

Bundle Adjustment and 3D Reconstruction Method for Underwater Sonar Image (A Study on Motion Estimation Ambiguity for Bundle Adjustment and 3D Reconstruction with Underwater Imaging Sonar)

  • Shin, Young-Sik; Lee, Yeong-jun; Cho, Hyun-Taek; Kim, Ayoung
    • The Journal of Korea Robotics Society, v.11 no.2, pp.51-59, 2016
  • In this paper we present (1) an analysis of imaging sonar measurements for two-view relative pose estimation of an autonomous vehicle and (2) a bundle adjustment and 3D reconstruction method using imaging sonar. Sonar has been a popular sensor for underwater applications because of its robustness to water turbidity and poor visibility in the water medium. While vision-based motion estimation has been applied to many ground vehicles for motion estimation and 3D reconstruction, imaging sonar presents additional challenges for estimating relative sensor-frame motion. We focus on the fact that sonar measurements are inherently ambiguous. This paper illustrates the source of this ambiguity and summarizes the assumptions used in sonar-based robot navigation. For validation, we synthetically generated underwater seafloors of varying complexity and analyzed the error in the motion estimation.
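
A forward-looking imaging sonar returns range and bearing but not elevation, which is the ambiguity the abstract refers to: every measurement corresponds to a one-parameter arc of 3D points. A minimal sketch of that projection model (function names and conventions are illustrative, not the paper's code):

```python
import numpy as np

def sonar_project(p):
    """Project a 3D point (sensor frame) to an imaging-sonar measurement.
    The elevation angle is lost, which is the source of the ambiguity."""
    x, y, z = p
    r = np.linalg.norm(p)          # range
    theta = np.arctan2(y, x)       # azimuth (bearing)
    return np.array([r, theta])    # elevation phi = arcsin(z / r) is not observed

def back_project(r, theta, phi):
    """All points on this arc (one per elevation phi) give the same measurement."""
    return np.array([r * np.cos(phi) * np.cos(theta),
                     r * np.cos(phi) * np.sin(theta),
                     r * np.sin(phi)])

# Two distinct 3D points with different elevations, same sonar measurement:
m1 = sonar_project(back_project(5.0, 0.3, 0.0))
m2 = sonar_project(back_project(5.0, 0.3, 0.2))
assert np.allclose(m1, m2)
```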

Motion Estimation Using 3-D Straight Lines (Camera Motion Estimation Using 3-D Straight Lines)

  • Lee, Jin Han; Zhang, Guoxuan; Suh, Il Hong
    • The Journal of Korea Robotics Society, v.11 no.4, pp.300-309, 2016
  • This paper proposes a method for motion estimation between consecutive camera frames using 3-D straight lines. The algorithm uses two non-parallel 3-D line correspondences to quickly establish an initial guess for the relative pose of adjacent frames, which requires fewer correspondences than current approaches that need three correspondences when using 3-D points or 3-D planes. The estimated motion is then refined by nonlinear optimization over the inlier correspondences for higher accuracy. Since there is no dominant line representation in 3-D space, we simulate two line representations, which can be regarded as the most widely adopted in the field, and identify the better choice from the simulation results. We also propose a simple but effective 3-D line fitting algorithm based on the observation that the variance arises along the projective directions, so the problem can be reduced to a 2-D fitting problem. We provide experimental results of the proposed motion estimation system, compared with state-of-the-art algorithms, on an open benchmark dataset.
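
As a rough illustration of why two non-parallel 3-D line correspondences suffice: the rotation can be obtained by aligning the two direction vectors (plus their cross product), and the translation then follows from a small least-squares problem that forces transformed points to lie on the corresponding lines. The sketch below is a generic reconstruction under these assumptions, not the paper's exact formulation:

```python
import numpy as np

def rotation_from_two_directions(d1a, d2a, d1b, d2b):
    """Rotation R with d_b ≈ R @ d_a for two non-parallel unit line directions."""
    def basis(d1, d2):
        b1 = d1 / np.linalg.norm(d1)
        b3 = np.cross(d1, d2); b3 /= np.linalg.norm(b3)
        b2 = np.cross(b3, b1)
        return np.column_stack([b1, b2, b3])
    A, B = basis(d1a, d2a), basis(d1b, d2b)
    return B @ A.T

def translation_from_points(R, pa_list, pb_list, db_list):
    """Least-squares translation: for each line, the transformed point R @ pa + t
    must lie on line b (point pb, unit direction db)."""
    rows, rhs = [], []
    for pa, pb, db in zip(pa_list, pb_list, db_list):
        P = np.eye(3) - np.outer(db, db)     # projector orthogonal to the line
        rows.append(P)
        rhs.append(P @ (pb - R @ pa))
    t, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
    return t
```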

1-Point Ransac Based Robust Visual Odometry

  • Nguyen, Van Cuong; Heo, Moon Beom; Jee, Gyu-In
    • Journal of Positioning, Navigation, and Timing, v.2 no.1, pp.81-89, 2013
  • Many current visual odometry algorithms suffer from severe limitations such as high computation time, algorithmic complexity, and failure in urban environments. In this paper, we present an approach that addresses these problems using a single camera. Using a planar motion assumption and Ackermann's steering principle, we model the vehicle's motion as circular planar motion (2 DOF). We then adopt a 1-point method to improve both the RANSAC algorithm and the relative motion estimation. In RANSAC, the 1-point method generates the hypothesis, and the Levenberg-Marquardt method minimizes the geometric error function and verifies inliers. In motion estimation, we combine the 1-point method with a simple least-squares minimization to handle cases in which only a few feature points are present. The 1-point method is the key to speeding up our visual odometry to real-time performance. Finally, a bundle adjustment algorithm refines the pose estimate. Results on real datasets in dynamic urban environments demonstrate the effectiveness of the proposed algorithm.
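
Under the circular planar (Ackermann) motion model, the essential matrix depends on a single yaw angle, so one correspondence is enough to hypothesize it; that is what makes the RANSAC loop so cheap. The sketch below assumes a camera with z forward and y down, motion in the x-z plane, and normalized image coordinates; the exact sign conventions vary between formulations:

```python
import numpy as np

def one_point_theta(p1, p2):
    """Yaw hypothesis from a single correspondence under circular planar motion.
    p1, p2 are normalized image coordinates (x, y) in the two frames."""
    x1, y1 = p1
    x2, y2 = p2
    return 2.0 * np.arctan2(x1 * y2 - x2 * y1, y1 + y2)

def epipolar_error(theta, p1, p2):
    """Algebraic epipolar residual for the 1-DOF essential matrix E(theta)."""
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    E = np.array([[0.0, -c, 0.0],
                  [c,  0.0, -s],
                  [0.0, -s, 0.0]])
    q1 = np.array([p1[0], p1[1], 1.0])
    q2 = np.array([p2[0], p2[1], 1.0])
    return abs(q2 @ E @ q1)

def one_point_ransac(pts1, pts2, iters=50, tol=1e-3):
    """Sample one correspondence per iteration, keep the hypothesis with most inliers."""
    rng = np.random.default_rng(0)
    best_theta, best_inliers = 0.0, np.zeros(len(pts1), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(pts1))
        theta = one_point_theta(pts1[i], pts2[i])
        errs = np.array([epipolar_error(theta, a, b) for a, b in zip(pts1, pts2)])
        inliers = errs < tol
        if inliers.sum() > best_inliers.sum():
            best_theta, best_inliers = theta, inliers
    return best_theta, best_inliers
```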

Implementation of a sensor fusion system for autonomous guided robot navigation in outdoor environments (Implementation of a Sensor Fusion System for Outdoor Autonomous Robot Navigation)

  • Lee, Seung-H.; Lee, Heon-C.; Lee, Beom-H.
    • Journal of Sensor Science and Technology, v.19 no.3, pp.246-257, 2010
  • Autonomous guided robot navigation, which consists of following unknown paths and avoiding unknown obstacles, is a fundamental capability for unmanned robots in outdoor environments. Following an unknown path requires techniques such as path recognition, path planning, and robot pose estimation. In this paper, we propose a novel sensor fusion system for autonomous guided robot navigation in outdoor environments. The proposed system consists of three monocular cameras and an array of nine infrared range sensors. The two cameras mounted on the robot's right and left sides are used to recognize unknown paths and estimate the robot's relative pose on these paths through a Bayesian sensor fusion method, and the camera mounted at the front of the robot is used to recognize abrupt curves and unknown obstacles. The infrared range sensor array improves the robustness of obstacle avoidance. The forward camera and the infrared range sensor array are fused through a rule-based method for obstacle avoidance. Experiments in outdoor environments show that a mobile robot with the proposed sensor fusion system successfully performed real-time autonomous guided navigation.
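
The rule-based fusion of the forward camera and the infrared array can be pictured as a simple priority scheme in which close-range IR readings override the camera. A toy illustration only; thresholds and interfaces are invented for the example:

```python
def fuse_obstacle_decision(camera_sees_obstacle, ir_ranges_m,
                           stop_dist=0.4, slow_dist=0.8):
    """Rule-based fusion: infrared ranges override the camera when an obstacle is close.
    camera_sees_obstacle: bool from the forward camera's obstacle detector.
    ir_ranges_m: nine range readings (meters) from the infrared sensor array."""
    nearest = min(ir_ranges_m)
    if nearest < stop_dist:
        return "stop"                    # hard safety rule from the IR array
    if camera_sees_obstacle or nearest < slow_dist:
        return "slow_and_steer_away"     # camera or mid-range IR triggers avoidance
    return "follow_path"                 # no obstacle: keep following the path
```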

Localization of a Mobile Robot Using Ceiling Image with Identical Features (Mobile Robot Localization Using Ceiling Images with Identically Shaped Features)

  • Noh, Sung Woo; Ko, Nak Yong; Kuc, Tae Yong
    • Journal of the Korean Institute of Intelligent Systems, v.26 no.2, pp.160-167, 2016
  • This paper reports a localization method for a mobile robot using ceiling images. The ceiling has landmarks that are not distinguishable from one another. The location of every landmark in the map is given a priori, while the correspondence between a detected landmark and a landmark in the map is not. Only the initial pose of the robot relative to the landmarks is given. The method uses a particle filter for localization. Along with estimating the robot pose, the method also associates each landmark detected in the ceiling image with a landmark in the map. The method is tested in an indoor environment with circular landmarks on the ceiling. The test verifies the feasibility of the method in environments where range data to walls or beacons are unavailable or severely corrupted by noise. This makes the method useful for localization in warehouses, where laser range finder measurements and range data from RF or ultrasonic beacons have large uncertainty.
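
A minimal version of the particle weight update for indistinguishable landmarks: per particle, each detection is associated with the nearest map landmark and weighted by a Gaussian likelihood. This is a generic sketch, not the paper's exact likelihood model:

```python
import numpy as np

def weight_particle(pose, detections, map_landmarks, sigma=0.15):
    """pose = (x, y, yaw); detections are landmark positions in the robot frame (meters);
    map_landmarks is an Nx2 array of known landmark positions in the map frame."""
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    w = 1.0
    for d in detections:
        # Predicted map-frame position of this detection for the particle's pose
        gx = x + c * d[0] - s * d[1]
        gy = y + s * d[0] + c * d[1]
        # Associate with the nearest map landmark (landmarks are indistinguishable)
        r = np.hypot(map_landmarks[:, 0] - gx, map_landmarks[:, 1] - gy).min()
        w *= np.exp(-0.5 * (r / sigma) ** 2)   # Gaussian measurement likelihood
    return w
```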

Absolute Positioning System for Mobile Robot Navigation in an Indoor Environment (ICCAS 2004)

  • Yun, Jae-Mu; Park, Jin-Woo; Choi, Ho-Seek; Lee, Jang-Myung
    • Institute of Control, Robotics and Systems Conference Proceedings, 2004.08a, pp.1448-1451, 2004
  • Position estimation is one of the most important functions for a mobile robot navigating in an unstructured environment. Most previous localization schemes estimate the current position and pose of a mobile robot by applying various localization algorithms to information obtained from sensors mounted on the robot, or by recognizing artificial landmarks attached to walls or objects in the environment as natural landmarks in indoor environments. Several drawbacks of these approaches have been pointed out. To address them, a new localization method is proposed that estimates the absolute position of the mobile robot using a camera fixed on the ceiling of a corridor. The proposed method also improves the success rate of position estimation by calculating the real size of an object. This scheme is not relative localization, which reduces position error through algorithms operating on noisy sensor data, but a form of absolute localization. The effectiveness of the proposed localization scheme is demonstrated through experiments.
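
With a camera fixed on the ceiling and the robot's height known, a detected pixel can be mapped back to floor coordinates with the pinhole model, which is the essence of recovering an absolute position from the object's real size. A simplified sketch with placeholder intrinsics and heights:

```python
import numpy as np

def pixel_to_floor(u, v, K, cam_height_m, robot_height_m):
    """Map a pixel on the robot's top surface to floor (X, Y) coordinates,
    assuming a ceiling camera looking straight down. K is the 3x3 intrinsic matrix."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray in camera frame
    depth = cam_height_m - robot_height_m            # distance camera -> robot top
    return ray[0] * depth / ray[2], ray[1] * depth / ray[2]

K = np.array([[800.0, 0.0, 320.0],     # placeholder intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
print(pixel_to_floor(400, 300, K, cam_height_m=2.5, robot_height_m=0.4))
```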

Robust 3-D Motion Estimation Based on Stereo Vision and Kalman Filtering (Robust 3-D Motion Estimation Using Stereo Vision and Kalman Filtering)

  • 계영철
    • Journal of Broadcast Engineering, v.1 no.2, pp.176-187, 1996
  • This paper deals with the accurate estimation of the 3-D pose (position and orientation) of a moving object with respect to the world frame (or robot base frame), based on a sequence of stereo images taken by cameras mounted on the end-effector of a robot manipulator. This work extends previous work [1]. Emphasis is given to 3-D pose estimation relative to the world (or robot base) frame in the presence of not only measurement noise in the 2-D images [1] but also camera position errors due to random noise in the joint angles of the robot manipulator. To this end, a new set of discrete linear Kalman filter equations is derived, based on the following: 1) the orientation error of the object frame due to measurement noise in the 2-D images is modeled with respect to the camera frame by analyzing how the noise propagates through 3-D reconstruction; 2) an extended Jacobian matrix is formulated by combining the result of 1) with the orientation error of the end-effector frame due to joint angle errors, through robot differential kinematics; and 3) the rotational motion of the object, which is nonlinear in nature, is linearized using quaternions. Motion parameters are computed from the estimated quaternions by the iterated least-squares method. Simulation results show a significant reduction of estimation errors and demonstrate accurate convergence of the estimated motion parameters to the true values.
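
The linearization mentioned in item 3) typically treats the orientation error as a small quaternion perturbation, so the nonlinear rotational motion fits into a linear filter. A small sketch of that idea (not the paper's full filter equations):

```python
import numpy as np

def quat_multiply(q, r):
    """Hamilton product, quaternions as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def apply_small_rotation(q, delta_theta):
    """First-order update used in quaternion Kalman filters: a small rotation
    vector delta_theta perturbs q via dq ≈ (1, delta_theta / 2)."""
    dq = np.concatenate(([1.0], 0.5 * np.asarray(delta_theta)))
    q_new = quat_multiply(q, dq)
    return q_new / np.linalg.norm(q_new)
```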

An Accurate Extrinsic Calibration of Laser Range Finder and Vision Camera Using 3D Edges of Multiple Planes (Precise Calibration of a Laser Range Sensor and Camera Using the 3D Edges of Multiple Planes)

  • Choi, Sung-In; Park, Soon-Yong
    • KIPS Transactions on Software and Data Engineering, v.4 no.4, pp.177-186, 2015
  • For data fusion of a laser range finder (LRF) and a vision camera, accurate calibration of the external parameters describing the relative pose between the two sensors is necessary. This paper proposes a new calibration method that acquires more accurate external parameters between an LRF and a vision camera than existing methods. The main idea of the proposed method is that the corner data of a known 3D structure acquired by the LRF should project onto a straight line in the camera image. To satisfy this constraint, we propose a 3D geometric model and a numerical solution that minimizes the energy function of the model. In addition, we describe the data acquisition steps for the LRF and camera images that are necessary for accurate calibration results. The experimental results show that the proposed method is more accurate than other conventional methods.
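
The stated constraint, that LRF corner points of the known 3D structure must project onto a straight image line, translates directly into a point-to-line reprojection energy over the six extrinsic parameters. A sketch of one such energy, with the rotation parameterized as a rotation vector; names and parameterization are illustrative, not the paper's implementation:

```python
import numpy as np
from scipy.spatial.transform import Rotation
from scipy.optimize import least_squares

def point_line_residuals(params, lrf_pts, image_lines, K):
    """params = [rx, ry, rz, tx, ty, tz]: LRF -> camera extrinsics.
    lrf_pts: Nx3 corner points measured by the LRF.
    image_lines: Nx3 homogeneous lines (a, b, c), normalized so a^2 + b^2 = 1,
    on which the corresponding projected points should lie."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    res = []
    for p, l in zip(lrf_pts, image_lines):
        uv = K @ (R @ p + t)          # project the LRF point into the image
        uv = uv / uv[2]
        res.append(l @ uv)            # signed point-to-line distance
    return np.array(res)

# Hypothetical usage: refine extrinsics from an initial guess x0
# sol = least_squares(point_line_residuals, x0, args=(lrf_pts, image_lines, K))
```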

Advanced Relative Localization Algorithm Robust to Systematic Odometry Errors (An Improved Relative Localization Algorithm Robust to Systematic Odometry Errors)

  • Ra, Won-Sang; Whang, Ick-Ho; Lee, Hye-Jin; Park, Jin-Bae; Yoon, Tae-Sung
    • Journal of Institute of Control, Robotics and Systems, v.14 no.9, pp.931-938, 2008
  • In this paper, a novel localization algorithm robust to unmodeled systematic odometry errors is proposed for low-cost non-holonomic mobile robots. It is well known that most pose estimators using odometry measurements cannot avoid performance degradation due to dead-reckoning of systematic odometry errors. As a remedy, we reflect the wheelbase error in the robot motion model as a parametric uncertainty. Applying Krein space estimation theory to the discrete-time uncertain nonlinear motion model results in an extended robust Kalman filter. This idea comes from the fact that systematic odometry errors can be regarded as parametric uncertainties satisfying sum quadratic constraints (SQCs). The advantage of the proposed methodology is that it has the same recursive structure as the conventional extended Kalman filter, which makes the scheme suitable for real-time applications. Moreover, it guarantees satisfactory localization performance even in the presence of wheelbase uncertainty, which is hard to model or estimate but often arises in real driving environments. Computer simulations demonstrate the robustness of the suggested localization algorithm.
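
To see why an unmodeled wheelbase error behaves as a parametric uncertainty, it helps to write the differential-drive dead-reckoning step with the wheelbase appearing explicitly: any error in b rescales every heading increment and therefore accumulates systematically. The sketch below only illustrates this motion model, not the Krein-space filter itself:

```python
import numpy as np

def dead_reckoning_step(pose, d_left, d_right, wheelbase):
    """One odometry update of a differential-drive robot.
    pose = (x, y, yaw); d_left/d_right are wheel displacements; wheelbase = b.
    A wheelbase error db scales the heading increment by roughly -db/b,
    which accumulates systematically along the trajectory."""
    x, y, yaw = pose
    ds = 0.5 * (d_right + d_left)              # displacement of the robot center
    dtheta = (d_right - d_left) / wheelbase    # heading change depends on 1/b
    x += ds * np.cos(yaw + 0.5 * dtheta)
    y += ds * np.sin(yaw + 0.5 * dtheta)
    return np.array([x, y, yaw + dtheta])
```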

Tangible Tele-Meeting in Tangible Space Initiative

  • Lee, Joong-Jae; Lee, Hyun-Jin; Jeong, Mun-Ho; Jeong, SeongWon; You, Bum-Jae
    • Journal of Electrical Engineering and Technology, v.9 no.2, pp.762-770, 2014
  • Tangible Space Initiative (TSI) is a new framework that provides a more natural and intuitive human-computer interface for users. It is composed of three cooperative components: a Tangible Interface, a Responsive Cyber Space, and a Tangible Agent. In this paper we present a Tangible Tele-Meeting system in TSI, which allows people to communicate with each other without any spatial limitation. In addition, we introduce a method for registering a Tangible Avatar with a Tangible Agent. The proposed method is based on relative pose estimation between the user and the Tangible Agent. Experimental results show that the user can experience an interaction environment that is more natural and intelligent than that provided by conventional tele-meeting systems.
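
Relative pose between the user (or avatar markers) and the Tangible Agent can in general be estimated from 2D-3D correspondences with a standard PnP solver; the sketch below uses OpenCV with placeholder marker positions and intrinsics, and is not the paper's actual registration procedure:

```python
import numpy as np
import cv2

# Hypothetical 3D marker positions on the Tangible Agent (meters, agent frame)
object_pts = np.array([[0.0, 0.0, 0.0],
                       [0.1, 0.0, 0.0],
                       [0.1, 0.1, 0.0],
                       [0.0, 0.1, 0.0]], dtype=np.float64)
# Their detected pixel locations in the camera image (placeholder values)
image_pts = np.array([[320.0, 240.0],
                      [400.0, 238.0],
                      [402.0, 320.0],
                      [318.0, 322.0]], dtype=np.float64)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)   # rotation of the agent frame relative to the camera
print(R, tvec)               # relative pose: agent frame -> camera frame
```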