• Title/Summary/Keyword: Relative pose estimation

Search results: 21 items (processing time: 0.045 s)

Experimental Study of Spacecraft Pose Estimation Algorithm Using Vision-based Sensor

  • Hyun, Jeonghoon;Eun, Youngho;Park, Sang-Young
    • Journal of Astronomy and Space Sciences, Vol. 35 No. 4, pp. 263-277, 2018
  • This paper presents a vision-based relative pose estimation algorithm and its validation through both numerical and hardware experiments. The algorithm and the hardware system were designed together with actual experimental conditions in mind. Two estimation techniques were used: a nonlinear least-squares method for initial estimation, and an extended Kalman filter for subsequent on-line estimation. A measurement model of the vision sensor and equations of motion including nonlinear perturbations were incorporated in the estimation process. Numerical simulations were performed and analyzed for both autonomous docking and formation flying scenarios. A configuration of LED-based beacons was designed to avoid measurement singularity, and its structural information was built into the estimation algorithm. The proposed algorithm was then verified experimentally using the Autonomous Spacecraft Test Environment for Rendezvous In proXimity (ASTERIX) facility. Additionally, a laser distance meter was added to the estimation algorithm to improve the relative position estimation accuracy. By examining how estimation accuracy changes with the level of measurement error, the study characterizes the performance required for autonomous docking. The hardware experiments also confirmed the effectiveness of the suggested algorithm and its applicability to real-world tasks.
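The two-stage scheme above (batch initialization, then recursive filtering) can be illustrated with a minimal filter. The sketch below is a plain linear Kalman filter on 1D constant-velocity relative motion with noisy range measurements; the paper's actual filter is an extended Kalman filter with nonlinear orbital dynamics and a camera measurement model, and the noise levels and initial state here are made-up assumptions.

```python
import numpy as np

def kalman_step(x, P, z, dt, q=1e-4, r=0.01):
    """One predict/update cycle. x = [position, velocity], z = measured position."""
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity transition
    H = np.array([[1.0, 0.0]])             # only position is observed
    # Predict
    x = F @ x
    P = F @ P @ F.T + q * np.eye(2)
    # Update with the range measurement
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + np.array([[r]])      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
true_pos, true_vel, dt = 10.0, -0.5, 0.1   # slow closing approach
x, P = np.array([8.0, 0.0]), np.eye(2)     # deliberately poor initial estimate
for _ in range(200):
    true_pos += true_vel * dt
    z = np.array([true_pos + rng.normal(0.0, 0.1)])
    x, P = kalman_step(x, P, z, dt)
print(x)  # filtered [position, velocity] estimate
```

In the paper's setting the initialization that seeds `x` would come from the nonlinear least-squares stage rather than a guess.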

2D-3D Pose Estimation using Multi-view Object Co-segmentation

  • 김성흠;복윤수;권인소
    • The Journal of Korea Robotics Society, Vol. 12 No. 1, pp. 33-41, 2017
  • We present a region-based approach for accurate pose estimation of small mechanical components. Our algorithm consists of two key phases: multi-view object co-segmentation and pose estimation. In the first phase, we describe an automatic method to extract binary masks of a target object captured from multiple viewpoints. For initialization, we assume the target object is bounded by a convex volume of interest defined by a few user inputs. The co-segmented target object shares the same geometric representation in space and has color models distinct from those of the backgrounds. In the second phase, we retrieve a 3D model instance with the correct upright orientation and estimate the relative pose of the object observed in the images. Our energy function, combining region and boundary terms for the proposed measures, maximizes the overlap of regions and boundaries between the multi-view co-segmentations and the projected masks of the reference model. Based on high-quality co-segmentations consistent across all viewpoints, our final results are accurate model indices and pose parameters of the extracted object. We demonstrate the effectiveness of the proposed method on various examples.
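The region term of the energy above can be sketched as a simple overlap score between a co-segmentation mask and the projected mask of a candidate model pose. The 8x8 binary masks below are toy stand-ins, and the paper's full energy also includes a boundary term, omitted here.

```python
import numpy as np

def region_overlap(coseg_mask, projected_mask):
    """Intersection-over-union of two binary masks (higher = better pose fit)."""
    inter = np.logical_and(coseg_mask, projected_mask).sum()
    union = np.logical_or(coseg_mask, projected_mask).sum()
    return inter / union if union else 0.0

coseg = np.zeros((8, 8), bool); coseg[2:6, 2:6] = True          # co-segmented object
proj_good = np.zeros((8, 8), bool); proj_good[2:6, 2:6] = True  # well-aligned pose
proj_bad = np.zeros((8, 8), bool); proj_bad[4:8, 4:8] = True    # misaligned pose
print(region_overlap(coseg, proj_good))  # 1.0
print(region_overlap(coseg, proj_bad))   # much lower
```

Maximizing such a score jointly over model index and pose parameters is the shape of the second phase; in the paper the score is accumulated over all viewpoints.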

A Camera Pose Estimation Method for Rectangle Feature based Visual SLAM

  • 이재민;김곤우
    • The Journal of Korea Robotics Society, Vol. 11 No. 1, pp. 33-40, 2016
  • In this paper, we propose a method for estimating the pose of the camera using a rectangle feature in visual SLAM. A rectangle feature, warped into a quadrilateral in the image by perspective transformation, is reconstructed by the Coupled Line Camera algorithm. To fully reconstruct a rectangle in real-world coordinates, the distance between the feature and the camera is needed; this distance can be measured with a stereo camera. Using properties of the line camera, the physical size of the rectangle feature can be derived from the distance. The correspondence between the quadrilateral in the image and the rectangle in real-world coordinates then recovers the relative pose between the camera and the feature through the homography. To evaluate the performance, we analyzed the results of the proposed method against reference poses in the Gazebo robot simulator.
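The homography step described above can be sketched with a direct linear transform (DLT) over the four corner correspondences. The corner values below are invented for illustration; the paper additionally recovers the rectangle's physical size from the stereo distance before this step.

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate 3x3 homography H with dst ~ H @ src from >= 4 point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)        # null-space vector = stacked rows of H
    return H / H[2, 2]

rect = [(0, 0), (2, 0), (2, 1), (0, 1)]          # rectangle corners (world, metric)
quad = [(10, 10), (50, 14), (48, 36), (12, 32)]  # observed quadrilateral (image)
H = dlt_homography(rect, quad)
p = H @ np.array([0.0, 0.0, 1.0])
print(p[:2] / p[2])  # maps the first corner back to ~(10, 10)
```

Given the camera intrinsics, such a plane-induced homography can then be decomposed into the rotation and translation between camera and feature.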

Pose-graph optimized displacement estimation for structural displacement monitoring

  • Lee, Donghwa;Jeon, Haemin;Myung, Hyun
    • Smart Structures and Systems, Vol. 14 No. 5, pp. 943-960, 2014
  • A visually servoed paired structured light system (ViSP) was recently proposed as a novel method for estimating 6-DOF (degree-of-freedom) relative displacement in civil structures. To apply the ViSP to massive structures, multiple ViSP modules must be installed in a cascaded manner; in this configuration, estimation errors propagate through the ViSP modules. To resolve this problem, a displacement estimation error back-propagation (DEEP) method was previously proposed. However, the DEEP method has some disadvantages: the displacement range of each ViSP module must be constrained, and displacement errors are corrected sequentially, so the estimation errors are not considered as a whole. To address this, a pose-graph optimized displacement estimation (PODE) method is proposed in this paper. The PODE method is based on a graph-based optimization technique that considers all errors at the same time, and it requires no constraints on the movement of the ViSP modules. Simulations and experiments were conducted to validate the performance of the proposed method. The results show that the PODE method reduces the propagated errors compared with the previous work.
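The difference between sequential correction and graph-based joint optimization can be sketched in 1D: nodes are module displacements, edges are noisy relative measurements, and all constraints are stacked into one least-squares problem solved at once. The measurement values and the extra long-range edge below are illustrative, not from the paper.

```python
import numpy as np

def solve_pose_graph(n_nodes, edges):
    """edges: list of (i, j, z) meaning x_j - x_i ≈ z; node x_0 is fixed to 0."""
    A = np.zeros((len(edges), n_nodes - 1))  # unknowns are x_1..x_{n-1}
    b = np.zeros(len(edges))
    for row, (i, j, z) in enumerate(edges):
        if j > 0: A[row, j - 1] += 1.0
        if i > 0: A[row, i - 1] -= 1.0
        b[row] = z
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)  # joint least-squares over all edges
    return np.concatenate(([0.0], sol))

# True displacements are 0, 1, 2, 3. Each chain edge carries a +0.1 bias,
# while the long-range edge (0 -> 3) happens to be accurate.
edges = [(0, 1, 1.1), (1, 2, 1.1), (2, 3, 1.1), (0, 3, 3.0)]
x = solve_pose_graph(4, edges)
print(x)  # joint estimate of all node displacements
```

Integrating the chain edges sequentially would give x3 = 3.3 (the biases accumulate), while the joint solution pulls the whole chain back toward the accurate long-range measurement, which is the benefit the PODE method exploits over sequential error correction.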

Reliable Camera Pose Estimation from a Single Frame with Applications for Virtual Object Insertion

  • 박종승;이범종
    • The KIPS Transactions: Part B, Vol. 13B No. 5, pp. 499-506, 2006
  • In this paper, we propose a fast and reliable camera pose estimation method for virtual object insertion in real-time augmented reality systems. The camera's rotation matrix and translation vector are estimated from a single frame by extracting the feature points of a marker. A factorization technique under the orthographic projection model is used for camera pose estimation. Since this factorization assumes that all feature points of the object have the same depth, the accuracy of the computed camera pose varies with the choice of the reference point that serves as the depth baseline and with the distribution of the points. We propose a flexible reference-point selection method that works well in typical real environments, together with an outlier removal method. Based on the proposed camera pose estimation method, we implemented a video augmentation system that inserts virtual objects at detected marker positions. Experiments on various videos of real environments showed that the proposed camera pose estimation technique is as fast as existing techniques, more stable than existing methods, and applicable to a variety of augmented reality applications.
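The outlier-removal step mentioned above can be sketched as a simple residual test: after a first pose fit, feature points whose reprojection residual is far above the median are discarded before re-estimation. The residual values and the median-multiple threshold below are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

def reject_outliers(residuals, k=3.0):
    """Keep indices whose residual is within k times the median residual."""
    med = np.median(residuals)
    return np.where(residuals <= k * med)[0]

# Hypothetical per-point reprojection residuals from a first pose estimate;
# index 3 is a grossly mismatched marker feature.
residuals = np.array([0.2, 0.1, 0.15, 5.0, 0.12])
inliers = reject_outliers(residuals)
print(inliers)  # index 3 removed
```

The surviving points would then be fed back into the orthographic factorization, with the reference point chosen among them.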

6 DOF Pose Estimation of Polyhedral Objects Based on Geometric Features in X-ray Images

  • Kim, Jae-Wan;Roh, Young-Jun;Cho, Hyung-S.;Jeon, Hyoung-Jo;Kim, Hyeong-Cheol
    • Institute of Control, Robotics and Systems: Conference Proceedings, 2001 ICCAS, pp. 63.4-63, 2001
  • X-ray imaging can be a unique method for monitoring and analyzing, in real time, the motion of mechanical parts that are invisible from the outside. Our problem is to identify the pose, i.e. the position and orientation, of an object from x-ray projection images. It is assumed here that the x-ray imaging conditions, including the relative coordinates of the x-ray source and the image plane, are predetermined and that the object geometry is known. In this situation, the x-ray image of an object at a given pose can be estimated computationally using an a priori known x-ray projection image model. This rests on the assumption that the pose of an object is determined uniquely by a given x-ray projection image. Thus, once we have a numerical model of the x-ray imaging process, the x-ray image of the known object at any pose can be estimated ...

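The model-based matching idea above can be sketched as a search over candidate poses for the one whose simulated projection best matches the observation. A toy 2D point object, parallel projection, and a 1-DOF rotation search stand in for the full 6-DOF polyhedral problem with a real x-ray projection model.

```python
import numpy as np

model = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])  # known object geometry (2D points)

def project(points, theta):
    """Rotate the object by theta, then project onto the x-axis (parallel rays)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return (points @ R.T)[:, 0]

observed = project(model, 0.4)  # "image" of the object at the unknown pose
thetas = np.linspace(0.0, np.pi / 2, 1001)
errors = [np.sum((project(model, t) - observed) ** 2) for t in thetas]
best = thetas[int(np.argmin(errors))]
print(best)  # recovers ~0.4 rad
```

In the paper's setting `project` would be replaced by the numerical x-ray imaging model, and the search would run over position and orientation using geometric features rather than raw pixels.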

Laser pose calibration of ViSP for precise 6-DOF structural displacement monitoring

  • Shin, Jae-Uk;Jeon, Haemin;Choi, Suyoung;Kim, Youngjae;Myung, Hyun
    • Smart Structures and Systems, Vol. 18 No. 4, pp. 801-818, 2016
  • To estimate structural displacement, a visually servoed paired structured light system (ViSP) was proposed in previous studies. The ViSP is composed of two sides facing each other, each with one or two laser pointers, a 2-DOF manipulator, a camera, and a screen. By calculating the positions of the laser beams projected onto the screens and the rotation angles of the manipulators, the relative 6-DOF displacement between the two sides can be estimated. Although the performance of the system has been verified through various simulations and experimental tests, it has a limitation: the accuracy of the displacement measurement depends on the alignment of the laser pointers. In deriving the kinematic equation of the ViSP, the laser pointers were assumed to be installed perfectly normal to the screen on the same side. In reality, however, this is very difficult to achieve due to installation errors; the pose of the laser pointers should therefore be calibrated carefully before measuring the displacement. To calibrate the initial pose of the laser pointers, a specially designed jig device is built and employed. Experimental tests were performed to validate the proposed calibration method, and the results show that calibrating the initial pose increases the accuracy of the 6-DOF displacement estimation.

Fall Detection Algorithm Based on Machine Learning

  • 정준현;김남호
    • Korea Institute of Information and Communication Engineering: Conference Proceedings, 2021 Fall Conference, pp. 226-228, 2021
  • We propose a vision-based fall detection algorithm using the Pose Detection feature of the ML Kit API released by Google. Falls are recognized using the 33 3D body landmarks extracted by the pose detection algorithm. A k-NN classifier analyzes the extracted landmarks to recognize falls. A normalization step makes the algorithm insensitive to the size of the image and of the human body within it, and falls are recognized by analyzing the relative motion of the landmarks. In our experiment, falls were recognized in 13 out of 13 test videos, a success rate of 100%.

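The normalization and k-NN classification steps described above can be sketched as follows. The tiny synthetic "poses" are stand-ins (ML Kit's pose detector returns 33 landmarks per frame), and the feature design is an illustrative assumption, not the paper's exact one.

```python
import numpy as np

def normalize(kp):
    """Center keypoints and scale by their spread, removing body size/position."""
    kp = kp - kp.mean(axis=0)
    return kp / (np.linalg.norm(kp) + 1e-9)

def knn_predict(pose, X_train, y_train, k=3):
    """Majority vote among the k nearest training poses."""
    d = np.linalg.norm(X_train - pose.ravel(), axis=1)
    votes = y_train[np.argsort(d)[:k]]
    return int(np.bincount(votes).argmax())

# Toy training set: "standing" poses are tall (large y-extent), "fallen" poses
# are wide (large x-extent). Labels: 0 = no fall, 1 = fall.
rng = np.random.default_rng(1)
standing = [normalize(np.column_stack([rng.normal(0, 0.1, 5), np.linspace(0, 1, 5)])).ravel()
            for _ in range(10)]
fallen = [normalize(np.column_stack([np.linspace(0, 1, 5), rng.normal(0, 0.1, 5)])).ravel()
          for _ in range(10)]
X = np.array(standing + fallen)
y = np.array([0] * 10 + [1] * 10)

test_pose = normalize(np.column_stack([np.linspace(0, 1, 5), np.zeros(5)]))  # lying down
print(knn_predict(test_pose, X, y))  # 1 (fall)
```

Per-frame labels like this would then be combined with the landmarks' motion over time to distinguish a fall from simply lying down.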

Bundle Adjustment and 3D Reconstruction Method for Underwater Sonar Image

  • 신영식;이영준;최현택;김아영
    • The Journal of Korea Robotics Society, Vol. 11 No. 2, pp. 51-59, 2016
  • In this paper we present (1) an analysis of imaging sonar measurements for two-view relative pose estimation of an autonomous vehicle and (2) a bundle adjustment and 3D reconstruction method using imaging sonar. Sonar has been a popular sensor for underwater applications due to its robustness to water turbidity and limited visibility in the water medium. While vision-based motion estimation has been applied to many ground vehicles for motion estimation and 3D reconstruction, imaging sonar poses challenges for estimating relative motion in the sensor frame. We focus on the fact that the sonar measurement is inherently ambiguous. This paper illustrates the source of the ambiguity in sonar measurements and summarizes the assumptions required for sonar-based robot navigation. For validation, we synthetically generated underwater seafloors of varying complexity and analyzed the error in the motion estimation.
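The measurement ambiguity discussed above can be made concrete: an imaging sonar reports range and bearing (azimuth) but not elevation, so distinct 3D points on the same range/azimuth arc yield identical measurements. The coordinates below are illustrative.

```python
import numpy as np

def sonar_measurement(p):
    """Map a 3D point (x, y, z) to (range, azimuth); the elevation angle is lost."""
    x, y, z = p
    r = np.sqrt(x * x + y * y + z * z)
    theta = np.arctan2(y, x)
    return np.array([r, theta])

p1 = np.array([3.0, 1.0, 0.0])            # a point in the sonar's zero-elevation plane
r = np.linalg.norm(p1)
# Construct a second point with the same azimuth (scaled x, y) whose remaining
# range budget is moved into elevation (z), keeping total range identical.
scale = 0.8
p2 = np.array([3.0 * scale, 1.0 * scale, np.sqrt(r**2 - (r * scale) ** 2)])

m1, m2 = sonar_measurement(p1), sonar_measurement(p2)
print(np.allclose(m1, m2))  # True: the sonar cannot tell p1 from p2
```

This lost elevation is exactly what the paper's assumptions and bundle adjustment over multiple views must compensate for.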