Title/Summary/Keyword: lens distortion

Research of Satellite Autonomous Navigation Using Star Sensor Algorithm (별 추적기 알고리즘을 활용한 위성 자율항법 연구)

  • Hyunseung Kim; Chul Hyun; Hojin Lee; Donggeon Kim
    • Journal of Space Technology and Applications, v.4 no.3, pp.232-243, 2024
  • To perform various missions in space, including planetary exploration, estimating the position of a satellite in orbit is essential because it directly affects the probability of mission success. As a study on autonomous satellite navigation, this work estimates the satellite's attitude and real-time orbital position using a star sensor algorithm with two star trackers and an earth sensor. To implement the star sensor algorithm, a simulator was constructed, and the position error of the satellite estimated by the proposed technique was analyzed. Due to lens distortion and errors in the centroid-finding algorithm, the average attitude estimation error was about 2.6 rad in the roll direction, and the resulting position error averaged 516 m in the altitude direction. The proposed attitude and position estimation technique is expected to contribute to analyzing star sensor performance and improving position estimation accuracy.
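
The abstract does not detail the attitude-determination step. A common approach for star trackers is to solve Wahba's problem from matched star directions; the minimal Python sketch below uses the SVD solution, offered as an illustration rather than as the paper's algorithm. The input names `body_vecs` (measured star directions) and `inertial_vecs` (catalog directions) are assumptions.

```python
import numpy as np

def wahba_svd(body_vecs, inertial_vecs, weights=None):
    """Estimate the rotation R (inertial -> body) that best maps catalog
    star directions onto measured star-tracker directions, by solving
    Wahba's problem with the SVD method.

    body_vecs, inertial_vecs: (N, 3) arrays of unit vectors.
    """
    b = np.asarray(body_vecs, dtype=float)
    r = np.asarray(inertial_vecs, dtype=float)
    w = np.ones(len(b)) if weights is None else np.asarray(weights, float)

    # Attitude profile matrix B = sum_i w_i * b_i * r_i^T
    B = (w[:, None] * b).T @ r

    U, _, Vt = np.linalg.svd(B)
    # Enforce det(R) = +1 so the result is a proper rotation
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    return U @ np.diag([1.0, 1.0, d]) @ Vt
```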

Georeferencing of Indoor Omni-Directional Images Acquired by a Rotating Line Camera (회전식 라인 카메라로 획득한 실내 전방위 영상의 지오레퍼런싱)

  • Oh, So-Jung; Lee, Im-Pyeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.30 no.2, pp.211-221, 2012
  • To utilize omni-directional images acquired by a rotating line camera for indoor spatial information services, the images must be precisely registered to an indoor coordinate system. In this study, we therefore develop a georeferencing method to estimate the exterior orientation parameters of an omni-directional image, that is, the position and attitude of the camera at the acquisition time. First, we derive the collinearity equations for the omni-directional image by geometrically modeling the rotating line camera. We then estimate the exterior orientation parameters from these equations using indoor control points. Experimental results on real data indicate that the exterior orientation parameters are estimated with a precision of 1.4 mm in position and 0.05° in attitude. The residuals are within 3 pixels horizontally and 10 pixels vertically. In particular, the vertical residuals retain systematic errors caused mainly by lens distortion, which should be eliminated through a camera calibration process. Using omni-directional images georeferenced precisely with the proposed method, we can generate high-resolution indoor 3D models and build sophisticated augmented reality services on top of them.
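
The systematic vertical residuals attributed to lens distortion would typically be removed with a radial distortion model estimated during camera calibration. The paper's actual calibration model is not given; as an illustration only, the sketch below inverts the common two-term Brown radial model by fixed-point iteration. The coefficients `k1` and `k2` and the use of principal-point-centred normalized coordinates are assumptions.

```python
import numpy as np

def undistort_radial(xd, yd, k1, k2):
    """Invert the two-term Brown radial distortion model
        x_d = x_u * (1 + k1*r_u^2 + k2*r_u^4),  r_u^2 = x_u^2 + y_u^2,
    for normalized, principal-point-centred image coordinates,
    using a fixed-point iteration.
    """
    xd = np.asarray(xd, dtype=float)
    yd = np.asarray(yd, dtype=float)
    xu, yu = xd.copy(), yd.copy()      # initial guess: no distortion
    for _ in range(10):                # a few iterations suffice for mild distortion
        r2 = xu**2 + yu**2
        f = 1.0 + k1 * r2 + k2 * r2**2
        xu, yu = xd / f, yd / f
    return xu, yu
```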

Multi-camera Calibration Method for Optical Motion Capture System (광학식 모션캡처를 위한 다중 카메라 보정 방법)

  • Shin, Ki-Young; Mun, Joung-H.
    • Journal of the Korea Society of Computer and Information, v.14 no.6, pp.41-49, 2009
  • In this paper, a multi-camera calibration algorithm for optical motion capture systems is proposed. The algorithm performs a first calibration stage using the DLT (direct linear transformation) method and a 3-axis calibration frame with 7 optical markers, followed by a second stage in which a wand of known length is waved throughout the desired calibration volume (the so-called wand dance). The first stage yields not only the camera parameters but also the radial lens distortion parameters, and these serve as the initial solution for the optimization performed in the second stage. The objective function of this optimization minimizes the difference in distance between the real markers and the reconstructed markers. To verify the proposed algorithm, re-projection errors are calculated, along with the distances among the markers on the 3-axis frame and on the wand, and the results are compared with a commercial motion capture system. For the 3D reconstruction of the 3-axis frame, the average error is 1.7042 mm for the commercial system and 0.8765 mm for the proposed algorithm, i.e., the proposed algorithm reduces the average error to 51.4 percent of the commercial system's. For the distance between markers on the wand, the average error is 1.8897 mm for the commercial system and 2.0183 mm for the proposed algorithm.
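
As an illustration of the first calibration stage, here is a textbook DLT estimate of the 3x4 camera projection matrix from 3D-2D marker correspondences (the abstract names DLT explicitly). This sketch omits the radial distortion parameters and the second-stage wand optimization described in the abstract, and `dlt_projection_matrix` is a hypothetical helper name.

```python
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """Estimate the 3x4 projection matrix P (defined up to scale) from
    n >= 6 3D-2D correspondences with the direct linear transformation.

    world_pts: (n, 3) marker positions on the calibration frame.
    image_pts: (n, 2) corresponding image observations.
    """
    A = []
    for (x, y, z), (u, v) in zip(np.asarray(world_pts, dtype=float),
                                 np.asarray(image_pts, dtype=float)):
        # Each correspondence contributes two linear equations in the
        # 12 entries of P, from u = (P1.X)/(P3.X) and v = (P2.X)/(P3.X).
        A.append([x, y, z, 1, 0, 0, 0, 0, -u*x, -u*y, -u*z, -u])
        A.append([0, 0, 0, 0, x, y, z, 1, -v*x, -v*y, -v*z, -v])
    # Least-squares solution of A p = 0: the right singular vector
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)
```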