• Title/Summary/Keyword: 3D distance estimation (3차원 거리 추정)


Depth estimation of an underwater target using DIFAR sonobuoy (다이파 소노부이를 활용한 수중표적 심도 추정)

  • Lee, Young gu
    • The Journal of the Acoustical Society of Korea
    • /
    • v.38 no.3
    • /
    • pp.302-307
    • /
    • 2019
  • In modern Anti-Submarine Warfare there are various ways to locate a submarine in two-dimensional space, but for more effective tracking of and attack on a submarine the depth of the target is a critical factor, and it has been difficult to determine until now. In this paper, a possible solution to the depth estimation of submarines is proposed that utilizes DIFAR (Directional Frequency Analysis and Recording) sonobuoy information, namely the contact bearings at or prior to CPA (Closest Point of Approach) and the target's Doppler signals. The relative depth of the target is determined by applying the Pythagorean theorem to the slant range and horizontal range between the target and the hydrophone of a DIFAR sonobuoy. The slant range is calculated using the Doppler shift and the target's velocity; the horizontal range can be obtained by applying a simple trigonometric function to two consecutive contact bearings and the travel distance of the target. The simulation results show that the algorithm is sensitive to the elevation angle, which is determined by the relative depth and the horizontal distance between the sonobuoy and the target, and that a precise measurement of the Doppler shift is crucial.
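The final combination step of this abstract can be sketched as follows. The slant-range expression below uses the standard narrowband CPA approximation |df/dt| = f0·v²/(c·R), which is our assumption, not necessarily the authors' exact derivation; all numbers are hypothetical.

```python
import math

SOUND_SPEED = 1500.0  # nominal speed of sound in seawater, m/s

def slant_range_at_cpa(f0, doppler_rate, speed):
    """Slant range at CPA from the Doppler sweep rate.

    Near CPA a narrowband tonal at f0 sweeps through f0 with rate
    |df/dt| = f0 * v**2 / (c * R), so R = f0 * v**2 / (c * |df/dt|).
    """
    return f0 * speed ** 2 / (SOUND_SPEED * abs(doppler_rate))

def relative_depth(slant_range, horizontal_range):
    """Relative target depth from the Pythagorean theorem."""
    return math.sqrt(slant_range ** 2 - horizontal_range ** 2)

# Hypothetical numbers: a 300 Hz tonal, 5 m/s target, -0.01 Hz/s sweep
r = slant_range_at_cpa(300.0, -0.01, 5.0)  # 500 m slant range
d = relative_depth(r, 400.0)               # 300 m relative depth
```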

3D Range Finding Algorithm Using Small Translational Movement of Stereo Camera (스테레오 카메라의 미소 병진운동을 이용한 3차원 거리추출 알고리즘)

  • Park, Kwang-Il;Yi, Jae-Woong;Oh, Jun-Ho
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.12 no.8
    • /
    • pp.156-167
    • /
    • 1995
  • In this paper, we propose a 3-D range finding method for the situation in which a stereo camera undergoes small translational motion. Binocular stereo generally tends to produce stereo correspondence errors and requires a huge amount of computation. The former drawback arises because the additional constraints used to regularize the correspondence problem are not always true for every scene; the latter arises because either correlation or optimization is used to find the correct disparity. We present a method that overcomes these drawbacks by actively moving the stereo camera. The method utilizes the motion parallax acquired by monocular motion stereo to restrict the search range of the binocular disparity. Using only the uniqueness of disparity then makes it possible to find reliable binocular disparity. Experimental results with real scenes are presented to demonstrate the effectiveness of this method.

  • PDF
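The core idea of restricting the binocular disparity search with a motion-parallax prediction can be sketched on toy 1-D scanlines (the SAD cost, window size, and signals here are illustrative assumptions, not the paper's implementation):

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length windows."""
    return sum(abs(x - y) for x, y in zip(a, b))

def best_disparity(left, right, x, w, candidates):
    """Pick the disparity (from a restricted candidate set) minimizing
    SAD between a window around x in the left scanline and the shifted
    window in the right scanline."""
    patch = left[x - w : x + w + 1]
    return min(candidates,
               key=lambda d: sad(patch, right[x - d - w : x - d + w + 1]))

right = [0] * 10 + [5, 7, 4] + [0] * 10
left = [0] * 3 + right[:-3]  # left view shifted 3 px by the baseline
# The motion parallax from the small translation predicts d near 3, so
# only the narrow band [2, 3, 4] is searched instead of the full range.
d = best_disparity(left, right, 14, 2, [2, 3, 4])  # finds d == 3
```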

3D Accuracy Analysis of Mobile Phone-based Stereo Images (모바일폰 기반 스테레오 영상에서 산출된 3차원 정보의 정확도 분석)

  • Ahn, Heeran;Kim, Jae-In;Kim, Taejung
    • Journal of Broadcast Engineering
    • /
    • v.19 no.5
    • /
    • pp.677-686
    • /
    • 2014
  • This paper analyzes the 3D accuracy of stereo images captured with a mobile phone. For the evaluation, we compared the accuracy obtained at different convergence angles. To calculate the 3D model-space coordinates of control points, we performed interior orientation, distortion correction, and image geometry estimation, and the quantitative 3D accuracy was evaluated by transforming the 3D model-space coordinates into 3D object-space coordinates. The results showed that relatively precise 3D information is generated at convergence angles greater than 17°. Consequently, a stereo model structure with an adequate convergence angle, set by the measurement distance and the baseline distance, is necessary for accurate 3D information generation. The results are expected to be used for stereoscopic 3D content and for 3D reconstruction from images captured by a mobile phone camera.
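For a symmetric convergent stereo pair, the convergence angle follows directly from the baseline and the measurement distance; a minimal sketch (the symmetric-geometry formula and the example numbers are our assumptions, not taken from the paper):

```python
import math

def convergence_angle_deg(baseline, distance):
    """Convergence angle of a symmetric stereo pair: two cameras a
    `baseline` apart, both aimed at a point `distance` away."""
    return math.degrees(2.0 * math.atan(baseline / (2.0 * distance)))

# A ~0.3 m baseline at 1 m object distance gives roughly the 17 degree
# threshold the paper reports for precise 3D information.
theta = convergence_angle_deg(0.3, 1.0)
```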

3-D Model-Based Tracking for Mobile Augmented Reality (모바일 증강현실을 위한 3차원 모델기반 카메라 추적)

  • Park, Jungsik;Seo, Byung-Kuk;Park, Jong-Il
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2011.07a
    • /
    • pp.65-68
    • /
    • 2011
  • This paper proposes a 3D model-based camera tracking technique for mobile augmented reality. 3D model-based tracking is applicable to non-planar objects and is particularly useful in texture-less environments. In the proposed method, correspondences are found between the 3D model of the target object and edges extracted from the image, and the current camera pose (position and orientation) is tracked from the previous pose by estimating the camera motion that minimizes the distances between the correspondences. Its usefulness is verified by tracking the camera pose with the proposed method on an Android smartphone and augmenting 3D virtual content.

  • PDF

Near-field Source Localization Method using Matrix Pencil (Matrix Pencil 기법을 이용한 근거리 음원 위치 추정 기법)

  • Jung, Tae-Jin;Lee, Su-Hyoung;Yoon, Kyung Sik;Lee, KyunKyung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.32 no.3
    • /
    • pp.247-251
    • /
    • 2013
  • In this paper, a near-field source localization algorithm using the Matrix Pencil method with a Uniform Linear Array (ULA) is presented. Based on the centrosymmetry of the ULA, the proposed algorithm decouples the steering vectors, which allows bearing estimation using the Matrix Pencil method. With the estimated bearings, the range of each source is then obtained by defining a 1D MUSIC spectrum. Simulation results are presented to validate the performance of the proposed algorithm.

Optimum Design of the Microphone Sensor Array for 3D TDOA Positioning System (3차원 TDOA 위치인식 시스템의 마이크 센서 배열 최적 설계)

  • Oh, Jongtaek
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.14 no.1
    • /
    • pp.31-36
    • /
    • 2014
  • Research on indoor positioning systems has recently been active to support indoor location-based services. In a three-dimensional positioning system based on acoustic signals and TDOA technology, the error characteristics of the estimated source position vary with the number of microphones and the pattern of the microphone array. In this paper, the estimated position error is analyzed according to the measured distance error between the microphones and the signal source, and the optimum microphone array is determined by considering both the patterns and the total amount of the estimated position error.
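A minimal illustration of 3D TDOA positioning with four microphones, using a brute-force grid search over candidate source positions (a deliberately simple stand-in for the estimators analyzed in the paper; the array geometry, grid extent, and step size are arbitrary assumptions):

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def tdoa_grid_search(mics, range_diffs, span=5.0, step=0.25):
    """Brute-force 3D TDOA solver: return the grid point whose range
    differences to mics[0] best match the measured ones."""
    best, best_err = None, float("inf")
    n = int(span / step)
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            for k in range(0, n + 1):  # source assumed above the floor
                p = (i * step, j * step, k * step)
                d0 = dist(p, mics[0])
                err = sum((dist(p, m) - d0 - rd) ** 2
                          for m, rd in zip(mics[1:], range_diffs))
                if err < best_err:
                    best, best_err = p, err
    return best

# Four microphones (the minimum for 3D TDOA) and a hypothetical source
mics = [(0.0, 0.0, 0.0), (4.0, 0.0, 0.0), (0.0, 4.0, 0.0), (0.0, 0.0, 4.0)]
src = (1.0, 2.0, 1.0)
d0 = dist(src, mics[0])
diffs = [dist(src, m) - d0 for m in mics[1:]]
est = tdoa_grid_search(mics, diffs)  # recovers (1.0, 2.0, 1.0)
```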

Analysis on Line-Of-Sight (LOS) Vector Projection Errors according to the Baseline Distance of GPS Orbit Errors (GPS 궤도오차의 기저선 거리에 따른 시선각 벡터 투영오차 분석)

  • Jang, JinHyeok;Ahn, JongSun;Bu, Sung-Chun;Lee, Chul-Soo;Sung, SangKyung;Lee, Young Jae
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.45 no.4
    • /
    • pp.310-317
    • /
    • 2017
  • Recently, many nations have been operating and developing Global Navigation Satellite Systems (GNSS). Satellite Based Augmentation Systems (SBAS), which use geostationary satellites, are also operated to improve the performance of GNSS; the most widely used SBAS is the Wide Area Augmentation System (WAAS) for GPS, developed by the United States. SBAS uses various algorithms to offer guaranteed accuracy, availability, continuity, and integrity to its users. One of these algorithms guarantees satellite integrity: it calculates the satellite orbit errors, generates corrections, and provides them to the users. The satellite orbit errors are calculated in three-dimensional space in this step, and the reference-station placement is crucial for this calculation: the wider the reference placement, the more the LOS vectors spread, and the more the accuracy improves. The regional features of the US and Korea therefore need to be compared. Korea is geographically very narrow compared to the US, so there may be a problem if the three-dimensional method of satellite orbit error calculation is used without modification. This paper suggests a method that uses scalar values, instead of three-dimensional space, to calculate satellite orbit errors, and examines its feasibility for a narrow area. The suggested method uses the scalar value obtained by projecting the orbit errors onto the LOS vector between a reference station and a satellite. The change in errors according to the baseline distance is examined for Korea and America, and the difference in the error change is compared to demonstrate the feasibility of the proposed method.
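The scalar quantity described here, the projection of the 3D orbit-error vector onto the receiver-to-satellite line of sight, is a dot product with the LOS unit vector; a minimal sketch with hypothetical coordinates:

```python
import math

def los_projection(orbit_error, receiver, satellite):
    """Scalar orbit error: projection of the 3D orbit-error vector
    onto the unit line-of-sight vector from receiver to satellite."""
    los = [s - r for s, r in zip(satellite, receiver)]
    norm = math.sqrt(sum(c * c for c in los))
    return sum(e * c / norm for e, c in zip(orbit_error, los))

# Hypothetical geometry: satellite straight overhead at GPS altitude,
# 3 m of orbit error along the radial (z) direction
err = los_projection((0.0, 0.0, 3.0), (0.0, 0.0, 0.0), (0.0, 0.0, 20200e3))
# err == 3.0 here; a distant second receiver, with a tilted LOS, would
# see a different scalar, which is the baseline dependence the paper studies
```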

Intrinsic Camera Calibration Based on Radical Center Estimation (근심 추정 기반의 내부 카메라 파라미터 보정)

  • 이동훈;김복동;정순기
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2004.04b
    • /
    • pp.742-744
    • /
    • 2004
  • This paper proposes a method for estimating the intrinsic camera parameters using two orthogonal vanishing points. Camera calibration is an important step in obtaining 3D information from 2D images. Most existing methods based on vanishing points estimate the parameters using three orthogonal vanishing points, but in practice it is difficult to acquire an image containing three orthogonal vanishing points. This paper therefore proposes a new geometric and intuitive method for intrinsic camera calibration that uses two orthogonal vanishing points. The principal point and the focal length are derived from the relations among multiple hemispheres, based on geometric constraints founded on Thales' theorem.

  • PDF
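The paper derives the principal point and focal length geometrically via hemispheres; the well-known algebraic counterpart, sketched here under the assumption that the principal point is already known, is that two orthogonal vanishing points v1, v2 satisfy (v1 − pp)·(v2 − pp) = −f²:

```python
import math

def focal_from_two_vps(v1, v2, pp):
    """Focal length from two orthogonal vanishing points (in pixels),
    given the principal point pp: (v1 - pp) . (v2 - pp) = -f**2."""
    d = sum((a - p) * (b - p) for a, b, p in zip(v1, v2, pp))
    if d >= 0:
        raise ValueError("vanishing points inconsistent with orthogonality")
    return math.sqrt(-d)

# Synthetic check: with f = 800 and the principal point at the origin,
# the orthogonal directions (1,0,1) and (-1,0,1) project to vanishing
# points (800, 0) and (-800, 0).
f = focal_from_two_vps((800.0, 0.0), (-800.0, 0.0), (0.0, 0.0))  # 800.0
```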

Estimation of Joint Roughness Coefficient(JRC) using Modified Divider Method (수정 분할자법을 이용한 절리 거칠기 계수(JRC)의 정량화)

  • Jang Hyun-Shic;Jang Bo-An;Kim Yul
    • The Journal of Engineering Geology
    • /
    • v.15 no.3
    • /
    • pp.269-280
    • /
    • 2005
  • We assigned points on the surfaces of the standard roughness profiles every 0.1 mm along their length and measured the coordinates of the points. The lengths of each profile were then measured at different scales using the modified divider method. The fractal dimensions and the intercepts were determined by plotting length versus scale on a log-log plot. The fractal dimensions as well as the intercepts correlate well with the joint roughness coefficient (JRC). However, the product of the fractal dimension and the intercept shows an even better correlation with JRC, and we derived a new equation to estimate JRC from the fractal dimension and the intercept. The crossover length within which the correct fractal dimension can be determined was between 0.3 and 3.2 mm. We measured the joint roughness of 26 natural joints and calculated JRC using the equation suggested by Tse and Cruden (1979) and the new equation derived here. The JRC values calculated by the two equations are almost the same, indicating that the new equation is effective for measuring JRC.
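The divider principle underlying this abstract can be sketched as follows: measure the profile length with several divider openings, fit log L(s) against log s, and read the fractal dimension from the slope via L(s) ∝ s^(1−D). This is a simplified version that snaps the divider to the nearest sample, not the authors' modified divider method:

```python
import math

def divider_length(points, scale):
    """Walk a divider of opening `scale` along a densely sampled
    profile, snapping to samples; return steps * scale + leftover."""
    length, cur, i = 0.0, points[0], 0
    while True:
        j = i + 1
        while j < len(points) and math.dist(cur, points[j]) < scale:
            j += 1
        if j == len(points):
            return length + math.dist(cur, points[-1])
        length += scale
        cur, i = points[j], j

def fractal_dimension(points, scales):
    """Fit log L(s) against log s; L(s) ~ s**(1 - D) gives D."""
    xs = [math.log(s) for s in scales]
    ys = [math.log(divider_length(points, s)) for s in scales]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 1.0 - slope

# A straight segment is not fractal: its measured length is the same
# at every scale, so D comes out as 1.
line = [(k / 1024, 0.0) for k in range(10241)]  # length 10 units
D = fractal_dimension(line, [0.125, 0.25, 0.5, 1.0])
```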

Omnidirectional Camera Motion Estimation Using Projected Contours (사영 컨투어를 이용한 전방향 카메라의 움직임 추정 방법)

  • Hwang, Yong-Ho;Lee, Jae-Man;Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.5
    • /
    • pp.35-44
    • /
    • 2007
  • Since an omnidirectional camera system with a very large field of view can capture much information about the environment from only a few images, various studies on calibration and 3D reconstruction using omnidirectional images have been actively presented. Most line segments of man-made objects are projected to contours under the omnidirectional camera model; therefore, the corresponding contours among image sequences are useful for computing the camera transformations, including rotation and translation. This paper presents a novel two-step minimization method to estimate the extrinsic parameters of the camera from the corresponding contours. In the first step, coarse camera parameters are estimated by minimizing an angular error function between the epipolar planes and the back-projected vectors from each corresponding point. The final parameters are then computed by minimizing a distance error between the projected contours and the actual contours. Simulation results on synthetic and real images demonstrate that our algorithm achieves precise contour matching and camera motion estimation.