• Title/Abstract/Keyword: Landmark position

Search results: 107 items (processing time 0.029 s)

A Vision-based Position Estimation Method Using a Horizon

  • 신종진;남화진;김병주
    • Journal of the Korea Institute of Military Science and Technology / Vol. 15, No. 2 / pp.169-176 / 2012
  • GPS (Global Positioning System) is widely used for the position estimation of an aerial vehicle. However, GPS may be unavailable due to hostile jamming or for strategic reasons. A vision-based position estimation method can be effective when GPS does not work properly. In mountainous areas without any man-made landmarks, the horizon is a good feature for estimating the position of an aerial vehicle. In this paper, we present a new method to estimate the position of an aerial vehicle equipped with a forward-looking infrared camera. It is assumed that an INS (Inertial Navigation System) provides the attitudes of the aerial vehicle and the camera. The horizon extracted from an infrared image is compared with horizon models generated from a DEM (Digital Elevation Map). Because of the camera's narrow field of view, two images with different camera views are used to estimate a position. The algorithm is tested using real infrared images acquired on the ground. The experimental results show that the method can be used to estimate the position of an aerial vehicle.
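The horizon-matching step described above can be sketched as a simple profile comparison: each candidate position yields a DEM-predicted horizon elevation profile, and the position whose profile best matches the extracted horizon is selected. The minimal sketch below uses a sum-of-squared-error score; the paper's actual matching criterion and profile representation are not given here, so all names and values are illustrative.

```python
import numpy as np

def best_position(observed_horizon, candidate_horizons):
    """Return the index of the candidate position whose DEM-predicted horizon
    profile best matches the observed horizon (minimum sum of squared error)."""
    obs = np.asarray(observed_horizon, dtype=float)
    errors = [np.sum((np.asarray(h, dtype=float) - obs) ** 2)
              for h in candidate_horizons]
    return int(np.argmin(errors))

# Toy example: three candidate positions; the second one matches best.
observed = [10.0, 12.0, 11.0, 9.0]
candidates = [[8, 9, 10, 11], [10.1, 11.9, 11.0, 9.2], [5, 5, 5, 5]]
```

In practice the score would be computed over many candidate positions sampled from the DEM around the INS-predicted location.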

Comparison of landmark positions between Cone-Beam Computed Tomogram (CBCT) and Adjusted 2D lateral cephalogram

  • 손수정;전윤식;김민지
    • The Journal of Korean Academy of Prosthodontics / Vol. 52, No. 3 / pp.222-232 / 2014
  • Purpose: This study compared the coordinate values of landmarks between CBCT (Cone-Beam Computed Tomogram) and adjusted 2D lateral cephalograms corrected to 100% magnification (Adjusted 2D Lateral Cephalogram; Adj-Ceph), analyzed which landmarks differed, and evaluated whether conventional 2D analysis can be applied to CBCT analysis. Materials and methods: Fifty CBCT data sets from 50 adult patients and 50 lateral cephalograms of the same patients corrected to 100% magnification (Adj-Ceph) were compared on the horizontal and vertical axes. According to their position and bilateral overlap, landmarks were divided into four groups: points in the anterior cranium (group A), points in the middle and posterior cranium (group B), bilateral points (group C), and dental landmarks (group D). Paired t-tests were performed to test for significant differences in coordinate values. Results: On the horizontal axis (Y-axis), 11 landmarks showed significant differences: group B (S, Ar, Ba, PNS), group C (Po, Or, Hinge axis, Go), and group D (U1RP, U6CP, L6CP). On the vertical axis (Z-axis), all landmarks differed significantly (P<.01). In the coordinate-difference analysis, 13 landmarks on the horizontal axis showed significant differences of more than 1 mm; on the vertical axis, all landmarks except Sella in group B differed by more than 1 mm. Conclusion: Conventional lateral cephalometric analysis cannot be applied directly to CBCT analysis. Either 3D analysis or a new, modified 2D analysis should be used, in which 13 landmarks are corrected on the horizontal axis and 19 on the vertical axis.

Pose Determination of a Mobile-Task Robot Using an Active Calibration of the Landmark

  • Jin, Tae-Seok;Park, Jin-Woo;Lee, Jand-Myung
    • Institute of Control, Robotics and Systems: Conference Proceedings / ICCAS 2003 / pp.734-739 / 2003
  • A new method of estimating the pose of a mobile-task robot is developed based upon an active calibration scheme. The utility of a mobile-task robot, formed by the serial connection of a mobile robot and a task robot, is widely recognized. For the control of the mobile robot, an absolute position sensor is necessary. This paper proposes an active calibration scheme to estimate the pose of a mobile robot that carries a task robot on top. The scheme estimates the pose of the mobile robot from its relative position/orientation to an object whose location, size, and shape are known a priori. Through homogeneous transformations, the absolute position/orientation of the camera is calculated and then propagated to obtain the pose of the mobile robot. The proposed active calibration scheme is verified through experiments in a corridor.
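The pose propagation through homogeneous transformations can be illustrated with a small sketch. For simplicity it uses a 2D world (the paper works in 3D, but the composition rule is identical): the camera's absolute pose is the known world pose of the landmark composed with the measured camera-to-landmark relative pose. All numeric values are hypothetical.

```python
import numpy as np

def se2(x, y, theta):
    """2D homogeneous transform: rotation by theta plus translation (x, y)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

# World pose of a known landmark, and the camera's pose measured
# relative to that landmark (both hypothetical values).
T_world_landmark = se2(5.0, 2.0, 0.0)
T_landmark_camera = se2(1.0, 0.0, np.pi / 2)

# Propagate: absolute camera pose = landmark pose composed with relative pose.
T_world_camera = T_world_landmark @ T_landmark_camera
x, y = T_world_camera[0, 2], T_world_camera[1, 2]
```

A further composition with the fixed camera-to-base transform would give the mobile robot's pose, as the abstract describes.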


The Position Estimation Algorithm based on Stochastic Sensor Model of RFID

  • 지용관;문승욱;박희환;박장현
    • Korean Society for Precision Engineering: Conference Proceedings / 2005 Spring Conference / pp.1478-1482 / 2005
  • Since determining the current position of a mobile robot is an important issue, various methods have been proposed to date. This paper proposes a sensor model of RFID (Radio Frequency Identification) and a position estimation algorithm for mobile robots. We designed the RFID sensor model through repeated experiments; the model is stochastic, based on the sensing rate. Using this stochastic sensor model, we designed an algorithm that estimates the distance and direction of an RFID tag. We thereby confirmed that an RFID tag can be used as a landmark.
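The idea of a stochastic sensor model keyed to sensing rate can be sketched as a calibration table that is inverted at run time: the observed detection rate of a tag is mapped back to the calibrated distance with the closest rate. The table values below are hypothetical stand-ins for the repeated experiments described in the abstract.

```python
# Hypothetical calibration table: detection rate measured at each distance (m).
# In the paper this would come from repeated experiments with the RFID tag.
distances = [0.5, 1.0, 1.5, 2.0, 2.5]
detect_rates = [0.95, 0.80, 0.55, 0.30, 0.10]  # decreases with distance

def estimate_distance(observed_rate):
    """Invert the stochastic sensor model: return the calibrated distance
    whose detection rate is closest to the observed sensing rate."""
    best = min(range(len(distances)),
               key=lambda i: abs(detect_rates[i] - observed_rate))
    return distances[best]
```

Combining such distance estimates from several antennas (or robot poses) would also constrain the tag's direction, as the abstract suggests.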


Error Correction Algorithm of Position-Coded Pattern for Hybrid Indoor Localization

  • 김상훈;이승걸;김유성;박재현
    • Journal of Institute of Control, Robotics and Systems / Vol. 19, No. 2 / pp.119-124 / 2013
  • The recent increasing demand for indoor localization requires more advanced, hybrid technology. This paper proposes an application of a hybrid indoor localization method based on a position-coded pattern that can be combined with other existing indoor localization techniques such as vision, beacon, or landmark techniques. To reduce the pattern-recognition error rate, an error detection and correction algorithm based on Hamming code was applied. Indoor localization experiments based on the proposed algorithm were performed using a QCIF-grade CMOS sensor and a position-coded pattern with an area of 1.7 × 1.7 mm². The experiments showed that the position recognition error ratio was less than 0.9% with 0.4 mm localization accuracy. The results suggest that the proposed method can feasibly be applied to the localization of indoor mobile service robots.
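The Hamming-code error correction mentioned above can be illustrated with a textbook Hamming(7,4) encoder and single-bit corrector; this is a generic sketch, not the authors' exact code layout for the position pattern.

```python
def hamming74_encode(d):
    """Encode 4 data bits as a Hamming(7,4) codeword [p1 p2 d1 p3 d2 d3 d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Detect and correct a single flipped bit; return the corrected codeword."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-indexed position of the error, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1  # flip the erroneous bit back
    return c
```

Any single bit misread from the printed pattern is thus recoverable, which is what drives the low recognition error ratio reported above.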

Quantification of three-dimensional facial asymmetry for diagnosis and postoperative evaluation of orthognathic surgery

  • Cao, Hua-Lian;Kang, Moon-Ho;Lee, Jin-Yong;Park, Won-Jong;Choung, Han-Wool;Choung, Pill-Hoon
    • Maxillofacial Plastic and Reconstructive Surgery / Vol. 42 / pp.17.1-17.11 / 2020
  • Background: Three-dimensional computed tomography (3D-CT) has been widely used to evaluate facial asymmetry. This study proposed a method to quantify facial asymmetry based on 3D-CT. Methods: The normal standard group consisted of twenty-five male subjects with balanced faces and normal occlusion. Five anatomical landmarks were selected as reference points and ten anatomical landmarks as measurement points to evaluate facial asymmetry. The facial asymmetry index formula was designed using the distances between the landmarks. The index value on a specific landmark is zero when the landmarks are located in a three-dimensionally symmetric position, and it increases as the asymmetry of the landmarks increases. For the ten anatomical landmarks, the mean value of the facial asymmetry index on each landmark was obtained in the normal standard group. The facial asymmetry index was then applied to patients who had undergone orthognathic surgery, and preoperative facial asymmetry and postoperative improvement were evaluated. Results: The reference facial asymmetry index on each landmark in the normal standard group ranged from 1.77 to 3.38. A polygonal chart was drawn to visualize the degree of asymmetry. In three patients who had undergone orthognathic surgery, the facial asymmetry index clearly showed both the preoperative facial asymmetry and the postoperative improvement. Conclusions: The new facial asymmetry index can efficiently quantify the degree of facial asymmetry from 3D-CT and can serve as an evaluation standard for facial asymmetry analysis.
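Since the abstract does not reproduce the paper's exact index formula, the sketch below illustrates one plausible form of a landmark-based asymmetry score: mirror a right-side landmark across the midsagittal plane and take its distance to the paired left-side landmark, so a perfectly symmetric pair scores zero. The plane placement (x = 0) and the function itself are hypothetical.

```python
import numpy as np

def asymmetry_index(left, right, normal=np.array([1.0, 0.0, 0.0])):
    """Hypothetical asymmetry score: mirror the right-side landmark across the
    midsagittal plane (here the plane x = 0, with unit normal `normal`) and
    return its distance to the left-side landmark. Zero means the landmark
    pair is perfectly symmetric; larger values mean more asymmetry."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    mirrored = right - 2.0 * np.dot(right, normal) * normal
    return float(np.linalg.norm(mirrored - left))
```

In the paper, the midsagittal plane would be constructed from the five reference landmarks rather than assumed as a coordinate plane.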

Self-localization of a Mobile Robot for Decreasing the Error and VRML Image Overlay

  • 권방현;손은호;김영철;정길도
    • Journal of Institute of Control, Robotics and Systems / Vol. 12, No. 4 / pp.389-394 / 2006
  • Inaccurate localization exposes a robot to many dangerous conditions: it can cause the robot to move in the wrong direction or be damaged by collision with surrounding obstacles. There are numerous approaches to self-localization, using different modalities (vision, laser range finders, ultrasonic sonars). Since sensor information is generally uncertain and contains noise, much research has aimed at reducing this noise, but the accuracy is limited because most of it is based on statistical approaches. The goal of our research is to measure the robot location more exactly by matching a built VRML 3D model against the real vision image. To determine the position of the mobile robot, a landmark-localization technique is applied. Landmarks are any detectable structures in the physical environment; some approaches use vertical lines, others specially designed markers. In this paper, specially designed markers are used as landmarks. Given a known focal length and a single image of three landmarks, it is possible to compute the angular separation between the lines of sight of the landmarks. Image-processing and neural-network pattern-matching techniques are employed to recognize landmarks placed in the robot's working environment. After self-localization, the 2D vision scene is overlaid with the VRML scene.
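The angular-separation computation from a known focal length can be sketched for a pinhole camera: each landmark's image coordinate defines a line of sight, and the angle between two such lines follows from the arctangent. The function below uses only the horizontal image coordinate, measured in pixels from the principal point; these conventions are assumptions, not the paper's exact formulation.

```python
import math

def angular_separation(u1, u2, f):
    """Angle (radians) between the lines of sight to two landmarks whose
    image x-coordinates (pixels from the principal point) are u1 and u2,
    for a pinhole camera with focal length f in pixels."""
    return abs(math.atan2(u1, f) - math.atan2(u2, f))
```

With three landmarks, the two resulting angular separations constrain the camera position to the intersection of two circular arcs through the landmark pairs, which is the classical triangulation step behind the localization described above.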

Estimation of Precise Relative Position using INS/Vision Sensor Integrated System

  • 천세범;원대희;강태삼;성상경;이은성;조진수;이영재
    • Journal of the Korean Society for Aeronautical and Space Sciences / Vol. 36, No. 9 / pp.891-897 / 2008
  • GPS can provide precise relative navigation information, but it requires a reference station installed in the relevant area and is affected by the satellite visibility environment. To overcome these limitations of standalone GPS, this paper proposes an INS/vision sensor integrated system that uses the known geometric arrangement of landmarks. The proposed method can provide relative navigation information without a GPS reference station, as long as an image of pre-drawn landmarks is available. The performance of the proposed system was verified by a simple simulation, which confirmed that the relative navigation information can be improved.

VRML image overlay method for Robot's Self-Localization

  • 손은호;권방현;김영철;정길도
    • Korean Institute of Electrical Engineers: Conference Proceedings / 2006 Symposium, Information and Control Division / pp.318-320 / 2006
  • Inaccurate localization exposes a robot to many dangerous conditions: it can cause the robot to move in the wrong direction or be damaged by collision with surrounding obstacles. There are numerous approaches to self-localization, using different modalities (vision, laser range finders, ultrasonic sonars). Since sensor information is generally uncertain and contains noise, much research has aimed at reducing this noise, but the accuracy is limited because most of it is based on statistical approaches. The goal of our research is to measure the robot location more exactly by matching a built VRML 3D model against the real vision image. To determine the position of the mobile robot, a landmark-localization technique is applied. Landmarks are any detectable structures in the physical environment; some approaches use vertical lines, others specially designed markers. In this paper, specially designed markers are used as landmarks. Given a known focal length and a single image of three landmarks, it is possible to compute the angular separation between the lines of sight of the landmarks. Image-processing and neural-network pattern-matching techniques are employed to recognize landmarks placed in the robot's working environment. After self-localization, the 2D vision scene is overlaid with the VRML scene.


Omni Camera Vision-Based Localization for Mobile Robots Navigation Using Omni-Directional Images

  • 김종록;임미섭;임준홍
    • Journal of Institute of Control, Robotics and Systems / Vol. 17, No. 3 / pp.206-210 / 2011
  • Vision-based robot localization is challenging due to the vast amount of visual information available, which requires extensive storage and processing time. To deal with these challenges, we propose using features extracted from omni-directional panoramic images and present a localization method for a mobile robot equipped with an omni-directional camera. The core of the proposed scheme may be summarized as follows. First, we utilize an omni-directional camera that can capture instantaneous 360° panoramic images around the robot. Second, nodes around the robot are extracted using the correlation coefficients of the Circular Horizontal Line between the landmark and the currently captured image. Third, the robot position is determined from these locations by the proposed correlation-based landmark image matching. To accelerate computation, node candidates are assigned using color information, and the correlation values are calculated with Fast Fourier Transforms. Experiments show that the proposed method is effective for global localization of mobile robots and robust to lighting variations.
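The FFT-based correlation step can be sketched as a circular cross-correlation of two one-dimensional line profiles: the index of the correlation peak recovers the rotational offset between a stored landmark line and the current panoramic line. This is a generic sketch of the technique, not the paper's full node-matching pipeline, and the toy profile values are illustrative.

```python
import numpy as np

def circular_correlation(a, b):
    """Circular cross-correlation of two equal-length 1D signals via FFT.
    The argmax of the result gives the circular shift of `a` relative to `b`."""
    A = np.fft.fft(a)
    B = np.fft.fft(b)
    return np.real(np.fft.ifft(A * np.conj(B)))

# Toy panoramic line profile and a copy rotated by 3 samples.
line = np.array([0.0, 1.0, 3.0, 1.0, 0.0, 0.0, 0.0, 0.0])
rotated = np.roll(line, 3)
shift = int(np.argmax(circular_correlation(rotated, line)))
```

Computing the correlation in the frequency domain costs O(N log N) instead of O(N²), which is the speed-up the abstract attributes to the Fast Fourier Transform.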