• Title/Summary/Keyword: Camera Geometry


A Vanishing Point Detection Method Based on the Empirical Weighting of the Lines of Artificial Structures (인공 구조물 내 직선을 찾기 위한 경험적 가중치를 이용한 소실점 검출 기법)

  • Kim, Hang-Tae; Song, Wonseok; Choi, Hyuk; Kim, Taejeong
    • Journal of KIISE, v.42 no.5, pp.642-651, 2015
  • A vanishing point is a point at which parallel lines appear to converge when a camera lens projects 3D space onto a 2D image plane. Vanishing point detection uses the information contained in an image to locate this point, and it can be used to infer the relative distance between points in the image or to understand the geometry of a 3D scene. Since the artificial structures in an image generally contain parallel lines, line-detection-based vanishing point detection techniques aim to find the point where the parallel lines of artificial structures converge. To detect lines in an image, edge pixels are first found through edge detection, and lines are then extracted using the Hough transform. However, the various textures and noise in an image can hamper line detection, so not all of the lines converging toward the vanishing point are reliably found. To overcome this difficulty, each line must be assigned a weight according to how likely it is to pass through the vanishing point. Whereas previous studies assigned equal weights or used a simple weighting calculation, this paper proposes a new method of assigning weights to lines, based on the observation that the lines passing through vanishing points typically belong to artificial structures. Experimental results show that the proposed method reduces the vanishing point estimation error rate by 65% compared to existing methods.
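As a concrete illustration of how weighted lines can vote for a vanishing point, the sketch below estimates the point minimizing the weighted sum of squared distances to a set of detected lines. The line representation `(a, b, c)` for `a*x + b*y + c = 0`, the weights, and the function name are illustrative assumptions; the paper's actual empirical weighting scheme is not reproduced here.

```python
import math

def estimate_vanishing_point(lines, weights):
    """Weighted least-squares intersection of lines.

    Each line is (a, b, c) for a*x + b*y + c = 0 (not necessarily
    normalized); weights reflect confidence that the line passes
    through the vanishing point. Minimizes
    sum_i w_i * (a_i*x + b_i*y + c_i)^2 over (x, y).
    """
    # Accumulate the 2x2 normal equations:
    # [Saa Sab] [x]   [-Sac]
    # [Sab Sbb] [y] = [-Sbc]
    Saa = Sab = Sbb = Sac = Sbc = 0.0
    for (a, b, c), w in zip(lines, weights):
        n = math.hypot(a, b)
        a, b, c = a / n, b / n, c / n  # normalize so residual = distance
        Saa += w * a * a
        Sab += w * a * b
        Sbb += w * b * b
        Sac += w * a * c
        Sbc += w * b * c
    det = Saa * Sbb - Sab * Sab  # singular if all lines are parallel
    x = (-Sac * Sbb + Sbc * Sab) / det
    y = (-Sbc * Saa + Sac * Sab) / det
    return x, y
```

With real detections, the `lines` would come from a Hough transform over the edge map, and the weights from the structure-based scoring the paper proposes.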

Analysis on Mapping Accuracy of a Drone Composite Sensor: Focusing on Pre-calibration According to the Circumstances of Data Acquisition Area (드론 탑재 복합센서의 매핑 정확도 분석: 데이터 취득 환경에 따른 사전 캘리브레이션 여부를 중심으로)

  • Jeon, Ilseo; Ham, Sangwoo; Lee, Impyeong
    • Korean Journal of Remote Sensing, v.37 no.3, pp.577-589, 2021
  • Drone mapping systems can be applied to many fields, such as disaster damage investigation, environmental monitoring, and construction process monitoring. Integrating the individual sensors attached to a drone used to require complicated procedures, including time synchronization. Recently, a variety of composite sensors combining visual sensors with GPS/INS have been released. These composite sensors integrate multi-sensory data internally and provide geotagged image files to users. Therefore, before composite sensors are used in drone mapping systems, their mapping accuracy should be examined. In this study, we analyzed the mapping accuracy of a composite sensor, focusing on the data acquisition area and the effect of pre-calibration. In the first experiment, we analyzed how mapping accuracy varies with the number of ground control points (GCPs): with 2 GCPs, the total RMSE was reduced by about 40 cm, from more than 1 m to about 60 cm. In the second experiment, we assessed mapping accuracy depending on whether pre-calibration was conducted. When a few ground control points were available, pre-calibration did not affect mapping accuracy; when the image sequences formed weak geometry, however, pre-calibration proved essential to reducing possible mapping errors. In the absence of ground control points, pre-calibration also reduced mapping errors. Based on this study, we expect that future drone mapping systems using composite sensors will help streamline the survey and calibration process depending on the data acquisition circumstances.
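The total RMSE figures quoted above are typically computed from 3D residuals at checkpoints, i.e. the differences between mapped and surveyed coordinates. A minimal sketch of that computation follows; the function name and the residual format are illustrative assumptions, not the paper's actual evaluation code.

```python
import math

def total_rmse(residuals):
    """Total 3D RMSE over checkpoints.

    Each residual is (dx, dy, dz) in metres: the difference between
    the mapped (photogrammetric) coordinate and the surveyed ground
    truth at one checkpoint.
    """
    sq_sum = sum(dx * dx + dy * dy + dz * dz for dx, dy, dz in residuals)
    return math.sqrt(sq_sum / len(residuals))
```

For example, a uniform 3D error of about 0.6 m at every checkpoint yields a total RMSE of about 0.6 m, matching the scale of the 2-GCP result reported above.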