• Title/Summary/Keyword: stereo-camera

Search Results: 610

Development and Comparative Analysis of Mapping Quality Prediction Technology Using Orientation Parameters Processed in UAV Software (무인기 소프트웨어에서 처리된 표정요소를 이용한 도화품질 예측기술 개발 및 비교분석)

  • Lim, Pyung-Chae; Son, Jonghwan; Kim, Taejung
    • Korean Journal of Remote Sensing / v.35 no.6_1 / pp.895-905 / 2019
  • Commercial Unmanned Aerial Vehicle (UAV) image processing software products currently used in the industry provide camera calibration information and block bundle adjustment accuracy. However, they do not indicate the mapping accuracy achievable from the input UAV images. In this paper, the quality of mapping is calculated using orientation parameters obtained from UAV image processing software. We apply the orientation parameters to a digital photogrammetric workstation (DPW) to verify the reliability of the calculated mapping quality. Mapping quality is defined in terms of three types of accuracy: Y-parallax, relative model accuracy, and absolute model accuracy. The Y-parallax indicates whether stereo viewing is possible between stereo pairs. The relative model accuracy is the relative bundle adjustment accuracy between stereo pairs in the model coordinate system. The absolute model accuracy is the bundle adjustment accuracy in the absolute coordinate system. For the experimental data, we used 723 images with a GSD of 5 cm obtained from a rotary-wing UAV over an urban area and analyzed the mapping quality. The relative model accuracy predicted by the proposed technique and the maximum error observed in the DPW agreed to within 0.11 m. Similarly, the maximum error of the absolute model accuracy predicted by the proposed technique was less than 0.16 m.
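The abstract does not give the exact formulas; the sketch below is only a hedged illustration of how a y-parallax-type residual could be checked from exported orientation parameters. The function names, the pinhole camera model, and the use of an epipolar-line distance as the residual are assumptions, not the authors' implementation.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix so that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def fundamental_from_orientation(K1, K2, R1, C1, R2, C2):
    """Fundamental matrix of a stereo pair from each image's exterior
    orientation (world-to-camera rotation R, projection centre C)."""
    R_rel = R2 @ R1.T              # relative rotation, camera 1 -> camera 2
    t_rel = R2 @ (C1 - C2)         # camera-1 centre expressed in camera 2
    E = skew(t_rel) @ R_rel        # essential matrix
    return np.linalg.inv(K2).T @ E @ np.linalg.inv(K1)

def y_parallax_px(F, p1, p2):
    """Distance (pixels) of tie point p2 from the epipolar line of p1;
    for an oriented model this is the residual y-parallax of the pair."""
    x1 = np.array([p1[0], p1[1], 1.0])
    x2 = np.array([p2[0], p2[1], 1.0])
    line = F @ x1
    return abs(x2 @ line) / np.hypot(line[0], line[1])
```

A stereo pair whose tie points keep this residual below a small fraction of a pixel would be judged suitable for stereo viewing on the DPW.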

Model-Based Plane Detection in Disparity Space Using Surface Partitioning (표면분할을 이용한 시차공간상에서의 모델 기반 평면검출)

  • Ha, Hong-joon; Lee, Chang-hun
    • KIPS Transactions on Software and Data Engineering / v.4 no.10 / pp.465-472 / 2015
  • We propose a novel plane detection method in disparity space and evaluate its performance. Our method simplifies scenes in disparity space and makes them easier to handle by approximating various surfaces as planes. Moreover, the approximated planes can be represented at the same scale as in the real world and can be employed for obstacle detection and camera pose estimation. Using a stereo matching technique, our method first creates a disparity image, which consists of binocular disparity values at the xy-coordinates of the image. Slants of the disparity values are estimated with a line simplification algorithm, which allows our method to reflect global changes along the x or y axis. According to the pairs of x and y slants, we label the disparity image. 4-connected disparities with the same label are grouped, and a least-squares model estimates plane parameters for each group. The N plane models with the largest groups of disparity values satisfying their plane parameters are chosen. We evaluate our plane detection quantitatively and qualitatively. The results show 97.9% and 86.6% quality in our experiments on cones and cylinders, respectively. The proposed method also extracts planes well from the Middlebury and KITTI datasets, which are typically used to evaluate stereo matching algorithms.
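The least-squares step can be sketched as follows; this is a minimal illustration with assumed variable names, and the labeling and grouping logic described above is not reproduced here.

```python
import numpy as np

def fit_disparity_plane(xs, ys, ds):
    """Least-squares fit of a plane d = a*x + b*y + c to a group of pixels
    (xs, ys) with disparities ds, as used after the 4-connected grouping
    step.  Returns the plane parameters (a, b, c) and the RMS residual."""
    A = np.column_stack([xs, ys, np.ones_like(xs, dtype=float)])
    params, *_ = np.linalg.lstsq(A, ds, rcond=None)
    residuals = ds - A @ params
    rms = np.sqrt(np.mean(residuals ** 2))
    return params, rms
```

The N groups whose fitted planes are supported by the most disparity values are then kept as the detected planes.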

3D Reconstruction of Pipe-type Underground Facility Based on Stereo Images and Reference Data (스테레오 영상과 기준데이터를 활용한 관로형 지하시설물 3차원 형상 복원)

  • Cheon, Jangwoo; Lee, Impyeong
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1515-1526 / 2022
  • Image-based 3D reconstruction restores the shape and color of real-world objects, and image sensors mounted on mobile platforms are used for positioning and mapping purposes in indoor and outdoor environments. Due to the increase in accidents in underground spaces, the location accuracy of underground spatial information has become an issue. Image-based location estimation has been studied because it can determine 3D locations and simultaneously identify internal damage from image data acquired inside pipe-type underground facilities. In this study, we investigate 3D reconstruction based on images acquired inside a pipe-type underground facility together with reference data. An unmanned mobile system equipped with a stereo camera was used to acquire image data within a pipe-type underground facility in which reference data were placed at the entrance and exit. Using the acquired images and the reference data, the pipe-type underground facility is reconstructed into a geo-referenced 3D shape. The accuracy of the 3D reconstruction result was verified in terms of location and length. It was confirmed that the location was determined with an accuracy of 20 to 60 cm and the length was estimated with an accuracy of about 20 cm. Using the image-based 3D reconstruction method, the position and line shape of pipe-type underground facilities can be effectively updated.
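The abstract does not state how the entrance and exit reference data are applied; one common choice, shown below purely as an assumption-laden sketch and not as the authors' method, is a 3D similarity (Umeyama) alignment of the relatively reconstructed points to the surveyed reference coordinates.

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate scale s, rotation R and translation t so that
    dst ≈ s * R @ src + t (Umeyama/Procrustes alignment).
    src, dst: (N, 3) arrays of corresponding 3D points, N >= 3."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:          # guard against reflections
        D[2, 2] = -1
    R = U @ D @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(S) @ D) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Applying the estimated transform to every reconstructed point yields a geo-referenced model whose location and length errors can then be checked against the reference data.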

Multi-view Generation using High Resolution Stereoscopic Cameras and a Low Resolution Time-of-Flight Camera (고해상도 스테레오 카메라와 저해상도 깊이 카메라를 이용한 다시점 영상 생성)

  • Lee, Cheon; Song, Hyok; Choi, Byeong-Ho; Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.4A / pp.239-249 / 2012
  • Recently, virtual view generation methods using depth data have been employed to support advanced stereoscopic and auto-stereoscopic displays. Although depth data is invisible to the user during 3D video rendering, its accuracy is very important since it determines the quality of the generated virtual view images. Many works address such depth enhancement by exploiting a time-of-flight (TOF) camera. In this paper, we propose a fast 3D scene capturing system using one TOF camera at the center and two high-resolution cameras at both sides. Since we need depth data for both color cameras, we obtain the two views' depth data from the center view using a 3D warping technique. Holes in the warped depth maps are filled by referring to the surrounding background depth values. In order to reduce mismatches of object boundaries between the depth and color images, we apply a joint bilateral filter to the warped depth data. Finally, using the two color images and depth maps, we generate 10 additional intermediate images. To realize a fast capturing system, we implemented the proposed system using multi-threading. Experimental results show that the proposed system captures the two viewpoints' color and depth videos in real time and generates the 10 additional views at 7 fps.
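A minimal sketch of the 3D warping step is given below, assuming rectified, purely horizontal camera offsets and the variable names shown; this is an illustration, not the authors' code.

```python
import numpy as np

def warp_depth_to_side(depth, focal, baseline):
    """Forward-warp a center depth map to a horizontally displaced camera.
    depth:    (H, W) array of metric depth values (0 = no measurement)
    focal:    focal length in pixels; baseline: camera distance in metres.
    Returns the warped depth map; unfilled pixels (holes) stay at 0 and are
    later filled from surrounding background depth values."""
    h, w = depth.shape
    warped = np.zeros_like(depth)
    ys, xs = np.nonzero(depth)
    disparity = focal * baseline / depth[ys, xs]       # d = f * B / Z
    # sign of the shift depends on whether the target is the left or right camera
    xs_new = np.round(xs - disparity).astype(int)
    valid = (xs_new >= 0) & (xs_new < w)
    ys, xs, xs_new = ys[valid], xs[valid], xs_new[valid]
    # where several points map to one pixel, the nearer point must win:
    # write far points first so nearer ones overwrite them
    order = np.argsort(-depth[ys, xs])
    warped[ys[order], xs_new[order]] = depth[ys[order], xs[order]]
    return warped
```

The remaining holes and the boundary mismatches are what the background-based filling and the joint bilateral filter described above are meant to correct.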

Vision-based Obstacle Detection using Geometric Analysis (기하학적 해석을 이용한 비전 기반의 장애물 검출)

  • Lee Jong-Shill; Lee Eung-Hyuk; Kim In-Young; Kim Sun-I.
    • Journal of the Institute of Electronics Engineers of Korea SC / v.43 no.3 s.309 / pp.8-15 / 2006
  • Obstacle detection is an important task for many mobile robot applications. Methods using stereo vision and optical flow are computationally expensive. Therefore, this paper presents a vision-based obstacle detection method using only two view images. The method uses a single passive camera and odometry, and it performs in real time. The proposed method detects obstacles through 3D reconstruction from two views. Processing begins with feature extraction for each input image using Lowe's SIFT (Scale Invariant Feature Transform), followed by establishing correspondences of features across the input images. Using the extrinsic camera rotation and translation provided by odometry, the 3D positions of these corresponding points are calculated by triangulation. The triangulation results form a partial 3D reconstruction of the obstacles. The proposed method has been tested successfully on an indoor mobile robot and is able to detect obstacles within 75 ms.
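A small sketch of the matching stage, using OpenCV's SIFT implementation and Lowe's ratio test as stand-ins; the parameter values are assumptions.

```python
import cv2
import numpy as np

def match_two_views(img1, img2, ratio=0.7):
    """Extract SIFT features in two consecutive views and keep matches that
    pass Lowe's ratio test.  The matched pixel coordinates are the input to
    triangulation with the rotation/translation supplied by odometry."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)
    pts1, pts2 = [], []
    for pair in knn:
        if len(pair) < 2:
            continue
        m, n = pair
        if m.distance < ratio * n.distance:        # Lowe's ratio test
            pts1.append(kp1[m.queryIdx].pt)
            pts2.append(kp2[m.trainIdx].pt)
    return np.float32(pts1), np.float32(pts2)
```

Given the two projection matrices built from the odometry pose, the matched pairs can then be triangulated, e.g. with cv2.triangulatePoints, to obtain the partial 3D reconstruction of obstacles.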

An analysis of Electro-Optical Camera (EOC) on KOMPSAT-1 during mission life of 3 years

  • Baek Hyun-Chul; Yong Sang-Soon; Kim Eun-Kyou; Youn Heong-Sik; Choi Hae-Jin
    • Proceedings of the KSRS Conference / 2004.10a / pp.512-514 / 2004
  • The Electro-Optical Camera (EOC) is a high-spatial-resolution, visible imaging sensor which collects visible image data of the earth's sunlit surface and is the primary payload on KOMPSAT-1. The purpose of the EOC payload is to provide high-resolution visible imagery to support cartography of the Korean Peninsula. The EOC is a push-broom-scanned sensor which incorporates a single nadir-looking telescope. At the nominal altitude of 685 km with the spacecraft in a nadir-pointing attitude, the EOC collects data with a ground sample distance of approximately 6.6 meters and a swath width of around 17 km. The EOC is designed to operate with a duty cycle of up to 2 minutes (contiguous) per orbit over the mission lifetime of 3 years, with programmable gain/offset functions. The EOC has no pointing mechanism of its own; EOC pointing is accomplished by rolling the spacecraft right or left as needed. Under nominal operating conditions, the spacecraft can be rolled to an angle in the range of ±15 to ±30 degrees to support the collection of stereo data. In this paper, the status of the EOC, such as temperature, dark calibration, cover operation, and thermal control, is checked and analyzed using continuously monitored state-of-health (SOH) data and image data collected during the 3-year mission life. The results of the analysis confirm that the EOC remains operational and can continue operating beyond its mission life.


Obtaining 3-D Depth from a Monochrome Shaded Image (단시안 명암강도를 이용한 물체의 3차원 거리측정)

  • Byung Il Kim
    • Journal of the Korean Institute of Telematics and Electronics B / v.29B no.7 / pp.52-61 / 1992
  • An iterative scheme for computing the three-dimensional position and the surface orientation of an opaque object from a single shaded image is proposed. This method demonstrates that calculating the depth (distance) between the camera and the object from one shaded video image is possible. Most previous research on the 'shape from shading' problem, even with the 'photometric stereo' method, involved the determination of surface orientation only. Measuring the depth of an object additionally requires knowledge of the reflectance properties of the surface. Assuming that the object surface is uniformly Lambertian, the measured intensity level at a given image pixel (x, y) becomes a function of the surface orientation and the depth component of the object. The derived image irradiance equation cannot be solved without further information, since three unknown variables (p, q, and D) appear in one nonlinear equation. As an additional constraint, we assume that the surface satisfies smoothness conditions. The equation can then be solved by relaxation using standard methods of the calculus of variations. After checking the sensitivity of the algorithm to errors in the input parameters, the theoretical results are tested by experiments on three objects (a plane, a cylinder, and a sphere). These initial results are very encouraging, since they match the theoretical calculations within 20% error in simple experiments.
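For reference, the classical Lambertian image irradiance equation that this kind of formulation builds on is shown below. This is a standard statement of the shape-from-shading problem, not taken from the paper; the paper additionally introduces the depth D as an unknown, and the light-source direction (p_s, q_s) is an assumed notation.

```latex
% Lambertian reflectance map: image irradiance at (x, y) as a function of
% the surface gradient (p, q) and the light-source direction (p_s, q_s)
E(x, y) = R(p, q) = \frac{1 + p\,p_s + q\,q_s}
                         {\sqrt{1 + p^2 + q^2}\,\sqrt{1 + p_s^2 + q_s^2}}

% One equation per pixel is under-determined once depth is also unknown,
% so a smoothness penalty is added and the functional
\iint \bigl(E(x, y) - R(p, q)\bigr)^2
      + \lambda \bigl(p_x^2 + p_y^2 + q_x^2 + q_y^2\bigr)\, dx\, dy
% is minimised iteratively by relaxation (calculus of variations).
```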


Gesture-based Table Tennis Game in AR Environment (증강현실과 제스처를 이용한 비전기반 탁구 게임)

  • Yang, Jong-Yeol; Lee, Sang-Kyung; Kyoung, Dong-Wuk; Jung, Kee-Chul
    • Journal of Korea Game Society / v.5 no.3 / pp.3-10 / 2005
  • We present a computer table tennis game based on the player's swing motion. We need to transform real-world coordinates into virtual-world coordinates in order to hit the virtual ball. A correct 3D position of the racket cannot be obtained with one camera and simple image processing alone; therefore, we use the Augmented Reality (AR) concept to develop the game. This paper presents an AR table tennis game using gestures and a method for developing a 3D interaction game with only one camera, without any motion detection device or stereo cameras. Also, we use a scan-line method to recognize gestures for fast processing. The game is developed using ARToolKit and DirectX, which are popular SDKs for game development.
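A compressed sketch of the single-camera pose idea is given below, using OpenCV's solvePnP in place of ARToolKit's marker-pose routines; the marker size, corner ordering, and calibration inputs are assumptions.

```python
import cv2
import numpy as np

def racket_pose_from_marker(corners_px, marker_size, K, dist):
    """Estimate the 3D pose of a square marker attached to the racket from
    its four detected corner pixels in a single camera image.
    corners_px : (4, 2) pixel coordinates (clockwise from top-left)
    marker_size: physical side length of the marker
    K, dist    : camera intrinsic matrix and distortion coefficients."""
    s = marker_size / 2.0
    object_pts = np.array([[-s,  s, 0], [ s,  s, 0],
                           [ s, -s, 0], [-s, -s, 0]], dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(object_pts, np.float32(corners_px), K, dist)
    R, _ = cv2.Rodrigues(rvec)
    # 4x4 transform taking marker (real-world) coordinates into the camera
    # frame; composing it with the table's pose gives virtual-world coordinates.
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T
```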


A Study on the Compositive Beauty of Back-Jae Stone Pagodas by Means of Photogrammetry (사진측정(寫眞測定)에 의한 백제석탑(百濟石塔)의 조형미(造形美)에 관한 연구(硏究))

  • Yeu, Bock Mo; Kang, In Joon; Jong, Chang Sik; Song, In Seong
    • KSCE Journal of Civil and Environmental Engineering Research / v.5 no.1 / pp.141-148 / 1985
  • This paper presents an analysis of the geometric composition of two Back-Jae stone pagodas: the stone pagoda at the site of Mirŭk-Sa Temple in Iksan and the five-storied stone pagoda at the site of Chŏngrim-Sa Temple in Puyŏ, both existing stone pagodas built in the Back-Jae Dynasty. Using a P31 terrestrial metric camera and an A-10 precision stereo plotter, the Chŏngrim-Sa stone pagoda, which retains its Bock-bal, and the Mirŭk-Sa stone pagoda, which has many broken areas, are analyzed comparatively. From this result, the same geometric composition principle, the orthotrigon, is derived with respect to the module; the ratio of the widths of the Okgesuks located at the end points of the orthotrigon is found to decrease as 9 : 8 : 7 : 6 : (5), and the height up to the Bock-bal before it was broken can be estimated.


Vision-based Walking Guidance System Using Top-view Transform and Beam-ray Model (탑-뷰 변환과 빔-레이 모델을 이용한 영상기반 보행 안내 시스템)

  • Lin, Qing; Han, Young-Joon; Hahn, Hern-Soo
    • Journal of the Korea Society of Computer and Information / v.16 no.12 / pp.93-102 / 2011
  • This paper presents a walking guidance system for blind pedestrians in an outdoor environment using just a single camera. Unlike many existing travel-aid systems that rely on stereo vision, the proposed system aims to obtain the necessary information about the road environment using a single camera fixed at the belly of the user. To achieve this goal, a top-view image of the road is used, on which obstacles are detected by first extracting local extreme points and then verifying them with a polar edge histogram. Meanwhile, the user's motion is estimated using optical flow in an area close to the user. Based on the information extracted from the image domain, an audio message generation scheme is proposed to deliver guidance instructions to the blind user via synthetic voice. Experiments with several sidewalk video clips show that the proposed walking guidance system is able to provide useful guidance instructions in certain sidewalk environments.
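A minimal sketch of the top-view (inverse perspective) transform; the four ground-plane correspondences below are placeholders that would come from a one-off calibration of the belly-mounted camera, not values from the paper.

```python
import cv2
import numpy as np

def make_top_view(frame, src_pts, dst_size=(400, 600)):
    """Warp a road image into a top view using four points on the ground
    plane.  src_pts: (4, 2) pixel corners of a known rectangle on the road,
    ordered top-left, top-right, bottom-right, bottom-left; they are mapped
    to the corners of the output bird's-eye image."""
    w, h = dst_size
    dst_pts = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    return cv2.warpPerspective(frame, H, (w, h))
```

In the warped image, obstacle candidates appear as local extreme points, which the paper then verifies with the polar edge histogram before generating audio guidance.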