• Title/Summary/Keyword: depth sensing camera

Search results: 36

A Study on Extraction Depth Information Using a Non-parallel Axis Image (사각영상을 이용한 물체의 고도정보 추출에 관한 연구)

  • 이우영;엄기문;박찬응;이쾌희
    • Korean Journal of Remote Sensing / v.9 no.2 / pp.7-19 / 1993
  • In stereo vision, when two parallel-axis images are used, only a small portion of the object is contained in both views, the B/H (base-line to height) ratio is limited by the size of the object, and the resulting depth information is inaccurate. To overcome these difficulties, we take a non-parallel-axis image rotated by θ about the y-axis and match it against a parallel-axis image. The epipolar lines of the non-parallel-axis image are not the same as those of the parallel-axis image, so the two images cannot be matched directly. In this paper, we geometrically transform the non-parallel-axis image using the camera parameters so that its epipolar lines become parallel. NCC (Normalized Cross Correlation) is used as the matching measure, an area-based matching technique is used to find correspondences, and a 9×9 window size is chosen experimentally. The focal length, which is necessary to obtain the depth information of a given object, is calculated with a least-squares method from the CCD camera characteristics and lens properties. Finally, we select 30 test points from a given object whose elevation varies up to 150 mm, calculate their heights, and find that the height RMS error is 7.9 mm.
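
A minimal sketch of the area-based NCC matching with a 9×9 window described in the abstract above, assuming already rectified images supplied as NumPy arrays and a hypothetical search range along the scanline; it is not the authors' implementation.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_point(left, right, row, col, half=4, max_disp=64):
    """Find the best NCC match for left[row, col] along the same scanline
    in the rectified right image, using a 9x9 window (half = 4)."""
    ref = left[row - half:row + half + 1, col - half:col + half + 1].astype(np.float64)
    best_score, best_col = -1.0, col
    for d in range(max_disp + 1):
        c = col - d
        if c - half < 0:
            break
        cand = right[row - half:row + half + 1, c - half:c + half + 1].astype(np.float64)
        score = ncc(ref, cand)
        if score > best_score:
            best_score, best_col = score, c
    return best_col, best_score
```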

Introducing Depth Camera for Spatial Interaction in Augmented Reality (증강현실 기반의 공간 상호작용을 위한 깊이 카메라 적용)

  • Yun, Kyung-Dahm;Woo, Woon-Tack
    • 한국HCI학회 학술대회논문집 (Proceedings of the HCI Society of Korea Conference) / 2009.02a / pp.62-67 / 2009
  • Many interaction methods for augmented reality have attempted to reduce the difficulty of tracking interaction subjects by either allowing only a limited set of three-dimensional inputs or relying on auxiliary devices such as data gloves and paddles with fiducial markers. We propose Spatial Interaction (SPINT), a non-contact, passive method that observes the occupancy state of the space around target virtual objects to interpret user input. A depth-sensing camera is introduced to construct the virtual space sensors, which then manipulate the augmented space for interaction. The proposed method does not require any wearable device for tracking user input and allows versatile interaction types. The depth-perception anomaly caused by incorrect occlusion between real and virtual objects is also minimized for more precise interaction. Exhibits of dynamic contents such as the Miniature AR System (MINARS) could benefit from this fluid 3D user interface.
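
A minimal sketch of the occupancy-sensing idea described above, assuming a depth image in millimetres, pinhole intrinsics, and a hypothetical axis-aligned "space sensor" box around a virtual object; the actual SPINT implementation is not specified here.

```python
import numpy as np

def backproject(depth_mm, fx, fy, cx, cy):
    """Convert a depth image (mm) into an (N, 3) point cloud in camera coordinates."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float64) / 1000.0           # metres
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[z.reshape(-1) > 0]                      # drop invalid (zero-depth) pixels

def box_occupied(points, box_min, box_max, min_points=50):
    """A 'space sensor': the box counts as occupied when enough 3D points fall inside it."""
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    return inside.sum() >= min_points
```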


Microsoft Kinect-based Indoor Building Information Model Acquisition (Kinect(RGB-Depth Camera)를 활용한 실내 공간 정보 모델(BIM) 획득)

  • Kim, Junhee;Yoo, Sae-Woung;Min, Kyung-Won
    • Journal of the Computational Structural Engineering Institute of Korea / v.31 no.4 / pp.207-213 / 2018
  • This paper investigates the applicability of the Microsoft Kinect®, an RGB-depth camera, to implementing 3D images and spatial information for sensing a target. The relationship between the image of the Kinect camera and the pixel coordinate system is formulated. Calibration of the camera provides the depth and RGB information of the target. The intrinsic parameters are calculated through a checkerboard experiment, yielding the focal length, principal point, and distortion coefficients. The extrinsic parameters describing the relationship between the two Kinect cameras consist of a rotation matrix and a translation vector. The 2D projection-space images are converted into 3D images, resulting in spatial information based on the depth and RGB data. The measurement is verified through comparison with the length and location of the target structure in the 2D images.
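
The checkerboard intrinsic-calibration step described above can be sketched with OpenCV's standard routine; the board size, square size, and file names below are assumptions, not values from the paper.

```python
import glob
import cv2
import numpy as np

# Hypothetical 9x6 inner-corner checkerboard with 25 mm squares.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 25.0

obj_pts, img_pts = [], []
for path in glob.glob("checker_*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# K holds the focal lengths and principal point; dist holds the distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
```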

Comparisons of the Environmental Characteristics of Intertidal Beach and Mudflat

  • Kim, Tae-Rim
    • Korean Journal of Remote Sensing / v.25 no.3 / pp.225-231 / 2009
  • The morphological shapes, wave heights, tidal ranges, and sediment sizes of an intertidal beach and a mudflat are observed and compared. The Mohang sand beach, on the southwest coast of Korea, is located immediately next to a large mudflat and has a tidal range of over 5 meters. Wave measurements are conducted at the entrances of the beach and the mudflat, as well as in the outside waters representing the incident waves to these different coastal environments. The morphological characteristics, including sediment size and bathymetric slope, are also examined. For the observation of morphological shapes, a camera monitoring technique is used to measure the spatial information of the intertidal bathymetry. The water lines moving across the intertidal flat/beach during a flood tide indicate depth contours between the low and high water lines. The water lines extracted from consecutive images are rectified to obtain the ground coordinates of each depth contour and integrated to provide three-dimensional information on the intertidal topography. The wave data show that the sand beach experiences more severe wave forcing, while the tidal range is almost identical in both environments. The slope of the mudflat is much milder than that of the sand beach, with finer sediment.
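
The rectification of image waterlines to ground coordinates described above can be sketched with a planar homography; the control points and waterline pixels below are hypothetical, not the paper's survey data.

```python
import cv2
import numpy as np

# Hypothetical ground control points: pixel coordinates and surveyed ground coordinates (m).
img_gcp = np.array([[120, 800], [1500, 820], [1450, 300], [180, 280]], dtype=np.float32)
gnd_gcp = np.array([[0, 0], [250, 0], [250, 180], [0, 180]], dtype=np.float32)

H, _ = cv2.findHomography(img_gcp, gnd_gcp)

# Waterline pixels extracted from one image of the time series (hypothetical values).
waterline_px = np.array([[[300, 640]], [[600, 655]], [[900, 660]]], dtype=np.float32)
waterline_m = cv2.perspectiveTransform(waterline_px, H).reshape(-1, 2)

# Each rectified waterline, tagged with the tide level at its acquisition time,
# contributes one depth contour to the intertidal topography.
```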

Speeding up the KLT Tracker for Real-time Image Georeferencing using GPS/INS Data

  • Tanathong, Supannee;Lee, Im-Pyeong
    • Korean Journal of Remote Sensing / v.26 no.6 / pp.629-644 / 2010
  • A real-time image georeferencing system requires all inputs to be determined in real time. The intrinsic camera parameters can be identified in advance through a camera calibration process, while other control information can be derived instantaneously from real-time GPS/INS data. The bottleneck is tie-point acquisition, since manual operation is clearly an obstacle for a real-time system and the existing extraction methods are not fast enough. In this paper, we present a fast, automated image matching technique based on the KLT tracker to obtain a set of tie points in real time. The proposed work accelerates the KLT tracker by supplying initial guesses for the tie points computed from the GPS/INS data. Originally, the KLT works effectively only when the displacement between tie points is small. To derive an automated solution, this paper suggests an appropriate number of pyramid depth levels for multi-resolution tracking under large displacement, using knowledge of the uncertainties in the GPS/INS measurements. The experimental results show that the suggested depth levels are promising and that the proposed work obtains tie points 13% faster than the ordinary KLT with no loss of accuracy. This promising result suggests that the proposed algorithm can be effectively integrated into real-time image georeferencing toward developing a real-time surveillance application.
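
The idea of seeding the KLT tracker with predicted tie-point locations can be sketched with OpenCV's pyramidal Lucas-Kanade implementation; the prediction array and pyramid depth below stand in for the paper's GPS/INS-derived values.

```python
import cv2
import numpy as np

def track_with_prediction(prev_img, next_img, prev_pts, predicted_pts, levels=3):
    """Pyramidal KLT tracking seeded with externally predicted point locations
    (here the prediction would come from GPS/INS geometry; values are assumed)."""
    next_pts, status, err = cv2.calcOpticalFlowPyrLK(
        prev_img, next_img,
        prev_pts.astype(np.float32),
        predicted_pts.astype(np.float32).copy(),    # initial guesses for the new positions
        winSize=(21, 21),
        maxLevel=levels,                            # pyramid depth for large displacement
        flags=cv2.OPTFLOW_USE_INITIAL_FLOW)
    good = status.ravel() == 1
    return prev_pts[good], next_pts[good]
```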

3D Reconstruction of Urban Building using Laser range finder and CCD camera

  • Kim B. S.;Park Y. M.;Lee K. H.
    • Proceedings of the KSRS Conference / 2004.10a / pp.128-131 / 2004
  • In this paper, we describe 3D urban modeling techniques using a laser scanner and CCD camera system mounted on a vehicle. We use two laser scanners: one is a horizontal scanner and the other a vertical scanner. The horizontal scanner acquires horizontal building data for localization, while the vertical scan data are the main information for constructing a building. We compare edges extracted from an aerial image with the laser scan data; this method can correct the cumulative error of self-localization. We then remove obstacles from the 3D-reconstructed building. Real texture information acquired with the CCD camera is mapped onto the 3D depth information, and the urban 3D buildings are reconstructed in a 3D virtual world. These techniques can be applied to city planning, 3D environment games, movie backgrounds, unmanned patrol, etc.
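
One step described above, mapping CCD colours onto laser-derived 3D points, can be sketched as a simple projection and colour lookup; the camera pose, intrinsics, and point cloud below are assumptions, not the paper's calibration.

```python
import numpy as np

def colorize_points(points, image, K, R, t):
    """Project laser-scanned 3D points into the CCD image and sample a colour
    for each visible point. points: (N, 3) in world coords; image: HxWx3."""
    cam = R @ points.T + t.reshape(3, 1)               # world -> camera coordinates
    in_front = cam[2] > 0
    uv = K @ cam[:, in_front]
    uv = (uv[:2] / uv[2]).round().astype(int)          # perspective division to pixels
    h, w = image.shape[:2]
    valid = (uv[0] >= 0) & (uv[0] < w) & (uv[1] >= 0) & (uv[1] < h)
    colors = np.zeros((points.shape[0], 3), dtype=image.dtype)
    idx = np.flatnonzero(in_front)[valid]
    colors[idx] = image[uv[1, valid], uv[0, valid]]
    return colors
```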


A Novel Image Sensing System for 3D Reconstruction (3차원 형상복원을 위한 새로운 시각장치)

  • 이두현;권인소
    • Journal of Institute of Control, Robotics and Systems / v.6 no.5 / pp.383-389 / 2000
  • This paper presents a stereo camera system that provides a pair of stereo images using a biprism. The equivalent of a stereo pair of images is formed as the left and right halves of a single CCD image. The system is therefore inexpensive and extremely easy to calibrate, since it requires only one CCD camera. An additional advantage of the geometrical set-up is that corresponding features automatically lie on the same scanline. The single camera and biprism lead to a simple stereo system in which correspondence is very easy and which is accurate for nearby objects in a small field of view. Since only a single lens is used, calibration of the system is greatly simplified. Given the parameters of the biprism-stereo camera system, we can reconstruct the 3D structure using only the disparity between corresponding points.
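
Because the system recovers structure from disparity alone, the standard rectified-stereo depth relation Z = f·B/d applies; a minimal sketch follows, where the focal length and effective baseline are placeholders rather than the paper's calibration values.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified-stereo depth: Z = f * B / d. Non-positive disparities are marked invalid."""
    disparity_px = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full_like(disparity_px, np.nan)
    valid = disparity_px > 0
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

# Example with placeholder values: 800 px focal length, 60 mm effective baseline.
print(depth_from_disparity([32.0, 16.0, 8.0], focal_px=800.0, baseline_m=0.06))
```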


Optimal Depth Calibration for Kinect™ Sensors via an Experimental Design Method (실험 계획법에 기반한 키넥트 센서의 최적 깊이 캘리브레이션 방법)

  • Park, Jae-Han;Bae, Ji-Hum;Baeg, Moon-Hong
    • Journal of Institute of Control, Robotics and Systems / v.21 no.11 / pp.1003-1007 / 2015
  • Depth calibration is a procedure for finding the conversion function that maps disparity data from a depth-sensing camera to actual distance information. In this paper, we present an optimal depth calibration method for Kinect™ sensors based on an experimental design and convex optimization. The proposed method, which utilizes multiple measurements from only two points, suggests a simplified calibration procedure. The confidence ellipsoids obtained from a series of simulations confirm that a simpler procedure produces a more reliable calibration function.
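
A commonly used form of such a conversion function is depth = 1/(α·disparity + β); below is a minimal sketch of fitting it by least squares from repeated measurements at two known distances. The model form and the sample values are illustrative assumptions, not the paper's experimental design.

```python
import numpy as np

def fit_depth_model(disparities, true_depths_m):
    """Least-squares fit of 1/Z = alpha * d + beta from paired (disparity, depth) samples."""
    d = np.asarray(disparities, dtype=np.float64)
    inv_z = 1.0 / np.asarray(true_depths_m, dtype=np.float64)
    A = np.stack([d, np.ones_like(d)], axis=1)
    (alpha, beta), *_ = np.linalg.lstsq(A, inv_z, rcond=None)
    return alpha, beta

def disparity_to_depth(d, alpha, beta):
    return 1.0 / (alpha * np.asarray(d, dtype=np.float64) + beta)

# Repeated measurements at only two calibration distances (hypothetical values).
alpha, beta = fit_depth_model([600, 602, 598, 450, 452, 449],
                              [1.0, 1.0, 1.0, 2.0, 2.0, 2.0])
```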

Deep Learning-based Depth Map Estimation: A Review

  • Abdullah, Jan;Safran, Khan;Suyoung, Seo
    • Korean Journal of Remote Sensing / v.39 no.1 / pp.1-21 / 2023
  • In this technically advanced era, we are surrounded by smartphones, computers, and cameras, which help us store visual information in 2D image planes. However, such images lack 3D spatial information about the scene, which is very useful for scientists, surveyors, engineers, and even robots. To tackle this problem, depth maps are generated for the respective image planes. Depth maps, or depth images, are a single-image representation that carries information along the three spatial axes, i.e., xyz coordinates, where z is the object's distance from the camera axis. Depth estimation is a fundamental task for many applications, including augmented reality, object tracking, segmentation, scene reconstruction, distance measurement, autonomous navigation, and autonomous driving, and much work has been done on computing depth maps. We review the status of depth map estimation using different techniques from numerous papers, study areas, and models applied over the last 20 years, surveying depth-mapping techniques based on both traditional methods and newly developed deep-learning methods. The primary purpose of this study is to present a detailed review of state-of-the-art traditional depth-mapping techniques and recent deep learning methodologies. This study covers the critical points of each method from different perspectives, such as datasets, procedures, types of algorithms, loss functions, and well-known evaluation metrics. Similarly, this paper also discusses the subdomains of each method, such as supervised, unsupervised, and semi-supervised approaches, and elaborates on the challenges of the different methods. At the conclusion of this study, we discuss new ideas for future research in depth map estimation.
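
As one concrete example of the supervised loss functions surveyed in this literature, the scale-invariant log-depth loss of Eigen et al. (2014) is widely cited; a minimal PyTorch sketch is given below as an illustration, not as the review's own formulation.

```python
import torch

def scale_invariant_loss(pred_depth, gt_depth, lam=0.5, eps=1e-6):
    """Scale-invariant log loss (Eigen et al., 2014) over pixels with valid ground truth."""
    mask = gt_depth > 0
    d = torch.log(pred_depth[mask] + eps) - torch.log(gt_depth[mask] + eps)
    n = d.numel()
    return (d ** 2).mean() - lam * (d.sum() ** 2) / (n ** 2)
```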

Real-Time Virtual-View Image Synthesis Algorithm Using Kinect Camera (키넥트 카메라를 이용한 실시간 가상 시점 영상 생성 기법)

  • Lee, Gyu-Cheol;Yoo, Jisang
    • The Journal of Korean Institute of Communications and Information Sciences / v.38C no.5 / pp.409-419 / 2013
  • Kinect, released by Microsoft in November 2010, is a motion-sensing camera for the Xbox 360 that provides depth and color images. However, because it uses an infrared pattern, the Kinect camera also generates holes and noise around object boundaries in the acquired images, and a boundary flickering phenomenon occurs. We therefore propose a real-time virtual-view video synthesis algorithm that produces a high-quality virtual view by solving these problems. In the proposed algorithm, holes around the boundary are filled by using a joint bilateral filter. The color image is converted into an intensity image, and flickering pixels are found by analyzing the variation of the intensity and depth images. Boundary flickering is then reduced by setting the flickering pixels to the maximum pixel value of the previous depth image, and virtual views are generated by applying a 3D warping technique. Holes in regions that are not part of the occlusion region are filled with the center pixel value of the most reliable block, after the final block reliability is computed with a block-based gradient searching algorithm. The experimental results show that the proposed algorithm generates the virtual-view image in real time.
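
The colour-guided depth hole-filling step described above can be sketched with the joint bilateral filter from OpenCV's ximgproc module (available in opencv-contrib-python); the filter parameters are assumptions, not the paper's settings, and the paper's exact hole-filling is more involved.

```python
import cv2
import numpy as np

def fill_depth_holes(depth_mm, color_bgr, d=9, sigma_color=25.0, sigma_space=9.0):
    """Smooth a noisy Kinect depth map using the color image as the guidance ('joint') image,
    then copy the filtered values into the zero-valued hole pixels. Note that zero pixels
    still influence the filter, so this is only a rough approximation of hole filling."""
    depth_f = depth_mm.astype(np.float32)
    guide = color_bgr.astype(np.float32)
    filtered = cv2.ximgproc.jointBilateralFilter(
        guide, depth_f, d, sigma_color, sigma_space)
    holes = depth_mm == 0
    out = depth_f.copy()
    out[holes] = filtered[holes]
    return out.astype(depth_mm.dtype)
```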