• Title/Summary/Keyword: Time-of-Flight camera

74 search results

Enhancement on Time-of-Flight Camera Images (Time-of-Flight Camera Image Correction)

  • Kim, Sung-Hee; Kim, Myoung-Hee
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.708-711 / 2008
  • Time-of-flight (ToF) cameras deliver intensity data as well as range information for the objects in a scene. However, systematic problems during acquisition lead to distorted values in both distance and amplitude. In this paper we propose a method to acquire reliable distance information over the entire scene by correcting each channel using the other. The amplitude image is enhanced based on the depth values, which in turn enables depth correction, especially for far pixels.

  • PDF
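The abstract does not spell out the correction itself, but the core idea of cross-correcting amplitude with depth can be sketched by compensating the roughly 1/d² optical falloff of the returned signal. The function name and the `d_ref` reference distance below are illustrative assumptions, not the paper's method:

```python
def enhance_amplitude(amplitude, depth, d_ref=1.0):
    """Sketch: compensate ToF amplitude for the ~1/d^2 signal falloff,
    bringing near and far pixels onto a comparable scale. The scaled
    amplitude can then be used to flag unreliable far-range pixels."""
    return [[a * (d / d_ref) ** 2 for a, d in zip(row_a, row_d)]
            for row_a, row_d in zip(amplitude, depth)]
```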

Foreground Segmentation and High-Resolution Depth Map Generation Using a Time-of-Flight Depth Camera

  • Kang, Yun-Suk; Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37C no.9 / pp.751-756 / 2012
  • In this paper, we propose a foreground extraction and depth map generation method using a time-of-flight (TOF) depth camera. Although the TOF depth camera captures the scene's depth information in real time, its output contains noise and distortion. Therefore, we perform several preprocessing steps, such as image enhancement, segmentation, and 3D warping, and then use the TOF depth data to generate depth-discontinuity regions. Then, we extract the foreground object and generate a depth map aligned with the color image. Experimental results show that the proposed method efficiently generates the depth map even around object boundaries and in textureless regions.
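The 3D-warping preprocessing step mentioned in the abstract typically begins with pinhole back-projection of each depth pixel into camera space, from where it can be re-projected into the color view. A minimal sketch; the function name is illustrative and the intrinsics (fx, fy, cx, cy) are assumed calibration values:

```python
def backproject(u, v, z, fx, fy, cx, cy):
    """Pinhole back-projection: map a depth pixel (u, v) with depth z
    to a 3-D point in the depth camera's coordinate frame. Warping then
    re-projects this point into the color camera to align the two views."""
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)
```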

Multiple Color and ToF Camera System for 3D Contents Generation

  • Ho, Yo-Sung
    • IEIE Transactions on Smart Processing and Computing / v.6 no.3 / pp.175-182 / 2017
  • In this paper, we present a multi-depth generation method using a time-of-flight (ToF) fusion camera system. Multi-view color cameras in a parallel configuration and ToF depth sensors are used for 3D scene capturing. Although each ToF depth sensor can measure the depth information of the scene in real time, it has several problems to overcome. Therefore, after we capture low-resolution depth images with the ToF depth sensors, we apply post-processing to solve these problems. The depth information from the depth sensor is then warped to the color image positions and used as initial disparity values. In addition, the warped depth data is used to generate a depth-discontinuity map for efficient stereo matching. By applying stereo matching using belief propagation with the depth-discontinuity map and the initial disparity information, we obtain more accurate and stable multi-view disparity maps in reduced time.
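Seeding stereo matching with warped ToF depth relies on the standard relation between depth and disparity for a rectified, parallel camera pair, d = f·B/Z. A minimal sketch under assumed calibration values (focal length in pixels, baseline in meters):

```python
def depth_to_disparity(depth_m, focal_px, baseline_m):
    """For a rectified, parallel stereo pair: disparity d = f * B / Z.
    ToF depth warped to a color view and converted this way can serve
    as the initial disparity for belief-propagation stereo matching."""
    return focal_px * baseline_m / depth_m
```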

High-resolution Depth Generation using Multi-view Camera and Time-of-Flight Depth Camera (High-Quality Depth Map Generation Using Multi-view and Depth Cameras)

  • Kang, Yun-Suk; Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.6 / pp.1-7 / 2011
  • The depth camera measures range information of the scene in real time using Time-of-Flight (TOF) technology. The measured depth data is then regularized and provided as a depth image. This depth image is combined with stereo or multi-view images to generate a high-resolution depth map of the scene. However, the noise and distortion of the TOF depth image must first be corrected, due to the technical limitations of the TOF depth camera. The corrected depth image is combined with the color images in various ways to obtain the high-resolution depth of the scene. In this paper, we introduce the principle of, and various techniques for, sensor fusion for high-quality depth generation using multiple cameras together with depth cameras.

Autostereoscopic 3D display system with moving parallax barrier and eye-tracking

  • Chae, Ho-Byung; Ryu, Young-Roc; Lee, Gang-Sung; Lee, Seung-Hyun
    • Journal of Broadcast Engineering / v.14 no.4 / pp.419-427 / 2009
  • We present a novel head-tracking system for stereoscopic displays that allows the viewer a high degree of movement. The tracker is capable of segmenting the viewer from background objects using their relative distance. A Time-of-Flight (TOF) depth camera is used to generate a key signal for the eye-tracking application. A moving parallax barrier is also introduced to overcome a disadvantage of the fixed parallax barrier, which supports viewing only at specific locations.

Fusion System of Time-of-Flight Sensor and Stereo Cameras Considering Single Photon Avalanche Diode and Convolutional Neural Network

  • Kim, Dong Yeop; Lee, Jae Min; Jun, Sewoong
    • The Journal of Korea Robotics Society / v.13 no.4 / pp.230-236 / 2018
  • 3D depth perception has played an important role in robotics, and many sensing methods have been proposed for it. As a photodetector for 3D sensing, the single photon avalanche diode (SPAD) is attractive due to its sensitivity and accuracy. We have researched applying a SPAD chip in our fusion system of a time-of-flight (ToF) sensor and a stereo camera. Our goal is to upsample the SPAD resolution using an RGB stereo camera. Currently, we have a 64 x 32 resolution SPAD ToF sensor, whereas there are higher-resolution depth sensors such as the Kinect v2 and Cube-Eye. This may be a weak point of our system; however, we exploit this gap through a shift of perspective. A convolutional neural network (CNN) is designed to upsample our low-resolution depth map, using data from the higher-resolution depth sensors as label data. Then, the CNN-upsampled depth data and the stereo camera depth data are fused using the semi-global matching (SGM) algorithm. We propose a simplified fusion method designed for embedded systems.
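The CNN upsampler itself is not described in the abstract; as a point of reference, the naive baseline it would improve on is plain nearest-neighbor upsampling of the 64 x 32 SPAD depth map. The function name below is illustrative:

```python
def upsample_nearest(depth, scale):
    """Nearest-neighbor upsampling of a low-resolution depth map
    (e.g. 64 x 32 SPAD output): each depth sample is replicated into
    a scale x scale block. A learned CNN upsampler replaces this
    blocky result with edge-aware high-resolution depth."""
    out = []
    for row in depth:
        up_row = [v for v in row for _ in range(scale)]
        out.extend([list(up_row) for _ in range(scale)])
    return out
```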

Position Recognition and Indoor Autonomous Flight of a Small Quadcopter Using Distributed Image Matching

  • Jin, Taeseok
    • Journal of the Korean Society of Industry Convergence / v.23 no.2_2 / pp.255-261 / 2020
  • We consider the problem of autonomously flying a quadcopter in indoor environments. Navigation in indoor settings poses two major issues: first, real-time recognition of the marker captured by the camera; second, combining the distributed images to determine the position and orientation of the quadcopter in an indoor environment. We autonomously fly a miniature RC quadcopter in small, known environments using an on-board camera as the only sensor. We use an algorithm that combines data-driven image classification with image-combination techniques on the images captured by the camera to achieve real 3D localization and navigation.

Flight Test Procedures for Agricultural Drones Based on 5G Communication

  • Byeong Gyu Gang
    • Journal of Aerospace System Engineering / v.17 no.2 / pp.38-44 / 2023
  • This study aims to determine how agricultural drones are operated in flight tests using 5G communication, in order to carry out missions such as sensing crop health status with special cameras. The drones were fitted with a multi-spectral and an IR camera to capture images of crop status at different altitudes and speeds. The multi-spectral camera captures crop image data at five particular wavelengths and has a built-in GPS, so that time-synchronized images provide better position and altitude accuracy during flight. Captured thermal videos are then sent to a ground server via 5G communication for analysis. Combining the two cameras thus yields better visualization of vegetation areas. The flight tests verified how agricultural drones equipped with special cameras can collect image data over vegetation areas.

Low-Resolution Depth Interpolation Using a High-Resolution Color Image

  • Lee, Gyo-Yoon; Ho, Yo-Sung
    • Smart Media Journal / v.2 no.4 / pp.60-65 / 2013
  • In this paper, we propose a high-resolution disparity map generation method using a low-resolution time-of-flight (TOF) depth camera and a color camera. The TOF depth camera is efficient since it measures the range information of objects in real time using an infrared (IR) signal. It also quantizes the range information and provides it as a depth image. However, the TOF depth camera has some problems, such as noise and lens distortion. Moreover, its output resolution is too low for 3D applications. Therefore, it is essential not only to reduce the noise and distortion but also to enlarge the resolution of the TOF depth image. Our proposed method generates a depth map for a color image using the TOF camera and the color camera simultaneously. We warp the depth value at each pixel to its color image position. The color image is segmented using the mean-shift segmentation method. We define a cost function that consists of color values and segmented color values, and we apply a weighted average filter whose weighting factor is defined by the random-walk probability using this cost function over the block. Experimental results show that the proposed method generates the depth map efficiently and that good virtual view images can be reconstructed.

  • PDF
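The random-walk weighted filter in the last abstract is not specified in detail; the underlying idea of a color-guided weighted average can be sketched as follows. The exponential weight, the `sigma` parameter, and the 1-D neighborhood are illustrative simplifications, not the paper's cost function:

```python
import math

def guided_average(depths, colors, center_color, sigma=10.0):
    """Sketch of color-guided depth averaging: neighboring depth samples
    whose color is similar to the center pixel's color receive higher
    weight, so interpolated depth edges follow color edges."""
    weights = [math.exp(-abs(c - center_color) / sigma) for c in colors]
    return sum(w * d for w, d in zip(weights, depths)) / sum(weights)
```

With identical colors this reduces to a plain mean; a neighbor with a very different color is effectively ignored, which is what keeps upsampled depth sharp at object boundaries.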