• Title/Summary/Keyword: ToF camera


Extraction and Transfer of Gesture Information using ToF Camera (ToF 카메라를 이용한 제스처 정보의 추출 및 전송)

  • Park, Won-Chang;Ryu, Dae-Hyun;Choi, Tae-Wan
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.9 no.10
    • /
    • pp.1103-1109
    • /
    • 2014
  • Most recent CCTV cameras are network cameras. When such a camera transmits high-quality video over the internet, it can impose a heavy load on the network because the amount of image data is very large. In this study, we propose a method that can reduce video traffic in this situation and evaluate its performance. We used a method for extracting and transmitting gesture information using a ToF camera, such as the Kinect, in certain circumstances. There may be restrictions on the application of the proposed method because it depends on the performance of the ToF camera. However, it can be applied efficiently to the security or safety management of a small interior space such as a home or office.
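The bandwidth argument in this abstract can be illustrated with a toy comparison: instead of streaming depth frames, transmit only the joint positions a ToF sensor reports for a detected gesture. This is a hypothetical sketch, not the paper's protocol; the joint names, payload format, and frame size are all made up for illustration.

```python
import json

def encode_gesture(joints):
    """Serialize 3D joint positions (meters) into a compact JSON payload."""
    return json.dumps({"type": "gesture", "joints": joints}).encode("utf-8")

# One raw 16-bit depth frame at VGA resolution, for comparison
frame_bytes = 640 * 480 * 2

# Hypothetical skeleton joints extracted from a ToF frame
joints = {"hand_r": [0.31, 0.12, 1.85], "elbow_r": [0.22, -0.05, 1.90]}
payload = encode_gesture(joints)

print(len(payload), "bytes per gesture vs", frame_bytes, "bytes per frame")
```

Even a generous gesture payload is a few hundred bytes, several orders of magnitude below a single uncompressed depth frame, which is the traffic reduction the abstract is after.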

Multiple Color and ToF Camera System for 3D Contents Generation

  • Ho, Yo-Sung
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.6 no.3
    • /
    • pp.175-182
    • /
    • 2017
  • In this paper, we present a multi-depth generation method using a time-of-flight (ToF) fusion camera system. Multi-view color cameras in a parallel configuration and ToF depth sensors are used for 3D scene capture. Although each ToF depth sensor can measure scene depth in real time, it has several problems to overcome. Therefore, after capturing low-resolution depth images with the ToF depth sensors, we perform post-processing to solve these problems. The depth information from the depth sensor is then warped to the color image positions and used as initial disparity values. In addition, the warped depth data are used to generate a depth-discontinuity map for efficient stereo matching. By applying stereo matching using belief propagation with the depth-discontinuity map and the initial disparity information, we obtain more accurate and stable multi-view disparity maps in reduced time.
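For rectified parallel cameras, the depth-to-disparity step described above reduces to the standard relation d = fB/Z. A minimal sketch of that conversion follows; the focal length and baseline values are illustrative, not taken from the paper.

```python
import numpy as np

def depth_to_disparity(depth_m, focal_px, baseline_m):
    """Convert a ToF depth map (meters) to initial disparity values (pixels)."""
    depth = np.asarray(depth_m, dtype=np.float64)
    disparity = np.zeros_like(depth)
    valid = depth > 0                    # ToF sensors report 0 where no return
    disparity[valid] = focal_px * baseline_m / depth[valid]
    return disparity

depth = np.array([[2.0, 4.0],
                  [0.0, 1.0]])          # meters; 0 marks an invalid pixel
disp = depth_to_disparity(depth, focal_px=1000.0, baseline_m=0.1)
# Closer points get larger disparity; invalid pixels stay at 0
print(disp)
```

These values would then seed the belief-propagation matcher as the initial disparities the abstract mentions.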

Fusion System of Time-of-Flight Sensor and Stereo Cameras Considering Single Photon Avalanche Diode and Convolutional Neural Network (SPAD과 CNN의 특성을 반영한 ToF 센서와 스테레오 카메라 융합 시스템)

  • Kim, Dong Yeop;Lee, Jae Min;Jun, Sewoong
    • The Journal of Korea Robotics Society
    • /
    • v.13 no.4
    • /
    • pp.230-236
    • /
    • 2018
  • 3D depth perception plays an important role in robotics, and many sensing methods have been proposed for it. As a photodetector for 3D sensing, the single photon avalanche diode (SPAD) is attractive for its sensitivity and accuracy. We have researched applying a SPAD chip in our fusion system of a time-of-flight (ToF) sensor and a stereo camera. Our goal is to upsample the SPAD resolution using an RGB stereo camera. Our current SPAD ToF sensor has a resolution of only 64 x 32, whereas higher-resolution depth sensors such as the Kinect V2 and Cube-Eye are available. This may be a weak point of our system; however, we turn this gap to our advantage. A convolutional neural network (CNN) is designed to upsample our low-resolution depth map, using data from the higher-resolution depth sensor as label data. The upsampled depth data from the CNN and the stereo camera depth data are then fused using the semi-global matching (SGM) algorithm. We propose a simplified fusion method designed for embedded systems.
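The paper learns the 64 x 32 to high-resolution mapping with a CNN supervised by a higher-resolution depth sensor. As a stand-in for that learned model, the sketch below shows the naive baseline the CNN improves on: integer-factor nearest-neighbor upsampling of the SPAD depth map before fusion. The upsampling factor and random depth values are illustrative only.

```python
import numpy as np

def upsample_nearest(depth, factor):
    """Upsample a low-resolution depth map by an integer factor
    (each pixel becomes a factor x factor block)."""
    return np.repeat(np.repeat(depth, factor, axis=0), factor, axis=1)

# A 64 x 32 SPAD depth map (rows x cols = 32 x 64), values in meters
spad = np.random.default_rng(0).uniform(0.5, 5.0, size=(32, 64))

up = upsample_nearest(spad, 8)   # e.g. to 256 x 512 before fusion with stereo
print(up.shape)
```

A trained CNN would replace the blocky repetition with edge-aware detail; the fused result would then go through SGM against the stereo depth as the abstract describes.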

Design of Hardware Interface for the Otto Struve 2.1m Telescope

  • Oh, Hee-Young;Park, Won-Kee;Choi, Chang-Su;Kim, Eun-Bin;Nguyen, Huynh Anh Le;Lim, Ju-Hee;Jeong, Hyeon-Ju;Pak, Soo-Jong;Im, Myung-Shin
    • Bulletin of the Korean Space Science Society
    • /
    • 2009.10a
    • /
    • pp.25.3-25.3
    • /
    • 2009
  • To search for quasars at z > 7 in the early universe, we are developing an optical camera with a $1k\times1k$ deep depletion CCD chip, with a later planned upgrade to a HAWAII-2RG infrared array. We are going to attach the camera to the Cassegrain focus of the Otto Struve 2.1m telescope at the McDonald Observatory of the University of Texas at Austin, USA. We present the design of a hardware interface to attach the CCD camera to the telescope. It consists of a focal reducer, a filter wheel, and a guiding camera. The focal reducer is needed to reduce the long f-ratio (f/13.7) to about f/4 for a wide field of view. The guiding camera design is based on that of the DIAFI offset guider, which was developed for the McDonald 2.7m telescope.
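The focal-reducer requirement above follows directly from the f-ratio definition: for a fixed 2.1 m aperture, going from f/13.7 to about f/4 shortens the effective focal length, and widens the field of view, by the same factor. A quick numeric check using only the figures stated in the abstract:

```python
# f-ratio N = focal_length / aperture, so focal_length = aperture * N
aperture_m = 2.1
native_f_ratio = 13.7
target_f_ratio = 4.0

native_fl = aperture_m * native_f_ratio      # ~28.8 m native focal length
reduced_fl = aperture_m * target_f_ratio     # ~8.4 m after the focal reducer
reduction = native_f_ratio / target_f_ratio  # ~3.4x wider linear field of view

print(round(native_fl, 1), round(reduced_fl, 1), round(reduction, 2))
```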


Hybrid Camera System with a TOF and DSLR Cameras (TOF 깊이 카메라와 DSLR을 이용한 복합형 카메라 시스템 구성 방법)

  • Kim, Soohyeon;Kim, Jae-In;Kim, Taejung
    • Journal of Broadcast Engineering
    • /
    • v.19 no.4
    • /
    • pp.533-546
    • /
    • 2014
  • This paper presents a method for constructing a hybrid (color and depth) camera system using photogrammetric technology. A TOF depth camera is efficient because it measures range information of objects in real time. However, the TOF depth camera has some problems, such as low resolution and noise that depends on surface conditions. Therefore, it is essential not only to correct depth noise and distortion but also to construct a hybrid camera system that provides a high-resolution texture map for generating a 3D model with the depth camera. We estimated the geometry of the hybrid camera using a traditional relative orientation algorithm and performed texture mapping using backward mapping based on the collinearity condition. The proposed method was compared with another algorithm to evaluate model accuracy and texture-mapping performance. The results showed that the proposed method produced higher model accuracy.
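The collinearity condition underlying the backward mapping can be written, in one common photogrammetric convention (symbols here are the textbook ones, not taken from the paper), as the projection of an object point $(X, Y, Z)$ through the projection center $(X_0, Y_0, Z_0)$ with rotation matrix elements $r_{ij}$ and focal length $f$:

```latex
x = -f \,\frac{r_{11}(X - X_0) + r_{12}(Y - Y_0) + r_{13}(Z - Z_0)}
              {r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)}, \qquad
y = -f \,\frac{r_{21}(X - X_0) + r_{22}(Y - Y_0) + r_{23}(Z - Z_0)}
              {r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)}
```

Backward mapping evaluates these equations for each 3D model point to find the color-image pixel $(x, y)$ that supplies its texture.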

Design and Performance Verification of a LWIR Zoom Camera for Drones

  • Kwang-Woo Park;Jonghwa Choi;Jian Kang
    • Current Optics and Photonics
    • /
    • v.7 no.4
    • /
    • pp.354-361
    • /
    • 2023
  • We present the optical design and experimental verification of the resolving performance of a 3× long-wavelength infrared (LWIR) zoom camera for drones. The effective focal length of the system varies from 24.5 mm at the wide-angle position to 75.1 mm at the telephoto position. The design specifications of the system were derived from the ground resolved distance (GRD) needed to recognize a 3 m × 6 m target at a distance of 1 km at the telephoto position. To satisfy the system requirement, the aperture (f-number) of the system is F/1.6 and the final modulation transfer function (MTF) should be higher than 0.1 (10%). The MTF measured in the laboratory was 0.127 (12.7%), exceeding the system requirement. Outdoor targets were used to verify the comprehensive performance of the system. The system resolved 4-bar targets corresponding to the spatial resolution at distances of 1 km, 1.4 km, and 2 km.
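A back-of-envelope check of the telephoto requirement is possible from the stated numbers. The focal length and range come from the abstract; the 12 µm detector pitch is an assumption (a typical value for uncooled LWIR microbolometers), so treat the result as illustrative.

```python
# Per-pixel ground footprint at range: footprint = (pitch / focal_length) * range
focal_m = 75.1e-3    # telephoto focal length (from the abstract)
range_m = 1000.0     # recognition range (from the abstract)
pitch_m = 12e-6      # ASSUMED detector pitch, not stated in the abstract

ifov_rad = pitch_m / focal_m           # instantaneous field of view per pixel
footprint_m = ifov_rad * range_m       # ground footprint of one pixel at 1 km
pixels_on_target = 3.0 / footprint_m   # pixels across the 3 m target dimension

print(round(footprint_m, 3), round(pixels_on_target, 1))
```

Under these assumptions one pixel covers roughly 16 cm at 1 km, so about 19 pixels span the 3 m critical dimension, comfortably above the ~8 pixels (4 cycles) that Johnson-criteria recognition is usually taken to require.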

An Experimental Study on the Noise Generation Mechanisms of Propane Premixed Flames (프로판 예혼합화염의 소음발생 매커니즘에 관한 실험적 연구)

  • Lee, Won-Nam;Park, Dong-Soo
    • The Korean Society of Combustion: Conference Proceedings
    • /
    • 2004.06a
    • /
    • pp.27-33
    • /
    • 2004
  • The noise generation mechanisms of propane laminar premixed flames on a slot burner have been studied experimentally. The sound levels and frequencies were measured for various mixture flow rates (velocities) and equivalence ratios. The primary frequency of the self-induced noise increases with the mean mixture velocity as $f \propto U_f^{1.144}$, and the measured noise level increases with the mixture flow rate and equivalence ratio as $p \propto U_f^{1.7}\,F^{8.2}$. The nature of the flame oscillation and the noise generation mechanisms were also investigated using a high-speed CCD camera and a DSLR camera. The repetition of sudden extinction at the flame tip is evident, and the repetition rates are identical to the primary frequencies obtained from FFT analysis of the sound pressure signals. CH chemiluminescence intensities of the oscillating flames were also measured by a PMT with a 431 nm (10 nm FWHM) band-pass filter and compared to the pressure signals.
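The fitted scaling laws can be used directly; for instance, the pressure exponent implies how much the noise level rises when the mixture velocity doubles at fixed equivalence ratio. The 1.7 exponent below is the abstract's fitted value; the decibel conversion is the standard one for pressure amplitudes.

```python
import math

# p scales as U_f^1.7, so doubling U_f multiplies the pressure amplitude
# by 2^1.7; convert that ratio to decibels with 20*log10.
amplitude_ratio = 2.0 ** 1.7
db_increase = 20 * math.log10(amplitude_ratio)   # ~10.2 dB per doubling of U_f

print(round(db_increase, 1))
```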


Using 'RED ONE' Camera for Digital Film Making (디지털 영화제작을 위한 레드 원 카메라의 활용성 연구)

  • Ko, Hyun-Wook;Min, Kyung-Won
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.9
    • /
    • pp.163-170
    • /
    • 2009
  • This paper analyzes the newly introduced RED ONE camera and its various advantages, and compares the results with existing digital cameras. As film production systems have been converting to digital, Arri, one of the established film camera companies, has developed the Arri D21. Sony, the leader of the high-definition digital camera industry, is also securing its role as a leader in digital filmmaking by introducing the CineAlta F-900 series. With these advances in digital filmmaking, the advent of the RED ONE camera opens doors to new possibilities for digital filmmakers. The development of digital cameras with thoughtful consideration for their users has influenced the filmmaking environment immensely, which is why such cameras play a major role in digital filmmaking.

Assembling three one-camera images for three-camera intersection classification

  • Marcella Astrid;Seung-Ik Lee
    • ETRI Journal
    • /
    • v.45 no.5
    • /
    • pp.862-873
    • /
    • 2023
  • Determining whether an autonomous self-driving agent is in the middle of an intersection can be extremely difficult when relying on visual input taken from a single camera. In such a problem setting, a wider range of views is essential, which drives us to use three cameras positioned in the front, left, and right of an agent for better intersection recognition. However, collecting adequate training data with three cameras poses several practical difficulties; hence, we propose using data collected from one camera to train a three-camera model, which would enable us to more easily compile a variety of training data to endow our model with improved generalizability. In this work, we provide three separate fusion methods (feature, early, and late) of combining the information from three cameras. Extensive pedestrian-view intersection classification experiments show that our feature fusion model provides an area under the curve and F1-score of 82.00 and 46.48, respectively, which considerably outperforms contemporary three- and one-camera models.
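The three fusion strategies named in this abstract can be sketched with toy feature vectors standing in for real CNN features. The shapes and the stand-in classifier below are illustrative only, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy per-camera feature vectors for the front, left, and right views
front, left, right = (rng.normal(size=16) for _ in range(3))

# Early fusion: combine the raw inputs before any processing
# (here, stacking the three views channel-wise)
early = np.stack([front, left, right])            # shape (3, 16)

# Feature fusion: extract features per camera, concatenate them,
# then feed the joint vector to one classifier head
feature = np.concatenate([front, left, right])    # shape (48,)

# Late fusion: run a separate head per camera, then combine the scores
def head(x):
    """Stand-in classifier head (a real model would be learned)."""
    return float(x.sum())

late = np.mean([head(front), head(left), head(right)])

print(early.shape, feature.shape)
```

Feature fusion, the variant the abstract reports as strongest, lets the classifier see all three views jointly while still reusing a single per-camera feature extractor, which is what makes one-camera training data transferable.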

Enhancement on Time-of-Flight Camera Images (Time-of-Flight 카메라 영상 보정)

  • Kim, Sung-Hee;Kim, Myoung-Hee
    • The HCI Society of Korea: Conference Proceedings
    • /
    • 2008.02a
    • /
    • pp.708-711
    • /
    • 2008
  • Time-of-flight (ToF) cameras deliver intensity data as well as range information for the objects in a scene. However, systematic problems during acquisition lead to distorted values in both distance and amplitude. In this paper we propose a method to acquire reliable distance information over the entire scene by correcting each kind of information based on the other. The amplitude image is enhanced based on the depth values, and this in turn leads to depth correction, especially for far pixels.
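One way to picture the mutual correction described above: ToF return amplitude falls off roughly with the square of distance, so compensating amplitude by depth² evens out the amplitude image, and pixels whose compensated amplitude is still weak can be flagged as unreliable depth. This is an illustrative sketch of that idea, not the paper's algorithm; the falloff model and threshold are assumptions.

```python
import numpy as np

def cross_correct(depth_m, amplitude, min_amp=0.05):
    """Compensate amplitude for distance falloff, then use the result
    to mask depth pixels with too weak a return."""
    amp_comp = amplitude * depth_m ** 2       # undo the assumed 1/d^2 falloff
    reliable = amp_comp > min_amp             # weak return -> distrust depth
    depth_out = np.where(reliable, depth_m, np.nan)
    return amp_comp, depth_out

depth = np.array([1.0, 2.0, 4.0])             # meters
amp = np.array([0.4, 0.1, 0.002])             # raw amplitudes
amp_c, depth_c = cross_correct(depth, amp)
# The far pixel's return is weak even after compensation, so its depth
# is rejected (NaN); the first two pixels keep their depth values.
```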
