• Title/Summary/Keyword: RGB-depth camera

82 search results

Smoke Detection Based on RGB-Depth Camera in Interior (RGB-Depth 카메라 기반의 실내 연기검출)

  • Park, Jang-Sik
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.9 no.2
    • /
    • pp.155-160
    • /
    • 2014
  • In this paper, an algorithm using an RGB-depth camera is proposed to detect smoke indoors. The Kinect, an RGB-depth camera, provides an RGB color image together with depth information. The Kinect sensor consists of an infrared laser emitter, an infrared camera, and an RGB camera. A specific speckle pattern radiated from the laser source is projected onto the scene; this pattern is captured by the infrared camera and analyzed to obtain depth information. The displacement of each speckle in the pattern is measured and the depth of the object is estimated. When the depth of an object changes rapidly, the Kinect cannot determine the depth of the object plane. The depth of smoke likewise cannot be determined, because the density of smoke fluctuates continuously and the intensity of the infrared image varies from pixel to pixel. In this paper, a smoke detection algorithm exploiting these characteristics of the Kinect is proposed. A region whose depth information cannot be determined is set as a smoke candidate region; if the intensity of the corresponding region in the color image exceeds a threshold, the region is confirmed as a smoke region. Simulation results show that the proposed method is effective for detecting smoke indoors.
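The two-stage rule described in this abstract (depth-invalid pixels form the candidate region, then a color-intensity threshold confirms smoke) can be sketched in a few lines of numpy. The array shapes, threshold value, and function name below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def detect_smoke(depth, gray, intensity_threshold=120):
    """Flag pixels as smoke when the depth sensor returned no reading
    (value 0) and the color image is bright enough at that pixel."""
    candidate = (depth == 0)                       # depth could not be determined
    smoke = candidate & (gray > intensity_threshold)
    return smoke

# Toy 4x4 frame: the upper-left block has no depth reading.
depth = np.array([[0, 0, 5, 5],
                  [0, 0, 5, 5],
                  [5, 5, 5, 5],
                  [5, 5, 5, 5]])
gray = np.array([[200,  90, 200, 200],
                 [180,  30,  10,  10],
                 [ 10,  10,  10,  10],
                 [ 10,  10,  10,  10]])
mask = detect_smoke(depth, gray)
print(mask.sum())  # count of pixels that are both depth-invalid and bright
```

Note that bright pixels with valid depth (top-right of the toy frame) are rejected, which is exactly what distinguishes this method from a plain intensity threshold.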

Robust Vehicle Occupant Detection based on RGB-Depth-Thermal Camera (다양한 환경에서 강건한 RGB-Depth-Thermal 카메라 기반의 차량 탑승자 점유 검출)

  • Song, Changho;Kim, Seung-Hun
    • The Journal of Korea Robotics Society
    • /
    • v.13 no.1
    • /
    • pp.31-37
    • /
    • 2018
  • Recently, as self-driving cars are developed, in-vehicle safety has also become a hot topic. Passive safety systems such as airbags and seat belts are being replaced by active systems that monitor the status and behavior of the passengers, including the driver, to mitigate risk. Furthermore, occupant information is expected to enable customized services such as seat adjustment, air-conditioning control, and D.W.D. (Distraction While Driving) warnings suited to each passenger. In this paper, we propose a robust vehicle occupant detection algorithm based on an RGB-Depth-Thermal camera for obtaining passenger information. The RGB-Depth-Thermal camera sensor system was configured to be robust in various environments. OpenPose, a deep learning algorithm, was used for occupant detection; it is effective not only on RGB images but also on thermal images, even with an existing pretrained model. The algorithm will be extended to acquire higher-level information, such as passenger posture detection and face recognition mentioned in the introduction, and to provide customized active convenience services.

A method of improving the quality of 3D images acquired from RGB-depth camera (깊이 영상 카메라로부터 획득된 3D 영상의 품질 향상 방법)

  • Park, Byung-Seo;Kim, Dong-Wook;Seo, Young-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.5
    • /
    • pp.637-644
    • /
    • 2021
  • In general, in the fields of computer vision, robotics, and augmented reality, 3D space and 3D object detection and recognition technologies have become increasingly important. In particular, since RGB images and depth images can be acquired in real time through image sensors based on the Microsoft Kinect method, object detection, tracking, and recognition studies have changed significantly. In this paper, we propose a method to improve the quality of 3D reconstructed images by processing images acquired through RGB-depth cameras in a multi-view camera system. We propose a method of removing noise outside an object by applying a mask obtained from the color image, and a method of applying a combined filtering operation based on the depth differences between pixels inside the object. The experimental results confirm that the proposed method effectively removes noise and improves the quality of the 3D reconstructed image.
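The two depth-cleaning steps this abstract names (masking out noise outside the object, then filtering depth inside it) can be sketched as follows. A plain 3x3 median filter stands in for the paper's combined filtering operation, and all names and sizes are illustrative assumptions:

```python
import numpy as np

def clean_depth(depth, fg_mask):
    """Zero out depth values outside the object mask (obtained from the
    color image), then smooth interior pixels with a 3x3 median filter,
    a simple stand-in for the paper's combined filtering step."""
    d = np.where(fg_mask, depth, 0).astype(float)  # suppress exterior noise
    out = d.copy()
    h, w = d.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if fg_mask[y, x]:
                out[y, x] = np.median(d[y-1:y+2, x-1:x+2])
    return out

# Toy 5x5 depth map: one outlier inside the object, one noise pixel outside.
depth = np.full((5, 5), 10.0)
depth[2, 2] = 100.0    # depth spike inside the object
depth[0, 4] = 99.0     # noise outside the object mask
mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True  # object occupies the center block
cleaned = clean_depth(depth, mask)
```

The median suppresses the interior spike because eight of its nine neighbors agree, while the exterior noise pixel is simply masked to zero.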

Multiple Depth and RGB Camera-based System to Acquire Point Cloud for MR Content Production (MR 콘텐츠 제작을 위한 다중 깊이 및 RGB 카메라 기반의 포인트 클라우드 획득 시스템)

  • Kim, Kyung-jin;Park, Byung-seo;Kim, Dong-wook;Seo, Young-Ho
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2019.05a
    • /
    • pp.445-446
    • /
    • 2019
  • Recently, attention has focused on mixed reality (MR) technology, which fuses virtual information into the real world to provide experiences that cannot be realized in reality alone. Mixed reality has the advantages of excellent interaction with the real environment and maximized immersion. In this paper, we propose a method to acquire a point cloud for the production of mixed-reality content using a multiple depth and RGB camera system.
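The core of a multi-camera point cloud pipeline like the one this abstract describes is back-projecting each depth image through its camera intrinsics and merging the per-camera clouds using each camera's pose. A minimal numpy sketch, with illustrative intrinsics and poses (not the paper's calibration values):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image into camera-frame 3D points
    using the pinhole model; pixels with zero depth are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel().astype(float)
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)
    return pts[z > 0]

def merge_clouds(clouds, poses):
    """Transform each camera's cloud into the world frame with its
    (R, t) pose and stack them into one point cloud."""
    merged = [pts @ R.T + t for pts, (R, t) in zip(clouds, poses)]
    return np.vstack(merged)

# Two toy cameras: identity pose and a camera shifted 1 m along z.
pts = depth_to_points(np.ones((2, 2)), fx=1.0, fy=1.0, cx=0.5, cy=0.5)
poses = [(np.eye(3), np.zeros(3)), (np.eye(3), np.array([0.0, 0.0, 1.0]))]
cloud = merge_clouds([pts, pts], poses)
```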


Depthmap Generation with Registration of LIDAR and Color Images with Different Field-of-View (다른 화각을 가진 라이다와 칼라 영상 정보의 정합 및 깊이맵 생성)

  • Choi, Jaehoon;Lee, Deokwoo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.6
    • /
    • pp.28-34
    • /
    • 2020
  • This paper proposes an approach to fusing two heterogeneous sensors with different fields of view (FOV): a LIDAR and an RGB camera. Registration between the data captured by the LIDAR and the RGB camera provides the fusion result, and registration is complete once a depthmap corresponding to the 2-dimensional RGB image is generated. For this fusion, an RPLIDAR-A3 (manufactured by Slamtec) and a general digital camera were used to acquire depth and image data, respectively. The LIDAR sensor provides the distance between the sensor and objects in the nearby scene, and the RGB camera provides a 2-dimensional image with color information. Fusing the 2D image with depth information enables better performance in applications such as object detection and tracking; for instance, driver assistance systems, robotics, and other systems that require visual information processing may find this work useful. Since the LIDAR provides only depth values, processing and generation of a depthmap corresponding to the RGB image is required. Experimental results are provided to validate the proposed approach.
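Generating a depthmap from registered LIDAR points amounts to transforming each point into the camera frame, projecting it through the intrinsic matrix, and writing its depth at the resulting pixel. A sketch under assumed intrinsics and an identity extrinsic pose (the paper's actual calibration is not reproduced here):

```python
import numpy as np

def lidar_to_depthmap(points, K, R, t, shape):
    """Project LIDAR points (world frame) into the RGB camera and
    write each point's depth at its pixel, keeping the nearest point
    when several project to the same pixel."""
    h, w = shape
    depthmap = np.zeros((h, w))
    cam = points @ R.T + t                 # transform into camera frame
    cam = cam[cam[:, 2] > 0]               # keep points in front of camera
    uvw = cam @ K.T                        # pinhole projection
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    for ui, vi, z in zip(u, v, cam[:, 2]):
        if 0 <= ui < w and 0 <= vi < h:
            if depthmap[vi, ui] == 0 or z < depthmap[vi, ui]:
                depthmap[vi, ui] = z
    return depthmap

# Toy example: one point 2 m ahead lands on the principal point.
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 24.0],
              [0.0,   0.0,  1.0]])
pts = np.array([[0.0, 0.0, 2.0],
                [0.0, 0.0, -1.0]])        # behind the camera, discarded
dm = lidar_to_depthmap(pts, K, np.eye(3), np.zeros(3), (48, 64))
```

Because the two sensors have different FOVs, in practice many pixels remain empty and the sparse depthmap is interpolated or densified afterwards.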

A New Camera System Implementation for Realistic Media-based Contents (실감미디어 기반의 콘텐츠를 위한 카메라 시스템의 구현)

  • Seo, Young Ho;Lee, Yoon Hyuk;Koo, Ja Myung;Kim, Woo Youl;Kim, Bo Ra;Kim, Moon Seok;Kim, Dong Wook
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.9 no.2
    • /
    • pp.99-109
    • /
    • 2013
  • In this paper, we propose and implement a new system that captures real depth and color information from a natural scene. With it, we produced stereo and multiview images for 3-dimensional stereoscopic content, and generated a digital hologram, which is considered the next-generation imaging format. The system consists of a camera rig for capturing corresponding RGB and depth images and software (SW) for the various image-processing steps: pre-processing such as rectification and calibration, 3D warping, and computer-generated hologram (CGH) synthesis. The camera system uses a vertical rig with two pairs of depth and RGB cameras and a specially manufactured cold mirror, whose transmittance differs with wavelength (the cutoff wavelength of our mirror is about 850 nm), so that the cameras capture images from the same viewpoint. Each algorithm was implemented in C and C++, and the implemented system operates in real time.

RGB-Depth Camera for Dynamic Measurement of Liquid Sloshing (RGB-Depth 카메라를 활용한 유체 표면의 거동 계측분석)

  • Kim, Junhee;Yoo, Sae-Woung;Min, Kyung-Won
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.32 no.1
    • /
    • pp.29-35
    • /
    • 2019
  • In this paper, a low-cost dynamic measurement system using an RGB-depth camera, the Microsoft Kinect v2, is proposed for measuring the time-varying free-surface motion of liquid dampers used in building vibration mitigation. A series of experimental studies is conducted: performance evaluation and validation of the Kinect v2, real-time monitoring using the Kinect v2 SDK (software development kit), point-cloud acquisition of the liquid free surface in 3D space, and comparison with existing video sensing technology. Using the proposed Kinect v2-based measurement system, the dynamic behavior of liquid in a laboratory-scale small tank under a wide frequency range of input excitation is experimentally analyzed.
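Turning depth frames of a liquid surface into a motion time series, as this abstract describes, reduces to converting each frame's depth readings over a region of interest into surface elevation. A minimal sketch, assuming a downward-looking camera, depth in millimeters, and a known tank-bottom depth (all illustrative values, not the paper's setup):

```python
import numpy as np

def surface_height_series(frames, tank_bottom_depth, roi):
    """Convert a stack of depth frames (camera looking down at the
    tank) into a free-surface elevation time series: elevation is
    bottom depth minus measured depth, averaged over a region of
    interest. Invalid (zero) depth pixels are ignored."""
    y0, y1, x0, x1 = roi
    series = []
    for frame in frames:
        patch = frame[y0:y1, x0:x1].astype(float)
        valid = patch > 0                   # drop unmeasured pixels
        series.append(tank_bottom_depth - patch[valid].mean())
    return np.array(series)

# Two toy frames: the surface rises 20 mm between frames.
f0 = np.full((4, 4), 900.0)
f0[0, 0] = 0.0                              # one invalid pixel
f1 = np.full((4, 4), 880.0)
heights = surface_height_series([f0, f1], tank_bottom_depth=1000.0,
                                roi=(0, 4, 0, 4))
```

Sampling many such ROIs across the surface yields the spatial sloshing profile rather than a single elevation.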

System Implementation for Generating High Quality Digital Holographic Video using Vertical Rig based on Depth+RGB Camera (Depth+RGB 카메라 기반의 수직 리그를 이용한 고화질 디지털 홀로그래픽 비디오 생성 시스템의 구현)

  • Koo, Ja-Myung;Lee, Yoon-Hyuk;Seo, Young-Ho;Kim, Dong-Wook
    • Journal of Broadcast Engineering
    • /
    • v.17 no.6
    • /
    • pp.964-975
    • /
    • 2012
  • Recently, attention to the digital hologram, regarded as the final goal of 3-dimensional video technology, has increased. A digital hologram can be generated from a depth image and an RGB image. We propose a new system that captures RGB and depth images and converts them into digital holograms. First, a new cold mirror was designed and produced; its transmittance ratio differs with wavelength, providing the same view and focal point to both cameras. After correcting various distortions in the camera system, the resolution difference between the depth and RGB images was adjusted. The object of interest was extracted using the depth information, and finally a digital hologram was generated with the computer-generated hologram (CGH) algorithm. All algorithms were implemented in C/C++/CUDA and integrated in the LabVIEW environment. The hologram was computed with general-purpose computing on graphics processing units (GPGPU) for high-speed operation. We found that the visual quality of the holograms produced by the proposed system is better than that of the previous system.
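The CGH step mentioned here is commonly computed as a point-source sum: each object point (recovered from the depth image) contributes a zone-plate fringe to the hologram plane. A textbook Fresnel-approximation sketch with illustrative grid size, pixel pitch, and wavelength, not the paper's CUDA implementation:

```python
import numpy as np

def cgh_fresnel(points, amps, n, pixel_pitch, wavelength):
    """Point-source CGH under the Fresnel approximation: each object
    point (x, y, z) with amplitude a adds a zone-plate fringe
    a * cos(pi * ((u-x)^2 + (v-y)^2) / (lambda * z)) to the hologram."""
    u, v = np.meshgrid(np.arange(n), np.arange(n))
    xh = (u - n / 2) * pixel_pitch          # hologram-plane coordinates
    yh = (v - n / 2) * pixel_pitch
    H = np.zeros((n, n))
    k = np.pi / wavelength
    for (x, y, z), a in zip(points, amps):
        H += a * np.cos(k * ((xh - x) ** 2 + (yh - y) ** 2) / z)
    return H

# One on-axis point 10 cm from a 64x64 hologram, 8 um pitch, 633 nm.
H = cgh_fresnel([(0.0, 0.0, 0.1)], [1.0],
                n=64, pixel_pitch=8e-6, wavelength=633e-9)
```

The per-pixel independence of this sum is what makes it map so well onto GPGPU execution, as the paper exploits.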

Point Cloud Registration Algorithm Based on RGB-D Camera for Shooting Volumetric Objects (체적형 객체 촬영을 위한 RGB-D 카메라 기반의 포인트 클라우드 정합 알고리즘)

  • Kim, Kyung-Jin;Park, Byung-Seo;Kim, Dong-Wook;Seo, Young-Ho
    • Journal of Broadcast Engineering
    • /
    • v.24 no.5
    • /
    • pp.765-774
    • /
    • 2019
  • In this paper, we propose a point cloud registration algorithm for multiple RGB-D cameras. In computer vision, precisely estimating camera positions is a long-standing problem. Existing 3D model generation methods require a large number of cameras or expensive 3D scanners, and the conventional method of obtaining camera extrinsic parameters from two-dimensional images has a large estimation error. We propose a method that obtains coordinate-transformation parameters with an error within a valid range, using depth images and a function-optimization method, to generate an omnidirectional three-dimensional model from eight low-cost RGB-D cameras.
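The coordinate-transformation parameters this abstract refers to are a rotation and translation aligning one camera's points onto another's. A minimal stand-in for the paper's function-optimization step is the closed-form least-squares solution (Kabsch/SVD) for corresponding point sets; the correspondences here are assumed given:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) such that dst ~ R @ src + t,
    solved in closed form via SVD (Kabsch algorithm)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)           # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Toy check: recover a known 90-degree rotation about z plus a shift.
src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
R0 = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
t0 = np.array([1.0, 2.0, 3.0])
dst = src @ R0.T + t0
R, t = estimate_rigid_transform(src, dst)
```

In a real eight-camera rig this is wrapped in an outer loop (e.g. ICP-style re-matching or numerical optimization) because correspondences between clouds are noisy rather than exact.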

Microsoft Kinect-based Indoor Building Information Model Acquisition (Kinect(RGB-Depth Camera)를 활용한 실내 공간 정보 모델(BIM) 획득)

  • Kim, Junhee;Yoo, Sae-Woung;Min, Kyung-Won
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.31 no.4
    • /
    • pp.207-213
    • /
    • 2018
  • This paper investigates the applicability of the Microsoft Kinect, an RGB-depth camera, for implementing a 3D image and spatial information of a sensed target. The relationship between the Kinect camera image and the pixel coordinate system is formulated. Calibration of the camera provides the depth and RGB information of the target: the intrinsic parameters (focal length, principal point, and distortion coefficients) are obtained through a checkerboard experiment, and the extrinsic parameters describing the relationship between the two Kinect cameras consist of a rotation matrix and a translation vector. The 2D projected images are converted into 3D images, yielding spatial information based on the depth and RGB data. The measurement is verified through comparison with the length and location of the target structure in the 2D images.
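The pixel-to-3D relationship formulated in this abstract is the inverse pinhole model using the calibrated intrinsics, followed by the extrinsic (R, t) to move between the two Kinect frames. A sketch with illustrative parameter values (the focal length and principal point below are typical placeholder numbers, not the paper's calibration results):

```python
import numpy as np

def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Lift a pixel (u, v) with a depth reading into a 3D point using
    the intrinsic parameters from the checkerboard calibration."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def to_second_camera(point, R, t):
    """Express a point from the first Kinect's frame in the second
    Kinect's frame using the extrinsic rotation matrix R and
    translation vector t."""
    return R @ point + t

# A pixel at the principal point, 2 m away, lands on the optical axis.
p = pixel_to_3d(320, 240, 2.0, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
# Second Kinect offset 0.5 m along x, same orientation.
q = to_second_camera(p, np.eye(3), np.array([0.5, 0.0, 0.0]))
```

Lens distortion is omitted here; with the calibrated distortion coefficients, the pixel coordinates would be undistorted before this back-projection.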