Title/Summary/Keyword: RGB camera


Application of UAV-based RGB Images for the Growth Estimation of Vegetable Crops

  • Kim, Dong-Wook;Jung, Sang-Jin;Kwon, Young-Seok;Kim, Hak-Jin
    • Proceedings of the Korean Society for Agricultural Machinery Conference
    • /
    • 2017.04a
    • /
    • pp.45-45
    • /
    • 2017
  • On-site monitoring of vegetable growth parameters, such as leaf length, leaf area, and fresh weight, in an agricultural field can provide useful information for farmers to establish farm management strategies suitable for optimum production of vegetables. Unmanned Aerial Vehicles (UAVs) are currently gaining interest for agricultural applications. This study reports on validation testing of previously developed vegetable growth estimation models based on UAV-acquired RGB images for white radish and Chinese cabbage. The specific objective was to investigate the potential of the UAV-based RGB camera system for effectively quantifying temporal and spatial variability in the growth status of white radish and Chinese cabbage in a field. RGB images were acquired on an automated flight mission with a multi-rotor UAV equipped with a low-cost RGB camera that automatically followed a predefined path. The acquired images were initially geo-located using the flight log data saved in the UAV, and then mosaicked using commercial image processing software. Otsu threshold-based crop coverage and DSM-based crop height were used as the two predictor variables of the previously developed multiple linear regression models to estimate the growth parameters of the vegetables. The predictive capability of the UAV sensing system for estimating the growth parameters of the two vegetables was evaluated quantitatively by comparison with ground truth data. There were highly linear relationships between the actual and estimated leaf lengths, widths, and fresh weights, with coefficients of determination up to 0.7. However, the regression slopes between the ground truth and estimated values were lower than 0.5, requiring the use of a site-specific normalization method.
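The entry above uses Otsu threshold-based crop coverage as one regression predictor. A minimal sketch of how such a coverage fraction can be computed from an 8-bit vegetation-index image (the synthetic soil/vegetation values below are illustrative assumptions, not the paper's data):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def crop_coverage(index_image):
    """Fraction of pixels classified as crop by Otsu thresholding."""
    t = otsu_threshold(index_image)
    return (index_image >= t).mean()

# Synthetic mosaic: dark soil (~40) and bright vegetation (~200).
img = np.full((100, 100), 40, dtype=np.uint8)
img[:, :30] = 200                     # 30% of pixels are "crop"
print(round(crop_coverage(img), 2))   # 0.3
```

The coverage value would then feed, together with DSM-based height, into the multiple linear regression models the abstract describes.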


Real-Time Motion Generation Method of Humanoid Robots based on RGB-D Camera for Interactive Performance and Exhibition (인터렉티브 공연·전시를 위한 RGB-D 카메라 기반 휴머노이드 로봇의 실시간 로봇 동작 생성 방법)

  • Seo, Bohyeong;Lee, Duk-Yeon;Choi, Dongwoon;Lee, Dong-Wook
    • Journal of Broadcast Engineering
    • /
    • v.25 no.4
    • /
    • pp.528-536
    • /
    • 2020
  • As humanoid robot technology advances, the use of robots in performances is increasing, and studies are being conducted to widen their scope of use by making their motions as natural as a human's. Motion capture technology is often used for this, but preparing for motion capture is environmentally inconvenient: IMU sensors or markers must be attached to each part of the body, and precise, high-performance cameras are needed. In addition, robots used in performances must respond in real time to unexpected situations or audience reactions. To address these problems, this paper proposes a real-time motion capture system that uses a number of RGB-D cameras and generates natural robot motion, similar to human motion, from the motion-captured data.
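A basic building block when retargeting RGB-D skeleton data onto a humanoid is converting captured joint positions into joint angles. A minimal sketch (the shoulder/elbow/wrist coordinates are invented for illustration; the paper's actual retargeting pipeline is not described in the abstract):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (radians) at joint b formed by 3D points a-b-c.

    E.g. the elbow angle from shoulder, elbow, and wrist positions
    reported by an RGB-D skeleton tracker.
    """
    u, v = a - b, c - b
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

# Shoulder-elbow-wrist forming a right angle.
shoulder = np.array([0.0, 0.0, 0.0])
elbow    = np.array([0.3, 0.0, 0.0])
wrist    = np.array([0.3, 0.3, 0.0])
print(np.degrees(joint_angle(shoulder, elbow, wrist)))   # 90.0
```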

Discriminant analysis to detect fire blight infection on pear trees using RGB imagery obtained by a rotary wing drone

  • Kim, Hyun-Jung;Noh, Hyun-Kwon;Kang, Tae-Hwan
    • Korean Journal of Agricultural Science
    • /
    • v.47 no.2
    • /
    • pp.349-360
    • /
    • 2020
  • Fire blight is a contagious disease affecting apples, pears, and some other members of the family Rosaceae. Due to its extremely strong infectivity, once an orchard is confirmed to be infected, all orchards located within 100 m must be buried, and the sites are prohibited from cultivating any fruit trees for 5 years. In South Korea, fire blight was confirmed for the first time in the Ansung area in 2015, and infections are still being identified every year. Traditional approaches to detecting fire blight are expensive and time-consuming; moreover, the inspectors themselves can potentially transmit the pathogen. Thus, it is necessary to develop a remote, unmanned monitoring system for fire blight to prevent the spread of the disease. This study was conducted to detect fire blight on pear trees using discriminant analysis with color information collected from a rotary-wing drone. Images of the infected trees were obtained at a pear orchard in Cheonan using an RGB camera attached to a rotary-wing drone at an altitude of 4 m, and also using a smartphone RGB camera on the ground. RGB and Lab color spaces and discriminant analysis were used to develop the image processing algorithm. As a result, the proposed method had an accuracy of approximately 75%, although the system still has many flaws to be improved.
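The abstract names discriminant analysis on color information. As a sketch of the general idea, here is a two-class Fisher linear discriminant separating invented "healthy" (greenish) and "infected" (brownish) RGB samples; the class means, spreads, and sample counts below are assumptions, not data from the study:

```python
import numpy as np

def fisher_discriminant(X0, X1):
    """Fisher's linear discriminant for two classes of color features.

    Returns the projection vector w and a decision threshold at the
    midpoint of the projected class means.
    """
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw, m1 - m0)          # within-class whitened direction
    thresh = 0.5 * (X0 @ w).mean() + 0.5 * (X1 @ w).mean()
    return w, thresh

rng = np.random.default_rng(0)
healthy  = rng.normal([80, 140, 60], 8, size=(200, 3))   # greenish RGB
infected = rng.normal([120, 80, 50], 8, size=(200, 3))   # brownish RGB
w, t = fisher_discriminant(healthy, infected)
hit_rate = ((infected @ w) > t).mean()
print(hit_rate > 0.9)   # True — well-separated classes are mostly detected
```

The study additionally works in the Lab color space, where chromatic differences are more perceptually uniform; the same discriminant machinery applies to Lab features.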

Deep learning based Person Re-identification with RGB-D sensors

  • Kim, Min;Park, Dong-Hyun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.3
    • /
    • pp.35-42
    • /
    • 2021
  • In this paper, we propose a deep learning-based person re-identification method using a three-dimensional RGB-Depth Xtion2 camera that considers joint coordinates and dynamic features (velocity, acceleration). The main idea of the proposed identification methodology is to easily extract gait data such as joint coordinates and dynamic features with an RGB-D camera and to automatically identify gait patterns through a self-designed one-dimensional convolutional neural network classifier (1D-ConvNet). Accuracy was measured with the F1 score, and the influence of the dynamic features was measured by comparing against a classifier model (JC) that did not consider them. As a result, the proposed classifier that considers the dynamic characteristics (JCSpeed) showed an F1 score about 8% higher than JC.
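The dynamic features the abstract mentions can be derived from the joint-coordinate stream by finite differencing. A minimal sketch of building a (coordinates, velocity, acceleration) feature sequence of the kind a 1D convolutional classifier could slide over along the time axis (frame rate, joint count, and the random walk trajectory are assumptions):

```python
import numpy as np

def gait_features(joints, dt):
    """Stack joint coordinates with their velocity and acceleration.

    joints: (T, D) array — T frames, D flattened joint coordinates.
    Returns a (T, 3*D) feature sequence for a time-axis 1D-ConvNet.
    """
    vel = np.gradient(joints, dt, axis=0)    # central differences
    acc = np.gradient(vel, dt, axis=0)
    return np.concatenate([joints, vel, acc], axis=1)

T, D, dt = 60, 6, 1 / 30                     # 2 s of 30 fps, 2 joints x 3 coords
walk = np.cumsum(np.random.default_rng(1).normal(size=(T, D)), axis=0)
feats = gait_features(walk, dt)
print(feats.shape)   # (60, 18)
```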

The method to predict spectral reflectance of skin color by RGB color signals (RGB 색신호에 의한 피부색의 분광반사율 추정)

  • 김채경;박상택;김종필;이을환;안석출
    • Journal of the Korean Graphic Arts Communication Society
    • /
    • v.16 no.3
    • /
    • pp.97-108
    • /
    • 1998
  • The spectral reflectance of an object must be measured to predict its color under various illuminants. Spectral reflectance can be represented in a multi-dimensional space, whereas an image captured by a digital camera or color scanner is generally represented with three-dimensional color signals such as RGB. To predict the color of an input image under an arbitrary illuminant, the spectral reflectance of the object must therefore be estimated. In this paper, we describe a method to predict spectral reflectance by eigenvectors using the skin color of a printed image, and confirm its availability and propriety through experiments. We estimated the spectral reflectance of skin color from RGB color signals and then reproduced the skin color under various illuminants on a CRT.
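The core idea — recovering a high-dimensional reflectance spectrum from 3-dimensional RGB via an eigenvector basis — can be sketched as follows. The basis, the camera sensitivity matrix, and the test reflectance below are random stand-ins (the paper derives its eigenvectors from measured skin reflectances, which are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 31                                       # e.g. 400-700 nm at 10 nm steps

# Assumed low-dimensional model: reflectances lie near a 3-vector basis B.
B = np.linalg.qr(rng.normal(size=(N, 3)))[0]   # orthonormal "eigenvectors"
S = rng.uniform(size=(3, N))                   # assumed camera sensitivities

def estimate_reflectance(rgb, S, B):
    """Estimate an N-point reflectance spectrum from one RGB triple.

    Solves S @ (B @ c) = rgb for the 3 basis coefficients c, then
    reconstructs r_hat = B @ c.
    """
    c = np.linalg.solve(S @ B, rgb)
    return B @ c

# A reflectance that truly lies in the basis is recovered exactly.
r_true = B @ np.array([0.5, -0.2, 0.1])
rgb = S @ r_true
r_hat = estimate_reflectance(rgb, S, B)
print(np.allclose(r_hat, r_true))   # True
```

Real skin reflectances only lie *near* such a basis, so the reconstruction carries a model error that the paper evaluates experimentally.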


An Anti-Glare Technique for Drivers Based on Monocular RGB Camera and Smart Film (자동차 운전자를 위한 단일 RGB 카메라와 스마트 필름 기반 눈부심 측정 및 완화 기법)

  • Kim, Jinu;Bae, Sang-Jun;Kim, Dongho
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2019.10a
    • /
    • pp.626-629
    • /
    • 2019
  • Glare experienced while driving adversely affects the driver's recognition of the road situation and deprives the driver of the time needed to properly consider the road elements required for driving, which can ultimately lead to traffic accidents. In this paper, we propose a glare measurement and mitigation technique for drivers based on a single RGB camera and smart film: glare is detected with the RGB camera and mitigated by coordinating with the smart film. We expect this technique to serve as a tool for reducing the glare that can arise from various causes while driving, and the resulting risk of traffic accidents.

Real-Virtual Fusion Hologram Generation System using RGB-Depth Camera (RGB-Depth 카메라를 이용한 현실-가상 융합 홀로그램 생성 시스템)

  • Song, Joongseok;Park, Jungsik;Park, Hanhoon;Park, Jong-Il
    • Journal of Broadcast Engineering
    • /
    • v.19 no.6
    • /
    • pp.866-876
    • /
    • 2014
  • Generating a digital hologram of video content containing computer graphics (CG) requires natural fusion of real and virtual 3D information. In this paper, we propose a system that fuses real and virtual 3D information naturally and quickly generates a digital hologram of the fused result using a multiple-GPU based computer-generated hologram (CGH) computing part. The system calculates the camera projection matrix of an RGB-Depth camera and estimates the 3D information of the virtual object. The 3D information of the virtual object obtained from the projection matrix and that of the real space are transferred to a Z-buffer, which fuses the 3D information naturally. The fused result in the Z-buffer is then transmitted to the multiple-GPU based CGH computing part, which calculates the digital hologram quickly. In experiments, the 3D information of the virtual object from the proposed system had a mean relative error (MRE) of about 0.5138% with respect to the real 3D information, i.e., about 99% accuracy. In addition, we verified that the proposed system can quickly generate the digital hologram of the fused result using the multiple-GPU based CGH calculation.
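The Z-buffer fusion step the abstract describes reduces to a per-pixel nearest-depth test between the real and virtual layers. A minimal sketch (the 2x2 depth/color arrays are toy values; the real system works on camera-resolution buffers):

```python
import numpy as np

def zbuffer_fuse(color_real, depth_real, color_virt, depth_virt):
    """Per-pixel Z-buffer fusion: keep whichever layer is nearer.

    Smaller depth = closer to the camera, so the virtual object
    occludes the real scene only where its depth is smaller.
    """
    near_virt = depth_virt < depth_real
    depth = np.where(near_virt, depth_virt, depth_real)
    color = np.where(near_virt[..., None], color_virt, color_real)
    return color, depth

# Toy 2x2 frame: virtual object in front only at the top-left pixel.
dr = np.array([[1.0, 1.0], [1.0, 1.0]])
dv = np.array([[0.5, 2.0], [2.0, 2.0]])
cr = np.zeros((2, 2, 3))                 # real scene: black
cv = np.ones((2, 2, 3))                  # virtual object: white
color, depth = zbuffer_fuse(cr, dr, cv, dv)
print(depth[0, 0], color[0, 0, 0])       # 0.5 1.0 — virtual pixel won
```

The fused (color, depth) pair is exactly the per-point input a CGH kernel needs, which is why the paper hands the Z-buffer contents straight to the GPU computing part.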

An Adaptive Colorimetry Analysis Method of Image using a CIS Transfer Characteristic and SGL Functions (CIS의 전달특성과 SGL 함수를 이용한 적응적인 영상의 Colorimetry 분석 기법)

  • Lee, Sung-Hak;Lee, Jong-Hyub;Sohng, Kyu-Ik
    • Journal of Korea Multimedia Society
    • /
    • v.13 no.5
    • /
    • pp.641-650
    • /
    • 2010
  • Color image sensors (CIS) output color images through image sensors and image signal processing. Image sensors, which convert light into electrical signals, are divided into CMOS and CCD image sensors according to how the signal charge is transferred. In general, a CIS produces RGB output signals from the tristimulus XYZ values of the scene through image signal processing. This paper presents an adaptive colorimetric analysis method for obtaining chromaticity and luminance using a CIS under various environments. An image sensor for use as a colorimeter is characterized based on the CIE standard colorimetric observer. We use the method of least squares to derive a colorimetric characterization matrix between camera RGB output signals and CIE XYZ tristimulus values. We first survey the camera characterization in the standard environment, then derive an SGL (shutter-gain-level) function describing the relationship between luminance and the auto-exposure (AE) characteristic of the CIS, and read the status of the auto-white-balance (AWB) function. The CIS can then measure luminance and chromaticity from camera outputs and AE register values without any preprocessing. Camera RGB outputs, register values, and the camera's photoelectric characteristic are used to analyze colorimetric results, such as chromaticity and luminance, for real scenes. Experimental results show that the proposed method is valid in measurement performance. The proposed method can be applied to various fields such as display surveillance or security systems.
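The least-squares characterization step the abstract names is a standard 3x3 matrix fit between linear camera RGB and measured CIE XYZ over a set of training patches. A minimal sketch (the matrix, patch count, and noiseless synthetic data are assumptions for illustration):

```python
import numpy as np

def characterization_matrix(rgb, xyz):
    """3x3 matrix M minimizing ||rgb @ M.T - xyz|| in least squares.

    rgb, xyz: (n, 3) arrays of linear camera outputs and measured
    CIE XYZ tristimulus values for n training patches.
    """
    M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
    return M.T          # so that xyz_i ~ M @ rgb_i per patch

rng = np.random.default_rng(3)
M_true = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])      # invented ground-truth mapping
rgb = rng.uniform(size=(24, 3))              # e.g. 24 color-checker patches
xyz = rgb @ M_true.T                         # noiseless synthetic measurements
M = characterization_matrix(rgb, xyz)
print(np.allclose(M, M_true))   # True — exact recovery on noiseless data
```

In practice the fit is done on measured, noisy patch data, and the paper's SGL function then scales the result across exposure settings.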

Synthesis of Multi-View Images Based on a Convergence Camera Model

  • Choi, Hyun-Jun
    • Journal of information and communication convergence engineering
    • /
    • v.9 no.2
    • /
    • pp.197-200
    • /
    • 2011
  • In this paper, we propose a multi-view stereoscopic image synthesis algorithm for a 3DTV system that uses depth information together with RGB texture from a depth camera. The proposed algorithm synthesizes the multi-view images that a virtual convergence camera model could generate. Experimental results showed that the proposed algorithm outperforms conventional methods.
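Synthesizing a shifted view from one RGB+depth image is commonly done by depth-image-based rendering: each pixel moves by a disparity inversely proportional to its depth. A deliberately naive sketch with a z-test for occlusions (the 1x5 toy image, baseline, and hole handling are assumptions; the paper's convergence camera model adds per-view geometry on top of this idea):

```python
import numpy as np

def synthesize_view(color, depth, baseline_px):
    """Naive DIBR: shift pixels horizontally by disparity = baseline/depth.

    Nearer pixels shift more and win occlusions via a z-buffer;
    disoccluded pixels are left as zeros (holes) in this sketch.
    """
    h, w = depth.shape
    out = np.zeros_like(color)
    zbuf = np.full((h, w), np.inf)
    disp = np.round(baseline_px / depth).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + disp[y, x]
            if 0 <= nx < w and depth[y, x] < zbuf[y, nx]:
                zbuf[y, nx] = depth[y, x]
                out[y, nx] = color[y, x]
    return out

depth = np.full((1, 5), 4.0)
depth[0, 2] = 2.0                              # one pixel twice as close
color = ((np.arange(5) + 1) * 10.0).reshape(1, 5, 1)
out = synthesize_view(color, depth, baseline_px=4.0)
print(out[0, :, 0])   # near pixel shifted 2 px, far pixels 1 px, hole left behind
```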

Confidence Measure of Depth Map for Outdoor RGB+D Database (야외 RGB+D 데이터베이스 구축을 위한 깊이 영상 신뢰도 측정 기법)

  • Park, Jaekwang;Kim, Sunok;Sohn, Kwanghoon;Min, Dongbo
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.9
    • /
    • pp.1647-1658
    • /
    • 2016
  • RGB+D databases are widely used in object recognition, object tracking, and robot control, to name a few applications. While the rapid advance of active depth sensing technologies has enabled the widespread creation of indoor RGB+D databases, there are only a few outdoor RGB+D databases, largely due to an inherent limitation of active depth cameras. In this paper, we propose a novel method for building outdoor RGB+D databases. Instead of using active depth cameras such as Kinect or LIDAR, we acquire a pair of stereo images using a high-resolution stereo camera and obtain a depth map by applying a stereo matching algorithm. To deal with the estimation errors that inevitably exist in depth maps obtained from stereo matching, we develop an approach that estimates the confidence of depth maps based on unsupervised learning. Unlike existing confidence estimation approaches, we explicitly consider the spatial correlation that may exist in the confidence map. Specifically, we focus on refining the confidence feature under the assumption that the confidence feature and the resulting confidence map vary smoothly in the spatial domain and are highly correlated with each other. Experimental results show that the proposed method outperforms existing confidence-measure-based approaches on various benchmark datasets.
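The paper learns its confidence measure; for context, the simplest classic confidence cue it improves upon is a left-right consistency check between the two disparity maps. A minimal sketch of that baseline cue (the 1x8 disparity fields and the injected error are invented, and this is not the paper's learned method):

```python
import numpy as np

def lr_consistency(disp_left, disp_right, tol=1.0):
    """Left-right consistency mask, a classic stereo confidence cue.

    A left-image disparity d at pixel x is marked confident only if
    the right image's disparity at x - d agrees within `tol` pixels.
    """
    h, w = disp_left.shape
    xs = np.arange(w)
    conf = np.zeros((h, w), dtype=bool)      # out-of-range pixels stay False
    for y in range(h):
        xr = xs - np.round(disp_left[y]).astype(int)
        valid = (xr >= 0) & (xr < w)
        diff = np.abs(disp_left[y, valid] - disp_right[y, xr[valid]])
        conf[y, valid] = diff <= tol
    return conf

# Consistent 2 px disparity field with one mismatched left-map pixel.
dl = np.full((1, 8), 2.0)
dr = np.full((1, 8), 2.0)
dl[0, 5] = 6.0                               # an estimation error
conf = lr_consistency(dl, dr)
print(conf[0])   # pixel 5 flagged unreliable; leftmost pixels fall out of range
```

The paper's contribution is to go beyond such per-pixel cues by refining the confidence feature so that the resulting map is spatially smooth and self-consistent.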