• Title/Summary/Keyword: RGB sensor


A Study on the Augmented Reality Display for Educating Power Tiller Operator using Chroma-key (크로마키를 이용한 증강현실 영상출력 연구)

  • Kim, Yu Yong; Noh, Jae Seung; Hong, Sun Jung
    • Journal of agriculture & life science / v.51 no.1 / pp.205-212 / 2017
  • This study aimed to produce an augmented reality display using chroma-key so that a power tiller simulator can be operated smoothly while wearing a head-mounted display (HMD). The paper proposes a chroma-eliminating image filtering method. Experimental results show maximum and minimum values of 0.52 and 0.153 for hue, 0.57 and 0.16 for saturation, and 1 and 0.12 for intensity. A keypad was used to set the initial position of the augmented reality overlay, adjusted with front, back, top, bottom, left, and right buttons; this initial position value is maintained and managed across trailer attachment and detachment. Finally, the augmented reality view was produced by merging the virtual image with the acquired image of the operating device, using coordinate values obtained from the HMD and the position-tracking sensor as relative coordinates in Unity, a game development platform.
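
A minimal OpenCV sketch of the chroma-eliminating filter described above, assuming the reported HSV extremes act as keying thresholds; the function name and scaling are illustrative, not the authors' code:

```python
import cv2
import numpy as np

# Normalized HSV thresholds taken from the abstract (assumed interpretation: min/max per channel).
HSV_MIN = np.array([0.153, 0.16, 0.12])   # hue, saturation, intensity minima
HSV_MAX = np.array([0.52, 0.57, 1.00])    # hue, saturation, intensity maxima

def remove_chroma(frame_bgr):
    """Mask out pixels whose HSV values fall inside the chroma-key range."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Scale the normalized thresholds to OpenCV's 8-bit HSV ranges (H: 0-179, S/V: 0-255).
    lo = (HSV_MIN * np.array([179, 255, 255])).astype(np.uint8)
    hi = (HSV_MAX * np.array([179, 255, 255])).astype(np.uint8)
    key_mask = cv2.inRange(hsv, lo, hi)        # 255 where a pixel matches the key color
    return cv2.bitwise_and(frame_bgr, frame_bgr, mask=cv2.bitwise_not(key_mask))

# Example: filtered = remove_chroma(cv2.imread("hmd_frame.png"))
```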

Estimating vegetation index for outdoor free-range pig production using YOLO

  • Sang-Hyon Oh; Hee-Mun Park; Jin-Hyun Park
    • Journal of Animal Science and Technology / v.65 no.3 / pp.638-651 / 2023
  • The objective of this study was to quantitatively estimate the level of grazing-area damage in outdoor free-range pig production using an unmanned aerial vehicle (UAV) with an RGB image sensor. Ten corn field images were captured by the UAV over approximately two weeks, during which gestating sows were allowed to graze freely on a 100 × 50 m² corn field. The images were corrected to a bird's-eye view and then divided into 32 segments that were sequentially fed into a YOLOv4 detector to detect the corn according to its condition. Forty-three raw training images, selected randomly from the 320 segmented images, were flipped to create 86 images; these were further augmented by rotating them in 5-degree increments to yield 6,192 images, and three random color transformations applied to each image then expanded the dataset to 24,768 images. The occupancy rate of corn in the field was estimated efficiently using You Only Look Once (YOLO). Relative to the first day of observation (day 2), almost all of the corn had disappeared by day 9. When 20 sows graze a 50 × 100 m² cornfield (250 m²/sow), it appears the animals should be rotated to other grazing areas after at least five days to protect the cover crop. In agricultural technology, most machine and deep learning research concerns the detection of fruits and pests, so work in other application fields is needed. Applying deep learning also requires large-scale image data collected by domain experts as training data; when such data are insufficient, extensive data augmentation is required.
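
A hedged sketch of the augmentation pipeline described above (horizontal flip, 5-degree rotations, three random color transformations per image); the brightness-jitter used as the color transform is an assumption, not the authors' exact procedure:

```python
import random
import cv2
import numpy as np

def augment(image):
    """Expand one segmented training image into flip, rotation, and color-jitter variants."""
    flipped = [image, cv2.flip(image, 1)]                 # 43 -> 86: original + horizontal flip
    h, w = image.shape[:2]
    rotated = []
    for img in flipped:
        for angle in range(0, 360, 5):                    # 72 angles in 5-degree steps: 86 -> 6,192
            M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
            rotated.append(cv2.warpAffine(img, M, (w, h)))
    out = []
    for img in rotated:
        out.append(img)                                   # keep the rotated image itself
        for _ in range(3):                                # plus three color transforms: 6,192 -> 24,768
            jitter = img.astype(np.float32) * random.uniform(0.8, 1.2)
            out.append(np.clip(jitter, 0, 255).astype(np.uint8))
    return out
```

Per input segment this yields 2 × 72 × 4 = 576 images, matching the counts reported in the abstract (43 × 576 = 24,768).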

The evaluation of Spectral Vegetation Indices for Classification of Nutritional Deficiency in Rice Using Machine Learning Method

  • Jaekyeong Baek; Wan-Gyu Sang; Dongwon Kwon; Sungyul Chanag; Hyeojin Bak; Ho-young Ban; Jung-Il Cho
    • Proceedings of the Korean Society of Crop Science Conference / 2022.10a / pp.88-88 / 2022
  • Detecting stress responses in crops is important for diagnosing crop growth and evaluating yield. Multispectral sensors are known to be effective for evaluating stress caused by nutrient or moisture deficits, as well as by biological agents such as weeds and diseases. In this experiment, multispectral images were taken by an unmanned aerial vehicle (UAV) under field conditions. The experiment was conducted in the long-term fertilizer field at the National Institute of Crop Science, and the experimental area was divided into plots with different NPK status (Control, N-deficiency, P-deficiency, K-deficiency, Non-fertilizer). A total of 11 vegetation indices were computed from RGB and NIR reflectance values using Python. Because variations in plant nutrient content affect the amount of light reflected or absorbed in each wavelength band, the objective of this experiment was to evaluate vegetation indices derived from multispectral reflectance data as inputs to a machine learning algorithm for classifying nutritional deficiency in rice. A RandomForest model was used as a representative ensemble model, and its parameters were adjusted through hyperparameter tuning such as RandomSearchCV. As a result, training accuracy was 0.95 and test accuracy was 0.80, with IPCA, NDRE, and EVI among the top three indices by feature importance. Precision, recall, and F1-score, which measure classification performance, ranged from 0.7 to 0.9 for each class.
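
A minimal sketch of the workflow described above, using standard formulas for three illustrative indices (the paper computes 11, including IPCA) and scikit-learn's RandomizedSearchCV for the randomized hyperparameter search; the input file, column names, and parameter grid are assumptions:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, train_test_split

# Hypothetical per-plot reflectance table with an NPK-treatment label column.
df = pd.read_csv("plot_reflectance.csv")
R, G, B, NIR = df["red"], df["green"], df["blue"], df["nir"]

# Three vegetation indices computable from RGB and NIR reflectance (standard formulas).
df["NDVI"]  = (NIR - R) / (NIR + R)
df["GNDVI"] = (NIR - G) / (NIR + G)
df["EVI"]   = 2.5 * (NIR - R) / (NIR + 6 * R - 7.5 * B + 1)

X, y = df[["NDVI", "GNDVI", "EVI"]], df["treatment"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    {"n_estimators": [100, 300, 500], "max_depth": [None, 5, 10]},
    n_iter=5, cv=5, random_state=0,
)
search.fit(X_tr, y_tr)
print("train accuracy:", search.score(X_tr, y_tr))
print("test accuracy:", search.score(X_te, y_te))
print("feature importances:", search.best_estimator_.feature_importances_)
```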


A Road Luminance Measurement Application based on Android (안드로이드 기반의 도로 밝기 측정 어플리케이션 구현)

  • Choi, Young-Hwan; Kim, Hongrae; Hong, Min
    • Journal of Internet Computing and Services / v.16 no.2 / pp.49-55 / 2015
  • According to traffic accident statistics over the past five years, more accidents have occurred at night than during the day. Among the various causes, one of the major factors is inappropriate or missing street lights, which impair drivers' vision and lead to accidents. In this paper, we designed and implemented a smartphone application that measures lane luminance and stores the driver's location, driving information, and lane luminance in a database in real time, in order to identify inadequate street light facilities and areas without any street lights. The application is implemented in native C/C++ using the Android NDK, which improves execution speed over code written in Java or other languages. To measure road luminance, the input RGB image is converted to the YCbCr color space, and the Y channel gives the luminance of the road. The application detects the road lane and stores the lane luminance in the database server. It receives the road video from the smartphone camera and reduces computational cost by restricting processing to a region of interest (ROI) in each input image. The ROI is converted to grayscale, the Canny edge detector is applied to extract lane outlines, and a Hough line transform is then used to obtain a group of candidate lanes; the two lane boundaries are selected by a lane detection algorithm that uses the gradients of the candidate lines. Once both lanes are detected, a triangular area is set up whose height extends 20 pixels down from the intersection of the lanes, and the road luminance is estimated from this triangle: a Y value is computed from the R, G, and B values of each pixel in the triangle, the average Y value is scaled to a range of 0 to 100 to express road luminance, and pixel values are visualized with colors between black and green. The car's location from the smartphone's GPS sensor is stored in the database server over wireless communication every 10 minutes, after analyzing the lane video and the luminance of the road about 60 meters ahead. We expect the collected road luminance information to warn drivers about safe driving and to help improve road lighting renovation plans.
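
A rough Python/OpenCV sketch of the per-frame processing chain described above (the paper's implementation is native C/C++ via the Android NDK); the ROI bounds, Canny/Hough thresholds, and the fixed stand-in triangle are illustrative assumptions:

```python
import cv2
import numpy as np

def road_luminance(frame_bgr):
    """Estimate a 0-100 road-luminance score from one dashcam frame."""
    h, w = frame_bgr.shape[:2]
    roi = frame_bgr[h // 2 :, :]                              # restrict work to the lower half (ROI)

    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                          # lane outlines
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 50,
                            minLineLength=40, maxLineGap=10)  # candidate lane segments
    # Selecting the left/right lanes from `lines` by gradient sign and intersecting them is
    # omitted here; a fixed triangle stands in for the area 20 pixels below the intersection.
    rh, rw = gray.shape
    apex = (rw // 2, rh // 3)
    triangle = np.array([apex, (rw // 3, apex[1] + 20), (2 * rw // 3, apex[1] + 20)], np.int32)
    mask = np.zeros(gray.shape, np.uint8)
    cv2.fillPoly(mask, [triangle], 255)

    y_channel = cv2.cvtColor(roi, cv2.COLOR_BGR2YCrCb)[:, :, 0]  # Y carries the luminance
    mean_y = cv2.mean(y_channel, mask=mask)[0]
    return mean_y * 100.0 / 255.0                             # scale the average Y to 0-100
```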