• Title/Summary/Keyword: pixel classification (픽셀분류)


Analysis of Chicken Feather Color Phenotypes Classified by K-Means Clustering using Reciprocal F2 Chicken Populations (K-Means Clustering으로 분류한 닭 깃털색 표현형의 분석)

  • Park, Jongho;Heo, Seonyeong;Kim, Minjun;Cho, Eunjin;Cha, Jihye;Jin, Daehyeok;Koh, Yeong Jun;Lee, Seung-Hwan;Lee, Jun Heon
    • Korean Journal of Poultry Science / v.49 no.3 / pp.157-165 / 2022
  • Chickens are vertebrates that exhibit a wide range of colors, and these colors must be classified consistently in order to find color-related genes. In the past, color scoring was based on human visual observation, so chicken colors were not measured against precise standards. To solve this problem, a computer vision approach was used in this study. Image quantization based on k-means clustering over the RGB values of all pixels can objectively distinguish inherited colors that are expressed in various ways. This study also examined whether plumage color differences exist between the reciprocal cross lines of two breeds: the black Yeonsan Ogye (YO) and the White Leghorn (WL). Line B is a crossbred line between YO males and WL females, while Line L is the reciprocal cross between WL males and YO females. One male and ten females were selected for each F1 line, and full-sib mating was conducted to generate 883 F2 birds. The results indicate that the distribution of light and dark colors from k-means clustering converged to 7:3. Additionally, the color of Line B was lighter than that of Line L (P<0.01). This study suggests that the genes underlying plumage colors can be identified using the quantification values from the computer vision approach described here.
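The quantization step described in the abstract can be sketched as k-means over RGB pixel values. This is a minimal illustrative version; the paper's actual pipeline, choice of k, and initialization are not given in the abstract, so the seeding and parameters below are assumptions.

```python
# Sketch of pixel quantization via k-means clustering over RGB values.
# Illustrative only: k, initialization, and iteration count are assumptions.

def kmeans_quantize(pixels, k=2, iters=20):
    """Cluster RGB pixels into k color centroids; return (centroids, labels)."""
    # Seed centroids with the first k distinct pixel values
    # (assumes the image contains at least k distinct colors).
    centroids = []
    for p in pixels:
        if list(p) not in centroids:
            centroids.append(list(p))
        if len(centroids) == k:
            break

    labels = [0] * len(pixels)
    for _ in range(iters):
        # Assignment: each pixel joins the nearest centroid (squared RGB distance).
        for i, p in enumerate(pixels):
            labels[i] = min(
                range(k),
                key=lambda c: sum((p[j] - centroids[c][j]) ** 2 for j in range(3)),
            )
        # Update: each centroid moves to the mean of its assigned pixels.
        for c in range(k):
            members = [p for p, lab in zip(pixels, labels) if lab == c]
            if members:
                centroids[c] = [sum(ch) / len(members) for ch in zip(*members)]
    return centroids, labels

# A toy "feather" image: 7 light pixels and 3 dark pixels.
pixels = [(240, 235, 230)] * 7 + [(20, 15, 10)] * 3
centroids, labels = kmeans_quantize(pixels, k=2)
light_cluster = max(range(2), key=lambda c: sum(centroids[c]))
light_ratio = labels.count(light_cluster) / len(labels)  # 0.7
```

With k=2, the light-to-dark pixel ratio falls directly out of the label counts, which is one way a 7:3 light-to-dark distribution could be quantified objectively.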

A study on the application of the agricultural reservoir water level recognition model using CCTV image data (농업용 저수지 CCTV 영상자료 기반 수위 인식 모델 적용성 검토)

  • Kwon, Soon Ho;Ha, Changyong;Lee, Seungyub
    • Journal of Korea Water Resources Association / v.56 no.4 / pp.245-259 / 2023
  • Agricultural reservoirs are a critical water supply system in South Korea, providing approximately 60% of the agricultural water demand. However, they face several issues that jeopardize their efficient operation and management. To address these issues, we propose a novel deep-learning-based water level recognition model that uses CCTV image data to accurately estimate water levels in agricultural reservoirs. The model consists of three main parts: (1) dataset construction, (2) image segmentation using the U-Net algorithm, and (3) CCTV-based water level recognition using either a CNN or ResNet. The model was applied to two reservoirs (G-reservoir and M-reservoir) with observed CCTV images and water level time series data. The results show that the image segmentation model performs well, while the accuracy of the water level recognition model varies from 50 to 80% depending on the water level classification criteria (i.e., the classification guideline) and the complexity of the image data (i.e., the variability of the image pixels). The performance of the model can be improved if more data are collected.
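The final stage of the pipeline above maps a segmented CCTV frame to a discrete water-level class. The paper does this with a CNN or ResNet; as a minimal sketch, a simple threshold rule on the segmented water-pixel fraction stands in here, and the class boundaries are hypothetical placeholders for the paper's classification guideline.

```python
def water_fraction(mask):
    """Fraction of pixels labeled as water in a binary segmentation mask."""
    flat = [px for row in mask for px in row]
    return sum(flat) / len(flat)

def level_class(fraction, boundaries=(0.2, 0.4, 0.6, 0.8)):
    """Map a water-pixel fraction to a discrete level class.
    The boundaries are hypothetical, standing in for the paper's
    classification guideline (the abstract does not specify it)."""
    for cls, upper in enumerate(boundaries):
        if fraction <= upper:
            return cls
    return len(boundaries)

# A toy 4x4 mask from the segmentation stage: 1 = water, 0 = background.
mask = [
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [1, 1, 1, 0],
    [1, 1, 1, 1],
]
cls = level_class(water_fraction(mask))  # 7/16 ≈ 0.44 -> class 2
```

The sensitivity to the chosen boundaries mirrors the abstract's observation that accuracy varies with the classification criteria.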

Artificial Intelligence-Based Detection of Smoke Plume and Yellow Dust from GEMS Images (인공지능 기반의 GEMS 산불연기 및 황사 탐지)

  • Yemin Jeong;Youjeong Youn;Seoyeon Kim;Jonggu Kang;Soyeon Choi;Yungyo Im;Youngmin Seo;Jeong-Ah Yu;Kyoung-Hee Sung;Sang-Min Kim;Yangwon Lee
    • Korean Journal of Remote Sensing / v.39 no.5_2 / pp.859-873 / 2023
  • Wildfires cause substantial environmental and economic damage, and their harmful effects have been examined in various studies. Studies detecting wildfires and pollutant emissions with satellite remote sensing have also been conducted for many years. However, a wildfire product for the Geostationary Environment Monitoring Spectrometer (GEMS), Korea's first environmental satellite sensor, has not yet been provided. In this study, a false-color composite that better expresses wildfire smoke was created from GEMS imagery and used in a U-Net model for wildfire detection. A classification model was then constructed to distinguish yellow dust from the wildfire smoke candidate pixels. The proposed method can contribute to disaster monitoring using GEMS images.
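A false-color composite simply assigns three chosen spectral channels to the R, G, and B slots after a per-band contrast stretch. A minimal sketch follows; the abstract does not say which GEMS bands were combined, so the band values here are hypothetical.

```python
def scale_band(band):
    """Linearly stretch one band's values to the 0-255 display range."""
    lo, hi = min(band), max(band)
    if hi == lo:
        return [0] * len(band)  # flat band: nothing to stretch
    return [round(255 * (v - lo) / (hi - lo)) for v in band]

def false_color(band_r, band_g, band_b):
    """Stack three scaled bands into per-pixel (R, G, B) tuples."""
    return list(zip(scale_band(band_r), scale_band(band_g), scale_band(band_b)))

# Toy 3-pixel scene; which physical bands map to R, G, B is an assumption.
rgb = false_color([0.1, 0.5, 0.9], [10, 20, 30], [0.0, 0.0, 1.0])
```

The point of choosing the band-to-channel mapping is to make smoke plumes stand out in color against cloud and surface, which is what the composite feeds into the U-Net.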

Evaluation of Oil Spill Detection Models by Oil Spill Distribution Characteristics and CNN Architectures Using Sentinel-1 SAR data (Sentinel-1 SAR 영상을 활용한 유류 분포특성과 CNN 구조에 따른 유류오염 탐지모델 성능 평가)

  • Park, Soyeon;Ahn, Myoung-Hwan;Li, Chenglei;Kim, Junwoo;Jeon, Hyungyun;Kim, Duk-jin
    • Korean Journal of Remote Sensing / v.37 no.5_3 / pp.1475-1490 / 2021
  • Detecting oil spill areas using the statistical characteristics of SAR images has limitations: the classification algorithms are complicated and greatly affected by outliers. To overcome these limitations, studies using neural networks to classify oil spills have recently been investigated. However, few studies have evaluated whether a model delivers consistent detection performance across various oil spill cases. Therefore, in this study, two CNNs (Convolutional Neural Networks) with basic structures (a Simple CNN and U-net) were used to determine whether detection performance differs with the structure of the CNN and the distribution characteristics of the oil spill. Using the proposed method, the Simple CNN, which has a contracting path only, detected oil spills with an F1 score of 86.24%, while U-net, which has both contracting and expansive paths, achieved an F1 score of 91.44%. Both models successfully detected oil spills, but the detection performance of U-net was higher than that of the Simple CNN. Additionally, to compare the accuracy of the models across oil spill cases, the cases were divided into four categories according to the spatial distribution characteristics of the oil spill (presence of land near the spill area) and the clarity of the border between oil and seawater. The Simple CNN had F1 scores of 85.71%, 87.43%, 86.50%, and 85.86% for the four categories, a maximum difference of 1.71%; for U-net, the values were 89.77%, 92.27%, 92.59%, and 92.66%, a maximum difference of 2.90%. These results indicate that neither model showed significant differences in detection performance by oil spill distribution characteristics. However, detection tendencies did differ with model structure and spill distribution: in all four categories, the Simple CNN tended to overestimate the oil spill area and U-net tended to underestimate it, and these tendencies were more pronounced when the border between oil and seawater was unclear.
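The F1 scores compared above combine pixel-wise precision and recall. A minimal sketch of the metric for binary oil-spill masks, with a toy example showing how the Simple CNN's overestimation tendency shows up as lower precision at full recall:

```python
def f1_score(pred, truth):
    """Pixel-wise F1 for binary masks (1 = oil, 0 = seawater/land)."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# An overestimating model flags extra pixels as oil:
# every true oil pixel is found, but two false alarms drag precision down.
truth = [1, 1, 1, 0, 0, 0, 0, 0]
over = [1, 1, 1, 1, 1, 0, 0, 0]
f1 = f1_score(over, truth)  # precision 0.6, recall 1.0 -> F1 = 0.75
```

The data and masks here are toy values for illustration, not the study's Sentinel-1 results.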

Estimation of the Lodging Area in Rice Using Deep Learning (딥러닝을 이용한 벼 도복 면적 추정)

  • Ban, Ho-Young;Baek, Jae-Kyeong;Sang, Wan-Gyu;Kim, Jun-Hwan;Seo, Myung-Chul
    • Korean Journal of Crop Science / v.66 no.2 / pp.105-111 / 2021
  • Rice lodging occurs annually, caused by typhoons accompanied by strong winds and heavy rainfall, and results in damage such as pre-harvest sprouting during the ripening period. Rapid estimation of the lodged area is therefore necessary for timely responses to the damage. To this end, we obtained rice lodging images with a drone in Gimje, Buan, and Gunsan and converted them into 128 × 128 pixel images. A convolutional neural network (CNN), a deep learning model, was trained on these images to predict rice lodging, classified into two types (lodging and non-lodging); the images were divided into a training set and a validation set at an 8:2 ratio. The CNN model was trained using three optimizers (Adam, RMSprop, and SGD). The lodged area was then evaluated for three fields using data excluded from the training and validation sets. The drone images were combined into composite images of the entire fields using Metashape, and these composites were divided into 128 × 128 pixel tiles. Lodging in the divided images was predicted with the trained CNN model, and the extent of lodging was calculated by multiplying the ratio of lodging images to the total number of field images by the area of the entire field. Accuracy on the training and validation sets increased as learning progressed and eventually exceeded 0.919. The results for the three fields showed high accuracy for all optimizers, among which Adam was the most accurate (normalized root mean square error: 2.73%). These findings suggest that the lodged rice area can be rapidly predicted using deep learning.
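The area calculation described above reduces to a tile count: the fraction of 128 × 128 tiles the CNN predicts as lodging, multiplied by the field area. A minimal sketch with hypothetical numbers (the actual tile counts and field sizes are not given in the abstract):

```python
def lodged_area(tile_predictions, field_area_m2):
    """Estimate lodged area from per-tile CNN predictions (1 = lodging).

    The lodging fraction over all tiles of the field composite is
    multiplied by the total field area, as described in the study.
    """
    fraction = sum(tile_predictions) / len(tile_predictions)
    return fraction * field_area_m2

# Hypothetical field: 400 tiles, 120 predicted as lodging, 1 ha area.
area = lodged_area([1] * 120 + [0] * 280, 10_000)  # 3000.0 m^2
```

Because each tile contributes equally, the estimate's resolution is one tile's ground footprint, which depends on the drone imagery's ground sampling distance.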