• Title/Summary/Keyword: 픽셀기반 (pixel-based)

Deep Learning Approaches for Accurate Weed Area Assessment in Maize Fields (딥러닝 기반 옥수수 포장의 잡초 면적 평가)

  • Hyeok-jin Bak;Dongwon Kwon;Wan-Gyu Sang;Ho-young Ban;Sungyul Chang;Jae-Kyeong Baek;Yun-Ho Lee;Woo-jin Im;Myung-chul Seo;Jung-Il Cho
    • Korean Journal of Agricultural and Forest Meteorology / v.25 no.1 / pp.17-27 / 2023
  • Weeds are one of the factors that reduce crop yield through nutrient and photosynthetic competition. Quantification of weed density is an important part of making accurate decisions for precision weeding. In this study, we tried to quantify the density of weeds in images of maize fields taken by an unmanned aerial vehicle (UAV). UAV image data collection took place in maize fields from May 17 to June 4, 2021, when maize was in its early growth stage. UAV images were labeled into maize pixels and non-maize pixels and then cropped for use as input data for the semantic segmentation network of the maize detection model. We trained models to separate maize from background using the deep learning segmentation networks DeepLabV3+, U-Net, Linknet, and FPN. All four models showed a pixel accuracy of 0.97, and the mIoU scores were 0.76 and 0.74 for DeepLabV3+ and U-Net, higher than the 0.69 of Linknet and FPN. Weed density was calculated as the difference between the green area classified by ExGR (excess green minus excess red) and the maize area predicted by the model. Each image evaluated for weed density was recombined to quantify and visualize the distribution and density of weeds across a wide range of maize fields. We propose a method to quantify weed density for accurate weeding by effectively separating weeds, maize, and background in UAV images of maize fields.
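The weed-density step described above subtracts the model's maize mask from a color-index vegetation mask. A minimal sketch of that idea follows, using the standard excess-green/excess-red definitions (ExG = 2g - r - b, ExR = 1.4r - g, ExGR = ExG - ExR); the function name, the threshold of 0, and the array shapes are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def weed_area_fraction(rgb, maize_mask, exgr_threshold=0.0):
    """Estimate weed cover as ExGR-classified green pixels minus predicted maize pixels.

    rgb        : H x W x 3 float array with values in [0, 1]
    maize_mask : H x W boolean array from the segmentation model (True = maize)
    """
    # Normalize channels so r + g + b = 1 per pixel (chromatic coordinates).
    total = rgb.sum(axis=2) + 1e-8
    r, g, b = (rgb[..., i] / total for i in range(3))

    # Excess green, excess red, and their difference (ExGR).
    exg = 2.0 * g - r - b
    exr = 1.4 * r - g
    exgr = exg - exr

    green_mask = exgr > exgr_threshold      # all vegetation (maize + weeds)
    weed_mask = green_mask & ~maize_mask    # vegetation not predicted as maize
    return weed_mask.mean()                 # weed pixels / total pixels

# Example with random placeholders standing in for an image tile and model output.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tile = rng.random((256, 256, 3))
    maize = rng.random((256, 256)) > 0.8
    print(f"weed cover fraction: {weed_area_fraction(tile, maize):.3f}")
```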

Detection of Wildfire Burned Areas in California Using Deep Learning and Landsat 8 Images (딥러닝과 Landsat 8 영상을 이용한 캘리포니아 산불 피해지 탐지)

  • Youngmin Seo;Youjeong Youn;Seoyeon Kim;Jonggu Kang;Yemin Jeong;Soyeon Choi;Yungyo Im;Yangwon Lee
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1413-1425 / 2023
  • The increasing frequency of wildfires due to climate change is causing extreme loss of life and property. Wildfires cause loss of vegetation and drive ecosystem changes depending on their intensity and occurrence; ecosystem changes, in turn, affect wildfire occurrence, causing secondary damage. Thus, accurate estimation of the areas affected by wildfires is fundamental. Satellite remote sensing is used for forest fire detection because it can rapidly acquire topographic and meteorological information about the affected area after forest fires. In addition, deep learning algorithms such as convolutional neural networks (CNN) and transformer models show high performance for more accurate monitoring of fire-burnt regions. To date, the application of deep learning models has been limited, and there is a scarcity of reports providing quantitative performance evaluations for practical field utilization. Hence, this study emphasizes a comparative analysis, exploring performance enhancements achieved through both model selection and data design. This study examined deep learning models for detecting wildfire-damaged areas using Landsat 8 satellite images in California. We also conducted a comprehensive comparison and analysis of the detection performance of multiple models, such as U-Net and High-Resolution Network-Object Contextual Representation (HRNet-OCR). Wildfire-related spectral indices such as the normalized difference vegetation index (NDVI) and the normalized burn ratio (NBR) were used as input channels for the deep learning models to reflect the degree of vegetation cover and surface moisture content. As a result, the mean intersection over union (mIoU) was 0.831 for U-Net and 0.848 for HRNet-OCR, showing high segmentation performance. Including the spectral indices alongside the base wavelength bands increased the metric values for all combinations, confirming that augmenting the input data with spectral indices contributes to more refined pixel-level classification. This study can be applied to other satellite images to build a recovery strategy for fire-burnt areas.
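As a rough illustration of the input-channel design described above, the sketch below computes NDVI and NBR from Landsat 8 reflectance bands and stacks them with the raw bands. The specific band set in build_input and the function names are assumptions; the abstract does not list the exact base bands used.

```python
import numpy as np

def spectral_indices(nir, red, swir2):
    """Compute NDVI and NBR from Landsat 8 surface reflectance bands.

    nir   : band 5 reflectance (0.85-0.88 um)
    red   : band 4 reflectance (0.64-0.67 um)
    swir2 : band 7 reflectance (2.11-2.29 um)
    """
    eps = 1e-8  # guard against division by zero
    ndvi = (nir - red) / (nir + red + eps)      # vegetation vigor
    nbr = (nir - swir2) / (nir + swir2 + eps)   # burn severity proxy
    return ndvi, nbr

def build_input(bands):
    """Stack spectral indices as extra channels alongside the raw bands.

    bands: dict of H x W reflectance arrays keyed by Landsat 8 band name.
    """
    ndvi, nbr = spectral_indices(bands["B5"], bands["B4"], bands["B7"])
    return np.stack([bands["B2"], bands["B3"], bands["B4"],
                     bands["B5"], bands["B6"], bands["B7"],
                     ndvi, nbr], axis=-1)
```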

A Study on the Observation of Soil Moisture Conditions and its Applied Possibility in Agriculture Using Land Surface Temperature and NDVI from Landsat-8 OLI/TIRS Satellite Image (Landsat-8 OLI/TIRS 위성영상의 지표온도와 식생지수를 이용한 토양의 수분 상태 관측 및 농업분야에의 응용 가능성 연구)

  • Chae, Sung-Ho;Park, Sung-Hwan;Lee, Moung-Jin
    • Korean Journal of Remote Sensing / v.33 no.6_1 / pp.931-946 / 2017
  • The purpose of this study is to observe and analyze soil moisture conditions at high resolution and to evaluate the feasibility of applying them to agriculture. For this purpose, we used three Landsat-8 OLI (Operational Land Imager)/TIRS (Thermal Infrared Sensor) optical and thermal infrared satellite images taken from May to June in 2015, 2016, and 2017, including the rural areas of Jeollabuk-do, where 46% of the agricultural areas are located. The soil moisture conditions on each date in the study area can be effectively obtained through the SPI3 (3-month Standardized Precipitation Index) drought index, and the three images correspond to near normal, moderately wet, and moderately dry soil moisture conditions, respectively. The temperature vegetation dryness index (TVDI) was calculated to observe the soil moisture status from the Landsat-8 OLI/TIRS images with different soil moisture conditions and to compare it with the soil moisture conditions obtained from the SPI3 drought index. TVDI is estimated from the relationship between LST (Land Surface Temperature) and NDVI (Normalized Difference Vegetation Index) calculated from the Landsat-8 OLI/TIRS satellite images. The maximum and minimum values of LST according to NDVI are extracted from the distribution of pixels in the LST-NDVI feature space, and the dry and wet edges of LST as functions of NDVI are determined by linear regression analysis. The TVDI value is obtained as the ratio of the LST value between the two edges. We classified the relative soil moisture conditions derived from the TVDI values into five stages (very wet, wet, normal, dry, and very dry) and compared them to the soil moisture conditions obtained from SPI3. Because May to June is the rice-planting season, 62% of the whole image was classified as wet or very wet owing to the paddy field areas, which occupy the largest proportion of the image; the pixels classified as normal were attributed to the influence of the field areas in the image. The TVDI classification results for the whole image roughly corresponded to the SPI3 soil moisture conditions, but they did not correspond in the finer classes of very dry, wet, and very wet. In addition, after extracting and classifying the agricultural areas into paddy fields and fields, the paddy field areas did not correspond to the SPI3 drought index in the very dry, normal, and very wet classes, and the field areas did not correspond to the SPI3 drought index in the normal class. This is considered to be a problem in dry/wet edge estimation caused by outliers such as extremely dry bare soil and very wet paddy field areas, water, cloud, and mountain topography effects (shadow). However, in the agricultural areas, especially the field areas, the soil moisture conditions from May to June could be effectively observed at a finer level of classification. It is expected that this method can be applied to observe the temporal and spatial changes of soil moisture status in agricultural areas using high-spatial-resolution optical satellites and to forecast agricultural production.
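The TVDI computation summarized above can be sketched as follows: per-NDVI-bin maximum and minimum LST values define the dry and wet edges by linear regression, and TVDI is the position of each pixel's LST between the two edges. The bin count, the minimum pixel count per bin, and the omission of outlier screening (water, cloud, shadow) are simplifying assumptions.

```python
import numpy as np

def fit_edges(ndvi, lst, n_bins=50):
    """Fit dry (max LST) and wet (min LST) edges as linear functions of NDVI."""
    bins = np.linspace(np.nanmin(ndvi), np.nanmax(ndvi), n_bins + 1)
    centers, lst_max, lst_min = [], [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        sel = (ndvi >= lo) & (ndvi < hi) & np.isfinite(lst)
        if sel.sum() < 10:          # skip sparsely populated NDVI bins
            continue
        centers.append(0.5 * (lo + hi))
        lst_max.append(np.nanmax(lst[sel]))
        lst_min.append(np.nanmin(lst[sel]))
    dry = np.polyfit(centers, lst_max, 1)   # slope, intercept of the dry edge
    wet = np.polyfit(centers, lst_min, 1)   # slope, intercept of the wet edge
    return dry, wet

def tvdi(ndvi, lst, dry, wet):
    """TVDI = (LST - LST_wet) / (LST_dry - LST_wet), evaluated per pixel."""
    lst_dry = np.polyval(dry, ndvi)
    lst_wet = np.polyval(wet, ndvi)
    return np.clip((lst - lst_wet) / (lst_dry - lst_wet + 1e-8), 0.0, 1.0)
```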

Parallel Processing of Satellite Images using CUDA Library: Focused on NDVI Calculation (CUDA 라이브러리를 이용한 위성영상 병렬처리 : NDVI 연산을 중심으로)

  • LEE, Kang-Hun;JO, Myung-Hee;LEE, Won-Hee
    • Journal of the Korean Association of Geographic Information Studies / v.19 no.3 / pp.29-42 / 2016
  • Remote sensing allows the acquisition of information over a large area without physical contact with objects, and it has therefore developed rapidly through application to many different fields. With this development of remote sensing, satellite image resolution has also advanced rapidly, and satellites that use remote sensing have been applied in research across many areas of the world. However, while remote sensing research is being carried out in various fields, research on data processing remains insufficient; that is, as satellite resources are further developed, data processing continues to lag behind. Accordingly, this paper discusses ways to maximize the performance of satellite image processing by utilizing the CUDA (Compute Unified Device Architecture) library of NVIDIA, a parallel processing technique. The discussion in this paper proceeds as follows. First, standard KOMPSAT (Korea Multi-Purpose Satellite) images of various sizes are subdivided into five types. NDVI (Normalized Difference Vegetation Index) is then computed on the subdivided images using ArcMap and two implementations, one CPU-based and one GPU-based. The histograms of each output image are compared to verify the implementations, and the processing speeds of the CPU and GPU versions are analyzed. The results indicate that both the CPU-version and GPU-version images are identical to the ArcMap images, and the histogram comparison confirms that the NDVI code was correctly implemented. In terms of processing speed, the GPU version was five times faster than the CPU version. Accordingly, this research shows that a parallel processing technique using the CUDA library can enhance the processing speed of satellite images, and that this speed-up should benefit more advanced remote sensing techniques even more than a simple pixel computation like NDVI.
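To illustrate why NDVI parallelizes so well on a GPU, here is a minimal per-pixel kernel sketch. It uses Numba's CUDA backend rather than the C-level CUDA library presumably used in the study, and the block size and helper names are assumptions.

```python
import numpy as np
from numba import cuda

@cuda.jit
def ndvi_kernel(nir, red, out):
    i = cuda.grid(1)                      # one GPU thread per pixel
    if i < out.shape[0]:
        denom = nir[i] + red[i]
        out[i] = (nir[i] - red[i]) / denom if denom != 0.0 else 0.0

def ndvi_gpu(nir, red, threads_per_block=256):
    """Compute NDVI on the GPU for two co-registered band arrays."""
    nir_flat = np.ascontiguousarray(nir, dtype=np.float32).ravel()
    red_flat = np.ascontiguousarray(red, dtype=np.float32).ravel()
    out = cuda.device_array(nir_flat.shape, dtype=np.float32)
    blocks = (nir_flat.size + threads_per_block - 1) // threads_per_block
    ndvi_kernel[blocks, threads_per_block](cuda.to_device(nir_flat),
                                           cuda.to_device(red_flat), out)
    return out.copy_to_host().reshape(nir.shape)
```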

Evaluation of satellite-based evapotranspiration and soil moisture data applicability in Jeju Island (제주도에서의 위성기반 증발산량 및 토양수분 적용성 평가)

  • Jeon, Hyunho;Cho, Sungkeun;Chung, Il-Moon;Choi, Minha
    • Journal of Korea Water Resources Association / v.54 no.10 / pp.835-848 / 2021
  • In Jeju Island, which has peculiar geological features and a distinctive hydrological system, analysis of hydrological factors is necessary for effective water management. Because in-situ hydro-meteorological data are affected by the surrounding environment, an in-situ dataset may not be spatially representative of the study area. For this reason, remote sensing data may be used to overcome the limitations of in-situ data. In this study, the applicability of MOD16 evapotranspiration data, Global Land Data Assimilation System (GLDAS) based evapotranspiration and soil moisture data, and the Advanced SCATterometer (ASCAT) soil moisture product, all of which have been evaluated in other study areas, was assessed for Jeju Island. For evapotranspiration, comparisons with total precipitation and flux-tower based evapotranspiration were conducted; for soil moisture, in-situ data from six sites and the ASCAT soil moisture product were compared at each site. As a result, 57% of annual precipitation was calculated as evapotranspiration, and the correlation coefficient between MOD16 evapotranspiration and GLDAS evapotranspiration was 0.759, which was a robust value; in the other comparison, the correlation coefficient was 0.434, indicating a relatively low fit. For soil moisture, the GLDAS data showed RMSE values of less than 0.05 at all sites compared with the in-situ data, and the significance test of the correlation coefficients gave statistically significant results. However, for the satellite (ASCAT) data, an RMSE greater than 0.05 was found at Wolgak, and there was no correlation at the Sehwa and Handong sites. These results are judged to be due to insufficient quality control and limited spatial representativeness of the evapotranspiration and soil moisture sensors installed on Jeju Island, and to errors that appear at sites adjacent to the coast. This study emphasizes the necessity of improving the existing ground observations of hydro-meteorological factors.
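The point comparisons described above reduce to a correlation coefficient and an RMSE between each satellite product and the corresponding in-situ series. A minimal sketch follows; the function name is illustrative and the significance test of the correlation coefficient is omitted.

```python
import numpy as np

def agreement_metrics(satellite, in_situ):
    """Pearson correlation and RMSE between a satellite product and an in-situ series."""
    sat = np.asarray(satellite, dtype=float)
    obs = np.asarray(in_situ, dtype=float)
    valid = np.isfinite(sat) & np.isfinite(obs)   # drop gaps in either series
    sat, obs = sat[valid], obs[valid]
    r = np.corrcoef(sat, obs)[0, 1]               # correlation coefficient
    rmse = np.sqrt(np.mean((sat - obs) ** 2))     # root mean square error
    return r, rmse
```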

Retrieval of Sulfur Dioxide Column Density from TROPOMI Using the Principle Component Analysis Method (주성분분석방법을 이용한 TROPOMI로부터 이산화황 칼럼농도 산출 연구)

  • Yang, Jiwon;Choi, Wonei;Park, Junsung;Kim, Daewon;Kang, Hyeongwoo;Lee, Hanlim
    • Korean Journal of Remote Sensing / v.35 no.6_3 / pp.1173-1185 / 2019
  • We, for the first time, retrieved sulfur dioxide (SO2) vertical column density (VCD) in industrial and volcanic areas from the TROPOspheric Monitoring Instrument (TROPOMI) using the principal component analysis (PCA) algorithm. Furthermore, SO2 VCDs retrieved by the PCA algorithm from TROPOMI raw data were compared with those retrieved by the Differential Optical Absorption Spectroscopy (DOAS) algorithm (the TROPOMI Level 2 SO2 product). In East Asia, where large amounts of SO2 are released near the surface from anthropogenic sources such as fossil fuel combustion, the mean SO2 VCD retrieved by the PCA (DOAS) algorithm was 0.05 DU (-0.02 DU). The correlation between SO2 VCDs retrieved by the PCA algorithm and those retrieved by the DOAS algorithm was low (slope = 0.64; correlation coefficient (R) = 0.51) under cloudy conditions. However, with cloud fractions of less than 0.5, the slope and correlation coefficient between the two outputs increased to 0.68 and 0.61, respectively. This means that the sensitivity of the SO2 retrieval to the surface is reduced in both algorithms when the cloud fraction is high. Furthermore, the correlation between volcanic SO2 VCDs retrieved by the PCA algorithm and those retrieved by the DOAS algorithm was high (R = 0.90) under cloudy conditions. This good agreement between the two data sets for volcanic SO2 is thought to be due to the higher accuracy of satellite-based SO2 VCD retrieval for SO2 that is mainly distributed in the upper troposphere or lower stratosphere in volcanic regions.
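For orientation only, the sketch below caricatures a PCA-type trace-gas fit: leading principal components of the measured spectra absorb everything except the target absorber, and the SO2 column is proportional to the coefficient of its absorption cross-section in a joint least-squares fit. The matrix shapes, the use of log radiance, and the number of components are assumptions; an operational algorithm additionally requires Jacobians, wavelength selection, and quality screening.

```python
import numpy as np

def pca_column_fit(log_radiance, so2_cross_section, n_components=8):
    """Very simplified PCA-type fit for an SO2 column proxy.

    log_radiance      : (n_spectra, n_wavelengths) measured log radiances
    so2_cross_section : (n_wavelengths,) SO2 absorption cross-section
    """
    mean_spec = log_radiance.mean(axis=0)
    anomalies = log_radiance - mean_spec
    # Leading principal components of the measured spectra (background variability).
    _, _, vt = np.linalg.svd(anomalies, full_matrices=False)
    pcs = vt[:n_components]                       # (n_components, n_wavelengths)
    # Fit each spectrum as PCs plus a scaled SO2 cross-section; the scale is the column proxy.
    design = np.vstack([pcs, so2_cross_section]).T
    coeffs, *_ = np.linalg.lstsq(design, anomalies.T, rcond=None)
    return coeffs[-1]                             # one SO2 column proxy per spectrum
```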

Evaluation of Magnetization Transfer Ratio Imaging by Phase Sensitive Method in Knee Joint (슬관절 부위에서 자화전이 위상감도법에 의한 자화전이율 영상 평가)

  • Yoon, Moon-Hyun;Seung, Mi-Sook;Choe, Bo-Young
    • Progress in Medical Physics / v.19 no.4 / pp.269-275 / 2008
  • Although MR imaging is generally applicable to depicting knee joint deterioration, common knee joint diseases are sometimes mis-read and mis-diagnosed. In this study, we employed the magnetization transfer ratio (MTR) method to improve the diagnosis of various knee joint diseases. Spin-echo (SE) T2-weighted images (TR/TE 3,400-3,500/90-100 ms) were obtained in seven cases of knee joint deterioration; FSE T2-weighted images (TR/TE 4,500-5,000/100-108 ms) were obtained in seven cases; gradient-echo (GRE) T2-weighted images (TR/TE 9/4.56 ms, 50° flip angle, NEX 1) were obtained in three cases; and in six cases, fat suppression was performed using a T2-weighted short-TI inversion recovery (STIR) sequence (TR/TE = 2,894-3,215 ms/70 ms, NEX 3, ETL 9). MTR was calculated for individual pixels after registration of the unsaturated and saturated images. After processing, the MTR images were displayed in gray scale. To improve diagnosis, three-dimensional isotropic volume images, MR tristimulus color mapping, and the MTR map were employed. The MTR images showed diagnostic image quality sufficient to assess the patients' pathologies. The intensity difference between the MTR images and conventional MRI was seen on the color bar. The profile graph of the MTR imaging effect provided a quantitative measure of the relative decrease in signal intensity due to the MT pulse; to diagnose the pathologies of the knee joint, the profile graph data were shown on the image as a small cross. The present study indicated that MTR imaging of the knee joint is feasible. Investigating the physical changes underlying MTR imaging can provide more insight into its physical and technical basis. MTR images could be useful for rapid assessment of diseases that show unambiguous contrast in MT images of patients with knee disorders.
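The MTR map itself follows the standard definition MTR = (M0 - Msat) / M0 × 100%, computed pixel by pixel on the co-registered image pair. A minimal sketch, with an assumed function name and an illustrative clipping step:

```python
import numpy as np

def mtr_map(unsaturated, saturated, eps=1e-6):
    """Pixel-wise magnetization transfer ratio, expressed as a percentage.

    unsaturated : image acquired without the MT saturation pulse (M0)
    saturated   : co-registered image acquired with the MT pulse (Msat)
    """
    m0 = np.asarray(unsaturated, dtype=float)
    msat = np.asarray(saturated, dtype=float)
    mtr = 100.0 * (m0 - msat) / (m0 + eps)   # MTR = (M0 - Msat) / M0 * 100%
    return np.clip(mtr, 0.0, 100.0)          # clamp noise-driven outliers
```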

Estimation of river discharge using satellite-derived flow signals and artificial neural network model: application to imjin river (Satellite-derived flow 시그널 및 인공신경망 모형을 활용한 임진강 유역 유출량 산정)

  • Li, Li;Kim, Hyunglok;Jun, Kyungsoo;Choi, Minha
    • Journal of Korea Water Resources Association / v.49 no.7 / pp.589-597 / 2016
  • In this study, we investigated the use of satellite-derived flow (SDF) signals and a data-based model for the estimation of outflow in river reaches where in-situ measurements are either completely unavailable or difficult to access for hydraulic and hydrological analysis, such as the upper basin of the Imjin River. Many studies have demonstrated that SDF signals can be used as estimates of river width and that the correlation between SDF signals and river width is related to the shape of the cross section. To extract the nonlinear relationship between SDF signals and river outflow, an Artificial Neural Network (ANN) model with SDF signals as its inputs was applied to compute the flow discharge at Imjin Bridge on the Imjin River. Fifteen pixels were considered for extracting SDF signals, and the Partial Mutual Information (PMI) algorithm was applied to identify the most relevant input variables among 150 candidate SDF signals (including 0-10 day lagged observations). The discharges estimated by the ANN model were compared with those measured at the Imjin Bridge gauging station, and the correlation coefficients for training and validation were 0.86 and 0.72, respectively. It was found that if the discharge at Imjin Bridge from one day earlier is included as an input variable for the ANN model, the correlation coefficients improve to 0.90 and 0.83, respectively. Based on the results of this study, SDF signals along with some locally measured data can play a useful role in river flow estimation, and especially in flood forecasting for data-scarce regions, as the model can simulate the peak discharge and peak time of flood events with satisfactory accuracy.
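As a hedged sketch of the data-driven step, the code below fits a small multilayer perceptron mapping selected SDF signals to daily discharge. The use of scikit-learn, the network size, and the synthetic placeholder data are assumptions; the PMI-based input selection and the lagged-discharge input are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_discharge_model(X, y):
    """X: (n_days, n_selected_signals) SDF inputs; y: (n_days,) observed discharge."""
    model = make_pipeline(
        StandardScaler(),                              # scale heterogeneous inputs
        MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0),
    )
    model.fit(X, y)
    return model

# Example with synthetic placeholders for the selected signals and discharge.
rng = np.random.default_rng(0)
X = rng.random((300, 5))
y = X @ np.array([3.0, 1.5, 0.5, 2.0, 1.0]) + 0.1 * rng.standard_normal(300)
model = train_discharge_model(X[:240], y[:240])
r = np.corrcoef(model.predict(X[240:]), y[240:])[0, 1]
print(f"validation correlation: {r:.2f}")
```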

Evaluation of Oil Spill Detection Models by Oil Spill Distribution Characteristics and CNN Architectures Using Sentinel-1 SAR data (Sentinel-1 SAR 영상을 활용한 유류 분포특성과 CNN 구조에 따른 유류오염 탐지모델 성능 평가)

  • Park, Soyeon;Ahn, Myoung-Hwan;Li, Chenglei;Kim, Junwoo;Jeon, Hyungyun;Kim, Duk-jin
    • Korean Journal of Remote Sensing / v.37 no.5_3 / pp.1475-1490 / 2021
  • Detecting oil spill areas using the statistical characteristics of SAR images has limitations in that the classification algorithms are complicated and greatly affected by outliers. To overcome these limitations, studies using neural networks to classify oil spills have recently been conducted. However, studies evaluating whether such models show consistent detection performance across various oil spill cases have been insufficient. Therefore, in this study, two CNNs (Convolutional Neural Networks) with basic structures (Simple CNN and U-Net) were used to determine whether detection performance differs according to the structure of the CNN and the distribution characteristics of the oil spill. As a result, with the method proposed in this study, the Simple CNN, which has a contracting path only, detected oil spills with an F1 score of 86.24%, and the U-Net, which has both contracting and expansive paths, showed an F1 score of 91.44%. Both models successfully detected oil spills, but the detection performance of the U-Net was higher than that of the Simple CNN. Additionally, to compare the accuracy of the models across various oil spill cases, the cases were classified into four categories according to the spatial distribution characteristics of the oil spill (the presence of land near the oil spill area) and the clarity of the border between oil and seawater. The Simple CNN had F1 scores of 85.71%, 87.43%, 86.50%, and 85.86% for the four categories, a maximum difference of 1.71%. For the U-Net, the values were 89.77%, 92.27%, 92.59%, and 92.66%, a maximum difference of 2.90%. These results indicate that neither model showed significant differences in detection performance according to the characteristics of the oil spill distribution. However, differences in detection tendency were caused by differences in model structure and oil spill distribution characteristics: in all four categories, the Simple CNN tended to overestimate the oil spill area, whereas the U-Net tended to underestimate it, and these tendencies were more pronounced when the border between oil and seawater was unclear.
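The F1 scores quoted above follow the usual definition for binary masks, where false positives correspond to overestimation of the oil area and false negatives to underestimation. A minimal sketch with an assumed function name:

```python
import numpy as np

def f1_score_mask(pred, truth):
    """F1 score between a predicted binary oil-spill mask and the reference mask."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.sum(pred & truth)       # oil pixels correctly detected
    fp = np.sum(pred & ~truth)      # sea pixels flagged as oil (overestimation)
    fn = np.sum(~pred & truth)      # oil pixels missed (underestimation)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```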

Estimation of the Lodging Area in Rice Using Deep Learning (딥러닝을 이용한 벼 도복 면적 추정)

  • Ban, Ho-Young;Baek, Jae-Kyeong;Sang, Wan-Gyu;Kim, Jun-Hwan;Seo, Myung-Chul
    • KOREAN JOURNAL OF CROP SCIENCE / v.66 no.2 / pp.105-111 / 2021
  • Rice lodging is an annual occurrence caused by typhoons accompanied by strong winds and heavy rainfall, resulting in damage related to pre-harvest sprouting during the ripening period. Rapid estimation of the lodged rice area is therefore necessary to enable timely responses to the damage. To this end, we obtained images related to rice lodging using a drone in Gimje, Buan, and Gunsan, which were converted to 128 × 128 pixel images. A convolutional neural network (CNN), a deep learning model, was trained on these images to predict rice lodging, with the images classified into two types (lodging and non-lodging) and divided in an 8:2 ratio into a training set and a validation set. The CNN model was layered and trained using three optimizers (Adam, RMSprop, and SGD). The area of rice lodging was then evaluated for three fields using the obtained data, excluding the training and validation sets. The images were combined into composite images of the entire fields using Metashape, and these composites were divided into 128 × 128 pixel tiles. Lodging in the divided images was predicted using the trained CNN model, and the lodged area was calculated by multiplying the ratio of the number of lodging images to the total number of field images by the area of the entire field. The results for the training and validation sets showed that accuracy increased as learning progressed and eventually exceeded 0.919. The results obtained for each of the three fields showed high accuracy for all optimizers, among which Adam showed the highest accuracy (normalized root mean square error: 2.73%). On the basis of these findings, it is anticipated that the area of lodged rice can be rapidly estimated using deep learning.
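The lodged-area calculation described above is simple arithmetic: the fraction of tiles classified as lodging times the total field area. A small sketch with hypothetical numbers:

```python
def lodged_area(n_lodging_tiles, n_total_tiles, field_area_m2):
    """Lodged area = (lodging tiles / total tiles) * total field area."""
    return (n_lodging_tiles / n_total_tiles) * field_area_m2

# Hypothetical example: 130 of 1,000 tiles classified as lodging in a 2-ha field.
print(lodged_area(130, 1000, 20_000))   # -> 2600.0 square meters
```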