• Title/Summary/Keyword: Drone remote sensing


Analyzing Soybean Growth Patterns in Open-Field Smart Agriculture under Different Irrigation and Cultivation Methods Using Drone-Based Vegetation Indices

  • Kyeong-Soo Jeong;Seung-Hwan Go;Kyeong-Kyu Lee;Jong-Hwa Park
    • Korean Journal of Remote Sensing / v.40 no.1 / pp.45-56 / 2024
  • Faced with aging populations, declining resources, and limited agricultural productivity, rural areas in South Korea require innovative solutions. This study investigated the potential of drone-based vegetation indices (VIs) to analyze soybean growth patterns in open-field smart agriculture in Goesan-gun, Chungbuk Province, South Korea. We monitored multi-seasonal normalized difference vegetation index (NDVI) and normalized difference red edge (NDRE) data for three soybean lots under different irrigation and cultivation methods (subsurface drainage, conventional cultivation, subsurface drip irrigation) using drone remote sensing. Combining NDVI (photosynthetically active biomass, PAB) and NDRE (chlorophyll) offered a comprehensive analysis of soybean growth, capturing both overall health and stress responses. Our analysis revealed distinct growth patterns for each lot. Lot A (subsurface drainage) displayed early vigor and efficient resource utilization (peaking at NDVI 0.971 and NDRE 0.686), likely due to the drainage system. Lot B (conventional cultivation) showed slower growth and potential limitations (peaking at NDVI 0.963 and NDRE 0.681), suggesting resource constraints or stress. Lot C (subsurface drip irrigation) exhibited rapid initial growth but faced later resource limitations (peaking at NDVI 0.970 and NDRE 0.695). By monitoring NDVI and NDRE variations, farmers can gain valuable insights to optimize resource allocation (reducing costs and environmental impact), improve crop yield and quality (maximizing yield potential), and address rural challenges in South Korea. This study demonstrates the promise of drone-based VIs for revitalizing open-field agriculture, boosting farm income, and attracting young talent, ultimately contributing to a more sustainable and prosperous future for rural communities. Further research integrating additional data and investigating physiological mechanisms could lead to even more effective management strategies and a deeper understanding of VI variations for optimized crop performance.
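
The two indices used in this study are simple normalized band ratios. The sketch below shows how NDVI and NDRE are typically computed from per-band reflectance arrays; the array shapes and reflectance values are illustrative and not taken from the paper.

```python
# Minimal sketch of the two indices in the abstract, assuming per-band
# reflectance arrays from a drone orthomosaic (values are illustrative).
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def ndre(nir: np.ndarray, red_edge: np.ndarray) -> np.ndarray:
    """Normalized difference red edge: (NIR - RedEdge) / (NIR + RedEdge)."""
    return (nir - red_edge) / np.clip(nir + red_edge, 1e-6, None)

# Synthetic reflectance values for a 2 x 2 patch of one soybean plot.
nir = np.array([[0.55, 0.60], [0.58, 0.62]])
red = np.array([[0.05, 0.04], [0.06, 0.05]])
red_edge = np.array([[0.20, 0.18], [0.22, 0.19]])

print("NDVI:", ndvi(nir, red))        # high values -> dense, healthy canopy
print("NDRE:", ndre(nir, red_edge))   # sensitive to chlorophyll content
```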

The Optimal GSD and Image Size for Deep Learning Semantic Segmentation Training of Drone Images of Winter Vegetables (드론 영상으로부터 월동 작물 분류를 위한 의미론적 분할 딥러닝 모델 학습 최적 공간 해상도와 영상 크기 선정)

  • Chung, Dongki;Lee, Impyeong
    • Korean Journal of Remote Sensing / v.37 no.6_1 / pp.1573-1587 / 2021
  • A drone image is an ultra-high-resolution image with a spatial resolution several to tens of times higher than that of a satellite or aerial image. Therefore, drone image-based remote sensing differs from traditional remote sensing in the level of objects to be extracted from the image and the amount of data to be processed. In addition, the optimal scale and size of training data differ depending on the characteristics of the applied deep learning model. However, most studies do not consider the size of the objects to be found in the image or the spatial resolution that reflects this scale, and in many cases the data specification used in previous models is applied as-is. In this study, the effects of the spatial resolution and image size of drone images on the accuracy and training time of a semantic segmentation deep learning model for six wintering vegetables were quantitatively analyzed through experiments. The experiments showed that the average segmentation accuracy for the six wintering vegetables increases as the spatial resolution increases, but the rate of increase and the convergence interval differ by crop, and there is a large difference in accuracy and training time depending on the image size at the same resolution. In particular, the optimal resolution and image size were found to differ for each crop. The research results can be used as reference data for efficient drone image acquisition and training data production when developing a winter vegetable segmentation model using drone images.
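
The experiment varies ground sample distance (GSD) and training image size; a rough sketch of that kind of preprocessing, resampling an orthomosaic to a coarser GSD and cutting it into fixed-size tiles, is shown below. The mosaic size, target GSD, and tile size are illustrative assumptions, not values from the paper.

```python
# Resample a drone orthomosaic to a target GSD and cut it into training tiles.
import numpy as np
import cv2  # OpenCV, assumed available

def resample_to_gsd(image: np.ndarray, src_gsd_cm: float, dst_gsd_cm: float) -> np.ndarray:
    """Resample so that one pixel covers dst_gsd_cm instead of src_gsd_cm."""
    scale = src_gsd_cm / dst_gsd_cm   # scale < 1 means fewer pixels, i.e. coarser GSD
    return cv2.resize(image, None, fx=scale, fy=scale,
                      interpolation=cv2.INTER_AREA)

def tile(image: np.ndarray, size: int) -> list[np.ndarray]:
    """Cut the image into non-overlapping size x size tiles (edge remainders dropped)."""
    h, w = image.shape[:2]
    return [image[r:r + size, c:c + size]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]

ortho = np.zeros((4000, 4000, 3), dtype=np.uint8)          # placeholder 1 cm GSD mosaic
coarse = resample_to_gsd(ortho, src_gsd_cm=1.0, dst_gsd_cm=4.0)
patches = tile(coarse, size=256)
print(len(patches), "training tiles at 4 cm GSD")
```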

Development of Stream Cover Classification Model Using SVM Algorithm based on Drone Remote Sensing (드론원격탐사 기반 SVM 알고리즘을 활용한 하천 피복 분류 모델 개발)

  • Jeong, Kyeong-So;Go, Seong-Hwan;Lee, Kyeong-Kyu;Park, Jong-Hwa
    • Journal of Korean Society of Rural Planning / v.30 no.1 / pp.57-66 / 2024
  • This study aimed to develop a precise vegetation cover classification model for small streams using a combination of drone remote sensing and support vector machine (SVM) techniques. The study area was the Idong stream in Goesan-gun, Chungbuk Province, South Korea. The first stage involved image acquisition with a fixed-wing eBee drone carrying two sensors: the S.O.D.A visible camera for capturing detailed visuals and the Sequoia+ multispectral sensor for gathering rich spectral data. The survey captured the stream's features on August 18, 2023. From the multispectral images, a range of vegetation indices were calculated, including the widely used normalized difference vegetation index (NDVI), the soil-adjusted vegetation index (SAVI) that factors in soil background, and the normalized difference water index (NDWI) for identifying water bodies. The third stage was the development of an SVM model based on the calculated vegetation indices. The radial basis function (RBF) kernel was chosen for the SVM, and optimal values for the cost (C) and gamma hyperparameters were determined. The results are as follows: (a) High-resolution imaging: the drone-based image acquisition delivered high-resolution images (1 cm/pixel) of the Idong stream, effectively capturing the stream's morphology, including its width, variations in the streambed, and the vegetation cover patterns along the stream banks and bed. (b) Vegetation insights through indices: the calculated vegetation indices revealed distinct spatial patterns in vegetation cover and moisture content; NDVI emerged as the strongest indicator of vegetation cover, while SAVI and NDWI provided insights into moisture variations. (c) Accurate classification with SVM: the SVM model, driven by the combination of NDVI, SAVI, and NDWI, achieved an accuracy of 0.903 calculated from the confusion matrix, translating to precise classification of vegetation, soil, and water within the stream area. The findings demonstrate the effectiveness of drone remote sensing and SVM techniques in developing accurate vegetation cover classification models for small streams. These models hold considerable potential for applications including stream monitoring, informed management practices, and stream restoration efforts. By incorporating images and additional details about the specific drone and sensor technology, a deeper understanding of small streams can be gained and effective strategies for stream protection and management can be developed.
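
The classification step described above is a per-pixel RBF-kernel SVM on vegetation-index features. The sketch below illustrates that setup with scikit-learn; the feature values, labels, and hyperparameter grid are illustrative, not the study's data.

```python
# RBF-kernel SVM on per-pixel (NDVI, SAVI, NDWI) features with C/gamma tuning.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Each row is one pixel: [NDVI, SAVI, NDWI]; labels: 0 = water, 1 = soil, 2 = vegetation.
X = np.array([[0.05, 0.04, 0.60],
              [0.10, 0.08, 0.55],
              [0.25, 0.20, 0.05],
              [0.30, 0.24, 0.02],
              [0.75, 0.60, -0.30],
              [0.80, 0.65, -0.35]])
y = np.array([0, 0, 1, 1, 2, 2])

# Tune the cost (C) and gamma hyperparameters, then fit the RBF-kernel SVM.
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0]},
                    cv=2)
grid.fit(X, y)
print("best params:", grid.best_params_)
print("predicted classes:", grid.best_estimator_.predict(X))
```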

Semantic Segmentation of Heterogeneous Unmanned Aerial Vehicle Datasets Using Combined Segmentation Network

  • Song, Ahram
    • Korean Journal of Remote Sensing / v.39 no.1 / pp.87-97 / 2023
  • Unmanned aerial vehicles (UAVs) can capture high-resolution imagery from a variety of viewing angles and altitudes; however, they are generally limited to collecting images of small scenes within larger regions. To improve the utility of UAV-acquired datasets for deep learning applications, multiple datasets created from various regions under different conditions are needed. To demonstrate a method for integrating heterogeneous UAV datasets, this paper applies a combined segmentation network (CSN) that shares its encoding blocks across the UAVid and Semantic Drone datasets to learn their general features, while its decoding blocks are trained separately on each dataset. Experimental results show that the CSN improves the accuracy of specific classes (e.g., cars) that account for a low proportion of both datasets. From this result, the range of UAV dataset utilization is expected to increase.
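
The core idea of the CSN is a shared encoder with one decoder per dataset. A toy PyTorch sketch of that structure is given below; layer sizes are illustrative, the UAVid class count of 8 is taken as given, and the Semantic Drone Dataset class count is an assumption rather than a value stated in the abstract.

```python
# Toy shared-encoder / per-dataset-decoder structure, not the paper's network.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.features(x)

class DatasetDecoder(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.head = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, num_classes, 1),
        )
    def forward(self, feats):
        return self.head(feats)

encoder = SharedEncoder()                      # trained on batches from both datasets
decoder_uavid = DatasetDecoder(num_classes=8)  # UAVid has 8 classes
decoder_sdd = DatasetDecoder(num_classes=24)   # Semantic Drone Dataset class count (assumed)

x = torch.randn(1, 3, 128, 128)
print(decoder_uavid(encoder(x)).shape)  # torch.Size([1, 8, 128, 128])
print(decoder_sdd(encoder(x)).shape)    # torch.Size([1, 24, 128, 128])
```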

Semantic Segmentation of Drone Images Based on Combined Segmentation Network Using Multiple Open Datasets (개방형 다중 데이터셋을 활용한 Combined Segmentation Network 기반 드론 영상의 의미론적 분할)

  • Ahram Song
    • Korean Journal of Remote Sensing / v.39 no.5_3 / pp.967-978 / 2023
  • This study proposed and validated a combined segmentation network (CSN) designed to train effectively on multiple drone image datasets and enhance the accuracy of semantic segmentation. The CSN shares the entire encoding domain to accommodate the diversity of the three drone datasets, while the decoding domains are trained independently. During training, the segmentation accuracy of the CSN was lower than that of U-Net and the pyramid scene parsing network (PSPNet) on single datasets because it considers the loss values of all datasets simultaneously. However, when applied to autonomous drone images acquired in South Korea, the CSN classified pixels into appropriate classes without requiring additional training, outperforming PSPNet. This research suggests that the CSN can serve as a valuable tool for training effectively on diverse drone image datasets and improving object recognition accuracy in new regions.
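
The abstract notes that training considers the loss values of all datasets simultaneously. The schematic sketch below shows one way such a combined training step could look, with a shared encoder and per-dataset decoders; the modules, data, and learning rate are placeholders, not the paper's implementation.

```python
# One optimization step over batches drawn from every dataset (schematic only).
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

def combined_step(encoder, decoders, batches, optimizer):
    """Sum the per-dataset segmentation losses and update all parameters once."""
    optimizer.zero_grad()
    total_loss = 0.0
    for (images, masks), decoder in zip(batches, decoders):
        logits = decoder(encoder(images))
        total_loss = total_loss + criterion(logits, masks)  # sum of per-dataset losses
    total_loss.backward()   # gradients reach the shared encoder from all datasets
    optimizer.step()
    return float(total_loss)

# Smoke test with placeholder modules and random data.
enc = nn.Conv2d(3, 8, 3, padding=1)
dec_a, dec_b = nn.Conv2d(8, 5, 1), nn.Conv2d(8, 10, 1)
opt = torch.optim.SGD(list(enc.parameters()) + list(dec_a.parameters())
                      + list(dec_b.parameters()), lr=0.01)
batch_a = (torch.randn(2, 3, 32, 32), torch.randint(0, 5, (2, 32, 32)))
batch_b = (torch.randn(2, 3, 32, 32), torch.randint(0, 10, (2, 32, 32)))
print(combined_step(enc, [dec_a, dec_b], [batch_a, batch_b], opt))
```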

Robust Radiometric and Geometric Correction Methods for Drone-Based Hyperspectral Imaging in Agricultural Applications

  • Hyoung-Sub Shin;Seung-Hwan Go;Jong-Hwa Park
    • Korean Journal of Remote Sensing / v.40 no.3 / pp.257-268 / 2024
  • Drone-mounted hyperspectral sensors (DHSs) have revolutionized remote sensing in agriculture by offering a cost-effective and flexible platform for high-resolution spectral data acquisition. Their ability to capture data at low altitudes minimizes atmospheric interference, enhancing their utility in agricultural monitoring and management. This study focused on addressing the challenges of radiometric and geometric distortions in preprocessing drone-acquired hyperspectral data. Radiometric correction, using the empirical line method (ELM) and spectral reference panels, effectively removed sensor noise and variations in solar irradiance, resulting in accurate surface reflectance values. Notably, the ELM correction improved reflectance for the measured reference panels by 5-55%, yielding a more uniform spectral profile across wavelengths, further validated by high correlations (0.97-0.99), despite minor deviations observed at specific wavelengths for some reflectors. Geometric correction, using a rubber sheet transformation with ground control points, successfully rectified distortions caused by sensor orientation and flight path variations, ensuring accurate spatial representation within the image. The effectiveness of geometric correction was assessed using root mean square error (RMSE) analysis, revealing minimal errors in both the east-west (0.00 to 0.081 m) and north-south (0.00 to 0.076 m) directions. The overall position RMSE of 0.031 m across 100 points demonstrates high geometric accuracy, exceeding industry standards. Additionally, image mosaicking was performed to create a comprehensive representation of the study area. These results demonstrate the effectiveness of the applied preprocessing techniques and highlight the potential of DHSs for precise crop health monitoring and management in smart agriculture. However, further research is needed to address challenges related to data dimensionality, sensor calibration, and reference data availability, as well as to explore alternative correction methods and evaluate their performance in diverse environmental conditions, in order to enhance the robustness and applicability of hyperspectral data processing in agriculture.
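
The radiometric correction above uses the empirical line method with reference panels. A minimal sketch of that idea for a single band is shown below: fit a linear gain and offset between panel digital numbers and their known reflectance, then apply the fit to the scene. The panel DNs and reflectances are illustrative values, not the study's measurements.

```python
# Empirical line method for one band: reflectance = gain * DN + offset.
import numpy as np

# Measured DNs over two reference panels and their lab-certified reflectances.
panel_dn = np.array([1200.0, 5800.0])        # dark and bright panel DNs
panel_reflectance = np.array([0.05, 0.50])   # known panel reflectance

# Least-squares line through the panel measurements.
gain, offset = np.polyfit(panel_dn, panel_reflectance, deg=1)

def dn_to_reflectance(dn: np.ndarray) -> np.ndarray:
    """Convert raw digital numbers to surface reflectance for this band."""
    return gain * dn + offset

scene_dn = np.array([[1500.0, 3000.0], [4500.0, 6000.0]])
print(dn_to_reflectance(scene_dn))
```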

High-Resolution Mapping Techniques for Coastal Debris Using YOLOv8 and Unmanned Aerial Vehicle (YOLOv8과 무인항공기를 활용한 고해상도 해안쓰레기 매핑)

  • Suho Bak;Heung-Min Kim;Youngmin Kim;Inji Lee;Miso Park;Tak-Young Kim;Seon Woong Jang
    • Korean Journal of Remote Sensing / v.40 no.2 / pp.151-166 / 2024
  • Coastal debris presents a significant environmental threat globally. This research sought to improve coastal debris monitoring methods by employing deep learning and remote sensing technologies. To achieve this, an object detection approach utilizing the You Only Look Once (YOLO)v8 model was implemented to develop a comprehensive image dataset for 11 primary types of coastal debris in South Korea, and a protocol for the real-time detection and analysis of debris was proposed. Drone imagery was collected over Sinja Island, situated at the estuary of the Nakdong River, and analyzed using a custom YOLOv8-based analysis program to identify type-specific hotspots of coastal debris. These mapping and analysis methodologies are anticipated to be effectively utilized in coastal debris management.
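
For orientation, the sketch below shows how a trained YOLOv8 detector is typically run on drone imagery with the Ultralytics API; the weight file and image path are hypothetical placeholders, not artifacts released by this study.

```python
# Run a (hypothetical) fine-tuned YOLOv8 model on a drone survey image.
from ultralytics import YOLO

model = YOLO("coastal_debris_yolov8.pt")    # placeholder fine-tuned weights
results = model.predict("drone_survey_tile.jpg", conf=0.25)

for r in results:
    for box in r.boxes:
        cls_name = model.names[int(box.cls)]
        # class name, confidence score, and pixel bounding box (x1, y1, x2, y2)
        print(cls_name, float(box.conf), box.xyxy.tolist())
```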

Coastal Shallow-Water Bathymetry Survey through a Drone and Optical Remote Sensors (드론과 광학원격탐사 기법을 이용한 천해 수심측량)

  • Oh, Chan Young;Ahn, Kyungmo;Park, Jaeseong;Park, Sung Woo
    • Journal of Korean Society of Coastal and Ocean Engineers / v.29 no.3 / pp.162-168 / 2017
  • A shallow-water bathymetry survey was conducted using high-definition color images obtained from a drone at an altitude of 100 m above sea level. Shallow-water bathymetry data are among the most important input data for research on beach erosion problems. Accurate bathymetry data within the closure depth are especially important, because most of the phenomena of interest occur in the surf zone. However, it is extremely difficult to obtain accurate bathymetry data in this region due to wave-induced currents and breaking waves. Therefore, optical remote sensing using a small drone is considered an attractive alternative. This paper presents the potential of image processing algorithms that apply multi-variable linear regression to red, green, blue, and grey band images for estimating shallow-water depth using a drone with an HD camera. The optical remote sensing analysis conducted at Wolpo beach showed promising results: estimated water depths within 5 m showed a correlation coefficient of 0.99 and a maximum error of 0.2 m compared with depths surveyed manually and with a ship-board echo sounder.
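
The depth inversion described above is a multi-variable linear regression from per-pixel band values to surveyed depth. The sketch below illustrates that approach with scikit-learn; the band values and calibration depths are synthetic, not the Wolpo beach data.

```python
# Multi-variable linear regression: depth from red, green, blue and grey band values.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: red, green, blue, grey; one row per calibration point.
X = np.array([[0.20, 0.35, 0.50, 0.35],
              [0.18, 0.32, 0.46, 0.32],
              [0.15, 0.28, 0.40, 0.28],
              [0.12, 0.22, 0.33, 0.22],
              [0.10, 0.18, 0.27, 0.18]])
depth_m = np.array([0.5, 1.0, 2.0, 3.5, 4.8])   # surveyed depths at those points

model = LinearRegression().fit(X, depth_m)
print("R^2 on calibration points:", model.score(X, depth_m))
print("predicted depth:", model.predict([[0.14, 0.25, 0.37, 0.25]]))
```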

Comparative Analysis of Pre-processing Method for Standardization of Multi-spectral Drone Images (다중분광 드론영상의 표준화를 위한 전처리 기법 비교·분석)

  • Ahn, Ho-Yong;Ryu, Jae-Hyun;Na, Sang-il;Lee, Byung-mo;Kim, Min-ji;Lee, Kyung-do
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1219-1230 / 2022
  • Multi-spectral drone observation in agriculture requires quantitative and reliable data based on physical quantities such as radiance or reflectance for crop yield analysis. Remote sensing data for crop monitoring require images taken over the same area as a time series. In particular, biophysical variables such as leaf area index or chlorophyll can be analyzed directly only when the time-series data are produced under the same reference, so comparable reflectance data are required. In orthoimagery produced from drone images, pixel values across the entire image can be distorted, or pixel values can differ at mosaic junction boundaries, which limits accurate estimation of physical quantities. In this study, reflectance and vegetation indices were calculated from drone images according to different image correction methods for time-series crop monitoring, and the drone-derived reflectance was compared with ground-measured data for spectral characteristics analysis.
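
The comparison step mentioned at the end of the abstract can be summarized as checking how drone-derived reflectance tracks ground-measured spectra after each correction method. A small sketch using RMSE and correlation is given below; the spectra are illustrative values only.

```python
# Compare drone-derived reflectance with ground-measured reflectance per band.
import numpy as np

ground = np.array([0.04, 0.06, 0.10, 0.35, 0.42])   # field spectrometer reflectance
drone = np.array([0.05, 0.07, 0.12, 0.33, 0.40])    # drone reflectance, same targets/bands

rmse = float(np.sqrt(np.mean((drone - ground) ** 2)))
corr = float(np.corrcoef(drone, ground)[0, 1])
print(f"RMSE = {rmse:.3f}, r = {corr:.3f}")
```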

A Study on the Use of Drones for Disaster Damage Investigation in Mountainous Terrain (산악지형에서의 재난피해조사를 위한 드론 맵핑 활용방안 연구)

  • Shin, Dongyoon;Kim, Dajinsol;Kim, Seongsam;Han, Youkyung;Nho, Hyunju
    • Korean Journal of Remote Sensing / v.36 no.5_4 / pp.1209-1220 / 2020
  • In forest areas, the installation of ground control points (GCPs) and the selection of terrain features, which are part of the standard unmanned aerial photogrammetry workflow, are more limited than in urban areas, and safety problems arise because tall forest forces flight beyond the visual line of sight. To compensate for these problems, drones equipped with a real-time kinematic (RTK) sensor that corrects the drone's position in real time, and 3D flight methods that fly based on terrain information, are being developed. This study presents a method for investigating disaster damage using drones in forest areas. Position accuracy was evaluated for three methods: 1) drone mapping through GCP measurement (normal mapping), 2) drone mapping based on topographic data (3D flight mapping), and 3) drone mapping using an RTK drone (RTK mapping); all showed an accuracy within 2 cm in the horizontal and within 13 cm in the vertical direction. After the position accuracy evaluation, the volume of the landslide area was calculated for each method and the values were compared; all showed similar volumes. This study confirmed the possibility of utilizing 3D flight mapping and RTK mapping in forest areas. In the future, more effective damage investigations are expected if the three methods are used appropriately according to the conditions of the disaster area.
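
The landslide volume comparison above is typically done by differencing pre- and post-event surface models produced by drone mapping. A compact sketch of that calculation is shown below; the grids, GSD, and scar depth are illustrative assumptions.

```python
# Volume of lost material from pre/post-event digital elevation model differencing.
import numpy as np

gsd_m = 0.05                                   # 5 cm ground sample distance (assumed)
pre_dem = np.full((200, 200), 310.0)           # pre-event surface elevations (m)
post_dem = pre_dem.copy()
post_dem[50:150, 50:150] -= 1.2                # simulated landslide scar, 1.2 m deep

diff = pre_dem - post_dem                      # positive where material was lost
eroded_volume = float(np.sum(diff[diff > 0]) * gsd_m ** 2)
print(f"eroded volume: {eroded_volume:.1f} m^3")   # 100*100 px * 1.2 m * 0.0025 m^2 = 30.0
```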