• Title/Summary/Keyword: UAV images

Search Result 291

Characteristics of UAV Aerial Images for Monitoring of Highland Kimchi Cabbage

  • Lee, Kyung-Do;Park, Chan-Won;So, Kyu-Ho;Kim, Ki-Deog;Na, Sang-Il
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.50 no.3
    • /
    • pp.162-178
    • /
    • 2017
  • Remote sensing can be used to provide information for monitoring crop growth conditions. Recently, Unmanned Aerial Vehicle (UAV) technology has offered new opportunities for assessing crop growth conditions using UAV imagery. The objective of this study was to assess whether UAV aerial images are suitable for monitoring highland Kimchi cabbage. This study was conducted using a fixed-wing UAV (Model: Ebee) with Canon S110 and IXUS/ELPH cameras during the farming seasons from 2015 to 2016 in the main production areas of highland Kimchi cabbage: Anbandegi, Maebongsan, and Gwinemi. The Normalized Difference Vegetation Index (NDVI) derived from the UAV images was stable and suitable for monitoring the condition of Kimchi cabbage. There were strong relationships between UAV NDVI and the growth parameters (plant height and leaf width) (R² ≥ 0.94). The trend of UAV NDVI over the Kimchi cabbage growth cycle was similar in the same area for two years (2015~2016). This means that if UAV images are collected over several years, they could be used to estimate the growth stage and condition of Kimchi cabbage cultivation.
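The NDVI used in studies like this one is the standard normalized difference of near-infrared and red reflectance; a minimal sketch in Python (the array values are illustrative, not taken from the paper):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 patch: vegetated pixels reflect strongly in NIR, weakly in red.
nir = np.array([[0.50, 0.45], [0.40, 0.10]])
red = np.array([[0.05, 0.06], [0.08, 0.09]])
print(ndvi(nir, red))
```

Dense, healthy canopy drives the index toward 1, while bare soil or stressed vegetation stays near 0.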

Stream Environment Monitoring using UAV Images (RGB, Thermal Infrared) (UAV 영상(RGB, 적외 열 영상)을 활용한 하천환경 모니터링)

  • Kang, Joon-Oh;Kim, Dal-Joo;Han, Woong-Ji;Lee, Yong-Chang
    • Journal of Urban Science
    • /
    • v.6 no.2
    • /
    • pp.17-27
    • /
    • 2017
  • Recently, civil complaints have increased due to water pollution and bad smells in rivers, so attention is focused on improving the river environment. The purpose of this study was to acquire RGB and thermal infrared images of a sewage outlet using a UAV, to monitor the status of stream pollution, and to examine the applicability of UAV-based images to river embankment maintenance planning. The accuracy of the 3D model was examined by SfM (Structure from Motion) based image analysis of the river embankment maintenance area. In particular, wastewater discharged from a factory near the river was detected in the thermal infrared images and its flow was monitored. As a result of the study, we could monitor the cause and flow of wastewater pollution by detecting the temperature change caused by wastewater inflow using UAV images. In addition, a high-precision UAV-based 3D model (DTM, Digital Topographic Map, Orthophoto Mosaic) was produced to obtain a precise DSM (Digital Surface Model) and vegetation cover information for river embankment maintenance.
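As a rough illustration of the outflow-detection idea, warm pixels in a TIR scene can be screened with a simple statistical threshold. The sketch below uses made-up temperatures and an arbitrary 2-sigma cutoff; it is not the paper's actual method:

```python
import numpy as np

def flag_warm_pixels(tir, k=2.0):
    """Flag pixels more than k standard deviations above the scene mean,
    a crude screen for warm outflow in a thermal infrared image."""
    t = tir.astype(float)
    return t > t.mean() + k * t.std()

# Toy 3x3 TIR patch (degrees C): one warm-discharge pixel among cool water.
tir = np.array([[12.0, 12.5, 12.2],
                [12.4, 30.0, 12.1],
                [12.3, 12.2, 12.6]])
print(flag_warm_pixels(tir).sum())  # -> 1
```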


Development of Brightness Correction Method for Mosaicking UAV Images (무인기 영상 병합을 위한 밝기값 보정 방법 개발)

  • Ban, Seunghwan;Kim, Taejung
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_1
    • /
    • pp.1071-1081
    • /
    • 2021
  • Remote sensing using unmanned aerial vehicles (UAVs) can acquire images with higher temporal and spatial resolution than aerial and satellite remote sensing. However, UAV images are photographed at low altitude, and the area covered by one image is relatively narrow. Therefore, multiple images must be processed to monitor a large area. Since UAV images are photographed under different exposure conditions, there are differences in brightness values between adjacent images. When the images are mosaicked, unnatural seamlines are generated because of these brightness differences. Therefore, in order to generate a seamless mosaic image, radiometric processing for correcting the differences in brightness values between images is essential. This paper proposes a relative radiometric calibration and image blending technique. In order to analyze the performance of the proposed method, mosaic images of UAV images in agricultural and mountainous areas were generated. As a result, mosaic images with a mean brightness difference of 5 and a root mean square difference of 7 were achieved.
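One common form of relative radiometric correction matches an image's brightness statistics to a reference frame, e.g. over their overlap area. The sketch below is a simplified stand-in; the paper's actual calibration and blending technique may differ:

```python
import numpy as np

def match_brightness(target, reference):
    """Linearly rescale `target` so its brightness mean/std match `reference`
    (statistics would normally be computed over the overlap of two frames)."""
    t, r = target.astype(float), reference.astype(float)
    gain = r.std() / (t.std() + 1e-9)
    offset = r.mean() - gain * t.mean()
    return gain * t + offset

a = np.array([[100.0, 120.0], [140.0, 160.0]])  # darker frame (toy values)
b = a * 1.2 + 15                                # same scene, brighter exposure
print(np.round(match_brightness(a, b), 1))      # recovers b's brightness
```

After gain/offset matching, a feathered blend across the seamline (not shown) typically hides the remaining residual difference.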

A Comparative Study of Image Classification Method to Classify Onion and Garlic Using Unmanned Aerial Vehicle (UAV) Imagery

  • Lee, Kyung-Do;Lee, Ye-Eun;Park, Chan-Won;Na, Sang-Il
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.49 no.6
    • /
    • pp.743-750
    • /
    • 2016
  • Recently, the use of UAVs (Unmanned Aerial Vehicles) has increased in the agricultural sector. This study was conducted to classify onion and garlic using supervised classification of fixed-wing UAV (Model: Ebee) images, in order to evaluate the possibility of estimating onion and garlic cultivation areas from UAV images. Aerial images were obtained 11~12 times from study sites in Changryeng-gun and Hapcheon-gun during the farming seasons from 2015 to 2016. Among the onion and garlic classifications by R-G-B and R-G-NIR images, the maximum likelihood method showed the highest Kappa coefficients. The classification of onion and garlic showed high Kappa coefficients of 0.75~0.97 from DOY 105 to DOY 141, implying that UAV images could be used to estimate onion and garlic cultivation areas.
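The Kappa coefficient used to score these classifications measures agreement beyond what chance would produce; a minimal Python sketch from a confusion matrix (the counts are illustrative, not the paper's):

```python
import numpy as np

def kappa(confusion):
    """Cohen's Kappa from a confusion matrix (rows: reference, cols: predicted)."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                        # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2    # agreement expected by chance
    return (po - pe) / (1 - pe)

cm = [[40, 5],    # toy onion-vs-garlic pixel counts
      [10, 45]]
print(round(kappa(cm), 2))  # -> 0.7
```

Values above roughly 0.75, as reported in the abstract, are conventionally read as strong agreement.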

Analysis of Surface Temperature Characteristics by Land Surface Fabrics Using UAV TIR Images (UAV 열적외 영상을 활용한 피복재질별 표면온도 특성 분석)

  • SONG, Bong-Geun;KIM, Gyeong-Ah;SEO, Kyeong-Ho;LEE, Seung-Won;PARK, Kyung-Hun
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.21 no.3
    • /
    • pp.162-175
    • /
    • 2018
  • The purpose of this study was to analyze the surface temperature of surface fabrics using UAV TIR images, in order to mitigate problems in the thermal environment of urban areas. Surface temperature values derived from UAV images were compared with those measured in-situ during roughly the same period as when the images were taken. The difference between the in-situ measured and UAV image derived surface temperatures is highest for gray colored concrete roof fabrics, at 17°C, and urethane fabrics show the lowest difference, at 0.3°C. The explanatory power of the scatter plot of in-situ measured and UAV image derived surface temperatures was 63.75%, indicating that the correlation between the two is high. The surface fabrics with high temperature are metal roofs (48.9°C), urethane (43.4°C), and gray colored concrete roofs (42.9°C), and those with low temperature are barren land (30.2°C), areas with trees and lawns (30.2°C), and white colored concrete roofs (34.9°C). These results show that accurate analysis of the thermal characteristics of surface fabrics is possible using UAV images. In the future, it will be necessary to increase the usability of UAV images via comparison with in-situ data and linkage to satellite imagery.

Texture Mapping of a Bridge Deck Using UAV Images (무인항공영상을 이용한 교량 상판의 텍스처 매핑)

  • Nguyen, Truong Linh;Han, Dongyeob
    • Journal of Digital Contents Society
    • /
    • v.18 no.6
    • /
    • pp.1041-1047
    • /
    • 2017
  • There are many methods for surveying the status of a road, and the use of unmanned aerial vehicle (UAV) photos is one such method. When UAV images are too large to process and suspected of being redundant, a texture extraction technique is used to transform the data into a reduced set of feature representations. This is an important task in 3D simulation using UAV images because a huge amount of data can be input. This paper presents a texture extraction method for UAV images to obtain high-resolution images of bridges. The proposed method consists of three steps: firstly, we use the 3D bridge model from the V-World database; secondly, textures are extracted from oriented UAV images; and finally, the textures extracted from each image are blended. The results of our study can be used to update V-World textures to high-resolution images.
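The final blending step can be illustrated by a weighted average of per-image texture patches. This is a toy stand-in for the paper's blending; the weighting scheme (e.g. by viewing angle or resolution) is an assumption:

```python
import numpy as np

def blend_textures(textures, weights):
    """Blend texture patches extracted from several oriented images into one
    texture by normalized weighted averaging."""
    stack = np.stack([t.astype(float) for t in textures])
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize so weights sum to 1
    return np.tensordot(w, stack, axes=1)

t1 = np.full((2, 2), 10.0)   # texture of the same facade from image 1 (toy)
t2 = np.full((2, 2), 20.0)   # texture from image 2, e.g. a better view
print(blend_textures([t1, t2], weights=[1, 3]))  # result sits closer to t2
```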

Detection of Collapse Buildings Using UAV and Bitemporal Satellite Imagery (UAV와 다시기 위성영상을 이용한 붕괴건물 탐지)

  • Jung, Sejung;Lee, Kirim;Yun, Yerin;Lee, Won Hee;Han, Youkyung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.38 no.3
    • /
    • pp.187-196
    • /
    • 2020
  • In this study, collapsed-building detection using UAV (Unmanned Aerial Vehicle) and PlanetScope satellite images was carried out, suggesting the possibility of utilizing heterogeneous sensors for detecting objects located on the surface. To this end, an area where about 20 buildings had collapsed due to forest fire damage was selected as the study site. First, object-based segmentation was performed on high-resolution UAV images, and feature information such as ExG (Excess Green), GLCM (Gray-Level Co-occurrence Matrix), and DSM (Digital Surface Model) was generated to detect collapsed buildings. These features were then used to detect candidate collapsed buildings. In this process, the result of change detection using PlanetScope was used together to improve detection accuracy. More specifically, the changed pixels acquired from the bitemporal PlanetScope images were used as seed pixels to correct the misdetected and overdetected areas in the candidate group of collapsed buildings. The accuracy of the detection results using only the UAV image, and the accuracy when UAV and PlanetScope images were used together, were analyzed against a manually digitized reference image. As a result, the results using only the UAV image had an F1-score of 0.4867, while using UAV and PlanetScope images together improved the value to an F1-score of 0.8064. Moreover, the Kappa coefficient also improved dramatically, from 0.3674 to 0.8225.
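The F1-score reported above combines the precision and recall of the detection mask against the reference; a small Python sketch with toy masks:

```python
import numpy as np

def f1_score(pred, ref):
    """F1-score of a binary detection mask against a binary reference mask."""
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    tp = (pred & ref).sum()     # detected and truly collapsed
    fp = (pred & ~ref).sum()    # false alarms
    fn = (~pred & ref).sum()    # missed collapses
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

pred = np.array([1, 1, 0, 0, 1])  # detected collapsed-building pixels (toy)
ref  = np.array([1, 0, 1, 0, 1])  # manually digitized reference (toy)
print(f1_score(pred, ref))
```

Because false alarms and misses both pull the score down, F1 is a stricter summary than overall accuracy when, as here, the target class covers only a small fraction of the scene.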

Study on Reflectance and NDVI of Aerial Images using a Fixed-Wing UAV "Ebee"

  • Lee, Kyung-Do;Lee, Ye-Eun;Park, Chan-Won;Hong, Suk-Young;Na, Sang-Il
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.49 no.6
    • /
    • pp.731-742
    • /
    • 2016
  • Recent technological advances in UAV (Unmanned Aerial Vehicle) technology offer new opportunities for assessing crop conditions using UAV imagery. The objective of this study was to assess whether reflectance and NDVI derived from consumer-grade cameras mounted on UAVs are useful for crop condition monitoring. This study was conducted using a fixed-wing UAV (Ebee) with a Canon S110 camera from March 2015 to March 2016 in the experimental field of the National Institute of Agricultural Sciences. Results were compared with ground-based recordings obtained from consumer-grade cameras and ground multi-spectral sensors. The relationship between the raw digital numbers (DNs) of the UAV images and the measured calibration tarp reflectance was quadratic. Surface (lawn grass, stairs, and soybean cultivation area) reflectance obtained from the UAV images was not similar to the reflectance measured by ground-based sensors, but NDVI based on UAV imagery was similar to NDVI calculated from ground-based sensors.
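A quadratic DN-to-reflectance relationship like the one described can be recovered with an ordinary least-squares polynomial fit. The tarp values below are generated from an assumed quadratic for illustration; they are not the paper's measurements:

```python
import numpy as np

# Calibration tarp panels: raw digital numbers vs. known reflectance
# (synthetic values from an assumed quadratic, for illustration only).
dn          = np.array([30.0, 80.0, 140.0, 200.0, 245.0])
reflectance = 1e-5 * dn**2 + 1e-3 * dn

coeffs = np.polyfit(dn, reflectance, deg=2)   # fits a*DN^2 + b*DN + c
dn_to_refl = np.poly1d(coeffs)
print(round(float(dn_to_refl(150.0)), 3))     # -> 0.375
```

Once fitted on tarp panels, the same polynomial converts every pixel's DN in the scene to estimated surface reflectance.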

Estimation of Rice Grain Yield Distribution Using UAV Imagery (무인비행체 영상을 활용한 벼 수량 분포 추정)

  • Lee, KyungDo;An, HoYong;Park, ChanWon;So, KyuHo;Na, SangIl;Jang, SuYong
    • Journal of The Korean Society of Agricultural Engineers
    • /
    • v.61 no.4
    • /
    • pp.1-10
    • /
    • 2019
  • Unmanned aerial vehicles (UAVs) can acquire images at lower cost than conventional manned aircraft and commercial satellites, and have the advantage of acquiring high-resolution aerial images covering field areas of more than 50 ha. The purpose of this study was to estimate the rice grain yield distribution using UAVs. In order to develop a technology for estimating rice yield from UAV images, time-series UAV aerial images were taken over paddy fields, and the data were compared with the rice yield of the harvesting area for two rice varieties (Sindongjin, Dongjinchal). Correlations between the vegetation indices and rice yield ranged from 0.8 to 0.95 in the booting period. Accordingly, rice yield was estimated using UAV-derived vegetation indices (R² = 0.70 for Sindongjin, R² = 0.92 for Dongjinchal). This means that rice yield estimation using UAV imagery can provide lower cost and higher accuracy than other methods using a combine harvester with a yield monitoring system or satellite imagery. In the future, it will be necessary to study systems for converging and integrating various information, such as imagery, weather, and soil, for efficient use of this information, along with research on preparing management practice standards, such as pest control and nutrient use, based on UAV image information.
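Estimating yield from a vegetation index amounts to fitting a simple regression and checking its R²; a sketch with made-up plot-level numbers (not the paper's data, and the units are assumed):

```python
import numpy as np

# Illustrative plot-level data: vegetation index at booting stage vs. yield.
vi       = np.array([0.62, 0.68, 0.71, 0.75, 0.80, 0.83])
yield_kg = np.array([420., 455., 480., 500., 540., 555.])  # kg/10a, toy values

slope, intercept = np.polyfit(vi, yield_kg, deg=1)  # least-squares line
pred = slope * vi + intercept

# Coefficient of determination: fraction of yield variance the line explains.
ss_res = ((yield_kg - pred) ** 2).sum()
ss_tot = ((yield_kg - yield_kg.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
print(round(r2, 2))
```

The fitted line can then be applied pixel-by-pixel to a vegetation-index map to produce a spatial yield distribution.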

Applicability of Image Classification Using Deep Learning in Small Area : Case of Agricultural Lands Using UAV Image (딥러닝을 이용한 소규모 지역의 영상분류 적용성 분석 : UAV 영상을 이용한 농경지를 대상으로)

  • Choi, Seok-Keun;Lee, Soung-Ki;Kang, Yeon-Bin;Seong, Seon-Kyeong;Choi, Do-Yeon;Kim, Gwang-Ho
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.38 no.1
    • /
    • pp.23-33
    • /
    • 2020
  • Recently, high-resolution images can be easily acquired using UAVs (Unmanned Aerial Vehicles), making it possible to produce small-area observations and spatial information at low cost. In particular, research on generating cover maps of crop production areas is being actively conducted for monitoring the agricultural environment. When the classification performance of RF (Random Forest), SVM (Support Vector Machine), and CNN (Convolutional Neural Network) is compared, deep learning classification methods show many advantages in image classification. In particular, land cover classification using satellite images benefits, in both accuracy and classification time, from satellite image data sets and pre-trained parameters. However, UAV images have characteristics different from satellite images, such as spatial resolution, which makes it difficult to apply these methods directly. To solve this problem, we conducted a study on the application of deep learning algorithms to agricultural lands in Korea, where UAV data sets and small-scale composite cover exist. In this study, we applied DeepLab V3+, FC-DenseNet (Fully Convolutional DenseNets), and FRRN-B (Full-Resolution Residual Networks), state-of-the-art semantic image classification algorithms, to a UAV data set. As a result, DeepLab V3+ and FC-DenseNet achieved an overall accuracy of 97% and a Kappa coefficient of 0.92, which is higher than conventional classification. This demonstrates the applicability of cover classification using UAV images of small areas.
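A practical preprocessing step when applying fixed-input networks such as these to a large UAV orthoimage is tiling it into patches; a minimal sketch (the patch size and the choice to drop partial edge tiles are assumptions, not the paper's settings):

```python
import numpy as np

def tile_image(img, size):
    """Split an H x W x C array into non-overlapping size x size patches.
    Edge strips that do not fill a whole patch are dropped for simplicity."""
    h, w = img.shape[:2]
    return [img[r:r + size, c:c + size]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]

# A toy 512x768 RGB orthoimage yields a 2x3 grid of 256-pixel patches.
patches = tile_image(np.zeros((512, 768, 3)), 256)
print(len(patches))  # -> 6
```

At inference time the per-patch class maps are stitched back in the same row-major order to rebuild the full cover map.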