• Title/Summary/Keyword: UAV Remote Sensing


Analysis of suspended sediment mixing in a river confluence using UAV-based hyperspectral imagery (드론기반 초분광 영상을 활용한 하천 합류부 부유사 혼합 분석)

  • Kwon, Siyoon; Seo, Il Won; Lyu, Siwan
    • Proceedings of the Korea Water Resources Association Conference / 2022.05a / pp.89-89 / 2022
  • When a tributary joins a river at a confluence, a complex three-dimensional flow structure develops, which drives active sediment mixing and morphological change. In particular, suspended sediment behavior at a confluence directly affects scour and deposition, channel morphology change, the river ecosystem, and the stability of river structures, so its accurate analysis is essential for river management and disaster prevention. Conventional suspended sediment measurements at confluences rely on point sampling with very low spatiotemporal resolution, so field data alone cannot resolve sediment mixing at a confluence, and numerical models have mainly been used to interpret suspended sediment mixing in large rivers. In this study, to analyze suspended sediment behavior at a river confluence with high spatial precision, we present a suspended sediment measurement methodology optimized for confluences using UAV-based hyperspectral imagery. To relate the hyperspectral data measured in the field to suspended sediment concentration, a Random Forest regression model was combined with a Gaussian Mixture Model (GMM)-based hyperspectral clustering technique that captures the different spectral characteristics of the two rivers meeting at the confluence. Applying this methodology to the confluence of the Nakdong and Hwang rivers, the hyperspectral clusters clearly delineated the boundary layer between the two flows, and separate regression models built for the tributary and the main stem reproduced suspended sediment behavior in the complex near-confluence boundary layer more accurately. Furthermore, based on the reconstructed high-resolution spatial distribution of suspended sediment, we identified the effect of the wake generated where the two strong flows mix at the boundary layer, and confirmed that large-scale horizontal vortices in the shear layer at the confluence dominate the suspended sediment mixing pattern.

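The two-step approach described in the abstract (spectral clustering to separate the two flows, then a per-cluster regression from spectra to suspended sediment concentration) can be sketched as follows. This is a minimal illustration, not the authors' code: the data are synthetic, and the band indices, sample sizes, and SSC relations are invented for the example.

```python
# Sketch: GMM clustering of hyperspectral pixels + one Random Forest
# regressor per cluster, mirroring the methodology summarized above.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_bands = 50

# Two synthetic "rivers" with different spectral baselines and SSC relations.
spectra_a = rng.normal(0.2, 0.02, (200, n_bands))
ssc_a = 10 + 100 * spectra_a[:, 10]          # tributary-like relation
spectra_b = rng.normal(0.6, 0.02, (200, n_bands))
ssc_b = 50 + 30 * spectra_b[:, 30]           # main-stem-like relation

X = np.vstack([spectra_a, spectra_b])
y = np.concatenate([ssc_a, ssc_b])

# Step 1: unsupervised spectral clustering separates the two flows.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)

# Step 2: a separate regression model per spectral cluster.
models = {}
for k in range(2):
    mask = labels == k
    models[k] = RandomForestRegressor(n_estimators=50, random_state=0).fit(
        X[mask], y[mask])

def predict_ssc(pixel):
    """Route a pixel through its cluster's regressor."""
    k = int(gmm.predict(pixel.reshape(1, -1))[0])
    return float(models[k].predict(pixel.reshape(1, -1))[0])

pred = predict_ssc(X[0])
```

Routing each pixel through its own cluster's model is what lets the two spectrally distinct flows keep separate spectra-to-SSC relations instead of one compromised global fit.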

Utilization of UAV and GIS for Efficient Agricultural Area Survey (효율적인 농업면적 조사를 위한 무인항공기와 GIS의 활용)

  • Jeong, Woo-Chul; Kim, Sung-Bo
    • Journal of Convergence for Information Technology / v.10 no.12 / pp.201-207 / 2020
  • This study examined the practicality of photographic information collected by unmanned aerial vehicles (UAVs). Four consecutive surveys were conducted over the field-level survey areas within the UAV photography zones, and changes in crop conditions were analyzed using the UAV photographs taken during each survey. For field plots, where frequent changes in topography, crop vegetation, and crop type are expected, it proved appropriate to collect photographic information by flying the UAV directly over the survey area at the time of each on-site survey. For relatively unchanging rice paddies and facilities, satellite imagery proved more appropriate in terms of economy and efficiency. If the survey areas are well equipped with systems for crop cultivation, deep learning could be applied in near real time using existing libraries once UAV photographs of a given area are obtained. Through this process, the results could be used to analyze overall crop and shipment volumes by identifying crop status and surveying yield per unit area.

Object-based Building Change Detection Using Azimuth and Elevation Angles of Sun and Platform in the Multi-sensor Images (태양과 플랫폼의 방위각 및 고도각을 이용한 이종 센서 영상에서의 객체기반 건물 변화탐지)

  • Jung, Sejung; Park, Jueon; Lee, Won Hee; Han, Youkyung
    • Korean Journal of Remote Sensing / v.36 no.5_2 / pp.989-1006 / 2020
  • Building change monitoring based on building detection is one of the most important tasks in monitoring artificial structures with high-resolution multi-temporal images such as those from CAS500-1 and -2, which are scheduled for launch. However, the varied shapes and sizes of buildings on the Earth's surface, as well as surrounding shadows and trees, make accurate building detection difficult, and relief displacement caused by the azimuth and elevation angles of the platform produces many misdetections. In this study, object-based building detection was performed using the Sun's azimuth angle and the corresponding main direction of shadows to improve building change detection; the platform's azimuth and elevation angles were then used to detect changed buildings. Object-based segmentation was performed on high-resolution imagery, shadow objects were classified by shadow intensity, and feature information such as rectangular fit, Gray-Level Co-occurrence Matrix (GLCM) homogeneity, and object area was calculated to detect building candidates. Final buildings were then detected using the direction and distance relationship between each building-candidate centroid and its shadow according to the Sun's azimuth angle. Three methods were proposed for change detection between the building objects detected in each image: simple overlay of objects, comparison of object sizes according to the platform's elevation angle, and consideration of direction between objects according to the platform's azimuth angle. A residential area was selected as the study area, using high-resolution imagery acquired from KOMPSAT-3 and an Unmanned Aerial Vehicle (UAV). Experimental results showed that the F1-scores of building detection using feature information alone were 0.488 for the KOMPSAT-3 image and 0.696 for the UAV image, whereas those considering shadows were 0.876 and 0.867, respectively, indicating higher accuracy for the shadow-based method. Among the three proposed change detection methods, consideration of direction between objects according to the platform's azimuth angle achieved the highest F1-score, 0.891.
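The shadow-consistency check the abstract relies on can be sketched geometrically: a shadow should lie roughly opposite the Sun's azimuth from a building candidate's center. This is an illustrative reconstruction, not the paper's implementation; the angular tolerance and the point representation of objects are assumptions.

```python
# Sketch: does a candidate shadow lie in the direction implied by the Sun's
# azimuth? Azimuths are degrees clockwise from north; points are (east, north).
import math

def shadow_direction(sun_azimuth_deg):
    """Expected shadow bearing: directly opposite the Sun."""
    return (sun_azimuth_deg + 180.0) % 360.0

def bearing(p_from, p_to):
    """Bearing from one point to another, degrees clockwise from north."""
    dx, dy = p_to[0] - p_from[0], p_to[1] - p_from[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

def is_consistent_shadow(building_center, shadow_center, sun_azimuth_deg,
                         tol_deg=30.0):           # tolerance is an assumption
    expected = shadow_direction(sun_azimuth_deg)
    actual = bearing(building_center, shadow_center)
    diff = abs((actual - expected + 180.0) % 360.0 - 180.0)
    return diff <= tol_deg

# Sun in the south-east (azimuth 135 deg) -> shadows cast to the north-west.
ok = is_consistent_shadow((0, 0), (-5, 5), 135.0)
```

Pairing each candidate with a shadow that passes this check is what filters out shadow- and tree-induced false positives in the detection step described above.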

Comparison of Reflectance and Vegetation Index Changes by Type of UAV-Mounted Multi-Spectral Sensors (무인비행체 탑재 다중분광 센서별 반사율 및 식생지수 변화 비교)

  • Lee, Kyung-do; Ahn, Ho-yong; Ryu, Jae-hyun; So, Kyu-ho; Na, Sang-il
    • Korean Journal of Remote Sensing / v.37 no.5_1 / pp.947-958 / 2021
  • This study was conducted to provide basic data for crop monitoring by comparing and analyzing changes in reflectance and vegetation index across multi-spectral sensors mounted on unmanned aerial vehicles. Four UAV-mounted multispectral sensors (RedEdge-MX, S110 NIR, Sequoia, and P4M) acquired aerial images on September 14 and 15, 2020, once in the morning and once in the afternoon each day, for a total of four flights, from which reflectance and vegetation indices were calculated and compared. For reflectance, the time-series coefficient of variation of all sensors averaged about 10% or more, indicating a limit to its direct use. The coefficient of variation of the vegetation index for the crop plots averaged 1.2 to 3.6% in plots with thick, vigorous vegetation, i.e. variability within 5%. However, this was higher than the coefficient of variation on a clear day, presumably because weather conditions such as cloud cover differed between morning and afternoon during the experiment; UAV flight plans should therefore be established with weather conditions in mind. Comparing NDVI between the UAV multispectral sensors showed that, in a stable light environment, several RedEdge-MX sensors of the same type can be used together without special correction of NDVI values. The RedEdge-MX, P4M, and Sequoia sensors showed linear relationships with one another, but supplementary experiments are needed to evaluate joint use through offset correction between their vegetation indices.
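The two statistics this comparison rests on can be computed directly: NDVI from red/NIR reflectance, and the coefficient of variation (CV, in %) across repeated acquisitions. A minimal sketch with made-up reflectance values (not the study's data) also shows why the NDVI series tends to be steadier than the raw reflectance:

```python
# Sketch: NDVI and the time-series coefficient of variation (CV, %).
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def cv_percent(values):
    """Sample standard deviation relative to the mean, in percent."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# Four acquisitions (morning/afternoon on two days) of a vegetated plot.
nir = np.array([0.45, 0.48, 0.43, 0.47])
red = np.array([0.06, 0.07, 0.06, 0.065])

ndvi_series = ndvi(nir, red)
reflectance_cv = cv_percent(nir)   # varies more with illumination
ndvi_cv = cv_percent(ndvi_series)  # the ratio partly cancels illumination
```

Because NDVI is a band ratio, multiplicative illumination changes largely cancel, which is consistent with the lower CVs reported for the vegetation index than for reflectance.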

Forest Damage Detection Using Daily Normal Vegetation Index Based on Time Series LANDSAT Images (시계열 위성영상 기반 평년 식생지수 추정을 통한 산림생태계 피해 탐지 기법)

  • Kim, Eun-sook; Lee, Bora; Lim, Jong-hwan
    • Korean Journal of Remote Sensing / v.35 no.6_2 / pp.1133-1148 / 2019
  • Tree growth and vitality in forests change seasonally, so detecting forest damage accurately requires satellite images from the same season before and after the damage. However, the temporal resolution of high- and medium-resolution imagery is very low, making it difficult to acquire same-season images. In this study, we therefore estimated spectral information for the same day of year (DOY) using time-series Landsat images and used those estimates as reference values for assessing forest damage. The study site is Hwasun, Jeollanam-do, where forest damage occurred due to hail and drought in 2017. Time-series vegetation index (NDVI, EVI, NDMI) maps were produced using all Landsat 8 images taken over the preceding three years, and daily normal vegetation index maps were produced through cloud removal and data interpolation. We analyzed the difference between the daily normal vegetation index before the damage event and the vegetation index observed after the event at the same DOY, applied forest damage criteria, and produced a forest damage map based on the daily normal vegetation index. The Landsat-based forest damage map detected subtle changes in vegetation vitality better than the existing map based on UAV images. In extremely damaged areas, the damage map based on NDMI, which uses the SWIR band, showed results similar to the existing forest damage map. The daily normal vegetation index map can be used to detect forest damage more rapidly and accurately.
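The core idea, estimating a "daily normal" index for any DOY from sparse past observations and flagging pixels that fall well below it, can be sketched with simple linear interpolation. The sample series and the damage threshold of -0.2 NDVI below are illustrative assumptions, not the paper's calibrated criteria.

```python
# Sketch: daily-normal NDVI from pooled past-year samples, then a
# same-DOY before/after comparison to flag damage.
import numpy as np

# Sparse cloud-free NDVI samples (DOY, NDVI) pooled from past years.
past_doy = np.array([20, 60, 100, 140, 180, 220, 260, 300, 340])
past_ndvi = np.array([0.25, 0.35, 0.55, 0.75, 0.85, 0.83, 0.70, 0.45, 0.28])

def daily_normal(doy):
    """Expected (normal) NDVI on a given day of year."""
    return float(np.interp(doy, past_doy, past_ndvi))

def is_damaged(doy, observed_ndvi, threshold=-0.2):  # threshold is assumed
    return (observed_ndvi - daily_normal(doy)) < threshold

healthy = is_damaged(200, 0.80)   # near the normal -> not flagged
damaged = is_damaged(200, 0.45)   # far below the normal -> flagged
```

Comparing against the interpolated normal for the *same* DOY is what removes the seasonal signal that would otherwise swamp the damage signal.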

Evaluation of Clustered Building Solid Model Automatic Generation Technique and Model Editing Function Based on Point Cloud Data (포인트 클라우드 데이터 기반 군집형 건물 솔리드 모델 자동 생성 기법과 모델 편집 기능 평가)

  • Kim, Han-gyeol; Lim, Pyung-Chae; Hwang, Yunhyuk; Kim, Dong Ha; Kim, Taejung; Rhee, Sooahm
    • Korean Journal of Remote Sensing / v.37 no.6_1 / pp.1527-1543 / 2021
  • In this paper, we explore the applicability and utility of a technique that automatically generates clustered solid building models from point clouds by applying it to various data. To improve models whose quality is insufficient due to the limitations of automatic building modeling, we developed building shape modification and texture correction techniques and confirmed the results through experiments. To explore the applicability of automatic building model generation, we experimented with point clouds generated from UAV imagery and with LiDAR (Light Detection and Ranging) data, applied the shape modification and texture correction techniques to the automatically generated building models, and performed experiments to improve model quality. These experiments confirmed both the applicability of point cloud-based automatic generation of clustered solid building models and the effectiveness of the quality improvement techniques. Compared with existing building modeling technology, our technique greatly reduces costs such as manpower and time and is expected to have strengths in managing modeling results.

A Study on Field Compost Detection by Using Unmanned AerialVehicle Image and Semantic Segmentation Technique based Deep Learning (무인항공기 영상과 딥러닝 기반의 의미론적 분할 기법을 활용한 야적퇴비 탐지 연구)

  • Kim, Na-Kyeong; Park, Mi-So; Jeong, Min-Ji; Hwang, Do-Hyun; Yoon, Hong-Joo
    • Korean Journal of Remote Sensing / v.37 no.3 / pp.367-378 / 2021
  • Field compost is a representative non-point pollution source in livestock farming. If field compost flows into the water system due to rainfall, nutrients such as phosphorus and nitrogen contained in it can degrade river water quality. In this paper, we propose a method for detecting field compost using unmanned aerial vehicle images and deep learning-based semantic segmentation. From 39 orthoimages acquired over the study area, about 30,000 training samples were obtained through data augmentation. A semantic segmentation algorithm developed on U-net, combined with OpenCV filtering, was then evaluated for accuracy. The pixel accuracy was 99.97%, precision 83.80%, recall 60.95%, and F1-score 70.57%. The low recall relative to precision stems from underestimating compost pixels when they make up a small proportion at the edges of an image. Accuracy could likely be improved by adding datasets with spectral bands beyond RGB.
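The four reported metrics all derive from the pixel-level confusion matrix of the predicted compost mask against ground truth. A hedged sketch (the tiny masks are synthetic, not the study's data) shows how a missed edge pixel depresses recall while precision stays high, mirroring the imbalance the abstract notes:

```python
# Sketch: pixel accuracy, precision, recall, and F1 for a binary mask.
import numpy as np

def segmentation_metrics(pred, truth):
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = int(np.sum(pred & truth))    # compost pixels correctly found
    fp = int(np.sum(pred & ~truth))   # background called compost
    fn = int(np.sum(~pred & truth))   # compost pixels missed
    tn = int(np.sum(~pred & ~truth))
    accuracy = (tp + tn) / pred.size
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

truth = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
pred = np.array([[0, 1, 0], [0, 1, 0], [0, 0, 0]])  # misses one edge pixel
acc, prec, rec, f1 = segmentation_metrics(pred, truth)
```

With a dominant background class, pixel accuracy stays near perfect even when recall on the minority compost class drops, which is why the study reports all four metrics rather than accuracy alone.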

Analysis on Mapping Accuracy of a Drone Composite Sensor: Focusing on Pre-calibration According to the Circumstances of Data Acquisition Area (드론 탑재 복합센서의 매핑 정확도 분석: 데이터 취득 환경에 따른 사전 캘리브레이션 여부를 중심으로)

  • Jeon, Ilseo; Ham, Sangwoo; Lee, Impyeong
    • Korean Journal of Remote Sensing / v.37 no.3 / pp.577-589 / 2021
  • Drone mapping systems can be applied to many fields, such as disaster damage investigation, environmental monitoring, and construction process monitoring. Integrating the individual sensors attached to a drone used to require complicated procedures, including time synchronization. Recently, various composite sensors have been released that combine visual sensors with GPS/INS; they integrate multi-sensor data internally and provide geotagged image files to users. To use composite sensors in drone mapping systems, therefore, their mapping accuracy should be examined. In this study, we analyzed the mapping accuracy of a composite sensor, focusing on the data acquisition area and the effect of pre-calibration. In the first experiment, we analyzed how mapping accuracy varies with the number of ground control points (GCPs): with 2 GCPs, the total RMSE was reduced by about 40 cm, from more than 1 m to about 60 cm. In the second experiment, we assessed mapping accuracy with and without pre-calibration. With a few ground control points, pre-calibration did not affect mapping accuracy; however, when the image sequences formed weak geometry, pre-calibration proved essential to reduce possible mapping errors, and in the absence of ground control points it also improved accuracy. Based on this study, we expect future drone mapping systems using composite sensors to streamline survey and calibration processes depending on the data acquisition circumstances.
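The "total RMSE" figures quoted above are typically computed over independent check points as the root mean square of the 3D coordinate errors. A minimal sketch with made-up coordinates (not the study's survey data):

```python
# Sketch: 3D total RMSE over check points, in the coordinates' units.
import numpy as np

def total_rmse(estimated_xyz, reference_xyz):
    diff = np.asarray(estimated_xyz, float) - np.asarray(reference_xyz, float)
    # Mean squared 3D error per point, then square root.
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))

reference = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
estimated = reference + np.array([[0.3, 0.0, 0.4],
                                  [0.0, 0.5, 0.0],
                                  [0.0, 0.0, 0.5]])
rmse = total_rmse(estimated, reference)
```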

Comparison of Deep Learning-based Unsupervised Domain Adaptation Models for Crop Classification (작물 분류를 위한 딥러닝 기반 비지도 도메인 적응 모델 비교)

  • Kwak, Geun-Ho; Park, No-Wook
    • Korean Journal of Remote Sensing / v.38 no.2 / pp.199-213 / 2022
  • Unsupervised domain adaptation can resolve the impracticality of repeatedly collecting high-quality training data every year for annual crop classification. This study evaluates the applicability of deep learning-based unsupervised domain adaptation models to crop classification. Three models, a deep adaptation network (DAN), a deep reconstruction-classification network, and a domain adversarial neural network (DANN), are quantitatively compared in a crop classification experiment using unmanned aerial vehicle images of Hapcheon-gun and Changnyeong-gun, the major garlic and onion cultivation areas in Korea. Convolutional neural networks (CNNs) are additionally applied as source and target baseline models to benchmark the classification performance of the unsupervised domain adaptation models. All three domain adaptation models outperformed the source baseline CNN, but their relative performance depended on the degree of inconsistency between the data distributions of the source and target images: DAN achieved higher accuracy than the other two models when the inconsistency was low, whereas DANN performed best when it was high. The degree to which the source and target data distributions match should therefore be considered when selecting an unsupervised domain adaptation model that yields reliable classification results.
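One concrete way to quantify the source/target "inconsistency" the model choice hinges on is the Maximum Mean Discrepancy (MMD), the statistic DAN minimizes during training. The sketch below is a biased RBF-kernel MMD estimate on synthetic feature vectors; the kernel bandwidth and data are illustrative assumptions, not the compared models' implementation.

```python
# Sketch: biased squared MMD with an RBF kernel exp(-gamma * ||a-b||^2),
# as a measure of how far apart source and target feature distributions are.
import numpy as np

def mmd2(x, y, gamma=0.05):          # gamma is an assumed bandwidth
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

rng = np.random.default_rng(1)
source = rng.normal(0.0, 1.0, (100, 8))
target_close = rng.normal(0.1, 1.0, (100, 8))   # small domain shift
target_far = rng.normal(2.0, 1.0, (100, 8))     # large domain shift

shift_small = mmd2(source, target_close)
shift_large = mmd2(source, target_far)
```

A larger MMD between source- and target-year image features would, following the study's finding, argue for an adversarial model such as DANN over the MMD-matching DAN.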

Post-processing Method of Point Cloud Extracted Based on Image Matching for Unmanned Aerial Vehicle Image (무인항공기 영상을 위한 영상 매칭 기반 생성 포인트 클라우드의 후처리 방안 연구)

  • Rhee, Sooahm; Kim, Han-gyeol; Kim, Taejung
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1025-1034 / 2022
  • In this paper, we propose a post-processing method that interpolates the hole regions that occur when extracting point clouds. When image matching is performed on stereo image data, holes occur due to occlusion and building façade areas; these can obstruct the creation of further products based on the point cloud, so an effective processing technique is required. First, an initial point cloud is extracted from the disparity map generated by stereo image matching and transformed into a grid, and the hole areas caused by occlusion and building façades are extracted. By repeatedly creating Triangulated Irregular Network (TIN) triangles in each hole area and setting the interior values of each triangle to the minimum height of the area, interpolation can be performed smoothly between a building and the surrounding ground surface. A new point cloud is created by adding, as points, the location information corresponding to the interpolated areas from the grid data. To minimize the addition of unnecessary points during interpolation, data interpolated outside the initial point cloud extent were not processed. The RGB brightness of each interpolated point was taken from the stereo image whose pixel was closest to the shooting center among the images used for matching. The proposed technique was confirmed to effectively process the occluded areas remaining after generating the point cloud of the target area.
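The minimum-height TIN fill described above can be sketched on a tiny gridded height field. This is a simplified reconstruction with illustrative data, not the authors' code: valid cells around a hole are triangulated, and each hole cell takes the minimum height of its enclosing triangle's vertices, so filled cells next to a façade stay at ground level instead of smearing the roof height downward.

```python
# Sketch: fill NaN (hole) grid cells with the minimum vertex height of the
# enclosing Delaunay/TIN triangle built from valid cells.
import numpy as np
from scipy.spatial import Delaunay

# 2.5D height grid; NaN marks a hole (occlusion / facade gap).
grid = np.array([[1.0, 1.0,    1.0,    1.0],
                 [1.0, np.nan, np.nan, 1.0],
                 [1.0, np.nan, np.nan, 9.0],   # 9.0: building roof cell
                 [1.0, 1.0,    1.0,    9.0]])

valid = ~np.isnan(grid)
pts = np.argwhere(valid).astype(float)   # (row, col) of valid cells
heights = grid[valid]
tri = Delaunay(pts)

filled = grid.copy()
for r, c in np.argwhere(np.isnan(grid)):
    t = tri.find_simplex(np.array([r, c], float))
    if t >= 0:
        # Minimum vertex height of the enclosing TIN triangle.
        filled[r, c] = heights[tri.simplices[t]].min()
```

Taking the triangle's minimum rather than a barycentric average is the choice that keeps façade-adjacent fills at the ground surface, matching the "without awkwardness between the building and the ground" behavior the abstract describes.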