• Title/Summary/Keyword: UAV image

Automated 3D Model Reconstruction of Disaster Site Using Aerial Imagery Acquired By Drones

  • Kim, Changyoon;Moon, Hyounseok;Lee, Woosik
    • International conference on construction engineering and project management
    • /
    • 2015.10a
    • /
    • pp.671-672
    • /
    • 2015
  • Because of the harsh conditions of disaster areas, understanding the current state of collapsed buildings, terrain, and other infrastructure is a critical issue for disaster managers. However, the difficulty of acquiring geographical information over a large disaster site with a limited number of rescue workers means that a comprehensive investigation of the locations of survivors buried under building debris is no easy task. To overcome these circumstances, this study uses an unmanned aerial vehicle, commonly known as a drone, to acquire up-to-date image data over large disaster areas efficiently. A framework for 3D model reconstruction of the disaster site from the drone-acquired aerial imagery is also presented. The proposed methodology is expected to help rescue workers and disaster managers identify survivors under collapsed buildings rapidly and accurately.

Sorghum Panicle Detection using YOLOv5 based on RGB Image Acquired by UAV System (무인기로 취득한 RGB 영상과 YOLOv5를 이용한 수수 이삭 탐지)

  • Min-Jun, Park;Chan-Seok, Ryu;Ye-Seong, Kang;Hye-Young, Song;Hyun-Chan, Baek;Ki-Su, Park;Eun-Ri, Kim;Jin-Ki, Park;Si-Hyeong, Jang
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.24 no.4
    • /
    • pp.295-304
    • /
    • 2022
  • The purpose of this study is to detect sorghum panicles using YOLOv5 on RGB images acquired by an unmanned aerial vehicle (UAV) system. The high-resolution images, acquired on September 2, 2022 with the RGB camera mounted on the UAV, were split into 512×512 tiles for YOLOv5 analysis, and sorghum panicles were labeled as bounding boxes in the split images. 2,000 images of 512×512 size were divided at a ratio of 6:2:2 to train, validate, and test the YOLOv5 model, respectively. With YOLOv5s, which has the fewest parameters among the YOLOv5 models, sorghum panicles were detected with mAP@50 = 0.845; with the larger YOLOv5m, mAP@50 = 0.844. Although the performance of the two models is similar, YOLOv5s trained faster (4 hours 35 minutes) than YOLOv5m (5 hours 15 minutes), so in terms of time cost the YOLOv5s model was considered more efficient for detecting sorghum panicles. As an important step toward predicting sorghum yield, this study presents a technique for detecting sorghum panicles from high-resolution RGB images with the YOLOv5 model.
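The tiling and 6:2:2 split described in the abstract can be sketched in plain Python. The frame size, seed, and helper names below are illustrative assumptions, not details from the paper.

```python
import random

def tile_coords(width, height, tile=512):
    """Enumerate top-left corners of non-overlapping tile x tile windows.
    Edge tiles are shifted inward so every window stays inside the image."""
    xs = list(range(0, width - tile + 1, tile))
    ys = list(range(0, height - tile + 1, tile))
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y) for y in ys for x in xs]

def split_dataset(items, ratios=(0.6, 0.2, 0.2), seed=42):
    """Shuffle and split an item list into train/val/test at the given ratios."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]

tiles = tile_coords(5472, 3648)          # assumed UAV frame size, for illustration
train, val, test = split_dataset(range(2000))
print(len(train), len(val), len(test))   # 1200 400 400
```

The tile lists would then drive cropping and label generation in a YOLOv5 dataset layout.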

Accuracy Analysis of Cadastral Control Point and Parcel Boundary Point by Flight Altitude Using UAV (UAV를 활용한 비행고도별 지적기준점 및 필지경계점 정확도 분석)

  • Kim, Jung Hoon;Kim, Jun Hyun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.36 no.4
    • /
    • pp.223-233
    • /
    • 2018
  • In this study, cadastral control points and parcel boundary points were surveyed at UAV (Unmanned Aerial Vehicle) flight altitudes of 40 m and 100 m, and the coordinates extracted from the resulting orthophotos were compared with coordinates from a GNSS (Global Navigation Satellite System) ground survey. First, in the spatial-resolution analysis, the average orthoimage error was 0.024 m at 40 m altitude and 0.034 m at 100 m, so the 40 m flight gave better spatial resolution and positional accuracy. Second, to analyze image-recognition accuracy by air-mark type, three cases (no mark, green, and red) were compared; the best case reached RMSE (Root Mean Square Error) values of X = 0.039 m, Y = 0.019 m, and Z = 0.055 m. Third, comparing the orthophotos with the field survey, the total RMSE was X = 0.029 m, Y = 0.028 m, H = 0.051 m for the cadastral control points and X = 0.041 m, Y = 0.030 m for the parcel boundary points. In conclusion, if the legal regulations on orthophotos for cadastral surveying limit the average error to less than 0.05 m, the method is expected to be economical and efficient both for cadastral surveying and for spatial-information acquisition.
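The per-axis RMSE figures quoted above are computed from coordinate residuals as follows; the residual values in this example are made up and are not the paper's data.

```python
import math

def rmse_per_axis(measured, reference):
    """Root mean square error for each coordinate axis between two point sets.
    `measured` holds orthophoto-derived (x, y) coordinates and `reference`
    the GNSS ground-survey coordinates (assumed layout for illustration)."""
    n = len(measured)
    sq = [0.0, 0.0]
    for (mx, my), (rx, ry) in zip(measured, reference):
        sq[0] += (mx - rx) ** 2
        sq[1] += (my - ry) ** 2
    return tuple(math.sqrt(s / n) for s in sq)

# Hypothetical boundary-point residuals in metres, not the paper's data.
ortho = [(0.03, 0.02), (-0.05, 0.04), (0.04, -0.03)]
gnss = [(0.0, 0.0)] * 3
print(rmse_per_axis(ortho, gnss))
```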

Characterizing three-dimensional mixing process in river confluence using acoustical backscatter as surrogate of suspended sediment (부유사 지표로 초음파산란도를 활용한 합류부 3차원 수체혼합 특성 도출)

  • Son, Geunsoo;Kim, Dongsu;Kwak, Sunghyun;Kim, Young Do;Lyu, Siwan
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.3
    • /
    • pp.167-179
    • /
    • 2021
  • To characterize the mixing process at a confluence and understand the impact of one river on another, it is crucial to analyze the spatial mixing patterns of the main stream under various tributary inflow conditions. However, most conventional studies have relied on hydraulic or water-quality numerical models, because acquiring in-situ data over a wide spatial range is difficult. In this study, backscatter (or SNR) measured by ADCPs was used to track sediment mixing, on the assumption that it can serve as a surrogate for suspended sediment concentration. The raw backscatter data were corrected for beam spreading and absorption by water. An optical laser-diffraction instrument (LISST) was used to verify the acoustic backscatter method and to collect the particle size distributions of the main stream and the tributary. In addition, image-based spatial distributions of sediment mixing in the confluence were monitored under various flow conditions using an unmanned aerial vehicle (UAV) and compared with the spatial distribution of acoustic backscatter. The results show that, when ADCP backscatter is properly processed, it is a suitable indicator of the spatial patterns of the three-dimensional mixing process between two rivers. Flow and sediment mixing characteristics were investigated at the confluence of the Nakdong and Nam rivers.
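A common form of the beam-spreading and absorption correction mentioned above adds the two-way spherical-spreading loss 20·log10(R) and the two-way absorption loss 2·α·R to the raw echo intensity. The absorption coefficient below is an assumed placeholder; real values depend on acoustic frequency, temperature, and salinity.

```python
import math

def corrected_backscatter(echo_db, range_m, alpha_db_per_m=0.08):
    """Compensate raw ADCP echo intensity (dB) for two-way spherical
    spreading (20*log10(R)) and water absorption (2*alpha*R).
    alpha_db_per_m is an assumed illustrative value."""
    return echo_db + 20.0 * math.log10(range_m) + 2.0 * alpha_db_per_m * range_m

# The same raw echo at two ranges: the far cell receives a larger correction.
near = corrected_backscatter(70.0, 2.0)
far = corrected_backscatter(70.0, 10.0)
print(round(near, 2), round(far, 2))  # 76.34 91.6
```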

Post-processing Method of Point Cloud Extracted Based on Image Matching for Unmanned Aerial Vehicle Image (무인항공기 영상을 위한 영상 매칭 기반 생성 포인트 클라우드의 후처리 방안 연구)

  • Rhee, Sooahm;Kim, Han-gyeol;Kim, Taejung
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.1025-1034
    • /
    • 2022
  • In this paper, we propose a post-processing method that interpolates the hole regions that occur when point clouds are extracted. When image matching is performed on stereo image data, holes occur in occluded areas and on building façades. These areas can become obstacles to creating further products from the point cloud, so an effective processing technique is required. First, an initial point cloud is extracted from the disparity map generated by stereo image matching and transformed into a grid, and the hole areas caused by occlusion and building façades are extracted. By repeatedly creating Triangulated Irregular Network (TIN) triangles over each hole area and assigning the interior of each triangle the minimum height value of the area, interpolation can be performed without awkward transitions between a building and the surrounding ground surface. A new point cloud is created by adding, as points, the location information of the interpolated areas from the grid data. To minimize the addition of unnecessary points, interpolated data falling outside the initial point cloud area were not processed. The RGB brightness value of each interpolated point was taken from the stereo image whose pixel was closest to the shooting center among the images used for matching. The proposed technique was confirmed to process effectively the occluded areas remaining after the point cloud of the target area was generated.
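The key idea, filling holes with the minimum height of the surrounding surface so building façades collapse to ground level rather than roof level, can be illustrated with a simplified grid fill. This min-of-neighbours sketch is a hypothetical stand-in for the paper's TIN-based interpolation, which likewise assigns the hole the minimum height of the enclosing triangle.

```python
def fill_holes_min(grid, nodata=None):
    """Fill nodata cells in a height grid with the minimum valid height of
    their 8-neighbourhood, iterating until no cell changes. Simplified
    stand-in for TIN-based minimum-height interpolation."""
    rows, cols = len(grid), len(grid[0])
    grid = [row[:] for row in grid]
    changed = True
    while changed:
        changed = False
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] is not nodata:
                    continue
                neigh = [grid[r + dr][c + dc]
                         for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                         if (dr or dc) and 0 <= r + dr < rows
                         and 0 <= c + dc < cols
                         and grid[r + dr][c + dc] is not nodata]
                if neigh:
                    grid[r][c] = min(neigh)
                    changed = True
    return grid

dsm = [[10.0, 10.0, 10.0],
       [10.0, None, 2.0],   # hole beside a building edge (made-up heights)
       [2.0,  2.0,  2.0]]
print(fill_holes_min(dsm)[1][1])  # 2.0 -- ground level, not the roof height
```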

Object-based Building Change Detection Using Azimuth and Elevation Angles of Sun and Platform in the Multi-sensor Images (태양과 플랫폼의 방위각 및 고도각을 이용한 이종 센서 영상에서의 객체기반 건물 변화탐지)

  • Jung, Sejung;Park, Jueon;Lee, Won Hee;Han, Youkyung
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_2
    • /
    • pp.989-1006
    • /
    • 2020
  • Building change monitoring based on building detection is one of the most important applications of monitoring artificial structures with high-resolution multi-temporal images such as those from CAS500-1 and 2, which are scheduled to be launched. However, the varied shapes and sizes of buildings, as well as the shadows and trees around them, make accurate building detection difficult, and relief displacement caused by the azimuth and elevation angles of the platform produces a large number of misdetections. In this study, object-based building detection was performed using the azimuth angle of the Sun and the corresponding main direction of shadows to improve building change detection; the platform's azimuth and elevation angles were then used to detect changed buildings. Object-based segmentation was applied to the high-resolution imagery, shadow objects were classified by shadow intensity, and feature information such as rectangular fit, Gray-Level Co-occurrence Matrix (GLCM) homogeneity, and the area of each object was calculated to detect building candidates. The final buildings were then detected using the direction and distance relationship between the center of each building-candidate object and its shadow according to the azimuth angle of the Sun. Three methods were proposed for detecting changes between the building objects detected in each image: simple overlay between objects, comparison of object sizes according to the elevation angle of the platform, and consideration of the direction between objects according to the azimuth angle of the platform. A residential area was selected as the study area, using high-resolution imagery acquired from KOMPSAT-3 and an Unmanned Aerial Vehicle (UAV).
Experimental results show that the F1-scores of building detection using feature information alone were 0.488 for the KOMPSAT-3 image and 0.696 for the UAV image, whereas the F1-scores considering shadows were 0.876 and 0.867, respectively, indicating that the shadow-based detection method is more accurate. Among the three proposed change detection methods, considering the direction between objects according to the azimuth angle of the platform gave the highest F1-score, 0.891.
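The Sun-azimuth constraint used to pair buildings with their shadows can be sketched as a direction check: the shadow should fall opposite the Sun, i.e. at the solar azimuth plus 180°. The centroid pairing and the angular tolerance below are illustrative assumptions, not values from the paper.

```python
import math

def shadow_direction_ok(building_xy, shadow_xy, sun_azimuth_deg, tol_deg=30.0):
    """Check that the vector from a building-candidate centroid to its shadow
    centroid points away from the Sun. Azimuth is measured clockwise from
    north; the shadow falls at sun_azimuth + 180 degrees. tol_deg is an
    assumed tolerance for illustration."""
    dx = shadow_xy[0] - building_xy[0]   # east component
    dy = shadow_xy[1] - building_xy[1]   # north component
    observed = math.degrees(math.atan2(dx, dy)) % 360.0
    expected = (sun_azimuth_deg + 180.0) % 360.0
    diff = abs(observed - expected)
    return min(diff, 360.0 - diff) <= tol_deg

# Sun in the southeast (azimuth 135 deg): the shadow extends to the northwest.
print(shadow_direction_ok((0.0, 0.0), (-3.0, 3.0), 135.0))  # True
```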

Accuracy Assessment on the Stereoscope based Digital Mapping Using Unmanned Aircraft Vehicle Image (무인항공기 영상을 이용한 입체시기반 수치도화 정확도 평가)

  • Yun, Kong-Hyun;Kim, Deok-In;Song, Yeong Sun
    • Journal of Cadastre & Land InformatiX
    • /
    • v.48 no.1
    • /
    • pp.111-121
    • /
    • 2018
  • In this research, digital elevation models, a true-ortho image, and 3-dimensional digitally compiled data were generated from unmanned aerial vehicle stereoscopic images by applying photogrammetric principles, and their accuracy was evaluated. Stereoscopic viewing requires a digital photogrammetric workstation; in this study GEOMAPPER 1.0, developed by the Ministry of Trade, Industry and Energy, was used. To realize stereoscopic vision from two overlapping unmanned aerial vehicle images, the interior and exterior orientation parameters must be calculated, and in particular the lens distortion of the non-metric camera must be accurately compensated. In this work, the photogrammetric orientation process was conducted using the commercial software PhotoScan 1.4. A fixed-wing KRobotics KD-2 was used to acquire the UAV images. A true-ortho photo was generated and a digital topographic map was partially produced. Finally, an error analysis of the generated digital compiled map is presented. The results confirm that digital terrain maps at scales of 1:2,500 to 1:3,000 can be produced using the stereoscopic method.
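Lens distortion compensation for non-metric cameras of the kind mentioned above is commonly modelled with Brown-Conrady radial (k1, k2) and tangential (p1, p2) terms applied to normalized image coordinates. The coefficients below are purely illustrative, not calibration results from the paper; in practice they come from self-calibration in software such as PhotoScan.

```python
def undistort_normalized(x, y, k1, k2, p1, p2):
    """Apply Brown-Conrady radial (k1, k2) and tangential (p1, p2) terms to
    normalized image coordinates. Illustrative sketch; sign and direction
    conventions vary between software packages."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_u = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_u = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_u, y_u

# Made-up coefficients for demonstration only.
xu, yu = undistort_normalized(0.1, 0.2, k1=-0.1, k2=0.01, p1=0.001, p2=-0.001)
print(xu, yu)
```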

Extraction of Individual Trees and Tree Heights for Pinus rigida Forests Using UAV Images (드론 영상을 이용한 리기다소나무림의 개체목 및 수고 추출)

  • Song, Chan;Kim, Sung Yong;Lee, Sun Joo;Jang, Yong Hwan;Lee, Young Jin
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.6_1
    • /
    • pp.1731-1738
    • /
    • 2021
  • The objective of this study was to extract individual trees and tree heights from UAV images. The study site was the Gongju National University experimental forest, located in Yesan-gun, Chungcheongnam-do. The thinning-intensity study sites consisted of 40% thinning, 20% thinning, 10% thinning, and a control. Images were taken with DJI's "Mavic Pro 2" at a flight altitude of 180 m; to prevent image distortion, ground reference points were installed and both the end lap and side lap were set to 80%. Tree heights were extracted using a Digital Surface Model (DSM) and a Digital Terrain Model (DTM), and individual trees were segmented and extracted using object-based analysis. For individual-tree extraction, the 40% thinning stands showed the highest extraction rate of 109.1%, while 20% thinning showed 87.1%, 10% thinning 63.5%, and the control sites 56.0%. For tree-height extraction, 40% thinning showed an error of 1.43 m compared with the field survey data, while 20% thinning showed 1.73 m, 10% thinning 1.88 m, and the control sites the largest error of 2.22 m.
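Tree-height extraction from a DSM and DTM reduces to a cell-wise difference (a canopy height model). The local-maxima tree detector below is a common simplified stand-in for the paper's object-based segmentation, and the tiny grids are made-up values for illustration.

```python
def canopy_height_model(dsm, dtm):
    """Tree-height raster as the cell-wise difference DSM - DTM."""
    return [[s - t for s, t in zip(srow, trow)] for srow, trow in zip(dsm, dtm)]

def local_maxima(chm, min_height=2.0):
    """Treat every cell that is at least min_height and strictly taller than
    its 8 neighbours as one tree top -- a simplified individual-tree
    detector, not the paper's object-based method."""
    rows, cols = len(chm), len(chm[0])
    tops = []
    for r in range(rows):
        for c in range(cols):
            v = chm[r][c]
            if v < min_height:
                continue
            neigh = [chm[r + dr][c + dc]
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr or dc) and 0 <= r + dr < rows and 0 <= c + dc < cols]
            if all(v > n for n in neigh):
                tops.append((r, c, v))
    return tops

dsm = [[105.0, 106.0, 105.0],
       [106.0, 112.0, 106.0],
       [105.0, 106.0, 105.0]]
dtm = [[100.0] * 3 for _ in range(3)]
print(local_maxima(canopy_height_model(dsm, dtm)))  # [(1, 1, 12.0)]
```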

The Precise Three Dimensional Phenomenon Modeling of the Cultural Heritage based on UAS Imagery (UAS 영상기반 문화유산물의 정밀 3차원 현상 모델링)

  • Lee, Yong-Chang;Kang, Joon-Oh
    • Journal of Cadastre & Land InformatiX
    • /
    • v.49 no.1
    • /
    • pp.85-101
    • /
    • 2019
  • Recently, thanks to the popularization of light-weight drones driven by significant developments in computer technology and automated photogrammetric procedures, Unmanned Aircraft Systems have attracted growing interest across industry. Documentation, maintenance, and restoration projects for large-scale cultural properties require accurate 3D phenomenon modeling and efficient visual inspection methods. The object of this study is to verify the accuracy of 3D phenomenon reconstruction by UAS photogrammetry and its validity for the preservation, maintenance, and restoration of large-scale cultural property. The test object is the rock-carved standing Bodhisattva in Soraesan Mountain, Siheung (Treasure No. 1324), documented in the Goryeo Period (918-1392). This standing Bodhisattva is of particular interest because it is the largest stone Buddha carved in a rock wall and wears a lotus-shaped crown decorated with arabesque patterns. After 3D phenomenon models were created in a real-world coordinate system from the photos, the positional accuracy of UAS photogrammetry was compared with non-target total station survey results at the check points, and the quantified information documented by the Cultural Heritage Administration was compared with the UAS-derived line drawing of the Bodhisattva. In particular, the validity of UAS photogrammetry as an alternative to visual inspection was tested: the effectiveness of the two techniques, as well as the relative movement of the rock surface over about two years, was examined through superposition analysis of 3D point cloud models produced by UAS image analysis and by ground laser scanning. The comparison studies and experimental results demonstrate the accuracy and efficiency of UAS photogrammetry for 3D phenomenon modeling, maintenance, and restoration of various large-scale cultural heritage objects.
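The superposition analysis of the UAS and laser-scan point clouds rests on cloud-to-cloud distances, which can be sketched as a brute-force nearest-neighbour search. The coordinates below are made up; a real comparison would use millions of points and a k-d tree.

```python
import math

def cloud_to_cloud(cloud_a, cloud_b):
    """For each point in cloud_a, the distance to its nearest neighbour in
    cloud_b -- the quantity behind point cloud superposition analysis.
    Brute force O(n*m), for illustration only."""
    dists = []
    for ax, ay, az in cloud_a:
        best = min(math.dist((ax, ay, az), (bx, by, bz))
                   for bx, by, bz in cloud_b)
        dists.append(best)
    return dists

epoch1 = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
epoch2 = [(0.0, 0.0, 0.002), (1.0, 0.0, 0.005)]  # made-up surface change
print(cloud_to_cloud(epoch1, epoch2))
```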

A study on the analysis of current status of Seonakdong River algae using hyperspectral imaging (초분광영상을 이용한 서낙동강 조류 발생현황 분석에 관한 연구)

  • Kim, Jongmin;Gwon, Yeonghwa;Park, Yelim;Kim, Dongsu;Kwon, Jae Hyun;Kim, Young Do
    • Journal of Korea Water Resources Association
    • /
    • v.55 no.4
    • /
    • pp.301-308
    • /
    • 2022
  • Algae are indispensable primary producers that supply energy to consumers in aquatic ecosystems, and are broadly divided into green algae, blue-green algae, and diatoms. Blue-green algae proliferate in summer as the water temperature rises and are the main cause of algal blooms, and climate change is now shifting the timing and increasing the frequency of bloom occurrence. Existing algae surveys are performed by collecting water samples and measuring them with sensors, which is limited in time, cost, and manpower. To overcome the limitations of these existing monitoring methods, research has been conducted on remote monitoring with multispectral and hyperspectral instruments carried on satellites, UAVs, and other platforms. In this study, we examined the feasibility of classifying algal species by remote monitoring, through laboratory-scale experiments with cultured algae and collected river water. A hyperspectral sensor covering 400-1000 nm was used to acquire the images. To extract the spectral characteristics of the collected river water for species classification, samples were prepared by filtration with a GF/C filter and imaged. Radiometric correction and baseline removal were performed on the collected images, spectral information was extracted for each sample, and the spectral characteristics of the algae were identified, compared, and analyzed. On this basis, we reviewed the applicability of hyperspectral-image-based remote monitoring to rivers and lakes.
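How a spectral feature can separate algal groups can be illustrated with a simple two-band ratio. The ~620 nm absorption used below corresponds to phycocyanin, a pigment characteristic of blue-green algae; the ratio itself and the synthetic spectrum are hypothetical illustrations, not the paper's classification method.

```python
def band_index(wavelengths, target_nm):
    """Index of the band whose centre wavelength is closest to target_nm."""
    return min(range(len(wavelengths)),
               key=lambda i: abs(wavelengths[i] - target_nm))

def phycocyanin_ratio(wavelengths, reflectance):
    """Two-band ratio around the ~620 nm phycocyanin absorption feature.
    Illustrative only; values well above 1 suggest a 620 nm dip."""
    r620 = reflectance[band_index(wavelengths, 620)]
    r650 = reflectance[band_index(wavelengths, 650)]
    return r650 / r620

wl = list(range(400, 1001, 10))               # 400-1000 nm sensor range
spectrum = [0.05] * len(wl)                   # flat synthetic reflectance
spectrum[band_index(wl, 620)] = 0.02          # synthetic absorption dip
print(round(phycocyanin_ratio(wl, spectrum), 2))  # 2.5
```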