• Title/Summary/Keyword: UAV images


BATHYMETRIC MODULATION ON WAVE SPECTRA

  • Liu, Cho-Teng;Doong, Dong-Jiing
    • Proceedings of the KSRS Conference / 2008.10a / pp.344-347 / 2008
  • Ocean surface waves may be modified by ocean currents, and their observation may be severely distorted if the observer is on a moving platform with changing speed. Tidal current near a sill varies inversely with water depth and results in spatially inhomogeneous modulation of the surface waves near the sill. Waves propagating upstream encounter stronger current before reaching the sill; their wavelength shortens while the frequency is unchanged, their amplitude increases, and they may break if the wave height exceeds 1/7 of the wavelength. These small-scale (~1 km) changes are not suitable for satellite radar observation. The spatial distribution of wave-height spectra S(x, y) cannot be acquired from wave gauges, which are designed to collect 2-D wave spectra at fixed locations, nor from satellite radar images, which are better suited to observing long swells. Optical images collected from cameras on board a ship, over high ground, or on board an unmanned aerial vehicle (UAV) may have a pixel size small enough to resolve decimeter-scale short gravity waves. If diffuse sky light is the only source of lighting and is uniform across camera-viewing directions, then the image intensity is proportional to the surface reflectance R(x, y) of diffuse light, and R is directly related to the surface slope. The slope spectrum and the wave-height spectra S(x, y) may then be derived from R(x, y). The results are compared with in situ measurements of wave spectra over Keelung Sill from a research vessel. This method is intended for the analysis and interpretation of satellite images in studies of current-wave interaction, which often require fine-scale wave-height spectra S(x, y) that change dynamically in time and space.
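The reflectance-to-spectrum step described above can be sketched as follows. Under the stated assumption that image intensity is proportional to surface slope, a 2-D FFT of the image gives a slope spectrum, and dividing by the squared wavenumber recovers a height spectrum (since the slope spectrum equals |k|² times the height spectrum). This is a minimal illustration with proportionality constants omitted; it is not the authors' calibrated procedure:

```python
import numpy as np

def wave_height_spectrum(reflectance, dx=0.1):
    """Estimate a wave-height spectrum from a diffuse-light reflectance
    image R(x, y), treating intensity as a proxy for surface slope.
    Illustrative sketch only; calibration constants are omitted."""
    ny, nx = reflectance.shape
    # 2-D FFT of the slope proxy -> slope spectrum
    slope_fft = np.fft.fftshift(np.fft.fft2(reflectance - reflectance.mean()))
    slope_spec = np.abs(slope_fft) ** 2
    # Wavenumber grid (rad per unit length) for pixel spacing dx
    kx = np.fft.fftshift(np.fft.fftfreq(nx, d=dx)) * 2 * np.pi
    ky = np.fft.fftshift(np.fft.fftfreq(ny, d=dx)) * 2 * np.pi
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    k2[k2 == 0] = np.inf          # suppress the DC component
    # slope spectrum = |k|^2 * height spectrum  =>  divide by |k|^2
    return slope_spec / k2
```

In practice the pixel spacing `dx` would come from the camera geometry, and the spectrum would be computed over sliding windows to obtain the spatially varying S(x, y).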


The Use of Unmanned Aerial Vehicle for Monitoring Individuals of Ardeidae Species in Breeding Habitat: A Case study on Natural Monument in Sinjeop-ri, Yeoju, South Korea (백로류 집단번식지의 개체수 모니터링을 위한 무인항공기 활용연구 - 천연기념물 209호 여주 신접리 백로와 왜가리 번식지를 대상으로 -)

  • Park, Hyun-Chul;Kil, Sung-Ho;Seo, Ok-Ha
    • Journal of the Korean Society of Environmental Restoration Technology / v.22 no.1 / pp.73-84 / 2019
  • This is a basic study on surveying bird populations with UAVs. The study area is an Ardeidae species (ASP) breeding habitat under long-term monitoring. The purpose of the study is to compare ASP population counts derived from ground observational surveys with counts derived from UAV imagery. We used DJI Mavic Pro and Phantom 4 aircraft. Before counting the ASP population, we measured the escape distance induced by each UAV; the escape distances for the two UAV models were statistically significantly different, a result likely attributable to differences in UAV size and rotor (rotary-wing) noise. The ASP populations counted from ground observation and from UAV imagery differed greatly: the mean count from ground observation was 174.9, whereas that from UAV imagery was 247.1 to 249.9. The analysis of UAV imagery indicates that the lower the UAV camera altitude, the higher the counted ASP population, because lower altitude yields higher image resolution and easier identification of individual birds. Comparing images taken at various altitudes, however, the differences in counted individuals were not statistically significant, because the resolution of the Phantom was superior to that of the Mavic Pro. Our research is fundamental compared with similar studies, but relying on ground observation alone for long-term monitoring of ASP in South Korea limits the reliability of the monitoring results. We suggest how UAVs can be used to improve long-term monitoring of ASP habitat.

A review of ground camera-based computer vision techniques for flood management

  • Sanghoon Jun;Hyewoon Jang;Seungjun Kim;Jong-Sub Lee;Donghwi Jung
    • Computers and Concrete / v.33 no.4 / pp.425-443 / 2024
  • Floods are among the most common natural hazards in urban areas. To mitigate the problems caused by flooding, unstructured data such as images and videos collected from closed circuit televisions (CCTVs) or unmanned aerial vehicles (UAVs) have been examined for flood management (FM). Many computer vision (CV) techniques have been widely adopted to analyze imagery data. Although some papers have reviewed recent CV approaches that utilize UAV images or remote sensing data, less effort has been devoted to studies that have focused on CCTV data. In addition, few studies have distinguished between the main research objectives of CV techniques (e.g., flood depth and flooded area) for a comprehensive understanding of the current status and trends of CV applications for each FM research topic. Thus, this paper provides a comprehensive review of the literature that proposes CV techniques for aspects of FM using ground camera (e.g., CCTV) data. Research topics are classified into four categories: flood depth, flood detection, flooded area, and surface water velocity. These application areas are subdivided into three types: urban, river and stream, and experimental. The adopted CV techniques are summarized for each research topic and application area. The primary goal of this review is to provide guidance for researchers who plan to design a CV model for specific purposes such as flood-depth estimation. Researchers should be able to draw on this review to construct an appropriate CV model for any FM purpose.

Image matching and geometric correction scheme for flood detection with UAV images (홍수 감지를 위한 무인기 획득 영상의 매칭 및 기하보정 기법)

  • Shin, Won-Jae;Lee, Min-Seob;Kwon, Eun-Jeong;Lee, Hyun-Woo;Lee, Yong-Tae
    • Proceedings of the Korea Information Processing Society Conference / 2017.04a / pp.1029-1030 / 2017
  • Whereas existing disaster monitoring and management services rely on simple human-operated monitoring for response, the service described in this paper uses UAVs to capture, monitor, and analyze disaster situations occurring in human blind spots. Through real-time processing and analysis of data from the multiple sensors mounted on the UAV, it supports detection, prediction, and situational response for localized flood disasters, and, by interworking with an integrated alert system, delivers disaster information to the public. A front-to-end system capable of providing this service has been developed and tested in the laboratory, and a field test to verify disaster monitoring and prediction performance in a real environment is in preparation. This paper briefly introduces the Smart Eye platform for flood disaster management currently under construction and discusses one of its key functions, the geometric correction of UAV-captured images.
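The geometric-correction step discussed above is commonly built on matched point pairs between a UAV frame and a reference image: a planar transform (homography) is estimated from the matches and used to warp the frame. A minimal direct linear transform (DLT) sketch is shown below; this is a generic illustration, not the paper's implementation, and a real pipeline would add RANSAC and coordinate normalization to reject bad matches:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst from >= 4 matched
    point pairs via the direct linear transform (DLT).
    Illustrative sketch of the geometric-correction step."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows of the DLT system A h = 0
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (smallest singular value)
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, pt):
    """Apply homography H to a single (x, y) point."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w
```

Given the estimated H, every pixel of the UAV frame can be resampled into the reference frame's coordinates, which is what aligns successive flood-scene images for change analysis.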

Development of a SLAM System for Small UAVs in Indoor Environments using Gaussian Processes (가우시안 프로세스를 이용한 실내 환경에서 소형무인기에 적합한 SLAM 시스템 개발)

  • Jeon, Young-San;Choi, Jongeun;Lee, Jeong Oog
    • Journal of Institute of Control, Robotics and Systems / v.20 no.11 / pp.1098-1102 / 2014
  • Localization of aerial vehicles and map building of flight environments are key technologies for the autonomous flight of small UAVs. In outdoor environments, an unmanned aircraft can easily use GPS (Global Positioning System) for localization with acceptable accuracy. However, as GPS is not available in indoor environments, a SLAM (Simultaneous Localization and Mapping) system suitable for small UAVs is needed. In this paper, we suggest a vision-based SLAM system that uses vision sensors and an AHRS (Attitude Heading Reference System) sensor. Feature points in images captured from the vision sensor are obtained with a GPU (Graphics Processing Unit)-based SIFT (Scale-Invariant Feature Transform) algorithm. Those feature points are then combined with attitude information obtained from the AHRS to estimate the position of the small UAV. Based on the location information and color distribution, a Gaussian process model is generated, which serves as the map. The experimental results show that the position of a small unmanned aircraft is estimated properly and the map of the environment is constructed by the proposed method. Finally, the reliability of the proposed method is verified by comparing the estimated values with the actual values.
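The Gaussian-process map described above can be sketched as standard GP regression: observed positions (with an associated quantity such as color intensity) act as training data, and the GP predicts that quantity, with uncertainty, at unvisited locations. The kernel choice and hyperparameters below are illustrative assumptions, not the paper's model:

```python
import numpy as np

def rbf_kernel(A, B, length=1.0, sigma=1.0):
    """Squared-exponential (RBF) covariance between two sets of 2-D points."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sigma**2 * np.exp(-0.5 * d2 / length**2)

def gp_predict(X_train, y_train, X_query, noise=1e-4, length=1.0):
    """GP regression sketch: predictive mean and variance of a mapped
    quantity (e.g., color intensity) at query positions.
    Hyperparameters are illustrative, not the paper's values."""
    K = rbf_kernel(X_train, X_train, length) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_query, X_train, length)
    Kss = rbf_kernel(X_query, X_query, length)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha                                  # predictive mean
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)          # predictive covariance
    return mean, np.diag(cov)
```

The predictive variance is what makes a GP attractive as a map representation: it marks regions the UAV has not yet observed.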

Image analysis technology with deep learning for monitoring the tidal flat ecosystem -Focused on monitoring the Ocypode stimpsoni Ortmann, 1897 in the Sindu-ri tidal flat - (갯벌 생태계 모니터링을 위한 딥러닝 기반의 영상 분석 기술 연구 - 신두리 갯벌 달랑게 모니터링을 중심으로 -)

  • Kim, Dong-Woo;Lee, Sang-Hyuk;Yu, Jae-Jin;Son, Seung-Woo
    • Journal of the Korean Society of Environmental Restoration Technology / v.24 no.6 / pp.89-96 / 2021
  • In this study, a deep-learning image analysis model was established and validated for AI-based monitoring of the marine protected species Ocypode stimpsoni and its tidal-flat habitat. The data were collected with an unmanned aerial vehicle, and the U-Net model was applied as the deep learning model. The accuracy of the trained model was about 0.76 for Ocypode stimpsoni and about 0.8 for their burrows, the latter being higher. By feeding orthomosaic images of the entire study area to the trained model and analyzing the distribution of crabs and burrows, it was confirmed that 1,943 Ocypode stimpsoni individuals and 2,807 burrows were distributed in the study area. This study confirms the feasibility of deep-learning image analysis for monitoring the tidal flat ecosystem, and the method is expected to be applicable to wider tidal-flat monitoring by expanding the monitoring sites and target species in the future.
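Turning a U-Net segmentation mask into a population count, as above, amounts to counting connected foreground regions. A minimal connected-component counter is sketched below; this is a generic illustration of the counting step, not the authors' post-processing code:

```python
from collections import deque

def count_components(mask):
    """Count 4-connected foreground regions in a binary segmentation mask
    (list of lists of 0/1) -- the step that turns a segmentation output
    into, e.g., a crab or burrow tally. Illustrative sketch only."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                count += 1                      # new region found
                q = deque([(i, j)])             # breadth-first flood fill
                seen[i][j] = True
                while q:
                    r, c = q.popleft()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < h and 0 <= cc < w and mask[rr][cc] and not seen[rr][cc]:
                            seen[rr][cc] = True
                            q.append((rr, cc))
    return count
```

In practice a minimum-area filter would follow, so that single-pixel segmentation noise is not counted as an individual.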

Implementation of Photovoltaic Panel failure detection system using semantic segmentation (시멘틱세그멘테이션을 활용한 태양광 패널 고장 감지 시스템 구현)

  • Shin, Kwang-Seong;Shin, Seong-Yoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.12 / pp.1777-1783 / 2021
  • The use of drones is gradually increasing for the efficient maintenance of large-scale renewable energy generation complexes. Photovoltaic panels have long been photographed with drones to manage panel loss and contamination, and various artificial intelligence approaches are being tried for the efficient maintenance of large-scale photovoltaic complexes. Recently, semantic segmentation-based techniques have been applied to such image classification problems. In this paper, we propose a classification model using semantic segmentation to determine the presence or absence of failures such as arcs, disconnections, and cracks in solar panel images obtained by a drone equipped with a thermal imaging camera. An efficient classification model was implemented by tuning several factors, such as data size and type, and by customizing the loss function in U-Net, which shows robust classification performance even with a small dataset.
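The abstract mentions loss-function customization in U-Net for a small dataset. A common choice in that situation is the soft Dice loss, which handles the class imbalance typical of small defect regions; the paper does not specify its exact loss, so the following is an illustrative example rather than the authors' formulation:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss, a common U-Net loss customization for small,
    class-imbalanced segmentation datasets. Illustrative example;
    the paper's exact loss is not specified.
    pred: probabilities in [0, 1]; target: binary ground truth;
    both numpy arrays of equal shape."""
    pred = pred.ravel()
    target = target.ravel()
    intersection = (pred * target).sum()
    # Dice coefficient = 2|A∩B| / (|A| + |B|); eps avoids division by zero
    dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
    return 1.0 - dice
```

The loss is 0 for a perfect prediction and approaches 1 when prediction and ground truth do not overlap, regardless of how small the defect region is relative to the background.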

Tack Coat Inspection Using Unmanned Aerial Vehicle and Deep Learning

  • da Silva, Aida;Dai, Fei;Zhu, Zhenhua
    • International conference on construction engineering and project management / 2022.06a / pp.784-791 / 2022
  • Tack coat is a thin layer of asphalt between the existing pavement and the asphalt overlay. During construction, insufficient tack coat layering can later cause surface defects such as slippage, shoving, and rutting. This paper proposes a method for improving tack coat inspection using an unmanned aerial vehicle (UAV) and a deep learning neural network for automatic assessment of non-uniformity in the applied tack coat area. In this method, drone-captured images are assessed using a combination of Mask R-CNN and the Grey Level Co-occurrence Matrix (GLCM). Mask R-CNN detects the tack coat region and segments the region of interest from its surroundings; GLCM analyzes the texture of the segmented region and measures the uniformity or non-uniformity of the tack coat on the existing pavement. The results of the field experiment showed that both the intersection over union of Mask R-CNN and the non-uniformity measured by GLCM were promising in accuracy. The proposed method is automatic and cost-efficient, and would be of value to state Departments of Transportation for better management of pavement construction and rehabilitation work.
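The GLCM step above tabulates how often pairs of grey levels co-occur at a fixed pixel offset; scalar features of that matrix, such as energy, then score texture uniformity. A minimal numpy sketch follows; the paper's exact offsets and feature set are not specified, so these choices are illustrative:

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Grey-level co-occurrence matrix for pixel offset (dx, dy),
    normalized to sum to 1. img: 2-D integer array with values in
    [0, levels)."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            g[img[i, j], img[i + dy, j + dx]] += 1
    return g / g.sum()

def texture_uniformity(img, levels=8):
    """Energy (angular second moment) of the GLCM -- one way to score how
    uniform a segmented tack-coat region looks. Illustrative sketch; the
    paper's exact GLCM features are not specified here."""
    p = glcm(img, levels)
    return float((p ** 2).sum())
```

A perfectly uniform patch scores 1.0; the more varied the texture, the more the co-occurrence mass spreads over the matrix and the lower the energy.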


Automatic Detection of Dead Trees Based on Lightweight YOLOv4 and UAV Imagery

  • Yuanhang Jin;Maolin Xu;Jiayuan Zheng
    • Journal of Information Processing Systems / v.19 no.5 / pp.614-630 / 2023
  • Dead trees significantly impact forest production and the ecological environment and constrain the sustainable development of forests. A lightweight YOLOv4 dead-tree detection algorithm based on unmanned aerial vehicle images is proposed to address the current limitations of dead-tree detection, which relies mainly on inefficient, unsafe, and easy-to-miss manual inspection. An improved logarithmic transformation method was developed in data pre-processing to reveal tree features in the shadows. In the model structure, the original CSPDarkNet-53 backbone feature extraction network was replaced by MobileNetV3, and some of the standard convolutional blocks in the original network were replaced by depthwise separable convolution blocks. The ReLU6 activation function replaced the original LeakyReLU activation function to make the network more robust to low-precision computation. The K-means++ clustering method was also integrated to generate anchor boxes better suited to the dataset. The experimental results show that the improved algorithm achieved an accuracy of 97.33%, higher than other methods, and a detection speed higher than that of YOLOv4, improving both the efficiency and the accuracy of detection.
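The logarithmic pre-processing mentioned above exploits the fact that a log transform expands dark intensity ranges more than bright ones, lifting detail out of shadowed canopy. A sketch of the standard form is below; the paper's "improved" variant is not detailed here, so this shows only the baseline idea:

```python
import numpy as np

def log_stretch(img, c=None):
    """Logarithmic intensity transform that brightens dark (shadowed)
    regions more than bright ones -- the baseline form of the kind of
    pre-processing the paper describes (its improved variant is not
    detailed here). img: array of values in [0, 255]; output is in
    the same range."""
    img = img.astype(float)
    if c is None:
        c = 255.0 / np.log1p(255.0)   # scale so that 255 maps to 255
    return c * np.log1p(img)          # log1p keeps 0 mapped to 0
```

After the stretch, shadowed pixels (say, intensity 50) move well above the midpoint while highlights stay pinned near 255, so a detector sees more texture in the dark regions.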

Object-based Building Change Detection Using Azimuth and Elevation Angles of Sun and Platform in the Multi-sensor Images (태양과 플랫폼의 방위각 및 고도각을 이용한 이종 센서 영상에서의 객체기반 건물 변화탐지)

  • Jung, Sejung;Park, Jueon;Lee, Won Hee;Han, Youkyung
    • Korean Journal of Remote Sensing / v.36 no.5_2 / pp.989-1006 / 2020
  • Building change monitoring based on building detection is one of the most important tasks in monitoring artificial structures with high-resolution multi-temporal images such as those from CAS500-1 and -2, which are scheduled for launch. However, the various shapes and sizes of buildings on the Earth's surface, as well as the shadows and trees around them, make accurate building detection difficult, and many misdetections are caused by relief displacement that depends on the azimuth and elevation angles of the platform. In this study, object-based building detection was performed using the azimuth angle of the Sun and the corresponding main direction of shadows to improve building change detection; the platform's azimuth and elevation angles were then used to detect changed buildings. Object-based segmentation was performed on high-resolution imagery, shadow objects were classified by shadow intensity, and feature information such as rectangular fit, Gray-Level Co-occurrence Matrix (GLCM) homogeneity, and area was calculated for each object to detect building candidates. Final buildings were then detected using the direction and distance relationship between the center of each building candidate and its shadow according to the azimuth angle of the Sun. Three methods were proposed for change detection between the building objects detected in each image: simple overlay between objects, comparison of object sizes according to the elevation angle of the platform, and consideration of the direction between objects according to the azimuth angle of the platform. A residential area was selected as the study area, using high-resolution imagery acquired from KOMPSAT-3 and an Unmanned Aerial Vehicle (UAV).
Experimental results showed that the F1-scores of building detection using feature information alone were 0.488 for the KOMPSAT-3 image and 0.696 for the UAV image, whereas the F1-scores of building detection considering shadows were 0.876 and 0.867, respectively, indicating higher accuracy for the shadow-aware method. Among the three proposed change detection methods, the one considering the direction between objects according to the azimuth angle of the platform achieved the highest F1-score, 0.891.
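The building-shadow pairing above rests on simple geometry: a shadow falls in the direction opposite the sun, so the expected building-to-shadow direction follows directly from the sun azimuth, and candidate pairs can be scored by their angular deviation from it. The helper below is an illustrative sketch of that geometry in map coordinates (east = +x, north = +y), not the authors' implementation:

```python
import math

def shadow_direction(sun_azimuth_deg):
    """Unit vector (east, north) pointing from a building toward its
    expected shadow, given the sun azimuth in degrees clockwise from
    north. The shadow falls opposite the sun."""
    a = math.radians((sun_azimuth_deg + 180.0) % 360.0)
    return math.sin(a), math.cos(a)

def angle_between_deg(v1, v2):
    """Angle in degrees between two 2-D vectors -- used to score how well
    a candidate building-shadow pair agrees with the sun azimuth."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))
```

A pair whose centroid-to-centroid vector deviates little from `shadow_direction(sun_azimuth)` is a plausible building-shadow match; the same idea, with the platform azimuth instead of the sun azimuth, underlies the direction-based change detection method.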