• Title/Abstract/Keyword: Satellite Segmentation


Implementation of Digital Image Processing for Coastline Extraction from Synthetic Aperture Radar Imagery

  • Lee, Dong-Cheon;Seo, Su-Young;Lee, Im-Pyeong;Kwon, Jay-Hyoun;Tuell, Grady H.
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.25 no.6_1 / pp.517-528 / 2007
  • Extraction of the coastal boundary is important because the boundary serves as a reference in the demarcation of maritime zones such as the territorial sea, contiguous zone, and exclusive economic zone. Accurate nautical charts also depend on well-established, accurate, consistent, and current coastline delineation. However, identifying the precise location of the coastal boundary is difficult due to tidal and wave motions. This paper presents an efficient way to extract coastlines by applying digital image processing techniques to Synthetic Aperture Radar (SAR) imagery. Over the past few years, satellite-based SAR and high-resolution airborne SAR images have become available, and SAR has been evaluated as a new mapping technology. Remotely sensed data offer several benefits: SAR in particular is largely unaffected by weather, operates at night over large areas, and provides high contrast between water and land. Various image processing techniques, including region growing, texture-based image segmentation, a local entropy method, and refinement with an image pyramid, were implemented to extract the coastline in this study. Finally, the results were compared with existing coastline data derived from aerial photographs.
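The local entropy step mentioned above can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the window size, bin count, and the assumption that water is the smooth (low-entropy) class are ours.

```python
import numpy as np

def local_entropy(image, window=7, bins=16):
    """Shannon entropy of the gray-level histogram in a sliding window.
    Water in SAR imagery is smooth (low entropy); land is textured (high)."""
    half = window // 2
    padded = np.pad(image, half, mode="reflect")
    # Quantize to a few gray levels so window histograms are stable.
    levels = np.digitize(padded, np.linspace(padded.min(), padded.max(), bins))
    out = np.zeros(image.shape)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            _, counts = np.unique(levels[i:i + window, j:j + window],
                                  return_counts=True)
            p = counts / counts.sum()
            out[i, j] = -np.sum(p * np.log2(p))
    return out

# Toy scene: constant "water" on the left, noisy "land" on the right.
rng = np.random.default_rng(0)
img = np.hstack([np.full((32, 32), 10.0), rng.uniform(0.0, 255.0, (32, 32))])
ent = local_entropy(img)   # thresholding ent separates the two halves
```

Thresholding the entropy map then yields a land/water mask that region growing can refine.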

Building DSMs Generation Integrating Three Line Scanner (TLS) and LiDAR

  • Suh, Yong-Cheol;Nakagawa, Masafumi
    • Korean Journal of Remote Sensing / v.21 no.3 / pp.229-242 / 2005
  • Photogrammetry is a standard method of GIS data acquisition. However, producing detailed 3D spatial information demands considerable manpower and expense, especially in urban areas with many buildings, and no photogrammetric system can fully automate spatial information acquisition. LiDAR, on the other hand, has high potential for automating 3D spatial data acquisition because it directly measures the 3D coordinates of objects, but recognizing objects from LiDAR data alone is difficult given its currently low resolution. Against this background, we believe it is advantageous to integrate LiDAR data and stereo CCD images for more efficient, automated acquisition of high-resolution 3D spatial data. In this research, an automatic urban object recognition methodology is proposed that integrates ultra-high-resolution stereo images and LiDAR data. Moreover, a more reliable and detailed stereo matching method for the CCD images is examined that uses LiDAR data as initial 3D data to determine the search range and to detect possible occlusions. Finally, intelligent DSMs, in which urban features are identified at high resolution, were generated with high-speed processing.
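The idea of using LiDAR as an initial 3D prior to restrict the stereo search range can be sketched with a toy 1-D SSD matcher. The function, window size, and margin below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def constrained_match(left, right, row, col, d_init, margin, win=3):
    """1-D SSD stereo matching whose disparity search is restricted to
    [d_init - margin, d_init + margin]; d_init would come from LiDAR."""
    half = win // 2
    ref = left[row - half:row + half + 1, col - half:col + half + 1]
    best_d, best_cost = None, np.inf
    for d in range(max(0, d_init - margin), d_init + margin + 1):
        c = col - d
        if c - half < 0 or c + half + 1 > right.shape[1]:
            continue  # candidate window falls outside the image
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        cost = np.sum((ref - cand) ** 2)
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Synthetic pair: the right image is the left shifted by 5 pixels,
# and the (assumed) LiDAR prior predicts a disparity near 4.
rng = np.random.default_rng(2)
left = rng.uniform(0.0, 1.0, (20, 40))
right = np.roll(left, -5, axis=1)
d = constrained_match(left, right, row=10, col=20, d_init=4, margin=3)
```

Narrowing the search range both speeds up matching and suppresses false matches outside the physically plausible disparity band.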

Development of Deep Learning-based Land Monitoring Web Service (딥러닝 기반의 국토모니터링 웹 서비스 개발)

  • In-Hak Kong;Dong-Hoon Jeong;Gu-Ha Jeong
    • Journal of Korean Society of Industrial and Systems Engineering / v.46 no.3 / pp.275-284 / 2023
  • Land monitoring involves systematically understanding changes in land use, leveraging spatial information such as satellite imagery and aerial photographs. Recently, the integration of deep learning technologies, notably object detection and semantic segmentation, into land monitoring has spurred active research. This study developed a web service to facilitate such integrations, allowing users to analyze aerial and drone images using CNN models. The web service architecture comprises AI, WEB/WAS, and DB servers and employs three primary deep learning models: DeepLab V3, YOLO, and Rotated Mask R-CNN. Specifically, YOLO offers rapid detection capabilities, Rotated Mask R-CNN excels in detecting rotated objects, while DeepLab V3 provides pixel-wise image classification. The performance of these models fluctuates depending on the quantity and quality of the training data. Anticipated to be integrated into the LX Corporation's operational network and the Land-XI system, this service is expected to enhance the accuracy and efficiency of land monitoring.

Volume-sharing Multi-aperture Imaging (VMAI): A Potential Approach for Volume Reduction for Space-borne Imagers

  • Jun Ho Lee;Seok Gi Han;Do Hee Kim;Seokyoung Ju;Tae Kyung Lee;Chang Hoon Song;Myoungjoo Kang;Seonghui Kim;Seohyun Seong
    • Current Optics and Photonics / v.7 no.5 / pp.545-556 / 2023
  • This paper introduces volume-sharing multi-aperture imaging (VMAI), a potential approach to volume reduction in space-borne imagers that aims to achieve high-resolution ground imagery with deep learning methods at a reduced volume compared to conventional designs. As an intermediate step in the VMAI payload development, we present a phase-1 design targeting a 1-meter ground sampling distance (GSD) at 500 km altitude. Although its optical imaging capability does not surpass conventional approaches, it remains attractive for specific applications on small satellite platforms, particularly surveillance missions. The design integrates one wide-field and three narrow-field cameras with volume sharing and no optical interference. Capturing independent images from the four cameras, the payload emulates a large circular aperture to address diffraction and synthesizes high-resolution images using deep learning. Computational simulations validated the VMAI approach while addressing challenges such as the lower signal-to-noise ratio (SNR) resulting from aperture segmentation. Future work will focus on further reducing the volume and refining SNR management.

Region-based Building Extraction of High Resolution Satellite Images Using Color Invariant Features (색상 불변 특징을 이용한 고해상도 위성영상의 영역기반 건물 추출)

  • Ko, A-Reum;Byun, Young-Gi;Park, Woo-Jin;Kim, Yong-Il
    • Korean Journal of Remote Sensing / v.27 no.2 / pp.75-87 / 2011
  • This paper presents a method for region-based building extraction from high-resolution satellite images (HRSI) that integrates spectral and color-invariant features without user intervention such as selecting training data sets. The study also evaluates the effectiveness of the proposed method by applying it to IKONOS and QuickBird images. First, the image is segmented by the MSRG method. Vegetation and shadow regions are automatically detected and masked to facilitate building extraction. Second, region merging is performed on the masked image using the integrated spectral and color-invariant information. Finally, building regions are extracted by applying a shape feature to the merged regions. The boundaries of the extracted buildings are simplified using generalization techniques to improve the completeness of the extraction. The experimental results showed more than 80% accuracy for the two study areas, and visually satisfactory results were obtained. In conclusion, the proposed method shows great potential for building extraction from HRSI.
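The abstract does not specify which shape feature is used, but rectangularity (region area over bounding-box area) is one common choice for separating building candidates from irregular merged regions; the sketch below is purely illustrative:

```python
import numpy as np

def rectangularity(mask):
    """Region area divided by its bounding-box area; compact rectangular
    regions (building candidates) score near 1."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return float(mask.sum()) / ((r1 - r0 + 1) * (c1 - c0 + 1))

# A filled rectangle scores 1.0; an L-shaped region scores lower.
rect = np.zeros((10, 10), dtype=bool)
rect[2:6, 1:7] = True
lshape = rect.copy()
lshape[2:4, 4:7] = False
score_rect, score_l = rectangularity(rect), rectangularity(lshape)
```

Thresholding such a score on each merged region keeps building-like shapes and discards irregular ones.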

Detection of Collapse Buildings Using UAV and Bitemporal Satellite Imagery (UAV와 다시기 위성영상을 이용한 붕괴건물 탐지)

  • Jung, Sejung;Lee, Kirim;Yun, Yerin;Lee, Won Hee;Han, Youkyung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.3 / pp.187-196 / 2020
  • In this study, collapsed-building detection using UAV (Unmanned Aerial Vehicle) and PlanetScope satellite images was carried out, demonstrating the potential of heterogeneous sensors for detecting objects on the surface. An area where about 20 buildings had collapsed due to forest fire damage was selected as the study site. First, feature information such as ExG (Excess Green), GLCM (Gray-Level Co-occurrence Matrix), and a DSM (Digital Surface Model) was generated from high-resolution UAV images, and object-based segmentation was performed to detect collapsed buildings. These features were then used to identify candidate collapsed buildings. In this process, change detection results from PlanetScope imagery were used together to improve detection accuracy. More specifically, the changed pixels obtained from the bitemporal PlanetScope images were used as seed pixels to correct misdetected and overdetected areas in the candidate group. The accuracy of detection using only the UAV image, and the accuracy when UAV and PlanetScope images were used together, were analyzed against a manually digitized reference image. The results using only the UAV image had an F1-score of 0.4867, which improved to 0.8064 when UAV and PlanetScope images were used together. Moreover, the kappa coefficient also improved dramatically, from 0.3674 to 0.8225.
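The F1-score and kappa coefficient reported above are standard functions of the binary confusion matrix; a minimal sketch on toy change maps (not the study's data):

```python
import numpy as np

def f1_and_kappa(truth, pred):
    """F1-score and Cohen's kappa for binary maps (1 = collapsed building)."""
    truth, pred = np.ravel(truth), np.ravel(pred)
    tp = np.sum((truth == 1) & (pred == 1))
    fp = np.sum((truth == 0) & (pred == 1))
    fn = np.sum((truth == 1) & (pred == 0))
    tn = np.sum((truth == 0) & (pred == 0))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    n = tp + fp + fn + tn
    po = (tp + tn) / n                                           # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2  # chance agreement
    return f1, (po - pe) / (1 - pe)

# Toy 8-pixel change maps.
truth = [1, 1, 1, 0, 0, 0, 0, 0]
pred  = [1, 1, 0, 1, 0, 0, 0, 0]
f1, kappa = f1_and_kappa(truth, pred)
```

Kappa discounts the agreement expected by chance, which is why it moves more sharply than F1 when a detector improves on an imbalanced map.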

Building change detection in high spatial resolution images using deep learning and graph model (딥러닝과 그래프 모델을 활용한 고해상도 영상의 건물 변화탐지)

  • Park, Seula;Song, Ahram
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.3 / pp.227-237 / 2022
  • The most critical factors in detecting changes in very high-resolution satellite images are building positional inconsistencies and relief displacements caused by the satellite's side view. To resolve these problems, additional processing using a digital elevation model and deep learning approaches have been proposed, but they have not proven sufficiently effective. This study proposes a change detection method that considers both the positional and the topological information of buildings. A Mask R-CNN (Region-based Convolutional Neural Network) was trained on the SpaceNet building detection v2 dataset, and the central point of each building was extracted as a building node. Then, triangulated irregular network graphs were created on the building nodes of the temporal images. To extract areas with a structural difference between the two graphs, a change index reflecting the similarity of the graphs and the differences in building-node locations was proposed. Finally, newly changed or deleted buildings were detected by comparing the two graphs. Three pairs of test sites were selected to evaluate the proposed method's effectiveness, and the results showed that changed buildings were detected even in side-view satellite images with building positional inconsistencies.
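The TIN-graph construction over building nodes can be sketched with a Delaunay triangulation. The centroids below are hypothetical, and comparing raw edge sets is a simplification of the paper's change index:

```python
import numpy as np
from scipy.spatial import Delaunay

def tin_edges(points):
    """Undirected edge set of the Delaunay TIN over building nodes."""
    edges = set()
    for simplex in Delaunay(points).simplices:
        for a, b in ((0, 1), (1, 2), (0, 2)):
            edges.add(tuple(sorted((int(simplex[a]), int(simplex[b])))))
    return edges

# Hypothetical building centroids; one building is added at the second epoch,
# so node indices 0-3 refer to the same buildings in both graphs.
t1 = np.array([[0, 0], [4, 0], [2, 3], [6, 3]], dtype=float)
t2 = np.vstack([t1, [[3, 6]]])
e1, e2 = tin_edges(t1), tin_edges(t2)
new_edges = e2 - e1   # structural difference flags the changed region
```

Because the TIN encodes only relative neighborhood structure, it tolerates the per-building positional offsets that defeat pixel-wise comparison.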

Urban Object Classification Using Object Subclass Classification Fusion and Normalized Difference Vegetation Index (객체 서브 클래스 분류 융합과 정규식생지수를 이용한 도심지역 객체 분류)

  • Chul-Soo Ye
    • Korean Journal of Remote Sensing / v.39 no.2 / pp.223-232 / 2023
  • A widely used method for monitoring land cover with high-resolution satellite images is to classify the images based on the colors of the objects of interest. In urban areas, not only major objects such as buildings and roads but also vegetation such as trees frequently appear in high-resolution satellite images. However, the colors of vegetation objects often resemble those of other objects such as buildings, roads, and shadows, making it difficult to classify objects accurately based solely on color information. In this study, we propose a method that can accurately classify not only objects with various colors, such as buildings, but also vegetation objects. The proposed method uses the normalized difference vegetation index (NDVI) image, which is useful for detecting vegetation objects, along with the RGB image, and classifies objects into subclasses. The subclass classification results are fused, and the final classification result is generated by combining them with the image segmentation results. In experiments using Compact Advanced Satellite 500-1 imagery, the proposed method, which applies the NDVI and subclass classification together, showed an overall accuracy of 87.42%, while the overall accuracies of the subchannel classification technique without the NDVI and of the subclass classification technique alone were 73.18% and 81.79%, respectively.
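The NDVI that separates vegetation from spectrally similar objects is the standard normalized difference of near-infrared and red reflectance; a minimal sketch (the 0.3 vegetation threshold is an assumed illustrative value, not the paper's):

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Hypothetical reflectances: vegetation is bright in NIR, dark in red.
nir = np.array([[0.50, 0.30], [0.40, 0.10]])
red = np.array([[0.10, 0.30], [0.08, 0.30]])
v = ndvi(nir, red)
veg_mask = v > 0.3   # assumed illustrative vegetation threshold
```

Because buildings, roads, and shadows all have low NIR-red contrast, the NDVI channel discriminates vegetation even where RGB colors coincide.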

A Comparative Study of Reservoir Surface Area Detection Algorithm Using SAR Image (SAR 영상을 활용한 저수지 수표면적 탐지 알고리즘 비교 연구)

  • Jeong, Hagyu;Park, Jongsoo;Lee, Dalgeun;Lee, Junwoo
    • Korean Journal of Remote Sensing / v.38 no.6_3 / pp.1777-1788 / 2022
  • Reservoirs are a major water supply source in the domestic agricultural environment, and monitoring their water storage is important for the utilization and management of agricultural water resources. Remote sensing via satellite imagery can be an effective method for regularly monitoring widely distributed objects such as reservoirs. In this study, image classification and image segmentation algorithms were applied to Sentinel-1 Synthetic Aperture Radar (SAR) imagery for water body detection in 53 reservoirs in South Korea. Six algorithms were used: Neural Network (NN), Support Vector Machine (SVM), Random Forest (RF), Otsu, Watershed (WS), and Chan-Vese (CV), and the water body detection results were evaluated against in-situ images taken by drones. The correlations between the in-situ water surface area and the area detected by each algorithm were NN 0.9941, SVM 0.9942, RF 0.9940, Otsu 0.9922, WS 0.9709, and CV 0.9736; the larger the reservoir, the higher the linear correlation. WS showed low recall due to undetected water bodies, while NN, SVM, and RF showed low precision due to over-detection. For water body detection from SAR imagery, we found that aquatic plants and artificial structures can be sources of error that cause water bodies to go undetected.
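Of the six algorithms, Otsu's method is the simplest to reproduce: it picks the threshold that maximizes the between-class variance of the backscatter histogram, exploiting the dark-water/bright-land bimodality of SAR scenes. A sketch on a synthetic scene (the intensity distributions are assumptions, not Sentinel-1 statistics):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the histogram (dark water vs. bright land)."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)              # weight of the dark class
    w1 = 1.0 - w0                  # weight of the bright class
    m = np.cumsum(p * centers)     # cumulative first moment
    mT = m[-1]                     # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mT * w0 - m) ** 2 / (w0 * w1)
    var_between = np.nan_to_num(var_between)
    return centers[np.argmax(var_between)]

# Synthetic bimodal backscatter: dark water (~20) vs. brighter land (~120).
rng = np.random.default_rng(1)
scene = np.concatenate([rng.normal(20, 3, 5000), rng.normal(120, 10, 5000)])
t = otsu_threshold(scene)
water_fraction = (scene < t).mean()   # ~0.5 for this half-and-half scene
```

Being a global threshold, Otsu needs no training data, which is consistent with its competitive correlation despite its simplicity.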

Comparison of Semantic Segmentation Performance of U-Net according to the Ratio of Small Objects for Nuclear Activity Monitoring (핵활동 모니터링을 위한 소형객체 비율에 따른 U-Net의 의미론적 분할 성능 비교)

  • Lee, Jinmin;Kim, Taeheon;Lee, Changhui;Lee, Hyunjin;Song, Ahram;Han, Youkyung
    • Korean Journal of Remote Sensing / v.38 no.6_4 / pp.1925-1934 / 2022
  • Monitoring nuclear activity for inaccessible areas using remote sensing technology is essential for nuclear non-proliferation. In recent years, deep learning has been actively used to detect nuclear-activity-related small objects. However, high-resolution satellite imagery containing small objects can result in class imbalance. As a result, there is a performance degradation problem in detecting small objects. Therefore, this study aims to improve detection accuracy by analyzing the effect of the ratio of small objects related to nuclear activity in the input data for the performance of the deep learning model. To this end, six case datasets with different ratios of small object pixels were generated and a U-Net model was trained for each case. Following that, each trained model was evaluated quantitatively and qualitatively using a test dataset containing various types of small object classes. The results of this study confirm that when the ratio of object pixels in the input image is adjusted, small objects related to nuclear activity can be detected efficiently. This study suggests that the performance of deep learning can be improved by adjusting the object pixel ratio of input data in the training dataset.
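Adjusting the object-pixel ratio of the training data can be as simple as filtering chips by the fraction of object pixels in their label masks; a minimal sketch (the 5% cutoff and chip size are arbitrary illustrative values, not the study's case settings):

```python
import numpy as np

def object_pixel_ratio(mask):
    """Fraction of pixels in a label chip that belong to object classes."""
    return float(np.count_nonzero(mask)) / mask.size

def filter_chips(masks, min_ratio):
    """Keep training chips whose object-pixel ratio meets the cutoff."""
    return [m for m in masks if object_pixel_ratio(m) >= min_ratio]

# Hypothetical 8x8 label chips with different small-object coverage.
empty = np.zeros((8, 8), dtype=np.uint8)
sparse = empty.copy()
sparse[0, 0] = 1                 # 1/64 ~ 1.6 % object pixels
dense = empty.copy()
dense[:2, :4] = 1                # 8/64 = 12.5 % object pixels
kept = filter_chips([empty, sparse, dense], min_ratio=0.05)
```

Raising the cutoff shifts the class balance of the training set toward the small-object classes, which is the lever the study varies across its six cases.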