• Title/Summary/Keyword: Satellite Segmentation

Shadow Extraction of Urban Area using Building Edge Buffer in Quickbird Image (건물 에지 버퍼를 이용한 Quickbird 영상의 도심지 그림자 추출)

  • Yeom, Jun-Ho; Chang, An-Jin; Kim, Yong-Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.30 no.2 / pp.163-171 / 2012
  • High resolution satellite images have been used for years in building and road network analysis, landscape analysis, and ecological assessment. In such images, however, shadows are inevitably cast by man-made objects such as buildings and overpass bridges. This paper develops a shadow extraction procedure for urban areas containing various land-use classes, and the extracted shadow areas are evaluated against a manually digitized shadow map. For the shadow extraction, the Canny edge operator and a dilation filter are applied to generate building edge buffer areas. In addition, object-based segmentation is performed on a Gram-Schmidt fusion image, and spectral and spatial parameters are calculated from the segmentation results. Finally, appropriate parameters and extraction rules are proposed for the shadow extraction. Across the various assessment indices, the accuracy of the shadow extraction results ranges from 80% to 90%.
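
As a rough illustration of the building-edge-buffer idea described in this abstract (not the authors' implementation), the following Python sketch uses OpenCV's Canny operator and morphological dilation to turn detected edges into a buffer mask; the file name, thresholds, and kernel size are placeholder assumptions.

```python
import cv2
import numpy as np

# Hypothetical single-band image; thresholds and kernel size are illustrative only.
pan = cv2.imread("quickbird_pan.tif", cv2.IMREAD_GRAYSCALE)

# 1. Detect candidate building edges with the Canny operator.
edges = cv2.Canny(pan, threshold1=100, threshold2=200)

# 2. Dilate the edges to form a buffer zone around candidate building outlines.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (7, 7))
edge_buffer = cv2.dilate(edges, kernel, iterations=2)

# 3. Candidate shadow pixels: dark pixels falling inside the edge buffer.
#    (The paper combines this with object-based rules; a simple intensity
#    threshold stands in for those rules here.)
shadow_candidates = (pan < 60) & (edge_buffer > 0)
print("candidate shadow pixels:", int(shadow_candidates.sum()))
```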

Change Detection Using Deep Learning Based Semantic Segmentation for Nuclear Activity Detection and Monitoring (핵 활동 탐지 및 감시를 위한 딥러닝 기반 의미론적 분할을 활용한 변화 탐지)

  • Song, Ahram; Lee, Changhui; Lee, Jinmin; Han, Youkyung
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.991-1005 / 2022
  • Satellite imagery is an effective supplementary data source for detecting and verifying nuclear activity, and it is particularly valuable for areas with limited access and information, such as nuclear installations. Time series analysis, in particular, can reveal preparations for a nuclear test, such as relocating equipment or modifying facilities. In this work, differences in the semantic segmentation results of time series images were used to detect changes in objects relevant to nuclear activity. Building, road, and small-object datasets built from KOMPSAT-3/3A images provided by AIHub were used to train deep learning models such as U-Net, PSPNet, and Attention U-Net, and model parameters were tuned to select suitable models for each target. The final change detection was obtained by incorporating object information into an initial change map computed as the difference between semantic segmentation results. The experimental results demonstrated that the proposed approach can effectively identify changed pixels. Although the approach depends on the accuracy of the semantic segmentation results, it is expected that the applicable scope of the method will grow as datasets for the regions of interest expand.
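
A minimal sketch (not the paper's code) of the core idea: flag pixels whose predicted class differs between two dates, restrict them to classes of interest, and filter with object-level information. The class IDs and minimum object size are assumed values.

```python
import numpy as np
from scipy import ndimage

def change_from_segmentation(labels_t1, labels_t2, classes_of_interest=(1, 2), min_size=50):
    """Flag pixels whose predicted class changed between two dates,
    restricted to classes of interest (e.g. building, road), then drop
    connected components smaller than min_size pixels."""
    changed = labels_t1 != labels_t2
    relevant = np.isin(labels_t1, classes_of_interest) | np.isin(labels_t2, classes_of_interest)
    change_mask = changed & relevant

    # Object-level filtering: remove tiny speckle-like change regions.
    components, n = ndimage.label(change_mask)
    sizes = ndimage.sum(change_mask, components, range(1, n + 1))
    keep = np.isin(components, np.flatnonzero(sizes >= min_size) + 1)
    return change_mask & keep
```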

Extracting Flooded Areas in Southeast Asia Using SegNet and U-Net (SegNet과 U-Net을 활용한 동남아시아 지역 홍수탐지)

  • Kim, Junwoo; Jeon, Hyungyun; Kim, Duk-jin
    • Korean Journal of Remote Sensing / v.36 no.5_3 / pp.1095-1107 / 2020
  • Flood monitoring using satellite data has been constrained by the difficulty of obtaining satellite images at flood peak and of accurately extracting flooded areas from them. Deep learning is a promising method for satellite image classification, yet the potential of deep learning-based flooded area extraction from SAR data, which is easier to acquire under flood conditions than optical imagery, has remained uncertain. This research evaluates the image segmentation performance of SegNet and U-Net by extracting flooded areas in the Khorat basin, Mekong river basin, and Cagayan river basin in Thailand, Laos, and the Philippines from Sentinel-1 A/B data. Results show that the Global Accuracy, Mean IoU, and Mean BF Score of SegNet are 0.9847, 0.6016, and 0.6467, respectively, whereas those of U-Net are 0.9937, 0.7022, and 0.7125. Visual interpretation indicates that the classification accuracy of U-Net is higher than that of SegNet, but the overall processing time of SegNet is around three times shorter than that of U-Net. The results of this research can inform the development of deep learning-based flood monitoring models and fully automated flooded area extraction models.
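
For reference, the Global Accuracy and Mean IoU figures quoted above can be computed from a per-pixel confusion matrix as in this generic sketch (arrays and class count are placeholders; the Mean BF Score requires boundary matching and is omitted here).

```python
import numpy as np

def confusion_matrix(pred, ref, num_classes):
    """Per-pixel confusion matrix between predicted and reference label maps."""
    mask = (ref >= 0) & (ref < num_classes)
    idx = num_classes * ref[mask].astype(int) + pred[mask].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def global_accuracy_and_mean_iou(pred, ref, num_classes=2):
    cm = confusion_matrix(pred, ref, num_classes)
    global_acc = np.diag(cm).sum() / cm.sum()
    iou = np.diag(cm) / (cm.sum(axis=0) + cm.sum(axis=1) - np.diag(cm))
    return global_acc, np.nanmean(iou)
```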

Cloud-based Satellite Image Processing Service by Open Source Stack: A KARI Case

  • Lee, Kiwon; Kang, Sanggoo; Kim, Kwangseob; Chae, Tae-Byeong
    • Korean Journal of Remote Sensing / v.33 no.4 / pp.339-350 / 2017
  • Cloud computing and open source software, two major trends in Information and Communication Technology (ICT), are now widely applied and closely interrelated across many applications. Services that integrate both technologies are generally regarded as promising web-based business models for the industries concerned. Despite the progress of these technologies, there are still few application cases in the geospatial domain. The purpose of this study is to develop a cloud-based service system for satellite image processing built entirely on open source. On OpenStack, an open source cloud computing platform, virtual servers for system management were constructed with an open source stack, and image processing functionality was provided by Orfeo ToolBox (OTB). At this stage, the practical image processing functions for KOMPSAT within the service system are thresholding segmentation, pan-sharpening with multi-resolution image sets, and change detection with paired image sets. This is the first case in which a government-supported space science institution provides cloud-based satellite image processing services based on a pure open source stack. It is expected that the implemented system can be extended with further image processing algorithms using public and open data sets.
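
As an illustration of the kind of OTB functionality such a service wraps, the snippet below calls the Orfeo ToolBox Pansharpening application through its Python bindings. It assumes a local OTB installation; the file paths and fusion method are placeholder assumptions, and this is not the KARI system's actual service code.

```python
# Requires an Orfeo ToolBox (OTB) installation with Python bindings.
import otbApplication as otb

# Pan-sharpen a multispectral image with its panchromatic counterpart.
app = otb.Registry.CreateApplication("Pansharpening")
app.SetParameterString("inp", "kompsat_pan.tif")    # panchromatic input (placeholder path)
app.SetParameterString("inxs", "kompsat_ms.tif")    # multispectral input (placeholder path)
app.SetParameterString("method", "rcs")             # one of OTB's fusion options (assumed choice)
app.SetParameterString("out", "kompsat_pansharpened.tif")
app.ExecuteAndWriteOutput()
```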

Multivariate Region Growing Method with Image Segments (영상분할단위 기반의 다변량 영역확장기법)

  • 이종열
    • Proceedings of the Korean Association of Geographic Information Studies Conference / 2004.03a / pp.273-278 / 2004
  • Feature identification is one of the largest issues in high spatial resolution satellite imagery. A popular approach to feature identification is image segmentation, which produces image segments that are more likely to correspond to the features of interest. Here, a combination of edge extraction and region growing methods over image segments is proposed to improve the segmentation result. In the initial step, an image is segmented by an edge detection method. The segments are assigned IDs, and a polygon topology between segments is built. Based on this topology, each segment is tested for similarity with its adjacent segments using multivariate analysis, and segments with similar spectral characteristics are merged into a region. A test application shows that segments belonging to individual large, spectrally homogeneous structures, such as buildings and roads, were merged into shapes closer to the actual structures.
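
A simplified NumPy sketch of the merging step described above (not the author's implementation): adjacency between segments is derived from the label image, and adjacent segments whose mean spectra are close are merged. A Euclidean distance on mean spectra stands in for the paper's multivariate similarity test, and the threshold is an assumed value.

```python
import numpy as np

def merge_similar_segments(labels, image, threshold=10.0):
    """labels: 2-D array of segment IDs; image: H x W x B multispectral array.
    Adjacent segments whose mean spectral vectors differ by less than
    `threshold` are merged via a simple union-find."""
    ids = np.unique(labels)
    means = {i: image[labels == i].mean(axis=0) for i in ids}

    # Adjacency from horizontal and vertical neighbours in the label image.
    pairs = set()
    for a, b in [(labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])]:
        diff = a != b
        pairs.update(zip(a[diff].tolist(), b[diff].tolist()))

    parent = {i: i for i in ids}
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in pairs:
        if np.linalg.norm(means[i] - means[j]) < threshold:
            parent[find(i)] = find(j)

    # Relabel every pixel with the root ID of its merged region.
    return np.vectorize(lambda v: find(v))(labels)
```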

Comparative Study of Deep Learning Model for Semantic Segmentation of Water System in SAR Images of KOMPSAT-5 (아리랑 5호 위성 영상에서 수계의 의미론적 분할을 위한 딥러닝 모델의 비교 연구)

  • Kim, Min-Ji; Kim, Seung Kyu; Lee, DoHoon; Gahm, Jin Kyu
    • Journal of Korea Multimedia Society / v.25 no.2 / pp.206-214 / 2022
  • One way to measure the extent of damage from floods and droughts is to identify changes in the extent of water systems, and satellite images make it possible to grasp such changes effectively at a glance. KOMPSAT-5 uses Synthetic Aperture Radar (SAR) to capture images regardless of weather conditions such as clouds and rain. In this paper, various deep learning models are applied to perform semantic segmentation of the water system in these SAR images, and their performance is compared. The models used are U-Net, V-Net, U2-Net, UNet 3+, PSPNet, DeepLab-V3, DeepLab-V3+, and PAN. In addition, performance was compared when the data were augmented by applying elastic deformation to the existing SAR image dataset. Without data augmentation, U-Net performed best, with an IoU of 97.25% and a pixel accuracy of 98.53%. With data augmentation, DeepLab-V3 showed an IoU of 95.15% and V-Net showed the best pixel accuracy of 96.86%.
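
The elastic deformation augmentation mentioned above can be sketched with SciPy as follows. This is a generic version, not the paper's exact settings; `alpha`, `sigma`, and the seed are assumed values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(image, mask, alpha=34.0, sigma=4.0, seed=0):
    """Apply the same random elastic displacement field to a SAR image
    and its water-body mask so that the labels stay aligned."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    dx = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    y, x = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.array([y + dy, x + dx])
    warped_image = map_coordinates(image, coords, order=1, mode="reflect")
    warped_mask = map_coordinates(mask, coords, order=0, mode="reflect")
    return warped_image, warped_mask
```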

A Study of Establishment and application Algorithm of Artificial Intelligence Training Data on Land use/cover Using Aerial Photograph and Satellite Images (항공 및 위성영상을 활용한 토지피복 관련 인공지능 학습 데이터 구축 및 알고리즘 적용 연구)

  • Lee, Seong-hyeok; Lee, Moung-jin
    • Korean Journal of Remote Sensing / v.37 no.5_1 / pp.871-884 / 2021
  • The purpose of this study was to determine ways to increase efficiency in constructing and verifying artificial intelligence learning data on land cover using aerial and satellite images, and in applying the data to AI learning algorithms. To this end, multi-resolution datasets of 0.51 m and 10 m for 8 categories of land cover were constructed using high-resolution aerial images and satellite images obtained from the Sentinel-2 satellites. Furthermore, fine data (a total of 17,000 items) and coarse data (a total of 33,000 items) were constructed simultaneously to achieve two goals: precise detection of land cover changes and the establishment of large-scale learning datasets. To secure the accuracy of the learning data, verification was performed in three steps: data refining, annotation, and sampling. The finally verified learning data were applied to the semantic segmentation algorithms U-Net and DeepLabV3+, and the results were analyzed. Based on the analysis, the average accuracy for land cover based on aerial imagery was 77.8% for U-Net and 76.3% for DeepLabV3+, while for land cover based on satellite imagery it was 91.4% for U-Net and 85.8% for DeepLabV3+. The artificial intelligence learning datasets on land cover constructed using high-resolution aerial and satellite images in this study can be used as reference data to help classify land cover and identify relevant changes. It is therefore expected that the findings of this study can be used in various fields of artificial intelligence research on land cover and in constructing an AI learning dataset covering the land cover of all of Korea.
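
To make the "apply the data to U-Net / DeepLabV3+" step concrete, here is a generic PyTorch training-loop skeleton for 8-class land-cover segmentation. The model, dataset, and hyperparameters are placeholders, not the study's configuration.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

NUM_CLASSES = 8  # land-cover categories, as in the study

def train_segmentation(model, dataset, epochs=20, lr=1e-3, batch_size=8, device="cuda"):
    """Minimal training loop: `dataset` must yield (image, label) pairs where
    image is C x H x W float and label is H x W integer class indices."""
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.to(device).train()
    for epoch in range(epochs):
        total = 0.0
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device).long()
            optimizer.zero_grad()
            loss = criterion(model(images), labels)  # logits: N x NUM_CLASSES x H x W
            loss.backward()
            optimizer.step()
            total += loss.item()
        print(f"epoch {epoch + 1}: mean loss {total / len(loader):.4f}")
```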

Classification of Industrial Parks and Quarries Using U-Net from KOMPSAT-3/3A Imagery (KOMPSAT-3/3A 영상으로부터 U-Net을 이용한 산업단지와 채석장 분류)

  • Che-Won Park; Hyung-Sup Jung; Won-Jin Lee; Kwang-Jae Lee; Kwan-Young Oh; Jae-Young Chang; Moung-Jin Lee
    • Korean Journal of Remote Sensing / v.39 no.6_3 / pp.1679-1692 / 2023
  • South Korea emits a large amount of pollutants as a result of population growth and industrial development, and it is also severely affected by transboundary air pollution due to its geographical location. Because pollutants from both domestic and foreign sources contribute to air pollution in Korea, the location of air pollutant emission sources is crucial for understanding the movement and distribution of pollutants in the atmosphere and for establishing national-level air pollution management and response strategies. Against this background, this study aims to effectively acquire spatial information on domestic and international air pollutant emission sources, which is essential for analyzing air pollution status, by utilizing high-resolution optical satellite images and deep learning-based image segmentation models. In particular, industrial parks and quarries, which have been evaluated as contributing significantly to transboundary air pollution, were selected as the main research targets, and KOMPSAT-3/3A images of these areas were collected, preprocessed, and converted into input and label data for model training. Training the U-Net model with these data achieved an overall accuracy of 0.8484 and a mean Intersection over Union (mIoU) of 0.6490, and the predicted maps extracted object boundaries more accurately than the label data created by coarse annotation.
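
The "collected, preprocessed, and converted into input and label data" step typically involves tiling large scenes into fixed-size training chips; the sketch below shows one generic way to do this with NumPy. The chip size and stride are assumptions, not the paper's values.

```python
import numpy as np

def make_chips(image, label, size=512, stride=512):
    """Cut a scene (C x H x W array) and its label mask (H x W array)
    into fixed-size training chips, skipping chips with no labelled pixels."""
    chips = []
    _, h, w = image.shape
    for top in range(0, h - size + 1, stride):
        for left in range(0, w - size + 1, stride):
            img_chip = image[:, top:top + size, left:left + size]
            lbl_chip = label[top:top + size, left:left + size]
            if lbl_chip.any():          # keep only chips containing annotations
                chips.append((img_chip, lbl_chip))
    return chips
```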

Semantic Segmentation of Clouds Using Multi-Branch Neural Architecture Search (멀티 브랜치 네트워크 구조 탐색을 사용한 구름 영역 분할)

  • Chi Yoon Jeong; Kyeong Deok Moon; Mooseop Kim
    • Korean Journal of Remote Sensing / v.39 no.2 / pp.143-156 / 2023
  • To analyze the contents of satellite imagery precisely and reliably, it is essential to recognize clouds, which obstruct the gathering of useful information. Deep learning has recently yielded satisfactory results in various tasks, so many studies using deep neural networks have been conducted to improve cloud detection performance. However, existing methods for cloud detection are limited in how much the performance can be improved because they adopt network models designed for general semantic image segmentation without modification. To tackle this problem, we introduce multi-branch neural architecture search to find an optimal network structure for cloud detection. In addition, the proposed method adopts the soft intersection over union (IoU) as the loss function to mitigate the disagreement between the loss function and the evaluation metric, and uses various data augmentation methods. The experiments are conducted on a cloud detection dataset built from Arirang-3/3A satellite imagery. The results show that the network architecture searched on the cloud dataset achieves an IoU 4% higher than an existing model whose architecture was searched on urban street scenes, and that the soft IoU loss performs best for cloud detection among the loss functions compared. When compared with state-of-the-art (SOTA) semantic segmentation models, the proposed method also shows better mean IoU and overall accuracy.
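
The soft IoU loss mentioned above replaces the hard intersection and union counts with sums over predicted probabilities so that the objective is differentiable. A common PyTorch formulation of the general technique (not the authors' exact code) looks like this.

```python
import torch

def soft_iou_loss(logits, targets, eps=1e-6):
    """logits: N x 1 x H x W raw scores for the cloud class;
    targets: N x 1 x H x W binary ground truth.
    Returns 1 - soft IoU, averaged over the batch."""
    probs = torch.sigmoid(logits)
    intersection = (probs * targets).sum(dim=(1, 2, 3))
    union = (probs + targets - probs * targets).sum(dim=(1, 2, 3))
    return (1.0 - (intersection + eps) / (union + eps)).mean()
```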

Application of Geo-Segment Anything Model (SAM) Scheme to Water Body Segmentation: An Experiment Study Using CAS500-1 Images (수체 추출을 위한 Geo-SAM 기법의 응용: 국토위성영상 적용 실험)

  • Hayoung Lee; Kwangseob Kim; Kiwon Lee
    • Korean Journal of Remote Sensing / v.40 no.4 / pp.343-350 / 2024
  • Since the release of Meta's Segment Anything Model (SAM), a large-scale vision transformer-based model with rapid image segmentation capabilities, several studies have applied this technology in various fields. In this study, we investigated the applicability of SAM to water body detection and extraction using the QGIS Geo-SAM plugin, which enables SAM to be used with satellite imagery. The experimental data consisted of Compact Advanced Satellite 500 (CAS500)-1 images. The results obtained by applying SAM to these data were compared with manually digitized water objects, OpenStreetMap (OSM) data, and water body data from the National Geographic Information Institute (NGII) hydrological digital map. The mean Intersection over Union (mIoU) calculated over all features extracted by SAM against these three comparison datasets was 0.7490, 0.5905, and 0.4921, respectively. For features that appeared or were extracted in all datasets, the results were 0.9189, 0.8779, and 0.7715, respectively. Based on an analysis of the spatial consistency between the SAM results and the comparison data, SAM showed limitations in detecting small or poorly defined streams but provided meaningful segmentation results for water body classification.
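
The per-feature mIoU comparison described above can be reproduced in outline with binary masks rasterized from each dataset. The sketch below uses placeholder arrays and hypothetical feature identifiers, not the study's data, and averages IoU only over features present in both sources, mirroring the "commonly appeared" analysis.

```python
import numpy as np

def feature_iou(mask_a, mask_b):
    """IoU between two binary masks of the same water feature."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def mean_iou(sam_masks, reference_masks):
    """Average IoU over features present in both the SAM result and the
    reference dataset; dictionary keys are hypothetical feature identifiers."""
    common = set(sam_masks) & set(reference_masks)
    return np.mean([feature_iou(sam_masks[k], reference_masks[k]) for k in common])
```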