• Title/Summary/Keyword: GK2A/AMI


Atmospheric Correction of Sentinel-2 Images Using GK2A AOD: A Comparison between FLAASH, Sen2Cor, 6SV1.1, and 6SV2.1 (GK2A AOD를 이용한 Sentinel-2 영상의 대기보정: FLAASH, Sen2Cor, 6SV1.1, 6SV2.1의 비교평가)

  • Kim, Seoyeon; Youn, Youjeong; Jeong, Yemin; Park, Chan-Won; Na, Sang-Il; Ahn, Hoyong; Ryu, Jae-Hyun; Lee, Yangwon
    • Korean Journal of Remote Sensing / v.38 no.5_1 / pp.647-660 / 2022
  • To prepare an atmospheric correction model suitable for CAS500-4 (Compact Advanced Satellite 500-4), this letter conducted atmospheric correction experiments using Sentinel-2 images, which have spectral characteristics similar to CAS500-4. Studies comparing atmospheric correction results obtained with different Aerosol Optical Depth (AOD) data are rarely found. We compared Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH), Sen2Cor, and the Second Simulation of the Satellite Signal in the Solar Spectrum - Vector (6SV) versions 1.1 and 2.1, using Geo-Kompsat 2A (GK2A) Advanced Meteorological Imager (AMI) and Aerosol Robotic Network (AERONET) AOD data. In this experiment, 6SV2.1 appeared more stable than the others when considering the correlation matrices and the output images for each band and the Normalized Difference Vegetation Index (NDVI).
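
As an illustration of the kind of band-wise comparison described above, the following Python sketch computes NDVI and per-band Pearson correlations between two surface-reflectance stacks. The array shapes, the band ordering (red at index 2, NIR at index 3), and the synthetic data are assumptions for illustration, not material from the paper.

```python
# Hypothetical sketch: band-wise comparison of two surface-reflectance products
# (e.g. a 6SV2.1 output vs. a Sen2Cor output for the same Sentinel-2 scene).
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red), guarding against division by zero."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def bandwise_correlation(prod_a, prod_b):
    """Pearson correlation per band between two (bands, rows, cols) stacks."""
    corrs = []
    for a, b in zip(prod_a, prod_b):
        valid = np.isfinite(a) & np.isfinite(b)
        corrs.append(np.corrcoef(a[valid], b[valid])[0, 1])
    return np.array(corrs)

# Synthetic reflectance stacks (4 bands, 100 x 100 pixels) standing in for
# two atmospheric-correction outputs of the same scene
rng = np.random.default_rng(0)
sr_6sv = rng.uniform(0.0, 0.4, size=(4, 100, 100))
sr_sen2cor = sr_6sv + rng.normal(0.0, 0.01, size=sr_6sv.shape)

print("band correlations:", bandwise_correlation(sr_6sv, sr_sen2cor))
print("mean NDVI difference:",
      np.nanmean(ndvi(sr_6sv[2], sr_6sv[3]) - ndvi(sr_sen2cor[2], sr_sen2cor[3])))
```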

Enhancing GEMS Surface Reflectance in Snow-Covered Regions through Combined Use of GeoKompsat-2A/2B Data (천리안 위성자료 융합을 통한 적설역에서의 GEMS 지표면 반사도 개선 연구)

  • Suyoung Sim; Daeseong Jung; Jongho Woo; Nayeon Kim; Sungwoo Park; Hyunkee Hong; Kyung-Soo Han
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1497-1503 / 2023
  • To address challenges in classifying clouds and snow cover when calculating ground reflectance in the near-ultraviolet (UV) wavelengths, this study introduces a methodology that combines cloud data from the Geostationary Environment Monitoring Spectrometer (GEMS) and the Advanced Meteorological Imager (AMI) for snow cover analysis. The proposed approach aims to enhance the quality of surface reflectance calculations; the combined cloud data were generated by integrating GEMS cloud data with AMI cloud detection data. When applied to compute GEMS surface reflectance, this fusion approach significantly mitigated underestimation issues in snow-covered regions compared to using only GEMS cloud data, resulting in an approximately 17% improvement across the entire observational area. The findings of this study highlight the potential to address persistent underestimation challenges in snow areas by employing fused cloud data, consequently enhancing the accuracy of other Level-2 products based on the improved surface reflectance.
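
A minimal sketch of the fusion idea follows, assuming two boolean cloud masks already regridded to a common GEMS grid and a simple cloud-if-either-sensor-flags-it (logical OR) rule; the rule and the array names are illustrative assumptions rather than the paper's exact procedure.

```python
# Illustrative cloud-mask fusion and masking of surface reflectance.
import numpy as np

def fuse_cloud_masks(gems_cloud, ami_cloud):
    """Flag a pixel as cloudy if either sensor flags it (logical OR)."""
    return gems_cloud | ami_cloud

def mask_reflectance(reflectance, cloud_mask):
    """Set cloudy pixels to NaN before surface-reflectance retrieval."""
    out = reflectance.astype(float).copy()
    out[cloud_mask] = np.nan
    return out

# Synthetic example: random masks and a bright (snow-like) reflectance field
rng = np.random.default_rng(1)
gems_cloud = rng.random((50, 50)) > 0.8
ami_cloud = rng.random((50, 50)) > 0.8
reflectance = rng.uniform(0.1, 0.9, (50, 50))

fused = fuse_cloud_masks(gems_cloud, ami_cloud)
clear = mask_reflectance(reflectance, fused)
print("cloudy fraction (fused):", fused.mean())
print("clear-sky mean reflectance:", np.nanmean(clear))
```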

Spatial Gap-Filling of Hourly AOD Data from Himawari-8 Satellite Using DCT (Discrete Cosine Transform) and FMM (Fast Marching Method)

  • Youn, Youjeong; Kim, Seoyeon; Jeong, Yemin; Cho, Subin; Kang, Jonggu; Kim, Geunah; Lee, Yangwon
    • Korean Journal of Remote Sensing / v.37 no.4 / pp.777-788 / 2021
  • Since aerosol has a relatively short duration and significant spatial variation, satellite observations become more important for the spatially and temporally continuous quantification of aerosol. However, optical remote sensing has the disadvantage that it cannot retrieve AOD (Aerosol Optical Depth) for regions covered by clouds or regions with extremely high concentrations. Such missing values can increase data uncertainty in analyses of the Earth's environment. This paper presents a spatial gap-filling framework using two univariate methods: DCT-PLS (Discrete Cosine Transform-based Penalized Least Square regression) and FMM (Fast Marching Method) inpainting. We conducted a feasibility test for the hourly AOD product from AHI (Advanced Himawari Imager) between January 1 and December 31, 2019, and compared the accuracy statistics of the two spatial gap-filling methods. When the null-pixel area was not very large (null-pixel ratio < 0.6), the DCT-PLS and FMM techniques showed high accuracy of CC=0.988 (MAE=0.020) and CC=0.980 (MAE=0.028), respectively. Together with AI-based gap-filling methods using extra explanatory variables, the DCT-PLS and FMM techniques can soon be tested for the low-resolution images from the AMI (Advanced Meteorological Imager) of GK2A (Geostationary Korea Multi-purpose Satellite 2A), GEMS (Geostationary Environment Monitoring Spectrometer) and GOCI-2 (Geostationary Ocean Color Imager 2) of GK2B (Geostationary Korea Multi-purpose Satellite 2B), and the high-resolution images from the CAS500 (Compact Advanced Satellite 500) series.
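
The FMM branch of such a workflow can be approximated with OpenCV's Telea inpainting, which implements the Fast Marching Method; the AOD scaling range (0-2) and the inpainting radius below are assumptions, and the DCT-PLS branch is omitted. Note that the 8-bit round trip required by cv2.inpaint quantizes the AOD values slightly.

```python
# Sketch of Fast-Marching-Method gap-filling for a 2-D AOD grid using OpenCV.
import numpy as np
import cv2

def fmm_fill_aod(aod, aod_max=2.0, radius=5):
    """Fill NaN gaps in an AOD grid with Telea (FMM) inpainting."""
    mask = np.isnan(aod).astype(np.uint8)                # 1 where AOD is missing
    scaled = np.nan_to_num(aod, nan=0.0) / aod_max       # map to [0, 1]
    img8 = np.clip(scaled * 255, 0, 255).astype(np.uint8)
    filled8 = cv2.inpaint(img8, mask, radius, cv2.INPAINT_TELEA)
    return filled8.astype(np.float32) / 255.0 * aod_max

# Synthetic hourly AOD field with a cloud-shaped hole
rng = np.random.default_rng(2)
aod = rng.uniform(0.05, 0.6, (200, 200)).astype(np.float32)
aod[80:120, 90:140] = np.nan                             # missing block
filled = fmm_fill_aod(aod)
print("remaining NaNs:", int(np.isnan(filled).sum()))
```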

A Study on Daytime Transparent Cloud Detection through Machine Learning: Using GK-2A/AMI (기계학습을 통한 주간 반투명 구름탐지 연구: GK-2A/AMI를 이용하여)

  • Byeon, Yugyeong; Jin, Donghyun; Seong, Noh-hun; Woo, Jongho; Jeon, Uujin; Han, Kyung-Soo
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1181-1189 / 2022
  • Clouds are composed of tiny water droplets, ice crystals, or their mixtures suspended in the atmosphere and cover about two-thirds of the Earth's surface. Cloud detection in satellite images is a difficult task because clouds have reflectance characteristics similar to some ground objects or the ground surface. In contrast to thick clouds, which have distinct characteristics, thin transparent clouds show weak contrast against the background in satellite images and appear mixed with the ground surface. To overcome this limitation, this study conducted cloud detection focusing on transparent clouds using machine learning techniques (Random Forest [RF] and Convolutional Neural Networks [CNN]). As reference data, the Cloud Mask and Cirrus Mask from the MOD35 product of the Moderate Resolution Imaging Spectroradiometer (MODIS) were used, and the pixel ratio of the training data was configured to be about 1:1:1 for cloud, transparent cloud, and clear sky, so that transparent-cloud pixels were adequately represented. In the qualitative comparison, both RF and CNN successfully detected various types of clouds, including transparent clouds, and RF+CNN, which mixes the results of the RF and CNN models, performed cloud detection well and was confirmed to improve the limitations of the individual models. Quantitatively, the overall accuracy (OA) was 92% for RF, 94.11% for CNN, and 94.29% for RF+CNN.
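
A rough sketch of the Random Forest branch with a balanced 1:1:1 pixel sample follows, using scikit-learn; the per-pixel features, class labels, and sample sizes are synthetic placeholders, not the MOD35-derived training data used in the paper.

```python
# Illustrative Random Forest pixel classifier for clear / transparent cloud / cloud.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_per_class = 2000
# Synthetic 6-feature pixels (e.g. visible/NIR/IR channel values), one cluster per class
X = np.vstack([rng.normal(mu, 1.0, (n_per_class, 6)) for mu in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], n_per_class)   # 0: clear sky, 1: transparent cloud, 2: cloud

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, rf.predict(X_te)))
```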

A Study on the GK2A/AMI Image Based Cold Water Detection Using Convolutional Neural Network (합성곱신경망을 활용한 천리안위성 2A호 영상 기반의 동해안 냉수대 감지 연구)

  • Park, Sung-Hwan; Kim, Dae-Sun; Kwon, Jae-Il
    • Korean Journal of Remote Sensing / v.38 no.6_2 / pp.1653-1661 / 2022
  • In this study, the classification of cold water and normal water was performed based on Geo-Kompsat 2A images. Daily mean sea surface temperature products provided by the National Meteorological Satellite Center (NMSC) were used, and a convolutional neural network (CNN) deep learning technique was applied as the classification algorithm. Cold water occurrence data provided by the National Institute of Fisheries Science (NIFS) from 2019 to 2022 were used as the cold-water class. After training, the probability of detection was 82.5% and the false alarm ratio was 54.4%. Through misclassification analysis, it was confirmed that cloud-covered areas and more accurate training data should be considered in future work.
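
A minimal sketch of a small binary CNN for temperature patches and of the two reported verification scores (probability of detection and false alarm ratio) follows; the patch size, network shape, and confusion counts are illustrative assumptions, not the paper's configuration or results.

```python
# Illustrative binary CNN classifier and POD/FAR computation.
import tensorflow as tf

def build_cnn(patch_size=16):
    """A small binary CNN for classifying single-channel temperature patches."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(patch_size, patch_size, 1)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

def pod_far(hits, misses, false_alarms):
    """Probability of detection and false alarm ratio for a binary detector."""
    return hits / (hits + misses), false_alarms / (hits + false_alarms)

model = build_cnn()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

pod, far = pod_far(hits=80, misses=20, false_alarms=60)   # arbitrary example counts
print(f"POD = {pod:.1%}, FAR = {far:.1%}")
```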