• Title/Summary/Keyword: cloud mask


SPOT/VEGETATION-based Algorithm for the Discrimination of Cloud and Snow (SPOT/VEGETATION 영상을 이용한 눈과 구름의 분류 알고리즘)

  • Han Kyung-Soo;Kim Young-Seup
    • Korean Journal of Remote Sensing
    • /
    • v.20 no.4
    • /
    • pp.235-244
    • /
    • 2004
  • This study assesses a proposed algorithm for discriminating cloudy pixels from snowy pixels using the visible, near-infrared, and shortwave-infrared channels of the VEGETATION-1 sensor aboard the SPOT-4 satellite. Traditional threshold algorithms for cloud and snow masks did not show very good accuracy. Instead of these independent masking procedures, a K-Means clustering scheme is employed for cloud/snow discrimination in this study. The pixels used in clustering were selected through an integration of two threshold algorithms, which together gather the snow and cloud pixels. This simplifies the clustering procedure and improves accuracy compared with clustering the full image. The paper also compares the results with threshold methods for snow cover and clouds, and assesses the discrimination capability of the VEGETATION channels. The quality of the cloud and snow masks improved further when the present algorithm was implemented: discrimination errors were reduced by 19.4% for the cloud mask and 9.7% for the snow mask compared with the traditional methods.
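The two-stage scheme in this abstract (threshold preselection, then clustering of only the candidate pixels) can be sketched as follows. This is a minimal illustration, not the paper's code: the band values, thresholds, and the use of the SWIR band as the discriminating feature are invented for the example (snow is strongly absorbing in SWIR while cloud stays bright, which is the physical contrast the VEGETATION channels offer).

```python
# Illustrative sketch (not the paper's algorithm): discriminate cloud vs. snow
# by clustering only pixels that pass a brightness preselection, instead of
# clustering the full image. All values and thresholds are synthetic.

def kmeans_1d(values, k=2, iters=20):
    """Tiny 1-D K-Means: returns per-value cluster labels and centers."""
    centers = [min(values), max(values)]  # simple initialisation
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: abs(v - centers[c])) for v in values]
        for c in range(k):
            members = [v for v, l in zip(values, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels, centers

# Each pixel: (visible, near-IR, shortwave-IR) reflectance -- synthetic values.
pixels = [(0.80, 0.70, 0.60), (0.75, 0.70, 0.55),   # bright in SWIR -> cloud-like
          (0.70, 0.60, 0.10), (0.65, 0.60, 0.08)]   # dark in SWIR  -> snow-like

# Step 1: candidate selection, standing in for the union of two threshold tests.
candidates = [p for p in pixels if p[0] > 0.4]       # bright in the visible

# Step 2: cluster candidates on SWIR reflectance, where snow and cloud differ.
labels, centers = kmeans_1d([p[2] for p in candidates])
snow_cluster = min(range(2), key=lambda c: centers[c])  # snow absorbs in SWIR
classes = ["snow" if l == snow_cluster else "cloud" for l in labels]
```

Restricting the clustering to preselected pixels is what keeps the procedure simple: the two clusters are guaranteed to be drawn from the snow/cloud population rather than from arbitrary land cover.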

Derivation of SST using MODIS direct broadcast data

  • Chung, Chu-Yong;Ahn, Myoung-Hwan;Koo, Ja-Min;Sohn, Eun-Ha;Chung, Hyo-Sang
    • Proceedings of the KSRS Conference
    • /
    • 2002.10a
    • /
    • pp.638-643
    • /
    • 2002
  • MODIS (MODerate-resolution Imaging Spectroradiometer), onboard the first Earth Observing System (EOS) satellite, Terra, was launched successfully at the end of 1999. Direct broadcast MODIS data have been received and utilized by the Korea Meteorological Administration (KMA) since February 2001. This study introduces uses of these data, especially the derivation of sea surface temperature (SST). To produce the MODIS SST operationally, we used a simple cloud mask algorithm and the MCSST algorithm. Using the simple cloud mask and taking the NOAA daily SST as truth, a new set of MCSST coefficients was derived. We then analyzed the current NASA PFSST and the new MCSST algorithms using collocated buoy observation data. Although the number of collocated data points was limited, both algorithms correlated highly with the buoy SST, though with somewhat larger bias and RMS difference than expected, and PFSST uniformly underestimated the SST. By further analyzing archived and future data, we plan to derive better MCSST coefficients and apply them to data from MODIS on Aqua, the second EOS satellite. Use of the MODIS standard cloud mask algorithm to obtain better SST coefficients is also being prepared.
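The MCSST retrieval mentioned above is a split-window regression: SST is modelled as a linear function of the 11 µm brightness temperature, the 11−12 µm channel difference, and a satellite-zenith-angle term. A minimal sketch follows; the coefficients and the crude cold-pixel cloud test are invented placeholders, not the KMA-derived set.

```python
import math

# Sketch of an MCSST-style split-window retrieval. The coefficients below are
# hypothetical; in practice they are fitted against a reference SST (here, the
# paper used NOAA daily SST as truth).
A0, A1, A2, A3 = -272.15, 1.0, 2.5, 0.8   # made-up regression coefficients

def mcsst(t11, t12, sat_zenith_deg):
    """Return SST in deg C from 11/12-um brightness temperatures in Kelvin."""
    sec_term = 1.0 / math.cos(math.radians(sat_zenith_deg)) - 1.0
    return A0 + A1 * t11 + A2 * (t11 - t12) + A3 * (t11 - t12) * sec_term

def simple_cloud_mask(t11, threshold_k=270.0):
    """Crude stand-in cloud test: very cold 11-um pixels are flagged as cloud."""
    return t11 < threshold_k

t11, t12 = 290.0, 288.5            # synthetic clear-sky brightness temperatures
if not simple_cloud_mask(t11):
    sst = mcsst(t11, t12, sat_zenith_deg=30.0)
```

The (t11 − t12) difference carries the water-vapour correction, and the secant term compensates for the longer atmospheric path at large viewing angles.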


Development of Cloud Detection Method with Geostationary Ocean Color Imagery for Land Applications (GOCI 영상의 육상 활용을 위한 구름 탐지 기법 개발)

  • Lee, Hwa-Seon;Lee, Kyu-Sung
    • Korean Journal of Remote Sensing
    • /
    • v.31 no.5
    • /
    • pp.371-384
    • /
    • 2015
  • Although GOCI has potential for land surface monitoring, there have been only a few land applications, possibly due to the lack of reliable GOCI-derived land products for end-users. For land applications, it is often essential to provide cloud-free composites over land surfaces. In this study, we propose a cloud detection method that is essential for producing cloud-free composites of GOCI reflectance and vegetation index. Since GOCI lacks the SWIR and TIR spectral bands that are very effective for separating clouds from other land cover types, we developed a multi-temporal approach to detect clouds. The proposed method consists of three sequential spectral tests. First, a band 1 reflectance threshold is applied to separate confident clear pixels. In the second step, thick cloud is detected by the ratio of band 1 to band 8 reflectance (b1/b8). In the third step, the average of the b1/b8 ratio over three consecutive days is used to detect thin cloud, which has mixed spectral characteristics of both cloud and land surfaces. The method yields four cloudiness classes (thick cloud, thin cloud, probably clear, confident clear). It was validated against MODIS cloud mask products obtained at the same time as the GOCI data acquisition. The percentages of cloudy and cloud-free pixels from GOCI and MODIS agree to within 10% RMSE, and the spatial distributions of clouds detected from the GOCI images were similar to the MODIS cloud mask products.
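The three sequential tests can be sketched as a single per-pixel decision function. All threshold values below are invented for illustration; only the structure (band 1 cutoff, single-day b1/b8 ratio, 3-day mean ratio) follows the abstract.

```python
# Sketch of the three-step multi-temporal test described above. Thresholds are
# hypothetical; band names follow GOCI's b1 and b8.

def classify_pixel(b1_today, b8_today, b1_over_b8_3days):
    """Return one of the four cloudiness classes for a single pixel.

    b1_over_b8_3days: b1/b8 ratio for the same pixel on 3 consecutive days.
    """
    # Step 1: dark pixels in band 1 are confidently clear.
    if b1_today < 0.1:
        return "confident clear"
    # Step 2: a high single-day b1/b8 ratio indicates thick cloud.
    if b1_today / b8_today > 1.5:
        return "thick cloud"
    # Step 3: a raised 3-day average ratio catches thin, mixed-signal cloud.
    if sum(b1_over_b8_3days) / len(b1_over_b8_3days) > 1.2:
        return "thin cloud"
    return "probably clear"

dark   = classify_pixel(0.05, 0.20, [0.9, 1.0, 0.9])   # -> "confident clear"
bright = classify_pixel(0.40, 0.20, [2.0, 1.9, 2.1])   # -> "thick cloud"
hazy   = classify_pixel(0.20, 0.18, [1.3, 1.3, 1.4])   # -> "thin cloud"
```

The multi-temporal third step is what substitutes for the missing SWIR/TIR bands: a land pixel's ratio fluctuates day to day, while persistent thin cloud keeps the average elevated.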

VARIABILITY OF THE TRENDS OBSERVED FROM SEAWIFS-DERIVED SUB-MICRON AEROSOL FRACTION OVER EAST ASIAN SEAS BASED ON DIFFERENT CLOUD MASKING ALGORITHMS

  • Li, Li-Ping;Fukushima, Hajime;Takeno, Keisuke
    • Proceedings of the KSRS Conference
    • /
    • v.1
    • /
    • pp.316-319
    • /
    • 2006
  • Monthly mean aerosol parameters derived from 1998-2004 SeaWiFS observations over East Asian waters are analyzed. SeaWiFS GAC Level 1 data covering the Northeast Asian area were collected and processed with the standard atmospheric correction algorithm released by the SeaWiFS Project to produce daily aerosol optical thickness (AOT) and Ångström exponent imagery. Monthly mean AOT and Ångström exponent values were extracted from the daily composite images for six study areas chosen from the waters surrounding Japan. A slight increasing trend in the Ångström exponent is found and interpreted as an increase of about 4-5% in the submicron fraction of aerosol optical thickness at 550 nm. Two cloud screening methods, the standard SeaWiFS cloud masking method and one based on the local variance method, are applied in the SeaWiFS data processing to inspect the influence on the observed statistical uptrend that may be induced by the choice of cloud mask algorithm. The variability arising from the different cloud masking algorithms is discussed.


INVESTIGATION OF CLOUD COVERAGE OVER ASIA WITH NOAA AVHRR TIME SERIES

  • Takeuchi Wataru;Yasuoka Yoshifumi
    • Proceedings of the KSRS Conference
    • /
    • 2005.10a
    • /
    • pp.26-29
    • /
    • 2005
  • In order to compute cloud coverage statistics over the Asian region, an operational scheme for masking cloud-contaminated pixels in Advanced Very High Resolution Radiometer (AVHRR) daytime data was developed, evaluated and presented. Dynamic thresholding was used with channels 1, 2 and 3 to automatically create a cloud mask for a single image. Then 10-day cloud coverage imagery was generated over the whole Asian region along with cloud-free composite imagery. Finally, monthly statistics were computed from the derived cloud coverage imagery in terms of land cover and country. As a result, it was found that 20 days are required to acquire cloud-free data over the whole of Asia using NOAA AVHRR. The 10-day cloud coverage and cloud-free composite imagery derived in this research are available via the web site http://webpanda.iis.u-tokyo.ac.jp/CloudCover/.
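The coverage-counting and compositing steps can be sketched per pixel as below. This is a heavily simplified illustration: the per-image dynamic threshold is replaced by a fixed cutoff, and a minimum-reflectance composite stands in for the paper's cloud-free compositing; all data are synthetic.

```python
# Sketch of building a multi-day cloud-coverage count and a cloud-free
# composite from daily masks. Values are synthetic; only 4 days of a 3-pixel
# strip are shown for brevity.

days = [            # daily channel-1 reflectance per pixel
    [0.8, 0.1, 0.7],
    [0.1, 0.1, 0.9],
    [0.9, 0.8, 0.1],
    [0.1, 0.2, 0.1],
]
CLOUD_THRESHOLD = 0.5   # stand-in for the per-image dynamic threshold

# Per-pixel count of cloudy days over the period.
cloud_days = [sum(day[i] > CLOUD_THRESHOLD for day in days) for i in range(3)]

# Cloud-free composite: minimum reflectance over the period, since clouds
# are brighter than the underlying surface in channel 1.
composite = [min(day[i] for day in days) for i in range(3)]
```

Aggregating `cloud_days` by land-cover class or country then gives the monthly statistics the abstract describes.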


Class-Agnostic 3D Mask Proposal and 2D-3D Visual Feature Ensemble for Efficient Open-Vocabulary 3D Instance Segmentation (효율적인 개방형 어휘 3차원 개체 분할을 위한 클래스-독립적인 3차원 마스크 제안과 2차원-3차원 시각적 특징 앙상블)

  • Sungho Song;Kyungmin Park;Incheol Kim
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.7
    • /
    • pp.335-347
    • /
    • 2024
  • Open-vocabulary 3D point cloud instance segmentation (OV-3DIS) is a challenging visual task that segments a 3D scene point cloud into object instances of both base and novel classes. In this paper, we propose Open3DME, a novel model for OV-3DIS that addresses important design issues and overcomes limitations of existing approaches. First, to improve the quality of class-agnostic 3D masks, our model uses T3DIS, an advanced Transformer-based 3D point cloud instance segmentation model, as its mask proposal module. Second, to obtain semantically text-aligned visual features for each point cloud segment, our model extracts both 2D and 3D features from the point cloud and the corresponding multi-view RGB images using pretrained CLIP and OpenSeg encoders, respectively. Last, to make effective use of both the 2D and 3D visual features of each segment during label assignment, our model adopts a unique feature ensemble method. To validate the model, we conducted quantitative and qualitative experiments on the ScanNet-V2 benchmark dataset, demonstrating significant performance gains.

A Study on Daytime Transparent Cloud Detection through Machine Learning: Using GK-2A/AMI (기계학습을 통한 주간 반투명 구름탐지 연구: GK-2A/AMI를 이용하여)

  • Byeon, Yugyeong;Jin, Donghyun;Seong, Noh-hun;Woo, Jongho;Jeon, Uujin;Han, Kyung-Soo
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.1181-1189
    • /
    • 2022
  • Clouds are composed of tiny water droplets, ice crystals, or mixtures of both suspended in the atmosphere, and cover about two-thirds of the Earth's surface. Cloud detection in satellite images is a very difficult task because clouds have reflectance characteristics similar to some ground objects or the ground surface. In contrast to thick clouds, which have distinct characteristics, thin transparent clouds show weak contrast against the background in satellite images and appear mixed with the ground surface. To overcome this limitation, this study performed cloud detection focusing on transparent clouds using machine learning techniques (Random Forest [RF] and Convolutional Neural Networks [CNN]). As reference data, the Cloud Mask and Cirrus Mask in MOD35 data provided by the MOderate Resolution Imaging Spectroradiometer (MODIS) were used, and the pixel ratio of the training data was configured to be about 1:1:1 for cloud, transparent cloud, and clear sky, in consideration of transparent cloud pixels. In the qualitative comparison, both RF and CNN successfully detected various types of clouds, including transparent clouds, and RF+CNN, which mixes the results of the RF and CNN models, performed cloud detection well, confirming that the limitations of the individual models were mitigated. Quantitatively, the overall accuracy (OA) was 92% for RF, 94.11% for CNN, and 94.29% for RF+CNN.

A NEW METHOD OF MASKING CLOUD-AFFECTED PIXELS IN OCEAN COLOR IMAGERY BASED ON SPECTRAL SHAPE OF WATER REFLECTANCE

  • Fukushima, Hajime;Tamura, Jin;Toratani, Mitsuhiro;Murakami, Hiroshi
    • Proceedings of the KSRS Conference
    • /
    • v.1
    • /
    • pp.25-28
    • /
    • 2006
  • We propose a new method of masking cloud-affected pixels in satellite ocean color imagery such as that of GLI. These pixels, mostly found around cloud pixels or in scattered cloud areas, have anomalous features in either the chlorophyll-a estimate or the water reflectance. This artifact is most likely caused by residual error in the inter-band registration correction. Our method checks the pixel-wise 'soundness' of the spectral water reflectance Rw retrieved after atmospheric correction. First, we define two spectral ratios of water reflectance, IRR1 and IRR2, defined as Rw(B1)/Rw(B3) and Rw(B2)/Rw(B4) respectively, where B1-B4 stand for four consecutive visible bands. We show that an almost linear relation holds between log-scaled IRR1 and IRR2 for ship-measured Rw data from the SeaBAM in situ data set and for cloud-free GLI Level 2 sub-scenes. The proposed method exploits this relation, identifying those pixels that show significant discrepancy from it. We apply the method to ADEOS-II/GLI ocean color data to evaluate its performance over Level-2 data, which include different water types such as case 1, turbid case 2, and coccolithophore bloom waters.
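The soundness test reduces to checking a pixel's residual from the fitted log-linear relation between the two ratios. A minimal sketch, with an invented slope, intercept, and tolerance (the real line would be fitted to the SeaBAM in situ data):

```python
import math

# Sketch of the spectral-shape soundness test described above: a pixel whose
# log(IRR1)/log(IRR2) pair departs from the fitted line is flagged as
# cloud-affected. Slope, intercept, and tolerance are hypothetical.

def flag_cloud_affected(irr1, irr2, slope=1.0, intercept=0.0, tol=0.3):
    """Return True if the pixel departs from the log-linear IRR relation."""
    residual = math.log(irr2) - (slope * math.log(irr1) + intercept)
    return abs(residual) > tol

# IRR1 = Rw(B1)/Rw(B3), IRR2 = Rw(B2)/Rw(B4) -- synthetic pixel values.
sound   = flag_cloud_affected(2.0, 2.1)   # close to the relation: kept
flagged = flag_cloud_affected(2.0, 4.0)   # anomalous spectral shape: masked
```

The appeal of the test is that it needs no extra bands: it only asks whether the retrieved water reflectance spectrum is shaped like real water.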


Performance Analysis of Cloud-Net with Cross-sensor Training Dataset for Satellite Image-based Cloud Detection

  • Kim, Mi-Jeong;Ko, Yun-Ho
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.1
    • /
    • pp.103-110
    • /
    • 2022
  • Since satellite images generally include clouds, it is essential to detect or mask clouds before satellite image processing. Previous research detected clouds using their physical characteristics; more recently, cloud detection methods using deep learning techniques from the image segmentation field, such as CNNs or the modified U-Net, have been studied. Since image segmentation assigns a label to every pixel in an image, a precise pixel-based dataset is required for cloud detection, and obtaining an accurate training dataset is more important than the network configuration. Existing deep learning techniques used different training datasets, with test data extracted from the same intra-dataset, acquired by the same sensor and procedure as the training data; such differing datasets make it difficult to determine which network shows better overall performance. To verify the effectiveness of a cloud detection network such as Cloud-Net, two networks were trained using the cloud dataset from KOMPSAT-3 images provided by the AIHUB site and the L8-Cloud dataset from Landsat8 images publicly released by a Cloud-Net author. Test data from the intra-dataset of the KOMPSAT-3 cloud dataset were used for validating the networks. The simulation results show that the network trained with the KOMPSAT-3 cloud dataset outperforms the network trained with the L8-Cloud dataset. Because Landsat8 and KOMPSAT-3 satellite images have different GSDs, it is difficult to achieve good results in cross-sensor validation: a network can be superior on the intra-dataset yet inferior on cross-sensor data. Techniques that perform well on cross-sensor validation datasets need to be studied in the future.

Development of Cloud Detection Method Considering Radiometric Characteristics of Satellite Imagery (위성영상의 방사적 특성을 고려한 구름 탐지 방법 개발)

  • Won-Woo Seo;Hongki Kang;Wansang Yoon;Pyung-Chae Lim;Sooahm Rhee;Taejung Kim
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_1
    • /
    • pp.1211-1224
    • /
    • 2023
  • Clouds cause many difficult problems when observing land surface phenomena with optical satellites, such as national land observation, disaster response, and change detection. In addition, the presence of clouds affects not only the image processing stage but also the final data quality, so it is necessary to identify and remove them. Therefore, in this study, we developed a new cloud detection technique that automatically searches for and extracts the pixels closest to the spectral pattern of clouds in a satellite image, selects the optimal threshold, and produces a cloud mask based on that threshold. The technique consists of three main steps. In the first step, the Digital Number (DN) image is converted to top-of-atmosphere reflectance. In the second step, preprocessing such as Hue-Saturation-Value (HSV) transformation, triangle thresholding, and maximum likelihood classification is applied to the top-of-atmosphere reflectance image, and the threshold for generating the initial cloud mask is determined for each image. In the third, post-processing step, the noise in the initial cloud mask is removed and the cloud boundaries and interior are refined. As experimental data, CAS500-1 L2G images acquired over the Korean Peninsula from April to November, which show the diversity of the spatial and seasonal distribution of clouds, were used. To verify the performance of the proposed method, it was compared against results generated by a simple thresholding method. In the experiment, the proposed method detected clouds more accurately than the existing method by accounting for the radiometric characteristics of each image through the preprocessing step, and the influence of bright non-cloud objects (panel roofs, concrete roads, sand, etc.) was minimized. The proposed method showed a more than 30% improvement in F1-score over the existing method, but showed limitations in certain images containing snow.
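The triangle-thresholding step named in the second stage picks a per-image cutoff geometrically: draw a line from the histogram peak to its bright tail, and take the bin farthest (perpendicularly) from that line as the threshold. A minimal sketch on a synthetic brightness histogram (not the paper's implementation; the histogram values are invented):

```python
# Sketch of triangle thresholding for selecting a per-image cloud threshold
# from a brightness histogram. Histogram values are synthetic.

def triangle_threshold(hist):
    """Return the bin index maximising distance to the peak-to-tail line."""
    peak = max(range(len(hist)), key=lambda i: hist[i])
    tail = len(hist) - 1                       # assume the bright tail is cloud
    # Line from (peak, hist[peak]) to (tail, hist[tail]).
    dx, dy = tail - peak, hist[tail] - hist[peak]
    norm = (dx * dx + dy * dy) ** 0.5
    best, best_dist = peak, 0.0
    for i in range(peak, tail + 1):
        # Perpendicular distance from (i, hist[i]) to the line.
        dist = abs(dy * (i - peak) - dx * (hist[i] - hist[peak])) / norm
        if dist > best_dist:
            best, best_dist = i, dist
    return best

hist = [5, 40, 90, 60, 20, 6, 3, 2, 2, 1]   # dark surface peak, bright cloud tail
t = triangle_threshold(hist)                 # pixels in bins above t -> cloud
```

Because the threshold is derived from each image's own histogram, it adapts to per-scene radiometric differences, which is the point of the preprocessing stage described above.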