• Title/Summary/Keyword: Cloud-free image

INVESTIGATION OF CLOUD COVERAGE OVER ASIA WITH NOAA AVHRR TIME SERIES

• Takeuchi, Wataru;Yasuoka, Yoshifumi
    • Proceedings of the KSRS Conference
    • /
    • 2005.10a
    • /
    • pp.26-29
    • /
    • 2005
• In order to compute cloud coverage statistics over the Asian region, an operational scheme for masking cloud-contaminated pixels in Advanced Very High Resolution Radiometer (AVHRR) daytime data was developed, evaluated and presented. Dynamic thresholding was used with channels 1, 2 and 3 to automatically create a cloud mask for a single image. Then 10-day cloud coverage imagery was generated over the whole Asian region along with cloud-free composite imagery. Finally, monthly statistics were computed from the derived cloud coverage imagery in terms of land cover and country. As a result, it was found that 20 days are required to acquire cloud-free data over the whole of Asia using NOAA AVHRR. The 10-day cloud coverage and cloud-free composite imagery derived in this research are available via the web site http://webpanda.iis.u-tokyo.ac.jp/CloudCover/.
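
For concreteness, a minimal sketch of per-image dynamic thresholding over AVHRR channels 1, 2 and 3 follows. The percentile-based threshold selection is an assumption for illustration; the paper's tuned decision rules are not reproduced here.

```python
import numpy as np

def dynamic_cloud_mask(ch1, ch2, ch3):
    """Flag cloud-contaminated AVHRR daytime pixels (illustrative).

    ch1, ch2: visible/near-infrared reflectance arrays; ch3: mid-infrared
    channel. Thresholds are derived per image from the data itself
    (hence "dynamic"); the percentile offsets below are assumptions,
    not the paper's tuned values.
    """
    bright = (ch1 > np.percentile(ch1, 75)) & (ch2 > np.percentile(ch2, 75))
    ch3_cloudy = ch3 > np.percentile(ch3, 75)  # clouds reflect strongly at 3.7 um by day
    return bright | ch3_cloudy                 # True where cloud is suspected
```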

Combining Conditional Generative Adversarial Network and Regression-based Calibration for Cloud Removal of Optical Imagery

  • Kwak, Geun-Ho;Park, Soyeon;Park, No-Wook
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.1357-1369
    • /
    • 2022
• Cloud removal is an essential image processing step for any task requiring time-series optical images, such as vegetation monitoring and change detection. This paper presents a two-stage cloud removal method that combines conditional generative adversarial networks (cGANs) with regression-based calibration to construct a cloud-free time-series optical image set. In the first stage, the cGANs generate initial prediction results using quantitative relationships between optical and synthetic aperture radar images. In the second stage, the relationships between the predicted results and the actual values in non-cloud areas are first quantified via random forest-based regression modeling and then used to calibrate the cGAN-based prediction results. The potential of the proposed method was evaluated through a cloud removal experiment using Sentinel-2 and COSMO-SkyMed images in the rice cultivation area of Gimje. The cGAN model could effectively predict the reflectance values in the cloud-contaminated rice fields where severe changes in physical surface conditions occurred. Moreover, the regression-based calibration in the second stage improved the prediction accuracy compared with a regression-based cloud removal method using a supplementary image that is temporally distant from the target image. These experimental results indicate that the proposed method can be effectively applied to restore cloud-contaminated areas when cloud-free optical images are unavailable for environmental monitoring.
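
The second-stage calibration can be pictured as follows: fit a regression on clear pixels mapping the cGAN prediction to the observed reflectance, then apply it under the cloud mask. This is a minimal single-band sketch; the array layout, hyperparameters, and per-band treatment are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def calibrate_cgan_prediction(pred, observed, cloud_mask):
    """Learn pred -> observed on clear pixels, then correct the cGAN
    prediction under the cloud mask (single band, for illustration).

    pred, observed: 2-D reflectance arrays; cloud_mask: True over cloud.
    """
    clear = ~cloud_mask
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(pred[clear].reshape(-1, 1), observed[clear])
    restored = observed.copy()
    restored[cloud_mask] = rf.predict(pred[cloud_mask].reshape(-1, 1))
    return restored
```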

Cloud Removal Using Gaussian Process Regression for Optical Image Reconstruction

  • Park, Soyeon;Park, No-Wook
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.4
    • /
    • pp.327-341
    • /
    • 2022
• Cloud removal is often required to construct time-series sets of optical images for environmental monitoring. In regression-based cloud removal, the selection of an appropriate regression model and the impact analysis of the input images significantly affect the prediction performance. This study evaluates the potential of Gaussian process (GP) regression for cloud removal and also analyzes the effects of cloud-free optical images and spectral bands on prediction performance. Unlike other machine learning-based regression models, GP regression provides uncertainty information and automatically optimizes hyperparameters. An experiment using Sentinel-2 multi-spectral images was conducted for cloud removal in two agricultural regions. The prediction performance of GP regression was compared with that of random forest (RF) regression. Various combinations of input images and multi-spectral bands were considered for quantitative evaluation. The experimental results showed that using multi-temporal images with multi-spectral bands as inputs achieved the best prediction accuracy. Highly correlated adjacent multi-spectral bands and temporally correlated multi-temporal images resulted in improved prediction accuracy. The prediction performance of GP regression improved significantly over that of RF regression when predicting the near-infrared band. Estimating the distribution of the input data in GP regression could reflect the variations in the considered spectral band over a broader range. In particular, GP regression was superior to RF regression in reproducing structural patterns at both sites in terms of structural similarity. In addition, the uncertainty information provided by GP regression showed a reasonable similarity to the prediction errors for some sub-areas, indicating that uncertainty estimates may be used to measure the quality of the prediction results. These findings suggest that GP regression could be beneficial for cloud removal and optical image reconstruction. In addition, the impact analysis of the input images provides guidelines for selecting optimal images for regression-based cloud removal.
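
A minimal GP-regression sketch using scikit-learn: the predictors are stacked cloud-free bands, the target is the band to be restored, and `return_std` yields the per-pixel uncertainty the abstract mentions. The kernel choice and the need to subsample training pixels are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def gp_cloud_removal(X_clear, y_clear, X_cloudy):
    """Predict a cloud-affected band from cloud-free companion images.

    X_*: (n_pixels, n_features) stacks of multi-temporal / multi-spectral
    predictors; y_clear: target-band values at clear pixels. Kernel
    hyperparameters are optimized automatically via the marginal
    likelihood; consider subsampling X_clear, since GP training scales
    as O(n^3) in the number of training pixels.
    """
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X_clear, y_clear)
    mean, std = gp.predict(X_cloudy, return_std=True)  # std = uncertainty map
    return mean, std
```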

AVHRR MOSAIC IMAGE DATA SET FOR ASIAN REGION

  • Yokoyama, Ryuzo;Lei, Liping;Purevdorj, Ts.;Tanba, Sumio
    • Proceedings of the KSRS Conference
    • /
    • 1999.11a
    • /
    • pp.285-289
    • /
    • 1999
• A processing system to produce a cloud-free composite image data set was developed. In the process, a fine geometric correction based on orbit parameters and ground control points and a radiometric correction based on the 6S code are applied. Presently, using AVHRR image data received at Tokyo, Okinawa, Ulaanbaatar and Bangkok, a data set of 10-day composite images covering almost the whole Asian region is being produced.
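
The compositing rule is not spelled out in the abstract; the standard AVHRR approach is maximum-value compositing (MVC), which the sketch below assumes: per pixel, keep the observation with the highest NDVI over the 10-day window.

```python
import numpy as np

def ten_day_composite(red_stack, nir_stack):
    """Maximum-value composite over a 10-day stack of corrected scenes.

    red_stack, nir_stack: (n_days, H, W) reflectance arrays after
    geometric and radiometric correction. MVC keeps, per pixel, the
    day with the highest NDVI; MVC itself is an assumption here.
    """
    ndvi = (nir_stack - red_stack) / (nir_stack + red_stack + 1e-9)
    best_day = np.argmax(ndvi, axis=0)             # (H, W) day indices
    rows, cols = np.indices(best_day.shape)
    return red_stack[best_day, rows, cols], nir_stack[best_day, rows, cols]
```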

Vision and Lidar Sensor Fusion for VRU Classification and Tracking in the Urban Environment

  • Kim, Yujin;Lee, Hojun;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association
    • /
    • v.13 no.4
    • /
    • pp.7-13
    • /
    • 2021
• This paper presents a vulnerable road user (VRU) classification and tracking algorithm using a vision and LiDAR sensor fusion method for urban autonomous driving. Classification and tracking of vulnerable road users such as pedestrians, bicycles, and motorcycles are essential for autonomous driving in complex urban environments. In this paper, a real-time object image detection algorithm, YOLO, and an object tracking algorithm based on the LiDAR point cloud are fused at a high level. The proposed algorithm consists of four parts. First, the object bounding boxes on the pixel coordinate system, obtained from YOLO, are transformed into the local coordinate system of the subject vehicle using a homography matrix. Second, the LiDAR point cloud is clustered based on Euclidean distance and the clusters are associated using GNN. In addition, the states of the clusters, including position, heading angle, velocity, and acceleration, are estimated in real time using a geometric model-free approach (GMFA). Finally, each LiDAR track is matched with a vision track using the angle information of the transformed vision track and assigned a classification id. The proposed fusion algorithm is evaluated via real vehicle tests in the urban environment.
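
Two of the four parts lend themselves to a short sketch: projecting a YOLO box into the vehicle frame with the homography, and matching LiDAR tracks by bearing angle. The calibration matrix, coordinate conventions, and angle tolerance are assumptions.

```python
import numpy as np

def pixel_to_local(H, u, v):
    """Map an image point (e.g., a bounding-box bottom center) to the
    subject vehicle's ground plane using a pre-calibrated homography H (3x3)."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]                             # (x, y) in the local frame

def match_by_bearing(vision_xy, lidar_xy_list, max_diff_deg=5.0):
    """Associate a projected vision detection with the LiDAR track whose
    bearing angle is closest, within an assumed tolerance."""
    ang_v = np.arctan2(vision_xy[1], vision_xy[0])
    diffs = []
    for x, y in lidar_xy_list:
        d = np.arctan2(y, x) - ang_v
        diffs.append(abs((d + np.pi) % (2 * np.pi) - np.pi))  # wrap to [-pi, pi]
    i = int(np.argmin(diffs))
    return i if np.degrees(diffs[i]) < max_diff_deg else None
```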

Region Selective Transmission Method of MMT based 3D Point Cloud Content

  • Kim, Doohwan;Kim, Junsik;Kim, Kyuheon
    • Journal of Broadcast Engineering
    • /
    • v.25 no.1
    • /
    • pp.25-35
    • /
    • 2020
• Recently, with the development of image processing technology as well as hardware performance, research on 3D point cloud processing technology that provides users with a free viewing angle and stereoscopic effect has continued in various fields. Point cloud technology, a type of 3D representation, has attracted attention in various fields because points can be acquired and expressed precisely. However, since hundreds of thousands to millions of points are required to represent one 3D point cloud content, a larger amount of storage space is required than for conventional 2D content. For this reason, MPEG (Moving Picture Experts Group), an international standardization organization, is continuing to research how to efficiently compress, store, and transmit 3D point cloud content to users. In this paper, a V-PCC (Video-based Point Cloud Compression) bitstream generated by the V-PCC encoder proposed by the MPEG-I (Immersive) group is packaged into MPUs (Media Processing Units) defined by the MMT (MPEG Media Transport) standard. In addition, by extending the signaling messages defined in the MMT standard, parameters for region-based segmented transmission of 3D point cloud content and quality parameters reflecting its characteristics are defined, so that quality can be selectively determined according to the user's request. Finally, we verify the result through the design and implementation of a verification platform based on the proposed technology.
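
As a rough illustration of the extended signaling idea (not the MMT standard's actual message syntax), one can imagine per-region entries carrying a quality level that the client requests selectively:

```python
from dataclasses import dataclass

@dataclass
class RegionSignaling:
    """Hypothetical stand-in for the extended MMT signaling fields the
    paper describes: one entry per spatial region of the point cloud.
    Field names and types are illustrative assumptions."""
    region_id: int
    bbox: tuple          # (x_min, y_min, z_min, x_max, y_max, z_max)
    quality_level: int   # offered quality, e.g., 0 = lowest

def select_regions(signaling, requested):
    """Server-side region-selective delivery: return the regions whose
    offered quality satisfies the client's per-region request."""
    return [s.region_id for s in signaling
            if s.quality_level >= requested.get(s.region_id, 0)]
```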

A STUDY ON INTER-RELATIONSHIP OF VEGETATION INDICES USING IKONOS AND LANDSAT-7 ETM+ IMAGERY

  • Yun, Young-Bo;Lee, Sung-Hun;Cho, Seong-Ik;Cho, Woo-Sug
    • Proceedings of the KSRS Conference
    • /
    • v.2
    • /
    • pp.852-855
    • /
    • 2006
• There is an increasing need to use data from different sensors in order to maximize the chances of obtaining a cloud-free image and to meet timely requirements for information. However, the use of data from multiple sensor systems depends on comprehensive relationships between sensors of different types. Indeed, a study of inter-sensor relationships greatly advances the effective use of remotely sensed data from multiple sensors. This paper is concerned with relationships between sensors of different types for vegetation indices (VI). The study was conducted using IKONOS and Landsat-7 ETM+ images. IKONOS and Landsat-7 ETM+ images of the same or about the same dates were acquired. The Landsat-7 ETM+ images were resampled to coincide with the pixel size of IKONOS. Inter-relationships of vegetation indices between images were analyzed using at-satellite reflectance obtained by converting image digital numbers (DN). A topographic normalization method was applied to all images to reduce topographic effects in the digital imagery. Inter-sensor model equations between the two sensors were also developed and applied to another study region. As a result, the relational equations can be used to compute or interpret the VI of one sensor using the VI of another sensor.
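
The DN-to-reflectance step is standard and can be written directly; the inter-sensor model is then a least-squares fit between co-registered VI values (the paper's actual coefficients are not reproduced here):

```python
import numpy as np

def dn_to_toa_reflectance(dn, gain, bias, esun, d, sun_elev_deg):
    """Standard at-satellite (TOA) reflectance conversion:
    L = gain*DN + bias, then rho = pi * L * d^2 / (ESUN * cos(theta_s)),
    with gain/bias and ESUN taken from the sensor's calibration metadata
    and d the Earth-Sun distance in astronomical units."""
    radiance = gain * dn + bias
    theta_s = np.radians(90.0 - sun_elev_deg)       # solar zenith angle
    return np.pi * radiance * d**2 / (esun * np.cos(theta_s))

def fit_inter_sensor_vi(vi_a, vi_b):
    """Least-squares line vi_b ~ slope*vi_a + intercept over
    co-registered pixels of the two sensors."""
    slope, intercept = np.polyfit(vi_a.ravel(), vi_b.ravel(), 1)
    return slope, intercept
```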

Analysis of the Cloud Removal Effect of Sentinel-2A/B NDVI Monthly Composite Images for Rice Paddy and High-altitude Cabbage Fields

  • Eun, Jeong;Kim, Sun-Hwa;Kim, Taeho
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.6_1
    • /
    • pp.1545-1557
    • /
    • 2021
• Crops show sensitive spectral characteristics according to their species and growth conditions, and although frequent observation is required, especially in summer, it is difficult to utilize optical satellite images due to the rainy season. To solve this problem, the Constrained Cloud-Maximum Normalized Difference Vegetation Index Composite (CC-MNC) algorithm was developed to generate periodic composite images with minimal cloud effects. In this study, using this method, monthly Sentinel-2A/B NDVI composite images were produced for rice paddies and high-altitude cabbage fields from 2019 to 2021. In August 2020, which received 200 mm more precipitation than other periods, the effect of clouds was also significant in the MODIS NDVI 16-day composite product. Except for this period, the CC-MNC method was able to reduce the cloud ratio from 45.4% in the original daily images to 14.9%. In the case of rice paddies, there was no significant difference between Sentinel-2A/B and MODIS NDVI values. In addition, it was possible to monitor the rice growth cycle well even with a revisit cycle of 5 days. In the case of high-altitude cabbage fields, Sentinel-2A/B captured the short growth cycle of cabbage well, but MODIS showed limitations in spatial resolution. The CC-MNC method also used cloud pixels in compositing at harvest time, suggesting that the View Zenith Angle (VZA) threshold needs to be adjusted for the domestic region.
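
A constrained maximum-NDVI composite in the spirit of CC-MNC can be sketched as below; the exact constraints and the published VZA threshold are assumptions:

```python
import numpy as np

def cc_mnc_composite(ndvi_stack, cloud_stack, vza_stack, vza_max=30.0):
    """Per pixel, take the maximum NDVI among observations that are
    cloud-free and within a view-zenith-angle limit; fall back to the
    unconstrained maximum when no observation qualifies (which is how
    cloud pixels can leak into the composite, as the abstract notes).

    ndvi_stack: (n_obs, H, W); cloud_stack: boolean, True over cloud;
    vza_stack: view zenith angles in degrees. vza_max is an assumed value.
    """
    masked = np.where(cloud_stack | (vza_stack > vza_max), -np.inf, ndvi_stack)
    best = masked.max(axis=0)
    fallback = ndvi_stack.max(axis=0)
    return np.where(np.isneginf(best), fallback, best)
```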

Integrating UAV Remote Sensing with GIS for Predicting Rice Grain Protein

  • Sarkar, Tapash Kumar;Ryu, Chan-Seok;Kang, Ye-Seong;Kim, Seong-Heon;Jeon, Sae-Rom;Jang, Si-Hyeong;Park, Jun-Woo;Kim, Suk-Gu;Kim, Hyun-Jin
    • Journal of Biosystems Engineering
    • /
    • v.43 no.2
    • /
    • pp.148-159
    • /
    • 2018
• Purpose: Unmanned aerial vehicle (UAV) remote sensing was applied to test various vegetation indices and build prediction models of the protein content of rice for monitoring grain quality and proper management practice. Methods: Image acquisition was carried out using NIR (Green, Red, NIR), RGB, and RE (Blue, Green, Red-edge) cameras mounted on a UAV. Sampling was done synchronously at geo-referenced points and GPS locations were recorded. Paddy samples were air-dried to 15% moisture content, dehulled and milled to 92% milling yield, and their protein content was measured by near-infrared spectroscopy. Results: An artificial neural network showed the best performance, with $R^2$ (coefficient of determination) of 0.740, NSE (Nash-Sutcliffe model efficiency coefficient) of 0.733, and RMSE (root mean square error) of 0.187% considering all 54 samples, compared with the models developed by PR (polynomial regression), SLR (simple linear regression), and PLSR (partial least squares regression). PLSR calibration models showed almost the same result as PR, with 0.663 ($R^2$) and 0.169% (RMSE) for cloud-free samples and 0.491 ($R^2$) and 0.217% (RMSE) for cloud-shadowed samples. However, the validation models performed poorly. This study revealed that there is a highly significant correlation between NDVI (normalized difference vegetation index) and protein content in rice. For the cloud-free samples, the SLR models showed $R^2 = 0.553$ and RMSE = 0.210%; for cloud-shadowed samples, $R^2 = 0.479$ and RMSE = 0.225%. Conclusion: There is a significant correlation between spectral bands and grain protein content. Artificial neural networks have a strong advantage in fitting nonlinear problems when a sigmoid activation function is used in the hidden layer. Quantitatively, the neural network model obtained a higher-precision result, with a mean absolute relative error (MARE) of 2.18% and RMSE of 0.187%.
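
The network described (a sigmoid hidden layer regressing protein content on vegetation indices) maps onto a small MLP; the layer size, solver settings, and NDVI-only input below are illustrative assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_protein_model(ndvi, protein):
    """Small neural network with a logistic (sigmoid) hidden layer,
    echoing the paper's activation choice; other settings are assumed."""
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(8,), activation="logistic",
                     max_iter=5000, random_state=0),
    )
    model.fit(np.asarray(ndvi).reshape(-1, 1), np.asarray(protein))
    return model
```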

Real-time 3D Volumetric Model Generation using Multiview RGB-D Camera (다시점 RGB-D 카메라를 이용한 실시간 3차원 체적 모델의 생성)

  • Kim, Kyung-Jin;Park, Byung-Seo;Kim, Dong-Wook;Kwon, Soon-Chul;Seo, Young-Ho
    • Journal of Broadcast Engineering
    • /
    • v.25 no.3
    • /
    • pp.439-448
    • /
    • 2020
• In this paper, we propose a modified optimization algorithm for point cloud matching of multi-view RGB-D cameras. In the computer vision field, it is very important to accurately estimate the position of the camera. The 3D model generation methods proposed in previous research require a large number of cameras or expensive 3D cameras, and methods that obtain the external parameters of the camera from 2D images have large errors. In this paper, we propose a matching technique for generating 3D point cloud and mesh models that can provide an omnidirectional free viewpoint using eight low-cost RGB-D cameras. The method uses depth map-based function optimization with RGB images and obtains coordinate transformation parameters that can generate a high-quality 3D model without requiring initial parameters.
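
The registration step can be pictured as direct function optimization of a rigid transform; the cost below (mean squared nearest-neighbor distance) is an assumption standing in for the paper's depth map-based objective:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation

def align_camera(src, ref):
    """Estimate a rigid transform taking one camera's point cloud (src,
    shape (N, 3)) onto a reference cloud (ref, shape (M, 3)) by direct
    optimization, starting from the identity (no initial parameters)."""
    tree = cKDTree(ref)

    def cost(p):                                  # p = [rotvec(3), t(3)]
        moved = Rotation.from_rotvec(p[:3]).apply(src) + p[3:]
        dists, _ = tree.query(moved)
        return np.mean(dists ** 2)

    res = minimize(cost, np.zeros(6), method="Powell")
    return res.x[:3], res.x[3:]                   # rotation vector, translation
```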