• Title/Summary/Keyword: Multi-temporal images

Search Results: 214

Selective Histogram Matching of Multi-temporal High Resolution Satellite Images Considering Shadow Effects in Urban Area (도심지역의 그림자 영향을 고려한 다시기 고해상도 위성영상의 선택적 히스토그램 매칭)

  • Yeom, Jun-Ho;Kim, Yong-Il
    • Journal of Korean Society for Geospatial Information Science / v.20 no.2 / pp.47-54 / 2012
  • Additional high-resolution satellite images of other periods or sites are essential for efficient city modeling and analysis. However, the same ground objects show radiometric inconsistency across different satellite images, which degrades the quality of image processing and analysis. Moreover, in urban areas, buildings, trees, bridges, and other artificial objects cast shadows that lower the performance of relative radiometric normalization. In this study, we therefore exclude shadow areas and propose a selective histogram matching method for image-based applications that requires no supplementary digital elevation model or geometric information about the sun and sensor. We first extract shadow objects using adjacency information from building-edge buffers together with spatial and spectral attributes derived from image segmentation. Outlier objects such as asphalt roads are then removed. Finally, selective histogram matching is performed on the shadow-masked multi-temporal QuickBird-2 images.
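The core radiometric step described above, histogram matching that ignores masked shadow pixels, can be illustrated with a minimal numpy sketch. The function name and interface are ours, not the paper's, and the shadow masks are assumed to be given:

```python
import numpy as np

def masked_histogram_match(source, reference, source_shadow, reference_shadow):
    """Match the histogram of `source` to `reference`, estimating both
    empirical CDFs from non-shadow pixels only (mask True = shadow)."""
    src_valid = np.sort(source[~source_shadow].ravel())
    ref_valid = np.sort(reference[~reference_shadow].ravel())
    # Quantile of every source pixel under the shadow-free source CDF.
    quantiles = np.searchsorted(src_valid, source.ravel(), side="right") / src_valid.size
    # Map each quantile to the corresponding shadow-free reference intensity.
    matched = np.interp(quantiles, np.linspace(0.0, 1.0, ref_valid.size), ref_valid)
    return matched.reshape(source.shape)
```

Excluding shadow pixels from both CDF estimates is what makes the matching "selective": shadowed intensities no longer drag the mapping toward dark values.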

Estimation of the Flood Area Using Multi-temporal RADARSAT SAR Imagery

  • Sohn, Hong-Gyoo;Song, Yeong-Sun;Yoo, Hwan-Hee;Jung, Won-Jo
    • Korean Journal of Geomatics / v.2 no.1 / pp.37-46 / 2002
  • Accurate classification of water areas is a preliminary step toward accurately analyzing flooded areas and flood damage. This step is especially useful for monitoring regions where annually recurring floods are a problem, and the accurate estimation of the flooded area can ultimately serve as a primary source of information for policy decisions. Although SAR (Synthetic Aperture Radar) imagery, with its own energy source, is sensitive to water areas, its shadow effect, which resembles the reflectance signature of water, should be carefully checked before accurate classification. Especially when identifying small flooded areas in mountainous environments, removing the shadow effect turns out to be essential for accurately classifying water areas in SAR imagery. In this paper, the flooded area was classified and monitored using multi-temporal RADARSAT SAR images of Ok-Chun and Bo-Eun, located in Chung-Book Province, taken on 12 August 1998 (during the flood) and 19 August 1998 (after the flood). We applied several geometric and radiometric processing steps to the SAR imagery. We first reduced the speckle noise of the two SAR images and then calculated the radar backscattering coefficient $(\sigma^0)$. We then performed ortho-rectification via a satellite orbit model developed in this study, using the ephemeris information of the satellite images and ground control points, and corrected the radiometric distortion caused by terrain relief. Finally, the water area was identified in both images and the flooded area was calculated accordingly. The identified flooded area was analyzed by overlaying it with the existing land-use map.
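The final change-detection step, water in the during-flood image but not in the after-flood image, can be sketched as follows. The boxcar speckle filter and the -15 dB backscatter threshold are illustrative choices, not values from the paper:

```python
import numpy as np

def boxcar_filter(img, size=3):
    """Simple boxcar (moving-average) speckle filter."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (size * size)

def classify_water(sigma0_db, threshold_db=-15.0):
    """Label low-backscatter pixels as water (True) after despeckling."""
    return boxcar_filter(sigma0_db) < threshold_db

def flood_extent(during_flood_db, after_flood_db, threshold_db=-15.0):
    """Flooded = water during the flood but not after it."""
    return (classify_water(during_flood_db, threshold_db)
            & ~classify_water(after_flood_db, threshold_db))
```

Despeckling before thresholding matters because single-pixel speckle in SAR backscatter would otherwise produce salt-and-pepper water labels.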


Field Crop Classification Using Multi-Temporal High-Resolution Satellite Imagery: A Case Study on Garlic/Onion Field (고해상도 다중시기 위성영상을 이용한 밭작물 분류: 마늘/양파 재배지 사례연구)

  • Yoo, Hee Young;Lee, Kyung-Do;Na, Sang-Il;Park, Chan-Won;Park, No-Wook
    • Korean Journal of Remote Sensing / v.33 no.5_2 / pp.621-630 / 2017
  • In this paper, a classification study targeting a main production area of garlic and onion was carried out to assess the applicability of multi-temporal high-resolution satellite imagery to field-crop classification. After collecting satellite imagery in accordance with the growth cycle of garlic and onion, classifications using each single-date image and various combinations of the multi-temporal dataset were conducted. For single-date imagery, high classification accuracy was obtained in December, when planting was completed, and in March, when garlic and onion started to grow vigorously. Meanwhile, higher classification accuracy was obtained when using the multi-temporal dataset rather than single-date imagery. However, more images did not guarantee higher classification accuracy; rather, imagery from the planting season or right after planting reduced classification accuracy. The highest classification accuracy was obtained when using the combination of March, April, and May data, corresponding to the growth season of garlic and onion. Therefore, it is recommended to secure imagery from the main growth season in order to classify garlic and onion fields using multi-temporal satellite imagery.
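The multi-temporal combination idea, stacking per-date features so each pixel carries a temporal profile, can be sketched as below. The abstract does not specify the classifier, so a minimum-distance rule stands in for it, and all names are illustrative:

```python
import numpy as np

def stack_dates(images):
    """Stack single-date feature images (H, W) into an (H, W, T) temporal cube."""
    return np.stack(images, axis=-1)

def minimum_distance_classify(cube, class_means):
    """Assign each pixel to the class whose reference temporal profile
    (rows of `class_means`, shape (C, T)) is nearest in Euclidean distance."""
    flat = cube.reshape(-1, cube.shape[-1])                      # (H*W, T)
    dists = np.linalg.norm(flat[:, None, :] - class_means[None, :, :], axis=2)
    return dists.argmin(axis=1).reshape(cube.shape[:2])
```

Adding a date appends one more coordinate to every pixel's profile, which is why a poorly chosen date (e.g., bare soil at planting) can make classes less separable rather than more.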

Multi-Frame Face Classification with Decision-Level Fusion based on Photon-Counting Linear Discriminant Analysis

  • Yeom, Seokwon
    • International Journal of Fuzzy Logic and Intelligent Systems / v.14 no.4 / pp.332-339 / 2014
  • Face classification has wide applications in security and surveillance. However, the technique faces various challenges caused by pose, illumination, and expression changes, and face recognition with long-distance images involves additional challenges owing to focusing problems and motion blurring. Multiple frames under varying spatial or temporal settings can acquire additional information, which can be used to improve classification performance. This study investigates the effectiveness of multi-frame decision-level fusion with photon-counting linear discriminant analysis. Multiple frames generate multiple scores for each class, and the fusion process comprises three stages: score normalization, score validation, and score combination. After the scores are normalized, candidate scores are selected during the score validation process, which removes bad scores that could degrade the final output. The selected candidate scores are then combined using one of the following fusion rules: maximum, averaging, or majority voting. Degraded facial images are employed to demonstrate the robustness of multi-frame decision-level fusion in harsh environments: out-of-focus and motion-blurring point-spread functions are applied to the test images to simulate long-distance acquisition. Experimental results with three facial data sets indicate the efficiency of the proposed decision-level fusion scheme.
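The three-stage fusion pipeline (normalize, validate, combine) can be sketched as follows. The min-max normalization and the top-1/top-2 margin used for validation are illustrative stand-ins; the abstract does not specify the exact criteria:

```python
import numpy as np

def normalize_scores(scores):
    """Min-max normalise each frame's class scores (rows) to [0, 1]."""
    s = np.asarray(scores, dtype=float)
    lo = s.min(axis=1, keepdims=True)
    hi = s.max(axis=1, keepdims=True)
    return (s - lo) / np.where(hi - lo == 0, 1.0, hi - lo)

def validate_scores(scores, min_margin=0.5):
    """Keep only frames whose top-1/top-2 score margin is large enough,
    discarding ambiguous frames that could degrade the fused decision."""
    sorted_s = np.sort(scores, axis=1)
    margin = sorted_s[:, -1] - sorted_s[:, -2]
    return scores[margin >= min_margin]

def fuse(scores, rule="averaging"):
    """Combine the surviving frame scores into one class decision."""
    if rule == "maximum":
        return int(scores.max(axis=0).argmax())
    if rule == "averaging":
        return int(scores.mean(axis=0).argmax())
    if rule == "majority":
        return int(np.bincount(scores.argmax(axis=1)).argmax())
    raise ValueError(rule)
```

A blurred frame tends to produce a flat, ambiguous score vector; the margin test drops it before it can outvote the sharp frames.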

A Video Expression Recognition Method Based on Multi-mode Convolution Neural Network and Multiplicative Feature Fusion

  • Ren, Qun
    • Journal of Information Processing Systems / v.17 no.3 / pp.556-570 / 2021
  • Existing video expression recognition methods mainly focus on spatial feature extraction from video expression images but tend to ignore the dynamic features of video sequences. To solve this problem, a multi-mode convolutional neural network method is proposed to effectively improve the performance of facial expression recognition in video. First, OpenFace 2.0 is used to detect face images in the video, and two deep convolutional neural networks are used to extract spatiotemporal expression features: a spatial convolutional neural network extracts the spatial information features of each static expression image, and the dynamic information features are extracted from the optical flow of multiple expression images by a temporal convolutional neural network. The spatiotemporal features learned by the two networks are then fused by multiplication. Finally, the fused features are input into a support vector machine to perform the facial expression classification. Experimental results show that the recognition accuracy of the proposed method reaches 64.57% and 60.89% on the RML and BAUM-1s datasets, respectively, better than that of the other methods compared.
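The multiplicative fusion step itself is simple: an element-wise product of the two stream feature vectors. A minimal sketch (the L2 normalisation before the product is our assumption, not stated in the abstract):

```python
import numpy as np

def multiplicative_fusion(spatial_feat, temporal_feat):
    """Fuse two stream features by element-wise multiplication of their
    L2-normalised vectors, so neither stream's scale dominates."""
    s = spatial_feat / (np.linalg.norm(spatial_feat) + 1e-8)
    t = temporal_feat / (np.linalg.norm(temporal_feat) + 1e-8)
    return s * t
```

Unlike concatenation, the product keeps the fused dimensionality equal to one stream's and emphasises dimensions where both streams respond together.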

Multi-Temporal Spectral Analysis of Rice Fields in South Korea Using MODIS and RapidEye Satellite Imagery

  • Kim, Hyun Ok;Yeom, Jong Min
    • Journal of Astronomy and Space Sciences / v.29 no.4 / pp.407-411 / 2012
  • Space-borne remote sensing is an effective and inexpensive way to identify crop fields and detect crop conditions. We examined the multi-temporal spectral characteristics of rice fields in South Korea, which are compact, small-scale parcels of land, to detect their phenological development and condition. For the analysis, Moderate Resolution Imaging Spectroradiometer (MODIS) and RapidEye images acquired in 2011 were used. The annual spectral tendencies of different crop types could be detected with MODIS data because of its high temporal resolution, despite its relatively low spatial resolution. A comparison between MODIS and RapidEye showed that the spectral characteristics changed with spatial resolution: the vegetation index (VI) derived from MODIS showed more moderate values among different land-cover types than the index derived from RapidEye. Additionally, an analysis of various VIs using RapidEye satellite data showed that a VI adopting the red-edge band reflected crop conditions better than the traditionally used normalized difference VI.
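The two indices compared above differ only in which band is paired with near-infrared. A minimal sketch, assuming reflectance inputs; the red-edge variant is commonly written as NDRE, though the abstract does not name it:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red + 1e-10)

def red_edge_ndvi(nir, red_edge):
    """Red-edge variant of the NDVI, substituting RapidEye's red-edge band
    for the red band (often called NDRE)."""
    return (nir - red_edge) / (nir + red_edge + 1e-10)
```

Because red-edge reflectance sits between the red absorption minimum and the NIR plateau, the red-edge index saturates later over dense canopies, which is one reason it can track crop condition better.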

Automatic Estimation of Geometric Translations Between High-resolution Optical and SAR Images (고해상도 광학영상과 SAR 영상 간 자동 변위량 추정)

  • Han, You Kyung;Byun, Young Gi;Kim, Yong Il
    • Journal of Korean Society for Geospatial Information Science / v.20 no.3 / pp.41-48 / 2012
  • Using multi-sensor or multi-temporal high-resolution satellite images together is essential for efficient remote sensing applications. The purpose of this paper is to estimate the geometric translation between high-resolution optical and SAR images automatically. Geometric and radiometric pre-processing steps were carried out so that the similarity between the optical and SAR images could be computed with the mutual information method. For computational efficiency, the coarsest-level pyramid images of each sensor, constructed with the Gaussian pyramid method, were used to estimate the initial translation in the x and y directions. The precise translation was then estimated by applying this method from the coarsest-level pyramid images down to the original images in order. Even though only translation between the optical and SAR images was considered, the proposed method showed an RMSE lower than 5 m in all study sites.
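Mutual information is the similarity measure that makes optical-to-SAR matching possible, since the two modalities have no linear intensity relationship. A minimal sketch of MI from a joint histogram, plus an exhaustive small-shift search standing in for one pyramid level (function names and the search radius are illustrative):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information of two co-registered images via their joint histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)          # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of img_b
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def best_translation(ref, mov, search=3):
    """Exhaustively test integer (dy, dx) shifts and keep the MI maximiser."""
    best, best_mi = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            mi = mutual_information(ref, shifted)
            if mi > best_mi:
                best_mi, best = mi, (dy, dx)
    return best
```

In the paper's scheme, this search would run on the coarsest Gaussian pyramid level first, with the result seeding a refined search at each finer level.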

Comparing LAI Estimates of Corn and Soybean from Vegetation Indices of Multi-resolution Satellite Images

  • Kim, Sun-Hwa;Hong, Suk Young;Sudduth, Kenneth A.;Kim, Yihyun;Lee, Kyungdo
    • Korean Journal of Remote Sensing / v.28 no.6 / pp.597-609 / 2012
  • Leaf area index (LAI) is important in explaining the ability of a crop to intercept solar energy for biomass production and in understanding the impact of crop management practices. This paper describes a procedure for estimating LAI as a function of image-derived vegetation indices from temporal series of IKONOS, Landsat TM, and MODIS satellite images using empirical models, and demonstrates its use with data collected at Missouri field sites. LAI data were obtained several times during the 2002 growing season at monitoring sites established in two central Missouri experimental fields, one planted to soybean (Glycine max L.) and the other to corn (Zea mays L.). Satellite images at varying spatial and spectral resolutions were acquired, and the data were extracted to calculate the normalized difference vegetation index (NDVI) after geometric and atmospheric correction. Linear, exponential, and expolinear models were developed to relate temporal NDVI to the measured LAI data. Models using IKONOS NDVI estimated the LAI of both soybean and corn better than those using Landsat TM or MODIS NDVI, and expolinear models provided more accurate results than linear or exponential models.
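Of the three empirical model families, the exponential form LAI = a·exp(b·NDVI) is the easiest to illustrate, since it reduces to linear regression on log(LAI). A minimal sketch (the expolinear form is omitted; coefficients and names are illustrative, not the paper's):

```python
import numpy as np

def fit_exponential(ndvi, lai):
    """Fit LAI = a * exp(b * NDVI) by least squares on log(LAI)."""
    b, log_a = np.polyfit(np.asarray(ndvi, float), np.log(np.asarray(lai, float)), 1)
    return np.exp(log_a), b

def predict_exponential(ndvi, a, b):
    """Evaluate the fitted exponential LAI model."""
    return a * np.exp(b * np.asarray(ndvi, float))
```

Fitting in log space weights low-LAI (early-season) observations more heavily than a direct nonlinear fit would, which is worth keeping in mind when comparing against the linear and expolinear alternatives.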

Integrated Water Resources Management in the Era of Great Transition

  • Ashkan Noori;Seyed Hossein Mohajeri;Milad Niroumand Jadidi;Amir Samadi
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.34-34 / 2023
  • The Chah-Nimeh reservoirs, a group of natural lakes on the border of Iran and Afghanistan, are the main drinking and agricultural water resources of the arid Sistan region. Considering the occurrence of an intense seasonal wind, locally known as the levar wind, this study explores the possibility of building a TSM (Total Suspended Matter) monitoring model for the Chah-Nimeh reservoirs using multi-temporal satellite images and in-situ wind speed data. The results show a strong correlation between TSM concentration and wind speed. The developed empirical model showed high performance in retrieving the spatiotemporal distribution of TSM concentration, with R2=0.98 and RMSE=0.92 g/m3. Following this observation, we also consider a machine learning-based model that predicts the average TSM using only wind speed: we pair our in-situ wind speed data with the TSM data generated from the inversion of the multi-temporal satellite imagery to train a neural network model (Wind2TSM-Net). Examining the Wind2TSM-Net model indicates that it can retrieve TSM accurately using wind speed alone (R2=0.88 and RMSE=1.97 g/m3). Moreover, the results of this study show that TSM concentration can be estimated from in-situ wind speed data alone, independent of satellite images. Such a model can supply a temporally persistent means of monitoring TSM that is not limited by the temporal resolution of imagery or by cloud cover in optical remote sensing.
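The Wind2TSM-Net architecture is not described in the abstract; as a stand-in, a tiny one-hidden-layer numpy network illustrates the idea of regressing TSM on wind speed alone. Everything here (layer size, learning rate, standardisation) is an assumption for the sketch:

```python
import numpy as np

def train_wind_to_tsm(wind, tsm, hidden=8, lr=0.05, epochs=5000, seed=0):
    """Train a tiny one-hidden-layer tanh network mapping wind speed to TSM.
    Inputs and targets are standardised for stable full-batch gradient descent."""
    rng = np.random.default_rng(seed)
    x = np.asarray(wind, float).reshape(-1, 1)
    y = np.asarray(tsm, float).reshape(-1, 1)
    xm, xs = x.mean(), x.std() + 1e-9
    ym, ys = y.mean(), y.std() + 1e-9
    xn, yn = (x - xm) / xs, (y - ym) / ys
    w1 = rng.normal(0.0, 0.5, (1, hidden)); b1 = np.zeros(hidden)
    w2 = rng.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    n = len(xn)
    for _ in range(epochs):
        h = np.tanh(xn @ w1 + b1)            # hidden activations
        err = h @ w2 + b2 - yn               # residual on standardised scale
        gw2 = h.T @ err / n                  # gradients of mean squared error
        gb2 = err.mean(axis=0)
        gh = (err @ w2.T) * (1.0 - h ** 2)   # backprop through tanh
        gw1 = xn.T @ gh / n
        gb1 = gh.mean(axis=0)
        w1 -= lr * gw1; b1 -= lr * gb1
        w2 -= lr * gw2; b2 -= lr * gb2
    def predict(wind_new):
        xq = (np.asarray(wind_new, float).reshape(-1, 1) - xm) / xs
        return (np.tanh(xq @ w1 + b1) @ w2 + b2) * ys + ym
    return predict
```

The training targets in the paper come from the satellite-derived TSM inversion, so once trained, the network monitors TSM on days with no usable imagery at all.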


Application of the 3D Discrete Wavelet Transformation Scheme to Remotely Sensed Image Classification

  • Yoo, Hee-Young;Lee, Ki-Won;Kwon, Byung-Doo
    • Korean Journal of Remote Sensing / v.23 no.5 / pp.355-363 / 2007
  • The 3D DWT (three-dimensional discrete wavelet transform) scheme is potentially useful for analyzing both spatial and spectral information. Nevertheless, few researchers have attempted to process or classify remotely sensed images using the 3D DWT. This study aims to apply the 3D DWT to the land-cover classification of optical and SAR (Synthetic Aperture Radar) images. The results are evaluated quantitatively and compared with those of traditional classification techniques. In the experiments, the 3D DWT showed classification results superior to conventional techniques, especially for high-resolution imagery and SAR imagery. The 3D DWT scheme can likely be extended to multi-temporal or multi-sensor image classification.
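The study's wavelet implementation is not given here; a one-level separable Haar transform in numpy illustrates how a 3D DWT decomposes the two spatial axes and the spectral (band) axis jointly, which is the property the classification exploits. Assumptions: a (Z, Y, X) cube with even dimensions, Haar filters:

```python
import numpy as np

def haar_dwt3d_level(cube):
    """One level of a separable 3D Haar DWT on a (Z, Y, X) cube with even
    dimensions; returns the 8 subbands keyed 'LLL' ... 'HHH'."""
    def split(a, axis):
        ev = [slice(None)] * a.ndim
        od = [slice(None)] * a.ndim
        ev[axis] = slice(0, None, 2)
        od[axis] = slice(1, None, 2)
        e, o = a[tuple(ev)], a[tuple(od)]
        # Orthonormal Haar analysis: low-pass (sum) and high-pass (difference)
        return (e + o) / np.sqrt(2.0), (e - o) / np.sqrt(2.0)
    bands = {"": cube}
    for axis in range(3):  # filter along z (spectral), then y, then x
        bands = {key + tag: sub
                 for key, arr in bands.items()
                 for tag, sub in zip(("L", "H"), split(arr, axis))}
    return bands
```

Because the transform is orthonormal, subband energies sum to the cube's energy, and per-subband statistics (e.g., energy per pixel neighborhood) make natural joint spatial-spectral classification features.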