• Title/Summary/Keyword: Spatio-temporal Fusion


Evaluation of Spatio-temporal Fusion Models of Multi-sensor High-resolution Satellite Images for Crop Monitoring: An Experiment on the Fusion of Sentinel-2 and RapidEye Images (작물 모니터링을 위한 다중 센서 고해상도 위성영상의 시공간 융합 모델의 평가: Sentinel-2 및 RapidEye 영상 융합 실험)

  • Park, Soyeon;Kim, Yeseul;Na, Sang-Il;Park, No-Wook
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_1
    • /
    • pp.807-821
    • /
    • 2020
  • The objective of this study is to evaluate the applicability of representative spatio-temporal fusion models, originally developed for fusing mid- and low-resolution satellite images, to the construction of time-series high-resolution image sets for crop monitoring. In particular, the effects of the characteristics of the input image pairs on prediction performance are investigated in light of the principle of spatio-temporal fusion. A fusion experiment on multi-temporal Sentinel-2 and RapidEye images over agricultural fields was conducted to evaluate prediction performance. Three representative fusion models were compared: the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), the SParse-representation-based SpatioTemporal reflectance Fusion Model (SPSTFM), and Flexible Spatiotemporal DAta Fusion (FSDAF). The three models exhibited different prediction performance in terms of prediction errors and spatial similarity. However, regardless of model type, the correlation between the coarse-resolution images acquired on the pair dates and on the prediction date affected prediction performance more strongly than the temporal gap between the pair dates and the prediction date. In addition, computing the vegetation index first and using it as the fusion input yielded better prediction performance than computing the vegetation index from fused reflectance values, as it alleviates error propagation. These experimental results can serve as basic information both for selecting optimal image pairs and input types and for developing advanced spatio-temporal fusion models for crop monitoring.
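The pair-date/prediction-date relationship and the two input choices compared above can be illustrated with a minimal sketch. The linear blending function below is a hypothetical stand-in for the actual fusion models (STARFM, SPSTFM, and FSDAF are far more involved), and all reflectance values are invented for illustration:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from band reflectances."""
    return (nir - red) / (nir + red)

def blend(coarse_pred, coarse_pair, fine_pair):
    """Toy stand-in for spatio-temporal fusion: propagate the coarse-scale
    temporal change onto the fine-scale image from the pair date."""
    return fine_pair + (coarse_pred - coarse_pair)

# Hypothetical reflectances (red, NIR) at the pair date and prediction date.
fine_red_pair, fine_nir_pair = 0.08, 0.40
coarse_red_pair, coarse_nir_pair = 0.10, 0.38
coarse_red_pred, coarse_nir_pred = 0.07, 0.45

# Option A: fuse the reflectance bands first, then compute NDVI.
ndvi_from_fused = ndvi(
    blend(coarse_nir_pred, coarse_nir_pair, fine_nir_pair),
    blend(coarse_red_pred, coarse_red_pair, fine_red_pair),
)

# Option B: compute NDVI per image first, then fuse the index directly
# (the abstract reports this alleviates error propagation).
ndvi_direct = blend(
    ndvi(coarse_nir_pred, coarse_red_pred),
    ndvi(coarse_nir_pair, coarse_red_pair),
    ndvi(fine_nir_pair, fine_red_pair),
)

print(round(ndvi_from_fused, 3), round(ndvi_direct, 3))  # → 0.808 0.814
```

With real data the two options diverge more strongly, because band-wise fusion errors compound in the NDVI ratio.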

Traffic Flow Prediction with Spatio-Temporal Information Fusion using Graph Neural Networks

  • Huijuan Ding;Giseop Noh
    • International journal of advanced smart convergence
    • /
    • v.12 no.4
    • /
    • pp.88-97
    • /
    • 2023
  • Traffic flow prediction is of great significance in urban planning and traffic management. As the complexity of urban traffic increases, existing prediction methods still face challenges, especially in fusing spatio-temporal information and capturing long-term dependencies. This study uses a graph-neural-network fusion model to address the spatio-temporal information fusion problem in traffic flow prediction. We propose a new deep learning model, Spatio-Temporal Information Fusion using Graph Neural Networks (STFGNN). GCN, TCN, and LSTM modules are applied alternately to fuse spatio-temporal information: the GCN and the multi-core TCN capture the spatial and temporal dependencies of traffic flow, respectively, and the LSTM connects multiple fusion modules. In an experimental evaluation on real traffic flow data, STFGNN outperformed other models.

Comparison of Spatio-temporal Fusion Models of Multiple Satellite Images for Vegetation Monitoring (식생 모니터링을 위한 다중 위성영상의 시공간 융합 모델 비교)

  • Kim, Yeseul;Park, No-Wook
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.6_3
    • /
    • pp.1209-1219
    • /
    • 2019
  • For consistent vegetation monitoring, it is necessary to generate time-series vegetation index datasets at fine temporal and spatial scales by fusing multiple satellite datasets whose temporal and spatial scales are complementary. In this study, we quantitatively and qualitatively analyzed the prediction accuracy of time-series change information extracted by spatio-temporal fusion models of multiple satellite data for vegetation monitoring. As the spatio-temporal fusion models, we applied two models that have been widely employed in vegetation monitoring: the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) and the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM). To quantitatively evaluate prediction accuracy, we first generated simulated datasets from MODIS data with fine temporal scales and then used them as inputs to the fusion models. The comparative experiment showed that ESTARFM outperformed STARFM, but the prediction performance of both models degraded as the gap between the prediction date and the acquisition date of the input image pairs increased. This result indicates that input data acquired close to the prediction date should be used to improve prediction accuracy. Considering the limited availability of optical images, it is necessary to develop an advanced spatio-temporal fusion model that reflects these findings for vegetation monitoring.
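The finding that input pairs close to the prediction date matter can be sketched with a toy two-pair predictor in the spirit of ESTARFM's weighting. The real model weights spectrally and spatially similar neighbor pixels; this pixel-wise version and its numbers are illustrative only:

```python
def two_pair_prediction(fine1, coarse1, fine2, coarse2, coarse_pred):
    """Predict the fine-scale value at the prediction date from two
    fine/coarse image pairs: each pair contributes the coarse-scale
    change, weighted inversely by how far its coarse value lies from
    the prediction-date coarse value."""
    pred1 = fine1 + (coarse_pred - coarse1)
    pred2 = fine2 + (coarse_pred - coarse2)
    d1 = abs(coarse_pred - coarse1) + 1e-9
    d2 = abs(coarse_pred - coarse2) + 1e-9
    w1 = (1 / d1) / (1 / d1 + 1 / d2)
    return w1 * pred1 + (1 - w1) * pred2

# Pair 1 is "closer" to the prediction date (its coarse value differs less),
# so its prediction should dominate the weighted combination.
pred = two_pair_prediction(fine1=0.30, coarse1=0.28,
                           fine2=0.50, coarse2=0.52,
                           coarse_pred=0.33)
```

The weighting makes the degradation reported above explicit: the further a pair drifts from the prediction date, the less it can contribute.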

Effect of Correcting Radiometric Inconsistency between Input Images on Spatio-temporal Fusion of Multi-sensor High-resolution Satellite Images (입력 영상의 방사학적 불일치 보정이 다중 센서 고해상도 위성영상의 시공간 융합에 미치는 영향)

  • Park, Soyeon;Na, Sang-il;Park, No-Wook
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_1
    • /
    • pp.999-1011
    • /
    • 2021
  • In spatio-temporal fusion, which aims to predict images with both high spatial and temporal resolutions from multi-sensor images, radiometric inconsistency between the input multi-sensor images may affect prediction performance. This study investigates the effect of radiometric correction, which compensates for the different spectral responses of multi-sensor satellite images, on spatio-temporal fusion results. The effect of relative radiometric correction of the input images was quantitatively analyzed through case studies using Sentinel-2, PlanetScope, and RapidEye images acquired over two croplands. Prediction performance improved when radiometrically corrected multi-sensor images were used as input. In particular, the improvement was substantial when the correlation between input images was relatively low. Prediction performance could be improved by transforming multi-sensor images with different spectral responses into images with similar spectral responses and high correlation. These results indicate that radiometric correction is required to improve prediction performance in the spatio-temporal fusion of multi-sensor satellite images with low correlation.
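Relative radiometric correction of this kind is often implemented as a per-band linear (gain/offset) fit between co-registered samples from the two sensors. The sketch below is a generic least-squares version with made-up reflectances, not the specific procedure used in the paper:

```python
def linear_fit(x, y):
    """Ordinary least-squares gain and offset mapping x onto y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    gain = (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))
    return gain, my - gain * mx

# Hypothetical co-registered reflectance samples over the same pixels:
# sensor A (to be corrected) vs. reference sensor B.
sensor_a = [0.05, 0.10, 0.20, 0.30, 0.40]
sensor_b = [0.07, 0.12, 0.24, 0.33, 0.45]

gain, offset = linear_fit(sensor_a, sensor_b)
# Apply the correction: sensor A is rescaled toward sensor B's response.
corrected = [gain * v + offset for v in sensor_a]
```

After correction, the two bands share a common radiometric scale, which is what raises the inter-image correlation the abstract identifies as the driver of fusion quality.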

Applicability Evaluation of Spatio-Temporal Data Fusion Using Fine-scale Optical Satellite Image: A Study on Fusion of KOMPSAT-3A and Sentinel-2 Satellite Images (고해상도 광학 위성영상을 이용한 시공간 자료 융합의 적용성 평가: KOMPSAT-3A 및 Sentinel-2 위성영상의 융합 연구)

  • Kim, Yeseul;Lee, Kwang-Jae;Lee, Sun-Gu
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.6_3
    • /
    • pp.1931-1942
    • /
    • 2021
  • As the utility of optical satellite images with a high spatial resolution (i.e., fine scale) has been emphasized, various studies on land surface monitoring using such images have recently been carried out. However, the usefulness of fine-scale satellite images is limited because they are acquired at a low temporal resolution. To compensate for this limitation, spatio-temporal data fusion can be applied to generate a synthetic image with high spatio-temporal resolution by fusing multiple satellite images with different spatial and temporal resolutions. Since previous studies developed spatio-temporal data fusion models for satellite images of mid or low spatial resolution, it is necessary to evaluate the applicability of these models to satellite images with a high spatial resolution. To this end, this study evaluated the applicability of existing spatio-temporal fusion models to KOMPSAT-3A and Sentinel-2 images. The Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM) and the Spatial Time-series Geostatistical Deconvolution/Fusion Model (STGDFM), which use different information for prediction, were applied. The results showed that the prediction performance of STGDFM, which combines temporally continuous reflectance values, was better than that of ESTARFM. In particular, the prediction performance of STGDFM improved significantly when it was difficult to acquire KOMPSAT and Sentinel-2 images on the same date owing to the low temporal resolution of KOMPSAT images. These results confirm that STGDFM, whose prediction performance is relatively better because it combines continuous temporal information, can compensate for the low revisit time of fine-scale satellite images.

Spatio-Temporal Image Segmentation Based on Intensity and Motion Information (밝기 및 움직임 정보에 기반한 시공간 영상 분할)

  • 최재각;이시웅;김성대
    • Proceedings of the IEEK Conference
    • /
    • 1998.10a
    • /
    • pp.871-874
    • /
    • 1998
  • This paper presents a new morphological spatio-temporal segmentation algorithm. The algorithm incorporates intensity and motion information simultaneously and uses morphological tools such as morphological filters and the watershed algorithm. The procedure toward complete segmentation consists of three steps: joint marker extraction, boundary decision, and motion-based region fusion. By incorporating spatial and temporal information simultaneously, we can obtain visually meaningful segmentation results. Simulation results demonstrate the efficiency of the proposed method.


Spatio-temporal video segmentation using a joint similarity measure (결합 유사성 척도를 이용한 시공간 영상 분할)

  • 최재각;이시웅;조순제;김성대
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.22 no.6
    • /
    • pp.1195-1209
    • /
    • 1997
  • This paper presents a new morphological spatio-temporal segmentation algorithm. The algorithm incorporates luminance and motion information simultaneously and uses morphological tools such as morphological filters and the watershed algorithm. The procedure toward complete segmentation consists of three steps: joint marker extraction, boundary decision, and motion-based region fusion. First, joint marker extraction identifies the presence of regions homogeneous in both motion and luminance, for which a simple joint marker extraction technique is proposed. Second, the spatio-temporal boundaries are decided by the watershed algorithm; for this purpose, a new joint similarity measure is proposed. Finally, redundant regions are eliminated using motion-based region fusion. By incorporating spatial and temporal information simultaneously, we can obtain visually meaningful segmentation results. Simulation results demonstrate the efficiency of the proposed method.


Spatio-temporal Semantic Features for Human Action Recognition

  • Liu, Jia;Wang, Xiaonian;Li, Tianyu;Yang, Jie
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.6 no.10
    • /
    • pp.2632-2649
    • /
    • 2012
  • Most approaches to human action recognition are limited by the use of simple action datasets under controlled environments or by a focus on excessively localized features that do not sufficiently explore spatio-temporal information. This paper proposes a framework for recognizing realistic human actions. Specifically, a new action representation is proposed based on computing a rich set of descriptors from keypoint trajectories. To obtain efficient and compact representations of actions, we develop a feature fusion method that combines spatio-temporal local motion descriptors according to camera movement, which is detected from the distribution of spatio-temporal interest points in the clips. A new topic model called the Markov Semantic Model is proposed for semantic feature selection; it relies on the different kinds of dependencies between words produced by "syntactic" and "semantic" constraints. The informative features are selected collaboratively based on the different types of dependencies between words produced by short-range and long-range constraints. Building on nonlinear SVMs, we validate this hierarchical framework on several realistic action datasets.

Ontology-Based Dynamic Context Management and Spatio-Temporal Reasoning for Intelligent Service Robots (지능형 서비스 로봇을 위한 온톨로지 기반의 동적 상황 관리 및 시-공간 추론)

  • Kim, Jonghoon;Lee, Seokjun;Kim, Dongha;Kim, Incheol
    • Journal of KIISE
    • /
    • v.43 no.12
    • /
    • pp.1365-1375
    • /
    • 2016
  • One of the most important capabilities for autonomous service robots working in living environments is to recognize and understand the correct context in a dynamically changing environment. To generate high-level context knowledge for decision-making from multiple sensory data streams, many technical problems must be solved, including multi-modal sensory data fusion, uncertainty handling, symbolic knowledge grounding, time dependency, dynamics, and time-constrained spatio-temporal reasoning. Considering these problems, this paper proposes an effective dynamic context management and spatio-temporal reasoning method for intelligent service robots. To guarantee efficient context management and reasoning, our algorithm generates low-level context knowledge reactively for every input sensory or perception datum, while postponing high-level context knowledge generation until it is demanded by the decision-making module. When high-level context knowledge is demanded, it is derived through backward spatio-temporal reasoning. In experiments with a Turtlebot equipped with a Kinect visual sensor, the dynamic context management and spatio-temporal reasoning system based on the proposed method showed high performance.
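The eager/lazy split described above (reactive low-level facts, on-demand high-level derivation) can be sketched as follows; the class, the facts, and the single rule are hypothetical illustrations standing in for the paper's ontology and backward spatio-temporal reasoner:

```python
class ContextManager:
    """Sketch of eager low-level assertion plus lazy high-level derivation."""

    def __init__(self):
        self.facts = set()

    def on_sensor_input(self, fact):
        # Reactive, low-level: record every incoming perception immediately.
        self.facts.add(fact)

    def query(self, question):
        # On-demand, high-level: derive only when decision-making asks,
        # here by a trivial backward rule over the stored facts.
        if question == "person_in_kitchen":
            return ("person_detected" in self.facts
                    and "robot_in_kitchen" in self.facts)
        raise KeyError(question)

cm = ContextManager()
cm.on_sensor_input("robot_in_kitchen")
cm.on_sensor_input("person_detected")
```

Deferring derivation keeps per-input cost constant even when the rule base is large, which is the efficiency argument the abstract makes.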

Analyzing Difference of Urban Forest Edge Vegetation Condition by Land Cover Types Using Spatio-temporal Data Fusion Method (시공간 위성영상 융합기법을 활용한 도시 산림 임연부 인접 토지피복 유형별 식생 활력도 차이 분석)

  • Sung, Woong Gi;Lee, Dong Kun;Jin, Yihua
    • Journal of Environmental Impact Assessment
    • /
    • v.27 no.3
    • /
    • pp.279-290
    • /
    • 2018
  • As urban forest edges increase due to urbanization and human impacts, the importance of monitoring and assessing the status of urban forests for urban forest management is emerging. The purpose of this study was to investigate the vegetation condition of urban forest edges affected by different land cover types, using NDVI_max images derived from FSDAF (Flexible Spatio-temporal DAta Fusion). Among the four land cover types, roads had the greatest effect on the forest edge, especially up to 30 m, and were found to affect up to 90 m into Seoul's urban forests. NDVI_max also increased with distance from the forest edge. The results of this study are expected to be useful for assessing the effects of land cover types and land cover change on forest edges for urban forest monitoring and management.
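The distance-binned analysis described above can be sketched as follows; the pixel records mirror the study's setup (per-pixel NDVI time series binned by distance from the forest edge), but the values and the 30 m bin width applied here are invented for illustration:

```python
# Hypothetical per-pixel NDVI time series with distance-to-edge in meters.
pixels = [
    {"dist": 15, "ndvi": [0.31, 0.42, 0.38]},
    {"dist": 45, "ndvi": [0.40, 0.55, 0.51]},
    {"dist": 95, "ndvi": [0.48, 0.63, 0.60]},
]

def ndvi_max_by_bin(pixels, width=30):
    """Mean of per-pixel NDVI_max, grouped into distance bins of `width` m."""
    bins = {}
    for p in pixels:
        bins.setdefault(p["dist"] // width * width, []).append(max(p["ndvi"]))
    return {b: sum(v) / len(v) for b, v in sorted(bins.items())}

result = ndvi_max_by_bin(pixels)
```

With these toy values the binned NDVI_max rises with distance from the edge, matching the trend the study reports.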