• Title/Abstract/Keywords: spatio temporal

Search results: 1,181 items (processing time: 0.026 seconds)

MOVING OBJECT JOIN ALGORITHMS USING TB-TREE

  • Lee Jai-Ho;Lee Seong-Ho;Kim Ju-Wan
    • 대한원격탐사학회:학술대회논문집 / Proceedings of ISRS 2005 / pp.309-312 / 2005
  • The need for LBS (Location Based Services) is increasing due to the widespread use of mobile computing devices and positioning technologies. In LBS, there are many applications that need to manage moving objects (e.g. taxis, persons). The moving object join operation pairs up two sets by their spatio-temporal attributes in a moving object database system. It is an important and complicated operation, and its processing time grows geometrically with the number of moving objects. Therefore, efficient methods for the spatio-temporal join are essential to a moving object database system. In this paper, we apply spatial join methods to the join of moving objects. We propose two kinds of join methods based on the TB-Tree, which preserves the trajectories of moving objects: one is a depth-first traversal spatio-temporal join and the other is a breadth-first traversal spatio-temporal join. We show the results of performance tests with sample data sets created by a moving object generator tool.

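The depth-first and breadth-first traversal joins described in the abstract can be illustrated with a minimal sketch. The node structure below is a simplified R-tree-like stand-in for the actual TB-Tree (which bundles trajectory segments per object), and all identifiers and bounding boxes are made up for illustration.

```python
from collections import deque
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# A 3D bounding box over (x, y, t): ((xmin, xmax), (ymin, ymax), (tmin, tmax))
Box = Tuple[Tuple[float, float], Tuple[float, float], Tuple[float, float]]

def overlaps(a: Box, b: Box) -> bool:
    """True if two spatio-temporal boxes intersect in x, y, and t."""
    return all(a[i][0] <= b[i][1] and b[i][0] <= a[i][1] for i in range(3))

@dataclass
class Node:
    box: Box
    children: List["Node"] = field(default_factory=list)  # empty for leaves
    entry: Optional[str] = None                            # trajectory segment id (leaves only)

def breadth_first_join(root_a: Node, root_b: Node):
    """Breadth-first synchronized traversal: expand node pairs level by level,
    keeping only pairs whose spatio-temporal boxes overlap."""
    results = []
    queue = deque([(root_a, root_b)])
    while queue:
        na, nb = queue.popleft()
        if not overlaps(na.box, nb.box):
            continue
        if not na.children and not nb.children:        # both leaves: report a candidate pair
            results.append((na.entry, nb.entry))
        else:                                           # expand internal nodes, keep leaves fixed
            for ca in (na.children or [na]):
                for cb in (nb.children or [nb]):
                    if (ca, cb) != (na, nb):
                        queue.append((ca, cb))
    return results

# Tiny usage example with one-level trees of trajectory segments
leaf_a = Node(((0, 1), (0, 1), (0, 10)), entry="taxi-1/seg-3")
leaf_b = Node(((0.5, 2), (0.5, 2), (5, 12)), entry="taxi-7/seg-1")
tree_a = Node(leaf_a.box, children=[leaf_a])
tree_b = Node(leaf_b.box, children=[leaf_b])
print(breadth_first_join(tree_a, tree_b))   # [('taxi-1/seg-3', 'taxi-7/seg-1')]
```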

Comparison of Spatio-temporal Gait Parameters between Paretic and Non-paretic Limb while Stepping over Obstacles of Different Heights in Subjects with Stroke

  • 한진태
    • 대한물리의학회지 / Vol. 9, No. 1 / pp.69-74 / 2014
  • PURPOSE: The aim of this study was to compare spatio-temporal gait parameters between the paretic and non-paretic limb while stepping over obstacles of different heights in subjects with stroke. METHODS: Nine subjects with stroke participated in this study. Subjects were asked to step over obstacles of different heights. An eight-camera motion analysis system (Motion Analysis Corporation, Santa Rosa, USA) was used to measure the spatio-temporal parameters. A two-way repeated measures ANOVA was used to compare spatio-temporal gait parameters between the paretic and non-paretic limbs while stepping over obstacles of different heights (0 cm, 10 cm, 20 cm). RESULTS: Step width, velocity, single support time, and double support time did not differ among obstacle heights (p>0.05), but stride length, step length, and cadence differed significantly (p<0.05). For stride length, cadence, and double support time, the interactions between obstacle height and limb were not significant (p>0.05), but they were significant for velocity, step length, and single support time (p<0.05). Velocity, stride length, cadence, and double support time did not differ between the paretic and non-paretic limb (p>0.05), but step length and single support time differed significantly between the paretic and non-paretic limb (p<0.05). CONCLUSION: These results show that spatio-temporal gait parameters differ among obstacle heights and between the paretic and non-paretic limb during obstacle crossing in subjects with stroke.
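
A minimal sketch of the statistical design described above, a two-way repeated-measures ANOVA with obstacle height and limb as within-subject factors; the step-length values below are simulated for illustration only, and statsmodels is assumed to be available (the paper's own analysis software is not stated in the abstract).

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

# Toy data: 9 subjects x 3 obstacle heights x 2 limbs, one step-length value per cell
rows = []
for subject in range(1, 10):
    for height in ("0cm", "10cm", "20cm"):
        for limb in ("paretic", "non-paretic"):
            base = 0.45 if limb == "non-paretic" else 0.40       # hypothetical means (m)
            rows.append({"subject": subject, "height": height, "limb": limb,
                         "step_length": base + rng.normal(0, 0.03)})
df = pd.DataFrame(rows)

# Two-way repeated-measures ANOVA: within-subject factors are obstacle height and limb
res = AnovaRM(df, depvar="step_length", subject="subject",
              within=["height", "limb"]).fit()
print(res)   # F statistics and p-values for height, limb, and the height x limb interaction
```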

Effect of Correcting Radiometric Inconsistency between Input Images on Spatio-temporal Fusion of Multi-sensor High-resolution Satellite Images

  • 박소연;나상일;박노욱
    • 대한원격탐사학회지 / Vol. 37, No. 5-1 / pp.999-1011 / 2021
  • In spatio-temporal fusion, which predicts images with both high spatial and high temporal resolution from multi-sensor images, radiometric inconsistency between the multi-sensor images can affect prediction performance. This study analyzed the effect of radiometric correction, which compensates for the different spectral characteristics of multi-sensor satellite images, on the fusion results. The effect of relative radiometric correction was quantitatively analyzed through case studies using Sentinel-2, PlanetScope, and RapidEye images acquired over two croplands. The case studies showed that the prediction accuracy of fusion improved when multi-sensor images with relative radiometric correction applied were used. In particular, the improvement in prediction accuracy from relative radiometric correction was most pronounced when the correlation between the input data was low. The prediction performance appears to have improved because multi-sensor data with different spectral characteristics were transformed to resemble each other. These results indicate that relative radiometric correction is necessary to improve prediction capability in the spatio-temporal fusion of multi-sensor satellite images with low correlation.
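
The abstract does not spell out the relative radiometric correction procedure; a common choice is a band-wise linear (gain/offset) normalization of one sensor to another, sketched below on synthetic data. The sensor names in the comments are only illustrative, not the paper's exact method.

```python
import numpy as np

def relative_radiometric_correction(src: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Band-wise linear normalization: fit a gain/offset so that the source image
    (e.g., PlanetScope) matches the reference image (e.g., Sentinel-2) over the same
    area, then apply it to the source.
    src, ref: arrays of shape (bands, height, width)."""
    corrected = np.empty_like(src, dtype=float)
    for b in range(src.shape[0]):
        x = src[b].ravel().astype(float)
        y = ref[b].ravel().astype(float)
        gain, offset = np.polyfit(x, y, deg=1)      # least-squares linear fit
        corrected[b] = gain * src[b] + offset
    return corrected

# Tiny synthetic example: a 2-band "source" that differs from the "reference"
# by a gain/offset plus noise; correction should recover reference-like values.
rng = np.random.default_rng(1)
ref = rng.uniform(0.05, 0.4, size=(2, 50, 50))            # reflectance-like values
src = 1.2 * ref + 0.02 + rng.normal(0, 0.005, ref.shape)  # radiometrically inconsistent source
out = relative_radiometric_correction(src, ref)
print(np.abs(out - ref).mean())   # small residual after normalization
```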

Human Motion Recognition Based on Spatio-temporal Convolutional Neural Network

  • Hu, Zeyuan;Park, Sange-yun;Lee, Eung-Joo
    • 한국멀티미디어학회논문지 / Vol. 23, No. 8 / pp.977-985 / 2020
  • Aiming at the problems of complex feature extraction and low accuracy in human action recognition, this paper proposes a network structure combining the batch normalization algorithm with the GoogLeNet network model. Applying the batch normalization idea from image classification to action recognition, the algorithm is improved by normalizing the network's input training samples by mini-batch. For the convolutional network, RGB images are the spatial input and stacked optical flows are the temporal input. The spatial and temporal networks are then fused to obtain the final action recognition result. The architecture was trained and evaluated on the standard video action benchmarks UCF101 and HMDB51, achieving accuracies of 93.42% and 67.82%, respectively. The results show that the improved convolutional neural network significantly improves the recognition rate and has clear advantages in action recognition.
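
A toy sketch of the two-stream idea described above: an RGB spatial stream and a stacked-optical-flow temporal stream, each with batch normalization, fused by averaging class scores. The small CNN below is only a stand-in for the paper's GoogLeNet-based model; all layer sizes and shapes are illustrative (PyTorch assumed).

```python
import torch
import torch.nn as nn

class StreamCNN(nn.Module):
    """A small convolutional stream with batch normalization (stand-in for GoogLeNet+BN)."""
    def __init__(self, in_channels: int, num_classes: int = 101):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

class TwoStreamNet(nn.Module):
    """Spatial stream takes an RGB frame; temporal stream takes stacked optical flow
    (here 10 flow fields x 2 components = 20 channels); class scores are fused by averaging."""
    def __init__(self, num_classes: int = 101, flow_stack: int = 10):
        super().__init__()
        self.spatial = StreamCNN(3, num_classes)
        self.temporal = StreamCNN(2 * flow_stack, num_classes)

    def forward(self, rgb, flow):
        return (self.spatial(rgb).softmax(dim=1) + self.temporal(flow).softmax(dim=1)) / 2

model = TwoStreamNet()
rgb = torch.randn(4, 3, 224, 224)     # mini-batch of RGB frames
flow = torch.randn(4, 20, 224, 224)   # mini-batch of stacked optical flow
print(model(rgb, flow).shape)          # torch.Size([4, 101])
```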

Spatio-temporal Video Segmentation Using a Joint Similarity Measure

  • 최재각;이시웅;조순제;김성대
    • 한국통신학회논문지 / Vol. 22, No. 6 / pp.1195-1209 / 1997
  • This paper presents a new morphological spatio-temporal segmentation algorithm. The algorithm incorporates luminance and motion information simultaneously and uses morphological tools such as morphological filters and the watershed algorithm. The procedure toward complete segmentation consists of three steps: joint marker extraction, boundary decision, and motion-based region fusion. First, joint marker extraction identifies regions that are homogeneous in both motion and luminance, for which a simple joint marker extraction technique is proposed. Second, the spatio-temporal boundaries are decided by the watershed algorithm; for this purpose, a new joint similarity measure is proposed. Finally, redundant regions are eliminated using motion-based region fusion. By incorporating spatial and temporal information simultaneously, we can obtain visually meaningful segmentation results. Simulation results demonstrate the efficiency of the proposed method.

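A rough sketch of the spatio-temporal watershed idea, assuming scikit-image: the relief combines a luminance gradient with a frame-difference gradient as a crude joint similarity, markers come from regions that are homogeneous in both cues, and the watershed decides the boundaries. The paper's joint marker extraction, similarity measure, and motion-based region fusion are more elaborate; everything below is illustrative.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

def joint_watershed(frame_t: np.ndarray, frame_t1: np.ndarray, alpha: float = 0.5):
    """Segment frame_t using a joint luminance+motion relief: a weighted sum of the
    spatial gradient and the gradient of the frame difference (a crude motion cue)."""
    lum_grad = sobel(frame_t)                         # spatial (luminance) edges
    motion = np.abs(frame_t1 - frame_t)               # temporal change between frames
    motion_grad = sobel(motion)
    relief = alpha * lum_grad + (1 - alpha) * motion_grad

    # Markers: connected regions where the joint relief is low (homogeneous in both cues)
    markers, _ = ndi.label(relief < 0.05)
    return watershed(relief, markers)

# Synthetic pair of frames: a bright square that shifts by a few pixels
f0 = np.zeros((64, 64)); f0[20:40, 20:40] = 1.0
f1 = np.zeros((64, 64)); f1[20:40, 24:44] = 1.0
labels = joint_watershed(f0, f1)
print(labels.shape, labels.max())   # label image and number of segmented regions
```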

Comparison of Multivariate CUSUM Charts Based on Identification Accuracy for Spatio-temporal Surveillance

  • 이미림
    • 품질경영학회지 / Vol. 43, No. 4 / pp.521-532 / 2015
  • Purpose: The purpose of this study is to compare two multivariate cumulative sum (MCUSUM) charts designed for spatio-temporal surveillance in terms of both temporal and spatial detection performance. Method: Experiments under various configurations are designed and performed to test the two CUSUM charts, namely SMCUSUM and RMCUSUM. In addition to the average run length (ARL), two measures of spatial identification accuracy are reported and compared. Results: The RMCUSUM chart provides a higher level of spatial identification accuracy, while the two charts show comparable performance in terms of ARL. Conclusion: The RMCUSUM chart offers greater flexibility, robustness, and spatial identification accuracy than the SMCUSUM chart. We recommend using the RMCUSUM chart if control limit calibration is not an urgent task.
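
For context, a generic Crosier-type multivariate CUSUM statistic is sketched below; this is not the SMCUSUM or RMCUSUM variant compared in the paper, and the reference parameter k, control limit h, and data are illustrative.

```python
import numpy as np

def crosier_mcusum(x: np.ndarray, mu: np.ndarray, sigma: np.ndarray,
                   k: float = 0.5, h: float = 5.0):
    """Generic multivariate CUSUM (Crosier-type) statistic.
    x: observations of shape (T, p); returns the chart statistic per time step
    and the first time index exceeding the control limit h (or None)."""
    sigma_inv = np.linalg.inv(sigma)
    s = np.zeros(mu.shape)
    stats, signal_at = [], None
    for t, xt in enumerate(x):
        d = s + xt - mu
        c = float(np.sqrt(d @ sigma_inv @ d))          # Mahalanobis length of accumulated drift
        s = np.zeros_like(s) if c <= k else d * (1.0 - k / c)
        y = float(np.sqrt(s @ sigma_inv @ s))          # chart statistic
        stats.append(y)
        if signal_at is None and y > h:
            signal_at = t
    return np.array(stats), signal_at

# Example: 2-dimensional process with a sustained mean shift introduced at t = 50
rng = np.random.default_rng(2)
mu, sigma = np.zeros(2), np.eye(2)
x = rng.multivariate_normal(mu, sigma, size=100)
x[50:] += np.array([1.0, 0.5])
stats, signal_at = crosier_mcusum(x, mu, sigma)
print("first signal at t =", signal_at)
```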

Methodology of Spatio-temporal Matching for Constructing an Analysis Database Based on Different Types of Public Data

  • Jung, In taek;Chong, Kyu soo
    • 한국측량학회지 / Vol. 35, No. 2 / pp.81-90 / 2017
  • This study aimed to construct an integrated analysis database on a common spatio-temporal unit from various types of public data with different real-time information provision cycles and spatial units. To this end, three temporal interpolation methods (piecewise constant, linear, and nonlinear interpolation) and a spatial matching method based on district boundaries were proposed. The case study revealed that linear interpolation performed well, and the spatial matching method also showed good results. It is hoped that various prediction models and data analysis methods will be developed in the future using the different types of data in the analysis database.
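
A minimal sketch of the three temporal interpolation families named above (piecewise constant, linear, nonlinear), resampling a hypothetical 5-minute reporting cycle onto a 1-minute grid with SciPy; the values and grids are made up, and the paper's exact formulations may differ.

```python
import numpy as np
from scipy.interpolate import interp1d

# Observations from a source reported every 5 minutes; the target grid is 1 minute
t_obs = np.array([0, 5, 10, 15, 20], dtype=float)           # minutes
v_obs = np.array([12.0, 15.0, 9.0, 11.0, 14.0])              # e.g., traffic volume
t_target = np.arange(0, 21, 1, dtype=float)

methods = {
    "piecewise constant": interp1d(t_obs, v_obs, kind="previous"),
    "linear": interp1d(t_obs, v_obs, kind="linear"),
    "nonlinear (cubic)": interp1d(t_obs, v_obs, kind="cubic"),
}
for name, f in methods.items():
    print(f"{name:20s}", np.round(f(t_target), 2))
```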

Forecasting COVID-19 confirmed cases in South Korea using Spatio-Temporal Graph Neural Networks

  • Ngoc, Kien Mai;Lee, Minho
    • International Journal of Contents / Vol. 17, No. 3 / pp.1-14 / 2021
  • Since the outbreak of the coronavirus disease 2019 (COVID-19) pandemic, many efforts have been made in the field of data science to help combat this disease. Among them, forecasting the number of infection cases is a crucial problem for predicting the development of the pandemic. Many deep learning-based models can be applied to solve this type of time series problem. In this research, we take a step further and incorporate spatial (geographic) data with time series data to forecast region-level infection cases simultaneously. Specifically, we model a single spatio-temporal graph in which nodes represent geographic regions, spatial edges represent the distance between each pair of regions, and temporal edges carry the node features through time. We evaluate this approach on a Korean COVID-19 dataset and show a decrease of approximately 10% in both RMSE and MAE, as well as a significant boost in training speed, compared to the baseline models. Moreover, this training efficiency allows the approach to be extended to large-scale spatio-temporal datasets.
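
A minimal sketch of the graph construction described in the abstract: nodes as regions, spatial edges weighted by inter-region distance, and temporal edges linking each region to itself on the next day, with case counts as node features. The region names, coordinates, and counts below are fabricated; the paper's dataset and forecasting model are not reproduced here.

```python
import numpy as np

# Hypothetical regions with coordinates (lat, lon) and daily confirmed cases
regions = ["Seoul", "Busan", "Daegu"]
coords = np.array([[37.57, 126.98], [35.18, 129.08], [35.87, 128.60]])
cases = np.array([[120, 135, 150, 160],      # cases per day, one row per region
                  [ 40,  42,  55,  61],
                  [ 80,  75,  90,  88]], dtype=float)

n, T = cases.shape

# Spatial edges: fully connected, weighted by inverse pairwise distance
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
spatial_w = np.divide(1.0, dist, out=np.zeros_like(dist), where=dist > 0)

# Temporal edges: each region node at day t links to itself at day t+1
temporal_edges = [((r, t), (r, t + 1)) for r in range(n) for t in range(T - 1)]

# Node features through time: (region, day) -> case count, normalized per region
features = cases / cases.max(axis=1, keepdims=True)

print("spatial weights:\n", np.round(spatial_w, 3))
print("number of temporal edges:", len(temporal_edges))
print("feature matrix shape (regions x days):", features.shape)
```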

Traffic Flow Prediction with Spatio-Temporal Information Fusion using Graph Neural Networks

  • Huijuan Ding;Giseop Noh
    • International journal of advanced smart convergence / Vol. 12, No. 4 / pp.88-97 / 2023
  • Traffic flow prediction is of great significance in urban planning and traffic management. As the complexity of urban traffic increases, existing prediction methods still face challenges, especially in fusing spatio-temporal information and capturing long-term dependencies. This study uses a graph neural network fusion model to address the spatio-temporal information fusion problem in traffic flow prediction. We propose a new deep learning model, Spatio-Temporal Information Fusion using Graph Neural Networks (STFGNN). We alternate GCN, TCN, and LSTM modules to carry out spatio-temporal information fusion: the GCN and multi-core TCN modules capture the spatial and temporal dependencies of traffic flow, respectively, and the LSTM connects multiple fusion modules to fuse the spatio-temporal information. In an experimental evaluation on real traffic flow data, STFGNN showed better performance than other models.
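
A toy spatio-temporal block showing how GCN (spatial), TCN-style dilated convolution (temporal), and LSTM (fusion) layers can be combined, in the spirit of the abstract; it is not the STFGNN architecture itself, and the adjacency matrix, shapes, and layer sizes are illustrative (PyTorch assumed).

```python
import torch
import torch.nn as nn

class SimpleSTBlock(nn.Module):
    """Toy spatio-temporal block: a graph convolution (A_hat @ X @ W) captures spatial
    dependencies, a dilated 1-D convolution (TCN-style) captures temporal dependencies,
    and an LSTM fuses the two feature streams before the prediction head."""
    def __init__(self, num_nodes: int, adj: torch.Tensor, hidden: int = 16):
        super().__init__()
        # Normalized adjacency with self-loops (A_hat)
        a = adj + torch.eye(num_nodes)
        d = a.sum(dim=1).pow(-0.5)
        self.register_buffer("a_hat", d.unsqueeze(1) * a * d.unsqueeze(0))
        self.gcn_w = nn.Linear(1, hidden)                          # per-node feature lift
        self.tcn = nn.Conv1d(num_nodes, hidden, kernel_size=3, dilation=2, padding=2)
        self.lstm = nn.LSTM(hidden * num_nodes + hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_nodes)                   # next-step flow per node

    def forward(self, x):                                          # x: (batch, T, N)
        b, t, n = x.shape
        spatial = torch.relu(self.a_hat @ self.gcn_w(x.unsqueeze(-1)))   # (b, T, N, hidden)
        temporal = torch.relu(self.tcn(x.transpose(1, 2)))[..., :t]      # (b, hidden, T)
        fused = torch.cat([spatial.flatten(2), temporal.transpose(1, 2)], dim=-1)
        out, _ = self.lstm(fused)                                  # (b, T, hidden)
        return self.head(out[:, -1])                               # (b, N)

adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])    # 3 road sensors in a line
model = SimpleSTBlock(num_nodes=3, adj=adj)
x = torch.randn(8, 12, 3)                                          # 12 past steps, 3 sensors
print(model(x).shape)                                              # torch.Size([8, 3])
```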

Collective Prediction exploiting Spatio Temporal correlation (CoPeST) for energy efficient wireless sensor networks

  • ARUNRAJA, Muruganantham;MALATHI, Veluchamy
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 9, No. 7 / pp.2488-2511 / 2015
  • Data redundancy has a high impact on the performance and reliability of wireless sensor networks (WSNs). Spatial and temporal similarity is an inherent property of sensory data, and by reducing this spatio-temporal data redundancy, a substantial amount of nodal energy and bandwidth can be conserved. Most data gathering approaches use either temporal correlation or spatial correlation to minimize data redundancy. In Collective Prediction exploiting Spatio Temporal correlation (CoPeST), we exploit both the spatial and the temporal correlation between sensory data. In the proposed work, the spatial redundancy of sensor data is reduced by similarity-based sub-clustering, where closely correlated sensor nodes are represented by a single representative node. The temporal redundancy is reduced by a model-based prediction approach, where only a subset of the sensor data is transmitted and the rest is predicted. The proposed work eliminates a substantial amount of energy-expensive communication while keeping the data within a user-defined error threshold. Being a distributed approach, the proposed work is highly scalable. The work achieves up to 65% data reduction in a periodical data gathering system with an error tolerance of 0.6℃ on the collected data.
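
A minimal sketch of the two redundancy-reduction ideas described above: similarity-based sub-clustering of spatially correlated nodes, and prediction-based suppression of temporally redundant transmissions. The clustering rule, the last-value predictor, and the sensor data below are illustrative simplifications, not the CoPeST algorithm itself.

```python
import numpy as np

def similarity_subclusters(readings: np.ndarray, threshold: float = 0.95):
    """Greedy similarity-based sub-clustering: a node joins an existing cluster when its
    readings correlate highly with the cluster representative; otherwise it starts a new
    cluster and becomes the representative itself."""
    clusters = []                                   # list of (representative index, member indices)
    for i in range(readings.shape[0]):
        for rep, members in clusters:
            if np.corrcoef(readings[i], readings[rep])[0, 1] >= threshold:
                members.append(i)
                break
        else:
            clusters.append((i, [i]))
    return clusters

def transmissions_needed(series: np.ndarray, tolerance: float = 0.6):
    """Temporal redundancy reduction: a value is transmitted only when the shared
    prediction (here simply the last transmitted value) errs by more than the tolerance."""
    last, sent = series[0], 1                       # the first value is always transmitted
    for v in series[1:]:
        if abs(v - last) > tolerance:
            last, sent = v, sent + 1
    return sent

# Toy deployment: 6 temperature sensors in two spatially correlated groups
rng = np.random.default_rng(3)
trend_a = 20 + np.linspace(0, 2, 200)                    # slowly warming zone
trend_b = 25 + np.sin(np.linspace(0, 6, 200))            # oscillating zone
readings = np.vstack([trend_a + rng.normal(0, 0.05, 200) for _ in range(3)] +
                     [trend_b + rng.normal(0, 0.05, 200) for _ in range(3)])

clusters = similarity_subclusters(readings)
sent = sum(transmissions_needed(readings[rep]) for rep, _ in clusters)
print(f"clusters: {len(clusters)}, transmissions: {sent} of {readings.size} raw readings")
```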