Title/Summary/Keyword: Spatial-temporal combining

65 search results

Error Performance of Spatial-temporal Combining-based Spatial Multiplexing UWB Systems Using Transmit Antenna Selection

  • Kim, Sang-Choon
    • Journal of Information and Communication Convergence Engineering / v.10 no.3 / pp.215-219 / 2012
  • This paper applies transmit antenna selection algorithms to spatial-temporal combining-based spatial multiplexing (SM) ultra-wideband (UWB) systems. The selection criterion maximizes the minimum output signal-to-noise ratio (SNR) across the multiplexed streams. Simulations show that the bit error rate (BER) performance of SM UWB systems based on the two-dimensional Rake receiver is significantly improved by the antenna diversity gained through transmit antenna selection on a log-normal multipath fading channel. With this transmit antenna diversity exploited, the BER performance of the spatial-temporal combining-based zero-forcing (ZF) receiver is also compared with that of the ZF detector followed by the Rake receiver.
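
As a rough illustration of the max-min selection rule in this abstract, the sketch below picks the transmit-antenna subset whose zero-forcing post-detection SNRs have the largest minimum. It is a narrowband toy model, not the paper's UWB Rake formulation; all names and dimensions are invented for the example.

```python
import numpy as np
from itertools import combinations

# Hedged sketch: pick the transmit-antenna subset whose zero-forcing (ZF)
# post-detection SNRs have the largest minimum across the multiplexed
# streams. H is a flat-fading channel matrix (n_rx x n_tx_total); this is
# a toy stand-in for the paper's UWB Rake setting.

def min_zf_snr(H_sub, stream_snr=1.0):
    # Post-ZF SNR of stream k is proportional to 1 / [(H^H H)^{-1}]_{kk}.
    G = np.linalg.inv(H_sub.conj().T @ H_sub)
    return np.min(stream_snr / np.real(np.diag(G)))

def select_antennas(H, n_active):
    best_subset, best_metric = None, -np.inf
    for subset in combinations(range(H.shape[1]), n_active):
        metric = min_zf_snr(H[:, list(subset)])
        if metric > best_metric:
            best_subset, best_metric = subset, metric
    return best_subset

rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
print(select_antennas(H, n_active=2))  # indices of the 2 selected antennas
```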

Temporal and spatial outlier detection in wireless sensor networks

  • Nguyen, Hoc Thai; Thai, Nguyen Huu
    • ETRI Journal / v.41 no.4 / pp.437-451 / 2019
  • Outlier detection techniques play an important role in enhancing the reliability of data communication in wireless sensor networks (WSNs). Given this importance, many outlier detection techniques have been proposed. Unfortunately, most of them still have potential limitations, namely (a) a high false-positive rate, (b) high time complexity, and (c) an inability to detect outliers online. Moreover, these approaches mainly focus on either temporal outliers or spatial outliers. This paper therefore introduces novel algorithms that successfully detect both. Our contributions are twofold: (i) modifying the Hampel Identifier (HI) algorithm to achieve a high identification accuracy in temporal outlier detection, and (ii) combining a Gaussian process (GP) model with a graph-based outlier detection technique to improve performance in spatial outlier detection. The results demonstrate that our techniques outperform state-of-the-art methods in terms of accuracy and work well with various data types.
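
The Hampel Identifier the authors modify is, at its core, a sliding-window median test. A minimal sketch, assuming a 1-D sensor-reading stream and illustrative window/threshold values rather than the paper's tuned ones:

```python
import numpy as np

# Sliding-window Hampel identifier: flag a sample when it deviates from
# the window median by more than n_sigmas robust standard deviations.

def hampel(x, window=7, n_sigmas=3.0):
    x = np.asarray(x, dtype=float)
    k = 1.4826  # scale factor relating MAD to std for Gaussian data
    half = window // 2
    outliers = np.zeros(len(x), dtype=bool)
    for i in range(half, len(x) - half):
        win = x[i - half:i + half + 1]
        med = np.median(win)
        mad = k * np.median(np.abs(win - med))
        if mad > 0 and abs(x[i] - med) > n_sigmas * mad:
            outliers[i] = True
    return outliers

readings = np.array([20.1, 20.3, 20.2, 35.0, 20.4, 20.2, 20.3, 20.1, 20.2])
print(np.where(hampel(readings))[0])  # flags the spike at index 3
```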

3D video coding for e-AG using spatio-temporal scalability (e-AG를 위한 시공간적 계위를 이용한 3차원 비디오 압축)

  • 오세찬; 이영호; 우운택
    • Proceedings of the IEEK Conference / 2003.11a / pp.199-202 / 2003
  • In this paper, we propose a new 3D coding method, based on spatio-temporal scalability, for heterogeneous systems with 3D displays over the enhanced Access Grid (e-AG). The proposed encoder produces four bit-streams: one base layer and enhancement layers 1, 2, and 3. The base layer represents the left-eye video sequence at a lower spatial resolution. Enhancement layer 1 provides the additional bit-stream needed to reproduce the base-layer frames at full resolution. Similarly, enhancement layer 2 represents the right-eye video sequence at a lower spatial resolution, and enhancement layer 3 provides the additional bit-stream needed to reproduce its reference pictures at full resolution. Temporal resolution is reduced by dropping B-frames in the receiver according to network conditions. The receiver can thus select the spatial and temporal resolution of the video sequence to match its display conditions by combining the appropriate bit-streams.
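
The receiver-side combining just described reduces to a small stream-selection decision. A sketch under stated assumptions (the function and stream names are invented, not the paper's implementation):

```python
# Which of the four bit-streams to request, given display and network
# conditions; stream labels are illustrative assumptions.

def select_streams(stereo_display, full_resolution, congested_network):
    streams = ["base"]                # left eye, low spatial resolution
    if full_resolution:
        streams.append("enh1")        # left eye, full resolution
    if stereo_display:
        streams.append("enh2")        # right eye, low spatial resolution
        if full_resolution:
            streams.append("enh3")    # right eye, full resolution
    # Temporal scalability: drop B-frames when the network is congested.
    temporal = "B-frames dropped" if congested_network else "full frame rate"
    return streams, temporal

print(select_streams(stereo_display=True, full_resolution=False, congested_network=True))
```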


Multiscale Spatial Position Coding under Locality Constraint for Action Recognition

  • Yang, Jiang-feng; Ma, Zheng; Xie, Mei
    • Journal of Electrical Engineering and Technology / v.10 no.4 / pp.1851-1863 / 2015
  • In this paper, to address the fact that the traditional bag-of-features model ignores the spatial relationships of local features in human action recognition, we propose a Multiscale Spatial Position Coding under Locality Constraint method. Specifically, to describe these spatial relationships, we propose a mixed feature combining a motion feature with a multi-spatial-scale configuration. To exploit the temporal information between features, sub spatio-temporal volumes (sub-STVs) are built, and the pooled features of the sub-STVs are obtained via max-pooling. In the classification stage, Locality-Constrained Group Sparse Representation is adopted to exploit the intrinsic group information of the sub-STV features. Experimental results on the KTH, Weizmann, and UCF Sports datasets show that our action recognition system outperforms recently published recognition systems based on classical local spatio-temporal features.
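
Max-pooling coded descriptors within sub-STVs, as the abstract describes, can be sketched as follows; the descriptor dimensionality and the volume grid are invented for illustration:

```python
import numpy as np

# Pool coded local descriptors inside a grid of sub spatio-temporal
# volumes (sub-STVs) by element-wise max, then concatenate.

def max_pool_sub_stvs(codes, positions, grid=(2, 2, 2), extent=(64, 64, 32)):
    # codes: (n_features, code_dim) coded local descriptors
    # positions: (n_features, 3) x, y, t location of each descriptor
    pooled = np.zeros(grid + (codes.shape[1],))
    cell = [e / g for e, g in zip(extent, grid)]
    for code, pos in zip(codes, positions):
        idx = tuple(min(int(p // c), g - 1) for p, c, g in zip(pos, cell, grid))
        pooled[idx] = np.maximum(pooled[idx], code)
    return pooled.reshape(-1)  # one pooled vector per sub-STV, concatenated

codes = np.random.rand(100, 128)
positions = np.random.rand(100, 3) * np.array([64, 64, 32])
print(max_pool_sub_stvs(codes, positions).shape)  # (2*2*2*128,) = (1024,)
```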

Implementation of a Geo-Semantic App by Combining Mobile User Contexts with Geographic Ontologies

  • Lee, Ha-Jung; Lee, Yang-Won
    • Spatial Information Research / v.21 no.1 / pp.1-13 / 2013
  • This paper describes a GIS framework for geo-semantic information retrieval in mobile computing environments. We built geographic ontologies of POI (point of interest) and weather information so that semantic, spatial, and temporal functions can be combined in a fully integrated database. We also implemented a geo-semantic app for Android-based smartphones that extracts POIs appropriate to the user's context from the geographic ontologies and visualizes them using the Google Maps API (application programming interface). Feasibility tests showed that the app provides pertinent POI information according to mobile user contexts such as location, time, schedule, and weather: it discovered a convenience store that bakes bread in a bakery-search test and found a drive-in theater for a day without rain, both good examples of geo-semantic queries combining semantic, spatial, and temporal functions. As future work, ontology-based inference systems and the LOD (linked open data) of various ontologies will be needed for more advanced sharing of geographic knowledge.
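
A geo-semantic query of the kind tested above boils down to conjoining semantic, spatial, and temporal predicates over a POI store. A minimal sketch, with an invented POI schema standing in for the paper's ontologies:

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class POI:
    name: str
    category: str
    lat: float
    lon: float
    open_hours: range  # e.g. range(8, 22) for 08:00-22:00

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in kilometers.
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def query(pois, category, user_lat, user_lon, hour, radius_km=1.0):
    return [p for p in pois
            if p.category == category                                       # semantic
            and haversine_km(user_lat, user_lon, p.lat, p.lon) <= radius_km # spatial
            and hour in p.open_hours]                                       # temporal

pois = [POI("Baking CVS", "convenience_store", 37.57, 126.98, range(0, 24))]
print(query(pois, "convenience_store", 37.57, 126.98, hour=21))
```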

Characteristics of Hybrid Expression in Fashion Illustration (패션 일러스트레이션의 혼성적 표현 특성에 관한 연구)

  • Kim, Soon-Ja
    • Journal of the Korea Fashion and Costume Design Association / v.15 no.1 / pp.59-74 / 2013
  • Post-modern society leads us to accept diversity and variability instead of pursuing absolute truth, beauty, or classical value systems, giving rise to hybrid phenomena. The purpose of this study is to analyze the characteristics and expressive effects of hybrid expression, thereby providing a conceptual basis for interpreting the expanded meanings of fashion illustrations that express the aesthetic concepts of hybrid culture. Hybrid refers to a condition in which diverse elements are mixed such that no single element dominates the others; it is often used to create something unique and new by combining unprecedented things. Hybrid can be classified into four categories: temporal hybrid, spatial hybrid, morphological hybrid, and hybrid of different genres. Temporal hybrid, combining past and present in fashion illustration, includes temporal blending by repetition and juxtaposition. Spatial hybrid shows itself in the form of inter-penetration and interrelationship by means of projection, overlapping, juxtaposition, and multiple space. Morphological hybrid expresses itself through the combination of heterogeneous forms and the restructuring of deformed forms. Hybrid of different genres in fashion illustration applies various graphic elements or photos within the space and represents the blending of art and daily living. Such hybrid expressions in fashion illustration reflect the diversity and variability of post-modern society; they point to endless possibilities for expressing new images by combining various formal or casual elements and can develop into a new creative technique.


Applicability Evaluation of Spatio-Temporal Data Fusion Using Fine-scale Optical Satellite Image: A Study on Fusion of KOMPSAT-3A and Sentinel-2 Satellite Images (고해상도 광학 위성영상을 이용한 시공간 자료 융합의 적용성 평가: KOMPSAT-3A 및 Sentinel-2 위성영상의 융합 연구)

  • Kim, Yeseul; Lee, Kwang-Jae; Lee, Sun-Gu
    • Korean Journal of Remote Sensing / v.37 no.6_3 / pp.1931-1942 / 2021
  • As the utility of optical satellite images with a high spatial resolution (i.e., fine scale) has been emphasized, various studies of land-surface monitoring using such images have recently been carried out. However, the usefulness of fine-scale satellite images is limited because they are acquired at a low temporal resolution. To compensate for this limitation, spatio-temporal data fusion can be applied to generate a synthetic image with high spatio-temporal resolution by fusing multiple satellite images of different spatial and temporal resolutions. Since previous studies developed spatio-temporal fusion models for satellite images of medium or low spatial resolution, it is necessary to evaluate the applicability of these models to satellite images of high spatial resolution. To this end, this study evaluated the applicability of existing spatio-temporal fusion models to KOMPSAT-3A and Sentinel-2 images. The Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM) and the Spatial Time-series Geostatistical Deconvolution/Fusion Model (STGDFM), which use different information for prediction, were applied. The results show that the prediction performance of STGDFM, which combines temporally continuous reflectance values, was better than that of ESTARFM. In particular, the prediction performance of STGDFM improved significantly when it was difficult to acquire KOMPSAT and Sentinel-2 images on the same date owing to the low temporal resolution of KOMPSAT images. These results confirm that STGDFM, whose prediction performance benefits from combining continuous temporal information, can compensate for the low revisit frequency of fine-scale satellite images.
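
STARFM-family models such as ESTARFM predict a fine-scale image at a new date from a fine-scale base image plus the coarse-scale temporal change. A heavily simplified sketch of that core idea (omitting the similar-pixel weighting and the geostatistical machinery of STGDFM; array sizes are illustrative):

```python
import numpy as np

# Toy spatio-temporal fusion: fine image at prediction date t_p is the
# fine base image at t_0 plus the upsampled coarse-scale change.

def fuse(fine_t0, coarse_t0, coarse_tp):
    # Upsample coarse images to the fine grid by simple repetition.
    scale = fine_t0.shape[0] // coarse_t0.shape[0]
    up = lambda img: np.kron(img, np.ones((scale, scale)))
    return fine_t0 + (up(coarse_tp) - up(coarse_t0))

fine_t0 = np.random.rand(60, 60)   # e.g. KOMPSAT-3A-like fine image at t_0
coarse_t0 = np.random.rand(6, 6)   # e.g. Sentinel-2-like coarse image at t_0
coarse_tp = coarse_t0 + 0.1        # coarse image at the prediction date t_p
print(fuse(fine_t0, coarse_t0, coarse_tp).shape)  # (60, 60)
```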

Hole-filling Algorithm Based on Extrapolating Spatial-Temporal Background Information for View Synthesis in Free Viewpoint Television (자유 시점 TV에서 시점 합성을 위한 시공간적 배경 정보 추정 기반 홀 채움 방식)

  • Kim, Beomsu; Nguyen, Tien-Dat; Hong, Min-cheol
    • Journal of IKEEE / v.20 no.1 / pp.31-44 / 2016
  • This paper presents a hole-filling algorithm, based on extrapolating spatial-temporal background information, for view synthesis in free-viewpoint television. A new background codebook is constructed and updated to extract reliable temporal background information. In addition, spatial local background values are estimated to discriminate an adaptive boundary between background and foreground regions and to update the information about the hole region. The holes are then filled by combining the spatial and temporal background information. An exemplar-based inpainting technique fills the remaining holes, with a priority function using background-depth information to determine the filling order. Experimental results demonstrate that the proposed algorithm outperformed comparative methods by about 0.3-0.6 dB on average and synthesized satisfactory views regardless of video characteristics and the type of hole region.
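
The temporal-background extraction step can be sketched as a per-pixel codebook that keeps the farthest stable value seen so far, which is then used to fill holes before inpainting. The depth convention and update rule below are illustrative assumptions, not the paper's codebook design:

```python
import numpy as np

# Per-pixel background codebook: keep the largest-depth (assumed farthest)
# color seen so far, and fill disocclusion holes from it.

def update_background_codebook(bg_color, bg_depth, frame, depth):
    farther = depth > bg_depth          # treat larger depth as background
    bg_color[farther] = frame[farther]
    bg_depth[farther] = depth[farther]
    return bg_color, bg_depth

def fill_holes(synth, hole_mask, bg_color):
    out = synth.copy()
    out[hole_mask] = bg_color[hole_mask]   # temporal background first;
    return out                             # leftovers go to inpainting

h, w = 4, 4
bg_color, bg_depth = np.zeros((h, w)), np.full((h, w), -np.inf)
frame, depth = np.random.rand(h, w), np.random.rand(h, w)
bg_color, bg_depth = update_background_codebook(bg_color, bg_depth, frame, depth)
holes = np.zeros((h, w), dtype=bool); holes[1, 2] = True
print(fill_holes(np.random.rand(h, w), holes, bg_color)[1, 2] == bg_color[1, 2])
```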

Combining 2D CNN and Bidirectional LSTM to Consider Spatio-Temporal Features in Crop Classification (작물 분류에서 시공간 특징을 고려하기 위한 2D CNN과 양방향 LSTM의 결합)

  • Kwak, Geun-Ho; Park, Min-Gyu; Park, Chan-Won; Lee, Kyung-Do; Na, Sang-Il; Ahn, Ho-Yong; Park, No-Wook
    • Korean Journal of Remote Sensing / v.35 no.5_1 / pp.681-692 / 2019
  • In this paper, a hybrid deep learning model, called 2D convolution with bidirectional long short-term memory (2DCBLSTM), is presented that can effectively combine spatial and temporal features for crop classification. In the proposed model, 2D convolution operators first extract the spatial features of crops, and the extracted spatial features are then used as inputs to a bidirectional LSTM model that can effectively process temporal features. To evaluate the classification performance of the proposed model, a case study of crop classification was carried out using multi-temporal unmanned aerial vehicle images acquired in Anbandegi, Korea. For comparison, conventional deep learning models were applied: a two-dimensional convolutional neural network (CNN) using spatial features, an LSTM using temporal features, and a three-dimensional CNN using spatio-temporal features. An impact analysis of hyper-parameters on classification performance showed that using both spatial and temporal features greatly reduced crop misclassification, and the proposed hybrid model achieved the best classification accuracy compared with the conventional models that considered either spatial or temporal features alone. The proposed model is therefore expected to be effectively applicable to crop classification owing to its ability to consider the spatio-temporal features of crops.
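
A minimal sketch of the 2D-CNN-then-bidirectional-LSTM arrangement described above, in PyTorch; the channel counts, patch size, and number of dates are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    def __init__(self, in_bands=4, n_classes=5, hidden=64):
        super().__init__()
        # Per-date spatial feature extractor (shared across dates).
        self.cnn = nn.Sequential(
            nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                      # x: (batch, time, bands, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1))      # (b*t, 32, 1, 1)
        feats = feats.reshape(b, t, -1)        # temporal sequence of features
        out, _ = self.lstm(feats)              # bidirectional temporal pass
        return self.fc(out[:, -1])             # classify from last time step

model = CNNBiLSTM()
print(model(torch.randn(2, 6, 4, 9, 9)).shape)  # (2, 5) class scores
```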

Human Motion Recognition Based on Spatio-temporal Convolutional Neural Network

  • Hu, Zeyuan; Park, Sange-yun; Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.23 no.8 / pp.977-985 / 2020
  • To address the difficulty of feature extraction and the low accuracy of human action recognition, this paper proposes a network structure combining the batch normalization algorithm with the GoogLeNet network model. Carrying the batch normalization idea from image classification over to action recognition, it improves the algorithm by normalizing the network's input training samples per mini-batch. For the convolutional network, RGB images form the spatial input and stacked optical flow frames form the temporal input; the spatial and temporal networks are then fused to obtain the final action recognition result. The architecture was trained and evaluated on the standard video action benchmarks UCF101 and HMDB51, achieving accuracies of 93.42% and 67.82%, respectively. The results show that the improved convolutional neural network significantly raises the recognition rate and has clear advantages in action recognition.
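
The two-stream arrangement described above (RGB spatial stream, stacked-optical-flow temporal stream, late score fusion) can be sketched as follows; the tiny batch-normalized backbones are stand-ins for the paper's GoogLeNet:

```python
import torch
import torch.nn as nn

class TinyStream(nn.Module):
    def __init__(self, in_ch, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1),
            nn.BatchNorm2d(16),                 # mini-batch normalization
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, n_classes),
        )

    def forward(self, x):
        return self.net(x)

n_classes = 101                                        # e.g. UCF101
spatial = TinyStream(in_ch=3, n_classes=n_classes)     # RGB frame input
temporal = TinyStream(in_ch=20, n_classes=n_classes)   # 10 stacked flow pairs

rgb = torch.randn(2, 3, 224, 224)
flow = torch.randn(2, 20, 224, 224)
# Late fusion: average the per-stream class probabilities.
scores = (spatial(rgb).softmax(-1) + temporal(flow).softmax(-1)) / 2
print(scores.argmax(-1))
```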