• Title/Summary/Keyword: temporal network

Search Results: 613

3D video coding for e-AG using spatio-temporal scalability (e-AG를 위한 시공간적 계위를 이용한 3차원 비디오 압축)

  • 오세찬;이영호;우운택
    • Proceedings of the IEEK Conference / 2003.11a / pp.199-202 / 2003
  • In this paper, we propose a new 3D coding method, based on spatio-temporal scalability, for heterogeneous systems with 3D displays over the enhanced Access Grid (e-AG). The proposed encoder produces four bit-streams: one base layer and enhancement layers 1, 2, and 3. The base layer represents the video sequence for the left eye at lower spatial resolution. Enhancement layer 1 provides the additional bit-stream needed to reproduce the base-layer frames at full resolution. Similarly, enhancement layer 2 represents the video sequence for the right eye at lower spatial resolution, and enhancement layer 3 provides the additional bit-stream needed to reproduce its reference pictures at full resolution. Temporal resolution is reduced by dropping B-frames in the receiver according to network conditions. The receiving system can select the spatial and temporal resolution of the video sequence to match its display conditions by combining the bit-streams appropriately (see the layer-selection sketch after this entry).

  • PDF
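
For illustration only: a minimal Python sketch of the kind of receiver-side selection the abstract above describes, where the decoder chooses which of the four bit-streams to request for its display and drops B-frames under poor network conditions. The function name, bandwidth threshold, and layer labels are assumptions, not details from the paper.

    # Hypothetical receiver-side selection of the four bit-streams described above.
    # Layer labels, the bandwidth threshold, and the return format are illustrative.
    def select_layers(stereo_display, full_resolution, bandwidth_kbps):
        layers = ["base"]                        # left eye, lower spatial resolution
        if full_resolution:
            layers.append("enh1")                # left eye upgraded to full resolution
        if stereo_display:
            layers.append("enh2")                # right eye, lower spatial resolution
            if full_resolution:
                layers.append("enh3")            # right eye upgraded to full resolution
        # Temporal scalability: drop B-frames when the network is congested.
        frame_policy = "drop-B-frames" if bandwidth_kbps < 512 else "all-frames"
        return layers, frame_policy

    print(select_layers(stereo_display=True, full_resolution=False, bandwidth_kbps=400))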

Traffic Flow Prediction with Spatio-Temporal Information Fusion using Graph Neural Networks

  • Huijuan Ding;Giseop Noh
    • International journal of advanced smart convergence / v.12 no.4 / pp.88-97 / 2023
  • Traffic flow prediction is of great significance in urban planning and traffic management. As the complexity of urban traffic increases, existing prediction methods still face challenges, especially in fusing spatio-temporal information and capturing long-term dependencies. This study uses a graph-neural-network fusion model to address the spatio-temporal information fusion problem in traffic flow prediction. We propose a new deep learning model, Spatio-Temporal Information Fusion using Graph Neural Networks (STFGNN). GCN, TCN, and LSTM modules are used alternately to carry out spatio-temporal information fusion: the GCN and multi-core TCN capture the spatial and temporal dependencies of traffic flow, respectively, and the LSTM connects multiple fusion modules. In an experimental evaluation on real traffic flow data, STFGNN showed better performance than other models.
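
As a rough illustration of the kind of GCN-TCN-LSTM block the abstract describes, here is a minimal PyTorch sketch; the layer sizes, dilated-convolution settings, and node-averaging before the LSTM are assumptions, not details from the paper.

    import torch
    import torch.nn as nn

    class FusionBlock(nn.Module):
        """One GCN -> TCN -> LSTM block; sizes and wiring are illustrative."""
        def __init__(self, in_dim, hidden):
            super().__init__()
            self.gcn = nn.Linear(in_dim, hidden)              # spatial mixing weights
            self.tcn = nn.Conv1d(hidden, hidden, kernel_size=3,
                                 padding=2, dilation=2)       # dilated temporal conv
            self.lstm = nn.LSTM(hidden, hidden, batch_first=True)

        def forward(self, x, adj):
            # x: (batch, time, nodes, features); adj: normalized (nodes, nodes) adjacency
            h = torch.relu(torch.einsum("ij,btjf->btif", adj, self.gcn(x)))   # spatial deps
            b, t, n, f = h.shape
            h = self.tcn(h.permute(0, 2, 3, 1).reshape(b * n, f, t))[..., :t]  # temporal deps
            h = h.reshape(b, n, f, t).permute(0, 3, 1, 2)
            out, _ = self.lstm(h.mean(dim=2))                 # LSTM ties the fused features
            return h, out

    # Example: batches of 12 past steps over 20 sensor nodes with 1 feature each.
    h, out = FusionBlock(in_dim=1, hidden=32)(torch.randn(8, 12, 20, 1), torch.eye(20))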

Using the fusion of spatial and temporal features for malicious video classification (공간과 시간적 특징 융합 기반 유해 비디오 분류에 관한 연구)

  • Jeon, Jae-Hyun;Kim, Se-Min;Han, Seung-Wan;Ro, Yong-Man
    • The KIPS Transactions: Part B / v.18B no.6 / pp.365-374 / 2011
  • Recently, malicious video classification and filtering techniques have become of practical interest, as one can easily access malicious multimedia content through the Internet, IPTV, online social networks, etc. Considerable research effort has been devoted to developing malicious video classification and filtering systems. However, malicious video classification and filtering are still not mature in terms of reliable classification/filtering performance. In particular, most conventional approaches have been limited to using only spatial features (such as the ratio of skin regions and bags of visual words) for malicious image classification. Hence, previous approaches have fallen short of acceptable classification and filtering performance. To overcome this limitation, we propose a new malicious video classification framework that takes advantage of both the spatial and temporal features that are readily extracted from a sequence of video frames. In particular, we develop effective temporal features based on motion periodicity and temporal correlation. In addition, to find the best way to combine the spatial and temporal features, representative data fusion approaches are applied to the proposed framework. To demonstrate the effectiveness of our method, we collected 200 sexual intercourse videos and 200 non-sexual intercourse videos. Experimental results show that the proposed method increases the classification accuracy for sexual intercourse videos by 3.75 percentage points (from 92.25% to 96%). Further, among the fusion strategies examined, feature-level fusion of the spatial and temporal features achieves the best classification accuracy.
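
Since feature-level fusion comes out on top in the abstract, here is a minimal sketch of that idea on dummy data: concatenate the per-video spatial and temporal descriptors and train a single classifier on the result. Feature dimensions, the SVM choice, and the train/test split are illustrative assumptions.

    import numpy as np
    from sklearn.svm import SVC

    def fuse_features(spatial_feats, temporal_feats):
        # Feature-level fusion: concatenate per-video spatial and temporal descriptors.
        return np.concatenate([spatial_feats, temporal_feats], axis=1)

    # Illustrative shapes: 400 videos, 64-dim spatial (e.g. skin-ratio / visual words) and
    # 16-dim temporal (e.g. motion-periodicity) descriptors -- all assumed, not the paper's.
    rng = np.random.default_rng(0)
    X_spatial, X_temporal = rng.random((400, 64)), rng.random((400, 16))
    y = rng.integers(0, 2, 400)          # 1 = malicious, 0 = benign (dummy labels)

    clf = SVC(kernel="rbf").fit(fuse_features(X_spatial, X_temporal)[:300], y[:300])
    print(clf.score(fuse_features(X_spatial, X_temporal)[300:], y[300:]))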

Improved Tweet Bot Detection Using Spatio-Temporal Information (시공간 정보를 사용한 개선된 트윗 봇 검출)

  • Kim, Hyo-Sang;Shin, Won-Yong;Kim, Donggeon;Cho, Jaehee
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.12 / pp.2885-2891 / 2015
  • Twitter, one of the most popular micro-blogging social network services, hosts a large number of automated programs, known as tweet bots, because of its open structure. While these tweet bots can be categorized into legitimate and malicious bots, detecting them is important since malicious bots spread spam and malicious content to human users. Conventional work used only temporal information to classify humans and bots. In this paper, using geo-tagged tweets that provide high-precision location information of users, we first identify both Twitter users' exact locations and the corresponding timestamps, and then propose an improved two-stage tweet bot detection algorithm that computes an entropy based on the spatio-temporal information. As the main result, the proposed algorithm shows better bot-detection and false-alarm probabilities than the conventional approach, which uses only temporal information.
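
A minimal sketch of an entropy computed from spatio-temporal information, in the spirit of the approach described above; the binning scheme, threshold, and single-step decision are simplified assumptions rather than the paper's actual two-stage algorithm.

    import math
    from collections import Counter

    def spatiotemporal_entropy(events, time_bin=3600, geo_bin=0.1):
        """Shannon entropy of a user's (hour-of-day, lat, lon) histogram.

        `events` is a list of (unix_timestamp, latitude, longitude) tuples from
        geo-tagged tweets; bin sizes are illustrative assumptions."""
        bins = Counter(
            (int(ts // time_bin) % 24, round(lat / geo_bin), round(lon / geo_bin))
            for ts, lat, lon in events)
        total = sum(bins.values())
        return -sum((c / total) * math.log2(c / total) for c in bins.values())

    def looks_like_bot(events, threshold=1.0):
        # Highly regular (low-entropy) posting patterns are flagged as bot-like.
        return spatiotemporal_entropy(events) < threshold

    # Example with three geo-tagged tweets (timestamp, lat, lon).
    print(looks_like_bot([(1700000000, 37.56, 126.97),
                          (1700003600, 37.56, 126.97),
                          (1700007200, 37.56, 126.97)]))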

SECOND BEST TEMPORALLY REPEATED FLOWS

  • Ciurea, Eleonor
    • Journal of applied mathematics & informatics / v.9 no.1 / pp.77-86 / 2002
  • Ford and Fulkerson have shown that a stationary maximal dynamic flow can be obtained by solving a transshipment problem associated with the static network and thereby finding the maximal temporally repeated dynamic flow. This flow is known to be an optimal dynamic flow. This paper presents an algorithm for second-best temporally repeated flows. A numerical example is presented.
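
For context, the Ford and Fulkerson result cited above is usually stated as follows (a textbook formulation, not reproduced from this paper): given a static flow f with path decomposition {f(P)} and transit time \tau(P) on each path, repeating each path flow at every step of the discrete horizon 0, 1, ..., T gives a temporally repeated dynamic flow of value

\[
  v_T(f) \;=\; \sum_{P:\ \tau(P)\le T} \bigl(T - \tau(P) + 1\bigr)\, f(P)
         \;=\; (T+1)\,\lvert f\rvert \;-\; \sum_{e\in E} \tau(e)\, f(e),
\]

where the second equality holds when every path in the decomposition satisfies \tau(P) \le T. Maximizing this value over static flows, e.g. via the associated transshipment problem, yields the maximal temporally repeated flow; a second-best solution is then the next-best flow of this form.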

Cortical Network Activated by Korean Traditional Opera (Pansori): A Functional MR Study

  • Kim, Yun-Hee;Kim, Hyun-Gi;Kim, Seong-Yong;Kim, Hyoung-Ihl;Parrish, Todd B.;Hong, In-Ki;Sohn, Jin-Hun
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2000.04a / pp.113-119 / 2000
  • Pansori is a Korean traditional vocal music with a unique story and melody that converts deep emotion into art. It has both verbal and emotional components, which may be coordinated by a large-scale neural network. The purpose of this study is to illustrate the cortical network activated by a Korean traditional opera, Pansori, with different emotional valences, using functional MRI (fMRI). Nine right-handed volunteers participated. Their mean age was 25.3 and the mean modified Edinburgh score was +90.1. Activation tasks were designed for the subjects to passively listen to two parts of Pansori with sad or hilarious emotional valence. White noise was introduced during the control periods. Imaging was conducted on a 1.5T Siemens Vision scanner. Single-shot echo-planar fMRI scans (TR/TE 3840/40 ms, flip angle 90, FOV 220, 64 x 64 matrix, 6 mm thickness) were acquired in 20 contiguous slices. Imaging data were motion-corrected, coregistered, normalized, and smoothed using SPM-96 software. Bilateral posterior temporal regions were activated in both Pansori tasks, but the asymmetry differed between the tasks. The Pansori with sad emotion showed more activation in the right superior temporal region, as well as the right inferior frontal and orbitofrontal areas, than in the left side. In the Pansori with hilarious emotion, there was remarkable activation in the left hemisphere, especially in the posterior temporal and temporo-occipital regions as well as the left inferior frontal and prefrontal areas. After subtraction between the two tasks, the sad Pansori showed more activation in the right temporoparietal and orbitofrontal areas; in contrast, the hilarious Pansori showed more activation in the left temporal and prefrontal areas. These results suggest that different hemispheric asymmetries and cortical areas subserve the processing of the different emotional valences carried by the Pansori.

  • PDF

A Study on the Speech Recognition of Korean Phonemes Using Recurrent Neural Network Models (순환 신경망 모델을 이용한 한국어 음소의 음성인식에 대한 연구)

  • 김기석;황희영
    • The Transactions of the Korean Institute of Electrical Engineers / v.40 no.8 / pp.782-791 / 1991
  • In pattern recognition fields such as speech recognition, several new techniques using Artificial Neural Network models have been proposed and implemented. In particular, the Multilayer Perceptron model has been shown to be effective in static speech pattern recognition. But speech has dynamic, temporal characteristics, and the most important point in implementing continuous speech recognition systems with Artificial Neural Network models is learning the dynamic characteristics, distributed cues, and contextual effects that result from those temporal characteristics. The Recurrent Multilayer Perceptron model, however, is known to be able to learn sequences of patterns. In this paper, the results of applying this recurrent model, which can learn the temporal characteristics of speech, to phoneme recognition are presented. The test data consist of 144 Vowel + Consonant + Vowel speech chains made up of 4 Korean monophthongs and 9 Korean plosive consonants. The input parameters of the Artificial Neural Network model are FFT coefficients, residual error, and zero-crossing rates. The baseline model showed a recognition rate of 91% for vowels and 71% for plosive consonants of one male speaker. We obtained better recognition rates in various other experiments compared to the existing Multilayer Perceptron model, showing the recurrent model to be better suited to speech recognition. The possibility of using recurrent models for speech recognition was further explored by changing the configuration of this baseline model.
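
A minimal sketch of an Elman-style recurrent classifier of the sort the abstract refers to, written in PyTorch; the feature dimension (34), hidden size, and output head are assumptions, while the 13 classes correspond to the 4 vowels plus 9 plosive consonants mentioned above.

    import torch
    import torch.nn as nn

    class PhonemeRNN(nn.Module):
        """Frame-wise phoneme classifier with a simple (Elman-style) recurrence."""
        def __init__(self, feat_dim=34, hidden=64, num_phonemes=13):
            super().__init__()
            self.rnn = nn.RNN(feat_dim, hidden, batch_first=True)  # simple recurrence
            self.out = nn.Linear(hidden, num_phonemes)

        def forward(self, frames):            # frames: (batch, time, feat_dim)
            h, _ = self.rnn(frames)
            return self.out(h)                # per-frame phoneme scores

    # Dummy forward pass on a batch of 2 utterances, 50 frames each.
    scores = PhonemeRNN()(torch.randn(2, 50, 34))
    print(scores.shape)                       # torch.Size([2, 50, 13])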

Spatio-temporal Analysis using Real-Time Data Processing for Wireless Sensor Networks (무선 센서 네트워크에서 실시간 데이터 처리를 이용한 시공간 분석)

  • Baek, Jeong-Ho;Mun, Young-Chae;Lee, Hong-Ro
    • Journal of KIISE: Computing Practices and Letters / v.16 no.6 / pp.688-692 / 2010
  • A wireless sensor network system collects and analyzes real-time data requested by many application nodes. This paper constructs a sensor network cluster with various elements in the Gunsan City area of Jeollabuk-do, South Korea. The purpose of this paper is to use the constructed system to present the real-time data in diagrams and analyze them to derive rates of change. The resulting analysis allows simple interpretation by presenting the rate of change of the data over time, space, and direction of motion. This analytical method will offer great benefit to users of the wireless sensor network.
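
A small sketch of computing a per-node rate of change from time-stamped sensor readings, one plausible reading of the "rate of change over time and space" described above; the data layout and the finite-difference formula are assumptions, not the paper's method.

    from collections import defaultdict

    def change_ratios(readings):
        """Per-node rate of change from (node_id, (x, y), timestamp, value) tuples."""
        by_node = defaultdict(list)
        for node, pos, ts, val in readings:
            by_node[(node, pos)].append((ts, val))
        ratios = {}
        for key, series in by_node.items():
            series.sort()                           # order each node's readings in time
            ratios[key] = [(t1, (v1 - v0) / (t1 - t0))
                           for (t0, v0), (t1, v1) in zip(series, series[1:]) if t1 > t0]
        return ratios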

System Identification Using Hybrid Recurrent Neural Networks (Hybrid 리커런트 신경망을 이용한 시스템 식별)

  • Choi Han-Go;Go Il-Whan;Kim Jong-In
    • Journal of the Institute of Convergence Signal Processing / v.6 no.1 / pp.45-52 / 2005
  • Dynamic neural networks have been applied to diverse fields requiring temporal signal processing. This paper describes system identification using a hybrid neural network, composed of locally recurrent (LRNN) and globally recurrent neural networks (GRNN), to improve the dynamics of multilayered recurrent networks (RNN). The structure of the hybrid network combines an IIR-MLP as the LRNN and an Elman RNN as the GRNN. The hybrid network is evaluated on linear and nonlinear system identification and compared with the Elman RNN and IIR-MLP networks. Simulation results show that the hybrid network performs better with respect to convergence and accuracy, indicating that it can be more effective than conventional multilayered recurrent networks for system identification (see the sketch after this entry).

  • PDF
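
The sketch below illustrates one way to cascade a locally recurrent stage with a globally recurrent (Elman) stage in PyTorch; the one-pole per-unit feedback merely stands in for the paper's IIR-MLP, and the sizes, activation, and readout are assumptions.

    import torch
    import torch.nn as nn

    class LocallyRecurrentLayer(nn.Module):
        """Toy locally recurrent layer: each unit feeds back its own activation
        through a learnable coefficient (a stand-in for the IIR-MLP)."""
        def __init__(self, in_dim, hidden_dim):
            super().__init__()
            self.fc = nn.Linear(in_dim, hidden_dim)
            self.a = nn.Parameter(torch.zeros(hidden_dim))   # per-unit feedback coefficient

        def forward(self, x):                 # x: (batch, time, in_dim)
            z = self.fc(x)
            outs, y_prev = [], torch.zeros(x.size(0), z.size(-1), device=x.device)
            for t in range(x.size(1)):
                y_prev = torch.tanh(z[:, t] + self.a * y_prev)   # local recurrence only
                outs.append(y_prev)
            return torch.stack(outs, dim=1)

    class HybridRNN(nn.Module):
        """LRNN stage followed by an Elman RNN (GRNN) and a linear readout."""
        def __init__(self, in_dim=1, hidden=16, out_dim=1):
            super().__init__()
            self.lrnn = LocallyRecurrentLayer(in_dim, hidden)
            self.grnn = nn.RNN(hidden, hidden, batch_first=True)   # global (Elman) recurrence
            self.readout = nn.Linear(hidden, out_dim)

        def forward(self, x):
            h = self.lrnn(x)
            h, _ = self.grnn(h)
            return self.readout(h)

    y_hat = HybridRNN()(torch.randn(4, 100, 1))   # (batch, time, 1) output sequence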

Two-Stream Convolutional Neural Network for Video Action Recognition

  • Qiao, Han;Liu, Shuang;Xu, Qingzhen;Liu, Shouqiang;Yang, Wanggan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.10 / pp.3668-3684 / 2021
  • Video action recognition is widely used in video surveillance, behavior detection, human-computer interaction, medically assisted diagnosis, and motion analysis. However, video action recognition can be disturbed by many factors, such as background and illumination. A two-stream convolutional neural network trains spatial and temporal models on the video separately and performs fusion at the output end. The multi-segment two-stream convolutional neural network model trains on temporal and spatial information from the video, extracts and fuses their features, and then determines the category of the video action. This paper adopts Google's Xception model with transfer learning, using an Xception model trained on ImageNet for the initial weights. This greatly alleviates the model underfitting caused by the limited size of video behavior datasets, effectively reduces the influence of various disturbing factors in the video, improves accuracy, and reduces training time. Furthermore, to make up for the shortage of data, the Kinetics-400 dataset was used for pre-training, which further improved the accuracy of the model. In this applied research the expected goal is largely achieved, and the design of the original two-stream model is improved.
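
To make the two-stream idea concrete, here is a minimal PyTorch sketch with a placeholder backbone standing in for Xception and score-averaging fusion at the output; the class count, optical-flow stack depth, and fusion rule are assumptions rather than the paper's exact configuration.

    import torch
    import torch.nn as nn

    def small_cnn(in_channels):
        # Placeholder backbone (the paper uses Xception; this is just a stand-in).
        return nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10))  # 10 classes assumed

    class TwoStream(nn.Module):
        def __init__(self, flow_stack=10):
            super().__init__()
            self.spatial = small_cnn(3)                # one RGB frame per segment
            self.temporal = small_cnn(2 * flow_stack)  # stacked optical-flow (x, y) fields

        def forward(self, rgb, flow):
            # Late fusion at the output end: average the per-stream class scores.
            return (self.spatial(rgb).softmax(-1) + self.temporal(flow).softmax(-1)) / 2

    probs = TwoStream()(torch.randn(2, 3, 224, 224), torch.randn(2, 20, 224, 224))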