• Title, Summary, Keyword: temporal feature

Search Result 282, Processing Time 0.059 seconds

Semi-fragile Watermarking Scheme for H.264/AVC Video Content Authentication Based on Manifold Feature

  • Ling, Chen;Ur-Rehman, Obaid;Zhang, Wenjun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.12
    • /
    • pp.4568-4587
    • /
    • 2014
  • Authentication of videos and images based on their content is becoming an important problem in information security. Unfortunately, previous studies lack consideration of Kerckhoffs's principle (i.e., a cryptosystem should be secure even if everything about the system, except the key, is public knowledge). In this paper, a solution to the problem of finding a relationship between a frame's index and its content is proposed, based on the creative utilization of a robust manifold feature. The proposed solution is based on a novel semi-fragile watermarking scheme for H.264/AVC video content authentication. First, the input I-frame is partitioned for feature extraction and watermark embedding. This is followed by temporal feature extraction using the Isometric Mapping algorithm. The frame index is included in the feature to produce the temporal watermark. To improve security, the spatial watermark is encrypted together with the temporal watermark. Finally, the resultant watermark is embedded into the Discrete Cosine Transform coefficients at the diagonal positions. At the receiver side, after watermark extraction and decryption, temporal tampering is detected through a mismatch between the frame index extracted from the temporal watermark and the observed frame index. Next, the feature is regenerated through temporal feature regeneration and compared with the extracted feature; this comparison determines whether the extracted temporal watermark is similar to that of the original watermarked video. Additionally, for spatial authentication, tampered areas are located by comparing the extracted and regenerated spatial features. Experimental results show that the proposed method is sensitive to intentional malicious attacks and modifications, whereas it is robust to legitimate manipulations such as a certain level of lossy compression, channel noise, Gaussian filtering, and brightness adjustment.
Through a comparison between the extracted frame index and the current frame index, temporal tampering is identified. The proposed scheme thus provides a solution that satisfies Kerckhoffs's principle.
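The diagonal-position DCT embedding described in this abstract can be illustrated with a small sketch. This is our reading, not the authors' code: the quantization step `Q` and the parity rule used to hide a bit are invented for the example.

```python
# Hypothetical sketch: hide one watermark bit in a diagonal DCT coefficient
# of an 8x8 block by forcing the quantized coefficient's parity to match the
# bit. Q and the parity convention are illustrative, not the paper's values.

Q = 4  # illustrative quantization step

def embed_bit(block, bit, pos):
    """Embed one bit at diagonal position (pos, pos) of an 8x8 DCT block."""
    q = round(block[pos][pos] / Q)
    if q % 2 != bit:        # adjust parity so it encodes the bit
        q += 1
    block[pos][pos] = q * Q
    return block

def extract_bit(block, pos):
    """Recover the bit from the quantized coefficient's parity."""
    return round(block[pos][pos] / Q) % 2

block = [[float(r * 8 + c) for c in range(8)] for r in range(8)]
marked = embed_bit([row[:] for row in block], 1, 3)
assert extract_bit(marked, 3) == 1
```

At the receiver, extracting the parity of the same coefficient recovers the bit without the original video, in keeping with the semi-fragile design.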

Dynamic gesture recognition using a model-based temporal self-similarity and its application to taebo gesture recognition

  • Lee, Kyoung-Mi;Won, Hey-Min
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.11
    • /
    • pp.2824-2838
    • /
    • 2013
  • There has been much recent attention to analyzing dynamic human gestures that vary over time. Most work on dynamic gestures concerns spatio-temporal features, rather than analyzing each frame of a gesture separately. For accurate dynamic gesture recognition, motion feature extraction algorithms need to find representative features that uniquely identify time-varying gestures. This paper proposes a new feature extraction algorithm using temporal self-similarity based on a hierarchical human model. Because a conventional temporal self-similarity method computes whole-body movement over continuous frames, it cannot distinguish different gestures with the same amount of movement. The proposed model-based temporal self-similarity method groups the body parts of a hierarchical model into several sets and calculates movement for each set. While recognition results can depend on how the sets are formed, the best way to find optimal sets is to separate frequently used body parts from less-used ones. A multiclass support vector machine whose optimization algorithm is based on structural support vector machines is then applied. In this paper, the effectiveness of the proposed feature extraction algorithm is demonstrated in an application to taebo gesture recognition. We show that the model-based temporal self-similarity method overcomes the shortcomings of the conventional method and that its recognition results are superior to those of the conventional method.
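The key claim, that per-set movement distinguishes gestures a whole-body measure confuses, can be shown with a toy sketch. This is our illustration, not the authors' implementation; the joint layout and the two sets (arms vs. legs) are invented.

```python
# Toy sketch of the model-based idea: movement is accumulated per body-part
# set instead of over the whole body, so gestures with equal total movement
# but different part usage remain distinguishable.

def per_set_movement(frames, sets):
    """frames: per-frame 1-D joint positions; sets: groups of joint indices.
    Returns, per frame transition, a tuple of movement amounts per set."""
    out = []
    for prev, cur in zip(frames, frames[1:]):
        out.append(tuple(sum(abs(cur[j] - prev[j]) for j in s) for s in sets))
    return out

sets = [(0, 1), (2, 3)]              # e.g. arm joints vs. leg joints
arms = [[0, 0, 0, 0], [1, 1, 0, 0]]  # only the arm joints move
legs = [[0, 0, 0, 0], [0, 0, 1, 1]]  # only the leg joints move

# Whole-body movement is identical (2 units per transition) ...
assert sum(per_set_movement(arms, sets)[0]) == sum(per_set_movement(legs, sets)[0]) == 2
# ... but the per-set movement vectors differ, so the gestures separate.
assert per_set_movement(arms, sets) != per_set_movement(legs, sets)
```

A self-similarity matrix built from these per-set vectors, rather than from a single whole-body movement value, is what the paper's model-based method feeds to the classifier.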

Temporal Texture modeling for Video Retrieval (동영상 검색을 위한 템포럴 텍스처 모델링)

  • Kim, Do-Nyun;Cho, Dong-Sub
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.50 no.3
    • /
    • pp.149-157
    • /
    • 2001
  • In a video retrieval system, visual cues from still images and motion information from video are employed as feature vectors. We generate temporal textures to express the motion information; their merits are simple representation and easy computation. We build these temporal textures from the wavelet coefficients that express the motion information (M components). Temporal texture feature vectors are then extracted using spatial texture feature vectors, i.e., spatial gray-level dependence. In addition, motion amount and motion centroid are computed from the temporal textures. Motion trajectories provide the most important information for expressing motion properties, and in our modeling system the main motion trajectory can be extracted from the temporal textures.
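The motion amount and motion centroid mentioned above can be computed straightforwardly once a temporal texture is treated as a 2-D array of motion magnitudes. The following sketch uses our own names and toy data, not the paper's wavelet-based construction.

```python
# Illustrative computation of motion amount (total magnitude) and motion
# centroid (magnitude-weighted center) from a temporal texture.

def motion_stats(texture):
    """texture: 2-D list of motion magnitudes. Returns (amount, (cx, cy))."""
    total = sum(v for row in texture for v in row)
    if total == 0:
        return 0.0, (0.0, 0.0)
    cy = sum(r * v for r, row in enumerate(texture) for v in row) / total
    cx = sum(c * v for row in texture for c, v in enumerate(row)) / total
    return total, (cx, cy)

texture = [[0, 0, 0],
           [0, 4, 4],   # motion concentrated on the middle-right
           [0, 0, 0]]
amount, centroid = motion_stats(texture)
assert amount == 8 and centroid == (1.5, 1.0)
```

Tracking the centroid over successive temporal textures is one simple way to obtain the main motion trajectory the abstract refers to.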

Fast Video Detection Using Temporal Similarity Extraction of Successive Spatial Features (연속하는 공간적 특징의 시간적 유사성 검출을 이용한 고속 동영상 검색)

  • Cho, A-Young;Yang, Won-Keun;Cho, Ju-Hee;Lim, Ye-Eun;Jeong, Dong-Seok
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.11C
    • /
    • pp.929-939
    • /
    • 2010
  • The growth of multimedia technology drives the development of video detection for large-database management and illegal copy detection. To meet this demand, this paper proposes a fast video detection method applicable to a large database. The algorithm uses spatial features based on the gray-value distribution of frames and temporal features based on a temporal similarity map. We form a video signature from the extracted spatial and temporal features and carry out stepwise matching. Performance was evaluated in terms of accuracy, extraction and matching time, and signature size, using original videos and modified versions such as brightness change, lossy compression, and text/logo overlay. We describe empirical parameter selection and report experimental results for a simple matching method using only the spatial feature, comparing the results with existing algorithms. According to the experimental results, the proposed method performs well in accuracy, processing time, and signature size; the proposed fast detection algorithm is therefore suitable for video detection over a large database.
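The signature construction described above, spatial features per frame plus temporal features over successive frames, can be sketched minimally. The mean gray value stands in for the paper's gray-value-distribution feature, and a simple successive difference stands in for the temporal similarity map; both simplifications are ours.

```python
# Minimal sketch of a two-part video signature: per-frame spatial features
# and a temporal feature derived from their successive changes.

def spatial_feature(frame):
    """Mean gray value of a frame (stand-in for the distribution feature)."""
    n = sum(len(row) for row in frame)
    return sum(v for row in frame for v in row) / n

def temporal_feature(spatial_seq):
    """Change between successive spatial features (stand-in for the
    temporal similarity map)."""
    return [b - a for a, b in zip(spatial_seq, spatial_seq[1:])]

def signature(video):
    s = [spatial_feature(f) for f in video]
    return s, temporal_feature(s)

video = [[[10, 10], [10, 10]],
         [[20, 20], [20, 20]],
         [[20, 20], [20, 20]]]
spatial, temporal = signature(video)
assert spatial == [10.0, 20.0, 20.0]
assert temporal == [10.0, 0.0]
```

Stepwise matching would compare the cheap temporal part first and fall back to the spatial part only for surviving candidates, which is what makes the method fast on a large database.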

Robust Traffic Monitoring System by Spatio-Temporal Image Analysis (시공간 영상 분석에 의한 강건한 교통 모니터링 시스템)

  • 이대호;박영태
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.11
    • /
    • pp.1534-1542
    • /
    • 2004
  • A novel vision-based scheme for extracting real-time traffic information parameters is presented. The method is based on region classification followed by spatio-temporal image analysis. The detection-region images for each traffic lane are classified into one of three categories, road, vehicle, or shadow, using statistical and structural features. Misclassifications in a frame are corrected by using temporally correlated features of vehicles in the spatio-temporal image. Since only local images of the detection regions are processed, real-time operation at more than 30 frames per second is achieved without dedicated parallel processors, while ensuring detection performance robust to variations in weather conditions, shadows, and traffic load.
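The temporal correction of per-frame misclassifications can be illustrated with a simple majority filter over a detection region's label sequence. This is a simplification of the paper's spatio-temporal analysis; only the three category names come from the abstract.

```python
# Sketch: a per-frame label that disagrees with its temporal neighbours in
# the spatio-temporal image is replaced by the neighbourhood majority.

def smooth_labels(labels):
    """Majority-filter a sequence of region labels over a 3-frame window."""
    out = list(labels)
    for i in range(1, len(labels) - 1):
        trio = [labels[i - 1], labels[i], labels[i + 1]]
        out[i] = max(set(trio), key=trio.count)  # majority of the window
    return out

# A single-frame glitch ("vehicle" amid "road") is corrected:
assert smooth_labels(["road", "vehicle", "road", "road"]) == ["road", "road", "road", "road"]
```

A real implementation would weight the vote by classifier confidence; the point here is only that temporal correlation suppresses isolated misclassifications.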

Recognition experiment of Korean connected digit telephone speech using the temporal filter based on training speech data (훈련데이터 기반의 temporal filter를 적용한 한국어 4연숫자 전화음성의 인식실험)

  • Jung Sung Yun;Kim Min Sung;Son Jong Mok;Bae Keun Sung;Kang Jeom Ja
    • Proceedings of the KSPS conference
    • /
    • /
    • pp.149-152
    • /
    • 2003
  • In this paper, data-driven temporal filter methods [1] are investigated for robust feature extraction. A principal component analysis technique is applied to the time trajectories of the feature sequences of training speech data to obtain appropriate temporal filters. We performed recognition experiments with these data-driven temporal filters on the Korean connected-digit telephone speech database released by SITEC. Experimental results are discussed along with our findings.
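The filter derivation described above, PCA over the time trajectories of a feature, can be sketched with a small power iteration: the dominant principal component of the centered trajectories serves as the filter taps. The filter length, data, and the power-iteration shortcut are our simplifications, not the paper's setup.

```python
# Sketch: derive one data-driven temporal filter as the first principal
# component of feature time-trajectories, via power iteration.

def first_pc(X, iters=100):
    """Dominant eigenvector of the covariance of the rows of X
    (these become the temporal filter taps)."""
    d = len(X[0])
    mean = [sum(row[j] for row in X) / len(X) for j in range(d)]
    Xc = [[row[j] - mean[j] for j in range(d)] for row in X]
    v = [1.0] * d
    for _ in range(iters):
        w = [0.0] * d
        for row in Xc:                       # w = (Xc^T Xc) v
            proj = sum(a * b for a, b in zip(row, v))
            for j in range(d):
                w[j] += proj * row[j]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Trajectories that vary along the direction (1, 2):
X = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]
taps = first_pc(X)
assert abs(taps[1] / taps[0] - 2.0) < 1e-6   # taps align with (1, 2)
```

Filtering a feature's time trajectory with these taps emphasizes the temporal variation that dominates the training data, which is the intuition behind the data-driven filters.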

Feature Extraction and Fusion for land-Cover Discrimination with Multi-Temporal SAR Data (다중 시기 SAR 자료를 이용한 토지 피복 구분을 위한 특징 추출과 융합)

  • Park No-Wook;Lee Hoonyol;Chi Kwang-Hoon
    • Korean Journal of Remote Sensing
    • /
    • v.21 no.2
    • /
    • pp.145-162
    • /
    • 2005
  • To improve the accuracy of land-cover discrimination in SAR data classification, this paper presents a methodology that includes feature extraction and fusion steps for multi-temporal SAR data. Three features, namely the average backscattering coefficient, temporal variability, and coherence, are extracted from multi-temporal SAR data by considering the temporal behavior of the backscattering characteristics of SAR sensors. The Dempster-Shafer theory of evidence (D-S theory) and fuzzy logic are applied to effectively integrate those features. In particular, a feature-driven heuristic approach to mass function assignment in D-S theory is applied, and various fuzzy combination operators are tested in the fuzzy logic fusion. In experiments on a multi-temporal Radarsat-1 data set, the features considered in this paper provided complementary information and thus effectively discriminated water, paddy, and urban areas; however, it was difficult to discriminate forest and dry fields. From an information fusion methodological point of view, D-S theory and the fuzzy combination operators, except the fuzzy Max and Algebraic Sum operators, showed similar land-cover accuracy statistics.
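The D-S fusion step rests on Dempster's rule of combination, which the following minimal sketch implements for two mass functions over a shared frame of discernment. The hypotheses and mass values are illustrative, not the paper's feature-driven assignment.

```python
# Minimal Dempster's rule of combination: masses on intersecting hypothesis
# sets multiply and accumulate; mass lost to empty intersections (conflict)
# is renormalized away.

def combine(m1, m2):
    """Combine two mass functions keyed by frozensets of hypotheses."""
    combined, conflict = {}, 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + pa * pb
            else:
                conflict += pa * pb
    k = 1.0 - conflict                       # normalization constant
    return {h: v / k for h, v in combined.items()}

m1 = {frozenset({"water"}): 0.6, frozenset({"water", "paddy"}): 0.4}
m2 = {frozenset({"water"}): 0.5, frozenset({"paddy"}): 0.5}
fused = combine(m1, m2)
assert abs(fused[frozenset({"water"})] - 5 / 7) < 1e-9
assert abs(fused[frozenset({"paddy"})] - 2 / 7) < 1e-9
```

Evidence from the three SAR features (backscatter, temporal variability, coherence) would each contribute one such mass function before combination.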

Temporal Error Concealment Using Boundary Region Feature and Adaptive Block Matching (경계 영역 특성과 적응적 블록 정합을 이용한 시간적 오류 은닉)

  • Bae, Tae-Wuk;Kim, Seung-Jin;Kim, Tae-Su;Lee, Kun-Il
    • Proceedings of the KIEE Conference
    • /
    • /
    • pp.12-14
    • /
    • 2005
  • In this paper, we propose a temporal error concealment (EC) technique using the proposed boundary matching method and an adaptive block matching method. The proposed boundary matching method improves the spatial correlation of the macroblocks (MBs) by reusing the pixels of an already concealed MB to estimate the motion vector of an erroneous MB. The adaptive block matching method inspects the horizontal and vertical edge features of the surroundings of an erroneous MB and conceals the erroneous MBs with reference to the stronger edge feature. This improves video quality by strengthening the edge connection between the erroneous MBs and the neighboring MBs. In particular, we restore a lost MB in units of 8×16 or 16×8 blocks by using edge features from the surrounding MBs. Experimental results show that the proposed algorithm gives better results than conventional algorithms from both subjective and objective viewpoints.
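The boundary matching step can be sketched as scoring each candidate motion vector by how well its block's border pixels match the neighbouring, already decoded (or concealed) pixels. The toy 1-D borders and candidate set below are invented for illustration.

```python
# Sketch of boundary matching for error concealment: the candidate motion
# vector whose block border best matches the neighbouring pixels wins.

def boundary_cost(candidate_border, neighbor_border):
    """Sum of absolute differences along the shared boundary."""
    return sum(abs(a - b) for a, b in zip(candidate_border, neighbor_border))

neighbor = [10, 12, 14, 16]                   # pixels just outside the lost MB
candidates = {                                # border each candidate MV yields
    (0, 0): [30, 30, 30, 30],
    (1, 0): [10, 13, 14, 17],
}
best_mv = min(candidates, key=lambda mv: boundary_cost(candidates[mv], neighbor))
assert best_mv == (1, 0)                      # the smoother boundary wins
```

The paper's adaptive variant additionally checks whether the horizontal or the vertical edge around the lost MB is stronger and matches along that direction, so concealed blocks preserve edge continuity.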

New Temporal Features for Cardiac Disorder Classification by Heart Sound (심음 기반의 심장질환 분류를 위한 새로운 시간영역 특징)

  • Kwak, Chul;Kwon, Oh-Wook
    • The Journal of the Acoustical Society of Korea
    • /
    • v.29 no.2
    • /
    • pp.133-140
    • /
    • 2010
  • We improve the performance of cardiac disorder classification by adding new temporal features extracted from continuous heart sound signals. We add three kinds of novel temporal features to a conventional feature based on mel-frequency cepstral coefficients (MFCC): Heart sound envelope, murmur probabilities, and murmur amplitude variation. In cardiac disorder classification and detection experiments, we evaluate the contribution of the proposed features to classification accuracy and select proper temporal features using the sequential feature selection method. The selected features are shown to improve classification accuracy significantly and consistently for neural network-based pattern classifiers such as multi-layer perceptron (MLP), support vector machine (SVM), and extreme learning machine (ELM).
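Of the three temporal features, the heart sound envelope is the simplest to illustrate. The sketch below computes a short-time RMS envelope, which is one plausible reading of "envelope"; the window length and test signal are invented.

```python
# Sketch: short-time RMS envelope of a heart sound signal, one plausible
# form of the envelope feature described in the abstract.

def rms_envelope(signal, win=4):
    """Non-overlapping short-time RMS of a 1-D signal."""
    env = []
    for i in range(0, len(signal) - win + 1, win):
        frame = signal[i:i + win]
        env.append((sum(x * x for x in frame) / win) ** 0.5)
    return env

# Silence followed by a loud S1-like burst:
s1 = [0.0, 0.0, 0.0, 0.0, 2.0, -2.0, 2.0, -2.0]
assert rms_envelope(s1) == [0.0, 2.0]
```

Appending such temporal descriptors to the per-frame MFCC vector is the kind of feature augmentation the paper evaluates with sequential feature selection.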

Video Object Segmentation with Weakly Temporal Information

  • Zhang, Yikun;Yao, Rui;Jiang, Qingnan;Zhang, Changbin;Wang, Shi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.3
    • /
    • pp.1434-1449
    • /
    • 2019
  • Video object segmentation is a significant task in computer vision, but its performance is not yet satisfactory. A method of video object segmentation using weakly temporal information is presented in this paper. Motivated by the observation that the motion of an object is a continuous and smooth process and that the appearance of an object changes little between adjacent frames of a video sequence, we use a feed-forward architecture with motion estimation to predict the mask of the current frame. We extend the input with an additional mask channel for the previous frame's segmentation result: the mask of the previous frame, after processing, is treated as the input of the expanded channel, and then we extract the temporal feature of the object and fuse it with other feature maps to generate the final mask. In addition, we introduce multi-mask guidance to improve the stability of the model. Moreover, we enhance segmentation performance by further training with the masks already obtained. Experiments show that our method achieves competitive results on DAVIS-2016 for single-object segmentation compared to some state-of-the-art algorithms.
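The extended-input idea is simple enough to show directly: the previous frame's mask is appended as an extra channel to the current frame before it enters the network, giving a 4-channel input for an RGB frame. The data layout below is our illustration, not the authors' pipeline.

```python
# Sketch: append the previous frame's mask as an extra input channel
# (channels-first layout; each channel is an HxW list of rows).

def add_mask_channel(frame_rgb, prev_mask):
    """frame_rgb: list of 3 HxW channels; prev_mask: one HxW channel."""
    return frame_rgb + [prev_mask]

frame = [[[1]], [[2]], [[3]]]   # toy 1x1 RGB frame
mask = [[1]]                    # previous frame's 1x1 mask
x = add_mask_channel(frame, mask)
assert len(x) == 4 and x[3] == [[1]]
```

Because the mask channel carries where the object was one frame ago, the network only has to learn the small frame-to-frame change, which is the "weakly temporal" information the title refers to.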