• Title/Summary/Keyword: consecutive video sequences

Search Result 19, Processing Time 0.029 seconds

Spatiotemporal Removal of Text in Image Sequences (비디오 영상에서 시공간적 문자영역 제거방법)

  • Lee, Chang-Woo;Kang, Hyun;Jung, Kee-Chul;Kim, Hang-Joon
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.2
    • /
    • pp.113-130
    • /
    • 2004
  • Most multimedia data contain text to emphasize the meaning of the data, to present additional explanations about the situation, or to translate different languages. But, the left makes it difficult to reuse the images, and distorts not only the original images but also their meanings. Accordingly, this paper proposes a support vector machines (SVMs) and spatiotemporal restoration-based approach for automatic text detection and removal in video sequences. Given two consecutive frames, first, text regions in the current frame are detected by an SVM-based texture classifier Second, two stages are performed for the restoration of the regions occluded by the detected text regions: temporal restoration in consecutive frames and spatial restoration in the current frame. Utilizing text motion and background difference, an input video sequence is classified and a different temporal restoration scheme is applied to the sequence. Such a combination of temporal restoration and spatial restoration shows great potential for automatic detection and removal of objects of interest in various kinds of video sequences, and is applicable to many applications such as translation of captions and replacement of indirect advertisements in videos.

Fast Video Fire Detection Using Luminous Smoke and Textured Flame Features

  • Ince, Ibrahim Furkan;Yildirim, Mustafa Eren;Salman, Yucel Batu;Ince, Omer Faruk;Lee, Geun-Hoo;Park, Jang-Sik
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.12
    • /
    • pp.5485-5506
    • /
    • 2016
  • In this article, a video based fire detection framework for CCTV surveillancesystems is presented. Two novel features and a novel image type with their corresponding algorithmsareproposed for this purpose. One is for the slow-smoke detection and another one is for fast-smoke/flame detection. The basic idea is slow-smoke has a highly varying chrominance/luminance texture in long periods and fast-smoke/flame has a highly varying texture waiting at the same location for long consecutive periods. Experiments with a large number of smoke/flame and non-smoke/flame video sequences outputs promising results in terms of algorithmic accuracy and speed.

Multiple Objection and Tracking based on Morphological Region Merging from Real-time Video Sequences (실시간 비디오 시퀀스로부터 형태학적 영역 병합에 기반 한 다중 객체 검출 및 추적)

  • Park Jong-Hyun;Baek Seung-Cheol;Toan Nguyen Dinh;Lee Guee-Sang
    • The Journal of the Korea Contents Association
    • /
    • v.7 no.2
    • /
    • pp.40-50
    • /
    • 2007
  • In this paper, we propose an efficient method for detecting and tracking multiple moving objects based on morphological region merging from real-time video sequences. The proposed approach consists of adaptive threshold extraction, morphological region merging and detecting and tracking of objects. Firstly, input frame is separated into moving regions and static regions using the difference of images between two consecutive frames. Secondly, objects are segmented with a reference background image and adaptive threshold values, then, the segmentation result is refined by morphological region merge algorithm. Lastly, each object segmented in a previous step is assigned a consistent identification over time, based on its spatio-temporal information. The experimental results show that a proposed method is efficient and useful in terms of real-time multiple objects detecting and tracking.

Corresponding Points Tracking of Aerial Sequence Images

  • Ochirbat, Sukhee;Shin, Sung-Woong;Yoo, Hwan-Hee
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.16 no.4
    • /
    • pp.11-16
    • /
    • 2008
  • The goal of this study is to evaluate the KLT(Kanade-Lucas-Tomasi) for extracting and tracking the features using various data acquired from UAV. Sequences of images were collected for Jangsu-Gun area to perform the analysis. Four data sets were subjected to extract and track the features using the parameters of the KLT. From the results of the experiment, more than 90 percent of the features extracted from the first frame could successfully track through the next frame when the shift between frames is small. But when the frame to frame motion is large in non-consecutive frames, KLT tracker is failed to track the corresponding points. Future research will be focused on feature tracking of sequence frames with large shift and rotation.


Shot Boundary Detection of Video Data Based on Fuzzy Inference (퍼지 추론에 의한 비디오 데이터의 샷 경계 추출)

  • Jang, Seok-Woo
    • The KIPS Transactions:PartB
    • /
    • v.10B no.6
    • /
    • pp.611-618
    • /
    • 2003
  • In this paper, we describe a fuzzy inference approach for detecting and classifying shot transitions in video sequences. Our approach extends FAM (Fuzzy Associative Memory) to detect and classify shot transitions, including cuts, fades, and dissolves. We consider a set of feature values that characterize the differences between two consecutive frames as input fuzzy sets, and the types of shot transitions as output fuzzy sets. The proposed inference system is mainly composed of a learning phase and an inferring phase. In the learning phase, the system initializes its basic structure by determining fuzzy membership functions and constructs fuzzy rules. In the inferring phase, the system conducts actual inference using the constructed fuzzy rules. To verify the performance of the proposed shot transition detection method, experiments were carried out with a video database that includes news, movies, advertisements, documentaries, and music videos.
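
The core idea, mapping an inter-frame difference feature through fuzzy membership functions to a transition label, can be sketched with triangular memberships. The fuzzy sets and their parameters below are purely illustrative; the paper uses FAM rules over several features.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def shot_transition(diff):
    """Toy fuzzy inference on one frame-difference feature in [0, 1]:
    return the transition class with the highest membership grade."""
    grades = {
        "none": tri(diff, -0.4, 0.0, 0.4),
        "gradual": tri(diff, 0.2, 0.5, 0.8),
        "cut": tri(diff, 0.6, 1.0, 1.4),
    }
    return max(grades, key=grades.get)
```

Small differences fall under "none", mid-range differences (typical of fades and dissolves) under "gradual", and near-total frame changes under "cut".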

An Adaptive Block Matching Algorithm based on Temporal Correlations

  • Yoon, Hyo-Sun;Son, Nam-Rye;Lee, Guee-Sang;Kim, Soo-Hyung
    • Proceedings of the IEEK Conference
    • /
    • 2002.07a
    • /
    • pp.188-191
    • /
    • 2002
  • To reduce the bit-rate of video sequences by removing temporal redundancy, motion estimation techniques have been developed. However, the high computational complexity of the problem makes such techniques very difficult to apply to high-resolution applications in a real-time environment. For this reason, motion estimation algorithms with low computational complexity are viable solutions. If a priori knowledge about the motion of the current block is available before the motion estimation, a better starting point for the search of an optimal motion vector can be selected, and the computational complexity can also be reduced. In this paper, we present an adaptive block matching algorithm based on the temporal correlations of consecutive image frames, which adaptively defines the search pattern and the location of the initial starting point to reduce computational complexity. Experiments show that, compared with the DS (Diamond Search) algorithm, the proposed algorithm is about 0.1∼0.5 dB better in terms of PSNR and improves the average number of search points per motion estimation by as much as 50%.
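
The "better starting point" idea can be sketched as a search seeded at the motion vector predicted from the temporally co-located block, then refined locally until it converges. The names and the fixed square refinement window are assumptions; the paper additionally adapts the search pattern itself.

```python
def adaptive_search(cost, predicted, radius=1):
    """Block-matching search seeded at `predicted` (the co-located block's
    motion vector from the previous frame). `cost(mv)` returns the matching
    error (e.g. SAD) of candidate vector mv; the window re-centres on the
    best candidate until the centre itself is the minimum."""
    centre = predicted
    while True:
        cy, cx = centre
        candidates = [(cy + dy, cx + dx)
                      for dy in range(-radius, radius + 1)
                      for dx in range(-radius, radius + 1)]
        best = min(candidates, key=cost)
        if best == centre:
            return best
        centre = best
```

When the prediction is close to the true vector, the loop terminates after one or two refinements, which is where the reported reduction in search points comes from.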


Feature-based Object Tracking using an Active Camera (능동카메라를 이용한 특징기반의 물체추적)

  • 정영기;호요성
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.8 no.3
    • /
    • pp.694-701
    • /
    • 2004
  • In this paper, we propose a feature-based tracking system that traces moving objects with a pan-tilt camera after separating the global motion of the active camera from the local motion of the moving objects. The tracking system traces only the local motion of the corner features in the foreground objects by finding the block motions between two consecutive frames using block-based motion estimation and eliminating the global motion from the block motions. For robust estimation of the camera motion using only the background motion, we suggest a dominant motion extraction scheme that classifies the background motions among the block motions. We also propose an efficient clustering algorithm based on the attributes of the motion trajectories of corner features to remove the motions of noise objects from the separated local motion. The proposed tracking system has demonstrated good performance on several test video sequences.
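
The global/local separation can be approximated very simply: when background blocks are the majority, the component-wise median of all block motion vectors is a robust estimate of the camera-induced motion, and subtracting it leaves the local object motion. This is a crude stand-in for the paper's dominant motion extraction, with illustrative names.

```python
import statistics

def split_motion(block_vectors):
    """block_vectors: list of (vy, vx) block motion vectors.
    Returns the estimated global (camera) motion as the component-wise
    median, plus each block's residual local motion."""
    gy = statistics.median(v[0] for v in block_vectors)
    gx = statistics.median(v[1] for v in block_vectors)
    local = [(vy - gy, vx - gx) for vy, vx in block_vectors]
    return (gy, gx), local
```

A minority of foreground blocks barely shifts the median, so their residuals stand out as local motion while background residuals stay near zero.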

Unsupervised Segmentation of Objects using Genetic Algorithms (유전자 알고리즘 기반의 비지도 객체 분할 방법)

  • 김은이;박세현
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.4
    • /
    • pp.9-21
    • /
    • 2004
  • The current paper proposes a genetic algorithm (GA)-based segmentation method that can automatically extract and track moving objects. The proposed method mainly consists of spatial and temporal segmentation: the spatial segmentation divides each frame into regions with accurate boundaries, and the temporal segmentation divides each frame into background and foreground areas. The spatial segmentation is performed using chromosomes that evolve through distributed genetic algorithms (DGAs). However, unlike standard DGAs, the chromosomes are initialized from the segmentation result of the previous frame, and only the unstable chromosomes corresponding to actual moving object parts are evolved by mating operators. For the temporal segmentation, adaptive thresholding is performed based on the intensity difference between two consecutive frames. The spatial and temporal segmentation results are then combined for object extraction, and tracking is performed using the natural correspondence established by the proposed spatial segmentation method. The main advantages of the proposed method are twofold: first, the proposed video segmentation method does not require any a priori information; second, the proposed GA-based segmentation method enhances search efficiency and incorporates a tracking algorithm within its own architecture. These advantages were confirmed by experiments in which the proposed method was successfully applied to well-known and natural video sequences.

Analysis of Camera Rotation Using Three Symmetric Motion Vectors in Video Sequence (동영상에서의 세 대칭적 움직임벡터를 이용한 카메라 회전각 분석)

  • 문성헌;박영민;윤영우
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.3 no.2
    • /
    • pp.7-14
    • /
    • 2002
  • This paper proposes a camera motion estimation technique that uses special relations among the motion vectors of geometrically symmetrical triple points in two consecutive views from a single camera. The proposed technique uses camera-induced motion vectors and their relations rather than feature points and epipolar constraints. In contrast to the time-consuming iterations or numerical methods required to calculate the E-matrix or F-matrix induced by epipolar constraints, the proposed technique calculates camera motion parameters such as panning, tilting, rolling, and zooming at once by applying the proposed linear equation sets to the motion vectors. Moreover, by means of devised background discriminants, it effectively reflects only the background region in the calculation of the motion parameters, making the calculation more accurate and fast enough to meet MPEG-4 requirements. Experimental results on various types of sequences show the validity and broad applicability of the proposed technique.
