• Title/Abstract/Keywords: Video Content Indexing

Search results: 75 items

Video Content Indexing using Kullback-Leibler Distance

  • Kim, Sang-Hyun
    • International Journal of Contents / Vol. 5, No. 4 / pp. 51-54 / 2009
  • In huge video databases, an effective video content indexing method is required. While manual indexing is the most effective approach to this goal, it is slow and expensive. Thus automatic indexing is desirable, and various indexing tools for video databases have recently been developed. For efficient video content indexing, the similarity measure is an important factor. This paper presents new similarity measures between frames and proposes a new algorithm to index video content using the Kullback-Leibler distance defined between two histograms. Experimental results show that the proposed algorithm using the Kullback-Leibler distance gives remarkably high accuracy ratios compared with several conventional video content indexing algorithms.
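
The measure at the heart of this abstract is a Kullback-Leibler distance between two frame histograms. A minimal sketch in Python; the symmetrized form, the normalization, and the fixed indexing threshold are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def kl_distance(hist_p, hist_q, eps=1e-10):
    """Symmetrized Kullback-Leibler distance between two normalized histograms."""
    p = np.asarray(hist_p, dtype=float) + eps
    q = np.asarray(hist_q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    # D(p||q) + D(q||p): symmetrized so the measure behaves like a distance
    return np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p))

def index_video(frame_histograms, threshold=0.5):
    """Mark frames whose histogram diverges from the last indexed frame by more
    than the threshold as new index points (e.g., content changes)."""
    index_points = [0]
    for i in range(1, len(frame_histograms)):
        if kl_distance(frame_histograms[index_points[-1]], frame_histograms[i]) > threshold:
            index_points.append(i)
    return index_points
```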

An Optimized e-Lecture Video Search and Indexing framework

  • Medida, Lakshmi Haritha;Ramani, Kasarapu
    • International Journal of Computer Science & Network Security / Vol. 21, No. 8 / pp. 87-96 / 2021
  • The demand for e-learning through video lectures is rapidly increasing due to its diverse advantages over traditional learning methods. This has led to massive volumes of web-based lecture videos, so indexing and retrieving a lecture video or a lecture video topic has proved to be an exceptionally challenging problem. Many techniques in the literature are either visual- or audio-based, but not both. Since the visual and audio components are equally important for content-based indexing and retrieval, the current work focuses on both. A framework for automatic topic-based indexing and search based on the innate content of lecture videos is presented. The text on the slides is extracted using the proposed Merged Bounding Box (MBB) text detector, and the audio text is extracted using Google Speech Recognition (GSR) technology. This hybrid approach generates the indexing keywords from the merged transcripts of the video and audio extractors. Search within the indexed documents is optimized with Naïve Bayes (NB) classification and K-Means clustering models: the optimized search retrieves results by searching only the relevant document cluster in the predefined categories rather than the whole lecture video corpus. The work is carried out on a dataset generated by assigning categories to lecture video transcripts gathered from e-learning portals. Search performance is assessed in terms of accuracy and time taken, and the improved accuracy of the proposed indexing technique is compared with the accepted chain indexing technique.
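
A minimal sketch of the cluster-restricted search idea described above, assuming scikit-learn; the TF-IDF features, the cluster count, and cosine ranking are illustrative assumptions rather than the paper's exact pipeline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

def build_index(transcripts, n_clusters=10):
    """Vectorize merged lecture transcripts and partition them into clusters."""
    vectorizer = TfidfVectorizer(stop_words="english")
    doc_vectors = vectorizer.fit_transform(transcripts)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(doc_vectors)
    return vectorizer, doc_vectors, km

def search(query, vectorizer, doc_vectors, km, top_k=5):
    """Route the query to one cluster and rank only that cluster's documents."""
    q = vectorizer.transform([query])
    cluster = km.predict(q)[0]                      # relevant document cluster
    members = (km.labels_ == cluster).nonzero()[0]  # search only its members
    scores = cosine_similarity(q, doc_vectors[members]).ravel()
    return [(int(members[i]), float(scores[i]))
            for i in scores.argsort()[::-1][:top_k]]
```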

An Efficient Video Retrieval Algorithm Using Key Frame Matching for Video Content Management

  • Kim, Sang Hyun
    • International Journal of Contents / Vol. 12, No. 1 / pp. 1-5 / 2016
  • To manage large video collections, effective video indexing and retrieval are required. A large number of video indexing and retrieval algorithms have been presented for frame-wise user queries or video content queries, whereas relatively few video sequence matching algorithms have been proposed for video sequence queries. In this paper, we propose an efficient algorithm that extracts key frames using color histograms and matches the video sequences using edge features. To match video sequences effectively with a low computational load, we use the key frames extracted by the cumulative measure and the distance between key frames, and compare two sets of key frames using the modified Hausdorff distance. Experimental results with real sequences show that the proposed video sequence matching algorithm using edge features yields higher accuracy and better performance than conventional methods such as histogram difference, Euclidean metric, Bhattacharyya distance, and directed divergence.
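
The two key-frame sets are compared with a modified Hausdorff distance. A minimal sketch over generic per-key-frame feature vectors, assuming the Dubuisson-Jain style of averaging nearest-neighbour distances; the paper's exact variant and its edge features may differ:

```python
import numpy as np

def modified_hausdorff(set_a, set_b):
    """Modified Hausdorff distance between two sets of key-frame feature vectors."""
    a = np.asarray(set_a, dtype=float)
    b = np.asarray(set_b, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise distances
    d_ab = d.min(axis=1).mean()  # mean distance from each a-feature to its nearest b-feature
    d_ba = d.min(axis=0).mean()  # and vice versa
    return max(d_ab, d_ba)
```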

내용기반 비디오 색인 및 검색을 위한 음성인식기술 이용에 관한 연구 (A Study on the Use of Speech Recognition Technology for Content-based Video Indexing and Retrieval)

  • 손종목;배건성;강경옥;김재곤
    • 한국음향학회지 / Vol. 20, No. 2 / pp. 16-20 / 2001
  • In video program indexing and retrieval, dividing a video program into meaningful parts, i.e., content-based video program segmentation, is important. This paper proposes a new method that uses speech recognition technology for content-based video program segmentation. The proposed method applies speech recognition to precisely synchronize the speech signal with the closed captions. Experiments confirmed the feasibility of the proposed method for content-based video program segmentation.


Application of Speech Recognition with Closed Caption for Content-Based Video Segmentations

  • Son, Jong-Mok;Bae, Keun-Sung
    • 음성과학 / Vol. 12, No. 1 / pp. 135-142 / 2005
  • An important aspect of video indexing is the ability to segment video into meaningful segments, i.e., content-based video segmentation. Since the audio signal in the sound track is synchronized with the image sequence in a video program, the speech signal in the sound track can be used to segment the video into meaningful segments. In this paper, we propose a new approach to content-based video segmentation that uses the closed caption to construct a recognition network for speech recognition; accurate time information for video segmentation is then obtained from the speech recognition process. In video segmentation experiments on TV news programs, we successfully produced 56 video summaries from 57 TV news stories, which demonstrates that the proposed scheme is very promising for content-based video segmentation.
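
A minimal sketch of how caption-derived story boundaries can be mapped onto the video timeline once a recognizer supplies word-level timestamps; the recognizer output format and the first-word matching strategy are assumptions for illustration only (the paper instead builds the recognition network directly from the closed captions):

```python
def segment_times(recognized, caption_stories):
    """recognized: list of (word, start_sec) pairs from speech recognition over the sound track.
    caption_stories: list of word lists, one per closed-caption story.
    Returns the start time of each story on the video timeline."""
    boundaries, cursor = [], 0
    for story in caption_stories:
        first_word = story[0]
        # advance to the next occurrence of the story's opening word in the recognition stream
        while cursor < len(recognized) and recognized[cursor][0] != first_word:
            cursor += 1
        if cursor < len(recognized):
            boundaries.append(recognized[cursor][1])  # segment start time in seconds
    return boundaries
```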


MPEG-7 표준에 따른 내용기반 비디오 검색 시스템 (Content-based Video Indexing and Retrieval System using MPEG-7 Standard)

  • 김형준;김회율
    • 방송공학회논문지 / Vol. 9, No. 2 / pp. 151-163 / 2004
  • This paper proposes a content-based video retrieval system conforming to the MPEG-7 standard for efficient video search and management. The proposed system consists of an indexing module for building the video database and a web-based retrieval module that supports various query methods. The video indexing module stores metadata on the server, such as keywords entered by an administrator, cast information automatically extracted by the indexing module, and MPEG-7 visual descriptors. End users access the retrieval module through the web and can search for the desired video through various query methods such as keyword, face-example, and sketch queries. To build this retrieval system, the paper proposes ATC (Adaptive Twin Comparison) as a shot-change detection method for efficient video indexing and QBME (Query By Modified Example) as an improved content-based query method for user convenience. Experiments showed that the proposed shot-change detection method outperforms existing methods and that the proposed query method offers users more convenient retrieval than the existing QBE (Query By Example) and QBS (Query By Sketch) methods.
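
ATC builds on the classic twin-comparison approach to shot-change detection. A minimal sketch of the underlying twin-comparison logic over frame-to-frame histogram differences; the adaptive threshold selection that distinguishes ATC is omitted here, and the two thresholds are assumed to be given:

```python
def twin_comparison(hist_diffs, t_high, t_low):
    """Twin-comparison shot-boundary detection: a single difference above t_high
    is a cut; a run of differences above t_low whose accumulated sum exceeds
    t_high is a gradual transition."""
    boundaries, acc, start = [], 0.0, None
    for i, d in enumerate(hist_diffs):
        if d >= t_high:
            boundaries.append(("cut", i))
            acc, start = 0.0, None
        elif d >= t_low:
            if start is None:
                start = i
            acc += d
            if acc >= t_high:
                boundaries.append(("gradual", start))
                acc, start = 0.0, None
        else:
            acc, start = 0.0, None
    return boundaries
```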

고차원 벡터 데이터 색인을 위한 시그니쳐-기반 Hybrid Spill-Tree의 설계 및 성능평가 (Design and Performance Analysis of Signature-Based Hybrid Spill-Tree for Indexing High Dimensional Vector Data)

  • 이현조;홍승태;나소라;장유진;장재우;심춘보
    • 인터넷정보학회논문지 / Vol. 10, No. 6 / pp. 173-189 / 2009
  • Interest in video data, centered on user-created content (UCC), has been growing recently, which calls for an efficient indexing scheme that supports content-based retrieval of video data. However, most indexing schemes other than the Hybrid Spill-Tree are inefficient for large volumes of high-dimensional data. This paper proposes an efficient high-dimensional indexing scheme to support content-based retrieval of video data. The proposed scheme extends the existing Hybrid Spill-Tree by combining a newly proposed clustering method with a signature-based data storage method. We also show that the proposed signature-based high-dimensional indexing scheme outperforms the existing M-Tree and Hybrid Spill-Tree.
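
A minimal sketch of how a signature can act as a cheap filter before exact high-dimensional comparisons; the per-dimension thresholding and Hamming-distance filtering shown here are illustrative assumptions, and the paper's scheme combines its signatures with a Hybrid Spill-Tree rather than a flat scan:

```python
import numpy as np

def make_signature(vec, thresholds):
    """Binary signature: one bit per dimension, set when the component exceeds
    that dimension's threshold (e.g., the training-set mean)."""
    return np.packbits((np.asarray(vec) > thresholds).astype(np.uint8))

def candidate_filter(query_sig, signatures, max_hamming):
    """Keep only the vectors whose signature lies within max_hamming bits of the
    query signature; exact distances are computed afterwards on this small set."""
    hamming = [np.unpackbits(query_sig ^ s).sum() for s in signatures]
    return [i for i, h in enumerate(hamming) if h <= max_hamming]
```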


압축영역에서 객체 움직임 맵에 의한 효율적인 비디오 인덱싱 방법에 관한 연구 (An Efficient Video Indexing Method Using Object Motion Map in Compressed Domain)

  • 김소연;노용만
    • 한국정보처리학회논문지 / Vol. 7, No. 5 / pp. 1570-1578 / 2000
  • Object motion is an important feature of content in video sequences. Various methods to extract features of object motion have been reported [1,2]. However, they are not well suited to indexing video by motion, since the indexing requires a large number of bits and complex indexing parameters [3,4]. In this paper, we propose an object motion map that provides an efficient way to index object motion. The proposed object motion map captures both global and local motion information while an object is moving, and it requires only a small amount of memory for the indexing. To evaluate the performance of the proposed indexing technique, experiments are performed on a video database consisting of MPEG-1 video sequences from the MPEG-7 test set.
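
A minimal sketch of one way an object motion map could be accumulated from compressed-domain macroblock motion vectors; the coarse-grid layout and the global/local split below are assumptions for illustration, not the paper's exact definition:

```python
import numpy as np

def object_motion_map(mv_fields, object_masks, grid=(8, 8)):
    """Accumulate which cells of a coarse grid an object visits over the sequence
    (its global trajectory) and the average motion inside the object (local motion).
    mv_fields: list of HxWx2 macroblock motion-vector arrays from the compressed stream.
    object_masks: list of HxW boolean masks marking the object's macroblocks."""
    gh, gw = grid
    visit_map = np.zeros((gh, gw), dtype=np.uint8)
    local_motion = np.zeros(2)
    frames_used = 0
    for mv, mask in zip(mv_fields, object_masks):
        rows, cols = np.nonzero(mask)
        if rows.size == 0:
            continue
        # map macroblock coordinates onto the coarse grid and mark visited cells
        visit_map[rows * gh // mask.shape[0], cols * gw // mask.shape[1]] = 1
        local_motion += mv[mask].mean(axis=0)  # average motion vector inside the object
        frames_used += 1
    return visit_map, local_motion / max(frames_used, 1)
```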


Object segmentation and object-based surveillance video indexing

  • Kim, Jin-Woong;Kim, Mun-Churl;Lee, Kyu-Won;Kim, Jae-Gon;Ahn, Chie-Teuk
    • 한국방송∙미디어공학회:학술대회논문집 / 1999 KOBA Broadcasting Technology Workshop / pp. 165.1-170 / 1999
  • Object segmentation from natural video scenes has recently become a very active research topic due to the object-based video coding standard MPEG-4. Object detection and isolation is also useful for object-based indexing and search of video content, which is a goal of the emerging standard MPEG-7. In this paper, an automatic segmentation method for moving objects in image sequences is presented that is applicable to multimedia content authoring for MPEG-4, and two different segmentation approaches suitable for surveillance applications are addressed, one in the raw-data domain and one in the compressed-bitstream domain. We also propose an object-based video description scheme built on object segmentation for video indexing purposes.
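
A minimal sketch of raw-data-domain moving-object segmentation using running-average background subtraction; this is a generic baseline offered for illustration, not the paper's specific raw-domain or compressed-domain method:

```python
import numpy as np

def segment_moving_objects(frames, alpha=0.05, diff_thresh=25):
    """Running-average background subtraction over grayscale frames: pixels that
    differ from the background model by more than diff_thresh are marked as
    moving-object pixels, and the model is updated slowly with rate alpha."""
    background = frames[0].astype(float)
    masks = []
    for frame in frames[1:]:
        f = frame.astype(float)
        mask = np.abs(f - background) > diff_thresh        # candidate object pixels
        background = (1 - alpha) * background + alpha * f  # slow background update
        masks.append(mask)
    return masks
```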

Automatic Name Line Detection for Person Indexing Based on Overlay Text

  • Lee, Sanghee;Ahn, Jungil;Jo, Kanghyun
    • Journal of Multimedia Information System / Vol. 2, No. 1 / pp. 163-170 / 2015
  • Many overlay texts are artificially superimposed on broadcast videos by humans. These texts provide additional information about the audiovisual content. In particular, the overlay text in news videos gives a concise and direct description of the content, and is therefore the most reliable clue for constructing a news video indexing system. To enable automatic person indexing of interview videos in TV news programs, this paper proposes a method that detects only the name text line among all the overlay texts in a frame. Experimental results on Korean television news videos show that the proposed framework efficiently detects the overlaid name text line.
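
A minimal sketch of selecting the name line from a set of detected overlay text boxes using position and size heuristics; the box format and heuristic weights are assumptions for illustration, not the paper's detection rules:

```python
def pick_name_line(text_boxes, frame_h, frame_w):
    """Pick the most likely name text line from detected overlay text boxes.
    Each box is (x, y, w, h, text); favors short lines in the lower third of the frame."""
    def score(box):
        x, y, w, h, _ = box
        in_lower_third = y > 2 * frame_h / 3
        short_line = w < frame_w / 3
        return (2 if in_lower_third else 0) + (1 if short_line else 0)
    candidates = [b for b in text_boxes if score(b) > 0]
    return max(candidates, key=score) if candidates else None
```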