• Title/Summary/Keyword: Video extraction

An Efficient Video Retrieval Algorithm Using Luminance Projection

  • Kim, Sang-Hyun
    • Journal of the Korean Data and Information Science Society / v.15 no.4 / pp.891-898 / 2004
  • Effective video indexing is required to manipulate large video databases. Most video indexing algorithms commonly use histograms, edges, or motion features. In this paper, we propose an efficient algorithm that uses the luminance projection for video retrieval. To index video sequences effectively and to reduce the computational complexity, we use key frames extracted by a cumulative measure and compare the sets of key frames using the modified Hausdorff distance. Experimental results show that the proposed video indexing and retrieval algorithm yields higher accuracy and better performance than conventional algorithms.
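The modified Hausdorff comparison of key-frame signatures described above can be sketched as follows. This is only an illustration, not the authors' code: the function names and the toy 4x4 frames are invented here, and the paper's cumulative key-frame selection step is omitted.

```python
import numpy as np

def luminance_projection(frame):
    """Row and column sums of the luminance channel, concatenated
    into a single 1-D signature for the frame."""
    return np.concatenate([frame.sum(axis=0), frame.sum(axis=1)]).astype(float)

def modified_hausdorff(set_a, set_b):
    """Modified Hausdorff distance (Dubuisson & Jain) between two sets of
    feature vectors: the max of the two mean directed distances."""
    def directed(xs, ys):
        return np.mean([min(np.linalg.norm(x - y) for y in ys) for x in xs])
    return max(directed(set_a, set_b), directed(set_b, set_a))

# toy example: two key-frame sets from 4x4 grayscale "frames"
frames_a = [np.ones((4, 4)) * v for v in (10, 200)]
frames_b = [np.ones((4, 4)) * v for v in (10, 200)]
sig_a = [luminance_projection(f) for f in frames_a]
sig_b = [luminance_projection(f) for f in frames_b]
print(modified_hausdorff(sig_a, sig_b))  # 0.0 for identical key-frame sets
```

Comparing projection signatures rather than full frames is what keeps the retrieval cost low: each frame is reduced to a short vector before any distance is computed.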

Preprocessing System for Real-time and High Compression MPEG-4 Video Coding (실시간 고압축 MPEG-4 비디오 코딩을 위한 전처리 시스템)

  • 김준기;홍성수;이호석
    • Journal of KIISE: Computing Practices and Letters / v.9 no.5 / pp.509-520 / 2003
  • In this paper, we develop a new and robust algorithm for practical and very efficient MPEG-4 video coding. The MPEG-4 video group has developed the video Verification Model (VM), which has evolved over time through core experiments. In the standardization process, MS-FDAM was developed as a reference MPEG-4 coding system based on the ISO/IEC 14496-2 standard document and the VM. However, MS-FDAM has drawbacks for practical MPEG-4 coding and lacks VOP extraction functionality. In this research, we implemented a preprocessing system with real-time input and VOP extraction for practical content-based MPEG-4 video coding, and also implemented motion detection to achieve a high compression rate of 180:1.
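The motion-detection idea that lets an encoder skip static input and reach high compression ratios can be illustrated with simple frame differencing. This is only a sketch, not the paper's preprocessing system; `pixel_thresh` and `ratio_thresh` are invented parameters.

```python
import numpy as np

def has_motion(prev, curr, pixel_thresh=25, ratio_thresh=0.01):
    """Simple motion detection by frame differencing: report motion only
    when enough pixels changed by more than pixel_thresh. An encoder can
    skip (or heavily compress) frames where this returns False."""
    changed = np.abs(curr.astype(int) - prev.astype(int)) > pixel_thresh
    return changed.mean() > ratio_thresh

prev = np.zeros((16, 16), dtype=np.uint8)
curr = prev.copy()
curr[0:4, 0:4] = 200  # a small moving object enters the frame
print(has_motion(prev, curr))  # True
print(has_motion(prev, prev))  # False
```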

Keyframe Extraction from Home Videos Using 5W and 1H Information (육하원칙 정보에 기반한 홈비디오 키프레임 추출)

  • Jang, Cheolhun;Cho, Sunghyun;Lee, Seungyong
    • Journal of the Korea Computer Graphics Society / v.19 no.2 / pp.9-18 / 2013
  • We propose a novel method to extract keyframes from home videos based on 5W and 1H information. Keyframe extraction is a kind of video summarization that selects only the specific frames containing the important information of a video. As a home video may contain a variety of topics, we cannot make specific assumptions for information extraction. In addition, to summarize a home video we must analyze human behaviors, because people are the important subjects in home videos. In this paper, we extract 5W and 1H information by analyzing human faces, human behaviors, and the global information of the background. Experimental results demonstrate that our technique extracts keyframes more similar to human selections than previous methods.
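The paper's 5W-1H analysis relies on face and behavior detectors, which are beyond a short sketch; as a generic illustration of the underlying keyframe-selection idea (keep a frame whenever accumulated change since the last keyframe exceeds a budget), one might write:

```python
import numpy as np

def select_keyframes(frames, threshold):
    """Generic keyframe selection: keep a frame whenever the accumulated
    histogram difference since the last keyframe exceeds `threshold`.
    Both the histogram feature and the threshold are illustrative choices."""
    keyframes = [0]
    acc = 0.0
    prev_hist = np.histogram(frames[0], bins=16, range=(0, 256))[0]
    for i in range(1, len(frames)):
        hist = np.histogram(frames[i], bins=16, range=(0, 256))[0]
        acc += np.abs(hist - prev_hist).sum()
        prev_hist = hist
        if acc >= threshold:
            keyframes.append(i)
            acc = 0.0
    return keyframes

# toy sequence: a flat scene, then a sudden change at frame 3
frames = [np.full((8, 8), 20)] * 3 + [np.full((8, 8), 230)] * 3
print(select_keyframes(frames, threshold=64))  # [0, 3]
```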

Online Video Synopsis via Multiple Object Detection

  • Lee, JaeWon;Kim, DoHyeon;Kim, Yoon
    • Journal of the Korea Society of Computer and Information / v.24 no.8 / pp.19-28 / 2019
  • In this paper, an online video summarization algorithm based on multiple object detection is proposed. As crime has risen with recent rapid urbanization, the public's demand for safety has grown, and the installation of surveillance cameras such as closed-circuit television (CCTV) has increased in many cities. However, retrieving and analyzing the huge amount of video data from numerous CCTVs takes a great deal of time and labor. As a result, there is an increasing demand for intelligent video recognition systems that can automatically detect and summarize various events occurring on CCTV. Video summarization generates a synopsis of a long original video so that users can watch it in a short time. The proposed method can be divided into two stages. The object extraction step detects objects in the video and extracts the specific objects desired by the user. The video summary step then creates the final synopsis video based on the extracted objects. While existing methods do not consider the interaction between objects in the original video when generating the synopsis, the proposed method's new object clustering algorithm effectively preserves those interactions in the synopsis video. This paper also proposes an online optimization method that can efficiently summarize the large number of objects appearing in long videos. Finally, experimental results show that the proposed method outperforms existing video synopsis algorithms.
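The core of synopsis generation, shifting object "tubes" in time so they pack into a shorter video without colliding, can be sketched greedily. This is not the paper's clustering-based optimizer, just a minimal illustration; the tube and bounding-box representations here are invented.

```python
def boxes_overlap(a, b):
    """Axis-aligned overlap test for (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def collide(t1, s1, t2, s2):
    """Two tubes collide if any time-aligned pair of boxes overlaps,
    given their synopsis start times s1 and s2."""
    for i, box1 in enumerate(t1):
        j = s1 + i - s2
        if 0 <= j < len(t2) and boxes_overlap(box1, t2[j]):
            return True
    return False

def schedule_tubes(tubes, collide):
    """Greedy synopsis scheduling: shift each tube (a list of per-frame
    boxes) to the earliest start time at which it does not collide with
    already-placed tubes."""
    placed = []  # (start, tube)
    for tube in tubes:
        start = 0
        while any(collide(tube, start, other, s) for s, other in placed):
            start += 1
        placed.append((start, tube))
    return [s for s, _ in placed]

# two tubes occupying overlapping regions: the second is delayed past the first
tube_a = [(0, 0, 10, 10)] * 3
tube_b = [(5, 5, 10, 10)] * 3
print(schedule_tubes([tube_a, tube_b], collide))  # [0, 3]
```

A real synopsis system would additionally keep interacting objects in the same cluster so their relative timing is preserved, which is the gap the paper's clustering algorithm addresses.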

Implementing Renderer for Viewport Dependent 360 Video (사용자 시점 기반 360 영상을 위한 렌더러 구현)

  • Jang, Dongmin;Son, Jang-Woo;Jeong, JongBeom;Ryu, Eun-Seok
    • Journal of Broadcast Engineering / v.23 no.6 / pp.747-759 / 2018
  • In this paper, we implement viewport-dependent tile partitioning for high-quality 360-degree video transmission, and a rendering method that presents an HMD (Head Mounted Display) screen for 360-degree video quality evaluation. As a method for high-quality video transmission based on the user's viewport, this paper introduces the MCTS (Motion Constrained Tile Sets) technique for solving the motion reference problem, the EIS (Extraction Information Sets) SEI containing pre-configured tile information, and an extractor that extracts tiles. It also explains the tile extraction method based on the user's viewport and the implementation of the display method on an HMD. With the proposed implementation, which transfers only the video in the user's viewport area, it is possible to present higher-quality video at lower bandwidth while avoiding unnecessary image transmission.
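The geometric core of viewport-dependent delivery, deciding which tiles a viewport touches, can be sketched for the horizontal direction of an equirectangular layout. This is a toy under invented assumptions (equal-width tile columns, horizontal FOV only), unrelated to the MCTS/SEI machinery the paper implements.

```python
def viewport_tiles(yaw_deg, fov_deg, n_cols):
    """Return the tile columns of an equirectangular frame (split into
    n_cols equal columns over 360 degrees) that intersect a horizontal
    viewport of fov_deg centered at yaw_deg."""
    col_width = 360.0 / n_cols
    lo = yaw_deg - fov_deg / 2.0
    hi = yaw_deg + fov_deg / 2.0
    cols = set()
    c = lo
    while c < hi:
        cols.add(int((c % 360.0) // col_width))
        c += col_width / 2.0  # sample finer than a column so none is missed
    cols.add(int((hi % 360.0) // col_width))
    return sorted(cols)

# a 90-degree viewport at yaw 0 on an 8-column grid wraps around the seam
print(viewport_tiles(yaw_deg=0, fov_deg=90, n_cols=8))  # [0, 1, 7]
```

Only the returned columns would then be extracted and transmitted; everything outside the viewport is never sent, which is where the bandwidth saving comes from.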

A NEW DETAIL EXTRACTION TECHNIQUE FOR VIDEO SEQUENCE CODING USING MORPHOLOGICAL LAPLACIAN OPERATOR (수리형태학적 Laplacian 연산을 이용한 새로운 동영상 Detail 추출 방법)

  • Eo, Jin-Woo;Kim, Hui-Jun
    • Journal of IKEEE / v.4 no.2 s.7 / pp.288-294 / 2000
  • In this paper, an efficient detail extraction technique for a progressive coding scheme is proposed. The existing technique using the top-hat transformation extracts isolated, visually important details efficiently, but yields inefficient results containing significant redundancy when extracting contour information. The proposed technique, which exploits the strong edge-extraction property of the morphological Laplacian, reduces this redundancy and thus provides a lower bit-rate. Experimental results show that the proposed technique is more efficient than the existing one and demonstrate the applicability of the morphological Laplacian operator.
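The morphological Laplacian the paper builds on is the difference between the dilation residue and the erosion residue, (dilation − f) − (f − erosion). A minimal NumPy sketch with a flat 3x3 structuring element (the element size and the toy step-edge image are choices made here, not taken from the paper):

```python
import numpy as np

def dilate3(img):
    """Grayscale dilation with a flat 3x3 structuring element (edge-padded)."""
    p = np.pad(img, 1, mode='edge')
    stack = [p[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3)]
    return np.max(stack, axis=0)

def erode3(img):
    """Grayscale erosion with a flat 3x3 structuring element (edge-padded)."""
    p = np.pad(img, 1, mode='edge')
    stack = [p[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3)]
    return np.min(stack, axis=0)

def morph_laplacian(img):
    """Morphological Laplacian: (dilation - f) - (f - erosion)."""
    return (dilate3(img) - img) - (img - erode3(img))

# a vertical step edge: the Laplacian is nonzero only around the transition,
# positive on the dark side and negative on the bright side
img = np.zeros((5, 5), dtype=int)
img[:, 3:] = 100
lap = morph_laplacian(img)
print(lap[0])  # [0 0 100 -100 0]
```

The sign change across the edge is what makes the operator a strong edge detector, while flat regions map to zero, which is why it produces less redundancy than the top-hat on contours.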

Study on News Video Character Extraction and Recognition (뉴스 비디오 자막 추출 및 인식 기법에 관한 연구)

  • 김종열;김성섭;문영식
    • Journal of the Institute of Electronics Engineers of Korea SP / v.40 no.1 / pp.10-19 / 2003
  • Caption information in news videos is useful for video indexing and retrieval, since it usually suggests or implies the contents of the video very well. In this paper, a new algorithm for extracting and recognizing characters from news video is proposed that requires no a priori knowledge such as the font type, color, or size of the characters. In the text region extraction step, to improve the recognition rate for videos with complex backgrounds at low resolution, consecutive frames with identical text regions are automatically detected and composed into an average frame. The averaged frame is projected in the horizontal and vertical directions, and region filling is applied to remove the background and isolate the characters. Then, K-means color clustering is applied to remove the remaining background and produce the final text image. In the character recognition step, simple features such as white runs and zero-one transitions from the center are extracted from unknown characters. These features are compared with a pre-composed character feature set to recognize the characters. Experimental results on various news videos show that the proposed method is superior in terms of caption extraction ability and character recognition rate.
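Two steps of the pipeline, averaging frames that share a caption and then separating text from background by clustering, can be sketched as follows. The tiny 1-D 2-means here stands in for the paper's K-means color clustering, and all names and parameters are invented.

```python
import numpy as np

def average_frames(frames):
    """Average frames that share the same caption: moving background
    pixels blur out while static caption pixels stay sharp."""
    return np.mean(np.stack(frames), axis=0)

def kmeans_2(values, iters=10):
    """Tiny 1-D 2-means used to split text pixels from background.
    Returns a boolean mask of the brighter cluster."""
    c0, c1 = float(values.min()), float(values.max())
    for _ in range(iters):
        assign = np.abs(values - c0) > np.abs(values - c1)  # True -> bright
        if assign.any():
            c1 = values[assign].mean()
        if (~assign).any():
            c0 = values[~assign].mean()
    return assign

# static bright caption over a changing dark background
rng = np.random.default_rng(0)
frames = [rng.integers(0, 60, (6, 6)).astype(float) for _ in range(5)]
for f in frames:
    f[2, 1:5] = 255.0  # caption row, identical in every frame
avg = average_frames(frames)
mask = kmeans_2(avg.ravel()).reshape(avg.shape)
print(mask[2, 1:5])  # all True: caption pixels land in the bright cluster
```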

A network-adaptive SVC Streaming Architecture

  • Chen, Peng;Lim, Jeong-Yeon;Lee, Bum-Shik;Kim, Mun-Churl;Hahm, Sang-Jin;Kim, Byung-Sun;Lee, Keun-Sik;Park, Keun-Soo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2006.11a / pp.257-260 / 2006
  • In a video streaming environment, we must consider terminal and network characteristics such as display resolution, frame rate, computational resources, and network bandwidth. The JVT (Joint Video Team) of ISO/IEC MPEG and ITU-T VCEG is currently standardizing Scalable Video Coding (SVC), which can represent video bitstreams in different scalable layers for flexible adaptation to terminal and network characteristics. This property is very useful in video streaming applications: from one fully scalable video, a bitstream can be extracted at a specific target spatial resolution, temporal frame rate, and quality level to match the requirements of terminals and networks. Moreover, the extraction process is fast and consumes few computational resources, so the partial video bitstream can be extracted online to accommodate changing network conditions. Exploiting these advantages of SVC, we design and implement a network-adaptive SVC streaming system with an SVC extractor and a streamer that extract appropriate amounts of the bitstream to meet the required target bitrates and spatial resolutions. The proposed SVC extraction allows flexible online switching from layer to layer in SVC bitstreams to cope with changes in network bandwidth; the extraction is performed at every GOP unit. We present the implementation of our SVC streaming system with experimental results.
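The per-GOP decision of such an extractor reduces to picking the highest layer whose cumulative bitrate fits the measured bandwidth. A minimal sketch under assumed layer bitrates (not the system's actual extractor logic):

```python
def select_layer(layers, available_kbps):
    """Pick the highest SVC layer whose cumulative bitrate fits the
    currently measured bandwidth; re-run at every GOP boundary.
    `layers` lists the incremental bitrate of each layer in kbps."""
    best = 0
    total = 0
    for i, kbps in enumerate(layers):
        total += kbps
        if total <= available_kbps:
            best = i
        else:
            break
    return best

# base layer 300 kbps plus two enhancement layers of 400 and 800 kbps
layers = [300, 400, 800]
print(select_layer(layers, 1000))  # 1: 300+400 fits, adding 800 does not
print(select_layer(layers, 2000))  # 2: all three layers fit
```

Note that if even the base layer exceeds the available bandwidth, the sketch still returns layer 0, since the base layer must always be transmitted.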

A New Video Watermarking Scheme Resistant to Collusion and Synchronization Attacks

  • Kim, Ki-Jung
    • International Journal of Contents / v.5 no.2 / pp.32-37 / 2009
  • A new video watermarking scheme robust against collusion and synchronization attacks is presented. We propose to embed only a few copies of the watermark along the temporal axis, into frames located at the borders between two different plotlines of the video. Each change of the video plotline is thus transformed into a pulse, which is used for watermark embedding and extraction. In addition, since the watermark is embedded into only a small number of frames, distortion of the video is kept to a minimum. Experimental results show the robustness of the proposed scheme.
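Locating plotline borders, where the few watermark copies are embedded, amounts to detecting pulses in the frame-difference signal. A hedged sketch with an invented threshold, not the paper's detector:

```python
import numpy as np

def plot_change_frames(frames, threshold):
    """Indices where the mean absolute frame difference spikes;
    such plotline borders are where watermark copies would be embedded."""
    changes = []
    for i in range(1, len(frames)):
        if np.abs(frames[i] - frames[i - 1]).mean() > threshold:
            changes.append(i)
    return changes

# two 'plotlines': dark frames, then bright frames starting at index 4
frames = [np.full((4, 4), 10.0)] * 4 + [np.full((4, 4), 200.0)] * 4
print(plot_change_frames(frames, threshold=50))  # [4]
```

Because the pulse positions are recoverable from the content itself, the extractor can resynchronize after frame dropping or reordering, which is what gives the scheme its resistance to synchronization attacks.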

Energy Minimization Based Semantic Video Object Extraction

  • Kim, Dong-Hyun;Choi, Sung-Hwan;Kim, Bong-Joe;Shin, Hyung-Chul;Sohn, Kwang-Hoon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2010.07a / pp.138-141 / 2010
  • In this paper, we propose a semi-automatic method for semantic video object extraction which extracts meaningful objects from an input sequence given one correctly segmented training image. Given a correctly segmented image acquired through the user's interaction in the first frame, the proposed method automatically segments and tracks the objects in the following frames. We formulate semantic object extraction as an energy minimization problem at the fragment level instead of the pixel level. The proposed energy function consists of two terms: a data term and a smoothness term. The data term is computed from patch similarity, color, and motion information; the smoothness term is then introduced to enforce spatial continuity. Finally, iterated conditional modes (ICM) optimization is used to minimize the energy function. The proposed semantic video object extraction method provides faithful results for various types of image sequences.
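The ICM step can be illustrated on a toy 1-D chain of fragments with a Potts smoothness term. This is a generic ICM sketch, not the paper's fragment graph or its actual data and smoothness terms:

```python
import numpy as np

def icm(unary, pairwise_weight, iters=10):
    """Iterated conditional modes on a 1-D chain: each site repeatedly
    takes the label minimizing its data cost plus a Potts penalty for
    each neighbor holding a different label."""
    n, k = unary.shape
    labels = unary.argmin(axis=1)  # initialize from the data term alone
    for _ in range(iters):
        for i in range(n):
            costs = unary[i].copy()
            for lbl in range(k):
                if i > 0 and labels[i - 1] != lbl:
                    costs[lbl] += pairwise_weight
                if i < n - 1 and labels[i + 1] != lbl:
                    costs[lbl] += pairwise_weight
            labels[i] = costs.argmin()
    return labels

# 5 fragments, 2 labels; the middle fragment's data term weakly prefers
# label 1, but smoothness pulls it to agree with its neighbors
unary = np.array([[0, 9], [0, 9], [4, 3], [0, 9], [0, 9]], dtype=float)
print(icm(unary, pairwise_weight=2.0))  # [0 0 0 0 0]
```

ICM is a greedy local optimizer: it converges quickly but only to a local minimum of the energy, which is why a good initialization (here, the user-segmented first frame) matters.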
