• Title/Summary/Keyword: Video Caption

Search Results: 65

Caption Data Transmission Method for HDTV Picture Quality Improvement (DTV 화질향상을 위한 자막데이터 전송방법)

  • Han, Chan-Ho
    • Journal of Korea Multimedia Society / v.20 no.10 / pp.1628-1636 / 2017
  • Increasing amounts of data carried for service convenience, such as closed captions, ancillary data, the electronic program guide (EPG), and data broadcasting, degrade the video quality of high-definition content. This article proposes a method for transmitting the closed-caption data of video content without any loss of picture quality: the caption data is inserted as a block image into a DTV area that is always hidden from the viewer, so the insertion causes no degradation during video compression. In addition, the proposed method can synchronize video, audio, and captions from the pre-inserted script without time delay.
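The core idea, roughly: caption bytes are written as coarse black/white blocks into a screen area that DTV receivers never display, so the block pattern survives compression without visibly touching the programme picture. A minimal sketch of that idea follows, assuming a hypothetical bottom overscan margin and an 8x8-pixel block code; the actual hidden area and block layout used by the paper are not reproduced here.

```python
import numpy as np

BLOCK = 8  # block size in pixels; coarse blocks survive compression with little loss

def insert_caption_blocks(frame: np.ndarray, caption: str, hidden_rows: int = 16) -> np.ndarray:
    """Encode caption bytes as black/white blocks in the bottom `hidden_rows`
    of the frame, an area assumed to be cropped by the receiver (overscan)."""
    out = frame.copy()
    bits = np.unpackbits(np.frombuffer(caption.encode("utf-8"), dtype=np.uint8))
    h, w = out.shape[:2]
    cols = w // BLOCK
    if bits.size > cols * (hidden_rows // BLOCK):
        raise ValueError("caption too long for the hidden area")
    for i, bit in enumerate(bits):
        r = h - hidden_rows + (i // cols) * BLOCK
        c = (i % cols) * BLOCK
        out[r:r + BLOCK, c:c + BLOCK] = 255 if bit else 0
    return out

def read_caption_blocks(frame: np.ndarray, n_bytes: int, hidden_rows: int = 16) -> str:
    """Recover the caption by thresholding the same block grid."""
    h, w = frame.shape[:2]
    cols = w // BLOCK
    bits = []
    for i in range(n_bytes * 8):
        r = h - hidden_rows + (i // cols) * BLOCK
        c = (i % cols) * BLOCK
        bits.append(1 if frame[r:r + BLOCK, c:c + BLOCK].mean() > 127 else 0)
    return np.packbits(np.array(bits, dtype=np.uint8)).tobytes().decode("utf-8")
```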

Extraction of Superimposed-Caption Frame Scopes and Its Regions for Analyzing Digital Video (비디오 분석을 위한 자막프레임구간과 자막영역 추출)

  • Lim, Moon-Cheol;Kim, Woo-Saeng
    • The Transactions of the Korea Information Processing Society / v.7 no.11 / pp.3333-3340 / 2000
  • Demand for video data has recently grown rapidly thanks to advances in both hardware and compression techniques. Because digital video data is unstructured and very large, it requires a variety of retrieval techniques such as content-based retrieval. Superimposed captions in digital video make it easier to analyze the video's story and can serve as indexing information for many retrieval techniques. This research proposes a new method that segments captions by analyzing the texture features of caption regions in each video frame, and that extracts the exact scope of superimposed-caption frames along with their key regions and colors by measuring the continuity of caption regions between frames. (A minimal sketch of this idea follows this entry.)

  • PDF
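A minimal sketch of the idea from the abstract above, assuming edge density as the texture cue for text-like regions and mask overlap between consecutive frames as the continuity measure; OpenCV is used only for the edge and morphology operations, and all thresholds are illustrative.

```python
import cv2
import numpy as np

def caption_mask(gray: np.ndarray, edge_thresh: float = 0.15) -> np.ndarray:
    """Return a binary mask of text-like blocks based on local edge density."""
    edges = cv2.Canny(gray, 100, 200)
    density = cv2.boxFilter(edges.astype(np.float32) / 255.0, -1, (16, 16))
    mask = (density > edge_thresh).astype(np.uint8) * 255
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 15), np.uint8))

def caption_frame_scopes(frames, min_overlap: float = 0.7, min_len: int = 5):
    """Group consecutive frames whose caption masks overlap strongly
    into (start, end) scopes of one superimposed caption."""
    scopes, start, prev = [], None, None
    for i, f in enumerate(frames):
        m = caption_mask(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)) > 0
        if m.any() and prev is not None and prev.any():
            inter = np.logical_and(m, prev).sum()
            union = np.logical_or(m, prev).sum()
            stable = inter / union >= min_overlap
        else:
            stable = False
        if m.any() and (start is None or stable):
            start = i if start is None else start
        else:
            if start is not None and i - start >= min_len:
                scopes.append((start, i - 1))
            start = i if m.any() else None
        prev = m
    if start is not None and len(frames) - start >= min_len:
        scopes.append((start, len(frames) - 1))
    return scopes
```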

Creation of Soccer Video Highlight Using The Structural Features of Caption (자막의 구조적 특징을 이용한 축구 비디오 하이라이트 생성)

  • Huh, Moon-Haeng;Shin, Seong-Yoon;Lee, Yang-Weon;Ryu, Keun-Ho
    • The KIPS Transactions: Part D / v.10D no.4 / pp.671-678 / 2003
  • A digital video is usually temporally very long and requires large storage capacity, so users want to watch a pre-summarized version before committing to the full video. In sports video especially, viewers want highlight videos; a highlight lets viewers decide whether the full video is worth watching. This paper proposes how to create a soccer video highlight using the structural features of captions, namely their temporal and spatial characteristics. Caption frame intervals and caption key frames are extracted using those structural features, and the highlight video is then created through scene relocation, logical indexing, and a highlight-creation rule. Finally, highlights and video segments can be retrieved and browsed by selecting items in the browser.
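A rough sketch of the interval and key-frame step just described, assuming a per-frame caption-presence detector already exists (its output is the boolean list below); the choice of the interval's first frame as the caption key frame and the segment padding are illustrative assumptions, not the paper's exact rule.

```python
def caption_frame_intervals(has_caption, min_len=30):
    """Turn a per-frame boolean caption-presence sequence into
    (start, end) frame intervals; short flickers are ignored."""
    intervals, start = [], None
    for i, present in enumerate(has_caption):
        if present and start is None:
            start = i
        elif not present and start is not None:
            if i - start >= min_len:
                intervals.append((start, i - 1))
            start = None
    if start is not None and len(has_caption) - start >= min_len:
        intervals.append((start, len(has_caption) - 1))
    return intervals

def highlight_segments(intervals, fps=30, lead=5, tail=10):
    """Caption key frames: the first frame of each caption interval
    (e.g. the moment a score caption appears). Each key frame is expanded
    into a highlight segment a few seconds before and after it."""
    segments = []
    for start, _ in intervals:
        key = start
        segments.append((max(0, key - lead * fps), key + tail * fps))
    return segments
```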

An Effective Method for Replacing Caption in Video Images (비디오 자막 문자의 효과적인 교환 방법)

  • Chun Byung-Tae;Kim Sook-Yeon
    • Journal of the Korea Society of Computer and Information / v.10 no.2 s.34 / pp.97-104 / 2005
  • Caption text is frequently inserted into produced video to help the TV audience's understanding. In film, caption text can be replaced without any loss of the original image because the captions occupy their own track. Earlier methods replaced captions in video images by filling the caption area with a solid color to erase the existing text and then inserting the new text; however, these methods lose the original image in the caption area, which is problematic for the audience. This paper proposes a new method that replaces the caption text after recovering the original image in the caption area. In the experiments, results on complex images show some distortion after recovering the original image, but most results show clean caption text on top of the recovered image, demonstrating that the method replaces caption text in video images effectively. (A minimal recovery-and-replace sketch follows this entry.)

  • PDF
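A minimal recovery-and-replace sketch, using OpenCV's generic inpainting as a stand-in for the paper's own image-recovery step; the caption mask is assumed to be already known, and the drawing position and font are arbitrary.

```python
import cv2
import numpy as np

def replace_caption(frame: np.ndarray, caption_mask: np.ndarray, new_text: str,
                    org=(40, 40)) -> np.ndarray:
    """Recover the original image under the old caption, then draw the new caption.

    frame        : BGR image containing the old caption
    caption_mask : uint8 mask (255 where the old caption pixels are)
    new_text     : replacement caption string (ASCII only for cv2.putText)
    """
    # Slightly grow the mask so anti-aliased caption edges are also recovered.
    mask = cv2.dilate(caption_mask, np.ones((3, 3), np.uint8), iterations=2)
    # Fill the masked area from the surrounding image (stand-in for the
    # paper's own recovery step).
    recovered = cv2.inpaint(frame, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
    # Draw the replacement caption onto the recovered image.
    cv2.putText(recovered, new_text, org, cv2.FONT_HERSHEY_SIMPLEX,
                1.2, (255, 255, 255), 2, cv2.LINE_AA)
    return recovered
```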

Caption Region Extraction of Sports Video Using Multiple Frame Merge (다중 프레임 병합을 이용한 스포츠 비디오 자막 영역 추출)

  • 강오형;황대훈;이양원
    • Journal of Korea Multimedia Society / v.7 no.4 / pp.467-473 / 2004
  • Captions in video play an important role in delivering video content. Existing caption-region extraction methods are sensitive to noise and therefore have difficulty separating the caption region from the background. This paper proposes a method for extracting caption regions in sports video using multiple-frame merging and MBRs (minimum bounding rectangles). As preprocessing, an adaptive threshold is obtained using contrast stretching and Otsu's method. Caption frame intervals are extracted by merging multiple frames, and the caption region is then extracted efficiently through median filtering, morphological dilation, region labeling, candidate-character-region filtering, and MBR extraction. (A compact sketch of this pipeline follows this entry.)

  • PDF
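A compact sketch of the pipeline named in the abstract above: contrast stretching, Otsu thresholding, multiple-frame merging, median filtering, dilation, region labeling, and MBR extraction, written with OpenCV/NumPy; the size and aspect filter for candidate character regions is an illustrative assumption.

```python
import cv2
import numpy as np

def extract_caption_mbrs(frames):
    """Extract candidate caption MBRs from a burst of frames showing the same caption."""
    merged = None
    for f in frames:
        gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
        # Contrast stretching to the full range, then Otsu's adaptive threshold.
        stretched = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)
        _, binary = cv2.threshold(stretched, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # Multiple-frame merge: caption pixels persist, moving background does not.
        merged = binary if merged is None else cv2.bitwise_and(merged, binary)
    # Noise removal and stroke thickening.
    cleaned = cv2.medianBlur(merged, 3)
    dilated = cv2.dilate(cleaned, np.ones((3, 7), np.uint8))
    # Region labeling and candidate-character-region filtering by size and aspect.
    n, _, stats, _ = cv2.connectedComponentsWithStats(dilated, connectivity=8)
    mbrs = []
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area > 50 and 0.1 < h / max(w, 1) < 10:
            mbrs.append((x, y, w, h))  # minimum bounding rectangle of a candidate region
    return mbrs
```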

Creation of Soccer Video Highlights Using Caption Information (자막 정보를 이용한 축구 비디오 하이라이트 생성)

  • Shin Seong-Yoon;Kang Il-Ko;Rhee Yang-Won
    • Journal of the Korea Society of Computer and Information / v.10 no.5 s.37 / pp.65-76 / 2005
  • A digital video is very long and requires large-capacity storage, so viewers want to watch a summarized version before watching the long original. In the field of sports in particular, highlight videos are frequently watched; in short, a highlight video allows a viewer to determine whether the full video is worth watching. This paper proposes a scheme for creating soccer video highlights using the structural features of captions in terms of time and space. These structural features are used to extract caption frame intervals and caption key frames. A highlight video is created by resetting shots for the caption key frames, by means of logical indexing, and through a highlight-creation rule. Finally, highlight videos and video segments can be searched and browsed by selecting the desired items in the browser.

  • PDF

Application of Speech Recognition with Closed Caption for Content-Based Video Segmentations

  • Son, Jong-Mok;Bae, Keun-Sung
    • Speech Sciences / v.12 no.1 / pp.135-142 / 2005
  • An important aspect of video indexing is the ability to segment video into meaningful segments, i.e., content-based video segmentation. Since the audio in the sound track is synchronized with the image sequences of the video program, the speech signal in the sound track can be used to segment the video into meaningful segments. In this paper, we propose a new approach to content-based video segmentation that uses the closed caption to construct a recognition network for speech recognition; accurate time information for segmentation is then obtained from the recognition process. In a video segmentation experiment on TV news programs, 56 of 57 news stories were summarized successfully, demonstrating that the proposed scheme is very promising for content-based video segmentation. (A small alignment sketch follows this entry.)

  • PDF
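A small sketch of the alignment idea from the abstract above, assuming an external speech recognizer has already produced word-level timestamps for the sound track (the (word, start, end) triples below stand in for that output rather than any specific ASR API); each closed-caption segment is mapped to the time of its first matching word, which gives the segment boundaries.

```python
def align_captions_to_audio(caption_segments, recognized_words):
    """Assign a start time to each closed-caption segment.

    caption_segments : list of caption strings, one per story/segment
    recognized_words : list of (word, start_sec, end_sec) from a speech recognizer
    Returns a list of (segment_text, start_sec) boundaries for video segmentation.
    """
    boundaries = []
    cursor = 0  # only search forward so segments stay in temporal order
    for segment in caption_segments:
        first_word = segment.split()[0].lower()
        for i in range(cursor, len(recognized_words)):
            word, start, _ = recognized_words[i]
            if word.lower() == first_word:
                boundaries.append((segment, start))
                cursor = i + 1
                break
        else:
            # No recognition hit: fall back to the previous boundary (or 0.0).
            boundaries.append((segment, boundaries[-1][1] if boundaries else 0.0))
    return boundaries
```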

Knowledge-based Video Retrieval System Using Korean Closed-caption (한국어 폐쇄자막을 이용한 지식기반 비디오 검색 시스템)

  • 조정원;정승도;최병욱
    • Journal of the Institute of Electronics Engineers of Korea CI / v.41 no.3 / pp.115-124 / 2004
  • Content-based retrieval using low-level features can hardly provide retrieval results that match the conceptual demands of users for intelligent retrieval. Video contains not only moving-picture data but also audio and closed-caption data, and knowledge-based video retrieval can match users' conceptual demands because it performs automatic indexing on this variety of data. In this paper, we present a knowledge-based video retrieval system that uses Korean closed captions. The closed captions are indexed by a Korean keyword extraction system that includes morphological analysis, so videos can then be retrieved by keyword from the indexing database. In experiments, we applied the proposed method to news video with closed captions generated by a Korean stenographic system and empirically confirmed that it provides retrieval results that better match the meaningful, conceptual demands of users.
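A minimal sketch of the indexing and retrieval step: time-stamped caption lines are run through a keyword extractor and stored in an inverted index. The extract_keywords function below is a whitespace-splitting placeholder for the Korean morphological analysis described in the abstract, not a real analyzer.

```python
from collections import defaultdict

def extract_keywords(caption_line: str):
    """Placeholder for Korean morphological analysis / keyword extraction.
    Here it simply splits on whitespace; a real system would extract nouns."""
    return [tok for tok in caption_line.split() if len(tok) > 1]

def build_index(captions):
    """captions: list of (timestamp_sec, caption_line). Returns keyword -> [timestamps]."""
    index = defaultdict(list)
    for ts, line in captions:
        for kw in extract_keywords(line):
            index[kw].append(ts)
    return index

def search(index, keyword):
    """Return timestamps of video positions whose captions mention the keyword."""
    return index.get(keyword, [])
```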

A Study on the Use of Speech Recognition Technology for Content-based Video Indexing and Retrieval (내용기반 비디오 색인 및 검색을 위한 음성인식기술 이용에 관한 연구)

  • 손종목;배건성;강경옥;김재곤
    • The Journal of the Acoustical Society of Korea / v.20 no.2 / pp.16-20 / 2001
  • An important aspect of video program indexing and retrieval is the ability to segment a video program into meaningful segments, in other words, content-based video program segmentation. In this paper, a new approach using speech recognition technology is proposed for content-based video program segmentation: speech recognition is used to synchronize the closed caption with the speech signal. Experimental results demonstrate that the proposed scheme is very promising for content-based video program segmentation.

  • PDF

Methods for Video Caption Extraction and Extracted Caption Image Enhancement (영화 비디오 자막 추출 및 추출된 자막 이미지 향상 방법)

  • Kim, So-Myung;Kwak, Sang-Shin;Choi, Yeong-Woo;Chung, Kyu-Sik
    • Journal of KIISE: Software and Applications / v.29 no.4 / pp.235-247 / 2002
  • For efficient indexing and retrieval of digital video data, research on video caption extraction and recognition is required. This paper proposes methods for extracting artificial captions from video data and enhancing their image quality for accurate Hangul and English character recognition. In the proposed methods, we first find the beginning and ending frames of each caption and combine the multiple frames in each group with a logical operation to remove background noise; during this process, the integrated results are evaluated to detect when the caption image has changed. After the multiple video frames are integrated, four image enhancement techniques are applied: resolution enhancement, contrast enhancement, stroke-based binarization, and morphological smoothing. Applying these operations to the video frames improves the image quality even of characters with complex strokes. Finding the beginning and ending frames of the same caption content can also be used effectively for digital video indexing and browsing. We tested the proposed methods on video caption images containing both Hangul and English characters from films and obtained improved character recognition results.
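A short sketch of the enhancement chain described above (multi-frame combination, resolution enhancement, contrast enhancement, binarization, morphological smoothing), using common OpenCV operations as stand-ins for the paper's stroke-based binarization; all parameters are illustrative.

```python
import cv2
import numpy as np

def enhance_caption_for_ocr(caption_frames, scale=2):
    """caption_frames: grayscale crops of the same caption from consecutive frames."""
    # Combine multiple frames: captions are bright and static, so a pixel-wise
    # minimum suppresses bright moving background while keeping the text.
    combined = np.min(np.stack(caption_frames).astype(np.uint8), axis=0)
    # Resolution enhancement by interpolation.
    upscaled = cv2.resize(combined, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
    # Contrast enhancement (CLAHE used here in place of the paper's method).
    contrast = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(upscaled)
    # Binarization (Otsu as a stand-in for stroke-based binarization).
    _, binary = cv2.threshold(contrast, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Morphological smoothing to clean ragged stroke edges.
    smoothed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, np.ones((2, 2), np.uint8))
    return smoothed
```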