• Title/Summary/Keyword: video metadata

Search results: 115

Metadata Transcoding between MPEG-7 and TV Anytime for Segmentation Information of Video (비디오 내용기술을 위한 MPEG-7과 TV Anytime 메타데이타의 상호 변환)

  • 임화영;이창윤;김혁만
    • Proceedings of the Korean Information Science Society Conference / 2002.10c / pp.31-33 / 2002
  • This paper proposes methods for converting video content-description metadata written according to the MPEG-7 schema into metadata conforming to the TV-Anytime (TVA) schema, and for the reverse conversion. To this end, the MPEG-7 and TVA schemas are analyzed and their similarities and differences are identified. The paper also describes how to handle the id issues that arise when converting between the nested representation and the reference representation, how to handle key-frame information, and how to handle locator information. (A simplified schema-mapping sketch follows this entry.)

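The transcoding described in this entry is essentially a structural mapping between two XML schemas. The following Python sketch is illustrative only: it maps simplified MPEG-7 VideoSegment descriptions onto flat TV-Anytime-style SegmentInformation records, with element names pared down, namespaces omitted, and the id, key-frame, and locator issues the paper addresses left out.

```python
# Minimal sketch of a one-way MPEG-7 -> TV-Anytime segment transcoding.
# Element names and structure are simplified for illustration; real MPEG-7
# and TV-Anytime documents are namespaced and considerably richer.
import xml.etree.ElementTree as ET

MPEG7_EXAMPLE = """\
<Mpeg7>
  <VideoSegment id="seg1">
    <Title>Opening scene</Title>
    <MediaTime>
      <MediaTimePoint>T00:00:00</MediaTimePoint>
      <MediaDuration>PT0M30S</MediaDuration>
    </MediaTime>
  </VideoSegment>
</Mpeg7>
"""

def mpeg7_to_tva(mpeg7_xml: str) -> str:
    """Map each MPEG-7 VideoSegment onto a TV-Anytime-style SegmentInformation."""
    src = ET.fromstring(mpeg7_xml)
    tva = ET.Element("SegmentInformationTable")
    for seg in src.iter("VideoSegment"):
        info = ET.SubElement(tva, "SegmentInformation", segmentId=seg.get("id", ""))
        desc = ET.SubElement(info, "Description")
        ET.SubElement(desc, "Title").text = seg.findtext("Title", default="")
        locator = ET.SubElement(info, "SegmentLocator")
        # Time point and duration carry over directly; only the surrounding
        # structure (nested segments vs. flat, id-keyed segments) changes.
        ET.SubElement(locator, "MediaTimePoint").text = \
            seg.findtext("MediaTime/MediaTimePoint", default="")
        ET.SubElement(locator, "MediaDuration").text = \
            seg.findtext("MediaTime/MediaDuration", default="")
    return ET.tostring(tva, encoding="unicode")

if __name__ == "__main__":
    print(mpeg7_to_tva(MPEG7_EXAMPLE))
```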

Contextual In-Video Advertising Using Situation Information (상황 정보를 활용한 동영상 문맥 광고)

  • Yi, Bong-Jun;Woo, Hyun-Wook;Lee, Jung-Tae;Rim, Hae-Chang
    • Journal of the Korea Academia-Industrial cooperation Society / v.11 no.8 / pp.3036-3044 / 2010
  • With the rapid growth of video data services, demand is increasing for advertisements or additional information tied to a particular video scene. However, directly applying automated visual analysis or speech recognition to video is still limited at the current level of technology, and video-level metadata such as the title, category, or summary does not reflect the content of continuously changing scenes. This work presents a video contextual advertising system that serves advertisements relevant to a given scene by leveraging the scene's situation information inferred from the video script. Experimental results show that using situation information extracted from scripts improves performance and displays advertisements that are more relevant to the user. (A toy keyword-matching sketch follows this entry.)
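
As a toy illustration of the matching step only (not the paper's system), the sketch below ranks advertisements against a scene by cosine similarity between the scene's situation keywords, assumed to be already extracted from the script, and each advertisement's keywords. All keywords and ad identifiers are hypothetical.

```python
# Toy sketch: rank ads for a scene by keyword overlap (cosine similarity
# over term counts). Situation keywords are assumed to come from a prior
# script-analysis step, which is not shown here.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_ads(scene_keywords: list[str], ads: dict[str, list[str]]) -> list[tuple[str, float]]:
    scene_vec = Counter(scene_keywords)
    scored = [(ad_id, cosine(scene_vec, Counter(kw))) for ad_id, kw in ads.items()]
    return sorted(scored, key=lambda x: x[1], reverse=True)

if __name__ == "__main__":
    # Hypothetical scene: two characters meet over coffee in a cafe.
    scene = ["cafe", "coffee", "morning", "meeting", "friend"]
    ads = {
        "espresso_machine": ["coffee", "espresso", "cafe", "barista"],
        "running_shoes": ["sport", "running", "marathon"],
        "travel_package": ["travel", "vacation", "beach"],
    }
    for ad_id, score in rank_ads(scene, ads):
        print(f"{ad_id}: {score:.3f}")
```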

Design and Implementation of Automated Detection System of Personal Identification Information for Surgical Video De-Identification (수술 동영상의 비식별화를 위한 개인식별정보 자동 검출 시스템 설계 및 구현)

  • Cho, Youngtak;Ahn, Kiok
    • Convergence Security Journal / v.19 no.5 / pp.75-84 / 2019
  • Recently, the value of video as medical information has been increasing because of the rich clinical information it carries. At the same time, video must be de-identified like other medical images, but existing methods are mainly specialized for structured data and still images and are difficult to apply to video. This paper proposes an automated system that indexes candidate elements of personal identification information on a frame-by-frame basis. After preprocessing based on scene segmentation and a color-knowledge method, the system performs indexing using text and person detection, and the generated index information is provided as metadata according to the purpose of use. To verify the system's effectiveness, indexing speed was measured with a prototype implementation on real surgical video: processing ran more than twice as fast as the playing time of the input video, and a case study on producing surgical education content confirmed that the index supports decision making. (A rough indexing-loop sketch follows this entry.)
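
A rough sketch of the frame-level indexing loop, assuming OpenCV is available: frames are sampled, scene cuts are flagged with a simple histogram comparison, and the detect_text and detect_person stubs stand in for the real detectors and the color-knowledge preprocessing used in the paper. The input file name, sampling rate, and threshold are placeholders.

```python
# Rough sketch of frame-level indexing: walk through a video, mark scene
# boundaries with a histogram comparison, and record frames that a detector
# flags as containing potential identifying information.
import json
import cv2  # pip install opencv-python

def detect_text(frame) -> bool:
    """Placeholder: plug in an OCR/text detector here."""
    return False

def detect_person(frame) -> bool:
    """Placeholder: plug in a person/face detector here."""
    return False

def index_video(path: str, every_n: int = 10, cut_threshold: float = 0.6):
    cap = cv2.VideoCapture(path)
    index, prev_hist, frame_no = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_no % every_n == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
            cv2.normalize(hist, hist)
            is_cut = (prev_hist is not None and
                      cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < cut_threshold)
            prev_hist = hist
            flags = {
                "scene_cut": bool(is_cut),
                "text_candidate": detect_text(frame),
                "person_candidate": detect_person(frame),
            }
            if any(flags.values()):
                index.append({"frame": frame_no, **flags})
        frame_no += 1
    cap.release()
    return index

if __name__ == "__main__":
    candidates = index_video("surgery.mp4")  # hypothetical input file
    print(json.dumps(candidates, indent=2))
```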

Semantic Indexing for Soccer Videos Using Web-Extracted Information (웹에서 축출된 정보를 이용한 축구 경기의 시맨틱 인덱싱)

  • Hirata, Issao;Kim, Myeong-Hoon;Sull, Sang-Hoon
    • Proceedings of the Korean Information Science Society Conference / 2007.10c / pp.41-45 / 2007
  • The rapid growth of video content production calls for more sophisticated indexing systems that allow efficient search, retrieval, and presentation of the desired video segments. This paper presents a method for indexing soccer video through automatic extraction of information from the Internet. It defines a metadata structure that formally represents knowledge about soccer matches and provides an automatic method for extracting semantic information from web sites. This approach improves the ability to obtain richer and more reliable semantic information for soccer videos, and experimental results demonstrate that the proposed method performs efficiently. (An illustrative event-metadata sketch follows this entry.)

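Purely to illustrate the kind of metadata structure involved, the sketch below defines a minimal soccer-event record and a helper that maps a web-reported match minute onto an approximate position in the recording. The web-extraction step and the paper's full metadata schema are omitted, and all field names and offsets are assumptions.

```python
# Illustrative only: a minimal metadata structure for soccer match events and
# a helper that maps an event's match minute onto a video timestamp, given the
# offset of kickoff within the recording.
from dataclasses import dataclass

@dataclass
class MatchEvent:
    minute: int          # match minute as reported by the web source
    event_type: str      # e.g. "goal", "yellow_card", "substitution"
    player: str
    team: str

def event_to_video_seconds(event: MatchEvent, kickoff_offset_s: float,
                           halftime_break_s: float = 15 * 60) -> float:
    """Convert a match minute to an approximate offset (seconds) in the video."""
    seconds = event.minute * 60
    if event.minute > 45:  # account for the halftime break in the recording
        seconds += halftime_break_s
    return kickoff_offset_s + seconds

if __name__ == "__main__":
    goal = MatchEvent(minute=67, event_type="goal", player="Player A", team="Home")
    # Kickoff assumed to start 120 s into the recording (hypothetical value).
    print(f"Seek to {event_to_video_seconds(goal, kickoff_offset_s=120):.0f} s")
```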

A Study for identifier (lastURL) fix-up algorithm in MPV's implementation (MPV(MultiPhoto/Video) 적용에 있어서의 식별자 복구 알고리즘에 대한 연구)

  • Kim, Du-Il
    • Proceedings of the Korea Information Processing Society Conference / 2002.11a / pp.23-26 / 2002
  • MPV (MultiPhoto/Video) is a proposed standard intended to improve interoperability between digital devices and to make it easier for users to manage their content. For the user's convenience, MPV does not manage the content itself directly but manages it through metadata expressed in XML; a recovery algorithm is provided to repair content references when the content information is changed by software unrelated to MPV. The proposed algorithm, however, is very slow, so this paper proposes two algorithms that improve its speed and efficiency. (An index-based recovery sketch follows this entry.)

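One plausible way to speed up such a recovery step, sketched here with assumed reference fields (lastURL and size), is to index the files under the collection root once by name and size and then re-bind each broken reference with a single lookup instead of rescanning the directory tree per item. This is only an illustration of the indexing idea, not the algorithm proposed in the paper.

```python
# Sketch of lastURL recovery by pre-indexing: build a (basename, size) index
# of the collection once, then resolve each broken reference via lookup.
import os
from collections import defaultdict

def build_file_index(root: str) -> dict[tuple[str, int], list[str]]:
    """Index files by (basename, size) for constant-time candidate lookup."""
    index = defaultdict(list)
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            index[(name, os.path.getsize(path))].append(path)
    return index

def fix_last_urls(broken_refs: list[dict], root: str) -> list[dict]:
    """broken_refs: e.g. [{'lastURL': 'old/path/img001.jpg', 'size': 12345}, ...]"""
    index = build_file_index(root)
    repaired = []
    for ref in broken_refs:
        key = (os.path.basename(ref["lastURL"]), ref["size"])
        candidates = index.get(key, [])
        # Re-bind to the first matching file, or None if nothing matches.
        repaired.append({**ref, "lastURL": candidates[0] if candidates else None})
    return repaired
```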

Towards the Generation of Language-based Sound Summaries Using Electroencephalogram Measurements (뇌파측정기술을 활용한 언어 기반 사운드 요약의 생성 방안 연구)

  • Kim, Hyun-Hee;Kim, Yong-Ho
    • Journal of the Korean Society for Information Management / v.36 no.3 / pp.131-148 / 2019
  • This study constructed a cognitive model of information processing to understand the topic of a sound material and its characteristics. It then proposed methods for generating sound summaries by incorporating anterior-posterior N400/P600 components of the event-related potential (ERP) response into the language representation of the model. To this end, research hypotheses were established and verified through ERP experiments, showing that P600 is crucial for screening topic-relevant shots from topic-irrelevant ones. The results can be applied to the design of classification algorithms that generate content-based metadata such as generic or personalized sound summaries and video skims. (A toy P600-screening sketch follows this entry.)
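
As a toy illustration of the screening idea only, the sketch below uses the mean amplitude in an assumed P600 window (500-800 ms after stimulus onset) as a single feature and keeps shots whose value exceeds a threshold; the window, channel handling, and threshold are placeholders rather than the paper's parameters.

```python
# Toy sketch: screen shots by the mean amplitude in an assumed P600 window.
import numpy as np

def p600_feature(epoch: np.ndarray, fs: float, window=(0.5, 0.8)) -> float:
    """epoch: array of shape (n_channels, n_samples), time-locked to the stimulus."""
    start, end = int(window[0] * fs), int(window[1] * fs)
    return float(epoch[:, start:end].mean())

def screen_shots(epochs: list[np.ndarray], fs: float, threshold: float = 2.0) -> list[bool]:
    """Return True for shots judged topic-relevant by the P600 feature."""
    return [p600_feature(e, fs) > threshold for e in epochs]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fs = 250.0  # sampling rate in Hz (assumed)
    # Three synthetic "shots" with different mean amplitudes.
    shots = [rng.normal(loc=m, scale=1.0, size=(8, 300)) for m in (0.0, 3.0, 1.0)]
    print(screen_shots(shots, fs))  # e.g. [False, True, False]
```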

A multidisciplinary analysis of the main actor's conflict emotions in Animation film's Turning Point (장편 애니메이션 극적전환점에서 주인공의 갈등 정서에 대한 다학제적 분석)

  • Lee, Tae Rin;Kim, Jong Dae;Liu, Guoxu;Ingabire, Jesse;Kim, Jae Ho
    • Korea Science and Art Forum / v.34 / pp.275-290 / 2018
  • The study started from the recognition that analyzing narrative-centered conflict in feature animation requires objective and reasonable methods for classifying conflicts visually and for studying the protagonist's emotions in conflict. The purpose of the study is to analyze conflict intensity and emotion. The results are as follows. First, turning points were located and a conflict classification model (the Conflict 6B Model) was proposed. Second, a conflict-based shot database was extracted on the basis of this model. Third, intensity and emotion were identified in inner and superpersonal conflicts. Fourth, experiments and tests on intensity and emotion were conducted for these conflicts. The outcome of the study is metadata extracted from the emotional analysis of conflict, which is expected to be applied to video indexing of conflict scenes.

A Study on Identification of the Source of Videos Recorded by Smartphones (스마트폰으로 촬영된 동영상의 출처 식별에 대한 연구)

  • Kim, Hyeon-seung;Choi, Jong-hyun;Lee, Sang-jin
    • Journal of the Korea Institute of Information Security & Cryptology / v.26 no.4 / pp.885-894 / 2016
  • As smartphones have become common, anyone can easily take pictures and record videos. Video files recorded on smartphones can serve as important clues and evidence, and when analyzing them there are occasions where it must be proven that a video file was recorded by a specific smartphone. Various fingerprinting techniques from existing research can be used for this, but there are situations where the fingerprinting result needs to be strengthened or fingerprinting cannot be applied. Forensic examination of the smartphone should therefore be performed before fingerprinting, and a database of video-file metadata should be established. This paper discusses the artifacts left on a smartphone after video recording and the construction of such a database. (A minimal metadata-database sketch follows this entry.)
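
A minimal sketch of the kind of metadata database the paper argues for, assuming the ffprobe command-line tool is installed: container and stream metadata are extracted for each reference video and stored in SQLite for later comparison with videos of unknown origin. The table schema, device identifier, and file names are illustrative.

```python
# Sketch: extract video metadata with ffprobe and store it per reference
# device in SQLite, so unknown videos can later be compared against it.
import json
import sqlite3
import subprocess

def probe(path: str) -> dict:
    """Return ffprobe's format/stream metadata for a video file as a dict."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True).stdout
    return json.loads(out)

def store(db_path: str, device_id: str, video_path: str) -> None:
    meta = probe(video_path)
    tags = meta.get("format", {}).get("tags", {})
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS video_meta
                   (device_id TEXT, path TEXT, encoder TEXT,
                    creation_time TEXT, raw_json TEXT)""")
    con.execute("INSERT INTO video_meta VALUES (?, ?, ?, ?, ?)",
                (device_id, video_path, tags.get("encoder"),
                 tags.get("creation_time"), json.dumps(meta)))
    con.commit()
    con.close()

if __name__ == "__main__":
    store("videos.db", "reference_phone_01", "sample.mp4")  # hypothetical files
```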

The Development of Information Circulation System for Science & Technology Video Digital Contents Based on KOI(Knowledge Object Identifier) (식별체계기반 과학기술 동영상 콘텐츠 유통시스템 구축 방안)

  • Seok Jung-Ho
    • The Journal of the Korea Contents Association / v.5 no.1 / pp.65-71 / 2005
  • With the rapid development of the Internet and information technology, digital content carrying knowledge and information resources circulates over the Internet. A circulation system based on a standardized identifier is required to share the knowledge generated at science and technology seminars and workshops and saved as digital video content. The main objective of this study is to construct an information circulation system based on the KOI identifier so that such digital video content can be shared effectively. The paper also surveys the status of standardized identifiers and describes the functional aspects of the system: how the KOI identification scheme is applied to the video content and its slides, the video content management system, the centralized identifier management system, and the methods used to search digital video metadata. (A minimal registry-resolution sketch follows this entry.)

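As a minimal sketch of the resolution role such an identifier registry plays, the code below registers a video's metadata under a KOI-style identifier and resolves the identifier back to a location and description. The identifier syntax and record fields are placeholders, not the actual KOI scheme.

```python
# Minimal identifier registry sketch: register metadata under an identifier
# and resolve the identifier back to a descriptive record.
from dataclasses import dataclass, field

@dataclass
class Registry:
    records: dict[str, dict] = field(default_factory=dict)

    def register(self, koi: str, metadata: dict) -> None:
        if koi in self.records:
            raise ValueError(f"identifier already registered: {koi}")
        self.records[koi] = metadata

    def resolve(self, koi: str) -> dict:
        return self.records[koi]

if __name__ == "__main__":
    reg = Registry()
    reg.register("KOI:KISTI.SEMINAR-2005-0001",  # placeholder identifier syntax
                 {"title": "Seminar recording", "url": "http://example.org/v/0001",
                  "slides": "http://example.org/s/0001"})
    print(reg.resolve("KOI:KISTI.SEMINAR-2005-0001")["url"])
```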

A Content-Based Synchronization Approach using Scene Keywords in Enhanced TV based on MPEG-4 (MPEG-4 기반 연동형 방송에서 장면 키워드를 이용한 내용 기반 동기화 기법)

  • Yim, Hyun-Jeong;Lim, Soon-Bum
    • Journal of KIISE: Computing Practices and Letters / v.16 no.6 / pp.737-741 / 2010
  • When implementing Enhanced TV services, time synchronization between the background video stream and the data content overlaid on the audio/video is an important issue. Currently, however, the basic method for synchronizing data in the MPEG-4 environment relies on absolute time values. For more efficient synchronization when developing Enhanced TV content, this paper proposes a content-based synchronization scheme in which the data content varies with the video content. The proposed method is implemented by extending the BIFS node definitions around scene keywords and then using MPEG-7 metadata. (A small keyword-driven synchronization sketch follows this entry.)
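
To illustrate how this differs from absolute-time synchronization, the sketch below keys overlay data by scene keyword and emits the matching content whenever the reported keyword changes. The keyword stream stands in for the MPEG-7 scene descriptions and BIFS signalling used in the paper, and all keywords and overlays are hypothetical.

```python
# Sketch of content-based synchronization: overlay data is keyed by scene
# keyword, and matching content is shown whenever the scene keyword changes,
# instead of being scheduled at absolute time values.
from typing import Iterable

OVERLAYS = {  # scene keyword -> data content to display (illustrative)
    "goal": "Show scorer profile and season statistics",
    "weather": "Show regional forecast widget",
    "recipe": "Show ingredient list and shopping link",
}

def synchronize(scene_keywords: Iterable[str]):
    """Yield (keyword, overlay) whenever the scene changes and has content."""
    current = None
    for kw in scene_keywords:
        if kw != current:
            current = kw
            if kw in OVERLAYS:
                yield kw, OVERLAYS[kw]

if __name__ == "__main__":
    stream = ["intro", "weather", "weather", "goal", "goal", "recipe"]
    for kw, overlay in synchronize(stream):
        print(f"[{kw}] -> {overlay}")
```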