• Title/Summary/Keyword: video metadata


Design and Implementation of UCC Multimedia Service Systems (UCC 멀티미디어 서비스 시스템 설계 및 구현)

  • Bok, Kyoung-Soo; Yeo, Myung-Ho; Lee, Mi-Sook; Lee, Nak-Gyu; Yoo, Kwan-Hee; Yoo, Jae-Soo
    • Proceedings of the Korea Contents Association Conference / 2007.11a / pp.178-182 / 2007
  • In this paper, we design and implement a prototype UCC service system for image and video. The proposed system consists of two components, a multimedia processing subsystem and a metadata management subsystem, and provides an API to UCC service developers. The multimedia processing subsystem supports media management and editing for images and video as well as video streaming services. The metadata management subsystem supports metadata management and retrieval for images and video, along with reply management and script processing for UCC.
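
The abstract above describes the two-subsystem architecture only at a high level. The sketch below is a minimal, hypothetical Python rendering of how a developer-facing API over a multimedia processing subsystem and a metadata management subsystem might be wired together; all class and method names are assumptions for illustration, not the authors' actual interface.

```python
from dataclasses import dataclass, field


@dataclass
class MediaItem:
    media_id: str
    kind: str                                   # "image" or "video"
    path: str
    metadata: dict = field(default_factory=dict)


class MultimediaProcessingSubsystem:
    """Manages image/video assets (editing and streaming are omitted in this sketch)."""
    def __init__(self):
        self._store = {}

    def register(self, item: MediaItem) -> None:
        self._store[item.media_id] = item


class MetadataManagementSubsystem:
    """Stores and retrieves metadata for registered media."""
    def __init__(self):
        self._meta = {}

    def annotate(self, media_id: str, key: str, value: str) -> None:
        self._meta.setdefault(media_id, {})[key] = value

    def search(self, key: str, value: str) -> list:
        return [mid for mid, meta in self._meta.items() if meta.get(key) == value]


class UCCServiceAPI:
    """Hypothetical facade handed to UCC service developers; wires both subsystems."""
    def __init__(self):
        self.media = MultimediaProcessingSubsystem()
        self.metadata = MetadataManagementSubsystem()

    def upload(self, item: MediaItem) -> None:
        self.media.register(item)
        for key, value in item.metadata.items():
            self.metadata.annotate(item.media_id, key, value)


api = UCCServiceAPI()
api.upload(MediaItem("v001", "video", "/tmp/v001.mp4", {"keyword": "travel"}))
print(api.metadata.search("keyword", "travel"))   # ['v001']
```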


An XML-based Metadata Engine Design for Effective Retrieval in Video Recording System (동영상 저장 시스템에서 효율적인 검색을 위한 XML 메타데이터 엔진 설계)

  • Shin Eun Young; Park Sung Han
    • Journal of Broadcast Engineering / v.10 no.2 / pp.202-209 / 2005
  • In this paper, we propose a design for the metadata engine of a video recording system that minimizes retrieval time. The proposed metadata engine stores XML metadata as separate fragments and constructs a hierarchical indexing scheme based on the contextual and structural properties of the metadata. The hierarchical indexing scheme consists of a node index for basic searching and a group index for advanced searching. In this way, our approach minimizes the number of indexes and thus the retrieval time. Our simulation results show that the response time of the proposed system is shorter than that of previous works.
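
As a rough illustration of the fragment-plus-index idea described above, the Python sketch below splits an XML metadata document into fragments, builds a node index (element name to fragments) for basic search, and a group index (parent element to child fragments) for advanced search. The element names and index layout are assumptions, not the paper's actual engine.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

metadata_xml = """
<video id="rec42">
  <title>Evening News</title>
  <segment start="00:00" end="05:00"><topic>weather</topic></segment>
  <segment start="05:00" end="12:00"><topic>sports</topic></segment>
</video>
"""

root = ET.fromstring(metadata_xml)

# Store each element as a separate fragment keyed by a path-like id.
fragments = {}
node_index = defaultdict(list)      # element name -> fragment ids (basic search)
group_index = defaultdict(list)     # parent name  -> child fragment ids (advanced search)


def store(elem, path="0"):
    fragments[path] = elem
    node_index[elem.tag].append(path)
    for i, child in enumerate(elem):
        group_index[elem.tag].append(f"{path}.{i}")
        store(child, f"{path}.{i}")


store(root)

# Basic search: all <topic> fragments via the node index.
print([fragments[p].text for p in node_index["topic"]])   # ['weather', 'sports']
# Advanced search: everything grouped under <segment>.
print(group_index["segment"])
```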

Efficient Browsing Method based on Metadata of Video Contents (동영상 컨텐츠의 메타데이타에 기반한 효율적인 브라우징 기법)

  • Chun, Soo-Duck; Shin, Jung-Hoon; Lee, Sang-Jun
    • Journal of KIISE: Computing Practices and Letters / v.16 no.5 / pp.513-518 / 2010
  • The advancement of information technology, along with the proliferation of communication and multimedia, has increased the demand for digital content. Video data in digital content services such as VOD, NOD, digital libraries, IPTV, and UCC are spreading into various application fields. Video data are sequential and carry spatial and temporal information in a 3D format, which makes searching and browsing ineffective because of long turnaround times. In this paper, we propose ATVC (Authoring Tool for Video Contents) to address this issue. ATVC is a video editing tool that detects key frames using visual rhythm and inserts metadata such as keywords into the key frames via XML tagging. Visual rhythm maps 3D spatial and temporal information to 2D information; it is fast because pixel information can be obtained without the IDCT, and it can classify edit effects such as cuts, wipes, and dissolves. Since the XML data store key frame and keyword information in XML tags, they support efficient browsing.
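
Visual-rhythm key frame detection itself needs access to the compressed video stream, so the sketch below only illustrates the second step mentioned above: tagging already-detected key frames with keywords and edit effects in XML. The tag and attribute names are invented for this example and are not the ATVC schema.

```python
import xml.etree.ElementTree as ET

# Key frames assumed to have been found by visual rhythm; here they are given directly.
key_frames = [
    {"frame": 120, "effect": "cut", "keywords": ["goal", "replay"]},
    {"frame": 980, "effect": "dissolve", "keywords": ["interview"]},
]

# Build an XML document that attaches keyword metadata to each key frame.
video = ET.Element("video", {"src": "match.mp4"})
for kf in key_frames:
    node = ET.SubElement(video, "keyframe",
                         {"frame": str(kf["frame"]), "effect": kf["effect"]})
    for kw in kf["keywords"]:
        ET.SubElement(node, "keyword").text = kw

print(ET.tostring(video, encoding="unicode"))
```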

XMARS : XML-based Multimedia Annotation and Retrieval System (XMARS : XML 기반 멀티미디어 주석 및 검색 시스템)

  • Nam, Yun-Young; Hwang, Een-Jun
    • The KIPS Transactions: Part B / v.9B no.5 / pp.541-548 / 2002
  • This paper proposes an XML-based multimedia annotation and retrieval system that can represent and retrieve video data efficiently using XML. The system provides a graphical user interface for annotating, searching, and browsing multimedia data. It is implemented on a hierarchical metadata model for representing multimedia information. The video metadata are organized according to a multimedia description schema expressed in XML Schema that largely conforms to the MPEG-7 standard. In addition, for effective indexing and retrieval of multimedia data, video segments are annotated and categorized using closed captions.
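
The closed-caption-based categorization mentioned above could, in its simplest form, look like the sketch below: segments are assigned to the categories whose keywords appear in the caption text. The category lists and captions are made-up examples, not data or code from XMARS.

```python
# Category -> keyword set used to label a segment from its caption text (illustrative only).
CATEGORY_KEYWORDS = {
    "sports": {"goal", "match", "score"},
    "weather": {"rain", "temperature", "forecast"},
}

segments = [
    {"start": "00:00", "end": "01:30", "caption": "The forecast calls for rain tonight"},
    {"start": "01:30", "end": "04:00", "caption": "A late goal decided the match"},
]

for seg in segments:
    words = set(seg["caption"].lower().split())
    seg["categories"] = [c for c, kws in CATEGORY_KEYWORDS.items() if words & kws]

print([(s["start"], s["categories"]) for s in segments])
# [('00:00', ['weather']), ('01:30', ['sports'])]
```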

MXF-based Broadcast Metadata Authoring and Browsing (MXF 기반 방송용 메타데이터 저작 및 브라우징)

  • Lee Moon-Sik; Jung Byung-Hee; Park Sung-Choon; Oh Yeon-Hee
    • Journal of Broadcast Engineering / v.11 no.3 s.32 / pp.276-283 / 2006
  • This paper analyzes the metadata workflow from creation to browsing and discusses metadata authoring and browsing technology. Unlike typical multimedia authoring, broadcast metadata authoring means editing metadata in synchronization with video. So that other systems can make practical use of the results, the output is produced in XML or MXF (Material eXchange Format) based on a common metadata scheme. The MXF Browser, developed with broadcast metadata time-synchronized to the AV content in mind, provides not only metadata authoring but also advanced content browsing services such as summary playback and highlight browsing based on a metadata multi-track.
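
The sketch below illustrates, under broad assumptions, the time-synchronized metadata multi-track idea: each track holds timed entries aligned to the AV timeline, and a highlight playlist is derived from entries flagged on any track. The track names and the highlight flag are illustrative; the actual MXF structures are not reproduced here.

```python
from dataclasses import dataclass


@dataclass
class TimedEntry:
    start: float          # seconds from the start of the AV content
    end: float
    label: str
    highlight: bool = False


# Hypothetical metadata multi-track: several named tracks synchronized to one timeline.
metadata_tracks = {
    "shots":    [TimedEntry(0.0, 12.5, "opening"), TimedEntry(12.5, 47.0, "studio")],
    "keywords": [TimedEntry(30.0, 41.0, "breaking news", highlight=True),
                 TimedEntry(80.0, 95.0, "weather")],
}


def highlight_playlist(tracks):
    """Collect highlighted entries from all tracks, ordered by start time."""
    entries = [e for track in tracks.values() for e in track if e.highlight]
    return sorted((e.start, e.end, e.label) for e in entries)


print(highlight_playlist(metadata_tracks))   # [(30.0, 41.0, 'breaking news')]
```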

A Study on Implementation of XML-Based Information Retrieval System for Video Contents (XML 기반의 동영상콘텐츠 검색 시스템 설계 및 구현)

  • Kim, Yong; So, Min-Ho
    • Journal of the Korean Society for Information Management / v.26 no.4 / pp.113-128 / 2009
  • Generally, a user relies on briefly summarized video data and text information to search video content. To provide a fast and accurate search tool for video content, this study proposes a method for searching video clips that have been partitioned from the original video. To manage and control video content and its metadata, the proposed method creates XML-based structural information for a video and its metadata and saves this information in an XML database. With the saved information, when a user searches for video content, the results of the query are obtained by generating an XPath expression that carries the class structure information. Based on the proposed method, an information retrieval system for video clips was designed and implemented.
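
The XPath-based retrieval step can be illustrated with Python's built-in ElementTree, which supports a limited XPath subset (the child-value predicate used below requires Python 3.7 or later). The element structure and query are assumptions for illustration; the paper's class structure and XML database are not reproduced.

```python
import xml.etree.ElementTree as ET

xml_doc = """
<contents>
  <video title="Lecture 1">
    <clip id="c1" start="00:00" end="10:00"><keyword>metadata</keyword></clip>
    <clip id="c2" start="10:00" end="20:00"><keyword>indexing</keyword></clip>
  </video>
</contents>
"""

root = ET.fromstring(xml_doc)

# XPath-style query: clips whose <keyword> child equals "metadata".
hits = root.findall(".//clip[keyword='metadata']")
print([(c.get("id"), c.get("start")) for c in hits])   # [('c1', '00:00')]
```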

Connecting Online Video Clips to a TV Program: Watching Online Video Clips on a TV Screen with a Related Program (인터넷 비디오콘텐츠를 관련 방송프로그램과 함께 TV환경에서 시청하기 위한 기술 및 방법에 관한 연구)

  • Cho, Jae-Hoon
    • Journal of Broadcast Engineering / v.12 no.5 / pp.435-444 / 2007
  • In this paper, we present the concept of, and some methods for, watching online video clips related to a TV program on a television, a so-called lean-back medium, and we simulate the concept on a PC system. The key point of this research is to suggest a new service model for TV viewers and the TV industry that provides simple and easy ways to watch online video clips on a TV screen. The paper defines new metadata tags and an algorithm for the model and then shows a simple example using that metadata. Finally, it discusses the use of the model in the digital broadcasting environment and the issues that should be handled in future work.
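
As a purely hypothetical illustration of the service model above, the sketch below links online clips to the program currently on screen by matching a metadata tag; the tag name program_id and the sample URLs are invented for this example and are not the tags defined in the paper.

```python
# The program currently playing on the TV (illustrative identifiers only).
tv_program = {"program_id": "EBS-2007-0412", "title": "Science Tonight"}

# Online clips carrying metadata tags that may reference a broadcast program.
online_clips = [
    {"url": "http://example.com/clip1", "tags": {"program_id": "EBS-2007-0412"}},
    {"url": "http://example.com/clip2", "tags": {"program_id": "KBS-2007-0999"}},
]

# Select clips whose metadata references the program on screen.
related = [c["url"] for c in online_clips
           if c["tags"].get("program_id") == tv_program["program_id"]]
print(related)   # ['http://example.com/clip1']
```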

User-Created Content Recommendation Using Tag Information and Content Metadata

  • Rhie, Byung-Woon; Kim, Jong-Woo; Lee, Hong-Joo
    • Management Science and Financial Engineering / v.16 no.2 / pp.29-38 / 2010
  • As the Internet becomes more embedded in people's lives, Internet users draw on new applications to express themselves through user-created content (UCC). In addition, there is a noticeable shift from text-centered content, mainly posted on bulletin boards, to multimedia content such as images and videos on UCC web sites. These changes require a different way of making recommendations compared with traditional product or content recommendation on the Internet. This paper aims to design UCC recommendation methods that use user behavior data and content metadata such as tags and titles, and to compare the performance of the suggested methods. Real web log data from a major Korean video UCC site were used in the empirical experiments. The results show that a collaborative filtering technique based on the similarity of UCC users' preferences performs better than content-based recommendation methods that rely on tag information and content metadata.
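
The collaborative filtering approach that the study found most effective can be sketched as below: items a target user has not seen are scored by weighting the preferences of similar users with cosine similarity. The viewing matrix is toy data, not the study's web log data, and the exact similarity measure and neighborhood size used in the paper are assumptions here.

```python
from math import sqrt

# user -> {UCC item id -> implicit preference (e.g., number of views)}
views = {
    "u1": {"v1": 3, "v2": 1},
    "u2": {"v1": 2, "v3": 4},
    "u3": {"v2": 5, "v3": 1},
}


def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse preference vectors."""
    common = set(a) & set(b)
    num = sum(a[i] * b[i] for i in common)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0


def recommend(target: str, k: int = 2):
    """Score unseen items for the target user, weighted by neighbor similarity."""
    sims = {u: cosine(views[target], views[u]) for u in views if u != target}
    scores = {}
    for user, sim in sorted(sims.items(), key=lambda x: -x[1])[:k]:
        for item, pref in views[user].items():
            if item not in views[target]:
                scores[item] = scores.get(item, 0.0) + sim * pref
    return sorted(scores.items(), key=lambda x: -x[1])


print(recommend("u1"))   # v3 is recommended to u1 via the similar users u2 and u3
```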

AnoVid: A Deep Neural Network-based Tool for Video Annotation (AnoVid: 비디오 주석을 위한 심층 신경망 기반의 도구)

  • Hwang, Jisu; Kim, Incheol
    • Journal of Korea Multimedia Society / v.23 no.8 / pp.986-1005 / 2020
  • In this paper, we propose AnoVid, an automated video annotation tool based on deep neural networks that generates various metadata for each scene or shot in a long drama video containing rich elements. To this end, a novel metadata schema for drama video is designed. Based on this schema, the AnoVid annotation tool employs a total of six deep neural network models for object detection, place recognition, time zone recognition, person recognition, activity detection, and description generation. Using these models, AnoVid can generate rich video annotation data. In addition, AnoVid not only generates a JSON-type video annotation data file automatically but also provides various visualization facilities for checking the video content analysis results. Through experiments on a real drama video, "Misaeng", we show the practical effectiveness and performance of the proposed video annotation tool.
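
The JSON-type annotation file mentioned above might contain per-shot records along the lines of the sketch below, with one field for each of the six model outputs. The field names and values are assumptions for illustration; the actual AnoVid schema is the one defined in the paper.

```python
import json

# One hypothetical per-shot annotation record covering the six kinds of outputs:
# objects, place, time zone, persons, activities, and a generated description.
shot_annotation = {
    "video_id": "drama_ep01",
    "shot": {"index": 42, "start": "00:12:03.5", "end": "00:12:18.2"},
    "objects": [{"label": "desk", "bbox": [120, 80, 360, 300]}],
    "place": "office",
    "time_zone": "day",
    "persons": ["person_01", "person_02"],
    "activities": [{"label": "talking", "confidence": 0.91}],
    "description": "Two employees talk across a desk in the office.",
}

print(json.dumps(shot_annotation, ensure_ascii=False, indent=2))
```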

A Research on the Method of Automatic Metadata Generation of Video Media for Improvement of Video Recommendation Service (영상 추천 서비스의 개선을 위한 영상 미디어의 메타데이터 자동생성 방법에 대한 연구)

  • You, Yeon-Hwi; Park, Hyo-Gyeong; Yong, Sung-Jung; Moon, Il-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.281-283 / 2021
  • The companies most often cited for recommendation services in the domestic OTT (over-the-top media service) market are YouTube and Netflix. YouTube began personalized recommendation in earnest in 2016 by introducing a machine learning algorithm that records and uses users' viewing time. Netflix categorizes users by collecting information such as the videos a user selects, the time of day they watch, and the viewing device, and groups people with similar viewing patterns together; it records and uses this collected information along with the tag information attached to each video. In this paper, we propose a method to improve video media recommendation by automatically generating the video metadata that previously had to be written by hand.
