• Title/Summary/Keyword: video metadata

Search Results: 115

A Generation Method of Spatially Encoded Video Data for Geographic Information Systems

  • Joo, In-Hak;Hwang, Tae-Hyun;Choi, Kyoung-Ho;Jang, Byung-Tae
    • Proceedings of the KSRS Conference
    • /
    • 2003.11a
    • /
    • pp.801-803
    • /
    • 2003
  • In this paper, we present a method for generating and providing spatially encoded video data that can be effectively used by GIS applications. We collect the video data with a mobile mapping system called 4S-Van, which is equipped with GPS, INS, a CCD camera, and a DVR system. The information about spatial objects appearing in the video, such as the occupied region in each frame, attribute values, and geo-coordinates, is generated and encoded. We suggest methods that can generate such data for each frame in a semi-automatic manner. We adopt the standard MPEG-7 metadata format to represent the spatially encoded video data so that it can be used generally by GIS applications. The spatial and attribute information encoded into each video frame makes visual browsing between map and video possible. The generated video data can be provided to and applied in various GIS applications where both location and visual data are important.
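
As an illustration of the per-frame spatial encoding the abstract describes, the sketch below builds a simplified, MPEG-7-like description for one frame. The element names, attribute layout, and sample values are placeholders for this example, not the actual MPEG-7 schema or the 4S-Van pipeline.

```python
import xml.etree.ElementTree as ET

def encode_frame_metadata(frame_no, objects):
    """Build a simplified, MPEG-7-like description for one video frame.

    `objects` is a list of dicts holding an object's name, its occupied
    region in the frame (x, y, w, h in pixels), and its geo-coordinate
    (lat, lon). All element names here are illustrative placeholders.
    """
    frame = ET.Element("Frame", number=str(frame_no))
    for obj in objects:
        el = ET.SubElement(frame, "SpatialObject", name=obj["name"])
        x, y, w, h = obj["region"]
        ET.SubElement(el, "Region", x=str(x), y=str(y), w=str(w), h=str(h))
        lat, lon = obj["geo"]
        ET.SubElement(el, "GeoCoordinate", lat=str(lat), lon=str(lon))
    return ET.tostring(frame, encoding="unicode")

# Hypothetical spatial object captured by the mobile mapping system.
xml = encode_frame_metadata(1, [{"name": "building-7",
                                 "region": (120, 40, 64, 80),
                                 "geo": (37.5665, 126.9780)}])
```

Emitting one such fragment per frame is what would make the map-to-video visual browsing possible: a GIS client can look up which frames contain a given geo-coordinate, or which map object a clicked frame region refers to.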

Annotation Method based on Face Area for Efficient Interactive Video Authoring (효과적인 인터랙티브 비디오 저작을 위한 얼굴영역 기반의 어노테이션 방법)

  • Yoon, Ui Nyoung;Ga, Myeong Hyeon;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.83-98
    • /
    • 2015
  • Many TV viewers mainly use portal sites to retrieve information related to a broadcast while watching TV. However, finding the desired information takes a long time, because the internet currently presents too much information that is not required; this process cannot satisfy users who want to consume information immediately. Interactive video is being actively investigated to solve this problem. An interactive video provides clickable objects, areas, or hotspots to interact with users: when users click an object in the video, they can instantly see additional related information. Making an interactive video with an authoring tool involves three basic steps: (1) create an augmented object; (2) set the object's area and the time it is displayed on the video; (3) set an interactive action linking it to pages or a hyperlink. Users of existing authoring tools such as Popcorn Maker and Zentrick spend a lot of time on step (2). Tools like wireWAX reduce the time needed to set an object's location and display time by using a vision-based annotation method, but users must wait while objects are detected and tracked. It is therefore desirable to reduce the time spent on step (2) by combining the benefits of manual and vision-based annotation. This paper proposes a novel annotation method that lets the annotator annotate easily based on face areas, in two stages: a pre-processing step and an annotation step. Pre-processing is needed so that the system can detect shots for users who want to locate video content easily.
The pre-processing step is as follows: 1) extract shots from the video frames using color-histogram-based shot boundary detection; 2) cluster shots by similarity and align them into shot sequences; and 3) detect and track faces in every shot of each shot sequence and save the results into the shot sequence metadata. After pre-processing, the user annotates objects as follows: 1) the annotator selects a shot sequence and then a keyframe of a shot in that sequence; 2) the annotator annotates objects relative to the position of the actor's face on the selected keyframe, and the same objects are then annotated automatically through to the end of the shot sequence wherever a face area was detected; and 3) the user assigns additional information to the annotated objects. In addition, this paper designs a feedback model to compensate for defects that may occur after annotation: wrongly aligned shots, wrongly detected faces, and inaccurate object locations. Users can also interpolate the positions of objects deleted during feedback, and afterwards save the annotated object data to the interactive object metadata. Finally, the paper presents an interactive video authoring system implemented to verify the performance of the proposed annotation method. The experiments cover object annotation time and a user evaluation. On average, object annotation with the proposed tool was twice as fast as with existing authoring tools, although it occasionally took longer when wrong shots were detected during pre-processing.
The usefulness and convenience of the system were measured through a user evaluation aimed at users experienced with interactive video authoring systems: 19 recruited experts answered 11 questions drawn from the CSUQ (Computer System Usability Questionnaire), which was designed by IBM for evaluating systems. The evaluation showed that the proposed tool scored about 10% higher for authoring interactive video than the other interactive video authoring systems.
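
Step 1) of the pre-processing above, color-histogram-based shot boundary detection, can be sketched minimally as follows. The bin count, threshold, and toy frame representation (a flat list of RGB pixels) are assumptions for this example, not the paper's actual parameters.

```python
def color_histogram(frame, bins=4):
    """Count pixels per quantized RGB bin; `frame` is a list of (r, g, b)."""
    hist = [0] * bins ** 3
    for r, g, b in frame:
        idx = ((r * bins // 256) * bins + g * bins // 256) * bins + b * bins // 256
        hist[idx] += 1
    return hist

def detect_shot_boundaries(frames, threshold=0.5):
    """Mark frame i as a shot boundary when the normalized L1 distance
    between the histograms of frames i-1 and i exceeds `threshold`."""
    hists = [color_histogram(f) for f in frames]
    boundaries = []
    for i in range(1, len(hists)):
        total = sum(hists[i]) or 1
        diff = sum(abs(a - b) for a, b in zip(hists[i - 1], hists[i])) / (2 * total)
        if diff > threshold:
            boundaries.append(i)
    return boundaries
```

A sudden histogram change (e.g. an all-red frame followed by an all-blue one) is reported as a cut, while gradual or static content is not; this is also why the paper's feedback model is needed, since abrupt motion within one shot can trigger the same signal.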

Meta-tailed Caching for Transcoding Proxies (트랜스코딩 프록시를 위한 메타데이터 추가 캐슁)

  • Kang, Jai-Woong;Choi, Chang-Yeol
    • Journal of Industrial Technology
    • /
    • v.27 no.B
    • /
    • pp.185-192
    • /
    • 2007
  • A transcoding video proxy is necessary to support the various bandwidth requirements of mobile multimedia and to provide adapted video streams to mobile clients. Caching algorithms for the proxy reduce the network traffic between the content servers and the proxy. This paper proposes Meta-tailed caching for a transcoding proxy, which is efficient at lowering both network load and CPU load. Caching two different data types, transcoded video and metadata, provides a foundation for achieving a superior balance between network resources and computation resources at transcoding proxies. Experimental results show that Meta-tailed caching lowers CPU load by at least 10% and network load by at least 9% at a transcoding proxy.
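
The two-type caching idea can be illustrated with a toy model: a full hit on a cached transcoded clip costs neither network nor CPU, while a metadata hit still skips part of the transcoding work. The class below is a simplified stand-in for the paper's policy; the data layout and cost semantics are assumptions.

```python
class TranscodingProxyCache:
    """Toy model of a proxy caching both transcoded clips and metadata.

    A hit on (video_id, bitrate) returns the cached clip directly; a miss
    fetches the source from the content server (network cost) and
    transcodes it (CPU cost), guided by cached per-video metadata when
    available. Cost accounting itself is omitted for brevity.
    """
    def __init__(self):
        self.transcoded = {}   # (video_id, bitrate) -> transcoded clip
        self.metadata = {}     # video_id -> metadata reused across bitrates

    def fetch(self, video_id, bitrate, origin):
        key = (video_id, bitrate)
        if key in self.transcoded:               # full hit: no network, no CPU
            return self.transcoded[key], "hit"
        source = origin[video_id]                # network fetch from content server
        meta = self.metadata.setdefault(video_id, f"meta({video_id})")
        clip = f"{source}@{bitrate}kbps/{meta}"  # transcode (CPU), guided by metadata
        self.transcoded[key] = clip
        return clip, "miss"
```

Note how a request for the same video at a new bitrate is still a miss on the clip cache, but reuses the cached metadata; that separation is what lets the proxy trade network load against computation.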

Temporal Video Modeling of Cultural Video (교양비디오의 시간지원 비디오 모델링)

  • 강오형;이지현;고성현;김정은;오재철
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2004.05b
    • /
    • pp.439-442
    • /
    • 2004
  • Traditional database systems have used models that support operations and relationships based on simple intervals. Video data models are required to support the temporal paradigm, various object operations and temporal operations, and efficient retrieval and browsing. Because the video model is based on the object-oriented paradigm, I present an entire model structure for video data through the design of metadata, which serves as the logical schema of the video, the attributes and operations of objects, and inheritance and annotation. By using the temporal paradigm, through the definition of time points and time intervals in an object-oriented model, we can use video information more efficiently over time variation.

Implementation of Sports Video Clip Extraction Based on MobileNetV3 Transfer Learning (MobileNetV3 전이학습 기반 스포츠 비디오 클립 추출 구현)

  • YU, LI
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.17 no.5
    • /
    • pp.897-904
    • /
    • 2022
  • Sports video is a very important information resource. High-precision extraction of effective segments from sports video can better assist coaches in analyzing players' actions and enables users to appreciate a player's hitting action more intuitively. To address the shortcomings of current sports video clip extraction, such as strong subjectivity, large workload, and low efficiency, a classification method for sports video clips based on MobileNetV3 is proposed to save user time. Experiments evaluate the effectiveness of segment extraction: among the extracted segments, the effective proportion is 97.0%, indicating that the extraction results are good and can lay the foundation for building a subsequent badminton action metadata video dataset.
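
The classification step itself needs a fine-tuned MobileNetV3, but the downstream clip-extraction step it feeds can be sketched in a few lines: per-frame effective/ineffective labels are grouped into contiguous runs, and runs shorter than a minimum duration are dropped. The fps, minimum length, and 0/1 label encoding are assumptions for this illustration, not the paper's settings.

```python
def extract_clips(frame_labels, fps=30, min_len=0.5):
    """Group consecutive frames labeled effective (1) into clips.

    `frame_labels` would come from a per-frame classifier such as a
    fine-tuned MobileNetV3; here it is just a 0/1 list. Clips shorter
    than `min_len` seconds are dropped. Returns (start_sec, end_sec).
    """
    clips, start = [], None
    for i, label in enumerate(frame_labels + [0]):  # sentinel closes the last run
        if label and start is None:
            start = i
        elif not label and start is not None:
            if (i - start) / fps >= min_len:
                clips.append((start / fps, i / fps))
            start = None
    return clips
```

The minimum-length filter is one simple way to suppress isolated misclassified frames, which would otherwise show up as spurious sub-second clips.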

A Movie Recommendation Method based on Emotion Ontology (감정 온톨로지 기반의 영화 추천 기법)

  • Kim, Ok-Seob;Lee, Seok-Won
    • Journal of Korea Multimedia Society
    • /
    • v.18 no.9
    • /
    • pp.1068-1082
    • /
    • 2015
  • Due to the rapid advancement of mobile technology, smartphones are widely used in today's society, which makes it easier to retrieve video content through web and mobile services. However, retrieving particular video content based on users' specific preferences is not a trivial problem. Current movie recommendation systems are based on users' preference information but do not consider the emotional perspective of each movie, which leaves users' emotional requirements unsatisfied. To address both preferences and emotional requirements, this research proposes a movie recommendation technique that represents a movie's emotions and their associations. The proposed approach develops an emotion ontology that represents the relationships between emotions and the concepts that cause emotional effects. Based on the current movie metadata ontology, this research also developed a movie-emotion ontology that represents the metadata related to emotion. The proposed method recommends movies using the movie-emotion ontology, based on this emotion knowledge, so that users can obtain a list of movies matching both their preferences and their emotional requirements.
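
The core lookup can be illustrated with a heavily simplified stand-in for the ontology: a mapping from movie-metadata concepts to the emotions they evoke, queried by the emotion a user wants. Both the concept-to-emotion mapping and the movie records below are made-up examples, not the paper's ontology.

```python
def recommend(movies, emotion_ontology, wanted_emotion):
    """Return titles of movies whose metadata concepts evoke the wanted emotion.

    `emotion_ontology` maps concepts (genres, themes) to an emotion; a real
    ontology would model richer relations than this flat dictionary.
    """
    return [m["title"] for m in movies
            if any(emotion_ontology.get(c) == wanted_emotion
                   for c in m["concepts"])]

# Hypothetical ontology fragment and movie metadata.
ontology = {"reunion": "joy", "war": "sadness", "betrayal": "anger"}
movies = [{"title": "A", "concepts": ["war"]},
          {"title": "B", "concepts": ["reunion", "war"]}]
```

In the paper's setting the flat dictionary would be replaced by ontology reasoning over the movie-emotion ontology, so indirect concept-emotion associations could also contribute to the result.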

An Analysis of Multimedia Search Services Provided by Major Korean Search Portals (주요 포털들의 멀티미디어 검색 서비스 비교 분석)

  • Park, So-Yeon
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.44 no.4
    • /
    • pp.395-412
    • /
    • 2010
  • This study evaluates the multimedia search services provided by major Korean search portals: Naver, Nate, Daum, Yahoo-Korea, Paran, and Google-Korea. The services are evaluated in terms of the metadata of search results, search functionalities, searching methods, other functionalities, and display options. Every portal offers image and video searching, whereas only Naver, Nate, and Daum offer music searching. Advanced searching methods and functions are mostly developed and supported for image and video searching rather than music searching. Naver, Nate, and Daum support various searching functions that search portals abroad have not developed, and Google-Korea supports advanced searching functions. The portals provide only a limited amount of metadata in search results. This study could contribute to the development and improvement of portals' multimedia search services.

Temporal-based Video Retrieval System (시간기반 비디오 검색 시스템)

  • Lee, Ji-Hyun;Kang, Oh-Hyung;Na, Do-Won;Lee, Yang-Won
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • v.9 no.2
    • /
    • pp.631-634
    • /
    • 2005
  • Traditional database systems have used models that support operations and relationships based on simple intervals. Video data models are required to support the temporal paradigm, various object operations and temporal operations, and efficient retrieval and browsing. Because the video model is based on the object-oriented paradigm, I present an entire model structure for video data through the design of metadata, which serves as the logical schema of the video, the attributes and operations of objects, and inheritance and annotation. By using the temporal paradigm, through the definition of time points and time intervals in an object-oriented model, we can use video information more efficiently over time variation.
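
The time-point and time-interval definitions underlying such a model boil down to a few predicates. The sketch below uses half-open intervals over frame numbers as one possible convention (an assumption for this illustration, not the paper's exact definitions).

```python
def overlaps(a, b):
    """True if half-open intervals a = (start, end) and b share any time."""
    return a[0] < b[1] and b[0] < a[1]

def contains(a, point):
    """True if a time point falls inside half-open interval a."""
    return a[0] <= point < a[1]

def during(a, b):
    """Allen's 'during' relation: interval a lies strictly inside b."""
    return b[0] < a[0] and a[1] < b[1]
```

Temporal queries such as "find the scenes that overlap this annotation" or "which annotations are active at frame t" reduce to these predicates evaluated over the model's metadata.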

Design of video ontology for semantic web service (시맨틱 웹 서비스를 위한 동영상 온톨로지 설계)

  • Lee, Young-seok;Youn, Sung-dae
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2009.05a
    • /
    • pp.195-198
    • /
    • 2009
  • Recently, research on building the semantic web for exchanging information and knowledge has been active. To use video content as knowledge on the semantic web, semantic-based retrieval must come first; at present, retrieval based on matching metadata against keywords is commonly used. In this paper, I propose an ontology design that enlarges user participation and adds usefulness values and history information, which will facilitate semantic retrieval as well as the use of video content through collective intelligence. The proposed ontology schema allows semantic-based retrieval of video content on the semantic web to achieve higher recall than current retrieval methods, and it enables various video contents to be used as knowledge.

A Method of Sensory Effect Metadata Generation Based on User Feedback Information (사용자 피드백 정보 기반 실감효과 메타데이터 생성 방법)

  • Kim, Cheol Min;Heo, Yong Soo;Kim, Eun Seok
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.7
    • /
    • pp.802-812
    • /
    • 2018
  • Recently, there have been active studies on realistic media that try to provide immersion and presence by adding sensory effects to video. Because existing sensory effects for realistic media are produced by experts and automation programs, there is a limit to how well they can close the satisfaction gap arising from the diversity of viewers. In this paper, we propose a method to improve satisfaction with sensory effects by collecting and analyzing users' response information and applying the result to the attributes of the sensory effects. The proposed method produces Sensory Effect Metadata effectively by analyzing users' biometric data and subjective satisfaction with the realistic media experience, measuring weights to revise the effects, and applying them to the sensory effect attributes. Experimental results show that the proposed method improves satisfaction with realistic content by generating revised Sensory Effect Metadata in response to users' experiential feedback.
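
The weight-revision idea can be sketched as nudging a sensory effect attribute toward aggregated feedback. The scoring scale, learning rate, and update rule below are illustrative stand-ins for the paper's biometric and subjective analysis, not its actual formula.

```python
def revise_intensity(base_intensity, feedback_scores, learning_rate=0.2):
    """Nudge a sensory effect's intensity attribute toward user feedback.

    `feedback_scores` are satisfaction ratings in [0, 1], with 0.5 neutral;
    scores above neutral push the intensity up, below neutral push it down.
    The result is clamped to the valid attribute range [0, 1].
    """
    if not feedback_scores:
        return base_intensity
    mean = sum(feedback_scores) / len(feedback_scores)
    adjusted = base_intensity * (1 + learning_rate * (mean - 0.5) * 2)
    return max(0.0, min(1.0, adjusted))
```

Writing the adjusted value back into the effect's attribute field is what produces the revised Sensory Effect Metadata the abstract refers to; neutral or absent feedback leaves the authored intensity unchanged.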