• Title/Summary/Keyword: Video Metadata


Fake News Detection on Social Media using Video Information: Focused on YouTube (영상정보를 활용한 소셜 미디어상에서의 가짜 뉴스 탐지: 유튜브를 중심으로)

  • Chang, Yoon Ho;Choi, Byoung Gu
    • The Journal of Information Systems, v.32 no.2, pp.87-108, 2023
  • Purpose: The main purpose of this study is to improve fake news detection performance by using video information, overcoming the limitations of extant text- and image-oriented studies that do not reflect the latest news consumption trends. Design/methodology/approach: This study collected video clips and related information, including news scripts, speakers' facial expressions, and video metadata, from YouTube to develop a fake news detection model. Based on the collected data, seven combinations of the related information (e.g. scripts alone; video metadata alone; facial expressions alone; scripts and video metadata; scripts and facial expressions; and scripts, video metadata, and facial expressions) were used as input for training and evaluation. The input data were analyzed using six models, such as support vector machine and deep neural network, and the area under the curve (AUC) was used to evaluate the performance of the classification models. Findings: The results showed that the AUC and accuracy values of the three-feature combination (scripts, video metadata, and facial expressions) were the highest in the logistic regression, naïve Bayes, and deep neural network models. This implies that fake news detection can be improved by using video information (video metadata and facial expressions). The sample size of this study was relatively small, so the generalizability of the results would be enhanced with a larger sample.
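
The abstract's evaluation metric, AUC, can be illustrated with a minimal, self-contained sketch (toy labels and scores, not the paper's data or models):

```python
def roc_auc(labels, scores):
    """ROC AUC via the rank-sum (Mann-Whitney U) formulation.
    labels: 1 for fake, 0 for real; scores: model confidence of 'fake'."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    if not pos or not neg:
        raise ValueError("need both classes")
    # Count positive/negative pairs where the positive example outranks
    # the negative one; ties contribute half a win.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy scores from a hypothetical scripts+metadata+face model
labels = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.3, 0.2]
print(roc_auc(labels, scores))  # 0.888... (8 of 9 pairs ranked correctly)
```

An AUC of 1.0 means every fake video is scored above every real one; 0.5 is chance-level ranking.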

Toward a Structural and Semantic Metadata Framework for Efficient Browsing and Searching of Web Videos

  • Kim, Hyun-Hee
    • Journal of the Korean Society for Library and Information Science, v.51 no.1, pp.227-243, 2017
  • This study proposes a structural and semantic framework for the characterization of events and segments in Web videos that permits content-based searches and dynamic video summarization. Although MPEG-7 supports structural and semantic descriptions of multimedia, it is not currently suitable for describing multimedia content on the Web. The proposed metadata framework, designed with Web environments in mind, therefore provides a thorough yet simple way to describe Web video content. Specifically, the framework was constructed on the basis of Chatman's narrative theory, three multimedia metadata formats (PBCore, MPEG-7, and TV-Anytime), and social metadata. It consists of event information, eventGroup information, segment information, and video (program) information. The study also discusses how to automatically extract structural and semantic metadata elements from Web videos.
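
As a rough illustration of the four-part framework (event, eventGroup, segment, and video information), a hypothetical instance might look like this; the element names and values are assumptions for the sketch, not the paper's exact schema:

```python
# Hypothetical instance of the four-part metadata framework.
web_video = {
    "video": {"title": "City Festival 2016", "genre": "documentary"},
    "eventGroups": [
        {
            "label": "opening ceremony",
            "events": [
                {
                    # Chatman-style narrative elements: who/what/where
                    "what": "speech",
                    "who": "mayor",
                    "where": "city hall",
                    "segments": [
                        {"start": "00:00:10", "end": "00:02:30",
                         "tags": ["crowd", "applause"]},  # social metadata
                    ],
                },
            ],
        },
    ],
}

def all_segments(video):
    """Flatten the hierarchy to support segment-level search/summarization."""
    for group in video["eventGroups"]:
        for event in group["events"]:
            yield from event["segments"]

print(len(list(all_segments(web_video))))  # 1
```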

Representation of Video Data using Dublin Core Model (더블린 코아 모델을 이용한 비디오 데이터의 표현)

  • Lee, Sun-Hui;Kim, Sang-Ho;Sin, Jeong-Hun;Kim, Gil-Jun;Ryu, Geun-Ho
    • The KIPS Transactions: Part D, v.9D no.4, pp.531-542, 2002
  • Because most metadata have been handled within restricted applications, the same video data may end up being described by different metadata, even though the same video data should be supported by the same metadata. To solve this problem, this paper extends the Dublin Core elements. The proposed video data representation is managed by the extended metadata of the Dublin Core model, using information about the structure, content, and manipulation of video data. The thirteen temporal relationship operators are reduced to six by means of a dummy-shot temporal transformation relationship. The six reduced operators, obtained by excluding reverse temporal relationships, not only maintain consistency of representation between metadata and video data but also transform n-ary temporal relationships into binary relationships on shots. We show that the proposed metadata model can be applied to representation and retrieval across various applications with an equivalent structure.
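
The reduction of temporal operators can be related to Allen's interval algebra, where six of the thirteen relations are inverses of the other six and "equals" is self-inverse. A minimal sketch of normalizing away inverse relations (the paper's dummy-shot transformation itself is not reproduced here):

```python
# Allen's 13 interval relations: 6 base relations, their 6 inverses,
# and the self-inverse "equals". Excluding inverses leaves the base set.
INVERSE = {
    "before": "after", "meets": "met-by", "overlaps": "overlapped-by",
    "starts": "started-by", "during": "contains", "finishes": "finished-by",
    "equals": "equals",
}
REVERSE = {v: k for k, v in INVERSE.items()}

def normalize(a, relation, b):
    """Rewrite (a, relation, b) so only non-inverse relations remain,
    swapping the operands when the relation was an inverse."""
    if relation in INVERSE:          # already a base relation (or equals)
        return (a, relation, b)
    return (b, REVERSE[relation], a)

print(normalize("shot1", "after", "shot2"))   # ('shot2', 'before', 'shot1')
```

Normalizing this way keeps one canonical form per pair of shots, which is what maintains a consistent representation between the metadata and the video data.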

Design of a Video Metadata Schema and Implementation of an Authoring Tool for User Edited Contents Creation (User Edited Contents 생성을 위한 동영상 메타데이터 스키마 설계 및 저작 도구 구현)

  • Song, Insun;Nang, Jongho
    • Journal of KIISE, v.42 no.3, pp.413-418, 2015
  • In this paper, we design a new video metadata schema for searching video segments to create UEC (User Edited Contents). The proposed schema employs hierarchically structured units of 'Title-Event-Place(Scene)-Shot' and defines the fields of semantic information in structured form for each segment unit. Because this schema was defined by analyzing the structure of existing UECs and by experimenting with tagging and searching video segment units for UEC creation, it helps users search for useful video segments more easily than MPEG-7 MDS (Multimedia Description Scheme), the general-purpose international standard for video metadata schemas.
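
A minimal sketch of the 'Title-Event-Place(Scene)-Shot' hierarchy, with illustrative field names that are assumptions rather than the paper's exact schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Shot:
    start_sec: float
    end_sec: float
    keywords: List[str] = field(default_factory=list)  # semantic tags

@dataclass
class Scene:                 # the 'Place(Scene)' level
    place: str
    shots: List[Shot] = field(default_factory=list)

@dataclass
class Event:
    description: str
    scenes: List[Scene] = field(default_factory=list)

@dataclass
class Title:
    name: str
    events: List[Event] = field(default_factory=list)

    def find_shots(self, keyword):
        """Segment search used when assembling a UEC from tagged shots."""
        return [shot for ev in self.events for sc in ev.scenes
                for shot in sc.shots if keyword in shot.keywords]

video = Title("sample match", [Event("goal", [Scene("stadium",
        [Shot(10.0, 15.5, ["goal", "cheer"])])])])
print(len(video.find_shots("goal")))  # 1
```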

Partial video downloading scheme using metadata (메타 정보를 이용한 부분 분할동영상 다운로딩 구현)

  • 최형석;최만석;설상훈;김혁만
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2001.11b, pp.233-236, 2001
  • Due to current limitations on network bandwidth, it is difficult to get video files of interest from a server by downloading or streaming. To solve this problem, we propose a scheme for generating a virtual single video on the client side by downloading selected video segments and the corresponding metadata from the server. Our system is based on the MPEG-7 multimedia metadata standard. An experimental system demonstrates the feasibility of our approach.
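
The idea of assembling a virtual single video from selected segments can be sketched as follows; byte-range metadata stands in for the MPEG-7 descriptors actually used in the paper:

```python
import io

def assemble(source, segments):
    """Concatenate the listed (offset, length) byte ranges into one
    stream, producing a 'virtual single video' on the client side."""
    out = io.BytesIO()
    for offset, length in segments:
        source.seek(offset)            # fetch only the wanted range
        out.write(source.read(length))
    return out.getvalue()

full = io.BytesIO(b"AAAABBBBCCCCDDDD")   # stand-in for the server's file
meta = [(0, 4), (8, 4)]                  # keep segments A and C only
print(assemble(full, meta))              # b'AAAACCCC'
```

In practice the ranges would come from segment metadata downloaded separately, and the fetches would be HTTP range requests rather than local seeks.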


Extracting Mood Metadata through Sound Effects of Video (영상의 효과음을 통한 분위기 메타데이터 추출)

  • You, Yeon-Hwi;Park, Hyo-Gyeong;Yong, Sung-Jung;Lee, Seo-Young;Moon, Il-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2022.05a, pp.453-455, 2022
  • Metadata is structured data that describes the attributes and features of other data. Video metadata, in particular, refers to data extracted from the information constituting a video to enable accurate content-based search. Recently, as the number of users consuming video content has increased, the number of OTT providers has also grown, and metadata has become more important for OTT providers to recommend large amounts of video content to individual users and to support appropriate search. In this paper, we study a method of automatically extracting metadata for mood attributes through the sound effects of videos. To classify a video's sound effects and generate metadata about mood attributes, we propose building a terminology dictionary for moods and extracting the information through supervised learning.
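
The final step of the proposed pipeline, mapping classified sound effects to mood terms via a terminology dictionary, might be sketched like this; the labels, moods, and dictionary entries are illustrative assumptions, and the supervised classifier itself is not reproduced:

```python
# Hypothetical terminology dictionary: sound-effect label -> mood term.
MOOD_DICT = {
    "thunder": "ominous",
    "laughter": "cheerful",
    "siren": "tense",
    "birdsong": "peaceful",
}

def mood_metadata(effect_labels):
    """Aggregate per-effect moods into a deduplicated metadata list,
    preserving the order in which moods first appear."""
    moods = []
    for label in effect_labels:        # labels assumed to come from a classifier
        mood = MOOD_DICT.get(label)
        if mood and mood not in moods:
            moods.append(mood)
    return moods

print(mood_metadata(["thunder", "siren", "thunder"]))  # ['ominous', 'tense']
```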


Hybrid Video Information System Supporting Content-based Retrieval and Similarity Retrieval (비디오의 의미검색과 유사성검색을 위한 통합비디오정보시스템)

  • Yun, Mi-Hui;Yun, Yong-Ik;Kim, Gyo-Jeong
    • The Transactions of the Korea Information Processing Society, v.6 no.8, pp.2031-2041, 1999
  • In this paper, we present HVIS (Hybrid Video Information System), which supports the semantic retrieval needs of various users by integrating feature-based retrieval and annotation-based retrieval over unformatted, massive video data. HVIS divides a video into video document, sequence, scene, and object units to model the metadata, and proposes the Two-layered Hybrid Object-oriented Metadata Model (THOMM), composed of a raw-data layer for the physical video stream and a metadata layer supporting annotation-based, content-based, and similarity retrieval. Based on this model, we present a video query language that makes annotation-based, content-based, and similarity queries possible, together with a Video Query Processor and its query processing algorithm. In particular, we present a similarity expression that represents the degree of similarity while taking user interest into account. The proposed system is implemented with Visual C++, ActiveX, and ORACLE.


Automatic Generation of Video Metadata for the Super-personalized Recommendation of Media

  • Yong, Sung Jung;Park, Hyo Gyeong;You, Yeon Hwi;Moon, Il-Young
    • Journal of Information and Communication Convergence Engineering, v.20 no.4, pp.288-294, 2022
  • The media content market has been growing as various types of content are mass-produced, owing to the recent proliferation of the Internet and digital media. In addition, platforms that provide personalized services for content consumption are emerging and competing to recommend personalized content. Existing platforms use a method in which a user directly inputs video metadata; consequently, significant time and cost are consumed in processing large amounts of data. In this study, keyframes (based on the YCbCr color model) and audio spectra were extracted from movie trailers for the automatic generation of metadata. The extracted audio spectra and image keyframes were used as training data for genre recognition with deep learning. Deep learning was applied to determine genre among the video metadata, and suggestions for utilization were proposed. A system that automatically generates metadata, built on the results of this study, will be helpful for research on recommendation systems for media super-personalization.
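
The keyframe-selection idea can be sketched with a toy luma-difference detector (BT.601 YCbCr luma coefficients; the threshold and frame data are assumptions, and the paper's actual extraction method may differ):

```python
def luma(rgb_pixels):
    """Mean Y (luma) of a frame, BT.601: Y = 0.299R + 0.587G + 0.114B."""
    ys = [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in rgb_pixels]
    return sum(ys) / len(ys)

def keyframes(frames, threshold=30.0):
    """Indices where the mean luma jumps by more than `threshold`."""
    picked, prev = [0], luma(frames[0])      # always keep the first frame
    for i, frame in enumerate(frames[1:], start=1):
        y = luma(frame)
        if abs(y - prev) > threshold:
            picked.append(i)
        prev = y
    return picked

dark = [(10, 10, 10)] * 4      # a 4-pixel "frame" of dark gray
bright = [(200, 200, 200)] * 4
print(keyframes([dark, dark, bright]))  # [0, 2]
```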

A Study on COP-Transformation Based Metadata Security Scheme for Privacy Protection in Intelligent Video Surveillance (지능형 영상 감시 환경에서의 개인정보보호를 위한 COP-변환 기반 메타데이터 보안 기법 연구)

  • Lee, Donghyeok;Park, Namje
    • Journal of the Korea Institute of Information Security & Cryptology, v.28 no.2, pp.417-428, 2018
  • The intelligent video surveillance environment is a system that extracts various information about video objects and enables automated processing through the analysis of video data collected from CCTV. However, since privacy exposure can occur in the process of intelligent video surveillance, security measures are necessary. In particular, video metadata is highly vulnerable because it can include various kinds of personal information analyzed on the basis of big data. In this paper, we propose a COP-Transformation scheme to protect video metadata. The proposed scheme greatly enhances security and efficiency in processing video metadata.

Design and Implementation of A Video Information Management System for Digital Libraries (디지털 도서관을 위한 동영상 정보 관리 시스템의 설계 및 구현)

  • 김현주;권재길;정재희;김인홍;강현석;배종민
    • Journal of Korea Multimedia Society, v.1 no.2, pp.131-141, 1998
  • Video data in multimedia documents consist of large amounts of irregular data, including audio-visual, spatio-temporal, and semantic information. In general, it is difficult to grasp the exact meaning of such video information because video data apparently consist of meaningless symbols and numerics. To relieve these difficulties, it is necessary to develop an integrated manager for the complex structures of video data and to provide users of video digital libraries with easy, systematic access mechanisms to video information. This paper proposes a generic integrated video information model (GIVIM), based on an extended Dublin Core metadata system, to effectively store and retrieve video documents in digital libraries. The GIVIM is an integrated model of a video metadata model (VMM) and a video architecture information model (VAIM). We also present the design and implementation of a video document management system (VDMS) based on the GIVIM.
