• Title/Summary/Keyword: Video Content Analysis

Search Results: 355

Video Content Manipulation Using 3D Analysis for MPEG-4

  • Sull, Sanghoon
    • Journal of Broadcast Engineering
    • /
    • v.2 no.2
    • /
    • pp.125-135
    • /
    • 1997
  • This paper is concerned with realistic manipulation of content in video sequences, one of the content-based functionalities of the MPEG-4 Visual standard. We present an approach to synthesizing video sequences by using the intermediate outputs of three-dimensional (3D) motion and depth analysis. For concreteness, we focus on video showing the 3D motion of an observer relative to a scene containing planar runways (or roads). We first present a simple runway (or road) model. Then, we describe a method of identifying the runway (or road) boundary in the image using the Point of Heading Direction (PHD), which is defined as the image of the ray along which the camera moves. The 3D motion of the camera is obtained from one of the existing 3D analysis methods. A video sequence containing a runway is then manipulated by (i) coloring the scene part above a vanishing line, say blue, to show sky, (ii) filling in the occluded scene parts, and (iii) overlaying the identified runway edges and placing yellow disks on them to simulate lights. Experimental results for a real video sequence are presented.
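Manipulation step (i) above can be illustrated with a minimal sketch: recolor everything above a known vanishing line to simulate sky. The frame representation (rows of RGB tuples) and the given vanishing-line row are assumptions for illustration, not the paper's actual data structures or 3D analysis.

```python
# Sketch of step (i): recolor the scene above the vanishing line.
SKY_BLUE = (135, 206, 235)

def color_sky(frame, vanishing_row, sky=SKY_BLUE):
    """Return a copy of `frame` with every pixel above the
    vanishing line replaced by the sky color."""
    return [
        [sky if y < vanishing_row else px for px in row]
        for y, row in enumerate(frame)
    ]

# A tiny 4x3 gray frame with the vanishing line at row 2.
frame = [[(128, 128, 128)] * 3 for _ in range(4)]
out = color_sky(frame, 2)
```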


Design and Implementation of Video File Structure Analysis Tool for Detecting Manipulated Video Contents

  • Choi, Yun-Seok
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.10 no.3
    • /
    • pp.128-135
    • /
    • 2018
  • Various video recording devices, such as car black boxes and CCTV cameras, are in widespread use, and video contents serve as evidence of traffic accidents and crime scenes. To verify the integrity of video content, there have been various studies on manipulated video content analysis. Among them, studies based on the analysis of video file structure and its variables need a tool that can analyze the file structure and extract the attributes of interest. In this paper, we propose the design and implementation of an analysis tool that visualizes video file structure and its attributes. The proposed tool uses a model that reflects the commonality of various video container formats, so it can analyze video structure regardless of the video file type. The tool also specifies the file structure properties of interest in XML, so target properties can be changed easily without modifying the tool.
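The commonality across container formats that the abstract relies on can be sketched with a minimal parser for the size/type box layout shared by ISO base media file formats (MP4, MOV). This is an illustrative sketch of that shared structure, not the proposed tool or its XML property mechanism.

```python
import struct

def parse_boxes(data, offset=0, end=None):
    """Walk the size/type box structure shared by ISO-BMFF
    containers (MP4, MOV, ...) and yield (type, payload) pairs."""
    end = len(data) if end is None else end
    while offset + 8 <= end:
        size, box_type = struct.unpack_from(">I4s", data, offset)
        if size < 8:  # malformed or 64-bit size; stop in this sketch
            break
        yield box_type.decode("ascii"), data[offset + 8 : offset + size]
        offset += size

# Synthetic two-box stream: an 'ftyp' box and an empty 'moov' box.
blob = struct.pack(">I4s4s", 12, b"ftyp", b"isom") + struct.pack(">I4s", 8, b"moov")
boxes = list(parse_boxes(blob))
```

A structure-analysis tool can hang format-specific attribute extraction off such a generic walk, which is why one model can cover many container types.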

Video Learning Enhances Financial Literacy: A Systematic Review Analysis of the Impact on Video Content Distribution

  • Yin Yin KHOO;Mohamad Rohieszan RAMDAN;Rohaila YUSOF;Chooi Yi WEI
    • Journal of Distribution Science
    • /
    • v.21 no.9
    • /
    • pp.43-53
    • /
    • 2023
  • Purpose: This study aims to examine the demographic similarities and differences in objectives, methodology, and findings of previous studies on gaining financial literacy through videos. This study employs a systematic review design. Research design, data and methodology: Based on the content analysis method, 15 articles were chosen from Scopus and Science Direct covering 2015-2020. After formulating the research questions, the paper identification process, screening, eligibility, and quality appraisal are discussed in the methodology. The keywords for the advanced search included "Financial literacy," "Financial Education," and "Video". Results: The results of this study indicate the effectiveness of learning financial literacy through videos. Significant results were obtained when students interacted with the video content distribution. The findings provide an overview and lead to a better understanding of the use of video in financial literacy. Conclusions: This study is important as a guide for educators in future research and practice planning, and a systematic review on this topic fills a research gap. Video learning is active learning involving student-centered activities that help students engage with financial literacy. By conducting a systematic review, researchers and readers may also understand how an individual's financial literacy may change after financial education.

A Study on System for Analyzing Story of Cinematographic work Based on Estimating Tension of User (감성 상태 기반의 영상 저작물 스토리 분석 시스템 및 분석 방법 개발에 관한 연구)

  • Woo, Jeong-gueon
    • Journal of Engineering Education Research
    • /
    • v.18 no.6
    • /
    • pp.64-69
    • /
    • 2015
  • A video-work story analysis system based on emotional state measurement includes a content provision unit, which provides the story content of a video work; a display unit, which displays the content provided by the content provision unit; an emotional state measurement unit, which measures the tense-relaxed emotional state of a viewer watching the displayed story content; a story pattern analysis unit, which analyzes the measured tense-relaxed emotional state according to each scene in the story content; and a story pattern display unit, which prints out the analysis result or displays it as an image. The emotional state measurement unit measures a tense or relaxed emotional state through one or more of brainwave analysis, vital sign analysis, or ocular state analysis. With this system, a writer can obtain support for further scenario revision, and an investor can obtain support for decision-making. Furthermore, the system and analysis method may extract particular patterns from changes in a viewer's emotional state, compile statistics, and analyze the correlation between a story and an emotional state.

Automatic Superimposed Text Localization from Video Using Temporal Information

  • Jung, Cheol-Kon;Kim, Joong-Kyu
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.9C
    • /
    • pp.834-839
    • /
    • 2007
  • Superimposed text in video provides important semantic clues for content analysis. In this paper, we present a new and fast method for localizing superimposed text in video segments. We detect superimposed text by using the temporal information contained in the video. To detect it quickly, we minimize the candidate region for text localization by using the difference between consecutive frames. Experimental results demonstrate the good performance of the new superimposed text localization algorithm.
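The temporal cue the abstract uses — superimposed text stays stable across frames while the scene behind it changes — can be sketched as a frame-differencing stability mask. The nested-list grayscale frames and the threshold are illustrative assumptions, not the authors' implementation.

```python
def stable_pixel_mask(frames, max_diff=2):
    """Mark pixels whose intensity stays nearly constant across all
    consecutive frame pairs; superimposed text is temporally stable
    while the moving scene behind it is not."""
    h, w = len(frames[0]), len(frames[0][0])
    mask = [[True] * w for _ in range(h)]
    for prev, cur in zip(frames, frames[1:]):
        for y in range(h):
            for x in range(w):
                if abs(cur[y][x] - prev[y][x]) > max_diff:
                    mask[y][x] = False
    return mask

# Three 2x3 grayscale frames: column 0 is static (a text candidate),
# the other columns flicker like a moving background.
frames = [
    [[200, 10, 20], [200, 30, 40]],
    [[200, 90, 50], [200, 70, 80]],
    [[200, 15, 95], [200, 60, 25]],
]
mask = stable_pixel_mask(frames)
```

Restricting the (slower) text-localization step to the `True` region of such a mask is one way the candidate area can be minimized.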

A Study on Video Content Application Based on Mobile Device Platform in China (중국의 Mobile Device Platform 기반 영상콘텐츠 Application 연구)

  • Shi, Yu;Chung, Jean-Hun
    • Journal of Digital Convergence
    • /
    • v.17 no.10
    • /
    • pp.433-438
    • /
    • 2019
  • In this paper, the writer analyzes basic video content applications on the mobile device platform in China and studies future development schemes. According to the survey, as of 2019, 78% of China's total population are Internet service and application software users. Independently developed video content application services on the mobile device platform have accumulated 640 million video application users since 2013, and 65% of users install and use more than two video content applications. In China, TikTok, Kuai Shou, MeiPai, and other video content applications not only offer a simple user interface but also let users shoot video content directly. These production functions differ from those of YouTube, the well-known video platform in the United States. In the video platform market, the core competitiveness is content creation. In the future, the integration of VR, AR, and other video content is expected to further activate the video platform market.

Segmentation of Objects of Interest for Video Content Analysis (동영상 내용 분석을 위한 관심 객체 추출)

  • Park, So-Jung;Kim, Min-Hwan
    • Journal of Korea Multimedia Society
    • /
    • v.10 no.8
    • /
    • pp.967-980
    • /
    • 2007
  • Video objects of interest play an important role in representing video content and are useful for improving the performance of video retrieval and compression. An object of interest may be a main object that describes the content of a video shot, or a core object that a video producer wants to highlight in the shot. An object that strongly attracts the eye is not necessarily an object of interest, and a non-moving object may be an object of interest as well as a moving one. However, it is not easy to define an object of interest clearly, because a procedural description of human interest is difficult. In this paper, a set of four filtering conditions for extracting moving objects of interest is suggested, defined by considering the variation of location, size, and movement pattern of moving objects in a video shot. Non-moving objects of interest are defined by another set of four extracting conditions related to the saliency of color/texture, location, size, and occurrence frequency of static objects in a video shot. In a test on 50 video shots, the segmentation method based on the two sets of conditions extracted the moving and non-moving objects of interest chosen manually with an accuracy of 84%.


Story-based Information Retrieval (스토리 기반의 정보 검색 연구)

  • You, Eun-Soon;Park, Seung-Bo
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.81-96
    • /
    • 2013
  • Video information retrieval has become a very important issue because of the explosive increase in video data from Web content development. Meanwhile, content-based video analysis using visual features has been the main source for video information retrieval and browsing. Content in video can be represented with content-based analysis techniques, which can extract various features from audio-visual data such as frames, shots, colors, texture, or shape. Moreover, similarity between videos can be measured through content-based analysis. However, a movie, one of the typical types of video data, is organized by story as well as audio-visual data. When content-based video analysis using only low-level audio-visual data is applied to movie information retrieval, this causes a semantic gap between the significant information recognized by people and the information resulting from content-based analysis. The reason for this semantic gap is that the story line of a movie is high-level information, with relationships in the content that change as the movie progresses. Information retrieval related to the story line of a movie cannot be executed by content-based analysis techniques alone. A formal model is needed that can determine relationships among movie contents, or track meaning changes, in order to accurately retrieve the story information. Recently, story-based video analysis techniques using a social network concept have emerged for story information retrieval. These approaches represent a story by using the relationships between characters in a movie, but they have problems. First, they do not express dynamic changes in relationships between characters as the story develops. Second, they miss profound information, such as emotions indicating the identities and psychological states of the characters. Emotion is essential to understanding a character's motivation, conflict, and resolution. Third, they do not take account of the events and background that contribute to the story. As a result, this paper reviews the importance and weaknesses of previous video analysis methods, ranging from content-based approaches to story analysis based on social networks. We also suggest necessary elements, such as character, background, and events, based on narrative structures introduced in the literature. First, we extract characters' emotional words from the script of the movie Pretty Woman by using the hierarchical attributes of WordNet, an extensive English thesaurus. WordNet offers relationships between words (e.g., synonyms, hypernyms, hyponyms, antonyms). We present a method to visualize the emotional pattern of a character over time. Second, a character's inner nature must be predetermined in order to model a character arc that can depict the character's growth and development. To this end, we analyze the amount of each character's dialogue in the script and track the character's inner nature using social network concepts such as in-degree (incoming links) and out-degree (outgoing links). Additionally, we propose a method that can track a character's inner nature by tracing indices such as the degree, in-degree, and out-degree of the character network as the movie progresses. Finally, the spatial background where characters meet and where events take place is an important element in the story. We take advantage of the movie script to extract significant spatial backgrounds and suggest a scene map describing spatial arrangements and distances in the movie. Important places where main characters first meet or where they stay for long periods of time can be extracted through this scene map. In view of the aforementioned three elements (character, event, background), we extract a variety of information related to the story and evaluate the performance of the proposed method. We can track the extracted story information over time and detect changes in a character's emotion or inner nature, spatial movement, and the conflicts and resolutions in the story.
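The in-degree/out-degree tracking described above can be sketched as a character network that accumulates dialogue links scene by scene. The (speaker, addressee) pairs and character names below are hypothetical illustrations, not data from the paper.

```python
from collections import defaultdict

def degree_over_time(dialogues):
    """Track the in-degree and out-degree of each character as
    (speaker, addressee) dialogue pairs accumulate over the movie."""
    out_deg, in_deg, history = defaultdict(int), defaultdict(int), []
    for speaker, addressee in dialogues:
        out_deg[speaker] += 1   # speaker sends an outgoing link
        in_deg[addressee] += 1  # addressee receives an incoming link
        history.append((dict(out_deg), dict(in_deg)))
    return history

# Hypothetical exchanges between two characters.
dialogues = [("Vivian", "Edward"), ("Edward", "Vivian"), ("Vivian", "Edward")]
history = degree_over_time(dialogues)
final_out, final_in = history[-1]
```

Comparing successive snapshots in `history` is one way a change in a character's conversational dominance (inner nature, in the paper's terms) could be traced through the story.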

Comparison of big data image analysis techniques for user curation (사용자 큐레이션을 위한 빅데이터 영상 분석 기법 비교)

  • Lee, Hyoun-Sup;Kim, Jin-Deog
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.05a
    • /
    • pp.563-565
    • /
    • 2021
  • The most important feature of recently expanding content provision services is that the amount of content grows very rapidly over time. Accordingly, the importance of user curation is increasing, and various techniques are used to implement it. In this paper, among the techniques for video recommendation, an analysis technique using voice data and subtitles and a video comparison technique based on keyframe extraction are implemented, applied to real big-data video content, and compared. Based on the comparison results, we propose the video content environments to which each analysis technique can be applied.
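The keyframe-extraction side of the comparison can be illustrated with a minimal histogram-difference keyframe selector. The flat grayscale frames, bin count, and threshold are illustrative assumptions, not the implementation evaluated in the paper.

```python
def histogram(frame, bins=4, max_val=256):
    """Coarse grayscale histogram of a frame given as a flat pixel list."""
    hist = [0] * bins
    for v in frame:
        hist[v * bins // max_val] += 1
    return hist

def keyframes(frames, threshold=3):
    """Keep frame 0, then every frame whose histogram differs from
    the last kept keyframe by more than `threshold` (L1 distance)."""
    kept = [0]
    ref = histogram(frames[0])
    for i, f in enumerate(frames[1:], start=1):
        h = histogram(f)
        if sum(abs(a - b) for a, b in zip(ref, h)) > threshold:
            kept.append(i)
            ref = h
    return kept

# Four tiny flat frames: a shot change happens at frame 2.
frames = [[10, 20, 30, 40], [12, 22, 28, 38],
          [200, 210, 220, 230], [205, 215, 225, 235]]
kept = keyframes(frames)
```

Videos can then be compared by matching their keyframe sets rather than every frame, which is what makes this approach tractable at big-data scale.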


Implementation of User Recommendation System based on Video Contents Story Analysis and Viewing Pattern Analysis (영상 스토리 분석과 시청 패턴 분석 기반의 추천 시스템 구현)

  • Lee, Hyoun-Sup;Kim, Minyoung;Lee, Ji-Hoon;Kim, Jin-Deog
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.12
    • /
    • pp.1567-1573
    • /
    • 2020
  • The development of Internet technology has brought about the era of one-person media. Individuals produce content on their own and upload it to online services, and many users watch this content on Internet-connected devices. Currently, most users find and watch the content they want through the search functions provided by existing online services. These functions rely on information entered by the user who uploaded the content. In an environment where content must be retrieved from such limited word data, information users do not want appears in the search results. To solve this problem, this paper presents a system that actively analyzes the videos in an online service and extracts and reflects the characteristics of each video. The research extracts morphemes from the story content based on a video's voice data and analyzes them with big data technology.
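The morpheme-based story analysis described above can be sketched as comparing videos by the term frequencies of their speech-recognized transcripts. The toy token lists and the cosine-similarity measure are an illustrative stand-in for the paper's big-data morpheme analysis, not its actual pipeline.

```python
import math
from collections import Counter

def cosine_similarity(tokens_a, tokens_b):
    """Cosine similarity between two token-frequency vectors, a
    standard way to compare documents (here, video transcripts)."""
    a, b = Counter(tokens_a), Counter(tokens_b)
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Toy "morpheme" lists standing in for two video transcripts.
doc1 = ["hero", "returns", "city", "hero"]
doc2 = ["hero", "leaves", "city"]
sim_same = cosine_similarity(doc1, doc1)
sim_diff = cosine_similarity(doc1, doc2)
```

Ranking videos by such transcript similarity to a user's viewing history is one simple way the extracted story characteristics could feed a recommendation system.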