• Title/Summary/Keyword: Performance video content

192 search results

Story-based Information Retrieval (스토리 기반의 정보 검색 연구)

  • You, Eun-Soon;Park, Seung-Bo
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.81-96 / 2013
  • Video information retrieval has become a very important issue because of the explosive increase in video data from Web content development. Meanwhile, content-based video analysis using visual features has been the main source for video information retrieval and browsing. Content in video can be represented with content-based analysis techniques, which can extract various features from audio-visual data such as frames, shots, colors, texture, or shape. Moreover, similarity between videos can be measured through content-based analysis. However, a movie, one of the typical types of video data, is organized by story as well as audio-visual data. This causes a semantic gap between the significant information recognized by people and the information resulting from content-based analysis when content-based video analysis using only low-level audio-visual data is applied to movie information retrieval. The reason for this semantic gap is that the story line of a movie is high-level information, with relationships in the content that change as the movie progresses. Information retrieval related to the story line of a movie cannot be performed by content-based analysis techniques alone. A formal model is needed that can determine relationships among movie contents, or track meaning changes, in order to accurately retrieve the story information. Recently, story-based video analysis techniques using a social network concept have emerged for story information retrieval. These approaches represent a story by using the relationships between characters in a movie, but they have problems. First, they do not express dynamic changes in relationships between characters according to story development. Second, they miss profound information, such as emotions indicating the identities and psychological states of the characters. Emotion is essential to understanding a character's motivation, conflict, and resolution. Third, they do not take into account the events and background that contribute to the story. As a result, this paper reviews the importance and weaknesses of previous video analysis methods, ranging from content-based approaches to story analysis based on social networks. Also, we suggest necessary elements, such as character, background, and events, based on narrative structures introduced in the literature. We extract characters' emotional words from the script of the movie Pretty Woman by using the hierarchical attributes of WordNet, an extensive English thesaurus. WordNet offers relationships between words (e.g., synonyms, hypernyms, hyponyms, antonyms). We present a method to visualize the emotional pattern of a character over time. Second, a character's inner nature must be predetermined in order to model a character arc that can depict the character's growth and development. To this end, we analyze the amount of the character's dialogue in the script and track the character's inner nature using social network concepts, such as in-degree (incoming links) and out-degree (outgoing links). Additionally, we propose a method that can track a character's inner nature by tracing indices such as the degree, in-degree, and out-degree of the character network in a movie through its progression. Finally, the spatial background where characters meet and where events take place is an important element in the story. We take advantage of the movie script to extract significant spatial backgrounds and suggest a scene map describing spatial arrangements and distances in the movie. Important places where main characters first meet or where they stay for long periods of time can be extracted through this scene map. In view of the aforementioned three elements (character, event, background), we extract a variety of information related to the story and evaluate the performance of the proposed method. We can track story information extracted over time and detect changes in a character's emotion or inner nature, spatial movement, and conflicts and resolutions in the story.
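
The character-network indices mentioned in this abstract (degree, in-degree, and out-degree traced over the movie's progression) can be illustrated with a small sketch. The dialogue tuples, character names, and scene windows below are hypothetical, and networkx stands in for the authors' own tooling.

```python
# Hypothetical sketch: tracing a character's degree, in-degree, and out-degree
# over story time. Dialogue data are invented; this is not the paper's code.
import networkx as nx

# Each tuple: (scene_number, speaker, addressee) extracted from a script.
dialogue = [
    (1, "Vivian", "Edward"), (1, "Edward", "Vivian"),
    (2, "Edward", "Stuckey"), (3, "Vivian", "Kit"),
    (4, "Edward", "Vivian"), (4, "Vivian", "Edward"),
]

def character_indices(dialogue, character, up_to_scene):
    """Build a directed character network from dialogue up to a given scene
    and return (degree, in-degree, out-degree) for one character."""
    g = nx.DiGraph()
    for scene, speaker, addressee in dialogue:
        if scene <= up_to_scene:
            g.add_edge(speaker, addressee)
    if character not in g:
        return 0, 0, 0
    return g.degree(character), g.in_degree(character), g.out_degree(character)

# Track how the character's connectivity evolves as the story progresses.
for scene in range(1, 5):
    print(scene, character_indices(dialogue, "Vivian", scene))
```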

Developing a Quality Prediction Model for Wireless Video Streaming Using Machine Learning Techniques

  • Alkhowaiter, Emtnan;Alsukayti, Ibrahim;Alreshoodi, Mohammed
    • International Journal of Computer Science & Network Security / v.21 no.3 / pp.229-234 / 2021
  • The explosive growth of video-based services is considered the dominant contributor to Internet traffic. Hence it is very important for video service providers to meet the quality expectations of end-users. In the past, Quality of Service (QoS) was the key performance measure for networks, but it considers only network-level performance (e.g., bandwidth, delay, packet loss rate), which fails to indicate user satisfaction. Therefore, Quality of Experience (QoE) may allow content servers to be smarter and more efficient. This work is motivated by the inherent relationship between QoE and QoS. We present a no-reference (NR) prediction model based on a Deep Neural Network (DNN) to predict video QoE. The DNN-based model shows a high correlation between the objective QoE measurements and the QoE predictions. The performance of the proposed model was also evaluated and compared with other types of neural network architectures and three well-known machine learning methodologies; the comparison shows that the proposed model is a promising way to solve the problem.
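
As a rough illustration of the QoS-to-QoE mapping described above, the sketch below trains a small neural network on synthetic QoS features (bandwidth, delay, packet loss). The feature set, target values, and architecture are assumptions; the paper's actual DNN model and dataset are not reproduced here.

```python
# Illustrative sketch only: mapping QoS features to a QoE (MOS-like) score with
# a small neural network. Features, targets, and architecture are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Hypothetical QoS samples: [bandwidth_mbps, delay_ms, packet_loss_pct]
X = rng.uniform([1, 10, 0], [20, 300, 5], size=(500, 3))
# Synthetic MOS in [1, 5] that degrades with delay and loss (toy relationship).
y = np.clip(5 - 0.005 * X[:, 1] - 0.6 * X[:, 2] + 0.05 * X[:, 0], 1, 5)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X, y)
print(model.predict([[10.0, 80.0, 1.0]]))  # predicted MOS for one QoS sample
```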

The method for protecting contents on a multimedia system (멀티미디어 시스템에서 콘텐츠를 보호하기 위한 방법)

  • Kim, Seong-Ki
    • Journal of the Korea Society of Computer and Information / v.14 no.7 / pp.113-121 / 2009
  • As DRM has recently been removed from many sites, content protection on a video server has become important. However, many protection methods have their own limitations or are not used because they degrade streaming performance. This paper proposes a content protection method that uses both eCryptFS and SELinux at the same time, and measures the performance of the proposed method using various benchmarks. The paper then verifies that, although the proposed method reduces other aspects of system performance, it does not significantly decrease streaming performance, so it can be used for content protection in a multimedia system.

An Efficient Video Indexing Method using Object Motion Map in Compressed Domain (압축영역에서 객체 움직임 맵에 의한 효율적인 비디오 인덱싱 방법에 관한 연구)

  • Kim, So-Yeon;No, Yong-Man
    • The Transactions of the Korea Information Processing Society / v.7 no.5 / pp.1570-1578 / 2000
  • Object motion is an important feature of content in video sequences. Various methods to extract features of object motion have been reported [1,2]. However, they are not suitable for indexing video by motion, since a large number of bits and complex indexing parameters are needed for the indexing [3,4]. In this paper, we propose an object motion map that provides an efficient indexing method for object motion. The proposed object motion map holds both global and local motion information while an object is moving. Furthermore, it requires only a small amount of memory for indexing. To evaluate the performance of the proposed indexing technique, experiments are performed with a video database consisting of MPEG-1 video sequences in the MPEG-7 test set.

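The abstract above does not spell out how the object motion map is constructed, but the general idea of a compact, grid-quantized motion representation can be sketched as follows; the grid size, frame size, and trajectory are invented for illustration, and the paper's actual map definition may differ.

```python
# Rough sketch of a compact object motion map: quantize an object's trajectory
# (e.g., recovered from macroblock motion vectors in the compressed domain)
# onto a coarse grid and store it as a small bitmap. All values are made up.
import numpy as np

def motion_map(trajectory, frame_size=(352, 288), grid=(8, 8)):
    """Mark every grid cell the object's center passes through."""
    w, h = frame_size
    gw, gh = grid
    bitmap = np.zeros((gh, gw), dtype=np.uint8)
    for x, y in trajectory:
        bitmap[int(y * gh / h), int(x * gw / w)] = 1
    return bitmap

# Hypothetical trajectory of an object moving right and slightly downward.
trajectory = [(20 + 10 * t, 50 + 5 * t) for t in range(30)]
print(motion_map(trajectory))  # 8x8 binary map, only 64 bits of index data
```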

Social Network Analysis of Changes in YouTube Home Economics Education Content Before and After COVID-19 (SNA(Social Network Analysis)를 활용한 코로나19 전후의 가정과교육 유튜브 콘텐츠 변화 분석)

  • Shim, Jae Young;Kim, Eun Kyung;Ko, Eun Mi;Kim, Hyoung Sun;Park, Mi Jeong
    • Human Ecology Research / v.60 no.1 / pp.1-20 / 2022
  • This paper presents a social network analysis of changes in Home Economics education content uploaded to YouTube before and after the outbreak of COVID-19. A basic analysis was conducted of 761 Home Economics education videos uploaded to YouTube from January 1, 2008 to June 30, 2021, using NetMiner 4.3 to analyze important keywords and the centrality of video titles and full texts. Before COVID-19, there were 164 Home Economics education videos posted on YouTube, increasing significantly to 597 following the emergence of the pandemic. In both periods, there was more middle school content than high school content. Content in the child-family field was the most common, and the main keywords were youth and family. Before COVID-19, the proportion of student content, such as performance evaluation videos, was high, whereas after the outbreak of the disease, teacher content increased significantly due to the effect of distance learning. However, compared with video use, the self-expression and participation of users were low in both periods. The centrality analysis indicated that, in video titles, 'family' exhibited both high degree centrality and high eigenvector centrality over the entire period. After the outbreak of COVID-19, the degree centrality of video titles was high in the order of class, online, family, management, etc., and the connections among keywords were strong overall. Eigenvector centrality indicated that career, search, life, and design were influential keywords before COVID-19, while class, youth, online, and development were influential keywords after COVID-19.
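
The degree and eigenvector centrality measures used in this study (computed with NetMiner 4.3) can be reproduced conceptually on a toy keyword co-occurrence network; the keyword pairs below are invented, and networkx stands in for NetMiner.

```python
# Illustrative only: degree and eigenvector centrality on a small keyword
# co-occurrence network. The co-occurrence pairs are hypothetical.
import networkx as nx

cooccurrence = [
    ("class", "online"), ("class", "family"), ("family", "youth"),
    ("family", "management"), ("online", "development"), ("youth", "class"),
]
g = nx.Graph(cooccurrence)

print(nx.degree_centrality(g))                      # normalized node degrees
print(nx.eigenvector_centrality(g, max_iter=1000))  # influence-weighted scores
```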

Design and Implementation of YouTube-based Educational Video Recommendation System

  • Kim, Young Kook;Kim, Myung Ho
    • Journal of the Korea Society of Computer and Information / v.27 no.5 / pp.37-45 / 2022
  • As of 2020, about 500 hours of video are uploaded to YouTube, a representative online video platform, every minute. As the number of users acquiring information through these uploaded videos increases, online video platforms are making efforts to provide better recommendation services. The currently used recommendation service recommends videos based on the user's viewing history, which is not a good way to recommend videos that serve specific purposes and interests, such as educational videos. Recent recommendation systems utilize not only the user's viewing history but also the content features of the items. In this paper, we extract content features of educational videos for YouTube-based educational video recommendation, design a recommendation system using them, and implement it as a web application. A user satisfaction survey shows a recommendation performance of 85.36% and a convenience performance of 87.80%.
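
A minimal sketch of the content-feature approach described above: represent each video by TF-IDF over its text metadata and rank other videos by cosine similarity. The titles are made up, and the paper's actual feature extraction, ranking, and web application are not reproduced.

```python
# Toy content-based recommender: TF-IDF over video metadata plus cosine
# similarity. Video titles below are invented for the example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

videos = [
    "linear algebra lecture eigenvalues and eigenvectors",
    "introduction to python programming for beginners",
    "calculus tutorial derivatives and integrals",
    "python data analysis with pandas",
]
tfidf = TfidfVectorizer().fit_transform(videos)
sims = cosine_similarity(tfidf[1], tfidf).ravel()  # similarity to video 1
ranking = sims.argsort()[::-1][1:]                 # skip the video itself
print([videos[i] for i in ranking])                # most similar first
```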

Automatic Parsing of MPEG-Compressed Video (MPEG 압축된 비디오의 자동 분할 기법)

  • Kim, Ga-Hyeon;Mun, Yeong-Sik
    • The Transactions of the Korea Information Processing Society / v.6 no.4 / pp.868-876 / 1999
  • In this paper, an efficient automatic parsing technique for MPEG-compressed video, which is fundamental for content-based indexing, is described. The proposed method detects scene changes regardless of the IPB picture composition. To detect abrupt changes, difference measures based on the dc coefficients in I pictures and the macroblock reference features in P and B pictures are utilized. For gradual scene changes, we use the macroblock reference information in P and B pictures. The process of scene change detection can be handled efficiently by extracting the necessary data without fully decoding the MPEG sequence. The performance of the proposed algorithm is analyzed in terms of precision and recall. The experimental results verify the effectiveness of the method for detecting scene changes in various MPEG sequences.

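A conceptual sketch of the abrupt-change detection step described above: compare dc-based images of consecutive I pictures and flag a cut when the difference exceeds a threshold. Real MPEG bitstream parsing and the P/B macroblock reference features are omitted; the dc images and threshold are hypothetical.

```python
# Conceptual abrupt-cut detection from I-picture dc coefficients. The dc
# images here are toy 8x8-block averages; no MPEG decoding is performed.
import numpy as np

def detect_cuts(dc_images, threshold=30.0):
    """Return indices of I pictures whose dc image differs strongly from the
    previous one (candidate abrupt scene changes)."""
    cuts = []
    for i in range(1, len(dc_images)):
        diff = np.mean(np.abs(dc_images[i] - dc_images[i - 1]))
        if diff > threshold:
            cuts.append(i)
    return cuts

# Two similar frames, then a very different one (synthetic data).
dc_images = [np.full((18, 22), 100.0), np.full((18, 22), 105.0),
             np.full((18, 22), 200.0)]
print(detect_cuts(dc_images))  # -> [2]
```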

Content-Based Video Retrieval Algorithms using Spatio-Temporal Information about Moving Objects (객체의 시공간적 움직임 정보를 이용한 내용 기반 비디오 검색 알고리즘)

  • Jeong, Jong-Myeon;Moon, Young-Shik
    • Journal of KIISE: Software and Applications / v.29 no.9 / pp.631-644 / 2002
  • In this paper, efficient algorithms for content-based video retrieval using motion information are proposed, including temporal scale-invariant retrieval and temporal scale-absolute retrieval. In temporal scale-invariant video retrieval, the distance transformation is performed on each trail image in the database. Then, for a given query trail, the pixel values along the query trail are summed in each distance image to compute the average distance between the trails of the query image and the database image, since the intensity of each pixel in a distance image represents the distance from that pixel to the nearest edge pixel. For temporal scale-absolute retrieval, a new coding scheme referred to as the Motion Retrieval Code is proposed. This code is designed to represent object motions in the human visual sense so that the retrieval performance can be improved. The proposed coding scheme can also achieve fast matching, since the similarity between two motion vectors can be computed by simple bit operations. The efficiency of the proposed methods is demonstrated by experimental results.
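
The temporal scale-invariant matching step can be sketched with a distance transform: compute, for a database trail image, each pixel's distance to the nearest trail pixel, then average those distances along the query trail. The tiny trail images below are synthetic, and the paper's exact normalization is not reproduced.

```python
# Sketch of trail matching via distance transform. Trail images are synthetic.
import numpy as np
from scipy.ndimage import distance_transform_edt

def trail_distance(db_trail, query_trail):
    """Average distance from query-trail pixels to the nearest db-trail pixel."""
    # distance_transform_edt gives each pixel's distance to the nearest zero,
    # so invert the binary trail (trail pixels become 0).
    dist = distance_transform_edt(1 - db_trail)
    return dist[query_trail == 1].mean()

db = np.zeros((16, 16), dtype=np.uint8)
db[8, 2:14] = 1               # horizontal trail in the database image
q = np.zeros((16, 16), dtype=np.uint8)
q[10, 2:14] = 1               # similar trail, shifted two rows down
print(trail_distance(db, q))  # small value -> similar motion trails
```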

Analysis of Screen Content Coding Based on HEVC

  • Ahn, Yong-Jo;Ryu, Hochan;Sim, Donggyu;Kang, Jung-Won
    • IEIE Transactions on Smart Processing and Computing / v.4 no.4 / pp.231-236 / 2015
  • In this paper, the technical analysis and characteristics of screen content coding (SCC) based on High Efficiency Video Coding (HEVC) are presented. For screen content, which is increasingly used these days, HEVC SCC standardization has been carried out. Technologies such as intra block copy (IBC), palette coding, and adaptive color transform have been developed and adopted into the HEVC SCC standard. This paper examines IBC and palette coding, which significantly impact the rate-distortion (RD) performance of SCC for screen content. The HEVC SCC reference model (SCM) 4.0 was used to comparatively analyze the coding performance of HEVC SCC relative to the HEVC Range Extensions (RExt) model for screen content.
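
Palette coding, one of the two tools examined here, can be illustrated in miniature: a screen-content block with few distinct values is represented as a small palette plus per-pixel indices. This toy sketch ignores the actual HEVC SCC palette mode details (escape pixels, palette predictor reuse, index-run coding).

```python
# Toy illustration of the palette-coding idea for screen content; not the
# HEVC SCC palette mode. Block values are hypothetical.
import numpy as np

block = np.array([[10, 10, 200],
                  [10, 200, 200],
                  [10, 10, 200]])   # hypothetical 3x3 screen-content block

palette, indices = np.unique(block, return_inverse=True)
indices = indices.reshape(block.shape)

print("palette:", palette)          # e.g. [ 10 200]
print("indices:\n", indices)        # per-pixel palette indices
print("lossless:", np.array_equal(palette[indices], block))
```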

Optimal Coding Model for Screen Contents Applications from the Coding Performance Analysis of High Efficient Coding Tools in HEVC (HEVC 고성능 압축 도구들의 성능 분석을 통한 스크린 콘텐츠 응용 최적 부호화 모델)

  • Han, Chan-Hee;Lee, Si-Woong
    • The Journal of the Korea Contents Association / v.12 no.12 / pp.544-554 / 2012
  • Screen content refers to images or videos generated by various electronic devices such as computers or mobile phones, whereas natural content refers to images captured by cameras. Screen content shows different statistical characteristics from natural images, so conventional video codecs, which were developed mainly for coding natural video, cannot guarantee good coding performance for screen content. Recently, research on efficient SCC (Screen Content Coding) has been actively conducted, and SCC issues are being discussed steadily at the ongoing JCT-VC (Joint Collaborative Team on Video Coding) meetings for the HEVC (High Efficiency Video Coding) standard. In this paper, we analyze the performance of high-efficiency coding tools in the HM (HEVC Test Model) for SCC and present an optimized SCC model based on the analysis results. We also present the characteristics of screen content and future research issues.