• Title/Summary/Keyword: Video Summarization

Viewer's Affective Feedback for Video Summarization

  • Dammak, Majdi;Wali, Ali;Alimi, Adel M.
    • Journal of Information Processing Systems / v.11 no.1 / pp.76-94 / 2015
  • For various reasons, many viewers prefer to watch a summary of a film rather than spend time on the whole. Traditionally, video has been analyzed manually to produce such a summary, but this requires a significant amount of work time, so a tool for automatic video summarization has become necessary. Automatic video summarization aims to extract all of the important moments in which viewers might be interested, and the summarization criteria can differ from one video to another. This paper presents how emotional dimensions obtained from real viewers can be used as an important input for computing which parts of a film are the most interesting. Our results, based on the lab experiments that were carried out, are significant and promising.
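
A minimal sketch of the idea of ranking film segments by viewers' affective responses: fixed-length segments are scored by their mean arousal and the top-scoring ones are kept. The segment length, the per-frame arousal signal, and the selection ratio are assumptions for illustration, not the authors' actual pipeline.

```python
import numpy as np

def affective_summary(arousal, fps=25, seg_len_s=5, keep_ratio=0.2):
    """Pick the most 'interesting' segments of a film from per-frame
    viewer arousal ratings (higher = more emotionally engaging)."""
    seg_len = int(fps * seg_len_s)
    n_segs = len(arousal) // seg_len
    # Mean arousal per fixed-length segment.
    scores = np.array([arousal[i * seg_len:(i + 1) * seg_len].mean() for i in range(n_segs)])
    k = max(1, int(n_segs * keep_ratio))
    keep = np.sort(np.argsort(scores)[-k:])            # top-k segments, in temporal order
    return [(i * seg_len, (i + 1) * seg_len) for i in keep]  # frame ranges to splice together

# Toy usage with synthetic ratings: 10 minutes of per-frame arousal in [0, 1].
rng = np.random.default_rng(0)
ratings = rng.random(25 * 600)
print(affective_summary(ratings))
```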

Dynamic Summarization and Summary Description Scheme for Efficient Video Browsing (효율적인 비디오 브라우징을 위한 동적 요약 및 요약 기술구조)

  • 김재곤;장현성;김문철;김진웅;김형명
    • Journal of Broadcast Engineering / v.5 no.1 / pp.82-93 / 2000
  • Recently, the capability of efficient access to desired video content has become increasingly important because more digital video data are available at an increasing rate. A video summary, which abstracts the gist from the whole, enables efficient browsing as well as fast skimming of video content. In this paper, we discuss a novel dynamic summarization method based on the detection of highlights that represent semantically significant content, together with a description scheme (DS) proposed to MPEG-7 for describing summaries. The summary DS proposed to MPEG-7 allows efficient navigation and browsing of the content of interest through multi-level highlights, hierarchical browsing, and user-customized summarization. We also validate the usefulness of the dynamic summarization method and the summary DS in real applications with soccer video sequences.
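
The multi-level, user-customizable highlight structure described above could be modeled roughly as follows; the field names and the importance threshold are assumptions for illustration, not the actual MPEG-7 summary DS syntax.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class HighlightSegment:
    start_s: float          # segment start time in seconds
    end_s: float            # segment end time in seconds
    importance: float       # e.g. 0..1, drives user-customized summary length

@dataclass
class HighlightLevel:
    name: str                                                       # e.g. "goals"
    segments: List[HighlightSegment] = field(default_factory=list)
    children: List["HighlightLevel"] = field(default_factory=list)  # finer-grained levels

def summarize(level: HighlightLevel, min_importance: float) -> List[HighlightSegment]:
    """Flatten the hierarchy into a play-list, keeping only segments above a threshold."""
    out = [s for s in level.segments if s.importance >= min_importance]
    for child in level.children:
        out.extend(summarize(child, min_importance))
    return sorted(out, key=lambda s: s.start_s)

# A two-level soccer example: goals at the top, near-misses one level below.
goals = HighlightLevel("goals", [HighlightSegment(312.0, 327.0, 1.0)])
near = HighlightLevel("near-misses", [HighlightSegment(95.0, 101.0, 0.6)])
root = HighlightLevel("match-highlights", children=[goals, near])
print(summarize(root, min_importance=0.5))
```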

An Automatic Summarization System of Baseball Game Video Using the Caption Information (자막 정보를 이용한 야구경기 비디오의 자동요약 시스템)

  • 유기원;허영식
    • Journal of Broadcast Engineering / v.7 no.2 / pp.107-113 / 2002
  • In this paper, we propose a method and a software system for the automatic summarization of baseball game videos. The proposed system pursues fast execution and high summarization accuracy. To satisfy these requirements, important events in baseball video are detected through a DC-based shot boundary detection algorithm and a simple caption recognition method. Furthermore, the proposed system supports a hierarchical description so that users can browse and navigate videos at several levels of summarization.
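
A minimal sketch of DC-based shot boundary detection as mentioned above: the DC image is approximated by 8x8 block means and a cut is declared when consecutive DC images differ strongly. The fixed threshold is an assumption, and the caption recognition step is not covered.

```python
import numpy as np

def dc_image(frame, block=8):
    """Approximate the DC image of an MPEG frame: the mean of each 8x8 block
    (for intra-coded frames this equals the DC coefficients up to scaling)."""
    h, w = frame.shape[0] // block * block, frame.shape[1] // block * block
    f = frame[:h, :w].astype(np.float32)
    return f.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def shot_boundaries(frames, threshold=30.0):
    """Flag a boundary where the mean absolute DC-image difference jumps."""
    prev, cuts = None, []
    for i, frame in enumerate(frames):
        dc = dc_image(frame)
        if prev is not None and np.abs(dc - prev).mean() > threshold:
            cuts.append(i)
        prev = dc
    return cuts

# Toy usage: two flat "shots" of different brightness produce one cut.
shot_a = [np.full((240, 320), 40, np.uint8)] * 5
shot_b = [np.full((240, 320), 200, np.uint8)] * 5
print(shot_boundaries(shot_a + shot_b))   # -> [5]
```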

Online Video Synopsis via Multiple Object Detection

  • Lee, JaeWon;Kim, DoHyeon;Kim, Yoon
    • Journal of the Korea Society of Computer and Information / v.24 no.8 / pp.19-28 / 2019
  • In this paper, an online video summarization algorithm based on multiple object detection is proposed. As crime has risen with recent rapid urbanization, the demand for public safety has grown and the installation of surveillance cameras such as closed-circuit television (CCTV) has increased in many cities. However, retrieving and analyzing the huge amount of video data from numerous CCTVs takes a great deal of time and labor. As a result, there is an increasing demand for intelligent video recognition systems that can automatically detect and summarize the various events occurring on CCTV. Video summarization is a method of generating a synopsis of a long original video so that users can watch it in a short time. The proposed method can be divided into two stages. The object extraction step detects objects in the video and extracts the specific objects desired by the user. The video summary step then creates the final synopsis video from the objects extracted in the previous step. While existing methods do not consider the interaction between objects in the original video when generating the synopsis, the proposed method uses a new object clustering algorithm to effectively preserve these interactions in the synopsis video. This paper also proposes an online optimization method that can efficiently summarize the large number of objects appearing in long videos. Finally, experimental results show that the performance of the proposed method is superior to that of existing video synopsis algorithms.
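
A rough sketch of the synopsis idea under simplifying assumptions: each detected object is reduced to a "tube" with a start and end frame, tubes with overlapping lifetimes are grouped so interacting objects are shifted together, and groups are packed back-to-back on a shorter timeline. The detector, the gap parameter, and the packing rule are all assumptions rather than the paper's method.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Tube:
    obj_id: int
    start: int   # first frame in the original video
    end: int     # last frame in the original video

def cluster_interacting(tubes: List[Tube], gap: int = 30) -> List[List[Tube]]:
    """Group tubes whose lifetimes overlap (or nearly overlap) so interacting
    objects are shifted together and their interaction is preserved."""
    groups: List[List[Tube]] = []
    for t in sorted(tubes, key=lambda t: t.start):
        if groups and t.start <= max(x.end for x in groups[-1]) + gap:
            groups[-1].append(t)
        else:
            groups.append([t])
    return groups

def synopsis_offsets(groups: List[List[Tube]]) -> Dict[int, int]:
    """Assign each group a new start time so groups play back-to-back."""
    offsets, cursor = {}, 0
    for g in groups:
        g_start = min(t.start for t in g)
        for t in g:
            offsets[t.obj_id] = cursor + (t.start - g_start)  # shift, keep relative timing
        cursor += max(t.end for t in g) - g_start + 1
    return offsets

tubes = [Tube(1, 0, 100), Tube(2, 80, 150), Tube(3, 5000, 5040)]
print(synopsis_offsets(cluster_interacting(tubes)))  # objects 1 and 2 stay together
```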

Aerial Video Summarization Approach based on Sensor Operation Mode for Real-time Context Recognition (실시간 상황 인식을 위한 센서 운용 모드 기반 항공 영상 요약 기법)

  • Lee, Jun-Pyo
    • Journal of the Korea Society of Computer and Information / v.20 no.6 / pp.87-97 / 2015
  • Aerial video summarization is not only key to effectively browsing video within a limited time, but also a cue for efficiently aggregating the situation awareness acquired by an unmanned aerial vehicle. Unlike previous works, we utilize the sensor operation modes of the unmanned aerial vehicle, namely the global, local, and focused surveillance modes, in order to accurately summarize aerial video while considering the flight and surveillance/reconnaissance environments. For the focused mode, we propose a moving-react tracking method that utilizes partitioned motion vectors and a spatiotemporal saliency map to continuously detect and track the moving object of interest. In our simulation results, key frames are correctly detected for aerial video summarization according to the sensor operation mode of the aerial vehicle, and we verify the efficiency of video summarization using the proposed method.
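
A loose illustration of mode-dependent key-frame selection, assuming the three sensor operation modes named above; the sampling strides, the motion threshold, and the use of simple frame differencing in place of the paper's moving-react tracking are all assumptions.

```python
import numpy as np

def keyframes(frames, mode, global_stride=60, local_stride=15, motion_thresh=8.0):
    """Pick key-frame indices depending on the UAV sensor operation mode.
    global/local: fixed-rate sampling (coarse vs. fine).
    focused: keep frames with noticeable motion (a crude stand-in for
    motion-vector and saliency based tracking)."""
    if mode == "global":
        return list(range(0, len(frames), global_stride))
    if mode == "local":
        return list(range(0, len(frames), local_stride))
    if mode == "focused":
        keep, prev = [], None
        for i, f in enumerate(frames):
            g = f.astype(np.float32)
            if prev is not None and np.abs(g - prev).mean() > motion_thresh:
                keep.append(i)          # enough frame-to-frame change to matter
            prev = g
        return keep
    raise ValueError(f"unknown sensor mode: {mode}")

# Toy usage: a static scene with a brightness change at frame 10.
frames = [np.full((120, 160), 50, np.uint8)] * 10 + [np.full((120, 160), 120, np.uint8)] * 10
print(keyframes(frames, "focused"))   # -> [10]
```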

Video Summarization Using Eye Tracking and Electroencephalogram (EEG) Data (시선추적-뇌파 기반의 비디오 요약 생성 방안 연구)

  • Kim, Hyun-Hee;Kim, Yong-Ho
    • Journal of the Korean Society for Library and Information Science / v.56 no.1 / pp.95-117 / 2022
  • This study developed and evaluated audio-visual (AV) semantics-based video summarization methods using eye tracking and electroencephalography (EEG) data. Twenty-seven university students participated in the eye tracking and EEG experiments. The evaluation results showed that the average recall (0.73) obtained when both EEG and pupil diameter data were used to construct a video summary was higher than that obtained with EEG data alone (0.50) or with pupil diameter data alone (0.68). In addition, this study examined the reasons why the average recall (0.57) of the AV semantics-based personalized video summaries was lower than that (0.69) of the AV semantics-based generic video summaries. Finally, the differences and characteristics of the AV semantics-based and text semantics-based video summarization methods were compared and analyzed.
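
A hedged sketch of how per-shot EEG and pupil-diameter features might be combined to select summary shots and score them with recall against a reference summary; the z-score averaging, the selection ratio, and the toy data are assumptions rather than the study's actual model.

```python
import numpy as np

def zscore(x):
    return (x - x.mean()) / (x.std() + 1e-9)

def select_shots(eeg_engagement, pupil_diameter, top_ratio=0.3):
    """Combine per-shot EEG engagement and pupil diameter (both z-scored)
    and keep the top-scoring shots for the summary."""
    score = 0.5 * zscore(eeg_engagement) + 0.5 * zscore(pupil_diameter)
    k = max(1, int(len(score) * top_ratio))
    return set(np.argsort(score)[-k:].tolist())

def recall(selected, reference):
    """Fraction of reference (ground-truth) summary shots that were selected."""
    return len(selected & reference) / len(reference)

rng = np.random.default_rng(1)
eeg = rng.random(20)        # 20 shots, toy per-shot EEG engagement
pupil = rng.random(20)      # toy per-shot mean pupil diameter
reference = {2, 5, 11, 17}  # shots human annotators put in the summary
print(recall(select_shots(eeg, pupil), reference))
```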

A Video Summarization Study On Selecting-Out Topic-Irrelevant Shots Using N400 ERP Components in the Real-Time Video Watching (동영상 실시간 시청시 유발전위(ERP) N400 속성을 이용한 주제무관 쇼트 선별 자동영상요약 연구)

  • Kim, Yong Ho;Kim, Hyun Hee
    • Journal of Korea Multimedia Society / v.20 no.8 / pp.1258-1270 / 2017
  • The 'semantic gap' has been a long-standing problem in automatic video summarization; it refers to the gap between the semantics implied by video summarization algorithms and what people actually infer from watching videos. Using external EEG bio-feedback obtained from video watchers as a solution to this semantic gap raises several further issues. First, how should noise be defined and measured against ERP waveforms as signals? Second, are individual differences among subjects in terms of noise and SNR, as measured in conventional ERP studies using still images captured from videos, the same as those conceptualized and measured from videos themselves? Third, do individual differences in noise and SNR levels help to detect topic-irrelevant shots, i.e., shots that do not match a subject's own topical expectations (mismatch negativity at around 400 ms after stimulus onset)? A repeated-measures ANOVA test shows a clear two-way interaction between topic relevance and noise level, implying that subjects with a low noise level during the video-watching session are sensitive to topic-irrelevant visual shots, as well as a three-way interaction among topic relevance, noise, and SNR levels, implying that subjects with a high noise level are sensitive to topic-irrelevant visual shots only if their SNR level is low.
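
As a rough illustration of the N400-based criterion, the sketch below epochs a single EEG channel around shot onsets and flags shots whose mean amplitude in a 300-500 ms window is strongly negative. The sampling rate, window, and threshold are assumptions, and the noise/SNR analysis is omitted.

```python
import numpy as np

def n400_flags(eeg, shot_onsets_s, fs=250, win=(0.3, 0.5), thresh_uv=-2.0):
    """Flag shots whose mean EEG amplitude 300-500 ms after onset is strongly
    negative, treating that as an N400-like 'topic mismatch' response."""
    flagged = []
    for shot_idx, onset in enumerate(shot_onsets_s):
        a = int((onset + win[0]) * fs)
        b = int((onset + win[1]) * fs)
        if b <= len(eeg) and eeg[a:b].mean() < thresh_uv:
            flagged.append(shot_idx)     # candidate topic-irrelevant shot
    return flagged

# Toy single-channel EEG: the second shot carries a negative deflection.
fs = 250
eeg = np.zeros(10 * fs)
eeg[int(2.35 * fs):int(2.45 * fs)] = -10.0    # dip inside shot 1's 300-500 ms window
print(n400_flags(eeg, shot_onsets_s=[0.0, 2.0, 5.0], fs=fs))   # -> [1]
```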

Video Summarization Using Importance-based Fuzzy One-Class Support Vector Machine (중요도 기반 퍼지 원 클래스 서포트 벡터 머신을 이용한 비디오 요약 기술)

  • Kim, Ki-Joo;Choi, Young-Sik
    • Journal of Internet Computing and Services / v.12 no.5 / pp.87-100 / 2011
  • In this paper, we address video summarization as the task of generating video segments that are both visually salient and semantically important. In order to find salient data points, one can use the one-class support vector machine (OC-SVM), which is well known for novelty detection problems. It is, however, hard to incorporate into the OC-SVM process the importance measure of the data points, which is crucial for video summarization. In order to integrate the importance of each point into the OC-SVM process, we propose a fuzzy version of the OC-SVM. The importance-based fuzzy OC-SVM weights data points according to the importance measure of the video segments and then estimates the support of the distribution of the weighted feature vectors. The estimated support vectors form the descriptive segments that best delineate the underlying video content in terms of the importance and salience of the video segments. We demonstrate the performance of our algorithm on several synthesized data sets and different types of videos in order to show its efficacy. Experimental results show that our approach outperforms a well-known traditional method.
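
The importance weighting could be approximated with scikit-learn's OneClassSVM by passing per-segment importance as sample weights, as sketched below; this is only an approximation of the fuzzy OC-SVM, not the authors' formulation, and the features and weights are synthetic.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Toy per-segment features (e.g. color/motion descriptors) and importance weights.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 16))            # 200 video segments, 16-D features
importance = rng.uniform(0.1, 1.0, size=200)     # e.g. from motion activity or audio energy

# Importance enters the one-class estimation as per-sample weights, so highly
# important segments pull the estimated support of the distribution toward themselves.
oc = OneClassSVM(kernel="rbf", nu=0.2, gamma="scale")
oc.fit(features, sample_weight=importance)

# Segments with the highest decision values are the most "representative";
# take the top few as the summary.
scores = oc.decision_function(features)
summary_segments = np.argsort(scores)[-10:]
print(sorted(summary_segments.tolist()))
```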

Automatic Summarization of Basketball Video Using the Score Information (스코어 정보를 이용한 농구 비디오의 자동요약)

  • Jung, Cheol-Kon;Kim, Eui-Jin;Lee, Gwang-Gook;Kim, Whoi-Yul
    • The Journal of Korean Institute of Communications and Information Sciences / v.32 no.9C / pp.881-887 / 2007
  • In this paper, we propose a method for the content-based automatic summarization of basketball game videos. For a meaningful summary, we use the score information in basketball videos, which is obtained by recognizing the digits on the score caption and analyzing the variation of the score. Generally, the important events in basketball include 3-point shots, one-sided runs, lead changes, and so on. We detect these events using the score information and generate summaries and highlights of basketball games.
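
A minimal sketch of event detection from a recognized score sequence: 3-point shots appear as score jumps of three, lead changes as sign flips of the score difference, and one-sided runs as long streaks of unanswered points. The run-length threshold and the input format are assumptions, and the caption OCR itself is not covered.

```python
def score_events(scores, run_len=8):
    """Detect summary-worthy events from a recognized (home, away) score sequence:
    3-point shots, lead changes, and one-sided runs of at least `run_len` points."""
    events = []
    run = [0, 0]                                    # unanswered points per team
    for i in range(1, len(scores)):
        dh = scores[i][0] - scores[i - 1][0]        # home points since last update
        da = scores[i][1] - scores[i - 1][1]        # away points since last update
        if dh == 3 or da == 3:
            events.append((i, "3-point shot"))
        # Track unanswered points for one-sided run detection.
        if dh > 0 and da == 0:
            run = [run[0] + dh, 0]
        elif da > 0 and dh == 0:
            run = [0, run[1] + da]
        else:
            run = [0, 0]
        if max(run) >= run_len:
            events.append((i, f"{max(run)}-0 run"))
            run = [0, 0]
        prev_lead = scores[i - 1][0] - scores[i - 1][1]
        lead = scores[i][0] - scores[i][1]
        if prev_lead != 0 and lead != 0 and (prev_lead > 0) != (lead > 0):
            events.append((i, "lead change"))
    return events

# (home, away) score after each recognized caption update.
scores = [(0, 0), (2, 0), (2, 3), (5, 3), (7, 3), (9, 3), (11, 3)]
print(score_events(scores))
```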

  • In this paper, we proposed a method for content based automatic summarization of basketball game videos. For meaningful summary, we used the score information in basketball videos. And the score information is obtained by recognizing the digits on the score caption and analyzing the variation of the score. Generally, important events of basketball are the 3-point shot, one-sided runs, the lead changes, and so on. We have detected these events using score information and made summaries and highlights of basketball video games.