• Title/Summary/Keyword: Video


Video Cut Detection Using Complementary Color (보색개념을 도입한 Video Cut 검출)

  • 김재학;박종승;한준희
    • Proceedings of the Korean Information Science Society Conference / 1998.10c / pp.411-413 / 1998
  • Video segmentation, which divides a video into meaningful parts, requires the detection of video cuts. In this paper, we use a neural network to detect video cuts and introduce the concept of complementary colors as the cut measure. Using this method, we trained on several video data sets and then tested on new videos, obtaining good performance.
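A generic histogram-difference detector illustrates the kind of cut measurement involved. This is only a minimal sketch of the general technique, not the authors' method: the paper's neural network and complementary-color measure are not reproduced here, and `gray_histogram`, `detect_cuts`, and the threshold are illustrative assumptions.

```python
def gray_histogram(frame, bins=8):
    """Normalized histogram of pixel intensities (0-255) for one frame."""
    hist = [0] * bins
    for p in frame:
        hist[min(p * bins // 256, bins - 1)] += 1
    total = len(frame)
    return [h / total for h in hist]

def detect_cuts(frames, threshold=0.5):
    """Flag a cut before frame i when the L1 histogram distance
    between frame i-1 and frame i exceeds the threshold."""
    cuts = []
    for i in range(1, len(frames)):
        h0 = gray_histogram(frames[i - 1])
        h1 = gray_histogram(frames[i])
        d = sum(abs(a - b) for a, b in zip(h0, h1))
        if d > threshold:
            cuts.append(i)
    return cuts
```

A learned classifier, as in the paper, would replace the fixed threshold with a decision trained on labeled transitions.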


Analyzing Performance of MPEG-7 Video Signature for Video Copy Detection (동영상 복사본 검출을 위한 MPEG-7 Video Signature 성능분석)

  • Yu, Jeongsoo;Ryu, Jaesug;Nang, Jongho
    • KIISE Transactions on Computing Practices / v.20 no.11 / pp.586-591 / 2014
  • In recent years, video content has become accessible anywhere and at any time. As a result, distributed video is easily copied, transformed, and republished. Since this raises copyright problems, similarity and duplicate detection is essential for identifying excessive content duplication. In this paper, we analyze how well MPEG-7 Video Signature discriminates videos that have been transformed in various ways. MPEG-7 Video Signature, one of the standard video copy detection algorithms, is a block-based abstraction, so we hypothesize that it is weak against spatial transforms. The experiments confirm that, as assumed, MPEG-7 Video Signature is very weak against the spatial transforms that commonly occur in practice.
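A toy block-based signature shows why such abstractions are sensitive to spatial transforms. This is not the actual MPEG-7 Video Signature descriptor; `block_signature`, the grid size, and the distance function are illustrative assumptions only.

```python
def block_signature(frame, w, h, grid=4):
    """Coarse signature: mean intensity of each cell in a grid x grid
    partition of a w x h frame stored as a flat list of pixels."""
    sig = []
    for by in range(grid):
        for bx in range(grid):
            total, count = 0, 0
            for y in range(by * h // grid, (by + 1) * h // grid):
                for x in range(bx * w // grid, (bx + 1) * w // grid):
                    total += frame[y * w + x]
                    count += 1
            sig.append(total // count)
    return sig

def signature_distance(sig_a, sig_b):
    """Mean absolute difference between two signatures."""
    return sum(abs(a - b) for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Because each cell's value depends on exactly which pixels fall inside it, a crop, shift, or rotation reassigns pixels to different cells and changes the signature sharply, which is consistent with the weakness to spatial transforms reported above.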

GeoVideo: A First Step to MediaGIS

  • Kim, Kyong-Ho;Kim, Sung-Soo;Lee, Sung-Ho;Kim, Kyoung-Ok;Lee, Jong-Hun
    • Proceedings of the KSRS Conference / 2002.10a / pp.827-831 / 2002
  • MediaGIS is a concept of tightly integrating multimedia with spatial information. VideoGIS is an example of MediaGIS focused on the integration or interaction of video and spatial information. Our proposed GeoVideo, a new concept of VideoGIS, has interactivity as its key feature. In GeoVideo, geographic tasks such as browsing, searching, querying, and spatial analysis can be performed on the video itself. GeoVideo can be seen as a paradigm shift from an artificial, static, abstracted, graphical paradigm to a natural, dynamic, real, image-based one. We discuss the integration of video and geography and also propose a GeoVideo system design. Several considerations for expanding the functionality of GeoVideo are presented as future work.


Application of Speech Recognition with Closed Caption for Content-Based Video Segmentations

  • Son, Jong-Mok;Bae, Keun-Sung
    • Speech Sciences / v.12 no.1 / pp.135-142 / 2005
  • An important aspect of video indexing is the ability to segment video into meaningful segments, i.e., content-based video segmentation. Since the audio signal in the sound track is synchronized with the image sequence of a video program, the speech signal in the sound track can be used to segment the video into meaningful segments. In this paper, we propose a new approach to content-based video segmentation that uses closed captions to construct a recognition network for speech recognition. Accurate time information for video segmentation is then obtained from the speech recognition process. In a video segmentation experiment on TV news programs, 56 of 57 TV news stories were summarized successfully. This demonstrates that the proposed scheme is very promising for content-based video segmentation.


Content similarity matching for video sequence identification

  • Kim, Sang-Hyun
    • International Journal of Contents / v.6 no.3 / pp.5-9 / 2010
  • Managing a large video database system requires effective video indexing and retrieval. Many video retrieval algorithms have been presented for frame-wise user queries or video content queries, whereas only a few video identification algorithms have been proposed for video sequence queries. In this paper, we propose an effective video identification algorithm for video sequence queries that employs the Cauchy function of histograms between successive frames and the modified Hausdorff distance. To match video sequences effectively with a low computational load, we use key frames extracted by the cumulative Cauchy function and compare the sets of key frames using the modified Hausdorff distance. Experimental results on several color video sequences show that the proposed identification algorithm yields remarkably higher performance than conventional methods such as the Euclidean metric and directed divergence.
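The matching stage can be sketched as follows. This is a minimal illustration of the modified Hausdorff distance (in the Dubuisson-Jain sense) applied to two sets of key-frame features; the Cauchy-function key-frame extraction is not modeled, and the `l1` histogram distance is an assumed feature distance, not necessarily the one used in the paper.

```python
def modified_hausdorff(set_a, set_b, dist):
    """Modified Hausdorff distance between two feature sets:
    max of the two directed mean-of-minimum distances."""
    def directed(p, q):
        return sum(min(dist(a, b) for b in q) for a in p) / len(p)
    return max(directed(set_a, set_b), directed(set_b, set_a))

def l1(h0, h1):
    """L1 distance between two histograms (assumed feature distance)."""
    return sum(abs(a - b) for a, b in zip(h0, h1))
```

Comparing only key-frame sets this way keeps the cost proportional to the number of key frames rather than the number of frames, which matches the low-computational-load goal above.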

Design and Implementation of the Video Query Processing Engine for Content-Based Query Processing (내용기반 질의 처리를 위한 동영상 질의 처리기의 설계 및 구현)

  • Jo, Eun-Hui;Kim, Yong-Geol;Lee, Hun-Sun;Jeong, Yeong-Eun;Jin, Seong-Il
    • The Transactions of the Korea Information Processing Society / v.6 no.3 / pp.603-614 / 1999
  • As multimedia application services on high-speed information networks have developed rapidly, the need for video information management systems that let users retrieve video data efficiently is growing. In this paper, we propose a video data model that integrates free annotations, image features, and spatial-temporal features for the purpose of improving content-based retrieval of video data. The proposed model can act as a generic video data model for multimedia applications and supports free annotations, image features, spatial-temporal features, and structure information of video data within the same framework. We also propose a video query language for efficiently specifying queries that access video clips in the video data; it can express various kinds of queries based on video content. Finally, we design and implement a query processing engine for efficient video data retrieval on top of the proposed metadata model and query language.


Semi-Dynamic Digital Video Adaptation System for Mobile Environment (모바일 환경을 위한 준-동적 디지털 비디오 어댑테이션 시스템)

  • 추진호;이상민;낭종호
    • Journal of KIISE:Software and Applications / v.31 no.10 / pp.1320-1331 / 2004
  • A video adaptation system translates a source video stream into an appropriate target stream that satisfies network and client constraints while maximizing video quality. This paper proposes a semi-dynamic video adaptation scheme in which several intermediate video streams, along with information for measuring video quality, are generated statically. The intermediate video streams are generated by repeatedly reducing the resolution of the source stream by powers of two, and are stored on the video server. The statically generated information for the input video stream consists of a smoothness measure for each frame rate and a frame-definition measure for each pixel bit rate; it allows the target video stream to be generated dynamically at run time, according to the client's QoS, as quickly as possible. Experimental results show that the proposed scheme generates the target video stream about thirty times faster than a fully dynamic approach while keeping the quality degradation under 2%, although extra storage is required for the intermediate video streams.
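The static/dynamic split can be sketched as follows, under illustrative assumptions: the function names and the selection rule are mine, and the paper's smoothness and frame-definition measures are not modeled. The server statically stores power-of-two reduced resolutions; at run time, the smallest intermediate stream that still covers the client's display is picked.

```python
def make_intermediate_resolutions(width, height, levels=3):
    """Static stage: power-of-two reduced resolutions stored on the server."""
    return [(width >> i, height >> i) for i in range(levels + 1)]

def pick_stream(resolutions, client_width):
    """Run-time stage: smallest stored stream whose width still covers
    the client display; fall back to the largest stream otherwise."""
    candidates = [r for r in resolutions if r[0] >= client_width]
    return min(candidates) if candidates else max(resolutions)
```

Selecting among precomputed streams replaces a full transcode at request time, which is where the reported speedup over fully dynamic adaptation comes from.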

Online Video Synopsis via Multiple Object Detection

  • Lee, JaeWon;Kim, DoHyeon;Kim, Yoon
    • Journal of the Korea Society of Computer and Information / v.24 no.8 / pp.19-28 / 2019
  • In this paper, an online video summarization algorithm based on multiple object detection is proposed. As crime has risen with recent rapid urbanization, the demand for public safety has grown, and the installation of surveillance cameras such as closed-circuit television (CCTV) has increased in many cities. However, retrieving and analyzing the huge amount of video data from numerous CCTVs takes a great deal of time and labor. As a result, there is increasing demand for intelligent video recognition systems that can automatically detect and summarize the various events occurring on CCTV. Video summarization generates a synopsis video of a long original video so that users can watch it in a short time. The proposed method consists of two stages. The object extraction step detects objects in the video and extracts the specific objects desired by the user. The video summary step then creates the final synopsis video from the objects extracted in the previous step. Whereas existing methods do not consider the interactions between objects in the original video when generating the synopsis, the proposed method uses a new object clustering algorithm that effectively preserves those interactions in the synopsis video. This paper also proposes an online optimization method that can efficiently summarize the large number of objects appearing in long videos. Finally, experimental results show that the proposed method outperforms existing video synopsis algorithms.

AnoVid: A Deep Neural Network-based Tool for Video Annotation (AnoVid: 비디오 주석을 위한 심층 신경망 기반의 도구)

  • Hwang, Jisu;Kim, Incheol
    • Journal of Korea Multimedia Society / v.23 no.8 / pp.986-1005 / 2020
  • In this paper, we propose AnoVid, an automated video annotation tool based on deep neural networks that automatically generates various metadata for each scene or shot in a long drama video containing rich visual elements. To this end, a novel metadata schema for drama video is designed. Based on this schema, the AnoVid annotation tool uses a total of six deep neural network models, for object detection, place recognition, time-zone recognition, person recognition, activity detection, and description generation, with which it can generate rich video annotation data. In addition, AnoVid not only automatically generates a JSON-format video annotation data file but also provides various visualization facilities for checking the video content analysis results. Through experiments on a real drama video, "Misaeng", we show the practical effectiveness and performance of the proposed video annotation tool, AnoVid.

The Study on the Distortion Estimate of Video Quality at the Real Time HD Level Video Multicasting Transmission (실시간 HD급 동영상의 멀티캐스트 전송에서 영상품질의 왜곡평가에 관한 연구)

  • Cho, Tae-Kyung;Lee, Jea-Hee;Lee, Sang-Ha
    • The Transactions of the Korean Institute of Electrical Engineers P / v.60 no.3 / pp.161-166 / 2011
  • In this paper, to analyze the major factors affecting the quality of HD video in a multicast service, we ran experiments on a college campus network similar to a real service network environment. We measured the video quality distortion of multicast HD video while generating and increasing background broadcast traffic, both with and without QoS techniques applied to the test network, and then determined the threshold values of the network factors that affect video quality distortion in a multicast HD video service. The results can be used as a guide for guaranteeing constant video quality and reducing video quality distortion in multicast HD video services.