• Title/Summary/Keyword: Video


Analysis of the Robustness and Discrimination for Video Fingerprints in Video Copy Detection (복제 비디오 검출에서 비디오 지문의 강인함과 분별력 분석)

  • Kim, Semin;Ro, Yong Man
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.11
    • /
    • pp.1281-1287
    • /
    • 2013
  • In order to prevent illegal video copies, many video fingerprints have been developed. Video fingerprints should be robust against various video transformations and have high discriminative power. In general, video fingerprints are generated from three feature spaces: luminance, gradient, and DCT coefficients. However, there are few studies on robustness and discrimination according to feature space. Thus, we analyzed the properties of each feature space in a video copy detection task with respect to the robustness and discrimination of video fingerprints. We generated three video fingerprints from these feature spaces using the same algorithm. In our test, the video fingerprint based on DCT coefficients outperformed the others because its discrimination was higher.
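The abstract does not give the fingerprint algorithm itself, only that the same algorithm is applied to each feature space. As a minimal, hypothetical illustration of the DCT feature space, the sketch below binarises the signs of a few low-frequency block-DCT coefficients and compares fingerprints by Hamming distance; the block size, the chosen AC coefficients, and the sign-based binarisation are all assumptions, not the paper's method.

```python
import numpy as np

def dct_2d(block):
    # Orthonormal 2-D DCT-II built from a 1-D DCT basis matrix.
    n = block.shape[0]
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0, :] /= np.sqrt(2)
    basis *= np.sqrt(2.0 / n)
    return basis @ block @ basis.T

def dct_fingerprint(frame, block=8):
    """Binarise the signs of a few low-frequency AC coefficients per block."""
    h, w = frame.shape
    bits = []
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            coeffs = dct_2d(frame[y:y + block, x:x + block].astype(float))
            # Keep three low-frequency AC coefficients (the DC term is skipped).
            ac = [coeffs[0, 1], coeffs[1, 0], coeffs[1, 1]]
            bits.extend(1 if c >= 0 else 0 for c in ac)
    return np.array(bits, dtype=np.uint8)

def hamming_distance(f1, f2):
    return int(np.count_nonzero(f1 != f2))
```

Robustness would then be measured by fingerprinting transformed copies of the same clip, and discrimination by fingerprinting unrelated clips, comparing Hamming distances in both cases.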

A Frame-Based Video Signature Method for Very Quick Video Identification and Location

  • Na, Sang-Il;Oh, Weon-Geun;Jeong, Dong-Seok
    • ETRI Journal
    • /
    • v.35 no.2
    • /
    • pp.281-291
    • /
    • 2013
  • A video signature is a set of feature vectors that compactly represents and uniquely characterizes one video clip from another for fast matching. To find a short duplicated region, the video signature must be robust against common video modifications and have a high discriminability. The matching method must be fast and be successful at finding locations. In this paper, a frame-based video signature that uses the spatial information and a two-stage matching method is presented. The proposed method is pair-wise independent and is robust against common video modifications. The proposed two-stage matching method is fast and works very well in finding locations. In addition, the proposed matching structure and strategy can distinguish a case in which a part of the query video matches a part of the target video. The proposed method is verified using video modified by the VCE7 experimental conditions found in MPEG-7. The proposed video signature method achieves a robustness of 88.7% under an independence condition of 5 parts per million with over 1,000 clips being matched per second.
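The two-stage idea described above (a cheap coarse test to find candidate locations, then exact verification of the whole query) can be sketched as follows. The spatial feature used here, binarised sub-region means, is a stand-in assumption; the paper's actual frame signature is not specified in the abstract.

```python
import numpy as np

def frame_signature(frame, grid=2):
    # Hypothetical coarse spatial signature: mean intensity of grid x grid
    # sub-regions, binarised against the whole-frame mean.
    h, w = frame.shape
    means = np.array([
        frame[y * h // grid:(y + 1) * h // grid,
              x * w // grid:(x + 1) * w // grid].mean()
        for y in range(grid) for x in range(grid)
    ])
    return (means >= frame.mean()).astype(np.uint8)

def two_stage_match(query_sigs, target_sigs, coarse_thresh=1):
    """Stage 1: cheap Hamming test on the first query frame to collect
    candidate offsets. Stage 2: verify the full query at each candidate."""
    n, m = len(query_sigs), len(target_sigs)
    candidates = [
        t for t in range(m - n + 1)
        if np.count_nonzero(query_sigs[0] != target_sigs[t]) <= coarse_thresh
    ]
    for t in candidates:  # stage 2: exact verification of the whole region
        if all(np.array_equal(query_sigs[i], target_sigs[t + i])
               for i in range(n)):
            return t  # location of the duplicated region
    return -1  # no match
```

Because stage 1 discards most offsets with one Hamming comparison, the expensive frame-by-frame verification in stage 2 runs only on a handful of candidates, which is what makes this style of matching fast enough for locating short duplicated regions.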

Automatic Video Genre Identification Method in MPEG compressed domain

  • Kim, Tae-Hee;Lee, Woong-Hee;Jeong, Dong-Seok
    • Proceedings of the IEEK Conference
    • /
    • 2002.07c
    • /
    • pp.1527-1530
    • /
    • 2002
  • Video summary is one of the tools that can provide fast and effective browsing for a lengthy video. A video summary consists of many key-frames, which could be defined differently depending on the video genre it belongs to. Consequently, a video summary constructed in a uniform manner might lead to an inadequate result. Therefore, identifying the video genre is an important first step in generating a meaningful video summary. We propose a new method that can classify the genre of video data in the MPEG compressed bit-stream domain. Since the proposed method operates directly on the compressed bit-stream without decoding the frames, it has merits such as simple calculation and short processing time. In the proposed method, only the visual information is utilized through spatial-temporal analysis to classify the video genre. Experiments are done for 6 genres of video: Cartoon, Commercial, Music Video, News, Sports, and Talk Show. Experimental results show more than 90% accuracy in genre classification for well-structured video data such as Talk Show and Sports.
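The abstract specifies spatial-temporal analysis of compressed-domain features but not the classifier. A minimal nearest-centroid sketch over hypothetical descriptors (shot-change rate and average motion activity, both readable from an MPEG bit-stream without full decoding) illustrates the classification step; both the features and the classifier are assumptions.

```python
import numpy as np

def train_centroids(samples):
    """samples: {genre: list of feature vectors} -> {genre: centroid}."""
    return {genre: np.mean(vectors, axis=0) for genre, vectors in samples.items()}

def classify(centroids, feature):
    # Assign the genre whose centroid is nearest in Euclidean distance.
    return min(centroids, key=lambda g: np.linalg.norm(centroids[g] - feature))
```

Well-structured genres such as Talk Show (few shot changes, low motion) and Sports (sustained high motion) form compact clusters in such a feature space, which is consistent with the higher accuracy the abstract reports for them.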


A study on the audio/video integrated control system based on network

  • Lee, Seungwon;Kwon, Soonchul;Lee, Seunghyun
    • International journal of advanced smart convergence
    • /
    • v.11 no.4
    • /
    • pp.1-9
    • /
    • 2022
  • The recent development of information and communication technology is also affecting audio/video systems used in industry. Audio/video system configurations are changing from analog to digital, and network-based audio/video system control has the advantage of reducing operating costs. However, audio/video systems on the market have limitations in that they can only control their own products or can only run on specific platforms (Windows, Mac, Linux). This paper studies a device (Network Audio Video Integrated Control: NAVICS) that can integrate and control multiple audio/video devices with different functions, and can control digitalized audio/video devices through network and serial communication. As a result of the study, it was confirmed that NAVICS enabled both individual and integrated control through the protocol provided by each audio/video device, and that even non-experts could easily control the audio/video system. In the future, network-based audio/video integrated control technology is expected to become the technical standard for complex audio/video system control.
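The core difficulty the abstract describes is that each device speaks its own protocol. A common way to integrate them is a command table that maps one logical action to each device's native command string, sent over TCP (or serial). The sketch below is a hypothetical illustration of that pattern; the device names, command syntax, and NAVICS's actual protocol handling are all assumptions.

```python
import socket

# Hypothetical per-device command table: one logical action maps to the
# native command bytes of each device (vendor syntax is invented here).
COMMAND_TABLE = {
    ("projector", "power_on"): b"PWR ON\r",
    ("mixer", "mute"):         b"MUTE 1\n",
}

def send_command(host, port, device, action, timeout=2.0):
    """Translate a logical action into the device's native command,
    send it over TCP, and return the device's acknowledgement."""
    payload = COMMAND_TABLE[(device, action)]
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(payload)
        return sock.recv(64)
```

The same dispatch structure works for serial links by swapping the TCP socket for a serial port handle, which is presumably how one control surface can drive both networked and serial devices.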

A Selective Video Data Deletion Algorithm to Free Up Storage Space in Video Proxy Server (비디오 프록시 서버에서의 저장 공간 확보를 위한 선택적 동영상 데이터 삭제 알고리즘)

  • Lee, Jun-Pyo;Park, Sung-Han
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.46 no.4
    • /
    • pp.121-126
    • /
    • 2009
  • A video proxy server, which is located near clients, can store frequently requested video data in its storage space in order to significantly reduce initial latency and network traffic. However, due to the limited storage space in a video proxy server, an appropriate deletion algorithm is needed to remove old video data that has not been serviced for a long time. Thus, we propose an efficient video data deletion algorithm for the video proxy server. The proposed deletion algorithm removes the video which has the lowest request probability based on user access patterns. In our algorithm, we arrange the videos stored in the video proxy server according to the requested time sequence and then select the video with the oldest request time. The selected video is partially removed in order to free up storage space in the video proxy server. The simulation results show that the proposed algorithm performs better than other algorithms in terms of block hit rate and the number of block deletions.
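The abstract's policy, order videos by request time, pick the oldest, and delete only part of it, can be sketched as a small cache model. Deleting half of the oldest video's blocks per round is an assumption made here for illustration; the paper does not state which portion is removed.

```python
from collections import OrderedDict

class VideoProxyCache:
    """Sketch of the proposed policy: videos are kept in request-time
    order, and space is reclaimed by partially deleting the video with
    the oldest request time rather than evicting it outright."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.videos = OrderedDict()  # name -> block count, oldest request first

    def request(self, name, blocks=0):
        if name in self.videos:
            self.videos.move_to_end(name)  # refresh the request time
        elif blocks:
            self._free(blocks)
            self.videos[name] = blocks

    def _free(self, needed):
        while sum(self.videos.values()) + needed > self.capacity:
            oldest = next(iter(self.videos))
            # Partial deletion: drop half of the oldest video's blocks.
            self.videos[oldest] //= 2
            if self.videos[oldest] == 0:
                del self.videos[oldest]
```

Partial deletion is what preserves the block hit rate: the prefix of an old video stays cached, so a renewed request still gets a fast start while most of the space is reclaimed.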

Video Automatic Editing Method and System based on Machine Learning (머신러닝 기반의 영상 자동 편집 방법 및 시스템)

  • Lee, Seung-Hwan;Park, Dea-woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.235-237
    • /
    • 2022
  • Video content is divided into long-form and short-form video content according to its length. Long-form video content is created with a length of 15 minutes or longer, and all frames of the captured video are included without editing. Short-form video content is edited to a shorter length, from 1 minute to 15 minutes, and includes only some of the frames of the captured video. Due to the recent growth of the single-person broadcasting market, the demand for short-form video content that increases viewership is growing. Therefore, there is a need for research on content editing technology for editing and generating short-form video content. This study investigates technology to create short-form videos of main scenes by capturing images, voices, and motions. Short-form videos of key scenes use a highlight extraction model pre-trained through machine learning. An automatic video editing system and method for automatically generating a highlight video is a core technology of short-form video content. Research on machine learning-based automatic video editing methods and systems will contribute to competitive content activities by reducing the effort, cost, and time invested by single creators in video editing.
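Once a pre-trained highlight model has scored every frame, the editing step reduces to selecting the best segment. The sketch below picks the contiguous window with the highest total score using a sliding-window sum; the per-frame scores stand in for the model's output, whose details are not given in the abstract.

```python
def extract_highlight(scores, window):
    """Return [start, end) of the contiguous window of `window` frames
    with the highest total score. `scores` is assumed to be per-frame
    output from a pre-trained highlight model."""
    best_start = 0
    best_sum = cur = sum(scores[:window])
    for i in range(1, len(scores) - window + 1):
        # Slide the window: add the entering frame, drop the leaving one.
        cur += scores[i + window - 1] - scores[i - 1]
        if cur > best_sum:
            best_start, best_sum = i, cur
    return best_start, best_start + window
```

A real editor would then cut the source video to that frame range; extending this to several non-overlapping windows would yield a multi-scene highlight reel.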


An Analysis of Big Video Data with Cloud Computing in Ubiquitous City (클라우드 컴퓨팅을 이용한 유시티 비디오 빅데이터 분석)

  • Lee, Hak Geon;Yun, Chang Ho;Park, Jong Won;Lee, Yong Woo
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.45-52
    • /
    • 2014
  • The Ubiquitous-City (U-City) is a smart or intelligent city that satisfies human beings' desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Everything or Things (IoE or IoT). It includes a lot of video cameras which are networked together. The networked video cameras support many U-City services as one of the main input data sources, together with sensors. They generate a huge amount of video information, real big data for the U-City, all the time. The U-City is usually required to manipulate this big data in real time, which is not easy at all. Also, the accumulated video data often need to be analyzed to detect an event or find a figure among them, which requires a lot of computational power and usually takes a long time. There is ongoing research that tries to reduce the processing time of big video data. Cloud computing can be a good solution to address this matter, and there are many cloud computing methodologies which can be applied to it. MapReduce is an interesting and attractive methodology among them: it has many advantages and is gaining popularity in many areas. Video cameras evolve day by day, so their resolution improves sharply, which leads to exponential growth of the data produced by the networked video cameras. We are coping with real big data when we have to deal with video image data produced by good-quality video cameras. Video surveillance systems were of limited use before cloud computing, but they are now widely deployed in U-Cities thanks to these methodologies. Video data are unstructured, so it is not easy to find good research results on analyzing them with MapReduce. This paper presents an analysis system for video surveillance, which is a cloud-computing-based video data management system. It is easy to deploy, flexible, and reliable. It consists of the video manager, the video monitors, the storage for the video images, the storage client, and the streaming IN component. The "video monitor" for the video images consists of the "video translator" and the "protocol manager". The "storage" contains the MapReduce analyzer. All components were designed according to the functional requirements of a video surveillance system. The "streaming IN" component receives the video data from the networked video cameras and delivers them to the "storage client". It also manages the bottleneck of the network to smooth the data stream. The "storage client" receives the video data from the "streaming IN" component and stores them in the storage. It also helps other components access the storage. The "video monitor" component transfers the video data by smooth streaming and manages the protocol. The "video translator" sub-component enables users to manage the resolution, the codec, and the frame rate of the video image. The "protocol" sub-component manages the Real Time Streaming Protocol (RTSP) and the Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the storage for cloud computing. Hadoop stores the data in HDFS and provides a platform that can process data with the simple MapReduce programming model. We suggest our own methodology to analyze the video images using MapReduce in this paper; that is, the workflow of video analysis is presented and explained in detail. The performance evaluation was conducted through experiments, and we found that our proposed system worked well. The performance evaluation results are presented in this paper with analysis. With our cluster system, we used compressed $1920{\times}1080$ (FHD) resolution video data, the H.264 codec, and HDFS as video storage. We measured the processing time according to the number of frames per mapper. Tracing the optimal splitting size of the input data and the processing time according to the number of nodes, we found the linearity of the system performance.
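The MapReduce workflow the abstract describes, splitting frames across mappers and aggregating results in a reducer, can be sketched in plain Python. The event detector is a stub and the (camera, count) output is an invented example; the paper's actual analysis tasks are not specified in the abstract.

```python
from collections import defaultdict

def detector(frame):
    # Stand-in for real video analysis run inside each mapper.
    return frame["motion"] > 0.5

def map_phase(frames):
    """Mapper: emit (camera_id, 1) for each frame where the detector fires."""
    for frame in frames:
        if detector(frame):
            yield frame["camera"], 1

def reduce_phase(pairs):
    """Reducer: sum the emitted counts per camera."""
    counts = defaultdict(int)
    for camera, n in pairs:
        counts[camera] += n
    return dict(counts)

frames = [{"camera": "c1", "motion": 0.9},
          {"camera": "c1", "motion": 0.1},
          {"camera": "c2", "motion": 0.8}]
result = reduce_phase(map_phase(frames))
```

In Hadoop the `frames` list would be an HDFS input split; the paper's measurement of processing time versus frames per mapper corresponds to tuning the size of that split.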

Efficient Media Synchronization Mechanism for SVC Video Transport over IP Networks

  • Seo, Kwang-Deok;Jung, Soon-Heung;Kim, Jin-Soo
    • ETRI Journal
    • /
    • v.30 no.3
    • /
    • pp.441-450
    • /
    • 2008
  • The scalable extension of H.264, known as scalable video coding (SVC), has been the main focus of the Joint Video Team's work and was finalized at the end of 2007. Synchronization between media is an important aspect in the design of a scalable video streaming system. This paper proposes an efficient media synchronization mechanism for SVC video transport over IP networks. To support synchronization between video and audio bitstreams transported over IP networks, a Real-time Transport Protocol/RTP Control Protocol (RTP/RTCP) suite is usually employed. To provide an efficient mechanism for media synchronization between SVC video and audio, we suggest an efficient RTP packetization mode for inter-layer synchronization within SVC video and propose a computationally efficient RTCP packet processing method for inter-media synchronization. By adopting the computationally simple RTCP packet processing, we do not need to process every RTCP sender report packet for inter-media synchronization. We demonstrate the effectiveness of the proposed mechanism by comparing its performance with that of the conventional method.
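The basic mechanism an RTCP sender report enables is mapping each stream's RTP timestamps onto a common wall clock, so audio and video can be aligned. The sketch below shows only that standard mapping, not the paper's contribution (which is avoiding per-report processing); the numeric values are illustrative.

```python
def rtp_to_wallclock(rtp_ts, sr_ntp, sr_rtp, clock_rate):
    """Map an RTP timestamp to wall-clock time using the most recent
    RTCP sender report, which pairs an NTP time (sr_ntp, in seconds)
    with an RTP timestamp (sr_rtp, in clock ticks)."""
    return sr_ntp + (rtp_ts - sr_rtp) / clock_rate

# Two media packets are in sync when they map to the same wall-clock
# instant, despite using different RTP clock rates.
video_t = rtp_to_wallclock(90000 * 2, sr_ntp=100.0, sr_rtp=0, clock_rate=90000)
audio_t = rtp_to_wallclock(48000 * 2, sr_ntp=100.0, sr_rtp=0, clock_rate=48000)
```

Because the mapping is linear, a receiver can cache one (NTP, RTP) pair per stream and skip later sender reports unless clock drift accumulates, which is the kind of saving the abstract's simplified RTCP processing targets.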


Development of Video Data-base and a Video Annotation Tool for Evaluation of Smart CCTV System (지능형CCTV시스템 성능평가를 위한 영상DB와 영상 주석도구 개발)

  • Park, Jang-Sik;Yi, Seung-Jai
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.9 no.7
    • /
    • pp.739-745
    • /
    • 2014
  • In this paper, an evaluation method for intelligent CCTV systems is proposed, based on recorded evaluation videos and a video DB. Videos for evaluation are recorded separately for far, mid, and near zones. The video DB stores the video recording information, the detection area, and the ground truth in XML format. A video annotation tool is also proposed to create ground truth effectively. The video annotation tool writes the ground truth of videos and includes an evaluation function that compares system alarms with the ground truth.
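The abstract states that the ground truth is stored in XML but does not give the schema. The sketch below builds a small, hypothetical annotation record with the standard library; the element and attribute names (`annotation`, `zone`, `object`, `label`) are invented for illustration.

```python
import xml.etree.ElementTree as ET

def write_annotation(video_name, zone, objects):
    """Serialize ground-truth objects for one evaluation video as XML.
    The schema here is illustrative, not the paper's actual format."""
    root = ET.Element("annotation", video=video_name, zone=zone)
    for obj in objects:
        ET.SubElement(root, "object", id=str(obj["id"]),
                      frame=str(obj["frame"]), label=obj["label"])
    return ET.tostring(root, encoding="unicode")

xml_text = write_annotation("clip01.avi", "far",
                            [{"id": 1, "frame": 120, "label": "intruder"}])
```

Evaluation then amounts to parsing these records back and checking whether each system alarm falls within an annotated object's frame range and detection area.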

Development of a Perceptual Video Quality Assessment Metric Using HVS and Video Communication Parameters (인간 시각 특성과 영상통신 파라미터를 이용한 동영상 품질 메트릭 개발)

  • Lee, Won-Kyun;Jang, Seong-Hwan;Park, Heui-Cheol;Lee, Ju-Yong;Suh, Chang-Ryul;Kim, Jung-Joon
    • 한국정보통신설비학회: Conference Proceedings
    • /
    • 2007.08a
    • /
    • pp.155-158
    • /
    • 2007
  • In this paper, we solved the underestimation problem of PSNR, which is caused by repeated frames, by easily synchronizing original and decoded frames using the proposed marks. We also propose a full-reference system which can be applied to measuring the quality of various kinds of video communication systems, e.g., wireless handsets, mobile phones, and PC applications. In addition, we propose a new video quality assessment metric using video communication parameters, i.e., frame rate and delay. According to the experiments, the proposed metric is not only appropriate for real-time video communication systems but also shows better correlation with subjective video quality assessment than PSNR. The proposed measuring system and metric can be effectively used for measuring and standardizing the video quality of future communication systems.
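The baseline the paper compares against is plain PSNR between synchronized original and decoded frames. The sketch below computes that baseline; the paper's actual metric additionally weighs in frame rate and delay, whose exact formulation is not given in the abstract.

```python
import numpy as np

def psnr(ref, dec, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference frame and a
    decoded frame of the same shape. The frames must already be
    synchronized, or repeated frames will drag the score down."""
    mse = np.mean((ref.astype(float) - dec.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)
```

The underestimation problem the abstract mentions is visible in this formula: if the decoder repeats a frame, each misaligned comparison inflates the MSE even when the pictures themselves are fine, which is why frame synchronization marks are needed before averaging PSNR over a sequence.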
