• Title/Summary/Keyword: Video extraction

Effective Automatic Foreground Motion Detection Using the Statistic Information of Background

  • Kim, Hyung-Hoon;Cho, Jeong-Ran
    • Journal of the Korea Society of Computer and Information
    • /
    • v.20 no.9
    • /
    • pp.121-128
    • /
    • 2015
  • In this paper, we propose and implement an effective automatic foreground motion detection algorithm that detects foreground motion by analyzing digital video captured by a network camera. We classify the background as moving, fixed, or normal based on the standard deviation of the background pixels and use this classification to detect foreground motion. Experimental results show that our algorithm reduces false detections caused by moving background and increases the accuracy of foreground motion detection. It also extracts the foreground more precisely by using the statistical information of the background during the foreground extraction stage.
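The deviation-based classification described in this abstract can be sketched as per-pixel background statistics. The following Python/NumPy snippet is a rough illustration only, not the authors' implementation; the class name, history length, and all thresholds are assumptions.

```python
import numpy as np

class BackgroundModel:
    """Per-pixel running statistics over a short frame history (illustrative)."""
    def __init__(self, history=50, k_fixed=2.0, k_normal=2.5, k_moving=4.0):
        self.frames = []
        self.history = history
        self.k_fixed = k_fixed      # multiplier for essentially static background
        self.k_normal = k_normal    # multiplier for normal background
        self.k_moving = k_moving    # stricter multiplier where the background itself moves

    def update(self, gray):
        self.frames.append(gray.astype(np.float32))
        if len(self.frames) > self.history:
            self.frames.pop(0)

    def detect(self, gray):
        stack = np.stack(self.frames)                # (T, H, W)
        mean = stack.mean(axis=0)
        std = stack.std(axis=0) + 1e-6
        # classify background pixels by their standard deviation
        # (thresholds are assumed here, not taken from the paper)
        moving_bg = std > 10.0                       # e.g. waving trees, water
        fixed_bg = std < 1.0                         # essentially static pixels
        k = np.full_like(std, self.k_normal)
        k[moving_bg] = self.k_moving
        k[fixed_bg] = self.k_fixed
        # foreground where the current pixel deviates too far from the background mean
        return np.abs(gray.astype(np.float32) - mean) > k * std
```

Using a stricter multiplier on high-deviation (moving) background is what reduces false detections there, which matches the effect the abstract reports.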

Color Space Based Objects Detection System from Video Sequences

  • Alom, Md. Zahangir;Lee, Hyo Jong
    • Annual Conference of KIPS
    • /
    • 2011.11a
    • /
    • pp.347-350
    • /
    • 2011
  • This paper proposes a statistical color model for background extraction based on the Hue-Saturation-Value (HSV) color space instead of the traditional RGB space, and shows that it makes better use of the color information. The HSV color space corresponds closely to human color perception and has proven more accurate for distinguishing shadows [3][4]. The key feature of this segmentation method is the processing of the hue component over the image area. Because the HSV color components are analyzed and treated separately, the proposed algorithm can adapt to different illumination conditions and shadows. Polar and linear statistical operations are used to estimate the background from the video frames. Experimental results show that the proposed background subtraction method can automatically segment video objects robustly and accurately under various illumination and shadow conditions.
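The separate treatment of hue (polar statistics) and value (linear statistics) can be illustrated as below. This is a minimal sketch under assumed thresholds, not the paper's exact model; OpenCV and NumPy are assumed as dependencies.

```python
import cv2
import numpy as np

def build_background(frames_bgr):
    """Background statistics in HSV: circular mean for hue, linear mean/std for value."""
    hsv = [cv2.cvtColor(f, cv2.COLOR_BGR2HSV).astype(np.float32) for f in frames_bgr]
    hue = np.stack([f[..., 0] * (np.pi / 90.0) for f in hsv])   # OpenCV hue range is 0..179
    val = np.stack([f[..., 2] for f in hsv])
    # circular (polar) mean of hue: average the unit vectors, then take the angle
    hue_mean = np.arctan2(np.sin(hue).mean(0), np.cos(hue).mean(0))
    return hue_mean, val.mean(0), val.std(0)

def foreground_mask(frame_bgr, model, hue_tol=0.5, k=2.5):
    hue_mean, val_mean, val_std = model
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hue = hsv[..., 0] * (np.pi / 90.0)
    # smallest angular distance between the current hue and the background hue
    dh = np.angle(np.exp(1j * (hue - hue_mean)))
    changed_hue = np.abs(dh) > hue_tol
    changed_val = np.abs(hsv[..., 2] - val_mean) > k * (val_std + 1e-6)
    # requiring both a hue and a value change helps suppress shadows,
    # which mostly change value while leaving hue almost unchanged
    return changed_hue & changed_val
```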

Real-Time Arbitrary Face Swapping System For Video Influencers Utilizing Arbitrary Generated Face Image Selection

  • Jihyeon Lee;Seunghoo Lee;Hongju Nam;Suk-Ho Lee
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.15 no.2
    • /
    • pp.31-38
    • /
    • 2023
  • This paper introduces a real-time face swapping system that enables video influencers to swap their faces with arbitrary generated face images of their choice. The system is implemented as a Django-based server that uses a REST request to communicate with the generative model, specifically the pretrained Stable Diffusion model. The generated image is displayed on the front page so that the influencer can decide whether or not to use the generated face by clicking the accept button. If they choose to use it, both their face and the generated face are sent to the landmark extraction module, and the extracted landmarks are then used to swap the faces. To minimize the fluctuation of landmarks over time, which can cause instability or jitter in the output, a temporal filtering step is added. Furthermore, to increase the processing speed, the system works on a reduced set of the extracted landmarks.
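The temporal filtering step mentioned above can be as simple as an exponential moving average over the landmark coordinates; the snippet below is one assumed way to do it, not the authors' filter.

```python
import numpy as np

class LandmarkSmoother:
    """Exponentially smooth (N, 2) landmark arrays across frames to reduce jitter."""
    def __init__(self, alpha=0.6):
        self.alpha = alpha      # higher alpha follows new measurements more closely
        self.state = None       # previously smoothed landmarks

    def __call__(self, landmarks):
        pts = np.asarray(landmarks, dtype=np.float32)
        if self.state is None or self.state.shape != pts.shape:
            self.state = pts
        else:
            self.state = self.alpha * pts + (1.0 - self.alpha) * self.state
        return self.state
```

In a loop over video frames, the smoothed landmarks (rather than the raw per-frame detections) would be passed to the face-swapping step.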

Trends in Video Visual Relationship Understanding (비디오 시각적 관계 이해 기술 동향)

  • Y.J. Kwon;D.H. Kim;J.H. Kim;S.C. Oh;J.S. Ham;J.Y. Moon
    • Electronics and Telecommunications Trends
    • /
    • v.38 no.6
    • /
    • pp.12-21
    • /
    • 2023
  • Visual relationship understanding in computer vision makes it possible to recognize meaningful relationships between objects in a scene. This technology enables the extraction of representative information within visual content. We discuss the technology of visual relationship understanding, focusing specifically on videos. We first introduce the concepts of visual relationship understanding in videos and then explore the latest existing techniques. Next, we present benchmark datasets commonly used in video visual relationship understanding. Finally, we discuss future research directions in the field.
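As background for the survey above, a video visual relationship is commonly represented as a (subject, predicate, object) triplet grounded to object tracks over a time interval; the data structure below is an illustrative assumption of that convention, not something defined in the article.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

BBox = Tuple[float, float, float, float]     # (x1, y1, x2, y2)

@dataclass
class VideoRelation:
    subject: str                              # e.g. "person"
    predicate: str                            # e.g. "ride"
    obj: str                                  # e.g. "bicycle"
    start_frame: int
    end_frame: int
    subject_track: List[BBox] = field(default_factory=list)   # one box per frame
    object_track: List[BBox] = field(default_factory=list)

# example: "person rides bicycle" from frame 10 to frame 55
rel = VideoRelation("person", "ride", "bicycle", 10, 55)
```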

Digital Watermarking for Robustness of Low Bit Rate Video Contents on the Mobile (모바일 상에서 비트율이 낮은 비디오 콘텐츠의 강인성을 위한 디지털 워터마킹)

  • Seo, Jung-Hee;Park, Hung-Bog
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.1 no.1
    • /
    • pp.47-54
    • /
    • 2012
  • Video content in the mobile environment is encoded at a low bit rate relative to normal video content to limit network traffic; hence, it is necessary to protect the copyright of low-bit-rate video content. A watermarking algorithm suited to the mobile environment is needed because the performance of mobile devices is much lower than that of personal computers. This paper proposes an invisible spread-spectrum watermarking method for low-bit-rate video content that takes the limited performance of mobile devices in the M-Commerce environment into account; it also enables tracking down illegal users of the video content to protect the copyright. The robustness of the watermarked content is expressed by the correlation obtained by the extraction algorithm when the watermark has been removed or distorted. Our experimental results showed that the characteristic frequencies of the M-sequence could be extracted easily even after the watermarked content was compressed. Therefore, illegal users of the content can be tracked down because the watermark can still be extracted from the low-bit-rate video content.
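The correlation-based detection described above follows the usual spread-spectrum pattern; the snippet below is a hedged sketch in which a pseudo-random ±1 sequence stands in for the paper's M-sequence, and the embedding strength and layout are assumptions.

```python
import numpy as np

def make_sequence(length, seed=42):
    # stand-in for an LFSR-generated M-sequence of +/-1 chips
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=length)

def embed(frame_gray, seq, alpha=2.0):
    flat = frame_gray.astype(np.float32).ravel()
    flat[:len(seq)] += alpha * seq             # spread the sequence over pixel values
    return flat.reshape(frame_gray.shape)

def detect(frame_gray, seq):
    flat = frame_gray.astype(np.float32).ravel()[:len(seq)]
    flat = flat - flat.mean()
    # normalized correlation: a high value indicates the watermark is present,
    # even after lossy compression has distorted the content
    return float(np.dot(flat, seq) / (np.linalg.norm(flat) * np.linalg.norm(seq) + 1e-9))
```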

A Research on the Teaser Video Production Method by Keyframe Extraction Based on YCbCr Color Model (YCbCr 컬러모델 기반의 키프레임 추출을 통한 티저 영상 제작 방법에 대한 연구)

  • Lee, Seo-young;Park, Hyo-Gyeong;Young, Sung-Jung;You, Yeon-Hwi;Moon, Il-Young
    • Journal of Practical Engineering Education
    • /
    • v.14 no.2
    • /
    • pp.439-445
    • /
    • 2022
  • Due to the growth of online media platforms and the COVID-19 pandemic, the mass production and consumption of digital video content are increasing rapidly. When choosing digital video content, users quickly get a sense of it through thumbnails and teaser videos, and then select and watch the content that suits them. Checking every digital video produced around the world one by one and manually editing teaser videos for users to choose from is highly impractical. In this paper, keyframes are extracted based on the YCbCr color model to automatically generate teaser videos, and the extracted keyframes are refined through clustering. Finally, we present a method for producing a teaser video that helps users preview digital video content by concatenating the finally selected keyframes.
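One plausible reading of the pipeline, YCbCr frame features followed by clustering and keyframe selection, is sketched below with OpenCV; the feature choice (3-D histograms), the number of clusters, and all parameters are assumptions, not the paper's settings.

```python
import cv2
import numpy as np

def frame_features(frames_bgr, bins=16):
    """Normalized 3-D YCbCr color histogram per frame."""
    feats = []
    for f in frames_bgr:
        ycrcb = cv2.cvtColor(f, cv2.COLOR_BGR2YCrCb)
        hist = cv2.calcHist([ycrcb], [0, 1, 2], None, [bins] * 3,
                            [0, 256, 0, 256, 0, 256])
        feats.append(cv2.normalize(hist, None).ravel())
    return np.float32(feats)

def extract_keyframes(frames_bgr, k=5):
    feats = frame_features(frames_bgr)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1e-3)
    _, labels, centers = cv2.kmeans(feats, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    keyframes = []
    for c in range(k):
        idx = np.where(labels.ravel() == c)[0]
        # the frame closest to each cluster centre represents that cluster
        best = idx[np.argmin(np.linalg.norm(feats[idx] - centers[c], axis=1))]
        keyframes.append(int(best))
    return sorted(keyframes)
```

Concatenating the selected frames (or short clips around them) in temporal order would then yield the teaser video.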

Eigen Value Based Image Retrieval Technique (Eigen Value 기반의 영상검색 기법)

  • 김진용;소운영;정동석
    • The Journal of Information Technology and Database
    • /
    • v.6 no.2
    • /
    • pp.19-28
    • /
    • 1999
  • Digital image and video libraries require new algorithms for the automated extraction and indexing of salient image features. The eigenvalues of an image provide one important cue for discriminating image content. In this paper, we propose a new approach to automated content extraction that allows efficient database searching using eigenvalues. The algorithm automatically extracts the eigenvalues of the covariance matrix computed from the image matrix. We demonstrate that the eigenvalues, which capture shape information, and the skewness of their distribution, which captures complexity, yield good image query response time while providing effective discriminability. We present the eigenvalue extraction and indexing techniques and test the proposed eigenvalue-and-skewness search on a database of 100 images.
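A possible form of the descriptor described above, eigenvalues of the image covariance matrix plus the skewness of their distribution, is sketched below; the normalization and the number of retained eigenvalues are assumptions.

```python
import numpy as np

def eigen_descriptor(gray, n_values=16):
    x = gray.astype(np.float64)
    cov = np.cov(x, rowvar=True)                  # covariance across image rows
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    eigvals = eigvals / (eigvals.sum() + 1e-12)   # normalize for scale invariance
    mu, sigma = eigvals.mean(), eigvals.std() + 1e-12
    skewness = np.mean(((eigvals - mu) / sigma) ** 3)   # complexity cue
    return eigvals[:n_values], skewness
```

For retrieval, descriptors of database images would be compared against the query descriptor with a simple distance such as the Euclidean norm.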

Key frame extraction using Fourier transform (퓨리에 변환을 이용한 키 프레임 추출)

  • 이중용;문영식
    • Proceedings of the IEEK Conference
    • /
    • 2001.09a
    • /
    • pp.179-182
    • /
    • 2001
  • In this paper, a key frame extraction algorithm for browsing and searching a video summary is proposed. To this end, important frames representing a shot are selected according to the correlations among frames, using the Fourier descriptor, which is also useful for shot boundary detection. To quantitatively evaluate the importance of the selected frames, a new measure based on the correlation coefficients of frames is proposed. If several frames have the same importance, another criterion is introduced to break the tie by computing the partial moment of the subframes containing each candidate key frame so that the distortion rate is minimized. Since a key frame extraction algorithm can only be evaluated subjectively, the performance of the proposed algorithm has been verified by a statistical test. Experiments show that the proposed algorithm achieves more than a 20% improvement over existing methods.
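A hedged sketch of the correlation-based selection follows: low-frequency Fourier magnitudes act as per-frame descriptors, and the frame most correlated with the rest of the shot is taken as the key frame. The descriptor size and the use of plain FFT magnitudes are assumptions rather than the paper's exact Fourier descriptor.

```python
import numpy as np

def fourier_descriptor(gray, size=16):
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray.astype(np.float32))))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    return spectrum[cy - size:cy + size, cx - size:cx + size].ravel()   # low frequencies

def key_frame_index(frames_gray):
    descs = np.stack([fourier_descriptor(f) for f in frames_gray])
    corr = np.corrcoef(descs)           # pairwise correlation coefficients between frames
    importance = corr.mean(axis=1)      # average similarity to the other frames in the shot
    return int(np.argmax(importance))
```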
