• Title/Summary/Keyword: Video Frames

An Identification Method of Detrimental Video Images Using Color Space Features (컬러공간 특성을 이용한 유해 동영상 식별방법에 관한 연구)

  • Kim, Soung-Gyun;Kim, Chang-Geun;Jeong, Dae-Yul
    • Journal of the Korea Academia-Industrial cooperation Society / v.12 no.6 / pp.2807-2814 / 2011
  • This paper proposes an identification algorithm that detects detrimental digital video content based on color space features. A discrimination algorithm based on 2-Dimensional Projection Maps is suggested to find targeted video images. First, 2-Dimensional Projection Maps, which extract the color characteristics of video images, are applied to effectively extract detrimental candidate frames from the videos; the similarity between the extracted frames and normative images is then estimated using the suggested algorithm. Detrimental candidate frames are selected from the result of a similarity evaluation test that uses a critical value. The experiments compare the Color Histogram and 2-Dimensional Projection Maps techniques for detecting detrimental candidate frames. Across the various experimental data used to test the suggested method and the similarity algorithm, the detection method based on 2-Dimensional Projection Maps shows superior performance to the Color Histogram technique in both calculation speed and identification ability when searching target video images.
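The map-and-threshold pipeline described above can be sketched roughly as follows. The choice of projection planes (R-G and G-B), the bin count, and the intersection-based similarity are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def projection_maps(frame, bins=16):
    """Build two 2-D colour projection maps (R-G and G-B) from an RGB frame."""
    r, g, b = frame[..., 0].ravel(), frame[..., 1].ravel(), frame[..., 2].ravel()
    rg, _, _ = np.histogram2d(r, g, bins=bins, range=[[0, 256], [0, 256]])
    gb, _, _ = np.histogram2d(g, b, bins=bins, range=[[0, 256], [0, 256]])
    n = frame.shape[0] * frame.shape[1]
    return rg / n, gb / n  # normalise so frames of different sizes compare

def map_similarity(maps_a, maps_b):
    """Histogram-intersection style similarity between two pairs of maps."""
    return sum(np.minimum(a, b).sum() for a, b in zip(maps_a, maps_b)) / 2.0

frame = np.zeros((8, 8, 3), dtype=np.uint8)  # a dummy all-black frame
same = map_similarity(projection_maps(frame), projection_maps(frame))
```

A frame would be flagged as a detrimental candidate when its similarity to a normative reference map exceeds the critical value.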

A Method for Structuring Digital Video

  • Lee, Jae-Yeon;Jeong, Se-Yoon;Yoon, Ho-Sub;Kim, Kyu-Heon;Bae, Younglae-J;Jang, Jong-whan
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1998.06b / pp.92-97 / 1998
  • For the efficient searching and browsing of digital video, it is essential to extract the internal structure of the video contents. As an example, a news video consists of several sections such as politics, economics, sports, and others, and each section consists of individual topics. With this information in hand, users can more easily access the required video frames. This paper addresses the problems of automatic shot boundary detection and selection of representative frames (R-frames), which are essential steps in recognizing the internal structure of video contents. For shot boundary detection, a new algorithm is proposed that has dual detectors designed specifically for abrupt boundaries (cuts) and gradually changing boundaries, respectively. Compared to existing algorithms, which have mostly tried to detect both types with a single mechanism, the proposed algorithm proves more robust and accurate. For R-frame selection, simple mechanical approaches such as selecting one frame every other second have been adopted; however, this approach often selects too many R-frames in static shots while dropping important frames in dynamic shots. To improve the selection mechanism, a new R-frame selection algorithm is proposed that uses motion information extracted from pixel differences.
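The dual-detector idea can be sketched on a 1-D frame-dissimilarity signal: one detector flags cuts from a large single-step difference, the other flags gradual transitions from a sustained accumulated difference. The thresholds, window size, and toy signal below are assumptions for illustration:

```python
def detect_boundaries(diffs, cut_thr=0.5, grad_thr=0.8, window=4):
    """diffs[i] = dissimilarity between frame i and frame i+1, in [0, 1]."""
    cuts = [i for i, d in enumerate(diffs) if d >= cut_thr]
    graduals = []
    for i in range(len(diffs) - window + 1):
        acc = sum(diffs[i:i + window])
        # gradual: no single step is a cut, but the window accumulates change
        if acc >= grad_thr and max(diffs[i:i + window]) < cut_thr:
            graduals.append((i, i + window - 1))
    return cuts, graduals

diffs = [0.01, 0.02, 0.9, 0.01, 0.25, 0.3, 0.25, 0.02]
cuts, graduals = detect_boundaries(diffs)
```

A single-threshold detector would either miss the slow 0.25-0.3 run or fire spuriously on noise, which is the weakness the dual design addresses.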

Registration of Aerial Video Frames for Generating Image Map (영상지도제작을 위한 항공 비디오 영상 등록)

  • Kim, Seong-Sam;Shin, Sung-Woong;Kim, Eui-Myoung;Yoo, Hwan-Hee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.25 no.4 / pp.279-287 / 2007
  • The increased availability of portable, low-cost, high-resolution video equipment has resulted in rapid growth in applications for video sequences. These video devices can be mounted on handheld units, mobile units, and airborne platforms such as manned or unmanned helicopters, planes, airships, etc. This paper examines the feasibility of generating an image map from experimental results: interest points extracted by the KLT operator are tracked across neighboring frames, and image matching is performed for frames taken from a UAV (Unmanned Aerial Vehicle). For the image registration of neighboring frames of aerial video, the results demonstrate that the matching success rate decreases slightly as the drift between frames increases, and that stable photographing is a more important matching condition than pixel shift.
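The frame-to-frame matching step can be illustrated with a toy sum-of-squared-differences search: a small patch from one frame is located in the next frame at the position minimising SSD. This is a stand-in for full KLT tracking, which additionally uses image gradients for sub-pixel refinement:

```python
import numpy as np

def match_patch(patch, frame):
    """Return the (row, col) in `frame` minimising SSD against `patch`."""
    ph, pw = patch.shape
    best, best_pos = float('inf'), (0, 0)
    for r in range(frame.shape[0] - ph + 1):
        for c in range(frame.shape[1] - pw + 1):
            ssd = float(((frame[r:r + ph, c:c + pw] - patch) ** 2).sum())
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

# synthetic frame with unique pixel values, so the match is unambiguous
frame = np.arange(100, dtype=float).reshape(10, 10)
patch = frame[3:6, 4:7].copy()
pos = match_patch(patch, frame)
```

As the drift between frames grows, the true match moves toward (or past) the search boundary, which is consistent with the decreasing success rate the paper reports.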

MPEG Video Segmentation Using Frame Feature Comparison (프레임 특징 비교를 이용한 압축비디오 분할)

  • Kim, Young-Ho;Kang, Dae-Seong
    • Journal of the Institute of Convergence Signal Processing / v.4 no.2 / pp.25-30 / 2003
  • Recently, digital technology has come to occupy a large part of multimedia information such as text, voice, images, and video. Research on video indexing and retrieval is progressing, especially in video-related fields. In this paper, we propose a new algorithm (Frame Feature Comparison) for MPEG video segmentation. Shot and scene change detection are basic and important tasks when segmenting an MPEG video sequence. Because it compares previous frames with present frames, the commonly used segmentation algorithm has the defect of producing false detections under camera flashes, camera movement, and fast movement of objects. Therefore, we verify each scene change point detected by the conventional algorithm once more, by comparing its mean value with those of adjacent frames. As a result, we could detect scene changes more correctly than the conventional algorithm.
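The re-check step can be sketched as follows: a candidate scene change is kept only if its mean frame value differs strongly from the previous frame but not from the following one, since a camera flash perturbs a single frame while a true scene change persists. The threshold and toy signals are assumptions:

```python
def confirm_scene_changes(frame_means, candidates, thr=10.0):
    """Keep only candidates whose mean differs from the past, not the future."""
    confirmed = []
    for i in candidates:
        if 0 < i < len(frame_means) - 1:
            before = abs(frame_means[i] - frame_means[i - 1])
            after = abs(frame_means[i] - frame_means[i + 1])
            # a flash also differs from the NEXT frame; a real change does not
            if before >= thr and after < thr:
                confirmed.append(i)
    return confirmed

means = [100, 101, 160, 161, 162]   # true scene change at frame 2
flash = [100, 101, 200, 102, 101]   # camera flash at frame 2
confirmed = confirm_scene_changes(means, [2])
rejected = confirm_scene_changes(flash, [2])
```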

Video Automatic Editing Method and System based on Machine Learning (머신러닝 기반의 영상 자동 편집 방법 및 시스템)

  • Lee, Seung-Hwan;Park, Dea-woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.235-237 / 2022
  • Video content is divided into long-form and short-form video content according to its length. Long-form video content is created with a length of 15 minutes or longer, and all frames of the captured video are included without editing. Short-form video content is edited to a shorter length, from 1 minute to 15 minutes, and includes only some of the frames of the captured video. Due to the recent growth of the single-person broadcasting market, demand for short-form video content that attracts viewers is increasing. Therefore, research is needed on content editing technology for editing and generating short-form video content. This study examines technology that creates short-form videos of main scenes by capturing images, voices, and motions. Short-form videos of key scenes use a highlight extraction model pre-trained through machine learning. An automatic video editing system and method that automatically generates a highlight video is a core technology of short-form video content. Research on machine-learning-based automatic video editing methods and systems will contribute to competitive content creation by reducing the effort, cost, and time invested by single creators in video editing.
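The final editing step can be illustrated as follows: given per-frame highlight scores produced by a (hypothetical) pre-trained model, contiguous runs of high-scoring frames are kept as the short-form cut. The threshold and the scores are illustrative assumptions:

```python
def highlight_segments(scores, thr=0.7):
    """Return (start, end) frame ranges whose scores stay at or above thr."""
    segments, start = [], None
    for i, s in enumerate(scores):
        if s >= thr and start is None:
            start = i                       # a highlight run begins
        elif s < thr and start is not None:
            segments.append((start, i - 1)) # the run ends
            start = None
    if start is not None:
        segments.append((start, len(scores) - 1))
    return segments

scores = [0.1, 0.8, 0.9, 0.2, 0.1, 0.75, 0.9, 0.95]
segs = highlight_segments(scores)
```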

Fast key-frame extraction for 3D reconstruction from a handheld video

  • Choi, Jongho;Kwon, Soonchul;Son, Kwangchul;Yoo, Jisang
    • International journal of advanced smart convergence / v.5 no.4 / pp.1-9 / 2016
  • In order to reconstruct a 3D model from video sequences, it is essential to select key frames from which a geometric model is easy to estimate. This paper proposes a method to easily extract informative frames from a handheld video. The method combines selection criteria based on appropriate-baseline determination between frames, frame jumping for fast searching in the video, geometric robust information criterion (GRIC) scores for the frame-to-frame homography and fundamental matrix, and blurry-frame removal. Through experiments with videos taken in indoor spaces, the proposed method is shown to create a more robust 3D point cloud than existing methods, even in the presence of motion blur and degenerate motions.
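The GRIC comparison can be sketched as below: the score trades residual fit against model dimension d and parameter count k, and the model (homography, d=2, k=8, vs. fundamental matrix, d=3, k=7) with the lower score wins. The constants follow the usual Torr formulation, but sigma and the toy residuals are assumptions:

```python
import math

def gric(residuals, d, k, r=4, sigma=1.0):
    """Torr's GRIC: truncated residual cost plus model-complexity penalties."""
    n = len(residuals)
    lam1, lam2, lam3 = math.log(r), math.log(r * n), 2.0
    rho = sum(min(e * e / sigma ** 2, lam3 * (r - d)) for e in residuals)
    return rho + lam1 * d * n + lam2 * k

# small residuals under H, larger under F -> homography preferred,
# signalling a degenerate (e.g. rotation-only) frame pair to skip
res_h = [0.1] * 50
res_f = [0.8] * 50
prefer_homography = gric(res_h, d=2, k=8) < gric(res_f, d=3, k=7)
```

A frame pair is a good key-frame candidate when the fundamental matrix wins, i.e. when the geometry is well conditioned.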

Robust Video-Based Barcode Recognition via Online Sequential Filtering

  • Kim, Minyoung
    • International Journal of Fuzzy Logic and Intelligent Systems / v.14 no.1 / pp.8-16 / 2014
  • We consider the visual barcode recognition problem in a noisy video setting. Unlike most existing single-frame recognizers, which require considerable user effort to acquire clean, motionless, and blur-free barcode signals, we eliminate such extra human effort by proposing a robust video-based barcode recognition algorithm. We deal with a sequence of noisy, blurred barcode image frames by posing recognition as an online filtering problem. In the proposed dynamic recognition model, at each frame we infer the blur level of the frame as well as the digit class label. In contrast to a frame-by-frame approach with a heuristic majority voting scheme, the class labels and frame-wise noise levels are propagated along the frame sequence in our model, and hence we exploit all cues from noisy frames that are potentially useful for predicting the barcode label in a probabilistically principled sense. We also suggest a visual barcode tracking approach that efficiently localizes barcode areas in video frames. The effectiveness of the proposed approaches is demonstrated empirically on both synthetic and real data setups.
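The advantage of propagating a posterior over frames, rather than majority voting, can be shown with a minimal recursive Bayes sketch. Since the true barcode label is static, each frame's likelihood simply reweights the belief; the likelihood values below are toy numbers, not the paper's model:

```python
def bayes_filter(prior, frame_likelihoods):
    """Fuse per-frame class likelihoods into a posterior over labels."""
    belief = dict(prior)
    for like in frame_likelihoods:
        belief = {c: belief[c] * like.get(c, 1e-9) for c in belief}
        z = sum(belief.values())
        belief = {c: p / z for c, p in belief.items()}  # renormalise
    return belief

prior = {'3': 0.5, '8': 0.5}
frames = [{'3': 0.6, '8': 0.4},   # blurry frame, weak evidence for '3'
          {'3': 0.2, '8': 0.8},   # misleading frame
          {'3': 0.7, '8': 0.3},
          {'3': 0.7, '8': 0.3}]
posterior = bayes_filter(prior, frames)
```

The misleading second frame is outweighed by the accumulated evidence, whereas frame-by-frame voting would count it fully.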

An Efficient Video Clip Matching Algorithm Using the Cauchy Function (커쉬함수를 이용한 효율적인 비디오 클립 정합 알고리즘)

  • Kim Sang-Hyul
    • Journal of the Institute of Convergence Signal Processing / v.5 no.4 / pp.294-300 / 2004
  • With the development of digital media technologies, various algorithms have been proposed to match video sequences efficiently. A large number of video search methods have focused on frame-wise queries, whereas relatively few algorithms have been presented for video clip matching or video shot matching. In this paper, we propose an efficient algorithm to index video sequences and to retrieve them for video clip queries. To improve the accuracy and performance of video sequence matching, we employ the Cauchy function as a similarity measure between histograms of consecutive frames, which yields high performance compared with conventional measures. The key frames extracted from segmented video shots can be used not only for video shot clustering but also for video sequence matching or browsing, where a key frame is defined as a frame that differs significantly from the previous frames. Experimental results with color video sequences show that the proposed method yields high matching performance and accuracy with a low computational load compared with conventional algorithms.
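One way the Cauchy function can serve as a histogram similarity measure is sketched below: the kernel 1/(1 + (x/a)^2) saturates for large bin differences, so a few outlier bins perturb the score less than they would an L1 or L2 distance. The scale `a` and the per-bin averaging are assumptions for illustration:

```python
def cauchy_similarity(h1, h2, a=0.1):
    """Average Cauchy-kernel agreement between two normalised histograms."""
    return sum(1.0 / (1.0 + ((x - y) / a) ** 2)
               for x, y in zip(h1, h2)) / len(h1)

h = [0.25, 0.25, 0.25, 0.25]
shifted = [0.20, 0.30, 0.25, 0.25]
identical = cauchy_similarity(h, h)     # maximal agreement
sim = cauchy_similarity(h, shifted)     # slightly lower
```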

An Efficient Face Region Detection for Content-based Video Summarization (내용기반 비디오 요약을 위한 효율적인 얼굴 객체 검출)

  • Kim Jong-Sung;Lee Sun-Ta;Baek Joong-Hwan
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.7C / pp.675-686 / 2005
  • In this paper, we propose an efficient face region detection technique for content-based video summarization. To segment the video, shot changes are detected from the video sequence and key frames are selected from the shots; in each shot we select the one frame that has the least difference from its neighboring frames. The proposed face detection algorithm detects face regions in the selected key frames, and we then provide the user with summarized frames that include face regions, which carry important meaning in dramas or movies. Using the Bayes classification rule and the statistical characteristics of skin pixels, face regions are detected in the frames. After skin detection, we adopt a projection method to segment an image (frame) into face and non-face regions. The segmented regions are candidates for the face object and include many falsely detected regions, so we design a classifier using CART to minimize false detections. From SGLD matrices, we extract textural feature values such as Inertia, Inverse Difference, and Correlation. In our experiments, the proposed face detection algorithm shows good performance on key frames with complex and varying backgrounds, and our system provides the user with key frames that include face regions as video summary information.
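The Bayes-rule skin test can be sketched as follows: a pixel colour is classified as skin when P(colour|skin)·P(skin) exceeds P(colour|non-skin)·P(non-skin). The class-conditional tables and prior below are toy numbers, not the statistics estimated in the paper:

```python
def is_skin(colour, p_c_skin, p_c_nonskin, p_skin=0.3):
    """MAP decision: skin iff its weighted likelihood beats non-skin."""
    return (p_c_skin.get(colour, 1e-6) * p_skin >
            p_c_nonskin.get(colour, 1e-6) * (1 - p_skin))

# hypothetical class-conditional colour probabilities
p_c_skin = {'tan': 0.50, 'blue': 0.01}
p_c_nonskin = {'tan': 0.05, 'blue': 0.40}
```

In the real system the tables would be indexed by quantised colour values learned from labelled skin pixels, and the resulting skin mask is then projected to delimit the face region.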

Design of a Fast Multi-Reference Frame Integer Motion Estimator for H.264/AVC

  • Byun, Juwon;Kim, Jaeseok
    • JSTS: Journal of Semiconductor Technology and Science / v.13 no.5 / pp.430-442 / 2013
  • This paper presents a fast multi-reference frame integer motion estimator for H.264/AVC. The proposed system uses a previously proposed fast multi-reference frame algorithm, which executes full-search-area motion estimation in reference frames 0 and 1. The search areas for motion estimation in reference frames 2, 3, and 4 are then minimized using a linear relationship between the motion vector and the distances from the current frame to the reference frames. For hardware implementation, the modified algorithm optimizes the search area, reduces the overlapping search area, and modifies a division equation. Because the search area is reduced, the amount of computation falls by 58.7%. In experimental results, the modified algorithm shows a bit-rate increase of 0.36% compared with the five-reference-frame standard. A pipeline structure and a memory controller are also adopted for real-time video encoding. The proposed system is implemented using 0.13 um CMOS technology, and the gate count is 1089K with 6.50 KB of internal SRAM. It can encode Full HD video (1920×1080p @ 30 Hz) in real time at a 135 MHz clock speed with 5 reference frames.
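The linear search-area reduction can be illustrated as below: the motion vector found against reference frame 1 is scaled by the temporal distance ratio to predict a center for frames 2-4, and only a small window around that prediction is searched. The window size and indexing convention are assumptions:

```python
def predicted_window(mv_ref1, ref_frame, window=4):
    """Predict the search window for `ref_frame` from the frame-1 vector.

    Reference frame index i is assumed to be (i + 1) frames away from the
    current frame, so the scale relative to reference frame 1 is (i + 1) / 2.
    """
    scale = (ref_frame + 1) / 2.0
    cx, cy = round(mv_ref1[0] * scale), round(mv_ref1[1] * scale)
    return (cx - window, cx + window, cy - window, cy + window)

# vector (4, -2) against reference frame 1; frame 3 is twice as far away
win = predicted_window((4, -2), ref_frame=3)
```

Searching only a ±4 window instead of the full search area for frames 2-4 is what yields the large reduction in computation reported above.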