• Title/Summary/Keyword: Video Extraction


Implementation of Sports Video Clip Extraction Based on MobileNetV3 Transfer Learning (MobileNetV3 전이학습 기반 스포츠 비디오 클립 추출 구현)

  • YU, LI
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.5 / pp.897-904 / 2022
  • Sports video is a very critical information resource. High-precision extraction of effective segments from sports video can better assist coaches in analyzing players' actions and enable users to appreciate the players' hitting actions more intuitively. To address the shortcomings of current sports video clip extraction, such as strong subjectivity, heavy workload, and low efficiency, a classification method for sports video clips based on MobileNetV3 is proposed to save user time. Experiments evaluate the effectiveness of the segment extraction: among the extracted segments, 97.0% are effective, indicating that the extraction results are good and can lay the foundation for the construction of a subsequent badminton-action metadata video dataset.
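
As an illustration of the transfer-learning setup described above, the sketch below fine-tunes an ImageNet-pretrained MobileNetV3 as a two-class ("effective" vs. "ineffective") frame classifier in PyTorch. It is a minimal sketch, not the authors' code: the directory layout, class names, and hyperparameters are assumptions.

```python
# Minimal sketch: MobileNetV3 transfer learning for classifying sampled frames
# as belonging to "effective" or "ineffective" segments. Folder layout is a
# hypothetical assumption: frames/effective/*.jpg, frames/ineffective/*.jpg
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# ImageNet-pretrained backbone; only the classifier head is replaced.
model = models.mobilenet_v3_small(weights=models.MobileNet_V3_Small_Weights.DEFAULT)
model.classifier[3] = nn.Linear(model.classifier[3].in_features, 2)
model = model.to(device)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("frames", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a few epochs usually suffice when fine-tuning
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```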

New Framework for Automated Extraction of Key Frames from Compressed Video

  • Kim, Kang-Wook;Kwon, Seong-Geun
    • Journal of Korea Multimedia Society / v.15 no.6 / pp.693-700 / 2012
  • The effective extraction of key frames from a video stream is an essential task for summarizing and representing the content of a video. Accordingly, this paper proposes a new and fast method for extracting key frames from a compressed video. In the proposed approach, after the entire video sequence has been segmented into elementary content units, called shots, key frame extraction is performed by first assigning the number of key frames to each shot, and then distributing the key frames over the shot using a probabilistic approach to locate the optimal position of the key frames. The main advantage of the proposed method is that no time-consuming computations are needed for distributing the key frames within the shots and the procedure for key frame extraction is completely automatic. Furthermore, the set of key frames is independent of any subjective thresholds or manually set parameters.
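
The two-step scheme described above (a key-frame budget per shot, then probabilistic placement inside each shot) can be illustrated with a small sketch. This is one plausible reading under stated assumptions, not the paper's exact formulation: the per-frame activity measure, the proportional allocation rule, and the quantile-based placement are all illustrative choices.

```python
# Minimal sketch: allocate a key-frame budget to each shot in proportion to its
# accumulated frame activity, then place key frames where the shot's cumulative
# activity crosses equally spaced quantiles.
import numpy as np

def allocate_key_frames(activity, shots, total_keys):
    """activity: per-frame activity scores; shots: list of (start, end) frame
    indices (end exclusive); total_keys: overall key-frame budget.
    Rounding and the one-per-shot minimum make the total approximate."""
    shot_activity = np.array([activity[s:e].sum() for s, e in shots])
    alloc = np.maximum(1, np.round(total_keys * shot_activity / shot_activity.sum()))
    return alloc.astype(int)

def place_key_frames(activity, shot, n_keys):
    """Place n_keys key frames in one shot at cumulative-activity quantiles."""
    s, e = shot
    a = np.asarray(activity[s:e], dtype=float) + 1e-9   # avoid an all-zero shot
    cdf = np.cumsum(a) / a.sum()                          # activity as a probability mass
    targets = (np.arange(n_keys) + 0.5) / n_keys          # equally spaced quantiles
    return [s + int(np.searchsorted(cdf, t)) for t in targets]

# toy example: random activity, two shots, budget of 5 key frames
activity = np.random.rand(200)
shots = [(0, 120), (120, 200)]
alloc = allocate_key_frames(activity, shots, total_keys=5)
key_frames = [f for shot, n in zip(shots, alloc) for f in place_key_frames(activity, shot, n)]
print(alloc, key_frames)
```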

Review for vision-based structural damage evaluation in disasters focusing on nonlinearity

  • Sifan Wang;Mayuko Nishio
    • Smart Structures and Systems / v.33 no.4 / pp.263-279 / 2024
  • With the increasing diversity of internet media, video data have become more convenient to obtain and more abundant. Video-based research has advanced rapidly in recent years owing to advantages such as noncontact, low-cost data acquisition, high spatial resolution, and simultaneity. Additionally, structural nonlinearity extraction has attracted increasing attention as a tool for damage evaluation. This review paper summarizes recent developments and applications of video-based technology for structural nonlinearity extraction and damage evaluation. The most commonly used object-detection image and video databases are first summarized, followed by suggestions for obtaining video data of structural nonlinear damage events. Technologies for linear and nonlinear system identification based on video data are then discussed. In addition, common nonlinear damage types in disaster events and prevalent processing algorithms are reviewed in the section on structural damage evaluation using video data uploaded to online platforms. Finally, potential research directions are discussed to address the weaknesses of current video-based nonlinear extraction technology, such as its reliance on one-dimensional time-series data and the difficulty of real-time detection, covering nonlinear extraction for spatial data, real-time detection, and visualization.

A New Framework for Automatic Extraction of Key Frames Using DC Image Activity

  • Kim, Kang-Wook
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.12 / pp.4533-4551 / 2014
  • The effective extraction of key frames from a video stream is an essential task for summarizing and representing the content of a video. Accordingly, this paper proposes a new and fast method for extracting key frames from a compressed video. In the proposed approach, after the entire video sequence has been segmented into elementary content units, called shots, key frame extraction is performed by first assigning the number of key frames to each shot, and then distributing the key frames over the shot using a probabilistic approach to locate the optimal position of the key frames. Moreover, we implement our proposed framework in Android to confirm the validity, availability and usefulness. The main advantage of the proposed method is that no time-consuming computations are needed for distributing the key frames within the shots and the procedure for key frame extraction is completely automatic. Furthermore, the set of key frames is independent of any subjective thresholds or manually set parameters.
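
The title above additionally mentions DC image activity. As a loose illustration only, the sketch below assumes that a DC image (the 8x8-block means, i.e. the DCT DC coefficients up to scale) is available per frame and that activity is measured as the mean absolute difference between consecutive DC images; the paper's actual activity definition may differ.

```python
# Minimal sketch: per-frame "activity" computed from DC images, under the
# assumption stated in the lead-in. Not the paper's exact measure.
import numpy as np

def dc_image(frame_gray, block=8):
    """Approximate the DC image: each pixel is the mean of an 8x8 block,
    which is proportional to that block's DCT DC coefficient."""
    h, w = frame_gray.shape
    h, w = h - h % block, w - w % block
    blocks = frame_gray[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3))

def dc_activity(frames_gray):
    """Per-frame activity as the mean absolute difference of consecutive DC images."""
    dcs = [dc_image(f.astype(float)) for f in frames_gray]
    return [0.0] + [np.abs(a - b).mean() for a, b in zip(dcs[1:], dcs[:-1])]

# toy usage with random "frames"
frames = [np.random.randint(0, 256, (240, 320)) for _ in range(10)]
print(dc_activity(frames))
```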

A Study on the Extraction of the dynamic objects using temporal continuity and motion in the Video (비디오에서 객체의 시공간적 연속성과 움직임을 이용한 동적 객체추출에 관한 연구)

  • Park, Changmin
    • Journal of Korea Society of Digital Industry and Information Management / v.12 no.4 / pp.115-121 / 2016
  • Recently, extracting semantic objects from videos has become an important problem, as such objects are useful for improving the performance of video compression and video retrieval. In this thesis, an automatic method for extracting moving objects of interest in video is proposed. We define a moving object of interest as one that is relatively large in a frame image, occurs frequently in a scene, and has motion different from the camera motion. Moving objects of interest are determined through spatial continuity using the AMOS method and a motion histogram. Through experiments with diverse scenes, we found that the proposed method extracted almost all of the objects of interest selected by the user, but its precision was 69% because of over-extraction.
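
As a rough stand-in for the approach sketched in the abstract (the AMOS method itself is not reproduced), the following OpenCV sketch extracts candidate moving regions by frame differencing and keeps only regions that persist over several consecutive frames as a crude temporal-continuity constraint. The video path and thresholds are hypothetical.

```python
# Minimal sketch: motion masks from frame differencing, filtered by persistence
# over a sliding window. Not the thesis' algorithm.
import cv2
import numpy as np

def moving_masks(frames_gray, diff_thresh=25):
    """Binary motion masks from consecutive-frame differences."""
    masks = []
    for prev, cur in zip(frames_gray[:-1], frames_gray[1:]):
        diff = cv2.absdiff(cur, prev)
        _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        masks.append(mask)
    return masks

def temporally_consistent(masks, window=5, min_ratio=0.6):
    """Keep pixels marked as moving in most frames of a sliding window."""
    out = []
    for i in range(len(masks) - window + 1):
        stack = np.stack(masks[i:i + window]) > 0
        out.append((stack.mean(axis=0) >= min_ratio).astype(np.uint8) * 255)
    return out

# usage sketch (hypothetical video path)
cap = cv2.VideoCapture("input.mp4")
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
cap.release()
consistent = temporally_consistent(moving_masks(frames))
```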

Signature Extraction Method from H.264 Compressed Video (H.264/AVC로 압축된 비디오로부터 시그너쳐 추출방법)

  • Kim, Sung-Min;Kwon, Yong-Kwang;Won, Chee-Sun
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.3 / pp.10-17 / 2009
  • This paper proposes a compressed-domain signature extraction method which can be used for CBCD (Content Based Copy Detection). Since existing signature extraction methods for CBCD are executed in the spatial domain, they need additional computation to decode the compressed video before signature extraction. To avoid this overhead, we generate a thumbnail image directly from the compressed video without full decoding, and then extract the video signature from the thumbnail image. Experimental results of extracting brightness-ordering information as the signature for CBCD show that the proposed method is 2.8 times faster than the spatial-domain method while maintaining 80.98% accuracy.
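
A brightness-ordering (ordinal) signature of the kind mentioned in the abstract can be sketched as follows, assuming a thumbnail image has already been obtained. The paper derives the thumbnail directly from the H.264 stream without full decoding, which is not shown here, and the 4x4 grid is an illustrative choice.

```python
# Minimal sketch: ordinal (brightness-ordering) signature from a thumbnail.
import numpy as np

def ordinal_signature(thumbnail_gray, grid=(4, 4)):
    """Divide the thumbnail into grid blocks, average each block's brightness,
    and return the rank (ordering) of the block averages."""
    h, w = thumbnail_gray.shape
    gy, gx = grid
    bh, bw = h // gy, w // gx
    means = np.array([
        thumbnail_gray[y * bh:(y + 1) * bh, x * bw:(x + 1) * bw].mean()
        for y in range(gy) for x in range(gx)
    ])
    return np.argsort(np.argsort(means))   # rank of each block's mean brightness

def signature_distance(sig_a, sig_b):
    """L1 distance between two ordinal signatures (lower = more similar)."""
    return int(np.abs(sig_a.astype(int) - sig_b.astype(int)).sum())

# toy usage with random stand-in thumbnails
thumb1 = np.random.randint(0, 256, (36, 44))
thumb2 = np.random.randint(0, 256, (36, 44))
print(signature_distance(ordinal_signature(thumb1), ordinal_signature(thumb2)))
```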

Moving Object Block Extraction for Compressed Video Signal Based on 2-Mode Selection (2-모드 선택 기반의 압축비디오 신호의 움직임 객체 블록 추출)

  • Kim, Dong-Wook
    • Journal of the Korea Society of Computer and Information / v.12 no.5 / pp.163-170 / 2007
  • In this paper, we propose a new technique for extracting moving objects from a compressed video signal. Moving object extraction is used in several fields such as content-based retrieval and target tracking. To extract moving-object blocks, motion vectors and DCT coefficients are used selectively. The proposed algorithm has the merit that full decoding is not needed, because it uses only coefficients in the DCT transform domain. We used three test video sequences in the computer simulation and obtained satisfactory results.
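
The abstract describes selecting between motion vectors and DCT coefficients per block. The sketch below is one possible interpretation of such a 2-mode selection, not the paper's actual criteria: inter-coded blocks are judged by motion-vector magnitude, intra-coded blocks by DCT AC energy, and the thresholds are illustrative.

```python
# Minimal sketch of a hypothetical 2-mode block selection.
import numpy as np

def moving_object_blocks(motion_vectors, dct_blocks, mv_thresh=2.0, ac_thresh=500.0):
    """motion_vectors: (H, W, 2) float array, NaN where no MV exists;
    dct_blocks: (H, W, 8, 8) DCT coefficients per block.
    Returns a boolean (H, W) map of candidate moving-object blocks."""
    h, w = dct_blocks.shape[:2]
    result = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            mv = motion_vectors[y, x]
            if not np.any(np.isnan(mv)):                 # mode 1: motion vector available
                result[y, x] = np.hypot(mv[0], mv[1]) > mv_thresh
            else:                                        # mode 2: use DCT AC energy
                coeffs = dct_blocks[y, x]
                ac_energy = (coeffs ** 2).sum() - coeffs[0, 0] ** 2
                result[y, x] = ac_energy > ac_thresh
    return result
```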


Car Frame Extraction using Background Frame in Video (동영상에서 배경프레임을 이용한 차량 프레임 검출)

  • Nam, Seok-Woo;Oh, Hea-Seok
    • The KIPS Transactions:PartB / v.10B no.6 / pp.705-710 / 2003
  • In recent years, with the rapid development of multimedia technology, video database systems that retrieve video data efficiently have become a core technology in the information-oriented society. This thesis describes an efficient automatic frame detection and location method for content-based retrieval of video. The frame extraction part consists of an incoming/outgoing car frame extraction stage and a car number frame extraction stage. We obtain the start/end times of the car video as well as the car number frames. Frames are selected at fixed time intervals from the video, and key frames are selected by a color-scale histogram and an edge-operation method. The recognized car frames can then be searched by a content-based retrieval method.
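
The pipeline described above (background frame, car-frame detection, histogram-based key frame selection) might look roughly like the following OpenCV sketch; the median background estimate, the thresholds, and the HSV-histogram comparison are assumptions rather than the paper's exact method.

```python
# Minimal sketch: background-frame estimation, car-frame detection, and
# histogram-based key frame selection.
import cv2
import numpy as np

def background_frame(frames):
    """Pixel-wise median over sampled frames as a simple background estimate."""
    return np.median(np.stack(frames), axis=0).astype(np.uint8)

def car_frames(frames, bg, fg_thresh=30, min_fg_ratio=0.05):
    """Indices of frames where enough pixels differ from the background."""
    hits = []
    for i, f in enumerate(frames):
        diff = cv2.absdiff(f, bg)
        fg = (cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY) > fg_thresh).mean()
        if fg > min_fg_ratio:
            hits.append(i)
    return hits

def key_frames(frames, indices, hist_thresh=0.3):
    """Among car frames, keep those whose HSV histogram differs from the last key frame."""
    keys, last_hist = [], None
    for i in indices:
        hsv = cv2.cvtColor(frames[i], cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if last_hist is None or cv2.compareHist(last_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > hist_thresh:
            keys.append(i)
            last_hist = hist
    return keys
```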

Video Segmentation and Key frame Extraction using Multi-resolution Analysis and Statistical Characteristic

  • Cho, Wan-Hyun;Park, Soon-Young;Park, Jong-Hyun
    • Communications for Statistical Applications and Methods / v.10 no.2 / pp.457-469 / 2003
  • In this paper, we propose an efficient algorithm that can detect video scene changes using various statistical characteristics obtained by applying the wavelet transform to each frame. Our method first extracts histogram features from the low-frequency subband of the wavelet-transformed image and uses these features to detect abrupt scene changes. Second, it extracts edge information by applying the mesh method to the high-frequency subbands of the transformed image. We quantify the extracted edge information as the variance characteristic of each pixel and use these values to detect gradual scene changes. We also propose an algorithm for extracting appropriate key frames from the segmented video scenes. Experimental results show that the proposed method is both an efficient algorithm for segmenting video frames and an appropriate key frame extraction method.
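
A minimal sketch of the wavelet-based idea, assuming PyWavelets and a single-level Haar transform: low-frequency (LL) subband histograms of consecutive frames are compared to flag abrupt cuts, and the variance of the high-frequency subbands is computed as an edge-activity signal. The mesh method and the gradual-change logic from the paper are not reproduced, and the thresholds are illustrative.

```python
# Minimal sketch: wavelet-subband features for abrupt-cut detection.
import numpy as np
import pywt

def wavelet_features(frame_gray, wavelet="haar"):
    """One-level 2-D wavelet transform: LL histogram + high-frequency variance."""
    ll, (lh, hl, hh) = pywt.dwt2(frame_gray.astype(float), wavelet)
    # Haar LL values of an 8-bit frame fall in [0, 510], hence the 0..512 range.
    hist, _ = np.histogram(ll, bins=64, range=(0, 512))
    hist = hist / hist.sum()
    edge_var = np.var(np.concatenate([lh.ravel(), hl.ravel(), hh.ravel()]))
    return hist, edge_var

def detect_abrupt_cuts(frames_gray, hist_thresh=0.4):
    """Frames whose LL-histogram L1 distance to the previous frame is large.
    The edge variance returned above would feed a gradual-change detector,
    which is omitted from this sketch."""
    cuts = []
    prev_hist, _ = wavelet_features(frames_gray[0])
    for i, f in enumerate(frames_gray[1:], start=1):
        hist, _ = wavelet_features(f)
        if np.abs(hist - prev_hist).sum() > hist_thresh:
            cuts.append(i)
        prev_hist = hist
    return cuts
```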

Study on 3 DoF Image and Video Stitching Using Sensed Data

  • Kim, Minwoo;Chun, Jonghoon;Kim, Sang-Kyun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.9 / pp.4527-4548 / 2017
  • This paper proposes a method to generate panoramic images by combining conventional feature extraction algorithms (e.g., SIFT, SURF, MPEG-7 CDVS) with sensed data from inertia sensors to enhance the stitching results. The challenge of image stitching increases when the images are taken by two different mobile phones with no posture calibration. Using inertia sensor data obtained by the mobile phones, images with different yaw, pitch, and roll angles are preprocessed and adjusted before the stitching process. The stitching performance (e.g., feature extraction time, number of inlier points, stitching accuracy) of the conventional feature extraction algorithms is reported, along with the stitching performance with and without the inertia sensor data. In addition, the stitching accuracy of video data was improved using the same sensed data, with discrete calculation of the homography matrix. The experimental results for stitching accuracy and speed using sensed data are presented in this paper.
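
A feature-based stitching sketch in OpenCV follows; it uses ORB rather than SIFT/SURF/CDVS, and the inertia-sensor alignment described in the abstract is only approximated by an optional roll pre-rotation step. File names and angles are hypothetical.

```python
# Minimal sketch: optional sensor-based roll compensation, then feature matching,
# homography estimation, and perspective warping to form a two-image panorama.
import cv2
import numpy as np

def pre_rotate(img, roll_deg):
    """Roughly compensate the roll angle reported by the inertia sensor."""
    h, w = img.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), roll_deg, 1.0)
    return cv2.warpAffine(img, m, (w, h))

def stitch_pair(img_left, img_right):
    """Estimate a homography from matched features and warp the right image."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img_left, None)
    k2, d2 = orb.detectAndCompute(img_right, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = img_left.shape[:2]
    pano = cv2.warpPerspective(img_right, H, (w * 2, h))
    pano[:h, :w] = img_left                     # overlay the left image
    return pano

# usage sketch (hypothetical file names); roll angles would come from the sensors
left = pre_rotate(cv2.imread("left.jpg"), roll_deg=-3.0)
right = pre_rotate(cv2.imread("right.jpg"), roll_deg=2.5)
panorama = stitch_pair(left, right)
```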