• Title/Summary/Keyword: Untrimmed Video

DeepAct: A Deep Neural Network Model for Activity Detection in Untrimmed Videos

  • Song, Yeongtaek; Kim, Incheol
    • Journal of Information Processing Systems / v.14 no.1 / pp.150-161 / 2018
  • We propose a novel deep neural network model for detecting human activities in untrimmed videos. Human activity detection in a video involves two steps: extracting features that are effective for recognizing human activities in a long untrimmed video, and then detecting human activities from those extracted features. To extract rich features from video segments that express patterns unique to each activity, we employ two different convolutional neural network models, C3D and I-ResNet. To detect human activities from the sequence of extracted feature vectors, we use BLSTM, a bidirectional recurrent neural network model. Experiments on ActivityNet 200, a large-scale benchmark dataset, demonstrate the high performance of the proposed DeepAct model.
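The two-step pipeline described above (CNN segment features followed by a bidirectional recurrent detector) can be illustrated with a minimal sketch. The code below assumes PyTorch; the 4096-d inputs stand in for precomputed C3D or I-ResNet segment features, and all class names and dimensions are illustrative rather than the paper's actual implementation.

```python
import torch
import torch.nn as nn

class BLSTMActivityDetector(nn.Module):
    """Bi-directional LSTM head over per-segment CNN features
    (a sketch of the two-step pipeline; names illustrative)."""

    def __init__(self, feat_dim=4096, hidden_dim=512, num_classes=200):
        super().__init__()
        self.blstm = nn.LSTM(feat_dim, hidden_dim,
                             batch_first=True, bidirectional=True)
        # One activity score vector per segment (per time step).
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, segment_feats):
        # segment_feats: (batch, num_segments, feat_dim),
        # e.g. features extracted per segment by C3D or I-ResNet.
        out, _ = self.blstm(segment_feats)
        return self.classifier(out)  # (batch, num_segments, num_classes)

# Example: 8 videos, 32 segments each, 4096-d features per segment.
model = BLSTMActivityDetector()
scores = model(torch.randn(8, 32, 4096))
print(scores.shape)  # torch.Size([8, 32, 200])
```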

Salient Video Frames Sampling Method Using the Mean of Deep Features for Efficient Model Training (효율적인 모델 학습을 위한 심층 특징의 평균값을 활용한 의미 있는 비디오 프레임 추출 기법)

  • Yoon, Hyeok; Kim, Young-Gi; Han, Ji-Hyeong
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2021.06a / pp.318-321 / 2021
  • With the recent development of information and communication technology, both the number of Internet users and the volume of video data they transmit keep growing. Deep learning techniques are increasingly used to manage and analyze this growing volume of video data. When training a deep learning model on video data, limited computational resources usually force frames to be selected from the full video either at uniform intervals or at random. However, training videos cannot always be assumed to be trimmed videos that carry the same context along the time axis. If frames are sampled at uniform intervals or at random from an untrimmed video whose context varies, frames unrelated to the video's category may be selected, which does not help model training and optimization at all. To address this, we propose a method that extracts deep features from each video frame, computes their mean, computes the cosine similarity between the mean and each extracted deep feature, and uses the resulting similarity scores to select salient frames from the untrimmed video. On ActivityNet, a well-known dataset of untrimmed videos, we compare our method against the two representative frame sampling strategies (uniform and random) and show that it effectively extracts salient frames that correspond to the video's category. The code used in our experiments is available at https://github.com/titania7777/VideoFrameSampler.

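The sampling procedure in the abstract (per-frame deep features, their mean, and the cosine similarity of each frame to that mean) maps to a few lines of NumPy. The sketch below assumes features are already extracted and keeps the top-scoring frames; the exact selection rule is an assumption, and the authors' actual code is in the linked repository.

```python
import numpy as np

def sample_salient_frames(frame_features, num_frames):
    """Select the frames whose deep features are most similar (cosine)
    to the mean feature of the whole video, as described above.

    frame_features: (T, D) array of per-frame deep features.
    Returns indices of the selected frames, in temporal order.
    """
    mean_feat = frame_features.mean(axis=0)
    # Cosine similarity of each frame feature to the mean feature.
    norms = np.linalg.norm(frame_features, axis=1) * np.linalg.norm(mean_feat)
    sims = frame_features @ mean_feat / (norms + 1e-8)
    top = np.argsort(-sims)[:num_frames]   # highest-similarity frames
    return np.sort(top)                    # restore temporal order

# Example: 300 frames with 2048-d features (e.g. from a CNN), pick 16.
feats = np.random.randn(300, 2048).astype(np.float32)
print(sample_salient_frames(feats, 16))
```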

Trends in Online Action Detection in Streaming Videos (온라인 행동 탐지 기술 동향)

  • Moon, J.Y.; Kim, H.I.; Lee, Y.J.
    • Electronics and Telecommunications Trends / v.36 no.2 / pp.75-82 / 2021
  • Online action detection (OAD) in streaming videos is an attractive research area that has recently drawn interest. Although most studies of action understanding have considered action recognition in well-trimmed videos and offline temporal action detection in untrimmed videos, online action detection methods are required to monitor action occurrences in streaming videos. OAD predicts action probabilities for the current frame or frame sequence using a fixed-size video segment that includes past and current frames. In this article, we discuss deep learning-based OAD models. In addition, we investigate OAD evaluation methodologies, including benchmark datasets and performance measures, and compare the performances of the presented OAD models.
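The fixed-size-segment formulation lends itself to a simple buffered inference loop. The sketch below is a minimal illustration in PyTorch; `model` is a hypothetical clip classifier, and the buffer size and tensor layout are assumptions rather than details of any surveyed model.

```python
import collections
import torch

def online_action_detection(frame_stream, model, window_size=16):
    """Minimal OAD loop: keep a fixed-size buffer of past and current
    frames and predict action probabilities as each frame arrives."""
    buffer = collections.deque(maxlen=window_size)
    for frame in frame_stream:              # frame: (C, H, W) tensor
        buffer.append(frame)
        if len(buffer) < window_size:       # wait until the buffer fills
            continue
        clip = torch.stack(list(buffer)).unsqueeze(0)  # (1, T, C, H, W)
        with torch.no_grad():
            probs = model(clip).softmax(dim=-1)        # (1, num_classes)
        yield probs.squeeze(0)  # action probabilities for the current frame
```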

YOLOv4-based real-time object detection and trimming for dogs' activity analysis (강아지 행동 분석을 위한 YOLOv4 기반의 실시간 객체 탐지 및 트리밍)

  • Atif, Othmane; Lee, Jonguk; Park, Daihee; Chung, Yongwha
    • Proceedings of the Korea Information Processing Society Conference / 2020.11a / pp.967-970 / 2020
  • In previous work, we presented a monitoring system that automatically detects certain dog behaviors from videos. However, the input video data used by that system was pre-trimmed to ensure it contained a dog only. In a real-life situation, the monitoring system would continuously receive video data, including frames that are empty and frames that contain people. In this paper, we propose a YOLOv4-based system for automatic object detection and trimming of dog videos. Sequences of frames trimmed from the video data received from the camera are analyzed frame by frame with a YOLOv4 model to detect dogs and people, and records of the occurrences of dogs and people are generated. The records of each sequence are then analyzed by a rule-based decision tree to classify the sequence, forwarding it if it contains a dog only and ignoring it otherwise. The results of experiments on long untrimmed videos show that the proposed method achieves excellent detection performance, reaching an average of 0.97 across precision, recall, and F1 score at a detection rate of approximately 30 fps, thereby guaranteeing real-time processing.
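The rule-based decision step (forward a sequence only if it contains a dog and no person) can be sketched as a threshold rule over per-frame detection records. The function below is illustrative: the record format and thresholds are assumptions, not the paper's exact decision tree.

```python
def classify_sequence(detections, dog_thresh=0.5):
    """Classify a frame sequence from per-frame detection records
    (e.g. produced by a YOLOv4 model): forward it only if dogs appear
    often enough and no person appears. Thresholds are illustrative.

    detections: list of per-frame dicts like {"dog": 1, "person": 0}.
    """
    n = len(detections)
    dog_ratio = sum(d.get("dog", 0) > 0 for d in detections) / n
    has_person = any(d.get("person", 0) > 0 for d in detections)
    if dog_ratio >= dog_thresh and not has_person:
        return "forward"   # dog-only sequence: send on for activity analysis
    return "ignore"        # empty or contains people: drop

# Example: a 4-frame sequence with a dog in every frame and no people.
print(classify_sequence([{"dog": 1, "person": 0}] * 4))  # forward
```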

A Bi-directional Information Learning Method Using Reverse Playback Video for Fully Supervised Temporal Action Localization (완전지도 시간적 행동 검출에서 역재생 비디오를 이용한 양방향 정보 학습 방법)

  • Huiwon Gwon; Hyejeong Jo; Sunhee Jo; Chanho Jung
    • Journal of IKEEE / v.28 no.2 / pp.145-149 / 2024
  • Recently, research on temporal action localization has been actively conducted. In this paper, unlike existing methods, we propose two approaches that learn bidirectional information by creating reverse playback videos for fully supervised temporal action localization. One approach creates training data by combining reverse playback videos with forward playback videos, while the other trains separate models on videos with different playback directions. Experiments were conducted on the THUMOS-14 dataset using TALLFormer. When both reverse and forward playback videos were used as training data, performance was 5.1% lower than that of the existing method. In contrast, the model ensemble showed a 1.9% improvement in performance.
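Creating a reverse playback video amounts to flipping the clip along its time axis and remapping the temporal annotations. The sketch below (PyTorch) shows only this data-preparation step under assumed frame-index annotations; it is not specific to TALLFormer or the paper's training setup.

```python
import torch

def make_reverse_clip(clip):
    """Reverse a clip along the time axis to create a reverse-playback
    version for bidirectional training. clip: (T, C, H, W) tensor."""
    return torch.flip(clip, dims=[0])

def reverse_segment(segment, num_frames):
    """Remap an action segment (start, end), in frame indices, to its
    location in the reversed clip: [s, e] -> [T-1-e, T-1-s]."""
    s, e = segment
    return (num_frames - 1 - e, num_frames - 1 - s)

# Example: a 100-frame clip with an action spanning frames [20, 40].
clip = torch.randn(100, 3, 224, 224)
reversed_clip = make_reverse_clip(clip)
print(reverse_segment((20, 40), 100))  # (59, 79)
```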