• Title/Abstract/Keywords: video action recognition

Search results: 65 (processing time: 0.027 s)

모션 그래디언트 히스토그램 기반의 시공간 크기 변화에 강인한 동작 인식 (Spatial-Temporal Scale-Invariant Human Action Recognition using Motion Gradient Histogram)

  • 김광수;김태형;곽수영;변혜란
    • 한국정보과학회논문지:소프트웨어및응용 / Vol. 34, No. 12 / pp.1075-1082 / 2007
  • This paper proposes a method that detects the actions of multiple people appearing in a video and recognizes each detected action individually. To make recognition robust to changes in the speed or scale at which an action is performed, a spatial-temporal pyramid scheme is applied. The Motion Gradient Histogram (MGH), a statistics-based representation, is chosen as the action descriptor to minimize the complexity of the recognition stage. To detect multiple actions, the Motion Energy Image (MEI) method, which accumulates binary difference images, is applied to efficiently obtain individual action regions. Each region is then represented by an MGH, the pyramid scheme is applied for scale invariance, and the final recognition result is obtained by comparing similarity against trained template MGHs. To evaluate recognition performance, experiments on ten videos — covering single subjects, multiple subjects, speed and scale variation, comparison with existing methods, and other additional cases — confirmed good recognition results under diverse conditions.
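The MEI step described in this abstract — accumulating binary difference images so that motion regions can be cut out — can be sketched in a few lines of plain Python. This is a minimal illustration under assumed inputs (frames as 2D grayscale lists of ints); the function name and threshold are not from the paper:

```python
def motion_energy_image(frames, threshold=10):
    """Accumulate binary difference images over a frame sequence into a
    Motion Energy Image (MEI): 1 wherever any consecutive frame pair
    changed by more than the threshold, 0 elsewhere."""
    h, w = len(frames[0]), len(frames[0][0])
    mei = [[0] * w for _ in range(h)]
    for prev, curr in zip(frames, frames[1:]):
        for y in range(h):
            for x in range(w):
                if abs(curr[y][x] - prev[y][x]) > threshold:
                    mei[y][x] = 1  # pixel belongs to a motion region
    return mei
```

Connected regions of 1s in the resulting map would then give the individual action regions that each receive an MGH descriptor.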

Human Motion Recognition Based on Spatio-temporal Convolutional Neural Network

  • Hu, Zeyuan;Park, Sange-yun;Lee, Eung-Joo
    • 한국멀티미디어학회논문지 / Vol. 23, No. 8 / pp.977-985 / 2020
  • To address the problems of complex feature extraction and low accuracy in human action recognition, this paper proposes a network structure that combines the batch normalization algorithm with the GoogLeNet network model. The idea of batch normalization, originally used in image classification, is carried over to action recognition by normalizing the network's training samples per mini-batch. For the convolutional network, RGB images serve as the spatial input and stacked optical flow as the temporal input; the spatial and temporal networks are then fused to obtain the final action recognition result. The architecture was trained and evaluated on the standard video action benchmarks UCF101 and HMDB51, achieving accuracies of 93.42% and 67.82%, respectively. The results show that the improved convolutional neural network significantly raises the recognition rate and has clear advantages for action recognition.
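The spatio-temporal fusion described above can be illustrated with a late-fusion sketch: combine the per-class scores of the spatial (RGB) and temporal (stacked optical flow) streams by a weighted average and take the argmax. The equal weighting and function name are assumptions for illustration, not taken from the paper:

```python
def fuse_two_stream(spatial_scores, temporal_scores, w_spatial=0.5):
    """Late fusion of a two-stream network: weighted average of the
    per-class scores from the spatial (RGB) stream and the temporal
    (optical flow) stream, followed by argmax over classes."""
    assert len(spatial_scores) == len(temporal_scores)
    fused = [w_spatial * s + (1.0 - w_spatial) * t
             for s, t in zip(spatial_scores, temporal_scores)]
    return fused.index(max(fused))  # index of the predicted class
```

In practice the two streams are often weighted unequally (e.g. favoring the temporal stream), which only changes `w_spatial` here.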

Improved DT Algorithm Based Human Action Features Detection

  • Hu, Zeyuan;Lee, Suk-Hwan;Lee, Eung-Joo
    • 한국멀티미디어학회논문지 / Vol. 21, No. 4 / pp.478-484 / 2018
  • The choice of motion features directly influences the result of a human action recognition method. A single feature is often affected differently by many factors — such as the appearance of the human body, the environment, and the video camera — so recognition accuracy is restricted. The Dense Trajectories (DT) algorithm is a classic feature-extraction algorithm in action recognition, but its use of optical-flow images has some defects. Building on a study of the representation and recognition of human actions, and weighing the advantages and disadvantages of different features, this paper uses the improved Dense Trajectories (iDT) algorithm to optimize and extract optical-flow features of human motion, combines them with a Support Vector Machine to recognize human actions, and trains and tests on images from the KTH database.
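The core of trajectory-based features like DT/iDT is a descriptor built from the frame-to-frame displacements of a tracked point, with near-static tracks pruned. The sketch below shows that idea only; the function name, pruning threshold, and normalization are illustrative assumptions, not the paper's exact formulation:

```python
import math

def trajectory_descriptor(points, min_disp=1.0):
    """Build a displacement descriptor for one tracked point, in the
    spirit of Dense Trajectories: concatenate frame-to-frame (dx, dy)
    displacements and normalize by the total displacement magnitude.
    Returns None for near-static trajectories, which DT prunes."""
    disps = [(x2 - x1, y2 - y1)
             for (x1, y1), (x2, y2) in zip(points, points[1:])]
    total = sum(math.hypot(dx, dy) for dx, dy in disps)
    if total < min_disp:
        return None  # trajectory carries no motion information
    return [d / total for dx, dy in disps for d in (dx, dy)]
```

Descriptors like this, pooled over many tracked points, would then form the feature vectors fed to the SVM classifier.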

Binary Hashing CNN Features for Action Recognition

  • Li, Weisheng;Feng, Chen;Xiao, Bin;Chen, Yanquan
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 9 / pp.4412-4428 / 2018
  • The purpose of this work is to represent an entire video with Convolutional Neural Network (CNN) features for human action recognition. Because GPU memory is insufficient to take a whole video as the input of a CNN for end-to-end learning, a typical method uses sampled video frames as inputs and the corresponding labels as supervision. One major issue with this popular approach is that the local samples may not contain the information indicated by the global labels, nor sufficient motion information. To address this, we propose a binary hashing method to enhance the local feature extractors. First, we extract the local features and aggregate them into global features using maximum/minimum pooling. Second, we use the binary hashing method to capture the motion features. Finally, we concatenate the hashing features with the global features using different normalization methods to train the classifier. Experimental results on the JHMDB and MPII-Cooking datasets show that binary hashing of the sparsely sampled features leads to significant performance improvements.
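The first two steps above — max/min pooling of per-frame features into a global descriptor, plus a binary code that captures how features change between frames — can be sketched as follows. The sign-of-difference hash shown here is an illustrative stand-in; the paper's exact hashing function may differ:

```python
def aggregate_and_hash(local_feats):
    """Aggregate per-frame CNN feature vectors into a global descriptor
    via element-wise max/min pooling, and compute a binary code per
    consecutive frame pair: 1 where a feature dimension increased."""
    dim = len(local_feats[0])
    max_pool = [max(f[d] for f in local_feats) for d in range(dim)]
    min_pool = [min(f[d] for f in local_feats) for d in range(dim)]
    hashing = [[1 if b[d] > a[d] else 0 for d in range(dim)]
               for a, b in zip(local_feats, local_feats[1:])]
    return max_pool + min_pool, hashing
```

The global pooled vector and the (normalized) hashing features would then be concatenated before training the classifier.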

비디오 캡셔닝을 적용한 수어 번역 및 행동 인식을 적용한 수어 인식 (Sign language translation using video captioning and sign language recognition using action recognition)

  • 김기덕;이근후
    • 한국컴퓨터정보학회:학술대회논문집 / 69th Winter Conference (2024), Vol. 32, No. 1 / pp.317-319 / 2024
  • This paper proposes sign language translation using a video captioning algorithm and sign language recognition using an action recognition algorithm. In the video captioning algorithm used here, 40 consecutive input frames are embedded through a CNN and fed into a transformer to output a sentence. The action recognition algorithm randomly samples 40 indices from each video, embeds the 40 corresponding consecutive frames through a CNN, and outputs a recognition result through an RNN model that combines a GRU with a transformer. Sign language translation achieved a BLEU-4 score of 7.85 and a CIDEr score of 53.12, and sign language recognition achieved an accuracy of 96.26%.
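The random frame sampling used by the recognition branch — a fixed number of indices per video, kept in temporal order so the action's ordering survives — might look like this. The function name and the fallback for short videos are assumptions for illustration:

```python
import random

def sample_frame_indices(num_frames, num_samples=40, seed=None):
    """Randomly sample frame indices from a video and return them in
    temporal order, so the sampled clip preserves the action's ordering."""
    rng = random.Random(seed)
    if num_frames <= num_samples:
        return list(range(num_frames))  # short video: keep every frame
    return sorted(rng.sample(range(num_frames), num_samples))
```

The frames at these indices would then be CNN-embedded and passed to the GRU/transformer recognizer.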

Action Recognition with deep network features and dimension reduction

  • Li, Lijun;Dai, Shuling
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 2 / pp.832-854 / 2019
  • Action recognition has been studied in the computer vision field for years. We present an effective approach to recognizing actions using a dimension reduction method, applied as a crucial step to reduce the dimensionality of feature descriptors after feature extraction. We modify Local Fisher Discriminant Analysis with a sparse matrix and a randomized kd-tree, yielding a modified Local Fisher Discriminant Analysis (mLFDA) method that greatly reduces the required memory and accelerates standard Local Fisher Discriminant Analysis. For feature encoding, we propose a mix encoding method that combines Fisher vector encoding and locality-constrained linear coding to produce the final video representations. To add more meaningful features to the recognition process, a convolutional neural network is combined with mix encoding to produce deep network features. Experimental results show that, when all these methods are combined, our algorithm is competitive on the KTH, HMDB51, and UCF101 datasets.
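Mix encoding as described above combines two encodings into one representation. A common way to do that is to normalize each encoding separately and concatenate; the sketch below assumes that scheme (L2 normalization, simple concatenation), which may differ from the paper's exact recipe:

```python
import math

def mix_encode(fisher_vec, llc_vec):
    """Sketch of a 'mix encoding': L2-normalize a Fisher-vector encoding
    and a locality-constrained linear coding vector separately, then
    concatenate them into a single video representation."""
    def l2norm(v):
        n = math.sqrt(sum(x * x for x in v)) or 1.0  # guard zero vectors
        return [x / n for x in v]
    return l2norm(fisher_vec) + l2norm(llc_vec)
```

Normalizing each encoding before concatenation keeps one encoding's scale from dominating the combined descriptor.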

행동 인식을 위한 시공간 앙상블 기법 (Spatial-temporal Ensemble Method for Action Recognition)

  • 서민석;이상우;최동걸
    • 로봇학회논문지 / Vol. 15, No. 4 / pp.385-391 / 2020
  • As deep learning technology has developed and been applied to various fields, human action recognition is gradually shifting from single-image-based applications to video-based applications with a time axis. However, unlike a 2D CNN on a single image, a 3D CNN on video incurs a very large increase in computation and parameters due to the added time axis, so improving accuracy in action recognition is harder than in single-image recognition. To address this, we investigate and analyze techniques that improve the performance of 3D CNN-based action recognition without additional training time or parameter increase. We propose a temporal ensemble that exploits the time axis, which exists only in videos, and an ensemble over input frames. With a combination of these techniques, we achieved an accuracy improvement of up to 7.1% over the baseline, and we also reveal the trade-off between computation and accuracy.
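A temporal ensemble of the kind described above amounts to averaging the per-class predictions of clips sampled at different positions along the time axis. A minimal sketch (the function name and score format are assumptions):

```python
def temporal_ensemble(clip_scores):
    """Temporal ensemble: average the per-class scores predicted for
    clips taken at different temporal positions in the same video,
    then take the argmax as the final prediction."""
    num_classes = len(clip_scores[0])
    avg = [sum(scores[c] for scores in clip_scores) / len(clip_scores)
           for c in range(num_classes)]
    return avg.index(max(avg))
```

Because this reuses one trained 3D CNN at inference time, it adds no training time or parameters — only extra forward passes, which is the computation/accuracy trade-off the abstract mentions.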

온라인 행동 탐지 기술 동향 (Trends in Online Action Detection in Streaming Videos)

  • 문진영;김형일;이용주
    • 전자통신동향분석 / Vol. 36, No. 2 / pp.75-82 / 2021
  • Online action detection (OAD) in streaming video is an attractive research area that has drawn interest lately. Although most studies of action understanding have considered action recognition in well-trimmed videos and offline temporal action detection in untrimmed videos, online action detection methods are needed to monitor action occurrences in streaming videos. OAD predicts action probabilities for the current frame or frame sequence using a fixed-size video segment that includes past and current frames. In this article, we discuss deep learning-based OAD models, survey OAD evaluation methodology, including benchmark datasets and performance measures, and compare the performance of the presented OAD models.
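The fixed-size, past-and-current-frames input described above is a causal sliding window: at each time step the model sees only the most recent frames, never future ones. A minimal sketch (names and window size are illustrative):

```python
def online_segments(stream, window=8):
    """For each current frame in a stream, yield the fixed-size causal
    segment (past and current frames only) that an online action
    detection model would score at that time step."""
    for t in range(len(stream)):
        start = max(0, t - window + 1)
        yield stream[start:t + 1]  # no future frames: online setting
```

This is what distinguishes OAD from offline temporal action detection, where the whole untrimmed video is available at once.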

A Deep Learning Algorithm for Fusing Action Recognition and Psychological Characteristics of Wrestlers

  • Yuan Yuan;Yuan Yuan;Jun Liu
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 17, No. 3 / pp.754-774 / 2023
  • Wrestling is one of the popular events in modern sports, but a wrestling match between athletes is difficult to describe quantitatively, and deep learning can aid wrestling training through human recognition techniques. Based on the latest wrestling competition rules and human recognition technologies, a wrestling competition video analysis and retrieval system is proposed. The system combines the literature method, observation, interviews, and mathematical statistics to collect statistics on, analyze, and discuss the application of the technology, and applies it to targeted movement techniques. A deep learning-based facial recognition method is proposed for analyzing the psychological features of classical wrestlers in training and competition after the implementation of the new rules. The experimental results show that the proportion of natural emotions among male and female wrestlers was about 50%, indicating that the wrestlers' mentality was relatively stable before intense physical confrontation; testing also demonstrated the stability of the system.

Human Action Recognition Using Deep Data: A Fine-Grained Study

  • Rao, D. Surendra;Potturu, Sudharsana Rao;Bhagyaraju, V
    • International Journal of Computer Science & Network Security / Vol. 22, No. 6 / pp.97-108 / 2022
  • Video-assisted human action recognition [1] is one of the most active fields in computer vision research. Since the depth data [2] obtained by Kinect cameras has more benefits than traditional RGB data, research on human action detection has recently increased because of the Kinect camera. In this article we conduct a systematic study of strategies for recognizing human activity from depth data. All methods are grouped into depth-map tactics and skeleton tactics, and a comparison with some more traditional strategies is also covered. We then examine the specifics of different depth behavior databases, draw a straightforward distinction between them, and discuss the advantages and disadvantages of depth-based and skeleton-based techniques.