• Title/Summary/Keyword: ConvGRU

Attention-based CNN-BiGRU for Bengali Music Emotion Classification

  • Subhasish Ghosh;Omar Faruk Riad
    • International Journal of Computer Science & Network Security / v.23 no.9 / pp.47-54 / 2023
  • For Bengali music emotion classification, deep learning models, particularly CNNs and RNNs, are frequently used, but previous studies suffered from low accuracy and overfitting. In this research, an attention-based Conv1D and BiGRU model is designed for music emotion classification, and comparative experiments show that the proposed model classifies emotions more accurately. We propose a Conv1D and BiGRU model with an attention mechanism for emotion classification on our Bengali music dataset. Wav preprocessing makes use of MFCCs. Two Conv1D layers extract contextual features and reduce the dimensionality of the feature space, and dropout is used to mitigate overfitting. Two bidirectional GRU networks update past and future emotion representations of the output from the Conv1D layers. The two BiGRU layers are connected to an attention mechanism that gives greater weight to the more informative MFCC feature vectors, which further increases the accuracy of the proposed classifier. The resulting vector is finally classified into four emotion classes (Angry, Happy, Relax, Sad) using a dense, fully connected layer with softmax activation. The proposed Conv1D+BiGRU+Attention model classifies emotions in the Bengali music dataset more effectively than baseline methods, reaching 95% accuracy on our Bengali music dataset.
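The following is a minimal PyTorch sketch of the Conv1D + BiGRU + attention pattern this abstract describes, operating on MFCC sequences. Layer sizes, dropout rates, and the additive attention form are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: Conv1D feature extraction -> stacked BiGRU -> attention pooling -> softmax classifier.
import torch
import torch.nn as nn

class Conv1DBiGRUAttention(nn.Module):
    def __init__(self, n_mfcc=40, n_classes=4):   # 4 classes: Angry, Happy, Relax, Sad
        super().__init__()
        # Two Conv1D blocks reduce the dimensionality of the MFCC feature space.
        self.conv = nn.Sequential(
            nn.Conv1d(n_mfcc, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2), nn.Dropout(0.3),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2), nn.Dropout(0.3),
        )
        # Two stacked bidirectional GRUs model past and future context.
        self.bigru = nn.GRU(128, 64, num_layers=2, batch_first=True, bidirectional=True)
        # Additive attention weights each time step of the BiGRU output.
        self.attn = nn.Linear(128, 1)
        self.fc = nn.Linear(128, n_classes)

    def forward(self, x):                          # x: (batch, n_mfcc, time)
        h = self.conv(x).transpose(1, 2)           # (batch, time', 128)
        h, _ = self.bigru(h)                       # (batch, time', 128)
        w = torch.softmax(self.attn(h), dim=1)     # attention weights over time
        ctx = (w * h).sum(dim=1)                   # attention-weighted context vector
        return self.fc(ctx)                        # class logits

model = Conv1DBiGRUAttention()
logits = model(torch.randn(8, 40, 200))            # e.g. 8 clips, 40 MFCCs, 200 frames
```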

Development of Demand Forecasting Model for Public Bicycles in Seoul Using GRU (GRU 기법을 활용한 서울시 공공자전거 수요예측 모델 개발)

  • Lee, Seung-Woon;Kwahk, Kee-Young
    • Journal of Intelligence and Information Systems / v.28 no.4 / pp.1-25 / 2022
  • After the first confirmed case of Covid-19 in Korea in January 2020, interest in personal transportation such as public bicycles, rather than public transportation such as buses and subways, increased. Demand for 'Ddareungi', the public bicycle service operated by the Seoul Metropolitan Government, also increased. In this study, a demand forecasting model based on a GRU (Gated Recurrent Unit) is presented, built on hourly public bicycle rental records in Seoul (2019~2021). The usefulness of the proposed GRU method was verified on the rental history around Exit 1 of Yeouido Station, Yeongdeungpo-gu, Seoul, and it was compared and analyzed against multiple linear regression models and recurrent neural network models under the same conditions. In addition to weather factors, the Seoul living population was used as an input variable when developing the model. MAE and RMSE were used as performance indicators. The proposed GRU model showed higher prediction accuracy than the traditional multiple linear regression model and the recently popular LSTM and Conv-LSTM models, and it was also faster than the LSTM and Conv-LSTM models. By forecasting demand for public bicycles in Seoul more quickly and accurately, this study can help solve the bicycle relocation (rebalancing) problem in the future.
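A minimal sketch of a GRU demand regressor of the kind described above, assuming hourly windows of weather and living-population features and MAE/RMSE evaluation; the window length, feature count, and layer sizes are illustrative assumptions.

```python
# Sketch: sliding-window GRU regressor for next-hour bike-rental demand.
import torch
import torch.nn as nn

class DemandGRU(nn.Module):
    def __init__(self, n_features=6, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)            # next-hour rental count

    def forward(self, x):                           # x: (batch, window, n_features)
        _, h = self.gru(x)                          # h: (1, batch, hidden)
        return self.head(h[-1]).squeeze(-1)         # (batch,)

model = DemandGRU()
x = torch.randn(32, 24, 6)                          # 24-hour windows of 6 features
y = torch.rand(32) * 100                            # dummy targets for illustration
pred = model(x)
mae = torch.mean(torch.abs(pred - y))               # MAE, as in the paper's evaluation
rmse = torch.sqrt(torch.mean((pred - y) ** 2))      # RMSE
```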

Convolutional GRU and Attention based Fall Detection Integrating with Human Body Keypoints and DensePose

  • Yi Zheng;Cunyi Liao;Ruifeng Xiao;Qiang He
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.9 / pp.2782-2804 / 2024
  • The integration of artificial intelligence technology with medicine has evolved rapidly, along with growing demands for quality of life. However, falls remain a significant risk leading to severe injuries and fatalities, especially among the elderly, so the development and application of computer vision-based fall detection technologies have become increasingly important. In this paper, the keypoint detection algorithm ViTPose++ is first used to obtain the coordinates of human body keypoints from camera images, and human skeletal feature maps are generated from this keypoint information. Meanwhile, human dense feature maps are produced with the DensePose algorithm. These two types of feature maps are then fused as dual-channel inputs to the model. A convolutional gated recurrent unit (ConvGRU) is introduced to capture frame-to-frame dependencies during a fall. To further integrate features across the three dimensions (spatial, temporal, and channel), a dual-channel fall detection algorithm based on video streams is proposed by combining the Convolutional Block Attention Module (CBAM) with the ConvGRU. Finally, experiments on the public UR Fall Detection Dataset demonstrate that the improved ConvGRU-CBAM achieves an F1 score of 92.86% and an AUC of 95.34%.
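For reference, a ConvGRU cell replaces the matrix multiplications of a standard GRU with 2D convolutions so the hidden state keeps its spatial layout. A minimal PyTorch sketch follows; channel counts and kernel size are illustrative, and the CBAM attention the paper adds on top is omitted.

```python
# Sketch: ConvGRU cell with convolutional update/reset gates, run over a clip of
# dual-channel maps (e.g. skeleton + DensePose channels).
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        p = k // 2
        self.zr = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=p)  # update/reset gates
        self.hc = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=p)      # candidate state

    def forward(self, x, h):                   # x: (B, in_ch, H, W), h: (B, hid_ch, H, W)
        z, r = torch.sigmoid(self.zr(torch.cat([x, h], 1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.hc(torch.cat([x, r * h], 1)))
        return (1 - z) * h + z * h_tilde       # gated blend of old state and candidate

cell = ConvGRUCell(in_ch=2, hid_ch=16)
clip = torch.randn(4, 8, 2, 64, 64)            # (batch, frames, channels, H, W)
h = torch.zeros(4, 16, 64, 64)
for t in range(clip.size(1)):
    h = cell(clip[:, t], h)                    # final h summarizes the frame sequence
```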

Utilizing Deep Learning for Early Diagnosis of Autism: Detecting Self-Stimulatory Behavior

  • Seongwoo Park;Sukbeom Chang;JooHee Oh
    • International Journal of Advanced Culture Technology / v.12 no.3 / pp.148-158 / 2024
  • We investigate Autism Spectrum Disorder (ASD), which is typified by deficits in social interaction, repetitive behaviors, limited vocabulary, and cognitive delays. Traditional diagnostic methodologies, reliant on expert evaluations, frequently result in delayed detection and intervention, particularly in South Korea, where there is a dearth of qualified professionals and limited public awareness. In this study, we employ deep learning algorithms to enhance early ASD screening through automated video analysis. Using architectures such as Convolutional Long Short-Term Memory (ConvLSTM), Long-term Recurrent Convolutional Network (LRCN), and Convolutional Neural Networks with Gated Recurrent Units (CNN+GRU), we analyze video data from platforms such as YouTube and TikTok to identify stereotypic behaviors (arm flapping, head banging, spinning). Our results indicate that the LRCN model performed best, with 79.61% accuracy on the augmented platform video dataset and 79.37% on the original SSBD dataset. The ConvLSTM and CNN+GRU models likewise achieved higher accuracy on the augmented dataset than on the original SSBD dataset. Through this research, we underscore AI's potential in early ASD detection by automating the identification of stereotypic behaviors, thereby enabling timely intervention. We also emphasize the value of expanded datasets drawn from social media platform videos in improving model accuracy and robustness, paving the way for more accessible diagnostic methods.
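A minimal sketch of the CNN+GRU pattern named above: a per-frame CNN encoder followed by a GRU over the frame embeddings and a clip-level classifier. The tiny backbone, embedding size, and three behavior classes are illustrative assumptions.

```python
# Sketch: per-frame CNN features -> GRU over time -> clip-level behavior classification.
import torch
import torch.nn as nn

class CNNGRUClassifier(nn.Module):
    def __init__(self, n_classes=3, emb=128):       # arm flapping / head banging / spinning
        super().__init__()
        self.cnn = nn.Sequential(                    # small per-frame encoder
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, emb),
        )
        self.gru = nn.GRU(emb, 64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, clip):                         # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1)).view(b, t, -1)  # encode each frame
        _, h = self.gru(feats)                       # h: (1, B, 64)
        return self.fc(h[-1])                        # clip-level logits

logits = CNNGRUClassifier()(torch.randn(2, 16, 3, 112, 112))  # 2 clips of 16 frames
```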

Speech emotion recognition through time series classification (시계열 데이터 분류를 통한 음성 감정 인식)

  • Kim, Gi-duk;Kim, Mi-sook;Lee, Hack-man
    • Proceedings of the Korean Society of Computer Information Conference / 2021.07a / pp.11-13 / 2021
  • In this paper, we propose speech emotion recognition through time-series classification. Features are extracted from speech files using mel-spectrograms and converted into multivariate time-series data, which is then used to train a deep learning model combining Conv1D, GRU, and Transformer layers. Applying this model to the speech emotion recognition datasets TESS, SAVEE, RAVDESS, and EmoDB, we obtained higher classification accuracy than existing models on each dataset: 99.60%, 99.32%, 97.28%, and 99.86%, respectively.
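A minimal PyTorch sketch of a Conv1D + GRU + Transformer-encoder classifier over mel-spectrogram frames; the layer sizes, head count, class count, and the way the three components are stacked are assumptions rather than the paper's exact design.

```python
# Sketch: mel-spectrogram frames -> Conv1D -> GRU -> Transformer encoder -> classifier.
import torch
import torch.nn as nn

class ConvGRUTransformer(nn.Module):
    def __init__(self, n_mels=128, n_classes=7):     # 7 emotion classes assumed
        super().__init__()
        self.conv = nn.Conv1d(n_mels, 64, kernel_size=3, padding=1)
        self.gru = nn.GRU(64, 64, batch_first=True)
        enc_layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                            # x: (B, n_mels, T)
        h = torch.relu(self.conv(x)).transpose(1, 2) # (B, T, 64)
        h, _ = self.gru(h)                           # recurrent context over frames
        h = self.encoder(h)                          # self-attention over the sequence
        return self.fc(h.mean(dim=1))                # average-pool over time, then classify

logits = ConvGRUTransformer()(torch.randn(4, 128, 300))  # 4 clips, 128 mel bins, 300 frames
```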

Hand gesture recognition based on RGB image data (RGB 영상 데이터 기반 손동작 인식)

  • Kim, Gi-Duk
    • Proceedings of the Korean Society of Computer Information Conference / 2021.07a / pp.15-16 / 2021
  • In this paper, we propose a hand gesture recognition method that takes RGB image data as input, applies MediaPipe's hand pose estimation algorithm to obtain the positions of the finger joints and other key parts of the hand, and trains a deep learning model on these features. From consecutive frames, the coordinates of the key parts of one hand are obtained, the x and y components of the frame-to-frame difference vectors are stored, and a deep learning model combining Conv1D, Bidirectional GRU, and Transformer layers is trained for gesture classification. Applied to the nine one-hand dynamic gesture classes of the IC4You Gesture Dataset, the method achieved a gesture recognition accuracy of 99.63%.
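A minimal sketch of the difference-vector features described above: per-frame (x, y) coordinates of hand landmarks (e.g. from MediaPipe Hands) are differenced across consecutive frames and flattened into a multivariate time series for the Conv1D + Bidirectional GRU + Transformer classifier. The landmark count and array shapes are assumptions.

```python
# Sketch: turn a sequence of hand-landmark coordinates into frame-to-frame motion features.
import numpy as np

def difference_features(keypoints):
    """keypoints: (frames, 21, 2) array of one hand's (x, y) landmark coordinates."""
    diffs = np.diff(keypoints, axis=0)            # motion of each landmark between frames
    return diffs.reshape(diffs.shape[0], -1)      # (frames-1, 42) multivariate time series

seq = difference_features(np.random.rand(30, 21, 2))
print(seq.shape)   # (29, 42) -- fed to the sequence classifier
```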

Action recognition, hand gesture recognition, and emotion recognition using text classification method (Text classification 방법을 사용한 행동 인식, 손동작 인식 및 감정 인식)

  • Kim, Gi-Duk
    • Proceedings of the Korean Society of Computer Information Conference / 2021.01a / pp.213-216 / 2021
  • In this paper, we propose action recognition, hand gesture recognition, and emotion recognition methods that apply a deep learning model used for text classification. First, features are extracted from video using existing libraries, a formula is applied, and the resulting feature vectors are stored. These are then used to train a model combining Conv1D, Transformer, and GRU layers. In this way, a single deep learning model can be applied to a variety of domains. Using the proposed method, we obtained class classification accuracies of 99.66% on the SYSU 3D HOI dataset, 99.0% on the eNTERFACE'05 dataset, and 95.48% on the DHG-14 dataset.
