• Title/Summary/Keyword: Emotion-based music classification

Attention-based CNN-BiGRU for Bengali Music Emotion Classification

  • Subhasish Ghosh;Omar Faruk Riad
    • International Journal of Computer Science & Network Security / v.23 no.9 / pp.47-54 / 2023
  • For Bengali music emotion classification, deep learning models, particularly CNNs and RNNs, are frequently used, but previous studies suffered from low accuracy and overfitting. In this research, an attention-based Conv1D and BiGRU model is designed for music emotion classification, and comparative experiments show that the proposed model classifies emotions more accurately. We propose an attention-based Conv1D and BiGRU model for emotion classification on our Bengali music dataset. Wav preprocessing uses MFCCs. Contextual features are extracted by two Conv1D layers, which also reduce the dimensionality of the feature space, and dropout is applied to counter overfitting. Two bidirectional GRU networks update the past and future emotion representations of the Conv1D output, and an attention mechanism connected to the two BiGRU layers gives greater weight to the more informative MFCC feature vectors, further increasing classification accuracy. The resulting vector is finally classified into four emotion classes (Angry, Happy, Relax, Sad) by a dense, fully connected layer with softmax activation. The proposed Conv1D+BiGRU+Attention model classifies emotions in the Bengali music dataset more effectively than baseline methods, reaching 95% accuracy on our dataset. A minimal sketch of this architecture appears below.
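
A minimal Keras sketch of the architecture this abstract describes. The input shape, layer widths, kernel sizes, and dropout rates are assumptions; only the overall topology (MFCC input, two Conv1D layers with dropout, two BiGRU layers, attention pooling, and a four-class softmax) follows the paper:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

N_FRAMES, N_MFCC, N_CLASSES = 128, 40, 4   # assumed input shape and class count

inputs = layers.Input(shape=(N_FRAMES, N_MFCC))            # MFCC sequence
x = layers.Conv1D(64, 5, padding="same", activation="relu")(inputs)
x = layers.Dropout(0.3)(x)                                 # counter overfitting
x = layers.Conv1D(128, 5, padding="same", activation="relu")(x)
x = layers.Dropout(0.3)(x)
x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)
x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)

# Attention pooling: score each time step, normalize over time, weighted sum.
scores = layers.Dense(1, activation="tanh")(x)             # (batch, time, 1)
weights = layers.Softmax(axis=1)(scores)
context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([x, weights])

outputs = layers.Dense(N_CLASSES, activation="softmax")(context)  # Angry/Happy/Relax/Sad
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```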

Emotion Transition Model based Music Classification Scheme for Music Recommendation (음악 추천을 위한 감정 전이 모델 기반의 음악 분류 기법)

  • Han, Byeong-Jun;Hwang, Een-Jun
    • Journal of IKEEE / v.13 no.2 / pp.159-166 / 2009
  • So far, much research has been done on retrieving music information using static classification descriptors such as genre and mood. Since static descriptors are based on diverse content-based musical features, they are effective for retrieving music that is similar in terms of those features. However, the human emotion or mood transitions that music triggers enable more effective and sophisticated queries in music retrieval, and few works have evaluated this effect. With a formal representation of such mood transitions, personalized services such as music recommendation can be provided more effectively. In this paper, we first propose an Emotion State Transition Model (ESTM) for describing human mood transitions induced by music, and then describe an ESTM-based music classification and recommendation scheme. In the experiments, diverse content-based features were extracted from music clips, dimensionally reduced by Non-negative Matrix Factorization (NMF), and classified by a Support Vector Machine (SVM); a sketch of this pipeline follows. The performance analysis shows an average accuracy of 67.54% and a maximum accuracy of 87.78%.
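
A hedged scikit-learn sketch of the classification pipeline named in the abstract: NMF for dimensionality reduction feeding an SVM. The feature matrix, labels, and hyperparameters are stand-ins; the ESTM itself and the content-based feature extraction are not reproduced:

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 120))       # stand-in: non-negative content-based features
y = rng.integers(0, 8, 200)      # stand-in: ESTM-derived emotion classes

clf = make_pipeline(
    NMF(n_components=20, init="nndsvda", max_iter=500, random_state=0),
    SVC(kernel="rbf", C=10.0),
)
print(cross_val_score(clf, X, y, cv=5).mean())
```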

Music Emotion Classification Based On Three-Level Structure (3 레벨 구조 기반의 음악 무드분류)

  • Kim, Hyoung-Gook;Jeong, Jin-Guk
    • The Journal of the Acoustical Society of Korea / v.26 no.2E / pp.56-62 / 2007
  • This paper presents automatic music emotion classification on acoustic data using a three-level structure. The low level extracts timbre and rhythm features. The middle level estimates indication functions that represent the emotion probability of a single analysis unit. The high level predicts the final emotion from the indication-function values. Experiments are carried out on 695 homogeneous music pieces labeled with four emotions: pleasant, calm, sad, and excited. Three machine learning methods (GMM, MLP, and SVM) are compared at the high level; the best result, 90.16%, is obtained by the MLP. An illustrative sketch of the three-level idea follows.
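
An illustrative sketch of the three-level idea under stand-in data: a unit-level model produces per-unit emotion probabilities (the indication functions), these are averaged per song, and a high-level MLP (the best performer in the paper) predicts the song emotion. The features, segmentation, and model settings are all assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
EMOTIONS = ["pleasant", "calm", "sad", "excited"]

# Low/middle level stand-in: short analysis units with 12 features each,
# scored by a unit-level model into four emotion probabilities.
unit_model = LogisticRegression(max_iter=1000)
X_units, y_units = rng.random((600, 12)), rng.integers(0, 4, 600)
unit_model.fit(X_units, y_units)

def song_indication(units):
    """Average the per-unit emotion probabilities over one song."""
    return unit_model.predict_proba(units).mean(axis=0)

songs = [rng.random((30, 12)) for _ in range(100)]   # 30 units per song
X_songs = np.array([song_indication(s) for s in songs])
y_songs = rng.integers(0, 4, 100)

# High level: the paper compares GMM, MLP, and SVM; the MLP scored best.
high = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
high.fit(X_songs, y_songs)
print(EMOTIONS[high.predict(X_songs[:1])[0]])
```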

Multiple Regression-Based Music Emotion Classification Technique (다중 회귀 기반의 음악 감성 분류 기법)

  • Lee, Dong-Hyun;Park, Jung-Wook;Seo, Yeong-Seok
    • KIPS Transactions on Software and Data Engineering / v.7 no.6 / pp.239-248 / 2018
  • Many new technologies are being studied with the arrival of the 4th industrial revolution, and emotional intelligence is one of the popular issues. Researchers have focused on emotion analysis for music services based on artificial intelligence and pattern recognition, but they do not consider how to recommend appropriate music for a user's specific emotion, which is a practical issue for music-related IoT applications. In this paper, we therefore propose a probability-based music emotion classification technique that classifies music with high precision according to the range of emotion, for use when developing music-related services. Russell's model, one of the popular emotion models, is used for user emotion recognition. As music features, the average amplitude, peak-average, number of wavelengths, average wavelength, and beats per minute were extracted. Multiple regressions were derived from the collected data by regression analysis, and probability-based emotion classification was carried out (see the sketch below). In two experiments, the emotion matching rate was 70.94% and 86.21% with the proposed technique, versus 66.83% and 76.85% for the survey participants, showing that the proposed technique improves music classification.
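
A hedged sketch of the regression step on Russell's circumplex model: the five signal features listed above are regressed onto valence and arousal, and the predicted quadrant is mapped to an emotion. The training data are placeholders, and the simple quadrant mapping stands in for the paper's probability-based classification:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Columns: avg amplitude, peak-average, wavelength count, avg wavelength, BPM.
X = rng.random((150, 5))                      # stand-in feature matrix
valence = rng.uniform(-1, 1, 150)             # stand-in Russell coordinates
arousal = rng.uniform(-1, 1, 150)

reg_v = LinearRegression().fit(X, valence)
reg_a = LinearRegression().fit(X, arousal)

def classify(features):
    """Map predicted (valence, arousal) to a Russell-model quadrant."""
    v = reg_v.predict([features])[0]
    a = reg_a.predict([features])[0]
    if v >= 0:
        return "happy/excited" if a >= 0 else "relaxed/calm"
    return "angry/tense" if a >= 0 else "sad/depressed"

print(classify(X[0]))
```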

Rough Set-Based Approach for Automatic Emotion Classification of Music

  • Baniya, Babu Kaji;Lee, Joonwhoan
    • Journal of Information Processing Systems / v.13 no.2 / pp.400-416 / 2017
  • Music emotion is an important component in the fields of music information retrieval and computational musicology. This paper proposes an approach for automatic emotion classification based on rough set (RS) theory. In the proposed approach, four different sets of music features are extracted, representing dynamics, rhythm, spectral properties, and harmony. From these features, five statistical parameters are considered as attributes, including up to the 4th-order central moments of each feature and the covariance components of mutual ones (a sketch of this attribute construction follows). The large number of attributes is controlled by the RS-based approach, in which superfluous features are removed to obtain the indispensable ones. In addition, the RS-based approach makes it possible to visualize which attributes play a significant role in the generated rules and to determine the strength of each rule for classification. Experiments were performed to find out which audio features, and which of the statistical parameters derived from them, are important for emotion classification; the resulting indispensable attributes and the usefulness of the covariance components are also discussed. The overall classification accuracy with all statistical parameters was comparatively better than existing methods on a pair of datasets.
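
A sketch of the attribute-construction step only, under stand-in frame-level features: up to 4th-order central moments per feature, plus the pairwise covariance components. The rough-set reduct that removes superfluous attributes requires a dedicated RS library and is not shown:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
frames = rng.random((500, 6))    # stand-in: 6 frame-level audio features

moments = np.concatenate([
    frames.mean(axis=0),                # 1st order: mean
    frames.var(axis=0),                 # 2nd order: variance
    stats.skew(frames, axis=0),         # 3rd order: skewness
    stats.kurtosis(frames, axis=0),     # 4th order: kurtosis
])
cov = np.cov(frames, rowvar=False)
pairwise = cov[np.triu_indices_from(cov, k=1)]   # mutual covariance components

attributes = np.concatenate([moments, pairwise])
print(attributes.shape)          # 6*4 + 15 = 39 candidate attributes
```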

Music classification system through emotion recognition based on regression model of music signal and electroencephalogram features (음악신호와 뇌파 특징의 회귀 모델 기반 감정 인식을 통한 음악 분류 시스템)

  • Lee, Ju-Hwan;Kim, Jin-Young;Jeong, Dong-Ki;Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea / v.41 no.2 / pp.115-121 / 2022
  • In this paper, we propose a system that classifies music according to user emotion using electroencephalogram (EEG) features that appear while listening to music. The proposed system learns the relationship between emotional EEG features extracted from EEG signals and auditory features extracted from music signals through a deep regression neural network. Based on this regression model, the system automatically generates the EEG features mapped to the auditory characteristics of the input music and classifies the music by applying these features to an attention-based deep neural network. The experimental results demonstrate the classification accuracy of the proposed automatic music classification framework; a minimal sketch of the two-stage idea follows.
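
A minimal Keras sketch of the two-stage idea, with assumed feature dimensions and a plain dense classifier standing in for the paper's attention-based network: a deep regression network maps auditory features to EEG features, and a second network classifies music from the generated EEG features:

```python
from tensorflow.keras import layers, models

AUDIO_DIM, EEG_DIM, N_CLASSES = 64, 32, 4    # assumed dimensions

# Stage 1: deep regression from auditory features to emotional EEG features.
regressor = models.Sequential([
    layers.Input(shape=(AUDIO_DIM,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(EEG_DIM),                   # regression output: EEG features
])
regressor.compile(optimizer="adam", loss="mse")

# Stage 2: classify music from the generated EEG features (the paper uses
# an attention-based network here; a dense head is a simplification).
classifier = models.Sequential([
    layers.Input(shape=(EEG_DIM,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),
])
classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```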

Music Similarity Search Based on Music Emotion Classification

  • Kim, Hyoung-Gook;Kim, Jang-Heon
    • The Journal of the Acoustical Society of Korea / v.26 no.3E / pp.69-73 / 2007
  • This paper presents an efficient algorithm for retrieving similar music files from a large archive of digital music. Users can navigate and discover new music files that sound similar to a given query file by searching the archive. Most methods for finding similar music in a large database require computing the distance between the query file and every file in the database, which is very time-consuming. By measuring the acoustic distance only between pre-classified music files with the same type of emotion, the proposed method significantly speeds up the search and increases precision compared with the brute-force method; a sketch of this pruning idea follows.
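
A sketch of the pruning idea with placeholder features and labels: files are pre-grouped by emotion class, and the brute-force distance computation runs only within the query's class instead of over the whole archive:

```python
import numpy as np

rng = np.random.default_rng(0)
archive = rng.random((10_000, 40))        # stand-in acoustic feature vectors
emotions = rng.integers(0, 4, 10_000)     # stand-in pre-assigned emotion classes

# Index the archive by emotion class once, up front.
by_emotion = {e: np.where(emotions == e)[0] for e in range(4)}

def similar(query_vec, query_emotion, k=10):
    """Brute-force nearest-neighbor search restricted to one emotion class."""
    idx = by_emotion[query_emotion]
    d = np.linalg.norm(archive[idx] - query_vec, axis=1)
    return idx[np.argsort(d)[:k]]

print(similar(archive[0], emotions[0]))
```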

An Implementation of a Classification and Recommendation Method for a Music Player Using Customized Emotion (맞춤형 감성 뮤직 플레이어를 위한 음악 분류 및 추천 기법 구현)

  • Song, Yu-Jeong;Kang, Su-Yeon;Ihm, Sun-Young;Park, Young-Ho
    • KIPS Transactions on Software and Data Engineering / v.4 no.4 / pp.195-200 / 2015
  • Recently, most people use Android-based smartphones, and every smartphone ships with a music player; however, it is hard to find a personalized player that reflects the user's preferences. In this paper, we propose an emotion-based music player that analyzes and classifies music by the user's emotion, recommends music, applies the user's preferences, and visualizes music by color. With the proposed player, users can select music easily and use an optimized application.

SYMMER: A Systematic Approach to Multiple Musical Emotion Recognition

  • Lee, Jae-Sung;Jo, Jin-Hyuk;Lee, Jae-Joon;Kim, Dae-Won
    • International Journal of Fuzzy Logic and Intelligent Systems / v.11 no.2 / pp.124-128 / 2011
  • Music emotion recognition is currently one of the most attractive research areas in music information retrieval. To use emotion as a clue when searching for particular music, emotion recognition systems built on the music itself are fundamentally needed, and their recognition accuracy is critical to user satisfaction. In this paper, we develop a new music emotion recognition system that employs a multilabel feature selector and a multilabel classifier (sketched below). The performance of the proposed system is demonstrated on novel musical emotion data.
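
A hedged scikit-learn sketch of the system shape, approximating it with binary relevance: one feature selector plus one linear classifier per emotion label. The paper's specific multilabel selector and classifier are not reproduced; the data and dimensions are stand-ins:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.random((300, 80))                 # stand-in audio features
Y = rng.integers(0, 2, (300, 6))          # stand-in: 6 emotion labels per clip

# Binary relevance: each label gets its own feature selection + classifier.
clf = OneVsRestClassifier(
    make_pipeline(SelectKBest(f_classif, k=20), LinearSVC())
)
clf.fit(X, Y)
print(clf.predict(X[:3]))                 # multi-hot emotion predictions
```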

Parting Lyrics Emotion Classification using Word2Vec and LSTM (Word2Vec과 LSTM을 활용한 이별 가사 감정 분류)

  • Lim, Myung Jin;Park, Won Ho;Shin, Ju Hyun
    • Smart Media Journal / v.9 no.3 / pp.90-97 / 2020
  • With the development of the Internet and smartphones, digital sound sources are easily accessible, and interest in music search and recommendation is increasing accordingly. As a method of recommending music, research that classifies genres or emotions using melodic attributes such as pitch, tempo, and beat is being conducted. However, since lyrics are one of the means of expressing human emotion in music, their role is growing, and emotion classification based on lyrics is needed. In this thesis, we therefore analyze the emotions of parting (farewell) lyrics in order to subdivide farewell emotions on the basis of the lyrics. After constructing an emotion dictionary by vectorizing the similarity between words appearing in the parting lyrics through Word2Vec training, we propose a method that classifies parting lyrics into similar emotions using Word2Vec and an LSTM trained on the lyrics; a minimal sketch of the pipeline follows.
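
A minimal sketch of the described pipeline, with a toy corpus and assumed hyperparameters: lyric tokens are embedded with gensim Word2Vec, the learned vectors initialize a Keras Embedding layer, and an LSTM classifies the sequences:

```python
import numpy as np
from gensim.models import Word2Vec
from tensorflow.keras import layers, models
from tensorflow.keras.initializers import Constant
from tensorflow.keras.preprocessing.sequence import pad_sequences

corpus = [["we", "said", "goodbye"], ["tears", "fall", "tonight"]]  # toy lyric tokens
labels = np.array([0, 1])                     # stand-in emotion classes

# Learn word vectors from the lyrics, then build an embedding matrix.
w2v = Word2Vec(corpus, vector_size=50, window=3, min_count=1, seed=0)
vocab = {w: i + 1 for i, w in enumerate(w2v.wv.index_to_key)}  # 0 = padding
emb = np.zeros((len(vocab) + 1, 50))
for w, i in vocab.items():
    emb[i] = w2v.wv[w]

seqs = pad_sequences([[vocab[w] for w in s] for s in corpus], maxlen=10)

model = models.Sequential([
    layers.Embedding(len(vocab) + 1, 50,
                     embeddings_initializer=Constant(emb), mask_zero=True),
    layers.LSTM(32),
    layers.Dense(4, activation="softmax"),    # assumed number of emotion classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(seqs, labels, epochs=1, verbose=0)
```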