• Title/Summary/Keyword: Classification of Music


Study of Music Classification Optimized Environment and Atmosphere for Intelligent Musical Fountain System (지능형 음악분수 시스템을 위한 환경 및 분위기에 최적화된 음악분류에 관한 연구)

  • Park, Jun-Heong; Park, Seung-Min; Lee, Young-Hwan; Ko, Kwang-Eun; Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems, v.21 no.2, pp.218-223, 2011
  • Various studies have explored music classification by genre. Because audio professionals define genre criteria differently from one another, such classification rarely yields clear-cut results, and whenever a new genre appears the criteria must be laboriously revised. We therefore classify music by emotional adjectives rather than by genre. In a preceding study we classified music by light and shade. In this paper, we propose a music classification system based on emotional adjectives, suited to searching for music that matches an atmosphere, using three classification axes: light versus shade (from the preceding study), intense versus placid, and grandeur versus trivial. Variance Considered Machines, an improved Support Vector Machine algorithm, served as the classifier and achieved 85% accuracy when classifying 525 songs.
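
A minimal sketch of the per-axis setup this abstract implies, assuming precomputed audio feature vectors. The paper's Variance Considered Machines has no public implementation, so a standard RBF-kernel SVM stands in here, with one binary classifier per adjective pair:

```python
# One binary classifier per emotional-adjective axis; an RBF SVM is a
# stand-in for the paper's Variance Considered Machines (VCM).
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

AXES = ["light_vs_shade", "intense_vs_placid", "grandeur_vs_trivial"]

def train_axis_classifiers(X, labels_per_axis):
    """X: (n_songs, n_features) audio features;
    labels_per_axis: dict mapping axis name -> (n_songs,) binary labels."""
    models = {}
    for axis in AXES:
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        clf.fit(X, labels_per_axis[axis])
        models[axis] = clf
    return models

# Usage with random stand-in data (525 songs, as in the paper):
X = np.random.rand(525, 40)                       # hypothetical feature matrix
y = {a: np.random.randint(0, 2, 525) for a in AXES}
models = train_axis_classifiers(X, y)
print(models["light_vs_shade"].predict(X[:5]))
```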

Automatic Music-Story Video Generation Using Music Files and Photos in Automobile Multimedia System (자동차 멀티미디어 시스템에서의 사진과 음악을 이용한 음악스토리 비디오 자동생성 기술)

  • Kim, Hyoung-Gook
    • The Journal of The Korea Institute of Intelligent Transport Systems, v.9 no.5, pp.80-86, 2010
  • This paper presents an automatic music-story video generation technique as one of the entertainment features of an in-vehicle multimedia system. The system connects the user's mobile phone to the vehicle's multimedia system and automatically composes a story video from photos stored on the phone to accompany the music being played, so that users watch the generated music-story video while listening to music that matches the mood. The performance of the system is measured by the accuracies of music classification, photo classification, and text-keyword extraction, and by the results of a user MOS test.
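
A rough sketch of the mood-matching step the abstract implies; all names here are hypothetical illustrations (the paper publishes no API). It simply pairs photos with a track whose classified moods agree:

```python
# Pair photos to a track by matching mood labels that upstream music and
# photo classifiers (as described in the abstract) are assumed to produce.
from dataclasses import dataclass

@dataclass
class Photo:
    path: str
    mood: str  # e.g. "calm", "happy", assigned by a photo classifier

def select_photos_for_track(track_mood: str, photos: list[Photo],
                            slots: int) -> list[Photo]:
    """Prefer photos whose mood matches the track's mood, padding with
    the remaining photos if too few match."""
    matching = [p for p in photos if p.mood == track_mood]
    rest = [p for p in photos if p.mood != track_mood]
    return (matching + rest)[:slots]

photos = [Photo("a.jpg", "happy"), Photo("b.jpg", "calm"), Photo("c.jpg", "happy")]
print([p.path for p in select_photos_for_track("happy", photos, slots=2)])
```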

Effective Mood Classification Method based on Music Segments (부분 정보에 기반한 효과적인 음악 무드 분류 방법)

  • Park, Gun-Han; Park, Sang-Yong; Kang, Seok-Joong
    • Journal of Korea Multimedia Society, v.10 no.3, pp.391-400, 2007
  • Recent advances in multimedia computing, storage, and search technology have made large volumes of music content prevalent, increasing the need for efficient categorization and search techniques for music content management. In this paper, a new classification method using local information from the music content together with a music-tone feature is proposed. While conventional classification algorithms use the entire music content, the proposed algorithm focuses only on specific local segments, which drastically reduces computing time without losing classification accuracy. To further improve accuracy, it introduces a new classification feature based on music tone. The proposed method has been implemented as part of MuSE (Music Search/Classification Engine), which was installed on various systems including commercial PDAs and PCs.
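
A minimal sketch of the segment-based idea, assuming librosa for audio loading; the 30-second offset and MFCC summary are illustrative choices, not the paper's exact configuration:

```python
# Extract features from one local segment only, avoiding the cost of
# analyzing the full track.
import librosa
import numpy as np

def segment_features(path: str, offset: float = 30.0,
                     duration: float = 30.0) -> np.ndarray:
    """Load only a local segment of the track and summarize its MFCCs."""
    y, sr = librosa.load(path, offset=offset, duration=duration)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    # Mean and std over time give a fixed-length vector per segment.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
```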


Emotion Transition Model based Music Classification Scheme for Music Recommendation (음악 추천을 위한 감정 전이 모델 기반의 음악 분류 기법)

  • Han, Byeong-Jun; Hwang, Een-Jun
    • Journal of IKEEE, v.13 no.2, pp.159-166, 2009
  • Many studies have retrieved music information using static classification descriptors such as genre and mood. Since static descriptors are based on diverse content-based musical features, they are effective for retrieving music that is similar with respect to those features. However, the human emotion or mood transitions that music triggers enable more effective and sophisticated music-retrieval queries, and few works have evaluated this effect. With a formal representation of such mood transitions, personalized services such as music recommendation can be provided more effectively. In this paper, we first propose an Emotion State Transition Model (ESTM) for describing human mood transitions triggered by music, and then describe a music classification and recommendation scheme based on it. In the experiment, diverse content-based features were extracted from music clips, dimensionally reduced by NMF (Non-negative Matrix Factorization), and classified by an SVM (Support Vector Machine). In the performance analysis, we achieved an average accuracy of 67.54% and a maximum accuracy of 87.78%.
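
A minimal sketch of the NMF-then-SVM pipeline named in the abstract, using scikit-learn; the component count, kernel, and random stand-in data are assumptions:

```python
# NMF reduces dimensionality of non-negative features (e.g. spectrogram-
# derived); the SVM then classifies in the reduced space.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

X = np.random.rand(200, 128)            # hypothetical non-negative features
y = np.random.randint(0, 4, 200)        # hypothetical emotion-state labels

clf = make_pipeline(NMF(n_components=16, max_iter=500), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict(X[:5]))
```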


Attention-based CNN-BiGRU for Bengali Music Emotion Classification

  • Subhasish Ghosh; Omar Faruk Riad
    • International Journal of Computer Science & Network Security, v.23 no.9, pp.47-54, 2023
  • For Bengali music emotion classification, deep learning models, particularly CNNs and RNNs, are frequently used, but previous studies suffered from low accuracy and overfitting. In this research, an attention-based Conv1D and BiGRU model is designed for emotion classification of our Bengali music dataset, and comparative experiments show that it classifies emotions more accurately. Preprocessing of the WAV files uses MFCCs. Two Conv1D layers extract contextual features and reduce the dimensionality of the feature space, and dropout is used to counter overfitting. Two bidirectional GRU layers update past and future emotion representations of the Conv1D output, and an attention mechanism connected to the two BiGRU layers gives greater weight to the informative MFCC feature vectors, which further increases accuracy. The resulting vector is finally classified into four emotion classes (Angry, Happy, Relax, Sad) by a dense, fully connected layer with softmax activation. The proposed Conv1D+BiGRU+Attention model classifies emotions in the Bengali music dataset more effectively than baseline methods, achieving 95% accuracy on our dataset.
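
A minimal Keras sketch of a Conv1D + BiGRU + attention classifier over MFCC sequences; layer sizes and the simple soft-attention pooling are illustrative assumptions, not the paper's exact architecture:

```python
# Conv1D feature extraction, stacked BiGRUs, soft attention over time,
# then a softmax head for the four emotion classes.
from tensorflow.keras import layers, models

def build_model(time_steps=130, n_mfcc=40, n_classes=4):
    inp = layers.Input(shape=(time_steps, n_mfcc))        # MFCC frames
    x = layers.Conv1D(64, 5, activation="relu", padding="same")(inp)
    x = layers.Conv1D(64, 5, activation="relu", padding="same")(x)
    x = layers.Dropout(0.3)(x)                            # counters overfitting
    x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)
    x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)
    # Soft attention: score each time step, softmax over time, then take
    # the attention-weighted sum of the BiGRU outputs.
    scores = layers.Dense(1, activation="tanh")(x)        # (batch, T, 1)
    weights = layers.Softmax(axis=1)(scores)
    context = layers.Dot(axes=1)([weights, x])            # (batch, 1, 128)
    context = layers.Flatten()(context)
    out = layers.Dense(n_classes, activation="softmax")(context)
    return models.Model(inp, out)

model = build_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```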

A Study of the 780 Music of DDC (DDC에 있어서의 음악분야 분류상의 제문제)

  • Hahn Kyung-Shin
    • Journal of the Korean Society for Library and Information Science, v.26, pp.75-112, 1994
  • The purpose of this study is to investigate the problems in the 780 (Music) division of DDC, focusing on the arrangement of 780 Music in the 20th edition, which is a complete revision. The findings are summarized as follows: 1. Although music is an important subject in the humanities, especially the arts, it was classified as a single division (780) rather than a class. 2. The arrangement of 780 Music is heavily oriented toward Western music theory, vocal music, and instrumental music. 3. Classification numbers in 780 Music become long because of the limitations of decimal notation. 4. The 780 Music division neglects music theory and emphasizes music practice, especially performance. 5. The assignment of classification numbers is unbalanced, especially between theory and practice, composition and performance, and among the sub-sections of vocal and instrumental music. 6. Many important subjects are omitted from the DDC music schedule, for example musicology and its branches, composition, and the traditional instruments of many countries. 7. The terminology employed is often improper and inconsistent.


Music Genre Classification Based on Timbral Texture and Rhythmic Content Features

  • Baniya, Babu Kaji; Ghimire, Deepak; Lee, Joonwhon
    • Annual Conference of KIPS, 2013.05a, pp.204-207, 2013
  • Music genre classification is an essential component of music information retrieval systems. Two components matter for better genre classification: audio feature extraction and the classifier. This paper incorporates two kinds of features for genre classification, timbral texture and rhythmic content. Timbral texture comprises several spectral and Mel-Frequency Cepstral Coefficient (MFCC) features; before choosing the timbral features, we explore which features contribute least to genre discrimination, which reduces the feature dimension. For the timbral features, central moments up to fourth order and the covariance components between features are included to improve the overall classification result. For the rhythmic content, features extracted from the beat histogram are selected. An Extreme Learning Machine (ELM) with bagging is used as the classifier. Based on the proposed feature sets and classifier, experiments on the well-known GTZAN dataset with ten music genres show better classification accuracy than existing approaches.
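
A minimal sketch of the feature side (MFCC central moments up to fourth order plus a coarse rhythmic summary) and a bagged classifier; scikit-learn has no ELM, so a bagged MLP stands in for the paper's ELM with bagging, and all sizes are illustrative:

```python
# Timbral statistics (mean, variance, skew, kurtosis of MFCCs) plus a
# rhythmic tempo estimate, fed to a bagging ensemble.
import librosa
import numpy as np
from scipy import stats
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier

def track_features(path: str) -> np.ndarray:
    y, sr = librosa.load(path, duration=30.0)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    # Central moments up to 4th order per MFCC coefficient.
    timbral = np.concatenate([mfcc.mean(axis=1), mfcc.var(axis=1),
                              stats.skew(mfcc, axis=1),
                              stats.kurtosis(mfcc, axis=1)])
    # A coarse rhythmic summary: global tempo estimate.
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    return np.append(timbral, tempo)

# Bagged MLP as a rough stand-in for ELM with bagging.
clf = BaggingClassifier(MLPClassifier(hidden_layer_sizes=(64,),
                                      max_iter=500), n_estimators=10)
```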

An Implementation of Automatic Genre Classification System for Korean Traditional Music (한국 전통음악 (국악)에 대한 자동 장르 분류 시스템 구현)

  • Lee Kang-Kyu; Yoon Won-Jung; Park Kyu-Sik
    • The Journal of the Acoustical Society of Korea, v.24 no.1, pp.29-37, 2005
  • This paper proposes an automatic genre classification system for Korean traditional music. The proposed system accepts a queried piece of music and classifies it, based on its content, into one of six genres: Royal Shrine Music, Classical Chamber Music, Folk Song, Folk Music, Buddhist Music, and Shamanist Music. In general, content-based music genre classification consists of two stages: music feature vector extraction and pattern classification. For feature extraction, the system extracts 58-dimensional feature vectors, including spectral centroid, spectral rolloff, and spectral flux based on the STFT, along with coefficient-domain features such as LPC and MFCC; these features are then further optimized using the SFS method. For pattern (genre) classification, k-NN, Gaussian, GMM, and SVM algorithms are considered. In addition, the proposed system adopts the MFC method to settle the uncertainty in system performance caused by different query patterns (or portions). The experimental results verify successful genre classification performance above 97% for both the k-NN and SVM classifiers, with the SVM classifier roughly three times faster than k-NN.
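
A minimal sketch of the extract-select-classify pipeline the abstract describes, using scikit-learn's sequential forward selection; feature counts are illustrative, not the paper's 58-dimensional configuration:

```python
# STFT-based spectral features plus MFCCs, sequential forward selection
# (SFS), then an SVM classifier.
import librosa
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def features(path: str) -> np.ndarray:
    y, sr = librosa.load(path, duration=30.0)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr).mean()
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    return np.concatenate([[centroid, rolloff], mfcc])

# SFS keeps the most discriminative feature subset before classification.
svm = SVC(kernel="rbf")
pipe = make_pipeline(StandardScaler(),
                     SequentialFeatureSelector(svm, n_features_to_select=8),
                     svm)
```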

Opera Clustering: K-means on librettos datasets

  • Jeong, Harim; Yoo, Joo Hun
    • Journal of Internet Computing and Services, v.23 no.2, pp.45-52, 2022
  • With the development of artificial intelligence analysis methods, especially machine learning, many fields are widely expanding their application ranges, but classical music still presents difficulties: genre classification and music recommendation systems built on deep learning are actively used for popular music, yet not for classical music. In this paper, we attempt to cluster operas and determine which of three basic musical attributes (composer, period of composition, and emotional atmosphere) best explains the clusters. To generate emotion labels, we adopt zero-shot classification with four basic emotions: 'happiness', 'sadness', 'anger', and 'fear'. After embedding each opera libretto with a doc2vec model, the optimal number of clusters is computed with the elbow method, and the resulting four centroids are used in k-means clustering of the unlabeled libretto dataset. Clustering quality is assessed with adjusted Rand index scores, and the clusters are compared against the annotated attributes. The four machine-generated clusters turned out to be most similar to the grouping by period, while the emotional similarity by composer and by period was not significant. Knowing that period is the right criterion should make it easier for listeners to find music that suits their tastes.
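
A minimal sketch of the libretto pipeline (doc2vec embedding, elbow scan, k-means) using gensim and scikit-learn; the corpus, vector size, and k range are illustrative assumptions:

```python
# Embed librettos with doc2vec, scan k with the elbow method, then
# cluster with k-means (the paper settles on four clusters).
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.cluster import KMeans

librettos = ["che gelida manina ...", "la donna e mobile ...",
             "casta diva ...", "vesti la giubba ..."]  # hypothetical corpus
docs = [TaggedDocument(text.split(), [i]) for i, text in enumerate(librettos)]

model = Doc2Vec(docs, vector_size=50, min_count=1, epochs=40)
X = [model.dv[i] for i in range(len(librettos))]

# Elbow method: inspect inertia over k and pick the bend.
inertias = {k: KMeans(n_clusters=k, n_init=10).fit(X).inertia_
            for k in range(2, len(X))}
labels = KMeans(n_clusters=4, n_init=10).fit_predict(X)
```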

Content-Based Genre Classification Using Climax Extraction in Music (음악의 클라이맥스 추출을 이용한 내용 기반 장르 분류)

  • Ko, Il-Ju; Chung, Myoung-Bum
    • Journal of Korea Multimedia Society, v.10 no.7, pp.817-826, 2007
  • Existing music genre classification research has used signal features from a 20-second interval taken either at a random position or from around the 40%~45% point of the track. This paper proposes improving on that by classifying genre using the climax of the music. Music generally divides into three parts, introduction, progress, and climax, and the climax is the part the music emphasizes and that best expresses its character, so using it when classifying should yield more efficient results. We locate the climax by finding the tempo and nodes using the FFT and taking the maximum waveform within each node. In a genre classification experiment comparing the existing method with the proposed one, the existing method achieved 47% accuracy, while the proposed method improved this to 56%.
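
A rough sketch of climax selection, approximating the climax as the highest-energy window of the track; the paper's FFT-based tempo/node analysis is replaced here by a simple RMS scan, so this is a stand-in, not the authors' method:

```python
# Find the loudest 20-second window as a rough climax candidate.
import librosa
import numpy as np

def climax_segment(path: str, win_seconds: float = 20.0) -> float:
    """Return the start time (s) of the highest-energy window."""
    y, sr = librosa.load(path)
    rms = librosa.feature.rms(y=y)[0]             # frame-level energy
    hop = 512                                     # librosa's default hop
    frames_per_win = int(win_seconds * sr / hop)
    # Sliding-window energy sum; argmax marks the climax candidate.
    window_energy = np.convolve(rms, np.ones(frames_per_win), mode="valid")
    start_frame = int(np.argmax(window_energy))
    return start_frame * hop / sr
```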
