• Title/Summary/Keyword: Classification of Music


A Method for Measuring the Difficulty of Music Scores

  • Song, Yang-Eui; Lee, Yong Kyu
    • Journal of the Korea Society of Computer and Information / v.21 no.4 / pp.39-46 / 2016
  • While the difficulty of a piece of music can be classified by a variety of standards, conventional methods rely on the subjective judgment of experienced musicians or conductors. A music score is hard to evaluate because there is no quantitative criterion for determining its degree of difficulty. In this paper, we propose a new classification method for determining the difficulty of a piece of music. To determine the difficulty, we convert the traditional printed score into an electronic music sheet. We then quantify the elements needed to play the piece, such as the distance between notes, the tempo, and the ease of interpretation. By aggregating these numerical values into an overall difficulty score, we propose a difficulty evaluation for the score and demonstrate it through experiments.
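
As a rough illustration of quantifying difficulty from note distances and tempo, the sketch below combines the average pitch interval and the tempo into a single score. It is only a minimal Python sketch under assumed inputs; the function name, the MIDI-pitch input, the 120 BPM reference, and the weights are hypothetical and do not reproduce the paper's actual formula.

    # Minimal sketch: combine average note distance and tempo into one
    # difficulty number. Weights and the 120 BPM reference are arbitrary.
    def difficulty_score(note_pitches, tempo_bpm, w_interval=0.5, w_tempo=0.5):
        if len(note_pitches) < 2:
            return 0.0
        # Average absolute interval (in semitones) between consecutive notes.
        intervals = [abs(b - a) for a, b in zip(note_pitches, note_pitches[1:])]
        avg_interval = sum(intervals) / len(intervals)
        # Faster tempo -> higher score, normalized against 120 BPM.
        tempo_factor = tempo_bpm / 120.0
        return w_interval * avg_interval + w_tempo * tempo_factor

    # Example: a melody given as MIDI pitch numbers, played at 140 BPM.
    print(difficulty_score([60, 64, 67, 72, 65], tempo_bpm=140))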

Improvement of Speech/Music Classification Based on RNN in EVS Codec for Hearing Aids (EVS 코덱에서 보청기를 위한 RNN 기반의 음성/음악 분류 성능 향상)

  • Kang, Sang-Ick; Lee, Sang Min
    • Journal of rehabilitation welfare engineering & assistive technology / v.11 no.2 / pp.143-146 / 2017
  • In this paper, a novel approach is proposed to improve the performance of speech/music classification using a recurrent neural network (RNN) in the enhanced voice services (EVS) codec of 3GPP for hearing aids. The feature vectors applied to the RNN are selected from the relevant EVS parameters for efficient speech/music classification. The performance of the proposed algorithm is evaluated under various conditions and on a large speech/music dataset. The proposed algorithm yields better results than the conventional scheme implemented in the EVS.
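
For readers unfamiliar with the setup, the following is a minimal sketch of a sequence classifier of this kind: a GRU-based RNN that maps a sequence of per-frame feature vectors to a speech/music decision. The feature dimension, hidden size, and the use of PyTorch are assumptions for illustration; the paper's actual network and EVS-derived features are not reproduced here.

    import torch
    import torch.nn as nn

    class SpeechMusicRNN(nn.Module):
        """Toy RNN that classifies a sequence of frame features as speech or music."""
        def __init__(self, feat_dim=16, hidden=32):
            super().__init__()
            self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
            self.out = nn.Linear(hidden, 2)            # two classes: speech, music

        def forward(self, x):                          # x: (batch, frames, feat_dim)
            _, h = self.rnn(x)                         # final hidden state
            return self.out(h.squeeze(0))              # per-sequence logits

    model = SpeechMusicRNN()
    frames = torch.randn(4, 50, 16)                    # 4 dummy 50-frame sequences
    print(model(frames).argmax(dim=1))                 # 0 = speech, 1 = music (by convention)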

Music Similarity Search Based on Music Emotion Classification

  • Kim, Hyoung-Gook; Kim, Jang-Heon
    • The Journal of the Acoustical Society of Korea / v.26 no.3E / pp.69-73 / 2007
  • This paper presents an efficient algorithm for retrieving similar music files from a large archive of digital music. Users can navigate and discover new music files that sound similar to a given query file by searching the archive. Since most methods for finding similar music files in a large database require computing the distance between the query file and every file in the database, they are very time-consuming. By measuring the acoustic distance only between music files pre-classified with the same type of emotion, the proposed method significantly speeds up the search and increases precision compared with the brute-force method.
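
The speed-up comes from restricting the distance computation to files that carry the same emotion label as the query rather than scanning the whole archive. A minimal sketch of that pruning step, with made-up feature vectors and emotion labels, is shown below.

    import numpy as np

    def similar_tracks(query_vec, query_emotion, archive, top_k=5):
        """archive: list of (track_id, emotion_label, feature_vector) tuples."""
        # Only compare against tracks pre-classified with the query's emotion.
        candidates = [(tid, vec) for tid, emo, vec in archive if emo == query_emotion]
        dists = [(tid, float(np.linalg.norm(query_vec - vec))) for tid, vec in candidates]
        return sorted(dists, key=lambda t: t[1])[:top_k]

    rng = np.random.default_rng(0)
    archive = [(i, rng.choice(["happy", "sad", "calm"]), rng.normal(size=8))
               for i in range(1000)]
    print(similar_tracks(rng.normal(size=8), "happy", archive))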

Implementation of Music Source Classification System by Embedding Information Code (정보코드 결합을 이용한 음원분류 시스템 구현)

  • Jo, Jae-Young; Kim, Yoon-Ho
    • Journal of Advanced Navigation Technology / v.10 no.3 / pp.250-255 / 2006
  • In the digital multimedia era, we usually use digital audio formats (MP3, WAV, etc.) rather than analog music. If, during generation, recording, or transmission, we embed a digital code carrying useful music information, the music can easily be selected and classified by title on an MP3 player that includes the embedded sound-source classification system. In this paper, a sound-source classification system that can classify and search music information in a user-friendly way is implemented. We performed experiments with the implemented system to verify the validity of the proposed scheme.
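
The abstract does not describe the actual code format, so the following is only a toy illustration of the idea: attach a small machine-readable information code to each sound source and classify by reading the code instead of analyzing the audio. Here the code is written to a JSON sidecar file next to a placeholder audio file; a real system would embed it in the stream or container.

    import json
    from pathlib import Path

    def write_info_code(audio_path, title, genre):
        # Store the "information code" next to the audio file.
        code = {"title": title, "genre": genre}
        Path(audio_path + ".code.json").write_text(json.dumps(code))

    def classify_by_code(audio_paths):
        # Group files by the genre stored in their attached code.
        by_genre = {}
        for p in audio_paths:
            code = json.loads(Path(p + ".code.json").read_text())
            by_genre.setdefault(code["genre"], []).append(p)
        return by_genre

    write_info_code("song.mp3", "Example Track", "Jazz")   # "song.mp3" is a placeholder name
    print(classify_by_code(["song.mp3"]))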


Music Genre Classification System Using Decorrelated Filter Bank (Decorrelated Filter Bank를 이용한 음악 장르 분류 시스템)

  • Lim, Shin-Cheol; Jang, Sei-Jin; Lee, Seok-Pil; Kim, Moo-Young
    • The Journal of the Acoustical Society of Korea / v.30 no.2 / pp.100-106 / 2011
  • Music recordings have been digitized to the point that huge music databases are available to the public, so an automatic music genre classification system is required to manage the growing databases effectively. The Mel-Frequency Cepstral Coefficient (MFCC) is a popular feature vector for genre classification. In this paper, a combined super-vector of Decorrelated Filter Bank (DFB) and Octave-based Spectral Contrast (OSC) features computed over texture windows is processed by a Support Vector Machine (SVM) for genre classification. Even with a lower-order feature vector, the proposed super-vector yields 4.2% higher classification accuracy than the conventional Marsyas system.
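
As a sketch of the super-vector idea, the code below concatenates two per-track feature sets (stand-ins for the DFB and OSC statistics over texture windows) and trains an SVM on the combined vector. The random features, dimensions, and the scikit-learn SVC are placeholders for illustration, not the paper's actual pipeline.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_tracks = 200
    dfb_feats = rng.normal(size=(n_tracks, 20))      # placeholder DFB statistics
    osc_feats = rng.normal(size=(n_tracks, 12))      # placeholder OSC statistics
    labels = rng.integers(0, 10, size=n_tracks)      # ten genre labels

    super_vec = np.hstack([dfb_feats, osc_feats])    # the combined super-vector
    X_tr, X_te, y_tr, y_te = train_test_split(super_vec, labels, random_state=0)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))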

Brainwave-based Mood Classification Using Regularized Common Spatial Pattern Filter

  • Shin, Saim; Jang, Sei-Jin; Lee, Donghyun; Park, Unsang; Kim, Ji-Hwan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.2 / pp.807-824 / 2016
  • In this paper, a method of mood classification based on user brainwaves is proposed for real-time application in commercial services. Unlike conventional mood-analysis systems, the proposed method focuses on classifying a user's mood in real time by analyzing the user's brainwaves. Applying brainwave research to commercial services requires two elements: robust performance and a comfortable fit. This paper proposes a filter based on Regularized Common Spatial Patterns (RCSP) and presents its use in mood classification for a music service via a wireless consumer electroencephalography (EEG) device with only 14 electrodes. Despite using fewer electrodes, the proposed system achieves approximately 10 percentage points higher mood-classification accuracy on the same dataset than one of the best EEG-based mood-classification systems, which uses a 32-electrode skullcap (EU FP7 PetaMedia project). This paper confirms the commercial viability of brainwave-based mood-classification technology. To analyze the improvements, the changes in feature variation after applying the RCSP filters and the performance variation between users are also investigated. Furthermore, as a prototype service, this paper introduces MyMusicShuffler, a mood-based music list management system built on the proposed mood-classification method.
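
For context, a common way to build CSP-type spatial filters is to regularize the class covariance matrices and solve a generalized eigenvalue problem. The sketch below shows one generic variant (shrinkage toward the identity); the trial shapes, the regularization scheme, and the filter count are assumptions and not necessarily what the paper uses.

    import numpy as np
    from scipy.linalg import eigh

    def rcsp_filters(trials_a, trials_b, reg=0.1, n_filters=4):
        """trials_*: arrays shaped (trials, channels, samples) for two mood classes."""
        def avg_cov(trials):
            return np.mean([np.cov(t) for t in trials], axis=0)   # channel covariance

        n_ch = trials_a.shape[1]
        # Shrink each class covariance toward the identity (the regularization step).
        Ca = (1 - reg) * avg_cov(trials_a) + reg * np.eye(n_ch)
        Cb = (1 - reg) * avg_cov(trials_b) + reg * np.eye(n_ch)
        # Generalized eigendecomposition; extreme eigenvectors separate the classes best.
        w, V = eigh(Ca, Ca + Cb)
        order = np.argsort(w)
        picks = np.concatenate([order[:n_filters // 2], order[-n_filters // 2:]])
        return V[:, picks].T                                       # spatial filters as rows

    rng = np.random.default_rng(0)
    happy = rng.normal(size=(30, 14, 256))    # dummy 14-channel EEG trials
    sad = rng.normal(size=(30, 14, 256))
    print(rcsp_filters(happy, sad).shape)     # (4, 14)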

Generating Data and Applying Machine Learning Methods for Music Genre Classification (음악 장르 분류를 위한 데이터 생성 및 머신러닝 적용 방안)

  • Bit-Chan Eom; Dong-Hwi Cho; Choon-Sung Nam
    • Journal of Internet Computing and Services / v.25 no.4 / pp.57-64 / 2024
  • This paper aims to improve the accuracy of music genre classification for tracks whose genre information is not provided, by using machine learning to classify a large amount of music data. Instead of the GTZAN dataset commonly used in previous genre-classification research, the paper proposes collecting and preprocessing its own data. To build a dataset with better classification performance than GTZAN, we extract the segments with the highest onset energy from each track. We use 57 features of the music data for training, including Mel-Frequency Cepstral Coefficients (MFCC). Using a Support Vector Machine (SVM) model on the preprocessed data, we achieve a training accuracy of 85% and a test accuracy of 71% when classifying into the Classical, Jazz, Country, Disco, Soul, Rock, Metal, and Hip-hop genres.
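
A minimal sketch of the preprocessing idea follows: pick the segment around the strongest onset and summarize it with MFCC statistics. The file path, the 30-second segment length, and the use of librosa are assumptions; the paper's full 57-dimensional feature set is not reproduced.

    import librosa
    import numpy as np

    def strongest_onset_segment_features(path, seg_seconds=30, n_mfcc=13):
        y, sr = librosa.load(path, sr=22050)
        onset_env = librosa.onset.onset_strength(y=y, sr=sr)
        start = int(librosa.frames_to_samples(int(np.argmax(onset_env))))
        segment = y[start:start + seg_seconds * sr]          # highest-energy onset region
        mfcc = librosa.feature.mfcc(y=segment, sr=sr, n_mfcc=n_mfcc)
        # Mean and standard deviation of each coefficient as summary features.
        return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

    print(strongest_onset_segment_features("track.mp3").shape)   # "track.mp3" is a placeholder

Such per-track vectors, extended with further spectral statistics, would then feed an SVM classifier in the same way as in the genre-classification sketch earlier in this list.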

Determining Key Features of Recognition Korean Traditional Music Using Spectrogram

  • Kim Jae Chun; Kwak Kyung Sup
    • The Journal of the Acoustical Society of Korea / v.24 no.2E / pp.67-70 / 2005
  • To realize a traditional-music recognition system, characteristics specific to Far East Asian music should be identified. Using spectrograms, several distinct attributes of Korean traditional music are surveyed: the frequency distribution, beat cycle, and frequency energy intensity within the samples each show distinctive characteristics. A preliminary experiment toward a Korean traditional-music recognition system is carried out, and using these characteristics a classification accuracy of 94.5% is obtained. Since Korea, Japan, and China share the same musical roots, both in instruments and playing style, analyzing Korean traditional music can help in understanding Far East Asian traditional music.
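
The sketch below illustrates how the three attributes mentioned above can be read off a spectrogram: a frequency-distribution summary (spectral centroid), a rough beat cycle from the frame-energy autocorrelation, and the average band energy. It uses only numpy/scipy on a dummy signal and is not the paper's measurement procedure.

    import numpy as np
    from scipy.signal import spectrogram

    def spectrogram_attributes(y, sr):
        f, t, S = spectrogram(y, fs=sr, nperseg=1024)
        power = S.mean(axis=1)
        centroid = float((f * power).sum() / power.sum())    # frequency distribution
        frame_energy = S.sum(axis=0)                         # energy per time frame
        # Rough beat cycle: dominant lag of the frame-energy autocorrelation.
        centered = frame_energy - frame_energy.mean()
        ac = np.correlate(centered, centered, mode="full")[frame_energy.size:]
        beat_period = float(t[1] - t[0]) * (int(np.argmax(ac)) + 1)
        return centroid, beat_period, float(frame_energy.mean())

    sr = 22050
    y = np.sin(2 * np.pi * 440 * np.arange(sr * 3) / sr)     # 3-second dummy tone
    print(spectrogram_attributes(y, sr))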

An investigation of subband decomposition and feature-dimension reduction for musical genre classification (음악 장르 분류를 위한 부밴드 분해와 특징 차수 축소에 관한 연구)

  • Seo, Jin Soo; Kim, Junghyun; Park, Jihyun
    • The Journal of the Acoustical Society of Korea / v.36 no.2 / pp.144-150 / 2017
  • Musical genre is indispensable in constructing music information retrieval systems, such as music search and classification. In general, the spectral characteristics of a music signal are obtained through a subband decomposition that represents the relative distribution of the harmonic and non-harmonic components. In this paper, we investigate the subband decomposition parameters used in feature extraction in order to improve musical genre classification accuracy. In addition, linear projection methods are studied to reduce the resulting feature dimension. Experiments on widely used music datasets confirm that a subband decomposition finer than the commonly adopted octave scale improves genre-classification accuracy, and show that the feature-dimension reduction effectively lowers the classifier's computational complexity.
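
To make the two ingredients concrete, the sketch below computes per-subband energies on a grid finer than one octave per band and then reduces the feature dimension with PCA as one common linear projection. The band spacing, the PCA choice, and the random input signals are assumptions for illustration.

    import numpy as np
    from sklearn.decomposition import PCA

    def subband_energies(y, sr, bands_per_octave=2, fmin=110.0):
        spectrum = np.abs(np.fft.rfft(y)) ** 2
        freqs = np.fft.rfftfreq(len(y), d=1.0 / sr)
        # Band edges spaced finer than an octave (here: half-octave steps).
        edges = [fmin]
        while edges[-1] * 2 ** (1.0 / bands_per_octave) < sr / 2:
            edges.append(edges[-1] * 2 ** (1.0 / bands_per_octave))
        return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                         for lo, hi in zip(edges[:-1], edges[1:])])

    rng = np.random.default_rng(0)
    sr = 22050
    feats = np.array([subband_energies(rng.normal(size=sr), sr) for _ in range(50)])
    reduced = PCA(n_components=5).fit_transform(feats)       # linear dimension reduction
    print(feats.shape, "->", reduced.shape)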

Speech/Music Signal Classification Based on Spectrum Flux and MFCC For Audio Coder (오디오 부호화기를 위한 스펙트럼 변화 및 MFCC 기반 음성/음악 신호 분류)

  • Sangkil Lee; In-Sung Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.16 no.5 / pp.239-246 / 2023
  • In this paper, we propose an open-loop algorithm for classifying speech and music signals using spectral flux and Mel-Frequency Cepstral Coefficient (MFCC) parameters for an audio coder. The MFCCs are used as short-term features to increase responsiveness, and the spectral flux is used as a long-term feature to improve accuracy. The overall speech/music classification decision is made by combining the short-term and long-term classification methods. A Gaussian Mixture Model (GMM) is used for pattern recognition, and the optimal GMM parameters are estimated with the Expectation Maximization (EM) algorithm. The proposed combined long-term and short-term speech/music classification method shows an average classification error rate of 1.5% on various audio sources, improving the classification error rate by 0.9% compared with the short-term-only method and by 0.6% compared with the long-term-only method. Compared with the Unified Speech and Audio Coding (USAC) audio classification method, the proposed method improves the classification error rate by 9.1% on percussive music signals with attacks and by 5.8% on speech signals.
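
As a minimal sketch of the framework (not the paper's exact features or decision rule), the code below trains one GMM per class on placeholder feature vectors and classifies a block of frames by comparing summed log-likelihoods; scikit-learn's GaussianMixture fits the models with the EM algorithm.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    speech_feats = rng.normal(loc=0.0, size=(500, 14))   # placeholder MFCC + flux vectors
    music_feats = rng.normal(loc=1.0, size=(500, 14))

    gmm_speech = GaussianMixture(n_components=4, random_state=0).fit(speech_feats)
    gmm_music = GaussianMixture(n_components=4, random_state=0).fit(music_feats)

    def classify(frames):
        """Decide speech vs. music from the summed log-likelihood over the frames."""
        ll_speech = gmm_speech.score_samples(frames).sum()
        ll_music = gmm_music.score_samples(frames).sum()
        return "speech" if ll_speech > ll_music else "music"

    print(classify(rng.normal(loc=1.0, size=(20, 14))))   # expected: "music"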