• Title/Summary/Keyword: Tempo feature extraction


Extraction of Chord and Tempo from Polyphonic Music Using Sinusoidal Modeling

  • Kim, Do-Hyoung;Chung, Jae-Ho
    • The Journal of the Acoustical Society of Korea, v.22 no.4E, pp.141-149, 2003
  • As music in digital form has become widespread, there has been growing interest in automatically extracting intrinsic musical information such as the key, chord progression, melody progression, and tempo of a piece. Although several studies have attempted this, consistent and reliable extraction results have not been achieved. In this paper, we propose a method for extracting chord and tempo information from general polyphonic music signals. A chord is a combination of musical notes, and each note in turn consists of several frequency components; extracting chord information therefore requires analyzing the frequency components of the music signal. In this study, we employ sinusoidal modeling, which represents the signal with sinusoids at the frequencies of musical tones, and show that it yields reliable chord extraction results. We also find that the tempo of the music, one of its most salient features, supports the chord extraction when the two are used together. The proposed musical feature extraction scheme can be applied in many fields, such as digital music services queried by musical features, the operation of music databases, and music players with a chord display function.
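The core idea above, that a chord is a combination of notes whose frequency components can be read off the signal's spectrum, can be sketched in a few lines. This is a minimal illustration using FFT peak picking rather than the paper's sinusoidal-modeling procedure; the function names are hypothetical.

```python
import numpy as np

def freq_to_pitch_class(freq):
    """Map a frequency in Hz to a pitch class 0-11 (C = 0), with A4 = 440 Hz."""
    midi = 69 + 12 * np.log2(freq / 440.0)
    return int(round(midi)) % 12

def chord_pitch_classes(signal, sr, n_peaks=6):
    """Estimate the dominant pitch classes from the largest spectral peaks."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    valid = freqs > 50  # ignore DC and very low frequencies
    order = np.argsort(spectrum[valid])[::-1][:n_peaks]
    return {freq_to_pitch_class(f) for f in freqs[valid][order]}

# Synthesize a C-major triad (C4, E4, G4) and recover its pitch classes.
sr = 8000
t = np.arange(sr) / sr
chord = sum(np.sin(2 * np.pi * f * t) for f in (261.63, 329.63, 392.00))
print(chord_pitch_classes(chord, sr))  # → {0, 4, 7}, i.e. C, E, G
```

A real system would also have to discount harmonics and track chords over time, which is where the sinusoidal model and the tempo cue come in.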

A New Tempo Feature Extraction Based on Modulation Spectrum Analysis for Music Information Retrieval Tasks

  • Kim, Hyoung-Gook
    • The Journal of The Korea Institute of Intelligent Transport Systems, v.6 no.2, pp.95-106, 2007
  • This paper proposes an effective tempo feature extraction method for music information retrieval. The tempo information is modeled by the narrow-band temporal modulation components, which are decomposed into a modulation spectrum via joint frequency analysis. In implementation, the tempo feature is directly extracted from the modified discrete cosine transform coefficients, which is the output of partial MP3(MPEG 1 Layer 3) decoder. Then, different features are extracted from the amplitudes of modulation spectrum and applied to different music information retrieval tasks. The logarithmic scale modulation frequency coefficients are employed in automatic music emotion classification and music genre classification. The classification precision in both systems is improved significantly. The bit vectors derived from adaptive modulation spectrum is used in audio fingerprinting task That is proved to be able to achieve high robustness in this application. The experimental results in these tasks validate the effectiveness of the proposed tempo feature.
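The modulation-spectrum idea can be sketched simply: take a slowly varying energy envelope of the signal, Fourier-transform it, and read the tempo off the peak modulation frequency. The sketch below uses a short-time energy envelope as a stand-in for the per-band MDCT energies of the partial MP3 decoder; function names and parameters are illustrative assumptions.

```python
import numpy as np

def tempo_from_modulation_spectrum(signal, sr, frame=1024, hop=512):
    """Estimate tempo (BPM) from the peak of the energy-envelope modulation spectrum."""
    # Short-time energy envelope, sampled at sr/hop Hz.
    n_frames = 1 + (len(signal) - frame) // hop
    env = np.array([np.sum(signal[i * hop:i * hop + frame] ** 2)
                    for i in range(n_frames)])
    env = env - env.mean()  # remove DC so it cannot dominate the spectrum
    # Modulation spectrum: FFT of the envelope; its axis is modulation frequency.
    mod = np.abs(np.fft.rfft(env * np.hanning(len(env))))
    mod_freqs = np.fft.rfftfreq(len(env), hop / sr)
    # Search the musically plausible range 0.5-4 Hz (30-240 BPM).
    band = (mod_freqs >= 0.5) & (mod_freqs <= 4.0)
    return 60.0 * mod_freqs[band][np.argmax(mod[band])]

# A 2 Hz click train should read as roughly 120 BPM.
sr = 8000
t = np.arange(10 * sr) / sr
clicks = np.sin(2 * np.pi * 1000 * t) * ((t % 0.5) < 0.02)
print(tempo_from_modulation_spectrum(clicks, sr))  # should land near 120
```

Features such as the log-scale modulation frequency coefficients would then be computed from the amplitudes of this modulation spectrum rather than from its single peak.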


Automatic Emotion Classification of Music Signals Using MDCT-Driven Timbre and Tempo Features

  • Kim, Hyoung-Gook;Eom, Ki-Wan
    • The Journal of the Acoustical Society of Korea, v.25 no.2E, pp.74-78, 2006
  • This paper proposes an effective method for classifying the emotion of music from its acoustic signal. Two feature sets, timbre and tempo, are extracted directly from the modified discrete cosine transform (MDCT) coefficients, which are the output of a partial MP3 (MPEG-1 Layer 3) decoder. The tempo feature extraction method is based on long-term modulation spectrum analysis. To combine these two feature sets, which have different time resolutions, in an integrated system, a two-layer classifier based on the AdaBoost algorithm is used. The first layer employs the MDCT-driven timbre features; adding the MDCT-driven tempo feature in the second layer improves the classification precision dramatically.
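The boosting machinery underneath such a layered classifier is compact enough to sketch. Below is a minimal AdaBoost with decision stumps on toy two-feature data (one "timbre" score, one "tempo" score); it is a generic textbook AdaBoost, not the paper's two-layer system, and all names and data are illustrative.

```python
import numpy as np

def train_stump(X, y, w):
    """Exhaustively pick the decision stump minimizing weighted error (y in {-1,+1})."""
    best = (0, 0.0, 1, np.inf)  # (feature, threshold, polarity, weighted error)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = pol * np.where(X[:, j] < thr, -1, 1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, thr, pol, err)
    return best

def adaboost(X, y, rounds=5):
    """Minimal AdaBoost: reweight samples toward those the previous stumps missed."""
    w = np.full(len(y), 1.0 / len(y))
    model = []
    for _ in range(rounds):
        j, thr, pol, err = train_stump(X, y, w)
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        pred = pol * np.where(X[:, j] < thr, -1, 1)
        w = w * np.exp(-alpha * y * pred)
        w /= w.sum()
        model.append((j, thr, pol, alpha))
    return model

def predict(model, X):
    """Sign of the alpha-weighted vote of all stumps."""
    score = sum(alpha * pol * np.where(X[:, j] < thr, -1, 1)
                for j, thr, pol, alpha in model)
    return np.sign(score)

# Toy clips: column 0 a "timbre" score, column 1 a "tempo" score; neither
# feature separates the classes alone, but the boosted combination does.
X = np.array([[0.9, 0.9], [0.8, 0.4], [0.4, 0.8],
              [0.1, 0.1], [0.2, 0.6], [0.6, 0.2]])
y = np.array([1, 1, 1, -1, -1, -1])
model = adaboost(X, y, rounds=5)
print((predict(model, X) == y).all())  # → True
```

The paper's point is architectural rather than algorithmic: frame-level timbre features and clip-level tempo features live at different time resolutions, so they are introduced in separate boosting layers instead of being concatenated into one vector.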

Content-Based Genre Classification Using Climax Extraction in Music (음악의 클라이맥스 추출을 이용한 내용 기반 장르 분류)

  • Ko, Il-Ju;Chung, Myoung-Bum
    • Journal of Korea Multimedia Society, v.10 no.7, pp.817-826, 2007
  • Existing research on music genre classification has used signal features from a 20-second interval chosen at random, or from the interval at 40%~45% of the way through the piece. This paper proposes improving on the accuracy of such methods by classifying genre using the climax of the piece. A piece of music is generally divided into three parts: introduction, development, and climax; the climax is the part the music emphasizes and the one that best expresses its character, so classification based on it should be more effective. We locate the climax by finding the tempo and the bars using the FFT, and taking the maximum waveform from each bar. In a genre classification experiment comparing the existing method with the proposed one, the existing method achieved 47% accuracy, while the proposed method achieved 56%.
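The heart of the approach, picking the analysis window from the loudest part of the piece instead of a random or fixed offset, can be sketched as a highest-energy-segment search. This is a simplified stand-in for the paper's bar-level maximum-waveform procedure; the function name and segment length are assumptions.

```python
import numpy as np

def find_climax(signal, sr, seg_sec=2.0):
    """Return (start_sample, end_sample) of the highest-RMS segment."""
    seg = int(seg_sec * sr)
    n = len(signal) // seg
    rms = np.array([np.sqrt(np.mean(signal[i * seg:(i + 1) * seg] ** 2))
                    for i in range(n)])
    i = int(np.argmax(rms))
    return i * seg, (i + 1) * seg

# A quiet tone with a loud burst in the middle: the burst should be picked.
sr = 4000
t = np.arange(10 * sr) / sr
x = 0.1 * np.sin(2 * np.pi * 220 * t)
x[4 * sr:6 * sr] *= 8  # the "climax" lies between 4 s and 6 s
start, end = find_climax(x, sr)
print(start / sr, end / sr)  # → 4.0 6.0
```

In the paper the segment boundaries would come from the estimated tempo and bar positions rather than a fixed 2-second grid, and genre features would then be computed from the selected segment.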
