• Title/Summary/Keyword: Music Representative Part Detection

Search results: 2

Music Genre Classification based on Musical Features of Representative Segments (대표구간의 음악 특징에 기반한 음악 장르 분류)

  • Lee, Jong-In; Kim, Byeong-Man
    • Journal of KIISE: Software and Applications, v.35 no.11, pp.692-700, 2008
  • In some previous work on musical genre classification, human experts specify the segments of a song from which musical features are extracted. Although this approach may enhance performance, it requires manual intervention and thus cannot be easily applied to new incoming songs. To extract musical features without manual intervention, most recent studies on music genre classification extract features from a pre-determined part of a song (for example, the 30 seconds after the initial 30 seconds), which may cause a loss of accuracy. In this paper, to alleviate this accuracy problem, we propose a new method that extracts features from representative segments (the main theme part) identified by structural analysis of the music piece. The proposed method detects segments with repeated melody in a song and selects representative ones among them by considering their positions and energies. Experimental results show that the proposed method significantly improves accuracy compared to the approach using a pre-determined part.
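The selection step described in the abstract (scoring repeated segments by position and energy) could be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the candidate segment starts are assumed to come from a prior repeated-melody detection step (not shown), and the position weighting used here is a hypothetical choice.

```python
import numpy as np

def select_representative(signal, sr, seg_len_s, candidates):
    """Pick a representative segment among repeated-melody candidates.

    signal     : 1-D audio samples
    sr         : sample rate (Hz)
    seg_len_s  : segment length in seconds
    candidates : segment start times (seconds), assumed to come from a
                 prior repeated-melody detection step (not shown here)
    """
    seg_len = int(seg_len_s * sr)
    scores = []
    for start_s in candidates:
        start = int(start_s * sr)
        seg = signal[start:start + seg_len]
        energy = float(np.mean(seg ** 2))      # mean power of the segment
        pos = start / len(signal)              # relative position in the song
        pos_weight = 1.0 - abs(pos - 0.4)      # hypothetical: favour segments near
                                               # the 40% mark, penalise intro/outro
        scores.append(energy * pos_weight)
    return candidates[int(np.argmax(scores))]
```

For example, given a song whose chorus is louder than its verses, the high-energy candidate near the middle of the track would be chosen over equally repeated but quieter candidates at the beginning or end.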

Detection of Music Mood for Context-aware Music Recommendation (상황인지 음악추천을 위한 음악 분위기 검출)

  • Lee, Jong-In; Yeo, Dong-Gyu; Kim, Byeong-Man
    • The KIPS Transactions: Part B, v.17B no.4, pp.263-274, 2010
  • To provide a context-aware music recommendation service, we first need to identify the music mood that a user prefers depending on his or her situation or context. Among various music characteristics, music mood has a close relation with people's emotions. Based on this relationship, some researchers have studied music mood detection, where a representative segment of music is manually selected and its mood classified. Although such approaches show good performance on music mood classification, they are difficult to apply to new music due to the manual intervention. Moreover, mood detection is made harder by the fact that mood usually varies over time. To cope with these problems, this paper presents an automatic method to classify music mood. First, a whole piece of music is segmented into several groups with similar characteristics using structural information. Then the mood of each segment is detected, where each individual's preference on mood is modeled by regression based on Thayer's two-dimensional mood model. Experimental results show that the proposed method achieves 80% or higher accuracy.
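The regression step on Thayer's two-dimensional model could be sketched as below. This is an illustrative least-squares version, not the paper's implementation: the feature matrix, the training scores, and the quadrant labels (common in Thayer-based MIR work) are all assumptions.

```python
import numpy as np

def thayer_quadrant(valence, arousal):
    """Map a (valence, arousal) point to a quadrant of Thayer's
    two-dimensional mood model (labels as commonly used in MIR work)."""
    if arousal >= 0:
        return "exuberance" if valence >= 0 else "anxious"
    return "contentment" if valence >= 0 else "depression"

def fit_mood_regressors(X, valence, arousal):
    """Fit two least-squares regressors mapping audio features X (n x d)
    to valence and arousal scores (hypothetical training data)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    wv, *_ = np.linalg.lstsq(Xb, valence, rcond=None)
    wa, *_ = np.linalg.lstsq(Xb, arousal, rcond=None)
    return wv, wa

def predict_mood(x, wv, wa):
    """Predict the mood quadrant for one feature vector x."""
    xb = np.append(x, 1.0)
    return thayer_quadrant(xb @ wv, xb @ wa)
```

In the paper's setting, each structural segment of a song would supply one feature vector, so a single piece can yield different mood labels over time.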