• Title/Summary/Keyword: music information retrieval

Search Results: 109

Music Lyrics Summarization Method using TextRank Algorithm (TextRank 알고리즘을 이용한 음악 가사 요약 기법)

  • Son, Jiyoung;Shin, Yongtae
    • Journal of Korea Multimedia Society / v.21 no.1 / pp.45-50 / 2018
  • This paper describes a method for summarizing music lyrics using the TextRank algorithm, which extracts the most important lines of the lyrics as a summary. This enables more effective music recommendation than approaches that simply analyze word frequencies.
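The TextRank step described above can be sketched as PageRank over a line-similarity graph. The sketch below is illustrative only: the paper's exact similarity function, damping factor, and summary length are not specified, so the word-overlap similarity and parameters here are assumptions.

```python
import math

def similarity(a, b):
    """TextRank-style word-overlap similarity between two lyric lines
    (assumed form; the paper's exact measure is not given)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    denom = math.log(len(wa) + 1) + math.log(len(wb) + 1)
    return len(wa & wb) / denom if denom else 0.0

def textrank_summary(lines, k=2, d=0.85, iters=50):
    """Rank lyric lines by power iteration and return the top-k in song order."""
    n = len(lines)
    # Weighted adjacency matrix over lines (no self-loops).
    w = [[similarity(lines[i], lines[j]) if i != j else 0.0
          for j in range(n)] for i in range(n)]
    scores = [1.0] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            rank = 0.0
            for j in range(n):
                out = sum(w[j])
                if w[j][i] and out:
                    rank += w[j][i] * scores[j] / out
            new.append((1 - d) + d * rank)
        scores = new
    top = sorted(range(n), key=lambda i: scores[i], reverse=True)[:k]
    return [lines[i] for i in sorted(top)]
```

Because chorus lines repeat and therefore share many words, they accumulate the highest ranks, which matches the intuition of summarizing lyrics "as important lyrics".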

Error-Tolerant Music Information Retrieval Method Using Query-by-Humming (허밍 질의를 이용한 오류에 강한 악곡 정보 검색 기법)

  • 정현열;허성필
    • The Journal of the Acoustical Society of Korea / v.23 no.6 / pp.488-496 / 2004
  • This paper describes a music information retrieval system that uses humming as the retrieval key. Humming is an easy way for the user to input a melody, but several aspects of humming degrade retrieval performance. One problem is the human factor: people do not always sing accurately, especially if they are inexperienced or unaccompanied. Another problem arises from signal processing. A music information retrieval method should therefore be robust enough to surmount both humming errors and signal-processing problems. A retrieval system has to extract pitch from the user's humming, but pitch extraction is not perfect; it often captures half or double pitches, even when the extraction algorithm takes the continuity of the pitch into account. Considering these problems, we propose a system that takes multiple pitch candidates into account. In addition to the frequencies of the pitch candidates, confidence measures obtained from their powers are also considered. We further propose a three-dimensional extension of the conventional DP algorithm so that multiple pitch candidates can be treated. Moreover, in the proposed algorithm, DP paths are changed dynamically according to the delta-pitches and IOI ratios of input and reference notes, in order to handle notes that are split or unified. An evaluation experiment comparing the proposed system with a conventional one showed that the proposed method gives better retrieval performance.
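The core idea of matching against multiple pitch candidates can be sketched with a much-simplified two-dimensional DP. This is not the paper's full three-dimensional algorithm with dynamically changing paths; the cost function (candidate distance divided by confidence) and all names are illustrative assumptions.

```python
def candidate_cost(cands, ref_pitch):
    """Cost of matching one humming frame against a reference pitch.
    cands: list of (pitch_in_semitones, confidence in (0, 1]).
    A confident, close candidate yields a low cost."""
    return min(abs(p - ref_pitch) / c for p, c in cands)

def dp_match(query, reference):
    """Align a humming query (candidate lists per note) to a reference
    melody (pitch list) with a plain edit-distance-style DP."""
    n, m = len(query), len(reference)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = candidate_cost(query[i - 1], reference[j - 1])
            # Diagonal = match; horizontal/vertical absorb split/unified notes.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

Keeping the half/double-pitch candidates in the frame (rather than committing to one pitch early) is what makes the match survive octave errors of the pitch extractor.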

A User Study on Information Searching Behaviors for Designing User-centered Query Interface of Content-Based Music Information Retrieval System (내용기반 음악정보 검색시스템을 위한 이용자 중심의 질의 인터페이스 설계에 관한 연구)

  • Lee, Yoon-Joo;Moon, Sung-Been
    • Journal of the Korean Society for Information Management / v.23 no.2 / pp.5-19 / 2006
  • The purpose of this study is to observe and analyze the information searching behaviors of various user groups in different access modes, for designing a user-centered query interface for a content-based Music Information Retrieval System (MIRS). Two expert groups and two non-expert groups were recruited for this research. The data-gathering techniques employed were in-depth interviews, participant observation, searching task experiments, think-aloud protocols, and post-search surveys. Expert users, especially those majoring in music theory, preferred to input exact notes one by one using devices such as a keyboard and musical score. Non-expert users, on the other hand, preferred to input melodic contours by humming.

Musical Genre Classification System based on Multiple-Octave Bands (다중 옥타브 밴드 기반 음악 장르 분류 시스템)

  • Byun, Karam;Kim, Moo Young
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.12 / pp.238-244 / 2013
  • For musical genre classification, various types of feature vectors are utilized. Mel-frequency cepstral coefficients (MFCC), decorrelated filter banks (DFB), and octave-based spectral contrast (OSC) are widely used as short-term features, and their long-term variations are also utilized. In this paper, OSC features are extracted not only in the single-octave band domain but also in the multiple-octave band domain, to capture the correlation between octave bands. As a baseline, we select the genre classification system that won fourth place in the 2012 music information retrieval evaluation exchange (MIREX) contest. By applying the OSC features based on multiple-octave bands, we improve classification accuracy by 0.40% and 3.15% on the GTZAN and Ballroom databases, respectively.
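The OSC feature and its multiple-octave extension can be sketched as below. This is a minimal illustration, not the paper's implementation: the band edges, the neighbor-fraction `alpha`, and the choice to merge only adjacent band pairs are assumptions.

```python
import numpy as np

def spectral_contrast(mag, bands, alpha=0.2):
    """Octave-based spectral contrast: per band, log-mean of the top
    alpha-fraction of magnitude bins (peak) minus log-mean of the bottom
    alpha-fraction (valley)."""
    feats = []
    for lo, hi in bands:
        band = np.sort(mag[lo:hi])
        k = max(1, int(alpha * len(band)))
        valley = np.log(band[:k].mean() + 1e-10)
        peak = np.log(band[-k:].mean() + 1e-10)
        feats.append(peak - valley)
    return np.array(feats)

def with_multi_octave(bands):
    """Append bands spanning two adjacent octaves, so contrast is also
    measured across octave-band boundaries (capturing their correlation)."""
    merged = [(bands[i][0], bands[i + 1][1]) for i in range(len(bands) - 1)]
    return bands + merged
```

With B single-octave bands this yields 2B-1 contrast values per frame instead of B, and the merged bands are what expose peak/valley structure that straddles an octave boundary.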

Representative Melodies Retrieval using Waveform and FFT Analysis of Audio (오디오의 파형과 FFT 분석을 이용한 대표 선율 검색)

  • Chung, Myoung-Bum;Ko, Il-Ju
    • Journal of KIISE: Software and Applications / v.34 no.12 / pp.1037-1044 / 2007
  • In content-based music retrieval systems, the representative melody of a piece is extracted and indexed to reduce search time. Existing studies have used MIDI data to extract a representative melody, with the weakness that they work only on MIDI data. This paper therefore proposes a representative melody retrieval method, based on digital signal processing, that can be applied to any audio file format. First, we apply the Fast Fourier Transform (FFT) to find the tempo and bar boundaries. We then measure, for each bar, how often high magnitude values appear in its PCM data. The point where high values are most concentrated is taken as the starting point of the representative melody, and the eight bars from that point form the representative melody section of the audio data. To verify the performance of the method, we selected a thousand songs and extracted a representative melody from each. The accuracy of the extracted representative melodies was 79.5% for the 737 songs whose tempo was successfully detected.
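The bar-scoring and eight-bar window selection might be sketched as follows. This is a loose reading of the abstract, not the paper's method: the peak-counting score, the 0.5 threshold, and the 4/4 bar assumption are all illustrative.

```python
import numpy as np

def representative_section(pcm, sr, tempo_bpm, beats_per_bar=4, bars=8):
    """Score each bar by counting strong FFT magnitude peaks, then return
    the start sample and samples of the best-scoring 8-bar window."""
    bar_len = int(sr * 60.0 / tempo_bpm * beats_per_bar)
    n_bars = len(pcm) // bar_len
    mags = [np.abs(np.fft.rfft(pcm[b * bar_len:(b + 1) * bar_len]))
            for b in range(n_bars)]
    # "High values": bins exceeding half the global maximum magnitude
    # (threshold choice is an assumption).
    thresh = 0.5 * max(m.max() for m in mags)
    scores = [int(np.sum(m > thresh)) for m in mags]
    # Slide an 8-bar window and pick where high values gather most.
    window_scores = [sum(scores[i:i + bars]) for i in range(n_bars - bars + 1)]
    best = int(np.argmax(window_scores))
    start = best * bar_len
    return start, pcm[start:start + bars * bar_len]
```

On a signal whose energetic material sits in its second half, the selected window lands at the start of that half, which is the behavior the abstract describes.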

Emotion-Based Music Retrieval Using Consistency Principle and Multi-Query Feedback (검색의 일관성원리와 피드백을 이용한 감성기반 음악 검색 시스템)

  • Shin, Song-Yi;Park, En-Jong;Eum, Kyoung-Bae;Lee, Joon-Whoan
    • The KIPS Transactions: Part B / v.17B no.2 / pp.99-106 / 2010
  • In this paper, we propose the construction of multiple queries and a consistency principle for a user-emotion-based music retrieval system. The features used in the system are MPEG-7 audio descriptors, the international standard recommended for content-based audio retrieval. In addition, we propose a method to determine weights representing the importance of each descriptor for each emotion, in order to reduce computation. The proposed retrieval algorithm, which uses relevance feedback based on the consistency principle and multiple queries, improves the ratio of retrieved pieces that match the user's emotion.

Musician Search in Time-Series Pattern Index Files using Features of Audio (오디오 특징계수를 이용한 시계열 패턴 인덱스 화일의 뮤지션 검색 기법)

  • Kim, Young-In
    • Journal of the Korea Society of Computer and Information / v.11 no.5 s.43 / pp.69-74 / 2006
  • Recent developments in multimedia content-based retrieval have drawn attention to musician retrieval using features of digital audio data, but indexing techniques for music databases have not been studied sufficiently. In this paper, we present a musician retrieval technique that applies space-splitting methods to a time-series pattern index file of audio features. Audio features are used to retrieve the musician, and the time-series pattern index file is used to search for candidate musicians. Experimental results show that a time-series pattern index file using the rotational split method is efficient for musician retrieval.


Implementation of an Efficient Music Retrieval System based on the Analysis of User Query Pattern (사용자 질의 패턴 분석을 통한 효율적인 음악 검색 시스템의 구현)

  • Rho, Seung-min;Hwang, Een-jun
    • The KIPS Transactions: Part A / v.10A no.6 / pp.737-748 / 2003
  • With the popularity of digital music content, querying and retrieving music efficiently from a database has become essential. In this paper, we propose a Fast Melody Finder (FMF) that retrieves melodies quickly from a music database using frequently queried tunes. The scheme is based on the observation that users tend to memorize and query a small number of melody segments, so indexing those segments enables fast retrieval. To handle such tunes, FMF transcribes all acoustic and common music notational inputs into contour strings such as UDR and LSR. We implemented a prototype system and demonstrated its performance through various experiments.
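A UDR transcription (Up/Down/Repeat relative to the previous note) reduces melody matching to substring search, which is what makes indexing the frequently queried segments cheap. A minimal sketch, assuming semitone pitch numbers as input; the in-memory dictionary index is illustrative, not FMF's actual structure:

```python
def to_udr(pitches):
    """Transcribe a note sequence into a U/D/R contour string:
    U = pitch rises, D = pitch falls, R = pitch repeats."""
    out = []
    for prev, cur in zip(pitches, pitches[1:]):
        out.append("U" if cur > prev else "D" if cur < prev else "R")
    return "".join(out)

def find_matches(db, query_pitches):
    """Return ids of melodies whose UDR string contains the query's contour.
    db: dict mapping melody id -> precomputed UDR string."""
    q = to_udr(query_pitches)
    return [mid for mid, udr in db.items() if q in udr]
```

Because the contour is key-invariant, a hummed query transposed to any key still matches the indexed segment.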

The Weight Decision of Multi-dimensional Features using Fuzzy Similarity Relations and Emotion-Based Music Retrieval (퍼지 유사관계를 이용한 다차원 특징들의 가중치 결정과 감성기반 음악검색)

  • Lim, Jee-Hye;Lee, Joon-Whoan
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.5 / pp.637-644 / 2011
  • Now that music is digitalized, it can be easily purchased and delivered to users. However, it is still difficult to find music that fits someone's taste using traditional music information search based on musician, genre, title, album title, and so on. To reduce this difficulty, content-based and emotion-based music retrieval have been proposed and developed. In this paper, we propose a new method to determine the importance of the MPEG-7 low-level audio descriptors, which are multi-dimensional vectors, for emotion-based music retrieval. For each descriptor, we measured the mutual similarities of pieces of music representing pairs of emotions with opposite meanings. Rough approximation and the inter- to intra-similarity ratio derived from the similarity relation are then used to determine the importance of each descriptor. The resulting set of weights defines an aggregated similarity measure with which emotion-based music retrieval can be performed. In an emotion-based retrieval experiment, the proposed method yielded a higher average number of satisfactory pieces than the previous method.
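The inter/intra-similarity weighting idea can be sketched with plain Euclidean distances. This is a simplified stand-in for the paper's fuzzy-similarity and rough-approximation machinery: a descriptor that separates two opposite-emotion classes well gets a large weight, and the weights combine per-descriptor distances into one aggregated measure.

```python
import numpy as np

def mean_pairwise(X, Y):
    """Mean Euclidean distance over all pairs drawn from X and Y."""
    return float(np.mean([np.linalg.norm(x - y) for x in X for y in Y]))

def descriptor_weight(class_a, class_b):
    """Weight one descriptor by its inter-class to intra-class distance
    ratio over two opposite-emotion music classes (simplified stand-in
    for the paper's similarity-relation-based importance)."""
    a = np.asarray(class_a, dtype=float)
    b = np.asarray(class_b, dtype=float)
    inter = mean_pairwise(a, b)
    intra = 0.5 * (mean_pairwise(a, a) + mean_pairwise(b, b))
    return inter / (intra + 1e-9)

def aggregated_distance(x, y, weights):
    """Weighted sum of per-descriptor distances between two pieces.
    x, y: dicts of descriptor name -> feature vector."""
    return sum(w * np.linalg.norm(np.asarray(x[k], float) - np.asarray(y[k], float))
               for k, w in weights.items())
```

A descriptor whose values overlap across the two emotion classes gets a ratio near 1 and thus contributes little to the aggregated measure, which is the intended computation-reducing effect.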

Multiclass Music Classification Approach Based on Genre and Emotion

  • Jonghwa Kim
    • International Journal of Internet, Broadcasting and Communication / v.16 no.3 / pp.27-32 / 2024
  • Reliable and fine-grained musical metadata are required for efficient search of rapidly growing collections of music files. In particular, since the primary motives for listening to music are its emotional effect, diversion, and the memories it awakens, emotion classification alongside genre classification is crucial. In this paper, as an initial approach towards a "ground-truth" dataset for music emotion and genre classification, we carefully generated a music corpus through labeling by a large number of ordinary people. To verify the suitability of the dataset through classification results, we extracted features according to the MPEG-7 audio standard and applied different machine learning models, both statistical and deep neural networks, to classify the dataset automatically. Using standard hyperparameter settings, we reached an accuracy of 93% for genre classification and 80% for emotion classification, and we believe that our dataset can serve as a meaningful comparative dataset in this research field.