• Title/Summary/Keyword: 음악 검색 시스템 (music retrieval system)


An Experimental Study on the Retrieval Efficiency of the FRBR Based Bibliographic Retrieval System (FRBR 모형 기반 서지검색시스템의 검색 효율성 평가 연구)

  • Kim, Hyun-Hee
    • Journal of Korean Library and Information Science Society / v.38 no.3 / pp.223-246 / 2007
  • This study examines the retrieval efficiency of an FRBR-based bibliographic retrieval system. To do this, we built two experimental retrieval systems (an FRBR-based system constructed through FRBRizing algorithms and an OPAC-based retrieval system) using 387 music materials coded in KORMARC format. Next, we set up six hypotheses and compared the two systems in terms of recall, precision, and retrieval time, using 28 participants and a questionnaire with 12 queries. The results show that the average recall of the FRBR-based system is higher than that of the OPAC system regardless of query type, while the average precision and retrieval time for manifestation queries are better in the OPAC system than in the FRBR-based system. These results can be used to customize digital library interfaces as well as to improve the retrieval efficiency of bibliographic retrieval systems.
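
The evaluation rests on standard recall and precision. As a minimal sketch of how such per-query scores are computed (the record IDs and judged sets below are hypothetical, not the study's data):

```python
def recall(retrieved, relevant):
    """Fraction of the relevant items that were actually retrieved."""
    return len(set(retrieved) & set(relevant)) / len(relevant)

def precision(retrieved, relevant):
    """Fraction of the retrieved items that are relevant."""
    return len(set(retrieved) & set(relevant)) / len(retrieved)

# Hypothetical example: one query evaluated against a judged result set.
relevant_ids  = {"rec01", "rec02", "rec03", "rec04"}
retrieved_ids = ["rec01", "rec03", "rec09"]

print(recall(retrieved_ids, relevant_ids))     # 0.5   (2 of 4 relevant found)
print(precision(retrieved_ids, relevant_ids))  # 0.667 (2 of 3 retrieved are relevant)
```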

A Design and Implementation of Music & Image Retrieval Recommendation System based on Emotion (감성기반 음악.이미지 검색 추천 시스템 설계 및 구현)

  • Kim, Tae-Yeun;Song, Byoung-Ho;Bae, Sang-Hyun
    • Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.1 / pp.73-79 / 2010
  • Affective computing processes human emotion through learning and adaptation, and thereby makes interaction between humans and computers more efficient. Music and images, appealing to hearing and sight, are consumed in a short time yet leave a lasting impression, and understanding and interpreting the human emotion behind them contributes to successful marketing. In this paper, we design a retrieval system that matches music and images to user emotion keywords (irritability, gloom, calmness, joy). The proposed system defines four emotional states and uses music, images, and an emotion ontology to retrieve normalized music and images. Extracting image feature information and measuring similarity yields the results the user wants. At the same time, image emotion recognition information is classified and mapped onto a single space through paired correspondence analysis and factor analysis. In experiments, the proposed system showed an 82.4% matching rate for the four emotional states.

A code-based chromagram similarity for cover song identification (커버곡 검색을 위한 코드 기반 크로마그램 유사도)

  • Seo, Jin Soo
    • The Journal of the Acoustical Society of Korea / v.38 no.3 / pp.314-319 / 2019
  • Computing chromagram similarity is indispensable in constructing a cover song identification system. This paper proposes a code-based chromagram similarity to reduce the computational and storage costs of cover song identification. By learning a song-specific codebook, a chromagram sequence is converted into a code sequence, which reduces the feature storage cost. We build a lookup table over the learned codebooks to compute chromagram similarity efficiently. Experiments on two music datasets compare the proposed code-based similarity with the conventional one in terms of cover song search accuracy, feature storage, and computational cost.
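
The abstract outlines two ideas: vector-quantizing chroma frames against a learned codebook to shrink storage, and precomputing codeword-to-codeword similarities so frame comparisons become table lookups. A minimal sketch of that pipeline follows; the k-means codebook, cosine similarity, and naive truncating alignment are assumptions for illustration, not the paper's exact design:

```python
import numpy as np

def learn_codebook(frames, k=64, iters=20, seed=0):
    """Plain k-means over 12-D chroma frames; returns k centroids."""
    rng = np.random.default_rng(seed)
    centroids = frames[rng.choice(len(frames), k, replace=False)]
    for _ in range(iters):
        # Assign each frame to its nearest centroid, then re-estimate.
        dists = np.linalg.norm(frames[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = frames[labels == j].mean(axis=0)
    return centroids

def encode(frames, centroids):
    """Replace each chroma frame by the index of its nearest codeword."""
    dists = np.linalg.norm(frames[:, None] - centroids[None], axis=2)
    return dists.argmin(axis=1)          # compact code sequence

def code_similarity_table(centroids):
    """Precompute codeword-to-codeword similarity once per codebook."""
    norms = np.linalg.norm(centroids, axis=1, keepdims=True)
    unit = centroids / np.maximum(norms, 1e-12)
    return unit @ unit.T                 # cosine similarity lookup table

def sequence_similarity(codes_a, codes_b, table):
    """Frame-pair similarities become O(1) table lookups."""
    m = min(len(codes_a), len(codes_b))  # naive alignment: truncate
    return table[codes_a[:m], codes_b[:m]].mean()
```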

A Karaoke system based on the vocal characteristics (음성 특성을 고려한 가라오케 시스템)

  • Kim, Yu-Seung;Kim, Rin-Chul
    • Journal of Broadcast Engineering / v.13 no.3 / pp.380-387 / 2008
  • This paper presents a karaoke system employing a vocal region detection algorithm based on vocal characteristics. In the proposed system, an input song is classified into vocal and instrumental regions using the vocal region detection algorithm, and a vocal removal method is then applied only to the vocal regions. To detect vocal regions, a classification algorithm is designed based on the vocal characteristics in the TICFT (twice iterated composite Fourier transform) domain. For vocal removal, vocal components are extracted from the band-pass filtered vocal regions and subtracted from the original song, yielding a vocal-removed song. The performance of the proposed method is measured on four different songs.
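
The TICFT-domain vocal detector is the paper's contribution and is not reproduced here. Purely to illustrate the removal step (band-limited vocal content subtracted from the mix), here is the classic stereo center-cancellation baseline, which is a common simplification and not the authors' method; the band edges are assumptions:

```python
from scipy.signal import butter, sosfilt

def remove_center_vocals(left, right, sr, lo=200.0, hi=4000.0):
    """Classic stereo baseline: vocals mixed to the center cancel in the
    side (left minus right) signal. Within an assumed vocal band, the
    mono mix is swapped for the vocal-suppressed side signal."""
    mono = (left + right) / 2.0          # original mix, folded to mono
    side = (left - right) / 2.0          # center-panned vocals cancel here
    sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
    band_mono = sosfilt(sos, mono)       # vocal band of the original mix
    band_side = sosfilt(sos, side)       # same band with vocals suppressed
    return mono - band_mono + band_side  # everything else stays untouched
```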

A User Study on Information Searching Behaviors for Designing User-centered Query Interface of Content-Based Music Information Retrieval System (내용기반 음악정보 검색시스템을 위한 이용자 중심의 질의 인터페이스 설계에 관한 연구)

  • Lee, Yoon-Joo;Moon, Sung-Been
    • Journal of the Korean Society for Information Management / v.23 no.2 / pp.5-19 / 2006
  • The purpose of this study is to observe and analyze the information searching behaviors of various user groups in different access modes, in order to design a user-centered query interface for a content-based Music Information Retrieval System (MIRS). Two expert groups and two non-expert groups were recruited for this research. The data gathering techniques employed were in-depth interviewing, participant observation, searching task experiments, think-aloud protocols, and post-search surveys. Expert users, especially those majoring in music theory, preferred to input exact notes one by one using devices such as a keyboard or musical score. On the other hand, non-expert users preferred to input melodic contours by humming.

Extracting Melodies from Piano Solo Music Based on its Characteristics (음악의 특성에 따른 피아노 솔로 음악으로 부터의 멜로디 추출)

  • Choi, Yoon-Jae;Park, Jong-C.
    • Journal of KIISE: Computing Practices and Letters / v.15 no.12 / pp.923-927 / 2009
  • The recent growth of the digital music market has increased demand for music search and recommendation services. To improve the performance of music-based application services, extracting melodies from polyphonic music is essential. In this paper, we propose a method to extract melodies from piano solo music, which is highly polyphonic and has a wide pitch range. We categorize piano music into three classes according to the characteristics of the music and extract melodies differently for each class. A performance evaluation of the implemented system showed that our method works successfully on a variety of piano solo music.
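
The abstract does not give the per-class extraction rules. As a point of reference, the common "skyline" baseline for melody extraction, keeping only the highest-pitched note at each onset, can be sketched as follows; the note representation is an assumption:

```python
from dataclasses import dataclass

@dataclass
class Note:
    onset: float   # seconds
    pitch: int     # MIDI note number

def skyline_melody(notes):
    """Classic skyline baseline: at each onset time, keep only the
    highest-pitched note; simultaneous lower notes are treated as
    accompaniment."""
    by_onset = {}
    for n in notes:
        if n.onset not in by_onset or n.pitch > by_onset[n.onset].pitch:
            by_onset[n.onset] = n
    return [by_onset[t] for t in sorted(by_onset)]

# Hypothetical chord plus melody line: the C5 (72) survives, the triad below it does not.
chord = [Note(0.0, 60), Note(0.0, 64), Note(0.0, 67), Note(0.0, 72), Note(0.5, 74)]
print([n.pitch for n in skyline_melody(chord)])  # [72, 74]
```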

Design and Implementation of ebXML Registry & Repository for B2B e-Commerce of Music Records (음반 B2B를 위한 ebXML 등록기 및 저장소의 설계 및 구현)

  • Kim Joo-Sung;Kim Yoo-Sung
    • Proceedings of the Korea Information Processing Society Conference / 2004.11a / pp.561-564 / 2004
  • Because companies build and operate their own business practices and trading systems for the searching, ordering (contracting), payment, and delivery of music products, business-to-business (B2B) e-commerce in music products faces many difficulties. ebXML, a standard e-business framework, applies XML to the exchange of business information and thereby enables interoperability between enterprise systems, but its adoption in the music record industry is still limited. In this paper, we design and implement an ebXML registry and repository for music B2B. The registry provides a service through which trading companies share information about music products and their transactions, and the repository stores real-world inter-company music trading information together with association information among the entities used in music trading.

Extracting Melodies from Polyphonic Piano Solo Music Based on Patterns of Music Structure (음악 구조의 패턴에 기반을 둔 다음(Polyphonic) 피아노 솔로 음악으로부터의 멜로디 추출)

  • Choi, Yoon-Jae;Lee, Ho-Dong;Lee, Ho-Joon;Park, Jong C.
    • Proceedings of the HCI Society of Korea Conference / 2009.02a / pp.725-732 / 2009
  • Thanks to the development of the Internet, people can easily access a vast amount of music. This has drawn attention to application systems such as melody-based music search and music recommendation services. Extracting melodies from music is a crucial step in providing such services. This paper introduces a novel algorithm that can extract melodies from piano music. Since the piano produces polyphonic music, we expect that studying melody extraction from piano music will help in extracting melodies from polyphonic music in general.

Development of melody similarity based on chroma representation, dynamic time warping, and hinge distance (크로마 레벨 표현, 동적 시간 왜곡, 꺾인 거리함수에 기반한 멜로디 사이의 유사도 개발)

  • Jang, Dalwon;Park, Sung-Ju;Jang, Sei-Jin;Lee, Seok-Pil
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2011.07a / pp.258-260 / 2011
  • This paper proposes a melody similarity measure usable in query-by-singing/humming (QbSH) systems and cover song identification systems. As the use of digital music has become widespread, QbSH and cover song identification have been studied extensively as methods of music retrieval. Melody similarity is an essential component of such systems: assuming that melodies have been extracted from two pieces of music, it expresses numerically how similar the extracted melodies are. Based on the melody similarity, a QbSH or cover song identification system searches a database for songs similar to an input song. The proposed melody similarity uses dynamic time warping (DTW) and a chroma representation, both widely studied in previous work. DTW is applied asymmetrically, and melody features expressed in the MIDI note domain are represented as chroma levels ranging from 0 (inclusive) to 12 (exclusive). Whereas earlier methods mostly used integer values, this paper uses real values. Performance is improved by replacing the absolute difference conventionally used as the DTW distance function with a hinged (bent) function. The performance was verified through experiments on a QbSH system: for 1000 queries of 10-12 seconds each against a database of about 28 hours, the mean reciprocal rank (MRR) was 0.713.
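
A minimal sketch of the ingredients the abstract names: real-valued chroma levels, a bent ("hinge") frame distance with circular wrap at 12, and DTW. The hinge knee and slope here are assumptions, and the asymmetric step pattern described in the paper is simplified to the standard symmetric one:

```python
import numpy as np

def chroma_level(midi_note):
    """Map a (possibly real-valued) MIDI note into the chroma range [0, 12)."""
    return midi_note % 12.0

def hinge_distance(a, b, knee=1.0, slope=0.25):
    """Bent distance: grows linearly up to `knee`, then more slowly, so
    large pitch deviations are penalized less harshly than with a plain
    absolute difference. Chroma distance wraps around at 12."""
    d = abs(a - b)
    d = min(d, 12.0 - d)                  # circular chroma distance
    return d if d <= knee else knee + slope * (d - knee)

def dtw(query, ref, dist=hinge_distance):
    """Standard DTW over two chroma-level sequences."""
    n, m = len(query), len(ref)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(query[i - 1], ref[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)              # length-normalized cost

query = [chroma_level(x) for x in (60.0, 62.3, 64.1, 65.0)]
ref   = [chroma_level(x) for x in (60.0, 62.0, 64.0, 65.2, 67.0)]
print(dtw(query, ref))                    # smaller means more similar
```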

Designing emotional model and Ontology based on Korean to support extended search of digital music content (디지털 음악 콘텐츠의 확장된 검색을 지원하는 한국어 기반 감성 모델과 온톨로지 설계)

  • Kim, SunKyung;Shin, PanSeop;Lim, HaeChull
    • Journal of the Korea Society of Computer and Information / v.18 no.5 / pp.43-52 / 2013
  • In recent years, a large amount of music content has been distributed over the Internet, and various studies have been carried out to effectively retrieve the music content that users want. In particular, music recommendation systems that combine emotion models with MIR (Music Information Retrieval) research are being actively developed. However, these studies have several drawbacks. First, the structure of the emotion models used is simple. Second, because the emotion models were not designed for Korean, there are limits to processing the semantics of emotional words expressed in Korean. In this paper, by extending an existing emotion model, we propose KOREM (KORean Emotional Model), a new emotion model based on Korean. We also design and implement an ontology using the proposed emotion model. Through these, the sorting, storage, and retrieval of music content described with various emotional expressions become possible.
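
As an illustration of the kind of "extended search" an emotion ontology enables, the sketch below normalizes surface emotion words (Korean or English) to a canonical class before matching, so synonymous queries retrieve the same tracks. The class names, word lists, and catalog tags are hypothetical, not KOREM's actual vocabulary:

```python
# Hypothetical mapping from surface emotion words to canonical classes.
EMOTION_CLASSES = {
    "기쁨": "joy", "신남": "joy", "happy": "joy",
    "슬픔": "sadness", "우울": "sadness", "gloomy": "sadness",
    "평온": "calmness", "잔잔함": "calmness", "calm": "calmness",
}

# Hypothetical catalog: each track is tagged with canonical emotion classes.
CATALOG = {
    "track_a": {"joy"},
    "track_b": {"sadness", "calmness"},
    "track_c": {"calmness"},
}

def extended_search(query_word):
    """Normalize the query word via the ontology, then match on the
    canonical class, so '우울' and 'gloomy' retrieve the same tracks."""
    cls = EMOTION_CLASSES.get(query_word)
    if cls is None:
        return []
    return sorted(t for t, tags in CATALOG.items() if cls in tags)

print(extended_search("우울"))    # ['track_b']
print(extended_search("gloomy"))  # ['track_b'] (same canonical class)
```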