An Ontological and Rule-based Reasoning for Music Recommendation using Musical Moods


  • Song, Se-Heon (Graduate School of Information and Communication, Ajou University) ;
  • Rho, Seung-Min (School of Electrical Engineering, Korea University) ;
  • Hwang, Een-Jun (School of Electrical Engineering, Korea University) ;
  • Kim, Min-Koo (Division of Information and Computer Engineering, Ajou University)
  • Received : 2009.12.17
  • Accepted : 2010.02.28
  • Published : 2010.02.28

Abstract

In this paper, we propose the Context-based Music Recommendation (COMUS) ontology for modeling a user's musical preferences and context, and for supporting reasoning about the user's desired emotion and preferences. COMUS provides an upper Music Ontology that captures general properties of music such as title, artist, and genre, and it also provides extensibility for adding domain-specific ontologies, such as Mood and Situation, in a hierarchical manner. COMUS is a music-dedicated ontology in OWL, constructed by incorporating domain-specific classes for music recommendation into the Music Ontology. Using this context ontology, logical reasoning can check the consistency of context information and derive high-level, implicit context from low-level, explicit information through rule-based inference. As a novelty, our ontology can express detailed and complicated relations among music, moods, and situations, enabling users to find appropriate music for a given application. We present some of the experiments we performed as a case study for music recommendation.

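The abstract describes deriving implicit, high-level context (such as a desired mood) from explicit, low-level facts (such as the user's situation and genre preference) via rule-based inference. The actual COMUS ontology is defined in OWL and evaluated with engines such as RacerPro and Jess; as a rough, self-contained illustration of that style of reasoning only, the sketch below implements a toy forward-chaining reasoner in Python. All facts, rules, and vocabulary (`rainy_evening`, `calm_jazz`, etc.) are invented for illustration and are not part of COMUS.

```python
# Illustrative sketch only: a toy forward-chaining reasoner in the spirit of
# the rule-based inference described in the abstract. The facts, rules, and
# mood/situation vocabulary are invented and are NOT the actual COMUS ontology.

def infer(facts, rules):
    """Forward-chain: repeatedly apply rules until no new fact is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule when all its premises hold and its conclusion is new.
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Explicit, low-level context supplied by the user or environment (hypothetical).
facts = {("situation", "rainy_evening"), ("prefers_genre", "jazz")}

# Hand-written rules mapping low-level context to a high-level desired mood,
# and then to a recommendation class (all hypothetical).
rules = [
    ({("situation", "rainy_evening")}, ("desired_mood", "calm")),
    ({("desired_mood", "calm"), ("prefers_genre", "jazz")},
     ("recommend", "calm_jazz")),
]

derived = infer(facts, rules)
print(("recommend", "calm_jazz") in derived)  # prints True
```

In COMUS itself the analogous step is performed over OWL individuals and properties by a description-logic or production-rule engine, but the chaining pattern is the same: the recommendation fact is never stated explicitly; it only becomes derivable once the desired-mood fact has been inferred.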

References

  1. Birmingham, W., Dannenberg, R., Pardo, B., "An Introduction to Query by Humming with the Vocal Search System," Communications of the ACM, Vol. 49 (8), pp. 49-52, 2006. https://doi.org/10.1145/1145287.1145313
  2. Rho, S., Han, B., Hwang, E., and Kim, M., "MUSEMBLE: A Novel Music Retrieval System with Automatic Voice Query Transcription and Reformulation," Journal of Systems and Software (Elsevier), Vol. 81(7), pp. 1065-1080, July. 2008. https://doi.org/10.1016/j.jss.2007.05.038
  3. Celma, O., "Foafing the Music: Bridging the Semantic Gap in Music Recommendation," Proceedings of the 5th International Semantic Web Conference, 2006.
  4. Raimond, Y., and Giasson, F., "Music Ontology Specification," Available at: http://www.musicontology.com/
  5. Raimond, Y., Abdallah, S., Sandler, M., and Giasson, F., "The Music Ontology," Proceedings of the International Conference on Music Information Retrieval (ISMIR), pp. 417-422, 2007.
  6. Kanzaki Music Vocabulary, Available at: http://www.kanzaki.com/ns/music
  7. MusicBrainz, Available at: http://musicbrainz.org
  8. Park, C., et al., "Cognitive Engineering Psychology: Understanding Human-System Interaction," Sigma Press, 2007. (in Korean)
  9. Russell, J. A., "A Circumplex Model of Affect," Journal of Personality and Social Psychology, Vol. 39, 1980.
  10. Thayer, R. E., "The Biopsychology of Mood and Arousal," New York: Oxford University Press, 1989.
  11. Lu, L., Liu, D., and Zhang, H.-J., "Automatic Mood Detection and Tracking of Music Audio Signals," IEEE Transactions on Audio, Speech & Language Processing, Vol. 14(1), pp. 5-18, 2006.
  12. Sanghoon Jun, Seungmin Rho, Byeong-jun Han, and Eenjun Hwang, "A Fuzzy Inference-based Music Emotion Recognition System," International Conference on Visual Information Engineering, pp. 673-677, July 29 - Aug. 1, 2008.
  13. Juslin, P.N., Sloboda, J.A., "Music and Emotion: Theory and research", New York: Oxford University Press, 2001.
  14. Last.fm, Available at: http://www.last.fm
  15. GarageBand, Available at: http://www.garageband.com/
  16. MyStrands, Available at: http://www.mystrands.com
  17. The Friend of a Friend (FOAF) project, Available at: http://www.foaf-project.org/
  18. OWL Web Ontology Language, Available at: http://www.w3.org/TR/owl-ref/
  19. Protege Editor, Available at: http://protege.stanford.edu
  20. Ruebenstrunk, G., "Emotional Computers," Available at: http://ruebenstrunk.de/emeocomp/content.htm
  21. P. Cano, et al., "Content-based music audio recommendation," Proc. ACM Multimedia, pp. 212-212, 2005.
  22. S. Pauws and B. Eggen, "PATS: Realization and user evaluation of an automatic playlist generator," Proceedings of ISMIR, 2002.
  23. Klaus R.S., and Marcel R.Z., "Emotional Effects of Music: Production Rules", Music and emotion: theory and research. Oxford; New York: Oxford University Press, 2001.
  25. Novais, P., et al., "Emotions on Agent based Simulators for Group Formation," Proceedings of the European Simulation and Modelling Conference, pp. 5-18, 2006.
  26. RacerPro, Available at: http://www.racer-systems.com/
  27. Jess, Available at: http://www.jessrules.com/