Speech and Music Discrimination Using Spectral Transition Rate


  • Published: 2009.04.30

Abstract

In this paper, we propose the spectral transition rate (STR) as a novel feature for speech and music discrimination (SMD). We observed that the spectral peaks of a speech signal change gradually due to the coarticulation effect. In contrast, the sounds of musical instruments generally keep their peak frequencies and energies unchanged for relatively long periods compared to speech, so the STR of speech is much higher than that of music. The experimental results show that the STR-based SMD method outperforms a conventional method. In particular, the STR-based SMD produces its output relatively quickly without any performance degradation.
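The abstract's core observation can be illustrated with a small sketch. The paper's exact STR definition is not given here, so the function below is only a rough proxy under an assumption: STR is approximated as the average frame-to-frame shift of the dominant spectral peak. A steady tone (instrument-like) should score near zero, while a frequency sweep (mimicking gradual formant movement in speech) should score higher.

```python
import numpy as np

def spectral_transition_rate(signal, frame_len=512, hop=256):
    """Rough proxy for STR (assumption, not the paper's definition):
    mean absolute frame-to-frame shift of the dominant FFT peak bin."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hanning(frame_len)
    peaks = []
    for i in range(n_frames):
        frame = signal[i * hop : i * hop + frame_len] * window
        mag = np.abs(np.fft.rfft(frame))      # magnitude spectrum
        peaks.append(int(np.argmax(mag)))     # dominant peak bin
    return float(np.mean(np.abs(np.diff(peaks))))

sr = 16000
t = np.arange(sr) / sr
# Steady 440 Hz tone: peak stays in one bin, so the proxy STR is low.
tone = np.sin(2 * np.pi * 440 * t)
# Linear chirp (200 -> 3200 Hz): peak drifts across bins, so STR is higher.
chirp = np.sin(2 * np.pi * (200 + 1500 * t) * t)
print(spectral_transition_rate(tone) < spectral_transition_rate(chirp))
```

Thresholding such a score over a short analysis window is one plausible way to obtain the fast SMD decision the abstract describes, though the paper's actual decision rule may differ.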

Frequency analysis of speech and music shows that most musical instruments are designed to sustain sounds at specific frequencies, whereas speech exhibits gradual frequency changes caused by coarticulation. In this paper, we propose a method that discriminates speech from music using these spectral transition characteristics; that is, we use the rate of spectral change as the feature that distinguishes speech from music. Experiments on SMD (speech music discrimination) based on the proposed STR (spectral transition rate) show relatively higher performance at a faster response time than a conventional algorithm.

References

  1. R. Jarina, N. O'Connor, and S. Marlow, "Rhythm detection for speech-music discrimination in MPEG compressed domain," IEEE Conference on Digital Signal Processing, vol. 1, pp. 129-132, Jul. 2002 https://doi.org/10.1109/ICDSP.2002.1027851
  2. O. M. Mubarak, E. Ambikairajah, and J. Epps, "Novel features for effective speech and music discrimination," IEEE International Conference on Engineering of Intelligent Systems, pp. 1-5, Sep. 2006 https://doi.org/10.1109/ICEIS.2006.1703190
  3. M. J. Carey, E. S. Parris, and H. Lloyd-Thomas, "A comparison of features for speech, music discrimination," IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 1, pp. 149-152, Mar. 1999 https://doi.org/10.1109/ICASSP.1999.758084
  4. Ji-Soo Keum and Hyon-Soo Lee, "Speech/music discrimination based on spectral peak analysis and multi-layer perceptron," International Conference on Hybrid Information Technology (ICHIT'06), vol. 2, pp. 56-61, 2006 https://doi.org/10.1109/ICHIT.2006.253589
  5. Nima Mesgarani, Malcolm Slaney, and Shihab A. Shamma, "Discrimination of speech from nonspeech based on multiscale spectro-temporal modulations," IEEE Transactions on Audio, Speech, and Language Processing, vol. 14, no. 3, pp. 920-930, May 2006 https://doi.org/10.1109/TSA.2005.858055
  6. M. Y. Choi, H. J. Song, and H. S. Kim, "Speech/music discrimination for robust speech recognition in robots," IEEE International Symposium on Robot and Human Interactive Communication, pp. 118-121, Aug. 2007 https://doi.org/10.1109/ROMAN.2007.4415064
  7. E. Scheirer and M. Slaney, “Construction and evaluation of a robust multifeature speech/music discriminator,” IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 2, pp. 1331-1334, Apr. 1997 https://doi.org/10.1109/ICASSP.1997.596192
  8. L. Rabiner and B.-H. Juang, Fundamentals of Speech Recognition, Prentice Hall, New Jersey, pp. 20-28, 1993
  9. 양경철, 육동석, "Speech and music discrimination using spectral transition rate," Proceedings of the 2008 Fall Conference of the Acoustical Society of Korea, pp. 37-41, 2008