• Title/Summary/Keyword: 보컬 추출

8 search results

A Karaoke system based on the vocal characteristics (음성 특성을 고려한 가라오케 시스템)

  • Kim, Yu-Seung;Kim, Rin-Chul
    • Journal of Broadcast Engineering, v.13 no.3, pp.380-387, 2008
  • This paper presents a karaoke system employing a vocal region detection algorithm based on vocal characteristics. In the proposed system, an input song is classified into vocal and instrumental regions by the vocal region detection algorithm, and a vocal removal method is then applied only to the vocal regions. To detect vocal regions, a classification algorithm is designed around the vocal characteristics in the TICFT (twice iterated composite Fourier transform) domain. For vocal removal, vocal components are extracted from the band-pass-filtered vocal regions and subtracted from the original song, yielding a vocal-removed song. The performance of the proposed method is measured on four different songs.
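The removal step described above (band-pass filter the detected vocal region, then subtract the extracted component from the original song) can be sketched as follows. This is a minimal illustration: the TICFT-based region detection is not reproduced, and the band edges `lo`/`hi` are illustrative assumptions, not the paper's values.

```python
import numpy as np

def remove_vocal_band(song, sr, lo=200.0, hi=4000.0):
    """Crude vocal-removal sketch: band-pass the signal to estimate the
    vocal component, then subtract that estimate from the original song."""
    spec = np.fft.rfft(song)
    freqs = np.fft.rfftfreq(len(song), d=1.0 / sr)
    vocal_mask = (freqs >= lo) & (freqs <= hi)   # crude "vocal band"
    vocal_est = np.fft.irfft(spec * vocal_mask, n=len(song))
    return song - vocal_est, vocal_est

# toy usage: a 1 kHz "vocal" tone mixed with a 100 Hz "bass" tone
sr = 8000
t = np.arange(sr) / sr
song = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 1000 * t)
backing, vocal = remove_vocal_band(song, sr)
# the 1 kHz tone lands in the vocal band and is removed; the bass survives
```

In practice such spectral subtraction also removes accompaniment energy inside the band, which is exactly why the paper restricts it to detected vocal regions.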

A system for recommending audio devices based on frequency band analysis of vocal component in sound source (음원 내 보컬 주파수 대역 분석에 기반한 음향기기 추천시스템)

  • Jeong-Hyun, Kim;Cheol-Min, Seok;Min-Ju, Kim;Su-Yeon, Kim
    • Journal of Korea Society of Industrial Information Systems, v.27 no.6, pp.1-12, 2022
  • As the music streaming service and the Hi-Fi market grow, various audio devices are being released. Consumers therefore have a wider range of product choices, but it has become more difficult to find products that match their musical tastes. In this study, we propose a system that extracts the vocal component from the user's preferred sound source and, based on this information, recommends the most suitable audio device. First, the original sound source is separated using Python's Spleeter library to extract the vocal track, and the frequency-band data collected for manufacturers' audio devices are displayed as a grid graph. The Matching Gap Index (MGI) is proposed as an indicator for comparing the frequency band of the extracted vocal track with the measured frequency response of each audio device. Based on the calculated MGI value, the audio device most similar to the user's preference is recommended. The recommendation results were verified using per-genre equalizer data provided by professional audio companies.
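The abstract does not give the MGI formula, so the comparison step can only be sketched under an assumption: here MGI is taken to be the mean absolute gap between level-normalized frequency curves (smaller = better match). The function name and the toy device responses are hypothetical, not the paper's definitions.

```python
import numpy as np

def matching_gap_index(vocal_spectrum, device_response):
    """Hypothetical MGI sketch: mean absolute gap between the two curves
    after removing their overall level, so only the *shape* is compared."""
    v = vocal_spectrum - np.mean(vocal_spectrum)
    d = device_response - np.mean(device_response)
    return float(np.mean(np.abs(v - d)))

# toy usage: recommend the device whose response best tracks the vocal band
freqs = np.logspace(np.log10(20), np.log10(20000), 64)
vocal = -0.2 * (np.log10(freqs) - 3.0) ** 2   # curve peaking near 1 kHz
device_a = vocal + 1.0                        # same shape, level offset only
device_b = np.zeros_like(freqs)               # flat response
mgi = {name: matching_gap_index(vocal, resp)
       for name, resp in [('A', device_a), ('B', device_b)]}
best = min(mgi, key=mgi.get)                  # device 'A' matches the shape
```

Because the curves are mean-normalized, a device that differs only in overall loudness still scores a perfect (zero) gap, which matches the intuition that tonal balance, not volume, drives the recommendation.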

A Study on Vocal Removal Scheme of SAOC Using Harmonic Information (하모닉 정보를 이용한 SAOC의 보컬 신호 제거 방법에 관한 연구)

  • Park, Ji-Hoon;Jang, Dae-Geun;Hahn, Min-Soo
    • Journal of Korea Multimedia Society, v.16 no.10, pp.1171-1179, 2013
  • Interactive audio services provide audio generation and editing functionality according to user preference. Spatial audio object coding (SAOC) is an audio coding technology that can support such interactive audio services at a relatively low bit-rate. However, when the SAOC scheme removes one specific object, such as the vocal object in karaoke mode, quality suffers because residues of the removed vocal object remain in the SAOC-decoded background music. Thus, we propose a new SAOC vocal harmonic extraction and elimination technique to improve background music quality in the karaoke service. Namely, utilizing the harmonic information of the vocal object, we remove the harmonics of the vocal object remaining in the background music. As harmonic parameters, we utilize the pitch, the MVF (maximum voiced frequency), and the harmonic amplitudes. To evaluate the proposed scheme, we performed objective and subjective evaluations, and our experimental results confirm that the proposed scheme improves background music quality compared with the baseline SAOC scheme.
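A minimal sketch of the harmonic-elimination idea, assuming the vocal pitch `f0` and the MVF are already estimated: null the spectral bins at multiples of `f0` up to the MVF. The notch width is an illustrative assumption, and the paper's harmonic amplitude estimation is not reproduced.

```python
import numpy as np

def remove_harmonics(frame, sr, f0, mvf):
    """Zero out narrow bands around each harmonic k*f0 up to the MVF,
    leaving spectral content away from the vocal harmonics untouched."""
    spec = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    k = 1
    while k * f0 <= mvf:
        spec[np.abs(freqs - k * f0) < f0 / 4] = 0.0  # notch per harmonic
        k += 1
    return np.fft.irfft(spec, n=len(frame))

# toy usage: a "vocal" with harmonics of 200 Hz mixed with a 90 Hz bass
sr = 8000
t = np.arange(sr) / sr
vocal = sum(np.sin(2 * np.pi * 200 * k * t) for k in (1, 2, 3))
bass = np.sin(2 * np.pi * 90 * t)
cleaned = remove_harmonics(vocal + bass, sr, f0=200.0, mvf=700.0)
# harmonics at 200/400/600 Hz are notched out; the 90 Hz bass survives
```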

Music and Voice Separation Using Log-Spectral Amplitude Estimator Based on Kernel Spectrogram Models Backfitting (커널 스펙트럼 모델 backfitting 기반의 로그 스펙트럼 진폭 추정을 적용한 배경음과 보컬음 분리)

  • Lee, Jun-Yong;Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea, v.34 no.3, pp.227-233, 2015
  • In this paper, we propose music and voice separation using kernel spectrogram model backfitting based on a log-spectral amplitude estimator. The existing method separates sources by training an MSE (mean square error)-designed Wiener filter on estimates of the desired sources. Instead of the MSE criterion of the existing method, we apply a log-spectral amplitude estimator, which yields cleaner music and voice signals. Experimental results reveal that the proposed method outperforms the existing methods.
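The Wiener-filter baseline that the paper improves on amounts to a time-frequency gain applied to the mixture spectrogram; a minimal sketch, assuming the source power spectrograms are already estimated (the log-spectral amplitude estimator and the backfitting procedure themselves are not reproduced):

```python
import numpy as np

def wiener_mask(voice_power, music_power, eps=1e-12):
    """Wiener gain per time-frequency bin: voice_p / (voice_p + music_p).
    Multiplying the mixture spectrogram by this mask keeps bins where the
    voice dominates and attenuates bins where the accompaniment dominates."""
    return voice_power / (voice_power + music_power + eps)

# toy usage: 2x2 power spectrograms (rows = frequency bins, cols = frames)
voice_p = np.array([[4.0, 0.0],
                    [1.0, 1.0]])
music_p = np.array([[0.0, 4.0],
                    [1.0, 1.0]])
mask = wiener_mask(voice_p, music_p)
# bin (0,0) is pure voice -> gain ~1; (0,1) pure music -> gain ~0;
# the bottom row is an even mix -> gain 0.5
```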

Investigation of Timbre-related Music Feature Learning using Separated Vocal Signals (분리된 보컬을 활용한 음색기반 음악 특성 탐색 연구)

  • Lee, Seungjin
    • Journal of Broadcast Engineering, v.24 no.6, pp.1024-1034, 2019
  • Preference for music is determined by a variety of factors, and identifying features that reflect specific factors is important for music recommendation. In this paper, we propose a method to extract singing-voice-related music features reflecting various musical characteristics by using a model trained for singer identification. The model can be trained on music sources containing background accompaniment, but this may degrade singer identification performance. To mitigate this problem, this study first separates the background accompaniment and creates a data set of separated vocals, using a model structure whose performance was demonstrated in SiSEC (the Signal Separation and Evaluation Campaign). Finally, we use the separated vocals to discover singing-voice-related music features that reflect the singer's voice, and compare the effect of source separation against existing methods that use the music source without separation.

A study on blind audio source separation based on a multi-step NMF-EM algorithm (다중 단계 NMF-EM 알고리즘 기반의 오디오 소스 분리 방법에 대한 연구)

  • Cho, Choongsang;Kim, Jewoo
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2014.06a, pp.9-11, 2014
  • This paper describes nonnegative matrix factorization (NMF), which is useful for representing the characteristics of audio signals, NMF parameter estimation using expectation maximization (EM), and EM-NMF-based audio source separation. In addition, we propose an algorithm that improves object separation performance through multi-step NMF-EM separation, and evaluate its separation performance on K-pop sound sources using the SDR (source distortion ratio). The evaluation shows that the proposed multi-step algorithm improves vocal separation performance by about 3 dB, and by about 5 dB on sources with many virtual audio effects, as used in commercial music production. The proposed method is therefore expected to be a useful approach to audio object separation.
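The NMF factorization underlying this line of work can be sketched with generic multiplicative updates (Euclidean cost) on a nonnegative matrix such as a magnitude spectrogram. This is plain single-step NMF, not the paper's multi-step NMF-EM, and all names are illustrative.

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Factorize V ~ W @ H with W, H >= 0 via Lee-Seung multiplicative
    updates. For audio, V is a magnitude spectrogram: columns of W act
    as spectral templates, rows of H as their activations over time."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, rank)) + eps
    H = rng.random((rank, T)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update templates
    return W, H

# toy check: an exactly rank-2 nonnegative matrix is well approximated
rng = np.random.default_rng(1)
V = rng.random((16, 2)) @ rng.random((2, 24))
W, H = nmf(V, rank=2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Separation then assigns groups of templates to sources and reconstructs each source from its part of `W @ H`; the paper's contribution is doing this in multiple NMF-EM stages.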


Deep Learning based Music Classification System (딥러닝 기반의 음원검색 및 분류 시스템)

  • Lee, Sei-Hoon;Jeong, Ui-Jung
    • Proceedings of the Korean Society of Computer Information Conference, 2018.07a, pp.119-120, 2018
  • This paper proposes a music classification system that listens to a piece of music, recognizes it, and identifies what it is, and applies deep learning to implement it. The proposed system uses a deep neural network to learn the features detected from audio files by several feature-extraction models, so that it can recognize the distinctive characteristics of a track, such as its vocals or accompaniment. With this approach, which differs from the existing fingerprint-style database search systems, the system is implemented to work closer to the way people remember music, improving its adaptability and flexibility and making it usable in a variety of application areas.


A Unified Method for Vocal Source Separation From Stereophonic Music Signals (스테레오 음악 신호에서의 보컬 음원 분리를 위한 통합 알고리즘)

  • Kim, Min-Je;Jang, In-Seon;Kang, Kyeong-Ok
    • Journal of the Institute of Electronics Engineers of Korea SP, v.47 no.5, pp.89-99, 2010
  • A unified method for separating musical sources, e.g. the singing voice, from stereophonic mixtures is provided. Stereophonic music content usually gives us two observed signals in which more than two instruments are played together. If we regard each instrument as a source, this becomes an underdetermined source separation problem, which cannot be solved by conventional methods that infer the spatial parameters of the downmixing process. Instead, source-specific information has been exploited to recover a particular instrumental source. This paper provides a unifying structure consisting of heterogeneous ad-hoc separation algorithms, designed to separate vocal sources using stereophonic channel information and the dominant pitch information of the sources, respectively. Experiments on real-world music content show that the proposed unification neutralizes the drawbacks of the two ad-hoc separation algorithms and enhances the separation results.
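The stereophonic channel-information cue used above can be illustrated with the classic center-cancellation trick: vocals are usually panned center (identical in both channels), so they cancel in L−R while side-panned instruments survive. This sketch shows only that cue; the pitch-based algorithm and the paper's unification are not reproduced.

```python
import numpy as np

def center_cancel(left, right):
    """Cancel center-panned content (typically the vocal) by taking the
    half-difference of the stereo channels; side-panned sources remain."""
    return (left - right) / 2.0

# toy usage: a center-panned vocal plus a hard-left guitar
sr = 8000
t = np.arange(sr) / sr
vocal = np.sin(2 * np.pi * 440 * t)    # identical in both channels
guitar = np.sin(2 * np.pi * 220 * t)   # left channel only
left = vocal + guitar
right = vocal.copy()
residual = center_cancel(left, right)  # vocal cancels; guitar/2 remains
```

The well-known weakness, which motivates combining this cue with pitch information, is that anything else panned center (bass, kick drum) cancels along with the vocal, and reverb on the vocal does not cancel at all.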