• Title/Abstract/Keyword: Cochannel speech

Search results: 2 (processing time: 0.015 s)

A Study on Pitch Detection Using a Cochlear Model on Cochannel Speech (청각 모델을 이용한 Cochannel 음성에서의 피치 추출에 관한 연구)

  • 신대규;신중인;이재혁;한두진;박상희
    • The Transactions of the Korean Institute of Electrical Engineers: Systems and Control, Section D / Vol. 49, No. 6 / pp. 330-333 / 2000
  • In this paper, a new pitch estimation method based on the Robinson cochlear model is proposed. The method is useful in noisy environments and is especially efficient in cochannel conditions, where the voices of two speakers are present at the same time. For single-speaker speech, the pitch can be extracted directly from the neurogram of the Robinson cochlear model; because the estimation is performed in the time domain, the exact pitch period can be detected even when the pitch period varies. In noisy and cochannel cases, however, the neurogram contains many spurious peaks, so autocorrelators are applied to the neurogram to make the period manifest. If autocorrelators are used for all delays, a large amount of computation is required; to avoid this cost, we propose applying the autocorrelators only to the delays on which the energy is concentrated. The proposed algorithm is first applied to single-speaker speech and then to cochannel speech, and the results are compared with the autocorrelation pitch detection method.

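The delay-restricted autocorrelation idea in the abstract above can be sketched as follows. This is a minimal stand-in, not the paper's method: the Robinson cochlear-model neurogram is replaced by the raw waveform, and the paper's energy-concentrated delays are approximated by a fixed lag range covering plausible pitch periods (both are assumptions for illustration).

```python
import numpy as np

def autocorr_pitch(frame, fs, f_min=60.0, f_max=400.0):
    """Estimate pitch by autocorrelation restricted to a candidate lag range.

    Restricting the lag search stands in for the paper's idea of applying
    autocorrelators only where energy is concentrated, which avoids
    computing (or searching) the correlation at every possible delay.
    """
    frame = frame - np.mean(frame)
    n = len(frame)
    # Linear autocorrelation via zero-padded FFT.
    spec = np.fft.rfft(frame, 2 * n)
    ac = np.fft.irfft(spec * np.conj(spec))[:n]
    lag_lo = int(fs / f_max)                 # shortest plausible pitch period
    lag_hi = min(int(fs / f_min), n - 1)     # longest plausible pitch period
    lag = lag_lo + np.argmax(ac[lag_lo:lag_hi])
    return fs / lag

# Synthetic voiced frame: 200 Hz fundamental plus a weaker second harmonic.
fs = 8000
t = np.arange(fs // 10)
frame = np.sin(2 * np.pi * 200 * t / fs) + 0.3 * np.sin(2 * np.pi * 400 * t / fs)
print(round(autocorr_pitch(frame, fs)))  # prints 200
```

Searching only lags in the 60–400 Hz pitch range keeps the harmonic at 400 Hz from being mistaken for the fundamental, since its lag falls below the search window's lower bound.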

A Study on Speech Separation in Cochannel Using a Sinusoidal Model (Sinusoidal Model을 이용한 Cochannel상에서의 음성분리에 관한 연구)

  • 박현규;신중인;박상희
    • Proceedings of the Korean Institute of Electrical Engineers (KIEE) Conference / Proceedings of the 1997 KIEE Autumn Conference, Society Headquarters / pp. 597-599 / 1997
  • Cochannel speaker separation is employed when speech from two talkers has been summed into one signal and it is desirable to recover one or both of the component speech signals from the composite. Cochannel speech arises in many common situations, such as when two AM signals containing speech are transmitted on the same frequency or when two people speak simultaneously (e.g., when talking on the telephone). In this paper, a method for separating the speech in such situations is proposed. In particular, only voiced sounds, among the several sound classes, are separated. The agreement between the original and separated signals is then verified by the cross-correlation between them.

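The two steps described in the abstract above — sinusoidal-model separation of voiced speech and verification by cross-correlation — can be sketched on a toy mixture. This is not the paper's analysis/synthesis procedure: it assumes the target talker's pitch is already known and simply retains narrow spectral bands around its harmonics, which is the simplest sinusoidal-model-style separation one can write down.

```python
import numpy as np

def separate_harmonics(mix, fs, f0, n_harm=5, tol_hz=10.0):
    """Toy sinusoidal-model separation: keep spectral content near multiples of f0.

    Assumes the target talker's fundamental f0 is known in advance; a real
    system would have to estimate it (e.g., with a pitch detector).
    """
    n = len(mix)
    spec = np.fft.rfft(mix)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    kept = np.zeros_like(spec)
    for h in range(1, n_harm + 1):
        band = np.abs(freqs - h * f0) < tol_hz  # narrow band around each harmonic
        kept[band] = spec[band]
    return np.fft.irfft(kept, n)

def similarity(x, y):
    """Normalized cross-correlation at zero lag, used as a similarity score."""
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

# Two pure tones stand in for two voiced talkers summed onto one channel.
fs, n = 8000, 8000
t = np.arange(n) / fs
talker_a = np.sin(2 * np.pi * 150 * t)
talker_b = np.sin(2 * np.pi * 237 * t)
recovered = separate_harmonics(talker_a + talker_b, fs, f0=150)
print(similarity(recovered, talker_a))  # close to 1
```

The cross-correlation score plays the same role as the paper's verification step: a value near 1 indicates the separated signal closely matches the original talker, while the score against the other talker stays near 0.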