• Title/Summary/Keyword: 음향음성학 (acoustic phonetics)

Search Results: 749

Analysis of Speech Signals According to the Various Emotional Contents (정서정보의 변화에 따른 음성신호의 특성분석에 관한 연구)

  • Jo, Cheol-Woo;Jo, Eun-Kyung;Min, Kyung-Hwan
    • The Journal of the Acoustical Society of Korea
    • /
    • v.16 no.3
    • /
    • pp.33-37
    • /
    • 1997
  • This paper describes experimental results from emotional speech materials analysed by various signal processing methods. Speech materials containing emotional information were collected from actors. The analysis focuses on variations in pitch information and duration. From the analysed results we can observe the characteristics of emotional speech. The materials from this experiment provide a valuable resource for analysing emotional speech.

  • PDF
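
The entry above characterises emotional speech mainly through pitch and duration. As a rough illustration of that kind of measurement (not the authors' actual procedure), the sketch below estimates a per-frame pitch contour by autocorrelation and the total voiced duration; the frame size, hop, voicing threshold, and 16 kHz sampling rate are assumptions.

    import numpy as np

    def pitch_contour(x, sr=16000, frame_len=0.03, hop=0.01,
                      fmin=75.0, fmax=400.0, voicing_thresh=0.3):
        """Per-frame pitch (Hz) via autocorrelation; 0.0 marks unvoiced frames.

        A rough sketch only: frame sizes and thresholds are assumptions,
        not values from the paper.
        """
        n, h = int(sr * frame_len), int(sr * hop)
        lag_min, lag_max = int(sr / fmax), int(sr / fmin)
        f0 = []
        for start in range(0, len(x) - n, h):
            frame = x[start:start + n] * np.hanning(n)
            ac = np.correlate(frame, frame, mode="full")[n - 1:]
            if ac[0] <= 0:
                f0.append(0.0)
                continue
            ac /= ac[0]                      # normalise so ac[0] == 1
            lag = lag_min + np.argmax(ac[lag_min:lag_max])
            f0.append(sr / lag if ac[lag] > voicing_thresh else 0.0)
        return np.array(f0)

    if __name__ == "__main__":
        sr = 16000
        t = np.arange(sr) / sr
        x = np.sin(2 * np.pi * 180 * t)      # stand-in for a recorded utterance
        f0 = pitch_contour(x, sr)
        voiced = f0 > 0
        print("mean pitch: %.1f Hz, voiced duration: %.2f s"
              % (f0[voiced].mean(), voiced.sum() * 0.01))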

Large Vocabulary Continuous Speech Recognition using Stochastic Pronunciation Lexicon Modeling (확률 발음사전을 이용한 대어휘 연속음성인식)

  • 윤성진
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1998.08a
    • /
    • pp.315-319
    • /
    • 1998
  • This paper proposes a stochastic pronunciation lexicon model for large-vocabulary continuous speech recognition. The proposed stochastic pronunciation lexicon effectively represents the pronunciation variations of words by modeling the word variations that frequently occur in natural utterances, such as continuous speech, as HMMs composed of stochastic subword-states, and it is designed to further improve the performance of the recognition system. The stochastic pronunciation lexicon is generated automatically through word-level segmentation and training using speech data and phoneme models, and it can be applied not only to linguistic units such as phonemes but also to continuous speech recognizers based on PLUs or non-linguistic recognition models. Continuous speech recognition experiments showed that using the stochastic pronunciation lexicon reduced the word error rate by 39.8 % and the sentence error rate by 24.4 % compared with a recognition system using standard pronunciation transcriptions.

  • PDF
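
The entry above replaces a single canonical pronunciation per word with probabilistically weighted variants. The toy sketch below illustrates only that basic idea with a dictionary of weighted phone sequences; the words, phones, and probabilities are invented, and the paper's actual construction uses HMMs of stochastic subword-states trained from data.

    import math

    # Toy stochastic pronunciation lexicon: each word maps to several
    # pronunciation variants with probabilities (invented values, for
    # illustration only -- the paper builds HMMs of subword-states instead).
    LEXICON = {
        "internet": [
            (("ih", "n", "t", "er", "n", "eh", "t"), 0.6),
            (("ih", "n", "er", "n", "eh", "t"), 0.4),      # casual /t/ deletion
        ],
    }

    def pronunciation_log_prob(word, phones):
        """Log-probability that `phones` is a pronunciation of `word`.

        Exact-match lookup only; a real system would align the phones
        against the variant models rather than require equality.
        """
        for variant, prob in LEXICON.get(word, []):
            if tuple(phones) == variant:
                return math.log(prob)
        return float("-inf")

    if __name__ == "__main__":
        print(pronunciation_log_prob("internet", ["ih", "n", "er", "n", "eh", "t"]))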

On the Use of a KAK Filter for Enhancement of Noisy Speech (KAK 필터를 이용한 잡음이 섞인 음성의 음질향상)

  • 조동호;유득수;은종관
    • The Journal of the Acoustical Society of Korea
    • /
    • v.5 no.2
    • /
    • pp.48-57
    • /
    • 1986
  • We propose a method that uses a KAK filter to enhance the quality of speech corrupted by wideband or narrowband noise. Although the KAK filter has a simple structure, its performance in enhancing noisy speech is comparable to that of the spectral subtraction method in terms of objective quality measures. Informal listening also shows that the enhanced speech quality obtained with the KAK filter is nearly the same as that obtained with spectral subtraction. The KAK filter, however, is structurally much simpler than existing methods and, unlike other enhancement algorithms, does not require speech/silence discrimination. In addition, the KAK filter can easily be combined with waveform coders such as ADPCM. Therefore, combining the proposed KAK filter with a waveform coder such as ADPCM is well suited to coding not only clean speech but also noisy speech.

  • PDF
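
The structure of the KAK filter is not given in the listing above, so no sketch of it is attempted; the baseline it is compared against, spectral subtraction, is sketched below under common simplifying assumptions (noise spectrum estimated from the first few frames, half-wave rectification of the subtracted magnitude).

    import numpy as np

    def spectral_subtraction(x, sr=8000, frame_len=256, noise_frames=10):
        """Basic magnitude spectral subtraction (the baseline method, not the
        KAK filter). The noise spectrum is estimated from the first few
        frames, which are assumed to contain noise only."""
        hop = frame_len // 2
        win = np.hanning(frame_len)
        frames = [x[i:i + frame_len] * win
                  for i in range(0, len(x) - frame_len, hop)]
        spectra = [np.fft.rfft(f) for f in frames]
        noise_mag = np.mean([np.abs(s) for s in spectra[:noise_frames]], axis=0)

        out = np.zeros(len(x))
        for i, s in enumerate(spectra):
            mag = np.maximum(np.abs(s) - noise_mag, 0.0)     # half-wave rectify
            clean = np.fft.irfft(mag * np.exp(1j * np.angle(s)), frame_len)
            out[i * hop:i * hop + frame_len] += clean        # overlap-add
        return out

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        t = np.arange(8000) / 8000.0
        noisy = np.sin(2 * np.pi * 440 * t) + 0.3 * rng.standard_normal(len(t))
        enhanced = spectral_subtraction(noisy)
        print("output length:", len(enhanced))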

An End Point Detection Technique Using the LSP Distance in EVRC Packets (EVRC 패킷에서 LSP 거리를 이용한 음성 끝점 검출)

  • 민병준;강명수
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.6
    • /
    • pp.44-48
    • /
    • 1999
  • This paper presents a simple and fast method for end-point detection in low-level noise environments. The proposed algorithm applies threshold logic to LSP distances and takes vocoded packets as input to the recognition system. The results of the proposed method are compared with end points manually checked in the decoded speech, and they exhibit acceptable accuracy.

  • PDF
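
A minimal sketch of threshold logic on LSP distances in the spirit of the entry above. In the paper the LSP vectors come directly from EVRC packets; here they are placeholder arrays, and the distance measure, threshold, and hangover length are assumptions.

    import numpy as np

    def detect_endpoints(lsp_frames, ref_lsp, threshold=0.05, hangover=3):
        """Mark frames as speech when their LSP distance from a reference
        (e.g. a background-noise LSP vector) exceeds a threshold.

        lsp_frames : (num_frames, order) array of LSP vectors per frame
        ref_lsp    : (order,) LSP vector representing the background
        Threshold and hangover are illustrative values, not the paper's.
        """
        dist = np.sqrt(((lsp_frames - ref_lsp) ** 2).sum(axis=1))
        raw = dist > threshold
        # Simple hangover: keep a frame "speech" for a few frames after the
        # distance drops, to avoid chopping word endings.
        speech = raw.copy()
        for i in range(len(raw)):
            if raw[max(0, i - hangover):i].any():
                speech[i] = True
        idx = np.flatnonzero(speech)
        return (idx[0], idx[-1]) if len(idx) else None

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        order = 10
        ref = np.linspace(0.1, 3.0, order)                 # background LSPs
        frames = np.tile(ref, (50, 1)) + 0.005 * rng.standard_normal((50, order))
        frames[15:35] += 0.1                               # simulated speech region
        print(detect_endpoints(frames, ref))               # expected (15, 37)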

Implementation and Performance Evaluation of the System for Speech Services using VMEbus (VMEbus 를 이용한 음성 서비스 시스템의 구현 및 성능평가)

  • Kwon, Oh-Il;Kang, Kyung-Young;Kim, Tong-Ha;Rhee, Tae-Won
    • The Journal of the Acoustical Society of Korea
    • /
    • v.15 no.1
    • /
    • pp.93-101
    • /
    • 1996
  • In this paper, we implement a speech processing system to provide better speech services to subscribers on the telephone network. We develop a dedicated board that processes the speech signal and devise a system that stores and replays speech under a configuration in which one master board controls multiple DSP (Digital Signal Processing) boards over the VME bus. We use a CPU30 board as the master board, develop an SPM (Signal Processing Module) board as the DSP board, and then evaluate the performance of the system.

  • PDF

Pre-Processing for Performance Enhancement of Speech Recognition in Digital Communication Systems (디지털 통신 시스템에서의 음성 인식 성능 향상을 위한 전처리 기술)

  • Seo, Jin-Ho;Park, Ho-Chong
    • The Journal of the Acoustical Society of Korea
    • /
    • v.24 no.7
    • /
    • pp.416-422
    • /
    • 2005
  • Speech recognition in digital communication systems suffers very low performance due to the spectral distortion caused by speech codecs. In this paper, the spectral distortion introduced by speech codecs is analyzed and a pre-processing method that compensates for it is proposed to enhance speech recognition performance. Three standard speech codecs, IS-127 EVRC, ITU G.729 CS-ACELP, and IS-96 QCELP, are considered for algorithm development and evaluation, and a single method that can be applied commonly to all codecs is developed. The performance of the proposed method is evaluated for the three codecs; using speech features extracted from the compensated spectrum, the recognition rate is improved by a maximum of 15.6 % compared with that obtained from the degraded speech features.
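
A toy sketch of the general idea in the entry above: estimate an average per-bin correction between original and codec-processed magnitude spectra offline, then apply it to distorted spectra before feature extraction. The way the correction is estimated and applied here is an assumption, not the paper's method.

    import numpy as np

    def estimate_compensation(orig_mags, coded_mags, eps=1e-8):
        """Average per-bin gain that maps codec-distorted magnitude spectra
        back toward the original ones (a fixed correction, learned offline)."""
        return np.mean(orig_mags / (coded_mags + eps), axis=0)

    def compensate(coded_mag, gain):
        """Apply the pre-computed correction to one distorted spectrum."""
        return coded_mag * gain

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        bins = 129
        # Pretend the codec attenuates high frequencies (illustrative only).
        codec_response = np.linspace(1.0, 0.6, bins)
        orig = rng.uniform(0.5, 1.5, size=(200, bins))
        coded = orig * codec_response
        gain = estimate_compensation(orig, coded)
        restored = compensate(coded[0], gain)
        print("max relative error after compensation: %.3e"
              % np.max(np.abs(restored - orig[0]) / orig[0]))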

Robust End Point Detection for Robot Speech Recognition Using Double Talk Detection (음성인식 로봇을 위한 동시통화검출 기반의 강인한 음성 끝점 검출)

  • Moon, Sung-Kyu;Park, Jin-Soo;Ko, Han-Seok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.31 no.3
    • /
    • pp.161-169
    • /
    • 2012
  • This paper presents a robust speech end-point detector that uses double-talk detection for a speech recognition robot operating under echoic conditions. The proposed method combines the result of a conventional end-point detector with that of a double-talk detector. We tested the proposed method in an isolated word recognition system under an echoic environment. As a result, the proposed algorithm shows approximately 30 % better speech recognition performance than the available techniques.
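
A minimal sketch of the combination logic suggested by the entry above: a frame is accepted as user speech only when a conventional energy-based end-point detector fires and a double-talk detector indicates the frame is not dominated by the robot's own playback echo. The energy threshold and the correlation-based double-talk test are assumptions for illustration.

    import numpy as np

    def energy_vad(frame, threshold=0.01):
        """Conventional end-point detector stand-in: energy thresholding."""
        return np.mean(frame ** 2) > threshold

    def double_talk(mic_frame, playback_frame, corr_threshold=0.7):
        """Very rough double-talk test: high correlation with the playback
        signal suggests the microphone mostly picked up the robot's own echo."""
        denom = np.linalg.norm(mic_frame) * np.linalg.norm(playback_frame)
        if denom == 0:
            return False
        corr = abs(np.dot(mic_frame, playback_frame)) / denom
        return corr < corr_threshold          # True -> user speech present too

    def is_user_speech(mic_frame, playback_frame):
        """Combined decision: end-point detector AND double-talk detector."""
        return energy_vad(mic_frame) and double_talk(mic_frame, playback_frame)

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        playback = np.sin(2 * np.pi * 300 * np.arange(160) / 8000)
        echo_only = 0.5 * playback
        with_user = 0.5 * playback + 0.6 * rng.standard_normal(160)
        print(is_user_speech(echo_only, playback))   # False: echo only
        print(is_user_speech(with_user, playback))   # True: user talking over echo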

A Study on the Audio Compensation System (음향 보상 시스템에 관한 연구)

  • Jeoung, Byung-Chul;Won, Chung-Sang
    • The Journal of the Acoustical Society of Korea
    • /
    • v.32 no.6
    • /
    • pp.509-517
    • /
    • 2013
  • In this paper, we investigate a method for building a good acoustic speech system using digital signal processing with a dynamic microphone as the transducer. A good acoustic speech system should convert the original sound input into an electrical signal without distortion. By measuring the frequency response of the microphone and comparing the measured data with a standard microphone frequency response, adjustment factors are obtained for each frequency band. The final sound levels are obtained by applying the adjustment factors for the microphone and speaker frequency responses so that they match the original sound levels. We then minimize the changes in frequency response and level caused by variations in the distance from the source to the microphone, where the frequency responses were measured as the distance changed.
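
A rough sketch of deriving per-band adjustment factors by comparing a measured microphone response with a standard (reference) response, as described in the entry above. The band layout and all response values below are invented for illustration.

    import numpy as np

    # Illustrative per-band frequency responses in dB (invented numbers):
    # the "standard" response we want, and what the dynamic mic measured.
    BANDS_HZ  = [125, 250, 500, 1000, 2000, 4000, 8000]
    REFERENCE = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])   # flat target
    MEASURED  = np.array([-4.0, -2.0, -0.5, 0.0, 1.0, 2.5, -3.0])

    def adjustment_factors_db(reference_db, measured_db):
        """Per-band correction in dB: boost where the mic is weak, cut where
        it over-emphasises, so the compensated output matches the reference."""
        return reference_db - measured_db

    def apply_adjustment(band_levels_db, factors_db):
        """Apply the corrections to per-band levels of a measured signal."""
        return band_levels_db + factors_db

    if __name__ == "__main__":
        factors = adjustment_factors_db(REFERENCE, MEASURED)
        signal_bands = np.array([60.0, 62.0, 65.0, 66.0, 64.0, 61.0, 55.0])
        for hz, raw, fixed in zip(BANDS_HZ, signal_bands,
                                  apply_adjustment(signal_bands, factors)):
            print(f"{hz:5d} Hz: {raw:5.1f} dB -> {fixed:5.1f} dB")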

The Characteristics of the Vocalization of the Female News Anchors (여성 뉴스 앵커의 발성 특성 분석)

  • Kyon, Doo-Heon;Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.30 no.7
    • /
    • pp.390-395
    • /
    • 2011
  • This paper studies the common voice parameters of the female main news anchors of weekday evening broadcasts at each station, and the relative differences in voice and sound among stations. To examine voice characteristics, six voice parameters were analyzed; the analysis showed that the anchors of each station had distinctive voice and phonation characteristics in all areas except speech rate, and that there were also differences in the stations' sound systems. The main analysis parameters were basic pitch, the tone of the first formant and the pitch ratio, the degree of closeness indicated by the pitch bandwidth, the type of sentence closing inferred from the average pitch position within the pitch bandwidth, average speech rate, and acoustic tone based on the energy distribution across frequency bands. The analyzed values and results can serve as reference criteria for the phonation characteristics of domestic female news anchors.
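
Of the parameters listed in the entry above, the energy distribution across frequency bands is the simplest to illustrate; the sketch below computes the fraction of signal energy in a few fixed bands. The band edges are assumptions, and this covers only one of the six parameters actually analysed.

    import numpy as np

    def band_energy_ratios(x, sr=16000, edges=(0, 1000, 2000, 4000, 8000)):
        """Fraction of total spectral energy falling into each band.
        Band edges in Hz are illustrative, not the study's."""
        spectrum = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
        total = spectrum.sum()
        ratios = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (freqs >= lo) & (freqs < hi)
            ratios.append(spectrum[mask].sum() / total)
        return ratios

    if __name__ == "__main__":
        sr = 16000
        t = np.arange(sr) / sr
        # Stand-in signal: a strong low component plus a weaker high component.
        x = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 3000 * t)
        for (lo, hi), r in zip([(0, 1000), (1000, 2000), (2000, 4000), (4000, 8000)],
                               band_energy_ratios(x, sr)):
            print(f"{lo:4d}-{hi:4d} Hz: {100 * r:5.1f} %")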

An Enhancement of Japanese Acoustic Model using Korean Speech Database (한국어 음성데이터를 이용한 일본어 음향모델 성능 개선)

  • Lee, Minkyu;Kim, Sanghun
    • The Journal of the Acoustical Society of Korea
    • /
    • v.32 no.5
    • /
    • pp.438-445
    • /
    • 2013
  • In this paper, we propose an enhancement of a Japanese acoustic model trained with a Korean speech database using several combination strategies. We describe strategies for training with combinations of two or more languages: Cross-Language Transfer, Cross-Language Adaptation, and the Data Pooling Approach. We simulated these strategies and found a suitable method for our current Japanese database. Existing combination strategies are generally validated for under-resourced language environments, but when the speech database is not severely under-resourced, those strategies turn out to be inappropriate. In the Data Pooling Approach training process, we built the tied list using only the target language. As a result, we found the ERR of the acoustic model to be 12.8 %.