• Title/Summary/Keyword: 음성다중 (speech multiplex)

A Study on the Automatic Speech Control System Using DMS model on Real-Time Windows Environment (실시간 윈도우 환경에서 DMS모델을 이용한 자동 음성 제어 시스템에 관한 연구)

  • 이정기;남동선;양진우;김순협
    • The Journal of the Acoustical Society of Korea / v.19 no.3 / pp.51-56 / 2000
  • In this paper, we study an automatic speech control system for a real-time Windows environment using voice recognition. The reference pattern is a variable DMS model, proposed to speed up execution, and recognition uses the One-Stage DP algorithm over this model. The recognition vocabulary consists of control command words frequently used in the Windows environment. An automatic speech-period detection algorithm for on-line voice processing under Windows is implemented. The proposed variable DMS model assigns a variable number of sections according to the duration of the input signal. Because unnecessary recognition target words are sometimes generated, the model is reconstructed on-line to handle them efficiently. Perceptual Linear Predictive (PLP) analysis is applied to generate feature vectors from the extracted speech features. Experimental results show that recognition is faster with the proposed model owing to its smaller computational load; the multi-speaker-independent and multi-speaker-dependent recognition rates are 99.08% and 99.39%, respectively, and 96.25% in a noisy environment. An illustrative sketch of the model follows below.

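Below is a minimal Python sketch of the variable-section DMS idea described above: the reference pattern gets a number of sections proportional to utterance duration, and an input is scored by DP alignment against the section means. The section counts, the distance measure, and the isolated-word simplification (the paper uses One-Stage DP for connected recognition) are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

def build_variable_dms(frames, frames_per_section=6, min_sec=4, max_sec=16):
    """Variable DMS reference: average feature frames (e.g., PLP vectors) into a
    duration-dependent number of sections (illustrative parameter choices)."""
    n_sec = int(np.clip(len(frames) // frames_per_section, min_sec, max_sec))
    n_sec = min(n_sec, len(frames))  # never more sections than frames
    return np.stack([seg.mean(axis=0) for seg in np.array_split(frames, n_sec)])

def align_cost(frames, model):
    """DP alignment of input frames to model sections; each frame either stays
    in the current section or advances to the next (monotonic alignment)."""
    T, S = len(frames), len(model)
    D = np.full((T + 1, S + 1), np.inf)
    D[0, 0] = 0.0
    for t in range(1, T + 1):
        for s in range(1, S + 1):
            local = np.linalg.norm(frames[t - 1] - model[s - 1])
            D[t, s] = local + min(D[t - 1, s], D[t - 1, s - 1])
    return D[T, S] / T  # length-normalized distance

# Usage: recognize by the nearest reference among the command-word models.
# word = min(models, key=lambda w: align_cost(input_frames, models[w]))
```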

Segmental duration modelling for Korean text-to-speech synthesis (한국어 음성합성에서 음운지속시간 모델화)

  • Lee YangHee
    • Proceedings of the KSPS conference / 1996.02a / pp.125-135 / 1996
  • In this paper, to synthesize natural-sounding speech, we statistically analyze how Korean segmental durations are affected by the number of syllables in, and the position of the syllable within, the phrase and the clause, as well as by neighboring phonemes, and we model segmental duration with a regression tree whose control factors are the analyzed temporal features. Prediction experiments with the proposed duration model show a multiple correlation coefficient of about 0.74 between measured and predicted values, with more than 75% of the prediction errors for each phoneme falling within 25 ms, validating the proposed model. A toy version of such a regression tree is sketched below.

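The duration model above is a regression tree over contextual factors. Here is a minimal sketch assuming scikit-learn; the feature columns (syllable counts/positions and coarse neighboring-phoneme classes) and all numeric values are toy stand-ins for the paper's control factors and data.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy rows: [syllables_in_phrase, syllable_position, prev_phone_class, next_phone_class]
X = np.array([[3, 1, 0, 2], [3, 3, 2, 1], [5, 2, 1, 1], [5, 5, 0, 0]], dtype=float)
y = np.array([85.0, 140.0, 95.0, 170.0])  # measured phone durations in ms (toy values)

tree = DecisionTreeRegressor(max_depth=4).fit(X, y)
pred = tree.predict(X)

# The paper's evaluation statistics: multiple correlation coefficient between
# measured and predicted durations, and the share of errors within 25 ms.
r = np.corrcoef(y, pred)[0, 1]
within_25ms = np.mean(np.abs(y - pred) <= 25.0)
print(f"R = {r:.2f}, errors <= 25 ms: {within_25ms:.0%}")
```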

Multi-Emotion Recognition Model with Text and Speech Ensemble (텍스트와 음성의 앙상블을 통한 다중 감정인식 모델)

  • Yi, Moung Ho;Lim, Myoung Jin;Shin, Ju Hyun
    • Smart Media Journal / v.11 no.8 / pp.65-72 / 2022
  • Due to COVID-19, counseling has shifted from face-to-face to non-face-to-face settings, and the importance of remote counseling is growing. Its advantage is that clients can be counseled online anytime, anywhere, safe from COVID-19; its difficulty is that non-verbal cues are largely lost, which makes the client's state of mind harder to read. Recognizing emotion by accurately analyzing text and voice is therefore important for non-face-to-face counseling. In this paper, text data are vectorized with FastText after decomposing Hangul into consonants and vowels, and voice data are vectorized by extracting Log Mel Spectrogram and MFCC features. We propose a multi-emotion recognition model that recognizes five emotions from these vectors with an LSTM model, evaluating multi-emotion recognition with RMSE. In experiments, the proposed model achieved an RMSE of 0.2174, the lowest error compared with models using text or voice data alone.
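
A minimal sketch of the two feature branches described above, assuming librosa for audio and the `jamo` package for Hangul decomposition; the sampling rate, n_mels, and n_mfcc values are illustrative choices, not the paper's reported settings.

```python
import numpy as np
import librosa

def audio_features(path, sr=16000, n_mels=64, n_mfcc=40):
    """Log Mel Spectrogram + MFCC, stacked per frame; an LSTM consumes the
    resulting (time, features) sequence."""
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)                      # Log Mel Spectrogram
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # MFCC
    return np.concatenate([log_mel, mfcc], axis=0).T

# Text branch (sketch): decompose Hangul into jamo before FastText embedding.
# from jamo import h2j, j2hcj
# jamo_text = j2hcj(h2j("상담 내용"))  # e.g. "ㅅㅏㅇㄷㅏㅁ ㄴㅐㅇㅛㅇ"
```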

Real-time Implementation of the AMR Speech Coder Using OakDSPCore® (OakDSPCore®를 이용한 적응형 다중 비트 (AMR) 음성 부호화기의 실시간 구현)

  • 이남일;손창용;이동원;강상원
    • The Journal of the Acoustical Society of Korea / v.20 no.6 / pp.34-39 / 2001
  • The adaptive multi-rate (AMR) speech coder was adopted as a standard for W-CDMA by 3GPP and ETSI. The coder is based on the CELP algorithm, operates at rates ranging from 12.2 kbps down to 4.75 kbps, and is source-controlled according to channel error conditions and traffic load. In this paper, we implement the DSP software of the AMR coder on an OakDSPCore. The implementation is based on the CSD17C00A chip developed by C&S Technology and is verified for bit-exactness using the test vectors provided by ETSI for the AMR speech codec. It requires 20.6 MIPS for the encoder and 2.7 MIPS for the decoder, and the AMR coder occupies 21.97 kwords of code, 6.64 kwords of data sections, and 15.1 kwords of data ROM. Actual sound input/output tests with a microphone and speaker demonstrate proper real-time operation without distortion or delay. A sketch of such a bit-exactness check appears below.

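The bit-exactness verification mentioned above can be illustrated with a small comparison harness; the file names, 16-bit word width, and words-per-frame value here are assumptions for illustration, not the ETSI tool interface.

```python
import numpy as np

def bit_exact(ref_path, dut_path, words_per_frame=160):
    """Compare a codec build's output against a reference test vector word by
    word; report the first mismatching frame, if any."""
    ref = np.fromfile(ref_path, dtype=np.int16)
    dut = np.fromfile(dut_path, dtype=np.int16)
    if ref.size != dut.size:
        return False, min(ref.size, dut.size) // words_per_frame
    bad = np.flatnonzero(ref != dut)
    return bad.size == 0, (int(bad[0]) // words_per_frame if bad.size else None)

ok, frame = bit_exact("ref_12k2.cod", "dut_12k2.cod")
print("bit-exact" if ok else f"first mismatch in frame {frame}")
```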

A Study on Packetized Speech Coding by Zero-Bit Reduction of 1st-Order Differences (1차 차분신호의 영비트 제거에 의한 음성신호의 패킷부호화에 관한 연구)

  • Shin, Dong-Jin;Lim, Un-Cheon;Bae, Myung-Jin;Ann, Sou-Guil
    • The Journal of the Acoustical Society of Korea / v.8 no.4 / pp.74-82 / 1989
  • In this paper, we study implementation methodologies and performance evaluation for real-time packetized coding of multi-channel speech signals. The proposed coding algorithm is very simple, consisting mainly of data-handling operations rather than numerical calculation, and it gives a compression ratio of about 40% with less computation than conventional codings. The algorithm saves memory for the speech signal and raises channel-transmission efficiency, and its simplicity makes multi-channel operation easy to achieve. The coding idea is sketched below.

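A minimal sketch of the idea above: code first-order differences and drop their leading zero bits. The packet layout (a 4-bit length prefix plus sign and magnitude bits) is an assumed illustration; the paper's actual format may differ.

```python
import numpy as np

def encode(samples):
    """Pack first-order differences as <4-bit length><sign><magnitude bits>,
    omitting leading zero bits (magnitudes up to 15 bits assumed)."""
    diffs = np.diff(np.asarray(samples, dtype=np.int32), prepend=0)
    bits = []
    for d in diffs:
        mag = abs(int(d))
        n = mag.bit_length()
        bits.append(format(n, "04b"))
        if n:
            bits.append(("1" if d < 0 else "0") + format(mag, f"0{n}b"))
    return "".join(bits)

def decode(bitstream):
    """Invert encode(): read length prefixes, rebuild differences, accumulate."""
    out, acc, i = [], 0, 0
    while i + 4 <= len(bitstream):
        n = int(bitstream[i:i + 4], 2); i += 4
        d = 0
        if n:
            sign = -1 if bitstream[i] == "1" else 1
            d = sign * int(bitstream[i + 1:i + 1 + n], 2)
            i += 1 + n
        acc += d
        out.append(acc)
    return out

x = [0, 3, 5, 4, 4, 2]
assert decode(encode(x)) == x  # round-trip check
```

Because small differences dominate in speech, most packets carry only a few magnitude bits, which is where the roughly 40% compression comes from.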

Emotion Recognition Method from Speech Signal Using the Wavelet Transform (웨이블렛 변환을 이용한 음성에서의 감정 추출 및 인식 기법)

  • Go, Hyoun-Joo;Lee, Dae-Jong;Park, Jang-Hwan;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.2 / pp.150-155 / 2004
  • In this paper, an emotion recognition method using speech signals is presented. Six basic human emotions are investigated: happiness, sadness, anger, surprise, fear, and dislike. The proposed recognizer has a codebook for each emotional state, constructed using the wavelet transform. The emotional state is first verified at each filter bank, and the final recognition is obtained from a multi-decision scheme. The database consists of 360 emotional utterances from twenty speakers, each saying a sentence three times for each of the six emotional states. The proposed method improved the recognition rate by more than 5% over previous works.
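
A minimal sketch of wavelet-subband features with a per-band decision and majority vote, assuming PyWavelets; the wavelet family, depth, scalar log-energy feature, and voting rule are illustrative simplifications of the paper's codebook and multi-decision scheme.

```python
import numpy as np
import pywt

def subband_features(signal, wavelet="db4", level=4):
    """Wavelet decomposition of a speech segment; one log-energy per subband
    plays the role of the per-filter-bank observation."""
    coeffs = pywt.wavedec(np.asarray(signal, dtype=float), wavelet, level=level)
    return np.array([np.log(np.sum(c * c) + 1e-12) for c in coeffs])

def classify(signal, codebooks):
    """Each subband votes for the emotion whose codebook entry is nearest;
    the final decision is the majority vote across subbands."""
    feats = subband_features(signal)
    votes = {}
    for b, f in enumerate(feats):
        best = min(codebooks, key=lambda emo: min(abs(f - v) for v in codebooks[emo][b]))
        votes[best] = votes.get(best, 0) + 1
    return max(votes, key=votes.get)

# codebooks: {"anger": [[band-0 codewords], ..., [band-L codewords]], ...}
```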

Multiple Pronunciation Dictionary Generation For Korean Point-of-Interest Data Using Prosodic Words (운율어를 이용한 한국어 위치 정보 데이터의 다중 발음 사전 생성)

  • Kim, Sun-Hee;Jeon, Je-Hun;Na, Min-Soo;Chung, Min-Hwa
    • Annual Conference on Human and Language Technology / 2006.10e / pp.183-188 / 2006
  • In this paper, location-information data are Point-of-Interest (POI) data collected from the web for telematics applications, comprising the vocabulary used in location search: administrative districts, place names, personal names, and business names. The paper concerns the pronunciation dictionary of a speech recognition system and proposes a method that uses prosodic words to generate, from 250k location-information entries, all possible pronunciations, including irregular pronunciations and pronunciation variants. Since every POI occurs only once in the data, detecting POIs with irregular pronunciations or generating their pronunciations would otherwise require inspecting each POI one by one; noting that most POIs are compound noun phrases, we used prosodic words, which made irregular-pronunciation detection and multiple-pronunciation generation efficient. This work contributes directly to improving speech recognition performance on location-information data, and it is also meaningful as interdisciplinary research applying phonetic and phonological theory to speech recognition. The variant-generation idea is sketched below.

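A minimal sketch of the prosodic-word approach described above: split a compound POI into prosodic words, expand each through a variant table, and combine the results. The whitespace splitter and the variant table are hypothetical placeholders for the paper's linguistically informed segmentation and pronunciation rules.

```python
from itertools import product

def split_prosodic_words(poi):
    # Stand-in segmentation; the paper derives prosodic words from the
    # compound-noun structure of POIs, not from whitespace alone.
    return poi.split()

def word_variants(word, table):
    # Known irregular pronunciations / variants, else the regular form.
    return table.get(word, [word])

def poi_pronunciations(poi, table):
    parts = [word_variants(w, table) for w in split_prosodic_words(poi)]
    return [" ".join(p) for p in product(*parts)]

# Toy example with hypothetical entries:
table = {"wordA": ["variantA1", "variantA2"]}
print(poi_pronunciations("wordA wordB", table))
# -> ['variantA1 wordB', 'variantA2 wordB']
```

Generating variants per prosodic word keeps the dictionary compact: irregularities are handled once per prosodic word and reused across every POI that contains it.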