• Title/Summary/Keyword: 음성다중


PCMM-Based Feature Compensation Method Using Multiple Model to Cope with Time-Varying Noise (시변 잡음에 대처하기 위한 다중 모델을 이용한 PCMM 기반 특징 보상 기법)

  • 김우일;고한석
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.6
    • /
    • pp.473-480
    • /
    • 2004
  • In this paper we propose an effective feature compensation scheme based on a speech model in order to achieve robust speech recognition. The proposed feature compensation method is based on the parallel combined mixture model (PCMM). Previous PCMM work requires a highly sophisticated procedure for estimating the combined mixture model in order to reflect the time-varying noisy conditions at every utterance. The proposed schemes cope with time-varying background noise by employing an interpolation method over multiple mixture models. We apply a 'data-driven' method to PCMM for more reliable model combination and introduce a frame-synched version for a posteriori estimation of the environment. In order to reduce the computational complexity due to the multiple models, we propose a mixture-sharing technique: statistically similar Gaussian components are selected and smoothed versions are generated for sharing. The performance is examined on Aurora 2.0 and on a speech corpus recorded while driving a car. The experimental results indicate that the proposed schemes are effective in realizing robust speech recognition and in reducing the computational complexity under both simulated environments and real-life conditions.
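
A rough sketch of the interpolation step this abstract describes, not the paper's actual PCMM estimation: K hypothetical environment-specific Gaussian mixture models are combined using frame-synchronous environment posteriors, so the result is itself a valid mixture. All names, shapes, and the diagonal-covariance assumption are illustrative.

```python
import numpy as np

def interpolate_models(weights, means, covs, env_post):
    """Interpolate K environment GMMs into a single GMM.

    The combined density is
        p(x) = sum_k env_post[k] * sum_m weights[k, m] * N(x; means[k, m], covs[k, m]),
    i.e. the union of all components with weights scaled by the posteriors.

    weights:  (K, M)    mixture weights of each environment model
    means:    (K, M, D) mixture means
    covs:     (K, M, D) diagonal covariances
    env_post: (K,)      environment posteriors (sum to 1)
    """
    w = (env_post[:, None] * weights).reshape(-1)  # (K*M,) combined weights
    mu = means.reshape(-1, means.shape[-1])        # (K*M, D)
    var = covs.reshape(-1, covs.shape[-1])         # (K*M, D)
    return w, mu, var
```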

Multi-Emotion Regression Model for Recognizing Inherent Emotions in Speech Data (음성 데이터의 내재된 감정인식을 위한 다중 감정 회귀 모델)

  • Moung Ho Yi;Myung Jin Lim;Ju Hyun Shin
    • Smart Media Journal
    • /
    • v.12 no.9
    • /
    • pp.81-88
    • /
    • 2023
  • Recently, online communication has been increasing with the spread of non-face-to-face services during COVID-19. In non-face-to-face situations, the other person's opinions and emotions are recognized through modalities such as text, speech, and images, and research on multimodal emotion recognition that combines these modalities is actively underway. Among them, emotion recognition using speech data is attracting attention as a means of understanding emotions through acoustic and linguistic information, but in most cases emotions are recognized from a single speech feature value. However, because a variety of emotions coexist in a conversation in a complex manner, a method for recognizing multiple emotions is needed. Therefore, in this paper, we propose a multi-emotion regression model that preprocesses speech data, extracts feature vectors, and takes the passage of time into account in order to recognize the complex emotions inherent in speech.
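
A minimal sketch of the multi-emotion regression idea, assuming scikit-learn; the feature dimensionality, the four-emotion set, and the random placeholder data are illustrative, not the paper's.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 39))  # placeholder utterance-level feature vectors
y = rng.random((200, 4))   # placeholder intensities for 4 assumed emotions

# Predict an intensity for every emotion at once, rather than a single
# categorical label, so mixed emotions in one utterance can be expressed.
model = MultiOutputRegressor(Ridge(alpha=1.0)).fit(X, y)
print(model.predict(X[:2]))  # one intensity per emotion, per utterance
```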

The Real-time Monitoring for SIP-based VoIP Network (SIP 기반 음성 통신 환경에서의 실시간 모니터링 플랫폼 개발)

  • Woo, Ho-Jin;Lee, Won-Suk
    • Korea Society of IT Services: Conference Proceedings
    • /
    • 2009.05a
    • /
    • pp.365-368
    • /
    • 2009
  • With the build-out of high-speed Internet networks and the growing demand for multimedia communication, VoIP has been continuously validated as a replacement or extension technology for the existing PSTN. Among voice-data protocols, SIP has simpler signaling steps than other protocols, so voice communication systems built on SIP together with RTP are increasingly common. However, RTP by its nature requires a reconstruction step every time a packet is processed, and when communication takes place over multiple sessions the management of all the packets becomes complex, so a methodology is required that can process and maintain the data without cross-talk between sessions. This paper presents a case study of a multi-session application that processes RTP voice data in a typical call-center environment, where call events occur between customers and agents over SIP-based IP phones. The implemented system port-mirrors the call traffic generated by the IP phones at an integrated switch server, forwards it to a recording server, and monitors the voice data in real time while the sessions of the forwarded packets remain active.
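
A minimal sketch of the multi-session handling this abstract describes: mirrored RTP packets are demultiplexed into per-call buffers so that concurrent calls do not interleave. The 12-byte header layout follows RFC 3550; the session key and all names are illustrative assumptions, and a real system would also track SIP dialog state.

```python
import struct
from collections import defaultdict

sessions = defaultdict(list)  # (ssrc, src_addr) -> list of (seq, ts, payload)

def handle_rtp(packet: bytes, src_addr) -> None:
    if len(packet) < 12:                 # minimum RTP header size
        return
    v_p_x_cc, m_pt, seq, ts, ssrc = struct.unpack('!BBHII', packet[:12])
    if (v_p_x_cc >> 6) != 2:             # RTP version field must be 2
        return
    payload = packet[12 + 4 * (v_p_x_cc & 0x0F):]  # skip the CSRC list
    sessions[(ssrc, src_addr)].append((seq, ts, payload))
```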


Industry News (업계소식)

  • Korea Electronics Association
    • Journal of Korean Electronics
    • /
    • v.5 no.6
    • /
    • pp.98-100
    • /
    • 1985

Industry News (업계소식)

  • Korea Electronics Association
    • Journal of Korean Electronics
    • /
    • v.5 no.7
    • /
    • pp.94-97
    • /
    • 1985

Improvement in Korean Speech Recognition using Dynamic Multi-Group Mixture Weight (동적 다중 그룹 혼합 가중치를 이용한 한국어 음성 인식의 성능향상)

  • 황기찬;김종광;김진수;이정현
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2002.10d
    • /
    • pp.544-546
    • /
    • 2002
  • This paper proposes a method of reorganizing CDHMM (Continuous Density Hidden Markov Model) training using dynamic multi-group mixture weights. Speech is characterized by a sequence of hidden states, and each state is represented by a weighted mixture of Gaussian density functions. Modeling the speech signal more accurately requires more Gaussian functions per state, which demands a large amount of computation. This computational load can be reduced by using the statistical mean of the Gaussian distribution probabilities. However, such existing methods do not apply weights appropriately for the varying speaking rates of different speakers, which degrades the recognition rate. To address this, speakers are dynamically organized into five groups according to speaking rate, and the CDHMM parameters are reconstructed with dynamic multi-group mixture weights, increasing the recognition rate by 8.5%.
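
A minimal sketch of the grouping idea, under stated assumptions: the five speaking-rate groups and their boundaries are invented for illustration (the paper derives its grouping from data), and the state likelihood simply selects the weight set matching the speaker's rate group.

```python
import numpy as np

RATE_EDGES = [8.0, 10.0, 12.0, 14.0]  # 4 assumed edges -> 5 rate groups

def rate_group(phones_per_sec: float) -> int:
    """Map an estimated speaking rate to one of five groups."""
    return int(np.searchsorted(RATE_EDGES, phones_per_sec))

def state_log_likelihood(x, group_weights, means, covs, rate):
    """Diagonal-covariance Gaussian mixture with rate-dependent weights.

    group_weights: (5, M) one mixture-weight set per rate group
    means, covs:   (M, D) shared Gaussian parameters of this HMM state
    """
    w = group_weights[rate_group(rate)]           # pick weights dynamically
    diff = x - means
    log_g = -0.5 * np.sum(diff**2 / covs + np.log(2 * np.pi * covs), axis=1)
    return np.logaddexp.reduce(np.log(w) + log_g)
```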


A Study on the Speech Recognition for Commands of Ticketing Machine using CHMM (CHMM을 이용한 발매기 명령어의 음성인식에 관한 연구)

  • Kim, Beom-Seung;Kim, Soon-Hyob
    • Journal of the Korean Society for Railway
    • /
    • v.12 no.2
    • /
    • pp.285-290
    • /
    • 2009
  • This paper implements a speech recognition system that recognizes the commands of a ticketing machine (314 station names) in real time using a continuous hidden Markov model. The feature vectors are 39 MFCCs, and 895 tied-state triphone models were composed to improve the recognition rate. In the system performance evaluation, the multi-speaker-dependent and multi-speaker-independent recognition rates are 99.24% and 98.02%, respectively; in a noisy environment, the recognition rate is 93.91%.
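
For reference, the 39-dimensional vector usually meant by "39 MFCC" is 13 static coefficients plus their delta and delta-delta; a sketch with librosa follows (the file name and frame parameters are assumptions, not taken from the paper).

```python
import numpy as np
import librosa

y, sr = librosa.load('utterance.wav', sr=16000)     # hypothetical input file
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # 13 static coefficients
feat = np.vstack([mfcc,
                  librosa.feature.delta(mfcc),            # delta
                  librosa.feature.delta(mfcc, order=2)])  # delta-delta
print(feat.shape)  # (39, n_frames)
```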

Effective Feature Extraction in the Individual frequency Sub-bands for Speech Recognition (음성인식을 위한 주파수 부대역별 효과적인 특징추출)

  • 지상문
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.7 no.4
    • /
    • pp.598-603
    • /
    • 2003
  • This paper presents a sub-band feature extraction approach in which the feature extraction method for each frequency sub-band is chosen according to speech recognition accuracy. As in the multi-band paradigm, features are extracted independently in frequency sub-regions of the speech signal. Since the spectral shape is well structured in the low-frequency region, the all-pole model is effective for feature extraction there. In the high-frequency region, however, a nonparametric transform, the discrete cosine transform, is effective for extracting the cepstrum. Using a sub-band-specific feature extraction method, the linguistic information in the individual frequency sub-bands can be extracted effectively for automatic speech recognition. The validity of the proposed method is shown by comparing speech recognition results for our method with those obtained using a full-band feature extraction method.
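
A minimal sketch of the band-specific idea, assuming a 16 kHz signal, an illustrative 2 kHz split, and arbitrary model orders: an all-pole (LPC) fit for the low band and a DCT cepstrum of the log spectrum for the high band.

```python
import numpy as np
import scipy.signal
from scipy.fft import dct
import librosa

def subband_features(frame, sr=16000, split_hz=2000, lpc_order=10, n_cep=8):
    # Low band: all-pole (LPC) model fitted to the low-pass-filtered frame.
    b, a = scipy.signal.butter(4, split_hz / (sr / 2), btype='low')
    low = scipy.signal.lfilter(b, a, frame)
    lpc = librosa.lpc(low, order=lpc_order)
    # High band: nonparametric cepstrum via DCT of the high-bin log spectrum.
    spec = np.abs(np.fft.rfft(frame)) + 1e-10
    k = int(split_hz / (sr / 2) * (len(spec) - 1))
    high_cep = dct(np.log(spec[k:]), norm='ortho')[:n_cep]
    return lpc[1:], high_cep  # drop the leading 1.0 of the LPC polynomial
```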

Multi-level Skip Connection for Nested U-Net-based Speech Enhancement (중첩 U-Net 기반 음성 향상을 위한 다중 레벨 Skip Connection)

  • Seorim, Hwang;Joon, Byun;Junyeong, Heo;Jaebin, Cha;Youngcheol, Park
    • Journal of Broadcast Engineering
    • /
    • v.27 no.6
    • /
    • pp.840-847
    • /
    • 2022
  • In a deep neural network (DNN)-based speech enhancement, using global and local input speech information is closely related to model performance. Recently, a nested U-Net structure that utilizes global and local input data information using multi-scale has been proposed. This nested U-Net was also applied to speech enhancement and showed outstanding performance. However, a single skip connection used in nested U-Nets must be modified for the nested structure. In this paper, we propose a multi-level skip connection (MLS) to optimize the performance of the nested U-Net-based speech enhancement algorithm. As a result, the proposed MLS showed excellent performance improvement in various objective evaluation metrics compared to the standard skip connection, which means that the MLS can optimize the performance of the nested U-Net-based speech enhancement algorithm. In addition, the final proposed model showed superior performance compared to other DNN-based speech enhancement models.
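
A sketch of the multi-level skip idea in PyTorch, under stated assumptions: instead of passing only the same-level encoder feature to the decoder, features from several encoder levels are resized and fused. The 1x1-convolution fusion and all shapes are illustrative, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLevelSkip(nn.Module):
    """Fuse encoder features from several levels into one decoder skip."""

    def __init__(self, enc_channels, out_channels):
        super().__init__()
        self.fuse = nn.Conv2d(sum(enc_channels), out_channels, kernel_size=1)

    def forward(self, enc_feats, target_hw):
        # Align every encoder feature map to the decoder's spatial size.
        resized = [F.interpolate(f, size=target_hw, mode='nearest')
                   for f in enc_feats]
        return self.fuse(torch.cat(resized, dim=1))

skip = MultiLevelSkip(enc_channels=[16, 32, 64], out_channels=32)
feats = [torch.randn(1, c, 64 // 2**i, 64 // 2**i)
         for i, c in enumerate([16, 32, 64])]
out = skip(feats, target_hw=(64, 64))  # torch.Size([1, 32, 64, 64])
```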

Real-time Implementation of AMR-WB Speech Codec Using TeakLite DSP (TeakLite DSP를 이용한 적응형 다중 비트율 광대역 (AMR-WB) 음성부호화기의 실시간 구현)

  • 정희범;김경수;한민수;변경진
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.3
    • /
    • pp.262-267
    • /
    • 2004
  • AMR-WB (Adaptive Multi-Rate Wideband), the most recent speech codec standardized by 3GPP, covers the wider audio bandwidth of 50∼7000 Hz and operates at nine speech coding bit rates between 6.60 and 23.85 kbit/s. This paper presents a real-time implementation of the AMR-WB speech codec on a 16-bit fixed-point TeakLite DSP. The implemented AMR-WB codec requires a complexity of 52.2 MIPS in the 23.85 kbit/s mode, along with 17.9 kwords of program memory, 11.8 kwords of data RAM, and 10.1 kwords of data ROM. The implementation was verified by passing all the test vectors provided by 3GPP while maintaining bit exactness. Stable operation on the real-time test board was also demonstrated, with no distortion or delay in the audio input and output.
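
For reference, the nine AMR-WB coding modes referred to above are the standardized 3GPP rates; the lookup table below lists them in kbit/s, using the conventional 0-8 mode numbering.

```python
# The nine standardized AMR-WB speech coding modes, in kbit/s.
AMR_WB_MODES = {
    0: 6.60, 1: 8.85, 2: 12.65, 3: 14.25, 4: 15.85,
    5: 18.25, 6: 19.85, 7: 23.05, 8: 23.85,
}
```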