• Title/Summary/Keyword: 음성 훈련 (voice training)

Search Results: 280

Speech Recognition Using Linear Discriminant Analysis and Common Vector Extraction (선형 판별분석과 공통벡터 추출방법을 이용한 음성인식)

  • 남명우;노승용
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.4
    • /
    • pp.35-41
    • /
    • 2001
  • This paper describes Linear Discriminant Analysis and common vector extraction for speech recognition. A voice signal contains the psychological and physiological properties of the speaker as well as dialect differences, acoustical environment effects, and phase differences. For these reasons, the same word uttered by different speakers can sound very different, which makes it difficult to extract properties common to a speech class (word or phoneme). Linear-algebraic methods such as the KLT (Karhunen-Loeve Transformation) are generally used to extract common properties from speech signals, but this paper uses the common vector extraction method suggested by M. Bilginer et al. That method extracts an optimized common vector from the speech signals used for training, and it achieves 100% recognition accuracy on the training data used for common vector extraction. Despite these characteristics, the method has some drawbacks: the number of speech signals usable for training is limited, and no discriminant information among common vectors is defined. This paper suggests an improved method that reduces the error rate by maximizing the discriminant information among common vectors, and it also adds a novel method to normalize the size of the common vector. The results show improved performance of the algorithm and a recognition accuracy 2% better than the conventional method.

  • PDF
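As a sketch of the common-vector idea described in the abstract above, the class common vector can be taken as the component of any training vector orthogonal to the subspace spanned by the differences of the class's training vectors. The fragment below is a minimal illustration with hypothetical data, not the authors' implementation:

```python
import numpy as np

def common_vector(X):
    """Common vector of one class from training vectors X of shape (n, d), n <= d."""
    # Difference subspace spanned by x_i - x_1
    diffs = X[1:] - X[0]                  # (n-1, d)
    # Orthonormal basis of the difference subspace
    Q, _ = np.linalg.qr(diffs.T)          # (d, n-1)
    # Project a training vector onto the orthogonal complement;
    # the result is identical for every vector in X
    x = X[0]
    return x - Q @ (Q.T @ x)
```

Because every training vector differs from the others only by an element of the difference subspace, projecting any of them onto its orthogonal complement yields the same vector, which is why the method classifies its own training data perfectly.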

On The Voice Training of Stage Speech in Acting Education - Yuri Vasiliev's Stage Speech Training Method - (연기 교육에서 무대 언어의 발성 훈련에 관하여 - 유리 바실리예프의 무대 언어 훈련방법 -)

  • Xu, Cheng-Kang
    • Journal of Korea Entertainment Industry Association
    • /
    • v.15 no.3
    • /
    • pp.203-210
    • /
    • 2021
  • Yuri Vasilyev is an actor, director, and drama teacher: a Russian Meritorious Artist, recipient of the "Medal of Friendship" awarded by Russian President Vladimir Putin, academician of the Petrovsky Academy of Sciences and Arts in Russia, professor of the Russian National Academy of Performing Arts, and professor of the Bavarian Academy of Drama in Munich, Germany. His physiological sense-stimulation method is based on improving the voice, language, and motor functions of drama actors. On the basis of a systematic understanding of performing arts, Yuri Vasilyev created a unique training method for speech expression and skills. From the complicated art of training, he singles out the most critical skills for focused practice, which we call basic skills training. Throughout the whole training process, Professor Yuri made a clear demand of the actor's lines: "Action! This is the basis of the actor's creation." So action is the key: action and voice are closely linked. The actor's voice is the human voice, human life, human feeling, human experience and disaster; acquiring one's own voice is also the foundation of the actor's creation. What is trained is pronunciation, breathing, tone and intonation, speed and rhythm, expressiveness, sincerity, stage voice and movement, and gesture, all used to train the actor's voice to the standard of drama. In short, Professor Yuri's training course is not only training in stage performance and skills but also contains a rich view of drama and performance. In addition to learning from his means and methods of training, it is more important for us to understand the starting point and the training objectives behind Professor Yuri's exercises.

A Phase-related Feature Extraction Method for Robust Speaker Verification (열악한 환경에 강인한 화자인증을 위한 위상 기반 특징 추출 기법)

  • Kwon, Chul-Hong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.14 no.3
    • /
    • pp.613-620
    • /
    • 2010
  • Additive noise and channel distortion strongly degrade the performance of speaker verification systems, as they introduce distortions into the speech features. This distortion causes a mismatch between the training and recognition conditions, such that acoustic models trained with clean speech do not model noisy and channel-distorted speech accurately. This paper presents a phase-related feature extraction method to improve the robustness of speaker verification systems. The instantaneous frequency is computed from the phase of the speech signal, and features are obtained from the histogram of the instantaneous frequency. Experimental results show that the proposed technique offers significant improvements over the standard techniques in both clean and adverse testing environments.
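The phase-based features above come from the instantaneous frequency of the speech signal. A minimal sketch of computing it via the analytic signal (an FFT-based Hilbert transform) and binning it into a histogram might look like this; the window and bin choices are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def instantaneous_frequency(x, fs):
    """Instantaneous frequency (Hz) from the phase of the analytic signal."""
    n = len(x)
    X = np.fft.fft(x)
    # One-sided spectrum weighting to build the analytic signal
    h = np.zeros(n)
    h[0] = 1
    h[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    analytic = np.fft.ifft(X * h)
    phase = np.unwrap(np.angle(analytic))
    # Derivative of phase, converted from rad/sample to Hz
    return np.diff(phase) * fs / (2 * np.pi)

def if_histogram_features(x, fs, bins=20, fmax=None):
    """Histogram-of-instantaneous-frequency feature vector."""
    f = instantaneous_frequency(x, fs)
    fmax = fmax or fs / 2
    hist, _ = np.histogram(f, bins=bins, range=(0, fmax), density=True)
    return hist
```

For a pure tone, the instantaneous frequency is flat at the tone's frequency, so its histogram mass concentrates in a single bin; noise and channel effects spread that distribution, which is what the histogram features capture.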

A Study on the prosody generation of artificial neural networks (인공신경망의 운률 발생에 관한 연구)

  • 신동엽;민경중;강찬구;임운천
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • spring
    • /
    • pp.87-90
    • /
    • 2000
  • To increase the naturalness of a text-to-speech synthesizer, the prosodic rules present in natural speech must be realized accurately. In general, prosodic rules derived from linguistic information, or from prosodic information extracted from natural speech, are used in synthesis. If the rules obtained in this way covered every prosodic rule present in natural speech, natural-sounding synthetic speech could be produced; in practice, however, implementing every rule is difficult, and when a rule extracted from natural speech is realized incorrectly, a loss of naturalness in the synthesized speech is unavoidable. With this in mind, we propose an artificial neural network that can learn, through training, the prosodic rules inherent in natural speech. The three elements of prosody are pitch, duration, and amplitude variation; the network is designed so that, given an input sentence, it learns the pitch variation and amplitude variation as a function of each phoneme's duration. To train the network, a speaker recorded a set of isolated words and phonetically balanced sentences, which were analyzed to build a prosody database. For each phoneme of the natural speech, the duration, pitch variation, and amplitude variation were obtained, and coefficients for each variation curve were computed by a curve-fitting method and stored in the database. Evaluating the network trained on this database, we observed that continually expanding the training data yields more natural prosody.

  • PDF
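The curve-fitting step described above (fitting each phoneme's pitch or energy trajectory and storing the coefficients in a database) can be sketched as follows; the polynomial order and helper names are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

def contour_coeffs(times, values, order=2):
    # Fit a low-order polynomial to one phoneme's pitch (or energy) contour;
    # only the coefficients need to be stored in the prosody database
    return np.polyfit(times, values, order)

def contour_reconstruct(coeffs, times):
    # Regenerate the contour from its stored coefficients at synthesis time
    return np.polyval(coeffs, times)
```

A neural network can then be trained to predict these few coefficients per phoneme instead of the full frame-by-frame contour, which is what makes the database compact.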

Enhancing Multimodal Emotion Recognition in Speech and Text with Integrated CNN, LSTM, and BERT Models (통합 CNN, LSTM, 및 BERT 모델 기반의 음성 및 텍스트 다중 모달 감정 인식 연구)

  • Edward Dwijayanto Cahyadi;Hans Nathaniel Hadi Soesilo;Mi-Hwa Song
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.1
    • /
    • pp.617-623
    • /
    • 2024
  • Identifying emotions through speech poses a significant challenge due to the complex relationship between language and emotion. Our paper takes on this challenge by employing feature engineering to identify emotions in speech through a multimodal classification task involving both speech and text data. We evaluated two classifiers, Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM), each integrated with a BERT-based pre-trained model. Our assessment covers various performance metrics (accuracy, F-score, precision, and recall) across different experimental setups. The findings highlight the proficiency of both models in accurately discerning emotions from both text and speech data.
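The metrics named in the abstract (accuracy, precision, recall, F-score) follow directly from the confusion counts. This is a generic sketch for a single positive class, not the paper's evaluation code, and it assumes at least one positive prediction and one positive label exist:

```python
import numpy as np

def prf(y_true, y_pred, positive=1):
    """Accuracy, precision, recall, and F1 for one positive class."""
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = np.mean(y_true == y_pred)
    return accuracy, precision, recall, f1
```

For multi-class emotion labels these per-class values are typically macro-averaged.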

Performance Improvement of Speech Recognition based on Stereo Data with Dimensionally Weighted Bias Compensation (스테레오 데이터에 기반한 차원별 가중 보상에 의한 음성 인식 성능 향상)

  • Kim Jong Hyeon;Song Hwa Jeon;Kim Hyung Soon
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • autumn
    • /
    • pp.139-142
    • /
    • 2004
  • Environmental mismatch between training and recognition conditions, caused by ambient noise and channel characteristics, sharply degrades speech recognition performance. Various front-end preprocessing methods have been proposed to overcome this mismatch; recently the SPLICE method, which obtains compensation vectors using stereo data and a Gaussian Mixture Model (GMM) of noisy speech, has shown good performance. However, the estimated compensation vectors that correct the feature vector dimension by dimension tend to be underestimated, and the degree of underestimation was observed to differ across dimensions. In this paper, based on the SPLICE method, we examine the relationship between the estimated and the true compensation vectors and propose a dimensionally weighted compensation method that applies a different weight to each dimension. On the Aurora2 clean-condition task, the proposed method achieved a 68% relative improvement in recognition accuracy over the baseline result.

  • PDF
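A minimal sketch of the SPLICE-style compensation described above, with an added per-dimension weight on the posterior-weighted bias, might look like this; all model parameters here are hypothetical placeholders, not values from the paper:

```python
import numpy as np

def gmm_posteriors(y, means, variances, priors):
    """Posterior p(k|y) under a diagonal-covariance GMM of noisy speech."""
    ll = -0.5 * np.sum((y - means) ** 2 / variances
                       + np.log(2 * np.pi * variances), axis=1)
    ll += np.log(priors)
    p = np.exp(ll - ll.max())          # stabilized softmax over components
    return p / p.sum()

def splice_weighted(y, means, variances, priors, biases, dim_weights):
    """x_hat = y + w * (posterior-weighted bias), element-wise per dimension."""
    p = gmm_posteriors(y, means, variances, priors)
    r = p @ biases                     # (K, d) biases -> (d,) correction
    return y + dim_weights * r
```

In standard SPLICE the weights are implicitly all ones; the paper's contribution is choosing `dim_weights` per dimension to counter the observed underestimation.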

An Implementation of the Baseline Recognizer Using the Segmental K-means Algorithm for the Noisy Speech Recognition Using the Aurora DB (Aurora DB를 이용한 잡음 음성 인식실험을 위한 Segmental K-means 훈련 방식의 기반인식기의 구현)

  • Kim Hee-Keun;Chung Young-Joo
    • MALSORI
    • /
    • no.57
    • /
    • pp.113-122
    • /
    • 2006
  • Recently, many studies have been done on speech recognition in noisy environments. In particular, the Aurora DB has been built as a common database for comparing various feature extraction schemes. In general, however, the recognition models as well as the features have to be modified for effective noisy speech recognition. As the structure of the HTK is very complex, it is not easy to modify the recognition engine. In this paper, we implemented a baseline recognizer based on the segmental K-means algorithm whose performance is comparable to that of the HTK in spite of its simple implementation.

  • PDF
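The segmental K-means (Viterbi-style) training loop behind such a baseline recognizer alternates between segmenting frames with the current state models and re-estimating the models from their assigned frames. The toy version below, for a single left-to-right model with Euclidean frame costs, is an illustrative assumption, not the paper's recognizer:

```python
import numpy as np

def best_segmentation(frames, means):
    """DP alignment of frames to states 0..K-1, left-to-right, no skips."""
    T, K = len(frames), len(means)
    cost = ((frames[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    D = np.full((T, K), np.inf)
    back = np.zeros((T, K), dtype=int)
    D[0, 0] = cost[0, 0]
    for t in range(1, T):
        for k in range(K):
            stay = D[t - 1, k]
            move = D[t - 1, k - 1] if k > 0 else np.inf
            back[t, k] = k if stay <= move else k - 1
            D[t, k] = min(stay, move) + cost[t, k]
    states = [K - 1]                       # backtrack from the final state
    for t in range(T - 1, 0, -1):
        states.append(back[t, states[-1]])
    return np.array(states[::-1])

def segmental_kmeans(frames, K, iters=5):
    """Alternate Viterbi segmentation and mean re-estimation."""
    T = len(frames)
    bounds = np.linspace(0, T, K + 1).astype(int)       # uniform init
    states = np.repeat(np.arange(K), np.diff(bounds))
    for _ in range(iters):
        means = np.array([frames[states == k].mean(axis=0) for k in range(K)])
        states = best_segmentation(frames, means)
    means = np.array([frames[states == k].mean(axis=0) for k in range(K)])
    return states, means
```

A full recognizer would use Gaussian log-likelihoods and transition probabilities in place of the squared-distance cost, but the alternation is the same.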

Recognition of Emotional states in Speech using Hidden Markov Model (HMM을 이용한 음성에서의 감정인식)

  • Kim, Sung-Ill;Lee, Sang-Hoon;Shin, Wee-Jae;Park, Nam-Chun
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2004.10a
    • /
    • pp.560-563
    • /
    • 2004
  • This paper describes a new approach to recognizing human emotional states such as anger, happiness, neutrality, sadness, and surprise. The approach uses continuous hidden Markov models (HMMs) incorporating discrete duration. First, emotional feature parameters are defined from the input speech signal. In this study, prosodic parameters such as the pitch signal, energy, and their respective derivatives are used, and training is carried out with HMMs. In addition, for speaker adaptation, emotion models based on maximum a posteriori (MAP) estimation are employed. Experimental results show that the emotion recognition rate for speech increases gradually as the number of adaptation samples increases.

  • PDF
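The prosodic parameters above include the derivatives (delta coefficients) of pitch and energy. A standard regression-based delta computation, sketched here as a generic illustration rather than the authors' code, is:

```python
import numpy as np

def deltas(feat, win=2):
    """Regression-based delta coefficients over a +/- win frame window.

    feat: (T, d) array of static features (e.g. pitch and energy per frame).
    """
    T = len(feat)
    padded = np.pad(feat, ((win, win), (0, 0)), mode='edge')
    # Weighted differences of frames n steps ahead and behind
    num = sum(n * (padded[win + n:win + n + T] - padded[win - n:win - n + T])
              for n in range(1, win + 1))
    den = 2 * sum(n * n for n in range(1, win + 1))
    return num / den
```

For a feature that changes linearly over time, the delta equals the slope, which is the sanity check usually applied to this formula.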

Recognition experiment of Korean connected digit telephone speech using the temporal filter based on training speech data (훈련데이터 기반의 temporal filter를 적용한 한국어 4연숫자 전화음성의 인식실험)

  • Jung Sung Yun;Kim Min Sung;Son Jong Mok;Bae Keun Sung;Kang Jeom Ja
    • Proceedings of the KSPS conference
    • /
    • 2003.10a
    • /
    • pp.149-152
    • /
    • 2003
  • In this paper, data-driven temporal filter methods [1] are investigated for robust feature extraction. A principal component analysis technique is applied to the time trajectories of the feature sequences of the training speech data to obtain appropriate temporal filters. We carried out recognition experiments with the data-driven temporal filters on the Korean connected digit telephone speech database released by SITEC. Experimental results and our findings are discussed.

  • PDF

Recognition of Korean Connected Digit Telephone Speech Using the Training Data Based Temporal Filter (훈련데이터 기반의 temporal filter를 적용한 4연숫자 전화음성 인식)

  • Jung, Sung-Yun;Bae, Keun-Sung
    • MALSORI
    • /
    • no.53
    • /
    • pp.93-102
    • /
    • 2005
  • The performance of a speech recognition system is generally degraded in telephone environment because of distortions caused by background noise and various channel characteristics. In this paper, data-driven temporal filters are investigated to improve the performance of a specific recognition task such as telephone speech. Three different temporal filtering methods are presented with recognition results for Korean connected-digit telephone speech. Filter coefficients are derived from the cepstral domain feature vectors using the principal component analysis. According to experimental results, the proposed temporal filtering method has shown slightly better performance than the previous ones.

  • PDF
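The data-driven temporal filter design in these two papers applies PCA to the time trajectories of cepstral features, taking a leading principal component as an FIR filter. The sketch below illustrates that idea; the window length and the application via convolution are assumptions for illustration, not the papers' exact configuration:

```python
import numpy as np

def derive_temporal_filter(trajectories, L=11):
    """Leading principal component of length-L windows of cepstral trajectories."""
    # Stack sliding windows of each training trajectory of one cepstral dimension
    wins = [tr[t:t + L] for tr in trajectories for t in range(len(tr) - L + 1)]
    W = np.array(wins)
    W = W - W.mean(axis=0)
    # The leading right singular vector serves as the FIR temporal filter
    _, _, Vt = np.linalg.svd(W, full_matrices=False)
    return Vt[0]

def apply_temporal_filter(traj, h):
    # Filter a cepstral trajectory with the derived FIR coefficients
    return np.convolve(traj, h, mode='same')
```

Because the filter is learned from the training trajectories themselves, it emphasizes the modulation components that actually occur in the task's speech, rather than a fixed hand-designed passband.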