• Title/Summary/Keyword: Speech Feature Analysis


Construction of Customer Appeal Classification Model Based on Speech Recognition

  • Sheng Cao;Yaling Zhang;Shengping Yan;Xiaoxuan Qi;Yuling Li
    • Journal of Information Processing Systems
    • /
    • v.19 no.2
    • /
    • pp.258-266
    • /
    • 2023
  • To address the problems of poor customer satisfaction and low accuracy of customer classification, this paper proposes a customer classification model based on speech recognition. First, this paper analyzes the temporal characteristics of customer demand data, identifies the influencing factors of customer demand behavior, and determines the process of feature extraction from customer voice signals. Then, emotional association rules for customer demands are designed, and the classification model of customer demands is constructed through cluster analysis. Next, the Euclidean distance method is used to preprocess customer behavior data, and the fuzzy clustering characteristics of customer demands are obtained by the fuzzy clustering method. Finally, on the basis of the naive Bayes algorithm, a customer demand classification model based on speech recognition is completed. Experimental results show that the proposed method improves the accuracy of customer demand classification to more than 80% and customer satisfaction to more than 90%, solving the problems of poor customer satisfaction and low classification accuracy in existing methods and demonstrating practical application value.
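The final classification step the abstract describes, a naive Bayes model over extracted demand features, can be sketched as follows. This is a minimal Gaussian naive Bayes in plain Python; the feature vectors and class labels are hypothetical stand-ins, not the authors' data or implementation.

```python
import math
from collections import defaultdict

def fit_gaussian_nb(X, y):
    """Estimate per-class priors and per-feature Gaussian parameters."""
    groups = defaultdict(list)
    for xi, yi in zip(X, y):
        groups[yi].append(xi)
    model = {}
    n = len(X)
    for label, rows in groups.items():
        prior = len(rows) / n
        stats = []
        for j in range(len(rows[0])):
            col = [r[j] for r in rows]
            mu = sum(col) / len(col)
            var = sum((v - mu) ** 2 for v in col) / len(col) + 1e-9
            stats.append((mu, var))
        model[label] = (prior, stats)
    return model

def predict(model, x):
    """Pick the class with the highest log-posterior."""
    best, best_lp = None, float("-inf")
    for label, (prior, stats) in model.items():
        lp = math.log(prior)
        for xj, (mu, var) in zip(x, stats):
            lp += -0.5 * math.log(2 * math.pi * var) - (xj - mu) ** 2 / (2 * var)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical 2-D "demand feature" vectors for two customer classes.
X = [[1.0, 0.2], [1.2, 0.1], [0.9, 0.3], [3.0, 1.8], [3.2, 2.1], [2.8, 1.9]]
y = ["routine", "routine", "routine", "urgent", "urgent", "urgent"]
nb = fit_gaussian_nb(X, y)
```

In the paper's pipeline, the class labels would come from the fuzzy clustering stage rather than being given directly.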

Analysis and Implementation of Speech/Music Classification for 3GPP2 SMV Codec Based on Support Vector Machine (SMV코덱의 음성/음악 분류 성능 향상을 위한 Support Vector Machine의 적용)

  • Kim, Sang-Kyun;Chang, Joon-Hyuk
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.6
    • /
    • pp.142-147
    • /
    • 2008
  • In this paper, we propose a novel approach to improve the performance of speech/music classification for the selectable mode vocoder (SMV) of 3GPP2 using the support vector machine (SVM). The SVM makes it possible to construct an optimal hyperplane that separates the classes without error, where the distance between the closest vectors and the hyperplane is maximal. We first present an effective analysis of the features and the classification method adopted in the conventional SMV. Then, feature vectors applied to the SVM are selected from relevant parameters of the SMV for efficient speech/music classification. The performance of the proposed algorithm is evaluated under various conditions and yields better results than the conventional scheme of the SMV.
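As a rough illustration of the SVM idea the abstract invokes (a maximal-margin separating hyperplane), here is a minimal linear soft-margin SVM trained with Pegasos-style sub-gradient descent. The 2-D "feature vectors" are hypothetical stand-ins for SMV-derived parameters, not the paper's actual features.

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style sub-gradient descent for a linear soft-margin SVM.
    Labels y must be +1/-1; returns weights w and bias b."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    w, b, t = [0.0] * d, 0.0, 0
    for _ in range(epochs):
        for i in rng.sample(range(n), n):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            # Shrink w (regularisation); add the hinge-loss gradient if inside margin.
            w = [(1 - eta * lam) * wj for wj in w]
            if margin < 1:
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
                b += eta * y[i]
    return w, b

def classify(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Hypothetical frames: +1 = music-like features, -1 = speech-like features.
X = [[0.9, 0.1], [1.0, 0.2], [0.8, 0.0], [0.1, 0.9], [0.2, 1.0], [0.0, 0.8]]
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
```

A production system would use a kernel SVM library rather than this sketch.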

Analysis and Implementation of Speech/Music Classification for 3GPP2 SMV Based on GMM (3GPP2 SMV의 실시간 음성/음악 분류 성능 향상을 위한 Gaussian Mixture Model의 적용)

  • Song, Ji-Hyun;Lee, Kye-Hwan;Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea
    • /
    • v.26 no.8
    • /
    • pp.390-396
    • /
    • 2007
  • In this letter, we propose a novel approach to improve the performance of speech/music classification for the selectable mode vocoder (SMV) of 3GPP2 using the Gaussian mixture model (GMM), which is based on the expectation-maximization (EM) algorithm. We first present an effective analysis of the features and the classification method adopted in the conventional SMV. Then, feature vectors applied to the GMM are selected from relevant parameters of the SMV for efficient speech/music classification. The performance of the proposed algorithm is evaluated under various conditions and yields better results than the conventional scheme of the SMV.
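The EM-based GMM fitting the abstract refers to can be sketched in one dimension as follows. The data and the two-cluster setup are hypothetical; a real system would fit multivariate mixtures over the selected SMV parameters.

```python
import math

def em_gmm_1d(data, iters=100):
    """Fit a two-component 1-D Gaussian mixture via expectation-maximization."""
    mu = [min(data), max(data)]   # crude but effective initialisation
    var = [1.0, 1.0]
    w = [0.5, 0.5]

    def pdf(x, m, v):
        return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

    for _ in range(iters):
        # E-step: responsibility of each component for each data point.
        resp = []
        for x in data:
            p = [w[k] * pdf(x, mu[k], var[k]) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate mixture weights, means, and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk + 1e-6
    return w, mu, var

# Hypothetical 1-D feature: a "music-like" cluster near 0, a "speech-like" one near 5.
data = [0.1, -0.2, 0.3, 0.0, 0.2, 4.8, 5.1, 5.2, 4.9, 5.0]
w, mu, var = em_gmm_1d(data)
```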

A Study on Error Correction Using Phoneme Similarity in Post-Processing of Speech Recognition (음성인식 후처리에서 음소 유사율을 이용한 오류보정에 관한 연구)

  • Han, Dong-Jo;Choi, Ki-Ho
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.6 no.3
    • /
    • pp.77-86
    • /
    • 2007
  • Recently, systems based on a speech recognition interface, such as telematics terminals, are being developed. However, many errors still exist in speech recognition, so studies on error correction are being actively conducted. This paper proposes an error correction method for the post-processing stage of speech recognition based on the features of Korean phonemes. To support this algorithm, we used a phoneme similarity measure that considers the features of Korean phonemes. The phoneme similarity utilized in this paper is trained on mono-phoneme data and uses MFCC and LPC to extract features from each Korean phoneme. In addition, the phoneme similarity uses the Bhattacharyya distance measure to obtain the similarity between one phoneme and another. By using the phoneme similarity, errors in eojeols that cannot be morphologically analyzed can be corrected, and syllable recovery and morphological analysis are then performed again. The results of the experiment show improvements of 7.5% and 5.3% for MFCC and LPC, respectively.
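For two phonemes whose features are modelled as Gaussians, the Bhattacharyya distance the abstract mentions has a closed form, shown here for the univariate case. The example statistics are hypothetical illustrations, not values from the paper.

```python
import math

def bhattacharyya_gaussian(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two univariate Gaussians."""
    term_mean = 0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
    term_var = 0.5 * math.log((var1 + var2) / (2.0 * math.sqrt(var1 * var2)))
    return term_mean + term_var

def similarity(mu1, var1, mu2, var2):
    """Map distance to a (0, 1] similarity score via the Bhattacharyya coefficient."""
    return math.exp(-bhattacharyya_gaussian(mu1, var1, mu2, var2))

# Hypothetical per-phoneme statistics (mean, variance of one MFCC coefficient).
d_same = bhattacharyya_gaussian(1.0, 0.5, 1.0, 0.5)   # identical models
d_far = bhattacharyya_gaussian(1.0, 0.5, 4.0, 0.5)    # well-separated models
```

The distance is zero for identical distributions and symmetric in its arguments, which makes the derived similarity a natural confusability score between phoneme pairs.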


Efficient Emotion Classification Method Based on Multimodal Approach Using Limited Speech and Text Data (적은 양의 음성 및 텍스트 데이터를 활용한 멀티 모달 기반의 효율적인 감정 분류 기법)

  • Mirr Shin;Youhyun Shin
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.4
    • /
    • pp.174-180
    • /
    • 2024
  • In this paper, we explore an emotion classification method through multimodal learning utilizing the wav2vec 2.0 and KcELECTRA models. It is known that multimodal learning, which leverages both speech and text data, can significantly enhance emotion classification performance compared to methods that rely solely on speech data. Our study conducts a comparative analysis of BERT and its derivative models, known for their superior performance in natural language processing, to select the optimal model for extracting features from text data. The results confirm that the KcELECTRA model exhibits outstanding performance in emotion classification tasks. Furthermore, experiments using datasets made available by AI-Hub demonstrate that including text data achieves superior performance with less data than using speech data alone. The experiments show that the KcELECTRA model achieved the highest accuracy of 96.57%. This indicates that multimodal learning can offer meaningful performance improvements in complex natural language processing tasks such as emotion classification.
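One common way to combine a speech model and a text model, consistent with the multimodal setup described (though the paper's exact fusion scheme may differ), is late fusion of per-modality class probabilities. The logits below are hypothetical.

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def late_fusion(speech_logits, text_logits, w_text=0.5):
    """Weighted average of per-modality class probabilities."""
    p_s = softmax(speech_logits)
    p_t = softmax(text_logits)
    return [(1 - w_text) * a + w_text * b for a, b in zip(p_s, p_t)]

# Hypothetical 3-class (neutral/happy/sad) logits from the two models.
speech_logits = [0.2, 1.5, 0.1]   # e.g. from a wav2vec 2.0 classification head
text_logits = [0.1, 2.3, 0.4]     # e.g. from a KcELECTRA classification head
fused = late_fusion(speech_logits, text_logits)
pred = max(range(len(fused)), key=fused.__getitem__)
```

The weight `w_text` could be tuned on validation data to reflect how much each modality contributes.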

A Study on the Automatic Speech Control System Using DMS model on Real-Time Windows Environment (실시간 윈도우 환경에서 DMS모델을 이용한 자동 음성 제어 시스템에 관한 연구)

  • 이정기;남동선;양진우;김순협
    • The Journal of the Acoustical Society of Korea
    • /
    • v.19 no.3
    • /
    • pp.51-56
    • /
    • 2000
  • In this paper, we studied an automatic speech control system using voice recognition in a real-time Windows environment. The reference pattern applied is the variable DMS model, which is proposed to speed up execution, and the one-stage DP algorithm using this model is used for recognition. The recognition vocabulary set is composed of control command words frequently used in the Windows environment. In this paper, an automatic speech period detection algorithm for on-line voice processing in the Windows environment is implemented. The variable DMS model, which applies a variable number of sections according to the duration of the input signal, is proposed. Because unnecessary recognition target words are sometimes generated, the model is reconstructed on-line to handle this efficiently. The Perceptual Linear Predictive (PLP) analysis method, which generates feature vectors from the extracted voice features, is applied. According to the experimental results, recognition is faster in the proposed model because of its smaller computational load. The multi-speaker-independent and multi-speaker-dependent recognition rates are 99.08% and 99.39%, respectively; in a noisy environment, the recognition rate is 96.25%.
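The one-stage DP matching the abstract relies on is a dynamic-programming alignment between frame sequences. A closely related and simpler illustration is classic dynamic time warping, sketched here over scalar frame features; this is not the authors' exact algorithm.

```python
def dtw_distance(a, b):
    """Dynamic-time-warping distance between two 1-D frame sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch in a
                                 D[i][j - 1],      # stretch in b
                                 D[i - 1][j - 1])  # direct match
    return D[n][m]
```

The warping absorbs duration differences, so a sequence matches a time-stretched copy of itself at zero cost, which is exactly why DP alignment suits variable-duration speech.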


An Experimental Phonetic Analysis on Japanese Vowels of Japanese Natives (일본인 화자의 일본어 모음에 관한 실험음성학적 분석)

  • Lee Jae-Gang
    • MALSORI
    • /
    • no.33_34
    • /
    • pp.57-69
    • /
    • 1997
  • In this paper, I examine the aspects of formants based on LPC analysis. In this analysis, the five Japanese vowels (a, i, u, e, o) undergo two kinds of experiments: vowels in isolated forms, and vowels in carrier sentences. The analysis results for the Japanese vowels of native speakers show a peculiar feature: the Japanese vowels form respective vowel groups, and each Japanese vowel makes a statistically significant difference. In the F1 analysis of the vowels grouped by the informant's sex, the Japanese vowel (a) shows the greatest standard deviation regardless of the informant's sex. In the F2 analysis of the Japanese vowels, each vowel has a statistically significant difference. The fact that the males' [u] shows great standard deviation means that there is a great difference in the frontness of the tongue among Japanese males when articulating [u]. Isolated vowels and carried vowels show statistically little difference between F1 and F2 frequency values. In another contrastive analysis between the isolated vowel group and the carried vowel group, whether a vowel is articulated in isolation or in a sentence appears to have little effect on its formant frequency.
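The per-vowel formant statistics discussed (means and standard deviations of F1/F2) can be computed as in this sketch; the frequency values are hypothetical illustrations, not the paper's measurements.

```python
import statistics

# Hypothetical F1/F2 measurements (Hz) for two Japanese vowels, several speakers.
formants = {
    "a": {"F1": [780, 810, 760, 800], "F2": [1300, 1250, 1350, 1280]},
    "i": {"F1": [300, 280, 310, 290], "F2": [2250, 2300, 2200, 2280]},
}

def formant_summary(formants):
    """Per-vowel (mean, standard deviation) for each formant track."""
    out = {}
    for vowel, tracks in formants.items():
        out[vowel] = {f: (statistics.mean(v), statistics.stdev(v))
                      for f, v in tracks.items()}
    return out

summary = formant_summary(formants)
```

Low F1 with high F2 marks a high front vowel like [i], while high F1 marks an open vowel like [a], so these summaries map directly onto the vowel-group separation the study reports.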


Voice Activity Detection in Noisy Environment based on Statistical Nonlinear Dimension Reduction Techniques (통계적 비선형 차원축소기법에 기반한 잡음 환경에서의 음성구간검출)

  • Han Hag-Yong;Lee Kwang-Seok;Go Si-Yong;Hur Kang-In
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.9 no.5
    • /
    • pp.986-994
    • /
    • 2005
  • This paper proposes a likelihood-based nonlinear dimension reduction method for speech feature parameters in order to construct a voice activity detector adaptable to noisy environments. The proposed method uses the nonlinear values of the Gaussian probability density function with new parameters for the speech/nonspeech classes. We adopted the likelihood ratio test to find speech segments and compared its performance with that of the linear discriminant analysis technique. In experiments, we found that the proposed method yields results similar to those of Gaussian mixture models.
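The likelihood ratio test used for speech detection can be sketched per frame as follows, with single-Gaussian speech/nonspeech models over one scalar feature. The model parameters and frame values are hypothetical, not the paper's.

```python
import math

def log_likelihood(x, mu, var):
    """Log-density of x under a univariate Gaussian."""
    return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)

def vad_llr(frames, speech=(5.0, 2.0), nonspeech=(0.5, 0.5), threshold=0.0):
    """Mark a frame as speech (True) when the log-likelihood ratio
    of the speech model over the nonspeech model exceeds the threshold."""
    decisions = []
    for x in frames:
        llr = log_likelihood(x, *speech) - log_likelihood(x, *nonspeech)
        decisions.append(llr > threshold)
    return decisions

# Hypothetical frame energies: low values = silence, high values = speech.
frames = [0.3, 0.6, 4.8, 5.2, 0.4]
decisions = vad_llr(frames)
```

In the paper's setting, the scalar feature would be the output of the proposed nonlinear dimension reduction rather than raw energy.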

Intensified Sentiment Analysis of Customer Product Reviews Using Acoustic and Textual Features

  • Govindaraj, Sureshkumar;Gopalakrishnan, Kumaravelan
    • ETRI Journal
    • /
    • v.38 no.3
    • /
    • pp.494-501
    • /
    • 2016
  • Sentiment analysis incorporates natural language processing and artificial intelligence and has evolved as an important research area. Sentiment analysis on product reviews has been used in widespread applications to improve customer retention and business processes. In this paper, we propose a method for performing an intensified sentiment analysis on customer product reviews. The method involves the extraction of two feature sets from each of the given customer product reviews, a set of acoustic features (representing emotions) and a set of lexical features (representing sentiments). These sets are then combined and used in a supervised classifier to predict the sentiments of customers. We use an audio speech dataset prepared from Amazon product reviews and downloaded from the YouTube portal for the purposes of our experimental evaluations.
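The combination step described, merging the acoustic and lexical feature sets before a supervised classifier, amounts to feature-level concatenation. The feature names in the comments are hypothetical examples.

```python
def early_fusion(acoustic, lexical):
    """Concatenate acoustic (emotion) and lexical (sentiment) feature vectors
    into a single input for a downstream supervised classifier."""
    return list(acoustic) + list(lexical)

# Hypothetical features: prosody statistics plus bag-of-words sentiment scores.
acoustic = [0.62, 0.11]          # e.g. mean pitch, energy variance
lexical = [0.9, 0.05, 0.05]      # e.g. positive/neutral/negative term ratios
x = early_fusion(acoustic, lexical)
```

Any standard classifier can then be trained on the fused vectors to predict the customer's sentiment label.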

A Systematic Review on Voice Characteristics and Risk Factors of Voice Disorder of Korea Teachers (우리나라 교사의 음성 특성과 음성장애 위험 요인에 관한 체계적 문헌고찰)

  • Cha, Seulki;Byeon, Haewon
    • Journal of the Korea Convergence Society
    • /
    • v.9 no.8
    • /
    • pp.149-154
    • /
    • 2018
  • As the range of professional voice users expands, interest in voice increases as well. Especially since teachers constitute an occupational group exposed to a high risk of voice disorders, it is necessary to identify the causes of their speech problems and speech disorders. The purpose of this study is to analyze the voice characteristics of teachers and to investigate the causes of their voice disorders. From 2000 to 2018, 414 studies were found under a combined set of search words ('profession', 'Teacher', 'Professional Voice User', 'Voice', 'Voice disorders', 'Risk'), and 8 of them were selected as the final subjects of the focused analysis. The qualitative evaluation was carried out by modifying a quality checklist for assessing the risk of bias. The study confirmed that voice misuse frequently occurred among teachers when they used their voices, and that this was affected by the environment. These results suggest that improving the environment that leads to teachers' vocal abuse and providing consistent voice education are necessary.