• Title/Summary/Keyword: Speech Feature Analysis


Korean native speakers' perceptive aspects on Korean wh & yes-no questions produced by Chinese Korean learners (중국인학습자들의 한국어 의문사의문문과 부정사의문문에 대한 한국어원어민 화자의 지각양상)

  • Yune, YoungSook
    • Phonetics and Speech Sciences
    • /
    • v.6 no.4
    • /
    • pp.37-45
    • /
    • 2014
  • Korean wh-questions and yes-no questions have morphologically identical structures. In speech, however, the two types of questions are distinguished by prosodic differences. In this study, we examined whether Korean native speakers can distinguish wh-questions from yes-no questions produced by Chinese learners of Korean on the basis of the prosodic information contained in the sentences. For this purpose, we performed a perception analysis in which 15 Korean native speakers participated. The results show that the two types of interrogative sentences produced by the Chinese learners were not marked by consistent pitch contours, revealing that these learners cannot yet match prosodic meaning with prosodic form. The most salient prosodic feature used perceptually by native speakers to discriminate the two types of interrogative sentences is the pitch difference between the F0 peak of the wh-word and the boundary tone.
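
The acoustic measure mentioned above (F0 peak of the wh-word versus the boundary tone) can be illustrated with a minimal sketch; the praat-parselmouth usage, the word-interval arguments, and the 0.15 s boundary window below are assumptions for illustration, not details from the paper:

```python
# Illustrative only: F0 peak of the wh-word region minus the mean F0 of the
# sentence-final boundary region, using praat-parselmouth.
import numpy as np
import parselmouth

def f0_peak_minus_boundary(wav_path, wh_start, wh_end, boundary_window=0.15):
    snd = parselmouth.Sound(wav_path)
    pitch = snd.to_pitch()                      # default time step and pitch range
    times = pitch.xs()
    f0 = pitch.selected_array['frequency']      # 0 Hz where unvoiced
    voiced = f0 > 0

    wh_mask = voiced & (times >= wh_start) & (times <= wh_end)
    boundary_mask = voiced & (times >= snd.xmax - boundary_window)
    if not wh_mask.any() or not boundary_mask.any():
        return np.nan                           # no voiced frames in one of the regions

    return f0[wh_mask].max() - f0[boundary_mask].mean()
```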

Emotion Recognition Based on Frequency Analysis of Speech Signal

  • Sim, Kwee-Bo;Park, Chang-Hyun;Lee, Dong-Wook;Joo, Young-Hoon
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.2 no.2
    • /
    • pp.122-126
    • /
    • 2002
  • In this study, we identify features of three emotions (happiness, anger, and surprise) as fundamental research toward emotion recognition. Emotional speech carries several cues, namely voice quality, pitch, formants, speech rate, and so on. Until now, most researchers have used changes in pitch, the short-time average power envelope, or mel-based speech power coefficients. Pitch is an efficient and informative feature, so we use it in this study as well. Because pitch is sensitive even to subtle emotions, it changes readily with the speaker's emotional state; we can therefore observe whether the pitch contour changes steeply, changes with a gentle slope, or does not change at all. This paper also extracts formant features from emotional speech. For each vowel, the formants occupy similar positions across emotions, without large differences. Based on this observation, in the happiness case we extract features of laughter and use them to separate laughing segments, and we likewise derive features for anger and surprise.
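
As an illustration only, the following sketch extracts the kind of pitch-contour and formant cues the abstract describes (pitch statistics, a crude steep/gentle/flat slope label, and F1/F2). It relies on praat-parselmouth, and the slope thresholds are arbitrary assumptions rather than values from the paper:

```python
# Illustrative sketch: pitch statistics with a crude contour label, plus F1/F2 at
# the utterance midpoint. The 20 and 80 Hz/s thresholds are arbitrary assumptions.
import numpy as np
import parselmouth
from parselmouth.praat import call

def pitch_and_formant_features(wav_path):
    snd = parselmouth.Sound(wav_path)
    pitch = snd.to_pitch()
    f0 = pitch.selected_array['frequency']
    t = pitch.xs()
    voiced = f0 > 0
    if voiced.sum() < 2:
        return None
    f0_v, t_v = f0[voiced], t[voiced]

    slope = np.polyfit(t_v, f0_v, 1)[0]          # Hz per second, linear fit
    if abs(slope) < 20:
        contour = "flat"
    elif abs(slope) < 80:
        contour = "gentle"
    else:
        contour = "steep"

    formant = snd.to_formant_burg()
    mid = snd.duration / 2
    f1 = call(formant, "Get value at time", 1, mid, "Hertz", "Linear")
    f2 = call(formant, "Get value at time", 2, mid, "Hertz", "Linear")

    return {"f0_mean": f0_v.mean(), "f0_range": f0_v.max() - f0_v.min(),
            "f0_slope": slope, "contour": contour, "F1": f1, "F2": f2}
```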

Automatic pronunciation assessment of English produced by Korean learners using articulatory features (조음자질을 이용한 한국인 학습자의 영어 발화 자동 발음 평가)

  • Ryu, Hyuksu;Chung, Minhwa
    • Phonetics and Speech Sciences
    • /
    • v.8 no.4
    • /
    • pp.103-113
    • /
    • 2016
  • This paper proposes articulatory features as novel predictors for the automatic pronunciation assessment of English produced by Korean learners. Based on distinctive feature theory, in which phonemes are represented as sets of articulatory/phonetic properties, we propose articulatory Goodness-Of-Pronunciation (aGOP) features for the corresponding articulatory attributes, such as nasal, sonorant, and anterior. An English speech corpus spoken by Korean learners is used for assessment modeling. In our system, learners' speech is force-aligned and recognized using acoustic and pronunciation models derived from the WSJ corpus (native North American speech) and the CMU pronouncing dictionary, respectively. To compute the aGOP features, articulatory models are trained for the corresponding articulatory attributes. In addition to the proposed features, various features divided into four categories (RATE, SEGMENT, SILENCE, and GOP) are applied as a baseline. To enhance the assessment modeling performance and investigate the weights of the salient features, relevant features are extracted using Best Subset Selection (BSS). The results show that the proposed model using aGOP features outperforms the baseline. In addition, analysis of the features selected by BSS reveals that the chosen aGOP features capture the salient pronunciation variations of Korean learners of English. The results are expected to be useful for automatic pronunciation error detection as well.
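
For orientation, the classic Goodness-Of-Pronunciation score averages, over the force-aligned frames of a segment, the log-likelihood of the canonical unit minus the log-likelihood of the best competing unit; the paper's aGOP applies this idea to articulatory attribute models. A minimal numpy sketch of that generic formulation (not the authors' implementation) follows:

```python
# Generic GOP-style score over a force-aligned segment (my formulation, not the
# paper's aGOP code). frame_log_likelihoods: (T, Q) log-likelihoods for all
# candidate units q (phones, or articulatory attribute models for aGOP).
import numpy as np

def gop(frame_log_likelihoods, canonical, start, end):
    seg = frame_log_likelihoods[start:end]
    # Mean over frames of: log p(o_t | canonical) - log max_q p(o_t | q).
    return float(np.mean(seg[:, canonical] - seg.max(axis=1)))
```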

Utilization of Phase Information for Speech Recognition (음성 인식에서 위상 정보의 활용)

  • Lee, Chang-Young
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.10 no.9
    • /
    • pp.993-1000
    • /
    • 2015
  • Mel-Frequency Cepstral Coefficients (MFCC) are among the most widely used feature vectors for speech signal processing. An evident drawback of MFCC is that the phase information is lost by taking only the magnitude of the Fourier transform. In this paper, we consider a method of utilizing the phase information by treating the magnitudes of the real and imaginary components of the FFT separately. Applying this method to speech recognition with FVQ/HMM, the speech recognition error rate is found to decrease compared with conventional MFCC. By numerical analysis, we also show that the optimal number of feature components is 12, obtained by taking 6 components each from the real and imaginary parts of the FFT.
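
A minimal sketch of the idea as described, computing MFCC-like coefficients separately from the magnitudes of the real and imaginary FFT parts (6 each, 12 in total); the librosa-based pipeline and the frame parameters are assumptions, not the paper's code:

```python
# Sketch: MFCC-like coefficients computed separately from |Re(FFT)| and |Im(FFT)|,
# 6 coefficients each, concatenated into a 12-dimensional feature per frame.
import numpy as np
import librosa
from scipy.fft import dct

def real_imag_mfcc(y, sr, n_fft=512, hop=160, n_mels=26, n_ceps=6):
    spec = librosa.stft(y, n_fft=n_fft, hop_length=hop)      # complex spectrogram
    mel_fb = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=n_mels)

    feats = []
    for part in (np.abs(spec.real), np.abs(spec.imag)):       # treat |Re| and |Im| separately
        logmel = np.log(mel_fb @ part + 1e-10)                # mel filterbank, then log
        feats.append(dct(logmel, type=2, axis=0, norm='ortho')[:n_ceps])
    return np.vstack(feats)                                   # shape (12, n_frames)
```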

A Study on Extracting Valid Speech Sounds by the Discrete Wavelet Transform (이산 웨이브렛 변환을 이용한 유효 음성 추출에 관한 연구)

  • Kim, Jin-Ok;Hwang, Dae-Jun;Baek, Han-Uk;Jeong, Jin-Hyeon
    • The KIPS Transactions:PartB
    • /
    • v.9B no.2
    • /
    • pp.231-236
    • /
    • 2002
  • The classification of speech-sound blocks exploits the multi-resolution analysis property of the discrete wavelet transform, which is used to reduce the computational time of the pre-processing stage of speech recognition. A merging algorithm is proposed to extract valid speech sounds in terms of their position and frequency range; it performs voiced/unvoiced classification and denoising. Since the merging algorithm determines its processing parameters from the voice signal alone and is independent of system noise, it is useful for extracting valid speech sounds. The algorithm adapts to arbitrary system noise, yields a good denoised signal-to-noise ratio, and allows convenient tuning for system implementation.
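
To illustrate the multi-resolution idea, here is a rough sketch that uses DWT sub-band energies as a frame-level silence/voiced/unvoiced indicator; the wavelet choice, threshold, and decision rule are assumptions and not the paper's merging algorithm:

```python
# Rough sketch: DWT sub-band energies as a cheap per-frame speech-sound indicator.
import numpy as np
import pywt

def subband_energies(frame, wavelet='db4', level=3):
    coeffs = pywt.wavedec(frame, wavelet, level=level)   # [approx, detail_n, ..., detail_1]
    return np.array([np.sum(c ** 2) for c in coeffs])

def classify_frame(frame, silence_thresh=1e-4):
    e = subband_energies(frame)
    total = e.sum()
    if total < silence_thresh:
        return "silence"
    # Voiced speech concentrates energy in the low-frequency (approximation) band.
    return "voiced" if e[0] / total > 0.5 else "unvoiced"
```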

Implementation of HMM-Based Speech Recognizer Using TMS320C6711 DSP

  • Bae Hyojoon;Jung Sungyun;Son Jongmok;Kwon Hongseok;Kim Siho;Bae Keunsung
    • Proceedings of the IEEK Conference
    • /
    • summer
    • /
    • pp.391-394
    • /
    • 2004
  • This paper focuses on the DSP implementation of an HMM-based speech recognizer that can handle a vocabulary of several hundred words as well as speaker-independent operation. First, we develop an HMM-based speech recognition system on the PC that operates on a frame basis, with parallel processing of feature extraction and Viterbi decoding to keep the processing delay as small as possible. Techniques such as linear discriminant analysis, state-based Gaussian selection, and phonetic tied-mixture models are employed to reduce the computational burden and memory size. The system is then optimized and compiled on the TMS320C6711 DSP for real-time operation. The implemented system uses 486 kbytes of memory for data and acoustic models, and 24.5 kbytes for program code. A maximum of 29.2 ms required to process a 32 ms speech frame validates real-time operation of the implemented system.
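
The frame-synchronous Viterbi loop that such a recognizer runs per incoming frame can be sketched as follows; this is a generic textbook formulation, not the authors' DSP code:

```python
# Textbook frame-synchronous Viterbi decoding in the log domain (illustrative only).
import numpy as np

def viterbi_decode(log_obs, log_trans, log_init):
    """log_obs: (T, N) frame log-likelihoods; log_trans: (N, N); log_init: (N,)."""
    T, N = log_obs.shape
    delta = log_init + log_obs[0]
    backptr = np.zeros((T, N), dtype=int)
    for t in range(1, T):                    # one update per incoming frame
        scores = delta[:, None] + log_trans  # (from-state, to-state)
        backptr[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_obs[t]
    # Trace back the best state sequence.
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1], float(delta.max())
```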

Characteristics of speech rate and pause in children with spastic cerebral palsy and their relationships with speech intelligibility (경직형 뇌성마비 아동의 하위그룹별 말속도와 쉼의 특성 및 말명료도와의 관계)

  • Jeong, Pil Yeon;Sim, Hyun Sub
    • Phonetics and Speech Sciences
    • /
    • v.12 no.3
    • /
    • pp.95-103
    • /
    • 2020
  • The current study aimed to identify the characteristics of speech rate and pause in children with spastic cerebral palsy (CP) and their relationships with speech intelligibility. In all, 26 children with CP participated: 4 with no speech motor involvement and age-appropriate language ability (NSMI-LCT), 6 with no speech motor involvement and impaired language ability (NSMI-LCI), 6 with speech motor involvement and age-appropriate language ability (SMI-LCT), and 10 with speech motor involvement and impaired language ability (SMI-LCI). Speech samples for the speech rate and pause analysis were elicited with a sentence repetition task, and acoustic analyses were performed in Praat. First, regardless of the presence of language impairment, significant differences between the NSMI and SMI groups were found in speech rate and articulation rate. Second, the SMI groups showed a higher ratio of pause time to sentence production time, more frequent pauses, and longer pause durations than the NSMI groups. Lastly, there were significant correlations among speech rate, articulation rate, and intelligibility. These findings suggest that slow speech rate is the main feature of the SMI groups, and that both speech rate and articulation rate play important roles in the intelligibility of children with spastic CP.
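
The rate and pause measures named above follow standard definitions; a small sketch under those assumptions (syllable counts and pause-annotated intervals as inputs) might look like this:

```python
# Standard definitions, sketched: inputs are a syllable count and lists of
# (start, end) speech and pause intervals in seconds from a sentence repetition task.
def rate_and_pause_measures(n_syllables, speech_intervals, pause_intervals):
    speech_time = sum(e - s for s, e in speech_intervals)
    pause_time = sum(e - s for s, e in pause_intervals)
    total_time = speech_time + pause_time
    return {
        "speech_rate": n_syllables / total_time,         # syllables/s, pauses included
        "articulation_rate": n_syllables / speech_time,  # syllables/s, pauses excluded
        "pause_ratio": pause_time / total_time,
        "n_pauses": len(pause_intervals),
    }
```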

The pattern of use by gender and age of the discourse markers 'a', 'eo', and 'eum' (담화표지 '아', '어', '음'의 성별과 연령별 사용 양상)

  • Song, Youngsook;Shim, Jisu;Oh, Jeahyuk
    • Phonetics and Speech Sciences
    • /
    • v.12 no.4
    • /
    • pp.37-45
    • /
    • 2020
  • This paper quantitatively measured the frequency and duration of the discourse markers 'a', 'eo', and 'eum' in the Seoul Corpus, a spontaneous speech corpus. Durations were verified with Praat, the corpus was processed with EmEditor, and the results were analyzed statistically in R. Based on the corpus analysis, the study investigated whether particular markers are preferred by particular groups of speakers. The most prominent finding is that female speakers produced longer durations than male speakers for the marker 'eum' in final position. With respect to age, teenagers uttered 'a' more than 'eo' in initial position compared with speakers in their 40s. This study is significant in that it quantitatively analyzed the discourse markers 'a', 'eo', and 'eum' by gender and age. To extend the discussion, more fine-grained research that takes context into account should be conducted. Similar markers are found in Japanese ('e' and 'ma'; Watanabe & Ishi, 2000) and English ('uh' and 'um'; Gries, 2013), so a follow-up cross-linguistic analysis of such discourse markers could identify their commonalities and differences.
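
A hypothetical sketch of the kind of tallying involved, assuming time-aligned tokens with speaker metadata (the field names are invented for illustration and do not reflect the Seoul Corpus annotation format):

```python
# Hypothetical sketch: tally frequency and mean duration of 'a', 'eo', 'eum'
# by gender and age group.
from collections import defaultdict

MARKERS = {"a", "eo", "eum"}

def tally_markers(tokens):
    """tokens: iterable of dicts like
    {'word': 'eum', 'start': 1.23, 'end': 1.80, 'gender': 'F', 'age': '20s'}."""
    counts = defaultdict(int)
    durations = defaultdict(float)
    for tok in tokens:
        if tok["word"] in MARKERS:
            key = (tok["gender"], tok["age"], tok["word"])
            counts[key] += 1
            durations[key] += tok["end"] - tok["start"]
    mean_duration = {k: durations[k] / counts[k] for k in counts}
    return counts, mean_duration
```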

Hate Speech Detection Using Modified Principal Component Analysis and Enhanced Convolution Neural Network on Twitter Dataset

  • Majed, Alowaidi
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.1
    • /
    • pp.112-119
    • /
    • 2023
  • Traditionally used for networking computers and communications, the Internet has been evolving since its beginning and is the backbone of many things on the web, including social media. The concept of social networking, which started in the early 1990s, has grown along with the Internet. Social Networking Sites (SNSs) emerged and have remained an important element of Internet usage, mainly because of the services they provide on the web. Twitter and Facebook have become the primary means by which many individuals keep in touch with others and carry on substantive conversations. These sites allow the posting of photos and videos and support audio and video storage, which can be shared among users. Although attractive, these provisions have also created problems for these sites, such as the posting of offensive material. Users of SNSs sometimes promote hate through their words or speech, which is difficult to curtail once uploaded. Hence, this article outlines a process for extracting user reviews from a Twitter corpus in order to identify instances of hate speech. Using MPCA (Modified Principal Component Analysis) and ECNN (Enhanced Convolutional Neural Network), we identify instances of hate speech in the text. With natural language processing (NLP), a fully automated system for assessing syntax and meaning can be established. There is a strong emphasis on pre-processing, feature extraction, and classification. Normalization cleans the text by removing extra spaces, punctuation, and stop words. Feature extraction then operates on the normalized text using the MPCA algorithm, which takes a set of related features and selects those that are most informative about the given dataset. The proposed classification method is then applied to detect instances of hate speech or abusive language. It is argued that ECNN is superior to other methods for identifying hateful content online: it can take in massive amounts of data and quickly return accurate results, especially for larger datasets. As a result, the proposed MPCA+ECNN algorithm improves not only the F-measure but also the accuracy, precision, and recall.
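
As a sketch only, the following pipeline uses ordinary TF-IDF plus truncated SVD as a stand-in for MPCA and a small 1-D convolutional network as a stand-in for ECNN; nothing here reproduces the paper's modified algorithms:

```python
# Sketch only: generic normalization -> dimensionality reduction -> CNN pipeline.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from tensorflow import keras

def train_hate_speech_sketch(texts, labels, n_components=64):
    vec = TfidfVectorizer(lowercase=True, stop_words="english")  # normalization step
    X = vec.fit_transform(texts)
    svd = TruncatedSVD(n_components=n_components)                # PCA-style reduction
    X_red = svd.fit_transform(X)[..., np.newaxis]                # (samples, components, 1)

    model = keras.Sequential([
        keras.layers.Conv1D(32, 3, activation="relu", input_shape=(n_components, 1)),
        keras.layers.GlobalMaxPooling1D(),
        keras.layers.Dense(1, activation="sigmoid"),             # hate vs. not hate
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X_red, np.asarray(labels), epochs=5, batch_size=32, verbose=0)
    return vec, svd, model
```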

Emotion Recognition using Pitch Parameters of Speech (음성의 피치 파라메터를 사용한 감정 인식)

  • Lee, Guehyun;Kim, Weon-Goo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.25 no.3
    • /
    • pp.272-278
    • /
    • 2015
  • This paper studied various parameter extraction methods using the pitch information of speech for the development of an emotion recognition system. For this purpose, pitch parameters were extracted from a Korean speech database containing various emotions using statistical information and numerical analysis techniques. A GMM-based emotion recognition system was used to compare the performance of the pitch parameters, and a sequential feature selection method was used to select the parameters showing the best emotion recognition performance. Experiments on recognizing four emotions showed a 63.5% recognition rate using a combination of 15 of the 56 pitch parameters, and experiments on detecting the presence of emotion showed an 80.3% recognition rate using a combination of 14 parameters.
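
A hedged sketch of the general recipe (one GMM per emotion, classification by maximum log-likelihood); the 56 specific pitch parameters are not reproduced here:

```python
# Hedged sketch: one GMM per emotion over pitch-derived feature vectors,
# classification by maximum log-likelihood. X is any (samples, features) array.
import numpy as np
from sklearn.mixture import GaussianMixture

class GMMEmotionClassifier:
    def __init__(self, n_components=4):
        self.n_components = n_components
        self.models = {}

    def fit(self, X, y):
        for emotion in np.unique(y):
            gmm = GaussianMixture(n_components=self.n_components, covariance_type="diag")
            gmm.fit(X[y == emotion])
            self.models[emotion] = gmm
        return self

    def predict(self, X):
        emotions = list(self.models)
        # Per-sample log-likelihood under each emotion's GMM; pick the best.
        ll = np.column_stack([self.models[e].score_samples(X) for e in emotions])
        return np.array(emotions)[ll.argmax(axis=1)]
```

A sequential forward-selection loop over subsets of the pitch parameters could then be layered on top to find the best-performing combination, in the spirit of the selection step the abstract describes.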