• Title/Summary/Keyword: speech features

Search Results: 647

Features of Korean Infants' Vocalizations according to the Stages Models : Focused on 1 to 18 Months (음성발달 모델에 따른 1~18개월 영유아의 음성특징)

  • Pae, Jae-Yeon;Ko, Do-Heung
    • Phonetics and Speech Sciences
    • /
    • v.2 no.2
    • /
    • pp.27-36
    • /
    • 2010
  • The purpose of this study is to investigate the features of Korean infants' vocalizations according to stage models. A total of 88 infants, ranging in age from 1 to 18 months, participated in this study. This age range is a critical period for vocal development. However, research on infants' vocalizations has typically focused on children over the age of two, because restrictions on studying younger infants, from birth to age two, make it difficult to investigate the major features of their vocal development. This study therefore provides documentation and analysis of the features of infant vocalization and the stages of vocal development. The results show that the stages model of Oller & Lynch (1992) might be adapted to Korean infants' vocal development. Furthermore, the features of infant vocalization do not appear linearly from one stage to the next but overlap across stages (Koopmans-van Beinum & van der Stelt, 1986; Nathani et al., 2006; Oller, 1980; Stark, 1980; Vihman, 1996).

  • PDF

Recognition of Emotion and Emotional Speech Based on Prosodic Processing

  • Kim, Sung-Ill
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.3E
    • /
    • pp.85-90
    • /
    • 2004
  • This paper presents two new approaches: one concerns the recognition of speech produced in emotional states such as anger, happiness, neutral, sadness, or surprise; the other concerns the recognition of the emotion itself. For the proposed speech recognition system handling human speech with emotional states, a total of nine prosodic features were first extracted and then fed to a prosodic identifier. In evaluation, the recognition rates on emotional speech increased considerably over those of the existing speech recognizer. For emotion recognition, on the other hand, four prosodic parameters (pitch, energy, and their derivatives) were used and trained with discrete-duration continuous hidden Markov models (DDCHMMs). In this approach, the emotion models were adapted to a specific speaker's speech using maximum a posteriori (MAP) estimation. In evaluation, the recognition rates on emotional states increased gradually with the number of adaptation samples.
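
The abstract above names pitch, energy, and their derivatives as the prosodic parameters. A minimal sketch of how such a four-stream prosodic feature set might be extracted (librosa-based; the paper's DDCHMM training and MAP adaptation are not shown, and the frame settings are assumptions):

```python
import librosa
import numpy as np

def prosodic_features(path, sr=16000):
    """Extract the four prosodic streams named in the abstract:
    pitch, energy, and their frame-to-frame derivatives."""
    y, sr = librosa.load(path, sr=sr)
    # Frame-level fundamental frequency (pYIN); unvoiced frames come back as NaN
    f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz('C2'),
                            fmax=librosa.note_to_hz('C6'), sr=sr)
    f0 = np.nan_to_num(f0)                       # zero out unvoiced frames
    # Frame-level energy (RMS); same default hop length, so trim to align
    rms = librosa.feature.rms(y=y)[0]
    n = min(len(f0), len(rms))
    f0, rms = f0[:n], rms[:n]
    # First-order derivatives (delta features)
    d_f0 = librosa.feature.delta(f0)
    d_rms = librosa.feature.delta(rms)
    return np.stack([f0, rms, d_f0, d_rms], axis=1)   # shape: (frames, 4)
```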

A Novel Approach to COVID-19 Diagnosis Based on Mel Spectrogram Features and Artificial Intelligence Techniques

  • Alfaidi, Aseel;Alshahrani, Abdullah;Aljohani, Maha
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.9
    • /
    • pp.195-207
    • /
    • 2022
  • COVID-19 has remained one of the most serious health crises in recent history, resulting in the tragic loss of lives and significant economic impacts across the world. The difficulty of controlling COVID-19 poses a threat to the global health sector. Given that Artificial Intelligence (AI) has contributed to improving research methods and solving problems in diverse fields of study, AI algorithms have also proven effective in disease detection and early diagnosis. Specifically, acoustic features offer a promising prospect for the early detection of respiratory diseases. Motivated by these observations, this study conceptualized a speech-based diagnostic model to aid COVID-19 diagnosis. The proposed methodology uses speech signals from confirmed positive and negative cases of COVID-19 to extract features through the pre-trained Visual Geometry Group (VGG-16) model based on Mel spectrogram images. In addition, the K-means algorithm determines effective features, followed by a Genetic Algorithm-Support Vector Machine (GA-SVM) classifier that classifies cases. The experimental findings indicate that the proposed methodology can distinguish COVID-19 from non-COVID-19 cases across speakers of varying ages and different languages, as demonstrated in the simulations. Because the methodology relies on deep features followed by dimension reduction, it produces better and more consistent performance than the handcrafted features used in previous studies.
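
A minimal sketch of the pipeline this abstract describes: Mel spectrograms rendered as images, deep features from a pretrained VGG-16, then an SVM. The paper's K-means feature selection and GA-tuned SVM are replaced here by a plain RBF SVM, the image rendering is deliberately crude, and the file names and labels are hypothetical:

```python
import numpy as np
import librosa
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from sklearn.svm import SVC

# Pretrained VGG-16 as a fixed feature extractor (512-d pooled output)
vgg = VGG16(weights='imagenet', include_top=False, pooling='avg')

def deep_features(path):
    """Mel spectrogram rendered as a 3-channel image, then a VGG-16 embedding."""
    y, sr = librosa.load(path, sr=16000)
    mel = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr, n_mels=224))
    img = (mel - mel.min()) / (mel.max() - mel.min() + 1e-9) * 255.0
    img = np.resize(img, (224, 224))              # crude fixed-size rendering
    img = np.repeat(img[..., None], 3, axis=-1)   # grayscale -> RGB
    return vgg.predict(preprocess_input(img[None]), verbose=0)[0]

# Hypothetical recordings; labels: 1 = COVID-positive, 0 = negative
X = np.array([deep_features(f) for f in ["pos1.wav", "neg1.wav"]])
clf = SVC(kernel='rbf').fit(X, [1, 0])
```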

SOME PROSODIC FEATURES OBSERVED IN THE PASSAGE READING BY JAPANESE LEARNERS OF ENGLISH

  • Kanzaki, Kazuo
    • Proceedings of the KSPS conference
    • /
    • 1996.10a
    • /
    • pp.37-42
    • /
    • 1996
  • This study examines some prosodic features of English spoken by Japanese learners of English, focusing on speech rate, pauses, and intonation when the learners read an English passage. Three Japanese learners of English, all male university students, were asked to read the speech material, a 110-word English passage, at their normal reading speed. A native speaker of English, a male American English teacher, was then asked to read the same passage. The Japanese speakers were also asked to read a Japanese passage of 286 kana characters so that their reading of English could be compared with their reading of Japanese. Their speech was analyzed on a computerized system (KAY Computerized Speech Lab); waveforms, spectrograms, and F0 contours were displayed on screen to measure the durations of pauses, phrases, and sentences and to observe intonation contours. One finding was that the three speakers' speech rates moved in a similar way across their reading of the English passage, and their reading of the Japanese passage showed a similar tendency. Another finding was that pauses were more frequent in the learners' speech than in the native speaker's, but that the ratio of total pause length to whole utterance length was about the same for both; a similar tendency was observed in the learners' reading of the Japanese passage, except that they used shorter pauses in mid-sentence position. As to intonation contours, the learners used a narrower pitch range than the native speaker when reading the English passage but a wider pitch range when reading the Japanese passage. The learners also tended to use falling intonation before pauses, whereas the native speaker used a variety of intonation patterns. These findings are applicable to the teaching of English pronunciation at the passage level in that they can show learners, Japanese learners here, what their problems are and how they might be solved.

  • PDF
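
The measurements described in the entry above (pause counts, pause-to-utterance ratio, speech rate) can be approximated from a recording by simple energy thresholding; a rough sketch, where the silence threshold and the 110-word count are assumptions taken from the abstract, not the paper's actual method:

```python
import librosa

def pause_stats(path, top_db=30):
    """Approximate pause analysis: librosa.effects.split finds regions
    louder than (max - top_db) dB; the gaps between them count as pauses."""
    y, sr = librosa.load(path, sr=16000)
    speech = librosa.effects.split(y, top_db=top_db)   # (start, end) in samples
    total = len(y) / sr
    voiced = sum(int(e - s) for s, e in speech) / sr
    n_pauses = max(len(speech) - 1, 0)
    return {"n_pauses": n_pauses,
            "pause_ratio": (total - voiced) / total,   # pause length / utterance
            "voiced_seconds": voiced}

stats = pause_stats("passage.wav")                     # hypothetical recording
rate_wpm = 110 / (stats["voiced_seconds"] / 60)        # 110-word passage
```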

Korean continuous digit speech recognition by multilayer perceptron using KL transformation (KL 변환을 이용한 multilayer perceptron에 의한 한국어 연속 숫자음 인식)

  • 박정선;권장우;권정상;이응혁;홍승홍
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.8
    • /
    • pp.105-113
    • /
    • 1996
  • In this paper, a new Korean digit speech recognition technique using a multilayer perceptron (MLP) is proposed. Despite its weakness in recognizing dynamic signals, the MLP was adopted for this model because Korean syllables can provide static features, and because the MLP is simple in structure and fast to compute. The MLP's input vectors were transformed using the Karhunen-Loeve transformation (KLT), which compresses the signal without losing its separability, although its physical properties are changed. Because the suggested technique extracts static features and is unaffected by changes in syllable length, it is well suited to Korean digit recognition. Using the KLT saves computation time and memory without decreasing the classification rate. The proposed feature extraction technique extracts features of the same size from two parts of a syllable, its beginning and its end, constructing the frames from which features are extracted with windows of a fixed size. It can therefore be applied to continuous speech recognition, which is difficult for conventional neural network recognition systems.

  • PDF
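
For mean-centered data, the Karhunen-Loeve transformation is the same eigen-decomposition of the covariance matrix that PCA performs, so the compression step above can be sketched with scikit-learn. The dimensions, the MLP shape, and the random stand-in data are assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 120))        # stand-in syllable feature vectors
y = rng.integers(0, 10, size=200)      # ten Korean digit classes

# KLT/PCA: project onto the leading eigenvectors of the covariance,
# compressing the input while preserving most of its variance.
klt = PCA(n_components=30).fit(X)
X_c = klt.transform(X)

mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X_c, y)
print(mlp.score(X_c, y))
```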

Variables for Predicting Speech Acceptability of Children with Cochlear Implants (인공와우이식 아동 말용인도의 예측 변인)

  • Yoon, Mi Sun
    • Phonetics and Speech Sciences
    • /
    • v.6 no.4
    • /
    • pp.171-179
    • /
    • 2014
  • Purposes: Speech acceptability is listeners' subjective judgement of the naturalness and normality of speech. The purpose of this study was to determine which variables predict the speech acceptability of children with cochlear implants. Methods: Twenty-seven children with CIs participated. They had profound pre-lingual hearing loss without additional disabilities; the mean chronological age was 8;9, and the mean age at implantation was 2;11. Speech samples of reading and spontaneous speech were recorded separately. Twenty college students unfamiliar with the speech of deaf children rated speech acceptability on a visual analog scale. One segmental feature (articulation) and six suprasegmental features (pitch, loudness, quality, resonance, intonation, and speaking rate) were evaluated perceptually by three SLPs. Correlation and multiple regression analyses were performed to identify the predicting variables. Results: Mean speech acceptability was 73.47 for reading and 71.96 for spontaneous speech. Speech acceptability of reading was predicted by the severity of intonation and articulation; speech acceptability of spontaneous speech was predicted by the severity of intonation and loudness. Discussion and conclusion: Severity of intonation was the most effective variable for predicting the speech acceptability of both reading and spontaneous speech. A further study would be necessary to generalize the results and to apply them to intervention in clinical settings.
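
A minimal sketch of the multiple-regression analysis this abstract describes, with entirely made-up rating data: the seven columns stand in for the perceptual severity ratings, and the synthetic scores echo the finding that intonation and articulation dominate; none of the numbers are the study's:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 27                                  # 27 children, as in the study
# Columns: articulation, pitch, loudness, quality, resonance, intonation, rate
severity = rng.uniform(1, 7, size=(n, 7))          # made-up severity ratings
acceptability = (100 - 8 * severity[:, 5]          # intonation weighted heaviest
                 - 4 * severity[:, 0]              # then articulation
                 + rng.normal(0, 3, n))            # synthetic VAS scores

model = LinearRegression().fit(severity, acceptability)
print(dict(zip(["artic", "pitch", "loud", "qual", "reson", "inton", "rate"],
               model.coef_.round(2))))
```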

Estimation of speech feature vectors and enhancement of speech recognition performance using lip information (입술정보를 이용한 음성 특징 파라미터 추정 및 음성인식 성능향상)

  • Min So-Hee;Kim Jin-Young;Choi Seung-Ho
    • MALSORI
    • /
    • no.44
    • /
    • pp.83-92
    • /
    • 2002
  • Speech recognition performance is severely degraded in noisy environments. One approach to this problem is audio-visual speech recognition. In this paper, we discuss experimental results for bimodal speech recognition based on speech feature vectors enhanced using lip information. We try various kinds of speech features, such as linear prediction coefficients, cepstrum, and log area ratios, for transforming lip information into speech parameters. The experimental results show that the cepstrum parameter is the best feature in terms of recognition rate. We also present desirable weighting values for the audio and visual information depending on the signal-to-noise ratio.

  • PDF
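
The closing point of the entry above, SNR-dependent weighting of the audio and visual streams, is commonly realized as a weighted combination of per-stream scores. A schematic sketch; the logistic weighting curve and its constants are illustrative assumptions, not the paper's values:

```python
import numpy as np

def fuse_scores(audio_score, visual_score, snr_db):
    """Late fusion: trust the audio stream more at high SNR,
    the lip (visual) stream more in noise."""
    w_audio = 1.0 / (1.0 + np.exp(-(snr_db - 5.0) / 5.0))  # illustrative curve
    return w_audio * audio_score + (1.0 - w_audio) * visual_score

for snr in (20, 10, 0, -10):
    print(snr, "dB ->", round(fuse_scores(0.9, 0.6, snr), 3))
```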

The Effect of the Telephone Channel to the Performance of the Speaker Verification System (전화선 채널이 화자확인 시스템의 성능에 미치는 영향)

  • 조태현;김유진;이재영;정재호
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.5
    • /
    • pp.12-20
    • /
    • 1999
  • In this paper, we compared the speaker verification performance of speech data collected in a clean environment and in a telephone channel environment. To improve the performance of speaker verification over the channel, we studied feature parameters that are effective in a channel environment, as well as preprocessing. The speech DB for the experiments consists of Korean two-digit number strings, with a text-prompted system in mind. Speech features including LPCC (Linear Predictive Cepstral Coefficients), MFCC (Mel Frequency Cepstral Coefficients), PLP (Perceptual Linear Prediction), and LSP (Line Spectrum Pairs) are analyzed. Preprocessing by filtering to remove channel noise is also studied. To remove or compensate for the channel effect in the extracted features, cepstral weighting, CMS (Cepstral Mean Subtraction), and RASTA (RelAtive SpecTrAl) processing are applied. By presenting the speech recognition performance for each feature and processing method, we compare speech recognition and speaker verification performance. HTK (HMM Tool Kit) 2.0 is used to evaluate the speech features and processing methods. Applying different thresholds to male and female speakers, we compare the EER (Equal Error Rate) on clean speech data and channel data. Our simulation results show that the best speaker verification performance in terms of EER was achieved by removing low-band and high-band channel noise with a band-pass filter (150~3800 Hz) in preprocessing and extracting MFCCs from the filtered speech.

  • PDF
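
The best-performing setup reported above, a 150-3800 Hz band-pass filter followed by MFCC extraction with cepstral mean subtraction, can be sketched as follows; the filter order, sampling rate, and MFCC count are assumptions:

```python
import librosa
import numpy as np
from scipy.signal import butter, sosfiltfilt

def channel_robust_mfcc(path, sr=8000, n_mfcc=13):
    """Band-pass 150-3800 Hz (the paper's best preprocessing), then MFCCs
    with cepstral mean subtraction (CMS) to reduce channel effects."""
    y, sr = librosa.load(path, sr=sr)
    sos = butter(4, [150, 3800], btype='bandpass', fs=sr, output='sos')
    y = sosfiltfilt(sos, y)                            # zero-phase filtering
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc - mfcc.mean(axis=1, keepdims=True)     # CMS per coefficient
```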

Analysis of the Voice Quality in Emotional Speech Using Acoustical Parameters (음향 파라미터에 의한 정서적 음성의 음질 분석)

  • Jo, Cheol-Woo;Li, Tao
    • MALSORI
    • /
    • v.55
    • /
    • pp.119-130
    • /
    • 2005
  • The aim of this paper is to investigate some acoustic characteristics of voice quality features in an emotional speech database. Six parameters are measured and compared across six emotions (normal, happiness, sadness, fear, anger, boredom) and six speakers. Inter-speaker and intra-speaker variability are measured. Some intra-speaker consistency in how the parameters change across emotions is observed, but no inter-speaker consistency is found.

  • PDF
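
The abstract above does not name its six parameters, so the sketch below measures a few common voice-quality correlates (F0 statistics, energy, spectral centroid) purely as an illustration of per-emotion, per-speaker comparison; it is not the paper's parameter set:

```python
import librosa
import numpy as np

def voice_quality_params(path):
    """Illustrative voice-quality correlates for one utterance."""
    y, sr = librosa.load(path, sr=16000)
    f0, _, _ = librosa.pyin(y, fmin=60, fmax=500, sr=sr)
    f0 = f0[~np.isnan(f0)]                 # keep voiced frames only
    return {"f0_mean": float(np.mean(f0)),
            "f0_range": float(np.ptp(f0)),
            "rms_mean": float(librosa.feature.rms(y=y).mean()),
            "centroid": float(librosa.feature.spectral_centroid(y=y, sr=sr).mean())}

# Compare across emotions for one speaker (hypothetical file names):
# {emo: voice_quality_params(f"spk1_{emo}.wav")
#  for emo in ("normal", "happiness", "sadness", "fear", "anger", "boredom")}
```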

Evaluation for speech signal based on human sense and signal quality

  • Mekada, Yoshito;Hasegawa, Hiroshi;Kumagai, Takeshi;Kasuga, Masao
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1997.06a
    • /
    • pp.13-18
    • /
    • 1997
  • Each reproduced speech signal has its own particular signal properties because of the encoding and decoding processes used for communication through various media. In this paper, we examine the correlation between speech signal quality and perceived pleasantness, with a view to the perceptual improvement of such signals. In experiments, we evaluate the quality of speech signals transmitted through various media by means of a psychological auditory test and the physical features of these signals.

  • PDF
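
The correlation analysis described in the entry above, between subjective listening scores and physical signal features, reduces to a correlation coefficient per feature; a toy sketch with made-up data, where SNR stands in for an unspecified physical feature:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
snr_db = rng.uniform(5, 35, size=24)                # physical feature per clip
mos = (1 + 4 / (1 + np.exp(-(snr_db - 18) / 5))    # made-up listener ratings
       + rng.normal(0, 0.3, 24))

r, p = pearsonr(snr_db, mos)                        # subjective vs. physical
print(f"r={r:.2f}, p={p:.3g}")
```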