• Title/Summary/Keyword: Speech Data

1,394 search results

Effective speech recognition system for patients with Parkinson's disease (파킨슨병 환자에 대한 효과적인 음성인식 시스템)

  • Bak, Huiyong;Kim, Ryul;Lee, Sangmin
    • The Journal of the Acoustical Society of Korea / v.41 no.6 / pp.655-661 / 2022
  • Since speech impairment is prevalent in patients with Parkinson's disease (PD), speech recognition systems suitable for these patients are needed. In this paper, we propose a speech recognition system that effectively recognizes the speech of patients with PD. The speech recognition system is first pre-trained with the Globalformer using speech data from healthy people, and then fine-tuned using a relatively small amount of speech data from patients with PD. For this analysis, we used the speech dataset of healthy people built by AI Hub and that of patients with PD collected at Inha University Hospital. As a result of the experiment, the proposed speech recognition system recognized the speech of patients with PD with a Character Error Rate (CER) of 22.15%, a better result than those of other methods.
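The reported metric, Character Error Rate, is conventionally the Levenshtein edit distance between the reference and hypothesis transcripts divided by the reference length in characters. A minimal sketch (function names are illustrative, not from the paper):

```python
def edit_distance(ref, hyp):
    # Classic dynamic-programming Levenshtein distance over characters.
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

def cer(reference, hypothesis):
    # Character Error Rate = character edits / reference length.
    return edit_distance(reference, hypothesis) / len(reference)
```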

An Automatic Data Construction Approach for Korean Speech Command Recognition

  • Lim, Yeonsoo;Seo, Deokjin;Park, Jeong-sik;Jung, Yuchul
    • Journal of the Korea Society of Computer and Information / v.24 no.12 / pp.17-24 / 2019
  • The biggest problem in the AI field, which has become a hot topic in recent years, is how to deal with the lack of training data. Since manual data construction takes a lot of time and effort, it is non-trivial for an individual to easily build the necessary data. On the other hand, automatic data construction needs to handle data quality issues. In this paper, we introduce a method to automatically extract from the web the data required to develop a Korean speech command recognizer, and to automatically select the data that can be used for training. In particular, we propose a modified ResNet model that shows modest performance on the automatically constructed Korean speech command data. We conducted an experiment to show the applicability of command sets from the health and daily life domains. In a series of experiments using only automatically constructed data, accuracy reached 89.5% with ResNet15 in the health domain and 82% with ResNet8 in the daily life domain.

Deep learning-based speech recognition for Korean elderly speech data including dementia patients (치매 환자를 포함한 한국 노인 음성 데이터 딥러닝 기반 음성인식)

  • Jeonghyeon Mun;Joonseo Kang;Kiwoong Kim;Jongbin Bae;Hyeonjun Lee;Changwon Lim
    • The Korean Journal of Applied Statistics / v.36 no.1 / pp.33-48 / 2023
  • In this paper we consider automatic speech recognition (ASR) for Korean speech data in which elderly persons randomly speak a sequence of words, such as animals and vegetables, for one minute. Most of the speakers are over 60 years old and some of them are dementia patients. The goal is to compare deep-learning based ASR models for such data and to find models with good performance. ASR is a technology by which computers recognize spoken words and convert them into written text. Recently, many deep-learning models with good performance have been developed for ASR. Training data for such models mostly consist of sentences, and the speakers in such data can usually pronounce words accurately. In our data, however, most of the speakers are over the age of 60 and often have incorrect pronunciation, and the speakers randomly say series of words, not sentences, for one minute. Therefore, pre-trained models based on typical training data may not be suitable for our data, and hence we train deep-learning based ASR models from scratch using our data. We also apply some data augmentation methods because of the small data size.
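The abstract does not enumerate the augmentation methods used; one common choice for small ASR datasets is SpecAugment-style time and frequency masking on the spectrogram. The sketch below is an assumption about that general technique, not the authors' actual recipe:

```python
import numpy as np

def spec_augment(spec, num_freq_masks=2, freq_width=8,
                 num_time_masks=2, time_width=20, rng=None):
    """Zero out random frequency bands and time spans of a (freq, time) spectrogram."""
    if rng is None:
        rng = np.random.default_rng()
    out = spec.copy()
    n_freq, n_time = out.shape
    for _ in range(num_freq_masks):
        f = int(rng.integers(0, freq_width + 1))          # mask height in bins
        f0 = int(rng.integers(0, max(1, n_freq - f)))     # mask start bin
        out[f0:f0 + f, :] = 0.0
    for _ in range(num_time_masks):
        t = int(rng.integers(0, time_width + 1))          # mask width in frames
        t0 = int(rng.integers(0, max(1, n_time - t)))     # mask start frame
        out[:, t0:t0 + t] = 0.0
    return out
```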

Google speech recognition of an English paragraph produced by college students in clear or casual speech styles (대학생들이 또렷한 음성과 대화체로 발화한 영어문단의 구글음성인식)

  • Yang, Byunggon
    • Phonetics and Speech Sciences / v.9 no.4 / pp.43-50 / 2017
  • These days, the voice models of speech recognition software are sophisticated enough to process natural speech without any previous training. However, not much research has reported on the use of speech recognition tools in the field of pronunciation education. This paper examined Google speech recognition of a short English paragraph produced by Korean college students in clear and casual speech styles in order to diagnose and resolve students' pronunciation problems. Thirty-three Korean college students participated in the recording of the English paragraph. The Google soundwriter was employed to collect data on the word recognition rates of the paragraph. Results showed that the total word recognition rate was 73% with a standard deviation of 11.5%. The word recognition rate of clear speech was around 77.3%, while that of casual speech amounted to 68.7%. The low recognition rate of casual speech was attributed both to individual pronunciation errors and to the software itself, as shown in its fricative recognition. Various distributions of unrecognized words were observed depending on the participant and proficiency group. From the results, the author concludes that speech recognition software is useful for diagnosing an individual's or group's pronunciation problems. Further studies on progressive improvement of learners' erroneous pronunciations would be desirable.
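The abstract does not define how word recognition rates were scored. A rough bag-of-words approximation (hypothetical, not the author's procedure) would count how many reference words appear in the recognized transcript:

```python
from collections import Counter

def word_recognition_rate(reference, transcript):
    """Fraction of reference words that appear in the recognized transcript.

    Hypothetical scoring sketch: words are matched with multiplicity,
    ignoring order and case.
    """
    ref = reference.lower().split()
    hyp = transcript.lower().split()
    ref_counts, hyp_counts = Counter(ref), Counter(hyp)
    matched = sum(min(c, hyp_counts[w]) for w, c in ref_counts.items())
    return matched / len(ref)
```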

Feature Parameter Extraction and Speech Recognition Using Matrix Factorization (Matrix Factorization을 이용한 음성 특징 파라미터 추출 및 인식)

  • Lee, Kwang-Seok;Hur, Kang-In
    • Journal of the Korea Institute of Information and Communication Engineering / v.10 no.7 / pp.1307-1311 / 2006
  • In this paper, we propose a new speech feature parameter that uses Matrix Factorization to capture part-based features of the speech spectrum. The proposed parameter represents effectively dimension-reduced data obtained from multi-dimensional feature data through a matrix factorization procedure under the constraint that all matrix elements are non-negative. The reduced feature data present part-based features of the input data. We verify the usefulness of the NMF (Non-negative Matrix Factorization) algorithm for speech feature extraction by applying feature parameters obtained with NMF to Mel-scaled filter bank outputs. The recognition experiment results confirm that the proposed feature parameter is superior in recognition performance to the commonly used MFCC (Mel-Frequency Cepstral Coefficient).
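The NMF step described here factorizes a non-negative Mel filter bank matrix V into a spectral basis W and per-frame activations H, with V ≈ WH. A minimal sketch using the classic Lee–Seung multiplicative updates (the paper's exact update rule, rank, and dimensions are not stated, so these are illustrative):

```python
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9):
    """Factorize non-negative V (n x m) into W (n x r) and H (r x m), V ~= W H.

    Lee-Seung multiplicative updates; positive initialization keeps every
    entry of W and H non-negative throughout.
    """
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update basis (part-based features)
    return W, H
```

Applied to a Mel filter bank output, the columns of H would serve as the reduced-dimension feature vectors for each frame.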

Implementation and Evaluation of an HMM-Based Speech Synthesis System for the Tagalog Language

  • Mesa, Quennie Joy;Kim, Kyung-Tae;Kim, Jong-Jin
    • MALSORI / v.68 / pp.49-63 / 2008
  • This paper describes the development and assessment of a hidden Markov model (HMM) based Tagalog speech synthesis system, where Tagalog is the most widely spoken indigenous language of the Philippines. Several aspects of the design process are discussed here. In order to build the synthesizer, a speech database was recorded and phonetically segmented. The constructed speech corpus contains approximately 89 minutes of Tagalog speech organized in 596 spoken utterances. Furthermore, contextual information is determined. The quality of the synthesized speech was assessed by subjective tests employing 25 native Tagalog speakers as respondents. Experimental results show that the new system obtains a mean opinion score (MOS) of 3.29, which indicates that the developed system can produce highly intelligible, neutral Tagalog speech with stable quality even when a small amount of speech data is used for HMM training.

Automatic Detection of Intonational and Accentual Phrases in Korean Standard Continuous Speech (한국 표준어 연속음성에서의 억양구와 강세구 자동 검출)

  • Lee, Ki-Young;Song, Min-Suck
    • Speech Sciences / v.7 no.2 / pp.209-224 / 2000
  • This paper proposes an automatic detection method for intonational and accentual phrases in Korean standard continuous speech. We use pauses longer than 150 ms to detect intonational phrases, and extract accentual phrases from the intonational phrases by analyzing syllables and pitch contours. The speech data for the experiment are composed of seven male voices and two female voices reading the texts of the fable 'The Ant and the Grasshopper' and a newspaper article, 'Manmulsang', at normal speed in the Korean standard variety. The results of the experiment show that the detection rate of intonational phrases is 95% on average and that of accentual phrases is 73%. This detection rate implies that we can segment continuous speech into smaller units (i.e., prosodic phrases) by using prosodic information, so that the scope of speech recognition can be narrowed down to words or phrases in continuous speech.
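The pause criterion above (silence longer than 150 ms) can be sketched as a simple frame-energy gate; the frame size and energy threshold below are illustrative, not the authors' values:

```python
import numpy as np

def split_on_pauses(samples, fs, frame_ms=10, energy_thresh=1e-4, min_pause_ms=150):
    """Cut a speech signal into phrase candidates at pauses >= min_pause_ms.

    Frames whose mean energy falls below energy_thresh count as silence;
    a run of silence of at least min_pause_ms closes the current phrase.
    Returns a list of (start_sample, end_sample) pairs.
    """
    frame = int(fs * frame_ms / 1000)
    n = len(samples) // frame
    energy = np.array([np.mean(samples[i * frame:(i + 1) * frame] ** 2)
                       for i in range(n)])
    voiced = energy >= energy_thresh
    min_pause = min_pause_ms // frame_ms
    phrases, i = [], 0
    while i < n:
        if voiced[i]:
            start = i
            pause = 0
            while i < n and pause < min_pause:
                pause = pause + 1 if not voiced[i] else 0
                i += 1
            phrases.append((start * frame, (i - pause) * frame))
        else:
            i += 1
    return phrases
```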

Determining the Relative Differences of Emotional Speech Using Vocal Tract Ratio

  • Wang, Jianglin;Jo, Cheol-Woo
    • Speech Sciences / v.13 no.1 / pp.109-116 / 2006
  • In this paper, our study focuses on obtaining the differences of emotional speech in three vocal tract sections. The vocal tract area was computed from the area function of the emotional speech. The total vocal tract was divided into three sections (vocal fold section, middle section, and lip section) to acquire the differences in each vocal tract section of emotional speech. The experiment data include six emotional speech sets from three males and three females. The six emotions consist of neutral, happiness, anger, sadness, fear, and boredom. The measured difference is computed as the ratio obtained by comparing each emotional speech with the normal speech. The experimental results show that there is no remarkable difference in the lip section, but fear and sadness produce a great change in the vocal fold section.

A Comparative Study on the Speech Rate of Advanced Korean(L2) Learners and Korean Native Speakers in Conversational Speech (자유 대화에서의 한국어 원어민 화자와 한국어 고급 학습자들의 발화 속도 비교)

  • Hong, Minkyoung
    • Journal of Korean language education / v.29 no.3 / pp.345-363 / 2018
  • The purpose of this study is to compare the speech rate of advanced Korean (L2) learners and Korean native speakers in spontaneous utterances. Specifically, the current study investigated the difference in the two groups' speech patterns according to utterance length. Eight advanced Korean (L2) learners and eight Korean native speakers participated in this study. The data were collected by recording their conversation, and physical measurements (speaking rate, articulation rate, pauses, and several types of speech disfluency) were taken on 120 utterances extracted from 12 of the 16 participants. The findings show that advanced Korean learners' speech pattern is similar to that of Korean native speakers in short utterances. However, in long utterances the two groups show different speech patterns: while the articulation rate of Korean native speakers increased in long utterances, that of Korean learners decreased. This suggests that the frequency of speech disfluency factors might affect this result.

Method of Speech Feature Parameter Extraction Using Modified-MFCC (Modified-MFCC를 이용한 음성 특징 파라미터 추출 방법)

  • 이상복;이철희;정성환;김종교
    • Proceedings of the IEEK Conference / 2001.06d / pp.269-272 / 2001
  • In speech recognition technology, every talker's utterances have particular resonant frequencies according to the shape of the lips and the motion of the tongue, and utterances differ from talker to talker. Accordingly, we need a superior method of speech feature parameter extraction that reflects each talker's characteristics well. This paper suggests the modified-MFCC, which combines the existing MFCC with a gammatone filter. We experimented with telephone speech data and obtained a speech recognition rate higher than that of the other methods.
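A gammatone filter of order n has the impulse response t^(n-1) e^(-2πbt) cos(2πft), with the bandwidth b usually tied to the equivalent rectangular bandwidth (ERB) at the center frequency f. A minimal sketch of one filter of such a bank (the paper's filter parameters are not given, so the values here are the common textbook choices):

```python
import numpy as np

def gammatone_ir(fc, fs, n=4, duration=0.025):
    """Impulse response of an order-n gammatone filter centered at fc Hz."""
    t = np.arange(int(duration * fs)) / fs
    erb = 24.7 * (4.37 * fc / 1000 + 1)   # Glasberg & Moore ERB at fc
    b = 1.019 * erb                       # common bandwidth scaling
    return t ** (n - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
```

A filter bank is then built by convolving the speech signal with `gammatone_ir` at a set of ERB-spaced center frequencies, replacing the triangular Mel filters of standard MFCC.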
