• Title/Abstract/Keyword: Speech Data

Search Results: 1,394

Voice Similarities between Sisters

  • Ko, Do-Heung
    • Speech Sciences / v.8 no.3 / pp.43-50 / 2001
  • This paper deals with voice similarities between sisters, who are assumed to share common physiological characteristics inherited from a single biological mother. Nine pairs of sisters who were believed to have similar voices participated in the experiment. The speech samples from one pair of sisters were excluded from the analysis because their perceptual similarity score was relatively low. The words were measured both in isolation and in context, and the subjects were asked to read the text five times with intervals of about three seconds between readings. Recordings were made at natural speed in a quiet room. The data were analyzed for pitch and formant frequencies using CSL (Computerized Speech Lab) and PCQuirer. It was found that the data for initial vowels are much more similar and homogeneous than those for vowels in other positions. The acoustic data showed that voice similarities are strikingly high in both pitch and formant frequencies. The statistical data obtained from this experiment may serve as a guideline for modelling speaker identification and speaker verification. (A brief measurement sketch follows this entry.)

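The measurements described above (mean pitch and formant frequencies per vowel) can be reproduced with open-source tools. The sketch below uses parselmouth, a Python wrapper around Praat, as a stand-in for the CSL/PCQuirer tooling the paper actually used; the file names and the mid-vowel sampling point are assumptions.

```python
# Hypothetical sketch: measure mean F0 and the first two formants of a
# recorded vowel, then compare one sister pair. The paper used CSL and
# PCQuirer; parselmouth stands in here.
import numpy as np
import parselmouth

def pitch_and_formants(wav_path):
    snd = parselmouth.Sound(wav_path)
    # Mean F0 over voiced frames only (unvoiced frames are reported as 0)
    pitch = snd.to_pitch()
    f0 = pitch.selected_array['frequency']
    mean_f0 = float(np.mean(f0[f0 > 0]))
    # F1/F2 sampled at mid-vowel (Burg method, Praat defaults)
    formant = snd.to_formant_burg()
    mid = snd.duration / 2
    return mean_f0, formant.get_value_at_time(1, mid), formant.get_value_at_time(2, mid)

# Compare a hypothetical sister pair on the same isolated vowel
a = pitch_and_formants("sister_a_vowel.wav")
b = pitch_and_formants("sister_b_vowel.wav")
print("F0/F1/F2 (A):", a)
print("F0/F1/F2 (B):", b)
print("absolute differences:", [abs(x - y) for x, y in zip(a, b)])
```
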
Short utterance speaker verification using PLDA model adaptation and data augmentation (PLDA 모델 적응과 데이터 증강을 이용한 짧은 발화 화자검증)

  • Yoon, Sung-Wook; Kwon, Oh-Wook
    • Phonetics and Speech Sciences / v.9 no.2 / pp.85-94 / 2017
  • Conventional speaker verification systems based on a time delay neural network, identity vectors, and probabilistic linear discriminant analysis (TDNN-Ivector-PLDA) are known to be very effective for verifying long-duration utterances. However, when test utterances are short, the duration mismatch between enrollment and test utterances significantly degrades the performance of TDNN-Ivector-PLDA systems. To compensate for the i-vector mismatch between long and short utterances, this paper proposes PLDA model adaptation with augmented data. A PLDA model is first trained on a vast amount of speech data, most of which is of long duration. The model is then adapted with i-vectors obtained from short-utterance data augmented by vocal tract length perturbation (VTLP). In computer experiments on the NIST SRE 2008 database, the proposed method achieves significantly better performance than conventional TDNN-Ivector-PLDA systems when there is a duration mismatch between enrollment and test utterances. (A sketch of VTLP-style warping follows this entry.)

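VTLP, the augmentation technique named above, warps the frequency axis of each utterance's spectrogram. Below is a minimal numpy sketch of the piecewise-linear warping of Jaitly and Hinton (2013); the warp factors and boundary parameter are assumptions, not the paper's exact settings.

```python
# Hedged sketch of VTLP-style frequency warping for data augmentation.
# The frequency axis of a magnitude spectrogram is warped by a factor
# alpha, keeping the Nyquist bin fixed; an illustration only.
import numpy as np

def vtlp_warp(spec, alpha, f_hi=0.8):
    """spec: (n_bins, n_frames) magnitude spectrogram; alpha: warp factor."""
    n_bins = spec.shape[0]
    f = np.arange(n_bins, dtype=float)
    nyq = n_bins - 1
    boundary = f_hi * nyq * min(alpha, 1.0) / alpha
    # Linear warp below the boundary, then a linear piece pinned at Nyquist
    warped = np.where(
        f <= boundary,
        alpha * f,
        nyq - (nyq - f_hi * nyq * min(alpha, 1.0)) * (nyq - f) / (nyq - boundary),
    )
    # Resample each frame onto the warped frequency axis
    out = np.empty_like(spec)
    for t in range(spec.shape[1]):
        out[:, t] = np.interp(f, warped, spec[:, t])
    return out

# Each short utterance could be augmented with several warp factors,
# e.g. alpha in {0.9, 0.95, 1.05, 1.1}, before i-vector extraction.
```
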
Speaker Identification in Small Training Data Environment using MLLR Adaptation Method (MLLR 화자적응 기법을 이용한 적은 학습자료 환경의 화자식별)

  • Kim, Se-hyun; Oh, Yung-Hwan
    • Proceedings of the KSPS conference / 2005.11a / pp.159-162 / 2005
  • Speaker identification is the process of automatically determining who is speaking on the basis of information obtained from speech waves. In the training phase, each speaker's model is trained on that speaker's speech data. GMMs (Gaussian Mixture Models), which have been successfully applied to speaker modeling in text-independent speaker identification, are not efficient when training data are insufficient. This paper proposes a speaker modeling method based on MLLR (Maximum Likelihood Linear Regression), a technique originally used for speaker adaptation in speech recognition. Instead of training a speaker-dependent (SD) model directly, we build an SD-like model with MLLR adaptation. The proposed system outperforms GMMs in small-training-data environments. (A minimal MLLR sketch follows this entry.)

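MLLR estimates an affine transform of the speaker-independent model's Gaussian means from a small amount of adaptation data. The sketch below assumes identity covariances, so the maximum-likelihood solution reduces to weighted least squares; it illustrates the idea rather than the authors' exact implementation.

```python
# Minimal sketch of a global MLLR mean transform. With identity
# covariances the ML estimate of W = [A b] solves
#   sum_t sum_m gamma_m(t) * o_t xi_m^T = W * sum_t sum_m gamma_m(t) * xi_m xi_m^T
# where xi_m = [mu_m; 1] is the extended mean.
import numpy as np

def estimate_mllr(means, gammas, frames):
    """
    means : (M, D) Gaussian means of the speaker-independent GMM
    gammas: (T, M) frame-level posteriors of each mixture component
    frames: (T, D) adaptation feature vectors
    Returns W (D, D+1) with mu_adapted = W @ [mu, 1].
    """
    M, D = means.shape
    xi = np.hstack([means, np.ones((M, 1))])    # extended means, (M, D+1)
    G = np.zeros((D + 1, D + 1))                # sum of gamma * xi xi^T
    K = np.zeros((D, D + 1))                    # sum of gamma * o xi^T
    for m in range(M):
        g = gammas[:, m]                        # (T,)
        G += g.sum() * np.outer(xi[m], xi[m])
        K += np.outer(frames.T @ g, xi[m])
    return K @ np.linalg.inv(G)

def adapt_means(means, W):
    """Produce the 'SD-like' means from the speaker-independent ones."""
    xi = np.hstack([means, np.ones((means.shape[0], 1))])
    return xi @ W.T
```
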
Implementation of Extracting Specific Information by Sniffing Voice Packet in VoIP

  • Lee, Dong-Geon; Choi, WoongChul
    • International journal of advanced smart convergence / v.9 no.4 / pp.209-214 / 2020
  • VoIP technology has been widely used for exchanging voice and image data over IP networks. VoIP, often called Internet telephony, sends and receives voice data over the RTP protocol during a session. However, voice data carried over RTP are at risk of exposure, because the RTP specification does not provide for encryption of the payload. We implement programs that can extract meaningful information, i.e., the information that the program user wants to obtain, from the user's dialogue. The implementation has two parts: a client part, which takes as input the keywords of the information the user wants to obtain, and a server part, which sniffs the packets and performs speech recognition. We use the Google Speech API from Google Cloud, which applies machine learning to the speech recognition process. Finally, we discuss the usability and the limitations of the implementation with an example. (A sniffing sketch follows this entry.)

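The server side described above can be approximated with the scapy packet library: capture UDP traffic, strip the fixed 12-byte RTP header, and pass the accumulated payload to a recognizer. In this sketch the RTP port, the codec/buffering assumptions, and the transcribe() wrapper are all hypothetical placeholders, not the paper's code.

```python
# Hedged sketch of the sniffing server: capture UDP packets, skip the
# 12-byte fixed RTP header, buffer payload, and scan transcripts for
# the client's keywords. transcribe() is a hypothetical stand-in for
# the Google Cloud Speech call the paper uses.
from scapy.all import sniff, UDP, Raw

KEYWORDS = ["password", "account"]     # what the client asked to watch for
payload_buffer = bytearray()

def transcribe(pcm_bytes):
    """Hypothetical wrapper around the speech recognition service."""
    raise NotImplementedError

def handle(pkt):
    if UDP in pkt and Raw in pkt and pkt[UDP].dport == 5004:  # assumed RTP port
        rtp = bytes(pkt[Raw].load)
        payload_buffer.extend(rtp[12:])        # drop the fixed RTP header
        if len(payload_buffer) > 8000 * 5:     # ~5 s of 8 kHz G.711 audio
            text = transcribe(bytes(payload_buffer))
            payload_buffer.clear()
            for kw in KEYWORDS:
                if kw in text:
                    print("keyword hit:", kw)

sniff(filter="udp", prn=handle, store=False)
```
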
Selecting Good Speech Features for Recognition

  • Lee, Young-Jik; Hwang, Kyu-Woong
    • ETRI Journal / v.18 no.1 / pp.29-41 / 1996
  • This paper describes a method for selecting a suitable feature for speech recognition using an information-theoretic measure. Conventional speech recognition systems heuristically choose among frequency components, cepstrum, mel-cepstrum, energy, and their time differences as speech features. However, such systems cannot perform well if the selected features are unsuitable for speech recognition. Since the recognition rate is the only performance measure of a speech recognition system, it is hard to judge how suitable a selected feature is. To solve this problem, it is essential to analyze the feature itself and measure how good it is. Good speech features should contain all of the class-related information and as little class-irrelevant variation as possible. In this paper, we suggest a method to measure the class-related information and the amount of class-irrelevant variation based on Shannon's information theory. Using this method, we compare the mel-scaled FFT, cepstrum, mel-cepstrum, and wavelet features of the TIMIT speech data. The results show that, among these features, the mel-scaled FFT is the best feature for speech recognition according to the proposed measure. (A comparison sketch follows this entry.)

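The paper's criterion, maximizing class-related information while minimizing class-irrelevant variation, can be approximated off the shelf by estimating the mutual information between features and phone labels. The sketch below uses scikit-learn's estimator as a stand-in for the paper's own Shannon-based measure; the feature arrays and labels are assumed to be precomputed.

```python
# Illustrative feature comparison: score competing feature sets by the
# estimated mutual information they carry about phone class labels.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def feature_score(features, labels):
    """features: (n_frames, n_dims); labels: (n_frames,) phone classes.
    Returns total estimated MI (nats) summed over dimensions."""
    return float(np.sum(mutual_info_classif(features, labels)))

# Compare candidate feature sets on the same labeled frames, e.g.:
# for name, feats in {"mel-FFT": mel_fft, "MFCC": mfcc}.items():
#     print(name, feature_score(feats, phone_labels))
```
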
Annotation of a Non-native English Speech Database by Korean Speakers

  • Kim, Jong-Mi
    • Speech Sciences / v.9 no.1 / pp.111-135 / 2002
  • An annotation model for a non-native speech database has been devised, with English as the target language and Korean as the native language. The proposed model features overt transcription of linguistic information that is predictable from the dictionary entry in native speech, together with several predefined types of error specification for errors arising from native-language transfer. In that sense, the proposed model differs from previously explored annotation models in the literature, most of which are based on native speech. The validity of the model is demonstrated by its consistent annotation of 1) salient linguistic features of English, 2) contrastive linguistic features of English and Korean, 3) actual errors reported in the literature, and 4) the data newly collected in this study. The annotation method adopts two widely accepted conventions: the Speech Assessment Methods Phonetic Alphabet (SAMPA), employed exclusively for segmental transcription, and the TOnes and Break Indices (ToBI) system, employed for prosodic transcription. The annotation of non-native speech is used to assess the speaking ability of English as a Foreign Language (EFL) learners. (An illustrative annotation record follows this entry.)

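A single record under this two-tier model might combine a SAMPA segmental tier with a ToBI prosodic tier and an error tag, as in the hypothetical sketch below; all field names are invented for illustration and are not the paper's schema.

```python
# Hypothetical annotation record: SAMPA for segments (with a transfer-error
# tag) and ToBI for prosody, mirroring the conventions named above.
annotation = {
    "speaker": "KOR_F01",
    "word": "rice",
    "target_sampa": "r aI s",
    "realized_sampa": "l aI s",              # /r/ -> [l]: L1 transfer error
    "error_type": "segment_substitution",
    "tobi": {"tones": ["H*", "L-L%"], "break_index": 4},
}
```
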
Comparison of overall speaking rate and pause between children with speech sound disorders and typically developing children (말소리장애 아동과 일반 아동의 발화 속도와 쉼 비교)

  • Lee, HeungIm; Kim, SooJin
    • Phonetics and Speech Sciences / v.9 no.2 / pp.111-118 / 2017
  • This study compares speech rate, articulatory rate, and pause between children with mild and moderate Speech Sound Disorder (SSD), who performed sentence repetition tasks, and typically developing (TD) children of the same chronological age. The results showed that the three groups can be distinguished by speaking rate and articulatory rate: there is no difference between the two SSD groups (mild vs. moderate), but both are significantly slower than the TD group in speech rate and articulatory rate. Likewise, there is no significant difference in pause length and frequency between the moderate and mild groups, whereas both differ substantially from the TD group. This study provides baseline data for evaluating children's speech rate and indicates that children with SSD are limited in speech rate. (A measurement sketch follows this entry.)

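The measures compared above, speaking rate, articulatory rate, and pause statistics, can be computed from a recording plus a syllable count. The sketch below detects pauses with a simple energy threshold (librosa.effects.split); the threshold and the externally supplied syllable count are assumptions, not the study's protocol.

```python
# Hedged sketch: overall speaking rate (pauses included), articulatory
# rate (pauses excluded), and pause statistics for one utterance.
import librosa
import numpy as np

def rate_and_pauses(wav_path, n_syllables, top_db=30):
    y, sr = librosa.load(wav_path, sr=None)
    total = len(y) / sr
    # Non-silent intervals, as (start, end) sample indices
    speech = librosa.effects.split(y, top_db=top_db)
    speech_time = sum((e - s) for s, e in speech) / sr
    # Gaps between consecutive speech intervals count as pauses
    pauses = [(s - prev_e) / sr
              for (_, prev_e), (s, _) in zip(speech[:-1], speech[1:])]
    return {
        "speaking_rate": n_syllables / total,          # syll/s, incl. pauses
        "articulatory_rate": n_syllables / speech_time,
        "pause_count": len(pauses),
        "mean_pause": float(np.mean(pauses)) if pauses else 0.0,
    }
```
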
DNN based Speech Detection for the Media Audio (미디어 오디오에서의 DNN 기반 음성 검출)

  • Jang, Inseon; Ahn, ChungHyun; Seo, Jeongil; Jang, Younseon
    • Journal of Broadcast Engineering / v.22 no.5 / pp.632-642 / 2017
  • In this paper, we propose a DNN-based speech detection system that uses the acoustic characteristics and context information of media audio. Speech detection, which discriminates between speech and non-speech in media audio, is a necessary preprocessing step for effective speech processing. However, since media audio contains various types of sound sources, it has been difficult to achieve high performance with conventional signal processing techniques. The proposed method improves detection performance by separating the harmonic and percussive components of the media audio and constructing a DNN input vector that reflects its acoustic characteristics and context information. To verify the performance of the proposed system, a speech detection data set was built from more than 20 hours of drama, and a publicly available 8-hour Hollywood movie data set was additionally used in the experiments. Cross-validation on the two data sets shows that the proposed system outperforms the conventional method. (A feature-construction sketch follows this entry.)

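The input construction described above can be sketched as follows: split the audio into harmonic and percussive components, compute log-mel features for each, and stack neighboring frames as context for the DNN. The sample rate, mel resolution, and the +/-2-frame context are assumptions; the classifier itself is omitted.

```python
# Hedged sketch of the DNN input vector: harmonic/percussive separation
# (librosa HPSS) plus log-mel features with stacked context frames.
import librosa
import numpy as np

def speech_detection_features(wav_path, n_mels=40, context=2):
    y, sr = librosa.load(wav_path, sr=16000)
    y_harm, y_perc = librosa.effects.hpss(y)
    feats = []
    for part in (y_harm, y_perc):
        mel = librosa.feature.melspectrogram(y=part, sr=sr, n_mels=n_mels)
        feats.append(librosa.power_to_db(mel))
    x = np.vstack(feats)                      # (2*n_mels, n_frames)
    # Stack +/-context frames around each frame as the DNN input vector
    padded = np.pad(x, ((0, 0), (context, context)), mode="edge")
    windows = [padded[:, t:t + 2 * context + 1].ravel()
               for t in range(x.shape[1])]
    return np.array(windows)                  # (n_frames, input_dim)
```
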
A Comparative Study on Speech Rate Variation between Japanese/Chinese Learners of Korean and Native Korean (학습자의 발화 속도 변이 연구: 일본인과 중국인 한국어 학습자와 한국어 모어 화자 비교)

  • Kim, Miran; Gang, Hyeon-Ju; Ro, Juhyoun
    • Korean Linguistics / v.63 / pp.103-132 / 2014
  • This study compares various speech rates of Korean learners with those of native Korean speakers. Speech data were collected from 34 native Koreans and 33 learners of Korean (19 Chinese and 14 Japanese). Each participant recorded a nine-syllable Korean sentence at three different speech rate types. A total of 603 speech samples were analyzed by speech rate type (normal, slow, and fast), native language (Korean, Chinese, Japanese), and learner proficiency level (beginner, intermediate, and advanced). We found that learners' L1 background plays a role in how they categorize different speech rates in the L2 (Korean), and that learners' proficiency correlates with an increase in speaking rate regardless of speech rate category. More importantly, the faster speech rate values found at the advanced level do not necessarily match the native speakers' speech rate categories. This suggests that learning speech rate categories is more complex than a simple function of proficiency or fluency: speech rate categories may not be acquired automatically in the course of second language learning, and implicit or explicit exposure to various rate types is necessary for second language learners to acquire a high level of communicative skill, including speech rate variation. The paper discusses several pedagogical implications for teaching pronunciation to second language learners. (An analysis sketch follows this entry.)

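An analysis of this kind reduces to comparing syllables-per-second across groups within each rate type. The sketch below runs a one-way ANOVA across L1 groups for the normal rate; the CSV layout and column names are invented, and the study's own statistics may well differ.

```python
# Hypothetical group comparison of speaking rates within one rate type.
import pandas as pd
from scipy import stats

df = pd.read_csv("speech_rates.csv")   # assumed columns: l1, rate_type, syll_per_sec
normal = df[df["rate_type"] == "normal"]
groups = [g["syll_per_sec"].values
          for _, g in normal.groupby("l1")]   # Korean, Chinese, Japanese
f, p = stats.f_oneway(*groups)
print(f"one-way ANOVA across L1 groups: F={f:.2f}, p={p:.4f}")
```
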
A Study of Correlation Between Severity of Vocal Polyp and Acoustic Parameters (성대용종의 중증도와 음향지수의 상관관계)

  • Hong, Ki-Hwan; Yang, Yoon-Soo; Kim, Jin-Sung; Lee, Jae-Keun; Lee, Eun-Jung
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics / v.17 no.1 / pp.17-27 / 2006
  • Background and Objectives: Vocal polyp is the most common disease causing hoarseness, and its incidence is currently increasing. The purposes of this study are to investigate the correlation between the severity of vocal polyps and acoustic parameters, and to compare these data with those of normal Korean speakers. Materials and Methods: We analyzed the acoustic parameters of a sustained vowel for 70 vocal polyp patients and 20 normal controls. CSL (Computerized Speech Lab) was used to analyze the voice samples, and statistical analyses used Spearman correlation coefficients and t-tests. Results: The correlation analysis showed that 21 of the 34 parameters were significant. Conclusion: These data can serve as a baseline for the postoperative assessment of patients with vocal polyps. (A correlation sketch follows this entry.)

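The reported analysis rank-correlates each acoustic parameter with polyp severity. A minimal sketch with scipy's Spearman estimator follows; the data layout (a severity column plus one column per acoustic parameter) is an assumption.

```python
# Sketch of the reported analysis: Spearman rank correlation of each
# acoustic parameter (e.g., jitter, shimmer) against polyp severity.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("polyp_acoustics.csv")   # hypothetical measurements file
for param in df.columns.drop("severity"):
    rho, p = spearmanr(df["severity"], df[param])
    flag = "*" if p < 0.05 else ""
    print(f"{param:>12}: rho={rho:+.2f}, p={p:.3f}{flag}")
```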