• Title/Summary/Keyword: continuous speech


A comparison of normalized formant trajectories of English vowels produced by American men and women

  • Yang, Byunggon
    • Phonetics and Speech Sciences
    • /
    • v.11 no.1
    • /
    • pp.1-8
    • /
    • 2019
  • Formant trajectories reflect the continuous variation of speakers' articulatory movements over time. This study examined formant trajectories of English vowels produced by ninety-three American men and women; the values were normalized using the scale function in R and compared using generalized additive mixed models (GAMMs). Praat was used to read the sound data of Hillenbrand et al. (1995). A formant analysis script was prepared, and six formant values at the corresponding time points within each vowel segment were collected. The results indicate that women yielded proportionately higher formant values than men. The standard deviations of each group showed similar patterns along the first formant (F1) and second formant (F2) axes and at the measurement points. R was used to scale the first two formant data sets of men and women separately. GAMMs of all the scaled formant data produced various patterns of deviation along the measurement points. Generally, there is more group difference in F1 than in F2. Also, women's trajectories appear more dynamic along the vertical and horizontal axes than those of men. The trajectories are related acoustically to F1 and F2 and anatomically to jaw opening and tongue position. We conclude that scaling and nonlinear testing are useful tools for pinpointing differences between speaker groups' formant trajectories. This research could serve as a foundation for future studies comparing curvilinear data sets.
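The z-score normalization performed by R's `scale()` can be sketched in a few lines of Python; the F1 values below are invented for illustration and are not data from Hillenbrand et al. (1995).

```python
# Z-score scaling of formant trajectories, analogous to R's scale():
# each group's formant values are centered and divided by their
# (sample) standard deviation so that men's and women's F1/F2
# trajectories become comparable on a common scale.
# The numbers below are hypothetical, not measured data.

def scale(values):
    """Center and standardize a list of values, like R's scale()."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
    return [(v - mean) / sd for v in values]

# Six F1 measurements (Hz) across one vowel segment, per group
f1_men = [550.0, 580.0, 610.0, 620.0, 600.0, 570.0]
f1_women = [650.0, 690.0, 730.0, 745.0, 720.0, 680.0]

scaled_men = scale(f1_men)
scaled_women = scale(f1_women)
```

After scaling, both groups' trajectories are centered at zero with unit variance, so remaining differences reflect trajectory shape rather than overall formant level.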

A Study of Segmental and Syllabic Intervals of Canonical Babbling and Early Speech

  • Chen, Xiaoxiang;Xiao, Yunnan
    • Cross-Cultural Studies
    • /
    • v.28
    • /
    • pp.115-139
    • /
    • 2012
  • Interval, or duration, of segments, syllables, words, and phrases is an important acoustic feature that influences the naturalness of speech. A number of cross-sectional studies of the acoustic characteristics of children's speech development have found that the intervals of segments, syllables, words, and phrases tend to change as children grow. One hypothesis assumed that decreases in intervals would be greater when children were younger and smaller when they were older (Thelen, 1991); this has been supported by a number of cross-sectional studies (Tingley & Allen, 1975; Kent & Forner, 1980; Chermak & Schneiderman, 1986). The other hypothesis predicted the opposite: that decreases in intervals would be smaller when children were younger and greater when they were older (Smith, Kenney & Hussain, 1996). Researchers have thus arrived at conflicting postulations and inconsistent results about the trends in intervals of segments, syllables, words, and phrases, leaving the issue unresolved. Most acoustic investigations of children's speech production have used cross-sectional designs, which study several groups of children; so far there are only a few longitudinal studies, and acoustic measures of the intervals of child speech are hardly available. Moreover, all former studies focus on the word stages and exclude the babbling stages, especially the canonical babbling stage, yet we need to find out when concrete changes in intervals begin to occur and what causes them. Therefore, we conducted a longitudinal acoustic study of the interval characteristics of segments and words in canonical babbling (CB) and early speech in an infant acquiring Mandarin Chinese, aged 0;9 to 2;4. The research addresses two questions: 1. Are decreases in interval greater when children are younger and smaller when they are older, or vice versa? 2. Do the acoustic interval features of child speech drift in the direction of the language the child is exposed to? The subject, a monolingual, typically developing female infant of well-educated parents, whose L1 was Southern Mandarin and who lived in Changsha, was audio- and video-taped at her home for about one hour almost weekly from age 0;9 to 2;4 under natural observation by the investigators. The recordings were digitized and parts of the material were labeled; all repetitions were excluded. Utterances were extracted from 44 sessions, each ranging from 30 minutes to one hour, spanning the transition from babble to speech, and were divided into segments as well as syllable-sized units. The age stages were 0;9-1;0, 1;1-1;5, 1;6-2;0, and 2;1-2;4. The data were transcribed in narrow IPA and coded for analysis: babble was coded from 0;9-1;0 and words from 1;0 to 2;4, and the coding was checked by two professionally trained phoneticians. The answer to Research Question 1 is that our results agree with neither hypothesis. On the whole, segmental and syllabic durations tend to decrease with age, but the changes are neither drastic nor abrupt. For example, /a/ after /k/ in Table 1 shows its greatest decrease during 1;1-1;5, while /a/ after /p/, /t/, and /w/ shows its greatest decrease during 2;1-2;4; likewise, /ka/ decreases most during 1;1-1;5, while /ta/ and /na/ decrease most during 2;1-2;4. Across the age periods, interval change fluctuates continually. The answer to Research Question 2 is yes: already during the babbling stage, the acoustic interval features of segments, syllables, words, and phrases shift in the direction of the language to be learned, and babbling and the emergence of child speech are greatly influenced by the ambient language. The durational changes continue until as late as 10-12 years of age before reaching adult-like levels; with increasing exposure to the ambient language, the variation lessens until children attain adult-like competence. Analysis in SPSS 15.0 showed that the decrease of segmental and syllabic intervals across the four age periods was not statistically significant (p>0.05), which means that the change of segmental and syllabic intervals is continuous, and that the process of child speech development is gradual and cumulative.
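The study's basic comparison, mean interval per age stage, can be sketched as follows; the syllable tokens and durations here are invented for illustration, not the study's measurements.

```python
# Grouping syllable durations (ms) by the study's four age stages and
# computing the mean interval per stage. All tokens are hypothetical.

AGE_STAGES = ["0;9-1;0", "1;1-1;5", "1;6-2;0", "2;1-2;4"]

# (age_stage, syllable, duration_ms) — invented example tokens
tokens = [
    ("0;9-1;0", "ka", 410.0), ("0;9-1;0", "ta", 395.0),
    ("1;1-1;5", "ka", 350.0), ("1;1-1;5", "ta", 380.0),
    ("1;6-2;0", "ka", 340.0), ("1;6-2;0", "ta", 345.0),
    ("2;1-2;4", "ka", 330.0), ("2;1-2;4", "ta", 300.0),
]

def mean_interval(stage):
    """Mean duration of all tokens recorded in the given age stage."""
    durs = [d for s, _, d in tokens if s == stage]
    return sum(durs) / len(durs)

means = {stage: mean_interval(stage) for stage in AGE_STAGES}
```

A gradual downward trend across stages, as in this toy data, mirrors the paper's finding that durations decrease with age without drastic jumps.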

A rule-based syllable recovery system and the application of a morphological analysis method for the post-processing of continuous speech recognition (연속음성인식 후처리를 위한 음절 복원 rule-based 시스템과 형태소분석기법의 적용)

  • 박미성;김미진;김계성;최재혁;이상조
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.36C no.3
    • /
    • pp.47-56
    • /
    • 1999
  • Various phonological alternations occur when Korean is pronounced continuously. These alternations are one of the major reasons why Korean speech recognition is difficult. This paper presents a rule-based system that converts a speech-recognition character string into a text-based character string. The recovery results are morphologically analyzed, and only correct text strings are generated. Recovery is executed according to four kinds of rules: a syllable-boundary final-consonant/initial-consonant recovery rule, a vowel-process recovery rule, a last-syllable final-consonant recovery rule, and a monosyllable-process rule. We use x-clustering information for efficient recovery, and postfix-syllable frequency information to restrict the recovery candidates passed to the morphological analyzer. Because the system is rule-based, it requires neither a large pronouncing dictionary nor a phoneme dictionary, and it has the advantage that an existing text-based morphological analyzer can be used.
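The recover-then-filter idea can be sketched generically: apply recovery rules to the recognized (pronounced) string to propose orthographic candidates, then keep only candidates a morphological analyzer accepts. The rules and one-word lexicon below are hypothetical stand-ins for the paper's Korean recovery rules and its real analyzer.

```python
# Toy rule-based recovery: phonological alternation hides the
# underlying form, so each surface substring maps to one or more
# possible underlying substrings. A lexicon lookup stands in for
# the morphological analyzer that filters the candidates.

# pronounced substring -> possible underlying (orthographic) substrings
RECOVERY_RULES = {
    "nm": ["nm", "km"],  # e.g. nasal assimilation could hide a /k/
    "mn": ["mn", "pn"],
}

def candidates(pronounced):
    """Generate recovery candidates by applying each matching rule once."""
    results = {pronounced}
    for surface, underlying_forms in RECOVERY_RULES.items():
        if surface in pronounced:
            for u in underlying_forms:
                results.add(pronounced.replace(surface, u, 1))
    return sorted(results)

# stand-in for the morphological analyzer: a tiny word list
LEXICON = {"hakmun"}

def recover(pronounced):
    """Return only the candidates the 'analyzer' accepts."""
    return [c for c in candidates(pronounced) if c in LEXICON]
```

For example, `recover("hanmun")` proposes both `hanmun` and `hakmun` but keeps only the lexical form, which is the role the morphological analyzer plays in the actual system.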


A Study on Automatic Phoneme Segmentation of Continuous Speech Using Acoustic and Phonetic Information (음향 및 음소 정보를 이용한 연속제의 자동 음소 분할에 대한 연구)

  • 박은영;김상훈;정재호
    • The Journal of the Acoustical Society of Korea
    • /
    • v.19 no.1
    • /
    • pp.4-10
    • /
    • 2000
  • This paper describes a postprocessor that improves the performance of an automatic speech segmentation system by correcting phoneme boundary errors. The proposed postprocessor reduces the range of errors in automatically labeled results so that they can be used directly as synthesis units. Starting from a baseline automatic segmentation system, the postprocessor is trained on the features of hand-labeled results using a multi-layer perceptron (MLP); the automatically labeled result combined with the MLP postprocessor then determines the new phoneme boundary. In detail, we first select feature sets of speech based on acoustic-phonetic knowledge. We adopt the MLP as the pattern classifier because of its excellent nonlinear discrimination capability, and because an MLP can readily capture the various types of acoustic features that appear at phoneme boundaries within a short time window. Finally, an appropriate feature set, analyzed for each phonetic event, is applied to the postprocessor to compensate for the phoneme boundary error. On phonetically rich sentence data, we achieved a 19.9% improvement in frame accuracy compared with the plain automatic labeling system, and reduced the absolute error rate by about 28.6%.
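The train-then-refine idea can be sketched with a deliberately tiny MLP; everything here, including the single "spectral flux" feature, the network size, and the data, is invented for illustration and is much simpler than the paper's feature sets.

```python
import random
from math import exp

# A tiny 1-input MLP trained by plain backpropagation to score frames
# as boundary vs non-boundary, then used to nudge an initial automatic
# boundary to the best-scoring nearby frame.

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

class TinyMLP:
    """One input, a few sigmoid hidden units, one sigmoid output."""
    def __init__(self, hidden=3):
        self.w1 = [random.uniform(-1, 1) for _ in range(hidden)]
        self.b1 = [0.0] * hidden
        self.w2 = [random.uniform(-1, 1) for _ in range(hidden)]
        self.b2 = 0.0

    def forward(self, x):
        self.h = [sigmoid(w * x + b) for w, b in zip(self.w1, self.b1)]
        return sigmoid(sum(w * h for w, h in zip(self.w2, self.h)) + self.b2)

    def train(self, data, epochs=3000, lr=1.0):
        for _ in range(epochs):
            for x, y in data:
                out = self.forward(x)
                d_out = (out - y) * out * (1.0 - out)
                for i, h in enumerate(self.h):
                    d_h = d_out * self.w2[i] * h * (1.0 - h)  # uses pre-update w2
                    self.w2[i] -= lr * d_out * h
                    self.w1[i] -= lr * d_h * x
                    self.b1[i] -= lr * d_h
                self.b2 -= lr * d_out

# toy training rule: boundary frames have high spectral flux (> 0.5)
train_data = [(f / 10.0, 1.0 if f > 5 else 0.0) for f in range(11)]
mlp = TinyMLP()
mlp.train(train_data)

def refine_boundary(flux, initial, radius=2):
    """Move the boundary to the best-scoring frame within +/- radius."""
    lo = max(0, initial - radius)
    hi = min(len(flux), initial + radius + 1)
    return max(range(lo, hi), key=lambda i: mlp.forward(flux[i]))

flux = [0.1, 0.2, 0.1, 0.9, 0.2]
new_boundary = refine_boundary(flux, initial=2)
```

The baseline segmenter's boundary (frame 2) is shifted to the frame the MLP scores highest, which is the essence of the paper's error-compensation step.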


The Effect of Accent Method in Treating Vocal Nodule Patients (성대결절 환자에서 액센트 치료법의 효과)

  • Kwon, Soon-Bok;Kim, Yong-Ju;Jo, Cheol-Woo;Jun, Kye-Rok;Lee, Byung-Joo;Wang, Soo-Geun
    • Speech Sciences
    • /
    • v.8 no.4
    • /
    • pp.87-98
    • /
    • 2001
  • Vocal nodules are one of the representative chronic diseases of the vocal folds, and they can be treated by surgical removal or voice therapy. The aim of this study was to evaluate the effect of the accent method, one of the popular and effective voice therapies, in patients with vocal nodules. The authors applied the accent method to 17 patients with vocal nodules who visited the Voice & Speech Therapy Clinic of Pusan National University Hospital, and analyzed the voice before and after treatment using local findings, acoustic analysis, and aerodynamic analysis (maximum phonation time, MPT). The voice was analyzed with the MDVP of CSL, and MPT was measured with a stop watch. The acoustic parameters included F0, jitter, shimmer, and noise-to-harmonic ratio (NHR). The results were as follows. In the evaluation by local findings, 77% of the patients with vocal nodules improved. Jitter and shimmer improved significantly. The improvement in the aerodynamic aspect was statistically more significant than that of the acoustic parameters. Taken together, these results suggest that the accent method is a useful voice therapy that can help improve the voice of patients with vocal nodules; among the many methods currently used in voice therapy, the accent method is a good alternative to continuing medical treatment.
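Jitter and shimmer, the acoustic parameters measured here, quantify cycle-to-cycle variability of the glottal period and amplitude. A minimal sketch of the common "local" percent definitions follows; the period and amplitude values are invented, and MDVP's exact computation may differ in detail.

```python
# Local jitter and shimmer as commonly defined: the mean absolute
# difference between consecutive glottal cycles, divided by the mean
# value, expressed in percent. Values below are hypothetical.

def local_perturbation(values):
    """Mean absolute consecutive difference / mean, as a percentage."""
    diffs = [abs(a - b) for a, b in zip(values, values[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(values) / len(values))

periods_ms = [8.0, 8.1, 7.9, 8.05, 7.95]    # glottal cycle lengths
amplitudes = [0.50, 0.52, 0.49, 0.51, 0.50]  # per-cycle peak amplitudes

jitter_percent = local_perturbation(periods_ms)    # period perturbation
shimmer_percent = local_perturbation(amplitudes)   # amplitude perturbation
```

Lower values after therapy, as reported in the study, indicate more regular vocal-fold vibration.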


The Acoustic Study on the Voices of Korean Normal Adults (한국 성인의 정상 음성에 관한 기본 음성 측정치 연구)

  • Pyo, H.Y.;Sim, H.S.;Song, Y.K.;Yoon, Y.S.;Lee, E.K.;Lim, S.E.;Hah, H.R.;Choi, H.S.
    • Speech Sciences
    • /
    • v.9 no.2
    • /
    • pp.179-192
    • /
    • 2002
  • The present study was performed to investigate acoustically the voices of Korean normal adults, with a large enough number of subjects to be reliable. 120 Korean normal adults (60 males and 60 females) aged 20 to 39 years produced three sustained vowels, /a/, /i/, and /u/, and read a part of the 'Taking a Walk' paragraph. By analyzing the recordings acoustically with the MDVP of CSL, we obtained the fundamental frequency ($F_{0}$), jitter, shimmer, and NHR of the sustained vowels, and the speaking fundamental frequency ($SF_{0}$), highest speaking frequency (SFhi), and lowest speaking frequency (SFlo) of continuous speech. On average, male voices showed 118.1$\sim$122.6 Hz in $F_{0}$, 0.467$\sim$0.659% in jitter, 1.538$\sim$2.674% in shimmer, 0.117$\sim$0.114 in NHR, 120.8 Hz in $SF_{0}$, 183.2 Hz in SFhi, and 82.6 Hz in SFlo. Female voices showed 211.6$\sim$220.3 Hz in $F_{0}$, 0.678$\sim$0.935% in jitter, 1.478$\sim$2.582% in shimmer, 0.098$\sim$0.114 in NHR, 217.1 Hz in $SF_{0}$, 340.9 Hz in SFhi, and 136.0 Hz in SFlo. Among the 7 parameters, every parameter except shimmer showed a significant difference between male and female voices. When we compared the three vowels, they differed significantly from one another in shimmer and NHR for both genders, but not in $F_{0}$ for males or jitter for females.


Automatic Generation of Pronunciation Variants for Korean Continuous Speech Recognition (한국어 연속음성 인식을 위한 발음열 자동 생성)

  • 이경님;전재훈;정민화
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.2
    • /
    • pp.35-43
    • /
    • 2001
  • Many speech recognition systems use a pronunciation lexicon with multiple possible phonetic transcriptions for each word. The pronunciation lexicon is often created manually; this process requires much time and effort, and it is very difficult to maintain the consistency of the lexicon. To handle these problems, we present a model based on morphophonological analysis for automatically generating Korean pronunciation variants. By analyzing the phonological variations frequently found in spoken Korean, we derived about 700 phonemic contexts that trigger the multilevel application of the corresponding phonological processes, which consist of phonemic and allophonic rules. In generating pronunciation variants, morphological analysis is performed first to handle the variations of phonological words. According to the morphological category, a set of tables reflecting the phonemic context is looked up to generate pronunciation variants. Our experiments show that the proposed model produces mostly correct pronunciation variants of phonological words. We also estimated how useful the pronunciation lexicon and the training phonetic transcriptions produced by the proposed system are.
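The context-table lookup can be sketched generically: each phonemic context triggers one or more surface realizations, and every combination of applicable rules yields one pronunciation variant. The two rules below are hypothetical stand-ins for the roughly 700 Korean contexts the paper derives.

```python
from itertools import product

# phonemic context -> possible surface forms (first entry = unchanged);
# both rules are invented, illustrative stand-ins for real Korean rules
RULES = {
    "tm": ["tm", "nm"],  # e.g. obstruent nasalization before a nasal
    "lr": ["lr", "ll"],
}

def variants(phonemes):
    """Return all pronunciation variants of a phoneme string by
    enumerating every combination of applicable context rules."""
    applicable = [c for c in RULES if c in phonemes]
    results = set()
    for choice in product(*(RULES[c] for c in applicable)):
        s = phonemes
        for context, surface in zip(applicable, choice):
            s = s.replace(context, surface)
        results.add(s)
    return sorted(results)
```

For a lexicon, running `variants` over each canonical transcription yields the multiple phonetic transcriptions per word that the recognizer's pronunciation lexicon needs.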


Extracting Rules from Neural Networks with Continuous Attributes (연속형 속성을 갖는 인공 신경망의 규칙 추출)

  • Jagvaral, Batselem;Lee, Wan-Gon;Jeon, Myung-joong;Park, Hyun-Kyu;Park, Young-Tack
    • Journal of KIISE
    • /
    • v.45 no.1
    • /
    • pp.22-29
    • /
    • 2018
  • Over the decades, neural networks have been successfully used in numerous applications, from speech recognition to image classification. However, these neural networks cannot explain their results, and one needs to know how and why a specific conclusion was drawn. Most studies focus on extracting binary rules from neural networks, which is often impractical, since the data sets used in machine learning applications contain continuous values. To fill this gap, this paper presents an algorithm for extracting logic rules from a trained neural network for data with continuous attributes. It uses hyperplane-based linear classifiers to extract rules with numeric values from the trained weights between the input and hidden layers, and then combines these classifiers with binary rules learned from the hidden and output layers to form nonlinear classification rules. Experiments with different datasets show that the proposed approach can accurately extract logical rules for data with nonlinear continuous attributes.
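The hyperplane idea can be sketched simply: a hidden unit with weights w and bias b is active when w·x + b > 0, so each unit yields one linear inequality over the continuous inputs, and the output layer's binary logic combines those inequalities into rules. The weights below are hand-picked for illustration, not taken from a trained network.

```python
# Rendering a hidden unit's activation condition as a readable
# linear-inequality rule over named continuous attributes.

def unit_rule(weights, bias, names):
    """Return 'w1*x1 + ... + wn*xn > -b' as a human-readable rule."""
    terms = " + ".join(f"{w:g}*{n}" for w, n in zip(weights, names))
    return f"{terms} > {-bias:g}"

def fires(weights, bias, x):
    """True when the unit's hyperplane condition w.x + b > 0 holds."""
    return sum(w * v for w, v in zip(weights, x)) + bias > 0

# a hypothetical unit separating, say, high-pitch frames: 2*f0 - energy > 1
w, b = [2.0, -1.0], -1.0
rule = unit_rule(w, b, ["f0", "energy"])
```

Combining several such extracted inequalities with the AND/OR structure learned in the upper layers gives the nonlinear classification rules the paper describes.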

A Study on Speech Recognition Using the HM-Net Topology Design Algorithm Based on Decision Tree State-clustering (결정트리 상태 클러스터링에 의한 HM-Net 구조결정 알고리즘을 이용한 음성인식에 관한 연구)

  • 정현열;정호열;오세진;황철준;김범국
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.2
    • /
    • pp.199-210
    • /
    • 2002
  • In this paper, we study speech recognition using the HM-Net topology design algorithm based on decision-tree state clustering, with the goal of improving the performance of acoustic models in speech recognition. Korean has many allophonic and grammatical rules compared to other languages, so we investigated the allophonic variations defined in Korean phonetics and constructed the phoneme question set for the phonetic decision tree. The basic idea of the HM-Net topology design algorithm is that it takes the basic structure of the SSS (Successive State Splitting) algorithm and further splits the states of pre-constructed context-dependent acoustic models. That is, it generates a phonetic decision tree for each state of the models using the phoneme question sets, and iteratively trains the state sequences of the context-dependent acoustic models using the PDT-SSS (Phonetic Decision Tree-based SSS) algorithm. To verify the effectiveness of the algorithm, we carried out speech recognition experiments on 452 words from the Center for Korean Language Engineering corpus (KLE452) and 200 sentences from an air flight reservation task (YNU200). Experimental results show that recognition accuracy progressively improves with the number of states after splitting, in the phoneme, word, and continuous speech recognition experiments respectively. We obtained average phoneme and word recognition accuracies of 71.5% and 99.2% with 2,000 states, and an average continuous speech recognition accuracy of 91.6% with 800 states. We also carried out word recognition experiments using HTK (HMM Toolkit), which performs state tying, to compare its parameter sharing with that of the HM-Net topology design algorithm. In the word recognition experiments, the HM-Net topology design algorithm achieved on average 4.0% higher recognition accuracy than the context-dependent acoustic models generated by HTK, implying its effectiveness.
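One step of decision-tree state clustering can be sketched as follows: among a set of yes/no phoneme questions, pick the one whose split of the context-dependent states most reduces the variance of their acoustic statistics. The questions, contexts, and one-dimensional "acoustic means" below are invented for illustration; real systems score splits with Gaussian log-likelihood over full feature vectors.

```python
# Choosing the best phoneme question for one tree node by the
# variance reduction its yes/no split achieves over toy state means.

QUESTIONS = {
    "L-nasal?": {"m", "n", "ng"},  # is the left context a nasal?
    "L-stop?": {"p", "t", "k"},    # is the left context a stop?
}

# (left-context phoneme, acoustic mean of the state) — hypothetical
states = [("m", 1.0), ("n", 1.2), ("p", 3.0), ("t", 3.2),
          ("k", 2.9), ("s", 3.1)]

def variance(xs):
    if not xs:
        return 0.0
    mu = sum(xs) / len(xs)
    return sum((x - mu) ** 2 for x in xs) / len(xs)

def split_gain(question_set):
    """Variance reduction from splitting the states by this question."""
    yes = [v for p, v in states if p in question_set]
    no = [v for p, v in states if p not in question_set]
    total = [v for _, v in states]
    return variance(total) - (len(yes) / len(total)) * variance(yes) \
                           - (len(no) / len(total)) * variance(no)

best = max(QUESTIONS, key=lambda q: split_gain(QUESTIONS[q]))
```

Splitting greedily by the best question, then recursing on each side, is the shared skeleton of both HTK-style state tying and the PDT-SSS splitting described above.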

A Low-Power LSI Design of Japanese Word Recognition System

  • Yoshizawa, Shingo;Miyanaga, Yoshikazu;Wada, Naoya;Yoshida, Norinobu
    • Proceedings of the IEEK Conference
    • /
    • 2002.07a
    • /
    • pp.98-101
    • /
    • 2002
  • This paper reports a parallel architecture for an HMM-based speech recognition system designed for low-power LSI implementation. The proposed architecture calculates the output probability of continuous HMMs (CHMMs) using concurrent and pipelined processing, which reduces memory accesses and yields high computing efficiency. The novel point is the efficient use of register arrays, which reduces memory accesses considerably compared with conventional methods. With this technique, the implemented system achieves real-time response at a lower clock rate on a middle-size vocabulary recognition task (100-1000 words).
