• Title/Summary/Keyword: vowel addition

Search Results: 76

Laryngeal Findings and Phonetic Characteristics in Prelingually Deaf Patients (언어습득기 이전 청각장애인의 후두소견 및 음성학적 특성)

  • Kim, Seong-Tae; Yoon, Tae-Hyun; Kim, Sang-Yoon; Choi, Seung-Ho; Nam, Soon-Yuhl
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics / v.20 no.1 / pp.57-62 / 2009
  • Background and Objectives: Few studies have specifically examined laryngeal function in patients with profound hearing loss or deafness. This study was designed to examine videostroboscopic findings and phonetic characteristics in prelingually deaf adult patients. Materials and Method: Sixteen patients (seven males, nine females) diagnosed as prelingually deaf, aged 19 to 54 years, were compared with a control group of 20 subjects with no laryngeal pathology and normal hearing. Videostroboscopic evaluations were rated by experienced judges on various parameters describing the structure and function of the laryngeal mechanism during phonation at comfortable pitch and loudness. Acoustic analysis was performed, and a nasalance test was administered using the rabbit, baby, and mother passages. CSL (Computerized Speech Lab) was used to measure the first and second formant frequencies of the vowels /a/, /i/, and /u/. Statistical analysis was done using the Mann-Whitney U or Wilcoxon signed-rank test. Results: Videostroboscopic findings showed phase symmetry but significantly more frequent decrements in the amplitude of vibration and mucosal wave, irregularity of vibration, and increased glottal gap size during the closed phase of phonation. In addition, the prelingually deaf group showed significantly more frequent abnormal supraglottic activity during phonation. The shimmer percentage in the prelingually deaf group was higher than in the control group. Regarding vowel characteristics, the second formant of the vowel /i/ was lower. Nasalance in prelingually deaf patients was normal for all passages. Conclusion: Prelingually deaf patients show abnormal stroboscopic findings without any mucosal lesion, suggesting that they have a considerable functional voice disorder. We suggest that prelingually deaf adults undergo vocal training to normalize laryngeal function after cochlear implantation.
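
The study above measured F1 and F2 of the sustained vowels /a/, /i/, and /u/ with a CSL system. As a rough illustration of how such formant values can be obtained programmatically, the sketch below uses the praat-parselmouth Python library rather than CSL; the file name and analysis settings are placeholder assumptions, not the study's protocol.

```python
# Minimal sketch: estimate F1/F2 at the midpoint of a sustained vowel
# with praat-parselmouth (Burg method). This stands in for the CSL
# system used in the study; "vowel_a.wav" is a placeholder file name.
import parselmouth

snd = parselmouth.Sound("vowel_a.wav")                 # sustained /a/, /i/, or /u/
formant = snd.to_formant_burg(max_number_of_formants=5,
                              maximum_formant=5500.0)  # typical ceiling for adult females

t_mid = snd.duration / 2                               # sample at the vowel midpoint
f1 = formant.get_value_at_time(1, t_mid)               # first formant (Hz)
f2 = formant.get_value_at_time(2, t_mid)               # second formant (Hz)
print(f"F1 = {f1:.0f} Hz, F2 = {f2:.0f} Hz")
```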

Trend of conclusive expressions in Post-Modern Edo-language (근세후기 에도어에 나타나는 단정표현(断定表現)의 양상(樣相))

  • Um, Phil Kyo
    • Cross-Cultural Studies / v.25 / pp.775-798 / 2011
  • In the Post-Modern Edo-language of Japan, it is possible to find expression formats related to the current Tokyo language. In some cases, however, Tokyo language and Edo-language share the same format but differ in usage; one example is the ending portion of a sentence. This research investigates conclusive expressions of Edo-language in literary works, excluding the usage of "ダ". Various formats of conclusive expressions appear in conversation, and their usage is closely related to the speaker's sex, age, and social status. The study also shows that the social relationship between speaker and listener and the circumstances of the conversation affect the usage of conclusive expressions. In addition, usage does not conform to current standard Japanese. 1. The "である(dearu)" format is currently seldom used in speaking and appears alongside "だ" only in writing. The study found no case of "である(dearu)" in conclusive expressions, but some use of "であろうて(dearoute)/であらうな(dearouna)" and "であったのう(deattanou)/であったよ(deattayo)", only among old-aged males. 2. The "であります(dearimasu)" format is a typical Edo-language format used by society-women (Japanese hostesses with a good education and elegant speaking skills). This format was used once in "浮世風呂"(ukiyoburo) and 14 times in "梅暦"(umegoyomi), and the speakers were always female. The 14 occurrences in "梅暦" are closely related to the fact that its main characters are society-women and its genre is "人情本"(ninjoubon), a popular type of literature (based on humanity and romance) in the late Edo period. 3. The "でござる(degozaru)" format was originally used as a respect-language but later changed into a polite language. The format is always used by males; it is a male language used by old-aged people of genteel manner, such as a medical doctor, a retired man, or a comic-song writer. 4. "ございます(gozaimasu)/ごぜへます(gozeemasu)": the study found that a speaker's social status is connected with the use of the "ごぜへます(gozeemasu)" format, which is the "ございます(gozaimasu)" format with the long vowel [eː] in place of [ai]. "ごぜへます(gozeemasu)" is used more by females than by males, and only by young and mid-to-low-class people. The format has a tough nuance and a less elegant feel, so high-class and/or educated ladies show a clear tendency to avoid it.

Prediction of speaking fundamental frequency using the voice and speech range profiles in normal adults (정상 성인에서 음성 및 말소리 범위 프로파일을 이용한 발화 기본주파수 예측)

  • Lee, Seung Jin; Kim, Jaeock
    • Phonetics and Speech Sciences / v.11 no.3 / pp.49-55 / 2019
  • This study sought to investigate whether the mean speaking fundamental frequency (SFF) can be predicted from parameters of the voice range profile (VRP) and speech range profile (SRP) in normal Korean adults. Moreover, it explored whether gender differences exist in the absolute difference between the SFF and the estimated SFF (ESFF) predicted by the VRP and SRP. A total of 85 native Korean speakers with normal voice participated in the study. Each participant was asked to perform the VRP task using the vowel /a/ and the SRP task using the first sentence of the Korean standard passage "Ga-eul". In addition, the SFF was measured with electroglottography during a passage reading task. Predictive factors of the SFF were explored, and the absolute difference between the SFF and the ESFF (DSFF) was compared between gender groups. Results indicated that the predictive factors were age, gender, minimum pitch, and pitch range for the VRP (adjusted $R^2=.931$), and pitch range (in semitones) and maximum pitch for the SRP (adjusted $R^2=.963$). The SFF and the ESFF predicted by the VRP and SRP showed a strong positive correlation. The DSFF of the VRP and SRP, as well as their sum, did not differ by gender. In conclusion, the SFF during a passage reading task could be successfully predicted from the parameters of the VRP and SRP tasks. In further studies, the clinical implications need to be explored in patients who may exhibit deviations in SFF.
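
The abstract above describes a multiple regression predicting SFF from VRP/SRP parameters. As a minimal sketch of that kind of model, the code below fits an ordinary least-squares regression with scikit-learn; the CSV file and column names are hypothetical placeholders, not the study's data, and `score()` returns plain (not adjusted) R².

```python
# Minimal sketch: predict speaking fundamental frequency (SFF) from
# VRP parameters with multiple linear regression, mirroring the kind
# of model described in the abstract. Columns are hypothetical.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("vrp_data.csv")                 # one row per speaker (placeholder file)
X = df[["age", "gender", "min_pitch_hz", "pitch_range_st"]]  # predictors named in the
y = df["sff_hz"]                                 # abstract; gender assumed coded 0/1

model = LinearRegression().fit(X, y)
esff = model.predict(X)                          # estimated SFF (ESFF)
dsff = (y - esff).abs()                          # absolute difference |SFF - ESFF| (DSFF)
print("R^2:", model.score(X, y), "| mean DSFF:", dsff.mean())
```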

Laryngeal height and voice characteristics in children with autism spectrum disorders (자폐스펙트럼장애 아동의 후두 높이 및 음성 특성)

  • Lee, Jung-Hun; Kim, Go-Woon; Kim, Seong-Tae
    • Phonetics and Speech Sciences / v.13 no.2 / pp.91-101 / 2021
  • The purpose of this study was to investigate laryngeal characteristics in children with autism spectrum disorder (ASD). A total of 50 children participated, including eight children aged 2 to 4 years diagnosed with ASD and 42 normal controls of the same age. X-ray images of the midsagittal plane of the cervical spine and larynx were recorded for all children, and the laryngeal positions of the ASD and control groups were compared. In addition, vowel prolongation samples were collected from the children and analyzed for acoustic parameters. X-rays showed that the height of the hyoid bone in the normal group was lowest at 3 years of age and ascended at 4 years of age. Nevertheless, the distance from the external acoustic meatus to the hyoid bone was longest at age 4. Four-year-olds, who are in a period of explosive language development, showed elevation and anteriorization of the larynx. In contrast, the hyoid height of the ASD group was lower than that of the control group at all ages, and there was no difference in hyoid position between the ages. Acoustic evaluation showed that PFR, vFo, and vAm were significantly higher in the ASD group than in the control group. The low laryngeal height of children with ASD may be associated with delayed language development. PFR, vFo, and vAm appear to be voice markers that distinguish children with ASD from normal controls.
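
PFR, vFo, and vAm are MDVP-style perturbation measures. The sketch below shows how two of them (vFo, the coefficient of F0 variation, and PFR, the phonatory F0 range in semitones) could be approximated from a Praat pitch track via parselmouth; MDVP's exact algorithms differ, so this is illustrative only, and the file name is a placeholder.

```python
# Rough approximation of two MDVP-style measures from a sustained vowel:
#   vFo - coefficient of F0 variation (std/mean, in %)
#   PFR - phonatory fundamental frequency range, in semitones
import numpy as np
import parselmouth

snd = parselmouth.Sound("vowel_a.wav")          # placeholder file name
pitch = snd.to_pitch()
f0 = pitch.selected_array["frequency"]
f0 = f0[f0 > 0]                                 # keep voiced frames only

vfo = f0.std() / f0.mean() * 100                # F0 variation in percent
pfr = 12 * np.log2(f0.max() / f0.min())         # semitone range of F0
print(f"vFo ~ {vfo:.2f} %, PFR ~ {pfr:.1f} semitones")
```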

Acoustic characteristics of speech-language pathologists related to their subjective vocal fatigue (언어재활사의 주관적 음성피로도와 관련된 음향적 특성)

  • Jeon, Hyewon; Kim, Jiyoun; Seong, Cheoljae
    • Phonetics and Speech Sciences / v.14 no.3 / pp.87-101 / 2022
  • In addition to administering a questionnaire (J-survey) on subjective vocal fatigue, voice samples were collected before and after speech-language pathology sessions from 50 female speech-language pathologists in their 20s and 30s in the Daejeon and Chungnam areas. We identified significant differences in Korean Vocal Fatigue Index scores between the fatigue and non-fatigue groups, with the most prominent differences in sections one and two. Regarding acoustic-phonetic characteristics, both groups showed a pattern in which low-frequency band energy was relatively low and high-frequency band energy was increased after the treatment sessions. This trend was well reflected in the low-to-high energy ratio of vowels, the LTAS slope, the energy in the third formant, and the energy in the 4,000-8,000 Hz range. A difference between the groups was observed only in the vowel energy of the low-frequency band (0-4,000 Hz) before treatment, with the non-fatigue group having a higher value than the fatigue group. This characteristic could be interpreted as a result of voice abuse and the higher muscle tonus caused by long-term voice work. Among the perturbation parameters, shimmer local was lowered in the non-fatigue group after treatment, and the noise-to-harmonics ratio (NHR) was lowered in both groups following treatment. The decrease in NHR and the fall in shimmer local could be attributed to vocal cord hypertension, but the effective voice use of speech-language pathologists, especially in the non-fatigue group, also appears to have contributed to this effect. In the non-fatigue group, the rahmonics-to-noise ratio increased significantly after treatment, indicating that the harmonic structure was more stable after treatment.
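
The low-to-high ratio mentioned above compares spectral energy below and above a cutoff, and the abstract names the 0-4,000 Hz and 4,000-8,000 Hz bands. A minimal sketch of that computation with SciPy follows; the WAV file is a placeholder, and the study's exact windowing and normalization are not reproduced.

```python
# Minimal sketch: low-to-high spectral energy ratio of a vowel,
# comparing the 0-4 kHz and 4-8 kHz bands named in the abstract.
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

fs, x = wavfile.read("vowel.wav")               # placeholder; needs fs >= 16 kHz
x = x.astype(np.float64)

f, psd = welch(x, fs=fs, nperseg=2048)          # power spectral density
low = psd[(f >= 0) & (f < 4000)].sum()          # energy below 4 kHz
high = psd[(f >= 4000) & (f < 8000)].sum()      # energy in the 4-8 kHz band
print("low-to-high ratio (dB):", 10 * np.log10(low / high))
```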

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn; Chung, Yeojin; Lee, Jaejoon; Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently, since they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) models and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependency between the objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). In order to train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can generate only the vocabulary contained in the training set. Furthermore, with highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit of Korean text. We construct the language model using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm as well as more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was done with Old Testament texts using the deep learning package Keras on the Theano backend. After pre-processing, the dataset included 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters, each paired with the following (21st) character as output. In total, 1,023,411 input-output pairs were included in the dataset, and we divided them into training, validation, and test sets in the proportion 70:15:15. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss function evaluated on the validation set, the perplexity evaluated on the test set, and the training time of each model. As a result, all the optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm, which also took the longest to train for both the 3- and 4-layer LSTM models. On average, the 4-layer LSTM model took 69% longer to train than the 3-layer model, yet its validation loss and perplexity were not significantly improved, and under some conditions became even worse. On the other hand, when comparing the automatically generated sentences, the 4-layer model tended to generate sentences closer to natural language than the 3-layer model. Although there were slight differences in the completeness of the generated sentences between the models, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. The results of this study are expected to be widely used for Korean language processing in the fields of natural language processing and speech recognition, which are the basis of artificial intelligence systems.
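
As a minimal sketch of the phoneme-level LSTM language model the abstract describes, the code below uses the modern Keras API rather than the 2017 Theano backend. The vocabulary size (74 symbols), input window (20 symbols predicting the 21st), and the 3-LSTM-layer variant follow the abstract; the embedding and hidden-layer widths are assumptions, and the random arrays stand in for the actual Old Testament data.

```python
# Minimal sketch of a phoneme-level LSTM language model in Keras:
# 74 unique symbols, 20-symbol input windows, next symbol as target.
# Layer widths are assumed; the paper's data and backend are not used.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

VOCAB = 74       # unique phonemes/characters after preprocessing
WINDOW = 20      # 20 consecutive symbols predict the 21st

model = keras.Sequential([
    layers.Input(shape=(WINDOW,)),
    layers.Embedding(VOCAB, 64),                # stands in for one-hot input
    layers.LSTM(256, return_sequences=True),    # 3 LSTM layers, as in the paper
    layers.LSTM(256, return_sequences=True),
    layers.LSTM(256),
    layers.Dense(VOCAB, activation="softmax"),  # next-symbol distribution
])
model.compile(optimizer="adam",                 # one of the optimizers the paper compares
              loss="sparse_categorical_crossentropy")

# Dummy stand-in data; perplexity = exp(cross-entropy loss) on the test set.
x = np.random.randint(0, VOCAB, size=(1000, WINDOW))
y = np.random.randint(0, VOCAB, size=(1000,))
model.fit(x, y, validation_split=0.15, epochs=1)
```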