• Title/Summary/Keyword: vowel recognition

An Analysis of Acoustic Features Caused by Articulatory Changes for Korean Distant-Talking Speech

  • Kim Sunhee;Park Soyoung;Yoo Chang D.
    • The Journal of the Acoustical Society of Korea / v.24 no.2E / pp.71-76 / 2005
  • Compared to normal speech, distant-talking speech is characterized by the acoustic effects of interfering sounds and echoes as well as by articulatory changes resulting from the speaker's effort to be more intelligible. In this paper, the acoustic features of distant-talking speech due to these articulatory changes are analyzed and compared with those of the Lombard effect. In order to examine the effect of different distances and articulatory changes, speech recognition experiments were conducted using HTK for normal speech as well as for distant-talking speech at different distances. The speech data used in this study consist of 4500 distant-talking utterances and 4500 normal utterances from 90 speakers (56 males and 34 females). The acoustic features selected for the analysis were duration, formants (F1 and F2), fundamental frequency, total energy, and energy distribution. The results show that the acoustic-phonetic features of distant-talking speech correspond mostly to those of Lombard speech, in that the main acoustic changes between normal and distant-talking speech are an increase in vowel duration, a shift of the first and second formants, an increase in fundamental frequency, an increase in total energy, and a shift of energy from the low-frequency band to the middle or high bands.
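
As a rough illustration of the feature set analyzed above (duration, F1/F2, fundamental frequency, total energy, and band-energy distribution), the sketch below extracts comparable measurements from a single vowel recording. It assumes librosa and NumPy and estimates formants from LPC roots; it is not the authors' HTK-based pipeline.

```python
import numpy as np
import librosa

def vowel_features(path, lpc_order=12):
    """Rough per-utterance acoustics: duration, F0, F1/F2, energy distribution."""
    y, sr = librosa.load(path, sr=16000)
    duration = len(y) / sr

    # Fundamental frequency via pYIN (NaN on unvoiced frames).
    f0, _, _ = librosa.pyin(y, fmin=75, fmax=400, sr=sr)
    f0_mean = float(np.nanmean(f0))

    # Formant estimates from the angles of the LPC poles (upper half-plane only).
    a = librosa.lpc(y, order=lpc_order)
    poles = [r for r in np.roots(a) if np.imag(r) > 0]
    freqs = sorted(np.angle(poles) * sr / (2 * np.pi))
    freqs = [f for f in freqs if f > 90]          # drop near-DC poles
    f1, f2 = freqs[0], freqs[1]

    # Total energy and its distribution over low/mid/high bands.
    spec = np.abs(np.fft.rfft(y)) ** 2
    freq_axis = np.fft.rfftfreq(len(y), d=1 / sr)
    total = spec.sum()
    bands = {name: float(spec[(freq_axis >= lo) & (freq_axis < hi)].sum() / total)
             for name, (lo, hi) in {"low": (0, 1000), "mid": (1000, 3000),
                                    "high": (3000, 8000)}.items()}

    return {"duration": duration, "F0": f0_mean, "F1": f1, "F2": f2,
            "total_energy": float(total), "band_ratio": bands}
```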

A Comparison of Resonance Parameters before and after Pharyngeal Flap Surgery:A Preliminary Report (인두피판술 전.후의 공명파라미터의 비교: 예비연구)

  • Kang, Young-Ae;Kang, Nak-Heon;Lee, Tae-Yong;Seong, Cheol-Jae
    • Phonetics and Speech Sciences / v.1 no.3 / pp.133-144 / 2009
  • Pharyngeal flap surgery changes the space and shape of the oral cavity and vocal tract, and these changed conditions bring about resonance changes. The purpose of this study was to determine the most reliable and valuable parameters for evaluating hypernasality in order to distinguish two patients before and after pharyngeal flap surgery. Each patient was asked to clearly speak the vowels /a/, /i/, /u/, /e/, /o/ for voice recording. Nine parameters were examined for each vowel: formants (F1, F2, F3), bandwidths (BW1, BW2, BW3), LPC energy slope (|(A2-A1)/(F2-F1)|), and band energy (0-500 Hz, 500-1000 Hz). From the results of discriminant analyses on the acoustic parameters, the vowels /a/ and /e/ appeared to be insignificant, whereas the vowels /i/, /u/, and /o/ were effective for separation: recognition scores of 95%, 100%, and 100% were reached when /i/, /u/, and /o/ were analyzed, respectively. The results showed that F2, BW3, and the LPC slope are more important parameters than the others. Finally, the perceptual evaluation score is related to the LPC energy slope through a least-squares slope.
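
For readers who want to reproduce this kind of discriminant analysis, a minimal scikit-learn sketch is shown below. The CSV layout, column names, and leave-one-out scoring are assumptions standing in for the authors' statistical setup.

```python
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Hypothetical table: one row per recorded token, the nine acoustic
# parameters, the vowel, and the recording phase (pre/post surgery).
df = pd.read_csv("resonance_parameters.csv")   # assumed file name
features = ["F1", "F2", "F3", "BW1", "BW2", "BW3",
            "lpc_slope", "band_energy_0_500", "band_energy_500_1000"]

for vowel, group in df.groupby("vowel"):       # /a/, /i/, /u/, /e/, /o/
    lda = LinearDiscriminantAnalysis()
    scores = cross_val_score(lda, group[features], group["phase"],
                             cv=LeaveOneOut())
    print(f"{vowel}: recognition score = {scores.mean():.0%}")
```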

Study for an Artificial Visual Machine for the Blind (맹인용인공시각보조장치에 관한 연구)

  • 홍승홍;이균하
    • Journal of the Korean Institute of Telematics and Electronics / v.15 no.5 / pp.19-24 / 1978
  • In this paper, the functional properties of the vibrotactile sense of the skin were studied by means of psychophysical experiments with respect to the frequency and waveform of mechanical vibration, the two-point threshold, and the contactor size of the stimulators. Furthermore, based on the experimental results, a small vibrotactile stimulator made of a piezoelectric reed vibrator array was proposed as an aid for the blind in recognizing Korean letters. A tactile output image is presented by an 8-row × 1-column array of small vibrator reeds driven with a 200 Hz rectangular wave, the array fitting on a forefinger. Under the control of a NOVA minicomputer, the bimorph reed array could represent any one of the 24 Korean vowel and consonant characters at 8 positions from left to right on the array. The identification test of the Korean characters with the designed experimental system was carried out without a prior learning effect. The average rate of correct response was 90%.

Development of Parameters for Diagnosing Laryngeal Diseases

  • Kim, Yong-Ju;Wang, Soo-Geun;Kim, Gi-Ryun;Kwon, Soon-Bok;Jeon, Kye-Rok;Back, Moo-Jin;Yang, Byung-Gon;Jo, Cheol-Woo;Kim, Hyung-Soon
    • Speech Sciences / v.10 no.1 / pp.117-129 / 2003
  • Many people suffer from various laryngeal diseases. Since voice changes can be noticed easily, acoustic analysis can be helpful in diagnosing these diseases. Several attempts have been made to clarify the relation between acoustic parameters and the state of diseased vocal folds, but no decisive parameters have been found yet. The purpose of this study was to select and develop parameters useful for diagnosing and differentiating laryngeal diseases. We examined eight MDVP parameters and two additional parameter sets, MFCC and LPC, obtained from the production of an open vowel by 252 subjects with or without laryngeal diseases. Using a statistical procedure based on artificial neural networks, we attempted to differentiate the laryngeal disease groups. Results showed that the LPC parameters yielded the highest differentiation rate by the networks, followed by the MFCC and the MDVP parameters. In addition, among the MDVP parameters, Jita, Shim, and NHR proved to be the better parameters for diagnosing laryngeal diseases.
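
A minimal sketch of this kind of pipeline, assuming librosa for MFCC/LPC extraction and scikit-learn's MLPClassifier in place of the authors' original neural-network setup; the wav paths and disease labels in the usage comment are placeholders.

```python
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def voice_features(path, n_mfcc=12, lpc_order=12):
    """Mean MFCCs plus LPC coefficients from a sustained open vowel."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)
    lpc = librosa.lpc(y, order=lpc_order)[1:]    # drop the leading 1.0
    return np.concatenate([mfcc, lpc])

# Usage sketch (wav_paths and disease labels are hypothetical):
# X = np.array([voice_features(p) for p in wav_paths])
# clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
# print(cross_val_score(clf, X, labels, cv=5).mean())
```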

A Relationship of Tone, Consonant, and Speech Perception in Audiological Diagnosis

  • Han, Woo-Jae;Allen, Jont B.
    • The Journal of the Acoustical Society of Korea / v.31 no.5 / pp.298-308 / 2012
  • This study was designed to examine the phoneme recognition errors of hearing-impaired (HI) listeners on a consonant-by-consonant basis, to show (1) how each HI ear perceives individual consonants differently and (2) how standard clinical measurements (i.e., using a tone and word) fail to predict these differences. Sixteen English consonant-vowel (CV) syllables at six signal-to-noise ratios in speech-weighted noise were presented at the most comfortable level to ears with mild-to-moderate sensorineural hearing loss. The findings were as follows: (1) individual HI listeners with a symmetrical pure-tone threshold showed different consonant-loss profiles (CLPs), i.e., the likelihood of misperceiving each of the 16 English consonants, in the right and left ears. (2) A similar result was found across subjects: paired ears of different HI individuals with identical pure-tone thresholds presented CLPs that differed from one ear to the other. (3) Paired HI ears having the same averaged consonant score demonstrated completely different CLPs. We conclude that the standard clinical measurements are limited in their ability to predict the extent to which speech perception is degraded in HI ears, and thus they are a necessary, but not sufficient, measurement for HI speech perception. This suggests that the CV measurement would be a useful clinical tool.
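
The consonant-loss profile (CLP) described above reduces to a per-consonant error rate computed from stimulus-response pairs. The pandas sketch below tabulates a confusion matrix and a CLP from a hypothetical response log for one ear.

```python
import pandas as pd

# Hypothetical response log: one row per CV presentation for one ear.
log = pd.DataFrame({
    "stimulus": ["p", "p", "t", "t", "k", "k"],
    "response": ["p", "k", "t", "t", "k", "t"],
    "snr_db":   [0, 0, 6, 0, 6, 0],
})

# Confusion matrix over the consonant set.
confusion = pd.crosstab(log["stimulus"], log["response"])

# Consonant-loss profile: likelihood of misperceiving each consonant.
clp = (log["stimulus"] != log["response"]).groupby(log["stimulus"]).mean()
print(confusion)
print(clp)
```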

Korean language teaching system (한글 언어 교습 시스템)

  • Jung, jae won;Lee, jong weon
    • Proceedings of the Korea Contents Association Conference / 2008.05a / pp.367-371 / 2008
  • This system is intended not only for foreigners but also for anyone in Korea who does not know Hangeul (the Korean alphabet). It is difficult to study Hangeul on one's own without a helper. This paper presents an AR-based system that helps people learn basic Hangeul letters and their pronunciations at home without a helper by applying the characteristics of consonants and vowels. We also suggest word-study methods using the proposed system. At present, the system is built on the pattern-matching function of ARToolKit; it could be improved by applying a character recognition function.
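
Since the system teaches Hangeul through the characteristics of consonants and vowels, a small sketch of decomposing a precomposed Hangul syllable into its jamo using standard Unicode arithmetic is shown below; this is generic Unicode logic, not code from the paper's ARToolKit implementation.

```python
# Decompose a precomposed Hangul syllable (U+AC00..U+D7A3) into its jamo.
CHOSEONG = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"
JUNGSEONG = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"
JONGSEONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

def decompose(syllable):
    code = ord(syllable) - 0xAC00
    if not 0 <= code <= 11171:
        raise ValueError("not a precomposed Hangul syllable")
    cho, rest = divmod(code, 21 * 28)
    jung, jong = divmod(rest, 28)
    return CHOSEONG[cho], JUNGSEONG[jung], JONGSEONG[jong]

print(decompose("한"))  # ('ㅎ', 'ㅏ', 'ㄴ')
```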

Adaptive Background Modeling Considering Stationary Object and Object Detection Technique based on Multiple Gaussian Distribution

  • Jeong, Jongmyeon;Choi, Jiyun
    • Journal of the Korea Society of Computer and Information / v.23 no.11 / pp.51-57 / 2018
  • In this paper, we studied parameter extraction and the implementation of a speechreading system to recognize the eight Korean vowels. Facial features are detected by amplifying and reducing image values and by comparing values represented in various color spaces. The positions of the eyes and nose, the inner boundary of the lips, the outer boundary of the upper lip, and the outer line of the teeth are found as features; from this analysis, the inner-lip area, the height and width of the inner lips, the ratio of the outer tooth-line length to the inner mouth area, and the distance between the nose and the outer boundary of the upper lip are used as parameters. A total of 2400 data samples were gathered and analyzed. Based on this analysis, a neural network was constructed and recognition experiments were performed. Five normal subjects participated in the experiment, and the observational error between samples was corrected using a normalization method. The experiments show very encouraging results regarding the usefulness of the parameters.
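
As a rough counterpart to the classifier described above, the sketch below normalizes geometric lip parameters and trains a small feed-forward network with scikit-learn. The column names and data file are placeholders, and this is not the authors' original network.

```python
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical table: one row per frame, geometric lip/teeth parameters
# plus the target vowel (one of the eight Korean monophthongs).
df = pd.read_csv("lip_parameters.csv")          # assumed file name
features = ["inner_lip_area", "inner_lip_height", "inner_lip_width",
            "tooth_line_ratio", "nose_to_upper_lip"]

# StandardScaler plays the role of the per-speaker normalization step.
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(16,),
                                    max_iter=2000, random_state=0))
print(cross_val_score(model, df[features], df["vowel"], cv=5).mean())
```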

An Efficient Character Image Enhancement and Region Segmentation Using Watershed Transformation (Watershed 변환을 이용한 효율적인 문자 영상 향상 및 영역 분할)

  • Choi, Young-Kyoo;Rhee, Sang-Burm
    • The KIPS Transactions:PartB / v.9B no.4 / pp.481-490 / 2002
  • Off-line handwritten character recognition suffers from incomplete preprocessing because it lacks dynamic information and must cope with diverse handwriting styles, extreme overlap between consonants and vowels, and many erroneous stroke images. Consequently, off-line handwritten character recognition requires the study of various preprocessing methods such as binarization and thinning. This paper considers the running time of the watershed algorithm and the quality of the resulting image as preprocessing for off-line handwritten Korean character recognition. It proposes an effective application of the watershed algorithm for segmenting the character region and the background region in gray-level character images, together with a segmentation function for binarization based on the extracted watershed image. It also proposes a thinning method that effectively extracts the skeleton through a conditional test mask, considering both running time and skeleton quality, and evaluates the efficiency of existing methods and the proposed method in terms of running time and quality. The average execution time was 2.16 seconds for the previous method and 1.72 seconds for the proposed method. We show that, compared with the previous method, the proposed method effectively removes noise from overlapping strokes.
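
A compact sketch of the preprocessing chain discussed above (watershed segmentation of a gray-level character image, binarization from the watershed labels, then thinning), using scikit-image as a stand-in for the paper's own implementation; the marker thresholds are illustrative.

```python
import numpy as np
from skimage import filters, morphology, segmentation
from skimage.io import imread

image = imread("character.png", as_gray=True)    # assumed input image in [0, 1]

# Watershed on the gradient image, with markers from rough intensity cuts.
elevation = filters.sobel(image)
markers = np.zeros_like(image, dtype=int)
markers[image > 0.8] = 1                         # background (light paper)
markers[image < 0.3] = 2                         # character strokes (dark ink)
labels = segmentation.watershed(elevation, markers)

# Binarize from the watershed labels, then thin to a one-pixel skeleton.
binary = labels == 2
skeleton = morphology.skeletonize(binary)
```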

Frequency of grammar items for Korean substitution of /u/ for /o/ in the word-final position (어말 위치 /ㅗ/의 /ㅜ/ 대체 현상에 대한 문법 항목별 출현빈도 연구)

  • Yoon, Eunkyung
    • Phonetics and Speech Sciences / v.12 no.1 / pp.33-42 / 2020
  • Based on a speech corpus, this study identified the substitution of /u/ for /o/ (e.g., pyəllo [pyəllu]) in Korean as a function of grammar items. Korean /o/ and /u/ share the vowel feature [+rounded] but are distinguished in terms of tongue height. However, researchers have reported that the merger of Korean /o/ and /u/ is in progress, making them indistinguishable. Thus, in this study, the frequency with which the underlying form /o/ was phonetically realized as [u] was calculated for each grammar item in The Korean Corpus of Spontaneous Speech (Seoul Corpus 2015), a large corpus of 40 speakers from Seoul or Gyeonggi-do. It was confirmed that word-final /o/ in linking endings, particles, and adverbs was replaced by /u/ in approximately 50% of the tokens, whereas in nominal items it was replaced at a frequency of less than 5%. Among high-frequency items, the highest substitution rates were found for the special particle "-do" [du] (59.6%) and the linking ending "-go" [gu] (43.5%). Observing Korean pronunciation in real life provides deep insight into its theoretical implications for speech recognition.
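
The frequency counts reported above boil down, for each grammar category, to the share of word-final /o/ tokens realized as [u]. A small pandas sketch on a hypothetical token table is shown below; the Seoul Corpus itself has its own file format and labels.

```python
import pandas as pd

# Hypothetical token table: each row is one word-final /o/ token with its
# grammar category and its observed phonetic realization ("o" or "u").
tokens = pd.DataFrame({
    "grammar":  ["particle", "particle", "linking_ending", "adverb", "noun"],
    "item":     ["-do", "-do", "-go", "pyəllo", "hakgyo"],
    "realized": ["u", "o", "u", "u", "o"],
})

# Substitution rate of [u] for underlying /o/, per grammar category and per item.
by_category = (tokens["realized"] == "u").groupby(tokens["grammar"]).mean()
by_item = (tokens["realized"] == "u").groupby(tokens["item"]).mean()
print(by_category.sort_values(ascending=False))
print(by_item)
```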

Pronunciation Variation Patterns of Loanwords Produced by Korean and Grapheme-to-Phoneme Conversion Using Syllable-based Segmentation and Phonological Knowledge (한국인 화자의 외래어 발음 변이 양상과 음절 기반 외래어 자소-음소 변환)

  • Ryu, Hyuksu;Na, Minsu;Chung, Minhwa
    • Phonetics and Speech Sciences / v.7 no.3 / pp.139-149 / 2015
  • This paper aims to analyze the pronunciation variations of loanwords produced by Korean speakers and to improve the performance of pronunciation modeling of loanwords in Korean by using syllable-based segmentation and phonological knowledge. The loanword text corpus used for our experiment consists of 14.5k words extracted from frequently used words in the set-top box, music, and point-of-interest (POI) domains. First, the pronunciations of loanwords in Korean are obtained by manual transcription and used as target pronunciations. The target pronunciations are compared with the standard pronunciations using confusion matrices to analyze the pronunciation variation patterns of loanwords. Based on the confusion matrices, three salient pronunciation variations of loanwords are identified: tensification of the fricative [s] and derounding of the rounded vowels [ɥi] and [wɛ]. In addition, a syllable-based segmentation method incorporating phonological knowledge is proposed for loanword pronunciation modeling. The performance of the baseline and the proposed method is measured using phone error rate (PER), word error rate (WER), and F-score at various context spans. Experimental results show that the proposed method outperforms the baseline. We also observe that performance degrades when the training and test sets come from different domains, which implies that loanword pronunciations are influenced by data domains. It is noteworthy that pronunciation modeling for loanwords is enhanced by reflecting phonological knowledge. The loanword pronunciation modeling for Korean proposed in this paper can be used for automatic speech recognition in application interfaces such as navigation systems and set-top boxes, and for computer-assisted pronunciation training for Korean learners of English.
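
Since the evaluation above relies on phone error rate (PER) and word error rate (WER), a minimal edit-distance scoring sketch follows; it is generic code, not the authors' toolchain, and the example phone sequences are invented.

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences."""
    d = [[i + j if i * j == 0 else 0 for j in range(len(hyp) + 1)]
         for i in range(len(ref) + 1)]
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[-1][-1]

def error_rate(refs, hyps):
    """PER or WER: total edits over total reference tokens."""
    edits = sum(edit_distance(r, h) for r, h in zip(refs, hyps))
    return edits / sum(len(r) for r in refs)

# Invented example: reference vs. predicted phone sequences for two loanwords,
# where the first hypothesis shows tensification of [s].
refs = [["s", "e", "t", "o", "p"], ["p", "o", "i", "n", "t", "ɯ"]]
hyps = [["s*", "e", "t", "o", "p"], ["p", "o", "i", "n", "t", "ɯ"]]
print(f"PER = {error_rate(refs, hyps):.2%}")
```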