• Title/Summary/Keyword: Spoken Korean

A Comparative Study of Spoken and Written Sentence Production in Adults with Fluent Aphasia (유창성 실어증 환자의 구어와 문어 문장산출 능력 비교)

  • Ha, Ji-Wan;Pyun, Sung-Bom;Hwang, Yu Mi;Yi, Hoyoung;Sim, Hyun Sub
    • Phonetics and Speech Sciences
    • /
    • v.5 no.3
    • /
    • pp.103-111
    • /
    • 2013
  • Traditionally, it has been assumed that written language abilities are completely dependent on phonology, so spoken and written language skills in aphasic patients have been expected to show similar types of impairment. However, a number of recent studies have reported findings that support the orthographic autonomy hypothesis. The purpose of this study was to examine whether fluent aphasic patients show a discrepancy between speaking and writing skills, and thereby to determine whether the two skills are realized through independent processes. To this end, this study compared the performance of 30 aphasic patients on the K-FAST speaking and writing tasks. In addition, 16 aphasic patients who were able to produce sentences both in speaking and in writing were compared on their performance at each phase of the sentence production process. The subjects performed differently in speaking and writing, with statistically significant differences between the two modalities at the positional and phonological encoding phases of sentence production. These results suggest that, from a certain phase of the sentence production process onward, written language is likely to be produced via independent routes without the mediation of spoken language production.

Relationships Among Language Ability, Foreign Language Learning Experience, and Metalinguistic Ability in Korean Preschool Children (유아의 모국어 능력, 외국어 경험 정도와 상위언어 능력간의 관계)

  • Han, You Me;Cho, Bok Hee
    • Korean Journal of Child Studies
    • /
    • v.20 no.3
    • /
    • pp.199-216
    • /
    • 1999
  • The 121 five-year-old Korean subjects of this study were divided into 3 groups based on their experience of learning a foreign language (English). A battery of tests was administered to measure spoken and written language ability and the 3 metalinguistic domains of phonological, semantic, and syntactic awareness. Spoken language ability was positively correlated with semantic and syntactic awareness. The relative importance of each metalinguistic domain varied with the level of written language development. Phonological awareness was the only predictor of decoding. Syntactic awareness and phonological awareness were significant variables in sentence comprehension. Metalinguistic ability was a better predictor of written language development than spoken language ability. Foreign language learning experience had an effect on syntactic awareness: low experience was superior to no experience, but high experience was not superior to low experience.

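The analysis this abstract describes, correlations between spoken-language ability and metalinguistic measures, plus regression-style predictors of decoding, can be illustrated with a small NumPy sketch. The variable names and simulated scores below are invented for illustration; they are not the study's data.

```python
import numpy as np

# Hypothetical score arrays for illustration only; the study's real data are not available here.
rng = np.random.default_rng(0)
n = 121  # number of five-year-old subjects reported in the abstract
phonological = rng.normal(size=n)
semantic = rng.normal(size=n)
syntactic = rng.normal(size=n)
decoding = 0.8 * phonological + rng.normal(scale=0.5, size=n)  # toy relation

# Pearson correlation between a metalinguistic measure and an outcome
r = np.corrcoef(phonological, decoding)[0, 1]
print(f"r(phonological, decoding) = {r:.2f}")

# Least-squares regression of decoding on the three awareness measures
X = np.column_stack([np.ones(n), phonological, semantic, syntactic])
beta, *_ = np.linalg.lstsq(X, decoding, rcond=None)
print("coefficients (intercept, phonological, semantic, syntactic):", np.round(beta, 2))
```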

Acoustic Cues in Spoken French for the Pronunciation Assessment Multimedia System (발음평가용 멀티미디어 시스템 구현을 위한 구어 프랑스어의 음향학적 단서)

  • Lee, Eun-Yung;Song, Mi-Young
    • Speech Sciences
    • /
    • v.12 no.3
    • /
    • pp.185-200
    • /
    • 2005
  • The objective of this study is to identify acoustic cues in spoken French for the pronunciation assessment needed to build the multimedia system. The corpus is composed of simple expressions that cover the French phonological system and include all phonemes. The experiment was conducted with 4 male and female native French speakers and with 20 Korean speakers, university students who had studied French for more than two years. We analyzed the recordings with a spectrograph and measured comparative features numerically. We first computed the mean and deviation of all phonemes, and then chose features that showed a high error frequency and large differences between French and Korean pronunciations. The selected data were simplified and compared. After judging, in articulatory and auditory terms, whether each Korean speaker's pronunciation problems were utterance mistakes or mother-tongue interference, we sought acoustic features that were as simple as possible. From this experiment, we were able to extract acoustic cues for the construction of a French pronunciation training system.

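The feature-selection step described above, per-phoneme means and deviations followed by picking features that differ most between native French and Korean-accented productions, might look roughly like the sketch below. The formant tables, the phoneme, and the two-standard-deviation threshold are assumptions for illustration, not values from the paper.

```python
import numpy as np

# Hypothetical per-token formant measurements (Hz), keyed by phoneme.
# In the study these would come from spectrographic analysis of native and learner recordings.
native = {"y": np.array([[250, 1800], [260, 1850], [255, 1820]])}   # F1, F2 per token
learner = {"y": np.array([[300, 1400], [310, 1450], [305, 1500]])}

def summarize(tokens):
    """Mean and standard deviation of each formant over all tokens of a phoneme."""
    return tokens.mean(axis=0), tokens.std(axis=0)

selected = []
for phoneme in native:
    n_mean, n_std = summarize(native[phoneme])
    l_mean, _ = summarize(learner[phoneme])
    # Select formants whose learner mean deviates from the native mean
    # by more than two native standard deviations (threshold is an assumption).
    deviation = np.abs(l_mean - n_mean) / np.maximum(n_std, 1.0)
    for i, d in enumerate(deviation, start=1):
        if d > 2.0:
            selected.append((phoneme, f"F{i}", float(d)))

print(selected)  # e.g. [('y', 'F1', ...), ('y', 'F2', ...)]
```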

A Study on the Vowel Formants in Disguised Speech (위장발화의 단모음 포만트 연구)

  • Noh, Seok-Eun;Park, Mi-Kyoung;Cho, Min-Ha;Shin, Ji-Young;Kang, Sun-Mee
    • Proceedings of the KSPS conference
    • /
    • 2004.05a
    • /
    • pp.215-218
    • /
    • 2004
  • The aim of this paper is to analyze the acoustic features of disguised voice. We examined features such as pitch range and the vowel formants (F1, F2, F3, F4). The results of the analysis are as follows: (1) pitch range and average pitch value are very important cues for speaker verification; (2) the F3-F2 distance is also an important cue for speaker verification; (3) the vowel /a/ is more useful for verification than the other vowels.

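The measurements this abstract relies on, pitch and the vowel formants F1-F4 (including the F3-F2 cue), can be estimated from a speech frame with plain NumPy. Below is a minimal sketch using autocorrelation for F0 and LPC polynomial roots for formants; the LPC order, frame length, and the synthetic test frame are assumptions, and a real study would analyze recorded speech.

```python
import numpy as np

def lpc_coefficients(frame, order):
    """LPC via the autocorrelation method (solve the Yule-Walker equations)."""
    frame = frame * np.hamming(len(frame))
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    return np.concatenate(([1.0], -a))  # A(z) = 1 - sum a_k z^-k

def formants(frame, sr, order=12):
    """Estimate formant frequencies (Hz) from the roots of the LPC polynomial."""
    roots = np.roots(lpc_coefficients(frame, order))
    roots = roots[np.imag(roots) > 0]          # keep one root of each conjugate pair
    freqs = np.angle(roots) * sr / (2 * np.pi)
    return np.sort(freqs[freqs > 90])          # drop near-DC roots

def f0_autocorr(frame, sr, fmin=75, fmax=400):
    """Rough F0 estimate from the autocorrelation peak in [fmin, fmax]."""
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    return sr / (lo + np.argmax(ac[lo:hi]))

# Toy usage on a synthetic frame; real disguised-speech analysis would use recordings.
sr = 16000
t = np.arange(int(0.03 * sr)) / sr
frame = (np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 700 * t)
         + 0.01 * np.random.default_rng(1).normal(size=t.size))
F = formants(frame, sr)
print("F0 ~", round(f0_autocorr(frame, sr)), "Hz; formant estimates:", np.round(F[:4]))
if len(F) >= 4:
    print("F3 - F2 spacing (the cue mentioned above):", round(F[3] - F[2]), "Hz")
```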

The Voice Dialing System Using Dynamic Hidden Markov Models and Lexical Analysis (DHMM과 어휘해석을 이용한 Voice dialing 시스템)

  • 최성호;이강성;김순협
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.28B no.7
    • /
    • pp.548-556
    • /
    • 1991
  • In this paper, Korean continuously spoken digits are recognized using DHMMs (Dynamic Hidden Markov Models) and lexical analysis, as a basis for developing a voice dialing system. The speech is segmented into phoneme units and then recognized. The system consists of a segmentation stage, a reference speech design stage, a recognition stage, and a lexical analysis stage. In the segmentation stage, the speech is segmented using the zero crossing rate (ZCR), the 0th-order LPC cepstrum, and the Ai voiced-speech detection parameter, all of which change over time. In the reference speech design stage, 19 phonemes or syllables are trained with DHMMs and used as reference speech models. In the recognition stage, the phoneme stream is recognized with the Viterbi algorithm. In the lexical decoder stage, the finally recognized continuous digits are output. The experiment showed a recognition rate of 85.1% on data consisting of 21 classes of 7-digit strings covering all digit combinations, each spoken 7 times by 10 male speakers.

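The recognition stage above applies the Viterbi algorithm to the phoneme stream. The following is a generic Viterbi sketch for a small discrete-observation HMM; the transition and emission probabilities are made-up illustrative numbers, not the paper's DHMM parameters.

```python
import numpy as np

def viterbi(log_init, log_trans, log_emit, observations):
    """Most likely state path for a discrete-observation HMM (log-domain)."""
    n_states = log_init.shape[0]
    T = len(observations)
    delta = np.full((T, n_states), -np.inf)   # best log-score ending in each state
    psi = np.zeros((T, n_states), dtype=int)  # backpointers
    delta[0] = log_init + log_emit[:, observations[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans          # (from-state, to-state)
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(n_states)] + log_emit[:, observations[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Toy 2-state, 3-symbol model (illustrative numbers only).
log_init = np.log(np.array([0.6, 0.4]))
log_trans = np.log(np.array([[0.7, 0.3],
                             [0.4, 0.6]]))
log_emit = np.log(np.array([[0.5, 0.4, 0.1],
                            [0.1, 0.3, 0.6]]))
print(viterbi(log_init, log_trans, log_emit, [0, 1, 2, 2]))  # e.g. [0, 0, 1, 1]
```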

A Study Using Acoustic Measurement and Perceptual Judgment to identify Prosodic Characteristics of English as Spoken by Koreans (음향 측정과 지각 판단에 의한 한국인 영어의 운율 연구)

  • Koo, Hee-San
    • Speech Sciences
    • /
    • v.2
    • /
    • pp.95-108
    • /
    • 1997
  • The purpose of this experimental study was to investigate the prosodic characteristics of English as spoken by Koreans. The test materials were four English words, a sentence, and a paragraph. Six female Korean speakers and five native English speakers participated in the acoustic and perceptual experiments. Pitch and duration of word syllables were measured from signals and spectrograms produced by the Signalize 3.04 software program on a Power Mac 7200. In the perceptual experiment, accent position, intonation patterns, rhythm patterns, and phrasing were evaluated by the five native English speakers. Preliminary results from this limited study show the following prosodic characteristics of Korean speakers: (1) pitch on the first part of a word or sentence is lower than that of English speakers, while pitch on the last part shows the opposite pattern; (2) word prosody is quite similar to that of an English speaker, but sentence prosody is quite different; (3) the weakest point of Koreans' sentence prosody is the rhythmic pattern.

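Result (1) above compares pitch at the beginning and end of an utterance between Korean and native English speakers. A minimal sketch of that comparison, assuming F0 contours have already been extracted (the contours below are invented, not measured data):

```python
import numpy as np

def edge_pitch(f0, fraction=0.25):
    """Mean F0 (Hz) over the first and last `fraction` of the voiced frames."""
    voiced = f0[f0 > 0]                       # drop unvoiced frames marked as 0
    k = max(1, int(len(voiced) * fraction))
    return voiced[:k].mean(), voiced[-k:].mean()

# Invented example contours (Hz per frame), not measured data.
korean_speaker = np.array([170, 175, 180, 0, 190, 200, 210, 220, 230])
native_speaker = np.array([230, 225, 220, 0, 210, 200, 190, 180, 175])

for name, f0 in [("Korean speaker", korean_speaker), ("native speaker", native_speaker)]:
    first, last = edge_pitch(f0)
    print(f"{name}: first part {first:.0f} Hz, last part {last:.0f} Hz")
```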

A Study on Laryngeal Behavior of Persons Who Stutter with Fiber-Optic Nasolaryngoscope (후두 내시경(Fiber-Optic Nasolaryngoscope)을 이용한 말더듬인의 후두양상에 관한 연구)

  • Jung, Hun;Ahn, Jong-Bok;Choi, Byung-Heun;Kwon, Do-Ha
    • Speech Sciences
    • /
    • v.15 no.3
    • /
    • pp.159-173
    • /
    • 2008
  • The purpose of this study was to use a fiber-optic nasolaryngoscope to identify differences in laryngeal behavior during utterance between persons who stutter (PS) and those who do not stutter (NS). To this end, the study recruited 5 NS and 5 PS, all of whom took part in the experiment. The findings were as follows. First, there was no significant difference between the stuttering group and the control group in laryngeal behavior during fluent utterances. Second, there were differences between the two groups in laryngeal behavior during repetitions and prolongations, the types of disfluency observed in nonfluent utterances. Third, as reported in prior studies, the stuttering group's laryngeal behavior during nonfluent utterances differed depending on stuttering type. This study also observed a variety of laryngeal behaviors not reported in prior studies, and it was notable that stutterers showed different laryngeal behavior depending on their individual stuttering types. On the block condition, Subject 1 showed the laryngeal behaviors fAB, INT, and fAD; Subject 2 showed fAB, fAD, and rAD; Subject 3 showed fAD and rAD; Subject 4 showed only fAD; and Subject 5 showed fAB, fAD, and rAD. In sum, these findings imply that when stutterers produce nonfluent words, they may show a variety of laryngeal behaviors depending on their individual stuttering types. Moreover, there are some differences between NS and PS in the production of nonfluent speech. In particular, one common trait of the nonfluent speech of PS is clearly excessive laryngeal stress, regardless of stuttering type.

An Experiment of a Spoken Digits-Recognition System (숫자음성 자동 인식에 관한 일실험)

  • ;安居院猛
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.15 no.6
    • /
    • pp.23-28
    • /
    • 1978
  • This paper describes a speech recognition system for ten isolated spoken digits. In this system, acoustic parameters such as the zero crossing rate, log energy, and three formant frequencies estimated by the linear prediction method were extracted for classification and recognition. The first two parameters were used for the classification of unvoiced consonants, and the formant frequencies for the recognition of vowels and voiced consonants. Promising recognition results were obtained in this experiment for ten digit utterances spoken by a male speaker.

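The first two parameters named above, zero crossing rate and log energy, are straightforward to compute frame by frame. A short NumPy sketch follows; the frame length, hop size, and the synthetic test signal are assumptions, since the abstract does not give the original system's settings.

```python
import numpy as np

def frame_signal(x, frame_len=400, hop=160):
    """Slice a 1-D signal into overlapping frames."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop:i * hop + frame_len] for i in range(n)])

def zero_crossing_rate(frames):
    """Fraction of adjacent sample pairs per frame whose sign changes."""
    signs = np.sign(frames)
    return np.mean(np.abs(np.diff(signs, axis=1)) > 0, axis=1)

def log_energy(frames, eps=1e-10):
    """Log of the per-frame energy."""
    return np.log(np.sum(frames ** 2, axis=1) + eps)

# Toy usage: a noisy segment (high ZCR, low energy) followed by a vowel-like segment.
sr = 16000
rng = np.random.default_rng(0)
noise = 0.05 * rng.normal(size=sr // 4)
tone = np.sin(2 * np.pi * 200 * np.arange(sr // 4) / sr)
frames = frame_signal(np.concatenate([noise, tone]))
print("ZCR:", np.round(zero_crossing_rate(frames)[:3], 2), "...",
      np.round(zero_crossing_rate(frames)[-3:], 2))
print("log energy:", np.round(log_energy(frames)[:3], 1), "...",
      np.round(log_energy(frames)[-3:], 1))
```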

A Situation-Based Dialogue Management with Dialogue Examples (대화 예제를 이용한 상황 기반 대화 관리 시스템)

  • Lee, Cheong-Jae;Jung, Sang-Keun;Lee, Geun-Bae
    • MALSORI
    • /
    • no.56
    • /
    • pp.185-194
    • /
    • 2005
  • In this paper, we present POSSDM (POSTECH Situation-Based Dialogue Manager), which uses a new example- and situation-based dialogue management technique for effective generation of appropriate system responses in a spoken dialogue system. A spoken dialogue system should generate cooperative responses to smoothly control the dialogue flow with its users. We introduce a new dialogue management technique incorporating dialogue examples and situation-based rules for the EPG (Electronic Program Guide) domain. For system response inference, we automatically construct and index a dialogue example database from a dialogue corpus, and the best dialogue example is retrieved for a proper system response using a query built from the dialogue situation, including the current user utterance, dialogue act, and discourse history. When the dialogue corpus does not sufficiently cover the domain, we also apply manually constructed situation-based rules, mainly for meta-level dialogue management.

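A toy version of the example-retrieval idea described above: index dialogue examples, build a query from the current user utterance, dialogue act, and discourse history, and return the response of the best-matching example. The record structure, the scoring weights, and the sample EPG entries are invented for illustration; POSSDM's actual indexing and matching are not specified in the abstract.

```python
from dataclasses import dataclass

@dataclass
class DialogueExample:
    utterance_words: set   # content words of the example user utterance
    dialogue_act: str      # e.g. "request_program"
    history: tuple         # preceding dialogue acts (discourse history)
    response: str          # system response recorded in the corpus

def score(example, query):
    """Match score: word overlap plus bonuses for act and history match (weights are assumptions)."""
    overlap = len(example.utterance_words & query["words"])
    act_bonus = 2 if example.dialogue_act == query["act"] else 0
    history_bonus = 1 if example.history == query["history"] else 0
    return overlap + act_bonus + history_bonus

def retrieve(examples, query):
    """Return the response of the best-scoring dialogue example."""
    return max(examples, key=lambda ex: score(ex, query)).response

# Tiny hand-made EPG-style example database (illustrative only).
examples = [
    DialogueExample({"movie", "tonight"}, "request_program", ("greet",),
                    "There are three movies on tonight. Which channel do you prefer?"),
    DialogueExample({"news", "time"}, "request_time", ("greet",),
                    "The evening news starts at nine."),
]

query = {"words": {"movie", "tonight", "channel"},
         "act": "request_program",
         "history": ("greet",)}
print(retrieve(examples, query))
```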

A Study on Recognition of Spoken Numbers Using Spatio-Temporal Pattern Recognizer (시공간 패턴인식 신경망에 의한 단어 인식에 관한 연구)

  • Park, Kyoung-Cheol;Kim, Hun-Kee;Lee, Chong-Ho
    • Proceedings of the KIEE Conference
    • /
    • 1993.07a
    • /
    • pp.495-497
    • /
    • 1993
  • This paper presents a spoken number recognition method using a spatio-temporal network. This network is efficient at processing the spectrum sequences of speech patterns as spatio-temporal patterns. The number of windows and channels was determined experimentally, and the recognition rate was improved through experiments on various parameters. The test data were collected from 10 numbers spoken by 2 male and female speakers. A recognition rate of 80% was obtained on a test set of 50 words.

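The network above takes spectrum sequences as spatio-temporal input, with the number of time windows and frequency channels chosen experimentally. A sketch of building such a fixed windows-by-channels input matrix with a short-time Fourier transform follows; the window and channel counts below are placeholders, not the paper's values.

```python
import numpy as np

def spectro_features(x, n_windows=16, n_channels=16):
    """Reduce a signal to an (n_windows x n_channels) log-spectral matrix."""
    frame_len = len(x) // n_windows
    feats = np.zeros((n_windows, n_channels))
    for w in range(n_windows):
        frame = x[w * frame_len:(w + 1) * frame_len] * np.hanning(frame_len)
        spectrum = np.abs(np.fft.rfft(frame)) ** 2
        # Pool the spectrum into a fixed number of channels (simple averaging).
        bands = np.array_split(spectrum, n_channels)
        feats[w] = np.log([b.mean() + 1e-10 for b in bands])
    return feats

# Toy usage with a synthetic stand-in for a spoken number.
sr = 16000
t = np.arange(sr // 2) / sr
x = np.sin(2 * np.pi * 300 * t) + 0.3 * np.sin(2 * np.pi * 1200 * t)
pattern = spectro_features(x)
print(pattern.shape)   # (16, 16) spatio-temporal input pattern
```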