• Title/Abstract/Keyword: speech task

Search results: 316 items (processing time: 0.02 s)

연령 및 과제특성에 따른 유아들의 혼잣말 발화 분석 (The Analysis of Children's Private Speech on Age and Characteristic of Task)

  • 이정화;박정언;이명희
    • 수산해양교육연구 / Vol. 24 No. 4 / pp.494-506 / 2012
  • The purpose of this study was to analyze 3-, 4-, and 5-year-old children's private speech according to their age and task characteristics (structured vs. unstructured task). To this end, the main effects of age and task characteristic, and their interaction, on preschool children's private speech were examined. The subjects were 30 children at each of ages 3, 4, and 5 from a preschool in Busan, South Korea. The structured task was a puzzle task and the unstructured task was a drawing task from the TCT-DP. The data were analyzed with a 3 (age) × 2 (task characteristic) repeated-measures two-way ANOVA. First, 4-year-olds produced more total private speech than 3- and 5-year-olds in both tasks, and 5-year-olds produced more than 3-year-olds in both tasks. Second, task-irrelevant private speech was not affected by the main effects of age or task characteristic, nor by their interaction. Third, task-relevant private speech showed both main effects and an age × task interaction. Finally, external manifestations of inner speech showed no effect of age, but did show an effect of task characteristic and an age × task interaction. The results of this study imply that task characteristic is an important factor in inducing children's private speech.
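The 3 (age, between-subjects) × 2 (task characteristic, within-subjects) design described above corresponds to a mixed-design ANOVA. The sketch below is a minimal illustration of how such an analysis could be run in Python; it assumes a long-format table with one row per child and task, and the file and column names (child, age, task, private_speech) are hypothetical, not the authors' materials.

```python
# Hedged sketch of a 3 (age, between) x 2 (task, within) mixed-design ANOVA
# on private-speech counts; file and column names are illustrative only.
import pandas as pd
import pingouin as pg

df = pd.read_csv("private_speech_long.csv")  # long format: one row per child x task

aov = pg.mixed_anova(
    data=df,
    dv="private_speech",   # dependent variable: private-speech utterance count
    within="task",         # structured (puzzle) vs. unstructured (drawing)
    subject="child",       # subject identifier
    between="age",         # 3-, 4-, or 5-year-old group
)
print(aov.round(3))        # main effects of age and task, plus their interaction
```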

언어 및 인지 과제 동시수행이 발화속도에 미치는 영향 (Effects of Concurrent Linguistic or Cognitive Tasks on Speech Rate)

  • 한지연;김효정;김문정
    • 대한음성학회:학술대회논문집 / 대한음성학회 2007년도 한국음성과학회 공동학술대회 발표논문집 / pp.102-105 / 2007
  • This study was designed to examine the effects of concurrent linguistic or cognitive tasks on speech rate. Eight normal speakers repeated sentences either alone or while simultaneously performing a linguistic or a cognitive task. The linguistic task involved generating verbs from nouns, and the cognitive task involved mental arithmetic. Speech rate was measured from the acoustic data. A one-way ANOVA was conducted to compare speech rate across the three task conditions. The results showed no significant difference between the sentence-repetition and linguistic-task conditions, whereas significant differences were found between the sentence-repetition and cognitive-task conditions and between the linguistic- and cognitive-task conditions.

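As a rough illustration of the analysis described above, the sketch below computes speech rate as syllables per second and compares the three conditions with a one-way ANOVA; the per-speaker values are invented placeholders, not the study's data.

```python
# Hedged sketch: speech rate (syllables per second) compared across three
# conditions with a one-way ANOVA; all numbers below are illustrative only.
import numpy as np
from scipy import stats

def speech_rate(n_syllables: int, duration_s: float) -> float:
    """Speech rate in syllables per second, from acoustic measurements."""
    return n_syllables / duration_s

print(speech_rate(18, 3.6))  # e.g. 18 syllables in 3.6 s -> 5.0 syll/s

# Hypothetical per-speaker rates (syll/s) for eight speakers in each condition.
repeat_only     = np.array([5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.0, 4.7])
with_linguistic = np.array([4.9, 4.6, 5.1, 4.8, 4.7, 5.0, 4.9, 4.5])
with_cognitive  = np.array([4.2, 4.0, 4.5, 4.1, 4.3, 4.4, 4.0, 3.9])

f_stat, p_value = stats.f_oneway(repeat_only, with_linguistic, with_cognitive)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```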

한국어-영어 말처리 평가시스템 개발을 위한 기초 연구 (Pilot study for the development of Korean and English speech processing task system)

  • 김지영;하지완
    • 말소리와 음성과학 / Vol. 16 No. 2 / pp.29-36 / 2024
  • The speech processing model based on a psycholinguistic approach allows the specific speech processing deficits of children with speech sound disorders to be grasped at a glance through its multiple processing routes. Because the cause of the speech production deficits shown by children with speech sound disorders is unknown in most cases, identifying their underlying strengths and weaknesses is important for individualized intervention. In addition, because a native-language deficit in these children can also affect foreign-language production, speech processing abilities in both the native and the foreign language need to be examined together. As a pilot study for the development of a Korean-English speech processing assessment system, this study administered a speech production task and speech processing tasks (discrimination, phonological representation judgment, and nonword repetition) in both Korean and English to 10 children with speech sound disorders (SSD) and 20 typically developing children (NSA), and compared the groups across the two languages. The SSD group showed significantly lower production ability than the NSA group in both languages. On the speech processing tasks, the discrimination task showed no significant difference, whereas the phonological representation judgment task showed a significant difference between languages, and the nonword repetition task showed significant differences both between languages and between groups. These results indicate that children's native- and foreign-language processing abilities can differ, and that the subtasks need to be further subdivided and their difficulty adjusted for the future development of a speech processing assessment program.

The Effects of Semantic Association Task by Drawing in a Korean Bilingual Aphasic: A Case Study

  • Lee, Ok-Bun;Jeong, Ok-Ran
    • 음성과학 / Vol. 9 No. 2 / pp.157-165 / 2002
  • The purpose of this study was to determine the effects of an associative drawing task in a Korean bilingual aphasic. The subject was a 41-year-old male who had lived and been educated in the United States for over 25 years (from age 14 to 39). His former occupation was psychiatrist. He had a massive lesion in the occipital lobe. This study focused on improving his spontaneous language performance through an associative drawing task. The associative drawing task, combined with spontaneous speech, was intended to support the subject's cognition. The ten target words used in treatment were familiar and easy to draw. The results showed that the associative drawing task was effective in improving the patient's drawing ability, his writing ability (in English only), and his naming performance in both English and Korean. However, the patient's writing ability in Korean did not show any improvement.


Performance of Vocabulary-Independent Speech Recognizers with Speaker Adaptation

  • Kwon, Oh Wook;Un, Chong Kwan;Kim, Hoi Rin
    • The Journal of the Acoustical Society of Korea / Vol. 16 No. 1E / pp.57-63 / 1997
  • In this paper, we investigated the performance of a vocabulary-independent speech recognizer with speaker adaptation. The vocabulary-independent speech recognizer does not require task-oriented speech databases to estimate HMM parameters, but instead adapts the parameters recursively using input speech and recognition results. The recognizer has the advantage that it reduces the effort of recording speech databases and can easily be adapted to a new task and a new speaker with a different recognition vocabulary without losing recognition accuracy. Experimental results showed that the vocabulary-independent speech recognizer with supervised offline speaker adaptation reduced recognition errors by 40% when 80 words from the same vocabulary as the test data were used as adaptation data. The recognizer with unsupervised online speaker adaptation reduced recognition errors by about 43%. This performance is comparable to that of a speaker-independent speech recognizer trained on a task-oriented speech database.

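The error-reduction figures quoted above are relative: a 40% reduction means 40% of the baseline errors are removed, not a 40-percentage-point accuracy gain. A small worked example follows; the error rates in it are invented for illustration.

```python
# Worked example of relative error reduction; the error rates are illustrative only.
def relative_error_reduction(baseline_error: float, adapted_error: float) -> float:
    """Fraction of baseline recognition errors removed by speaker adaptation."""
    return (baseline_error - adapted_error) / baseline_error

# A hypothetical baseline error rate of 10% dropping to 6% after adaptation:
print(relative_error_reduction(0.10, 0.06))  # 0.4, i.e. a 40% relative reduction
```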

Speech Perception and Production of English Postvocalic Voicing by Korean and English Speakers

  • Chang, Woo-Hyeok
    • 음성과학 / Vol. 13 No. 2 / pp.107-120 / 2006
  • The main purpose of this study was to investigate whether Korean learners can use the vowel duration cue to distinguish voicing contrasts in word-final consonants in English. Given that the Korean group's performance on the auditory task was much better than their performance on the identification or production tasks, we conclude that the AX discrimination task taps a different layer of perception. In particular, the AX discrimination task can be done at the auditory or phonetic level, where differences in vowel length are still encoded in the representation. In contrast, the identification and production tasks probe the mental representation of vowel length and voicing. It was also found that Korean speakers stored neither vowel length nor voicing in memorized representations and did not internalize the lengthening of the preceding vowel as a rule for differentiating the voicing contrasts of final consonants, even though they were able to detect the acoustic differences in vowel duration when tested in an appropriate task.


경직형 뇌성마비아동의 말명료도 및 말명료도와 관련된 말 평가 변인 (Speech Evaluation Variables Related to Speech Intelligibility in Children with Spastic Cerebral Palsy)

  • 박지은;김향희;신지철;최홍식;심현섭;박은숙
    • 말소리와 음성과학 / Vol. 2 No. 4 / pp.193-212 / 2010
  • The purpose of our study was to provide effective speech evaluation items by examining which speech variables successfully predict speech intelligibility in children with CP. The subjects were 55 children with spastic-type cerebral palsy. For the speech evaluation, we administered a speech-subsystem evaluation and a speech intelligibility test. The results are as follows. First, the speech-subsystem evaluation consisted of 48 task items across an observational evaluation stage and three severity levels; the levels correlated with gross motor function, fine motor function, and age. Second, the speech-subsystem evaluation items were grouped into seven factors. Third, 34 of the 48 task items correlated positively with the syllable intelligibility rating: four items in the observational evaluation stage and, among the nonverbal articulatory function items, 11 items in level one, 12 in level two, and eight in level three. Fourth, 23 of the 48 items correlated with the sentence intelligibility rating: one item in the observational evaluation stage (in the articulatory structure evaluation task), six items in level one, eight in level two, and eight in level three. Fifth, a total of 14 items influenced the syllable intelligibility rating. Sixth, a total of 13 items influenced the sentence intelligibility rating. According to these results, the variables that influenced the speech intelligibility of the children with CP among the articulatory function tasks were in the respiratory function, phonatory function, and lip- and jaw-related tasks; we found no correlation for tongue function. These results can be applied to speech evaluation, setting therapy goals, and evaluating progress in children with CP. We studied only children with spastic-type cerebral palsy, and severe cases were underrepresented relative to moderate cases, so the characteristics of children with other degrees of severity should be taken into account when evaluating them. Further study of speech evaluation variables in relation to the severity of speech intelligibility and to other types of cerebral palsy may be necessary.

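The item-level relationship between subsystem task scores and intelligibility described above is essentially a correlation screen. A hedged sketch of that kind of analysis is shown below; the data layout, file name, and column names are assumptions, not the authors' materials, and Spearman correlation is used here only as one reasonable choice for ordinal item scores.

```python
# Hedged sketch: correlate each speech-subsystem item score with the syllable
# intelligibility rating; file name and column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("cp_speech_items.csv")  # one row per child: item_01..item_48, syll_intell

rows = []
for col in [c for c in df.columns if c.startswith("item_")]:
    rho, p = stats.spearmanr(df[col], df["syll_intell"])
    rows.append({"item": col, "rho": rho, "p": p})

corr = pd.DataFrame(rows).sort_values("rho", ascending=False)
print(corr[corr["p"] < 0.05])            # items significantly related to intelligibility
```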

Speech Feature Extraction Based on the Human Hearing Model

  • Chung, Kwang-Woo;Kim, Paul;Hong, Kwang-Seok
    • 대한음성학회:학술대회논문집 / 대한음성학회 1996년도 10월 학술대회지 / pp.435-447 / 1996
  • In this paper, we propose a method that extracts speech features using a hearing model through signal-processing techniques. The proposed method includes the following procedure: normalization of the short-time speech block by its maximum value, multi-resolution analysis using the discrete wavelet transform and re-synthesis using the inverse discrete wavelet transform, differentiation after analysis and synthesis, full-wave rectification, and integration. In order to verify the performance of the proposed speech feature in a speech recognition task, Korean digit recognition experiments were carried out using both DTW and VQ-HMM. The results showed that, with DTW, the recognition rates were 99.79% and 90.33% for the speaker-dependent and speaker-independent tasks respectively, and, with VQ-HMM, the rates were 96.5% and 81.5% respectively. This indicates that the proposed speech feature has the potential to serve as a simple and efficient feature for recognition tasks.

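The front-end procedure listed in the abstract (block normalization, DWT analysis and re-synthesis, differentiation, full-wave rectification, integration) can be sketched as below. This is one interpretation of that description, assuming PyWavelets for the wavelet step; it is not the authors' implementation, and the wavelet family and decomposition depth are arbitrary choices.

```python
# Hedged sketch of a hearing-model-style front-end: normalize the block, split it
# into wavelet sub-bands, re-synthesize each band, then differentiate, rectify,
# and integrate; wavelet ('db4') and level (3) are illustrative choices.
import numpy as np
import pywt

def hearing_model_feature(block: np.ndarray, wavelet: str = "db4", level: int = 3) -> np.ndarray:
    block = block / np.max(np.abs(block))                # normalize by the block maximum
    coeffs = pywt.wavedec(block, wavelet, level=level)   # multi-resolution analysis

    features = []
    for i in range(len(coeffs)):
        # keep one sub-band, zero the rest, and re-synthesize that band alone
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        band = pywt.waverec(kept, wavelet)[: len(block)]
        band = np.diff(band, prepend=band[0])            # differentiation
        band = np.abs(band)                              # full-wave rectification
        features.append(band.sum())                      # integration over the block
    return np.array(features)                            # one value per sub-band
```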

정신피로와 음성특징과의 상관관계 측정 (Measuring Correlation between Mental Fatigues and Speech Features)

  • 김정인;권철홍
    • 말소리와 음성과학 / Vol. 6 No. 2 / pp.3-8 / 2014
  • This paper examines how mental fatigue affects the human voice. For this purpose, a monotonous task designed to induce fatigue and a subjective questionnaire for rating fatigue were developed. The questionnaire responses confirmed that the designed task was indeed monotonous. To investigate the statistical relationship between fatigue and the speech features extracted from the collected speech data, a paired-samples t-test was used. The statistical analysis shows that the speech parameters most closely related to fatigue are first formant bandwidth, jitter, H1-H2, cepstral peak prominence, and harmonics-to-noise ratio. According to the experimental results, the voice becomes breathier as mental fatigue increases.
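The paired-samples t-test mentioned above can be sketched as follows. It assumes one pre-task and one post-task value per speaker for each acoustic feature (the feature extraction itself, e.g. jitter from Praat, is outside this sketch), and the input file names are hypothetical.

```python
# Hedged sketch of the paired-samples t-test on a single speech feature (jitter);
# the input files of per-speaker values are hypothetical placeholders.
import numpy as np
from scipy import stats

jitter_before = np.loadtxt("jitter_before_task.txt")  # one value per speaker
jitter_after  = np.loadtxt("jitter_after_task.txt")   # same speakers, after the task

t_stat, p_value = stats.ttest_rel(jitter_before, jitter_after)
print(f"jitter: t = {t_stat:.2f}, p = {p_value:.4f}")
```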

조음 로보틱스 (Articulatory robotics)

  • 남호성
    • 말소리와 음성과학 / Vol. 13 No. 2 / pp.1-7 / 2021
  • Speech can be described as a spatiotemporal coordinative structure of constriction movements occurring at the individual articulators (lips, tongue tip, tongue body, velum, glottis). As with other human movements (e.g., grasping), each constriction movement is a linguistically meaningful task, and each task is carried out by a synergy of the basic elements associated with it. This study discusses, from a robotics perspective, how such speech tasks can be dynamically linked to the joints that serve as their basic elements. Furthermore, by introducing the basic principles of robotics to the field of speech science, it aims to enable a deeper understanding of how speech as movement is produced and to provide the theoretical foundation needed to implement a talking machine that imitates actual human articulation.
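As a toy illustration of the task-to-joint linkage discussed above, the sketch below lets a single constriction-like task variable follow critically damped point-attractor dynamics and distributes the resulting task velocity over two "articulator" joints through the Jacobian pseudoinverse, in the spirit of task-dynamic and robotics formulations. The two-link geometry and all parameters are invented for illustration and are not the paper's model.

```python
# Toy sketch: point-attractor task dynamics mapped to joints via the Jacobian
# pseudoinverse; the 2-link "articulator" geometry is purely illustrative.
import numpy as np

def forward(q, l1=1.0, l2=1.0):
    """Task variable: end-point height of a 2-link chain (stand-in for constriction degree)."""
    return np.array([l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def jacobian(q, l1=1.0, l2=1.0):
    """Partial derivatives of the task variable with respect to the two joints."""
    return np.array([[l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                      l2 * np.cos(q[0] + q[1])]])

def task_accel(x, x_dot, x_target, k=100.0, m=1.0):
    b = 2.0 * np.sqrt(k * m)                 # critical damping: reach the goal without overshoot
    return (-k * (x - x_target) - b * x_dot) / m

q = np.array([0.3, 0.4])                     # initial joint angles (illustrative)
x_dot = np.zeros(1)
x_target = np.array([1.2])                   # task goal (target constriction value)
dt = 0.001

for _ in range(2000):                        # integrate 2 s of simulated time
    x = forward(q)
    x_dot += task_accel(x, x_dot, x_target) * dt
    q += (np.linalg.pinv(jacobian(q)) @ x_dot) * dt   # synergy: share task velocity across joints

print("task value:", forward(q), "target:", x_target, "joint angles:", q)
```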