• Title/Abstract/Keyword: auditory presentation

Search results: 28 items

청각적 말소리 자극과 시각적 글자 자극 제시방법에 따른 5, 6세 일반아동의 음소인식 수행력 비교 (Effects of auditory and visual presentation on phonemic awareness in 5- to 6- year-old children)

  • 김명헌;하지완
    • 말소리와 음성과학, Vol. 8, No. 1, pp. 71-80, 2016
  • Phonemic awareness tasks (phonemic synthesis, phonemic elision, and phonemic segmentation) were administered under auditory and visual presentation conditions to 40 children aged 5 and 6. Scores and error types on the sub-tasks were compared between the two presentation conditions, and correlations between performances on the sub-tasks across the two conditions were examined. The 6-year-old group scored significantly higher on phonemic awareness than the 5-year-old group, and both groups scored significantly higher under visual presentation than under auditory presentation. Under visual presentation, performance on the segmentation task was significantly lower than on the other two tasks, whereas there was no significant difference among sub-tasks under auditory presentation. The 5-year-old group produced significantly more 'no response' errors than the 6-year-old group, while the 6-year-old group produced significantly more 'phoneme substitution' and 'phoneme omission' errors than the 5-year-old group. Significantly more 'phoneme omission' errors were observed in the segmentation task than in the elision task, and significantly more 'phoneme addition' errors in the elision task than in the synthesis task. Lastly, positive correlations were found between the auditory and visual synthesis tasks, the auditory and visual elision tasks, and the auditory and visual segmentation tasks. Taken together, these results suggest that children tend to depend on orthographic knowledge when acquiring initial phonemic awareness, and they support the position that orthographic knowledge affects the development of phonemic awareness.

청각적, 시각적 자극제시 방법과 음절위치에 따른 일반아동의 음운인식 능력 (Phonological awareness skills in terms of visual and auditory stimulus and syllable position in typically developing children)

  • 최유미;하승희
    • 말소리와 음성과학, Vol. 9, No. 4, pp. 123-128, 2017
  • This study compared performance on a syllable identification task according to auditory and visual stimulus presentation methods and syllable position. Twenty-two typically developing children (ages 4-6) participated. Three-syllable words were used, and the children identified the first and final syllables of each word under auditory and visual stimulus conditions. In the auditory condition, the researcher presented the test word by oral speech only; in the visual condition, the test words were presented as pictures, and each child was asked to choose the picture appropriate to the task. Phonological awareness performance was significantly higher when tasks were presented visually than when they were presented auditorily, and performance on first-syllable identification was significantly higher than on final-syllable identification. When a phonological awareness task is presented with auditory stimuli, all steps of the speech production process must be engaged, so performance may be lowered by weakness at other stages of that process. When the task is presented with visual picture stimuli, it can be performed directly at the phonological representation stage without passing through peripheral auditory processing, phonological recognition, and motor programming. These findings suggest that phonological awareness skills can differ depending on the stimulus presentation method and the syllable position targeted by the task, and that comparing performance between visual and auditory stimulus tasks can help identify where children show weakness or vulnerability in the speech production process.

Audio-visual Spatial Coherence Judgments in the Peripheral Visual Fields

  • 이채봉;강대기
    • 융합신호처리학회논문지, Vol. 16, No. 2, pp. 35-39, 2015
  • Auditory and visual stimuli presented in the peripheral visual field were perceived as spatially coincident when the auditory stimulus was presented five to seven degrees outward from the direction of the visual stimulus. Furthermore, judgments of the perceived distance between auditory and visual stimuli presented in the periphery did not increase when the auditory stimulus was presented on the peripheral side of the visual stimulus. Two explanations for this phenomenon seem possible. One is that participants could not perceptually distinguish distances on the peripheral side because of limited perceptual accuracy. The other is that participants could distinguish the distances but could not report them because the experimental setup provided too few auditory stimulus positions. To determine which explanation is valid, we conducted an experiment similar to our previous study using a sufficient number of loudspeakers for presenting the auditory stimuli. The results revealed that judgments of perceived distance did increase on the peripheral side, indicating that discrimination between auditory and visual stimulus positions is possible in the peripheral visual field.

TV동영상과 신문텍스트의 정보제시특성이 어린이와 성인의 정보기억에 미치는 영향 (Effects of Presentation Modalities of Television Moving Image and Print Text on Children's and Adult's Recall)

  • 최이정
    • 한국콘텐츠학회논문지, Vol. 9, No. 7, pp. 149-158, 2009
  • This study examined how children's and adults' recall of information differs according to the presentation characteristics of television moving images and newspaper text. An experiment presented the same information story to children and adults in three different formats: "TV moving image 1" (redundant visual and audio information), "TV moving image 2" (separated visual and audio information), and "newspaper text," and compared their recall. The results showed that children remembered the TV moving-image information better than the newspaper text regardless of whether the visual and audio channels were redundant. Adults, however, supported the dual-coding hypothesis only when the visual and audio information were redundant: only in that case did the advantage of TV moving images over newspaper text emerge.

Ergonomic Recommendation for Optimum Positions and Warning Foreperiod of Auditory Signals in Human-Machine Interface

  • Lee, Fion C.H.;Chan, Alan H.S.
    • Industrial Engineering and Management Systems, Vol. 6, No. 1, pp. 40-48, 2007
  • This study investigated the optimum positions and warning foreperiod for auditory signals in an experiment on spatial stimulus-response (S-R) compatibility effects. Auditory signals were presented at the front-right, front-left, rear-right, and rear-left positions relative to the subjects, and reaction times and accuracies under different spatial mapping conditions were examined. The results showed a significant spatial S-R compatibility effect: responses were faster and more accurate in the transversely and longitudinally compatible condition, and performance was worst when spatial S-R compatibility existed in neither orientation. The transverse compatibility effect was significantly stronger than the longitudinal compatibility effect. The effect of signal position was also significant, and a post hoc test suggested that an emergency warning alarm should be placed at the front-right position for right-handed users. The warning foreperiod prior to signal presentation influenced reaction time, and a foreperiod of 3 s was found optimal for the two-choice auditory reaction task.

웹 콘텐츠의 정보제시유형이 어린이 뉴스정보처리과정에 미치는 영향 (The Effects of the Presentation Mode of Web Contents on the Children's Information Processing Process)

  • 최이정
    • 한국콘텐츠학회논문지, Vol. 5, No. 3, pp. 113-122, 2005
  • This experimental study examined how different uses of the four basic elements of web content presentation (moving image, audio, image, and text) affect receivers' information processing, focusing on children's news sites. The same news story was delivered to five groups of child participants through websites built in five different formats: "moving image 1" (redundant visual and audio information), "moving image 2" (separated visual and audio information), "audio," "text," and "text + image (photo)," and recall of the news information was compared across groups. The results showed that the site designed to deliver news as moving images was the most effective for children's recall of news information, and this advantage of moving images was especially strengthened when the visual and audio information within the moving image were redundant.

Sound-Field Speech Evoked Auditory Brainstem Response in Cochlear-Implant Recipients

  • Jarollahi, Farnoush;Valadbeigi, Ayub;Jalaei, Bahram;Maarefvand, Mohammad;Zarandy, Masoud Motasaddi;Haghani, Hamid;Shirzhiyan, Zahra
    • 대한청각학회지 (Journal of Audiology & Otology), Vol. 24, No. 2, pp. 71-78, 2020
  • Background and Objectives: Limited information is currently available on speech stimulus processing at the subcortical level in cochlear implant (CI) recipients. Speech processing at the brainstem level is measured using the speech-evoked auditory brainstem response (S-ABR). The purpose of the present study was to measure S-ABR components under sound-field presentation in CI recipients and to compare them with those of normal-hearing (NH) children. Subjects and Methods: In this descriptive-analytical study, participants were divided into two groups: CI recipients and an NH group. The CI group consisted of 20 children with prelingual hearing impairment (mean age = 8.90±0.79 years) with ipsilateral (right-side) CIs. The control group consisted of 20 healthy NH children with comparable age and sex distribution. The S-ABR was evoked by a 40-ms synthesized /da/ syllable presented in the sound field. Results: Sound-field S-ABRs measured in the CI recipients showed statistically significantly delayed latencies compared with the NH group. The frequency-following response peak amplitude was significantly higher in CI recipients than in their NH counterparts (p<0.05), and neural phase locking was significantly lower in CI recipients (p<0.05). Conclusions: The sound-field S-ABR findings demonstrate that CI recipients have neural encoding deficits in the temporal and spectral domains at the brainstem level; the sound-field S-ABR can therefore be considered an efficient clinical procedure for assessing speech processing in CI recipients.

A Proposal of the Olfactory Information Presentation Method and Its Application for Scent Generator Using Web Service

  • Kim, Jeong-Do;Byun, Hyung-Gi
    • 센서학회지, Vol. 21, No. 4, pp. 249-255, 2012
  • Among the human senses, olfaction still lacks a proper data presentation method, unlike vision and audition. This makes it impossible to present the sense of smell as multimedia information, an unexplored field in human-computer interaction. In this paper, we propose an olfactory information presentation method, a way to use smell as multimedia information, and demonstrate an application for scent generation and odor display using a web service. The olfactory information can represent smell characteristics such as intensity, persistence, hedonic tone, and odor description, and the structure of the data format can be organized according to data types such as integer, float, char, string, and bitmap. It can then be used to transmit data via a web service and to display odors with a scent generator. The scent generator, which renders smell information, was developed to generate six odors using six aroma solutions and one diluting solution with 14 micro-valves and a micropump. Through the experiment, we confirmed that a remote user can receive smell information transmitted by a messenger service and request odor display from the computer-controlled scent generator. This work contributes to enriching existing virtual reality and is proposed as a standard reference method for olfactory information presentation in future multimedia technology.
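  The abstract names four smell characteristics (intensity, persistence, hedonic tone, odor description) and a typed data format for web transmission, but does not give the concrete layout. The sketch below shows, in Python, how such a record might be structured and serialized for a scent generator behind a web service; the field names, value ranges, and JSON encoding are illustrative assumptions, not the authors' actual format.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class OlfactoryInfo:
    """Hypothetical record for the smell characteristics named in the abstract.

    Field names and value ranges are illustrative assumptions,
    not the paper's actual data format.
    """
    intensity: int      # e.g. 0 (imperceptible) .. 5 (very strong)
    persistence: float  # seconds the odor should be emitted
    hedonic_tone: int   # e.g. -4 (very unpleasant) .. +4 (very pleasant)
    description: str    # free-text odor description, e.g. "rose"

    def to_message(self) -> str:
        # Serialize to JSON for transmission to a remote scent generator.
        return json.dumps(asdict(self))

# A remote client could build and send one record per requested odor:
msg = OlfactoryInfo(intensity=3, persistence=2.5,
                    hedonic_tone=2, description="rose").to_message()
```

  A fixed-width binary layout (matching the paper's integer/float/char/string/bitmap types) would serve equally well; JSON is used here only because it is convenient over a web service.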

Acoustic-Phonetic Phenotypes in Pediatric Speech Disorders;An Interdisciplinary Approach

  • Bunnell, H. Timothy
    • 대한음성학회: Proceedings of the 2006 Fall Conference, pp. 31-36, 2006
  • Research in the Center for Pediatric Auditory and Speech Sciences (CPASS) is attempting to characterize, or phenotype, children with speech delays based on acoustic-phonetic evidence and to relate those phenotypes to chromosome loci believed to be related to language and speech. To achieve this goal we have adopted a highly interdisciplinary approach that merges fields as diverse as automatic speech recognition, human genetics, neuroscience, epidemiology, and speech-language pathology. In this presentation I will trace the background of this project and the rationale for our approach. Analyses based on a large amount of speech recorded from 18 children with speech delays will be presented to illustrate the approach we will take to characterize the acoustic-phonetic properties of disordered speech in young children. The ultimate goal of our work is to develop non-invasive, objective measures of speech development that can be used to better identify which children with apparent speech delays are most in need of, or would benefit most from, therapeutic services.
