Title/Summary/Keyword: visual-stimuli

Audio-visual Spatial Coherence Judgments in the Peripheral Visual Fields

  • Lee, Chai-Bong;Kang, Dae-Gee
    • Journal of the Institute of Convergence Signal Processing / v.16 no.2 / pp.35-39 / 2015
  • Auditory and visual stimuli presented in the peripheral visual field were perceived as spatially coincident when the auditory stimulus was presented five to seven degrees outwards from the direction of the visual stimulus. Furthermore, judgments of the perceived distance between auditory and visual stimuli presented in the periphery did not increase when the auditory stimulus was presented on the peripheral side of the visual stimulus. There seem to be two possible explanations for this phenomenon. One is that the participants could not perceptually distinguish the distances on the peripheral side because of limited perceptual accuracy. The other is that the participants could distinguish the distances but could not evaluate them because too few auditory stimulus positions were available in the experimental setup. To determine which explanation is valid, we conducted an experiment similar to our previous study using a sufficient number of loudspeakers for presenting the auditory stimuli. The results revealed that judgments of perceived distance did increase on the peripheral side, indicating that separations between auditory and visual stimuli can be discriminated on the peripheral side.
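
To make the kind of comparison described above concrete, here is a rough sketch (not from the paper) of averaging perceived-distance judgments by loudspeaker offset and checking whether they increase toward the peripheral side; the offsets, trial counts, and the `offsets_deg`/`judgments` arrays are hypothetical illustration data.

```python
import numpy as np

# Hypothetical data: each row is one trial -- the angular offset (degrees) of the
# loudspeaker from the visual stimulus (positive = peripheral side) and the
# participant's perceived-distance judgment on some rating scale.
rng = np.random.default_rng(0)
offsets_deg = np.repeat([-10, -5, 0, 5, 10, 15, 20], 20)          # 7 offsets x 20 trials
judgments = 0.08 * np.clip(offsets_deg, 0, None) + rng.normal(0, 0.3, offsets_deg.size)

# Mean judgment per offset: if judgments keep rising for positive (peripheral)
# offsets, observers can still discriminate separations on the peripheral side.
for off in np.unique(offsets_deg):
    mean_j = judgments[offsets_deg == off].mean()
    print(f"offset {off:+4d} deg -> mean perceived distance {mean_j:5.2f}")

# Simple monotonicity check on the peripheral side (offset >= 0).
peripheral = np.unique(offsets_deg[offsets_deg >= 0])
means = [judgments[offsets_deg == o].mean() for o in peripheral]
print("increasing on peripheral side:", bool(np.all(np.diff(means) > 0)))
```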

Phonological awareness skills in terms of visual and auditory stimulus and syllable position in typically developing children (청각적, 시각적 자극제시 방법과 음절위치에 따른 일반아동의 음운인식 능력)

  • Choi, Yu Mi;Ha, Seunghee
    • Phonetics and Speech Sciences / v.9 no.4 / pp.123-128 / 2017
  • This study aims to compare performance on a syllable identification task according to the auditory and visual stimulus presentation methods and syllable position. Twenty-two typically developing children (ages 4-6) participated in the study. Three-syllable words were used, and the children identified the first and the final syllable of each word under auditory and visual stimulus presentation. For the auditory presentation, the researcher produced the test word in oral speech only; for the visual presentation, the test words were shown as pictures and each child was asked to choose the picture appropriate to the task. The results showed that phonological awareness performance was significantly higher when tasks were presented visually than when they were presented with auditory stimuli, and performance on first-syllable identification was significantly higher than on final-syllable identification. When a phonological awareness task is presented with auditory stimuli, it must pass through all steps of the speech production process, so performance with auditory stimuli may be lowered by weakness at other stages of that process. When the task is presented with visual picture stimuli, it can be performed directly at the phonological representation stage without going through peripheral auditory processing, phonological recognition, and motor programming. This study suggests that phonological awareness skills can differ depending on the method of stimulus presentation and the syllable position used in the task. Comparing performance between visual and auditory tasks can help identify where children show weakness and vulnerability in the speech production process.

f-MRI with Three-Dimensional Visual Stimulation (삼차원 시각 자극을 이용한 f-MRI 연구)

  • Kim C.Y.;Park H.J.;Oh S.J.;Ahn C.B.
    • Investigative Magnetic Resonance Imaging / v.9 no.1 / pp.24-29 / 2005
  • Purpose: Instead of conventional two-dimensional (2-D) visual stimuli, three-dimensional (3-D) visual stimuli with stereoscopic vision were employed for functional magnetic resonance imaging (f-MRI). In this paper, f-MRI with 3-D visual stimuli is investigated in comparison with f-MRI with 2-D visual stimuli. Materials and Methods: Anaglyphs, which produce stereoscopic vision when color-coded images are viewed through red-blue glasses, were used as the 3-D visual stimuli; 2-D visual stimuli were also used for comparison. f-MRI experiments were performed on healthy volunteers with 2-D and 3-D visual stimuli on a 3.0 Tesla MRI system. Results: The occipital lobes were activated by the 3-D visual stimuli, similarly to f-MRI with the conventional 2-D visual stimuli; however, the regions activated by the 3-D stimuli were about 18% larger than those activated by the 2-D stimuli. Conclusion: Stereoscopic vision is the basis of three-dimensional human perception. In this study, 3-D visual stimuli were presented as anaglyphs, and f-MRI was performed with 2-D and 3-D visual stimuli on a 3.0 Tesla whole-body MRI system. The larger occipital activation for the 3-D stimuli (about 18%) is attributed to the greater complexity of 3-D human vision compared with 2-D vision. f-MRI with 3-D visual stimuli may be useful in various fields that rely on 3-D human vision, such as virtual reality, 3-D displays, and 3-D multimedia content.

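As a loose illustration of the activation-size comparison the abstract reports, the sketch below counts supra-threshold voxels in two hypothetical activation maps and computes the relative difference; `zmap_2d`, `zmap_3d`, and the z-threshold are assumptions, not the authors' data or pipeline.

```python
import numpy as np

# Hypothetical voxel-wise activation maps (e.g., z-scores) for the 2-D and 3-D
# visual-stimulation conditions, defined on the same grid.
rng = np.random.default_rng(1)
zmap_2d = rng.normal(0, 1, size=(32, 32, 24))
zmap_3d = rng.normal(0.1, 1, size=(32, 32, 24))

z_threshold = 3.1  # a commonly used single-voxel significance threshold

# Count supra-threshold voxels as a crude proxy for "activated region" size.
active_2d = np.count_nonzero(zmap_2d > z_threshold)
active_3d = np.count_nonzero(zmap_3d > z_threshold)

rel_increase = 100.0 * (active_3d - active_2d) / active_2d
print(f"2-D active voxels: {active_2d}, 3-D active voxels: {active_3d}")
print(f"3-D activation is {rel_increase:+.1f}% relative to 2-D")
```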

Increased Gamma-band Neural Synchrony by Pleasant and Unpleasant Visual Stimuli (긍정, 부정 감정 유발 시각자극에 의한 감마-대역 신경동기화 증가)

  • Yeo, Donghoon;Choi, Jeong Woo;Kim, Kyung Hwan
    • Journal of Biomedical Engineering Research / v.39 no.2 / pp.94-102 / 2018
  • Gamma-band activity (GBA) and gamma-band phase synchrony (GBPS) are known to be induced by emotional visual stimuli; however, how GBA and GBPS differ across emotional states has not been identified. The purpose of this study is to investigate the changes in gamma-band neural synchronization induced by positive and negative emotional visual stimuli using electroencephalograms (EEGs). Thirteen healthy male subjects participated in the experiment. The induced gamma-band spectral power was highest for negative stimuli and lowest for neutral stimuli in the 300-2,000 ms window after stimulus onset. Inter-regional gamma-band phase synchronization increased in the 500-2,000 ms window, mainly between the bilateral frontal regions and the parieto-occipital regions, and a larger number of significant connections was found for negative stimuli than for positive ones. Judging from the temporal and spatial characteristics of these increases in gamma-band activity and phase synchrony, the results may imply that affective visual stimuli cause stronger memory encoding than non-emotional stimuli, and that this effect is more pronounced for negative than for positive emotional stimuli.
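
The quantities described, induced gamma-band power and inter-regional phase synchrony, can be sketched generically with SciPy as below; this is a standard band-power and phase-locking-value computation, not the authors' code, and the sampling rate, gamma band, and the `epochs` array are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0                      # assumed sampling rate (Hz)
band = (30.0, 45.0)             # assumed gamma band of interest

# Hypothetical EEG epochs: (n_trials, n_channels, n_samples), e.g. a frontal and
# a parieto-occipital channel, time-locked to picture onset.
rng = np.random.default_rng(2)
epochs = rng.normal(0, 1, size=(40, 2, int(2.0 * fs)))

b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
analytic = hilbert(filtfilt(b, a, epochs, axis=-1), axis=-1)

# Induced gamma-band power: magnitude squared, averaged over trials.
power = np.mean(np.abs(analytic) ** 2, axis=0)          # (n_channels, n_samples)

# Phase-locking value between the two channels across trials at each time point.
phase_diff = np.angle(analytic[:, 0, :]) - np.angle(analytic[:, 1, :])
plv = np.abs(np.mean(np.exp(1j * phase_diff), axis=0))  # (n_samples,)

print("mean gamma power per channel:", power.mean(axis=-1))
print("mean inter-channel PLV:", plv.mean())
```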

The Effects of Visual and Auditory Feedback on Pain Reduction (시각과 청각되먹임이 통증감소에 미치는 영향)

  • Bae, Young-Sook;Kim, Soon-Hoe;Min, Kyung-Ok
    • Journal of Korean Physical Therapy Science / v.9 no.1 / pp.1-8 / 2002
  • This study set out to investigate the effects of consistent visual stimuli and of verbal and non-verbal auditory stimuli on pain alleviation, as well as the effect of applying visual and auditory stimuli together, at illuminance levels of 50 lux and 200 lux, ultimately providing basic data for setting up an environment for pain treatment. The subjects were 30 male and female adults with pain in the neck and back area. The subjects' painful areas were treated with a Transcutaneous Electrical Nerve Stimulator (TENS) at 100 Hz for 20 minutes in settings where visual, auditory, or combined visual and auditory stimuli were given. For the analysis, the Visual Analogue Scale (VAS) and the McGill Pain Questionnaire were used to assess changes before and after treatment, and the electrocardiogram, systolic and diastolic blood pressure, heart rate, breathing frequency, and endorphin levels were compared using the Wilcoxon signed-rank test; the Kruskal-Wallis test was used to compare the two subgroups within each group. The results were as follows: 1. When the 50 lux and 200 lux groups were compared under different visual stimuli, the 200 lux group showed greater reductions in pain scores, mean systolic and diastolic pressure, and mean endorphin levels. 2. When the verbal and non-verbal groups were compared under different auditory stimuli, the non-verbal group showed greater reductions in mean systolic and diastolic pressure. 3. When the 200 lux + verbal and 200 lux + non-verbal groups were compared under combined visual and auditory stimuli, a statistically significant difference (p<0.05) in endorphin levels was found between the two groups, with a greater reduction in the 200 lux + non-verbal group, and both groups showed statistically significant reductions in VAS and McGill scores before and after treatment.

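The within-group (Wilcoxon signed-rank) and between-group (Kruskal-Wallis) comparisons mentioned could look roughly like this with scipy.stats; the VAS scores and reduction values are hypothetical illustration data, not the study's measurements.

```python
import numpy as np
from scipy.stats import wilcoxon, kruskal

rng = np.random.default_rng(3)

# Hypothetical VAS pain scores (0-10) before and after TENS for one stimulation condition.
vas_pre = rng.uniform(5, 9, size=15)
vas_post = vas_pre - rng.uniform(0.5, 3.0, size=15)

# Within-group pre/post change: Wilcoxon signed-rank test on the paired scores.
stat, p_within = wilcoxon(vas_pre, vas_post)
print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p_within:.4f}")

# Between-group comparison of pre-to-post reductions
# (e.g. 200 lux + verbal vs. 200 lux + non-verbal).
reduction_verbal = rng.uniform(0.5, 2.5, size=15)
reduction_nonverbal = rng.uniform(1.0, 3.5, size=15)
h, p_between = kruskal(reduction_verbal, reduction_nonverbal)
print(f"Kruskal-Wallis: H={h:.2f}, p={p_between:.4f}")
```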

The Influence of SOA between the Visual and Auditory Stimuli with Semantic Properties on Integration of Audio-Visual Senses -Focus on the Redundant Target Effect and Visual Dominance Effect- (의미적 속성을 가진 시.청각자극의 SOA가 시청각 통합 현상에 미치는 영향 -중복 표적 효과와 시각 우세성 효과를 중심으로-)

  • Kim, Bo-Seong;Lee, Young-Chang;Lim, Dong-Hoon;Kim, Hyun-Woo;Min, Yoon-Ki
    • Science of Emotion and Sensibility / v.13 no.3 / pp.475-484 / 2010
  • This study examined the influence of the SOA (stimulus onset asynchrony) between visual and auditory stimuli on audio-visual integration. Within this integration phenomenon, the redundant target effect (faster and more accurate responses to a target stimulus presented in more than one modality) and the visual dominance effect (faster and more accurate responses to a visual stimulus than to an auditory stimulus) were examined: visual and auditory unimodal target conditions and a multimodal target condition were composed, and response time and accuracy were observed. The redundant target effect did not change as the SOA between the visual and auditory stimuli was varied, and an auditory dominance effect appeared when the SOA between the two stimuli exceeded 100 ms. These results imply that the redundant target effect is maintained even when the SOA between the two modal stimuli is altered, and also suggest that behavioral evidence of superior information processing is obtained only when the time difference between the onsets of the auditory and visual stimuli is approximately 100 ms or more.

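A rough sketch, under assumed data, of how reaction times in unimodal and bimodal (redundant-target) conditions might be compared at each SOA; the condition means, trial counts, and SOA values are hypothetical, and the simple t-test stands in for whatever analysis the authors actually used.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(4)
soas_ms = [0, 50, 100, 150]

for soa in soas_ms:
    # Hypothetical reaction times (ms) for visual-only, auditory-only, and bimodal targets.
    rt_visual = rng.normal(420, 40, size=60)
    rt_auditory = rng.normal(450, 40, size=60)
    rt_bimodal = rng.normal(395 + 0.1 * soa, 40, size=60)

    # Redundant target effect: bimodal RTs faster than the faster unimodal condition.
    fastest_unimodal = rt_visual if rt_visual.mean() < rt_auditory.mean() else rt_auditory
    t, p = ttest_ind(rt_bimodal, fastest_unimodal)
    print(f"SOA {soa:3d} ms: bimodal {rt_bimodal.mean():.0f} ms vs "
          f"fastest unimodal {fastest_unimodal.mean():.0f} ms (p={p:.3f})")
```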

A Survey of Objective Measurement of Fatigue Caused by Visual Stimuli (시각자극에 의한 피로도의 객관적 측정을 위한 연구 조사)

  • Kim, Young-Joo;Lee, Eui-Chul;Whang, Min-Cheol;Park, Kang-Ryoung
    • Journal of the Ergonomics Society of Korea / v.30 no.1 / pp.195-202 / 2011
  • Objective: The aim of this study is to investigate and review previous research on objectively measuring fatigue caused by visual stimuli. We also analyze the feasibility of alternative visual fatigue measurement methods using facial expression and gesture recognition. Background: In most previous studies, visual fatigue has been measured with subjective methods based on surveys or interviews. However, such subjective evaluations can be affected by variation in individual feelings or by other kinds of stimuli. To solve these problems, visual fatigue measurement methods based on signal and image processing have been widely researched. Method: To analyze the signal- and image-processing-based methods, we categorized previous works into three groups: bio-signal, brainwave, and eye-image based methods. The possibility of adopting facial expression or gesture recognition to measure visual fatigue is also analyzed. Results: Bio-signal and brainwave based methods are problematic because they can be degraded not only by visual stimuli but also by other external stimuli acting on other sense organs. In eye-image based methods, relying on a single feature such as blink frequency or pupil size is also problematic, because a single feature can easily be influenced by other kinds of emotions. Conclusion: A multi-modal measurement method is required that fuses several features extracted from bio-signals and images; an alternative method using facial expression or gesture recognition can also be considered. Application: An objective visual fatigue measurement method can be applied to the quantitative and comparative measurement of visual fatigue for next-generation display devices in terms of human factors.
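
For the eye-image features the survey discusses (blink frequency, pupil size), a minimal sketch of turning a per-frame eye-openness signal into a blink rate might look like the following; the openness signal, the 0.3 closure threshold, and the frame rate are assumptions rather than a method from any surveyed paper.

```python
import numpy as np

fps = 30                    # assumed camera frame rate
rng = np.random.default_rng(5)

# Hypothetical per-frame eye-openness signal (e.g. normalized eyelid distance),
# with a few dips toward zero representing blinks.
openness = np.clip(rng.normal(0.8, 0.05, size=fps * 60), 0, 1)
for start in (300, 900, 1500):                 # inject three blinks
    openness[start:start + 5] = 0.1

closed = openness < 0.3                        # frames counted as "eye closed"
# A blink is a transition from open to closed.
blink_onsets = np.flatnonzero(np.diff(closed.astype(int)) == 1)
blinks_per_minute = len(blink_onsets) / (len(openness) / fps / 60.0)

print(f"detected blinks: {len(blink_onsets)}, rate: {blinks_per_minute:.1f} per minute")
```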

An Analysis on the Changes in ERP According to Type of Stimuli about Fear of Crime (범죄의 두려움에 대한 자극의 유형에 따른 ERP 변화 분석)

  • Kim, Yong-Woo;Kang, Hang-Bong
    • Journal of Korea Multimedia Society / v.20 no.12 / pp.1856-1864 / 2017
  • The ultimate goal of multimedia bio-signal research is to access multimedia content through bio-signals, so it is important to interpret users' emotions by analyzing their bio-signals. In this paper, we construct an oddball ERP task to analyze EEG signals for normal stimuli versus fear-inducing stimuli and measure EEG during the task. The extracted ERP components show differences in the N200 for visual stimuli, in the P300 for auditory stimuli, and in the N100 and P300 for audiovisual stimuli. Moreover, the changes are larger for audiovisual stimuli, indicating that users experience greater fear of crime when visual and auditory stimuli are presented simultaneously.
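
A generic sketch of the kind of ERP component extraction such an analysis involves, baseline-correcting epochs, averaging, and taking the mean amplitude in a component window; the epoch arrays, sampling rate, and the 250-400 ms P300 window are assumptions, not the authors' parameters.

```python
import numpy as np

fs = 250.0                                  # assumed sampling rate (Hz)
t = np.arange(-0.2, 0.8, 1.0 / fs)          # epoch time axis: -200 ms to 800 ms

rng = np.random.default_rng(6)
# Hypothetical epochs for one channel: (n_trials, n_samples), time-locked to stimulus onset.
epochs_normal = rng.normal(0, 2, size=(80, t.size))
epochs_fear = rng.normal(0, 2, size=(80, t.size)) + 3 * np.exp(-((t - 0.3) ** 2) / 0.002)

def component_amplitude(epochs, window):
    """Baseline-correct, average over trials, then take the mean amplitude in `window` (s)."""
    baseline = epochs[:, t < 0].mean(axis=1, keepdims=True)
    erp = (epochs - baseline).mean(axis=0)
    mask = (t >= window[0]) & (t <= window[1])
    return erp[mask].mean()

p300_normal = component_amplitude(epochs_normal, (0.25, 0.40))
p300_fear = component_amplitude(epochs_fear, (0.25, 0.40))
print(f"P300 mean amplitude: normal {p300_normal:.2f} uV, fear {p300_fear:.2f} uV")
```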

Comparison of Motor Skill Acquisition according to Types of Sensory-Stimuli Cue in Serial Reaction Time Task

  • Kwon, Yong Hyun;Lee, Myoung Hee
    • The Journal of Korean Physical Therapy / v.26 no.3 / pp.191-195 / 2014
  • Purpose: The purpose of this study was to investigate whether the type of sensory-stimulus cue (visual, auditory, or visuoauditory) affects motor sequence learning in healthy adults, using a serial reaction time (SRT) task. Methods: Twenty-four healthy subjects participated and were randomly allocated into three groups: a visual-stimuli (VS) group, an auditory-stimuli (AS) group, and a visuoauditory-stimuli (VAS) group. In the SRT task, eight Arabic numerals were adopted as the presented stimuli, composed into three presentation modules: visual, auditory, and visuoauditory. In the experiment, all subjects performed a total of 3 sessions with the stimulus module assigned to their group, with 10-minute pauses, for training and pre-/post-tests. At the pre- and post-tests, reaction time and accuracy were calculated. Results: For reaction time, significant differences were found for the between-subjects factor, the within-subjects factor, and the group × repetition interaction. For accuracy, no significant between-group effect or group × repetition interaction was observed, although there was a significant main effect of the within-subjects factor. In addition, a significant difference among the three groups in the pre- to post-test change was found only for reaction time. Conclusion: These results suggest that short-term sequential motor training within a single day induces behavioral changes such as the speed and accuracy of motor responses, and that training with a visual-stimulus cue produces better motor skill acquisition than auditory or visuoauditory cues.
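
The pre/post reaction-time comparison across the three cue groups could be sketched along the following lines; the group sizes and RT values are hypothetical, and a one-way ANOVA on the pre-to-post change is used as a stand-in for the mixed-design analysis reported.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(7)

# Hypothetical pre- and post-test reaction times (ms) for 8 subjects per cue group.
groups = {
    "visual":        (rng.normal(520, 30, 8), rng.normal(440, 30, 8)),
    "auditory":      (rng.normal(520, 30, 8), rng.normal(480, 30, 8)),
    "visuoauditory": (rng.normal(520, 30, 8), rng.normal(470, 30, 8)),
}

# Pre-to-post change per subject; a one-way ANOVA across groups asks whether the
# amount of improvement depends on the cue type.
changes = {name: pre - post for name, (pre, post) in groups.items()}
for name, delta in changes.items():
    print(f"{name:>13}: mean RT improvement {delta.mean():.0f} ms")

f, p = f_oneway(*changes.values())
print(f"one-way ANOVA on RT change: F={f:.2f}, p={p:.3f}")
```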

Comparison between Affective Responses to Tactile Stimuli Based on the Presence of Visual Information Presentation (시각 정보 제시 여부에 따른 촉각 자극에 대한 정서 반응 비교)

  • Jisu Kim;Chaery Park;Jongwan Kim
    • Science of Emotion and Sensibility / v.27 no.2 / pp.15-24 / 2024
  • Previous studies on texture and emotion have focused on identifying precisely which tactile stimuli trigger specific emotions. Despite the significant role of vision in tactile perception, research has so far focused only on the single dimension of texture. In this study, we used tactile stimuli to investigate the effects of three variables (roughness, hardness, and visual blocking) on affective responses to tactile perception. Experimental stimuli that can be encountered in daily life were selected for the four conditions "rough/hard," "rough/soft," "smooth/hard," and "smooth/soft," formed by crossing two roughness conditions (rough, smooth) with two hardness conditions (hard, soft). The experiment was divided into two sessions depending on whether vision was blocked: participants first completed a session in which they evaluated each tactile stimulus after touching it without seeing it, and then a session in which they evaluated each stimulus after touching it while seeing it. The results of the repeated-measures ANOVA showed that individuals reported more positive responses when touching stimuli with visual cues and more negative responses when touching stimuli without visual cues. Furthermore, the tendency to perceive smooth and soft stimuli more positively and rough stimuli more negatively was stronger when touching without visual cues. These results have implications for understanding the interaction between emotion and visual information processing by elucidating how emotions are experienced differently when visual information is and is not provided.
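
A minimal sketch of a repeated-measures ANOVA like the one described, using statsmodels' AnovaRM on a long-format table of ratings; the factor names, subject count, and valence values are all hypothetical, and statsmodels/pandas availability is assumed.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(8)

# Hypothetical long-format ratings: each subject rates every combination of
# visual condition (seen / blocked) x roughness (rough / smooth) x hardness (hard / soft).
rows = []
for subject in range(1, 21):
    for vision in ("seen", "blocked"):
        for roughness in ("rough", "smooth"):
            for hardness in ("hard", "soft"):
                valence = rng.normal(0.5 if vision == "seen" else -0.2, 1.0)
                rows.append((subject, vision, roughness, hardness, valence))

df = pd.DataFrame(rows, columns=["subject", "vision", "roughness", "hardness", "valence"])

# Three-way repeated-measures ANOVA on the valence ratings.
result = AnovaRM(df, depvar="valence", subject="subject",
                 within=["vision", "roughness", "hardness"]).fit()
print(result)
```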