• Title/Summary/Keyword: auditory perceptual experiment

Temporal-perceptual Judgement of Visuo-Auditory Stimulation (시청각 자극의 시간적 인지 판단)

  • Yu, Mi; Lee, Sang-Min; Piao, Yong-Jun; Kwon, Tae-Kyu; Kim, Nam-Gyun
    • Journal of the Korean Society for Precision Engineering / v.24 no.1 s.190 / pp.101-109 / 2007
  • In spatio-temporal perception of visuo-auditory stimuli, research has proposed the optimal integration hypothesis: the perceptual process is optimized through the interaction of the senses so as to maximize the precision of perception. Thus, when visual information, generally considered dominant over the other senses, is ambiguous, information from another sense such as an auditory stimulus influences the perceptual process in interaction with the visual information. We therefore performed two experiments to ascertain the conditions under which the senses interact and the influence of those conditions, considering the interaction of visuo-auditory stimulation in free space, the color of the visual stimulus, and sex differences among normal participants. In the first experiment, 12 participants were asked to judge the change in the frequency of audio-visual stimulation using a visual flicker and an auditory flutter presented in free space. When auditory temporal cues were presented, the change in the frequency of the visual stimulation was associated with a perceived change in the frequency of the auditory stimulation, consistent with previous studies that used headphones. In the second experiment, 30 male and 30 female participants were asked to judge the change in the frequency of audio-visual stimulation using a colored visual flicker (red or green) and an auditory flutter. Male and female participants showed the same perceptual tendency; however, the standard deviation for female participants was larger than that for males. These results imply that audio-visual asymmetry effects are influenced by the cues of the visual and auditory information, such as the orientation between the auditory and visual stimuli and the color of the visual stimulus.
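The stimulus pairing described in this abstract (a visual flicker whose rate is judged while an auditory flutter runs at a different rate) can be illustrated with a minimal sketch. This is not the authors' code; the rates, carrier frequency, and durations below are hypothetical, and no presentation software is involved, only raw stimulus arrays.

```python
# Minimal sketch (hypothetical parameters): one flicker/flutter trial as raw arrays.
import numpy as np

def flicker_flutter(flicker_hz=10.0, flutter_hz=14.0, duration=1.0,
                    frame_rate=60, audio_rate=44100, carrier_hz=1000.0):
    """Return (visual_frames, audio_samples) for one trial.

    visual_frames: 0/1 luminance per video frame (square-wave flicker).
    audio_samples: a tone amplitude-modulated at the flutter rate.
    """
    t_v = np.arange(int(duration * frame_rate)) / frame_rate
    visual = (np.sin(2 * np.pi * flicker_hz * t_v) > 0).astype(float)

    t_a = np.arange(int(duration * audio_rate)) / audio_rate
    envelope = 0.5 * (1 + np.sign(np.sin(2 * np.pi * flutter_hz * t_a)))
    audio = envelope * np.sin(2 * np.pi * carrier_hz * t_a)
    return visual, audio

# Example trial: the auditory rate differs from the visual rate, the kind of
# mismatch used to probe whether audition biases visual rate judgements.
vis, aud = flicker_flutter(flicker_hz=10.0, flutter_hz=14.0)
```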

Perceptual Boundary on a Synthesized Korean Vowel /o/-/u/ Continuum by Chinese Learners of Korean Language (/오/-/우/ 합성모음 연속체에 대한 중국인 한국어 학습자의 청지각적 경계)

  • Yun, Jihyeon; Kim, EunKyung; Seong, Cheoljae
    • Phonetics and Speech Sciences / v.7 no.4 / pp.111-121 / 2015
  • The present study examines the auditory boundary between Korean /o/ and /u/ on a synthesized vowel continuum for Chinese learners of Korean. Previous research has reported that Chinese learners have difficulty pronouncing the Korean monophthongs /o/ and /u/. In this experiment, a nine-step continuum was resynthesized in Praat from a vowel token recorded by a male announcer who produced it in isolated form. F1 and F2 were shifted synchronously in equal quarter-tone (qtone) steps, while F3 and F4 were held constant across all stimuli. A forced-choice identification task was performed by advanced learners whose native language is Mandarin Chinese, and their data were compared with those of a native Korean group. ROC (Receiver Operating Characteristic) analysis and logistic regression were performed to estimate the perceptual boundary. The results indicated that the learner group has a different auditory criterion on the continuum from the native Korean group, suggesting that more emphasis should be placed on listening training in order to acquire the phoneme categories of the two vowels.
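The boundary-estimation step mentioned here (logistic regression over a stimulus continuum) can be sketched as follows. The responses are invented, the continuum is reduced to step numbers, and the 50% crossover of the fitted function is taken as the boundary; the paper additionally used ROC analysis, which is not shown.

```python
# Minimal sketch (invented responses): perceptual boundary as the 50% point of a
# logistic identification function fitted over a 9-step continuum.
import numpy as np
from sklearn.linear_model import LogisticRegression

steps = np.repeat(np.arange(1, 10), 10)              # 9 steps x 10 repetitions
p_u = 1 / (1 + np.exp(-(steps - 5.3) * 1.4))          # hypothetical listener
rng = np.random.default_rng(0)
resp = rng.binomial(1, p_u)                           # 1 = "/u/", 0 = "/o/"

model = LogisticRegression().fit(steps.reshape(-1, 1), resp)
boundary = -model.intercept_[0] / model.coef_[0, 0]   # step where P(/u/) = 0.5
print(f"estimated /o/-/u/ boundary at step {boundary:.2f}")
```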

A Study on Development of a Hearing Impairment Simulator considering Frequency Selectivity and Asymmetrical Auditory Filter of the Hearing Impaired (난청인의 주파수 선택도와 비대칭적 청각 필터를 고려한 난청 시뮬레이터 개발에 관한 연구)

  • Joo, Sang-Ick; Kang, Hyun-Deok; Song, Young-Rok; Lee, Sang-Min
    • The Transactions of The Korean Institute of Electrical Engineers / v.59 no.4 / pp.831-840 / 2010
  • In this paper, we propose a hearing impairment simulator that takes into account the reduced frequency selectivity and asymmetrical auditory filter of the hearing impaired, and we verify through experiments that these factors affect speech perception. Reduced frequency selectivity was implemented by spectral smearing using LPC (linear predictive coding). The shape of the auditory filter is asymmetrical and differs at each center frequency; the auditory filter of a hearing-impaired person changes relative to that of a normal-hearing person, yielding a different speech quality for speech passed through the filter. The experiments comprised subjective and objective tests. The subjective experiments consisted of four tests: a pure tone test, an SRT (speech reception threshold) test, a WRS (word recognition score) test without spectral smearing, and a WRS test with spectral smearing; they were performed with 9 subjects with normal hearing, and the amount of spectral smearing was controlled by the LPC order. The asymmetrical auditory filter of the proposed simulator was then simulated, and tests were performed to estimate the filter's performance objectively. The objective evaluation used PESQ (perceptual evaluation of speech quality) and LLR (log likelihood ratio) on speech passed through the auditory filter, assessing objective speech quality and distortion from the PESQ and LLR values. When hearing loss was simulated, the PESQ and LLR values differed substantially according to the asymmetrical auditory filter in the hearing impairment simulator.
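One way to read "spectral smearing controlled by the LPC order" is to replace each frame's detailed spectral envelope with a coarser all-pole envelope; a lower order keeps less formant detail. The sketch below is an assumption along those lines, not the paper's implementation, and the input is a synthetic pulse train standing in for speech.

```python
# Minimal sketch (assumed smearing scheme): frame-wise LPC inverse filtering
# followed by re-synthesis through a low-order (coarse) LPC envelope.
import numpy as np
import librosa
from scipy.signal import lfilter

def lpc_smear(y, frame_len=512, hop=256, analysis_order=24, smear_order=6):
    """Replace each frame's detailed LPC envelope with a coarse one (overlap-add)."""
    out = np.zeros(len(y))
    win = np.hanning(frame_len)
    for start in range(0, len(y) - frame_len, hop):
        frame = y[start:start + frame_len] * win
        a_full = librosa.lpc(frame, order=analysis_order)    # detailed envelope
        residual = lfilter(a_full, [1.0], frame)             # inverse filtering
        a_coarse = librosa.lpc(frame, order=smear_order)     # coarse (smeared) envelope
        out[start:start + frame_len] += lfilter([1.0], a_coarse, residual)
    return out

# stand-in signal: a crude 120 Hz pulse-train "vowel" at 16 kHz (not real speech)
sr = 16000
t = np.arange(sr) / sr
y = 0.5 * np.sign(np.sin(2 * np.pi * 120 * t)) + 0.01 * np.random.default_rng(0).standard_normal(sr)
y_smeared = lpc_smear(y)
```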

Variations in the perception of lexical pitch accents and the correlations with individuals' autistic traits

  • Lee, Hyunjung
    • Phonetics and Speech Sciences / v.9 no.2 / pp.53-59 / 2017
  • The present study examined whether individual listeners' perceptual variations were associated with their cognitive characteristics as indexed by the Autistic Spectrum Quotient (AQ). This study first investigated the perception of the lexical pitch accent contrast in Kyungsang Korean, which is currently undergoing a sound change, and then tested whether listeners' perceptual variations were correlated with their AQ scores. Eighteen Kyungsang listeners in their 20s participated in a perception experiment in which they identified two contrastive accent words for auditory stimuli systematically varying in F0 scaling and timing properties; the participants then completed the AQ questionnaire. In the results, the acoustic parameters that reportedly show reduced phonetic differences across accent contrasts in the younger Kyungsang generation played a reliable role in distinguishing the HH word from the HL word in perception, suggesting a discrepancy between perception and production in the context of the sound change. The study also observed that individuals' perceptual variations were negatively correlated with their AQ sub-scores. These findings suggest that the sound change may proceed differently, and on a different time course, in production and perception, and that deviant percepts can be explained by individuals' cognitive measures.
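The final correlation step reported here is a simple per-listener association between a perceptual measure and an AQ sub-score. The sketch below uses invented numbers (the perceptual measure is framed as a hypothetical identification-function slope) and is not the study's data or script.

```python
# Minimal sketch (invented numbers): correlating a per-listener perceptual
# measure with an AQ sub-score, the analysis the abstract reports as negative.
import numpy as np
from scipy.stats import pearsonr

id_slope = np.array([2.1, 1.8, 2.6, 1.2, 1.9, 2.4, 1.5, 2.8, 1.1, 2.2])  # hypothetical
aq_subscore = np.array([6, 7, 4, 9, 6, 5, 8, 3, 10, 5])                  # hypothetical

r, p = pearsonr(id_slope, aq_subscore)
print(f"r = {r:.2f}, p = {p:.3f}")   # a negative r matches the reported direction
```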

An Objective Estimation for Simulating of Asymmetrical Auditory Filter of the Hearing Impaired According to Hearing Loss Degree (난청인의 난청 정도에 따른 비대칭 청각 필터 구현의 객관적 평가)

  • Joo, S.I.; Jeon, Y.Y.; Song, Y.R.; Lee, S.M.
    • Journal of rehabilitation welfare engineering & assistive technology / v.3 no.1 / pp.27-34 / 2009
  • A hearing-impaired person's hearing loss varies in shape from individual to individual, so existing methods based on a symmetrical auditory filter per frequency band could not properly simulate the various shapes of hearing loss. The shape of the auditory filter is asymmetrical and differs at each center frequency and each input level; the auditory filter of a hearing-impaired person changes relative to that of a normal-hearing person, yielding a different speech quality for speech passed through the filter. In this study, an asymmetrical auditory filter was simulated, and tests were performed to estimate the filter's performance objectively. As the evaluation method for the simulated auditory filter, the experiment used the perceptual evaluation of speech quality (PESQ) and the log likelihood ratio (LLR) on speech passed through the filter; the processed speech was assessed for objective speech quality and distortion using the PESQ and LLR values. When hearing loss was simulated, the PESQ and LLR values showed a large difference between the symmetrical and asymmetrical auditory filters, which means that the shape of the auditory filter may affect speech quality. In particular, with hearing loss present, an auditory filter whose asymmetrical shape changes with each center frequency affected the perceived speech quality of the hearing impaired.
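The objective scoring described here, PESQ plus LLR between clean and processed speech, can be sketched as follows. This is an assumption about the pipeline, not the paper's code: PESQ comes from the `pesq` package, and LLR is computed frame-wise from LPC coefficients against the clean frame's autocorrelation matrix.

```python
# Minimal sketch (assumed evaluation pipeline): PESQ and frame-wise LLR between
# a clean reference and a processed (auditory-filtered) signal.
import numpy as np
import librosa
from scipy.linalg import toeplitz
from pesq import pesq

def llr(clean, processed, order=10, frame_len=400, hop=200):
    scores = []
    for s in range(0, min(len(clean), len(processed)) - frame_len, hop):
        c = clean[s:s + frame_len] * np.hanning(frame_len)
        p = processed[s:s + frame_len] * np.hanning(frame_len)
        a_c = librosa.lpc(c, order=order)
        a_p = librosa.lpc(p, order=order)
        r = np.array([np.dot(c[:len(c) - k], c[k:]) for k in range(order + 1)])
        R = toeplitz(r)                       # autocorrelation matrix of the clean frame
        scores.append(np.log((a_p @ R @ a_p) / (a_c @ R @ a_c)))
    return float(np.mean(scores))

# clean, processed: 1-D float arrays at 16 kHz (e.g., original vs. filter output)
# pesq_score = pesq(16000, clean, processed, "wb")
# llr_score  = llr(clean, processed)
```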

Auditory Interaction Design By Impact Sound Synthesis for Virtual Environment (충돌음 합성에 의한 가상환경의 청각적 인터랙션 디자인)

  • Nam, Yang-Hee
    • The Journal of the Korea Contents Association / v.13 no.5 / pp.1-8 / 2013
  • Focusing on the fact that sound is one of the important sensory cues for conveying situations such as impact, this paper proposes an auditory interaction design approach for virtual environments. Based on a few samples of basic material sounds for various materials such as steel, rubber, and glass, the proposed method enables design transformations of the basic sound by allowing modification of the mode gains that characterize the natural sound of each material. In a real-time virtual environment, it also simulates the modified sound according to changes in the impact situation's perceptual properties, such as the colliding objects' size, hardness, contact area, and speed. The results of a cognition experiment in which listeners discriminated objects' materials and impact situations by sound showed the feasibility of the proposed auditory interaction design method.
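Modal synthesis of the kind sketched in this abstract sums exponentially decaying sinusoids whose gains are scaled by impact parameters. The example below is a generic illustration with invented mode data and a crude perceptual scaling rule; it is not the paper's model.

```python
# Minimal sketch (hypothetical modal data): impact sound as a sum of decaying
# sinusoids, with per-mode gains scaled by impact speed and contact area.
import numpy as np

def impact_sound(mode_freqs, mode_gains, mode_decays, speed=1.0, contact=1.0,
                 sr=44100, duration=0.6):
    t = np.arange(int(sr * duration)) / sr
    y = np.zeros_like(t)
    for f, g, d in zip(mode_freqs, mode_gains, mode_decays):
        # faster impacts excite modes more strongly; a larger contact area
        # attenuates high modes (crude scaling, assumed here for illustration)
        gain = g * speed / (1.0 + contact * f / 1000.0)
        y += gain * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return y / np.max(np.abs(y))

# e.g., a "glass-like" preset (invented numbers)
glass = impact_sound(mode_freqs=[1250, 2730, 4460],
                     mode_gains=[1.0, 0.6, 0.35],
                     mode_decays=[9.0, 14.0, 22.0],
                     speed=1.5, contact=0.4)
```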

The Primitive Representation in Speech Perception: Phoneme or Distinctive Features (말지각의 기초표상: 음소 또는 변별자질)

  • Bae, Moon-Jung
    • Phonetics and Speech Sciences / v.5 no.4 / pp.157-169 / 2013
  • Using a target detection task, this study compared the processing automaticity of phonemes and features in spoken syllable stimuli to determine the primitive representation in speech perception: phoneme or distinctive feature. For this purpose, we adapted for auditory stimuli the visual search task (Treisman et al., 1992) developed to investigate the processing of visual features (e.g., color, shape, or their conjunction). In our task, distinctive features (e.g., aspiration or coronal) corresponded to visual primitive features (e.g., color and shape), and phonemes (e.g., /tʰ/) corresponded to visual conjunctive features (e.g., colored shapes). Automaticity was measured by the set size effect, i.e., the increase in reaction time as the number of distractors increased. Three experiments were conducted, comparing the laryngeal features (experiment 1), the manner features (experiment 2), and the place features (experiment 3) with phonemes. The results showed that distinctive features are consistently processed faster and more automatically than phonemes. Additionally, there were differences in processing automaticity among the classes of distinctive features: the laryngeal features were the most automatic, the manner features moderately automatic, and the place features the least automatic. These results are consistent with previous studies (Bae et al., 2002; Bae, 2010) showing a perceptual hierarchy of distinctive features.
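The set size effect used as the automaticity measure here is simply the slope of reaction time over the number of distractors; a flat slope indicates automatic, "pop-out" processing. The sketch below uses invented reaction times to show how such a slope is computed.

```python
# Minimal sketch (invented reaction times): set size effect as the RT slope
# over distractor count, contrasting a feature target with a phoneme target.
import numpy as np
from scipy.stats import linregress

set_sizes = np.array([2, 4, 8, 16])
rt_feature_ms = np.array([512, 515, 518, 521])   # e.g., a distinctive-feature target
rt_phoneme_ms = np.array([540, 575, 640, 760])   # e.g., a phoneme (conjunction) target

for label, rts in [("feature", rt_feature_ms), ("phoneme", rt_phoneme_ms)]:
    slope = linregress(set_sizes, rts).slope
    print(f"{label} target: {slope:.1f} ms per added distractor")
```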

Isolated-Word Speech Recognition in Telephone Environment Using Perceptual Auditory Characteristic (인지적 청각 특성을 이용한 고립 단어 전화 음성 인식)

  • Choi, Hyung-Ki; Park, Ki-Young; Kim, Chong-Kyo
    • Journal of the Institute of Electronics Engineers of Korea TE / v.39 no.2 / pp.60-65 / 2002
  • In this paper, we propose the GFCC (gammatone filter frequency cepstrum coefficient) parameter, which is based on auditory characteristics, to achieve a better speech recognition rate, and we perform isolated-word speech recognition experiments on speech acquired over the telephone network. To compare the GFCC parameter with other parameters, recognition experiments were also carried out using MFCC and LPCC parameters. Also, for each parameter, CMS (cepstral mean subtraction) was either applied or not in order to compensate for channel distortion in the telephone network. The experimental results showed that the recognition rate with the GFCC parameter was better than with the other parameters.
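One plausible reading of the GFCC + CMS front end described here is a gammatone filterbank, log band energies per frame, a DCT to cepstral coefficients, and subtraction of the per-coefficient mean over the utterance. The sketch below follows that recipe with assumed parameters; it is not necessarily the authors' exact implementation.

```python
# Minimal sketch (assumed GFCC recipe): gammatone filterbank -> log band
# energies -> DCT -> cepstral mean subtraction (CMS).
import numpy as np
from scipy.signal import gammatone, lfilter
from scipy.fft import dct

def gfcc(y, sr=8000, n_bands=20, n_ceps=13, frame_len=200, hop=80):
    # log-spaced center frequencies as a rough stand-in for ERB spacing
    cfs = np.geomspace(100, 0.45 * sr, n_bands)
    band_signals = []
    for cf in cfs:
        b, a = gammatone(cf, "iir", fs=sr)
        band_signals.append(lfilter(b, a, y))

    feats = []
    for s in range(0, len(y) - frame_len, hop):
        energies = [np.sum(bs[s:s + frame_len] ** 2) + 1e-10 for bs in band_signals]
        feats.append(dct(np.log(energies), norm="ortho")[:n_ceps])
    feats = np.array(feats)
    return feats - feats.mean(axis=0)        # CMS: subtract per-coefficient mean

# e.g., on a telephone-band (8 kHz) signal `y`:
# features = gfcc(y, sr=8000)
```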

Comparison of prosodic characteristics by question type in left- and right-hemisphere-injured stroke patients (좌반구 손상과 우반구 손상 뇌졸중 환자의 의문문 유형에 따른 운율 특성 비교)

  • Yu, Youngmi; Seong, Cheoljae
    • Phonetics and Speech Sciences / v.13 no.3 / pp.1-13 / 2021
  • This study examined the characteristics of linguistic prosody in terms of cerebral lateralization in three groups: 9 healthy speakers and 14 speakers with a history of stroke (7 with left hemisphere damage (LHD) and 7 with right hemisphere damage (RHD)). Specifically, prosodic characteristics related to speech rate, duration, pitch, and intensity were examined in three types of interrogative sentences (wh-questions, yes-no questions, and alternative questions), together with auditory perceptual evaluation. As a result, the statistically significant key variables revealed deficits in the production of linguistic prosody in the speakers with LHD. These variables were produced more deficiently for wh-questions than for yes-no and alternative questions, a trend particularly noticeable in variables related to pitch and speech rate. This result suggests that when Korean speakers process linguistic prosody, such as that carrying the lexico-semantic and syntactic information of interrogative sentences, the left hemisphere seems to be superior to the right hemisphere.
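The kinds of acoustic measures compared here, duration, pitch level and range, and intensity, can be pulled from a single utterance with the praat-parselmouth package. The sketch below is an assumption about measurement choices, not the study's script, and the file name is hypothetical.

```python
# Minimal sketch (assumed measures, hypothetical file): duration, F0, and
# intensity for one utterance via praat-parselmouth.
import numpy as np
import parselmouth

snd = parselmouth.Sound("wh_question.wav")        # hypothetical recording
pitch = snd.to_pitch()
f0 = pitch.selected_array["frequency"]
f0 = f0[f0 > 0]                                   # drop unvoiced frames
intensity = snd.to_intensity()

measures = {
    "duration_s": snd.get_total_duration(),
    "mean_f0_hz": float(np.mean(f0)),
    "f0_range_hz": float(np.max(f0) - np.min(f0)),
    "mean_intensity_db": float(np.mean(intensity.values)),
}
print(measures)
```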

Effects of F1/F2 Manipulation on the Perception of Korean Vowels /o/ and /u/ (F1/F2의 변화가 한국어 /오/, /우/ 모음의 지각판별에 미치는 영향)

  • Yun, Jihyeon; Seong, Cheoljae
    • Phonetics and Speech Sciences / v.5 no.3 / pp.39-46 / 2013
  • This study examined the perception of two Korean vowels using F1/F2-manipulated synthetic vowels. Previous studies have indicated an overlap between the acoustic spaces of Korean /o/ and /u/ in terms of the first two formants. A continuum of eleven synthetic vowels was used as stimuli. The experiment consisted of three tasks: an /o/ identification task (yes-no), an /u/ identification task (yes-no), and a forced-choice identification task (/o/-/u/). ROC (Receiver Operating Characteristic) analysis and logistic regression were performed to calculate the boundary criterion between the two vowels along the stimulus continuum and to predict the perceptual judgment from F1 and F2. The results indicated that the region between stimulus no.5 (F1 = 342 Hz, F2 = 691 Hz) and no.6 (F1 = 336 Hz, F2 = 700 Hz) was the perceptual boundary between /o/ and /u/, with stimulus no.0 (F1 = 405 Hz, F2 = 666 Hz) and no.10 (F1 = 321 Hz, F2 = 743 Hz) at opposite ends of the continuum. The influence of F2 on the perception of the vowel categories was predominant over that of F1.
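The boundary-reading step can be sketched by fitting a logistic identification function over the 11 steps and converting the 50% point back to F1/F2 values. The responses below are invented, and the intermediate formant values are a linear-in-Hz interpolation between the reported endpoints, which only approximates the actual stimuli; this is not the study's code.

```python
# Minimal sketch (invented responses; linear-in-Hz interpolation assumed):
# 11-step continuum, logistic identification fit, boundary read off in F1/F2.
import numpy as np
from sklearn.linear_model import LogisticRegression

steps = np.arange(11)                       # stimulus no.0 ... no.10
f1 = np.linspace(405, 321, 11)              # Hz, endpoints from the abstract
f2 = np.linspace(666, 743, 11)

# hypothetical pooled forced-choice data: 1 = /u/ response, 20 trials per step
rng = np.random.default_rng(1)
x = np.repeat(steps, 20)
resp = rng.binomial(1, 1 / (1 + np.exp(-(x - 5.4))))

model = LogisticRegression().fit(x.reshape(-1, 1), resp)
boundary = -model.intercept_[0] / model.coef_[0, 0]      # step at P(/u/) = 0.5
print(f"boundary near step {boundary:.1f}: "
      f"F1 ~ {np.interp(boundary, steps, f1):.0f} Hz, "
      f"F2 ~ {np.interp(boundary, steps, f2):.0f} Hz")
```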