• Title/Summary/Keyword: Auditory Signal

Adaptive Noise Suppression system based on Human Auditory Model (인간의 청각모델에 기초한 잡음환경에 적응된 잡음억압 시스템)

  • Choi, Jae-Seung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2008.05a
    • /
    • pp.421-424
    • /
    • 2008
  • This paper proposes an adaptive noise suppression system based on a human auditory model to enhance speech signals degraded by various background noises. The proposed system detects voiced and unvoiced sections in each frame, applies an adaptive auditory process, and then reduces the noisy speech signal using a neural network that takes both amplitude and phase components as input (a minimal sketch of this data flow follows this entry). Based on signal-to-noise ratio measurements, experiments confirm that the proposed system is effective for speech signals degraded by various noises.

  • PDF
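
The abstract describes only the data flow (frame-wise amplitude and phase features fed to a neural network), not the network itself. The following is a minimal, hypothetical Python sketch of that flow; the frame size, network shape, and untrained placeholder weights are assumptions, not the paper's design.

```python
# Hypothetical sketch: frame-wise spectral denoising with amplitude and
# phase features fed to a small feed-forward network (untrained weights,
# purely illustrative of the data flow described in the abstract).
import numpy as np

def stft_frames(x, frame_len=256, hop=128):
    """Split a 1-D signal into windowed frames and return their FFTs."""
    win = np.hanning(frame_len)
    n = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop:i * hop + frame_len] * win for i in range(n)])
    return np.fft.rfft(frames, axis=1)

def denoise_frame(spectrum, w1, w2):
    """Map noisy amplitude+phase features to an enhanced spectrum."""
    features = np.concatenate([np.abs(spectrum), np.angle(spectrum)])
    hidden = np.tanh(w1 @ features)                 # small hidden layer
    gain = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # per-bin suppression gain in (0, 1)
    return gain * np.abs(spectrum) * np.exp(1j * np.angle(spectrum))

rng = np.random.default_rng(0)
fs = 8000
t = np.arange(fs) / fs
noisy = np.sin(2 * np.pi * 440 * t) + 0.3 * rng.standard_normal(fs)

spec = stft_frames(noisy)
n_bins = spec.shape[1]
w1 = 0.01 * rng.standard_normal((32, 2 * n_bins))   # placeholder weights; a real
w2 = 0.01 * rng.standard_normal((n_bins, 32))       # system would train these
enhanced = np.stack([denoise_frame(f, w1, w2) for f in spec])
print(enhanced.shape)
```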

Aurally Relevant Analysis by Synthesis - VIPER a New Approach to Sound Design -

  • Daniel, Peter;Pischedda, Patrice
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2003.05a
    • /
    • pp.1009-1009
    • /
    • 2003
  • VIPER, a new tool for the VIsual PERception of sound quality and for sound design, will be presented. A requirement for the visualization of sound quality is a signal analysis that models the information processing of the ear. The first step of the signal processing implemented in VIPER calculates an auditory spectrogram using a filter bank adapted to the time and frequency resolution of the human ear. The second step removes redundant information by extracting time and frequency contours from the auditory spectrogram, in analogy to contours in the visual system. In a third step, the contours and/or the auditory spectrogram can be resynthesized, confirming that only aurally relevant information was extracted. The visualization of the contours in VIPER makes it possible to grasp the important components of a signal intuitively. Contributions of parts of a signal to the overall quality can easily be auralized by editing and resynthesizing the contours or the underlying auditory spectrogram. Resynthesis of the time contours alone allows, for example, impulsive components to be auralized separately from the tonal components. Further processing of the contours determines tonal parts in the form of tracks. Audible differences between two versions of a sound can be inspected visually in VIPER with the help of auditory distance spectrograms. Applications are shown for the sound design of several interior noises of cars. (A rough sketch of the first two analysis stages follows this entry.)

  • PDF
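
To make the first two stages concrete, here is a rough Python sketch of an auditory-style spectrogram from a bank of bandpass filters with ear-like bandwidths, followed by frequency-contour extraction as local spectral peaks. The filter design, band spacing, and peak rule are illustrative assumptions, not VIPER's actual processing.

```python
# Illustrative two-stage analysis: auditory-style spectrogram, then contours.
import numpy as np
from scipy.signal import butter, sosfilt

def erb_centres(n_bands, f_lo=100.0, f_hi=6000.0):
    """Centre frequencies spaced on a roughly logarithmic (ERB-like) scale."""
    return np.geomspace(f_lo, f_hi, n_bands)

def auditory_spectrogram(x, fs, n_bands=24, frame=512, hop=256):
    bands = []
    for fc in erb_centres(n_bands):
        bw = 24.7 + 0.108 * fc                       # ERB bandwidth approximation
        sos = butter(2, [max(fc - bw, 20), min(fc + bw, fs / 2 - 1)],
                     btype="bandpass", fs=fs, output="sos")
        y = sosfilt(sos, x)
        # Frame-wise RMS energy as a crude loudness proxy.
        n = 1 + (len(y) - frame) // hop
        bands.append([np.sqrt(np.mean(y[i*hop:i*hop+frame] ** 2)) for i in range(n)])
    return np.array(bands)                           # shape: (bands, frames)

def frequency_contours(spec):
    """Mark band/frame cells that are local maxima across frequency."""
    peaks = (spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:])
    return np.pad(peaks, ((1, 1), (0, 0)))

fs = 16000
t = np.arange(fs) / fs
signal = np.sin(2*np.pi*300*t) + 0.5*np.sin(2*np.pi*1200*t)
spec = auditory_spectrogram(signal, fs)
print(spec.shape, frequency_contours(spec).sum(), "contour points")
```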

Korean Vowel Recognition using Peripheral Auditory Model (말초 청각 계통 모델을 이용한 한국어 모음 인식)

  • Yun, Tae-Seong;Baek, Seung-Hwa;Park, Sang-Hui
    • Journal of Biomedical Engineering Research
    • /
    • v.9 no.1
    • /
    • pp.1-10
    • /
    • 1988
  • In this study, recognition experiments for Korean vowels are performed using a peripheral auditory model. In addition, for objective comparison, recognition experiments are performed by extracting LPC cepstrum coefficients from the same speech data (a sketch of this baseline follows this entry). The results are as follows. 1) The time and frequency responses of the auditory model show that important features of the input signal are contained in the responses of the inner ear and auditory nerve. 2) The recognition results for Korean vowels show that the recognition rate obtained with the auditory model output is higher than that obtained with LPC cepstrum coefficients. 3) The adaptation phenomenon of the auditory nerve provides useful characteristics for discriminating vowel signals.

  • PDF
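
For reference, the LPC cepstrum baseline mentioned above can be computed along the following lines; the predictor order, frame length, and windowing are assumptions, since the abstract does not specify them.

```python
# Illustrative LPC cepstrum feature extraction (settings are assumptions).
import numpy as np

def lpc_coefficients(frame, order=12):
    """Solve the autocorrelation normal equations for predictor coefficients
    a_k in x[n] ~ sum_k a_k x[n-k]."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R + 1e-6 * np.eye(order), r[1:order + 1])

def lpc_cepstrum(a, n_ceps=12):
    """Standard LPC-to-cepstrum recursion c_m = a_m + sum_{k<m} (k/m) c_k a_{m-k}."""
    c = np.zeros(n_ceps)
    for m in range(1, n_ceps + 1):
        acc = a[m - 1] if m <= len(a) else 0.0
        for k in range(1, m):
            if m - k <= len(a):
                acc += (k / m) * c[k - 1] * a[m - k - 1]
        c[m - 1] = acc
    return c

fs = 16000
t = np.arange(512) / fs
vowel_like = np.sin(2*np.pi*300*t) + 0.4*np.sin(2*np.pi*900*t)   # toy "vowel" frame
features = lpc_cepstrum(lpc_coefficients(vowel_like * np.hamming(512)))
print(np.round(features, 3))
```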

Investigating the Effects of Hearing Loss and Hearing Aid Digital Delay on Sound-Induced Flash Illusion

  • Moradi, Vahid;Kheirkhah, Kiana;Farahani, Saeid;Kavianpour, Iman
    • Journal of Audiology & Otology
    • /
    • v.24 no.4
    • /
    • pp.174-179
    • /
    • 2020
  • Background and Objectives: The integration of auditory-visual speech information improves speech perception; however, if the input to the auditory system is disrupted by hearing loss, auditory and visual inputs cannot be fully integrated. Additionally, the temporal coincidence of auditory and visual input is a significantly important factor in integrating the input of these two senses. The acoustic pathway is time-delayed when the signal passes through digital signal processing in a hearing aid. Therefore, this study aimed to investigate the effects of hearing loss and the hearing aid digital delay circuit on the sound-induced flash illusion. Subjects and Methods: A total of 13 adults with normal hearing, 13 with mild to moderate hearing loss, and 13 with moderate to severe hearing loss were enrolled in this study. Subsequently, the sound-induced flash illusion test was conducted, and the results were analyzed. Results: The results showed that hearing aid digital delay and hearing loss had no detrimental effect on the sound-induced flash illusion. Conclusions: The transmission velocity and neural transduction rate of auditory inputs decrease in patients with hearing loss; hence, auditory and visual sensory inputs cannot be integrated completely. The transmission rate of the auditory input was, however, approximately normal when a hearing aid was fitted. Thus, it can be concluded that the processing delay in the hearing aid circuit is insufficient to disrupt the integration of auditory and visual information.

Multi-Channel Analog Front-End for Auditory Nerve Signal Detection (청각신경신호 검출 장치용 다중채널 아나로그 프론트엔드)

  • Cheon, Ji-Min;Lim, Seung-Hyun;Lee, Dong-Myung;Chang, Eun-Soo;Han, Gun-Hee
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.47 no.1
    • /
    • pp.60-68
    • /
    • 2010
  • In cases of sensorineural hearing loss, auditory perception can be activated by electrical stimulation of the nervous system via electrodes implanted into the cochlea or auditory nerve. Since the tonotopic map of the human auditory nerve has not been definitively identified, recording auditory nerve signals with microelectrodes is desirable for determining the tonotopic map. This paper proposes a multi-channel analog front-end for auditory nerve signal detection. A channel of the proposed analog front-end consists of an AC coupling circuit, a low-power 4th-order Gm-C LPF, and a single-slope ADC. The AC coupling circuit passes only the AC signal while blocking the DC level. Considering the bandwidth of the auditory signal, the Gm-C LPF is designed with OTAs adopting a floating-gate technique. For the channel-parallel ADC structure, a single-slope ADC is used because it occupies a small silicon area. Experimental results show that the AC coupling circuit and LPF have a bandwidth of 100 Hz - 6.95 kHz and the ADC has an effective resolution of 7.7 bits. The power consumption per channel is 12 µW, the power supply is 3.0 V, and the core area is 2.6 mm × 3.7 mm. The proposed analog front-end was fabricated in a 1-poly 4-metal 0.35-µm CMOS process. (A behavioral model of the signal chain follows this entry.)
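
As a reading aid, here is a behavioral Python model of one channel of the reported chain: an AC-coupling high-pass near 100 Hz, a 4th-order low-pass near 6.95 kHz, and a uniform quantizer standing in for the single-slope ADC. The sampling rate and bit depth are assumptions; the Gm-C and floating-gate circuit design itself is not modeled.

```python
# Behavioral sketch of one recording channel: high-pass (AC coupling),
# 4th-order low-pass, and coarse quantization in place of the single-slope ADC.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 50_000                                   # assumed sampling rate for the model

def channel_model(x, n_bits=8, full_scale=1.0):
    hp = butter(1, 100, btype="highpass", fs=fs, output="sos")   # AC coupling
    lp = butter(4, 6950, btype="lowpass", fs=fs, output="sos")   # 4th-order LPF
    y = sosfilt(lp, sosfilt(hp, x))
    # Single-slope ADC behavior: the output code counts how long a ramp takes
    # to cross the sample, i.e. a uniform quantizer over the full scale.
    codes = np.clip(np.round((y + full_scale) / (2 * full_scale) * (2**n_bits - 1)),
                    0, 2**n_bits - 1)
    return codes.astype(int)

t = np.arange(fs) / fs
neural_like = 0.2*np.sin(2*np.pi*1000*t) + 0.02*np.random.default_rng(1).standard_normal(fs)
print(channel_model(neural_like)[:10])
```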

Audio-visual Spatial Coherence Judgments in the Peripheral Visual Fields

  • Lee, Chai-Bong;Kang, Dae-Gee
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.16 no.2
    • /
    • pp.35-39
    • /
    • 2015
  • Auditory and visual stimuli presented in the peripheral visual field were perceived as spatially coincident when the auditory stimulus was presented five to seven degrees outwards from the direction of the visual stimulus. Furthermore, judgments of the perceived distance between auditory and visual stimuli presented in the periphery did not increase when the auditory stimulus was presented on the peripheral side of the visual stimulus. As to the origin of this phenomenon, there seem to be two possibilities. One is that the participants could not perceptually distinguish the distances on the peripheral side because of limited perceptual accuracy. The other is that the participants could distinguish the distances but could not evaluate them because the experimental setup provided too few auditory stimulus positions. To confirm which of these two explanations is valid, we conducted an experiment similar to that of our previous study using a sufficient number of loudspeakers for the presentation of auditory stimuli. The results revealed that judgments of perceived distance did increase on the peripheral side. This indicates that we can discriminate between the positions of auditory and visual stimuli on the peripheral side.

Noise Suppression Algorithm using Neural Network based Amplitude and Phase Spectrum (진폭 및 위상스펙트럼이 도입된 신경회로망에 의한 잡음억제 알고리즘)

  • Choi, Jae-Seung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.13 no.4
    • /
    • pp.652-657
    • /
    • 2009
  • This paper proposes an adaptive noise suppression system based on a human auditory model to enhance speech signals degraded by various background noises. The proposed system detects voiced, unvoiced, and silence sections in each frame (a conventional classifier of this kind is sketched after this entry), applies an adaptive auditory process, and then reduces the noisy speech signal using a neural network that takes both amplitude and phase components as input. Based on signal-to-noise ratio measurements, experiments confirm that the proposed system is effective for speech signals degraded by various noises.
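
The abstract does not say how the voiced / unvoiced / silence decision is made; a conventional short-time energy and zero-crossing-rate classifier is sketched below as one plausible illustration. The frame size and thresholds are assumptions, not the paper's method.

```python
# Conventional energy / zero-crossing-rate frame classifier (illustrative).
import numpy as np

def classify_frames(x, frame=256, hop=128, e_silence=1e-4, zcr_voiced=0.15):
    labels = []
    for i in range(0, len(x) - frame + 1, hop):
        seg = x[i:i + frame]
        energy = np.mean(seg ** 2)
        zcr = np.mean(np.abs(np.diff(np.sign(seg)))) / 2.0
        if energy < e_silence:
            labels.append("silence")
        elif zcr < zcr_voiced:
            labels.append("voiced")      # low ZCR, high energy: periodic speech
        else:
            labels.append("unvoiced")    # noise-like, high ZCR
    return labels

fs = 8000
t = np.arange(fs) / fs
rng = np.random.default_rng(2)
speech_like = np.concatenate([np.zeros(fs // 4),                 # silence
                              np.sin(2*np.pi*150*t[:fs // 4]),   # voiced-like
                              0.1*rng.standard_normal(fs // 4)]) # unvoiced-like
print(classify_frames(speech_like)[:5])
```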

Changes in Signal Detection Ability with Aging - Focused on Visual and Auditory Senses - (연령증가에 따른 신호탐지능력의 변화 -시.청각을 중심으로-)

  • 이용태;신승헌
    • Proceedings of the ESK Conference
    • /
    • 1996.10a
    • /
    • pp.206-215
    • /
    • 1996
  • Recently, the proportion of the aged in Korea has grown, as in other advanced countries, and this is treated as one of the major social problems. Therefore, in this study we investigated visual and auditory signal detection performance to evaluate the vocational aptitude of middle- and old-aged workers. It was shown that signal detection performance decreased as workers became older, and that there were large individual differences in signal detection performance. Since signal detection performance in the visual task decreased more rapidly and to a greater extent than in the auditory task, middle- and old-aged workers may not be able to carry out visual inspection and precision tasks properly; performance in the visual task was also related to that in the auditory task. It can be expected that the parameters used in this study will be useful for evaluating a worker's aptitude.

  • PDF

Developing the Design Guideline of Auditory User Interface for Digital Appliances (가전제품의 청각 사용자 인터페이스(AUI) 디자인을 위한 가이드라인 개발 사례)

  • Lee, Ju-Hwan;Jeon, Myoung-Hoon;Han, Kwang-Hee
    • Science of Emotion and Sensibility
    • /
    • v.10 no.3
    • /
    • pp.307-320
    • /
    • 2007
  • In this study, we attempted to provide a distinctive cognitive and emotional 'Auditory User Interface (AUI) Design Guideline' for each group of home appliances and their functions. It is an effort to apply a new design method in practice, overcoming the limits of GUI-centered appliance design and reflecting users' multimodal properties, by presenting a guideline for generating auditory signals that are intuitively associable with the operational functions. This study is needed because annoyance frequently arises from arbitrary mapping rather than from systematic application of AUI. The study tried to provide a useful AUI guideline for home appliances by extracting the relations between properties of the auditory signal and the cognitive and emotional properties of a given device or function that they induce, and by presenting empirical data on the basic mechanism of such relations.

  • PDF

A Study on the Extraction of the Excitation Pattern for Auditory Prosthesis (청각 보철을 위한 자극패턴 추출에 관한 연구)

  • Park, Sang-Hui;Yoon, Tae-Sung;Lee, Jae-Hyuk;Beack, Seung-Hwa
    • Proceedings of the KIEE Conference
    • /
    • 1987.07b
    • /
    • pp.1322-1325
    • /
    • 1987
  • In this study, the excitation pattern that can be sensed by a person with hearing loss due to inner-ear damage is extracted, and the auditory speech signal processing procedure is simulated on a computer. The excitation pattern is extracted by a neural tuning model satisfying the physiological characteristics of the inner ear and by information extracted from the speech signal. The firing pattern is then obtained by feeding this excitation pattern into the auditory neural model. With this firing pattern, the possibility that the patient can perceive the speech signal is studied by computer simulation. (A generic peripheral-model sketch follows this entry.)

  • PDF
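
The abstract names a neural tuning model and an auditory neural model without detail; the following Python sketch shows a generic peripheral pipeline of that shape (bandpass tuning, rectification and compression as a crude hair-cell stage, and a Poisson-style firing pattern). Every stage here is an assumption for illustration, not the paper's model.

```python
# Generic peripheral-model sketch: tuning, hair-cell stage, firing pattern.
import numpy as np
from scipy.signal import butter, sosfilt

def excitation_pattern(x, fs, centres=(250, 500, 1000, 2000, 4000)):
    rows = []
    for fc in centres:
        sos = butter(2, [fc / 1.3, fc * 1.3], btype="bandpass", fs=fs, output="sos")
        y = np.maximum(sosfilt(sos, x), 0.0)          # half-wave rectification
        env = sosfilt(butter(1, 60, btype="lowpass", fs=fs, output="sos"), y)
        rows.append(np.cbrt(np.maximum(env, 0.0)))    # compressive nonlinearity
    return np.array(rows)                             # (channels, samples)

def firing_pattern(excitation, max_rate=200.0, fs=16000, seed=0):
    # Poisson-style draw: spike probability per sample proportional to excitation.
    p = np.clip(excitation / excitation.max(), 0, 1) * max_rate / fs
    return np.random.default_rng(seed).random(excitation.shape) < p

fs = 16000
t = np.arange(fs) / fs
speech_like = np.sin(2*np.pi*500*t) * (1 + 0.5*np.sin(2*np.pi*4*t))
exc = excitation_pattern(speech_like, fs)
print(exc.shape, firing_pattern(exc, fs=fs).sum(), "spikes")
```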