• Title/Summary/Keyword: Auditory Signal

Improvement of Sound Quality of Voice Transmission by Finger

  • Park, Hyungwoo
    • International Journal of Advanced Culture Technology / v.7 no.2 / pp.218-226 / 2019
  • In modern society, people live in environments filled with artificial or natural noise. Artificial noise is especially prominent in cities, where many people live and work together, so the sounds they generate become noise and affect one another. Such sounds arise from many human activities and sources, including construction sites, aircraft, production machinery, and road traffic. Sound is an essential element of human life and is perceived and judged by the human auditory organs. Noise, by subjective evaluation, is sound one does not want to hear: sound loud enough to cause hearing damage, or sound that causes physical and mental harm. In this study, we introduce a method of stimulating human hearing by finger vibration, explain the advantages of the proposed method in various kinds of noise environments, and describe how the sound quality can be improved to increase its effectiveness. We propose a method to prevent hearing loss and to transmit sound information at an adequate signal-to-noise ratio when using portable IT equipment in various noise environments.

Measurement of Rhythmic Similarity for Auditory Memory Game (청각 기억 게임을 위한 리듬 유사도 측정 기술)

  • Kim, Ju-Wan;Lee, Se-Won;Park, Ho-Chong
    • The Journal of the Acoustical Society of Korea / v.30 no.3 / pp.136-141 / 2011
  • In this paper, a method for measuring the rhythmic similarity between two sound signals for an auditory memory game is proposed. The proposed method analyzes the energy fluctuation, the temporal duration of the energy peaks, and the timbre of the two signals, and detects the beat positions of each signal. It then determines a rhythm vector after compensating for the difference in tempo and in the number of beats between the two signals. Finally, the rhythmic similarity is defined as a function of the dissimilarity between the two rhythm vectors and the difference in the number of beats. The rhythmic similarity measured by the proposed method is compared with that obtained from a subjective listening test, and a correlation of 0.86 between the two results is achieved.
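
As a rough illustration of the rhythm-vector idea summarized above, the sketch below is an assumption-laden outline rather than the authors' implementation: the energy-peak beat detector, the fixed rhythm-vector length, and the weight `alpha` on the beat-count difference are all illustrative choices.

```python
import numpy as np

def beat_positions(signal, sr, frame=1024, hop=512, k=1.5):
    """Crude beat detection: frame energies that peak above k times the local mean."""
    starts = range(0, len(signal) - frame, hop)
    energy = np.array([np.sum(signal[i:i + frame] ** 2) for i in starts])
    local_mean = np.convolve(energy, np.ones(9) / 9, mode="same")
    peaks = [i for i in range(1, len(energy) - 1)
             if energy[i] > k * local_mean[i]
             and energy[i] >= energy[i - 1] and energy[i] > energy[i + 1]]
    return np.array(peaks) * hop / sr             # beat times in seconds

def rhythm_vector(beats, length=16):
    """Inter-beat intervals, tempo-normalized and resampled to a fixed length."""
    ibi = np.diff(beats)
    ibi = ibi / ibi.mean()                        # compensate the tempo difference
    grid = np.linspace(0.0, 1.0, length)
    return np.interp(grid, np.linspace(0.0, 1.0, len(ibi)), ibi)

def rhythmic_similarity(beats_a, beats_b, alpha=0.5):
    """Similarity falls with rhythm-vector distance and beat-count mismatch."""
    d = np.linalg.norm(rhythm_vector(beats_a) - rhythm_vector(beats_b))
    n = abs(len(beats_a) - len(beats_b))
    return 1.0 / (1.0 + d + alpha * n)
```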

Characteristics of Auditory Stereocilia in the Apical Turn of the Echolocating Bats by Scanning Electron Microscopy

  • Kim, Jinyong;Jung, Yongwook
    • Applied Microscopy / v.44 no.1 / pp.8-14 / 2014
  • The auditory system of the Korean greater horseshoe bat (Rhinolophus ferrumequinum korai, RFK) is adapted to its own echolocation signal, which consists of a constant frequency (CF) element and a frequency modulated (FM) element. In contrast, the Japanese long-fingered bat (Miniopterus schreibersii fuliginosus, MSF) emits FM signals. In the present study, the characteristics of stereocilia in RFK (a CF/FM bat) and MSF (an FM bat) were studied in the apical turn of the cochlea, where the lower frequencies are transduced. Stereocilia lengths and numbers were quantitatively measured in RFK by scanning electron microscopy and compared with those of MSF. Each inner hair cell (IHC) of RFK possessed three rows of stereocilia, whereas those of MSF possessed five rows. The gradients in stereocilia length of the IHCs of RFK were less pronounced, and the numbers of stereocilia fewer, than those of MSF. Each outer hair cell (OHC) possessed three rows of stereocilia in both species; the OHC stereocilia of RFK were distinguished from those of MSF by their shorter length and greater number. These features suggest that the apical cochlea of RFK is adapted for processing higher-frequency echolocation calls than that of MSF.

A study imitating human auditory system for tracking the position of sound source (인간의 청각 시스템을 응용한 음원위치 추정에 관한 연구)

  • Bae, Jeen-Man;Cho, Sun-Ho;Park, Chong-Kuk
    • Proceedings of the KIEE Conference / 2003.11c / pp.878-881 / 2003
  • To acquire a clear voice signal from a designated speaker with a surveillance camera, a video-conferencing system, or a hands-free microphone, interfering noise must be eliminated, which first requires the speaker's position to be estimated automatically. The basic algorithm for estimating the sound source position measures the TDOA (Time Difference Of Arrival) of the same signal arriving at two microphones. This work uses ADF (Adaptive Delay Filter) [4] and CPS (Cross Power Spectrum) [5], two of the most important TDOA analysis methods, and on this basis proposes real-time sound source localization together with an improved model, NI-ADF, which makes it possible to estimate the sound source position in both directions. NI-ADF exploits the fact that the human auditory sense accepts a sound through activated nerves when it exceeds a specified level at a specified frequency, and, when the microphones are mounted on a device, it uses the inter-microphone level difference caused by the diffraction of sound around that device. With the existing adaptive-filter algorithm, estimating the source in both directions more than doubles the computation required for one direction; the proposed algorithm compensates for this weakness and allows real-time estimation in both directions.
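
For background only, the sketch below shows the generic cross-power-spectrum route to a TDOA estimate, not the paper's ADF or NI-ADF; the PHAT-style weighting, the equal-length frames, the 0.2 m microphone spacing, and the function names are assumptions.

```python
import numpy as np

def tdoa_cross_power_spectrum(x1, x2, sr):
    """Delay (s) of x2 relative to x1, estimated from the cross power spectrum."""
    assert len(x1) == len(x2)
    n = 2 * len(x1)                               # zero-pad to avoid wrap-around
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    cps = X2 * np.conj(X1)                        # cross power spectrum
    cps /= np.abs(cps) + 1e-12                    # PHAT-style weighting: keep phase only
    cc = np.fft.irfft(cps, n)                     # generalized cross-correlation
    lags = np.concatenate((np.arange(0, len(x1)), np.arange(-len(x1), 0)))
    return lags[np.argmax(cc)] / sr

def arrival_angle(delay, mic_distance=0.2, c=343.0):
    """Broadside arrival angle; the sign of the delay resolves left vs. right."""
    return np.degrees(np.arcsin(np.clip(c * delay / mic_distance, -1.0, 1.0)))
```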


Music classification system through emotion recognition based on regression model of music signal and electroencephalogram features (음악신호와 뇌파 특징의 회귀 모델 기반 감정 인식을 통한 음악 분류 시스템)

  • Lee, Ju-Hwan;Kim, Jin-Young;Jeong, Dong-Ki;Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea / v.41 no.2 / pp.115-121 / 2022
  • In this paper, we propose a music classification system based on user emotion, using electroencephalogram (EEG) features that appear when listening to music. In the proposed system, the relationship between the emotional EEG features extracted from EEG signals and the auditory features extracted from music signals is learned by a deep regression neural network. Based on this regression model, the proposed system automatically generates the EEG features mapped to the auditory characteristics of the input music and classifies the music automatically by applying these features to an attention-based deep neural network. The experimental results confirm the music classification accuracy of the proposed automatic music classification framework.
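
The following PyTorch sketch is a guess at the overall structure described in the abstract, not the authors' network: a small regression model maps frame-level music features to pseudo-EEG features, and an attention-pooling classifier assigns an emotion class; all layer sizes, feature dimensions, and names are assumptions.

```python
import torch
import torch.nn as nn

class MusicToEEGRegressor(nn.Module):
    """Regression network: auditory features of music -> predicted EEG emotion features."""
    def __init__(self, music_dim=64, eeg_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(music_dim, 128), nn.ReLU(),
            nn.Linear(128, eeg_dim))

    def forward(self, music_feat):                # (batch, frames, music_dim)
        return self.net(music_feat)               # (batch, frames, eeg_dim)

class AttentionClassifier(nn.Module):
    """Attention-weighted pooling over frames, then an emotion-class output layer."""
    def __init__(self, eeg_dim=32, n_classes=4):
        super().__init__()
        self.attn = nn.Linear(eeg_dim, 1)         # frame-wise attention scores
        self.fc = nn.Linear(eeg_dim, n_classes)

    def forward(self, eeg_feat):                  # (batch, frames, eeg_dim)
        w = torch.softmax(self.attn(eeg_feat), dim=1)
        pooled = (w * eeg_feat).sum(dim=1)        # attention-weighted pooling
        return self.fc(pooled)

# Usage: music features -> predicted EEG features -> emotion-class logits
music = torch.randn(8, 100, 64)
logits = AttentionClassifier()(MusicToEEGRegressor()(music))
```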

The Masking Effect of Olfactory Stimulus on Horn Sound Stimulus While Driving in a Graphic Driving Simulator (화상 자동차 시뮬레이터에서 운전 중에 경적음 자극에 대한 후각자극의 마스킹 효과)

  • Min, Cheol-Kee;Ji, Doo-Hwan;Ko, Bok-Soo;Kim, Jin-Soo;Lee, Dong-Hyung;Ryu, Tae-Beum;Shin, Moon-Soo;Chung, Soon-Cheol;Min, Byung-Chan;Kang, Jin-Kyu
    • Journal of Korean Society of Industrial and Systems Engineering / v.35 no.4 / pp.227-234 / 2012
  • In this study, the masking effect of an olfactory stimulus on the arousal state caused by auditory stimuli while driving was observed through autonomic nervous system responses in a graphic driving simulator. The experiment was conducted with 11 males in their twenties. An ambulance siren was presented as the auditory stimulus for 30 s while they drove in a highway scenario, under both a peppermint condition and a control condition, and the LF/HF ratio of Heart Rate Variability (HRV), an index of sympathetic nerve activity, and the Galvanic Skin Response (GSR) were examined. The experiment proceeded in three stages: an auditory stimulus (test 1); driving followed by an auditory stimulus (test 2); and a fragrance stimulus, driving, and an auditory stimulus (test 3); the physiological signals of GSR and HRV were measured throughout all stages. Comparing the values before and after the auditory stimulus, GSR increased with significant differences in test 1 (p < 0.01), test 2 (p < 0.05), and test 3 (p < 0.01), during driving in test 2 (p < 0.01) and test 3 (p < 0.01), and for the olfactory stimulus in test 3 (p < 0.05). This indicates that when the auditory stimulus was presented, the subjects were in an aroused state as the sympathetic nervous system became activated. Comparing the auditory stimulus while driving before and after the olfactory stimulus was presented, there was no significant difference in GSR. The LF/HF ratio of HRV increased with a significant difference only in test 2 (p < 0.05) for the auditory stimulus and in test 2 (p < 0.05) during driving, and showed no significant difference for the olfactory stimulus. Comparing the auditory stimulus while driving before and after the olfactory stimulus was presented, the LF/HF ratio of HRV decreased significantly (p < 0.05), meaning that the activation of the sympathetic nervous system decreased and the parasympathetic nervous system became activated. These results show that the arousal caused by the auditory stimulus while driving was reduced by the olfactory stimulus; in conclusion, an olfactory stimulus can have a masking effect on an auditory stimulus while driving.

Partial Principal Component Elimination Method and Extended Temporal Decorrelation Method for the Exclusion of Spontaneous Neuromagnetic Fields in the Multichannel SQUID Magnetoencephalography

  • Kim, Kiwoon;Lee, Yong-Ho;Hyukchan Kwon;Kim, Jin-Mok;Kang, Chan-Seok;Kim, In-Seon;Park, Yong-Ki
    • Progress in Superconductivity / v.4 no.2 / pp.114-120 / 2003
  • We employed a method that eliminates a temporally partial principal component (PC) of multichannel-recorded neuromagnetic fields in order to exclude spatially correlated noise from event-evoked signals. The noise in magnetoencephalography (MEG) is considered to consist mainly of spontaneous neuromagnetic fields, which are spatially correlated. In conventional MEG experiments, the amplitude of the spontaneous neuromagnetic field is much larger than that of the evoked signal, and the synchronized character of the correlated rhythmic noise makes it possible to extract the correlated noise from the evoked signal by general PC analysis. However, the whole-time PC of the fields still contains a small projected component of the evoked signal, so eliminating that PC distorts the evoked signal. The distortion is not negligible when the amplitude of the evoked signal is relatively large or when the evoked signals have a spatially asymmetrical distribution that does not cancel out the corresponding elements of the covariance matrix. In the prestimulus period only the spontaneous fields are present, so a pure noise PC that does not include the evoked signal can be found there. In addition, we propose a method, called the extended temporal decorrelation method (ETDM), to suppress the distortion of the noise PC by remnant evoked-signal components. In this study, we applied the partial principal component elimination (PPCE) method and ETDM to simulated signals and to auditory evoked signals obtained with our homemade 37-channel magnetometer-based SQUID system. We demonstrate that PPCE and ETDM reduce the number of epochs required in averaging to about half of that required in conventional averaging.
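
A minimal numpy sketch of the partial-PC-elimination idea, assuming the spatial noise pattern is estimated from the prestimulus interval only and then projected out of every epoch; the array layout, the single-component default, and the function name are illustrative, and ETDM is not shown.

```python
import numpy as np

def ppce(epochs, prestim_samples, n_remove=1):
    """epochs: (n_epochs, n_channels, n_samples) MEG data; returns epochs with the
    dominant spontaneous-field component(s) projected out."""
    # Spatial covariance estimated from the prestimulus (noise-only) interval
    pre = np.concatenate(list(epochs[:, :, :prestim_samples]), axis=1)
    cov = np.cov(pre)                                   # (channels, channels)
    # Principal components of the spontaneous fields (eigh sorts ascending)
    _, eigvecs = np.linalg.eigh(cov)
    noise_pc = eigvecs[:, -n_remove:]                   # strongest noise pattern(s)
    # Project the noise subspace out of every channel vector at every time sample
    proj = np.eye(epochs.shape[1]) - noise_pc @ noise_pc.T
    return np.einsum('ij,ejt->eit', proj, epochs)
```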


Factors for Speech Signal Time Delay Estimation (음성 신호를 이용한 시간지연 추정에 미치는 영향들에 관한 연구)

  • Kwon, Byoung-Ho;Park, Young-Jin;Park, Youn-Sik
    • Transactions of the Korean Society for Noise and Vibration Engineering / v.18 no.8 / pp.823-831 / 2008
  • Since it requires only a light computational load and a small database, sound source localization using the time difference of arrival (the TDOA method) is applied in many research fields, such as robot auditory systems and teleconferencing. Time delay estimation, the most important part of the TDOA method, has been studied broadly. However, studies of the factors that affect time delay estimation are insufficient, especially for applications in real environments. In 1997, Brandstein and Silverman reported that the performance of time delay estimation deteriorates as the reverberation time of a room increases. Yet even when the reverberation time is the same, the estimation performance differs depending on which part of the signal is used. To find the reason, we studied and analyzed the factors affecting time delay estimation using speech signals and room impulse responses. As a result, we found that the performance of time delay estimation changes with the R/D ratio and the signal characteristics in spite of the same reverberation time. We also define a performance index (PI) that shows a tendency similar to the R/D ratio, and propose a method that uses the PI to improve the performance of time delay estimation.
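
The paper's exact definition of the R/D ratio is not given in the abstract; the sketch below assumes a common reading, the ratio of reverberant to direct-path energy in a room impulse response, with the direct path taken as a short window around the strongest peak of the response.

```python
import numpy as np

def rd_ratio(rir, sr, direct_window_ms=2.5):
    """Reverberant-to-direct energy ratio of a room impulse response (assumed definition)."""
    peak = np.argmax(np.abs(rir))                 # direct-path arrival
    half = int(direct_window_ms * 1e-3 * sr / 2)
    lo, hi = max(0, peak - half), peak + half
    direct = np.sum(rir[lo:hi] ** 2)              # energy in the direct-path window
    reverberant = np.sum(rir ** 2) - direct       # everything else is reverberation
    return reverberant / direct
```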

Multiple Vibration Signal Feedback for Mobile Devices (모바일 기기에서의 다중 진도 신호 피드백)

  • Yoo, Yongjae;Hwang, Inwook;Seo, Jongman;Choi, Seungmoon
    • Smart Media Journal / v.1 no.4 / pp.8-17 / 2012
  • In this paper, we introduce approaches that aim to improve the user experience on mobile devices through multiple-vibration-signal feedback, conducted by the Haptics and Virtual Reality Laboratory at POSTECH. We present the current progress of our 'vibrotactile flow using multiple vibration actuators' and our 'real-time dual-channel haptic music player.' The vibrotactile flow technique produces flowing vibrotactile sensations using multiple actuators, which improves information transfer on mobile devices. The real-time dual-channel haptic music player generates vibrotactile sensations by transforming the auditory signal, which improves the user experience of mobile devices. These approaches are good examples of how to meet the demand for better information transfer capability and a better user experience on mobile devices.
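
As a loose illustration of the general audio-to-vibration idea, not the laboratory's published algorithm, the sketch below band-passes the music, takes a per-frame RMS envelope, and maps it to a normalized vibration amplitude; the band limits, frame length, and function name are assumptions.

```python
import numpy as np
from scipy.signal import butter, lfilter

def audio_to_vibration(audio, sr, band=(50.0, 500.0), frame_ms=10.0):
    """Return one vibration amplitude in [0, 1] per frame of the input audio."""
    b, a = butter(2, [band[0] / (sr / 2), band[1] / (sr / 2)], btype="band")
    low = lfilter(b, a, audio)                    # keep the bass/percussive content
    frame = int(frame_ms * 1e-3 * sr)
    n_frames = len(low) // frame
    rms = np.sqrt(np.mean(low[:n_frames * frame].reshape(n_frames, frame) ** 2, axis=1))
    return rms / (rms.max() + 1e-12)              # normalize to the actuator's range
```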


Folded Architecture for Digital Gammatone Filter Used in Speech Processor of Cochlear Implant

  • Karuppuswamy, Rajalakshmi;Arumugam, Kandaswamy;Swathi, Priya M.
    • ETRI Journal / v.35 no.4 / pp.697-705 / 2013
  • Emerging trends in the area of digital very large scale integration (VLSI) signal processing can lead to a reduction in the cost of the cochlear implant. Digital signal processing algorithms are used repetitively in speech processors for filtering and encoding operations. The critical paths in these algorithms limit the performance of the speech processors, so the algorithms must be transformed to yield processors that are fast, small in area, and low in power. This can be realized by basing the design of the auditory filter banks for the processors on digital VLSI signal processing concepts. By applying a folding algorithm to the second-order digital gammatone filter (GTF), the number of multipliers is reduced from five to one and the number of adders is reduced from three to one, without changing the characteristics of the filter. Folded second-order filter sections are cascaded with three similar structures to realize the eighth-order digital GTF, whose response closely matches the human cochlear response. With the folding architecture, the silicon area is reduced from twenty multipliers to four and from twelve adders to four.
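
The folding itself is a hardware transformation (time-multiplexing one multiplier and one adder across a section) and is not reproduced here; the sketch below only approximates the filtering behavior it preserves, a cascade of identical second-order resonant sections forming an eighth-order gammatone-like response, with the ERB-based bandwidth and the crude gain normalization as assumptions.

```python
import numpy as np
from scipy.signal import lfilter

def gammatone_sos(fc, sr, bw_factor=1.019):
    """One second-order IIR section with a complex pole pair at the center frequency."""
    erb = 24.7 + fc / 9.265                       # equivalent rectangular bandwidth (Hz)
    r = np.exp(-2.0 * np.pi * bw_factor * erb / sr)
    theta = 2.0 * np.pi * fc / sr
    b = [1.0 - r]                                 # crude gain normalization
    a = [1.0, -2.0 * r * np.cos(theta), r * r]    # resonant pole pair
    return b, a

def gammatone_filter(x, fc, sr, sections=4):
    """Cascade identical second-order sections; four sections give the eighth-order GTF."""
    b, a = gammatone_sos(fc, sr)
    y = np.asarray(x, dtype=float)
    for _ in range(sections):
        y = lfilter(b, a, y)
    return y
```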