• Title/Summary/Keyword: Sound Signal

Analysis of Heart Sound Using the Wavelet Transform (Wavelet Transform을 이용한 Heart Sound Analysis)

  • 위지영;김중규
    • Proceedings of the IEEK Conference
    • /
    • 2000.09a
    • /
    • pp.959-962
    • /
    • 2000
  • A heart sound algorithm has been developed that separates the heart sound signal into four parts: the first heart sound, the systolic period, the second heart sound, and the diastolic period. The algorithm applies discrete intensity envelopes of the wavelet-transform approximations to the phonocardiogram (PCG) signal. The heart sound is a highly nonstationary signal, so both time and frequency information are important in its analysis. Furthermore, the wavelet transform provides additional features and characteristics of the PCG signal that help physicians obtain qualitative and quantitative measurements of the heart sound.
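
The segmentation idea above (a frame-wise intensity envelope computed from wavelet approximations of the PCG) can be sketched in Python roughly as follows. The wavelet choice, decomposition level, frame length, and Shannon-energy envelope are illustrative assumptions rather than the paper's exact settings; `pcg` is the sampled phonocardiogram and `fs` its sampling rate.

```python
import numpy as np
import pywt  # PyWavelets

def pcg_intensity_envelope(pcg, fs, wavelet="db6", level=4, frame_sec=0.02):
    """Frame-wise intensity envelope of a wavelet approximation of a PCG signal.

    Frames with a high envelope mark candidate S1/S2 lobes; the gaps between
    them correspond to the systolic and diastolic periods.
    """
    # Keep only the approximation at the chosen level (details zeroed out).
    coeffs = pywt.wavedec(pcg, wavelet, level=level)
    coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    approx = pywt.waverec(coeffs, wavelet)[: len(pcg)]

    # Normalized average Shannon energy per frame as the intensity envelope.
    n = max(1, int(frame_sec * fs))
    x = approx / (np.max(np.abs(approx)) + 1e-12)
    frames = x[: len(x) // n * n].reshape(-1, n)
    envelope = -np.mean(frames ** 2 * np.log(frames ** 2 + 1e-12), axis=1)

    # Frames above a simple adaptive threshold are treated as heart-sound lobes.
    mask = envelope > envelope.mean() + 0.5 * envelope.std()
    return envelope, mask
```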

Sound Quality Evaluation of Turn-signal of a Passenger Vehicle based on Brain Signal (뇌파 측정을 이용한 차량 깜빡이 소리의 음질 평가)

  • Shin, Tae-Jin;Lee, Young-Jun;Lee, Sang-Kwon
    • Transactions of the Korean Society for Noise and Vibration Engineering
    • /
    • v.22 no.11
    • /
    • pp.1137-1143
    • /
    • 2012
  • This paper presents the correlation between psychological and physiological acoustics for automotive sound. The purpose of this research is to evaluate the sound quality of the turn-signal sound of a passenger car based on the EEG signal. Previous methods for the objective evaluation of sound quality use sound metrics based on psychological acoustics alone; the method presented here uses not only psychological but also physiological acoustics. For this work, the turn-signal sounds of 7 premium passenger cars were recorded and evaluated subjectively by 30 persons. The correlation between this subjective rating and the sound metrics was calculated based on psychological acoustics, and the correlation between the subjective rating and the EEG signal measured from the brain was also calculated. Based on these results, a new evaluation system for the sound quality of the interior sound of a passenger car has been developed from the bio-signal.
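
The evaluation described above comes down to correlating a per-car subjective rating with a psychoacoustic metric and with an EEG-derived feature. A minimal sketch of that comparison, assuming the three per-car sequences are already available; the metric and EEG feature names are placeholders, not the paper's definitions.

```python
from scipy.stats import pearsonr

def correlate_with_rating(subjective_rating, sound_metric, eeg_feature):
    """Correlate a mean subjective rating per car with a psychoacoustic metric
    and with an EEG-derived feature (each a 1-D sequence, one value per car)."""
    r_metric, p_metric = pearsonr(subjective_rating, sound_metric)  # psychoacoustic
    r_eeg, p_eeg = pearsonr(subjective_rating, eeg_feature)         # physiological
    return (r_metric, p_metric), (r_eeg, p_eeg)
```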

Automatic Classification of Continuous Heart Sound Signals Using the Statistical Modeling Approach (통계적 모델링 기법을 이용한 연속심음신호의 자동분류에 관한 연구)

  • Kim, Hee-Keun;Chung, Yong-Joo
    • The Journal of the Acoustical Society of Korea
    • /
    • v.26 no.4
    • /
    • pp.144-152
    • /
    • 2007
  • Conventional research on the classification of the heart sound signal has been done mainly with artificial neural networks. However, analysis of the statistical characteristics of the heart sound signal has shown that the HMM is suitable for modeling it. In this paper, we model various heart sound signals representing different heart diseases with HMMs and find that the classification rate is strongly affected by the clustering of the heart sound signal. In addition, the heart sound signal acquired in real environments is a continuous signal without any specified starting and ending points in time. Hence, for classification based on the HMM, the continuous cyclic heart sound signal would need to be manually segmented to obtain isolated cycles of the signal. As manual segmentation introduces segmentation errors and is not adequate for real-time processing, we propose a variant of the ergodic HMM that does not need a segmentation procedure. Simulation results show that the proposed method successfully classifies continuous heart sounds with high accuracy.
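
As a rough illustration of the HMM-based classification step described above (not the paper's segmentation-free ergodic variant), one Gaussian HMM can be trained per disease class and a recording assigned to the class with the highest likelihood. The sketch below assumes frame-level feature sequences have already been extracted; hmmlearn's GaussianHMM, which allows all state transitions by default, stands in for the ergodic model.

```python
import numpy as np
from hmmlearn import hmm

def train_class_models(features_by_disease, n_states=4):
    """Train one Gaussian HMM per heart-sound class.

    features_by_disease maps a class label to a list of 2-D feature arrays
    (frames x coefficients) extracted from recordings of that class.
    """
    models = {}
    for label, sequences in features_by_disease.items():
        X = np.vstack(sequences)
        lengths = [len(seq) for seq in sequences]
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=25, random_state=0)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, feature_seq):
    """Assign the class whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(feature_seq))
```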

A Study about Direction Estimate Device of the Sound Source using Input Time Difference by Microphones' Arrangement (마이크로폰 배열로 발생되는 입력 시간차를 이용한 음원의 방향 추정 장치에 관한 연구)

  • 윤준호;최기훈;유재명
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.21 no.5
    • /
    • pp.91-98
    • /
    • 2004
  • Humans use level differences and time differences to obtain spatial information. This paper presents a method for estimating the direction of a sound source from the time difference and for marking the estimated position, where the position means the direction from the geometrical center of the sensors to the sound source. To obtain the time difference between the microphone inputs, the arrangement of the microphones used as sensors to capture the sound signal is explained, including the distances among the three microphones and the distance between the microphones and the sound source. Second, the input signals are digitized and transmitted to the processor; a DSP (digital signal processor) is used so that the signals can be handled in real time. Finally, the position of the sound source is determined by the algorithm explained in this paper.
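
For a single microphone pair, the core of this time-difference approach can be sketched as follows: the delay is taken from the peak of the cross-correlation and converted to an arrival angle under a far-field assumption. The paper's three-microphone geometry, which resolves the ambiguity left by a single pair, is not reproduced here.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

SPEED_OF_SOUND = 343.0  # m/s, room temperature

def arrival_angle(mic_a, mic_b, fs, mic_distance):
    """Estimate the direction of a far-field source from one microphone pair.

    The inter-channel delay is taken from the peak of the cross-correlation
    and converted to an angle relative to the broadside of the pair.
    """
    xcorr = correlate(mic_a, mic_b, mode="full")
    lags = correlation_lags(len(mic_a), len(mic_b), mode="full")
    tau = lags[np.argmax(xcorr)] / fs                     # time difference in s
    # Far-field geometry: path-length difference = mic_distance * sin(theta).
    sin_theta = np.clip(SPEED_OF_SOUND * tau / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))
```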

A Study on the Detection of Small Arm Rifle Sound Using the Signal Modelling Method (신호 모델링 기법을 이용한 소총화기 신호 검출에 대한 연구)

  • Shin, Mincheol;Park, Kyusik
    • KIISE Transactions on Computing Practices
    • /
    • v.21 no.7
    • /
    • pp.443-451
    • /
    • 2015
  • This paper proposes a signal modelling method that can effectively detect the shock wave (SW) sound and muzzle blast (MB) sound from the gunshot of a small arm rifle. In order to localize a counter-sniper on the battlefield, accurate detection of both the shock wave sound and the muzzle blast sound is essential for estimating the direction and the distance of the counter-sniper. To verify the performance of the proposed algorithm, real gunshot sounds were recorded and analyzed at a domestic military shooting range. From the experimental results, the proposed signal modelling method was found to outperform the comparative system by more than 20% in shock wave detection and 5% in muzzle blast detection, respectively.
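
The proposed signal-modelling detector is not specified in enough detail in the abstract to reproduce; purely as a point of reference, a generic short-time-energy baseline for flagging impulsive events (shock wave or muzzle blast candidates) might look like the following sketch.

```python
import numpy as np

def impulsive_event_frames(x, fs, frame_sec=0.005, k=4.0):
    """Flag frames whose short-time energy exceeds a robust noise estimate.

    A generic baseline for impulsive-sound detection only; this is not the
    signal-modelling method proposed in the paper.
    """
    n = max(1, int(frame_sec * fs))
    frames = x[: len(x) // n * n].reshape(-1, n).astype(float)
    energy = np.sum(frames ** 2, axis=1)
    noise_floor = np.median(energy)                    # robust background level
    mad = np.median(np.abs(energy - noise_floor)) + 1e-12
    return np.flatnonzero(energy > noise_floor + k * mad)
```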

Heart Sound Recognition by Analysis of Wavelet Transform and Neural Network

  • Lee, Jung-Jun;Lee, Sang-Min;Hong, Seung-Hong
    • Proceedings of the IEEK Conference
    • /
    • 2000.07b
    • /
    • pp.1045-1048
    • /
    • 2000
  • This paper presents the application of wavelet transform analysis and a neural network method to the phonocardiogram (PCG) signal. The heart sound is an acoustic signal generated by the cardiac valves, the myocardium, and blood flow, and is a very complex, nonstationary signal composed of many sources. Heart sounds can be divided into normal heart sounds and heart murmurs. Murmurs have a broader frequency bandwidth than normal heart sounds and can occur at random positions in the cardiac cycle. In this paper, we classified heart sounds into normal heart sound (NO), pre-systolic murmur (PS), early systolic murmur (ES), late systolic murmur (LS), and early diastolic murmur (ED), and we used the wavelet transform to suppress artifacts and strengthen the low-level signal. The ANN system was trained and tested with the back-propagation algorithm on a large data set of normal and abnormal signals classified by experts. The best ANN configuration used 15 hidden-layer neurons, and the proposed algorithm achieved an accuracy of 85.6%.
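
A minimal stand-in for the classifier described above, using scikit-learn's back-propagation-trained MLP with the 15 hidden neurons reported in the abstract; the wavelet-derived input features are assumed to be computed elsewhere, and the remaining training settings are illustrative.

```python
from sklearn.neural_network import MLPClassifier

CLASSES = ["NO", "PS", "ES", "LS", "ED"]  # the five classes named in the abstract

def train_murmur_classifier(features, labels):
    """Back-propagation-trained network with a single hidden layer of 15 neurons.

    `features` is an (n_samples, n_features) array of wavelet-derived features
    per heart-sound example; `labels` holds the class names listed above.
    """
    clf = MLPClassifier(hidden_layer_sizes=(15,), activation="logistic",
                        solver="sgd", max_iter=2000, random_state=0)
    clf.fit(features, labels)
    return clf
```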

Class Determination Based on Kullback-Leibler Distance in Heart Sound Classification

  • Chung, Yong-Joo;Kwak, Sung-Woo
    • The Journal of the Acoustical Society of Korea
    • /
    • v.27 no.2E
    • /
    • pp.57-63
    • /
    • 2008
  • Stethoscopic auscultation is still one of the primary tools for the diagnosis of heart diseases due to its easy accessibility and relatively low cost. It is, however, a difficult skill to acquire. Many research efforts have therefore been devoted to the automatic classification of heart sound signals to support clinicians in heart sound diagnosis. Recently, hidden Markov models (HMMs) have been used quite successfully in the automatic classification of the heart sound signal. In classification with HMMs, however, there are so many heart sound signal types that it is not reasonable to assign a separate class to each of them. In this paper, rather than constructing an HMM for each signal type, we propose building an HMM for a set of acoustically similar signal types. To define the classes, we use the KL (Kullback-Leibler) distance between different signal types to determine whether they should belong to the same class. In classification experiments on heart sound data consisting of 25 different types of signals, the proposed method proved quite efficient in determining the optimal set of classes, and the class determination approach produced better results than the heuristic class assignment method.
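
Since the KL divergence between two HMMs has no closed form, it is commonly approximated by Monte Carlo sampling. The sketch below uses that approximation plus a simple greedy merge threshold (both illustrative assumptions, not the paper's exact class-determination procedure) to show how acoustically similar signal types could be grouped into one class; the models are assumed to be trained hmmlearn HMMs.

```python
def kl_distance(hmm_a, hmm_b, n_samples=2000):
    """Symmetrized Monte Carlo estimate of the KL distance between two trained
    hmmlearn models (the KL divergence between HMMs has no closed form)."""
    def one_way(m_from, m_to):
        X, _ = m_from.sample(n_samples, random_state=0)
        return (m_from.score(X) - m_to.score(X)) / n_samples
    return one_way(hmm_a, hmm_b) + one_way(hmm_b, hmm_a)

def group_signal_types(models, threshold):
    """Greedy grouping: a signal type joins an existing class when its KL
    distance to that class's first member falls below the threshold."""
    classes = []  # each class is a list of (name, model) pairs
    for name, model in models.items():
        for members in classes:
            if kl_distance(model, members[0][1]) < threshold:
                members.append((name, model))
                break
        else:
            classes.append([(name, model)])
    return classes
```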

A Study on a Method of U/V Decision by Using The LSP Parameter in The Speech Signal (LSP 파라미터를 이용한 음성신호의 성분분리에 관한 연구)

  • 이희원;나덕수;정찬중;배명진
    • Proceedings of the IEEK Conference
    • /
    • 1999.06a
    • /
    • pp.1107-1110
    • /
    • 1999
  • In speech signal processing, an accurate voiced/unvoiced decision is important for robust word recognition and analysis and for high coding efficiency. In this paper, we propose a voiced/unvoiced decision method using the LSP parameters, which represent the spectral characteristics of the speech signal. Voiced sounds have more LSP parameters in the low-frequency region, whereas unvoiced sounds have more LSP parameters in the high-frequency region; that is, the LSP parameter distribution of voiced sounds differs from that of unvoiced sounds. In addition, voiced sounds have the minimum interval between consecutive LSP parameters in the low-frequency region, while unvoiced sounds have it in the high-frequency region. We make the voiced/unvoiced decision using these characteristics. We applied the proposed method to continuous speech and achieved good performance.
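
A toy version of the decision rule described above, assuming the LSP (line spectral pair) frequencies of a frame have already been obtained from an external LPC-to-LSP conversion; the 1 kHz band boundary and the majority rule are illustrative assumptions, not the paper's thresholds.

```python
import numpy as np

def is_voiced(lsp_freqs, low_band_hz=1000.0):
    """Crude voiced/unvoiced decision from line spectral pair frequencies.

    `lsp_freqs` are the LSP frequencies (Hz) of one analysis frame, obtained
    from an external LPC-to-LSP conversion (not shown here). A frame is called
    voiced when most LSPs fall in the low band and the smallest gap between
    adjacent LSPs is also located in the low band.
    """
    lsp = np.sort(np.asarray(lsp_freqs, dtype=float))
    low_count = np.sum(lsp < low_band_hz)
    gaps = np.diff(lsp)
    i = int(np.argmin(gaps))
    min_gap_center = 0.5 * (lsp[i] + lsp[i + 1])
    return bool(low_count > len(lsp) / 2 and min_gap_center < low_band_hz)
```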

Sound System Analysis for Health Smart Home

  • CASTELLI Eric;ISTRATE Dan;NGUYEN Cong-Phuong
    • Proceedings of the IEEK Conference
    • /
    • summer
    • /
    • pp.237-243
    • /
    • 2004
  • A multichannel smart sound sensor capable of detecting and identifying sound events in noisy conditions is presented in this paper. Sound information extraction is a complex task, and the main difficulty consists in extracting high-level information from a one-dimensional signal. The input of the smart sound sensor is composed of data collected by 5 microphones, and its output is sent through a network. For real-time operation, the sound analysis is divided into three steps: sound event detection on each sound channel, fusion of simultaneous events, and sound identification. The event detection module finds impulsive signals in the noise and extracts them from the signal flow. The smart sensor must be able to identify impulsive signals as well as the presence of speech in a noisy environment. The classification module is launched as a parallel task on the channel chosen by the data fusion process. It identifies the sound event among seven predefined sound classes using a Gaussian mixture model (GMM) method. Mel-frequency cepstral coefficients are used in combination with additional features such as the zero-crossing rate, the spectral centroid, and the roll-off point. This smart sound sensor is part of a medical telemonitoring project aimed at detecting serious accidents.
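
A compact sketch of the classification stage described above: per-frame MFCCs combined with the zero-crossing rate, spectral centroid, and roll-off, one GMM trained per predefined sound class, and identification by the highest likelihood. The feature settings and number of mixture components are illustrative assumptions; librosa and scikit-learn stand in for whatever implementation the authors used.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def frame_features(y, sr):
    """Per-frame MFCCs combined with zero-crossing rate, spectral centroid and
    roll-off, as listed in the abstract (one row per analysis frame)."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    zcr = librosa.feature.zero_crossing_rate(y)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)
    return np.vstack([mfcc, zcr, centroid, rolloff]).T

def train_gmms(clips_by_class, sr, n_components=8):
    """Train one GMM per predefined sound class on pooled frame features."""
    models = {}
    for label, clips in clips_by_class.items():
        X = np.vstack([frame_features(y, sr) for y in clips])
        models[label] = GaussianMixture(n_components=n_components,
                                        covariance_type="diag",
                                        random_state=0).fit(X)
    return models

def identify(models, y, sr):
    """Pick the class whose GMM gives the highest average frame log-likelihood."""
    feats = frame_features(y, sr)
    return max(models, key=lambda label: models[label].score(feats))
```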

Aurally Relevant Analysis by Synthesis - VIPER a New Approach to Sound Design -

  • Daniel, Peter;Pischedda, Patrice
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2003.05a
    • /
    • pp.1009-1009
    • /
    • 2003
  • VIPER, a new tool for the VIsual PERception of sound quality and for sound design, is presented. A prerequisite for the visualization of sound quality is a signal analysis that models the information processing of the ear. The first step of the signal processing implemented in VIPER calculates an auditory spectrogram with a filter bank adapted to the time and frequency resolution of the human ear. The second step removes redundant information by extracting time and frequency contours from the auditory spectrogram, in analogy to contours in the visual system. In a third step, the contours and/or the auditory spectrogram can be resynthesized, confirming that only aurally relevant information was extracted. The visualization of the contours in VIPER makes it possible to intuitively grasp the important components of a signal. Contributions of parts of a signal to the overall quality can easily be auralized by editing and resynthesizing the contours or the underlying auditory spectrogram. Resynthesis of the time contours alone allows, for example, impulsive components to be auralized separately from the tonal components. Further processing of the contours determines tonal parts in the form of tracks. Audible differences between two versions of a sound can be visually inspected in VIPER with the help of auditory distance spectrograms. Applications are shown for the sound design of several car interior noises.
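
VIPER's ear-adapted filter bank and contour extraction are not publicly specified, so they are not reproduced here; as a rough stand-in for the first step only, a mel-scaled spectrogram gives a time-frequency analysis whose frequency resolution is closer to the ear's than a linear FFT spectrogram.

```python
import numpy as np
import librosa

def auditory_style_spectrogram(y, sr):
    """Mel-scaled magnitude spectrogram in dB.

    This only approximates the idea of an ear-adapted time-frequency analysis;
    it is not the filter bank implemented in VIPER.
    """
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024,
                                       hop_length=256, n_mels=64)
    return librosa.power_to_db(S, ref=np.max)
```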
