• Title/Summary/Keyword: Acoustic Signal Recognition

Acoustic emission technique to identify stress corrosion cracking damage

  • Soltangharaei, V.;Hill, J.W.;Ai, Li;Anay, R.;Greer, B.;Bayat, Mahmoud;Ziehl, P.
    • Structural Engineering and Mechanics / v.75 no.6 / pp.723-736 / 2020
  • In this paper, acoustic emission (AE) and pattern recognition are utilized to identify the AE signal signatures caused by the propagation of stress corrosion cracking (SCC) in a 304 stainless steel plate. The surface of the plate is under nearly uniform tensile stress at a notch. A corrosive environment is provided by exposing the notch to a solution of 1 % potassium tetrathionate by weight. The global b-value indicated the occurrence of the first visible crack and the damage stages during SCC. Furthermore, a method based on linear regression has been developed for damage identification using AE data.
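As a rough illustration of the b-value analysis mentioned above (the paper's exact formulation is not reproduced here), a common AE adaptation of the Gutenberg-Richter maximum-likelihood (Aki) estimator, with amplitudes in dB divided by 20 to play the role of magnitudes, can be sketched as:

```python
import numpy as np

def b_value(amplitudes_db, a_min=None):
    """Maximum-likelihood (Aki) b-value estimate for AE hit amplitudes in dB.

    AE practice often divides dB amplitudes by 20 so they act like
    earthquake magnitudes; a_min is the detection threshold (defaults to
    the smallest observed amplitude).
    """
    a = np.asarray(amplitudes_db, dtype=float)
    if a_min is None:
        a_min = a.min()
    mags = a / 20.0            # AE dB -> magnitude-like scale
    m_min = a_min / 20.0
    return np.log10(np.e) / (mags.mean() - m_min)
```

A falling b-value over successive hit windows is then read as a shift from distributed micro-cracking to localized macro-crack growth.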

A Study on the Wavelet Transform of Acoustic Emission Signals Generated from Fusion-Welded Butt Joints in Steel during Tensile Test and its Applications (맞대기 용접 이음재 인장시험에서 발생한 음향방출 신호의 웨이블릿 변환과 응용)

  • Rhee, Zhang-Kyu
    • Transactions of the Korean Society of Machine Tool Engineers / v.16 no.1 / pp.26-32 / 2007
  • This study examined fusion-welded butt joints in SWS 490A high-strength steel subjected to tensile testing, with the load-deflection curve recorded. The windowed or short-time Fourier transform (WFT or STFT) makes it possible to analyze non-stationary or transient signals in a joint time-frequency domain, and the wavelet transform (WT) is used to decompose the acoustic emission (AE) signal into discrete series of sequences over different frequency bands. In this paper, a continuous wavelet transform is used for AE signal analysis, in which the Gabor wavelet, based on a Gaussian window function, is applied in the time-frequency domain. The wavelet transform is demonstrated, and the resulting plots are very powerful for recognizing AE features. As a result, the acoustic emission technique is ideally suited to studying the variables that control time- and stress-dependent fracture or damage processes in metallic materials.
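A minimal sketch of the continuous Gabor wavelet transform described above (parameter choices such as the Gaussian width `gamma` are assumptions, not taken from the paper) evaluated by frequency-domain filtering:

```python
import numpy as np

def gabor_cwt(signal, fs, freqs, gamma=5.0):
    """Continuous wavelet transform with a Gabor wavelet (complex
    exponential under a Gaussian window), computed per analysis frequency
    by multiplying the signal spectrum with a Gaussian band-pass window.
    Returns the scalogram magnitude, shape (len(freqs), len(signal))."""
    n = len(signal)
    sig_f = np.fft.fft(signal)
    omega = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / fs)
    out = np.empty((len(freqs), n), dtype=complex)
    for i, f0 in enumerate(freqs):
        s = gamma / (2 * np.pi * f0)        # scale tied to centre frequency
        # Gaussian window centred on the analysis frequency f0
        win = np.exp(-0.5 * (s * (omega - 2 * np.pi * f0)) ** 2)
        out[i] = np.fft.ifft(sig_f * win)
    return np.abs(out)
```

Plotting the returned magnitude against time and frequency gives the time-frequency maps the abstract refers to; a pure tone shows up as a ridge at its own frequency row.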

A Study on the Wavelet Transform of Acoustic Emission Signals Generated from Fusion-Welded Butt Joints in Steel during Tensile Test and its Applications (맞대기 용접 이음재 인장시험에서 발생한 음향방출 신호의 웨이블릿 변환과 응용)

  • Rhee Zhang-Kyu;Yoon Joung-Hwi;Woo Chang-Ki;Park Sung-Oan;Kim Bong-Gag;Jo Dae-Hee
    • Proceedings of the Korean Society of Machine Tool Engineers Conference / 2005.05a / pp.342-348 / 2005
  • This study examined fusion-welded butt joints in SWS 490A high-strength steel subjected to tensile testing, with the load-deflection curve recorded. The windowed or short-time Fourier transform (WFT or STFT) makes it possible to analyze non-stationary or transient signals in a joint time-frequency domain, and the wavelet transform (WT) is used to decompose the acoustic emission (AE) signal into discrete series of sequences over different frequency bands. In this paper, a continuous wavelet transform is used for AE signal analysis, in which the Gabor wavelet, based on a Gaussian window function, is applied in the time-frequency domain. The wavelet transform is demonstrated, and the resulting plots are very powerful for recognizing AE features. As a result, the acoustic emission technique is ideally suited to studying the variables that control time- and stress-dependent fracture or damage processes in metallic materials.

An Adaptive Utterance Verification Framework Using Minimum Verification Error Training

  • Shin, Sung-Hwan;Jung, Ho-Young;Juang, Biing-Hwang
    • ETRI Journal / v.33 no.3 / pp.423-433 / 2011
  • This paper introduces an adaptive and integrated utterance verification (UV) framework using minimum verification error (MVE) training as a new set of solutions suitable for real applications. UV is traditionally considered an add-on procedure to automatic speech recognition (ASR) and is thus treated separately from ASR system model design. This traditional two-stage approach often fails to cope with a wide range of variations, such as a new speaker or a new environment not matched to the original speaker population or acoustic environment on which the ASR system was trained. In this paper, we propose an integrated solution to enhance overall UV system performance in such real applications. The integration is accomplished by adapting and merging the target model for UV with the acoustic model for ASR, based on the common MVE principle, at each iteration of the recognition stage. The proposed iterative procedure for UV model adaptation also involves revising the data segmentation and the decoded hypotheses. Under this new framework, remarkable enhancement has been obtained in not only recognition performance but also verification performance.
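The core of MVE training is gradient descent on a smoothed (sigmoid) count of verification errors. A deliberately simplified sketch, using scalar 1-D "models" scored by negative squared distance rather than the paper's HMM-based scores, might look like:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mve_update(theta_t, theta_a, x, is_target, lr=0.5, alpha=2.0):
    """One generalized-probabilistic-descent step on the sigmoid-smoothed
    verification error. theta_t / theta_a are target and anti-model means;
    d > 0 indicates a misverification, and the sigmoid slope gates the
    parameter update (alpha controls the smoothing)."""
    g_t = -(x - theta_t) ** 2            # target-model score
    g_a = -(x - theta_a) ** 2            # anti-model (impostor) score
    d = (g_a - g_t) if is_target else (g_t - g_a)
    dl = alpha * sigmoid(alpha * d) * (1.0 - sigmoid(alpha * d))
    if is_target:                        # pull target model toward x,
        theta_t += lr * dl * 2.0 * (x - theta_t)   # push anti-model away
        theta_a -= lr * dl * 2.0 * (x - theta_a)
    else:                                # impostor trial: the reverse
        theta_t -= lr * dl * 2.0 * (x - theta_t)
        theta_a += lr * dl * 2.0 * (x - theta_a)
    return theta_t, theta_a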

Emotion Recognition and Expression Method using Bi-Modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 감정인식 및 표현기법)

  • Joo, Jong-Tae;Jang, In-Hun;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.13 no.8 / pp.754-759 / 2007
  • In this paper, we propose a bi-modal sensor fusion algorithm, an emotion recognition method able to classify four emotions (happy, sad, angry, surprise) using a facial image and a speech signal together. We extract feature vectors from the speech signal using acoustic features without language features and classify the emotional pattern using a neural network. We also select features of the mouth, eyes, and eyebrows from the facial image, and apply Principal Component Analysis (PCA) to reduce the extracted feature vectors to a low-dimensional feature vector. Finally, we propose a method to fuse the emotion recognition results obtained from the facial image and the speech.
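The two building blocks above, PCA dimensionality reduction and fusion of per-modality decisions, can be sketched as follows (the weighted-sum late fusion and the weight `w_face` are illustrative assumptions, not the paper's exact rule):

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X (samples x features) onto the top-k
    principal components of the sample covariance."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)           # ascending eigenvalues
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # k largest components
    return Xc @ top

def fuse_scores(p_face, p_speech, w_face=0.6):
    """Late fusion: weighted sum of per-class scores from the facial-image
    and speech classifiers; returns the index of the winning emotion."""
    p = w_face * np.asarray(p_face) + (1.0 - w_face) * np.asarray(p_speech)
    return int(np.argmax(p))
```

With class order (happy, sad, angry, surprise), each modality's classifier outputs a 4-vector of scores and the fused argmax is the final decision.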

Speech Query Recognition for Tamil Language Using Wavelet and Wavelet Packets

  • Iswarya, P.;Radha, V.
    • Journal of Information Processing Systems / v.13 no.5 / pp.1135-1148 / 2017
  • Speech recognition is one of the fascinating fields in computer science. The accuracy of a speech recognition system may be reduced by noise present in the speech signal, so noise removal is an essential step in an Automatic Speech Recognition (ASR) system, and this paper proposes a new technique called combined thresholding for noise removal. Feature extraction is the process of converting the acoustic signal into a compact set of valuable parameters. This paper also concentrates on improving Mel-Frequency Cepstral Coefficient (MFCC) features by introducing the Discrete Wavelet Packet Transform (DWPT) in place of the Discrete Fourier Transform (DFT) block to provide more efficient signal analysis. Because the feature vector varies in size, a Self-Organizing Map (SOM) is used to choose the correct feature-vector length. As a single classifier does not provide enough accuracy, this research proposes an Ensemble Support Vector Machine (ESVM) classifier, termed ESVM_SOM, whose input is the fixed-length feature vector from the SOM. The experimental results showed that the proposed methods provide better results than the existing methods.
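The paper's exact "combined thresholding" rule is not reproduced in the abstract; a common compromise between hard and soft wavelet-coefficient thresholding, offered here only as an assumed illustration, is:

```python
import numpy as np

def combined_threshold(coeffs, t, alpha=0.5):
    """Shrinkage between hard and soft thresholding for wavelet
    denoising: coefficients with |c| <= t are zeroed; larger ones are
    shrunk by alpha*t (alpha=0 gives hard, alpha=1 gives soft
    thresholding)."""
    c = np.asarray(coeffs, dtype=float)
    return np.where(np.abs(c) <= t, 0.0,
                    np.sign(c) * (np.abs(c) - alpha * t))
```

Applied to the detail coefficients of a wavelet (or wavelet-packet) decomposition of noisy speech, this suppresses small noise-dominated coefficients while largely preserving strong speech-driven ones before reconstruction.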

Development of Adaptive AE Signal Pattern Recognition Program and Application to Classification of Defects in Metal Contact Regions of Rotating Component (적응형 AE신호 형상 인식 프로그램 개발자 회전체 금속 접촉부 이상 분류에 관한 적용 연구)

  • Lee, K.Y.;Lee, C.M.;Kim, J.S.
    • Journal of the Korean Society for Nondestructive Testing / v.15 no.4 / pp.520-530 / 1996
  • In this study, artificial defects in a rotary compressor are classified using pattern recognition of acoustic emission signals, and a computer program is developed for this purpose. The neural network classifier is compared with statistical classifiers such as the linear discriminant function classifier and the empirical Bayesian classifier, and is found to perform better. A recognition rate above 99 % is achieved with the neural network classifier.
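For context on the statistical baselines above, the simplest member of the linear-discriminant family is a nearest-class-mean rule (equivalent to a linear discriminant under equal spherical covariances); this sketch is a generic illustration, not the paper's implementation:

```python
import numpy as np

def nearest_mean_classifier(X_train, y_train, X_test):
    """Assign each test vector to the class whose training mean is
    nearest in Euclidean distance."""
    classes = np.unique(y_train)
    means = np.array([X_train[y_train == c].mean(axis=0) for c in classes])
    # squared distance from every test sample to every class mean
    d = ((X_test[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(d, axis=1)]
```

The paper's finding is that a neural network, which learns non-linear decision boundaries in the AE feature space, outperforms such linear statistical rules.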

A Real-Time Sound Recognition System with a Decision Logic of Random Forest for Robots (Random Forest를 결정로직으로 활용한 로봇의 실시간 음향인식 시스템 개발)

  • Song, Ju-man;Kim, Changmin;Kim, Minook;Park, Yongjin;Lee, Seoyoung;Son, Jungkwan
    • The Journal of Korea Robotics Society / v.17 no.3 / pp.273-281 / 2022
  • In this paper, we propose a robot sound recognition system that detects various sound events. The proposed system is designed to detect various sound events in real time using a microphone on a robot. To achieve real-time performance, we use a VGG11 model, comprising several convolutional neural networks, with a real-time normalization scheme. The VGG11 model is trained on a database augmented across 24 different environments (12 reverberation times and 2 signal-to-noise ratios). Additionally, a decision logic based on the random forest algorithm is designed to generate event signals for robot applications. For specific classes of acoustic events, this logic yields better performance than using the network model's outputs alone. Experimental results show the performance of the proposed sound recognition system on a real-time device for robots.
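As a simplified stand-in for the random-forest decision logic above (the paper's actual trees and thresholds are not given in the abstract), an event flag can be derived from the network's per-frame class probabilities with an m-of-k smoothing rule:

```python
import numpy as np

def event_decision(frame_probs, cls, p_min=0.6, k=5, m=3):
    """Fire an event for class `cls` when at least m of the last k frames
    assign it probability >= p_min. frame_probs has shape
    (frames, classes); returns one boolean flag per frame."""
    probs = np.asarray(frame_probs)
    hits = (probs[:, cls] >= p_min).astype(int)
    events = np.zeros(len(hits), dtype=bool)
    for t in range(len(hits)):
        lo = max(0, t - k + 1)
        events[t] = hits[lo:t + 1].sum() >= m
    return events
```

Like the paper's random-forest logic, a rule of this kind suppresses single-frame false positives from the raw network outputs before an event signal is sent to the robot application.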

Frequency-Cepstral Features for Bag of Words Based Acoustic Context Awareness (Bag of Words 기반 음향 상황 인지를 위한 주파수-캡스트럴 특징)

  • Park, Sang-Wook;Choi, Woo-Hyun;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea / v.33 no.4 / pp.248-254 / 2014
  • Among acoustic signal analysis tasks, acoustic context awareness is one of the most formidable in terms of complexity, since it requires sophisticated understanding of individual acoustic events. In conventional context awareness methods, individual acoustic event detection or recognition is employed to generate a decision on the impending context. However, this approach may perform poorly in practical situations, because events can occur simultaneously and acoustically similar events are difficult to distinguish from each other. In particular, the babble noise occurring in a bus or subway environment may confuse the context awareness task, since babbling sounds similar in any environment. In this paper, therefore, a frequency-cepstral feature vector is proposed to mitigate this confusion in a binary situation-awareness decision: bus or metro. Using a Support Vector Machine (SVM) as the classifier, the proposed feature vector scheme is shown to outperform the conventional scheme.
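The cepstral half of such a feature vector can be sketched generically (this is the standard real cepstrum of a windowed frame, not necessarily the paper's exact front end; the coefficient count `n_ceps` is an assumption):

```python
import numpy as np

def frame_cepstrum(frame, n_ceps=13):
    """Low-order real cepstrum of one signal frame: DCT-II of the
    log-magnitude spectrum; the first coefficients summarize the
    spectral envelope, which is what distinguishes broad acoustic
    contexts."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    log_spec = np.log(spec + 1e-10)          # avoid log(0)
    n = len(log_spec)
    k = np.arange(n_ceps)[:, None]
    m = np.arange(n)[None, :]
    dct = np.cos(np.pi * k * (2 * m + 1) / (2 * n))  # naive DCT-II basis
    return dct @ log_spec
```

Pooling such per-frame vectors over a clip (e.g. into a bag-of-words histogram, as in the title) yields the fixed-length input the SVM classifies as bus or metro.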

Automatic speech recognition using acoustic doppler signal (초음파 도플러를 이용한 음성 인식)

  • Lee, Ki-Seung
    • The Journal of the Acoustical Society of Korea / v.35 no.1 / pp.74-82 / 2016
  • In this paper, a new automatic speech recognition (ASR) method is proposed in which ultrasonic Doppler signals are used instead of conventional speech signals. The proposed method has advantages over conventional speech/non-speech-based ASR, including robustness against acoustic noise and the user comfort associated with a non-contact sensor. In the proposed method, a 40 kHz ultrasonic signal is radiated toward the mouth and the reflected ultrasonic signals are received. The frequency shift caused by the Doppler effect is used to implement ASR. Unlike the previous method, which employed a single-channel ultrasonic signal, the proposed method employs multi-channel ultrasonic signals acquired from various locations. Principal Component Analysis (PCA) coefficients are used as the ASR features, and a hidden Markov model (HMM) with a left-right topology is adopted. To verify the feasibility of the proposed ASR, a speech recognition experiment was carried out on 60 Korean isolated words obtained from six speakers. The results showed that the overall word recognition rates were comparable with conventional speech-based ASR methods, and that the proposed method outperformed the conventional single-channel ASR method. In particular, an average recognition rate of 90 % was maintained under noisy environments.
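The physical quantity the method measures is the two-way Doppler shift of the 40 kHz carrier reflected from the moving articulators, f_d = 2·v·f0/c for v much smaller than the speed of sound. As a quick numeric check (the speed of sound value is an assumption for room-temperature air):

```python
def doppler_shift(v_mps, f0_hz=40_000.0, c_mps=343.0):
    """Two-way Doppler shift in Hz of a carrier at f0_hz reflected from a
    surface moving at v_mps toward the transceiver (valid for v << c)."""
    return 2.0 * v_mps * f0_hz / c_mps
```

Articulator velocities on the order of 0.1 m/s thus produce shifts of a few tens of Hz around the 40 kHz carrier, which is the band the multi-channel receivers demodulate before PCA feature extraction and HMM decoding.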