• Title/Summary/Keyword: MEL

Deep Learning-Based, Real-Time, False-Pick Filter for an Onsite Earthquake Early Warning (EEW) System (온사이트 지진조기경보를 위한 딥러닝 기반 실시간 오탐지 제거)

  • Seo, JeongBeom;Lee, JinKoo;Lee, Woodong;Lee, SeokTae;Lee, HoJun;Jeon, Inchan;Park, NamRyoul
    • Journal of the Earthquake Engineering Society of Korea / v.25 no.2 / pp.71-81 / 2021
  • This paper presents a real-time, deep-learning-based false-pick filter to reduce false alarms in an onsite Earthquake Early Warning (EEW) system. Most onsite EEW systems use the P-wave to predict the S-wave, so properly distinguishing P-waves from noise or other seismic phases is essential to avoid false alarms. To reduce the false picks that cause false alarms, this study built the EEWNet Part 1 'False-Pick Filter' model based on a Convolutional Neural Network (CNN). Specifically, it modified Pick_FP (Lomax et al.) to generate input data comprising the amplitude, velocity, and displacement of three components, from 2 seconds before to 2 seconds after the P-wave arrival, in one-second time steps. The model extracts log-mel power spectrum features from this input and uses them to classify P-waves against everything else. The dataset consisted of 3,189,583 samples: 81,394 from event data (727 events in the Korean Peninsula, 103 teleseismic events, and 1,734 events in Taiwan) and 3,108,189 from continuous data recorded by seismic stations in South Korea over 27 months between 2018 and 2020. The model was trained on 1,826,357 samples after class balancing, then tested on continuous-data samples from 2019, filtering out more than 99% of the strong false picks that could trigger false alarms. It was developed as a module for USGS Earthworm and is written in C to run with minimal computing resources.
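
For readers unfamiliar with the feature step, the sketch below shows one way to compute log-mel power spectra from a three-component window around a P-wave pick using librosa. The sampling rate, mel parameters, and function names are illustrative assumptions, not the paper's implementation (which is written in C).

```python
# Hypothetical sketch of log-mel feature extraction from a 4 s,
# three-component window (2 s before to 2 s after the P-wave pick).
import numpy as np
import librosa

SR = 100       # assumed sampling rate (Hz) of the strong-motion records
N_MELS = 32    # assumed number of mel bands

def log_mel_features(window: np.ndarray, sr: int = SR) -> np.ndarray:
    """window: (3, 4*sr) array of three-component data around the P pick."""
    feats = []
    for trace in window:
        mel = librosa.feature.melspectrogram(
            y=trace.astype(np.float32), sr=sr,
            n_fft=256, hop_length=64, n_mels=N_MELS)
        feats.append(librosa.power_to_db(mel, ref=np.max))
    return np.stack(feats)   # (3, N_MELS, frames), a CNN-ready tensor

window = np.random.randn(3, 4 * SR)   # stand-in for a real 4 s window
print(log_mel_features(window).shape)
```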

The Edge Computing System for the Detection of Water Usage Activities with Sound Classification (음향 기반 물 사용 활동 감지용 엣지 컴퓨팅 시스템)

  • Seung-Ho Hyun;Youngjoon Chee
    • Journal of Biomedical Engineering Research / v.44 no.2 / pp.147-156 / 2023
  • Efforts have been made to use smart home sensors to monitor the indoor activities of elderly people living alone and assess whether they maintain a safe and healthy lifestyle, but the bathroom has remained a blind spot. In this study, we developed and evaluated a new edge computing device that automatically detects water usage activities in the bathroom and records an activity log on a cloud server. Three kinds of water usage sounds, toilet flushing, showering, and washing at the wash basin, were recorded and cut into 1-second scenes. These sound clips were then converted into two-dimensional images using the mel spectrogram. Sound data augmentation techniques, some applied in the time domain and others in the frequency domain, were adopted to learn better from a small dataset and increased the size of the training set 30-fold. A CRNN, a deep learning model combining a Convolutional Neural Network and a Recurrent Neural Network, was employed. The edge device was implemented on a Raspberry Pi 4 equipped with a condenser microphone and amplifier to run the pre-trained model in real time. Detected activities were recorded as text-based activity logs on a Firebase server. Performance was evaluated in two bathrooms for the three water usage activities, yielding accuracies of 96.1% and 88.2% and F1 scores of 96.1% and 87.8%, respectively. Most classification errors occurred on the wash basin sound. In conclusion, this system shows potential for long-term logging of the activities of elderly single residents to a cloud server.
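
A minimal sketch of the preprocessing described above, assuming librosa and illustrative parameters: a 1-second clip is converted to a mel-spectrogram image, with one time-domain and one frequency-domain augmentation. The paper's exact augmentation set may differ.

```python
# Illustrative mel-spectrogram conversion plus two example augmentations.
import numpy as np
import librosa

SR = 16000  # assumed sampling rate

def to_mel_image(clip):
    mel = librosa.feature.melspectrogram(y=clip, sr=SR, n_mels=64)
    return librosa.power_to_db(mel, ref=np.max)

def augment_time(clip, rng):
    shifted = np.roll(clip, rng.integers(-SR // 10, SR // 10))  # time shift
    return shifted + 0.005 * rng.standard_normal(len(clip))     # add noise

def augment_freq(mel_img, rng, width=8):
    out = mel_img.copy()
    f0 = rng.integers(0, out.shape[0] - width)   # mask one frequency band
    out[f0:f0 + width, :] = out.min()
    return out

rng = np.random.default_rng(0)
clip = np.random.randn(SR).astype(np.float32)   # stand-in for a 1 s recording
img = augment_freq(to_mel_image(augment_time(clip, rng)), rng)
print(img.shape)   # (64, frames) image fed to the CRNN
```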

RoutingConvNet: A Light-weight Speech Emotion Recognition Model Based on Bidirectional MFCC (RoutingConvNet: 양방향 MFCC 기반 경량 음성감정인식 모델)

  • Hyun Taek Lim;Soo Hyung Kim;Guee Sang Lee;Hyung Jeong Yang
    • Smart Media Journal / v.12 no.5 / pp.28-35 / 2023
  • In this study, we propose RoutingConvNet, a new lightweight model with fewer parameters, to improve the applicability and practicality of speech emotion recognition. To reduce the number of learnable parameters, the proposed model concatenates bidirectional MFCCs channel-wise to learn long-term emotion dependence and extract contextual features. A lightweight deep CNN is constructed for low-level feature extraction, and self-attention is used to obtain channel-wise and spatial information from the speech signal. In addition, we apply dynamic routing to improve accuracy and build a model that is robust to feature variation. In experiments on the speech emotion datasets EMO-DB, RAVDESS, and IEMOCAP, the proposed model reduces parameters while improving accuracy, achieving 87.86%, 83.44%, and 66.06% accuracy, respectively, with about 156,000 parameters. We also propose a metric that quantifies the trade-off between parameter count and accuracy for evaluating lightweight models.
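
The abstract does not fully specify how the bidirectional MFCCs are constructed; one plausible reading, sketched below purely as an assumption, stacks the MFCCs of the signal and of its time-reversed version as two input channels.

```python
# Hypothetical "bidirectional MFCC" input: forward and time-reversed MFCCs
# stacked channel-wise. This is one interpretation, not the paper's code.
import numpy as np
import librosa

def bidirectional_mfcc(y, sr=16000, n_mfcc=40):
    fwd = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    bwd = librosa.feature.mfcc(y=np.ascontiguousarray(y[::-1]), sr=sr,
                               n_mfcc=n_mfcc)
    return np.stack([fwd, bwd])      # (2, n_mfcc, frames)

y = np.random.randn(16000).astype(np.float32)  # stand-in 1 s utterance
print(bidirectional_mfcc(y).shape)
```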

Effective Speaker Recognition Technology Using Noise (잡음을 활용한 효과적인 화자 인식 기술)

  • Ko, Suwan;Kang, Minji;Bang, Sehee;Jung, Wontae;Lee, Kyungroul
    • Proceedings of the Korean Society of Computer Information Conference / 2022.07a / pp.259-262 / 2022
  • In the information age, with smartphones ubiquitous and real-time internet access available, user authentication to verify one's identity has become essential. The most common approach is password authentication using an ID and password, but such keyboard-entered credentials are inconvenient for visually impaired people, people with limited use of their hands, and the elderly, who must remember and type the IDs and passwords demanded by many services, and they are also exposed to attacks such as keyloggers. To address these problems, biometric authentication based on one's own physical characteristics has been gaining attention, and authenticating users by voice can effectively overcome the limitations of password authentication. Speaker recognition is already used in voice recognition products such as KT's GiGA Genie, but because a voice is relatively easy to forge or alter, it has lower accuracy than authentication based on fingerprints or irises and also suffers from high recognition error rates. To apply speaker recognition as a voice-based authentication technology, we trained on user voices and measured accuracy against test voices using the MFCC algorithm, which extracts frequency features from the voice. We then verified that when a malicious attacker imitates the user's voice, or obtains it by recording it with a microphone, authentication can be bypassed with high probability. Accordingly, to improve the accuracy of speaker recognition more effectively, this paper proposes a method that recognizes the speaker after mixing noise into the voice. Because the noise is reflected very sensitively in the accuracy, the proposed approach is expected to neutralize existing authentication bypass methods and provide a more effective voice-based speaker recognition technology.
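
A hedged sketch of the general idea: speaker verification by comparing MFCC statistics, with a user-specific noise mixed into the voice before feature extraction. The embedding, similarity measure, and noise model are illustrative assumptions; the paper's actual procedure is only outlined in the abstract.

```python
# Illustrative MFCC-based verification with a per-user noise signature.
import numpy as np
import librosa

def embed(y, sr=16000):
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)                      # crude utterance embedding

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
user_noise = 0.01 * rng.standard_normal(16000)    # assumed noise signature

enroll = np.random.randn(16000).astype(np.float32)  # stand-in recordings
probe  = np.random.randn(16000).astype(np.float32)

score = cosine(embed(enroll + user_noise), embed(probe + user_noise))
print("accept" if score > 0.8 else "reject", score)  # threshold is arbitrary
```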

Speech/Music Discrimination Using Spectrum Analysis and Neural Network (스펙트럼 분석과 신경망을 이용한 음성/음악 분류)

  • Keum, Ji-Soo;Lim, Sung-Kil;Lee, Hyon-Soo
    • The Journal of the Acoustical Society of Korea / v.26 no.5 / pp.207-213 / 2007
  • In this research, we propose an efficient speech/music discrimination method that uses spectrum analysis and a neural network. The proposed method extracts a duration feature parameter (MSDF) from spectral peak tracks obtained by analyzing the spectrum, and combines it with mel-frequency spectral coefficients (MFSC) as the features for the speech/music discriminator. A neural network serves as the discriminator, and we performed various experiments to evaluate the proposed method with respect to training pattern selection, training set size, and network architecture. The results show improved performance and stability over the previous method, depending on training pattern selection and model composition. When MSDF and MFSC were used together as feature parameters with more than 50 seconds of training patterns, the discrimination rate was 94.97% for speech and 92.38% for music, an improvement of 1.25% for speech and 1.69% for music compared to using MFSC alone.
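
MFSC here denotes log mel filterbank energies, i.e. MFCC without the final DCT. A minimal extraction sketch with librosa follows; the MSDF feature derived from spectral peak tracks is paper-specific and not reproduced.

```python
# Log mel filterbank energies (MFSC); parameters are illustrative.
import numpy as np
import librosa

def mfsc(y, sr=16000, n_mels=26):
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return np.log(mel + 1e-10)    # (n_mels, frames), no DCT applied

y = np.random.randn(16000).astype(np.float32)  # stand-in 1 s of audio
print(mfsc(y).shape)
```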

Performance Improvement of Cardiac Disorder Classification Based on Automatic Segmentation and Extreme Learning Machine (자동 분할과 ELM을 이용한 심장질환 분류 성능 개선)

  • Kwak, Chul;Kwon, Oh-Wook
    • The Journal of the Acoustical Society of Korea / v.28 no.1 / pp.32-43 / 2009
  • In this paper, we improve the performance of cardiac disorder classification from continuous heart sound signals by using automatic segmentation and an extreme learning machine (ELM). The accuracy of conventional cardiac disorder classification systems degrades because the murmurs and clicks contained in abnormal heart sound signals cause incorrect or missing starting points for the first (S1) and second (S2) heart pulses in the automatic segmentation stage. To reduce the performance degradation due to segmentation errors, we find the positions of the S1 and S2 pulses, correct them using the time difference between S1 or S2 pulses, and extract a single period of the heart sound signal. We then obtain a feature vector consisting of mel-scaled filter bank energy coefficients and the envelope of uniform-sized sub-segments of the single-period signal. To classify the heart disorders, we use an ELM with a single hidden layer. In classification experiments with 9 cardiac disorder categories, the proposed method achieves an accuracy of 81.6%, the highest among ELM, multi-layer perceptron (MLP), support vector machine (SVM), and hidden Markov model (HMM) classifiers.
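
An ELM of the kind used above is small enough to sketch in full: the hidden-layer weights are random and fixed, and only the output weights are solved by least squares. Feature extraction from heart sounds is omitted; X below is a stand-in feature matrix.

```python
# Minimal single-hidden-layer extreme learning machine.
import numpy as np

class ELM:
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_in, n_hidden))  # fixed random weights
        self.b = rng.standard_normal(n_hidden)
        self.beta = np.zeros((n_hidden, n_out))

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, Y):
        H = self._hidden(X)
        self.beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # output weights

    def predict(self, X):
        return self._hidden(X) @ self.beta

X = np.random.randn(200, 40)                  # e.g., mel filterbank features
Y = np.eye(9)[np.random.randint(0, 9, 200)]   # 9 one-hot disorder classes
elm = ELM(40, 100, 9)
elm.fit(X, Y)
print(elm.predict(X).argmax(axis=1)[:5])
```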

A Relevant Distortion Criterion for Interpolation of the Head-Related Transfer Functions (머리 전달 함수의 보간에 적합한 왜곡 척도)

  • Lee, Ki-Seung;Lee, Seok-Pil
    • The Journal of the Acoustical Society of Korea / v.28 no.2 / pp.85-95 / 2009
  • In binaural synthesis environments, a wide variety of head-related transfer functions (HRTFs) measured over many directions is desirable to obtain accurate spatial sound images. To reduce the size of the HRTF set, interpolation is often employed, in which the HRTF for an arbitrary direction is obtained from a limited number of representative HRTFs. In this paper, we study distortion measures for interpolation, which play an important role in its quality. Using various objective distortion metrics, we computed the differences between interpolated and measured HRTFs, then compared and analyzed them against the results of listening tests. From the results, we selected the objective distortion measures that reflected the perceptual differences in spatial sound image, and employed them in a practical interpolation technique. We applied the proposed method to four HRTF sets, measured from three human heads and one mannequin. As a result, the mel-frequency cepstral distortion proved a good predictor of differences in spatial sound location for the three HRTF sets measured from humans, while the time-domain signal-to-distortion ratio gave good predictions across all four HRTF sets.
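
A sketch of a mel-frequency cepstral distortion of the kind selected above: the mean Euclidean distance between MFCC frames of a measured and an interpolated head-related impulse response. Frame parameters are illustrative; the paper's exact settings are not given in the abstract.

```python
# Illustrative mel-frequency cepstral distortion between two HRIRs.
import numpy as np
import librosa

def mel_cepstral_distortion(h_ref, h_int, sr=44100, n_mfcc=13):
    c_ref = librosa.feature.mfcc(y=h_ref, sr=sr, n_mfcc=n_mfcc,
                                 n_fft=256, hop_length=64)
    c_int = librosa.feature.mfcc(y=h_int, sr=sr, n_mfcc=n_mfcc,
                                 n_fft=256, hop_length=64)
    return float(np.mean(np.linalg.norm(c_ref - c_int, axis=0)))

h_measured     = np.random.randn(512).astype(np.float32)   # stand-in HRIRs
h_interpolated = h_measured + 0.05 * np.random.randn(512).astype(np.float32)
print(mel_cepstral_distortion(h_measured, h_interpolated))
```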

A study on improving the performance of the machine-learning based automatic music transcription model by utilizing pitch number information (음고 개수 정보 활용을 통한 기계학습 기반 자동악보전사 모델의 성능 개선 연구)

  • Daeho Lee;Seokjin Lee
    • The Journal of the Acoustical Society of Korea / v.43 no.2 / pp.207-213 / 2024
  • In this paper, we study how to improve the performance of a machine-learning-based automatic music transcription model by adding musical information to the input data. The added information is the number of pitches occurring in each time frame, obtained by counting the notes activated in the ground-truth score. This pitch-count information is used by concatenating it to the log mel spectrogram, the input of the existing model. We use an automatic music transcription model comprising four types of blocks, each predicting a different type of musical information, and demonstrate that the simple method of adding to the existing input the pitch-count information corresponding to each block's prediction target helps train the model. To evaluate the performance improvement, we conducted experiments on the MIDI Aligned Piano Sounds (MAPS) dataset; when all the pitch-count information was used, performance improved by 9.7% in frame-based F1 score and by 21.8% in note-based F1 score including offset.
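
A sketch of the input construction described above, under the assumption that the per-frame pitch count is appended along the feature axis of the log mel spectrogram; shapes and values are illustrative.

```python
# Concatenating a per-frame pitch count to a log-mel spectrogram input.
import numpy as np

log_mel = np.random.randn(128, 100)          # (mel bins, time frames)
pitch_count = np.random.randint(0, 6, 100)   # active notes per frame (labels)

# Append the count as one extra feature row per frame.
model_input = np.concatenate([log_mel, pitch_count[None, :]], axis=0)
print(model_input.shape)   # (129, 100)
```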

Determination of High-pass Filter Frequency with Deep Learning for Ground Motion (딥러닝 기반 지반운동을 위한 하이패스 필터 주파수 결정 기법)

  • Lee, Jin Koo;Seo, JeongBeom;Jeon, SeungJin
    • Journal of the Earthquake Engineering Society of Korea / v.28 no.4 / pp.183-191 / 2024
  • Accurate seismic vulnerability assessment requires large amounts of high-quality ground motion data. Ground motion time series contain not only the seismic waves but also background noise, so it is crucial to determine the high-pass cut-off frequency used to reduce that noise. Traditional methods for determining the high-pass filter frequency rely on human inspection, such as comparing the Fourier amplitude spectra (FAS) of the noise and the signal, fitting an f² trend line, and inspecting the displacement curve after filtering. However, these methods are subject to human error and unsuitable for automation. This study used a deep learning approach to determine the high-pass filter frequency. We used the mel spectrogram for feature extraction and the mixup technique to overcome the lack of data. We selected the convolutional neural network (CNN) models ResNet, DenseNet, and EfficientNet for transfer learning, and ViT and DeiT as transformer-based models. ResNet showed the highest performance, with a coefficient of determination (R²) of 0.977 and the lowest mean absolute error (MAE) and root mean square error (RMSE), at 0.006 and 0.074, respectively. When applied to a seismic event and compared with the traditional methods, the high-pass filter frequency determined by the deep learning method differed by only 0.1 Hz, demonstrating that it can replace them. We anticipate that this study will pave the way for automating ground motion processing, allowing systems to handle large amounts of data efficiently.
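
Mixup, the augmentation named above, blends pairs of inputs and targets with a Beta-distributed weight. A minimal sketch, assuming mel-spectrogram inputs and scalar cut-off-frequency targets:

```python
# Minimal mixup: convex combination of two examples and their targets.
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing weight ~ Beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

a = np.random.randn(64, 128)   # mel spectrogram of record A (stand-in)
b = np.random.randn(64, 128)   # mel spectrogram of record B (stand-in)
x_mix, y_mix = mixup(a, 0.08, b, 0.12)   # targets: cut-offs in Hz (assumed)
print(x_mix.shape, y_mix)
```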

Harnessing the Power of Voice: A Deep Neural Network Model for Alzheimer's Disease Detection

  • Chan-Young Park;Minsoo Kim;YongSoo Shim;Nayoung Ryoo;Hyunjoo Choi;Ho Tae Jeong;Gihyun Yun;Hunboc Lee;Hyungryul Kim;SangYun Kim;Young Chul Youn
    • Dementia and Neurocognitive Disorders / v.23 no.1 / pp.1-10 / 2024
  • Background and Purpose: Voice, reflecting cerebral function, holds potential for analyzing and understanding brain function, especially in the context of cognitive impairment (CI) and Alzheimer's disease (AD). This study used voice data to distinguish normal cognition from CI or Alzheimer's disease dementia (ADD). Methods: The study enrolled 3 groups of subjects: 1) 52 subjects with subjective cognitive decline; 2) 110 subjects with mild CI; and 3) 59 subjects with ADD. Voice features were extracted using mel-frequency cepstral coefficients and chroma. Results: A deep neural network (DNN) model showed promising performance, with an accuracy of roughly 81% across 10 trials in predicting ADD, which rose to an average of about 82.0%±1.6% when evaluated on an unseen test dataset. Conclusions: Although the results do not reach the accuracy required of a definitive clinical tool, they provide a compelling proof of concept for using voice data in cognitive status assessment. DNN algorithms using voice offer a promising approach to early detection of AD and could improve the accuracy and accessibility of diagnosis, ultimately leading to better outcomes for patients.
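
A sketch of the feature extraction named in the Methods: MFCC and chroma features computed with librosa and mean-pooled into a single vector for a DNN classifier. The pooling strategy and dimensions are assumptions, not the study's protocol.

```python
# Illustrative MFCC + chroma feature vector for a voice recording.
import numpy as np
import librosa

def voice_features(y, sr=16000):
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr).mean(axis=1)
    return np.concatenate([mfcc, chroma])   # 13 + 12 = 25-dim vector

y = np.random.randn(5 * 16000).astype(np.float32)  # stand-in 5 s recording
print(voice_features(y).shape)
```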