• Title/Summary/Keyword: 소리 분류 (sound classification)

Search results: 172

Classification of Whale Sounds using LPC and Neural Networks (신경망과 LPC 계수를 이용한 고래 소리의 분류)

  • An, Woo-Jin;Lee, Eung-Jae;Kim, Nam-Gyu;Chong, Ui-Pil
    • Journal of the Institute of Convergence Signal Processing, v.18 no.2, pp.43-48, 2017
  • Underwater transient signals are complex, time-varying, nonlinear, and of short duration, which makes them very hard to model with reference patterns. In this paper we divide the full-length signal into short, constant-length segments with frame-by-frame overlap. The 20th-order LPC (Linear Predictive Coding) coefficients are extracted from the original signals using the Durbin algorithm and fed to a neural network. 65% of the signals were used for training and 35% for testing in a neural network with two hidden layers. The whale types classified are the Blue whale, Dulsae whale, Gray whale, Humpback whale, Minke whale, and Northern Right whale. We obtained a classification rate of more than 83% on the test signals.
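The feature-extraction step described in this abstract, overlapped framing followed by Levinson-Durbin LPC, can be sketched in plain numpy. The order-20 setting follows the abstract; the frame length, hop, and everything else here are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def frame_signal(x, frame_len=400, hop=200):
    """Split a signal into overlapping, constant-length frames."""
    n = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def lpc(frame, order=20):
    """LPC coefficients of one frame via the Levinson-Durbin recursion."""
    # Autocorrelation r[0..order] of the frame
    r = np.array([frame[: len(frame) - k] @ frame[k:] for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1 : 0 : -1])
        k = -acc / err
        a_new = a.copy()
        a_new[1:i] = a[1:i] + k * a[i - 1 : 0 : -1]
        a_new[i] = k
        a = a_new
        err *= 1.0 - k * k
    return a[1:]  # the 'order' predictor coefficients fed to the network
```

As a sanity check, the order-2 coefficients estimated from a synthetic AR(2) process approximately recover the generating filter.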

CNN-based Automatic Machine Fault Diagnosis Method Using Spectrogram Images (스펙트로그램 이미지를 이용한 CNN 기반 자동화 기계 고장 진단 기법)

  • Kang, Kyung-Won;Lee, Kyeong-Min
    • Journal of the Institute of Convergence Signal Processing, v.21 no.3, pp.121-126, 2020
  • Sound-based machine fault diagnosis automatically detects abnormal sounds in a machine's acoustic emission signals. Conventional methods based on mathematical models struggle to diagnose machine failure because of the complexity of industrial machinery and nonlinear factors such as noise. We therefore recast machine fault diagnosis as a deep-learning-based image classification problem. In this paper, we propose a CNN-based automatic machine fault diagnosis method using spectrogram images. The proposed method uses the STFT to effectively extract feature vectors from the frequencies generated by machine defects; the extracted features are converted into spectrogram images and classified by machine status with a CNN. The results show that the proposed method can be used effectively not only to detect defects but also in various sound-based automatic diagnosis systems.
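The STFT-to-spectrogram-image conversion this abstract describes can be sketched with numpy alone; the window length and hop below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def spectrogram_db(x, n_fft=256, hop=128):
    """Log-magnitude spectrogram of a 1-D signal via a Hann-windowed STFT.

    Returns an array of shape (n_frames, n_fft // 2 + 1) that can be
    rescaled to [0, 255] and saved as the image fed to a CNN.
    """
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop : i * hop + n_fft] * win for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))
    return 20.0 * np.log10(mag + 1e-10)  # dB scale
```

For a pure tone, the time-averaged spectrum peaks at the bin nearest the tone's frequency, which is a quick way to verify the axis orientation before saving images.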

A Study on the Classification of Fault Motors using Sound Data (소리 데이터를 이용한 불량 모터 분류에 관한 연구)

  • Chang, Il-Sik;Park, Gooman
    • Journal of Broadcast Engineering, v.27 no.6, pp.885-896, 2022
  • Motor failure in manufacturing plays an important role in future after-sales service and reliability. Motor failure is detected by measuring sound, current, and vibration. The data used in this paper are sounds from a car side-mirror motor gearbox, divided into three classes. The sound data are converted to Mel-spectrograms and fed to the network models. To improve fault classification performance we applied data augmentation, and we addressed class imbalance with several methods: resampling, reweighting, changes to the loss function, and a two-stage scheme of representation learning followed by classification. In addition, curriculum learning and self-paced learning were compared across five network models (Bidirectional LSTM Attention, Convolutional Recurrent Neural Network, Multi-Head Attention, Bidirectional Temporal Convolution Network, and Convolutional Neural Network) to find the optimal configuration for motor sound classification.
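Of the imbalance remedies this abstract lists, reweighting is the simplest to sketch. Inverse-frequency class weights are one common choice (assumed here for illustration, not taken from the paper): each sample's loss is multiplied by its class weight so minority classes such as fault sounds contribute as much to training as the majority class.

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Per-class loss weights so every class contributes equally in total."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * cnt) for cls, cnt in counts.items()}
```

By construction, weight times class count is the same (n/k) for every class.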

Multi-Modal Scheme for Music Mood Classification (멀티 모달 음악 무드 분류 기법)

  • Choi, Hong-Gu;Jun, Sang-Hoon;Hwang, Een-Jun
    • Proceedings of the Korean Information Science Society Conference, 2011.06a, pp.259-262, 2011
  • Recently, research on music mood classification based on audio features such as loudness, harmony, tempo, and rhythm has been active. In this paper, to improve classification accuracy, we propose a multi-modal music mood classification scheme that considers song lyrics and user evaluations on social networks together with the audio features. First, a fuzzy-inference-based mood extraction scheme is applied to the audio features to obtain a set of candidate moods. Next, TF-IDF is applied to the lyrics to extract representative emotion keywords, and a trained lyric mood classifier derives a lyric-based mood. Finally, a mood is extracted from user feedback such as social-network tags. For a given piece of music, the moods obtained through these different channels are cross-analyzed to determine the final mood. The effectiveness of the proposed scheme is validated through a user satisfaction experiment on automatic music recommendation based on the classification.
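The TF-IDF step for extracting representative keywords from lyrics can be sketched in plain Python. The toy corpus and the unsmoothed idf formula below are assumptions for illustration, not the authors' exact setup:

```python
import math
from collections import Counter

def tfidf_scores(docs):
    """Per-document TF-IDF scores; the top-scoring terms serve as keywords."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))  # document frequency
    scores = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        scores.append({t: (c / total) * math.log(n / df[t]) for t, c in tf.items()})
    return scores
```

A term that appears in every document gets an idf of zero, so ubiquitous words drop out and document-specific emotion words rise to the top.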

A COVID-19 Diagnosis Model based on Various Transformations of Cough Sounds (기침 소리의 다양한 변환을 통한 코로나19 진단 모델)

  • Minkyung Kim;Gunwoo Kim;Keunho Choi
    • Journal of Intelligence and Information Systems, v.29 no.3, pp.57-78, 2023
  • COVID-19, which started in Wuhan, China in November 2019, spread beyond China in 2020 and worldwide by March 2020. For a highly contagious virus like COVID-19, prevention and active treatment of confirmed cases are important, but quickly identifying confirmed cases and preventing their spread matters even more. However, PCR testing is costly and time-consuming, and while self-test kits are easy to access, their recurring cost is a burden. If COVID-19 positivity could be determined from the sound of a cough, anyone could easily check their status anytime, anywhere, with great economic advantages. In this study we experimented with identifying COVID-19 infection from cough sounds. Cough features were extracted with MFCC, Mel-spectrogram, and spectral contrast. To ensure sound quality, noisy data were removed using SNR, and only the cough segments were extracted from each audio file by chunking. Since the objective is binary positive/negative classification, we trained XGBoost, LightGBM, and FCNN, algorithms commonly used for classification, and compared the results. We also compared model performance using multidimensional vectors obtained by converting the cough sounds into both images and vectors. The LightGBM model, using basic health-status information together with cough sounds converted into multidimensional vectors via MFCC, Mel-spectrogram, spectral contrast, and spectrogram, achieved the highest accuracy of 0.74.
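The chunking step, keeping only the cough segments of a recording, can be approximated with a simple frame-energy gate; the frame size and threshold below are illustrative assumptions rather than the paper's method:

```python
import numpy as np

def active_chunks(x, frame=512, thresh=0.1):
    """Return (start, end) sample ranges whose RMS exceeds thresh * peak RMS."""
    n = len(x) // frame
    rms = np.sqrt(np.mean(x[: n * frame].reshape(n, frame) ** 2, axis=1))
    mask = rms > thresh * rms.max()
    chunks, start = [], None
    for i, on in enumerate(mask):
        if on and start is None:
            start = i                                 # chunk begins
        elif not on and start is not None:
            chunks.append((start * frame, i * frame))  # chunk ends
            start = None
    if start is not None:
        chunks.append((start * frame, n * frame))
    return chunks
```

Feature extraction (MFCC, Mel-spectrogram, spectral contrast) would then run only on the returned ranges instead of the whole file.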

Digital Video Record System for Classification of Car Accident Sounds in the Parking Lot. (주차장 차량사고 음향분류 DVR시스템)

  • Yoon, Jae-Min
    • Proceedings of the Korean Information Science Society Conference, 2010.06c, pp.429-432, 2010
  • Various incidents and accidents occur in parking lots, but conventional DVR (CCTV) systems support only simple video recording and are of limited use for analyzing them. If the type of sound accompanying the video captured through the DVR's camera and microphone could be identified, and the video segments in which specific sounds occur could be stored and searched, parking-lot managers could respond to incidents effectively. In this study we propose effective feature vectors for classifying vehicle-related sounds in parking lots (collision, speeding, horn, glass breaking, and screams), and design and evaluate a neural-network vehicle-sound classifier using them. We also integrate the classifier with a DVR system that analyzes the microphone signal in real time and records the video segments in which specific sounds occur, so that the corresponding accident footage can be retrieved and displayed by acoustic keyword.
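The retrieval side of such a system, indexing recorded video segments by the classifier's sound label so footage can later be searched by acoustic keyword, might look like the following minimal sketch (the label names follow the abstract; the class and its interface are assumptions):

```python
from collections import defaultdict

class SoundEventIndex:
    """Maps sound labels (e.g. 'collision', 'horn') to recorded time ranges."""

    def __init__(self):
        self._events = defaultdict(list)

    def record(self, label, start_s, end_s):
        """Log one detected event as a (start, end) range in seconds."""
        self._events[label].append((start_s, end_s))

    def search(self, label):
        """Return all ranges for a label, in chronological order."""
        return sorted(self._events[label])
```

In the full system, `record` would be called whenever the real-time classifier fires, and `search` would drive playback of the matching video segments.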

Classification of bearded seals signal based on convolutional neural network (Convolutional neural network 기법을 이용한 턱수염물범 신호 판별)

  • Kim, Ji Seop;Yoon, Young Geul;Han, Dong-Gyun;La, Hyoung Sul;Choi, Jee Woong
    • The Journal of the Acoustical Society of Korea, v.41 no.2, pp.235-241, 2022
  • Several studies have used Convolutional Neural Networks (CNNs) to detect and classify the sounds of marine mammals in underwater acoustic data collected through passive acoustic monitoring. In this study, the feasibility of automatically classifying bearded seal sounds was confirmed using a CNN model on underwater acoustic spectrogram images collected from August 2017 to August 2018 in the East Siberian Sea. When only clear seal sounds were used as the training dataset, overfitting due to memorization occurred. By replacing some of the training data with data containing noise and re-evaluating on the full dataset, it was confirmed that overfitting was prevented and the model generalized better than before, with accuracy 0.9743, precision 0.9783, and recall 0.9520. In conclusion, the performance of the bearded seal signal classification model improved when noise was included in the training data.
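The accuracy, precision, and recall figures quoted above follow from the standard binary confusion-matrix definitions; a quick sketch (the counts in the check below are made up for illustration, not the paper's data):

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy, precision, and recall from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)   # of predicted seal calls, how many were real
    recall = tp / (tp + fn)      # of real seal calls, how many were found
    return accuracy, precision, recall
```

Precision above recall, as reported in the abstract, indicates the model misses some calls but rarely raises false alarms.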

The Effect of Emotional Sounds on Multiple Target Search (정서적인 소리가 다중 목표 자극 탐색에 미치는 영향)

  • Kim, Hannah;Han, Kwang Hee
    • Korean Journal of Cognitive Science, v.26 no.3, pp.301-322, 2015
  • This study examined the effect of emotional sounds on satisfaction of search (SOS). SOS occurs when detection of a target results in a lesser chance of finding subsequent targets when searching for an unknown number of targets. Previous studies have examined factors that may influence the phenomenon, but the effect of emotional sounds is yet to be identified. Therefore, the current study investigated how emotional sound affects magnitude of the SOS effect. In addition, participants' eye movements were recorded to determine the source of SOS errors. The search display included abstract T and L-shaped items on a cloudy background and positive and negative sounds. Results demonstrated that negative sounds produced the largest SOS effect by definition, but this was due to superior accuracy in low-salient single target trials. Response time, which represents efficiency, was consistently faster when negative sounds were provided, in all target conditions. On-target fixation classification revealed scanning error, which occurs because targets are not fixated, as the most prominent type of error. These results imply that the two dimensions of emotion - valence and arousal - interactively affect cognitive performance.

Classification of Apparel Fabrics according to Rustling Sounds and Their Transformed Colors

  • Park, Kye-Youn;Kim, Chun-Jeong;Chung, Hye-Jin;Cho, Gil-Soo
    • Science of Emotion and Sensibility, v.5 no.2, pp.23-28, 2002
  • The purpose of this study was to classify apparel fabrics according to their rustling sounds and to analyze the transformed colors and mechanical properties of each group. The rustling sounds of apparel fabrics were recorded and then transformed into colors using Mori's color-transforming program. The specimens were clustered into five groups according to sound properties, named 'Silky', 'Crispy', 'Paper-like', 'Worsted', and 'Flaxy'. The Silky group, consisting of smooth and soft silk fabrics, had the lowest values of LPT, Δf, ARC, loudness(B), and sharpness(z); its transformed colors showed a large red portion and many color counts. The Crispy group, with crepe fabrics, showed relatively low loudness(z) and sharpness(B), but diverse colors and color counts appeared. The Paper-like group showed the highest values of LPT, Δf, and loudness(z). The Worsted group, composed of wool and wool-like fabrics, showed high values of LPT, Δf, loudness(z), and sharpness(B). The transformed colors of the Paper-like and Worsted groups were mostly blue, with fewer color counts than the other groups. The Flaxy group, with rugged flax fabrics, had the highest fluctuation strength, and its transformed colors were diverse.