• Title/Summary/Keyword: Sound Classification (소리 분류)


System Realization of Whale Sound Reconstruction (고래 사운드 재생 시스템 구현)

  • Chong, Ui-Pil;Jeon, Seo-Yun;Hong, Jeong-Pil
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.20 no.3
    • /
    • pp.145-150
    • /
    • 2019
  • We developed a whale-sound reconstruction system based on an inverse MFCC algorithm with weighted L2-norm minimization. The output of this research is expected to contribute to whale tourism and the multimedia content industry by combining whale sound content with a 3D-printed prototype. We first developed the software for generating whale sounds, installed it on Raspberry Pi hardware, and mounted the board inside a 3D-printed whale. The languages used in developing this system were C++ for whale-sound classification, MATLAB and Python for the whale-sound playback algorithm, and Rhino 6 for 3D printing.
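The core of inverse MFCC is undoing the mel filterbank, which is a least-squares (L2-norm minimization) problem. Below is a minimal numpy sketch of that single step with a simplified triangular filterbank of my own construction; the paper's weighted L2-norm formulation and full reconstruction chain are more elaborate.

```python
import numpy as np

def mel_filterbank(n_mels, n_fft, sr):
    """Simplified triangular mel filterbank (no area normalization)."""
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mels = np.linspace(0.0, hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        lo, ctr, hi = bins[i], bins[i + 1], bins[i + 2]
        for k in range(lo, ctr):
            fb[i, k] = (k - lo) / max(ctr - lo, 1)   # rising slope
        for k in range(ctr, hi):
            fb[i, k] = (hi - k) / max(hi - ctr, 1)   # falling slope
    return fb

sr, n_fft, n_mels = 16000, 512, 40
np.random.seed(0)
F = mel_filterbank(n_mels, n_fft, sr)
s_true = np.abs(np.random.randn(n_fft // 2 + 1))   # a "true" magnitude spectrum
m = F @ s_true                                     # forward: mel energies
s_ls, *_ = np.linalg.lstsq(F, m, rcond=None)       # least-squares inverse
s_est = np.clip(s_ls, 0.0, None)                   # spectra are non-negative
```

Because the system is underdetermined, `lstsq` returns the minimum-norm spectrum consistent with the mel energies; a weighted norm, as in the paper, changes which solution is picked.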

Snoring sound detection method using attention-based convolutional bidirectional gated recurrent unit (주의집중 기반의 합성곱 양방향 게이트 순환 유닛을 이용한 코골이 소리 검출 방식)

  • Kim, Min-Soo;Lee, Gi Yong;Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea
    • /
    • v.40 no.2
    • /
    • pp.155-160
    • /
    • 2021
  • This paper proposes an automatic method for detecting snoring sounds, one of the important symptoms of sleep apnea. In the proposed method, sound signals recorded during sleep are first scanned for sound-activity sections, and the spectrogram of each detected section is applied to a classifier based on a Convolutional Bidirectional Gated Recurrent Unit (CBGRU) with an attention mechanism. The attention mechanism improves snoring detection performance by extending the CBGRU model to learn discriminative feature representations for the task. The experimental results show that the proposed method improves accuracy by approximately 3.1 % to 5.5 % over the existing method.
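As a hedged illustration (not the authors' code), the attention step in such a model typically scores each recurrent output frame and pools a weighted summary; a minimal numpy sketch with random stand-in values:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

np.random.seed(0)
T, D = 50, 8                    # frames x feature dimension
H = np.random.randn(T, D)       # stand-in for CBGRU frame outputs
w = np.random.randn(D)          # stand-in for a learned attention query

scores = H @ w                  # one relevance score per frame
alpha = softmax(scores)         # attention weights over frames, sum to 1
context = alpha @ H             # weighted summary fed to the classifier
```

Frames with high scores dominate `context`, which is what lets the network focus on snore-like frames.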

Differences in the Soundscape Characteristics of a Natural Park and an Urban Park (자연공원과 도시공원의 Soundscape 특성 차이)

  • Gim, Ji-youn;Lee, Jae-Yoon;Ki, Kyong-Seok
    • Korean Journal of Environment and Ecology
    • /
    • v.31 no.1
    • /
    • pp.112-118
    • /
    • 2017
  • The purpose of this study is to clarify the characteristics of the soundscape in a natural park and an urban park. The study sites were a natural park (Chiaksan National Park) and an urban park (Rose Park) in Wonju City, Gangwon Province. Soundscape recording was conducted with a digital recorder from April 2015 to January 2016. The analysis period was 8 days per season, for a total of 64 days across the two sites. The analysis items were the soundscape's daily cycle, soundscape type, and seasonal variation. According to the daily-cycle analysis, the natural park was dominated by biophony following the cycle of the sun, with airplane sound observed in the daytime, whereas anthrophony was produced in the urban park consistently, 24 hours a day. In the detailed type analysis, the sources of biophony were classified into wild birds, mammals, insects, and amphibians, and the sources of geophony into rain and wind; the anthrophony was mostly airplane sound. In the urban park, wild birds contributed most to biophony, while rain and wind were the most frequent sources of geophony. The most influential components of anthrophony in the urban park were, in order, automobiles, people, music, construction, cleaning, and airplane sound. The seasonal analysis showed, with statistical significance, that the natural park had higher biophony than the urban park in spring, summer, and autumn, while anthrophony in the urban park was higher than in the natural park in all seasons. The significance of this study is that it is the first to identify the soundscape characteristics of a natural park and an urban park with different landscapes in South Korea.

Development of an Intelligent Sound-Location Visualization Control System for Hearing-Impaired PM Users (청각 장애인 PM 이용자를 위한 소리 위치 시각화 지능형 제어 시스템 개발)

  • Yong-Hyeon Jo;Jin Young Choi
    • Convergence Security Journal
    • /
    • v.22 no.2
    • /
    • pp.105-114
    • /
    • 2022
  • This paper presents an intelligent control system that visualizes the direction of arrival of sound for hearing-impaired users of personal mobility (PM) devices, and aims to recognize and prevent dangerous situations signaled by sounds such as alarms and crack sounds on the road. The sound-source position is estimated by a machine-learning classifier whose features are Generalized Cross-Correlation with Phase Transform (GCC-PHAT) values based on the time difference of arrival. In an experimental environment reproducing road conditions, four classification models trained on data collected at wind speeds of 0 km/h, 5.8 km/h, 14.2 km/h, and 26.4 km/h were compared with grid-search cross-validation, and the best-performing model, a Multi-Layer Perceptron (MLP), was selected as the optimal algorithm. Under wind, the proposed algorithm showed an average performance improvement of 7.6-11.5 % over previous studies.
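The time-difference-of-arrival feature behind this kind of system is commonly computed with GCC-PHAT. The sketch below is my own minimal implementation, not the paper's; it recovers a known delay between two microphone signals, and a classifier such as the MLP mentioned above would then map such features to a direction class.

```python
import numpy as np

def gcc_phat(x, y, fs):
    """Estimate the delay of y relative to x (positive: y lags x)."""
    n = len(x) + len(y)
    X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
    R = X * np.conj(Y)
    R /= np.maximum(np.abs(R), 1e-12)      # PHAT: keep phase, whiten magnitude
    cc = np.fft.irfft(R, n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return -shift / fs

fs = 16000
np.random.seed(1)
sig = np.random.randn(fs)                  # 1 s of a noise-like source
delay = 40                                 # samples (2.5 ms)
x = sig
y = np.concatenate((np.zeros(delay), sig[:-delay]))
tau = gcc_phat(x, y, fs)                   # close to delay / fs
```

The magnitude whitening is what makes the correlation peak sharp and relatively robust to broadband interference such as wind noise.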

Classification System for Emotional Verbs and Adjectives (감정동사 및 감정형용사 분류에 관한 연구)

  • 장효진
    • Proceedings of the Korean Society for Information Management Conference
    • /
    • 2001.08a
    • /
    • pp.29-34
    • /
    • 2001
  • Indexing and retrieval of visual and sound materials require emotion vocabulary such as emotional verbs and emotional adjectives. However, emotion vocabulary carries subtle nuances, so it cannot be organized systematically without a clear classification scheme. This study therefore reviewed classification schemes from Korean linguistics and classified dictionaries, investigated a new way to classify emotion vocabulary, and proposed six basic emotional types: joy, sadness, surprise, fear, disgust, and anger.


Analysis of Acoustic Psychology of City Traffic and Nature Sounds (도심 교통음과 자연의 소리에 대한 음향심리 분석)

  • Kyon, Doo-Heon;Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.4
    • /
    • pp.356-362
    • /
    • 2009
  • In modern society, most of the world's population is concentrated in cities, so traffic sound carries very significant meaning. People tend to classify traffic sound as noise pollution, while they perceive most natural sounds as positive. In this paper, we applied various forms of FFT filters to white noise, a component of natural sound, to find the frequency characteristics of white noise preferred by people and to confirm its correlation with natural sound. In addition, we conducted a comparative analysis of the waveforms and spectra of various traffic and natural sounds. The analysis showed that traffic sounds concentrate their energy in specific frequency bands and time instants compared to natural sounds, and we confirmed that these characteristics have elements that can affect people negatively. Lastly, in an EEG experiment in which subjects listened directly to both traffic and natural sounds, we measured the energy distribution of alpha and beta waves. The experiments showed that urban sound produced a noticeably larger amount of beta waves than natural sound, whereas natural sound generated positive alpha waves. These results directly confirm the negative effects of traffic sound and the positive effects of natural sound.
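As a hedged sketch of the kind of FFT filtering described (my own minimal version, not the authors' filters), an ideal band-pass can be applied to white noise by zeroing FFT bins outside the band:

```python
import numpy as np

def fft_bandpass(x, fs, lo, hi):
    """Ideal band-pass: zero FFT bins outside [lo, hi] Hz."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[(f < lo) | (f > hi)] = 0.0
    return np.fft.irfft(X, len(x))

np.random.seed(0)
fs = 16000
x = np.random.randn(fs)                  # 1 s of white noise
y = fft_bandpass(x, fs, 200.0, 2000.0)   # keep only 200-2000 Hz
```

Listening tests like those in the paper would then compare such shaped-noise variants against recordings of natural sound.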

Active Slope Weighted-Constraints Based DTW Algorithm for Environmental Sound Recognition System (능동형 기울기 가중치 제약에 기반한 환경소리 인식시스템용 DTW 알고리듬)

  • Jung, Young-Jin;Lee, Yun-Jung;Kim, Pil-Un;Kim, Myoung-Nam
    • Journal of Korea Multimedia Society
    • /
    • v.11 no.4
    • /
    • pp.471-480
    • /
    • 2008
  • Deaf people cannot perceive useful sound information such as alarms, doorbells, sirens, car horns, and phone rings because of their hearing impairment. To address this problem, portable hearing-assistive devices with suitable environmental-sound recognition methods are needed. In this paper, a DTW algorithm with a new active slope-weighted constraint method is proposed for a sound recognition system. The environmental-sound recognition method consists of three processes: first, extraction of the start and end points using the frequency and amplitude of the sound; second, feature extraction; and third, classification of the features for the given segments. In the experiment, the recognition rate of the proposed method was over 90 %, about 20 % higher than that of the conventional algorithm. Portable assistive devices that use the proposed method to recognize environmental sounds could therefore make daily life more convenient for hearing-impaired persons.
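The DTW core is easy to sketch. The following toy version uses fixed step weights; the paper's active slope weighting adapts these weights along the path, which is not reproduced here.

```python
import numpy as np

def dtw(a, b, w_diag=1.0, w_step=1.0):
    """Plain DTW distance between 1-D sequences; w_diag/w_step weight
    diagonal vs. horizontal/vertical steps (a fixed-weight stand-in
    for the paper's active slope weighting)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local frame distance
            D[i, j] = min(D[i - 1, j - 1] + w_diag * d,
                          D[i - 1, j] + w_step * d,
                          D[i, j - 1] + w_step * d)
    return D[n, m]

x = np.array([0.0, 1.0, 2.0, 1.0, 0.0])       # toy feature sequence
y = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0])  # same shape, time-warped
```

`dtw(x, y)` is 0 here because the warping path can absorb the extra leading frame, which is exactly why DTW suits sounds of varying duration.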


Temporal attention based animal sound classification (시간 축 주의집중 기반 동물 울음소리 분류)

  • Kim, Jungmin;Lee, Younglo;Kim, Donghyeon;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.39 no.5
    • /
    • pp.406-413
    • /
    • 2020
  • In this paper, to improve the classification accuracy of bird and amphibian sounds, we utilize a GLU (Gated Linear Unit) and self-attention, which encourage the network to extract important features from the data and to discriminate the relevant frames among all input sequences for further performance improvement. To use the acoustic data, we convert the 1-D signal to a log-Mel spectrogram. Undesirable components, such as background noise in the log-Mel spectrogram, are then reduced by the GLU, after which the proposed temporal self-attention is employed to improve classification accuracy. The data consist of 6 species of birds and 8 species of amphibians, including endangered species, recorded in the natural environment. Our proposed method achieves an accuracy of 91 % on the bird data and 93 % on the amphibian data, an overall improvement of about 6 % to 7 % over existing algorithms.
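The GLU used here for noise suppression is simple to state: a linear projection multiplied elementwise by a sigmoid gate. A minimal numpy sketch with made-up weights (the real model learns W, V, b, c):

```python
import numpy as np

def glu(x, W, V, b, c):
    """Gated Linear Unit: a linear path modulated elementwise by a
    sigmoid gate, which can suppress noisy feature channels."""
    gate = 1.0 / (1.0 + np.exp(-(x @ V + c)))   # gate values in (0, 1)
    return (x @ W + b) * gate

np.random.seed(0)
x = np.random.randn(4, 16)                 # 4 frames of 16 log-Mel-like features
W, V = np.random.randn(16, 8), np.random.randn(16, 8)
b, c = np.zeros(8), np.zeros(8)
y = glu(x, W, V, b, c)                     # gated 8-dim features per frame
```

Channels whose gate saturates near zero are effectively removed, which is the mechanism the paper relies on for background-noise reduction.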

Classification of Infant Crying Audio based on 3D Feature-Vector through Audio Data Augmentation

  • JeongHyeon Park;JunHyeok Go;SiUng Kim;Nammee Moon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.9
    • /
    • pp.47-54
    • /
    • 2023
  • Infants use crying as a non-verbal means of communication [1], but deciphering infant cries presents challenges, and extensive research has been conducted to interpret infant cry audio [2,3]. This paper proposes classifying infant cries using 3D feature vectors together with various audio data augmentation techniques. The study dataset contains five classes (belly pain, burping, discomfort, hungry, tired), and the data are augmented with five techniques (Pitch, Tempo, Shift, Mixup-noise, CutMix). The Tempo, Shift, and CutMix augmentation techniques improved performance, and applying the effective augmentation techniques simultaneously yielded a 17.75 % performance improvement over models using single feature vectors and the original data.
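Some of the augmentations listed (Shift, Mixup, CutMix) are straightforward waveform operations; a hedged numpy sketch of those three (Pitch and Tempo require resampling/time-stretching and are omitted):

```python
import numpy as np

def time_shift(x, n):
    """Roll the waveform by n samples (circular shift)."""
    return np.roll(x, n)

def mixup(x1, x2, lam):
    """Convex combination of two waveforms (labels are mixed the same way)."""
    return lam * x1 + (1.0 - lam) * x2

def cutmix(x1, x2, start, length):
    """Replace a segment of x1 with the same segment of x2."""
    out = x1.copy()
    out[start:start + length] = x2[start:start + length]
    return out

np.random.seed(0)
a = np.random.randn(16000)          # 1 s of audio at 16 kHz (stand-in)
b = np.random.randn(16000)
aug1 = time_shift(a, 800)           # shift by 50 ms
aug2 = mixup(a, b, 0.7)
aug3 = cutmix(a, b, 4000, 2000)
```

In practice these transforms are applied to the training set only, and the mixed-label bookkeeping for Mixup/CutMix follows the mixing ratio.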

Jet-Edge Interaction and Sound Radiation in Edgetones (쐐기소리에서 분류-쐐기의 상호작용과 소리의 방사)

  • ;Powell A.
    • Transactions of the Korean Society of Mechanical Engineers
    • /
    • v.18 no.3
    • /
    • pp.584-590
    • /
    • 1994
  • A theoretical model has been developed to analyze the jet-edge interaction and the resulting sound radiation. The edge responding to the sinuous impinging jet is regarded as an array of dipoles whose strength is determined by the boundary condition on the edge surface. The surface pressure distribution and the edge force are estimated from these dipoles, and the pressure amplitude and directivity of the sound field are then obtained by summing the sound radiated from the dipole sources. It is found that the effective source is located a small distance downstream from the edge tip, and that the directivity of the sound radiation follows a cardioid pattern near the edge but a dipole pattern far from it. The theoretical model is confirmed by comparing the predicted edge force and sound pressure level with available experimental data.
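The directivity statement can be written compactly. The forms below are the standard cardioid and dipole patterns, not formulas taken from the paper, with θ measured from the jet axis:

```latex
% Near the edge: cardioid directivity
|p_{\text{near}}(\theta)| \propto 1 + \cos\theta
% Far from the edge: dipole directivity
|p_{\text{far}}(\theta)| \propto |\cos\theta|
```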