• Title/Summary/Keyword: Sound data

Optimal Thoracic Sound Data Extraction Using Principal Component Analysis (주성분 분석을 이용한 최적 흉부음 데이터 검출)

  • 임선희;박기영;최규훈;박강서;김종교
    • Proceedings of the IEEK Conference / 2003.07e / pp.2156-2159 / 2003
  • Thoracic (chest) sounds are widely recognized as a useful means of examining thoracic disease. However, recordings from the same patient vary with the chest position at which they are measured, which makes it difficult to diagnose from the raw data. It is therefore necessary to normalize lung sound data objectively. In this paper, we extract data useful for medical examination by applying PCA (Principal Component Analysis) to thoracic sound data, and present objective lung and heart sound data for thoracic disease.

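As a rough illustration of the approach described in this abstract, the sketch below applies PCA to framed chest-sound data with scikit-learn. The file name, frame length, and number of components are assumptions for illustration, not values from the paper.

```python
# Minimal sketch: applying PCA to framed thoracic (chest) sound data.
# Assumptions (not from the paper): a mono WAV file, 50 ms frames,
# and 8 principal components.
import numpy as np
from scipy.io import wavfile
from sklearn.decomposition import PCA

rate, signal = wavfile.read("chest_recording.wav")    # hypothetical file
if signal.ndim > 1:
    signal = signal[:, 0]                             # keep one channel
signal = signal.astype(np.float64)
signal /= np.max(np.abs(signal)) + 1e-12              # normalize amplitude

frame_len = int(0.05 * rate)                          # 50 ms frames
n_frames = len(signal) // frame_len
frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)

pca = PCA(n_components=8)
scores = pca.fit_transform(frames)                    # low-dimensional representation
print("explained variance ratio:", pca.explained_variance_ratio_)
```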

Method for 3D Visualization of Sound Data (사운드 데이터의 3D 시각화 방법)

  • Ko, Jae-Hyuk
    • Journal of Digital Convergence / v.14 no.7 / pp.331-337 / 2016
  • The purpose of this study is to provide a method for visualizing sound data as a three-dimensional image. The visualization is performed according to a fixed algorithm after producing a text-based script that defines the channel range of the sound data. The algorithm consists of five steps: setting the sound channel range, setting the picture frame for sound visualization, setting the properties of the 3D image units, extracting the channel range of the sound data, and performing the sound visualization. The 3D visualization is carried out with minimal operation signals from an input device such as a mouse. For sound files too large for an animator to process in the usual way, the proposed 3D visualization method was shown to be a low-cost, highly efficient way of producing creative artistic images, based on a comparison of the animator's working time with that of the presented method. Future research will address real-time visualization of sound data through a rendering process in a game engine.
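
Purely to illustrate the general idea of mapping sound data to 3D image units, the sketch below converts per-frame amplitude of an audio channel into the heights of bars in a matplotlib 3D plot. The file name, frame size, and amplitude-to-height mapping are assumptions; the paper's script-driven animation pipeline is not reproduced.

```python
# Illustrative sketch only: map per-frame RMS amplitude of an audio channel
# to the height of bars in a 3D plot (matplotlib >= 3.2).
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile

rate, data = wavfile.read("input_sound.wav")          # hypothetical file
channel = data[:, 0] if data.ndim > 1 else data       # first channel
channel = channel.astype(np.float64)

frame = 2048
n = len(channel) // frame
rms = np.sqrt(np.mean(channel[: n * frame].reshape(n, frame) ** 2, axis=1))

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
x = np.arange(n)                                      # frame index
y = np.zeros(n)                                       # single row of bars
ax.bar3d(x, y, np.zeros(n), dx=0.8, dy=0.8, dz=rms)   # amplitude -> bar height
ax.set_xlabel("frame")
ax.set_zlabel("RMS amplitude")
plt.show()
```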

Sound System Analysis for Health Smart Home

  • CASTELLI Eric;ISTRATE Dan;NGUYEN Cong-Phuong
    • Proceedings of the IEEK Conference / summer / pp.237-243 / 2004
  • A multichannel smart sound sensor capable of detecting and identifying sound events in noisy conditions is presented in this paper. Sound information extraction is a complex task, and the main difficulty lies in extracting high-level information from a one-dimensional signal. The input of the smart sound sensor consists of data collected by 5 microphones, and its output is sent over a network. For real-time operation, the sound analysis is divided into three steps: sound event detection on each channel, fusion of simultaneous events, and sound identification. The event detection module finds impulsive signals in the noise and extracts them from the signal flow. The smart sensor must be able to identify impulsive signals as well as the presence of speech in a noisy environment. The classification module is launched as a parallel task on the channel chosen by the data fusion process. It identifies the sound event among seven predefined sound classes using a Gaussian Mixture Model (GMM) method. Mel Frequency Cepstral Coefficients are used in combination with additional features such as the zero crossing rate, spectral centroid, and roll-off point. This smart sound sensor is part of a medical telemonitoring project aimed at detecting serious accidents.

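The classification stage described in this abstract (a GMM over MFCCs plus zero crossing rate, spectral centroid, and roll-off) can be sketched as follows with librosa and scikit-learn. The feature settings, number of mixture components, and training-data layout are assumptions, not the authors' configuration.

```python
# Sketch: GMM-based sound-event classification with MFCC + ZCR + centroid + roll-off.
# One GMM is trained per class; a clip is assigned to the class whose GMM
# gives the highest average log-likelihood.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def extract_features(path, sr=16000):
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    zcr = librosa.feature.zero_crossing_rate(y)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)
    return np.vstack([mfcc, zcr, centroid, rolloff]).T   # (frames, 16)

def train(train_files, n_components=8):
    """train_files: {class_name: [wav paths]} -- hypothetical training data."""
    models = {}
    for label, paths in train_files.items():
        feats = np.vstack([extract_features(p) for p in paths])
        models[label] = GaussianMixture(n_components=n_components).fit(feats)
    return models

def classify(models, path):
    feats = extract_features(path)
    return max(models, key=lambda label: models[label].score(feats))
```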

A Study on Elemental Technology Identification of Sound Data for Audio Forensics (오디오 포렌식을 위한 소리 데이터의 요소 기술 식별 연구)

  • Hyejin Ryu;Ah-hyun Park;Sungkyun Jung;Doowon Jeong
    • Journal of the Korea Institute of Information Security & Cryptology / v.34 no.1 / pp.115-127 / 2024
  • The recent increase in digital audio media has greatly expanded the size and diversity of sound data, which has raised the importance of sound data analysis in the digital forensics process. However, the lack of standardized procedures and guidelines for sound data analysis has caused problems with the consistency and reliability of analysis results. The digital environment includes a wide variety of audio formats and recording conditions, but current audio forensic methodologies do not adequately reflect this diversity. Therefore, this study identifies life-cycle-based elemental technologies for sound data and provides overall guidelines so that effective analysis can be performed in all situations. Furthermore, the identified elemental technologies were analyzed for use in developing digital forensic techniques for sound data. To demonstrate the effectiveness of the proposed life-cycle-based identification system, a case study on the development of a sound-data-based emergency retrieval technology is presented. Through this case study, we confirmed that the elemental technologies identified on a life-cycle basis ensure the quality and consistency of data analysis and enable efficient sound data analysis during the development of digital forensic technology for sound data.

The Method of Elevation Accuracy In Sound Source Localization System (음원 위치 추정 시스템의 정확도 향상 방법)

  • Kim, Yong-Eun;Chung, Jin-Gyun
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.2 / pp.24-29 / 2009
  • Sound source localization systems are used in robots, video conferencing, and CCTV (closed-circuit television) systems. When such systems are applied to human speakers, they receive many sound data frames during speech. In this paper, we propose methods that reduce the angle estimation error by selecting, from the input sound data frames, those frames from which the angle can be computed more precisely. After the selected frames are converted to angles, the error of the sound source localization system is further reduced by applying a median filter. Experiments with the proposed system show that the average angle estimation error can be reduced by up to 31%.
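
A minimal numpy/scipy sketch of the frame-selection-plus-median-filter idea described above is shown below; the energy-based selection criterion, threshold, and window length are assumptions made for illustration.

```python
# Sketch: smooth per-frame angle estimates with a median filter after
# discarding low-energy (unreliable) frames. Thresholds are illustrative.
import numpy as np
from scipy.signal import medfilt

def smooth_angles(angles, frame_energy, energy_thresh=0.1, kernel=5):
    """angles: per-frame angle estimates in degrees;
    frame_energy: per-frame signal energy used to select reliable frames."""
    angles = np.asarray(angles, dtype=float)
    reliable = np.asarray(frame_energy) >= energy_thresh
    selected = angles[reliable]                   # keep only reliable frames
    return medfilt(selected, kernel_size=kernel)  # median filter removes outliers

# Example: a noisy stream of angle estimates around 40 degrees
angles = np.array([39.0, 41.0, 40.5, 90.0, 40.0, 39.5, 38.0, 41.5, 40.2])
energy = np.array([0.9, 0.8, 0.7, 0.6, 0.9, 0.05, 0.8, 0.9, 0.7])
print(smooth_angles(angles, energy))
```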

A Study on the Sound Input Device and Data Base Configuration for Guitar Manufacturing (기타음향 입력 장치 및 분석용 Data Base 구성에 관한 연구)

  • 정병태
    • Journal of the Korea Computer Industry Society / v.3 no.8 / pp.1063-1072 / 2002
  • The characteristics of a guitar's sound change according to the internal structure of the guitar. The developed system uses a PC to analyze the guitar sound, which differs according to the internal structure of the guitar and the materials it is made of, and builds a database from the results. The system consists of three parts: a mechanical body that holds the guitar and allows its internal structure to be changed easily; DSP sound acquisition boards that provide multi-channel sound input, A/D conversion, and RS232C data transfer to the PC; and software running on the PC that analyzes the input sound and builds the database.

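As a hedged sketch of the PC-side analysis-and-database step described in this abstract, the code below stores basic spectral features of a recorded guitar note in SQLite. The table layout, feature choice, and structure label are assumptions, not the paper's schema.

```python
# Sketch: store simple spectral features of a recorded guitar note in SQLite.
import sqlite3
import numpy as np
from scipy.io import wavfile

rate, note = wavfile.read("guitar_note.wav")          # hypothetical recording
if note.ndim > 1:
    note = note[:, 0]                                 # keep one channel
note = note.astype(np.float64)
spectrum = np.abs(np.fft.rfft(note))
freqs = np.fft.rfftfreq(len(note), d=1.0 / rate)
peak_freq = float(freqs[np.argmax(spectrum)])         # dominant frequency
rms = float(np.sqrt(np.mean(note ** 2)))

conn = sqlite3.connect("guitar_sounds.db")
conn.execute("""CREATE TABLE IF NOT EXISTS notes
                (structure TEXT, peak_freq REAL, rms REAL)""")
conn.execute("INSERT INTO notes VALUES (?, ?, ?)",
             ("braced_top_A", peak_freq, rms))        # hypothetical structure label
conn.commit()
conn.close()
```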

Enhanced Sound Signal Based Sound-Event Classification (향상된 음향 신호 기반의 음향 이벤트 분류)

  • Choi, Yongju;Lee, Jonguk;Park, Daihee;Chung, Yongwha
    • KIPS Transactions on Software and Data Engineering / v.8 no.5 / pp.193-204 / 2019
  • The explosion of data resulting from improvements in sensor technology and computing performance has become the basis for analyzing situations in industrial fields, and attempts to detect events from such data have recently been increasing. In particular, sound signals collected from sensors are used as important information for classifying events in various application fields, since they allow field information to be collected efficiently at relatively low cost. However, the performance of sound-event classification in the field cannot be guaranteed if noise cannot be removed; that is, to implement a practically applicable system, robust performance must be guaranteed under various noise conditions. In this study, we propose a system that classifies sound events after generating an enhanced sound signal with a deep learning algorithm. In particular, to remove noise from the sound signal itself, noise-robust enhanced sound data is generated using SEGAN, a GAN-based approach combined with a VAE technique. An end-to-end sound-event classification system is then designed that classifies sound events by feeding the enhanced sound signal directly into a CNN, without a separate data conversion process. The performance of the proposed method was verified experimentally using sound data obtained from industrial fields, with f1 scores of 99.29% (railway industry) and 97.80% (livestock industry).
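
SEGAN itself is beyond a short example, but the end-to-end classification stage, feeding a raw (enhanced) waveform directly into a 1-D CNN, can be sketched in PyTorch as below. The layer sizes, two-class setup, and input length are assumptions, not the authors' architecture.

```python
# Sketch: end-to-end sound-event classifier that takes a raw (enhanced)
# waveform as input to a 1-D CNN. Architecture details are assumptions.
import torch
import torch.nn as nn

class WaveformCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=32, stride=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1),                   # global pooling over time
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                              # x: (batch, 1, samples)
        h = self.features(x).squeeze(-1)               # (batch, 32)
        return self.classifier(h)

model = WaveformCNN(n_classes=2)
waveform = torch.randn(4, 1, 16000)                    # e.g. 1 s of 16 kHz audio
logits = model(waveform)                               # (4, 2) class scores
print(logits.shape)
```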

Temporal and Spatial Variability of Sound Speed in the Sea around the Ieodo (이어도 주변해역에서 수중음속의 시공간적 변동성)

  • Park, Kyeongju
    • Journal of Environmental Science International / v.29 no.11 / pp.1141-1151 / 2020
  • Sound speed variability in the sea has a very important impact on acoustic propagation for underwater acoustic systems. The temporal and spatial variability of ocean sound speed in the sea around Ieodo was examined using oceanographic data (temperature, salinity) from the Korea Oceanographic Data Center, collected seasonally over 17 years. The vertical distributions of sound speed are mainly related to seasonal variations and to various water masses such as Chinese coastal water, Yellow Sea Cold Water (YSCW), and Kuroshio source water. The standard deviations show that large variations in sound speed exist in the upper layer and at observation stations 16 to 18. To explain the reasons for the sound speed variations quantitatively, Empirical Orthogonal Function (EOF) analysis was performed on sound speed data along Line 316, covering 68 cruises between 2002 and 2018. The three main EOF modes explained 55%, 29%, and 5% of the total variance of sound speed, respectively. The first mode was associated with the influence of surface heating, while the second mode shows contributions from the YSCW and surface heating. The first and second modes exhibited seasonal and inter-annual variations.
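
A minimal sketch of EOF analysis on a cruises-by-stations sound-speed matrix via SVD is given below. The synthetic data and matrix shape are placeholders; in practice the sound speed would first be computed from the temperature and salinity profiles (e.g. with a standard empirical formula such as Mackenzie's), which is not reproduced here.

```python
# Sketch: EOF (Empirical Orthogonal Function) analysis of a sound-speed field.
# Rows = observation times (cruises), columns = spatial points (stations/depths).
# The data below are synthetic placeholders, not the paper's observations.
import numpy as np

rng = np.random.default_rng(0)
field = rng.normal(1500.0, 5.0, size=(68, 40))     # 68 cruises x 40 spatial points (m/s)

anomaly = field - field.mean(axis=0)               # remove the time mean at each point
U, s, Vt = np.linalg.svd(anomaly, full_matrices=False)

explained = s**2 / np.sum(s**2)                    # fraction of variance per mode
eofs = Vt                                          # spatial patterns (modes x points)
pcs = U * s                                        # principal-component time series

print("variance explained by first three modes:", explained[:3])
```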

A Study on Sound Recognition System Based on 2-D Transformation and CNN Deep Learning (2차원 변환과 CNN 딥러닝 기반 음향 인식 시스템에 관한 연구)

  • Ha, Tae Min;Cho, Seongwon;Tra, Ngo Luong Thanh;Thanh, Do Chi;Lee, Keeseong
    • Smart Media Journal / v.11 no.1 / pp.31-37 / 2022
  • This paper proposes a study on applying signal processing and deep learning to sound recognition, detecting sounds commonly heard in daily life (Screaming, Clapping, Crowd_clapping, Car_passing_by, Back_ground, etc.). In the proposed sound recognition, several techniques are used to improve recognition accuracy, including analysis of the sound-wave spectrum, augmentation of sound data, ensemble learning over multiple predictions, convolutional neural network (CNN) deep learning, and two-dimensional (2-D) input data. Experiments show that the proposed sound recognition technique can accurately recognize various sounds.
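
The 2-D transformation step, turning a 1-D waveform into an image-like representation for a CNN, can be sketched with librosa's mel spectrogram as below. The file name, sample rate, and spectrogram settings are assumptions, not the paper's configuration.

```python
# Sketch: convert a 1-D sound signal into a 2-D log-mel spectrogram,
# the kind of image-like input a CNN classifier consumes.
import numpy as np
import librosa

y, sr = librosa.load("clip.wav", sr=22050)             # hypothetical clip
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048,
                                     hop_length=512, n_mels=128)
log_mel = librosa.power_to_db(mel, ref=np.max)         # (128, frames) 2-D input
print(log_mel.shape)                                   # feed this 2-D array to a CNN
```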

Classification of Asthma Disease Using Thoracic Data (흉부음 데이터를 이용한 천식 질환 판별)

  • Moon In-Seob;Choi Hyoung-Ki;Lee Chul-Hee;Park Ki-Young;Kim Chong-Kyo
    • MALSORI / no.49 / pp.135-144 / 2004
  • In this paper, we study how to discriminate normal from abnormal (asthma) cases by analyzing thoracic sounds with a thoracic sound detection system. The thoracic sound detection system stores thoracic sounds and analyzes the data. The waveform of a thoracic sound resembles noise and is generated systematically by inhalation and exhalation. To identify asthma sounds within thoracic sounds, we discriminate between normal and abnormal cases using the level crossing rate (LCR) and the spectrogram energy rate.

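A minimal numpy sketch of the per-frame level crossing rate (LCR) used above is shown below; the crossing level and frame length are illustrative assumptions, not the paper's values.

```python
# Sketch: per-frame level crossing rate (LCR) of a signal.
import numpy as np

def level_crossing_rate(signal, level, frame_len):
    """Count, per frame, how often the signal crosses the given level."""
    signal = np.asarray(signal, dtype=float)
    n = len(signal) // frame_len
    frames = signal[: n * frame_len].reshape(n, frame_len)
    above = frames > level                              # boolean: above the level?
    crossings = np.abs(np.diff(above.astype(int), axis=1)).sum(axis=1)
    return crossings / frame_len                        # crossings per sample

# Example: a noisy breathing-like oscillation
t = np.linspace(0, 1, 8000)
x = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.default_rng(1).normal(size=t.size)
print(level_crossing_rate(x, level=0.0, frame_len=800))
```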