• Title/Summary/Keyword: abnormal sound source recognition (이상 음원 인식)


Design of a Direction Control System for a Camera Using Sound Source Recognition and Delay Time (음원인식 및 지연시간을 이용한 카메라의 방향제어 시스템 설계)

  • Lee, Hui-Tae;Kim, Young-Sub
    • Proceedings of the Korea Information Processing Society Conference / 2017.11a / pp.1076-1078 / 2017
  • This study concerns a system that, when an abnormal sound source (a scream, breaking glass, a car horn, etc.) occurs, analyzes the sound arriving at two microphones and transmits the direction of the sound source to a camera connected to the direction-tracking device, moving the camera's view point toward the source so that the scene of an incident can be handled more quickly. A conventional sound-based surveillance camera merely detects whether a sound has occurred; the proposed system instead steers the camera toward the point where the abnormal sound originated. Abnormal sound sources are detected and classified by comparison and analysis against a previously collected database. The direction of the sound source is determined by computing the phase difference of the sound waves from the time difference with which the sound arrives at the two microphones.
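
The two-microphone direction estimate described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the far-field geometry, microphone spacing, sampling rate, and test signal are all assumptions.

```python
import numpy as np

def doa_from_delay(delay_s, mic_distance_m, speed_of_sound=343.0):
    """Direction of arrival (degrees from broadside) for a far-field
    source, given the inter-microphone time delay."""
    # delay * c is the extra path length to the farther microphone;
    # dividing by the mic spacing gives sin(theta).
    sin_theta = np.clip(delay_s * speed_of_sound / mic_distance_m, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

def delay_by_cross_correlation(x1, x2, fs):
    """Time delay between two microphone signals via cross-correlation."""
    corr = np.correlate(x1, x2, mode="full")
    lag = np.argmax(corr) - (len(x2) - 1)   # lag in samples
    return lag / fs

# Example: a 1 kHz burst reaching mic 1 five samples later than mic 2
# (sampling at 16 kHz, microphones 20 cm apart).
fs = 16000
t = np.arange(512) / fs
sig = np.sin(2 * np.pi * 1000 * t)
x1 = np.concatenate([np.zeros(5), sig])
x2 = np.concatenate([sig, np.zeros(5)])
delay = delay_by_cross_correlation(x1, x2, fs)
angle = doa_from_delay(delay, mic_distance_m=0.2)
```

With these numbers the recovered delay is 5 samples, which maps to an arrival angle of roughly 32 degrees off broadside.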

Design of a sound source recognition and location tracking system using sound-section detection robust to background sounds (주변 배경음에 강인한 구간 검출을 통한 음원 인식 및 위치 추적 시스템 설계)

  • Kim, Woo-Jun;Kim, Young-Sub;Lee, Gwang-Seok
    • The Journal of the Korea institute of electronic communication sciences / v.11 no.8 / pp.759-766 / 2016
  • This paper presents the design of a system that recognizes sound sources occurring in abnormal situations and traces their locations, by detecting a signal section that is robust to surrounding background sounds and using only the signals within that section. To detect the robust section, the short-term weighted-average delta energy of the received audio signal is computed and passed through a low-pass filter, and the section is identified by comparing the filter's output values. For recognition, an HMM (Hidden Markov Model), a traditional recognition method, is trained on data from the detected section and then used to recognize the sound sources. For signals containing surrounding background sounds, detecting the section this way before HMM recognition yields a recognition rate 3.94% higher than detection based on conventional signal energy. In addition, based on the recognition result, localization using the TDOA (Time Delay of Arrival) between signals in the section matches the angle of the actual source location with 97.44% accuracy.
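
The section-detection step above can be sketched roughly as follows. The paper's exact weighting and filter are not reproduced here; this sketch assumes simple frame-wise delta energy with a one-pole IIR as the low-pass filter, and all parameter values are illustrative.

```python
import numpy as np

def detect_active_section(x, frame_len=256, alpha=0.9, threshold=0.01):
    """Frame-wise delta-energy section detector (illustrative sketch).

    Computes per-frame energy, takes the absolute frame-to-frame energy
    change (delta energy), smooths it with a one-pole low-pass filter,
    and marks frames whose smoothed delta exceeds a threshold."""
    n_frames = len(x) // frame_len
    frames = x[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames ** 2).mean(axis=1)
    delta = np.abs(np.diff(energy, prepend=energy[0]))
    smoothed = np.zeros_like(delta)
    acc = 0.0
    for i, d in enumerate(delta):        # one-pole IIR low-pass
        acc = alpha * acc + (1 - alpha) * d
        smoothed[i] = acc
    return smoothed > threshold

# Example: half a second of silence followed by a loud 800 Hz burst.
fs = 16000
x = np.concatenate([np.zeros(fs // 2),
                    np.sin(2 * np.pi * 800 * np.arange(fs // 2) / fs)])
active = detect_active_section(x)
```

The detector stays off through the silent half second and fires at the frame containing the onset, where the energy change is largest.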

Efficient Implementation of IFFT and FFT for PHAT Weighting Speech Source Localization System (PHAT 가중 방식 음성신호방향 추정시스템의 FFT 및 IFFT의 효율적인 구현)

  • Kim, Yong-Eun;Hong, Sun-Ah;Chung, Jin-Gyun
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.1 / pp.71-78 / 2009
  • Sound source localization systems in service robot applications estimate the direction of a human voice. Time delay information obtained from a few separate microphones is widely used to estimate the sound direction, and correlation is computed to calculate the time delay between two signals. In addition, a PHAT weighting function can be applied to significantly improve the accuracy of the estimation. However, the FFT and IFFT operations in the PHAT weighting function occupy more than half of the area of the sound source localization system, so efficient FFT and IFFT designs are essential for an IP implementation. In this paper, we propose an efficient FFT/IFFT design method based on the characteristics of the human voice.
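
The GCC-PHAT pipeline whose FFT/IFFT stages this paper optimizes can be sketched in a few lines. This is a generic software illustration of the algorithm, not the paper's hardware design; the signal and sampling rate are assumptions.

```python
import numpy as np

def gcc_phat(x1, x2, fs):
    """Time delay estimate via GCC-PHAT: whiten the cross-spectrum so
    only phase (timing) information remains, then locate the peak of
    its inverse FFT."""
    n = len(x1) + len(x2)                 # zero-pad to avoid circular wrap
    X1 = np.fft.rfft(x1, n=n)
    X2 = np.fft.rfft(x2, n=n)
    cross = X1 * np.conj(X2)
    cross /= np.abs(cross) + 1e-12        # PHAT weighting: unit magnitude
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    cc = np.concatenate([cc[-max_shift:], cc[:max_shift]])  # center lag 0
    lag = np.argmax(np.abs(cc)) - max_shift
    return lag / fs

# Example: white noise delayed by 7 samples between the two microphones.
rng = np.random.default_rng(0)
fs = 16000
s = rng.standard_normal(1024)
x1 = np.concatenate([np.zeros(7), s])
x2 = np.concatenate([s, np.zeros(7)])
delay = gcc_phat(x1, x2, fs)
```

The forward FFTs, the whitening division, and the IFFT are exactly the blocks that dominate the hardware area, which is why the paper targets them.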

On the speaker's position estimation using TDOA algorithm in vehicle environments (자동차 환경에서 TDOA를 이용한 화자위치추정 방법)

  • Lee, Sang-Hun;Choi, Hong-Sub
    • Journal of Digital Contents Society / v.17 no.2 / pp.71-79 / 2016
  • This study compares the performance of sound source localization methods used for reliable automobile control through improved voice recognition in the vehicle environment, and suggests how to improve them. Sound source localization generally employs the TDOA algorithm, which can be realized in two ways: with a cross-correlation function in the time domain, or with GCC-PHAT computed in the frequency domain. Of the two, GCC-PHAT is known to be more robust against echo and noise than the cross-correlation function. This study compared the two methods in a vehicle environment full of echo and vibration noise, and additionally proposes the use of a median filter. We found that the median filter improves the performance of both estimation methods and decreases their variance. According to the experimental results, the two methods perform almost identically on voice signals; on a song signal, however, GCC-PHAT achieves a recognition rate about 10% higher than the cross-correlation function. When the median filter is added, the cross-correlation function's recognition rate improves by up to 11%. In terms of variance, both methods show stable performance.
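
The median-filter post-processing proposed above amounts to smoothing the sequence of per-frame delay (or angle) estimates. The sketch below is an illustration only; the filter width and the example values are assumptions, not the paper's settings.

```python
import numpy as np

def median_smooth(estimates, width=5):
    """Sliding median over a sequence of per-frame angle estimates.

    The median discards isolated outliers (e.g. frames corrupted by echo
    or vibration noise), which reduces the variance of the final
    direction estimate without lagging behind genuine changes."""
    est = np.asarray(estimates, dtype=float)
    half = width // 2
    padded = np.pad(est, half, mode="edge")
    return np.array([np.median(padded[i:i + width]) for i in range(len(est))])

# Example: per-frame angle estimates around 30 degrees with two outliers
# caused by reflections.
raw = [29.0, 31.0, 30.0, 85.0, 30.5, 29.5, -10.0, 30.0, 31.0]
smooth = median_smooth(raw, width=5)
```

After smoothing, the two outlier frames (85° and -10°) are replaced by values near the true 30° direction.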

Real-time Background Music System for Immersive Dialogue in Metaverse based on Dialogue Emotion (메타버스 대화의 몰입감 증진을 위한 대화 감정 기반 실시간 배경음악 시스템 구현)

  • Kirak Kim;Sangah Lee;Nahyeon Kim;Moonryul Jung
    • Journal of the Korea Computer Graphics Society / v.29 no.4 / pp.1-6 / 2023
  • To enhance immersion in metaverse environments, background music is often used. However, such music is mostly pre-matched and repeated, which can be distracting because it does not align well with rapidly changing, user-interactive content. We therefore implemented a system that provides a more immersive metaverse conversation experience by 1) developing a regression neural network that extracts emotion from an utterance, trained on KEMDy20, a Korean multimodal emotion dataset; 2) selecting music corresponding to the extracted emotion from the DEAM dataset, in which music is tagged with arousal-valence levels; and 3) combining these with a virtual space where users can hold real-time conversations with avatars.
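
The selection step in 2) is essentially a nearest-neighbour lookup in arousal-valence space. The sketch below substitutes a tiny hand-made catalogue for the DEAM annotations; the track names and coordinate values are hypothetical.

```python
import math

# Hypothetical catalogue: track name -> (valence, arousal), both in [-1, 1].
CATALOG = {
    "calm_piano":  (0.4, -0.6),
    "upbeat_pop":  (0.8,  0.7),
    "tense_drone": (-0.7, 0.5),
    "sad_strings": (-0.6, -0.5),
}

def pick_track(valence, arousal):
    """Return the catalogue track closest (Euclidean distance) to the
    valence-arousal point predicted for the current utterance."""
    return min(CATALOG,
               key=lambda name: math.dist(CATALOG[name], (valence, arousal)))

track = pick_track(0.7, 0.6)   # a happy, energetic utterance
```

A happy, high-energy utterance maps to the upbeat track, while a negative, aroused one would map to the tense track.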

Performance Improvement of Speaker Recognition by MCE-based Score Combination of Multiple Feature Parameters (MCE기반의 다중 특징 파라미터 스코어의 결합을 통한 화자인식 성능 향상)

  • Kang, Ji Hoon;Kim, Bo Ram;Kim, Kyu Young;Lee, Sang Hoon
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.6 / pp.679-686 / 2020
  • In this paper, an enhanced feature-extraction method for vocal source signals and a score combination using MCE-based weight estimation over multiple feature-vector scores are proposed to improve the performance of speaker recognition systems. The proposed feature vector is composed of perceptual linear predictive cepstral coefficients, skewness, and kurtosis, extracted from low-pass filtered glottal flow signals so as to eliminate the flat spectral region, which carries no meaningful information. This feature was used to improve a conventional speaker recognition system based on mel-frequency cepstral coefficients and perceptual linear predictive cepstral coefficients extracted from the speech signals, with Gaussian mixture models. In addition, to increase the reliability of the estimated scores, instead of estimating the weight from the probability distribution of the conventional score, the scores evaluated with the conventional vocal-tract features and with the proposed feature are fused by the MCE-based score combination method to find the optimal speaker. The experimental results showed that the proposed feature vectors contain valid information for recognizing the speaker, and that when recognition is performed by combining the multiple feature-parameter scores with MCE-based weights, the system outperforms the conventional one, particularly with a low number of Gaussian mixtures.
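
MCE training itself is beyond a short sketch, but the final fusion step is a linear score-level combination of the kind whose weight MCE training would tune. All numbers below are hypothetical illustrations, not results from the paper.

```python
import numpy as np

def combined_scores(scores_a, scores_b, w):
    """Fuse two per-speaker score arrays with weight w in [0, 1]: the
    linear score-level combination whose weight an MCE criterion would
    estimate from training data."""
    return w * np.asarray(scores_a) + (1 - w) * np.asarray(scores_b)

# Hypothetical log-likelihood scores for four enrolled speakers.
vocal_tract  = [-12.0, -9.5, -11.0, -10.2]   # MFCC/PLP-based GMM scores
vocal_source = [-8.0, -9.0, -7.5, -9.8]      # glottal-flow-based scores
fused = combined_scores(vocal_tract, vocal_source, w=0.6)
best_speaker = int(np.argmax(fused))
```

Here the fused scores pick speaker 1, which neither score stream would have ranked first on its own by the same margin; that complementarity is the motivation for combining vocal-tract and vocal-source evidence.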

Background Music Identification Algorithm for TV Broadcast Programs Using Audio Peak Detection (오디오 피크 검출을 적용한 TV 방송 프로그램 내 배경음악 식별 알고리즘)

  • Lee, Jung-Sung;Kim, Hyoung-Gook
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2013.06a / pp.34-35 / 2013
  • This paper proposes an algorithm for identifying background music within TV broadcast programs using audio peak detection. The proposed algorithm consists of three parts: a music fingerprint extraction and transmission part, a music section detection part, and a fast fingerprint matching and information transmission part. In the fingerprint extraction and transmission part, spectral coefficients are extracted by applying the Fourier transform to the original music audio data; spectral components whose energy exceeds a fixed threshold are detected as peaks, and fingerprints generated from those peaks are stored in a database. In the music section detection part, a GMM (Gaussian Mixture Model) is applied to the input broadcast audio to separate music from non-music audio. In the fast matching and information transmission part, fingerprints are generated from query audio recognized as a music section through the same process as the extraction part, compared against the database of original-music fingerprints, and the information of the most similar track is displayed as a caption on the TV screen.
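
The peak-detection stage above can be sketched as follows. This is a generic illustration of thresholded spectral peaks, not the paper's fingerprint format; the frame length, windowing, and threshold are assumptions.

```python
import numpy as np

def peak_fingerprint(x, frame_len=1024, threshold_db=-30.0):
    """Per-frame spectral-peak fingerprint (illustrative sketch): FFT each
    frame and keep the bin indices whose magnitude exceeds a threshold
    relative to the frame's strongest component."""
    n_frames = len(x) // frame_len
    prints = []
    for i in range(n_frames):
        frame = x[i * frame_len:(i + 1) * frame_len]
        mag = np.abs(np.fft.rfft(frame * np.hanning(frame_len)))
        ref = mag.max() + 1e-12
        db = 20 * np.log10(mag / ref + 1e-12)
        prints.append(tuple(np.flatnonzero(db > threshold_db)))
    return prints

# Example: a pure 1 kHz tone sampled at 16 kHz; with 1024-point frames the
# tone falls exactly on FFT bin 64, so each frame's peak set clusters there.
fs = 16000
t = np.arange(4096) / fs
fp = peak_fingerprint(np.sin(2 * np.pi * 1000 * t))
```

A real fingerprint would hash peak pairs (frequency and time offset) for fast database lookup; this sketch stops at the peak lists themselves.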


An analysis of the moving speed effect of the receiver array on the passive synthetic aperture signal processing (수동형 합성개구 신호처리에서 수신 배열 센서의 이동 속도에 대한 영향 분석)

  • Kim, Sea-Moon;Byun, Sung-Hoon;Oh, Sehyun
    • The Journal of the Acoustical Society of Korea / v.35 no.2 / pp.125-133 / 2016
  • In order to obtain high-resolution seafloor images, research on SA (Synthetic Aperture) processing and the development of related underwater systems have been carried out in many countries. Recently, SA processing has also been recognized as an important technique in Korea, and researchers have begun related basic studies. Most previous studies, however, ignored the Doppler effect caused by a moving receiver array. In this paper, reconstructed SAS (Synthetic Aperture Sonar) images and position errors are analyzed as a function of array speed in order to understand the effect of array motion on the SAS images, using the spatial-frequency-domain interpolation algorithm. The results show that, when the array motion is not taken into account, the estimated position error grows and the image distortion worsens as the moving speed of the array increases. If the receiver signals are compensated for the array motion, however, the position error and image distortion can be eliminated. In conclusion, a signal processing scheme that compensates for the Doppler effect is necessary, especially when the array speed exceeds 1 m/s.

Comparative analysis of the soundscape evaluation depending on the listening experiment methods (청감실험방식에 따른 음풍경 평가결과 비교분석)

  • Jo, A-Hyeon;Haan, Chan-Hoon
    • The Journal of the Acoustical Society of Korea / v.41 no.3 / pp.287-301 / 2022
  • The present study investigates the differences between the results of on-site field tests and laboratory tests, the two methods commonly used for soundscape surveys. To this end, both field and laboratory tests were carried out in four areas of Cheongju city. On-site questionnaire surveys were administered to 65 people at 13 points, and laboratory listening tests were carried out with 48 adults using recorded sounds and video. The laboratory tests were given to two groups, one with field-survey experience and one without, and used two different reproduction tools, headphones and speakers. As a result, a very close correlation between sound loudness and annoyance was found in both the field and laboratory tests. However, it was concluded that figure sounds must be recognized differently in the field and in the laboratory, since the on-site situation is hard to apprehend using only the visual and aural information provided in laboratory tests. In the laboratory tests, the figure sounds perceived as loudest differed somewhat between the headphone and speaker groups. It was also observed that field-experienced people tend to recognize figure sounds from their remembered experience, while inexperienced people cannot perceive them.

Speech/Music Signal Classification Based on Spectrum Flux and MFCC For Audio Coder (오디오 부호화기를 위한 스펙트럼 변화 및 MFCC 기반 음성/음악 신호 분류)

  • Sangkil Lee;In-Sung Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.16 no.5 / pp.239-246 / 2023
  • In this paper, we propose an open-loop algorithm that classifies speech and music signals for an audio coder using spectral flux parameters and Mel-Frequency Cepstral Coefficient (MFCC) parameters. MFCCs were used as short-term feature parameters to increase responsiveness, and spectral fluxes were used as long-term feature parameters to improve accuracy. The overall speech/music classification decision is made by combining the short-term and long-term classification methods. A Gaussian Mixture Model (GMM) was used for pattern recognition, and the optimal GMM parameters were estimated with the Expectation-Maximization (EM) algorithm. The proposed combined long-term and short-term classification method showed an average classification error rate of 1.5% on various audio sources, improving the error rate by 0.9% over the short-term-only method and by 0.6% over the long-term-only method. Compared with the Unified Speech and Audio Coding (USAC) audio classification method, the proposed method improved the classification error rate by 9.1% on percussive music signals with sharp attacks and by 5.8% on voice signals.
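
Of the two feature families above, spectral flux is easy to sketch without external libraries; MFCC extraction and GMM training are omitted here. The frame size, hop, and normalization below are assumptions, not the paper's settings.

```python
import numpy as np

def spectral_flux(x, frame_len=512, hop=256):
    """Spectral flux per frame: the L2 norm of the change in the
    (L1-normalized) magnitude spectrum between consecutive frames.
    Speech, with its alternating voiced/unvoiced structure, tends to
    show higher and more variable flux than steady music."""
    window = np.hanning(frame_len)
    flux, prev = [], None
    for start in range(0, len(x) - frame_len + 1, hop):
        mag = np.abs(np.fft.rfft(x[start:start + frame_len] * window))
        mag = mag / (mag.sum() + 1e-12)      # normalize out level changes
        if prev is not None:
            flux.append(np.sqrt(((mag - prev) ** 2).sum()))
        prev = mag
    return np.array(flux)

# A steady tone (music-like) vs. gated noise bursts (speech-like rhythm).
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
rng = np.random.default_rng(1)
bursty = rng.standard_normal(fs) * (np.sin(2 * np.pi * 4 * t) > 0)
flux_tone = spectral_flux(tone)
flux_bursty = spectral_flux(bursty)
```

The steady tone yields near-zero flux while the gated noise yields large, fluctuating flux, which is the separation a long-term flux feature exploits for the speech/music decision.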