• Title/Summary/Keyword: speech signal processing

Search Results: 331

A Novel Model, Recurrent Fuzzy Associative Memory, for Recognizing Time-Series Patterns Contained Ambiguity and Its Application (모호성을 포함하고 있는 시계열 패턴인식을 위한 새로운 모델 RFAM과 그 응용)

  • Kim, Won;Lee, Joong-Jae;Kim, Gye-Young;Choi, Hyung-Il
    • The KIPS Transactions:PartB
    • /
    • v.11B no.4
    • /
    • pp.449-456
    • /
    • 2004
  • This paper proposes a novel recognition model, a recurrent fuzzy associative memory (RFAM), for recognizing time-series patterns that contain ambiguity. RFAM extends FAM (Fuzzy Associative Memory) by adding a recurrent layer that can deal with sequential input patterns and characterize their temporal relations. RFAM provides a Hebbian-style learning method which establishes the degree of association between input and output. The error back-propagation algorithm is also adopted to train the weights of the recurrent layer of RFAM. To evaluate the performance of the proposed model, we applied it to a word-boundary detection problem in speech signals.
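
As background for the Hebbian-style association step the abstract mentions, here is a minimal sketch of classical (Kosko-style) fuzzy associative memory, assuming correlation-minimum encoding and max-min composition recall; this is the FAM building block, not the paper's full recurrent model, and the fuzzy sets are toy values:

```python
import numpy as np

def fam_encode(a, b):
    """Correlation-minimum (Hebbian-style) encoding of one fuzzy
    input/output pair: w_ij = min(a_i, b_j)."""
    return np.minimum.outer(a, b)

def fam_recall(a, W):
    """Max-min composition recall: b_j = max_i min(a_i, w_ij)."""
    return np.minimum(a[:, None], W).max(axis=0)

# toy fuzzy membership vectors
a = np.array([0.2, 0.8, 1.0])
b = np.array([0.5, 1.0])
W = fam_encode(a, b)
print(fam_recall(a, W))   # recalls b when a is a normal fuzzy set
```

Because `a` here is normal (its maximum membership is 1.0), recall through the encoded matrix reproduces `b` exactly.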

Room Acoustic Measurement System Using Impulse Response (임펄스응답을 이용한 실내음향 측정 시스템)

    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.5
    • /
    • pp.63-67
    • /
    • 1999
  • Recently, a method of measuring the impulse response has been widely used for room acoustic evaluation instead of measuring reverberation time by white noise excitation. Compared with the traditional reverberation time measurement, this method has many advantages, such as good repeatability and the ability to extract various room acoustic parameters from one measurement. In this study, the author developed a measuring system that can extract monaural room acoustic parameters from an impulse response measured with MLS (Maximum Length Sequence) signal excitation. These room acoustic parameters include reverberation times (EDT, RT), speech intelligibility measures (C50, C80, D, U50, U80, AI), and sound strength (G). This paper introduces the configuration of the developed measuring system, test results, and a discussion of measurements made in several rooms.
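
The abstract does not spell out how reverberation time is derived from the measured impulse response; a common approach (assumed here, not necessarily this system's exact implementation) is Schroeder backward integration followed by a slope fit over the -5 to -25 dB decay range, extrapolated to -60 dB. A minimal sketch on a synthetic exponential decay:

```python
import numpy as np

def schroeder_decay_db(h):
    """Schroeder backward integration of a room impulse response:
    energy decay curve in dB, normalized to 0 dB at t = 0."""
    e = np.cumsum(h[::-1] ** 2)[::-1]          # backward-integrated energy
    return 10.0 * np.log10(e / e[0])

def rt60_from_edc(edc_db, fs, lo=-5.0, hi=-25.0):
    """Estimate RT60 by fitting the -5..-25 dB slope (a T20-style fit)
    and extrapolating to -60 dB."""
    i_lo = np.argmax(edc_db <= lo)
    i_hi = np.argmax(edc_db <= hi)
    t = np.arange(len(edc_db)) / fs
    slope, _ = np.polyfit(t[i_lo:i_hi], edc_db[i_lo:i_hi], 1)
    return -60.0 / slope

# synthetic exponential decay with a known RT60 of 0.5 s
fs, rt = 8000, 0.5
t = np.arange(int(fs * rt * 2)) / fs
h = np.exp(-6.9078 * t / rt)   # amplitude decays 60 dB over `rt` seconds
print(round(rt60_from_edc(schroeder_decay_db(h), fs), 2))
```

The same decay curve also yields EDT (fit over 0 to -10 dB) and, with early/late energy ratios of the raw response, clarity measures such as C50 and C80.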

HEEAS: On the Implementation and an Animation Algorithm of an Emotional Expression (HEEAS: 감정표현 애니메이션 알고리즘과 구현에 관한 연구)

  • Kim Sang-Kil;Min Yong-Sik
    • The Journal of the Korea Contents Association
    • /
    • v.6 no.3
    • /
    • pp.125-134
    • /
    • 2006
  • The purpose of this paper is to construct HEEAS (Human Emotional Expression Animation System), an animation system that shows both face and body motion from an input voice for four types of emotion: fear, dislike, surprise, and normal. To implement the system, we chose a Korean man in his twenties who could display the appropriate emotions most accurately. We also focused on reducing the processing time for producing the animation when deriving both face and body emotion codes from the input voice signal. That is, we reduce the search time by using a binary search over the face and body motion databases. Throughout the experiments, we achieved 99.9% accuracy of real emotional expression in the cartoon animation.
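
The database lookup described above can be illustrated with a standard binary search over sorted emotion codes; the codes and clip names below are hypothetical stand-ins, not the paper's actual motion databases:

```python
import bisect

# hypothetical sorted table of emotion codes and their motion clips,
# standing in for the paper's face/body motion databases
codes = [3, 7, 12, 21, 34, 55]
clips = ["fear", "dislike", "surprise", "normal", "fear2", "surprise2"]

def lookup(code):
    """O(log n) lookup of a motion clip by its emotion code."""
    i = bisect.bisect_left(codes, code)
    if i < len(codes) and codes[i] == code:
        return clips[i]
    return None

print(lookup(12))   # found: "surprise"
print(lookup(13))   # missing: None
```

For a database of n motion entries this replaces an O(n) linear scan with O(log n) probes, which is the source of the reduced search time the abstract reports.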

On the Behavior of the Signed Regressor Least Mean Squares Adaptation with Gaussian Inputs (가우시안 입력신호에 대한 Signed Regressor 최소 평균자승 적응 방식의 동작 특성)

  • 조성호
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.18 no.7
    • /
    • pp.1028-1035
    • /
    • 1993
  • The signed regressor (SR) algorithm employs one-bit quantization of the input regressor (or tap input) so that the quantized input sequence becomes +1 or -1. The algorithm is by nature computationally more efficient than the popular least mean square (LMS) algorithm. Unfortunately, the behavior of the SR algorithm is heavily dependent on the characteristics of the input signal, and there are some inputs for which the SR algorithm becomes unstable. It is known, however, that such a stability problem does not arise when the input signal is Gaussian, as in the case of speech processing. In this paper, we explore a statistical analysis of the SR algorithm. Under the assumption that the signals involved are zero-mean and Gaussian, and further employing the commonly used independence assumption, we derive a set of nonlinear evolution equations that characterize the mean and mean-squared behavior of the SR algorithm. Experimental results that show very good agreement with our theoretical derivations are also presented.
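
The one-bit regressor quantization described above amounts to replacing the tap-input vector by its sign in the LMS update, so each tap update needs no multiplication by the regressor. A small system-identification sketch with zero-mean white Gaussian input (the stable case the abstract highlights); the unknown filter and step size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sr_lms(x, d, n_taps, mu):
    """Signed-regressor LMS: w <- w + mu * e * sign(u), where u is the
    tap-input (regressor) vector and e the a-priori error."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # regressor vector
        e = d[n] - w @ u                    # a-priori estimation error
        w += mu * e * np.sign(u)            # one-bit quantized regressor
    return w

# identify a known FIR channel from zero-mean Gaussian input
h = np.array([0.5, -0.3, 0.1])
x = rng.standard_normal(20000)
d = np.convolve(x, h)[:len(x)]
w = sr_lms(x, d, n_taps=3, mu=0.002)
print(np.round(w, 3))
```

With Gaussian input the mean update is governed by E[sign(u) uᵀ], which is positive definite, so the weights converge toward the true channel; for some non-Gaussian inputs that matrix can lose this property, which is the instability the abstract refers to.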

Implementation of a G.723.1 Annex A Using a High Performance DSP (고성능 DSP를 이용한 G.723.1 Annex A 구현)

  • 최용수;강태익
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.7
    • /
    • pp.648-655
    • /
    • 2002
  • This paper describes the implementation of a multi-channel G.723.1 Annex A (G.723.1A) coder, focusing on code optimization using a high-performance general-purpose Digital Signal Processor (DSP). To implement a multi-channel G.723.1A, the functional complexities of the ITU-T G.723.1A fixed-point C code are measured and analyzed. We then sort and optimize the C functions in order of complexity. In parallel with optimization, we verify the bit-exactness of the optimized code using the ITU-T test vectors. Using only internal memory, the optimized code can perform full-duplex 17-channel processing. In addition, we further increase the number of available channels per DSP to 22 by using fast codebook search algorithms, referred to as bit-compatible optimization.
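
The complexity-ordered optimization workflow above can be sketched as profiling each function and attacking the most expensive first; the function names below are only loosely patterned on G.723.1 reference code, and the cycle figures are invented for illustration:

```python
# Hypothetical per-function complexity profile (e.g. measured cycles or
# WMOPS); real figures would come from profiling the ITU-T fixed-point
# C code on the target DSP.
profile = {
    "Lsp_Qnt": 4.1,
    "Find_Acbk": 9.7,
    "Estim_Pitch": 6.3,
    "Find_Fcbk": 12.5,
}

# Sort functions by complexity, most expensive first, to decide the
# optimization order described in the abstract.
order = sorted(profile, key=profile.get, reverse=True)
print(order)
```

After each function is optimized, encoding the ITU-T test vectors and comparing the bitstreams against the reference output confirms bit-exactness before moving to the next function.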

Improving Fidelity of Synthesized Voices Generated by Using GANs (GAN으로 합성한 음성의 충실도 향상)

  • Back, Moon-Ki;Yoon, Seung-Won;Lee, Sang-Baek;Lee, Kyu-Chul
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.1
    • /
    • pp.9-18
    • /
    • 2021
  • Although Generative Adversarial Networks (GANs) have gained great popularity in computer vision and related fields, directly generating audio signals with them is not yet well established. Unlike images, an audio signal consists of discrete samples, so it is not easy to learn such signals with the CNN architectures widely used in image generation tasks. To overcome this difficulty, GAN researchers proposed applying time-frequency representations of audio to existing image-generating GANs. Following this strategy, we propose an improved method for increasing the fidelity of audio signals synthesized by GANs. Our method is demonstrated on a public speech dataset and evaluated by Fréchet Inception Distance (FID), where a lower FID indicates better fidelity. Our method achieved an FID of 10.504, compared with 11.973 for the existing state-of-the-art method.
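
FID, the metric used above, is the Fréchet distance between Gaussians fitted to real and generated feature embeddings. A minimal numpy sketch of the formula; for simplicity the matrix square root assumes the covariance product is symmetric PSD (true when the covariances commute, as in this toy example), and the distributions are toy values rather than Inception embeddings:

```python
import numpy as np

def psd_sqrt(m):
    """Square root of a symmetric PSD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(m)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def fid(mu1, cov1, mu2, cov2):
    """Frechet distance between two Gaussians:
    ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1 C2)^{1/2})."""
    diff = mu1 - mu2
    return diff @ diff + np.trace(cov1 + cov2 - 2.0 * psd_sqrt(cov1 @ cov2))

# identical distributions give FID 0; shifting one mean by 1 gives FID 1
mu, cov = np.zeros(4), np.eye(4)
mu_shift = np.array([1.0, 0.0, 0.0, 0.0])
print(fid(mu, cov, mu, cov), fid(mu, cov, mu_shift, cov))
```

In practice the means and covariances are estimated from embeddings of many real and synthesized spectrograms, so a drop from 11.973 to 10.504 means the generated distribution moved measurably closer to the real one.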

CNN based dual-channel sound enhancement in the MAV environment (MAV 환경에서의 CNN 기반 듀얼 채널 음향 향상 기법)

  • Kim, Young-Jin;Kim, Eun-Gyung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.23 no.12
    • /
    • pp.1506-1513
    • /
    • 2019
  • Recently, as the industrial scope of multi-rotor unmanned aerial vehicles (UAVs) has greatly expanded, the demand for data collection, processing, and analysis using UAVs is also increasing. However, the acoustic data collected by a UAV is greatly corrupted by the UAV's motor noise and wind noise, which makes it difficult to process and analyze. Therefore, we have studied a method to enhance the target sound in the acoustic signal received through microphones attached to the UAV. In this paper, we extend the densely connected dilated convolutional network, an existing single-channel acoustic enhancement technique, to consider the inter-channel characteristics of the acoustic signal. As a result, the extended model performed better than the existing model on all evaluation measures, such as SDR, PESQ, and STOI.
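
The densely connected dilated convolutional network mentioned above builds on dilated convolutions, whose receptive field grows rapidly as layers with increasing dilation are stacked. A minimal causal 1-D sketch with illustrative unit weights, not the paper's model:

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """Causal 1-D dilated convolution: tap k reaches back k*dilation samples."""
    y = np.zeros_like(x)
    for k, wk in enumerate(w):
        s = k * dilation
        y[s:] += wk * x[:len(x) - s]
    return y

# an impulse pushed through size-2 kernels with dilations 1, 2, 4
# spreads over (1 + 2 + 4) + 1 = 8 samples -- the stack's receptive field
x = np.zeros(16)
x[0] = 1.0
for d in (1, 2, 4):
    x = dilated_conv1d(x, [1.0, 1.0], d)
print(np.count_nonzero(x))
```

Doubling the dilation at each layer lets a short stack cover a long temporal context cheaply, which is why such networks suit the long-range structure of motor and wind noise.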

Masking Level Difference: Performance of School Children Aged 7-12 Years

  • de Carvalho, Nadia Giulian;do Amaral, Maria Isabel Ramos;de Barros, Vinicius Zuffo;dos Santos, Maria Francisca Colella
    • Journal of Audiology & Otology
    • /
    • v.25 no.2
    • /
    • pp.65-71
    • /
    • 2021
  • Background and Objectives: In masking level difference (MLD), the masked detection threshold for a signal is determined as a function of the relative interaural differences between the signal and the masker. Study 1 analyzed the results of school-aged children with good school performance on the MLD test, and study 2 compared their results with those of a group of children with poor academic performance. Subjects and Methods: Study 1 was conducted with 47 school-aged children with good academic performance (GI) and study 2 with 32 school-aged children with poor academic performance (GII). The inclusion criterion adopted for both studies was hearing thresholds within normal limits in basic audiological evaluation. Study 1 also required normal performance on the central auditory processing test battery and the absence of auditory complaints and/or of attention, language, or speech issues. The MLD test was administered with a pulsatile pure tone of 500 Hz, in binaural mode at an intensity of 50 dB SL, using a CD player and audiometer. Results: In study 1, no significant correlation was observed between the variables age and sex and the results obtained in the homophasic (SoNo), antiphasic (SπNo), and MLD threshold conditions. The final mean MLD threshold was 13.66 dB. In study 2, these variables did not influence test performance either. There was a significant difference between the two groups' results in the SπNo condition, while no differences were found in either the SoNo condition or the final MLD result. Conclusions: In study 1, the cut-off criterion for school-aged children on the MLD test was 9.3 dB, and the variables (sex and age) did not interfere with the MLD results. In study 2, school performance did not influence the final MLD results; the GII group showed inferior results compared to the GI group only in the SπNo condition.
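
The MLD threshold reported above is simply the release from masking between the two phase conditions, i.e. the SoNo threshold minus the SπNo threshold in dB. A one-line sketch, with hypothetical thresholds chosen to reproduce the study's 13.66 dB mean:

```python
def masking_level_difference(so_no_db, spi_no_db):
    """MLD: release from masking when the signal is phase-inverted at
    one ear (SπNo) relative to the homophasic condition (SoNo)."""
    return so_no_db - spi_no_db

# hypothetical thresholds: -8.0 dB SNR (SoNo) and -21.66 dB SNR (SπNo)
print(round(masking_level_difference(-8.0, -21.66), 2))
```

A larger difference means the antiphasic presentation bought more detection benefit, which is why the SπNo condition alone separated the GII group from the GI group.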

A study on the clinical usefulness and improvement of hearing in noise test in evaluating central auditory processing (중추 청각 처리 기능 평가에서 hearing in noise test의 임상적 유용성과 개선점 고찰)

  • Han, Soo-Hee
    • The Journal of the Acoustical Society of Korea
    • /
    • v.41 no.1
    • /
    • pp.108-113
    • /
    • 2022
  • Speech recognition in noisy situations is an important skill for effective communication. The Hearing In Noise Test (HINT) has been suggested as a clinical tool to evaluate this ability, but it has not been widely used in domestic clinics. In this study, the psychophysical aspects of HINT and the burdens in its clinical application were analyzed to improve the applicability of the tool. The difficulty in understanding speech in the elderly population is due to hearing loss based on aging of the peripheral and central auditory pathways. As typical clinical cases, HINT scores for young and elderly listeners (20s vs. 70s) were compared under four conditions: Quiet (Q), Noise Front (NF), Noise Right (NR), and Noise Left (NL). Quantitative scores showed that the elderly listener required higher Signal-to-Noise Ratio (SNR) values than the younger counterpart in noisy situations. Although both showed the Binaural Masking Level Difference (BMLD) effect, it was weaker in the elderly listener. However, age-matched normative data have not been established in detail for clinical application. The confirmed usefulness of HINT and related improvements to the clinical measurement procedure are discussed.
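
The SNR values HINT reports are power ratios expressed in dB; a minimal sketch of that computation (the constant-amplitude "signals" below are purely illustrative):

```python
import numpy as np

def snr_db(signal, noise):
    """Signal-to-noise ratio in dB from average sample power."""
    return 10.0 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

# a signal with 10x the amplitude of the noise sits at +20 dB SNR
s = np.full(100, 2.0)
n = np.full(100, 0.2)
print(round(snr_db(s, n), 1))
```

In HINT the noise level is fixed and the sentence level adapts, so a listener who needs a higher SNR for 50% recognition, as the elderly listener here did, has poorer speech-in-noise ability.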