• Title/Summary/Keyword: Speech signals


Intonation Conversion using the Other Speaker's Excitation Signal (他話者의 勵起信號를 이용한 抑揚變換)

  • Lee, Ki-Young;Choi, Chang-Seok;Choi, Kap-Seok;Lee, Hyun-Soo
    • The Journal of the Acoustical Society of Korea / v.14 no.4 / pp.21-28 / 1995
  • In this paper, an intonation conversion method is presented as a basic study on converting original speech into artificially intoned speech. The method employs the other speaker's excitation signals as intonation information and the original vocal tract spectra, which are warped to the other speaker's by using DTW, as vocal features; intonation-converted speech signals are synthesized through the short-time inverse Fourier transform (STIFT) of their product. To evaluate the intonation-converted speech, we collect Korean single vowels and sentences spoken by 30 males and compare fundamental frequency contours, spectrograms, distortion measures, and MOS test results between the original speech and the converted speech. The results show that this method can convert speech into speech with the other speaker's intonation.
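
A minimal Python sketch of the frame-wise recombination the abstract describes: the target speaker's excitation spectrum is multiplied by the source speaker's vocal-tract envelope, and the product is resynthesized by inverse FFT with overlap-add. The cepstral envelope, the frame parameters, and the omission of the DTW spectral warping are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def spectral_envelope(frame, n_ceps=30):
    """Smooth spectral envelope via low-quefrency cepstral liftering."""
    spec = np.fft.rfft(frame)
    log_mag = np.log(np.abs(spec) + 1e-10)
    ceps = np.fft.irfft(log_mag)
    ceps[n_ceps:-n_ceps] = 0.0                      # keep only the low-quefrency part
    return np.exp(np.fft.rfft(ceps).real)

def convert_intonation(src, tgt, frame_len=512, hop=128):
    """Combine the source speaker's vocal-tract envelope with the target
    speaker's excitation spectrum, frame by frame, and resynthesize by
    inverse FFT with overlap-add (no DTW alignment in this sketch)."""
    n = min(len(src), len(tgt))
    win = np.hanning(frame_len)
    out = np.zeros(n)
    norm = np.zeros(n)
    for start in range(0, n - frame_len, hop):
        s = src[start:start + frame_len] * win
        t = tgt[start:start + frame_len] * win
        env_src = spectral_envelope(s)              # vocal-tract shape of the source
        env_tgt = spectral_envelope(t)
        excitation = np.fft.rfft(t) / (env_tgt + 1e-10)   # target excitation (intonation carrier)
        hybrid = excitation * env_src               # product of excitation and envelope
        out[start:start + frame_len] += np.fft.irfft(hybrid, n=frame_len) * win
        norm[start:start + frame_len] += win ** 2
    return out / np.maximum(norm, 1e-10)

if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    src = np.sin(2 * np.pi * 120 * t)               # stand-ins for real recordings
    tgt = np.sin(2 * np.pi * 180 * t)
    print(convert_intonation(src, tgt).shape)
```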


Efficient Implementation of SVM-Based Speech/Music Classification on Embedded Systems (SVM 기반 음성/음악 분류기의 효율적인 임베디드 시스템 구현)

  • Lim, Chung-Soo;Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea / v.30 no.8 / pp.461-467 / 2011
  • Accurate classification of input signals is the key prerequisite for variable bit-rate coding, which has been introduced to utilize limited communication bandwidth effectively. In particular, the recent surge of multimedia services elevates the importance of speech/music classification. Among the many speech/music classifiers, those based on the support vector machine (SVM) have a strong selling point, high classification accuracy, but their computational complexity and memory requirements hinder their way into actual implementations. Therefore, techniques that reduce the computational complexity and the memory requirement are indispensable, particularly for embedded systems. We first analyze the implementation of an SVM-based classifier on embedded systems in terms of execution time and energy consumption, and then propose two techniques that alleviate the implementation requirements: one removes support vectors that make an insignificant contribution to the final classification, and the other skips processing some input frames by exploiting the strong correlation between consecutive speech/music frames. These are post-processing techniques that can work with any other optimization technique applied during the training phase of the SVM. With experiments, we validate the proposed algorithms from the perspectives of classification accuracy, execution time, and energy consumption.
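
A small numpy sketch of the two post-processing ideas, assumed mechanics rather than the authors' code: support vectors with small |dual coefficient| are pruned before the RBF decision is evaluated, and frames that correlate strongly with the previous frame reuse the previous decision. The pruning criterion, thresholds, and toy data are illustrative.

```python
import numpy as np

def rbf_decision(x, support_vectors, dual_coef, intercept, gamma):
    """Manual RBF-SVM decision value so a pruned support-vector set can be used."""
    k = np.exp(-gamma * np.sum((support_vectors - x) ** 2, axis=1))
    return float(dual_coef @ k + intercept)

def prune_support_vectors(support_vectors, dual_coef, keep_ratio=0.5):
    """Keep only the support vectors with the largest |alpha_i * y_i|,
    i.e. those contributing most to the decision value."""
    order = np.argsort(-np.abs(dual_coef))
    keep = order[: max(1, int(len(order) * keep_ratio))]
    return support_vectors[keep], dual_coef[keep]

def classify_stream(frames, sv, dc, b, gamma, corr_threshold=0.95):
    """Frame skipping: if a frame's features are strongly correlated with the
    previous frame's, reuse the previous decision instead of re-evaluating."""
    labels, prev_feat, prev_label = [], None, None
    for feat in frames:
        if prev_feat is not None and np.corrcoef(feat, prev_feat)[0, 1] > corr_threshold:
            labels.append(prev_label)
            continue
        d = rbf_decision(feat, sv, dc, b, gamma)
        prev_label = 1 if d > 0 else 0              # 1 = speech, 0 = music (arbitrary)
        prev_feat = feat
        labels.append(prev_label)
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sv = rng.normal(size=(200, 13))                 # toy "trained" support vectors (MFCC-like)
    dc = rng.normal(size=200)                       # toy dual coefficients alpha_i * y_i
    sv_p, dc_p = prune_support_vectors(sv, dc, keep_ratio=0.3)
    frames = rng.normal(size=(50, 13))
    print(classify_stream(frames, sv_p, dc_p, b=0.0, gamma=0.1))
```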

Detection of Glottal Closure Instant using the property of G-peak (G-peak의 특성을 이용한 성문폐쇄시점 검출)

  • Keum, Hong;Kim, Dae-Sik;Bae, Myung-Jin;Kim, Young-Il
    • The Journal of the Acoustical Society of Korea / v.13 no.1E / pp.82-88 / 1994
  • It is important to detect the GCI (glottal closure instant) exactly in speech signal processing. A few methods for detecting the GCI of voiced speech have been proposed until now, but they have difficulty detecting the GCI for a wide range of speakers and various vowel signals. In this paper, we propose a new method for GCI detection using the G-peak. The speech waveforms are passed through an LPF of variable bandwidth, and the GCIs of voiced speech are then detected from the G-peaks of the filtered signals. We compared the detected GCIs with eye-checked GCIs for clean speech and at SNRs of 20 dB and 0 dB, counting a detection as correct when it fell within 1 ms of the eye-checked GCI. The detection rates were 97.9% for clean speech, 96.5% at 20 dB SNR, and 94.8% at 0 dB SNR, respectively.
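
A simplified Python sketch of the pipeline the abstract outlines: the speech is low-pass filtered and prominent peaks of the filtered signal are taken as G-peak/GCI candidates. A fixed cutoff and generic peak picking stand in for the paper's variable-bandwidth LPF and exact G-peak criterion.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_gci_candidates(speech, fs, cutoff_hz=900.0):
    """Low-pass filter the speech, then pick prominent peaks of the
    filtered waveform as GCI candidates (illustrative thresholds)."""
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
    filtered = filtfilt(b, a, speech)
    peaks, _ = find_peaks(np.abs(filtered),
                          distance=int(fs / 500),                 # at most one GCI per 2 ms
                          prominence=0.1 * np.max(np.abs(filtered)))
    return peaks, filtered

if __name__ == "__main__":
    fs = 8000
    t = np.arange(fs) / fs
    toy = np.sin(2 * np.pi * 120 * t) * (1 + 0.3 * np.sin(2 * np.pi * 3 * t))
    gcis, _ = detect_gci_candidates(toy, fs)
    print(len(gcis), "candidate GCIs")
```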


RoutingConvNet: A Light-weight Speech Emotion Recognition Model Based on Bidirectional MFCC (RoutingConvNet: 양방향 MFCC 기반 경량 음성감정인식 모델)

  • Hyun Taek Lim;Soo Hyung Kim;Guee Sang Lee;Hyung Jeong Yang
    • Smart Media Journal / v.12 no.5 / pp.28-35 / 2023
  • In this study, we propose RoutingConvNet, a new light-weight model with fewer parameters, to improve the applicability and practicality of speech emotion recognition. To reduce the number of learnable parameters, the proposed model connects bidirectional MFCCs on a channel-by-channel basis to learn long-term emotion dependence and extract contextual features. A light-weight deep CNN is constructed for low-level feature extraction, and self-attention is used to obtain channel and spatial information from the speech signals. In addition, we apply dynamic routing to improve accuracy and to build a model that is robust to feature variations. The proposed model shows both parameter reduction and accuracy improvement on the speech emotion datasets EMO-DB, RAVDESS, and IEMOCAP, achieving 87.86%, 83.44%, and 66.06% accuracy, respectively, with about 156,000 parameters. We also propose a metric that quantifies the trade-off between the number of parameters and accuracy, for evaluating light-weight models.
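
The abstract does not give the architecture in enough detail to reproduce, but the two quantities it trades off, parameter count and accuracy, are easy to make concrete. Below is a hedged PyTorch sketch of a typical light-weight building block (a depthwise-separable convolution) together with the parameter-count helper one would use to verify a figure like the 156,000 parameters quoted above; the block is illustrative and is not RoutingConvNet itself.

```python
import torch.nn as nn

class DepthwiseSeparableBlock(nn.Module):
    """One light-weight convolution block: a depthwise conv followed by a
    pointwise (1x1) conv, a common way to cut parameter counts in
    light-weight speech CNNs (an assumed stand-in, not the paper's block)."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.pointwise(self.depthwise(x)))

def count_parameters(model):
    """Total number of learnable parameters, the quantity the paper trades off
    against accuracy."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

if __name__ == "__main__":
    block = DepthwiseSeparableBlock(32, 64)
    plain = nn.Conv2d(32, 64, kernel_size=3, padding=1)
    print("separable:", count_parameters(block))    # far fewer parameters
    print("standard :", count_parameters(plain))
```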

Speech Interface with Echo Canceller and Barge-In Functionality for Telematics Systems (텔레매틱스 시스템을 위한 반향제거 및 Barge-In 기능을 갖는 음성인터페이스)

  • Kim, Jun;Bae, Keun-Sung
    • The Journal of the Acoustical Society of Korea / v.28 no.5 / pp.483-490 / 2009
  • In this paper, we develop a speech interface with acoustic echo cancelling and barge-in functionalities for the car environment. In the echo canceller, a DT (double-talk) detection algorithm that uses the correlation coefficient between the reference and desired signals often makes DT detection errors in background noise. We reduce these errors by using the average power of the noise and echo estimated from the input signal. In addition, to allow drivers to give a speech command by interrupting the speaker output, barge-in functionality is implemented by combining DT detection with appropriate gain control of the speaker output. Through computer simulation of an assumed car environment and experiments in a real laboratory environment, the implemented speech interface shows good performance in removing acoustic echo signals in noisy conditions while operating the barge-in functionality properly.
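
A small numpy sketch of the decision logic the abstract outlines: double-talk is declared when the microphone signal correlates weakly with the far-end reference while its power sits clearly above the estimated noise-plus-echo floor, and the loudspeaker prompt is attenuated while barge-in is active. The thresholds, attenuation amount, and toy signals are assumptions, not the system's tuning.

```python
import numpy as np

def detect_double_talk(mic_frame, ref_frame, noise_power, echo_power,
                       corr_threshold=0.6, power_margin=2.0):
    """Simplified double-talk decision: low correlation between the far-end
    reference and the microphone signal, together with microphone power well
    above the estimated noise + echo floor, suggests the near-end talker is
    active (thresholds are illustrative)."""
    corr = np.corrcoef(mic_frame, ref_frame)[0, 1]
    mic_power = np.mean(mic_frame ** 2)
    return corr < corr_threshold and mic_power > power_margin * (noise_power + echo_power)

def barge_in_gain(double_talk_active, attenuation_db=12.0):
    """While barge-in is active, attenuate the loudspeaker prompt so the
    recognizer can take the driver's command."""
    return 10 ** (-attenuation_db / 20.0) if double_talk_active else 1.0

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ref = rng.normal(size=512)                       # far-end prompt frame
    echo = 0.5 * ref                                 # toy echo path
    near = rng.normal(size=512)                      # near-end (driver) speech
    mic = echo + near + 0.05 * rng.normal(size=512)
    dt = detect_double_talk(mic, ref, noise_power=0.0025, echo_power=np.mean(echo ** 2))
    print("double talk:", dt, "speaker gain:", barge_in_gain(dt))
```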

Automated Layout of PLA using CIF (CIF를 이용한 PLA의 Layout 자동화)

  • Jeong, Seung-Jeong;Yang, Yeong-Il;Gyeong, Jong-Min
    • Journal of the Korean Institute of Telematics and Electronics / v.22 no.1 / pp.14-21 / 1985
  • In this paper, a new pitch extraction method, the area comparison method, is proposed. According to the speech production model, the area of the first peak in each pitch interval of the speech signal is emphasized. By using this characteristic, the method has advantages over other pitch extraction methods: defective decisions caused by impulsive noise are minimized, and pre-filtering is unnecessary, because an integration of the signal takes place in the process.
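
The abstract leaves the exact definition of the "first peak area" open, so the sketch below only illustrates the general idea under stated assumptions: positive lobes of the waveform are integrated between zero crossings, lobes with large area are kept as pitch-pulse candidates, and the pitch period is the spacing between them. The integration step is what dampens the effect of impulsive noise.

```python
import numpy as np

def area_based_pitch_marks(speech, fs, area_ratio=0.5):
    """Integrate positive lobes between zero crossings, keep lobes whose area
    is a large fraction of the biggest lobe, and use their spacing as the
    pitch period (illustrative, not the paper's exact criterion)."""
    x = np.maximum(speech, 0.0)
    nz = x > 0
    starts = np.where(~nz[:-1] & nz[1:])[0] + 1
    ends = np.where(nz[:-1] & ~nz[1:])[0] + 1
    if len(starts) and len(ends) and ends[0] < starts[0]:
        ends = ends[1:]
    lobes = list(zip(starts, ends))
    if not lobes:
        return np.array([]), None
    areas = np.array([x[s:e].sum() for s, e in lobes])
    keep = [lobes[i] for i in range(len(lobes)) if areas[i] >= area_ratio * areas.max()]
    marks = np.array([s + np.argmax(x[s:e]) for s, e in keep])
    period = np.median(np.diff(marks)) / fs if len(marks) > 1 else None
    return marks, period

if __name__ == "__main__":
    fs = 8000
    t = np.arange(fs // 4) / fs
    toy = np.sin(2 * np.pi * 100 * t) + 0.3 * np.sin(2 * np.pi * 300 * t)
    marks, period = area_based_pitch_marks(toy, fs)
    print("estimated pitch period (s):", period)
```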


An Implementation of Real-Time Speaker Verification System on Telephone Voices Using DSP Board (DSP보드를 이용한 전화음성용 실시간 화자인증 시스템의 구현에 관한 연구)

  • Lee Hyeon Seung;Choi Hong Sub
    • MALSORI / no.49 / pp.145-158 / 2004
  • This paper aims at the implementation of a real-time speaker verification system on a DSP board. The Dialog/4 board, which is based on a microprocessor and a DSP processor, is selected so that telephone signals can be controlled easily and audio/voice signals can be processed. The speaker verification system performs signal processing and feature extraction after receiving the voice and its claimed ID. Then, by computing the likelihood ratio of the claimed speaker model to the background model, it makes a real-time decision on acceptance or rejection. For the verification experiments, a total of 15 speaker models and 6 background models are used. The experimental results show a verification accuracy rate of 99.5% when using telephone speech-based speaker models.
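
A compact sketch of the accept/reject rule described above, using Gaussian mixture models for the claimed-speaker and background models and thresholding their average log-likelihood ratio. The feature dimensions, mixture sizes, threshold, and toy data are placeholders, not the system's actual configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_gmm(features, n_components=8, seed=0):
    """Fit a GMM to a speaker's (or the background set's) feature vectors."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag",
                          random_state=seed)
    gmm.fit(features)
    return gmm

def verify(features, claimed_gmm, background_gmm, threshold=0.0):
    """Accept the claimed identity if the average log-likelihood ratio of the
    claimed speaker model to the background model exceeds a threshold."""
    llr = claimed_gmm.score(features) - background_gmm.score(features)
    return llr > threshold, llr

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    speaker_train = rng.normal(loc=1.0, size=(500, 12))     # toy MFCC-like features
    background_train = rng.normal(loc=0.0, size=(2000, 12))
    spk = train_gmm(speaker_train)
    ubm = train_gmm(background_train, n_components=16)
    test = rng.normal(loc=1.0, size=(200, 12))
    decision, llr = verify(test, spk, ubm)
    print("accept:", decision, "LLR:", round(llr, 3))
```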


Overlapped Subband-Based Independent Vector Analysis

  • Jang, Gil-Jin;Lee, Te-Won
    • The Journal of the Acoustical Society of Korea / v.27 no.1E / pp.30-34 / 2008
  • This paper presents an improvement to existing blind signal separation (BSS) methods. The proposed method models the inherent signal dependency observed in acoustic objects to separate real-world convolutive sound mixtures. The frequency-domain approach requires solving the well-known permutation problem, which has been successfully addressed by a vector representation of the sources whose multidimensional joint densities have a certain amount of dependency expressed by non-spherical distributions. For speech signals in particular, we observe strong dependencies across neighboring frequency bins and a decrease in those dependencies as the bins become farther apart. The non-spherical joint density model proposed in this paper reflects this property of real-world speech signals. Experimental results show improved performance over spherical joint density representations.
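
A rough numpy sketch of the dependency idea under stated assumptions: STFT bins are grouped into overlapping subbands, and each bin is normalised by the energy of the subbands it belongs to, so neighbouring bins are coupled strongly and distant bins only weakly. This only approximates the non-spherical joint density the paper describes; the exact prior and the full IVA update are not reproduced.

```python
import numpy as np

def overlapped_subbands(n_bins, band_size=16, hop=8):
    """Indices of overlapping groups of neighbouring frequency bins."""
    return [np.arange(s, min(s + band_size, n_bins))
            for s in range(0, n_bins - hop, hop)]

def dependency_score(source_spectra, bands):
    """Normalise each bin by the energy of the overlapping subbands it belongs
    to, coupling neighbouring bins (an approximation of a non-spherical
    multivariate score, not the paper's exact prior)."""
    n_bins, n_frames = source_spectra.shape
    norm = np.zeros((n_bins, n_frames))
    counts = np.zeros(n_bins)
    for band in bands:
        band_energy = np.sqrt(np.sum(np.abs(source_spectra[band]) ** 2, axis=0) + 1e-12)
        norm[band] += band_energy
        counts[band] += 1
    norm /= counts[:, None]
    return source_spectra / norm

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    spectra = rng.normal(size=(257, 100)) + 1j * rng.normal(size=(257, 100))
    bands = overlapped_subbands(257)
    print(dependency_score(spectra, bands).shape)
```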

On the Center Pitch Estimation by using the Spectrum Leakage Phenomenon for the Noise Corrupted Speech Signals (배경 잡음하에서 스펙트럼 누설현상을 이용한 음성신호의 중심 피치 검출)

  • Kang, Dong-Kyu;Bae, Myung-Jin;Ann, Sou-Guil
    • The Journal of the Acoustical Society of Korea / v.10 no.1 / pp.37-46 / 1991
  • The pitch estimation algorithms that have been proposed until now have difficulty detecting pitches over a wide range regardless of age or sex. Only a small deviation around the center pitch is observed in the pitch distribution diagram, since pitch is constrained by the physical limitations of the articulation mechanism. If the center pitch is used as a reference in the pitch extraction procedure, the algorithms become not only simpler in procedure but also more accurate. In this paper, we propose an algorithm that accurately detects the center pitch by using the spectrum leakage phenomenon for noise-corrupted speech signals.


A Study on Real-time Implementing of Time-Scale Modification (음성 신호 시간축 변환의 실시간 구현에 관한 연구)

  • Han, Dong-Chul;Lee, Ki-Seung;Cha, Il-Hawan;Youn, Dae-Hee
    • The Journal of the Acoustical Society of Korea / v.14 no.2 / pp.50-61 / 1995
  • A time scale modification method that yields rate-modified speech while conserving the characteristics of the speech was implemented in real time on a general-purpose digital signal processor. Time scale modification changes only the speaking rate, which produces a time difference between the input signal and the modified signal and makes straightforward real-time operation impossible. In this paper, a system was implemented that removes this time difference between the input and modified signals. Speech signals slowed down or sped up by a physical time scale modification, such as adjusting the motor speed of a cassette tape recorder, were used as the input. Such physical modification, which controls only the playback speed of the cassette tape player, distorts the pitch period of the original speech. In this study, a real-time system was implemented in which the pitch-distorted speech is first restored to its original pitch by fractional-sample pitch shifting with an FIR filter, and the restored signal is then time-scale modified to match the cassette tape recorder motor speed using SOLA time-scale modification. In experiments with speech signals modified by the proposed method, results obtained on a 16-bit ADSP-2101 processor and computer simulations using floating-point operations showed about the same average frame signal-to-noise ratio of about 20 dB.
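
A minimal Python sketch of the SOLA stage mentioned above: frames taken at the analysis hop are re-spaced at the synthesis hop, each new frame is shifted to the lag that maximises cross-correlation with the current output tail, and the overlap is cross-faded. The fractional-sample FIR pitch-restoration stage and all real-time/DSP considerations are omitted, and the frame parameters are illustrative.

```python
import numpy as np

def sola_time_scale(x, rate, frame_len=1024, overlap=256, search=128):
    """Synchronized overlap-add: rate > 1 speeds speech up, rate < 1 slows it
    down, while the pitch is preserved because frames are not resampled."""
    synthesis_hop = frame_len - overlap
    analysis_hop = int(round(synthesis_hop * rate))
    out = np.array(x[:frame_len], dtype=float)
    pos = analysis_hop
    while pos + frame_len + search < len(x):
        frame = np.asarray(x[pos:pos + frame_len + search], dtype=float)
        tail = out[-overlap:]
        # find the lag that best aligns the new frame with the output tail
        best_lag, best_corr = 0, -np.inf
        for lag in range(search):
            corr = np.dot(tail, frame[lag:lag + overlap])
            if corr > best_corr:
                best_corr, best_lag = corr, lag
        frame = frame[best_lag:best_lag + frame_len]
        fade = np.linspace(0.0, 1.0, overlap)
        out[-overlap:] = tail * (1 - fade) + frame[:overlap] * fade   # cross-fade
        out = np.concatenate([out, frame[overlap:]])
        pos += analysis_hop
    return out

if __name__ == "__main__":
    fs = 16000
    t = np.arange(2 * fs) / fs
    toy = np.sin(2 * np.pi * 220 * t)
    slow = sola_time_scale(toy, rate=0.5)           # roughly twice as long, same pitch
    print(len(toy), len(slow))
```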
