• Title/Summary/Keyword: Speech signals


On Reaction Signals

  • Hatanaka, Takami
    • Proceedings of the KSPS conference / 2000.07a / pp.301-311 / 2000
  • The purpose of this paper is to explore the use of reaction signals by Japanese and English speakers. After collecting data from Japanese speakers and from American and British English speakers, I examined the data and chose to focus on five signals: ah, eh, oh, m, and əːm. At first I assumed that the first three resembled one another in form, tone, and meaning, while the other two occur frequently only in English. Reading the data in more detail, however, revealed the reason for the very frequent use of the signal eh by Japanese speakers: eh serves as a kind of substitute for a real word. A similar linguistic phenomenon is seen in the use of m, and m appears to differ from əːm in function according to whether the speaker is talkative or not. Moreover, American students learning Japanese opened their Japanese utterances with an English reaction signal, and the reverse phenomenon was found with Japanese students speaking English, which suggests that reaction signals are used spontaneously even though they carry various tones and meanings.

Spontaneous Speech Emotion Recognition Based On Spectrogram With Convolutional Neural Network (CNN 기반 스펙트로그램을 이용한 자유발화 음성감정인식)

  • Guiyoung Son;Soonil Kwon
    • The Transactions of the Korea Information Processing Society / v.13 no.6 / pp.284-290 / 2024
  • Speech emotion recognition (SER) is a technique used to analyze a speaker's voice patterns, including vibration, intensity, and tone, to determine their emotional state. Interest in artificial intelligence (AI) techniques has increased, and they are now widely used in medicine, education, industry, and the military. Nevertheless, existing research has attained impressive results by utilizing acted speech recorded by skilled actors in controlled environments for various scenarios. There is a mismatch between acted and spontaneous speech, since acted speech includes more explicit emotional expressions than spontaneous speech; for this reason, spontaneous speech emotion recognition remains a challenging task. This paper aims to conduct emotion recognition and improve performance using spontaneous speech data. To this end, we implement deep learning-based speech emotion recognition using the VGG (Visual Geometry Group) network after converting 1-dimensional audio signals into 2-dimensional spectrogram images. The experimental evaluations are performed on the Korean spontaneous emotional speech database from AI-Hub, consisting of 7 emotions: joy, love, anger, fear, sadness, surprise, and neutral. As a result, we achieved average accuracies of 83.5% and 73.0% for adults and young people, respectively, using the 2-dimensional time-frequency spectrogram. In conclusion, our findings demonstrate that the suggested framework outperforms current state-of-the-art techniques for spontaneous speech and shows promising performance despite the difficulty of quantifying emotional expression in spontaneous speech.
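
A minimal sketch of the pipeline this abstract describes: a 1-D audio signal is converted into a 2-D log-mel spectrogram and classified by a small VGG-style CNN. The mel parameters, layer sizes, and 7-class head are illustrative assumptions, not the paper's exact configuration.

```python
import librosa
import torch
import torch.nn as nn

EMOTIONS = ["joy", "love", "anger", "fear", "sadness", "surprise", "neutral"]

def audio_to_spectrogram(path: str, sr: int = 16000) -> torch.Tensor:
    """Convert a 1-D waveform into a 2-D log-mel spectrogram image."""
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    log_mel = librosa.power_to_db(mel)                     # 2-D time-frequency image
    return torch.from_numpy(log_mel).float()[None, None]  # (batch, channel, mel, time)

class MiniVGG(nn.Module):
    """Two VGG-style conv blocks followed by a 7-way emotion classifier (assumed sizes)."""
    def __init__(self, n_classes: int = len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        h = self.features(x).mean(dim=(2, 3))  # global average pooling over time-frequency
        return self.classifier(h)
```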

Implementation of G.726 ADPCM Dual Rate Speech Codec of 16Kbps and 40Kbps (16Kbps와 40Kbps의 Dual Rate G.726 ADPCM 음성 codec구현)

  • Kim Jae-Oh;Han Kyong-Ho
    • Journal of IKEEE / v.2 no.2 s.3 / pp.233-238 / 1998
  • In this paper, the implementation of a dual rate ADPCM speech codec using the G.726 16 Kbps and 40 Kbps coding algorithms is handled. For small signals, the low rate 16 Kbps coding algorithm shows almost the same SNR as the high rate 40 Kbps coding algorithm, while for large signals the high rate 40 Kbps algorithm shows a higher SNR than the low rate 16 Kbps algorithm. To obtain a good trade-off between data rate and synthesized speech quality, we apply the low rate 16 Kbps coding to small signals and the high rate 40 Kbps coding to large signals. Various threshold values determining the rate are tested to find a good trade-off between data rate and speech quality. The simulation results show good speech quality at a low average rate compared with the fixed 16 Kbps and 40 Kbps codecs.
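
A sketch of the rate-selection idea in this abstract: frames whose level falls below a threshold are coded at 16 Kbps, louder frames at 40 Kbps. The frame length and threshold value are illustrative assumptions; the G.726 encoder itself is not reproduced here.

```python
import numpy as np

def select_rates(samples: np.ndarray, frame_len: int = 80,
                 threshold: float = 0.05) -> list[int]:
    """Return a per-frame bit rate (16 or 40 Kbps) chosen from the frame's RMS level."""
    rates = []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        rms = np.sqrt(np.mean(frame ** 2))
        rates.append(16 if rms < threshold else 40)  # small signal -> low rate suffices
    return rates
```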

Automatic Phonetic Segmentation of Korean Speech Signal Using Phonetic-acoustic Transition Information (음소 음향학적 변화 정보를 이용한 한국어 음성신호의 자동 음소 분할)

  • 박창목;왕지남
    • The Journal of the Acoustical Society of Korea / v.20 no.8 / pp.24-30 / 2001
  • This article is concerned with automatic segmentation of Korean speech signals. All transition cases between phonetic units are classified into 3 types, and a different strategy is applied to each type. Type 1 is the discrimination of silence, voiced speech, and unvoiced speech: histogram analysis of indicators consisting of wavelet coefficients and the SVF (Spectral Variation Function) over wavelet coefficients is used for type 1 segmentation. Type 2 is the discrimination of adjacent vowels: vowel transitions can be characterized by the spectrogram, so given the phonetic transcription and transition-pattern spectrograms, speech signals containing consecutive vowels are automatically segmented by template matching. Type 3 is the discrimination of vowels and voiced consonants: the smoothed short-time RMS energy of the wavelet low-pass component and the SVF over cepstral coefficients are adopted for type 3 segmentation. The experiment is performed on a 342-word utterance set gathered from 6 speakers, and the results show the validity of the method.
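
A sketch of one ingredient of the method: a Spectral Variation Function that peaks where adjacent analysis frames differ most, marking candidate phone boundaries. The FFT front end below is an assumption for self-containment; the paper computes the SVF over wavelet and cepstral coefficients instead.

```python
import numpy as np

def svf(signal: np.ndarray, frame_len: int = 256, hop: int = 128) -> np.ndarray:
    """Spectral Variation Function: distance between consecutive frame spectra."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    spectra = [np.abs(np.fft.rfft(f * np.hanning(frame_len))) for f in frames]
    out = []
    for prev, cur in zip(spectra, spectra[1:]):
        # 1 minus the cosine similarity of adjacent magnitude spectra
        num = np.dot(prev, cur)
        den = np.linalg.norm(prev) * np.linalg.norm(cur) + 1e-12
        out.append(1.0 - num / den)
    return np.array(out)  # local maxima suggest segment boundaries
```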

Minimum Classification Error Training to Improve Discriminability of PCMM-Based Feature Compensation (PCMM 기반 특징 보상 기법에서 변별력 향상을 위한 Minimum Classification Error 훈련의 적용)

  • Kim Wooil;Ko Hanseok
    • The Journal of the Acoustical Society of Korea / v.24 no.1 / pp.58-68 / 2005
  • In this paper, we propose a scheme to improve the discriminative property of feature compensation methods for robust speech recognition under noisy environments. The noisy speech model estimation used in existing feature compensation methods does not guarantee the computation of posterior probabilities that discriminate reliably among the Gaussian components. Estimation of posterior probabilities is a crucial step in determining the discriminative factor of the Gaussian models, which in turn determines the intelligibility of the restored speech signals. The proposed scheme employs minimum classification error (MCE) training to estimate the parameters of the noisy speech model. For applying the MCE training, we propose to identify and determine the 'competing components' that are expected to affect the discriminative ability. The proposed method is applied to feature compensation based on the parallel combined mixture model (PCMM). The performance is examined on the Aurora 2.0 database and on speech recorded inside a car under real driving conditions. The experimental results show improved recognition performance in both the simulated environments and the real-life conditions, verifying the effectiveness of the proposed scheme for increasing the performance of robust speech recognition systems.
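
A sketch of the general MCE criterion the paper applies: a misclassification measure compares the score of the correct class against the best competing class (the "competing components"), and a sigmoid turns it into a smooth, differentiable loss. The gamma smoothing parameter is an assumption.

```python
import numpy as np

def mce_loss(scores: np.ndarray, correct: int, gamma: float = 1.0) -> float:
    """scores: log-likelihood of each Gaussian component for one observation."""
    g_correct = scores[correct]
    competitors = np.delete(scores, correct)
    g_best_competitor = np.max(competitors)   # strongest competing component
    d = g_best_competitor - g_correct         # d > 0 means misclassification
    return 1.0 / (1.0 + np.exp(-gamma * d))   # smooth approximation of 0/1 error
```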

A Study on Real-Time Walking Action Control of Biped Robot with Twenty Six Joints Based on Voice Command (음성명령기반 26관절 보행로봇 실시간 작업동작제어에 관한 연구)

  • Jo, Sang Young;Kim, Min Sung;Yang, Jun Suk;Koo, Young Mok;Jung, Yang Geun;Han, Sung Hyun
    • Journal of Institute of Control, Robotics and Systems / v.22 no.4 / pp.293-300 / 2016
  • Voice recognition is one of the most convenient methods of communication between humans and robots. This study proposes a speech recognition method using recognizers based on the Hidden Markov Model (HMM), combined with techniques to enhance biped robot control. In the past, Artificial Neural Networks (ANN) and Dynamic Time Warping (DTW) were used, but they are now less commonly applied to speech recognition systems. This research confirms that the HMM, an accepted high-performance technique, can be successfully employed to model speech signals, and that high recognition accuracy can be obtained with HMMs. Apart from speech modeling techniques, multiple feature extraction methods have been studied to detect speech stresses caused by emotion and the environment and thereby improve recognition rates. The procedure consists of 2 parts: recognizing robot commands using multiple HMM recognizers, and sending the recognized commands to control the robot. In this paper, a practical voice recognition system which can recognize many task commands is proposed. The proposed system consists of a general purpose microprocessor and a voice recognition processor which can recognize a limited number of voice patterns. Simulations and experiments illustrate the reliability of the voice recognition rates for application to the manufacturing process.
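
A sketch of a multiple-HMM command recognizer of the kind described above: one Gaussian HMM is trained per command word, and an utterance is assigned to the model with the highest log-likelihood. The feature type (e.g., MFCC frames), state count, and use of hmmlearn are assumptions made for brevity.

```python
import numpy as np
from hmmlearn import hmm

def train_command_models(data: dict[str, list[np.ndarray]]) -> dict[str, hmm.GaussianHMM]:
    """data maps a command word to a list of (n_frames, n_features) arrays."""
    models = {}
    for command, examples in data.items():
        X = np.vstack(examples)                 # stack all training utterances
        lengths = [len(e) for e in examples]    # per-utterance frame counts
        model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=20)
        model.fit(X, lengths)
        models[command] = model
    return models

def recognize(features: np.ndarray, models: dict[str, hmm.GaussianHMM]) -> str:
    """Pick the command whose HMM assigns the utterance the highest log-likelihood."""
    return max(models, key=lambda cmd: models[cmd].score(features))
```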

Parkinson's disease diagnosis using speech signal and deep residual gated recurrent neural network (음성 신호와 심층 잔류 순환 신경망을 이용한 파킨슨병 진단)

  • Shin, Seung-Su;Kim, Gee Yeun;Koo, Bon Mi;Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea / v.38 no.3 / pp.308-313 / 2019
  • Parkinson's disease, one of the three major diseases of old age, causes speech disorders in more than 70 % of its patients, and diagnostic methods for Parkinson's disease based on speech signals have recently been devised. In this paper, we propose a method for diagnosing Parkinson's disease based on a deep residual gated recurrent neural network using speech features. In the proposed method, speech features for diagnosing Parkinson's disease are selected and applied to the deep residual gated recurrent neural network to classify Parkinson's disease patients. The proposed network, an algorithm combining residual learning with a deep gated recurrent neural network, achieves a higher recognition rate than traditional methods in Parkinson's disease diagnosis.
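
A sketch of a deep residual gated recurrent block of the kind the abstract names: each GRU layer's output is added back to its input (residual learning), letting gradients pass through deep stacks. The layer count, hidden size, and binary patient/healthy head are assumptions.

```python
import torch
import torch.nn as nn

class ResidualGRU(nn.Module):
    def __init__(self, feat_dim: int, hidden: int = 128, layers: int = 4,
                 n_classes: int = 2):
        super().__init__()
        self.proj = nn.Linear(feat_dim, hidden)
        self.grus = nn.ModuleList(
            nn.GRU(hidden, hidden, batch_first=True) for _ in range(layers))
        self.head = nn.Linear(hidden, n_classes)  # e.g., patient vs. healthy

    def forward(self, x):                 # x: (batch, time, feat_dim)
        h = self.proj(x)
        for gru in self.grus:
            out, _ = gru(h)
            h = h + out                   # residual connection around each GRU layer
        return self.head(h.mean(dim=1))   # average over time, then classify
```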

Gender Analysis in Elderly Speech Signal Processing (노인음성신호처리에서의 젠더 분석)

  • Lee, JiYeoun
    • Journal of Digital Convergence / v.16 no.10 / pp.351-356 / 2018
  • Changes in the vocal cords due to aging can change the frequency of speech, and the speech signals of the elderly can be automatically distinguished from normal speech signals through various analyses. The purpose of this study is to provide a tool that is easily accessible to elderly and disabled people, who can be excluded from a rapidly changing technological society, and to improve voice recognition performance. As sex analysis, the sex of the subjects was reported and equal numbers of female and male voice samples were used. In addition, gender analysis was applied by targeting the voices of the elderly rather than voices of all ages. Finally, we applied a review methodology of standards and reference models to reduce gender differences. Voices of 10 Korean women and 10 men aged 70 to 80 were used in this study. Comparing F0 values extracted directly from the waveform with F0 values extracted by the TF32 and WaveSurfer speech analysis programs, WaveSurfer analyzed the F0 of elderly voices better than TF32; nevertheless, a voice analysis program designed for the elderly is still needed. In conclusion, analyzing the voices of the elderly will improve the speech recognition and synthesis capabilities of existing smart medical systems.
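
A sketch of direct-from-waveform F0 estimation of the kind compared against TF32 and WaveSurfer above: autocorrelation over a voiced frame, searching lags that correspond to a plausible F0 range. The 60-400 Hz search range is an assumption, not the study's setting.

```python
import numpy as np

def estimate_f0(frame: np.ndarray, sr: int, fmin: float = 60.0,
                fmax: float = 400.0) -> float:
    """Estimate F0 (Hz) of a voiced frame by picking the strongest autocorrelation lag."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]  # non-negative lags
    lag_min, lag_max = int(sr / fmax), int(sr / fmin)
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
    return sr / lag
```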

Highband Coding Method Using Matching Pursuit Estimation and CELP Coding for Wideband Speech Coder (광대역 음성부호화기를 위한 매칭퍼슈잇 알고리즘과 CELP 방법을 이용한 고대역 부호화 방법)

  • Jeong Gyu-Hyeok;Ahn Yeong-Uk;Kim Jong-Hark;Shin Jae-Hyun;Seo Sang-Won;Hwang In-Kwan;Lee In-Sung
    • The Journal of the Acoustical Society of Korea / v.25 no.1 / pp.21-29 / 2006
  • In this paper, a split-band wideband speech coder and its highband coding method are proposed. The coder uses a split-band approach, in which the wideband input speech signal is split into two equal frequency bands, 0-4 kHz and 4-8 kHz. The lowband is coded by the 11.8 kb/s G.729 Annex E coder and the highband by the proposed coding method. After LPC analysis, the highband is handled in one of two modes according to the properties of the signal. In stationary mode, the highband signals are compressed by a mixed excitation model combining the CELP algorithm and the MP (Matching Pursuit) algorithm; the other signals are coded by the CELP algorithm alone. We compare the performance of the new wideband speech coder with that of the G.722 48 kbps SB-ADPCM and G.722.2 12.85 kbps coders in a subjective test. The simulation results show that the performance of the proposed wideband speech coder is better than that of 48 kbps G.722, though it does not exceed that of 12.85 kbps G.722.2.
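
A sketch of the Matching Pursuit step used in the stationary highband mode: the target signal is greedily decomposed over a dictionary of unit-norm atoms, keeping only the few strongest atoms. The dictionary contents and atom count are assumptions; the paper's excitation model mixes this with CELP.

```python
import numpy as np

def matching_pursuit(signal: np.ndarray, dictionary: np.ndarray,
                     n_atoms: int = 5):
    """dictionary: (n_total_atoms, signal_len) rows, each with unit L2 norm."""
    residual = signal.copy()
    selection = []                                 # (atom index, gain) pairs
    for _ in range(n_atoms):
        correlations = dictionary @ residual       # inner product with every atom
        best = int(np.argmax(np.abs(correlations)))
        gain = correlations[best]
        residual = residual - gain * dictionary[best]  # remove the chosen component
        selection.append((best, gain))
    return selection, residual
```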

A Phase-related Feature Extraction Method for Robust Speaker Verification (열악한 환경에 강인한 화자인증을 위한 위상 기반 특징 추출 기법)

  • Kwon, Chul-Hong
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.3 / pp.613-620 / 2010
  • Additive noise and channel distortion strongly degrade the performance of speaker verification systems because they distort the speech features. This distortion causes a mismatch between the training and recognition conditions, such that acoustic models trained with clean speech do not model noisy and channel-distorted speech accurately. This paper presents a phase-related feature extraction method to improve the robustness of speaker verification systems. The instantaneous frequency is computed from the phase of the speech signal, and features are obtained from the histogram of the instantaneous frequency. Experimental results show that the proposed technique offers significant improvements over the standard techniques in both clean and adverse testing environments.
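
A sketch of the phase-related feature extraction described above: the instantaneous frequency is obtained from the unwrapped phase of the analytic signal, and a histogram over it serves as the feature vector. The histogram bin count is an assumption.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency_features(x: np.ndarray, sr: int,
                                     n_bins: int = 32) -> np.ndarray:
    """Histogram of instantaneous frequency, derived from the signal's phase."""
    analytic = hilbert(x)                             # analytic signal via Hilbert transform
    phase = np.unwrap(np.angle(analytic))             # continuous phase track
    inst_freq = np.diff(phase) * sr / (2.0 * np.pi)   # instantaneous frequency in Hz
    hist, _ = np.histogram(inst_freq, bins=n_bins,
                           range=(0.0, sr / 2.0), density=True)
    return hist
```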