• Title/Summary/Keyword: cepstral


A MFCC-based CELP Speech Coder for Server-based Speech Recognition in Network Environments (네트워크 환경에서 서버용 음성 인식을 위한 MFCC 기반 음성 부호화기 설계)

  • Lee, Gil-Ho; Yoon, Jae-Sam; Oh, Yoo-Rhee; Kim, Hong-Kook
    • MALSORI, no.54, pp.27-43, 2005
  • Existing standard speech coders provide high-quality speech communication, but they degrade the performance of speech recognition systems that operate on the speech reconstructed by the coders. The main cause of this degradation is that the spectral envelope parameters used in speech coding are optimized for speech quality rather than for recognition performance. For example, mel-frequency cepstral coefficients (MFCCs) are generally known to yield better speech recognition performance than linear prediction coefficients (LPCs), the typical parameter set in speech coding. In this paper, we propose a speech coder that uses MFCC instead of LPC to improve the performance of a server-based speech recognition system in network environments. The main challenge in using MFCC, however, is developing an efficient MFCC quantization scheme at a low bit rate. First, we exploit the inter-frame correlation of MFCCs, which leads to predictive quantization of MFCC. Second, a safety-net scheme is proposed to make the MFCC-based speech coder robust to channel errors. As a result, we propose an 8.7 kbps MFCC-based CELP coder. A PESQ test shows that the proposed coder achieves speech quality comparable to 8 kbps G.729, while speech recognition using the proposed coder outperforms recognition using G.729.

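The inter-frame predictive quantization with a safety-net described in the abstract above can be sketched as follows. This is a minimal illustration, not the authors' actual coder: the uniform quantizer, step sizes, first-order predictor, and distortion-based mode switch are all assumptions.

```python
# Minimal sketch of inter-frame predictive quantization of MFCC vectors
# with a safety-net switch (hypothetical uniform quantizer and first-order
# predictor; the paper's actual quantizer design is more elaborate).

STEP_PRED = 0.25   # finer step for the (small) prediction residual
STEP_SAFE = 1.0    # coarser step for the safety-net (direct) mode

def quantize(x, step):
    return [step * round(v / step) for v in x]

def encode_frame(mfcc, prev_decoded):
    """Choose predictive or safety-net mode per frame."""
    # Predictive mode: quantize the residual against the previous decoded
    # frame, exploiting inter-frame correlation.
    residual = [m - p for m, p in zip(mfcc, prev_decoded)]
    pred = [p + r for p, r in zip(prev_decoded, quantize(residual, STEP_PRED))]
    # Safety-net mode: quantize the MFCC vector directly, so a channel
    # error in the previous frame cannot propagate.
    safe = quantize(mfcc, STEP_SAFE)
    err_pred = sum((m - d) ** 2 for m, d in zip(mfcc, pred))
    err_safe = sum((m - d) ** 2 for m, d in zip(mfcc, safe))
    return ("pred", pred) if err_pred <= err_safe else ("safe", safe)

prev = [10.0, -2.0, 0.5]
mode, decoded = encode_frame([10.1, -1.9, 0.4], prev)  # strongly correlated
print(mode)  # "pred": predictive mode wins when inter-frame correlation is high
```

When consecutive frames are similar, the residual quantizer wins; after a frame loss, the direct mode limits error propagation.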

Speech Query Recognition for Tamil Language Using Wavelet and Wavelet Packets

  • Iswarya, P.; Radha, V.
    • Journal of Information Processing Systems, v.13 no.5, pp.1135-1148, 2017
  • Speech recognition is one of the fascinating fields in computer science. The accuracy of a speech recognition system may degrade due to noise in the speech signal, so noise removal is an essential step in an Automatic Speech Recognition (ASR) system; this paper proposes a new noise-removal technique called combined thresholding. Feature extraction is the process of converting the acoustic signal into a compact set of parameters. This paper also improves Mel-Frequency Cepstral Coefficient (MFCC) features by replacing the Discrete Fourier Transform (DFT) block with the Discrete Wavelet Packet Transform (DWPT) to provide more efficient signal analysis. Because the resulting feature vectors vary in size, a Self-Organizing Map (SOM) is used to choose the correct feature-vector length. Since a single classifier does not provide enough accuracy, this research proposes an Ensemble Support Vector Machine (ESVM) classifier that takes the fixed-length feature vector from the SOM as input; the combination is termed ESVM_SOM. The experimental results showed that the proposed methods outperform the existing methods.
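The abstract does not spell out the "combined thresholding" rule, but threshold-based wavelet denoising schemes build on two standard rules, hard and soft thresholding, sketched below. The coefficient values and threshold are arbitrary illustrations, not the paper's choices.

```python
# Standard hard and soft thresholding of wavelet(-packet) coefficients,
# the building blocks of threshold-based denoising. Coefficients below
# the threshold t are treated as noise and zeroed.

def hard_threshold(coeffs, t):
    # Keep coefficients whose magnitude exceeds t, zero the rest.
    return [c if abs(c) > t else 0.0 for c in coeffs]

def soft_threshold(coeffs, t):
    # Additionally shrink surviving coefficients toward zero by t.
    return [(abs(c) - t) * (1 if c > 0 else -1) if abs(c) > t else 0.0
            for c in coeffs]

c = [0.1, -2.0, 0.4, 3.0, -0.2]
print(hard_threshold(c, 0.5))  # [0.0, -2.0, 0.0, 3.0, 0.0]
print(soft_threshold(c, 0.5))  # [0.0, -1.5, 0.0, 2.5, 0.0]
```

Hard thresholding preserves the magnitude of large coefficients; soft thresholding trades a small bias for smoother reconstructions.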

Audio Fingerprint Retrieval Method Based on Feature Dimension Reduction and Feature Combination

  • Zhang, Qiu-yu; Xu, Fu-jiu; Bai, Jian
    • KSII Transactions on Internet and Information Systems (TIIS), v.15 no.2, pp.522-539, 2021
  • To address the problems of existing audio fingerprinting methods on long speech segments, such as overly large fingerprint dimension, poor robustness, and low retrieval accuracy and efficiency, a robust audio fingerprint retrieval method based on feature dimension reduction and feature combination is proposed. First, the mel-frequency cepstral coefficients (MFCC) and linear prediction cepstrum coefficients (LPCC) of the original speech are extracted, and the MFCC and LPCC feature matrices are combined. Second, column dimension reduction is performed using an information-entropy-based method, and row dimension reduction is then applied to the reduced matrix using an energy-based method. Finally, the audio fingerprint is constructed from the combined feature matrix after dimension reduction. During retrieval, matching is performed using the normalized Hamming distance. Experimental results show that the proposed method yields a smaller fingerprint dimension and better robustness for long speech segments, and achieves higher retrieval efficiency while maintaining high recall and precision.
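The normalized Hamming distance used for matching above is simply the fraction of differing bits between two binary fingerprints. The bit strings and the decision threshold below are toy illustrations, not fingerprints produced by the paper's MFCC/LPCC-based construction.

```python
# Normalized Hamming distance (bit error rate) between two binary audio
# fingerprints of equal length, as used for matching in fingerprint
# retrieval.

def normalized_hamming(fp1, fp2):
    assert len(fp1) == len(fp2), "fingerprints must be the same length"
    diff = sum(b1 != b2 for b1, b2 in zip(fp1, fp2))
    return diff / len(fp1)

query  = [1, 0, 1, 1, 0, 0, 1, 0]
stored = [1, 0, 1, 0, 0, 1, 1, 0]
d = normalized_hamming(query, stored)
print(d)  # 0.25: 2 of 8 bits differ; a match is declared when the
          # distance falls below a chosen threshold
```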

Effects of Laryngeal Massage on Muscle Tension Dysphonia: A Systematic Review and Meta-Analysis (근긴장성 발성장애의 후두마사지 효과: 체계적 고찰 및 메타분석)

  • Kim, Jaeock
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics, v.32 no.2, pp.64-74, 2021
  • Background and Objectives: This study investigated the effects of laryngeal massage on voice quality and articulation in muscle tension dysphonia (MTD). Materials and Method: A systematic review of articles published between January 2000 and December 2020 in Cochrane, PubMed, ScienceDirect, SpringerLink, ERIC, and Naver Academic was conducted. From the 2094 articles identified, 10 peer-reviewed articles were included in a meta-analysis. Mean effect sizes of the variables related to voice quality (jitter, shimmer, harmonic-to-noise ratio or noise-to-harmonic ratio, high-F0, low-I, cepstral peak prominence) and articulation (F1, F2, F1 slope, F2 slope) were calculated using Hedges' g. Results: Meta-analysis of the selected articles showed that laryngeal massage had medium to large effects on all voice quality and articulation variables except high-F0 and F1 slope in MTD patients. Conclusion: This study provides comprehensive clinical evidence that applying laryngeal massage to MTD patients is highly desirable.
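The Hedges' g effect size used in the meta-analysis above is Cohen's d computed with the pooled standard deviation, multiplied by a small-sample correction factor. The group statistics below are made-up numbers for illustration, not data from the reviewed studies.

```python
# Hedges' g for a two-group comparison: standardized mean difference
# with the pooled SD, corrected for small-sample bias by the factor J.
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    # Pooled standard deviation across the two groups.
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2)
                         / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    j = 1 - 3 / (4 * (n1 + n2) - 9)   # small-sample bias correction
    return j * d

# Hypothetical pre/post groups with equal SDs of 1.0 and n = 15 each.
g = hedges_g(m1=2.0, s1=1.0, n1=15, m2=1.0, s2=1.0, n2=15)
print(round(g, 3))  # 0.973: slightly below the uncorrected d of 1.0
```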

A cable tension identification technology using percussion sound

  • Wang, Guowei; Lu, Wensheng; Yuan, Cheng; Kong, Qingzhao
    • Smart Structures and Systems, v.29 no.3, pp.475-484, 2022
  • The loss of cable tension in civil infrastructure reduces structural bearing capacity and causes harmful deformation of structures. Currently, most structural health monitoring (SHM) approaches for cables rely on contact transducers. This paper proposes a cable tension identification technology using percussion sound, which provides a fast determination of steel cable tension without physical contact between cables and sensors. Notably, inspired by the concept of tensioning strings for piano tuning, the proposed technology predicts the cable tension value by deep-learning-assisted classification of the "percussion" sound produced by tapping a steel cable. To simulate the non-linear mapping of human ears to sound and to better quantify the minor changes in the high-frequency bands of the sound spectrum generated by percussions, Mel-frequency cepstral coefficients (MFCCs) were extracted as acoustic features to train the deep learning network. A convolutional neural network (CNN) with four convolutional layers and two global pooling layers was employed to identify the cable tension within a designed range. Moreover, theoretical and finite element method (FEM) analyses were conducted to prove the feasibility of the proposed technology. Finally, the identification performance of the proposed technology was experimentally investigated. Overall, the results show that the proposed percussion-based technology has great potential for estimating cable tension for in-situ structural safety assessment.

Emotion Recognition in Arabic Speech from Saudi Dialect Corpus Using Machine Learning and Deep Learning Algorithms

  • Hanaa Alamri; Hanan S. Alshanbari
    • International Journal of Computer Science & Network Security, v.23 no.8, pp.9-16, 2023
  • Speech can actively convey feelings and attitudes through words, so it is important for researchers to identify both the emotional content of speech signals and the type of emotion the speech expresses. In this study, we examined an emotion recognition system using an Arabic database, specifically in the Saudi dialect, drawn from a YouTube channel called Telfaz11. The four emotions examined were anger, happiness, sadness, and neutral. In our experiments, we extracted features from the audio signals, such as the Mel Frequency Cepstral Coefficient (MFCC) and Zero-Crossing Rate (ZCR), and then classified emotions using several algorithms: machine learning algorithms (Support Vector Machine (SVM) and K-Nearest Neighbor (KNN)) and deep learning algorithms (Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM)). Our experiments showed that the MFCC features combined with the CNN model obtained the best accuracy of 95%, demonstrating the effectiveness of this classification system in recognizing spoken Arabic emotions.
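The zero-crossing rate mentioned alongside MFCC above is the simplest of these features: the fraction of adjacent sample pairs where the signal changes sign. The frame below is a toy waveform, not real speech from the corpus.

```python
# Zero-crossing rate (ZCR) of one signal frame: the proportion of
# sample-to-sample transitions where the waveform crosses zero.

def zero_crossing_rate(frame):
    crossings = sum(
        1 for a, b in zip(frame, frame[1:])
        if (a >= 0) != (b >= 0)   # sign change between adjacent samples
    )
    return crossings / (len(frame) - 1)

frame = [0.3, -0.1, -0.4, 0.2, 0.5, -0.2]
print(zero_crossing_rate(frame))  # 0.6: 3 sign changes over 5 intervals
```

High ZCR typically indicates noisy or fricative-like content, low ZCR voiced content, which is why it complements spectral features such as MFCC.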

New Temporal Features for Cardiac Disorder Classification by Heart Sound (심음 기반의 심장질환 분류를 위한 새로운 시간영역 특징)

  • Kwak, Chul; Kwon, Oh-Wook
    • The Journal of the Acoustical Society of Korea, v.29 no.2, pp.133-140, 2010
  • We improve the performance of cardiac disorder classification by adding new temporal features extracted from continuous heart sound signals. We add three kinds of novel temporal features to a conventional feature set based on mel-frequency cepstral coefficients (MFCC): the heart sound envelope, murmur probabilities, and murmur amplitude variation. In cardiac disorder classification and detection experiments, we evaluate the contribution of the proposed features to classification accuracy and select proper temporal features using the sequential feature selection method. The selected features are shown to improve classification accuracy significantly and consistently for neural-network-based pattern classifiers such as the multi-layer perceptron (MLP), support vector machine (SVM), and extreme learning machine (ELM).
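One simple way to compute a heart-sound envelope like the feature named above is full-wave rectification followed by a moving average. The window length and signal are illustrative assumptions, not the paper's exact envelope extractor.

```python
# Sketch of an amplitude envelope: rectify the signal, then smooth it
# with a short centered moving average (edges use a truncated window).

def envelope(signal, window=3):
    rectified = [abs(x) for x in signal]
    half = window // 2
    out = []
    for i in range(len(rectified)):
        lo, hi = max(0, i - half), min(len(rectified), i + half + 1)
        out.append(sum(rectified[lo:hi]) / (hi - lo))
    return out

s = [0.0, 0.9, -0.6, 0.3, -0.9, 0.6]
env = envelope(s)
print([round(v, 2) for v in env])  # [0.45, 0.5, 0.6, 0.6, 0.6, 0.75]
```

The smoothed envelope tracks the overall intensity contour of the heart sound, which is what makes it useful as a temporal feature alongside frame-level MFCCs.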

Voice quality distinctions of the three-way stop contrast under prosodic strengthening in Korean

  • Jiyoung Jang; Sahyang Kim; Taehong Cho
    • Phonetics and Speech Sciences, v.16 no.1, pp.17-24, 2024
  • The Korean three-way stop contrast (lenis, aspirated, fortis) is currently undergoing a sound change, such that the primary cue distinguishing lenis and aspirated stops is shifting from voice onset time (VOT) to F0. Despite recent discussions of this shift, research on voice quality, traditionally considered an additional cue signaling the contrast, remains sparse. This study investigated the extent to which the associated voice quality [as reflected in the acoustic measurements of H1*-H2*, H1*-A1*, and cepstral peak prominence (CPP)] contributes to the three-way stop contrast, and how its realization is conditioned by prominence- vs. boundary-induced prosodic strengthening amid the ongoing sound change. Results for 12 native Korean speakers indicate that there was a substantial distinction in voice quality among the three stop categories, with the breathiness of the vowel being the greatest after the lenis, intermediate after the aspirated, and least after the fortis stops, indicating the role of voice quality in the maintenance of the three-way stop contrast. Furthermore, prosodic strengthening has different effects on the contrast and contributes to the enhancement of the phonological contrast contingent on whether it is induced by prominence or boundary.

Speaker Identification Using Dynamic Time Warping Algorithm (동적 시간 신축 알고리즘을 이용한 화자 식별)

  • Jeong, Seung-Do
    • Journal of the Korea Academia-Industrial cooperation Society, v.12 no.5, pp.2402-2409, 2011
  • In addition to conveying information, the voice carries acoustic properties that distinguish one speaker from another. Speaker recognition is the task of determining who is speaking from these acoustic differences between speakers. It is roughly divided into two categories: speaker verification and speaker identification. Speaker verification accepts or rejects a claimed identity based on the speaker's voice alone, whereas speaker identification finds the speaker by searching for the most similar model in a database previously built from multiple subordinate sentences. This paper composes feature vectors from extracted MFCC coefficients and uses the dynamic time warping algorithm to measure the similarity between features. To capture common characteristics based on the phonological features of spoken words, two subordinate sentences per speaker are used as training data. This makes it possible to identify a speaker even when the spoken word is not among those previously stored in the database.
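The dynamic time warping comparison described above can be sketched as the classic accumulated-cost recursion. Here 1-D toy sequences stand in for MFCC frame vectors, and absolute difference stands in for the frame-level distance, both assumptions for brevity.

```python
# Classic dynamic time warping (DTW): the minimum accumulated local
# distance over all monotonic alignments of two sequences.

def dtw_distance(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j]: best accumulated distance aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])   # local frame distance
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# A time-stretched version of the same contour aligns perfectly.
print(dtw_distance([1, 2, 3, 4], [1, 2, 2, 3, 4]))  # 0.0
```

Because DTW absorbs differences in speaking rate, the same utterance spoken faster or slower still yields a small distance, which is what makes it suitable for comparing speakers' feature sequences.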

A Study on Isolated Word Recognition using Improved Multisection Vector Quantization Recognition System (개선된 MSVQ 인식 시스템을 이용한 단독어 인식에 관한 연구)

  • An, Tae-Ok; Kim, Nam-Joong; Song, Chul; Kim, Soon-Hyeob
    • The Journal of Korean Institute of Communications and Information Sciences, v.16 no.2, pp.196-205, 1991
  • This paper studies speaker-independent isolated word recognition and proposes a newly improved MSVQ (multisection vector quantization) recognition system that improves on the classical MSVQ recognition system. The difference is that, in the proposed system, the test pattern has one more section than the reference pattern. 146 DDD area names are selected as the recognition vocabulary, and 12th-order LPC cepstral coefficients are used as the feature parameters. When the codebook is generated, the MINSUM and MINMAX criteria are used to find the centroids. The experimental results prove that this method outperforms VQ (vector quantization) recognition, DTW (dynamic time warping) pattern matching, and the classical MSVQ method in both recognition rate and recognition time.

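The two centroid rules named in the abstract above can be illustrated in one dimension: MINSUM minimizes the sum of (squared) distances within a codebook cell, which the mean achieves, while MINMAX minimizes the worst-case distance, which the midrange achieves. Scalar training values are used here for illustration instead of LPC cepstral vectors.

```python
# Sketch of the MINSUM and MINMAX centroid rules for codebook
# generation over one quantizer cell of scalar training values.

def minsum_centroid(values):
    # Mean minimizes the sum of squared errors within the cell.
    return sum(values) / len(values)

def minmax_centroid(values):
    # Midpoint of the range minimizes the maximum error within the cell.
    return (min(values) + max(values)) / 2

cell = [1.0, 2.0, 2.0, 7.0]
print(minsum_centroid(cell))  # 3.0
print(minmax_centroid(cell))  # 4.0
```

The two rules differ whenever the cell is skewed, as above: the MINSUM centroid follows the bulk of the data, while the MINMAX centroid protects the outlier at 7.0.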