• Title/Abstract/Keyword: Speech problem

472 search results

LPC 켑스트럼 거리 기반의 천이구간 정보를 이용한 음성의 가변적인 시간축 변환 (Variable Time-Scale Modification of Speech Using Transient Information based on LPC Cepstral Distance)

  • 이성주;김희동;김형순
    • Speech Sciences (음성과학), Vol. 3, pp. 167-176, 1998
  • Conventional time-scale modification methods have the problem that, as the modification rate gets higher, the time-scale modified speech becomes less intelligible, because they ignore the effect of articulation rate on speech characteristics. Research on speech perception shows that the timing information of the transient portions of a speech signal plays an important role in discriminating among different speech sounds. Inspired by this fact, we propose a novel scheme for modifying the time-scale of speech. In the proposed scheme, the timing information of the transient portions of speech is preserved, while the steady portions are compressed or expanded somewhat excessively to maintain the overall time-scale change. To identify the transient and steady portions of a speech signal, we employ a simple method using the LPC cepstral distance between neighboring frames. The results of a subjective preference test indicate that the proposed method outperforms the conventional SOLA method, especially in the very fast playback case.
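As a rough illustration of the transient-detection step, the sketch below computes the LPC cepstral distance between neighboring frames of a toy two-tone signal. This is not the paper's implementation: the frame size, LPC order, number of cepstral coefficients, and the mean-plus-one-standard-deviation threshold are all arbitrary choices for the example.

```python
import numpy as np

def lpc(frame, order):
    """Levinson-Durbin recursion: frame -> LPC coefficients a[0..order], a[0] = 1."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0] + 1e-12                      # small floor guards near-silent frames
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err
        a[1:i + 1] += k * a[i - 1::-1]
        err *= (1.0 - k * k)
    return a

def lpc_cepstrum(a, n_cep):
    """LPC coefficients -> LPC cepstral coefficients c[1..n_cep] (standard recursion)."""
    c = np.zeros(n_cep + 1)
    for n in range(1, n_cep + 1):
        acc = a[n] if n < len(a) else 0.0
        for k in range(1, n):
            if n - k < len(a):
                acc += (k / n) * c[k] * a[n - k]
        c[n] = -acc
    return c[1:]

def cepstral_distances(signal, frame_len=256, hop=128, order=12, n_cep=16):
    """Distance between LPC cepstra of neighboring frames; peaks mark transients."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    win = np.hanning(frame_len)
    ceps = [lpc_cepstrum(lpc(f * win, order), n_cep) for f in frames]
    return np.array([np.linalg.norm(ceps[i] - ceps[i - 1])
                     for i in range(1, len(ceps))])

# Toy signal: a low-frequency tone switching abruptly to a higher one at midpoint.
t = np.arange(4096) / 8000.0
sig = np.concatenate([np.sin(2 * np.pi * 300 * t[:2048]),
                      np.sin(2 * np.pi * 1200 * t[2048:])])
d = cepstral_distances(sig)
transients = d > d.mean() + d.std()         # simple adaptive threshold
```

Frames straddling the tone change produce a sharp distance peak, while the steady portions on either side give near-zero distances; a time-scale modifier could then leave the flagged frames untouched.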


대학생의 연음 또는 비연음 영문 지각 (Students' Perception of Linked or Clear English Speech)

  • 황선이;양병곤
    • Speech Sciences (음성과학), Vol. 13, No. 3, pp. 107-117, 2006
  • This study examined how well Korean undergraduate students perceived linked or clear English speech and attempted to find areas of difficulty in their English listening caused by phonological variations. Thirty-nine undergraduate students participated in listening sessions. They were divided into high and low groups by their TOEIC listening scores. Samples of linked speech included such phonological processes as linking, palatalization, flapping, and deletion. Results showed that the students had more difficulty perceiving linked speech than clear speech. Secondly, both the higher and the lower groups scored low on the linked speech, and the lower group showed a larger score difference between linked and clear speech. Thirdly, the students' scores increased in order from the speech with flapping, through deletion and palatalization, to linking. Finally, there was a strong positive correlation between the TOEIC listening scores and the perception scores. Further study on how much TOEIC scores improve when students' listening is trained with linked speech would be desirable.


Non-Intrusive Speech Intelligibility Estimation Using Autoencoder Features with Background Noise Information

  • Jeong, Yue Ri;Choi, Seung Ho
    • International Journal of Internet, Broadcasting and Communication, Vol. 12, No. 3, pp. 220-225, 2020
  • This paper investigates non-intrusive speech intelligibility estimation in noise environments when the bottleneck feature of an autoencoder is used as an input to a neural network. The bottleneck-feature-based method suffers severe performance degradation when the noise environment changes. To overcome this problem, we propose a novel non-intrusive speech intelligibility estimation method that adds noise environment information, along with the bottleneck feature, to the input of a long short-term memory (LSTM) neural network whose output is a short-time objective intelligibility (STOI) score, a standard tool for measuring intrusive speech intelligibility against reference speech signals. In experiments in various noise environments, the proposed method showed improved performance when the noise environment was the same, and the improvement over conventional methods was especially significant in different environments. We therefore conclude that the proposed method can be successfully used for non-intrusive speech intelligibility estimation in various noise environments.
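The input construction the abstract describes, concatenating bottleneck features with a noise-environment code before feeding an LSTM, can be sketched as below. Everything here is a stand-in: random weights instead of trained ones, random vectors in place of real autoencoder bottleneck features, a one-hot code for the noise environment, and the dimensions are arbitrary.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gate order in the stacked weights: input, forget, output, cell."""
    H = h.size
    z = W @ x + U @ h + b
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2 * H]), sigmoid(z[2 * H:3 * H])
    g = np.tanh(z[3 * H:])
    c = f * c + i * g
    return o * np.tanh(c), c

rng = np.random.default_rng(0)
n_bottleneck, n_noise_types, H, T = 16, 4, 8, 50

# Each frame input = [autoencoder bottleneck features ; one-hot noise-environment code].
bottleneck = rng.normal(size=(T, n_bottleneck))
noise_code = np.eye(n_noise_types)[2]          # e.g. environment #2 (babble, say)
X = np.hstack([bottleneck, np.tile(noise_code, (T, 1))])

D = n_bottleneck + n_noise_types
W = rng.normal(scale=0.1, size=(4 * H, D))
U = rng.normal(scale=0.1, size=(4 * H, H))
b = np.zeros(4 * H)
w_out = rng.normal(scale=0.1, size=H)

h, c = np.zeros(H), np.zeros(H)
for x in X:                                    # run the utterance through the LSTM
    h, c = lstm_step(x, h, c, W, U, b)
stoi_estimate = sigmoid(w_out @ h)             # squashed into the STOI range [0, 1]
```

In the paper the network is trained so that this scalar regresses the intrusive STOI score computed against the clean reference; the sketch only shows the data flow.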

Adaptive Band Selection for Robust Speech Detection In Noisy Environments

  • Ji Mikyong;Suh Youngjoo;Kim Hoirin
    • MALSORI (대한음성학회지:말소리), No. 50, pp. 85-97, 2004
  • One of the important problems in speech recognition is accurately detecting the presence of speech in adverse environments. The speech detection problem becomes more severe when recognition systems are used over the telephone network, especially in wireless networks and noisy environments. In this paper, we propose a robust speech detection algorithm that detects speech boundaries accurately by selecting useful bands adaptively to the noise environment. We introduce so-called noise-centric bands, the bands in which the noise is mainly distributed. We compare two different speech detection algorithms with the proposed one and evaluate them in noisy environments. The experimental results show the superiority of the proposed speech detection algorithm.
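One simple reading of the noise-centric-band idea: estimate per-band noise power from noise-only frames, exclude the noisiest bands, and compute the detection statistic from the rest. The sketch below uses uniform bands, a log energy-ratio statistic, and synthetic low-band noise; these are illustrative assumptions, not the paper's actual band layout or decision rule.

```python
import numpy as np

def band_energies(frame_power, n_bands):
    """Average a power spectrum into n_bands uniform sub-bands."""
    return np.array([seg.mean() for seg in np.array_split(frame_power, n_bands)])

def select_bands(noise_power, n_exclude):
    """Rank bands by estimated noise power; drop the noisiest ('noise-centric') ones."""
    order = np.argsort(noise_power)[::-1]
    return np.sort(order[n_exclude:])

def detection_statistic(frame_power, noise_bands, selected, n_bands):
    """Log energy ratio against the noise estimate, over the selected bands only."""
    e = band_energies(frame_power, n_bands)
    return float(np.sum(np.log((e[selected] + 1e-12)
                               / (noise_bands[selected] + 1e-12))))

rng = np.random.default_rng(0)
n_fft, n_bands = 128, 8

# Synthetic noise concentrated in the lowest band (car-noise-like).
noise = np.abs(rng.normal(size=n_fft)) * 0.1
noise[:n_fft // n_bands] += 5.0
noise_bands = band_energies(noise, n_bands)

selected = select_bands(noise_bands, n_exclude=2)   # noisiest 2 bands dropped

# Speech-like frame adds energy in the mid bands.
speech = np.abs(rng.normal(size=n_fft)) * 0.1
speech[40:80] += 3.0
stat_speech = detection_statistic(speech + noise, noise_bands, selected, n_bands)
stat_noise = detection_statistic(noise, noise_bands, selected, n_bands)
```

Because the dominant noise band is excluded from `selected`, the statistic separates the speech frame from the noise-only frame even though the low band is swamped.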


Selecting Good Speech Features for Recognition

  • Lee, Young-Jik;Hwang, Kyu-Woong
    • ETRI Journal, Vol. 18, No. 1, pp. 29-41, 1996
  • This paper describes a method to select a suitable feature for speech recognition using an information-theoretic measure. Conventional speech recognition systems heuristically choose a portion of frequency components, cepstrum, mel-cepstrum, energy, and their time differences of speech waveforms as their speech features. However, these systems cannot perform well if the selected features are not suitable for speech recognition. Since the recognition rate is the only performance measure of a speech recognition system, it is hard to judge how suitable a selected feature is. To solve this problem, it is essential to analyze the feature itself and measure how good the feature itself is. Good speech features should contain all of the class-related information and as little class-irrelevant variation as possible. In this paper, we suggest a method to measure the class-related information and the amount of class-irrelevant variation based on Shannon's information theory. Using this method, we compare the mel-scaled FFT, cepstrum, mel-cepstrum, and wavelet features of the TIMIT speech data. The results show that, among these features, the mel-scaled FFT is the best feature for speech recognition according to the proposed measure.
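A minimal version of a Shannon-style class-relatedness measure is the mutual information between the class label and a (discretized) feature. The sketch below estimates it from a joint histogram on synthetic data; the two-class setup, bin count, and noise levels are arbitrary choices for illustration, not the paper's exact formulation.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Estimate I(X;Y) in bits from a joint histogram of two discretized variables."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)           # marginal of X
    py = p.sum(axis=0, keepdims=True)           # marginal of Y
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
n = 2000
labels = rng.integers(0, 2, n).astype(float)    # two stand-in phone classes

# Class-related feature: mostly determined by the class label.
good_feature = labels + 0.2 * rng.normal(size=n)
# Class-irrelevant feature: pure speaker-like variation, independent of the class.
bad_feature = rng.normal(size=n)

mi_good = mutual_information(labels, good_feature)
mi_bad = mutual_information(labels, bad_feature)
```

The class-related feature scores close to the one bit of information carried by a balanced binary label, while the class-irrelevant feature scores near zero; ranking candidate features (mel-scaled FFT, cepstrum, ...) by such a measure is the spirit of the paper's selection method.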


설소대성형술이 발음 및 혀의 운동에 미치는 영향에 관한 연구 (THE EFFECT OF LINGUAL FRENECTOMY ON PHONATION & TONGUE MOVEMENT)

  • 황선용;이상철;류동목
    • Maxillofacial Plastic and Reconstructive Surgery, Vol. 14, No. 1-2, pp. 40-53, 1992
  • This study aimed to examine the effect of lingual frenectomy on phonation and tongue movement. Most patients visiting the department of oral & maxillofacial surgery for treatment of tongue-tie complain of speech problems, and many operations have been performed for this problem, but objective evaluation of the resulting speech changes has been lacking. The experimental group was 25 adult males. Fourteen Korean consonants were combined with Korean vowels to make seventy sounds for speech analysis. Before and after lingual frenectomy, the speech of the above group was recorded and then analysed with the Speech Workstation computer software, and the length of the lingual frenum and the amount of tongue protrusion were measured. The results were as follows: 1. The pre-operative length of the lingual frenum was inversely proportional to the pre-operative length of the protruded tongue. 2. The average difference between the pre- and post-operative length of the protruded tongue was about 23 mm. 3. In the comparison of consonant continuing time, the fricative consonants (r, s, h) increased post-operatively. 4. In the comparison of vowel formant frequency, the "i" and "u" sounds were reliably changed. 5. There were no reliable speech changes in the other sounds.


Classical Tamil Speech Enhancement with Modified Threshold Function using Wavelets

  • Indra, J.;Kasthuri, N.;Navaneetha Krishnan, S.
    • Journal of Electrical Engineering and Technology, Vol. 11, No. 6, pp. 1793-1801, 2016
  • Speech enhancement is a challenging problem due to the diversity of noise sources and their effects in different applications. The goal of speech enhancement is to improve the quality and intelligibility of speech by reducing noise. Much research on speech enhancement has been carried out for English and other European languages, but there has been little or no such work on Tamil speech enhancement in the literature. The aim of the proposed method is to reduce the background noise present in the Tamil speech signal by using wavelets, and a new modified thresholding function is introduced. The proposed method is evaluated on several speakers and under various noise conditions, including white Gaussian noise, babble noise, and car noise. The Signal-to-Noise Ratio (SNR), Mean Square Error (MSE), and Mean Opinion Score (MOS) results show that the proposed thresholding function improves speech enhancement compared to the conventional hard and soft thresholding methods.
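The abstract does not give the paper's exact modified threshold function, but the general shape of such functions is a compromise between hard and soft thresholding. The sketch below shows the two conventional rules next to one common "modified" choice, the non-negative garrote, purely as an illustration of the family of functions involved:

```python
import numpy as np

def hard_threshold(w, t):
    """Keep coefficients above the threshold unchanged, zero the rest."""
    return np.where(np.abs(w) > t, w, 0.0)

def soft_threshold(w, t):
    """Shrink every surviving coefficient toward zero by t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def modified_threshold(w, t):
    """Non-negative garrote, one common 'modified' rule: continuous at |w| = t
    like soft thresholding, but with less bias on large coefficients."""
    w = np.asarray(w, dtype=float)
    out = np.zeros_like(w)
    big = np.abs(w) > t
    out[big] = w[big] - t * t / w[big]
    return out

coeffs = np.array([-3.0, -0.5, 0.2, 2.0])   # toy wavelet coefficients
t = 1.0
h = hard_threshold(coeffs, t)               # [-3.0, 0.0, 0.0, 2.0]
s = soft_threshold(coeffs, t)               # [-2.0, 0.0, 0.0, 1.0]
m = modified_threshold(coeffs, t)           # large coefficients land between h and s
```

In a full enhancement pipeline these rules would be applied to the detail coefficients of a wavelet decomposition of the noisy Tamil speech before reconstruction.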

입술정보를 이용한 음성 특징 파라미터 추정 및 음성인식 성능향상 (Estimation of speech feature vectors and enhancement of speech recognition performance using lip information)

  • 민소희;김진영;최승호
    • MALSORI (대한음성학회지:말소리), No. 44, pp. 83-92, 2002
  • Speech recognition performance is severely degraded in noisy environments. One approach to cope with this problem is audio-visual speech recognition. In this paper, we discuss experimental results of bimodal speech recognition based on speech feature vectors enhanced with lip information. We try various kinds of speech features, such as linear prediction coefficients, cepstrum, and log area ratio, for transforming lip information into speech parameters. The experimental results show that the cepstrum parameter is the best feature in terms of recognition rate. We also present desirable weighting values for the audio and visual information depending on the signal-to-noise ratio.
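A common way to realize SNR-dependent audio/visual weighting is a stream-weighted combination of the two modalities' scores, with the weight driven by the estimated SNR. The sketch below uses a sigmoid mapping and hypothetical parameter values (`midpoint`, `slope`); the paper reports empirically chosen weights rather than this particular function.

```python
import numpy as np

def audio_weight(snr_db, midpoint=10.0, slope=0.3):
    """Map SNR (dB) to an audio stream weight in (0, 1): clean audio -> trust the
    audio stream, noisy audio -> lean on the lip (visual) stream instead."""
    return 1.0 / (1.0 + np.exp(-slope * (snr_db - midpoint)))

def combine_scores(audio_logprob, visual_logprob, snr_db):
    """Weighted log-likelihood combination for bimodal recognition."""
    w = audio_weight(snr_db)
    return w * audio_logprob + (1.0 - w) * visual_logprob

w_clean = audio_weight(30.0)                 # high SNR: weight close to 1
w_noisy = audio_weight(-5.0)                 # low SNR: weight close to 0
score = combine_scores(-12.0, -15.0, snr_db=0.0)
```

At recognition time, `combine_scores` would be evaluated per hypothesis, so that in heavy noise the lip-derived evidence dominates the decision.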


다채널 주파수영역 독립성분분석에서 분리된 신호 전력비의 공분산을 이용한 주파수 빈 정렬 (Frequency Bin Alignment Using Covariance of Power Ratio of Separated Signals in Multi-channel FD-ICA)

  • 전성일;배건성
    • Phonetics and Speech Sciences (말소리와 음성과학), Vol. 6, No. 3, pp. 149-153, 2014
  • In frequency-domain ICA, the frequency bin permutation problem degrades the quality of the separated signals. In this paper, we propose a new algorithm to solve the frequency bin permutation problem using the covariance of the power ratio of the separated signals in multi-channel FD-ICA. It exploits the continuity of the speech spectrum, using the power ratio of adjacent frequency bins to check whether frequency bin permutation has occurred in the separated signal. Experimental results show that the proposed method can fix the frequency bin permutation problem in multi-channel FD-ICA.
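A simplified variant of this idea can be sketched as follows: walk up the frequency axis and, at each bin, compare how well the power ratio of the separated outputs correlates with the previous bin in the current ordering versus the swapped one. This uses correlation of adjacent-bin power ratios as a stand-in for the paper's covariance-based criterion, on fully synthetic two-source spectrograms.

```python
import numpy as np

def align_bins(P):
    """P: power spectrograms of the two separated signals, shape (2, K, T).
    Swap a bin's two outputs whenever the power ratio of source 0 matches
    the previous bin better after swapping (corr of the swapped ratio,
    1 - pr, with the reference is the negative of corr(pr, ref))."""
    out = P.copy()
    K = P.shape[1]
    ref = out[0, 0] / (out[0, 0] + out[1, 0])
    for k in range(1, K):
        pr = out[0, k] / (out[0, k] + out[1, k])
        if np.corrcoef(pr, ref)[0, 1] < 0:       # swapped ordering fits better
            out[[0, 1], k] = out[[1, 0], k]
            pr = 1.0 - pr
        ref = pr
    return out

# Synthetic test: two sources with distinct temporal envelopes, and a
# permutation error injected into the upper half of the frequency bins.
t = np.linspace(0, 2 * np.pi, 100)
env_a, env_b = 1.5 + np.sin(t), 1.5 + np.cos(t)
K = 10
P = np.empty((2, K, t.size))
for k in range(K):
    scale = 1.0 + 0.1 * k                        # mild bin-dependent coloring
    P[0, k], P[1, k] = scale * env_a, scale * env_b
P_perm = P.copy()
P_perm[[0, 1], K // 2:] = P_perm[[1, 0], K // 2:]   # swap bins K/2 .. K-1
P_fixed = align_bins(P_perm)
```

On this toy data the sequential check detects the sign flip of the power-ratio trajectory at the first permuted bin and unswaps every bin above it, restoring the original assignment.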

심층신경망을 이용한 조음 예측 모형 개발 (Development of articulatory estimation model using deep neural network)

  • 유희조;양형원;강재구;조영선;황성하;홍연정;조예진;김서현;남호성
    • Phonetics and Speech Sciences (말소리와 음성과학), Vol. 8, No. 3, pp. 31-38, 2016
  • Speech inversion (acoustic-to-articulatory mapping) is not a trivial problem, despite its importance, due to its highly non-linear and non-unique nature. This study investigated the performance of a Deep Neural Network (DNN) compared to that of a traditional Artificial Neural Network (ANN) on this problem. The Wisconsin X-ray Microbeam Database was employed, with the acoustic signal as the model input and the articulatory pellet positions as the output. Results showed that the performance of the ANN deteriorated as the number of hidden layers increased. In contrast, the DNN showed a lower and more stable RMS error even with up to 10 hidden layers, suggesting that the DNN can learn the acoustic-to-articulatory inversion mapping more efficiently than the ANN.
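The regression setup, acoustic features in, pellet coordinates out, evaluated by RMS error, can be illustrated with a tiny numpy network trained by gradient descent. The data here are synthetic stand-ins (4 "acoustic" dimensions, 2 "pellet" coordinates, an arbitrary nonlinear target map), and the single hidden layer is for brevity only; the study's point is precisely about how performance changes as layers are stacked.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 4 "acoustic" features -> 2 "pellet" coordinates via a
# fixed nonlinear map (the real task uses X-ray Microbeam pellet traces).
X = rng.uniform(-1, 1, size=(200, 4))
A = rng.normal(size=(4, 2))
Y = np.tanh(X @ A)

W1 = rng.normal(scale=0.5, size=(4, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 2)); b2 = np.zeros(2)

def forward(X):
    H = np.tanh(X @ W1 + b1)            # one hidden tanh layer
    return H, H @ W2 + b2

def rms(Yh):
    return float(np.sqrt(np.mean((Yh - Y) ** 2)))

rms_before = rms(forward(X)[1])

lr = 0.05
for _ in range(500):                    # plain batch gradient descent
    H, Yh = forward(X)
    G = 2.0 * (Yh - Y) / len(X)         # dLoss/dYh for mean squared error
    GH = (G @ W2.T) * (1.0 - H ** 2)    # backprop through the tanh layer
    W2 -= lr * (H.T @ G); b2 -= lr * G.sum(axis=0)
    W1 -= lr * (X.T @ GH); b1 -= lr * GH.sum(axis=0)

rms_after = rms(forward(X)[1])
```

The RMS error that `rms` reports is the same figure of merit the study uses to compare ANN and DNN depths; reproducing the depth comparison itself would need the pre-training or modern initialization tricks that distinguish a DNN from a plain deep ANN.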