• Title/Summary/Keyword: Speech signals

Virtual Dialog System Based on Multimedia Signal Processing for Smart Home Environments (멀티미디어 신호처리에 기초한 스마트홈 가상대화 시스템)

  • Kim, Sung-Ill;Oh, Se-Jin
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.15 no.2
    • /
    • pp.173-178
    • /
    • 2005
  • This paper focuses on a virtual dialog system whose aim is to build more convenient living environments. The main emphasis of the paper lies on multimedia signal processing based on technologies such as speech recognition, speech synthesis, video, and sensor signal processing. As essential modules of the dialog system, we incorporated a real-time speech recognizer based on HM-Net (Hidden Markov Network), as well as speech synthesis, into the overall system. In addition, we adopted a real-time motion detector based on changes of brightness in pixels, as well as a touch sensor used to start the system. In experimental evaluation, the results showed that the proposed system was relatively easy to use for controlling electric appliances while sitting on a sofa, even though its performance fell short of the simulation results owing to the noisy environments.
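
The motion detector above is described only as "based on the changes of brightness in pixels". A minimal frame-differencing sketch of that idea follows; the thresholds and frame sizes are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def detect_motion(prev_frame, curr_frame, pixel_thresh=25, ratio_thresh=0.01):
    """Flag motion when the fraction of pixels whose brightness changed
    by more than pixel_thresh exceeds ratio_thresh."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = np.count_nonzero(diff > pixel_thresh)
    return changed / diff.size > ratio_thresh

# Usage: a static frame versus one in which a bright region appears.
prev = np.zeros((120, 160), dtype=np.uint8)
curr = prev.copy()
curr[40:80, 60:100] = 200            # a region brightens between frames
print(detect_motion(prev, curr))     # True
print(detect_motion(prev, prev))     # False
```

Casting to a signed type before subtracting avoids unsigned wrap-around; the ratio threshold makes the decision independent of frame size.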

A New MPEG Reference Model for Unified Speech and Audio Coding (통합 음성/오디오 부호화를 위한 새로운 MPEG 참조 모델)

  • Song, Jeong-Ook;Oh, Hyen-O;Kang, Hong-Goo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.5
    • /
    • pp.74-80
    • /
    • 2010
  • Speech and audio codecs have been developed on different types of coding technologies, since they target signals and applications with different characteristics. In step with the convergence of broadcasting and telecommunication systems, international standardization organizations such as 3GPP and ISO/IEC MPEG have tried to compress and transmit multimedia signals using unified codecs. MPEG recently initiated an activity to standardize USAC (Unified Speech and Audio Coding). However, the USAC RM (Reference Model) software has been problematic: it has a complex hierarchy, much unused source code, and poor encoder quality. To solve these problems, this paper introduces a new RM software designed with an open-source paradigm. It was presented at the MPEG meeting in April 2010, and the source code was released in June.

Intrinsic Fundamental Frequency(Fo) of Vowels in the Esophageal Speech (식도음성의 고유기저주파수 발현 현상)

  • 홍기환;김성완;김현기
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics
    • /
    • v.9 no.2
    • /
    • pp.142-146
    • /
    • 1998
  • Background : It has been established that the fundamental frequency (Fo) of vowels varies systematically as a function of vowel height; specifically, high vowels have a higher Fo than low vowels. Two major explanations or hypotheses dominate contemporary accounts of the mechanisms underlying intrinsic variation in vowel Fo: the source-tract coupling hypothesis and the tongue-pull hypothesis. Objectives : Total laryngectomy surgery necessitates removal of all structures between the hyoid bone and the tracheal rings. Therefore, the assumption that no direct interconnection exists between the tongue and the pharyngoesophageal segment that would mediate systematic variation in vowel Fo appears quite reasonable. If the tongue-pull hypothesis is correct, systematic differences in Fo between high versus low vowels produced by esophageal speakers would not be expected. We analyzed the Fo of the vowels of esophageal voice. Materials and method : The subjects were 11 laryngectomee patients with fluent esophageal voice. The five essential vowels were recorded and analyzed with a computer speech analysis system (Computerized Speech Lab). The Fo was measured from the acoustic waveform, both automatically and manually, and by narrowband spectral analysis. Results : The results of this study reveal that intrinsic variation in vowel Fo is clearly evident in esophageal speech. In automatic analysis of the acoustic waveform, the signals were too irregular to measure the Fo precisely, so the data from automatic analysis are not reliable. However, measuring Fo from the manually calculated acoustic waveform or by narrowband spectral analysis yielded acceptable results. These results were interpreted to support neither the source-tract coupling nor the tongue-pull hypothesis, and led us to offer an alternative explanation for the intrinsic variation of Fo.

Online blind source separation and dereverberation of speech based on a joint diagonalizability constraint (공동 행렬대각화 조건 기반 온라인 음원 신호 분리 및 잔향제거)

  • Yu, Ho-Gun;Kim, Do-Hui;Song, Min-Hwan;Park, Hyung-Min
    • The Journal of the Acoustical Society of Korea
    • /
    • v.40 no.5
    • /
    • pp.503-514
    • /
    • 2021
  • Reverberation in speech signals tends to significantly degrade the performance of a Blind Source Separation (BSS) system, and in online systems the degradation becomes especially severe. Methods based on joint diagonalizability constraints have recently been developed to tackle the problem. To improve the quality of speech separated in reverberant environments, in this paper we add the proposed dereverberation method to the online BSS algorithm based on these constraints. Through experiments on the WSJCAM0 corpus, the proposed method was compared with the existing online BSS algorithm. Performance evaluation by the Signal-to-Distortion Ratio (SDR) and the Perceptual Evaluation of Speech Quality (PESQ) showed that, on average, SDR improved from 1.23 dB to 3.76 dB and PESQ improved from 1.15 to 2.12.
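
The SDR figures quoted above compare a separated signal against a clean reference. As a sketch, here is the plain energy-ratio definition of SDR; note that BSS evaluations often use the fuller BSS-Eval decomposition, so this is a simplification.

```python
import numpy as np

def sdr_db(reference, estimate):
    """Signal-to-Distortion Ratio in dB: reference energy over error energy."""
    error = estimate - reference
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(error ** 2))

# Usage: additive noise at one tenth of the signal scale gives roughly 20 dB SDR.
rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)
noisy = clean + 0.1 * rng.standard_normal(16000)
print(round(sdr_db(clean, noisy), 1))
```

Higher values mean the estimate is closer to the reference, which is why the reported jump from 1.23 dB to 3.76 dB indicates cleaner separation.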

Design of EVRC LSP Codebooks with Korean (한국어에 의한 EVRC LSP 코드북 설계)

  • 이진걸
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.2
    • /
    • pp.167-172
    • /
    • 2002
  • The EVRC (Enhanced Variable Rate Codec) is currently in service as a speech codec in digital cellular systems in North America and Korea. In the EVRC, the LSPs (Line Spectral Pairs), which are related to the energy distribution of speech signals in the frequency domain, are coded by weighted split vector quantization. Considering that LSP codebooks are typically trained with the language of the country that developed the codebooks, or with English, codebooks trained with Korean are expected to improve performance for communication in Korean. In this paper, the EVRC LSP codebooks are designed with Korean using LBG-algorithm-based vector quantization, and the resulting improvements in vector quantization performance and speech quality are demonstrated by spectral distortion, SNR, and SegSNR measurements, respectively.
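
The codebook design above relies on LBG-based vector quantization. A minimal sketch of the LBG procedure on toy 2-D data follows (real LSP training uses 10-dimensional vectors and perceptual weighting, which this sketch omits):

```python
import numpy as np

def lbg_codebook(train, size, iters=20, eps=0.01):
    """Grow a VQ codebook by binary splitting (LBG), refining each level
    with k-means-style centroid updates."""
    codebook = train.mean(axis=0, keepdims=True)
    while len(codebook) < size:
        # split every codeword into a perturbed pair
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):
            # assign each training vector to its nearest codeword
            d = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            nearest = d.argmin(axis=1)
            for k in range(len(codebook)):
                members = train[nearest == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook

# Usage: two well-separated clusters stand in for LSP training vectors.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.05, (200, 2)),
                  rng.normal(1.0, 0.05, (200, 2))])
cb = lbg_codebook(data, 2)
print(cb)    # codewords land near the two cluster centres
```

Training such a codebook on Korean speech, as the paper does, simply means the `train` matrix holds LSP vectors extracted from Korean utterances.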

Vowel Recognition Using the Fractal Dimension (프랙탈 차원을 이용한 모음인식)

  • 최철영;김형순;김재호;손경식
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.19 no.6
    • /
    • pp.1140-1148
    • /
    • 1994
  • In this paper, we carried out experiments on Korean vowel recognition using the fractal dimension of speech signals. We chose the Minkowski-Bouligand dimension as the fractal dimension and computed it using the morphological covering method. For our experiments, we used both the fractal dimension and the LPC cepstrum, conventionally known as one of the best parameters for speech recognition, and examined the usefulness of the fractal dimension. From vowel recognition experiments under various consonant contexts, we obtained vowel recognition error rates of 5.6% with LPC cepstrum alone and 3.2% with both LPC cepstrum and the fractal dimension. The results indicate that incorporating the fractal dimension alongside the LPC cepstrum reduces recognition errors by more than 40%, and that the fractal dimension is a useful feature parameter for speech recognition.
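
The Minkowski-Bouligand dimension via morphological covering can be sketched as follows: dilate and erode the waveform with flat structuring elements of growing half-width, measure the area of the band between the envelopes, and fit the log-log scaling law. The scales and signals here are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def minkowski_dimension(signal, scales=(1, 2, 4, 8)):
    """Estimate the Minkowski-Bouligand dimension of a 1-D signal:
    the slope of log(A(eps)/eps^2) versus log(1/eps), where A(eps) is
    the area of the covering band at half-width eps."""
    logs_x, logs_y = [], []
    n = len(signal)
    for eps in scales:
        upper = np.array([signal[max(0, i - eps):i + eps + 1].max() for i in range(n)])
        lower = np.array([signal[max(0, i - eps):i + eps + 1].min() for i in range(n)])
        area = np.sum(upper - lower)          # area between the envelopes
        logs_x.append(np.log(1.0 / eps))
        logs_y.append(np.log(area / eps ** 2))
    slope, _ = np.polyfit(logs_x, logs_y, 1)
    return slope

# Usage: a smooth curve has dimension near 1; noise pushes it toward 2.
t = np.arange(2000)
smooth = np.sin(2 * np.pi * t / 200)
rough = np.random.default_rng(2).standard_normal(2000)
print(minkowski_dimension(smooth), minkowski_dimension(rough))
```

The feature is useful for vowels precisely because it summarizes waveform roughness in a way the LPC cepstrum does not.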

A Study on the Speech Packetized Coding by Zero Bit Reduction of 1'st Order Differences (1차 차분신호의 영비트 제거에 의한 음성신호의 패킷부호화에 관한 연구)

  • Shin, Dong-Jin;Lim, Un-Cheon;Bae, Myung-Jin;Ann, Sou-Guil
    • The Journal of the Acoustical Society of Korea
    • /
    • v.8 no.4
    • /
    • pp.74-82
    • /
    • 1989
  • In this paper, we study methodologies for implementing, and evaluate the performance of, real-time packetized coding of multi-channel speech signals. The suggested coding algorithm is very simple and consists mainly of data-handling operations rather than numerical calculations, yet it achieves a compression ratio of about 40% with less computation than conventional codings. Using this algorithm, we can save memory for the speech signal and raise the efficiency of channel transmission. Moreover, because of the algorithm's simplicity, the merits of multi-channel operation are easily obtained.
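
The core idea, first-order differences with leading zero bits removed, can be sketched as below. The bit layout (a 4-bit length field, then a sign bit and the magnitude bits) is a hypothetical encoding chosen for illustration; the paper's exact packet format is not given in the abstract.

```python
def pack_differences(samples):
    """Encode samples as first-order differences, dropping leading zero
    bits: a 4-bit length field, then a sign bit and the magnitude bits
    (magnitudes up to 15 bits in this sketch)."""
    bits, prev = [], 0
    for s in samples:
        d, prev = s - prev, s
        nbits = abs(d).bit_length()
        bits.append(f"{nbits:04b}")              # length field
        if nbits:
            bits.append("1" if d < 0 else "0")   # sign bit
            bits.append(f"{abs(d):0{nbits}b}")   # magnitude, no leading zeros
    return "".join(bits)

def unpack_differences(bitstring):
    out, prev, i = [], 0, 0
    while i < len(bitstring):
        nbits = int(bitstring[i:i + 4], 2); i += 4
        d = 0
        if nbits:
            sign = -1 if bitstring[i] == "1" else 1; i += 1
            d = sign * int(bitstring[i:i + nbits], 2); i += nbits
        prev += d
        out.append(prev)
    return out

# Usage: small sample-to-sample differences shrink well below 16-bit PCM.
samples = [0, 3, 5, 5, 2, -1, -1, 4]
coded = pack_differences(samples)
print(len(coded), "bits vs", 16 * len(samples), "bits raw")
```

Because speech changes slowly between samples, the differences are small and most of the zero bits of a fixed-width representation can be dropped, which is where the compression comes from.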

Comparative Analysis of Performance of Established Pitch Estimation Methods in Sustained Vowel of Benign Vocal Fold Lesions (양성후두 질환의 지속모음을 대상으로 한 기존 피치 추정 방법들의 성능 비교 분석)

  • Jang, Seung-Jin;Kim, Hyo-Min;Choi, Seong-Hee;Park, Young-Cheol;Choi, Hong-Shik;Yoon, Young-Ro
    • Speech Sciences
    • /
    • v.14 no.4
    • /
    • pp.179-200
    • /
    • 2007
  • In voice pathology, various measurements calculated from pitch values have been proposed to characterize voice quality. However, those measurements frequently prove inaccurate and unreliable because they are based on erroneous pitch values extracted from pathological voice data. To address this problem, we compared several pitch estimation methods in order to propose a better one for pathological voices. From a database of 99 pathological and 30 normal voice recordings, pitch estimation errors were analyzed and compared between pathological and normal voice data, and among vowels produced by patients with benign vocal fold lesions. Results showed that gross pitch errors occurred in the pathological voice data. Classifying the pathological voices by the degree of aperiodicity in the speech signals, we found that pitch errors were closely related to the number of aperiodic segments. The autocorrelation approach was found to be the most robust pitch estimator for pathological voice data. Further research on more severely pathological voice data is desirable in order to reduce pitch estimation errors.
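
The autocorrelation approach found to be most robust above works by locating the lag at which a voiced frame best correlates with itself. A minimal sketch, with the search range and test tone as illustrative assumptions:

```python
import numpy as np

def pitch_autocorr(frame, fs, fmin=60.0, fmax=400.0):
    """Estimate pitch from the autocorrelation peak inside the plausible
    lag range for voiced speech (fs/fmax .. fs/fmin samples)."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(ac[lo:hi + 1])
    return fs / lag

# Usage: a 40 ms synthetic voiced frame with a 150 Hz fundamental.
fs = 16000
t = np.arange(int(0.04 * fs)) / fs
voiced = np.sin(2 * np.pi * 150 * t) + 0.3 * np.sin(2 * np.pi * 300 * t)
print(pitch_autocorr(voiced, fs))   # close to 150 Hz
```

Restricting the lag search range is what keeps the estimator from locking onto harmonics, though, as the paper notes, strongly aperiodic pathological segments can still defeat it.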

VoIP Receiver Structure for Enhancing Speech Quality Based on Telematics (텔레메틱스 기반의 VoIP 음성 통화품질 향상을 위한 수신단 구조)

  • Kim, Hyoung-Gook;Seo, Kwang-Duk
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.11 no.3
    • /
    • pp.48-54
    • /
    • 2012
  • The quality of real-time voice communication over Internet Protocol networks based on telematics is affected by network impairments such as delay, jitter, and packet loss. To resolve this issue, this paper proposes a receiver-based method for enhancing VoIP speech quality. The proposed method delivers high-quality voice using playout control and signal reconstruction, consisting of concealment of lost packets, adaptive playout-buffer scheduling using active jitter estimation, and smooth interpolation between two signals in a transition region. The proposed algorithm achieves higher Perceptual Evaluation of Speech Quality (PESQ) values and lower buffering delay than the reference algorithm.
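
Adaptive playout-buffer scheduling with jitter estimation is commonly built from exponentially weighted averages of packet transit times, in the style of the RTP jitter estimator. A sketch of that general idea follows; the gain of 0.125 and the four-deviation safety margin are conventional illustrative choices, not the paper's parameters.

```python
def playout_delay(transit_times, safety=4.0):
    """Adaptive playout delay: EWMA of network transit time plus a
    safety margin of a few jitter deviations (RTP-style sketch)."""
    d_hat, v_hat = transit_times[0], 0.0
    for t in transit_times:
        d_hat = 0.875 * d_hat + 0.125 * t               # smoothed delay
        v_hat = 0.875 * v_hat + 0.125 * abs(t - d_hat)  # smoothed jitter
    return d_hat + safety * v_hat

# Usage: transit times in ms with occasional jitter spikes; the buffer
# target sits above the mean transit by a jitter-dependent margin.
transits = [20, 21, 19, 35, 20, 22, 18, 30, 21, 20]
print(playout_delay(transits))
```

A larger estimated jitter stretches the buffer (fewer late losses, more delay); a stable network shrinks it, which is the trade-off behind the paper's reported PESQ gain at lower buffering delay.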

Phoneme Recognition based on Two-Layered Stereo Vision Neural Network (2층 구조의 입체 시각형 신경망 기반 음소인식)

  • Kim, Sung-Ill;Kim, Nag-Cheol
    • Journal of Korea Multimedia Society
    • /
    • v.5 no.5
    • /
    • pp.523-529
    • /
    • 2002
  • The present study describes neural networks for stereoscopic vision applied to identifying human speech. In speech recognition based on stereoscopic vision neural networks (SVNN), similarities are first obtained by comparing input vocal signals with standard models. They are then fed to a dynamic process in which both competitive and cooperative interactions take place among neighboring similarities. Through these dynamics, a single winner neuron is finally detected. In a comparative study, the two-layered SVNN was 7.7% higher in recognition accuracy than a hidden Markov model (HMM). The evaluation results show that SVNN outperformed the existing HMM recognizer.
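
The competitive-cooperative dynamics described above can be sketched as a toy winner-take-all iteration: each unit is excited by its neighbours and inhibited in proportion to total activity, so the strongest similarity eventually dominates. The update rule and coefficients here are illustrative assumptions, not the paper's network equations.

```python
import numpy as np

def winner_take_all(similarities, excite=0.2, inhibit=0.1, steps=50):
    """Iterate a competitive-cooperative update: each unit gains from its
    neighbours (cooperation) and loses a share of the total activity
    (competition) until the strongest unit dominates."""
    a = np.array(similarities, dtype=float)
    for _ in range(steps):
        neighbours = np.convolve(a, [excite, 1.0, excite], mode="same")
        a = np.maximum(0.0, neighbours - inhibit * a.sum())
    return int(np.argmax(a))

# Usage: similarities of one input frame to five phoneme models.
scores = [0.30, 0.55, 0.60, 0.58, 0.20]
print(winner_take_all(scores))   # index of the dominating unit
```

Subtractive inhibition drives weak units to zero first, while neighbour excitation keeps plausibly close candidates alive longer, which matches the "competitive and cooperative" behaviour the abstract describes.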
