• Title/Summary/Keyword: Speech Database

Search Results: 331

Automatic Vowel Sequence Reproduction for a Talking Robot Based on PARCOR Coefficient Template Matching

  • Vo, Nhu Thanh;Sawada, Hideyuki
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.5 no.3
    • /
    • pp.215-221
    • /
    • 2016
  • This paper describes an automatic vowel sequence reproduction system for a talking robot built to reproduce the human voice based on the working behavior of the human articulatory system. A sound analysis system is developed to record a sentence spoken by a human (mainly vowel sequences in the Japanese language) and to then analyze that sentence to produce the correct command packet so the talking robot can repeat it. An algorithm based on a short-time energy method is developed to separate and count sound phonemes. Template matching using partial correlation coefficients (PARCOR) is applied to find the voice in the talking robot's database most similar to the spoken voice. Combining the sound separation and counting results with the detection of vowels in human speech, the talking robot can reproduce a vowel sequence similar to the one spoken by the human. Two tests are performed to verify the working behavior of the robot. The results indicate that the robot can repeat a sequence of vowels spoken by a human with an average success rate of more than 60%.
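The short-time energy segmentation the abstract mentions can be sketched as follows; the frame size, hop, and threshold here are illustrative assumptions, not the paper's values.

```python
# Sketch: count phonemes in a vowel sequence by thresholding short-time energy.
import numpy as np

def short_time_energy(signal, frame_len=256, hop=128):
    """Energy of each analysis frame."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.array([np.sum(f.astype(float) ** 2) for f in frames])

def count_segments(energy, threshold):
    """Count contiguous runs of frames whose energy exceeds the threshold."""
    active = energy > threshold
    # A segment starts at frame 0 if active, and wherever activity turns on.
    return int(np.sum(active[1:] & ~active[:-1]) + (1 if active[0] else 0))

# Two synthetic "vowels" separated by silence.
fs = 8000
t = np.arange(fs) / fs
sig = np.concatenate([np.sin(2 * np.pi * 220 * t[:2000]),
                      np.zeros(1000),
                      np.sin(2 * np.pi * 330 * t[:2000])])
e = short_time_energy(sig)
n = count_segments(e, threshold=0.1 * e.max())   # n == 2 segments
```

A real front end would apply the same run-counting to energy computed from recorded speech before the PARCOR template-matching stage.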

A Study On Male-To-Female Voice Conversion (남녀 음성 변환 기술연구)

  • Choi Jung-Kyu;Kim Jae-Min;Han Min-Su
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • spring
    • /
    • pp.115-118
    • /
    • 2000
  • Voice conversion technology is essential for TTS systems because the construction of a speech database takes much effort. In this paper, male-to-female voice conversion in a Korean LPC-based TTS system is studied. In general, the parameters for voice color conversion are categorized into acoustic and prosodic parameters. This paper adopts the LSF (Line Spectral Frequency) as the acoustic parameter, and pitch period and duration as the prosodic parameters. For the conversion, the pitch period is halved, the duration is shortened by 25%, and the LSFs are shifted linearly. The synthesized speech is then post-filtered by a bandpass filter. The proposed algorithm is simpler than other algorithms, for example VQ- and neural-network-based methods, and does not even require estimating formant information. The MOS (Mean Opinion Score) test gives 2.25 for naturalness and 3.2 for closeness to a female voice. In conclusion, with the proposed algorithm a male-to-female voice conversion system can be implemented simply, with relatively successful results.
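The parameter manipulations the abstract lists (pitch period halved, duration shortened by 25%, LSFs shifted linearly) can be sketched as below; the LSF scale factor is an illustrative assumption, since the abstract does not give one.

```python
# Sketch of the prosodic/acoustic parameter conversion described above.
import numpy as np

def convert_prosody(pitch_period_samples, duration_frames):
    """Halve the pitch period and shorten duration by 25%."""
    return pitch_period_samples / 2.0, duration_frames * 0.75

def shift_lsfs(lsfs, factor=1.15):
    """Linearly scale LSFs (radians, 0..pi), clipped and re-sorted
    so the result remains a valid ordered LSF set."""
    shifted = np.clip(np.asarray(lsfs, dtype=float) * factor, 0.0, np.pi - 1e-3)
    return np.sort(shifted)

p, d = convert_prosody(160, 100)   # e.g. 160-sample pitch period, 100 frames
lsf = shift_lsfs([0.2, 0.5, 1.0, 1.8, 2.5])
```

In a full system these converted parameters would drive the LPC synthesizer, followed by the bandpass post-filter the abstract mentions.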

Reconstructive Trends in Post-Ablation Patients with Esophagus and Hypopharynx Defect

  • Ki, Sae Hwi;Choi, Jong Hwan;Sim, Seung Hyun
    • Archives of Craniofacial Surgery
    • /
    • v.16 no.3
    • /
    • pp.105-113
    • /
    • 2015
  • The main challenge in pharyngoesophageal reconstruction is the restoration of swallowing and speech functions. The aim of this paper is to review the reconstructive options and associated complications for patients with head and neck cancer. A literature review was performed for pharyngoesophageal reconstruction after ablative surgery for head and neck cancer, covering studies published between January 1980 and July 2015 and listed in the PubMed database. Search queries combined 'esophagus' with 'free flap', 'microsurgical', or 'free tissue transfer'. The query returned 123 studies, of which 33 full-text publications met the inclusion criteria; reviewing the references of these 33 studies yielded 15 additional studies. Pharyngoesophageal reconstruction should be individualized for each patient and clinical context. Fasciocutaneous free flaps and pedicled flaps are effective for partial pharyngoesophageal defects, while fasciocutaneous free flaps and jejunal free flaps are effective for circumferential defects. Pedicled flaps remain a safe option for high-risk surgical patients or in the presence of a fistula. Among free flaps, the anterolateral thigh free flap and the jejunal free flap were associated with superior outcomes compared with the radial forearm free flap. Speech function is reported to be better with the fasciocutaneous free flap than with the jejunal free flap.

A Development of Administrative Affairs Supporting System using Call Control Mode of CTI (CTI 호출 제어 방식을 이용한 행정 업무 지원 시스템의 개발)

  • 최준기;조성범;정상수;이상정
    • Journal of the Korea Society of Computer and Information
    • /
    • v.4 no.2
    • /
    • pp.46-60
    • /
    • 1999
  • Recently, CTI (Computer Telephony Integration) technology has been widely applied to areas such as video conferencing, file transfer, voice mail, automatic message transfer and automatic redial, integrated messaging, and network fax. In this paper, an administrative affairs supporting system using the call control mode of CTI is designed. The system was developed to relieve the inefficient handling of work caused by heavy call volumes from candidates during a college's entrance examination period. The database of the system is designed using object modeling techniques, and an automatic calling and response system using the CTI call control mode is implemented. In particular, a TTS (Text To Speech) module is developed to answer, by voice, candidates asking whether they passed or failed the college's entrance examination.

Factors for Speech Signal Time Delay Estimation (음성 신호를 이용한 시간지연 추정에 미치는 영향들에 관한 연구)

  • Kwon, Byoung-Ho;Park, Young-Jin;Park, Youn-Sik
    • Transactions of the Korean Society for Noise and Vibration Engineering
    • /
    • v.18 no.8
    • /
    • pp.823-831
    • /
    • 2008
  • Since it requires only a light computational load and a small database, sound source localization using the time difference of arrival (the TDOA method) is applied in many research fields, such as robot auditory systems and teleconferencing. Time delay estimation, the most important element of the TDOA method, has been studied broadly. However, studies of the factors affecting time delay estimation are insufficient, especially for real-environment applications. In 1997, Brandstein and Silverman reported that the performance of time delay estimation deteriorates as the reverberation time of a room increases. Yet even at the same reverberation time, estimation performance differs across specific parts of the signal. To find the reason, we studied and analyzed the factors affecting time delay estimation using speech signals and room impulse responses. As a result, we found that time delay estimation performance varies with the R/D ratio and with signal characteristics even at the same reverberation time. We also define a performance index (PI) that shows a tendency similar to the R/D ratio, and propose a method to improve time delay estimation performance using the PI.
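The core time delay estimator underlying the TDOA method picks the lag that maximizes the cross-correlation between two microphone signals. This is a generic sketch of that estimator, not the paper's implementation.

```python
# Sketch: estimate the delay between two signals from the cross-correlation peak.
import numpy as np

def estimate_delay(x, y):
    """Return the lag (in samples) of y relative to x."""
    corr = np.correlate(y, x, mode="full")
    # Index (len(x) - 1) corresponds to zero lag.
    return int(np.argmax(corr)) - (len(x) - 1)

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)          # source signal at microphone 1
true_delay = 37
y = np.zeros_like(x)
y[true_delay:] = x[:-true_delay]       # microphone 2 hears x delayed by 37 samples
est = estimate_delay(x, y)             # est == 37
```

Reverberation adds correlated echoes to `y`, which is exactly what degrades this peak-picking in the conditions the paper analyzes; weighting schemes such as GCC-PHAT are the usual remedy.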

Performance Improvement of Voice Dialing System using Post-Processing (후처리를 이용한 음성 다이얼링 시스템의 성능향상)

  • 김원구
    • The Journal of the Acoustical Society of Korea
    • /
    • v.19 no.5
    • /
    • pp.9-12
    • /
    • 2000
  • A voice dialing system can recognize the speaker's command and dial the destination phone number automatically. Such a system is useful for wireless handsets and portable communication devices. In a personal voice dialing system, all the commands used to train the HMMs for speech recognition are based on owner-selected phrases; its implementation requires much less memory and computation than a speaker-independent system. Since only two or three training utterances per command are used, it is difficult to estimate the exact state duration distributions needed to improve recognition performance. Therefore, a post-processor is presented to improve performance. Experiments using a database collected over telephone lines showed that the proposed post-processor improves the recognition system's performance.

Nasal Place Detection with Acoustic Phonetic Parameters (음향음성학 파라미터를 사용한 비음 위치 검출)

  • Lee, Suk-Myung;Choi, Jeung-Yoon;Kang, Hong-Goo
    • The Journal of the Acoustical Society of Korea
    • /
    • v.31 no.6
    • /
    • pp.353-358
    • /
    • 2012
  • This paper describes acoustic phonetic parameters for detecting nasal place of articulation in a knowledge-based speech recognition system. Initial acoustic phonetic parameters are selected by studying the nasal production mechanism, in which sound radiates through the nasal cavity. Nasals are produced with differing articulatory configurations, which can be distinguished by measuring acoustic phonetic parameters such as band energy ratios, band energy differences, formants, and formant differences. These parameters were tested in a classification experiment among labial, alveolar, and velar nasals. An overall classification rate of 57.5% is obtained using the proposed acoustic phonetic parameters on the TIMIT database.
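A band energy ratio of the kind listed among the parameters compares low-band against higher-band spectral energy; nasal murmurs concentrate energy at low frequencies. The band edges below are illustrative assumptions, not the paper's values.

```python
# Sketch: band energy ratio of a frame, low band vs. a higher band.
import numpy as np

def band_energy_ratio(frame, fs, low=(0, 500), high=(500, 2500)):
    spec = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    def band(lo, hi):
        return spec[(freqs >= lo) & (freqs < hi)].sum()
    return band(*low) / (band(*high) + 1e-12)   # epsilon avoids divide-by-zero

fs = 8000
t = np.arange(512) / fs
nasal_like = np.sin(2 * np.pi * 250 * t)    # energy below 500 Hz (murmur-like)
oral_like = np.sin(2 * np.pi * 1500 * t)    # energy above 500 Hz
r_nasal = band_energy_ratio(nasal_like, fs)  # ratio > 1
r_oral = band_energy_ratio(oral_like, fs)    # ratio < 1
```

A classifier would threshold or combine several such measurements (with formant-based parameters) to separate labial, alveolar, and velar nasals.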

Classification of Diphthongs using Acoustic Phonetic Parameters (음향음성학 파라메터를 이용한 이중모음의 분류)

  • Lee, Suk-Myung;Choi, Jeung-Yoon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.32 no.2
    • /
    • pp.167-173
    • /
    • 2013
  • This work examines classification of diphthongs, as part of a distinctive feature-based speech recognition system. Acoustic measurements related to the vocal tract and the voice source are examined, and analysis of variance (ANOVA) results show that vowel duration, energy trajectory, and formant variation are significant. A balanced error rate of 17.8% is obtained for 2-way diphthong classification on the TIMIT database, and error rates of 32.9%, 29.9%, and 20.2% are obtained for /aw/, /ay/, and /oy/, for 4-way classification, respectively. Adding the acoustic features to widely used Mel-frequency cepstral coefficients also improves classification.

Development of 3-Ch EGG System Using Modulation and Demodulation Techniques(I) (변복조 방식을 이용한 3-채널 EGG 시스템의 개발(I))

  • Kim, J.M.;Song, C.G.;Lee, M.H.
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1993 no.05
    • /
    • pp.134-135
    • /
    • 1993
  • The purpose of this research is the development of an EGG system for quantitative assessment of laryngeal function using speech and electroglottographic data. The designed EGG system is a 4-electrode system in which the excitation current source is supplied through the 1st and 4th electrodes. The output signals from the 2nd and 3rd electrodes, modulated at the frequency of the excitation current source, reflect the air-pressure waveform from the vocal folds. After demodulation, pitch signals are obtained from the modulated waveforms through a differentiator that cuts off frequencies below 0.1 Hz. Conventional pitch extraction has relied on software processing, but the proposed system is implemented in analog hardware in order to eliminate interference from the low formant frequencies of speech. We will construct a database for discriminating between pathological subjects and control groups for each case. Using the proposed 3-channel EGG system and the LMS algorithm, the distinctive characteristics of laryngeal function in voiced and other regions will be detected from EGG signals and LPC spectra.

Speaker-Dependent Emotion Recognition For Audio Document Indexing

  • Hung LE Xuan;QUENOT Georges;CASTELLI Eric
    • Proceedings of the IEEK Conference
    • /
    • summer
    • /
    • pp.92-96
    • /
    • 2004
  • Research on emotion is currently of great interest in speech processing as well as in the human-machine interaction domain. In recent years, more and more research relating to emotion synthesis or emotion recognition has been developed for different purposes, each approach using its own methods and parameters measured on the speech signal. In this paper, we propose using a short-time parameter, MFCC coefficients (Mel-Frequency Cepstrum Coefficients), and a simple but efficient classification method, Vector Quantization (VQ), for speaker-dependent emotion recognition. Many other features (energy, pitch, zero crossing rate, phonetic rate, LPC, and their derivatives) are also tested and combined with MFCC coefficients in order to find the best combination. Other models, GMM and HMM (discrete and continuous Hidden Markov Models), are studied as well, in the hope that continuous distributions and the temporal behavior of this feature set will improve the quality of emotion recognition. The maximum accuracy in recognizing five different emotions exceeds 88% using only MFCC coefficients with the VQ model. This simple but efficient approach yields results even better than those obtained on the same database in a human listening evaluation [8], and the result compares favorably with other approaches.
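The VQ classifier described here trains one codebook per emotion and labels an utterance by minimum average quantization distortion. A minimal sketch follows, using synthetic 2-D vectors in place of MFCC frames and basic k-means for codebook training (both simplifying assumptions).

```python
# Sketch: per-class VQ codebooks, classification by minimum mean distortion.
import numpy as np

def train_codebook(features, k=4, iters=20, seed=0):
    """Basic k-means codebook over feature vectors (stand-in for LBG)."""
    rng = np.random.default_rng(seed)
    codes = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(features[:, None] - codes[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                codes[j] = features[labels == j].mean(axis=0)
    return codes

def distortion(features, codes):
    """Mean distance from each vector to its nearest codeword."""
    d = np.linalg.norm(features[:, None] - codes[None], axis=2)
    return d.min(axis=1).mean()

def classify(features, codebooks):
    return min(codebooks, key=lambda emo: distortion(features, codebooks[emo]))

rng = np.random.default_rng(1)
train = {"neutral": rng.normal(0, 1, (200, 2)),   # synthetic feature clusters
         "angry": rng.normal(5, 1, (200, 2))}
books = {emo: f and None or train_codebook(f) for emo, f in train.items()} \
    if False else {emo: train_codebook(f) for emo, f in train.items()}
pred = classify(rng.normal(5, 1, (50, 2)), books)  # classified as "angry"
```

With real data, `features` would be the MFCC frames of one utterance, and each emotion's codebook would be trained on that speaker's labeled recordings.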
