• Title/Summary/Keyword: computer-based speech analysis system

A Study on Image Recommendation System based on Speech Emotion Information

  • Kim, Tae Yeun; Bae, Sang Hyun
    • Journal of Integrative Natural Science / v.11 no.3 / pp.131-138 / 2018
  • In this paper, we implement a system that matches and recommends images based on the emotional information in the user's speech. To classify the emotional information of the user's speech, emotional features are extracted with the PLP algorithm and classified, and a speech-emotion DB is constructed from the results. To classify the emotional information of images, emotional colors and emotional vocabulary obtained through factor analysis are mapped into a single space. A standardized image recommendation system then matches the keywords of the speech-emotion data against the image-emotion data with the BM-GA algorithm, so that images appropriate to the emotional information of the user's speech are recommended. In the performance evaluation, the recognition rate of the standardized vocabulary over four stages was 80.48% on average, and user satisfaction with the system was 82.4%. We therefore expect that classifying images according to the user's speech information will contribute to the study of emotional exchange between users and computers.
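
As a rough illustration of the classification step, the sketch below trains one Gaussian mixture model per emotion class over cepstral features and labels an utterance by the best-scoring model. The paper extracts PLP features; MFCCs stand in here because common Python libraries ship MFCC extractors, and the file lists, labels, and parameters are all assumptions.

```python
# Hypothetical sketch of per-emotion classification over cepstral features.
# MFCCs stand in for the paper's PLP features.
import librosa
import numpy as np
from sklearn.mixture import GaussianMixture

def extract_features(path):
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # (13, frames)
    return mfcc.T                                       # one row per frame

def train_models(files_by_emotion, n_components=8):
    # Train one GMM per emotion class on pooled frames of labelled speech.
    models = {}
    for emotion, paths in files_by_emotion.items():
        frames = np.vstack([extract_features(p) for p in paths])
        models[emotion] = GaussianMixture(n_components).fit(frames)
    return models

def classify(models, path):
    # Label by the model with the highest average frame log-likelihood.
    frames = extract_features(path)
    return max(models, key=lambda e: models[e].score(frames))
```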

Implementation of Music Broadcasting Service System in the Shopping Center Using Text-To-Speech Technology (TTS를 이용한 매장 음악 방송 서비스 시스템 구현)

  • Chang, Moon-Soo; Kang, Sun-Mee
    • Speech Sciences / v.14 no.4 / pp.169-178 / 2007
  • This thesis describes the development of a service system for small shops that supports not only music broadcasting but also the editing and generation of voice announcements using TTS (Text-To-Speech) technology. The system was developed in a web environment so that it can be accessed easily whenever and wherever it is needed. It controls sound through the Silverlight media player on top of the ASP.NET 2.0 technology, without any additional application software, and its Ajax controls let the system serve multiple users even at maximum load. The TTS engine is built into the server side, so the service can be provided without installing anything on the user's computer. Owing to its convenience and usefulness, the system enables the business sector to provide better service to many shops, and additional functions such as statistical analysis will further help shop management provide desirable services.
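
The paper's stack is ASP.NET 2.0 with server-side TTS; as a loose analog, here is a minimal sketch of the server-side announcement-rendering step using the pyttsx3 library, which is an assumed stand-in since the paper does not name its TTS engine.

```python
# Minimal sketch of server-side announcement generation with TTS.
# pyttsx3 is an assumed stand-in for the TTS engine used in the paper.
import pyttsx3

def render_announcement(text, out_path="announcement.wav"):
    engine = pyttsx3.init()
    engine.setProperty("rate", 150)       # speaking rate (words per minute)
    engine.save_to_file(text, out_path)   # queue synthesis to an audio file
    engine.runAndWait()                   # block until rendering finishes
    return out_path

render_announcement("The shop will close in ten minutes.")
```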

GMM based Nonlinear Transformation Methods for Voice Conversion

  • Vu, Hoang-Gia; Bae, Jae-Hyun; Oh, Yung-Hwan
    • Proceedings of the KSPS conference / 2005.11a / pp.67-70 / 2005
  • Voice conversion (VC) is a technique for modifying the speech signal of a source speaker so that it sounds as if it were spoken by a target speaker. Most previous VC approaches used a linear transformation function based on a GMM to convert the source spectral envelope to the target spectral envelope. In this paper, we propose several nonlinear GMM-based transformation functions in an attempt to deal with the over-smoothing effect of linear transformation. To obtain high-quality modifications of the speech signals, our VC system is implemented in the Harmonic plus Noise Model (HNM) analysis/synthesis framework. Experimental results are reported on the English corpus MOCHA-TIMIT.
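
For reference, a sketch of the linear GMM baseline this paper improves on: a joint GMM is fit over stacked (source, target) spectral vectors, and conversion takes the posterior-weighted conditional expectation E[y | x]. The feature dimension, component count, and aligned training pairs are assumptions.

```python
# Sketch of the baseline linear GMM transformation function for VC.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def fit_joint_gmm(X, Y, n_components=16):
    Z = np.hstack([X, Y])  # rows: time-aligned [source, target] vectors
    return GaussianMixture(n_components, covariance_type="full").fit(Z)

def convert(gmm, x, d):
    # Split each component's mean/covariance into source and target blocks.
    mu_x, mu_y = gmm.means_[:, :d], gmm.means_[:, d:]
    S_xx = gmm.covariances_[:, :d, :d]
    S_yx = gmm.covariances_[:, d:, :d]
    # Posterior p(m | x) from the marginal source-side Gaussians.
    lik = np.array([multivariate_normal.pdf(x, mu_x[m], S_xx[m])
                    for m in range(gmm.n_components)])
    post = gmm.weights_ * lik
    post /= post.sum()
    # Posterior-weighted sum of per-component linear regressions.
    y = np.zeros(d)
    for m in range(gmm.n_components):
        y += post[m] * (mu_y[m] + S_yx[m] @ np.linalg.solve(S_xx[m], x - mu_x[m]))
    return y
```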

Implementation of HMM-Based Speech Recognizer Using TMS320C6711 DSP

  • Bae, Hyojoon; Jung, Sungyun; Son, Jongmok; Kwon, Hongseok; Kim, Siho; Bae, Keunsung
    • Proceedings of the IEEK Conference / summer / pp.391-394 / 2004
  • This paper focuses on the DSP implementation of an HMM-based speech recognizer that can handle a vocabulary of several hundred words and is speaker-independent. First, we develop an HMM-based speech recognition system on a PC that operates on a frame basis, with feature extraction and Viterbi decoding running in parallel to keep the processing delay as small as possible. Techniques such as linear discriminant analysis, state-based Gaussian selection, and a phonetic tied-mixture model are employed to reduce the computational burden and memory size. The system is then optimized and compiled on the TMS320C6711 DSP for real-time operation. The implemented system uses 486 kbytes of memory for data and acoustic models and 24.5 kbytes for program code. A maximum processing time of 29.2 ms for a 32 ms speech frame validates the real-time operation of the implemented system.
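
The core of the frame-synchronous decoder is the Viterbi recursion. A minimal log-domain sketch follows, assuming the transition matrix, per-frame observation log-likelihoods, and initial scores as inputs; the paper's actual DSP code applies further pruning and Gaussian selection not shown here.

```python
# Minimal sketch of frame-synchronous Viterbi decoding in the log domain.
import numpy as np

def viterbi(log_A, log_B, log_pi):
    """log_A: (S, S) transition scores, log_B: (T, S) observation scores,
    log_pi: (S,) initial scores. Returns the best state sequence."""
    T, S = log_B.shape
    delta = log_pi + log_B[0]            # best score ending in each state
    psi = np.zeros((T, S), dtype=int)    # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_A  # indexed (from_state, to_state)
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[t]
    path = [int(delta.argmax())]         # backtrace from the best final state
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]
```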

Analysis on Vowel and Consonant Sounds of Patient's Speech with Velopharyngeal Insufficiency (VPI) and Simulated Speech (구개인두부전증 환자와 모의 음성의 모음과 자음 분석)

  • Sung, Mee Young; Kim, Heejin; Kwon, Tack-Kyun; Sung, Myung-Whun; Kim, Wooil
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.7 / pp.1740-1748 / 2014
  • This paper focuses on listening tests and acoustic analysis of speech from patients with velopharyngeal insufficiency (VPI) and simulated speech from normal speakers. A set consisting of 50 words, vowels, and single syllables was defined for constructing the speech database, and a web-based listening evaluation system was developed for a convenient, automated evaluation procedure. The analysis results show that the pattern of incorrect recognition for VPI speech is similar to that for simulated speech. This similarity is also confirmed by comparing the formant locations of vowels and the spectra of consonant sounds. These results show that the simulation method is effective at generating speech signals similar to those of actual VPI patients. We expect that the simulated speech data can be employed effectively in future work such as acoustic model adaptation.
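
The formant comparison can be approximated with standard LPC root-solving on a short vowel segment, as in the hedged sketch below; the sample rate and LPC order are assumptions, not the paper's measurement setup.

```python
# Sketch of vowel formant estimation via LPC root-solving.
import numpy as np
import librosa

def formants(segment, sr=16000, order=12):
    a = librosa.lpc(segment, order=order)           # LPC polynomial coefficients
    roots = [r for r in np.roots(a) if r.imag > 0]  # one of each conjugate pair
    freqs = sorted(np.angle(r) * sr / (2 * np.pi) for r in roots)
    freqs = [f for f in freqs if f > 90]            # drop near-DC real-pole artifacts
    return freqs[:2]                                # rough F1, F2 estimates
```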

A Model of English Part-Of-Speech Determination for English-Korean Machine Translation (영한 기계번역에서의 영어 품사결정 모델)

  • Kim, Sung-Dong; Park, Sung-Hoon
    • Journal of Intelligence and Information Systems / v.15 no.3 / pp.53-65 / 2009
  • Part-of-speech determination is necessary for resolving part-of-speech ambiguity in English-Korean machine translation. This ambiguity increases parsing complexity and makes accurate translation difficult, so it must be resolved after lexical analysis and before parsing. This paper proposes the CatAmRes model, which resolves the part-of-speech ambiguity, and compares its performance with that of other part-of-speech tagging methods. CatAmRes determines the part of speech using a probability distribution obtained from Bayesian network training together with statistical information, both based on the Penn Treebank corpus. The proposed model consists of a Calculator and a POSDeterminer: the Calculator computes the degree of appropriateness of each candidate part of speech, and the POSDeterminer assigns the part of speech of the word based on the calculated values. In the experiments, we measure performance on sentences from the WSJ, Brown, and IBM corpora.
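
To make the "degree of appropriateness" idea concrete, here is an illustrative sketch (not the paper's CatAmRes implementation) that scores a candidate tag by a lexical term P(tag | word) combined with a contextual term P(tag | previous tag), both estimated by counting over tagged Penn Treebank sentences.

```python
# Illustrative appropriateness scoring from corpus statistics.
from collections import Counter

def train(tagged_sentences):
    word_tag, word_count = Counter(), Counter()
    tag_bigram, tag_count = Counter(), Counter()
    for sent in tagged_sentences:          # sent: list of (word, tag) pairs
        prev = "<s>"
        for word, tag in sent:
            w = word.lower()
            word_tag[(w, tag)] += 1
            word_count[w] += 1
            tag_bigram[(prev, tag)] += 1
            tag_count[prev] += 1
            prev = tag
    return word_tag, word_count, tag_bigram, tag_count

def appropriateness(word, tag, prev_tag, model):
    word_tag, word_count, tag_bigram, tag_count = model
    w = word.lower()
    lexical = word_tag[(w, tag)] / max(1, word_count[w])             # P(tag | word)
    contextual = tag_bigram[(prev_tag, tag)] / max(1, tag_count[prev_tag])
    return lexical * contextual  # a determiner would pick the max-scoring tag
```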

Authentication Performance Optimization for Smart-phone based Multimodal Biometrics (스마트폰 환경의 인증 성능 최적화를 위한 다중 생체인식 융합 기법 연구)

  • Moon, Hyeon-Joon; Lee, Min-Hyung; Jeong, Kang-Hun
    • Journal of Digital Convergence / v.13 no.6 / pp.151-156 / 2015
  • In this paper, we propose a personal multimodal biometric authentication system based on face detection, face recognition, and speaker verification for the smart-phone environment. The proposed system detects the face with the Modified Census Transform algorithm and then locates the eyes using a Gabor filter and the k-means algorithm. After preprocessing the detected face and eye positions, recognition is performed with the Linear Discriminant Analysis algorithm. In the speaker verification step, features are extracted from the endpoint-detected speech data as Mel-Frequency Cepstral Coefficients, and the speaker is verified with the Dynamic Time Warping algorithm because the speech features change in real time. The proposed multimodal biometric system fuses the face and speech features, optimizing the internal operations through integer representation, to perform real-time face detection, face recognition, and speaker verification on a smart-phone. By fusing the two modalities, the system can be expected to achieve reliable performance.
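
A minimal sketch of the speaker-verification step, comparing MFCC sequences of an enrollment and a test utterance with Dynamic Time Warping; the trim-based endpoint detection and the acceptance threshold are assumptions, not tuned values from the paper.

```python
# Sketch of MFCC + DTW speaker verification.
import librosa

def mfcc_seq(path):
    y, sr = librosa.load(path, sr=16000)
    y, _ = librosa.effects.trim(y)                 # crude endpoint detection
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

def verify(enroll_path, test_path, threshold=250.0):
    D, wp = librosa.sequence.dtw(mfcc_seq(enroll_path), mfcc_seq(test_path))
    cost = D[-1, -1] / len(wp)                     # path-length-normalized cost
    return cost < threshold                        # accept if similar enough
```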

Development of English Stress and Intonation Training System and Program for the Korean Learners of English Using Personal Computer (P.C.) (퍼스컴을 이용한 영어 강세 및 억양 교육 프로그램의 개발 연구)

  • Jeon, B.M.; Pae, D.B.; Lee, C.H.; Yu, C.K.
    • Speech Sciences / v.5 no.2 / pp.57-75 / 1999
  • The purpose of this paper is to develop an English prosody training system for Korean learners of English using a PC. The program, called the Intonation Training Tool (ITT), runs on DOS 5.0 and requires an IBM PC 386 or higher with 4 MBytes of main memory, an SVGA graphics card (1 MByte or more), a Sound Blaster 16, and a monitor of 14 inches or larger. The ITT program operates as follows: the learner listens to the English teacher's stress and intonation patterns while seeing them on the monitor, then practices the same patterns with a microphone. The program overlays the learner's stress and intonation patterns on the teacher's, so the learner can find and correct stress and intonation errors independently. This program is expected to be a highly efficient tool for Korean learners of English practicing English prosody in class without the aid of a native English speaker.
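
The overlay idea translates directly to modern tools: extract the pitch contours of the teacher and learner recordings and plot them together. The sketch below uses librosa's pYIN tracker as an assumed stand-in for the original DOS program's pitch analysis; file names are hypothetical.

```python
# Sketch of overlaying teacher and learner intonation contours.
import librosa
import matplotlib.pyplot as plt

def contour(path):
    y, sr = librosa.load(path, sr=16000)
    f0, _, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)  # NaN where unvoiced
    return librosa.times_like(f0, sr=sr), f0

for label, path in [("teacher", "teacher.wav"), ("learner", "learner.wav")]:
    t, f0 = contour(path)
    plt.plot(t, f0, label=label)
plt.xlabel("time (s)")
plt.ylabel("F0 (Hz)")
plt.legend()
plt.show()
```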

I-vector similarity based speech segmentation for interested speaker to speaker diarization system (화자 구분 시스템의 관심 화자 추출을 위한 i-vector 유사도 기반의 음성 분할 기법)

  • Bae, Ara; Yoon, Ki-mu; Jung, Jaehee; Chung, Bokyung; Kim, Wooil
    • The Journal of the Acoustical Society of Korea / v.39 no.5 / pp.461-467 / 2020
  • In noisy, multi-speaker environments, speech recognition performance is unavoidably lower than in a clean environment. To improve speech recognition, this paper extracts the signal of the speaker of interest from mixed speech signals containing multiple speakers. The VoiceFilter model is used to separate the overlapped speech signals effectively. In this work, clustering by Probabilistic Linear Discriminant Analysis (PLDA) similarity scores is employed to detect the speech of the speaker of interest, which is then used as the reference speaker for VoiceFilter-based separation. By utilizing the speaker features extracted from the speech detected by the proposed clustering method, this paper proposes a speaker diarization system that uses only the mixed speech, without an explicit reference speaker signal. A telephone dataset consisting of two speakers is used to evaluate the system. Before separation, the Source-to-Distortion Ratios (SDR) of the operator (Rx) speech and the customer (Tx) speech are 5.22 dB and -5.22 dB, respectively; with the proposed separation system they improve to 11.26 dB and 8.53 dB, respectively.
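
The SDR figures above follow the standard definition of signal energy over distortion energy. A minimal sketch, assuming time-aligned reference and estimated signals of equal length:

```python
# Minimal Source-to-Distortion Ratio (SDR) in dB.
import numpy as np

def sdr(reference, estimate):
    noise = reference - estimate
    return 10 * np.log10(np.sum(reference**2) / np.sum(noise**2))
```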

The Speech of Cleft Palate Patients using Nasometer, EPG and Computer based Speech Analysis System (비음 측정기, 전기 구개도 및 음성 분석 컴퓨터 시스템을 이용한 구개열 언어 장애의 특성 연구)

  • Shin, Hyo-Geun; Kim, Oh-Whan; Kim, Hyun-Gi
    • Speech Sciences / v.4 no.2 / pp.69-89 / 1998
  • The aim of this study is to develop an objective method of speech evaluation for children with cleft palates. To assess velopharyngeal function, Visi-Pitch, the Computerized Speech Lab (CSL), a Nasometer, and a Palatometer were used. Acoustic parameters were measured with the corresponding instruments: pitch (Hz), sound pressure level (dB), jitter (%), and diadochokinetic rate with Visi-Pitch; VOT and vowel formants (F1 and F2) with a spectrograph; and the degree of hypernasality with the Nasometer. In addition, the Palatometer was used to find the lingual-palatal contact patterns of cleft palate speech. Ten children with cleft palates and fifty normal children participated in the experiment. The results are as follows: (1) the higher nasalance of the children with cleft palates indicated a resonance disorder; (2) the cleft palate group showed palatal misarticulation and lateral misarticulation on the palatogram; (3) the children with cleft palates showed phonatory and respiratory problems: the duration of sustained vowels was shorter and the pitch higher than in the control group, while intensity, jitter, and diadochokinetic rate were lower; (4) on the spectrogram, the VOT of the children with cleft palates was longer, and F1 and F2 were lower, than in the control group.
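
The nasalance score a Nasometer reports is nasal acoustic energy as a percentage of combined nasal-plus-oral energy. A minimal sketch, assuming a two-channel recording with separate nasal and oral microphone signals:

```python
# Sketch of the nasalance score from nasal and oral channel signals.
import numpy as np

def nasalance(nasal, oral):
    n = np.sqrt(np.mean(nasal**2))   # RMS energy of the nasal channel
    o = np.sqrt(np.mean(oral**2))    # RMS energy of the oral channel
    return 100 * n / (n + o)         # percent nasalance
```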
