• Title/Abstract/Keyword: Speech Data

Search results: 1,388 items (processing time: 0.025 seconds)

한국어 원거리 음성의 지속시간 연구 (A Study on the Durational Characteristics of Korean Distant-Talking Speech)

  • 김선희, 대한음성학회지:말소리 / no. 54 / pp. 1-14 / 2005
  • This paper presents durational characteristics of Korean distant-talking speech using speech data consisting of 500 distant-talking utterances and 500 normal utterances from 10 speakers (5 male and 5 female). Each file was segmented and labeled manually, and the duration of each segment and each word was extracted. Using a statistical method, the durational change of distant-talking speech relative to normal speech was analyzed. The results show that word duration in distant-talking speech increases in comparison with the normal style, and that the average unvoiced-consonant duration is reduced while the average vocalic duration is increased. Female speakers show a stronger tendency toward lengthening in distant-talking speech. Finally, this study also shows that speakers of distant-talking speech can be classified according to their different duration rates.
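As a rough illustration of the durational comparison described above, the sketch below contrasts per-word durations from the two speaking styles and tests the difference with a paired t-test. The data values, pairing scheme, and use of SciPy are illustrative assumptions, not the paper's actual corpus or statistical method.

```python
# Hypothetical sketch: word durations (seconds) in normal vs. distant-talking
# style, paired per item. Values are toy numbers, not the paper's data.
import numpy as np
from scipy import stats

normal  = np.array([0.31, 0.28, 0.35, 0.30, 0.33])
distant = np.array([0.36, 0.33, 0.41, 0.34, 0.39])

ratio = distant.mean() / normal.mean()   # durational change rate
t, p = stats.ttest_rel(distant, normal)  # paired comparison across items
print(f"duration ratio (distant/normal) = {ratio:.2f}, t = {t:.2f}, p = {p:.4f}")
```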


A Real-Time Embedded Speech Recognition System

  • Nam, Sang-Yep; Lee, Chun-Woo; Lee, Sang-Won; Park, In-Jung, 대한전자공학회 Conference Proceedings / ITC-CSCC 2002 - 1 / pp. 690-693 / 2002
  • With the growth of the communications business, the embedded market is developing rapidly both at home and abroad. Embedded systems are used in many products, such as wired and wireless communication equipment and information appliances, and there is much work on applying speech recognition to embedded platforms such as PDA, PCS, CDMA-2000, or IMT-2000 devices. This study implements a speech recognition engine and database with minimal memory, suitable for a real-time embedded system. The implementation proceeds as follows. First, the DC component is removed from the input speech, high frequencies are compensated by pre-emphasis with coefficient 0.97, and the signal is divided into 256-sample frames by an overlapped shift method. Linear predictive coefficients are obtained from each frame by the Levinson-Durbin algorithm and converted into cepstral feature vectors. For HMM training, the Baum-Welch re-estimation algorithm is applied to each word, and the recognition result is obtained by the maximum-likelihood method over the word models. The speech data consist of 40 spoken commands and 10 digits for controlling the menus of an embedded system, collected from 15 male and 15 female speakers. Since ARM CPUs are often adopted in embedded systems, the speech recognition engine was ported to an ARM core evaluation board, and recognition tests were performed with parameter sets 1 and 3, which had shown the best recognition rates among the five proposed parameter sets. The engine shows a 95% recognition rate overall: 96% for the speech command recognizer and 94% for the digit recognizer.
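The front-end steps named in the abstract (DC removal, pre-emphasis with coefficient 0.97, 256-sample overlapped frames, LPC via Levinson-Durbin) can be sketched as below. The frame shift, Hamming window, and LPC order are assumptions the abstract does not specify, and the cepstral conversion step is omitted.

```python
import numpy as np

def preemphasis(x, alpha=0.97):
    """Remove the DC component, then boost high frequencies."""
    x = x - x.mean()
    return np.append(x[0], x[1:] - alpha * x[:-1])

def frames(x, size=256, shift=128):
    """Split into 256-sample frames; the 50% shift is an assumption."""
    n = 1 + max(0, (len(x) - size) // shift)
    return np.stack([x[i * shift : i * shift + size] for i in range(n)])

def levinson_durbin(r, order=10):
    """LPC coefficients from autocorrelation r; order 10 is an assumption."""
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err  # reflection coeff.
        a[1:i] += k * a[i - 1:0:-1]
        a[i] = k
        err *= 1.0 - k * k
    return a, err

x = np.random.randn(4000)                # stand-in for one utterance
for frame in frames(preemphasis(x)):
    w = frame * np.hamming(len(frame))
    r = np.correlate(w, w, "full")[len(w) - 1:]
    lpc, pred_err = levinson_durbin(r)   # cepstral conversion would follow
```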


음성의 변곡점 추출 및 전송에 기반한 가변 데이터율 음성 부호화 기법 (A Variable Data Rate Speech Coding Technique Based on the Inflection Point Detection of Speech)

  • 임병관, 전기학회논문지 / vol. 62, no. 4 / pp. 562-565 / 2013
  • A new variable-rate speech coding technique is proposed. The method is based on the observation that the speech signal is approximately linear over very short periods of time. The information transmitted is the locations and data values of inflection points; if the distance between inflection points is large, the midpoint location and its data value are also delivered. Thus, the encoder transmits both the location and the data value for inflection samples, but only the location for non-inflection samples. The location information is expressed using one bit per sample: 0 for a non-inflection point and 1 for an inflection point. At the receiver, the decoder estimates the untransmitted sample values at non-inflection locations by interpolating between the received inflection-point values. With 50% of the computational cost of existing CVSD delta modulation, the proposed method is expected to achieve a data rate of 36 to 38 kbps and an SNR of 10 to 13 dB.
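A toy sketch of the coding idea follows: mark samples where the second difference changes sign (inflection points), transmit a one-bit-per-sample location mask plus the values at marked samples, and let the decoder linearly interpolate the rest. The sign-change detector and the midpoint-rule gap threshold are assumptions about details the abstract leaves open.

```python
import numpy as np

def encode(x, max_gap=32):
    """Keep inflection samples; max_gap triggers the midpoint rule."""
    d2 = np.diff(x, 2)
    mask = np.zeros(len(x), dtype=np.uint8)
    mask[0] = mask[-1] = 1                     # endpoints always sent
    idx = np.where(np.sign(d2[:-1]) != np.sign(d2[1:]))[0] + 2
    mask[idx] = 1                              # second difference changes sign
    kept = np.flatnonzero(mask)
    for a, b in zip(kept[:-1], kept[1:]):      # long gap -> send midpoint too
        if b - a > max_gap:
            mask[(a + b) // 2] = 1
    return mask, x[mask.astype(bool)]          # 1 bit/sample + kept values

def decode(mask, values):
    """Linearly interpolate the untransmitted samples."""
    kept = np.flatnonzero(mask)
    return np.interp(np.arange(len(mask)), kept, values)

x = np.sin(np.linspace(0, 20, 400)) + 0.1 * np.random.randn(400)
mask, vals = encode(x)
x_hat = decode(mask, vals)
print(f"kept {mask.sum()} of {len(x)} samples")
```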

DSK50을 이용한 16kbps ADPCM 구현 (Implementation of 16Kbps ADPCM by DSK50)

  • 조윤석; 한경호, 대한전기학회 1996 Summer Conference Proceedings B / pp. 1295-1297 / 1996
  • The CCITT G.721 and G.723 standard ADPCM algorithms are implemented using TI's fixed-point DSP Starter Kit (DSK). ADPCM can be implemented at various rates, such as 16K, 24K, 32K, and 40K. ADPCM is a sample-based compression technique whose complexity is not as high as that of other speech compression techniques such as CELP, VSELP, and GSM, so it is widely applicable to low-cost speech compression applications such as tapeless answering machines, simultaneous voice and fax modems, and digital phones. The TMS320C50 is a low-cost fixed-point DSP chip, and the C50 DSK system has an AIC (analog interface chip) operating as a single-chip A/D and D/A converter with 14-bit resolution, a C50 DSP chip with 10K of on-chip memory, and an RS232C interface module. The ADPCM C code is compiled with the TI C50 C compiler and runs from the DSK on-chip memory. The input speech signal is converted into 14-bit linear PCM data, encoded into ADPCM data, and sent to a PC through RS232C. The ADPCM data on the PC are then received by the DSK through RS232C, decoded back into 14-bit linear PCM data, and converted into a speech signal. The DSK system has audio in/out jacks for speech input and output.
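The G.721/G.723 coders combine an adaptive predictor with an adaptive quantizer; the sketch below is a heavily simplified stand-in (fixed first-order predictor, multiplicative step adaptation) meant only to show the encode/decode structure, not the standard algorithm. With 4-bit codes at 8 kHz it corresponds to 32 kbps; 2-bit codes would give the 16 kbps of the title.

```python
import numpy as np

def adapt(step, q, qmax):
    """Shared step-size rule: grow near clipping, shrink otherwise."""
    return min(max(step * (1.6 if abs(q) >= qmax else 0.9), 1.0), 2048.0)

def adpcm_encode(x, bits=4, step0=16.0):
    qmin, qmax = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    pred, step, codes = 0.0, step0, []
    for s in x:
        q = int(np.clip(np.rint((s - pred) / step), qmin, qmax))
        codes.append(q)
        pred += q * step                  # track the decoder's reconstruction
        step = adapt(step, q, qmax)
    return codes

def adpcm_decode(codes, bits=4, step0=16.0):
    qmax = (1 << (bits - 1)) - 1
    pred, step, out = 0.0, step0, []
    for q in codes:
        pred += q * step
        out.append(pred)
        step = adapt(step, q, qmax)       # identical state to the encoder
    return np.array(out)

pcm = (1000 * np.sin(np.linspace(0, 30, 800))).astype(np.int16)
rec = adpcm_decode(adpcm_encode(pcm))
print(f"SNR ~ {10 * np.log10(np.var(pcm) / np.var(pcm - rec)):.1f} dB")
```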


대화 영상 생성을 위한 한국어 감정음성 및 얼굴 표정 데이터베이스 (Korean Emotional Speech and Facial Expression Database for Emotional Audio-Visual Speech Generation)

  • 백지영; 김세라; 이석필, 인터넷정보학회논문지 / vol. 23, no. 2 / pp. 71-77 / 2022
  • In this study, we extend a speech synthesis model into one that synthesizes speech according to emotion, and we collect a database for generating emotion-dependent facial expressions. The database is divided into male and female data and consists of emotional utterances and facial expressions. Two professional actors of different genders pronounce sentences in Korean. Each sentence is assigned one of four emotions: anger, happiness, neutrality, or sadness. Each actor performs about 3,300 sentences per emotion. The 26,468 sentences collected by recording are not duplicated and have content consistent with the corresponding emotion. Since building a high-quality database plays an important role in the performance of subsequent research, the database is evaluated on three criteria: emotion category, intensity, and authenticity. To examine accuracy by data type, the constructed database is divided into audio-visual, audio-only, and video-only data for evaluation and comparison.

The Effects of Pitch Increasing Training (PIT) on Voice and Speech of a Patient with Parkinson's Disease: A Pilot Study

  • Lee, Ok-Bun; Jeong, Ok-Ran; Shim, Hong-Im; Jeong, Han-Jin, 음성과학 / vol. 13, no. 1 / pp. 95-105 / 2006
  • The primary goal of therapeutic intervention for dysarthric speakers is to increase speech intelligibility, so deciding which features are critical to intelligibility is very important in speech therapy. The purpose of this study is to determine the effects of pitch increasing training (PIT) on the speech of a subject with Parkinson's disease (PD). The PIT program focuses on increasing pitch while a vowel is sustained at constant loudness, with the loudness level somewhat higher than habitual loudness. A 67-year-old female with PD participated in the study. Speech therapy was conducted for 4 sessions (200 minutes) over one week. Before and after the treatment, acoustic, perceptual, and speech-naturalness evaluations were performed for data analysis. A speech and voice satisfaction index (SVSI) was obtained after the treatment. Results showed improvements in voice quality and speech naturalness. In addition, the patient's satisfaction ratings (SVSI) indicated a positive relationship between improved speech production and the satisfaction of the patient and caregivers.


한국인 표준 음성 DB 구축(II) (Developing a Korean standard speech DB (II))

  • 신지영; 김경화, 말소리와 음성과학 / vol. 9, no. 2 / pp. 9-22 / 2017
  • The purpose of this paper is to report the whole process of developing the Korean Standard Speech Database (KSS DB). The project was supported by an SPO (Supreme Prosecutors' Office) research grant for three years, from 2014 to 2016. KSS DB is designed to provide speech data for acoustic-phonetic and phonological studies and for speaker recognition systems. For the samples to represent spoken Korean, sociolinguistic factors such as region (9 regional dialects), age (5 age groups over 20), and gender (male and female) were considered. The goal of the project was to collect over 3,000 male and female speakers across the nine regional dialects and five age groups, employing direct and indirect methods. Speech samples from 3,191 speakers (2,829 via the direct method and 362 via the indirect method) were collected and databased. KSS DB is designed to collect read and spontaneous speech samples from each speaker through 5 speech tasks: three (pseudo-)spontaneous tasks (producing prolonged simple vowels, 28 blanked sentences, and spontaneous talk) and two read tasks (reading 55 phonetically and phonologically rich sentences and reading three short passages). KSS DB includes a 16-bit, 44.1 kHz speech waveform file and an orthographic transcription file for each speech task.

음성감정인식 성능 향상을 위한 트랜스포머 기반 전이학습 및 다중작업학습 (Transformer-based transfer learning and multi-task learning for improving the performance of speech emotion recognition)

  • 박순찬; 김형순, 한국음향학회지 / vol. 40, no. 5 / pp. 515-522 / 2021
  • Training data for speech emotion recognition are hard to obtain in sufficient quantity because emotion labeling is difficult. In this paper, to improve speech emotion recognition performance, transfer learning from large-scale speech recognition training data is applied to a transformer-based model. In addition, we propose a method that exploits context information without separate decoding, through multi-task learning with speech recognition. Speech emotion recognition experiments on the IEMOCAP dataset achieve a weighted accuracy of 70.6% and an unweighted accuracy of 71.6%, showing that the proposed method is effective in improving speech emotion recognition performance.
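A hedged PyTorch sketch of the multi-task setup: a shared transformer encoder feeds an utterance-level emotion head and a frame-level CTC head for the auxiliary speech recognition task, trained with a weighted sum of the two losses. All sizes, the loss weight, and the feature front-end are assumptions; the paper starts from a pretrained model (transfer learning), which this sketch does not reproduce.

```python
import torch
import torch.nn as nn

class MultiTaskSER(nn.Module):
    def __init__(self, feat_dim=80, d_model=256, n_emotions=4, vocab=30):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.emo_head = nn.Linear(d_model, n_emotions)  # utterance level
        self.ctc_head = nn.Linear(d_model, vocab)       # frame level (ASR)

    def forward(self, feats):                 # feats: (batch, time, feat_dim)
        h = self.encoder(self.proj(feats))
        return self.emo_head(h.mean(dim=1)), self.ctc_head(h)

model = MultiTaskSER()
ce_loss, ctc_loss = nn.CrossEntropyLoss(), nn.CTCLoss(blank=0)

feats = torch.randn(2, 120, 80)               # toy batch of features
emo_y = torch.tensor([1, 3])                  # emotion labels
tokens = torch.randint(1, 30, (2, 20))        # toy transcripts for the ASR task

emo_logits, ctc_logits = model(feats)
log_probs = ctc_logits.log_softmax(-1).transpose(0, 1)   # (T, B, V) for CTC
loss = ce_loss(emo_logits, emo_y) + 0.3 * ctc_loss(      # 0.3 is an assumption
    log_probs, tokens, torch.full((2,), 120), torch.full((2,), 20))
loss.backward()
```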

Performance of GMM and ANN as a Classifier for Pathological Voice

  • Wang, Jianglin; Jo, Cheol-Woo, 음성과학 / vol. 14, no. 1 / pp. 151-162 / 2007
  • This study focuses on the classification of pathological voice using a GMM (Gaussian Mixture Model) and compares the results to previous work done with an ANN (Artificial Neural Network). Speech data from normal people and patients were collected, then diagnosed and classified into two categories. Six characteristic parameters (jitter, shimmer, NHR, SPI, APQ, and RAP) were chosen, and classification methods based on the artificial neural network and the Gaussian mixture model were employed to discriminate the data into normal and pathological speech. The GMM method attained a 98.4% average correct classification rate on training data and a 95.2% average correct classification rate on test data. Different mixture numbers (3 to 15) were used in order to find an optimal condition for classification. We also compared the average classification rates based on GMM, ANN, and HMM. The proper number of Gaussian mixtures needs to be investigated in future work.
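The classification scheme can be sketched as follows: fit one Gaussian mixture per class on the six parameters and assign a sample to the class with the higher log-likelihood. The mixture count shown is one value from the 3-to-15 range the paper swept; the feature values are placeholders, not clinical data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Six parameters per sample: jitter, shimmer, NHR, SPI, APQ, RAP (toy values).
rng = np.random.default_rng(0)
X_normal = rng.normal(0.0, 1.0, size=(100, 6))
X_pathol = rng.normal(1.0, 1.5, size=(100, 6))

gmm_n = GaussianMixture(n_components=5, random_state=0).fit(X_normal)
gmm_p = GaussianMixture(n_components=5, random_state=0).fit(X_pathol)

def classify(x):
    """Assign each row to the class whose mixture scores it higher."""
    return np.where(gmm_p.score_samples(x) > gmm_n.score_samples(x),
                    "pathological", "normal")

print(classify(rng.normal(1.0, 1.5, size=(3, 6))))
```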


정보검색 기법과 동적 보간 계수를 이용한 N-gram 언어모델의 적응 (N-gram Adaptation Using Information Retrieval and a Dynamic Interpolation Coefficient)

  • 최준기; 오영환, 대한음성학회지:말소리 / no. 56 / pp. 207-223 / 2005
  • The goal of language model adaptation is to improve a background language model with a relatively small adaptation corpus. This study presents a language model adaptation technique for the case where no additional text data for adaptation exist. We propose using an information retrieval (IR) technique with N-gram language modeling to collect the adaptation corpus from the baseline text data. We also propose using a dynamic language model interpolation coefficient to combine the background language model and the adapted language model. The interpolation coefficient is estimated from the word hypotheses obtained by segmenting the input speech data reserved as held-out validation data. This allows the final adapted model to improve on the performance of the background model consistently. The proposed approach reduces the word error rate by 13.6% relative to the baseline 4-gram model on a two-hour broadcast news speech recognition task.
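The dynamic interpolation step can be sketched as below: given the probabilities that the background and adapted n-gram models assign to a held-out word-hypothesis sequence, the mixing coefficient is estimated by the standard EM update for mixture weights and then used to combine the two models. The IR-based corpus collection is not reproduced here, and the probability values are toy numbers.

```python
import numpy as np

def estimate_lambda(p_bg, p_ad, iters=20, lam=0.5):
    """EM for the weight mixing two LMs, fit on held-out word hypotheses.

    p_bg, p_ad: probabilities each model assigns to the same word sequence.
    """
    p_bg, p_ad = np.asarray(p_bg), np.asarray(p_ad)
    for _ in range(iters):
        post = lam * p_ad / (lam * p_ad + (1.0 - lam) * p_bg)  # E-step
        lam = post.mean()                                      # M-step
    return lam

p_bg = np.array([0.02, 0.10, 0.005, 0.03])   # toy probabilities
p_ad = np.array([0.05, 0.08, 0.020, 0.01])
lam = estimate_lambda(p_bg, p_ad)
p_mix = lam * p_ad + (1.0 - lam) * p_bg      # final interpolated model
print(f"lambda = {lam:.3f}")
```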
