• Title/Abstract/Keywords: Korean speech

Search results: 5,300 items (processing time: 0.03 s)

원어민 및 외국인 화자의 음성인식을 위한 심층 신경망 기반 음향모델링 (DNN-based acoustic modeling for speech recognition of native and foreign speakers)

  • 강병옥;권오욱
    • 말소리와 음성과학 / Vol. 9 No. 2 / pp.95-101 / 2017
  • This paper proposes a new method to train Deep Neural Network (DNN)-based acoustic models for speech recognition of native and foreign speakers. The proposed method consists of determining multi-set state clusters with various acoustic properties, training a DNN-based acoustic model, and recognizing speech based on that model. In the proposed method, the hidden nodes of the DNN are shared, but the output nodes are separated to accommodate the different acoustic properties of native and foreign speech. In an English speech recognition task with Korean and English speakers, the proposed method is shown to slightly improve recognition accuracy compared to the conventional multi-condition training method.
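The shared-hidden / separate-output idea in the abstract above can be sketched in a few lines. This is a minimal illustration in plain Python, not the paper's implementation: the layer sizes, random initialization, and group names are all invented for the example.

```python
import math
import random

random.seed(0)

def make_layer(n_in, n_out):
    """Random affine layer (illustrative initialization)."""
    w = [[random.gauss(0.0, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    b = [random.gauss(0.0, 0.1) for _ in range(n_out)]
    return w, b

def linear(x, layer):
    w, b = layer
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(w, b)]

def relu(v):
    return [max(0.0, u) for u in v]

def softmax(v):
    m = max(v)
    e = [math.exp(u - m) for u in v]
    s = sum(e)
    return [u / s for u in e]

# Hidden layers are shared; each speaker group gets its own output layer
n_feat, n_hidden, n_states = 13, 32, 10
hidden = [make_layer(n_feat, n_hidden), make_layer(n_hidden, n_hidden)]
heads = {"native": make_layer(n_hidden, n_states),
         "foreign": make_layer(n_hidden, n_states)}

def forward(x, group):
    h = x
    for layer in hidden:                     # shared representation
        h = relu(linear(h, layer))
    return softmax(linear(h, heads[group]))  # group-specific state posteriors

x = [0.1] * n_feat                           # a dummy acoustic feature vector
p_native = forward(x, "native")
p_foreign = forward(x, "foreign")
```

Both heads read the same hidden representation, so training data from both speaker groups updates the shared layers while each group keeps its own output distribution.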

훈련음성 데이터에 적응시킨 필터뱅크 기반의 MFCC 특징파라미터를 이용한 전화음성 연속숫자음의 인식성능 향상에 관한 연구 (A study on the recognition performance of connected digit telephone speech for MFCC feature parameters obtained from the filter bank adapted to training speech database)

  • 정성윤;김민성;손종목;배건성;강점자
    • 대한음성학회:학술대회논문집 / 대한음성학회 2003년도 5월 학술대회지 / pp.119-122 / 2003
  • In general, triangular filters are used in the filter bank when MFCCs are computed from the spectrum of a speech signal. In [1], a new feature extraction approach was proposed that instead uses filter shapes obtained from the spectrum of training speech data: principal component analysis is applied to the spectrum of the training data to obtain the filter coefficients. In this paper, we carry out speech recognition experiments using the approach of [1] on a large amount of telephone speech, namely the Korean connected-digit telephone speech database released by SITEC. Experimental results and our findings are discussed.
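The PCA-derived filterbank idea can be illustrated with a toy example: take the first principal component of a set of "training" spectra and use its magnitude as a data-driven filter shape. This is a minimal stdlib sketch (power iteration instead of a full eigendecomposition); the toy spectra and the 8-bin resolution are invented for the example.

```python
import math
import random

random.seed(1)

def principal_component(spectra, iters=200):
    """First PCA direction of mean-centered spectra, via power iteration."""
    n_bins = len(spectra[0])
    mean = [sum(s[i] for s in spectra) / len(spectra) for i in range(n_bins)]
    centered = [[s[i] - mean[i] for i in range(n_bins)] for s in spectra]
    v = [random.random() for _ in range(n_bins)]
    for _ in range(iters):
        # One covariance multiply: C v = X^T (X v) / N, without forming C
        proj = [sum(c[i] * v[i] for i in range(n_bins)) for c in centered]
        v = [sum(p * c[i] for p, c in zip(proj, centered)) / len(centered)
             for i in range(n_bins)]
        norm = math.sqrt(sum(u * u for u in v))
        v = [u / norm for u in v]
    return v

# Toy "training spectra": an 8-bin spectrum whose band around bin 3
# varies in strength from sample to sample, plus a little noise
spectra = []
for _ in range(50):
    a = 1.0 + random.random()
    spectra.append([a * math.exp(-((i - 3) ** 2) / 2.0)
                    + 0.05 * random.random() for i in range(8)])

shape = principal_component(spectra)
filt = [abs(u) for u in shape]   # magnitude as a data-driven filter shape
```

Because the variance in this toy data comes from the band around bin 3, the extracted filter shape peaks there, which is the sense in which the filter is "adapted to the training speech."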


강인한 음성 인식 시스템을 사용한 감정 인식 (Emotion Recognition using Robust Speech Recognition System)

  • 김원구
    • 한국지능시스템학회논문지 / Vol. 18 No. 5 / pp.586-591 / 2008
  • This paper studies an emotion recognition system combined with a speech recognition system that is robust to emotional variation, in order to improve the performance of speech-based human emotion recognition. First, using a speech database containing various emotions, we studied how emotional variation affects speech recognition performance and implemented a speech recognition system that is less affected by it. Emotion recognition then compares the emotion models for the input sentence according to the speech recognition result and makes the final emotion decision for the input speech. In the experiments, the robust speech recognizer was an HMM-based speaker-independent word recognizer using RASTA mel-cepstrum and delta cepstrum as speech parameters and CMS (cepstral mean subtraction) for signal bias removal. Emotion recognition combined with this recognizer performed better than the emotion recognizer alone.
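Of the front-end techniques this entry mentions (RASTA mel-cepstrum, delta cepstrum, CMS), cepstral mean subtraction is the easiest to show in a few lines: subtract the per-utterance mean of each cepstral coefficient so that a stationary channel bias cancels out. A minimal sketch with toy 2-dimensional "cepstral" frames:

```python
def cepstral_mean_subtraction(frames):
    """Subtract the per-utterance mean of each cepstral coefficient,
    cancelling a stationary channel bias (e.g., telephone-line coloring)."""
    dim = len(frames[0])
    mean = [sum(f[d] for f in frames) / len(frames) for d in range(dim)]
    return [[f[d] - mean[d] for d in range(dim)] for f in frames]

# Toy "cepstral" frames with 2 coefficients each
frames = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
normalized = cepstral_mean_subtraction(frames)
```

After subtraction each coefficient averages to zero over the utterance, so any constant additive offset in the cepstral domain (a multiplicative channel in the spectral domain) is removed.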

언어장애인을 위한 안드로이드 기반 의사소통보조 어플리케이션 (An Android Application for Speech Communication of People with Speech Disorders)

  • 최윤정;홍기형
    • 말소리와 음성과학 / Vol. 6 No. 4 / pp.141-148 / 2014
  • Voice is the most common means of communication, but some people have difficulty producing speech due to congenital or acquired disorders. Individuals with speech disorders may lose their speaking ability due to hearing impairment, encephalopathy or cerebral palsy accompanied by motor skill impairments, or autism caused by mental problems. They nonetheless need to communicate, so some of them use various types of AAC (Augmentative & Alternative Communication) devices to meet their communication needs. In this paper, a mobile application for literate people with speech disorders was designed and implemented, with accurate and fast sentence-completion functions for efficient user interaction. From a user study and a previous study on Korean text-based communication for adults with speech difficulties, we identified functionality and usability requirements. In particular, a user interface with scanning features was designed considering the users' motor skills in operating the touch screen of a mobile device. Finally, we conducted a usability test of the application. The results show that the application is easy to learn and efficient for communication by people with speech disorders.
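A sentence-completion function of the kind described can be sketched as a prefix filter over stored sentences, returning the shortest matches first so the fastest-to-confirm option appears on top. The stored sentences and the ranking rule here are illustrative, not taken from the application:

```python
def complete(prefix, stored_sentences, limit=3):
    """Return up to `limit` stored sentences starting with `prefix`,
    shortest first so the quickest-to-confirm option is on top."""
    hits = [s for s in stored_sentences if s.startswith(prefix)]
    return sorted(hits, key=len)[:limit]

# Illustrative sentence store; a real AAC app would use the user's own phrases
sentences = ["I want water", "I want to go home", "I want to rest",
             "Thank you", "I feel pain"]
suggestions = complete("I want", sentences)
```

With a scanning interface, a short ranked list like this matters: each extra candidate the user must scan past costs a full highlight cycle, so ordering by likely confirmation speed reduces selection time.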

Effect of Music Training on Categorical Perception of Speech and Music

  • L., Yashaswini;Maruthy, Sandeep
    • 대한청각학회지 / Vol. 24 No. 3 / pp.140-148 / 2020
  • Background and Objectives: The aim of this study is to evaluate the effect of music training on the characteristics of auditory perception of speech and music. The perception of speech and music stimuli was assessed across their respective stimulus continua, and the resulting plots were compared between musicians and non-musicians. Subjects and Methods: Thirty musicians with formal music training and twenty-seven non-musicians participated in the study (age: 20 to 30 years). They were assessed on identification of consonant-vowel syllables (/da/ to /ga/), vowels (/u/ to /a/), a vocal music note (/ri/ to /ga/), and an instrumental music note (/ri/ to /ga/) across the respective stimulus continua. Each continuum contained 15 tokens with equal step size between adjacent tokens. The resulting identification scores were plotted against each token and analyzed for the presence of a categorical boundary. If a categorical boundary was found, the plots were analyzed with six parameters of categorical perception: the 50% crossover point, the lower edge of the categorical boundary, the upper edge of the categorical boundary, the phoneme boundary width, the slope, and the intercepts. Results: Overall, the results showed that speech and music are perceived differently by musicians and non-musicians. In musicians, both speech and music are perceived categorically, while in non-musicians only speech is perceived categorically. Conclusions: The findings indicate that music is perceived categorically by musicians even when the stimulus is devoid of vocal tract features. The findings support the view that categorical perception is strongly influenced by training; the results are discussed in light of the motor theory of speech perception.
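Two of the six parameters, the 50% crossover point and the slope, can be estimated by fitting a logistic function to identification scores along the continuum. A rough sketch in plain Python (gradient descent on squared error; the toy 15-token scores and the fitting settings are invented for the example, not taken from the study):

```python
import math

def fit_logistic(tokens, scores, lr=0.5, steps=5000):
    """Fit p(x) = 1 / (1 + exp(-k * (x - x0))) by gradient descent on
    squared error; x0 is the 50% crossover, k the slope at the boundary."""
    x0 = sum(tokens) / len(tokens)
    k = 1.0
    for _ in range(steps):
        gx0 = gk = 0.0
        for x, y in zip(tokens, scores):
            p = 1.0 / (1.0 + math.exp(-k * (x - x0)))
            common = (p - y) * p * (1.0 - p)
            gx0 += common * (-k)
            gk += common * (x - x0)
        x0 -= lr * gx0 / len(tokens)
        k -= lr * gk / len(tokens)
    return x0, k

# Toy 15-token /da/-/ga/ continuum with an abrupt switch around token 8
tokens = list(range(1, 16))
scores = [0.0] * 7 + [0.5] + [1.0] * 7
x0, k = fit_logistic(tokens, scores)
```

A steep fitted slope with a narrow boundary width is what "categorical" perception looks like in these plots; a shallow slope indicates continuous perception.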

한국어 파열음상의 Voice Onset Time(VOT) : 정상군과 언어실행증 환자비교에 대한 사전 연구 (Voice Onset Time (VOT) During Korean Plosives Production: A Preliminary Study on Normal and Apraxia of Speech Subjects)

  • 김향희
    • 대한후두음성언어의학회지 / Vol. 8 No. 1 / pp.49-53 / 1997
  • Aberrations in VOT measures in apraxia of speech are indicative of speech motor programming impairment. In English, overlaps of VOT between voiceless and voiced plosives have frequently been observed in patients with apraxia of speech. Unlike English, Korean plosives constitute a trichotomy in manner of production: voiceless unaspirated (tense) /p', t', k'/; voiceless or voiced, weakly aspirated (lax) /p-b, t-d, k-g/; and voiceless, heavily aspirated /p, t, k/. In this spectrographic study, VOT measures of Korean plosives produced by a patient with apraxia of speech were compared with those of age- and gender-matched normal subjects. The results indicated partial overlaps between the VOTs of /b, d, g/ and those of /p, t, k/, implying that the errors were phonetic in nature. In addition, larger VOT variability was noted in apraxia of speech compared to the normal subjects.
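VOT itself is a simple measurement: the time from the stop release (burst) to the onset of voicing, negative when voicing precedes the burst. A small sketch, with hypothetical VOT ranges chosen only to illustrate the kind of category overlap described above:

```python
def vot(burst_time_s, voicing_onset_s):
    """Voice onset time in ms; negative means prevoicing before release."""
    return (voicing_onset_s - burst_time_s) * 1000.0

def ranges_overlap(a, b):
    """True if two (min, max) VOT ranges share any value."""
    return a[0] <= b[1] and b[0] <= a[1]

# Hypothetical VOT ranges (ms), invented to show a partial overlap
lax_voiced = (10.0, 40.0)      # stand-in for /b, d, g/
aspirated = (35.0, 90.0)       # stand-in for /p, t, k/
overlap = ranges_overlap(lax_voiced, aspirated)
```

When two categories' VOT ranges overlap like this, single productions in the overlap zone are ambiguous, which is why overlap is read as a phonetic-level (timing) disturbance rather than a phonemic substitution.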


Effects of gender, age, and individual speakers on articulation rate in Seoul Korean spontaneous speech

  • Kim, Jungsun
    • 말소리와 음성과학 / Vol. 10 No. 4 / pp.19-29 / 2018
  • The present study investigated whether articulation rate differs by gender, age, and individual speaker in a spontaneous speech corpus produced by 40 Seoul Korean speakers. Articulation rate was measured with a second-per-syllable metric and a syllable-per-second metric. The findings are as follows. First, in spontaneous Seoul Korean speech, there was a gender difference in articulation rate only in the 10-19 age group, in which men tended to speak faster than women. Second, individual speakers varied in their articulation rates, and which speakers spoke faster than others was not consistent. Finally, the two metrics differed: the coefficients of variation of the second-per-syllable metric were much higher than those of the syllable-per-second metric, and articulation rates under the syllable-per-second metric tended to be more distinct among individual speakers. These results imply that data gathered in a corpus of spontaneous Seoul Korean speech may reflect speaker-specific differences in articulatory movements.
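The two metrics and the coefficient-of-variation comparison can be reproduced with toy numbers. Everything below (the utterance syllable counts and durations) is invented for illustration; only the definitions follow the abstract:

```python
import statistics

def syllables_per_second(n_syllables, seconds):
    return n_syllables / seconds

def seconds_per_syllable(n_syllables, seconds):
    return seconds / n_syllables

def coefficient_of_variation(values):
    """Population standard deviation divided by the mean (unit-free spread)."""
    return statistics.pstdev(values) / statistics.fmean(values)

# Hypothetical utterances: (syllable count, duration in seconds)
utterances = [(12, 2.0), (30, 4.5), (8, 1.6)]
sps = [syllables_per_second(n, t) for n, t in utterances]
sec_per_syl = [seconds_per_syllable(n, t) for n, t in utterances]
cv_sps = coefficient_of_variation(sps)
cv_sec_per_syl = coefficient_of_variation(sec_per_syl)
```

With these particular toy values the second-per-syllable CV happens to come out larger than the syllable-per-second CV, the same direction as reported above, but that is a property of the example numbers, not a general identity.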

대용량 한국어 TTS의 결정트리기반 음성 DB 감축 방안 (A Tree-based Reduction of Speech DB in a Large Corpus-based Korean TTS)

  • 이정철
    • 한국컴퓨터정보학회논문지 / Vol. 15 No. 7 / pp.91-98 / 2010
  • Corpus-based unit-concatenation TTS using a large speech DB has the advantage of high naturalness, because it generates synthesized speech by concatenating context-dependent synthesis units with little additional signal processing. However, because the speech DB grows in proportion to the pursuit of naturalness, speaker identity, tone, and emotional expression, research on reducing the DB size by removing the many unit instances with similar phonetic context and acoustic properties is essential. This paper proposes a method for building a synthesis-unit database for Korean TTS that reduces the DB using a new decision-tree-based phone clustering method. To evaluate the clustering methods, synthesized speech was generated with a baseline Korean TTS system consisting of a language processor, a prosody processor, a unit selector, a synthesized-speech generator, a synthesis-unit database, and a speech output module, and MOS tests were performed for the four combinations of the two tree clustering methods, CM1 and CM2, with the full DB and the reduced DB. Experimental results show that the proposed method reduces the full speech DB to 23% of its size while the listening tests still show high MOS, demonstrating its applicability to future small-footprint TTS systems.
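The unit-pruning idea (removing units that are acoustically close to ones already kept) can be sketched as a greedy threshold filter. This is not the paper's decision-tree clustering; it is a deliberately simplified stand-in, with made-up 2-D "acoustic" vectors and an arbitrary threshold:

```python
def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def reduce_units(units, threshold):
    """Greedy pruning: keep a unit only if every already-kept unit is
    farther away than `threshold`; otherwise treat it as redundant."""
    kept = []
    for u in units:
        if all(distance(u, k) > threshold for k in kept):
            kept.append(u)
    return kept

# Made-up 2-D "acoustic" vectors for units of one phone; two near-duplicates
units = [(0.0, 0.0), (0.1, 0.0), (2.0, 2.0), (2.05, 2.0), (5.0, 1.0)]
reduced = reduce_units(units, threshold=0.5)
```

The near-duplicate units are dropped while acoustically distinct ones survive, which is the trade-off being measured when DB size shrinks but MOS stays high.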

콘포머 기반 FastSpeech2를 이용한 한국어 음식 주문 문장 음성합성기 (A Korean menu-ordering sentence text-to-speech system using conformer-based FastSpeech2)

  • 최예린;장재후;구명완
    • 한국음향학회지 / Vol. 41 No. 3 / pp.359-366 / 2022
  • This paper proposes a Korean menu-ordering speech synthesizer using conformer-based FastSpeech2. The conformer, originally proposed for speech recognition, combines convolutional neural networks with Transformers so that both global and local information can be extracted well. To this end, the feed-forward network is split in half and placed at the beginning and the end, forming a macaron structure that wraps the multi-head self-attention module and the convolutional network. In this work, the conformer architecture, whose good performance has been confirmed in Korean speech recognition, is applied to Korean speech synthesis. For comparison with an existing synthesis model, both a Transformer-based FastSpeech2 and a conformer-based FastSpeech2 were trained. The dataset was built in-house with phoneme distribution taken into account; in particular, in addition to general conversation, a corpus specialized in menu-ordering sentences was constructed and used for training, compensating for the weakness of existing TTS systems in pronouncing loanwords. Synthesized speech was generated with Parallel WaveGAN and evaluated; the conformer-based FastSpeech2 achieved a markedly better MOS of 4.04. This study confirms that, in a Korean speech synthesis model, changing the same architecture from Transformer to conformer improves performance.
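The macaron ordering described in this entry (half-step feed-forward, self-attention, convolution, half-step feed-forward, each with a residual connection) can be shown structurally. The sketch below treats the input as a plain list of numbers and takes the sub-modules as arguments; it shows only the ordering and the half-weighted residuals, not a real Conformer layer (no layer normalization, no multi-head attention internals):

```python
def conformer_block(x, half_ffn, self_attention, convolution):
    """Macaron ordering: half-step FFN, self-attention, convolution,
    half-step FFN, each added to the running value as a residual."""
    x = [xi + 0.5 * fi for xi, fi in zip(x, half_ffn(x))]
    x = [xi + ai for xi, ai in zip(x, self_attention(x))]
    x = [xi + ci for xi, ci in zip(x, convolution(x))]
    x = [xi + 0.5 * fi for xi, fi in zip(x, half_ffn(x))]
    return x

# Toy sub-modules: identity feed-forward; attention and convolution
# that contribute nothing, so only the ordering and residuals show
out = conformer_block([1.0, 2.0],
                      half_ffn=lambda x: x,
                      self_attention=lambda x: [0.0] * len(x),
                      convolution=lambda x: [0.0] * len(x))
```

The design intuition is the division of labor: self-attention captures global (sentence-level) dependencies while the convolution module captures local ones, and the split feed-forward halves sandwich them.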

한국어 발화 속도의 지역, 성별, 세대에 따른 특징 연구 (Speech rate in Korean across region, gender and generation)

  • 이나라;신지영;유도영;김경화
    • 말소리와 음성과학 / Vol. 9 No. 1 / pp.27-39 / 2017
  • This paper deals with how speech rate in Korean is affected by sociolinguistic factors such as region, gender, and generation. Speech rate was quantified as articulation rate (excluding physical pauses) and speaking rate (including physical pauses), both expressed as the number of syllables per second (sps). Other acoustic measures such as pause frequency and duration were also examined. Four hundred twelve subjects were chosen from the Korean Standard Speech Database considering their age, gender, and region. The results show that generation has a significant effect on both speaking rate and articulation rate: younger speakers produce speech with significantly faster speaking and articulation rates than older speakers, and the mean duration of total pause intervals and the total number of pauses also differ significantly between older and younger speakers. Gender has a significant effect only on articulation rate: male speakers' speech is characterized by a faster articulation rate together with longer and more frequent pauses. Finally, region has no effect on either speaking rate or articulation rate.
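The two rate definitions used above differ only in whether pause time is counted in the denominator. A minimal sketch with a hypothetical utterance (the syllable count and durations are invented for illustration):

```python
def speaking_rate(n_syllables, total_seconds):
    """Syllables per second, pauses included in the denominator."""
    return n_syllables / total_seconds

def articulation_rate(n_syllables, total_seconds, pause_seconds):
    """Syllables per second over speech time only, pauses excluded."""
    return n_syllables / (total_seconds - pause_seconds)

# Hypothetical utterance: 60 syllables over 12 s, 2 s of which are pauses
sr = speaking_rate(60, 12.0)
ar = articulation_rate(60, 12.0, 2.0)
```

Articulation rate is always at least as high as speaking rate for the same utterance, and the gap between the two grows with the amount of pausing, which is why the paper reports pause duration and frequency alongside the rates.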