• Title/Summary/Keyword: 자유발화 (spontaneous speech)

Search results: 34

Some considerations for construction of spontaneous speech/text corpus (자유발화음성 및 텍스트코퍼스 구축에 관한 검토)

  • 이용주
    • Proceedings of the Acoustical Society of Korea Conference / 1994.06c / pp.303-309 / 1994
  • The focus of recent speech research is shifting from read speech to spontaneous speech. This paper reviews research trends in speech translation and dialogue systems targeting spontaneous speech, examines several issues in building speech and text corpora of spontaneous speech, and introduces an example of the corpus the authors are currently collecting.


Korean prosodic properties between read and spontaneous speech (한국어 낭독과 자유 발화의 운율적 특성)

  • Yu, Seungmi;Rhee, Seok-Chae
    • Phonetics and Speech Sciences / v.14 no.2 / pp.39-54 / 2022
  • This study aims to clarify the prosodic differences between speech types by examining read speech and spontaneous speech in the Korean portion of the L2 Korean Speech Corpus (a speech corpus for Korean as a foreign language). To this end, articulation length, articulation speed, pause length and frequency, and the mean fundamental frequency of each sentence were set as variables and analyzed with statistical methods (t-test, correlation analysis, and regression analysis). The results showed that read speech and spontaneous speech differ structurally in the prosodic phrases that make up each sentence, and that the prosodic elements differentiating the two speech types are articulation length, pause length, and pause frequency. The correlation between articulation speed and articulation length was highest in read speech, indicating that the longer a given sentence is, the faster the speaker speaks. In spontaneous speech, however, the correlation between articulation length and pause frequency within a sentence was high. Overall, spontaneous speech produces more pauses because short intonation phrases are strung together to build a sentence, and as a result the sentence is lengthened.
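The statistical pipeline this abstract names (t-test, correlation, regression over prosodic variables) can be sketched in a few lines of plain Python. The numbers below are invented for illustration, not the paper's data; only the shape of the analysis follows the abstract.

```python
# Minimal sketch of the abstract's analysis, assuming made-up
# articulation-length and articulation-speed values per sentence.
from math import sqrt
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

def pearson_r(x, y):
    """Pearson correlation coefficient between two variables."""
    mx, my = mean(x), mean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sqrt(sum((xi - mx) ** 2 for xi in x) *
               sum((yi - my) ** 2 for yi in y))
    return num / den

# Illustrative values: articulation length (s) per sentence and
# articulation speed (syllables/s) for read vs. spontaneous speech.
read_len = [2.1, 2.8, 3.5, 4.0, 4.6]
read_speed = [5.0, 5.3, 5.6, 5.9, 6.1]
spon_len = [1.2, 1.5, 1.9, 2.4, 2.6]

t = welch_t(read_len, spon_len)      # do the two speech types differ?
r = pearson_r(read_len, read_speed)  # longer sentence, faster speech?
print(round(t, 2), round(r, 2))
```

With these toy numbers, the positive t and near-1 correlation mirror the abstract's finding that articulation length separates the speech types and tracks speed in read speech.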

Spontaneous Speech Emotion Recognition Based On Spectrogram With Convolutional Neural Network (CNN 기반 스펙트로그램을 이용한 자유발화 음성감정인식)

  • Guiyoung Son;Soonil Kwon
    • The Transactions of the Korea Information Processing Society / v.13 no.6 / pp.284-290 / 2024
  • Speech emotion recognition (SER) is a technique that analyzes a speaker's voice patterns, including vibration, intensity, and tone, to determine their emotional state. Interest in artificial intelligence (AI) techniques has grown, and they are now widely used in medicine, education, industry, and the military. Nevertheless, existing studies have attained impressive results by using acted speech recorded by skilled actors in controlled environments for various scenarios. In particular, there is a mismatch between acted and spontaneous speech, since acted speech contains more explicit emotional expression than spontaneous speech. For this reason, spontaneous speech emotion recognition remains a challenging task. This paper aims to perform emotion recognition and improve its performance using spontaneous speech data. To this end, we implement deep learning-based speech emotion recognition with the VGG (Visual Geometry Group) network after converting 1-dimensional audio signals into 2-dimensional spectrogram images. The experimental evaluations are performed on the Korean spontaneous emotional speech database from AI-Hub, which covers 7 emotions: joy, love, anger, fear, sadness, surprise, and neutral. Using the time-frequency 2-dimensional spectrogram, we achieved average accuracies of 83.5% for adults and 73.0% for young people. In conclusion, our findings demonstrate that the suggested framework outperforms current state-of-the-art techniques for spontaneous speech and shows promising performance despite the difficulty of quantifying emotional expression in spontaneous speech.
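The front end this abstract describes, turning a 1-D audio signal into a 2-D time-frequency image a CNN such as VGG can consume, can be sketched with a naive framed DFT. Frame length, hop size, and the toy sine signal are assumptions for illustration; the paper's actual preprocessing (window, mel scaling, etc.) is not specified here.

```python
# Minimal sketch of 1-D audio -> 2-D spectrogram, assuming toy
# frame/hop sizes; real SER front ends usually add windowing and
# mel-scale filtering on top of this.
import cmath
import math

def frame_signal(signal, frame_len, hop):
    """Split a 1-D signal into overlapping fixed-length frames."""
    return [signal[i:i + frame_len]
            for i in range(0, len(signal) - frame_len + 1, hop)]

def dft_magnitudes(frame):
    """Magnitude spectrum of one frame (naive DFT, lower half only)."""
    n = len(frame)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(frame)))
            for k in range(n // 2)]

def spectrogram(signal, frame_len=64, hop=32):
    """2-D array (time frames x frequency bins) for a CNN input."""
    return [dft_magnitudes(f) for f in frame_signal(signal, frame_len, hop)]

# Toy 1-D "audio": a 440 Hz sine sampled at 8 kHz.
sr = 8000
sig = [math.sin(2 * math.pi * 440 * t / sr) for t in range(512)]
spec = spectrogram(sig)
print(len(spec), len(spec[0]))  # time frames x frequency bins
```

The resulting 2-D array is what would be rendered as a spectrogram image and fed to the convolutional network.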

Trends of Spontaneous Speech Dialogue Processing Technology (자유발화형 음성대화처리 기술동향)

  • Kwon, O.W.;Choi, S.K.;Roh, Y.H.;Kim, Y.K.;Park, J.G.;Lee, Y.K.
    • Electronics and Telecommunications Trends / v.30 no.4 / pp.26-35 / 2015
  • With the mobile revolution and the era of big data and the Internet of Things, controlling and using a variety of devices and services with human speech has come to be taken for granted. Spoken dialogue processing technology will evolve toward recognizing, understanding, and processing free, human-centered utterances. This paper surveys current domestic and international technology and industry trends in spoken dialogue processing, along with intellectual property trends, and describes the concept and future direction of human-centered spontaneous spoken dialogue processing technology.


Performance of Korean spontaneous speech recognizers based on an extended phone set derived from acoustic data (음향 데이터로부터 얻은 확장된 음소 단위를 이용한 한국어 자유발화 음성인식기의 성능)

  • Bang, Jeong-Uk;Kim, Sang-Hun;Kwon, Oh-Wook
    • Phonetics and Speech Sciences / v.11 no.3 / pp.39-47 / 2019
  • We propose a method to improve the performance of spontaneous speech recognizers by extending their phone set using speech data. In the proposed method, we first extract variable-length phoneme-level segments from broadcast speech signals and convert them into fixed-length latent vectors using a long short-term memory (LSTM) classifier. We then cluster acoustically similar latent vectors and build a new phone set by choosing the number of clusters with the lowest Davies-Bouldin index. We also update the lexicon of the speech recognizer by choosing the pronunciation sequence of each word with the highest conditional probability. To analyze the acoustic characteristics of the new phone set, we visualize its spectral patterns and segment durations. Through speech recognition experiments using a larger training data set than in our previous work, we confirm that the new phone set yields better performance than conventional phoneme-based and grapheme-based units in both spontaneous speech recognition and read speech recognition.
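The cluster-count selection step this abstract describes, choosing the number of clusters with the lowest Davies-Bouldin index, can be sketched with a small k-means over toy vectors. In the real system the vectors come from an LSTM classifier; here they are just made-up 2-D points forming three groups, and the deterministic initialization is an assumption for reproducibility.

```python
# Hedged sketch: pick the cluster count k with the lowest
# Davies-Bouldin index, assuming toy 2-D "latent vectors".
import math
import random

def kmeans(points, k, iters=25):
    """Plain k-means with deterministic, evenly spread initial centers."""
    centers = [points[(i * len(points)) // k] for i in range(k)]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: math.dist(p, centers[i]))].append(p)
        centers = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

def davies_bouldin(centers, clusters):
    """Mean over clusters of the worst (scatter_i + scatter_j) / separation."""
    k = len(centers)
    scatter = [sum(math.dist(p, centers[i]) for p in cl) / len(cl) if cl else 0.0
               for i, cl in enumerate(clusters)]
    return sum(max((scatter[i] + scatter[j]) / math.dist(centers[i], centers[j])
                   for j in range(k) if j != i)
               for i in range(k)) / k

# Toy latent vectors: three tight groups in 2-D.
rng = random.Random(1)
points = [(cx + rng.gauss(0, 0.1), cy + rng.gauss(0, 0.1))
          for cx, cy in [(0, 0), (5, 0), (0, 5)] for _ in range(20)]

scores = {k: davies_bouldin(*kmeans(points, k)) for k in range(2, 6)}
best_k = min(scores, key=scores.get)
print(best_k)  # the k with the lowest Davies-Bouldin index
```

For three well-separated groups the index is lowest at k = 3, which is the selection rule the abstract applies to its acoustic latent vectors.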

Study on the realization of pause groups and breath groups (휴지 단위와 호흡 단위의 실현 양상 연구)

  • Yoo, Doyoung;Shin, Jiyoung
    • Phonetics and Speech Sciences / v.12 no.1 / pp.19-31 / 2020
  • The purpose of this study is to observe the realization of pause groups and breath groups in adult speakers and to examine how gender, generation, and task affect this realization. For this purpose, we analyzed forty-eight male and female speakers, divided by generation into two groups: young and old. Task and gender affected the realization of both pause groups and breath groups. Pause groups were longer in read speech than in spontaneous speech, and longer in female speech. Breath groups, on the other hand, were longer in spontaneous speech and in male speech. In spontaneous speech, which requires planning, speakers produced shorter pause groups, while the short sentences of the reading material explain why breath groups were shorter in read speech. The gender difference resulted from differences in pause patterns: within breath groups, male speakers produced longer pauses than female speakers did, which may be due to differences in lung capacity. Generation, in contrast, affected neither pause groups nor breath groups; it influenced only the number of syllables and eojeols, which can be interpreted as a result of the difference in speech rate between generations.

Speaker age estimation and acoustic characteristics: According to pitch and speech rate (화자 연령 지각과 음성적 특성: 음높이와 발화 속도를 중심으로)

  • Seo, YoonJeong;Shin, Jiyoung
    • Phonetics and Speech Sciences / v.11 no.4 / pp.9-18 / 2019
  • This study aimed to investigate the correlation between a speaker's chronological age (CA) and perceived age (PA) and to specify the effect of pitch and speech rate as acoustic cues in judging age, using perceptual testing and acoustic analysis. Three perception tasks were conducted to measure the accuracy of 80 Korean listeners when presented with different types of speech. In all tasks, participants listened to speech samples and estimated the speaker's age in figures. It was found that Korean listeners can gauge the age of a speaker fairly precisely: CA and mean PA were positively correlated in all three tasks. The amount and type of information included in the voice samples clearly affected the accuracy of listeners' judgments. Moreover, the results revealed that listeners use acoustic information such as pitch and speech rate to estimate a speaker's age.

Korean Spoken Language Analysis System Using Concept and Syntactic Information (개념 및 구문 정보를 이용한 한국어 대화체 분석시스템)

  • Wang, Ji-Hyun;Seo, Young-Hoon
    • Annual Conference on Human and Language Technology / 1997.10a / pp.341-346 / 1997
  • The greatest strength of concept-based analysis is its robustness: it extracts only the important parts of an utterance that the speaker intends to convey and represents them with concept words, so it can ignore the various extraneous linguistic phenomena that occur in a sentence and extract only the main meaning. Unlike English and related languages, Korean is agglutinative and has partially free word order, so applying a purely concept-based analysis technique without syntactic information increases grammar complexity and sharply degrades system performance. The concept-based analysis method using syntactic information presented in this paper produces less ambiguity than purely concept-based analysis or methods using syntactic information alone, makes the grammar easier to describe, and overcomes many of the difficulties of processing spoken dialogue. In addition, the skip function of the analysis routine raises the analysis rate for natural utterances; classifying endings separated from word roots into fixed concepts resolves the grammar complexity caused by agglutination; and the analysis grammar accommodates the diverse sentence forms that arise from partially free word order.


A Korean Analysis based on Argument Structures for Spoken Language Translation (대화체 번역을 위한 논항 구조에 기반한 한국어 분석)

  • Jeong, Cheon-Yeong;Seo, Yeong-Hun
    • Journal of KIISE:Software and Applications / v.28 no.4 / pp.380-387 / 2001
  • This paper describes a Korean analysis method based on argument structures for spoken language translation. An argument-structure-based grammar is described without regard to word order, which resolves the problem of the grammar growing large due to the partially free word order of Korean. In addition, because the arguments governed by a predicate are selected from the grammar, phenomena characteristic of spoken language, such as interjections and repeated utterances, can be handled effectively. The data used for the experiment were 1,335 trained utterances and 420 untrained utterances from the 'travel guidance' domain. Analysis succeeded on 99.7% of the trained utterances and 93.3% of the untrained utterances.


Production and perception of Korean word-initial stops from a sound change perspective (음 변화 관점에서 바라본 한국어 어두 폐쇄음의 발화 및 지각)

  • Kim, Jin-Woo
    • Phonetics and Speech Sciences / v.13 no.3 / pp.39-51 / 2021
  • Based on spontaneous speech data collected in 2020, this study examined the production and perception of Korean lenis, aspirated, and fortis stops. Unlike the controlled experiments of previous studies, lenis and aspirated stops of males in their 30s were not distinguished by voice onset time (VOT) in spontaneous speech. Perceptual experiments were conducted on young females, the leaders of language change. F0 was found to serve as the primary cue for the perception of lenis stops, and then VOT distinguished the aspirated and fortis stops. The fact that the sounds were always perceived as lenis stops when F0 was low, irrespective of whether VOT was short or long, showed that F0 plays an absolute role in the perception of lenis stops. However, in some cases the aspirated and lenis stops were distinguished only by VOT, which does not happen in production. In terms of sound change, disagreement between production and perception systems occurs when sound change is in progress. In particular, when production change precedes perception change, it indicates that the sound change is in its latter stages. Young females still maintain the previous system in perception because the distinction of lenis and aspirated stops by VOT was valid in their parents' generation. In other words, VOT is still used for perception to communicate with other groups.
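The two-step cue pattern this abstract reports for young listeners, F0 first deciding lenis regardless of VOT, and only then VOT separating aspirated from fortis, amounts to a small decision rule. The cutoff values below are invented for illustration; the study does not give thresholds here.

```python
# Toy model of the reported perception pattern, assuming invented
# F0 and VOT cutoffs; only the decision order follows the abstract.
def perceive_stop(f0_hz, vot_ms, f0_cutoff=180, vot_cutoff=50):
    """Classify a Korean word-initial stop from F0 and VOT cues."""
    if f0_hz < f0_cutoff:      # low F0 -> lenis, whatever the VOT
        return "lenis"
    if vot_ms > vot_cutoff:    # high F0 + long VOT -> aspirated
        return "aspirated"
    return "fortis"            # high F0 + short VOT -> fortis

print(perceive_stop(150, 80))  # low F0, long VOT -> lenis
print(perceive_stop(220, 80))  # high F0, long VOT -> aspirated
print(perceive_stop(220, 15))  # high F0, short VOT -> fortis
```

The first case encodes the abstract's key finding: when F0 is low, the percept is lenis even with a long VOT, which is why F0 is described as playing an absolute role.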