• Title/Abstract/Keyword: continuous speech

Search results: 314 items (processing time: 0.047 s)

영어의 강음절(강세 음절)과 한국어 화자의 단어 분절 (Strong (stressed) syllables in English and lexical segmentation by Koreans)

  • 김선미;남기춘
    • 말소리와 음성과학 / Vol. 3, No. 1 / pp.3-14 / 2011
  • It has been posited that in English, native listeners use the Metrical Segmentation Strategy (MSS) to segment continuous speech. Strong syllables tend to be perceived as potential word onsets by native English speakers, owing to the high proportion of word-initial strong syllables in the English vocabulary. This study investigates whether Koreans employ the same strategy when segmenting English speech input. Word-spotting experiments were conducted using vowel-initial and consonant-initial bisyllabic targets embedded in nonsense trisyllables in Experiments 1 and 2, respectively. The effect of the strong syllable was significant in the reaction-time (RT) analysis but not in the error analysis. In both experiments, Korean listeners detected words more slowly when the word-initial syllable was strong (stressed) than when it was weak (unstressed). However, the error analysis showed no effect of initial stress in Experiment 1 or in the item (F2) analysis of Experiment 2; only the subject (F1) analysis in Experiment 2 showed that participants made more errors when a word began with a strong syllable. These findings suggest that Korean listeners do not use the Metrical Segmentation Strategy when segmenting English speech: they do not treat strong syllables as word beginnings, but rather have difficulty recognizing words that begin with a strong syllable. These results are discussed in terms of the intonational properties of Korean prosodic phrases, which have been found to serve as lexical segmentation cues in Korean.


FAES : 감성 표현 기법을 이용한 얼굴 애니메이션 구현 (On the Implementation of a Facial Animation Using the Emotional Expression Techniques)

  • 김상길;민용식
    • 한국콘텐츠학회논문지 / Vol. 5, No. 2 / pp.147-155 / 2005
  • The main objective of this paper is to build FAES (a Facial Animation with Emotion and Speech), a system that produces more accurate and natural 3D models of facial expressions combining speech and emotion for four emotional categories: neutral, fear, dislike, and surprise. To this end, training data are first extracted; then, for the emotion-driven facial animation, an SVM (Support Vector Machine) [11] is used to build a database of facial expressions accompanying the four emotions. Finally, a system is developed in which emotion and speech are expressed in the facial animation. The facial-expression data in this paper were collected from young Korean adults. Compared with previously proposed methods, the system not only broadens the range of emotions covered but also improves emotion-recognition accuracy by about 7% and continuous speech recognition of vocabulary by about 5%.

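The SVM classification step described in the abstract above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 10-dimensional feature vectors, the cluster locations, and the data are all invented stand-ins for real facial-expression features.

```python
# Sketch: classifying facial-expression feature vectors into the four
# emotion categories (neutral, fear, dislike, surprise) with an SVM.
# Features and data are synthetic illustrations, not the paper's data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
EMOTIONS = ["neutral", "fear", "dislike", "surprise"]

# Synthetic training data: 40 samples per emotion, 10-dim feature
# vectors (imagine normalized facial-landmark displacements), with one
# well-separated cluster per class.
X = np.vstack([rng.normal(loc=i, scale=0.3, size=(40, 10)) for i in range(4)])
y = np.repeat(np.arange(4), 40)

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

# Classify a new feature vector drawn near the "fear" cluster (class 1).
sample = rng.normal(loc=1, scale=0.3, size=(1, 10))
predicted = EMOTIONS[clf.predict(sample)[0]]
```

A real pipeline would replace the synthetic matrix `X` with features tracked from video frames of the recorded speakers.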

The f0 distribution of Korean speakers in a spontaneous speech corpus

  • Yang, Byunggon
    • 말소리와 음성과학 / Vol. 13, No. 3 / pp.31-37 / 2021
  • The fundamental frequency, or f0, is an important acoustic measure of prosody in human speech. The current study examined the f0 distribution in a corpus of spontaneous speech in order to provide normative data for Korean speakers. The corpus consists of 40 speakers talking freely about their daily activities and personal views. Praat scripts were created to collect f0 values, and the majority of obvious errors were corrected manually by inspecting and listening to the f0 contour on a narrow-band spectrogram. Statistical analyses of the f0 distribution were conducted in R. The results showed that the f0 values of all the Korean speakers were right-skewed, with a sharply peaked distribution. The speakers produced spontaneous speech within a frequency range of 274 Hz (from 65 Hz to 339 Hz), excluding statistical outliers. The mode of the pooled f0 data was 102 Hz. The female f0 range, with a bimodal distribution, appeared wider than that of the male group. Regression analyses of f0 values on age yielded negligible R-squared values. As the mode of an individual speaker could be predicted from the median, either the median or the mode could serve as a good reference for an individual's f0 range. Finally, an analysis of the continuous f0 points of intonational phrases revealed that the initial and final segments of the phrases yielded several f0 measurement errors. From these results, we conclude that examining a spontaneous speech corpus can provide linguists with useful measures for generalizing the acoustic properties of f0 variability in a language, for individuals or groups. Further studies on statistical measures for securing reliable f0 values of individual speakers would be desirable.
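The distributional summary described above (outlier exclusion, range, median, mode, skewness) can be sketched numerically. The f0 sample below is simulated with a right-skewed gamma distribution; the study itself used Praat-extracted values from 40 speakers and analyzed them in R.

```python
# Sketch of the f0 distribution analysis: exclude statistical outliers,
# then summarize the remaining values. Data are simulated, not measured.
import numpy as np

rng = np.random.default_rng(1)
# Simulated right-skewed f0 sample in Hz
f0 = rng.gamma(shape=9.0, scale=12.0, size=5000) + 50.0

# Tukey fences: values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] are outliers
q1, q3 = np.percentile(f0, [25, 75])
iqr = q3 - q1
kept = f0[(f0 >= q1 - 1.5 * iqr) & (f0 <= q3 + 1.5 * iqr)]

f0_range = kept.max() - kept.min()           # range excluding outliers
median = np.median(kept)
# "Mode" of a continuous variable: peak bin of a 1-Hz histogram
counts, edges = np.histogram(kept, bins=np.arange(50, 500))
mode = edges[np.argmax(counts)]
# Sample skewness (positive = right-skewed, as the study found)
skew = np.mean(((kept - kept.mean()) / kept.std()) ** 3)
```

The histogram-peak definition of the mode is one common choice for continuous data; kernel density estimation is another.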

영어 복합명사와 명사구의 강세충돌과 강세전이 (Stress Clash and Stress Shift in English Noun Phrases and Compounds)

  • 이주경;강선미
    • 음성과학 / Vol. 11, No. 3 / pp.95-109 / 2004
  • Metrical Phonology has asserted that stress shift does not occur in English compounds because it would violate the Continuous Column Constraint. Noun phrases, on the other hand, freely allow stress shift, whereby the clashing stress moves leftward to a preceding heavy syllable. This paper hypothesizes that stress shifts in noun phrases but not in compounds, and compares their pitch-accent patterns in a phonetic experiment. More specifically, we examined two-word combinations, noun phrases and compounds, whose boundaries involve stress clash, and ensured that the first word contained a heavy syllable ahead of the stress to guarantee a landing site for the shifting stress. Stress shift was determined by where the pitch accent of the first word was aligned. Results show that stress shift occurs in approximately 47% of the noun phrases and 59% of the compounds; the hypothesis is therefore not borne out. This suggests that the surface representations derived by phonological rules may not be implemented in real utterances, and that phonetic forms may instead be determined by phonetic constraints operating directly on human speech.

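The alignment criterion used above (a token counts as shifted when the pitch-accent peak falls in a syllable earlier than the lexically stressed one) can be sketched as a small decision function. The syllable boundaries and peak times below are hypothetical illustration values, not the paper's measurements.

```python
# Sketch of the stress-shift decision: a token is "shifted" if the f0
# peak of the first word falls in a syllable earlier than the lexically
# stressed syllable. Times and boundaries are hypothetical.

def stress_shifted(peak_time, syllable_bounds, stressed_index):
    """Return True if the f0 peak falls in a syllable earlier than the
    lexically stressed one (i.e., the accent has shifted leftward).

    syllable_bounds: list of (start, end) times in seconds, one pair
    per syllable of the first word.
    stressed_index: index of the lexically stressed syllable.
    """
    for i, (start, end) in enumerate(syllable_bounds):
        if start <= peak_time < end:
            return i < stressed_index
    return False  # peak outside the word: no shift counted

# "thirTEEN men": lexical stress on syllable 1. A peak at 0.10 s sits
# in syllable 0, so this token counts as shifted; one at 0.30 s does not.
bounds = [(0.0, 0.18), (0.18, 0.40)]
shifted = stress_shifted(0.10, bounds, stressed_index=1)
not_shifted = stress_shifted(0.30, bounds, stressed_index=1)
```

Running this classifier over all tokens and dividing by the token count gives the percentages reported (47% for phrases, 59% for compounds).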

단어사전과 다층 퍼셉트론을 이용한 고립단어 인식 알고리듬 (Isolated Word Recognition Algorithm Using Lexicon and Multi-layer Perceptron)

  • 이기희;임인칠
    • 전자공학회논문지B / Vol. 32B, No. 8 / pp.1110-1118 / 1995
  • Over the past few years, a wide variety of techniques have been developed for reliable recognition of speech signals. The multi-layer perceptron (MLP), which has excellent pattern-recognition properties, is one of the most versatile networks in the area of speech recognition. This paper describes an automatic speech recognition system that uses both an MLP and a lexicon. In this system, recognition is performed by a network search algorithm that matches words in the lexicon to the MLP output scores. We also suggest a recognition algorithm that incorporates the durational information of each phone, whose performance is comparable to that of a conventional continuous HMM (CHMM). Performance of the system was evaluated on a 26-word vocabulary database from 9 speakers. The experimental results show that the proposed algorithm achieves an error rate of 7.3%, which is 5.3 percentage points lower than the CHMM's 12.6%.

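The lexicon search described above can be sketched as a dynamic-programming alignment of each word's phone sequence against per-frame MLP output scores, picking the best-scoring word. The phone set, lexicon, and score matrix below are toy illustrations of the idea (without the durational modeling the paper adds).

```python
# Sketch: match lexicon words to MLP frame scores with DP. Each word's
# score is the best monotonic alignment of its phone sequence to the
# per-frame log-probabilities. Phone set and lexicon are toy examples.
import numpy as np

PHONES = ["a", "b", "k"]
LEXICON = {"abba": ["a", "b", "b", "a"], "back": ["b", "a", "k"]}

def word_score(scores, phone_seq, phones=PHONES):
    """Best log-score of aligning phone_seq to the frame scores.
    scores: (n_frames, n_phones) array of log-probabilities."""
    idx = [phones.index(p) for p in phone_seq]
    n_frames = scores.shape[0]
    NEG = -1e9
    dp = np.full(len(idx), NEG)   # dp[j]: best score ending in phone j
    dp[0] = scores[0, idx[0]]
    for t in range(1, n_frames):
        new = np.full(len(idx), NEG)
        for j in range(len(idx)):
            stay = dp[j]                       # remain in phone j
            advance = dp[j - 1] if j > 0 else NEG  # move to next phone
            new[j] = max(stay, advance) + scores[t, idx[j]]
        dp = new
    return dp[-1]  # alignment must finish in the word's last phone

def recognize(scores):
    return max(LEXICON, key=lambda w: word_score(scores, LEXICON[w]))

# Three frames clearly spelling b-a-k should select "back".
frame_scores = np.log(np.array([
    [0.1, 0.8, 0.1],
    [0.8, 0.1, 0.1],
    [0.1, 0.1, 0.8],
]))
best = recognize(frame_scores)
```

The paper's duration modeling would additionally penalize alignments whose per-phone dwell times are implausible.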

포만트 정보의 동적 변화특성 조사에 관한 연구 (Investigation on Dynamic Behavior of Formant Information)

  • 조철우
    • 말소리와 음성과학 / Vol. 7, No. 2 / pp.157-162 / 2015
  • This study reports an effective way of displaying dynamic formant information in the F1-F2 space. Conventional F1-F2 displays (also called the vowel triangle or vowel quadrilateral) have been used to investigate the vowel characteristics of a speaker or a language based on statistics of F1 and F2 values computed by a spectral-envelope search method. Those methods deal mainly with the static information of the formants, not with changes in the formant values (i.e., dynamic information). A better way of investigating dynamic information in the formant values of a speech signal is therefore suggested, so that a more convenient and detailed investigation of dynamic changes can be carried out in the F1-F2 space. The suggested method visualizes static and dynamic information in an overlapped way, so that changes in the formant information can be observed easily. Finally, examples of the implemented display for several continuous vowels are shown to demonstrate the usefulness of the suggested method.
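The static-versus-dynamic distinction drawn above can be sketched numerically: a conventional display reduces a vowel to one mean (F1, F2) point, whereas the dynamic view keeps the trajectory and its frame-to-frame change vectors (which can then be drawn as arrows in the F1-F2 space). The trajectory below is an illustrative /ai/-like glide, not measured data.

```python
# Sketch: static vs. dynamic formant information for one vowel token.
# Trajectory values are illustrative, not measurements from the paper.
import numpy as np

# Formant track: rows are analysis frames, columns are (F1, F2) in Hz.
track = np.array([
    [750.0, 1200.0],   # open /a/-like onset
    [650.0, 1500.0],
    [500.0, 1800.0],
    [350.0, 2100.0],   # close front /i/-like offset
])

static_point = track.mean(axis=0)            # conventional static summary
deltas = np.diff(track, axis=0)              # dynamic info: per-frame change
total_movement = np.abs(deltas).sum(axis=0)  # how far F1 and F2 travelled
```

Plotting `track` as a connected arrow path over the scatter of `static_point`s for many tokens gives the kind of overlapped static/dynamic display the paper proposes.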

CHMM을 이용한 발매기 명령어의 음성인식에 관한 연구 (A Study on the Speech Recognition for Commands of Ticketing Machine using CHMM)

  • 김범승;김순협
    • 한국철도학회논문집 / Vol. 12, No. 2 / pp.285-290 / 2009
  • In this paper, a speech recognition system was implemented that recognizes ticketing-machine commands (314 station names) in real time using continuous HMMs (Continuous Hidden Markov Models). A 39-dimensional MFCC vector was used as the feature vector, and 895 tied-state triphone models were constructed to improve the recognition rate. In the system evaluation, the multi-speaker-dependent recognition rate was 99.24% and the multi-speaker-independent rate was 98.02%; in a real noisy environment, the multi-speaker-independent test showed a recognition rate of 93.91%.
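The 39-dimensional feature vector mentioned above is conventionally built from 13 base MFCCs per frame plus their delta and delta-delta coefficients. A minimal sketch of that extension, using the standard regression-window delta formula, follows; the base MFCC matrix here is random, standing in for a real front end.

```python
# Sketch: extend 13-dim MFCC frames to the 39-dim vector (13 static +
# 13 delta + 13 delta-delta) commonly used with CHMM systems.
import numpy as np

def deltas(feat, N=2):
    """Regression deltas over a +/-N frame window:
    d_t = sum_n n*(c_{t+n} - c_{t-n}) / (2 * sum_n n^2)."""
    T = feat.shape[0]
    padded = np.pad(feat, ((N, N), (0, 0)), mode="edge")
    denom = 2 * sum(n * n for n in range(1, N + 1))
    out = np.zeros_like(feat)
    for n in range(1, N + 1):
        out += n * (padded[N + n:N + n + T] - padded[N - n:N - n + T])
    return out / denom

rng = np.random.default_rng(2)
mfcc13 = rng.normal(size=(100, 13))     # stand-in for real MFCC frames
d1 = deltas(mfcc13)                     # delta coefficients
d2 = deltas(d1)                         # delta-delta coefficients
feat39 = np.hstack([mfcc13, d1, d2])    # the 39-dim feature vector
```

Edge padding at the sequence boundaries mirrors what common toolkits do for the first and last frames.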

구문 분석과 One-Stage DP를 이용한 연속 숫자음 인식에 관한 연구 (A study on the Recognition of Continuous Digits using Syntactic Analysis and One-Stage DP)

  • 안태옥
    • 한국음향학회지 / Vol. 14, No. 3 / pp.97-104 / 1995
  • This paper studies continuous digit recognition for implementing a voice-dialing system, and proposes a One-Stage DP recognition method that uses syntactic analysis. For the recognition experiments, DMS (Dynamic Multi-Section) models were first built using a section-segmentation algorithm, and the target continuous digit data were then recognized by the proposed One-Stage DP method with syntactic analysis. Twenty-one kinds of 7-digit continuous digit strings, each pronounced two or three times by eight male speakers, were used. Speaker-dependent and speaker-independent experiments were conducted in a laboratory environment using both the conventional One-Stage DP algorithm and the proposed syntax-based One-Stage DP algorithm. The results show that the proposed method achieves better recognition rates than the conventional one: about 91.7% in the speaker-dependent experiment and 89.7% in the speaker-independent experiment.

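The One-Stage DP idea named above can be sketched in miniature: every input frame is matched against every frame of every word template in a single DP pass, and a template's first frame may be entered from the best template ending at the previous input frame. The templates and input below are toy one-dimensional "feature" sequences; the paper additionally constrains the word-to-word transitions with syntactic analysis, which this sketch omits.

```python
# Sketch of One-Stage DP connected-word recognition over toy 1-D
# templates. Real systems use multi-dimensional frames and, in the
# paper, syntactic constraints on which word may follow which.
import numpy as np

TEMPLATES = {"one": [1.0, 1.0, 1.0], "two": [2.0, 2.0, 2.0]}

def one_stage_dp(frames):
    words = list(TEMPLATES)
    D = {w: np.array(TEMPLATES[w]) for w in words}
    INF = 1e18
    # cost[w][j]: accumulated cost ending in frame j of template w;
    # hist[w][j]: the completed-word history along that best path.
    cost = {w: np.full(len(D[w]), INF) for w in words}
    hist = {w: [[] for _ in D[w]] for w in words}
    for t, x in enumerate(frames):
        if t == 0:
            entry, entry_hist = 0.0, []
        else:  # best complete word at the previous frame allows re-entry
            best_w = min(words, key=lambda w: cost[w][-1])
            entry, entry_hist = cost[best_w][-1], hist[best_w][-1] + [best_w]
        new_cost = {w: np.full(len(D[w]), INF) for w in words}
        new_hist = {w: [[] for _ in D[w]] for w in words}
        for w in words:
            for j in range(len(D[w])):
                d = abs(x - D[w][j])  # local frame-template distance
                cands = [(cost[w][j], hist[w][j])]       # stay in frame j
                if j > 0:
                    cands.append((cost[w][j - 1], hist[w][j - 1]))  # advance
                else:
                    cands.append((entry, entry_hist))    # start new word
                prev, ph = min(cands, key=lambda c: c[0])
                new_cost[w][j] = prev + d
                new_hist[w][j] = ph
        cost, hist = new_cost, new_hist
    best_w = min(words, key=lambda w: cost[w][-1])
    return hist[best_w][-1] + [best_w]

# Input resembling "one" then "two" should decode to that word sequence.
digits = one_stage_dp([1.0] * 3 + [2.0] * 3)
```

A syntactic constraint would simply restrict which templates are eligible for the `entry` transition given the word history.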

음성인식에서 문맥의존 음향모델의 성능향상을 위한 유사음소단위에 관한 연구 (A Study on Phoneme Likely Units to Improve the Performance of Context-dependent Acoustic Models in Speech Recognition)

  • 임영춘;오세진;김광동;노덕규;송민규;정현열
    • 한국음향학회지 / Vol. 22, No. 5 / pp.388-402 / 2003
  • In this paper, we carried out word, 4-continuous-digit, continuous-speech, and task-independent word recognition experiments to verify the effectiveness of re-defined phoneme-like units (PLUs) for phonetic-decision-tree-based HM-Net (Hidden Markov Network) context-dependent (CD) acoustic modeling of Korean. In the 48-PLU set, the phonemes /ㅂ/, /ㄷ/, and /ㄱ/ are split into initial-sound, medial, and final-consonant variants, and the consonants /ㄹ/, /ㅈ/, and /ㅎ/ into initial-sound and final-consonant variants, according to their position in the syllable, word, and sentence. In this paper, therefore, we re-define a 39-PLU set by unifying each phoneme's separated initial, medial, and final variants of the 48-PLU set into one unit, in order to construct the CD acoustic models effectively. In the experiments with the re-defined 39 PLUs, for word recognition with context-independent (CI) acoustic models, the 48-PLU set showed an average of 7.06% higher recognition accuracy than the 39-PLU set. But in the speaker-independent word recognition experiments with the CD acoustic models, the 39-PLU set showed an average of 0.61% better recognition accuracy. In the 4-continuous-digit recognition experiments, which involve liaison phenomena, the 39-PLU set also showed an average of 6.55% higher recognition accuracy, and in the continuous speech recognition experiments it showed an average of 15.08% better accuracy than the 48-PLU set. Finally, although both sets showed lower accuracy in the task-independent word recognition experiments owing to unknown contextual factors, the 39-PLU set showed an average of 1.17% better recognition performance than the 48-PLU set. Through these experiments, we verified the effectiveness of the re-defined 39 PLUs, compared to the 48 PLUs, for constructing the CD acoustic models.
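The 48-to-39 unit merge described above can be sketched as a simple relabeling table: each position-dependent variant of a split phoneme maps back to a single unit. The romanized labels below are illustrative stand-ins for the actual Korean PLU labels; the arithmetic matches the paper's counts (merging three 3-way splits and three 2-way splits removes 6 + 3 = 9 units, 48 − 9 = 39).

```python
# Sketch: collapse position-dependent PLU variants into merged units.
# Label names are hypothetical stand-ins for the paper's Korean PLUs.
MERGE_MAP = {
    "b_init": "b", "b_med": "b", "b_fin": "b",   # /ㅂ/ variants
    "d_init": "d", "d_med": "d", "d_fin": "d",   # /ㄷ/ variants
    "g_init": "g", "g_med": "g", "g_fin": "g",   # /ㄱ/ variants
    "l_init": "l", "l_fin": "l",                 # /ㄹ/ variants
    "j_init": "j", "j_fin": "j",                 # /ㅈ/ variants
    "h_init": "h", "h_fin": "h",                 # /ㅎ/ variants
}

def remap(transcription):
    """Collapse a 48-PLU transcription to the merged 39-unit set;
    units not in the table pass through unchanged."""
    return [MERGE_MAP.get(u, u) for u in transcription]

merged = remap(["b_init", "a", "d_fin"])
```

Retraining the decision-tree-clustered CD models on the relabeled transcriptions is then what yields the accuracy differences reported above.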

Recurrent Neural Network with Backpropagation Through Time Learning Algorithm for Arabic Phoneme Recognition

  • Ismail, Saliza;Ahmad, Abdul Manan
    • 제어로봇시스템학회:학술대회논문집 / ICCAS 2004 / pp.1033-1036 / 2004
  • The study of speech recognition and understanding has been carried out for many years. In this paper, we propose a new type of recurrent neural network architecture for speech recognition, in which each output unit is connected to itself and is also fully connected to the other output units and all hidden units [1]. We also propose the learning algorithm for this architecture, Backpropagation Through Time (BPTT), which is well suited to it. The aim of the study was to discriminate the letters of the Arabic alphabet, from "alif" to "ya". The purpose of this research is to advance knowledge and understanding of the Arabic alphabet and words by using a recurrent neural network (RNN) with the BPTT learning algorithm. Four speakers (a mixture of male and female) were recorded in a quiet environment. Neural networks are well known as a technique with the ability to classify nonlinear problems. Today, much research has been done on applying neural networks to speech recognition [2], including for Arabic. The Arabic language offers a number of challenges for speech recognition [3]. Even though positive results have been obtained in continuing study, research on minimizing the error rate is still receiving a great deal of attention. This research uses a recurrent neural network, one of the neural network techniques, to discriminate the alphabet from "alif" to "ya".

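BPTT training of a recurrent network, as named above, can be sketched in a few dozen lines of NumPy. This minimal Elman-style version uses only hidden-layer recurrence on a toy memory task; the paper's architecture additionally connects each output unit to itself and to the other output units, which is omitted here.

```python
# Sketch: a tiny recurrent network trained with Backpropagation Through
# Time on a toy task (remember the first input bit until the last step).
# Sizes, task, and hyperparameters are illustrative choices.
import numpy as np

rng = np.random.default_rng(3)
H, I, O, T = 8, 2, 1, 5          # hidden, input, output sizes; seq length
Wxh = rng.normal(0, 0.5, (H, I))  # input-to-hidden weights
Whh = rng.normal(0, 0.5, (H, H))  # hidden recurrence
Who = rng.normal(0, 0.5, (O, H))  # hidden-to-output weights

def run(xs, target, lr=0.0):
    """Forward pass; if lr > 0, also do one BPTT update. Returns loss."""
    hs = [np.zeros(H)]
    for x in xs:                              # unroll forward in time
        hs.append(np.tanh(Wxh @ x + Whh @ hs[-1]))
    y = Who @ hs[-1]                          # output at the last step
    loss = 0.5 * ((y - target) ** 2).item()
    if lr > 0:                                # backpropagate through time
        dy = y - target
        dWho = np.outer(dy, hs[-1])
        dh = Who.T @ dy
        dWxh = np.zeros_like(Wxh)
        dWhh = np.zeros_like(Whh)
        for t in range(T, 0, -1):             # walk back through time
            dz = dh * (1 - hs[t] ** 2)        # derivative of tanh
            dWxh += np.outer(dz, xs[t - 1])
            dWhh += np.outer(dz, hs[t - 1])
            dh = Whh.T @ dz                   # pass gradient to t-1
        for W, dW in ((Wxh, dWxh), (Whh, dWhh), (Who, dWho)):
            W -= lr * dW                      # in-place gradient step
    return loss

# Sequence whose first frame carries the bit to remember.
xs = [np.array([1.0, 0.0])] + [np.array([0.0, 0.0])] * (T - 1)
before = run(xs, target=1.0)
for _ in range(200):
    run(xs, target=1.0, lr=0.05)
after = run(xs, target=1.0)
```

A phoneme-classification setup like the paper's would use per-frame acoustic features as `xs` and a softmax output per Arabic letter instead of this scalar regression target.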