• Title/Summary/Keyword: Speech Spectrogram


The Computation Reduction Algorithm Independent of the Language for CELP Vocoders (각국 언어 특성에 독립적인 CELP 계열 보코더에서의 계산량 단축 알고리즘)

  • Min, So-Yeon;Bae, Myung-Jin
    • Proceedings of the IEEK Conference
    • /
    • 2003.07e
    • /
    • pp.2451-2454
    • /
    • 2003
  • In this paper, we propose methods for reducing the computation of the LSP (line spectrum pairs) transformation that is widely used in CELP vocoders. To decrease the computation time of the real root method, four algorithms are proposed. The first scheme reduces the LSP transformation time by searching on a mel scale. The second controls the search order according to the distribution characteristics of the LSP parameters. The third exploits voice characteristics, and the fourth controls both the search interval and the search order according to the distribution characteristics of the LSP parameters. Evaluated in terms of search time, computational load, transformed LSP parameters, SNR, MOS test, synthesized-speech waveform, and spectrogram analysis, the four schemes reduced search time by about 37.5%, 46.21%, 46.3%, and 51.29% on average and computational load by about 44.76%, 49.44%, 47.03%, and 57.40%, respectively, while the transformed LSP parameters remained identical to those of the real root method.

  • PDF
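
A minimal Python sketch of the real root method that this paper accelerates: the deflated symmetric/antisymmetric LSP polynomials are evaluated on a frequency grid, and sign changes are refined by bisection. The grid size, bisection depth, and function names here are illustrative assumptions; the paper's actual contribution (mel-scale spacing and distribution-ordered search) would replace the uniform grid below.

    import numpy as np

    def lsp_from_lpc(a, grid_size=512):
        # a: LPC coefficients [1, a1, ..., ap], p even; returns the p LSP
        # frequencies in radians, found by the classic real root search.
        a = np.asarray(a, dtype=float)
        p = len(a) - 1
        m = p // 2
        ext = np.append(a, 0.0)
        P = ext + ext[::-1]                    # symmetric polynomial, order p+1
        Q = ext - ext[::-1]                    # antisymmetric polynomial, order p+1
        # Deflate the trivial roots: P(z) by (1 + z^-1), Q(z) by (1 - z^-1)
        pd = np.zeros(p + 1); qd = np.zeros(p + 1)
        pd[0] = P[0]; qd[0] = Q[0]
        for i in range(1, p + 1):
            pd[i] = P[i] - pd[i - 1]
            qd[i] = Q[i] + qd[i - 1]

        def on_circle(c, w):
            # Real-valued evaluation of a symmetric polynomial on e^{jw};
            # it shares its zeros with the polynomial itself.
            return 0.5 * c[m] + sum(c[k] * np.cos((m - k) * w) for k in range(m))

        def real_roots(c):
            ws = np.linspace(0.0, np.pi, grid_size)   # uniform grid; the paper
            vals = [on_circle(c, w) for w in ws]      # reorders/respaces this
            roots = []
            for i in range(grid_size - 1):
                if vals[i] * vals[i + 1] < 0:         # sign change brackets a root
                    lo, hi = ws[i], ws[i + 1]
                    for _ in range(30):               # bisection refinement
                        mid = 0.5 * (lo + hi)
                        if on_circle(c, lo) * on_circle(c, mid) <= 0:
                            hi = mid
                        else:
                            lo = mid
                    roots.append(0.5 * (lo + hi))
            return roots

        return np.sort(np.array(real_roots(pd) + real_roots(qd)))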

An Acoustic Study of Prosodic Features of Korean Spoken Language and Korean Folk Song (Minyo) (언어와 민요의 운율 자질에 관한 음향음성학적 연구)

  • Koo, Hee-San
    • Speech Sciences
    • /
    • v.10 no.3
    • /
    • pp.133-144
    • /
    • 2003
  • The purpose of this acoustic experimental study was to investigate the interrelation between the prosodic features of Korean spoken language and those of Korean folk songs. For the spoken-language analysis, the words of Changbutaryoung were spoken by three female graduate students, and for the musical analysis the song was sung by three Kyunggi Minyo singers. Pitch contours were analyzed from sound spectrograms made with Pitch Works. Results showed that the special musical voices (breaking, tinkling, vibrating, etc.) and tunes (rising, falling, level, etc.) of the folk song occurred at the same positions as the accents of the spoken language. It appeared that, even though the patterns of the pitch contours differed from each other, there was a positive interrelation between the prosodic features of Korean spoken language and those of Korean folk songs.

  • PDF

A Study Using Acoustic Measurement and Perceptual Judgment to identify Prosodic Characteristics of English as Spoken by Koreans (음향 측정과 지각 판단에 의한 한국인 영어의 운율 연구)

  • Koo, Hee-San
    • Speech Sciences
    • /
    • v.2
    • /
    • pp.95-108
    • /
    • 1997
  • The purpose of this experimental study was to investigate the prosodic characteristics of English as spoken by Koreans. The test materials were four English words, a sentence, and a paragraph. Six female Korean speakers and five native English speakers participated in acoustic and perceptual experiments. The pitch and duration of word syllables were measured from signals and spectrograms made with the Signalize 3.04 software program on a Power Mac 7200. In the perceptual experiment, accent position, intonation patterns, rhythm patterns, and phrasing were evaluated by the five native English speakers. Preliminary results from this limited study show that the prosodic characteristics of Koreans include the following: (1) the pitch on the first part of a word or sentence is lower than that of English speakers, while the pitch on the last part shows the opposite pattern; (2) word prosody is quite similar to that of an English speaker, but sentence prosody is quite different; (3) the weakest point of sentence prosody spoken by Koreans is the rhythmic pattern.

  • PDF

Sample selection approach using moving window for acoustic analysis of pathological sustained vowels according to signal typing

  • Lee, Ji-Yeoun
    • Phonetics and Speech Sciences
    • /
    • v.3 no.3
    • /
    • pp.99-108
    • /
    • 2011
  • Perturbation parameters such as jitter, shimmer, and signal-to-noise ratio (SNR) are usually estimated over a subjectively chosen segment or over the whole of a given pathological voice signal, although there are many other possible regions in which the signal could be analyzed. In this paper, pathological voice signals were classified as type 1, 2, 3, or 4 according to the narrow-band spectrogram, and the differences between the perturbation values extracted from the subjectively chosen portion and from the entire signal tended to grow from type 1 to type 4. Therefore, a sample selection method based on a moving window is proposed so that type 2 and 3 signals, as well as type 1 signals, can be analyzed. Although type 3 signals cannot normally be handled by perturbation analysis, they were analyzed here by using the moving window to select only samples in which the error count was less than 10. At present, there is no method capable of analyzing type 4 signals; future research will endeavor to determine the best way to evaluate such voices.

  • PDF
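
A minimal sketch of the moving-window selection idea described above, assuming an autocorrelation pitch tracker and a simple period-jump error criterion; the paper's exact error definition and window settings are not given here, so these choices are illustrative:

    import numpy as np

    def period_track(seg, fs, f0min=70, f0max=400, frame_sec=0.04, hop_sec=0.01):
        # Crude cycle-length track via short-time autocorrelation peaks.
        frame, hop = int(frame_sec * fs), int(hop_sec * fs)
        lo, hi = int(fs / f0max), int(fs / f0min)
        periods = []
        for s in range(0, len(seg) - frame + 1, hop):
            fr = seg[s:s + frame] - np.mean(seg[s:s + frame])
            ac = np.correlate(fr, fr, mode='full')[frame - 1:]
            periods.append((lo + np.argmax(ac[lo:hi])) / fs)
        return np.array(periods)

    def select_windows(x, fs, win_sec=1.0, hop_sec=0.1, max_errors=10):
        # Slide a window over the sustained vowel; keep windows whose period
        # track is stable enough for perturbation analysis (error count < 10,
        # echoing the paper's threshold; the error definition is assumed).
        win, hop = int(win_sec * fs), int(hop_sec * fs)
        kept = []
        for start in range(0, len(x) - win + 1, hop):
            T = period_track(x[start:start + win], fs)
            jumps = np.abs(np.diff(T)) / T[:-1]
            errors = int(np.sum(jumps > 0.2))     # >20% period jump = one "error"
            if errors < max_errors:
                jitter = np.mean(np.abs(np.diff(T))) / np.mean(T)
                kept.append({'t_start': start / fs, 'jitter_local': jitter})
        return kept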

Speech emotion recognition through time series classification (시계열 데이터 분류를 통한 음성 감정 인식)

  • Kim, Gi-duk;Kim, Mi-sook;Lee, Hack-man
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2021.07a
    • /
    • pp.11-13
    • /
    • 2021
  • In this paper, we propose speech emotion recognition through time-series classification. Features are extracted from speech files using the mel-spectrogram and converted into multivariate time-series data, which are then used to train a deep learning model that combines Conv1D, GRU, and Transformer layers. Applying this model to the speech emotion recognition datasets TESS, SAVEE, RAVDESS, and EmoDB produced higher classification accuracy than existing models on each dataset: 99.60%, 99.32%, 97.28%, and 99.86%, respectively.

  • PDF
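
A rough Keras sketch of this kind of hybrid, under assumed layer sizes (the abstract does not give the authors' exact architecture): a mel-spectrogram is treated as a multivariate time series and passed through Conv1D, GRU, and a Transformer-style self-attention block. The number of classes depends on the dataset (e.g., 7 for TESS or EmoDB).

    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_model(n_frames=128, n_mels=40, n_classes=7):
        # Input: a (frames x mel-bands) spectrogram, e.g. from
        # librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr)).T
        inp = layers.Input(shape=(n_frames, n_mels))
        x = layers.Conv1D(64, 5, padding='same', activation='relu')(inp)
        x = layers.MaxPooling1D(2)(x)
        x = layers.GRU(64, return_sequences=True)(x)
        # Transformer-style block: self-attention plus residual connections
        att = layers.MultiHeadAttention(num_heads=4, key_dim=16)(x, x)
        x = layers.LayerNormalization()(x + att)
        ff = layers.Dense(64, activation='relu')(x)
        x = layers.LayerNormalization()(x + ff)
        x = layers.GlobalAveragePooling1D()(x)
        out = layers.Dense(n_classes, activation='softmax')(x)
        model = models.Model(inp, out)
        model.compile(optimizer='adam',
                      loss='sparse_categorical_crossentropy',
                      metrics=['accuracy'])
        return model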

The f0 distribution of Korean speakers in a spontaneous speech corpus

  • Yang, Byunggon
    • Phonetics and Speech Sciences
    • /
    • v.13 no.3
    • /
    • pp.31-37
    • /
    • 2021
  • The fundamental frequency, or f0, is an important acoustic measure of prosody in human speech. The current study examined the f0 distribution in a corpus of spontaneous speech in order to provide normative data for Korean speakers. The corpus consists of 40 speakers talking freely about their daily activities and their personal views. Praat scripts were created to collect the f0 values, and the majority of obvious errors were corrected manually by inspecting and listening to the f0 contour on a narrow-band spectrogram. Statistical analyses of the f0 distribution were conducted in R. The results showed that the f0 values of all the Korean speakers were right-skewed, with a sharply peaked distribution. The speakers produced spontaneous speech within a frequency range of 274 Hz (from 65 Hz to 339 Hz), excluding statistical outliers. The mode of the pooled f0 data was 102 Hz. The female f0 range, with a bimodal distribution, appeared wider than that of the male group. Regression analyses of f0 values on age yielded negligible R-squared values. As the mode of an individual speaker could be predicted from the median, either the median or the mode could serve as a good reference for an individual's f0 range. Finally, an analysis of the continuous f0 points of intonational phrases revealed that the initial and final segments of the phrases produced several f0 measurement errors. From these results, we conclude that examining a spontaneous speech corpus can provide linguists with useful measures for generalizing the acoustic properties of f0 variability in a language, for individuals or for groups. Further studies on the use of statistical measures to secure reliable f0 values for individual speakers would be desirable.
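
The distributional measures reported here (right skew, peakedness, mode versus median, outlier-trimmed range) are straightforward to reproduce once an f0 track has been extracted. A small Python sketch, assuming a vector of f0 values in Hz with unvoiced frames marked as zero or NaN:

    import numpy as np
    from scipy import stats

    def f0_summary(f0):
        # Summarize an f0 track the way distribution studies report it:
        # outlier-trimmed range, median, histogram mode, and shape measures.
        f0 = np.asarray(f0, dtype=float)
        f0 = f0[~np.isnan(f0) & (f0 > 0)]          # drop unvoiced/NaN frames
        q1, q3 = np.percentile(f0, [25, 75])
        iqr = q3 - q1
        trimmed = f0[(f0 >= q1 - 1.5 * iqr) & (f0 <= q3 + 1.5 * iqr)]
        counts, edges = np.histogram(trimmed, bins=50)
        mode_hz = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
        return {
            'range_hz': (trimmed.min(), trimmed.max()),
            'median_hz': float(np.median(trimmed)),
            'mode_hz': mode_hz,
            'skewness': stats.skew(trimmed),       # positive => right-skewed
            'kurtosis': stats.kurtosis(trimmed),   # positive => sharply peaked
        }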

Intonation Training System (Visual Analysis Tool) and the application of French Intonation for Korean Learners (컴퓨터를 이용한 억양 교육 프로그램 개발 : 프랑스어 억양 교육을 중심으로)

  • Yu, Chang-Kyu;Son, Mi-Ra;Kim, Hyun-Gi
    • Speech Sciences
    • /
    • v.5 no.1
    • /
    • pp.49-62
    • /
    • 1999
  • This study concerns the educational program Visual Analysis Tool (VAT), developed for teaching foreign intonation on a personal computer. The VAT runs on an IBM-PC 386 compatible or higher and displays the spectrogram, waveform, intensity, and pitch contour. The system can freely zoom in and out on the waveform and document measured values. In this study, the intensity and pitch contour information were used. Twelve French sentences were recorded from a French conversation tape, and three Koreans participated. They spoke the twelve sentences repeatedly and tried to reproduce the native pitch contour by visually matching their own pitch contour to the native speaker's. The sentences were recorded again once the participants had become familiar with the intonation, intensity, and pauses. The pitch contours (rising or falling), pitch values, energy, total sentence durations, and rhythmic-group boundaries of the native speaker and the participants were compared before and after training. The results were as follows: 1) In declarative sentences, the native speaker's pitch contour generally falls at the end of the sentence, but the participants' pitch contours were flat before training. 2) In interrogatives, the native speaker's pitch contour rose at the end of the sentence, with the exception of wh-questions (qu'est-ce que), and the pitch value varied a great deal; in interrogative 'S + V' sentences, the pitch contour rose higher than in the other sentences and varied considerably. 3) In exclamatory sentences, the pitch contour looked like the shape of a mountain, but the participants could not make it fall, either before or after training.

  • PDF
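
The feedback loop described above, matching a learner's pitch contour to a native model by eye, can also be scored numerically. A small sketch, assuming two f0 tracks in Hz; the semitone normalization and the RMSE/correlation scores are illustrative choices, not part of the VAT system:

    import numpy as np

    def contour_distance(f0_native, f0_learner, n_points=100):
        # Compare two pitch contours on a normalized time axis: resample
        # both to n_points, convert to semitones relative to each speaker's
        # median to remove register differences, then score the match.
        def prep(f0):
            f0 = np.asarray(f0, dtype=float)
            f0 = f0[f0 > 0]                              # voiced frames only
            t = np.linspace(0, 1, len(f0))
            resampled = np.interp(np.linspace(0, 1, n_points), t, f0)
            return 12 * np.log2(resampled / np.median(resampled))
        a, b = prep(f0_native), prep(f0_learner)
        rmse = np.sqrt(np.mean((a - b) ** 2))            # semitone deviation
        corr = np.corrcoef(a, b)[0, 1]                   # shape similarity
        return rmse, corr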

Characteristics of Vowel Formants, Voice Intensity, and Fundamental Frequency of Female with Amyotrophic Lateral Sclerosis using Spectrograms (스펙트로그램을 이용한 근위축성측삭경화증 여성 화자의 모음 포먼트, 음성강도, 기본주파수의 변화)

  • Byeon, Haewon
    • Journal of the Korea Convergence Society
    • /
    • v.10 no.9
    • /
    • pp.193-198
    • /
    • 2019
  • This study analyzed the changes in vowel formants, voice intensity, and fundamental frequency over 11 months, using acoustic spectrogram analysis, in a woman diagnosed with amyotrophic lateral sclerosis (ALS). The test words were the vowels /a, i, u/ and the diphthong words /h + ja + da/, /h + wi + da/, and /h + ɰi + da/. Speech data were collected through a word-reading task presented on a monitor using the 'Alvin' program, with the recording environment set to a Nyquist frequency of 5,500 Hz and a sampling rate of 11,000 Hz. The recordings were analyzed with spectrograms for vowel formants, voice intensity, and fundamental frequency. The analysis showed that the fundamental frequency and intensity decreased as the ALS progressed, and that the formant slopes of the diphthongs decreased more than the formants of the monophthongs changed. These results suggest that the vowel distortion in ALS caused by disease progression is due to reduced mobility of the tongue and jaw.
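
Measurements of this kind (formants, intensity, and f0 at points in a vowel) can be scripted with Praat's Python interface, parselmouth. A minimal sketch, assuming a WAV file path and a measurement time; the 5,500 Hz formant ceiling follows the recording setup described above:

    import parselmouth
    from parselmouth.praat import call

    def measure_vowel(path, t_mid):
        # Extract F1/F2, f0, and intensity at one time point of a vowel.
        snd = parselmouth.Sound(path)
        formant = snd.to_formant_burg(maximum_formant=5500.0)
        pitch = snd.to_pitch()
        intensity = snd.to_intensity()
        return {
            'F1_Hz': formant.get_value_at_time(1, t_mid),
            'F2_Hz': formant.get_value_at_time(2, t_mid),
            'f0_Hz': pitch.get_value_at_time(t_mid),
            'dB': call(intensity, "Get value at time", t_mid, "Cubic"),
        }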

Differentiation of Adductor-Type Spasmodic Dysphonia from Muscle Tension Dysphonia Using Spectrogram (스펙트로그램을 이용한 내전형 연축성 발성 장애와 근긴장성 발성 장애의 감별)

  • Noh, Seung Ho;Kim, So Yean;Cho, Jae Kyung;Lee, Sang Hyuk;Jin, Sung Min
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics
    • /
    • v.28 no.2
    • /
    • pp.100-105
    • /
    • 2017
  • Background and Objectives : Adductor-type spasmodic dysphonia (ADSD) is a neurogenic disorder and a focal laryngeal dystonia, whereas muscle tension dysphonia (MTD) is a functional voice disorder. Both ADSD and MTD may be associated with excessive supraglottic contraction and compensation, resulting in a strained voice quality with spastic voice breaks. The aim of this study was to determine the utility of spectrogram analysis in differentiating ADSD from MTD. Materials and Methods : From 2015 through 2017, 17 patients with ADSD and 20 with MTD who underwent acoustic recording and phonatory function studies were enrolled. Jitter (frequency perturbation) and shimmer (amplitude perturbation) were obtained using the MDVP (Multi-Dimensional Voice Program), and the GRBAS scale was used for perceptual evaluation. Two speech therapists, blinded to the diagnosis, rated wide-band (11,250 Hz) spectrograms on 4-point scales (0-3) for four spectral findings: abrupt voice breaks, irregular wide-spaced vertical striations, well-defined formants, and high-frequency spectral noise. Results : Jitter, shimmer, and GRBAS did not differ significantly between the two groups (p>0.05). Abrupt voice breaks and irregular wide-spaced vertical striations were rated significantly higher in ADSD than in MTD (p<0.01), while high-frequency spectral noise was rated significantly higher in MTD than in ADSD (p<0.01). Well-defined formants did not differ between the two groups. Conclusion : Wide-band spectrograms provide visual-perceptual information that can differentiate ADSD from MTD. Spectrogram analysis is a useful diagnostic tool for this differentiation where perceptual analysis and clinical evaluation alone are insufficient.

  • PDF
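
The visual cues rated in this study come from wide-band analysis, where a short window trades frequency resolution for time resolution so that voice breaks and vertical striations stand out. A brief sketch using SciPy; the 300 Hz analysis bandwidth and Gaussian window are common choices, not the study's documented settings:

    import numpy as np
    from scipy import signal

    def wideband_spectrogram(x, fs, bandwidth_hz=300):
        # Window length ~ 1/bandwidth: about 3 ms at 300 Hz, short enough
        # to resolve individual glottal pulses as vertical striations.
        win_len = max(16, int(fs / bandwidth_hz))
        f, t, sxx = signal.spectrogram(
            x, fs,
            window=('gaussian', win_len / 6),   # std of the Gaussian window
            nperseg=win_len,
            noverlap=win_len - max(1, win_len // 8))
        return f, t, 10 * np.log10(sxx + 1e-12)  # power in dB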

Multi-Emotion Recognition Model with Text and Speech Ensemble (텍스트와 음성의 앙상블을 통한 다중 감정인식 모델)

  • Yi, Moung Ho;Lim, Myoung Jin;Shin, Ju Hyun
    • Smart Media Journal
    • /
    • v.11 no.8
    • /
    • pp.65-72
    • /
    • 2022
  • Due to COVID-19, counseling has shifted from face-to-face to non-face-to-face formats, and the importance of non-face-to-face counseling is therefore increasing. Its advantage is that clients can be counseled online anytime, anywhere, safe from COVID-19; however, it is harder to understand a client's state of mind because non-verbal expressions are largely unavailable. Accurately analyzing text and voice to recognize emotions is therefore important in non-face-to-face counseling. In this paper, text data are vectorized with FastText after decomposing Korean syllables into jamo, and voice data are vectorized by extracting log mel-spectrogram and MFCC features. We propose a multi-emotion recognition model that applies an LSTM model to the vectorized data to recognize five emotions, with performance measured by RMSE. In the experiments, the proposed model achieved an RMSE of 0.2174, the lowest error compared with the models using text or voice data alone.
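
A sketch of the speech side of such an ensemble, under assumed feature dimensions: log mel-spectrogram and MFCC frames are stacked and fed to an LSTM whose five sigmoid outputs are trained against per-emotion intensity labels, consistent with the RMSE evaluation. The text branch (jamo decomposition plus FastText) and the fusion step are omitted here.

    import numpy as np
    import librosa
    from tensorflow.keras import layers, models

    def speech_features(path, sr=16000, n_mels=40, n_mfcc=20):
        # Log mel-spectrogram + MFCC, stacked frame-wise; pad or truncate
        # the result to a fixed number of frames before training.
        y, sr = librosa.load(path, sr=sr)
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
        logmel = librosa.power_to_db(mel)                    # (n_mels, frames)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
        return np.concatenate([logmel, mfcc], axis=0).T      # (frames, n_mels+n_mfcc)

    def build_speech_branch(n_frames=200, n_feats=60, n_emotions=5):
        inp = layers.Input(shape=(n_frames, n_feats))
        x = layers.LSTM(128, return_sequences=True)(inp)
        x = layers.LSTM(64)(x)
        out = layers.Dense(n_emotions, activation='sigmoid')(x)  # one intensity per emotion
        model = models.Model(inp, out)
        model.compile(optimizer='adam', loss='mse')              # RMSE reported at evaluation
        return model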