• Title/Summary/Keyword: speech task


An acoustic and perceptual investigation of the vowel length contrast in Korean

  • Lee, Goun;Shin, Dong-Jin
    • Phonetics and Speech Sciences / v.8 no.1 / pp.37-44 / 2016
  • The goal of the current study is to investigate how sound change is reflected in production and perception, and how lexical frequency affects the loss of sound contrasts. Specifically, the study examined whether vowel length contrasts are retained in Korean speakers' productions, and whether Korean listeners can distinguish vowel length minimal pairs in perception. Two production experiments and two perception experiments were conducted. For the production tests, twelve Korean native speakers in their 20s and 40s completed a read-aloud task as well as a map task. The results showed that, regardless of age group, all Korean speakers produced vowel length contrasts with a small but significant difference in the read-aloud test. Interestingly, the difference between long and short vowels disappeared in the map task, indicating that speech mode affects the production of vowel length contrasts. For the perception tests, thirty-three Korean listeners completed a discrimination test and a forced-choice identification test. The results showed that Korean listeners retain the perceptual sensitivity to distinguish the lexical meanings of vowel length minimal pairs. We also found that identification accuracy was affected by word frequency, with higher accuracy for high- and mid-frequency words than for low-frequency words. Taken together, the current study demonstrates that speech mode (read-aloud vs. spontaneous) affects the production of a sound undergoing language change, and that word frequency affects sound change in speech perception.

Pauses Characteristics in Slowed Speech of Treated Stutterer (치료 받은 말더듬 성인의 느린 구어에서 나타나는 휴지 특성)

  • Jeon, Hee-Sook
    • Speech Sciences / v.15 no.4 / pp.189-197 / 2008
  • In the process of speech therapy, fluency is acquired and speech rate increases when a behavioral modification strategy, which induces fluency by intentionally slowing the speech rate in the early stage, is applied. Therefore, the purpose of this study was to investigate the pause characteristics in the intentionally slowed speech of treated stutterers. Ten developmental stutterers who had well-established fluency in speech participated in this study. We collected 200-syllable samples of both much-slowed speech and slightly slowed speech in a reading task. To measure the features of pauses, the total frequency of pauses, total duration of pauses, average duration of pauses, and proportion of pauses were investigated. The findings were as follows: both the total duration and total frequency of pauses were higher in much-slowed speech than in slightly slowed speech. However, neither the average duration nor the proportion of pauses was significantly higher in much-slowed speech.

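The four pause measures listed in this abstract can be computed directly from a list of pause durations. A minimal sketch in Python, assuming pause durations in seconds and the total duration of the speech sample (the function and variable names are illustrative, not from the paper):

```python
def pause_measures(pause_durations, sample_duration):
    """Compute the four pause measures from the study: total frequency,
    total duration, average duration, and proportion of pauses."""
    total = sum(pause_durations)
    return {
        "total_frequency": len(pause_durations),
        "total_duration": total,
        "average_duration": total / len(pause_durations),
        "proportion": total / sample_duration,
    }

# Hypothetical pauses (s) from a 60-second reading sample.
m = pause_measures([0.8, 1.2, 0.5, 1.0], 60.0)
print(m)
```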

Feature Extraction Based on Speech Attractors in the Reconstructed Phase Space for Automatic Speech Recognition Systems

  • Shekofteh, Yasser;Almasganj, Farshad
    • ETRI Journal / v.35 no.1 / pp.100-108 / 2013
  • In this paper, a feature extraction (FE) method is proposed that is comparable to the traditional FE methods used in automatic speech recognition systems. Unlike conventional spectral-based FE methods, the proposed method evaluates the similarities between an embedded speech signal and a set of predefined speech attractor models in the reconstructed phase space (RPS) domain. In the first step, a set of Gaussian mixture models is trained to represent the speech attractors in the RPS. Next, for a new input speech frame, a posterior-probability-based feature vector is evaluated, which represents the similarity between the embedded frame and the learned speech attractors. We conduct experiments on a speech recognition task using a toolkit based on hidden Markov models, over FARSDAT, a well-known Persian speech corpus. With the proposed FE method, we gain a 3.11% absolute phoneme error rate improvement over the baseline system, which uses the mel-frequency cepstral coefficient FE method.
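The posterior-probability feature described here can be sketched in a few lines: score the embedded frame under each attractor model and normalize the scores. This is a simplified illustration, using single diagonal Gaussians as stand-ins for the paper's trained GMMs, with made-up parameters:

```python
import math

# Hypothetical stand-ins for the learned attractor models: each "attractor"
# is a single diagonal Gaussian (the paper trains full GMMs).
ATTRACTORS = [
    {"mean": [0.0, 0.0], "var": [1.0, 1.0]},
    {"mean": [2.0, 2.0], "var": [0.5, 0.5]},
    {"mean": [-2.0, 1.0], "var": [1.5, 0.8]},
]

def log_gauss(x, mean, var):
    """Log density of a diagonal Gaussian at point x."""
    return sum(
        -0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
        for xi, m, v in zip(x, mean, var)
    )

def posterior_features(embedded_frame):
    """Map an RPS-embedded frame to a posterior-probability feature vector:
    one similarity score per attractor model, normalized to sum to 1."""
    logs = [log_gauss(embedded_frame, a["mean"], a["var"]) for a in ATTRACTORS]
    mx = max(logs)                        # log-sum-exp for numerical stability
    exps = [math.exp(l - mx) for l in logs]
    z = sum(exps)
    return [e / z for e in exps]

feats = posterior_features([1.9, 2.1])
print([round(f, 3) for f in feats])  # sums to 1; the nearest attractor dominates
```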

A Validity Study on Measurement of Mental Fatigue Using Speech Technology (음성기술을 이용한 정신피로 측정에 관한 타당성 연구)

  • Song, Seungkyu;Kim, Jongyeol;Jang, Junsu;Kwon, Chulhong
    • Phonetics and Speech Sciences / v.5 no.1 / pp.3-10 / 2013
  • This study proposes a method to measure mental fatigue using speech technology, which has not been used in previous research and is easier than the existing complex and difficult methods. It aims at establishing a relationship between the human voice and mental fatigue, based on experiments measuring the influence of mental fatigue on the voice. Two monotonous tasks of simple calculation, such as finding the sum of three one-digit numbers, were used to measure the feeling of monotony, and two sets of subjective questionnaires were used to measure mental fatigue. While thirty subjects performed the experiment, responses to the questionnaires and speech data were collected. Speech features related to the speech source and the vocal tract filter were extracted from the speech data. According to the results, the speech parameters most closely related to mental fatigue are the mean and standard deviation of the fundamental frequency, jitter, and shimmer. This study shows that speech technology is a useful method for measuring mental fatigue.
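The source-related parameters named here have standard definitions: local jitter is the mean absolute difference between consecutive pitch periods relative to the mean period, and local shimmer is the same measure over peak amplitudes. A minimal sketch under those standard (Praat-style) definitions, with hypothetical track values, not the paper's extraction code:

```python
from statistics import mean, stdev

def jitter(periods):
    """Local jitter: mean absolute difference between consecutive
    pitch periods, divided by the mean period."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / mean(periods)

def shimmer(amplitudes):
    """Local shimmer: the same measure applied to peak amplitudes."""
    diffs = [abs(a - b) for a, b in zip(amplitudes, amplitudes[1:])]
    return (sum(diffs) / len(diffs)) / mean(amplitudes)

# Hypothetical pitch-period (s) and peak-amplitude tracks from one utterance.
periods = [0.0100, 0.0102, 0.0099, 0.0101, 0.0100]
amps = [0.80, 0.82, 0.79, 0.81, 0.80]

f0 = [1.0 / p for p in periods]  # per-cycle F0 in Hz
print(round(mean(f0), 1), round(stdev(f0), 2))   # F0 mean and SD
print(round(jitter(periods), 4), round(shimmer(amps), 4))
```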

Korean-Japanese Speech Translation System for Hotel Reservation - Korean front desk side - (한-일 호텔예약 음성번역 시스템 - 한국 프론트데스트 측 -)

  • 이영직;김영섬;김회린;류준형;이정철;한남용;안영목;최운천
    • Proceedings of the Acoustical Society of Korea Conference / 1995.06a / pp.204-207 / 1995
  • Recently, ETRI developed a Korean-Japanese speech translation system for the Korean front desk side of a hotel reservation task. The system consists of three sub-systems, each responsible for speech recognition, machine translation, or speech synthesis. This paper introduces the background of the system development and describes the functions of the sub-systems.


DYNAMICALLY LOCALIZED SELF-ORGANIZING MAP MODEL FOR SPEECH RECOGNITION

  • KyungMin NA
    • Proceedings of the Acoustical Society of Korea Conference / 1994.06a / pp.1052-1057 / 1994
  • The dynamically localized self-organizing map model (DLSMM) is a new speech recognition model based on the well-known self-organizing map algorithm and the dynamic programming technique. The DLSMM can efficiently normalize the temporal and spatial characteristics of a speech signal at the same time. In particular, the proposed model can use the contextual information of speech. In experiments on a ten Korean digit recognition task, the DLSMM with contextual information showed a higher recognition rate than predictive neural network models.

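The dynamic-programming temporal normalization this model relies on can be illustrated with plain dynamic time warping (DTW); this is a generic sketch of the technique, not the DLSMM itself:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences: the
    dynamic-programming alignment used for temporal normalization."""
    inf = float("inf")
    n, m = len(a), len(b)
    # cost[i][j] = best cumulative cost aligning a[:i] with b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # step in a only
                                 cost[i][j - 1],      # step in b only
                                 cost[i - 1][j - 1])  # step in both
    return cost[n][m]

# A time-stretched copy of a pattern aligns at zero cost.
print(dtw_distance([1, 2, 3, 2, 1], [1, 1, 2, 2, 3, 2, 1]))  # → 0.0
```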

Multi-resolution DenseNet based acoustic models for reverberant speech recognition (잔향 환경 음성인식을 위한 다중 해상도 DenseNet 기반 음향 모델)

  • Park, Sunchan;Jeong, Yongwon;Kim, Hyung Soon
    • Phonetics and Speech Sciences / v.10 no.1 / pp.33-38 / 2018
  • Although deep neural network-based acoustic models have greatly improved the performance of automatic speech recognition (ASR), reverberation still degrades the performance of distant speech recognition in indoor environments. In this paper, we adopt the DenseNet, which has shown great performance results in image classification tasks, to improve the performance of reverberant speech recognition. The DenseNet enables the deep convolutional neural network (CNN) to be effectively trained by concatenating feature maps in each convolutional layer. In addition, we extend the concept of multi-resolution CNN to multi-resolution DenseNet for robust speech recognition in reverberant environments. We evaluate the performance of reverberant speech recognition on the single-channel ASR task in reverberant voice enhancement and recognition benchmark (REVERB) challenge 2014. According to the experimental results, the DenseNet-based acoustic models show better performance than do the conventional CNN-based ones, and the multi-resolution DenseNet provides additional performance improvement.
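The channel growth produced by DenseNet's concatenation can be sketched with simple arithmetic: layer i of a dense block receives the block input plus the outputs of all i earlier layers, each contributing a fixed number of feature maps (the growth rate). The values below are illustrative, not from the paper:

```python
def dense_block_channels(c_in, growth_rate, num_layers):
    """Input channel count seen at each position in a dense block:
    every layer concatenates the block input with all earlier outputs,
    each of which adds `growth_rate` feature maps."""
    return [c_in + i * growth_rate for i in range(num_layers + 1)]

# With 16 input channels, growth rate 12, and 4 layers:
print(dense_block_channels(16, 12, 4))  # → [16, 28, 40, 52, 64]
```

The final entry is the channel count of the concatenated block output, which is why transition layers are typically used to compress it before the next block.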

Effects of the Types of Noise and Signal-to-Noise Ratios on Speech Intelligibility in Dysarthria (소음 유형과 신호대잡음비가 마비말장애인의 말명료도에 미치는 영향)

  • Lee, Young-Mee;Sim, Hyun-Sub;Sung, Jee-Eun
    • Phonetics and Speech Sciences / v.3 no.4 / pp.117-124 / 2011
  • This study investigated the effects of the types of noise and signal-to-noise ratios (SNRs) on the speech intelligibility of an adult with dysarthria. Speech intelligibility was judged by 48 naive listeners using a word transcription task. A repeated-measures design was used, with the type of noise (multi-talker babble vs. environmental noise) and SNR (0 dB, +10 dB, +20 dB) as within-subject factors. The dependent measure was the percentage of correctly transcribed words. Results revealed that both main effects were statistically significant: listeners performed significantly worse in the multi-talker babble condition than in the environmental noise condition, and significantly better at higher SNRs. The current results suggest that multi-talker babble and lower SNRs decrease the speech intelligibility of adults with dysarthria, and that speech-language pathologists should consider environmental factors such as the type of noise and the SNR when evaluating the speech intelligibility of adults with dysarthria.

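Noise conditions at fixed SNRs, as used in this study, are typically constructed by scaling the noise to the target level before mixing. A minimal sketch of that standard procedure (the short sample lists stand in for real audio):

```python
import math

def rms(x):
    """Root-mean-square level of a sample sequence."""
    return math.sqrt(sum(s * s for s in x) / len(x))

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the speech-to-noise level difference equals
    snr_db, then add it to the speech sample by sample."""
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20))
    return [s + gain * n for s, n in zip(speech, noise)]

# Hypothetical waveforms; a +10 dB SNR condition.
speech = [0.5, -0.4, 0.3, -0.2, 0.1]
noise = [0.05, -0.03, 0.04, -0.02, 0.01]
mixed = mix_at_snr(speech, noise, 10)
```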

F0 Extrema Timing of HL and LH in North Kyungsang Korean: Evidence from a Mimicry Task

  • Kim, Jung-Sun
    • Phonetics and Speech Sciences / v.4 no.3 / pp.43-49 / 2012
  • This paper describes the categorical effects of pitch accent contrasts in a mimicry task. It focuses, specifically, on examining how fundamental frequency (f0) variation reflects phonological contrasts in speakers of two distinct varieties of Korean (i.e., North Kyungsang and South Cholla). The results showed that, in a mimicry task using synthetic speech continua, there was a categorical effect in f0 peak timing for North Kyungsang speakers, whereas the timing of f0 peaks and valleys in the responses of South Cholla speakers was more variable, presenting a gradient or non-categorical effect. Evidence of categorical effects appeared as a shift of f0 peak times along the acoustic continuum for North Kyungsang speakers. The range for the shift of f0 valley times was much narrower than that of f0 peak times. The degree of shift near the middle of the continuum varied across individual mimicry responses. However, the categorical structure in mimicry responses, in terms of the clustering of f0 peak points, was more pronounced for North Kyungsang speakers than for South Cholla speakers. Additionally, the findings of the current study imply that the location of f0 peak times depends on individuals' imitative (or cognitive) abilities.

An Aerodynamic and Acoustic Analysis of the Breathy Voice of Thyroidectomy Patients (갑상선 수술 후 성대마비 환자의 기식 음성에 대한 공기역학적 및 음향적 분석)

  • Kang, Young-Ae;Yoon, Kyu-Chul;Kim, Jae-Ock
    • Phonetics and Speech Sciences / v.4 no.2 / pp.95-104 / 2012
  • Thyroidectomy patients may have vocal fold paralysis or paresis, resulting in a breathy voice. The aim of this study was to investigate the aerodynamic and acoustic characteristics of the breathy voice in thyroidectomy patients. Thirty-five subjects who had vocal fold paralysis after thyroidectomy participated in this study. According to perceptual judgements by three speech-language pathologists and one phonetician, subjects were divided into two groups: a breathy voice group (n = 21) and a non-breathy voice group (n = 14). Aerodynamic analysis was conducted using three tasks (Voicing Efficiency, Maximum Sustained Phonation, Vital Capacity), and acoustic analysis was performed on the Maximum Sustained Phonation task. The breathy voice group had significantly higher subglottal pressure and more pathological voice characteristics than the non-breathy voice group. Logistic regression on the aerodynamic measures reached 94.1% classification accuracy; the predictors of breathiness were the maximum sound pressure level, sound pressure level range, and phonation time of the Maximum Sustained Phonation task, and the pitch range, peak air pressure, and mean peak air pressure of the Voicing Efficiency task. The classification accuracy of the acoustic logistic regression was 88.6%, with five frequency perturbation parameters as predictors. Vocal fold paralysis creates air turbulence at the glottis, which perturbs frequency-related parameters and increases aspiration noise in high-frequency regions. These changes determine perceived breathiness.
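A logistic regression like the one reported here maps the predictor parameters to a class probability through a sigmoid. A minimal sketch of that decision rule; the paper names the predictors but not the fitted weights, so the coefficients below are purely illustrative:

```python
import math

# Illustrative coefficients only; the fitted values are not reported here.
WEIGHTS = {"max_spl_db": -0.08, "spl_range_db": 0.05, "phonation_time_s": -0.12}
BIAS = 9.0

def breathy_probability(features):
    """Logistic-regression decision: probability of the breathy class
    from a dict of aerodynamic measurements."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

p = breathy_probability(
    {"max_spl_db": 85.0, "spl_range_db": 20.0, "phonation_time_s": 12.0}
)
print(round(p, 3))  # classify as breathy if p >= 0.5
```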