Title/Summary/Keyword: stuttering (disfluency)

The perceptual judgment of sound prolongation: Equal-appearing interval and direct magnitude estimation (연장음 길이에 따른 비유창성 정도 평가: 등간척도와 직접크기평정 비교 연구)

  • Jin Park; Hwajung Cha; Sejin Bae
    • Phonetics and Speech Sciences, v.15 no.3, pp.59-67, 2023
  • This study aimed to propose an appropriate method for evaluating the perceived level of speech disfluency based on sound prolongation (i.e., increased duration of segments). To this end, 34 Korean-speaking adults (9 males, 25 females, mean age 32.9 years) participated as raters. The participants listened to sentences containing a total of 25 stimuli in which the Korean voiceless fricative /s/ was extended in 80-ms increments up to 2,000 ms (i.e., 285 ms, 365 ms, ..., 2,125 ms, 2,205 ms) and rated them on an equal-appearing interval scale (EAI, 1-7 points, where 1 represents "normal" and 7 represents "severe"). Based on the interval-scale results, the sentence stimulus whose prolonged fricative was rated at the mild-to-moderate level (4 points) was then selected as the reference modulus for direct magnitude estimation (DME). Scatter plots were created for the two sets of mean ratings, and the relationship between them was analyzed by curve estimation, with the model yielding the highest R² taken to indicate whether a linear or curvilinear approximation fit the data better. The relationship proved curvilinear, suggesting that DME is a more appropriate evaluation method than the EAI scale for assessing the perceived level of disfluency based on sound prolongation (see the sketch after this entry).
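
As a rough illustration of the curve-estimation step described in this abstract, the sketch below fits both a linear and a power (curvilinear) model to hypothetical paired EAI/DME means and compares their R² values. All data values, and the helper names linear, power, and r_squared, are fabricated for illustration; this is a minimal sketch of the model comparison, not the study's analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical paired mean ratings for 25 prolongation stimuli.
# Values are fabricated for illustration; they are NOT the study's data.
rng = np.random.default_rng(0)
eai = np.linspace(1.2, 6.8, 25)                             # mean EAI ratings (1-7)
dme = 10 * (eai / 4.0) ** 1.6 * rng.normal(1.0, 0.05, 25)   # mean DME magnitudes

def linear(x, a, b):
    """Straight-line model: DME = a * EAI + b."""
    return a * x + b

def power(x, a, b):
    """Stevens-style power (curvilinear) model: DME = a * EAI**b."""
    return a * np.power(x, b)

def r_squared(y, y_hat):
    """Coefficient of determination for observed vs. fitted values."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Fit both models and report R^2 for each, mirroring the comparison
# of linear vs. curvilinear approximations described in the abstract.
for name, model in [("linear", linear), ("power", power)]:
    params, _ = curve_fit(model, eai, dme)
    print(f"{name:>6}: R^2 = {r_squared(dme, model(eai, *params)):.4f}")
```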

Development of the video-based smart utterance deep analyser (SUDA) application (동영상 기반 자동 발화 심층 분석(SUDA) 어플리케이션 개발)

  • Lee, Soo-Bok; Kwak, Hyo-Jung; Yun, Jae-Min; Shin, Dong-Chun; Sim, Hyun-Sub
    • Phonetics and Speech Sciences, v.12 no.2, pp.63-72, 2020
  • This study aims to develop a video-based smart utterance deep analyser (SUDA) application that semi-automatically analyzes the utterances a child and mother produce during interactions over time. SUDA runs on Android phones, iPhones, and tablet PCs, and supports video recording and uploading to a server. The application provides three user modes: expert, general, and manager. The expert mode, which is useful for speech and language evaluation, analyzes the subject's utterances semi-automatically by measuring speech and language factors such as disfluency, morphemes, syllables, words, articulation rate, and response time. The general mode presents the outcome of the utterance analysis in graph form, and the manager mode is accessible only to the administrator who controls the entire system, including utterance analysis and video deletion. SUDA reduces clinicians' and researchers' workload by saving time on utterance analysis, and it helps parents easily obtain detailed information about their child's speech and language development. Further, the application will contribute to building a longitudinal dataset large enough to explore predictors of stuttering recovery and persistence. (A sketch of how two of these measures might be computed follows below.)
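
To make two of the measures named above concrete, here is a minimal sketch of how articulation rate and response time might be computed from time-stamped utterance records. The Utterance record, its fields, and the timing conventions (pauses excluded from speaking time; response time as turn-to-turn latency) are assumptions for illustration, not SUDA's actual data model or code.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str            # "child" or "mother"
    start: float            # onset time in seconds
    end: float              # offset time in seconds
    syllables: int          # syllable count of the utterance
    pause_time: float = 0.0 # within-utterance pause time in seconds

def articulation_rate(u: Utterance) -> float:
    """Syllables per second of actual speaking time (pauses excluded)."""
    speaking_time = (u.end - u.start) - u.pause_time
    return u.syllables / speaking_time if speaking_time > 0 else 0.0

def response_time(prev: Utterance, nxt: Utterance) -> float:
    """Latency between the end of one speaker's turn and the next speaker's onset."""
    return nxt.start - prev.end

# Example: one mother-child exchange (fabricated timings).
mother = Utterance("mother", 0.0, 1.8, syllables=9)
child = Utterance("child", 2.3, 4.0, syllables=6, pause_time=0.4)
print(f"articulation rate: {articulation_rate(child):.2f} syl/s")
print(f"response time: {response_time(mother, child):.2f} s")
```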