• Title/Abstract/Keyword: Speech Recording

Search results: 97 items (processing time: 0.021 s)

자기점검법이 청각장애 유아의 자발적인 말시작 행동에 미치는 영향 (Effects of Self-monitoring on Initiating Speech Behavior of the Hearing-impaired Preschoolers)

  • 현노상;김영태
    • 음성과학 / Vol. 9, No. 3 / pp.99-112 / 2002
  • The purpose of the present study was to investigate the effectiveness of self-monitoring on the spontaneous initiating speech behavior of hearing-impaired preschoolers. Three hearing-impaired preschoolers were selected from a special school for the deaf. They showed some vocalizations and words under intensive instruction settings, but never spontaneously spoke as a means of communication. A multiple probe design was applied in this study. During the self-monitoring intervention, each child was trained to assess whether his own initiating speech behavior had occurred or not, and then to record its occurrence on self-recording sheets and self-graphing sheets. The vibration of a mobile phone was used as a tactile cue for self-monitoring. The results of the present study were as follows: (1) Self-monitoring significantly increased the percentage of occurrence of spontaneous initiating speech behaviors. (2) The increased level of spontaneous initiating speech behavior was generalized to other natural instruction (cognitive) settings. (3) The increased level of spontaneous initiating speech behavior was maintained four weeks after the termination of the intervention.

자유 대화에서의 한국어 원어민 화자와 한국어 고급 학습자들의 발화 속도 비교 (A Comparative Study on the Speech Rate of Advanced Korean(L2) Learners and Korean Native Speakers in Conversational Speech)

  • 홍민경
    • 한국어교육 / Vol. 29, No. 3 / pp.345-363 / 2018
  • The purpose of this study is to compare the speech rate of advanced Korean (L2) learners and Korean native speakers in spontaneous utterances. Specifically, the current study investigated differences in the two groups' speech patterns according to utterance length. Eight advanced Korean (L2) learners and eight Korean native speakers participated in this study. The data were collected by recording their conversation, and physical measurements (speaking rate, articulation rate, pauses, and several types of speech disfluency) were taken on 120 utterances extracted from 12 of the 16 participants. The findings show that the advanced Korean learners' speech pattern is similar to that of Korean native speakers in short utterances. In long utterances, however, the two groups show different speech patterns: while the articulation rate of Korean native speakers increased, that of the Korean learners decreased. This suggests that the frequency of speech disfluency factors might have affected this result.
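
The two tempo measures used in this abstract differ only in whether pause time is counted. A minimal sketch of that distinction; the syllable counts, durations, and pause times below are hypothetical, not values from the study.

```python
# Sketch: speaking rate vs. articulation rate for one utterance.
# Values are hypothetical; the study's own measurement procedure may differ.

def speaking_rate(n_syllables: int, total_duration_s: float) -> float:
    """Syllables per second over the whole utterance, pauses included."""
    return n_syllables / total_duration_s

def articulation_rate(n_syllables: int, total_duration_s: float,
                      pause_duration_s: float) -> float:
    """Syllables per second over phonated time only (pauses excluded)."""
    return n_syllables / (total_duration_s - pause_duration_s)

# Example: a 4.2 s utterance with 18 syllables and 0.9 s of silent pauses.
print(speaking_rate(18, 4.2))           # ~4.29 syll/s
print(articulation_rate(18, 4.2, 0.9))  # ~5.45 syll/s
```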

Can the Energy Costs of Speech Movements be Measured? A Preliminary Feasibility Study

  • Bjorn Lindblom;Moon, Seung-Jae
    • The Journal of the Acoustical Society of Korea / Vol. 19, No. 3E / pp.25-32 / 2000
  • The main question addressed in this research was whether an adaptation of a standard exercise physiology procedure would be sensitive enough to record the excess oxygen uptake associated with speech activity. Oxygen consumption was recorded for a single subject during 7-minute rest periods and an automatic speech task, also 7 minutes long, performed at three different vocal efforts. The data show measurable and systematic speech-induced modifications of breathing and oxygen uptake patterns. The subject was found to use less power for normal than for soft and loud speech. This result is similar to findings reported by experimental biologists on the energetics of locomotion. However, more comprehensive feasibility studies need to be undertaken on a larger population before solid and detailed conclusions about speech energy costs are possible. Nevertheless, it appears clear that, for experimental tasks like the present one, i.e., variations in vocal effort, standard exercise physiology methods may indeed offer a viable approach to recording the excess oxygen uptake associated with speech movements.

대학생들이 또렷한 음성과 대화체로 발화한 영어문단의 구글음성인식 (Google speech recognition of an English paragraph produced by college students in clear or casual speech styles)

  • 양병곤
    • 말소리와 음성과학 / Vol. 9, No. 4 / pp.43-50 / 2017
  • These days, the voice models of speech recognition software are sophisticated enough to process the natural speech of people without any previous training. However, not much research has reported on the use of speech recognition tools in the field of pronunciation education. This paper examined Google speech recognition of a short English paragraph produced by Korean college students in clear and casual speech styles in order to diagnose and resolve students' pronunciation problems. Thirty-three Korean college students participated in the recording of the English paragraph. The Google soundwriter was employed to collect data on the word recognition rates of the paragraph. Results showed that the total word recognition rate was 73% with a standard deviation of 11.5%. The word recognition rate of clear speech was around 77.3%, while that of casual speech amounted to 68.7%. The lower recognition rate of casual speech was attributed both to individual pronunciation errors and to the software itself, as shown in its fricative recognition. Various distributions of unrecognized words were observed depending on the participant and proficiency group. From the results, the author concludes that speech recognition software is useful for diagnosing an individual's or a group's pronunciation problems. Further studies on the progressive improvement of learners' erroneous pronunciations would be desirable.
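
A word recognition rate like the one reported above can be computed by matching the ASR transcript against the reference paragraph. The sketch below uses a simple in-order word match via Python's difflib; the scoring method and the example sentences are illustrative assumptions, not the paper's exact procedure or text.

```python
# Sketch: a word recognition rate for a read paragraph, computed as the
# proportion of reference words matched (in order) in the ASR transcript.
# This is an illustrative metric, not necessarily the scoring used in the paper.
from difflib import SequenceMatcher

def word_recognition_rate(reference: str, recognized: str) -> float:
    ref = reference.lower().split()
    hyp = recognized.lower().split()
    matcher = SequenceMatcher(None, ref, hyp)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / len(ref)

# Hypothetical reference text and ASR output, for illustration only.
reference  = "please call stella ask her to bring these things with her from the store"
recognized = "please call stella ask her to bring this thing with her from the store"
print(f"{word_recognition_rate(reference, recognized):.1%}")  # ~85.7%
```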

Gender difference in speech intelligibility using speech intelligibility tests and acoustic analyses

  • Kwon, Ho-Beom
    • The Journal of Advanced Prosthodontics / Vol. 2, No. 3 / pp.71-76 / 2010
  • PURPOSE. The purpose of this study was to compare men with women in terms of speech intelligibility, to investigate the validity of objective acoustic parameters related to speech intelligibility, and to establish baseline data for future studies in various fields of prosthodontics. MATERIALS AND METHODS. Twenty men and women served as subjects in the present study. After the recording of sample sounds, speech intelligibility tests by three speech pathologists and acoustic analyses were performed. The speech intelligibility test scores and acoustic parameters such as fundamental frequency, fundamental frequency range, formant frequencies, formant ranges, vowel working space area, and vowel dispersion were compared between men and women. In addition, the correlations between the speech intelligibility values and the acoustic variables were analyzed. RESULTS. Women showed significantly higher speech intelligibility scores than men, and there were significant differences between men and women in most of the acoustic parameters used in the present study. However, the correlations between the speech intelligibility scores and the acoustic parameters were low. CONCLUSION. The speech intelligibility test and acoustic parameters used in the present study were effective in differentiating male voices from female voices, and their values might be used in future studies on patients involved with maxillofacial prosthodontics. However, further studies are needed on the correlation between speech intelligibility tests and objective acoustic parameters.
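
Of the acoustic parameters listed, the vowel working space area is the least self-explanatory: it is the area of the polygon spanned by the corner vowels in the F1-F2 plane. Below is a minimal sketch using the shoelace formula; the formant values are rough illustrative figures, not measurements from this study.

```python
# Sketch: vowel working space area from corner-vowel formants (F1, F2 in Hz),
# computed with the shoelace formula over the /i/-/ae/-/a/-/u/ quadrilateral.
# The formant values below are illustrative, not data from this study.

def polygon_area(points):
    """Shoelace formula; points are (F2, F1) vertices in order around the polygon."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

corner_vowels = [          # (F2, F1) in Hz, ordered around the quadrilateral
    (2300, 300),   # /i/
    (1900, 700),   # /ae/
    (1100, 750),   # /a/
    (900, 350),    # /u/
]
print(f"{polygon_area(corner_vowels):.0f} Hz^2")  # ~435,000 Hz^2 for these values
```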

대용량 운율 음성데이타를 이용한 자동합성방식 (Automatic Synthesis Method Using Prosody-Rich Database)

  • 김상훈
    • 한국음향학회:학술대회논문집 / 한국음향학회 1998년도 제15회 음성통신 및 신호처리 워크샵(KSCSP 98 15권1호) / pp.87-92 / 1998
  • In general, synthesis unit databases have been constructed by recording isolated words. In that case, each word boundary has a typical prosodic pattern, such as a falling intonation or pre-boundary lengthening. To get natural synthetic speech from such a database, the original speech must be artificially distorted. However, that artificial process tends to result in unnatural, unintelligible synthetic speech due to excessive prosodic modification of the speech signal. To overcome these problems, we gathered thousands of sentences for the synthesis database. To build phone-level synthesis units, we trained a speech recognizer with the recorded speech and then segmented phone boundaries automatically. In addition, we used a laryngograph for epoch detection. From the automatically generated synthesis database, we chose the best phone and concatenated it directly without any prosody processing. To select the best phone among multiple phone candidates, we used prosodic information such as the break strength of word boundaries, phonetic contexts, cepstrum, pitch, energy, and phone duration. From the pilot test, we obtained some positive results.
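
The candidate-selection step described above amounts to scoring each stored instance of a phone against the target context and picking the cheapest one. The sketch below illustrates that idea with a hand-rolled weighted cost over a subset of the listed features (break strength, right-hand phonetic context, pitch, duration); the feature set, weights, and values are illustrative assumptions, not the paper's actual cost function.

```python
# Sketch of unit selection: among several recorded instances of the same phone,
# pick the one whose prosodic/contextual features best match the target.
# Feature names, weights, and values are illustrative only.

def selection_cost(target: dict, candidate: dict, weights: dict) -> float:
    """Weighted distance between a target specification and a phone candidate."""
    cost = 0.0
    cost += weights["break"]    * abs(target["break_strength"] - candidate["break_strength"])
    cost += weights["context"]  * (0.0 if target["right_phone"] == candidate["right_phone"] else 1.0)
    cost += weights["pitch"]    * abs(target["pitch_hz"] - candidate["pitch_hz"])
    cost += weights["duration"] * abs(target["duration_ms"] - candidate["duration_ms"])
    return cost

def select_best(target, candidates, weights):
    return min(candidates, key=lambda c: selection_cost(target, c, weights))

weights = {"break": 10.0, "context": 5.0, "pitch": 0.05, "duration": 0.02}
target = {"break_strength": 1, "right_phone": "a", "pitch_hz": 120, "duration_ms": 80}
candidates = [
    {"break_strength": 3, "right_phone": "a", "pitch_hz": 110, "duration_ms": 95},
    {"break_strength": 1, "right_phone": "o", "pitch_hz": 125, "duration_ms": 78},
    {"break_strength": 1, "right_phone": "a", "pitch_hz": 140, "duration_ms": 70},
]
print(select_best(target, candidates, weights))  # the third candidate wins here
```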

An Implementation of a Multi-Channel Speech Surveillance System Over Telephone Lines

  • Kim, Sung-Soo
    • The Journal of the Acoustical Society of Korea / Vol. 17, No. 4E / pp.17-21 / 1998
  • This paper presents an implementation of a multi-channel speech surveillance system over telephone lines using TMS320C31 DSP chips. The incoming speech on each telephone line is first compressed simultaneously in real time by the popular vector-sum excited linear predictive (VSELP) speech coding algorithm at a rate of 8 kbps. The compressed speech bit streams are then multiplexed with those of other users. The multiplexed speech bit streams are transferred to the system storage equipment together with other required information so that a system operator can later monitor the stored speech data whenever necessary. The host program runs under Microsoft Windows 95 for an efficient man-machine interface and future upgradability. We have confirmed that the overall 64-channel system operates satisfactorily in real time. We have also verified approximately 2,880 total hours of recording capability on a playback module and two removable backup drives.
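
The quoted figures (8 kbps per channel, 2,880 total recording hours) can be sanity-checked with back-of-the-envelope storage arithmetic. The calculation below ignores multiplexing headers and side information, so it gives only lower-bound estimates.

```python
# Rough storage arithmetic for the figures quoted above (8 kbps VSELP,
# 2,880 total recording hours). Overheads are ignored.

BITRATE_BPS = 8_000          # VSELP coder output, bits per second
TOTAL_HOURS = 2_880          # quoted total recording capability

bytes_per_hour = BITRATE_BPS / 8 * 3600
total_bytes = bytes_per_hour * TOTAL_HOURS

print(f"{bytes_per_hour / 1e6:.1f} MB per channel-hour")  # ~3.6 MB
print(f"{total_bytes / 1e9:.1f} GB for 2,880 hours")       # ~10.4 GB
```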

다양한 음성을 이용한 자동화자식별 시스템 성능 확인에 관한 연구 (Variation of the Verification Error Rate of Automatic Speaker Recognition System With Voice Conditions)

  • 홍수기
    • 대한음성학회지:말소리 / No. 43 / pp.45-55 / 2002
  • High reliability of automatic speaker recognition, regardless of voice conditions, is necessary for forensic applications. Audio recordings in real cases are not consistent in voice conditions such as duration, the time interval between recordings, given text versus conversational speech, and transmission channel. In this study, the variation of the verification error rate of an ASR system with these voice conditions was investigated. The results show that, in order to decrease both the false rejection rate and the false acceptance rate, varied voice samples should be used for training, and the duration of the training voices should be longer than that of the test voices.
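
The two error rates traded off here, the false rejection rate (FRR) and the false acceptance rate (FAR), are both functions of the decision threshold applied to verification scores. A small sketch with hypothetical scores (not data from the study) showing how raising the threshold lowers FAR at the cost of FRR:

```python
# Sketch: FRR and FAR at a given decision threshold, from hypothetical
# verification scores. Higher score = stronger evidence of a speaker match.

def error_rates(genuine_scores, impostor_scores, threshold):
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return frr, far

genuine  = [0.91, 0.84, 0.62, 0.78, 0.95]   # same-speaker trials (illustrative)
impostor = [0.35, 0.58, 0.72, 0.41, 0.29]   # different-speaker trials (illustrative)

for threshold in (0.5, 0.7):
    frr, far = error_rates(genuine, impostor, threshold)
    print(f"threshold={threshold}: FRR={frr:.0%}, FAR={far:.0%}")
```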

A New Formulation of Multichannel Blind Deconvolution: Its Properties and Modifications for Speech Separation

  • Nam, Seung-Hyon;Jee, In-Nho
    • The Journal of the Acoustical Society of Korea / Vol. 25, No. 4E / pp.148-153 / 2006
  • A new normalized MBD algorithm is presented for nonstationary convolutive mixtures, and its properties and modifications are discussed in detail. The proposed algorithm normalizes the signal spectrum in the frequency domain to provide faster, more stable convergence and improved separation without a whitening effect. Modifications to the proposed algorithm, such as nonholonomic constraints and off-diagonal learning, are also discussed. Simulation results using a real-world recording confirm the superior performance of the proposed algorithm and its usefulness in real-world applications.
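
The spectral normalization idea can be illustrated, in much simplified form, as scaling each frequency bin of the mixture spectrum by its average magnitude so that adaptation proceeds at a comparable rate across bins. The sketch below is only that illustration, not the paper's normalized-MBD update rule.

```python
# Simplified illustration of per-bin spectral normalization of an STFT,
# not the paper's full normalized multichannel blind deconvolution update.
import numpy as np

def normalize_spectrum(X: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """X: complex STFT frames, shape (n_frames, n_bins)."""
    power = np.mean(np.abs(X) ** 2, axis=0, keepdims=True)  # per-bin average power
    return X / np.sqrt(power + eps)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 257)) + 1j * rng.standard_normal((100, 257))
Xn = normalize_spectrum(X)
print(np.mean(np.abs(Xn) ** 2, axis=0)[:3])  # ~1.0 in every bin after normalization
```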

Pathological Vibratory patterns of the Vocal Folds Observed by the High Speed Digital Imaging System

  • Niimi, Seiji
    • 대한음성언어의학회:학술대회논문집 / 대한음성언어의학회 1998년도 제10회 학술대회 심포지움 / pp.208-209 / 1998
  • It is generally known that many cases of pathological rough voice are characterized not by simple random perturbations but by quasi-periodic perturbations in the speech wave. However, there are few studies on the characteristics of the perturbations in vocal fold vibration associated with this type of voice. We have been conducting studies of pathological vocal fold vibration using a high-speed digital image recording system developed by our institute. Compared to an ordinary high-speed motion picture system, the present system is compact and simple to operate and is thus well suited for pathological data collection. (omitted)
