• Title/Summary/Keyword: Speech control

Search Results: 603

An Implementation of Speech Recognition System for Car's Control (자동차 제어용 음성 인식시스템 구현)

  • 이광석;김현덕
    • Journal of the Korea Institute of Information and Communication Engineering / v.5 no.3 / pp.451-458 / 2001
  • In this paper, we propose a speech control system that operates various devices in a car by real-time control speech. The real-time system detects start and end points in the A/D-converted speech data and recognizes the utterance with the one-pass dynamic programming method. The recognition result is displayed on a monitor, and the corresponding control data are sent to the control interfaces. The HMMs are trained on continuous control speech consisting of control words and digits for operating the various devices in the car. The recognition rate averages 97.3% for word and control speech and 96.3% for digit speech. An illustrative endpoint-detection sketch follows this entry.

  • PDF
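A minimal sketch of the kind of energy-based start/end point detection the abstract describes, assuming a mono A/D-converted signal held in a NumPy array; the frame length and threshold ratio are illustrative values, not parameters from the paper.

```python
import numpy as np

def detect_endpoints(signal, sr, frame_ms=20, energy_ratio=0.1):
    """Return (start, end) sample indices of the detected speech region.

    Frames the A/D-converted signal, computes short-time energy, and marks
    the first and last frames whose energy exceeds a threshold placed
    between the noise floor and the peak energy.  All parameters are
    illustrative, not values from the paper.
    """
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames.astype(np.float64) ** 2).mean(axis=1)

    # Threshold between the noise floor and the peak frame energy.
    threshold = energy.min() + energy_ratio * (energy.max() - energy.min())
    active = np.where(energy > threshold)[0]
    if active.size == 0:
        return None  # no frame exceeded the threshold
    return active[0] * frame_len, (active[-1] + 1) * frame_len
```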

PROSODY CONTROL BASED ON SYNTACTIC INFORMATION IN KOREAN TEXT-TO-SPEECH CONVERSION SYSTEM

  • Kim, Yeon-Jun;Oh, Yung-Hwan
    • Proceedings of the Acoustical Society of Korea Conference / 1994.06a / pp.937-942 / 1994
  • A Text-to-Speech (TTS) conversion system can convert any word or sentence into speech. To synthesize speech as human beings do, careful prosody control including intonation, duration, accent, and pause is required; it helps listeners understand the speech clearly and makes it sound more natural. In this paper, a prosody control scheme that makes use of function-word information is proposed. Among the many factors of prosody, intonation, duration, and pause are closely related to syntactic structure, and these relations have been formalized and embodied in the TTS system. To evaluate the speech synthesized with the proposed prosody control, the MOS (Mean Opinion Score) method, a subjective evaluation, was used: the synthesized speech was played to 10 listeners, and each listener scored it between 1 and 5. The evaluation experiments show that the proposed prosody control helps the TTS system synthesize more natural speech. An illustrative MOS calculation follows this entry.

  • PDF
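A worked illustration of the MOS evaluation described above, assuming 10 listeners each rate a synthesized sample from 1 to 5; the scores and condition names are made up for illustration.

```python
# Each listener rates a synthesized sentence from 1 to 5; the MOS is the mean
# over listeners.  The scores below are illustrative only.
scores = {
    "with_prosody_control":    [4, 4, 5, 3, 4, 4, 5, 4, 3, 4],
    "without_prosody_control": [3, 2, 3, 3, 2, 3, 4, 3, 2, 3],
}
for condition, ratings in scores.items():
    mos = sum(ratings) / len(ratings)
    print(f"{condition}: MOS = {mos:.2f}")
```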

The Effects of Speaking Mode on Intelligibility of Dysarthric Speech (뇌성마비 성인의 발화유형에 따른 명료도)

  • Kim, Soo-Jin;Ko, Hyun-Ju
    • Phonetics and Speech Sciences / v.1 no.4 / pp.171-176 / 2009
  • Intelligibility measurement is one criterion for assessing the severity of speech disorders, especially in persons with dysarthria. Rate control, usually rate reduction, is used with many dysarthric speakers to improve their intelligibility. The purpose of this study is to compare how the intelligibility of speech produced by speakers with cerebral palsy changes across three speaking conditions. Speech samples were collected from 10 adults with cerebral palsy who were asked to speak under three conditions: (1) naturally (control), (2) more slowly (rate control), and (3) louder and more accurately (clear speech). In a perception test, a group of three judges listened to the speech samples and wrote down whatever they heard. The results showed that the subjects divided into two subgroups according to how their intelligibility varied across the three conditions: some subjects showed greatly increased intelligibility when asked to speak louder and more accurately, while the others showed no difference across conditions. This study suggests that it would be clinically useful to identify the instruction that best improves intelligibility for each speaker with cerebral palsy. An illustrative intelligibility calculation follows this entry.

  • PDF
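A minimal sketch of word-level intelligibility scoring of the kind used in the perception test above: the percentage of target words a judge transcribed correctly, averaged over judges. The target sentence, the transcriptions, and the position-by-position matching are simplifying assumptions, not the study's scoring protocol.

```python
def intelligibility(target: str, transcription: str) -> float:
    """Percentage of target words the judge wrote down correctly
    (position-by-position match; a simplification of full word-level scoring)."""
    t_words = target.split()
    h_words = transcription.split()
    correct = sum(1 for t, h in zip(t_words, h_words) if t == h)
    return 100.0 * correct / len(t_words)

# Three judges transcribe the same utterance produced under one speaking condition.
target = "the boy put the ball on the table"
judges = [
    "the boy put the ball on a table",
    "the boy put a ball on the table",
    "a boy put the ball on the table",
]
scores = [intelligibility(target, j) for j in judges]
print(f"mean intelligibility: {sum(scores) / len(scores):.1f}%")
```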

Real-Time Implementation of Wireless Remote Control of Mobile Robot Based-on Speech Recognition Command (음성명령에 의한 모바일로봇의 실시간 무선원격 제어 실현)

  • Shim, Byoung-Kyun;Han, Sung-Hyun
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.20 no.2 / pp.207-213 / 2011
  • In this paper, we present a real-time implementation of a mobile robot to which interactive voice recognition is applied. Speech commands are uttered as sentential connected words and issued through the wireless remote control system. We implement an automatic distant-speech command recognition system for interactive voice-enabled services. We first construct a baseline automatic speech command recognition system whose acoustic models are trained on utterances recorded with a close-talking microphone. To improve the performance of this baseline, the acoustic models are adapted to the spectral characteristics of different microphones and to the environmental mismatch between close-talking and distant speech. We illustrate the performance of the developed speech recognition system by experiments; the average recognition rate of the proposed system is about 95% or higher. An illustrative command-dispatch sketch follows this entry.
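A minimal sketch of the command-dispatch side of such a system, assuming the recognizer already returns a text command; the command vocabulary, host address, port, and byte protocol are invented for illustration and are not taken from the paper.

```python
import socket

# Hypothetical mapping from recognized speech commands to motion commands.
COMMANDS = {
    "go forward": b"FWD",
    "go backward": b"BWD",
    "turn left": b"LFT",
    "turn right": b"RGT",
    "stop": b"STP",
}

def send_command(recognized_text: str, host: str = "192.168.0.10", port: int = 9000) -> bool:
    """Send the motion command for a recognized utterance to the robot over a
    TCP link standing in for the wireless remote-control channel."""
    payload = COMMANDS.get(recognized_text.strip().lower())
    if payload is None:
        return False  # utterance not in the command vocabulary
    with socket.create_connection((host, port), timeout=1.0) as conn:
        conn.sendall(payload + b"\n")
    return True
```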

Acoustic Characteristics of Patients with Maxillary Complete Dentures (상악 총의치 장착 환자 언어의 음향학적 특성 연구)

  • Ko, Sok-Min;Hwang, Byung-Nam
    • Speech Sciences / v.8 no.4 / pp.139-156 / 2001
  • Speech intelligibility in patients with complete dentures is an important clinical problem that depends on the material used. The objective of this study was to investigate the speech of edentulous subjects fitted with a complete maxillary prosthesis made of two different palatal materials: chrome-cobalt alloy and acrylic resin. Three patients with complete dentures in the experimental group and ten people in the control group participated in the experiment. CSL and Visi-Pitch were used to measure speech characteristics. The test words consisted of the simple vowel /e/, meaningless three-syllable words containing fricative, affricate, and stop sounds, and the sustained fricatives /s/ and /ʃ/. The analyzed speech parameters were vowel and lateral formants, VOT, sound durations, sound pressure level, and fricative frequency. Data analysis was conducted with a series of paired t-tests. The findings were as follows: (1) The first vowel formant of patients with complete dentures was higher than that of the control group (p<0.05), while the third lateral formant was lower than that of the control group (p<0.01). (2) Patients with complete dentures produced lower speech intelligibility with a lower fricative frequency (/ʃ/) than the control group (p<0.0); the speech intelligibility of patients with the metal prosthesis was higher than that of those with the resin prosthesis (p<0.05). (3) Fricative, lateral, and stop sound durations of patients with complete dentures were longer than those of the control group (p<0.01 and p<0.05, respectively). Total sound durations of patients with the metal prosthesis were similar to those of the control group (p<0.05), while those with the resin prosthesis were shorter (p<0.01), implying that the metal prosthesis gave higher speech intelligibility than the resin prosthesis. (4) Patients with complete dentures had higher sound pressure levels for /t/ and /c/ than the control group (p<0.01); however, the sound pressure level for /c/ of patients with either the metal or the resin prosthesis was similar to that of the control group (p<0.05). (5) Patients with complete dentures had a higher fundamental frequency than the control group (p<0.01). An illustrative paired t-test follows this entry.

  • PDF
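A minimal example of the paired t-test analysis used in the study, comparing one acoustic measure for the same speakers under two prosthesis materials; the VOT values below are made up for illustration and are not data from the paper.

```python
from scipy import stats

# Illustrative paired comparison: VOT (ms) for the same speakers measured with
# the metal and the resin palatal prosthesis.  Values are invented.
vot_metal = [72, 65, 80, 70, 68]
vot_resin = [85, 78, 92, 81, 79]

t_stat, p_value = stats.ttest_rel(vot_metal, vot_resin)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```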

Multi-Channel Speech Enhancement Algorithm Using DOA-based Learning Rate Control (DOA 기반 학습률 조절을 이용한 다채널 음성개선 알고리즘)

  • Kim, Su-Hwan;Lee, Young-Jae;Kim, Young-Il;Jeong, Sang-Bae
    • Phonetics and Speech Sciences / v.3 no.3 / pp.91-98 / 2011
  • In this paper, a multi-channel speech enhancement method using the linearly constrained minimum variance (LCMV) algorithm with variable learning rate control is proposed. To control the learning rate of the LCMV adaptive filters, the direction of arrival (DOA) is measured for each short-time input signal and the likelihood of target-speech presence is estimated. Using this likelihood measure, the learning rate is increased during noise-only intervals and decreased during target-speech intervals. To optimize the parameters of the mapping function between the likelihood value and the corresponding learning rate, an exhaustive search is performed using Bark-scale distortion (BSD) as the performance index. Experimental results show that the proposed algorithm outperforms the conventional LCMV with a fixed learning rate by around 1.5 dB in BSD. An illustrative likelihood-to-learning-rate mapping follows this entry.

  • PDF
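A minimal sketch of one plausible mapping from the estimated target-speech presence likelihood to the adaptive-filter learning rate, in the spirit of the abstract above; the sigmoid form and all constants are assumptions for illustration, since the paper tunes its own mapping by exhaustive search against the BSD measure.

```python
import numpy as np

def learning_rate(speech_likelihood, mu_max=0.05, mu_min=0.001, slope=10.0, midpoint=0.5):
    """Map a target-speech presence likelihood in [0, 1] to an adaptive-filter
    step size: large during noise-only frames, small while the target talker
    is active.  The sigmoid shape and constants are illustrative assumptions."""
    sigmoid = 1.0 / (1.0 + np.exp(slope * (speech_likelihood - midpoint)))
    return mu_min + (mu_max - mu_min) * sigmoid

# Likelihood near 0 (noise only) -> step size near mu_max; near 1 -> near mu_min.
print(learning_rate(0.1), learning_rate(0.9))
```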

The Relationship between 3- and 5-year-old children's private speech and their mothers' scaffolding (3세와 5세 유아의 혼잣말과 어머니의 비계설정과의 관계)

  • Park, Young-Soon;Yoo, An-Jin
    • Korean Journal of Human Ecology / v.14 no.1 / pp.59-68 / 2005
  • The purpose of this study was to investigate the relationship between children's private speech during an individual session and maternal scaffolding during a mother-child session. Subjects were twenty 3-year-old and twenty 5-year-old children and their mothers, recruited from day-care centers in Seoul. Mother-child interaction was videotaped for 15 minutes and the maternal utterances were transcribed for analysis of maternal scaffolding. Each child's individual session, held 3-5 days later, was videotaped for 15 minutes and the child's utterances were transcribed. Subcategories of maternal scaffolding were significantly related to children's private speech during the individual session, and there appeared to be an age difference in this relationship. Among the verbal scaffolding strategies used by mothers of 3-year-olds, other-regulation and control and praise strategies were significantly related to children's private speech; among those used by mothers of 5-year-olds, other-regulation and control and teaching strategies were significantly related to children's private speech. Among the maternal physical control strategies, the withdrawal of maternal physical control over the maze task over time was significantly related to children's private speech, as was the withdrawal of maternal physical control over the 5-year-olds' physical performance.

  • PDF

Compensation Ability in Speech Motor Control in Children with and without Articulation Disorders (조음장애아동과 비장애아동의 말운동통제 보상능력 비교)

  • Song, Yun-Kyung;Sim, Hyun-Sub
    • Speech Sciences / v.15 no.3 / pp.183-201 / 2008
  • This study attempted to reveal the physiologic etiology or related factors associated with speech processing by comparing the compensation ability in speech motor control of children with and without articulation disorders. Subjects were 35 children with articulation disorder and 35 children without articulation disorder, aged 5 to 6 years. They were asked to rapidly repeat the diadochokinetic sequences /pʰa/, /tʰa/, /kʰa/, and /pʰatʰakʰa/ while the mandible was free and while it was stabilized with a bite block. The results showed that the children with articulation disorder had a significantly greater difference in elapsed time for the diadochokinetic movements between the mandible-free and stabilized conditions than the group without articulation disorder. However, within the articulation disorder group, the correlation between the percentage of consonants correct and the compensation ability in speech motor control was not significant. These results indicate that children with articulation disorder have poorer compensation ability in speech motor control than children without articulation disorder, while this poorer ability is not related to the severity of the articulation disorder, suggesting either general or individual characteristics of children with articulation disorder. An illustrative group comparison of the compensation measure follows this entry.

  • PDF
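A minimal sketch of the two statistical comparisons the abstract reports, using SciPy: a between-group comparison of the bite-block compensation measure, and a correlation between that measure and the percentage of consonants correct within the disorder group. All numbers are made up for illustration.

```python
from scipy import stats

# Compensation measure: increase in diadochokinetic (DDK) elapsed time (s) from
# the mandible-free to the bite-block condition.  Values are illustrative only.
diff_articulation = [1.8, 2.1, 1.5, 2.4, 1.9, 2.2]
diff_typical      = [0.4, 0.6, 0.3, 0.5, 0.7, 0.4]

# Group comparison of the compensation measure.
t_stat, p_value = stats.ttest_ind(diff_articulation, diff_typical)
print(f"group difference: t = {t_stat:.2f}, p = {p_value:.4f}")

# Within the articulation-disorder group: percentage of consonants correct (PCC)
# versus the compensation measure.
pcc = [62, 70, 55, 48, 66, 58]
r, p = stats.pearsonr(pcc, diff_articulation)
print(f"PCC vs compensation: r = {r:.2f}, p = {p:.3f}")
```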

A Study of Speech Control Tags Based on Semantic Information of a Text (텍스트의 의미 정보에 기반을 둔 음성컨트롤 태그에 관한 연구)

  • Chang, Moon-Soo;Chung, Kyeong-Chae;Kang, Sun-Mee
    • Speech Sciences / v.13 no.4 / pp.187-200 / 2006
  • Speech synthesis technology is widely used, and its application area is broadening to automatic response services, learning systems for handicapped persons, etc. However, the sound quality of speech synthesizers has not yet reached a level satisfactory to users. Existing synthesizers generate prosody only from interval information such as spaces and commas, or from a few punctuation marks such as question and exclamation marks, so it is difficult to produce natural human prosody even with a large speech database. One way to address this problem is to select prosody from higher-level information obtained through language processing. This paper proposes a method for generating tags that control prosody by analyzing the meaning of a sentence together with speech situation information. We use Systemic Functional Grammar (SFG) [4], which analyzes the meaning of a sentence with speech situation information, considering the preceding sentence, the situation of the conversation, the relationship among the people in the conversation, etc. In this study, we generate Semantic Speech Control Tags (SSCT) from the results of the SFG meaning analysis and speech waveform analysis. An illustrative tagging sketch follows this entry.

  • PDF
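A generic sketch of attaching prosody-control tags to clauses from a higher-level analysis, in the spirit of the abstract above; the tag fields, clause roles, and values are invented for illustration and do not reflect the paper's SSCT format or its SFG analysis.

```python
# The tag fields ("pause_ms", "pitch_reset"), the clause roles, and the rule of
# inserting a longer pause after the theme are invented for illustration; they
# are not the paper's SSCT definition.
def tag_sentence(clauses):
    """Attach simple prosody-control tags to each clause of a sentence,
    given (text, role) pairs produced by a higher-level analysis."""
    tagged = []
    for i, (text, role) in enumerate(clauses):
        tagged.append({
            "text": text,
            "pause_ms": 300 if role == "theme" else 150,  # longer break after the theme
            "pitch_reset": i == 0,                        # reset pitch at sentence start
        })
    return tagged

print(tag_sentence([("As for tomorrow", "theme"), ("it is expected to rain", "rheme")]))
```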

Prosody Control of the Synthetic Speech using Sampling Rate Conversion (표본화율 변환을 이용한 합성음의 운율제어)

  • 이현구;홍광석
    • Proceedings of the IEEK Conference / 1999.11a / pp.676-679 / 1999
  • In this paper, we present a method to control the prosody of synthetic speech using a sampling rate conversion technique. Conventional prosody control methods perform overlap-and-add, so the synthetic speech is distorted and the voice quality is unsatisfactory. Using the sampling rate conversion technique, we can obtain high-quality synthetic speech and control the talking speed according to the speaker's patterns. An illustrative resampling sketch follows this entry.

  • PDF
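A minimal sketch of time-scaling speech by sample-rate conversion, assuming a 1-D NumPy signal; note that plain resampling like this shifts pitch along with duration, and any additional processing in the paper's method is not detailed in the abstract.

```python
import numpy as np

def change_speed_by_resampling(signal, speed=1.2):
    """Time-scale a speech signal by resampling with linear interpolation:
    speed > 1 shortens the utterance, speed < 1 lengthens it.  Plain
    resampling also shifts pitch; all parameters are illustrative."""
    n_out = int(round(len(signal) / speed))
    old_idx = np.arange(len(signal))
    new_idx = np.linspace(0, len(signal) - 1, n_out)
    return np.interp(new_idx, old_idx, signal)

# Example: speed up a 1-second, 16 kHz test tone by 20%.
sr = 16000
t = np.arange(sr) / sr
faster = change_speed_by_resampling(np.sin(2 * np.pi * 120 * t), speed=1.2)
print(len(faster))  # roughly 13333 samples
```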