• Title/Abstract/Keyword: Speech interface

251 search results

Implementation of Embedded Speech Recognition System for Supporting Voice Commander to Control an Audio and a Video on Telematics Terminals (텔레메틱스 단말기 내의 오디오/비디오 명령처리를 위한 임베디드용 음성인식 시스템의 구현)

  • Kwon, Oh-Il;Lee, Heung-Kyu
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
• Vol. 42, No. 11
    • /
    • pp.93-100
    • /
    • 2005
• In this paper, we implement an embedded speech recognition system that supports application services such as audio and video control through a speech recognition interface in cars. The embedded speech recognition system is implemented and ported to a DSP board. Because the microphone type and speech codec affect recognition accuracy, we optimized the simulation and test environment to effectively remove the real noises encountered in a car. We applied a noise suppression and feature compensation algorithm to increase the accuracy of speech recognition in the car, and we used context-dependent tied-mixture acoustic modeling. The performance evaluation showed high accuracy for the proposed system both in an office environment and in a real car environment.
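
The abstract mentions a noise suppression and feature compensation step without naming the algorithm. As a point of reference, below is a minimal Python sketch of magnitude spectral subtraction, one classic technique for suppressing the kind of stationary noise found in a car; the function and parameter names are illustrative assumptions, not the paper's.

```python
import numpy as np

def spectral_subtract(frames, noise_frames, floor=0.02):
    """Suppress stationary noise by subtracting an average noise
    magnitude spectrum from each frame (classic spectral subtraction).

    frames       : 2-D array (num_frames x frame_len) of windowed speech
    noise_frames : frames known to contain only noise (e.g. leading silence)
    floor        : spectral floor to avoid negative magnitudes
    """
    # Average noise magnitude spectrum estimated from noise-only frames.
    noise_mag = np.abs(np.fft.rfft(noise_frames, axis=1)).mean(axis=0)

    spec = np.fft.rfft(frames, axis=1)
    mag, phase = np.abs(spec), np.angle(spec)

    # Subtract the noise estimate; clamp to a fraction of the original
    # magnitude so the result never goes negative ("spectral floor").
    clean_mag = np.maximum(mag - noise_mag, floor * mag)

    # Recombine with the noisy phase and return to the time domain.
    return np.fft.irfft(clean_mag * np.exp(1j * phase), axis=1)
```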

Implementation of 16kbps ADPCM by DSK50 (DSK50을 이용한 16kbps ADPCM 구현)

  • Cho, Yun-Seok;Han, Kyong-Ho
    • Proceedings of the KIEE Conference
    • /
• Proceedings of the 1996 KIEE Summer Conference, Part B
    • /
    • pp.1295-1297
    • /
    • 1996
• The CCITT G.721 and G.723 standard ADPCM algorithms are implemented using TI's fixed-point DSP Starter Kit (DSK). ADPCM can be implemented at various rates, such as 16K, 24K, 32K and 40K. ADPCM is a sample-based compression technique, and its complexity is not as high as that of other speech compression techniques such as CELP, VSELP and GSM. ADPCM is widely applicable to low-cost speech compression applications such as tapeless answering machines, simultaneous voice-and-fax modems, and digital phones. The TMS320C50 is a low-cost fixed-point DSP chip, and the C50 DSK system includes an AIC (analog interface chip) that operates as a single-chip A/D and D/A converter with 14-bit resolution, the C50 DSP chip with 10K of on-chip memory, and an RS232C interface module. The ADPCM C code is compiled with the TI C50 C compiler and runs from the DSK's on-chip memory. The speech signal input is converted into 14-bit linear PCM data, encoded into ADPCM data, and the data is sent to a PC through RS232C. The ADPCM data on the PC is received back by the DSK through RS232C, decoded into 14-bit linear PCM data, and converted into a speech signal. The DSK system has audio in/out jacks, so the speech signal can be played in and out directly.
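
Since the abstract walks through the sample-based encode/decode path, a toy sketch may help. The code below illustrates the predict/quantize/adapt loop of an ADPCM-style codec; it is deliberately simplified (a 2-bit quantizer with a crude step-size rule) and is not the CCITT G.721 algorithm, which uses an adaptive pole-zero predictor.

```python
# Toy 2-bit ADPCM illustrating the sample-based predict/quantize/adapt
# loop. NOT the exact CCITT G.721/G.723 codec; names are illustrative.

def _reconstruct(code, pred, step):
    """Mirror the decoder inside the encoder so both stay in sync."""
    delta = step + step // 2 if (code & 1) else step // 2
    pred += -delta if (code & 2) else delta
    pred = max(-32768, min(32767, pred))          # clamp to 16-bit range
    # Adapt the step: grow on large residuals, shrink on small ones.
    step = min(2048, step * 2) if (code & 1) else max(4, step * 3 // 4)
    return pred, step

def adpcm_encode(samples):
    codes, pred, step = [], 0, 16
    for s in samples:
        diff = int(s) - pred
        code = 2 if diff < 0 else 0               # sign bit
        if abs(diff) >= step:
            code |= 1                             # coarse magnitude bit
        pred, step = _reconstruct(code, pred, step)
        codes.append(code)
    return codes

def adpcm_decode(codes):
    out, pred, step = [], 0, 16
    for code in codes:
        pred, step = _reconstruct(code, pred, step)
        out.append(pred)
    return out

codes = adpcm_encode([0, 500, 1200, 800, -300])
print(adpcm_decode(codes))   # coarse reconstruction of the input
```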


Crossword Game Using Speech Technology (음성기술을 이용한 십자말 게임)

  • Yu, Il-Soo;Kim, Dong-Ju;Hong, Kwang-Seok
    • The KIPS Transactions:PartB
    • /
• Vol. 10B, No. 2
    • /
    • pp.213-218
    • /
    • 2003
• In this paper, we implement a crossword game that is operated by speech. The CAA (Cross Array Algorithm) produces the crossword array randomly and automatically using a domain dictionary; we construct seven domain dictionaries for this purpose. The crossword game can be operated by mouse and keyboard and also by speech. For the speech user interface, we use a speech recognizer and a speech synthesizer, which provide a more comfortable interface to the user. The efficiency of the CAA is evaluated by measuring the processing time for producing the crossword array and the generation ratio of the array: the processing time is about 10 ms and the generation ratio is about 50%. The recognition rates were 95.5%, 97.6% and 96.2% for window sizes of $7{\times}7$, $9{\times}9$ and $11{\times}11$, respectively.
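
The paper does not publish the internals of the CAA; the sketch below is a plausible reconstruction of the core idea, placing a seed word and then randomly crossing further words at shared letters. All names and parameters are illustrative.

```python
import random

def build_cross_array(words, size=9, tries=200):
    """Toy crossword-array builder in the spirit of the CAA: place a seed
    word, then repeatedly try to cross more words at shared letters."""
    grid = [[' '] * size for _ in range(size)]
    placed = set()

    def try_place(word, r, c, dr, dc):
        # Word must stay on the grid and agree with every cell it covers.
        if not (0 <= r < size and 0 <= c < size and
                0 <= r + dr * (len(word) - 1) < size and
                0 <= c + dc * (len(word) - 1) < size):
            return False
        cells = [(r + dr * i, c + dc * i) for i in range(len(word))]
        if any(grid[y][x] not in (' ', word[i])
               for i, (y, x) in enumerate(cells)):
            return False
        for i, (y, x) in enumerate(cells):
            grid[y][x] = word[i]
        placed.add(word)
        return True

    try_place(words[0], size // 2, 0, 0, 1)     # seed word, horizontal
    for _ in range(tries):
        word = random.choice(words)
        if word in placed:
            continue
        r, c = random.randrange(size), random.randrange(size)
        if grid[r][c] in word:                  # cross at a shared letter
            i = word.index(grid[r][c])
            dr, dc = random.choice(((1, 0), (0, 1)))
            try_place(word, r - dr * i, c - dc * i, dr, dc)
    return grid, placed
```

A randomized builder of this kind can fail to place every word on some runs, which is consistent with the generation ratio of about 50% reported above.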

Study about Windows System Control Using Gesture and Speech Recognition (제스처 및 음성 인식을 이용한 윈도우 시스템 제어에 관한 연구)

• 김주홍;진성일;이남호;이용범
    • Proceedings of the IEEK Conference
    • /
• Proceedings of the 1998 IEEK Fall Conference
    • /
    • pp.1289-1292
    • /
    • 1998
• HCI (human-computer interface) technologies have often been implemented using the mouse, keyboard and joystick. Because the mouse and keyboard are usable only in limited situations, more natural HCI methods such as speech-based and gesture-based methods have recently attracted wide attention. In this paper, we present a multi-modal input system to control the Windows system for practical use of a multimedia computer. Our multi-modal input system consists of three parts. The first is a virtual-hand mouse, which replaces mouse control with a set of gestures. The second is Windows control using speech recognition. The third is Windows control using gesture recognition. We employ neural-network and HMM methods to recognize speech and gestures. The outputs of the three parts are passed to the Windows system as control commands.
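
The abstract describes the architecture only at a block level. Below is a minimal sketch, under assumptions, of the fusion step: both recognizers emit symbolic labels and a single dispatcher maps them to window-control actions. Every name here is illustrative; the paper does not publish its integration code.

```python
# The speech (HMM) and gesture (neural network) recognizers each emit a
# symbolic label; one dispatcher maps (modality, label) pairs to actions.

ACTIONS = {
    ("speech",  "open"):  lambda: print("opening window"),
    ("speech",  "close"): lambda: print("closing window"),
    ("gesture", "point"): lambda: print("moving virtual-hand cursor"),
    ("gesture", "grab"):  lambda: print("mouse button down"),
}

def dispatch(modality, label):
    action = ACTIONS.get((modality, label))
    if action is None:
        print(f"unrecognized {modality} command: {label!r}")
    else:
        action()

dispatch("speech", "open")
dispatch("gesture", "grab")
```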


A Design and Implementation of Natural User Interface System Using Kinect (키넥트를 사용한 NUI 설계 및 구현)

  • Lee, Sae-Bom;Jung, Il-Hong
    • Journal of Digital Contents Society
    • /
• Vol. 15, No. 4
    • /
    • pp.473-480
    • /
    • 2014
  • As the use of computer has been popularized these days, an active research is in progress to make much more convenient and natural interface compared to the existing user interfaces such as keyboard or mouse. For this reason, there is an increasing interest toward Microsoft's motion sensing module called Kinect, which can perform hand motions and speech recognition system in order to realize communication between people. Kinect uses its built-in sensor to recognize the main joint movements and depth of the body. It can also provide a simple speech recognition through the built-in microphone. In this paper, the goal is to use Kinect's depth value data, skeleton tracking and labeling algorithm to recognize information about the extraction and movement of hand, and replace the role of existing peripherals using a virtual mouse, a virtual keyboard, and a speech recognition.
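
For the virtual-mouse part, one common design maps a tracked hand joint into screen coordinates through a normalized "interaction box". The sketch below shows that mapping under assumed box bounds and screen size; the paper's actual parameters are not given.

```python
def hand_to_cursor(hand_x, hand_y, screen_w=1920, screen_h=1080,
                   box=(-0.3, 0.3, 0.0, 0.5)):
    """Map skeleton-space hand coordinates (metres, relative to the
    sensor) into pixel coordinates by normalizing over an interaction
    box given as (left, right, bottom, top). Values are illustrative."""
    left, right, bottom, top = box
    nx = (hand_x - left) / (right - left)     # 0..1 across the box
    ny = (top - hand_y) / (top - bottom)      # invert: screen y grows down
    nx = min(max(nx, 0.0), 1.0)               # clamp to the box edges
    ny = min(max(ny, 0.0), 1.0)
    return int(nx * (screen_w - 1)), int(ny * (screen_h - 1))

print(hand_to_cursor(0.0, 0.25))   # roughly the centre of the screen
```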

Development of Ambulatory Speech Audiometric System (휴대용 어음청력검사 시스템 구현)

  • Shin, Seung-Won;Kim, Kyeong-Seop;Lee, Sang-Min;Im, Won-Jin;Lee, Jeong-Whan;Kim, Dong-Jun
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
• Vol. 58, No. 3
    • /
    • pp.645-654
    • /
    • 2009
• In this study, we present an efficient ambulatory speech audiometric system for detecting hearing problems at as early a stage as possible without a visit to an audiometric testing facility such as a hospital or clinic. To estimate a person's hearing threshold level for speech sounds in his or her own environment, a personal digital assistant (PDA) is used to generate the speech sounds, with an audiometric graphical user interface (GUI) implemented on the device. Furthermore, a supra-aural earphone is used to measure the subject's speech hearing threshold level, with the transducer's gain compensated by a speech sound calibration system.
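
The abstract does not specify how the hearing threshold is searched. A common choice in audiometry is a simple up-down (staircase) adaptive procedure, sketched below with illustrative step sizes and stopping rules that are not taken from the paper.

```python
import random

def staircase_threshold(respond, start_db=40, step_db=5, reversals_needed=6):
    """Simple 1-up/1-down staircase: drop the level after each correct
    response, raise it after each miss; estimate the threshold as the
    mean level at the final reversals. `respond(level)` returns True
    when the listener repeats the speech item correctly."""
    level, direction = start_db, -1
    reversal_levels = []
    while len(reversal_levels) < reversals_needed:
        correct = respond(level)
        new_dir = -1 if correct else +1
        if new_dir != direction:              # response pattern reversed
            reversal_levels.append(level)
            direction = new_dir
        level += new_dir * step_db
    return sum(reversal_levels[-4:]) / 4      # average the last 4 reversals

# Example with a simulated listener whose true threshold is ~25 dB HL.
est = staircase_threshold(lambda lv: lv >= 25 or random.random() < 0.1)
print(round(est, 1))
```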

MLLR-Based Environment Adaptation for Distant-Talking Speech Recognition (원거리 음성인식을 위한 MLLR적응기법 적용)

  • Kwon, Suk-Bong;Ji, Mi-Kyong;Kim, Hoi-Rin;Lee, Yong-Ju
    • MALSORI
    • /
• No. 53
    • /
    • pp.119-127
    • /
    • 2005
• Speech recognition is one of the user interface technologies for commanding and controlling terminals such as TVs, PCs and cellular phones in a ubiquitous environment. In controlling a terminal, any mismatch between training and testing conditions causes rapid performance degradation: the mismatch decreases not only the performance of the recognition system but also its reliability. Therefore, the performance degradation due to environmental mismatch must be compensated. Whenever the environment changes, environment adaptation is performed using the user's speech and the background noise of the new environment, and performance is improved by employing models appropriately transformed to that environment. Research on environment compensation has been active, but a compensation method for the effect of distant-talking speech has not yet been developed. Thus, in this paper we apply MLLR-based environment adaptation to compensate for the effect of distant-talking speech, and the performance is improved.
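
MLLR adapts the HMM Gaussian means with a shared affine transform, $\hat{\mu} = A\mu + b$, estimated from a small amount of adaptation speech. The numpy sketch below covers only the simplest special case (one global transform, identity covariances, hard frame-to-Gaussian alignment), where the maximum-likelihood estimate reduces to least squares; full MLLR weights the statistics by state occupancy and inverse covariances.

```python
import numpy as np

def estimate_mllr(means, frames, assign):
    """Estimate a single global MLLR mean transform W = [A b] so that
    adapted_mean = A @ mu + b.

    means  : (G, D) original Gaussian means
    frames : (T, D) adaptation feature vectors
    assign : (T,)   index of the Gaussian each frame is aligned to
    """
    # Extended means [mu; 1] so the bias b is estimated jointly with A.
    X = np.hstack([means[assign], np.ones((len(frames), 1))])   # (T, D+1)
    W, *_ = np.linalg.lstsq(X, frames, rcond=None)              # (D+1, D)
    return W.T                                                  # (D, D+1)

def adapt_means(means, W):
    """Apply the transform to every Gaussian mean in the model."""
    ext = np.hstack([means, np.ones((len(means), 1))])          # (G, D+1)
    return ext @ W.T                                            # (G, D)
```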


Design and Implementation of Speech-Training System for Voice Disorders (발성장애아동을 위한 발성훈련시스템 설계 및 구현)

  • 정은순;김봉완;양옥렬;이용주
    • Journal of Internet Computing and Services
    • /
• Vol. 2, No. 1
    • /
    • pp.97-106
    • /
    • 2001
• In this paper, we design and implement a computer-based speech-training system for children with voice disorders. The system consists of three levels of training: precedent training, training for speech apprehension, and training for speech enhancement. To analyze the speech of children with voice disorders, we extract speech features such as loudness, amplitude and pitch using digital signal processing techniques. The extracted features are rendered in a graphical interface so that the system can give visual feedback on the child's speech.
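
The features named in the abstract (loudness, pitch) can be computed per frame with standard signal-processing steps. The sketch below uses RMS energy as a loudness proxy and an autocorrelation pitch search; the sampling rate and search range are illustrative choices, not the paper's.

```python
import numpy as np

def frame_features(frame, sr=16000, fmin=80, fmax=400):
    """Per-frame loudness (RMS energy) and pitch via autocorrelation,
    the kind of features a visual-feedback display could plot."""
    rms = float(np.sqrt(np.mean(frame ** 2)))           # loudness proxy
    # Autocorrelation pitch search restricted to a plausible F0 range.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    pitch_hz = sr / lag if ac[lag] > 0 else 0.0         # 0 = unvoiced-ish
    return rms, pitch_hz

sr = 16000
t = np.arange(1024) / sr
print(frame_features(np.sin(2 * np.pi * 150 * t), sr))  # ~150 Hz tone
```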


Implementation of Hidden Markov Model based Speech Recognition System for Teaching Autonomous Mobile Robot (자율이동로봇의 명령 교시를 위한 HMM 기반 음성인식시스템의 구현)

  • 조현수;박민규;이민철
• Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
• Proceedings of the 15th Annual Conference of ICROS, 2000
    • /
    • pp.281-281
    • /
    • 2000
• This paper presents an implementation of a speech recognition system for teaching an autonomous mobile robot. Using human speech as the teaching method provides a more convenient user interface for the mobile robot. In this study, to make teaching easy, an autonomous mobile robot with a speech recognition function is developed. In the speech recognition system, an HMM (Hidden Markov Model) based algorithm is used to recognize Korean words. A filter-bank analysis model is used as the spectral analysis method for feature extraction. A recognized word is converted into a command for controlling the robot's navigation.
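
Filter-bank analysis, named here as the spectral front end, is commonly realized as log mel filter-bank energies. The sketch below shows a standard formulation; the band count and layout are common defaults, not the paper's exact configuration.

```python
import numpy as np

def log_mel_filterbank(frame, sr=16000, n_filters=20, n_fft=512):
    """Log mel filter-bank energies for one windowed frame."""
    mag = np.abs(np.fft.rfft(frame, n_fft))   # magnitude spectrum

    def to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def to_hz(m):  return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    # Triangular filters equally spaced on the mel scale up to Nyquist.
    mel_pts = np.linspace(0.0, to_mel(sr / 2), n_filters + 2)
    bins = np.floor(to_hz(mel_pts) / (sr / 2) * (n_fft // 2)).astype(int)

    energies = np.zeros(n_filters)
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        for k in range(l, r):
            # Rising slope up to the centre bin, falling slope after it.
            w = (k - l) / max(c - l, 1) if k < c else (r - k) / max(r - c, 1)
            energies[i] += w * mag[k]
    return np.log(energies + 1e-10)           # floor avoids log(0)
```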


Speech Interactive Agent on Car Navigation System Using Embedded ASR/DSR/TTS

  • Lee, Heung-Kyu;Kwon, Oh-Il;Ko, Han-Seok
    • Speech Sciences
    • /
• Vol. 11, No. 2
    • /
    • pp.181-192
    • /
    • 2004
• This paper presents an efficient speech interactive agent rendering smooth car navigation and Telematics services, employing embedded automatic speech recognition (ASR), distributed speech recognition (DSR) and text-to-speech (TTS) modules, all while enabling safe driving. A speech interactive agent is essentially a conversational tool providing command and control functions to drivers, such as enabling the navigation task, audio/video manipulation, and e-commerce services through natural voice/response interactions between the user and the interface. While the benefits of automatic speech recognition and speech synthesis are well known, the available hardware resources are often limited and the internal communication protocols are too complex to achieve real-time responses; as a result, performance degradation always exists in an embedded hardware system. To implement a speech interactive agent that meets user commands in real time, we propose to optimize the hardware-dependent architectural code for speed. In particular, we provide a composite solution through memory reconfiguration and efficient arithmetic-operation conversion, as well as an effective out-of-vocabulary rejection algorithm, all made suitable for system operation under limited resources.
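
The out-of-vocabulary rejection is described only at a high level. A common scheme scores the best in-vocabulary hypothesis against a filler ("garbage") model and rejects when the per-frame log-likelihood ratio falls below a tuned threshold; the sketch below uses illustrative numbers, not values from the paper.

```python
def accept_hypothesis(word_logprob, filler_logprob, n_frames, threshold=0.5):
    """Confidence test for out-of-vocabulary rejection: compare the best
    in-vocabulary hypothesis score to a filler ("garbage") model score,
    normalized per frame. Real systems tune the threshold on held-out
    in/out-of-vocabulary data."""
    llr = (word_logprob - filler_logprob) / max(n_frames, 1)
    return llr > threshold

# Example: a strong in-vocabulary match vs. a marginal one.
print(accept_hypothesis(-480.0, -560.0, 100))   # True  (llr = 0.8)
print(accept_hypothesis(-540.0, -560.0, 100))   # False (llr = 0.2)
```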
