• Title/Abstract/Keyword: Speech recognition interface

Search results: 125 items (processing time: 0.021 s)

자동차 잡음 및 오디오 출력신호가 존재하는 자동차 실내 환경에서의 강인한 음성인식 (Robust Speech Recognition in the Car Interior Environment having Car Noise and Audio Output)

  • 박철호;배재철;배건성
    • 대한음성학회지:말소리 / no. 62 / pp.85-96 / 2007
  • In this paper, we carried out recognition experiments on noisy speech containing various levels of car noise and audio-system output, using the proposed speech interface. The speech interface consists of three parts: pre-processing, an acoustic echo canceller, and post-processing. First, a high-pass filter is employed in the pre-processing stage to remove some engine noise. Then, an echo canceller, implemented as an FIR-type filter with the NLMS adaptive algorithm, removes the music or speech coming from the car's audio system. Finally, the MMSE-STSA based speech enhancement method is applied to the output of the echo canceller to further suppress the residual noise. For the recognition experiments, we generated test signals by adding music to the noisy car speech of the Aurora 2 database. An HTK-based continuous HMM system was constructed as the recognizer. Experimental results show that the proposed speech interface is very promising for robust speech recognition in a noisy car environment.


Windows95 환경에서의 음성 인터페이스 구현 (Implementation of speech interface for windows 95)

  • 한영원;배건성
    • 전자공학회논문지S / vol. 34S, no. 5 / pp.86-93 / 1997
  • With recent developments in speech recognition technology and multimedia computer systems, more potential applications of voice will become a reality. In this paper, we implement a speech interface in the Windows 95 environment for the practical use of multimedia computers with voice. The speech interface is made up of three modules: a speech input and detection module, a speech recognition module, and an application module. The speech input and detection module handles the low-level audio services of the Win32 API to capture speech data in real time. The recognition module processes the incoming speech data and then recognizes the spoken command; DTW pattern matching is used for recognition. The application module executes the voice command on the PC. Each module of the speech interface is designed and examined in the Windows 95 environment. The implemented speech interface and experimental results are explained and discussed.

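The DTW pattern matching mentioned above finds the cheapest monotonic alignment between a test feature sequence and each stored command template, then picks the closest template. A compact sketch, using 1-D sequences and an absolute-difference local cost for readability (a real recognizer would compare frame-wise feature vectors such as MFCCs):

```python
# Dynamic time warping distance and a nearest-template recognizer.

def dtw_distance(a, b):
    """Alignment cost between two 1-D sequences under |x - y| local cost."""
    INF = float("inf")
    n, m = len(a), len(b)
    # D[i][j] = best cost aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                 D[i][j - 1],      # stretch b
                                 D[i - 1][j - 1])  # step both
    return D[n][m]

def recognize(test, templates):
    """Pick the command template with the smallest DTW distance."""
    return min(templates, key=lambda name: dtw_distance(test, templates[name]))
```

Because the warping path may repeat samples of either sequence, a command spoken faster or slower than its template still aligns with low cost.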

음성인식용 인터페이스의 사용편의성 평가 방법론 (A Usability Evaluation Method for Speech Recognition Interfaces)

  • 한성호;김범수
    • 대한인간공학회지 / vol. 18, no. 3 / pp.105-125 / 1999
  • As speech is the human being's most natural communication medium, using it offers many advantages. Currently, most computer user interfaces are of the mouse/keyboard type, but interfaces using speech recognition are expected to replace them, or at least to serve as a supporting tool. Despite its advantages, the speech recognition interface is not yet popular because of technical difficulties such as limited recognition accuracy and slow response time. Nevertheless, it is important to optimize human-computer system performance by improving usability. This paper presents a set of guidelines for designing speech recognition interfaces and provides a method for evaluating their usability. A total of 113 guidelines are suggested to improve the usability of speech recognition interfaces. The evaluation method consists of four major procedures: user interface evaluation, function evaluation, vocabulary estimation, and recognition speed/accuracy evaluation. Each procedure is described along with proper techniques for efficient evaluation.

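The paper does not fix a metric for its recognition speed/accuracy procedure; the conventional choice for such an evaluation is word error rate (WER), computed from the Levenshtein alignment between a reference transcript and the recognizer output. A sketch of that standard metric:

```python
# Word error rate: edit distance between reference and hypothesis word
# sequences, normalized by the reference length.

def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = min edits turning ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```

Averaging this over a test corpus, together with response-time measurements, yields the kind of speed/accuracy figures the evaluation procedure calls for.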

음성기반 멀티모달 사용자 인터페이스의 사용성 평가 방법론 (Usability Test Guidelines for Speech-Oriented Multimodal User Interface)

  • 홍기형
    • 대한음성학회지:말소리 / no. 67 / pp.103-120 / 2008
  • Basic components of a multimodal interface, such as speech recognition, speech synthesis, gesture recognition, and multimodal fusion, have their own technological limitations. For example, the accuracy of speech recognition decreases for large vocabularies and in noisy environments. In spite of these limitations, there are many applications in which speech-oriented multimodal user interfaces are very helpful to users. However, in order to expand the application areas of speech-oriented multimodal interfaces, the interfaces must be developed with a focus on usability. In this paper, we introduce usability and user-centered design methodology in general. There has been much work on evaluating spoken dialogue systems; we give a summary of PARADISE (PARAdigm for DIalogue System Evaluation) and PROMISE (PROcedure for Multimodal Interactive System Evaluation), two generalized evaluation frameworks for voice and multimodal user interfaces. We then present usability components for speech-oriented multimodal user interfaces and usability testing guidelines that can be used in a user-centered multimodal interface design process.

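The PARADISE framework mentioned above scores a dialogue system as performance = α·N(κ) − Σᵢ wᵢ·N(cᵢ): task success κ weighed against normalized dialogue costs cᵢ (turns, elapsed time, errors, ...). In PARADISE proper, α and the wᵢ are fitted by linear regression against user satisfaction ratings; the sketch below simply evaluates the function with made-up weights to show its shape.

```python
# Sketch of the PARADISE performance function:
#     performance = alpha * N(kappa) - sum_i w_i * N(c_i)
# N is a z-score normalization over the evaluation corpus.
# alpha and the weights here are illustrative, not fitted values.

def z_normalize(value, mean, std):
    return (value - mean) / std if std else 0.0

def paradise_performance(kappa, costs, stats, alpha=0.5, weights=None):
    """costs: dict of cost name -> value for one dialogue.
    stats: dict of measure name -> (mean, std) over the corpus."""
    weights = weights or {name: 0.25 for name in costs}
    score = alpha * z_normalize(kappa, *stats["kappa"])
    for name, c in costs.items():
        score -= weights[name] * z_normalize(c, *stats[name])
    return score
```

A dialogue that completes its task (high κ) in few turns scores above one that fails or drags on, which is exactly the trade-off the framework is designed to quantify.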

통신환경에서 음성인식 인터페이스 (Speech Recognition Interface in the Communication Environment)

  • 한태근;김종근;이동욱
    • 대한전기학회:학술대회논문집 / Proceedings of the 2001 KIEE Summer Conference D / pp.2610-2612 / 2001
  • This study examines the recognition of a user's spoken commands based on speech recognition and natural language processing, and develops a natural language interface agent that can analyze the recognized commands. The natural language interface agent consists of a speech recognizer and a semantic interpreter. The speech recognizer takes the spoken command and transforms it into character strings. The semantic interpreter analyzes the character strings and creates the commands and questions to be transferred to the application program. We also consider problems related to these two components, such as the ambiguity of natural language and the ambiguity of, and errors from, the speech recognizer. This kind of natural language interface agent can be applied to telephony environments involving all kinds of communication media, such as telephone, fax, e-mail, and so on.

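The two-stage pipeline above (recognizer emits character strings, semantic interpreter turns them into application commands) can be sketched as follows. This is a hypothetical toy interpreter, not the paper's implementation: the keyword tables and command names are invented, and its only ambiguity handling is refusing to act when the parse is not unique.

```python
# Toy semantic interpreter for a telephony voice agent: map a recognized
# string to an (action, medium) command, and flag ambiguous or
# unparseable input instead of guessing.

ACTIONS = {"send": "SEND", "read": "READ", "delete": "DELETE"}
MEDIA = {"mail": "EMAIL", "fax": "FAX", "phone": "PHONE"}

def interpret(recognized_text):
    words = recognized_text.lower().split()
    actions = [ACTIONS[w] for w in words if w in ACTIONS]
    media = [MEDIA[w] for w in words if w in MEDIA]
    if len(actions) != 1 or len(media) != 1:
        # ambiguity or a recognition error: ask the user to clarify
        return {"status": "CLARIFY", "command": None}
    return {"status": "OK", "command": (actions[0], media[0])}
```

Returning a CLARIFY status rather than a best guess is one simple way to cope with the recognizer errors and natural-language ambiguity the paper discusses.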

Development of a Work Management System Based on Speech and Speaker Recognition

  • Gaybulayev, Abdulaziz;Yunusov, Jahongir;Kim, Tae-Hyong
    • 대한임베디드공학회논문지 / vol. 16, no. 3 / pp.89-97 / 2021
  • A voice interface can not only make daily life more convenient through artificial-intelligence speakers but also improve the working environment of a factory. This paper presents a voice-assisted work management system that supports both speech and speaker recognition, providing machine control and authentication of authorized workers by voice at the same time. We applied two speech recognition methods: Google's Speech application programming interface (API) service and the DeepSpeech speech-to-text engine. For worker identification, the SincNet architecture for speaker recognition was adopted. We implemented a prototype of the work management system that provides voice control with 26 commands and identifies 100 workers by voice. Worker identification using our model was almost perfect, and the command recognition accuracy was 97.0% with the Google API after post-processing and 92.0% with our DeepSpeech model.
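The abstract reports command accuracy "after post-processing" without describing the step. One plausible form of post-processing for a closed set of commands is snapping the raw speech-to-text transcript onto the closest entry of the command vocabulary by edit distance; the sketch below illustrates that guess (the command list is invented; the real system has 26 commands).

```python
# Hypothetical post-processing for a fixed command vocabulary: map a raw
# transcript to the nearest known command by character edit distance.

def edit_distance(a, b):
    """Levenshtein distance with a single rolling row."""
    d = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, cb in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1,            # deletion
                                   d[j - 1] + 1,        # insertion
                                   prev + (ca != cb))   # substitution
    return d[len(b)]

def snap_to_command(transcript, commands):
    return min(commands, key=lambda c: edit_distance(transcript, c))
```

With a small closed vocabulary, this kind of snapping absorbs minor transcription errors, which is consistent with the accuracy gain the paper reports for the Google API after post-processing.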

An Experimental Study on Barging-In Effects for Speech Recognition Using Three Telephone Interface Boards

  • Park, Sung-Joon;Kim, Ho-Kyoung;Koo, Myoung-Wan
    • 음성과학 / vol. 8, no. 1 / pp.159-165 / 2001
  • In this paper, we experiment with speech recognition on barging-in and non-barging-in utterances. Barging-in capability, which lets users speak voice commands while a voice announcement is still playing, is one of the important elements of practical speech recognition systems. It can be realized by echo cancellation techniques based on the LMS (least-mean-square) algorithm. We use three kinds of telephone interface boards with barging-in capability, made by Dialogic, Natural MicroSystems, and Korea Telecom, respectively. A speech database was collected using these three boards, and a comparative recognition experiment was carried out on it.


자동차 텔레매틱스용 내장형 음성 HMI시스템 (The Human-Machine Interface System with the Embedded Speech recognition for the telematics of the automobiles)

  • 권오일
    • 전자공학회논문지CI / vol. 41, no. 2 / pp.1-8 / 2004
  • Speech HMI (Human Machine Interface) technology for automobile telematics includes the development of a DSP system for speech-HMI-based telematics that integrates embedded speech technology robust to the in-vehicle noise environment, so that speech information technology can be used inside the car. Based on the embedded speech recognition engine developed here, this paper implements the telematics DSP system for integrated testing. By integrating the component technologies of automotive speech HMI, this work is central to the development of speech HMI technology for automobiles.

자동차 환경내의 음성인식 자동 평가 플랫폼 연구 (A Study of Automatic Evaluation Platform for Speech Recognition Engine in the Vehicle Environment)

  • 이성재;강선미
    • 한국통신학회논문지 / vol. 37, no. 7C / pp.538-543 / 2012
  • The performance of the speech recognizer is the most important part of an in-vehicle speech interface used while driving. This paper describes the development of a platform that automates the performance evaluation of in-vehicle speech recognizers. The developed platform consists of a main program, a relay program, database management, and a statistics module. For the evaluation, a simulation environment reflecting actual driving conditions was built, and experiments were conducted by feeding pre-recorded driving noise and a speaker's voice through a microphone. The experimental results demonstrated the validity of the recognition results obtained on the proposed platform. With the proposed platform, users can evaluate in-vehicle speech recognizers effectively by automating recognition tests, managing recognition results efficiently, and computing statistics.
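The evaluation loop such a platform automates can be sketched as: mix each pre-recorded utterance with driving noise, run the recognizer, and accumulate per-run accuracy statistics. The recognizer below is a stand-in stub and the mixing is simple sample addition; the real platform drives an actual recognition engine through its relay program.

```python
# Toy automated-evaluation loop: mix utterances with noise, recognize,
# and tally accuracy statistics. 'recognize' and 'mix' are injected so
# the harness stays independent of any particular engine.

def evaluate(test_set, recognize, mix):
    """test_set: list of (utterance, noise, expected_label) tuples."""
    stats = {"total": 0, "correct": 0}
    for utterance, noise, expected in test_set:
        noisy = mix(utterance, noise)
        stats["total"] += 1
        if recognize(noisy) == expected:
            stats["correct"] += 1
    stats["accuracy"] = stats["correct"] / stats["total"]
    return stats
```

Keeping the engine behind a function argument is what makes the evaluation repeatable across noise conditions and recognizer versions, which is the point of automating it.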

착용형 단말에서의 음성 인식과 제스처 인식을 융합한 멀티 모달 사용자 인터페이스 설계 (Design of Multimodal User Interface using Speech and Gesture Recognition for Wearable Watch Platform)

  • 성기은;박유진;강순주
    • 정보과학회 컴퓨팅의 실제 논문지 / vol. 21, no. 6 / pp.418-423 / 2015
  • With advances in technology, the functions of wearable devices are becoming more diverse and complex, and ordinary users sometimes find them difficult to use. The goal of this paper is to provide users with a convenient and simple interface. Speech recognition is intuitive and easy to use from the user's perspective and allows a wide range of commands to be entered. However, using speech recognition on a wearable device faces hardware constraints such as limited computing power and battery capacity. Moreover, the device cannot know in advance when the user will issue a voice command, so the recognizer would have to run at all times to catch commands, which the power-consumption problem makes impractical. To compensate for these shortcomings of speech recognition, gesture recognition is used. This paper describes how a multimodal interface combining speech and gestures can provide users with a convenient interface.
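The power-saving pattern the paper argues for (the recognizer stays off until a gesture wakes it, then returns to idle after a timeout) can be sketched as a small state machine. The states, the wake gesture, and the timeout value below are illustrative, not the paper's design.

```python
# Gesture-gated speech recognition front end: a gesture event powers up
# the recognizer, which drops back to idle after a timeout so it never
# runs continuously.

IDLE, LISTENING = "IDLE", "LISTENING"

class MultimodalFrontend:
    def __init__(self, timeout=5.0):
        self.state = IDLE
        self.timeout = timeout
        self.listen_started = None

    def on_gesture(self, now):
        # e.g. a wrist-raise gesture wakes the recognizer
        self.state = LISTENING
        self.listen_started = now

    def on_tick(self, now):
        # return to idle if no command arrives within the timeout
        if self.state == LISTENING and now - self.listen_started >= self.timeout:
            self.state = IDLE

    def on_speech(self, command):
        # commands are only accepted while the recognizer is awake
        if self.state != LISTENING:
            return None
        self.state = IDLE
        return command
```

Because the always-on part of the system is only the cheap gesture sensor, the expensive speech pipeline runs for short, user-initiated windows, which is the trade-off the paper's multimodal design exploits.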