• Title/Summary/Keyword: Speech recognition interface


Speaker Separation Based on Directional Filter and Harmonic Filter (Directional Filter와 Harmonic Filter 기반 화자 분리)

  • Baek, Seung-Eun;Kim, Jin-Young;Na, Seung-You;Choi, Seung-Ho
    • Speech Sciences
    • /
    • v.12 no.3
    • /
    • pp.125-136
    • /
    • 2005
  • Automatic speech recognition is much more difficult in the real world. Recognition performance degrades with the SIR (Signal to Interference Ratio) in situations where environmental noise and multiple speakers are present. Therefore, extracting the main speaker's voice from binaural sound is an important field in speech signal processing. In this paper, we use a directional filter and a harmonic filter, among existing methods, to extract the main speaker's information from binaural sound. The main speaker's voice is extracted using the directional filter, and the remaining speakers' information is removed by the harmonic filter after detecting the main speaker's pitch. As a result, the main speaker's voice is enhanced.

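The harmonic-filter step described above can be illustrated as a frequency-domain comb filter that keeps only spectral components near multiples of the detected pitch. The sketch below is a minimal assumed realization, not the paper's actual filter design; `f0`, `n_harmonics`, and `bandwidth` are illustrative parameters.

```python
import numpy as np

def harmonic_comb_filter(signal, sr, f0, n_harmonics=10, bandwidth=20.0):
    """Crude frequency-domain comb filter: attenuate everything except
    narrow Gaussian bands around the first n_harmonics multiples of f0."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    mask = np.zeros_like(freqs)
    for k in range(1, n_harmonics + 1):
        mask += np.exp(-0.5 * ((freqs - k * f0) / bandwidth) ** 2)
    return np.fft.irfft(spectrum * np.clip(mask, 0.0, 1.0), n=len(signal))
```

With the main speaker's pitch at 200 Hz, an interfering tone at 330 Hz falls between harmonics and is strongly attenuated, while the 200 Hz component passes through.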

Implementation of Real-time Vowel Recognition Mouse based on Smartphone (스마트폰 기반의 실시간 모음 인식 마우스 구현)

  • Jang, Taeung;Kim, Hyeonyong;Kim, Byeongman;Chung, Hae
    • KIISE Transactions on Computing Practices
    • /
    • v.21 no.8
    • /
    • pp.531-536
    • /
    • 2015
  • Speech recognition is an active research area in human-computer interaction (HCI). The objective of this study is to control digital devices by voice. The mouse is a widely used computer peripheral in graphical user interface (GUI) computing environments. In this paper, we propose a method of controlling the mouse with the real-time speech recognition function of a smartphone. The processing steps are: extract the core voice signal after receiving a voice input of suitable length in real time, quantize it using a learned codebook after feature extraction with mel-frequency cepstral coefficients (MFCC), and finally recognize the corresponding vowel using a hidden Markov model (HMM). A virtual mouse is then operated by mapping each vowel to a mouse command. Finally, we demonstrate various mouse operations on a desktop PC display with the implemented smartphone application.
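The quantization step in the pipeline above, mapping each MFCC frame to its nearest codebook entry before HMM decoding, can be sketched as a nearest-centroid lookup. This is a minimal illustrative implementation; codebook training and the MFCC front end are omitted.

```python
import numpy as np

def quantize(features, codebook):
    """Map each feature frame (e.g. an MFCC vector) to the index of the
    nearest codebook vector, producing the discrete symbol sequence
    consumed by a discrete HMM."""
    # features: (n_frames, dim); codebook: (n_codes, dim)
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=-1)
    return np.argmin(dists, axis=1)
```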

Automatic Human Emotion Recognition from Speech and Face Display - A New Approach (인간의 언어와 얼굴 표정에 통하여 자동적으로 감정 인식 시스템 새로운 접근법)

  • Luong, Dinh Dong;Lee, Young-Koo;Lee, Sung-Young
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2011.06b
    • /
    • pp.231-234
    • /
    • 2011
  • Audiovisual human emotion recognition can be considered a good approach for multimodal human-computer interaction. However, optimal multimodal information fusion remains a challenge. In order to overcome these limitations and bring robustness to the interface, we propose a framework for automatic human emotion recognition from speech and face display. In this paper, we develop a new approach to model-level information fusion, based on the relationship between speech and facial expression, to detect temporal segments automatically and perform multimodal information fusion.

Voice Portal based on SMS Authentication at CTI Module Implementation by Speech Recognition (SMS 인증 기반의 보이스포탈에서의 음성인식을 위한 CTI 모듈 구현)

  • Oh, Se-Il;Kim, Bong-Hyun;Koh, Jin-Hwan;Park, Won-Tea
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2001.04b
    • /
    • pp.1177-1180
    • /
    • 2001
  • Voice Portal services, which allow users to listen to Internet information over the telephone, are gaining popularity. In a Voice Portal service, the user speaks a request to a speech recognition system and hears the desired information over the phone. The service requires an SMS (Short Message Service) server module that performs authentication, a CTI (Computer Telephony Integration) module that provides the interface between the PSTN and the database server, a VoiceXML module between the CTI server and the WWW (World Wide Web), and a searching module for retrieving information. This paper implements the design of a CTI module based on speech recognition technology. In addition, by adopting SMS authentication based on random one-time passwords, it aims to provide a more secure service.


Development of an Autonomous Mobile Robot with Functions of Speech Recognition and Collision Avoidance

  • Park, Min-Gyu;Lee, Min-Cheol
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2000.10a
    • /
    • pp.475-475
    • /
    • 2000
  • This paper describes the construction of an autonomous mobile robot with collision avoidance and speech recognition functions, the latter used for teaching the robot's path. The human voice as a teaching method provides a more convenient user interface to the mobile robot. For safe navigation, the autonomous mobile robot needs the ability to recognize its surrounding environment and avoid collisions. We use ultrasonic sensors to obtain the distance from the mobile robot to various obstacles. Using a navigation algorithm, the robot forecasts the possibility of collision with obstacles and modifies its path if it detects dangerous obstacles. For these functions, the robot system is composed of four separate control modules: a speech recognition module, a servo motor control module, an ultrasonic sensor module, and a main control module. These modules are integrated by a CAN (controller area network) to provide real-time communication.

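The collision-forecasting step can be illustrated with a minimal threshold check over the ultrasonic range readings. This is an assumed simplification, not the paper's navigation algorithm: the 50 cm safety threshold and the list-of-readings interface are illustrative.

```python
def collision_risk_directions(distances_cm, safe_cm=50.0):
    """Return the indices of sensor directions whose ultrasonic range
    reading falls below the safety threshold, i.e. directions in which
    the robot should treat an obstacle as dangerous and replan."""
    return [i for i, d in enumerate(distances_cm) if d < safe_cm]
```

In a real system these readings would arrive over CAN from the ultrasonic sensor module, and the main control module would use the flagged directions to modify the path.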

Emotion Recognition Method from Speech Signal Using the Wavelet Transform (웨이블렛 변환을 이용한 음성에서의 감정 추출 및 인식 기법)

  • Go, Hyoun-Joo;Lee, Dae-Jong;Park, Jang-Hwan;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.14 no.2
    • /
    • pp.150-155
    • /
    • 2004
  • In this paper, an emotion recognition method using the speech signal is presented. Six basic human emotions are investigated: happiness, sadness, anger, surprise, fear, and dislike. The proposed recognizer has a codebook for each emotional state, constructed using the wavelet transform. We first estimate the emotional state in each filterbank, and the final recognition is obtained from a multi-decision scheme. The database consists of 360 emotional utterances from twenty speakers, each uttering a sentence three times for the six emotional states. The proposed method showed more than a 5% improvement in recognition rate over previous works.
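The multi-decision scheme, which combines the per-filterbank verdicts into one final label, can be realized in its simplest form as a majority vote. This is a sketch of one plausible combination rule; the paper's exact scheme may weight the bands differently.

```python
from collections import Counter

def multi_decision(band_labels):
    """Majority vote over the emotion labels produced independently at
    each filterbank; the most frequent label wins."""
    return Counter(band_labels).most_common(1)[0][0]
```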

Speaker Identification in Small Training Data Environment using MLLR Adaptation Method (MLLR 화자적응 기법을 이용한 적은 학습자료 환경의 화자식별)

  • Kim, Se-hyun;Oh, Yung-Hwan
    • Proceedings of the KSPS conference
    • /
    • 2005.11a
    • /
    • pp.159-162
    • /
    • 2005
  • Speaker identification is the process of automatically determining who is speaking on the basis of information obtained from speech waves. In the training phase, each speaker's model is trained using that speaker's speech data. GMMs (Gaussian Mixture Models), which have been successfully applied to speaker modeling in text-independent speaker identification, are not efficient when training data is insufficient. This paper proposes a speaker modeling method using MLLR (Maximum Likelihood Linear Regression), which is used for speaker adaptation in speech recognition. We build an SD-like model using MLLR adaptation instead of a speaker-dependent (SD) model. The proposed system outperforms GMMs in small training data environments.

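The core of MLLR adaptation is a shared affine transform of the Gaussian means, mu' = A·mu + b. Applying an already-estimated transform can be sketched as below, using the usual extended-vector convention W = [b | A]; the maximum-likelihood estimation of W from the adaptation data (the expensive part) is omitted, and the shapes are illustrative.

```python
import numpy as np

def mllr_adapt_means(means, W):
    """Apply one MLLR regression-class transform W = [b | A] to a stack
    of Gaussian mean vectors: mu' = A @ mu + b for each row of `means`."""
    # means: (n_gaussians, d); W: (d, d + 1)
    ext = np.hstack([np.ones((means.shape[0], 1)), means])  # rows [1, mu]
    return ext @ W.T
```

Because one W is shared across many Gaussians, only d·(d+1) parameters need estimating from the small amount of enrollment speech, which is why this yields a usable SD-like model where full SD training would overfit.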

Crossword Game Using Speech Technology (음성기술을 이용한 십자말 게임)

  • Yu, Il-Soo;Kim, Dong-Ju;Hong, Kwang-Seok
    • The KIPS Transactions:PartB
    • /
    • v.10B no.2
    • /
    • pp.213-218
    • /
    • 2003
  • In this paper, we implement a crossword game operated by speech. The CAA (Cross Array Algorithm) produces the crossword array randomly and automatically using a domain dictionary. For producing the crossword array, we construct seven domain dictionaries. The crossword game can be operated by mouse and keyboard and also by speech. For the speech user interface, we use a speech recognizer and a speech synthesizer, which provide a more comfortable interface to the user. The efficiency of the CAA is evaluated by measuring the processing time for producing the crossword array and the generation ratio of the crossword array. The processing time is about 10 ms and the generation ratio of the crossword array is about 50%. Also, the recognition rates were 95.5%, 97.6%, and 96.2% for the window sizes of $7{\times}7$, $9{\times}9$, and $11{\times}11$, respectively.

Interactive Game Designed for Early Child using Multimedia Interface : Physical Activities (멀티미디어 인터페이스 기술을 이용한 유아 대상의 체감형 게임 설계 : 신체 놀이 활동 중심)

  • Won, Hye-Min;Lee, Kyoung-Mi
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.3
    • /
    • pp.116-127
    • /
    • 2011
  • This paper proposes interactive game elements for children: contents, design, sound, gesture recognition, and speech recognition. Interactive games for young children must use contents that reflect educational needs and design elements that are bright, friendly, and simple to use. The games should also use background music that is familiar to children and narration that makes the games easy to play. For gesture and speech recognition, the games must use gesture and voice data appropriate for the age of the game user. This paper also introduces the development process for an interactive skipping game and applies child-oriented contents, gestures, and voices to the game.

Development of a Voice User Interface for Web Browser using VoiceXML (VoiceXML을 이용한 VUI 지원 웹브라우저 개발)

  • Yea SangHoo;Jang MinSeok
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.11 no.2
    • /
    • pp.101-111
    • /
    • 2005
  • Current web information is mainly described in HTML, which users access through input devices such as the mouse and keyboard. Thus the existing GUI environment does not support the most natural human means of information acquisition: voice. To solve this problem, several vendors are developing voice user interfaces, but these products are deficient in man-machine interactivity and in their accommodation of the existing web environment. This paper presents a web browser supporting a VUI (Voice User Interface) by utilizing maturing speech recognition technology and VoiceXML, a markup language derived from XML. It provides users with both interfaces, VUI as well as GUI. In addition, XML Island technology is applied to the browser so that VoiceXML fragments are nested in HTML documents to accommodate the existing web environment. For better interactivity, dialogue scenarios for a menu, a bulletin board, and a search engine are suggested.