• Title/Summary/Keyword: Speech Recognition Agent


Applying Mobile Agent for Internet-based Distributed Speech Recognition

  • Saaim, Emrul Hamide Md;Alias, Mohamad Ashari;Ahmad, Abdul Manan;Ahmad, Jamal Nasir
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.134-138
    • /
    • 2005
  • Several applications have been developed for internet-based speech recognition. Internet-based speech recognition is a distributed application, and various techniques and methods have been used for that purpose. Currently, the client-server paradigm is one of the popular techniques used for client-server communication in web applications. However, there is a new paradigm with the same purpose: mobile agent technology. Mobile agent technology has several advantages for distributed internet-based systems. This paper presents the application of mobile agent technology to internet-based speech recognition built on a client-server processing architecture (a minimal sketch of the contrast follows below).

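The abstract contrasts the client-server paradigm with mobile agents but gives no implementation detail. The snippet below is a conceptual simulation only, not the paper's system: the `RecognitionAgent` class, the host names, and the `recognize()` stub are hypothetical stand-ins for a real mobile-agent platform that would serialize the agent (code plus state) and migrate it between hosts.

```python
# Conceptual sketch: a "mobile agent" carries its own task and state to each
# host, instead of the client repeatedly calling a fixed recognition server.
from dataclasses import dataclass, field


def recognize(audio_features, host):
    """Stand-in for a recognition engine running on a remote host."""
    return f"transcript-of-{len(audio_features)}-frames@{host}"


@dataclass
class RecognitionAgent:
    audio_features: list          # state the agent carries with it
    itinerary: list               # hosts the agent will visit
    results: dict = field(default_factory=dict)

    def run(self):
        # A real mobile-agent platform would serialize this object and ship it
        # to each host; here the "migration" is only simulated in-process.
        for host in self.itinerary:
            self.results[host] = recognize(self.audio_features, host)
        return self.results


if __name__ == "__main__":
    agent = RecognitionAgent(audio_features=[0.1] * 300,
                             itinerary=["feature-server", "decoder-server"])
    print(agent.run())
```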

Speech Interactive Agent on Car Navigation System Using Embedded ASR/DSR/TTS

  • Lee, Heung-Kyu;Kwon, Oh-Il;Ko, Han-Seok
    • Speech Sciences
    • /
    • v.11 no.2
    • /
    • pp.181-192
    • /
    • 2004
  • This paper presents an efficient speech interactive agent that renders smooth car navigation and Telematics services by employing embedded automatic speech recognition (ASR), distributed speech recognition (DSR), and text-to-speech (TTS) modules, all while enabling safe driving. A speech interactive agent is essentially a conversational tool providing command and control functions to drivers, such as enabling navigation tasks, audio/video manipulation, and E-commerce services through natural voice/response interactions between user and interface. While the benefits of automatic speech recognition and speech synthesis have become well known, the hardware resources involved are often limited and the internal communication protocols are complex, making real-time responses hard to achieve; as a result, performance degradation always exists in the embedded H/W system. To implement the speech interactive agent so that it accommodates user commands in real time, we propose to optimize the hardware-dependent architectural code for speed-up. In particular, we propose a composite solution through memory reconfiguration and efficient arithmetic operation conversion, as well as an effective out-of-vocabulary rejection algorithm, all made suitable for system operation under limited resources (a sketch of one rejection scheme follows below).

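The abstract mentions an out-of-vocabulary (OOV) rejection algorithm without giving its details. Below is a minimal sketch of one common rejection scheme, a likelihood-ratio test against a garbage/filler model; the function name, the scores, and the threshold are illustrative assumptions, not the authors' method.

```python
# Hedged sketch of confidence-based OOV rejection: compare the best
# in-vocabulary hypothesis score against a garbage/filler score and reject
# when the log-likelihood ratio falls below a threshold.
def reject_oov(best_logprob: float, filler_logprob: float,
               threshold: float = 2.0) -> bool:
    """Return True if the utterance should be rejected as out-of-vocabulary."""
    llr = best_logprob - filler_logprob   # log-likelihood ratio
    return llr < threshold


# An in-vocabulary command scores clearly above the filler model -> accept.
print(reject_oov(best_logprob=-120.0, filler_logprob=-150.0))  # False
# An OOV utterance scores close to the filler model -> reject.
print(reject_oov(best_logprob=-148.0, filler_logprob=-147.0))  # True
```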

Speech Recognition Interface in the Communication Environment (통신환경에서 음성인식 인터페이스)

  • Han, Tai-Kun;Kim, Jong-Keun;Lee, Dong-Wook
    • Proceedings of the KIEE Conference
    • /
    • 2001.07d
    • /
    • pp.2610-2612
    • /
    • 2001
  • This study examines the recognition of the user's spoken commands based on speech recognition and natural language processing, and develops a natural language interface agent that can analyze the recognized commands. The natural language interface agent consists of a speech recognizer and a semantic interpreter. The speech recognizer understands the spoken command and transforms it into character strings. The semantic interpreter analyzes the character strings and creates the commands and questions to be transferred to the application program. We also consider problems related to the speech recognizer and the semantic interpreter, such as the ambiguity of natural language and the ambiguity and errors introduced by the speech recognizer. This kind of natural language interface agent can be applied to telephony environments involving all kinds of communication media such as telephone, fax, e-mail, and so on (a minimal pipeline sketch follows below).

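A minimal sketch of the recognizer-to-semantic-interpreter pipeline the abstract describes, including a clarification path for ambiguous utterances. The command patterns and the recognizer stub are hypothetical; a real system would plug in an actual speech recognizer and the application's own command set.

```python
import re

def speech_recognizer(audio) -> str:
    """Stub: a real recognizer would turn audio into a character string."""
    return "send a fax to the sales department"

def semantic_interpreter(text: str) -> dict:
    """Map a recognized string to an application command, or ask for
    clarification when the utterance is ambiguous or unrecognized."""
    rules = [
        (r"\b(call|phone)\b\s+(?P<target>.+)", "PLACE_CALL"),
        (r"\bfax\b.*\bto\s+(?P<target>.+)", "SEND_FAX"),
        (r"\be-?mail\b.*\bto\s+(?P<target>.+)", "SEND_EMAIL"),
    ]
    matches = [(cmd, m) for pat, cmd in rules if (m := re.search(pat, text))]
    if not matches:
        return {"command": "CLARIFY", "question": f"What should I do with: '{text}'?"}
    if len(matches) > 1:
        return {"command": "CLARIFY", "question": "Did you mean call, fax, or e-mail?"}
    cmd, m = matches[0]
    return {"command": cmd, "target": m.group("target")}

print(semantic_interpreter(speech_recognizer(None)))
# {'command': 'SEND_FAX', 'target': 'the sales department'}
```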

Intelligent Digital Public Address System using Agent Based on Network

  • Kim, Jung-Sook
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.23 no.1
    • /
    • pp.87-92
    • /
    • 2013
  • In this paper, we developed a digital, integrated PA (Public Address) system with speech recognition and agent-based sensor connections over IP using IDs. It has the facilities of a PA system, such as an external input, a microphone, and a radio, and supports speech recognition. If "fire" is spoken to the PA system, it can recognize the emergency situation and will broadcast the information to the appropriate agency immediately. In addition, many sensors, such as temperature, humidity, and infrared sensors, can be connected to the PA system and integrated with context awareness, which contains many types of information about internal statuses, using an inference agent. Also, the developed digital integrated PA system makes it possible to broadcast messages to the appropriate places over the network using IP addresses based on IDs. Finally, the digital PA system is designed for operation from a PC, which makes installation and the setting of operating parameters very simple and user-friendly. For the implementation, we used thread-based concurrent processing for the events that occur concurrently from many sensors or users.
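
The thread-based concurrent event handling mentioned at the end of the abstract can be pictured with a producer/consumer queue. This is a hedged sketch only: the event names, the temperature threshold, and the `broadcast()` stub are illustrative assumptions, not the paper's implementation.

```python
import queue
import threading

events = queue.Queue()

def broadcast(message: str) -> None:
    print(f"[PA broadcast] {message}")

def worker() -> None:
    # Each worker thread drains events produced concurrently by sensors/users.
    while True:
        event = events.get()
        if event is None:                      # shutdown sentinel
            break
        kind, value = event
        if kind == "speech" and "fire" in value:
            broadcast("Fire reported - notifying the appropriate agency.")
        elif kind == "temperature" and value > 60:
            broadcast(f"High temperature alert: {value} C")
        events.task_done()

threads = [threading.Thread(target=worker, daemon=True) for _ in range(3)]
for t in threads:
    t.start()

# Events arriving concurrently from sensors and users.
events.put(("speech", "fire in building B"))
events.put(("temperature", 72))
events.join()
for _ in threads:
    events.put(None)
```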

A Study of Automatic Evaluation Platform for Speech Recognition Engine in the Vehicle Environment (자동차 환경내의 음성인식 자동 평가 플랫폼 연구)

  • Lee, Seong-Jae;Kang, Sun-Mee
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.7C
    • /
    • pp.538-543
    • /
    • 2012
  • The performance of the speech recognition engine is one of the most critical elements of the in-vehicle speech recognition interface. The objective of this paper is to develop an automated platform for running performance tests on the in-vehicle speech recognition engine. The developed platform comprises a main program, an agent program, a database management module, and a statistical analysis module. A simulation environment for performance tests that mimics real driving situations was constructed, and it was tested by applying pre-recorded driving noise and a speaker's voice as inputs. As a result, the validity of the results from the speech recognition tests was verified. Users will be able to perform performance tests on the in-vehicle speech recognition engine effectively through the proposed platform.
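
Two building blocks of such an evaluation setup are mixing recorded driving noise with a clean utterance at a target SNR and scoring the recognizer output with word error rate (WER). The sketch below is illustrative, not the authors' platform: the signals are random placeholders and the recognizer is represented only by its hypothesis string.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale the noise so the mixture has the requested signal-to-noise ratio."""
    noise = noise[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

def wer(ref: str, hyp: str) -> float:
    """Word error rate via edit distance between reference and hypothesis."""
    r, h = ref.split(), hyp.split()
    d = np.zeros((len(r) + 1, len(h) + 1), dtype=int)
    d[:, 0] = np.arange(len(r) + 1)
    d[0, :] = np.arange(len(h) + 1)
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + cost)
    return d[len(r), len(h)] / max(len(r), 1)

noisy = mix_at_snr(np.random.randn(16000), np.random.randn(16000), snr_db=5.0)
print(wer("navigate to the nearest gas station",
          "navigate to nearest gas station"))   # one deletion -> ~0.17
```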

Joint streaming model for backchannel prediction and automatic speech recognition

  • Yong-Seok Choi;Jeong-Uk Bang;Seung Hi Kim
    • ETRI Journal
    • /
    • v.46 no.1
    • /
    • pp.118-126
    • /
    • 2024
  • In human conversations, listeners often utilize brief backchannels such as "uh-huh" or "yeah." Timely backchannels are crucial to understanding and increasing trust among conversational partners. In human-machine conversation systems, users can engage in natural conversations when a conversational agent generates backchannels like a human listener. We propose a method that simultaneously predicts backchannels and recognizes speech in real time. We use a streaming transformer and adopt multitask learning for concurrent backchannel prediction and speech recognition. The experimental results demonstrate the superior performance of our method compared with previous works while maintaining a similar single-task speech recognition performance. Owing to the extremely imbalanced training data distribution, the single-task backchannel prediction model fails to predict any of the backchannel categories, and the proposed multitask approach substantially enhances the backchannel prediction performance. Notably, in the streaming prediction scenario, the performance of backchannel prediction improves by up to 18.7% compared with existing methods.
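
A minimal sketch of the multitask idea only: a shared encoder feeds both an ASR head and a backchannel head, and the two losses are combined with a weight. PyTorch, the GRU (standing in for the streaming transformer), the class weights used to counter the label imbalance, and the loss weighting are all illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class JointModel(nn.Module):
    def __init__(self, feat_dim=80, hidden=256, vocab=100, n_backchannel=3):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)  # stand-in for a streaming transformer
        self.asr_head = nn.Linear(hidden, vocab)                   # per-frame token logits (CTC)
        self.bc_head = nn.Linear(hidden, n_backchannel)            # per-frame backchannel logits

    def forward(self, feats):
        enc, _ = self.encoder(feats)
        return self.asr_head(enc), self.bc_head(enc)

model = JointModel()
feats = torch.randn(4, 120, 80)                        # (batch, frames, features)
asr_logits, bc_logits = model(feats)

# ASR branch: CTC loss over the token logits.
ctc = nn.CTCLoss(blank=0)
log_probs = asr_logits.log_softmax(-1).transpose(0, 1)  # (frames, batch, vocab)
targets = torch.randint(1, 100, (4, 20))
asr_loss = ctc(log_probs, targets,
               input_lengths=torch.full((4,), 120),
               target_lengths=torch.full((4,), 20))

# Backchannel branch: class weights to counter the heavy label imbalance.
bc_loss_fn = nn.CrossEntropyLoss(weight=torch.tensor([0.2, 5.0, 5.0]))
bc_targets = torch.randint(0, 3, (4, 120))
bc_loss = bc_loss_fn(bc_logits.reshape(-1, 3), bc_targets.reshape(-1))

loss = asr_loss + 0.5 * bc_loss                         # multitask objective
loss.backward()
```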

Developing a New Algorithm for Conversational Agent to Detect Recognition Error and Neologism Meaning: Utilizing Korean Syllable-based Word Similarity (대화형 에이전트 인식오류 및 신조어 탐지를 위한 알고리즘 개발: 한글 음절 분리 기반의 단어 유사도 활용)

  • Jung-Won Lee;Il Im
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.267-286
    • /
    • 2023
  • Conversational agents such as AI speakers use voice conversation for human-computer interaction. Voice recognition errors often occur in conversational situations. Recognition errors in user utterance records can be categorized into two types. The first type is misrecognition errors, where the agent fails to recognize the user's speech entirely. The second type is misinterpretation errors, where the user's speech is recognized and services are provided, but the interpretation differs from the user's intention. Among these, misinterpretation errors require separate error detection, as they are recorded as successful service interactions. In this study, various text separation methods were applied to detect misinterpretation errors. For each of these text separation methods, the similarity of consecutive utterance pairs was calculated using word embedding and document embedding techniques, which convert words and documents into vectors. This approach goes beyond simple word-based similarity calculation to explore a new method for detecting misinterpretation errors. The research method involved using real user utterance records to train and develop a detection model by applying patterns of misinterpretation error causes. The results revealed that the most significant analysis result was obtained through initial consonant extraction for detecting misinterpretation errors caused by the use of unregistered neologisms. Through comparison with other separation methods, different error types could be observed. This study has two main implications. First, for misinterpretation errors that are difficult to detect because they are not recognized as errors, the study proposed diverse text separation methods and found a novel method that improved performance remarkably. Second, if this is applied to conversational agents or voice recognition services requiring neologism detection, the patterns of errors occurring from the voice recognition stage can be specified. The study also proposed and verified that, even for interactions not categorized as errors, services can be provided according to the results users actually intended.
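
The syllable separation underlying this work rests on the fact that a precomposed Hangul syllable can be decomposed arithmetically (code point = 0xAC00 + initial*588 + vowel*28 + final). The sketch below extracts initial consonants (초성) and compares consecutive utterances; the `difflib` similarity measure is an illustrative choice, not the authors' exact algorithm.

```python
from difflib import SequenceMatcher

CHO = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")   # 19 initial consonants

def initial_consonants(text: str) -> str:
    """Replace each Hangul syllable with its initial consonant (초성)."""
    out = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:                 # precomposed Hangul syllable
            out.append(CHO[code // 588])      # 588 = 21 vowels * 28 finals
        else:
            out.append(ch)                    # keep spaces, digits, Latin, etc.
    return "".join(out)

def similarity(a: str, b: str) -> float:
    """Similarity of two utterances on the initial-consonant level."""
    return SequenceMatcher(None, initial_consonants(a), initial_consonants(b)).ratio()

print(initial_consonants("사이다"))                     # ㅅㅇㄷ
# A vowel-level misrecognition of a neologism still matches on 초성.
print(similarity("사이다 틀어줘", "사이더 틀어줘"))
```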

Dialog System Using Multimedia Techniques for the Elderly with Dementia

  • Kim, Sung-Ill;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.4E
    • /
    • pp.170-177
    • /
    • 2002
  • The goal of the present research is to improve the quality of life of the elderly with dementia. In this paper, this is realized by developing a dialog system that is controlled by three kinds of modules: a speech recognition engine, a graphical agent, and a database organized by a nursing schedule. The system was evaluated in the actual environment of a nursing facility by introducing it to an older male patient with dementia. A comparison study between the dialog system and professional caregivers was then carried out at the nursing home for five days in each case. The evaluation results showed that the dialog system was more responsive in catering to the needs of the dementia patient than the professional caregivers. Moreover, the proposed system led the patient to talk more than the caregivers did.

Development of AI-based Real Time Agent Advisor System on Call Center - Focused on N Bank Call Center (AI기반 콜센터 실시간 상담 도우미 시스템 개발 - N은행 콜센터 사례를 중심으로)

  • Ryu, Ki-Dong;Park, Jong-Pil;Kim, Young-min;Lee, Dong-Hoon;Kim, Woo-Je
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.20 no.2
    • /
    • pp.750-762
    • /
    • 2019
  • The importance of the call center as a contact point for the enterprise is growing. However, call centers have difficulty operating because of agents' lack of knowledge and frequent agent turnover caused by downturns in the business, which deteriorates the quality of customer service. Therefore, through an N-bank call center case study, we developed a system to reduce the burden of keeping up business knowledge and to improve customer service quality. It is a "real-time agent advisor" system that provides agents with answers to customer questions in real time by combining AI technology for speech recognition, natural language processing, and question answering with existing call center information systems, such as a private branch exchange (PBX) and computer telephony integration (CTI). As a result of the case study, we confirmed that the speech recognition system for real-time call analysis and the corpus construction method improve the natural language processing performance of the question answering system. In particular, with named entity recognition (NER), the accuracy of the corpus learning improved by 31%. Also, after applying the agent advisor system, the positive feedback rate of agents regarding the answers from the agent advisor was 93.1%, which shows that the system is helpful to the agents.
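
The retrieval step of such an advisor can be pictured as matching a recognized customer utterance against an FAQ and surfacing the best answer to the agent. The sketch below is a simplifying illustration only: the FAQ entries, the TF-IDF retrieval, and the omission of the real ASR/NER front end are all assumptions, not the system described in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = {
    "How do I reset my internet banking password?":
        "Guide the customer to Settings > Security > Reset password.",
    "What is the limit for overseas transfers?":
        "Explain the default daily limit and how to request an increase.",
}

questions = list(faq)
vectorizer = TfidfVectorizer()
faq_matrix = vectorizer.fit_transform(questions)

def advise(recognized_utterance: str) -> str:
    """Return the FAQ answer whose question best matches the utterance."""
    sims = cosine_similarity(vectorizer.transform([recognized_utterance]), faq_matrix)[0]
    best = int(sims.argmax())
    return faq[questions[best]] if sims[best] > 0.2 else "No confident suggestion."

print(advise("customer wants to reset the password for internet banking"))
```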
