• Title/Abstract/Keyword: speech interactive agent

Search results: 5 items (processing time 0.02 sec)

Speech Interactive Agent on Car Navigation System Using Embedded ASR/DSR/TTS

  • Lee, Heung-Kyu; Kwon, Oh-Il; Ko, Han-Seok
    • 음성과학 / Vol. 11, No. 2 / pp.181-192 / 2004
  • This paper presents an efficient speech interactive agent that renders smooth car navigation and Telematics services by employing embedded automatic speech recognition (ASR), distributed speech recognition (DSR), and text-to-speech (TTS) modules, all while enabling safe driving. A speech interactive agent is essentially a conversational tool providing command and control functions to drivers, such as navigation tasks, audio/video manipulation, and E-commerce services, through natural voice/response interactions between user and interface. While the benefits of automatic speech recognition and speech synthesis have become well known, the hardware resources involved are often limited and the internal communication protocols too complex to achieve real-time responses; as a result, performance degradation is inherent in embedded hardware systems. To implement a speech interactive agent that accommodates user commands in real time, we propose to optimize the hardware-dependent architectural code for speed-up. In particular, we propose a composite solution of memory reconfiguration and efficient arithmetic-operation conversion, together with an effective out-of-vocabulary rejection algorithm, all made suitable for system operation under limited resources.
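The out-of-vocabulary rejection the abstract mentions is commonly built on a confidence margin against a generic "filler" (garbage) model. A minimal sketch of that idea, assuming illustrative score scales and threshold (the paper's actual algorithm is not specified here):

```python
# Sketch of confidence-based out-of-vocabulary (OOV) rejection for a
# command-and-control ASR grammar; threshold and scores are illustrative.
def recognize_with_rejection(scores, filler_score, threshold=2.0):
    """Return the best-scoring in-vocabulary command, or None (reject as
    OOV) when it fails to beat a generic 'filler' model by a margin."""
    best_cmd = max(scores, key=scores.get)
    llr = scores[best_cmd] - filler_score  # log-likelihood ratio confidence
    return best_cmd if llr >= threshold else None
```

An utterance whose best command hypothesis barely outscores the filler model is treated as noise or an unsupported phrase rather than forced onto the nearest command.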


A 3D Audio-Visual Animated Agent for Expressive Conversational Question Answering

  • Martin, J.C.; Jacquemin, C.; Pointal, L.; Katz, B.
    • 한국정보컨버전스학회 Conference Proceedings / 2008 International Conference on Information Convergence / pp.53-56 / 2008
  • This paper reports on the ACQA (Animated agent for Conversational Question Answering) project conducted at LIMSI. The aim is to design an expressive animated conversational agent (ACA) for conducting research along two main lines: (1) perceptual experiments (e.g., perception of expressivity and 3D movements in both audio and visual channels); (2) design of human-computer interfaces requiring head models at different resolutions and the integration of the talking head in virtual scenes. The target application of this expressive ACA is RITEL, a real-time speech-based question-and-answer system developed at LIMSI. The architecture of the system is based on distributed modules exchanging messages through a network protocol. The main components of the system are: RITEL, a question-and-answer system searching raw text, which produces a text (the answer) and attitudinal information; this attitudinal information is then processed to deliver expressive tags; the text is converted into phoneme, viseme, and prosodic descriptions. Audio speech is generated by the LIMSI selection-concatenation text-to-speech engine. Visual speech uses MPEG-4 keypoint-based animation and is rendered in real time by Virtual Choreographer (VirChor), a GPU-based 3D engine. Finally, visual and audio speech is played in a 3D audio-visual scene. The project also devotes substantial effort to realistic visual and audio 3D rendering. A new model of phoneme-dependent human radiation patterns is included in the speech synthesis system, so that the ACA can move in the virtual scene with realistic 3D visual and audio rendering.
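Driving MPEG-4 keypoint animation from synthesized speech relies on a phoneme-to-viseme mapping like the one the abstract implies. A hypothetical sketch (the category names and table are illustrative, not the project's actual data):

```python
# Hypothetical phoneme-to-viseme lookup: each phoneme maps to a mouth-shape
# class that parameterizes MPEG-4 facial animation keypoints.
PHONEME_TO_VISEME = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",  # lips closed
    "f": "labiodental", "v": "labiodental",             # lip-teeth contact
    "a": "open", "i": "spread", "u": "rounded",         # vowel mouth shapes
}

def to_visemes(phonemes):
    """Map each phoneme to its mouth-shape class, defaulting to neutral."""
    return [PHONEME_TO_VISEME.get(p, "neutral") for p in phonemes]
```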


A Design and Implementation of The Deep Learning-Based Senior Care Service Application Using AI Speaker

  • Mun Seop Yun; Sang Hyuk Yoon; Ki Won Lee; Se Hoon Kim; Min Woo Lee; Ho-Young Kwak; Won Joo Lee
    • 한국컴퓨터정보학회논문지 / Vol. 29, No. 4 / pp.23-30 / 2024
  • This paper designs and implements a deep-learning-based, personalized care service application for the silver generation. For user convenience, the application uses STT (Speech-to-Text) to convert the user's utterance into text, which is fed to Autogen, Microsoft's conversational multi-agent large language model framework. Autogen uses the dialogue data between the user and the chatbot to infer the user's intent and respond accordingly. Backend agents provide a wishlist, a shared calendar, and a greeting-message feature that delivers messages in a loved one's voice via a deep-learning voice-cloning model. The application also integrates SKT's AI speaker NUGU to provide home IoT services. With these features, the proposed intelligent application is expected to contribute to future AI-based care technology for the silver generation.
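The pipeline described above hands STT output to agents by intent. A minimal sketch of such dispatching, with hypothetical keywords and agent names (the paper's actual routing is done inside Autogen):

```python
# Minimal keyword-based intent routing from STT output to backend agents;
# the intent names and keyword lists are illustrative assumptions.
def route_utterance(text):
    """Map a transcribed utterance to a backend agent by keyword intent."""
    intents = {
        "wishlist": ("wishlist", "wish list"),
        "calendar": ("calendar", "schedule"),
        "greeting_message": ("greeting", "message"),
    }
    lowered = text.lower()
    for agent, keywords in intents.items():
        if any(k in lowered for k in keywords):
            return agent
    return "chatbot"  # default: hand over to the conversational LLM agent
```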

Suggestion of a Social Significance Research Model for User Emotion -Focused on Conversational Agent and Communication-

  • 한상욱; 김승인
    • 한국융합학회논문지 / Vol. 10, No. 3 / pp.167-176 / 2019
  • Conversational agents, at the forefront of the Fourth Industrial Revolution, aim at user-centered personalization and occupy an important position as a hub connecting diverse IoT devices. For personalization, recognizing the user's emotions and providing appropriate interaction is a problem conversational agents must solve. This study first examines the definition of emotion and its scientific and engineering approaches. It then examines, from a social perspective, the social functions of emotion and their component elements, and shows how emotion can be understood through them. On this basis, it investigates how the social elements of emotion can be discovered in user communication. The results show that social elements can be found in user utterances and linked to the social meaning of emotion. Finally, a model for discovering social elements in user communication is proposed. We hope this will support user-centered design and interaction research in the design of conversational agents.

Research on Developing a Conversational AI Callbot Solution for Medical Counselling

  • Won Ro LEE; Jeong Hyon CHOI; Min Soo KANG
    • 한국인공지능학회지 / Vol. 11, No. 4 / pp.9-13 / 2023
  • In this study, we explored the potential of integrating interactive AI callbot technology into the medical consultation domain as part of a broader service development initiative. Aimed at enhancing patient satisfaction, the AI callbot was designed to efficiently address queries from hospitals' primary users, especially the elderly and those using phone services. By incorporating an AI-driven callbot into the hospital's customer service center, routine tasks such as appointment modifications and cancellations were efficiently managed by the AI callbot agent, while tasks requiring more detailed attention or specialization were addressed by human agents, ensuring a balanced and collaborative approach. The deep learning model for voice recognition was based on the Transformer architecture and fine-tuned to the medical field from a pre-trained model. Existing recording files were converted into training data, and a model trained with SSL (self-supervised learning) was implemented. An ANN (artificial neural network) model was used to analyze voice signals and interpret them as text, and after deployment, intent recognition was refined through reinforcement learning to continuously improve accuracy. For TTS (Text-to-Speech), the Transformer model was applied to the text analysis, acoustic model, and vocoder stages, and Google's Natural Language API was applied to recognize intent. As the research progresses, challenges remain, such as interconnection issues between EMR providers, handling doctors' time slots, appointments at two or more hospitals, and patient usability. For straightforward reservation tasks, however, the callbot service appears immediately applicable in hospitals.
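The routine-versus-specialized split between the AI callbot and human agents can be sketched as a simple dispatch rule. The intent names and confidence threshold below are illustrative assumptions, not the system's actual configuration:

```python
# Sketch of hybrid call routing: routine, high-confidence intents go to the
# AI callbot; everything else escalates to a human agent.
ROUTINE_TASKS = {"modify_appointment", "cancel_appointment", "confirm_appointment"}

def dispatch(intent, confidence, threshold=0.8):
    """Route a caller's recognized intent to the AI callbot or a human."""
    if intent in ROUTINE_TASKS and confidence >= threshold:
        return "ai_callbot"
    return "human_agent"
```

Low-confidence recognitions fall through to a human even for routine tasks, which is one simple way to keep the collaboration balanced.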