• Title/Abstract/Keyword: Talking robot

Search results: 7 (processing time: 0.025 s)

Automatic Vowel Sequence Reproduction for a Talking Robot Based on PARCOR Coefficient Template Matching

  • Vo, Nhu Thanh; Sawada, Hideyuki
    • IEIE Transactions on Smart Processing and Computing / Vol. 5, No. 3 / pp.215-221 / 2016
  • This paper describes an automatic vowel sequence reproduction system for a talking robot built to reproduce the human voice based on the working behavior of the human articulatory system. A sound analysis system is developed to record a sentence spoken by a human (mainly vowel sequences in the Japanese language) and then analyze it to generate the correct command packet so the talking robot can repeat it. An algorithm based on the short-time energy method is developed to separate and count sound phonemes. Template matching with partial correlation (PARCOR) coefficients is applied to find the voice in the talking robot's database most similar to the spoken voice. By combining the phoneme separation and counting results with the detection of vowels in the human speech, the talking robot can reproduce a vowel sequence similar to the one spoken by the human. Two tests are performed to verify the working behavior of the robot; their results indicate that the robot can repeat a sequence of vowels spoken by a human with an average success rate of more than 60%.
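The short-time-energy segmentation step can be sketched as follows. This is a minimal illustration under assumed parameters (16 kHz audio, 25 ms frames, 10 ms hop, a caller-supplied threshold), not the authors' implementation; the function names are hypothetical:

```python
import numpy as np

def short_time_energy(signal, frame_len=400, hop=160):
    """Short-time energy per frame (assumed: 25 ms frames, 10 ms hop at 16 kHz)."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.array([np.sum(f.astype(float) ** 2) for f in frames])

def count_segments(energy, threshold):
    """Count contiguous runs of frames whose energy exceeds the threshold,
    i.e., a rough count of sound segments (phonemes) in the utterance."""
    active = energy > threshold
    # A segment starts wherever 'active' rises from False to True.
    rises = np.flatnonzero(active[1:] & ~active[:-1])
    return len(rises) + int(active[0])
```

With a synthetic signal containing two energy bursts, `count_segments` on its energy contour returns 2; a real system would also need the per-segment boundaries to feed the PARCOR matching stage.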

From Montague Grammar to Database Semantics

  • Hausser, Roland
    • Language and Information (Journal of the Korean Society for Language and Information) / Vol. 19, No. 2 / pp.1-18 / 2015
  • This paper retraces the development of Database Semantics (DBS) from its beginnings in Montague grammar. It describes the changes made over the course of four decades and explains why they were seen to be necessary. DBS was designed to answer the central theoretical question for building a talking robot: How does the mechanism of natural language communication work? For doing what is requested and reporting what is going on, a talking robot requires not only language but also non-language cognition; the contents of non-language cognition are re-used as the meanings of the language surfaces. Robot-externally, DBS handles the language-based transfer of content by using nothing but modality-dependent, unanalyzed external surfaces such as sound shapes or dots on paper, produced in the speak mode and recognized in the hear mode. Robot-internally, DBS reconstructs cognition by integrating linguistic notions like functor-argument and coordination, philosophical notions like concept-, pointer-, and baptism-based reference, and notions of computer science like input-output, interface, data structure, algorithm, database schema, and functional flow.


Context-Independent Speaker Recognition in URC Environment

  • 지미경;김성탁;김회린
    • The Journal of Korea Robotics Society / Vol. 1, No. 2 / pp.158-162 / 2006
  • This paper presents a speaker recognition system intended for use in human-robot interaction. The proposed speaker recognition system achieves significantly high performance in the Ubiquitous Robot Companion (URC) environment. The URC concept is a scenario in which a robot is connected to a server through a broadband connection, allowing functions to be performed on the server side; this minimizes the robot's stand-alone functions and reduces the cost of the robot client. Instead of giving the robot (client) on-board cognitive capabilities, the sensing and processing work is outsourced to a central computer (server) connected to the high-speed Internet, with only the moving capability provided by the robot. Our aim is to enhance human-robot interaction by increasing the performance of speaker recognition with multiple microphones on the robot side in adverse distant-talking environments. Our speaker recognizer provides the URC project with a basic interface for human-robot interaction.


ARMA Filtering of Speech Features Using Energy Based Weights

  • 반성민;김형순
    • The Journal of the Acoustical Society of Korea / Vol. 31, No. 2 / pp.87-92 / 2012
  • In this paper, a robust feature compensation method to deal with environmental mismatch is proposed. The proposed method applies energy-based weights, set according to the degree of speech presence, to Mean subtraction, Variance normalization, and ARMA filtering (MVA) processing. The weights are further smoothed by moving-average and maximum filters. The proposed feature compensation algorithm is evaluated on the AURORA 2 task and on a distant-talking experiment using the robot platform; compared with plain MVA processing, the proposed algorithm obtains error rate reductions of 14.4 % and 44.9 % on the AURORA 2 task and the distant-talking experiment, respectively.
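The weighted-MVA idea can be sketched roughly as below. The sigmoid energy-to-weight mapping, the 5-frame moving-average kernel, and the function name are assumptions for illustration, not the authors' exact scheme (which additionally smooths the weights with moving-average and maximum filters):

```python
import numpy as np

def mva_with_energy_weights(features, log_energy):
    """Weighted MVA processing of a feature matrix (frames x dims):
    mean subtraction, variance normalization, and a moving-average
    (ARMA-style) smoothing, where frames with higher log-energy
    (more likely to contain speech) get weights closer to 1."""
    # Per-frame weight from normalized log-energy (assumed sigmoid squashing).
    e = (log_energy - log_energy.mean()) / (log_energy.std() + 1e-8)
    w = 1.0 / (1.0 + np.exp(-e))

    # Energy-weighted mean subtraction and variance normalization.
    mean = np.average(features, axis=0, weights=w)
    var = np.average((features - mean) ** 2, axis=0, weights=w)
    normed = (features - mean) / np.sqrt(var + 1e-8)

    # Simple symmetric moving-average smoothing over +/-2 frames.
    kernel = np.ones(5) / 5.0
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, normed)
```

The design point is that silence and noise-only frames contribute little to the mean and variance estimates, so the normalization statistics track the speech portions even in distant-talking conditions.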

Active Audition System based on 2-Dimensional Microphone Array

  • 이창훈;김용호
    • KIEE Conference Proceedings / Proceedings of the 2003 KIEE Conference, Information and Control Section A / pp.175-178 / 2003
  • This paper describes an active audition system for a robot-human interface in real environments. We propose a strategy for robust sound localization and for distant-talking speech recognition (60-300 cm) based on a 2-dimensional microphone array. We consider spatial features, namely the relation between source position and interaural time differences, and realize a speaker tracking system using a fuzzy inference process based on inference rules generated from these spatial features.


Real-Time Implementation of Wireless Remote Control of Mobile Robot Based on Speech Recognition Command

  • 심병균;한성현
    • Journal of the Korean Society of Manufacturing Technology Engineers / Vol. 20, No. 2 / pp.207-213 / 2011
  • In this paper, we present a study on the real-time implementation of a mobile robot to which interactive voice recognition techniques are applied. Speech commands are uttered as sentential connected words and issued through the wireless remote control system. We implement an automatic distant speech command recognition system for interactive voice-enabled services. We construct a baseline automatic speech command recognition system in which acoustic models are trained from speech utterances recorded with a microphone. To improve the performance of the baseline system, the acoustic models are adapted to adjust for the spectral characteristics of different microphones and for the environmental mismatch between close-talking and distant speech. We illustrate the performance of the developed speech recognition system by experiments; the results show that the average recognition rate of the proposed system is above about 95%.

A Study on an Interactive Talking Companion Doll Robot ('AGAYA') System Using Big Data for the Elderly Living Alone

  • 송문선
    • The Journal of the Korea Contents Association / Vol. 22, No. 5 / pp.305-318 / 2022
  • This study focuses on the effectiveness of interactive AI toy robots, built on the AI technology at the core of the Fourth Industrial Revolution, for the care of elderly people living alone, and develops through R&D an AI toy robot named 'AGAYA' to contribute to more human-centered, personalized, and customized care. The R&D work proceeded after reviewing the functions of AI speakers and AI conversational dolls currently in use, interviewing a total of six elderly people living alone who currently use AI robots, and examining the current usage, effectiveness, limitations, and points for improvement of AI conversational robots among the elderly living alone. First, P-TTS technology is applied so that users can freely choose and listen to the voice of the person they want to hear, strengthening psychological intimacy; second, a memory-storage and recall function enables personal psychological healing; third, diverse roles for the five senses of eyes, nose, mouth, ears, and hands were added; and fourth, technologies such as warm body-temperature maintenance, aroma, a sterilization and fine-dust removal unit, and a convenient charging method were developed. These technologies expand the use of interactive robots by the elderly living alone through intimacy and personalization, and contribute to building a positive image of elderly people living alone who can plan their remaining years productively and independently, free from the passive frame of being mere recipients of care.