• Title/Summary/Keyword: Talking robot

Search Results: 7

Automatic Vowel Sequence Reproduction for a Talking Robot Based on PARCOR Coefficient Template Matching

  • Vo, Nhu Thanh; Sawada, Hideyuki
    • IEIE Transactions on Smart Processing and Computing / v.5 no.3 / pp.215-221 / 2016
  • This paper describes an automatic vowel sequence reproduction system for a talking robot built to reproduce the human voice based on the working behavior of the human articulatory system. A sound analysis system is developed to record a sentence spoken by a human (mainly vowel sequences in the Japanese language) and to analyze it into the correct command packet so the talking robot can repeat it. An algorithm based on a short-time energy method is developed to separate and count sound phonemes. Template matching with partial correlation (PARCOR) coefficients is applied to find the voice in the talking robot's database most similar to the spoken voice. By combining the phoneme separation and counting results with the vowels detected in the human speech, the talking robot can reproduce a vowel sequence similar to the one spoken by the human. Two tests verifying the working behavior of the robot show that it can repeat a sequence of vowels spoken by a human with an average success rate of more than 60%.
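The pipeline described above (short-time energy segmentation, then PARCOR template matching) can be sketched as follows. The frame sizes, model order, and template labels are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

def short_time_energy(signal, frame_len=400, hop=160):
    """Frame-wise energy, used to separate and count phonemes."""
    n = 1 + (len(signal) - frame_len) // hop
    return np.array([np.sum(signal[i * hop:i * hop + frame_len] ** 2)
                     for i in range(n)])

def parcor(frame, order=10):
    """PARCOR (reflection) coefficients via the Levinson-Durbin recursion
    on the frame's autocorrelation."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
    a = np.zeros(order + 1)
    a[0] = 1.0                      # prediction polynomial, a[0] fixed at 1
    e = r[0]                        # prediction error energy
    k = np.zeros(order)
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k[i - 1] = -acc / e         # i-th reflection (PARCOR) coefficient
        a[1:i + 1] = a[1:i + 1] + k[i - 1] * a[i - 1::-1][:i]
        e *= 1.0 - k[i - 1] ** 2
    return k

def match_vowel(frame, templates):
    """Return the template label nearest to the frame in PARCOR space."""
    p = parcor(frame)
    return min(templates, key=lambda v: np.linalg.norm(p - templates[v]))
```

A segmented phoneme frame is then matched against stored vowel templates (e.g. the five Japanese vowels), and the winning labels form the command sequence sent to the robot.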

From Montague Grammar to Database Semantics

  • Hausser, Roland
    • Language and Information / v.19 no.2 / pp.1-18 / 2015
  • This paper retraces the development of Database Semantics (DBS) from its beginnings in Montague grammar. It describes the changes over the course of four decades and explains why they were seen to be necessary. DBS was designed to answer the central theoretical question for building a talking robot: How does the mechanism of natural language communication work? For doing what is requested and reporting what is going on, a talking robot requires not only language but also non-language cognition. The contents of non-language cognition are re-used as the meanings of the language surfaces. Robot-externally, DBS handles the language-based transfer of content by using nothing but modality-dependent unanalyzed external surfaces such as sound shapes or dots on paper, produced in the speak mode and recognized in the hear mode. Robot-internally, DBS reconstructs cognition by integrating linguistic notions like functor-argument and coordination, philosophical notions like concept-, pointer-, and baptism-based reference, and notions of computer science like input-output, interface, data structure, algorithm, database schema, and functional flow.

Context-Independent Speaker Recognition in URC Environment (지능형 서비스 로봇을 위한 문맥독립 화자인식 시스템)

  • Ji, Mi-Kyong; Kim, Sung-Tak; Kim, Hoi-Rin
    • The Journal of Korea Robotics Society / v.1 no.2 / pp.158-162 / 2006
  • This paper presents a speaker recognition system intended for use in human-robot interaction. The proposed speaker recognition system can achieve significantly high performance in the Ubiquitous Robot Companion (URC) environment. The URC concept is a scenario in which a robot is connected to a server through a broadband connection allowing functions to be performed on the server side, thereby minimizing the stand-alone function significantly and reducing the robot client cost. Instead of giving a robot (client) on-board cognitive capabilities, the sensing and processing work are outsourced to a central computer (server) connected to the high-speed Internet, with only the moving capability provided by the robot. Our aim is to enhance human-robot interaction by increasing the performance of speaker recognition with multiple microphones on the robot side in adverse distant-talking environments. Our speaker recognizer provides the URC project with a basic interface for human-robot interaction.

ARMA Filtering of Speech Features Using Energy Based Weights (에너지 기반 가중치를 이용한 음성 특징의 자동회귀 이동평균 필터링)

  • Ban, Sung-Min; Kim, Hyung-Soon
    • The Journal of the Acoustical Society of Korea / v.31 no.2 / pp.87-92 / 2012
  • In this paper, a robust feature compensation method to deal with environmental mismatch is proposed. The proposed method applies energy-based weights, set according to the degree of speech presence, to Mean subtraction, Variance normalization, and ARMA filtering (MVA) processing. The weights are further smoothed by moving-average and maximum filters. The proposed feature compensation algorithm is evaluated on the AURORA 2 task and on a distant-talking experiment using a robot platform, and it achieves error rate reductions of 14.4% and 44.9%, respectively, compared with standard MVA processing.
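The weighted-MVA scheme described above can be sketched as follows. The exact weight formula, filter lengths, and ARMA order are not given in the abstract, so the min-max energy weight, the 3-tap smoothing filters, and M=2 below are assumptions for illustration:

```python
import numpy as np

def arma_filter(x, M=2):
    """MVA-style ARMA smoothing along the time axis (frames x dims):
    past M outputs and current-plus-future M inputs are averaged."""
    y = x.copy()
    for t in range(M, len(x) - M):
        y[t] = (y[t - M:t].sum(axis=0) + x[t:t + M + 1].sum(axis=0)) / (2 * M + 1)
    return y

def weighted_mva(feats, log_energy, M=2):
    """Mean subtraction, variance normalization, and ARMA filtering,
    scaled by an energy-based speech-presence weight (assumed form)."""
    # Normalize per-frame log energy into a [0, 1] speech-presence weight.
    span = log_energy.max() - log_energy.min() + 1e-8
    w = (log_energy - log_energy.min()) / span
    # Smooth the weights: 3-tap moving average, then 3-tap maximum filter.
    w = np.convolve(w, np.ones(3) / 3.0, mode="same")
    w = np.max(np.stack([np.roll(w, s) for s in (-1, 0, 1)]), axis=0)
    # Weighted mean subtraction and variance normalization (M and V steps).
    z = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-8)
    z = z * w[:, None]
    # ARMA filtering (A step).
    return arma_filter(z, M=M)
```

Frames with low energy (likely non-speech) are thus down-weighted before the temporal smoothing, which is the intuition behind applying speech-presence weights inside MVA.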

Active Audition System based on 2-Dimensional Microphone Array (2차원 마이크로폰 배열에 의한 능동 청각 시스템)

  • Lee, Chang-Hun; Kim, Yong-Ho
    • Proceedings of the KIEE Conference / 2003.11b / pp.175-178 / 2003
  • This paper describes an active audition system for a robot-human interface in real environments. We propose a strategy for robust sound localization and for distant-talking speech recognition (60-300 cm) based on a 2-dimensional microphone array. We consider spatial features, namely the relation between source position and interaural time differences, and realize a speaker tracking system using a fuzzy inference process based on inference rules generated from these spatial features.
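The core of such localization is estimating the interaural (inter-microphone) time difference and converting it to a bearing. A minimal sketch for one microphone pair is shown below; the far-field model, sampling rate, and 0.2 m spacing are assumptions, and the paper's fuzzy inference stage over a 2-D array is not reproduced:

```python
import numpy as np

def tdoa(sig_a, sig_b, fs):
    """Time difference of arrival (seconds) from the cross-correlation peak.
    Positive when sig_a arrives later than sig_b."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_b) - 1)
    return lag / fs

def azimuth_deg(delay, mic_distance, c=343.0):
    """Source azimuth under a far-field assumption: delay = d*sin(theta)/c."""
    s = np.clip(delay * c / mic_distance, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))
```

With a 2-dimensional array, delays from several such pairs constrain the source position in the plane rather than on a single cone, which is what makes speaker tracking feasible.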

Real-Time Implementation of Wireless Remote Control of Mobile Robot Based-on Speech Recognition Command (음성명령에 의한 모바일로봇의 실시간 무선원격 제어 실현)

  • Shim, Byoung-Kyun; Han, Sung-Hyun
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.20 no.2 / pp.207-213 / 2011
  • In this paper, we present a study on the real-time implementation of a mobile robot to which interactive voice recognition techniques are applied. Speech commands are uttered as sentential connected words and transmitted through the wireless remote control system. We implement an automatic distant speech command recognition system for interactive voice-enabled services. We construct a baseline automatic speech command recognition system, in which the acoustic models are trained from speech utterances recorded by a microphone. To improve the performance of the baseline system, the acoustic models are adapted to adjust for the spectral characteristics of different microphones and the environmental mismatches between close-talking and distant speech. We illustrate the performance of the developed speech recognition system by experiments. The results show that the proposed system achieves average recognition rates above about 95%.

A Study on Interactive Talking Companion Doll Robot System Using Big Data for the Elderly Living Alone (빅데이터를 이용한 독거노인 돌봄 AI 대화형 말동무 아가야(AGAYA) 로봇 시스템에 관한 연구)

  • Song, Moon-Sun
    • The Journal of the Korea Contents Association / v.22 no.5 / pp.305-318 / 2022
  • We focused on the care effectiveness of interactive AI robots and developed an AI toy robot called 'Agaya' to contribute to personalization through more human-centered care. First, by applying P-TTS technology, users can maximize intimacy by selecting the voice of the person they most want to hear. Second, the robot enables healing in one's own way through memory storage and memory recall functions. Third, by being equipped with five senses corresponding to eyes, nose, mouth, ears, and hands, it seeks to provide better personalized services. Fourth, it attempts to incorporate technologies such as warm temperature maintenance, aroma, sterilization and fine dust removal, and a convenient charging method. These capabilities will expand the effective use of interactive robots by elderly people and contribute to building a positive image of the elderly, who can plan their remaining old age productively and independently.