The Cognition of Non-Rigid Objects Using a Linguistic Cognitive System for Human-Robot Interaction


  • Hyun-Sik Ahn (Dept. of Robot System Engineering, Tongmyong University)
  • Published: 2009.11.01

Abstract

For HRI (Human-Robot Interaction) in daily life, robots need to recognize non-rigid objects such as clothes and blankets. Recognizing non-rigid objects is challenging, however, because their shapes vary with where and how they are laid. In this paper, the cognition of non-rigid objects based on a cognitive system is presented. The characteristics of non-rigid objects are analyzed from the viewpoint of HRI and used to design a framework for cognizing them. We adopt a linguistic cognitive system that describes every event the robot experiences. When an event involving a non-rigid object occurs, the cognitive system describes the event in sentential form, stores it in a sentential memory, and depicts the object with a spatial model that serves as a reference. The cognitive system parses each sentence syntactically and semantically, connecting the nouns that denote objects to their models. To answer human questions, sentences are retrieved by searching temporal information in the sentential memory and by spatial reasoning over a schematic imagery. Experiments show the feasibility of the cognitive system for cognizing non-rigid objects in HRI.
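The event-to-sentence pipeline the abstract describes (observe an event, describe it in sentential form, store it in a sentential memory, and later retrieve it by temporal search) can be sketched minimally as follows. All class and method names here are hypothetical illustrations for exposition, not the authors' implementation:

```python
from dataclasses import dataclass

@dataclass
class Event:
    timestamp: float   # when the event was observed
    sentence: str      # sentential description of the event
    subject: str       # noun linked to an object model
    relation: str      # spatial relation, e.g. "on"
    reference: str     # reference object for the relation

class SententialMemory:
    """Stores events as sentences and retrieves them by temporal search."""

    def __init__(self):
        self.events = []

    def store(self, subject, relation, reference, timestamp):
        # Describe the event in a sentential form and append it to memory.
        sentence = f"the {subject} is {relation} the {reference}"
        self.events.append(Event(timestamp, sentence, subject, relation, reference))

    def query(self, subject):
        # Temporal retrieval: return the most recent sentence about the object.
        for ev in reversed(self.events):
            if ev.subject == subject:
                return ev.sentence
        return None

mem = SententialMemory()
mem.store("blanket", "on", "bed", timestamp=1.0)
mem.store("blanket", "on", "sofa", timestamp=2.0)
print(mem.query("blanket"))  # → the blanket is on the sofa
```

In this sketch the nouns stored with each sentence stand in for the links to object models mentioned in the abstract; a full system would also attach the spatial model used for schematic-imagery reasoning.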


References

  1. M.-T. Jung, 'Vision and Strategy for the Robot Industry in 2020,' Policy Report 2007-63, Korea Institute for Industrial Economics and Trade, Aug. 2007
  2. D. Roy, 'Semiotic schemas: A framework for grounding language in action and perception,' Artificial Intelligence, vol. 167, pp. 170-205, 2005 https://doi.org/10.1016/j.artint.2005.04.007
  3. M. Levit and D. Roy, 'Interpretation of spatial language in a map navigation task,' IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 37, no. 3, pp. 667-679, 2007 https://doi.org/10.1109/TSMCB.2006.889809
  4. P. Gorniak and D. Roy, 'Grounded semantic composition for visual scenes,' Journal of Artificial Intelligence Research, vol. 21, pp. 429-470, 2004
  5. J. M. Siskind, 'Grounding the lexical semantics of verbs in visual perception using force dynamics and event logic,' Journal of Artificial Intelligence Research, no. 15, pp. 31-90, 2001
  6. R. J. Mooney, 'Learning to connect language and perception,' Proceedings of the 23rd AAAI Conference on Artificial Intelligence, Chicago, pp. 1598-1601, July 2008
  7. J. R. Anderson, D. Bothell, M. D. Byrne, S. Douglass, C. Lebiere, and Y. Qin, 'An integrated theory of the mind,' Psychological Review, vol. 111, no. 4, pp. 1036-1060, 2004 https://doi.org/10.1037/0033-295X.111.4.1036
  8. D. E. Kieras, S. D. Wood, and D. E. Meyer, 'Predictive engineering models based on the EPIC architecture for a multimodal high-performance human-computer interaction task,' ACM Transactions on Computer-Human Interaction, vol. 4, pp. 230-275, 1997 https://doi.org/10.1145/264645.264658
  9. S. D. Lathrop and J. E. Laird, 'Towards incorporating visual imagery into a cognitive architecture,' Proceedings of the Eighth International Conference on Cognitive Modeling, Ann Arbor, 2007