• Title/Summary/Keyword: Human interaction

Search Results: 2,484

Recent Trends in Human-Care Robot and Social Interaction Technology (휴먼케어 로봇과 소셜 상호작용 기술 동향)

  • Ko, Woori;Cho, Miyoung;Kim, Dohyung;Jang, Minsu;Lee, Jaeyeon;Kim, Jaehong
    • Electronics and Telecommunications Trends / v.35 no.3 / pp.34-44 / 2020
  • This paper examines trends in recently developed human-care robots and social interaction technologies. As one solution to the problems of an aging society, human-care robots have gained considerable attention from the public and the market. However, commercialized human-care robots have not met user expectations for their role as companions: current robot services, built on short-term interaction and fragmentary pieces of intelligence, have difficulty eliciting natural communication with humans, which results in a failure of human-robot social bonding. Social interaction is being actively investigated as a technique for improving robots' natural communication skills. Robots can form a natural bond with humans through social interaction, which in turn increases their effectiveness. In this paper, we introduce recent human-care robot-related issues and then describe the technical challenges and implications for the success of human-care robots. In addition, we review recent trends in social interaction technologies and the datasets they require.

Interaction Intent Analysis of Multiple Persons using Nonverbal Behavior Features (인간의 비언어적 행동 특징을 이용한 다중 사용자의 상호작용 의도 분석)

  • Yun, Sang-Seok;Kim, Munsang;Choi, Mun-Taek;Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems / v.19 no.8 / pp.738-744 / 2013
  • According to cognitive science research, the interaction intent of humans can be estimated by analyzing their expressive behaviors. This paper proposes a novel methodology for reliable intention analysis based on this approach. To identify intention, eight behavioral features are extracted from four characteristics of human-human interaction, and a set of core components of nonverbal human behavior is outlined. These nonverbal behaviors are handled by recognition modules built on multimodal sensors: localizing the speaker's sound source in the auditory modality; recognizing frontal faces and facial expressions in the visual modality; and estimating human trajectories, body pose and lean, and hand gestures in the spatial modality. As a post-processing step, temporal confidence reasoning improves recognition performance, and an integrated human model quantitatively classifies intention from the multi-dimensional cues by applying weight factors. Interactive robots can thus make informed engagement decisions to interact effectively with multiple persons. Experimental results show that the proposed scheme works successfully between human users and a robot in human-robot interaction.
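The weighted-fusion step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: each recognizer is assumed to report a confidence in [0, 1], and the cue names, weights, and threshold are invented for the example.

```python
def intent_score(cues: dict, weights: dict) -> float:
    """Weighted average of per-cue confidences (0 = no intent, 1 = strong intent)."""
    total_weight = sum(weights[name] for name in cues)
    if total_weight == 0:
        return 0.0
    return sum(weights[name] * conf for name, conf in cues.items()) / total_weight

def should_engage(cues: dict, weights: dict, threshold: float = 0.5) -> bool:
    """Engagement decision: engage when the fused intent score crosses a threshold."""
    return intent_score(cues, weights) >= threshold

# Illustrative cue weights and one observation of a user (all values made up).
weights = {"sound_source": 0.2, "frontal_face": 0.3, "body_pose": 0.25, "hand_gesture": 0.25}
cues = {"sound_source": 0.9, "frontal_face": 0.8, "body_pose": 0.6, "hand_gesture": 0.1}
```

With these numbers the fused score is 0.595, so the robot would choose to engage; per-user scores like this are what would let a robot rank multiple persons by interaction intent.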

Interactive Human Intention Reading by Learning Hierarchical Behavior Knowledge Networks for Human-Robot Interaction

  • Han, Ji-Hyeong;Choi, Seung-Hwan;Kim, Jong-Hwan
    • ETRI Journal / v.38 no.6 / pp.1229-1239 / 2016
  • For efficient interaction between humans and robots, robots should be able to understand the meaning and intention of human behaviors as well as recognize them. This paper proposes an interactive human intention reading method in which a robot develops its own knowledge about human intentions toward an object. A robot needs to understand the different human behavior structures associated with different objects. To this end, this paper proposes a hierarchical behavior knowledge network consisting of behavior nodes and the directional edges between them. In addition, a human intention reading algorithm incorporating reinforcement learning is proposed to interactively learn the hierarchical behavior knowledge networks from context information and from human feedback conveyed through human behaviors. The effectiveness of the proposed method is demonstrated through play-based experiments between a human and a virtual teddy bear robot with two virtual objects. Experiments with multiple participants are also conducted.
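A behavior knowledge network of this kind can be sketched as a directed graph whose edge strengths are updated from human feedback. The update rule, learning rate, behavior names, and rewards below are illustrative assumptions in the general spirit of the reinforcement-learning scheme, not the paper's algorithm.

```python
from collections import defaultdict

class BehaviorNetwork:
    """Behavior nodes linked by directional edges with learned strengths."""

    def __init__(self, alpha: float = 0.5):
        self.alpha = alpha                    # learning rate for feedback updates
        self.edges = defaultdict(float)       # (behavior_a, behavior_b) -> strength

    def observe(self, prev: str, nxt: str, reward: float) -> None:
        """Move the edge prev->nxt toward the scalar human-feedback reward."""
        key = (prev, nxt)
        self.edges[key] += self.alpha * (reward - self.edges[key])

    def predict_next(self, prev: str):
        """Read the likely next behavior: the strongest outgoing edge, if any."""
        out = {b: v for (a, b), v in self.edges.items() if a == prev}
        return max(out, key=out.get) if out else None

# Toy interaction: positive feedback for "drink" after "grasp_cup", negative for "throw".
net = BehaviorNetwork()
net.observe("grasp_cup", "drink", reward=1.0)
net.observe("grasp_cup", "throw", reward=0.0)
net.observe("grasp_cup", "drink", reward=1.0)
```

After these three feedback events the network predicts "drink" as the next behavior following "grasp_cup"; a per-object network of this shape is one way to capture the object-dependent behavior structures the abstract mentions.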

Cognitive and Emotional Structure of a Robotic Game Player in Turn-based Interaction

  • Yang, Jeong-Yean
    • International Journal of Advanced Smart Convergence / v.4 no.2 / pp.154-162 / 2015
  • This paper focuses on how cognitive and emotional structures affect humans during long-term interaction. We design an interaction around a turn-based game, the Chopstick Game, in which two agents play with numbers using their fingers. As the human and robot agents alternate turns, the human user engages with the game and learns new winning skills from the robot agent. The conventional valence-arousal space is applied to design the emotional interaction. For the robotic system, we implement finger gesture recognition and emotional behaviors designed for a three-dimensional virtual robot. In experimental tests, the validity of the proposed schemes is verified and the effect of the emotional interaction is discussed.
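A valence-arousal design like the one above maps each game event to a point in a 2-D emotion space and then selects a display behavior by quadrant. The event names, coordinates, and quadrant labels below are illustrative guesses, not the paper's values.

```python
def emotion_label(valence: float, arousal: float) -> str:
    """Quadrant labels in the conventional valence-arousal space."""
    if valence >= 0:
        return "excited" if arousal >= 0 else "content"
    return "angry" if arousal >= 0 else "sad"

# Hypothetical mapping from turn-based game events to (valence, arousal) points.
EVENTS = {
    "robot_wins_round": (0.8, 0.6),
    "robot_loses_round": (-0.6, 0.5),
    "game_idle": (0.1, -0.7),
}
```

With this mapping the virtual robot would display an "excited" behavior after winning a round and an "angry" one after losing, giving the emotional structure a simple, inspectable form.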

Recognizing User Engagement and Intentions based on the Annotations of an Interaction Video (상호작용 영상 주석 기반 사용자 참여도 및 의도 인식)

  • Jang, Minsu;Park, Cheonshu;Lee, Dae-Ha;Kim, Jaehong;Cho, Young-Jo
    • Journal of Institute of Control, Robotics and Systems / v.20 no.6 / pp.612-618 / 2014
  • A pattern classifier-based approach for recognizing the internal states of human participants in interactions is presented along with experimental results. The approach consists of collecting video recordings of human-human or human-robot interactions and then analyzing the videos based on human-coded annotations. The annotations cover social signals directly observed in the video recordings and the internal states of human participants indirectly inferred from those observed social signals. A pattern classifier is then trained and tested on the annotation data. In our experiments on human-robot interaction, 7 video recordings were collected and annotated with 20 social signals and 7 internal states. Using a C4.5-based decision tree classifier, the experiments achieved recall rates of 84.83% for interaction engagement, 93% for concentration intention, and 81% for task comprehension level.
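The classifier step, mapping annotated social signals to an inferred internal state, can be illustrated with a toy one-level decision stump. The paper used a C4.5 decision tree; this hand-rolled stump only shows the feature-to-label shape of the problem, and the signal names, values, and labels are invented.

```python
def train_stump(samples):
    """samples: list of (features_dict, engaged_bool). Returns the best (feature, threshold)."""
    best_acc, best_rule = -1.0, None
    for feat in samples[0][0]:
        for thresh in sorted({feats[feat] for feats, _ in samples}):
            acc = sum((feats[feat] >= thresh) == label
                      for feats, label in samples) / len(samples)
            if acc > best_acc:
                best_acc, best_rule = acc, (feat, thresh)
    return best_rule

def predict(rule, feats):
    """Predict engagement from one annotated observation."""
    feat, thresh = rule
    return feats[feat] >= thresh

# Made-up annotations: two social signals per clip, labeled with engagement.
data = [
    ({"gaze_at_robot": 0.9, "speech_rate": 0.4}, True),
    ({"gaze_at_robot": 0.8, "speech_rate": 0.1}, True),
    ({"gaze_at_robot": 0.2, "speech_rate": 0.5}, False),
    ({"gaze_at_robot": 0.1, "speech_rate": 0.3}, False),
]
rule = train_stump(data)
```

On this toy data the stump learns that gaze toward the robot best separates engaged from disengaged clips; a full decision tree generalizes the same idea by recursively splitting on many such signals.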

An analysis of the component of Human-Robot Interaction for Intelligent room

  • Park, Jong-Chan;Kwon, Dong-Soo
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2005.06a / pp.2143-2147 / 2005
  • Human-robot interaction (HRI) has recently become one of the most important issues in the field of robotics. Understanding and predicting the intentions of human users is a major difficulty for robotic programs. In this paper, we suggest an interaction method that allows the robot to carry out the human user's desires in an intelligent-room domain, even when the user does not give a specific command for the action. To achieve this, we constructed a full system architecture for an intelligent room in which the following components are present and sequentially interconnected: decision-making based on a Bayesian belief network, responding to human commands, and generating queries to remove ambiguities. The robot obtains all the necessary information by analyzing the user's condition and the environmental state of the room. This information is then used to evaluate the probabilities at the output nodes of the Bayesian belief network, which is composed of nodes representing several states and the causal relationships between them. Our study shows that the suggested system and method improve a robot's ability to understand human commands, intuit human desires, and predict human intentions, resulting in a comfortable intelligent room for the human user.
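The inference idea, where evidence about the user's condition and the room's state updates the probability of an unstated desire, can be sketched with a single binary desire node and conditionally independent evidence. This naive-Bayes toy is far simpler than the paper's belief network, and all variable names and probabilities are invented.

```python
def posterior(prior: float, likelihoods: dict, evidence: dict) -> float:
    """P(desire | evidence) for a binary desire node with independent evidence.

    likelihoods[var] = (P(var=True | desire), P(var=True | not desire)).
    """
    p_true, p_false = prior, 1.0 - prior
    for var, observed in evidence.items():
        l_true, l_false = likelihoods[var]
        p_true *= l_true if observed else 1.0 - l_true
        p_false *= l_false if observed else 1.0 - l_false
    return p_true / (p_true + p_false)

# Hypothetical desire: "user wants the TV on". Evidence variables are made up.
likelihoods = {"lights_off": (0.8, 0.2), "user_on_sofa": (0.7, 0.3)}
p = posterior(0.3, likelihoods, {"lights_off": True, "user_on_sofa": True})
```

Here observing both cues raises the probability from the 0.3 prior to 0.8; in an intelligent room, the robot could act when the posterior is high and generate a clarifying query when it falls in an ambiguous middle range.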


Novel User Interaction Technologies in 3D Display Systems

  • Hopf, Klaus;Chojecki, Paul;Neumann, Frank
    • Korean Information Display Society: Conference Proceedings / 2007.08b / pp.1227-1230 / 2007
  • This paper describes recent advances in R&D at Fraunhofer HHI (Germany) that are believed to provide key technologies for the development of future human-machine interfaces. The paper focuses on vision-based interaction technologies, which will be an essential component of future three-dimensional display systems.


Feature Extraction Based on Hybrid Skeleton for Human-Robot Interaction (휴먼-로봇 인터액션을 위한 하이브리드 스켈레톤 특징점 추출)

  • Joo, Young-Hoon;So, Jea-Yun
    • Journal of Institute of Control, Robotics and Systems / v.14 no.2 / pp.178-183 / 2008
  • Human motion analysis is studied as a new method for human-robot interaction (HRI) because it involves key HRI techniques such as motion tracking and pose recognition. To analyze human motion, extracting features of the human body from sequential images plays an important role. After the silhouette of the human body is found in sequential images obtained by a CCD color camera, a skeleton model is frequently used to represent the human motion. In this paper, using the silhouette of the human body, we propose a feature extraction method based on a hybrid skeleton for detecting human motion. Finally, we show the effectiveness and feasibility of the proposed method through experiments.
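The silhouette-to-feature step can be illustrated in miniature: given a binary silhouette mask, take simple extremal foreground pixels as crude feature points. This is only a toy stand-in for the paper's hybrid skeleton method, and the mask below is an invented example.

```python
def extremal_points(mask):
    """mask: 2-D list of 0/1 silhouette pixels. Returns extremal (row, col) points."""
    pts = [(r, c) for r, row in enumerate(mask) for c, v in enumerate(row) if v]
    return {
        "top": min(pts, key=lambda p: p[0]),      # e.g. head
        "bottom": max(pts, key=lambda p: p[0]),   # e.g. a foot
        "left": min(pts, key=lambda p: p[1]),     # e.g. left hand
        "right": max(pts, key=lambda p: p[1]),    # e.g. right hand
    }

# Tiny made-up silhouette mask (1 = foreground).
mask = [
    [0, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
]
pts = extremal_points(mask)
```

Tracking how such feature points move across sequential frames is the kind of information a skeleton-based representation feeds into motion tracking and pose recognition.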