• Title/Summary/Keyword: Human-Robot interaction


The Usage of Anthropomorphic Forms in Robot Design and the Method of Evaluation (로봇 디자인에서 의인화 기법의 활용 평가 방법에 관한 연구)

  • Choi, Jeong-Gun; Kim, Myung-Suk
    • Proceedings of the HCI Society of Korea Conference / 2008.02b / pp.126-130 / 2008
  • It takes only a few seconds to find an artifact that has an anthropomorphic form; daily-life products offer numerous examples of human shapes. Using anthropomorphic form has been a basic design strategy, especially when industrial designers design intelligent service robots, because most robot features are basically derived from humans. It is therefore necessary to use anthropomorphic form not only in appearance design but also in interaction design. To use anthropomorphic form effectively, one needs to measure how similar the artifact is to a human and then evaluate whether the use of anthropomorphic form fits the artifact. The goal of this study was to set up an evaluation standard for anthropomorphism in robot design. We suggest three criteria for the evaluation standard: 'anthropomorphic form in appearance', 'anthropomorphic form in Human-Robot Interaction', and 'accordance between the two former criteria'. We expect that when designers include an anthropomorphism evaluation step in their robot design process, robots will be more preferred by users and easier to understand how to interact with (a minimal scoring sketch follows this entry).

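The evaluation standard above names three criteria, but the abstract gives no formula for combining them. The following is only a minimal scoring sketch under explicit assumptions (0-1 ratings, equal weights, accordance measured as the closeness of the two ratings); the paper's actual procedure may differ.

```python
from dataclasses import dataclass

@dataclass
class AnthropomorphismScores:
    """Hypothetical 0-1 ratings collected from evaluators."""
    appearance: float   # anthropomorphic form in appearance
    interaction: float  # anthropomorphic form in Human-Robot Interaction

def evaluate(scores: AnthropomorphismScores) -> dict:
    # Accordance is assumed here to be the closeness of the two ratings;
    # the paper's actual definition may differ.
    accordance = 1.0 - abs(scores.appearance - scores.interaction)
    overall = (scores.appearance + scores.interaction + accordance) / 3.0
    return {
        "appearance": scores.appearance,
        "interaction": scores.interaction,
        "accordance": accordance,
        "overall": overall,
    }

print(evaluate(AnthropomorphismScores(appearance=0.8, interaction=0.5)))
```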

Recognition and Generation of Facial Expression for Human-Robot Interaction (로봇과 인간의 상호작용을 위한 얼굴 표정 인식 및 얼굴 표정 생성 기법)

  • Jung Sung-Uk; Kim Do-Yoon; Chung Myung-Jin; Kim Do-Hyoung
    • Journal of Institute of Control, Robotics and Systems / v.12 no.3 / pp.255-263 / 2006
  • Over the last decade, face analysis (face detection, face recognition, and facial expression recognition) has become a lively and expanding research field. As computer-animated agents and robots bring a social dimension to human-computer interaction, interest in this field is increasing rapidly. In this paper, we introduce an artificial emotion-mimicking system that can recognize human facial expressions and then generate the recognized expression. To recognize facial expressions in real time, we propose a facial expression classification method performed by weak classifiers obtained from new rectangular feature types. In addition, we generate artificial facial expressions with the developed robotic system, which is based on biological observation. Finally, experimental results of facial expression recognition and generation demonstrate the validity of our robotic system.
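
The abstract above describes weak classifiers built from rectangular feature types for real-time expression classification. The sketch below only illustrates the general idea with a standard two-rectangle Haar-like feature and a decision stump; the paper's new feature types and boosting setup are not reproduced, and all positions and thresholds are illustrative.

```python
import numpy as np

def integral_image(gray: np.ndarray) -> np.ndarray:
    """Summed-area table so any rectangle sum costs four lookups."""
    return gray.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii: np.ndarray, x: int, y: int, w: int, h: int) -> float:
    """Sum of pixels in the rectangle [y:y+h, x:x+w] using the integral image."""
    ii = np.pad(ii, ((1, 0), (1, 0)))          # guard row/column of zeros
    return float(ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x])

def two_rect_feature(ii: np.ndarray, x: int, y: int, w: int, h: int) -> float:
    """Generic Haar-like feature: left half minus right half (illustration only)."""
    return rect_sum(ii, x, y, w // 2, h) - rect_sum(ii, x + w // 2, y, w // 2, h)

def weak_classifier(feature_value: float, threshold: float, polarity: int = 1) -> int:
    """Decision stump of the kind AdaBoost combines into a strong classifier."""
    return 1 if polarity * feature_value < polarity * threshold else 0

# Toy usage on a random 24x24 "face" patch.
patch = np.random.randint(0, 256, (24, 24)).astype(np.float64)
ii = integral_image(patch)
f = two_rect_feature(ii, x=4, y=6, w=16, h=8)
print(weak_classifier(f, threshold=0.0))
```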

Life-like Facial Expression of Mascot-Type Robot Based on Emotional Boundaries (감정 경계를 이용한 로봇의 생동감 있는 얼굴 표정 구현)

  • Park, Jeong-Woo; Kim, Woo-Hyun; Lee, Won-Hyong; Chung, Myung-Jin
    • The Journal of Korea Robotics Society / v.4 no.4 / pp.281-288 / 2009
  • Many robots have evolved to imitate human social skills so that sociable interaction with humans is possible. Socially interactive robots require abilities different from those of conventional robots; for instance, human-robot interactions are accompanied by emotion, much like human-human interactions. Robot emotional expression is therefore very important for humans, particularly facial expression, which plays an important role among the non-verbal forms of communication. In this paper, we introduce a method for creating lifelike facial expressions in robots by varying the affect values that constitute the robot's emotions, based on emotional boundaries. The proposed method was examined in experiments with two facial robot simulators (an illustrative affect-boundary sketch follows this entry).

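The paper maps varying affect values to facial expressions using emotional boundaries. As a rough illustration only, the sketch below assumes a 2-D valence-arousal affect space with nearest-prototype boundaries and distance-based intensity; the paper's actual affect dimensions and boundary definitions may differ.

```python
import math

# Hypothetical expression prototypes in a 2-D affect space (valence, arousal).
# Switching expressions where the nearest prototype changes plays the role of
# an emotional boundary in this toy model.
EXPRESSIONS = {
    "happy":   (0.7, 0.5),
    "sad":     (-0.7, -0.4),
    "angry":   (-0.6, 0.7),
    "neutral": (0.0, 0.0),
}

def select_expression(valence: float, arousal: float) -> str:
    """Pick the expression whose prototype is nearest to the current affect state."""
    return min(EXPRESSIONS, key=lambda name: math.dist((valence, arousal), EXPRESSIONS[name]))

def expression_intensity(valence: float, arousal: float) -> float:
    """Scale intensity by distance from the neutral origin, clipped to [0, 1]."""
    return min(1.0, math.hypot(valence, arousal))

state = (0.4, 0.6)   # affect values drifting over time would animate the face
print(select_expression(*state), round(expression_intensity(*state), 2))
```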

A Robust Fingertip Extraction and Extended CAMSHIFT based Hand Gesture Recognition for Natural Human-like Human-Robot Interaction (강인한 손가락 끝 추출과 확장된 CAMSHIFT 알고리즘을 이용한 자연스러운 Human-Robot Interaction을 위한 손동작 인식)

  • Lee, Lae-Kyoung; An, Su-Yong; Oh, Se-Young
    • Journal of Institute of Control, Robotics and Systems / v.18 no.4 / pp.328-336 / 2012
  • In this paper, we propose a robust fingertip extraction and extended Continuously Adaptive Mean Shift (CAMSHIFT) based hand gesture recognition method for natural, human-like HRI (Human-Robot Interaction). First, for efficient and rapid hand detection, hand candidate regions are segmented by combining a robust $YC_bC_r$ skin color model with Haar-like-feature-based AdaBoost. Using the extracted hand candidate regions, we estimate the palm region and fingertip position from distance-transform-based voting and the geometrical features of hands. From the hand orientation and palm center position, we find the optimal fingertip position and its orientation. Then, using extended CAMSHIFT, we reliably track the 2D hand gesture trajectory with the extracted fingertip. Finally, we apply conditional density propagation (CONDENSATION) to recognize pre-defined temporal motion trajectories. Experimental results show that the proposed algorithm not only rapidly extracts the hand region, with an accurately extracted fingertip and its angle, but also robustly tracks the hand under different illumination, size, and rotation conditions. Using these results, we successfully recognize multiple hand gestures.
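
As a loose illustration of the front end of such a pipeline, the sketch below segments skin-colored regions in $YC_bC_r$ and tracks the resulting blob with OpenCV's CamShift. The thresholds are assumptions, and the paper's fingertip extraction, extended-CAMSHIFT modifications, and CONDENSATION recognizer are not reproduced.

```python
import cv2
import numpy as np

# Rough skin segmentation bounds in YCrCb; these thresholds are an assumption,
# not the paper's calibrated skin color model.
SKIN_LO = np.array([0, 133, 77], dtype=np.uint8)
SKIN_HI = np.array([255, 173, 127], dtype=np.uint8)

def skin_mask(bgr: np.ndarray) -> np.ndarray:
    """Binary skin-probability image used as input to CamShift."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    return cv2.inRange(ycrcb, SKIN_LO, SKIN_HI)

cap = cv2.VideoCapture(0)
track_window = (200, 150, 100, 100)  # initial hand search window (x, y, w, h)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    prob = skin_mask(frame)                       # 0/255 skin mask
    rot_rect, track_window = cv2.CamShift(prob, track_window, criteria)
    x, y, w, h = track_window
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("hand tracking", frame)
    if cv2.waitKey(1) == 27:                      # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```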

Performance Evaluation of Human Robot Interaction Components in Real Environments (실 환경에서의 인간로봇상호작용 컴포넌트의 성능평가)

  • Kim, Do-Hyung; Kim, Hye-Jin; Bae, Kyung-Sook; Yun, Woo-Han; Ban, Kyu-Dae; Park, Beom-Chul; Yoon, Ho-Sub
    • The Journal of Korea Robotics Society / v.3 no.3 / pp.165-175 / 2008
  • The need for HRI technology for advanced intelligent services has recently been increasing, and the technology itself has also improved. However, HRI components have been evaluated under stable, controlled laboratory conditions, and there are no evaluation results for their performance in real environments. Robot service providers and users have therefore not been getting sufficient information on the level of current HRI technology. In this paper, we provide performance evaluation results for HRI components on robot platforms providing actual services at pilot service sites. For the evaluation, we select a face detection component, a speaker gender classification component, and a sound localization component as representative HRI components close to commercialization. The goal of this paper is to provide valuable information and reference performance for applying HRI components in real robot environments (a minimal evaluation-harness sketch follows this entry).

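Since the paper reports field performance rather than an algorithm, the sketch below only illustrates the kind of scoring a field evaluation might use: comparing logged component outputs against annotations from a pilot site. The log format and metric are assumptions, not the paper's protocol.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TrialLog:
    """One logged interaction trial from a pilot service site (hypothetical format)."""
    ground_truth: str   # e.g. annotated gender or sound-direction label
    prediction: str     # component output recorded on the robot

def success_rate(logs: List[TrialLog]) -> float:
    """Fraction of trials in which the component output matched the annotation."""
    if not logs:
        return 0.0
    return sum(t.ground_truth == t.prediction for t in logs) / len(logs)

# Toy example: speaker gender classification logs from a field trial.
logs = [TrialLog("female", "female"), TrialLog("male", "female"), TrialLog("male", "male")]
print(f"field accuracy: {success_rate(logs):.2f}")
```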

The Cognition of Non-Rigid Objects Using Linguistic Cognitive System for Human-Robot Interaction (인간로봇 상호작용을 위한 언어적 인지시스템 기반의 비강체 인지)

  • Ahn, Hyun-Sik
    • Journal of Institute of Control, Robotics and Systems / v.15 no.11 / pp.1115-1121 / 2009
  • For HRI (Human-Robot Interaction) in daily life, robots need to recognize non-rigid objects such as clothes and blankets. However, recognizing non-rigid objects is challenging because their shapes vary with where and how they are laid down. In this paper, cognition of non-rigid objects based on a cognitive system is presented. The characteristics of non-rigid objects are analyzed from the viewpoint of HRI and used to design a framework for their cognition. We adopt a linguistic cognitive system to describe all events that happen to the robot. When an event related to a non-rigid object occurs, the cognitive system describes the event in sentential form, stores it in a sentential memory, and depicts the objects with a spatial model to be used as references. The cognitive system parses each sentence syntactically and semantically, connecting the nouns that denote objects to their models. To answer human questions, sentences are retrieved by searching temporal information in the sentential memory and by spatial reasoning over a schematic imagery. Experiments show the feasibility of the cognitive system for cognizing non-rigid objects in HRI.
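
As a toy illustration of the sentential-memory idea described above (storing events as timestamped sentences and answering questions by temporal search), the following sketch uses hypothetical sentences and a keyword lookup; the paper's syntactic/semantic parsing and spatial reasoning are not reproduced.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional, Tuple

@dataclass
class SententialMemory:
    """Stores events about objects as timestamped sentences (illustrative only)."""
    sentences: List[Tuple[datetime, str]] = field(default_factory=list)

    def describe(self, sentence: str) -> None:
        # e.g. an event involving a non-rigid object, recorded in sentential form
        self.sentences.append((datetime.now(), sentence))

    def answer(self, keyword: str) -> Optional[str]:
        """Return the most recent stored sentence mentioning the keyword (temporal search)."""
        for when, sentence in reversed(self.sentences):
            if keyword in sentence:
                return f"[{when:%H:%M:%S}] {sentence}"
        return None

memory = SententialMemory()
memory.describe("the robot folded the blanket")
memory.describe("the robot put the blanket on the sofa")
print(memory.answer("blanket"))   # latest event involving the blanket
```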

A Study on Interaction Design of Companion Robots Based on Emotional State (감정 상태에 따른 컴패니언 로봇의 인터랙션 디자인 : 공감 인터랙션을 중심으로)

  • Oh, Ye-Jeon; Shin, Yoon-Soo; Lee, Jee-Hang; Kim, Jin-Woo
    • Journal of Digital Contents Society / v.18 no.7 / pp.1293-1301 / 2017
  • Recent changes in social structure, such as the shift toward nuclear families and individualization, are leading to personal and social problems that can be aggravated by the amplification of negative emotions. The absence of family members who once provided a sense of psychological stability can be considered a representative cause of the emotional difficulties of modern people. Companion robots that communicate with users in daily life can address this personal and social problem through empathic interaction. In this study, we developed a refined empathic interaction design through prototyping of emotional robots. The results confirmed that facial interaction strongly affects the robot's emotional interaction, and that the robot's interaction improves its perceived emotionality. This study has theoretical and practical significance in that it makes the emotional robot's interaction more sophisticated and presents guidelines for empathic interaction design based on the experimental results.

Analysis on Children-Robot Interaction with Dramatic Plays for Better Augmented Reality (어린이 극놀이 증강현실감을 위한 아동로봇상호작용 분석)

  • Han, Jeong-Hye
    • Journal of Digital Contents Society / v.17 no.6 / pp.531-536 / 2016
  • This study analyzes the feelings children have when interacting with robots in a dramatic play setting that uses augmented reality for Human-Robot Interaction (HRI). Existing QR-marker-based dramatic play activities with robots were edited and their weaknesses corrected so that children could interact with robots more effectively. Children's levels of interest and engagement in the dramatic play activities, the accuracy of the robotic props, and the perceived smartness of the robots were then analyzed throughout the children's interactions in these augmented-reality activities. Younger participants were more likely to find the robots interesting and intelligent, and participants with no previous experience with robots showed relatively higher interest in them and tended to notice changes in the robots' costumes.

Deep Level Situation Understanding for Casual Communication in Humans-Robots Interaction

  • Tang, Yongkang; Dong, Fangyan; Yoichi, Yamazaki; Shibata, Takanori; Hirota, Kaoru
    • International Journal of Fuzzy Logic and Intelligent Systems / v.15 no.1 / pp.1-11 / 2015
  • A concept of Deep Level Situation Understanding is proposed to realize human-like natural communication (called casual communication) among multiple agents (e.g., humans and robots/machines). Deep level situation understanding consists of surface level understanding (such as gesture/posture, facial expression, and speech/voice understanding), emotion understanding, intention understanding, and atmosphere understanding, achieved by applying customized knowledge of each agent and by taking thoughtfulness into consideration. The proposal aims to reduce the burden on humans in human-robot interaction and to realize harmonious communication by excluding unnecessary troubles or misunderstandings among agents, ultimately helping to create a peaceful, happy, and prosperous human-robot society. A simulated experiment validates the deep level situation understanding system on a scenario in which a meeting-room reservation is made between a human employee and a secretary robot. The proposed system is intended for service robot systems, to smooth communication and avoid misunderstandings among agents.
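
As a simplified illustration of the layered structure described above (surface cues feeding emotion, intention, and atmosphere understanding), the sketch below uses toy rules in place of the paper's knowledge-based inference; all field names and rules are assumptions.

```python
from dataclasses import dataclass

@dataclass
class SurfaceCues:
    """Surface level understanding inputs (hypothetical, simplified)."""
    gesture: str
    facial_expression: str
    utterance: str

@dataclass
class SituationUnderstanding:
    """Deeper layers inferred from the surface cues."""
    emotion: str
    intention: str
    atmosphere: str

def understand(cues: SurfaceCues) -> SituationUnderstanding:
    # Toy rules standing in for knowledge-based emotion/intention/atmosphere inference.
    emotion = "stressed" if cues.facial_expression == "frown" else "calm"
    intention = "request" if "please" in cues.utterance.lower() else "inform"
    atmosphere = "tense" if emotion == "stressed" else "relaxed"
    return SituationUnderstanding(emotion, intention, atmosphere)

# Toy scenario echoing the meeting-room reservation example.
cues = SurfaceCues(gesture="point_at_calendar",
                   facial_expression="frown",
                   utterance="Please book the meeting room for 3 pm.")
print(understand(cues))
```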