• Title/Summary/Keyword: Human robot interaction

Design and Development of Modular Replaceable AI Server for Image Deep Learning in Social Robots on Edge Devices (엣지 디바이스인 소셜 로봇에서의 영상 딥러닝을 위한 모듈 교체형 인공지능 서버 설계 및 개발)

  • Kang, A-Reum; Oh, Hyun-Jeong; Kim, Do-Yun; Jeong, Gu-Min
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.6 / pp.470-476 / 2020
  • In this paper, we present the design of a modular, replaceable AI server for image deep learning that separates the server from the edge device so that the AI blocks can run independently, together with the method of data transmission and reception between them. The modular, replaceable AI server reduces the dependency between the social robot and the edge device on which the robot platform runs, improving operational stability. When a user requests a function for interaction with the social robot, the server invokes the corresponding module and returns only the results. Each module can be maintained and replaced individually by the server manager. Compared with existing server systems, the modular, replaceable AI server performs more efficiently with respect to server maintenance and differences in the scale of the programs executed. As a result, a wider variety of image deep learning can be included in human-robot interaction scenarios, and the approach can also be applied efficiently to image deep learning AI servers beyond robot platforms.
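
The abstract describes a registry of replaceable modules that the server invokes on behalf of the robot, returning only the results. The snippet below is a minimal illustrative sketch of that dispatch idea, not the authors' implementation; the class, module names, and payloads are hypothetical.

```python
# Minimal sketch (hypothetical names): a registry of replaceable AI modules;
# the edge device (robot) requests a function and receives only the result.
from typing import Any, Callable, Dict

class ModularAIServer:
    def __init__(self) -> None:
        self._modules: Dict[str, Callable[[bytes], Any]] = {}

    def register(self, name: str, module: Callable[[bytes], Any]) -> None:
        """Add or replace a deep-learning module without touching the robot."""
        self._modules[name] = module

    def handle_request(self, name: str, image: bytes) -> dict:
        """Run the requested module and return only its result."""
        if name not in self._modules:
            return {"error": f"unknown module '{name}'"}
        return {"result": self._modules[name](image)}

def face_detector(image: bytes) -> int:
    # Stand-in for an image deep-learning model running on the server.
    return 0  # e.g., number of detected faces

server = ModularAIServer()
server.register("face_detection", face_detector)
print(server.handle_request("face_detection", b"<jpeg bytes>"))
```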

Uncanny Valley: Relationships Between Anthropomorphic Attribution to Robots, Mind Perception, and Moral Care (불쾌한 골짜기: 로봇 속성의 의인화, 마음지각 및 도덕적 처우의 관계)

  • Shin, Hong Im
    • Science of Emotion and Sensibility / v.24 no.4 / pp.3-16 / 2021
  • The attribution of human traits, emotions, and intentions to nonhuman entities such as robots is known as anthropomorphism. Two studies were conducted to examine whether human-robot interaction is affected by anthropomorphic framing of robots. In Study 1, participants were presented with pictures of robots that varied in human similarity of appearance. According to the results, uncanny feelings toward a robot increased with higher levels of human similarity. Furthermore, as the level of mind attribution increased, participants tended to attribute more humanlike abilities to nonhuman agents. In Study 2, a robot was described as either machine-like or humanlike in a priming story, and it was examined whether significant differences exist in mind attribution and moral care. According to the findings, participants attributed more mind to the robot when its behavior was framed anthropomorphically. Furthermore, in the increased-anthropomorphism condition, a higher level of moral care was observed than in the other condition. This means that humanlike appearance may increase uncanny feelings, whereas anthropomorphic attribution may facilitate social interactions between humans and robots. Limitations as well as implications for future research are discussed.

Operator Capacity Assessment Method for the Supervisory Control of Unmanned Military Vehicle (군사로봇의 감시제어에서 운용자 역량 평가 방법에 관한 연구)

  • Choi, Sang-Yeong; Yang, Ji-Hyeon
    • The Journal of Korea Robotics Society / v.12 no.1 / pp.94-106 / 2017
  • Unmanned military vehicles (UMVs) will be increasingly applied to various military operations. These UMVs are most commonly characterized as handling the "4D" tasks - dull, dirty, dangerous, and difficult - through automation. Although most UMVs are designed with a high degree of autonomy, the human operator will still intervene in the robots' operation and tele-operate them to accomplish the mission. Thus, operator capacity, along with robot autonomy and user interface, is one of the important design factors in the research and development of UMVs. In this paper, we propose a method to assess the operator capacity of UMVs. The method comprises six steps (problem, assumption, goal function identification, operator task analysis, task modeling & simulation, results and assessment), and colored Petri nets are used for the modeling and simulation. An illustrative example is described at the end of the paper.
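
The assessment pipeline culminates in a task modeling & simulation step (the paper uses colored Petri nets). As a rough illustration only, and not the authors' Petri-net model, the following sketch estimates how busy a single operator would be while serving intervention tasks from several UMVs; the arrival and service times are hypothetical.

```python
# Simplified stand-in for the "task modeling & simulation" step:
# one operator serving supervisory-control tasks in arrival order.
def operator_utilization(arrivals, services, horizon):
    """Fraction of the mission window the operator spends on tasks."""
    busy, clock = 0.0, 0.0
    for arrival, service in sorted(zip(arrivals, services)):
        clock = max(clock, arrival) + service
        busy += service
    return busy / horizon

# Hypothetical tasks generated by two UMVs over a 60-minute window.
arrivals = [2, 10, 14, 25, 31, 40, 47, 55]   # minutes
services = [3, 4, 2, 6, 3, 5, 4, 2]          # minutes per task
print(f"operator utilization: {operator_utilization(arrivals, services, 60):.2f}")
```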

The Emotional Boundary Decision in a Linear Affect-Expression Space for Effective Robot Behavior Generation (효과적인 로봇 행동 생성을 위한 선형의 정서-표정 공간 내 감정 경계의 결정 -비선형의 제스처 동기화를 위한 정서, 표정 공간의 영역 결정)

  • Jo, Su-Hun; Lee, Hui-Sung; Park, Jeong-Woo; Kim, Min-Gyu; Chung, Myung-Jin
    • Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집) / 2008.02a / pp.540-546 / 2008
  • In the near future, robots should be able to understand human emotional states and exhibit appropriate behaviors accordingly. In human-human interaction, as much as 93% of communication is reported to consist of the speaker's nonverbal behavior. Bodily movements convey information about the intensity of emotion. Recent personal robots can interact with humans through multiple modalities such as facial expression, gesture, LEDs, sound, and sensors. However, a posture requires only a position and an orientation, whereas facial expressions and gestures involve movement, and verbal, vocal, musical, and color expressions require timing information. Because synchronization among these modalities is a key problem, emotion expression needs a systematic approach. For example, at a low intensity of surprise the face can express the emotion but a gesture cannot, because gesture expression is not linear. Emotional boundaries therefore need to be determined for effective robot behavior generation and for synchronization with other expressive modalities. If so, how can we define these emotional boundaries, and how can the modalities be synchronized with one another?
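
One way to read the emotional-boundary question is as a set of per-modality intensity thresholds in the linear affect space: a modality participates in an expression only when the emotion's intensity exceeds its boundary. The sketch below illustrates that reading with made-up thresholds; it is not the boundary decision derived in the paper.

```python
# Illustrative sketch: per-modality lower boundaries in a linear affect space.
# The threshold values are hypothetical.
MODALITY_BOUNDARIES = {
    "face":    0.1,   # facial expression scales down to very low intensities
    "led":     0.1,
    "voice":   0.3,
    "gesture": 0.5,   # gestures only become expressible above mid intensity
}

def expressible_modalities(intensity: float) -> list:
    """Return the modalities whose emotional boundary the intensity exceeds."""
    return [m for m, bound in MODALITY_BOUNDARIES.items() if intensity >= bound]

print(expressible_modalities(0.2))  # low-intensity surprise: face and LED only
print(expressible_modalities(0.8))  # high intensity: all modalities synchronized
```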

A Review of Haptic Perception: Focused on Sensation and Application

  • Song, Joobong; Lim, Ji Hyoun; Yun, Myung Hwan
    • Journal of the Ergonomics Society of Korea / v.31 no.6 / pp.715-723 / 2012
  • Objective: The aim of this study is to review haptic perception research from three perspectives - cutaneous & proprioceptive sensation, active & passive touch, and cognition & emotion - and to identify issues for implementing haptic interactions. Background: Although haptic technologies have improved and become practical, more research on methods of application is still needed to realize multimodal interaction technology. A systematic approach to haptic perception is required to understand emotional experience and social messages, as well as tactile feedback. Method: Content analysis was conducted to analyze trends in haptics-related research. Changes in issues and topics were investigated in terms of sensory dimensions and the different contents delivered via tactile perception. Result: The identified research opportunities were haptic perception in various body segments and emotion-related proprioceptive sensation. Conclusion: Understanding the mechanism by which users perceive haptic stimuli will help in developing effective haptic interaction, and this study provides insights into what to focus on for the future of haptic interaction. Application: This research is expected to inform the use of haptic perception to convey presence and emotional response in fields such as human-robot, human-device, and telecommunication interaction.

Research on Service Extension of Restaurant Serving Robot - Taking Haidilao Hot Pot Intelligent Restaurant in Beijing as an Example (레스토랑 서빙 로봇의 서비스 확장에 관한 연구 - 중국 베이징 하이디라오 스마트 레스토랑을 사례로 연구)

  • Zhao, Yuqi; Pan, Young-Hwan
    • Journal of the Korea Convergence Society / v.11 no.4 / pp.17-25 / 2020
  • This study focuses on the analysis of the service process and interaction mode of the serving robot used in the restaurant. Through user research, shadowing, and in-depth interviews with customers and catering service personnel, this paper analyzes the contact points between catering service machines, staff, and users, and constructs a user journey map to understand users' expectations. In addition to the delivery service, which can be allocated between the machine and people, the service blueprint can also include ordering, reception, and table-cleaning services. The final proposal is to improve the existing human-machine interface and to design a new service scheme.

KOBIE: A Pet-type Emotion Robot (KOBIE: 애완형 감성로봇)

  • Ryu, Joung-Woo; Park, Cheon-Shu; Kim, Jae-Hong; Kang, Sang-Seung; Oh, Jin-Hwan; Sohn, Joo-Chan; Cho, Hyun-Kyu
    • The Journal of Korea Robotics Society / v.3 no.2 / pp.154-163 / 2008
  • This paper presents the concept for the development of a pet-type robot with an emotion engine. The pet-type robot, named KOBIE (KOala roBot with Intelligent Emotion), can interact with a person through touch. KOBIE is equipped with tactile sensors on its body and recognizes touching behaviors such as "Stroke", "Tickle", and "Hit". We have also covered KOBIE with synthetic fur fabric so that people can feel affection for it. KOBIE can also express an emotional status that varies according to the circumstances. The emotion engine of KOBIE's emotion expression system generates an emotional status in an emotion vector space associated with predefined needs and mood models. To examine the feasibility of our emotion expression system, we verified that the emotional status in the emotion vector space changes in response to touching behaviors. We also examined the reactions of children who interacted with three kinds of pet-type robots, KOBIE, PARO, and AIBO, for roughly 10 minutes each, to investigate the children's preferences for pet-type robots.
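
As a rough illustration of how an emotion engine might move a point in an emotion vector space in response to recognized touch behaviors, the sketch below uses two generic affect axes and made-up touch effects; it is not KOBIE's actual needs and mood models.

```python
# Illustrative sketch: updating a 2-D emotion vector from touch behaviors.
# Axes, behaviors, weights, and decay are hypothetical.
import numpy as np

EMOTION_AXES = ["pleasure", "arousal"]
TOUCH_EFFECTS = {
    "Stroke": np.array([+0.3, -0.1]),   # soothing: more pleasant, calmer
    "Tickle": np.array([+0.2, +0.4]),   # playful: pleasant and arousing
    "Hit":    np.array([-0.5, +0.5]),   # aversive: unpleasant and arousing
}

def update_emotion(state, behavior, decay=0.9):
    """Decay the current state toward neutral, then apply the touch effect."""
    new_state = decay * state + TOUCH_EFFECTS.get(behavior, np.zeros(2))
    return np.clip(new_state, -1.0, 1.0)

state = np.zeros(2)
for touch in ["Stroke", "Stroke", "Hit"]:
    state = update_emotion(state, touch)
    print(dict(zip(EMOTION_AXES, state.round(2))))
```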

Visual Tracking Using Improved Multiple Instance Learning with Co-training Framework for Moving Robot

  • Zhou, Zhiyu; Wang, Junjie; Wang, Yaming; Zhu, Zefei; Du, Jiayou; Liu, Xiangqi; Quan, Jiaxin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.11 / pp.5496-5521 / 2018
  • Object detection and tracking is a basic capability that mobile robots need to achieve natural human-robot interaction. In this paper, an object tracking system for a mobile robot is designed and validated using an improved multiple instance learning algorithm. First, the improved multiple instance learning algorithm significantly reduces model drift. Second, to improve the capability of the classifiers, an active sample selection strategy is proposed that optimizes a bag Fisher information function instead of the bag likelihood function, dynamically choosing the most discriminative samples for classifier training. Furthermore, we integrate a co-training criterion into the algorithm to update the appearance model accurately and avoid error accumulation. Finally, we evaluate our system on challenging sequences and in an indoor laboratory environment, and the experimental results demonstrate that the proposed methods can stably and robustly track moving objects.
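
The active sample selection idea ranks candidate samples by how informative they are for the classifier. As a simplified stand-in for the paper's bag Fisher information criterion, the sketch below scores each candidate patch for a logistic weak classifier by the per-sample Fisher information p(1-p) and keeps the top k; the features and weights are random placeholders.

```python
# Simplified sketch: pick the k candidates with the largest Fisher
# information p*(1-p) under a logistic weak classifier.
import numpy as np

def select_informative(features, weights, k):
    """Indices of the k most informative candidate samples."""
    logits = features @ weights
    p = 1.0 / (1.0 + np.exp(-logits))   # classifier confidence per sample
    fisher = p * (1.0 - p)              # information about the logit
    return np.argsort(fisher)[-k:]

rng = np.random.default_rng(0)
candidates = rng.normal(size=(200, 16))  # candidate patch features (placeholder)
w = rng.normal(size=16)                  # current weak-classifier weights
print(select_informative(candidates, w, k=10))
```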

Robust Deep Age Estimation Method Using Artificially Generated Image Set

  • Jang, Jaeyoon; Jeon, Seung-Hyuk; Kim, Jaehong; Yoon, Hosub
    • ETRI Journal / v.39 no.5 / pp.643-651 / 2017
  • Human age estimation is one of the key factors in the field of Human-Robot Interaction/Human-Computer Interaction (HRI/HCI). Owing to the development of deep-learning technologies, age recognition has recently been attempted with deep networks. In general, however, deep learning techniques require a large-scale database, and for age learning with variations, a conventional database is insufficient. For this reason, we propose an age estimation method that uses artificially generated data. Image data are artificially generated from 3D information, which solves the shortage of training data and helps the training of the deep-learning model. Augmentation using 3D has advantages over 2D because it creates new images with more information. We use a deep architecture as a pre-trained model and improve the estimation capacity using the artificially augmented training images. The deep architecture outperforms traditional estimation methods, and the improved method shows increased reliability. We achieve state-of-the-art performance with the proposed method on the Morph-II dataset and show that it can also be used effectively on the Adience dataset.
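
The core recipe, fine-tuning a pre-trained deep architecture on augmented face images to regress age, can be sketched as follows. The generic 2D augmentations stand in for the paper's 3D-based image generation, and the backbone, loss, and hyperparameters are assumptions, not the authors' configuration.

```python
# Illustrative sketch: fine-tune a pretrained backbone to regress age.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Generic augmentations, applied in the dataset/dataloader (the paper instead
# generates extra images from 3D information; this is only a placeholder).
augment = transforms.Compose([
    transforms.RandomRotation(10),
    transforms.ColorJitter(brightness=0.2),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)   # single regressed age value

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()                            # mean absolute error in years

def train_step(images: torch.Tensor, ages: torch.Tensor) -> float:
    """One optimization step on a batch of augmented face crops."""
    optimizer.zero_grad()
    loss = loss_fn(model(images).squeeze(1), ages)
    loss.backward()
    optimizer.step()
    return loss.item()
```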

Biomechanical Model of Hand to Predict Muscle Force and Joint Force (근력과 관절력 예측을 위한 손의 생체역학 모델)

  • Kim, Kyung-Soo; Kim, Yoon-Hyuk
    • Journal of the Ergonomics Society of Korea / v.28 no.3 / pp.1-6 / 2009
  • Recently, the importance of rehabilitation of hand pathologies, as well as of the development of high-technology hand robots, has increased. A biomechanical model of the hand is indispensable because muscle forces and joint forces in the hand are difficult to measure directly. In this study, a three-dimensional biomechanical model of the four fingers, with three joints and ten muscles in each finger, was developed, and a mathematical relationship between neural commands and finger forces representing the enslaving effect and the force deficit effect was proposed. For the task of pressing a plate in a flexed posture, the muscle forces and joint forces were predicted by an optimization technique. The results showed that the major activated muscles were the flexor muscles (flexor digitorum profundus, radial interosseous, and ulnar interosseous). In addition, unlike in previous models, the antagonistic muscles were also predicted to be active, which is a more realistic result. Because the present model considers the interaction among fingers, it can be particularly useful for developing a robot hand that controls multiple fingers in a human-like manner.
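
The prediction step described here, resolving muscle redundancy with an optimization, is commonly posed as minimizing a muscle-stress cost subject to joint-moment equilibrium and non-negative muscle forces. The sketch below shows that generic static-optimization formulation with made-up moment arms, torques, and cross-sectional areas; it is not the paper's finger model or its neural-command relationship.

```python
# Generic static optimization sketch: minimize squared muscle stress subject
# to joint-moment equilibrium; all numbers are hypothetical.
import numpy as np
from scipy.optimize import minimize

R = np.array([[0.010, 0.008, -0.006, -0.005],   # moment arms (m), joint 1
              [0.012, 0.000, -0.007,  0.000]])  # moment arms (m), joint 2
tau = np.array([0.15, 0.10])                    # required joint torques (N*m)
pcsa = np.array([2.0, 1.5, 1.0, 1.0])           # muscle cross-sections (cm^2)

def cost(f):
    return np.sum((f / pcsa) ** 2)              # sum of squared muscle stress

res = minimize(
    cost,
    x0=np.ones(4),
    bounds=[(0, None)] * 4,                     # muscles can only pull
    constraints={"type": "eq", "fun": lambda f: R @ f - tau},
    method="SLSQP",
)
print("predicted muscle forces (N):", res.x.round(2))
```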