• Title/Summary/Keyword: Human and Robot Interaction


Development of HRI Application Technology Based on the Imitative Synchronization and Learning Functions of the Mirror Neuron System (거울 뉴런 시스템의 모방적 동기화 및 학습 기능 기반 HRI 응용 기술 개발)

  • Go, Gwang-Eun;Sim, Gwi-Bo
    • ICROS
    • /
    • v.20 no.2
    • /
    • pp.31-38
    • /
    • 2014
  • As part of research aimed at giving robots the ability to recognize the intent underlying human behavior and to provide services corresponding to that intent, the development of human-robot interaction (HRI) systems based on imitative synchronization and learning has been attracting attention. However, the process by which humans learn goal-directed behavior through observation and imitation is a sequence of complex mechanisms: mapping sensory information to the corresponding motor information, compensating for the difference in physical state between the imitator and the imitated, and understanding the intention or goal underlying the observed behavior. Technology development is therefore needed to realize this process in robots. This article introduces the mirror neuron system, which is presumed to be involved in the imitative synchronization and learning that humans actually perform, and reviews prior technologies developed to exploit it in HRI systems. In addition, drawing on related research conducted in our laboratory, we examine the current state of development of mirror neuron systems and consider directions and possibilities for future use.

Interactive Motion Retargeting for Humanoid in Constrained Environment (제한된 환경 속에서 휴머노이드를 위한 인터랙티브 모션 리타겟팅)

  • Nam, Ha Jong;Lee, Ji Hye;Choi, Myung Geol
    • Journal of the Korea Computer Graphics Society
    • /
    • v.23 no.3
    • /
    • pp.1-8
    • /
    • 2017
  • In this paper, we introduce a technique for retargeting human motion data to a humanoid body in a constrained environment. We assume that the given motion data includes detailed interactions, such as holding an object by hand or avoiding obstacles. We also assume that the humanoid joint structure differs from the human joint structure, and that the shape of the surrounding environment differs from that at the time of the original motion. Under such conditions, a retargeting technique that considers only the change of body shape cannot preserve the context of the interaction shown in the original motion data. Our approach is to separate the problem into two smaller problems and solve them independently: retargeting the motion data to a new skeleton, and preserving the context of the interactions. We first retarget the given human motion data to the target humanoid body, ignoring the interaction with the environment. Then, we deform the shape of the environmental model to match the humanoid motion so that the original interaction is reproduced. Finally, we set spatial constraints between the humanoid body and the environmental model and restore the environmental model to its original shape. To demonstrate the usefulness of our method, we conducted an experiment using Boston Dynamics' Atlas robot. We expect that our method can help with the humanoid motion tracking problem in the future.
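The two-stage idea above starts from an ordinary skeleton retargeting pass before interaction constraints are re-imposed. The sketch below is not the authors' algorithm; it only illustrates, under assumed Euler-angle motion data, the naive baseline stage of copying joint rotations and rescaling the root trajectory.

```python
# Minimal sketch (not the paper's method): naive skeleton retargeting that
# copies joint rotations and rescales the root translation by a leg-length
# ratio, the usual baseline on which interaction-preserving constraints
# would later be imposed.
import numpy as np

def retarget_naive(src_rotations, src_root_pos, src_leg_len, tgt_leg_len):
    """src_rotations: (T, J, 3) joint Euler angles; src_root_pos: (T, 3)."""
    scale = tgt_leg_len / src_leg_len        # body-size heuristic
    tgt_rotations = src_rotations.copy()     # joint angles transfer directly
    tgt_root_pos = src_root_pos * scale      # root trajectory scales with size
    return tgt_rotations, tgt_root_pos

# Toy usage with random motion data standing in for a capture clip.
rot, root = retarget_naive(np.zeros((120, 30, 3)), np.zeros((120, 3)), 0.9, 1.1)
```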

Moral Judgment, Mind Perception and Immortality Perception of Humans and Robots (인간과 로봇의 도덕성 판단, 마음지각과 불멸지각의 관계)

  • Hong Im Shin
    • Science of Emotion and Sensibility
    • /
    • v.26 no.3
    • /
    • pp.29-40
    • /
    • 2023
  • The term and concept of "immortality" have garnered considerable attention worldwide. However, research on this topic is lacking, and the question of whether the mind of a deceased individual survives death has yet to be answered. This research investigates whether morality and mind perception of the dead correlate with perceived immortality. Study 1 measures the perceived immortality of people who were good or evil in life. The results show that perceived morality is related to perceived immortality. Moreover, participants indicated the extent to which each person had maintained a degree of morality and agency/experience of the mind. Therefore, morality and mind perception toward a person are related to perceived immortality. In Study 2, participants were asked to read three essays on robots (good, evil, and nonmoral) and to indicate the extent to which each robot maintains a degree of immortality, morality, and agency/experience of the mind. The results show that a morally good robot is associated with higher scores of mind perception toward the robot, resulting in a greater tendency toward perceived immortality. These results imply that the morality of humans and robots can mediate the relationship between mind perception and immortality. This work extends previous research on the determinants of social robots for overcoming difficulties in human-robot interaction.

Face Classification Using Cascade Facial Detection and Convolutional Neural Network (Cascade 안면 검출기와 컨볼루셔널 신경망을 이용한 얼굴 분류)

  • Yu, Je-Hun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.26 no.1
    • /
    • pp.70-75
    • /
    • 2016
  • Nowadays, there is much research on recognizing human faces using machine vision. Machine vision is a classification and analysis technology that gives a machine sight analogous to human eyes. In this paper, we propose an algorithm for classifying human faces using such a machine vision system. The algorithm consists of a Convolutional Neural Network and a cascade face detector, and we use it to classify the faces of subjects. For training the face classification algorithm, 2,000, 3,000, and 4,000 images of each subject were used, and the Convolutional Neural Network was trained for 10 and 20 iterations. We then classified the images; about 6,000 images were classified to verify effectiveness. We also implemented a system that can classify the faces of subjects in real time using a USB camera.
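As a rough illustration of the described pipeline (cascade face detection followed by CNN classification), the following Python sketch chains OpenCV's stock Haar cascade with a toy PyTorch network; the architecture, input size, and number of subjects are assumptions, not the paper's configuration.

```python
# Sketch of a cascade-detector + CNN face classification pipeline.
# Requires opencv-python and torch; the network is a placeholder.
import cv2
import torch
import torch.nn as nn

class SmallFaceCNN(nn.Module):
    """Toy CNN that classifies a 32x32 grayscale face crop into N subjects."""
    def __init__(self, num_subjects: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_subjects)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
model = SmallFaceCNN(num_subjects=3).eval()   # untrained, for illustration

def classify_frame(frame_bgr):
    """Detect faces with the cascade, then label each crop with the CNN."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    labels = []
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        crop = cv2.resize(gray[y:y + h, x:x + w], (32, 32))
        tensor = torch.from_numpy(crop).float().div(255).view(1, 1, 32, 32)
        with torch.no_grad():
            labels.append(int(model(tensor).argmax(dim=1)))
    return labels
```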

AUTOMATION AND ROBOT APPLICATION IN AGRICULTURAL PRODUCTIONS AND BIO-INDUSTRIES

  • Sevila, Francis
    • Proceedings of the Korean Society for Agricultural Machinery Conference
    • /
    • 1996.06c
    • /
    • pp.142-159
    • /
    • 1996
  • Engineering of automated tools for the agro-food industries and rural activities must take up two challenges: to answer the immediate, important problems related to the situation of these industries, and to imagine the tools that their professionals will need next century. Creating or modifying automated tools over the next few years will be done taking into account parameters that are either technical (environmental protection, health and safety) or social and economic (investment, employment). There will be a strong interaction with disciplines such as ecology, medicine, ergonomics, and psycho-sociology. The partners in such research, tool manufacturers and users, should be involved early in its content in order to find rapid solutions to the drastic problems they are facing. Over a longer term, during the next 20 years, there will be an important evolution of rural space management and of food processes. This will imply the emergence of new types of activities and know-how, with lines of automated tools to be invented and developed, such as micro-systems for localized organic tasks, highly autonomous mobile and adaptive equipment for actions in natural spaces, and devices for perception, decision, and control that automatically reproduce the expert behavior of human operators. The design of such automated tools needs to overcome technological difficulties such as the automation of the expert decision process and the management of complex design.


3D Feature Based Tracking using SVM

  • Kim, Se-Hoon;Choi, Seung-Joon;Kim, Sung-Jin;Won, Sang-Chul
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2004.08a
    • /
    • pp.1458-1463
    • /
    • 2004
  • Tracking is one of the most important prerequisite tasks for many applications such as human-computer interaction through gesture and face recognition, motion analysis, visual servoing, augmented reality, industrial assembly, and robot obstacle avoidance. Recently, 3D information about objects has been required in real time for many of the aforementioned applications. 3D tracking is a difficult problem because explicit 3D information about objects in the scene is lost during the image formation process of the camera. Recently, many vision systems use stereo cameras, especially for 3D tracking. 3D feature based tracking (3DFBT), one of the 3D tracking approaches using stereo vision, has many advantages compared with other tracking methods. If we assume that the correspondence problem, one of the subproblems of 3DFBT, is solved, the accuracy of tracking depends on the accuracy of camera calibration. However, existing calibration methods are based on an accurate camera model, so modelling error and sensitivity to lens distortion are embedded. Therefore, this paper proposes a 3D feature based tracking method using an SVM to solve the reconstruction problem.
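The core idea, replacing the explicit calibrated camera model with a regressor learned from stereo measurements to 3D coordinates, can be sketched as follows; the synthetic pinhole stereo rig and the use of scikit-learn's SVR are illustrative assumptions, not the paper's setup.

```python
# Sketch: learn the stereo-pixel -> 3D mapping with support vector regression
# instead of an explicit calibrated camera model. The synthetic pinhole rig
# below only generates training pairs for illustration.
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
f, cx, cy, baseline = 500.0, 320.0, 240.0, 0.1

P = rng.uniform([-0.5, -0.5, 1.0], [0.5, 0.5, 3.0], size=(800, 3))  # 3D points
uL = f * P[:, 0] / P[:, 2] + cx                 # left-image pixel coordinates
vL = f * P[:, 1] / P[:, 2] + cy
uR = f * (P[:, 0] - baseline) / P[:, 2] + cx    # right-image horizontal pixel
X = np.column_stack([uL, vL, uR])               # stereo measurements (vR == vL here)

model = MultiOutputRegressor(SVR(kernel="rbf", C=100.0, epsilon=1e-3))
model.fit(X, P)                                 # learn the inverse projection

print(model.predict(X[:1]), P[:1])              # reconstructed vs. true 3D point
```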


Hand Gesture Recognition Using an Infrared Proximity Sensor Array

  • Batchuluun, Ganbayar;Odgerel, Bayanmunkh;Lee, Chang Hoon
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.15 no.3
    • /
    • pp.186-191
    • /
    • 2015
  • Hand gestures are the most common tool used to interact with and control various electronic devices. In this paper, we propose a novel hand gesture recognition method using fuzzy logic based classification with a new type of sensor array. In some cases, feature patterns of hand gesture signals cannot be uniquely distinguished and recognized when people perform the same gesture in different ways. Moreover, differences in hand shape and in the skeletal articulation of the arm influence the process. Manifold features were extracted, and efficient features, which make gestures distinguishable, were selected. However, similar feature patterns exist across different hand gestures, and fuzzy logic is applied to classify them. Fuzzy rules are defined based on the many feature patterns of the input signal. An adaptive neuro-fuzzy inference system was used to generate fuzzy rules automatically for classifying hand gestures using a low number of feature patterns as input. In addition, emotion expression was conducted after the hand gesture recognition for the resulting human-robot interaction. Our proposed method was tested with many hand gesture datasets and validated with different evaluation metrics. Experimental results show that our method detects more hand gestures than other existing methods, with robust hand gesture recognition and corresponding emotion expressions, in real time.
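To make the fuzzy classification step concrete, here is a minimal Python sketch with hand-fixed triangular memberships and rules; the paper instead learns the rules with an adaptive neuro-fuzzy inference system, and the feature names and thresholds below are purely hypothetical.

```python
# Toy fuzzy inference over two assumed gesture features ("speed", "width")
# extracted from a proximity-sensor array. Each rule fires with the minimum
# of its membership degrees; the gesture with the strongest firing wins.

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b] and falling on [b, c]."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

rules = {
    "swipe":  lambda f: min(tri(f["speed"], 2, 4, 6), tri(f["width"], 0, 1, 2)),
    "hover":  lambda f: min(tri(f["speed"], 0, 1, 2), tri(f["width"], 0, 1, 2)),
    "spread": lambda f: min(tri(f["speed"], 0, 1, 2), tri(f["width"], 2, 4, 6)),
}

def classify(features):
    firing = {g: rule(features) for g, rule in rules.items()}  # rule strengths
    return max(firing, key=firing.get), firing

print(classify({"speed": 3.5, "width": 0.8}))   # -> ('swipe', {...})
```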

Multi-Scale, Multi-Object and Real-Time Face Detection and Head Pose Estimation Using Deep Neural Networks (다중크기와 다중객체의 실시간 얼굴 검출과 머리 자세 추정을 위한 심층 신경망)

  • Ahn, Byungtae;Choi, Dong-Geol;Kweon, In So
    • The Journal of Korea Robotics Society
    • /
    • v.12 no.3
    • /
    • pp.313-321
    • /
    • 2017
  • One of the most frequently performed tasks in human-robot interaction (HRI), intelligent vehicles, and security systems is face-related applications such as face recognition, facial expression recognition, driver state monitoring, and gaze estimation. In these applications, accurate head pose estimation is an important issue. However, conventional methods have been lacking in accuracy, robustness, or processing speed in practical use. In this paper, we propose a novel method for estimating head pose with a monocular camera. The proposed algorithm is based on a deep neural network for multi-task learning using a small grayscale image. This network jointly detects multi-view faces and estimates head pose under hard environmental conditions such as illumination change and large pose change. The proposed framework quantitatively and qualitatively outperforms the state-of-the-art method, with an average head pose mean error of less than 4.5° in real time.
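A multi-task network of this kind can be sketched as a shared convolutional trunk with one head for face/background classification and one for yaw-pitch-roll regression; the layer sizes and losses below are placeholders, not the architecture reported in the paper.

```python
# Schematic multi-task network: shared trunk, two heads (detection + pose).
import torch
import torch.nn as nn

class FacePoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
        )
        self.face_head = nn.Linear(32 * 4 * 4, 2)   # face vs. background
        self.pose_head = nn.Linear(32 * 4 * 4, 3)   # yaw, pitch, roll

    def forward(self, x):
        z = self.trunk(x)
        return self.face_head(z), self.pose_head(z)

net = FacePoseNet()
logits, pose = net(torch.randn(8, 1, 48, 48))       # small grayscale crops
# Joint training objective: classification loss + pose regression loss.
loss = nn.functional.cross_entropy(logits, torch.zeros(8, dtype=torch.long)) \
     + nn.functional.smooth_l1_loss(pose, torch.zeros(8, 3))
loss.backward()
```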

Hierarchical Multi-Classifier for the Mixed Character Code Set (혼용 문자 코드 집합을 위한 계층적 다중문자 인식기)

  • Kim, Do-Hyeon;Park, Jae-Hyeon;Kim, Cheol-Ki;Cha, Eui-Young
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.10
    • /
    • pp.1977-1985
    • /
    • 2007
  • Character recognition is one of the artificial intelligence techniques and has been widely applied in automated systems, robots, HCI (Human-Computer Interaction), etc. This paper introduces the character set and the representative characters that can be used in recognizing an image ROI. The character codes in this ROI include digits, symbols, English letters, Hangeul, etc. We proposed an efficient multi-classifier structure by combining small classifiers hierarchically, and we trained each small classifier with the delta-bar-delta learning algorithm. We tested the performance with various kinds of images and achieved an accuracy of 99%. The proposed multi-classifier showed efficiency and reliability for the mixed character code set.
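The delta-bar-delta rule mentioned above maintains a separate learning rate per weight, increasing it additively when the current gradient agrees in sign with a running gradient average and shrinking it multiplicatively when it does not. A minimal sketch with illustrative hyperparameters:

```python
# Delta-bar-delta update for one weight vector of a small classifier.
# kappa/phi/theta values are illustrative, not those used in the paper.
import numpy as np

def delta_bar_delta_step(w, grad, lr, bar_delta,
                         kappa=0.01, phi=0.1, theta=0.7):
    prod = grad * bar_delta
    lr = np.where(prod > 0, lr + kappa,            # consistent sign: grow additively
         np.where(prod < 0, lr * (1 - phi), lr))   # sign flip: shrink multiplicatively
    bar_delta = (1 - theta) * grad + theta * bar_delta  # exponential gradient average
    return w - lr * grad, lr, bar_delta

# Usage: iterate over successive gradients of the classifier's weights.
w, lr, bar = np.zeros(4), np.full(4, 0.05), np.zeros(4)
for grad in [np.array([0.2, -0.1, 0.3, 0.0])] * 3:
    w, lr, bar = delta_bar_delta_step(w, grad, lr, bar)
```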

Robust Real-time Tracking of Facial Features with Application to Emotion Recognition (안정적인 실시간 얼굴 특징점 추적과 감정인식 응용)

  • Ahn, Byungtae;Kim, Eung-Hee;Sohn, Jin-Hun;Kweon, In So
    • The Journal of Korea Robotics Society
    • /
    • v.8 no.4
    • /
    • pp.266-272
    • /
    • 2013
  • Facial feature extraction and tracking are essential steps in the human-robot interaction (HRI) field, for tasks such as face recognition, gaze estimation, and emotion recognition. The active shape model (ASM) is one of the successful generative models for extracting facial features. However, applying only ASM is not adequate for modeling a face in actual applications, because the positions of facial features are extracted unstably due to the limited number of iterations in the ASM fitting algorithm. Inaccurate positions of facial features decrease the performance of emotion recognition. In this paper, we propose a real-time facial feature extraction and tracking framework using ASM and LK optical flow for emotion recognition. LK optical flow is desirable for estimating time-varying geometric parameters in sequential face images. In addition, we introduce a straightforward method to avoid tracking failure caused by partial occlusions, which can be a serious problem for tracking-based algorithms. Emotion recognition experiments with k-NN and SVM classifiers show over 95% classification accuracy for three emotions: "joy", "anger", and "disgust".
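The tracking stage described above, in which feature points are propagated frame to frame with pyramidal Lucas-Kanade optical flow and re-initialized when too many points are lost, can be sketched with OpenCV as follows; the webcam source and the generic corner initialization stand in for the paper's ASM fit.

```python
# Sketch of the LK tracking loop only; in the paper the initial points come
# from an ASM fit and tracking failure triggers an ASM re-initialization.
import cv2

cap = cv2.VideoCapture(0)                        # webcam as a stand-in input
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Placeholder initialization: generic corners instead of ASM landmarks.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=60, qualityLevel=0.01,
                              minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None,
                                                  winSize=(21, 21), maxLevel=3)
    pts = new_pts[status.flatten() == 1].reshape(-1, 1, 2)  # keep tracked points
    prev_gray = gray
    if len(pts) < 20:        # many points lost (e.g., partial occlusion):
        break                # re-initialization would happen here
```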