• Title/Summary/Keyword: Human and Robot Interaction


Recognition of Hand gesture to Human-Computer Interaction (손 동작을 통한 인간과 컴퓨터간의 상호 작용)

  • Lee, Lae-Kyoung;Kim, Sung-Shin
    • Proceedings of the KIEE Conference
    • /
    • 2000.07d
    • /
    • pp.2930-2932
    • /
    • 2000
  • In this paper, a robust gesture recognition system is designed and implemented to explore communication methods between human and computer. Hand gestures in the proposed approach are used to communicate actions with a high degree of freedom to a computer. The user does not need to wear any cumbersome devices such as cyber-gloves, and no assumption is made about whether the user is wearing any ornaments or whether the left or right hand is used. Image segmentation based upon skin color and a shape analysis based upon invariant moments are combined, and the extracted features are used as input vectors to a radial basis function network (RBFN). Our "Puppy" robot is employed as a testbed. Preliminary results on a set of gestures show recognition rates of about 87% in a real-time implementation.
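As a hedged illustration of the classifier this abstract describes, the sketch below fits a small radial basis function network by least squares. The Gaussian centers, the sigma value, and the two-class feature setup are assumptions for the example, not the paper's actual configuration.

```python
import numpy as np

def rbf_features(x, centers, sigma=1.0):
    """Gaussian radial-basis activations of x around each center."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def train_rbfn(X, Y, centers, sigma=1.0):
    """Fit linear output weights by least squares on one-hot targets Y."""
    H = np.vstack([rbf_features(x, centers, sigma) for x in X])
    W, *_ = np.linalg.lstsq(H, Y, rcond=None)
    return W

def classify(x, centers, W, sigma=1.0):
    """Index of the winning output unit for feature vector x."""
    return int(np.argmax(rbf_features(x, centers, sigma) @ W))
```

In the paper's pipeline, the input vectors would be the invariant-moment features extracted from the skin-color-segmented hand region.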


STAGCN-based Human Action Recognition System for Immersive Large-Scale Signage Content (몰입형 대형 사이니지 콘텐츠를 위한 STAGCN 기반 인간 행동 인식 시스템)

  • Jeongho Kim;Byungsun Hwang;Jinwook Kim;Joonho Seon;Young Ghyu Sun;Jin Young Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.6
    • /
    • pp.89-95
    • /
    • 2023
  • In recent decades, human action recognition (HAR) has demonstrated potential applications in sports analysis, human-robot interaction, and large-scale signage content. In this paper, a spatial-temporal attention graph convolutional network (STAGCN)-based HAR system is proposed. Spatio-temporal features of skeleton sequences are assigned different weights by the STAGCN, enabling key joints and viewpoints to be taken into account. Simulation results show that the proposed model improves classification accuracy on the NTU RGB+D dataset.
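A minimal sketch of one spatial graph-convolution layer with joint-wise attention, in the spirit of the STAGCN described above; the symmetric normalisation and softmax attention here are common choices and only assumptions about the paper's exact layer.

```python
import numpy as np

def spatial_attention_gcn(X, A, W, att):
    """One spatial graph-convolution layer with joint-wise attention.
    X: (J, C) per-joint features, A: (J, J) skeleton adjacency,
    W: (C, F) trainable weights, att: (J,) attention logits per joint."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))    # symmetric normalisation
    H = A_norm @ X @ W                          # aggregate neighbour features
    a = np.exp(att) / np.exp(att).sum()         # softmax over joints
    return a[:, None] * H                       # emphasise key joints
```

A full STAGCN would stack such layers and interleave them with temporal convolutions over the skeleton sequence.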

A Study on Human-Robot Interface based on Imitative Learning using Computational Model of Mirror Neuron System (Mirror Neuron System 계산 모델을 이용한 모방학습 기반 인간-로봇 인터페이스에 관한 연구)

  • Ko, Kwang-Enu;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.23 no.6
    • /
    • pp.565-570
    • /
    • 2013
  • The mirror neuron regions distributed in the cortical area handle intention recognition on the basis of imitative learning of an observed, goal-directed action acquired from visual information. In this paper, an automated intention recognition system is proposed by applying a computational model of the mirror neuron system to a human-robot interaction system. The computational model is designed using dynamic neural networks: its input is a sequential feature-vector set describing the behaviors of the target object and the actor, and its output is motor data that can be used to perform the corresponding intentional action, obtained through the imitative learning and estimation procedures of the model. The intention recognition framework takes its input from a KINECT sensor and computes the corresponding motor data within a virtual robot simulation environment, based on intention-related scenarios in a limited experimental space with a specified target object.
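The dynamic-neural-network mapping from a sequential feature set to motor data can be sketched, under strong simplifying assumptions, as a plain recurrent step; the Elman form below is illustrative only and is not the paper's model.

```python
import numpy as np

def elman_step(x, h, Wxh, Whh, Who):
    """One recurrent step: feature vector x and hidden state h in,
    new hidden state and motor output out."""
    h_new = np.tanh(Wxh @ x + Whh @ h)
    y = Who @ h_new
    return h_new, y

def run_sequence(xs, Wxh, Whh, Who):
    """Map a whole feature sequence to its final motor output."""
    h = np.zeros(Whh.shape[0])
    y = np.zeros(Who.shape[0])
    for x in xs:
        h, y = elman_step(x, h, Wxh, Whh, Who)
    return y
```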

Silhouette and Active Skeleton Extraction of Human Body for Robot-Human Interaction (로봇-휴먼 인터액션을 위한 인간 몸의 실루엣 및 액티브 스켈레톤 추출)

  • So, Jea-Yun;Kim, Jin-Gyu;Joo, Young-Hoon;Park, Jin-Bae
    • Proceedings of the KIEE Conference
    • /
    • 2007.07a
    • /
    • pp.321-322
    • /
    • 2007
  • In this paper, we propose a technique for extracting the silhouette and an active skeleton of the human body for robot-human interaction. A silhouette of the human body is extracted through adaptive fusion using background subtraction on information, such as clothing regions, obtained from consecutive images. Based on this silhouette, feature-point locations of the human body are extracted using an active skeleton model, which combines an active contour with a skeleton model of a virtual body and is more robust to small movements.
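The background-subtraction step described above can be sketched as a simple per-pixel threshold on grayscale frames; this is a minimal illustration and does not reproduce the paper's adaptive fusion over clothing regions.

```python
import numpy as np

def silhouette_mask(frame, background, thresh=30):
    """Binary silhouette via background subtraction: mark pixels whose
    absolute difference from the background model exceeds a threshold."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > thresh).astype(np.uint8)
```

The resulting mask would then seed the active contour and skeleton fitting.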


Noise Robust Emotion Recognition Feature : Frequency Range of Meaningful Signal (음성의 특정 주파수 범위를 이용한 잡음환경에서의 감정인식)

  • Kim Eun-Ho;Hyun Kyung-Hak;Kwak Yoon-Keun
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.23 no.5 s.182
    • /
    • pp.68-76
    • /
    • 2006
  • The ability to recognize human emotion is one of the hallmarks of human-robot interaction. This paper therefore describes the realization of emotion recognition from voice. We propose a new feature, the frequency range of meaningful signal. With this feature, we reached an average recognition rate of 76% in speaker-dependent tests, confirming its usefulness. We also define a noise environment and conduct noise-environment tests; in contrast to other features, the proposed feature remains robust under noise.
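A band-limited spectral-energy feature of the kind the paper proposes can be sketched as follows; the specific frequency range of the "meaningful signal" is the paper's contribution, so the band limits here are placeholders.

```python
import numpy as np

def band_energy_ratio(signal, fs, f_lo, f_hi):
    """Fraction of spectral energy inside [f_lo, f_hi] Hz."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spec[band].sum() / spec.sum()
```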

Metabolic Rate Estimation for ECG-based Human Adaptive Appliance in Smart Homes (인간 적응형 가전기기를 위한 거주자 심박동 기반 신체활동량 추정)

  • Kim, Hyun-Hee;Lee, Kyoung-Chang;Lee, Suk
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.5
    • /
    • pp.486-494
    • /
    • 2014
  • Intelligent homes consist of ubiquitous sensors, home networks, and a context-aware computing system. These homes are expected to offer many services such as intelligent air-conditioning, lighting control, health monitoring, and home security. In order to realize these services, many researchers have worked on various research topics including smart sensors with low power consumption, home network protocols, resident and location detection, context-awareness, and scenario and service control. This paper presents a real-time metabolic rate estimation method based on measured heart rate for human adaptive appliances (air-conditioners, lighting, etc.). The estimation results can provide valuable information for controlling smart appliances so that they can adjust themselves according to the status of residents. The heart-rate-based method has been experimentally compared with a location-based method on a test bed.
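A heart-rate-based activity estimate can be sketched as a linear mapping from heart-rate reserve to METs; this mapping is purely illustrative and is not the paper's calibrated model.

```python
def metabolic_rate_met(hr, hr_rest, hr_max):
    """Rough MET estimate from heart-rate reserve (%HRR).
    Hypothetical linear mapping: ~1 MET at rest, ~12 MET near HRmax."""
    hrr = (hr - hr_rest) / float(hr_max - hr_rest)
    return 1.0 + hrr * 11.0
```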

Development of Bio-sensor-Based Feature Extraction and Emotion Recognition Model (바이오센서 기반 특징 추출 기법 및 감정 인식 모델 개발)

  • Cho, Ye Ri;Pae, Dong Sung;Lee, Yun Kyu;Ahn, Woo Jin;Lim, Myo Taeg;Kang, Tae Koo
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.67 no.11
    • /
    • pp.1496-1505
    • /
    • 2018
  • Emotion recognition technology is necessary for human-computer interaction, since in many cases one cannot communicate without considering the other's emotion. As such, emotion recognition is an essential element of communication and is highly utilized in various fields. Various bio-sensors can be used to measure and recognize human emotions. This paper proposes a system for recognizing human emotions using two physiological sensors. For emotion classification, Russell's two-dimensional emotional model was used, and a personality-based classification method was proposed by extracting sensor-specific features. The emotional model was then divided into four emotions using a support vector machine (SVM) classification algorithm. Finally, the proposed emotion recognition system was evaluated in a practical experiment.
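The four-class split of Russell's two-dimensional (valence, arousal) model can be sketched as a quadrant lookup; the paper classifies sensor features with an SVM rather than this direct rule, and the label names below are illustrative.

```python
def russell_quadrant(valence, arousal):
    """Map a (valence, arousal) point to one of the four quadrants of
    Russell's circumplex model. Labels are illustrative placeholders."""
    if valence >= 0 and arousal >= 0:
        return "excited"      # positive valence, high arousal
    if valence < 0 and arousal >= 0:
        return "distressed"   # negative valence, high arousal
    if valence < 0:
        return "depressed"    # negative valence, low arousal
    return "relaxed"          # positive valence, low arousal
```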

Emotion Recognition Based on Human Gesture (인간의 제스쳐에 의한 감정 인식)

  • Song, Min-Kook;Park, Jin-Bae;Joo, Young-Hoon
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.1
    • /
    • pp.46-51
    • /
    • 2007
  • This paper presents gesture analysis for human-robot interaction. Understanding human emotions through gesture is one of the skills computers need to interact intelligently with their human counterparts. Gesture analysis consists of several processes, such as hand detection, feature extraction, and emotion recognition. For efficient operation, gestures are recognized with a hidden Markov model (HMM). We constructed a large gesture database, with which we verified our method. As a result, our method was successfully integrated and operated in a mobile system.
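The HMM likelihood computation underlying this kind of gesture recognition can be sketched with the standard forward algorithm; this is a textbook sketch, and the paper's actual gesture models and observation symbols are not reproduced here.

```python
def hmm_forward(obs, pi, A, B):
    """Forward algorithm: likelihood P(obs | model) of a discrete
    observation sequence under an HMM (pi: initial, A: transition,
    B: emission probabilities)."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)
```

In recognition, one such model is trained per gesture class and an input sequence is labeled with the class whose model gives the highest likelihood.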

A Study on Face Recognition Performance Comparison of Real Images with Images from LED Monitor (LED 모니터 출력 영상과 실물 영상의 얼굴인식 성능 비교)

  • Cho, Mi-Young;Jeong, Young-Sook;Chun, Byung-Tae
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.5
    • /
    • pp.144-149
    • /
    • 2013
  • With the increasing number of service robots, human-robot interaction for natural communication between users and robots is becoming more and more important; face recognition in particular is a key issue in HRI. Even though robots mainly use face detection and recognition to provide various services, it is still difficult to guarantee performance because of insufficient test methods for real service environments: at present, face recognition performance is mostly evaluated at the engine level, without considering the robot. In this paper, we show the validity of a test method using an LED monitor by comparing face recognition performance on real images with that on images displayed on an LED monitor.

Tiny and Blurred Face Alignment for Long Distance Face Recognition

  • Ban, Kyu-Dae;Lee, Jae-Yeon;Kim, Do-Hyung;Kim, Jae-Hong;Chung, Yun-Koo
    • ETRI Journal
    • /
    • v.33 no.2
    • /
    • pp.251-258
    • /
    • 2011
  • Applying face alignment after face detection exerts a heavy influence on face recognition. Many researchers have recently investigated face alignment using databases collected from images taken at close distances and with low magnification. However, in the case of home-service robots, captured images are generally of low resolution and low quality, so previous face alignment research, such as eye detection, is not appropriate for robot environments. The main purpose of this paper is to provide a new and effective approach to the alignment of small and blurred faces. We propose a face alignment method using the confidence value of Real-AdaBoost with a modified census transform feature. We also evaluate the face recognition system to compare the proposed face alignment module with those of other systems. Experimental results show that the proposed method has a high recognition rate, higher than that of face alignment methods using manually marked eye positions.
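The modified census transform (MCT) feature used by the proposed Real-AdaBoost alignment can be sketched as a 9-bit code that compares each pixel of a 3x3 neighbourhood against the neighbourhood mean, which makes it robust to illumination changes; this is a direct, unoptimised sketch.

```python
import numpy as np

def modified_census_transform(img):
    """9-bit MCT: for every 3x3 patch, set bit i if pixel i exceeds
    the patch mean. Output is (H-2, W-2) codes in [0, 511]."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint16)
    for y in range(h - 2):
        for x in range(w - 2):
            patch = img[y:y + 3, x:x + 3].astype(float)
            bits = (patch > patch.mean()).flatten()
            out[y, x] = sum(int(b) << i for i, b in enumerate(bits))
    return out
```

Real-AdaBoost would then learn weak classifiers over these per-pixel MCT codes.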