• Title/Summary/Keyword: human-robot interaction (인간 로봇 상호작용)


Recent Research Trends of Facial Expression Recognition (얼굴표정 인식 기법의 최신 연구 동향)

  • Lee, Min Kyu;Song, Byung Cheol
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2019.11a / pp.128-130 / 2019
  • With the recent rapid development of deep learning, facial expression recognition technology has made considerable progress. Facial expression recognition receives sustained attention in computer vision and is applied in various fields such as infotainment systems and human-robot interaction. Nevertheless, the field still faces many problems, such as a shortage of training data, variation in facial pose, and occlusion. This paper surveys research trends from classical techniques to the latest methods, including techniques that address these problems.


An SoC-based Context-Aware System Architecture (SoC 기반 상황 인식 시스템 구조)

  • 이건명;손봉기;김종태;이승욱;이지형;전재욱;조준동
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2004.04a / pp.487-490 / 2004
  • Context-awareness has attracted much attention as a way to overcome the shortcomings of human-computer interaction. This paper proposes a context-aware system architecture that can be implemented as a System-on-a-Chip (SoC). The proposed architecture supports sensor abstraction, a notification mechanism for context changes, modular development, easy service composition using if-then rules, and flexible implementation of context-aware services. It consists of an SoC microprocessor part containing communication modules, processing modules, and a blackboard, and hardware implementing a rule-based system module. The rule-based system hardware performs matching operations on the condition parts of all rules in parallel, and the conclusion part of a rule carries out its task by invoking behavior modules embedded in the microprocessor. The SoC system of the proposed architecture was designed and successfully tested in a SystemC SoC development environment. The proposed SoC-based context-aware system architecture is expected to be applicable to intelligent mobile robots that assist the elderly by recognizing context in residential environments.
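The if-then matching scheme described in the abstract can be sketched in software (rule contents, context keys, and action names below are hypothetical; the actual system evaluates all condition parts in parallel in hardware):

```python
# Minimal sketch of blackboard + if-then rule matching.
# Rules, context keys, and actions are hypothetical illustrations.

blackboard = {"location": "living_room", "posture": "fallen", "time": "night"}

rules = [
    # (condition part, action name invoked on the embedded behavior module)
    (lambda ctx: ctx["posture"] == "fallen", "call_for_help"),
    (lambda ctx: ctx["time"] == "night" and ctx["location"] == "door", "alert_intrusion"),
]

# The SoC hardware matches every condition part in parallel; in software we
# simply evaluate them all and collect the fired actions.
fired = [action for cond, action in rules if cond(blackboard)]
print(fired)  # ['call_for_help']
```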


Noise Robust Emotion Recognition Feature : Frequency Range of Meaningful Signal (음성의 특정 주파수 범위를 이용한 잡음환경에서의 감정인식)

  • Kim Eun-Ho;Hyun Kyung-Hak;Kwak Yoon-Keun
    • Journal of the Korean Society for Precision Engineering / v.23 no.5 s.182 / pp.68-76 / 2006
  • The ability to recognize human emotion is one of the hallmarks of human-robot interaction, so this paper describes the realization of emotion recognition. For emotion recognition from voice, we propose a new feature called the frequency range of meaningful signal. With this feature we reached an average recognition rate of 76% in speaker-dependent tests, and the experimental results confirm the usefulness of the proposed feature. We also define the noise environment and conduct a noise-environment test; in contrast to other features, the proposed feature remains robust in a noisy environment.
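The feature above restricts analysis to a specific frequency range of the speech signal. A generic sketch of measuring spectral energy inside a chosen band (the band limits and test signal are illustrative assumptions, not the paper's values):

```python
import numpy as np

def band_energy(signal, fs, f_lo, f_hi):
    """Energy of the spectrum restricted to [f_lo, f_hi] Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[mask].sum()

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)            # 440 Hz test tone
in_band = band_energy(tone, fs, 300, 600)     # band containing the tone
out_band = band_energy(tone, fs, 1000, 2000)  # band away from the tone
```

Comparing the two band energies shows how a well-chosen range concentrates the meaningful signal while excluding out-of-band noise.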

Open-source robot platform providing offline personalized advertisements (오프라인 맞춤형 광고 제공을 위한 오픈소스 로봇 플랫폼)

  • Kim, Young-Gi;Ryu, Geon-Hee;Hwang, Eui-Song;Lee, Byeong-Ho;Yoo, Jeong-Ki
    • Journal of Convergence for Information Technology / v.10 no.4 / pp.1-10 / 2020
  • The performance of personalized product recommendation systems for offline shopping malls is poor compared with systems that use online environment information, since it is difficult to obtain visitors' characteristic information. In this paper, a mobile robot platform is proposed that recommends personalized advertisements using customers' sex and age information provided by the Face API of the MS Azure cloud service. The performance of the developed robot is verified through locomotion experiments, and the performance of the API used by the robot is tested on sampled images from the open Asian Face Age Dataset (AFAD). The developed robot could be effective in marketing by providing personalized advertisements at offline shopping malls.
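The recommendation step itself can be as simple as a lookup from the detected demographic to an advertisement slot (the categories and ad names below are hypothetical; the paper obtains sex and age from the cloud face API):

```python
# Toy mapping from detected (sex, age bucket) to an advertisement slot.
# All categories and ad names are hypothetical illustrations.
ads = {
    ("female", "20s"): "cosmetics_sale",
    ("male", "20s"): "sneaker_launch",
    ("female", "50s"): "health_food",
}

def pick_ad(sex, age):
    bucket = f"{(age // 10) * 10}s"          # e.g. 24 -> "20s"
    return ads.get((sex, bucket), "default_ad")

print(pick_ad("male", 24))   # sneaker_launch
print(pick_ad("male", 70))   # default_ad (no matching entry)
```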

Motion Control of a Mobile Robot Using Natural Hand Gesture (자연스런 손동작을 이용한 모바일 로봇의 동작제어)

  • Kim, A-Ram;Rhee, Sang-Yong
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.1 / pp.64-70 / 2014
  • In this paper, we propose a method that issues motion commands to a mobile robot by recognizing human hand gestures. Previous hand-movement control systems used several kinds of pre-arranged gestures, so commanding motion was unnatural; they also forced users to learn the pre-arranged gestures, which made them more inconvenient. To solve this problem, much research is under way on other ways for machines to recognize hand movement. In this paper, we used a three-dimensional camera to obtain color and depth data, from which the human hand is located and its movement recognized. We used an HMM to make the proposed system perceive the movement; the observed data are then transferred to the robot, making it move in the desired direction.
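Classifying a gesture with HMMs typically means scoring the observation sequence under each gesture's model and issuing the command of the best-scoring one. A minimal sketch with toy discrete HMMs (all parameters and symbols are illustrative, not the paper's trained models):

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (pi: initial probs, A: transition matrix, B: emission matrix)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return np.log(alpha.sum())

# Two toy gesture models over quantized hand-direction symbols (0=left, 1=right).
pi = np.array([1.0, 0.0])
A = np.array([[0.8, 0.2], [0.2, 0.8]])
B_left = np.array([[0.9, 0.1], [0.9, 0.1]])    # mostly emits "left"
B_right = np.array([[0.1, 0.9], [0.1, 0.9]])   # mostly emits "right"

obs = [0, 0, 0, 1, 0]  # observed mostly-left hand movement
command = ("turn_left"
           if forward_loglik(obs, pi, A, B_left) > forward_loglik(obs, pi, A, B_right)
           else "turn_right")
print(command)  # turn_left
```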

Interaction Intent Analysis of Multiple Persons using Nonverbal Behavior Features (인간의 비언어적 행동 특징을 이용한 다중 사용자의 상호작용 의도 분석)

  • Yun, Sang-Seok;Kim, Munsang;Choi, Mun-Taek;Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems / v.19 no.8 / pp.738-744 / 2013
  • According to cognitive science research, the interaction intent of humans can be estimated by analyzing their expressed behaviors. This paper proposes a novel methodology for reliable intention analysis based on this approach. To identify intention, eight behavioral features are extracted from four characteristics of human-human interaction, and we outline a set of core components of nonverbal human behavior. These nonverbal behaviors are associated with various recognition modules built on multimodal sensors: localizing the speaker's sound source in the audition part, recognizing frontal faces and facial expressions in the vision part, and estimating human trajectories, body pose and leaning, and hand gestures in the spatial part. As a post-processing step, temporal confidence reasoning improves recognition performance, and an integrated human model quantitatively classifies intention from the multi-dimensional cues by applying weight factors. Interactive robots can thus make informed engagement decisions to interact effectively with multiple persons. Experimental results show that the proposed scheme works successfully between human users and a robot in human-robot interaction.
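The weighted integration of multi-dimensional cues can be sketched as a weighted sum followed by a threshold on the engagement decision (cue names, weights, and the threshold are hypothetical illustrations of the weight-factor idea):

```python
# Toy weighted fusion of nonverbal cues into an interaction-intent score.
# Cue values are per-module confidences in [0, 1]; weights sum to 1.
cues = {"facing_robot": 1.0, "smiling": 0.7, "approaching": 0.9, "waving": 0.0}
weights = {"facing_robot": 0.3, "smiling": 0.2, "approaching": 0.3, "waving": 0.2}

score = sum(weights[k] * cues[k] for k in cues)
engage = score > 0.5  # hypothetical engagement threshold
print(round(score, 2), engage)
```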

Prediction of the Upper Limb Motion Based on a Geometrical Muscle Changes for Physical Human Machine Interaction (물리적 인간 기계 상호작용을 위한 근육의 기하학적 형상 변화를 이용한 상지부 움직임 예측)

  • Han, Hyon-Young;Kim, Jung
    • Journal of Institute of Control, Robotics and Systems / v.16 no.10 / pp.927-932 / 2010
  • Estimating motion intention from bio-signals presents a challenge in man-machine interaction (MMI): conveying the user's command to a machine without operating any device. Measuring meaningful bio-signals that contain the motion intention, and estimating motion from them, are important issues for accurate and safe interaction. This paper proposes a novel motion estimation sensor based on geometrical muscle changes, and a motion estimation method using the sensor. To estimate motion, we measure the circumference change of the muscle, which is proportional to the muscle activation level, using a flexible piezoelectric cable (pMAS, piezo muscle activation sensor) designed in band type. The pMAS measures variations of the cable band that originate from circumference changes of muscle bundles. We then estimate elbow motion by applying the sensor to the upper limb with the least-squares method. The proposed sensor and prediction method are simple to use, so they can serve as motion prediction devices and methods in rehabilitation and sports.
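The least-squares step maps sensor readings to joint angle. A minimal sketch with a linear model on synthetic data (the model form, coefficients, and data are illustrative assumptions, not the paper's calibration):

```python
import numpy as np

# Synthetic pMAS-like data: elbow angle assumed roughly linear in the
# normalized circumference reading, plus measurement noise.
rng = np.random.default_rng(0)
circumference = np.linspace(0.0, 1.0, 50)                      # sensor output
angle = 20.0 + 90.0 * circumference + rng.normal(0, 1.0, 50)   # degrees

# Least-squares fit of angle = b0 + b1 * circumference.
X = np.column_stack([np.ones_like(circumference), circumference])
coef, *_ = np.linalg.lstsq(X, angle, rcond=None)               # [b0, b1]

predicted = X @ coef
rmse = np.sqrt(np.mean((predicted - angle) ** 2))
```

With well-behaved data the residual error stays near the noise level, which is what makes such a simple calibration usable for motion prediction.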

Robust Speech Endpoint Detection in Noisy Environments for HRI (Human-Robot Interface) (인간로봇 상호작용을 위한 잡음환경에 강인한 음성 끝점 검출 기법)

  • Park, Jin-Soo;Ko, Han-Seok
    • The Journal of the Acoustical Society of Korea / v.32 no.2 / pp.147-156 / 2013
  • In this paper, a new speech endpoint detection method for moving robot platforms in noisy environments is proposed. In the conventional method, the endpoint of speech is obtained by applying an edge detection filter that finds abrupt changes in the feature domain. However, since the frame-energy feature is unstable in such noisy environments, it is difficult to find the endpoint of speech accurately. Therefore, a novel feature extraction method based on the twice-iterated fast Fourier transform (TIFFT) and statistical models of speech is proposed. The proposed feature extraction method was applied to an edge detection filter for effective detection of the speech endpoint. Representative experiments show a substantial improvement over the conventional method.
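At its core, a twice-iterated FFT applies a second FFT to the magnitude spectrum of a frame. A generic sketch of that idea (frame length and test signal are assumptions, not the paper's exact pipeline):

```python
import numpy as np

def tifft_feature(frame):
    """Twice-iterated FFT: take the magnitude spectrum of the frame,
    then take the magnitude spectrum of that spectrum."""
    first = np.abs(np.fft.rfft(frame))
    second = np.abs(np.fft.rfft(first))
    return second

fs = 16000
t = np.arange(400) / fs                    # one 25 ms frame at 16 kHz
frame = np.sin(2 * np.pi * 440 * t)        # toy voiced-speech stand-in
feat = tifft_feature(frame)
print(feat.shape)
```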

Moral Judgment, Mind Perception and Immortality Perception of Humans and Robots (인간과 로봇의 도덕성 판단, 마음지각과 불멸지각의 관계)

  • Hong Im Shin
    • Science of Emotion and Sensibility / v.26 no.3 / pp.29-40 / 2023
  • The term and concept of "immortality" have garnered considerable attention worldwide. However, research on this topic is lacking, and the question of whether the mind of a deceased individual survives death has yet to be answered. This research investigates whether the morality and mind perception of the dead correlate with perceived immortality. Study 1 measured the perceived immortality of people who were good or evil in life. The results show that perceived morality is related to perceived immortality. Participants also indicated the extent to which each person had maintained a degree of morality and of agency/experience of mind; thus, morality and mind perception toward a person are related to perceived immortality. In Study 2, participants read three essays about robots (good, evil, and nonmoral) and indicated the extent to which each robot maintained a degree of immortality, morality, and agency/experience of mind. The results show that attributing a good spirit to a robot is related to higher mind-perception scores toward the robot, which in turn increase perceived immortality. These results imply that the morality of humans and robots can mediate the relationship between mind perception and immortality. This work extends previous research on the determinants of social robots for overcoming difficulties in human-robot interaction.

Degree of autonomy for education robot (교육 보조 로봇의 자율성 지수)

  • Choi, Okkyung;Jung, Bowon;Gwak, Kwan-Woong;Moon, Seungbin
    • Journal of Internet Computing and Services / v.17 no.3 / pp.67-73 / 2016
  • With the rapid development of mobile services and the prevalence of education robots, robots are being developed to become part of our lives, and they can be utilized to assist teachers in educating students. This standard has been proposed to define the degree of autonomy for education robots. Autonomy is the ability to perform a given task based on the current state and sensor values without human intervention. The degree of autonomy is a scale indicating the extent of autonomy; it is set between 1 and 10 by considering the level of the task and of human intervention. Once adopted as a standard, it allows education robots to be used to teach students autonomously. Education robots can be beneficial in education and are expected to contribute to assisting teachers.