• Title/Summary/Keyword: Human robot interaction

342 search results

Secure Scheme Between Nodes in Cloud Robotics Platform (Cloud Robotics Platform 환경에서 Node간 안전한 통신 기법)

  • Kim, Hyungjoo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.12
    • /
    • pp.595-602
    • /
    • 2021
  • The robot is developing into a software-oriented form that recognizes its surroundings and is assigned tasks. The Cloud Robotics Platform supports a Service Oriented Architecture for robots, providing the tasks and motion controllers needed in each situation from the cloud. As robots evolve into humanoids, they will be used to help humans in everyday life in accordance with the three laws of robotics. Beyond robots owned by specific individuals, robots as public goods that can help anyone depending on the situation will therefore become commonplace. The Cloud Robotics Computing environment is composed of people, robots, service applications on the cloud that give robots their intelligence, and a cloud bridge that connects robots and clouds, so information security will become an indispensable element of it. In this paper, we propose a security scheme that can secure communication between people, robots, cloud bridges, and cloud systems in the Cloud Robotics Computing environment for intelligent robots, enabling robot services that are safe from hacking and that protect personal information.

Korean Students' Attitudes Towards Robots: Two Survey Studies (한국 학생의 로봇에 대한 태도: 국제비교 및 태도형성에 관하여)

  • Shin, Na-Min;Kim, Sang-A
    • The Journal of Korea Robotics Society
    • /
    • v.4 no.1
    • /
    • pp.10-16
    • /
    • 2009
  • This paper is concerned with Korean students' attitudes towards robots, presenting two survey studies. The first study concerned a group of college students and took the perspective of international comparison. Data were collected through an online survey in which 106 volunteer students participated. The survey adopted the Negative Attitude towards Robots Scale (NARS) to compare the Korean students' scores with those of the multi-national groups (the U.S.A., Germany, the Netherlands, Japan, Mexico, and China) who responded to the same scale in Bartneck et al.'s research. The analysis reveals that Korean students tend to be more concerned about the social impacts that robots might bring to future society and are very conscious of the uncertain influence of robots on human life. The second study investigated factors that may affect K-12 students' attitudes towards robots, with survey data gathered from 298 elementary, middle, and high school students. The data were analyzed by multiple regression to test the hypothesis that a student's gender, age, extent of interest in robots, and extent of experience with robots may influence his or her attitude towards robots. The hypothesis was partially supported in that gender, age, and extent of interest in robots were statistically significant with regard to the attitude variable. Given the results, this paper suggests three points of discussion for better understanding Korean students' attitudes towards robots: social and cultural context, individual differences, and theory of mind.


Development of the MVS (Muscle Volume Sensor) for Human-Machine Interface (인간-기계 인터페이스를 위한 근 부피 센서 개발)

  • Lim, Dong Hwan;Lee, Hee Don;Kim, Wan Soo;Han, Jung Soo;Han, Chang Soo;An, Jae Yong
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.30 no.8
    • /
    • pp.870-877
    • /
    • 2013
  • There has been much recent research interest in developing human-machine interfaces of many kinds. The field currently requires more accurate and reliable sensing systems for detecting intended human motion. Most conventional human-machine interfaces use electromyography (EMG) sensors to detect the intended motion, but EMG sensors have a number of disadvantages that make such interfaces difficult to use. This study describes a muscle volume sensor (MVS), developed to measure variation in the outline of a muscle, for use in a human-machine interface. We developed an algorithm to calibrate the system, and the feasibility of using the MVS to detect muscular activity was demonstrated experimentally. We evaluated the performance of the MVS via isotonic contraction using KIN-COM$^{(R)}$ equipment at torques of 5, 10, and 15 Nm.

Interaction Ritual Interpretation of an AI Robot in the TV Show The Good Place (드라마<굿 플레이스>속 인공지능 로봇의 상호작용 의례적 해석)

  • Chu, Mi-Sun;Ryu, Seoung-Ho
    • The Journal of the Korea Contents Association
    • /
    • v.21 no.5
    • /
    • pp.70-83
    • /
    • 2021
  • Predicting the relationship between humans and AI robots is fundamentally a 'strong AI' problem. Many experts predict a tragic ending in which a strong AI with thinking ability superior to humans' conquers humanity. Because expectations of AI robots are projected onto media, a 'morally good AI' that meets human expectations has become an important issue. However, the demand for good AI and perfect technology cannot be satisfied by machines alone. Rather, it ends up placing all responsibility on humans, casting them as immoral beings and turning the matter into a problem between humans, which results in more alienation and discrimination. The outcome of a technology is shaped by interaction with the humans who use it, and its properties are determined and developed according to their reactions, which in turn affect humans. AI technology that takes human emotions into account through interaction is therefore also important. This study accordingly traces the process by which the demand for 'good AI' arises in the human-AI relationship, using Randall Collins' Interaction Ritual Chain, in which emotional energy explains the formation of human bonds. Methodologically, the study is a kind of thought experiment, developed through Janet and the surrounding characters in the TV show The Good Place.

Facial Point Classifier using Convolution Neural Network and Cascade Facial Point Detector (컨볼루셔널 신경망과 케스케이드 안면 특징점 검출기를 이용한 얼굴의 특징점 분류)

  • Yu, Je-Hun;Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.3
    • /
    • pp.241-246
    • /
    • 2016
  • Facial expressions and human behavior attract wide interest, and human-robot interaction (HRI) researchers apply digital image processing, pattern recognition, and machine learning to study them. Facial feature point detection algorithms are essential for face recognition, gaze tracking, and expression and emotion recognition. In this paper, a cascade facial feature point detector is used to find facial feature points such as the eyes, nose, and mouth. However, the detector has difficulty extracting feature points from some images, because images vary in size, color, brightness, and other conditions. We therefore propose an algorithm that augments the cascade facial feature point detector with a convolutional neural network whose structure is based on Yann LeCun's LeNet-5. As input to the network, color and gray outputs from the cascade detector were used, resized to $32{\times}32$; the gray images were converted to the YUV format. The gray and color images are the basis for the convolutional neural network. We then classified about 1,200 test images of subjects. This research found that the proposed method is more accurate than the cascade facial feature point detector alone, because the network refines the detector's results.
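The preprocessing described in the abstract above — resizing detector output patches to 32×32 and converting to YUV before they reach the LeNet-5-style network — can be sketched as follows. The abstract does not specify the resize method or the conversion matrix, so this sketch assumes nearest-neighbour resizing and the standard BT.601 RGB-to-YUV conversion:

```python
import numpy as np

def resize_nearest(img, size=32):
    """Nearest-neighbour resize to size x size (an illustrative stand-in
    for the paper's unspecified resizing method)."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def rgb_to_yuv(img):
    """Convert an RGB image (H, W, 3, floats in [0, 1]) to YUV
    using the standard BT.601 coefficients."""
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.147, -0.289,  0.436],
                  [ 0.615, -0.515, -0.100]])
    return img @ m.T

patch = np.random.rand(48, 64, 3)       # a hypothetical detector output patch
x = rgb_to_yuv(resize_nearest(patch))   # 32 x 32 x 3 network input
```

For a gray pixel (R = G = B) the U and V channels come out near zero, which is a quick sanity check on the conversion.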

A Study on the Implementation of RFID-Based Autonomous Navigation System for Robotic Cellular Phone (RCP) (RFID를 이용한 RCP 자율 네비게이션 시스템 구현을 위한 연구)

  • Choe Jae-Il;Choi Jung-Wook;Oh Dong-Ik;Kim Seung-Woo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.12 no.5
    • /
    • pp.480-488
    • /
    • 2006
  • The industrial and economic importance of the CP (Cellular Phone) is growing rapidly. Combined with IT technology, the CP is one of the most attractive technologies of today. However, unless a new breakthrough is found, its growth may soon slow down. RT (Robot Technology) is considered one of the most promising next-generation technologies. Unlike the industrial robots of the past, today's robots require advanced features such as soft computing, human-friendly interfaces, interaction techniques, speech recognition, and object recognition, among many others. In this paper, we present a new technological concept named the RCP (Robotic Cellular Phone), which integrates RT and CP with the vision of advancing CP, IT, and RT together. The RCP consists of three sub-modules: $RCP^{Mobility}$ (RCP Mobility System), $RCP^{Interaction}$, and $RCP^{Integration}$. The main focus of this paper is $RCP^{Mobility}$, which combines an autonomous navigation system from RT mobility with the CP. Through $RCP^{Mobility}$, we can provide the CP with robotic functions such as auto-charging and real-world robotic entertainment; ultimately, the CP may become a robotic pet for human beings. $RCP^{Mobility}$ consists of various controllers, the two main ones being the trajectory controller and the self-localization controller. While the former is responsible for the wheel-based navigation of the RCP, the latter provides localization information for the moving RCP. With coordinates acquired from the RFID-based self-localization controller, the trajectory controller refines the RCP's movement to achieve better navigation. In this paper, a prototype of $RCP^{Mobility}$ is presented; we describe the overall structure of the system and provide experimental results on RCP navigation.
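The division of labor between the two controllers — RFID reads fixing the robot's position, and the trajectory controller steering toward a goal — might be sketched as below. The tag map, gains, and the simple go-to-goal law are illustrative assumptions, not the paper's actual controllers:

```python
import math

# Hypothetical map from floor-mounted RFID tag IDs to known world coordinates.
TAG_MAP = {"tag_A": (0.0, 0.0), "tag_B": (2.0, 0.0), "tag_C": (2.0, 2.0)}

def localize(tag_id, dead_reckoned):
    """Self-localization: snap the position estimate to the tag's known
    coordinates when a tag is read; otherwise keep dead reckoning."""
    return TAG_MAP.get(tag_id, dead_reckoned)

def trajectory_step(pose, goal, k_lin=0.5, k_ang=1.5):
    """One step of a proportional go-to-goal controller: returns (v, w),
    linear and angular velocity commands for a wheeled base."""
    x, y, theta = pose
    gx, gy = goal
    rho = math.hypot(gx - x, gy - y)                       # distance to goal
    alpha = math.atan2(gy - y, gx - x) - theta             # heading error
    alpha = math.atan2(math.sin(alpha), math.cos(alpha))   # wrap to [-pi, pi]
    return k_lin * rho, k_ang * alpha

pose = (*localize("tag_B", (1.9, 0.1)), 0.0)   # RFID read corrects (x, y)
v, w = trajectory_step(pose, TAG_MAP["tag_C"])  # steer toward the next tag
```

The key idea is that the trajectory controller runs continuously on odometry, while each RFID read supplies an absolute correction that bounds the accumulated drift.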

Emotion Recognition and Expression System of User using Multi-Modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 사용자의 감정 인식 및 표현 시스템)

  • Yeom, Hong-Gi;Joo, Jong-Tae;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.1
    • /
    • pp.20-26
    • /
    • 2008
  • As intelligent robots and computers become more common, interaction between them and humans grows more important, and emotion recognition and expression are indispensable to that interaction. In this paper, we first extract emotional features from speech signals and facial images. Second, we apply both Bayesian Learning (BL) and Principal Component Analysis (PCA) to classify five emotion patterns (neutral, happy, angry, surprised, and sad). Finally, we experiment with decision fusion and feature fusion to improve the emotion recognition rate. In the decision fusion method, the output values of each recognition system are combined through a fuzzy membership function; in the feature fusion method, superior features are selected by Sequential Forward Selection (SFS) and fed to a Multi-Layer Perceptron (MLP) neural network to classify the five emotion patterns. The recognized result is then applied to a 2D facial shape to express the emotion.
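The SFS step of the feature fusion method — greedily adding, one at a time, the feature that most improves a score — can be sketched as follows. The scoring function here is a toy stand-in; the paper would score candidate subsets by the resulting recognition rate:

```python
import numpy as np

def sequential_forward_selection(score, n_features, k):
    """Greedy SFS: starting from the empty set, repeatedly add the single
    feature that maximises score(subset) until k features are chosen."""
    selected = []
    for _ in range(k):
        remaining = [f for f in range(n_features) if f not in selected]
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
    return selected

# Toy score: each feature has a fixed (hypothetical) usefulness weight,
# standing in for a cross-validated recognition rate.
weights = np.array([0.1, 0.9, 0.3, 0.7, 0.2])
score = lambda subset: weights[subset].sum()
chosen = sequential_forward_selection(score, 5, 3)  # [1, 3, 2]
```

Because the toy score is additive, the greedy choice is simply the features in descending weight order; with a real recognition-rate score, interactions between features make the greedy order non-trivial.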

Human Tracking and Body Silhouette Extraction System for Humanoid Robot (휴머노이드 로봇을 위한 사람 검출, 추적 및 실루엣 추출 시스템)

  • Kwak, Soo-Yeong;Byun, Hye-Ran
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.6C
    • /
    • pp.593-603
    • /
    • 2009
  • In this paper, we propose a new integrated computer vision system designed to track multiple human beings and extract their silhouettes with an active stereo camera. The proposed system consists of three modules: detection, tracking, and silhouette extraction. Detection is performed by camera ego-motion compensation and disparity segmentation. For tracking, we present an efficient mean-shift-based tracking method in which the tracked objects are characterized by disparity-weighted color histograms. The silhouette is obtained by two-step segmentation: a trimap is estimated in advance and then effectively incorporated into the graph cut framework for fine segmentation. The proposed system was evaluated against ground truth data and shown to detect and track multiple people very well while also producing high-quality silhouettes. The proposed system can assist gesture and gait recognition in the field of Human-Robot Interaction (HRI).
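The disparity-weighted color histogram used as the mean-shift target model above can be sketched roughly as follows. The bin count, the Gaussian form of the depth weighting, and the hue-only color representation are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def disparity_weighted_histogram(hue, disparity, target_d, n_bins=16, sigma=8.0):
    """Build a colour histogram in which each pixel's vote is weighted by
    how close its stereo disparity is to the target's disparity, so that
    background pixels at other depths contribute little."""
    w = np.exp(-0.5 * ((disparity - target_d) / sigma) ** 2)   # depth weight
    hist = np.zeros(n_bins)
    bins = np.clip((hue * n_bins).astype(int), 0, n_bins - 1)  # quantise hue
    np.add.at(hist, bins.ravel(), w.ravel())                   # weighted votes
    return hist / (hist.sum() + 1e-12)                         # normalise

hue = np.random.rand(64, 48)        # hue channel in [0, 1)
disp = np.full((64, 48), 40.0)      # toy disparity map (constant depth)
h = disparity_weighted_histogram(hue, disp, target_d=40.0)
```

Mean shift would then compare this model histogram against candidate-window histograms to move the track toward the best colour-and-depth match.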

An Emotion Appraisal System Based on a Cognitive Context (인지적 맥락에 기반한 감정 평가 시스템)

  • Ahn, Hyun-Sik
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.1
    • /
    • pp.33-39
    • /
    • 2010
  • Emotional interaction is an important factor in Human-Robot Interaction (HRI). It requires a contextual appraisal of emotion that extracts emotional information from the events occurring from past to present. In this paper, an emotion appraisal system based on cognitive context is presented. First, a conventional emotion appraisal model is simplified into a contextual model that defines the types of emotion appraisal, the target of emotion induced by analyzing emotional verbs, and the transition of emotions in context. We employ a language-based cognitive system, together with its sentential memory and object descriptor, to define the type and target of emotion and to evaluate emotion as it varies over time, using a priori emotional evaluations of targets. In an experiment, we simulate the proposed emotion appraisal system with a scenario and show its feasibility for HRI.

Residual Learning Based CNN for Gesture Recognition in Robot Interaction

  • Han, Hua
    • Journal of Information Processing Systems
    • /
    • v.17 no.2
    • /
    • pp.385-398
    • /
    • 2021
  • The complexity of deep learning models affects the real-time performance of gesture recognition, thereby limiting the application of gesture recognition algorithms in real scenarios. Hence, a residual learning neural network based on a deep convolutional neural network is proposed. First, small convolution kernels are used to extract the local details of gesture images. Subsequently, a shallow residual structure is built to share weights, avoiding vanishing or exploding gradients as the network deepens and thus simplifying model optimisation. Additional convolutional neural networks are used to accelerate the refinement of deep abstract features based on the spatial importance of the gesture feature distribution. Finally, a fully connected cascade softmax classifier completes the gesture recognition. Compared with a network that densely connects and multiplexes feature information, the proposed algorithm optimises feature multiplexing to avoid the performance fluctuations caused by feature redundancy. Experimental results on the ISOGD gesture dataset and the Gesture dataset show that the proposed algorithm affords fast convergence and high accuracy.
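The shallow residual structure the abstract relies on — an identity shortcut added around a small stack of convolutions — can be illustrated with a toy single-channel forward pass. This is NumPy only and does not reproduce the real model's kernel sizes, channel counts, or training; it only shows why the shortcut keeps the mapping well-behaved:

```python
import numpy as np

def conv3x3(x, w):
    """'Same'-padded 3x3 convolution over a single-channel map (toy loop
    implementation for clarity, not speed)."""
    h, wd = x.shape
    p = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(wd):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * w)
    return out

def residual_block(x, w1, w2):
    """y = ReLU(x + F(x)): two 3x3 convolutions with a ReLU in between,
    plus the identity shortcut that keeps gradients flowing."""
    f = np.maximum(conv3x3(x, w1), 0.0)   # conv -> ReLU
    f = conv3x3(f, w2)                    # second conv
    return np.maximum(x + f, 0.0)         # shortcut add -> ReLU

x = np.random.rand(8, 8)
w1 = w2 = np.zeros((3, 3))   # zero weights => F(x) = 0, so the block
y = residual_block(x, w1, w2)  # reduces to the identity: y == x
```

With the residual formulation, the block only has to learn the correction F(x) on top of the identity, which is what makes deeper stacks easier to optimise than plain convolutions learning the full mapping.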