• Title/Summary/Keyword: Human robot interaction


Development of an Electro-hydraulic Soft Zipping Actuator with Self-sensing Mechanism (자가 변위 측정이 가능한 전기-유압식 소프트 지핑 구동기의 개발)

  • Lee, Dongyoung;Kwak, Bokeon;Bae, Joonbum
    • The Journal of Korea Robotics Society
    • /
    • v.16 no.2
    • /
    • pp.79-85
    • /
    • 2021
  • Soft fluidic actuators (SFAs) are widely used in areas such as wearable systems because their inherent compliance allows safe and flexible interaction. However, SFA-driven systems generally require a large pump and multiple valves and tubes, which hinders the development of miniaturized systems with a small range of motion. A highly integrated soft actuator is therefore needed to implement a compact SFA-driven system. In this study, we propose an electro-hydraulic soft zipping actuator that can be used as a miniature pump. The actuator exerts tactile force as the dielectric liquid contained inside it pressurizes its deformable part. In addition, the proposed actuator can estimate the thickness of the internal dielectric liquid using its self-sensing function. The electrical characteristics and driving performance of the proposed system were verified through experiments.
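
The abstract does not state how the self-sensing works; one common approach for electro-hydraulic actuators is to treat the electrode pair as a parallel-plate capacitor and invert the capacitance to get the liquid-layer thickness. The sketch below assumes that model; the constants and the `liquid_thickness` function are illustrative, not from the paper.

```python
# Hypothetical sketch: estimating dielectric liquid thickness from a
# self-sensing capacitance reading, assuming a parallel-plate model
# C = eps0 * eps_r * A / d. All values here are illustrative.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def liquid_thickness(capacitance, area, eps_r):
    """Invert C = eps0 * eps_r * A / d to recover the liquid thickness d."""
    return EPS0 * eps_r * area / capacitance

# Example: 1 cm^2 electrode, dielectric liquid with eps_r ~ 2.2
d = liquid_thickness(capacitance=1.95e-12, area=1e-4, eps_r=2.2)
print(f"estimated thickness: {d * 1e3:.2f} mm")
```

As the liquid layer thins, capacitance rises, so the measured capacitance doubles as a displacement signal without any extra sensor.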

Building of a Hierarchical Semantic Map with Classified Area Information in Home Environments (가정환경에서의 분류된 지역정보를 통한 계층적 시맨틱 지도 작성)

  • Park, Joong-Tae;Song, Jae-Bok
    • The Journal of Korea Robotics Society
    • /
    • v.7 no.4
    • /
    • pp.252-258
    • /
    • 2012
  • This paper describes hierarchical semantic map building using classified area information in home environments. The hierarchical semantic map consists of a grid map, a CAIG (Classified Area Information in Grid) map, and a topological map. The grid and CAIG maps are used for navigation and motion selection, respectively, while the topological map provides intuitive information about the environment that can be used for communication between robots and users. The proposed semantic map building algorithm can greatly improve the capabilities of a mobile robot in various domains, including localization, path planning, and HRI (Human-Robot Interaction). In a home environment, a door can be used to divide the space into sections such as rooms and a kitchen. Therefore, we used not only the grid map of the home environment but also door information as the main clue to classify areas and build the hierarchical semantic map. The proposed method was verified through various experiments, which showed that the algorithm guarantees autonomous map building in the home environment.
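
The three-layer map described above can be pictured as a small data structure: an occupancy grid, a per-cell area label (CAIG), and a topological graph of areas linked by doors. The class and field names below are assumptions for illustration, not the authors' implementation.

```python
# Illustrative three-layer map: grid (occupancy), CAIG (area label per
# cell), and a topological graph whose edges are door connections.
from dataclasses import dataclass, field

@dataclass
class HierarchicalSemanticMap:
    grid: list            # occupancy grid: 0 = free, 1 = occupied
    caig: list            # same shape as grid; an area label per cell
    topology: dict = field(default_factory=dict)  # area -> connected areas

    def connect(self, area_a, area_b):
        """Record that a door links two classified areas."""
        self.topology.setdefault(area_a, set()).add(area_b)
        self.topology.setdefault(area_b, set()).add(area_a)

m = HierarchicalSemanticMap(
    grid=[[0, 0], [0, 1]],
    caig=[["room1", "room1"], ["kitchen", "kitchen"]],
)
m.connect("room1", "kitchen")   # a detected door divides the two areas
print(m.topology["room1"])      # {'kitchen'}
```

A planner would run on the grid layer, behavior selection on the CAIG layer, and user dialogue ("go to the kitchen") on the topological layer.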

Mechanization of humans, humanization of machines, and coexistence through dance works (무용작품을 통해 본 인간의 기계화, 기계의 인간화 그리고 공존)

  • Chang, So-Jung
    • The Journal of the Convergence on Culture Technology
    • /
    • v.7 no.1
    • /
    • pp.145-150
    • /
    • 2021
  • This thesis examined the mechanization of humans, the humanization of machines, and their coexistence through dance works. The works reviewed included partial excerpts from Oscar Schlemmer's <3 Chord Ballet> and works by the Felindrome Dance Company, as well as a dance work with an inherent form of coexistence. Through these works, it was found that dance performances in which science, technology, and robot-like elements coexist in complex, convergent forms provide continuous creativity to humans, and that various forms of sensibility and creative movement based on data make rich performances possible. The researcher anticipates numerous works that accept and reflect the changes of the times through the embodied interaction of dance performance with science and technology.

A Study on Improvement of the Human Posture Estimation Method for Performing Robots (공연로봇을 위한 인간자세 추정방법 개선에 관한 연구)

  • Park, Cheonyu;Park, Jaehun;Han, Jeakweon
    • Journal of Broadcast Engineering
    • /
    • v.25 no.5
    • /
    • pp.750-757
    • /
    • 2020
  • One of the basic tasks for robots interacting with humans is to grasp human behavior quickly and accurately. The robot therefore needs to estimate the human pose with high accuracy and recognize it as quickly as possible. However, when the human pose is estimated using deep learning, a representative artificial intelligence technique, recognition accuracy and speed are difficult to satisfy at the same time. It is therefore common to select either a top-down method, which has high inference accuracy, or a bottom-up method, which has high processing speed. In this paper, we propose two methods that retain the advantages of both approaches while complementing their disadvantages. The first performs parallel inference on a server using multiple GPUs, and the second combines the bottom-up method with one-class classification. Experimental results show that both proposed methods improve speed. If applied to an entertainment robot, they are expected to enable highly reliable interaction with the audience.
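
The first idea above, spreading inference over several devices, can be sketched with a worker pool: each worker handles a share of the incoming frames, so throughput scales while per-frame latency stays near a single model's speed. The stub `infer_pose` and `NUM_DEVICES` are assumptions standing in for the real bottom-up network and GPU count.

```python
# Illustrative parallel-inference sketch; a stub replaces the real
# bottom-up pose network, and threads stand in for per-GPU workers.
from concurrent.futures import ThreadPoolExecutor

NUM_DEVICES = 4  # e.g., one worker per GPU on the inference server

def infer_pose(frame):
    """Stub for a bottom-up pose network; returns dummy keypoints."""
    return {"frame": frame, "keypoints": [(0, 0)] * 17}

def parallel_inference(frames):
    # map() preserves input order, so downstream tracking stays consistent.
    with ThreadPoolExecutor(max_workers=NUM_DEVICES) as pool:
        return list(pool.map(infer_pose, frames))

results = parallel_inference(range(8))
print(len(results))  # 8
```

In a real deployment each worker would pin its model to a separate GPU; the pooling pattern itself is unchanged.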

Engine of computational Emotion model for emotional interaction with human (인간과 감정적 상호작용을 위한 '감정 엔진')

  • Lee, Yeon Gon
    • Science of Emotion and Sensibility
    • /
    • v.15 no.4
    • /
    • pp.503-516
    • /
    • 2012
  • Research on robots and software agents to date has produced computational emotion models that are system-dependent, making it difficult to separate an emotion model from an existing system and reuse it in a new one. This paper therefore introduces the Engine of computational Emotion model (hereafter, EE), which can be integrated with any robot or agent. The EE is a software engine independent of particular inputs and outputs: it handles only the generation and processing of emotions, without the perception (input) and expression (output) phases. It can be interfaced with any inputs and outputs, and produces emotions based not only on the incoming emotion itself but also on personality and the emotions of the person. The EE can be embedded in a robot or agent as a software library, or run as a separate system that communicates with it. The EE uses six primary emotions: joy, surprise, disgust, fear, sadness, and anger. An emotion is represented as a vector consisting of a label string and a coefficient; the EE receives such vectors from the input interface and sends them to the output interface. Each emotion is linked to a list of emotional experiences, also represented as label-coefficient pairs, which are used to generate and process emotional states; the emotional experiences consist of emotion vocabulary covering the variety of human emotional experience. The EE can be used to build interactive products that respond appropriately to human emotions, and the significance of this study lies in developing a system that leads people to feel that a product sympathizes with them. The EE can thus help provide emotionally sympathetic services in HRI and HCI products.
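
The abstract's core ideas, label-coefficient emotion vectors, six primary emotions, and personality-modulated generation, can be sketched as a small generation-only module. The blending rule, clamping, and the personality bias below are illustrative assumptions, not the paper's actual equations.

```python
# Minimal sketch of a generation-only "Emotion Engine": it consumes
# (label, coefficient) vectors from any input interface and maintains a
# bounded emotional state, without perception or expression phases.

PRIMARY = ("joy", "surprise", "disgust", "fear", "sadness", "anger")

class EmotionEngine:
    def __init__(self, personality=None):
        self.state = {e: 0.0 for e in PRIMARY}
        # Personality biases how strongly each emotion accumulates.
        self.personality = personality or {e: 1.0 for e in PRIMARY}

    def receive(self, vectors):
        """Fold incoming (label, coefficient) vectors into the state."""
        for label, coeff in vectors:
            if label in self.state:
                self.state[label] += coeff * self.personality[label]
        # Clamp to [0, 1] so the state stays a bounded intensity.
        for e in self.state:
            self.state[e] = max(0.0, min(1.0, self.state[e]))
        return self.state

ee = EmotionEngine(personality={**{e: 1.0 for e in PRIMARY}, "joy": 1.5})
state = ee.receive([("joy", 0.5), ("fear", 0.25)])
print(state["joy"], state["fear"])  # 0.75 0.25
```

Because the engine touches only these vectors, any robot or agent can wrap it: perception maps stimuli to input vectors, and expression maps the state to gestures or speech.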


Mobile Robot Control using Smart Phone for internet of Things (사물인터넷 구축을 위한 스마트폰을 이용한 이동로봇의 제어)

  • Yu, Je-Hun;Ahn, Seong-In;Lee, Sung-Won;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.26 no.5
    • /
    • pp.396-401
    • /
    • 2016
  • Owing to developments in the Internet of Things (IoT), many products have been developed and various studies have been conducted. Among IoT areas, smart home systems are receiving more attention than others, and autonomous mobile robots play various roles in many industries. In this paper, a smart housekeeping robot was implemented using the IoT and an autonomous mobile robot. The robot was built with a Raspberry Pi, a wireless USB camera, and a uBrain robot from Huins Corp. To control it, a cell phone connects to the Raspberry Pi's IP address, and the Raspberry Pi connects to the uBrain robot via Bluetooth; the housekeeping robot is then controlled with commands from a cell-phone application. The user can also select an autonomous driving mode to move the robot automatically. In addition, real-time video can be checked from a cell phone or computer, so the smart housekeeping robot helps users monitor their homes in real time.
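
The control path above (phone command in, Bluetooth drive command out) implies a small translation step on the Raspberry Pi. The command names and one-byte drive protocol below are invented for illustration; the real uBrain protocol is not given in the abstract.

```python
# Hypothetical command relay running on the Pi: app commands arrive as
# text over TCP and are translated into single drive bytes for the
# robot's Bluetooth serial link.

DRIVE_TABLE = {
    "forward": b"F", "back": b"B", "left": b"L", "right": b"R",
    "stop": b"S", "auto": b"A",  # 'auto' selects autonomous driving mode
}

def translate(command):
    """Translate one phone command into a robot drive byte (default: stop)."""
    return DRIVE_TABLE.get(command.strip().lower(), b"S")

print(translate("Forward"))  # b'F'
```

In the running system, the returned byte would be written to the Bluetooth serial device (e.g., with pyserial); defaulting unknown input to "stop" is a simple safety choice.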

STAGCN-based Human Action Recognition System for Immersive Large-Scale Signage Content (몰입형 대형 사이니지 콘텐츠를 위한 STAGCN 기반 인간 행동 인식 시스템)

  • Jeongho Kim;Byungsun Hwang;Jinwook Kim;Joonho Seon;Young Ghyu Sun;Jin Young Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.6
    • /
    • pp.89-95
    • /
    • 2023
  • In recent decades, human action recognition (HAR) has demonstrated potential applications in sports analysis, human-robot interaction, and large-scale signage content. In this paper, a spatial-temporal attention graph convolutional network (STAGCN)-based HAR system is proposed. The STAGCN assigns different weights to the spatial-temporal features of skeleton sequences, enabling key joints and viewpoints to be taken into account. Simulation results show that the proposed model improves classification accuracy on the NTU RGB+D dataset.
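
The attention idea can be illustrated in miniature: in a graph convolution over the skeleton, each joint's contribution is scaled by a learned attention weight, so key joints dominate the aggregated feature. The 3-joint chain, scalar features, and weights below are made up; the real STAGCN operates on full skeleton sequences with learned parameters.

```python
# Toy attention-weighted graph convolution on a 3-joint chain
# (shoulder - elbow - wrist), with self-loops in the adjacency.
ADJ = [
    [1, 1, 0],
    [1, 1, 1],
    [0, 1, 1],
]

def attention_gcn_layer(features, attention):
    """One graph-convolution step with per-joint attention weights."""
    out = []
    for i, row in enumerate(ADJ):
        # Sum neighbour features, each scaled by that joint's attention.
        agg = sum(a * attention[j] * features[j] for j, a in enumerate(row))
        out.append(agg)
    return out

feats = [1.0, 2.0, 3.0]   # one scalar feature per joint
attn = [0.2, 1.0, 0.5]    # e.g., the elbow and wrist matter more
print(attention_gcn_layer(feats, attn))
```

In the full model the same weighting runs over both the spatial graph and the temporal axis, which is what lets the network emphasize informative joints and frames.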

Intelligent robotic walker with actively controlled human interaction

  • Weon, Ihn-Sik;Lee, Soon-Geul
    • ETRI Journal
    • /
    • v.40 no.4
    • /
    • pp.522-530
    • /
    • 2018
  • In this study, we developed a robotic walker that actively controls its speed and direction of movement according to the user's gait intention. Sensor fusion between a low-cost light detection and ranging (LiDAR) sensor and inertial measurement units (IMUs) helps determine the user's gait intention. The LiDAR determines the walking direction by detecting both knees, and the IMUs attached to each foot obtain the angular rate of the gait. The user's gait intention is given as the direction angle and the speed of movement, and the two motors in the robotic walker are controlled with these two variables. The estimated direction angle is verified by comparison with a Kinect sensor that detects the centroid trajectory of both the user's feet. We validated the robotic walker in an experiment by controlling it using the estimated gait intention.
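
The final control step, turning the estimated gait intention (direction angle, speed) into two motor commands, can be sketched with a differential-drive mapping. The wheel-base constant and the treatment of the direction angle as a turn rate are assumptions for illustration, not the paper's controller.

```python
# Sketch: map gait intention (direction angle, speed) to left/right
# wheel speeds of a differential-drive walker base.

WHEEL_BASE = 0.5  # metres between the two driven wheels (assumed)

def motor_speeds(direction_rad, speed):
    """Differential drive: v_left/v_right from speed and heading rate."""
    # For brevity, treat the intention angle as a desired turn rate (rad/s).
    omega = direction_rad
    v_left = speed - omega * WHEEL_BASE / 2
    v_right = speed + omega * WHEEL_BASE / 2
    return v_left, v_right

print(motor_speeds(0.0, 0.8))  # straight ahead: equal wheel speeds
print(motor_speeds(0.4, 0.8))  # veer toward the intended direction
```

A positive direction angle speeds up the outer wheel and slows the inner one, so the walker turns with the user instead of resisting them.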

Noise Robust Emotion Recognition Feature : Frequency Range of Meaningful Signal (음성의 특정 주파수 범위를 이용한 잡음환경에서의 감정인식)

  • Kim Eun-Ho;Hyun Kyung-Hak;Kwak Yoon-Keun
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.23 no.5 s.182
    • /
    • pp.68-76
    • /
    • 2006
  • The ability to recognize human emotion is one of the hallmarks of human-robot interaction; hence, this paper addresses emotion recognition from voice. We propose a new feature, the frequency range of the meaningful signal, with which we reached an average recognition rate of 76% in speaker-dependent tests. The experimental results confirm the usefulness of the proposed feature. We also define a noise environment and conduct noise-environment tests; in contrast to other features, the proposed feature remains robust in a noisy environment.
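
One plausible reading of the proposed feature is the span between the lowest and highest frequency bins whose spectral magnitude exceeds a threshold. The sketch below assumes that reading; the threshold, bin spacing, and toy spectrum are illustrative, and the paper's exact definition may differ.

```python
# Hypothetical "frequency range of meaningful signal": the span of
# spectrum bins whose magnitude exceeds an energy threshold.

def meaningful_range(magnitudes, bin_hz, threshold):
    """Return (low_hz, high_hz) spanning bins above the threshold."""
    above = [i for i, m in enumerate(magnitudes) if m >= threshold]
    if not above:
        return None
    return (above[0] * bin_hz, above[-1] * bin_hz)

# Toy spectrum: energy concentrated between bins 2 and 5 (200-500 Hz).
spectrum = [0.1, 0.2, 1.5, 2.0, 1.8, 0.9, 0.2, 0.1]
print(meaningful_range(spectrum, bin_hz=100, threshold=0.5))  # (200, 500)
```

A band-limited feature like this is naturally noise-tolerant: broadband noise raises the floor everywhere but shifts the above-threshold span far less than it distorts fine spectral detail.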

Recognition of Hand gesture to Human-Computer Interaction (손 동작을 통한 인간과 컴퓨터간의 상호 작용)

  • Lee, Lae-Kyoung;Kim, Sung-Shin
    • Proceedings of the KIEE Conference
    • /
    • 2000.07d
    • /
    • pp.2930-2932
    • /
    • 2000
  • In this paper, a robust gesture recognition system is designed and implemented to explore communication methods between humans and computers. Hand gestures in the proposed approach are used to command a computer to perform actions with a high degree of freedom. The user does not need to wear any cumbersome devices such as cyber-gloves, and no assumption is made about whether the user is wearing ornaments or using the left or right hand. Image segmentation based on skin color is combined with shape analysis based on invariant moments; the extracted features are used as input vectors to a radial basis function network (RBFN). Our "Puppy" robot is employed as a testbed. Preliminary results on a set of gestures show recognition rates of about 87% in a real-time implementation.
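
The classifier named above can be sketched as a forward pass: Gaussian hidden units centred on prototype gestures, followed by a linear readout per class. The centres, width, and weights below are made up for clarity; a real system would fit them to training data from the moment features.

```python
# Minimal RBFN forward pass: feature vector in, per-class scores out.
import math

def rbf_forward(x, centres, sigma, weights):
    """Gaussian RBF hidden layer followed by a linear output layer."""
    # Hidden activations: one Gaussian bump per prototype centre.
    h = [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c))
                  / (2 * sigma ** 2))
         for c in centres]
    # Linear readout: one score per gesture class.
    return [sum(wi * hi for wi, hi in zip(w, h)) for w in weights]

centres = [[0.0, 0.0], [1.0, 1.0]]   # two prototype gestures
weights = [[1.0, 0.0], [0.0, 1.0]]   # identity readout for clarity
scores = rbf_forward([0.95, 1.05], centres, sigma=0.5, weights=weights)
print(scores.index(max(scores)))     # nearest prototype wins -> class 1
```

Because each hidden unit responds only near its centre, the RBFN trains quickly and degrades gracefully on inputs far from any prototype, which suits a real-time gesture interface.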
