• Title/Summary/Keyword: Arm Gesture

Implementation of a Gesture Recognition Signage Platform for Factory Work Environments

  • Rho, Jungkyu
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.12 no.3
    • /
    • pp.171-176
    • /
    • 2020
  • This paper presents an implementation of a gesture recognition platform that can be used in factory workplaces. The platform consists of signage displays that show workers' job orders and a control center used to manage work orders for factory workers. Workers do not need to carry work order documents and can browse their assigned work orders on the signage at their workplace. The contents of the signage are controlled by the worker's hand and arm gestures. Gestures are extracted from body movement tracked by a 3D depth camera and converted into commands that control the displayed content of the signage. Using the control center, the factory manager can assign tasks to each worker, upload work order documents to the system, and see each worker's progress. The implementation has been applied experimentally to a machining factory workplace. This platform not only provides convenience for factory workers at their workplaces and improves the security of technical documents, but can also be used to build smart factories.
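
The abstract does not detail how tracked gestures are translated into display commands, so the following is a minimal, hypothetical sketch of such a mapping. It assumes gestures (e.g., swipe left/right, raise hand) have already been classified from the depth-camera skeleton data; the gesture labels, command set, and `signage.execute()` call are illustrative, not taken from the paper.

```python
# Hypothetical sketch: mapping classified arm/hand gestures to signage commands.
# Gesture labels and commands are illustrative; the paper does not specify them.

from enum import Enum, auto

class SignageCommand(Enum):
    NEXT_PAGE = auto()
    PREV_PAGE = auto()
    ZOOM_IN = auto()
    ZOOM_OUT = auto()
    SHOW_WORK_ORDER = auto()

# Assumed mapping from recognized gestures to display commands.
GESTURE_TO_COMMAND = {
    "swipe_left": SignageCommand.NEXT_PAGE,
    "swipe_right": SignageCommand.PREV_PAGE,
    "arms_apart": SignageCommand.ZOOM_IN,
    "arms_together": SignageCommand.ZOOM_OUT,
    "raise_hand": SignageCommand.SHOW_WORK_ORDER,
}

def dispatch(gesture_label: str, signage) -> None:
    """Translate a recognized gesture into a command and send it to the signage display."""
    command = GESTURE_TO_COMMAND.get(gesture_label)
    if command is not None:
        signage.execute(command)  # signage.execute() stands in for a display API
```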

Design and Implementation of a Real-time Region Pointing System using Arm-Pointing Gesture Interface in a 3D Environment

  • Han, Yun-Sang;Seo, Yung-Ho;Doo, Kyoung-Soo;Choi, Jong-Soo
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.290-293
    • /
    • 2009
  • In this paper, we propose a method to estimate the pointing region in the real world from camera images. In general, an arm-pointing gesture encodes a direction that extends from the user's fingertip to the target point. In the proposed work, we assume that the pointing ray can be approximated by a straight line passing through the user's face and fingertip. Therefore, the proposed method extracts two end points for the estimation of the pointing direction: one from the user's face and another from the user's fingertip region. Then, the pointing direction and its target region are estimated based on the 2D-3D projective mapping between the camera images and the real-world scene. In order to demonstrate an application of the proposed method, we constructed an ICGS (interactive cinema guiding system) that employs two CCD cameras and a monitor. The accuracy and robustness of the proposed method are verified through experimental results on several real video sequences.
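
Since the pointing direction is approximated by a straight line through the user's face and fingertip, a worked sketch of intersecting that ray with a planar target surface is shown below. It assumes the 3D positions of the face and fingertip have already been triangulated from the two camera views; the plane parameters and coordinates are placeholders, not values from the paper.

```python
import numpy as np

def pointing_target_on_plane(face: np.ndarray, fingertip: np.ndarray,
                             plane_point: np.ndarray, plane_normal: np.ndarray):
    """Intersect the face-to-fingertip ray with a planar target surface.

    face, fingertip: 3D points (already triangulated from the stereo views).
    plane_point, plane_normal: any point on the target plane and its normal.
    Returns the 3D intersection point, or None if no forward intersection exists.
    """
    direction = fingertip - face                      # pointing ray direction
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:                             # ray parallel to the plane
        return None
    t = np.dot(plane_normal, plane_point - face) / denom
    if t < 0:                                         # plane lies behind the user
        return None
    return face + t * direction

# Example with assumed coordinates (metres): the target plane is z = 0.
face = np.array([0.0, 1.6, 2.0])
fingertip = np.array([0.2, 1.4, 1.5])
target = pointing_target_on_plane(face, fingertip,
                                  plane_point=np.array([0.0, 0.0, 0.0]),
                                  plane_normal=np.array([0.0, 0.0, 1.0]))
```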

Design and Control of Wire-driven Flexible Robot Following Human Arm Gestures (팔 동작 움직임을 모사하는 와이어 구동 유연 로봇의 설계 및 제어)

  • Kim, Sanghyun;Kim, Minhyo;Kang, Junki;Son, SeungJe;Kim, Dong Hwan
    • The Journal of Korea Robotics Society
    • /
    • v.14 no.1
    • /
    • pp.50-57
    • /
    • 2019
  • This work presents a design and control method for a flexible robot arm operated by a wire drive that follows human gestures. When moving the robot arm to a desired position, the necessary wire travel length is calculated and the motors are rotated according to that length. The robot arm is composed of two modular mechanisms that mimic real human motion. Two wires form a closed loop in each module, and universal joints are attached to each disk to create up, down, left, and right movements. To control the motors, an anti-windup PID controller was applied to limit the sudden changes usually caused by accumulated error in the integral control term. In addition, a master/slave communication protocol and an operation program linking the six motors to the MYO and IMU sensor outputs were developed. This makes it possible to receive images from the camera attached to the robot arm and simultaneously send control commands to the robot at high speed.
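
The abstract mentions an anti-windup PID controller used to limit the effect of accumulated integral error on the wire-drive motors. Below is a minimal sketch of one common anti-windup scheme (conditional integration with output clamping); the gains and limits are placeholders, not values from the paper, and the paper's exact scheme may differ.

```python
class AntiWindupPID:
    """PID controller with conditional integration (a common anti-windup scheme)."""

    def __init__(self, kp, ki, kd, output_limit):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.output_limit = output_limit
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        derivative = (error - self.prev_error) / dt
        self.prev_error = error

        # Tentative output with the current integral term.
        unsaturated = self.kp * error + self.ki * self.integral + self.kd * derivative
        saturated = max(-self.output_limit, min(self.output_limit, unsaturated))

        # Anti-windup: accumulate the integral only when the output is not saturated,
        # or when the error would drive the output back out of saturation.
        if unsaturated == saturated or error * unsaturated < 0:
            self.integral += error * dt

        return saturated

# Placeholder gains for a single wire-drive motor (illustrative only).
pid = AntiWindupPID(kp=2.0, ki=0.5, kd=0.05, output_limit=1.0)
```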

Implementation of a Spring Backboned Soft Arm Emulating Human Gestures (인간 동작 표현용 스프링 백본 구조 소프트 암의 구현)

  • Yoon, Hyun-Soo;Choi, Jae-Yeon;Oh, Se-Min;Lee, Byeong-Ju;Yoon, Ho-Sup;Cho, Young-Jo
    • The Journal of Korea Robotics Society
    • /
    • v.7 no.2
    • /
    • pp.65-75
    • /
    • 2012
  • This study deals with the design of a spring-backboned soft arm, which is employed to generate human gestures as an effective means of human-robot interaction. The special features of the proposed mechanism are its light weight and the flexibility provided by the spring backbone. Thus, even in the case of a collision with a human, the device can absorb the impact structurally. The kinematics and design of the soft arm are introduced. The performance of the mechanism was demonstrated through experiments emulating several human gestures that express emotion, as well as some service contents. Finally, the soft arm was implemented as the wing mechanism of a penguin robot.

Kinect-based Motion Recognition Model for the 3D Contents Control (3D 콘텐츠 제어를 위한 키넥트 기반의 동작 인식 모델)

  • Choi, Han Suk
    • The Journal of the Korea Contents Association
    • /
    • v.14 no.1
    • /
    • pp.24-29
    • /
    • 2014
  • This paper proposes a Kinect-based human motion recognition model for 3D content control, tracking body gestures through the infrared camera of the Kinect device. The proposed model computes the variation in distance of body movement from the shoulder to the left and right hand, wrist, arm, and elbow. The motions are classified into movement commands such as move left, move right, up, down, enlarge, downsize, and select. The proposed Kinect-based human motion recognition model is natural and low-cost compared to contact-type gesture recognition technologies and device-based gesture technologies that require expensive hardware.
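
As a hedged illustration of the rule-based classification the abstract describes, the sketch below thresholds hand positions relative to the shoulder (taken from Kinect skeleton joints) into directional commands. The joint inputs, thresholds, and decision rules are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def classify_motion(shoulder, right_hand, left_hand, threshold=0.25):
    """Classify a gesture command from hand positions relative to the shoulder.

    shoulder, right_hand, left_hand: 3D joint positions (metres) from a Kinect skeleton.
    threshold: minimum displacement (metres) before a command triggers (assumed value).
    """
    offset = right_hand - shoulder
    hand_gap = np.linalg.norm(right_hand - left_hand)

    if hand_gap > 0.9:                 # hands far apart -> enlarge (assumed rule)
        return "enlarge"
    if hand_gap < 0.2:                 # hands close together -> downsize (assumed rule)
        return "downsize"
    if offset[0] > threshold:
        return "move_right"
    if offset[0] < -threshold:
        return "move_left"
    if offset[1] > threshold:
        return "up"
    if offset[1] < -threshold:
        return "down"
    return "select"                    # small movement near the body -> selection (assumed)
```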

Analysis of Face Direction and Hand Gestures for Recognition of Human Motion (인간의 행동 인식을 위한 얼굴 방향과 손 동작 해석)

  • Kim, Seong-Eun;Jo, Gang-Hyeon;Jeon, Hui-Seong;Choe, Won-Ho;Park, Gyeong-Seop
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.7 no.4
    • /
    • pp.309-318
    • /
    • 2001
  • In this paper, we describe methods for analyzing human gestures. A human interface (HI) system for analyzing gestures extracts the head and hand regions from image sequences of an operator's continuous behavior captured with CCD cameras. As gestures are performed with the operator's head and hand motions, we extract the head and hand regions to analyze gestures and calculate geometrical information of the extracted skin regions. Head motion is analyzed by obtaining the face direction. We model the head as an ellipsoid in 3D coordinates and locate the facial features, such as the eyes, nose, and mouth, on its surface. If we know the center of the feature points, the angle to that center within the ellipsoid gives the direction of the face. The hand region obtained from preprocessing may include the arm as well as the hand. To extract only the hand region, we find the wrist line that divides the hand and arm regions. After separating the hand region at the wrist line, we model the hand region as an ellipse for the analysis of hand data. The finger part is also represented as a long, narrow shape. We extract hand information such as size, position, and shape.
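
To make the ellipsoid-based face direction idea concrete, here is a small sketch that takes the face direction as the angles of the feature-point center relative to the head center. The spherical simplification of the ellipsoid and the yaw/pitch convention are assumptions for illustration only, not the paper's formulation.

```python
import numpy as np

def face_direction(head_center, feature_center):
    """Estimate face direction (yaw, pitch, in degrees) from the center of the
    facial feature points on the surface of an ellipsoidal head model.

    A spherical simplification of the ellipsoid is assumed for illustration.
    """
    v = feature_center - head_center
    v = v / np.linalg.norm(v)                 # unit vector from head center to feature center
    yaw = np.degrees(np.arctan2(v[0], v[2]))  # left/right rotation about the vertical axis
    pitch = np.degrees(np.arcsin(v[1]))       # up/down tilt
    return yaw, pitch
```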

A Gesture-Emotion Keyframe Editor for sign-Language Communication between Avatars of Korean and Japanese on the Internet

  • Kim, Sang-Woon;Lee, Yung-Who;Lee, Jong-Woo;Aoki, Yoshinao
    • Proceedings of the IEEK Conference
    • /
    • 2000.07b
    • /
    • pp.831-834
    • /
    • 2000
  • Sign language can be used as an auxiliary means of communication between avatars of different languages. In that case, an intelligent communication method can also be utilized to achieve real-time communication, where intelligently coded data (joint angles for arm gestures and action units for facial emotions) are transmitted instead of real pictures. In this paper, we design a gesture-emotion keyframe editor that provides the means to obtain these parameter values easily. To calculate the joint angles of the arms and hands and to generate the in-between keyframes realistically, an inverse-kinematics transformation matrix and several kinds of constraints are applied. Also, to edit emotional expressions efficiently, a comic-style facial model having only eyebrows, eyes, nose, and mouth is employed. Experimental results show that the editor could be used for intelligent sign-language image communication between different languages.
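
The abstract refers to an inverse-kinematics transformation used to obtain joint angles for the arm gestures. As a hedged illustration of the underlying idea, below is a standard planar two-link inverse-kinematics computation (shoulder and elbow angles from a target hand position); it is a textbook formulation, not the paper's actual transformation matrix or constraint set, and the link lengths are placeholders.

```python
import math

def two_link_ik(x, y, upper_arm=0.3, forearm=0.25):
    """Planar two-link inverse kinematics: shoulder and elbow angles (radians)
    that place the hand at (x, y). Link lengths are illustrative, in metres.
    Returns None if the target is out of reach.
    """
    d2 = x * x + y * y
    cos_elbow = (d2 - upper_arm**2 - forearm**2) / (2 * upper_arm * forearm)
    if not -1.0 <= cos_elbow <= 1.0:
        return None                                   # target unreachable
    elbow = math.acos(cos_elbow)                      # elbow flexion angle
    shoulder = math.atan2(y, x) - math.atan2(forearm * math.sin(elbow),
                                             upper_arm + forearm * math.cos(elbow))
    return shoulder, elbow
```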

A Study on the Gesture Based Virtual Object Manipulation Method in Multi-Mixed Reality

  • Park, Sung-Jun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.2
    • /
    • pp.125-132
    • /
    • 2021
  • In this paper, we propose a method for constructing a collaborative environment in mixed reality and for working with wearable IoT devices. Mixed reality (MR) combines virtual reality and augmented reality: objects in the real and virtual worlds can be viewed at the same time. Unlike VR, an MR HMD does not cause motion sickness; it operates wirelessly and is attracting attention as a technology for industrial fields. The Myo wearable device enables arm rotation tracking and hand gesture recognition by using a triaxial sensor, an EMG sensor, and an acceleration sensor. Although various MR studies are in progress, there has been insufficient discussion of environments in which multiple people can participate in mixed reality and manipulate virtual objects with their own hands. In this paper, we propose a method of constructing an environment where collaboration is possible, together with an interaction method for smooth interaction, in order to apply mixed reality in real industrial fields. As a result, two people could participate in the mixed reality environment at the same time and share a unified virtual object, in an environment where each person could interact through the Myo wearable interface device.
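
The abstract describes manipulating shared virtual objects through Myo arm-rotation and hand-gesture input. Below is a hypothetical sketch of such an interaction step; the gesture names, the `SharedObject` type, and its fields are illustrative assumptions, not the paper's system or the Myo SDK's actual API.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SharedObject:
    """A virtual object shared by all participants in the MR session (illustrative)."""
    position: List[float] = field(default_factory=lambda: [0.0, 0.0, 0.0])
    rotation_y: float = 0.0
    grabbed_by: Optional[str] = None

def handle_myo_input(user_id: str, gesture: str, arm_yaw_delta: float,
                     obj: SharedObject) -> None:
    """Translate Myo gesture/IMU input into manipulation of a shared object.

    gesture: e.g. "fist" (grab) or "fingers_spread" (release) -- assumed labels.
    arm_yaw_delta: change in arm yaw from the IMU since the last frame (radians).
    """
    if gesture == "fist" and obj.grabbed_by is None:
        obj.grabbed_by = user_id                    # grab the free object
    elif gesture == "fingers_spread" and obj.grabbed_by == user_id:
        obj.grabbed_by = None                       # release it
    elif obj.grabbed_by == user_id:
        obj.rotation_y += arm_yaw_delta             # rotate the object while holding it
```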

The Relationship between Lexical Retrieval and Coverbal Gestures (어휘인출과 구어동반 제스처의 관계)

  • Ha, Ji-Wan;Sim, Hyun-Sub
    • Korean Journal of Cognitive Science
    • /
    • v.22 no.2
    • /
    • pp.123-143
    • /
    • 2011
  • At what point in the process of speech production are gestures involved? According to the Lexical Retrieval Hypothesis, gestures are involved in lexicalization during the formulating stage. According to the Information Packaging Hypothesis, gestures are involved in the conceptual planning of messages during the conceptualizing stage. We investigated these hypotheses using a game situation from a TV program that induced the players to engage in both lexicalization and conceptualization simultaneously. The transcription of the verbal utterances was augmented with all arm and hand gestures produced by the players. Coverbal gestures were classified into two types: lexical gestures and motor gestures. As a result, concrete words elicited lexical gestures significantly more frequently than abstract words, and abstract words elicited motor gestures significantly more frequently than concrete words. The difficulty of conceptualization for concrete words was significantly correlated with the amount of lexical gestures. However, the number of words and the word frequency were not correlated with the amount of either gesture type. This result supports the Information Packaging Hypothesis. Above all, the importance of motor gestures is inferred from the result that abstract words elicited motor gestures more frequently than concrete words. Motor gestures, which have been considered unrelated to verbal production, were excluded from analysis in many gesture studies. This study revealed that motor gestures appear to be connected to abstract conceptualization.

Segmentation of Pointed Objects for Service Robots (서비스 로봇을 위한 지시 물체 분할 방법)

  • Kim, Hyung-O;Kim, Soo-Hwan;Kim, Dong-Hwan;Park, Sung-Kee
    • The Journal of Korea Robotics Society
    • /
    • v.4 no.2
    • /
    • pp.139-146
    • /
    • 2009
  • This paper describes how an unknown object indicated by a person's pointing gesture can be extracted while the person interacts with a robot. Using a stereo vision sensor, the proposed method consists of three stages: detection of the operator's face, estimation of the pointing direction, and extraction of the pointed object. The operator's face is recognized using Haar-like features. We then estimate the 3D pointing direction from the shoulder-to-hand line. Finally, we segment the unknown object from the 3D point cloud within the estimated region of interest. On the basis of the proposed method, we implemented an object registration system on our mobile robot and obtained reliable experimental results.
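
A hedged sketch of the last stage described in the abstract: once the shoulder-to-hand pointing ray is known, points from the stereo point cloud that lie near the ray (beyond the hand) can be kept as the region of interest for segmentation. The distance threshold and the simple ray-proximity criterion are assumptions for illustration, not the paper's exact segmentation procedure.

```python
import numpy as np

def points_near_pointing_ray(cloud, shoulder, hand, max_dist=0.15):
    """Select 3D points close to the shoulder-to-hand pointing ray.

    cloud: (N, 3) array of 3D points from the stereo sensor.
    shoulder, hand: 3D joint positions defining the pointing ray.
    max_dist: maximum perpendicular distance to the ray (metres, assumed value).
    """
    direction = hand - shoulder
    direction = direction / np.linalg.norm(direction)

    rel = cloud - shoulder
    t = rel @ direction                          # projection length along the ray
    closest = shoulder + np.outer(t, direction)  # closest point on the ray for each cloud point
    dist = np.linalg.norm(cloud - closest, axis=1)

    # Keep points near the ray and beyond the hand (in front of the user).
    mask = (dist < max_dist) & (t > np.dot(hand - shoulder, direction))
    return cloud[mask]
```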
