• Title/Summary/Keyword: Gesture-based Interaction

A Study on Hand Gesture Recognition with Low-Resolution Hand Images (저해상도 손 제스처 영상 인식에 대한 연구)

  • Ahn, Jung-Ho
    • Journal of Satellite, Information and Communications / v.9 no.1 / pp.57-64 / 2014
  • Recently, many human-friendly communication methods have been studied for human-machine interfaces (HMI) that do not use any physical devices. One of them is the vision-based gesture recognition that this paper deals with. In this paper, we define gestures for interacting with objects in a predefined virtual world and propose an efficient method to recognize them. For preprocessing, we detect and track both hands and extract their silhouettes from the low-resolution hand images captured by a webcam. We model skin color with two Gaussian distributions in RGB color space and use a blob-matching method to detect and track the hands. Applying the flood-fill algorithm, we extract hand silhouettes and recognize the hand shapes Thumb-Up, Palm, and Cross by detecting and analyzing their modes. Then, by analyzing the context of hand movement, we recognize five predefined one-hand or two-hand gestures. Assuming that a single main user is present for accurate hand detection, the proposed gesture recognition method has proven its efficiency and accuracy in many real-time demos.
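
The preprocessing pipeline described above (two-Gaussian skin model in RGB, blob tracking, flood fill) can be sketched as follows. This is a minimal illustration assuming OpenCV and scikit-learn; the likelihood threshold, the skin-sample collection step, and the seed point (which would come from the blob tracker) are my assumptions, not the authors' settings.

```python
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_skin_model(skin_pixels):
    """Fit a two-component Gaussian mixture to (N, 3) RGB skin samples."""
    gmm = GaussianMixture(n_components=2, covariance_type='full')
    gmm.fit(skin_pixels.astype(np.float64))
    return gmm

def skin_mask(frame_rgb, gmm, log_thresh=-12.0):
    """Threshold per-pixel log-likelihood; log_thresh is a tunable guess."""
    h, w, _ = frame_rgb.shape
    scores = gmm.score_samples(frame_rgb.reshape(-1, 3).astype(np.float64))
    return (scores.reshape(h, w) > log_thresh).astype(np.uint8) * 255

def hand_silhouette(mask, seed):
    """Flood-fill from a tracked hand point to isolate one silhouette."""
    ff_mask = np.zeros((mask.shape[0] + 2, mask.shape[1] + 2), np.uint8)
    cv2.floodFill(mask, ff_mask, seedPoint=seed, newVal=128)
    return (mask == 128).astype(np.uint8) * 255
```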

Gesture-based Table Tennis Game in AR Environment (증강현실과 제스처를 이용한 비전기반 탁구 게임)

  • Yang, Jong-Yeol;Lee, Sang-Kyung;Kyoung, Dong-Wuk;Jung, Kee-Chul
    • Journal of Korea Game Society / v.5 no.3 / pp.3-10 / 2005
  • We present a computer table tennis game driven by the player's swing motion. To hit the virtual ball, we need to transform real-world coordinates into virtual-world coordinates. A correct 3D position of the racket cannot be obtained in an environment using one camera and simple image processing, so we adopt the Augmented Reality (AR) concept to develop the game. This paper presents an AR table tennis game using gestures and a method for developing a 3D interaction game with only one camera, without any motion detection device or stereo cameras. We also use a scan-line method to recognize gestures for fast processing. The game is developed using ARToolKit and DirectX, popular SDKs for game development.
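
A sketch of the coordinate mapping the paper relies on: given the pose of a fiducial marker as estimated by ARToolKit (a camera-to-marker transform), a racket point detected in camera space can be expressed in the marker-anchored virtual world. The 4x4 transform here is an illustrative placeholder, not the authors' code.

```python
import numpy as np

def camera_to_virtual(point_cam, T_cam_marker):
    """Map a 3D point from camera coordinates into marker (virtual-world)
    coordinates, given the marker's 4x4 homogeneous pose in camera space."""
    p = np.append(np.asarray(point_cam, float), 1.0)  # homogeneous point
    T_marker_cam = np.linalg.inv(T_cam_marker)        # invert camera->marker
    return (T_marker_cam @ p)[:3]
```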

Volume Control using Gesture Recognition System

  • Shreyansh Gupta;Samyak Barnwal
    • International Journal of Computer Science & Network Security / v.24 no.6 / pp.161-170 / 2024
  • With technological advances, humans have made great progress in ease of living, now incorporating sight, motion, sound, speech, etc. for various application and software controls. In this paper, we explore a project in which gestures play a very significant role. Gesture control has been researched extensively and continues to evolve, and this project makes use of computer vision. The main objective achieved in this project is controlling computer settings with hand gestures. We create a module that acts as a volume-controlling program, in which hand gestures control the computer's system volume, using OpenCV. The module uses the computer's web camera to record images or videos, processes them to extract the needed information, and then, based on the input, acts on the volume settings of that computer. The program can increase and decrease the computer's volume; the only setup needed is a web camera to capture the user's input. The program performs gesture recognition with OpenCV, Python, and its libraries, identifies the specified human gestures, and uses them to carry out the changes in the device settings. The objective is to adjust the volume of a computer without physical interaction through a mouse or keyboard. OpenCV, a widely utilized tool for image processing and computer vision applications in this domain, enjoys extensive popularity: the OpenCV community consists of over 47,000 individuals, and as of a survey conducted in 2020, the estimated number of downloads exceeds 18 million.
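
The control loop the abstract describes can be sketched as below. The paper names only OpenCV and Python; using MediaPipe for hand landmarks, the pinch-distance-to-volume mapping, and the set_system_volume stub are my assumptions (a real implementation would go through a platform library such as pycaw on Windows).

```python
import cv2
import numpy as np
import mediapipe as mp

def set_system_volume(percent):
    # Hypothetical stub: replace with a platform-specific volume API call.
    print(f"volume -> {percent:.0f}%")

hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)                              # the web camera input
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        lm = result.multi_hand_landmarks[0].landmark
        h, w, _ = frame.shape
        thumb = np.array([lm[4].x * w, lm[4].y * h])   # thumb tip
        index = np.array([lm[8].x * w, lm[8].y * h])   # index fingertip
        dist = np.linalg.norm(thumb - index)
        # Map an assumed 20-200 px pinch distance onto 0-100% volume.
        set_system_volume(np.interp(dist, [20, 200], [0, 100]))
    cv2.imshow("volume gesture", frame)
    if cv2.waitKey(1) & 0xFF == 27:                    # Esc to quit
        break
cap.release()
```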

Implementation of Hand-Gesture-Based Augmented Reality Interface on Mobile Phone (휴대폰 상에서의 손동작 기반 증강현실 인터페이스 구현)

  • Choi, Jun-Yeong;Park, Han-Hoon;Park, Jung-Sik;Park, Jong-Il
    • Journal of Broadcast Engineering / v.16 no.6 / pp.941-950 / 2011
  • With the recent advances in the performance of mobile phones, many effective interfaces for them have been proposed. This paper implements a hand-gesture-and-vision-based interface on a mobile phone. We assume a natural interaction scenario in which the user holds the mobile phone in one hand and views the other hand's palm through the phone's camera. A virtual object is then rendered on the palm and reacts to hand and finger movements. Since the implemented interface is based on the hand, which is familiar to humans, and does not require any additional sensors or markers, the user can freely interact with the virtual object anytime and anywhere without any training. The implemented interface ran at 5 fps on a mobile phone (a Galaxy S2 with a dual-core processor).
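
One way to realize the "virtual object on the palm" anchoring described above is to build a local coordinate frame from a few tracked hand points; the sketch below assumes the palm center and two finger-base positions are already detected, and it is not the authors' implementation.

```python
import numpy as np

def palm_frame(center, index_base, pinky_base):
    """Build a 4x4 pose (palm frame) from three tracked 3D points, usable as
    a model matrix so a rendered virtual object stays attached to the palm."""
    center, index_base, pinky_base = (np.asarray(p, float)
                                      for p in (center, index_base, pinky_base))
    x_axis = index_base - pinky_base                 # across the knuckles
    x_axis /= np.linalg.norm(x_axis)
    z_axis = np.cross(x_axis, index_base - center)   # approximate palm normal
    z_axis /= np.linalg.norm(z_axis)
    y_axis = np.cross(z_axis, x_axis)                # completes the frame
    pose = np.eye(4)
    pose[:3, 0], pose[:3, 1], pose[:3, 2], pose[:3, 3] = x_axis, y_axis, z_axis, center
    return pose
```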

Hand Motion Recognition Algorithm Using Skin Color and Center of Gravity Profile (피부색과 무게중심 프로필을 이용한 손동작 인식 알고리즘)

  • Park, Youngmin
    • The Journal of the Convergence on Culture Technology / v.7 no.2 / pp.411-417 / 2021
  • The field that studies human-computer interaction is called HCI (Human-Computer Interaction). It is an academic field that studies how humans and computers communicate with each other and recognize information. This study addresses hand gesture recognition for human interaction: it examines the problems of existing recognition methods and proposes an algorithm that improves the recognition rate. The hand region is extracted from an image containing the shape of the human hand based on skin color information, and the center-of-gravity profile is calculated using principal component analysis. We propose a method to increase the recognition rate of hand gestures by comparing the obtained information with predefined shapes. The existing center-of-gravity profile has produced incorrect hand gesture recognition when the hand is deformed by rotation; in this study, the profile is re-anchored so that the contour point farthest from the center of gravity becomes the starting point, yielding an algorithm robust to rotation. No gloves or special markers attached to sensors are used for hand gesture recognition, and no separate blue screen is installed. To resolve misrecognition, the feature vector at the nearest distance is found, and an appropriate threshold is obtained to distinguish success from failure.
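
The re-anchored center-of-gravity profile can be sketched as follows: distances from the contour centroid to every contour point, circularly shifted so the farthest point becomes the starting point. Details such as normalization are my assumptions; this is not the paper's exact algorithm.

```python
import cv2
import numpy as np

def cog_profile(mask):
    """Rotation-normalized centroid-distance profile of a binary hand mask
    (OpenCV 4.x two-value findContours signature assumed)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea).reshape(-1, 2)
    m = cv2.moments(contour)
    center = np.array([m['m10'] / m['m00'], m['m01'] / m['m00']])
    dists = np.linalg.norm(contour - center, axis=1)
    start = int(np.argmax(dists))        # farthest contour point = start point
    profile = np.roll(dists, -start)     # shift so the profile is rotation-robust
    return profile / profile.max()       # scale-normalize for shape matching
```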

B-COV:Bio-inspired Virtual Interaction for 3D Articulated Robotic Arm for Post-stroke Rehabilitation during Pandemic of COVID-19

  • Allehaibi, Khalid Hamid Salman;Basori, Ahmad Hoirul;Albaqami, Nasser Nammas
    • International Journal of Computer Science & Network Security / v.21 no.2 / pp.110-119 / 2021
  • The Coronavirus, or COVID-19, is a contagious virus that has infected almost every part of the world. The pandemic forced major countries into lockdowns and stay-at-home policies to reduce the spread of the virus and the number of victims. Interactions between humans and robots form a popular subject of research worldwide. In medical robotics, the primary challenge is to implement natural interactions between robots and human users. Human communication consists of dynamic processes that involve joint attention and mutual engagement. Coordinated care involves agents sharing behaviours, events, interests, and contexts in the world over time. Because a robotic arm is an expensive and complicated system, robot simulators are widely used instead for rehabilitation purposes in medicine. Natural interaction is necessary for disabled persons to work with the robot simulator. This article proposes a low-cost rehabilitation system built on an arm-gesture tracking system based on a depth camera, which captures and interprets human gestures and uses them as interactive commands for a robot simulator to perform specific tasks on a 3D block. The results show that the proposed system can help patients control the rotation and movement of the 3D arm using their hands. Pilot testing with healthy subjects yielded encouraging results: they could synchronize their actions with a 3D robotic arm to perform several repetitive tasks, exerting 19920 J of energy (kg·m²·s⁻²). This average consumed energy is on a medium scale, so we relate it to rehabilitation performance as an initial stage; it can be improved further with extra repetitive exercise to speed up the recovery process.
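
A hedged sketch of the command mapping implied above: a depth camera yields a 3D hand position, whose displacement from a rest pose is converted into movement commands for the simulated arm. The dead zone and the command schema are hypothetical, not taken from the paper.

```python
import numpy as np

def hand_to_command(hand_pos, rest_pos, dead_zone=0.05):
    """Turn hand displacement (metres) from a rest pose into an arm command."""
    delta = np.asarray(hand_pos, float) - np.asarray(rest_pos, float)
    if np.linalg.norm(delta) < dead_zone:       # ignore small jitter
        return {"action": "hold"}
    axis = int(np.argmax(np.abs(delta)))        # dominant movement axis
    return {"action": "move", "axis": "xyz"[axis],
            "direction": int(np.sign(delta[axis]))}
```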

Gesture Recognition based on Mixture-of-Experts for Wearable User Interface of Immersive Virtual Reality (몰입형 가상현실의 착용식 사용자 인터페이스를 위한 Mixture-of-Experts 기반 제스처 인식)

  • Yoon, Jong-Won;Min, Jun-Ki;Cho, Sung-Bae
    • Journal of the HCI Society of Korea / v.6 no.1 / pp.1-8 / 2011
  • As virtual reality has become a focus for providing immersive services, user interfaces for immersive interaction have been actively investigated. In this paper, we propose a gesture-recognition-based immersive user interface using an IR-LED-embedded helmet and data gloves in order to reflect the user's movements in the virtual reality environment effectively. The system recognizes the user's head movements using the IR-LED-embedded helmet and an IR signal transmitter, and recognizes hand gestures from data gathered by the data gloves. Hand gestures are difficult to recognize accurately with a general recognition model because human hands have many articulations and users differ in hand size and movement. We therefore applied Mixture-of-Experts-based gesture recognition to recognize the various hand gestures of multiple users accurately. The user's head movement changes the perspective in the virtual environment to match the movement in the real world, and the user's hand gestures serve as inputs to the virtual environment. A head-mounted display (HMD) can be used with the proposed system to immerse the user in the virtual environment. To evaluate the usefulness of the proposed interface, we developed an interface for a virtual orchestra environment. The experiment verified that users can use the system easily and intuitively while being entertained.
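
A minimal sketch of the Mixture-of-Experts idea named in the title: several expert classifiers specialize on subsets of gestures or users, and a gating network weights their outputs per input. The linear experts and the sizes below are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class MixtureOfExperts:
    def __init__(self, n_experts, n_features, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        # One linear expert per specialist, plus a linear gating network.
        self.experts = [rng.normal(size=(n_features, n_classes)) * 0.1
                        for _ in range(n_experts)]
        self.gate = rng.normal(size=(n_features, n_experts)) * 0.1

    def predict_proba(self, x):
        """Gate-weighted average of the experts' class distributions for x."""
        g = softmax(x @ self.gate)                               # (E,)
        outs = np.stack([softmax(x @ W) for W in self.experts])  # (E, C)
        return g @ outs                                          # (C,)

# Usage: data-glove feature vector in, gesture class probabilities out.
moe = MixtureOfExperts(n_experts=4, n_features=16, n_classes=5)
probs = moe.predict_proba(np.zeros(16))
```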

Emotional Interface Technologies for Service Robot (서비스 로봇을 위한 감성인터페이스 기술)

  • Yang, Hyun-Seung;Seo, Yong-Ho;Jeong, Il-Woong;Han, Tae-Woo;Rho, Dong-Hyun
    • The Journal of Korea Robotics Society / v.1 no.1 / pp.58-65 / 2006
  • An emotional interface is essential technology for a robot to provide proper service to the user. In this research, we developed emotional components for the service robot: a neural-network-based facial expression recognizer, emotion expression technologies based on 3D graphical facial expressions and joint movements that consider the user's reaction, and behavior selection technology for emotion expression. We used our humanoid robots, AMI and AMIET, as the test-beds of our emotional interface, and investigated the emotional interaction between a service robot and a user by integrating the developed technologies. Emotional interface technology can enhance the performance of friendly interaction with the service robot, increase the diversity of the service and the value added by the robot for humans, elevate market growth, and contribute to the popularization of robots.

Augmented Reality Game Interface Using Hand Gestures Tracking (사용자 손동작 추적에 기반한 증강현실 게임 인터페이스)

  • Yoon, Jong-Hyun;Park, Jong-Seung
    • Journal of Korea Game Society / v.6 no.2 / pp.3-12 / 2006
  • Recently, many 3D augmented reality games that provide enhanced immersion have appeared in the 3D game environment. In this article, we describe a barehanded interaction method based on human hand gestures for augmented reality games. First, feature points are extracted from input video streams; point features are tracked and the motion of moving objects is computed. The shape of the motion trajectories is used to determine whether the motion is an intended gesture: a long smooth trajectory toward one of the virtual objects or menus is classified as an intended gesture and the corresponding action is invoked. To prove the validity of the proposed method, we implemented two simple augmented reality applications: a gesture-based music player and a virtual basketball game. In the music player, several menu icons are displayed at the top of the screen and a user can activate a menu by hand gestures. In the virtual basketball game, a virtual ball bounces in a virtual cube space with the real video stream shown in the background, and a user can hit the virtual ball with hand gestures. Experiments with three untrained users showed that the accuracy of menu activation for intended gestures is 94% for normal-speed gestures and 84% for fast and abrupt gestures.
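
The trajectory test described above can be sketched as follows: a gesture counts as intended when its trajectory is long, smooth, and heads toward a target (a menu icon or virtual object). The length, smoothness, and direction thresholds below are illustrative guesses, not the paper's values.

```python
import numpy as np

def is_intended_gesture(traj, target, min_len=80.0, max_turn=0.5, cone=0.3):
    """traj: (N, 2) tracked screen points; target: (2,) target position."""
    traj = np.asarray(traj, float)
    steps = np.diff(traj, axis=0)
    if np.linalg.norm(steps, axis=1).sum() < min_len:
        return False                                # too short to be deliberate
    dirs = steps / (np.linalg.norm(steps, axis=1, keepdims=True) + 1e-9)
    turn = np.arccos(np.clip((dirs[:-1] * dirs[1:]).sum(axis=1), -1.0, 1.0))
    if turn.mean() > max_turn:
        return False                                # too jerky to be smooth
    to_target = np.asarray(target, float) - traj[0]
    to_target /= np.linalg.norm(to_target) + 1e-9
    heading = dirs.mean(axis=0)
    heading /= np.linalg.norm(heading) + 1e-9
    return float(heading @ to_target) > 1.0 - cone  # heading toward the target
```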

Biosign Recognition based on the Soft Computing Techniques with application to a Rehab -type Robot

  • Lee, Ju-Jang
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2001.10a / pp.29.2-29 / 2001
  • For the design of human-centered systems in which a human and a machine such as a robot form a human-in-the-loop system, human-friendly interaction and interfaces are essential. Human-friendly interaction is possible when the system is capable of recognizing human biosigns such as EMG signals, hand gestures, and facial expressions, so that human intention and/or emotion can be inferred and used as a proper feedback signal. In this talk, we report our experience applying soft computing techniques, including fuzzy logic, ANN, GA, and rough set theory, to efficiently recognize various biosigns and to perform effective inference. More specifically, we first observe the characteristics of various forms of biosigns and propose a new way of extracting feature sets for such signals. Then we show a standardized procedure for inferring an intention or emotion from the signals. Finally, we present examples of applying the approach to our rehabilitation robot.
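
As one heavily hedged illustration of the soft computing ingredients named above, the sketch below applies triangular fuzzy membership functions to a single EMG amplitude feature to grade a grip intention. The membership breakpoints and labels are invented for illustration and are not from the talk.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular fuzzy membership with support [a, c] and peak at b."""
    return float(np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                       (c - x) / (c - b + 1e-9)), 0.0))

def grip_intention(emg_rms):
    """Fuzzy rules: low activity -> rest, medium -> weak grip, high -> strong."""
    degrees = np.array([tri(emg_rms, 0.00, 0.05, 0.15),   # rest
                        tri(emg_rms, 0.10, 0.30, 0.50),   # weak grip
                        tri(emg_rms, 0.40, 0.80, 1.20)])  # strong grip
    labels = ["rest", "weak grip", "strong grip"]
    return labels[int(np.argmax(degrees))], degrees
```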
