• Title/Summary/Keyword: Finger Tracking


Analysis on sEMG Signals of Contents Using Finger Tapping Device (Finger Tapping 기기를 활용한 콘텐츠의 sEMG 신호 분석)

  • Han, Sang-bae;Byeon, Sang-kyu;Kim, Jae-hoon;Shin, Sung-Wook;Chung, Sung-taek
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.19 no.6
    • /
    • pp.153-160
    • /
    • 2019
  • In this paper, we aim to make rehabilitation convenient and enjoyable for anyone by implementing a rehabilitation device and game contents that can improve finger motor ability. We developed a Finger Tapping Device that measures finger-regulation ability, accuracy, and agility, and implemented tracking, visual-response, and finger-regulation game contents based on this device. Usability was verified by analyzing sEMG signals recorded during the execution of the three types of game contents, with electrodes attached over the flexor digitorum profundus, the muscle most involved in finger movement. The experiment showed activation of the flexor digitorum profundus during execution of all game contents. Furthermore, by measuring the reaction time of each finger to the visual stimulus, we confirmed that there are differences in agility between fingers.
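
As a rough illustration of the kind of analysis the abstract describes (the paper's exact processing pipeline is not given here), the Python sketch below computes a moving-RMS envelope of one sEMG channel and a per-stimulus reaction time. The sampling rate, window length, threshold rule, and array names are assumptions, not values from the paper.

    import numpy as np

    def semg_rms_envelope(emg, fs=1000, win_ms=100):
        """Moving-RMS envelope of a raw sEMG trace (fs and window are assumed values)."""
        win = max(1, int(fs * win_ms / 1000))
        squared = np.square(emg - np.mean(emg))        # remove DC offset, then square
        kernel = np.ones(win) / win
        return np.sqrt(np.convolve(squared, kernel, mode="same"))

    def reaction_times(envelope, stim_samples, fs=1000, k=3.0):
        """Reaction time per visual stimulus: first sample after the stimulus where the
        envelope exceeds baseline mean + k*std (the threshold rule is an assumption).
        Assumes some pre-stimulus baseline exists before the first stimulus."""
        baseline = envelope[:stim_samples[0]]
        thr = baseline.mean() + k * baseline.std()
        rts = []
        for s in stim_samples:
            after = envelope[s:]
            idx = np.argmax(after > thr)               # first crossing (0 if none)
            rts.append(idx / fs if after[idx] > thr else np.nan)
        return rts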

RGB Camera-based Real-time 21 DoF Hand Pose Tracking (RGB 카메라 기반 실시간 21 DoF 손 추적)

  • Choi, Junyeong;Park, Jong-Il
    • Journal of Broadcast Engineering
    • /
    • v.19 no.6
    • /
    • pp.942-956
    • /
    • 2014
  • This paper proposes a real-time hand pose tracking method using a monocular RGB camera. Hand tracking is highly ambiguous because a hand has a large number of degrees of freedom. To reduce this ambiguity, the proposed method adopts a step-by-step estimation scheme: palm pose estimation, finger yaw motion estimation, and finger pitch motion estimation, performed in consecutive order. Assuming the hand to be a plane, the proposed method utilizes a planar hand model, which facilitates hand model regeneration. The hand model regeneration modifies the hand model to fit the current user's hand and improves the robustness and accuracy of the tracking results. The proposed method works in real time and does not require GPU-based processing, so it can be applied to various platforms, including mobile devices such as Google Glass. The effectiveness and performance of the proposed method are verified through various experiments.
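
To make the planar-hand-model idea concrete, here is a minimal forward-kinematics sketch in Python: a palm pose (rotation plus translation), one yaw angle per finger about the palm normal, and cumulative pitch angles per joint determine the joint positions. The parameterization, segment lengths, and function names are assumptions for illustration, not the paper's actual model.

    import numpy as np

    def rot(axis, angle):
        """Rotation matrix about a unit axis (Rodrigues' formula)."""
        axis = np.asarray(axis, dtype=float)
        axis /= np.linalg.norm(axis)
        K = np.array([[0, -axis[2], axis[1]],
                      [axis[2], 0, -axis[0]],
                      [-axis[1], axis[0], 0]])
        return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

    def finger_joints(palm_R, palm_t, base, yaw, pitches, seg_lengths):
        """Joint positions of one finger attached to a planar palm.
        yaw rotates the finger direction about the palm normal (in-plane motion);
        each pitch bends the next segment about the finger's lateral axis."""
        normal = palm_R[:, 2]                          # palm-plane normal
        direction = rot(normal, yaw) @ palm_R[:, 0]    # in-plane pointing direction
        lateral = np.cross(normal, direction)
        pos = palm_R @ base + palm_t                   # finger base on the palm plane
        joints, cum_pitch = [pos], 0.0
        for L, p in zip(seg_lengths, pitches):
            cum_pitch += p
            seg_dir = rot(lateral, cum_pitch) @ direction
            pos = pos + L * seg_dir
            joints.append(pos)
        return np.array(joints)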

Force Tracking Control of a Small-Sized SMA Gripper Using H$_\infty$ Synthesis (H$_\infty$ 제어기법을 적용한 소형 SMA 그립퍼의 힘 추적 제어)

  • 한영민;최승복;정재천
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1996.11a
    • /
    • pp.391-395
    • /
    • 1996
  • This paper presents robust force tracking control of a small-sized two-finger gripper driven by shape memory alloy (SMA) actuators. The governing equation of the proposed system is derived from Hamilton's principle and the Lagrangian equation, and the control system model is then integrated with first-order actuator dynamics. Uncertain system parameters, such as the time constant of the actuators, are also included in the control model. A robust two-degree-of-freedom (TDF) controller based on H$_{\infty}$ control theory, which is inherently robust to model uncertainties and external disturbances, is adopted to achieve end-point force tracking control of the two-finger gripper. Force tracking performance for desired trajectories represented by sinusoidal and step functions is evaluated through both simulation and experiment.
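
A first-order actuator model of the kind the abstract refers to can be written, as an assumption about its general form rather than the paper's exact equations, as

    $\tau_a \, \dot{F}_a(t) + F_a(t) = k_a \, u(t), \qquad \tau_a \in [\tau_{\min}, \tau_{\max}]$

where $F_a$ is the actuator force, $u$ the control input (e.g., heating current), $k_a$ a static gain, and the uncertain time constant $\tau_a$ is known only to lie in an interval; the TDF H$_{\infty}$ design then treats that interval, together with the structural dynamics obtained from Hamilton's principle, as the uncertainty to be robust against.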


Development of a General Purpose PID Motion Controller Using a Field Programmable Gate Array

  • Kim, Sung-Su;Jung, Seul
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.360-365
    • /
    • 2003
  • In this paper, we have developed a general purpose motion controller using an FPGA (Field Programmable Gate Array). Multiple PID controllers are implemented on a single chip as a system-on-chip for multi-axis motion control. We also developed a PC GUI for efficient interface control. Compared with the commercial motion controller LM629, the proposed controller provides multiple independent PID controllers, offering advantages such as space effectiveness, low cost, and lower power consumption. To test the performance of the proposed controller, a robot hand with three fingers, each having two joints, was controlled. The finger movements show that position tracking was very effective. A further experiment, balancing an inverted pendulum on a cart, was conducted to show the generality of the proposed FPGA PID controller; the controller maintained the balance of the pendulum well.
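
Each FPGA channel described above implements a PID control law. The following Python snippet shows the discrete-time PID update that such a channel computes every sample period; the gains, saturation limits, sample time, and joint count are illustrative assumptions, and the actual design lives in hardware rather than software.

    class DiscretePID:
        """Positional-form discrete PID: u[k] = Kp*e + Ki*sum(e)*dt + Kd*(e - e_prev)/dt."""
        def __init__(self, kp, ki, kd, dt, u_min=-1.0, u_max=1.0):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.u_min, self.u_max = u_min, u_max
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            u = self.kp * error + self.ki * self.integral + self.kd * derivative
            return min(max(u, self.u_min), self.u_max)   # clamp to actuator limits

    # one controller instance per joint, e.g. six for a three-finger, two-joint hand
    controllers = [DiscretePID(kp=2.0, ki=0.5, kd=0.05, dt=0.001) for _ in range(6)]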


Real Time Recognition of Finger-Language Using Color Information and Fuzzy Clustering Algorithm

  • Kim, Kwang-Baek;Song, Doo-Heon;Woo, Young-Woon
    • Journal of information and communication convergence engineering
    • /
    • v.8 no.1
    • /
    • pp.19-22
    • /
    • 2010
  • A finger language, which helps hearing-impaired people communicate, is not familiar to ordinary hearing people. In this paper, we propose a method for real-time finger-language recognition from a vision system using color information and a fuzzy clustering algorithm. We use the YCbCr color model and a Canny mask to locate the hands and their boundary lines. After extracting the regions of the two hands by applying an 8-directional contour tracking algorithm and morphological information, the system uses FCM (fuzzy C-means) to classify the finger-language signals. Experiments show that the proposed method is sufficiently efficient.
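
A minimal OpenCV sketch of the front end described above (YCbCr skin thresholding, a Canny edge mask, and contour extraction). The threshold ranges and image path are assumptions that would need tuning per camera and lighting, and the FCM classification stage is omitted.

    import cv2
    import numpy as np

    frame = cv2.imread("hands.png")                      # placeholder input image
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)     # OpenCV orders channels Y, Cr, Cb

    # Assumed skin range in (Y, Cr, Cb); real systems tune these values.
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    skin_mask = cv2.morphologyEx(skin_mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    edges = cv2.Canny(skin_mask, 50, 150)                # boundary lines of the hand regions

    # External contours approximate the two hand regions (largest two by area).
    contours, _ = cv2.findContours(skin_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hands = sorted(contours, key=cv2.contourArea, reverse=True)[:2]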

A Prototype of Flex Sensor Based Data Gloves to Track the Movements of Fingers

  • Bang, Junseung;You, Jinho;Lee, Youngho
    • Smart Media Journal
    • /
    • v.8 no.4
    • /
    • pp.53-57
    • /
    • 2019
  • In this paper, we propose a flex sensor-based data glove that tracks the movements of human fingers for virtual reality education. By attaching flex sensors and utilizing an accelerometer, the data glove allows people to use applications for virtual reality (VR) or augmented reality (AR). Using the maximum and minimum values of the flex sensor at each finger joint, it determines the joint angle corresponding to the bending value of the flex sensor. It then tracks finger movements and hand gestures from the angle values at the finger joints. To demonstrate the effectiveness of the proposed data glove, we implemented a VR classroom application.
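
The angle computation described above amounts to calibrating each flex sensor against its minimum (finger straight) and maximum (finger fully bent) readings and interpolating between them. A sketch follows; the raw values, calibration numbers, joint names, and the 0 to 90 degree range are assumptions.

    def flex_to_angle(raw, raw_straight, raw_bent, angle_max=90.0):
        """Map a raw flex-sensor reading to a joint angle in degrees by linear
        interpolation between the calibrated straight and fully-bent readings."""
        span = raw_bent - raw_straight
        if span == 0:
            return 0.0
        t = (raw - raw_straight) / span
        t = min(max(t, 0.0), 1.0)          # clamp readings outside the calibrated range
        return t * angle_max

    # Example: per-joint calibration table (values are illustrative)
    calibration = {"index_pip": (310, 620), "index_mcp": (298, 640)}
    angles = {joint: flex_to_angle(512, lo, hi) for joint, (lo, hi) in calibration.items()}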

Skin Color Based Hand and Finger Detection for Gesture Recognition in CCTV Surveillance (CCTV 관제에서 동작 인식을 위한 색상 기반 손과 손가락 탐지)

  • Kang, Sung-Kwan;Chung, Kyung-Yong;Rim, Kee-Wook;Lee, Jung-Hyun
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.10
    • /
    • pp.1-10
    • /
    • 2011
  • In this paper, we propose a skin-color-based hand and finger detection technique for gesture recognition in CCTV surveillance. The aim of this paper is to present a methodology for hand detection and to propose a finger detection method. The detected hand and finger can be used to implement a non-contact mouse, which can control home devices such as a home theater or television. Skin color is used to segment the hand region from the background, and a contour is extracted from the segmented hand. Analysis of the contour gives the location of the fingertip. After detecting the fingertip location, the system tracks the fingertip using the R channel alone, and applies difference images in hand-motion recognition to remove unnecessary image regions, which makes the method robust. Experiments on fingertip tracking and finger-gesture recognition show an accuracy above 96%.
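
One common way to realize the "analysis of the contour gives the location of the fingertip" step is to take the largest skin contour and pick the convex-hull point farthest from the contour centroid. The sketch below uses that farthest-from-centroid rule as an assumption about the method; the paper's own contour analysis may differ.

    import cv2
    import numpy as np

    def fingertip_from_mask(skin_mask):
        """Return (x, y) of a fingertip candidate from a binary skin mask:
        largest contour, then the convex-hull point farthest from the contour centroid."""
        contours, _ = cv2.findContours(skin_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        hand = max(contours, key=cv2.contourArea)
        m = cv2.moments(hand)
        if m["m00"] == 0:
            return None
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        hull = cv2.convexHull(hand).reshape(-1, 2)
        dists = np.hypot(hull[:, 0] - cx, hull[:, 1] - cy)
        return tuple(hull[np.argmax(dists)])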

Implementation of Gesture Interface for Projected Surfaces

  • Park, Yong-Suk;Park, Se-Ho;Kim, Tae-Gon;Chung, Jong-Moon
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.1
    • /
    • pp.378-390
    • /
    • 2015
  • Image projectors can turn any surface into a display. Integrating a surface projection with a user interface transforms it into an interactive display with many possible applications. Hand gesture interfaces are often used with projector-camera systems. Hand detection through color image processing is affected by the surrounding environment. The lack of illumination and color details greatly influences the detection process and drops the recognition success rate. In addition, there can be interference from the projection system itself due to image projection. In order to overcome these problems, a gesture interface based on depth images is proposed for projected surfaces. In this paper, a depth camera is used for hand recognition and for effectively extracting the area of the hand from the scene. A hand detection and finger tracking method based on depth images is proposed. Based on the proposed method, a touch interface for the projected surface is implemented and evaluated.
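
The depth-based touch idea can be illustrated with plain NumPy: given a per-pixel depth map of the empty projected surface and a live depth frame, pixels whose depth sits within a thin band just above the surface are treated as touching fingers. The band thickness, units, and array names below are assumptions, not the paper's parameters.

    import numpy as np

    def touch_mask(depth_frame, surface_depth, near_mm=5.0, far_mm=25.0):
        """Binary mask of pixels hovering within [near_mm, far_mm] above the surface.
        depth_frame / surface_depth: HxW arrays of depth in millimetres; the camera
        looks down at the surface, so smaller values are closer to the camera."""
        height_above = surface_depth - depth_frame       # positive = above the surface
        return (height_above > near_mm) & (height_above < far_mm)

    def touch_point(mask, min_pixels=30):
        """Centroid of the touch mask if it contains enough pixels; a real system
        would run connected-component labeling to separate multiple fingers."""
        ys, xs = np.nonzero(mask)
        if xs.size < min_pixels:
            return None
        return float(xs.mean()), float(ys.mean())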

Combining Communications and Tracking: A New Paradigm of Smartphone Games

  • Lee, Soong-Hee
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.2
    • /
    • pp.202-215
    • /
    • 2013
  • The widespread adoption of smartphones has brought many smartphone games into daily life. These games depend mainly on interactions on the phone's display using finger touches. Meanwhile, functions for detecting the position and motion of the phone, such as gyro sensors, have developed rapidly beyond the earlier orientation and acceleration sensors. Although it has become technologically possible to detect users' motion via the smartphone and to use the phone itself directly as the game device, actual realized cases are hard to find. This paper proposes a new paradigm, including basic frameworks and algorithms, for games that combine motion tracking and mutual communication on smartphones, and presents the details of its implementation and results.

A Study on the Eye-Hand Coordination for Korean Text Entry Interface Development (한글 문자 입력 인터페이스 개발을 위한 눈-손 Coordination에 대한 연구)

  • Kim, Jung-Hwan;Hong, Seung-Kweon;Myung, Ro-Hae
    • Journal of the Ergonomics Society of Korea
    • /
    • v.26 no.2
    • /
    • pp.149-155
    • /
    • 2007
  • Recently, various devices requiring text input, such as mobile phones, IPTV, PDAs, and UMPCs, are emerging, and the frequency of text entry on them is increasing. This study focused on the evaluation of Korean text entry interfaces. Various models for evaluating text entry interfaces have been proposed, most of them based on the human cognitive process for text input. The cognitive process is divided into two components: a visual scanning process and a finger movement process. The time spent on visual scanning is modeled by the Hick-Hyman law, while the time for finger movement is determined by Fitts' law. Three questions arise for model-based evaluation of text entry interfaces. First, do the human cognitive processes (visual scanning and finger movement) during text entry occur sequentially, as the models assume? Second, can real text input time be predicted by the previous models? Third, does the human cognitive process for text input vary according to users' text entry speed? A gap was found between the measured text input time and the predicted time, and the gap was larger for participants who entered text quickly. The reason was found by investigating eye-hand coordination during the text input process. Contrary to the assumption that a visual scan of the keyboard is followed by a finger movement, the experienced group performed visual scanning and finger movement simultaneously. Arrival lead time, the interval between the eye fixation on the target button and the button click, was measured to quantify the extent of temporal overlap between the two processes. In addition, the experienced group used fewer fixations during text entry than the novice group. This result will contribute to improving evaluation models for text entry interfaces.
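
The serial model the study questions predicts per-keystroke time as a visual-search term (Hick-Hyman law) plus a movement term (Fitts' law). The sketch below uses common Shannon-style formulations with illustrative coefficients; the a/b constants and the example key geometry are assumptions, not the paper's fitted values.

    import math

    def hick_hyman_time(n_choices, a=0.2, b=0.15):
        """Visual scanning time (s) for choosing among n equally likely keys."""
        return a + b * math.log2(n_choices + 1)

    def fitts_time(distance, width, a=0.1, b=0.12):
        """Finger movement time (s) to a key of given width at given distance (Shannon form)."""
        return a + b * math.log2(distance / width + 1)

    def serial_keystroke_time(n_choices, distance, width):
        """Serial (non-overlapping) model: scan the keyboard, then move the finger.
        The study found experienced typists overlap these stages (positive arrival
        lead time), so this serial sum over-predicts their text entry time."""
        return hick_hyman_time(n_choices) + fitts_time(distance, width)

    # Example: 12-key Korean keypad, target key 3 cm away and 1 cm wide
    print(round(serial_keystroke_time(12, 3.0, 1.0), 3))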