• Title/Summary/Keyword: gesture detecting

Search results: 40

Real-Time Recognition Method of Counting Fingers for Natural User Interface

  • Lee, Doyeob;Shin, Dongkyoo;Shin, Dongil
    • KSII Transactions on Internet and Information Systems (TIIS), v.10 no.5, pp.2363-2374, 2016
  • Communication occurs through verbal elements, which usually involve language, as well as non-verbal elements such as facial expressions, eye contact, and gestures. Among these non-verbal elements, gestures in particular are symbolic representations of physical, vocal, and emotional behaviors: they can be signals toward a target or expressions of internal psychological processes, rather than simply movements of the body or hands. Gestures with these properties have been the focus of much research on new interfaces in the NUI/NUX field. In this paper, we propose a method for detecting the hand region and recognizing the number of extended fingers, based on depth information and the geometric features of the hand, for application to an NUI/NUX. The hand region is detected using depth information provided by the Kinect system, and its contour is extracted with the Suzuki85 algorithm. Fingertips are then found at local maxima of the distance between the center of the hand region and three consecutive contour points, and the number of such maxima gives the finger count. The average recognition rate for the number of fingers is 98.6%, and the execution time of the algorithm is 0.065 ms. Although the method is fast and of low complexity, it shows a higher recognition rate and faster recognition speed than other methods. As an application example, the paper describes a Secret Door that accepts a password entered by holding up different numbers of fingers.
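
The distance-based fingertip test described in the abstract can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: `count_fingers`, the synthetic contour, and the `min_ratio` threshold are all assumptions; a real system would obtain the contour from Kinect depth data via the Suzuki85 algorithm.

```python
import math

def count_fingers(contour, center, min_ratio=1.3):
    """Count fingertips as local maxima of the contour-to-center distance.

    A contour point is taken as a fingertip when its distance to the hand
    center exceeds both neighbors' distances (three consecutive contour
    points) and is well above the average distance. Threshold illustrative.
    """
    dists = [math.dist(p, center) for p in contour]
    avg = sum(dists) / len(dists)
    n = len(dists)
    tips = 0
    for i in range(n):
        prev_d, cur_d, next_d = dists[i - 1], dists[i], dists[(i + 1) % n]
        if cur_d > prev_d and cur_d > next_d and cur_d > min_ratio * avg:
            tips += 1
    return tips

# Synthetic "hand": 12 contour points on a unit circle, with three
# points pushed outward to radius 2 to mimic extended fingers.
contour = []
for i in range(12):
    a = 2 * math.pi * i / 12
    r = 2.0 if i in (2, 6, 10) else 1.0
    contour.append((r * math.cos(a), r * math.sin(a)))

print(count_fingers(contour, (0.0, 0.0)))  # 3
```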

Vision-Based Finger Spelling Recognition for Korean Sign Language

  • Park Jun;Lee Dae-hyun
    • Journal of Korea Multimedia Society, v.8 no.6, pp.768-775, 2005
  • Because sign languages are the main means of communication among hearing-impaired people, there are communication difficulties between speech-oriented people and sign-language-oriented people. Automated sign-language recognition may resolve these problems. In sign languages, finger spelling is used to spell names and words that are not listed in the dictionary. There has been research on gesture and posture recognition using glove-based devices, but these devices are often expensive, cumbersome, and inadequate for recognizing elaborate finger spelling, and the use of colored patches or gloves also causes discomfort. In this paper, a vision-based finger spelling recognition system is introduced. In our method, captured hand region images are separated from the background using a skin detection algorithm, assuming that there are no skin-colored objects in the background. Hand postures are then recognized using a two-dimensional grid analysis method. The recognition system is not sensitive to the size or rotation of the input posture images. By optimizing the weights of the posture features with a genetic algorithm, the system achieves accuracy that matches systems using devices or colored gloves. Applied to Korean Sign Language finger spelling, it achieves better than 93% accuracy.
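
The two-dimensional grid analysis can be illustrated with a small sketch, assuming the hand has already been segmented into a binary mask. `grid_occupancy`, the grid size, and the toy mask are hypothetical; the paper additionally weights such features with a genetic algorithm before classification.

```python
def grid_occupancy(mask, rows, cols):
    """Divide a binary hand mask into a rows x cols grid and return the
    fraction of skin pixels in each cell as a flat feature vector."""
    h, w = len(mask), len(mask[0])
    feats = []
    for gr in range(rows):
        for gc in range(cols):
            r0, r1 = gr * h // rows, (gr + 1) * h // rows
            c0, c1 = gc * w // cols, (gc + 1) * w // cols
            cell = [mask[r][c] for r in range(r0, r1) for c in range(c0, c1)]
            feats.append(sum(cell) / len(cell))
    return feats

# Toy 4x4 mask: top half filled (e.g. the palm), bottom half empty.
mask = [[1, 1, 1, 1],
        [1, 1, 1, 1],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
print(grid_occupancy(mask, 2, 2))  # [1.0, 1.0, 0.0, 0.0]
```

Because each feature is a cell-wise ratio, the vector is largely unchanged when the mask is scaled, which matches the abstract's claim of size insensitivity.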


Design and Development of Virtual Reality Exergame using Smart mat and Camera Sensor (스마트매트와 카메라 센서를 이용한 가상현실 체험형 운동게임 시스템 설계 및 구현)

  • Seo, Duck Hee;Park, Kyung Shin;Kim, Dong Keun
    • Journal of the Korea Institute of Information and Communication Engineering, v.20 no.12, pp.2297-2304, 2016
  • In this study, we designed and developed a virtual reality exergame using a smart mat and a camera sensor for exercise in indoor environments. To detect the user's upper-body gestures, a Kinect-based gesture recognition algorithm that uses the angles between the user's joints was adopted, and a smart mat system with LED indicators and a Bluetooth communication module was developed to capture the user's stepping data during exercises that require both gestures and steps. Finally, the integrated virtual reality exergame system was implemented with the Unity 3D engine, offering various virtual avatar characters and entertainment content such as on-screen gesture guidelines and a scoring function. The system should be useful for elders who need to improve cognitive ability and sense of balance, and for general users who want to exercise in indoor settings such as the home or wellness centers.
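
The joint-angle cue used for gesture recognition can be sketched as follows. The function and the sample coordinates are illustrative assumptions; a real system would read 3-D joint positions from the Kinect skeleton stream.

```python
import math

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by the segments b->a and b->c,
    e.g. the elbow angle from shoulder, elbow and wrist positions."""
    ba = [ai - bi for ai, bi in zip(a, b)]
    bc = [ci - bi for ci, bi in zip(c, b)]
    dot = sum(x * y for x, y in zip(ba, bc))
    cos_t = dot / (math.hypot(*ba) * math.hypot(*bc))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

# Shoulder, elbow and wrist positions forming a right angle.
print(joint_angle((0, 1, 0), (0, 0, 0), (1, 0, 0)))  # 90.0
```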

Part-based Hand Detection Using HOG (HOG를 이용한 파트 기반 손 검출 알고리즘)

  • Baek, Jeonghyun;Kim, Jisu;Yoon, Changyong;Kim, Dong-Yeon;Kim, Euntai
    • Journal of the Korean Institute of Intelligent Systems, v.23 no.6, pp.551-557, 2013
  • In intelligent robot research, hand gesture recognition has been an important issue, and techniques that recognize simple gestures are already commercialized in smartphones and smart TVs for swiping the screen or controlling the volume. Robust hand detection is important and necessary for gesture recognition, but it is challenging because the hand has a complex shape and is hard to detect against cluttered backgrounds and under varying illumination. In this paper, we propose an efficient hand detection algorithm for detecting a pointing hand, in order to recognize the place the user points at. To minimize false detections, ROIs are generated within a compact search region using the result of skin color detection. The ROIs are verified by an HOG-SVM classifier, and the pointing direction is computed from the detection results for both the head-shoulder region and the hand. Experiments show that the proposed method performs well for hand detection.
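
The HOG features used for ROI verification can be illustrated with a single-cell sketch. `hog_cell`, the bin count, and the toy patch are assumptions; a practical detector computes many such cells and feeds the concatenated, block-normalized vector to a linear SVM.

```python
import math

def hog_cell(patch, bins=9):
    """Orientation histogram (0-180 degrees) for one cell: each interior
    pixel votes for its gradient orientation bin, weighted by gradient
    magnitude (central differences, unsigned orientation)."""
    hist = [0.0] * bins
    for r in range(1, len(patch) - 1):
        for c in range(1, len(patch[0]) - 1):
            gx = patch[r][c + 1] - patch[r][c - 1]
            gy = patch[r + 1][c] - patch[r - 1][c]
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            ang = math.degrees(math.atan2(gy, gx)) % 180
            hist[int(ang // (180 / bins)) % bins] += mag
    return hist

# A vertical intensity edge: all gradient energy lands in the first bin.
patch = [[0, 0, 10, 10]] * 4
h = hog_cell(patch)
print(h.index(max(h)))  # 0
```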

Design of Computer Vision Interface by Recognizing Hand Motion (손동작 인식에 의한 컴퓨터 비전 인터페이스 설계)

  • Yun, Jin-Hyun;Lee, Chong-Ho
    • Journal of the Institute of Electronics Engineers of Korea CI, v.47 no.3, pp.1-10, 2010
  • As various interface devices for computers are developed, a new HCI method using hand motion input is introduced. This interface is a vision-based approach that uses a single camera to detect and track hand movements. Previous research used only skin color to detect and track the hand location; in our design, skin color and shape information are considered together, which improves hand detection. We propose a primary orientation edge descriptor for obtaining edge information. The method uses only one hand model, so no training time is needed. The system consists of a detection part and a tracking part for efficient processing, and the tracking part is quite robust to the orientation of the hand. The system is applied to recognizing hand-written numbers in script style using the DNAC algorithm. The proposed algorithm reaches an 82% recognition ratio for detecting the hand region and 90% for recognizing a written number in script style.
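
One common way to obtain the skin-color cue that such trackers combine with shape information is a fixed per-pixel RGB rule. The sketch below uses a classic heuristic with illustrative thresholds; it is not the paper's detector.

```python
def is_skin(r, g, b):
    """Heuristic RGB skin test (illustrative thresholds): skin pixels
    tend to be reddish, fairly bright, and have a clear spread between
    the color channels."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

print(is_skin(200, 120, 90))  # True: a plausible skin tone
print(is_skin(50, 50, 200))   # False: saturated blue
```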

Accelerometer-based Gesture Recognition for Robot Interface (로봇 인터페이스 활용을 위한 가속도 센서 기반 제스처 인식)

  • Jang, Min-Su;Cho, Yong-Suk;Kim, Jae-Hong;Sohn, Joo-Chan
    • Journal of Intelligence and Information Systems, v.17 no.1, pp.53-69, 2011
  • Vision- and voice-based technologies are commonly utilized for human-robot interaction, but it is widely recognized that their performance deteriorates by a large margin in real-world situations due to environmental and user variance. Human users need to be very cooperative to get reasonable performance, which significantly limits the usability of vision- and voice-based human-robot interaction technologies. As a result, touch screens are still the major medium of human-robot interaction in real-world applications. To improve the usability of robots for various services, alternative interaction technologies should be developed to complement the problems of vision- and voice-based technologies. In this paper, we propose an accelerometer-based gesture interface as one such alternative, because accelerometers are effective in detecting the movements of the human body while their performance is not limited by environmental context such as lighting conditions or a camera's field of view. Moreover, accelerometers are widely available in many mobile devices. We tackle the problem of classifying the acceleration signal patterns of the 26 English alphabet letters, an essential repertoire for robot-based education services. Recognizing 26 English handwriting patterns from accelerometers is very difficult because of the large number of pattern classes and the complexity of each pattern; the most difficult similar problem previously undertaken was recognizing the acceleration patterns of 10 handwritten digits, and most earlier studies dealt with sets of 8-10 simple, easily distinguishable gestures for controlling home appliances, computer applications, robots, etc. Good features are essential for the success of pattern recognition.
To promote discriminative power over the complex alphabet patterns, we extracted "motion trajectories" from the input acceleration signal and used them as the main feature. Investigative experiments showed that trajectory-based classifiers performed 3-5% better than those using raw features, e.g., the acceleration signal itself or statistical figures. To minimize the distortion of trajectories, we applied a simple but effective set of smoothing and band-pass filters. It is well known that acceleration patterns for the same gesture differ greatly between performers; to tackle this, online incremental learning is applied to make the system adaptive to each user's distinctive motion properties. Our system is based on instance-based learning (IBL), where each training sample is memorized as a reference pattern. Brute-force incremental learning in IBL continuously accumulates reference patterns, which not only slows down classification but also degrades recall performance: as the number of reference patterns grows, some reference patterns contribute increasingly to false positive classification. We therefore devised an algorithm that optimizes the reference pattern set based on the positive and negative contribution of each pattern, run periodically to remove patterns with a very low positive contribution or a high negative contribution. Experiments were performed on 6,500 gesture patterns collected from 50 adults aged 30 to 50. Each letter was performed 5 times per participant using a Nintendo Wii remote, with the acceleration signal sampled at 100 Hz on 3 axes. The mean recall rate over all letters was 95.48%. Some letters recorded very low recall and very high pairwise confusion; the major confusion pairs were D (88%) and P (74%), I (81%) and U (75%), and N (88%) and W (100%).
Although W was recalled perfectly, it contributed heavily to the false positive classification of N. Compared with major previous results from VTT (96% for 8 control gestures), CMU (97% for 10 control gestures), and Samsung Electronics (97% for 10 digits and a control gesture), the performance of our system is superior considering the number of pattern classes and the complexity of the patterns. Using our gesture interaction system, we conducted two case studies of robot-based edutainment services, implemented on various robot platforms and mobile devices including the iPhone. The participating children showed improved concentration and active reaction to services using our gesture interface. To assess its effectiveness, the children were tested after experiencing an English teaching service: those who used the gesture-interface-based robot content scored 10% better than those taught conventionally. We conclude that the accelerometer-based gesture interface is a promising technology for real-world robot-based services and content, complementing the limits of today's conventional interfaces, e.g., touch screen, vision, and voice.
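
The instance-based classifier and the reference-pattern pruning can be sketched as follows. The data layout, the 1-NN distance measure, and the thresholds are assumptions for illustration; in the paper the positive and negative contributions are measured during use and the optimization runs periodically.

```python
import math

def classify(sample, refs):
    """1-NN over memorized reference patterns: each ref is a tuple of
    (feature_vector, label, positive_hits, negative_hits)."""
    best = min(refs, key=lambda ref: math.dist(sample, ref[0]))
    return best[1]

def prune(refs, min_pos=1, max_neg=3):
    """Drop reference patterns with very low positive contribution or a
    high negative contribution (thresholds illustrative)."""
    return [ref for ref in refs if ref[2] >= min_pos and ref[3] <= max_neg]

refs = [
    ([0.0, 0.0], "N", 5, 0),  # useful pattern, kept
    ([1.0, 1.0], "W", 4, 1),  # useful pattern, kept
    ([0.9, 0.9], "N", 0, 6),  # causes N/W confusion, pruned
]
print(classify([0.95, 0.95], prune(refs)))  # "W"
```

Without pruning, the confusing third pattern would be the nearest neighbor and the sample would be misread as "N", mirroring the N/W confusion reported above.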

Feature Extraction Based on Hybrid Skeleton for Human-Robot Interaction (휴먼-로봇 인터액션을 위한 하이브리드 스켈레톤 특징점 추출)

  • Joo, Young-Hoon;So, Jea-Yun
    • Journal of Institute of Control, Robotics and Systems, v.14 no.2, pp.178-183, 2008
  • Human motion analysis is researched as a new method for human-robot interaction (HRI) because it concerns key HRI techniques such as motion tracking and pose recognition. To analyze human motion, extracting features of the human body from sequential images plays an important role. After finding the silhouette of the human body in sequential images obtained by a CCD color camera, a skeleton model is frequently used to represent the human motion. In this paper, using the silhouette of the human body, we propose a feature extraction method based on a hybrid skeleton for detecting human motion. Finally, we show the effectiveness and feasibility of the proposed method through experiments.

Skin Color Based Hand and Finger Detection for Gesture Recognition in CCTV Surveillance (CCTV 관제에서 동작 인식을 위한 색상 기반 손과 손가락 탐지)

  • Kang, Sung-Kwan;Chung, Kyung-Yong;Rim, Kee-Wook;Lee, Jung-Hyun
    • The Journal of the Korea Contents Association, v.11 no.10, pp.1-10, 2011
  • In this paper, we propose a skin-color-based hand and finger detection technique for gesture recognition in CCTV surveillance. The aim is to present a methodology for hand detection and to propose a finger detection method. The detected hand and fingers can be used to implement a non-contact mouse, and the technology can be used to control home devices such as a home theater or television. Skin color is used to segment the hand region from the background, and a contour is extracted from the segmented hand; analyzing the contour gives the location of the fingertip. After detecting the fingertip, the system tracks it using the R channel alone and applies a differential image to remove useless image content, which makes the recognition of hand motions robust. Experiments on fingertip tracking and finger gesture recognition show an accuracy above 96%.
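
The differential-image step can be sketched as simple frame differencing on the R channel. The function name, threshold, and toy frames are illustrative.

```python
def diff_mask(prev_frame, cur_frame, thresh=20):
    """Binary motion mask: 1 where the R-channel value changed by more
    than `thresh` between consecutive frames, 0 elsewhere."""
    return [[1 if abs(cur - prev) > thresh else 0
             for prev, cur in zip(prev_row, cur_row)]
            for prev_row, cur_row in zip(prev_frame, cur_frame)]

prev_r = [[10, 10, 10],
          [10, 10, 10]]
cur_r = [[10, 10, 200],   # a bright fingertip moved into view
         [10, 200, 200]]
print(diff_mask(prev_r, cur_r))  # [[0, 0, 1], [0, 1, 1]]
```

Static background pixels cancel out, so only the moving fingertip survives into the mask.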

A Finger Counting Method for Gesture Recognition (제스처 인식을 위한 손가락 개수 인식 방법)

  • Lee, DoYeob;Shin, DongKyoo;Shin, DongIl
    • Journal of Internet Computing and Services, v.17 no.2, pp.29-37, 2016
  • Humans develop and maintain relationships through communication, which is largely divided into verbal and non-verbal communication. Verbal communication involves the use of language or characters, while non-verbal communication utilizes body language. We use gestures together with language in everyday conversation. Gestures belong to non-verbal communication and can take a variety of shapes and movements to deliver an opinion; for this reason, they are in the spotlight as a means of implementing an NUI/NUX in the fields of HCI and HRI. In this paper, using Kinect and the geometric features of the hand, we propose a method for detecting the hand area and recognizing the number of extended fingers. A Kinect depth image is used to detect the hand region, and the number of fingers is identified by comparing the distances between the outline and the central point of the hand. The average recognition rate of the proposed method is 98.5%. The proposed method would help enhance human-computer interaction by increasing the expressive range of gestures.
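
Hand-region detection from a Kinect depth image is often done by keeping only pixels within a near depth band, since the hand is usually the closest object to the sensor. The band limits below are illustrative assumptions, not the paper's values.

```python
def depth_segment(depth_mm, near=400, far=700):
    """Binary hand mask: 1 where the depth (in millimetres) falls inside
    the assumed hand band, 0 for the background."""
    return [[1 if near <= d <= far else 0 for d in row] for row in depth_mm]

# Toy depth map: background around 1.5 m, hand around 0.62-0.65 m.
depth = [[1500, 650, 620],
         [1480, 640, 1490]]
print(depth_segment(depth))  # [[0, 1, 1], [0, 1, 0]]
```

The resulting mask is what a contour extractor would then trace before the outline-to-center distance comparison.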

Study on Signal Processing Method for Extracting Hand-Gesture Signals Using Sensors Measuring Surrounding Electric Field Disturbance (주변 전기장 측정센서를 이용한 손동작 신호 검출을 위한 신호처리시스템 연구)

  • Cheon, Woo Young;Kim, Young Chul
    • Smart Media Journal, v.6 no.2, pp.26-32, 2017
  • In this paper, we implement an LED lighting control system based on a signal-detecting electric circuit, which is essential in NUI technology, using EPIC sensors that convert disturbances of the surrounding electric field into electric potential signals. We developed signal-detecting circuits that extract an individual signal from each EPIC sensor, whereas conventional EPIC-based development equipment provides only limited forms of signals. The signals extracted from our circuit improved both performance and flexibility in the feature extraction and pattern recognition stages. To demonstrate applicability to real systems, we designed a system that controls the brightness and on/off state of LED lights with four hand gestures, and we obtained faster pattern classification not only by developing an instruction system but also by using interface control signals.
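
A simple way to isolate gesture-induced disturbances from a slowly drifting electric-field signal is to subtract a long moving average from a short one, a crude band-pass. The window lengths below are illustrative and not the paper's filter design.

```python
def crude_bandpass(signal, k_short=2, k_long=4):
    """Difference of a short and a long moving average, computed at each
    index where both windows fit. Constant drift cancels to zero, while
    fast disturbances (gestures) survive."""
    out = []
    for i in range(k_long - 1, len(signal)):
        short = sum(signal[i - k_short + 1:i + 1]) / k_short
        long_ = sum(signal[i - k_long + 1:i + 1]) / k_long
        out.append(short - long_)
    return out

# A flat (gesture-free) signal produces no response.
print(crude_bandpass([5.0] * 8))  # [0.0, 0.0, 0.0, 0.0, 0.0]
```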