• Title/Summary/Keyword: Hand Feature Extraction

SVM-Based EEG Signal for Hand Gesture Classification (서포트 벡터 머신 기반 손동작 뇌전도 구분에 대한 연구)

  • Hong, Seok-min;Min, Chang-gi;Oh, Ha-Ryoung;Seong, Yeong-Rak;Park, Jun-Seok
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.29 no.7 / pp.508-514 / 2018
  • An electroencephalogram (EEG) records the electrical activity generated by interactions among brain cells, and can therefore capture the brain activity caused by hand movement. In this study, a 16-channel EEG was used to measure the signals generated before and after hand movement. The measured data were classified with a supervised learning model, a support vector machine (SVM). To shorten the SVM's training time, a filtering-based feature extraction and vector dimension reduction method is proposed that compresses the EEG information while minimizing the loss of motion-related information. The classification results showed an average accuracy of 72.7% in distinguishing the sitting position from hand movement at the frontal-lobe electrodes.
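The filtering-based dimension reduction described in the abstract above can be sketched as a moving-average filter followed by downsampling; the window and step sizes here are illustrative assumptions, not the paper's actual parameters.

```python
def reduce_eeg_channel(samples, window=4, step=4):
    """Compress one EEG channel: smooth with a moving average,
    then keep every `step`-th value to shrink the feature vector."""
    smoothed = [
        sum(samples[i:i + window]) / window
        for i in range(0, len(samples) - window + 1)
    ]
    return smoothed[::step]

# A 16-sample channel collapses to a 4-value feature vector.
channel = [1.0, 2.0, 3.0, 4.0] * 4
features = reduce_eeg_channel(channel)
print(len(features))  # 4 features from 16 raw samples
```

The smoothing suppresses high-frequency noise while the downsampling shrinks the vector the SVM has to train on, which is the trade-off the abstract describes.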

EEG Feature Classification for Precise Motion Control of Artificial Hand (의수의 정확한 움직임 제어를 위한 동작 별 뇌파 특징 분류)

  • Kim, Dong-Eun;Yu, Je-Hun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.25 no.1 / pp.29-34 / 2015
  • Brain-computer interfaces (BCIs) are being studied in various application fields to make life more convenient. The purpose of this study is to investigate changes in electroencephalography (EEG) signals for the precise motion control of a robot or an artificial arm. Three subjects participated in this experiment and performed three tasks: grip, move, and relax. Features were extracted from the acquired EEG data using two feature extraction algorithms (power spectrum analysis and multi-common spatial pattern), and a support vector machine (SVM) was applied to the extracted features for classification. Classification accuracy was highest for the Grip class in two subjects. The results of this research are expected to be useful for patients who require an EEG-controlled prosthetic limb.

Sign Language Translation Using Deep Convolutional Neural Networks

  • Abiyev, Rahib H.;Arslan, Murat;Idoko, John Bush
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.2 / pp.631-653 / 2020
  • Sign language is a natural, visually oriented and non-verbal communication channel between people that facilitates communication through facial/bodily expressions, postures and a set of gestures. It is basically used for communication with people who are deaf or hard of hearing. In order to understand such communication quickly and accurately, the design of a successful sign language translation system is considered in this paper. The proposed system includes object detection and classification stages. First, the Single Shot MultiBox Detector (SSD) architecture is utilized for hand detection; then a deep learning structure based on Inception v3 plus a support vector machine (SVM), which combines the feature extraction and classification stages, is proposed to translate the detected hand gestures. A sign language fingerspelling dataset is used for the design of the proposed model. The obtained results and comparative analysis demonstrate the efficiency of the proposed hybrid structure for sign language translation.

Combining Dynamic Time Warping and Single Hidden Layer Feedforward Neural Networks for Temporal Sign Language Recognition

  • Thi, Ngoc Anh Nguyen;Yang, Hyung-Jeong;Kim, Sun-Hee;Kim, Soo-Hyung
    • International Journal of Contents / v.7 no.1 / pp.14-22 / 2011
  • Temporal Sign Language Recognition (TSLR) from hand motion is an active area of gesture recognition research aimed at facilitating efficient communication with deaf people. TSLR systems consist of two stages: a motion sensing step, which extracts useful features from the signer's motion, and a classification process, which classifies these features as a performed sign. This work focuses on two research problems: the unknown, time-varying nature of sign language signals in the feature extraction stage, and the computational complexity and time consumption of the classification stage caused by a very large database of sign sequences. In this paper, we propose combining Dynamic Time Warping (DTW) with single hidden layer feedforward neural networks (SLFNs) trained by the Extreme Learning Machine (ELM) to cope with these limitations. DTW has the advantage over other approaches that it can align time series of different lengths to the same prior size, while ELM is a useful technique for classifying the warped features. Our experiment demonstrates the efficiency of the proposed method, with recognition accuracy up to 98.67%. The proposed approach can be generalized to more detailed measurements so as to recognize hand gestures, body motion and facial expressions.
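The DTW alignment step the abstract relies on can be sketched as the standard dynamic-programming recurrence; this is the textbook algorithm, not the authors' code.

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two
    1-D sequences of possibly different lengths."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j]: best alignment cost of a[:i] and b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Identical shapes performed at different speeds align at zero cost,
# which is why DTW handles time-varying sign motion.
print(dtw_distance([0, 1, 2, 3], [0, 1, 1, 2, 2, 3]))  # 0.0
```

In the paper's pipeline the warping path produced by this table is what normalizes variable-length sign sequences to a common size before the ELM-trained classifier sees them.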

Feature Extraction for Recognition Rate Improvement of Handwritten Numerals (필기체 숫자 인식률 향상을 위한 특징추출)

  • Koh, Chan;Lee, Chang-In
    • The Journal of Korean Institute of Communications and Information Sciences / v.22 no.10 / pp.2102-2111 / 1997
  • After pre-processing, each handwritten numeral is projected onto a 3D space and an index is built by tracing the numeral. The distance between every pair of extracted features is computed. A statistical histogram of the normalized data is used as the input to the recognition process in order to adapt to variation. One hundred numeral patterns were used to build a standard feature map, and another 100 patterns were used for the recognition experiment. As a result, the recognition rate was 93.5% with a threshold of 0.20 and 97.5% with a threshold of 0.25.

Hand Region Feature Point Extraction Using Vision (비젼을 이용한 손 영역 특징점 추출)

  • Jeong, Hyun-Suk;Oh, Myung-Jea;Joon, Young-Hoon;Park, Jin-Bae
    • Proceedings of the KIEE Conference / 2009.07a / pp.1798-1799 / 2009
  • In this paper, we propose a robust method for extracting feature points of the hand region. The proposed method generates an HCbCr color model and applies a fuzzy color filter to extract hand candidate regions; a labeling technique is then used to extract the final hand region. The silhouette of the extracted hand region is obtained, and a histogram technique is applied to find the center of gravity (COG) within the hand region. For feature point extraction, first-stage feature points are extracted through a pre-processing step using the Canny edge, chain code, and Douglas-Peucker (DP) techniques. The extracted first-stage feature points are then passed to the Convex Hull technique to obtain the final feature points of the hand region. Finally, experiments in complex and varied indoor environments demonstrate the applicability of the proposed method.
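The Douglas-Peucker step named in the abstract simplifies a hand contour before feature points are taken; the sketch below is the standard recursive algorithm on a hypothetical outline, not the authors' implementation.

```python
def douglas_peucker(points, epsilon):
    """Simplify a polyline: keep an interior point only if it deviates
    from the chord through the endpoints by more than epsilon."""
    def perp_dist(p, a, b):
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == dy == 0:
            return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
        return abs(dy * px - dx * py + bx * ay - by * ax) / (dx * dx + dy * dy) ** 0.5

    # Find the point farthest from the chord joining the endpoints.
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax > epsilon:  # recurse on both halves, keeping the split point
        left = douglas_peucker(points[:index + 1], epsilon)
        right = douglas_peucker(points[index:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]

# Nearly collinear contour points are dropped; the corner survives.
outline = [(0, 0), (1, 0.05), (2, 0.0), (3, 2.0)]
print(douglas_peucker(outline, 0.5))
```

Corners such as fingertips and valleys survive the simplification, which is what makes the surviving points useful candidates for the subsequent Convex Hull stage.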

Improved Feature Extraction of Hand Movement EEG Signals based on Independent Component Analysis and Spatial Filter

  • Nguyen, Thanh Ha;Park, Seung-Min;Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.22 no.4 / pp.515-520 / 2012
  • In a brain-computer interface (BCI) system, the most important part is classifying human thoughts so that they can be translated into commands; the more accurate the classification results, the more effective the BCI system. To increase the quality of the BCI system, we propose reducing noise and artifacts in the recorded data before analysis. We used auditory stimuli instead of visual ones to eliminate eye movement, unwanted visual activation, and gaze control. We applied the independent component analysis (ICA) algorithm to purify the sources that construct the raw signals. One of the best-known spatial filters in the BCI context is common spatial patterns (CSP), which maximizes the variance of one class while minimizing that of the other using covariance matrices. ICA and CSP thus act as a raw filter and a refinement, respectively, which improves the classification results of linear discriminant analysis (LDA).

A Deep Learning Approach for Classification of Cloud Image Patches on Small Datasets

  • Phung, Van Hiep;Rhee, Eun Joo
    • Journal of information and communication convergence engineering / v.16 no.3 / pp.173-178 / 2018
  • Accurate classification of cloud images is a challenging task. Almost all existing methods rely on hand-crafted feature extraction, whose limitation is low discriminative power. In recent years, deep learning with convolutional neural networks (CNNs), which can extract features automatically, has achieved promising results in many computer vision and image understanding fields. However, deep learning approaches usually need large datasets. This paper proposes a deep learning approach for the classification of cloud image patches on small datasets. First, we design a deep learning model suitable for small datasets using a CNN, and then we apply data augmentation and dropout regularization techniques to increase the generalization of the model. The experiments for the proposed approach were performed on the small SWIMCAT dataset with k-fold cross-validation. The experimental results demonstrated perfect classification accuracy for most classes on every fold, and confirmed both the high accuracy and the robustness of the proposed model.
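The data augmentation idea used above can be illustrated with a single label-preserving transform; the horizontal flip and the copy count below are generic assumptions, not the paper's augmentation recipe.

```python
import random

def augment(image, rng):
    """Randomly flip a 2-D image (list of rows) left-right:
    a label-preserving transform that enlarges a small dataset."""
    if rng.random() < 0.5:
        return [row[::-1] for row in image]
    return image

def expand_dataset(images, copies, seed=0):
    """Create `copies` augmented variants per original image."""
    rng = random.Random(seed)
    return [augment(img, rng) for img in images for _ in range(copies)]

patch = [[1, 2], [3, 4]]
print(len(expand_dataset([patch], copies=4)))  # 4 augmented patches
```

A cloud patch flipped left-right is still the same cloud class, so each original contributes several training samples, which is what lets a CNN generalize from a small dataset.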

A Study on Face Recognition based on Partial Least Squares (부분 최소제곱법을 이용한 얼굴 인식에 관한 연구)

  • Lee Chang-Beom;Kim Do-Hyang;Baek Jang-Sun;Park Hyuk-Ro
    • The KIPS Transactions:PartB / v.13B no.4 s.107 / pp.393-400 / 2006
  • There are many feature extraction methods for face recognition. A new method is needed to overcome the small-sample problem, in which the number of feature variables is larger than the sample size for face image data. This paper considers partial least squares (PLS) as a new dimension reduction technique for the feature vector. Principal component analysis (PCA), a conventional dimension reduction method, selects the components with maximum variability irrespective of class information, so PCA does not necessarily extract features that are important for discriminating classes. PLS, on the other hand, constructs its components so that their correlation with the class variable is maximized; therefore, PLS components are more predictive than PCA components in classification. The experimental results on the Manchester and ORL databases show that PLS is to be preferred over PCA when classification is the goal and dimension reduction is needed.
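The contrast the abstract draws can be seen in how the first PLS weight vector is built: on centered data it is proportional to the covariance of each feature with the class label, so label information enters directly. This is a minimal PLS1 sketch under toy data, not the authors' implementation.

```python
def pls1_first_weights(X, y):
    """First PLS weight vector w proportional to X^T y on centered data:
    each feature is weighted by its covariance with the label, so the
    direction is driven by class information (unlike PCA)."""
    n, p = len(X), len(X[0])
    xm = [sum(row[j] for row in X) / n for j in range(p)]
    ym = sum(y) / n
    w = [sum((X[i][j] - xm[j]) * (y[i] - ym) for i in range(n))
         for j in range(p)]
    norm = sum(v * v for v in w) ** 0.5
    return [v / norm for v in w]

# Feature 0 tracks the label; feature 1 is high-variance noise.
# PLS weights feature 0 heavily, where PCA would favor feature 1.
X = [[0.0, 5.0], [0.1, -5.0], [1.0, 5.0], [1.1, -5.0]]
y = [0.0, 0.0, 1.0, 1.0]
w = pls1_first_weights(X, y)
print(abs(w[0]) > abs(w[1]))  # True
```

PCA's first component would point along the noisy second feature because its variance dominates, which is exactly the failure mode the abstract attributes to variance-only dimension reduction.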

Content-based Image Retrieval using Variable Region Color (가변 영역 색상을 이용한 내용기반 영상검색)

  • Kim Dong-Woo;Song Young-Jun;Kwon Dong-Jin;Ahn Jae-Hyeong
    • Journal of the Korea Academia-Industrial cooperation Society / v.6 no.5 / pp.367-372 / 2005
  • In this paper, we propose a method of content-based image retrieval using variable regions. Content-based image retrieval mostly relies on color histograms, but existing color histogram methods suffer reduced accuracy because of quantization error and the absence of spatial information. To overcome the former, we convert the color information to HSV space, quantize the hue component, which carries the pure color information, and compute its histogram. To address the absence of spatial information, we select object regions by considering color features and region correlation: the selected object regions keep their original size, while non-object regions are merged into a single region. After selecting the variable regions, retrieval is performed using color features. Experimental results show that the proposed method improves average precision by 10%.
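The hue quantization step described above can be sketched with the standard library's RGB-to-HSV conversion; the bin count and sample pixels are illustrative assumptions, not the paper's settings.

```python
import colorsys

def hue_histogram(pixels, bins=8):
    """Quantize the hue of each RGB pixel (channel values 0-255) into
    `bins` bins and return a normalized histogram, discarding the
    saturation/value channels so only the pure-color component remains."""
    hist = [0] * bins
    for r, g, b in pixels:
        h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        hist[min(int(h * bins), bins - 1)] += 1
    total = len(pixels)
    return [c / total for c in hist]

# Two reddish and two greenish pixels split evenly across hue bins.
pixels = [(255, 0, 0), (250, 5, 5), (0, 255, 0), (5, 250, 5)]
print(hue_histogram(pixels, bins=4))
```

Because slightly different reds land in the same hue bin, the histogram is less sensitive to the quantization error the abstract identifies in plain RGB histograms.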