• Title/Summary/Keyword: Gesture Input


Gesture based Input Device: An All Inertial Approach

  • Chang Wook;Bang Won-Chul;Choi Eun-Seok;Yang Jing;Cho Sung-Jung;Cho Joon-Kee;Oh Jong-Koo;Kim Dong-Yoon
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 5, No. 3 / pp.230-245 / 2005
  • In this paper, we develop a gesture-based input device equipped with accelerometers and gyroscopes. The sensors measure the accelerations and angular velocities produced by the movement of the system while a user inputs gestures on a planar surface or in 3D space. The gyroscope measurements are integrated to give the orientation of the device, which is then used to compensate the accelerations. The compensated accelerations are doubly integrated to yield the position of the device. With this approach, a user's gesture input trajectories can be recovered without any external sensors. Three versions of the motion tracking algorithm are provided to cope with a wide spectrum of applications. A Bayesian-network-based recognition system then processes the recovered trajectories to identify the gesture class. Experimental results convincingly show the feasibility and effectiveness of the proposed gesture input device. To demonstrate practical use of the proposed input method, we implemented a prototype system, a gesture-based remote controller (Magic Wand).
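
The recovery pipeline the abstract describes (integrate angular rate to orientation, rotate body-frame accelerations into the world frame, then doubly integrate) can be sketched as below. This is a minimal 2D illustration, not the paper's algorithm: the sampling step, the planar-motion restriction, and all names are assumptions, and real use would also need gravity subtraction and drift correction.

```python
import numpy as np

def recover_trajectory(gyro_z, accel_body, dt=0.01):
    """Sketch of inertial dead reckoning in the plane.

    gyro_z     : (N,) angular rate about z (rad/s)
    accel_body : (N, 2) accelerations in the device frame (m/s^2)
    Returns (N, 2) positions in the world frame.
    """
    n = len(gyro_z)
    theta = np.cumsum(gyro_z) * dt            # orientation from gyro integration
    pos = np.zeros((n, 2))
    vel = np.zeros(2)
    p = np.zeros(2)
    for i in range(n):
        c, s = np.cos(theta[i]), np.sin(theta[i])
        R = np.array([[c, -s], [s, c]])       # body -> world rotation
        a_world = R @ accel_body[i]           # compensate for device orientation
        vel += a_world * dt                   # first integration: velocity
        p += vel * dt                         # second integration: position
        pos[i] = p
    return pos
```

With a constant 1 m/s² forward acceleration and no rotation, the recovered endpoint approaches the kinematic ½at² (Euler integration overshoots slightly).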

Gesture Input as an Out-of-band Channel

  • Chagnaadorj, Oyuntungalag;Tanaka, Jiro
    • Journal of Information Processing Systems / Vol. 10, No. 1 / pp.92-102 / 2014
  • In recent years, there has been growing interest in secure pairing, the establishment of a secure communication channel between two mobile devices. Various types of out-of-band (OOB) channels have been described, through which authentication data can be transferred under a user's control and involvement. However, none has become widely used, owing to a lack of adaptability to the variety of mobile devices. In this paper, we introduce a new OOB channel that uses accelerometer-based gesture input. The gesture-based OOB channel is suitable for all kinds of mobile devices, including input/output-constrained devices, as the accelerometer is small and incurs only a small computational overhead. We implemented and evaluated the channel using an Apple iPhone handset. The results demonstrate that the channel is viable, with completion times and error rates comparable to those of other OOB channels.

CNN-based Gesture Recognition using Motion History Image

  • Koh, Youjin;Kim, Taewon;Hong, Min;Choi, Yoo-Joo
    • 인터넷정보학회논문지 / Vol. 21, No. 5 / pp.67-73 / 2020
  • In this paper, we present a CNN-based gesture recognition approach that reduces the memory burden of the input data. Most neural-network-based gesture recognition methods use a sequence of frame images as input, which causes a memory burden problem. We instead use a motion history image to define a meaningful gesture. The motion history image is a grayscale image into which the temporal motion information is collapsed by synthesizing silhouette images of the user over the period of one meaningful gesture. We first summarize previous traditional and neural-network-based approaches to gesture recognition. We then explain the data preprocessing procedure for building the motion history image and the neural network architecture, with three convolution layers, for recognizing meaningful gestures. In the experiments, we trained five types of gestures: charging power, shooting left, shooting right, kicking left, and kicking right. Recognition accuracy was measured while adjusting the number of filters in each layer of the proposed network. Using a 240 × 320 grayscale image to define one meaningful gesture, we achieved a gesture recognition accuracy of 98.24%.
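
A motion history image of the kind described above can be built by fading older silhouettes while stamping the newest one at full intensity. This is a generic sketch of the MHI idea, not the paper's preprocessing code; the intensity and decay constants are illustrative assumptions.

```python
import numpy as np

def motion_history_image(silhouettes, tau=255, decay=32):
    """Collapse a sequence of binary silhouette masks into one
    grayscale motion history image (MHI).

    silhouettes : iterable of (H, W) boolean arrays, oldest first
    tau         : intensity assigned to the most recent motion
    decay       : amount by which older motion fades per frame
    """
    mhi = None
    for sil in silhouettes:
        if mhi is None:
            mhi = np.zeros(sil.shape, dtype=np.float32)
        mhi = np.maximum(mhi - decay, 0)    # fade previous motion
        mhi[sil] = tau                      # stamp current silhouette
    return mhi.astype(np.uint8)
```

The resulting single grayscale frame encodes where motion occurred and, through brightness, how recently, which is what lets one image replace a whole frame sequence as CNN input.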

Effect of Input Data Video Interval and Input Data Image Similarity on Learning Accuracy in 3D-CNN

  • Kim, Heeil;Chung, Yeongjee
    • International Journal of Internet, Broadcasting and Communication / Vol. 13, No. 2 / pp.208-217 / 2021
  • The 3D-CNN is a deep learning technique for learning time-series data. However, this three-dimensional learning generates many parameters, requiring high-performance hardware and significantly affecting learning speed. We use a 3D-CNN to learn hand gestures, find the parameters that yield the highest accuracy, and then analyze how the accuracy of the 3D-CNN varies with changes to the input data, without any structural changes to the network. First, we choose the interval of the input data, which adjusts the ratio of the stop interval to the gesture interval. Second, we measure and normalize the similarity of images through inter-class 2D cross-correlation analysis to obtain the corresponding inter-frame mean value. The experiments demonstrate that changes in the input data affect learning accuracy without structural changes to the 3D-CNN. In this paper, we propose these two methods of changing the input data, and the experimental results show that the input data can affect the accuracy of the model.
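
The image-similarity measure mentioned above can be illustrated with a zero-normalized 2D cross-correlation at zero lag, averaged over consecutive frames. This is a sketch of the general technique, assuming the paper's zero-lag usage; the function names are mine, not the authors'.

```python
import numpy as np

def frame_similarity(a, b):
    """Zero-normalized cross-correlation of two frames at zero lag:
    1.0 for identical frames, near 0 for unrelated ones."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 1.0

def mean_sequence_similarity(frames):
    """Mean similarity over consecutive frame pairs of one clip,
    a scalar summary of how static the clip is."""
    sims = [frame_similarity(frames[i], frames[i + 1])
            for i in range(len(frames) - 1)]
    return sum(sims) / len(sims)
```

A clip dominated by the stop interval scores near 1.0 (frames barely change), while vigorous gesture intervals pull the mean down, which is how such a score can characterize an input clip.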

연속DP와 칼만필터를 이용한 손동작의 추적 및 인식 (Tracking and Recognizing Hand Gestures using Kalman Filter and Continuous Dynamic Programming)

  • 문인혁;금영광
    • 대한전자공학회:학술대회논문집 / Proceedings of the 2002 Summer Annual Conference (3) / pp.13-16 / 2002
  • This paper proposes a method to track hand gestures and recognize gesture patterns using a Kalman filter and continuous dynamic programming (CDP). The positions of the hands are predicted by the Kalman filter, and the pixels corresponding to the hands are extracted by a skin-color filter. The center of gravity of the hands serves as the input pattern vector. The input gesture is then recognized by matching it against the reference gesture patterns using CDP. Experimental results on recognizing a circle-shaped gesture and intention gestures such as “Come on” and “Bye-bye” show that the proposed method is feasible for hand gesture-based human-computer interaction.
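
The Kalman prediction step used for hand tracking can be sketched with a standard constant-velocity filter on one coordinate. This is a textbook predict/update cycle under assumed noise parameters, not the paper's filter; the state layout and constants are illustrative.

```python
import numpy as np

def kalman_step(x, P, z, dt=1.0, q=1e-3, r=1.0):
    """One predict/update cycle of a constant-velocity Kalman filter
    tracking a 1D hand coordinate (state = [position, velocity])."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    # predict where the hand will be
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the measured hand position z
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

In a tracker, the predicted position narrows the image region searched by the skin-color filter; the filtered center of gravity then feeds the CDP matcher frame by frame.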


제스처 형태의 한글입력을 위한 오토마타에 관한 연구 (A Study on the Automata for Hangul Input of Gesture Type)

  • 임양원;임한규
    • 한국산업정보학회논문지 / Vol. 16, No. 2 / pp.49-58 / 2011
  • As smart devices with touchscreens have become widespread, Hangul input methods have diversified. In this paper, we survey and analyze Hangul input methods suitable for smart devices and, using automata theory, present a simple and efficient automaton that can be used in a gesture-type Hangul input method suited to touch UIs.

제스쳐 허용 전자 잉크 에디터의 개발 (Development of Gesture-allowed Electronic Ink Editor)

  • 조미경;오암석
    • 한국멀티미디어학회논문지 / Vol. 6, No. 6 / pp.1054-1061 / 2003
  • Electronic ink data is a multimedia data type that emerged with the development of pen-based computers, such as PDAs, that use a stylus pen as the primary input tool. Recently, advances in and the spread of pen-based mobile computers have increased the need for electronic ink data processing techniques. In this paper, we study the techniques required to develop an electronic ink editor that allows pen gestures. Gestures and ink data are among the most distinctive features of pen-based user interfaces, but they have not yet been sufficiently studied. We propose a new gesture recognition algorithm for distinguishing pen gestures and a method of segmenting ink data for executing gesture commands, and using these methods we developed GesEdit, a gesture-allowed electronic ink editor. The gesture recognition algorithm is based on eight features of the input strokes, and the method of segmenting electronic ink data into GC (gesture component) units uses convex hulls and input time. Various experiments with ten subjects showed that nine gestures were recognized with an average recognition rate of 99.6%.
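
The convex hull underlying the GC segmentation can be computed with Andrew's monotone chain algorithm; a sketch follows. How GesEdit combines hulls with input time to group strokes is not detailed in the abstract, so only the hull step is shown here.

```python
def convex_hull(points):
    """Andrew's monotone chain: convex hull of 2D stroke points,
    returned as hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h

    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]  # drop duplicated endpoints
```

Interior ink points fall inside the hull and drop out, leaving a compact polygon per stroke that can be tested for overlap with neighboring strokes when forming gesture components.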


Hand Gesture Recognition Using an Infrared Proximity Sensor Array

  • Batchuluun, Ganbayar;Odgerel, Bayanmunkh;Lee, Chang Hoon
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 15, No. 3 / pp.186-191 / 2015
  • Hand gestures are the most common tool used to interact with and control various electronic devices. In this paper, we propose a novel hand gesture recognition method using fuzzy-logic-based classification with a new type of sensor array. In some cases, the feature patterns of hand gesture signals cannot be uniquely distinguished and recognized when people perform the same gesture in different ways. Moreover, differences in hand shape and in the skeletal articulation of the arm influence the process. Manifold features were extracted, and efficient features, which make gestures distinguishable, were selected. However, similar feature patterns exist across different hand gestures, and fuzzy logic is applied to classify them. Fuzzy rules are defined based on the many feature patterns of the input signal. An adaptive neuro-fuzzy inference system was used to generate fuzzy rules automatically for classifying hand gestures using a small number of feature patterns as input. In addition, emotion expression was performed after hand gesture recognition for the resulting human-robot interaction. Our proposed method was tested on many hand gesture datasets and validated with different evaluation metrics. Experimental results show that, compared with other existing methods, our method detects more hand gestures in real time, with robust hand gesture recognition and corresponding emotion expressions.
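
The fuzzy classification step can be illustrated in miniature: each gesture gets a membership function over a feature value, and the rule firing most strongly wins. This toy uses one feature and hand-written triangular memberships; the real system learns its rules via ANFIS from many features, so everything below is an illustrative assumption.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify(feature, rules):
    """Return the gesture whose fuzzy rule fires most strongly.

    rules : {gesture_name: (a, b, c)} triangular membership per gesture
    """
    scores = {g: tri(feature, *abc) for g, abc in rules.items()}
    return max(scores, key=scores.get)
```

Overlapping memberships are the point: a feature value falling between two peaks still yields a graded decision rather than a hard, brittle threshold.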

바디 제스처 인식을 위한 기초적 신체 모델 인코딩과 선택적 / 비동시적 입력을 갖는 병렬 상태 기계 (Primitive Body Model Encoding and Selective / Asynchronous Input-Parallel State Machine for Body Gesture Recognition)

  • 김주창;박정우;김우현;이원형;정명진
    • 로봇학회논문지 / Vol. 8, No. 1 / pp.1-7 / 2013
  • Body gesture recognition has been an active research field for human-robot interaction (HRI). Most conventional body gesture recognition algorithms use the Hidden Markov Model (HMM) to model gestures, which have spatio-temporal variability. However, HMM-based algorithms have difficulty excluding meaningless gestures. In addition, conventional body gesture recognition algorithms must perform gesture segmentation first and then send the extracted gesture to the HMM for recognition. This separated pipeline causes a time delay between two consecutive gestures to be recognized, making the system inappropriate for continuous gesture recognition. To overcome these two limitations, this paper proposes primitive body model encoding, which performs spatio-temporal quantization of motions from a human body model and encodes them into predefined primitive codes for each link of the body model, together with a Selective/Asynchronous Input-Parallel State Machine (SAI-PSM) for multiple simultaneous gesture recognition. The experimental results show that the proposed gesture recognition system using primitive body model encoding and SAI-PSM can exclude meaningless gestures well from continuous body model data while performing multiple simultaneous gesture recognition, without losing recognition rates compared to the previous HMM-based work.
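
The primitive encoding idea (quantizing each link's motion into a small code alphabet) can be sketched as below. The paper's actual alphabet, thresholds, and link set are not given in the abstract, so this eight-direction-plus-rest scheme is purely an illustrative assumption.

```python
import math

def encode_primitive(dx, dy, speed_thresh=0.05):
    """Quantize one body link's frame-to-frame displacement into a
    primitive code: '0' for no motion, else a direction code '1'..'8'
    ('1' = rightward, counter-clockwise in 45-degree sectors)."""
    if math.hypot(dx, dy) < speed_thresh:
        return '0'                            # below threshold: at rest
    angle = math.atan2(dy, dx) % (2 * math.pi)
    sector = int((angle + math.pi / 8) // (math.pi / 4)) % 8
    return str(sector + 1)
```

A per-link stream of such codes is exactly the kind of compact, discrete input a state machine can consume directly, which is what lets segmentation and recognition run together instead of as separate stages.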

FMCW 레이다 기반의 포인트 클라우드와 LSTM을 이용한 자동 핸드 제스처 영역 추출 및 인식 기법 (Automatic hand gesture area extraction and recognition technique using FMCW radar based point cloud and LSTM)

  • 라승탁;이승호
    • 전기전자학회논문지 / Vol. 27, No. 4 / pp.486-493 / 2023
  • In this paper, we propose an automatic hand gesture area extraction and recognition technique using FMCW radar-based point clouds and an LSTM. The proposed technique differs from existing methods in the following ways. First, unlike methods that use 2D images such as range-Doppler maps as input vectors, a time-series point cloud input vector is intuitive input data that captures motion over time in front of the radar in coordinate form. Second, because the input vector is small, the deep learning model used for recognition can be designed to be lightweight. The proposed technique proceeds as follows. Using the range, velocity, and angle information measured by the FMCW radar, a point cloud containing x, y, z coordinates and Doppler velocity information is constructed. The gesture area is extracted automatically by identifying the start and end points of the gesture from the Doppler points obtained from the velocity information. The time-series point cloud corresponding to the extracted gesture area is then used for training and recognition with the LSTM deep learning model employed in this paper. To evaluate the objective reliability of the proposed technique, we conducted experiments computing the MAE against other deep learning models and the recognition rate against existing techniques. The time-series point cloud input vector + LSTM model achieved an MAE of 0.262 and a recognition rate of 97.5%. Since a lower MAE and a higher recognition rate indicate better results, these figures demonstrate the effectiveness of the proposed technique.
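
The Doppler-based start/end detection described above can be sketched as finding the sustained run of frames with significant Doppler magnitude. The threshold, the minimum run length, and the per-frame summary statistic are illustrative assumptions, not values from the paper.

```python
import numpy as np

def extract_gesture_area(doppler, v_thresh=0.15, min_len=3):
    """Find (start, end) frame indices of a hand gesture from the
    per-frame mean absolute Doppler velocity of the point cloud.

    The gesture is taken as the longest contiguous run of frames whose
    Doppler magnitude exceeds v_thresh; runs shorter than min_len are
    treated as noise. Returns None if no such run exists.
    """
    active = np.abs(np.asarray(doppler, dtype=float)) > v_thresh
    best, start = None, None
    for i, a in enumerate(list(active) + [False]):  # sentinel flushes last run
        if a and start is None:
            start = i
        elif not a and start is not None:
            if i - start >= min_len and (
                    best is None or (i - 1) - start > best[1] - best[0]):
                best = (start, i - 1)
            start = None
    return best
```

Only the frames inside the returned window are packed into the time-series point cloud fed to the LSTM, which is what makes the segmentation automatic rather than manually clipped.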