• Title/Summary/Keyword: Body gesture

The Study on Gesture Recognition for Fighting Games based on Kinect Sensor (키넥트 센서 기반 격투액션 게임을 위한 제스처 인식에 관한 연구)

  • Kim, Jong-Min; Kim, Eun-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.10a / pp.552-555 / 2018
  • This study developed a gesture recognition method using the Kinect sensor and proposed a fighting-action control interface. To extract the pattern features of a gesture, it uses body proportions relative to the shoulders rather than absolute positions: even when the same gesture is performed, the joint coordinates captured by the Kinect sensor can differ with the length and direction of the arm. The study then applied principal component analysis for gesture modeling and analysis, which reduces the effect of data errors and yields dimensionality reduction. In addition, it proposed a modified matching algorithm to relax the motion restrictions of the gesture recognition system.
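
As a rough sketch of the idea above, shoulder-relative normalization followed by PCA might look like this (a minimal illustration under assumed joint representations, not the authors' implementation):

```python
import numpy as np

def normalize_joints(joints, l_shoulder, r_shoulder):
    """Express joint positions relative to the shoulder midpoint,
    scaled by shoulder width, so features depend on body proportions
    rather than absolute Kinect coordinates."""
    center = (l_shoulder + r_shoulder) / 2.0
    width = np.linalg.norm(r_shoulder - l_shoulder)
    return (joints - center) / width

def pca_reduce(features, k=3):
    """Project feature vectors onto the top-k principal components,
    reducing dimensionality and damping sensor noise."""
    centered = features - features.mean(axis=0)
    # Eigen-decomposition of the covariance matrix
    vals, vecs = np.linalg.eigh(np.cov(centered, rowvar=False))
    top = vecs[:, np.argsort(vals)[::-1][:k]]
    return centered @ top

# Toy usage: a hand position relative to assumed shoulder coordinates
l_sh, r_sh = np.array([-0.2, 0.0]), np.array([0.2, 0.0])
hand = np.array([[0.5, 0.3]])
print(normalize_joints(hand, l_sh, r_sh))  # [[1.25 0.75]]

# Toy usage: 20 frames of 6 flattened joint coordinates
frames = np.random.default_rng(0).normal(size=(20, 6))
print(pca_reduce(frames, k=3).shape)  # (20, 3)
```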

Gesture Communications Between Different Avatar Models Using A FBML (FBML을 이용한 서로 다른 아바타 모델간의 제스처 통신)

  • Lee, Yong-Hu; Kim, Sang-Woon; Aoki, Yoshinao
    • Journal of the Institute of Electronics Engineers of Korea CI / v.41 no.5 / pp.41-49 / 2004
  • A sign-language communication system has been proposed as a means of overcoming the linguistic barrier between different languages in Internet cyberspace. However, that system supports only avatars sharing the same model structure, so avatars of different models cannot communicate. In this paper, we therefore propose a new gesture communication system in which different avatar models can communicate with each other using an FBML (Facial Body Markup Language). Using the FBML, we define a standard document format for the messages transferred between models; the document includes the action units of facial expression and the joint angles of gesture animation. The proposed system is implemented with Visual C++ and Open Inventor on Windows platforms. The experimental results demonstrate that the method could serve as an efficient means of overcoming the linguistic problem.
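
The abstract does not give the FBML schema, so the element and attribute names below are purely hypothetical; the sketch only shows the idea of one shared document carrying facial action units and gesture joint angles that each avatar model maps onto its own skeleton:

```python
import xml.etree.ElementTree as ET

# Hypothetical FBML-like message: element and attribute names are
# assumptions for illustration; the paper's actual schema may differ.
fbml = ET.Element("fbml")
face = ET.SubElement(fbml, "face")
ET.SubElement(face, "actionUnit", id="12", intensity="0.8")  # facial expression unit
gesture = ET.SubElement(fbml, "gesture")
ET.SubElement(gesture, "joint", name="right_elbow", angle="45.0")  # animation joint angle

doc = ET.tostring(fbml, encoding="unicode")
print(doc)
```

A receiving avatar would parse this shared document and apply the action units and joint angles to its own, possibly different, model structure.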

A Study on Gesture Recognition Using Principal Factor Analysis (주 인자 분석을 이용한 제스처 인식에 관한 연구)

  • Lee, Yong-Jae; Lee, Chil-Woo
    • Journal of Korea Multimedia Society / v.10 no.8 / pp.981-996 / 2007
  • In this paper, we describe a method that recognizes gestures by extracting motion-feature information from sequential gesture images with principal factor analysis. The algorithm first segments a two-dimensional silhouette region containing the human gesture and extracts geometric features from it. Principal factor analysis then selects the global features that most meaningfully express the gestures. Motion-history information, representing the temporal variation of the gestures, is built from the extracted features to construct a gesture subspace. Finally, the model feature values projected into this gesture space are converted into discrete state symbols by a grouping algorithm and used as input symbols of an HMM, and the input gesture is recognized as the model gesture with the highest probability. The proposed method achieves a higher recognition rate than appearance-based methods that use only body-shape information, or methods that extract features intuitively from complicated gestures, because it constructs gesture models from the feature factors with the highest contribution rates.
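
The final stage, scoring quantized symbol sequences with per-gesture HMMs, can be illustrated with a minimal discrete-HMM forward pass (toy parameters, not the paper's trained models):

```python
import numpy as np

def forward_log_likelihood(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM
    via the forward algorithm. start: (S,), trans: (S,S), emit: (S,V)."""
    alpha = start * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
    return np.log(alpha.sum())

# Toy 2-state, 3-symbol model (invented numbers for illustration)
start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3], [0.4, 0.6]])
emit  = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
seq = [0, 1, 2, 2]  # quantized gesture-space symbols
print(forward_log_likelihood(seq, start, trans, emit))
```

Recognition then picks, among the per-gesture models, the one assigning the observed symbol sequence the highest likelihood.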

Implementation of User Gesture Recognition System for manipulating a Floating Hologram Character (플로팅 홀로그램 캐릭터 조작을 위한 사용자 제스처 인식 시스템 구현)

  • Jang, Myeong-Soo; Lee, Woo-Beom
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.2 / pp.143-149 / 2019
  • Floating holograms are a technology that delivers rich 3D stereoscopic images in wide spaces such as advertisements and concerts. They also avoid the inconvenience of 3D glasses, eye strain, and space distortion, while offering 3D images with excellent realism and presence. This paper therefore implements a user gesture recognition system for manipulating a floating-hologram character that can run on small-space devices. The proposed method detects the face region using a Haar feature-based cascade classifier and recognizes user gestures in real time from the gesture-occurred position information acquired from the gesture difference image. Each classified gesture is then mapped to a character motion in the floating hologram to drive the character's actions. To evaluate the system, we built a floating-hologram display device and repeatedly measured the recognition rate of each gesture: body shaking, walking, hand shaking, and jumping. The average recognition rate was 88%.
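
Locating the gesture-occurred position from a difference image can be sketched with simple frame differencing (a simplified stand-in with an assumed threshold; the paper's pipeline also uses the detected face region to classify the gesture):

```python
import numpy as np

def gesture_position(prev_frame, cur_frame, thresh=30):
    """Locate where motion occurred: threshold the absolute frame
    difference and return the (x, y) centroid of changed pixels,
    or None if nothing moved. Threshold value is an assumption."""
    diff = np.abs(cur_frame.astype(int) - prev_frame.astype(int))
    ys, xs = np.nonzero(diff > thresh)
    if len(xs) == 0:
        return None
    return (int(xs.mean()), int(ys.mean()))

prev = np.zeros((120, 160), dtype=np.uint8)
cur = prev.copy()
cur[40:60, 100:120] = 255           # simulated hand-movement region
print(gesture_position(prev, cur))  # (109, 49)
```

Comparing this centroid against the face position (e.g. above, beside, or below the head) then maps the motion to one of the gesture classes.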

Gesture Communication: Collaborative and Participatory Design in a New Type of Digital Communication (제스츄어 커뮤니케이션: 새로운 방식의 디지털 커뮤니케이션의 참여 디자인 제안)

  • Won, Ha Youn
    • Korea Science and Art Forum / v.20 / pp.307-314 / 2015
  • Tele-Gesture is a tangible user interface (TUI) device that lets a user physically point to a 3D object in real life and have the gesture played back by a robotic finger that points to the same object, either simultaneously or at a later time. To understand gestures as a new mode of digital collaborative communication, collaboration situations and types were examined through TUI implementations. The design prototype reveals a rich non-verbal component of communication, in the form of gesture clusters and body movements, that occurs in digital communication. This analysis can contribute to the fields of communication, human behavior, and interaction with high technology through an interpretive social experience.

Multi - Modal Interface Design for Non - Touch Gesture Based 3D Sculpting Task (비접촉식 제스처 기반 3D 조형 태스크를 위한 다중 모달리티 인터페이스 디자인 연구)

  • Son, Minji; Yoo, Seung Hun
    • Design Convergence Study / v.16 no.5 / pp.177-190 / 2017
  • This research proposes a multimodal non-touch gesture interface design to improve the usability of 3D sculpting tasks. The sculpting tasks and procedures of designers were analyzed across multiple settings, from physical sculpting to computer software. The optimal body posture, design process, work environment, gesture-task relationships, and combinations of designers' natural hand gestures and arm movements were defined. Existing non-touch 3D software was also examined, and natural gesture interactions, visual metaphors for the UI, and affordances guiding behavior were designed. A prototype of the gesture-based 3D sculpting system was developed to validate intuitiveness and learnability against current software. The proposed gestures showed higher performance in understandability, memorability, and error rate. The results indicate that gesture interface design for productivity systems should reflect users' natural experience in the previous work domain and provide appropriate visual-behavioral metaphors.

Coordinations of Articulators in Korean Place Assimilation

  • Son, Min-Jung
    • Phonetics and Speech Sciences / v.3 no.2 / pp.29-35 / 2011
  • This paper examines several articulatory properties of /k/, known as a trigger of place assimilation as well as the object of post-obstruent tensing (/tk/), in comparison to non-assimilating controls (/kk/ and /kt/). EMMA data show that tongue-body articulation in the place-assimilation context robustly exhibits greater spatio-temporal magnitude and a lower jaw position. The results have several characteristics. Firstly, the constriction duration of the tongue-body gesture in C2 of the assimilation context (/tk/) was longer than in the non-assimilating controls (/kk/ and /kt/). Secondly, constriction maxima also showed greater constriction in the /tk/ sequences than in the control /kk/, but values similar to the control /kt/. In particular, the two variables were significantly related: the longer the constriction duration, the greater the constriction degree. Lastly, jaw height was lowest for the assimilating context /tk/, intermediate for the control /kk/, and highest for the control /kt/. The results suggest that speakers have lexical knowledge of place assimilation, producing a greater tongue-body gesture in the spatio-temporal domains with a lower jaw height in anticipation of the reduction of C1 in /tk/ sequences.
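
A much-simplified version of the constriction measurements (duration and maximum) might be computed from a tongue-body trajectory as follows; the threshold-based landmark scheme and toy numbers are assumptions for illustration, not the study's EMMA analysis procedure:

```python
def constriction_metrics(height, threshold):
    """From a 1-D tongue-body height trajectory (arbitrary units),
    return (duration_in_frames, constriction_maximum) over the
    interval where height exceeds an assumed landmark threshold."""
    frames = [h for h in height if h > threshold]
    return len(frames), (max(frames) if frames else None)

# Toy trajectory: constriction forms, peaks, and releases
traj = [0.1, 0.5, 0.8, 0.6, 0.2]
print(constriction_metrics(traj, 0.4))  # (3, 0.8)
```

On such measures, the /tk/ context would show both a longer duration and a higher maximum than the controls.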

Recognition-Based Gesture Spotting for Video Game Interface (비디오 게임 인터페이스를 위한 인식 기반 제스처 분할)

  • Han, Eun-Jung; Kang, Hyun; Jung, Kee-Chul
    • Journal of Korea Multimedia Society / v.8 no.9 / pp.1177-1186 / 2005
  • In vision-based interfaces for video games, gestures serve as game commands instead of keyboard or mouse input. Such interfaces must tolerate unintentional movements and continuous gestures to feel natural to the user. To address this, the paper proposes a novel gesture-spotting method that combines spotting with recognition: it recognizes meaningful movements while simultaneously separating unintentional movements from a given image sequence. We applied the method to recognizing upper-body gestures for interfacing between a video game (Quake II) and its user. Experimental results show that the proposed method spots gestures from continuous movement with an average accuracy of 93.36%, confirming its potential as a gesture-based interface for computer games.
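
The spotting half of such a system can be sketched as hysteresis thresholding over per-frame motion energy (the thresholds and energy values are assumed; the paper additionally couples spotting with HMM-style recognition of the spotted segments):

```python
def spot_segments(energy, on=0.5, off=0.2):
    """Hysteresis-threshold spotting: a gesture segment starts when
    per-frame motion energy rises above `on` and ends when it falls
    below `off`; frames outside segments are treated as unintentional
    movement. Returns a list of (start, end) frame indices."""
    segments, start = [], None
    for i, e in enumerate(energy):
        if start is None and e > on:
            start = i
        elif start is not None and e < off:
            segments.append((start, i))
            start = None
    if start is not None:               # gesture still active at end
        segments.append((start, len(energy)))
    return segments

energy = [0.1, 0.1, 0.6, 0.9, 0.8, 0.1, 0.1, 0.7, 0.6, 0.1]
print(spot_segments(energy))  # [(2, 5), (7, 9)]
```

Recognizing each spotted segment concurrently, rather than after a hard segmentation, is what lets the combined method reject unintentional movements.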

Kinect-based Motion Recognition Model for the 3D Contents Control (3D 콘텐츠 제어를 위한 키넥트 기반의 동작 인식 모델)

  • Choi, Han Suk
    • The Journal of the Korea Contents Association / v.14 no.1 / pp.24-29 / 2014
  • This paper proposes a Kinect-based human motion recognition model for 3D content control, tracking the user's body gestures through the Kinect's infrared camera. The proposed model computes the variation in distance from the shoulder to the left and right hand, wrist, arm, and elbow as the body moves. Motions are classified into movement commands: left, right, up, down, enlargement, downsizing, and selection. The proposed model is natural and low-cost compared with contact-type gesture recognition technologies and device-based approaches that require expensive hardware.
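
Mapping a dominant hand displacement to one of the listed commands can be sketched as follows (the axis conventions, threshold, and use of depth change for enlargement/downsizing are illustrative assumptions, not the paper's exact rules):

```python
def classify_motion(dx, dy, dz, thresh=0.15):
    """Map a hand displacement (assumed meters; x right, y up,
    z toward the camera) to a content-control command. Displacement
    below the assumed threshold yields no command."""
    moves = {"left": -dx, "right": dx, "up": dy, "down": -dy,
             "enlargement": dz, "downsizing": -dz}
    name, mag = max(moves.items(), key=lambda kv: kv[1])
    return name if mag > thresh else "none"

print(classify_motion(0.3, 0.05, 0.0))   # right
print(classify_motion(0.0, -0.2, 0.05))  # down
```

A selection command would need a separate trigger (e.g. holding the hand still), since it is not a directional displacement.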

Implementation of a Gesture Recognition Signage Platform for Factory Work Environments

  • Rho, Jungkyu
    • International Journal of Internet, Broadcasting and Communication / v.12 no.3 / pp.171-176 / 2020
  • This paper presents an implementation of a gesture recognition platform for factory workplaces. The platform consists of signages that display workers' job orders and a control center used to manage the work orders of factory workers. Workers do not need to carry work-order documents; each worker can browse the assigned orders on the signage at his or her workplace. The signage content is controlled by the worker's hand and arm gestures: gestures are extracted from body movements tracked by a 3D depth camera and converted into commands that control the displayed content. Using the control center, the factory manager can assign tasks to each worker, upload work-order documents to the system, and monitor each worker's progress. The implementation was applied experimentally to a machining factory workplace. The platform provides convenience for factory workers at their workplaces, improves the security of technical documents, and can also be used in building smart factories.