• Title/Summary/Keyword: hand interface

Implementing Leap-Motion-Based Interface for Enhancing the Realism of Shooter Games (슈팅 게임의 현실감 개선을 위한 립모션 기반 인터페이스 구현)

  • Shin, Inho;Cheon, Donghun;Park, Hanhoon
    • Journal of the HCI Society of Korea / v.11 no.1 / pp.5-10 / 2016
  • This paper aims to provide a shooter game interface that enhances the game's realism by recognizing the user's hand gestures with the Leap Motion. We implemented the functions needed in shooter games, such as shooting, moving, viewpoint change, and zoom in/out, and confirmed through a user test that a game interface using familiar and intuitive hand gestures is superior to the conventional mouse/keyboard in terms of ease of manipulation, interest, extensibility, and so on. Specifically, the user satisfaction score (on a 1-5 scale) averaged 3.02 with the mouse/keyboard interface and 3.57 with the proposed hand gesture interface.
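
A minimal sketch of this kind of gesture-to-command mapping, independent of the paper's actual implementation; the frame fields, thresholds, and action names below are hypothetical stand-ins for Leap Motion SDK hand data:

```python
# Hypothetical sketch: map hand-tracking features to shooter-game actions.
# The `frame` dict stands in for Leap Motion SDK output; all field names,
# thresholds, and actions are illustrative assumptions, not the paper's design.

def interpret_frame(frame):
    """Derive game commands from simple hand features."""
    actions = []
    if frame["grab_strength"] > 0.9:       # closed fist -> fire
        actions.append("SHOOT")
    dx = frame["palm_velocity"][0]
    if abs(dx) > 200:                      # fast lateral swipe -> strafe
        actions.append("MOVE_RIGHT" if dx > 0 else "MOVE_LEFT")
    if frame["num_hands"] == 2:            # spreading both hands -> zoom
        actions.append("ZOOM_IN" if frame["hand_distance_delta"] > 0 else "ZOOM_OUT")
    return actions
```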

Implementation of Hand-Gesture-Based Augmented Reality Interface on Mobile Phone (휴대폰 상에서의 손동작 기반 증강현실 인터페이스 구현)

  • Choi, Jun-Yeong;Park, Han-Hoon;Park, Jung-Sik;Park, Jong-Il
    • Journal of Broadcast Engineering / v.16 no.6 / pp.941-950 / 2011
  • With the recent advances in mobile phone performance, many effective interfaces for them have been proposed. This paper implements a hand-gesture- and vision-based interface on a mobile phone. It assumes a natural interaction scenario in which the user holds the phone in one hand and views the other hand's palm through the phone's camera. A virtual object is then rendered on the palm and reacts to hand and finger movements. Since the implemented interface is based on the hand, which is familiar to humans, and requires no additional sensors or markers, the user can freely interact with the virtual object anytime and anywhere without any training. The implemented interface ran at 5 fps on a mobile phone (a Galaxy S2 with a dual-core processor).
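
The abstract does not detail the palm detection step; as a generic illustration with OpenCV's Python bindings, a palm-like region can be found by skin-color thresholding in YCrCb (the threshold values are conventional defaults, not the authors'):

```python
import cv2
import numpy as np

def find_palm(frame_bgr):
    """Return the bounding box of the largest skin-colored blob (palm candidate)."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # common skin range
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,    # OpenCV 4.x API
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    palm = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(palm)  # anchor region for rendering the virtual object
```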

A Long-Range Touch Interface for Interaction with Smart TVs

  • Lee, Jaeyeon;Kim, DoHyung;Kim, Jaehong;Cho, Jae-Il;Sohn, Joochan
    • ETRI Journal / v.34 no.6 / pp.932-941 / 2012
  • A powerful interaction mechanism is one of the key elements for the success of smart TVs, which demand far more complex interactions than traditional TVs. This paper proposes a novel interface based on the familiar touch interaction model but utilizing long-range bare-hand tracking to emulate touch actions. To satisfy the essential requirements of high accuracy and immediate response, the proposed hand tracking algorithm adopts a fast color-based tracker, with modifications to avoid the problems inherent to such algorithms. By using online modeling and motion information, sensitivity to the environment is greatly decreased. Furthermore, several ideas are proposed to solve the problems often encountered by users interacting with smart TVs, resulting in a very robust hand tracking algorithm that works well even for users in sleeveless clothing. In addition, the proposed algorithm runs at a very high speed of 82.73 Hz. The proposed interface is confirmed to comfortably support most touch operations, such as clicks, swipes, and drags, at a distance of three meters, which makes it a good candidate for interaction with smart TVs.
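
The paper's tracker adds online modeling and motion cues that are not reproduced here; as a baseline illustration only, a classic fast color-based tracker (hue-histogram backprojection with CamShift) looks like this in OpenCV:

```python
import cv2

def track_hand(cap, init_box):
    """Baseline color tracker: hue-histogram backprojection + CamShift."""
    ok, frame = cap.read()
    x, y, w, h = init_box
    hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])  # hue histogram
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    box = init_box
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        _, box = cv2.CamShift(backproj, box, term)  # box follows the hand window
        yield box
```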

Hand Gesture Interface for Manipulating 3D Objects in Augmented Reality (증강현실에서 3D 객체 조작을 위한 손동작 인터페이스)

  • Park, Keon-Hee;Lee, Guee-Sang
    • The Journal of the Korea Contents Association / v.10 no.5 / pp.20-28 / 2010
  • In this paper, we propose a hand gesture interface for manipulating augmented objects in 3D space using a camera. Generally, a marker is used to detect 3D movement in 2D images. However, marker-based systems have an obvious drawback: the marker must always appear in the image, or additional equipment is needed to control the objects, which reduces immersion. To overcome this problem, we replace the marker with the planar shape of the hand by estimating the hand pose, and a Kalman filter is used for robust tracking of the hand shape. The experimental results indicate the feasibility of the proposed algorithm for hand-based AR interfaces.
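
A minimal sketch of the Kalman-filter tracking step with OpenCV, assuming a constant-velocity model over the 2D hand centroid (the state layout and noise covariances are illustrative choices, not the paper's):

```python
import cv2
import numpy as np

# State (x, y, dx, dy), measurement (x, y): constant-velocity model.
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def smooth(measured_xy):
    """Predict, then correct with the detected hand centroid."""
    kf.predict()
    est = kf.correct(np.array(measured_xy, np.float32).reshape(2, 1))
    return float(est[0, 0]), float(est[1, 0])
```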

Study on User Interface for a Capacitive-Sensor Based Smart Device

  • Jung, Sun-IL;Kim, Young-Chul
    • Smart Media Journal / v.8 no.3 / pp.47-52 / 2019
  • In this paper, we designed HW/SW interfaces for processing the signals of capacitive sensors, such as the Electric Potential Sensor (EPS), which detect disturbances in the surrounding electric field as feature signals for motion recognition systems, and we implemented a smart light control system with those interfaces. In the system, the on/off switch and brightness adjustment are controlled by hand gestures through the designed and fabricated interface circuits. PWM (pulse width modulation) signals from the controller, together with a driver IC, drive the LED and control its brightness and on/off operation. Using the hand-gesture signals obtained through the EPS sensors and the interface HW/SW, we can not only build a gesture-command system but also achieve faster recognition by developing dedicated interface hardware, including control circuitry. Finally, using the proposed hand-gesture recognition and signal processing methods, the light control module was designed and implemented. The experimental results show that the smart light control system controls the LED module properly through accurate motion detection and gesture classification.
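
As an illustration of gesture-driven PWM dimming, here is a sketch using RPi.GPIO as a stand-in for the paper's dedicated controller and driver IC; the pin number, carrier frequency, and gesture vocabulary are assumptions:

```python
import RPi.GPIO as GPIO

LED_PIN = 18                    # assumption: LED driver input on GPIO 18
GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)
pwm = GPIO.PWM(LED_PIN, 1000)   # 1 kHz PWM carrier
pwm.start(0)                    # LED initially off
duty = 0

def on_gesture(gesture):
    """Map classified EPS gestures to on/off and brightness steps."""
    global duty
    if gesture == "SWIPE_UP":       # brighten
        duty = min(100, duty + 10)
    elif gesture == "SWIPE_DOWN":   # dim
        duty = max(0, duty - 10)
    elif gesture == "HOLD":         # switch off
        duty = 0
    pwm.ChangeDutyCycle(duty)
```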

A Hierarchical Bayesian Network for Real-Time Continuous Hand Gesture Recognition (연속적인 손 제스처의 실시간 인식을 위한 계층적 베이지안 네트워크)

  • Huh, Sung-Ju;Lee, Seong-Whan
    • Journal of KIISE: Software and Applications / v.36 no.12 / pp.1028-1033 / 2009
  • This paper presents a real-time hand gesture recognition approach for controlling a computer. We define hand gestures as continuous hand postures and their movements, for easy expression of various gestures, and propose a Two-layered Bayesian Network (TBN) to recognize them. The proposed method can compensate for an incorrectly recognized hand posture and its location using the preceding and following information. To verify the usefulness of the proposed method, we implemented a Virtual Mouse interface, a gesture-based counterpart of a physical mouse device. In experiments, the proposed method showed recognition rates of 94.8% and 88.1% for a simple and a cluttered background, respectively, outperforming the previous HMM-based method, which achieved 92.4% and 83.3% under the same conditions.
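
The TBN itself is not reproduced here, but its core idea of correcting a noisy per-frame posture label from preceding and following context can be illustrated with a plain Viterbi decoder over posture likelihoods (the model matrices are assumed inputs):

```python
import numpy as np

def viterbi(obs_probs, trans, prior):
    """Most likely posture sequence given per-frame posture likelihoods.

    obs_probs: (T, N) likelihood of each of N postures per frame
    trans:     (N, N) posture transition probabilities
    prior:     (N,)   initial posture distribution
    """
    T, N = obs_probs.shape
    delta = np.log(prior) + np.log(obs_probs[0])
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + np.log(trans)   # scores[i, j]: i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(obs_probs[t])
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):                 # backtrack through best states
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```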

Implementation of Hand-Gesture Interface to manipulate a 3D Object of Augmented Reality (증강현실의 3D 객체 조작을 위한 핸드-제스쳐 인터페이스 구현)

  • Jang, Myeong-Soo;Lee, Woo-Beom
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.16 no.4 / pp.117-123 / 2016
  • In this paper, a hand-gesture interface for manipulating a 3D object in augmented reality is implemented by recognizing the user's hand gestures. The proposed method extracts the hand region from a real image and creates the augmented object on a hand marker recognized from the user's hand gesture. 3D object manipulation corresponding to the user's hand gesture is then performed by analyzing the hand-region ratio, the number of fingers, and the variation of the hand-region center. To evaluate the performance of the proposed method, a 3D object was built with the OpenGL library, and all processing tasks were implemented with the Intel OpenCV library and C++. As a result, the proposed method achieved an average recognition rate of 90% across the user command modes.
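
The finger-count feature can be illustrated with the standard convexity-defect heuristic in OpenCV (shown in Python for brevity, although the paper used C++; the angle and depth thresholds are conventional choices, not the authors'):

```python
import cv2
import numpy as np

def count_fingers(hand_contour):
    """Estimate extended fingers from convexity defects of the hand contour."""
    hull = cv2.convexHull(hand_contour, returnPoints=False)
    defects = cv2.convexityDefects(hand_contour, hull)
    if defects is None:
        return 0
    valleys = 0
    for s, e, f, depth in defects[:, 0]:
        start, end, far = (hand_contour[i][0] for i in (s, e, f))
        a = np.linalg.norm(end - start)
        b = np.linalg.norm(far - start)
        c = np.linalg.norm(far - end)
        angle = np.arccos((b**2 + c**2 - a**2) / (2 * b * c + 1e-6))
        if angle < np.pi / 2 and depth > 10000:   # deep, acute valley between fingers
            valleys += 1
    return valleys + 1 if valleys else 0          # n valleys ~ n+1 fingers
```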

Human-Computer Interaction Based Only on Auditory and Visual Information

  • Sha, Hui;Agah, Arvin
    • Transactions on Control, Automation and Systems Engineering / v.2 no.4 / pp.285-297 / 2000
  • One of the research objectives in multimedia human-computer interaction is the application of artificial intelligence and robotics technologies to the development of computer interfaces. This involves utilizing many forms of media, integrating speech input, natural language, graphics, hand-pointing gestures, and other methods for interactive dialogues. Although current human-computer communication methods include keyboards, mice, and other traditional devices, the two basic ways people communicate with each other are voice and gesture. This paper reports on research focusing on the development of an intelligent multimedia interface system modeled on the manner in which people communicate. The work explores interaction between humans and computers based only on the processing of speech (words uttered by the person) and of images (hand-pointing gestures). The purpose of the interface is to control a pan/tilt camera so that it points at a location specified by the user through spoken words and hand pointing. The system utilizes a stationary camera to capture images of the user's hand and a microphone to capture the user's words; upon processing the images and sounds, it responds by pointing the camera. Initially, the interface uses hand pointing to locate the general position the user is referring to, and then uses the user's voice commands to fine-tune the location and to change the camera's zoom, if requested. The image of the location is captured by the pan/tilt camera and displayed on a color TV monitor. This type of system has applications in teleconferencing and other remote operations, where the system must respond to the user's commands in a manner similar to how the user would communicate with another person. The advantage of this approach is the elimination of the traditional input devices the user must otherwise utilize to control a pan/tilt camera, replacing them with more "natural" means of interaction. A number of experiments were performed to evaluate the interface system with respect to its accuracy, efficiency, reliability, and limitations.
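
A minimal sketch of the coarse-to-fine control loop described above: hand pointing supplies a rough pan/tilt target, then voice commands refine it. The command vocabulary and step sizes are illustrative assumptions:

```python
STEP = 2.0  # degrees per voice refinement command (assumed)

def update_camera(pan, tilt, zoom, pointing=None, command=None):
    """Coarse stage: jump to the pointed direction; fine stage: voice nudges."""
    if pointing is not None:        # (pan, tilt) estimated from the hand image
        pan, tilt = pointing
    elif command == "left":
        pan -= STEP
    elif command == "right":
        pan += STEP
    elif command == "up":
        tilt += STEP
    elif command == "down":
        tilt -= STEP
    elif command == "zoom in":
        zoom *= 1.2
    elif command == "zoom out":
        zoom /= 1.2
    return pan, tilt, zoom
```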

Clinical outcomes of a low-cost single-channel myoelectric-interface three-dimensional hand prosthesis

  • Ku, Inhoe;Lee, Gordon K.;Park, Chan Yong;Lee, Janghyuk;Jeong, Euicheol
    • Archives of Plastic Surgery / v.46 no.4 / pp.303-310 / 2019
  • Background: Prosthetic hands with a myoelectric interface have recently received interest within the broader category of hand prostheses, but their high cost is a major barrier to use. Modern three-dimensional (3D) printing technology has enabled more widespread development and cost-effectiveness in the field of prostheses. The objective of the present study was to evaluate the clinical impact of a low-cost 3D-printed myoelectric-interface prosthetic hand on patients' daily life. Methods: A prospective review was conducted of all transradial amputees who used 3D-printed myoelectric interface prostheses (Mark V) between January 2016 and August 2017. The functional outcomes of prosthesis usage over a 3-month follow-up period were measured using a validated method (Orthotics Prosthetics User Survey-Upper Extremity Functional Status [OPUS-UEFS]). In addition, the correlation between the length of the amputated radius and changes in OPUS-UEFS scores was analyzed. Results: Ten patients were included in the study. After use of the 3D-printed myoelectric single-electromyography-channel prosthesis for 3 months, the average OPUS-UEFS score significantly increased from 45.50 to 60.10. The Spearman correlation coefficient (r) between radius length and OPUS-UEFS at the 3rd month of prosthetic use was 0.815. Conclusions: This low-cost 3D-printed myoelectric-interface prosthetic hand with a single reliable myoelectric signal shows the potential to positively impact amputees' quality of life through daily usage. The emergence of a low-cost 3D-printed myoelectric prosthesis could lead to new market trends, with such a device gaining popularity via reduced production costs and increased market demand.
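
For reference, the reported association between radius length and OPUS-UEFS is a Spearman rank correlation; computing one looks like this (the values below are placeholders, not the study's data):

```python
from scipy.stats import spearmanr

# Placeholder data for illustration only; the per-patient values are not public here.
radius_length_cm = [12.0, 15.5, 10.2, 18.0, 14.1, 16.3, 11.7, 13.9, 17.2, 9.8]
opus_uefs_month3 = [55, 62, 48, 71, 58, 66, 51, 57, 69, 45]

r, p = spearmanr(radius_length_cm, opus_uefs_month3)
print(f"Spearman r = {r:.3f} (p = {p:.3f})")
```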

Analysis of Face Direction and Hand Gestures for Recognition of Human Motion (인간의 행동 인식을 위한 얼굴 방향과 손 동작 해석)

  • Kim, Seong-Eun;Jo, Gang-Hyeon;Jeon, Hui-Seong;Choe, Won-Ho;Park, Gyeong-Seop
    • Journal of Institute of Control, Robotics and Systems / v.7 no.4 / pp.309-318 / 2001
  • In this paper, we describe methods for analyzing human gestures. A human interface (HI) system for gesture analysis extracts the head and hand regions from image sequences of an operator's continuous behavior captured with CCD cameras. Since gestures are performed with the operator's head and hand motions, we extract the head and hand regions and calculate geometrical information about the extracted skin regions. Head motion is analyzed by obtaining the face direction: we model the head as an ellipsoid in 3D coordinates and locate facial features such as the eyes, nose, and mouth on its surface. Given the center of these feature points, the angle of that center on the ellipsoid gives the direction of the face. The hand region obtained from preprocessing may include the arm as well as the hand, so to extract only the hand we find the wrist line that divides the hand and arm regions. After separating the hand region at the wrist line, we model it as an ellipse for the analysis of hand data, with the fingers represented as long, narrow shapes, and extract hand information such as size, position, and shape.
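
The ellipse model of the segmented hand region can be sketched with OpenCV's fitEllipse (a generic illustration, not the authors' code):

```python
import cv2

def hand_geometry(hand_contour):
    """Fit an ellipse to the hand contour and report its basic geometry."""
    if len(hand_contour) < 5:       # cv2.fitEllipse needs at least 5 points
        return None
    (cx, cy), (w, h), angle = cv2.fitEllipse(hand_contour)
    return {"center": (cx, cy),
            "axes": (w, h),                 # long, narrow axes flag finger-like shapes
            "orientation_deg": angle,
            "area": cv2.contourArea(hand_contour)}
```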
