Title/Summary/Keyword: Mouse Gestures

Mouse Gesture Design Based on Mental Model (심성모형 기반의 마우스 제스처 개발)

  • Seo, Hye Kyung
    • Journal of Korean Institute of Industrial Engineers / v.39 no.3 / pp.163-171 / 2013
  • Various web browsers offer mouse gesture functions as a convenient input method. Mouse gestures let users move to the previous page or tab without clicking the corresponding icon or menu in the web browser. To maximize their efficiency, mouse gestures should be designed to match users' mental models. Humans use mental models to make accurate predictions and reactions once information has been recognized, so presenting users with information that fits their mental models leads to fast understanding and response. A cognitive response test was performed to evaluate whether each mouse gesture is easily associated with its functional meaning. Mouse gestures that needed improvement were identified and then redesigned via sketch maps to reduce cognitive load. The methods presented in this study should help in evaluating and designing mouse gestures.

A Joystick-driven Mouse Controlling Method using Hand Gestures (손 제스쳐를 이용한 조이스틱 방식의 마우스제어 방법)

  • Jung, Jin-Young;Kim, Jung-In
    • Journal of Korea Multimedia Society / v.19 no.1 / pp.60-67 / 2016
  • PC users have long controlled their computers with input devices such as the mouse and keyboard. To remedy the inconveniences of these devices, screen-touching methods have come into wide use, and devices that recognize human gestures are being developed one after another. For example, Kinect, developed and distributed by Microsoft, is a non-contact input device that recognizes human gestures through motion-recognizing sensors and can thus replace the mouse as an input device. However, when controlling the mouse on a large screen, such a device suffers from the problem of requiring large motions to move the mouse pointer to the edges of the screen. In this paper, we propose a joystick-driven mouse-controlling method that enables the user to move the mouse pointer to the corners of the screen with small motions. The experimental results show that movements of the user's palm within a range of 30 cm suffice to move the mouse pointer to the edges of the screen.
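
The key idea above is a relative, joystick-style mapping: the palm's offset from a neutral position drives pointer velocity rather than absolute position, so a small hand range can still reach every screen edge. Below is a minimal sketch of that mapping, not the paper's implementation; get_palm_offset() is a hypothetical stand-in for the hand tracker, and the gain, dead zone, and screen size are invented parameters.

```python
# Joystick-style pointer control: palm offset -> pointer velocity.
import random
import time

SCREEN_W, SCREEN_H = 1920, 1080
GAIN = 40.0          # pointer speed (px/s) per cm of palm offset (assumed)
DEAD_ZONE = 2.0      # cm; ignore tiny offsets so the pointer can rest
DT = 1.0 / 60.0      # update interval in seconds

def get_palm_offset():
    """Hypothetical tracker output: palm (dx, dy) in cm from the rest position."""
    return random.uniform(-15, 15), random.uniform(-15, 15)

px, py = SCREEN_W / 2, SCREEN_H / 2      # pointer starts at the screen center
for _ in range(300):                     # ~5 seconds of simulated tracking
    dx, dy = get_palm_offset()
    # Offset controls velocity, so a +/-15 cm palm range reaches every edge.
    if abs(dx) > DEAD_ZONE:
        px += GAIN * dx * DT
    if abs(dy) > DEAD_ZONE:
        py += GAIN * dy * DT
    px = min(max(px, 0), SCREEN_W - 1)   # clamp to the screen bounds
    py = min(max(py, 0), SCREEN_H - 1)
    time.sleep(DT)
print(f"final pointer position: ({px:.0f}, {py:.0f})")
```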

Development of Finger Gestures for Touchscreen-based Web Browser Operation (터치스크린 기반 웹브라우저 조작을 위한 손가락 제스처 개발)

  • Nam, Jong-Yong;Choe, Jae-Ho;Jung, Eui-S.
    • Journal of the Ergonomics Society of Korea / v.27 no.4 / pp.109-117 / 2008
  • Compared to the existing PC, which uses a mouse and a keyboard, a touchscreen-based portable PC lets the user operate it with the fingers, which requires new operation methods. However, many current touchscreen-based web browser operations merely have the fingers move and click like a mouse, or correspond poorly to the user's sensibility and the structure of the index finger, making them difficult to use while walking. The goal of this study is therefore to develop finger gestures that facilitate interaction between the interface and the user and make operation easier. First, the top eight functions were extracted based on frequency of use in the web browser and user preference. Then the users' structural knowledge was visualized through sketch maps, and finger gestures applicable to touchscreens were derived through the Meaning in Mediated Action method. Directional gestures were derived for the forward/back page and up/down scroll functions, and letter-type and icon-type gestures were drawn for the window closure, refresh, home, and print functions. A validation experiment compared the proposed operation method with the existing ones in terms of execution time, error rate, and preference; the directional and letter-type gestures outperformed the existing methods. These results suggest that the new gestures can make operation easier and faster not only for touchscreen-based web browsers on portable PCs but also for telematics-related functions in automobiles, PDAs, and so on.
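
The directional gestures derived for the forward/back and scroll functions amount to classifying a stroke's dominant direction. The sketch below illustrates one plausible classifier, not the paper's; the 50-pixel minimum stroke length is an assumed parameter.

```python
# Classify a one-stroke touch gesture into one of four directional commands.
import math

def classify_swipe(x0, y0, x1, y1, min_dist=50):
    """Map a stroke's start/end points to a browser command, or None."""
    dx, dy = x1 - x0, y1 - y0
    if math.hypot(dx, dy) < min_dist:      # too short to be a deliberate swipe
        return None
    if abs(dx) >= abs(dy):                 # mostly horizontal stroke
        return "back_page" if dx < 0 else "forward_page"
    return "scroll_up" if dy < 0 else "scroll_down"

print(classify_swipe(400, 300, 150, 320))  # -> back_page
print(classify_swipe(200, 500, 210, 220))  # -> scroll_up
```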

Study about Windows System Control Using Gesture and Speech Recognition (제스처 및 음성 인식을 이용한 윈도우 시스템 제어에 관한 연구)

  • Kim, Ju-Hong;Jin, Sung-Il;Lee, Nam-Ho;Lee, Yong-Beom
    • Proceedings of the IEEK Conference / 1998.10a / pp.1289-1292 / 1998
  • HCI (human-computer interface) technologies have often been implemented using the mouse, keyboard, and joystick. Because the mouse and keyboard are usable only in limited situations, more natural HCI methods such as speech-based and gesture-based input have recently attracted wide attention. In this paper, we present a multi-modal input system to control the Windows system for practical use of a multimedia computer. Our multi-modal input system consists of three parts. The first is a virtual-hand mouse, which replaces mouse control with a set of gestures. The second is Windows control using speech recognition. The third is Windows control using gesture recognition. We introduce neural network and HMM methods to recognize speech and gestures. The outputs of the three parts interface directly with the CPU and with Windows.
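
The abstract names HMMs among its recognizers. As a hedged sketch of that general technique, not the paper's actual models, the snippet below scores a discretized motion sequence under one toy HMM per gesture with the scaled forward algorithm and picks the best match; both models and the symbol alphabet are invented for illustration.

```python
# HMM-based gesture classification via the scaled forward algorithm.
import numpy as np

def forward_log_prob(obs, pi, A, B):
    """Log-likelihood of an observation sequence under one HMM."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha = alpha / alpha.sum()            # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        log_p += np.log(c)
        alpha = alpha / c
    return log_p

# Toy per-gesture HMMs over 3 motion symbols (0 = left, 1 = right, 2 = still).
models = {
    "swipe_right": (np.array([1.0, 0.0]),
                    np.array([[0.7, 0.3], [0.0, 1.0]]),
                    np.array([[0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])),
    "swipe_left":  (np.array([1.0, 0.0]),
                    np.array([[0.7, 0.3], [0.0, 1.0]]),
                    np.array([[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]])),
}

obs = [1, 1, 1, 2, 2]                      # mostly rightward motion, then still
best = max(models, key=lambda g: forward_log_prob(obs, *models[g]))
print("recognized gesture:", best)         # -> swipe_right
```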

A Hierarchical Bayesian Network for Real-Time Continuous Hand Gesture Recognition (연속적인 손 제스처의 실시간 인식을 위한 계층적 베이지안 네트워크)

  • Huh, Sung-Ju;Lee, Seong-Whan
    • Journal of KIISE: Software and Applications / v.36 no.12 / pp.1028-1033 / 2009
  • This paper presents a real-time hand gesture recognition approach for controlling a computer. We define hand gestures as continuous hand postures and their movements, for easy expression of various gestures, and propose a Two-layered Bayesian Network (TBN) to recognize them. The proposed method can compensate for an incorrectly recognized hand posture and its location by using the preceding and following information. To verify the usefulness of the proposed method, we implemented a Virtual Mouse interface, a gesture-based counterpart of the physical mouse device. In experiments, the proposed method achieved recognition rates of 94.8% and 88.1% on a simple and a cluttered background, respectively, outperforming the previous HMM-based method, which achieved 92.4% and 83.3% under the same conditions.
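
The TBN itself is not reproduced here, but its key benefit, correcting a misrecognized posture from the preceding and following frames, can be illustrated with a much simpler stand-in: a majority vote over a centered window of posture labels.

```python
# Temporal smoothing of per-frame posture labels (toy stand-in for the TBN).
from collections import Counter

def smooth_postures(labels, radius=2):
    """Replace each frame's label by the majority in a centered window."""
    smoothed = []
    for i in range(len(labels)):
        lo, hi = max(0, i - radius), min(len(labels), i + radius + 1)
        smoothed.append(Counter(labels[lo:hi]).most_common(1)[0][0])
    return smoothed

raw = ["point", "point", "fist", "point", "point", "open", "open", "open"]
print(smooth_postures(raw))
# -> ['point', 'point', 'point', 'point', 'point', 'open', 'open', 'open']
```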

Recognition-Based Gesture Spotting for Video Game Interface (비디오 게임 인터페이스를 위한 인식 기반 제스처 분할)

  • Han, Eun-Jung;Kang, Hyun;Jung, Kee-Chul
    • Journal of Korea Multimedia Society / v.8 no.9 / pp.1177-1186 / 2005
  • In vision-based interfaces for video games, gestures serve as game commands in place of keyboard or mouse presses. To give the user a more natural interface, such systems must tolerate unintentional movements and continuous gestures. For this problem, this paper proposes a novel gesture spotting method that combines spotting with recognition: it recognizes meaningful movements while concurrently separating unintentional movements from a given image sequence. We applied our method to recognizing upper-body gestures for interfacing between a video game (Quake II) and its user. Experimental results show that the proposed method spots gestures from continuous movement with an average accuracy of 93.36%, confirming its potential as a gesture-based interface for computer games.
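
Gesture spotting means deciding which segments of a continuous stream are meaningful while discarding the rest. The sketch below is only a schematic of that idea, a windowed score checked against a rejection threshold, and not the paper's recognition-based model; the scorer, window size, and threshold are all assumptions.

```python
# Spot meaningful gesture spans in a continuous motion stream.

def score_window(window):
    """Stand-in gesture scorer: here, mean motion magnitude of the window."""
    return sum(window) / len(window)

def spot_gestures(stream, win=4, threshold=0.6):
    """Return (start, end) index spans whose windows score above the threshold."""
    spans, start = [], None
    for i in range(len(stream) - win + 1):
        meaningful = score_window(stream[i:i + win]) >= threshold
        if meaningful and start is None:
            start = i                           # a meaningful segment begins
        elif not meaningful and start is not None:
            spans.append((start, i + win - 2))  # end of last meaningful window
            start = None
    if start is not None:
        spans.append((start, len(stream) - 1))
    return spans

motion = [0.1, 0.2, 0.9, 1.0, 1.1, 0.9, 0.2, 0.1, 0.1, 0.8, 0.9, 1.0]
print(spot_gestures(motion))  # -> [(1, 6), (8, 11)]: the two bursts of motion
```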

A Development of the Next-generation Interface System Based on the Finger Gesture Recognizing in Use of Image Process Techniques (영상처리를 이용한 지화인식 기반의 차세대 인터페이스 시스템 개발)

  • Kim, Nam-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.4 / pp.935-942 / 2011
  • This study designs and implements a finger gesture recognition system that automatically recognizes finger gestures input through a camera and controls the computer with them. Common CCD cameras were refitted as infrared cameras to acquire the images. The captured images are pre-processed to find hand features, the finger gestures are read from those features, and an event is generated for subsequent mouse control and presentation, suggesting a new way to control computers. The finger gesture recognition system presented in this study has been verified as a next-generation interface that can replace the mouse and keyboard in future information devices.
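
The abstract does not detail the pre-processing, so the following is an assumed, illustrative route from a thresholded infrared hand image to a simple finger-count feature, using OpenCV contours and convexity defects; the input file name and the 20-pixel defect-depth cutoff are hypothetical.

```python
# Finger counting from a grayscale (e.g., infrared) hand image with OpenCV.
import cv2

def count_fingers(gray):
    """Estimate the number of extended fingers in a grayscale hand image."""
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)        # largest blob = hand
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Deep valleys between fingertips appear as large convexity defects;
    # n deep valleys roughly correspond to n+1 extended fingers.
    deep = sum(1 for d in defects[:, 0] if d[3] / 256.0 > 20)
    return deep + 1 if deep > 0 else 0

frame = cv2.imread("hand_ir.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
if frame is not None:
    print("fingers:", count_fingers(frame))
```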

Implementing Leap-Motion-Based Interface for Enhancing the Realism of Shooter Games (슈팅 게임의 현실감 개선을 위한 립모션 기반 인터페이스 구현)

  • Shin, Inho;Cheon, Donghun;Park, Hanhoon
    • Journal of the HCI Society of Korea / v.11 no.1 / pp.5-10 / 2016
  • This paper provides a shooter game interface that enhances the game's realism by recognizing the user's hand gestures with the Leap Motion controller. We implemented the functions necessary in shooter games, such as shooting, moving, viewpoint change, and zoom in/out, and confirmed through a user test that a game interface using familiar, intuitive hand gestures is superior to the conventional mouse/keyboard in terms of ease of manipulation, interest, extendability, and so on. Specifically, the user satisfaction index (on a 1-5 scale) averaged 3.02 with the mouse/keyboard interface and 3.57 with the proposed hand gesture interface.
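
As a purely illustrative sketch of how tracked hand state might map to the shooter actions listed above: the HandState fields and thresholds are invented for the example, and the real Leap Motion SDK is deliberately not used.

```python
# Map tracked hand state to shooter-game actions (hypothetical structure).
from dataclasses import dataclass

@dataclass
class HandState:
    pinch: bool        # thumb-index pinch detected
    palm_dx: float     # palm offset from neutral, in mm (x axis)
    extended: int      # number of extended fingers

def hand_to_actions(h: HandState):
    """Translate one frame of hand state into zero or more game actions."""
    actions = []
    if h.pinch:
        actions.append("shoot")
    if abs(h.palm_dx) > 30:                # lateral palm shift strafes
        actions.append("move_right" if h.palm_dx > 0 else "move_left")
    if h.extended == 5:
        actions.append("zoom_out")
    elif h.extended == 2:
        actions.append("zoom_in")
    return actions or ["idle"]

print(hand_to_actions(HandState(pinch=True, palm_dx=45.0, extended=2)))
# -> ['shoot', 'move_right', 'zoom_in']
```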

Volume Control using Gesture Recognition System

  • Shreyansh Gupta;Samyak Barnwal
    • International Journal of Computer Science & Network Security / v.24 no.6 / pp.161-170 / 2024
  • With recent technological advances, humans have greatly eased everyday living, incorporating sight, motion, sound, and speech into various applications and software controls. This paper explores a project in which gestures play the central role: controlling computer settings with hand gestures using computer vision. We create a module that acts as a volume-control program, using hand gestures to control the system volume; it is implemented with OpenCV and Python. The module uses the computer's web camera to capture images or video, processes them to extract the needed information, and, based on that input, raises or lowers the computer's volume. The only setup required is a web camera to capture the user's input. The program performs gesture recognition with OpenCV, Python, and their libraries, identifies the specified hand gestures, and applies the corresponding changes to the device settings. The objective is to adjust the volume of a computer without physical interaction through a mouse or keyboard. OpenCV, a widely used tool for image processing and computer vision in this domain, enjoys extensive popularity: its community numbers over 47,000 members, and as of a 2020 survey its downloads were estimated at over 18 million.
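
A minimal sketch of the capture-and-recognize loop the abstract describes follows. Two caveats: it uses MediaPipe for the hand landmarks, a substitution since the paper names only OpenCV and Python, and it prints the computed volume level instead of calling an OS mixer API; the pinch-distance-to-volume constants are assumptions.

```python
# Webcam pinch-to-volume sketch: thumb-index distance sets a 0-100 level.
import math

import cv2
import mediapipe as mp

cap = cv2.VideoCapture(0)
with mp.solutions.hands.Hands(max_num_hands=1,
                              min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            lm = results.multi_hand_landmarks[0].landmark
            thumb, index = lm[4], lm[8]          # thumb tip, index fingertip
            dist = math.hypot(thumb.x - index.x, thumb.y - index.y)
            # Map the normalized pinch distance to a 0-100 volume level
            # (0.02 and 0.25 are assumed min/max pinch distances).
            volume = max(0, min(100, int((dist - 0.02) / 0.25 * 100)))
            print("volume:", volume)   # a real app would set the OS mixer here
        cv2.imshow("camera", frame)
        if cv2.waitKey(1) & 0xFF == 27:          # Esc quits
            break
cap.release()
cv2.destroyAllWindows()
```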

HAND GESTURE INTERFACE FOR WEARABLE PC

  • Nishihara, Isao;Nakano, Shizuo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.664-667 / 2009
  • There is strong demand for wearable PC systems that can support the user outdoors. When we are outdoors, our movement makes it impossible to use traditional input devices such as keyboards and mice. We propose a hand gesture interface based on image processing to operate wearable PCs. A semi-transparent PC screen is displayed on the head-mounted display (HMD), and the user makes hand gestures to select icons on the screen. The user's hand is extracted from the images captured by a color camera mounted above the HMD. Since skin color can vary widely due to outdoor lighting effects, a key problem is accurately discriminating the hand from the background. The proposed method does not assume any fixed skin color space. First, the image is divided into blocks, and blocks with similar average color are linked. Contiguous regions are then subjected to hand recognition, and blocks on the edges of the hand region are subdivided for more accurate finger discrimination. A change in hand shape is recognized as hand movement; our current input interface associates a hand grasp with a mouse click. Tests on a prototype system confirm that the proposed method recognizes hand gestures accurately at high speed. We intend to develop a wider range of recognizable gestures.
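
The block-linking step can be sketched as follows; the block size, the color-distance threshold, and 4-adjacency are assumptions, since the abstract does not specify them. Linked blocks form the contiguous candidate regions that would then go to hand recognition.

```python
# Link adjacent image blocks with similar average color into regions.
import numpy as np

def link_similar_blocks(img, block=16, max_diff=25.0):
    """Label connected groups of similar-mean-color blocks (flood fill)."""
    h, w, _ = img.shape
    bh, bw = h // block, w // block
    means = img[:bh * block, :bw * block].reshape(bh, block, bw, block, 3) \
               .mean(axis=(1, 3))                    # per-block mean color
    labels = -np.ones((bh, bw), dtype=int)
    next_label = 0
    for sy in range(bh):
        for sx in range(bw):
            if labels[sy, sx] != -1:
                continue
            stack = [(sy, sx)]                       # start a new region here
            labels[sy, sx] = next_label
            while stack:
                y, x = stack.pop()
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if 0 <= ny < bh and 0 <= nx < bw and labels[ny, nx] == -1 \
                       and np.linalg.norm(means[ny, nx] - means[y, x]) < max_diff:
                        labels[ny, nx] = next_label  # link similar neighbor
                        stack.append((ny, nx))
            next_label += 1
    return labels

frame = (np.random.rand(240, 320, 3) * 255).astype(np.float32)  # stand-in frame
print("regions found:", link_similar_blocks(frame).max() + 1)
```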
