• Title/Summary/Keyword: mouse gesture

Hand Gesture Classification Using Multiple Doppler Radar and Machine Learning (다중 도플러 레이다와 머신러닝을 이용한 손동작 인식)

  • Baik, Kyung-Jin;Jang, Byung-Jun
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.28 no.1
    • /
    • pp.33-41
    • /
    • 2017
  • This paper proposes a hand gesture recognition technology for controlling smart devices using multiple Doppler radars and a support vector machine (SVM), one of the machine learning algorithms. Whereas a single Doppler radar can recognize only simple hand gestures, multiple Doppler radars can recognize varied and complex hand gestures by exploiting the Doppler patterns of each device as a function of time. In addition, machine learning can enhance recognition accuracy. To determine the feasibility of the suggested technology, we implemented a test-bed using two Doppler radars, an NI DAQ USB-6008, and MATLAB. Using this test-bed, we can successfully classify four hand gestures: Push, Pull, Right Slide, and Left Slide. Applying the SVM machine learning algorithm, the high accuracy of the hand gesture recognition was confirmed.
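
As a rough sketch of this kind of pipeline, the snippet below trains an SVM on summary features of two Doppler channels using scikit-learn and synthetic data; the feature set, window length, and data are assumptions for illustration, not the paper's MATLAB/NI-DAQ implementation.

```python
# Illustrative sketch: classifying four hand gestures from two Doppler
# radar channels with an SVM. Features and data are synthetic stand-ins.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

GESTURES = ["push", "pull", "right_slide", "left_slide"]

def extract_features(ch1, ch2):
    """Per-channel summary statistics of the Doppler time series
    (mean, std, dominant FFT bin) -- an assumed feature set."""
    feats = []
    for ch in (ch1, ch2):
        spectrum = np.abs(np.fft.rfft(ch))
        feats += [ch.mean(), ch.std(), spectrum.argmax()]
    return feats

# Synthetic stand-in data: 50 windows of 256 samples per gesture class.
rng = np.random.default_rng(0)
X, y = [], []
for label in range(4):
    for _ in range(50):
        t = np.arange(256)
        ch1 = np.sin(2 * np.pi * (0.02 + 0.01 * label) * t) + 0.3 * rng.standard_normal(256)
        ch2 = np.sin(2 * np.pi * (0.05 - 0.01 * label) * t) + 0.3 * rng.standard_normal(256)
        X.append(extract_features(ch1, ch2))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(np.array(X), np.array(y), random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```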

Volume Control using Gesture Recognition System

  • Shreyansh Gupta;Samyak Barnwal
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.6
    • /
    • pp.161-170
    • /
    • 2024
  • With technological advances, humans have made great progress in ease of living, now incorporating sight, motion, sound, and speech into various applications and software controls. In this paper, we explore a project in which gestures play a very significant role. Gesture control has been researched extensively and is still evolving every day. This project uses computer vision: its main objective is controlling computer settings with hand gestures. We create a module that acts as a volume-controlling program, in which hand gestures control the computer system volume, using OpenCV. The module uses the web camera of the computer to record images or videos, processes them to find the needed information, and then, based on the input, performs the action on the volume settings of that computer. The program can increase and decrease the volume of the computer. The setup needed for execution is a web camera to record the input images and videos given by the user. The program performs gesture recognition with the help of OpenCV, Python, and its libraries; it recognizes the specified human gestures and uses them to carry out changes in the device settings. The objective is to adjust the volume of a computer without physical interaction through a mouse or keyboard. OpenCV, a widely utilized tool for image processing and computer vision applications in this domain, enjoys extensive popularity: its community consists of over 47,000 individuals, and as of a survey conducted in 2020, the estimated number of downloads exceeded 18 million.
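
A minimal sketch of gesture-driven volume control along these lines: the distance between two fingertip landmarks is mapped to a volume level. MediaPipe Hands is an assumed landmark detector (the paper names only OpenCV and Python), the distance-to-volume scaling is illustrative, and the actual OS volume call is stubbed out since it is platform-specific.

```python
# Sketch: thumb-to-index fingertip distance drives a 0-100 volume level.
import math
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)

def set_system_volume(percent):
    # Placeholder: replace with a platform call (e.g. amixer or pycaw).
    print(f"volume -> {percent:3.0f}%")

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        lm = result.multi_hand_landmarks[0].landmark
        thumb, index = lm[4], lm[8]          # fingertip landmarks
        dist = math.hypot(thumb.x - index.x, thumb.y - index.y)
        set_system_volume(min(dist / 0.3, 1.0) * 100)  # assumed scaling
    cv2.imshow("volume gesture", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```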

Hand Interface using Intelligent Recognition for Control of Mouse Pointer (마우스 포인터 제어를 위해 지능형 인식을 이용한 핸드 인터페이스)

  • Park, Il-Cheol;Kim, Kyung-Hun;Kwon, Goo-Rak
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.5
    • /
    • pp.1060-1065
    • /
    • 2011
  • In this paper, the proposed method recognizes the hands using color information from the camera's input image and controls the mouse pointer with the recognized hands. In addition, specific commands are designed to be performed with the mouse pointer. Most users feel uncomfortable with existing interactive multimedia systems because they depend on particular external input devices such as pens and mice. The proposed method compensates for these shortcomings by using the hand alone, without external input devices. In the experimental method, hand areas and backgrounds are separated using color information from the camera image, and the coordinates of the mouse pointer are determined from the coordinates of the center of the separated hand. The mouse pointer is placed in a pre-filled area using these coordinates, and the robot moves and executes the corresponding command. Experimental results show that the recognition of the proposed method is more accurate but is still sensitive to changes in the color of light.
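
A minimal sketch of this color-based pointer control, assuming an HSV skin range and a fixed screen size (both illustrative values, not the paper's):

```python
# Sketch: segment skin-like pixels, take the centroid of the largest
# contour, and scale it to screen coordinates for the mouse pointer.
import cv2
import numpy as np

SCREEN_W, SCREEN_H = 1920, 1080
LOWER_SKIN = np.array([0, 40, 60], np.uint8)    # assumed HSV range
UPPER_SKIN = np.array([25, 255, 255], np.uint8)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = cv2.inRange(cv2.cvtColor(frame, cv2.COLOR_BGR2HSV), LOWER_SKIN, UPPER_SKIN)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)
        m = cv2.moments(hand)
        if m["m00"] > 0:
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            h, w = frame.shape[:2]
            # Map the hand centroid to pointer coordinates.
            px, py = cx / w * SCREEN_W, cy / h * SCREEN_H
            print(f"pointer -> ({px:.0f}, {py:.0f})")
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```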

HAND GESTURE INTERFACE FOR WEARABLE PC

  • Nishihara, Isao;Nakano, Shizuo
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.664-667
    • /
    • 2009
  • There is strong demand for wearable PC systems that can support the user outdoors. When we are outdoors, our movement makes it impossible to use traditional input devices such as keyboards and mice. We propose a hand gesture interface based on image processing to operate wearable PCs. A semi-transparent PC screen is displayed on a head-mounted display (HMD), and the user makes hand gestures to select icons on the screen. The user's hand is extracted from images captured by a color camera mounted above the HMD. Since skin color can vary widely due to outdoor lighting effects, a key problem is accurately discriminating the hand from the background. The proposed method does not assume any fixed skin color space. First, the image is divided into blocks, and blocks with similar average color are linked. Contiguous regions are then subjected to hand recognition. Blocks on the edges of the hand region are subdivided for more accurate finger discrimination. A change in hand shape is recognized as hand movement. Our current input interface associates a hand grasp with a mouse click. Tests on a prototype system confirm that the proposed method recognizes hand gestures accurately at high speed. We intend to develop a wider range of recognizable gestures.
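
The block-linking step can be sketched as follows; the block size and color-similarity threshold are assumed values, and the paper's subsequent hand-recognition and edge-subdivision stages are not reproduced.

```python
# Sketch: divide the frame into fixed-size blocks, average each block's
# color, and flood-fill neighboring blocks whose averages are close,
# without assuming any fixed skin-color range.
import numpy as np

def link_similar_blocks(image, block=16, threshold=20.0):
    """Return an integer label map over blocks; blocks sharing a label
    form one contiguous similar-color region (4-connected)."""
    h, w = image.shape[:2]
    bh, bw = h // block, w // block
    means = image[: bh * block, : bw * block].reshape(bh, block, bw, block, 3).mean(axis=(1, 3))
    labels = np.zeros((bh, bw), dtype=int)
    next_label = 0
    for i in range(bh):
        for j in range(bw):
            if labels[i, j]:
                continue
            next_label += 1
            labels[i, j] = next_label
            stack = [(i, j)]
            while stack:
                y, x = stack.pop()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < bh and 0 <= nx < bw and not labels[ny, nx] \
                            and np.linalg.norm(means[ny, nx] - means[y, x]) < threshold:
                        labels[ny, nx] = next_label
                        stack.append((ny, nx))
    return labels

# Example on a synthetic frame; a real frame would come from the camera.
frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
regions = link_similar_blocks(frame)
print("blocks in largest region:", np.bincount(regions.ravel())[1:].max())
```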

Implementation of DID interface using gesture recognition (제스쳐 인식을 이용한 DID 인터페이스 구현)

  • Lee, Sang-Hun;Kim, Dae-Jin;Choi, Hong-Sub
    • Journal of Digital Contents Society
    • /
    • v.13 no.3
    • /
    • pp.343-352
    • /
    • 2012
  • In this paper, we implemented a touchless interface for a DID (Digital Information Display) system using gesture recognition techniques that include both hand motion and hand shape recognition. This touchless interface, requiring no extra attachments, gives the user both easier usage and spatial convenience. For hand motion recognition, two hand-motion parameters, slope and velocity, were measured in a direction-based recognition scheme. For hand shape recognition, the hand area image was extracted using the YCbCr color model and several image processing methods. These recognition methods are combined to generate various commands, such as next-page, previous-page, screen-up, screen-down, and mouse-click, in order to control the DID system. Finally, experimental results showed a command recognition rate of 93%, which is high enough to confirm possible application to commercial products.
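
A sketch of the direction-based motion recognition, where the slope and velocity of the hand trajectory decide the command; the window length, frame rate, and speed threshold are assumptions.

```python
# Sketch: classify a short hand-centroid trajectory into a DID command.
import math

def classify_motion(points, dt=1 / 30, min_speed=200.0):
    """points: [(x, y), ...] hand centroids in pixels, oldest first.
    Returns a DID-style command, or None if the hand is roughly still."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    speed = math.hypot(dx, dy) / (dt * (len(points) - 1))  # pixels/second
    if speed < min_speed:
        return None
    # Slope decides horizontal vs. vertical; sign decides the direction.
    if abs(dx) >= abs(dy):
        return "next-page" if dx > 0 else "previous-page"
    return "screen-down" if dy > 0 else "screen-up"

# A quick right-swipe: 10 frames moving +40 px/frame in x.
trajectory = [(100 + 40 * i, 240) for i in range(10)]
print(classify_motion(trajectory))  # -> next-page
```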

NUI/NUX of the Virtual Monitor Concept using the Concentration Indicator and the User's Physical Features (사용자의 신체적 특징과 뇌파 집중 지수를 이용한 가상 모니터 개념의 NUI/NUX)

  • Jeon, Chang-hyun;Ahn, So-young;Shin, Dong-il;Shin, Dong-kyoo
    • Journal of Internet Computing and Services
    • /
    • v.16 no.6
    • /
    • pp.11-21
    • /
    • 2015
  • As interest in Human-Computer Interaction (HCI) grows, research on HCI has been actively conducted, along with research on Natural User Interface/Natural User eXperience (NUI/NUX), which uses a user's gestures and voice. NUI/NUX needs recognition algorithms such as gesture recognition or voice recognition, but these algorithms have a weakness: their implementation is complex and much training time is needed, because they must go through steps including preprocessing, normalization, and feature extraction. Recently, Kinect was launched by Microsoft as an NUI/NUX development tool that attracts people's attention, and studies using Kinect have been conducted. The authors of this paper implemented a hand-mouse interface with outstanding intuitiveness using the physical features of a user in a previous study. However, it had weaknesses such as unnatural mouse movement and low accuracy of the mouse functions. In this study, we designed and implemented a hand-mouse interface that introduces a new concept called the 'virtual monitor', extracting the user's physical features through Kinect in real time. The virtual monitor is a virtual space that can be controlled by the hand mouse; a coordinate on the virtual monitor is accurately mapped onto the corresponding coordinate on the real monitor. The hand-mouse interface based on the virtual monitor concept maintains the outstanding intuitiveness of the previous study and enhances the accuracy of the mouse functions. Further, we increased the accuracy of the interface by recognizing the user's unnecessary actions through a concentration indicator derived from electroencephalogram (EEG) data. To evaluate intuitiveness and accuracy, we tested the interface on 50 people in their 10s to 50s. In the intuitiveness experiment, 84% of subjects learned how to use it within 1 minute; in the accuracy experiment, the mouse functions showed accuracies of drag 80.4%, click 80%, and double-click 76.7%. With its intuitiveness and accuracy verified through experiment, this interface is expected to be a good example for controlling systems by hand in the future.
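
The virtual-monitor mapping can be illustrated as a linear map from an assumed rectangle in front of the user onto the real screen; the rectangle bounds below are illustrative, not the paper's Kinect calibration.

```python
# Sketch: a 'virtual monitor' rectangle in front of the user is mapped
# linearly onto the real monitor, so hand positions inside it become
# pointer coordinates.
def make_virtual_monitor(vx0, vy0, vx1, vy1, screen_w, screen_h):
    """Return a function mapping a hand point (x, y) on the virtual plane
    to pixel coordinates, clamped to the real monitor."""
    def to_screen(x, y):
        u = (x - vx0) / (vx1 - vx0)          # normalized position 0..1
        v = (y - vy0) / (vy1 - vy0)
        px = min(max(u, 0.0), 1.0) * (screen_w - 1)
        py = min(max(v, 0.0), 1.0) * (screen_h - 1)
        return round(px), round(py)
    return to_screen

# Assumed plane 0.6 m wide and 0.34 m tall, centered in front of the user.
mapper = make_virtual_monitor(-0.3, -0.17, 0.3, 0.17, 1920, 1080)
print(mapper(0.0, 0.0))   # center of the plane -> center of the screen
print(mapper(0.3, 0.17))  # bottom-right corner -> (1919, 1079)
```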

A Study on the Implementation of Physical Interfaces (피지컬 인터페이스의 구현에 관한 연구)

  • 오병근
    • Archives of design research
    • /
    • v.16 no.2
    • /
    • pp.131-140
    • /
    • 2003
  • The input for computer interaction design is very limited when users can control the interface only with a keyboard and mouse. However, using basic electrical engineering, the input design can differ from the existing method. Interactive art using computer technology has recently emerged, completed by people's participation. The electric signals transmitted in digital and analog form from a people-controlled interface to the computer can be used in multimedia interaction design. Electric circuit design is necessary to transmit stable electric signals from the interface. Electric switch, sensor, and camera technologies can be applied to input interface design as alternative physical interaction without a computer keyboard and mouse. This type of interaction design, using the human body's language and gesture, would convey the richness of humanity.
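
As one hedged illustration of such a keyboard-and-mouse-free input path, the sketch below reads a sensor value streamed by a microcontroller over a serial link and turns threshold crossings into events; the port name, baud rate, line protocol, and threshold are all assumptions.

```python
# Sketch: a physical sensor wired into interaction software via pyserial.
import serial  # pyserial

PORT, BAUD = "/dev/ttyUSB0", 9600  # assumed serial device

with serial.Serial(PORT, BAUD, timeout=1) as link:
    while True:
        line = link.readline().decode("ascii", errors="ignore").strip()
        if not line.isdigit():
            continue
        reading = int(line)          # e.g. a 0-1023 analog sensor value
        if reading > 800:            # assumed trigger threshold
            print("sensor event: trigger")  # stand-in for an input action
```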

NUI/NUX framework based on intuitive hand motion (직관적인 핸드 모션에 기반한 NUI/NUX 프레임워크)

  • Lee, Gwanghyung;Shin, Dongkyoo;Shin, Dongil
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.11-19
    • /
    • 2014
  • The natural user interface/experience (NUI/NUX) is a natural motion interface that does not use devices or tools such as mice, keyboards, pens, or markers. Up to now, typical motion recognition methods have used markers, receiving the coordinate input values of each marker as relative data and storing each coordinate value in a database. But to recognize motion accurately, more markers are needed, and much time is spent attaching markers and processing the data. Also, because NUI/NUX frameworks have been developed without the most important quality, intuitiveness, usage problems arise and users are forced to learn many NUI/NUX framework usages. To compensate for this problem, in this paper we used no markers and implemented the framework so that anyone can handle it. We also designed a multi-modal NUI/NUX framework controlling voice, body motion, and facial expression simultaneously, and propose a new mouse-operation algorithm that recognizes intuitive hand gestures and maps them onto the monitor. We implemented it so that the user can handle the "hand mouse" operation easily and intuitively.
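
A sketch of a marker-free hand mouse in this spirit: tracked hand positions are smoothed before being mapped onto the monitor, and a grasp gesture fires a click. The tracker itself is abstracted away, and the smoothing factor and grasp signal are illustrative assumptions, not the paper's algorithm.

```python
# Sketch: smoothed hand-to-monitor mapping with a grasp-to-click gesture.
class HandMouse:
    def __init__(self, screen_w, screen_h, alpha=0.3):
        self.screen = (screen_w, screen_h)
        self.alpha = alpha          # smoothing: lower = steadier cursor
        self.pos = None
        self.was_grasping = False

    def update(self, norm_x, norm_y, grasping):
        """norm_x/norm_y in 0..1 from any hand tracker; grasping is a bool."""
        target = (norm_x * self.screen[0], norm_y * self.screen[1])
        if self.pos is None:
            self.pos = target
        else:  # exponential moving average damps tracker jitter
            self.pos = tuple(self.alpha * t + (1 - self.alpha) * p
                             for p, t in zip(self.pos, target))
        click = grasping and not self.was_grasping  # click on grasp onset
        self.was_grasping = grasping
        return (round(self.pos[0]), round(self.pos[1])), click

mouse = HandMouse(1920, 1080)
print(mouse.update(0.5, 0.5, grasping=False))  # ((960, 540), False)
print(mouse.update(0.6, 0.5, grasping=True))   # smoothed move plus a click
```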

Human Head Mouse System Based on Facial Gesture Recognition

  • Wei, Li;Lee, Eung-Joo
    • Journal of Korea Multimedia Society
    • /
    • v.10 no.12
    • /
    • pp.1591-1600
    • /
    • 2007
  • Camera position information from a 2D face image is very important for making a virtual 3D face model synchronize with the real face at the view point, and it is also very important for other uses such as human-computer interfaces (face mouse) and automatic camera control. We present an algorithm to detect the human face region and mouth based on the special color features of the face and mouth in the YCbCr color space. The algorithm constructs a mouth feature image based on Cb and Cr values and uses a pattern method to detect the mouth position. We then use the geometrical relationship between the mouth position information and the face side boundary information to determine the camera position. Experimental results demonstrate the validity of the proposed algorithm, with a correct determination rate high enough for applying it in practice.
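
The Cb/Cr mouth cue can be sketched as below; the score formula and threshold are assumptions standing in for the paper's actual mouth feature image.

```python
# Sketch: lips tend to have high Cr and relatively low Cb, so a Cr/Cb
# score highlights the mouth inside a detected face region.
import cv2
import numpy as np

def mouth_mask(face_bgr):
    ycrcb = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2YCrCb)
    cr = ycrcb[:, :, 1].astype(np.float32)   # note OpenCV's Y-Cr-Cb order
    cb = ycrcb[:, :, 2].astype(np.float32)
    score = cr * cr / (cb + 1.0)             # emphasizes red (lip) pixels
    score = cv2.normalize(score, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(score, 200, 255, cv2.THRESH_BINARY)
    return mask

# Usage: run on a face crop from any face detector, then take the largest
# blob in the lower half of the mask as the mouth position.
face = np.zeros((120, 100, 3), dtype=np.uint8)  # stand-in for a face crop
print(mouth_mask(face).shape)
```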

Developing User-friendly Hand Mouse Interface via Gesture Recognition (손 동작 인식을 통한 사용자에게 편리한 핸드마우스 인터페이스 구현)

  • Kang, Sung-Won;Kim, Chul-Joong;Sohn, Won
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.11a
    • /
    • pp.129-132
    • /
    • 2009
  • With the miniaturization of computers, the need for computer interfacing methods that are portable and free from spatial constraints is increasing, and research on gesture-based control for human-computer interaction (HCI) is being actively conducted. Existing hand-gesture interface implementations required prior training in their usage to control the computer. This paper proposes a simple interface implementation that requires no prior training, using only the user's hand shape and fingertip information. For this, one webcam and Intel's open-source image processing library OpenCV were used. The hand region is tracked in real time and binarized through image processing based on difference images and pixel values. A center moment is set so that its value does not change with finger movement, and mouse cursor movement is handled relatively. Depending on the situation, the fingertip is used as an absolute coordinate so that movement connects naturally when the hand moves out of the webcam's view. Finally, a more user-friendly hand-mouse interface was implemented by performing mouse click events with only the movement of the index finger.
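
A sketch of the difference-image tracking step described above; the binarization threshold is an assumption, and the paper's fingertip and index-finger click logic is not reproduced.

```python
# Sketch: consecutive frames are differenced and binarized to isolate the
# moving hand, and its center moment drives a relative cursor position.
import cv2

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)
    _, binary = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)  # assumed threshold
    m = cv2.moments(binary)
    if m["m00"] > 0:  # center moment of the moving region
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        print(f"hand center: ({cx:.0f}, {cy:.0f})")
    prev_gray = gray
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```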
