• Title/Summary/Keyword: Gesture-Based User Interface

User Interface Design Platform based on Usage Log Analysis (사용성 로그 분석 기반의 사용자 인터페이스 설계 플랫폼)

  • Kim, Ahyoung; Lee, Junwoo; Kim, Mucheol
    • The Journal of Society for e-Business Studies / v.21 no.4 / pp.151-159 / 2016
  • The user interface is an important factor in providing efficient services to application users. In particular, mobile applications that can be executed anytime and anywhere place a higher priority on usability than applications in other domains. Previous studies have used prototype and storyboard methods to improve the usability of applications. However, this approach has limitations in continuously identifying and improving the usability problems of a particular application. Therefore, in this paper, we propose a usability analysis method using touch gesture data. It can continuously identify and improve UI/UX problems of an application by inferring users' intentions after the application has been distributed.
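
The paper itself does not publish an implementation, but the kind of post-deployment touch gesture logging it describes can be sketched roughly as follows. The `TouchEvent` fields and `UsageLog` aggregation here are illustrative assumptions, not the authors' platform.

```python
from dataclasses import dataclass, field
from collections import Counter
import time

@dataclass
class TouchEvent:
    screen: str      # logical screen / view identifier (assumed naming)
    gesture: str     # e.g. "tap", "swipe", "pinch", "long_press"
    x: float
    y: float
    timestamp: float = field(default_factory=time.time)

class UsageLog:
    """Accumulates touch gestures per screen so repeated failure
    patterns (e.g. many taps on a non-interactive region) surface."""
    def __init__(self):
        self.events: list[TouchEvent] = []

    def record(self, event: TouchEvent) -> None:
        self.events.append(event)

    def gesture_histogram(self, screen: str) -> Counter:
        return Counter(e.gesture for e in self.events if e.screen == screen)

log = UsageLog()
log.record(TouchEvent("checkout", "tap", 120.0, 640.0))
log.record(TouchEvent("checkout", "tap", 122.0, 641.0))
print(log.gesture_histogram("checkout"))   # Counter({'tap': 2})
```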

A New Eye Tracking Method as a Smartphone Interface

  • Lee, Eui Chul; Park, Min Woo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.4 / pp.834-848 / 2013
  • To effectively use these functions, many kinds of human-phone interfaces are used, such as touch, voice, and gesture. However, the most important of these, the touch interface, cannot be used by people with hand disabilities or when both hands are busy. Although eye tracking is a superb human-computer interface method, it has not been applied to smartphones because of the small screen size, the frequently changing geometric position between the user's face and the phone screen, and the low resolution of frontal cameras. In this paper, a new eye tracking method is proposed to act as a smartphone user interface. To maximize eye image resolution, a zoom lens and three infrared LEDs are adopted. Our proposed method has the following novelties. Firstly, appropriate camera specifications and image resolution are analyzed for a smartphone-based gaze tracking method. Secondly, facial movement is allowed as long as one eye region is included in the image. Thirdly, the proposed method can be operated in both landscape and portrait screen modes. Fourthly, only two LED reflection positions are used to calculate the gaze position, based on the 2D geometric relation between the reflection rectangle and the screen. Fifthly, a prototype mock-up design module is made to confirm the feasibility of applying the method to an actual smartphone. Experimental results showed that the gaze estimation error was about 31 pixels at a screen resolution of 480×800, and the average hit ratio on a 5×4 icon grid was 94.6%.
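
The paper computes gaze from two LED reflection positions via a 2D geometric relation; the sketch below shows one simplified version of that idea, mapping the pupil center expressed in the glint coordinate frame onto screen pixels. The gains and biases are hypothetical placeholders for a calibration step, not the authors' actual geometry.

```python
def estimate_gaze(pupil, glint_a, glint_b, screen_w=480, screen_h=800):
    """Simplified sketch of glint-based gaze mapping (not the paper's
    exact geometry): express the pupil center in the coordinate frame
    of the two corneal glints, then scale into screen pixels."""
    spacing = max(glint_b[0] - glint_a[0], 1e-6)       # glint separation, pixels
    u = (pupil[0] - glint_a[0]) / spacing              # normalized horizontal offset
    v = (pupil[1] - glint_a[1]) / spacing              # normalized vertical offset
    gain_x, gain_y, bias_x, bias_y = 1.0, 1.0, 0.0, 0.5   # assumed calibration values
    x = (gain_x * u + bias_x) * screen_w
    y = (gain_y * v + bias_y) * screen_h
    # Clamp the estimate to the visible screen.
    return (min(max(x, 0), screen_w), min(max(y, 0), screen_h))
```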

Accelerometer-based Gesture Recognition for Robot Interface (로봇 인터페이스 활용을 위한 가속도 센서 기반 제스처 인식)

  • Jang, Min-Su; Cho, Yong-Suk; Kim, Jae-Hong; Sohn, Joo-Chan
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.53-69 / 2011
  • Vision and voice-based technologies are commonly utilized for human-robot interaction. But it is widely recognized that the performance of vision and voice-based interaction systems deteriorates by a large margin in real-world situations due to environmental and user variances. Human users need to be very cooperative to get reasonable performance, which significantly limits the usability of vision and voice-based human-robot interaction technologies. As a result, touch screens are still the major medium of human-robot interaction for real-world applications. To improve the usability of robots for various services, alternative interaction technologies should be developed to complement the problems of vision and voice-based technologies. In this paper, we propose the use of an accelerometer-based gesture interface as one such alternative, because accelerometers are effective in detecting the movements of the human body, while their performance is not limited by environmental contexts such as lighting conditions or a camera's field of view. Moreover, accelerometers are widely available nowadays in many mobile devices. We tackle the problem of classifying the acceleration signal patterns of the 26 English alphabet letters, which is one of the essential repertoires for realizing robot-based education services. Recognizing 26 English handwriting patterns from accelerometers is a very difficult task to take on because of the large number of pattern classes and the complexity of each pattern. The most difficult comparable problem previously undertaken was recognizing the acceleration signal patterns of 10 handwritten digits; most previous studies dealt with sets of 8~10 simple and easily distinguishable gestures useful for controlling home appliances, computer applications, robots, etc. Good features are essential for the success of pattern recognition. To promote discriminative power over the complex alphabet patterns, we extracted 'motion trajectories' from the input acceleration signal and used them as the main feature. Investigative experiments showed that classifiers based on trajectories performed 3%~5% better than those with raw features, e.g., the acceleration signal itself or statistical figures. To minimize the distortion of trajectories, we applied a simple but effective set of smoothing filters and band-pass filters. It is well known that acceleration patterns for the same gesture differ greatly between performers. To tackle this problem, online incremental learning is applied to make our system adaptive to each user's distinctive motion properties. Our system is based on instance-based learning (IBL), where each training sample is memorized as a reference pattern. Brute-force incremental learning in IBL continuously accumulates reference patterns, which is a problem because it not only slows down classification but also degrades recall performance. Regarding the latter phenomenon, we observed a tendency that as the number of reference patterns grows, some reference patterns contribute more to false positive classifications. Thus, we devised an algorithm for optimizing the reference pattern set based on the positive and negative contribution of each reference pattern. The algorithm is performed periodically to remove reference patterns that have a very low positive contribution or a high negative contribution.
    Experiments were performed on 6500 gesture patterns collected from 50 adults 30~50 years old. Each letter was performed 5 times per participant using a Nintendo Wii remote. The acceleration signal was sampled at 100 Hz on 3 axes. The mean recall rate over all letters was 95.48%. Some letters recorded a very low recall rate and exhibited a very high pairwise confusion rate. Major confusion pairs were D (88%) and P (74%), I (81%) and U (75%), and N (88%) and W (100%). Though W was recalled perfectly, it contributed much to the false positive classification of N. By comparison with major previous results from VTT (96% for 8 control gestures), CMU (97% for 10 control gestures), and Samsung Electronics (97% for 10 digits and a control gesture), we find that the performance of our system is superior considering the number of pattern classes and the complexity of the patterns. Using our gesture interaction system, we conducted 2 case studies of robot-based edutainment services. The services were implemented on various robot platforms and mobile devices including the iPhone. The participating children exhibited improved concentration and active reactions to the service with our gesture interface. To prove the effectiveness of our gesture interface, a test was taken by the children after experiencing an English teaching service. The test results showed that those who played with the gesture interface-based robot content scored 10% better than those taught conventionally. We conclude that the accelerometer-based gesture interface is a promising technology for flourishing real-world robot-based services and content by complementing the limits of today's conventional interfaces, e.g., touch screens, vision, and voice.
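
A rough sketch of the instance-based scheme described above: classify against memorized reference patterns, track each reference's positive and negative contribution online, and periodically prune. The distance metric and pruning thresholds are illustrative assumptions; the paper's trajectory matching is not reproduced here.

```python
import numpy as np

class GestureIBL:
    """Instance-based gesture classifier with reference-set pruning.
    Trajectories are assumed resampled to a common fixed length so
    that plain Euclidean distance applies."""
    def __init__(self):
        self.refs = []   # each entry: [trajectory, label, pos_count, neg_count]

    def add_reference(self, traj, label):
        self.refs.append([np.asarray(traj, dtype=float), label, 0, 0])

    def classify(self, traj, true_label=None):
        traj = np.asarray(traj, dtype=float)       # assumes at least one reference
        dists = [np.linalg.norm(r[0] - traj) for r in self.refs]
        best = int(np.argmin(dists))
        pred = self.refs[best][1]
        if true_label is not None:                 # online incremental feedback
            if pred == true_label:
                self.refs[best][2] += 1            # positive contribution
            else:
                self.refs[best][3] += 1            # negative (false positive) contribution
        return pred

    def prune(self, min_pos=1, max_neg=3):
        """Run periodically: drop references that rarely help or often hurt."""
        self.refs = [r for r in self.refs
                     if not (r[2] < min_pos and r[3] >= max_neg)]
```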

Performance Improvement of Facial Gesture-based User Interface Using MediaPipe Face Mesh (MediaPipe Face Mesh를 이용한 얼굴 제스처 기반의 사용자 인터페이스의 성능 개선)

  • Jinwang Mok; Noyoon Kwak
    • Journal of Internet of Things and Convergence / v.9 no.6 / pp.125-134 / 2023
  • The purpose of this paper is to propose a method for improving the performance of the facial gesture-based user interface from our previous research, which recognizes facial gestures from the 3D coordinates of seven landmarks selected from the MediaPipe Face Mesh model, generates the corresponding user events, and executes the corresponding commands. The proposed method applies adaptive moving average processing to the cursor positions, stabilizing the cursor by alleviating micro-tremor, and further improves performance by suppressing momentary open/close mismatches between the two eyes when both eyes are opened or closed simultaneously. A usability evaluation of the proposed facial gesture interface confirmed that the average recognition rate of facial gestures increased to 98.7%, compared with 95.8% in the previous research.
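
The adaptive moving average idea can be sketched as follows: small, tremor-like motion gets heavy smoothing while large deliberate motion passes through almost unfiltered. The blending curve and constants below are assumptions, not the paper's published formula.

```python
class AdaptiveCursorSmoother:
    """Adaptive moving average for a landmark-driven cursor: the
    smoothing factor grows with motion magnitude, so micro-tremor is
    damped without making deliberate moves feel laggy."""
    def __init__(self, slow=0.1, fast=0.8, threshold=15.0):
        self.slow, self.fast, self.threshold = slow, fast, threshold
        self.pos = None

    def update(self, x, y):
        if self.pos is None:
            self.pos = (x, y)
            return self.pos
        dx, dy = x - self.pos[0], y - self.pos[1]
        speed = (dx * dx + dy * dy) ** 0.5
        if speed > self.threshold:
            a = self.fast                         # deliberate motion: track closely
        else:                                     # tremor range: interpolate the factor
            a = self.slow + (self.fast - self.slow) * (speed / self.threshold)
        self.pos = (self.pos[0] + a * dx, self.pos[1] + a * dy)
        return self.pos
```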

Development of Multi Card Touch based Interactive Arcade Game System (멀티 카드 터치기반 인터랙티브 아케이드 게임 시스템 구현)

  • Lee, Dong-Hoon; Jo, Jae-Ik; Yun, Tae-Soo
    • Journal of Korea Entertainment Industry Association / v.5 no.2 / pp.87-95 / 2011
  • Recently, tangible game environments have become a prominent issue owing to the development of various interactive interfaces. In this paper, we propose a multi card touch based interactive arcade system that combines a marker recognition interface with a multi-touch interaction interface. In our system, each card's location and orientation are recognized through a DI-based recognition algorithm. In addition, the user's hand gestures are tracked to provide various interaction metaphors. The system offers the user higher engagement and a new experience. Therefore, our system can be used in tangible arcade game machines.
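
Assuming DI here refers to diffused illumination, as is standard in multi-touch tabletops, objects touching the surface appear as bright blobs to an infrared camera, so card location and orientation can be recovered by contour analysis. A rough OpenCV sketch under that assumption (thresholds illustrative):

```python
import cv2

def detect_cards(ir_frame, min_area=2000):
    """Sketch of DI-style card detection: threshold the infrared image,
    find blob contours, and read location/orientation from each blob's
    minimum-area rectangle."""
    _, binary = cv2.threshold(ir_frame, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    cards = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue                              # ignore fingertips and noise
        (cx, cy), (w, h), angle = cv2.minAreaRect(c)
        cards.append({"center": (cx, cy), "size": (w, h), "angle": angle})
    return cards
```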

User-Defined Hand Gestures for Small Cylindrical Displays (소형 원통형 디스플레이를 위한 사용자 정의 핸드 제스처)

  • Kim, Hyoyoung; Kim, Heesun; Lee, Dongeon; Park, Ji-hyung
    • The Journal of the Korea Contents Association / v.17 no.3 / pp.74-87 / 2017
  • This paper aims to elicit user-defined hand gestures for small cylindrical displays built with flexible displays, a form factor that has not yet emerged as a product. For this, we first defined the size and functions of a small cylindrical display and elicited the tasks for operating its functions. We then implemented an experimental environment similar to a real cylindrical display usage environment by developing both a virtual cylindrical display interface and a physical object for operating the virtual display. We showed the results of each task on the virtual cylindrical display to the participants so they could define the hand gestures they considered suitable for each task. We selected the representative gesture for each task by choosing the gesture of the largest group per task, and we also calculated agreement scores for each task. Finally, by analyzing the gestures and the participants' interview responses, we characterized the mental models the participants applied when defining the gestures.
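
Gesture elicitation studies typically compute agreement with the score of Wobbrock et al.: for each task, sum over groups of identical gesture proposals the squared fraction of participants proposing that gesture. Whether the paper uses this exact formula is an assumption, but the calculation looks like this:

```python
from collections import Counter

def agreement_score(proposals):
    """Agreement score for one task: proposals is the list of gesture
    labels, one per participant. Identical labels form a group; larger
    consensus groups push the score toward 1."""
    n = len(proposals)
    groups = Counter(proposals)
    return sum((size / n) ** 2 for size in groups.values())

# Example: 6 of 10 participants proposed "rotate", 3 "swipe", 1 "tap".
print(agreement_score(["rotate"] * 6 + ["swipe"] * 3 + ["tap"]))  # ≈ 0.46
```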

State-of-the-Art on Gesture Sensing Technology Based on Infrared Proximity Sensor (적외선 근접센서 기반 제스처 센싱기술 동향)

  • Suk, J.H.; Jeon, Y.D.; Lyuh, C.G.
    • Electronics and Telecommunications Trends / v.30 no.6 / pp.31-41 / 2015
  • People touch and use devices, and receive services from them, through a User Interface (UI). Because most input tools require the user's touch, it is difficult to convey the user's intent to a device in situations where touching it is impossible. This article introduces gesture sensing technology based on infrared proximity sensors, one of the technologies that enable user input without contact.
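
As a concrete illustration of contact-free input with infrared proximity sensing, the sketch below infers swipe direction from the order in which two proximity channels are tripped; the channel layout and threshold are assumptions, not taken from the survey.

```python
def detect_swipe(samples, threshold=0.5):
    """A hand passing over the sensor trips the left and right infrared
    channels in sequence; the ordering gives the swipe direction.
    samples is a time-ordered list of (left, right) readings."""
    t_left = t_right = None
    for t, (left, right) in enumerate(samples):
        if t_left is None and left > threshold:
            t_left = t
        if t_right is None and right > threshold:
            t_right = t
    if t_left is None or t_right is None or t_left == t_right:
        return None                               # no clear swipe detected
    return "left-to-right" if t_left < t_right else "right-to-left"

# Hand moving left to right: the left channel rises first.
print(detect_swipe([(0.1, 0.0), (0.8, 0.1), (0.9, 0.7), (0.2, 0.9)]))
```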

State-of-the-Art on Gesture Sensing Technology Based on Infrared Proximity Sensor (스마트폰 시장동향 - 적외선 근접센서 기반 제스처 센싱기술 동향)

  • Suk, J.H.; Jeon, J.D.; Lyuh, C.G.
    • The Optical Journal / s.161 / pp.58-73 / 2016
  • People touch and use devices, and receive services from them, through a User Interface (UI). Because most input tools require the user's touch, it is difficult to convey the user's intent to a device in situations where touching it is impossible. This article introduces gesture sensing technology based on infrared proximity sensors, one of the technologies that enable user input without contact.

A Deep Learning-based Hand Gesture Recognition Robust to External Environments (외부 환경에 강인한 딥러닝 기반 손 제스처 인식)

  • Oh, Dong-Han; Lee, Byeong-Hee; Kim, Tae-Young
    • The Journal of Korean Institute of Next Generation Computing / v.14 no.5 / pp.31-39 / 2018
  • Recently, there have been active studies on providing a user-friendly interface in virtual reality environments by recognizing user hand gestures with deep learning. However, most studies use separate sensors to obtain hand information or require pre-processing for efficient learning. They also fail to account for changes in the external environment, such as changes in lighting or the hand being partially obscured. This paper proposes a deep learning-based hand gesture recognition method that is robust to external environments, without pre-processing of RGB images obtained from a general webcam. We improve the VGGNet and GoogLeNet structures and compare the performance of each. The improved VGGNet and GoogLeNet structures showed recognition rates of 93.88% and 93.75%, respectively, on data containing dim, partially obscured, or partially out-of-sight hand images. In terms of memory and speed, GoogLeNet used about 3 times less memory than VGGNet, and its processing speed was 10 times better. The results of this paper can be processed in real time and used as a hand gesture interface in various areas such as games, education, and medical services in virtual reality environments.
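
The paper's modified VGGNet and GoogLeNet are not reproduced here, but a minimal CNN classifier over raw RGB frames of the same general shape can be sketched in PyTorch; the input size and class count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HandGestureCNN(nn.Module):
    """Minimal sketch of a CNN gesture classifier on raw RGB frames:
    stacked conv blocks followed by a small classification head."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling keeps the head small
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):                     # x: (batch, 3, H, W) RGB frames
        return self.classifier(self.features(x).flatten(1))

logits = HandGestureCNN()(torch.randn(1, 3, 224, 224))
print(logits.shape)   # torch.Size([1, 10])
```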

Hand Gesture based Manipulation of Meeting Data in Teleconference (핸드제스처를 이용한 원격미팅 자료 인터페이스)

  • Song, Je-Hoon; Choi, Ki-Ho; Kim, Jong-Won; Lee, Yong-Gu
    • Korean Journal of Computational Design and Engineering / v.12 no.2 / pp.126-136 / 2007
  • Teleconferences have been used in business sectors to reduce traveling costs. Traditionally, specialized telephones that enabled multiparty conversations were used. With the introduction of high speed networks, we now have high definition video that adds more realism to the presence of counterparts who could be thousands of miles away. This paper presents a new technology that adds even more realism by telecommunicating with hand gestures. This technology is part of a teleconference system named SMS (Smart Meeting Space). In SMS, a person can use hand gestures to manipulate meeting data that could be in the form of text, audio, video, or 3D shapes. For detecting hand gestures, a machine learning algorithm called SVM (Support Vector Machine) has been used. For the prototype system, a 3D interaction environment has been implemented with OpenGL, where a 3D human skull model can be grasped and moved in 6-DOF during a remote conversation between distant persons.
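
The paper names SVM as the gesture detector but does not reproduce its feature pipeline here; a minimal scikit-learn sketch with stand-in hand features (the 64-dimension vectors and gesture labels are assumptions) looks like this:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Each sample is a fixed-length feature vector extracted from a tracked
# hand; random data stands in for real features in this sketch.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 64))           # stand-in hand features
y_train = rng.integers(0, 3, size=200)         # e.g. grasp / release / move

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print(clf.predict(rng.normal(size=(1, 64))))   # predicted gesture id
```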