• Title/Summary/Keyword: natural user interface


CNN-Based Hand Gesture Recognition for Wearable Applications (웨어러블 응용을 위한 CNN 기반 손 제스처 인식)

  • Moon, Hyeon-Chul;Yang, Anna;Kim, Jae-Gon
    • Journal of Broadcast Engineering / v.23 no.2 / pp.246-252 / 2018
  • Hand gestures are attracting attention as an NUI (Natural User Interface) for wearable devices such as smart glasses. Recently, to support efficient media consumption in IoT (Internet of Things) and wearable environments, the standardization of IoMT (Internet of Media Things) has been in progress in MPEG. In IoMT, hand gesture detection and recognition are assumed to be performed on separate devices, so an interoperable interface between these modules is provided. Meanwhile, deep learning based hand gesture recognition techniques have recently been actively studied to improve recognition performance. In this paper, we propose a CNN (Convolutional Neural Network) based hand gesture recognition method for applications such as media consumption on wearable devices, which is one of the use cases of IoMT. The proposed method detects the hand contour from stereo images acquired by smart glasses using depth and color information, constructs data sets to train the CNN, and then recognizes gestures from input hand contour images. Experimental results show that the proposed method achieves an average hand gesture recognition rate of 95%.
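The depth-plus-color gating step described in the abstract can be sketched as follows. This is a minimal illustration on synthetic data: the `near`/`far` thresholds and the precomputed skin mask (standing in for the paper's color test) are made-up assumptions, not the authors' actual pipeline or CNN.

```python
import numpy as np

def segment_hand(depth, skin_mask, near=0.2, far=0.6):
    """Sketch of depth-gated hand segmentation: keep pixels whose depth
    falls inside a near-range band AND that pass a skin-color test.
    `near`/`far` (metres) are illustrative thresholds, not the paper's."""
    band = (depth > near) & (depth < far)
    return band & skin_mask

# Toy 5x5 depth map: the hand sits at ~0.4 m, background at 2 m.
depth = np.full((5, 5), 2.0)
depth[1:4, 1:4] = 0.4
skin = np.zeros((5, 5), dtype=bool)
skin[1:4, 1:4] = True
mask = segment_hand(depth, skin)
print(mask.sum())  # 9 pixels survive both tests
```

The resulting binary mask would then be contoured and fed to the classifier; that stage is omitted here.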

Hand Gesture Recognition using Multivariate Fuzzy Decision Tree and User Adaptation (다변량 퍼지 의사결정트리와 사용자 적응을 이용한 손동작 인식)

  • Jeon, Moon-Jin;Do, Jun-Hyeong;Lee, Sang-Wan;Park, Kwang-Hyun;Bien, Zeung-Nam
    • The Journal of Korea Robotics Society / v.3 no.2 / pp.81-90 / 2008
  • With the increasing demand for services for the disabled and the elderly, assistive technologies have been developed rapidly. Natural human signals such as voice and gesture have been applied to systems for assisting the disabled and the elderly. As an example of this kind of human-robot interface, the Soft Remote Control System has been developed by HWRS-ERC at KAIST [1]. This system is a vision-based hand gesture recognition system for controlling home appliances such as televisions, lamps, and curtains. One of the most important technologies of the system is the hand gesture recognition algorithm. The problems that frequently lower the recognition rate of hand gestures are inter-person variation and intra-person variation. Intra-person variation can be handled by introducing fuzzy concepts. In this paper, we propose a multivariate fuzzy decision tree (MFDT) learning and classification algorithm for hand motion recognition. To recognize the hand gestures of a new user, the most suitable recognition model among several well-trained models is selected using a model selection algorithm and is incrementally adapted to the user's hand gestures. For the general performance of the MFDT as a classifier, we report classification rates on benchmark data from the UCI repository. For hand gesture recognition performance, we tested on hand gesture data collected from 10 people over 15 days. The experimental results show that the classification and user adaptation performance of the proposed algorithm is better than that of a general fuzzy decision tree.
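The fuzzy idea behind a fuzzy decision tree can be illustrated with a toy example: unlike a crisp tree, a sample descends both branches of a split with graded membership, and the leaf class scores are blended. The membership functions, the split, and the leaf class distributions below are invented for illustration and are not the paper's learned MFDT.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_classify(x):
    """One fuzzy split on a single feature: the sample descends BOTH
    branches with degrees mu_low / mu_high, and the leaf class scores
    are blended rather than one hard path being taken."""
    mu_low = tri(x, -1.0, 0.0, 1.0)   # "low" fuzzy set (illustrative)
    mu_high = tri(x, 0.0, 1.0, 2.0)   # "high" fuzzy set (illustrative)
    leaf_low = {"fist": 0.9, "open": 0.1}   # made-up leaf distributions
    leaf_high = {"fist": 0.2, "open": 0.8}
    total = mu_low + mu_high or 1.0   # avoid division by zero
    return {c: (mu_low * leaf_low[c] + mu_high * leaf_high[c]) / total
            for c in leaf_low}

scores = fuzzy_classify(0.75)
print(max(scores, key=scores.get))  # "open" wins the blended vote
```

Intra-person variation is absorbed because a slightly shifted feature value changes the membership degrees smoothly instead of flipping a hard threshold.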


A Study on the Reliability of Voice Payment Interface (음성결제 인터페이스의 신뢰도에 관한 연구)

  • Gwon, Hyeon Jeong;Lee, Jee Yeon
    • Journal of the Korean Society for information Management / v.38 no.3 / pp.101-140 / 2021
  • As the payment service sector actively embraces artificial intelligence technology, "voice payments" are becoming a trend in contactless payment services. Voice payment services can execute payments faster and more intuitively through voice, the most natural means of communication for humans. In this study, we selected richness, intimacy, and autonomy as factors for building trust with artificial intelligence agents, and sought to determine whether trust would be formed when these factors were applied to voice payment services. The experimental results showed that the higher the richness and autonomy of the voice payment interface and the lower the intimacy, the higher the trust. In addition, the two-way interaction effect of richness and autonomy was significant. We analyzed and synthesized the collected short-answer responses to identify users' anxieties when using voice payment services, and proposed speech interface design ideas to increase trust in voice payments.

HUD Interface and VR content interaction: VR+HUD (HUD Interface와 VR 콘텐츠 인터렉션: VR+HUD)

  • Park, Keonhee;Chin, Seongah
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.8 no.3 / pp.925-932 / 2018
  • Virtual reality seems to be the center of the next-generation platform, founded on various engines that ease the development of devices and content. However, the interaction between virtual reality content and users is considered to still require technological advances. In this paper, we analyze the interaction problems of virtual reality content through the case of Virtual Figure Model Crafting (VFMC) and propose a technique to improve the interaction. We introduce the concept of the Head-Up Display (HUD) to present a more natural interaction method. A HUD is the digital visual interface of an aircraft. The advantage of the HUD visual interface is that it minimizes the user's visual movement by displaying otherwise scattered information in the pilot's forward view. In other words, unnecessary left-and-right movements can be reduced, which can be expected to reduce fatigue and increase immersion.

3D Pottery Modeling in Augmented Reality (증강현실 기반의 3차원 도자기 모델링 시스템)

  • Han, Gab-Jong;Hwang, Jane;Choi, Seung-Moon;Kim, Gerard Joung-Hyun
    • Journal of the HCI Society of Korea / v.2 no.2 / pp.19-26 / 2007
  • This paper presents an augmented reality based modeling system that can provide pottery design experiences to the user. Augmented reality offers natural 3D interaction, a tangible interface, and integration into the real environment. In addition, six modeling techniques that mimic hand movements in the real-world deformation process, together with an occlusion-based interaction technique, are provided for pottery modeling. The developed interface facilitates fast and intuitive pottery design. The AR pottery system can be used for pottery prototyping/design and for educational purposes.
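One way a hand-movement-style pottery deformation might look in code: pushing the clay inward at one height reduces the radius there and, with falloff, at neighbouring heights. The radius-profile representation and the falloff rule below are invented for illustration; the abstract does not specify the paper's six techniques.

```python
def push(profile, height, depth, spread=1):
    """Sketch of one pottery deformation on a surface-of-revolution
    model: `profile` is the pot's radius sampled at discrete heights;
    pushing at `height` dents it by `depth`, with 1/(d+1) falloff at
    nearby samples. The falloff rule is a made-up illustration."""
    out = list(profile)
    for i in range(len(out)):
        d = abs(i - height)
        if d <= spread:
            out[i] = max(0.0, out[i] - depth / (d + 1))
    return out

# Radius profile of a plain cylinder sampled at 5 heights.
pot = [1.0, 1.0, 1.0, 1.0, 1.0]
print(push(pot, height=2, depth=0.4))  # dented in the middle
```

Rendering the pot then amounts to revolving the deformed profile around the vertical axis, which is omitted here.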


Multi-sensor based NUI/NUX framework for various interactive applications (다양한 상호작용 어플리케이션을 위한 이종 센서 NUI/NUX 프레임워크)

  • Zhang, Weiqiang;Xi, Yulong;Wen, Mingyun;Cho, Seoungjae;Chae, Jeongsook;Kim, Junoh;Um, Kyhyun;Cho, Kungeun
    • Annual Conference of KIPS / 2017.04a / pp.1077-1078 / 2017
  • In this study, we implement a natural user interface/experience framework using multiple sensors: Microsoft Kinect, Leap Motion, and Myo Armband. The framework is designed for customers to use in various types of interactive applications. We integrate the functions of the three sensors into an application and provide an interface that customers can use to interact with a computer easily. The framework can track body information in real time and accurately recognize the motion of different body parts.

Development of Motion based Training Contents: "3D Space Exploration" Case Study (동작 기반의 훈련콘텐츠 : "3D 우주탐험" 개발사례)

  • Lim, C.J.;Park, Seung Goo;Jeong, Yun Guen
    • Journal of Korea Game Society / v.13 no.5 / pp.63-72 / 2013
  • To enhance the effect of science educational content, we developed a motion-based training content, "3D Space Exploration." In this content, we used a 3D depth camera for user motion recognition. Learners conduct a space station maintenance mission using a natural and intuitive motion-based interface. The result of this study is expected to offer an immersive training simulation for young science learners.

A Study on the Development Methodology for User-Friendly Interactive Chatbot (사용자 친화적인 대화형 챗봇 구축을 위한 개발방법론에 관한 연구)

  • Hyun, Young Geun;Lim, Jung Teak;Han, Jeong Hyeon;Chae, Uri;Lee, Gi-Hyun;Ko, Jin Deuk;Cho, Young Hee;Lee, Joo Yeoun
    • Journal of Digital Convergence / v.18 no.11 / pp.215-226 / 2020
  • Chatbots are emerging as an important interface for business. This change is due to the continued development of chatbot-related research from NLP to NLU and NLG. However, the reality is that methodological study of drawing out domain knowledge and developing it into a user-friendly interactive interface is weak in the chatbot development process. In this paper, in order to present process criteria for chatbot development, we applied the methodology presented in our previous paper to an actual project and improved the development methodology. In conclusion, the productivity of the test phase, the most important step, was improved by 33.3%, and the number of iterations was reduced to 37.5%. Based on these results, a "3 Phase and 17 Tasks Development Methodology" is presented, which is expected to dramatically reduce trial and error in chatbot development.

Real-Time Recognition Method of Counting Fingers for Natural User Interface

  • Lee, Doyeob;Shin, Dongkyoo;Shin, Dongil
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.5 / pp.2363-2374 / 2016
  • Communication occurs through verbal elements, which usually involve language, as well as non-verbal elements such as facial expressions, eye contact, and gestures. Among these non-verbal elements, gestures in particular are symbolic representations of physical, vocal, and emotional behaviors. This means that gestures can be signals toward a target or expressions of internal psychological processes, rather than simply movements of the body or hands. Gestures with such properties have been the focus of much research on new interfaces in the NUI/NUX field. In this paper, we propose a method for detecting the hand region and recognizing the number of raised fingers, based on depth information and the geometric features of the hand, for application to an NUI/NUX. The hand region is detected using depth information provided by the Kinect system, and the number of fingers is identified by comparing the distances between the contour and the center of the hand region. The contour is detected using the Suzuki85 algorithm, and fingertips are found at locations of maximum distance by comparing the distances of three consecutive contour points to the center point of the hand. The average recognition rate for the number of fingers is 98.6%, and the execution time of the algorithm is 0.065 ms. The method is fast and of low complexity, yet shows a higher recognition rate and faster recognition speed than other methods. As an application example, this paper describes a "Secret Door" that recognizes a password from the number of fingers held up by the user.
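The fingertip rule (a contour point counts as a tip when it is a local maximum of the distance to the hand center) can be sketched on a synthetic contour. The radial "finger bump" contour and the threshold below are fabricated stand-ins for the Kinect depth contour, not the paper's data or exact test.

```python
import math

def count_fingers(radii, thresh):
    """Count fingertips as local maxima of the contour-to-center
    distance signal that exceed `thresh`: each contour point is
    compared with its two neighbours (three consecutive points),
    echoing the rule described in the abstract."""
    n = len(radii)
    tips = 0
    for i in range(n):
        prev, cur, nxt = radii[i - 1], radii[i], radii[(i + 1) % n]
        if cur > prev and cur > nxt and cur > thresh:
            tips += 1
    return tips

# Synthetic closed contour: palm radius 1.0 with 3 sharp finger "bumps".
radii = []
for k in range(360):
    theta = math.radians(k)
    bump = 0.8 * max(0.0, math.sin(3 * theta)) ** 8  # 3 narrow lobes
    radii.append(1.0 + bump)
print(count_fingers(radii, thresh=1.4))  # 3
```

On real data the distance signal is noisy, so some smoothing or a minimum peak separation would be needed before this three-point test.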

The Modified Block Matching Algorithm for a Hand Tracking of an HCI system (HCI 시스템의 손 추적을 위한 수정 블록 정합 알고리즘)

  • Kim Jin-Ok
    • Journal of Internet Computing and Services / v.4 no.4 / pp.9-14 / 2003
  • The GUI (graphical user interface) has been the dominant platform for HCI (human-computer interaction). GUI-based interaction has made computers simpler and easier to use. GUI-based interaction, however, does not easily support the range of interaction necessary to meet users' needs for interfaces that are natural, intuitive, and adaptive. In this paper, a modified BMA (block matching algorithm) is proposed to track a hand in an image sequence and to recognize it in each video frame, in order to replace the mouse as a pointing device for virtual reality. An HCI system running at 30 frames per second is realized. The modified BMA estimates the position of the hand and performs segmentation using the orientation of motion and the color distribution of the hand region for real-time processing. The experimental results show that the modified BMA with the YCbCr (luminance Y, component blue, component red) color coordinate guarantees real-time processing and a good recognition rate. The hand tracking by the modified BMA can be applied to virtual reality, games, or HCI systems for the disabled.
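A generic (unmodified) SAD block matching step, of the kind this paper builds on, can be sketched as follows. The block size, search range, and toy frames are illustrative choices, not the paper's parameters, and the color/motion-orientation modifications are omitted.

```python
import numpy as np

def match_block(prev, cur, top, left, size=4, search=3):
    """Plain block matching: find the displacement (dy, dx) within
    +/-`search` pixels that minimizes the SAD (sum of absolute
    differences) between the block at (top, left) in `prev` and a
    candidate block in `cur`. A generic BMA, not the paper's variant."""
    block = prev[top:top + size, left:left + size].astype(int)
    best, best_dv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > cur.shape[0] or x + size > cur.shape[1]:
                continue  # candidate window falls outside the frame
            cand = cur[y:y + size, x:x + size].astype(int)
            sad = np.abs(block - cand).sum()
            if best is None or sad < best:
                best, best_dv = sad, (dy, dx)
    return best_dv

# Toy frames: a bright 4x4 patch moves down 2 pixels and right 1 pixel.
prev = np.zeros((16, 16), dtype=np.uint8)
prev[4:8, 4:8] = 200
cur = np.zeros((16, 16), dtype=np.uint8)
cur[6:10, 5:9] = 200
print(match_block(prev, cur, top=4, left=4))  # (2, 1)
```

Running this over a grid of blocks per frame pair yields a motion field; the paper's modification additionally exploits the hand's color distribution in YCbCr to keep the search real-time.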
