• Title/Summary/Keyword: natural user interface


User Goal and Plan Recognition Using a Plan Recognition System in Natural Language Dialogue (자연언어 대화 (NL Dialogue)에서 플랜 인지 시스템을 이용한 사용자의 목표 (Goal) 도출)

  • Kim, Do-Wan; Park, Jae-Deuk; Park, Dong-In
    • Annual Conference on Human and Language Technology / 1996.10a / pp.393-399 / 1996
  • In recognizing a user's exact intention in natural language dialogue, the difficulty lies not only in the incompleteness of highly elliptical dialogue sentences, but also in accurately identifying intentions that are distributed across several consecutive utterances. To recognize quickly and reliably the user intentions scattered through such incomplete dialogue sentences, and thereby enable smooth natural language dialogue interaction between the user and the system, a plan recognition system appears highly effective. Most plan recognition systems developed to date have concentrated on supporting HCI (e.g., intelligent help) through analysis of user actions and recognition of plans. This paper describes a plan recognition system that recognizes user intentions in a natural language dialogue user interface for searching a database of buy-and-sell advertisements placed in local newspapers.

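The goal-inference step this abstract describes could be sketched as a simple plan-library match: observed dialogue acts, scattered over several incomplete utterances, are scored against candidate plans, and the best-covered goal is inferred. The plan library, goal names, and act labels below are invented for illustration and are not from the paper.

```python
# Toy plan-recognition sketch: match observed dialogue acts against a small
# plan library and pick the goal whose plan overlaps most with the evidence.
PLAN_LIBRARY = {
    "sell_item": {"state_item", "state_price", "give_contact"},
    "buy_item":  {"request_item", "state_budget", "give_contact"},
}

def recognize_goal(observed_acts):
    """Return the goal whose plan covers the most observed dialogue acts."""
    scores = {goal: len(acts & set(observed_acts))
              for goal, acts in PLAN_LIBRARY.items()}
    return max(scores, key=scores.get)

# Intention spread over two incomplete utterances:
print(recognize_goal(["state_item", "state_price"]))  # → sell_item
```

A real system would also track plan structure and dialogue state, but the overlap score already shows how evidence from multiple utterances accumulates toward one goal.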

Arm Orientation Estimation Method with Multiple Devices for NUI/NUX

  • Sung, Yunsick; Choi, Ryong; Jeong, Young-Sik
    • Journal of Information Processing Systems / v.14 no.4 / pp.980-988 / 2018
  • Motion estimation is a key Natural User Interface/Natural User Experience (NUI/NUX) technology for utilizing motions as commands. HTC VIVE is an excellent device for estimating motions, but it only considers the positions of the hands, not the orientations of the arms. Even if the positions of the hands are the same, the meaning of a motion can differ according to the orientation of the arms. Therefore, when hand positions are measured and utilized, the arm orientations should be estimated as well. This paper proposes a method for estimating arm orientations based on the Bayesian probability of hand positions measured in advance. In experiments, the proposed method was applied to hand positions measured with HTC VIVE. The results showed that the proposed method estimated orientations with an error rate of about 19%, but it demonstrated the possibility of estimating the orientation of any body part without additional devices.
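The Bayesian lookup described above could be sketched as a MAP estimate over discretized hand positions: count which arm orientations co-occurred with each hand-position cell during a measurement phase, then return the most probable orientation for a new position. The bin size, grid discretization, and orientation labels are illustrative assumptions, not the paper's actual parameters.

```python
# Sketch of arm-orientation estimation from pre-measured hand positions.
from collections import Counter, defaultdict

def make_bin(position, size=0.1):
    """Discretize a 3D hand position (metres) into a grid cell."""
    return tuple(int(round(c / size)) for c in position)

class OrientationEstimator:
    def __init__(self):
        # counts[hand_cell][orientation_label] -> number of observations
        self.counts = defaultdict(Counter)

    def train(self, samples):
        """samples: iterable of (hand_position, arm_orientation_label)."""
        for position, orientation in samples:
            self.counts[make_bin(position)][orientation] += 1

    def estimate(self, position):
        """Return the most probable arm orientation for a hand position."""
        c = self.counts.get(make_bin(position))
        if not c:
            return None  # hand position never observed during measurement
        return c.most_common(1)[0][0]

est = OrientationEstimator()
est.train([((0.10, 1.20, 0.30), "raised"),
           ((0.11, 1.22, 0.31), "raised"),
           ((0.50, 0.90, 0.20), "lowered")])
print(est.estimate((0.12, 1.21, 0.33)))  # a nearby hand position → raised
```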

Recognition-Based Gesture Spotting for Video Game Interface (비디오 게임 인터페이스를 위한 인식 기반 제스처 분할)

  • Han, Eun-Jung; Kang, Hyun; Jung, Kee-Chul
    • Journal of Korea Multimedia Society / v.8 no.9 / pp.1177-1186 / 2005
  • In vision-based interfaces for video games, gestures are used as game commands instead of pressing a keyboard or mouse. In these interfaces, unintentional movements and continuous gestures have to be permitted to give the user a more natural interface. To address this problem, this paper proposes a novel gesture spotting method that combines spotting with recognition: it recognizes meaningful movements while concurrently separating unintentional movements from a given image sequence. We applied our method to the recognition of upper-body gestures for interfacing between a video game (Quake II) and its user. Experimental results show that the proposed method spots gestures from continuous gestures with an average rate of 93.36%, confirming its potential for a gesture-based interface for computer games.

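The combination of spotting and recognition could be illustrated as follows: per-frame gesture-class scores are compared against a non-gesture threshold, and contiguous frames that beat it are merged into one spotted, labelled segment while sub-threshold (unintentional) frames are discarded. The scores, threshold, and gesture labels are toy assumptions, not the paper's actual recognizer.

```python
# Illustrative recognition-based gesture spotting over a score sequence.
def spot_gestures(frame_scores, threshold=0.5):
    """frame_scores: list of dicts {gesture_label: score}, one per frame.
    Returns [(start_frame, end_frame, label)] for spotted gestures."""
    segments, start, labels = [], None, []
    for i, scores in enumerate(frame_scores):
        label, best = max(scores.items(), key=lambda kv: kv[1])
        if best >= threshold:          # frame looks like a real gesture
            if start is None:
                start = i
            labels.append(label)
        elif start is not None:        # run ended: emit majority label
            segments.append((start, i - 1, max(set(labels), key=labels.count)))
            start, labels = None, []
    if start is not None:
        segments.append((start, len(frame_scores) - 1,
                         max(set(labels), key=labels.count)))
    return segments

frames = [{"punch": 0.1, "wave": 0.2},   # unintentional movement
          {"punch": 0.8, "wave": 0.1},
          {"punch": 0.7, "wave": 0.2},
          {"punch": 0.05, "wave": 0.1}]  # back to non-gesture
print(spot_gestures(frames))  # → [(1, 2, 'punch')]
```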

Verification of AI Voice User Interface(VUI) Usability Evaluation : Focusing on Chinese Navigation VUI (인공지능 음성사용자 인터페이스 사용성 평가 기준 검증 : 중국 내비게이션 VUI를 중심으로)

  • Zhou, Yi Mou; Shang, Lin Rru; Lim, Hyun Chan; Hwang, Mi Kyung
    • Journal of Korea Multimedia Society / v.24 no.7 / pp.913-921 / 2021
  • This study compiled the general usability evaluation criteria proposed by existing VUI researchers and verified how appropriate these criteria are for an AI VUI specialized for navigation, as well as the priority of their suitability. Navigation VUIs used in China were analyzed, and a survey of 195 Chinese users was conducted. As a result of the analysis, the usability evaluation criteria of the navigation VUI were extracted from three sub-factors, 'task accuracy', 'function satisfaction', and 'information reliability', when verifying conformance with the general VUI evaluation criteria. With the recent advent of self-driving cars, safety and response speed are becoming very important, so Chinese users ranked responsiveness as the top priority in VUI design, and its importance was also found to be high. For both men and women, responsiveness ranked highest and multiplicity lowest. A VUI requires a convenient and natural interface, validated through usability evaluation, so that human and machine can understand each other's intention and interact effectively.

Layered Object and Script Language Model for Avatar Behavior Scenario Generation (아바타 행위 시나리오 생성을 위한 계층적 객체 및 스크립트 언어 모델)

  • Kim, Jae-Kyung; Sohn, Won-Sung; Lim, Soon-Bum; Choy, Yoon-Chul
    • Journal of Korea Multimedia Society / v.11 no.1 / pp.61-75 / 2008
  • A script language that represents and controls avatar behaviors in a natural language style is especially notable because it provides a fast and easy way to develop an animation scenario script. However, studies that consider avatar behavior interactions with various virtual objects, and intuitive interface techniques for designing scenario scripts, have been lacking. Therefore, we propose a context-based avatar-object behavior model and a layered script language. The model defines context-based elements to resolve the ambiguity problems that occur in an abstract behavior interface, and it provides a user interface to control the avatar in an object-based approach. The proposed avatar behavior script language consists of a layered structure that represents domain user interface, motion sequence, and implementation environment information at each level. Using the proposed methods, a user can conveniently and quickly design an avatar-object behavior scenario script.


Semi-automatic Field Morphing : Polygon-based Vertex Selection and Adaptive Control Line Mapping

  • Kwak, No-Yoon
    • International Journal of Contents / v.3 no.4 / pp.15-21 / 2007
  • Image morphing deals with the metamorphosis of one image into another. Field morphing depends on manual work for most of the process: the user has to designate the control lines, which takes time and requires skill to obtain fine-quality results. The object of this paper is to propose a method for semi-automating field morphing through adaptive vertex correspondence based on image segmentation. Within the pair of partial contours designated by user-input external control points, the adaptive vertex correspondence process efficiently generates control-line pairs by adaptively selecting reference partial contours according to the number of vertices included in the partial contour of the source morphing object and in that of the destination morphing object. The proposed method generates visually fluid morphs and warps with an easy-to-use interface. With it, a user can shorten the time needed to set control lines, and even an unskilled user can obtain natural morphing results by designating a small number of external control points.
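The vertex-correspondence step could be sketched as pairing vertices at the same relative position along the source and destination partial contours to produce control-line pairs. This proportional matching is an illustrative simplification of the adaptive correspondence described above; the contour data and function names are invented.

```python
# Rough sketch of generating control-line pairs between two partial contours.
def control_line_pairs(src_contour, dst_contour):
    """Each contour is a list of (x, y) vertices between two user-designated
    external control points. Returns [(src_segment, dst_segment), ...]."""
    n = min(len(src_contour), len(dst_contour))
    def pick(contour, i):
        # vertex at the same relative position along the (possibly longer) contour
        return contour[round(i * (len(contour) - 1) / (n - 1))]
    pairs = []
    for i in range(n - 1):
        src_seg = (pick(src_contour, i), pick(src_contour, i + 1))
        dst_seg = (pick(dst_contour, i), pick(dst_contour, i + 1))
        pairs.append((src_seg, dst_seg))
    return pairs

src = [(0, 0), (1, 0), (2, 0)]                          # 3-vertex contour
dst = [(0, 1), (1, 2), (2, 1), (3, 1), (4, 1)]          # 5-vertex contour
print(len(control_line_pairs(src, dst)))  # → 2 control-line pairs
```

Each resulting pair of segments would then drive a standard field-morphing warp between the two images.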

An Experimental Multimodal Command Control Interface for Car Navigation Systems

  • Kim, Kyungnam; Ko, Jong-Gook; Choi, SeungHo; Kim, Jin-Young; Kim, Ki-Jung
    • Proceedings of the IEEK Conference / 2000.07a / pp.249-252 / 2000
  • An experimental multimodal system combining natural input modes such as speech, lip movement, and gaze is proposed in this paper. It benefits from novel human-computer interaction (HCI) modalities and from multimodal integration to tackle the HCI bottleneck problem. The system allows the user to select menu items on the screen by employing speech recognition, lip reading, and gaze tracking components in parallel, with face tracking as a supplementary component to gaze tracking and lip movement analysis. These key components are reviewed, and preliminary results are shown for multimodal integration and user testing on the prototype system. Notably, the system equipped with gaze tracking and lip reading is very effective in noisy environments, where the speech recognition rate is low and, moreover, unstable. Our long-term interest is to build a user interface embedded in a commercial car navigation system (CNS).

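One common way to integrate parallel modalities like these is late fusion: each recognizer scores every menu item independently, and a weighted sum picks the winner, so an unreliable modality (noisy speech) can be down-weighted. The weights and score values below are invented for illustration and are not taken from the prototype system.

```python
# Toy sketch of late multimodal fusion for on-screen menu selection.
def fuse(modality_scores, weights):
    """modality_scores: {modality: {menu_item: score}}; returns the best item."""
    items = next(iter(modality_scores.values())).keys()
    combined = {item: sum(weights[m] * scores.get(item, 0.0)
                          for m, scores in modality_scores.items())
                for item in items}
    return max(combined, key=combined.get)

scores = {"speech": {"zoom": 0.4, "route": 0.35, "exit": 0.25},  # noisy audio
          "lip":    {"zoom": 0.2, "route": 0.6,  "exit": 0.2},
          "gaze":   {"zoom": 0.1, "route": 0.7,  "exit": 0.2}}
weights = {"speech": 0.3, "lip": 0.3, "gaze": 0.4}  # down-weight noisy speech
print(fuse(scores, weights))  # → route
```

Speech alone would have picked "zoom" here; the lip and gaze evidence overrules it, which is the point of fusing modalities in noisy conditions.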

Virtual Block Game Interface based on the Hand Gesture Recognition (손 제스처 인식에 기반한 Virtual Block 게임 인터페이스)

  • Yoon, Min-Ho; Kim, Yoon-Jae; Kim, Tae-Young
    • Journal of Korea Game Society / v.17 no.6 / pp.113-120 / 2017
  • With the development of virtual reality technology, user-friendly hand gesture interfaces have been increasingly studied in recent years for natural interaction with virtual 3D objects. Most earlier studies on hand gesture interfaces use relatively simple hand gestures. In this paper, we suggest an intuitive hand gesture interface for interaction with 3D objects in virtual reality applications. For hand gesture recognition, we first preprocess the various hand data and classify them with a binary decision tree. The classified data are resampled and converted into a chain code, and the hand feature data are then constructed from histograms of the chain code. Finally, the input gesture is recognized by MCSVM-based machine learning from the feature data. To test the proposed hand gesture interface, we implemented a 'Virtual Block' game. Our experiments showed a recognition ratio of about 99.2% for 16 kinds of command gestures, and the interface proved more intuitive and user-friendly than a conventional mouse interface.
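The chain-code and histogram steps of this pipeline could be sketched as follows: a resampled hand trajectory is converted to 8-direction chain codes, and the normalized histogram of those codes becomes the feature vector fed to the classifier. The 8-direction binning is a common convention and an assumption here; the paper's exact resampling and MCSVM stages are omitted.

```python
# Sketch of chain-code extraction and histogram features for a hand trajectory.
import math

def chain_code(points):
    """Convert a 2D point sequence into 8-direction chain codes
    (0 = east, counter-clockwise in 45-degree steps)."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        codes.append(int((angle + math.pi / 8) / (math.pi / 4)) % 8)
    return codes

def histogram_feature(codes):
    """Normalized 8-bin histogram of chain codes, used as the feature vector."""
    hist = [0.0] * 8
    for c in codes:
        hist[c] += 1
    n = len(codes) or 1
    return [h / n for h in hist]

# An "L"-shaped stroke: two steps east, then two steps north
stroke = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
codes = chain_code(stroke)
print(codes)                     # → [0, 0, 2, 2]
print(histogram_feature(codes))  # east and north bins each hold half the mass
```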

Touch TT: Scene Text Extractor Using Touchscreen Interface

  • Jung, Je-Hyun; Lee, Seong-Hun; Cho, Min-Su; Kim, Jin-Hyung
    • ETRI Journal / v.33 no.1 / pp.78-88 / 2011
  • In this paper, we present the Touch Text exTractor (Touch TT), an interactive text segmentation tool for the extraction of scene text from camera-based images. Touch TT provides a natural interface for a user to simply indicate the location of text regions with a simple touchline. Touch TT then automatically estimates the text color and roughly locates the text regions. By inferring text characteristics from the estimated text color and text region, Touch TT can extract text components. Touch TT can also handle partially drawn lines which cover only a small section of text area. The proposed system achieves reasonable accuracy for text extraction from moderately difficult examples from the ICDAR 2003 database and our own database.
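The core idea, estimating the text color from pixels under the user's touchline and keeping color-similar pixels as text candidates, could be sketched minimally as below. The image representation, mean-color estimate, and distance threshold are simplifying assumptions, not the paper's exact model.

```python
# Minimal sketch of touchline-seeded text-color extraction.
def mean_color(pixels):
    """Average RGB of the sampled pixels."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def extract_text_mask(image, touchline, threshold=60):
    """image: 2D list of RGB tuples; touchline: list of (row, col) positions
    the user drew over the text. Returns a binary mask of text-colored pixels."""
    text_color = mean_color([image[r][c] for r, c in touchline])
    def close(p):
        return sum((a - b) ** 2 for a, b in zip(p, text_color)) ** 0.5 < threshold
    return [[1 if close(p) else 0 for p in row] for row in image]

BLACK, WHITE = (10, 10, 10), (240, 240, 240)
image = [[WHITE, BLACK, WHITE],
         [WHITE, BLACK, WHITE]]
mask = extract_text_mask(image, touchline=[(0, 1), (1, 1)])
print(mask)  # → [[0, 1, 0], [0, 1, 0]]
```

Even a partial touchline works in this scheme, since the color estimate only needs a few representative text pixels, which mirrors the abstract's claim about partially drawn lines.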

A Study on the design implementation of ODA document formatter using backtracking mechanism (역추적 기능을 이용한 ODA 문서 포맷터 설계 및 구현에 관한 연구)

  • Jung, H.K.; Jo, I.J.; Kim, J.S.
    • The Journal of Natural Sciences / v.8 no.1 / pp.93-100 / 1995
  • This paper describes the design and implementation of an ODA document formatter capable of interchanging structured multimedia document information between heterogeneous systems. We designed a formatter that generates the specific layout structure from the generic layout structure and establishes the relationship between the specific logical and layout structures through user interaction. For this, we propose a backtracking mechanism and processing rules for layout directives. In particular, we implemented an interactive method as the user interface for easy creation of documents, which transparently handles the complicated internal structure for the user.
