• Title/Abstract/Keyword: Natural User Interfaces

Search results: 47 items

GUI 어플리케이션 제어를 위한 제스처 인터페이스 모델 설계 (Design of Gesture based Interfaces for Controlling GUI Applications)

  • 박기창;서성채;정승문;강임철;김병기
    • 한국콘텐츠학회논문지 / 제13권1호 / pp.55-63 / 2013
  • User interface technology has evolved from CLI (Command Line Interfaces) through GUI (Graphical User Interfaces) to NUI (Natural User Interfaces). NUIs use a variety of input forms such as multi-touch, motion tracking, voice, and stylus. Applying an NUI to an existing GUI application normally requires adding device-related libraries, modifying the related code, and debugging. This paper proposes a model that applies a gesture-based interface to existing event-driven GUI applications without modifying them. It also presents an XML schema for specifying the proposed model and demonstrates its use through 3D-gesture and mouse-gesture prototypes.
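
A minimal sketch of the core idea: recognized gestures are translated into events that an unmodified, event-driven GUI application already handles, based on an XML specification. The XML tags, attribute names, gesture vocabulary, and dispatch logic below are illustrative assumptions, not the schema or prototype from the paper.

```python
# Sketch: parse a hypothetical XML gesture map and turn recognized gestures
# into the GUI events the existing application already understands.
import xml.etree.ElementTree as ET

MAPPING_XML = """
<gesture-map application="PhotoViewer">
  <gesture name="swipe-left"  event="key-press"   arg="Right"/>
  <gesture name="swipe-right" event="key-press"   arg="Left"/>
  <gesture name="pinch-out"   event="mouse-wheel" arg="+120"/>
  <gesture name="pinch-in"    event="mouse-wheel" arg="-120"/>
</gesture-map>
"""

def load_mapping(xml_text):
    """Build {gesture name -> (event type, argument)} from the XML spec."""
    root = ET.fromstring(xml_text)
    return {g.get("name"): (g.get("event"), g.get("arg"))
            for g in root.findall("gesture")}

def dispatch(gesture, mapping):
    """Translate a recognized gesture into a synthetic GUI event."""
    event, arg = mapping.get(gesture, (None, None))
    if event is None:
        print(f"unmapped gesture: {gesture}")
        return
    # A real implementation would inject the event into the windowing
    # system's event queue; here we only log the intended injection.
    print(f"inject {event}({arg}) for gesture '{gesture}'")

if __name__ == "__main__":
    mapping = load_mapping(MAPPING_XML)
    dispatch("swipe-left", mapping)   # -> inject key-press(Right) ...
    dispatch("wave", mapping)         # -> unmapped gesture: wave
```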

LCD Display 설비 Contents의 Kinect기반 동작제어 기술 구현에 관한 연구 (A Study on Implementing Kinect-Based Control for LCD Display Contents)

  • 노정규
    • 전기학회논문지 / 제63권4호 / pp.565-569 / 2014
  • Recently, various kinds of new computer-controlled devices have been introduced in a wide range of areas, and convenient user interfaces for controlling them are strongly needed. To implement natural user interfaces (NUIs) on top of such devices, technologies such as touch screens, the Wii Remote, wearable interfaces, and Microsoft Kinect have been presented. This paper presents a natural and intuitive gesture-based model for controlling the contents of an LCD display. The Microsoft Kinect sensor and its SDK are used to recognize human gestures, and the gestures are interpreted into corresponding commands to be executed. A command dispatch model is also proposed in order to handle the commands more naturally. The proposed interface is expected to be usable in various fields, including display content control.
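
A rough sketch of the gesture-to-command dispatch idea described above. The actual system uses the Microsoft Kinect SDK for skeleton tracking; here the recognizer is a stub, and the gesture names, command names, and queue-based dispatcher are assumptions made for illustration.

```python
# Sketch: map recognized gestures to commands and dispatch them from a queue.
from queue import Queue

COMMANDS = {
    "swipe_left":  "next_page",
    "swipe_right": "previous_page",
    "push":        "select_item",
    "raise_hand":  "show_menu",
}

def recognize(frame):
    """Stub: a real implementation would classify Kinect skeleton frames."""
    return frame.get("gesture")

def dispatcher(command_queue):
    """Drain queued commands and apply them to the display contents."""
    while not command_queue.empty():
        command = command_queue.get()
        print(f"execute: {command}")  # e.g. advance the LCD display contents

if __name__ == "__main__":
    queue = Queue()
    for frame in [{"gesture": "swipe_left"}, {"gesture": "push"}, {}]:
        gesture = recognize(frame)
        if gesture in COMMANDS:
            queue.put(COMMANDS[gesture])
    dispatcher(queue)
```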

공동 작업을 위한 사용자 인터페이스로서의 멀티미디어 문서 (Multimedia documents for user interfaces of cooperative work)

  • 성미영
    • 대한인간공학회:학술대회논문집 / 대한인간공학회 1995년도 추계학술대회논문집 / pp.46-55 / 1995
  • Multimedia documents have become the most natural user interface for CSCW (Computer Supported Cooperative Work) in distributed environments. The objective of this study is to propose a multimedia document architecture and to develop a system that can manage it well. The new architecture supports revisable documents and serves as the basic layer for hypermedia documents. A good document architecture for CSCW must support pointing, marking, and editing over parts of documents; user views, version control, and full-content search are also desirable features. In this paper, we discuss the basic concept of a new document architecture for CSCW and present user interfaces for the spatio-temporal composition of multimedia documents.

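A minimal sketch, under stated assumptions, of a revisable multimedia document of the kind argued for above: content parts carry spatial and temporal placement, annotations (pointing and marking) reference parts, and edits are versioned and searchable. The class and field names are illustrative; they are not the document architecture defined in the paper.

```python
# Sketch: parts with spatio-temporal placement, annotations, versioned edits.
from dataclasses import dataclass, field

@dataclass
class Part:
    part_id: str
    media: str                     # "text", "image", "audio", ...
    content: str
    region: tuple = (0, 0, 0, 0)   # spatial placement (x, y, w, h)
    interval: tuple = (0.0, 0.0)   # temporal placement (start, end) seconds

@dataclass
class Annotation:
    part_id: str
    author: str
    kind: str                      # "pointing", "marking", "comment"
    payload: str

@dataclass
class Document:
    parts: dict = field(default_factory=dict)
    annotations: list = field(default_factory=list)
    versions: list = field(default_factory=list)   # snapshots of edited parts

    def edit(self, part_id, new_content):
        """Record the old content as a version, then apply the edit."""
        old = self.parts[part_id]
        self.versions.append((part_id, old.content))
        old.content = new_content

    def search(self, keyword):
        """Naive full-content search over part contents."""
        return [p.part_id for p in self.parts.values() if keyword in p.content]

if __name__ == "__main__":
    doc = Document()
    doc.parts["p1"] = Part("p1", "text", "Meeting agenda: NUI demo review")
    doc.annotations.append(Annotation("p1", "alice", "marking", "highlight 'demo'"))
    doc.edit("p1", "Meeting agenda: NUI demo review (rescheduled)")
    print(doc.search("demo"), len(doc.versions))
```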

음성인식용 인터페이스의 사용편의성 평가 방법론 (A Usability Evaluation Method for Speech Recognition Interfaces)

  • 한성호;김범수
    • 대한인간공학회지 / 제18권3호 / pp.105-125 / 1999
  • As speech is the human being's most natural communication medium, using it offers many advantages. Currently, most computer user interfaces rely on a mouse and keyboard, but speech recognition interfaces are expected to replace them or at least to supplement them. Despite these advantages, speech recognition interfaces are not yet popular because of technical difficulties such as limited recognition accuracy and slow response time. Nevertheless, it is important to optimize human-computer system performance by improving usability. This paper presents a set of guidelines for designing speech recognition interfaces and provides a method for evaluating their usability. A total of 113 guidelines are suggested to improve the usability of speech recognition interfaces. The evaluation method consists of four major procedures: user interface evaluation, function evaluation, vocabulary estimation, and recognition speed/accuracy evaluation. Each procedure is described along with proper techniques for efficient evaluation.

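A small sketch of what the recognition speed/accuracy part of such an evaluation could compute: word-level accuracy from the edit distance between reference and recognized utterances, plus mean response time. The metric choices are assumptions made for illustration; the paper defines its own evaluation procedures and guidelines.

```python
# Sketch: word accuracy from edit distance, plus mean response time.
def word_errors(reference, hypothesis):
    """Minimum edit distance (substitutions + insertions + deletions)."""
    d = [[0] * (len(hypothesis) + 1) for _ in range(len(reference) + 1)]
    for i in range(len(reference) + 1):
        d[i][0] = i
    for j in range(len(hypothesis) + 1):
        d[0][j] = j
    for i in range(1, len(reference) + 1):
        for j in range(1, len(hypothesis) + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[-1][-1]

def evaluate(trials):
    """trials: (reference text, recognized text, response time in seconds)."""
    total_words = total_errors = 0
    times = []
    for ref, hyp, t in trials:
        ref_w, hyp_w = ref.split(), hyp.split()
        total_errors += word_errors(ref_w, hyp_w)
        total_words += len(ref_w)
        times.append(t)
    accuracy = 1.0 - total_errors / total_words
    return accuracy, sum(times) / len(times)

if __name__ == "__main__":
    trials = [("open the file menu", "open the menu", 0.8),
              ("save document", "save document", 0.6)]
    print(evaluate(trials))   # word accuracy and mean response time
```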

가상현실 환경에서 3D 가상객체 조작을 위한 인터페이스와 인터랙션 비교 연구 (Comparative Study on the Interface and Interaction for Manipulating 3D Virtual Objects in a Virtual Reality Environment)

  • 박경범;이재열
    • 한국CDE학회논문집 / 제21권1호 / pp.20-30 / 2016
  • Immersive virtual reality (VR) has recently become popular owing to advances in I/O interfaces and related software for effectively constructing VR environments. In particular, natural and intuitive manipulation of 3D virtual objects is still considered one of the most important user interaction issues. This paper presents a comparative study on the manipulation of and interaction with 3D virtual objects using different interfaces and interactions in three VR environments. The comparative study covers both quantitative and qualitative aspects. The three experimental setups are 1) a typical desktop-based VR using mouse and keyboard, 2) a hand-gesture-supported desktop VR using a Leap Motion sensor, and 3) an immersive VR in which the user wears an HMD and interacts through hand gestures tracked by a Leap Motion sensor. In the desktop VR with hand gestures, the Leap Motion sensor is placed on the desk; in the immersive VR, the sensor is mounted on the HMD so that the user can manipulate virtual objects in front of the HMD. For the quantitative analysis, task completion time and success rate were measured. The experimental tasks require complex 3D transformations such as simultaneous 3D translation and 3D rotation. For the qualitative analysis, user experience factors such as ease of use, naturalness of interaction, and stressfulness were evaluated. The qualitative and quantitative analyses show that the immersive VR with natural hand gestures provides more intuitive and natural interaction and supports fast and effective task completion, but causes more stressful conditions.
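
A small sketch of the kind of measurement behind the quantitative analysis: an experimental task asks for simultaneous 3D translation and rotation, and success can be judged by how close the manipulated pose (a 4x4 homogeneous transform) is to the target pose. The tolerances and success criterion are assumptions for illustration, not the paper's protocol.

```python
# Sketch: pose error between a manipulated object and its target pose.
import numpy as np

def pose(translation, yaw_deg):
    """Homogeneous transform from a translation vector and a yaw rotation."""
    t = np.radians(yaw_deg)
    m = np.eye(4)
    m[:3, :3] = [[np.cos(t), -np.sin(t), 0],
                 [np.sin(t),  np.cos(t), 0],
                 [0,          0,         1]]
    m[:3, 3] = translation
    return m

def docking_error(current, target):
    """Positional error (m) and angular error (deg) between two poses."""
    pos_err = np.linalg.norm(current[:3, 3] - target[:3, 3])
    r = current[:3, :3] @ target[:3, :3].T
    ang_err = np.degrees(np.arccos(np.clip((np.trace(r) - 1) / 2, -1, 1)))
    return pos_err, ang_err

if __name__ == "__main__":
    target = pose([0.5, 0.2, 1.0], 30)
    attempt = pose([0.48, 0.22, 1.01], 27)
    p, a = docking_error(attempt, target)
    print(f"success: {p < 0.05 and a < 5}  (pos {p:.3f} m, angle {a:.1f} deg)")
```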

가상현실 3차원 색상 선택기의 사용자 요인 분석 및 평가 (User Factor Analysis and Evaluation of Virtual Reality 3D Color Picker)

  • 이지은
    • 한국멀티미디어학회논문지 / 제25권8호 / pp.1175-1187 / 2022
  • 3D interaction between humans and computers has become possible with the popularization of virtual reality, so it is important to study natural and efficient virtual reality user interfaces. In user interface development, it is essential to analyze and evaluate user factors. In order to analyze the influence of user factors in the use of a virtual reality color picker, this paper divides users into groups based on whether they majored in art or design, whether they had prior experience with virtual reality, and whether they had prior knowledge of 3D color spaces. The color selection error and color selection time of all user groups were compared and analyzed. Although there were statistically significant differences between the user groups, all groups used the virtual reality color picker accurately and effectively without difficulty.
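
A rough sketch of how the two dependent measures could be computed: color selection error as the Euclidean distance between the target and the selected color in a 3D color space, and mean selection time per user group. The RGB space, group labels, and example values are assumptions for illustration.

```python
# Sketch: per-group mean selection error and selection time.
import math
from collections import defaultdict

def selection_error(target, selected):
    """Euclidean distance between two colors in a 3D color space."""
    return math.dist(target, selected)

def summarize(trials):
    """trials: (group, target color, selected color, selection time in s)."""
    errors, times = defaultdict(list), defaultdict(list)
    for group, target, selected, t in trials:
        errors[group].append(selection_error(target, selected))
        times[group].append(t)
    return {g: (sum(errors[g]) / len(errors[g]),
                sum(times[g]) / len(times[g])) for g in errors}

if __name__ == "__main__":
    trials = [("art_major",     (200, 30, 30), (195, 35, 28), 4.2),
              ("non_art_major", (200, 30, 30), (180, 50, 40), 6.1)]
    print(summarize(trials))   # mean error and mean time per group
```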

비접촉식 촉감 디스플레이 기술 동향 (Trends on Non-contact Haptic Display Technology)

  • 황인욱;김진용;윤성률
    • 전자통신동향분석 / 제33권5호 / pp.95-102 / 2018
  • With the widespread use of multifunctional devices, haptic sensation is a promising sensory channel because it can serve as an additional channel for transferring information alongside traditional audiovisual user interfaces. Many researchers have shed new light on non-contact haptic displays for their potential use in ambient and natural user interfaces. This paper introduces several of the latest schemes for creating mid-air haptic sensation, classified by their transfer medium: ultrasonic phased arrays, air nozzles, thermal and plasmonic lasers, and electromagnets. We describe the principles used to deliver haptic sensation in each technology, state-of-the-art results from leading research groups, and brief forecasts of further research directions.
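
A back-of-the-envelope sketch of the ultrasonic phased-array principle surveyed above: each transducer is driven with a phase offset that compensates for its distance to the focal point, so the wavefronts arrive in phase and form a localized pressure spot that can be felt in mid-air. The array geometry and the 40 kHz drive frequency are illustrative values, not a specific system from the survey.

```python
# Sketch: per-element phase delays that focus an ultrasonic array in mid-air.
import math

SPEED_OF_SOUND = 343.0      # m/s in air
FREQUENCY = 40_000.0        # Hz, a typical airborne-ultrasound frequency
WAVELENGTH = SPEED_OF_SOUND / FREQUENCY

def element_positions(n=8, pitch=0.01):
    """A simple 1D array of n transducers spaced `pitch` metres apart."""
    return [(i * pitch, 0.0, 0.0) for i in range(n)]

def focus_phases(elements, focal_point):
    """Phase (radians) applied to each element so waves align at the focus."""
    distances = [math.dist(e, focal_point) for e in elements]
    d_max = max(distances)
    # Elements closer to the focus are delayed so all wavefronts coincide.
    return [2 * math.pi * ((d_max - d) % WAVELENGTH) / WAVELENGTH
            for d in distances]

if __name__ == "__main__":
    elements = element_positions()
    phases = focus_phases(elements, focal_point=(0.035, 0.0, 0.15))
    print([round(p, 2) for p in phases])
```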

멀티모달 인터랙션을 위한 사용자 병렬 모달리티 입력방식 및 입력 동기화 방법 설계 (Design of Parallel Input Pattern and Synchronization Method for Multimodal Interaction)

  • 임미정;박범
    • 대한인간공학회지 / 제25권2호 / pp.135-146 / 2006
  • Multimodal interfaces are recognition-based technologies that interpret and encode hand gestures, eye gaze, movement patterns, speech, physical location, and other natural human behaviors. A modality is the type of communication channel used for interaction; it also covers the way an idea is expressed or perceived, or the manner in which an action is performed. Multimodal interfaces constitute multimodal interaction processes, which occur consciously or unconsciously while a human communicates with a computer, so their input/output forms differ from those of existing interfaces. Moreover, different people show different cognitive styles, and individual preferences play a role in the selection of one input mode over another. Therefore, to develop an effective design of multimodal user interfaces, the input/output structure needs to be formulated through research on human cognition. This paper analyzes the characteristics of each human modality and suggests combination types of modalities and dual coding for formulating multimodal interaction. It then designs a multimodal language and an input synchronization method according to the granularity of input synchronization. To effectively guide the development of next-generation multimodal interfaces, substantial cognitive modeling will be needed to understand the temporal and semantic relations between different modalities, their joint functionality, and their overall potential for supporting computation in different forms. This paper is expected to show multimodal interface designers how to organize and integrate human input modalities while interacting with multimodal interfaces.
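
A minimal sketch of one way parallel modality inputs could be synchronized: time-stamped events from separate streams (for example speech and hand gesture) are fused into a single multimodal command when they fall inside a synchronization window. The window size, event vocabulary, and fusion rule are assumptions for illustration, not the synchronization method designed in the paper.

```python
# Sketch: window-based fusion of time-stamped events from parallel modalities.
SYNC_WINDOW = 0.8   # seconds within which parallel inputs count as one command

def fuse(events):
    """events: time-ordered (timestamp, modality, token) tuples."""
    events = sorted(events)
    fused, i = [], 0
    while i < len(events):
        t0, _, _ = events[i]
        group = [e for e in events[i:] if e[0] - t0 <= SYNC_WINDOW]
        fused.append({modality: token for _, modality, token in group})
        i += len(group)
    return fused

if __name__ == "__main__":
    stream = [(0.10, "speech",  "put that"),
              (0.45, "gesture", "point(lamp)"),
              (2.30, "speech",  "turn off"),
              (2.60, "gesture", "point(tv)")]
    for command in fuse(stream):
        print(command)   # {'speech': 'put that', 'gesture': 'point(lamp)'} ...
```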

User Requirement Analysis on Risk Management of Architectural Heritage in Virtual Reality

  • Lee, Jongwook
    • 한국컴퓨터정보학회논문지 / 제24권9호 / pp.69-75 / 2019
  • We propose a method for analyzing user requirements in order to design a virtual reality-based risk management system, and present the surveys, interviews, prototype evaluation methods, and implementation process involved. Architectural heritage is easily exposed to natural and man-made dangers because of its varied material combinations and structural features, so risk management plays a key role in preserving and managing cultural heritage. However, risk management has so far been carried out through empirical methods using distributed data. This study analyzes user requirements for designing the functions and interfaces of a VR-based risk management system and evaluates prototypes to overcome these problems. As a result, most heritage managers wanted a system function to support risk analysis and response. They also preferred 2D information, such as existing drawings and photos, over 3D information. The results of the user requirements analysis derived from this study will be used to create risk management applications.

주거 공간에서의 3차원 핸드 제스처 인터페이스에 대한 사용자 요구사항 (User Needs of Three Dimensional Hand Gesture Interfaces in Residential Environment Based on Diary Method)

  • 정동영;김희진;한성호;이동훈
    • 대한산업공학회지 / 제41권5호 / pp.461-469 / 2015
  • The aim of this study is to identify users' needs for a 3D hand gesture interface in the smart home environment: which objects users want to control with such an interface and why they want to use one. 3D hand gesture interfaces have been studied for application to various devices in smart environments because they let users control the environment with natural and intuitive hand gestures; understanding users' needs for such an interface can therefore improve the user experience of a product. The study was conducted with 20 participants using a diary method. For one week, participants recorded their needs for a 3D hand gesture interface in diary forms covering who, when, where, what, and how they would use the interface, with each entry accompanied by a usefulness score. A total of 322 entries (209 valid and 113 erroneous) were collected. There were common objects that users wanted to control with a 3D hand gesture interface and common reasons for wanting to use one: the light was the object users most wanted to control, and overcoming hand restrictions was the most frequent reason for using the interface. The results of this study can support effective and efficient research on 3D hand gesture interfaces by giving valuable insights to researchers and designers, and could also be used to create guidelines for 3D hand gesture interfaces.
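
A small sketch of how diary entries of this kind could be aggregated to surface the most requested target objects and reasons, optionally weighted by the reported usefulness score. The field names, score scale, and example entries are illustrative assumptions, not the study's actual coding scheme.

```python
# Sketch: count the most requested objects and reasons across diary entries.
from collections import Counter

def aggregate(entries):
    """entries: dicts with 'object', 'reason', and a numeric 'usefulness'."""
    objects, reasons = Counter(), Counter()
    for e in entries:
        weight = e.get("usefulness", 1)
        objects[e["object"]] += weight
        reasons[e["reason"]] += weight
    return objects.most_common(3), reasons.most_common(3)

if __name__ == "__main__":
    diary = [
        {"object": "light", "reason": "hands are occupied", "usefulness": 6},
        {"object": "light", "reason": "out of reach",       "usefulness": 5},
        {"object": "TV",    "reason": "hands are occupied", "usefulness": 4},
    ]
    top_objects, top_reasons = aggregate(diary)
    print(top_objects, top_reasons)
```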