• Title/Summary/Keyword: Natural User Interface

Human-Computer Interaction Based Only on Auditory and Visual Information

  • Sha, Hui;Agah, Arvin
    • Transactions on Control, Automation and Systems Engineering / v.2 no.4 / pp.285-297 / 2000
  • One of the research objectives in the area of multimedia human-computer interaction is the application of artificial intelligence and robotics technologies to the development of computer interfaces. This involves utilizing many forms of media, integrating speech input, natural language, graphics, hand-pointing gestures, and other methods for interactive dialogues. Although current human-computer communication methods include computer keyboards, mice, and other traditional devices, the two basic ways by which people communicate with each other are voice and gesture. This paper reports on research focusing on the development of an intelligent multimedia interface system modeled on the manner in which people communicate. This work explores interaction between humans and computers based only on the processing of speech (words uttered by the person) and the processing of images (hand-pointing gestures). The purpose of the interface is to control a pan/tilt camera so that it points to a location specified by the user through the utterance of words and the pointing of the hand. The system utilizes another stationary camera to capture images of the user's hand and a microphone to capture the user's words. Upon processing the images and sounds, the system responds by pointing the camera. Initially, the interface uses hand pointing to locate the general position to which the user is referring; it then uses voice commands provided by the user to fine-tune the location and to change the zoom of the camera, if requested. The image of the location is captured by the pan/tilt camera and sent to a color TV monitor to be displayed. This type of system has applications in tele-conferencing and other remote operations, where the system must respond to the user's commands in a manner similar to how the user would communicate with another person. The advantage of this approach is the elimination of the traditional input devices that the user must utilize in order to control a pan/tilt camera, replacing them with more "natural" means of interaction. A number of experiments were performed to evaluate the interface system with respect to its accuracy, efficiency, reliability, and limitations.
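
The two-stage aiming strategy described above (coarse localization by hand pointing, then fine-tuning by voice) can be illustrated with a short sketch. This is a minimal illustration under assumed conventions, not the authors' implementation: the pointing-ray geometry, the command vocabulary, and the PanTiltCamera class are all invented for the example.

```python
import numpy as np

class PanTiltCamera:
    """Hypothetical stand-in for a pan/tilt camera driver."""
    def __init__(self):
        self.pan, self.tilt, self.zoom = 0.0, 0.0, 1.0

    def point(self, pan_deg, tilt_deg):
        self.pan, self.tilt = pan_deg, tilt_deg

def pointing_to_pan_tilt(shoulder, fingertip):
    """Coarse stage: convert the 3D pointing ray (shoulder -> fingertip)
    into approximate pan/tilt angles (x right, y up, z forward)."""
    ray = np.asarray(fingertip) - np.asarray(shoulder)
    pan = np.degrees(np.arctan2(ray[0], ray[2]))   # left/right
    tilt = np.degrees(np.arctan2(ray[1], ray[2]))  # up/down
    return pan, tilt

# Fine-tuning stage: each recognized word nudges the camera slightly.
VOICE_COMMANDS = {
    "left": (-2.0, 0.0), "right": (2.0, 0.0),
    "up": (0.0, 2.0), "down": (0.0, -2.0),
}

def handle_utterance(camera, word):
    if word in VOICE_COMMANDS:
        dpan, dtilt = VOICE_COMMANDS[word]
        camera.point(camera.pan + dpan, camera.tilt + dtilt)
    elif word == "zoom":
        camera.zoom *= 1.5

camera = PanTiltCamera()
camera.point(*pointing_to_pan_tilt(shoulder=(0, 1.4, 0), fingertip=(0.3, 1.3, 0.5)))
handle_utterance(camera, "left")  # the user refines the location by voice
```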

User Factor Analysis and Evaluation of Virtual Reality 3D Color Picker

  • Lee, Jieun
    • Journal of Korea Multimedia Society / v.25 no.8 / pp.1175-1187 / 2022
  • 3D interaction between humans and computers has become possible with the popularization of virtual reality, and it is important to study natural and efficient virtual reality user interfaces. In user interface development, it is essential to analyze and evaluate user factors. To analyze the influence of these factors on users of a virtual reality color picker, this paper divides users into groups based on whether they majored in art or design, whether they had prior experience with virtual reality, and whether they had prior knowledge of 3D color spaces. The color selection error and color selection time of all user groups were compared and analyzed. Although there were statistically significant differences among the user groups, all groups used the virtual reality color picker accurately and effectively without difficulty.
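
As a rough illustration of how a 3D color picker can be driven by a tracked controller, and of how color selection error might be scored, the sketch below maps a controller position inside a unit cube to an RGB color and measures a trial by Euclidean distance to the target color plus elapsed time. The cube-to-RGB mapping and the error metric are assumptions for illustration; the paper's actual color space and measures may differ.

```python
import time
import numpy as np

def position_to_rgb(p):
    """Map a controller position in a unit cube [0,1]^3 to an RGB color."""
    return np.clip(np.asarray(p, dtype=float), 0.0, 1.0)

def run_trial(target_rgb, get_controller_position):
    """Time one selection and compute its color error (Euclidean distance)."""
    start = time.monotonic()
    picked = position_to_rgb(get_controller_position())  # blocks until the pick
    elapsed = time.monotonic() - start
    error = float(np.linalg.norm(picked - np.asarray(target_rgb)))
    return error, elapsed

error, seconds = run_trial((0.8, 0.2, 0.1), lambda: (0.75, 0.25, 0.1))
```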

Prototyping Training Program in Immersive Virtual Learning Environment with Head Mounted Displays and Touchless Interfaces for Hearing-Impaired Learners

  • HAN, Insook;RYU, Jeeheon;KIM, Minjeong
    • Educational Technology International / v.18 no.1 / pp.49-71 / 2017
  • The purpose of the study was to identify key design features of virtual reality with head-mounted displays (HMD) and a touchless interface for hearing-impaired and hard-of-hearing learners. The virtual reality based training program was aimed at helping hearing-impaired learners learn machine operation, which requires spatial understanding. We developed an immersive virtual learning environment prototype with an HMD (Oculus Rift) and a touchless natural user interface (Leap Motion) to identify the key design features required to enhance virtual reality for hearing-impaired and hard-of-hearing learners. Two usability tests of the prototype were conducted. They revealed that several features in the system need revision and that the technology presents enormous potential to help hearing-impaired learners by providing realistic and immersive learning experiences. After the usability tests, in which hearing-impaired students explored the 3D virtual space, interviews were conducted; these also established that further revision of the system is needed to take into account the learners' physical as well as cognitive characteristics.
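
A touchless interface like the one prototyped here ultimately reduces to deciding, frame by frame, whether the tracked hand is performing a gesture. The fragment below sketches one common approach, pinch detection from fingertip distance, against generic hand-tracking data; the Hand structure and the threshold are stand-ins invented for the example, not the Leap Motion SDK's API.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Hand:
    thumb_tip: Tuple[float, float, float]   # millimetres in sensor space
    index_tip: Tuple[float, float, float]

PINCH_THRESHOLD_MM = 25.0  # assumed; tuned per device and user

def is_pinching(hand: Hand) -> bool:
    """Treat thumb and index tips closer than the threshold as a 'select'."""
    dx, dy, dz = (a - b for a, b in zip(hand.thumb_tip, hand.index_tip))
    return (dx * dx + dy * dy + dz * dz) ** 0.5 < PINCH_THRESHOLD_MM

hand = Hand(thumb_tip=(0.0, 120.0, 30.0), index_tip=(10.0, 130.0, 35.0))
print(is_pinching(hand))  # True: the tips are 15 mm apart
```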

User's Emotional Touch Recognition Interface Using Non-contact Touch Sensor and Accelerometer

  • Koo, Seong-Yong;Lim, Jong-Gwan;Kwon, Dong-Soo
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.348-353 / 2008
  • This paper proposes a novel touch interface for recognizing a user's touch patterns and understanding emotional information by eliciting natural user interaction. To classify physical touches, we represent the similarity between touches by analyzing each touch based on its dictionary meaning, and we design an algorithm to recognize various touch patterns in real time. Finally, we suggest a methodology for estimating the user's emotional state based on touch.
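
The recognition step can be pictured as feature extraction over a short accelerometer window followed by nearest-pattern matching. The feature set and the prototype vectors below are illustrative assumptions, not the authors' dictionary-based similarity model.

```python
import numpy as np

def touch_features(accel_window):
    """Summarize a window of accelerometer magnitudes into simple features."""
    a = np.asarray(accel_window, dtype=float)
    return np.array([a.mean(), a.std(), a.max(), len(a)])

# Illustrative prototype feature vectors for a few touch patterns.
PROTOTYPES = {
    "stroke": np.array([1.1, 0.1, 1.4, 60]),
    "pat":    np.array([1.5, 0.5, 2.5, 20]),
    "hit":    np.array([2.5, 1.2, 4.0, 5]),
}

def classify_touch(accel_window):
    """Return the pattern whose prototype is nearest in feature space."""
    f = touch_features(accel_window)
    return min(PROTOTYPES, key=lambda name: np.linalg.norm(f - PROTOTYPES[name]))

print(classify_touch([1.0, 1.2, 1.1] * 20))  # -> "stroke"
```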

NUI/NUX of the Virtual Monitor Concept using the Concentration Indicator and the User's Physical Features

  • Jeon, Chang-hyun;Ahn, So-young;Shin, Dong-il;Shin, Dong-kyoo
    • Journal of Internet Computing and Services / v.16 no.6 / pp.11-21 / 2015
  • With growing interest in human-computer interaction (HCI), research on HCI has been actively conducted. Alongside it, research on the Natural User Interface/Natural User eXperience (NUI/NUX), which uses a user's gestures and voice, has also been active. NUI/NUX requires recognition algorithms such as gesture or voice recognition, but these algorithms have a weakness: their implementation is complex, and training them takes a long time because they must go through steps including preprocessing, normalization, and feature extraction. Recently, Microsoft launched Kinect as an NUI/NUX development tool, which has attracted much attention, and studies using Kinect have been conducted. In a previous study, the authors implemented a hand-mouse interface with outstanding intuitiveness using the user's physical features. However, it had weaknesses such as unnatural mouse movement and low accuracy of the mouse functions. In this study, we designed and implemented a hand-mouse interface that introduces a new concept called the 'virtual monitor', extracting the user's physical features through Kinect in real time. A virtual monitor is a virtual space that can be controlled by the hand mouse; coordinates on the virtual monitor are accurately mapped onto coordinates on the real monitor. The hand-mouse interface based on the virtual monitor concept maintains the outstanding intuitiveness that was the strength of the previous study and enhances the accuracy of the mouse functions. Further, we increased the accuracy of the interface by recognizing the user's unnecessary actions using a concentration indicator derived from electroencephalogram (EEG) data. To evaluate the intuitiveness and accuracy of the interface, we tested it with 50 people ranging in age from their teens to their fifties. In the intuitiveness experiment, 84% of the subjects learned how to use it within 1 minute. In the accuracy experiment, the mouse functions achieved accuracies of 80.4% (drag), 80% (click), and 76.7% (double-click). With the intuitiveness and accuracy of the proposed hand-mouse interface verified through these experiments, it is expected to be a good example of an interface for controlling systems by hand in the future.
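
Two pieces of this design lend themselves to a short sketch: the mapping from virtual-monitor coordinates to real-screen pixels, and the gating of mouse actions by the EEG concentration indicator. Everything below (the plane parameters, the threshold, the linear mapping) is an assumed simplification for illustration, not the paper's exact method.

```python
import numpy as np

SCREEN_W, SCREEN_H = 1920, 1080

def virtual_to_screen(hand_xy, vm_origin, vm_size):
    """Map a hand position on the virtual-monitor plane (metres, relative
    to the user's body) onto real-monitor pixel coordinates."""
    u = (hand_xy[0] - vm_origin[0]) / vm_size[0]
    v = (hand_xy[1] - vm_origin[1]) / vm_size[1]
    u, v = np.clip(u, 0, 1), np.clip(v, 0, 1)
    return int(u * (SCREEN_W - 1)), int((1 - v) * (SCREEN_H - 1))

CONCENTRATION_THRESHOLD = 0.6  # assumed scale for the EEG indicator

def should_accept_click(concentration):
    """Suppress clicks made while the user is not concentrating,
    treating them as unintended movements."""
    return concentration >= CONCENTRATION_THRESHOLD

x, y = virtual_to_screen(hand_xy=(0.1, 0.2), vm_origin=(-0.2, 0.0), vm_size=(0.4, 0.3))
if should_accept_click(concentration=0.72):
    print(f"click at ({x}, {y})")
```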

A Design and Implementation of Natural User Interface System Using Kinect

  • Lee, Sae-Bom;Jung, Il-Hong
    • Journal of Digital Contents Society / v.15 no.4 / pp.473-480 / 2014
  • As the use of computers has become widespread, active research is in progress to create interfaces that are more convenient and natural than existing user interfaces such as the keyboard and mouse. For this reason, there is increasing interest in Microsoft's motion-sensing module, Kinect, which can recognize hand motions and speech in a manner similar to communication between people. Kinect uses its built-in sensor to recognize the body's main joint movements and depth, and it provides simple speech recognition through its built-in microphone. In this paper, the goal is to use Kinect's depth data, skeleton tracking, and a labeling algorithm to extract the hand and recognize its movements, and to replace the role of existing peripherals with a virtual mouse, a virtual keyboard, and speech recognition.
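
The hand-extraction step, which combines depth data with the tracked skeleton, can be approximated as follows: keep the depth pixels within a small band around the hand joint's depth, then keep only the connected component containing the joint. SciPy's connected-component labeling stands in here for the paper's labeling algorithm, and all constants are assumed.

```python
import numpy as np
from scipy import ndimage

def extract_hand_mask(depth_mm, hand_px, band_mm=80):
    """Segment the hand: keep pixels whose depth is within band_mm of the
    skeleton's hand joint, then keep only the blob containing the joint."""
    hx, hy = hand_px                        # hand joint in image coordinates
    hand_depth = int(depth_mm[hy, hx])
    near = np.abs(depth_mm.astype(int) - hand_depth) < band_mm
    labels, _ = ndimage.label(near)         # connected-component labeling
    return labels == labels[hy, hx]

def hand_centroid(mask):
    """Centroid of the mask, usable as a virtual-mouse cursor position."""
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())

depth = np.full((480, 640), 2000, dtype=np.uint16)   # background at 2 m
depth[200:280, 300:360] = 900                        # hand region at 0.9 m
mask = extract_hand_mask(depth, hand_px=(320, 240))
print(hand_centroid(mask))                           # ~(329.5, 239.5)
```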

Immersive user interfaces for visual telepresence in human-robot interaction

  • Jang, Su-Hyeong
    • Proceedings of the HCI Society of Korea Conference / 2009.02a / pp.406-410 / 2009
  • As studies on more realistic human-robot interfaces are being actively carried out, interest is growing in telepresence, in which a user remotely controls a robot and obtains environmental information through a video display. In order to provide natural telepresence services by moving a remote robot, the user's behaviors must be recognized. The recognition of user movements in previous telepresence systems was difficult and costly to implement, which limited its application to human-robot interaction. In this paper, using Nintendo's Wii controller, which has recently attracted much attention, together with infrared LEDs, we propose an immersive user interface that easily recognizes the user's position and gaze direction and provides remote video information through an HMD.

User Interfaces for Visual Telepresence in Human-Robot Interaction Using Wii Controller

  • Jang, Su-Hyung;Yoon, Jong-Won;Cho, Sung-Bae
    • Journal of the HCI Society of Korea / v.3 no.1 / pp.27-32 / 2008
  • As studies on more realistic human-robot interfaces are being actively carried out, interest is growing in telepresence, in which a user remotely controls a robot and obtains environmental information through a video display. In order to provide natural telepresence services by moving a remote robot, the user's behaviors must be recognized. The recognition of user movements in previous telepresence systems was difficult and costly to implement, which limited its application to human-robot interaction. In this paper, using Nintendo's Wii controller, which has recently attracted much attention, together with infrared LEDs, we propose an immersive user interface that easily recognizes the user's position and gaze direction and provides remote video information through an HMD.
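
The Wii controller's built-in IR camera reports the image positions of infrared blobs, so a user wearing two IR LEDs can be tracked much as in the well-known Wiimote head-tracking demos. The sketch below estimates the user's distance and horizontal gaze angle from the two blob positions; the camera constants and LED spacing are assumed values for illustration, not figures from the paper.

```python
import math

# Assumed constants: Wiimote IR camera resolution/field of view, LED spacing.
IR_RES_X = 1024                # horizontal resolution of the IR camera
FOV_X_RAD = math.radians(33)   # approximate horizontal field of view
LED_SPACING_M = 0.20           # distance between the two head-mounted LEDs

def head_pose(blob_a, blob_b):
    """Estimate user distance and gaze (pan) angle from two IR blob
    positions in IR-camera pixel coordinates."""
    sep_px = math.hypot(blob_b[0] - blob_a[0], blob_b[1] - blob_a[1])
    # Angle subtended by the LED pair gives distance by triangulation.
    angle = sep_px * FOV_X_RAD / IR_RES_X
    distance = (LED_SPACING_M / 2) / math.tan(angle / 2)
    # Midpoint offset from the image centre gives the gaze direction.
    mid_x = (blob_a[0] + blob_b[0]) / 2
    yaw = (mid_x / IR_RES_X - 0.5) * FOV_X_RAD
    return distance, math.degrees(yaw)

dist, yaw_deg = head_pose((400, 380), (600, 384))
print(f"user at {dist:.2f} m, looking {yaw_deg:+.1f} degrees off-axis")
```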

A Development of Gesture Interfaces using Spatial Context Information

  • Kwon, Doo-Young;Bae, Ki-Tae
    • International Journal of Contents / v.7 no.1 / pp.29-36 / 2011
  • Gestures have been employed in human-computer interaction to build more natural interfaces in new computational environments. In this paper, we describe our approach to developing a gesture interface using spatial context information. The proposed gesture interface recognizes a system action (e.g., a command) by integrating gesture information with spatial context information within a probabilistic framework. Two ontologies of spatial context are introduced based on the spatial information of gestures: gesture volume and gesture target. Prototype applications are developed using a smart-environment scenario in which a user can interact, using gestures, with digital information embedded in physical objects.
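
The probabilistic integration of gesture and spatial context can be sketched as a naive-Bayes style combination: the recognized action maximizes P(action) x P(gesture | action) x P(volume | action) x P(target | action). The probability tables below are toy numbers invented for illustration; in the paper, the ontologies and the model come from its smart-environment scenario.

```python
# Toy conditional probability tables: P(observation | action).
P_GESTURE = {"turn_on": {"point": 0.7, "wave": 0.3},
             "dim":     {"point": 0.2, "wave": 0.8}}
P_VOLUME  = {"turn_on": {"small": 0.6, "large": 0.4},   # gesture volume
             "dim":     {"small": 0.3, "large": 0.7}}
P_TARGET  = {"turn_on": {"lamp": 0.8, "screen": 0.2},   # gesture target
             "dim":     {"lamp": 0.5, "screen": 0.5}}
P_ACTION  = {"turn_on": 0.5, "dim": 0.5}                # prior over actions

def recognize(gesture, volume, target):
    """Pick the action maximizing prior times likelihoods."""
    def score(a):
        return (P_ACTION[a] * P_GESTURE[a][gesture]
                * P_VOLUME[a][volume] * P_TARGET[a][target])
    return max(P_ACTION, key=score)

print(recognize("point", "small", "lamp"))  # -> "turn_on"
```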

Natural Language Interface for Composite Web Services

  • Lim, Jong-Hyun;Lee, Kyong-Ho
    • Journal of KIISE: Computing Practices and Letters / v.16 no.2 / pp.144-156 / 2010
  • With the widespread adoption of Web services in various fields, there is growing interest in building composite Web services; however, it is very difficult for ordinary users to specify how services should be composed. Therefore, a convenient interface for generating and invoking composite Web services is required. This paper proposes a natural language interface for invoking services. The proposed interface provides a way to describe users' requests for composite Web services in natural language, so that a user with no technical knowledge of Web services can describe such requests. The proposed method extracts a complex workflow from the requests and finds appropriate Web services. Experimental results show that the proposed method extracts a sophisticated workflow from complex sentences with many phrases and control constructs.
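
At a high level, such an interface must turn a sentence like "book a flight to Seoul and then reserve a hotel" into an ordered workflow of service invocations. The keyword-matching sketch below is a drastic simplification invented for illustration (the registry and operation names are hypothetical); the paper's method handles far richer phrasing and control constructs.

```python
from typing import List

# Hypothetical registry mapping trigger phrases to Web service operations.
SERVICE_REGISTRY = {
    "book a flight": "FlightService.book",
    "reserve a hotel": "HotelService.reserve",
    "rent a car": "CarRentalService.rent",
}

def extract_workflow(request: str) -> List[str]:
    """Extract an ordered list of service operations from a user request,
    ordering steps by where their trigger phrase appears in the sentence."""
    text = request.lower()
    steps = [(text.find(phrase), op)
             for phrase, op in SERVICE_REGISTRY.items()
             if phrase in text]
    return [op for _, op in sorted(steps)]

print(extract_workflow("Please book a flight to Seoul and then reserve a hotel"))
# -> ['FlightService.book', 'HotelService.reserve']
```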