• Title/Summary/Keyword: Natural User Interfaces

47 search results

Scenario-Based Design of The Next Generation Information Appliances (시나리오 기반 차세대 정보가전 신제품 개발)

  • 박지수
    • Archives of Design Research / v.16 no.2 / pp.35-48 / 2003
  • Home networking technology connects individual home appliances through a wired or wireless network and makes possible new functions that were impossible when the appliances were used independently. However, the new functions must not simply be confusing arrays of everything that is possible to implement, but those absolutely necessary to users. To develop innovative information appliances with such functions, scenarios were used; they played guiding roles in suggesting new product ideas, making design mockups, and producing videos to show natural situations in which the products would be used in the home of the future. In the phase of suggesting new product ideas, user action scenarios in the home, generated by a team of experts in cognitive engineering, user interface, computer science, cultural anthropology, interaction design, and product design, helped the team identify user needs and the design factors necessary to fulfill those needs, and to suggest new product ideas from those design factors. In the phase of making design mockups, the procedures for using the products were described in scenario format; based on these scenarios, the styles and user interfaces of the products were designed. In the phase of producing videos, the interactions between the user and the product were embodied as professional writers arranged the scenarios of product use into scripts for the videos. Videos were produced to show the actual situations in which the design mockups would be used in the home of the future and the dynamic aspects of the interaction design.

  • PDF

Arduino-based Tangible User Interfaces Smart Puck Systems (아두이노 기반의 텐저블 유저 인터페이스 스마트퍽 시스템)

  • Bak, Seon Hui;Kim, Eung Soo;Lee, Jeong Bae;Lee, Heeman
    • Journal of Korea Multimedia Society / v.19 no.2 / pp.334-343 / 2016
  • In this paper, we developed a low-cost smart puck system that supports interaction through the intuitive operation of natural finger touches. The tangible smart puck, designed for a capacitive tabletop display, has an Arduino-based embedded processor that communicates only with the MT server. The MT server communicates with both the smart puck and the display server. The display server shows information relevant to the locations of the smart pucks on the tabletop display and handles interactions with the users. The experimental results show that identification of the smart puck ID was reliable enough for practical use, and that the information presentation processing time compared favorably with traditional, more expensive commercial products.
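
The abstract above outlines a three-part architecture (puck, MT server, display server) without giving the protocol. Below is a minimal relay sketch under the assumption of a simple UDP/JSON message; the port numbers, field names (`id`, `x`, `y`), and message format are illustrative guesses, not the paper's actual design.

```python
# Minimal sketch of the puck -> MT server -> display server relay described above.
# Ports, message format, and field names are assumptions for illustration only.
import json
import socket

PUCK_PORT = 5005                     # hypothetical port the Arduino puck reports to
DISPLAY_ADDR = ("127.0.0.1", 5006)   # hypothetical display-server endpoint

def run_mt_server():
    """Receive puck reports (id, x, y) and forward them to the display server."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("0.0.0.0", PUCK_PORT))
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        data, _ = rx.recvfrom(1024)          # e.g. b'{"id": 3, "x": 120, "y": 480}'
        report = json.loads(data.decode())
        # The display server would look up content for this puck ID and
        # render it at (x, y) on the tabletop display.
        tx.sendto(json.dumps(report).encode(), DISPLAY_ADDR)

if __name__ == "__main__":
    run_mt_server()
```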

Human-Computer Interaction Based Only on Auditory and Visual Information

  • Sha, Hui;Agah, Arvin
    • Transactions on Control, Automation and Systems Engineering / v.2 no.4 / pp.285-297 / 2000
  • One of the research objectives in the area of multimedia human-computer interaction is the application of artificial intelligence and robotics technologies to the development of computer interfaces. This involves utilizing many forms of media, integrating speech input, natural language, graphics, hand-pointing gestures, and other methods for interactive dialogues. Although current human-computer communication methods include computer keyboards, mice, and other traditional devices, the two basic ways by which people communicate with each other are voice and gesture. This paper reports on research focusing on the development of an intelligent multimedia interface system modeled on the manner in which people communicate. The work explores interaction between humans and computers based only on the processing of speech (words uttered by the person) and the processing of images (hand-pointing gestures). The purpose of the interface is to control a pan/tilt camera so that it points to a location specified by the user through uttered words and hand pointing. The system utilizes another, stationary camera to capture images of the user's hand and a microphone to capture the user's words. Upon processing the images and sounds, the system responds by pointing the camera. Initially, the interface uses hand pointing to locate the general position the user is referring to; it then uses voice commands provided by the user to fine-tune the location and to change the zoom of the camera, if requested. The image of the location is captured by the pan/tilt camera and sent to a color TV monitor to be displayed. This type of system has applications in tele-conferencing and other remote operations, where the system must respond to the user's commands in a manner similar to how the user would communicate with another person. The advantage of this approach is the elimination of the traditional input devices that the user would otherwise need in order to control a pan/tilt camera, replacing them with more "natural" means of interaction. A number of experiments were performed to evaluate the interface system with respect to its accuracy, efficiency, reliability, and limitations. (A minimal control-loop sketch follows this entry.)

  • PDF
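
The abstract describes a two-stage interaction: coarse positioning from hand pointing followed by voice-based fine-tuning. The sketch below only illustrates that control flow; `detect_pointing_direction`, `recognize_command`, and `move_camera` are hypothetical stand-ins, since the paper's recognition algorithms are not given in the abstract.

```python
# Illustrative control loop for the pointing-plus-voice camera interface described
# above. The detection, recognition, and camera-control functions are stand-ins.

def detect_pointing_direction(frame):
    """Stand-in: return a coarse (pan, tilt) estimate from a hand-pointing image."""
    return 30.0, -10.0   # degrees, dummy values

def recognize_command(audio):
    """Stand-in: map an utterance to one of a few voice commands."""
    return "zoom in"     # e.g. "left", "right", "up", "down", "zoom in", "stop"

def move_camera(pan, tilt, zoom=1.0):
    """Stand-in for driving the pan/tilt camera."""
    print(f"camera -> pan={pan:.1f}, tilt={tilt:.1f}, zoom={zoom:.1f}")

def interaction_step(frame, audio):
    # 1. Hand pointing gives the coarse target region.
    pan, tilt = detect_pointing_direction(frame)
    move_camera(pan, tilt)
    # 2. Voice commands fine-tune the position and zoom.
    command = recognize_command(audio)
    if command == "zoom in":
        move_camera(pan, tilt, zoom=2.0)
    elif command == "left":
        move_camera(pan - 2.0, tilt)
    # remaining commands would be handled analogously

if __name__ == "__main__":
    interaction_step(frame=None, audio=None)
```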

Natural Language Interface for Composite Web Services (복합 웹 서비스를 위한 자연어 인터페이스)

  • Lim, Jong-Hyun;Lee, Kyong-Ho
    • Journal of KIISE: Computing Practices and Letters / v.16 no.2 / pp.144-156 / 2010
  • With the wide spread of Web services in various fields, there is growing interest in building composite Web services; however, it is very difficult for ordinary users to specify how to compose services. Therefore, a convenient interface for generating and invoking composite Web services is required. This paper proposes a natural language interface for invoking services. The proposed interface provides a way to describe users' requests for composite Web services in natural language, so a user with no technical knowledge about Web services can still express such requests. The proposed method extracts a complex workflow from the requests and finds appropriate Web services. Experimental results show that the proposed method extracts a sophisticated workflow from complex sentences with many phrases and control constructs.
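
As a rough illustration of mapping a natural-language request to an ordered service workflow, here is a deliberately simple keyword-based sketch; the service registry and sentence-splitting heuristic are assumptions, and the paper's actual extraction method handles far richer phrasing and control constructs.

```python
# Toy sketch of turning a natural-language request into an ordered list of
# service invocations, in the spirit of the interface described above.
# The keyword matching and service names are illustrative assumptions.
SERVICE_REGISTRY = {            # hypothetical Web services keyed by trigger words
    "flight": "BookFlightService",
    "hotel": "ReserveHotelService",
    "weather": "WeatherForecastService",
}

def extract_workflow(request: str) -> list[str]:
    """Split a request into sequential steps and map each step to a service."""
    steps = [s.strip() for s in request.lower().replace("and then", "then").split("then")]
    workflow = []
    for step in steps:
        for keyword, service in SERVICE_REGISTRY.items():
            if keyword in step:
                workflow.append(service)
    return workflow

if __name__ == "__main__":
    print(extract_workflow("Book a flight to Jeju, then reserve a hotel, then check the weather"))
    # -> ['BookFlightService', 'ReserveHotelService', 'WeatherForecastService']
```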

Buying vs. Using: User Segmentation & UI Optimization through Mobile Phone Log Analysis (구매 vs. 사용 휴대폰 Log 분석을 통한 사용자 재분류 및 UI 최적화)

  • Jeon, Myoung-Hoon;Na, Dae-Yol;Ahn, Jung-Hee
    • Proceedings of the HCI Society of Korea Conference / 2008.02b / pp.460-464 / 2008
  • To improve and optimize the user interface of a system, an accurate understanding of users' behavior is an essential prerequisite. Direct questions depend on the user's imprecise memory, and usability tests reflect the researchers' intentions rather than the users'; furthermore, neither provides a natural context of use. In this paper we describe work that examined users' behavior through log analysis in their own environment. Fifty users were recruited by consumer segmentation, and logging software was downloaded to their mobile phones. After two weeks, the logged data were gathered and analyzed, and complementary methods such as a user diary and interviews were also used. The analysis showed the frequency of menu and key access, usage time, data storage, and several usage patterns. It was also found that users could be segmented into new groups by their usage patterns. Improvements to the mobile phone user interface were proposed based on the results of this study. (A toy segmentation sketch follows this entry.)

  • PDF
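
A toy version of segmenting users from usage logs is sketched below; the log format, feature set, and grouping rule (dominant menu item) are assumptions for illustration and are much simpler than the study's segmentation.

```python
# Minimal sketch of segmenting users by logged feature usage, in the spirit of
# the log analysis described above. Log format and segment labels are assumed.
from collections import Counter

# hypothetical log: (user_id, menu_item) pairs gathered over two weeks
LOG = [
    ("u01", "camera"), ("u01", "camera"), ("u01", "message"),
    ("u02", "mp3"), ("u02", "mp3"), ("u02", "camera"),
    ("u03", "message"), ("u03", "message"), ("u03", "message"),
]

def segment_users(log):
    """Group users by their most frequently accessed menu item."""
    per_user = {}
    for user, menu in log:
        per_user.setdefault(user, Counter())[menu] += 1
    segments = {}
    for user, counts in per_user.items():
        dominant, _ = counts.most_common(1)[0]
        segments.setdefault(dominant, []).append(user)
    return segments

if __name__ == "__main__":
    print(segment_users(LOG))   # e.g. {'camera': ['u01'], 'mp3': ['u02'], 'message': ['u03']}
```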

Effects of Interactivity and Usage Mode on User Experience in Chatbot Interface (챗봇 기반 인터페이스의 상호작용성과 사용 모드가 사용자 경험에 미치는 영향)

  • Baek, Hyunji;Kim, Sangyeon;Lee, Sangwon
    • Journal of the HCI Society of Korea / v.14 no.1 / pp.35-43 / 2019
  • This study examines how the interactivity and usage mode of a chatbot interface affect user experience. Chatbots have rapidly been commercialized in line with improvements in artificial intelligence and natural language processing. However, most research has focused on the technical aspects of improving chatbot performance, and the user experience of chatbot interfaces also needs to be studied. In this article, we investigated how the 'interactivity' of an interface and the 'usage mode', which refers to the user's situation, affect satisfaction, flow, and the perceived usefulness of a chatbot. The results are as follows. First, the higher the level of interactivity, the better the user experience. Second, usage mode showed an interaction effect with interactivity on flow, although it did not show a main effect; specifically, flow was highest in the condition combining high interactivity with a particular usage mode, compared with the other conditions. Thus, to design better chatbot interfaces, the degree of interactivity should be increased so that users can achieve their goals easily through the various functions that high interactivity provides.

Development of a Voice User Interface for Web Browser using VoiceXML (VoiceXML을 이용한 VUI 지원 웹브라우저 개발)

  • Yea SangHoo;Jang MinSeok
    • Journal of KIISE: Computing Practices and Letters / v.11 no.2 / pp.101-111 / 2005
  • Current web information is mainly described in HTML, which users access through input devices such as the mouse and keyboard. Thus the existing GUI environment does not support humans' most natural means of acquiring information: voice. To address this, several vendors have been developing voice user interfaces; however, these products are deficient in human-machine interactivity and in their accommodation of the existing web environment. This paper presents a web browser supporting a VUI (Voice User Interface) that utilizes increasingly mature speech recognition technology and VoiceXML, a markup language derived from XML. It provides users with both interfaces, VUI as well as GUI. In addition, XML Island technology is applied to the browser so that VoiceXML fragments can be nested in HTML documents, accommodating the existing web environment. Also, for better interactivity, dialogue scenarios for a menu, a bulletin board, and a search engine are suggested.

Implementation of Gesture Interface for Projected Surfaces

  • Park, Yong-Suk;Park, Se-Ho;Kim, Tae-Gon;Chung, Jong-Moon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.1 / pp.378-390 / 2015
  • Image projectors can turn any surface into a display. Integrating a surface projection with a user interface transforms it into an interactive display with many possible applications. Hand gesture interfaces are often used with projector-camera systems. Hand detection through color image processing is affected by the surrounding environment. The lack of illumination and color details greatly influences the detection process and drops the recognition success rate. In addition, there can be interference from the projection system itself due to image projection. In order to overcome these problems, a gesture interface based on depth images is proposed for projected surfaces. In this paper, a depth camera is used for hand recognition and for effectively extracting the area of the hand from the scene. A hand detection and finger tracking method based on depth images is proposed. Based on the proposed method, a touch interface for the projected surface is implemented and evaluated.
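
The depth-based approach described above can be illustrated with a short sketch, assuming OpenCV 4.x and NumPy: pixels within a small band above the known surface depth are kept, the largest blob is taken as the hand, and its convex hull approximates candidate fingertips. The thresholds and the synthetic test frame are illustrative assumptions, not the paper's parameters.

```python
# Rough sketch of depth-threshold hand detection on a projected surface.
import cv2
import numpy as np

def detect_hand(depth_mm: np.ndarray, surface_mm: int = 1000, band_mm: int = 60):
    """Keep pixels a few centimetres above the surface, then take the largest blob."""
    # Pixels closer to the camera than the surface by up to `band_mm` are hand candidates.
    mask = ((depth_mm > surface_mm - band_mm) & (depth_mm < surface_mm - 5)).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None, None
    hand = max(contours, key=cv2.contourArea)   # largest blob = hand
    hull = cv2.convexHull(hand)                 # hull points approximate fingertips
    return hand, hull

if __name__ == "__main__":
    # Synthetic 240x320 depth frame: flat surface at 1000 mm with a "hand" at 960 mm.
    frame = np.full((240, 320), 1000, dtype=np.uint16)
    frame[100:140, 150:200] = 960
    hand, hull = detect_hand(frame)
    print("hand area:", cv2.contourArea(hand), "hull points:", len(hull))
```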

A Study on the Cognitive Potential of Pre-school Children with AR Collaborative TUI

  • Deng, Qianrong;Cho, Dong-min
    • Journal of Korea Multimedia Society / v.25 no.4 / pp.649-659 / 2022
  • The most important factor in pre-school children's psychological perception is ease of learning, and the closest measure of this is "natural" interaction. This study explores the potential of tangible user interfaces (TUI) for AR collaboration in children's cognitive development. A conceptual model is constructed by analyzing the effects of physical interaction, spatial perception, and social collaboration on the usability of TUI, to explore the role of TUI in pre-school children's cognition. In the empirical study, children aged 3-6 were the research subjects, and the experimental tool was the "Plugo" education application. Parents answered questionnaires after observing their children's use. The research shows that physical interaction is the most critical factor in TUI and that TUI is beneficial to the cultivation of spatial ability. The results are as follows: 1. Cronbach's Alpha and KMO were 0.921 and 0.965, which were significant and passed the reliability and validity tests. 2. Through confirmatory factor analysis (model fit indices, combinatorial validity), we found that physical interaction was closely related to usability. 3. Path analysis of the relationships shows that usability has a significant impact on the cultivation of pre-school children's spatial ability.
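
For reference, the Cronbach's alpha reliability statistic reported above (0.921) is computed from item and total-score variances; a small sketch of the standard formula follows, with made-up item scores (only the formula itself is standard, nothing here reproduces the study's data).

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)).
from statistics import variance

def cronbach_alpha(items: list[list[float]]) -> float:
    """items[i][j] = score of respondent j on questionnaire item i."""
    k = len(items)                                    # number of items
    item_vars = sum(variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total score
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

if __name__ == "__main__":
    # Three hypothetical 5-point items answered by five parents.
    responses = [
        [4, 5, 3, 4, 5],
        [4, 4, 3, 5, 5],
        [5, 5, 2, 4, 4],
    ]
    print(round(cronbach_alpha(responses), 3))
```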

Implementation of a Virtual Crowd Simulation System

  • Jeong, Il-Kwon;Baek, Seong-Min;Lee, Choon-Young;Lee, In-Ho
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2005.06a / pp.2217-2220 / 2005
  • This paper introduces a practical implementation of virtual crowd simulation software. Typical commercial crowd simulation packages are complex and have program-like scripting interfaces, which makes them hard for animators to learn and use. Based on the observation that most crowd scenes consist of walking, running, and fighting movements, we have implemented a crowd simulation system that automatically generates the movements of virtual characters given the user's minimal direction of the initial configuration. The system was implemented as a plug-in for Maya, one of the most commonly used 3D packages for film work. Because the generated movements are based on optically captured motion clips, the results are sufficiently natural. (A toy clip-selection sketch follows this entry.)

  • PDF
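
The abstract mentions generating walking and running movements from captured motion clips with minimal direction. As a loose illustration only, the sketch below picks a hypothetical clip from a goal position and a time budget; this is not the paper's method, and the clip names and speeds are assumptions.

```python
# Toy sketch of choosing a motion clip per character from a simple goal-seeking rule.
import math

CLIPS = {"idle": 0.0, "walk": 1.4, "run": 4.0}   # hypothetical max speed (m/s) per clip

def choose_clip(pos, goal, time_budget):
    """Pick the slowest clip that still reaches the goal within the time budget."""
    dist = math.dist(pos, goal)
    required_speed = dist / max(time_budget, 1e-6)
    for name, speed in sorted(CLIPS.items(), key=lambda kv: kv[1]):
        if speed >= required_speed:
            return name
    return "run"   # fall back to the fastest clip

if __name__ == "__main__":
    # A character 12 m from its goal with 5 s to get there should run.
    print(choose_clip(pos=(0.0, 0.0), goal=(12.0, 0.0), time_budget=5.0))  # -> 'run'
```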