• Title/Summary/Keyword: Advanced user interaction

Search Results: 101

MPEG-U-based Advanced User Interaction Interface Using Hand Posture Recognition

  • Han, Gukhee;Choi, Haechul
    • IEIE Transactions on Smart Processing and Computing, v.5 no.4, pp.267-273, 2016
  • Hand posture recognition is an important technique for enabling a natural and familiar interface in the human-computer interaction (HCI) field. This paper introduces a hand posture recognition method using a depth camera. Moreover, the method is incorporated with the Moving Picture Experts Group Rich Media User Interface (MPEG-U) Advanced User Interaction (AUI) Interface (MPEG-U part 2), which can provide a natural interface on a variety of devices. The proposed method initially detects the positions and lengths of all open fingers, and then recognizes the hand posture from the pose of one or two hands and the number of folded fingers when a user presents a gesture representing a pattern in the AUI data format specified in MPEG-U part 2. The AUI interface represents a user's hand posture in the compliant MPEG-U schema structure. Experimental results demonstrate the performance of the hand posture recognition system and verify that the AUI interface is compatible with the MPEG-U standard.
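The AUI representation described above is an XML document conforming to the MPEG-U part 2 schema. As a rough illustration of the idea only, the sketch below serializes a recognized posture to XML; the element and attribute names are invented for this example and are not the normative MPEG-U schema.

```python
# Illustrative sketch only: serialize a recognized hand posture into an
# MPEG-U-style AUI XML message. The element and attribute names below
# ("AUIPattern", "HandPosture", ...) are invented for this example and
# are NOT the normative MPEG-U part 2 schema.
import xml.etree.ElementTree as ET

def posture_to_xml(hand, posture, fingers_folded):
    root = ET.Element("AUIPattern")                 # hypothetical wrapper element
    hp = ET.SubElement(root, "HandPosture")
    hp.set("hand", hand)                            # "left", "right", or "both"
    hp.set("posture", posture)                      # e.g. "open-palm", "fist"
    hp.set("fingersFolded", str(fingers_folded))    # count of folded fingers
    return ET.tostring(root, encoding="unicode")

xml_msg = posture_to_xml("right", "open-palm", 0)
print(xml_msg)
```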

Designing Mobile User Interface with Grip-Pattern Recognition (파지 형태 인식을 통한 휴대 단말용 사용자 인터페이스 설계)

  • Chang, Wook;Kim, Kee-Eung;Lee, Hyun-Jeong;Cho, Joon-Kee;Soh, Byung-Seok;Shim, Jung-Hyun;Yang, Gyung-Hye;Cho, Sung-Jung;Park, Joon-Ah
    • Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집), 2006.02a, pp.678-683, 2006
  • A novel and intuitive way of accessing applications on mobile devices is presented. The key idea is to use the grip-pattern, which is naturally produced when a user tries to use the mobile device, as a clue to determine the application to be launched. To this end, a capacitive touch sensor system is carefully designed and installed underneath the housing of the mobile terminal to capture an image of the user's grip-pattern. The captured data are then classified by a recognizer with dedicated preprocessing and postprocessing algorithms. A recognition test is performed to validate the feasibility of the proposed user interface system.
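As a toy illustration of the recognition step described above, the sketch below classifies a flattened capacitive-sensor "image" by nearest-neighbor matching against per-application templates. The sensor layout, template values, and application names are invented for this example; the paper's actual recognizer uses dedicated pre- and post-processing instead.

```python
# Sketch: classify a capacitive grip-pattern image by nearest-neighbor
# matching against stored templates. All values are illustrative.
def distance(a, b):
    # Squared Euclidean distance between two flattened sensor images.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify_grip(sample, templates):
    """templates: {application_name: flattened sensor image}"""
    return min(templates, key=lambda name: distance(sample, templates[name]))

templates = {
    "camera": [0, 9, 9, 0, 0, 8, 8, 0],   # two-handed landscape grip (illustrative)
    "phone":  [9, 0, 0, 9, 9, 0, 0, 9],   # one-handed vertical grip (illustrative)
}
print(classify_grip([1, 8, 9, 1, 0, 7, 8, 1], templates))  # → camera
```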


Designing a Mobile User Interface with Grip-Pattern Recognition (파지 형태 감지를 통한 휴대 단말용 사용자 인터페이스 개발)

  • Chang Wook;Kim Kee Eung;Lee Hyunjeong;Cho Joon Kee;Soh Byung Seok;Shim Jung Hyun;Yang Gyunghye;Cho Sung Jung;Park Joonah
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 2005.11a, pp.245-248, 2005
  • This paper presents a novel user interface system that aims at easy control of mobile devices. The fundamental concept of the proposed interface is to launch an appropriate function of the device by sensing and recognizing the grip-pattern produced when the user tries to use the mobile device. To this end, we develop a prototype system that employs capacitive touch sensors covering the housing of the device, together with a recognition algorithm that selects the function appropriate for the sensed grip-pattern. The effectiveness and feasibility of the proposed method are evaluated through recognition-rate tests on the collected grip-pattern database.


MPEG-U part 2 based Advanced User Interaction Interface System (MPEG-U part 2 기반 향상된 사용자 상호작용 인터페이스 시스템)

  • Han, Gukhee;Baek, A-Ram;Choi, Haechul
    • The Journal of the Korea Contents Association, v.12 no.12, pp.54-62, 2012
  • The AUI (Advanced User Interaction) interface aims to enhance interaction between various input/output devices and scene descriptions represented by video, audio, and graphics. Recently, MPEG-U part 2 standardization for the AUI interface has been under development by the MPEG (Moving Picture Experts Group). This paper introduces the MPEG-U part 2 standard and presents an MPEG-U part 2 AUI interface system. The AUI interface system consists of user interface input/output modules and MPEG-U XML generation/interpretation modules; the former handle UID data and the latter process XML data. The system can be used as a framework for MPEG-U standards-based input/output devices and for improving interaction with the user. Through an implementation of the proposed AUI interface system, an MPEG-U usage scenario is introduced and it is verified that the AUI interface system conforms to the MPEG-U standard.

Gesture based Input Device: An All Inertial Approach

  • Chang Wook;Bang Won-Chul;Choi Eun-Seok;Yang Jing;Cho Sung-Jung;Cho Joon-Kee;Oh Jong-Koo;Kim Dong-Yoon
    • International Journal of Fuzzy Logic and Intelligent Systems, v.5 no.3, pp.230-245, 2005
  • In this paper, we develop a gesture-based input device equipped with accelerometers and gyroscopes. The sensors provide inertial measurements, i.e., the accelerations and angular velocities produced by the movement of the system when a user inputs gestures on a plane surface or in 3D space. The gyroscope measurements are integrated to give the orientation of the device and are consequently used to compensate the accelerations. The compensated accelerations are doubly integrated to yield the position of the device. With this approach, a user's gesture input trajectories can be recovered without any external sensors. Three versions of the motion tracking algorithm are provided to cope with a wide spectrum of applications. A Bayesian-network-based recognition system then processes the recovered trajectories to identify the gesture class. Experimental results convincingly show the feasibility and effectiveness of the proposed gesture input device. To show the practical use of the proposed input method, we implemented a prototype system: a gesture-based remote controller (Magic Wand).
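The pipeline in the abstract (integrate angular rate to orientation, rotate the body-frame accelerations into the world frame, then integrate twice for position) can be sketched in one plane as follows. This is a bare Euler-integration illustration under assumed noise-free samples, not the paper's three tracking algorithms, which must additionally handle sensor noise and drift.

```python
# Sketch of inertial dead reckoning in 2D: heading from the gyroscope,
# body-to-world rotation of accelerations, then double integration.
import math

def track(samples, dt):
    """samples: list of (ax_body, ay_body, gyro_z) tuples at fixed rate dt."""
    heading = 0.0
    vx = vy = px = py = 0.0
    for ax_b, ay_b, wz in samples:
        heading += wz * dt                       # integrate angular velocity
        c, s = math.cos(heading), math.sin(heading)
        ax = c * ax_b - s * ay_b                 # body -> world frame
        ay = s * ax_b + c * ay_b
        vx += ax * dt; vy += ay * dt             # first integration: velocity
        px += vx * dt; py += vy * dt             # second integration: position
    return px, py

# Constant 1 m/s^2 forward acceleration, no rotation, for 1 s at 100 Hz;
# Euler integration gives ~0.505 m instead of the exact 0.5 m.
px, py = track([(1.0, 0.0, 0.0)] * 100, 0.01)
print(px, py)
```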

MPEG-U based Advanced User Interaction Interface System Using Hand Posture Recognition (손 자세 인식을 이용한 MPEG-U 기반 향상된 사용자 상호작용 인터페이스 시스템)

  • Han, Gukhee;Lee, Injae;Choi, Haechul
    • Journal of Broadcast Engineering, v.19 no.1, pp.83-95, 2014
  • Hand posture recognition is an important technique for enabling a natural and familiar interface in the HCI (human-computer interaction) field. In this paper, we introduce a hand posture recognition method using a depth camera. Moreover, the method is incorporated into an MPEG-U based advanced user interaction (AUI) interface system, which can provide a natural interface with a variety of devices. The proposed method initially detects the positions and lengths of all open fingers, and then recognizes the hand posture from the pose of one or two hands and the number of folded fingers when the user makes a gesture representing a pattern of the AUI data format specified in MPEG-U part 2. The AUI interface system represents the user's hand posture in a compliant MPEG-U schema structure. Experimental results show the performance of the hand posture recognition, and it is verified that the AUI interface system is compatible with the MPEG-U standard.

An analysis of the component of Human-Robot Interaction for Intelligent room

  • Park, Jong-Chan;Kwon, Dong-Soo
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference (제어로봇시스템학회 학술대회논문집), 2005.06a, pp.2143-2147, 2005
  • Human-robot interaction (HRI) has recently become one of the most important issues in the field of robotics. Understanding and predicting the intentions of human users is a major difficulty for robotic programs. In this paper, we suggest an interaction method that allows the robot to execute the human user's desires in an intelligent-room domain, even when the user does not give a specific command for the action. To achieve this, we constructed a full system architecture for an intelligent room in which the following components are present and sequentially interconnected: decision-making based on a Bayesian belief network, responding to human commands, and generating queries to remove ambiguities. The robot obtains all the necessary information by analyzing the user's condition and the environmental state of the room. This information is then used to evaluate the probabilities of the results coming from the output nodes of the Bayesian belief network, which is composed of nodes that include several states and the causal relationships between them. Our study shows that the suggested system and proposed method improve a robot's ability to understand human commands, intuit human desires, and predict human intentions, resulting in a comfortable intelligent room for the human user.
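The decision-making idea above can be reduced to a toy example: weight each candidate action by the probability of the user states that would call for it, and pick the most probable action. The variables and numbers below are invented for illustration and are not the paper's actual belief network.

```python
# Toy sketch of belief-network-style decision-making in an intelligent
# room. All states, actions, and probabilities are illustrative.
P_context = {"reading": 0.5, "sleeping": 0.2, "watching_tv": 0.3}  # prior over user state
P_action_given_context = {                    # P(desired action | user state)
    "reading":     {"light_on": 0.8,  "light_off": 0.1, "tv_on": 0.1},
    "sleeping":    {"light_on": 0.05, "light_off": 0.9, "tv_on": 0.05},
    "watching_tv": {"light_on": 0.2,  "light_off": 0.1, "tv_on": 0.7},
}

def best_action(context_likelihood):
    """context_likelihood: P(sensor observation | user state)."""
    scores = {}
    for action in ["light_on", "light_off", "tv_on"]:
        # Unnormalized posterior: sum over user states of
        # prior * observation likelihood * action preference.
        scores[action] = sum(
            P_context[c] * context_likelihood[c] * P_action_given_context[c][action]
            for c in P_context
        )
    return max(scores, key=scores.get)

# Sensors strongly suggest the user is sleeping -> turn the light off.
print(best_action({"reading": 0.1, "sleeping": 0.8, "watching_tv": 0.1}))
```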


Adaptive Event Clustering for Personalized Photo Browsing (사진 사용 이력을 이용한 이벤트 클러스터링 알고리즘)

  • Kim, Kee-Eung;Park, Tae-Suh;Park, Min-Kyu;Lee, Yong-Beom;Kim, Yeun-Bae;Kim, Sang-Ryong
    • Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집), 2006.02a, pp.711-716, 2006
  • Since the introduction of the digital camera to the mass market, the number of digital photos owned by an individual has been growing at an alarming rate. This phenomenon naturally leads to difficulties in searching and browsing a personal digital photo archive. The traditional approach typically involves content-based image retrieval using computer vision algorithms. However, due to the performance limitations of these algorithms, at least on casual digital photos taken by non-professional photographers, more recent approaches center on time-based clustering algorithms that analyze the shot times of photos. These algorithms are based on the insight that when photos are clustered according to shot-time similarity, the resulting "event clusters" help the user browse through her photo archive. It is also reported that one of the remaining problems with the time-based approach is that people perceive events at different scales. In this paper, we present an adaptive time-based clustering algorithm that exploits the usage history of digital photos in order to infer the user's preference regarding event granularity. Experiments show significant improvements in clustering accuracy.
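The core of time-based event clustering can be sketched in a few lines: start a new event whenever the gap between consecutive shot times exceeds a threshold. The paper's contribution is adapting that granularity per user from usage history; the fixed threshold below merely stands in for it, and the timestamps are invented.

```python
# Sketch of time-based event clustering: a gap between consecutive shot
# times larger than the threshold starts a new event cluster.
def cluster_by_time(shot_times, gap_threshold):
    """shot_times: sorted timestamps in seconds; returns list of clusters."""
    clusters = [[shot_times[0]]]
    for prev, cur in zip(shot_times, shot_times[1:]):
        if cur - prev > gap_threshold:
            clusters.append([cur])          # large gap -> new event
        else:
            clusters[-1].append(cur)        # same event continues
    return clusters

times = [0, 30, 95, 4000, 4050, 9000]       # two bursts and a lone photo
print(cluster_by_time(times, gap_threshold=600))
# → [[0, 30, 95], [4000, 4050], [9000]]
```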


A new human-robot interaction method using semantic symbols

  • Park, Sang-Hyun;Hwang, Jung-Hoon;Kwon, Dong-Soo
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference (제어로봇시스템학회 학술대회논문집), 2004.08a, pp.2005-2010, 2004
  • As robots become more prevalent in human daily life, situations requiring interaction between humans and robots will occur more frequently. Therefore, human-robot interaction (HRI) is becoming increasingly important. Although robotics researchers have made many technical advances in their field, intuitive and easy ways for most users to interact with robots are still lacking. This paper introduces a new approach to enhance human-robot interaction using a semantic symbol language and proposes a method to acquire the intentions of robot users. In the proposed approach, each semantic symbol represents knowledge about either the environment or an action that a robot can perform. Users' intentions are expressed by symbolized multimodal information. To interpret a user's command, a probabilistic approach is used, which is appropriate for interpreting a freestyle user expression or insufficient input information. A first-order Markov model is therefore constructed as the probabilistic model, and a questionnaire is conducted to obtain the state transition probabilities for this Markov model. Finally, we evaluated our model to show how well it interprets users' commands.
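A first-order Markov model as described above scores a symbol sequence by the product of a start probability and the transition probabilities between consecutive symbols; the most probable interpretation of the user's input wins. The symbols and probabilities below are invented for illustration, not the questionnaire-derived values from the paper.

```python
# Sketch: score candidate symbol sequences with a first-order Markov
# model. Symbols and transition probabilities are illustrative.
def sequence_prob(seq, start, trans):
    p = start.get(seq[0], 0.0)                    # start probability
    for a, b in zip(seq, seq[1:]):
        p *= trans.get(a, {}).get(b, 0.0)         # transition probability
    return p

start = {"robot": 0.7, "object": 0.3}
trans = {
    "robot": {"go": 0.6, "grasp": 0.4},
    "go":    {"kitchen": 0.9, "object": 0.1},
    "grasp": {"object": 1.0},
}

# Which interpretation of the multimodal input is more probable?
print(sequence_prob(["robot", "go", "kitchen"], start, trans))   # 0.7*0.6*0.9 = 0.378
print(sequence_prob(["robot", "grasp", "object"], start, trans)) # 0.7*0.4*1.0 = 0.28
```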


An Outlook for Interaction Experience in Next-generation Television

  • Kim, Sung-Woo
    • Journal of the Ergonomics Society of Korea, v.31 no.4, pp.557-565, 2012
  • Objective: This paper focuses on the new trend of applying NUI (natural user interface) techniques, such as gesture interaction, to television, and investigates the design improvements needed in their application. The intention is to find a better design direction for NUI in the television context, which will help make the new features and behavioral changes of next-generation television practically usable and meaningful as user experience elements. Background: Traditional television is rapidly evolving into next-generation television under the influence of "smartness" from the mobile domain. A number of new features and behavioral changes arising from this evolution are on their way to being characterized as the new experience elements of next-generation television. Method: A series of expert reviews by television UX professionals, based on the AHP (Analytic Hierarchy Process), was conducted to check the relative appropriateness of applying gesture interaction to a number of selected television user experience scenarios. Conclusion: It is critical not to indiscriminately apply new interaction techniques like gesture to television; doing so may be effective for demonstrating new technology but generally results in a poor user experience, so consistent validation of practical appropriateness in a real context is imperative. Application: This research will be helpful in applying gesture interaction to next-generation television to deliver an optimal user experience.
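The AHP step behind such an expert review can be sketched as follows: experts fill a pairwise comparison matrix over scenarios, and priority weights are derived from it, here via the common geometric-mean approximation rather than the principal eigenvector. The matrix values are invented for illustration, not the study's data.

```python
# Sketch of AHP priority derivation using the geometric-mean
# approximation. Comparison values are illustrative only.
import math

def ahp_weights(matrix):
    # Geometric mean of each row, normalized to sum to 1.
    gmeans = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Three hypothetical gesture-TV scenarios; matrix[i][j] states how much
# scenario i is preferred over j (Saaty 1-9 scale, reciprocals below
# the diagonal).
comparisons = [
    [1.0,   3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
w = ahp_weights(comparisons)
print([round(x, 3) for x in w])  # first scenario receives the largest weight
```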