• Title/Summary/Keyword: Human Computer Interface (HCI)

117 search results

Human-Computer Interface Based on Bio-Signal (생체신호 기반 사용자 인터페이스 기술)

  • Kim, J.S.;Kim, H.K.;Jeong, H.;Kim, K.H.;Im, S.H.;Son, W.H.
    • Electronics and Telecommunications Trends, v.20 no.4 s.94, pp.67-81, 2005
  • Bio-signal-based interface technology refers to techniques that use intentionally producible bio-signals, such as electromyogram (EMG) and electroencephalogram (EEG) signals, either as a human-computer interface that helps elderly or disabled users operate a computer, or to generate commands for controlling rehabilitation devices such as wheelchairs. Because a bio-signal-based interface attaches sensors to the body and uses signals generated naturally by the user's intention, it is expected to be useful as an interface for virtual reality, wearable computers, and people with physical disabilities. This paper discusses domestic and international technology trends in bio-signal-based interfaces and gives an overview of an HCI system currently under development.
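The abstract above describes turning bio-signals such as EMG into interface commands. As a rough illustration of the idea (not taken from the paper; the threshold, window scheme, and command names are invented for this sketch), a contraction-intensity threshold over a window of rectified EMG samples can be mapped to discrete commands:

```python
import numpy as np

def emg_to_command(samples, threshold=0.5):
    """Map a window of rectified EMG samples to a simple interface command.

    Hypothetical scheme: a sustained contraction above `threshold` is read
    as "select", a short burst as "move", and anything else as idle.
    """
    rectified = np.abs(np.asarray(samples, dtype=float))
    duty = (rectified > threshold).mean()  # fraction of window above threshold
    if duty > 0.6:
        return "select"
    elif duty > 0.2:
        return "move"
    return "idle"
```

A real system would band-pass filter the raw signal and calibrate the threshold per user; the sketch only shows the signal-to-command mapping step.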

MPEG-U-based Advanced User Interaction Interface Using Hand Posture Recognition

  • Han, Gukhee;Choi, Haechul
    • IEIE Transactions on Smart Processing and Computing, v.5 no.4, pp.267-273, 2016
  • Hand posture recognition is an important technique for enabling a natural and familiar interface in the human-computer interaction (HCI) field. This paper introduces a hand posture recognition method using a depth camera. Moreover, the method is incorporated into the Moving Picture Experts Group Rich Media User Interface (MPEG-U) Advanced User Interaction (AUI) Interface (MPEG-U part 2), which can provide a natural interface on a variety of devices. The proposed method first detects the positions and lengths of all extended fingers, and then recognizes the hand posture from the pose of one or two hands and the number of folded fingers when a user presents a gesture representing a pattern in the AUI data format specified in MPEG-U part 2. The AUI interface represents the user's hand posture in the compliant MPEG-U schema structure. Experimental results demonstrate the performance of the hand posture recognition system and verify that the AUI interface is compatible with the MPEG-U standard.

Development of a Hand-posture Recognition System Using 3D Hand Model (3차원 손 모델을 이용한 비전 기반 손 모양 인식기의 개발)

  • Jang, Hyo-Young;Bien, Zeung-Nam
    • Proceedings of the KIEE Conference, 2007.04a, pp.219-221, 2007
  • The recent shift toward ubiquitous computing requires more natural human-computer interaction (HCI) interfaces that provide high information accessibility. Hand gestures, i.e., gestures performed by one or two hands, are emerging as a viable technology to complement or replace conventional HCI technology. This paper deals with hand-posture recognition, for which constructing a hand-posture database is essential. The human hand is composed of 27 bones, and the movement of its joints is modeled by 23 degrees of freedom. Even for the same hand posture, captured images may differ depending on the user's characteristics and the relative position between the hand and the cameras. To resolve the difficulty of defining hand postures and to construct a database of manageable size, we present a method using a 3D hand model. The database is built from the hand-joint angles for each posture and the corresponding silhouette images obtained by projecting the model onto image planes from many viewpoints. The proposed method requires no additional equations to define the movement constraints of each joint, and it makes it easy to obtain images of one hand posture from many viewpoints and distances, so the database can be constructed more precisely and concretely. The validity of the method is evaluated by applying it to a hand-posture recognition system.

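The database-construction step described above projects a 3D hand model onto image planes from many viewpoints. A minimal pinhole-camera sketch of that projection (the focal length, principal point, and function names are assumptions; the paper's actual camera model is not given) might look like:

```python
import numpy as np

def project_joints(joints_3d, rotation, focal=500.0, center=(320.0, 240.0)):
    """Project 3D hand-joint positions onto a 2D image plane for one viewpoint.

    joints_3d: (N, 3) array of joint positions in world coordinates.
    rotation:  (3, 3) rotation matrix taking world to camera coordinates.
    Returns an (N, 2) array of pixel coordinates (simple pinhole model).
    """
    cam = joints_3d @ rotation.T              # rotate into the camera frame
    x = focal * cam[:, 0] / cam[:, 2] + center[0]
    y = focal * cam[:, 1] / cam[:, 2] + center[1]
    return np.stack([x, y], axis=1)
```

Rendering the projected joints as filled silhouettes, and sweeping `rotation` over many viewpoints, would yield the kind of multi-view image database the abstract describes.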

User modeling agent using natural language interface for information retrieval in WWW (자연언어 대화 Interface를 이용한 정보검색 (WWW)에 있어서 사용자 모델 에이젼트)

  • Kim, Do-Wan;Park, Jae-Deuk;Park, Dong-In
    • Annual Conference on Human and Language Technology, 1996.10a, pp.75-84, 1996
  • Natural language is the most natural means of human communication. This paper describes the model-building techniques and the role of a user modeling agent (user modeling system) in natural-language conversational information retrieval on the Internet (WWW). A user model corresponds to a human mental model: whereas a mental model is the model a user holds of the system, of their own problem situation, or of their environment, a user model is the model the system forms of the user by representing the user's knowledge and problem situation. User models are therefore essential for supporting intelligent Human-Computer Interaction (HCI). As concrete examples of user-model construction and support for an intelligent dialogue model, this paper describes the user modeling system $BGP-MS^2$ and the knowledge-base structure built for constructing user models.


Real-time Avatar Animation using Component-based Human Body Tracking (구성요소 기반 인체 추적을 이용한 실시간 아바타 애니메이션)

  • Lee Kyoung-Mi
    • Journal of Internet Computing and Services, v.7 no.1, pp.65-74, 2006
  • Human tracking is a prerequisite for advanced human-computer interfaces (HCI). This paper proposes a method that uses a component-based human model, detects body parts, estimates human postures, and animates an avatar. Each body part carries color, connection, and location information and is matched to the corresponding component of the human model. For human tracking, the 2D information of the human posture is used by computing similarities between frames. Depth is decided by the relative location between components and is converted into a moving direction to build a 2-1/2D human model. While each body part is modeled by posture and direction, the corresponding component of a 3D avatar is rotated in 3D using the information transferred from the human model. We achieved a 90% tracking rate on a test video containing a variety of postures, and the rate increased as the proposed system processed more frames.


NUI/NUX of the Virtual Monitor Concept using the Concentration Indicator and the User's Physical Features (사용자의 신체적 특징과 뇌파 집중 지수를 이용한 가상 모니터 개념의 NUI/NUX)

  • Jeon, Chang-hyun;Ahn, So-young;Shin, Dong-il;Shin, Dong-kyoo
    • Journal of Internet Computing and Services, v.16 no.6, pp.11-21, 2015
  • With growing interest in Human-Computer Interaction (HCI), research on HCI has been actively conducted, including work on Natural User Interface/Natural User eXperience (NUI/NUX) that uses a user's gestures and voice. NUI/NUX needs recognition algorithms such as gesture or voice recognition, but these algorithms are complex to implement and require considerable training time, because they must go through preprocessing, normalization, and feature extraction. Recently, Microsoft's Kinect has attracted attention as an NUI/NUX development tool, and studies using Kinect have been conducted. In a previous study, the authors implemented a hand-mouse interface with outstanding intuitiveness using the physical features of the user; however, it suffered from unnatural mouse movement and low accuracy of the mouse functions. In this study, we design and implement a hand-mouse interface that introduces a new concept called the 'virtual monitor', extracting the user's physical features through Kinect in real time. The virtual monitor is a virtual space that can be controlled by the hand mouse, such that coordinates on the virtual monitor are accurately mapped onto coordinates on the real monitor. The hand-mouse interface based on the virtual-monitor concept keeps the outstanding intuitiveness of the previous study while enhancing the accuracy of the mouse functions. We further increased accuracy by recognizing the user's unnecessary actions through a concentration indicator derived from the user's electroencephalogram (EEG) data. To evaluate intuitiveness and accuracy, we tested the interface on 50 people ranging from their teens to their fifties. In the intuitiveness experiment, 84% of the subjects learned how to use it within one minute. In the accuracy experiment, the mouse functions achieved drag 80.4%, click 80%, and double-click 76.7%. Having verified its intuitiveness and accuracy through experiments, we expect this hand-mouse interface to be a good example of controlling systems by hand in the future.
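The core of the virtual-monitor idea above is mapping coordinates on the virtual plane to the real screen. A minimal sketch of such a mapping (the rectangle parameters and names are assumptions; the paper derives the virtual plane from the user's physical features via Kinect):

```python
def map_to_real(point, virtual_rect, real_size):
    """Map a hand position on the virtual monitor to real-screen pixels.

    point:        (x, y) hand position in sensor coordinates.
    virtual_rect: (x0, y0, width, height) of the virtual plane, assumed
                  to have been calibrated from the user's body features.
    real_size:    (width, height) of the real monitor in pixels.
    """
    x0, y0, vw, vh = virtual_rect
    rw, rh = real_size
    px = (point[0] - x0) / vw * rw   # plain linear rescale in x
    py = (point[1] - y0) / vh * rh   # plain linear rescale in y
    # clamp to the screen so off-plane hand positions stay on-screen
    return (min(max(px, 0), rw - 1), min(max(py, 0), rh - 1))
```

The abstract's accuracy gains come from calibrating `virtual_rect` per user and gating input with the EEG concentration indicator; this sketch shows only the coordinate mapping.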

Appraising the Interface Features of Web Search Engines Based on User-defined Relevance Criteria (이용자정의형 적합성 기준을 토대로 한 웹검색엔진 인터페이스 평가)

  • Kim, Yang-Woo
    • Journal of the Korean BIBLIA Society for Library and Information Science, v.22 no.1, pp.247-262, 2011
  • Although research has identified various dimensions of relevance along with exhaustive lists of relevance criteria, there seems to have been less effort to apply those findings to improving actual systems design. On this assumption, this paper investigates to what extent relevance criteria have been incorporated into the interface features of major commercial Web search engines, and suggests what can and should be done further. Before examining the actual system features, the paper compares recent relevance research in Information Science with other human-factor studies, both in Information Science and in its neighboring discipline (HCI), in an attempt to identify studies that are conceptually similar to relevance research but not named as such. Similarities and differences between these studies are presented. Recommendations for applicable interface features include: 1) further personalization of interface designs; 2) author-supplied meta tags for Web contents; and 3) extension of beyond-topical representations based on link structure.

A Study on the Tangible Interface Design System -With Emphasis on the Prototyping & Design Methods of Tangibles - (실체적 인터페이스 디자인 시스템에 관한 연구 - 텐저블즈의 설계 및 프로토타입 구현을 중심으로 -)

  • 최민영;임창영
    • Archives of Design Research, v.17 no.2, pp.5-14, 2004
  • Research areas such as ubiquitous computing and augmented reality have recently sought to bring human capacities of control and sensation, long overlooked, into Human-Computer Interaction (HCI). This new vision of HCI is embodied in the Tangible User Interface (TUI). A TUI allows users to grasp and manipulate bits through everyday physical objects and architectural surfaces, and it makes users aware of background objects at the periphery of human perception through ambient display media such as light, sound, airflow, and water movement. Tangibles, the physical objects that constitute a TUI system, are physical objects embodying digital bits; they are not only input devices but also part of the computing configuration. To get feedback on a computing result, the user controls the system with Tangibles as an action, and the system presents a reaction in response; the user then perceives that reaction through digital representations (sound, graphic information) and physical representations (form, size, location, direction, etc.). These characteristics of TUI require consideration of both the user's action and the system's reaction, so a design method is needed that treats the physical object together with interaction combining action, reaction, and feedback.


A Study on the Multi-Modal Browsing System by Integration of Browsers Using Java RMI (자바 RMI를 이용한 브라우저 통합에 의한 멀티-모달 브라우징 시스템에 관한 연구)

  • Jang Joonsik;Yoon Jaeseog;Kim Gukboh
    • Journal of Internet Computing and Services, v.6 no.1, pp.95-103, 2005
  • Recently, multi-modal systems have been studied widely and actively. Such systems increase the possibility of realizing HCI (Human-Computer Interaction), can provide information in various ways, and are applicable to e-business applications. If an ideal multi-modal system is realized in the future, users will ultimately be able to maximize hands-free, eyes-free interaction between people and information devices. In this paper, a new multi-modal browsing system that integrates an HTML browser and a voice browser using Java RMI as the communication interface is proposed, and an English-English dictionary search application is implemented as an example.


Interface of Interactive Contents using Vision-based Body Gesture Recognition (비전 기반 신체 제스처 인식을 이용한 상호작용 콘텐츠 인터페이스)

  • Park, Jae Wan;Song, Dae Hyun;Lee, Chil Woo
    • Smart Media Journal, v.1 no.2, pp.40-46, 2012
  • In this paper, we describe interactive content whose input interface is vision-based body-gesture recognition. Because the content takes the imp, a figure familiar across Asian culture, as its subject, players can enjoy it with cultural familiarity, and since they fight the imp in the game with their own gestures, they are naturally absorbed in it. Users can also choose among multiple endings at the end of the scenario. For gesture recognition, Kinect is used to obtain the three-dimensional coordinates of each limb joint, capturing the static poses that make up the actions. Vision-based 3D human-pose recognition is used to convey human gestures in HCI (Human-Computer Interaction): a 2D pose model can recognize only simple 2D poses in particular environments, whereas a 3D pose model, which describes the 3D skeletal structure of the human body, can recognize more complex poses because it can use joint angles and the shape information of body parts. Because gestures can be represented as sequences of static poses, we recognize gestures composed of such poses using an HMM. Using the gesture-recognition result as the input interface, the content can be controlled naturally with the user's gestures alone, and we aimed to improve immersion and interest through real-time interaction with the imp.

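The recognition pipeline in the last abstract scores sequences of static poses with HMMs. A toy sketch of that step (discrete pose labels and made-up model parameters; the paper's pose vocabulary and trained models are not given) using the forward algorithm:

```python
import numpy as np

def hmm_log_likelihood(obs, start, trans, emit):
    """Score a pose-label sequence against one gesture HMM (forward algorithm).

    obs:   list of discrete pose indices observed over time.
    start: (S,) initial state probabilities.
    trans: (S, S) state-transition probabilities.
    emit:  (S, P) per-state emission probabilities over pose labels.
    """
    alpha = start * emit[:, obs[0]]           # initialize forward variable
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]  # propagate and absorb emission
    return np.log(alpha.sum())

def classify_gesture(obs, models):
    """Pick the gesture whose HMM assigns the sequence the highest likelihood."""
    return max(models, key=lambda name: hmm_log_likelihood(obs, *models[name]))
```

In practice each gesture's `start`/`trans`/`emit` would be trained (e.g. Baum-Welch) from labeled pose sequences; the sketch shows only the scoring and selection step.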