• Title/Abstract/Keyword: Gesture-based interaction


Biosign Recognition based on the Soft Computing Techniques with Application to a Rehab-type Robot

  • Lee, Ju-Jang
    • 제어로봇시스템학회 학술대회논문집 / 2001 ICCAS / pp.29.2-29 / 2001
  • For the design of human-centered systems in which a human and a machine such as a robot form a human-in-the-loop system, human-friendly interaction/interface is essential. Human-friendly interaction is possible when the system is capable of recognizing human biosigns such as EMG signals, hand gestures and facial expressions, so that human intention and/or emotion can be inferred and used as a proper feedback signal. In this talk, we report our experiences of applying soft computing techniques, including fuzzy logic, ANN, GA and rough set theory, for efficiently recognizing various biosigns and for effective inference. More specifically, we first observe the characteristics of various forms of biosigns and propose a new way of extracting a feature set for such signals. Then we show a standardized procedure for inferring intention or emotion from the signals. Finally, we present examples of application to our rehabilitation robot.
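The fuzzy component of such an inference pipeline can be illustrated with a minimal sketch; the membership breakpoints and the intention labels below are illustrative assumptions, not values from the talk:

```python
# Sketch of a triangular fuzzy membership function, one building block of
# fuzzy inference over biosign features. All thresholds are illustrative.

def tri_membership(x, low, peak, high):
    """Degree (0..1) to which x belongs to a triangular fuzzy set."""
    if x <= low or x >= high:
        return 0.0
    if x <= peak:
        return (x - low) / (peak - low)
    return (high - x) / (high - peak)

def infer_grip_intention(emg_amplitude):
    """Map a normalized EMG amplitude to fuzzy degrees of three
    hypothetical intentions (labels are assumptions for illustration)."""
    return {
        "rest": tri_membership(emg_amplitude, -0.5, 0.0, 0.4),
        "hold": tri_membership(emg_amplitude,  0.2, 0.5, 0.8),
        "grip": tri_membership(emg_amplitude,  0.6, 1.0, 1.5),
    }
```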


Dynamic Association and Natural Interaction for Multi-Displays Using Smart Devices

  • 김민석;이재열
    • 한국CDE학회논문집 / Vol. 20, No. 4 / pp.337-347 / 2015
  • This paper presents a dynamic association and natural interaction method for multi-displays composed of smart devices. Users can intuitively associate relations among smart devices with shake gestures, flexibly modify the layout of the display with tilt gestures, and naturally interact with the multi-display through multi-touch interactions. First, users shake their smart devices to create and bind a group for a multi-display with a matrix configuration in an ad-hoc and collaborative situation. After the creation of the group, the display layout can, if needed, be flexibly changed by tilt gestures that move the tilted device to the nearest vacant cell in the matrix configuration. During the tilt gestures, the system automatically modifies the relation, view, and configuration of the multi-display. Finally, users can interact with the multi-display through multi-touch interactions just as they would with a single large display. Furthermore, depending on the context or role, a synchronous or asynchronous mode is available for providing a split view or another UI. We show the effectiveness and advantages of the proposed approach by demonstrating implementation results and evaluating the method through a usability study.
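The tilt-gesture relocation step described above can be sketched in a few lines; the grid coordinates, the occupancy representation, and the distance metric are assumptions for illustration:

```python
# Sketch of the tilt-gesture relocation step: the tilted device is moved to
# the nearest vacant cell of the matrix layout. Cells are (row, col) pairs.

def nearest_vacant_cell(tilted, occupied, rows, cols):
    """Return the vacant (row, col) cell closest to the tilted device,
    or None if the matrix is full."""
    vacant = [(r, c) for r in range(rows) for c in range(cols)
              if (r, c) not in occupied]
    if not vacant:
        return None
    return min(vacant, key=lambda cell: (cell[0] - tilted[0]) ** 2 +
                                        (cell[1] - tilted[1]) ** 2)

def relocate(tilted, occupied, rows, cols):
    """Move the tilted device to the nearest vacant cell; return the
    updated set of occupied cells."""
    target = nearest_vacant_cell(tilted, occupied, rows, cols)
    if target is None:
        return set(occupied)
    updated = set(occupied)
    updated.discard(tilted)
    updated.add(target)
    return updated
```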

A Study on Tactile and Gestural Controls of Driver Interfaces for In-Vehicle Systems

  • 심지성;이상헌
    • 한국CDE학회논문집 / Vol. 21, No. 1 / pp.42-50 / 2016
  • Traditional tactile controls such as push buttons and rotary switches may cause significant visual and biomechanical distractions if they are located away from the driver's line of sight and hand position, for example, on the center console. Gestural controls, as an alternative to traditional controls, are natural and can reduce visual distractions; however, their types and numbers are limited and they provide no feedback. To overcome these problems, a driver interface combining gestures with visual feedback on a head-up display has been proposed recently. In this paper, we investigated the effect of this type of interface in terms of driving performance measures. Human-in-the-loop experiments were conducted using a driving simulator with the traditional tactile and the new gesture-based interfaces. The experimental results showed that the new interface caused fewer visual distractions, better gap control between the ego and target vehicles, and better recognition of road conditions compared to the traditional one.

Recognition Method of Chinese Finger Numbers 2, 6, 8 Using Angle Information

  • 리평;이희성;김미혜
    • 한국게임학회 논문지 / Vol. 12, No. 6 / pp.121-130 / 2012
  • With the recent development of smart media, the demand for human-computer interaction (HCI) has grown, and research on gesture recognition using image processing is being actively conducted to meet it. This paper proposes a method for recognizing Chinese one-hand finger numbers using image processing. The input image is binarized based on skin color to extract the region of interest, and the number expressed by the user is identified from the angle information of the extended fingers. The proposed method recognized the Chinese finger numbers 2, 6 and 8 with a recognition rate of 95.83%.
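The angle rule can be sketched as follows: since the numbers 2, 6 and 8 all extend exactly two fingers, the angle between the two finger directions, measured from the palm, separates them. The threshold values below are illustrative assumptions, not the paper's:

```python
import math

# Hedged sketch of angle-based classification of Chinese one-hand numbers
# 2, 6 and 8, which all extend exactly two fingers.

def finger_angle(palm, tip_a, tip_b):
    """Angle in degrees between the two extended fingers, as seen from
    the palm center."""
    a = math.atan2(tip_a[1] - palm[1], tip_a[0] - palm[0])
    b = math.atan2(tip_b[1] - palm[1], tip_b[0] - palm[0])
    deg = abs(math.degrees(a - b))
    return min(deg, 360.0 - deg)

def classify_two_finger_number(angle_deg):
    """Map the inter-finger angle to 2, 6 or 8 (illustrative thresholds)."""
    if angle_deg < 35.0:
        return 2   # index + middle, nearly parallel
    if angle_deg < 90.0:
        return 8   # thumb + index, medium spread
    return 6       # thumb + pinky, widest spread
```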

Development of Multi Card Touch based Interactive Arcade Game System

  • 이동훈;조재익;윤태수
    • 한국엔터테인먼트산업학회논문지 / Vol. 5, No. 2 / pp.87-95 / 2011
  • Recently, the development of various interactive interfaces has drawn attention to motion-based game environments. This paper proposes a multi-card-touch-based interactive arcade system that combines a marker recognition interface, one of the augmented reality technologies, with a multi-touch interaction interface. A DI (diffused illumination) based recognition algorithm is applied to recognize the position and orientation of each card, and the user's hand gestures are tracked to provide various interaction metaphors. The system offers users a greater sense of immersion and a new experience, and can be applied to motion-based arcade game machines.
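Recovering a card's pose from tracked marker points might look like the following sketch; representing the marker by two reference points on the card is an assumption for illustration:

```python
import math

# Sketch of recovering a card's position and orientation from two tracked
# marker points (a "front" and a "back" point on the card) in table
# coordinates, as a DI-style tabletop tracker might report them.

def card_pose(front_pt, back_pt):
    """Return (center, heading in degrees) of a card from two marker
    points; heading 0 points along the +x axis."""
    cx = (front_pt[0] + back_pt[0]) / 2.0
    cy = (front_pt[1] + back_pt[1]) / 2.0
    heading = math.degrees(math.atan2(front_pt[1] - back_pt[1],
                                      front_pt[0] - back_pt[0]))
    return (cx, cy), heading
```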

Development of a Cost-Effective Tele-Robot System Delivering Speaker's Affirmative and Negative Intentions

  • 진용규;유수정;조혜경
    • 로봇학회논문지 / Vol. 10, No. 3 / pp.171-177 / 2015
  • A telerobot offers a more engaging and enjoyable interaction with people at a distance by communicating via audio, video, expressive gestures, body pose and proxemics. To provide its potential benefits at a reasonable cost, this paper presents a telepresence robot system for video communication which can deliver the speaker's head motion through its display stanchion. Head gestures such as nodding and head-shaking can convey crucial information during conversation. We can also estimate a speaker's eye gaze, known as one of the key non-verbal signals for interaction, from his/her head pose. To develop an efficient head tracking method, a 3D cylinder-like head model is employed, and the Harris corner detector is combined with the Lucas-Kanade optical flow, which is known to be suitable for extracting 3D motion information of the model. In particular, a skin color-based face detection algorithm is proposed to achieve robust performance under varying head directions while maintaining reasonable computational cost. The performance of the proposed head tracking algorithm is verified through experiments using BU's standard data sets. The design of the robot platform is also described, along with supporting systems such as video transmission and robot control interfaces.
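The skin color-based detection step can be illustrated with a classic RGB heuristic; the thresholds below are a commonly used rule of thumb, not the paper's own values:

```python
# Minimal sketch of a skin-color pixel test in RGB space, standing in for
# the paper's skin color-based face detector. Thresholds are a widely
# cited heuristic, not the paper's.

def is_skin_pixel(r, g, b):
    """Classic RGB skin rule: bright, red-dominant, moderately saturated."""
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and
            abs(r - g) > 15 and r > g and r > b)

def skin_ratio(pixels):
    """Fraction of (r, g, b) pixels classified as skin; a cheap cue for
    scoring candidate face regions."""
    if not pixels:
        return 0.0
    hits = sum(1 for (r, g, b) in pixels if is_skin_pixel(r, g, b))
    return hits / len(pixels)
```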

MPEG-U based Advanced User Interaction Interface System Using Hand Posture Recognition

  • 한국희;이인재;최해철
    • 방송공학회논문지 / Vol. 19, No. 1 / pp.83-95 / 2014
  • Recently, technologies for recognizing hands and fingers have attracted attention as a way to provide a natural and familiar environment for human-computer interaction (HCI). This paper proposes a method for detecting and recognizing hand and finger shapes using a depth camera, and an MPEG-U based advanced user interaction interface system that can interoperate with various devices using the recognition results. The proposed system detects the hand with the depth camera, locates the wrist, and extracts a minimal hand region. Fingertips are then detected from the minimal hand region, and bones are formed between the center of the region and each fingertip. Each finger is identified by analyzing the lengths of these bones and the angle differences between adjacent bones. When the user forms one of the hand-posture symbols defined in MPEG-U, the system recognizes the posture with the proposed method and expresses the result in an interoperable MPEG-U schema structure. Experiments show the performance of the proposed hand posture recognition method in various environments. To demonstrate interoperability, the recognition results are expressed as XML documents conforming to the MPEG-U Part 2 standard, and their standard conformance is verified with the MPEG-U reference software.
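The bone-length and bone-angle analysis can be sketched as follows; the thumb rule and its threshold are illustrative assumptions, not the paper's trained criteria:

```python
import math

# Sketch of the skeleton step: bones run from the palm center to each
# fingertip, and fingers are told apart by bone length and the angle
# between adjacent bones. Points are 2-D (x, y) pairs.

def bone_length(palm, tip):
    """Length of the palm-to-fingertip bone."""
    return math.hypot(tip[0] - palm[0], tip[1] - palm[1])

def bone_angle(palm, tip_a, tip_b):
    """Angle in degrees between two palm-to-fingertip bones."""
    a = math.atan2(tip_a[1] - palm[1], tip_a[0] - palm[0])
    b = math.atan2(tip_b[1] - palm[1], tip_b[0] - palm[0])
    deg = abs(math.degrees(a - b))
    return min(deg, 360.0 - deg)

def looks_like_thumb(palm, tip, neighbor_tip):
    """Illustrative rule: a bone that is shorter than its neighbor and
    widely separated from it is a thumb candidate."""
    return (bone_length(palm, tip) < bone_length(palm, neighbor_tip) and
            bone_angle(palm, tip, neighbor_tip) > 40.0)
```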

Designing Effective Virtual Training: A Case Study in Maritime Safety

  • Jung, Jinki;Kim, Hongtae
    • 대한인간공학회지 / Vol. 36, No. 5 / pp.385-394 / 2017
  • Objective: The aim of this study is to investigate how to design effective virtual reality-based training (i.e., virtual training) in maritime safety and to present methods for enhancing interface fidelity by employing immersive interaction and 3D user interface (UI) design. Background: Emerging virtual reality technologies and hardware make it possible to provide immersive experiences to individuals, and it has been suggested that improving fidelity can improve training efficiency. Such a sense of immersion can be utilized as an element for realizing effective training in the virtual space. Method: As an immersive interaction, we implemented gesture-based interaction using Leap Motion and Myo armband sensors. Hand gestures captured by both sensors are used to interact with the virtual appliance in the scenario. The proposed 3D UI design is employed to visualize appropriate information for tasks in training. Results: A usability study was carried out to evaluate the effectiveness of the proposed method. The usability scores for satisfaction, intuitiveness of the UI, ease of procedure learning, and equipment understanding showed that the virtual training-based exercise was superior to the existing training, and these improvements were independent of the type of input device used. Conclusion: Experiments show that the proposed interaction design yields more efficient interaction than the existing training method. Improving interface fidelity through intuitive and immediate feedback on the input device and the training information improves user satisfaction with the system as well as training efficiency. Application: The proposed design methods for an effective virtual training system can be applied to other areas in which trainees are required to perform sophisticated tasks with their hands.

Human-Computer Interaction Based Only on Auditory and Visual Information

  • Sha, Hui;Agah, Arvin
    • Transactions on Control, Automation and Systems Engineering / Vol. 2, No. 4 / pp.285-297 / 2000
  • One of the research objectives in the area of multimedia human-computer interaction is the application of artificial intelligence and robotics technologies to the development of computer interfaces. This involves utilizing many forms of media, integrating speech input, natural language, graphics, hand pointing gestures, and other methods for interactive dialogues. Although current human-computer communication methods include computer keyboards, mice, and other traditional devices, the two basic ways by which people communicate with each other are voice and gesture. This paper reports on research focusing on the development of an intelligent multimedia interface system modeled on the manner in which people communicate. This work explores the interaction between humans and computers based only on the processing of speech (words uttered by the person) and the processing of images (hand pointing gestures). The purpose of the interface is to control a pan/tilt camera to point it to a location specified by the user through the utterance of words and the pointing of the hand. The system utilizes another stationary camera to capture images of the user's hand and a microphone to capture the user's words. Upon processing the images and sounds, the system responds by pointing the camera. Initially, the interface uses hand pointing to locate the general position to which the user is referring, and then uses voice commands provided by the user to fine-tune the location and change the zoom of the camera, if requested. The image of the location is captured by the pan/tilt camera and sent to a color TV monitor to be displayed. This type of system has applications in tele-conferencing and other remote operations, where the system must respond to the user's commands in a manner similar to how the user would communicate with another person. The advantage of this approach is the elimination of the traditional input devices that the user must utilize in order to control a pan/tilt camera, replacing them with more "natural" means of interaction. A number of experiments were performed to evaluate the interface system with respect to its accuracy, efficiency, reliability, and limitations.
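The coarse pointing step can be sketched as a mapping from the hand's pointing ray to pan/tilt angles; the 3-D coordinate convention (x right, y down, z forward from the camera) is an assumption for illustration:

```python
import math

# Sketch of estimating pan/tilt camera angles from a pointing ray given by
# two 3-D points (e.g. a hand reference point and the fingertip).

def pointing_to_pan_tilt(origin, fingertip):
    """Pan and tilt (degrees) of the ray from origin through fingertip,
    with pan 0 / tilt 0 meaning straight ahead along +z."""
    dx = fingertip[0] - origin[0]
    dy = fingertip[1] - origin[1]
    dz = fingertip[2] - origin[2]
    pan = math.degrees(math.atan2(dx, dz))
    tilt = math.degrees(math.atan2(dy, math.hypot(dx, dz)))
    return pan, tilt
```

A voice command would then only need to nudge these coarse angles and the zoom, which matches the coarse-then-fine division of labor described above.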


Histogram Based Hand Recognition System for Augmented Reality

  • 고민수;유지상
    • 한국정보통신학회논문지 / Vol. 15, No. 7 / pp.1564-1572 / 2011
  • This paper proposes a histogram-based hand recognition technique for augmented reality. Hand gesture recognition enables friendly interaction between users and computers, but vision-based recognition is difficult because the complex shape of the hand makes the input image vary with the viewing direction. We therefore propose a new model that exploits the morphological features of the hand. In the proposed technique, hand recognition consists of a part that separates the hand region from the image acquired by the camera and a part that recognizes it. The background is removed from the acquired image, and the hand region is segmented using skin color information. Feature points of the hand are then obtained with a histogram to compute the hand shape. Finally, we implemented an augmented reality system that controls a 3D object using the recognized hand information. Experiments confirmed that the proposed technique runs fast, with a relatively high recognition rate of 91.7%.
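The histogram step can be illustrated with a toy column projection of a binary hand mask; the mask layout and the peak threshold are illustrative assumptions:

```python
# Sketch of a histogram projection over a binary hand mask: foreground
# pixels are summed per column, and runs of tall columns are counted as
# candidate fingers.

def column_histogram(mask):
    """Number of foreground (1) pixels in each column of a binary mask,
    given as a list of equal-length rows."""
    return [sum(row[c] for row in mask) for c in range(len(mask[0]))]

def count_peaks(hist, threshold):
    """Count runs of columns whose height exceeds the threshold."""
    peaks, inside = 0, False
    for h in hist:
        if h > threshold and not inside:
            peaks, inside = peaks + 1, True
        elif h <= threshold:
            inside = False
    return peaks
```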