• Title/Abstract/Keyword: Finger Pointing

Search results: 16 items (processing time: 0.023 seconds)

Gaze Effects on Spatial and Kinematic Characteristics in Pointing to a Remembered Target

  • Ryu, Young-Uk;Kim, Won-Dae;Kim, Hyeong-Dong
    • 한국전문물리치료학회지
    • /
    • Vol. 13, No. 4
    • /
    • pp.23-29
    • /
    • 2006
  • The purpose of the present study was to examine gaze effects on spatial and kinematic characteristics during a pointing task. Subjects were asked to watch and point to an aimed target (2 mm in diameter) displayed on a vertically mounted board. Four gaze conditions were defined as "seeing-aiming" combinations of eye movements: Focal-Focal (F-F), Focal-Fixing (F-X), Fixing-Focal (X-F), and Fixing-Fixing (X-X). Both the home target and the aimed target were presented for 1 second and then disappeared in F-F and X-F; in X-F and X-X, only the aimed target disappeared after 1 second. Subjects were asked to point accurately (with the index fingertip) to the aimed target as soon as it was removed. A significant main effect of gaze was found for normalized movement time (p<.01). Peripheral-retina targets showed significantly larger absolute error than central-retina targets on the x (medio-lateral) and z (superior-inferior) axes (p<.01), and significant undershooting of peripheral-retina targets on the x axis was found (p<.01). F-F and X-F had larger peak velocities than F-X and X-X (p<.01), and were characterized by more time spent in the deceleration phase (p<.01). The present study demonstrates that central vision uses a form of on-line visual processing to reach an object, thereby increasing spatial accuracy, whereas peripheral vision relies on relatively off-line visual processing with a dependency on proprioceptive information.

  • PDF
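The kinematic measures reported in the abstract above (peak velocity, time spent in the deceleration phase) can be derived from a sampled fingertip trajectory. The following is a minimal, hypothetical sketch of that computation; the sampling rate, the trajectory data, and the definition of the deceleration phase as "movement time after the velocity peak" are illustrative assumptions, not taken from the study.

```python
def kinematics(positions, dt):
    """Return (peak_velocity, deceleration_fraction) for a 1-D trajectory.

    positions: fingertip positions sampled every dt seconds (assumed data).
    """
    # Finite-difference velocity between successive samples.
    velocities = [(b - a) / dt for a, b in zip(positions, positions[1:])]
    peak = max(velocities)
    i_peak = velocities.index(peak)
    total_time = (len(positions) - 1) * dt
    # Deceleration phase: movement time elapsed after the velocity peak.
    decel_time = total_time - (i_peak + 1) * dt
    return peak, decel_time / total_time

# A bell-shaped profile skewed toward a long deceleration phase, as
# reported for the F-F and X-F conditions (values are made up).
traj = [0.0, 0.5, 1.5, 3.0, 4.0, 4.6, 4.9, 5.0]
peak, frac = kinematics(traj, dt=0.01)
```

A longer deceleration fraction, as computed here, is the signature of on-line visual correction near the target that the study attributes to central vision.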

On a Multi-Agent System for Assisting Human Intention

  • Tawaki, Hajime;Tan, Joo Kooi;Kim, Hyoung-Seop;Ishikawa, Seiji
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 2003 ICCAS
    • /
    • pp.1126-1129
    • /
    • 2003
  • In this paper, we propose a multi-agent system for assisting those who need help taking nearby objects. Imagine a person lying in bed who wishes to take an object on a distant table that cannot be reached just by stretching out a hand. The proposed multi-agent system is composed of three main independent agents: a vision agent, a robot agent, and a pass agent. Once a human expresses his/her intention by pointing to a particular object with a hand and a finger, these agents cooperatively bring the object to him/her. Natural communication between the human and the multi-agent system is realized in this way. Performance of the proposed system is demonstrated in an experiment in which a human intends to take one of four objects on the floor and the three agents successfully cooperate to locate the object and bring it to the human.

  • PDF
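The vision/robot/pass pipeline described above can be sketched as a chain of cooperating agents. This is a hypothetical illustration only: the agent classes, the bearing-based object selection, and the message strings are invented for the sketch and are not the paper's actual architecture.

```python
class VisionAgent:
    """Resolves a pointing gesture to one of the known objects."""
    def locate(self, pointing_direction, objects):
        # Pick the object whose bearing (degrees) best matches the
        # direction the finger points in (a simplifying assumption).
        return min(objects, key=lambda o: abs(o["bearing"] - pointing_direction))

class RobotAgent:
    """Fetches the object the vision agent selected."""
    def fetch(self, obj):
        return f"fetched {obj['name']}"

class PassAgent:
    """Hands the fetched object to the human."""
    def deliver(self, fetch_result):
        return fetch_result.replace("fetched", "delivered")

objects = [{"name": "cup", "bearing": 10.0}, {"name": "book", "bearing": 45.0}]
target = VisionAgent().locate(pointing_direction=42.0, objects=objects)
result = PassAgent().deliver(RobotAgent().fetch(target))
```

The point of the decomposition is that each agent acts independently and cooperation emerges from passing the intermediate result along, mirroring the three-agent experiment in the paper.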

Gesture based Natural User Interface for e-Training

  • Lim, C.J.;Lee, Nam-Hee;Jeong, Yun-Guen;Heo, Seung-Il
    • 대한인간공학회지
    • /
    • Vol. 31, No. 4
    • /
    • pp.577-583
    • /
    • 2012
  • Objective: This paper describes the process and results of developing a gesture recognition-based natural user interface (NUI) for a vehicle maintenance e-Training system. Background: E-Training refers to education and training that builds and improves the capabilities needed to perform tasks by using information and communication technology (simulation, 3D virtual reality, and augmented reality), devices (PC, tablet, smartphone, and HMD), and environments (wired/wireless internet and cloud computing). Method: Palm movement captured by a depth camera is used as a pointing device, and finger movement, extracted using the OpenCV library, serves as the selection protocol. Results: The proposed NUI allows trainees to control objects, such as cars and engines, on a large screen through gesture recognition. It also includes a learning environment for understanding the procedure of assembling or disassembling certain parts. Conclusion: Future work concerns gesture recognition for multiple trainees. Application: The results of this interface can be applied not only to e-Training systems but also to other systems such as digital signage, tangible games, and 3D content control.
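The Method section above uses hand position in a depth frame as a pointer. A common heuristic for this, when the hand is extended toward the sensor, is to treat the closest valid depth reading as the fingertip. The sketch below is a pure-Python stand-in for illustration; the paper itself used a depth camera with OpenCV, and the frame data and zero-means-no-reading convention here are assumptions.

```python
def fingertip(depth_frame):
    """Return (row, col) of the smallest positive depth value.

    depth_frame: 2-D list of depth readings in mm; 0 means no reading
    (an assumed convention, typical of depth sensors).
    """
    best = None
    for r, row in enumerate(depth_frame):
        for c, d in enumerate(row):
            if d > 0 and (best is None or d < best[0]):
                best = (d, r, c)
    return (best[1], best[2]) if best else None

# Tiny synthetic frame: the extended fingertip (620 mm) is nearer to
# the camera than the rest of the hand and the background.
frame = [
    [0,   900, 850],
    [880, 620, 800],
    [870, 640, 830],
]
pos = fingertip(frame)
```

In a real pipeline the resulting pixel coordinate would be smoothed over time and mapped to screen coordinates to drive the large-screen cursor.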

The Development of a Haptic Interface for Interacting with BIM Elements in Mixed Reality

  • Cho, Jaehong;Kim, Sehun;Kim, Namyoung;Kim, Sungpyo;Park, Chaehyeon;Lim, Jiseon;Kang, Sanghyeok
    • 국제학술발표논문집
    • /
    • The 9th International Conference on Construction Engineering and Project Management
    • /
    • pp.1179-1186
    • /
    • 2022
  • Building Information Modeling (BIM) is widely used to efficiently share, utilize, and manage information generated in every phase of a construction project. Recently, mixed reality (MR) technologies have been introduced to utilize BIM elements more effectively. This study deals with haptic interaction between humans and BIM elements in MR to improve BIM usability. As a first step toward interacting with virtual objects in mixed reality, we attempted to move a virtual object to a desired location using finger-pointing. This paper presents the development of a haptic interface system in which users can interact with a BIM object to move it to a desired location in MR. The interface system consists of an MR-based head-mounted display (HMD) and a mobile application developed using Unity 3D. The study defines two segments used to compute the scale factor and rotation angle of the virtual object to be moved. In a test with a cuboid, users could successfully move it to the desired location. The developed MR-based haptic interface can be used to align BIM elements overlaid on the real world at a construction site.

  • PDF
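The abstract above mentions computing a scale factor and rotation angle from two segments. One plausible reading, sketched here under stated assumptions, is that the two segments are vectors from the user to the object's current position and to the pointed target position; the scale factor is their length ratio and the rotation angle is the angle between them. The names and geometry are guesses for illustration; the paper does not give its exact formulas.

```python
import math

def scale_and_rotation(seg_current, seg_target):
    """Scale factor and signed rotation (degrees) between two 2-D segments,
    each given as a vector (x, y) from the user (an assumed convention)."""
    (x1, y1), (x2, y2) = seg_current, seg_target
    scale = math.hypot(x2, y2) / math.hypot(x1, y1)
    angle = math.atan2(y2, x2) - math.atan2(y1, x1)
    return scale, math.degrees(angle)

# Object currently 2 m straight ahead; the user points to a spot 4 m
# away, 30 degrees to the side (illustrative values).
s, a = scale_and_rotation((2.0, 0.0), (2 * math.sqrt(3), 2.0))
```

Applying the resulting scale and rotation to the current segment yields the target segment, which is how such a transform would reposition the virtual cuboid.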

MEMS 기반 손가락 착용형 컴퓨터 입력장치 (A MEMS-Based Finger-Wearable Computer Input Device)

  • 김창수;정세현
    • 한국정보통신학회논문지
    • /
    • Vol. 20, No. 6
    • /
    • pp.1103-1108
    • /
    • 2016
  • With the development of various sensor technologies, ordinary users increasingly encounter motion-recognition devices such as smartphones and game consoles, and user demand for motion-recognition-based input devices is growing. Existing motion-recognition mice are built in small form factors that make their buttons difficult to operate, and because motion recognition is used only for cursor pointing, its application remains limited. This paper therefore studies a computer input device that uses a MEMS-based motion recognition sensor to recognize the motion of two points on the body (the thumb and the index finger), generates motion data and control signals, and transmits the generated control signals wirelessly.

MEMS 기반 손가락 착용형 컴퓨터 입력장치에 관한 연구 (A Study of a MEMS-Based Finger-Wearable Computer Input Device)

  • 김창수;정세현
    • 한국정보통신학회:학술대회논문집
    • /
    • 한국정보통신학회 2016 Spring Conference
    • /
    • pp.791-793
    • /
    • 2016
  • With the development of various sensor technologies, ordinary users increasingly encounter motion-recognition devices such as smartphones and game consoles (e.g., the Nintendo Wii), and user demand for motion-recognition-based input devices is growing. An existing motion-recognition mouse carries modified mouse buttons on its exterior, serving as the left/right buttons and wheel, while an internal accelerometer (possibly with a gyroscope) drives the mouse cursor; its small form factor makes the buttons difficult to operate, and because motion recognition is used only for cursor pointing, its application remains limited. This paper therefore studies a computer input device that uses a MEMS-based Motion Recognition Sensor to recognize the motion of two points on the body (the thumb and the index finger), generates motion data, compares it against a predefined matching table (cursor movement and mouse-button events) to produce control signals, and transmits the generated control signals wirelessly.

  • PDF
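The predefined matching table described in the abstract above, which maps recognized thumb/index motion patterns to cursor and mouse-button events, can be sketched as a simple lookup. The pattern encoding and event names below are assumptions invented for illustration; the paper does not specify its table contents.

```python
# Hypothetical matching table: (thumb motion, index motion) -> event.
MATCHING_TABLE = {
    ("thumb_tap",   "index_still"): "left_click",
    ("thumb_still", "index_tap"):   "right_click",
    ("thumb_still", "index_move"):  "cursor_move",
}

def to_control_signal(thumb_motion, index_motion):
    """Compare sensed motion against the table and return the control
    signal to be transmitted wirelessly; unmatched pairs produce no event."""
    return MATCHING_TABLE.get((thumb_motion, index_motion), "no_event")

signal = to_control_signal("thumb_tap", "index_still")
```

In the device described, the motion data feeding this lookup would come from the MEMS sensors on the two fingers, and the resulting signal would be sent to the host over the wireless link.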