• Title/Summary/Keywords: User Tracking

대화형 방송 환경에서 부가서비스 제공을 위한 객체 추적 시스템 (Object Tracking System for Additional Service Providing under Interactive Broadcasting Environment)

  • 안준한;변혜란
    • 한국정보과학회논문지:정보통신
    • /
    • Vol. 29, No. 1
    • /
    • pp.97-107
    • /
    • 2002
  • This paper proposes a new method for interactive broadcasting in which, instead of navigating a top-down menu, the viewer selects an object directly within the broadcast video frame and receives the additional service associated with that object. This requires two core technologies: synchronizing the live video stream with object information (position, size, shape), and tracking objects within the video. Synchronization between the video and the object information was implemented with Microsoft's DirectShow. For tracking, objects are divided into people and things: human faces are tracked with a model-based face tracking method, while other objects are tracked with a motion-based tracking method that follows a user-designated region. The motion-based tracker was improved so that even fast-moving objects can be tracked accurately without enlarging the search region, and the model-based tracker was improved with a face model combining an elliptical shape model and a color model so that faces are tracked accurately even when they rotate.
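The motion-based region tracking described above can be sketched as template matching over a local search window. Below is a minimal pure-Python illustration, assuming a grayscale frame stored as a list of pixel rows and sum-of-absolute-differences (SAD) as the matching cost; the function names and the SAD cost are assumptions for illustration, and the paper's actual system additionally integrates a model-based face tracker.

```python
def sad(patch_a, patch_b):
    """Sum of absolute differences between two equal-sized patches."""
    return sum(abs(a - b) for row_a, row_b in zip(patch_a, patch_b)
               for a, b in zip(row_a, row_b))

def extract(frame, x, y, w, h):
    """Cut a w x h patch whose top-left corner is (x, y)."""
    return [row[x:x + w] for row in frame[y:y + h]]

def track(frame, template, prev_x, prev_y, search=2):
    """Return the (x, y) in `frame` whose patch best matches `template`,
    searching +/- `search` pixels around the previous object position."""
    h, w = len(template), len(template[0])
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = prev_x + dx, prev_y + dy
            if x < 0 or y < 0 or y + h > len(frame) or x + w > len(frame[0]):
                continue  # candidate window falls outside the frame
            cost = sad(extract(frame, x, y, w, h), template)
            if best is None or cost < best[0]:
                best = (cost, x, y)
    return best[1], best[2]
```

Because the search is confined to a small window around the previous position, the per-frame cost stays low enough for real-time use, which is the constraint the paper works under.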

Investigating Key User Experience Factors for Virtual Reality Interactions

  • Ahn, Junyoung;Choi, Seungho;Lee, Minjae;Kim, Kyungdoh
    • 대한인간공학회지
    • /
    • Vol. 36, No. 4
    • /
    • pp.267-280
    • /
    • 2017
  • Objective: The aim of this study is to investigate key user experience factors of interactions for Head Mounted Display (HMD) devices in the Virtual Reality Environment (VRE). Background: Virtual reality interaction research has been conducted steadily as interaction methods and virtual reality devices have improved. Recently released virtual reality devices are all head-mounted-display based, and HMD-based interaction types include Remote Controller, Head Tracking, and Hand Gesture. However, there are few studies on the usability evaluation of virtual reality, and the usability of HMD-based virtual reality in particular has not been investigated. It is therefore necessary to study the usability of HMD-based virtual reality. Method: HMD-based VR devices released recently support only three interaction types: 'Remote Controller', 'Head Tracking', and 'Hand Gesture'. We surveyed 113 studies to identify the user experience factors or evaluation scales used for each interaction type. The key user experience factors and relevant evaluation scales are then summarized according to how frequently they appear in the literature. Results: The key user experience factors differ by interaction type. The Remote Controller's key factors are 'Ease of learning', 'Ease of use', 'Satisfaction', 'Effectiveness', and 'Efficiency'. Head Tracking's key factors are 'Sickness', 'Immersion', 'Intuitiveness', 'Stress', 'Fatigue', and 'Ease of learning'. Finally, Hand Gesture's key factors are 'Ease of learning', 'Ease of use', 'Feedback', 'Consistent', 'Simple', 'Natural', 'Efficiency', 'Responsiveness', 'Usefulness', 'Intuitiveness', and 'Adaptability'. Conclusion: We identified key user experience factors for each interaction type through a literature review. However, we did not consider objective measures, because each study adopted different performance factors. Application: The results of this study can be used when evaluating HMD-based interactions in virtual reality in terms of usability.
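The summarization step described in the Method section (ranking factors by how often they appear across surveyed studies) can be sketched with a frequency count. The study names and factor lists below are illustrative placeholders, not the paper's actual survey data.

```python
from collections import Counter

def key_factors(studies, top_n=3):
    """Count factor occurrences across all studies, most frequent first."""
    counts = Counter(factor for factors in studies.values() for factor in factors)
    return counts.most_common(top_n)

# Illustrative input: which UX factors each (hypothetical) study reported.
studies = {
    "study_a": ["Ease of learning", "Ease of use", "Satisfaction"],
    "study_b": ["Ease of learning", "Immersion"],
    "study_c": ["Ease of learning", "Ease of use"],
}
print(key_factors(studies, top_n=2))
# → [('Ease of learning', 3), ('Ease of use', 2)]
```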

Development of 3-D viewer for indoor location tracking system using wireless sensor network

  • Yang, Chi-Shian;Chung, Wan-Young
    • 센서학회지
    • /
    • Vol. 16, No. 2
    • /
    • pp.110-114
    • /
    • 2007
  • In this paper we present 3-D Navigation View, a three-dimensional visualization of an indoor environment that serves as an intuitive, unified user interface for our indoor location tracking system, built with the Virtual Reality Modeling Language (VRML) in a web environment. The user's spatial information extracted from the indoor location tracking system is processed to indicate the user's location in the virtual 3-D indoor environment according to his or her position in the physical world. The External Authoring Interface (EAI) provided by VRML enables the integration of interactive 3-D graphics into the web and direct communication with the embedded Java applet, which periodically updates the user's position and viewpoint in the 3-D indoor environment. Because any web browser with a VRML viewer plug-in can run the platform-independent 3-D Navigation View, no specialized or expensive hardware or software is required.

사용자 캘리브레이션이 필요 없는 시선 추적 모델 연구 (User-Calibration Free Gaze Tracking System Model)

  • 고은지;김명준
    • 한국정보통신학회논문지
    • /
    • Vol. 18, No. 5
    • /
    • pp.1096-1102
    • /
    • 2014
  • In indirect gaze tracking systems that use infrared illumination, calibration of the positions of the illuminator reflections on the pupil in the captured image is essential. However, because variables that can change with eye size or head position are incorporated into the computation as constants defined during calibration, there is a limit to how far the error can be reduced. This paper studies a method that uses infrared illumination but omits the per-user calibration step. The goal is to make the system insensitive to glint position differences caused by the reflection angle, while keeping the system model and gaze calculation simple enough for real-time computation.

스마트 폰 추적 및 색상 통신을 이용한 동작인식 플랫폼 개발 (Development of Motion Recognition Platform Using Smart-Phone Tracking and Color Communication)

  • 오병훈
    • 한국인터넷방송통신학회논문지
    • /
    • Vol. 17, No. 5
    • /
    • pp.143-150
    • /
    • 2017
  • This paper develops a new motion recognition platform using smartphone tracking and color communication. A camera-equipped PC or smart TV together with a personal smartphone provides a motion recognition user interface based on vision-based object recognition. The user moves the smartphone by hand like a motion controller; the platform detects the smartphone in real time and recognizes the user's motion by estimating its 3-D distance and angle. In addition, a communication system based on color digital codes is used for communication between the smartphone and the server. Users can freely exchange text data through this color communication method, and data can be transmitted continuously even while performing motions. Executable content based on the proposed motion recognition platform is implemented and the results are presented.
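The color-communication idea above can be sketched as follows: text bytes are split into 2-bit symbols, each mapped to one of four screen colors that the camera-side decoder reads back. The 4-color alphabet and the symbol layout here are assumptions for illustration; the paper's actual color digital code design may differ.

```python
COLORS = ["red", "green", "blue", "white"]  # one color per 2-bit symbol (assumed)

def encode(text):
    """Turn each byte into four 2-bit symbols (MSB first) -> color names."""
    out = []
    for byte in text.encode("ascii"):
        for shift in (6, 4, 2, 0):
            out.append(COLORS[(byte >> shift) & 0b11])
    return out

def decode(colors):
    """Inverse mapping: each group of four colors becomes one byte."""
    data = bytearray()
    for i in range(0, len(colors), 4):
        byte = 0
        for name in colors[i:i + 4]:
            byte = (byte << 2) | COLORS.index(name)
        data.append(byte)
    return data.decode("ascii")
```

With four colors carrying 2 bits each, one character costs four displayed symbols, which is why the platform can keep streaming data while the user gestures.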

최소 제어 인자 도출을 통한 사용편의성 높은 제어시스템 설계 (Design of a User-Friendly Control System using Least Control Parameters)

  • 허영진;박대길;김진현
    • 로봇학회논문지
    • /
    • Vol. 9, No. 1
    • /
    • pp.67-77
    • /
    • 2014
  • An electric motor is one of the most important parts of a robot system, typically driving the wheels of mobile robots or the joints of manipulators. The controller type and parameters vary with the required motor performance: a speed tracking controller is used for wheel-driving motors, while a position tracking controller is required for joint-driving motors. Moreover, if the mechanical parameters change or a different motor is used, the controller parameters may have to be tuned again. For beginners who are not familiar with controller design, however, it is hard to tune a controller properly. In this paper, we develop a nominal robust controller model for the velocity tracking of wheel-driving motors and the position tracking of joint-driving motors based on a disturbance observer (DOB), which rejects disturbances, modeling errors, and dynamic parameter variations, and we propose a methodology for determining the minimal set of control parameters. The proposed control system enables beginners to easily construct a controller for a newly designed robot system. The purpose of this paper is not to develop a new controller theory but to increase user-friendliness. Finally, simulation and experimental verification were performed on actual wheel- and joint-driving motors.
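The DOB idea for speed tracking can be sketched in discrete time: the observer compares the measured acceleration against the nominal inertia model, low-pass filters the mismatch as the estimated disturbance, and cancels it, leaving a single proportional gain as the user-tuned parameter. The nominal inertia, gains, and filter constant below are illustrative values, not the paper's identified parameters.

```python
def simulate(steps=2000, dt=0.001, J_n=0.01, kp=2.0, tau=0.02,
             v_ref=1.0, disturbance=0.5):
    """Speed tracking of a first-order motor model with DOB compensation."""
    v, d_hat = 0.0, 0.0
    for _ in range(steps):
        u = kp * (v_ref - v) - d_hat        # P control + disturbance cancellation
        v_prev = v
        v += dt * (u + disturbance) / J_n   # "true" plant with a constant disturbance
        # DOB: measured acceleration vs. nominal model, then low-pass filter
        d_raw = J_n * (v - v_prev) / dt - u
        d_hat += (dt / tau) * (d_raw - d_hat)
    return v, d_hat
```

In this sketch the estimate `d_hat` converges to the true disturbance, so the speed settles at the reference even though only `kp` was tuned, which mirrors the paper's goal of a minimal, user-friendly parameter set.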

An Experimental Multimodal Command Control Interface for Car Navigation Systems

  • Kim, Kyungnam;Ko, Jong-Gook;Choi, SeungHo;Kim, Jin-Young;Kim, Ki-Jung
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2000년도 ITC-CSCC -1
    • /
    • pp.249-252
    • /
    • 2000
  • An experimental multimodal system combining natural input modes such as speech, lip movement, and gaze is proposed in this paper. It benefits from novel human-computer interaction (HCI) modalities and from multimodal integration for tackling the HCI bottleneck problem. The system allows the user to select menu items on the screen by employing speech recognition, lip reading, and gaze tracking components in parallel. Face tracking is a supplementary component to gaze tracking and lip movement analysis. These key components are reviewed, and preliminary results are shown with multimodal integration and user testing on the prototype system. Notably, the system equipped with gaze tracking and lip reading is very effective in noisy environments, where the speech recognition rate is low and unstable. Our long-term goal is to build a user interface embedded in a commercial car navigation system (CNS).
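One common way to realize the parallel integration described above is late fusion: each modality scores the candidate menu items, and a weighted sum picks the winner. The weights, item names, and scores below are illustrative; the abstract does not specify the paper's actual integration scheme, so this is only one plausible sketch.

```python
def fuse(scores_by_modality, weights):
    """scores_by_modality: {modality: {item: score}}; return the best item
    under a weighted sum of per-modality scores."""
    items = next(iter(scores_by_modality.values())).keys()
    def total(item):
        return sum(weights[m] * scores[item]
                   for m, scores in scores_by_modality.items())
    return max(items, key=total)

# Illustrative scenario: noisy audio makes speech unreliable, but lip
# reading and gaze still point to the intended menu item.
scores = {
    "speech": {"map": 0.40, "zoom": 0.35, "route": 0.25},
    "lips":   {"map": 0.20, "zoom": 0.60, "route": 0.20},
    "gaze":   {"map": 0.10, "zoom": 0.70, "route": 0.20},
}
print(fuse(scores, {"speech": 0.3, "lips": 0.35, "gaze": 0.35}))  # → zoom
```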

Development of Cultural Contents Using Augmented-Reality-Based Markerless Tracking

  • Kang, Hanbyeol;Park, DaeWon;Lee, SangHyun
    • International journal of advanced smart convergence
    • /
    • Vol. 5, No. 4
    • /
    • pp.57-65
    • /
    • 2016
  • This paper aims to improve the quality of the cultural experience by providing a three-dimensional guide service that lets users explore cultural heritage on their own, without additional guides or commentators, using the latest mobile IT technology to enhance understanding of cultural heritage. We propose a method of constructing cultural contents based on the location information of the user and the cultural heritage, using markerless-tracking-based augmented reality and GPS. Marker detection and markerless tracking technologies are used to recognize augmented reality objects accurately according to the state of the cultural heritage, and Android's Google Maps is used to locate the user. The purpose of this paper is to produce content for introducing cultural heritage using GPS and augmented reality on Android. The approach can be combined with various objects, going beyond the limitations of existing augmented reality contents.
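The GPS side of such a guide reduces to a proximity check: compute the great-circle (haversine) distance between the user and a heritage site, and trigger the AR content within a threshold. The coordinates and the 50 m radius below are illustrative assumptions, not values from the paper.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS-84 coordinates."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def near_site(user, site, radius_m=50.0):
    """True when the user is within radius_m meters of the heritage site."""
    return haversine_m(*user, *site) <= radius_m
```

In an Android app this check would run on each location update, with the AR layer activated only for sites that pass it.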

A New Eye Tracking Method as a Smartphone Interface

  • Lee, Eui Chul;Park, Min Woo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 7, No. 4
    • /
    • pp.834-848
    • /
    • 2013
  • To effectively use these functions, many kinds of human-phone interfaces are used, such as touch, voice, and gesture. However, the primary touch interface cannot be used by people with hand disabilities or when both hands are busy. Although eye tracking is a superb human-computer interface method, it has not been applied to smartphones because of the small screen size, the frequently changing geometric relation between the user's face and the phone screen, and the low resolution of the front camera. In this paper, a new eye tracking method is proposed as a smartphone user interface. To maximize eye image resolution, a zoom lens and three infrared LEDs are adopted. The proposed method has the following novelties. First, appropriate camera specifications and image resolution are analyzed for a smartphone-based gaze tracking method. Second, facial movement is tolerated as long as one eye region is included in the image. Third, the method operates in both landscape and portrait screen modes. Fourth, only two LED reflective positions are used to calculate the gaze position, based on the 2-D geometric relation between the reflective rectangle and the screen. Fifth, a prototype mock-up module was built to confirm feasibility on an actual smartphone. Experimental results showed that the gaze estimation error was about 31 pixels at a screen resolution of 480×800, and the average hit ratio on a 5×4 icon grid was 94.6%.
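The fourth novelty, the 2-D geometric relation between the reflective rectangle and the screen, can be sketched as follows: the two LED glints act as diagonal corners of a rectangle on the cornea corresponding to the phone screen, so the pupil center's relative position inside that rectangle maps to a screen pixel. The linear (proportional) mapping below is an assumption based on the abstract, not the paper's exact formulation.

```python
def gaze_to_screen(pupil, glint_a, glint_b, screen_w=480, screen_h=800):
    """Map the pupil center to a screen pixel; the two glints are taken as
    diagonally opposite corners of the corneal reflection rectangle."""
    (px, py), (ax, ay), (bx, by) = pupil, glint_a, glint_b
    u = (px - ax) / (bx - ax)  # relative horizontal position, 0..1
    v = (py - ay) / (by - ay)  # relative vertical position, 0..1
    return round(u * (screen_w - 1)), round(v * (screen_h - 1))
```

Swapping `screen_w` and `screen_h` (and the corresponding axes) would cover the portrait/landscape modes mentioned in the abstract.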

이동물체 추적 가능한 이동형 로봇구동 시스템 설계 및 센서 구현 (Robot Driving System and Sensors Implementation for a Mobile Robot Capable of Tracking a Moving Target)

  • 명호준;김동환
    • 한국생산제조학회지
    • /
    • Vol. 22, No. 3_1spc
    • /
    • pp.607-614
    • /
    • 2013
  • This paper proposes a robot driving system and sensor implementation for an educational robot. The robot has multiple functions and was designed so that children can use it with interest and ease. It recognizes the location of a user and follows that user at a specified distance while the two communicate with each other. In this work, the robot was designed and manufactured to evaluate its performance. In addition, an embedded board was installed to communicate with a smartphone, and a camera mounted on the robot allows it to monitor the environment. To allow the robot to follow a moving user, a set of sensors combining an RF module and ultrasonic sensors was adopted to measure the distance between the user and the robot. With this ultrasonic sensor arrangement, the user's location could be identified in all directions, which allowed the robot to follow the moving user at the desired distance. Experiments were carried out to see how well the user's location could be recognized and to investigate how accurately the robot tracked the user, which yielded satisfactory performance.
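The RF-plus-ultrasonic ranging described above is commonly realized as one-way time of flight: the RF signal marks the emission instant (radio propagation is effectively instantaneous at this scale), so the user-to-robot distance is the ultrasonic travel time multiplied by the speed of sound. The one-way scheme, the target distance, and the tolerance below are illustrative assumptions based on the abstract.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def distance_m(time_of_flight_s):
    """One-way ultrasonic travel time -> distance in meters."""
    return SPEED_OF_SOUND * time_of_flight_s

def follow_command(d, target=1.0, tol=0.1):
    """Drive command that keeps the user at `target` meters, within `tol`."""
    if d > target + tol:
        return "forward"   # user is too far ahead, catch up
    if d < target - tol:
        return "backward"  # user is too close, back off
    return "stop"
```

Several such sensors facing different directions give the all-around coverage the paper mentions; the robot would steer toward whichever sensor reports the strongest (nearest-bearing) reading.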