• Title/Summary/Keyword: person tracking


Real Time Eye and Gaze Tracking

  • Min Jin-Kyoung;Cho Hyeon-Seob
    • Proceedings of the KAIS Fall Conference / 2004.11a / pp.234-239 / 2004
  • This paper describes preliminary results we have obtained in developing a computer vision system based on active IR illumination for real time gaze tracking for interactive graphic display. Unlike most of the existing gaze tracking techniques, which often require assuming a static head to work well and require a cumbersome calibration process for each person, our gaze tracker can perform robust and accurate gaze estimation without calibration and under rather significant head movement. This is made possible by a new gaze calibration procedure that identifies the mapping from pupil parameters to screen coordinates using the Generalized Regression Neural Networks (GRNN). With GRNN, the mapping does not have to be an analytical function and head movement is explicitly accounted for by the gaze mapping function. Furthermore, the mapping function can generalize to other individuals not used in the training. The effectiveness of our gaze tracker is demonstrated by preliminary experiments that involve gaze-contingent interactive graphic display.

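The GRNN used above as a gaze-mapping function is essentially kernel-weighted regression and can be sketched in a few lines. The code below is a minimal illustration, not the authors' system: the 6-D pupil feature vectors, the synthetic calibration data, and the kernel width sigma are all assumed for the example.

```python
# Minimal GRNN (Nadaraya-Watson) sketch: map pupil parameters to screen coordinates.
import numpy as np

def grnn_predict(X_train, Y_train, x_query, sigma=0.5):
    """RBF-kernel-weighted average of the calibration targets."""
    d2 = np.sum((X_train - x_query) ** 2, axis=1)   # squared distances to all calibration samples
    w = np.exp(-d2 / (2.0 * sigma ** 2))            # kernel weights
    return (w[:, None] * Y_train).sum(axis=0) / (w.sum() + 1e-12)

# Hypothetical calibration set: 6-D pupil/glint feature vectors -> 2-D screen coordinates.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 6))
true_map = rng.normal(size=(6, 2))                  # pretend ground-truth mapping, for the demo only
Y = X @ true_map

x_new = rng.uniform(-1.0, 1.0, size=6)
print("GRNN screen estimate:", grnn_predict(X, Y, x_new))
print("'true' screen point: ", x_new @ true_map)
```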

A Study on the location tracking system by using Zigbee in wireless sensor network (무선 센서네트워크에서 Zigbee를 적용한 위치추정시스템 구현에 관한연구)

  • Jung, Suk;Kim, Hwan-Yong
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.9 / pp.2120-2126 / 2010
  • This paper implements a location tracking system using Zigbee in a wireless sensor network. A wireless sensor network can offer a user-oriented location tracking service in a ubiquitous environment, reporting the position of an object or a person. The system realized in this paper measures the location of moving nodes both indoors and outdoors without coverage gaps: indoor tracking uses RSSI measurements, while outdoor tracking switches to GPS signals to determine the location. A Zigbee-based wireless sensor network environment was established, and by obtaining the locations of the moving nodes, real-time tracking is possible.
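
As a rough illustration of the indoor half of such a system, the sketch below converts RSSI readings to distances with a log-distance path-loss model and then trilaterates the node position. The reference transmit power, path-loss exponent, anchor layout, and RSSI values are assumptions for the example, not values from the paper.

```python
# RSSI ranging + least-squares trilateration sketch for an indoor Zigbee network.
import numpy as np

def rssi_to_distance(rssi_dbm, tx_power_dbm=-45.0, path_loss_exp=2.2):
    """Log-distance path-loss model: estimated distance in metres from an RSSI reading."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(anchors, distances):
    """Linear least-squares position fix from >= 3 anchor positions and range estimates."""
    x0, y0 = anchors[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], distances[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(distances[0] ** 2 - di ** 2 + xi ** 2 - x0 ** 2 + yi ** 2 - y0 ** 2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]   # fixed Zigbee nodes (assumed layout)
rssi = [-60.0, -70.0, -72.0, -78.0]                              # example readings from a moving node
dist = [rssi_to_distance(r) for r in rssi]
print("estimated position:", trilaterate(anchors, dist))
```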

Face Tracking and Recognition on the arbitrary person using Nonlinear Manifolds (비선형적 매니폴드를 이용한 임의 얼굴에 대한 얼굴 추적 및 인식)

  • Ju, Myung-Ho;Kang, Hang-Bong
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.342-347 / 2008
  • Face tracking and recognition are difficult problems because the face is a non-rigid object, and they become even harder when the system must track or recognize an unknown face continuously. In this paper, we propose a method for tracking and recognizing the face of an unknown person in video sequences using a linear combination of the nonlinear manifold models already constructed in the system. An arbitrary input face shows different similarities to the different persons in the system depending on its shape and pose, so we can statistically approximate a new nonlinear manifold model for the input face by estimating these similarities to the other faces. The approximated model is updated at each frame. Our experimental results show that the proposed method tracks and recognizes an arbitrary person efficiently.

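A very loose sketch of the combination idea: each known person's manifold model is reduced here to a single feature vector, and the unknown face's model is approximated as a similarity-weighted combination of the stored ones. This is only a toy stand-in for the paper's nonlinear manifold models; the cosine similarity, softmax weighting, and synthetic features are assumptions.

```python
# Approximate a model for an unknown face as a similarity-weighted mix of stored person models.
import numpy as np

def softmax(x, temperature=1.0):
    z = np.asarray(x) / temperature
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(1)
num_persons, feat_dim = 5, 64
person_models = rng.normal(size=(num_persons, feat_dim))   # stand-ins for per-person manifold models

# Synthetic "unknown" face: mostly resembles persons 2 and 4.
new_face = 0.6 * person_models[2] + 0.4 * person_models[4] + 0.1 * rng.normal(size=feat_dim)

# Similarity of the unknown face to each stored person (cosine similarity here).
sims = person_models @ new_face / (
    np.linalg.norm(person_models, axis=1) * np.linalg.norm(new_face))
weights = softmax(sims, temperature=0.1)

# Approximate model for the unknown person: weighted combination of the stored models.
approx_model = weights @ person_models
print("combination weights:", np.round(weights, 3))
```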

A Tracking Algorithm to Certain People Using Recognition of Face and Cloth Color and Motion Analysis with Moving Energy in CCTV (폐쇄회로 카메라에서 운동에너지를 이용한 모션인식과 의상색상 및 얼굴인식을 통한 특정인 추적 알고리즘)

  • Lee, In-Jung
    • The KIPS Transactions: Part B / v.15B no.3 / pp.197-204 / 2008
  • Tracking a specific person is a much-needed capability for humanoid robots, and it involves three aspects: clothing-color matching, face recognition, and motion analysis. Because a robot can rely on additional sensors, tracking a specific person through CCTV images is a rather different problem: the system must run fast on CCTV images, so the amount of computation must be kept small. We use statistical variables for color matching and adopt eigenfaces for face recognition to speed up the system, and motion analysis is added to make detection more reliable. In many motion-analysis systems, however, both the speed and the recognition rate are low because the analysis runs over the entire image. In this paper we compute the moving energy only on the face region found during face recognition, since the moving energy requires little computation. In experiments comparing the proposed algorithm with the method of Girondel et al., we obtained the same recognition rate as Girondel et al. while running faster; when LDA was used, the speed was the same and the recognition rate was better than that of Girondel et al.'s method. Consequently, the proposed algorithm is more efficient for tracking a specific person.
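
The "moving energy only on the face area" idea can be illustrated with simple frame differencing restricted to a face bounding box, as in the sketch below; the face box is assumed to come from a separate face detector (e.g., the eigenface stage), and the synthetic frames and threshold are placeholders.

```python
# Frame-difference "moving energy" accumulated only inside a face ROI.
import numpy as np

def motion_energy(prev_gray, curr_gray, face_box, threshold=15):
    """Count of pixels inside the face ROI whose absolute frame difference exceeds the threshold."""
    x, y, w, h = face_box
    roi_prev = prev_gray[y:y + h, x:x + w].astype(np.int16)
    roi_curr = curr_gray[y:y + h, x:x + w].astype(np.int16)
    diff = np.abs(roi_curr - roi_prev)
    return int((diff > threshold).sum())

rng = np.random.default_rng(2)
frame0 = rng.integers(0, 255, size=(480, 640), dtype=np.uint8)   # synthetic CCTV frame
frame1 = frame0.copy()
frame1[100:180, 200:260] = rng.integers(0, 255, size=(80, 60), dtype=np.uint8)  # simulated motion

face_box = (190, 90, 100, 120)  # (x, y, w, h) from a hypothetical face detector
print("moving energy in face ROI:", motion_energy(frame0, frame1, face_box))
```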

Location tracking of an object in a room using the passive tag of an RFID system (무선인식 시스템의 패시브 태그를 이용한 실내의 물체위치 추적)

  • Baek Sun-Ki;Park Myeon-Gyu;Lee Key-Sea
    • Proceedings of the KSR Conference / 2003.10c / pp.568-573 / 2003
  • This paper proposes recognizing and tracking the IDs and locations of people and objects moving indoors using passive RFID tags. Because of the limited read range, an antenna was installed on each side of a door, and the 134.2 kHz frequency band was used so that the signal can pass around several obstacles. Passive tags were adopted in this work because they are small and semi-permanent.

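A toy version of the door-crossing logic that two door-side antennas make possible: the order in which the two antennas read the same tag ID indicates whether the tagged person or object entered or exited. The event format, antenna labels, and timing window below are assumptions for illustration, not the paper's protocol.

```python
# Infer entry/exit from the order in which two door antennas read the same passive tag.
from collections import defaultdict

def classify_crossings(read_events, max_gap_s=5.0):
    """read_events: list of (timestamp_s, antenna_id, tag_id), antenna_id in {'outside', 'inside'}."""
    last_seen = {}                      # tag_id -> (timestamp, antenna_id)
    crossings = defaultdict(list)
    for ts, antenna, tag in sorted(read_events):
        if tag in last_seen:
            prev_ts, prev_antenna = last_seen[tag]
            if prev_antenna != antenna and ts - prev_ts <= max_gap_s:
                direction = "entered" if antenna == "inside" else "exited"
                crossings[tag].append((ts, direction))
        last_seen[tag] = (ts, antenna)
    return dict(crossings)

events = [
    (10.0, "outside", "TAG-001"),
    (11.2, "inside", "TAG-001"),   # outside -> inside: entered the room
    (42.5, "inside", "TAG-002"),
    (43.1, "outside", "TAG-002"),  # inside -> outside: left the room
]
print(classify_crossings(events))
```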

A New Eye Tracking Method as a Smartphone Interface

  • Lee, Eui Chul;Park, Min Woo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.4 / pp.834-848 / 2013
  • To use a smartphone's functions effectively, many kinds of human-phone interfaces are employed, such as touch, voice, and gesture. The most important of these, the touch interface, cannot be used by people with hand disabilities or when both hands are busy. Although eye tracking is a superb human-computer interface method, it has not been applied to smartphones because of the small screen size, the frequently changing geometric relation between the user's face and the phone screen, and the low resolution of front-facing cameras. In this paper, a new eye tracking method is proposed to act as a smartphone user interface. To maximize eye-image resolution, a zoom lens and three infrared LEDs are adopted. The proposed method has the following novelties. First, appropriate camera specifications and image resolutions for smartphone-based gaze tracking are analyzed. Second, facial movement is allowed as long as one eye region remains in the image. Third, the method operates in both landscape and portrait screen modes. Fourth, only two LED reflection positions are used to calculate the gaze position, based on the 2D geometric relation between the reflection rectangle and the screen. Fifth, a prototype mock-up module was built to confirm the feasibility of applying the method to an actual smartphone. Experimental results showed that the gaze estimation error was about 31 pixels at a screen resolution of 480×800 and the average hit ratio on a 5×4 icon grid was 94.6%.
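
One common simplification of glint-based gaze estimation, in the spirit of the two-LED scheme described above, is to express the pupil centre in the coordinate frame defined by the two corneal reflections and map that normalised offset to screen pixels. The sketch below follows that simplification; the gain values and pixel coordinates are illustrative, the 480×800 screen size is taken from the abstract, and the geometry is not claimed to be the paper's exact method.

```python
# Map the pupil offset, measured in the glint-pair frame, to screen coordinates.
import numpy as np

def gaze_from_glints(pupil_px, glint_left_px, glint_right_px,
                     screen_w=480, screen_h=800, gain=(2.5, 2.5)):
    """Return an estimated (x, y) gaze point on the screen in pixels."""
    g_l, g_r = np.asarray(glint_left_px, float), np.asarray(glint_right_px, float)
    pupil = np.asarray(pupil_px, float)
    origin = (g_l + g_r) / 2.0                     # midpoint of the two corneal glints
    x_axis = g_r - g_l
    scale = np.linalg.norm(x_axis) + 1e-9
    x_axis /= scale
    y_axis = np.array([-x_axis[1], x_axis[0]])     # perpendicular axis
    offset = pupil - origin
    u = np.dot(offset, x_axis) / scale             # normalised horizontal offset
    v = np.dot(offset, y_axis) / scale             # normalised vertical offset
    x = (0.5 + gain[0] * u) * screen_w
    y = (0.5 + gain[1] * v) * screen_h
    return float(np.clip(x, 0, screen_w - 1)), float(np.clip(y, 0, screen_h - 1))

# Example: pupil slightly right of and below the glint midpoint in the eye image.
print(gaze_from_glints(pupil_px=(212, 124), glint_left_px=(200, 120), glint_right_px=(220, 120)))
```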

Key Technologies in Robot Assistants: Motion Coordination Between a Human and a Mobile Robot

  • Prassler, Erwin;Bank, Dirk;Kluge, Boris
    • Transactions on Control, Automation and Systems Engineering / v.4 no.1 / pp.56-61 / 2002
  • In this paper we describe an approach to coordinating the motion of a human with a mobile robot moving in a populated, continuously changing natural environment. Our test application is a wheelchair accompanying a person through the concourse of a railway station, moving side by side with the person. Our approach is based on a method for motion planning amongst moving obstacles known as the Velocity Obstacle approach. We extend this method with a technique for tracking a virtual target, which allows us to vary the robot's heading and velocity with the locomotion of the accompanied person and the state of the surrounding environment.
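
The Velocity Obstacle idea named above can be sketched compactly: a candidate robot velocity is rejected if, with all velocities held constant, it would bring the robot within a safety radius of a moving obstacle inside a time horizon, and the robot then picks the admissible velocity closest to the one that tracks a virtual target beside the person. The tracking law, radii, and numbers below are illustrative assumptions, not the authors' implementation.

```python
# Velocity Obstacle check plus a naive virtual-target velocity preference.
import numpy as np

def in_velocity_obstacle(p_r, v_r, p_o, v_o, r_sum, horizon=5.0):
    """True if velocity v_r collides with the moving obstacle (p_o, v_o) within `horizon` seconds."""
    rel_p = np.asarray(p_o, float) - np.asarray(p_r, float)
    rel_v = np.asarray(v_r, float) - np.asarray(v_o, float)
    a = rel_v @ rel_v
    b = -2.0 * (rel_p @ rel_v)
    c = rel_p @ rel_p - r_sum ** 2
    if a < 1e-12:                        # no relative motion: collision only if already overlapping
        return c <= 0.0
    disc = b * b - 4 * a * c
    if disc < 0:
        return False
    t1 = (-b - np.sqrt(disc)) / (2 * a)
    t2 = (-b + np.sqrt(disc)) / (2 * a)
    return t2 > 0.0 and t1 < horizon

# Prefer a velocity that tracks a virtual target one metre beside the accompanied person.
p_robot = np.array([0.0, 0.0])
person_pos, person_vel = np.array([1.0, 0.0]), np.array([0.0, 1.0])
virtual_target = person_pos + np.array([-1.0, 0.0])
v_preferred = person_vel + 0.5 * (virtual_target - p_robot)   # simple tracking law (assumed)

candidates = [v_preferred * s for s in (1.0, 0.75, 0.5, 0.25)] + [np.array([0.0, 0.5])]
safe = [v for v in candidates
        if not in_velocity_obstacle(p_robot, v, person_pos, person_vel, r_sum=0.8)]
best = min(safe, key=lambda v: np.linalg.norm(v - v_preferred)) if safe else np.zeros(2)
print("chosen velocity:", best)
```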

A vision based people tracking and following for mobile robots using CAMSHIFT and KLT feature tracker (캠시프트와 KLT특징 추적 알고리즘을 융합한 모바일 로봇의 영상기반 사람추적 및 추종)

  • Lee, S.J.;Won, Mooncheol
    • Journal of Korea Multimedia Society / v.17 no.7 / pp.787-796 / 2014
  • Many mobile robot navigation methods rely on laser scanners, ultrasonic sensors, vision cameras, and so on for obstacle detection and path following, whereas humans navigate using only visual information from their eyes. In this paper, we study a mobile robot control method based only on camera vision. A Gaussian mixture model and a shadow-removal technique are used to separate the foreground from the background in the camera image. Based on the foreground information, the mobile robot follows a person using a combination of the CAMSHIFT and KLT feature-tracking algorithms. The algorithm is verified by experiments in which a person is tracked and followed by a robot in a hallway.
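
Assuming OpenCV, a pipeline in the spirit of this abstract (MOG2 background subtraction with shadow removal, a colour-histogram CAMSHIFT tracker restricted to the foreground, and KLT feature tracking as a second cue) can be sketched as follows. The video path, the initial person box, and the naive way the two trackers sit side by side are placeholders, not the paper's implementation.

```python
# GMM foreground + CAMSHIFT + KLT sketch for following a person in video.
import cv2
import numpy as np

VIDEO_PATH = "hallway.avi"              # hypothetical input video
init_box = (200, 100, 80, 200)          # (x, y, w, h) of the person in the first frame (assumed)

cap = cv2.VideoCapture(VIDEO_PATH)
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

ok, frame = cap.read()
assert ok, "could not read the first frame"
x, y, w, h = init_box
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv[y:y + h, x:x + w]], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
p0 = cv2.goodFeaturesToTrack(prev_gray[y:y + h, x:x + w], 50, 0.01, 5)
if p0 is not None:
    p0 = p0 + np.array([[x, y]], dtype=np.float32)   # shift ROI features to image coordinates

track_window = init_box
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = bg.apply(frame)
    fg = np.where(fg == 255, 255, 0).astype(np.uint8)        # drop the shadow label (127)

    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    backproj = cv2.bitwise_and(backproj, backproj, mask=fg)  # restrict CAMSHIFT to the foreground
    _, track_window = cv2.CamShift(backproj, track_window, term_crit)

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if p0 is not None and len(p0) > 0:                       # KLT feature tracking as a second cue
        p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
        p0 = p1[st.flatten() == 1] if p1 is not None else None
    prev_gray = gray
    print("CAMSHIFT window:", track_window)

cap.release()
```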

Design of a machine learning based mobile application with GPS, mobile sensors, public GIS: real time prediction on personal daily routes

  • Shin, Hyunkyung
    • International Journal of Advanced Smart Convergence / v.7 no.4 / pp.27-39 / 2018
  • Since the global positioning system (GPS) was included in mobile devices (e.g., for car navigation, in smartphones, and in smart watches), the impact of personal GPS log data on daily life has been unprecedented. For example, such log data have been used to solve public problems such as analyzing mass transit traffic patterns, finding optimal travel routes, and determining prospective business zones. However, real-time analysis of GPS log data has been unattainable due to theoretical limitations. We introduce a machine learning model to overcome this limitation. This paper presents a new three-stage real-time prediction model for a person's daily route activity. In the first stage, a machine learning-based clustering algorithm is adopted for place detection, trained on a personal GPS tracking history. In the second stage, prediction of the person's transient mode is studied. In the third stage, inference rules are applied to represent the person's activity on those daily routes.
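
For the place-detection stage, density-based clustering of the GPS history is one common choice. The sketch below uses DBSCAN with a haversine metric as an assumed stand-in for the paper's clustering algorithm, with made-up coordinates and a 150 m neighbourhood radius.

```python
# Place detection from a personal GPS log via DBSCAN with a haversine metric.
import numpy as np
from sklearn.cluster import DBSCAN

EARTH_RADIUS_M = 6_371_000.0

# Hypothetical GPS log: (latitude, longitude) in degrees, e.g. home / office / one stray fix.
gps_log_deg = np.array([
    [37.5665, 126.9780], [37.5666, 126.9781], [37.5664, 126.9779],   # cluster A
    [37.4979, 127.0276], [37.4980, 127.0278], [37.4978, 127.0275],   # cluster B
    [37.5300, 127.0000],                                             # isolated point
])

eps_radians = 150.0 / EARTH_RADIUS_M          # ~150 m neighbourhood expressed as an angle
db = DBSCAN(eps=eps_radians, min_samples=3, metric="haversine")
labels = db.fit_predict(np.radians(gps_log_deg))

for label in sorted(set(labels)):
    members = gps_log_deg[labels == label]
    name = "noise" if label == -1 else f"place {label}"
    print(name, "-> centroid:", members.mean(axis=0).round(4))
```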

Real Time Eye and Gaze Tracking

  • Park Ho Sik;Nam Kee Hwan;Cho Hyeon Seob;Ra Sang Dong;Bae Cheol Soo
    • Proceedings of the IEEK Conference / 2004.08c / pp.857-861 / 2004
  • This paper describes preliminary results we have obtained in developing a computer vision system based on active IR illumination for real time gaze tracking for interactive graphic display. Unlike most of the existing gaze tracking techniques, which often require assuming a static head to work well and require a cumbersome calibration process for each person, our gaze tracker can perform robust and accurate gaze estimation without calibration and under rather significant head movement. This is made possible by a new gaze calibration procedure that identifies the mapping from pupil parameters to screen coordinates using the Generalized Regression Neural Networks (GRNN). With GRNN, the mapping does not have to be an analytical function and head movement is explicitly accounted for by the gaze mapping function. Furthermore, the mapping function can generalize to other individuals not used in the training. The effectiveness of our gaze tracker is demonstrated by preliminary experiments that involve gaze-contingent interactive graphic display.
