• Title/Summary/Keyword: Person tracking by detection


3D First Person Shooting Game by Using Eye Gaze Tracking (눈동자 시선 추적에 의한 3차원 1인칭 슈팅 게임)

  • Lee, Eui-Chul; Park, Kang-Ryoung
    • The KIPS Transactions: Part B, v.12B no.4 s.100, pp.465-472, 2005
  • In this paper, we propose a method of manipulating the gaze direction of a 3D FPS game character by using eye gaze detection from successive images captured by a USB camera attached beneath an HMD. The proposed method is composed of three parts. In the first part, we detect the user's pupil center from the successive input images with a real-time image processing algorithm. In the second part, calibration, the geometric relationship between the gazed position on the monitor and the detected pupil center is determined while the user gazes at points on the monitor plane. In the last part, the final gaze position on the HMD monitor is tracked and the 3D view in the game is controlled by that gaze position based on the calibration information. Experimental results show that our method can be used by handicapped game players who cannot use their hands. It can also increase interest and immersion by synchronizing the gaze direction of the game player with the view direction of the game character.
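As an illustration of the pupil-detection and calibration steps described in the abstract above (not code from the paper), the sketch below thresholds the dark pupil in an eye image, takes the centroid of the largest blob, and maps pupil coordinates to monitor coordinates with a homography fitted to a few fixation points. The threshold value, the OpenCV pipeline, and the sample calibration points are all assumptions.

```python
import cv2
import numpy as np

def pupil_center(eye_gray, dark_thresh=40):
    """Centroid of the darkest blob in an eye image, taken as the pupil center."""
    blurred = cv2.GaussianBlur(eye_gray, (7, 7), 0)
    _, mask = cv2.threshold(blurred, dark_thresh, 255, cv2.THRESH_BINARY_INV)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return np.float32([m["m10"] / m["m00"], m["m01"] / m["m00"]])

# pupil_center() would be applied to each eye-camera frame; calibration then
# fits a homography from detected pupil positions to known monitor points.
pupil_pts = np.float32([[310, 240], [350, 238], [312, 270], [352, 272]])   # example data
screen_pts = np.float32([[0, 0], [1280, 0], [0, 1024], [1280, 1024]])
H, _ = cv2.findHomography(pupil_pts, screen_pts)
gaze = cv2.perspectiveTransform(np.float32([[[330, 255]]]), H)
print("estimated gaze point:", gaze.ravel())
```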

Vehicle-Level Traffic Accident Detection on Vehicle-Mounted Camera Based on Cascade Bi-LSTM

  • Son, Hyeon-Cheol; Kim, Da-Seul; Kim, Sung-Young
    • Journal of Advanced Information Technology and Convergence, v.10 no.2, pp.167-175, 2020
  • In this paper, we propose a method of traffic accident detection from a vehicle-mounted camera. In the proposed method, the minimum bounding box coordinates, the central coordinates on the bird's-eye view, and the motion vectors of each vehicle object, as well as the ego-motion of the vehicle equipped with the dash-cam, are extracted from the dash-cam video. Using these four kinds of features as the input of a Bi-LSTM (bidirectional LSTM), the accident probability (score) is predicted. To investigate the effect of each input feature on the accident probability, we analyze the detection performance for the case of a single feature input and the case of a combination of features as input, respectively, defining a different detection model for each case. The Bi-LSTM is used as a cascade, especially when a combination of the features is used as input. The proposed method shows 76.1% precision and 75.6% recall, which is superior to our previous work.
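A minimal sketch of the kind of Bi-LSTM scoring model the abstract above describes, written in PyTorch. The feature dimension (bounding box, bird's-eye-view centre, motion vector, and ego-motion concatenated to 11 values), the hidden size, and the use of the last time step are assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class AccidentBiLSTM(nn.Module):
    """Bi-LSTM that maps a per-frame feature sequence to an accident score."""
    def __init__(self, feat_dim, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):                               # x: (batch, frames, feat_dim)
        out, _ = self.lstm(x)                           # (batch, frames, 2 * hidden)
        score = torch.sigmoid(self.head(out[:, -1]))    # score from the last time step
        return score.squeeze(-1)                        # accident probability per clip

# Assumed layout: box (4) + BEV centre (2) + motion vector (2) + ego-motion (3) = 11 dims.
model = AccidentBiLSTM(feat_dim=11)
clips = torch.randn(8, 30, 11)                          # 8 clips of 30 frames each
print(model(clips).shape)                               # torch.Size([8])
```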

ACMs-based Human Shape Extraction and Tracking System for Human Identification (개인 인증을 위한 활성 윤곽선 모델 기반의 사람 외형 추출 및 추적 시스템)

  • Park, Se-Hyun; Kwon, Kyung-Su; Kim, Eun-Yi; Kim, Hang-Joon
    • Journal of Korea Society of Industrial Information Systems, v.12 no.5, pp.39-46, 2007
  • Research on human identification in ubiquitous environments has recently attracted a lot of attention. As one such line of research, gait recognition is an efficient method of human identification that uses the physical features of a walking person at a distance. In this paper, we present a human shape extraction and tracking method for gait recognition using geodesic active contour models (GACMs) combined with the mean shift algorithm. Active contour models (ACMs) are very effective for dealing with non-rigid objects because of their elastic property, but their performance depends mainly on the initial curve. To overcome this problem, we combine the mean shift algorithm with the traditional GACMs. The main idea is very simple: before evolving the contour with the level set method, the initial curve in each frame is re-localized near the human region and resized to enclose the target region. This mechanism reduces the number of iterations and handles large object motion. The proposed system is composed of a human region detection module and a human shape tracking module. In the human region detection module, the silhouette of a walking person is extracted by background subtraction and morphological operations. The human shape is then obtained by the GACMs with the mean shift algorithm. Experimental results show that the proposed method extracts and tracks accurate human shapes efficiently for gait recognition.

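The re-localization idea in the abstract above, sketched with OpenCV under assumed parameters: background subtraction and morphology yield a silhouette mask, and mean shift re-positions the window that would seed the geodesic active contour. The video file name, window size, and MOG2 subtractor are placeholders; the level-set evolution itself is not shown.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("walking_person.avi")             # hypothetical input clip
bg_sub = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
track_window = (0, 0, 80, 160)                            # x, y, w, h of the initial curve's box
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg_sub.apply(frame)                            # foreground silhouette
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # Mean shift re-localizes the window near the person before the contour
    # evolution step, which is the role it plays in the paper's pipeline.
    _, track_window = cv2.meanShift(mask, track_window, term)
    x, y, w, h = track_window
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("re-localized initial region", frame)
    if cv2.waitKey(30) == 27:
        break
cap.release()
```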

Detection and Blocking of a Face Area Using a Tracking Facility in Color Images (컬러 영상에서 추적 기능을 활용한 얼굴 영역 검출 및 차단)

  • Jang, Seok-Woo
    • Journal of the Korea Academia-Industrial cooperation Society, v.21 no.10, pp.454-460, 2020
  • In recent years, the rapid increase in video distribution and viewing over the Internet has increased the risk of personal information exposure. In this paper, a method is proposed to robustly identify areas in images where a person's privacy is compromised and to simultaneously block the object area by blurring it while rapidly tracking it with a prediction algorithm. With this method, the target object area is accurately identified using artificial neural network-based learning. The detected object area is then tracked using a location prediction algorithm and is continuously blocked by blurring it. Experimental results show that the proposed method effectively blocks private areas in images by blurring them, while at the same time tracking the target objects about 2.5% more accurately than an existing method. The proposed blocking method is expected to be useful in many applications, such as protection of personal information, video security, and object tracking.
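A rough sketch of the detect-track-blur loop described above. It substitutes a pretrained Haar face cascade for the paper's neural-network detector and simply reuses the last detection as the predicted location, so both choices are stand-ins rather than the authors' method.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)                        # any video source would do
last_box = None                                  # naive "prediction": keep the last box

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) > 0:
        last_box = faces[0]                      # refresh with the newest detection
    if last_box is not None:                     # blur the detected or predicted region
        x, y, w, h = last_box
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    cv2.imshow("privacy blocking", frame)
    if cv2.waitKey(1) == 27:
        break
cap.release()
```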

Real Time Eye and Gaze Tracking (실시간 눈과 시선 위치 추적)

  • Cho, Hyeon-Seob; Kim, Hee-Sook
    • Journal of the Korea Academia-Industrial cooperation Society, v.6 no.2, pp.195-201, 2005
  • This paper describes preliminary results we have obtained in developing a computer vision system based on active IR illumination for real time gaze tracking for interactive graphic display. Unlike most of the existing gaze tracking techniques, which often require assuming a static head to work well and require a cumbersome calibration process for each person, our gaze tracker can perform robust and accurate gaze estimation without calibration and under rather significant head movement. This is made possible by a new gaze calibration procedure that identifies the mapping from pupil parameters to screen coordinates using the Generalized Regression Neural Networks (GRNN). With GRNN, the mapping does not have to be an analytical function and head movement is explicitly accounted for by the gaze mapping function. Furthermore, the mapping function can generalize to other individuals not used in the training. The effectiveness of our gaze tracker is demonstrated by preliminary experiments that involve gaze-contingent interactive graphic display.

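The GRNN mapping mentioned in the abstract above reduces to kernel-weighted regression over stored calibration samples. The sketch below shows that mapping with an assumed Gaussian kernel width, a made-up six-dimensional pupil-parameter vector, and random placeholder calibration data.

```python
import numpy as np

def grnn_predict(x, train_x, train_y, sigma=0.1):
    """Generalized regression NN: kernel-weighted average of training targets."""
    d2 = np.sum((train_x - x) ** 2, axis=1)          # squared distances to stored patterns
    w = np.exp(-d2 / (2.0 * sigma ** 2))             # Gaussian kernel weights
    return w @ train_y / (np.sum(w) + 1e-12)         # weighted average of screen coordinates

# Hypothetical calibration set: pupil parameters -> screen (x, y) coordinates.
train_x = np.random.rand(50, 6)                      # e.g. pupil centre, glint offsets, head pose
train_y = np.random.rand(50, 2) * [1920, 1080]
print(grnn_predict(np.random.rand(6), train_x, train_y))
```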

3D View Controlling by Using Eye Gaze Tracking in First Person Shooting Game (1 인칭 슈팅 게임에서 눈동자 시선 추적에 의한 3차원 화면 조정)

  • Lee, Eui-Chul; Cho, Yong-Joo; Park, Kang-Ryoung
    • Journal of Korea Multimedia Society, v.8 no.10, pp.1293-1305, 2005
  • In this paper, we propose a method of manipulating the gaze direction of a 3D FPS game character by using eye gaze detection from successive images captured by a USB camera attached beneath the HMD. The proposed method is composed of three parts. In the first part, we detect the user's pupil center from the successive input images with a real-time image processing algorithm. In the second part, calibration, the geometric relationship between the gazed position on the monitor and the detected pupil position is determined. In the last part, the final gaze position on the HMD monitor is tracked and the 3D view in the game is controlled by the gaze position based on the calibration information. Experimental results show that our method can be used by handicapped game players who cannot use their hands. It can also increase interest and immersion by synchronizing the gaze direction of the game player with that of the game character.

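One way the tracked gaze point could drive the 3D view, shown purely as an assumed example (the paper's actual mapping from gaze position to camera rotation is not specified in the abstract): the gaze offset from the screen centre is scaled into yaw and pitch angles.

```python
def gaze_to_view_angles(gx, gy, width=1280, height=1024, fov_h=90.0, fov_v=60.0):
    """Map a gaze point on the HMD screen to yaw/pitch offsets for the game camera."""
    yaw = (gx / width - 0.5) * fov_h        # left/right rotation in degrees
    pitch = (0.5 - gy / height) * fov_v     # up/down rotation in degrees
    return yaw, pitch

print(gaze_to_view_angles(960, 256))        # gaze to the right of and above centre
```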

Detecting and Counting People system based on Vision Sensor (비전 센서 기반의 사람 검출 및 계수 시스템)

  • Park, Ho-Sik
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.6 no.1, pp.1-5, 2013
  • The number of pedestrians is essential information that can be used to control people entering or exiting a building; it can also help manage pedestrian traffic and the volume of pedestrian flow within the building. However, existing systems for detecting and counting people entering or exiting a building have difficulty with incorrect detections caused by occlusion, shadows, and illumination. In this paper, the effects of illumination change and shadows are minimized by adaptively generating and processing the image transmitted from the camera. The counting accuracy is further increased by using a Kalman filter and the mean-shift algorithm to avoid duplicate counts. Test results show that the proposed counting method achieves an accuracy of 95.4% and is effective for detection and counting.
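A sketch of the Kalman-filter side of the counting pipeline described above, using OpenCV's cv2.KalmanFilter with an assumed constant-velocity model and hand-picked noise covariances; predicted positions would be matched against new detections so the same pedestrian is not counted twice.

```python
import cv2
import numpy as np

def make_kalman(x, y):
    """Constant-velocity Kalman filter for one pedestrian's (x, y) position."""
    kf = cv2.KalmanFilter(4, 2)                       # state: x, y, vx, vy; measurement: x, y
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    kf.errorCovPost = np.eye(4, dtype=np.float32)
    kf.statePost = np.array([[x], [y], [0], [0]], np.float32)
    return kf

kf = make_kalman(100, 50)
predicted = kf.predict()                              # predicted position for matching
kf.correct(np.array([[104.0], [53.0]], np.float32))   # update with the matched detection
print(predicted[:2].ravel())
```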

Analysis of Sleep Breathing Type According to Breathing Strength (호흡 강도에 따른 수면 호흡 유형 분석)

  • Kang, Yunju; Jung, Sungoh; Kook, Joongjin
    • Journal of the Semiconductor & Display Technology, v.20 no.3, pp.1-5, 2021
  • Sleep apnea refers to a condition in which a person stops breathing during sleep. It is a dangerous symptom that blocks the oxygen supply in the body and causes various complications, and in severe cases the elderly and infants can die. In this paper, we present an algorithm that classifies sleep breathing by analyzing the breathing strength from images alone, in order to address the risk of sleep apnea. Only the chest of the person being measured is set as the region of interest (ROI), and the breathing strength is determined from the differential image within that ROI. Measurements on an adult subject captured the breathing strength accurately, and differences in breathing intensity were also distinguished using depth information. Two videos of sleeping babies further show that even breathing motions much smaller than those of adults can be detected, which is expected to help prevent sudden infant death syndrome (SIDS).
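A minimal sketch of the differential-image measurement described above, assuming a manually chosen chest ROI and a placeholder video file: per-frame absolute differences inside the ROI are summed into a one-dimensional breathing-strength signal that a later step could classify.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("sleep.mp4")              # hypothetical recording
x, y, w, h = 200, 150, 160, 120                   # chest ROI, set manually in this sketch
prev = None
strength = []                                     # breathing-strength signal over time

while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    if prev is not None:
        diff = cv2.absdiff(roi, prev)             # motion inside the chest region only
        strength.append(float(np.sum(diff)))      # one strength value per frame
    prev = roi
cap.release()
# Thresholding or classifying this 1-D signal would separate breathing types.
```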

Robust Head Tracking using a Hybrid of Omega Shape Tracker and Face Detector for Robot Photographer (로봇 사진사를 위한 오메가 형상 추적기와 얼굴 검출기 융합을 이용한 강인한 머리 추적)

  • Kim, Ji-Sung; Joung, Ji-Hoon; Ho, An-Kwang; Ryu, Yeon-Geol; Lee, Won-Hyung; Jin, Chung-Myung
    • The Journal of Korea Robotics Society, v.5 no.2, pp.152-159, 2010
  • Finding the head of a person in a scene is very important for a robot photographer to take a well-composed picture, because the composition depends on the position of the head. In this paper, we therefore propose a robust head tracking algorithm that uses a hybrid of an omega shape tracker and a local binary pattern (LBP) AdaBoost face detector, so that the robot photographer can take a fine picture automatically. Face detection algorithms perform well at finding frontal faces, but not rotated faces; in addition, they have a hard time finding a face that is occluded by a hat or hands. To solve this problem, an omega shape tracker based on the active shape model (ASM) is presented. The omega shape tracker is robust to occlusion and illumination change, but its performance is unsatisfactory in dynamic environments, such as when people move fast or the background is complex. Therefore, this paper proposes a method that probabilistically combines the face detection algorithm and the omega shape tracker using the histogram of oriented gradients (HOG) descriptor, in order to robustly find the human head. A robot photographer was also implemented that abides by the 'rule of thirds' and takes photos when people smile.
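An assumed illustration of fusing shape and face evidence with a HOG descriptor, loosely following the combination idea in the abstract above. The cosine-similarity template match, the equal fusion weights, and the random patches are placeholders, not the paper's probabilistic model.

```python
import cv2
import numpy as np

hog = cv2.HOGDescriptor()                             # default 64x128 HOG window

def hog_vec(patch_bgr):
    """Unit-norm HOG descriptor of a candidate head region."""
    gray = cv2.cvtColor(cv2.resize(patch_bgr, (64, 128)), cv2.COLOR_BGR2GRAY)
    v = hog.compute(gray).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def fused_head_score(candidate, head_template, face_conf, w_shape=0.5, w_face=0.5):
    """Blend omega-shape (appearance) evidence with face-detector confidence."""
    shape_conf = float(hog_vec(candidate) @ hog_vec(head_template))   # cosine similarity
    return w_shape * shape_conf + w_face * face_conf

# Toy usage with random patches standing in for real image crops.
cand = np.random.randint(0, 255, (160, 80, 3), np.uint8)
templ = np.random.randint(0, 255, (160, 80, 3), np.uint8)
print(fused_head_score(cand, templ, face_conf=0.8))
```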

Real-Time Head Tracking using Adaptive Boosting in Surveillance (서베일런스에서 Adaptive Boosting을 이용한 실시간 헤드 트래킹)

  • Kang, Sung-Kwan; Lee, Jung-Hyun
    • Journal of Digital Convergence, v.11 no.2, pp.243-248, 2013
  • This paper proposes an effective method for tracking a person's head in a complex background using adaptive boosting. A single feature extraction method is not sufficient for modeling a person's head; therefore, the proposed method runs several feature extraction methods at the same time to improve head detection accuracy. Features of the head image are extracted using sub-regions and the Haar wavelet transform. Sub-regions represent the local characteristics of the head, while the Haar wavelet transform captures the frequency characteristics of the face, so using both allows effective modeling. To track a person's head from the input video in real time, we use classifiers obtained by learning three types of Haar wavelet features with the AdaBoost algorithm. The original AdaBoost algorithm has a very long learning time, and when the learning data change, learning must be performed again. To overcome this shortcoming, this research proposes an efficient method using cascaded AdaBoost, which reduces the learning time for head images and responds effectively to changes in the learning data. The proposed method generates classifiers with excellent performance using less learning time and learning data, and it accurately detects and tracks the heads of people in a variety of real-time video images.
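Since the paper trains its own cascade of AdaBoost classifiers on Haar features, the sketch below only illustrates running a pretrained boosted Haar cascade (OpenCV's upper-body model, used here as a stand-in) over a surveillance stream; the file names and detection parameters are assumptions.

```python
import cv2

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_upperbody.xml")
cap = cv2.VideoCapture("surveillance.avi")        # hypothetical surveillance footage

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.equalizeHist(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    heads = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
    for (x, y, w, h) in heads:                    # draw each boosted-cascade detection
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    cv2.imshow("head tracking", frame)
    if cv2.waitKey(1) == 27:
        break
cap.release()
```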