• Title/Summary/Keyword: Space Object Tracking


Method for Extracting Features of Conscious Eye Moving for Exploring Space Information (공간정보 탐색을 위한 의식적 시선 이동특성 추출 방법)

  • Kim, Jong-Ha;Jung, Jae-Young
    • Korean Institute of Interior Design Journal
    • /
    • v.25 no.2
    • /
    • pp.21-29
    • /
    • 2016
  • This study estimated the traits of conscious eye movement, taking the halls of subway stations as its objects. For that estimation, observation data from eye tracking were matched with the experiment images, and an independently produced program was used to analyze eye movement within the selected sections, which provided the ground for clarifying the traits of space users' eye movement. The outcomes can be defined as follows. First, the application of the independently produced program provided a method for coding the great amount of observation data, which cut down the analysis time needed to find the traits of conscious eye movement. Accordingly, including the eye's intentionality in the feature-extraction method made it possible to organize the entrances into and exits from particular objects over the course of observation time. Second, examining eye movement in each area surrounding the object factors showed that [out]→[in] movement, in which the line of sight goes from the surrounding area to the object, characteristically ran from the left top (Area I) of the selected object to the object, while [in]→[out] movement, from the inside of the object to the outside, also ran to the left top (Area I). Overall, much eye movement went from the right and left tops (Areas I, II) to the object, but movement to the outside was found to go to the left top (Area I), the right middle (Area IV), and the right top (Area II). Third, to find whether there was intense eye movement toward a particular factor, dominant standards were presented for analysis, which showed much eye movement from the tops (Areas I, II) to sections 1 and 2. The [in] movements were [I→A] (23.0%), [I→B] (16.1%), and [II→B] (13.8%), while the [out] movements were [A→I] (14.8%), [B→I] (13.6%), [A→II] (11.4%), [B→IV] (11.4%), and [B→II] (10.2%). Though eye movement toward the objects took place in specific directions (areas), the [out] movement from the objects was dispersed widely over different areas.
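
The study's independently produced coding program is not public; as a rough illustration of the kind of analysis it automates, the sketch below (function name and fixation sequence are hypothetical) tallies area-to-area gaze transitions and reports each as a share of all moves, the form in which the [in]/[out] percentages above are given.

```python
from collections import Counter

def transition_shares(fixation_areas):
    """Tally area-to-area gaze transitions and return each
    transition's share of the total moves, in percent."""
    pairs = zip(fixation_areas, fixation_areas[1:])
    # Keep only genuine moves (ignore refixations within one area).
    moves = [p for p in pairs if p[0] != p[1]]
    counts = Counter(moves)
    total = sum(counts.values())
    return {f"{a}->{b}": round(100 * n / total, 1)
            for (a, b), n in counts.items()}

# Hypothetical fixation sequence over surrounding areas (I, II, IV)
# and object sections (A, B):
seq = ["I", "A", "A", "I", "B", "II", "B", "IV"]
shares = transition_shares(seq)
```

With this toy sequence each of the six distinct moves occurs once, so each gets an equal share.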

Modified HOG Feature Extraction for Pedestrian Tracking (동영상에서 보행자 추적을 위한 변형된 HOG 특징 추출에 관한 연구)

  • Kim, Hoi-Jun;Park, Young-Soo;Kim, Ki-Bong;Lee, Sang-Hun
    • Journal of the Korea Convergence Society
    • /
    • v.10 no.3
    • /
    • pp.39-47
    • /
    • 2019
  • In this paper, we propose extracting modified Histogram of Oriented Gradients (HOG) features with background removal for tracking pedestrians in real time. HOG feature extraction suffers from slow processing speed because of its large computational load, so background removal was studied to reduce computation and improve the tracking rate. Region removal was carried out using the S and V channels of HSV color space to avoid extracting features in unnecessary areas. When the average S and V channels of the video are removed, the input video can become so dark that object tracking fails; histogram equalization was performed to prevent this case. With fewer HOG features extracted from the removed regions and clearer HOG features remaining, both processing speed and the tracking rate improved. In our experiments, we tested videos with many pedestrians or a single pedestrian, videos with complicated backgrounds, and videos with severe camera shake. Compared with the existing HOG-SVM method, the proposed method improved processing speed by 41.84% and reduced the error rate by 52.29%.
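
The histogram-equalization step mentioned above, used to brighten frames left dark after S/V-channel removal, can be sketched in plain Python. The pixel values below are hypothetical, and the paper's actual pipeline and thresholds are not reproduced here.

```python
def equalize_histogram(pixels, levels=256):
    """Classic histogram equalization for a flat list of gray
    values in [0, levels-1]; stretches a globally dark frame
    to use the full intensity range."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function over intensity levels.
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    # Map each pixel through the normalised CDF.
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

dark = [10, 10, 12, 12, 14, 14, 16, 16]   # hypothetical dark frame
bright = equalize_histogram(dark)
```

The narrow band 10-16 is spread across the full 0-255 range, which is exactly the effect needed before HOG extraction on a darkened frame.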

The General Analysis of an Active Stereo Vision with Hand-Eye Calibration (핸드-아이 보정과 능동 스테레오 비젼의 일반적 해석)

  • Kim, Jin Dae;Lee, Jae Won;Sin, Chan Bae
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.21 no.5
    • /
    • pp.83-83
    • /
    • 2004
  • The analysis of relative pose (position and rotation) between stereo cameras is very important in determining the solution that provides three-dimensional information for an arbitrary moving target with respect to the robot end. In the free camera-model space, the rotational parameters act as non-linear factors in acquiring a kinematic solution. In this paper, a general solution of active stereo that gives the three-dimensional pose of a moving object is presented. The focus is on deriving a linear equation between a robot's end and the active stereo cameras. The equation is consistently derived from vectors in quaternion space, and the calibration of the cameras is also derived in this space. Computer simulation and error-sensitivity results demonstrate the successful operation of the solution. The suggested solution can also be applied to more complex real-time tracking; it is quite general and applicable in various stereo fields.
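
As a minimal illustration of working in quaternion space, the sketch below rotates a vector with the Hamilton product q v q*. It shows only the basic algebra such a derivation relies on, not the paper's actual hand-eye equations.

```python
import math

def quat_mul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(q, v):
    """Rotate 3-vector v by unit quaternion q via q v q*."""
    qc = (q[0], -q[1], -q[2], -q[3])          # conjugate
    w, x, y, z = quat_mul(quat_mul(q, (0.0, *v)), qc)
    return (x, y, z)

# A 90-degree rotation about the z-axis.
half = math.radians(90) / 2
q = (math.cos(half), 0.0, 0.0, math.sin(half))
vx = rotate(q, (1.0, 0.0, 0.0))   # x-axis maps onto the y-axis
```

Because rotations compose by quaternion multiplication, chains of camera and end-effector rotations stay in one algebra, which is what makes a linear formulation attractive.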

Real-time Face Tracking Method using Improved CamShift (향상된 캠쉬프트를 사용한 실시간 얼굴추적 방법)

  • Lee, Jun-Hwan;Yoo, Jisang
    • Journal of Broadcast Engineering
    • /
    • v.21 no.6
    • /
    • pp.861-877
    • /
    • 2016
  • This paper first discusses the disadvantages of the existing CamShift algorithm for real-time face tracking, and then proposes a new CamShift algorithm that performs better. The existing CamShift shows unstable tracking when colors similar to the object appear in the background. This drawback is resolved by using the Kinect's per-pixel depth information together with a skin-detection algorithm to extract candidate skin regions in HSV color space. Additionally, even when the tracked object is lost or occlusion occurs, a feature-point-based matching algorithm keeps the tracker robust. By applying the improved CamShift to face tracking, the proposed real-time face-tracking algorithm can be applied to various fields. The experimental results prove that the proposed algorithm is superior in tracking performance to the existing TLD tracking algorithm and offers faster processing speed. Also, while the proposed algorithm is slower than plain CamShift, it overcomes all of CamShift's existing shortfalls.
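
A rough sketch of the candidate-region idea, combining an HSV skin band with a Kinect-style depth gate. All thresholds below are illustrative assumptions, not the paper's values.

```python
def skin_candidates(pixels, depth, h_range=(0, 25), s_min=40,
                    d_range=(500, 1500)):
    """Keep indices of pixels whose HSV hue/saturation fall in a
    rough skin band AND whose depth (mm) lies in the expected
    face range, rejecting skin-colored background."""
    keep = []
    for i, ((h, s, v), d) in enumerate(zip(pixels, depth)):
        if (h_range[0] <= h <= h_range[1] and s >= s_min
                and d_range[0] <= d <= d_range[1]):
            keep.append(i)
    return keep

# Hypothetical HSV pixels and Kinect depth values (mm):
hsv = [(10, 80, 200), (110, 90, 180), (15, 20, 210), (12, 60, 3000 // 15)]
mm  = [800, 820, 790, 3000]
mask = skin_candidates(hsv, mm)
```

Only the first pixel passes both gates: the second fails on hue, the third on saturation, and the fourth on depth, which is how the depth channel suppresses skin-colored clutter far behind the face.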

A neural network based real-time robot tracking controller using position sensitive detectors (신경회로망과 위치 검출장치를 사용한 로보트 추적 제어기의 구현)

  • 박형권;오세영;김성권
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1993.10a
    • /
    • pp.660-665
    • /
    • 1993
  • Neural networks are used in the framework of sensor-based tracking control of robot manipulators. Through practice movements, they learn the relationship between the readings of a PSD (an analog position sensitive detector) sensor for target positions and the joint commands needed to reach them. With this configuration, the system can track or follow a moving or stationary object in real time. Furthermore, an efficient neural network architecture has been developed for real-time learning. This network uses multiple sets of simple backpropagation networks, one of which is selected according to which division of the data space (corresponding to a cluster of the self-organizing feature map) the current input belongs to. This lends itself to the very fast training and processing implementation required for real-time control.
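
The cluster-gated architecture can be sketched as follows: the self-organizing-map prototype nearest to the input selects which small backpropagation network handles it. Prototypes and sensor readings here are hypothetical.

```python
def nearest_cluster(x, prototypes):
    """Index of the self-organizing-map prototype closest to
    input x; that index selects which small backprop network
    is trained on / evaluated for x."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(range(len(prototypes)),
               key=lambda i: dist2(x, prototypes[i]))

# Hypothetical 2-D PSD readings and three cluster prototypes.
protos = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
idx = nearest_cluster((0.9, 0.1), protos)   # nearest to (1, 0)
```

Because each sub-network only ever sees inputs from its own region of the data space, each stays small and trains quickly, which is the point of the architecture.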

Height and Position Estimation of Moving Objects using a Single Camera

  • Lee, Seok-Han;Lee, Jae-Young;Kim, Bu-Gyeom;Choi, Jong-Soo
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.158-163
    • /
    • 2009
  • In recent years, there has been increased interest in characterizing and extracting 3D information from 2D images for human tracking and identification. In this paper, we propose a single-view-based framework for robust estimation of height and position. In the proposed method, 2D features of the target object are back-projected into the 3D scene space, whose coordinate system is given by a rectangular marker. The position and the height are then estimated in the 3D space. In addition, geometric error caused by inaccurate projective mapping is corrected using geometric constraints provided by the marker. The accuracy and robustness of our technique are verified through experimental results on several real video sequences from outdoor environments.
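
As a simplified stand-in for the marker-based back-projection (the paper's geometry uses a rectangular marker to fix the world frame, which is not modeled here), the pinhole similar-triangles relation behind single-view height estimation can be sketched as:

```python
def object_height(pixel_height, depth, focal_px):
    """Pinhole similar-triangles estimate: metric height of an
    upright object from its height in the image (pixels), its
    distance along the optical axis (metres), and the focal
    length expressed in pixels."""
    return pixel_height * depth / focal_px

# Hypothetical values: object spans 170 px at 5 m range,
# camera focal length 500 px.
h = object_height(170, 5.0, 500)   # metres
```

The marker in the actual method supplies the scale and plane that this sketch simply assumes as known `depth` and `focal_px`.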

Active Facial Tracking for Fatigue Detection (피로 검출을 위한 능동적 얼굴 추적)

  • 박호식;정연숙;손동주;나상동;배철수
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2004.05b
    • /
    • pp.603-607
    • /
    • 2004
  • Vision-based driver fatigue detection is one of the most promising commercial applications of facial expression recognition technology, and facial feature tracking is the primary technical issue in it. Current facial tracking technology faces three challenges: (1) detection failure of some or all features under a variety of lighting conditions and head motions; (2) multiple and non-rigid object tracking; and (3) feature occlusion when the head is at oblique angles. In this paper, we propose a new active approach. First, an active IR sensor is used to robustly detect pupils under variable lighting conditions. The detected pupils are then used to predict the head motion. Furthermore, face movement is assumed to be locally smooth, so that each facial feature can be tracked with a Kalman filter. The simultaneous use of the pupil constraint and Kalman filtering greatly increases the prediction accuracy for each feature position. Feature detection is accomplished in Gabor space in the vicinity of the predicted location. Local graphs consisting of identified features are extracted and used to capture the spatial relationships among detected features. Finally, a graph-based reliability propagation is proposed to tackle the occlusion problem and verify the tracking results. The experimental results show the validity of our active approach to real-life facial tracking under variable lighting conditions, head orientations, and facial expressions.
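
The per-feature Kalman prediction step can be illustrated with a one-dimensional constant-velocity filter over one feature coordinate. The noise parameters `q` and `r` and the measurement sequence are illustrative, not the paper's values.

```python
def kalman_1d(z_seq, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter for one feature coordinate:
    state [position, velocity], scalar position measurements,
    one frame per step (dt = 1)."""
    x, v = z_seq[0], 0.0
    P = [[1.0, 0.0], [0.0, 1.0]]          # state covariance
    out = []
    for z in z_seq[1:]:
        # Predict: x' = x + v, P' = F P F^T + Q.
        x = x + v
        P = [[P[0][0] + P[1][0] + P[0][1] + P[1][1] + q,
              P[0][1] + P[1][1]],
             [P[1][0] + P[1][1], P[1][1] + q]]
        # Update with position measurement z (H = [1, 0]).
        S = P[0][0] + r
        k0, k1 = P[0][0] / S, P[1][0] / S
        y = z - x
        x, v = x + k0 * y, v + k1 * y
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        out.append(x)
    return out

track = kalman_1d([0.0, 1.1, 1.9, 3.05, 4.0])
```

The predicted position at each frame is what narrows the Gabor-space search window to the vicinity of the expected feature location.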

Development of Radar System for Laser Tracking System (레이저 추적 시스템을 위한 레이더 시스템 개발)

  • Ki-Pyoung Sung;Hyung-Chul Lim;Man-Soo Choi;Sung-Yeol Yu
    • Journal of Space Technology and Applications
    • /
    • v.4 no.1
    • /
    • pp.1-11
    • /
    • 2024
  • Korea Astronomy and Space Science Institute (KASI) developed a satellite laser ranging (SLR) system for tracking space objects using ultra-short pulsed lasers. For the safe operation of the SLR system, an aircraft surveillance radar system (ASRS) was developed to prevent harm to humans from the high-power laser transmitted by the SLR system. The ASRS consists of a radar hardware subsystem (RHS) and a main control subsystem (MCS), which together detect flying objects in the direction of laser propagation and immediately stop laser transmission. The RHS transmits radio frequency (RF) pulse signals and receives the returned signals, while the MCS analyzes the characteristics of the received signals and determines whether flying objects are present. If a flying object is determined to exist, the MCS sends a command signal to the laser controller in the SLR system to pause the laser firing. In this study, we address the interface and operational scenarios of the ASRS, including the design of the RHS and MCS. Aircraft experiments demonstrated that the ASRS could detect an aircraft and then successfully stop the high-power laser transmission.
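
The pause-and-resume behaviour of the MCS can be caricatured as a small interlock state machine. The frame counts and the resume rule below are assumptions for illustration, not the actual KASI logic.

```python
def laser_interlock(detections, clear_frames_required=3):
    """Toy sketch of the ASRS pause logic: stop laser firing the
    moment a flying object is detected, and resume only after a
    run of consecutive clear radar frames."""
    firing, clear_run, log = True, 0, []
    for detected in detections:
        if detected:
            firing, clear_run = False, 0    # pause immediately
        else:
            clear_run += 1
            if not firing and clear_run >= clear_frames_required:
                firing = True               # resume after clear run
        log.append(firing)
    return log

# One detection in frame 1; firing resumes three clear frames later.
states = laser_interlock([False, True, False, False, False, False])
```

Requiring several consecutive clear frames before resuming is a common safety-interlock pattern that avoids re-arming on a single noisy radar return.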

3D Position Tracking for Moving Objects Using Stereo CCD Cameras (스테레오 CCD 카메라를 이용한 이동체의 실시간 3차원 위치추적)

  • Kwon, Hyuk-Jong;Bae, Sang-Keun;Kim, Byung-Guk
    • Spatial Information Research
    • /
    • v.13 no.2 s.33
    • /
    • pp.129-138
    • /
    • 2005
  • In this paper, a 3D position-tracking algorithm for moving objects using stereo CCD cameras is proposed. The aim is a method for extracting the coordinates of moving objects that improves operating and data-processing efficiency. We applied relative orientation to the stereo CCD cameras and extracted image coordinates in the left and right images after segmenting the moving object. The 3D position of the moving object is then determined from the image coordinates acquired in the left and right images. We used independent relative orientation to determine the relative location and attitude of the stereo CCD cameras, RGB pixel values to segment the moving objects, and space intersection to calculate the coordinates of the moving objects. Finally, we tested the system and compared the accuracy of the results.
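
For rectified cameras with parallel axes, the space intersection that recovers 3D position reduces to the textbook disparity relation Z = fB/d. The sketch below works under that simplifying assumption (rig parameters hypothetical); the paper's general space intersection from relatively oriented cameras is more involved.

```python
def space_intersection(x_left, x_right, focal_px, baseline_m):
    """Depth of a point from rectified stereo image x-coordinates:
    Z = f * B / disparity, the parallel-axis special case of
    photogrammetric space intersection."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("point at infinity or bad correspondence")
    return focal_px * baseline_m / disparity

# Hypothetical rig: 700 px focal length, 0.12 m baseline.
depth = space_intersection(420.0, 400.0, 700.0, 0.12)
```

A 20-pixel disparity on this rig places the point 4.2 m away; halving the disparity would double the depth, which is why small matching errors matter most for distant objects.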
