• Title/Summary/Keyword: 3D object tracking

160 search results

Human Motion Tracking by Combining View-based and Model-based Methods for Monocular Video Sequences (하나의 비디오 입력을 위한 모습 기반법과 모델 사용법을 혼용한 사람 동작 추적법)

  • Park, Ji-Hun;Park, Sang-Ho;Aggarwal, J.K.
    • The KIPS Transactions:PartB
    • /
    • v.10B no.6
    • /
    • pp.657-664
    • /
    • 2003
  • Reliable tracking of moving humans is essential to motion estimation, video surveillance and human-computer interfaces. This paper presents a new approach to human motion tracking that combines appearance-based and model-based techniques. Monocular color video is processed at both the pixel level and the object level. At the pixel level, a Gaussian mixture model is used to train on and classify individual pixel colors. At the object level, a 3D human body model projected onto a 2D image plane is used to fit the image data. Our method does not use inverse kinematics because of the singularity problem. While many others use stochastic sampling for model-based motion tracking, our method relies purely on nonlinear programming: we convert the human motion tracking problem into a nonlinear programming problem. A cost function for parameter optimization estimates the degree of overlap between the foreground input image silhouette and the projected 3D model body silhouette. The overlap is computed with computational geometry by converting a set of pixels from the image domain into a polygon in the real projection plane domain. Our method is used to recognize various human motions, and motion tracking results on video sequences are very encouraging.
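
The pixel-level stage above — training a Gaussian mixture model to classify pixel colors — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the color means, sample counts, and the two-class likelihood comparison are invented assumptions.

```python
# Illustrative sketch: per-pixel color classification with Gaussian mixtures.
# All training colors here are synthetic stand-ins for real video pixels.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic training data: background pixels cluster near gray,
# foreground pixels cluster near a warmer (skin-like) color.
background = rng.normal(loc=[120, 120, 120], scale=10, size=(500, 3))
foreground = rng.normal(loc=[200, 150, 120], scale=10, size=(500, 3))

bg_model = GaussianMixture(n_components=2, random_state=0).fit(background)
fg_model = GaussianMixture(n_components=2, random_state=0).fit(foreground)

def classify_pixels(pixels):
    """Label each RGB pixel foreground (1) or background (0) by log-likelihood."""
    fg_ll = fg_model.score_samples(pixels)
    bg_ll = bg_model.score_samples(pixels)
    return (fg_ll > bg_ll).astype(int)

labels = classify_pixels(np.vstack([background[:5], foreground[:5]]))
```

In a real pipeline the foreground labels would form the silhouette that the object-level 3D model is fitted against.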

Offline Camera Movement Tracking from Video Sequences

  • Dewi, Primastuti;Choi, Yeon-Seok;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2011.05a
    • /
    • pp.69-72
    • /
    • 2011
  • In this paper, we propose a method to track the movement of a camera from video sequences. The method is useful for video analysis and can be applied as a pre-processing step in applications such as video stabilization and marker-less augmented reality. First, we extract features in each frame using corner point detection. The features in the current frame are then compared with the features in adjacent frames to calculate the optical flow, which represents the relative movement of the camera. The optical flow is then analyzed to obtain camera movement parameters. The final step is camera movement estimation and correction to increase accuracy. The method's performance is verified by generating a 3D map of the camera movement and embedding a 3D object into the video. The examples demonstrated in this paper show that the method is highly accurate and rarely produces jitter.

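
The abstract above estimates relative camera movement by comparing features between adjacent frames. As a hedged, numpy-only stand-in (not the authors' corner-based optical flow method), phase correlation recovers the same dominant inter-frame translation:

```python
# Sketch: recover the global translation between two frames via phase
# correlation. A pure translation appears as a peak in the inverse FFT of
# the normalized cross-power spectrum.
import numpy as np

def phase_correlation_shift(frame_a, frame_b):
    """Return (dy, dx) that aligns frame_b back onto frame_a."""
    Fa = np.fft.fft2(frame_a)
    Fb = np.fft.fft2(frame_b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12   # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap large indices into the negative range.
    h, w = frame_a.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

rng = np.random.default_rng(1)
frame = rng.random((64, 64))
moved = np.roll(frame, shift=(3, -5), axis=(0, 1))  # simulated camera motion
dy, dx = phase_correlation_shift(frame, moved)       # -> (-3, 5)
```

Accumulating these per-frame shifts over the sequence yields the kind of camera-movement trajectory the paper maps in 3D.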

3D Radar Objects Tracking and Reflectivity Profiling

  • Kim, Yong Hyun;Lee, Hansoo;Kim, Sungshin
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.12 no.4
    • /
    • pp.263-269
    • /
    • 2012
  • The ability to characterize feature objects from radar readings is often limited when looking only at their still-frame reflectivity, differential reflectivity and differential phase data. In many cases, a time-series study of these objects' reflectivity profiles is required to properly characterize feature objects of interest. This paper introduces a novel technique to automatically track multiple 3D radar structures in C- and S-band in real time using Doppler radar, and to profile their characteristic reflectivity distributions in time series. The extraction of reflectivity profiles from different radar cluster structures is done in three stages: 1. static-frame (zone-linkage) clustering, 2. dynamic-frame (evolution-linkage) clustering and 3. characterization of clusters through time-series profiles of the reflectivity distribution. The two clustering schemes proposed here are applied to composite multi-layer CAPPI (Constant Altitude Plan Position Indicator) radar data covering an altitude range of 0.25 to 10 km and an area spanning hundreds of thousands of km². Discrete numerical simulations show the validity of the proposed technique and that fast, accurate profiling of time-series reflectivity distributions for deformable 3D radar structures is achievable.
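
The static-frame ("zone-linkage") stage above can be illustrated as connected-component labeling of reflectivity echoes above a threshold, followed by per-cluster profiling. The grid, the 20 dBZ threshold, and the cell values below are illustrative assumptions, not the paper's data:

```python
# Sketch: cluster contiguous radar echoes in one still frame and summarize
# each cluster's reflectivity distribution.
import numpy as np
from scipy import ndimage

reflectivity = np.zeros((10, 10))
reflectivity[1:4, 1:4] = 35.0   # storm cell A (dBZ, invented)
reflectivity[6:9, 6:9] = 50.0   # storm cell B (dBZ, invented)

mask = reflectivity >= 20.0              # keep significant echoes only
labels, n_clusters = ndimage.label(mask)  # zone-linkage: connected components

profiles = {}
for cid in range(1, n_clusters + 1):
    values = reflectivity[labels == cid]
    profiles[cid] = (values.mean(), values.size)  # mean dBZ, pixel count
```

Repeating this per frame and linking clusters that overlap between frames would give the dynamic (evolution-linkage) stage, whose per-cluster statistics form the time-series reflectivity profile.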

Height and Position Estimation of Moving Objects using a Single Camera

  • Lee, Seok-Han;Lee, Jae-Young;Kim, Bu-Gyeom;Choi, Jong-Soo
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.158-163
    • /
    • 2009
  • In recent years, there has been increased interest in characterizing and extracting 3D information from 2D images for human tracking and identification. In this paper, we propose a single-view-based framework for robust estimation of height and position. In the proposed method, 2D features of the target object are back-projected into the 3D scene space, whose coordinate system is given by a rectangular marker. The position and the height are then estimated in the 3D space. In addition, geometric error caused by inaccurate projective mapping is corrected using geometric constraints provided by the marker. The accuracy and robustness of our technique are verified by experiments on several real video sequences captured in outdoor environments.

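
One ingredient of the back-projection described above can be sketched with a planar homography: assuming a rectangular marker of known size on the ground, the four marker corners fix an image-to-ground mapping (solved here with the standard DLT), and an object's foot point can then be placed in marker coordinates. All pixel coordinates and the 1 m marker below are invented for illustration:

```python
# Sketch: map an image point onto the marker-defined ground plane.
import numpy as np

def homography_dlt(src, dst):
    """Solve H so that dst ~ H @ src for four (x, y) correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(rows))
    return Vt[-1].reshape(3, 3)   # null-space vector is the homography

def to_ground(H, pt):
    """Apply the homography and dehomogenize to ground-plane (X, Y)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Marker corners in the image (pixels) and on the ground plane (meters).
image_corners = [(100, 100), (300, 110), (310, 280), (90, 290)]
world_corners = [(0, 0), (1, 0), (1, 1), (0, 1)]
H = homography_dlt(image_corners, world_corners)

foot_point = (200, 105)          # bottom of a tracked object's silhouette
ground_xy = to_ground(H, foot_point)
```

Height estimation additionally needs the vertical direction (e.g. from the marker's pose), which is beyond this sketch.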

A Study of 3D World Reconstruction and Dynamic Object Detection using Stereo Images (스테레오 영상을 활용한 3차원 지도 복원과 동적 물체 검출에 관한 연구)

  • Seo, Bo-Gil;Yoon, Young Ho;Kim, Kyu Young
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.20 no.10
    • /
    • pp.326-331
    • /
    • 2019
  • In the real world, there are both dynamic objects and static objects, but an autonomous vehicle or mobile robot cannot distinguish between them, even though a human can distinguish them easily. It is important to distinguish static objects from dynamic objects clearly to perform autonomous driving successfully and stably for an autonomous vehicle or mobile robot. To do this, various sensor systems can be used, like cameras and LiDAR. Stereo camera images are used often for autonomous driving. The stereo camera images can be used in object recognition areas like object segmentation, classification, and tracking, as well as navigation areas like 3D world reconstruction. This study suggests a method to distinguish static/dynamic objects using stereo vision for an online autonomous vehicle and mobile robot. The method was applied to a 3D world map reconstructed from stereo vision for navigation and had 99.81% accuracy.
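
Stereo vision recovers depth through disparity, the horizontal offset of each scene point between the two views. The abstract does not specify its matcher, so as a hedged, numpy-only illustration of the underlying idea, a single-row block-matching search finds the offset with the lowest sum-of-absolute-differences cost:

```python
# Sketch: 1D block-matching disparity along one rectified scanline pair.
import numpy as np

def disparity_row(left_row, right_row, block=5, max_disp=10):
    """For each pixel, find the disparity minimizing SAD over a 1D block."""
    w = len(left_row)
    half = block // 2
    disp = np.zeros(w, dtype=int)
    for x in range(half, w - half):
        patch = left_row[x - half:x + half + 1]
        best, best_d = np.inf, 0
        for d in range(0, min(max_disp, x - half) + 1):
            cand = right_row[x - d - half:x - d + half + 1]
            cost = np.abs(patch - cand).sum()
            if cost < best:
                best, best_d = cost, d
        disp[x] = best_d
    return disp

rng = np.random.default_rng(2)
right = rng.random(60)
left = np.roll(right, 4)        # simulate a scene 4 px of disparity away
disp = disparity_row(left, right)
```

Real systems use robust 2D matchers (e.g. semi-global matching), but the disparity-to-depth principle that the static/dynamic separation builds on is the same.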

Multiple Moving Person Tracking based on the IMPRESARIO Simulator

  • Kim, Hyun-Deok;Jin, Tae-Seok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2008.05a
    • /
    • pp.877-881
    • /
    • 2008
  • In this paper, we propose a real-time people tracking system that uses multiple CCD cameras for security inside a building. Each camera is mounted on the ceiling of the laboratory so that the image data of passing people are fully overlapped. The implemented system recognizes people moving along various directions. To track people even when their images partially overlap, the proposed system estimates and tracks a bounding box enclosing each person in the tracking region. The approximated convex hull of each individual in the tracking area is obtained to provide more accurate tracking information. To achieve this goal, we propose a method for 3D walking-human tracking based on the IMPRESARIO framework that incorporates cascaded classifiers into hypothesis evaluation. The efficiency of adaptive selection of cascaded classifiers has also been demonstrated, and we show that cascaded classifiers improve the reliability of the likelihood calculation. Experimental results show that the proposed method can smoothly and effectively detect and track walking humans through environments such as dense forests.

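
The per-person bounding box and approximated convex hull described above can be sketched directly from a set of silhouette pixels. The blob below is synthetic; in the paper's setting the points would come from foreground segmentation of the overhead camera image:

```python
# Sketch: bounding box and convex hull of one tracked person's pixels.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(3)
# Synthetic silhouette: 200 points scattered inside an upright ellipse.
angles = rng.uniform(0, 2 * np.pi, 200)
radii = np.sqrt(rng.uniform(0, 1, 200))
blob = np.column_stack([50 + 10 * radii * np.cos(angles),
                        80 + 30 * radii * np.sin(angles)])

# Axis-aligned bounding box enclosing the person.
x_min, y_min = blob.min(axis=0)
x_max, y_max = blob.max(axis=0)
box_area = (x_max - x_min) * (y_max - y_min)

# Convex hull: a tighter outline, hence more accurate tracking information.
hull = ConvexHull(blob)
hull_area = hull.volume   # in 2D, ConvexHull.volume is the polygon area
```

The hull area is never larger than the box area, which is why the hull gives the finer shape cue the abstract mentions.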

The Model based Tracking using the Object Tracking method in the Sequence Scene (장면 전환에서의 물체 추적을 통한 모델기반추적 방법 연구)

  • Kim, Se-Hoon;Hwang, Jung-Won;Kim, Ki-Sang;Choi, Hyung-Il
    • The HCI Society of Korea: Conference Proceedings
    • /
    • 2008.02a
    • /
    • pp.775-778
    • /
    • 2008
  • Augmented reality is a growing area of virtual reality research. The world around us provides a wealth of information that is difficult to duplicate in a computer, as evidenced by the worlds used in virtual environments. An augmented reality system generates a composite view for the user: a combination of the real scene viewed by the user and a virtual scene, generated by the computer, that augments the scene with additional information. The registration method determines the direction and location of the 3D graphic objects relative to the real world, and accurate registration enhances the user's performance in and perception of the augmented world. Registration methods divide into two approaches: model-based tracking and move-matching. This paper studies how to establish this correspondence through an object tracking method that uses color distribution information across scene changes.


A Study on Correcting Virtual Camera Tracking Data for Digital Compositing (디지털영상 합성을 위한 가상카메라의 트래킹 데이터 보정에 관한 연구)

  • Lee, Junsang;Lee, Imgeun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.11
    • /
    • pp.39-46
    • /
    • 2012
  • The development of the computer has widened the expressive possibilities for natural objects and scenes, and cutting-edge computer graphics technologies can effectively create any image we can imagine. Although computer graphics plays an important role in film and video production, the domestic content production industry is rarely in a position to pursue production and research at the same time. In digital compositing, the match moving stage, which composites a captured real sequence with computer graphics imagery, goes through many complicated processes. Camera tracking is the most important issue in this stage: it comprises estimating the 3D trajectory and the optical parameters of the real camera. Because the estimation is based only on the captured sequence, it suffers from many errors that make the process more difficult. In this paper we propose a method for correcting the tracking data. The proposed method alleviates unwanted camera shake and object-bouncing artifacts in the composited scene.
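
The paper's correction method is not detailed in the abstract, but a common baseline for removing high-frequency jitter from an estimated camera trajectory is a centered moving average over each coordinate. The window size and the synthetic path below are illustrative assumptions:

```python
# Sketch: smooth a jittery (N, 3) virtual-camera trajectory.
import numpy as np

def smooth_trajectory(track, window=5):
    """Centered moving average over each coordinate of an (N, 3) path."""
    kernel = np.ones(window) / window
    pad = window // 2
    # Edge padding keeps the output the same length as the input.
    padded = np.pad(track, ((pad, pad), (0, 0)), mode="edge")
    return np.column_stack([np.convolve(padded[:, i], kernel, mode="valid")
                            for i in range(track.shape[1])])

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 100)
clean = np.column_stack([t, np.sin(2 * np.pi * t), np.zeros_like(t)])
jitter = rng.normal(scale=0.05, size=clean.shape)   # simulated tracking noise
smoothed = smooth_trajectory(clean + jitter)
```

Smoothing the camera path this way suppresses the frame-to-frame shake that makes composited objects appear to bounce.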

Statistical Model of 3D Positions in Tracking Fast Objects Using IR Stereo Camera (적외선 스테레오 카메라를 이용한 고속 이동객체의 위치에 대한 확률모델)

  • Oh, Jun Ho;Lee, Sang Hwa;Lee, Boo Hwan;Park, Jong-Il
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.1
    • /
    • pp.89-101
    • /
    • 2015
  • This paper proposes a statistical model of 3D positions when tracking moving targets with an uncooled infrared (IR) stereo camera system. The proposed model is derived from two errors. One is the position error caused by pixel sampling in the digital image. The other is the timing jitter that results from the irregular capture timing of infrared cameras. The capture timing of the IR camera is measured using the jitter meter designed in this paper, and the observed jitters are statistically modeled as a Gaussian distribution. This paper derives an integrated probability distribution by combining the jitter error with the pixel position error; the combined error is modeled as the convolution of the two error distributions. To verify the proposed statistical position error model, this paper presents experiments on tracking moving objects with the IR stereo camera. The 3D positions of the object are accurately measured by a trajectory scanner and are also estimated by stereo matching from the IR stereo camera system. According to the experiments, the positions of the moving object are estimated within the statistically reliable range derived by convolving the two probability models of pixel position error and timing jitter. The proposed statistical model is expected to be applicable to estimating the uncertain 3D positions of moving objects in diverse fields.
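
The combined-error model above rests on a standard fact: the density of the sum of two independent errors is the convolution of their densities, and for two Gaussians the result is again Gaussian with summed variances. The sigma values below are invented for illustration; the check is purely numerical:

```python
# Sketch: convolve a pixel-position error density with a timing-jitter error
# density and compare against the closed-form combined Gaussian.
import numpy as np

dx = 0.01
x = np.arange(-5, 5 + dx / 2, dx)   # symmetric grid including both endpoints

def gaussian(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

pixel_err = gaussian(x, sigma=0.5)    # sampling-pixel position error (assumed)
jitter_err = gaussian(x, sigma=0.3)   # capture-timing jitter error (assumed)

# Discrete convolution approximates the convolution integral (scale by dx).
combined = np.convolve(pixel_err, jitter_err, mode="same") * dx

# Closed form: Gaussian with sigma = sqrt(0.5**2 + 0.3**2).
expected = gaussian(x, sigma=np.sqrt(0.5**2 + 0.3**2))
```

The numerical convolution and the closed-form density agree to within grid error, confirming the variance-addition property the paper's reliable range is built on.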

Robust AAM-based Face Tracking with Occlusion Using SIFT Features (SIFT 특징을 이용하여 중첩상황에 강인한 AAM 기반 얼굴 추적)

  • Eom, Sung-Eun;Jang, Jun-Su
    • The KIPS Transactions:PartB
    • /
    • v.17B no.5
    • /
    • pp.355-362
    • /
    • 2010
  • Face tracking is to estimate the motion of a non-rigid face together with a rigid head in 3D, and it plays an important role in higher-level tasks such as face, facial expression and emotion recognition. In this paper, we propose an AAM-based face tracking algorithm. AAMs have been widely used to segment and track deformable objects, but many difficulties remain; in particular, an AAM often tends to diverge or converge into local minima when a target object is self-occluded, partially occluded or completely occluded. To address this problem, we utilize the scale-invariant feature transform (SIFT). SIFT is effective under self- and partial occlusion because it can find correspondences between feature points under partial loss, and its good global-matching performance enables an AAM to continue tracking without re-initialization through complete occlusions. We also register and use SIFT features extracted from multi-view face images during tracking to track a face effectively across large pose changes. Our proposed algorithm is validated by comparison with other algorithms under the three kinds of occlusion described above.
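
The property that lets SIFT correspondences survive partial loss is nearest-neighbor descriptor matching with Lowe's ratio test: a match is kept only if its best candidate is clearly better than the second best. The sketch below uses synthetic 8-D vectors as stand-ins for real 128-D SIFT descriptors, with an assumed ratio of 0.8:

```python
# Sketch: descriptor matching with the ratio test under partial occlusion.
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to desc_b; keep a match only when the
    nearest neighbor beats the second nearest by the given ratio."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, j1))
    return matches

rng = np.random.default_rng(5)
model_desc = rng.random((20, 8))   # registered features of the tracked face
# The new frame re-observes the first 12 features (slightly perturbed);
# the remaining features are occluded and replaced by unrelated clutter.
frame_desc = np.vstack([model_desc[:12] + rng.normal(scale=0.01, size=(12, 8)),
                        rng.random((8, 8))])
matches = ratio_test_match(model_desc, frame_desc)
```

Even with 8 of 20 features lost to occlusion, the surviving correspondences are recovered cleanly, which is the behavior that keeps the AAM from needing re-initialization.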