• Title/Summary/Keyword: Moving target tracking

Human Face Tracking and Modeling using Active Appearance Model with Motion Estimation

  • Tran, Hong Tai; Na, In Seop; Kim, Young Chul; Kim, Soo Hyung
    • Smart Media Journal / v.6 no.3 / pp.49-56 / 2017
  • Images and videos that include the human face carry a great deal of information, so accurately extracting the face is an important problem in computer vision. In real life, however, faces vary widely in shape and texture. A model-based approach adapts well to these variations because unknown data can be represented by the model built from training examples; its weakness is large inter-frame motion, such as a sudden pose change or fast movement. In this paper, we propose an enhanced human face-tracking method that combines face detection and motion estimation using cascaded Convolutional Neural Networks with continuous face tracking and model-correction steps using the Active Appearance Model. The proposed system detects the face in the first input frame and initializes the models; on later frames, cascaded CNN face detection estimates the target motion, such as location or pose, before the previous model is applied to fit the new target.
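
An illustrative sketch of the detect-then-fit loop the abstract outlines, in Python. OpenCV's Haar cascade stands in for the paper's cascaded CNN detector, and a stub replaces the actual AAM fitting; both substitutions are assumptions made for readability, not the authors' implementation.

```python
import cv2
import numpy as np

# Stand-in detector: a Haar cascade shipped with OpenCV replaces the cascaded CNN.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(gray):
    """Return the first detected face box (x, y, w, h), or None."""
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return boxes[0] if len(boxes) else None

def fit_appearance_model(gray, init_box):
    """Placeholder for AAM fitting: a real implementation would iteratively
    minimise the texture residual starting from init_box."""
    x, y, w, h = init_box
    return np.array([[x + w / 2.0, y + h / 2.0]])  # dummy single landmark

def track(frames):
    """Detect in the first frame, then re-detect each frame to estimate the
    target's motion (location/pose) before fitting the appearance model."""
    prev_box = None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        box = detect_face(gray)
        if box is None:
            box = prev_box               # fall back to the previous estimate
        if box is not None:
            landmarks = fit_appearance_model(gray, box)
            prev_box = box
            yield box, landmarks
```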

Formation Control of Mobile Robot for Moving Object Tracking (이동물체 추적을 위한 이동로봇의 대형제어)

  • Oh, Young-Suk; Lee, Chung-Ho; Park, Jong-Hun; Kim, Jin-Hwan; Huh, Uk-Youl
    • The Transactions of The Korean Institute of Electrical Engineers / v.60 no.4 / pp.856-861 / 2011
  • The mobile robot controller is designed to track the target and maintain the formation at the same time. Formation control is incorporated into the controller by extending a trajectory-tracking algorithm. The dynamic model of the mobile robot is used together with the kinematic model, taking the robot's practical physical parameters into account: the dynamic model transforms the velocity control input of the kinematic model into a torque control input, which is the input actually applied to the robot. The formation controller is designed by the backstepping method so that Lyapunov stability is satisfied, and it is applied to the mobile robot for various target movements in simulation to confirm stability.
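
For context, a minimal Python sketch of the kind of velocity-level trajectory-tracking law (Kanayama-style) that such backstepping designs build on; the torque-level dynamic extension and the formation offsets of the paper are not reproduced, and the gains are illustrative assumptions.

```python
import numpy as np

def tracking_control(pose, ref_pose, v_ref, w_ref, kx=1.0, ky=4.0, kth=2.0):
    """Velocity-level tracking law for a unicycle robot.
    pose, ref_pose: (x, y, theta); v_ref, w_ref: reference velocities."""
    x, y, th = pose
    xr, yr, thr = ref_pose
    # Reference error expressed in the robot's body frame.
    ex = np.cos(th) * (xr - x) + np.sin(th) * (yr - y)
    ey = -np.sin(th) * (xr - x) + np.cos(th) * (yr - y)
    eth = (thr - th + np.pi) % (2 * np.pi) - np.pi      # wrap to [-pi, pi)
    v = v_ref * np.cos(eth) + kx * ex                   # linear velocity command
    w = w_ref + v_ref * (ky * ey + kth * np.sin(eth))   # angular velocity command
    return v, w
```

A follower robot in the formation would apply the same law to a reference pose offset from the leader or target; a dynamic-level controller then converts (v, w) into the torque commands the abstract describes.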

Development of Relative Position Measuring Device for Moving Target in Local Area (국소영역에서 이동표적의 상대위치 측정 장치 개발)

  • Seo, Myoung Kook
    • Journal of Drive and Control / v.17 no.4 / pp.8-14 / 2020
  • Intelligent devices using ICT have been introduced in the construction machinery field to improve productivity and stability. Among them, machine guidance provides real-time posture, location, and work-range information to operators by installing various sensors, controllers, and satellite navigation systems on construction machines. However, the efficiency of equipment that relies on location information, such as machine guidance, drops sharply inside buildings and tunnels that lie in GPS blind spots, so other high-precision positioning technologies are required there. In this study, we develop a relative position measurement system that provides precise location information for construction machinery and robots in a local area where GPS reception is difficult. The system tracks a spherical marker installed on a vehicle using image-based tracking, and measures the distance and direction to the marker to calculate the position.
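
A hedged sketch of the final geometric step implied by the abstract: converting the measured distance and direction to the spherical marker (here assumed to be given as azimuth/elevation angles of the camera line of sight) into a relative 3-D position. The angle and axis conventions are assumptions, not taken from the paper.

```python
import math

def relative_position(range_m, azimuth_rad, elevation_rad):
    """Spherical-to-Cartesian conversion in the sensor frame
    (x forward, y left, z up; assumed convention)."""
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return x, y, z
```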

Remote Monitoring and Control of the Real Robot associated with a Virtual Robot

  • Jeon, Byung-Joon; Kim, Dong-Hwan
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2005.06a / pp.43-48 / 2005
  • A robot system that provides remote control and monitoring through a virtual robot design is addressed, and tracking of a two-dimensional moving target by robot vision is chosen as a case study. The virtual robot is developed and synchronized with the real robot by compensating for the communication delay. Both systems are displayed on a remote panel by exchanging command and image data. The virtual robot is implemented with the OpenGL library in a Visual C++ environment, and remote monitoring and control between the real and virtual robots are achieved by applying appropriate data compression to the network communication.
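
One plausible way to realise the delay compensation mentioned in the abstract is to extrapolate the last received robot state forward by the estimated transport delay; the Python sketch below shows that idea only and is not the authors' scheme.

```python
import time

class DelayCompensator:
    """First-order-hold prediction of the remote robot's joint state."""

    def __init__(self):
        self.last_pos = None     # last received joint positions
        self.last_vel = None     # last received joint velocities
        self.last_stamp = None   # sender timestamp of that sample [s]

    def update(self, positions, velocities, sent_at):
        self.last_pos, self.last_vel, self.last_stamp = positions, velocities, sent_at

    def predicted_state(self, now=None):
        """Extrapolate the last sample to 'now' to hide the network delay."""
        if self.last_pos is None:
            return None
        now = time.time() if now is None else now
        dt = max(0.0, now - self.last_stamp)      # estimated transport delay
        return [p + v * dt for p, v in zip(self.last_pos, self.last_vel)]
```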

A SHIPBOARD MULTISENSOR SOLUTION FOR THE DETECTION OF FAST MOVING SMALL SURFACE OBJECTS

  • Ko, Hanseok
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1995.10a / pp.174-177 / 1995
  • Detecting a small threat object, whether fast moving or floating in shallow water, presents a formidable challenge to shipboard sensor systems, which must decide in a timely manner whether to launch defensive weapons. An integrated multisensor concept is envisioned in which active and passive sensors are combined to detect short-duration targets in dense ocean-surface clutter and maximize detection range. The objective is to develop multisensor integration techniques that operate on detection data prior to track formation while simultaneously fusing contacts to tracks. In the system concept, detections from a low-grazing-angle search radar cue an infrared sensor for target classification, which in turn designates an active electro-optical sensor for sector search and target verification.
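
A toy Python sketch of the "fusing contacts to tracks" idea: a new sensor contact is associated with the nearest existing track if it falls inside a gate, otherwise it starts a tentative track. The gate size and state layout are illustrative assumptions, not taken from the paper.

```python
import math

GATE_RADIUS = 500.0   # metres; illustrative association gate

def associate(contact_xy, tracks):
    """tracks: list of dicts with keys 'id' and 'xy' (x, y position)."""
    best, best_d = None, float("inf")
    for trk in tracks:
        d = math.dist(contact_xy, trk["xy"])
        if d < best_d:
            best, best_d = trk, d
    if best is not None and best_d <= GATE_RADIUS:
        best["xy"] = contact_xy                               # update the track
    else:
        tracks.append({"id": len(tracks), "xy": contact_xy})  # tentative track
    return tracks
```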

Multiple Properties-Based Moving Object Detection Algorithm

  • Zhou, Changjian; Xing, Jinge; Liu, Haibo
    • Journal of Information Processing Systems / v.17 no.1 / pp.124-135 / 2021
  • Object detection is a fundamental yet challenging task in computer vision that plays an important role in object recognition, tracking, scene analysis, and understanding. This paper proposes a multi-property fusion algorithm for moving object detection. First, we build a scale-invariant feature transform (SIFT) vector field and analyze its vectors to divide them into different classes. Second, the distance of each class is calculated by dispersion analysis. Next, the target and its contour are extracted; the images are then segmented, reversed, and morphologically processed so that the moving objects can be detected. The experimental results show good stability, accuracy, and efficiency.
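
A rough Python/OpenCV sketch of two ingredients named in the abstract: SIFT correspondences between consecutive frames (from which a vector field could be built) and morphological processing of a binary motion mask. The vector-field classification and dispersion analysis themselves are not reproduced.

```python
import cv2

def sift_matches(prev_gray, curr_gray):
    """Match SIFT keypoints between two frames; returns point correspondences."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt)
            for m in matcher.match(des1, des2)]

def clean_mask(motion_mask):
    """Morphological opening then closing to remove noise and fill small holes."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    opened = cv2.morphologyEx(motion_mask, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)
```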

LOS Moving Algorithm Design of Electro-Optical Targeting Pod for Joystick Command (조이스틱 명령에 따른 Electro-Optical Targeting Pod의 LOS 이동 알고리즘 설계)

  • Seo, Hyoungkyu; Park, Jaeyoung; Ahn, Jung-Hun
    • The Transactions of The Korean Institute of Electrical Engineers / v.67 no.10 / pp.1395-1400 / 2018
  • The EO TGP (Electro-Optical Targeting Pod) is an optical tracking system with functions such as target tracking, image stabilization, and LOS (Line of Sight) change. In particular, it is important to move the LOS to a point of interest in response to joystick commands. When the pilot moves the joystick to observe a different scene, the EO TGP gimbals must be driven appropriately. Most EOTS simply drive the corresponding gimbal for each joystick command; for example, if the pilot inputs a horizontal command to look toward the right of the screen, only the azimuth gimbal is driven, regardless of the current attitude. Because the gimbal structure follows Euler angles, however, the image on the screen does not then move horizontally, and the elevation gimbal angle introduces image rotation, so the pitch gimbal must also move. In this paper, we therefore design a LOS moving algorithm that converts the LOS command into gimbal velocity commands so that the LOS moves correctly. We model the differential kinematic equation and transform the joystick command into velocity commands for the gimbals, so that the algorithm generates the gimbal velocity commands corresponding to the same horizontal-direction command. Finally, we verify the performance through MATLAB/Simulink.
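
A minimal Python sketch of a secant-compensated mapping from a joystick LOS-rate command (horizontal/vertical in the image) to azimuth/elevation gimbal rates for a two-axis az-el gimbal; the sign conventions, the rate limit, and the paper's full differential-kinematics model are assumptions or omissions here.

```python
import math

def los_to_gimbal_rates(h_rate, v_rate, elevation_rad, max_sec=10.0):
    """h_rate, v_rate: desired LOS rates in the image frame [rad/s]."""
    # Near the zenith cos(elevation) -> 0, so the secant term is limited.
    sec = 1.0 / max(math.cos(elevation_rad), 1.0 / max_sec)
    az_rate = h_rate * sec    # azimuth must turn faster at high elevation
    el_rate = v_rate          # vertical image motion maps directly to pitch
    return az_rate, el_rate
```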

Stereo System for Tracking Moving Object using Log-Polar Transformation and ZDF (로그폴라 변환과 ZDF를 이용한 이동 물체 추적 스테레오 시스템)

  • Yoon, Jong-Kun; Park, Il-; Lee, Yong-Bum; Chien, Sung-Il
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.1 / pp.61-69 / 2002
  • An active stereo vision system allows us to localize a target object by passing only features with small disparities, without heavy computation to identify the target. This simple method, however, is not applicable when a distracting background is present or when the target and other objects lie in the zero-disparity area at the same time. To alleviate these problems, we combine the filtering with foveation, which keeps high resolution at the center of the visual field and suppresses the usually less interesting periphery. We adopt either an image pyramid or a log-polar transformation as the foveated image representation, and extract the stereo disparity of the target by projection so that the disparity stays small during tracking. Our experiments show that the log-polar transformation is superior to both the image pyramid and the traditional method in separating the target from a distracting background, and it noticeably enhances tracking performance.
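
For illustration, a Python/OpenCV log-polar (foveated) resampling of a frame around the current fixation point using warpPolar; the ZDF-based disparity filtering and the projection step of the paper are not shown.

```python
import cv2

def log_polar(frame, center, out_size=(128, 128), max_radius=None):
    """Resample 'frame' with high resolution near 'center' and a coarse periphery."""
    if max_radius is None:
        max_radius = min(frame.shape[0], frame.shape[1]) / 2.0
    return cv2.warpPolar(frame, out_size, center, max_radius,
                         cv2.INTER_LINEAR + cv2.WARP_POLAR_LOG)
```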

Visual Object Tracking based on Particle Filters with Multiple Observation (다중 관측 모델을 적용한 입자 필터 기반 물체 추적)

  • Koh, Hyeung-Seong; Jo, Yong-Gun; Kang, Hoon
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.5 / pp.539-544 / 2004
  • We investigate a visual object tracking algorithm based on particle filters, namely CONDENSATION, in order to combine multiple observation models: active contours of the digitally subtracted image and particle measurements of the object's color. The former is applied to matching the contour of the moving target, and the latter is used to independently enhance the likelihood of tracking a particular object color. Particle filters are efficient compared with other tracking algorithms because the tracking mechanism follows the Bayesian inference rule of conditional probability propagation. The experimental results demonstrate that the suggested contour-tracking particle filters are robust in the cluttered environments of robot vision.
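
A skeleton of one CONDENSATION-style particle filter step in Python that fuses two observation models by multiplying their likelihoods, as the abstract suggests; the contour and colour likelihood functions are placeholders to be supplied by the caller, and the random-walk motion model is an assumption.

```python
import numpy as np

def condensation_step(particles, rng, contour_lik, color_lik, motion_std=3.0):
    """particles: (N, 2) array of candidate target positions.
    contour_lik, color_lik: callables mapping a position to a likelihood."""
    n = len(particles)
    # 1. Predict: diffuse the particles with a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # 2. Measure: fuse the two observation models multiplicatively.
    w = np.array([contour_lik(p) * color_lik(p) for p in particles])
    w = w / w.sum() if w.sum() > 0 else np.full(n, 1.0 / n)
    estimate = (particles * w[:, None]).sum(axis=0)   # weighted mean position
    # 3. Resample in proportion to the fused weights.
    particles = particles[rng.choice(n, size=n, p=w)]
    return particles, estimate
```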

Automatic Person Identification using Multiple Cues

  • Swangpol, Danuwat; Chalidabhongse, Thanarat
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2005.06a / pp.1202-1205 / 2005
  • This paper describes a method for vision-based person identification that can detect, track, and recognize a person from video using multiple cues: height and clothing colors. The method does not require a constrained pose or a fully frontal face image to identify the person. First, the system, which is connected to a pan-tilt-zoom camera, detects the target using motion detection and a human cardboard model. The system keeps tracking the moving target while trying to determine whether it is a human and which of the registered persons in the database it is. To segment the moving target from the background scene, we employ a version of background subtraction together with spatial filtering. Once the target is segmented, it is aligned with the generic human cardboard model to verify that it is a human. If so, the cardboard model is also used to segment the body into salient parts such as the head, torso, and legs, and the whole-body silhouette is analyzed to obtain shape information such as height and slimness. These multiple cues (at present, shirt color, trouser color, and body height) are then used to recognize the target through a supervised self-organization process. We preliminarily tested the system on a set of 5 subjects with multiple sets of clothes. The recognition rate is 100% when a person wears clothes that were learned beforehand, but the system fails when a person wears new clothes, which shows that height alone is not enough to distinguish persons. We plan to extend the work by adding more cues, such as skin color and face recognition that exploits the camera's zoom capability to obtain a high-resolution view of the face, and then to evaluate the system with more subjects.
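
A rough Python/OpenCV sketch of the segmentation front end described in the abstract, with OpenCV's MOG2 background model standing in for the paper's background-subtraction technique; the cardboard-model alignment and the self-organising classifier are not reproduced.

```python
import cv2

bg = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25,
                                        detectShadows=True)

def segment_person(frame):
    """Return the bounding box (x, y, w, h) of the largest moving blob, or None."""
    mask = bg.apply(frame)
    mask = cv2.medianBlur(mask, 5)                               # spatial filtering
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest)   # box height approximates silhouette height
```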
