• Title/Summary/Keyword: 3D object tracking

Convergence Control of Moving Object using Opto-Digital Algorithm in the 3D Robot Vision System

  • Ko, Jung-Hwan;Kim, Eun-Soo
    • Journal of Information Display / v.3 no.2 / pp.19-25 / 2002
  • In this paper, a new target extraction algorithm is proposed in which the coordinates of the target are obtained adaptively by using difference-image information and an optical BPEJTC (binary phase extraction joint transform correlator), with which the target object can be segmented from the input image and background noise removed in a stereo vision system. The proposed algorithm first extracts the target object by removing background noise through the difference-image information of sequential left images, and then controls the pan/tilt and convergence angle of the stereo camera by using the coordinates of the target position obtained from the optical BPEJTC between the extracted target image and the input image. Experimental results show that the proposed algorithm can extract the target object from an input image with background noise and then effectively track the target object in real time. Finally, the possibility of implementing an adaptive stereo object-tracking system based on the proposed algorithm is also suggested.
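
The difference-image step above lends itself to a simple digital illustration. The sketch below extracts a moving target's centroid from two sequential frames and converts it into normalized pan/tilt errors; it is only a minimal stand-in for the paper's opto-digital BPEJTC pipeline, and the threshold, frame sizes, and function names are illustrative assumptions.

```python
import numpy as np

def extract_target_centroid(prev_frame, curr_frame, threshold=30):
    """Locate a moving target from the absolute difference of two
    sequential grayscale frames (threshold is an assumed value)."""
    diff = np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32))
    mask = diff > threshold                      # suppress static background
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                              # no motion detected
    return xs.mean(), ys.mean()                  # target coordinates (x, y)

def pan_tilt_error(centroid, frame_shape):
    """Convert target coordinates into normalized pan/tilt errors relative
    to the image center, as a camera controller might consume them."""
    h, w = frame_shape
    cx, cy = centroid
    return (cx - w / 2) / (w / 2), (cy - h / 2) / (h / 2)

# usage with synthetic frames: a bright patch "moves" into the second frame
prev = np.zeros((240, 320), dtype=np.uint8)
curr = prev.copy()
curr[100:120, 200:220] = 255
c = extract_target_centroid(prev, curr)
if c is not None:
    print(pan_tilt_error(c, prev.shape))
```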

3D Object Location Identification Using Finger Pointing and a Robot System for Tracking an Identified Object (손가락 Pointing에 의한 물체의 3차원 위치정보 인식 및 인식된 물체 추적 로봇 시스템)

  • Gwak, Dong-Gi;Hwang, Soon-Chul;Ok, Seo-Won;Yim, Jung-Sae;Kim, Dong Hwan
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.24 no.6 / pp.703-709 / 2015
  • In this work, a robot that grasps and delivers an object in response to a simple finger-pointing command from a person with a hand or arm disability is introduced. In this robot system, a Leap Motion sensor is used to obtain the finger-motion data of the user, and a Kinect sensor is used to measure the 3D (three-dimensional) position of the desired object. Once the object is indicated by the user's finger pointing, its exact 3D position is determined using an image-processing technique and a coordinate transformation between the Leap Motion and Kinect sensors. The obtained information is transmitted to the robot controller, and the robot successfully grasps the target and delivers it to the user.
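
The key geometric step is mapping a fingertip measured in the Leap Motion frame into the Kinect frame. The sketch below applies a rigid-body coordinate transformation to a 3D point; the calibration matrix and point values are hypothetical, since the paper's actual calibration is not given in the abstract.

```python
import numpy as np

def to_homogeneous(p):
    """Append 1 so a 3D point can be multiplied by a 4x4 transform."""
    return np.append(np.asarray(p, dtype=float), 1.0)

def transform_point(T_leap_to_kinect, p_leap):
    """Map a point from the Leap Motion frame into the Kinect frame
    with a rigid-body transform (hypothetical calibration)."""
    return (T_leap_to_kinect @ to_homogeneous(p_leap))[:3]

# hypothetical calibration: a 90-degree rotation about z plus a translation
theta = np.deg2rad(90.0)
T = np.array([[np.cos(theta), -np.sin(theta), 0.0, 0.10],
              [np.sin(theta),  np.cos(theta), 0.0, 0.05],
              [0.0,            0.0,           1.0, 0.30],
              [0.0,            0.0,           0.0, 1.00]])

fingertip_leap = [0.02, 0.15, 0.40]        # meters, Leap Motion frame
print(transform_point(T, fingertip_leap))  # same point in the Kinect frame
```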

Visual servoing based on neuro-fuzzy model

  • Jun, Hyo-Byung;Sim, Kwee-Bo
    • Institute of Control, Robotics and Systems Conference Proceedings / 1997.10a / pp.712-715 / 1997
  • In image-Jacobian-based visual servoing, the inverse Jacobian generally has to be calculated through complicated coordinate transformations, which requires excessive computation, and the singularity of the image Jacobian must also be considered. This paper presents a visual servoing method that controls the pose of a robotic manipulator for tracking and grasping a 3-D moving object whose pose and motion parameters are unknown. Because the object is in motion, tracking and grasping must be done on-line and the controller must have a continuous learning ability. A Kalman filter is used to estimate the parameters of the moving object, and a fuzzy-inference-based reinforcement learning algorithm with dynamic recurrent neural networks is used for tracking and grasping it. Computer simulation results are presented to demonstrate the performance of this visual servoing method.
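
Estimating the parameters of the moving object with a Kalman filter can be illustrated with a minimal constant-velocity model. The sketch below is a generic predict/update loop on noisy position measurements; the state, noise covariances, and measurement values are assumptions, not the paper's models.

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity state transition
H = np.array([[1.0, 0.0]])              # only position is measured
Q = np.eye(2) * 1e-3                    # process noise (assumed)
R = np.array([[1e-2]])                  # measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle for a scalar position measurement z."""
    x = F @ x                           # predict state
    P = F @ P @ F.T + Q                 # predict covariance
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros((2, 1)), np.eye(2)      # initial position/velocity estimate
for z in [0.0, 0.11, 0.19, 0.32, 0.41]: # noisy object positions (made up)
    x, P = kalman_step(x, P, z)
print(x.ravel())                        # estimated position and velocity
```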

Development of Auto Tracking System for Baseball Pitching (투구된 공의 실시간 위치 자동추적 시스템 개발)

  • Lee, Ki-Chung;Bae, Sung-Jae;Shin, In-Sik
    • Korean Journal of Applied Biomechanics / v.17 no.1 / pp.81-90 / 2007
  • Identifying the position of a moving object in real time has been an issue not only in sport biomechanics but also in other academic areas. To address this issue, this study tracks the movement of a pitched ball, which allows easier prediction because of the clear focus and simple movement of the object. Machine learning has been leading research on extracting information from continuous images, such as object tracking. Although rule-based methods in artificial intelligence prevailed for decades, the field has evolved toward statistical approaches that find the maximum a posteriori location in the image. The development of machine learning, together with advances in recording technology and computing power, has made it possible to extract the trajectory of a pitched baseball from recorded images. We present a baseball tracking method based on object-tracking methods in machine learning. We introduce three state-of-the-art studies on object tracking and show how they can be combined into a novel engine that finds the trajectory from continuous pitching images. The first concerns the mean-shift method, which finds the mode of an assumed continuous distribution from a set of data. The second explains how the mode and object region can be found effectively given the object's location and region in the previous image. The third concerns representing the data as features that can be handled directly; from those features, a distribution can be established to generate a set of data for mean shift. In this paper, we combine the three works to track the baseball's location in continuous image frames. From the locations obtained in two sets of images, the real 3-D trajectory of the pitched ball can be reconstructed. We show how this works on real pitching images.
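
The core of the engine described above is the mean-shift mode search. The sketch below implements generic mean shift with a Gaussian kernel on 2D samples; it omits the color-feature representation and region adaptation discussed in the cited works, and the bandwidth and sample data are illustrative.

```python
import numpy as np

def mean_shift(points, start, bandwidth=1.0, iters=50):
    """Move toward the weighted mean of nearby samples (Gaussian kernel)
    until the estimate settles on a mode of the sample distribution."""
    x = np.asarray(start, dtype=float)
    for _ in range(iters):
        d2 = np.sum((points - x) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))   # kernel weights
        x_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < 1e-6:       # converged
            break
        x = x_new
    return x

# samples clustered around (5, 5); the mode estimate should end up nearby
rng = np.random.default_rng(0)
samples = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(200, 2))
print(mean_shift(samples, start=[3.0, 3.0]))
```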

Object Tracking for a Video Sequence from a Moving Vehicle: A Multi-modal Approach

  • Hwang, Tae-Hyun;Cho, Seong-Ick;Park, Jong-Hyun;Choi, Kyoung-Ho
    • ETRI Journal / v.28 no.3 / pp.367-370 / 2006
  • This letter presents a multi-modal approach to tracking geographic objects, such as buildings and road signs, in a video sequence recorded from a moving vehicle. In the proposed approach, photogrammetric techniques are successfully combined with conventional tracking methods. More specifically, photogrammetry combined with positioning technologies is used to obtain the 3-D coordinates of chosen geographic objects, providing a search area for conventional feature trackers. In addition, we present an adaptive window decision scheme based on the distance between the chosen objects and the moving vehicle. Experimental results are provided to show the robustness of the proposed approach.
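
The adaptive window decision can be pictured with the pinhole relation between object distance and apparent size. The sketch below scales a search window inversely with the distance to the object; the focal length, object size, and pixel limits are placeholders, not values from the letter.

```python
import numpy as np

def adaptive_window(distance_m, focal_px=800.0, object_size_m=2.0,
                    min_px=16, max_px=256):
    """Pick a search-window size (pixels) from the vehicle-to-object
    distance via the pinhole relation size_px ~ focal * size_m / distance.
    The focal length, object size, and limits are placeholders."""
    size_px = focal_px * object_size_m / max(distance_m, 1e-6)
    return int(np.clip(size_px, min_px, max_px))

# nearby objects get a large search window, distant ones a small window
for d in (5.0, 20.0, 80.0):
    print(d, "m ->", adaptive_window(d), "px")
```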

Mixed reality system using adaptive dense disparity estimation (적응적 미세 변이추정기법을 이용한 스테레오 혼합 현실 시스템 구현)

  • 민동보;김한성;양기선;손광훈
    • Proceedings of the IEEK Conference / 2003.11a / pp.171-174 / 2003
  • In this paper, we propose a method of composing stereo images using adaptive dense disparity estimation. For the correct composition of a stereo image and a 3D virtual object, accurate marker positions and depth information are needed. Existing algorithms use the marker positions in the stereo images to calculate the depth of the calibration object, but this depth information may be wrong when marker tracking is inaccurate. Moreover, in occlusion regions the depth of the 3D object cannot be determined, so the stereo images and the 3D virtual object cannot be composited. For these reasons, the proposed algorithm uses adaptive dense disparity estimation to calculate depth. Adaptive dense disparity estimation is a pixel-based disparity estimation in which the search range is limited to the neighborhood of the calibration object.
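
Pixel-based disparity estimation over a limited search range, the core of the adaptive scheme above, can be sketched as simple block matching. The code below finds the disparity of one pixel by minimizing the sum of absolute differences over a bounded range; the window size, search range, and synthetic image pair are illustrative.

```python
import numpy as np

def disparity_at(left, right, y, x, max_disp=16, win=3):
    """Disparity of one pixel by SAD block matching, searching only a
    limited range (window size and range are illustrative)."""
    best_d, best_cost = 0, np.inf
    patch_l = left[y - win:y + win + 1, x - win:x + win + 1].astype(np.int32)
    for d in range(0, min(max_disp, x - win) + 1):
        patch_r = right[y - win:y + win + 1,
                        x - d - win:x - d + win + 1].astype(np.int32)
        cost = np.abs(patch_l - patch_r).sum()   # sum of absolute differences
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# synthetic pair: the right view is the left view shifted by 4 pixels
left = np.random.default_rng(1).integers(0, 255, (60, 80)).astype(np.uint8)
right = np.roll(left, -4, axis=1)
print(disparity_at(left, right, y=30, x=40))     # expected disparity: 4
```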

Robust Dynamic Projection Mapping onto Deforming Flexible Moving Surface-like Objects (유연한 동적 변형물체에 대한 견고한 다이내믹 프로젝션맵핑)

  • Kim, Hyo-Jung;Park, Jinho
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.7 no.6 / pp.897-906 / 2017
  • Projection mapping, also known as spatial augmented reality (SAR), has attracted much attention recently and is used in many fields to augment physical objects with various projected virtual replications. However, conventional approaches to projection mapping face some limitations: the geometric transformation properties of target objects are not considered, and the movements of flexible, paper-like objects are hard to handle under natural interactions such as folding and bending. In addition, precise registration and tracking have been cumbersome processes in the past. While there has been much research on projection mapping onto static objects, dynamic projection mapping that keeps tracking a moving flexible target and aligns the projection at an interactive rate is still a challenge. Therefore, this paper proposes a new method using Unity3D and ARToolKit for high-speed, robust tracking and dynamic projection mapping onto non-rigid deforming objects, performed rapidly and interactively. The method consists of four stages: forming a cubic Bezier surface, rendering the transformation values, multiple-marker recognition and tracking, and real-time webcam imaging. Users can fold, curve, bend, and twist the object to interact with it. The method achieves three high-quality results. First, the system can detect strong deformations of objects. Second, it reduces the occlusion error, and thereby the misalignment between the target object and the projected video. Lastly, its accuracy and robustness allow the result to be projected exactly onto the target object in real time with high-speed, precise transformation tracking.
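
The first stage, forming a cubic Bezier surface, can be illustrated by evaluating a bicubic Bezier patch from a 4x4 grid of control points, which is one way a bending sheet can be approximated. The control-point values and helper names in the sketch below are illustrative, not taken from the paper.

```python
from math import comb
import numpy as np

def bernstein3(i, t):
    """Cubic Bernstein basis polynomial B_{i,3}(t)."""
    return comb(3, i) * (t ** i) * ((1.0 - t) ** (3 - i))

def bezier_surface_point(ctrl, u, v):
    """Evaluate a bicubic Bezier patch at (u, v) from a 4x4 grid of 3D
    control points (illustrative values below)."""
    p = np.zeros(3)
    for i in range(4):
        for j in range(4):
            p += bernstein3(i, u) * bernstein3(j, v) * ctrl[i, j]
    return p

# a 4x4 control grid with the middle rows lifted, mimicking a bent sheet
ctrl = np.zeros((4, 4, 3))
for i in range(4):
    for j in range(4):
        ctrl[i, j] = [i / 3.0, j / 3.0, 0.2 if i in (1, 2) else 0.0]
print(bezier_surface_point(ctrl, 0.5, 0.5))  # point at the patch center
```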

Visual Tracking of Moving Target Using Mobile Robot with One Camera (하나의 카메라를 이용한 이동로봇의 이동물체 추적기법)

  • 한영준;한헌수
    • Journal of Institute of Control, Robotics and Systems / v.9 no.12 / pp.1033-1041 / 2003
  • A new visual tracking scheme is proposed for a mobile robot that tracks a moving object in 3D space in real time. Visual tracking here means controlling the mobile robot so that the moving target is kept at the center of the input image at all times. This is made possible by simplifying the relationship between the 2D image frame captured by a single camera and the 3D workspace frame. To precisely calculate the input vector (orientation and distance) of the mobile robot, the velocity vector of the target is determined by eliminating the velocity component caused by the camera motion from the velocity vector observed in the input image. The problem of the target temporarily disappearing from the input image is solved by selecting the search area based on a linear prediction of the target motion. Experimental results show that the proposed scheme enables a mobile robot to successfully follow a moving target in real time.
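
Two steps above translate directly into small computations: removing the camera-induced component from the observed image velocity, and linearly predicting where the target will reappear. The sketch below shows both under assumed pixel values; how the camera-induced component is actually derived in the paper is simplified away here.

```python
import numpy as np

def target_velocity(image_velocity, camera_induced_velocity):
    """Subtract the image-motion component caused by the camera's own
    motion so only the target's velocity remains (pixels per frame)."""
    return np.asarray(image_velocity) - np.asarray(camera_induced_velocity)

def predict_search_center(position, velocity, frames_ahead=1):
    """Linearly predict where the target should reappear, used to place
    the search area when the target temporarily leaves the image."""
    return np.asarray(position) + frames_ahead * np.asarray(velocity)

v_img = [6.0, -2.0]   # apparent motion observed in the image (assumed)
v_cam = [4.0, 0.0]    # component induced by the robot/camera motion (assumed)
v_tgt = target_velocity(v_img, v_cam)
print(predict_search_center([160.0, 120.0], v_tgt, frames_ahead=3))
```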

Real-Time Tomato Instance Tracking Algorithm by using Deep Learning and Probability Model (딥러닝과 확률모델을 이용한 실시간 토마토 개체 추적 알고리즘)

  • Ko, KwangEun;Park, Hyun Ji;Jang, In Hoon
    • The Journal of Korea Robotics Society / v.16 no.1 / pp.49-55 / 2021
  • Recently, smart farm technology has been drawing attention as an alternative to the decline of the farm labor population caused by an aging society. In particular, there is an increasing demand for automatic harvesting systems that can be commercialized. Pre-harvest crop detection is the most important issue for a harvesting robot system in a real-world environment. In this paper, we propose a real-time tomato instance tracking algorithm that uses deep learning and probability models. In general, it is hard to keep track of the same tomato instance between successive frames without a stochastic approach, because the tomato growing environment is disturbed by changes in lighting conditions and background clutter. Therefore, individual tomato objects are detected in each frame by a YOLOv3 model, and continuous instance tracking between frames is performed with a Kalman filter and a probability model. We verified the performance of the proposed method in an experiment, which showed good results on real-world test data.
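
Keeping the same tomato identity across frames amounts to associating new detections with existing tracks. The sketch below uses greedy nearest-centroid matching with a gating distance as a simplified stand-in for the paper's Kalman-filter and probability-model association; the track positions, detections, and gate value are illustrative.

```python
import numpy as np

def associate(prev_tracks, detections, gate=40.0):
    """Greedy nearest-centroid association between existing tomato tracks
    and new per-frame detections; unmatched detections start new tracks."""
    assignments, used = {}, set()
    for tid, pos in prev_tracks.items():
        dists = [np.linalg.norm(np.subtract(pos, d)) if i not in used
                 else np.inf for i, d in enumerate(detections)]
        if dists and min(dists) < gate:
            i = int(np.argmin(dists))
            assignments[tid] = i
            used.add(i)
    new_tracks = [i for i in range(len(detections)) if i not in used]
    return assignments, new_tracks

tracks = {0: (100.0, 200.0), 1: (300.0, 220.0)}         # previous centroids
dets = [(305.0, 225.0), (102.0, 198.0), (500.0, 50.0)]  # current detections
print(associate(tracks, dets))  # track 0 -> det 1, track 1 -> det 0, det 2 new
```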

Robust Object Extraction Algorithm in the Sea Environment (해양환경에서 강건한 물표 추적 알고리즘)

  • Park, Jiwon;Jeong, Jongmyeon
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.3 / pp.298-303 / 2014
  • In this paper, we propose a robust object extraction and tracking algorithm for IR image sequences acquired in the sea environment. To extract size-invariant objects, we detect horizontal and vertical edges using the DWT and combine them to generate a saliency map; a binarization technique is then applied to the saliency map to extract object regions. Correspondences between objects in consecutive frames are established by computing a minimum weighted Euclidean distance as the matching measure. Finally, object trajectories are determined while accounting for false correspondences such as entering objects, vanishing objects, and false objects. Experimental results show that the proposed algorithm finds trajectories robustly.
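
The matching step above can be sketched as computing a minimum weighted Euclidean distance between object features in consecutive frames. The feature layout (centroid x, centroid y, area), weights, and distance threshold below are illustrative assumptions; pairs exceeding the threshold stand in for entering or vanishing objects.

```python
import numpy as np

def weighted_distance(f1, f2, weights):
    """Weighted Euclidean distance between two object feature vectors
    (here: centroid x, centroid y, area)."""
    d = np.asarray(f1, dtype=float) - np.asarray(f2, dtype=float)
    return float(np.sqrt(np.sum(weights * d ** 2)))

def match_objects(prev_objs, curr_objs, weights, max_dist=50.0):
    """Match each object in the previous frame to its nearest object in
    the current frame; pairs above max_dist are treated as vanishing or
    newly entering objects rather than correspondences."""
    matches = {}
    for i, fp in enumerate(prev_objs):
        dists = [weighted_distance(fp, fc, weights) for fc in curr_objs]
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            matches[i] = j
    return matches

w = np.array([1.0, 1.0, 0.1])                 # weigh position over area
prev = [(120.0, 80.0, 300.0), (40.0, 60.0, 150.0)]
curr = [(42.0, 63.0, 155.0), (125.0, 83.0, 310.0)]
print(match_objects(prev, curr, w))           # expected: {0: 1, 1: 0}
```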