• Title/Summary/Keyword: object tracking

Search Results: 107

The Application of Dyadic Wavelet In the RS Image Edge Detection

  • Qiming, Qin;Wenjun, Wang;Sijin, Chen
    • Proceedings of the KSRS Conference / 2003.11a / pp.1268-1271 / 2003
  • In the edge detection of RS (remote sensing) images, useful detail is often lost and spurious edges appear. To solve this problem, we use the dyadic wavelet to detect the edges of surface features, combining edge detection with the multi-resolution analysis of the wavelet transform. Via dyadic wavelet decomposition, we obtain the RS image at an appropriate scale and compute the edge data in the horizontal and vertical directions respectively, then work out the gradient vector modulus of the surface features; finally, by tracing these, we obtain the edge data of the object and build the RS image containing the detected edges. This method suppresses the effect of noise and extracts the edge data of the object accurately. With an experiment on an RS image containing an airport, we verify the feasibility of applying the dyadic wavelet to object edge detection.

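The gradient-modulus step described in the abstract can be sketched in a few lines of numpy. This is only an illustration, not the authors' implementation: the box-filter smoothing standing in for the dyadic wavelet, the scale, and the threshold are all assumed values.

```python
import numpy as np

def smooth(img, n=2):
    """Approximate dyadic-scale smoothing by repeated 3x3 box averaging."""
    out = img.astype(float)
    for _ in range(n):
        p = np.pad(out, 1, mode="edge")
        out = sum(p[i:i+out.shape[0], j:j+out.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    return out

def wavelet_modulus_edges(img, scale=2, thresh=0.2):
    """Edge map from the modulus of horizontal/vertical detail coefficients."""
    s = smooth(img, scale)
    wx = np.zeros_like(s); wy = np.zeros_like(s)
    wx[:, 1:] = np.diff(s, axis=1)   # horizontal detail (x-derivative)
    wy[1:, :] = np.diff(s, axis=0)   # vertical detail (y-derivative)
    modulus = np.hypot(wx, wy)       # gradient vector modulus
    return modulus > thresh * modulus.max()

# Toy image: a bright square on a dark background.
img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0
edges = wavelet_modulus_edges(img)
```

In the paper the detail coefficients come from a true dyadic wavelet decomposition (e.g. Mallat's algorithm), which preserves edge localization across scales better than the box smoothing used here.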

Multi-level Cross-attention Siamese Network For Visual Object Tracking

  • Zhang, Jianwei;Wang, Jingchao;Zhang, Huanlong;Miao, Mengen;Cai, Zengyu;Chen, Fuguo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.12 / pp.3976-3990 / 2022
  • Currently, cross-attention is widely used in Siamese trackers to replace traditional correlation operations for feature fusion between the template and the search region: it can establish the similarity relationship between the target and the search region better than correlation, enabling robust visual object tracking. But existing trackers using cross-attention focus only on the rich semantic information of high-level features while ignoring the appearance information contained in low-level features, which makes them vulnerable to interference from similar objects. In this paper, we propose a Multi-level Cross-attention Siamese network (MCSiam) to aggregate semantic and appearance information at the same time. Specifically, a multi-level cross-attention module is designed to fuse the multi-layer features extracted from the backbone, integrating different levels of the template and search-region features so that rich appearance and semantic information can be used for the tracking task simultaneously. In addition, before cross-attention, a target-aware module is introduced to enhance the target feature and alleviate interference, making the multi-level cross-attention module more efficient at fusing information from the target and the search region. We test MCSiam on four tracking benchmarks, and the results show that the proposed tracker achieves performance comparable to state-of-the-art trackers.
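The core fusion operation the abstract describes can be sketched as plain scaled dot-product cross-attention, with search-region tokens attending to template tokens. This is a minimal numpy sketch under assumptions: the learned query/key/value projections and the multi-level aggregation of the actual MCSiam network are omitted, and the dimensions are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(search, template):
    """Search-region tokens (queries) attend to template tokens (keys/values).

    search:   (N_s, d) features of the search region
    template: (N_t, d) features of the template
    """
    d = search.shape[-1]
    attn = softmax(search @ template.T / np.sqrt(d))   # (N_s, N_t) weights
    return attn @ template                             # fused (N_s, d) features

rng = np.random.default_rng(0)
search, template = rng.normal(size=(64, 32)), rng.normal(size=(16, 32))
fused = cross_attention(search, template)
```

A multi-level variant in the spirit of the paper would run this per backbone stage (low-level appearance features and high-level semantic features) and combine the fused outputs before the tracking head.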

ROI Based Object Extraction Using Features of Depth and Color Images (깊이와 칼라 영상의 특징을 사용한 ROI 기반 객체 추출)

  • Ryu, Ga-Ae;Jang, Ho-Wook;Kim, Yoo-Sung;Yoo, Kwan-Hee
    • The Journal of the Korea Contents Association / v.16 no.8 / pp.395-403 / 2016
  • Recently, image processing has been used in many areas, and tracking of moving objects in real time is an active research topic among image processing techniques. Popular methods for tracking an object include HOG (Histogram of Oriented Gradients) for tracking pedestrians and Codebook for background subtraction. However, object extraction is difficult because a moving object may have a dynamic background in the image and undergo severe lighting changes. In this paper, we propose a method of object extraction using depth-image and color-image features based on an ROI (Region of Interest). First, we set the ROI from the location of the object in the depth image and then find feature points in the color image. We then extract the object by creating a new contour from the convex hull points of the object and the feature points. Finally, we compare the proposed method with existing methods to evaluate how accurately the object is extracted.
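The convex-hull contour step can be illustrated with Andrew's monotone-chain algorithm. The inputs below (ROI corners from the depth image plus color feature points) are hypothetical stand-ins, and the paper's actual contour refinement is not reproduced:

```python
def convex_hull(points):
    """Andrew's monotone chain: convex hull of 2-D points, counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:                       # build lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Hypothetical inputs: ROI corners from the depth image plus color feature points.
roi_corners = [(0, 0), (10, 0), (10, 8), (0, 8)]
feature_pts = [(5, 4), (3, 2), (12, 4)]          # one point lies outside the ROI
contour = convex_hull(roi_corners + feature_pts)
```

Points interior to the hull, such as (5, 4) above, drop out of the contour, while an outlying feature point like (12, 4) extends it beyond the original ROI.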

Improved Object Tracking using Surrounding Information Detection (주변정보 검출을 통한 개선된 객체추적 기법)

  • Cho, Chi-young;Kim, Soo-Hwan
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2013.10a / pp.1027-1030 / 2013
  • For object detection in video, there are various methods that use frequency-domain transforms. In video, the appearance of an object may change slightly over time; detection methods based on frequency transforms, such as ASEF and MOSSE, can update the detection filter online to cope with such appearance changes. However, these approaches are likely to fail when an object reappears after being occluded by other objects, because large appearance changes occur at that moment; worse, the object may reappear in a different place than where it disappeared. In this paper, we propose a method that detects the object efficiently even when it reappears in another place, reducing detection failures.

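The MOSSE filter mentioned in the abstract is trained in the frequency domain so that correlation with the target patch yields a sharp Gaussian peak, and is blended online with a learning rate. A minimal numpy sketch follows; the preprocessing, windowing, and failure handling of a real tracker are omitted, and all parameters are illustrative:

```python
import numpy as np

def gaussian_peak(shape, center, sigma=2.0):
    """Desired correlation response: a Gaussian centered on the target."""
    ys, xs = np.indices(shape)
    return np.exp(-((ys - center[0])**2 + (xs - center[1])**2) / (2 * sigma**2))

def mosse_train(patch, response, eps=1e-5):
    """Closed-form MOSSE filter H* = G F* / (F F* + eps) for one frame."""
    F, G = np.fft.fft2(patch), np.fft.fft2(response)
    return (G * np.conj(F)) / (F * np.conj(F) + eps)

def mosse_update(H, patch, response, lr=0.125, eps=1e-5):
    """Online update: blend the new frame's filter into the running one."""
    return (1 - lr) * H + lr * mosse_train(patch, response, eps)

def correlate(H, patch):
    """Apply the filter; the response peaks at the detected target location."""
    return np.real(np.fft.ifft2(H * np.fft.fft2(patch)))

# Toy frame with a bright blob as the "target".
frame = np.zeros((64, 64)); frame[30:34, 40:44] = 1.0
g = gaussian_peak(frame.shape, (32, 42))
H = mosse_train(frame, g)
resp = correlate(H, frame)
peak = np.unravel_index(np.argmax(resp), resp.shape)
```

The occlusion problem the paper targets shows up here as well: once the blob disappears and reappears elsewhere, the blended filter's peak degrades, which is what the proposed surrounding-information detection aims to compensate for.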

A Study on the location tracking system by using Zigbee in wireless sensor network (무선 센서네트워크에서 Zigbee를 적용한 위치추정시스템 구현에 관한연구)

  • Jung, Suk;Kim, Hwan-Yong
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.9 / pp.2120-2126 / 2010
  • This paper aims to realize a location tracking system using Zigbee in a wireless sensor network. The wireless sensor network offers a user-oriented location tracking service in the ubiquitous environment; such a service tracks and reports the location of an object or a person. The system realized in this paper can measure the location of moving nodes indoors or outdoors without blind spots: indoors it uses RSSI signals, and outdoors it connects to GPS signals to track the location. By establishing the wireless sensor network with Zigbee and obtaining the locations of the moving nodes, real-time tracking is possible.
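Indoor RSSI-based positioning of the kind described typically converts signal strength to range with a log-distance path-loss model and then trilaterates against fixed anchors. A sketch under assumed calibration values (reference power p0 and path-loss exponent n are hypothetical, not the paper's measurements):

```python
import numpy as np

def rssi_to_distance(rssi, p0=-40.0, n=2.5):
    """Log-distance path-loss model: rssi = p0 - 10*n*log10(d)."""
    return 10 ** ((p0 - rssi) / (10 * n))

def trilaterate(anchors, dists):
    """Linear least-squares position fix from >= 3 anchors."""
    anchors = np.asarray(anchors, float); d = np.asarray(dists, float)
    A = 2 * (anchors[1:] - anchors[0])
    b = (d[0]**2 - d[1:]**2
         + (anchors[1:]**2).sum(axis=1) - (anchors[0]**2).sum())
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = [(0, 0), (10, 0), (0, 10)]
true_pos = np.array([3.0, 4.0])
dists = [np.hypot(*(true_pos - a)) for a in anchors]
rssi = [-40.0 - 25 * np.log10(d) for d in dists]   # simulated readings, n = 2.5
est = trilaterate(anchors, [rssi_to_distance(r) for r in rssi])
```

In practice RSSI is noisy indoors, so the exponent n must be fitted per environment and the fix filtered over time; outdoors the GPS fix replaces this step entirely.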

Opto-Digital Implementation of Convergence-Controlled Stereo Target Tracking System (주시각이 제어된 스테레오 물체추적 시스템의 광-디지털적 구현)

  • Ko, Jung-Hwan;Lee, Jae-Soo;Kim, Eun-Soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.27 no.4B / pp.353-364 / 2002
  • In this paper, a new opto-digital stereo object-tracking system using hierarchical digital algorithms and optical BPEJTC is proposed. The proposed system can adaptively track a moving target by controlling the convergence of a stereo camera. First, the target is detected through background matching of the sequential input images using optical BPEJTC, and the target area is segmented using a target projection mask composed by hierarchical digital processing: image subtraction, logical operations, and morphological filtering. Second, the coordinates of the moving target in each sequential input frame are extracted by carrying out optical BPEJTC between the reference image of the target region mask and the stereo input image. Finally, the convergence and pan/tilt of the stereo camera are sequentially controlled using these target coordinates, keeping the target in track. The possibility of real-time implementation of the adaptive stereo object-tracking system is also suggested by optically implementing the proposed target extraction and convergence control algorithms.
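The digital segmentation chain the abstract lists (image subtraction, logical operation, morphological filtering) can be sketched without the optical correlator. A minimal numpy version, with a hand-rolled 3x3 erosion/dilation standing in for proper morphological filtering; the threshold and frame contents are illustrative:

```python
import numpy as np

def shift_stack(mask):
    """All nine 3x3-neighbourhood views of a binary mask (edge-padded)."""
    p = np.pad(mask, 1, mode="edge")
    h, w = mask.shape
    return np.stack([p[i:i+h, j:j+w] for i in range(3) for j in range(3)])

def erode(mask):  return shift_stack(mask).all(axis=0)
def dilate(mask): return shift_stack(mask).any(axis=0)

def segment_moving(prev, curr, thresh=0.1):
    diff = np.abs(curr.astype(float) - prev.astype(float)) > thresh  # subtraction
    return dilate(erode(diff))                                       # opening

# Toy pair of frames: a 6x6 blob moves four pixels to the right.
prev = np.zeros((32, 32)); prev[10:16, 10:16] = 1.0
curr = np.zeros((32, 32)); curr[10:16, 14:20] = 1.0
mask = segment_moving(prev, curr)
```

The opening (erosion followed by dilation) removes isolated noise pixels that survive thresholding, which is the role morphological filtering plays in building the target projection mask.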

Real-Time Moving Object Tracking System using Advanced Block Based Image Processing (개선된 블록기반 영상처리기법에 의한 실시간 이동물체 추적시스템)

  • Kim, Dohwan;Cheoi, Kyung-Joo;Lee, Yillbyung
    • Korean Journal of Cognitive Science / v.16 no.4 / pp.333-349 / 2005
  • In this paper, we propose a real-time moving object tracking system based on a block-based image processing technique and human visual processing. The system has two main features. First, to take advantage of the biological mechanism of the human retina, the system has two cameras: a CCD (Charge-Coupled Device) camera equipped with a wide-angle lens for a wider field of view, and a Pan-Tilt-Zoom camera. Second, the system divides the input image into a number of blocks and processes them coarsely to reduce the tracking error rate and the processing time. In an experiment, the system showed satisfactory performance, coping with almost every noisy image, detecting moving objects very fast, and controlling the Pan-Tilt-Zoom camera precisely.

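The coarse block-based processing described above can be sketched by splitting the frame into fixed-size blocks and flagging a block as moving when its mean absolute inter-frame difference exceeds a threshold. Block size and threshold below are assumed, not the paper's settings:

```python
import numpy as np

def moving_blocks(prev, curr, block=8, thresh=0.05):
    """Mean absolute difference per block; True marks a 'moving' block."""
    diff = np.abs(curr.astype(float) - prev.astype(float))
    h, w = diff.shape
    hb, wb = h // block, w // block
    per_block = diff[:hb*block, :wb*block] \
        .reshape(hb, block, wb, block).mean(axis=(1, 3))
    return per_block > thresh

prev = np.zeros((64, 64)); curr = np.zeros((64, 64))
curr[16:24, 16:24] = 1.0           # an object appears in exactly one 8x8 block
flags = moving_blocks(prev, curr)
```

Working on an 8x8 block grid instead of 4096 individual pixels is what buys the reduced processing time and noise sensitivity the abstract reports; the flagged blocks then steer the Pan-Tilt-Zoom camera.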

Study on the Improved Target Tracking for the Collaborative Control of the UAV-UGV (UAV-UGV의 협업제어를 위한 향상된 Target Tracking에 관한 연구)

  • Choi, Jae-Young;Kim, Sung-Gaun
    • Journal of Institute of Control, Robotics and Systems / v.19 no.5 / pp.450-456 / 2013
  • This paper suggests an improved target tracking method for collaboration between a quad-rotor UAV (Unmanned Aerial Vehicle) and an omnidirectional UGV (Unmanned Ground Vehicle). If the UAV shakes or the UGV moves rapidly, the existing method loses the tracking target. To solve this problem, we propose an algorithm that can continue tracking after the target is lost: the algorithm stores the landmark's motion vector, and when the target is lost, a control signal is input so that the camera keeps moving in the direction in which the landmark left the view. Prior to the experiment, proportional-integral control was used on the four motors to calibrate the heading of the omnidirectional mobile robot. The landmark on the UGV was recognized by the camera attached to the UAV, and the target was tracked through proportional-integral-derivative control. Finally, the performance of the target tracking controller and the proposed algorithm was evaluated through experiments.
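The proportional-integral-derivative control used to keep the landmark tracked can be sketched generically. The gains and the first-order plant below are illustrative assumptions, not the paper's tuning or vehicle model:

```python
class PID:
    """Textbook discrete PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt                 # accumulate I term
        deriv = (err - self.prev_err) / self.dt        # finite-difference D term
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a simple first-order plant toward the setpoint.
dt = 0.05
pid, y = PID(kp=2.0, ki=0.5, kd=0.1, dt=dt), 0.0
for _ in range(1000):
    u = pid.step(1.0, y)
    y += dt * (u - y)        # toy plant: dy/dt = u - y
```

In the paper's setup the "measured" value would be the landmark's position error in the UAV camera image, and the output would command the camera/vehicle; the integral term removes steady-state offset while the derivative term damps the oscillation caused by UAV shake.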

Vision Based Sensor Fusion System of Biped Walking Robot for Environment Recognition (영상 기반 센서 융합을 이용한 이족로봇에서의 환경 인식 시스템의 개발)

  • Song, Hee-Jun;Lee, Seon-Gu;Kang, Tae-Gu;Kim, Dong-Won;Seo, Sam-Jun;Park, Gwi-Tae
    • Proceedings of the KIEE Conference / 2006.04a / pp.123-125 / 2006
  • This paper discusses a vision-based sensor fusion system for biped robot walking. Most research on biped walking robots has focused on the walking algorithm itself. However, developing vision systems for biped walking robots is an important and urgent issue, since biped robots are ultimately developed not only for research but to be utilized in real life. In this research, systems for environment recognition and tele-operation have been developed for task assignment and execution by the biped robot, as well as a human-robot interaction (HRI) system. To carry out tasks, an object tracking system using a modified optical flow algorithm and an obstacle recognition system using enhanced template matching and a hierarchical support vector machine algorithm, fed by a wireless vision camera, are implemented in a sensor fusion system together with the other sensors installed on the biped walking robot. Systems for robot manipulation and communication with the user have also been developed.

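Template matching of the kind used for obstacle recognition slides the template over the image and scores each position; normalized cross-correlation is the standard score. A minimal numpy sketch (a brute-force baseline, not the enhanced version in the paper):

```python
import numpy as np

def ncc_match(image, template):
    """Return the (row, col) where normalized cross-correlation peaks."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t**2).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r+th, c:c+tw]
            wz = w - w.mean()
            denom = np.sqrt((wz**2).sum()) * tn
            if denom == 0:
                continue                      # flat window: undefined score
            score = (wz * t).sum() / denom
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

rng = np.random.default_rng(1)
image = rng.normal(size=(40, 40))
template = image[12:20, 25:33].copy()     # template cut from a known spot
pos = ncc_match(image, template)
```

Mean-subtraction and normalization make the score robust to brightness and contrast changes, which matters for a wireless camera on a walking robot; production code would use an FFT-based or pyramid formulation for speed.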

Method for Extracting Features of Conscious Eye Moving for Exploring Space Information (공간정보 탐색을 위한 의식적 시선 이동특성 추출 방법)

  • Kim, Jong-Ha;Jung, Jae-Young
    • Korean Institute of Interior Design Journal / v.25 no.2 / pp.21-29 / 2016
  • This study estimated the traits of conscious eye movement, taking the halls of subway stations as its objects. For the estimation, observation data from eye tracking were matched with the experiment images, and a purpose-built program was produced and used to analyze eye movement in the selected sections, providing grounds for clarifying the traits of space users' eye movement. The outcomes can be defined as follows. First, the purpose-built program provided a method for coding the large amount of observation data, greatly reducing the analysis time needed to find the traits of conscious eye movement. Accordingly, including the eye's intentionality in the feature-extraction method made it possible to organize how the gaze enters and exits particular objects over the course of observation. Second, examination of eye movement in each area surrounding the object factors showed that [out]→[in] movement, in which the line of sight moves from the surrounding area to the object, characteristically entered the object from its upper left (Area I), while [in]→[out] movement, from the inside of the object to the outside, also went to the upper left (Area I). Overall, there was much eye movement from the upper left and upper right (Areas I, II) to the object, but outward eye movement was found to go to the upper left (Area I), the middle right (Area IV), and the upper right (Area II). Third, to find whether there was intense eye movement toward a particular factor, dominant standards were presented for analysis, which showed much eye movement from the top areas (Areas I, II) to sections 1 and 2. The [in] eye movements were [I→A] (23.0%), [I→B] (16.1%), and [II→B] (13.8%), while the [out] movements were [A→I] (14.8%), [B→I] (13.6%), [A→II] (11.4%), [B→IV] (11.4%), and [B→II] (10.2%). Though eye movement toward the objects took place in specific directions (areas), movement from the objects outward was dispersed widely across different areas.