• Title/Abstract/Keyword: Long-Term Object Tracking


LSTM Network with Tracking Association for Multi-Object Tracking

  • Farhodov, Xurshedjon; Moon, Kwang-Seok; Lee, Suk-Hwan; Kwon, Ki-Ryong
    • 한국멀티미디어학회논문지 / Vol. 23, No. 10 / pp.1236-1249 / 2020
  • In recent object tracking research, Convolutional Neural Network and Recurrent Neural Network-based strategies have become the main approach to the field's notable challenges, such as occlusion, object and camera viewpoint variation, multiple changing targets, and lighting variation. In this paper, an LSTM network-based tracking association method is proposed that is capable of real-time multi-object tracking: an LSTM network is coupled to the association step, which supports long-term tracking while addressing these challenges. The LSTM network is defined in Keras as a sequence of layers, with the Sequential class serving as a container for those layers, and the proposed network structure integrates the tracking association on top of the Keras neural-network library. The tracking process is driven by the features learned by the LSTM network and achieves strong real-time detection and tracking performance. The main focus of this work is learning the locations, appearance, and motion details of trackable objects, and then predicting the feature locations of objects in boxes from their initial positions. The performance of the joint object tracking system shows that the LSTM network is powerful enough to support a real-time multi-object tracking process.
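
The abstract defines the LSTM in Keras with the Sequential container. Below is a minimal sketch, under assumed layer sizes and a simple (x, y, w, h) box encoding not taken from the paper, of a Sequential LSTM that predicts a track's next box from its recent history, one plausible form of the described tracking association:

```python
# Minimal sketch (assumption): a Keras Sequential LSTM that maps a short history of
# bounding-box observations to the next box. Layer sizes and the 4-value box
# encoding (x, y, w, h) are illustrative, not taken from the paper.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

SEQ_LEN, BOX_DIM = 10, 4  # 10 past frames, (x, y, w, h) per frame

model = Sequential([
    LSTM(64, input_shape=(SEQ_LEN, BOX_DIM), return_sequences=True),
    LSTM(32),
    Dense(BOX_DIM),  # predicted box for the next frame
])
model.compile(optimizer="adam", loss="mse")

# Toy usage: predict the next box from a random track history.
history = np.random.rand(1, SEQ_LEN, BOX_DIM).astype("float32")
next_box = model.predict(history)
```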

Dynamic Tracking Aggregation with Transformers for RGB-T Tracking

  • Xiaohu, Liu; Zhiyong, Lei
    • Journal of Information Processing Systems / Vol. 19, No. 1 / pp.80-88 / 2023
  • RGB-thermal (RGB-T) tracking from unmanned aerial vehicles (UAVs) faces challenges such as similar-looking objects, occlusion, fast motion, and motion blur. In this study, we propose dynamic tracking aggregation (DTA) as a unified framework that performs object detection and data association. The proposed approach obtains fused features based on a transformer model and an L1-norm strategy. To link the current frame with recent information, a dynamically updated embedding called the dynamic tracking identification (DTID) models the iterative tracking process. For object association, we designed a long short-term tracking aggregation module for dynamic feature propagation that matches spatial and temporal embeddings. DTA achieved highly competitive performance in an experimental evaluation on public benchmark datasets.
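
As a purely illustrative reading of the abstract's "L1-norm strategy" (not the paper's actual module), the sketch below weights RGB and thermal feature maps by their L1 norms before summing them; the transformer-based aggregation itself is left out:

```python
# Minimal sketch (assumption): L1-norm-based weighting of RGB and thermal feature
# maps before fusion; one plausible reading of the abstract, not the paper's module.
import numpy as np

def fuse_rgb_t(feat_rgb: np.ndarray, feat_t: np.ndarray) -> np.ndarray:
    """feat_*: (C, H, W) feature maps from the RGB and thermal branches."""
    w_rgb = np.abs(feat_rgb).sum()          # L1 norm as a modality reliability score
    w_t = np.abs(feat_t).sum()
    total = w_rgb + w_t
    return (w_rgb / total) * feat_rgb + (w_t / total) * feat_t  # reliability-weighted sum

fused = fuse_rgb_t(np.random.rand(256, 32, 32), np.random.rand(256, 32, 32))
```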

Surf points based Moving Target Detection and Long-term Tracking in Aerial Videos

  • Zhu, Juan-juan; Sun, Wei; Guo, Bao-long; Li, Cheng
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 10, No. 11 / pp.5624-5638 / 2016
  • A novel method based on SURF points is proposed to detect and lock-track a single ground target in aerial videos. Videos captured by moving cameras contain complex motion, which makes moving object detection difficult. Our approach has three parts: moving target template detection, search area estimation, and target tracking. Global motion estimation and compensation are first performed by grid-sampled SURF point selection and matching. The single ground target is then detected by joint spatial-temporal processing: the temporal step computes the difference between the compensated reference image and the current image, and the spatial step applies morphological operations and adaptive binarization. The second part augments a Kalman filter with SURF point scale information to predict the target position and adapt the search area. Finally, the local SURF points of the target template are matched within this search region to track the target. Long-term tracking is updated to follow target scaling, occlusion, and large deformation. Experimental results show that the algorithm correctly detects small moving targets in dynamic scenes with complex motion. It is robust to vehicle dithering, target scale change, and rotation, and especially to partial or temporary complete occlusion. Compared with traditional algorithms, our method runs in real time, processing 520×390 frames at around 15 fps.
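
A minimal OpenCV sketch of the detection stage the abstract describes, using SURF matching, global motion compensation, frame differencing, and morphological cleanup; the grid sampling of SURF points is omitted, parameter values are illustrative, and SURF itself requires an opencv-contrib build with non-free modules enabled:

```python
# Minimal sketch (assumption): SURF-based global motion compensation followed by
# frame differencing. Substitute cv2.ORB_create() if SURF is unavailable.
import cv2
import numpy as np

def compensate_and_diff(ref_gray, cur_gray):
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(ref_gray, None)
    kp2, des2 = surf.detectAndCompute(cur_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)    # camera (global) motion
    warped = cv2.warpPerspective(ref_gray, H, cur_gray.shape[::-1])
    diff = cv2.absdiff(warped, cur_gray)                     # residual = object motion
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
```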

POSE-VIEWPOINT ADAPTIVE OBJECT TRACKING VIA ONLINE LEARNING APPROACH

  • Mariappan, Vinayagam; Kim, Hyung-O; Lee, Minwoo; Cho, Juphil; Cha, Jaesang
    • International journal of advanced smart convergence / Vol. 4, No. 2 / pp.20-28 / 2015
  • In this paper, we propose an effective tracking algorithm whose appearance model is built from features extracted from video frames under posture variation and camera viewpoint change, by employing non-adaptive random projections that preserve the structure of the objects' image feature space. Existing online tracking algorithms update their models with features from recent video frames, yet numerous issues remain despite improvements in tracking: in particular, data-dependent adaptive appearance models often drift because the online algorithm does not receive enough data for online learning. We therefore propose a tracking algorithm with an appearance model based on such non-adaptively projected frame features.
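
A minimal sketch of a non-adaptive (fixed) sparse random projection of an image patch into a low-dimensional appearance feature; the dimensions and the sparse Achlioptas-style construction are assumptions for illustration, not details from the paper:

```python
# Minimal sketch (assumption): a fixed random projection, drawn once and never
# updated during tracking, that compresses a high-dimensional image feature
# vector while roughly preserving distances.
import numpy as np

rng = np.random.default_rng(0)
HIGH_DIM, LOW_DIM = 4096, 64

# Sparse projection matrix with entries in {-sqrt(3), 0, +sqrt(3)}.
R = rng.choice([-1.0, 0.0, 1.0], size=(LOW_DIM, HIGH_DIM), p=[1/6, 2/3, 1/6]) * np.sqrt(3.0)

def compress(patch: np.ndarray) -> np.ndarray:
    """patch: 64x64 grayscale patch -> 64-D compressed appearance feature."""
    return (R @ patch.reshape(-1)) / np.sqrt(LOW_DIM)

feature = compress(rng.random((64, 64)))
```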

인간 행동 분석을 이용한 위험 상황 인식 시스템 구현 (A Dangerous Situation Recognition System Using Human Behavior Analysis)

  • 박준태; 한규필; 박양우
    • 한국멀티미디어학회논문지 / Vol. 24, No. 3 / pp.345-354 / 2021
  • Recently, deep learning-based image recognition systems have been adopted in various surveillance environments, but most of them are still single-frame object recognition methods, which are insufficient for long-term temporal analysis and higher-level situation management. We therefore propose a method that recognizes specific dangerous situations caused by people in real time using deep learning-based object analysis techniques. The proposed method uses deep learning-based object detection and tracking algorithms to recognize situations such as trespassing and loitering. In addition, human joint pose data are extracted and analyzed for emergency-awareness functions such as detecting a person falling down, so that notifications can be issued not only for security but also for emergency-response use.
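
A minimal sketch, under assumed keypoint names and an assumed angle threshold, of the kind of "falling down" check that can be derived from extracted joint pose data:

```python
# Minimal sketch (assumption): a simple fall check from 2-D joint coordinates
# (e.g., from a pose estimator), using the torso angle relative to vertical.
# The threshold and keypoint names are illustrative, not from the paper.
import numpy as np

def is_fallen(neck_xy, hip_xy, angle_thresh_deg=60.0):
    torso = np.asarray(hip_xy, float) - np.asarray(neck_xy, float)
    # Angle between the torso vector and the vertical (image y) axis.
    angle = np.degrees(np.arccos(abs(torso[1]) / (np.linalg.norm(torso) + 1e-6)))
    return angle > angle_thresh_deg   # torso close to horizontal -> likely a fall

print(is_fallen(neck_xy=(100, 200), hip_xy=(180, 210)))  # roughly horizontal torso -> True
```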

투시적 깊이를 활용한 중첩된 객체의 관계추적 (Relation Tracking of Occluded objects using a Perspective Depth)

  • 박화진
    • 디지털콘텐츠학회 논문지 / Vol. 16, No. 6 / pp.901-908 / 2015
  • To track long-duration abnormal behavior such as stalking, a system that continuously tracks the relations between objects across a network of multiple CCTV cameras is essential. However, if the object occlusion that frequently occurs during tracking is not resolved, fatal errors are likely, such as the track being interrupted or the target being swapped with another object. To make maximal use of already-installed CCTV cameras, this study resolves the occlusion problem by exploiting perspective projection depth and object characteristics, enabling continuous tracking of the relations among occluded objects. In addition to occlusion between objects, occlusion by objects contained in the background, such as walls and pillars, is also handled.
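
A minimal sketch of one simple perspective depth cue for deciding occlusion order among overlapping tracks: on a ground plane, a box whose bottom edge is lower in the image is closer to the camera. The cue and the data layout are illustrative, not the paper's exact formulation:

```python
# Minimal sketch (assumption): order tracked objects by a perspective depth cue,
# here the image-plane y coordinate of each bounding box's bottom edge.
def depth_order(boxes):
    """boxes: list of (track_id, x, y, w, h); returns ids from nearest to farthest."""
    return [tid for tid, _, y, _, h in sorted(boxes, key=lambda b: b[2] + b[4], reverse=True)]

boxes = [("A", 50, 80, 40, 120), ("B", 60, 120, 40, 140)]
print(depth_order(boxes))  # ['B', 'A'] -> B is the likely occluder
```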

Traffic Accident Detection Based on Ego Motion and Object Tracking

  • Kim, Da-Seul; Son, Hyeon-Cheol; Si, Jong-Wook; Kim, Sung-Young
    • 한국정보기술학회 영문논문지 / Vol. 10, No. 1 / pp.15-23 / 2020
  • In this paper, we propose a new method to detect traffic accidents in video from vehicle-mounted cameras (vehicle black boxes). We use the distance between vehicles to determine whether an accident has occurred, and we use object detection and tracking to compute each vehicle's position. However, in a crowded road environment it is difficult to decide whether an accident has occurred, because vehicles parked at the edge of the road mean that moving and stopped vehicles are mixed on an ordinary downtown road, making accidents hard to discriminate from non-accidents. We improve the accuracy of vehicle accident detection by using not only the motion of surrounding vehicles but also the ego-motion as input to a Recurrent Neural Network (RNN), and we achieve higher accident detection accuracy than the previous method.
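
A minimal sketch, with an assumed per-frame feature layout (ego-motion plus the relative motion of one surrounding vehicle) and assumed sizes, of an RNN that classifies a clip as accident or non-accident:

```python
# Minimal sketch (assumption): an RNN that classifies a clip from per-frame features
# combining ego-motion with the relative motion of a tracked surrounding vehicle.
# Feature layout and sizes are illustrative, not taken from the paper.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import GRU, Dense

SEQ_LEN = 30          # frames per clip
FEAT = 6              # e.g., ego (dx, dy, dyaw) + other vehicle (dx, dy, distance)

model = Sequential([
    GRU(32, input_shape=(SEQ_LEN, FEAT)),
    Dense(1, activation="sigmoid"),   # probability that the clip contains an accident
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

clip = np.random.rand(1, SEQ_LEN, FEAT).astype("float32")
print(model.predict(clip))
```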

가상 현실 어플리케이션을 위한 관성과 시각기반 하이브리드 트래킹 (Hybrid Inertial and Vision-Based Tracking for VR applications)

  • 구재필; 안상철; 김형곤; 김익재; 구열회
    • 대한전기학회: 학술대회논문집 / 대한전기학회 2003년도 학술회의 논문집 정보 및 제어부문 A / pp.103-106 / 2003
  • In this paper, we present a hybrid inertial and vision-based tracking system for VR applications. One of the most important aspects of VR (Virtual Reality) is providing a correspondence between the physical and virtual worlds, so accurate, real-time tracking of an object's position and orientation is a prerequisite for many applications in virtual environments. Pure vision-based tracking has low jitter and high accuracy but cannot guarantee real-time pose recovery under all circumstances. Pure inertial tracking has high update rates and full 6-DOF recovery but lacks long-term stability due to sensor noise. To overcome these individual drawbacks and build a better tracking system, we fuse vision-based and inertial tracking. Sensor fusion makes the proposed tracking system robust, fast, and accurate, with low jitter and noise. The hybrid tracker is implemented with a Kalman filter that operates in a predictor-corrector manner. Adding a Bluetooth serial communication module gives the system full mobility and makes it affordable, lightweight, energy-efficient, and practical. Full 6-DOF recovery and the full mobility of the proposed system allow the user to interact with mobile devices such as PDAs and provide a natural interface.
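
A minimal one-dimensional sketch of the predictor-corrector structure the abstract describes: the inertial measurement drives the Kalman prediction at a high rate, and the vision-based position drives the correction at a lower rate. The state layout and noise values are illustrative:

```python
# Minimal sketch (assumption): a 1-D predictor-corrector Kalman filter fusing an
# inertial prediction (high rate) with a vision-based position correction (low rate).
import numpy as np

class HybridKF:
    def __init__(self, q=1e-3, r=1e-2):
        self.x = np.zeros(2)            # state: [position, velocity]
        self.P = np.eye(2)
        self.Q, self.R = q * np.eye(2), r

    def predict(self, accel, dt):       # inertial update (every IMU sample)
        F = np.array([[1.0, dt], [0.0, 1.0]])
        self.x = F @ self.x + np.array([0.5 * dt * dt, dt]) * accel
        self.P = F @ self.P @ F.T + self.Q

    def correct(self, vision_pos):      # vision update (every processed frame)
        H = np.array([1.0, 0.0])
        K = self.P @ H / (H @ self.P @ H + self.R)
        self.x += K * (vision_pos - H @ self.x)
        self.P = (np.eye(2) - np.outer(K, H)) @ self.P

kf = HybridKF()
kf.predict(accel=0.2, dt=0.01)
kf.correct(vision_pos=0.001)
print(kf.x)
```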


Autofocus Tracking System Based on Digital Holographic Microscopy and Electrically Tunable Lens

  • Kim, Ju Wan; Lee, Byeong Ha
    • Current Optics and Photonics / Vol. 3, No. 1 / pp.27-32 / 2019
  • We present an autofocus tracking system implemented by combining the digital refocusing of digital holographic microscopy (DHM) with the tunability of an electrically tunable lens (ETL). Once the defocus distance of an image is calculated with the DHM, the focal plane of the imaging system is optically tuned so that it always yields a well-focused image regardless of the object location. The accuracy of the focus is evaluated by calculating the contrast of the refocused images. The DHM is performed in an off-axis holographic configuration, and the ETL performs the focal-plane tuning. With the proposed system, we can track an object drifting along the depth direction without any physical scanning. In addition, the system can obtain the digital hologram and the optical image simultaneously by using the RGB channels of a color camera: in our experiment, the digital hologram is acquired through the red channel and the optical image through the blue channel of the same camera at the same time. This technique is expected to find good application in the long-term imaging of various floating cells.
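
A minimal sketch of digital refocusing by the angular spectrum method, with focus scored by image contrast, which is one way the defocus distance described in the abstract can be estimated; the wavelength, pixel pitch, and contrast metric are assumptions:

```python
# Minimal sketch (assumption): angular-spectrum refocusing of a complex hologram
# over candidate distances, picking the distance with maximum image contrast.
import numpy as np

def refocus(field, dz, wavelength=633e-9, pitch=3.45e-6):
    """Propagate a complex field by dz metres with the angular spectrum method."""
    ny, nx = field.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(nx, d=pitch), np.fft.fftfreq(ny, d=pitch))
    kz = 2 * np.pi * np.sqrt(np.maximum(0.0, 1 / wavelength**2 - FX**2 - FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def best_focus(field, candidates):
    """Return the candidate distance whose refocused intensity has maximum contrast."""
    contrast = lambda img: img.std() / (img.mean() + 1e-12)
    return max(candidates, key=lambda dz: contrast(np.abs(refocus(field, dz)) ** 2))

hologram = np.random.rand(256, 256) * np.exp(1j * np.random.rand(256, 256))
print(best_focus(hologram, candidates=np.linspace(-1e-3, 1e-3, 21)))
```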

커널상관필터를 이용한 소형무인기 추적 (Small UAV tracking using Kernelized Correlation Filter)

  • 선선구; 이의혁
    • 한국인터넷방송통신학회논문지 / Vol. 20, No. 1 / pp.27-33 / 2020
    • 2020
  • 최근 영상 센서를 이용한 물체 탐지 및 추적 기술은 많은 응용분야에서 그 사용이 널리 확대되고 있다. 민수 산업 분야에서 로보틱스, 비디오 감시정찰 및 차량 네비게이션 분야와 같은 영역으로 널리 확대되고 있는 추세이다. 특히, 드론의 사용이 널리 확대되고 있는 현 상황에서 공항, 원자력 발전소 및 중요시설에서는 불법적으로 운용되고 있는 소형무인기를 탐지 및 추적하여 격추시키는 시스템 개발이 매우 중요하다. 최근 영상센서를 활용한 물체 추적 방법으로 이목을 끌고 있는 방법이 학습에 기반을 둔 KCF 방법이다. 그러나 이 방법은 추적 기간이 길어지면 추적 과정에서 표적의 드리프트가 발생하는 문제점이 있다. 비디오 감시정찰 분야에서 표적의 드리프트 문제를 줄이기 위해 우리는 KCF와 적응 임계치설정 및 칼만필터를 적용하여 표적 드리프트 문제를 줄일 수 있는 방법을 제안하였다. 실험을 통해서 실제 무인비행체가 운용되는 실제 환경에서 획득된 흑백 비디오 영상에 제안한 방법과 기존의 KCF 알고리즘을 비교하여 제안한 방법의 우수성을 입증하였다.