• Title/Abstract/Keyword: discriminative correlation filter

Search results: 6 items

Visual Tracking using Weighted Discriminative Correlation Filter

  • Song, Tae-Eun;Jang, Kyung-Hyun
    • 한국컴퓨터정보학회논문지 / Vol. 21, No. 11 / pp.49-57 / 2016
  • In this paper, we propose a novel tracking method based on a weighted discriminative correlation filter (DCF). We also propose PSPR, in place of the conventional PSR, as a measure of tracker performance. The proposed method uses multiple DCFs to estimate the target position and, using PSPR, assigns larger weights to the correlation responses of the trackers that are expected to perform better. Whereas existing multi-DCF trackers compute the final correlation response by directly summing the responses of all trackers, the proposed method obtains it as a weighted combination of the responses of the trackers selected as robust to the given environment. Accordingly, the proposed method provides higher-performance tracking against varied and complex backgrounds than a multi-DCF tracker. Tracking experiments on various video data show that the presented method outperforms both a single-feature tracker and a multi-DCF tracker.
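A minimal sketch of the weighted response fusion this abstract describes, under stated assumptions: the paper's PSPR is not defined in the abstract, so the conventional peak-to-sidelobe ratio (PSR) stands in as the per-tracker confidence, and the response maps and weight normalization are illustrative.

```python
import numpy as np

def psr(response, peak_exclude=5):
    """Peak-to-sidelobe ratio: peak height vs. mean/std of the sidelobe region.
    Used here as a stand-in confidence score (the paper's PSPR is a variant)."""
    py, px = np.unravel_index(np.argmax(response), response.shape)
    peak = response[py, px]
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, py - peak_exclude):py + peak_exclude + 1,
         max(0, px - peak_exclude):px + peak_exclude + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-8)

def fuse_responses(responses):
    """Weighted fusion of per-tracker correlation response maps: trackers with
    higher confidence contribute more to the final map (a selection step that
    drops low-confidence trackers could precede this)."""
    scores = np.array([psr(r) for r in responses])
    weights = scores / (scores.sum() + 1e-8)          # normalize confidences to weights
    fused = sum(w * r for w, r in zip(weights, responses))
    return np.unravel_index(np.argmax(fused), fused.shape)   # (row, col) of the target

# Example: three feature-channel trackers producing 64x64 response maps
responses = [np.random.rand(64, 64) for _ in range(3)]
print(fuse_responses(responses))
```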

Visual tracking based Discriminative Correlation Filter Using Target Separation and Detection

  • Lee, Jun-Haeng
    • 한국컴퓨터정보학회논문지 / Vol. 22, No. 12 / pp.55-61 / 2017
  • In this paper, we propose a novel tracking method using target separation and detection, built on the discriminative correlation filter (DCF), which has been widely studied in recent years. Retainability, the ability to keep hold of the target, is one of the most important properties of a tracker, and several factors degrade it: in particular, fast motion and occlusion of the target occur frequently in image data and can cause the target to be lost, so that tracking cannot be retained. To maintain robust tracking, the proposed method separates the target into parts so that normal tracking continues even when part of the target is occluded. When the target leaves the tracking range, due to full occlusion or fast motion, a detection algorithm is executed to find the target's new location. Experiments on a variety of image data sets show that the proposed algorithm outperforms conventional algorithms under fast motion and occlusion of the target.
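A minimal sketch of the lost-and-redetect control flow described above, under stated assumptions: plain normalized template correlation (OpenCV matchTemplate) stands in for the paper's DCF and detector, the confidence threshold is illustrative, and the part-based target separation is omitted for brevity.

```python
import cv2

REDETECT_THRESHOLD = 0.4  # illustrative confidence cut-off, not taken from the paper

def track_step(frame_gray, template, last_top_left, search_margin=40):
    """One tracking step: correlate the template inside a local search window;
    if the peak is too low (full occlusion / fast motion), re-detect over the whole frame."""
    th, tw = template.shape
    x, y = last_top_left
    # Local search window around the previous position
    x0, y0 = max(0, x - search_margin), max(0, y - search_margin)
    x1 = min(frame_gray.shape[1], x + tw + search_margin)
    y1 = min(frame_gray.shape[0], y + th + search_margin)
    window = frame_gray[y0:y1, x0:x1]
    resp = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    _, peak, _, loc = cv2.minMaxLoc(resp)
    if peak >= REDETECT_THRESHOLD:
        return (x0 + loc[0], y0 + loc[1]), peak          # normal tracking
    # Target likely lost: run detection over the entire frame to re-acquire it
    resp = cv2.matchTemplate(frame_gray, template, cv2.TM_CCOEFF_NORMED)
    _, peak, _, loc = cv2.minMaxLoc(resp)
    return (loc[0], loc[1]), peak
```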

Adaptive Weight Collaborative Complementary Learning for Robust Visual Tracking

  • Wang, Benxuan;Kong, Jun;Jiang, Min;Shen, Jianyu;Liu, Tianshan;Gu, Xiaofeng
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 1 / pp.305-326 / 2019
  • Discriminative correlation filter (DCF) based tracking algorithms have recently shown impressive performance on benchmark datasets. However, many recent approaches remain vulnerable to heavy occlusion, irregular deformation, and similar challenges. In this paper, we address these problems and manage the trade-off between accuracy and real-time operation within the tracking-by-detection framework. First, we propose a strategy that combines the template-based and color-based models adaptively, rather than by simple linear superposition, relying on the strengths of both to improve accuracy. Second, to enhance the discriminative power of the learned template model, spatial regularization is introduced in the learning stage to penalize filter coefficients outside the object boundary that correspond to background features. Third, we use a discriminative multi-scale estimation method to handle scale variation. Finally, we investigate strategies to limit the computational complexity of the tracker. Extensive experiments demonstrate that our tracker outperforms several state-of-the-art algorithms on both the OTB2013 and OTB2015 datasets while maintaining high frame rates.
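A minimal sketch of the first idea, adaptive fusion of a template (correlation-filter) response with a color-histogram response, under stated assumptions: the confidence measure and weight-adaptation rule below are illustrative, not the paper's exact formulation.

```python
import numpy as np

def fuse_complementary(template_resp, color_resp, prev_weight, lr=0.05):
    """Merge a template response map with a color response map; the merge weight
    drifts toward whichever cue is currently more confident."""
    # Per-cue confidence: peak sharpness of each response map
    conf_t = template_resp.max() - template_resp.mean()
    conf_c = color_resp.max() - color_resp.mean()
    target_weight = conf_t / (conf_t + conf_c + 1e-8)       # share given to the template cue
    weight = (1 - lr) * prev_weight + lr * target_weight     # smooth the weight over time
    fused = weight * template_resp + (1 - weight) * color_resp
    pos = np.unravel_index(np.argmax(fused), fused.shape)
    return pos, weight

# Example with random 64x64 response maps and an initial weight of 0.5
t_resp, c_resp = np.random.rand(64, 64), np.random.rand(64, 64)
print(fuse_complementary(t_resp, c_resp, prev_weight=0.5))
```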

Robust Visual Tracking using Search Area Estimation and Multi-channel Local Edge Pattern

  • Kim, Eun-Joon
    • 한국컴퓨터정보학회논문지 / Vol. 22, No. 7 / pp.47-54 / 2017
  • Recently, correlation filter based trackers have shown excellent tracking performance and computational efficiency. For a correlation filter based tracker to perform well, the search area, the image patch in which the target is sought, must contain the target. In this paper, two methods for representing the target discriminatively within the search area are proposed. First, the search area location is estimated using the pyramidal Lucas-Kanade algorithm; estimating it before filtering keeps fast-moving targets inside the search area. Second, we investigate a multi-channel Local Edge Pattern (LEP) that is insensitive to illumination and noise variation. Qualitative and quantitative experiments are performed on eight datasets with ground truth. Compared with the method without search area estimation, our approach retains tracking of fast-moving targets, and the proposed multi-channel LEP improves discriminative performance compared to existing features.
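A minimal sketch of the first step, shifting the search area with pyramidal Lucas-Kanade optical flow before the correlation filter is applied; the feature-point selection and window parameters are illustrative assumptions.

```python
import cv2
import numpy as np

def predict_search_area(prev_gray, curr_gray, prev_box):
    """Translate the search area by the median pyramidal Lucas-Kanade flow of points
    inside the previous target box, so a fast-moving target stays inside the patch
    handed to the correlation filter."""
    x, y, w, h = prev_box
    pts = cv2.goodFeaturesToTrack(prev_gray[y:y + h, x:x + w],
                                  maxCorners=50, qualityLevel=0.01, minDistance=3)
    if pts is None:
        return prev_box                                   # no trackable points: keep the old area
    pts = pts.reshape(-1, 2) + np.array([x, y], dtype=np.float32)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, pts.reshape(-1, 1, 2), None,
        winSize=(21, 21), maxLevel=3)                     # pyramidal LK
    good = status.reshape(-1) == 1
    if not good.any():
        return prev_box
    flow = new_pts.reshape(-1, 2)[good] - pts[good]
    dx, dy = np.median(flow, axis=0)                      # robust estimate of target motion
    return (int(x + dx), int(y + dy), w, h)               # translated search area
```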

Audio Fingerprint Extraction Method Using Multi-Level Quantization Scheme (다중 레벨 양자화 기법을 적용한 오디오 핑거프린트 추출 방법)

  • 송원식;박만수;김회린
    • 한국음향학회지 / Vol. 25, No. 4 / pp.151-158 / 2006
  • Based on the Philips music retrieval scheme, this paper proposes an audio fingerprint extraction method that uses filter-bank energy differences and the statistical characteristics of music. The original Philips approach divides a limited frequency range into too many filter-bank bands, which can increase the correlation between bands and the sensitivity to distortion. The proposed method reduces the number of filter-bank bands to improve robustness to distortion, and secures the distinctiveness of the audio fingerprint by assigning 2 bits to the sign and magnitude of each filter-bank energy difference using a quantization scheme that accounts for their statistical characteristics. Since the extracted 2 bits represent information with four levels, a relation exists between the levels. This inter-level relation is used not only in similarity measurement but also, in the proposed scheme that expands the search region around an audio fingerprint, as information for selecting an efficient search region. Experiments in various ambient noise environments (street, department store, car, office, restaurant) show that the proposed method is robust to ambient noise and also improves the search speed.
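A minimal sketch of the 2-bit (four-level) quantization of per-band filter-bank energy differences described above, under stated assumptions: the band count and thresholds are illustrative, whereas the paper derives the quantization boundaries from the statistics of the energy differences.

```python
import numpy as np

def fingerprint_frame(curr_energy, prev_energy, thresholds=(-0.5, 0.0, 0.5)):
    """Quantize the per-band filter-bank energy change of one frame into 2 bits
    (4 ordered levels); level adjacency can later be exploited when measuring
    similarity and when expanding the search region."""
    diff = curr_energy - prev_energy                     # energy change per band
    diff = diff / (np.std(diff) + 1e-8)                  # scale so fixed thresholds act statistically
    codes = np.digitize(diff, thresholds)                # 0..3 -> 2-bit level per band
    return codes.astype(np.uint8)

# Example: 16 filter-bank bands for two consecutive frames
prev = np.random.rand(16)
curr = np.random.rand(16)
print(fingerprint_frame(curr, prev))                     # array of 2-bit codes (0-3)
```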

A New CSR-DCF Tracking Algorithm based on Faster RCNN Detection Model and CSRT Tracker for Drone Data

  • Farhodov, Xurshid;Kwon, Oh-Heum;Moon, Kwang-Seok;Kwon, Oh-Jun;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • 한국멀티미디어학회논문지 / Vol. 22, No. 12 / pp.1415-1429 / 2019
  • Object tracking has become one of the most challenging tasks in the field of computer vision. The CSR-DCF (channel and spatial reliability discriminative correlation filter) tracking algorithm, evaluated on recent tracking benchmarks, achieves state-of-the-art performance by adding channel and spatial reliability concepts to DCF tracking and providing a novel learning algorithm for their efficient and seamless integration into the filter update and the tracking process, using only two simple standard features, HoG and Color Names. However, there are cases where this method cannot track properly, such as overlap, occlusion, motion blur, appearance change, and environmental variation. To overcome such complications, a modified version of the CSR-DCF algorithm is proposed that integrates deep-learning-based object detection with the CSRT tracker implemented in the OpenCV library. As the object detection model, Faster R-CNN (Region-based Convolutional Neural Network) is used, chosen for its high efficiency and speed among comparable detection methods, and it is combined with the CSRT tracker, demonstrating outstanding real-time detection and tracking performance. The results indicate that integrating the trained object detection model with the tracking algorithm gives better outcomes than using the tracking algorithm or the filter alone.
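A minimal sketch of the detection-plus-CSRT loop the abstract describes, under stated assumptions: a COCO-pretrained torchvision Faster R-CNN stands in for the authors' drone-trained detector (class filtering omitted), the score threshold and video path are illustrative, and cv2.TrackerCSRT_create is OpenCV's CSR-DCF implementation (it may live under cv2.legacy in some builds).

```python
import cv2
import torch
import torchvision

# Detector stand-in: pretrained Faster R-CNN from torchvision
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True).eval()

def detect_best_box(frame_bgr, score_thr=0.6):
    """Return the highest-scoring detection as (x, y, w, h), or None."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = detector([tensor])[0]
    if len(out["scores"]) == 0 or out["scores"][0] < score_thr:
        return None
    x1, y1, x2, y2 = out["boxes"][0].tolist()
    return (int(x1), int(y1), int(x2 - x1), int(y2 - y1))

cap = cv2.VideoCapture("drone.mp4")            # illustrative input path
ok, frame = cap.read()
box = detect_best_box(frame)                   # initialize the tracker from a detection
tracker = cv2.TrackerCSRT_create()
tracker.init(frame, box)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    ok, box = tracker.update(frame)            # CSRT (CSR-DCF) tracking step
    if not ok:                                 # tracking failure: fall back to detection
        new_box = detect_best_box(frame)
        if new_box is not None:
            tracker = cv2.TrackerCSRT_create()
            tracker.init(frame, new_box)
            box = new_box
```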