
Object Tracking Based on Weighted Local Sub-space Reconstruction Error

  • Zeng, Xianyou (Institute of Information Science, Beijing Jiaotong University) ;
  • Xu, Long (Key Laboratory of Solar Activity, National Astronomical Observatories, Chinese Academy of Sciences) ;
  • Hu, Shaohai (Institute of Information Science, Beijing Jiaotong University) ;
  • Zhao, Ruizhen (Institute of Information Science, Beijing Jiaotong University) ;
  • Feng, Wanli (Institute of Information Science, Beijing Jiaotong University)
  • Received : 2017.12.25
  • Accepted : 2018.04.04
  • Published : 2019.02.28

Abstract

Visual tracking is a challenging task that requires learning an effective model to handle changes of target appearance caused by factors such as pose variation, illumination change, occlusion and motion blur. In this paper, a novel tracking algorithm based on weighted local sub-space reconstruction error is presented. First, to account for the appearance changes during tracking, a generative weight calculation method based on structural reconstruction error is proposed. Furthermore, an occlusion-aware template update scheme is introduced, in which we reconstruct a new template instead of simply exploiting the best observation for the template update. The effectiveness and feasibility of the proposed algorithm are verified by comparing it with several state-of-the-art algorithms quantitatively and qualitatively.

Keywords

1. Introduction

With the rapid development of computer hardware, image processing technology and artificial intelligence, computer vision has been widely applied in many fields of human activity, such as information retrieval (e.g. [34-36]), intelligent classification (e.g. [37-38]), decision-making systems and so on. Visual tracking plays a critical role in computer vision due to its wide range of applications such as motion analysis, video surveillance, vehicle navigation, human-computer interaction, aeronautics and astronautics. Although significant progress has been made over the past decades, robust object tracking is still a challenging problem due to numerous factors such as partial occlusion, illumination variation, motion blur and pose change.

Existing object tracking methods can be classified into two categories: generative methods (e.g. [1-4]) and discriminative methods (e.g. [5-8], [29]). Discriminative methods cast tracking as a classification problem and distinguish the target from the background by modeling a conditional distribution. Generative tracking methods aim to learn a visual model representing the appearance of the target being tracked and perform tracking by searching for the image region that best matches the target object. It has been shown that generative models achieve higher generalization when training data is limited [17], while discriminative models perform better if the training set is large [18]. In addition, many hybrid tracking methods [16], [28] have been proposed to combine the advantages of both generative and discriminative models.

In generative tracking methods, the object appearance representation is very important and greatly affects the likelihood estimation. Many representation schemes have been proposed, such as template-based (see [1], [4], [9]), sub-space-based (see [2], [10-11]), sparse representation-based (see [3], [12-13], [31], [33]) and feature-based (see [5-7], [15]) models. Among these representation methods, sub-space representation models provide a compact description of the tracked object and facilitate other visual tasks. Ross et al. [2] proposed an incremental visual tracking (IVT) method which is robust to in-plane rotation, illumination variation, scale change and pose change. However, it has been shown that the IVT method is sensitive to partial occlusion.

To handle partial occlusion, quite a few attempts have been made. Adam et al. [1] proposed a fragment-based tracking approach, where the target region is partitioned into several fragments and partial occlusion is handled by combining the voting maps of these fragments. The authors of [19] extended the fragment idea and presented locality sensitive histograms to overcome multiple challenges, including illumination changes and partial occlusion, for robust tracking. In [20], the bag of words model was introduced into visual tracking to address partial occlusion. In [3] and [13], partial occlusion was modeled by sparse representation with trivial templates. The authors of [32] used regularized robust sparse coding (RRSC) to deal with occlusion and noise robustly.

In this paper, a new visual tracking algorithm based on weighted local sub-space reconstruction error is proposed. First, candidate targets are represented through the PCA sub-space. Second, patch-based generative weights are computed from the structural reconstruction error. Based on the patch-based representation error of the PCA sub-space and the patch-based generative weights, an effective tracking method based on particle filtering is developed and used to predict the tracked target. In addition, an occlusion-aware template update scheme is introduced, which can handle appearance changes caused by occlusion or other disturbances during tracking. The main contributions of this work are outlined below.

(1) A novel tracking algorithm based on weighted local sub-space reconstruction error is presented in this paper.

(2) A generative weight calculation method based on structural reconstruction error is proposed to deal with appearance changes in the tracking process.

(3) An occlusion-aware template update scheme is introduced, which reconstructs a new template for the template update and thus avoids bringing noise into the template set.

The rest of the paper is organized as follows. Related work is briefly reviewed in Section 2. The proposed tracking algorithm is described in detail in Section 3. Comparisons between the proposed tracking algorithm and some state-of-the-art tracking algorithms are presented in Section 4. Finally, concluding remarks are given in Section 5.

2. Related work

A large body of work has been devoted to visual tracking, and good reviews can be found in [21-22]. Here, we discuss the methods that are most related to our work, namely incremental sub-space learning based trackers and sparse representation based trackers.

2.1 Incremental sub-space learning based trackers

In recent years, visual tracking based on sub-space learning ([2], [10-11], [23]) has received considerable attention. The IVT method [2] incrementally learns and updates a low-dimensional PCA sub-space representation, which adapts online to the appearance changes of the target. Experimental results demonstrate that the IVT method is effective in dealing with appearance changes caused by in-plane rotation, scale and illumination variations. However, it has the following drawbacks. First, the IVT method assumes that the reconstruction error is Gaussian distributed with small variance. This assumption does not hold when partial occlusion occurs, which compromises tracking performance. Second, the IVT method does not have an effective update scheme. It directly updates the sub-space model with new observations without detecting and processing outliers. To handle partial occlusion, Lu et al. [10] introduced l1 noise regularization into the PCA reconstruction. Wang et al. [11] utilized linear regression with a Gaussian-Laplacian assumption to deal with outliers for reliable tracking. Pan et al. [23] employed the l0 norm to regularize the linear coefficients of incrementally updated linear bases to remove redundant features of the basis vectors. Zhou et al. [39] developed a tracking algorithm based on weighted sub-space reconstruction error, which combines the advantages of sparse representation and the sub-space learning model. Different from the aforementioned holistic models, a novel tracking method via weighted local sub-space reconstruction error is proposed in this paper.

2.2 Sparse representation based trackers

Sparse representation has been widely studied and applied to visual tracking. Mei and Ling [3] sparsely represented each candidate object in a space spanned by target templates and trivial templates to tackle occlusion and corruption. Liu et al. [24] incorporated group sparsity to boost the robustness and efficiency of the tracker. In [13], a faster version of [3] was proposed, which was further extended to a multi-task setting in [25]. The works in [26] and [27] combined sparse representation and incremental sub-space learning for object tracking by reconstructing a new template and exploiting it for the template update. Our method is motivated by the works in [10], [26], [27]. We use patch-based generative weights to adjust the patch-based reconstruction error of the PCA sub-space model. To keep image noise out of the template set, we introduce an occlusion-aware template update scheme for object tracking.

2.3 Deep networks based trackers

Recently, deep neural networks have been introduced into tracking for their powerful feature learning capability. In [40], a neural network with three convolution layers was proposed for visual tracking, which learned the feature representation and the classifier simultaneously. In [41] and [30], a convolutional neural network (CNN) was first pre-trained on an image classification dataset and then transferred to visual tracking. In [42] and [43], the authors directly trained their CNNs on large amounts of video sequences.

3. Proposed visual tracking algorithm

Object tracking can be considered as a Bayesian filtering process. Let the target state be \(x_{t}=\left\{l_{x}, l_{y}, \mu_{1}, \mu_{2}, \mu_{3}, \mu_{4}\right\}\), where \(l_{x}, l_{y}, \mu_{1}, \mu_{2}, \mu_{3}, \mu_{4}\) denote the horizontal and vertical translations, rotation angle, scale, aspect ratio and skew parameter, respectively. Given the observation set \(Y_{t}=\left\{y_{1}, y_{2}, \dots, y_{t}\right\}\) up to frame \(t\), we estimate the state of the object \(x_t\) recursively by

\(p\left(x_{t} | Y_{t}\right) \propto p\left(y_{t} | x_{t}\right) \int p\left(x_{t} | x_{t-1}\right) p\left(x_{t-1} | Y_{t-1}\right) d x_{t-1}\)       (1)

where \(p(x_t|x_{t-1})\) is the motion model that represents the state transition of the object between two consecutive frames, and \(p(y_t|x_t)\) denotes the observation model that estimates the likelihood of the observation \(y_t\) at state \(x_t\). Particle filtering is an effective implementation of Bayesian filtering. The optimal state is computed by maximum a posteriori (MAP) estimation over \(N\) samples,

\(\hat{x}_{t}=\arg \max _{x_{t}^{i}} p\left(y_{t} | x_{t}^{i}\right) p\left(x_{t}^{i} | \hat{x}_{t-1}\right)\)       (2)

where \(x_t^i\) is the \(i\)-th sample at frame \(t\). The observation model \(p(y_t|x_t^i)\) in (2) is crucial for robust tracking. In this paper, the observation model is estimated through a weighted local PCA sub-space model. Fig. 1 shows the observation model of our method, which is explained in detail below.

 

Fig. 1. The weighted local PCA sub-space observation model.
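To make the particle filtering formulation of Eqs. (1)-(2) concrete, the following is a minimal sketch of the MAP step, assuming a Gaussian motion model and a user-supplied observation likelihood (the weighted local PCA model of this section). All names and the default values are illustrative, not the authors' MATLAB implementation.

```python
# Sketch of the particle-filter MAP step of Eqs. (1)-(2): propagate N affine
# states around the previous estimate, score each with p(y_t | x_t^i), and
# keep the particle with the largest likelihood.
import numpy as np

def propagate(particles, sigmas, rng):
    """Draw new states x_t^i ~ p(x_t | x_{t-1}) with a Gaussian motion model."""
    return particles + rng.normal(scale=sigmas, size=particles.shape)

def map_estimate(prev_state, observation_likelihood, frame,
                 n_particles=600, sigmas=(6, 6, 0.01, 0, 0.005, 0), seed=0):
    rng = np.random.default_rng(seed)
    # Each particle is a 6-dim affine state {lx, ly, rotation, scale, aspect, skew}.
    particles = propagate(np.tile(prev_state, (n_particles, 1)),
                          np.asarray(sigmas, dtype=float), rng)
    # Evaluate p(y_t | x_t^i) for every particle and return the MAP particle.
    scores = np.array([observation_likelihood(frame, x) for x in particles])
    return particles[np.argmax(scores)]
```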

3.1 Motivation of this work

We assume that target appearance can be represented by an image sub-space with corruption,

\(y=U z+e=\left[U I_{q}\right]\left[\begin{array}{l} {z} \\ {e} \end{array}\right]\)       (3)

where \(y \in R^{q\times 1}\) denotes an observation vector, \(U\) represents a matrix of column basis vectors, \(I_q \in R^{q\times q}\) is an identity matrix, \(z\) is the coefficient vector of the basis vectors, and \(e\) indicates the error term, modeled as Laplacian noise. The coefficient vector \(z\) and the error term \(e\) can be computed by

\([z, e]=\arg \min _{z, e} \frac{1}{2}\|\bar{y}-U z-e\|_{2}^{2}+\lambda\|e\|_{1}\)       (4)

where \(\bar{y}= y-\mu\) and \(\mu\) is the mean vector. After obtaining \(z\), the observation likelihood of the observation \(y\) can be measured by the reconstruction error

\(\begin{aligned} p(y | x) &=\exp \left(-\|\bar{y}-U z\|_{2}^{2}\right) \\ &=\exp \left(-\left\|E_{P C A}\right\|_{2}^{2}\right) \end{aligned}\)     (5)

where \(E_{PCA}\) is the reconstruction error. Eq. (5) is a holistic estimation method. It is usually sensitive to partial occlusion.
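A minimal sketch of one way to solve Eq. (4) follows. Assuming \(U\) has orthonormal columns, \(z\) has a closed-form least-squares update given \(e\), and \(e\) is obtained by soft thresholding, in the spirit of the sparse-prototype solver of [10]; the iteration count and names are illustrative.

```python
# Alternating solver for Eq. (4): z given e is a least-squares projection,
# e given z is an l1 proximal (soft-thresholding) step.
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def solve_z_e(y_bar, U, lam=0.1, n_iter=10):
    e = np.zeros_like(y_bar, dtype=float)
    for _ in range(n_iter):
        z = U.T @ (y_bar - e)                    # coefficients given the current e
        e = soft_threshold(y_bar - U @ z, lam)   # l1-regularized error term
    E_pca = y_bar - U @ z                        # reconstruction error of Eq. (5)
    return z, e, E_pca
```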

Inspired by local models, we reorganize the reconstruction error \(E_{PCA}\) as the concatenation of \(M\) local feature vectors, \(E_{PCA}=[t_1^T,t_2^T,\ldots,t_M^T]^T\), where \(t_i\in R^{l\times 1}\) is a column vector denoting the \(i\)-th local patch of the reconstruction error, and \(M=q/l\). Then, Eq. (5) can be reformulated as

\(\begin{array}{c}p(y | x)=\exp \left(-\left\|E_{P C A}\right\|_{2}^{2}\right)\\=\exp \left[-\left(\left\|t_{1}\right\|_{2}^{2}+\left\|t_{2}\right\|_{2}^{2}+\ldots+\left\|t_{M}\right\|_{2}^{2}\right)\right]\\=\exp \left[-\left(1 \cdot\left\|t_{1}\right\|_{2}^{2}+1 \cdot\left\|t_{2}\right\|_{2}^{2}+\ldots+1 \cdot\left\|t_{M}\right\|_{2}^{2}\right)\right]\end{array}\)        (6)

From (6), we can see that the penalty weight of each part \(t_i\) is 1, which means that the holistic model treats every part of the observation equally, regardless of its condition during tracking. This uniform treatment is not appropriate when the observation is subjected to impulse-like noise, such as partial occlusion and local illumination variations. Based on the above discussion, we aim to learn a set of generative weights via sparse coding of each local patch of the observation, so that each part of the reconstruction error \(E_{PCA}\) is penalized differently.
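As a small illustration of the patch-wise view in Eq. (6), the sketch below splits \(E_{PCA}\) into \(M\) contiguous blocks and sums their squared norms with unit weights; the contiguous stacking order is an assumption made for illustration.

```python
# Patch-wise form of the holistic likelihood in Eq. (6): every patch error
# t_i contributes ||t_i||_2^2 with the same weight 1.
import numpy as np

def holistic_likelihood(E_pca, M):
    t = E_pca.reshape(M, -1)                 # rows are t_1, ..., t_M
    per_patch = np.sum(t ** 2, axis=1)       # ||t_i||_2^2 for each patch
    return np.exp(-np.sum(1.0 * per_patch))  # unit weight per patch
```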

3.2 Weight learning by structural reconstruction error

1) Preprocessing: Each input image is resized to a standard size of 32 × 32 pixels and represented by gray-scale features. We employ a sliding window to sample a bank of non-overlapping local image patches \(X=[x_1,x_2,\ldots,x_M] \in R^{l \times M}\) from the input image, where \(x_i\) is the \(i\)-th vectorized local patch (a column of \(X\)), \(l\) is the dimension of the patch vectors and \(M\) is the number of local patches. Each patch \(x_i\) is preprocessed by \(l_2\) normalization.

2) Templates: Initially, we use the CT algorithm [7] to track the first n frames. Tracking results are used to form the templates \(T=\left[T_{1}, T_{2}, \ldots, T_{n}\right] \) . Each template is split into local image patches. Then a dictionary \(D=\left[d_{1}, d_{2}, \ldots, d_{M \times n}\right] \in R^{l \times(M \times n)}\) can be obtained for encoding local patches of each candidate target. Each element \(d_i\) is a normalized column vector which corresponds to a local patch cropped from \(T\) .

3) Weight learning: Given a candidate target, each local image patch \(x_i\) of it can be encoded using the elements of the dictionary \(D\) by solving

\(\min _{\partial_{i}}\left\|x_{i}-D \partial_{i}\right\|_{2}^{2}+\lambda_{1}\left\|\partial_{i}\right\|_{1}\)       (7)

where \(\partial_{i} \in R^{(M \times n) \times 1}\) is the sparse code of \(x_i\) and \(\lambda_1\) is a control parameter. In order to take the spatial layout into account, the dictionary \(D\) can be written as

\(\begin{aligned} D &=\left[d_{1}, \ldots, d_{M}, d_{M+1}, \ldots, d_{2 M}, \ldots, d_{(n-1) M+1}, \ldots, d_{n M}\right] \\ &=\left[D_{i}, D_{o t h e r}\right] \end{aligned}\)       (8)

where \(D_{i}=\left[d_{i}, d_{M+i}, \ldots, d_{(n-1) M+i}\right] \in R^{l \times n}\), \(1 \leq i \leq M\), and \(D_{other}\) is made up of the other elements of \(D\). Accordingly, the sparse code \(\partial_i\) can be written as \(\partial_i=[\beta_i^T,\beta_{other}^T]^T\), where \(\beta_{i}=\left[\partial_{i}^{i}, \partial_{i}^{M+i}, \ldots, \partial_{i}^{(n-1) M+i}\right]^{T} \in R^{n \times 1}\) contains the sparse coefficients of the patch \(x_i\) under the sub-dictionary \(D_i\), and \(\beta_{other}\) contains the sparse coefficients of the patch \(x_i\) under the sub-dictionary \(D_{other}\).

The weight of \(x_i\) can be obtained by

\(w_{i}=\left\|x_{i}-D\left(\omega_{i} \otimes \partial_{i}\right)\right\|_{2}^{2}+\gamma\left\|D\left(\left(1-\omega_{i}\right) \otimes \partial_{i}\right)\right\|_{1}\)       (9)

where \(\omega_{i}=\left[\omega_{i}^{1}, \omega_{i}^{2}, \ldots, \omega_{i}^{(M \times n)}\right]^{T}\) is an indicator vector, \(\otimes\) denotes element-wise multiplication, and \(\gamma\) is a control parameter. Each element of \(\omega_i\) is obtained by

\(\omega_{i}^{j}=\left\{\begin{array}{l} {1, j=i, M+i, \ldots,(n-1) M+i} \\ {0, \text { others }} \end{array}\right.\)       (10)

The flow of the weight calculation is shown in Fig. 2. In (9), the first term is the reconstruction error of \(x_i\) under the sub-dictionary \(D_i\), and the second term, which acts as a penalty, is the sparse reconstruction of \(x_i\) under the sub-dictionary \(D_{other}\). If the candidate target matches the templates well, both terms on the right side of (9) are small; otherwise, they become large. In this way, we can learn a set of different weights for the local patches of the candidate target, normalized such that \(\sum_{i=1}^{M} w_{i}=1\). The main advantage is that the structural similarity between the candidate target and the templates is fully taken into account. Then, the observation likelihood of the candidate target can be measured by

\(p=\exp \left[-\left(\sum_{i=1}^{M} w_{i} \cdot\left\|t_{i}\right\|_{2}^{2}+\tau\|e\|_{0}\right)\right]\)       (11)

where \(\|e\|_{0}\) denotes the number of outliers of the candidate target and \(\tau\) is a constant. After the observation likelihoods of all candidate targets are obtained, the candidate with the largest likelihood is taken as the tracked target.
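The sketch below puts Eqs. (7)-(11) together: each patch is sparsely coded over \(D\), weighted by its structural reconstruction error, and the weights rescale the patch-wise PCA reconstruction error in the likelihood. The ISTA solver, 0-based indexing, patch ordering and the explicit normalization of the weights are our assumptions, not a transcription of the authors' implementation; the default parameter values follow Section 4.1.

```python
# Weight learning (Eqs. 7-10) and weighted observation likelihood (Eq. 11).
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_code(x_i, D, lam1=0.01, n_iter=100):
    """Eq. (7): min_a ||x_i - D a||_2^2 + lam1 * ||a||_1, solved with ISTA."""
    L = 2.0 * np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * D.T @ (D @ a - x_i)
        a = soft_threshold(a - grad / L, lam1 / L)
    return a

def patch_weights(X, D, M, n, gamma=0.01):
    """Eqs. (8)-(10): weight each of the M patches of a candidate target."""
    w = np.zeros(M)
    for i in range(M):                             # i is 0-based here
        a = sparse_code(X[:, i], D)
        omega = np.zeros(D.shape[1])
        omega[[k * M + i for k in range(n)]] = 1.0 # indicator of sub-dictionary D_i
        recon = np.linalg.norm(X[:, i] - D @ (omega * a)) ** 2
        penalty = gamma * np.linalg.norm(D @ ((1.0 - omega) * a), ord=1)
        w[i] = recon + penalty                     # Eq. (9)
    return w / w.sum()                             # normalized so that sum_i w_i = 1

def weighted_likelihood(E_pca, e, w, tau=0.05):
    """Eq. (11): weighted patch errors plus an l0 outlier count."""
    t = E_pca.reshape(len(w), -1)                  # rows t_1, ..., t_M
    return np.exp(-(np.sum(w * np.sum(t ** 2, axis=1)) + tau * np.count_nonzero(e)))
```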

 

Fig. 2. Overall diagram of weight calculation.

3.3 Model updating

To adapt to the appearance change of a target object, the observation model needs to be updated dynamically. The model updating includes the updating of PCA sub-space and templates.

1) PCA sub-space updating: Since the error term \(e\) can identify some outliers (e.g. partial occlusion, illumination change), we adopt the strategy proposed in [11] to update the PCA sub-space (including the PCA basis \(U\) and the mean vector \(\mu\)). After obtaining the tracked target \(y_o\) of each frame, it is reconstructed by

\(y_{r}^{i}=\left\{\begin{array}{l} {y_{o}^{i}, e_{o}^{i}=0} \\ {\mu_{o}^{i}, e_{o}^{i} \neq 0} \end{array}\right.\)       (12)

where \(y_r\) is the reconstructed vector of the tracked target in each frame and \(e_o\) is the error term corresponding to the tracked target \(y_o\). \(y_r\) is accumulated and used to incrementally update \(U\) and \(\mu\); a sketch of both updating steps is given at the end of this subsection.

2) Occlusion-aware template updating: In this study, we give each template \(T_i\) a weight \(a_i\) which has an initial value of 1. After obtaining the reconstructed vector of the tracked target in each frame, we update the value of the weight \(a_i\) as follows.

\(a_{i}=a_{i} e^{-\theta_{i}}\)       (13)

where \(\theta_i\) is the angle between \(T_i\) and \(y_r\). In [26] and [27], sparse representation and incremental sub-space learning are used to reconstruct a new template for the template update, which avoids introducing noise into the template set \(T\). Inspired by [26] and [27], we propose an effective template update method consisting of two operations: template replacement and weight updating. For template replacement, we first obtain the coefficient vector \(z\) of the tracked target in each frame and then reconstruct a new template through

\(T^{*}=U z+\mu\)       (14)

\(T^*\) replaces the template with the smallest weight. During weight updating, the median weight of the remaining \(n-1\) templates is used as the weight of \(T^*\). Algorithm 1 summarizes our method, and its process is shown in Fig. 3.
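The following is a minimal sketch of the two updating steps just described: Eq. (12) is applied before the incremental PCA update (the incremental SVD itself, as in [2] and [11], is left abstract), and Eqs. (13)-(14) update the template set. The column-wise template layout and all names are illustrative assumptions.

```python
# Occlusion-aware model updating of Section 3.3.
import numpy as np

def reconstruct_for_update(y_o, e_o, mu):
    """Eq. (12): keep observed pixels where e_o = 0, fall back to mu elsewhere."""
    y_r = y_o.copy()
    y_r[e_o != 0] = mu[e_o != 0]
    return y_r

def update_templates(T, a, U, z, mu, y_r):
    """Eqs. (13)-(14): weight decay, template replacement and weight assignment.
    T holds the n templates as columns, a holds their weights."""
    for i in range(T.shape[1]):
        cos = T[:, i] @ y_r / (np.linalg.norm(T[:, i]) * np.linalg.norm(y_r))
        a[i] *= np.exp(-np.arccos(np.clip(cos, -1.0, 1.0)))   # Eq. (13)
    T_star = U @ z + mu                                        # Eq. (14)
    k = int(np.argmin(a))
    T[:, k] = T_star                     # replace the template with the smallest weight
    a[k] = np.median(np.delete(a, k))    # median weight of the remaining n-1 templates
    return T, a
```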

Algorithm 1. Our proposed tracker

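The original pseudocode image of Algorithm 1 is not reproduced here. As a substitute, the sketch below reconstructs the overall tracking loop from Sections 3.1-3.3 (particle sampling, weighted local likelihood scoring, MAP selection and periodic model updating); it is our reading of the text, not the authors' Algorithm 1, and the model-specific steps are passed in as callables so the loop stays generic.

```python
# Hedged reconstruction of the overall tracking loop described in Section 3.
import numpy as np

def track_sequence(frames, x0, likelihood, update_model, params,
                   rng=np.random.default_rng(0)):
    """likelihood(frame, x) -> (score, aux); update_model(frame, x, aux) -> None."""
    state = np.asarray(x0, dtype=float)
    sigmas = np.asarray(params["affine_sigmas"], dtype=float)
    history = []
    for t, frame in enumerate(frames, start=1):
        # 1. Sample N candidate affine states around the previous estimate.
        particles = state + rng.normal(scale=sigmas,
                                       size=(params["n_particles"], state.size))
        # 2. Score candidates with the weighted local PCA likelihood (Eq. 11).
        scores, aux = zip(*(likelihood(frame, x) for x in particles))
        best = int(np.argmax(scores))
        state = particles[best]
        history.append(state)
        # 3. Every few frames, update the sub-space and templates (Section 3.3).
        if t % params["update_interval"] == 0:
            update_model(frame, state, aux[best])
    return history
```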

 

Fig. 3. Overview diagram of the proposed tracking approach.

4. Experimental results

4.1 Implementation details

The proposed algorithm is implemented in MATLAB and runs at 1.1 frames per second on a 3.4 GHz Intel i7-4770 PC with 16 GB memory. The number of templates \(n\) is 10. For all experiments, the number of patches \(M\) is 16. The parameters \(\lambda\) in (4), \(\lambda_1\) in (7), \(\gamma\) in (9) and \(\tau\) in (11) are set to 0.1, 0.01, 0.01 and 0.05, respectively. \(\left\{l_{x}, l_{y}, \mu_{1}, \mu_{2}, \mu_{3}, \mu_{4}\right\}\) is fixed to \(\{6,6,0.01,0,0.005,0\}\). The maximum number of PCA basis vectors is set to 16. The number of particles \(N\) is set to 600 to balance effectiveness and speed. The proposed observation model is updated every 5 frames.
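For reference, the parameter settings listed above can be collected in a single configuration; the values are quoted from the text, while the dictionary itself and its key names are only illustrative.

```python
# Parameter settings of Section 4.1 gathered in one place (illustrative only).
PARAMS = {
    "n_templates": 10,          # n
    "n_patches": 16,            # M
    "lambda": 0.1,              # Eq. (4)
    "lambda_1": 0.01,           # Eq. (7)
    "gamma": 0.01,              # Eq. (9)
    "tau": 0.05,                # Eq. (11)
    "affine_sigmas": (6, 6, 0.01, 0, 0.005, 0),
    "n_pca_basis": 16,
    "n_particles": 600,
    "update_interval": 5,       # frames
}
```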

4.2 Quantitative evaluation

In order to evaluate the effectiveness and feasibility of the proposed tracker (WLSRE), experiments are carried out on 26 publicly available sequences [22] which contain different challenging situations (e.g. partial occlusion, illumination variation, etc.). Our WLSRE tracker is compared with seven state-of-the-art trackers: IVT [2], LSST [11], SPT [10], ASLA [26], CT [7], KCF [15], and TGPR [14].

Two metrics are used to compare the proposed algorithm with other state-of-the-art methods. The first metric is the center location error, which measures the distance between the center of the tracking bounding box and the center of the ground truth bounding box. The second is the overlap rate, defined as \(score=\frac{area(R_T\cap R_G)}{area (R_T\cup R_G)}\), where \(R_T\) and \(R_G\) denote the tracking bounding box and the ground truth bounding box, respectively. The average center location errors are reported in Table 1, where smaller center errors mean more accurate tracking results. The average overlap rates are listed in Table 2, where the larger the value, the more accurate the tracking result. It can be concluded from these tables that the proposed tracker is effective and feasible.
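For clarity, the two metrics can be computed as follows for axis-aligned boxes given as \((x, y, w, h)\); this is the standard computation and not code from the paper.

```python
# Center location error and overlap rate between tracked box R_T and ground truth R_G.
import numpy as np

def center_error(box_t, box_g):
    ct = np.array([box_t[0] + box_t[2] / 2, box_t[1] + box_t[3] / 2])
    cg = np.array([box_g[0] + box_g[2] / 2, box_g[1] + box_g[3] / 2])
    return np.linalg.norm(ct - cg)

def overlap_rate(box_t, box_g):
    x1 = max(box_t[0], box_g[0]); y1 = max(box_t[1], box_g[1])
    x2 = min(box_t[0] + box_t[2], box_g[0] + box_g[2])
    y2 = min(box_t[1] + box_t[3], box_g[1] + box_g[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = box_t[2] * box_t[3] + box_g[2] * box_g[3] - inter
    return inter / union if union > 0 else 0.0
```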

Table 1. Average center location error (in pixels). Top three results are shown in color fonts.

 

Table 2. Average overlap rate. Top three results are shown in color fonts.

 

4.3 Qualitative evaluation

We choose some tracking results from the test sequences for qualitative evaluation. The results are shown in Figs. 4-9, which exhibit the feasibility and effectiveness of the proposed method.

Heavy occlusion: We test several sequences (Faceocc1, Faceocc2, Jogging2, David3, Walking2) with heavy or long-time partial occlusion. In the Faceocc1 sequence, a woman frequently uses a book to occlude her face. With the exception of ASLA and SPT, the remaining six trackers perform well. For the Faceocc2 sequence, IVT, KCF, TGPR and our WLSRE tracker can successfully track the target. In the Jogging2 sequence, the target meets with heavy occlusion. LSST, KCF, CT, ASLA, IVT and SPT are unable to recapture the target and suffer significant deviation when the person passes through the obstacle and reappears (see #62, #150 and #307). In contrast, TGPR and our WLSRE tracker precisely track the target in this sequence. In the David3 sequence, TGPR, KCF, SPT and our tracker successfully deal with heavy occlusion and perform well in this sequence (seen from #83, #150 and #220). In the Walking2 sequence, the walking woman is occluded by a man for a long period of time. Only IVT, TGPR and our WLSRE tracker successfully complete the tracking task, which can be seen from #100, #240, #400 and #500. The robustness of our WLSRE tracker against occlusion can be attributed to two reasons: (1) patch-based weights impose larger penalties on occluded parts and reduce the influence of occlusion; (2) occlusion-aware template update scheme effectively prevents noise from entering the template set.

 

Fig. 4. Screenshots of the tracking results on 5 sequences with occlusion.

Illumination variations: Fig. 5 provides the results of some sequences with illumination changes. In the Cardark sequence, CT cannot perform well (seen from #100, #250, #300 and #393). SPT loses the tracked object after 200 frames (seen from #250 and #300). IVT drifts away from the correct location of the target at the end (see #393). KCF, TGPR, ASLA, LSST and our WLSRE tracker successfully capture the target trajectory in all frames. In the Fish sequence, the illumination changes obviously. All the methods except SPT robustly overcome this difficulty and achieve accurate tracking. For the Mhyang sequence, IVT and the proposed WLSRE method are superior to the other methods and obtain better tracking results. In the David sequence, LSST cannot follow the tracked object correctly during tracking (seen from #409 and #499). CT and SPT exhibit a small deviation in some frames, which can be seen from #499 and #749 respectively. TGPR, KCF, ASLA, IVT and our WLSRE tracker successfully track the target throughout this sequence, and ASLA achieves the best performance in terms of both location and scale. In the Car4 sequence, SPT and CT drift off the target when there is a large illumination variation at frames #200 and #240. Moreover, the target undergoes scale variation. While TGPR and KCF can successfully estimate the location of the target, they do not handle its scale changes well (seen from #400 and #500). Due to the use of the incremental PCA sub-space, IVT, LSST, ASLA and the proposed algorithm achieve good performance in dealing with the appearance changes caused by illumination and scale changes.

 

Fig. 5. The comparison of qualitative results on 5 sequences with illumination changes.

Scale variations: Fig. 6 shows some of the results on four sequences containing scale variations. The target in the Doll sequence experiences long-term scale change and rotation. SPT drifts away and finally loses the target (seen from #1000, #2250 and #3500). CT and IVT fail to precisely locate the target at the end (see #3500). Apart from these three methods, the other five methods perform well. For the Dog1 and Walking sequences, LSST, IVT, ASLA and our WLSRE tracker are superior to the others. The Singer1 sequence is very difficult due to large changes in illumination and scale. SPT runs poorly (see #200, #250 and #351). IVT slightly deviates from the target location (see #250 and #351). TGPR is incapable of tracking the target properly when drastic illumination changes occur (see #100, #200 and #351). As can be seen from frames #250 and #351, KCF and CT cannot deal with scale change well. ASLA, LSST and our WLSRE tracker robustly overcome the challenges caused by the changes in illumination and scale, and accurately locate the target over the whole sequence.

 

Fig. 6. The representative results when the tracked targets experience scale variation.

Background clutters: Fig. 7 gives some representative tracking results on the Football, Singer2, Basketball and Dudek sequences, where the targets are disturbed by background clutter. The target in the Football sequence not only has a very similar appearance to the background, but is also affected by occlusion and rotation. SPT cannot track the target accurately (seen from #100, #140 and #200). LSST drifts to the background (e.g. #200). CT, IVT, ASLA, KCF, TGPR and our WLSRE method can successfully track most frames. For the Singer2 sequence, the target being tracked goes through numerous challenges including background clutter, illumination variations, deformation and rotation. SPT, IVT, ASLA and CT fail to track when the target rotates (e.g. #89). Instead, KCF, TGPR, LSST and our WLSRE method overcome these challenges and accurately keep track of the target in this sequence (seen from #89, #200 and #300). In the Basketball sequence, TGPR, KCF, SPT and our WLSRE tracker persistently track the target, while the other methods fail. The Dudek sequence involves a number of challenges, including background clutter, occlusion and pose change. SPT fails in many frames (seen from #600, #800 and #1070). Except for SPT, the other methods can stably track the target, among which LSST and our WLSRE method perform best.

 

Fig. 7. The results of all evaluated trackers on 4 sequences with background clutters.

Fast motion: Fast motion of the target object leads to blurred image appearance, which is difficult to handle in tracking tasks. Fig. 8 illustrates the tracking results on the Fleetface, Boy, Jumping and Lemming sequences. For the Fleetface sequence, most trackers except SPT can successfully track most of the frames. In the Boy sequence, the target suffers from fast motion, motion blur, as well as rotation. SPT, ASLA, IVT and LSST lose track of the target when motion blur occurs, whereas KCF, TGPR and our WLSRE method perform favorably (see #375 and #570). In the Jumping sequence, the target moves so drastically that it is difficult to predict its location. Only the proposed tracker successfully tracks the target in the entire sequence (seen from #34, #175 and #313). The Lemming sequence is very challenging for visual tracking as the target meets multiple challenges of fast motion, heavy occlusion, together with out-of-plane rotation. We can note that CT and our method perform better than the other methods (e.g. #400, #850 and #1336).

 

Fig. 8. Qualitative evaluation of different tracking algorithms on 4 sequences with fast motion.

Rotation: Fig. 9 presents a few results on four sequences with the rotation challenge. In the David2 sequence, KCF, TGPR, IVT, ASLA and our WLSRE method perform well and achieve outstanding performance. In the Freeman1 sequence, the face of a man experiences large scale changes and rotation. Only IVT, TGPR and our WLSRE tracker can track the target in most frames, and our WLSRE method achieves the best overlap rate. The Freeman3 sequence includes scale variation, in-plane and out-of-plane rotations, which makes this tracking task difficult. We can see that only LSST, ASLA and our WLSRE tracker can overcome these difficulties and accurately track the target throughout the frames, which is verified on frames #146, #278, #350 and #460. There are in-plane rotation and out-of-plane rotation in the Football1 sequence, accompanied by background clutter. SPT performs unsteadily and shakes around the target position (seen from #14, #40 and #74). CT does not track well from the beginning (see frame #14). LSST is unable to lock onto the tracked object in the latter half of the sequence (see #40 and #74). IVT, KCF and TGPR drift away to the background region at the end (see #74). Our WLSRE method stably tracks the target till the end.

 

Fig. 9. Sample results of all compared trackers on several sequences with rotation.

4.4 Evaluation on OTB-50

In order to make the experiments more convincing, we also run the proposed method on the object tracking benchmark (OTB) [22]. The trackers compared with our method include KCF [15], TGPR [14], VTD [4], DLT [30], ASLA [26], IVT [2], LSST [11], SPT [10], CT [7], FCNT [41], and boostingtrack [27]. Precision and success plots are used to evaluate the performance of all compared trackers. Fig. 10 reports the performance (in terms of precision plot, success plot, precision score and success score) of the 12 trackers on 50 videos. We can observe that our method obtains more satisfactory and more promising results than holistic models such as LSST, SPT and IVT.

 

Fig. 10. Performance evaluation of the 12 trackers on OTB-50.

5. Conclusions

This paper presents a novel tracking algorithm based on weighted local sub-space reconstruction error. In this work, we explicitly take partial occlusion and other interference factors into account by learning a set of weights for local patches of PCA sub-space reconstruction error. Under a generative model, the weights are calculated through the structural errors and reflect the spatial similarity between the candidate targets and the templates. At the same time, an occlusion-aware template update method is introduced to enhance the performance of the tracker. Extensive evaluation demonstrates the effectiveness and feasibility of the proposed algorithm. Our future work will focus on integrating effective detection modules for persistent tracking. Moreover, a particle selection mechanism will be introduced to accelerate our tracker.

References

  1. A. Adam, E. Rivlin and I. Shimshoni, "Robust fragments-based tracking using the integral histogram," in Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 798-805, June 17-22, 2006.
  2. D. A. Ross, J. Lim, R. S. Lin and M. H. Yang, "Incremental learning for robust visual tracking," International Journal of Computer Vision, vol. 77, no. 1-3, pp. 125-141, May, 2008. https://doi.org/10.1007/s11263-007-0075-7
  3. X. Mei and H. Ling, "Robust visual tracking using l1 minimization," in Proc. of International Conference on Computer Vision, pp. 1436-1443, September 29-October 2, 2009.
  4. J. Kwon and K. M. Lee, "Visual tracking decomposition," in Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1269-1276, June 13-18, 2010.
  5. B. Babenko, M. H. Yang and S. Belongie, "Robust Object Tracking with Online Multiple Instance Learning," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33 no. 8, pp. 1619-1632, August, 2011. https://doi.org/10.1109/TPAMI.2010.226
  6. S. Hare, S. Golodetz and A. Saffari, "Struck: Structured output tracking with kernels," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38 no. 10, pp. 2096-2109, October, 2016. https://doi.org/10.1109/TPAMI.2015.2509974
  7. K. Zhang, L. Zhang and M. H. Yang, "Fast Compressive Tracking," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 10, pp. 2002-2015, October, 2014. https://doi.org/10.1109/TPAMI.2014.2315808
  8. F. Yang, H. Lu and M. H. Yang, "Robust superpixel tracking," IEEE Transactions on Image Processing, vol. 23, no. 4, pp. 1639-1651, April, 2014. https://doi.org/10.1109/TIP.2014.2300823
  9. Q. Wang, F. Chen and W. Xu, "Tracking by third-order tensor representation," IEEE Transactions on Systems Man and Cybernetics, vol. 41, no. 2, pp. 385-396, April, 2011.
  10. D. Wang, H. Lu and M. H. Yang, "Online object tracking with sparse prototypes," IEEE Transactions on Image Processing, vol. 22, no. 1, pp. 314-325, January, 2013. https://doi.org/10.1109/TIP.2012.2202677
  11. D. Wang, H. Lu and M. H. Yang, "Robust Visual Tracking via Least Soft-threshold Squares," IEEE Transactions on Circuits and Systems for Video Technology, vol. 26, no. 9, pp. 1709-1721, September, 2016. https://doi.org/10.1109/TCSVT.2015.2462012
  12. B. Liu, J. Huang, L. Yang and C. Kulikowsk, "Robust tracking using local sparse appearance model and k-selection," in Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1313-1320. June 20-25, 2011.
  13. C. Bao, Y. Wu, H. Ling and H. Ji, "Real time robust l1 tracker using accelerated proximal gradient approach," in Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1830-1837, June 16-21, 2012.
  14. J. Gao, H. Ling, W. Hu and J. Xing, "Transfer learning based visual tracking with gaussian processes regression," in Proc. of European Conference on Computer Vision, pp. 188-203, September 6-12, 2014.
  15. J. F. Henriques, R. Caseiro, P. Martins and J. Batista. "High-speed tracking with kernelized correlation filters," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 3, pp. 583-596, March, 2015. https://doi.org/10.1109/TPAMI.2014.2345390
  16. W. Zhong, H. Lu and M. H. Yang, "Robust object tracking via sparse collaborative appearance model," IEEE Transactions on Image Processing, vol. 23, no. 5, pp. 2356-2368, May, 2014. https://doi.org/10.1109/TIP.2014.2313227
  17. J. Xue and D. M. Titterington, "Comment on "On Discriminative vs. Generative Classifiers: A Comparison of Logistic Regression and Naive Bayes"," Neural Processing Letters, vol. 28, no. 3, pp. 169-187, October, 2008. https://doi.org/10.1007/s11063-008-9088-7
  18. J. A. Lasserre, C. M. Bishop and T. P. Minka, "Principled hybrids of generative and discriminative models," in Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 87-94, June 17-22, 2006.
  19. S. He, Q. Yang, R. W. Lau, J. Wang and M. H. Yang, "Visual tracking via locality sensitive histograms," in Proc. of International Conference on Computer Vision, pp. 2427-2434, June 23-28, 2013.
  20. F. Yang, H. Lu, W. Zhang and G. Yang, "Visual tracking via bag of features," IET Image Processing, vol. 6, no. 2, pp. 115-128, March, 2012. https://doi.org/10.1049/iet-ipr.2010.0127
  21. A. W. Smeulders, D. M. Chu, R. Cucchiara, S. Calderara and A. Dehghan, "Visual Tracking: An experimental survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 7, pp. 1442-1468, July, 2014. https://doi.org/10.1109/TPAMI.2013.230
  22. Y. Wu, J. Lim and M. H. Yang, "Online Object Tracking: A Benchmark," in Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 2411-2418, June 23-28, 2013.
  23. J. Pan, J. Lim, Z. Su and M. H. Yang, "L0-Regularized Object Representation for Visual Tracking," in Proc. of British Machine Vision Conference, September 1-5, 2014.
  24. B. Liu, L. Yang, J. Huang, P. Meer and L. Gong, "Robust and fast collaborative tracking with two stage sparse optimization," in Proc. of European Conference on Computer Vision, pp. 624-637, September 5-11, 2010.
  25. T. Zhang, B. Ghanem, S. Liu and N. Ahuja, "Robust visual tracking via multi-task sparse learning," in Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 2042-2049, June 16-21, 2012.
  26. X. Jia, H. Lu and M. H. Yang, "Visual tracking via adaptive structural local sparse appearance model," in Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1822-1829, June 16-21, 2012.
  27. B. Ma, J. Shen, Y. Liu and H. Hu, "Visual tracking using strong classifier and structural descriptors," IEEE Transactions on Multimedia, vol. 17, no. 10, pp. 1818-1828, October, 2015. https://doi.org/10.1109/TMM.2015.2463221
  28. X. Zeng, L. Xu, L. Ma, R. Zhao and Y. Cen, "Visual tracking using global sparse coding and local convolutional features," Digital Signal Processing, vol. 72, pp. 115-125, January, 2018. https://doi.org/10.1016/j.dsp.2017.10.007
  29. W. Feng, Y. Cen, X. Zeng, Z. Li, M. Zeng and V. Voronin, "Object tracking based on adaptive updating of a spatial-temporal context model," KSII Transactions on Internet and Information Systems, vol. 11, no. 11, pp. 5459-5473, November, 2017. https://doi.org/10.3837/tiis.2017.11.015
  30. N. Wang and D. Y. Yeung, "Learning a deep compact image representation for visual tracking," in Proc. of Advances in Neural Information Processing Systems, pp. 809-817, December 5-10, 2013.
  31. Y. Qi, L. Qin, J. Zhang, S. Zhang, Q. Huang and M. H. Yang, "Structure-aware local sparse coding for visual tracking," IEEE Transactions on Image Processing, vol. pp, no. 99, pp. 1-1, January, 2018.
  32. P. P. Dash and D. Patra, "Efficient visual tracking using multi-feature regularized robust sparse coding and quantum particle filter based localization," Journal of Ambient Intelligence and Humanized Computing, vol. 2018, no. 5, pp. 1-14, January, 2018.
  33. Y. Zhou, J. Han, X. Yuan, Z. Wei and R. Hong, "Inverse Sparse Group Lasso Model for Robust Object Tracking," IEEE Transactions on Multimedia, vol. 19, no. 8, pp. 1798-1810, August, 2017. https://doi.org/10.1109/TMM.2017.2689918
  34. N. Ali, K. B. Bajwa, R. Sablatnig and Z. Mehmood, "Image retrieval by addition of spatial information based on histograms of triangular regions," Computers & Electrical Engineering, vol. 54, no. c, pp. 539-550, August, 2016. https://doi.org/10.1016/j.compeleceng.2016.04.002
  35. N. Ali, K. B. Bajwa, R. Sablatnig and S. A. Chatzichristofis, "A novel image retrieval based on visual words integration of SIFT and SURF," Plos One, vol. 11, no. 6, pp. e0157428, June, 2016. https://doi.org/10.1371/journal.pone.0157428
  36. N. Ali, D. A. Mazhar, Z. Iqbal and R. Ashraf, "Content-Based Image Retrieval Based on Late Fusion of Binary and Local Descriptors," International Journal of Computer Science & Information Security, vol. 14, no. 11, pp. 821-837, March, 2017.
  37. L. Ye, L. Wang, Y. Sun, L. Zhao and Y. Wei, "Parallel multi-stage features fusion of deep convolutional neural networks for aerial scene classification," Remote Sensing Letters, vol. 9, no. 3, pp. 294-303, December, 2017. https://doi.org/10.3390/rs9030294
  38. Q. Liu, R. Hang, H. Song and Z. Li, "Learning Multiscale Deep Features for High-Resolution Satellite Image Scene Classification," IEEE Transactions on Geoscience & Remote Sensing, vol. 56, no. 1, pp. 117-126, January, 2018. https://doi.org/10.1109/TGRS.2017.2743243
  39. T. Zhou, K. Xie, J. Zhang, J. Yang and X. He, "Robust object tracking based on weighted subspace reconstruction error with forward-backward tracking criterion," Journal of Electronic Imaging, vol. 24, no. 3, pp. 033005, 2015. https://doi.org/10.1117/1.JEI.24.3.033005
  40. H. Li, Y. Li and F. Porikli, "Robust online visual tracking with a single convolutional neural network," in Proc. of the 12th Asian Conference on Computer Vision, pp. 194-209, November 1-5, 2014.
  41. L. Wang, W. Ouyang, X. Wang and H. Lu, "Visual tracking with fully convolutional networks," in Proc. of IEEE International Conference on Computer Vision, pp. 3119-3127, December 7-13, 2015.
  42. R. Tao, E. Gavves and A. W. M. Smeulders, "Siamese instance search for tracking," in Proc. of Computer Vision and Pattern Recognition, June 27-30, 2016.
  43. H. Nam and B. Han, "Learning multi-domain convolutional neural networks for visual tracking," in Proc. of Computer Vision and Pattern Recognition, June 27-30, 2016.