Acknowledgement
This research paper was supported by the Research Operation Support Program of the Electronics and Telecommunications Research Institute (ETRI) [21ZD1130, Development of intelligent control-based smart machine and robot technology].
References
- L. Bertinetto et al., "Fully-convolutional siamese networks for object tracking," in Proc. Eur. Conf. Comput. Vis. (ECCV), (Amsterdam, Netherlands), Oct. 2016, pp. 850-865.
- M. Kristan et al., "The visual object tracking VOT2013 challenge results," in Proc. IEEE Int. Conf. Comput. Vis. Workshops, (Sydney, Australia), Dec. 2013, pp. 93-111.
- M. Kristan et al., "The visual object tracking VOT2014 challenge results," in Proc. Eur. Conf. Comput. Vis. (ECCV), (Zurich, Switzerland), Sept. 2014, pp. 191-217.
- M. Kristan et al., "The visual object tracking VOT2015 challenge results," in Proc. IEEE Int. Conf. Comput. Vis. Workshops (ICCV), (Santiago, Chile), Dec. 2015, pp. 1-23.
- M. Kristan et al., "The visual object tracking VOT2016 challenge results," in Proc. Eur. Conf. Comput. Vis. (ECCV), (Amsterdam, Netherlands), Oct. 2016, pp. 777-823.
- M. Kristan et al., "The visual object tracking VOT2017 challenge results," in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), (Venice, Italy), Oct. 2017, pp. 1949-1972.
- M. Kristan et al., "The sixth visual object tracking VOT2018 challenge results," in Proc. Eur. Conf. Comput. Vis. (ECCV), (Munich, Germany), Sept. 2018.
- M. Kristan et al., "The seventh visual object tracking VOT2019 challenge results," in Proc. IEEE/CVF Int. Conf. Comput. Vis. Workshop (ICCVW), (Seoul, Republic of Korea), Oct. 2019, pp. 2206-2241.
- M. Kristan et al., "The eighth visual object tracking VOT2020 challenge results," in Proc. Eur. Conf. Comput. Vis. (ECCV), (Glasgow, UK), Aug. 2020, pp. 547-601.
- M. Kristan et al., "The ninth visual object tracking VOT2021 challenge results," in Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), Oct. 2021, pp. 2711-2738.
- Y. Wu et al., "Object tracking benchmark," IEEE Trans. Pattern Anal. Mach. Intell., vol. 37, no. 9, 2015, pp. 1834-1848. https://doi.org/10.1109/TPAMI.2014.2388226
- M. Muller et al., "TrackingNet: A large-scale dataset and benchmark for object tracking in the wild," in Proc. Eur. Conf. Comput. Vis. (ECCV), (Munich, Germany), Sept. 2018, pp. 300-317.
- H. Fan et al., "LaSOT: A high-quality benchmark for large-scale single object tracking," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), (Long Beach, CA, USA), June 2019, pp. 5374-5383.
- L. Huang et al., "GOT-10k: A large high-diversity benchmark for generic object tracking in the wild," IEEE Trans. Pattern Anal. Mach. Intell., vol. 43, no. 5, 2021, pp. 1562-1577. https://doi.org/10.1109/TPAMI.2019.2957464
- Z. Kalal et al., "Tracking-learning-detection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 7, 2012, pp. 1409-1422. https://doi.org/10.1109/TPAMI.2011.239
- J.F. Henriques et al., "High-speed tracking with kernelized correlation filters," IEEE Trans. Pattern Anal. Mach. Intell., vol. 37, no. 3, 2015, pp. 583-596. https://doi.org/10.1109/TPAMI.2014.2345390
- H. Nam and B. Han, "Learning multi-domain convolutional neural networks for visual tracking," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), (Las Vegas, NV, USA), June 2016, pp. 4293-4302.
- O. Russakovsky et al., "ImageNet large scale visual recognition challenge," Int. J. Comput. Vis., vol. 115, 2015, pp. 211-252. https://doi.org/10.1007/s11263-015-0816-y
- B. Li et al., "High performance visual tracking with siamese region proposal network," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), (Salt Lake Sity, UT, USA), June 2018, pp. 8971-8980.
- Z. Zhu et al., "Distractor-aware siamese networks for visual object tracking," in Proc. Eur. Conf. Comput. Vis. (ECCV), (Munich, Germany), Sept. 2018.
- A. Krizhevsky et al., "ImageNet classification with deep convolutional neural networks," in Proc. Int. Conf. Neural Inf. Process. Syst. (NIPS), (Lake Tahoe, NV, USA), Dec. 2012, pp. 1097-1105.
- K. He et al., "Deep residual learning for image recognition," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), (Las Vegas, NV, USA), June 2016, pp. 770-778.
- B. Li et al., "SiamRPN++: Evolution of siamese visual tracking with very deep networks," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), (Long Beach, CA, USA), June 2019, pp. 4282-4291.
- Z. Zhang et al., "Deeper and wider siamese networks for real-time visual tracking," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), (Long Beach, CA, USA), June 2019, pp. 4591-4600.
- C. Szegedy et al., "Going deeper with convolutions," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), (Boston, MA, USA), June 2015, pp. 1-9.
- S. Xie et al., "Aggregated residual transformations for deep neural networks," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), (Honolulu, HI, USA), July 2017, pp. 1492-1500.
- Q. Wang et al., "Fast online object tracking and segmentation: A unifying approach," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), (Long Beach, CA, USA), June 2019, pp. 1328-1338.
- L. Zhang et al., "Learning the model update for siamese trackers," in Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), (Seoul, Republic of Korea), Oct. 2019, pp. 4010-4019.
- Y. Yu et al., "Deformable siamese attention networks for visual object tracking," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), June 2020, pp. 6728-6737.
- Z. Tian et al., "FCOS: Fully convolutional one-stage object detection," in Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), (Seoul, Republic of Korea), Oct. 2019, pp. 9627-9636.
- Z. Chen et al., "Siamese box adaptive network for visual tracking," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), June 2020, pp. 6668-6677.
- Z. Zhang et al., "Ocean: Object-aware anchor-free tracking," in Proc. Eur. Conf. Comput. Vis. (ECCV), (Glasgow, UK), Aug. 2020, pp. 771-787.
- J. Dai et al., "Deformable convolutional networks," in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), (Venice, Italy), Oct. 2017, pp. 764-773.
- H. Zhang et al., "Context encoding for semantic segmentation," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), (Salt Lake City, UT, USA), June 2018, pp. 7151-7160.
- N. Wang et al., "Transformer meets tracker: Exploiting temporal context for robust visual tracking," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), June 2021, pp. 1571-1580.
- B. Yan et al., "LightTrack: Finding lightweight neural networks for object tracking via one-shot architecture search," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), June 2021, pp. 15180-15189.