Adaptive Detection of a Moving Target Undergoing Illumination Changes against a Dynamic Background

  • Lu, Mu (Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences) ;
  • Gao, Yang (Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences) ;
  • Zhu, Ming (Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences)
  • Received : 2016.08.11
  • Accepted : 2016.11.28
  • Published : 2016.12.25

Abstract

A detection algorithm for a moving target against a dynamic background, based on the combined local-global (CLG) optical-flow model and a Gaussian pyramid, compensates for the classic Horn-Schunck algorithm's poor adaptability to illumination changes and reduces the required amount of computation. Incorporating the gradient-constancy assumption into the traditional CLG optical-flow model and combining it with structure-texture decomposition allow the algorithm to minimize the impact of illumination changes on the optical-flow estimates. Furthermore, computing the optical flow layer by layer on the Gaussian pyramid, and deriving the flow at the remaining points iteratively from the flow at points of higher gray level, reduce the number of calculations required and thus improve detection efficiency. Finally, the proposed method detects the moving target against the dynamic background according to the background motion vector determined by the displacement and magnitude of the optical flow. Simulation results indicate that, compared to the traditional Horn-Schunck optical-flow algorithm, this algorithm accurately detects a moving target undergoing illumination changes against a dynamic background while significantly reducing the number of computations, thereby improving detection efficiency.
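
The following is a minimal NumPy sketch (not the authors' implementation) of the structure the abstract outlines: optical flow is estimated at the top of a Gaussian pyramid and upsampled to warm-start each finer level, and pixels whose flow departs from the dominant background motion vector are flagged as the moving target. A plain Horn-Schunck update stands in for the CLG model with the gradient-constancy term, per-level warping and structure-texture decomposition are omitted, and the function names, pyramid depth, smoothness weight alpha, and deviation threshold dev_thresh are assumptions chosen for illustration.

import numpy as np

def downsample(img):
    """Blur with a 5-tap binomial kernel, then decimate by 2 (one Gaussian-pyramid level)."""
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    img = np.apply_along_axis(np.convolve, 1, img, k, mode="same")  # horizontal blur
    img = np.apply_along_axis(np.convolve, 0, img, k, mode="same")  # vertical blur
    return img[::2, ::2]

def neighbor_avg(f):
    """4-neighbour average used by the Horn-Schunck smoothness term."""
    return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                   np.roll(f, 1, 1) + np.roll(f, -1, 1))

def horn_schunck(I1, I2, u, v, alpha=15.0, iters=80):
    """Horn-Schunck iterations, warm-started with the flow (u, v) from a coarser level."""
    Ix = np.gradient(I1, axis=1)          # spatial derivatives of the first frame
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                          # temporal derivative
    for _ in range(iters):
        ubar, vbar = neighbor_avg(u), neighbor_avg(v)
        t = (Ix * ubar + Iy * vbar + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u, v = ubar - Ix * t, vbar - Iy * t
    return u, v

def pyramidal_flow(I1, I2, levels=3):
    """Coarse-to-fine estimation: solve at the coarsest pyramid level first, then
    upsample the flow to warm-start each finer level (no per-level warping here)."""
    p1, p2 = [I1.astype(float)], [I2.astype(float)]
    for _ in range(levels - 1):
        p1.append(downsample(p1[-1]))
        p2.append(downsample(p2[-1]))
    u = np.zeros_like(p1[-1])
    v = np.zeros_like(p1[-1])
    for a, b in zip(reversed(p1), reversed(p2)):
        if u.shape != a.shape:            # moving to the next finer level: upsample flow
            u = 2.0 * np.kron(u, np.ones((2, 2)))[:a.shape[0], :a.shape[1]]
            v = 2.0 * np.kron(v, np.ones((2, 2)))[:a.shape[0], :a.shape[1]]
        u, v = horn_schunck(a, b, u, v)
    return u, v

def detect_moving_target(I1, I2, dev_thresh=1.0):
    """Flag pixels whose flow deviates from the dominant (background) motion,
    taken here as the per-component median of the flow field."""
    u, v = pyramidal_flow(I1, I2)
    bg_u, bg_v = np.median(u), np.median(v)   # crude background motion vector
    deviation = np.hypot(u - bg_u, v - bg_v)
    return deviation > dev_thresh             # boolean mask of the moving target

In the full method described by the abstract, structure-texture decomposition would be applied to both frames before the flow is computed, and the iteratively computed flow at points of higher gray level would seed the estimates at the remaining points rather than a uniform warm start.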
