Perceptual Fusion of Infrared and Visible Image through Variational Multiscale with Guide Filtering

  • Feng, Xin (College of Mechanical Engineering, Chongqing Technology and Business University)
  • Hu, Kaiqun (College of Mechanical Engineering, Chongqing Technology and Business University)
  • Received : 2018.12.27
  • Accepted : 2019.09.26
  • Published : 2019.12.31

Abstract

To address the poor noise suppression and the frequent loss of edge contours and detail in current fusion methods, an infrared and visible light image fusion method based on variational multiscale decomposition is proposed. First, each source image is decomposed via variational multiscale decomposition into a texture component and a structure component. Guided filtering is then used to fuse the texture components of the two images. For structure-component fusion, a weighting scheme is proposed that combines phase congruency, sharpness, and brightness information. Finally, the fused texture and structure components are summed to obtain the final fused image. Experimental results show that the proposed method is robust to noise and achieves better fusion quality than comparison methods.
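The texture-fusion step described above can be sketched in code. The snippet below is a minimal illustrative implementation, not the authors' exact method: it implements the guided filter of He et al. (reference 7) with a box-filter integral image, builds a binary weight map from a simple local-activity saliency measure (an assumption — the paper's actual weights use phase congruency, sharpness, and brightness), refines the weights by guided filtering with the source images as guides, and blends the texture components. The function and variable names (`box_mean`, `guided_filter`, `fuse_texture`) are hypothetical.

```python
import numpy as np

def box_mean(a, r):
    """Mean filter with a (2r+1)x(2r+1) window, computed via an integral image."""
    k = 2 * r + 1
    p = np.pad(a, r, mode='edge')
    ii = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
    ii[1:, 1:] = p.cumsum(0).cumsum(1)
    H, W = a.shape
    return (ii[k:k+H, k:k+W] - ii[:H, k:k+W]
            - ii[k:k+H, :W] + ii[:H, :W]) / (k * k)

def guided_filter(I, p, r=4, eps=1e-3):
    """Edge-preserving smoothing of p, guided by image I (He et al., 2013)."""
    mI, mp = box_mean(I, r), box_mean(p, r)
    var_I = box_mean(I * I, r) - mI * mI       # local variance of the guide
    cov_Ip = box_mean(I * p, r) - mI * mp      # local covariance of guide and input
    a = cov_Ip / (var_I + eps)                 # linear coefficients per window
    b = mp - a * mI
    return box_mean(a, r) * I + box_mean(b, r)

def fuse_texture(tA, tB, srcA, srcB, r=4, eps=1e-3):
    """Fuse two texture components with guided-filter-refined weight maps.
    Saliency here is smoothed absolute texture activity -- an illustrative
    assumption standing in for the paper's comprehensive weight measure."""
    sA = box_mean(np.abs(tA), 1)
    sB = box_mean(np.abs(tB), 1)
    w = (sA >= sB).astype(float)                        # initial binary weights
    wA = np.clip(guided_filter(srcA, w, r, eps), 0, 1)  # refine with source as guide
    wB = np.clip(guided_filter(srcB, 1.0 - w, r, eps), 0, 1)
    return (wA * tA + wB * tB) / (wA + wB + 1e-12)      # normalized blend
```

Refining the binary weight map with the guided filter, rather than using it directly, aligns the fusion boundaries with edges in the source images and avoids blocky seams.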
