Turbulent-image Restoration Based on a Compound Multibranch Feature Fusion Network

  • Banglian Xu (College of Communication and Art Design, University of Shanghai for Science and Technology) ;
  • Yao Fang (College of Communication and Art Design, University of Shanghai for Science and Technology) ;
  • Leihong Zhang (School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology) ;
  • Dawei Zhang (School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology) ;
  • Lulu Zheng (School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology)
  • Received : 2023.03.14
  • Accepted : 2023.03.25
  • Published : 2023.06.25

Abstract

In middle- and long-distance imaging systems, atmospheric turbulence caused by temperature, wind speed, humidity, and other factors distorts light waves propagating through the air, degrading image quality through geometric deformation and blurring. In remote sensing, astronomical observation, and traffic monitoring, the information lost to such degradation is costly, so effective restoration of degraded images is very important. To restore images degraded by atmospheric turbulence, an image-restoration method based on an improved compound multibranch feature-fusion network (CMFNetPro) is proposed. Building on the CMFNet network, an efficient channel-attention mechanism replaces the original channel-attention mechanism to improve both image quality and network efficiency. In the experiments, two-dimensional random distortion vector fields were applied to the Google Landmarks Dataset v2 to construct two turbulent datasets with different degrees of distortion. The experimental results show that, compared to the CMFNet, DeblurGAN-v2, and MIMO-UNet models, the proposed CMFNetPro network achieves better performance in both restoration quality and training cost. Under mixed training, relative to CMFNet, CMFNetPro improves the peak signal-to-noise ratio by 1.2391 dB for weak turbulence and 0.8602 dB for strong turbulence, and the structural similarity by 0.0015 and 0.0136 respectively; CMFNetPro also trains 14.4 hours faster than CMFNet. This provides a feasible scheme for deep-learning-based turbulent-image restoration.
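To make the two key ingredients above concrete, two minimal sketches follow. First, the efficient channel-attention (ECA) module of Wang et al. (reference 25) that CMFNetPro substitutes for CMFNet's channel attention. This is a sketch of the published ECA design in PyTorch, not the authors' released code; the class name `ECALayer` and the kernel-size constants gamma = 2 and b = 1 follow the ECA-Net paper, since the abstract does not list the exact settings used here.

```python
import math

import torch
import torch.nn as nn


class ECALayer(nn.Module):
    """Efficient channel attention (ECA) as described in ECA-Net (ref. 25).

    Unlike squeeze-and-excitation channel attention, ECA avoids
    dimensionality reduction: a single 1-D convolution over the globally
    pooled channel descriptor models local cross-channel interaction.
    """

    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        # Adaptive kernel size from the ECA-Net paper: the nearest odd
        # value to (log2(C) + b) / gamma.
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (N, C, H, W) -> per-channel descriptor (N, C, 1, 1)
        y = self.avg_pool(x)
        # Convolve across the channel dimension: (N, 1, C) -> (N, 1, C)
        y = self.conv(y.squeeze(-1).transpose(-1, -2))
        # Restore shape and gate the input feature map channel-wise
        y = self.sigmoid(y.transpose(-1, -2).unsqueeze(-1))
        return x * y.expand_as(x)
```

Because ECA replaces the two fully connected projections of squeeze-and-excitation attention with a single k-tap 1-D convolution, swapping it into each attention-bearing block reduces per-block parameters and computation, which is consistent with the reported 14.4-hour reduction in training time.

Second, the abstract describes building the turbulent datasets by warping clean Google Landmarks Dataset v2 images with two-dimensional random distortion vector fields. One plausible construction, in the spirit of the correlated-distortion model of reference 14, uses Gaussian-smoothed white noise as the field; the function below is an illustrative sketch only, and the `strength` and `smoothness` parameters are assumed knobs rather than values taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates


def random_turbulent_warp(img, strength=2.0, smoothness=8.0, rng=None):
    """Warp a single-channel image with a smooth random 2-D vector field.

    strength   -- peak pixel displacement (assumed knob, not from the paper)
    smoothness -- Gaussian correlation length of the field, in pixels (assumed)
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape
    # Spatially correlated displacements: Gaussian-smoothed white noise.
    dy = gaussian_filter(rng.standard_normal((h, w)), smoothness)
    dx = gaussian_filter(rng.standard_normal((h, w)), smoothness)
    # Rescale each component so the largest displacement equals `strength`.
    dy *= strength / (np.abs(dy).max() + 1e-8)
    dx *= strength / (np.abs(dx).max() + 1e-8)
    # Sample the image at the displaced coordinates (bilinear, reflected edges).
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([yy + dy, xx + dx])
    return map_coordinates(img, coords, order=1, mode="reflect")
```

For RGB images the same field would be applied to each channel, and varying `strength` would produce the weak- and strong-turbulence variants used in the experiments.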

Acknowledgement

The authors acknowledge the support given by the National Natural Science Foundation of China and the Shanghai Industrial Collaborative Innovation Project. In addition, the authors are grateful to the editor and anonymous reviewers for their valuable comments and suggestions about this paper.

References

  1. O. Shacham, O. Haik, and Y. Yitzhaky, "Blind restoration of atmospherically degraded images by automatic best step-edge detection," Pattern Recognit. Lett. 28, 2094-2103 (2007).  https://doi.org/10.1016/j.patrec.2007.06.006
  2. M. Shimizu, S. Yoshimura, M. Tanaka, and M. Okutomi, "Super-resolution from image sequence under influence of hot-air optical turbulence," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (Anchorage, AK, USA, Jun. 23-28, 2008), pp. 1-8. 
  3. X. Zhu and P. Milanfar, "Image reconstruction from videos distorted by atmospheric turbulence," Proc. SPIE 7543, 75430S (2010). 
  4. C. P. Lau, Y. H. Lai, and L. M. Lui, "Variational models for joint subsampling and reconstruction of turbulence-degraded images," J. Sci. Comput. 78, 1488-1525 (2019).  https://doi.org/10.1007/s10915-018-0833-4
  5. M. Aubailly, M. A. Vorontsov, G. W. Carhart, and M. T. Valley, "Automated video enhancement from a stream of atmospherically-distorted images: The lucky-region fusion approach," Proc. SPIE 7463, 74630C (2009). 
  6. O. Oreifej, X. Li, and M. Shah, "Simultaneous video stabilization and moving object detection in turbulence," IEEE Trans. Pattern Anal. Mach. Intell. 35, 450-462 (2013).  https://doi.org/10.1109/TPAMI.2012.97
  7. C. P. Lau, Y. H. Lai, and L. M. Lui, "Restoration of atmospheric turbulence-distorted images via RPCA and quasiconformal maps," Inverse Problems 35, 074002 (2019). 
  8. C. P. Lau, C. D. Castillo, and R. Chellappa, "ATFaceGAN: Single face semantic aware image restoration and recognition from atmospheric turbulence," IEEE Trans. Biom. Behav. Identity Sci. 3, 240-251 (2021).  https://doi.org/10.1109/TBIOM.2021.3058316
  9. R. Yasarla and V. M. Patel, "Learning to restore images degraded by atmospheric turbulence using uncertainty," in Proc. IEEE International Conference on Image Processing-ICIP (Anchorage, AK, USA, Sep. 19-22, 2021), pp. 1694-1698. 
  10. D. Jin, Y. Chen, Y. Lu, J. Chen, P. Wang, Z. Liu, S. Guo, and X. Bai, "Neutralizing the impact of atmospheric turbulence on complex scene imaging via deep learning," Nat. Mach. Intell. 3, 876-884 (2021).  https://doi.org/10.1038/s42256-021-00392-1
  11. C.-M. Fan, T.-J. Liu, and K.-H. Liu, "Compound multibranch feature fusion for real image restoration," arXiv:2206.02748 (2022). 
  12. Y. Xie, W. Zhang, D. Tao, W. Hu, Y. Qu, and H. Wang, "Removing turbulence effect via hybrid total variation and deformation-guided kernel regression," IEEE Trans. Image Process. 25, 4943-4958 (2016).  https://doi.org/10.1109/TIP.2016.2598638
  13. S. H. Chan, "Tilt-then-blur or blur-then-tilt? Clarifying the atmospheric turbulence model," IEEE Signal Process. Lett. 29, 1833-1837 (2022).  https://doi.org/10.1109/LSP.2022.3200551
  14. A. Schwartzman, M. Alterman, R. Zamir, and Y. Y. Schechner, "Turbulence-induced 2D correlated image distortion," in Proc. IEEE International Conference on Computational Photography-ICCP (Stanford, CA, USA, May 12-14, 2017), pp. 1-13. 
  15. T. Weyand, A. Araujo, B. Cao, and J. Sim, "Google Landmarks Dataset v2 - A large-scale benchmark for instance-level recognition and retrieval," in Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition-CVPR (Virtual Conference, Jun. 14-19, 2020), pp. 2575-2584. 
  16. Y. Tian, S. G. Narasimhan, and A. J. Vannevel, "Depth from optical turbulence," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (Providence, RI, USA, Jun. 16-21, 2012), pp. 246-253. 
  17. Y. Cheon and A. Muschinski, "Closed-form approximations for the angle-of-arrival variance of plane and spherical waves propagating through homogeneous and isotropic turbulence," J. Opt. Soc. Am. A 24, 415-422 (2007).  https://doi.org/10.1364/JOSAA.24.000415
  18. O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in Proc. International Conference on Medical Image Computing and Computer-Assisted Intervention (Munich, Germany, Oct. 5-9, 2015), pp. 234-241. 
  19. S. W. Zamir, A. Arora, S. Khan, M. Hayat, F. S. Khan, M.-H. Yang, and L. Shao, "Multi-stage progressive image restoration," arXiv:2102.02808 (2021). 
  20. L. Chen, X. Lu, J. Zhang, X. Chu, and C. Chen, "HINet: Half instance normalization network for image restoration," in Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (Virtual Conference, Jun. 19-25, 2021), pp. 182-192. 
  21. H. Zhao, X. Kong, J. He, Y. Qiao, and C. Dong, "Efficient image super-resolution using pixel attention," in Proc. European Conference on Computer Vision-ECCV (Glasgow, UK, Aug. 23-28, 2020), pp. 56-72. 
  22. Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu, "Image super-resolution using very deep residual channel attention networks," in Proc. European Conference on Computer Vision-ECCV (Munich, Germany, Sep. 8-14, 2018), pp. 286-301. 
  23. S. Woo, J. Park, J.-Y. Lee, and I. S. Kweon, "CBAM: Convolutional block attention module," in Proc. European Conference on Computer Vision-ECCV (Munich, Germany, Sep. 8-14, 2018), pp. 3-19. 
  24. K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (Las Vegas, NV, USA, Jun. 26-Jul. 1, 2016), pp. 770-778. 
  25. Q. Wang, B. Wu, P. Zhu, P. Li, W. Zuo, and Q. Hu, "ECA-Net: Efficient channel attention for deep convolutional neural networks," in Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition-CVPR (Virtual Conference, Jun. 14-19, 2020), pp. 11531-11539. 
  26. K. Jiang, Z. Wang, P. Yi, C. Chen, B. Huang, Y. Luo, J. Ma, and J. Jiang, "Multi-scale progressive fusion network for single image deraining," in Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (Virtual Conference, Jun. 14-19, 2020), pp. 8346-8355. 
  27. O. Kupyn, T. Martyniuk, J. Wu, and Z. Wang, "DeblurGAN-v2: Deblurring (orders-of-magnitude) faster and better," in Proc. IEEE/CVF International Conference on Computer Vision (Seoul, Korea, Oct. 27-Nov. 2, 2019), pp. 8878-8887. 
  28. S.-J. Cho, S.-W. Ji, J.-P. Hong, S.-W. Jung, and S.-J. Ko, "Rethinking coarse-to-fine approach in single image deblurring," in Proc. IEEE/CVF International Conference on Computer Vision (Virtual Conference, Oct. 11-17, 2021), pp. 4641-4650.