Multimodal Medical Image Fusion Based on Double-Layer Decomposer and Fine Structure Preservation Model

  • Received : 2021.12.24
  • Accepted : 2022.02.03
  • Published : 2022.06.30

Abstract

Multimodal medical image fusion (MMIF) combines two images acquired in different imaging modes, each containing different structural details, into a single comprehensive image rich in information, which can help doctors observe and treat patients' diseases more accurately. To this end, a method based on a double-layer decomposer and a fine structure preservation model is proposed. Firstly, the double-layer decomposer splits the source images into energy layers and structure layers, preserving details well. Secondly, the structure layers are fused by combining the structure tensor operator (STO) with max-abs selection. For the energy layers, a fine structure preservation model is proposed to guide the fusion, further improving image quality. Finally, the fused image is obtained by adding the two sub-fused images produced by these fusion rules. Experiments show that our method performs excellently compared with several typical fusion methods.
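The pipeline above can be summarized in a short sketch. The paper's actual double-layer decomposer and fine structure preservation model are not specified in this abstract, so the code below substitutes stand-ins: a Gaussian base/detail split in place of the decomposer, the structure-tensor trace as the STO saliency measure, and a plain mean in place of the energy-layer model. Only the overall flow (decompose, fuse each layer with its rule, add the sub-fused results) follows the method as described.

    # Minimal sketch of the fusion flow described above (Python).
    # Assumptions: Gaussian base/detail split stands in for the paper's
    # double-layer decomposer; structure-tensor trace serves as the STO
    # saliency; a plain mean replaces the fine structure preservation
    # model for the energy layers.
    import numpy as np
    from scipy import ndimage

    def double_layer_decompose(img, sigma=2.0):
        # Energy (base) layer via Gaussian smoothing; structure (detail)
        # layer is the residual. Placeholder for the paper's decomposer.
        energy = ndimage.gaussian_filter(img, sigma)
        return energy, img - energy

    def sto_saliency(layer, sigma=1.0):
        # Trace of the Gaussian-integrated structure tensor,
        # Jxx + Jyy (the sum of its eigenvalues), as a gradient-energy map.
        ix = ndimage.sobel(layer, axis=1)
        iy = ndimage.sobel(layer, axis=0)
        return (ndimage.gaussian_filter(ix * ix, sigma)
                + ndimage.gaussian_filter(iy * iy, sigma))

    def fuse(img_a, img_b):
        ea, sa = double_layer_decompose(img_a)
        eb, sb = double_layer_decompose(img_b)
        # Structure layers: weight max-abs selection by STO saliency
        # (one plausible reading of "combining STO and max-abs").
        pick_a = sto_saliency(sa) * np.abs(sa) >= sto_saliency(sb) * np.abs(sb)
        s_fused = np.where(pick_a, sa, sb)
        # Energy layers: simple mean as a stand-in for the paper's
        # fine structure preservation model.
        e_fused = 0.5 * (ea + eb)
        # Final image: add the two sub-fused layers back together.
        return e_fused + s_fused

For instance, calling fuse(ct, mri) on two registered, equally sized grayscale slices (float arrays) returns a single fused slice; the final additive recombination mirrors the last step of the method.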

Acknowledgement

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (GR 2019R1D1A3A03103736), in part by the project for Cooperative R&D between Industry, Academy, and Research Institute funded by the Korea Ministry of SMEs and Startups in 20 (Grant No. S3114049), and by the project for 'Customized technology partner' funded by the Korea Ministry of SMEs and Startups in 2022 (RS-2022-00155266).
