AI Image Restoration Based on Synthetic Image for Improving Aircraft Optical Detection

  • Sang Gyu Jeong (Aviation System Development Quality Research Team, Defense Agency for Technology and Quality) ;
  • Na Eun Kwon (Aviation System Development Quality Research Team, Defense Agency for Technology and Quality) ;
  • Hyung Woo Kim (Division of Mechanical Engineering, College of Engineering, Wonkwang University)
  • Received : 2024.09.26
  • Accepted : 2024.10.29
  • Published : 2024.10.31

Abstract

This study proposes an AI-based image restoration technique to reduce image distortion caused by lighting and noise in nighttime environments and to improve the performance of infrared detection systems. A synthetic image dataset was constructed from visible-light images under various lighting conditions and ISO settings, and deep learning models (AutoEncoder and U-Net) were trained on it to evaluate image restoration performance. Experimental results show that the Multi-ISO model (9-channel input) outperforms the Single-ISO model (3-channel input), confirming that input data combining multiple ISO values improves restoration quality. This study demonstrates that AI models can be trained effectively on synthetic data even when real data collection is difficult, and that such models can be applied to image restoration tasks. These findings are expected to contribute to improving the performance of optical detection systems through AI-based techniques.
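To illustrate the data-synthesis and multi-channel input idea described in the abstract, the sketch below shows one plausible way to generate ISO-dependent noisy variants of a clean visible-light image, stack three of them into a 9-channel Multi-ISO input, and train a small convolutional AutoEncoder to restore the original. The specific ISO values, noise model (Poisson shot noise plus Gaussian read noise), and layer sizes are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch (PyTorch). ISO values, noise model, and network layout are
# illustrative assumptions only, not the configuration used in the paper.
import numpy as np
import torch
import torch.nn as nn

ISO_VALUES = [800, 1600, 3200]  # assumed ISO settings for the Multi-ISO input

def synthesize_iso_image(clean, iso, gain=0.4, read_noise=2.0, rng=None):
    """Darken a clean RGB image (uint8, HxWx3) and add ISO-dependent noise (assumed model)."""
    rng = rng if rng is not None else np.random.default_rng()
    dark = clean.astype(np.float32) * gain                           # simulate reduced exposure
    scale = iso / 800.0
    shot = rng.poisson(dark * scale) / scale                         # signal-dependent shot noise
    noisy = shot + rng.normal(0.0, read_noise * scale, clean.shape)  # additive read noise
    return np.clip(noisy, 0, 255).astype(np.float32) / 255.0

def make_multi_iso_input(clean, rng=None):
    """Stack three ISO variants channel-wise into a (9, H, W) tensor."""
    variants = [synthesize_iso_image(clean, iso, rng=rng) for iso in ISO_VALUES]
    return torch.from_numpy(np.concatenate(variants, axis=2)).permute(2, 0, 1)

class RestorationAE(nn.Module):
    """Small convolutional AutoEncoder; in_ch=3 for Single-ISO, 9 for Multi-ISO."""
    def __init__(self, in_ch=9):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

# Usage sketch: one optimization step restoring the clean image from the 9-channel stack.
clean = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)   # stand-in for a real frame
x = make_multi_iso_input(clean).unsqueeze(0)                        # (1, 9, 128, 128)
target = torch.from_numpy(clean.astype(np.float32) / 255.0).permute(2, 0, 1).unsqueeze(0)
model = RestorationAE(in_ch=9)
loss = nn.functional.mse_loss(model(x), target)                     # reconstruction loss
loss.backward()
```

Feeding the three ISO variants as one stacked input lets the network exploit complementary noise statistics across exposures, which is the intuition behind the reported Multi-ISO (9-channel) versus Single-ISO (3-channel) comparison; the same sketch extends to a U-Net by adding skip connections between encoder and decoder stages.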

Acknowledgement

This research was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) under the Regional Intelligence Innovation Talent Development Program, funded by the Korean government (Ministry of Science and ICT) (IITP-2024-RS-2024-00439292).
