Acknowledgement
This research was supported by an Electronics and Telecommunications Research Institute (ETRI) grant funded by the Korean Government (23ZH1300, Research on Hyper-realistic Interaction Technology for Five Senses and Emotional Experience) and by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean Government (MSIT) (2019-0-00001, Development of Holo-TV Core Technologies for Hologram Media Services).
References
- Y. Rivenson, Y. Zhang, H. Gunaydin, D. Teng, and A. Ozcan, Phase recovery and holographic image reconstruction using deep learning in neural networks, Light Sci. Appl. 7 (2018), 17141.
- Z. Ren, Z. Xu, and E. Y. Lam, Autofocusing in digital holography using deep learning, (Proc. SPIE 10499, Three-Dimensional and Multidimensional Microsc.: Image Acquis. Process. XXV, SPIE, San Francisco, CA), 2018, pp. 157-164.
- Z. He, X. Sui, and L. Cao, Holographic 3D display using depth maps generated by 2D-to-3D rendering approach, Appl. Sci. 11 (2021), 9889.
- Y. Wu, V. Boominathan, H. Chen, A. Sankaranarayanan, and A. Veeraraghavan, PhaseCam3D-Learning phase masks for passive single view depth estimation, (2019 IEEE Int. Conf. Comput. Photography (ICCP), Tokyo, Japan), 2019, pp. 1-12.
- T. Pitkaaho, A. Manninen, and T. J. Naughton, Focus prediction in digital holographic microscopy using deep convolutional neural networks, Appl. Optics 58 (2019), A202-A208. https://doi.org/10.1364/AO.58.00A202
- D.-Y. Park and J.-H. Park, Hologram conversion for speckle free reconstruction using light field extraction and deep learning, Opt. Express 28 (2020), 5393-5409. https://doi.org/10.1364/OE.384888
- D.-Y. Park and J.-H. Park, Generation of distortion-free scaled holograms using light field data conversion, Opt. Express 29 (2021), 487-508. https://doi.org/10.1364/OE.412986
- T. Ichikawa, K. Yamaguchi, and Y. Sakamoto, Realistic expression for full-parallax computer-generated holograms with the ray-tracing method, Appl. Optics 52 (2013), A201-A209. https://doi.org/10.1364/AO.52.00A201
- Z. Wang, G. Lv, Q. Feng, A. Wang, and H. Ming, Simple and fast calculation algorithm for computer-generated hologram based on integral imaging using look-up table, Opt. Express 26 (2018), 13322-13330. https://doi.org/10.1364/OE.26.013322
- J.-H. Park and M. Askari, Non-hogel-based computer generated hologram from light field using complex field recovery technique from Wigner distribution function, Opt. Express 27 (2019), 2562-2574. https://doi.org/10.1364/OE.27.002562
- J.-H. Park, Efficient calculation scheme for high pixel resolution non-hogel-based computer generated hologram from light field, Opt. Express 28 (2020), 6663-6683. https://doi.org/10.1364/OE.386632
- D. Min, K. Min, H.-J. Choi, H. Lee, and J.-H. Park, Non-hogel-based computer generated hologram with occlusion processing between the foreground light field and background hologram, Opt. Express 30 (2022), 38339-38356. https://doi.org/10.1364/OE.468748
- I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, Generative adversarial networks, Commun. ACM 63 (2020), 139-144. https://doi.org/10.1145/3422622
- X. Li, Z. Du, Y. Huang, and Z. Tan, A deep translation (GAN) based change detection network for optical and SAR remote sensing images, ISPRS J. Photogramm. Remote Sens. 179 (2021), 14-34. https://doi.org/10.1016/j.isprsjprs.2021.07.007
- R. A. Khan, Y. Luo, and F.-X. Wu, Multi-scale GAN with residual image learning for removing heterogeneous blur, IET Image Process. 16 (2022), 2412-2431.
- D.-W. Kim, J.-R. Chung, J. Kim, D. Lee, S. Jeong, and S.-W. Jung, Constrained adversarial loss for generative adversarial network-based faithful image restoration, ETRI J. 41 (2019), 415-425. https://doi.org/10.4218/etrij.2018-0473
- T. Karras, S. Laine, and T. Aila, A style-based generator architecture for generative adversarial networks, (2019 IEEE/CVF Conf. Comput. Vision Pattern Recognit. (CVPR), Long Beach, USA), 2019, pp. 4396-4405.
- E. Agustsson and R. Timofte, NTIRE 2017 challenge on single image super-resolution: dataset and study, (2017 IEEE Conf. Comp. Vision Patt. Recog. Workshops (CVPRW), Honolulu, USA), 2017, pp. 1122-1131.
- O. Ronneberger, P. Fischer, and T. Brox, U-Net: convolutional networks for biomedical image segmentation, (Proc. Int. Conf. Med. Image Comput. Comput.-Assisted Intervention, Munich, Germany), 2015, pp. 234-241.
- K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising, IEEE Trans. Image Process. 26 (2017), 3142-3155. https://doi.org/10.1109/TIP.2017.2662206
- Y. Huang, Z. Lu, Y. Liu, H. Chen, J. Zhou, L. Fang, and Y. Zhang, Noise-powered disentangled representation for unsupervised speckle reduction of optical coherence tomography images, IEEE Trans. Med. Imaging 40 (2021), 2600-2614. https://doi.org/10.1109/TMI.2020.3045207
- B. Curless and M. Levoy, A volumetric method for building complex models from range images, (Proc. 23rd Annu. Conf. Comput. Graphics Interact. Tech. (SIGGRAPH), New Orleans, LA, USA), 1996, pp. 303-312.