A Method for Improving Resolution and Critical Dimension Measurement of an Organic Layer Using Deep Learning Superresolution

  • Kim, Sangyun (School of Mechanical and Aerospace Engineering, Seoul National University) ;
  • Pahk, Heui Jae (School of Mechanical and Aerospace Engineering, Seoul National University)
  • Received : 2018.02.07
  • Accepted : 2018.03.07
  • Published : 2018.04.25

Abstract

In semiconductor manufacturing, critical dimensions characterize the features of patterns formed by the semiconductor process. The purpose of measuring critical dimensions is to confirm whether patterns are made as intended. The deposition process for an organic light-emitting diode (OLED) forms a luminous organic layer on the thin-film transistor electrode. The position of this organic layer greatly affects the luminescent performance of an OLED. Thus, a system for measuring the position of the organic layer from outside of the vacuum chamber in real time is desired for monitoring the deposition process. However, imaging from a large stand-off distance yields low spatial resolution because of diffraction blur, making it difficult to attain industrial-level measurement accuracy. The proposed method produces a superresolved image from a single input image using a conversion formula between two different optical systems, obtained by a deep learning technique. This formula converts an image measured at long distance with low-resolution optics into an image as if it had been measured with high-resolution optics. The performance of this method is evaluated with various samples in terms of spatial resolution and measurement performance.
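The single-image conversion described in the abstract is, at its core, a convolutional regression from blurred low-resolution patches to their high-resolution counterparts, in the spirit of SRCNN [10]. The following NumPy sketch shows such a three-stage forward pass (feature extraction, nonlinear mapping, reconstruction); the layer widths, kernel sizes, and random weights are illustrative assumptions, not the authors' actual network, which would be trained on paired images from the two optical systems:

```python
import numpy as np

def conv2d(x, w, b):
    """'Same'-padded 2-D convolution of an (H, W, C_in) image
    with a (k, k, C_in, C_out) kernel bank."""
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    H, W, _ = x.shape
    out = np.empty((H, W, w.shape[3]))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]                # (k, k, C_in) window
            out[i, j, :] = np.tensordot(patch, w, axes=3) + b
    return out

def sr_forward(lr_image, params):
    """Three-stage SRCNN-style mapping: extract features,
    map them nonlinearly, then reconstruct the image."""
    h = np.maximum(conv2d(lr_image, *params[0]), 0.0)      # ReLU
    h = np.maximum(conv2d(h, *params[1]), 0.0)             # ReLU
    return conv2d(h, *params[2])                           # linear reconstruction

rng = np.random.default_rng(0)

def init(k, cin, cout):
    # Small random weights stand in for trained parameters.
    return rng.normal(0.0, 0.01, (k, k, cin, cout)), np.zeros(cout)

params = [init(9, 1, 16), init(1, 16, 16), init(5, 16, 1)]
lr = rng.random((32, 32, 1))   # stand-in for a bicubic-upsampled low-res capture
sr = sr_forward(lr, params)
print(sr.shape)                # same spatial size as the input; a trained
                               # network would restore high-frequency detail
```

In a trained model the output keeps the input's pixel grid (the low-resolution image is upsampled to the target grid first) while the network restores the high-frequency content lost to diffraction blur.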

References

  1. B. Geffroy, P. L. Roy, and C. Prat, "Organic light-emitting diode (OLED) technology: Materials, devices and display technologies," Polym. Int. 55, 572-582 (2006). https://doi.org/10.1002/pi.1974
  2. K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, "PiCam: An ultra-thin high performance monolithic camera array," ACM Trans. Graph. 32, 166 (2013).
  3. G. Carles, J. Downing, and A. R. Harvey, "Super-resolution imaging using a camera array," Opt. Lett. 39, 1889-1892 (2014). https://doi.org/10.1364/OL.39.001889
  4. J. Holloway, Y. Wu, M. K. Sharma, O. Cossairt, and A. Veeraraghavan, "SAVI: Synthetic apertures for long-range, subdiffraction-limited visible imaging using Fourier ptychography," Sci. Adv. 3, e1602564 (2017). https://doi.org/10.1126/sciadv.1602564
  5. G. Zheng, R. Horstmeyer, and C. Yang, "Wide-field, high-resolution Fourier ptychographic microscopy," Nat. Photon. 7, 739-745 (2013). https://doi.org/10.1038/nphoton.2013.187
  6. S. Dong, Z. Bian, and R. Shiradkar, "Sparsely sampled Fourier ptychography," Opt. Express 22, 5455-5464 (2014). https://doi.org/10.1364/OE.22.005455
  7. K. Guo, S. Dong, P. Nanda, and G. Zheng, "Optimization of sampling pattern and the design of Fourier ptychographic illuminator," Opt. Express 23, 6171-6180 (2015). https://doi.org/10.1364/OE.23.006171
  8. S. Dong, R. Horstmeyer, R. Shiradkar, K. Guo, X. Ou, Z. Bian, H. Xin, and G. Zheng, "Aperture-scanning Fourier ptychography for 3D refocusing and super-resolution," Opt. Express 22, 13586-13599 (2014). https://doi.org/10.1364/OE.22.013586
  9. N. T. Doan, J. H. Moon, T. W. Kim, and H. J. Pahk, "A fast image enhancement technique using a new scanning path for critical dimension measurement of glass panels," Int. J. Precis. Eng. Man. 13, 2109-2114 (2012). https://doi.org/10.1007/s12541-012-0279-9
  10. C. Dong, C. C. Loy, K. He, and X. Tang, "Learning a deep convolutional network for image super-resolution," in European Conference on Computer Vision (ECCV) (Zurich, 6-12 Sept. 2014), pp. 184-199.
  11. C. Dong, C. C. Loy, K. He, and X. Tang, "Accelerating the super-resolution convolutional neural network," in European Conference on Computer Vision (ECCV) (Amsterdam, 11-14 Oct. 2016), pp. 391-407.
  12. J. Kim, J. K. Lee, and K. M. Lee, "Accurate image super-resolution using very deep convolutional networks," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 1646-1654.
  13. W. Bae and J. Yoo, "Beyond deep residual learning for image restoration: Persistent homology-guided manifold simplification," https://arxiv.org/abs/1611.06345.
  14. Y. S. Han, J. Yoo, and J. C. Ye, "Deep residual learning for compressed sensing CT reconstruction via persistent homology analysis," https://arxiv.org/abs/1611.06391.
  15. B. Lim, S. H. Son, H. W. Kim, S. J. Nah, and K. M. Lee, "Enhanced deep residual networks for single image super-resolution," in Proc. IEEE Conference on Computer Vision and Pattern Recognition Workshop (IEEE, 2017), pp. 136-144.
  16. L. Xu, J. Ren, C. Liu, and J. Jia, "Deep convolutional neural network for image deconvolution," in Proc. IEEE Conference on Advances in Neural Information Processing Systems (IEEE, 2014), pp. 1790-1798.
  17. J. Xie, L. Xu, and E. Chen, "Image denoising and inpainting with deep neural networks," in Proc. IEEE Conference on Advances in Neural Information Processing Systems (IEEE, 2012), pp. 341-349.
  18. K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, "Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising," IEEE Trans. Image Process. 26, 3142-3155 (2017). https://doi.org/10.1109/TIP.2017.2662206
  19. R. Timofte, E. Agustsson, L. V. Gool, M.-H. Yang, L. Zhang, B. Lim, S. Son, H. Kim, S. Nah, K. M. Lee, X. Wang, Y. Tian, K. Yu, Y. Zhang, S. Wu, C. Dong, L. Lin, Y. Qiao, C. C. Loy, W. Bae, J. Yoo, Y. Han, J. C. Ye, J.-S. Choi, M. Kim, Y. Fan, J. Yu, W. Han, D. Liu, H. Yu, Z. Wang, H. Shi, X. Wang, T. S. Huang, Y. Chen, K. Zhang, W. Zuo, Z. Tang, L. Luo, S. Li, M. Fu, L. Cao, W. Heng, G. Bui, T. Le, Y. Duan, D. Tao, R. Wang, X. Lin, J. Pang, J. Xu, Y. Zhao, X. Xu, J. Pan, D. Sun, Y. Zhang, X. Song, Y. Dai, X. Qin, X.-P. Huynh, T. Guo, H. S. Mousavi, T. H. Vu, V. Monga, C. Cruz, K. Egiazarian, V. Katkovnik, R. Mehta, A. K. Jain, A. Agarwalla, C. V. S. Praveen, R. Zhou, H. Wen, C. Zhu, Z. Xia, Z. Wang, and Q. Guo, "NTIRE 2017 challenge on single image super-resolution: Method and results," in Proc. IEEE Conference on Computer Vision and Pattern Recognition Workshops (IEEE, 2017), pp. 1110-1121.
  20. D. G. Lowe, "Distinctive image features from scale-invariant keypoints," Int. J. Comput. Vis. 60, 91-110 (2004). https://doi.org/10.1023/B:VISI.0000029664.99615.94
  21. Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, "Deep learning microscopy," Optica 4, 1437-1443 (2017). https://doi.org/10.1364/OPTICA.4.001437
  22. J. Lee, Y. Kim, S. Kim, I. Lee, and H. Pahk, "Real-time application of critical dimension measurement of TFT-LCD pattern using a newly proposed 2D image-processing algorithm," Opt. Lasers Eng. 46, 558-569 (2008). https://doi.org/10.1016/j.optlaseng.2008.01.009
  23. R. M. Haralick, "Digital step edge from zero crossing of second directional derivatives," IEEE Trans. Pattern Anal. Mach. Intell. 6, 58-68 (1984).
  24. S. McHugh, Digital photography tutorials (2005).