• Title/Abstract/Keyword: Information-Missing Patch

Search results: 2 items (processing time 0.017 s)

Interpretation of Real Information-missing Patch of Remote Sensing Image with Kriging Interpolation of Spatial Statistics

  • Yiming, Feng;Xiangdong, Lei;Yuanchang, Lu
• Korean Society of Remote Sensing: Conference Proceedings
    • /
• Korean Society of Remote Sensing, 2003: Proceedings of ACRS 2003 ISRS
    • /
    • pp.1479-1481
    • /
    • 2003
  • The aim of this paper is to interpret real information-missing patches in remote sensing imagery using the kriging interpolation technique of spatial statistics. The TM image of the Jingouling Forest Farm of Wangqing Forestry Bureau, Northeast China, acquired on 1 July 1997, served as the test material. After classifying the TM image, the information-missing patch was interpolated by kriging under the image-processing software ERDAS and the geographic information system software Arc/Info. The interpolation results passed a precision examination. This paper thus provides a method and means for interpreting information-missing patches in imagery.

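The abstract above applies ordinary kriging to fill missing pixels. As a minimal standalone sketch (not the paper's ERDAS/Arc-Info workflow), the idea can be shown with a linear variogram γ(h) = slope·h: build the kriging system from pairwise semivariances among known pixels, add a Lagrange-multiplier row so the weights sum to one, and solve for the weights of the missing pixel. All names and sample values below are illustrative assumptions.

```python
import numpy as np

def ordinary_kriging(coords, values, target, slope=1.0):
    """Estimate the value at `target` from known (coords, values) samples,
    assuming a linear variogram gamma(h) = slope * h."""
    coords = np.asarray(coords, float)
    n = len(coords)
    # Pairwise semivariances between the known samples.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = slope * d
    K[:n, n] = 1.0  # Lagrange-multiplier column: forces the
    K[n, :n] = 1.0  # kriging weights to sum to one.
    # Semivariances from each known sample to the missing pixel.
    k = np.ones(n + 1)
    k[:n] = slope * np.linalg.norm(coords - np.asarray(target, float), axis=1)
    w = np.linalg.solve(K, k)[:n]  # kriging weights
    return float(w @ np.asarray(values, float))

# Four known pixel values around a missing pixel at (1, 1);
# by symmetry each neighbor gets weight 0.25, so the estimate is 13.0.
est = ordinary_kriging([(0, 0), (2, 0), (0, 2), (2, 2)],
                       [10.0, 12.0, 14.0, 16.0], (1, 1))
```

In practice the variogram model (spherical, exponential, etc.) and its parameters would be fitted to the image class being interpolated rather than fixed in advance.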

ISFRNet: A Deep Three-stage Identity and Structure Feature Refinement Network for Facial Image Inpainting

  • Yan Wang;Jitae Shin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
• Vol. 17, No. 3
    • /
    • pp.881-895
    • /
    • 2023
  • Modern deep-learning-based image inpainting techniques have achieved remarkable performance, and increasing effort is directed at repairing larger and more complex missing areas; this remains challenging, especially for facial images. For a face image with a very large missing area, few valid pixels remain, yet a human observer can still imagine the complete face. It is important to simulate this capability while preserving the identity features of the face as much as possible. To this end, we propose a three-stage network, the identity and structure feature refinement network (ISFRNet), built from 1) a pre-trained pSp-styleGAN model that generates a highly realistic face image with rich structural features; 2) a shallow network with a small receptive field; and 3) a modified U-Net with two encoders and one decoder, which has a large receptive field. We evaluate with peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), L1 loss, and learned perceptual image patch similarity (LPIPS). When the missing region covers 20%-40% of the image, these four scores are 28.12, 0.942, 0.015 and 0.090, respectively; when it covers 40%-60%, they are 23.31, 0.840, 0.053 and 0.177. Our inpainting network not only recovers face identity features well but also achieves state-of-the-art performance compared with other multi-stage refinement models.
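Two of the four metrics the abstract reports, PSNR and mean L1 error, have simple closed forms. The sketch below shows the conventional definitions for images scaled to [0, 1]; the array names and sample values are illustrative, not taken from the paper's code (SSIM and LPIPS need dedicated libraries and are omitted).

```python
import numpy as np

def psnr(gt, pred, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((gt - pred) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def l1_loss(gt, pred):
    """Mean absolute error, the 'L1 loss' column of the reported metrics."""
    return float(np.mean(np.abs(gt - pred)))

# A reconstruction uniformly off by 0.01 from a flat ground truth:
gt = np.full((8, 8), 0.5)
pred = gt + 0.01
# psnr(gt, pred) → 40.0 dB; l1_loss(gt, pred) → 0.01
```

Higher PSNR and lower L1/LPIPS are better, which is why the 20%-40% masks (28.12 dB, 0.015 L1) score better than the 40%-60% masks (23.31 dB, 0.053 L1).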