Deep Learning-based Keypoint Filtering for Remote Sensing Image Registration

  • Received : 2020.11.23
  • Accepted : 2021.01.07
  • Published : 2021.01.30

Abstract

In this paper, we propose DLKF (Deep Learning Keypoint Filtering), a deep learning-based keypoint filtering method for accelerating feature-based image registration of remote sensing images. The complexity of conventional feature-based registration arises in the feature matching step. To reduce this complexity, we observe that feature matches concentrate on keypoints detected on man-made structures, and we therefore filter the output of the keypoint detector so that only keypoints detected on such structures are retained. To reduce the number of keypoints while preserving those essential for registration, DLKF keeps keypoints adjacent to structure boundaries and operates on subsampled images. In addition, the image patches fed to the segmentation network are cropped with overlap, which removes the noise that otherwise appears at patch boundaries when the segmented patches are stitched back together. To verify the performance of DLKF, its speed and accuracy were compared with those of conventional keypoint extraction methods on KOMPSAT-3 satellite imagery. Relative to the widely used SIFT-based registration method, the SURF-based method, which improves on SIFT's speed, reduced the number of keypoints by about 18% and was about 2.6 times faster, but its accuracy degraded from 3.42 to 5.43. The proposed DLKF reduced the number of keypoints by about 82% and was about 20.5 times faster, while its accuracy degraded only to 4.51.
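The filtering step described above, keeping only keypoints that fall on man-made structures while preserving those adjacent to structure boundaries, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the `(x, y)` keypoint format, the `margin` parameter, and the pure-NumPy mask dilation are all choices made here for clarity.

```python
import numpy as np

def grow_mask(mask, margin):
    # Grow the binary structure mask by `margin` pixels (Chebyshev
    # distance) so that keypoints lying just outside a structure's
    # boundary survive the filter. np.roll wraps at the image border,
    # which is harmless for interior structures in this sketch.
    out = mask.copy()
    for dy in range(-margin, margin + 1):
        for dx in range(-margin, margin + 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def filter_keypoints(keypoints, structure_mask, margin=1):
    # Keep only (x, y) keypoints lying on, or within `margin` pixels of,
    # the regions a segmentation network labels as man-made structure.
    grown = grow_mask(structure_mask.astype(bool), margin)
    return [(x, y) for (x, y) in keypoints if grown[y, x]]
```

With `margin=0` the filter keeps only keypoints strictly inside the mask; a positive margin realizes the boundary-preservation idea from the abstract.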

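The overlapping patch cropping mentioned in the abstract can likewise be sketched. The idea is that each patch carries extra border context, so the noisy band a segmentation network produces near patch borders can be discarded when the patches are stitched back together. The `patch` and `overlap` sizes here are illustrative assumptions; the abstract does not specify them.

```python
import numpy as np

def crop_overlapping_patches(image, patch=8, overlap=2):
    # Crop patches whose borders overlap by `overlap` pixels. After
    # segmentation, the band near each patch border can be dropped and
    # replaced by the interior of the neighbouring patch. For simplicity
    # this sketch assumes (image_size - patch) is a multiple of the
    # stride, so every patch is full-sized.
    stride = patch - overlap
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - overlap, stride):
        for x in range(0, w - overlap, stride):
            patches.append((y, x, image[y:y + patch, x:x + patch]))
    return patches
```

Adjacent patches share an `overlap`-pixel band, so the stitched prediction never has to rely on pixels closer than `overlap / 2` to any patch border.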