
Classification of Feature Points Required for Multi-Frame Based Building Recognition

  • Park, Si-young (Kwangwoon University, Department of Electronics) ;
  • An, Ha-eun (Kwangwoon University, Department of Electronics) ;
  • Lee, Gyu-cheol (Kwangwoon University, Department of Electronics) ;
  • Yoo, Ji-sang (Kwangwoon University, Department of Electronics)
  • Received : 2015.10.01
  • Accepted : 2016.03.23
  • Published : 2016.03.31

Abstract

The extraction of meaningful feature points from an image is directly tied to the performance of the proposed method. In particular, feature points extracted from occlusion regions such as trees or people, or from background regions such as the sky or mountains rather than from the object of interest, are classified as meaningless and degrade the performance of matching and recognition. This paper proposes a new method that uses more than one frame (multi-frames) to classify the feature points required for building recognition, improving the performance of conventional building recognition methods at the matching and recognition stages. First, feature points are extracted with SIFT (scale invariant feature transform) and mismatched feature points are removed. RANSAC (random sample consensus) is then applied to classify the feature points in occlusion regions. Because the classified feature points are obtained through matching, a single feature point can have multiple descriptors, so a procedure for merging these descriptors is also proposed. Experimental results show that the proposed method performs well.
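To make the pipeline described in the abstract concrete, below is a minimal sketch of the frame-to-frame SIFT matching and RANSAC-based inlier/outlier separation, written in Python with OpenCV. It is not the authors' implementation: the function name, the ratio-test threshold, and the reprojection-error threshold are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): SIFT matching between two
# frames, removal of clear mismatches with the ratio test, and RANSAC-based
# separation of points consistent with the dominant building motion from
# occlusion/background points. All thresholds are illustrative assumptions.
import cv2
import numpy as np

def classify_feature_points(frame1, frame2, ratio=0.75, reproj_thresh=5.0):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(frame1, None)
    kp2, des2 = sift.detectAndCompute(frame2, None)
    if des1 is None or des2 is None:
        return [], []  # no feature points detected in one of the frames

    # Ratio test: drop ambiguous (likely mismatched) correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des1, des2, k=2)
    matches = [m for m, n in (p for p in pairs if len(p) == 2)
               if m.distance < ratio * n.distance]
    if len(matches) < 4:
        return [], matches  # too few correspondences to fit a homography

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC homography: inliers follow the dominant (building facade) motion
    # between the two frames; outliers are treated as occlusion or background.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
    if H is None:
        return [], matches
    keep = mask.ravel().astype(bool)
    inliers = [m for m, k in zip(matches, keep) if k]
    outliers = [m for m, k in zip(matches, keep) if not k]
    return inliers, outliers
```

In the paper, the classification is carried out over multiple frames, so a single classified point accumulates several descriptors; consolidating those descriptors into one is a separate step that the sketch above does not cover.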

Keywords

References

  1. J. Li, W. Huang, L. Shao, and N. Allinson, "Building recognition in urban environments: A survey of state-of-the-art and future challenges," Inf. Sci., vol. 277, no. 1, pp. 406-420, Sept. 2014. https://doi.org/10.1016/j.ins.2014.02.112
  2. D. Lowe, "Distinctive image features from scale-invariant keypoints," Int. J. Computer Vision, vol. 60, no. 2, pp. 91-110, Nov. 2004. https://doi.org/10.1023/B:VISI.0000029664.99615.94
  3. Y. Li and L. G. Shapiro, "Consistent line clusters for building recognition in CBIR," in Proc. 16th Int. Conf. Pattern Recognition, vol. 3, pp. 952-956, Aug. 2002.
  4. I. T. Jolliffe, Principal component analysis, 2nd Ed., Springer, 2002.
  5. G. J. McLachlan, Discriminant analysis and statistical pattern recognition, Wiley-Interscience, New York, 1992.
  6. X. He and P. Niyogi, "Locality preserving projections," in Proc. Conf. Advances in Neural Inf. Process. Syst. (NIPS), 2003.
  7. D. Cai, X. He, and J. Han, Using graph model for face analysis, Department of Computer Science, University of Illinois at Urbana-Champaign, Sept. 2005.
  8. D. Cai, X. He, and J. Han, "Semi-supervised discriminant analysis," in Proc. IEEE 11th Int. Conf. Computer Vision, pp. 1-7, Oct. 2007.
  9. J. H. Heo and M. C. Lee, "Building recognition using image segmentation and color features," J. Korea Robotics Soc., vol. 8, no. 2, pp. 82-91, Jun. 2013. https://doi.org/10.7746/jkros.2013.8.2.082
  10. W. Zhang and J. Kosecka, "Localization based on building recognition," in Proc. IEEE Computer Soc. Conf. Computer Vision and Pattern Recognition (CVPR) Workshops, Jun. 2005.
  11. V. Vapnik, The nature of statistical learning theory, Springer, 1995.
  12. H. Trinh, D. N. Kim, and K. H. Jo, "Facet-based multiple building analysis for robot intelligence," Applied Mathematics and Computation, vol. 205, no. 2, pp. 537-549, Nov. 2008. https://doi.org/10.1016/j.amc.2008.05.059
  13. M. A. Fischler and R. C. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Commun. ACM, vol. 24, no. 6, pp. 381-395, Jun. 1981. https://doi.org/10.1145/358669.358692
  14. H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool, "Speeded-up robust features (SURF)," Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346-359, Jun. 2008.
  15. S. M. Smith and J. M. Brady, "SUSAN - a new approach to low level image processing," Int. J. Computer Vision, vol. 23, no. 1, pp. 45-78, May 1997. https://doi.org/10.1023/A:1007963824710
  16. E. Rosten and T. Drummond, "Machine learning for high-speed corner detection," in Proc. Eur. Conf. Computer Vision (ECCV), pp. 430-443, Graz, Austria, May 2006.
  17. L. M. J. Florack, B. M. T. H. Romeny, J. J. Koenderink, and M. A. Viergever, "General intensity transformations and differential invariants," J. Mathematical Imaging and Vision, vol. 4, no. 2, pp. 171-187, May 1994. https://doi.org/10.1007/BF01249895
  18. E. Dubrofsky, Homography estimation, Univ. of British Columbia, Mar. 2009.
  19. M. M. Hossain, H. J. Lee, and J. S. Lee, "Fast image stitching for video stabilization using SIFT feature points," J. KICS, vol. 39, no. 10, pp. 957-966, Oct. 2014.
  20. B. W. Chung, K. Y. Park, and S. Y. Hwang, "A fast and efficient Haar-like feature selection algorithm for object detection," J. KICS, vol. 38, no. 6, pp. 486-497, Jun. 2013.
  21. J. H. Hong, B. C. Ko, and J. Y. Nam, "Human action recognition in still image using weighted bag-of-features and ensemble decision trees," J. KICS, vol. 38, no. 1, pp. 1-9, Jan. 2013.