PCA-Based Feature Reduction for Depth Estimation


  • Shin, Sung-Sik (Division of Computer Science and Engineering, Chonbuk National University) ;
  • Gwun, Ou-Bong (Division of Computer Science and Engineering, Chonbuk National University)
  • Received : 2010.04.05
  • Accepted : 2010.04.30
  • Published : 2010.05.25

Abstract

This paper discusses a method that enhances the accuracy of depth estimation from a single image through PCA (Principal Component Analysis)-based feature reduction in a learning algorithm. To estimate the depth of an image, features such as the energy of pixels and their gradients are extracted, and these features and their relationships are used for depth estimation. Many such features are obtained by various filter operations; if all of them are used equally, without considering their contribution to depth estimation, the accuracy of the estimation decreases. This paper proposes a method that improves both the accuracy and the processing speed of depth estimation by using PCA to measure the contribution of each feature. Experiments show that the proposed method, using only 30% of the feature vector, is more accurate (by 0.4% on average, and up to 2.5%) than using all of the image features for depth estimation.

This paper describes a learning-based method that improves the accuracy of depth information estimated from a single still image by reducing, via PCA (Principal Component Analysis), the feature information used for the estimation. To estimate depth from a still image, features such as image energy values and gradients are extracted, and the relationships among these features are used to estimate the depth of each region. Although image filters yield many features, using all of them without judging their importance actually degrades performance. This paper proposes a method that uses PCA to assess feature importance, reduces the dimensionality of the feature vector accordingly, and accurately estimates depth from a single still image. Experiments with the Stanford University evaluation data show that, using only 30% of the full feature vector, the proposed method improves depth-estimation accuracy by 0.4% on average and by up to 2.5% at most.
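The core idea of the abstract — ranking filter-response features by their contribution with PCA and keeping only a fraction (here 30%) of the feature vector — can be sketched as follows. This is a minimal illustration with synthetic data, not the paper's exact pipeline; the array shapes and the plain eigendecomposition-based PCA are assumptions for the example.

```python
import numpy as np

# Hedged sketch: reduce a bank of per-patch filter responses with PCA,
# keeping the top 30% of principal components by explained variance.
# 500 patches x 20 filter responses -- illustrative sizes only.
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 20))

# Center the data, then diagonalize the covariance matrix.
centered = features - features.mean(axis=0)
cov = centered.T @ centered / (len(centered) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)      # eigh returns ascending order
order = np.argsort(eigvals)[::-1]           # sort descending by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Keep 30% of the dimensions, as in the reported experiment.
k = max(1, int(0.3 * features.shape[1]))
reduced = centered @ eigvecs[:, :k]         # projected feature vectors

print(reduced.shape)
```

The projection keeps the directions of largest variance, which is how PCA ranks feature contribution; the reduced vectors would then feed the depth-estimation learner in place of the full 20-dimensional features.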

Keywords

References

  1. M. Tipping and C. Bishop, "Probabilistic principal component analysis," Journal of the Royal Statistical Society, Vol. 61, No. 3, pp. 611-622, 1999. https://doi.org/10.1111/1467-9868.00196
  2. T. Nagai, T. Naruse, M. Ikehara, and A. Kurematsu, "HMM-based surface reconstruction from single images," In Proc. IEEE Int'l Conf. Image Processing, Vol. 2, pp. 561-564, 2002.
  3. J. Michels, A. Saxena, and A. Y. Ng, "High speed obstacle avoidance using monocular vision and reinforcement learning," International Conference on Machine Learning, Vol. 17, pp. 593-600, Bonn, Germany, August 2005.
  4. A. Saxena, S. H. Chung, and A. Ng, "Learning depth from single monocular images," Advances in Neural Information Processing Systems, Vol. 18, pp. 1161-1168, 2006.
  5. A. V. D. Linde, "PCA-based dimension reduction for splines," Journal of Nonparametric Statistics, Vol. 15, pp. 77-92, 2003. https://doi.org/10.1080/10485250306037
  6. W. Chen, M. Er, and S. Wu, "PCA and LDA in DCT domain," Pattern Recognition Letters, Vol. 26, No. 15, pp. 2474-2482, 2005. https://doi.org/10.1016/j.patrec.2005.05.004
  7. X. Wei and W. B. Croft, "LDA-based document models for ad-hoc retrieval," Proc. of ACM SIGIR, Vol. 15, pp. 178-185, Washington, USA, 2006.
  8. B. Wu, T. L. Ooi, and Z. J. He, "Perceiving distance accurately by a directional process of integrating ground information," Letters to Nature, Vol. 428, pp. 73-77, 2004. https://doi.org/10.1038/nature02350
  9. X. He, R. S. Zemel, and M. A. Carreira-Perpinan, "Multiscale conditional random fields for image labeling," In Proc. CVPR, Vol. 2, pp. 694-702, 2004.
  10. D. Scharstein and R. Szeliski, "A taxonomy and evaluation of dense two-frame stereo correspondence algorithms," International Journal of Computer Vision, Vol. 47, pp. 7-42, 2002. https://doi.org/10.1023/A:1014573219977
  11. S. Das and N. Ahuja, "Performance analysis of stereo, vergence, and focus as depth cues for active vision," IEEE Trans. Pattern Analysis & Machine Intelligence, Vol. 17, pp. 1213-1219, 1995. https://doi.org/10.1109/34.476513
  12. R. Wolke, "Iteratively reweighted least squares: A comparison of several single step algorithms for linear models," BIT Numerical Mathematics, Vol. 32, No. 3, pp. 506-524, 1992. https://doi.org/10.1007/BF02074884