High-quality Texture Extraction for Point Clouds Reconstructed from RGB-D Images


  • Seo, Woong (Department of Computer Science and Engineering, Sogang University) ;
  • Park, Sang Uk (Department of Computer Science and Engineering, Sogang University) ;
  • Ihm, Insung (Department of Computer Science and Engineering, Sogang University)
  • Received : 2018.06.23
  • Accepted : 2018.07.04
  • Published : 2018.07.10

Abstract

When triangular meshes are generated from the point clouds in global space reconstructed through camera pose estimation against captured RGB-D streams, the quality of the resulting meshes generally improves as more triangles are used. However, once the mesh resolution grows beyond a certain threshold, the reconstructed 3D models not only impose significant burdens in memory requirements and rendering cost but also become sensitive to noise in the point-set data caused by the limited precision of RGB-D sensors, producing unsightly artifacts. In this paper, aiming at 3D models suitable for real-time applications, we propose an effective technique that extracts high-quality textures for moderately sized meshes from the captured colors associated with the reconstructed point sets. In particular, we show that, via a simple method based on the mapping between the 3D global space resulting from camera pose estimation and the 2D texture space, textures can be generated effectively for 3D models reconstructed from captured RGB-D image streams.

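As a rough illustration of the mapping described above, the following sketch shows one way colors captured along the estimated camera trajectory could be gathered into a 2D texture for a moderately sized mesh: texel samples on the mesh surface are projected back into the RGB-D frames with the estimated poses and intrinsics, occluded observations are rejected with a depth-consistency test, and the surviving colors are averaged into a texture atlas. This is a minimal sketch under assumed data layouts; the function names (project, extract_texture), the frame dictionary fields, and the occlusion test are illustrative assumptions, not the authors' implementation.

import numpy as np

# Hypothetical sketch (not the paper's code): fill a texture atlas for a simplified
# mesh by projecting texel sample positions into the captured RGB-D frames and
# averaging the observed colors. Colors are assumed to be 8-bit RGB.

def project(point_world, pose_w2c, K):
    # Project a 3D world-space point into one frame.
    # pose_w2c: 4x4 world-to-camera matrix from the pose estimation step.
    # K: 3x3 camera intrinsic matrix.
    # Returns (u, v, depth), or None if the point lies behind the camera.
    p_cam = pose_w2c @ np.append(point_world, 1.0)
    if p_cam[2] <= 0.0:
        return None
    uvw = K @ p_cam[:3]
    return uvw[0] / uvw[2], uvw[1] / uvw[2], p_cam[2]

def extract_texture(texel_positions, texel_uvs, frames, tex_size=1024, depth_eps=0.02):
    # texel_positions: (N, 3) world-space surface samples, one per covered texel.
    # texel_uvs:       (N, 2) texture coordinates in [0, 1) for those samples,
    #                  given by the mesh's UV parameterization.
    # frames: list of dicts with 'color' (H, W, 3), 'depth' (H, W),
    #         'pose_w2c' (4, 4), and 'K' (3, 3).
    atlas = np.zeros((tex_size, tex_size, 3), dtype=np.float64)
    weight = np.zeros((tex_size, tex_size), dtype=np.float64)

    for frame in frames:
        H, W = frame['depth'].shape
        for pos, uv in zip(texel_positions, texel_uvs):
            hit = project(pos, frame['pose_w2c'], frame['K'])
            if hit is None:
                continue
            u, v, d = hit
            x, y = int(round(u)), int(round(v))
            if not (0 <= x < W and 0 <= y < H):
                continue
            # Reject occluded samples: the sensor depth must agree with the
            # projected depth within a small tolerance.
            if abs(frame['depth'][y, x] - d) > depth_eps:
                continue
            tx = min(int(uv[0] * tex_size), tex_size - 1)
            ty = min(int(uv[1] * tex_size), tex_size - 1)
            atlas[ty, tx] += frame['color'][y, x]
            weight[ty, tx] += 1.0

    # Average the accumulated observations per texel.
    mask = weight > 0
    atlas[mask] = atlas[mask] / weight[mask][:, None]
    return atlas.astype(np.uint8)

The simple depth-consistency check stands in for whatever visibility handling the actual method uses; selecting or weighting the contributing frames differently would change the blending behavior.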

