A New Calibration of 3D Point Cloud using 3D Skeleton

  • Park, Byung-Seo (Kwangwoon University, Department of Electronic Materials Engineering) ;
  • Kang, Ji-Won (Kwangwoon University, Department of Electronic Materials Engineering) ;
  • Lee, Sol (Kwangwoon University, Department of Electronic Materials Engineering) ;
  • Park, Jung-Tak (Kwangwoon University, Department of Electronic Materials Engineering) ;
  • Choi, Jang-Hwan (Kwangwoon University, Department of Electronic Materials Engineering) ;
  • Kim, Dong-Wook (Kwangwoon University, Department of Electronic Materials Engineering) ;
  • Seo, Young-Ho (Kwangwoon University, Department of Electronic Materials Engineering)
  • Received : 2021.04.06
  • Accepted : 2021.05.14
  • Published : 2021.05.30

Abstract

This paper proposes a new technique for calibrating a multi-view RGB-D camera system using a 3D (three-dimensional) skeleton. Calibrating a multi-view camera system requires feature points that are consistent across views, and high-accuracy calibration requires that those feature points be accurate. We use the human skeleton as the source of feature points: skeletons can now be obtained easily with state-of-the-art pose estimation algorithms. Specifically, we propose an RGB-D-based calibration algorithm that uses the joint coordinates of the 3D skeleton produced by a pose estimation algorithm as feature points. Because each camera captures only a partial view of the body, the skeleton predicted from any single view may be incomplete. After the many incomplete skeletons are efficiently merged into a single skeleton, the multi-view cameras can be calibrated by using the merged skeleton to compute the camera transformation matrices. To increase calibration accuracy, skeletons accumulated over time are used in the optimization. We demonstrate through experiments that a multi-view camera system can be calibrated using a large number of incomplete skeletons.
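
Below is a minimal sketch of the core step described above: estimating the rigid transform that maps one camera's 3D skeleton joints into a reference camera's coordinate system. It is not the authors' implementation; the array shapes, the per-joint confidence threshold, and the closed-form Kabsch (SVD) solver are illustrative assumptions. Joints missing or unreliable in either view are discarded, and valid joints from several frames are stacked before a single solve, loosely mirroring the temporal accumulation mentioned in the abstract.

```python
# Illustrative sketch only: skeleton-based extrinsic calibration between two
# RGB-D cameras using paired 3D joint positions. Names and shapes are assumptions.
import numpy as np

def rigid_transform(src, dst):
    """Kabsch/SVD solution for R, t minimizing sum ||R @ src_i + t - dst_i||^2."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection solutions
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def calibrate_from_skeletons(joints_cam, joints_ref, conf_cam, conf_ref, thr=0.5):
    """
    joints_cam, joints_ref : (frames, joints, 3) 3D joint positions from the
                             camera to calibrate and from the reference camera.
    conf_cam, conf_ref     : (frames, joints) per-joint confidences; a joint is
                             used only if both cameras see it reliably.
    Returns a 4x4 matrix mapping camera coordinates into reference coordinates.
    """
    valid = (conf_cam > thr) & (conf_ref > thr)      # mask out incomplete joints
    src, dst = joints_cam[valid], joints_ref[valid]  # (N, 3) stacked over frames
    R, t = rigid_transform(src, dst)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Synthetic check: a known rotation/translation between the views is recovered.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cam = rng.normal(size=(10, 25, 3))             # 10 frames, 25 joints
    angle = np.deg2rad(30.0)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
    ref = cam @ R_true.T + np.array([0.5, -0.2, 1.0])
    conf = np.ones((10, 25))
    print(np.round(calibrate_from_skeletons(cam, ref, conf, conf), 3))
```

In a full pipeline this solve would be repeated for every camera against a chosen reference camera, and the result would then be refined as additional skeleton frames arrive.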

Keywords

Acknowledgement

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the government (Ministry of Education) in 2021 (NRF-2018R1D1A1B0704322013). This research was also supported by the 2021 University Innovation Support Program of Kwangwoon University.
