
Markerless camera pose estimation framework utilizing construction material with standardized specification

  • Harim Kim (Department of Civil, Environmental and Architectural Engineering, Korea University) ;
  • Heejae Ahn (Department of Civil, Environmental and Architectural Engineering, Korea University) ;
  • Sebeen Yoon (Department of Architectural Engineering, Seoul National University of Science and Technology) ;
  • Taehoon Kim (Department of Architectural Engineering, Seoul National University of Science and Technology) ;
  • Thomas H.-K. Kang (Department of Architecture and Architectural Engineering, Seoul National University) ;
  • Young K. Ju (Department of Civil, Environmental and Architectural Engineering, Korea University) ;
  • Minju Kim (Department of Construction Management, University of Washington) ;
  • Hunhee Cho (Department of Civil, Environmental and Architectural Engineering, Korea University)
  • Received : 2023.12.03
  • Accepted : 2024.03.12
  • Published : 2024.05.25

Abstract

In the rapidly advancing landscape of computer vision (CV) technology, there is growing interest in its integration with the construction industry. Camera calibration is the process of deriving the intrinsic and extrinsic parameters that govern how 3D real-world coordinates are projected onto the 2D image plane; the intrinsic parameters are internal characteristics of the camera, while the extrinsic parameters are external factors such as the camera's position and rotation. Camera pose estimation, or extrinsic calibration, which estimates the extrinsic parameters, is essential for CV applications in construction because it supports indoor navigation of construction robots and field monitoring through the recovery of depth information. Traditionally, camera pose estimation has relied on target objects such as markers or patterns. However, these marker- or pattern-based methods are often time-consuming because a target object must be installed before estimation can take place. As a solution to this challenge, this study introduces a novel framework that estimates camera pose using standardized materials commonly found on construction sites, such as concrete forms. The proposed framework obtains 3D real-world coordinates by referring to construction materials of known specification, extracts the corresponding 2D coordinates on the image plane through keypoint detection, and recovers the camera's pose through the perspective-n-point (PnP) method, which computes the extrinsic parameters by matching 3D-2D coordinate pairs. This framework represents a substantial advancement in that it streamlines the extrinsic calibration process, thereby potentially improving the efficiency of CV technology deployment and data collection on construction sites. The approach holds promise for expediting and optimizing various construction-related tasks by automating and simplifying the calibration procedure.
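As background to the abstract's description of calibration, the projection of a 3D world point onto the 2D image plane follows the standard pinhole camera model, in which the intrinsic matrix K and the extrinsic parameters [R | t] appear explicitly (this is general textbook notation, not a formulation specific to the paper):

```latex
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \underbrace{\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}}_{K\ \text{(intrinsic)}}
\underbrace{\begin{bmatrix} R & \mathbf{t} \end{bmatrix}}_{\text{extrinsic}}
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}
```

The PnP step the abstract outlines can be sketched with OpenCV's solvePnP. The sketch below is illustrative only: the 600 x 1200 mm panel corners stand in for a standardized concrete form, the image keypoints and intrinsic matrix are hypothetical values, and the paper's actual keypoint detector is not reproduced here.

```python
# Minimal sketch of the PnP step described in the abstract, using OpenCV.
# All numeric values are illustrative assumptions, not the authors' data.
import numpy as np
import cv2

# 3D object points (mm): four corners of a flat, standardized form panel
# (e.g., a 600 x 1200 mm panel; dimensions assumed for illustration).
object_points = np.array([
    [0.0,      0.0, 0.0],
    [600.0,    0.0, 0.0],
    [600.0, 1200.0, 0.0],
    [0.0,   1200.0, 0.0],
], dtype=np.float64)

# 2D image points (px): the corresponding keypoints detected in the image.
image_points = np.array([
    [320.0, 240.0],
    [520.0, 250.0],
    [510.0, 640.0],
    [310.0, 630.0],
], dtype=np.float64)

# Intrinsic matrix K, assumed known from a prior intrinsic calibration.
K = np.array([
    [800.0,   0.0, 320.0],
    [  0.0, 800.0, 240.0],
    [  0.0,   0.0,   1.0],
])
dist_coeffs = np.zeros(5)  # assume negligible lens distortion

# Solve for the extrinsic parameters (rotation and translation).
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist_coeffs)
assert ok

# Recover the camera position in world coordinates: C = -R^T t.
R, _ = cv2.Rodrigues(rvec)
camera_position = -R.T @ tvec.ravel()
print("Camera position (mm):", camera_position)
```

Given at least four matched 3D-2D pairs, solvePnP returns the rotation and translation mapping world coordinates into the camera frame; inverting that transform (C = -R^T t) yields the camera position, which is the pose information the proposed framework targets.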

Keywords

Acknowledgments

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. NRF-2021R1A5A1032433).
