Dynamic 3D Worker Pose Registration for Safety Monitoring in Manufacturing Environment based on Multi-domain Vision System

  • Ji Dong Choi (Korea Institute of Industrial Technology, Kyungpook National University)
  • Min Young Kim (Kyungpook National University)
  • Byeong Hak Kim (Korea Institute of Industrial Technology)
  • Received : 2023.09.29
  • Accepted : 2023.11.20
  • Published : 2023.12.31

Abstract

A single vision system limits the ability to accurately understand the spatial constraints and interactions between dynamic workers and the gantry and collaborative robots operating on a production line. In this paper, we propose a 3D pose registration method for dynamic workers based on a multi-domain vision system for safety monitoring in manufacturing environments. The method uses OpenPose, a deep learning-based pose estimation model, to estimate a worker's dynamic two-dimensional pose in real time and reconstruct it into three-dimensional coordinates. The 3D coordinates reconstructed from each domain of the vision system are aligned with the ICP (Iterative Closest Point) algorithm and registered into a single 3D coordinate system. The proposed method showed effective performance in a manufacturing process environment, with an average registration error of 0.0664 m and an average frame rate of 14.597 frames per second.
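
The registration step outlined in the abstract, aligning the 3D skeleton keypoints reconstructed from each camera domain into one coordinate system with ICP, can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the function names, the brute-force nearest-neighbor matching, and the synthetic joint data below are assumptions made here for clarity.

```python
import numpy as np

def best_fit_transform(A, B):
    """Closed-form (Kabsch/SVD) rigid transform R, t minimizing ||(A @ R.T + t) - B||."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, cB - R @ cA

def icp_register(src, dst, max_iter=50, tol=1e-6):
    """Align point set src (N,3) to dst (M,3); returns total R, t and mean error."""
    R_tot, t_tot = np.eye(3), np.zeros(3)
    cur, prev_err = src.copy(), np.inf
    for _ in range(max_iter):
        # brute-force nearest neighbor in dst for every point of cur
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        nn = d.argmin(axis=1)
        err = d[np.arange(len(cur)), nn].mean()
        R, t = best_fit_transform(cur, dst[nn])
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t   # compose incremental transform
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_tot, t_tot, err

# Hypothetical usage: register skeleton keypoints seen by camera B into camera A's
# frame. Plain ICP only converges from a reasonable initial alignment, so the
# synthetic offset between the two views is kept small here.
np.random.seed(0)
joints_cam_a = np.random.rand(18, 3)          # stand-in for reconstructed keypoints
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
joints_cam_b = joints_cam_a @ R_true.T + np.array([0.05, -0.02, 0.01])
R, t, err = icp_register(joints_cam_b, joints_cam_a)
print(f"mean registration error: {err:.4f} m")
```

Since OpenPose yields the same named joints in every view, correspondences are in fact known, and a single closed-form Kabsch fit (best_fit_transform above) can replace the ICP loop; the iterative variant is useful when detections are noisy or some joints are missing in one view.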

Keywords

Acknowledgement

This study has been conducted with the support of the Korea Institute of Industrial Technology (KITECH). This work was supported by the Industrial Technology Innovation Program (20016970, Development of Cooperative Robot SI (System Integration) Service based on Safety Intelligence) and the Industrial Technology Innovation Program (20018288, Development of 150-ton crawler crane with vision-based intelligent safety management system), both funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea), as well as by the Core Research Institute Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2021R1A6A1A03043144) and the NRF grant funded by the Korea government (MSIT) (No. 2022R1A2C2008133).

References

  1. P. F. Felzenszwalb, D. P. Huttenlocher, "Pictorial Structures for Object Recognition," International Journal of Computer Vision, Vol. 61, No. 1, pp. 55-79, 2005. https://doi.org/10.1023/B:VISI.0000042934.15159.49
  2. P. F. Felzenszwalb, R. B. Girshick, D. McAllester, D. Ramanan, "Object Detection with Discriminatively Trained Part-based Models," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 32, No. 9, pp. 1627-1645, 2010. https://doi.org/10.1109/TPAMI.2009.167
  3. A. Toshev, C. Szegedy, "DeepPose: Human Pose Estimation Via Deep Neural Networks," IEEE Conference on Computer Vision and Pattern Recognition, pp. 1653-1660, 2014.
  4. A. Newell, K. Yang, J. Deng, "Stacked Hourglass Networks for Human Pose Estimation," European Conference on Computer Vision, pp. 483-499, 2016.
  5. Z. Cao, G. Hidalgo, T. Simon, S. E. Wei, Y. Sheikh, "OpenPose: Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 43, No. 1, pp. 172-186, 2019. https://doi.org/10.1109/TPAMI.2019.2929257
  6. K. Sun, B. Xiao, D. Liu, J. Wang, "Deep High-resolution Representation Learning for Human Pose Estimation," IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5693-5703, 2019.
  7. Y. Zhou, X. Wang, X. Xu, L. Zhao, J. Song, "X-HRNet: Towards Lightweight Human Pose Estimation with Spatially Unidimensional Self-Attention," IEEE International Conference on Multimedia and Expo (ICME), pp. 1-6, 2022.
  8. W. Zhao, W. Wang, Y. Tian, "GraFormer: Graph-Oriented Transformer for 3D Pose Estimation," IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 20438-20447, 2022.
  9. Q. Zhao, C. Zheng, M. Liu, P. Wang, C. Chen, "PoseFormerV2: Exploring Frequency Domain for Efficient and Robust 3D Human Pose Estimation," IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8877-8886, 2023.
  10. J. Cai, H. Liu, R. Ding, W. Li, J. Wu, M. Ban, "HTNet: Human Topology Aware Network for 3D Human Pose Estimation," arXiv preprint arXiv:2302.09790, 2023.
  11. J. Shotton, A. Fitzgibbon, M. Cook, T. Sharp, M. Finocchio, R. Moore, A. Kipman, A. Blake, "Real-time Human Pose Recognition in Parts from Single Depth Images," IEEE Conference on Computer Vision and Pattern Recognition, pp. 1297-1304, 2011.
  12. C. Zimmermann, T. Welschehold, C. Dornhege, W. Burgard, T. Brox, "3D Human Pose Estimation in RGBD Images for Robotic Task Learning," IEEE International Conference on Robotics and Automation, 2018.
  13. R. Bashirov, A. Ianina, K. Iskakov, Y. Kononenko, V. Strizhkova, V. Lempitsky, A. Vakhitov, "Real-Time RGBD-Based Extended Body Pose Estimation," IEEE Winter Conference on Applications of Computer Vision, pp. 2806-2815, 2021.
  14. M. Veges, A. Lorincz, "Absolute Human Pose Estimation with Depth Prediction Network," International Joint Conference on Neural Networks, pp. 1-7, 2019.
  15. G. Moon, J. Y. Chang, K. M. Lee, "V2V-PoseNet: Voxel-to-Voxel Prediction Network for Accurate 3D Hand and Human Pose Estimation from a Single Depth Map," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5079-5088, 2018.
  16. R. A. Guler, N. Neverova, I. Kokkinos, "DensePose: Dense Human Pose Estimation in the Wild," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7297-7306, 2018.
  17. G. Rogez, P. Weinzaepfel, C. Schmid, "LCR-Net: Localization-Classification-Regression for Human Pose," IEEE Conference on Computer Vision and Pattern Recognition, pp. 3433-3441, 2017.
  18. OptiTrack Motive. Available online: https://optitrack.com/software/motive/
  19. T. Maruyama, T. Ueshiba, M. Tada, H. Toda, Y. Endo, Y. Domae, Y. Nakabo, T. Mori, K. Suita, "Digital Twin-driven Human Robot Collaboration Using a Digital Human," Sensors, Vol. 21, No. 24, 2021.
  20. V. Weistroffer, F. Keith, A. Bisiaux, C. Andriot, A. Lasnier, "Using Physics-based Digital Twins and Extended Reality for the Safety and Ergonomics Evaluation of Cobotic Workstations," Frontiers in Virtual Reality, Vol. 3, pp. 1-18, 2022. https://doi.org/10.3389/frvir.2022.781830
  21. J. D. Choi, M. Y. Kim, "A Sensor Fusion System with Thermal Infrared Camera and LiDAR for Autonomous Vehicles and Deep Learning Based Object Detection," ICT Express, Vol. 9, No. 2, pp. 222-227, 2023. https://doi.org/10.1016/j.icte.2021.12.016
  22. Y. Park, S. Yun, C. S. Won, K. Cho, K. Um, S. Sim, "Calibration Between Color Camera and 3D LiDAR Instruments with a Polygonal Planar Board," Sensors, Vol. 14, No. 3, pp. 5333-5353, 2014. https://doi.org/10.3390/s140305333
  23. H. Liu, H. Li, X. Liu, J. Luo, S. Xie, Y. Sun, "A Novel Method for Extrinsic Calibration of Multiple RGB-D Cameras Using Descriptor-based Patterns," Sensors, Vol. 19, No. 2, 2019.
  24. S. Lee, J. Yoo, M. Park, J. Kim, S. Kwon, "Robust Extrinsic Calibration of Multiple RGB-D Cameras with Body Tracking and Feature Matching," Sensors, Vol. 21, No. 3, 2021.
  25. N. Gelfand, L. Ikemoto, S. Rusinkiewicz, M. Levoy, "Geometrically Stable Sampling for the ICP Algorithm," IEEE 3D Digital Imaging and Modeling, 2003.
  26. ISO/TS 15066:2016, "Robots and robotic devices - Collaborative robots," https://www.iso.org/standard/62996.html