Motion Recognition System for Mobile Robot in Logistics Center

  • Trinh Tran Vinh An (Department of Ocean Convergence Logistics Innovation, Korea Maritime and Ocean University) ;
  • Bui Minh Hau (Department of Ocean Convergence Logistics Innovation, Korea Maritime and Ocean University) ;
  • Hwan-Seong Kim (Dept. of Logistics, Korea Maritime and Ocean University)
  • Received : 2024.10.11
  • Accepted : 2024.10.30
  • Published : 2024.10.31

Abstract

Motion recognition systems are crucial for mobile robots, particularly in logistics centers, where they must interact with other robots and workers. Mobile robots require the capability to recognize the movement patterns of surrounding objects to avoid collisions and maintain safety. Combined with computer vision, machine learning, and deep learning techniques, cameras have become powerful sensors for object recognition and tracking. In this research, the YOLOv8 (You Only Look Once) model is used to detect and track object movements. The training data were gathered from videos of people and boxes in a lab setting. The system operates in real time to meet the collision-avoidance requirements of mobile robots. The data captured by the camera are processed to analyze the movement patterns of the detected objects. The results demonstrate successful real-time motion recognition and the ability to issue safety alerts when a tracked object enters the robot's safety perimeter.
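
The pipeline described above (YOLOv8 detection and tracking, movement-pattern analysis, and a safety-perimeter alert) can be outlined as a minimal sketch. It assumes the Ultralytics YOLOv8 Python API with its built-in tracker and an OpenCV webcam source; the robot position, safety radius, motion threshold, and pretrained weights below are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the described pipeline, assuming the Ultralytics YOLOv8
# Python API and a webcam as the camera source. The robot position, safety
# radius, and motion threshold are illustrative placeholders.
from collections import defaultdict, deque

import cv2
import numpy as np
from ultralytics import YOLO

SAFETY_RADIUS_PX = 200     # hypothetical safety perimeter around the robot, in pixels
MOTION_THRESHOLD_PX = 5.0  # mean per-frame displacement above which an object counts as "moving"

model = YOLO("yolov8n.pt")                       # pretrained weights; the paper trains on people and boxes
history = defaultdict(lambda: deque(maxlen=10))  # track ID -> recent box centers

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # Detect and track objects in the current frame (ByteTrack is the default tracker).
    result = model.track(frame, persist=True, verbose=False)[0]
    h, w = frame.shape[:2]
    robot_pos = np.array([w / 2, h])  # assume the robot sits at the bottom-center of the image

    if result.boxes.id is not None:
        ids = result.boxes.id.int().tolist()
        boxes = result.boxes.xyxy.cpu().numpy()
        for track_id, (x1, y1, x2, y2) in zip(ids, boxes):
            center = np.array([(x1 + x2) / 2, (y1 + y2) / 2])
            history[track_id].append(center)

            # Classify the movement pattern from recent center displacements.
            pts = np.array(history[track_id])
            moving = len(pts) > 1 and np.linalg.norm(np.diff(pts, axis=0), axis=1).mean() > MOTION_THRESHOLD_PX

            # Raise a safety alert when a tracked object enters the safety perimeter.
            if np.linalg.norm(center - robot_pos) < SAFETY_RADIUS_PX:
                state = "moving" if moving else "static"
                print(f"ALERT: {state} object {track_id} inside safety perimeter")

    cv2.imshow("motion recognition", result.plot())
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

In this sketch the safety perimeter is simply a pixel radius around an assumed robot position in the image; the actual system may define the perimeter differently, for example in metric coordinates derived from the camera geometry.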

Acknowledgement

This research was supported by the "Regional Innovation Strategy (RIS)" program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (MOE) (2023RIS-007).
