LiDAR Sensor based Object Classification System for Delivery Robot Applications

  • Woo-Jin Park (School of Electronics and Information Engineering, Korea Aerospace University) ;
  • Jeong-Gyu Lee (School of Electronics and Information Engineering, Korea Aerospace University) ;
  • Chae-woon Park (School of Electronics and Information Engineering, Korea Aerospace University) ;
  • Yunho Jung (School of Electronics and Information Engineering, Korea Aerospace University)
  • Received : 2024.09.03
  • Accepted : 2024.09.25
  • Published : 2024.09.30

Abstract


In this paper, we propose a lightweight object classification system using a LiDAR sensor for delivery service robot applications. The 3D point cloud data is encoded into a 2D pseudo image using a Pillar Feature Network (PFN) and then passed through a lightweight classification network based on Depthwise Separable Convolutional Neural Networks (DS-CNN). The implementation results show that the designed classification network has 9.08K parameters and requires 3.49M Multiply-Accumulate (MAC) operations, while achieving a classification accuracy of 94.94%.
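The abstract does not specify the exact layer configuration, channel widths, pillar grid size, or number of object classes. The following PyTorch sketch is therefore only an illustration of the kind of depthwise separable convolution (DS-CNN) classifier described above, operating on a PFN-style 2D pseudo image; the class names (`DepthwiseSeparableConv`, `PseudoImageClassifier`) and all layer sizes are assumptions, not the paper's actual design.

```python
# Minimal sketch of a DS-CNN classifier over a 2D pseudo image.
# All sizes (in_ch=64, 32x32 grid, num_classes=4) are illustrative assumptions.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 conv (one filter per channel) followed by a 1x1 pointwise conv."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class PseudoImageClassifier(nn.Module):
    """Small DS-CNN classifier applied to a PFN-style 2D pseudo image."""
    def __init__(self, in_ch=64, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            DepthwiseSeparableConv(in_ch, 32, stride=2),
            DepthwiseSeparableConv(32, 64, stride=2),
            nn.AdaptiveAvgPool2d(1),  # global average pooling
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

if __name__ == "__main__":
    # Example: a batch of pseudo images with shape (N, C, H, W).
    model = PseudoImageClassifier(in_ch=64, num_classes=4)
    pseudo_image = torch.randn(2, 64, 32, 32)
    logits = model(pseudo_image)  # shape: (2, 4)
    print(logits.shape)
```

The parameter savings highlighted in the abstract come from this factorization: a standard k×k convolution with C_in input and C_out output channels needs k²·C_in·C_out weights, whereas a depthwise separable layer needs only k²·C_in + C_in·C_out.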

Keywords

Acknowledgement

This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) (No. 2022-0-00960), and the CAD tools were supported by IDEC.
