Tactile Sensor-based Object Recognition Method Robust to Gripping Conditions Using Fast Fourier Convolution Algorithm

  • Received : 2022.03.24
  • Accepted : 2022.04.29
  • Published : 2022.08.31

Abstract

Accurate object recognition is important for precise and accurate manipulation. Various types of sensors can be used to enhance recognition performance, and the data they produce generally have a high sampling rate. RNN-based models have commonly been used to handle and analyze such time-series sensor data, but they require an excessive number of parameters. CNN-based models can also analyze time-series input; however, their early layers have small receptive fields, so the architecture must be made deeper and heavier to extract useful global features. Both traditional approaches therefore need a huge number of learnable parameters. A recent study shows that Fast Fourier Convolution (FFC) can overcome these limitations: the FFC operator extracts global features from the first hidden layer, making it effective for feature extraction from sensor data with a high sampling rate. In this paper, we propose an algorithm that recognizes objects using tactile sensor data and an FFC-based model. To verify the proposed model, data were acquired from 11 types of objects; we collected pressure, current, and position data while the gripper grasped each object with a random force. As a result, accuracy improved from 84.66% to 91.43% when the proposed FFC-based model was used instead of the traditional model.
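The key property of FFC cited above is that a Fourier transform gives every output element a global receptive field in a single layer, rather than after many stacked convolutions. The following is a minimal sketch (not the paper's implementation) of the spectral branch of such a layer on 1-D sensor signals, using NumPy with hypothetical per-frequency weights:

```python
import numpy as np

def spectral_branch(x, w_real, w_imag):
    """Global spectral filtering along the time axis.

    x: (batch, channels, time) real-valued sensor signal
    w_real, w_imag: (channels, time // 2 + 1) learnable per-frequency weights
    """
    spec = np.fft.rfft(x, axis=-1)                      # to frequency domain
    spec = spec * (w_real + 1j * w_imag)                # pointwise spectral filter
    return np.fft.irfft(spec, n=x.shape[-1], axis=-1)   # back to time domain

# Toy input shaped like the paper's setting: 3 signal channels
# (e.g. pressure, current, position) sampled over 128 time steps.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3, 128))
w_r = rng.standard_normal((3, 65))   # 128-point rFFT -> 65 frequency bins
w_i = rng.standard_normal((3, 65))
y = spectral_branch(x, w_r, w_i)
print(y.shape)  # (4, 3, 128)
```

Because the filter acts on every frequency bin, each output sample depends on the entire input window, which is why FFC can capture global structure from the first hidden layer. A full FFC layer additionally splits channels into a local (ordinary convolution) path and this global path and fuses them.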

Keywords

Acknowledgement

This study is part of the research project "Development of core machinery technologies for autonomous operation and manufacturing (NK236E)", which has been supported by a grant from the National Research Council of Science & Technology under the R&D Program of the Ministry of Science, ICT and Future Planning. This research was also financially supported by the Institute of Civil-Military Technology Cooperation, funded by the Defense Acquisition Program Administration and the Ministry of Trade, Industry and Energy of the Korean government, under grant No. 19-CM-GU-01.

References

  1. S. S. A. Zaidi, M. S. Ansari, A. Aslam, N. Kanwal, M. Asghar, and B. Lee, "A survey of modern deep learning based object detection models," Digital Signal Processing, vol. 126, no. 30, June, 2022, DOI: 10.1016/j.dsp.2022.103514.
  2. S. Chatterjee, F. H. Zunjani, and G. C. Nandi, "Real-time object detection and recognition on low-compute humanoid robots using deep learning," 2020 6th International Conference on Control, Automation and Robotics (ICCAR), pp. 202-208, Singapore, 2020, DOI: 10.1109/ICCAR49639.2020.9108054.
  3. Q. Bai, S. Li, J. Yang, Q. Song, Z. Li, and X. Zhang, "Object detection recognition and robot grasping based on machine learning: A survey," IEEE Access, vol. 8, 2020, DOI: 10.1109/ACCESS.2020.3028740.
  4. M. Zambelli, Y. Aytar, F. Visin, Y. Zhou, and R. Hadsell, "Learning rich touch representations through cross-modal self-supervision," arXiv preprint arXiv:2101.08616, 2021, DOI: 10.48550/arXiv.2101.08616.
  5. A. H. Wei and B. Y. Chen, "Robotic object recognition and grasping with a natural background," International Journal of Advanced Robotic Systems, vol. 17, no. 2, 2020, DOI: 10.1177/1729881420921102.
  6. X. Chen and J. Guhl, "Industrial robot control with object recognition based on deep learning," Procedia CIRP, vol. 76, pp. 149-154, 2018, DOI: 10.1016/j.procir.2018.01.021.
  7. E. Martinez-Martin and A. P. del Pobil, "Object detection and recognition for assistive robots: Experimentation and implementation," IEEE Robotics & Automation Magazine, vol. 24, no. 3, pp. 123-138, 2017, DOI: 10.1109/MRA.2016.2615329.
  8. S. Liu, H. Xu, Q. Li, F. Zhang, and K. Hou, "A Robot Object Recognition Method Based on Scene Text Reading in Home Environments," Sensors, vol. 21, no. 5, 2021, DOI: 10.3390/s21051919.
  9. A. Yamaguchi and C. G. Atkeson, "Combining finger vision and optical tactile sensing: Reducing and handling errors while cutting vegetables," 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), pp. 1045-1051, Cancun, Mexico, 2016, DOI: 10.1109/HUMANOIDS.2016.7803400.
  10. Z.-Q. Zhao, P. Zheng, S.-T. Xu, and X. Wu, "Object detection with deep learning: A review," IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 11, Nov., 2019, DOI: 10.1109/TNNLS.2018.2876865.
  11. P. Lang, X. Fu, M. Martorella, J. Dong, R. Qin, X. Meng, and M. Xie, "A comprehensive survey of machine learning applied to radar signal processing," arXiv preprint arXiv:2009.13702, 2020, DOI: 10.48550/arXiv.2009.13702.
  12. J. Shabbir and T. Anwer, "A survey of deep learning techniques for mobile robot applications," arXiv preprint arXiv:1803.07608, 2018, DOI: 10.48550/arXiv.1803.07608.
  13. J.-J. Kim, D.-Y. Koh, and J. Park, "Obstacle Avoidance for Mobile Robots Using End-to-End Learning," Journal of Institute of Control, Robotics and Systems, vol. 25, no. 6, pp. 541-545, 2019, DOI: 10.5302/J.ICROS.2019.19.0024.
  14. L. Chi, B. Jiang, and Y. Mu, "Fast fourier convolution," Advances in Neural Information Processing Systems, vol. 33, pp. 4479-4488, 2020, [Online], https://papers.nips.cc/paper/2020/file/2fd5d41ec6cfab47e32164d5624269b1-Paper.pdf.
  15. ROBOTIS Co. Ltd., [Online], https://www.robotis.com, Accessed: March 22, 2022.
  16. Pressure Profile Systems, Inc. (PPS), [Online], https://pressureprofile.com, Accessed: March 22, 2022.
  17. G. D. Bergland, "A guided tour of the fast Fourier transform," IEEE Spectrum, vol. 6, no. 7, pp. 41-52, July, 1969, DOI: 10.1109/MSPEC.1969.5213896.
  18. K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 2016, DOI: 10.1109/CVPR.2016.90.