Obstacle Avoidance of Mobile Robot Using Reinforcement Learning in Virtual Environment

  • Lee, Jong-lark (Division of Cyber Security, YeungNam University College)
  • Received : 2021.10.18
  • Accepted : 2021.12.05
  • Published : 2021.12.31

Abstract

In order to apply reinforcement learning to a robot in a real environment, simulation in a virtual environment is unavoidable because a large number of training iterations is required. In addition, it is difficult to apply a computationally heavy learning algorithm to a robot with low-spec hardware. In this study, ML-Agents, the reinforcement learning framework provided by Unity, was used as the virtual simulation environment in order to apply reinforcement learning to the obstacle collision avoidance problem of a mobile robot with low-spec hardware. DQN, supported by ML-Agents, was adopted as the reinforcement learning algorithm, and when the trained policy was transferred to the real robot, collisions occurred two or fewer times per minute.
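For illustration only, the DQN update underlying this kind of training can be sketched in a few lines. This is not the paper's code: the eight distance-sensor observations, the three discrete actions (forward, turn left, turn right), the network size, and all hyperparameters below are assumptions made for the example.

```python
# Minimal DQN update sketch (illustrative; not the paper's code).
# Assumed setup: the robot observes 8 distance-sensor readings and
# chooses among 3 discrete actions (forward, turn left, turn right).
import torch
import torch.nn as nn

N_OBS, N_ACT, GAMMA = 8, 3, 0.99  # assumed sizes and discount factor

q_net = nn.Sequential(nn.Linear(N_OBS, 64), nn.ReLU(), nn.Linear(64, N_ACT))
target_net = nn.Sequential(nn.Linear(N_OBS, 64), nn.ReLU(), nn.Linear(64, N_ACT))
target_net.load_state_dict(q_net.state_dict())  # target starts as a copy
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def dqn_step(s, a, r, s_next, done):
    """One gradient step on a batch of (s, a, r, s') transitions.
    done is 1.0 where the episode ended, e.g. on a collision
    (which would carry a negative reward)."""
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)  # Q(s, a) of taken actions
    with torch.no_grad():
        # Bellman target: r + gamma * max_a' Q_target(s', a'), cut off at episode end
        target = r + GAMMA * (1.0 - done) * target_net(s_next).max(dim=1).values
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example call on a random batch of 32 transitions:
B = 32
dqn_step(torch.randn(B, N_OBS), torch.randint(0, N_ACT, (B,)),
         torch.randn(B), torch.randn(B, N_OBS), torch.zeros(B))
```

In a real setup the transitions would come from a replay buffer filled by the agent acting in the Unity scene, and the target network would be periodically re-synchronized with the online network.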

Acknowledgement

This paper was supported by a 2021 research grant from Yeungnam University College.
