Markov Decision Process-based Potential Field Technique for UAV Planning

  • Moon, Chaehwan (Department of Aerospace Engineering, Korea Advanced Institute of Science and Technology)
  • Ahn, Jaemyung (Department of Aerospace Engineering, Korea Advanced Institute of Science and Technology)
  • Received : 2021.12.05
  • Accepted : 2021.12.14
  • Published : 2021.12.25

Abstract

This study proposes a methodology for the mission/path planning of an unmanned aerial vehicle (UAV) that combines an artificial potential field with a Markov decision process (MDP). The planning problem is first formulated as an MDP, and a low-resolution solution of the MDP is used to define an artificial potential field, which in turn yields a continuous UAV mission plan. A numerical case study demonstrates the validity of the proposed technique.
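As a rough illustration of the idea described in the abstract (not the authors' implementation), the sketch below solves a small grid MDP by value iteration, bilinearly interpolates the resulting value function into a continuous artificial potential, and generates a continuous path by gradient ascent on that potential. The grid size, discount factor, rewards, threat zone, and start location are hypothetical placeholders chosen only for the example.

```python
import numpy as np

GAMMA = 0.95                                   # discount factor (assumed value)
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # 4-connected moves on the coarse grid


def value_iteration(reward, n_iter=200):
    """Solve the low-resolution grid MDP with deterministic transitions."""
    n, m = reward.shape
    V = np.zeros((n, m))
    for _ in range(n_iter):
        V_new = np.empty_like(V)
        for i in range(n):
            for j in range(m):
                best = -np.inf
                for di, dj in ACTIONS:
                    ni = min(max(i + di, 0), n - 1)
                    nj = min(max(j + dj, 0), m - 1)
                    best = max(best, reward[ni, nj] + GAMMA * V[ni, nj])
                V_new[i, j] = best
        V = V_new
    return V


def potential(V, x, y):
    """Bilinearly interpolate the coarse value function into a continuous potential."""
    n, m = V.shape
    x = float(np.clip(x, 0, n - 1))
    y = float(np.clip(y, 0, m - 1))
    i0, j0 = int(x), int(y)
    i1, j1 = min(i0 + 1, n - 1), min(j0 + 1, m - 1)
    fx, fy = x - i0, y - j0
    return ((1 - fx) * (1 - fy) * V[i0, j0] + fx * (1 - fy) * V[i1, j0]
            + (1 - fx) * fy * V[i0, j1] + fx * fy * V[i1, j1])


def follow_potential(V, start, step=0.1, n_steps=500, eps=1e-3):
    """Generate a continuous path by ascending the interpolated potential."""
    path = [np.array(start, dtype=float)]
    for _ in range(n_steps):
        x, y = path[-1]
        gx = (potential(V, x + eps, y) - potential(V, x - eps, y)) / (2 * eps)
        gy = (potential(V, x, y + eps) - potential(V, x, y - eps)) / (2 * eps)
        grad = np.array([gx, gy])
        norm = np.linalg.norm(grad)
        if norm < 1e-6:                         # flat region reached (e.g., near the goal)
            break
        path.append(path[-1] + step * grad / norm)
    return np.array(path)


if __name__ == "__main__":
    # Hypothetical 10x10 coarse grid: +1 reward at a goal cell, -1 inside a threat zone.
    reward = np.zeros((10, 10))
    reward[8, 8] = 1.0
    reward[4:6, 4:6] = -1.0
    V = value_iteration(reward)
    path = follow_potential(V, start=(1.0, 1.0))
    print(f"{len(path)} waypoints, final position ~ {np.round(path[-1], 2)}")
```

In this sketch the coarse MDP value function plays the role of the attractive potential, while the threat-zone penalty shapes it away from dangerous cells; the continuous plan comes from interpolating between grid cells rather than replanning on a finer MDP.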

Acknowledgement

This paper is based on the master's thesis of the first author (C. Moon) [17], which was originally written in Korean.

References

  1. Waharte, S., & Trigoni, N., Supporting search and rescue operations with UAVs. In 2010 International Conference on Emerging Security Technologies, IEEE, 2010, pp. 142-147.
  2. Choi, U., Jeong, S., & Ahn, J., Autonomous Single UAV Reconnaissance Mission Planning in Multi-Base and Multi-Threat Environment Based on Markov Decision Process. In 2016 KSAS Fall Conference, Jeju, Korea, 2016.
  3. Schesvold, D., Tang, J., Ahmed, B. M., Altenburg, K., & Nygard, K. E., POMDP planning for high level UAV decisions: Search vs. strike. In Proceedings of the 16th International Conference on Computer Applications in Industry and Engineering, 2003.
  4. Ure, N. K., Chowdhary, G., Chen, Y. F., How, J. P., & Vian, J., Distributed learning for planning under uncertainty problems with heterogeneous teams. Journal of Intelligent & Robotic Systems, 74(1-2) (2014), 529-544. https://doi.org/10.1007/s10846-013-9980-x
  5. Lei, G., Dong, M. Z., Xu, T., & Wang, L. Multi-agent path planning for unmanned aerial vehicle based on threats analysis. In 2011 3rd International Workshop on Intelligent Systems and Applications, IEEE, 2011, pp. 1-4.
  6. Challita, U., Saad, W., & Bettstetter, C., Deep reinforcement learning for interference-aware path planning of cellular-connected UAVs. In 2018 IEEE International Conference on Communications (ICC) IEEE, 2018, pp. 1-7.
  7. Bethke, B., Redding, J. and How, J. P., Agent Capability in Persistent Mission Planning using Approximate Dynamic Programming, 2010 American Control Conference, 2010.
  8. Jeong, B., Kim, G., Ha, J., & Choi, H., MDP-based Mission Planning for Multi-agent Information Gathering. In 2013 KSAS Fall Conference, Jeju, Korea, 2013.
  9. Jeong, B. M., Ha, J. S., & Choi, H. L., MDP-based mission planning for multi-UAV persistent surveillance. In 2014 14th International Conference on Control, Automation and Systems (ICCAS 2014), IEEE, 2014, pp. 831-834.
  10. Bhowal, A., Potential Field Methods for Safe Reinforcement Learning: Exploring Q-Learning and Potential Fields. Master's thesis, TU Delft, Delft, Netherlands, 2017.
  11. Zeng, J., Ju, R., Qin, L., Hu, Y., Yin, Q., & Hu, C., Navigation in Unknown Dynamic Environments Based on Deep Reinforcement Learning. Sensors, 19(18) (2019), 3837. https://doi.org/10.3390/s19183837
  12. Bellman, R., A Markovian decision process. Journal of Mathematics and Mechanics, (1957), 679-684.
  13. Shapley, L. S., Stochastic games. Proceedings of the National Academy of Sciences, 39(10) (1953), 1095-1100. https://doi.org/10.1073/pnas.39.10.1953
  14. Papadimitriou, C. H., & Tsitsiklis, J. N., The complexity of Markov decision processes. Mathematics of Operations Research, 12(3) (1987), 441-450. https://doi.org/10.1287/moor.12.3.441
  15. Littman, M. L., Dean, T. L., & Kaelbling, L. P., On the complexity of solving Markov decision problems. In Proceedings of the Eleventh conference on Uncertainty in artificial intelligence, 1995, pp. 394-402.
  16. Jakes, W. C., & Cox, D. C., Microwave mobile communications. Wiley-IEEE Press, 1994.
  17. Moon, C., UAV Mission Planning Using MDP-based Artificial Potential Field, Master's Thesis, Korea Advanced Institute of Science and Technology (KAIST), 2021 (written in Korean).