Acknowledgement
This work was partly supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-00096, Robot task planning for single and multiple robots connected to a cloud system).
References
- E. Tunstel, A. Howard, and H. Seraji, "Rule-based reasoning and neural network perception for safe off-road robot mobility," Expert Systems, vol. 19, no. 4, pp. 191-200, 2002, DOI: 10.1111/1468-0394.00204.
- A. Kreutzmann, D. Wolter, F. Dylla, and J. H. Lee, "Towards Safe Navigation by Formalizing Navigation Rules," TransNav: International Journal on Marine Navigation and Safety of Sea Transportation, vol. 7, no. 2, pp. 161-168, Jun. 2013, DOI: 10.12716/1001.07.02.01.
- Z. Jiang and S. Luo, "Neural Logic Reinforcement Learning," arXiv preprint, 2019, DOI: 10.48550/arXiv.1904.10729.
- A. Payani and F. Fekri, "Incorporating Relational Background Knowledge into Reinforcement Learning via Differentiable Inductive Logic Programming," arXiv preprint, 2020, DOI: 10.48550/arXiv.2003.10386.
- M. Rojas, G. Hermosilla, D. Yunge, and G. Faris, "An Easy to Use Deep Reinforcement Learning Library for AI Mobile Robots in Isaac Sim," Applied Sciences, vol. 12, no. 17, 2022, DOI: 10.3390/app12178429.
- V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, "Playing Atari with Deep Reinforcement Learning," arXiv preprint, 2013, DOI: 10.48550/arXiv.1312.5602.
- J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, "Proximal Policy Optimization Algorithms," arXiv preprint, 2017, DOI: 10.48550/arXiv.1707.06347.
- R. Evans and E. Grefenstette, "Learning Explanatory Rules from Noisy Data," Journal of Artificial Intelligence Research, vol. 61, Jan. 2018, DOI: 10.1613/jair.5714.
- A. Payani and F. Fekri, "Inductive Logic Programming via Differentiable Deep Neural Logic Networks," arXiv preprint, 2019, DOI: 10.48550/arXiv.1906.03523.
- O. Rivlin, T. Hazan, and E. Karpas, "Generalized Planning With Deep Reinforcement Learning," arXiv preprint, 2020, DOI: 10.48550/arXiv.2005.02305.
- V. Zambaldi, D. Raposo, A. Santoro, V. Bapst, Y. Li, I. Babuschkin, K. Tuyls, D. Reichert, T. Lillicrap, E. Lockhart, M. Shanahan, V. Langston, R. Pascanu, M. Botvinick, O. Vinyals, and P. Battaglia, "Relational Deep Reinforcement Learning," arXiv preprint, 2018, DOI: 10.48550/arXiv.1806.01830.
- J. Janisch, T. Pevny, and V. Lisy, "Symbolic Relational Deep Reinforcement Learning based on Graph Neural Networks," arXiv preprint, 2021, DOI: 10.48550/arXiv.2009.12462.
- S. Garg and A. Bajpai, "Symbolic Network: Generalized Neural Policies for Relational MDPs," Proceedings of the 37th International Conference on Machine Learning (ICML), 2020, [Online]. Available: https://proceedings.mlr.press/v119/garg20a.html.
- D. Adjodah, T. Klinger, and J. Joseph, "Symbolic Relation Networks for Reinforcement Learning," 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montreal, Canada, 2018, [Online]. Available: https://r2learning.github.io/assets/papers/CameraReadySubmission%203.pdf.
- S. Das, S. Natarajan, K. Roy, R. Parr, and K. Kersting, "Fitted Q-Learning for Relational Domains," arXiv preprint, 2020, DOI: 10.48550/arXiv.2006.05595.
- T. Gokhale, S. Sampat, Z. Fang, Y. Yang, and C. Baral, "Blocksworld Revisited: Learning and Reasoning to Generate Event-Sequences from Image Pairs," arXiv preprint, 2019, DOI: 10.48550/arXiv.1905.12042.