Application of Multi-agent Reinforcement Learning to CELSS Material Circulation Control

  • Hirosaki, Tomofumi (Fujitsu Limited, Earth Science System Department, Science Systems Division) ;
  • Yamauchi, Nao (Nihon University Department of Aerospace Engineering, College of Science and Technology) ;
  • Yoshida, Hiroaki (Nihon University, Department of Precision Machinery Engineering, College of Science and Technology) ;
  • Ishikawa, Yoshio (Nihon University Department of Aerospace Engineering, College of Science and Technology) ;
  • Miyajima, Hiroyuki (Tokyo Jogakkan Junior College, Department of Information and Social Studies)
  • Published: 2001.01.01

Abstract

A Controlled Ecological Life Support System (CELSS) is essential for humans to live for long periods in a closed environment such as a lunar or Mars base. Such a system may be extremely complex, comprising many facilities and circulating multiple substances, so controlling the whole CELSS is a very difficult task. However, by regarding the facilities constituting the CELSS as agents, and their status and actions as the information they exchange, the whole CELSS can be treated as a multi-agent system (MAS). Treating the CELSS as a MAS offers three advantages. First, the MAS does not need a central computer. Second, the expandability of the CELSS increases. Third, its fault tolerance rises. However, it is difficult to describe the cooperation protocol among the agents of a MAS by hand. Therefore, in this study we propose to apply reinforcement learning (RL), because RL enables an agent to acquire a control rule automatically. To show that MAS and RL are effective methods, we have implemented the system in Java, which readily provides the distributed environment that is a characteristic feature of agents. In this paper, we report simulation results for material circulation control of the CELSS by the MAS with RL.
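The abstract does not show the authors' Java code, but the idea of a facility agent acquiring a control rule automatically can be sketched with ordinary tabular Q-learning. The class name, the discretized tank-level states, the increase/hold/decrease actions, and the reward are illustrative assumptions, not the paper's actual design:

```java
import java.util.Random;

/**
 * Illustrative sketch only: one CELSS facility treated as an agent that
 * learns a control rule by tabular Q-learning. States, actions, and reward
 * below are hypothetical, not taken from the paper.
 */
public class FacilityAgent {
    private final double[][] q;         // Q-table: states x actions
    private final double alpha = 0.1;   // learning rate
    private final double gamma = 0.9;   // discount factor
    private final double epsilon = 0.1; // exploration rate
    private final Random rng = new Random(42);

    public FacilityAgent(int numStates, int numActions) {
        q = new double[numStates][numActions];
    }

    /** Epsilon-greedy action selection over the Q-table row for this state. */
    public int selectAction(int state) {
        if (rng.nextDouble() < epsilon) return rng.nextInt(q[state].length);
        int best = 0;
        for (int a = 1; a < q[state].length; a++)
            if (q[state][a] > q[state][best]) best = a;
        return best;
    }

    /** Standard Q-learning update: Q(s,a) += alpha*(r + gamma*max_a' Q(s',a') - Q(s,a)). */
    public void update(int s, int a, double reward, int sNext) {
        double maxNext = Double.NEGATIVE_INFINITY;
        for (double v : q[sNext]) maxNext = Math.max(maxNext, v);
        q[s][a] += alpha * (reward + gamma * maxNext - q[s][a]);
    }

    public double qValue(int s, int a) { return q[s][a]; }

    public static void main(String[] args) {
        // Toy environment: 3 tank-level states (0=low, 1=nominal, 2=high)
        // and 3 actions (0=decrease flow, 1=hold, 2=increase flow).
        FacilityAgent agent = new FacilityAgent(3, 3);
        int state = 1;
        for (int step = 0; step < 1000; step++) {
            int action = agent.selectAction(state);
            int next = Math.max(0, Math.min(2, state + (action - 1)));
            double reward = (next == 1) ? 1.0 : -1.0; // reward staying nominal
            agent.update(state, action, reward, next);
            state = next;
        }
        System.out.println("Q(nominal, hold) = " + agent.qValue(1, 1));
    }
}
```

In the MAS setting described above, each facility would run an agent like this independently, exchanging only status and action information with its neighbors, which is what removes the need for a central computer.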

Keywords