• Title/Summary/Keyword: Cooperative Multi-Agent


Evolutionary Design of a Fuzzy Logic Controller For Multi-Agent Robotic Systems

  • Jeong, Il-Kwon;Lee, Ju-Jang
    • Transactions on Control, Automation and Systems Engineering
    • /
    • v.1 no.2
    • /
    • pp.147-152
    • /
    • 1999
  • Finding an analytic model of the cooperative structure of a multi-agent system accomplishing a given task is an interesting area in the field of artificial intelligence. It is usually difficult to design controllers for multi-agent systems without comprehensive knowledge of the system. One way to overcome this limitation is to design the controllers with an evolutionary approach. This paper introduces the use of a genetic algorithm to discover a fuzzy logic controller whose rules govern emergent agents solving a pursuit problem in a continuous world. Simulation results indicate that, given the complexity of the problem, an evolutionary approach to finding the fuzzy logic controller is promising.

  • PDF
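
The combination described above can be sketched in miniature: a genetic algorithm tunes the rule consequents of a tiny fuzzy controller chasing an evader in one dimension. The controller shape, pursuit dynamics, and GA parameters below are our own illustrative choices, not the paper's actual design.

```python
import random

random.seed(0)

def fuzzy_speed(distance, consequents):
    """One-input fuzzy controller: triangular NEAR/MEDIUM/MEDIUM/FAR sets
    over [0, 10], Sugeno-style weighted-average defuzzification."""
    near = max(0.0, 1.0 - distance / 5.0)
    medium = max(0.0, 1.0 - abs(distance - 5.0) / 5.0)
    far = max(0.0, (distance - 5.0) / 5.0)
    weights = [near, medium, far]
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(w * c for w, c in zip(weights, consequents)) / total

def fitness(consequents, steps=30):
    """Negative final gap in a 1-D pursuit; the evader flees at 0.3/step."""
    pursuer, evader = 0.0, 8.0
    for _ in range(steps):
        evader += 0.3
        pursuer += fuzzy_speed(evader - pursuer, consequents)
    return -abs(evader - pursuer)

def evolve(pop_size=20, generations=40):
    pop = [[random.uniform(0, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, 3)          # one-point crossover
            child = a[:cut] + b[cut:]
            child = [min(1.0, max(0.0, g + random.gauss(0, 0.05)))
                     for g in child]              # Gaussian mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

The evolved rule consequents close the pursuit gap far better than a do-nothing controller, which is the qualitative point of the evolutionary approach.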

Opportunistic Spectrum Access with Discrete Feedback in Unknown and Dynamic Environment: A Multi-agent Learning Approach

  • Gao, Zhan;Chen, Junhong;Xu, Yuhua
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.10
    • /
    • pp.3867-3886
    • /
    • 2015
  • This article investigates the problem of opportunistic spectrum access in a dynamic environment in which the signal-to-noise ratio (SNR) is time-varying. Unlike existing work on continuous feedback, we consider the more practical scenario in which the transmitter receives an Acknowledgment (ACK) if the received SNR is larger than the required threshold, and otherwise a Non-Acknowledgment (NACK); that is, the feedback is discrete. Several applications with different threshold values are also considered. The channel selection problem is formulated as a non-cooperative game, which is then proved to be a potential game and hence to have at least one pure-strategy Nash equilibrium. A multi-agent Q-learning algorithm is proposed to converge to Nash equilibria of the game. Furthermore, opportunistic spectrum access with multiple discrete feedbacks is also investigated. Finally, simulation results verify that the proposed multi-agent Q-learning algorithm is applicable both to binary feedback and to multiple discrete feedbacks.
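
The binary-feedback setting above can be illustrated with a toy stand-in. The paper proposes multi-agent Q-learning; the sketch below instead uses a much simpler trial-and-error rule (win-stay, lose-shift) in the same setting, where each transmitter sees only an ACK (no collision on its channel) or a NACK, and the population settles into a collision-free channel assignment, which is a pure-strategy Nash equilibrium of the channel-selection game.

```python
import random

random.seed(1)

N_AGENTS, N_CHANNELS, ROUNDS = 3, 3, 2000
channel = [random.randrange(N_CHANNELS) for _ in range(N_AGENTS)]

for _ in range(ROUNDS):
    picks = list(channel)                       # simultaneous choices
    for a in range(N_AGENTS):
        ack = picks.count(picks[a]) == 1        # binary feedback: ACK iff alone
        if not ack:
            channel[a] = random.randrange(N_CHANNELS)   # lose-shift: resample

print(sorted(channel))
```

Once every agent occupies a distinct channel, all agents receive ACKs and nobody moves, so the collision-free assignment is absorbing.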

Development of vision-based soccer robots for multi-agent cooperative systems

  • 심현식;정명진;최인환;김종환
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 1997.10a
    • /
    • pp.608-611
    • /
    • 1997
  • The soccer robot system consists of multiple agents with highly coordinated operation and movement, so as to fulfill specific objectives even in adverse situations. Coordinating the multi-agents requires a great deal of supplementary work in advance. The associated issues are position correction, prevention of communication congestion, and local information sensing, in addition to the need to imitate human-like decision making. A control structure for a soccer robot is designed, and several behaviors and actions for a soccer robot are proposed. A variable zone defense as the basic strategy, and several special strategies for fouls, are applied to the SOTY2 team.

  • PDF

Study for Control Algorithm of Robust Multi-Robot in Dynamic Environment

  • 홍성우;안두성
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2001.04a
    • /
    • pp.249-254
    • /
    • 2001
  • In this paper, we propose a method of cooperative control based on an artificial intelligence system for distributed autonomous robotic systems. In general, a multi-agent behavior algorithm is simple and effective for a small number of robots, and multi-robot behavior control is a simple reactive navigation strategy that combines repulsion from obstacles with attraction to a goal. As the number of robots increases, however, this becomes difficult to realize, because multi-robot behavior algorithms must satisfy multiple constraints and goals in mobile robot navigation problems. As a solution to this problem, we propose an architecture with a fuzzy system for each robot's speed control and a fuzzy-neural network for steering to avoid obstacles. Our focus is on a system of cooperative autonomous robots in an environment with obstacles. For simulation, we divide the experiments into two methods: motor schema-based formation control from previous work, and the method proposed in this paper. Simulation results are given in an obstacle environment and in a dynamic environment.

  • PDF
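
The reactive strategy the abstract mentions, repulsion from obstacles combined with attraction to a goal, is a classic potential-field scheme. The sketch below is a minimal version with gains, radii, and scenario of our own choosing, not the paper's architecture: the robot is pulled toward the goal and pushed away from any obstacle inside an influence radius.

```python
import math

def step(pos, goal, obstacles, gain_att=0.2, gain_rep=0.5, radius=2.0):
    """One reactive step: attractive force to goal + repulsion from obstacles."""
    fx = gain_att * (goal[0] - pos[0])
    fy = gain_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < radius:
            # Repulsion grows sharply as the robot nears the obstacle.
            push = gain_rep * (1.0 / d - 1.0 / radius) / d ** 2
            fx += push * dx
            fy += push * dy
    return (pos[0] + fx, pos[1] + fy)

pos, goal = (0.0, 0.0), (10.0, 0.0)
obstacles = [(5.0, 0.3)]          # slightly off the straight-line path
for _ in range(200):
    pos = step(pos, goal, obstacles)
print(round(pos[0], 2), round(pos[1], 2))
```

Because the obstacle sits slightly off the straight line to the goal, the repulsive term deflects the robot around it, after which the attractive term alone pulls it onto the goal.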

Generating Cooperative Behavior by Multi-Agent Profit Sharing on the Soccer Game

  • Miyazaki, Kazuteru;Terada, Takashi;Kobayashi, Hiroaki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2003.09a
    • /
    • pp.166-169
    • /
    • 2003
  • Reinforcement learning is a kind of machine learning. It aims to adapt an agent to a given environment using rewards and penalties as cues. Q-learning [8], a representative reinforcement learning system, treats a reward and a penalty at the same time, which raises the problem of how to decide appropriate reward and penalty values. The Penalty Avoiding Rational Policy Making algorithm (PARP) [4] and Penalty Avoiding Profit Sharing (PAPS) [2] are reinforcement learning systems that treat a reward and a penalty independently. Though PAPS is a descendant algorithm of PARP, both PARP and PAPS tend to learn a locally optimal policy. To overcome this, in this paper we propose the Multi Best method (MB), which is PAPS with the multi-start method [5]. MB selects the best policy among several policies learned by PAPS agents. By applying PS, PAPS and MB to a soccer game environment based on SoccerBots [9], we show that MB is the best solution for the soccer game environment.

  • PDF
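
The core Multi Best idea, running several independent learners and keeping the policy that scores best, can be shown with a generic multi-start local search. The reward landscape and step size below are our own invention (a stand-in for the PAPS learners, which we do not reproduce): a single greedy learner stops at a local optimum, while the best of many restarts finds the global one.

```python
import random

random.seed(3)

def reward(x):
    # Two peaks: a local optimum at x = 2 (height 3), the global one at x = 8 (height 5).
    return max(0.0, 3 - abs(x - 2)) + max(0.0, 5 - abs(x - 8))

def local_learn(x):
    """Greedy local search: move in 0.1 steps while a neighbor improves."""
    while True:
        best = max((x - 0.1, x, x + 0.1), key=reward)
        if reward(best) <= reward(x):
            return x
        x = best

single = local_learn(0.0)                        # one learner: stuck near x = 2
starts = [random.uniform(0.0, 10.0) for _ in range(20)]
multi_best = max((local_learn(s) for s in starts), key=reward)
print(round(reward(single), 2), round(reward(multi_best), 2))
```

Selecting the best result over many random starts is exactly the mechanism MB uses to escape the locally optimal policies that a single PAPS run tends to settle into.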

Cooperative Task Processing by Separating and Fusing Multi-Mobile-agents

  • Tsuchida, Yasuhiro;Yamamoto, Masahito;Kawamura, Hidenori;Ohuchi, Azuma
    • Proceedings of the IEEK Conference
    • /
    • 2000.07b
    • /
    • pp.965-968
    • /
    • 2000
  • We develop the Multi-Mobile-agents system for realizing effective cooperative task processing in the network environment. In this system, agents are separated / fused by the Place and migrated to another computer. A Place can assign agents to other places by agents migration to be flat the time to execute agents’ action. In this paper, the effectiveness of this system is shown by experimental results applying an agent given simple task.

  • PDF

GENETIC PROGRAMMING OF MULTI-AGENT COOPERATION STRATEGIES FOR TABLE TRANSPORT

  • Cho, Dong-Yeon;Zhang, Byoung-Tak
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1998.06a
    • /
    • pp.170-175
    • /
    • 1998
  • Transporting a large table using multiple robotic agents requires at least two group behaviors, homing and herding, which must be coordinated in a proper sequence. Existing GP methods for multi-agent learning are not practical enough to find an optimal solution in this domain. To evolve this kind of complex cooperative behavior, we use a novel method called fitness switching. This method maintains a pool of basis fitness functions, each of which corresponds to a primitive group behavior. The basis functions are then progressively combined into more complex fitness functions to co-evolve more complex behavior. The performance of the presented method is compared with that of two conventional methods. Experimental results show that coevolutionary fitness switching provides an effective mechanism for evolving complex emergent behavior which may not be attainable by simple genetic programming.

  • PDF
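
The fitness-switching scheme can be sketched in miniature: keep a pool of basis fitness functions, one per primitive group behavior, evolve first against one primitive, then switch to their combination. The bit-string genome and the two toy "behaviors" below are our own stand-ins, far simpler than the paper's genetic programming setup.

```python
import random

random.seed(5)

GENES = 20

def homing(bits):            # basis fitness 1: ones in the first half
    return sum(bits[:10])

def herding(bits):           # basis fitness 2: ones in the second half
    return sum(bits[10:])

def evolve(pop, fit, generations=40):
    """Elitist GA with single-bit mutation against the given fitness."""
    for _ in range(generations):
        pop.sort(key=fit, reverse=True)
        elite = pop[:10]
        children = []
        for _ in range(len(pop) - 10):
            child = list(random.choice(elite))
            child[random.randrange(GENES)] ^= 1   # point mutation
            children.append(child)
        pop = elite + children
    return pop

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(30)]
pop = evolve(pop, homing)                                  # stage 1: primitive
pop = evolve(pop, lambda b: homing(b) + herding(b))        # stage 2: combined
best = max(pop, key=lambda b: homing(b) + herding(b))
print(homing(best), herding(best))
```

Switching from the primitive fitness to the combined one lets the second stage build on a population already competent at the first behavior, which is the progressive-combination idea behind fitness switching.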

Self-Organization for Multi-Agent Groups

  • Kim, Dong-Hun
    • International Journal of Control, Automation, and Systems
    • /
    • v.2 no.3
    • /
    • pp.333-342
    • /
    • 2004
  • This paper presents a framework for the self-organization of swarm systems based on coupled nonlinear oscillators (CNOs). In this scheme, multiple agents in a swarm self-organize to flock and arrange themselves as a group using CNOs, which keep a certain distance between agents through attractive and repulsive forces. A theoretical analysis of flocking behavior by CNOs and a design guideline for CNO parameters are proposed. Finally, a formation scenario for cooperative multi-agent groups is investigated to demonstrate group behaviors such as aggregation, migration and homing. The task for each group in this scenario is to perform a series of processes, such as gathering into a whole group or splitting into two groups, and then to return to the base while avoiding collisions with agents in different groups and maintaining the formation of each group.
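
The attraction/repulsion mechanism can be illustrated without the paper's coupled-oscillator equations: give each pair of agents a force that is repulsive below a desired spacing D and attractive above it, so the group settles at roughly distance D apart. The gains and initial positions below are our own choices.

```python
import math

D, GAIN, STEPS = 2.0, 0.05, 500
agents = [(0.0, 0.0), (0.5, 0.1), (4.0, -0.2)]

for _ in range(STEPS):
    new = []
    for i, (xi, yi) in enumerate(agents):
        fx = fy = 0.0
        for j, (xj, yj) in enumerate(agents):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            d = math.hypot(dx, dy)
            # Attractive (positive) when d > D, repulsive (negative) when d < D.
            f = GAIN * (d - D)
            fx += f * dx / d
            fy += f * dy / d
        new.append((xi + fx, yi + fy))
    agents = new

d01 = math.hypot(agents[0][0] - agents[1][0], agents[0][1] - agents[1][1])
print(round(d01, 2))
```

With three agents and equal desired spacings, the group relaxes into an equilateral triangle of side D, a minimal example of self-organized formation keeping.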

Stochastic Initial States Randomization Method for Robust Knowledge Transfer in Multi-Agent Reinforcement Learning

  • Dohyun Kim;Jungho Bae
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.27 no.4
    • /
    • pp.474-484
    • /
    • 2024
  • Reinforcement learning, which is also studied in the field of defense, faces the problem of sample efficiency: training requires a large amount of data. Transfer learning has been introduced to address this problem, but its effectiveness is sometimes marginal because the model does not effectively leverage prior knowledge. In this study, we propose a stochastic initial state randomization (SISR) method that promotes generalized and sufficient knowledge transfer, enabling robust transfer. We developed a simulation environment involving a cooperative robot transportation task. Experimental results show that tasks succeed when SISR is applied and fail when it is not. We also analyzed how the amount of state information collected by the agents changes when SISR is applied.
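
The effect of randomizing initial states can be shown on a toy problem far simpler than the paper's robot-transportation task (the corridor environment and all parameters below are our own): drawing the start state at random each episode exposes the learner to the whole state space, whereas a fixed start leaves much of it unvisited.

```python
import random

random.seed(7)

N, GOAL, EPISODES = 8, 0, 2000

def train(randomize):
    """Tabular Q-learning on a corridor of N cells with the goal at cell 0."""
    q = [[0.0, 0.0] for _ in range(N)]           # actions: 0 = left, 1 = right
    visited = set()
    for _ in range(EPISODES):
        s = random.randrange(1, N) if randomize else 1   # random vs. fixed start
        for _ in range(20):
            visited.add(s)
            if s == GOAL:
                break
            if random.random() < 0.2:            # epsilon-greedy exploration
                a = random.randrange(2)
            else:
                a = 0 if q[s][0] >= q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
            r = 1.0 if s2 == GOAL else 0.0
            q[s][a] += 0.5 * (r + 0.9 * max(q[s2]) - q[s][a])
            s = s2
        visited.add(s)
    return q, visited

q_sisr, seen_sisr = train(randomize=True)
q_fixed, seen_fixed = train(randomize=False)
print(len(seen_sisr), len(seen_fixed))
```

With randomized starts the learner visits every cell and its greedy policy heads toward the goal from any start state, mirroring the paper's observation that SISR increases the state information the agents collect.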

Cooperative Multi-Agent Reinforcement Learning-Based Behavior Control of Grid Sortation Systems in Smart Factory

  • Choi, HoBin;Kim, JuBong;Hwang, GyuYoung;Kim, KwiHoon;Hong, YongGeun;Han, YounHee
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.9 no.8
    • /
    • pp.171-180
    • /
    • 2020
  • A smart factory consists of digital automation solutions throughout the production process, including design, development, manufacturing and distribution; it is an intelligent factory that installs IoT in its internal facilities and machines to collect process data in real time and analyze it so that the factory can control itself. A smart factory's equipment works as a physical combination of numerous pieces of hardware, rather than as a single object driving a virtual character, as in a game. In other words, for a specific common goal, multiple devices must perform individual actions simultaneously. By taking advantage of a smart factory's ability to collect process data in real time, reinforcement learning can be used instead of general machine learning to perform behavior control without pre-collected training data. In the real world, however, it is impossible to run more than tens of millions of learning iterations because of physical wear and time. Thus, this paper uses a simulator to develop a grid sortation system focusing on transport facilities, one of the complex environments in the smart factory field, and designs cooperative multi-agent-based reinforcement learning to demonstrate efficient behavior control.