• Title/Summary/Keyword: Multi Agent Simulation

A Dynamic Ontology-based Multi-Agent Context-Awareness User Profile Construction Method for Personalized Information Retrieval

  • Gao, Qian;Cho, Young Im
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.12 no.4
    • /
    • pp.270-276
    • /
    • 2012
  • With the increase in the amount of data and information available on the web, there is high demand for personalized information retrieval services that provide context-aware services to web users. This paper proposes a novel dynamic multi-agent context-awareness user profile construction method based on ontology, which incorporates concepts and properties to model the user profile. The method comprehensively considers the frequency and the specificity of each concept in a document and its corresponding domain ontology to construct the user profile. Based on this profile, a fuzzy c-means clustering method is adopted to cluster the user's interest domains, and a dynamic update policy continuously tracks changes in the user's interests. The simulation results show that, as the user profile is gradually refined, the proposed system outperforms a traditional semantics-based retrieval system in terms of recall ratio and precision ratio.
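
The abstract leaves the clustering step abstract; below is a minimal sketch of how fuzzy c-means could be applied to concept-weight vectors derived from concept frequency and ontology specificity. The function names, feature layout, and parameter choices (m = 2, tolerance, cluster count) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Basic fuzzy c-means over rows of X (n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random initial membership matrix U (rows sum to 1).
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]              # weighted cluster centers
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        new_U = 1.0 / (dist ** (2.0 / (m - 1.0)))                   # standard FCM membership update
        new_U /= new_U.sum(axis=1, keepdims=True)
        if np.abs(new_U - U).max() < tol:
            U = new_U
            break
        U = new_U
    return centers, U

# Toy concept-weight vectors (rows: documents a user read; columns: ontology concepts).
X = np.array([[0.9, 0.1, 0.0], [0.8, 0.2, 0.1], [0.1, 0.9, 0.7], [0.0, 0.8, 0.9]])
centers, memberships = fuzzy_c_means(X, n_clusters=2)
print(np.round(memberships, 2))   # soft membership of each document in each interest cluster
```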

Cooperation with Ground and Aerial Vehicles for Multiple Tasks: Decentralized Task Assignment and Graph Connectivity Control (지상 로봇의 분산형 임무할당과 무인기의 네트워크 연결성 추정 및 제어를 통한 협업)

  • Moon, Sung-Won;Kim, Hyoun-Jin
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.18 no.3
    • /
    • pp.218-223
    • /
    • 2012
  • Maintaining and improving graph connectivity is very important for decentralized multi-agent systems. Although the CBBA (Consensus-Based Bundle Algorithm) guarantees suboptimal performance and bounded convergence time, it is only valid for connected graphs. In this study, we apply a decentralized estimation procedure that allows each agent to track the algebraic connectivity of a time-varying graph. Based on this estimate, we design a decentralized gradient controller that maintains graph connectivity while the agents travel to perform their assigned tasks. Simulation results for fully-actuated first-order agents moving in a 2-D plane are presented.
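
For intuition about the quantity being controlled, here is a small, centralized sketch that computes the algebraic connectivity (the second-smallest Laplacian eigenvalue) from agent positions and nudges the agents up its numerical gradient. The distance-based edge weighting, the gains, and the centralized gradient are assumptions for illustration; the paper's estimator and controller are decentralized.

```python
import numpy as np

def weighted_laplacian(pos, comm_range):
    """Distance-based adjacency whose weight decays smoothly within the communication range."""
    n = len(pos)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(pos[i] - pos[j])
            if d < comm_range:
                A[i, j] = A[j, i] = np.exp(-d**2 / comm_range**2)
    return np.diag(A.sum(axis=1)) - A

def algebraic_connectivity(pos, comm_range):
    """Second-smallest eigenvalue (Fiedler value) of the weighted graph Laplacian."""
    return np.linalg.eigvalsh(weighted_laplacian(pos, comm_range))[1]

def connectivity_gradient_step(pos, comm_range, gain=0.5, eps=1e-3):
    """One numerical gradient-ascent step on lambda_2 with respect to each agent position."""
    new_pos = pos.copy()
    base = algebraic_connectivity(pos, comm_range)
    for i in range(len(pos)):
        for k in range(pos.shape[1]):
            perturbed = pos.copy()
            perturbed[i, k] += eps
            grad = (algebraic_connectivity(perturbed, comm_range) - base) / eps
            new_pos[i, k] += gain * grad
    return new_pos

pos = np.array([[0.0, 0.0], [3.0, 0.0], [6.0, 1.0], [2.0, 4.0]])
print("lambda_2 before:", algebraic_connectivity(pos, comm_range=5.0))
print("lambda_2 after :", algebraic_connectivity(connectivity_gradient_step(pos, 5.0), 5.0))
```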

Cooperative Action Controller of Multi-Agent System (다 개체 시스템의 협동 행동제어기)

  • Kim, Young-Back;Jang, Hong-Min;Kim, Dae-Jun;Choi, Young-Kiu;Kim, Sung-Shin
    • Proceedings of the KIEE Conference
    • /
    • 1999.07g
    • /
    • pp.3024-3026
    • /
    • 1999
  • This paper presents a cooperative action controller for a multi-agent system. To achieve an objective, i.e., to win a game, each robot must take on its own roles and actions and work with the other robots. The presented cooperative action controller consists of a role-selection layer, an action-selection layer, and an execution layer. In the first layer, a fuzzy logic controller is used. In the second layer, each robot selects its own action and generates its own path trajectory. In the third layer, each robot executes its action based on the velocity commands sent from the main computer. Finally, simulations show that each robot selects proper roles and coordinates its actions under the proposed controller.
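
To make the three-layer structure concrete, the sketch below wires a role-selection layer, an action-selection layer, and a velocity-command execution layer together. The crisp threshold rules stand in for the paper's fuzzy logic controller, and the role names, thresholds, and gains are hypothetical.

```python
import math

def select_role(dist_to_ball, dist_to_own_goal):
    """Top layer: crude rule-based stand-in for the paper's fuzzy-logic role selection."""
    if dist_to_ball < 0.5:
        return "attacker"
    if dist_to_own_goal < 1.0:
        return "goalkeeper"
    return "defender"

def select_action(role, ball_pos, own_goal_pos):
    """Middle layer: each role maps to a target point (a one-waypoint 'trajectory')."""
    if role == "attacker":
        return ball_pos
    if role == "goalkeeper":
        return own_goal_pos
    # Defender: hold a position between the ball and our own goal.
    return ((ball_pos[0] + own_goal_pos[0]) / 2, (ball_pos[1] + own_goal_pos[1]) / 2)

def execute(robot_pos, heading, target, k_v=1.0, k_w=2.0):
    """Bottom layer: convert the target point into (linear, angular) velocity commands."""
    dx, dy = target[0] - robot_pos[0], target[1] - robot_pos[1]
    distance = math.hypot(dx, dy)
    angle_error = math.atan2(dy, dx) - heading
    angle_error = math.atan2(math.sin(angle_error), math.cos(angle_error))  # wrap to [-pi, pi]
    return k_v * distance, k_w * angle_error

role = select_role(dist_to_ball=0.3, dist_to_own_goal=2.0)
target = select_action(role, ball_pos=(0.6, 0.2), own_goal_pos=(-2.0, 0.0))
print(role, execute(robot_pos=(0.0, 0.0), heading=0.0, target=target))
```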

Improved Dynamic Subjective Logic Model with Evidence Driven

  • Qiang, Jiao-Hong;Xin, Wang-Xin;Feng, Tian-Jun
    • Journal of Information Processing Systems
    • /
    • v.11 no.4
    • /
    • pp.630-642
    • /
    • 2015
  • In Jøsang's subjective logic, the fusion operator is not able to fuse three or more opinions at a time, and it cannot consider the effect of time factors on fusion. Also, the base rate (a) and the non-informative prior weight (C) cannot change dynamically. In this paper, we propose an Improved Subjective Logic Model with Evidence Driven (ISLM-ED) that expands and enriches subjective logic theory. It includes a multi-agent unified fusion operator and a dynamic function for the base rate (a) and the non-informative prior weight (C) driven by changes in evidence. The multi-agent unified fusion operator not only satisfies the commutative and associative laws but is also consistent with researchers' cognitive rules; a strict mathematical proof is given in this paper. Finally, simulation experiments show that ISLM-ED is more reasonable and effective and that it adapts better to a changing environment.
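
For context on what such a fusion operator does, here is a minimal sketch of Jøsang's standard cumulative fusion of binomial opinions, folded over a list because the operator is commutative and associative. This is the baseline operator, not the paper's ISLM-ED extension; the base-rate handling shown (a simple average) is a deliberate simplification.

```python
from dataclasses import dataclass
from functools import reduce

@dataclass
class Opinion:
    belief: float       # b
    disbelief: float    # d
    uncertainty: float  # u, with b + d + u = 1
    base_rate: float    # a

def cumulative_fuse(x: Opinion, y: Opinion) -> Opinion:
    """Josang's cumulative fusion of two binomial opinions (assumes u_x + u_y > u_x * u_y)."""
    k = x.uncertainty + y.uncertainty - x.uncertainty * y.uncertainty
    b = (x.belief * y.uncertainty + y.belief * x.uncertainty) / k
    d = (x.disbelief * y.uncertainty + y.disbelief * x.uncertainty) / k
    u = (x.uncertainty * y.uncertainty) / k
    a = (x.base_rate + y.base_rate) / 2   # simplification; ISLM-ED drives the base rate dynamically
    return Opinion(b, d, u, a)

def fuse_many(opinions):
    """Commutativity and associativity let a simple fold fuse any number of opinions."""
    return reduce(cumulative_fuse, opinions)

print(fuse_many([Opinion(0.6, 0.2, 0.2, 0.5),
                 Opinion(0.4, 0.3, 0.3, 0.5),
                 Opinion(0.7, 0.1, 0.2, 0.5)]))
```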

A Multi-Agent MicroBlog Behavior based User Preference Profile Construction Approach

  • Kim, Jee-Hyun;Cho, Young-Im
    • Journal of the Korea Society of Computer and Information
    • /
    • v.20 no.1
    • /
    • pp.29-37
    • /
    • 2015
  • Nowadays, the user-centric, application-based Web 2.0 has replaced Web 1.0, and users obtain and provide information through interactive network applications. As a result, traditional approaches that only extract and analyze users' local document-handling behavior and web-browsing behavior to build user preference profiles cannot fully reflect their interests. Therefore, this paper proposes a preference analysis and profiling approach based on users' communication behavior on microblogs, such as reading, forwarding, and @-mention behavior. An improved PersonalRank method is used to analyze how important a user is to other users in the network, and the users' communication behavior is used to update the weights of the items in the user preference profile. Simulation results show that the proposed method outperforms the ontology model, the TREC model, and the category model in terms of the 11SPR value.
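
As a reference point, the sketch below shows the basic PersonalRank algorithm (personalized PageRank via random walk with restart) over a toy user graph. The paper uses an improved variant, so the damping factor, iteration count, and graph layout here are illustrative assumptions only.

```python
def personal_rank(graph, root, alpha=0.85, iterations=50):
    """Random walk with restart on a user graph; graph maps node -> list of neighbor nodes."""
    rank = {node: 0.0 for node in graph}
    rank[root] = 1.0
    for _ in range(iterations):
        new_rank = {node: 0.0 for node in graph}
        for node, neighbors in graph.items():
            share = alpha * rank[node] / len(neighbors)
            for nb in neighbors:
                new_rank[nb] += share        # spread importance to connected users
        new_rank[root] += 1 - alpha          # restart mass returns to the root user
        rank = new_rank
    return rank

# Toy graph: users connected by read/forward/@ interactions; edges kept symmetric for simplicity.
graph = {
    "userA": ["userB", "userC"],
    "userB": ["userA", "userC", "userD"],
    "userC": ["userA", "userB"],
    "userD": ["userB"],
}
print(sorted(personal_rank(graph, "userA").items(), key=lambda kv: -kv[1]))
```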

The Analysis of Flatland Challenge Winners' Multi-agent Methodologies

  • Choi, BumKyu;Kim, Jong-Kook
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2021.05a
    • /
    • pp.369-372
    • /
    • 2021
  • Scheduling the movements of trains in a modern railway system is becoming essential and important. Swiss Federal Railways (SBB) and machine learning researchers began collaborating on a simulation environment and held the Flatland challenge. In this paper, the methodologies of the winners of this competition are analyzed to gain insight into current research trends. The problem is closely related to Multi-Agent Path Finding (MAPF) and the Vehicle Rescheduling Problem (VRSP). The potential for the methods attempted in the Flatland challenge to be applied to various transportation systems, not only railways, is also discussed.

Stochastic Initial States Randomization Method for Robust Knowledge Transfer in Multi-Agent Reinforcement Learning (멀티에이전트 강화학습에서 견고한 지식 전이를 위한 확률적 초기 상태 랜덤화 기법 연구)

  • Dohyun Kim;Jungho Bae
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.27 no.4
    • /
    • pp.474-484
    • /
    • 2024
  • Reinforcement learning, which is also studied in the defense field, faces the problem of sample efficiency: it requires a large amount of data for training. Transfer learning has been introduced to address this problem, but its effectiveness is sometimes marginal because the model does not effectively leverage prior knowledge. In this study, we propose a stochastic initial state randomization (SISR) method that promotes generalized, sufficient, and robust knowledge transfer. We developed a simulation environment involving a cooperative robot transportation task. Experimental results show that the task succeeds when SISR is applied, whereas it fails when SISR is not applied. We also analyze how the amount of state information collected by the agents changes when SISR is applied.
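
The core idea, randomizing the agents' starting configuration on every episode so the transferred policy generalizes, can be sketched as below. The spawn region, pose format, and reset callback are hypothetical stand-ins; the paper's environment and API are not reproduced here.

```python
import random

def sample_initial_poses(num_agents, spawn_region, rng):
    """Draw each agent's start pose (x, y, heading) uniformly over a spawn region,
    instead of using a fixed formation; a sketch of the SISR idea, not the paper's code."""
    (x0, x1), (y0, y1) = spawn_region
    return [(rng.uniform(x0, x1), rng.uniform(y0, y1), rng.uniform(-3.14159, 3.14159))
            for _ in range(num_agents)]

def run_training_episodes(env_reset_fn, num_episodes, num_agents, spawn_region, seed=0):
    """Every episode resets the (hypothetical) environment with freshly randomized initial
    states, so the learner sees a wide distribution of starting configurations."""
    rng = random.Random(seed)
    for _ in range(num_episodes):
        poses = sample_initial_poses(num_agents, spawn_region, rng)
        env_reset_fn(poses)           # env_reset_fn stands in for the real environment's reset API
        # ... rollout collection and learning updates would go here ...

run_training_episodes(
    env_reset_fn=lambda poses: print("reset with", [tuple(round(v, 1) for v in p) for p in poses]),
    num_episodes=2, num_agents=3, spawn_region=((0, 10), (0, 10)))
```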

Design and Prototype Development of An Agent for Self-Driving Car (자율운행 자동차의 에이전트 설계 및 프로토타입 개발)

  • Lim, Seung Kyu;Lee, Jae Moon
    • Journal of Korea Game Society
    • /
    • v.15 no.5
    • /
    • pp.131-142
    • /
    • 2015
  • A self-driving car is an autonomous vehicle capable of fulfilling the main transportation capabilities of a traditional car; it must be able to sense its environment and navigate without human input. In this paper, we design an agent that can simulate such self-driving cars and develop a prototype of it. To do this, we analyze the requirements of a self-driving car and then design the agent to fit a traditional multi-agent system. The key point of the design is that agents move according to steering forces only. The prototype of the designed agent was implemented using Unity 3D. In simulations with the prototype, the agents' movements were very realistic. However, as the number of agents increased, performance degraded significantly, so alternatives to address this problem were suggested.
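
Because the design hinges on movement driven purely by steering forces, here is a minimal Reynolds-style "seek" steering sketch in Python (the prototype itself was built in Unity 3D). The speed and force limits, the time step, and the class layout are illustrative assumptions.

```python
import numpy as np

MAX_SPEED = 10.0   # illustrative limits, not the paper's tuning
MAX_FORCE = 5.0

class SteeringAgent:
    """Point-mass vehicle that moves only under a steering force (Reynolds-style 'seek')."""

    def __init__(self, position, velocity):
        self.position = np.asarray(position, dtype=float)
        self.velocity = np.asarray(velocity, dtype=float)

    def seek(self, target):
        offset = np.asarray(target, dtype=float) - self.position
        dist = np.linalg.norm(offset)
        desired = offset / dist * MAX_SPEED if dist > 1e-9 else np.zeros_like(offset)
        steering = desired - self.velocity                  # steering force = desired - current velocity
        norm = np.linalg.norm(steering)
        if norm > MAX_FORCE:
            steering = steering / norm * MAX_FORCE          # clamp to the maximum steering force
        return steering

    def update(self, target, dt=0.05):
        self.velocity += self.seek(target) * dt
        speed = np.linalg.norm(self.velocity)
        if speed > MAX_SPEED:
            self.velocity = self.velocity / speed * MAX_SPEED
        self.position += self.velocity * dt

car = SteeringAgent(position=[0.0, 0.0], velocity=[0.0, 1.0])
for _ in range(400):
    car.update(target=[40.0, 25.0])
# With pure 'seek' (no arrival behavior) the agent overshoots and circles near the waypoint.
print(car.position)
```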

QLGR: A Q-learning-based Geographic FANET Routing Algorithm Based on Multi-agent Reinforcement Learning

  • Qiu, Xiulin;Xie, Yongsheng;Wang, Yinyin;Ye, Lei;Yang, Yuwang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.11
    • /
    • pp.4244-4274
    • /
    • 2021
  • The utilization of UAVs in various fields has led to the development of flying ad hoc network (FANET) technology. In a network environment with a highly dynamic topology and frequent link changes, traditional FANET routing technology cannot satisfy the new communication demands. Traditional routing algorithms based on geographic location can "fall" into a routing hole. In view of this problem, we propose a geolocation routing protocol based on multi-agent reinforcement learning, which decreases the packet loss rate and routing cost. The protocol views each node as an intelligent agent and evaluates the value of its neighbor nodes through local information. In the value function, nodes consider information such as link quality, residual energy, and queue length, which reduces the possibility of a routing hole. The protocol uses global rewards to enable individual nodes to collaborate in transmitting data. The performance of the protocol is experimentally analyzed for UAVs under extreme conditions such as topology changes and energy constraints. Simulation results show that the proposed QLGR-S protocol outperforms the traditional GPSR protocol in performance parameters such as throughput, end-to-end delay, and energy consumption. QLGR-S provides more reliable connectivity for UAV networking, safeguards the communication requirements between UAVs, and further promotes the development of UAV technology.
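
The per-node learning loop described above can be sketched as a small Q-table over next-hop neighbors whose local reward mixes link quality, residual energy, and queue occupancy. The weights, hyperparameters, and neighbor-table format are assumptions for illustration, not the QLGR-S specification.

```python
import random
from collections import defaultdict

ALPHA, GAMMA = 0.3, 0.8    # learning rate and discount factor (illustrative values)

class RoutingNode:
    """Per-node Q-table over next-hop neighbors: a sketch of Q-learning-based geographic routing."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.q = defaultdict(float)          # key: (destination, neighbor_id)

    def local_reward(self, neighbor):
        # Weighted mix of link quality, residual energy, and queue occupancy (weights assumed).
        return (0.5 * neighbor["link_quality"]
                + 0.3 * neighbor["residual_energy"]
                - 0.2 * neighbor["queue_fill"])

    def choose_next_hop(self, destination, neighbors, epsilon=0.1):
        if random.random() < epsilon:                       # explore occasionally
            return random.choice(neighbors)
        return max(neighbors, key=lambda nb: self.q[(destination, nb["id"])])

    def update(self, destination, neighbor, max_q_at_neighbor):
        key = (destination, neighbor["id"])
        target = self.local_reward(neighbor) + GAMMA * max_q_at_neighbor
        self.q[key] += ALPHA * (target - self.q[key])       # standard Q-learning update

node = RoutingNode("uav_3")
nbrs = [{"id": "uav_1", "link_quality": 0.9, "residual_energy": 0.6, "queue_fill": 0.2},
        {"id": "uav_7", "link_quality": 0.5, "residual_energy": 0.9, "queue_fill": 0.7}]
hop = node.choose_next_hop("uav_9", nbrs)
node.update("uav_9", hop, max_q_at_neighbor=0.4)   # value reported back by the chosen neighbor
print(hop["id"], dict(node.q))
```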

Design and implementation of Robot Soccer Agent Based on Reinforcement Learning (강화 학습에 기초한 로봇 축구 에이전트의 설계 및 구현)

  • Kim, In-Cheol
    • The KIPS Transactions:PartB
    • /
    • v.9B no.2
    • /
    • pp.139-146
    • /
    • 2002
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper, we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is a machine learning paradigm in which an agent learns, from indirect and delayed rewards, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that no input-output pairs are presented as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning do not require defining or learning any model of the surrounding environment; nevertheless, they can learn the optimal policy if the agent can visit every state-action pair infinitely often. However, the biggest problem of monolithic reinforcement learning is that straightforward applications do not scale up to more complex environments because of the intractably large state space. To address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement over existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly by assigning a different weight to each module according to its contribution to rewards. Therefore, in addition to handling the large state space effectively, AMMQL shows higher adaptability to environmental changes than pure MQL. In this paper, we use the AMMQL algorithm as a learning method for the dynamic positioning of robot soccer agents and implement a robot soccer agent system called Cogitoniks.
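
The distinguishing step of AMMQL, combining per-module Q-values with weights that follow each module's reward contribution, is sketched below. The weight-update rule (an exponential moving average followed by normalization), the action set, and the states are simplified assumptions, not the authors' exact scheme.

```python
import random
from collections import defaultdict

ACTIONS = ["hold", "advance", "retreat", "intercept"]

class QModule:
    """One learning module (e.g. one aspect of positioning) with its own Q-table."""
    def __init__(self):
        self.q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

    def update(self, state, action, reward, next_state, alpha=0.2, gamma=0.9):
        best_next = max(self.q[next_state].values())
        self.q[state][action] += alpha * (reward + gamma * best_next - self.q[state][action])

class AdaptiveMediator:
    """Combines module Q-values with weights that track each module's recent reward,
    a simplified stand-in for AMMQL's contribution-based weighting."""
    def __init__(self, modules, beta=0.1):
        self.modules = modules
        self.weights = [1.0 / len(modules)] * len(modules)
        self.beta = beta

    def select_action(self, module_states, epsilon=0.1):
        if random.random() < epsilon:
            return random.choice(ACTIONS)
        combined = {a: sum(w * m.q[s][a]
                           for w, m, s in zip(self.weights, self.modules, module_states))
                    for a in ACTIONS}
        return max(combined, key=combined.get)

    def update_weights(self, module_rewards):
        # Shift weight toward modules that recently contributed larger rewards.
        blended = [(1 - self.beta) * w + self.beta * r
                   for w, r in zip(self.weights, module_rewards)]
        total = sum(blended) or 1.0
        self.weights = [w / total for w in blended]

modules = [QModule(), QModule()]
mediator = AdaptiveMediator(modules)
action = mediator.select_action(module_states=["near_ball", "own_half"])
modules[0].update("near_ball", action, reward=1.0, next_state="near_goal")
modules[1].update("own_half", action, reward=0.1, next_state="own_half")
mediator.update_weights(module_rewards=[1.0, 0.1])
print(action, [round(w, 2) for w in mediator.weights])
```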