• Title/Summary/Keyword: Multi-Agent Reinforcement Learning

Earthwork Planning via Reinforcement Learning with Heterogeneous Construction Equipment (강화학습을 이용한 이종 장비 토목 공정 계획)

  • Ji, Min-Gi;Park, Jun-Keon;Kim, Do-Hyeong;Jung, Yo-Han;Park, Jin-Kyoo;Moon, Il-Chul
    • Journal of the Korea Society for Simulation / v.27 no.1 / pp.1-13 / 2018
  • Earthwork planning is one of the critical issues in construction process management. Existing approaches to construction process management either optimize the construction with mathematical methodologies or apply heuristics to simulations. This paper proposes a simulated earthwork scenario and derives an optimal path for that simulation using reinforcement learning. We use two different Markov decision process (MDP) formulations for the interacting excavator agent and truck agent: sequenced learning and independent learning. The simulation results show that both formulations reach the optimal plan for the simulated earthwork scenario. Such planning could serve as a basis for automated construction management.
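
As a toy illustration of the independent-learning variant described above, the sketch below runs tabular Q-learning for an excavator agent (digs soil cells) and a truck agent (hauls dug piles to a dump cell) on a 1-D strip. The environment, states, and rewards are hypothetical stand-ins, not the paper's formulation:

```python
import random
from collections import defaultdict

N, EPISODES = 5, 3000
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
ACTIONS = (-1, 1)                      # move one cell left or right

def clip(p):
    return max(0, min(N - 1, p))

def choose(Q, s):
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(s, a)])

Qe = defaultdict(float)                # excavator: ((pos, soil), action)
Qt = defaultdict(float)                # truck: ((pos, loaded), action)

for _ in range(EPISODES):
    soil = [1] * N                     # cells still holding soil
    piles = [0] * N                    # dug piles waiting for pickup
    e, t, loaded = 0, N - 1, 0
    for _ in range(60):
        # excavator moves; it digs automatically on an un-dug cell
        se = (e, tuple(soil))
        ae = choose(Qe, se)
        e = clip(e + ae)
        re = 0.0
        if soil[e]:
            soil[e], piles[e], re = 0, piles[e] + 1, 1.0
        se2 = (e, tuple(soil))
        Qe[(se, ae)] += ALPHA * (re + GAMMA * max(Qe[(se2, a)] for a in ACTIONS)
                                 - Qe[(se, ae)])

        # truck moves; it loads from a pile and earns reward dumping at cell 0
        st = (t, loaded)
        at = choose(Qt, st)
        t = clip(t + at)
        rt = 0.0
        if not loaded and piles[t]:
            piles[t] -= 1
            loaded = 1
        elif loaded and t == 0:
            loaded, rt = 0, 1.0
        st2 = (t, loaded)
        Qt[(st, at)] += ALPHA * (rt + GAMMA * max(Qt[(st2, a)] for a in ACTIONS)
                                 - Qt[(st, at)])
```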

Study for Feature Selection Based on Multi-Agent Reinforcement Learning (다중 에이전트 강화학습 기반 특징 선택에 대한 연구)

  • Kim, Miin-Woo;Bae, Jin-Hee;Wang, Bo-Hyun;Lim, Joon-Shik
    • Journal of Digital Convergence / v.19 no.12 / pp.347-352 / 2021
  • In this paper, we propose a method that uses multi-agent reinforcement learning to find feature subsets that are effective for classification of an input dataset. In machine learning, finding features suitable for classification is crucial. A dataset may have numerous features; some are effective for classification or prediction, while others have little or even a negative effect on the results. Selecting features to increase classification or prediction accuracy is therefore a critical problem. To solve it, we propose a feature selection method based on reinforcement learning. Each feature is assigned one agent, which decides whether that feature is selected. Rewards are then obtained for the subset of selected features and for the complementary subset of unselected ones, and the Q-value of each agent is updated by comparing the two rewards; this comparison of the two subsets helps the agents determine whether their actions were right. These steps are repeated for a set number of episodes, after which the final features are selected. Applying this method to the Wisconsin Breast Cancer, Spambase, Musk, and Colon Cancer datasets yielded accuracy improvements of 0.0385, 0.0904, 0.1252, and 0.2055, respectively, and final classification accuracies of 0.9789, 0.9311, 0.9691, and 0.9474, respectively. This demonstrates that the proposed method properly selects features that are effective for classification and increases classification accuracy.
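
A minimal sketch of one reading of this scheme, with one bandit-style agent per feature and the reward-comparison step crudely simplified; the dataset, classifier, and update rule below are illustrative assumptions, not the authors' code:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
n_feat = X.shape[1]
Q = np.zeros((n_feat, 2))              # per-feature Q-values for [drop, keep]
ALPHA, EPS, EPISODES = 0.1, 0.2, 30
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
rng = np.random.default_rng(0)

def accuracy(mask):
    # classification accuracy of the subset picked out by a boolean mask
    return 0.0 if mask.sum() == 0 else cross_val_score(clf, X[:, mask], y, cv=3).mean()

for _ in range(EPISODES):
    explore = rng.random(n_feat) < EPS
    acts = np.where(explore, rng.integers(0, 2, n_feat), Q.argmax(axis=1))
    sel = acts.astype(bool)
    r_sel, r_rest = accuracy(sel), accuracy(~sel)   # the two subsets to compare
    reward = 1.0 if r_sel >= r_rest else -1.0       # did the selected side win?
    for i, a in enumerate(acts):                    # nudge each agent's Q-value
        Q[i, a] += ALPHA * (reward - Q[i, a])

final = Q.argmax(axis=1).astype(bool)
print(final.sum(), "features kept, CV accuracy %.4f" % accuracy(final))
```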

A Survey on Recent Advances in Multi-Agent Reinforcement Learning (멀티 에이전트 강화학습 기술 동향)

  • Yoo, B.H.;Ningombam, D.D.;Kim, H.W.;Song, H.J.;Park, G.M.;Yi, S.
    • Electronics and Telecommunications Trends / v.35 no.6 / pp.137-149 / 2020
  • Several multi-agent reinforcement learning (MARL) algorithms have achieved overwhelming results in recent years. They have demonstrated their potential in solving complex problems in the fields of real-time strategy online games, robotics, and autonomous vehicles. However, these algorithms face many challenges when dealing with massive problem spaces in sparse-reward environments. Based on the centralized training and decentralized execution (CTDE) architecture, the MARL algorithms discussed in the literature aim to solve the current challenges by formulating novel concepts of inter-agent modeling, credit assignment, multi-agent communication, and the exploration-exploitation dilemma. The fundamental objective of this paper is to deliver a comprehensive survey of existing MARL algorithms organized by problem statement rather than by technology. We also discuss several experimental frameworks to provide insight into the use of these algorithms and to motivate some promising directions for future research.
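
For orientation, a structural sketch of the CTDE pattern the survey centers on, with generic, assumed network shapes rather than any specific algorithm from the literature: each actor conditions only on its own local observation, while a single critic used during training sees the joint observation and joint action:

```python
import torch
import torch.nn as nn

N_AGENTS, OBS, ACT = 3, 8, 4

# decentralized actors: one per agent, local observation in, action dist out
actors = [nn.Sequential(nn.Linear(OBS, 32), nn.ReLU(), nn.Linear(32, ACT))
          for _ in range(N_AGENTS)]
# centralized critic: joint observations and joint actions in, one value out
critic = nn.Sequential(nn.Linear(N_AGENTS * (OBS + ACT), 64), nn.ReLU(),
                       nn.Linear(64, 1))

def act(observations):
    # decentralized execution: each actor uses only its own observation
    return [torch.softmax(a(o), -1) for a, o in zip(actors, observations)]

obs = [torch.randn(OBS) for _ in range(N_AGENTS)]
pi = act(obs)
q = critic(torch.cat(obs + pi))   # centralized training signal
print(q)
```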

Cooperative Multi-Agent Reinforcement Learning-Based Behavior Control of Grid Sortation Systems in Smart Factory (스마트 팩토리에서 그리드 분류 시스템의 협력적 다중 에이전트 강화 학습 기반 행동 제어)

  • Choi, HoBin;Kim, JuBong;Hwang, GyuYoung;Kim, KwiHoon;Hong, YongGeun;Han, YounHee
    • KIPS Transactions on Computer and Communication Systems / v.9 no.8 / pp.171-180 / 2020
  • A smart factory applies digital automation solutions throughout the production process, including design, development, manufacturing, and distribution; it is an intelligent factory that installs IoT devices in its facilities and machines to collect process data in real time and analyze them so that the factory can control itself. Unlike a virtual character in a game driven by a single object, smart factory equipment operates as a physical combination of numerous pieces of hardware: to achieve a specific common goal, multiple devices must perform individual actions simultaneously. Because a smart factory can collect process data in real time, using reinforcement learning instead of conventional machine learning allows behavior control without pre-collected training data. In the real world, however, physical wear and time make it impossible to run the tens of millions of learning iterations required. This paper therefore uses simulators to develop a grid sortation system focused on transport facilities, one of the more complex environments in the smart factory field, and designs cooperative multi-agent reinforcement learning to demonstrate efficient behavior control.
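
A toy sketch of the cooperative, shared-reward idea (a drastic simplification, not the paper's simulator): each diverter cell on a sortation line is an agent that sees a parcel's destination tag and chooses pass or divert, and every agent that acted receives the same team reward:

```python
import random
from collections import defaultdict

N_CELLS, EPISODES, ALPHA, EPS = 4, 5000, 0.1, 0.1
Q = [defaultdict(float) for _ in range(N_CELLS)]   # per-agent Q[(dest, action)]

for _ in range(EPISODES):
    dest = random.randrange(N_CELLS)               # parcel's target chute
    actions, outcome = [], -1
    for cell in range(N_CELLS):                    # parcel travels down the line
        a = (random.randrange(2) if random.random() < EPS
             else max((0, 1), key=lambda x: Q[cell][(dest, x)]))
        actions.append(a)
        if a == 1:                                 # divert at this cell
            outcome = cell
            break
    team_r = 1.0 if outcome == dest else -1.0      # one shared team reward
    for cell, a in enumerate(actions):             # credit every acting agent
        Q[cell][(dest, a)] += ALPHA * (team_r - Q[cell][(dest, a)])

# each cell should learn to divert only its own parcels -> [1, 1, 1, 1]
print([max((0, 1), key=lambda x: Q[c][(c, x)]) for c in range(N_CELLS)])
```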

Intelligent Robot Design: Intelligent Agent Based Approach (지능로봇: 지능 에이전트를 기초로 한 접근방법)

  • Kang, Jin-Shig
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.4 / pp.457-467 / 2004
  • In this paper, a robot is treated as an agent, and a robot structure is presented that consists of multiple sub-agents with the diverse capacities a robot requires, such as perception, intelligence, and action. Each sub-agent is in turn composed of micro-agents ($\mu$agents), each in charge of an elementary action. The robot control structure has two sub-agents: a behavior-based reactive controller and an action-selection sub-agent. The action-selection sub-agent selects an action based on the high-level action and its performance, and it has a learning mechanism based on reinforcement learning. The presented structure makes it easy to give intelligence to each element of action and offers a new approach to multi-robot control. The presented robot is simulated for two goals, chaotic exploration and obstacle avoidance, then fabricated using an 8-bit microcontroller and tested experimentally.
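
A compact sketch of this layered structure as read here, with hypothetical behavior names, sensors, and rewards: primitive behaviors at the micro-agent level, a reactive sub-agent that gates candidates by sensor readings, and an action-selection sub-agent that learns among the candidates with Q-learning:

```python
import random
from collections import defaultdict

BEHAVIORS = ["explore", "avoid_left", "avoid_right"]   # micro-agent behaviors

def reactive_candidates(left_ir, right_ir):
    # behavior-based reactive sub-agent: gates behaviors by the IR sensors
    return BEHAVIORS if (left_ir or right_ir) else ["explore"]

class ActionSelector:
    """Action-selection sub-agent with a simple Q-learning mechanism."""
    def __init__(self, alpha=0.2, eps=0.1):
        self.Q, self.alpha, self.eps = defaultdict(float), alpha, eps

    def select(self, state, candidates):
        if random.random() < self.eps:
            return random.choice(candidates)
        return max(candidates, key=lambda b: self.Q[(state, b)])

    def update(self, state, behavior, reward):
        q = self.Q[(state, behavior)]
        self.Q[(state, behavior)] = q + self.alpha * (reward - q)

def toy_reward(left_ir, right_ir, b):
    if not left_ir and not right_ir:
        return 1.0 if b == "explore" else -1.0
    if (left_ir and b == "avoid_right") or (right_ir and b == "avoid_left"):
        return 1.0                       # turned away from the obstacle
    return -1.0                          # bumped or turned the wrong way

selector = ActionSelector()
for _ in range(2000):                    # simulated IR sensor episodes
    l, r = random.random() < 0.3, random.random() < 0.3
    b = selector.select((l, r), reactive_candidates(l, r))
    selector.update((l, r), b, toy_reward(l, r, b))
```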

Development of Prediction Model of Chloride Diffusion Coefficient using Machine Learning (기계학습을 이용한 염화물 확산계수 예측모델 개발)

  • Kim, Hyun-Su
    • Journal of Korean Association for Spatial Structures / v.23 no.3 / pp.87-94 / 2023
  • Chloride is one of the most common threats to reinforced concrete (RC) durability. The alkaline environment of concrete forms a passive layer on the surface of the reinforcement bars that protects them from corrosion. However, when the chloride concentration at a reinforcement bar reaches a certain level, the passive protection layer deteriorates, causing corrosion and ultimately reducing the structure's safety and durability. Understanding and predicting chloride diffusion is therefore important for evaluating the safety and durability of RC structures. In this study, the chloride diffusion coefficient is predicted with machine learning techniques. Various techniques, such as multiple linear regression, decision tree, random forest, support vector machine, artificial neural networks, extreme gradient boosting, and k-nearest neighbors, were used, and the accuracy of these models was compared. To evaluate accuracy, root mean square error (RMSE), mean square error (MSE), mean absolute error (MAE), and the coefficient of determination (R2) were used as prediction performance indices. The k-fold cross-validation procedure was used to estimate the performance of the machine learning models on data not used during training, and grid search was applied for hyperparameter optimization. The numerical simulations show that ensemble learning methods such as random forest and extreme gradient boosting successfully predicted the chloride diffusion coefficient, and artificial neural networks also provided accurate results.
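
A minimal scikit-learn sketch of the evaluation protocol described, on synthetic placeholder data since the paper's dataset is not reproduced here: grid search over a random forest with k-fold cross-validation, scored with the same indices (RMSE, MAE, R2):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, KFold, cross_validate

# synthetic stand-in for the chloride diffusion dataset
X, y = make_regression(n_samples=300, n_features=6, noise=0.5, random_state=0)

# grid search for hyperparameter optimization, wrapped around k-fold CV
search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
    cv=KFold(n_splits=5, shuffle=True, random_state=0),
    scoring="neg_root_mean_squared_error",
)
search.fit(X, y)

# report the prediction performance indices on held-out folds
scores = cross_validate(search.best_estimator_, X, y, cv=5,
                        scoring=("neg_root_mean_squared_error",
                                 "neg_mean_absolute_error", "r2"))
print(search.best_params_)
print({k: v.mean() for k, v in scores.items() if k.startswith("test")})
```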

A Performance Improvement Technique for Nash Q-learning using Macro-Actions (매크로 행동을 이용한 내시 Q-학습의 성능 향상 기법)

  • Sung, Yun-Sik;Cho, Kyun-Geun;Um, Ky-Hyun
    • Journal of Korea Multimedia Society / v.11 no.3 / pp.353-363 / 2008
  • A multi-agent system has a longer learning period and larger state spaces than a single-agent system. In this paper, we suggest a new method to reduce the learning time of Nash Q-learning in a multi-agent environment: we apply macro-actions to Nash Q-learning to improve the learning speed. In this scheme, when agents select actions, rewards are accumulated as in macro-actions. In the experiments, we compare Nash Q-learning with macro-actions against plain Nash Q-learning. First, we observed how often the agents achieve their goals: agents using Nash Q-learning with 4 macro-actions performed 9.46% better than Nash Q-learning using only the 4 primitive actions. Second, when agents use macro-actions, Q-values are accumulated 2.6 times more often. Finally, agents using macro-actions select about 44% fewer actions. As a result, agents select fewer actions, the macro-actions improve the Q-value updates, and the agents' learning speed improves.
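
The sketch below isolates the macro-action bookkeeping (SMDP-style reward accumulation inside a macro, then a single Q-update). The Nash-equilibrium computation of full Nash Q-learning is omitted, and the corridor environment and macros are hypothetical:

```python
import random

GAMMA, ALPHA, EPS = 0.95, 0.1, 0.1
N, GOAL = 7, 6
MACROS = {"L": [-1], "R": [1], "LL": [-1, -1], "RR": [1, 1]}

def env_step(s, a):
    s2 = max(0, min(N - 1, s + a))
    return s2, (1.0 if s2 == GOAL else -0.01), s2 == GOAL

def run_macro(s, name):
    """Run one macro-action, accumulating the discounted reward inside it."""
    total, disc, done = 0.0, 1.0, False
    for a in MACROS[name]:
        s, r, done = env_step(s, a)
        total += disc * r
        disc *= GAMMA
        if done:
            break
    return s, total, disc, done

Q = {}
for _ in range(3000):
    s = random.randrange(N - 1)
    for _ in range(50):                       # step cap per episode
        name = (random.choice(list(MACROS)) if random.random() < EPS
                else max(MACROS, key=lambda m: Q.get((s, m), 0.0)))
        s2, R, disc, done = run_macro(s, name)
        best = 0.0 if done else max(Q.get((s2, m), 0.0) for m in MACROS)
        q = Q.get((s, name), 0.0)
        Q[(s, name)] = q + ALPHA * (R + disc * best - q)   # one update per macro
        s = s2
        if done:
            break
```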

A slide reinforcement learning for the consensus of a multi-agents system (다중 에이전트 시스템의 컨센서스를 위한 슬라이딩 기법 강화학습)

  • Yang, Janghoon
    • Journal of Advanced Navigation Technology / v.26 no.4 / pp.226-234 / 2022
  • With advances in autonomous vehicles and networked control, there is growing interest in consensus control of multi-agent systems, which controls multiple agents in a distributed fashion rather than controlling a single agent. Because consensus control is distributed, it is bound to experience delay in a practical system, and it is often difficult to obtain a very accurate mathematical model of the system. Although reinforcement learning (RL) methods have been developed to deal with these issues, they often converge slowly in the presence of large uncertainties. We therefore propose slide RL, which combines sliding mode control with RL to be robust to such uncertainties: the structure of a sliding mode controller is imposed on the action in RL, while an auxiliary sliding variable is included in the state information. Numerical simulation results show that slide RL provides performance comparable to model-based consensus control in the presence of unknown time-varying delay and disturbance, while outperforming existing state-of-the-art RL-based consensus algorithms.
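
A minimal sketch of the slide RL idea as read from the abstract, on a hypothetical scalar plant: the action takes the sliding-mode form u = -k * sign(sigma), the RL part picks the switching gain k, and the sliding variable sigma = de + LAM * e is appended to the learner's state:

```python
import math
import random
from collections import defaultdict

LAM, DT, ALPHA, GAMMA, EPS = 1.0, 0.05, 0.1, 0.95, 0.1
GAINS = [0.5, 1.0, 2.0, 4.0]                  # candidate switching gains k

def bucket(x):
    return max(-5, min(5, round(x)))          # coarse state discretization

Q = defaultdict(float)
for _ in range(2000):
    e, de = random.uniform(-2.0, 2.0), 0.0    # tracking error and its rate
    for _ in range(100):
        sigma = de + LAM * e                  # auxiliary sliding variable
        state = (bucket(e), bucket(sigma))    # sigma is part of the RL state
        k = (random.choice(GAINS) if random.random() < EPS
             else max(GAINS, key=lambda g: Q[(state, g)]))
        u = -k * math.copysign(1.0, sigma)    # sliding-mode-shaped action
        de += (u + random.uniform(-0.2, 0.2)) * DT   # unknown disturbance
        e += de * DT
        s2 = (bucket(e), bucket(de + LAM * e))
        best = max(Q[(s2, g)] for g in GAINS)
        # reward drives the system toward the sliding surface sigma = 0
        Q[(state, k)] += ALPHA * (-abs(sigma) + GAMMA * best - Q[(state, k)])
```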

Leveraging Visibility-Based Rewards in DRL-based Worker Travel Path Simulation for Improving the Learning Performance

  • Kim, Minguk;Kim, Tae Wan
    • Korean Journal of Construction Engineering and Management / v.24 no.5 / pp.73-82 / 2023
  • Optimization of Construction Site Layout Planning (CSLP) heavily relies on workers' travel paths. However, traditional path generation approaches predominantly focus on the shortest path, often neglecting critical variables such as individual wayfinding tendencies, the spatial arrangement of site objects, and potential hazards. These oversights can lead to compromised path simulations, resulting in less reliable site layout plans. While Deep Reinforcement Learning (DRL) has been proposed as a potential alternative to address these issues, it has shown limitations: despite producing more realistic travel paths by considering these variables, DRL often struggles with efficiency in complex environments, leading to extended learning times and potential failures. To overcome these challenges, this study introduces a refined model that enhances spatial navigation capability and learning performance by integrating workers' visibility into the reward functions. The proposed model demonstrated a 12.47% increase in the pathfinding success rate and notable improvements in the other two performance measures compared to the existing DRL framework. Adopting this model could greatly enhance the reliability of the results, ultimately improving site operational efficiency and safety management, for example by reducing site congestion and accidents. Future research could expand this study by simulating travel paths in dynamic, multi-agent environments that represent different stages of construction.
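
A sketch of one way a visibility term might enter the reward (a hypothetical formulation; the paper's exact reward shaping is not reproduced): on a grid with obstacles, the step reward mixes goal progress with the fraction of nearby cells the agent can see along straight lines:

```python
import numpy as np

GRID = np.array([[0, 0, 1, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 0],
                 [1, 0, 0, 0]])            # 1 = obstacle, 0 = walkable
GOAL, W_VIS = (3, 3), 0.2                  # visibility weight is a tunable

def visible_fraction(pos, radius=2):
    """Fraction of in-radius cells with a clear straight-line view from pos."""
    r0, c0 = pos
    seen = total = 0
    for r in range(max(0, r0 - radius), min(GRID.shape[0], r0 + radius + 1)):
        for c in range(max(0, c0 - radius), min(GRID.shape[1], c0 + radius + 1)):
            total += 1
            steps = max(abs(r - r0), abs(c - c0)) or 1
            line = [(round(r0 + (r - r0) * t / steps),
                     round(c0 + (c - c0) * t / steps))
                    for t in range(1, steps + 1)]     # crude line of sight
            seen += not any(GRID[p] for p in line)
    return seen / total

def reward(pos, new_pos):
    if new_pos == GOAL:
        return 10.0                        # terminal bonus for reaching the goal
    progress = (abs(pos[0] - GOAL[0]) + abs(pos[1] - GOAL[1])
                - abs(new_pos[0] - GOAL[0]) - abs(new_pos[1] - GOAL[1]))
    return -0.1 + progress + W_VIS * visible_fraction(new_pos)

print(reward((0, 0), (1, 0)), reward((2, 2), GOAL))
```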