• Title/Summary/Keyword: Model-based reinforcement learning

Hybrid Learning for Vision-and-Language Navigation Agents

  • Oh, Suntaek; Kim, Incheol
    • KIPS Transactions on Software and Data Engineering / v.9 no.9 / pp.281-290 / 2020
  • The Vision-and-Language Navigation (VLN) task is a complex intelligence problem that requires both visual and language comprehension skills. In this paper, we propose a new learning model for vision-and-language navigation agents. The model adopts hybrid learning, combining imitation learning based on demonstration data with reinforcement learning based on action rewards. It thereby addresses the weaknesses of both: imitation learning's tendency to be biased toward the demonstration data, and reinforcement learning's relatively low data efficiency. In addition, the proposed model uses a novel path-based reward function designed to overcome the shortcomings of existing goal-based reward functions. We demonstrate the high performance of the proposed model through experiments using the Matterport3D simulation environment and the R2R benchmark dataset.
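
As a rough illustration of such a hybrid objective (a sketch, not the paper's exact formulation), the snippet below mixes a behavior-cloning loss on demonstration batches with an advantage-weighted policy-gradient loss on the agent's own rollouts; the mixing coefficient il_weight is an assumed hyperparameter.

```python
import torch.nn.functional as F

def hybrid_loss(demo_logits, demo_actions, rl_log_probs, advantages, il_weight=0.5):
    """Hypothetical imitation + reinforcement objective.

    demo_logits/demo_actions come from demonstration batches;
    rl_log_probs/advantages come from the agent's own rollouts.
    il_weight (assumed) trades off the two terms.
    """
    il_loss = F.cross_entropy(demo_logits, demo_actions)   # behavior cloning
    rl_loss = -(rl_log_probs * advantages).mean()          # REINFORCE-style term
    return il_weight * il_loss + (1.0 - il_weight) * rl_loss
```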

Obstacle Avoidance System for Autonomous CTVs in Offshore Wind Farms Based on Deep Reinforcement Learning

  • Jingyun Kim; Haemyung Chon; Jackyou Noh
    • IEMEK Journal of Embedded Systems and Applications / v.19 no.3 / pp.131-139 / 2024
  • Crew Transfer Vessels (CTVs) are primarily used for the maintenance of offshore wind farms. Although they are operated manually by professional captains and crews, collisions with other ships and marine structures still occur, which motivates equipping CTVs with autonomous navigation systems. This study develops the obstacle avoidance component of such a system: obstacle avoidance for CTVs is simulated with deep reinforcement learning, taking into account the currents and wind loads present in offshore wind farms. To this end, a 3-degree-of-freedom ship maneuvering model of a CTV including current and wind loads was formulated, and an offshore wind farm simulation environment was implemented to train and test the deep reinforcement learning agent. Obstacle avoidance maneuvers were learned with MATD3, and the model trained over 10,000 episodes successfully avoided both static and moving obstacles. This confirms that the proposed method enables obstacle avoidance for autonomous CTVs operating within offshore wind farms.
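
As background for the 3-degree-of-freedom maneuvering model, the sketch below shows a generic surge-sway-yaw kinematic step (standard ship kinematics under Euler integration; the paper's full model additionally includes current and wind load terms):

```python
import numpy as np

def step_3dof(eta, nu, nu_dot, dt):
    """One Euler step of 3-DOF (surge, sway, yaw) ship kinematics.

    eta = [x, y, psi]: position and heading in the earth-fixed frame.
    nu  = [u, v, r]:   body-fixed surge/sway velocities and yaw rate.
    nu_dot: accelerations supplied by the maneuvering model (assumed given).
    """
    x, y, psi = eta
    u, v, r = nu
    # Rotate body-fixed velocities into the earth-fixed frame.
    x_dot = u * np.cos(psi) - v * np.sin(psi)
    y_dot = u * np.sin(psi) + v * np.cos(psi)
    eta_next = np.array([x + x_dot * dt, y + y_dot * dt, psi + r * dt])
    nu_next = np.asarray(nu) + np.asarray(nu_dot) * dt
    return eta_next, nu_next
```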

Multi Behavior Learning of Lamp Robot based on Q-learning

  • Kwon, Ki-Hyeon; Lee, Hyung-Bong
    • Journal of Digital Contents Society / v.19 no.1 / pp.35-41 / 2018
  • Q-learning, a reinforcement learning algorithm, is useful for learning the goal of one behavior at a time over a combination of discrete states and actions. To learn multiple behaviors, applying a behavior-based architecture together with an appropriate behavior-arbitration method lets a robot act quickly and reliably. Q-learning is a popular reinforcement learning method and is widely used for robot learning because it is simple, convergent, and, being off-policy, little affected by the training environment. In this paper, Q-learning is applied to a lamp robot to learn multiple behaviors (human recognition and desk-object recognition). Since the learning rate of Q-learning can affect the robot's performance when learning multiple behaviors, we present an optimal multiple-behavior learning model obtained by varying the learning rate.
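
For reference, the tabular Q-learning update in question is the standard one sketched below; alpha is the learning rate whose effect on multi-behavior learning the paper studies (the default values here are illustrative only):

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """Standard off-policy tabular update:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q
```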

A Method for Learning Macro-Actions for Virtual Characters Using Programming by Demonstration and Reinforcement Learning

  • Sung, Yun-Sick; Cho, Kyun-Geun
    • Journal of Information Processing Systems / v.8 no.3 / pp.409-420 / 2012
  • Decision-making by agents in games is commonly based on reinforcement learning. To improve the quality of agents, the time and state space required for learning must be reduced. Such problems can be addressed with Macro-Actions, which are defined and executed as sequences of primitive actions; this line of research reduces learning time by cutting down the number of policy decisions the agent must make. Macro-Actions were originally defined as repetitions of a single primitive action; following studies that generated Macro-Actions by learning, they are now understood to consist of diverse primitive actions. However, an enormous amount of learning time and state space is required to generate them. To resolve these issues, insights from research on learning tasks through Programming by Demonstration (PbD) can be applied, generating Macro-Actions while reducing learning time and state space. In this paper, we propose a method to define and execute Macro-Actions: the Macro-Actions are learned from a human subject via PbD, and a policy over them is learned by reinforcement learning. In an experiment, the proposed method was applied to a car simulation to verify its scalability. Data was collected from the driving control of a human subject, the Macro-Actions required for running a car were generated, and the policy necessary for driving on a track was learned. Acquiring Macro-Actions via PbD reduced driving time by about 16% compared to the case in which Macro-Actions were defined directly by a human subject, and learning time was also reduced through faster convergence to the optimal policy.
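
A minimal sketch of executing a learned Macro-Action as a single semi-MDP step is shown below; the env.step interface and the discounting scheme are assumptions for illustration, not the paper's implementation:

```python
def execute_macro(env, macro, gamma=0.95):
    """Run a Macro-Action (a recorded sequence of primitive actions)
    and accumulate its discounted reward as one decision step."""
    total_reward, discount = 0.0, 1.0
    for primitive in macro:
        state, reward, done = env.step(primitive)  # assumed interface
        total_reward += discount * reward
        discount *= gamma
        if done:
            break
    # The caller updates its policy once per macro, not per primitive,
    # which is what cuts down the number of policy decisions.
    return state, total_reward, discount, done
```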

Leveraging Visibility-Based Rewards in DRL-based Worker Travel Path Simulation for Improving the Learning Performance

  • Kim, Minguk; Kim, Tae Wan
    • Korean Journal of Construction Engineering and Management / v.24 no.5 / pp.73-82 / 2023
  • Optimization of Construction Site Layout Planning (CSLP) relies heavily on workers' travel paths. However, traditional path generation approaches focus predominantly on the shortest path, often neglecting critical variables such as individual wayfinding tendencies, the spatial arrangement of site objects, and potential hazards. These oversights can compromise path simulations and thus yield less reliable site layout plans. Deep Reinforcement Learning (DRL) has been proposed as an alternative: it produces more realistic travel paths by considering these variables, but it often struggles in complex environments, leading to extended learning times and potential failures. To overcome these challenges, this study introduces a refined model that improves spatial navigation capability and learning performance by integrating workers' visibility into the reward function. The proposed model achieved a 12.47% increase in pathfinding success rate and notable improvements in two other performance measures compared to the existing DRL framework. Adopting this model could greatly enhance the reliability of the results, ultimately improving site operational efficiency and safety management, for example by reducing site congestion and accidents. Future research could extend this work by simulating travel paths in dynamic, multi-agent environments that represent different stages of construction.
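
A hypothetical illustration of visibility-shaped rewards on a grid abstraction is sketched below; the bonus magnitudes and the grid representation are assumptions for illustration, not the paper's actual reward design:

```python
def shaped_reward(agent_cell, goal_cell, visible_cells,
                  step_cost=-0.01, vis_bonus=0.05, goal_reward=1.0):
    """Reward with a small bonus whenever the goal lies in the
    worker agent's current field of view, guiding spatial navigation."""
    r = step_cost                      # discourage wandering
    if goal_cell in visible_cells:
        r += vis_bonus                 # visibility-based shaping term
    if agent_cell == goal_cell:
        r += goal_reward               # terminal success reward
    return r
```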

Multicast Tree Generation using Meta Reinforcement Learning in SDN-based Smart Network Platforms

  • Chae, Jihun; Kim, Namgi
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.9 / pp.3138-3150 / 2021
  • Multimedia services on the Internet are continuously increasing, and with them the demand for technology that delivers multimedia traffic efficiently. The multicast technique, which delivers the same content to several destinations, is continually being developed: content is delivered from a source to all destinations through a multicast tree, and a low-cost tree increases the utilization of network resources. However, finding the optimal multicast tree with minimum link cost is very difficult; its computational complexity equals that of the Steiner tree problem, which is NP-complete. We therefore need an effective way to obtain a low-cost multicast tree with little computation time on SDN-based smart network platforms. In this paper, we propose a new multicast tree generation algorithm that produces a multicast tree using an agent trained by model-based meta reinforcement learning. Experiments verified that the proposed algorithm generates multicast trees in less time than existing approximation algorithms, and that it produces lower-cost trees in a dynamic network environment than a previous DQN-based algorithm.
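
For context, the classical baseline such approaches are compared against is a polynomial-time Steiner tree approximation; below is a sketch on a toy topology (node names and edge weights are made up) using networkx:

```python
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

# Hypothetical weighted topology: source "s", destinations "d1".."d3".
G = nx.Graph()
G.add_weighted_edges_from([
    ("s", "a", 1), ("a", "b", 2), ("b", "d1", 1),
    ("a", "d2", 3), ("s", "d2", 5), ("b", "d3", 2),
])
terminals = ["s", "d1", "d2", "d3"]  # source plus destinations

# Classical approximation of the minimum-cost multicast (Steiner) tree.
tree = steiner_tree(G, terminals, weight="weight")
print(sorted(tree.edges(data="weight")))
```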

A Study on Cooperative Traffic Signal Control at multi-intersection

  • Kim, Dae Ho; Jeong, Ok Ran
    • Journal of IKEEE / v.23 no.4 / pp.1381-1386 / 2019
  • As traffic congestion in cities becomes more serious, intelligent traffic control is being actively researched. Reinforcement learning is the most widely used algorithm for traffic signal control, and deep reinforcement learning has recently attracted researchers' attention; extended versions of deep reinforcement learning have emerged as it demonstrated high performance in various fields. However, most existing traffic signal control studies consider a single-intersection environment, a limitation because a method for a single intersection cannot account for the traffic conditions of the entire city. In this paper, we propose cooperative traffic signal control for a multi-intersection environment. The traffic signal control algorithm combines extended versions of deep reinforcement learning and takes the traffic conditions of adjacent intersections into account. In experiments, we compare the proposed algorithm with an existing deep reinforcement learning algorithm and demonstrate the higher performance of our model both with and without the cooperative method.
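
One simple way to realize such cooperation, assuming vector observations per intersection (an illustrative choice, not necessarily the paper's design), is to augment each agent's state with its neighbors' observations:

```python
import numpy as np

def cooperative_state(own_obs, neighbor_obs_list):
    """Concatenate an intersection's own observation (e.g., per-lane
    queue lengths) with those of adjacent intersections, so the signal
    policy can react to neighboring traffic conditions."""
    return np.concatenate([own_obs, *neighbor_obs_list])
```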

Control of Crawling Robot using Actor-Critic Fuzzy Reinforcement Learning

  • Moon, Young-Joon; Lee, Jae-Hoon; Park, Joo-Young
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.4 / pp.519-524 / 2009
  • Recently, reinforcement learning methods have drawn much interest in the area of machine learning. Dominant approaches in reinforcement learning research include the value-function approach, the policy-search approach, and the actor-critic approach; pertinent to this paper are algorithms developed along the actor-critic line for problems with continuous states and continuous actions. In particular, this paper presents a method that combines ACFRL (actor-critic fuzzy reinforcement learning), an actor-critic-type reinforcement learning method based on fuzzy theory, with RLS-NAC, which is based on RLS filters and the natural actor-critic method. The presented method is applied to a control problem for crawling robots, and results comparing learning performance are reported.
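
As background for the RLS-NAC component, the sketch below gives the generic recursive-least-squares update for a linear critic (textbook RLS with forgetting factor lam; the paper's full natural actor-critic machinery is omitted):

```python
import numpy as np

def rls_update(w, P, phi, target, lam=0.99):
    """One RLS step for a linear critic value = w @ phi.

    P is the inverse-covariance estimate; lam < 1 forgets old samples.
    """
    k = P @ phi / (lam + phi @ P @ phi)   # gain vector
    err = target - w @ phi                # prediction error
    w = w + k * err
    P = (P - np.outer(k, phi @ P)) / lam  # covariance update
    return w, P
```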

Motion Generation of a Single Rigid Body Character Using Deep Reinforcement Learning

  • Ahn, Jewon; Gu, Taehong; Kwon, Taesoo
    • Journal of the Korea Computer Graphics Society / v.27 no.3 / pp.13-23 / 2021
  • In this paper, we propose a framework that generates the trajectory of a single rigid body from its COM configuration and contact pose. Because the input dimension is smaller than when a full-body state is used, the reinforcement learning time is reduced. Even with a 68% reduction in learning time (approximately two hours), the character trained by our network is more robust to external perturbations, tolerating an external force of 1,500 N, which is about 7.5 times larger than the maximum magnitude handled by a previous approach. The framework uses centroidal dynamics to calculate the next configuration of the COM, and reinforcement learning to obtain a policy whose outputs parameterize the contact positions and forces.
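
The centroidal dynamics involved can be summarized by a minimal integration step (textbook form; the policy's contact parameterization from the paper is omitted):

```python
import numpy as np

def centroidal_step(c, c_dot, L, forces, points, mass, dt):
    """Euler step of centroidal dynamics for a single rigid body.

    Linear part:  mass * c_ddot = mass * g + sum(f_i)
    Angular part: L_dot = sum((p_i - c) x f_i) over contact points p_i.
    """
    g = np.array([0.0, -9.81, 0.0])
    c_ddot = g + sum(forces) / mass
    L_dot = sum(np.cross(p - c, f) for p, f in zip(points, forces))
    c_dot = c_dot + c_ddot * dt
    c = c + c_dot * dt
    L = L + L_dot * dt
    return c, c_dot, L
```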

Design and implementation of Robot Soccer Agent Based on Reinforcement Learning

  • Kim, In-Cheol
    • The KIPS Transactions: Part B / v.9B no.2 / pp.139-146 / 2002
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such an environment. Reinforcement learning is the branch of machine learning in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward; it differs from supervised learning in that no input-output pairs are presented as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning require no model of the surrounding environment to be defined or learned, yet can still learn the optimal policy provided the agent can visit every state-action pair infinitely often. The biggest problem with monolithic reinforcement learning, however, is that its straightforward application does not scale to more complex environments because the state space becomes intractably large. To address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement on the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly, assigning each module a weight according to its contribution to rewards. In addition to handling a large state space effectively, AMMQL therefore shows higher adaptability to environmental changes than pure MQL. In this paper we use the AMMQL algorithm as the learning method for dynamic positioning of the robot soccer agent and implement a robot soccer agent system called Cogitoniks.
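
A minimal sketch of the adaptive-mediation idea follows; the table layout and weight source are assumptions (the paper derives each module's weight from its contribution to rewards):

```python
import numpy as np

def ammql_action(module_q_tables, module_weights, state):
    """Mix per-module Q-values with adaptive weights before acting.

    Plain Modular Q-Learning would combine the modules in a fixed way;
    here the weights can change as modules prove more or less useful.
    """
    combined = sum(w * Q[state] for w, Q in zip(module_weights, module_q_tables))
    return int(np.argmax(combined))
```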