• Title/Summary/Keyword: Robot-Agent

Deep Reinforcement Learning in ROS-based autonomous robot navigation

  • Roland, Cubahiro;Choi, Donggyu;Jang, Jongwook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.47-49
    • /
    • 2022
  • Robot navigation has seen major improvement since the rediscovery of the potential of Artificial Intelligence (AI) and the attention it has garnered in research circles. A notable achievement in the area was the application of Deep Learning (DL) in computer vision, with outstanding everyday applications such as face recognition, object detection, and more. However, robotics in general still depends on human input in certain areas such as localization and navigation. In this paper, we propose a case study of robot navigation based on deep reinforcement learning. We look into the benefits of switching from traditional ROS-based navigation algorithms to machine learning approaches and methods. We describe the state of the art by introducing the concepts of Reinforcement Learning (RL), Deep Learning (DL), and DRL before focusing on visual navigation based on DRL. The case study precedes further real-life deployment, in which a mobile navigation agent learns to navigate unfamiliar areas.

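The RL concepts this abstract introduces can be illustrated with a minimal tabular Q-learning sketch on a toy grid world. The grid, rewards, and hyperparameters below are illustrative assumptions, not taken from the paper, whose setting uses deep networks over visual input:

```python
import numpy as np

# Toy 4x4 grid world: agent starts at (0, 0), goal at (3, 3).
# Actions: 0=up, 1=down, 2=left, 3=right.
N = 4
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def step(state, action):
    r, c = state
    dr, dc = ACTIONS[action]
    nr = max(0, min(N - 1, r + dr))
    nc = max(0, min(N - 1, c + dc))
    done = (nr, nc) == (N - 1, N - 1)
    reward = 1.0 if done else -0.01   # small step cost, goal bonus
    return (nr, nc), reward, done

def train(episodes=2000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    q = np.zeros((N, N, len(ACTIONS)))
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(100):          # cap episode length
            a = int(rng.integers(4)) if rng.random() < eps else int(np.argmax(q[s]))
            s2, r, done = step(s, a)
            # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
            q[s][a] += alpha * (r + gamma * np.max(q[s2]) * (not done) - q[s][a])
            s = s2
            if done:
                break
    return q

q = train()
# The greedy policy read off q then navigates from start to goal.
```

The same update rule underlies DQN; the deep variant replaces the table `q` with a network trained on the captured observations.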
Autonomous and Asynchronous Triggered Agent Exploratory Path-planning Via a Terrain Clutter-index using Reinforcement Learning

  • Kim, Min-Suk;Kim, Hwankuk
    • Journal of information and communication convergence engineering
    • /
    • v.20 no.3
    • /
    • pp.181-188
    • /
    • 2022
  • An intelligent distributed multi-agent system (IDMS) using reinforcement learning (RL) is a challenging and intricate problem in which one or more agents aim to achieve their specific goals (sub-goals and a final goal) by moving between states in a complex, cluttered environment. The IDMS environment provides a cumulative optimal reward for each action based on the policy of the learning process. Most actions involve interacting with a given IDMS environment, which therefore provides the following elements: a starting agent state, multiple obstacles, agent goals, and a clutter index. The reward in the environment is also reflected by the RL-based agents, which can move randomly or intelligently to reach their respective goals, improving the agents' learning performance. We extend the intelligent multi-agent systems of our previous works in two ways: (a) we propose an environment clutter-index for agent sub-goal selection and analyze its effect, and (b) we propose a new RL reward scheme based on the environmental clutter-index, identifying and analyzing the prerequisites and conditions for improving the overall system.
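The abstract does not give the exact form of the clutter-index reward scheme, so the sketch below is only one plausible shape: a `clutter_index` helper (a hypothetical stand-in for the paper's terrain clutter-index) scales a step penalty by local obstacle density, with all coefficients invented for illustration:

```python
import numpy as np

def clutter_index(occupancy, pos, radius=1):
    """Fraction of occupied cells in a window around pos — an
    illustrative stand-in for the paper's terrain clutter-index."""
    r, c = pos
    window = occupancy[max(0, r - radius):r + radius + 1,
                       max(0, c - radius):c + radius + 1]
    return float(window.mean())

def reward(occupancy, pos, goal, step_cost=-0.05, goal_reward=1.0, k=0.5):
    # Hypothetical scheme: a fixed step cost, made worse in cluttered
    # regions, plus a terminal bonus at the goal.
    if pos == goal:
        return goal_reward
    return step_cost - k * clutter_index(occupancy, pos)

grid = np.zeros((5, 5))
grid[2, 2] = 1  # one obstacle
# Steps taken near the obstacle are penalized more than steps in the open.
```

Under such a scheme, agents are nudged toward sub-goals in less cluttered terrain, which matches the stated motivation of clutter-aware sub-goal selection.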

Cooperative Multi-agent Reinforcement Learning on Sparse Reward Battlefield Environment using QMIX and RND in Ray RLlib

  • Minkyoung Kim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.1
    • /
    • pp.11-19
    • /
    • 2024
  • Multi-agent systems can be utilized in various real-world cooperative environments such as battlefield engagements and unmanned transport vehicles. In the context of battlefield engagements, where dense reward design faces challenges due to limited domain knowledge, it is crucial to consider situations that are learned through explicit sparse rewards. This paper explores the collaborative potential among allied agents in a battlefield scenario. Utilizing the Multi-Robot Warehouse Environment (RWARE) as a sparse reward environment, we define analogous problems and establish evaluation criteria. Constructing a learning environment with the QMIX algorithm from the reinforcement learning library Ray RLlib, we enhance the Agent Network of QMIX and integrate Random Network Distillation (RND). This enables the extraction of patterns and temporal features from partial observations of agents, confirming the potential for improving the acquisition of sparse reward experiences through intrinsic rewards.
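The RND idea referenced above can be sketched independently of RLlib: a frozen random "target" network and a trained "predictor"; the predictor's error on an observation is the intrinsic reward, which shrinks as observations become familiar. The linear networks, dimensions, and learning rate here are deliberate simplifications, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, EMB_DIM = 8, 4

W_target = rng.normal(size=(OBS_DIM, EMB_DIM))   # frozen random target net
W_pred = np.zeros((OBS_DIM, EMB_DIM))            # predictor, trained online

def intrinsic_reward(obs):
    # Prediction error of the predictor against the frozen target.
    return float(np.mean((obs @ W_target - obs @ W_pred) ** 2))

def update_predictor(obs, lr=0.01):
    global W_pred
    err = obs @ W_pred - obs @ W_target          # embedding-space error
    W_pred -= lr * np.outer(obs, err)            # gradient step on MSE

obs = rng.normal(size=OBS_DIM)
before = intrinsic_reward(obs)
for _ in range(200):
    update_predictor(obs)
after = intrinsic_reward(obs)
# A repeatedly seen observation yields a shrinking intrinsic bonus,
# which is exactly the exploration signal used to densify sparse rewards.
```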

Lifelike Behaviors of Collective Autonomous Mobile Agents

  • Min, Suk-Ki;Hoon Kang
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1998.06a
    • /
    • pp.176-180
    • /
    • 1998
  • We often gaze at the peculiar flocking behavior of birds and fish. This paper demonstrates that multiple mobile agent robots exhibit complex behaviors arising from efficient and strategic rules. The simulated flock is realized by a distributed behavioral model, and each mobile robot decides its own motion as an individual that moves continually by sensing the dynamic environment.

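Distributed behavioral models of this kind are commonly realized with boids-style rules, in which each agent steers by cohesion, separation, and alignment computed only from its neighbors, with no central controller. A sketch under that assumption, with illustrative weights that are not from the paper:

```python
import numpy as np

def flock_step(pos, vel, dt=0.1, r_sep=0.5, w_coh=0.1, w_sep=0.2, w_ali=0.1):
    """One synchronous update of a boids-style flock."""
    n = len(pos)
    new_vel = vel.copy()
    for i in range(n):
        others = [j for j in range(n) if j != i]
        coh = pos[others].mean(axis=0) - pos[i]       # toward flock center
        ali = vel[others].mean(axis=0) - vel[i]       # match neighbors' heading
        sep = np.zeros(2)
        for j in others:                              # push away from close agents
            d = pos[i] - pos[j]
            dist = max(np.linalg.norm(d), 0.1)        # floor avoids huge kicks
            if dist < r_sep:
                sep += d / dist ** 2
        new_vel[i] = vel[i] + w_coh * coh + w_ali * ali + w_sep * sep
    return pos + dt * new_vel, new_vel

rng = np.random.default_rng(1)
pos = rng.uniform(-2, 2, size=(10, 2))
vel = rng.uniform(-1, 1, size=(10, 2))
for _ in range(100):
    pos, vel = flock_step(pos, vel)
# Agents drift toward a common heading and settle into a loose cluster.
```

The "lifelike" quality comes from the fact that no rule mentions the flock as a whole; clustering and common heading emerge from the three local terms.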
Design and Implementation of Remote Monitoring Technology based-on Web-Service for URC Robot (웹 서비스 기반 URC 로봇 원격 모니터링 기술의 설계 및 구현)

  • Im, Sung-Ho;Kim, Joo-Man
    • The Journal of the Korea Contents Association
    • /
    • v.6 no.11
    • /
    • pp.285-294
    • /
    • 2006
  • In this paper, we propose a new remote control and monitoring technique for URC robots using web-service technology. A URC robot needs an architecture that can be applied across a variety of hardware and software platforms to support multiple interfaces with the external world in the ubiquitous environment. We first examine how to adopt web-service technology in the embedded environment, and then design and implement web-service-based remote control and monitoring technology for URC robots so as to support interaction with agent programs. The technology was simulated and implemented on the target robot NETTORO, proving its practical worth.

Advance of Agent Age (에이전트의 시대가 오고 있다)

  • Lee, Keun-Sang
    • Journal of Information Management
    • /
    • v.31 no.4
    • /
    • pp.71-87
    • /
    • 2000
  • Recently, research on mobile agent systems has been conducted actively to enhance the usability of heterogeneous environments linked via networks and to solve problems of existing distributed-object computing. Through this research and development, many studies have been applied to areas such as existing distributed systems as well as e-commerce, network maintenance, and information retrieval. This paper reviews issues related to agent studies, comprehensive studies to enhance telecommunication functionality among agents, and the future and application fields of agents.

Obstacle Avoidance of Autonomous Mobile Agent using Circular Navigation Method (곡률 주행 기법을 이용한 무인 이동 개체의 장애물 회피 알고리즘)

  • Lee, Jin-Seob;Chwa, Dong-Kyoung;Hong, Suk-Kyo
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.58 no.4
    • /
    • pp.824-831
    • /
    • 2009
  • This paper proposes an obstacle avoidance algorithm for an autonomous mobile robot. The proposed method, based on circular navigation with a probability distribution, finds local paths to avoid collisions. Furthermore, it enables mobile robots to achieve obstacle avoidance and optimal path planning through accurate determination of the final goal. Simulation results are included to show the feasibility of the proposed method.
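The abstract leaves the circular navigation method itself unspecified, so the following is purely an illustration of curvature-based local avoidance, not the paper's algorithm: candidate circular arcs are scored by goal progress, and arcs ending too close to an obstacle are rejected (only the arc endpoint is checked, for brevity):

```python
import math

def best_arc(robot, goal, obstacles, v=1.0, dt=1.0,
             curvatures=(-1.0, -0.5, -0.25, 0.0, 0.25, 0.5, 1.0),
             clearance=0.4):
    """Return the candidate curvature whose arc endpoint is closest to
    the goal while keeping clearance from every obstacle."""
    x, y, th = robot
    best_k, best_cost = None, float("inf")
    for k in curvatures:
        if abs(k) < 1e-6:                      # straight segment
            nx, ny = x + v * dt * math.cos(th), y + v * dt * math.sin(th)
        else:                                  # circular arc of radius 1/k
            nth = th + k * v * dt
            nx = x + (math.sin(nth) - math.sin(th)) / k
            ny = y - (math.cos(nth) - math.cos(th)) / k
        if any(math.hypot(nx - ox, ny - oy) < clearance for ox, oy in obstacles):
            continue                           # endpoint too close to an obstacle
        cost = math.hypot(goal[0] - nx, goal[1] - ny)
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Obstacle dead ahead: the straight option is rejected, a curved one wins.
k = best_arc(robot=(0.0, 0.0, 0.0), goal=(2.0, 0.0), obstacles=[(1.0, 0.0)])
```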

Evolution of multiple agent system from basic action to intelligent behavior

  • Sugisaka, Masanori;Wang, Xiapshu
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 1998.10a
    • /
    • pp.190-194
    • /
    • 1998
  • In this paper, we introduce a micro robot soccer playing system as a standard test bench for the study of multiple agent systems. Our method is based on the following viewpoints: (1) any complex behavior, such as cooperation among agents, must be completed by sequential basic actions of the agents concerned; (2) those basic actions can be well defined; but (3) how to organize those actions at the current time point so as to result in a new state beneficial to the end aim ought to be achieved by a kind of self-learning, self-organization strategy.

Deep Q-Network based Game Agents (심층 큐 신경망을 이용한 게임 에이전트 구현)

  • Han, Dongki;Kim, Myeongseop;Kim, Jaeyoun;Kim, Jung-Su
    • The Journal of Korea Robotics Society
    • /
    • v.14 no.3
    • /
    • pp.157-162
    • /
    • 2019
  • The video game Tetris is one of the most popular games, and it is well known that its rules can be modelled as an MDP (Markov Decision Process). This paper presents a DQN (Deep Q-Network) based game agent for Tetris. To this end, the state is defined as the captured image of the Tetris game board, and the reward is designed as a function of the lines cleared by the game agent. The action is defined as left, right, rotate, drop, and a finite number of their combinations. In addition, PER (Prioritized Experience Replay) is employed in order to enhance learning performance. More than 500,000 episodes are used to train the network, and the game agent employs the trained network to make decisions. The performance of the developed algorithm is validated not only via simulation but also on a real Tetris robot agent made of a camera, two Arduinos, four servo motors, and 3D-printed artificial fingers.
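The PER component mentioned in the abstract can be sketched as a proportional-priority buffer: transitions are sampled with probability proportional to their (absolute) TD error raised to a power α, and importance-sampling weights correct the resulting bias. The sum-tree used in practice for O(log n) sampling is omitted here for brevity, and all hyperparameters are illustrative:

```python
import numpy as np

class PrioritizedReplay:
    """Minimal proportional prioritized experience replay (O(n) sampling)."""
    def __init__(self, capacity, alpha=0.6, eps=1e-3, seed=0):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.data, self.prios = [], []
        self.rng = np.random.default_rng(seed)

    def add(self, transition, td_error):
        if len(self.data) >= self.capacity:   # drop the oldest entry
            self.data.pop(0)
            self.prios.pop(0)
        self.data.append(transition)
        self.prios.append((abs(td_error) + self.eps) ** self.alpha)

    def sample(self, batch_size, beta=0.4):
        p = np.array(self.prios)
        p /= p.sum()
        idx = self.rng.choice(len(self.data), size=batch_size, p=p)
        # Importance-sampling weights correct the non-uniform sampling bias.
        w = (len(self.data) * p[idx]) ** (-beta)
        w /= w.max()
        return [self.data[i] for i in idx], idx, w

buf = PrioritizedReplay(capacity=100)
for i in range(50):
    buf.add(transition=i, td_error=float(i))  # larger error -> sampled more often
batch, idx, w = buf.sample(32)
```

In a DQN training loop, the sampled batch feeds the network update and each transition's priority is refreshed with its new TD error afterwards.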

Swarm Control of Distributed Autonomous Robot System based on Artificial Immune System using PSO (PSO를 이용한 인공면역계 기반 자율분산로봇시스템의 군 제어)

  • Kim, Jun-Yeup;Ko, Kwang-Eun;Park, Seung-Min;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.18 no.5
    • /
    • pp.465-470
    • /
    • 2012
  • This paper proposes a distributed autonomous control method for swarm robot behavior strategies based on an artificial immune system, together with an optimization strategy for the artificial immune system. The behavior strategies of the swarm robots depend on the task distribution in the environment, so the dynamics of the system environment must be considered. In this paper, the behavior strategies are divided into dispersion and aggregation. To apply the artificial immune system, an individual of the swarm is regarded as a B-cell, each task distribution in the environment as an antigen, a behavior strategy as an antibody, and a control parameter as a T-cell, respectively. The proposed method proceeds as follows: when the environmental condition changes, each agent selects an appropriate behavior strategy, and its strategy is stimulated and suppressed by other agents through communication. Finally, the most stimulated strategy is adopted as the swarm behavior strategy. To select the behavior strategy more accurately, an optimized parameter-learning procedure is required for the stimulus function of antigen to antibody in the artificial immune system. In this paper, a particle swarm optimization algorithm is applied to this learning procedure. The proposed method shows more adaptive and robust results than the existing system in terms of the swarm robots' learning and adaptation with respect to changing tasks.
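The PSO learning procedure the abstract describes follows the standard velocity update, pulling each particle toward its personal best and the swarm's global best. A minimal sketch, with a toy sphere objective standing in for the paper's stimulus-parameter fitness and all coefficients chosen for illustration:

```python
import numpy as np

def pso(objective, dim=2, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization (minimization)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest = x.copy()                                  # personal bests
    pbest_val = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()            # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Inertia + pull toward personal best + pull toward global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, float(pbest_val.min())

# Toy objective: sphere function, minimum at the origin.
best, best_val = pso(lambda p: float(np.sum(p ** 2)))
```

In the paper's setting, `objective` would instead evaluate how well a candidate stimulus-function parameter lets the swarm match its behavior strategy to the current task distribution.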