• Title/Summary/Keyword: Swarm Learning

Adaptability Improvement of Learning from Demonstration with Particle Swarm Optimization for Motion Planning (운동계획을 위한 입자 군집 최적화를 이용한 시범에 의한 학습의 적응성 향상)

  • Kim, Jeong-Jung; Lee, Ju-Jang
    • Journal of the Korean Society of Industry Convergence / v.19 no.4 / pp.167-175 / 2016
  • We present a method for improving the adaptability of the Learning from Demonstration (LfD) strategy by combining LfD with Particle Swarm Optimization (PSO). A trajectory generated by LfD is modified with PSO by minimizing a fitness function that accounts for constraints, so the resulting trajectory remains suitable for the task while satisfying those constraints. The effectiveness of the method is shown on a target-reaching task with a manipulator in three-dimensional space.
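The LfD + PSO combination described in this abstract can be illustrated with a minimal sketch: a hypothetical demonstrated trajectory serves as the particles' starting point, and a standard global-best PSO perturbs its waypoints to minimize a constraint-aware fitness. The waypoint count, fitness terms, and spherical obstacle below are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

# Hypothetical demonstrated trajectory: 10 waypoints in 3-D (straight-line reach).
demo = np.linspace([0.0, 0.0, 0.0], [1.0, 1.0, 0.5], 10)      # shape (10, 3)
obstacle, radius = np.array([0.5, 0.5, 0.25]), 0.15            # assumed constraint

def fitness(flat_traj):
    """Assumed fitness: stay close to the demonstration, keep the path short,
    and penalize waypoints that violate the obstacle-clearance constraint."""
    traj = flat_traj.reshape(-1, 3)
    similarity = np.sum((traj - demo) ** 2)
    smoothness = np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))
    clearance = np.linalg.norm(traj - obstacle, axis=1)
    penalty = np.sum(np.maximum(0.0, radius - clearance)) * 100.0
    return similarity + smoothness + penalty

# Standard global-best PSO over the flattened waypoint vector.
rng = np.random.default_rng(0)
dim, n_particles = demo.size, 30
x = demo.flatten() + rng.normal(0, 0.05, (n_particles, dim))   # initialize near the demo
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((2, n_particles, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    f = np.array([fitness(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

adapted_trajectory = gbest.reshape(-1, 3)   # demonstration adapted to the constraint
```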

Enhanced salp swarm algorithm based on opposition learning and merit function methods for optimum design of MTMD

  • Raeesi, Farzad; Shirgir, Sina; Azar, Bahman F.; Veladi, Hedayat; Ghaffarzadeh, Hosein
    • Earthquakes and Structures / v.18 no.6 / pp.719-730 / 2020
  • Recently, population-based optimization algorithms have been developed to deal with a variety of optimization problems. In this paper, the salp swarm algorithm (SSA) is substantially enhanced into a new algorithm, the Enhanced Salp Swarm Algorithm (ESSA), which is effectively utilized in optimization problems. To construct the ESSA, opposition-based learning and a merit-function method are added to the standard SSA to strengthen both its exploration and exploitation abilities. To judge the performance of the ESSA clearly, it is first employed to solve several mathematical benchmark test functions. Next, it is applied to engineering problems such as the optimal design of benchmark buildings equipped with multiple tuned mass dampers (MTMD) under earthquake excitation. Comparing the obtained results with those of other algorithms shows that the proposed ESSA not only provides very competitive results but can also be applied successfully to the optimal design of the MTMD.
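The opposition-based learning component mentioned above is commonly grafted onto swarm algorithms by mirroring each candidate within the search bounds and keeping the better of the pair. A minimal sketch follows; the bounds, population size, and sphere objective are placeholder assumptions, and the merit-function part is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)
lb, ub, dim, n = -5.0, 5.0, 10, 40          # assumed bounds, dimension, population size

def objective(x):
    return np.sum(x ** 2)                    # placeholder benchmark (sphere function)

# Initial salp population.
pop = rng.uniform(lb, ub, (n, dim))

def opposition_step(pop):
    """Opposition-based learning: for each solution x, form its opposite
    x_opp = lb + ub - x and keep whichever of the pair scores better."""
    opposite = lb + ub - pop
    keep = np.array([objective(a) <= objective(b) for a, b in zip(pop, opposite)])
    return np.where(keep[:, None], pop, opposite)

pop = opposition_step(pop)                   # applied before/within the SSA iterations
```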

Reinforcement Learning Based Evolution and Learning Algorithm for Cooperative Behavior of Swarm Robot System (군집 로봇의 협조 행동을 위한 강화 학습 기반의 진화 및 학습 알고리즘)

  • Seo, Sang-Wook; Kim, Ho-Duck; Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.5 / pp.591-597 / 2007
  • In swarm robot systems, each robot must behave on its own according to its state and environment and, if necessary, cooperate with other robots in order to carry out a given task. It is therefore essential that each robot have both learning and evolution abilities so that it can adapt to dynamic environments. In this paper, a new polygon-based Q-learning algorithm and distributed genetic algorithms are proposed for behavior learning and evolution of collective autonomous mobile robots. Through the distributed genetic algorithm, which exchanges chromosomes acquired under different environments via communication, each robot can improve its behavioral ability. In particular, to improve the performance of evolution, selective crossover exploiting the characteristics of reinforcement learning is adopted. We verify the effectiveness of the proposed method by applying it to a cooperative search problem.
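The Q-learning half of the method (the polygon-based state construction is not detailed in the abstract) can be illustrated with a standard tabular update; the states, action set, and parameters below are placeholders, not the paper's actual design.

```python
import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.9, 0.2        # assumed learning parameters
actions = ["forward", "left", "right", "stop"]
Q = defaultdict(float)                        # Q[(state, action)] -> value

def choose_action(state):
    """Epsilon-greedy action selection over the robot's action set."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    """Standard one-step Q-learning backup."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```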

Behavior Learning and Evolution of Individual Robot for Cooperative Behavior of Swarm Robot System (군집 로봇의 협조 행동을 위한 로봇 개체의 행동학습과 진화)

  • Sim, Kwee-Bo; Lee, Dong-Wook
    • Journal of the Korean Institute of Intelligent Systems / v.16 no.2 / pp.131-137 / 2006
  • In swarm robot systems, each robot must behave on its own according to its state and environment and, if necessary, cooperate with other robots in order to carry out a given task. It is therefore essential that each robot have both learning and evolution abilities so that it can adapt to dynamic environments. In this paper, a new learning and evolution method based on reinforcement learning with delayed reward and distributed genetic algorithms is proposed for behavior learning and evolution of collective autonomous mobile robots. Reinforcement learning with delayed reward remains useful even when there is no immediate reward. Through the distributed genetic algorithm, which exchanges chromosomes acquired under different environments via communication, each robot can improve its behavioral ability. In particular, to improve the performance of evolution, selective crossover exploiting the characteristics of reinforcement learning is adopted. We verify the effectiveness of the proposed method by applying it to a cooperative search problem.
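The distributed genetic algorithm shared by this and the preceding paper can be sketched as robots exchanging chromosomes over communication and biasing crossover toward the partner with the higher accumulated reinforcement-learning reward; the bit-string encoding, mutation rate, and fitness values below are illustrative assumptions.

```python
import random

random.seed(0)
CHROM_LEN = 16                                  # assumed behavior-encoding length

class Robot:
    def __init__(self):
        self.chromosome = [random.randint(0, 1) for _ in range(CHROM_LEN)]
        self.reward = 0.0                       # accumulated RL reward in its own environment

def selective_crossover(robot, received):
    """Selective crossover: treat the chromosome coming from the robot with the
    higher accumulated reinforcement-learning reward as the dominant parent."""
    if received.reward >= robot.reward:
        parent_a, parent_b = received, robot
    else:
        parent_a, parent_b = robot, received
    cut = random.randint(1, CHROM_LEN - 1)
    child = parent_a.chromosome[:cut] + parent_b.chromosome[cut:]
    if random.random() < 0.05:                  # small mutation probability (assumed)
        i = random.randrange(CHROM_LEN)
        child[i] ^= 1
    robot.chromosome = child

# Each robot communicates with a random peer and updates its behavior encoding.
swarm = [Robot() for _ in range(5)]
for r in swarm:
    r.reward = random.random()
for r in swarm:
    peer = random.choice([p for p in swarm if p is not r])
    selective_crossover(r, peer)
```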

Footstep Planning of Biped Robot Using Particle Swarm Optimization (PSO를 이용한 이족보행로봇의 보행 계획)

  • Kim, Sung-Suk; Kim, Yong-Tae
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.4 / pp.566-571 / 2008
  • In this paper, we propose a footstep planning method for biped robots based on Particle Swarm Optimization (PSO). We define configuration and locomotion primitives for biped robots in a two-dimensional workspace. The footstep planning method is designed around the learning process of PSO, which is initialized with a population of random candidates and searches for optima by updating generations. The footstep planner searches for a feasible sequence of locomotion primitives between a starting point and a goal, and generates a path that avoids obstacles. We also design a path optimization algorithm that optimizes the footstep count and planning cost based on the path generated in the PSO learning process. The proposed planning method is verified through simulation examples in cluttered environments.
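One common way to let continuous PSO search over discrete locomotion primitives (the abstract does not specify the encoding, so this is an assumption) is to decode each particle dimension to a primitive index and score the resulting footstep sequence. The primitives, grid obstacles, and penalty weights below are toy placeholders; the PSO update loop from the first sketch above would then minimize this fitness.

```python
import numpy as np

PRIMITIVES = [(1, 0), (0, 1), (0, -1)]         # assumed primitives: forward, left, right grid steps
start, goal = np.array([0, 0]), np.array([8, 3])
obstacles = {(4, 0), (4, 1), (4, 2)}           # assumed blocked cells in a 2-D workspace

def decode(particle):
    """Map a continuous particle (one value per footstep) to primitive indices."""
    return np.clip(particle, 0, len(PRIMITIVES) - 1e-9).astype(int)

def footstep_fitness(particle):
    """Count footsteps, penalize collisions, and penalize distance left to the goal."""
    pos, cost = start.copy(), 0.0
    for idx in decode(particle):
        pos = pos + np.array(PRIMITIVES[idx])
        cost += 1.0                            # one unit per footstep
        if tuple(pos) in obstacles:
            cost += 50.0                       # collision penalty (assumed weight)
    return cost + 10.0 * np.linalg.norm(pos - goal)

# A particle has one dimension per footstep in the budget (here 12); PSO minimizes
# footstep_fitness over such particles.
example_particle = np.random.default_rng(2).uniform(0, len(PRIMITIVES), 12)
print(decode(example_particle), footstep_fitness(example_particle))
```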

PESA: Prioritized experience replay for parallel hybrid evolutionary and swarm algorithms - Application to nuclear fuel

  • Radaideh, Majdi I.; Shirvan, Koroush
    • Nuclear Engineering and Technology / v.54 no.10 / pp.3864-3877 / 2022
  • We propose a new approach called PESA (Prioritized replay Evolutionary and Swarm Algorithms), which combines the prioritized replay of reinforcement learning with hybrid evolutionary algorithms. PESA hybridizes different evolutionary and swarm algorithms such as particle swarm optimization, evolution strategies, simulated annealing, and differential evolution, with a modular approach to accommodate other algorithms. PESA hybridizes three algorithms by storing their solutions in a shared replay memory and then applying prioritized replay to frequently redistribute data among the constituent algorithms based on their fitness and priority values, which significantly enhances sample diversity and algorithm exploration. Additionally, greedy replay is used implicitly to improve PESA's exploitation close to the end of evolution. PESA's balance of exploration and exploitation during the search, together with its parallel computing, results in excellent, problem-agnostic performance over the wide range of experiments and problems presented in this work. PESA also shows very good scalability with the number of processors when solving an expensive problem: optimizing nuclear fuel in nuclear power plants. PESA's competitive performance and modularity across all experiments allow it to join the family of evolutionary algorithms as a new hybrid algorithm, unleashing the power of parallel computing for expensive optimization.
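The shared, priority-weighted replay memory at the core of PESA can be sketched as follows; the rank-based priorities, capacity, and greedy-replay shortcut are assumptions following the usual prioritized-replay recipe, not the paper's exact implementation.

```python
import random

class SharedReplayMemory:
    """Replay memory shared by several evolutionary/swarm algorithms.
    Solutions with better (lower) fitness get higher sampling priority."""
    def __init__(self, capacity=500, alpha=1.0):
        self.capacity, self.alpha = capacity, alpha
        self.entries = []                       # list of (solution, fitness) pairs

    def store(self, solution, fitness):
        self.entries.append((solution, fitness))
        self.entries.sort(key=lambda e: e[1])   # keep best-first order
        del self.entries[self.capacity:]        # drop the worst beyond capacity

    def sample(self, k):
        """Prioritized sampling: weight each entry by 1 / rank**alpha."""
        weights = [1.0 / (rank + 1) ** self.alpha for rank in range(len(self.entries))]
        picked = random.choices(self.entries, weights=weights, k=k)
        return [solution for solution, _ in picked]

    def sample_greedy(self, k):
        """Greedy replay used near the end of evolution: just take the elites."""
        return [solution for solution, _ in self.entries[:k]]

# Each member algorithm (PSO, ES, SA, DE, ...) would call memory.store() after every
# generation, then reseed part of its population from memory.sample() so information
# is shared across the hybridized algorithms.
memory = SharedReplayMemory()
```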

Collective Navigation Through a Narrow Gap for a Swarm of UAVs Using Curriculum-Based Deep Reinforcement Learning (커리큘럼 기반 심층 강화학습을 이용한 좁은 틈을 통과하는 무인기 군집 내비게이션)

  • Myong-Yol Choi; Woojae Shin; Minwoo Kim; Hwi-Sung Park; Youngbin You; Min Lee; Hyondong Oh
    • The Journal of Korea Robotics Society / v.19 no.1 / pp.117-129 / 2024
  • This paper introduces collective navigation through a narrow gap using a curriculum-based deep reinforcement learning algorithm for a swarm of unmanned aerial vehicles (UAVs). Collective navigation in complex environments is essential for various applications such as search and rescue, environmental monitoring, and military operations. Conventional methods, which are easily interpretable from an engineering perspective, divide the navigation task into mapping, planning, and control; however, they struggle with increased latency and unmodeled environmental factors. Recently, learning-based methods have addressed these problems by employing an end-to-end framework with neural networks. Nonetheless, most existing learning-based approaches face challenges in complex scenarios, particularly when navigating through a narrow gap or when a leader or informed UAV is unavailable. Our approach uses information from a certain number of nearest neighboring UAVs and incorporates a task-specific curriculum to reduce learning time and train a robust model. The effectiveness of the proposed algorithm is verified through an ablation study and quantitative metrics. Simulation results demonstrate that our approach outperforms existing methods.
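The task-specific curriculum can be illustrated with a simple schedule that starts training with a wide gap and narrows it once the recent success rate crosses a threshold. The stage widths, threshold, and the environment/agent hooks below are hypothetical placeholders, not the paper's training stack.

```python
# Illustrative curriculum: gap widths (m) from easy to hard, advanced when the
# recent success rate exceeds a threshold. The env/agent objects and their
# set_gap_width/run_episode methods are hypothetical placeholders.
GAP_WIDTHS = [4.0, 3.0, 2.0, 1.5, 1.0]        # assumed stages, wide -> narrow
SUCCESS_THRESHOLD = 0.8                        # assumed promotion criterion
WINDOW = 100                                   # episodes used to estimate success rate

def train_with_curriculum(env, agent, episodes_per_check=WINDOW):
    stage = 0
    while stage < len(GAP_WIDTHS):
        env.set_gap_width(GAP_WIDTHS[stage])   # hypothetical environment hook
        successes = 0
        for _ in range(episodes_per_check):
            successes += int(agent.run_episode(env))   # hypothetical: True on success
        if successes / episodes_per_check >= SUCCESS_THRESHOLD:
            stage += 1                         # promote the swarm to a narrower gap
    return agent
```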

A Hybrid PSO-BPSO Based Kernel Extreme Learning Machine Model for Intrusion Detection

  • Shen, Yanping; Zheng, Kangfeng; Wu, Chunhua
    • Journal of Information Processing Systems / v.18 no.1 / pp.146-158 / 2022
  • With the success of the digital economy and the rapid development of its technology, network security has received increasing attention. Intrusion detection technology has always been a focus and hotspot of research. A hybrid model that combines particle swarm optimization (PSO) and kernel extreme learning machine (KELM) is presented in this work. Continuous-valued PSO and binary PSO (BPSO) are adopted together to determine the parameter combination and the feature subset. A fitness function based on the detection rate and the number of selected features is proposed. The results show that the method can simultaneously determine the parameter values and select features. Furthermore, competitive or better accuracy can be obtained using approximately one quarter of the raw input features. Experiments proved that our method is slightly better than the genetic algorithm-based KELM model.
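A sketch of how the continuous PSO part (KELM parameters) and the binary PSO part (feature mask) might share one fitness function is given below; the sigmoid transfer function, the weighting between detection rate and feature count, and the stand-in evaluation are assumptions, since the abstract does not give the exact formulation.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def binarize(velocity):
    """BPSO position update: each feature bit becomes 1 with probability sigmoid(v)."""
    return (rng.random(velocity.shape) < sigmoid(velocity)).astype(int)

def fitness(detection_rate, feature_mask, w=0.9):
    """Assumed fitness: reward detection rate, penalize the fraction of selected features."""
    n_selected = max(int(feature_mask.sum()), 1)
    return w * detection_rate + (1.0 - w) * (1.0 - n_selected / feature_mask.size)

# Example particle: continuous part = assumed (C, gamma) for the KELM kernel,
# binary part = feature-selection mask over 40 raw features.
continuous_part = np.array([10.0, 0.1])
binary_velocity = rng.normal(0, 1, 40)
feature_mask = binarize(binary_velocity)

def evaluate(continuous_part, feature_mask):
    # Placeholder for training/evaluating a KELM on the selected features;
    # a fake detection rate keeps the sketch runnable end to end.
    detection_rate = 0.95 - 0.001 * feature_mask.sum()
    return fitness(detection_rate, feature_mask)

score = evaluate(continuous_part, feature_mask)
```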

A Looping Population Learning Algorithm for the Makespan/Resource Trade-offs Project Scheduling

  • Fang, Ying-Chieh; Chyu, Chiuh-Cheng
    • Industrial Engineering and Management Systems / v.8 no.3 / pp.171-180 / 2009
  • The population learning algorithm (PLA) is a population-based method inspired by the social education process, in which a diminishing number of individuals enter an increasing number of learning stages. This study develops a framework that repeatedly applies the PLA to solve the discrete resource-constrained project scheduling problem with two objectives: minimizing project makespan and renewable resource availability, the two most common concerns of management while a project is being executed. The PLA looping framework provides a number of near-Pareto-optimal schedules for management to choose from. Different improvement schemes and learning procedures are applied at different stages of the process, which gradually becomes more sophisticated and time consuming as fewer and fewer individuals remain to be taught. An experiment with ProGen-generated instances was conducted, and the results demonstrated that the looping framework using PLA outperforms those using genetic local search, particle swarm optimization with local search, scatter search, and a biased sampling multi-pass algorithm in terms of several proximity measures. However, the spread (diversity) metric does not reveal any significant difference among these five looping algorithms.
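The core PLA idea of fewer individuals passing through increasingly intensive learning stages can be sketched generically; the stage procedures, survival fraction, and toy minimization objective below are assumptions, not the paper's project-scheduling operators.

```python
import random

random.seed(4)

def cheap_improve(x):
    """Early stage: a single light perturbation, kept only if it improves the value."""
    return min(x, x + random.uniform(-1.0, 0.5))

def local_search(x):
    """Later stage: a more thorough (and more expensive) improvement procedure."""
    return min(x + random.uniform(-0.5, 0.0) for _ in range(20))

STAGES = [cheap_improve, local_search]   # learning stages, simple -> sophisticated
SURVIVAL = 0.5                           # fraction of individuals promoted per stage (assumed)

def population_learning(individuals):
    """Toy PLA loop: individuals are objective values to be minimized; each stage
    improves them, then only the best fraction enters the next, costlier stage."""
    population = sorted(individuals)
    for stage in STAGES:
        population = sorted(stage(x) for x in population)
        keep = max(1, int(len(population) * SURVIVAL))
        population = population[:keep]   # fewer individuals enter the next stage
    return population[0]                 # best solution after the final stage

best = population_learning([random.uniform(0, 100) for _ in range(64)])
```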

Computer Architecture Execution Time Optimization Using Swarm in Machine Learning

  • Sarah AlBarakati; Sally AlQarni; Rehab K. Qarout; Kaouther Laabidi
    • International Journal of Computer Science & Network Security / v.23 no.10 / pp.49-56 / 2023
  • Computer architecture serves as a link between application requirements and underlying technology capabilities, and the computational and storage demands of technical, mathematical, medical, and business applications are constantly increasing. Machine learning has grown and is now used in many fields, and it performs better than traditional computing in applications that must be implemented with mathematical algorithms. Such algorithms require more extensive and faster calculations, higher computer architecture specifications, and longer execution times. There is therefore a need to make better use of computer hardware such as the CPU and memory, and optimization plays a major role in reducing execution time and improving the utilization of computer resources. Because execution time matters when implementing a supervised machine learning module (linear regression), this paper focuses on optimizing machine learning algorithms: we write a diabetes prediction program and apply Particle Swarm Optimization (PSO) to it to reduce execution time and improve the utilization of computer resources. Finally, a large improvement in execution time was observed.
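One way to read "applying PSO to reduce execution time" is to make measured wall-clock time part of the fitness that PSO minimizes; the sketch below times a stand-in linear-regression fit over a varying number of features and is entirely an assumption about the setup, since the abstract does not describe it.

```python
import time
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(2000, 30))                      # stand-in diabetes feature matrix
y = X @ rng.normal(size=30) + rng.normal(size=2000)  # stand-in target values

def timed_fitness(n_features):
    """Execution-time-based fitness for PSO: wall-clock time of fitting a linear
    regression on the first n_features columns, plus a small penalty for the
    prediction error that remains (assumed weighting)."""
    cols = X[:, :max(1, int(round(n_features)))]
    start = time.perf_counter()
    coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
    elapsed = time.perf_counter() - start
    error = float(np.mean((cols @ coef - y) ** 2))
    return elapsed + 1e-3 * error

# A standard PSO loop (as in the trajectory sketch above) would then minimize
# timed_fitness over the number of features, trading execution time against error.
print(timed_fitness(30), timed_fitness(10))
```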