• Title/Summary/Keyword: Path of Reinforcement


Thickness of shear flow path in RC beams at maximum torsional strength

  • Kim, Hyeong-Gook;Lee, Jung-Yoon;Kim, Kil-Hee
    • Computers and Concrete
    • /
    • v.29 no.5
    • /
    • pp.303-321
    • /
    • 2022
  • The current design equations for predicting the torsional capacity of RC members underestimate the torsional strength of under-reinforced members and overestimate that of over-reinforced members. This is because the design equations consider only the yield strength of the torsional reinforcement and the cross-sectional properties of members in determining the torsional capacity. This paper presents an analytical model to predict the thickness of the shear flow path in RC beams subjected to pure torsion. The analytical model assumes that torsional reinforcement resists torsional moment with sufficient deformation capacity until the concrete fails by crushing. The ACI 318 code is modified by applying analytical results from the proposed model, such as the average stress of the torsional reinforcement and the effective gross area enclosed by the shear flow path. Comparison of the calculated and observed torsional strengths of 129 existing test beams showed good agreement. Two design variables related to the compressive strength of concrete in the proposed model are approximated for design application. The accuracy of the ACI 318 code for the over-reinforced test beams improved somewhat with the use of the approximations for the average stresses of the reinforcement and the effective gross area enclosed by the shear flow path.

Dynamic Window Approach with path-following for Unmanned Surface Vehicle based on Reinforcement Learning (무인수상정 경로점 추종을 위한 강화학습 기반 Dynamic Window Approach)

  • Heo, Jinyeong;Ha, Jeesoo;Lee, Junsik;Ryu, Jaekwan;Kwon, Yongjin
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.24 no.1
    • /
    • pp.61-69
    • /
    • 2021
  • Recently, autonomous navigation technology has been actively developed due to the increasing demand for unmanned surface vehicles (USVs). Local planning is essential for a USV to safely reach its destination along planned paths. The dynamic window approach (DWA) is a well-known local path planning scheme. However, the existing DWA algorithm does not consider path-line tracking, and the fixed weight coefficients of its evaluation function, the core of the algorithm, cannot provide flexible path planning for all situations. Therefore, in this paper, we propose a new DWA algorithm that can follow path lines in all situations. The fixed weight coefficients were trained using reinforcement learning (RL), which has been actively studied recently. We implemented a simulation and compared the existing DWA algorithm with the one proposed in this paper. As a result, we confirmed the effectiveness of the proposed algorithm.
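The evaluation function the abstract calls the core of DWA can be sketched as the classic weighted sum over heading, clearance, and velocity terms. This is a minimal illustration of the fixed-coefficient baseline the paper proposes to replace with RL-trained weights; the weight values and candidate tuples below are illustrative assumptions, not taken from the paper.

```python
import math

def dwa_score(heading_err, obstacle_dist, velocity,
              alpha=0.8, beta=0.1, gamma=0.1):
    """Score one candidate (v, w) pair; higher is better.

    heading_err   : angle (rad) between heading and goal direction
    obstacle_dist : clearance (m) to the nearest obstacle on the arc
    velocity      : forward speed (m/s) of the candidate
    alpha/beta/gamma are the fixed weight coefficients the paper
    trains with RL; the defaults here are illustrative only.
    """
    heading_term = 1.0 - abs(heading_err) / math.pi   # 1 when aimed at goal
    return alpha * heading_term + beta * obstacle_dist + gamma * velocity

def best_candidate(candidates):
    """Pick the highest-scoring candidate from the sampled dynamic window."""
    return max(candidates, key=lambda c: dwa_score(*c[2:]))

# Each candidate: (v, w, heading_err, obstacle_dist, velocity)
cands = [
    (0.5, 0.0, 0.1, 2.0, 0.5),   # nearly on-goal, good clearance
    (0.5, 0.4, 1.5, 3.0, 0.5),   # veers away from goal
]
print(best_candidate(cands)[:2])  # → (0.5, 0.0)
```

Because the coefficients are fixed at design time, one tuning cannot suit both open-water transit and tight path-line recovery, which is the gap the RL-trained weights address.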

Online Reinforcement Learning to Search the Shortest Path in Maze Environments (미로 환경에서 최단 경로 탐색을 위한 실시간 강화 학습)

  • Kim, Byeong-Cheon;Kim, Sam-Geun;Yun, Byeong-Ju
    • The KIPS Transactions:PartB
    • /
    • v.9B no.2
    • /
    • pp.155-162
    • /
    • 2002
  • Reinforcement learning is a learning method that uses trial and error to learn by interacting with dynamic environments. It is classified into online reinforcement learning and delayed reinforcement learning. In this paper, we propose an online reinforcement learning system (ONRELS: ONline REinforcement Learning System). ONRELS updates the estimated value of every selectable (state, action) pair before making a state transition at the current state. After compressing the state space of the maze environment, ONRELS learns by trial-and-error interaction with the compressed environment. Experiments show that ONRELS can find the shortest path faster than Q-learning using the TD-error and $Q(\lambda)$-learning using $TD(\lambda)$ in maze environments.
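The Q-learning baseline the paper compares against can be shown on a toy maze. The sketch below is the plain tabular TD-error update, not ONRELS itself (the state-space compression and pre-transition value updates are omitted); the maze layout, rewards, and hyperparameters are assumptions for illustration.

```python
import random

MAZE = ["S..",
        ".#.",
        "..G"]            # S=start, G=goal, #=wall
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

def step(state, a):
    r, c = state
    nr, nc = r + ACTIONS[a][0], c + ACTIONS[a][1]
    if not (0 <= nr < 3 and 0 <= nc < 3) or MAZE[nr][nc] == "#":
        nr, nc = r, c                               # bump: stay in place
    reward = 0.0 if MAZE[nr][nc] == "G" else -1.0   # -1 per step
    return (nr, nc), reward, MAZE[nr][nc] == "G"

def q_learn(episodes=500, alpha=0.5, gamma=0.95, eps=0.2, seed=0):
    random.seed(seed)
    Q = {(r, c): [0.0] * 4 for r in range(3) for c in range(3)}
    for _ in range(episodes):
        s, done = (0, 0), False
        while not done:
            a = random.randrange(4) if random.random() < eps \
                else max(range(4), key=lambda i: Q[s][i])
            s2, rew, done = step(s, a)
            # Classic TD-error update the paper uses as its baseline.
            Q[s][a] += alpha * (rew + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

def greedy_path(Q):
    """Follow the learned greedy policy from S; cap length as a safeguard."""
    s, path = (0, 0), [(0, 0)]
    while MAZE[s[0]][s[1]] != "G" and len(path) < 20:
        s, _, _ = step(s, max(range(4), key=lambda i: Q[s][i]))
        path.append(s)
    return path

path = greedy_path(q_learn())
print(len(path) - 1)   # shortest S-to-G route in this maze is 4 moves
```

ONRELS's claimed speed-up comes from updating all selectable (state, action) values before each transition and from searching the compressed state space, rather than updating only the single pair just taken as above.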

Real-Time Path Planning for Mobile Robots Using Q-Learning (Q-learning을 이용한 이동 로봇의 실시간 경로 계획)

  • Kim, Ho-Won;Lee, Won-Chang
    • Journal of IKEEE
    • /
    • v.24 no.4
    • /
    • pp.991-997
    • /
    • 2020
  • Reinforcement learning has been applied mainly to sequential decision-making problems. Especially in recent years, reinforcement learning combined with neural networks has brought successful results in previously unsolved fields. However, reinforcement learning using deep neural networks has the disadvantage of being too complex for immediate use in the field. In this paper, we implemented a path-planning algorithm for mobile robots using Q-learning, one of the easier-to-learn reinforcement learning algorithms. We used real-time Q-learning to update the Q-table on the fly, since generating Q-tables in advance has obvious limitations. By adjusting the exploration strategy, we were able to obtain the learning speed required for real-time Q-learning. Finally, we compared the performance of real-time Q-learning and DQN.
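The exploration-strategy adjustment the abstract credits with reaching real-time learning speed is commonly a decaying epsilon-greedy schedule: explore broadly while the Q-table is empty, then exploit it as estimates firm up. The exponential schedule below is an assumption for illustration, not the authors' exact strategy.

```python
def make_epsilon_schedule(eps_start=1.0, eps_min=0.05, decay=0.99):
    """Return eps(t): exponential decay from eps_start toward eps_min.

    eps(t) is the probability of taking a random action at step t;
    the floor eps_min keeps some exploration alive indefinitely.
    """
    def eps(t):
        return max(eps_min, eps_start * decay ** t)
    return eps

eps = make_epsilon_schedule()
print(round(eps(0), 3), round(eps(100), 3), round(eps(1000), 3))
```

A faster decay front-loads exploration so the Q-table becomes usable sooner, which is exactly the knob a real-time planner needs when it cannot afford a long offline table-building phase.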

A Study of Collaborative and Distributed Multi-agent Path-planning using Reinforcement Learning

  • Kim, Min-Suk
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.3
    • /
    • pp.9-17
    • /
    • 2021
  • In this paper, autonomous multi-agent path planning using reinforcement learning for monitoring infrastructure and resources in a computationally distributed system is proposed. The reinforcement-learning-based multi-agent exploratory system in a distributed node evaluates a cumulative reward for every action and repeatedly provides optimized knowledge for the next available action through a learning process governed by a learning policy. The proposed methods are presented as (a) dynamics-based, motion-constrained multi-agent path planning that reduces the number of agent steps toward the given destination (goal), where agents geographically explore the environment with initial random trials versus optimal trials; (b) agent sub-goal selection, which provides more efficient agent exploration (path planning) toward the final destination (goal); and (c) reinforcement learning schemes using the proposed autonomous and asynchronous triggering of agent exploratory phases.

Link Stability aware Reinforcement Learning based Network Path Planning

  • Quach, Hong-Nam;Jo, Hyeonjun;Yeom, Sungwoong;Kim, Kyungbaek
    • Smart Media Journal
    • /
    • v.11 no.5
    • /
    • pp.82-90
    • /
    • 2022
  • Along with the growing popularity of 5G technology, providing flexible, personalized network services suited to customer requirements has become a lucrative venture and a key business for network service providers. Dynamic network provisioning is therefore needed to help network service providers, and growing user demand requires network services that meet specific user requirements, including location, usage duration, and QoS. In this paper, a routing algorithm that makes routing decisions using reinforcement learning (RL) based on link-stability information is proposed, called Link Stability aware Reinforcement Learning (LSRL) routing. To evaluate this algorithm, several Mininet-based experiments with various network settings were conducted. The evaluation showed that the proposed method accepts more requests than the previous link-annotated shortest-path algorithm, demonstrating that the proposed approach is an appealing solution for dynamic network provisioning.
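One minimal way to make routing decisions stability-aware, in the spirit of the abstract, is to shape the per-path reward from both hop count and the weakest link's stability score, so the learner trades path length against stability. This formulation, the weight `w_stability`, and the example scores are assumptions for illustration, not the paper's exact reward.

```python
def path_reward(path_links, w_stability=2.0):
    """Reward for routing along a path.

    path_links  : list of link-stability scores in [0, 1], one per hop
    w_stability : how much the weakest link matters versus path length
    """
    if not path_links:
        return 0.0
    # Penalize hop count; reward the bottleneck (minimum) stability.
    return -len(path_links) + w_stability * min(path_links)

short_unstable = [0.9, 0.2]            # 2 hops, one flaky link
long_stable    = [0.9, 0.8, 0.85]      # 3 hops, all solid
print(path_reward(short_unstable) < path_reward(long_stable))  # → True
```

Under this shaping, an RL agent prefers a slightly longer but stable route over the annotated shortest path, which matches the behavior the evaluation reports.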

Solving Survival Gridworld Problem Using Hybrid Policy Modified Q-Based Reinforcement

  • Montero, Vince Jebryl;Jung, Woo-Young;Jeong, Yong-Jin
    • Journal of IKEEE
    • /
    • v.23 no.4
    • /
    • pp.1150-1156
    • /
    • 2019
  • This paper explores a model-free, value-based approach for solving the survival gridworld problem. The survival gridworld problem poses a challenge that involves taking risks to gain better rewards. Classic value-based approaches in model-free reinforcement learning assume minimal-risk decisions. The proposed method combines on-policy and off-policy updates over experience roll-outs using a modified Q-based update equation that introduces a parametric linear rectifier and a motivational discount. The significance of this approach is that it allows model-free training of agents that take risk factors and motivated exploration into account to make better path decisions. Experiments suggest that the proposed method achieved better exploration and path selection, resulting in higher episode scores than classic off-policy and on-policy Q-based updates.
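The abstract does not give the modified update equation, so the sketch below is only one illustrative reading: a standard Q-update whose TD error passes through a parametric linear rectifier (leaky-ReLU-like, with slope `k` on the negative side) plus a small motivational bonus `m` for novel states. The names `k`, `m`, and `novel` are hypothetical, not taken from the paper.

```python
def rectified_q_update(q_sa, reward, max_q_next,
                       alpha=0.1, gamma=0.9, k=0.5, m=0.05, novel=False):
    """One illustrative 'rectified' Q-value update (interpretation, not
    the paper's equation)."""
    td = reward + gamma * max_q_next - q_sa   # ordinary TD error
    td = td if td >= 0 else k * td            # parametric linear rectifier:
                                              # damp negative updates
    if novel:                                 # motivational bonus keeps
        td += m                               # risky/unseen states explored
    return q_sa + alpha * td

# Negative TD errors are damped, so one bad outcome on a risky state does
# not erase its estimated value as fast as a classic update would.
print(rectified_q_update(1.0, -2.0, 0.0))   # → 0.85 (classic update: 0.7)
```

Damping downward updates while nudging novel states upward is one way an agent can keep pursuing risky-but-rewarding paths instead of collapsing to the minimal-risk policy.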

Local Path Generation Method for Unmanned Autonomous Vehicles Using Reinforcement Learning (강화학습을 이용한 무인 자율주행 차량의 지역경로 생성 기법)

  • Kim, Moon Jong;Choi, Ki Chang;Oh, Byong Hwa;Yang, Ji Hoon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.3 no.9
    • /
    • pp.369-374
    • /
    • 2014
  • Path generation methods are required for safe and efficient driving in unmanned autonomous vehicles. There are two kinds of paths: global and local. A global path consists of all the way points including the source and the destination. A local path is the trajectory that a vehicle needs to follow from a way point to the next in the global path. In this paper, we propose a novel method for local path generation through machine learning, with an effective curve function used for initializing the trajectory. First, reinforcement learning is applied to a set of candidate paths to produce the best trajectory with maximal reward. Then the optimal steering angle with respect to the trajectory is determined by training an artificial neural network. Our method outperformed existing approaches and successfully found quality paths in various experimental settings, including the cases with obstacles.

Numerical Analysis on Effect of Permeability and Reinforcement Length (Drainage Path) in Reinforced Soil (보강토에서의 투수성과 보강재길이(배수거리)의 영향에 대한 수치해석)

  • Lee, Hong-Sung;Hwang, Young-Cheol
    • Journal of the Korean GEO-environmental Society
    • /
    • v.8 no.3
    • /
    • pp.59-65
    • /
    • 2007
  • Excess pore pressures in low-permeability soils may not dissipate quickly enough and may decrease the effective stresses inside the soil, which in turn can reduce the shear strength at the interface between the soil and the reinforcement in MSE walls. Under this condition the dissipation rate of pore pressures is most important, and it varies depending on wall size, backfill permeability, and reinforcement length. In this paper, a series of numerical analyses has been performed to investigate the effect of these factors. The results show that for soils with a permeability lower than $10^{-3}$ cm/sec, the consolidation time gradually increases. The increase in consolidation time indicates a decrease in effective stress and thus results in a decrease in the pullout capacity of the reinforcement, as verified by the numerical analyses. It is also observed that a longer consolidation time is required for a longer reinforcement length (longer drainage path).
Reinforcement Learning based Autonomous Emergency Steering Control in Virtual Environments (가상 환경에서의 강화학습 기반 긴급 회피 조향 제어)

  • Lee, Hunki;Kim, Taeyun;Kim, Hyobin;Hwang, Sung-Ho
    • Journal of Drive and Control
    • /
    • v.19 no.4
    • /
    • pp.110-116
    • /
    • 2022
  • Recently, various studies have been conducted on applying deep learning and AI to fields of autonomous driving such as recognition, sensor processing, decision-making, and control. This paper proposes a controller applicable to path following, static obstacle avoidance, and pedestrian avoidance by utilizing reinforcement learning in autonomous vehicles. For repetitive driving simulation, a reinforcement learning environment was constructed in a virtual environment. After learning the path-following scenarios, we compared control performance with the Pure-Pursuit and Stanley controllers, which are widely used for their good performance and simplicity. Based on the test cases of the KNCAP test and assessment protocol, autonomous emergency steering and autonomous emergency braking scenarios were created and used for learning. Experimental results showed zero collisions, demonstrating that the reinforcement learning controller succeeded in the stationary obstacle avoidance and pedestrian collision scenarios under the given conditions.