Lane Change Methodology for Autonomous Vehicles Based on Deep Reinforcement Learning

  • DaYoon Park (Dept. of Spatial Information Eng., Pukyong National University) ;
  • SangHoon Bae (Dept. of Spatial Information Eng., Pukyong National University) ;
  • Trinh Tuan Hung (Dept. of Spatial Information Eng., Pukyong National University) ;
  • Boogi Park (Dept. of Spatial Information Eng., Pukyong National University) ;
  • Bokyung Jung (Dept. of Spatial Information Eng., Pukyong National University)
  • Received : 2022.11.30
  • Accepted : 2023.02.06
  • Published : 2023.02.28

Abstract

Several efforts are currently underway in Korea with the goal of commercializing autonomous vehicles, and studies on autonomous vehicles that drive safely and quickly in accordance with operating guidelines are emerging. This study examines the route search of an autonomous vehicle from a microscopic viewpoint and aims to demonstrate its efficiency by training the vehicle's lane-change behavior with Deep Q-Learning. SUMO was used for this purpose, with a scenario in which the vehicle departs from a randomly assigned lane at the origin and changes lanes into the third lane to make a right turn at the destination. The results were analyzed by comparing simulation-based lane changes without and with Deep Q-Learning applied. When Deep Q-Learning was applied, the average travel speed improved by about 40% compared to the case without it, the average waiting time decreased by about 2 seconds, and the average queue length decreased by about 2.3 vehicles.

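As a rough illustration of the approach described in the abstract, the sketch below wires a small Deep Q-Network to SUMO through the TraCI Python API and lets it choose among keep-lane, change-left, and change-right actions for a single controlled vehicle. The network size, state and reward definitions, hyperparameters, scenario file name (`lane_change.sumocfg`), and vehicle id (`ego`) are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal Deep Q-Learning lane-change sketch driven through SUMO's TraCI API.
# State, reward, network size, and hyperparameters are assumptions for
# illustration; they are not the configuration reported in the paper.
import random
from collections import deque

import torch
import torch.nn as nn
import traci

LANE_OFFSETS = [-1, 0, 1]  # toward the rightmost lane (index 0), keep lane, toward the left


class QNet(nn.Module):
    """Small MLP mapping a state vector to one Q-value per lane-change action."""

    def __init__(self, state_dim=3, n_actions=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)


def get_state(veh_id):
    # Assumed state: current lane index, speed, and position along the lane.
    return torch.tensor(
        [traci.vehicle.getLaneIndex(veh_id),
         traci.vehicle.getSpeed(veh_id),
         traci.vehicle.getLanePosition(veh_id)],
        dtype=torch.float32,
    )


def apply_action(veh_id, action):
    """Request the chosen lane change and advance the simulation one step."""
    lane = traci.vehicle.getLaneIndex(veh_id)
    target_lane = max(0, min(2, lane + LANE_OFFSETS[action]))  # clamp to an assumed 3-lane road
    traci.vehicle.changeLane(veh_id, target_lane, 2.0)         # perform the change within 2 s
    traci.simulationStep()
    if veh_id not in traci.vehicle.getIDList():                # vehicle reached its destination
        return None, 0.0
    return get_state(veh_id), traci.vehicle.getSpeed(veh_id)   # assumed reward: ego speed


q_net = QNet()
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)
gamma, epsilon, batch_size = 0.99, 0.1, 32

traci.start(["sumo", "-c", "lane_change.sumocfg"])  # hypothetical scenario file
while "ego" not in traci.vehicle.getIDList():       # wait for the ego vehicle to depart
    traci.simulationStep()
state = get_state("ego")                            # hypothetical ego-vehicle id

for _ in range(3600):  # at most one simulated hour of 1 s steps
    # Epsilon-greedy action selection over the three lane-change actions.
    if random.random() < epsilon:
        action = random.randrange(len(LANE_OFFSETS))
    else:
        action = int(q_net(state).argmax())
    next_state, reward = apply_action("ego", action)
    if next_state is None:  # ego arrived; end the episode
        break
    replay.append((state, action, reward, next_state))
    state = next_state

    # Standard DQN update on a random minibatch of past transitions.
    if len(replay) >= batch_size:
        batch = random.sample(replay, batch_size)
        s = torch.stack([b[0] for b in batch])
        a = torch.tensor([b[1] for b in batch])
        r = torch.tensor([b[2] for b in batch])
        s2 = torch.stack([b[3] for b in batch])
        q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + gamma * q_net(s2).max(1).values
        loss = nn.functional.mse_loss(q, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

traci.close()
```

For clarity, the sketch omits refinements a full implementation would likely include, such as a separate target network, epsilon decay, and a reward term penalizing unsafe or unnecessary lane changes.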

