Development of Autonomous Vehicle Learning Data Generation System

  • Yoon, Seungje (Mobility Research and Artificial Intelligence) ;
  • Jung, Jiwon (Mobility Research and Artificial Intelligence) ;
  • Hong, June (Mobility Research and Artificial Intelligence) ;
  • Lim, Kyungil (Advanced Institutes of Convergence Technology) ;
  • Kim, Jaehwan (Advanced Institutes of Convergence Technology) ;
  • Kim, Hyungjoo (Advanced Institutes of Convergence Technology)
  • Received : 2020.08.19
  • Accepted : 2020.10.20
  • Published : 2020.10.31

Abstract

The perception of the traffic environment from multiple sensors in an autonomous driving system is directly related to driving safety. With recent advances in machine learning and deep neural network technology, perception models based on deep neural networks are now widely used, so proper training of the perception model and a high-quality training dataset are essential. However, collecting data for every situation that may occur during autonomous driving faces several practical difficulties. Differences between overseas and domestic traffic environments can degrade the performance of a perception model, and data for bad weather in which sensors cannot operate normally are hard to collect and cannot be guaranteed in quality. Therefore, an approach is needed that builds a virtual road environment in a simulator, rather than relying on actual roads, to collect synthetic training data. In this paper, a training dataset is collected in a simulator environment that reproduces domestic road conditions, diversifying the weather, illumination, type and number of vehicles, and sensor positions. To achieve better performance, a generative adversarial model is further used to translate the domain of the simulated images closer to real photographs and to diversify them. A perception model trained on this data is then evaluated on test data collected in a real road environment, and it shows performance similar to that of a model trained only on real-world data.
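The collection process described above amounts to sampling randomized scenario configurations (weather, illumination, traffic density, sensor mounting) before each simulator run. A minimal sketch of such a sampler is shown below; the parameter names, value ranges, and `ScenarioConfig` structure are illustrative assumptions, not the paper's or any simulator's actual API.

```python
import random
from dataclasses import dataclass

# Hypothetical scenario parameters for one synthetic data-collection run.
# Names and ranges are illustrative assumptions only.
@dataclass
class ScenarioConfig:
    weather: str
    sun_elevation_deg: float   # drives scene illumination (dusk to midday)
    num_vehicles: int          # traffic density in the scene
    camera_height_m: float     # sensor mounting position on the ego vehicle

WEATHERS = ["clear", "rain", "fog", "snow"]

def sample_scenario(rng: random.Random) -> ScenarioConfig:
    """Draw one randomized scenario configuration."""
    return ScenarioConfig(
        weather=rng.choice(WEATHERS),
        sun_elevation_deg=rng.uniform(-10.0, 80.0),
        num_vehicles=rng.randint(5, 60),
        camera_height_m=rng.uniform(1.2, 2.0),
    )

def build_collection_plan(n: int, seed: int = 0) -> list[ScenarioConfig]:
    """Seeded plan of n scenarios, so a dataset run is reproducible."""
    rng = random.Random(seed)
    return [sample_scenario(rng) for _ in range(n)]
```

Seeding the plan makes each synthetic dataset reproducible, which matters when comparing perception models trained on different domain-randomization settings.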

