Adaptive Key-point Extraction Algorithm for Segmentation-based Lane Detection Network

  • Sang-Hyeon Lee (School of Computer Engineering, Korea University of Technology and Education)
  • Duksu Kim (School of Computer Engineering, Korea University of Technology and Education)
  • Received : 2022.10.23
  • Accepted : 2023.01.19
  • Published : 2023.03.01

Abstract

Deep-learning-based image segmentation is one of the most widely used approaches to lane detection, and it requires a post-processing step that extracts the key points on the lanes. The common approach to key-point extraction relies on a fixed, user-defined threshold. However, finding the best threshold is a laborious manual process, and the best value can differ across data sets (or even individual images). We propose a novel key-point extraction algorithm that adapts to the target image automatically, with no manual threshold setting. Our adaptive key-point extraction algorithm first applies a line-level normalization that clearly separates the lane regions from the background. It then extracts a representative key point for each lane on each line (image row) using kernel density estimation. To validate our approach, we applied it to two lane-detection data sets, TuSimple and CULane. Compared with using a fixed threshold, our method achieved up to 1.80%p higher accuracy and up to 17.27% lower distance error between the ground-truth and predicted key points.
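As a concrete illustration of the two steps described above, the following is a minimal Python sketch of row-wise (line-level) normalization followed by KDE-based key-point selection. It assumes the network outputs a per-lane probability map of shape (H, W); the function name, the relative 0.5 cutoff on the normalized responses, and the use of scipy.stats.gaussian_kde are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

def extract_keypoints(prob_map):
    """Hypothetical sketch: one key point per image row for one lane channel.

    prob_map: (H, W) array of per-pixel lane probabilities produced by the
    segmentation network for a single lane.
    """
    keypoints = []
    for y, row in enumerate(prob_map):
        # Line-level normalization: rescale each row to [0, 1] so the lane
        # response stands out from the background without a fixed,
        # user-chosen global threshold.
        lo, hi = row.min(), row.max()
        if hi - lo < 1e-6:
            continue  # the row carries no lane response
        norm = (row - lo) / (hi - lo)

        # Keep columns with a strong normalized response. Because the cutoff
        # is relative to each row's own range, it adapts per row; the exact
        # value 0.5 is an assumption made for this sketch.
        cols = np.arange(row.size)
        mask = norm > 0.5
        if mask.sum() < 2:
            continue  # gaussian_kde needs at least two samples

        # Kernel density estimation over the selected column coordinates,
        # weighted by the normalized responses; the density mode is taken
        # as the representative key point of this row.
        kde = gaussian_kde(cols[mask], weights=norm[mask])
        density = kde(cols)
        keypoints.append((int(np.argmax(density)), y))
    return keypoints
```

In a full pipeline this would run once per lane channel of the segmentation output, e.g. `[extract_keypoints(out[c]) for c in range(num_lanes)]`, yielding one (x, y) key point per row for each lane.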

Acknowledgement

This work was supported by the local government-university cooperation-based Regional Innovation Strategy (RIS) project (2021RIS-004) through the National Research Foundation of Korea (NRF), funded by the Ministry of Education in 2022.

References

  1. J. Y. Baek and M. C. Lee, "Lane Recognition Using Lane Prominence Algorithm for Unmanned Vehicles," Journal of Institute of Control, Robotics and Systems (ICROS), vol. 16, no. 7, pp. 625-631, 2010. https://doi.org/10.5302/J.ICROS.2010.16.7.625
  2. K.-S. Lee, S.-W. Heo, and T.-H. Park, "A Lane Detection and Tracking Method Using Image Saturation and Road Width Data," Journal of Institute of Control, Robotics and Systems (ICROS), vol. 25, no. 5, pp. 476-483, 2019. https://doi.org/10.5302/J.ICROS.2019.19.0008
  3. Z. Kim, "Robust lane detection and tracking in challenging scenarios," IEEE Transactions on Intelligent Transportation Systems, vol. 9, no. 1, pp. 16-26, 2008. https://doi.org/10.1109/TITS.2007.908582
  4. M. Aly, "Real time detection of lane markers in urban streets," IEEE Intelligent Vehicles Symposium, pp. 7-12, 2008.
  5. G. Liu, F. Wörgötter, and I. Markelic, "Combining statistical Hough transform and particle filter for robust lane detection and tracking," IEEE Intelligent Vehicles Symposium, pp. 993-997, 2010.
  6. K. B. Kim and D. H. Song, "Real time road lane detection with RANSAC and HSV color transformation," Journal of Information and Communication Convergence Engineering, vol. 15, no. 3, pp. 187-192, 2017. https://doi.org/10.6109/JICCE.2017.15.3.187
  7. T. Zheng, H. Fang, Y. Zhang, W. Tang, Z. Yang, H. Liu, and D. Cai, "RESA: Recurrent feature-shift aggregator for lane detection," Proc. of the AAAI Conference on Artificial Intelligence, vol. 35, no. 4, pp. 3547-3554, 2021.
  8. X. Pan, J. Shi, P. Luo, X. Wang, and X. Tang, "Spatial as deep: Spatial CNN for traffic scene understanding," Proc. of the AAAI Conference on Artificial Intelligence, vol. 32, no. 1, 2018.
  9. D. Neven, B. De Brabandere, S. Georgoulis, M. Proesmans, and L. Van Gool, "Towards end-to-end lane detection: An instance segmentation approach," IEEE Intelligent Vehicles Symposium (IV), pp. 286-291, 2018.
  10. L. Tabelini, R. Berriel, T. M. Paixao, C. Badue, A. F. De Souza, and T. Oliveira-Santos, "Keep your eyes on the lane: Attention-guided lane detection," arXiv preprint arXiv:2010.12035, 2020.
  11. H. Xu, S. Wang, X. Cai, W. Zhang, X. Liang, and Z. Li, "CurveLane-NAS: Unifying lane-sensitive architecture search and adaptive point blending," European Conference on Computer Vision, pp. 689-704, Springer, 2020.
  12. L. Tabelini, R. Berriel, T. M. Paixao, C. Badue, A. F. De Souza, and T. Oliveira-Santos, "PolyLaneNet: Lane estimation via deep polynomial regression," 2020 25th International Conference on Pattern Recognition (ICPR), pp. 6150-6156, 2021.
  13. R. Liu, Z. Yuan, T. Liu, and Z. Xiong, "End-to-end lane shape prediction with transformers," Proc. of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3694-3702, 2021.
  14. K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
  15. TuSimple benchmark. [Online]. Available: https://github.com/TuSimple/tusimple-benchmark/
  16. B. Huval, T. Wang, S. Tandon, J. Kiske, W. Song, J. Pazhayampallil, M. Andriluka, P. Rajpurkar, T. Migimatsu, R. Cheng-Yue, F. Mujica, A. Coates, and A. Y. Ng, "An empirical evaluation of deep learning on highway driving," arXiv preprint arXiv:1504.01716, 2015.
  17. Z. Qin, H. Wang, and X. Li, "Ultra fast structure-aware deep lane detection," European Conference on Computer Vision, pp. 276-291, Springer, 2020.
  18. S. Yoo, H. S. Lee, H. Myeong, S. Yun, H. Park, J. Cho, and D. H. Kim, "End-to-end lane marker detection via row-wise classification," Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 1006-1007, 2020.
  19. X. Li, J. Li, X. Hu, and J. Yang, "Line-CNN: End-to-end traffic line detection with line proposal unit," IEEE Transactions on Intelligent Transportation Systems, vol. 21, no. 1, pp. 248-258, 2019. https://doi.org/10.1109/TITS.2019.2890870
  20. S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," Advances in Neural Information Processing Systems, vol. 28, 2015.
  21. Y. Dong, S. Patil, B. van Arem, and H. Farah, "A hybrid spatial-temporal deep learning architecture for lane detection," Computer-Aided Civil and Infrastructure Engineering, 2022.
  22. O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234-241, Springer, 2015.
  23. E. Parzen, "On estimation of a probability density function and mode," The Annals of Mathematical Statistics, vol. 33, no. 3, pp. 1065-1076, 1962.
  24. CULane dataset. [Online]. Available: https://xingangpan.github.io/projects/CULane.html
  25. Z. Qu, H. Jin, Y. Zhou, Z. Yang, and W. Zhang, "Focus on local: Detecting lane marker from bottom up via key point," Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021.
  26. T. Zheng, Y. Huang, Y. Liu, W. Tang, Z. Yang, D. Cai, and X. He, "CLRNet: Cross layer refinement network for lane detection," Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022.
  27. S. Min and I. Ihm, "Density estimation technique for effective representation of light in-scattering," Journal of the Korea Computer Graphics Society, vol. 16, no. 1, pp. 9-20, 2010. https://doi.org/10.15701/kcgs.2010.16.1.9
  28. E. de Gelder, E. Cator, J. P. Paardekooper, O. O. den Camp, and B. de Schutter, "Constrained sampling from a kernel density estimator to generate scenarios for the assessment of automated vehicles," 2021 IEEE Intelligent Vehicles Symposium Workshops, pp. 203-208, 2021.