
Intelligent Hybrid Fusion Algorithm with Vision Patterns for Generation of Precise Digital Road Maps in Self-driving Vehicles

  • Jung, Juho (Department of Software, Korea National University of Transportation) ;
  • Park, Manbok (Department of Electronic Engineering, Korea National University of Transportation) ;
  • Cho, Kuk (Land and Geospatial Informatix - Spatial Information Research Institute) ;
  • Mun, Cheol (Department of Electronic Engineering, Korea National University of Transportation) ;
  • Ahn, Junho (Department of Software, Korea National University of Transportation)
  • Received : 2020.06.24
  • Accepted : 2020.08.30
  • Published : 2020.10.31

Abstract

Due to the significant increase in the use of autonomous car technology, it is essential to integrate this technology with high-precision digital map data containing more precise and accurate roadway information than existing conventional map resources, to ensure the safety of self-driving operations. While existing map technologies may assist vehicles in identifying their locations via the Global Positioning System, it is difficult to keep such maps updated with environmental changes on roadways. Roadway vision algorithms can be useful for building autonomous vehicles that can avoid accidents and detect real-time location changes. We incorporate a hybrid architectural design that combines unsupervised classification of vision data with supervised joint fusion classification to achieve a better noise-resistant algorithm. We identify, via a deep learning approach, an intelligent hybrid fusion algorithm for fusing multimodal vision feature data for roadway classifications and characterize its improvement in accuracy over unsupervised identifications using image processing and supervised vision classifiers. We analyzed over 93,000 frames of vision data collected from a test vehicle on real roadways. The performance of the proposed hybrid fusion algorithm was successfully evaluated for the generation of digital roadway maps for autonomous vehicles, with a recall of 0.94, precision of 0.96, and accuracy of 0.92.

Keywords

1. Introduction

In automotive research, more than 90% of automobile crashes are caused by driver error, as reported by the National Highway Traffic Safety Administration in 2018 [1]. Distraction-affected crashes resulted in 2,841 human casualties in that year, and speed-related crashes cost Americans $40.4 billion annually. According to the Sleep in America poll of the National Sleep Foundation [2], 60% of American drivers have driven their vehicles while feeling sleepy and 37% have fallen asleep while driving. As such, research on self-driving vehicles can aid in reducing the number of human errors and offer solutions.

A few commercial companies, such as Uber, Google, Tesla, and General Motors [3-6], have built self-driving systems. Autonomous vehicles built by these companies are typically equipped with cameras, light detection and ranging (LIDAR), and other supplemental sensors to identify their surroundings. Moreover, these companies are building intelligent self-driving systems with highly accurate performance to develop driverless vehicle technologies. High-precision road maps that are continuously updated with recent lane-level information, based on the reference points of a road environment, can support autonomous driving systems for safe driving operations [7]. Traditional road maps exist for built roads; however, it is difficult to update changes in these maps. In addition, maps for self-driving cars must be updated consistently.

In this study, we devise a hybrid fusion algorithm combining unsupervised and supervised algorithms to achieve an accurate identification of roadways based on vision data, for the generation of high-precision road maps. The first algorithm we suggest is an unsupervised algorithm employing the Canny algorithm [8], which detects each dotted lane and then recognizes one continuous solid track for lane tracking in self-driving cars, while a test car drives on the road. This algorithm calibrates the lane measurement based on colors, edges, sliding window search (SWS), the least-squares method (LSM), and bird-eye view (BEV) transformation on the vision data. The first algorithm can continuously detect lanes on the road; however, we discovered that it was vulnerable to lane classifications involving noise data, such as surrounding vehicles, light, weather status, and road status, and may therefore incorrectly identify a wrong line as a road lane. Subsequently, we investigated a second algorithm, a supervised learning method using a deep learning algorithm. The second algorithm, which detects objects such as dotted lanes and cars within the road vision range, reduces the noise data of the first algorithm. The deep learning algorithm was trained with real road data collected via the test car and was capable of recognizing lanes based on the vision data. Subsequently, we integrated the first and second algorithms to detect roadways with high accuracy and efficiency. The hybrid architectural design combines an unsupervised classification of vision data with a supervised joint fusion classification to achieve a better noise-resistant algorithm. Furthermore, the proposed algorithm classifies crossings, centerlines, and stop lines for the high-precision maps. The test car for the proposed research collected vision data on real roads. We analyzed over 93,000 frames of vision data and successfully evaluated the performance of the proposed hybrid fusion algorithm, with a recall of 0.94, precision of 0.96, and accuracy of 0.92.

In Section 2, we begin by describing related studies regarding self-driving cars. Section 3 provides a description of the proposed intelligent supervised, unsupervised, and fusion algorithms for high-precision map generation; that section also introduces the data collection performed by our test car, which is equipped with sensors to identify the surrounding area. Section 4 presents the evaluation of the performance measurements of the proposed individual and hybrid fusion classification algorithms. Finally, Section 5 concludes the paper with discussions of future endeavors in this research.

2. Related Work

Researchers have actively investigated lane detection based on unsupervised learning algorithms. Farag et al. [9] proposed a fast and reliable lane detection and tracking method focusing on simplicity and fast calculation; this method is primarily based on computer vision algorithms. The input image is converted to grayscale; thereafter, noise is removed via Gaussian filtering, and the edges are extracted using a Canny edge detector. Subsequently, after designating the region of interest, the lane is recognized through the Hough transformation. Narote et al. [10] performed numerous studies regarding alerting drivers of lane departure using lane detection and lane tracking processes; they employed Sobel and Canny edge detectors to detect edges.

Researchers have also actively investigated supervised learning for detecting vehicles and other objects on roads, as well as for lane detection. Huang et al. [11-15, 30] investigated two-stage detection algorithms and assessed the performances of various algorithms. As an algorithm deepens, its speed decreases and its accuracy increases; hence, owing to this tradeoff between speed and accuracy, algorithms must be selected appropriately. According to Xing et al. [16], lane detection is the most basic component in the development of the latest advanced driver assistance systems, and several companies, including Mobileye, BMW, and Tesla, have provided numerous related services. Lane detection can be performed based on general lane detection procedures, conventional image-processing-based lane detection algorithms, and machine-learning-based lane detection algorithms. Additionally, a lane detection system can be constructed by fusing sensor information with the system. Wang et al. [17] used LaneNet, a deep-neural-network-based algorithm, to perform lane edge proposal and lane line localization for lane detection. Neven et al. [18] used the LaneNet algorithm to handle a variable number of lanes rather than a fixed set of front lanes. Furthermore, they performed training using an end-to-end method such that the lanes could form their own instances to accommodate lane changes. Based on this method, they proposed a high-speed lane detection algorithm operating at 50 fps. In the study conducted by Song et al. [19], after performing lane detection based on stereo vision and angle and dynamic pole detection in regions of interest (ROI), the detected lanes were inputted into convolutional neural networks to classify the appearance of the lanes. Shen et al. [20] conducted a vehicle detection study based on aerial imagery and a recent study on detection based on a Faster R-CNN algorithm. Huang et al. [21] generated top-view images by performing an inverse perspective transformation, which does not require a camera's internal and external parameters. Yuan et al. [22] proposed a method that segments roads and detects lanes based on normal maps to address difficulties in lane detection owing to the variety of noise in real driving environments, such as vehicles and shadows. Based on the study conducted by Gwak et al. [23], we investigated algorithms employing various sensors used in autonomous vehicles and proposed a lane recognition model using cameras. Among the numerous vision algorithms, the Faster R-CNN Inception v2 model was developed and assessed using transfer learning [24].

Fusing various algorithms can yield performance higher than that of each algorithm alone, and the study conducted by Ahn et al. [25] achieved a stable and satisfactory performance for a bagging-based reinforcement algorithm. Furthermore, Delgado et al. [26] demonstrated that the random forest algorithm exhibited the best performance among bagging-based algorithms. Using the random forest algorithm, Dogru et al. [27] proposed an intelligent traffic accident detection system using the numerous variables inside the vehicle; compared with artificial neural networks and support vector machine algorithms, the random forest algorithm achieved the best performance. Random forest [28] is an ensemble method that learns multiple decision trees; therefore, it is used to solve various problems such as detection, classification, and regression. It is an efficient algorithm that uses randomization techniques such as bagging or randomized node optimization to improve generalization performance.

In the manufacturing process of existing high-precision road maps [29], three-dimensional (3D) data are collected after measurements using a mobile mapping system vehicle. Subsequently, the data are corrected, a 3D LAS file is generated, and the objects are drawn, after which a human directly edits and structuralizes the locations. In this study, by contrast, we employ software that automatically generates high-precision road maps without requiring these manual tasks.

3. The Proposed Algorithm

3.1 Software architecture for the proposed algorithm

The software architecture proposed herein is shown in Fig. 1. To execute the proposed algorithm, vision data are collected from the camera installed in the test car; based on the collected data, features are detected by the supervised learning algorithm, the unsupervised learning algorithm, and the on-road object detection algorithm. Subsequently, by fusing the two learning algorithms, the resulting data are extracted for the high-precision road map. Each feature detection algorithm of the proposed software performs lane tracking and detects objects, crosswalks, stop lines, etc. The detected feature data are fused based on an artificial intelligence (AI) reinforcement learning algorithm rather than a simple integration, thereby enhancing the performance of the high-precision road map data extraction. The data detected by each algorithm are stored in a database and used in various file formats to generate high-precision road maps.


Fig. 1. Software architecture for the proposed algorithm

Fig. 2 shows a flowchart of the proposed algorithm and its pseudo code. The data are collected and analyzed through camera sensors mounted on the autonomous vehicle, and the lanes are extracted through the unsupervised learning algorithm. The unsupervised learning algorithm tracks dotted lanes and recognizes them as one continuous lane; a supervised learning algorithm is then used to eliminate individual lane detection errors caused by road noise from factors such as surrounding vehicles, light, weather, and road status. The deep-learning-based supervised learning algorithm can recognize lanes, stop lines, and crosswalks, and an algorithm fusing the unsupervised and supervised learning algorithms can effectively remove noise data and accurately detect continuous lanes. To enhance the performance of this fusion algorithm, AI reinforcement learning is applied, which addresses the variety of noise that may occur in a real environment. Subsequently, the results of the proposed hybrid fusion algorithm and the object detections on the road are output as the final result.


Fig. 2. The flowchart and pseudo code for the proposed algorithm
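As a rough illustration of this flow, the following Python sketch mirrors the pseudo code of Fig. 2 at a per-frame level; the function names (track_lanes_unsupervised, detect_objects, classify_lane_normal) are hypothetical placeholders for the components described in Sections 3.2-3.4, not the authors' implementation.

```python
# Hypothetical per-frame skeleton of the pipeline in Fig. 2.
def process_frame(frame, previous_lanes):
    # Unsupervised stage: edge-based lane tracking (Section 3.2).
    lanes = track_lanes_unsupervised(frame)

    # Supervised stage: deep-learning detection of lanes,
    # stop lines, and crosswalks (Section 3.3).
    detections = detect_objects(frame)

    # Hybrid fusion stage: a trained classifier judges whether the
    # unsupervised lanes are normal; if not, the previous frame's
    # lanes are reused to suppress noise (Section 3.4).
    if classify_lane_normal(lanes, detections):
        previous_lanes = lanes
    return previous_lanes, detections
```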

3.2 Unsupervised learning algorithm for lane tracking

A method for detecting lanes based on the unsupervised learning algorithm is to extract features from an image and find the lanes based on the extracted features. Fig. 3 shows these processes. The original image is inputted to find the lane, and the distortion of the image is corrected. Next, the edge information of the image must be detected to extract the features; the Canny algorithm was used in this study for that purpose. Subsequently, to prevent false positives in areas other than the lanes, the ROI is specified and the image is transformed into a BEV using the perspective transform functions of OpenCV. After the transformation, an SWS is used to find the left and right lanes. The window size and number are specified for each of the left and right lanes, and if the number of pixels in a window exceeds the threshold, lane features are considered to exist there, and the pixels in that region are taken as its features. The lane is extracted by fitting and smoothing the pixels in the detected region. Finally, to verify whether the lane is normally detected, the lane detected in the original image is colored for a visual confirmation.


Fig. 3. Unsupervised learning algorithm architecture

As the imported raw image may be distorted depending on the camera's position and angle, calibration was performed to correct the distortion. In the calibration, the distortion of the camera or video image and the camera's internal and external parameters were learned to remove distortions. In the corrected image, the features for lane detection were extracted based on the edges in the image. Numerous algorithms exist for extracting edges; here, first-order and second-order differential algorithms were applied to the data and their performances were compared. Among the first-order algorithms, the Sobel algorithm is the most widely used and can extract edges in all directions. Among the second-order algorithms, the Canny algorithm was developed to prevent the calculation of wrong edges owing to noise; it also exhibits good detection, good localization, and a single clear response per edge. In this study, the edges of the images were detected using the Sobel algorithm and the Canny algorithm; a comparison of the two algorithms is shown in Fig. 4 and Fig. 5.
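A minimal sketch of this calibration step, assuming a standard chessboard-based OpenCV workflow (the 9x6 corner pattern and the file names are assumptions, not details given in the paper):

```python
import cv2
import numpy as np

# Learn camera intrinsics from chessboard images, then undistort frames.
objp = np.zeros((9 * 6, 3), np.float32)           # assumed 9x6 inner corners
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in ["calib1.jpg", "calib2.jpg"]:        # hypothetical image names
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, (9, 6))
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimate the intrinsic matrix and distortion coefficients,
# then remove the lens distortion from a road frame.
ret, mtx, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
undistorted = cv2.undistort(cv2.imread("frame.jpg"), mtx, dist)
```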


Fig. 4. Results of Sobel algorithm


Fig. 5. Results of Canny algorithm
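A minimal sketch of this Sobel/Canny comparison using OpenCV; the Gaussian kernel size and the hysteresis thresholds are assumptions, not the paper's values.

```python
import cv2

gray = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)       # suppress sensor noise

# First-order: Sobel gradients in x and y, combined by magnitude.
sx = cv2.Sobel(blurred, cv2.CV_64F, 1, 0, ksize=3)
sy = cv2.Sobel(blurred, cv2.CV_64F, 0, 1, ksize=3)
sobel_edges = cv2.convertScaleAbs(cv2.magnitude(sx, sy))

# Canny: hysteresis thresholding yields thinner, cleaner edges,
# matching the comparison shown in Fig. 4 and Fig. 5.
canny_edges = cv2.Canny(blurred, 50, 150)
```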

As shown in Fig. 4 and Fig. 5, for extracting features from the images in this study, the Canny algorithm detected the various features more clearly than the Sobel algorithm. Next, for the ROI extraction, the image was cropped along the y-axis from the original image to reduce noise outside the lanes, and only the areas containing lanes were used. Because perspective exists in the image with the extracted ROI, it is difficult to detect the lanes normally. Hence, the image was transformed into the BEV form, in which it is depicted as if observed from above; the term "BEV" refers to the view of a bird looking down from above, and the technique is used to remove the perspective of an image. In this study, the transform matrix was calculated using the getPerspectiveTransform function of OpenCV, and the image was then transformed into a BEV using the warpPerspective function. Fig. 6 shows the data transformed into the BEV form.


Fig. 6. Image transformed into BEV
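A minimal sketch of this warp, assuming illustrative corner points; the source and destination quadrilaterals depend on the camera mounting and are not the paper's calibrated values.

```python
import cv2
import numpy as np

frame = cv2.imread("frame.jpg")                   # undistorted road frame
h, w = 720, 1280                                  # assumed frame size

# Map a road-shaped trapezoid (src) onto a rectangle (dst).
src = np.float32([[560, 450], [720, 450], [1130, h], [150, h]])
dst = np.float32([[300, 0], [980, 0], [980, h], [300, h]])

M = cv2.getPerspectiveTransform(src, dst)         # transform matrix
bev = cv2.warpPerspective(frame, M, (w, h))       # top-down (BEV) image
Minv = cv2.getPerspectiveTransform(dst, src)      # to project lanes back
```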

The SWS technique was used to extract the lanes from the image converted to the BEV form. The SWS technique creates a rectangular window, slides the window over the image, and determines an area to be a lane if the number of pixels inside the window exceeds the threshold. Fig. 7 shows the image of a lane with the detected features inside the windows using the SWS technique in the BEV image.


Fig. 7. BEV image using SWS technique
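A minimal sketch of the SWS on a binary BEV edge image; the window count, margin, and re-centering threshold are assumptions, and in practice the search is run once per image half for the left and right lanes.

```python
import numpy as np

def sliding_window(binary_bev, n_windows=9, margin=100, minpix=50):
    h, _ = binary_bev.shape
    # Start from the histogram peak of the bottom half of the image.
    histogram = binary_bev[h // 2:, :].sum(axis=0)
    x_current = int(np.argmax(histogram))
    nonzero_y, nonzero_x = binary_bev.nonzero()   # nonblack pixel coordinates
    lane_idx = []
    win_h = h // n_windows
    for i in range(n_windows):                    # slide from bottom to top
        y_lo, y_hi = h - (i + 1) * win_h, h - i * win_h
        x_lo, x_hi = x_current - margin, x_current + margin
        good = ((nonzero_y >= y_lo) & (nonzero_y < y_hi) &
                (nonzero_x >= x_lo) & (nonzero_x < x_hi)).nonzero()[0]
        lane_idx.append(good)
        if len(good) > minpix:                    # re-center on the mean x
            x_current = int(nonzero_x[good].mean())
    idx = np.concatenate(lane_idx)
    return nonzero_x[idx], nonzero_y[idx]         # lane pixel coordinates
```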

Nonzero pixel values (nonblack pixel values) were extracted from the window detection areas to obtain the left and right lane information, after which the LSM was used to extract an equation for the lane. To fit the pixel values of the lane in the extracted area to the average value, the smoothing function was executed and the average of the detected pixel values was returned. Through this process, the equation of a lane can be extracted using the LSM through the average of the pixel values. Lane equations extracted through the LSM can be expressed as cubic or quadratic equations depending on the input parameters. In this study, a cubic equation of the form x = Ay³ + By² + Cy + D was extracted for use in autonomous driving. Fig. 8 shows the equation extracted using the LSM, expressed as a graph: the red line indicates the left lane and the blue line the right lane; the y-axis is a random value (1-30) used for drawing the graph; and the x-axis is the distance between the vehicle and the lane, with (0, 0) representing the center of the vehicle.


Fig. 8. LSM equation expressed in graph
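A minimal sketch of the cubic least-squares fit and the frame-to-frame smoothing; the five-frame history window and the dummy pixel data are assumptions.

```python
import numpy as np

def fit_lane(lane_x, lane_y, history, max_history=5):
    # Least-squares cubic fit x = A*y^3 + B*y^2 + C*y + D.
    coeffs = np.polyfit(lane_y, lane_x, 3)
    history.append(coeffs)
    # Smooth by averaging the coefficients over recent frames.
    return np.mean(history[-max_history:], axis=0)

# Usage with the lane pixels returned by the SWS step (dummy data here).
lane_y = np.arange(1, 31, dtype=float)
lane_x = 0.01 * lane_y**3 - 0.2 * lane_y**2 + lane_y + 300
history = []
coeffs = fit_lane(lane_x, lane_y, history)        # (A, B, C, D)
x_samples = np.polyval(coeffs, np.linspace(1, 30, 30))
```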

To visually determine from the original image whether the lanes were extracted appropriately through the LSM, the lanes extracted through the algorithm were colored and overlaid on the original image, as shown in Fig. 9. Although continuous lanes can be extracted using the unsupervised learning algorithm, vehicles may appear around the lane, or various noise may occur due to environmental conditions, thereby preventing lane detection or deteriorating detection accuracy.


Fig. 9. Extracted lanes that underwent all processes applied to the original image

3.3 Supervised learning algorithm for object detections

While lane detection and steering wheel control in an autonomous vehicle are important, it is also crucial to provide information by recognizing surrounding objects. Furthermore, the supervised learning algorithm cannot extract lanes as LSM equations; it only recognizes them. In a real autonomous vehicle, one-stage object detection algorithms such as YOLO or SSD would have to be used, because two-stage object detection is too slow; however, a two-stage object detection algorithm was used in this study because processing is performed on previously collected data. Moreover, while various sign and curb information is expressed in a high-precision road map, this study first detected the more important road markings: lanes, stop lines, and crosswalks. Faster R-CNN Inception v2, a two-stage object detection algorithm, was used to detect the lanes, stop lines, and crosswalks efficiently; the Faster R-CNN algorithm exhibits performance representative of two-stage detection algorithms, and among the published pretrained models, Faster R-CNN Inception v2 exhibited efficient speed and detection performance. The published pretrained Faster R-CNN Inception v2 model does not contain classes for detecting lanes, stop lines, and crosswalks; therefore, data must be added to detect these objects. Hence, the transfer learning technique was used in this study to train the existing pretrained model to classify lanes, stop lines, and crosswalks.
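The paper fine-tunes Faster R-CNN Inception v2; as an illustrative stand-in (torchvision ships no Inception v2 backbone), the sketch below shows the same transfer-learning pattern on a pretrained Faster R-CNN with a ResNet-50 FPN backbone, replacing the box-predictor head with four classes (background, lane, stop line, crosswalk).

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Load a detector pretrained on COCO and swap its classification head.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=4)

# After fine-tuning on the labeled road-marking images, run inference;
# the model returns boxes, labels, and scores for each input image.
model.eval()
with torch.no_grad():
    predictions = model([torch.rand(3, 600, 800)])  # dummy frame tensor
```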

In addition, to detect lanes that are dark or occluded by vehicles, the LaneNet algorithm was applied in this study based on a previous study [18]. The model was trained on approximately 7,000 images, and lane detection was performed as shown in Fig. 10; however, the results were unsatisfactory.


Fig. 10. LaneNet algorithm applied to the data of this study

3.4 Hybrid Fusion Algorithm

In the lane detection method employing the unsupervised learning algorithm, the lanes detected in the original image were colored so that a human could visually confirm whether they were detected normally. This method is highly inefficient and cannot be used in autonomous vehicles. Moreover, when lanes cannot be detected normally owing to road surface conditions and noise, the autonomous vehicle may be involved in an accident, and blanks may appear in the high-precision road map or the map may be drawn incorrectly. Furthermore, the supervised learning algorithm cannot detect lanes continuously but only recognizes them, whereas the unsupervised learning algorithm cannot judge the correctness of the detected lane information. Hence, the supervised learning algorithm can be used for lane recognition, and its output can be used to judge the lane information detected by the unsupervised learning algorithm. Accordingly, we propose a hybrid fusion algorithm that fuses the supervised and unsupervised learning algorithms. The hybrid fusion algorithm is based on the random forest algorithm, which exhibits stable performance among bagging-based ensemble algorithms and whose performance has been verified in previous studies. The random forest algorithm, a type of ensemble method, is used for various problems such as detection, classification, and regression; decorrelating the predictions of the individual trees improves generalization performance, which is why the random forest algorithm was chosen to perform the fusion. The accuracy of the left and right lanes discovered through unsupervised learning is judged using the bounding boxes that the supervised lane model predicts to be lanes. If multiple lanes exist in the image, numerous situations can arise in which all bounding boxes are used as parameters of the hybrid fusion algorithm; hence, only one bounding box was selected for each of the left and right lanes. In terms of the selection criteria, the box closest to the center of the image with the largest y-coordinate value was selected. As shown in Fig. 11, among the numerous predicted bounding boxes, the supervised lane model selects the red box.


Fig. 11. Application of hybrid fusion algorithm and box selection for each lane

The left and right lanes were each scored according to the amount of lane detected by the unsupervised learning algorithm inside the selected red box, after which the trained model of the hybrid fusion algorithm determined whether the detected lanes were normal or abnormal. The inclusion ratio α of each lane is calculated as α = (number of lane pixels detected in the red box by the unsupervised learning algorithm) / (diagonal length of the red box). To train the model of the hybrid fusion algorithm, the calculated α and the ground truth of each lane were provided as training data from 1,136 images, and the classification model was generated accordingly. When the lane is classified as normal by the hybrid fusion model, the lane information recognized by the unsupervised learning algorithm is used. When the lane is classified as abnormal, the lane information of the previous frame is used instead, minimizing noise in the high-precision road map and allowing the lane to be identified continuously.
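A minimal sketch of this fusion step, assuming a two-feature layout (α for the left and right lanes) and generic random forest hyperparameters; the dummy training arrays stand in for the 1,136 labeled images described above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def alpha(lane_pixels_in_box, box):
    # alpha = lane pixels inside the selected box / box diagonal length.
    x1, y1, x2, y2 = box
    return lane_pixels_in_box / np.hypot(x2 - x1, y2 - y1)

# Dummy stand-in for the 1,136 labeled training images.
rng = np.random.default_rng(0)
train_alphas = rng.random((1136, 2))              # [alpha_left, alpha_right]
train_labels = (train_alphas.mean(axis=1) > 0.4).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(train_alphas, train_labels)

def fuse(alphas, current_lanes, previous_lanes):
    if clf.predict([alphas])[0] == 1:             # normal: keep current lanes
        return current_lanes
    return previous_lanes                         # abnormal: reuse last good
```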

3.5 Data collections and the test car facility

This study was conducted in collaboration with the Land and Geospatial Informatix Corporation. The data were collected and analyzed using a vehicle equipped with various sensors, including LIDAR and radar. The high-precision digital road maps are generated over several days using this data collection vehicle (not a self-driving car). The pre-made maps are downloaded to autonomous vehicles before the cars drive, and self-driving cars with intelligent technology can then utilize the pre-made digital maps in real time while driving. The collected data covered various scenarios, such as daytime, nighttime, rain, highway driving, and city driving, and over 93,000 frames of the collected image data were subject to analysis. Fig. 12 shows the vehicle used to collect the data.


4. Evaluation

4.1 Traffic lane tracking

To verify the lane detection performance using the unsupervised learning algorithm, we examined whether the lanes were correctly detected in 2,020 images. Table 1 shows the detection performance results.

Table 1. Detection performance of unsupervised learning algorithm


If the unsupervised learning algorithm is used alone, the lane may not be detected normally when it is blocked or when noise is generated by various environmental factors. In such cases, blanks or incorrect lanes may be drawn on the high-precision road map; this can be addressed using the hybrid fusion algorithm.

The performance of the model using the hybrid fusion algorithm was measured based on whether the classification model of the random forest algorithm judged the lane detection of the unsupervised learning model as normal or abnormal. As shown in Table 2, the performance of the hybrid fusion algorithm was measured based on 2,020 images.

Table 2. Hybrid fusion algorithm performance


Compared with the lane detection performance of the unsupervised learning algorithm alone, the hybrid fusion algorithm can determine whether the unsupervised learning algorithm detected the lane normally or abnormally with an accuracy of 0.92, along with improvements in recall and F1-score of 0.08 and 0.03, respectively. If only the unsupervised learning algorithm is used and the lane is detected abnormally or not at all, abnormal results may appear in the high-precision road map; conversely, the hybrid fusion algorithm uses the immediately preceding normal detection result when an abnormal detection occurs, thereby minimizing noise and enhancing performance.

4.2 Object detections on Road

The supervised learning algorithm used 2,382 images to classify lanes, 1,570 images to classify stop lines, and 1,152 images to classify crosswalks. The model trained through transfer learning displays bounding boxes in the image, identical to the detection method of the existing Faster R-CNN Inception v2 model. Furthermore, as the data will be used in the high-precision road map, inaccurate detection data cannot be used; as such, the detection accuracy of each class must be 90% or more. The supervised learning algorithm was used to determine the performance of the lane, stop line, and crosswalk detection. The detection performances were measured for 632 lanes in 199 images, stop lines in 259 images, and crosswalks in 225 images, as shown in Table 3. Fig. 13 shows the detection results applied to the image.

Table 3. Supervised learning algorithm performance



Fig. 13. Result of object detections on road

5. Conclusion

In this study, we developed an intelligent hybrid fusion classification model to continuously and accurately detect road lanes using vision data, for the generation of high-precision road maps for autonomous vehicles. Our classification model was developed by testing various vision data to obtain the best-performing algorithm for creating high-precision road maps in the real world. Although the vision data collected by the test vehicle contained noise from harsh road environments, we discovered that the proposed hybrid fusion algorithm achieved satisfactory performance, with detections accurate enough for use in precise road maps.

Future expansion and enhancement of the algorithm can include adding functionality for building practical, accurate precision maps for self-driving cars. However, a few difficulties in generating maps, such as harsh weather conditions, regulations, complex social interactions, and security, are encountered when developing fully autonomous self-driving cars for practical environments. To ensure the safe operation of self-driving vehicles, these difficulties must be fully evaluated and verified in different environments. We plan to obtain solutions to expand our algorithm such that it can operate accurately in fully autonomous vehicles.

This research was supported by a grant titled "Industry-Academic Cooperation R&D Support Project" from LX and the "Core Technology Development Project of Automobile Industry (10052941)" from KEIT. This work was supported by the Technology Advancement Research Program funded by the Ministry of Land, Infrastructure, and Transport of the Korean government (Grant 20CTAP-C151968-02). This research was supported by a grant (code 20PQOW-B152618-02) from the R&D Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government. This work was supported by the Technology Innovation Program (Industrial Strategic Technology Development Program-Automobile Industrial Core Technology Development Project) (K_G012000307003, Development of rear automatic braking system for NCAP) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea). This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. 2020R1I1A3068274).

References

  1. NHTSA, "Critical Reasons for Crashes Investigated in the National Motor Vehicle Crash Causation Survey," 2018.
  2. National Sleep Foundation, "Drowsy Driving," 2019.
  3. Tesla, "Autopilot".
  4. Waymo, "Safety".
  5. Uber, "Safety".
  6. General Motors, "Mission".
  7. ABIresearch, "High Accuracy and Real-time Maps for Autonomous Vehicles," 2016.
  8. J. Canny, "A Computational Approach to Edge Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-8, no. 6, pp. 679-698, 1986. https://doi.org/10.1109/TPAMI.1986.4767851
  9. W. Farag and Z. Saleh, "Road Lane-Lines Detection in Real-Time for Advanced Driving Assistance Systems," in Proc. of International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies (3ICT), pp. 1-8, 2018.
  10. S. P. Narote, P. N. Bhujbal, A. S. Narote and D. M. Dhane, "A review of recent advances in lane detection and departure warning system," Pattern Recognition, vol. 73, pp. 216-234, 2018. https://doi.org/10.1016/j.patcog.2017.08.014
  11. J. Huang, V. Rathod, C. Sun, M. Zhu, A. Korattikara, A. Fathi, I. Fischer, Z. Wojna, Y. Song, S. Guadarrama and K. Murphy, "Speed/accuracy trade-offs for modern convolutional object detectors," in Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 3296-3297, 2017.
  12. R. Girshick, "Fast R-CNN," in Proc. of IEEE International Conference on Computer Vision and Pattern Recognition, pp. 1440-1448, 2015.
  13. Tensorflow, "Object detection model zoo".
  14. S. Ren, K. He, R. Girshick and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," in Proc. of the 28th International Conference on Neural Information Processing Systems, vol. 1, pp. 91-99, 2015.
  15. C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens and Z. Wojna, "Rethinking the Inception Architecture for Computer Vision," in Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818-2826, 2016.
  16. Y. Xing, C. Lv, L. Chen, H. Wang, H. Wang, D. Cao, E. Velenis and F. Wang, "Advances in Vision-Based Lane Detection: Algorithms, Integration, Assessment, and Perspectives on ACP-Based Parallel Vision," IEEE/CAA Journal of Automatica Sinica, vol. 5, no. 3, pp. 645-661, 2018. https://doi.org/10.1109/jas.2018.7511063
  17. Z. Wang, W. Ren and Q. Qiu, "LaneNet: Real-Time Lane Detection Networks for Autonomous Driving," arXiv, pp. 1-9, 2018.
  18. D. Neven, B. D. Brabandere, S. Georgoulis, M. Proesmans and L. V. Gool, "Towards End-to-End Lane Detection: an Instance Segmentation Approach," arXiv, pp. 1-7, 2018.
  19. W. Song, Y. Yang, M. Fu, Y. Li and M. Wang, "Lane Detection and Classification for Forward Collision Warning System Based on Stereo Vision," IEEE Sensors Journal, vol. 18, no. 12, pp. 5151-5163, 2018. https://doi.org/10.1109/jsen.2018.2832291
  20. J. Shen, N. Liu, H. Sun, X. Tao and Q. Li, "Vehicle Detection in Aerial Images Based on Hyper Feature Map in Deep Convolutional Network," KSII Transactions on Internet and Information Systems, vol. 13, no. 4, pp. 1989-2011, 2019. https://doi.org/10.3837/tiis.2019.04.014
  21. Y. Huang, Y. Li, X. Hu and W. Ci, "Lane Detection Based on Inverse Perspective Transformation and Kalman Filter," KSII Transactions on Internet and Information Systems, vol. 12, no. 2, pp. 643-661, 2018. https://doi.org/10.3837/tiis.2018.02.006
  22. C. Yuan, H. Chen, J. Liu, D. Zhu and Y. Xu, "Robust Lane Detection for Complicated Road Environment Based on Normal Map," IEEE Access, vol. 6, pp. 49679-49689, 2018. https://doi.org/10.1109/ACCESS.2018.2868976
  23. J. Gwak, J. Jung, R. Oh, M. Park, M. A. K. Rakhimov and J. Ahn, "A Review of Intelligent Self-Driving Vehicle Software Research," KSII Transactions on Internet and Information Systems, vol. 13, no. 11, pp. 5299-5320, 2019. https://doi.org/10.3837/tiis.2019.11.002
  24. C. Tan, F. Sun, T. Kong, W. Zhang, C. Yang and C. Liu, "A Survey on Deep Transfer Learning," in Proc. of International Conference on Artificial Neural Networks, Lecture Notes in Computer Science, vol. 11141, pp. 270-279, 2018.
  25. J. Ahn and R. Han, "myBlackBox: Blackbox Mobile Cloud Systems for Personalized Unusual Event Detection," Sensors (Basel), vol. 16, no. 5, pp. 753, 2016. https://doi.org/10.3390/s16050753
  26. M. F. Delgado, E. Cernadas, S. Barro and D. Amorim, "Do we Need Hundreds of Classifiers to Solve Real World Classification Problems?," The Journal of Machine Learning Research, vol. 15, no. 1, pp. 3133-3181, 2014.
  27. N. Dogru and A. Subasi, "Traffic Accident Detection Using Random Forest Classifier," in Proc. of 15th Learning and Technology Conference (L&T), pp. 40-45, 2018.
  28. T. K. Ho, "Random Decision Forests," in Proc. of the 3rd International Conference on Document Analysis and Recognition, Montreal, QC, vol. 1, pp. 278-282, 1995.
  29. National Geographic Information Institute, "Precision road map".
  30. S. Li, Z. Hu and M. Zhao, "Moving Object Detection Using Sparse Approximation and Sparse Coding Migration," KSII Transactions on Internet and Information Systems, vol. 14, no. 5, pp. 2141-2155, 2020. https://doi.org/10.3837/tiis.2020.05.015
