• Title/Summary/Keyword: road environment detection

98 search results

Research of Vehicles Longitudinal Adaptive Control using V2I Situated Cognition based on LiDAR for Accident Prone Areas (LiDAR 기반 차량-인프라 연계 상황인지를 통한 사고다발지역에서의 차량 종방향 능동제어 시스템 연구)

  • Kim, Jae-Hwan;Lee, Je-Wook;Yoon, Bok-Joong;Park, Jae-Ung;Kim, Jung-Ha
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.18 no.5
    • /
    • pp.453-464
    • /
    • 2012
  • This paper studies an adaptive longitudinal control system that achieves wide-range situational awareness, reduces traffic accidents, and supports a safe driving environment through an integrated system that grafts IT-based road-infrastructure information onto an intelligent vehicle combining automotive and IT technology. Laser scanners are installed in the road infrastructure at intersections, speed-limited zones, and sharp curves, where the risk of traffic accidents is high. The infrastructure performs object recognition, segmentation, and tracking to determine dangerous situations, and communicates the information to the vehicle in real time over Ethernet. The transmitted data are then integrated with data from a laser scanner on the vehicle's bumper to support safe driving.
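As a hypothetical sketch of the infrastructure-to-vehicle link described above: the paper specifies only that object information is communicated to the vehicle in real time over Ethernet, so the UDP transport, address, port, and JSON message format below are all illustrative assumptions.

```python
import json
import socket

def encode_objects(objects):
    """Serialize the roadside unit's tracked objects.

    The JSON message format is a made-up illustration; the paper does
    not specify the wire format.
    """
    return json.dumps({"objects": objects}).encode("utf-8")

def broadcast_to_vehicle(objects, vehicle_addr=("127.0.0.1", 5005)):
    """Push one detection message to the vehicle over the network.

    UDP is used here for the sketch; the paper says only 'Ethernet'.
    The address and port are illustrative.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(encode_objects(objects), vehicle_addr)
    finally:
        sock.close()
```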

An Illumination Invariant Traffic Sign Recognition in the Driving Environment for Intelligence Vehicles (지능형 자동차를 위한 조명 변화에 강인한 도로표지판 검출 및 인식)

  • Lee, Taewoo;Lim, Kwangyong;Bae, Guntae;Byun, Hyeran;Choi, Yeongwoo
    • Journal of KIISE
    • /
    • v.42 no.2
    • /
    • pp.203-212
    • /
    • 2015
  • This paper proposes a traffic sign recognition method for real road environments. Video streams captured while driving have two characteristics that distinguish them from general object video streams. First, the number of traffic sign types is limited and their shapes are mostly simple. Second, the camera cannot capture clear images of road scenes, because illumination and weather conditions change continuously. In this paper, we improve the modified census transform (MCT) to extract features effectively from road scenes with frequent illumination changes. The extracted features are collected into histograms and transformed into very high-dimensional dense descriptors. These high-dimensional descriptors are then encoded into a low-dimensional feature vector by Fisher-vector coding with a Gaussian Mixture Model. The proposed method achieves illumination-invariant detection and recognition, and its performance is sufficient to detect and recognize traffic signs in real time with high accuracy.
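As a rough sketch of the illumination-robust feature step, a plain modified census transform over 3×3 neighborhoods might look like the following (an illustrative NumPy version; the paper's specific improvement to MCT is not reproduced here).

```python
import numpy as np

def modified_census_transform(img):
    """Modified Census Transform (MCT) over 3x3 neighborhoods.

    Each pixel is encoded as a 9-bit index: one bit per neighborhood
    cell, set when that cell exceeds the neighborhood mean. Comparing
    against the local mean (rather than the center pixel, as in the
    plain census transform) makes the code robust to illumination
    changes: adding a constant or scaling brightness does not change
    which cells lie above the mean.
    """
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint16)
    # The nine shifted views of the image form the 3x3 neighborhood.
    cells = [img[dy:h - 2 + dy, dx:w - 2 + dx].astype(np.float64)
             for dy in range(3) for dx in range(3)]
    mean = sum(cells) / 9.0
    for bit, cell in enumerate(cells):
        out |= ((cell > mean).astype(np.uint16) << bit)
    return out
```

Histograms of these 9-bit codes over image cells would then serve as the dense descriptors mentioned in the abstract.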

AVM Stop-line Detection based Longitudinal Position Correction Algorithm for Automated Driving on Urban Roads (AVM 정지선인지기반 도심환경 종방향 측위보정 알고리즘)

  • Kim, Jongho;Lee, Hyunsung;Yoo, Jinsoo;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association
    • /
    • v.12 no.2
    • /
    • pp.33-39
    • /
    • 2020
  • This paper presents an Around View Monitoring (AVM) stop-line-detection-based longitudinal position correction algorithm for automated driving on urban roads. The poor positioning accuracy of low-cost GPS causes many problems for precise path tracking, so this study aims to improve its longitudinal accuracy. The algorithm has three main processes. The first is stop-line detection: the stop-line is detected in the AVM camera image using the Hough Transform. The second is map matching: to find the corrected vehicle position, the detected line is matched to the stop-line of the HD map using the Iterative Closest Point (ICP) method. Third, the longitudinal position of the low-cost GPS is updated with the corrected vehicle position using a Kalman Filter. The proposed algorithm is implemented in the Robot Operating System (ROS) environment and verified on actual urban road driving data. Test results show that longitudinal localization performance improved compared to using low-cost GPS alone.
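The third step, blending the map-matched position into the GPS estimate, can be illustrated with a minimal one-dimensional Kalman filter. This is a sketch only: the state model and the noise variances below are assumptions, not values from the paper.

```python
class LongitudinalKalman1D:
    """Minimal 1-D Kalman filter sketch for longitudinal position.

    Hypothetical illustration: the state is longitudinal position (m),
    dead-reckoned from velocity and corrected whenever a stop-line
    match against the HD map yields a position measurement. The noise
    variances q and r are made-up values, not taken from the paper.
    """
    def __init__(self, x0, p0=4.0, q=0.1, r=0.25):
        self.x, self.p = x0, p0   # state estimate and its variance
        self.q, self.r = q, r     # process and measurement noise

    def predict(self, v, dt):
        self.x += v * dt          # dead-reckon forward
        self.p += self.q          # uncertainty grows over time

    def update(self, z):
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)       # blend in the measurement
        self.p *= (1.0 - k)
        return self.x
```

A stop-line match (a low-noise measurement relative to dead reckoning) pulls the estimate strongly toward the map-derived position.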

Vehicle Detection for Adaptive Head-Lamp Control of Night Vision System (적응형 헤드 램프 컨트롤을 위한 야간 차량 인식)

  • Kim, Hyun-Koo;Jung, Ho-Youl;Park, Ju H.
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.6 no.1
    • /
    • pp.8-15
    • /
    • 2011
  • This paper presents an effective method for detecting vehicles ahead of a camera-assisted car during nighttime driving. The proposed method detects vehicles by finding their headlights and taillights using image segmentation and clustering techniques. First, to effectively extract the spotlights of interest, a pre-processing step based on a camera lens filter and a labeling method is applied to the road-scene images. Second, to spatially cluster the detected lamps into vehicles, a grouping process uses a light-tracking method and locates vehicle lighting patterns. The system was implemented on a Da-vinci 7437 DSP board with a visible-light mono-camera and tested on urban and rural roads. In these tests, classification performance exceeded a precision rate of 89% and a recall rate of 94% in a real-time environment.
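The spot-extraction stage, thresholding the night image and labeling connected bright regions, might be sketched as follows. This is an illustrative NumPy/BFS version; the paper's lens-filter pre-processing and DSP implementation are not reproduced, and the threshold and area values are assumptions.

```python
import numpy as np
from collections import deque

def detect_lamp_spots(gray, thresh=200, min_area=4):
    """Extract bright lamp-like blobs from a nighttime grayscale image.

    Threshold, then label connected bright regions with a 4-connected
    flood fill, and return each blob's centroid and area. Pairing blobs
    of similar height and size into head-/tail-light pairs would follow
    as the grouping step described in the abstract.
    """
    mask = gray >= thresh
    labels = np.zeros(gray.shape, dtype=int)
    spots, next_label = [], 1
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue
        # Flood-fill one connected bright region from this seed.
        q, pix = deque([(sy, sx)]), []
        labels[sy, sx] = next_label
        while q:
            y, x = q.popleft()
            pix.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = next_label
                    q.append((ny, nx))
        if len(pix) >= min_area:   # drop tiny noise blobs
            ys, xs = zip(*pix)
            spots.append({"cy": sum(ys) / len(pix),
                          "cx": sum(xs) / len(pix),
                          "area": len(pix)})
        next_label += 1
    return spots
```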

Robust Lane Detection Method Under Severe Environment (악 조건 환경에서의 강건한 차선 인식 방법)

  • Lim, Dong-Hyeog;Tran, Trung-Thien;Cho, Sang-Bock
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.5
    • /
    • pp.224-230
    • /
    • 2013
  • Lane boundary detection plays a key role in driver assistance systems. This study proposes a robust method for detecting lane boundaries in severe environments. First, a horizontal line is detected in the original image using the improved Vertical Mean Distribution Method (iVMD), and the sub-region image below that horizontal line is determined. Second, we extract the lane markings from the sub-region image using a Canny edge detector. Finally, a K-means clustering algorithm classifies the left and right lane clusters under varying illumination, cracked roads, complex lane markings, and passing traffic. Experimental results show that the proposed method satisfies the real-time and efficiency requirements of intelligent transportation systems.
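The final clustering step can be illustrated with a plain k=2 k-means on a one-dimensional edge feature. This is a sketch under an assumed feature: in image coordinates the left and right lane markings typically have opposite edge slopes, so slope works as a toy separator here; the paper's actual feature set is not specified in the abstract.

```python
def kmeans_2(values, iters=20):
    """Plain k=2 k-means on 1-D features.

    Illustrates splitting lane-marking edge features into left/right
    clusters. Centers are initialized at the extremes, then points are
    re-assigned to the nearest center and centers re-averaged.
    """
    c0, c1 = min(values), max(values)
    left, right = [], []
    for _ in range(iters):
        left = [v for v in values if abs(v - c0) <= abs(v - c1)]
        right = [v for v in values if abs(v - c0) > abs(v - c1)]
        if left:
            c0 = sum(left) / len(left)
        if right:
            c1 = sum(right) / len(right)
    return (c0, c1), (left, right)
```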

Performance Improvement of Pedestrian Detection using a GM-PHD Filter (GM-PHD 필터를 이용한 보행자 탐지 성능 향상 방법)

  • Lee, Yeon-Jun;Seo, Seung-Woo
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.12
    • /
    • pp.150-157
    • /
    • 2015
  • Pedestrian detection has been widely researched as one of the important technologies for autonomous driving vehicles and accident prevention. Pedestrian detection methods fall into two categories: camera-based and LIDAR-based. LIDAR-based methods have the advantages of a wide angle of view and insensitivity to illumination changes, which camera-based methods lack. However, 3D LIDAR has several problems, such as insufficient resolution to detect distant pedestrians and a decreased detection rate in complex situations due to segmentation errors and occlusion. In this paper, two methods using a GM-PHD filter are proposed to improve the poor detection rates of pedestrian detection algorithms based on 3D LIDAR. The first improves detection performance and object resolution by automatically accumulating points from previous frames onto current objects. The second further enhances the detection results by applying a GM-PHD filter modified to handle multi-target classification in difficult situations. A quantitative evaluation on autonomously acquired road environment data shows that the proposed methods greatly increase the performance of existing pedestrian detection algorithms.
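The first idea, accumulating points from previous frames onto current objects, can be sketched in 2-D as ego-motion compensation plus stacking. This is illustrative only: the paper works with 3-D LiDAR, and the GM-PHD machinery itself is omitted.

```python
import numpy as np

def accumulate_scans(scans, poses):
    """Stack LiDAR scans from several frames into the current frame.

    Each previous scan's points (N x 2 arrays here) are rotated and
    translated by that frame's ego pose (x, y, yaw), expressed in the
    current frame, then concatenated. Accumulation densifies sparse
    returns from distant pedestrians.
    """
    out = []
    for pts, (tx, ty, yaw) in zip(scans, poses):
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s], [s, c]])      # 2-D rotation matrix
        out.append(pts @ R.T + np.array([tx, ty]))
    return np.vstack(out)
```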

Road Image Recognition Technology based on Deep Learning Using TIDL NPU in SoC Environment (SoC 환경에서 TIDL NPU를 활용한 딥러닝 기반 도로 영상 인식 기술)

  • Yunseon Shin;Juhyun Seo;Minyoung Lee;Injung Kim
    • Smart Media Journal
    • /
    • v.11 no.11
    • /
    • pp.25-31
    • /
    • 2022
  • Deep learning-based image processing is essential for autonomous vehicles. To process road images in real time in a System-on-Chip (SoC) environment, deep learning models must be executed on a Neural Processing Unit (NPU) specialized for deep learning operations. In this study, we ported seven open-source image-processing deep learning models, originally developed on GPU servers, to the Texas Instruments Deep Learning (TIDL) NPU environment. Through performance evaluation and visualization, we confirmed that the ported models operate normally in the SoC virtual environment. This paper describes the problems that occurred during the migration process due to the limitations of the NPU environment and how we solved them, thereby providing a reference case for developers and researchers who want to port deep learning models to SoC environments.

Realization of Object Detection Algorithm and Eight-channel LiDAR sensor for Autonomous Vehicles (자율주행자동차를 위한 8채널 LiDAR 센서 및 객체 검출 알고리즘의 구현)

  • Kim, Ju-Young;Woo, Seong Tak;Yoo, Jong-Ho;Park, Young-Bin;Lee, Joong-Hee;Cho, Hyun-Chang;Choi, Hyun-Yong
    • Journal of Sensor Science and Technology
    • /
    • v.28 no.3
    • /
    • pp.157-163
    • /
    • 2019
  • The LiDAR sensor, widely regarded as one of the most important sensors for autonomous driving, has recently undergone active commercialization owing to the significant growth in the production of ADAS and autonomous vehicle components. LiDAR technology radiates a laser beam at a particular angle and acquires a three-dimensional image by measuring the elapsed time until the reflected beam returns. LiDAR sensors have been incorporated into various devices such as drones and robots. This study focuses on object detection and recognition using sensor fusion. Object detection and recognition can be performed by a single sensor capable of recognition, such as an image, optical, or propagation sensor; however, a single sensor has limitations in this respect, which can be overcome by employing multiple sensors. In this paper, the performance of an eight-channel scanning LiDAR was evaluated and an object detection algorithm based on it was implemented. Furthermore, object detection characteristics during daytime and nighttime in a real road environment were verified. The experimental results show that an excellent detection performance of 92.87% can be achieved.
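The time-of-flight principle described above reduces to a one-line range equation: the beam travels to the target and back, so the range is half the round-trip path at the speed of light.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(elapsed_s):
    """Range from laser time of flight.

    elapsed_s is the round-trip time between emitting the pulse and
    receiving its reflection; the one-way distance is half the
    round-trip path.
    """
    return C * elapsed_s / 2.0
```

For example, a pulse returning after 200 ns corresponds to a target roughly 30 m away.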

Lane Detection in Complex Environment Using Grid-Based Morphology and Directional Edge-link Pairs (복잡한 환경에서 Grid기반 모폴리지와 방향성 에지 연결을 이용한 차선 검출 기법)

  • Lin, Qing;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.20 no.6
    • /
    • pp.786-792
    • /
    • 2010
  • This paper presents a real-time lane detection method that accurately finds lane-mark boundaries in complex road environments. Unlike many existing methods, which devote the post-processing stage to fitting lane-mark positions among a great number of outliers, the proposed method aims to remove those outliers as early as the feature extraction stage, so that the search space in post-processing is greatly reduced. To achieve this, a grid-based morphology operation first generates the regions of interest (ROI) dynamically. Within the ROI, a directional edge-linking algorithm with directional edge-gap closing links edge pixels into edge-links that lie in valid directions. These directional edge-links are then grouped into pairs by checking for a valid lane-mark width at a given height in the image. Finally, lane-mark colors are checked inside the edge-link pairs in the YUV color space, and lane-mark types are estimated with a Bayesian probability model. Experimental results show that the proposed method effectively identifies lane-mark edges among heavy clutter in complex road environments, and the whole algorithm achieves an accuracy of around 92% at an average speed of 10 ms/frame on 320×240 images.
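The pair-grouping rule, keeping only edge-link pairs whose gap matches a valid lane-mark width at a given image height, can be sketched as below. Under perspective, the pixel width of a marking shrinks roughly linearly from the image bottom toward the horizon; all numeric defaults here are illustrative assumptions, not the paper's parameters.

```python
def valid_pair(x_left, x_right, y, horizon_y=120,
               width_at_bottom=25.0, img_h=240, tol=0.4):
    """Check whether two edge-links at row y form a plausible lane mark.

    The expected marking width scales linearly from 0 at the assumed
    horizon row to width_at_bottom pixels at the image bottom; the
    pair is kept when its gap is within a relative tolerance of that
    expectation.
    """
    if y <= horizon_y:
        return False                      # above the horizon: reject
    expected = width_at_bottom * (y - horizon_y) / (img_h - horizon_y)
    gap = abs(x_right - x_left)
    return abs(gap - expected) <= tol * expected
```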

A Road Luminance Measurement Application based on Android (안드로이드 기반의 도로 밝기 측정 어플리케이션 구현)

  • Choi, Young-Hwan;Kim, Hongrae;Hong, Min
    • Journal of Internet Computing and Services
    • /
    • v.16 no.2
    • /
    • pp.49-55
    • /
    • 2015
  • According to statistics on traffic accidents over the past five years, more accidents occurred at night than during the day. Among the various causes of traffic accidents, one major cause is inappropriate or missing street lights, which confuse the driver's vision. In this paper, we designed and implemented a smartphone application that measures lane luminance and stores the driver's location, driving information, and lane luminance in a database in real time, in order to identify inadequate street light facilities and areas without any street lights. The application is implemented in a native C/C++ environment using the Android NDK, which improves execution speed over code written in Java or other languages. To measure road luminance, the input image in the RGB color space is converted to the YCbCr color space, where the Y value gives the luminance of the road. The application detects the road lane and stores the lane luminance in the database server. It receives the road video through the smartphone's camera and reduces computational cost by restricting processing to a region of interest (ROI) in the input images. The ROI is converted to a grayscale image, and a Canny edge detector extracts the outlines of the lanes. A Hough line transform is then applied to obtain the candidate lane group, and a lane detection algorithm selects both sides of the lane using the gradients of the candidates. Once both lanes are detected, a triangular area is set up extending 20 pixels down from the intersection of the lanes, and the road luminance is estimated from this area: the Y value is computed from the R, G, and B values of each pixel in the triangle. 
The average Y value of the pixels is scaled to a range from 0 to 100 to report the road luminance, and each pixel value is represented with a color between black and green. After analyzing the road lane video with the luminance of the road about 60 meters ahead, the application stores the car's location from the smartphone's GPS sensor in the database server over a wireless connection every 10 minutes. We expect the collected road luminance information to warn drivers for safe driving and to effectively improve renovation plans for road luminance management.
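The luminance computation above can be sketched as follows. The abstract says only that Y is computed from each pixel's R, G, B values; the standard BT.601 luma weights used here are an assumption about that conversion.

```python
def road_luminance(pixels):
    """Average luminance of the sampled road triangle, scaled to 0..100.

    Each RGB pixel is converted to the Y (luma) channel of YCbCr with
    the BT.601 weights (an assumed choice of conversion), the Y values
    are averaged, and the 0..255 result is rescaled to the 0..100
    range that the application reports.
    """
    ys = [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels]
    return (sum(ys) / len(ys)) * 100.0 / 255.0
```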