• Title/Summary/Keyword: vision-based vehicle detection

Search results: 126

Lane Detection System Based on Vision Sensors Using a Robust Filter for Inner Edge Detection (차선 인접 에지 검출에 강인한 필터를 이용한 비전 센서 기반 차선 검출 시스템)

  • Shin, Juseok;Jung, Jehan;Kim, Minkyu
    • Journal of Sensor Science and Technology
    • /
    • v.28 no.3
    • /
    • pp.164-170
    • /
    • 2019
  • In this paper, a lane detection and tracking algorithm based on vision sensors, employing a filter robust for inner edge detection, is proposed for developing a lane departure warning system (LDWS). The lateral offset is calculated precisely by applying the proposed inner-edge-detection filter within the region of interest. The proposed algorithm was then compared with an existing algorithm in terms of the occurrence time of the lateral-offset-based warning alarm, and an average error of approximately 15 ms was observed. Tests were also conducted to verify that a warning alarm is generated when the driver departs from a lane, and an average accuracy of approximately 94% was observed. Additionally, the proposed LDWS was implemented as an embedded system, mounted on a test vehicle, and driven for approximately 100 km to obtain experimental results. The results indicate that the average lane detection rates during daytime and nighttime are approximately 97% and 96%, respectively, and that the embedded system processes approximately 12 frames per second.
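
The pipeline described above (an inner-edge filter applied to a region of interest, then a lateral offset) can be sketched on a single grayscale scanline. This is an illustrative reading, not the paper's actual filter; the gradient threshold and function names are assumptions.

```python
def inner_edges(row, threshold=60):
    """Return (left, right) inner-edge columns on one grayscale scanline.

    The left lane marking yields a falling edge on its inner (right) side,
    and the right marking a rising edge on its inner (left) side.
    """
    grad = [row[i + 1] - row[i] for i in range(len(row) - 1)]
    center = len(row) // 2
    left = max((i for i in range(center) if grad[i] <= -threshold), default=None)
    right = min((i for i in range(center, len(grad)) if grad[i] >= threshold), default=None)
    return left, right

def lateral_offset(row):
    """Offset of the lane midpoint from the image center, in pixels."""
    left, right = inner_edges(row)
    if left is None or right is None:
        return None        # lane lost on this scanline
    return (left + right) / 2 - len(row) // 2
```

A negative offset means the vehicle sits right of the lane center on this scanline; the warning logic would compare the offset against a departure threshold.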

Lateral Control of Vision-Based Autonomous Vehicle using Neural Network (신형회로망을 이용한 비젼기반 자율주행차량의 횡방향제어)

  • 김영주;이경백;김영배
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2000.11a
    • /
    • pp.687-690
    • /
    • 2000
  • Recently, many studies have aimed to protect human lives and property by preventing accidents caused by carelessness or mistakes. One such effort is the development of an autonomous vehicle. The general control method for a vision-based autonomous vehicle system is to determine the navigation direction by analyzing lane images from a camera and to navigate using a suitable control algorithm. In this paper, characteristic points are extracted from lane images using a lane recognition algorithm with the Sobel operator, and the vehicle is then controlled using two proposed auto-steering algorithms. The first method uses the geometric relation of the camera: after transforming from image coordinates to vehicle coordinates, the steering angle is calculated using the Ackermann angle. The second uses a neural network; it does not require the camera's geometric relation, is easy to apply as a steering algorithm, and is the closest of the two to the driving style of a human driver. The proposed controller is a multilayer neural network trained with the Levenberg-Marquardt backpropagation algorithm, which was evaluated as much better than other methods such as conjugate gradient or gradient descent.
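
The first (geometric) steering method can be sketched with pure-pursuit-style Ackermann geometry, assuming the lane target point has already been transformed into vehicle coordinates. The wheelbase value and all names are illustrative, not the paper's.

```python
import math

def ackermann_steer(lateral, ahead, wheelbase=2.7):
    """Steering angle (rad) aiming the vehicle through a target point.

    The circular arc through the origin and (lateral, ahead) has curvature
    kappa = 2*lateral / (lateral**2 + ahead**2); the Ackermann bicycle model
    maps that curvature to a front-wheel angle atan(wheelbase * kappa).
    """
    kappa = 2.0 * lateral / (lateral**2 + ahead**2)
    return math.atan(wheelbase * kappa)
```

A target straight ahead yields zero steering; targets left or right of the heading axis produce symmetric positive or negative angles.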


Curve-Modeled Lane Detection based GPS Lateral Error Correction Enhancement (곡선모델 차선검출 기반의 GPS 횡방향 오차보정 성능향상 기법)

  • Lee, Byung-Hyun;Im, Sung-Hyuck;Heo, Moon-Beom;Jee, Gyu-In
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.2
    • /
    • pp.81-86
    • /
    • 2015
  • GPS position errors were corrected for the guidance of autonomous vehicles. From vision, we can obtain the lateral distance from the lane center and the angle difference between the detected left and right lines. Using a controller that drives these two measurements to zero, a lane-following system can be implemented easily. However, if there is no lane, such as at a crossroad, the guidance system of the autonomous vehicle does not work. In addition, lane detection has problems on curved sections: the lateral distance measurement contains an error because of a modeling mismatch. For these reasons, we propose a GPS error correction filter based on curve-modeled lane detection and evaluate its performance by applying it to an autonomous vehicle at the test site.
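
The correction filter itself is not specified in the abstract, but the idea of estimating a lateral GPS bias from the vision lane measurement can be sketched as a scalar Kalman-style update. All names and noise values below are assumptions for illustration.

```python
def update_lateral_bias(bias, var, gps_lat, vision_lat, meas_var=0.04):
    """One measurement update of the lateral GPS bias estimate.

    The instantaneous bias observation is gps_lat - vision_lat, assuming the
    vision measurement gives the true lateral offset from the lane center.
    """
    observed = gps_lat - vision_lat
    gain = var / (var + meas_var)          # Kalman gain for a scalar state
    bias = bias + gain * (observed - bias)
    var = (1.0 - gain) * var               # posterior variance shrinks
    return bias, var
```

Repeated updates converge toward the true bias, and the corrected GPS position (GPS minus estimated bias) can then bridge lane-less sections such as crossroads.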

Laser Scanner based Static Obstacle Detection Algorithm for Vehicle Localization on Lane Lost Section (차선 유실구간 측위를 위한 레이저 스캐너 기반 고정 장애물 탐지 알고리즘 개발)

  • Seo, Hotae;Park, Sungyoul;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association
    • /
    • v.9 no.3
    • /
    • pp.24-30
    • /
    • 2017
  • This paper presents the development of a laser-scanner-based static obstacle detection algorithm for vehicle localization on lane-lost sections. In urban autonomous driving, vehicle localization based on lane information, GPS, and a digital map is required. However, on actual urban roads, lane data may be unavailable because of traffic jams, intersections, weather conditions, faint markings, and so on. On such lane-lost sections, lane-based localization is limited or impossible. The proposed algorithm determines lane existence from the reliability of the front vision data and can be used on lane-lost sections. For localization, the laser scanner distinguishes static objects through an estimation and fusion process based on the speed information in the radar data. The laser scanner data are then clustered to decide whether an object is a static obstacle such as a fence, pole, curb, or traffic light. The road boundary is extracted, and localization determines the position of the ego vehicle by comparing the detected boundary with the digital map. It is shown that localization using the proposed algorithm can contribute effectively to safe autonomous driving.
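
The clustering step can be illustrated with a simple single-linkage Euclidean grouping of 2-D scanner points. The paper's actual clustering and radar-speed fusion are not specified here; the distance threshold is an assumption.

```python
import math

def cluster(points, eps=0.5):
    """Group 2-D points whose chained pairwise distance stays below eps."""
    clusters = []
    for p in points:
        home = None
        for c in clusters:
            if any(math.dist(p, q) < eps for q in c):
                if home is None:
                    c.append(p)
                    home = c
                else:                # p bridges two clusters: merge them
                    home.extend(c)
                    c.clear()
        if home is None:
            clusters.append([p])
    return [c for c in clusters if c]
```

Each resulting cluster would then be classified by size and shape as a fence, pole, curb, or traffic light before the road boundary is extracted.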

Detection of Preceding Vehicles Based on a Multistage Combination of Edge Features and Horizontal Symmetry (에지특징의 단계적 조합과 수평대칭성에 기반한 선행차량검출)

  • Song, Gwang-Yul;Lee, Joon-Woong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.14 no.7
    • /
    • pp.679-688
    • /
    • 2008
  • This paper presents an algorithm capable of detecting preceding vehicles using a forward-looking camera. Accurate measurement of the contact locations of vehicles with the road surface is a prerequisite for intelligent vehicle technologies based on monocular vision. Relying on multistage processing of edge features relevant to the hypothesis generation of a vehicle, the proposed algorithm creates candidate positions for the left and right boundaries of vehicles and searches the candidates for pairs forming vehicle boundaries by evaluating horizontal symmetry. The proposed algorithm is shown to be successful in experiments on images acquired from a moving vehicle.
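
The horizontal-symmetry evaluation can be sketched as a mirror comparison of an intensity row about the midpoint of a candidate left/right boundary pair. The scoring metric below is illustrative, not the paper's exact measure.

```python
def symmetry_score(row, left, right):
    """Mean absolute mirror difference inside [left, right]; lower is better."""
    mid = (left + right) / 2.0
    diffs = []
    for i in range(left, right + 1):
        j = int(round(2 * mid - i))      # column mirrored about the midpoint
        diffs.append(abs(row[i] - row[j]))
    return sum(diffs) / len(diffs)
```

A candidate pair whose enclosed region scores near zero is a strong vehicle hypothesis, since a rear view of a vehicle is roughly left-right symmetric.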

Vehicle Detection for Adaptive Head-Lamp Control of Night Vision System (적응형 헤드 램프 컨트롤을 위한 야간 차량 인식)

  • Kim, Hyun-Koo;Jung, Ho-Youl;Park, Ju H.
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.6 no.1
    • /
    • pp.8-15
    • /
    • 2011
  • This paper presents an effective method for detecting vehicles ahead of a camera-assisted car during nighttime driving. The proposed method detects vehicles by detecting their headlights and taillights, using image segmentation and clustering techniques. First, to extract spotlights of interest effectively, a pre-processing step based on a camera lens filter and a labeling method is applied to road-scene images. Second, to cluster the detected lamps spatially into vehicles, a grouping process uses light tracking and the localization of vehicle lighting patterns. For evaluation, the method was implemented on a DaVinci 7437 DSP board with a visible-light mono camera and tested on urban and rural roads. In these real-time tests, classification performance was above 89% precision and 94% recall.
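
The grouping stage, pairing lamp blobs into vehicles, can be sketched as follows, assuming blob centroids and areas from the labeling step. Every threshold here is invented for illustration and would need tuning on real imagery.

```python
def pair_lamps(blobs, max_dy=5, min_dx=20, max_dx=120, area_ratio=2.0):
    """Pair lamp blobs (x, y, area) lying on roughly the same image row.

    A pair must have a small vertical offset, a plausible horizontal spacing,
    and similar blob areas to count as the two lamps of one vehicle.
    """
    pairs = []
    used = set()
    for i, (x1, y1, a1) in enumerate(blobs):
        for j, (x2, y2, a2) in enumerate(blobs):
            if j <= i or i in used or j in used:
                continue
            if (abs(y1 - y2) <= max_dy
                    and min_dx <= abs(x1 - x2) <= max_dx
                    and max(a1, a2) / min(a1, a2) <= area_ratio):
                pairs.append((i, j))
                used.update((i, j))
    return pairs
```

Unpaired blobs (e.g. a distant motorcycle lamp or a reflection) are left for the tracking stage to confirm or discard over time.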

Estimating a Range of Lane Departure Allowance based on Road Alignment in an Autonomous Driving Vehicle (자율주행 차량의 도로 평면선형 기반 차로이탈 허용 범위 산정)

  • Kim, Youngmin;Kim, Hyoungsoo
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.15 no.4
    • /
    • pp.81-90
    • /
    • 2016
  • Because an autonomous vehicle (AV) must cope with external road conditions by itself, its perception of the road environment should be better than that of a human driver. A vision sensor, one of the AV sensors, performs lane detection to perceive the road environment for safe steering, which relates to determining the vehicle heading and preventing lane departure. Performance standards for a vision sensor in an ADAS (Advanced Driver Assistance System) focus on 'driver assistance', not on independent perception of the situation, so the performance requirements for a vision sensor in an AV may differ from those in an ADAS. Assuming that an AV keeps its previous steering after a lane detection failure, this study calculated the lane departure distance between an AV following the curved road alignment and one driving straight through a curved section. We analyzed lane departure distance and time with respect to the allowable lane detection malfunction of an AV vision sensor. The results show that an AV encounters a critical lane departure situation if its vision sensor loses lane detection for more than 1 second. Therefore, the performance standards for an AV should cover more severe lane departure situations than those of an ADAS.
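
The departure geometry described above has a closed form: a vehicle that holds a straight heading inside a curve of radius R drifts laterally by R - sqrt(R^2 - s^2) after traveling s = v*t, which is about s^2/(2R) for small s. The numeric values below are illustrative, not the paper's test cases.

```python
import math

def departure(v_mps, t_s, radius_m):
    """Lateral drift of a straight-driving vehicle inside a curve of given radius."""
    s = v_mps * t_s                        # distance traveled while lane is lost
    return radius_m - math.sqrt(radius_m**2 - s**2)
```

For example, at 27.8 m/s (about 100 km/h) on a 460 m radius curve, one second without lane detection already gives roughly 0.84 m of lateral drift, on the order of a vehicle's margin to the lane line, which is consistent with the 1-second criticality reported above.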

Vision-Based Roadway Sign Recognition

  • Jiang, Gang-Yi;Park, Tae-Young;Hong, Suk-Kyo
    • Transactions on Control, Automation and Systems Engineering
    • /
    • v.2 no.1
    • /
    • pp.47-55
    • /
    • 2000
  • In this paper, a vision-based roadway sign detection algorithm for an automated vehicle control system, based on the sign information present on roads, is proposed. First, to detect roadway signs, the color scene image is enhanced under hue invariance. Fuzzy logic is employed to simplify the enhanced color image into a binary image, and the binary image is morphologically filtered. Then, an effective sign-locating algorithm based on the binary rank order transform (BROT) is used to extract signs from the image; this algorithm performs better than those previously presented. Finally, the inner shapes of roadway signs carrying curving-roadway direction information are recognized by neural networks. Experimental results show that the new detection algorithm is simple and robust and performs well on real sign detection, and that the neural networks can correctly recognize the inner shapes of signs even when the shapes are very noisy.
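
One plausible reading of a binary rank-order transform (BROT) is a rank filter on a binary image: a pixel survives only if at least `rank` pixels in its 3x3 neighborhood are set, which suppresses isolated noise while keeping compact sign blobs. This sketch is an assumption, not the paper's exact operator.

```python
def brot(img, rank=5):
    """Rank-order filter on a binary image given as a list of 0/1 rows."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            count = sum(img[ny][nx]                      # set pixels in the
                        for ny in range(max(0, y - 1), min(h, y + 2))
                        for nx in range(max(0, x - 1), min(w, x + 2)))
            out[y][x] = 1 if count >= rank else 0        # 3x3 neighborhood
    return out
```

With rank 5, lone speckles vanish while the interior of a filled sign region is preserved, leaving clean blobs for the shape-recognition stage.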


Vision and Lidar Sensor Fusion for VRU Classification and Tracking in the Urban Environment (카메라-라이다 센서 융합을 통한 VRU 분류 및 추적 알고리즘 개발)

  • Kim, Yujin;Lee, Hojun;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association
    • /
    • v.13 no.4
    • /
    • pp.7-13
    • /
    • 2021
  • This paper presents a vulnerable road user (VRU) classification and tracking algorithm using vision and LiDAR sensor fusion for urban autonomous driving. Classification and tracking of vulnerable road users such as pedestrians, bicycles, and motorcycles are essential for autonomous driving in complex urban environments. In this paper, a real-time image object detector (YOLO) and an object tracking algorithm on the LiDAR point cloud are fused at a high level. The proposed algorithm consists of four parts. First, the object bounding boxes in pixel coordinates obtained from YOLO are transformed into the local coordinates of the subject vehicle using a homography matrix. Second, the LiDAR point cloud is clustered based on Euclidean distance, and the clusters are associated using global nearest neighbor (GNN). In addition, the states of the clusters, including position, heading angle, velocity, and acceleration, are estimated in real time using a geometric model-free approach (GMFA). Finally, each LiDAR track is matched with a vision track using the angle of the transformed vision track and assigned a classification ID. The proposed fusion algorithm is evaluated via real vehicle tests in an urban environment.
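
The first fusion step, mapping a YOLO box into the subject vehicle's frame, reduces to applying a 3x3 homography to the box's bottom-center pixel (the point assumed to lie on the road plane). In practice H comes from calibration; the example matrices below are made up.

```python
def apply_homography(H, u, v):
    """Map pixel (u, v) to local-frame (x, y) via homogeneous coordinates."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w        # perspective divide
```

Because the result is defined up to scale, multiplying H by any nonzero constant leaves the mapped point unchanged, which is why a homography has only eight free parameters.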

A Fast Horizontal line detection algorithm based on edge information (에지 기반 고속 지평선 검출 알고리즘)

  • 나상일;이웅호;서동진;이웅희;정동석
    • Proceedings of the IEEK Conference
    • /
    • 2003.11b
    • /
    • pp.199-202
    • /
    • 2003
  • In research on unmanned aerial vehicles (UAVs), the use of vision sensors has been increasing. The position information of an air vehicle can be calculated by finding the horizon line. In this paper, we propose a vision-based algorithm for finding the horizon line. Experimental results show that the proposed algorithm is faster than an existing algorithm.
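
A minimal edge-based horizon search, in the spirit of this abstract, picks the image row with the strongest total vertical gradient, i.e., the strongest horizontal edge. This simplification (and the level-horizon assumption it implies) is illustrative, not the authors' algorithm.

```python
def horizon_row(img):
    """Index of the row just above the strongest horizontal edge.

    img is a grayscale image stored as a list of rows of equal length.
    """
    best_row, best_score = 0, -1.0
    for y in range(len(img) - 1):
        # Total absolute intensity change between row y and the row below it.
        score = sum(abs(img[y + 1][x] - img[y][x]) for x in range(len(img[0])))
        if score > best_score:
            best_row, best_score = y, score
    return best_row
```

The vertical position (and, in a fuller version, the tilt) of this line gives the attitude cue the UAV needs.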
