• Title/Summary/Keyword: Vision-based driver assistance

Search Results: 20

Unusual Motion Detection for Vision-Based Driver Assistance

  • Fu, Li-Hua;Wu, Wei-Dong;Zhang, Yu;Klette, Reinhard
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.15 no.1
    • /
    • pp.27-34
    • /
    • 2015
  • For a vision-based driver assistance system, unusual motion detection is one of the important means of preventing accidents. In this paper, we propose a real-time unusual-motion-detection model comprising two stages: salient-region detection and unusual-motion detection. In the salient-region-detection stage, we present an improved temporal attention model. In the unusual-motion-detection stage, three factors (speed, motion direction, and distance) are extracted for detecting unusual motion. A series of experiments demonstrates the effectiveness of the proposed method and shows the feasibility of the model.
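The three motion factors named in the abstract can be sketched as a simple rule-based check. This is only an illustration: the thresholds and function name below are assumptions, not values from the paper.

```python
# Illustrative rule-based unusual-motion check (hypothetical thresholds).
# A tracked object is flagged when its speed, heading change, or closing
# distance is out of bounds, mirroring the paper's three factors.

def is_unusual(speed_mps, heading_change_deg, distance_m,
               max_speed=30.0, max_heading_change=45.0, min_distance=5.0):
    """Return True if any of the three motion factors is out of bounds."""
    return (speed_mps > max_speed
            or abs(heading_change_deg) > max_heading_change
            or distance_m < min_distance)

print(is_unusual(35.0, 0.0, 20.0))   # speeding object -> True
print(is_unusual(10.0, 5.0, 20.0))   # ordinary motion -> False
```

In a real system these thresholds would likely depend on ego-vehicle speed and scene context rather than being fixed constants.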

Development of a Cause Analysis Program to Risky Driving with Vision System (Vision 시스템을 이용한 위험운전 원인 분석 프로그램 개발에 관한 연구)

  • Oh, Ju-Taek;Lee, Sang-Yong
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.8 no.6
    • /
    • pp.149-161
    • /
    • 2009
  • Electronic control systems for vehicles are being rapidly developed to balance driver safety with legal and social demands. Driver assistance systems have come into practical use thanks to falling hardware costs and highly efficient sensors. This study developed a lane and vehicle detection program using a CCD camera. A vision-based risky driving analysis program was then built by combining a risky-driving detection algorithm from a previous study with the lane and vehicle detection program proposed here. Using vehicle motion data and lane data, the program is useful for efficiently analyzing the cause and effect of risky driving behavior.


Vision-sensor-based Drivable Area Detection Technique for Environments with Changes in Road Elevation and Vegetation (도로의 높낮이 변화와 초목이 존재하는 환경에서의 비전 센서 기반)

  • Lee, Sangjae;Hyun, Jongkil;Kwon, Yeon Soo;Shim, Jae Hoon;Moon, Byungin
    • Journal of Sensor Science and Technology
    • /
    • v.28 no.2
    • /
    • pp.94-100
    • /
    • 2019
  • Drivable area detection is a major task in advanced driver assistance systems, and several studies have proposed vision-sensor-based approaches to it. However, conventional methods that use vision sensors are not suitable for environments with changes in road elevation. In addition, when the boundary between the road and vegetation is not clear, vegetation may be misjudged as drivable area. Therefore, this study proposes an accurate method for detecting drivable areas in environments where road elevation changes and vegetation exists. Experimental results show that, compared to the conventional method, the proposed method improves the average accuracy and recall of drivable area detection on the KITTI vision benchmark suite by 3.42%p and 8.37%p, respectively. When the proposed vegetation-area removal method is also applied, average accuracy and recall improve further, by 6.43%p and 9.68%p, respectively.
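The reported gains are in percentage points (%p). A minimal sketch of how pixel-level accuracy and recall, and their %p differences, are computed; the tiny masks below are synthetic examples, not KITTI data:

```python
# Accuracy/recall over flattened binary drivable-area masks, and the
# improvement expressed in percentage points (%p).

def accuracy_recall(pred, truth):
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    tn = sum(1 for p, t in zip(pred, truth) if not p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    acc = (tp + tn) / len(truth)
    rec = tp / (tp + fn)
    return acc, rec

truth = [1, 1, 1, 1, 0, 0, 0, 0]
base  = [1, 1, 0, 0, 0, 0, 1, 0]   # baseline prediction (synthetic)
ours  = [1, 1, 1, 0, 0, 0, 0, 0]   # improved prediction (synthetic)

acc_b, rec_b = accuracy_recall(base, truth)
acc_o, rec_o = accuracy_recall(ours, truth)
print(f"accuracy gain: {(acc_o - acc_b) * 100:.1f}%p, "
      f"recall gain: {(rec_o - rec_b) * 100:.1f}%p")
```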

Development of a Vision-based Lane Change Assistance System for Safe Driving (안전주행을 위한 비전 기반의 차선변경보조시스템 개발)

  • Sung, Jun-Yong;Han, Min-Hong;Ro, Kwang-Hyun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.11 no.5 s.43
    • /
    • pp.329-336
    • /
    • 2006
  • This paper describes a lane change assistance system that helps drivers change lanes safely: it detects vehicles approaching from the rear side using a computer vision algorithm and notifies the driver whether a lane change is safe. When the driver attempts a lane change, the proposed system detects vehicles and keeps track of them. After the side lane lines are detected, a region of interest for vehicle detection is determined, and an optical flow technique is applied to detect vehicles. Experiments with the proposed algorithm and system showed a vehicle detection rate of 91%, suggesting the embedded system could be applied to a commercial lane change assistance system in the near future.

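The paper's optical-flow step is not published as code. As a stand-in, here is a toy block-matching motion estimate in pure NumPy that captures the underlying idea of finding a patch's displacement between consecutive frames:

```python
# Toy block-matching motion estimate (an illustrative sketch, not the
# paper's algorithm): exhaustively search a window in the current frame
# for the best match to a block from the previous frame.
import numpy as np

def block_motion(prev, curr, y, x, size=4, search=3):
    """Find the (dy, dx) shift of a size x size block by exhaustive search."""
    block = prev[y:y+size, x:x+size]
    best, best_shift = float("inf"), (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + size > curr.shape[0] or xx + size > curr.shape[1]:
                continue
            cand = curr[yy:yy+size, xx:xx+size]
            sad = np.abs(block.astype(int) - cand.astype(int)).sum()
            if sad < best:
                best, best_shift = sad, (dy, dx)
    return best_shift

prev = np.zeros((16, 16), dtype=np.uint8)
prev[4:8, 4:8] = 255                      # bright "vehicle" patch
curr = np.roll(prev, shift=2, axis=1)     # patch moves 2 px to the right
print(block_motion(prev, curr, 4, 4))     # → (0, 2)
```

A production system would more likely use a dense or pyramidal optical-flow method; this sketch only illustrates the displacement-search idea.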

Lane Detection and Tracking Using Classification in Image Sequences

  • Lim, Sungsoo;Lee, Daeho;Park, Youngtae
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.12
    • /
    • pp.4489-4501
    • /
    • 2014
  • We propose a novel lane detection method based on classification in image sequences. Both structural and statistical features of each extracted bright shape are fed to a neural network to find correct lane marks. The features used in this paper are shown to have strong discriminating power for locating correct traffic lanes. The traffic lanes detected in the current frame are also used to estimate the lanes if detection fails in the next frame. The proposed method is fast enough for real-time systems, with an average processing time of less than 2 ms. A local illumination compensation scheme also allows robust lane detection at nighttime. This method can therefore be widely used in intelligent transportation systems such as driver assistance, lane change assistance, lane departure warning, and autonomous vehicles.
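Structural and statistical features of a bright blob, of the kind a lane-mark classifier could consume, might look like the following sketch. The specific features and names are illustrative assumptions, not the paper's exact feature set:

```python
# Illustrative blob features for lane-mark classification: structural
# (shape) and statistical (intensity) descriptors of a binary blob.
import numpy as np

def blob_features(mask, gray):
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    return {
        "area": len(ys),                         # number of blob pixels
        "elongation": max(h, w) / min(h, w),     # structural: shape ratio
        "fill": len(ys) / (h * w),               # structural: bbox fill ratio
        "mean_intensity": gray[ys, xs].mean(),   # statistical: brightness
    }

mask = np.zeros((10, 10), dtype=bool)
mask[2:4, 1:9] = True                            # a thin horizontal stripe
gray = np.full((10, 10), 200.0)
f = blob_features(mask, gray)
print(f["elongation"], f["fill"])                # → 4.0 1.0
```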

Night-time Vehicle Detection Based On Multi-class SVM (다중-클래스 SVM 기반 야간 차량 검출)

  • Lim, Hyojin;Lee, Heeyong;Park, Ju H.;Jung, Ho-Youl
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.10 no.5
    • /
    • pp.325-333
    • /
    • 2015
  • Vision-based night-time vehicle detection has been an emerging research field for automatic head-lamp control and various other advanced driver assistance systems (ADAS). In this paper, we propose a night-time vehicle detection method based on a multi-class support vector machine (SVM) that consists of thresholding, labeling, feature extraction, and multi-class SVM classification. Vehicle-light candidate blobs are extracted by local-mean-based thresholding followed by a labeling process. Seven geometric and stochastic features are extracted from each candidate in the feature extraction step, and each candidate blob is then classified as a vehicle light or not by the multi-class SVM. Four multi-class SVM classifiers, one-against-all (OAA), one-against-one (OAO), top-down tree-structured, and bottom-up tree-structured, are implemented and evaluated in terms of vehicle detection performance. Through simulations on road video sequences, we show that the top-down and bottom-up tree-structured SVMs perform relatively better than the others.
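The one-against-one (OAO) scheme mentioned above combines pairwise binary SVMs by majority vote. A minimal sketch of the voting rule, with hard-coded decisions standing in for trained classifiers (the three class names are hypothetical):

```python
# One-against-one (OAO) voting: each pairwise binary classifier votes for
# one of its two classes; the class with the most votes wins.
from itertools import combinations

def oao_predict(classes, pairwise_decision):
    """pairwise_decision(a, b) returns the winning class of the (a, b) SVM."""
    votes = {c: 0 for c in classes}
    for a, b in combinations(classes, 2):
        votes[pairwise_decision(a, b)] += 1
    return max(votes, key=votes.get)

# Hypothetical 3-class light classifier: headlamp / tail-lamp / non-vehicle.
decisions = {("head", "tail"): "head",
             ("head", "other"): "head",
             ("tail", "other"): "tail"}
print(oao_predict(["head", "tail", "other"], lambda a, b: decisions[(a, b)]))
```

OAO trains k(k−1)/2 binary classifiers for k classes, versus k for OAA; the tree-structured variants evaluated in the paper reduce the number of decisions per sample.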

Estimating a Range of Lane Departure Allowance based on Road Alignment in an Autonomous Driving Vehicle (자율주행 차량의 도로 평면선형 기반 차로이탈 허용 범위 산정)

  • Kim, Youngmin;Kim, Hyoungsoo
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.15 no.4
    • /
    • pp.81-90
    • /
    • 2016
  • As an autonomous vehicle (AV) must cope with external road conditions by itself, its perception of the road environment should be better than that of a human driver. A vision sensor, one of the AV's sensors, performs lane detection to support safe steering, which relates to defining the vehicle heading and preventing lane departure. Performance standards for a vision sensor in an ADAS (Advanced Driver Assistance System) focus on 'driver assistance', not on independent situational perception, so the performance requirements for a vision sensor in an AV may differ from those in an ADAS. Assuming that an AV keeps its previous steering after a lane detection failure, this study calculated the lane departure distance in a curved section between an AV following the curved road alignment and one continuing straight ahead. We analyzed lane departure distance and time with respect to the allowable lane-detection malfunction of an AV vision sensor. The results show that an AV encounters a critical lane departure situation if the vision sensor loses lane detection for more than 1 second. Therefore, performance standards for an AV should cover more severe lane departure situations than those for an ADAS.
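The underlying geometry can be reconstructed as follows (our sketch, not the paper's published formula): a vehicle holding a straight course while the lane follows a circular arc of radius R departs laterally by d(t) = R − sqrt(R² − (v·t)²):

```python
# Lateral departure of a straight-driving vehicle from a circular lane
# (illustrative reconstruction of the geometry; numbers are examples).
import math

def lateral_departure(radius_m, speed_mps, t_s):
    s = speed_mps * t_s              # distance travelled straight ahead
    return radius_m - math.sqrt(radius_m**2 - s**2)

# Example: ~100 km/h (27.8 m/s) on a 300 m radius curve, 1 s without
# lane detection.
d = lateral_departure(300.0, 27.8, 1.0)
print(f"{d:.2f} m")
```

Even one second without lane detection yields a departure on the order of a meter in this example, which is consistent with the paper's conclusion that losses longer than 1 second are critical.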

A Study on Traffic Light Detection (TLD) as an Advanced Driver Assistance System (ADAS) for Elderly Drivers

  • Roslan, Zhafri Hariz;Cho, Myeon-gyun
    • International Journal of Contents
    • /
    • v.14 no.2
    • /
    • pp.24-29
    • /
    • 2018
  • In this paper, we propose an efficient traffic light detection (TLD) method as an advanced driver assistance system (ADAS) for elderly drivers. Since the increase in traffic accidents associated with the aging population and the growing number of elderly drivers is a serious social problem, providing ADAS for older drivers via TLD is becoming an essential public service. We therefore propose an economical TLD method that could be implemented with a simple black box (built-in camera) and a smartphone in the near future. The system uses a color pre-processing method to differentiate between stop and go signals; a mathematical morphology algorithm further enhances detection, and a circular Hough transform is used to locate the traffic light correctly. Simulations of the computer vision and image processing pipeline, implemented in Matlab, show that the proposed TLD method can detect stop and go signals not only in daytime but also at night. In the future, it should be possible to reduce the traffic accident rate by recognizing traffic signals and giving elderly drivers voice guidance.
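The color pre-processing step that separates stop from go signals can be sketched as a hue-range check. The hue bands below (on the 0–360° hue circle) are illustrative assumptions, not the paper's calibrated values:

```python
# Illustrative hue-based stop/go classification of a traffic-light blob.
def classify_signal(hue_deg):
    h = hue_deg % 360
    if h < 20 or h >= 340:        # red band  -> stop
        return "stop"
    if 90 <= h <= 170:            # green band -> go
        return "go"
    return "unknown"

print(classify_signal(350), classify_signal(120))  # → stop go
```

In the full pipeline this would run after morphological cleanup, with the circular Hough transform confirming that the colored blob is actually a lamp.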

Evaluation of Accident Prevention Performance of Vision and Radar Sensor for Major Accident Scenarios in Intersection (교차로 주요 사고 시나리오에 대한 비전 센서와 레이더 센서의 사고 예방성능 평가)

  • Kim, Yeeun;Tak, Sehyun;Kim, Jeongyun;Yeo, Hwasoo
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.16 no.5
    • /
    • pp.96-108
    • /
    • 2017
  • The collision warning and avoidance system (CWAS) is one of the representative Advanced Driver Assistance Systems (ADAS), significantly improving vehicle safety and mitigating accident severity. However, current CWAS work has mainly focused on preventing forward collisions in uninterrupted flow; prevention performance near intersections and in other accident scenarios has not been extensively studied. In this paper, the safety performance of vision-sensor (VS) and radar-sensor (RS) based collision warning systems is evaluated near intersections using data from the Naturalistic Driving Study (NDS) of the Second Strategic Highway Research Program (SHRP2). Based on the VS and RS data, we derived sixteen new vehicle-to-vehicle accident scenarios near intersections and evaluated the detection performance of VS and RS within them. The results show that VS and RS can prevent accidents only in limited situations because of their restricted fields of view: at an accident prevention rate of 0.7, VS and RS prevent accidents in five and four scenarios, respectively. Efficient accident prevention requires a system that can detect vehicle movement at longer range than VS and RS, as well as an algorithm that can predict the future movement of other vehicles. To further improve the safety performance of CWAS near intersections, a communication-based collision warning system, such as one integrating data from infrastructure and in-vehicle sensors, should be developed.
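The scenario-level evaluation at the 0.7 prevention-rate threshold can be sketched as follows; the per-scenario rates below are made-up placeholders, not SHRP2 results:

```python
# Count scenarios whose accident prevention rate reaches the threshold
# used in the paper (0.7). Rates here are illustrative placeholders.
def prevented_scenarios(rates, threshold=0.7):
    return sum(1 for r in rates if r >= threshold)

vision_rates = [0.9, 0.8, 0.75, 0.7, 0.72, 0.4, 0.3, 0.1]
print(prevented_scenarios(vision_rates))  # → 5
```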

Vision-based Vehicle Detection and Inter-Vehicle Distance Estimation (영상 기반의 차량 검출 및 차간 거리 추정 방법)

  • Kim, Gi-Seok;Cho, Jae-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.49 no.3
    • /
    • pp.1-9
    • /
    • 2012
  • In this paper, we propose a vision-based robust vehicle detection and inter-vehicle distance estimation algorithm for driver assistance systems. We use Haar-like features of vehicle rear shadows, as well as edge features, to detect vehicles; the additional edge features greatly reduce false-positive errors in vehicle detection. After analyzing two conventional inter-vehicle distance estimation methods, location-based and vehicle-width-based, we propose an improved inter-vehicle distance estimation algorithm that combines the advantages of both. Several experimental results show the effectiveness of the proposed method.
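The vehicle-width-based estimate follows the pinhole camera model, D = f·W/w, where f is the focal length in pixels, W an assumed real vehicle width, and w the detected width in pixels. A minimal sketch with illustrative numbers (not values from the paper):

```python
# Width-based inter-vehicle distance from the pinhole model: D = f * W / w.
def inter_vehicle_distance(focal_px, real_width_m, pixel_width):
    return focal_px * real_width_m / pixel_width

print(inter_vehicle_distance(800.0, 1.8, 48.0))  # → 30.0 (meters)
```

The location-based alternative instead maps the bounding box's image row to road distance via the camera's height and pitch; combining both, as the paper proposes, hedges against errors in the assumed vehicle width.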