• Title/Summary/Keyword: vision-based vehicle detection


Classification of Objects using CNN-Based Vision and Lidar Fusion in Autonomous Vehicle Environment

  • G. Komali; A. Sri Nagesh
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.11
    • /
    • pp.67-72
    • /
    • 2023
  • In the past decade, Autonomous Vehicle Systems (AVS) have advanced at an exponential rate, particularly due to improvements in artificial intelligence, which have had a significant impact on road safety and the future of transportation systems as well as on society. The real-time fusion of light detection and ranging (LiDAR) and camera data is known to be a crucial process in many applications, such as autonomous driving, industrial automation and robotics. Especially in autonomous vehicles, efficient fusion of data from these two sensor types is important for estimating the depth of objects as well as classifying objects at short and long distances. This paper presents classification of objects using CNN-based vision and LiDAR fusion in an autonomous vehicle environment. The method is based on a convolutional neural network (CNN) and image upsampling theory. By upsampling the LiDAR point cloud and converting it into pixel-level depth information, the depth channel is combined with the Red-Green-Blue (RGB) data and fed into a deep CNN. The proposed method can obtain an informative feature representation for object classification in an autonomous vehicle environment using the integrated vision and LiDAR data, and is adopted to guarantee both classification accuracy and minimal loss. Experimental results show the effectiveness and efficiency of the presented approach for object classification.
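
The fusion step the abstract describes can be illustrated with a minimal NumPy sketch: densify a sparse LiDAR depth image, then stack it with the RGB channels to form a 4-channel CNN input. The nearest-valid-value fill below is a crude stand-in for the paper's upsampling (real systems use bilateral or learned upsampling), and all sizes are illustrative.

```python
import numpy as np

def upsample_depth(sparse_depth):
    """Fill zero (missing) pixels with the nearest valid depth along each row.

    A crude stand-in for the paper's point-cloud upsampling step; shown
    only to make the RGB-depth fusion concrete.
    """
    dense = sparse_depth.copy()
    h, w = dense.shape
    for i in range(h):
        last = 0.0
        for j in range(w):
            if dense[i, j] > 0:
                last = dense[i, j]
            else:
                dense[i, j] = last
    return dense

def fuse_rgb_depth(rgb, sparse_depth):
    """Stack RGB (H, W, 3) with a densified depth channel -> (H, W, 4)."""
    dense = upsample_depth(sparse_depth)
    # Normalize depth to [0, 1] so it is on the same scale as the image.
    if dense.max() > 0:
        dense = dense / dense.max()
    return np.dstack([rgb, dense])

# Toy example: a 4x4 image with a few sparse LiDAR returns.
rgb = np.random.rand(4, 4, 3)
sparse = np.zeros((4, 4))
sparse[1, 1] = 10.0
sparse[2, 3] = 20.0
fused = fuse_rgb_depth(rgb, sparse)
print(fused.shape)  # (4, 4, 4) -- ready to feed a 4-channel CNN
```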

Radar, Vision, Lidar Fusion-based Environment Sensor Fault Detection Algorithm for Automated Vehicles (레이더, 비전, 라이더 융합 기반 자율주행 환경 인지 센서 고장 진단)

  • Choi, Seungrhi;Jeong, Yonghwan;Lee, Myungsu;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association
    • /
    • v.9 no.4
    • /
    • pp.32-37
    • /
    • 2017
  • For automated vehicles, the integrity and fault tolerance of environment perception sensors have been an important issue. This paper presents a radar, vision and lidar (laser radar) fusion-based fault detection algorithm for autonomous vehicles. The characteristics of each sensor are shown, and the error in the states of moving targets estimated by each sensor is analyzed to present a method for detecting environment sensor faults from the characteristics of this error. Each estimate of a moving target is produced by the EKF/IMM method. To guarantee the reliability of the fault detection algorithm, driving data from several types of road are analyzed.
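
The core idea, comparing per-sensor estimates of the same target and flagging the outlier, can be sketched as follows. This is a deliberate simplification of the paper's residual analysis: the consensus here is a plain median of per-sensor positions (which in the paper would come from EKF/IMM tracking), and the threshold is an illustrative value, not one from the paper.

```python
import numpy as np

def detect_faulty_sensor(estimates, threshold=2.0):
    """Flag sensors whose target-position estimate deviates from the
    consensus (median) of all sensors by more than `threshold` metres.

    `estimates` maps sensor name -> (x, y) estimated position of the same
    moving target (e.g. the output of per-sensor EKF/IMM tracking).
    """
    pts = np.array(list(estimates.values()))
    consensus = np.median(pts, axis=0)
    faults = []
    for name, pos in estimates.items():
        residual = np.linalg.norm(np.array(pos) - consensus)
        if residual > threshold:
            faults.append(name)
    return faults

# Radar and lidar agree on the target; the vision estimate has drifted.
estimates = {"radar": (10.0, 1.0), "vision": (10.2, 6.0), "lidar": (9.9, 1.1)}
print(detect_faulty_sensor(estimates))  # ['vision']
```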

Vision-Based Indoor Localization Using Artificial Landmarks and Natural Features on the Ceiling with Optical Flow and a Kalman Filter

  • Rusdinar, Angga;Kim, Sungshin
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.13 no.2
    • /
    • pp.133-139
    • /
    • 2013
  • This paper proposes a vision-based indoor localization method for autonomous vehicles. A single upward-facing digital camera was mounted on an autonomous vehicle and used as a vision sensor to identify artificial landmarks and natural corner features on the ceiling. An interest point detector was used to find the natural features. Using an optical flow detection algorithm, information about the vehicle's direction and translation was obtained and used to track the vehicle's movements. Random noise caused by uneven lighting disrupted the calculation of the vehicle translation; thus, a Kalman filter was used to estimate the vehicle's position. These algorithms were tested on a vehicle in a real environment. The image processing method recognized the landmarks precisely, while the Kalman filter algorithm estimated the vehicle's position accurately. The experimental results confirmed that the proposed approaches can be implemented in practical situations.
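
The role of the Kalman filter here, smoothing noisy optical-flow translation into a stable position estimate, can be shown with a minimal one-dimensional constant-velocity filter. The noise variances below are illustrative defaults, not the paper's tuned values.

```python
import numpy as np

class PositionKalman:
    """1-D constant-velocity Kalman filter that smooths noisy translation
    measurements (e.g. optical-flow output corrupted by uneven lighting)."""

    def __init__(self, q=0.01, r=1.0):
        self.x = np.zeros(2)            # state: [position, velocity]
        self.P = np.eye(2)              # state covariance
        self.F = np.array([[1.0, 1.0],  # constant-velocity model, dt = 1
                           [0.0, 1.0]])
        self.H = np.array([[1.0, 0.0]])  # we measure position only
        self.Q = q * np.eye(2)           # process noise
        self.R = np.array([[r]])         # measurement noise

    def update(self, z):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Correct with the noisy measurement z.
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + (K @ (np.array([z]) - self.H @ self.x)).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]

kf = PositionKalman()
# Vehicle moving ~1 unit/step; measurements corrupted by noise.
noisy = [0.9, 2.3, 2.8, 4.1, 4.9, 6.2]
smoothed = [kf.update(z) for z in noisy]
print(smoothed[-1])  # close to the true position of 6.0
```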

Investigation on the Real-Time Environment Recognition System Based on Stereo Vision for Moving Object (스테레오 비전 기반의 이동객체용 실시간 환경 인식 시스템)

  • Lee, Chung-Hee;Lim, Young-Chul;Kwon, Soon;Lee, Jong-Hun
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.3 no.3
    • /
    • pp.143-150
    • /
    • 2008
  • In this paper, we investigate a real-time environment recognition system based on stereo vision for moving objects. The system consists of stereo matching, obstacle detection and distance estimation. In the stereo matching part, depth maps are obtained from real road images captured by an adjustable-baseline stereo vision system using the belief propagation (BP) algorithm. In the detection part, various obstacles are detected under real road conditions using only the depth map, with both the v-disparity and column detection methods. Finally, in the estimation part, asymmetric parabola fitting with normalized cross-correlation (NCC) improves the distance estimation of detected obstacles. This stereo vision system can be applied to many applications such as unmanned vehicles and robots.
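
The v-disparity representation the abstract mentions is simple to build: for each image row, histogram the disparity values. The road surface then appears as a slanted line and vertical obstacles as compact streaks at constant disparity, which is what makes obstacle/road separation possible. A minimal sketch, with toy data in place of a real BP disparity map:

```python
import numpy as np

def v_disparity(disparity_map, max_disp=64):
    """Build a v-disparity histogram: for each image row (v), count how
    many pixels take each integer disparity value."""
    h, _ = disparity_map.shape
    hist = np.zeros((h, max_disp), dtype=np.int32)
    for v in range(h):
        for d in disparity_map[v]:
            d = int(d)
            if 0 <= d < max_disp:
                hist[v, d] += 1
    return hist

# Toy disparity map: road disparity grows toward the bottom of the image,
# while one column is an obstacle at constant disparity 20.
disp = np.zeros((8, 8), dtype=np.int32)
for v in range(8):
    disp[v, :] = v * 2        # road surface
disp[2:6, 4] = 20             # obstacle spanning several rows
hist = v_disparity(disp)
print(hist[3, 20], hist[3, 6])  # obstacle bin vs road bin at row 3
```

In the histogram, the road contributes a long diagonal of large counts, while the obstacle contributes entries at disparity 20 across consecutive rows; thresholding columns of the histogram recovers obstacle candidates.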


Real-Time Vehicle Detection in Traffic Scenes using Multiple Local Region Information (국부 다중 영역 정보를 이용한 교통 영상에서의 실시간 차량 검지 기법)

  • 이대호;박영태
    • Proceedings of the IEEK Conference
    • /
    • 2000.06d
    • /
    • pp.163-166
    • /
    • 2000
  • A real-time traffic detection scheme based on computer vision enables efficient traffic control using automatically computed traffic information, as well as obstacle detection for moving automobiles. In a traffic detection system, traffic information is extracted by segmenting the vehicle region from road images. In this paper, we propose an improved segmentation of vehicles from road images using multiple local region information. Because the multiple local regions overlapping in the same lane are processed sequentially from the smallest, traffic detection errors can be corrected.


A Vision-Based Collision Warning System by Surrounding Vehicles Detection

  • Wu, Bing-Fei;Chen, Ying-Han;Kao, Chih-Chun;Li, Yen-Feng;Chen, Chao-Jung
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.6 no.4
    • /
    • pp.1203-1222
    • /
    • 2012
  • To provide active notification and enhance drivers' awareness of their surroundings, a vision-based collision warning system that detects and monitors surrounding vehicles is proposed in this paper. The main objective is to prevent possible vehicle collisions by monitoring the status of surrounding vehicles, including the distance to the other vehicles in front, behind, and to the left and right sides. The proposed system collects and integrates this information to provide advisory warnings to drivers. To offer correct notifications, an algorithm that detects vehicles based on edge and morphological features is also presented. The proposed system has been implemented on embedded systems and evaluated on real roads in various lighting and weather conditions. The experimental results indicate that the vehicle detection ratio was higher than 97% in the daytime, which is appropriate for real road applications.
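
The edge-plus-morphology detection pipeline can be sketched in a few lines: extract strong vertical edges (vehicles show pronounced left/right boundaries), dilate them to merge edge fragments into blobs, and take each blob's bounding box as a vehicle candidate. This is a generic illustration of the technique, not the paper's exact operator chain; the dilation is written with array shifts so only NumPy is needed (a real system would use OpenCV or SciPy morphology).

```python
import numpy as np

def edge_map(gray, thresh=0.5):
    """Horizontal-gradient edge map: vehicles show strong vertical edges."""
    grad = np.abs(np.diff(gray.astype(float), axis=1))
    edges = np.zeros_like(gray, dtype=bool)
    edges[:, 1:] = grad > thresh
    return edges

def dilate(mask, iterations=1):
    """Binary dilation with a 3x3 square structuring element."""
    out = mask.copy()
    for _ in range(iterations):
        padded = np.pad(out, 1)
        acc = np.zeros_like(out)
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                acc |= padded[1 + di:1 + di + out.shape[0],
                              1 + dj:1 + dj + out.shape[1]]
        out = acc
    return out

# Toy frame: a bright 'vehicle' block on a dark road.
frame = np.zeros((10, 10))
frame[3:7, 3:7] = 1.0
blob = dilate(edge_map(frame))
rows, cols = np.nonzero(blob)
print(rows.min(), rows.max(), cols.min(), cols.max())  # candidate bounding box
```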

Preceding Vehicle Detection and Tracking with Motion Estimation by Radar-vision Sensor Fusion (레이더와 비전센서 융합기반의 움직임추정을 이용한 전방차량 검출 및 추적)

  • Jang, Jaehwan;Kim, Gyeonghwan
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.49 no.12
    • /
    • pp.265-274
    • /
    • 2012
  • In this paper, we propose a method for preceding vehicle detection and tracking with motion estimation by radar-vision sensor fusion. The proposed motion estimation not only corrects the inaccurate lateral position error observed on a radar target, but also enables adaptive detection and tracking of a preceding vehicle by compensating for changes in the geometric relation between the ego-vehicle and the ground during driving. Furthermore, the feature-based motion estimation, employed to lessen the computational burden, reduces the number of invocations of the vehicle validation procedure. Experimental results prove that the correction by the proposed motion estimation improves vehicle detection performance and keeps the tracking accurate, with high temporal consistency, under various road conditions.
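
The lateral correction the abstract describes exploits complementary strengths: radar measures range well but lateral position poorly, while a camera gives an accurate bearing. A minimal sketch of that idea, replacing the radar target's lateral coordinate with the bearing implied by the vision bounding-box centre. The pinhole camera model, image width and field of view are illustrative assumptions, not values from the paper.

```python
import numpy as np

def correct_lateral(radar_xy, vision_bbox, img_width=640, hfov_deg=40.0):
    """Keep the radar range but recompute the lateral coordinate from the
    camera bearing of the detected vehicle's bounding box.

    radar_xy:    (x, y) with x longitudinal range, y lateral offset (m).
    vision_bbox: (left, top, right, bottom) in pixels.
    """
    x, _ = radar_xy
    u = (vision_bbox[0] + vision_bbox[2]) / 2.0          # bbox centre column
    # Focal length (pixels) from the assumed horizontal field of view.
    f = (img_width / 2.0) / np.tan(np.radians(hfov_deg / 2.0))
    bearing = np.arctan((u - img_width / 2.0) / f)
    return (x, x * np.tan(bearing))                       # corrected (x, y)

# Radar reports the target 2 m to the left, but the camera sees it dead
# ahead (bbox centred on the image centre line): trust the camera bearing.
corrected = correct_lateral((30.0, -2.0), (300, 200, 340, 260))
print(corrected)  # lateral component pulled to ~0
```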

Road marking classification method based on intensity of 2D Laser Scanner (신호세기를 이용한 2차원 레이저 스캐너 기반 노면표시 분류 기법)

  • Park, Seong-Hyeon;Choi, Jeong-hee;Park, Yong-Wan
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.11 no.5
    • /
    • pp.313-323
    • /
    • 2016
  • With the development of autonomous vehicles, there has been active research on advanced driver assistance systems for road marking detection using vision sensors and 3D laser scanners. However, vision sensors have the weakness that detection is difficult under severe illumination variance, such as at night, inside a tunnel or in a shaded area, and processing time is long because of the large amount of data from both vision sensors and 3D laser scanners. Accordingly, this paper proposes a road marking detection and classification method using a single 2D laser scanner. The method detects and classifies road markings based on accumulated distance and intensity data acquired through the 2D laser scanner. Experiments using a real autonomous vehicle in a real environment showed that calculation time decreased in comparison with the 3D laser scanner-based method, demonstrating the possibility of road marking classification using a single 2D laser scanner.
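
The intensity cue works because paint returns much higher reflective intensity than asphalt, so a marking's type can be read off the pattern of bright returns accumulated along the driving direction. A toy sketch of that idea (threshold, labels and profiles are illustrative, not the paper's values):

```python
import numpy as np

def classify_marking(intensity_profile, thresh=150):
    """Classify a road marking as 'solid', 'dashed' or 'none' from laser
    intensity returns accumulated along the driving direction.

    A solid line is bright everywhere; a dashed line alternates between
    bright paint and dark asphalt gaps.
    """
    bright = np.asarray(intensity_profile) > thresh
    if bright.all():
        return "solid"
    # Count bright<->dark transitions; a dashed line has several gaps.
    transitions = np.count_nonzero(np.diff(bright.astype(int)))
    return "dashed" if transitions >= 2 else "none"

solid = [200, 210, 205, 198, 202, 207]
dashed = [200, 205, 40, 35, 210, 45]
print(classify_marking(solid), classify_marking(dashed))  # solid dashed
```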

Fast On-Road Vehicle Detection Using Reduced Multivariate Polynomial Classifier (축소 다변수 다항식 분류기를 이용한 고속 차량 검출 방법)

  • Kim, Joong-Rock;Yu, Sun-Jin;Toh, Kar-Ann;Kim, Do-Hoon;Lee, Sang-Youn
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.8A
    • /
    • pp.639-647
    • /
    • 2012
  • Vision-based on-road vehicle detection is one of the key techniques in automotive driver assistance systems. However, due to the huge within-class variability in vehicle appearance and environmental changes, it remains a challenging task to develop an accurate and reliable detection system. In general, a vehicle detection system consists of two steps. The candidate locations of vehicles are found in the Hypothesis Generation (HG) step, and the detected locations in the HG step are verified in the Hypothesis Verification (HV) step. Since the final decision is made in the HV step, the HV step is crucial for accurate detection. In this paper, we propose using a reduced multivariate polynomial pattern classifier (RM) for the HV step. Our experimental results show that the RM classifier outperforms the well-known Support Vector Machine (SVM) classifier, particularly in terms of the fast decision speed, which is suitable for real-time implementation.
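
The speed advantage claimed for the RM classifier comes from its form: expand the features with a small set of polynomial terms, fit the weights by regularized least squares, and decide at test time with a single matrix product. The sketch below uses a simplified second-order expansion and synthetic two-feature data; the exact reduced term set and training procedure of the paper's RM model differ.

```python
import numpy as np

def expand(X):
    """Simplified second-order expansion: [1, x, x^2, sum(x), sum(x)^2].

    A stand-in for the reduced multivariate polynomial (RM) expansion,
    which keeps far fewer terms than a full multinomial.
    """
    s = X.sum(axis=1, keepdims=True)
    return np.hstack([np.ones((X.shape[0], 1)), X, X**2, s, s**2])

def train_rm(X, y, reg=1e-3):
    """Regularized least squares on the expanded features."""
    P = expand(X)
    A = P.T @ P + reg * np.eye(P.shape[1])
    return np.linalg.solve(A, P.T @ y)

def predict_rm(w, X):
    # Test-time decision is just one matrix product plus a threshold,
    # which is what makes this classifier family fast.
    return (expand(X) @ w > 0.5).astype(int)

# Toy 'vehicle vs non-vehicle' HV-step features (synthetic, 2-D).
rng = np.random.default_rng(0)
pos = rng.normal(2.0, 0.3, (50, 2))
neg = rng.normal(0.5, 0.3, (50, 2))
X = np.vstack([pos, neg])
y = np.array([1] * 50 + [0] * 50)
w = train_rm(X, y)
acc = (predict_rm(w, X) == y).mean()
print(acc)  # well-separated toy data: accuracy near 1.0
```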

Development of a Cause Analysis Program to Risky Driving with Vision System (Vision 시스템을 이용한 위험운전 원인 분석 프로그램 개발에 관한 연구)

  • Oh, Ju-Taek;Lee, Sang-Yong
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.8 no.6
    • /
    • pp.149-161
    • /
    • 2009
  • Electronic control systems for vehicles are being developed rapidly to balance driver safety with legal and social needs. Driver assistance systems have been put into practical use thanks to falling hardware costs, highly efficient sensors, and so on. This study developed a lane and vehicle detection program using a CCD camera. A risky driving analysis program based on vision systems is then built by combining a risky driving detection algorithm developed in a previous study with the lane and vehicle detection program suggested in this study. The risky driving detection program, fed with vehicle motion data and lane data, is useful for efficiently analyzing the causes and effects of risky driving behavior.
