Image Based Vehicle Detection

Search Results: 268

Performance Improvement of Classifier by Combining Disjunctive Normal Form features

  • Min, Hyeon-Gyu; Kang, Dong-Joong
    • International Journal of Internet, Broadcasting and Communication, v.10 no.4, pp.50-64, 2018
  • This paper describes a visual object detection approach utilizing ensemble-based machine learning. Object detection methods employing 1D features have the benefit of fast calculation speed. However, for real images with complex backgrounds, detection accuracy and performance are degraded. In this paper, we propose an ensemble learning algorithm that combines a 1D feature classifier and a 2D DNF (Disjunctive Normal Form) classifier to improve object detection performance in a single input image. To improve computing efficiency and accuracy, we also propose a feature selection method that reduces computing time, together with an ensemble algorithm that combines the 1D features and 2D DNF features. In the verification experiments, we selected the Haar-like feature as the 1D image descriptor and demonstrated the performance of the algorithm on face and vehicle datasets.
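
As a concrete illustration of the 1D Haar-like descriptor used in the experiments, a two-rectangle feature can be computed in constant time from an integral image. A minimal sketch (the function names and the toy image are illustrative, not the authors' code):

```python
import numpy as np

def integral_image(img):
    """Cumulative-sum table so any rectangle sum costs four lookups."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from the integral image (exclusive ends)."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def haar_two_rect_vertical(ii, r, c, h, w):
    """Two-rectangle Haar-like feature: left-half sum minus right-half sum."""
    half = w // 2
    left = rect_sum(ii, r, c, r + h, c + half)
    right = rect_sum(ii, r, c + half, r + h, c + w)
    return left - right

img = np.zeros((6, 6))
img[:, :3] = 1.0                # bright left half, dark right half
ii = integral_image(img)
print(haar_two_rect_vertical(ii, 0, 0, 6, 6))  # strong edge response: 18.0
```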

Classification of Objects using CNN-Based Vision and Lidar Fusion in Autonomous Vehicle Environment

  • G. Komali; A. Sri Nagesh
    • International Journal of Computer Science & Network Security, v.23 no.11, pp.67-72, 2023
  • In the past decade, Autonomous Vehicle Systems (AVS) have advanced at an exponential rate, particularly due to improvements in artificial intelligence, which have had a significant impact on society as well as on road safety and the future of transportation systems. The real-time fusion of light detection and ranging (LiDAR) and camera data is known to be a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. Especially in the case of autonomous vehicles, the efficient fusion of data from these two types of sensors is important for estimating the depth of objects as well as classifying objects at short and long distances. This paper presents the classification of objects using CNN-based vision and Light Detection and Ranging (LiDAR) fusion in an autonomous vehicle environment. The method is based on a convolutional neural network (CNN) and image upsampling theory. The LiDAR point cloud is upsampled and converted into pixel-level depth information, which is concatenated with the Red-Green-Blue data and fed into a deep CNN. The proposed method obtains an informative feature representation for object classification in the autonomous vehicle environment using the integrated vision and LiDAR data, and is adopted to guarantee both object classification accuracy and minimal loss. Experimental results show the effectiveness and efficiency of the presented approach for object classification.
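
The upsample-and-concatenate fusion step described above can be sketched as follows; the nearest-neighbour fill is a simplistic stand-in for the paper's point-cloud upsampling, and all names are illustrative:

```python
import numpy as np

def densify_depth(sparse_depth):
    """Naive upsampling: propagate valid depths into empty pixels until the
    map is dense. Assumes at least one LiDAR return is present."""
    dense = sparse_depth.copy()
    valid = dense > 0
    while not valid.all():
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            shifted = np.roll(dense, (dr, dc), axis=(0, 1))
            fill = (~valid) & (shifted > 0)
            dense[fill] = shifted[fill]
        valid = dense > 0
    return dense

def fuse_rgbd(rgb, sparse_depth):
    """Concatenate RGB with the densified depth into a 4-channel CNN input."""
    depth = densify_depth(sparse_depth)
    return np.dstack([rgb, depth])          # H x W x 4 tensor

rgb = np.ones((4, 4, 3), dtype=np.float32)
sparse = np.zeros((4, 4), dtype=np.float32)
sparse[1, 1] = 5.0                          # a single LiDAR return
x = fuse_rgbd(rgb, sparse)
print(x.shape)                              # (4, 4, 4)
```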

Development of a Stereo-Camera-based Mobile Road Surface Condition Detection System (스테레오카메라 기반 이동식 노면정보 검지시스템 개발에 관한 연구)

  • Kim, Jonghoon; Kim, Youngmin; Baik, Namcheol; Won, Jaemoo
    • International Journal of Highway Engineering, v.15 no.5, pp.177-185, 2013
  • PURPOSES : This study attempts to design and establish a road surface condition detection system using image processing, which is expected to enable a low-cost, high-efficiency road information detection system, based on a review of technology trends and related case studies in road surface condition detection. METHODS : A visual information collection method (a stereo camera mounted on the outside of the vehicle) and a visual information algorithm (Wavelet Transform features clustered with K-means) were adopted, with experiments and analysis conducted on real roads under four surface states (Dry, Wet, Snow, Ice). RESULTS : Test results showed a detection rate of 95% or more on wet road surfaces and 85% or more on snowy road surfaces. However, a low detection rate of 30% was found on icy road surfaces. CONCLUSIONS : To improve the detection rate of the mobile road surface condition detection system developed in this study, more accurate phase analysis in the image processing step is needed. If periodic synchronization through automatic camera settings according to weather or ambient light is not performed at the time of image acquisition, a significant change in the polarization coefficient values occurs.
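
The K-means clustering step applied to the wavelet texture features can be sketched with plain Lloyd iterations (a generic illustration with toy 2-D features and caller-supplied initial centres, not the study's implementation):

```python
import numpy as np

def kmeans(X, centers, iters=20):
    """Plain Lloyd's algorithm, standing in for the K-means step the paper
    applies to wavelet-transform texture features (Dry/Wet/Snow/Ice)."""
    centers = np.asarray(centers, dtype=float).copy()
    for _ in range(iters):
        # assign each feature vector to its nearest centre
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # move each centre to the mean of its members
        for j in range(len(centers)):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# toy texture features: two well-separated surface clusters
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
labels, _ = kmeans(X, centers=[[1.0, 1.0], [4.0, 4.0]])
print((labels[:20] == 0).all() and (labels[20:] == 1).all())  # True
```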

Line Segments Matching Framework for Image Based Real-Time Vehicle Localization (이미지 기반 실시간 차량 측위를 위한 선분 매칭 프레임워크)

  • Choi, Kanghyeok
    • The Journal of The Korea Institute of Intelligent Transport Systems, v.21 no.2, pp.132-151, 2022
  • Vehicle localization is one of the core technologies for autonomous driving. Image-based localization provides location information efficiently, and various related studies have been conducted. However, image-based localization methods using feature points or lane information have a limitation in that positioning accuracy may be greatly affected by road and driving environments. In this study, we propose a line segment matching framework for accurate vehicle localization. The proposed framework consists of four steps: line segment extraction, merging, overlap area detection, and MSLD-based segment matching. The proposed framework stably performed line segment matching at a level sufficient for vehicle positioning regardless of vehicle speed, driving style, and surrounding environment.
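
The MSLD-based matching step pairs segments whose descriptors are mutually close; a generic nearest-neighbour matcher with a ratio test sketches the idea (the descriptor values here are toy data, not real MSLD vectors):

```python
import numpy as np

def match_segments(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with a Lowe-style ratio test.
    desc_a, desc_b: arrays of per-segment descriptors (e.g. MSLD vectors)."""
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)
        order = dist.argsort()
        best, second = order[0], order[1]
        if dist[best] < ratio * dist[second]:   # keep unambiguous matches only
            matches.append((i, int(best)))
    return matches

a = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([[0.95, 0.05], [0.0, 1.1], [10.0, 10.0]])
print(match_segments(a, b))  # [(0, 0), (1, 1)]
```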

Development of a Vehicle Tracking Algorithm using Automatic Detection Line Calculation (검지라인 자동계산을 이용한 차량추적 알고리즘 개발)

  • Oh, Ju-Taek; Min, Joon-Young; Hur, Byung-Do; Kim, Myung-Seob
    • Journal of Korean Society of Transportation, v.26 no.4, pp.265-273, 2008
  • Video Image Processing (VIP) for traffic surveillance has been used not only to gather traffic information but also to detect traffic conflicts and incident conditions. This paper presents a system for gathering traffic information and detecting conflicts based on the automatic calculation of pixel length within the detection zone of a Video Detection System (VDS). The algorithm improves the accuracy of traffic information by automatically generating detailed line segments within the detection zone, and the system can be applied to all types of intersections. Experiments were conducted with CCTV images from an intersection in Bundang and verified through comparison with a commercial VDS product.
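
Converting pixel displacement inside a calibrated detection zone into speed is the core of such detection-line systems; a hypothetical helper (not the paper's formulation, all parameters illustrative) shows the unit conversion:

```python
def speed_from_detection_zone(zone_length_m, zone_length_px, px_travelled, dt_s):
    """Estimate vehicle speed from pixel displacement inside a detection zone
    of known real-world length (illustrative helper, not the paper's method)."""
    metres_per_px = zone_length_m / zone_length_px
    return px_travelled * metres_per_px / dt_s * 3.6   # m/s -> km/h

# a 10 m zone spanning 200 px; 100 px travelled in 0.36 s
v = speed_from_detection_zone(10.0, 200.0, 100.0, 0.36)
print(round(v, 1))  # 50.0 km/h
```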

Vehicle License Plate Recognition System By Edge-based Segment Image Generation (에지기반 세그먼트 영상 생성에 의한 차량 번호판 인식 시스템)

  • Kim, Jin-Ho; Noh, Duck-Soo
    • The Journal of the Korea Contents Association, v.12 no.3, pp.9-16, 2012
  • Vehicle license plate recognition has been widely studied for smart city projects. License plate recognition can be difficult due to geometric distortion and image quality degradation when capturing images of moving vehicles from roadside CCTV without a trigger signal. In this paper, a high-performance vehicle license plate recognition system using edge-based segment images is introduced, which is robust to the geometric distortion and image quality degradation that arise in the absence of a trigger signal. Experimental results of the proposed real-time license plate recognition algorithm, implemented on roadside CCTV, show that the plate detection rate was 97.5% and the overall character recognition rate of the detected plates was 99.3%, measured on a daily average of 1,535 vehicles over a week of operation.
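
Edge-based plate localization exploits the dense vertical edges produced by character strokes; a minimal sketch of a row-wise vertical-edge profile (illustrative only, not the paper's segment-image generation):

```python
import numpy as np

def vertical_edge_profile(gray):
    """Column-difference edge map, summed per row: plate regions show up as
    rows with dense vertical edges from the character strokes."""
    edges = np.abs(np.diff(gray.astype(float), axis=1))
    return edges.sum(axis=1)

img = np.zeros((6, 8))
img[2] = [0, 1, 0, 1, 0, 1, 0, 1]   # a "stripey" plate-like row
profile = vertical_edge_profile(img)
print(profile.argmax())  # row 2 has the densest vertical edges: 2
```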

Adaptable Center Detection of a Laser Line with a Normalization Approach using Hessian-matrix Eigenvalues

  • Xu, Guan; Sun, Lina; Li, Xiaotao; Su, Jian; Hao, Zhaobing; Lu, Xue
    • Journal of the Optical Society of Korea, v.18 no.4, pp.317-329, 2014
  • In vision measurement systems based on structured light, the key to detection precision is accurately determining the central position of the projected laser line in the image. The purpose of this research is to extract laser line centers based on a decision function generated to distinguish the real centers from candidate points with a high recognition rate. First, preprocessing of the image using a difference-image method is conducted to segment the laser line. Second, feature points at the integer-pixel level are selected as initial laser line centers using the eigenvalues of the Hessian matrix. Third, since the light intensity of a laser line obeys a Gaussian distribution in the transverse section and a constant distribution in the longitudinal section, a normalized model of the Hessian-matrix eigenvalues for the candidate centers is presented to reasonably balance the two eigenvalues, which indicate the variation tendencies of the second-order partial derivatives of the Gaussian function and the constant function, respectively. The proposed model integrates a Gaussian recognition function and a sinusoidal recognition function. The Gaussian recognition function estimates the characteristic that one eigenvalue approaches zero, and enhances the sensitivity of the decision function to that characteristic, which corresponds to the longitudinal direction of the laser line. The sinusoidal recognition function evaluates the feature that the other eigenvalue is negative with a large absolute value, making the decision function more sensitive to that feature, which is related to the transverse direction of the laser line. In the proposed model, the decision function assigns higher values to the real centers by jointly considering the properties in the longitudinal and transverse directions of the laser line.
Moreover, this method provides a decision value from 0 to 1 for arbitrary candidate centers, which yields a normalized measure across different laser lines in different images. Pixels with normalized results close to 1 are determined to be the real centers by progressively scanning the image columns. Finally, the zero point of a second-order Taylor expansion in the eigenvector's direction is employed to further refine the extracted central points at the subpixel level. The experimental results show that the method based on this normalization model accurately extracts the coordinates of laser line centers and achieves a higher recognition rate in two groups of experiments.
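
The eigenvalue-based decision idea can be sketched numerically: at a true line center the Hessian has one eigenvalue near zero (along the line) and one strongly negative (across it). The weighting below is an illustrative stand-in for the paper's normalized model, not its exact formulation:

```python
import numpy as np

def hessian_eigs(img, r, c):
    """Second-order finite differences give the 2x2 Hessian at a pixel."""
    dxx = img[r, c + 1] - 2 * img[r, c] + img[r, c - 1]
    dyy = img[r + 1, c] - 2 * img[r, c] + img[r - 1, c]
    dxy = (img[r + 1, c + 1] - img[r + 1, c - 1]
           - img[r - 1, c + 1] + img[r - 1, c - 1]) / 4.0
    return np.linalg.eigvalsh(np.array([[dxx, dxy], [dxy, dyy]]))

def decision(img, r, c, sigma=1.0):
    """Toy normalised decision value in [0, 1]: high when one eigenvalue is
    near zero (along the line) and the other strongly negative (across it)."""
    lam1, lam2 = sorted(hessian_eigs(img, r, c))          # lam1 <= lam2
    along = np.exp(-(lam2 / sigma) ** 2)                  # Gaussian term
    across = np.sin(np.clip(-lam1 / (2 * sigma), 0, 1) * np.pi / 2)  # sine term
    return along * across

# a horizontal laser line with a Gaussian cross-section, centred on row 4
rows = np.arange(9)[:, None]
img = np.exp(-((rows - 4) ** 2) / 2.0) * np.ones((9, 9))
print(decision(img, 4, 4) > decision(img, 2, 4))  # centre scores higher: True
```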

Manhole Cover Detection from Natural Scene Based on Imaging Environment Perception

  • Liu, Haoting; Yan, Beibei; Wang, Wei; Li, Xin; Guo, Zhenhui
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.10, pp.5095-5111, 2019
  • A multi-rotor Unmanned Aerial Vehicle (UAV) system is developed to solve the manhole cover detection problem for infrastructure maintenance in the suburbs of big cities. A visible light sensor is employed to collect ground image data, and a series of image processing and machine learning methods are used to detect manhole covers. First, image enhancement techniques are employed to improve the imaging quality of the visible light camera. An imaging environment perception method is used to increase computational robustness: blind Image Quality Evaluation Metrics (IQEMs) are used to assess the imaging environment and select images with high imaging definition for the subsequent computation. Because of its excellent processing effect, adaptive Multiple Scale Retinex (MSR) is used to enhance imaging quality. Second, the Single Shot multi-box Detector (SSD) method, chosen for its stable processing performance, is utilized to identify the manhole cover. Third, the spatial coordinates of the manhole cover are also estimated from the ground image. Practical applications have verified the outdoor environment adaptability of the proposed algorithm and the target detection correctness of the proposed system. The detection accuracy can reach 99%, and the positioning accuracy is about 0.7 meters.
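
The Retinex enhancement step divides out slowly varying illumination: MSR averages log(image) − log(smoothed image) over several scales. The sketch below substitutes a box filter for the Gaussian surround to stay self-contained (an assumption, not the paper's exact filter):

```python
import numpy as np

def box_blur(img, k):
    """Separable box filter, a cheap stand-in for the MSR Gaussian surround."""
    kern = np.ones(k) / k
    tmp = np.apply_along_axis(np.convolve, 1, img, kern, mode='same')
    return np.apply_along_axis(np.convolve, 0, tmp, kern, mode='same')

def multiscale_retinex(img, ks=(3, 5, 9)):
    """MSR: mean over scales of log(image) - log(smoothed image), which
    flattens uneven illumination before detection."""
    img = img.astype(float)
    out = np.zeros_like(img)
    for k in ks:
        out += np.log(img) - np.log(box_blur(img, k))
    return out / len(ks)

# a flat scene under a strong left-to-right illumination ramp
scene = np.outer(np.ones(32), np.linspace(10.0, 200.0, 32))
enhanced = multiscale_retinex(scene)
print(abs(enhanced[16, 16]) < 1e-6)  # interior illumination removed: True
```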

Moving Object Detection and Tracking in Image Sequence with complex background (복잡한 배경을 가진 영상 시퀀스에서의 이동 물체 검지 및 추적)

  • Jung, Young-Kee; Ho, Yo-Sung
    • Proceedings of the IEEK Conference, 1999.06a, pp.615-618, 1999
  • In this paper, an object detection and tracking algorithm is presented that exhibits robust performance on image sequences with complex backgrounds. The proposed algorithm is composed of three parts: moving object detection, object tracking, and motion analysis. The moving object detection algorithm is implemented using a temporal median background method, which is suitable for real-time applications. In the motion analysis, we propose a new technique for removing temporal clutter, such as a swaying plant or a light reflection from a background object. In addition, we design a multiple vehicle tracking system based on Kalman filtering. Computer simulation of the proposed scheme shows its robustness on MPEG-7 test image sequences.
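
The temporal median background method is straightforward to sketch: the per-pixel median over a frame window suppresses anything that occupies a pixel only briefly (a generic illustration, not the authors' code):

```python
import numpy as np

def temporal_median_background(frames):
    """Per-pixel median over a window of frames: a moving object occupies any
    given pixel only briefly, so it drops out of the estimated background."""
    return np.median(np.stack(frames), axis=0)

def foreground_mask(frame, background, thresh=30):
    """Pixels far from the background model are flagged as moving objects."""
    return np.abs(frame.astype(int) - background) > thresh

# static scene (value 100) with a bright 'vehicle' moving one pixel per frame
frames = [np.full((4, 4), 100, dtype=np.uint8) for _ in range(5)]
for t, f in enumerate(frames):
    f[1, t % 4] = 255
bg = temporal_median_background(frames)
mask = foreground_mask(frames[0], bg)
print(bg[1, 0], mask[1, 0])  # background recovered, object detected: 100.0 True
```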


Vehicle Image Recognition Using Deep Convolution Neural Network and Compressed Dictionary Learning

  • Zhou, Yanyan
    • Journal of Information Processing Systems, v.17 no.2, pp.411-425, 2021
  • In this paper, a vehicle recognition algorithm based on a deep convolutional neural network and a compressed dictionary is proposed. First, the network structure for fine-grained vehicle recognition based on a convolutional neural network is introduced. Then, a vehicle recognition system based on a multi-scale pyramid convolutional neural network is constructed. The contribution of the different networks to the recognition result is adjusted by an adaptive fusion method, which sets each network's proportion of the overall multi-scale network output according to the recognition accuracy of that single network. Next, compressed dictionary learning and data dimension reduction are carried out using an effective block structure method combined with a very sparse random projection matrix, which addresses the computational complexity caused by high-dimensional features and shortens the dictionary learning time. Finally, a sparse representation classification method is used to realize vehicle type recognition. Experimental results show that the detection performance of the proposed algorithm is stable in sunny, cloudy, and rainy weather, and that it adapts well to typical application scenarios such as occlusion and blurring, with an average recognition rate of more than 95%.
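
The very sparse random projection used for dimension reduction can be sketched directly from its definition: entries are ±√s with probability 1/(2s) each and zero otherwise, so most multiplications are skipped (a generic Li-style construction; parameters are illustrative):

```python
import numpy as np

def very_sparse_projection(d_in, d_out, s=None, seed=0):
    """Very sparse random projection matrix: entries are +/-sqrt(s) with
    probability 1/(2s) each and 0 otherwise, scaled for d_out columns."""
    s = s or int(np.sqrt(d_in))
    rng = np.random.default_rng(seed)
    u = rng.random((d_in, d_out))
    R = np.zeros((d_in, d_out))
    R[u < 1 / (2 * s)] = np.sqrt(s)
    R[u > 1 - 1 / (2 * s)] = -np.sqrt(s)
    return R / np.sqrt(d_out)

X = np.random.default_rng(1).normal(size=(8, 1024))  # high-dimensional features
R = very_sparse_projection(1024, 64)
Y = X @ R                                            # compressed to 64 dims
print(Y.shape, (R == 0).mean() > 0.9)                # (8, 64) True
```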