• Title/Summary/Keyword: Vehicle Image

1,219 results

Vehicle Detection using Feature Points with Directional Features (방향성 특징을 가지는 특징 점에 의한 차량 검출)

  • Choi Dong-Hyuk;Kim Byoung-Soo
    • Journal of the Institute of Electronics Engineers of Korea SC / v.42 no.2 s.302 / pp.11-18 / 2005
  • To detect vehicles in an image, the image is first transformed with a steerable pyramid, which has independent directions and levels. Feature vectors are the collection of filter responses at the different scales of the steerable image pyramid. For detection, feature vectors at feature points of the vehicle image are used. The feature points are selected in three ways: first, as evenly spaced grid points in the vehicle image; second, as corner points selected by a human; and last, as corner points selected from the grid points. The feature vectors of the model vehicle image are then compared with patches of the test images, and if the distance between the model and a test patch is lower than a predefined threshold, the patch is classified as a vehicle. In the experiments, a total of 11,191 vehicle images were captured during the day (10,576) and at night (624) on two local roads, and detection rates of 92.0% during the day and 87.3% at night were achieved.
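
A minimal sketch of the distance-threshold matching step described above. Simple directional gradient responses collected over a few scales stand in for the steerable pyramid filter bank, and the patch size and threshold value are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def directional_features(img, scales=3):
    """Stack horizontal/vertical gradient responses at several scales
    as a rough stand-in for steerable pyramid filter responses."""
    feats = []
    cur = img.astype(np.float32)
    for _ in range(scales):
        gy, gx = np.gradient(cur)
        feats.append(np.array([np.abs(gx).mean(), np.abs(gy).mean()]))
        cur = cur[::2, ::2]          # crude downsampling to the next pyramid level
    return np.concatenate(feats)

def is_vehicle(model_patch, test_patch, threshold=0.5):
    """Declare a vehicle when the feature-vector distance is below the threshold."""
    d = np.linalg.norm(directional_features(model_patch) -
                       directional_features(test_patch))
    return d < threshold

# toy usage with random patches (real patches come from grid/corner feature points)
model = np.random.rand(32, 32)
test = model + 0.01 * np.random.rand(32, 32)
print(is_vehicle(model, test))
```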

Matching GIS Lane Data with Vehicle Position Using Camera Image (영상을 이용한 주행차량 위치정보와 GIS 차선 데이터 매칭 기법)

  • Kim, Min-Woo;Moon, Sang-Chan;Joo, Da-Ni;Lee, Soon-Geul
    • Transactions of the Korean Society of Automotive Engineers / v.22 no.7 / pp.40-47 / 2014
  • This paper proposes a method for matching GIS lane information with the vehicle position using camera images in order to reduce DGPS error. Images of a straight road are taken with a camera installed at the front center of the vehicle, and the distance between the vehicle and the lane is estimated from the images. The current GIS lane data is matched by comparing the estimated distance with the distance measured using a DGPS. Inverse perspective mapping is used to minimize the image-processing error caused by the heading angle, and a single buffering method is applied to decide the exact moment of the GIS match. Through practical tests on the highway, the feasibility of GIS matching using camera images is confirmed.
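
A minimal sketch of the matching idea, assuming a straight lane segment from the GIS data: the camera-estimated lane distance is accepted as a match when it agrees with the offset of the DGPS position from the GIS lane line within a tolerance. The tolerance and coordinate conventions are assumptions; the inverse perspective mapping and buffering steps are omitted.

```python
import numpy as np

def lateral_offset(point, lane_p0, lane_p1):
    """Perpendicular distance from a DGPS position to a straight GIS lane segment."""
    p, a, b = map(np.asarray, (point, lane_p0, lane_p1))
    t = np.clip(np.dot(p - a, b - a) / np.dot(b - a, b - a), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * (b - a)))

def matches_gis_lane(camera_dist_m, dgps_pos, lane_p0, lane_p1, tol_m=0.3):
    """Match when the camera-estimated lane distance agrees with the
    DGPS-derived offset within a tolerance (tolerance value assumed)."""
    return abs(camera_dist_m - lateral_offset(dgps_pos, lane_p0, lane_p1)) < tol_m

# toy example: vehicle about 1.2 m from a north-running lane line
print(matches_gis_lane(1.25, (1.2, 5.0), (0.0, 0.0), (0.0, 100.0)))
```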

Feature Area-based Vehicle Plate Recognition System(VPRS) (특징 영역 기반의 자동차 번호판 인식 시스템)

  • Jo, Bo-Ho;Jeong, Seong-Hwan
    • The Transactions of the Korea Information Processing Society / v.6 no.6 / pp.1686-1692 / 1999
  • This paper describes a feature area-based vehicle plate recognition system (VPRS). To extract the vehicle plate from a vehicle image, we use a method that extracts the plate area using intensity variation. To extract the feature areas containing characters from the extracted plate, we use a histogram-based approach together with the relative location of the individual characters within the plate. The extracted feature areas are used as the input vectors of an ART2 neural network. The proposed method simplifies the existing complex preprocessing and solves the problems of distortion and noise in the binarization process. In cases where character extraction by the binarization process of previous methods is difficult, our method efficiently extracts character regions and recognizes them.
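
A minimal sketch of the histogram-based character extraction step, using a column-wise projection of a binarized plate image to find candidate character regions. The minimum character width and the toy plate are illustrative; the plate extraction by intensity variation and the ART2 recognition stage are not shown.

```python
import numpy as np

def character_columns(plate_bin, min_width=3):
    """Split a binarized plate image into character regions using a
    column-wise (vertical projection) histogram."""
    profile = plate_bin.sum(axis=0)              # foreground pixels per column
    in_char, start, regions = False, 0, []
    for x, v in enumerate(profile):
        if v > 0 and not in_char:
            in_char, start = True, x
        elif v == 0 and in_char:
            in_char = False
            if x - start >= min_width:
                regions.append((start, x))
    if in_char:
        regions.append((start, len(profile)))
    return regions

# toy plate: two "characters" of foreground pixels separated by a gap
plate = np.zeros((10, 20), dtype=np.uint8)
plate[:, 2:6] = 1
plate[:, 10:15] = 1
print(character_columns(plate))   # -> [(2, 6), (10, 15)]
```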

A Study on the Possibility of Using the Aerial-Based Vehicle Detection System for Real-Time Traffic Data Collection (항공 기반 차량검지시스템의 실시간 교통자료 수집에의 활용 가능성에 관한 연구)

  • Baik, Nam Cheol;Lee, Sang Hyup
    • KSCE Journal of Civil and Environmental Engineering Research / v.32 no.2D / pp.129-136 / 2012
  • In the US, Japan, and Germany, the Aerial-Based Vehicle Detection System, which collects real-time traffic data using Unmanned Aerial Vehicles (UAVs), helicopters, or fixed-wing aircraft, has been developed over the last several years. This study was therefore carried out to find out whether the Aerial-Based Vehicle Detection System could be used for real-time traffic data collection. For this purpose the study was divided into two parts. In the first part, the possibility of retrieving real-time traffic data, such as travel speed, from aerial photographic images using image processing techniques was examined. In the second part, the quality of the retrieved real-time traffic data was examined to determine whether the data are good enough to be used as a traffic information source. Based on the results of these examinations, we concluded that it would not be easy for the Aerial-Based Vehicle Detection System to replace the present Vehicle Detection System due to technological difficulties and high cost. However, the system could be used effectively for emergency traffic management plans in case of incidents such as abrupt heavy rain, heavy snow, multiple pile-ups, etc.
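
A minimal sketch of the kind of travel-speed retrieval examined in the first part of the study: given a vehicle's pixel positions in two consecutive georeferenced aerial frames and a known ground sampling distance, speed follows from displacement over the frame interval. The numbers below are illustrative, not from the study.

```python
def travel_speed_kmh(pos_t0, pos_t1, dt_s, metres_per_pixel):
    """Travel speed from a vehicle's pixel positions in two consecutive
    georeferenced aerial frames (ground sampling distance assumed known)."""
    dx = (pos_t1[0] - pos_t0[0]) * metres_per_pixel
    dy = (pos_t1[1] - pos_t0[1]) * metres_per_pixel
    return (dx * dx + dy * dy) ** 0.5 / dt_s * 3.6

# e.g. 40 px displacement at 0.5 m/px over 1 s  ->  72 km/h
print(travel_speed_kmh((100, 200), (100, 240), 1.0, 0.5))
```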

Night-to-Day Road Image Translation with Generative Adversarial Network for Driver Safety Enhancement (운전자 안정성 향상을 위한 Generative Adversarial Network 기반의 야간 도로 영상 변환 시스템)

  • Ahn, Namhyun;Kang, Suk-Ju
    • Journal of Broadcast Engineering / v.23 no.6 / pp.760-767 / 2018
  • Advanced driver assistance systems (ADAS) are a major technique in the intelligent vehicle field. ADAS techniques can be separated into two classes: methods that directly control the movement of the vehicle, and methods that indirectly provide convenience to the driver. In this paper, we propose a novel system that gives visual assistance to the driver by translating a night road image into a day road image. We use black box images capturing the front road view of the vehicle as inputs. The black box images are cropped into three parts and simultaneously translated into day images by the proposed image translation module. The translated images are then reassembled to the original size. The experimental results show that the proposed method generates realistic images and outperforms conventional algorithms.
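
A minimal sketch of the crop-translate-stitch pipeline described above. The frame is split into three strips (the exact cropping scheme is an assumption), each strip is passed through a generator, and the outputs are reassembled to the original size; an identity function stands in for the trained GAN generator.

```python
import numpy as np

def translate_night_to_day(frame, generator):
    """Crop the frame into three strips, translate each strip with the
    generator, and stitch the results back to the original size."""
    h, w, _ = frame.shape
    thirds = np.array_split(np.arange(w), 3)
    parts = [generator(frame[:, idx[0]:idx[-1] + 1]) for idx in thirds]
    return np.concatenate(parts, axis=1)

# placeholder "generator": a real system would use a trained GAN generator here
identity_gen = lambda patch: patch
night_frame = np.random.randint(0, 255, (720, 1280, 3), dtype=np.uint8)
day_frame = translate_night_to_day(night_frame, identity_gen)
print(day_frame.shape)   # (720, 1280, 3)
```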

Implementation of Image Transmission Based on Vehicle-to-Vehicle Communication

  • Piao, Changhao;Ding, Xiaoyue;He, Jia;Jang, Soohyun;Liu, Mingjie
    • Journal of Information Processing Systems / v.18 no.2 / pp.258-267 / 2022
  • Weak over-the-horizon perception and blind spots are the main problems in intelligent connected vehicles (ICVs). In this paper, a V2V image transmission-based road condition warning method is proposed to solve them. Encoded road emergency images collected by the ICV are transmitted to the on-board unit (OBU) through Ethernet. The OBU broadcasts the fragmented image information, including the location and clock of the vehicle, to other OBUs. To match the channel quality of the V2X communication at different times, the optimal fragment length is selected by the OBU to process the image information. Then, according to the position and clock information of the remote vehicles, the receiver's OBU selects valid messages to decode the image information, which helps the receiver extend its perceptual field. The experimental results show that our method has an average packet loss rate of 0.5%. The transmission delay is about 51.59 ms in low-speed driving scenarios, which can provide drivers with timely and reliable warnings of road conditions.
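
A minimal sketch of the fragmentation and reassembly idea, assuming a simple dictionary-based message format: each fragment carries the sender's position and clock so receivers can filter and reorder messages. The field names, fragment length, and loss handling are illustrative, not the paper's protocol.

```python
import time

def fragment_image(image_bytes, frag_len, vehicle_id, position):
    """Split an encoded image into fragments, each tagged with the sender's
    position and clock so receivers can filter and reorder them."""
    ts = time.time()
    total = (len(image_bytes) + frag_len - 1) // frag_len
    return [{"vid": vehicle_id, "pos": position, "clock": ts,
             "seq": i, "total": total,
             "payload": image_bytes[i * frag_len:(i + 1) * frag_len]}
            for i in range(total)]

def reassemble(fragments):
    """Reorder received fragments by sequence number and rebuild the image bytes."""
    frags = sorted(fragments, key=lambda f: f["seq"])
    if len(frags) != frags[0]["total"]:
        return None                      # packet loss: image incomplete
    return b"".join(f["payload"] for f in frags)

data = bytes(range(256)) * 40            # stand-in for an encoded road image
msgs = fragment_image(data, frag_len=1024, vehicle_id="OBU-1", position=(37.56, 126.97))
assert reassemble(msgs) == data
```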

Road Image Enhancement Method for Vision-based Intelligent Vehicle (비전기반 지능형 자동차를 위한 도로 주행 영상 개선 방법)

  • Kim, Seunggyu;Park, Daeyong;Choi, Yeongwoo
    • Korean Journal of Cognitive Science / v.25 no.1 / pp.51-71 / 2014
  • This paper presents an image enhancement method for real road traffic scenes. Images captured by a camera on the car cannot maintain color constancy as illumination or weather changes. In the real environment, these problems become worse under backlight conditions and at night, which makes vision-based intelligent vehicle applications more difficult. If existing image enhancement methods are used without considering the position and intensity of the light source and their geometric relations, the image quality can even deteriorate. This paper therefore presents a fast and effective image enhancement method, resembling the human cognitive system, which consists of 1) image preprocessing, 2) color-contrast evaluation, and 3) alpha blending of the over/under-estimated image and the preprocessed image. An input image is first preprocessed by gamma correction and then enhanced by an Automatic Color Enhancement (ACE) method. Finally, the preprocessed image and the ACE image are blended to improve image visibility. The proposed method shows drastically enhanced results visually and improves the traffic sign detection performance of vision-based intelligent vehicle applications.
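
A minimal sketch of the gamma-correction and alpha-blending steps of the pipeline, assuming a fixed blending weight; in the paper the weight follows from the color-contrast evaluation, and the ACE enhancement step is replaced by a placeholder here.

```python
import numpy as np

def gamma_correct(img, gamma=2.2):
    """Step 1 of the pipeline: gamma correction of the input road image."""
    norm = img.astype(np.float32) / 255.0
    return (np.power(norm, 1.0 / gamma) * 255.0).astype(np.uint8)

def alpha_blend(preprocessed, enhanced, alpha=0.5):
    """Step 3: alpha-blend the preprocessed image with the (ACE-)enhanced image.
    A fixed alpha is used here; the paper derives it from the color-contrast evaluation."""
    out = alpha * enhanced.astype(np.float32) + (1 - alpha) * preprocessed.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)

frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
pre = gamma_correct(frame)
ace = pre                                  # placeholder for the ACE enhancement step
print(alpha_blend(pre, ace, alpha=0.6).shape)
```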

Development of Road-Following Controller for Autonomous Vehicle using Relative Similarity Modular Network (상대분할 신경회로망에 의한 자율주행차량 도로추적 제어기의 개발)

  • Ryoo, Young-Jae;Lim, Young-Cheol
    • Journal of Institute of Control, Robotics and Systems / v.5 no.5 / pp.550-557 / 1999
  • This paper describes a road-following controller for an autonomous vehicle using the proposed neural network. Road following with a visual sensor such as a camera requires an intelligent control algorithm because the relation between the road image and the steering control is complex to analyze. The proposed neural network, the relative similarity modular network (RSMN), is composed of several learning networks and a partitioning network. The partitioning network divides the input space into multiple sections by the similarity of the input data. Because each divided section contains similar input patterns, the RSMN can easily learn nonlinear relations such as road following with visual control. The visual control uses two criteria computed from the camera road image: the position of the vanishing point of the road and the slope of the vanishing line of the road. The neural network controller takes these two criteria as inputs and outputs a steering angle. To confirm the performance of the proposed controller, software was developed to simulate the vehicle dynamics, camera image generation, visual control, and road following. A prototype autonomous electric vehicle was also developed, and the usefulness of the controller was verified by physical driving tests.
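
A minimal sketch of the visual control mapping described above, taking the vanishing-point position and vanishing-line slope as inputs and producing a steering angle. A simple proportional law stands in for the trained RSMN, and the gains are illustrative.

```python
def steering_angle(vanish_x, vanish_slope, image_width=640,
                   k_pos=0.05, k_slope=10.0):
    """Map the two visual criteria (vanishing-point position and vanishing-line
    slope) to a steering angle. A proportional law stands in for the trained RSMN."""
    offset = vanish_x - image_width / 2.0   # lateral error of the vanishing point
    return -(k_pos * offset + k_slope * vanish_slope)

# vanishing point 40 px right of centre, slight positive slope -> steer left
print(steering_angle(360, 0.05))
```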

Autonomous Traveling of Unmanned Golf-Car using GPS and Vision system (GPS와 비전시스템을 이용한 무인 골프카의 자율주행)

  • Jung, Byeong Mook;Yeo, In-Joo;Cho, Che-Seung
    • Journal of the Korean Society for Precision Engineering / v.26 no.6 / pp.74-80 / 2009
  • Path tracking of an unmanned vehicle is the basis of autonomous driving and navigation. For path tracking, it is very important to find the exact position of the vehicle. GPS is used to obtain the position of the vehicle, and a direction sensor and a velocity sensor are used to compensate for the GPS position error. To detect path lines in the road image, the bird's eye view transform is employed, which makes it simpler to design a lateral control algorithm than working from the perspective view of the image. Because the driving speed of the vehicle should be decreased at curved lanes and crossroads, we propose a speed control algorithm that uses GPS and image data. The control algorithm is simulated and tested on the basis of an expert driver's knowledge data. In the experiments, the results show that the bird's eye view transform works well for steering control and that the speed control algorithm is also stable in real driving.
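
A minimal sketch of the bird's eye view transform used for lane detection, implemented here with a standard perspective warp in OpenCV; the four source corner points depend on the camera mounting and are assumptions.

```python
import cv2
import numpy as np

def birds_eye_view(frame, src_pts, out_size=(400, 600)):
    """Warp the forward road view to a top-down (bird's eye) view, which makes
    lane position and curvature easier to use in lateral control."""
    w, h = out_size
    dst_pts = np.float32([[0, h], [w, h], [w, 0], [0, 0]])
    M = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    return cv2.warpPerspective(frame, M, (w, h))

frame = np.zeros((480, 640, 3), dtype=np.uint8)
# assumed road corners: bottom-left, bottom-right, top-right, top-left
src = [[120, 460], [520, 460], [400, 280], [240, 280]]
top_down = birds_eye_view(frame, src)
print(top_down.shape)   # (600, 400, 3)
```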

A Study on Improving License Plate Recognition Performance Using Super-Resolution Techniques

  • Kyeongseok JANG;Kwangchul SON
    • Korean Journal of Artificial Intelligence / v.12 no.3 / pp.1-7 / 2024
  • In this paper, we propose an innovative super-resolution technique to address the reduced accuracy of license plate recognition caused by low-resolution images. Conventional vehicle license plate recognition systems have relied on images obtained from fixed surveillance cameras for traffic detection to perform vehicle detection, tracking, and license plate recognition. During this process, image quality degrades due to the physical distance between the camera and the vehicle, vehicle movement, and external environmental factors such as weather and lighting conditions. In particular, the acquisition of low-resolution images due to camera performance limitations has been a major cause of significantly reduced accuracy in license plate recognition. To solve this problem, we propose a Single Image Super-Resolution (SISR) model with a parallel structure that combines multi-scale feature extraction and an attention mechanism. The model effectively extracts features at various scales and focuses on important areas: it generates feature maps of various sizes through a multi-branch structure and emphasizes the key features of license plates using the attention mechanism. Experimental results show that the proposed model achieves significantly improved recognition accuracy compared to existing vehicle license plate super-resolution methods using bicubic interpolation.
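
A minimal sketch, in PyTorch, of a parallel multi-scale block with channel attention in the spirit of the model described above; the layer sizes, branch kernels, and attention form are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiScaleAttentionBlock(nn.Module):
    """Parallel branches with different receptive fields, fused and reweighted
    by channel attention, followed by a residual connection."""
    def __init__(self, channels=32):
        super().__init__()
        # parallel branches with different receptive fields (multi-scale features)
        self.branch3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        # channel attention: emphasise feature maps that matter for plate regions
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        feats = torch.cat([self.branch3(x), self.branch5(x)], dim=1)
        fused = self.fuse(feats)
        return x + fused * self.attn(fused)     # residual connection

x = torch.randn(1, 32, 24, 48)               # e.g. a low-resolution plate feature map
print(MultiScaleAttentionBlock()(x).shape)   # torch.Size([1, 32, 24, 48])
```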