• Title/Summary/Keyword: Intelligent vehicle vision system

Search results: 64

A Design of Color-identifying Multi Vehicle Controller for Material Delivery Using Adaptive Fuzzy Controller (적응 퍼지제어기를 이용한 컬러식별 Multi Vehicle의 물류이송을 위한 다중제어기 설계)

  • Kim, Hun-Mo
    • Journal of the Korean Society for Precision Engineering / v.18 no.5 / pp.42-49 / 2001
  • In this paper, we present a collaborative method for material delivery using a distributed vehicle-agent system. AGV (Autonomous Guided Vehicle) systems commonly used in FA (Factory Automation) require special facilities such as guide paths and landmarks and therefore have numerous limitations when applied in different environments. Moreover, when multiple vehicles are controlled, the need for cooperative abilities such as loading and unloading materials between vehicles of different types is increasing for the automation of material flow. Thus, to compensate for and improve the functions of AGVs, it is important to endow vehicles with the intelligence to recognize environments and goods and to determine the goal point to approach. In this study, we propose an interaction method between heterogeneous vehicles together with adaptive fuzzy logic controllers for sensor-based path planning and a color-based material identification method. To carry materials to the goal, a simple color sensor is used instead of an intricate vision system to find the material and recognize its color, which determines the goal point to transfer it to. The proposed method is demonstrated by experiment.
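
A minimal sketch of the color-to-goal idea described in this abstract, not the paper's adaptive fuzzy controller: a raw RGB reading from a simple color sensor is matched to a delivery goal with triangular fuzzy membership on hue. The goal table, hue centers, and membership width are assumptions for illustration.

```python
# Hypothetical goal table and membership width; not taken from the paper.
import colorsys

GOALS = {"red_bin": 0.0, "green_bin": 120.0, "blue_bin": 240.0}  # hue centers (deg)
HALF_WIDTH = 60.0  # assumed support half-width of the triangular membership

def hue_of(rgb):
    r, g, b = (c / 255.0 for c in rgb)
    h, _, _ = colorsys.rgb_to_hsv(r, g, b)
    return h * 360.0

def membership(hue, center, half_width=HALF_WIDTH):
    # Triangular membership, wrapped on the 0-360 degree hue circle.
    d = min(abs(hue - center), 360.0 - abs(hue - center))
    return max(0.0, 1.0 - d / half_width)

def choose_goal(rgb_reading):
    # Pick the goal whose color class the sensor reading belongs to most strongly.
    hue = hue_of(rgb_reading)
    scores = {goal: membership(hue, c) for goal, c in GOALS.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

if __name__ == "__main__":
    print(choose_goal((200, 30, 40)))   # -> ('red_bin', ~0.94)
```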

An Estimation Methodology of Empirical Flow-density Diagram Using Vision Sensor-based Probe Vehicles' Time Headway Data (개별 차량의 비전 센서 기반 차두 시간 데이터를 활용한 경험적 교통류 모형 추정 방법론)

  • Kim, Dong Min;Shim, Jisup
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.21 no.2 / pp.17-32 / 2022
  • This study explored an approach to estimating a flow-density diagram (FD) on a highway link by utilizing probe vehicles' time headway records. To study the empirical flow-density diagram (EFD), probe vehicles with vision sensors were recruited to collect driving records for nine months, and vision-sensor data pre-processing and GIS-based map matching were implemented. The new EFDs were then evaluated for validity against reference diagrams derived from loop-detector traffic data, using the probability distributions of time headway and distance headway as well as the standard deviations of flow and density. The results indicated that the main sources of estimation error are the limited number of probe vehicles and the bias of traffic-flow states. We finally suggest a method to improve the accuracy of the EFD model.
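
A minimal sketch of the basic conversion that underlies an empirical flow-density diagram from headway data (the paper's full methodology, including map matching and validation, is more involved): flow is the reciprocal of time headway and density is the reciprocal of distance headway, with unit conversions to veh/h and veh/km. Array names are assumptions.

```python
import numpy as np

def empirical_fd_points(time_headway_s, distance_headway_m):
    """time_headway_s, distance_headway_m: 1-D arrays of per-vehicle headways."""
    th = np.asarray(time_headway_s, dtype=float)
    dh = np.asarray(distance_headway_m, dtype=float)
    valid = (th > 0) & (dh > 0)
    flow_veh_per_h = 3600.0 / th[valid]      # q = 1 / time headway, in veh/h
    density_veh_per_km = 1000.0 / dh[valid]  # k = 1 / distance headway, in veh/km
    return flow_veh_per_h, density_veh_per_km

if __name__ == "__main__":
    q, k = empirical_fd_points([2.0, 1.5, 3.0], [40.0, 30.0, 70.0])
    print(np.c_[k, q])  # density-flow pairs for the scatter diagram
```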

Fuzzy Algorithm Development for the Integration of Vehicle Simulator with All Terrain Unmanned Vehicle (험로 주행용 무인차량과 차량 시뮬레이터의 융합을 위한 퍼지 알고리즘 개발)

  • Yun, Duk-Sun;Yu, Hwan-Sin;Lim, Ha-Young
    • Journal of Intelligence and Information Systems / v.11 no.2 / pp.47-57 / 2005
  • The main theme of this research is the system integration of a driving simulator and an unmanned vehicle. The total system is composed of a master system and a slave system. The master system consists of a cockpit and the driving simulator, while the slave system is the unmanned vehicle, composed of an actuator system, a sensory system, and a vision system. The two systems are connected through an RS-232C serial communication link. To integrate them, a DSP (Digital Signal Processing) filter that accounts for signal classification and system characteristics is designed based on signal sampling and measurement theory. In addition, to reproduce the motion of the tele-operated unmanned vehicle on the driving simulator, the classical washout algorithm is applied to this filter, because the unmanned vehicle has an unlimited working space whereas the driving simulator has a narrow one and cannot cover all of the unmanned vehicle's motion. Since the classical washout algorithm suffers from a fixed high-pass filter, fuzzy logic is applied to compensate for it through an adaptive filter and scale factor, producing realistic motion on the driving simulator.
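
A minimal sketch of the classical washout idea named above, not the authors' implementation: a first-order high-pass filter keeps acceleration onset cues while washing out sustained motion, and a crude magnitude-dependent scale factor stands in for the paper's fuzzy adaptation. Cutoff frequency and rule breakpoints are assumed values.

```python
import numpy as np

def washout(accel, dt=0.01, cutoff_hz=0.5):
    """First-order high-pass filter: keeps onset cues, washes out sustained motion."""
    rc = 1.0 / (2.0 * np.pi * cutoff_hz)
    alpha = rc / (rc + dt)
    y = np.zeros_like(accel, dtype=float)
    for i in range(1, len(accel)):
        y[i] = alpha * (y[i - 1] + accel[i] - accel[i - 1])
    return y

def adaptive_scale(a, small=0.5, large=3.0):
    """Toy stand-in for the fuzzy scale factor: shrink large cues to respect the
    simulator's limited workspace, pass small cues nearly unchanged."""
    mag = abs(a)
    if mag <= small:
        return 1.0
    if mag >= large:
        return 0.3
    # interpolate between the two rule outputs
    return 1.0 + (0.3 - 1.0) * (mag - small) / (large - small)

if __name__ == "__main__":
    t = np.arange(0, 5, 0.01)
    accel = np.where(t > 1.0, 2.0, 0.0)            # step input from the vehicle
    cue = washout(accel)
    cue = np.array([adaptive_scale(a) * a for a in cue])
    print(cue[95:105])                              # onset cue around the step
```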

Autonomous Traveling of Unmanned Golf-Car using GPS and Vision system (GPS와 비전시스템을 이용한 무인 골프카의 자율주행)

  • Jung, Byeong Mook;Yeo, In-Joo;Cho, Che-Seung
    • Journal of the Korean Society for Precision Engineering / v.26 no.6 / pp.74-80 / 2009
  • Path tracking of an unmanned vehicle is the basis of autonomous driving and navigation. For path tracking, it is very important to determine the exact position of the vehicle. GPS is used to obtain the vehicle's position, and a direction sensor and a velocity sensor are used to compensate for the GPS position error. To detect lane lines in a road image, the bird's-eye-view transform is employed, which makes it simpler to design a lateral control algorithm than working from the perspective view of the image. Because the driving speed should be reduced on curved lanes and at crossroads, we also propose a speed control algorithm that uses GPS and image data. The control algorithms are simulated and tested on the basis of expert drivers' knowledge. In the experiments, the results show that the bird's-eye-view transform works well for steering control and that the speed control algorithm is also stable in real driving.
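
A minimal sketch of the bird's-eye-view step mentioned above, with assumed corner coordinates: four points outlining the road trapezoid in the camera image are mapped to a rectangle, so lane lines become near-vertical and lateral offset is easy to read off.

```python
import cv2
import numpy as np

def birds_eye_view(frame, src_pts, out_size=(400, 600)):
    """src_pts: 4 image points (bottom-left, bottom-right, top-right, top-left)
    outlining the road region; out_size: (width, height) of the top-down map."""
    w, h = out_size
    dst_pts = np.float32([[0, h], [w, h], [w, 0], [0, 0]])
    M = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    return cv2.warpPerspective(frame, M, (w, h))

if __name__ == "__main__":
    img = np.zeros((480, 640, 3), dtype=np.uint8)           # stand-in road frame
    src = [(100, 470), (540, 470), (400, 300), (240, 300)]  # assumed trapezoid
    top_down = birds_eye_view(img, src)
    print(top_down.shape)  # (600, 400, 3)
```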

Intelligent Drowsiness Drive Warning System (지능형 졸음 운전 경고 시스템)

  • Joo, Young-Hoon;Kim, Jin-Kyu;Ra, In-Ho
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.2 / pp.223-229 / 2008
  • In this paper, we propose a real-time vision system that judges drowsy driving based on the driver's level of fatigue. The proposed system aims to prevent traffic accidents by warning of drowsiness and carelessness using face-image analysis and a fuzzy logic algorithm. For real-time face detection, the face position and eye regions are located with a fuzzy skin filter and a virtual face model, and from this information the eye blinking frequency and eye closure duration are measured. We then propose a fuzzy-logic-based method that estimates the driver's fatigue level from the measured data and decides whether the driver is drowsy. Finally, we show the effectiveness and feasibility of the proposed method through experiments.
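
A minimal sketch of the two fatigue indicators named in this abstract, computed from a per-frame "eye closed" flag; the thresholds are assumptions, and the paper's fuzzy inference is replaced here by a simple rule purely for illustration.

```python
def fatigue_indicators(eye_closed, fps=30):
    """eye_closed: list of booleans, one per frame."""
    n = len(eye_closed)
    closure_ratio = sum(eye_closed) / n                  # eye-closure duration ratio
    blinks = sum(1 for prev, cur in zip(eye_closed, eye_closed[1:])
                 if cur and not prev)                    # closed-onset count
    blink_rate_per_min = blinks * 60.0 * fps / n
    return closure_ratio, blink_rate_per_min

def is_drowsy(closure_ratio, blink_rate, ratio_th=0.3, rate_th=25):
    # Assumed thresholds standing in for the paper's fuzzy rule base.
    return closure_ratio > ratio_th or blink_rate > rate_th

if __name__ == "__main__":
    frames = [False] * 60 + [True] * 30 + [False] * 60   # 5 s of frames at 30 fps
    ratio, rate = fatigue_indicators(frames)
    print(ratio, rate, is_drowsy(ratio, rate))
```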

Parking Lot Vehicle Counting Using a Deep Convolutional Neural Network (Deep Convolutional Neural Network를 이용한 주차장 차량 계수 시스템)

  • Lim, Kuoy Suong;Kwon, Jang woo
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.17 no.5 / pp.173-187 / 2018
  • This paper proposes a computer vision and deep learning based technique for a surveillance-camera vehicle counting system as one part of a parking lot management system. We applied the You Only Look Once version 2 (YOLOv2) detector and developed a deep convolutional neural network (CNN) based on YOLOv2 with a different architecture and two models. The effectiveness of the proposed architecture is illustrated using the publicly available Udacity self-driving-car dataset. After training and testing, our proposed architecture with the new models obtains 64.30% mean average precision, a better performance than the original YOLOv2 architecture, which achieved only 47.89% mean average precision on the detection of cars, trucks, and pedestrians.
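
A minimal sketch of the counting step only, not the authors' CNN: given per-frame detections as (class, confidence, box) tuples from any YOLO-style detector, vehicles present in a parking-lot frame are counted after a confidence threshold. The tuple format and threshold are assumptions.

```python
from collections import Counter

def count_vehicles(detections, conf_th=0.5, vehicle_classes=("car", "truck")):
    """detections: iterable of (class_name, confidence, (x, y, w, h)) tuples."""
    counts = Counter(cls for cls, conf, _ in detections
                     if conf >= conf_th and cls in vehicle_classes)
    return sum(counts.values()), dict(counts)

if __name__ == "__main__":
    dets = [("car", 0.91, (10, 20, 50, 40)),
            ("truck", 0.77, (200, 80, 90, 60)),
            ("pedestrian", 0.88, (400, 100, 20, 50)),
            ("car", 0.42, (300, 150, 48, 38))]   # below threshold, ignored
    print(count_vehicles(dets))                  # (2, {'car': 1, 'truck': 1})
```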

Development of Mask-RCNN Based Axle Control Violation Detection Method for Enforcement on Overload Trucks (과적 화물차 단속을 위한 Mask-RCNN기반 축조작 검지 기술 개발)

  • Park, Hyun suk;Cho, Yong sung;Kim, Young Nam;Kim, Jin pyung
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.21 no.5 / pp.57-66 / 2022
  • The Road Management Administration cracks down on overloaded vehicles by installing low-speed or high-speed WIM (weigh-in-motion) systems at toll gates and on the main lines of expressways. In recent years, however, drivers have increasingly evaded this overload enforcement system by illegally manipulating the variable axle of an overloaded truck: when entering the overload checkpoint, all axles are lowered so the vehicle passes normally, and once on the main road the variable axle is illegally lifted, pushing the axle load well beyond 10 tons. This study therefore developed a technology that detects the state of a truck's variable axle while it drives on the road, using roadside camera images. In particular, the technology forms a basis for enforcement against vehicles that lift the variable axle after passing the checkpoint, by linking the vehicle with the checkpoint's records. In this study, the tires of the vehicle were recognized using the Mask R-CNN algorithm, the recognized tires were virtually aligned for the images taken before and after the checkpoint, and the height difference of the vehicle was measured from this alignment to determine whether the variable axle was lifted after the vehicle left the checkpoint.
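
A minimal sketch of the decision step only, under assumed geometry (the Mask R-CNN tire segmentation is taken as given): with image rows growing downward, a tire whose bottom contact point sits clearly above the others is treated as a lifted variable axle. The pixel threshold is an assumption.

```python
def lifted_axles(tire_bottom_y, lift_threshold_px=15):
    """tire_bottom_y: bottom-edge rows of the detected tires, ordered front to
    rear along one side of the truck (image rows grow downward)."""
    ground = max(tire_bottom_y)              # tires on the road sit lowest
    return [i for i, y in enumerate(tire_bottom_y)
            if ground - y > lift_threshold_px]

if __name__ == "__main__":
    at_checkpoint = [512, 514, 511, 513]     # all axles down
    on_main_road = [512, 514, 480, 513]      # third axle raised
    print(lifted_axles(at_checkpoint))       # []
    print(lifted_axles(on_main_road))        # [2]
```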

Vision-Based High Accuracy Vehicle Positioning Technology (비전 기반 고정밀 차량 측위 기술)

  • Jo, Sang-Il;Lee, Jaesung
    • The Journal of Korean Institute of Communications and Information Sciences / v.41 no.12 / pp.1950-1958 / 2016
  • Today, techniques for precisely positioning vehicles are very important in C-ITS (Cooperative Intelligent Transport Systems), self-driving cars, and other transportation-related information technology. Although the most popular technology for vehicle positioning is GPS, its accuracy is unreliable because of the large delay errors caused by the multipath effect, which is problematic for real-time traffic applications. Therefore, in this paper, we propose a vision-based high-accuracy vehicle positioning technology. In the first step of the proposed algorithm, an ROI is set up for the road area and vehicle detection. Then, the center and four corner points of each detected vehicle on the road are determined. Lastly, these points are converted into an aerial-view map using a homography matrix. Performance analysis shows that this technique achieves high accuracy, with an average error of less than about 20 cm and a maximum error not exceeding 44.72 cm. In addition, the algorithm is confirmed to be fast enough for real-time positioning at 22-25 FPS.
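
A minimal sketch of the final projection step described above: the image points of a detected vehicle (center and four corners) are mapped onto the aerial-view map with a precomputed homography. The matrix below is an arbitrary placeholder, not a calibrated one from the paper.

```python
import cv2
import numpy as np

H = np.array([[0.05, 0.0, -10.0],       # placeholder homography (pixels -> map metres)
              [0.0, 0.12, -30.0],
              [0.0, 0.0005, 1.0]], dtype=np.float64)

def to_aerial_view(image_points, homography=H):
    """image_points: list of (u, v) pixel coordinates of the vehicle's center and
    four corner points; returns the corresponding map-plane coordinates."""
    pts = np.asarray(image_points, dtype=np.float32).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, homography).reshape(-1, 2)

if __name__ == "__main__":
    vehicle_pts = [(320, 400), (300, 380), (340, 380), (300, 420), (340, 420)]
    print(to_aerial_view(vehicle_pts))
```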

Information Fusion of Cameras and Laser Radars for Perception Systems of Autonomous Vehicles (영상 및 레이저레이더 정보융합을 통한 자율주행자동차의 주행환경인식 및 추적방법)

  • Lee, Minchae;Han, Jaehyun;Jang, Chulhoon;Sunwoo, Myoungho
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.1 / pp.35-45 / 2013
  • An autonomous vehicle requires more robust perception systems than the conventional perception systems of intelligent vehicles. In particular, single-sensor-based perception systems using cameras and laser radar sensors have been widely studied, since these are the most representative perception sensors, providing object information such as distance and object features. The distance information of the laser radar is used to perceive the road environment, including road structures, vehicles, and pedestrians, while the image information of the camera is used for visual recognition of lanes, crosswalks, and traffic signs. However, single-sensor-based perception systems suffer from false positives and false negatives caused by sensor limitations and road environments. Accordingly, information fusion is essential to ensure the robustness and stability of perception systems in harsh environments. This paper describes a perception system for autonomous vehicles that performs information fusion to recognize road environments; in particular, vision and laser radar sensors are fused to detect lanes, crosswalks, and obstacles. The proposed perception system was validated with an autonomous vehicle on various roads and under various environmental conditions.
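
A minimal sketch of one common camera/laser-radar fusion step, not the authors' full system: laser-radar points are projected into the image with an assumed camera projection matrix, and the points that fall inside a vision-detected bounding box give that object its range estimate.

```python
import numpy as np

P = np.array([[700.0, 0.0, 320.0, 0.0],    # placeholder 3x4 projection matrix
              [0.0, 700.0, 240.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

def fuse_range(box, lidar_points_xyz, projection=P):
    """box: (x1, y1, x2, y2) in pixels; lidar_points_xyz: Nx3 points in the
    camera frame (z forward). Returns the median range of points in the box."""
    pts = np.asarray(lidar_points_xyz, dtype=float)
    pts_h = np.c_[pts, np.ones(len(pts))]            # homogeneous coordinates
    uvw = pts_h @ projection.T
    u, v = uvw[:, 0] / uvw[:, 2], uvw[:, 1] / uvw[:, 2]
    x1, y1, x2, y2 = box
    inside = (u >= x1) & (u <= x2) & (v >= y1) & (v <= y2) & (pts[:, 2] > 0)
    return float(np.median(pts[inside, 2])) if inside.any() else None

if __name__ == "__main__":
    points = [(0.2, 0.1, 12.0), (0.3, 0.0, 12.2), (-5.0, 0.0, 30.0)]
    print(fuse_range((250, 200, 400, 300), points))  # ~12 m for the boxed object
```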

Virtual Contamination Lane Image and Video Generation Method for the Performance Evaluation of the Lane Departure Warning System (차선 이탈 경고 시스템의 성능 검증을 위한 가상의 오염 차선 이미지 및 비디오 생성 방법)

  • Kwak, Jae-Ho;Kim, Whoi-Yul
    • Transactions of the Korean Society of Automotive Engineers / v.24 no.6 / pp.627-634 / 2016
  • In this paper, an augmented video generation method for evaluating the performance of a lane departure warning system is proposed. The input of our system is a video of a road scene with ordinary clean lanes, and the output video has the same content but with the lanes synthesized with contamination. Two approaches are used to synthesize the contaminated lane image: example-based image synthesis, which assumes that contamination has been applied to the lane, and background-based image synthesis, which covers the case where the lane has been erased by aging. A new contamination pattern generation method using a Gaussian function is also proposed in order to produce contamination of various shapes and sizes. The contaminated lane video is generated by shifting the synthesized image according to the lane movement obtained empirically. Our experiments showed that the similarity between the generated contaminated lane image and a real contaminated lane image is over 90%. Furthermore, we verified the reliability of the video generated by the proposed method by analyzing the change in lane recognition rate; in other words, the recognition rate on the video generated by the proposed method is very similar to that on a real contaminated lane video.
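
A minimal sketch of the Gaussian contamination-pattern idea named in this abstract: a 2-D Gaussian blob with assumed center, spread, and strength is used as an alpha mask to blend a dirt color over a clean lane image. Shapes, colors, and parameter values are illustrative, not the paper's.

```python
import numpy as np

def gaussian_blob(h, w, center, sigma):
    """2-D Gaussian weight map peaking at center = (row, col)."""
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = center
    return np.exp(-(((xx - cx) ** 2) + ((yy - cy) ** 2)) / (2.0 * sigma ** 2))

def contaminate(lane_img, center, sigma=25.0, strength=0.8,
                dirt_color=(90, 80, 70)):
    """lane_img: HxWx3 uint8 clean-lane image; returns the synthesized image."""
    h, w = lane_img.shape[:2]
    alpha = strength * gaussian_blob(h, w, center, sigma)[..., None]
    dirt = np.array(dirt_color, dtype=float)            # broadcast over HxWx3
    out = (1.0 - alpha) * lane_img + alpha * dirt
    return out.astype(np.uint8)

if __name__ == "__main__":
    clean = np.full((240, 320, 3), 200, dtype=np.uint8)   # stand-in lane patch
    dirty = contaminate(clean, center=(120, 160))
    print(dirty[120, 160], dirty[0, 0])   # blended center vs untouched corner
```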