• Title/Summary/Keyword: road feature information


Traffic Sign Recognition, and Tracking Using RANSAC-Based Motion Estimation for Autonomous Vehicles (자율주행 차량을 위한 교통표지판 인식 및 RANSAC 기반의 모션예측을 통한 추적)

  • Kim, Seong-Uk;Lee, Joon-Woong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.2
    • /
    • pp.110-116
    • /
    • 2016
  • Autonomous vehicles must obey traffic laws in order to drive on actual roads. Traffic signs erected at the roadside convey road traffic information or regulations, so traffic sign recognition is necessary for autonomous vehicles. In this paper, color characteristics are first used to detect traffic sign candidates. Subsequently, we extract HOG (Histogram of Oriented Gradients) features from the detected candidates and recognize the traffic sign with an SVM (Support Vector Machine). However, owing to various circumstances, such as changes in weather and lighting, it is difficult to recognize traffic signs robustly using the SVM alone. To solve this problem, we propose a tracking algorithm with RANSAC-based motion estimation. Using two-point motion estimation, inlier feature points within the traffic sign are selected, and the optimal motion is then calculated from the inliers through bundle adjustment. This approach greatly enhances traffic sign recognition performance.
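
The detection-then-classification pipeline sketched in this abstract (HOG features over color-detected candidates, followed by a discriminative classifier) can be illustrated minimally as follows. This is not the authors' implementation: the toy HOG below uses a single orientation histogram over the whole patch instead of per-cell histograms with block normalization, and the classifier stage is omitted entirely.

```python
import numpy as np

def hog_descriptor(patch, n_bins=9):
    """Toy HOG: one unsigned-orientation histogram over the whole patch
    (real HOG uses per-cell histograms with block normalization)."""
    gy, gx = np.gradient(patch.astype(float))        # image gradients
    mag = np.hypot(gx, gy)                           # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0     # unsigned orientation
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, 180.0), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# Two synthetic "sign candidates" with distinct dominant edge directions
vertical_edges = np.tile(np.arange(16.0), (16, 1))   # intensity ramps along x
horizontal_edges = vertical_edges.T                  # intensity ramps along y
f_v = hog_descriptor(vertical_edges)
f_h = hog_descriptor(horizontal_edges)
dominant_v = int(np.argmax(f_v))                     # ~0 degrees -> bin 0
dominant_h = int(np.argmax(f_h))                     # ~90 degrees -> bin 4
```

The two patches produce descriptors peaking in different orientation bins, which is the separability an SVM would exploit on real sign candidates.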

Nearby Vehicle Detection in the Adjacent Lane using In-vehicle Front View Camera (차량용 전방 카메라를 이용한 근거리 옆 차선 차량 검출)

  • Baek, Yeul-Min;Lee, Gwang-Gook;Kim, Whoi-Yul
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.8
    • /
    • pp.996-1003
    • /
    • 2012
  • We present a method for detecting nearby vehicles in the adjacent lane using an in-vehicle front-view camera. Nearby vehicles in adjacent lanes show various appearances according to their positions relative to the host vehicle, so most conventional methods rely on motion information to detect them. However, such methods can only detect overtaking vehicles that are faster than the host vehicle. To solve this problem, we use features of the regions where nearby vehicles can appear. Consequently, our method can detect not only overtaking vehicles but also stationary and same-speed vehicles in adjacent lanes. In our experiments, we validated the method under various weather and road conditions and in a real-time implementation.

Building Large-scale CityGML Feature for Digital 3D Infrastructure (디지털 3D 인프라 구축을 위한 대규모 CityGML 객체 생성 방법)

  • Jang, Hanme;Kim, HyunJun;Kang, HyeYoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.39 no.3
    • /
    • pp.187-201
    • /
    • 2021
  • Recently, the demand for a 3D urban spatial information infrastructure for storing, operating, and analyzing the large volumes of digital data produced in cities is increasing. CityGML is a 3D spatial information data standard of the OGC (Open Geospatial Consortium) with strengths in the exchange and attribute expression of city data. Cases of constructing 3D urban spatial data in CityGML format have emerged in several cities, such as Singapore and New York. However, the current ecosystem for creating and editing CityGML data is poorly suited to large-scale construction because it lacks the completeness of commercial 3D modeling programs such as SketchUp or 3ds Max. Therefore, this study proposes a method of constructing CityGML data from commercial 3D mesh data and 2D polygons that are produced rapidly and automatically through aerial LiDAR (Light Detection and Ranging) or RGB (Red Green Blue) cameras. During data construction, the original 3D mesh data were geometrically transformed so that each object could be expressed at various CityGML LoDs (Levels of Detail), and attribute information extracted from the 2D spatial data was added to increase its utility as spatial information. The 3D city features produced in this study are the CityGML building, bridge, cityFurniture, road, and tunnel classes. Data conversion and attribute construction methods for each feature are presented, and visualization and validation were conducted.
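
The target schema can be illustrated with a minimal CityGML fragment assembled programmatically. The sketch below uses the standard CityGML 2.0 namespaces, but it is purely illustrative, not the paper's pipeline: the geometry is abbreviated to a bare gml:posList of the 2D footprint rather than a full extruded gml:Solid, and the `lod1_building` helper and its arguments are invented for this example.

```python
import xml.etree.ElementTree as ET

# Standard CityGML 2.0 / GML namespaces
CORE = "http://www.opengis.net/citygml/2.0"
BLDG = "http://www.opengis.net/citygml/building/2.0"
GML = "http://www.opengis.net/gml"

def lod1_building(building_id, footprint, height):
    """Assemble a skeletal CityGML LoD1 building from a 2D footprint and an
    extrusion height (geometry abbreviated; a sketch, not valid LoD1)."""
    model = ET.Element(f"{{{CORE}}}CityModel")
    member = ET.SubElement(model, f"{{{CORE}}}cityObjectMember")
    b = ET.SubElement(member, f"{{{BLDG}}}Building", {f"{{{GML}}}id": building_id})
    h = ET.SubElement(b, f"{{{BLDG}}}measuredHeight", uom="m")
    h.text = str(height)
    solid = ET.SubElement(b, f"{{{BLDG}}}lod1Solid")
    # Footprint serialized as a flat gml:posList (full solid omitted for brevity)
    pos = ET.SubElement(solid, f"{{{GML}}}posList")
    pos.text = " ".join(f"{x} {y}" for x, y in footprint)
    return model

model = lod1_building("B1", [(0, 0), (10, 0), (10, 10), (0, 10), (0, 0)], 12.5)
xml_text = ET.tostring(model, encoding="unicode")
```

A real converter would also emit the boundary surfaces required at higher LoDs and attach the attributes extracted from the 2D spatial data.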

Intelligent Hybrid Fusion Algorithm with Vision Patterns for Generation of Precise Digital Road Maps in Self-driving Vehicles

  • Jung, Juho;Park, Manbok;Cho, Kuk;Mun, Cheol;Ahn, Junho
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.10
    • /
    • pp.3955-3971
    • /
    • 2020
  • Due to the significant increase in the use of autonomous car technology, it is essential to integrate this technology with high-precision digital map data containing more precise and accurate roadway information than existing conventional map resources, to ensure the safety of self-driving operations. While existing map technologies may assist vehicles in identifying their locations via the Global Positioning System, it is difficult to keep environmental changes to roadways up to date in these maps. Roadway vision algorithms can be useful for building autonomous vehicles that avoid accidents and detect location changes in real time. We incorporate a hybrid architectural design that combines unsupervised classification of vision data with supervised joint fusion classification to achieve a more noise-resistant algorithm. Via a deep learning approach, we identify an intelligent hybrid fusion algorithm for fusing multimodal vision feature data for roadway classification, and we characterize its improvement in accuracy over unsupervised identification using image processing and supervised vision classifiers. We analyzed over 93,000 vision frames collected from a test vehicle on real roadways. The performance of the proposed hybrid fusion algorithm is successfully evaluated for the generation of roadway digital maps for autonomous vehicles, with a recall of 0.94, precision of 0.96, and accuracy of 0.92.
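
The two-stage design described in the abstract (unsupervised classification of vision features, whose output is fused into a supervised classifier) can be sketched on synthetic data. This is an assumption-laden illustration, not the paper's algorithm: a tiny 2-cluster k-means stands in for the unsupervised stage, and a nearest-class-mean rule stands in for the supervised fusion classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans2(X, iters=10):
    """Tiny 2-cluster k-means (stand-in for the unsupervised stage)."""
    centers = X[[0, -1]].astype(float)          # deterministic init
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for k in (0, 1):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(axis=0)
    return labels

# Synthetic "vision features" for two roadway classes
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

pseudo = kmeans2(X)                             # unsupervised pseudo-labels
Xf = np.column_stack([X, pseudo])               # fuse raw features + cluster id

# Supervised stage: nearest class mean on the fused features
means = np.array([Xf[y == c].mean(axis=0) for c in (0, 1)])
pred = np.linalg.norm(Xf[:, None] - means[None], axis=2).argmin(axis=1)
accuracy = (pred == y).mean()
```

The fused feature vector carries the unsupervised evidence into the supervised classifier, which is the noise-resistance idea the abstract attributes to the hybrid design.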

A Study on Application of GSIS for Transportation Planning and Analysis of Traffic Volume (GSIS를 이용한 교통계획과 교통량분석에 관한 연구)

  • Choi, Jae-Hwa;Park, Hee-Ju
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.1 no.1 s.1
    • /
    • pp.117-125
    • /
    • 1993
  • GSIS is a system that contains spatially referenced data that can be analyzed and converted to information for a specific set of purposes or applications. The key feature of a GSIS is the analysis of data to produce new information. The current emphasis in transportation is to implement GSIS in conjunction with real-time systems. The requirements for a transportation GSIS are very different from those of traditional GSIS software, which has been designed for environmental and natural resource applications; a transportation GSIS may need to include capabilities for traffic volume forecasting and pavement management. A regional transportation planning model is actually a set of models used to inventory and then forecast a region's population, employment, income, housing, and demand for automobile and transit travel. Data such as administrative boundaries, land use maps, road networks, and the locations of schools and offices with their populations are used in this paper. Many of these data are used for analyzing traffic volume, traffic demand, and the timing of road construction using GSIS.


Registration of Three-Dimensional Point Clouds Based on Quaternions Using Linear Features (선형을 이용한 쿼터니언 기반의 3차원 점군 데이터 등록)

  • Kim, Eui Myoung;Seo, Hong Deok
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.38 no.3
    • /
    • pp.175-185
    • /
    • 2020
  • Three-dimensional registration is the process of matching data, with or without a coordinate system, to a reference coordinate system; it is used in various fields such as absolute orientation in photogrammetry and data combination for producing precise road maps. Three-dimensional registration methods are divided into those using points and those using linear features. When points are used, it is difficult to find the same conjugate point in data with different spatial resolutions. Using linear features, on the other hand, has the advantage that three-dimensional registration is possible not only when the spatial resolutions differ but also with conjugate linear features in point cloud data that do not share the same start and end points. In this study, we proposed a method that first determines the three-dimensional rotation angle between two datasets using quaternions and then determines the scale and three-dimensional translation, performing three-dimensional registration with linear features. To verify the proposed method, three-dimensional registration was performed using linear features constructed in an indoor environment and linear features acquired through a terrestrial mobile mapping system in an outdoor environment. With the indoor data, the root mean square error was 0.001054 m when the scale was fixed and 0.000936 m when it was not. Three-dimensional transformation over a 500 m section of the outdoor data yielded a root mean square error of 0.09412 m when six linear features were used, satisfying the accuracy requirements for producing precision maps. In addition, an experiment varying the number of linear features showed that the root mean square error changed little once nine or more linear features were used, indicating that nine linear features are sufficient for high-precision 3D transformation.
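
The quaternion step (recovering the 3D rotation before scale and translation) has a well-known closed form for point correspondences, Horn's method, sketched below. Note this illustration uses conjugate points rather than the linear features the paper actually works with.

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix for a unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def estimate_rotation(A, B):
    """Horn's closed-form quaternion solution for the rotation taking
    point set A onto point set B (rows are corresponding points)."""
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    S = A.T @ B                       # cross-covariance
    N = np.array([
        [S[0,0]+S[1,1]+S[2,2], S[1,2]-S[2,1],        S[2,0]-S[0,2],        S[0,1]-S[1,0]],
        [S[1,2]-S[2,1],        S[0,0]-S[1,1]-S[2,2], S[0,1]+S[1,0],        S[0,2]+S[2,0]],
        [S[2,0]-S[0,2],        S[0,1]+S[1,0],       -S[0,0]+S[1,1]-S[2,2], S[1,2]+S[2,1]],
        [S[0,1]-S[1,0],        S[0,2]+S[2,0],        S[1,2]+S[2,1],       -S[0,0]-S[1,1]+S[2,2]],
    ])
    w, v = np.linalg.eigh(N)
    return quat_to_rot(v[:, -1])      # eigenvector of the largest eigenvalue

# Rotate a point set by a known quaternion and recover the rotation
theta = np.deg2rad(30)
q_true = np.array([np.cos(theta/2), 0.0, 0.0, np.sin(theta/2)])  # 30 deg about z
R_true = quat_to_rot(q_true)
A = np.random.default_rng(1).normal(size=(20, 3))
B = A @ R_true.T
R_est = estimate_rotation(A, B)
err = np.abs(R_est - R_true).max()
```

With the rotation fixed, scale and translation follow from the centroids and spreads of the two sets, mirroring the rotation-first ordering the study proposes.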

License Plate Recognition System based on Normal CCTV (일반 CCTV 기반 차량 번호판 인식 시스템)

  • Jang, Ji-Woong;Park, Goo-Man
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.8
    • /
    • pp.89-96
    • /
    • 2017
  • This paper proposes a vehicle detection and license plate recognition system for CCTV images captured on public roads. Because the system acquires images in a general road environment, the controlled conditions of existing vehicle entry/exit systems do not apply: the input images are distorted and their resolution is irregular. At the same time, the wide viewing angle of the input images increases the computational load and tends to lower plate recognition accuracy. We propose an improved method that detects and recognizes license plates without separate input control devices. Vehicles and license plates are detected using the HOG feature descriptor, and the characters inside the license plate are recognized using the k-NN algorithm. The experimental environment covered roads more than 45 m from the CCTV, experiments were carried out on entering vehicles whose license plates were visually identifiable, and the results demonstrate the effectiveness of the proposed method.
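
The k-NN character recognition stage can be sketched with synthetic glyphs. This is not the paper's implementation: the 5x5 templates and jittered training copies are invented for illustration, and the HOG-based detection stage is omitted.

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Plain k-NN: majority vote among the k nearest training samples."""
    d = np.linalg.norm(train_X - x, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    return int(np.bincount(votes).argmax())

# Tiny 5x5 glyphs standing in for segmented plate characters
ZERO = np.array([[1,1,1,1,1],
                 [1,0,0,0,1],
                 [1,0,0,0,1],
                 [1,0,0,0,1],
                 [1,1,1,1,1]], dtype=float)
ONE = np.array([[0,0,1,0,0]] * 5, dtype=float)

rng = np.random.default_rng(2)
train_X, train_y = [], []
for label, glyph in ((0, ZERO), (1, ONE)):
    for _ in range(10):                      # jittered copies as training data
        train_X.append(glyph.ravel() + rng.normal(0, 0.1, 25))
        train_y.append(label)
train_X, train_y = np.array(train_X), np.array(train_y)

pred_zero = knn_predict(train_X, train_y, ZERO.ravel())
pred_one = knn_predict(train_X, train_y, ONE.ravel())
```

In the actual system the inputs to this stage would be character patches segmented from the detected plate region rather than clean templates.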

A Study on Recognition of Moving Object Crowdedness Based on Ensemble Classifiers in a Sequence (혼합분류기 기반 영상내 움직이는 객체의 혼잡도 인식에 관한 연구)

  • An, Tae-Ki;Ahn, Seong-Je;Park, Kwang-Young;Park, Goo-Man
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.2A
    • /
    • pp.95-104
    • /
    • 2012
  • Pattern recognition with ensemble classifiers builds a strong classifier from many weak classifiers. In this paper, we use feature extraction to organize a strong classifier from static-camera sequences. Because the weak classifiers account for environmental factors, the resulting strong classifier overcomes environmental effects. The proposed method obtains binary foreground images by frame differencing, and boosting is used to train a crowdedness model and recognize crowdedness from the extracted features. The combination of weak classifiers yields a strong ensemble classifier that can make use of latent environmental features such as shadows and reflections. We tested the proposed system on the road and subway platform sequences included in the "AVSS 2007" dataset. The results show good accuracy and efficiency in complex environments.
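
The feature-extraction front end described above (frame differencing to obtain a binary foreground image, from which a crowdedness cue is computed) can be sketched as follows. This is a minimal sketch of the input side only; the boosted ensemble training is omitted, and the threshold and image sizes are invented for the example.

```python
import numpy as np

def foreground_mask(frame, background, thresh=25):
    """Binary foreground by frame differencing against a static background."""
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

def crowdedness_feature(mask):
    """Fraction of foreground pixels: a simple crowdedness cue that a
    boosted ensemble of weak classifiers could be trained on."""
    return float(mask.mean())

# Static background and one frame with a single moving "object"
bg = np.full((40, 40), 100, dtype=np.uint8)
frame = bg.copy()
frame[10:30, 10:30] = 200             # 20x20 bright region = foreground
mask = foreground_mask(frame, bg)
feat = crowdedness_feature(mask)      # 400 of 1600 pixels -> 0.25
```

A full system would compute several such features per frame and let boosting weight the weak classifiers built on them.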

Preceding Vehicle Detection and Tracking with Motion Estimation by Radar-vision Sensor Fusion (레이더와 비전센서 융합기반의 움직임추정을 이용한 전방차량 검출 및 추적)

  • Jang, Jaehwan;Kim, Gyeonghwan
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.49 no.12
    • /
    • pp.265-274
    • /
    • 2012
  • In this paper, we propose a method for preceding vehicle detection and tracking with motion estimation based on radar-vision sensor fusion. The proposed motion estimation not only corrects the inaccurate lateral position observed on a radar target, but also enables adaptive detection and tracking of a preceding vehicle by compensating for changes in the geometric relation between the ego-vehicle and the ground during driving. Furthermore, the feature-based motion estimation, employed to lessen the computational burden, reduces the number of invocations of the vehicle validation procedure. Experimental results show that the correction by the proposed motion estimation improves vehicle detection performance and yields accurate tracking with high temporal consistency under various road conditions.

Vehicle Speed Measurement using SAD Algorithm (SAD 알고리즘을 이용한 차량 속도 측정)

  • Park, Seong-Il;Moon, Jong-Dae;Ko, Young-Hyuk
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.14 no.5
    • /
    • pp.73-79
    • /
    • 2014
  • In this paper, we propose a mechanism that measures traffic flow and vehicle speed on highways as well as ordinary roads by using video and image processing to detect and track cars in a video sequence. The proposed mechanism uses the first few frames of the video stream to estimate the background image. The visual tracking system is a simple algorithm based on the sum of absolute frame differences: it subtracts the background from each video frame to produce foreground images. By thresholding and performing morphological closing on each foreground image, the mechanism produces binary feature images, which are shown in the threshold window. By detecting when a vehicle passes the "first white line" and "second white line" marks, whose separation is known, the car's position can be found. Average velocity is the change in position of an object divided by the time over which the change takes place. The results of the proposed mechanism agree well with measured data and can be viewed in real time.
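
The SAD matching at the heart of such a tracker can be sketched as follows. The block sizes, pixel scale (0.05 m/pixel), and frame rate below are invented for illustration, and the background estimation and morphological steps of the proposed mechanism are omitted.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def track_column(frame, template):
    """Column where the template best matches the frame (minimum SAD)."""
    h, w = template.shape
    scores = [sad(frame[:h, c:c + w], template)
              for c in range(frame.shape[1] - w + 1)]
    return int(np.argmin(scores))

# A bright "vehicle" block moving 12 pixels between two frames
f1 = np.zeros((8, 64), dtype=np.uint8); f1[:, 5:13] = 255
f2 = np.zeros((8, 64), dtype=np.uint8); f2[:, 17:25] = 255
template = f1[:, 5:13]                 # vehicle appearance in frame 1

dx = track_column(f2, template) - 5    # displacement in pixels
# Speed = displacement / elapsed time, assuming 0.05 m/pixel at 30 fps
speed = dx * 0.05 / (1.0 / 30.0)       # metres per second
```

This mirrors the paper's velocity definition: a known physical distance (here the pixel scale; there the white-line separation) divided by the elapsed time.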