• Title/Summary/Keyword: road environment detection


Real Time Pothole Detection System based on Video Data for Automatic Maintenance of Road Surface Distress (도로의 파손 상태를 자동관리하기 위한 동영상 기반 실시간 포트홀 탐지 시스템)

  • Jo, Youngtae; Ryu, Seungki
    • KIISE Transactions on Computing Practices, v.22 no.1, pp.8-19, 2016
  • Potholes are caused by the presence of water in the underlying soil structure, which weakens the road pavement through the expansion and contraction of water at freezing and thawing temperatures. Recently, automatic pothole detection systems such as vibration-based and laser-scanning methods have been studied. However, vibration-based methods have low detection accuracy and a limited detection area, while the cost of laser-scanning methods is significantly high. Thus, in this paper, we propose a new pothole detection system using a commercial black-box camera. Because the computing power of a commercial black-box camera is limited, the pothole detection algorithm must be designed to work within the camera's embedded computing environment. The designed algorithm has been tested by implementing it in a black-box camera, and the experimental results are analyzed with specific evaluation metrics such as sensitivity and precision. Our studies confirm that the proposed system can gather pothole information in real time.
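
The abstract evaluates detection with sensitivity and precision; as a quick reference, here is a minimal Python sketch of how those two metrics are computed from raw detection counts (the counts below are invented for illustration):

```python
# Hypothetical evaluation of a pothole detector against ground truth, using
# the two metrics named in the abstract: sensitivity (recall) and precision.

def evaluate(true_positives: int, false_positives: int, false_negatives: int):
    """Return (sensitivity, precision) from raw detection counts."""
    sensitivity = true_positives / (true_positives + false_negatives)
    precision = true_positives / (true_positives + false_positives)
    return sensitivity, precision

# Example: 42 potholes correctly detected, 5 spurious detections, 8 missed.
sens, prec = evaluate(42, 5, 8)
print(f"sensitivity={sens:.2f}, precision={prec:.2f}")  # sensitivity=0.84, precision=0.89
```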

Curb Detection and Following in Various Environments by Adjusting Tilt Angle of a Laser Scanner (레이저 스캐너의 틸트 각도 조절을 통한 다양한 환경에서의 연석 탐지 및 추종)

  • Lee, Dong-Wook; Lee, Yong-Ju; Song, Jae-Bok; Baek, Joo-Hyun; Ryu, Jae-Kwan
    • Journal of Institute of Control, Robotics and Systems, v.16 no.11, pp.1068-1073, 2010
  • When a robot navigates in an outdoor environment, a curb or a sidewalk separated from the road can be used as a robust feature. However, most existing algorithms can detect the curb only on straight roads and fail at sharply curved corners, ramps, and so on. This paper proposes an algorithm that enables the robot to detect and follow curbs on various types of roads. In the proposed method, the robot tilts a laser scanner and computes the error between the predicted and measured distances to the road in front of the robot. Based on this error, curbs at corners and curves can be classified. A curb near a ramp is also difficult to detect because of its low height; in this case, the robot again tilts the laser scanner to detect the curb beyond the ramp. Once the robot classifies the road as a curve, corner, or ramp, it selects the proper navigation strategy for that road type and is able to keep detecting and following the curb. A series of experiments shows that the robot can stably detect and follow the curb in curves, corners, and ramps as well as on straight roads.
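
A toy sketch of the predicted-versus-measured range idea the abstract describes, assuming a scanner at a known height tilted downward toward flat ground; the threshold and class labels are invented, not the paper's actual classifier:

```python
import math

def predicted_ground_range(sensor_height_m: float, tilt_down_rad: float) -> float:
    """Range at which a downward-tilted scanner ray should hit flat ground."""
    return sensor_height_m / math.sin(tilt_down_rad)

def classify_road(measured_range_m: float, sensor_height_m: float,
                  tilt_down_rad: float, threshold_m: float = 0.3) -> str:
    """Toy classification based on the predicted-vs-measured range error."""
    error = measured_range_m - predicted_ground_range(sensor_height_m, tilt_down_rad)
    if abs(error) < threshold_m:
        return "straight"          # ray hits the road where expected
    elif error < 0:
        return "corner_or_curve"   # ray returns early: road surface bends away
    else:
        return "ramp_or_dropoff"   # ray returns late: surface falls away

# Scanner 0.5 m above ground, tilted 15 degrees down -> expects ground at ~1.93 m.
print(classify_road(1.9, 0.5, math.radians(15)))   # straight
print(classify_road(1.2, 0.5, math.radians(15)))   # corner_or_curve
```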

Novel VO and HO Map for Vertical Obstacle Detection in Driving Environment (새로운 VO, HO 지도를 이용한 차량 주행환경의 수직 장애물 추출)

  • Baek, Seung-Hae; Park, Soon-Yong
    • Journal of the Institute of Electronics and Information Engineers, v.50 no.2, pp.163-173, 2013
  • We present a new computer vision technique that detects unexpected or static vertical objects in a road driving environment. We first obtain temporal and spatial difference images in each frame of a stereo video sequence. Using the difference images, we then generate VO and HO maps, which improve on the conventional V- and H-disparity maps. From the VO and HO maps, candidate areas of vertical obstacles on the road are detected. Finally, the candidate areas are merged and refined to detect vertical obstacles.
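
The VO/HO maps are described as improvements on the conventional V- and H-disparity maps; for orientation, here is a minimal NumPy sketch of the conventional V-disparity construction (not the authors' VO variant), where a vertical obstacle shows up as a vertical ridge:

```python
import numpy as np

def v_disparity(disparity: np.ndarray, max_disp: int) -> np.ndarray:
    """Accumulate a V-disparity histogram: one row per image row, one column
    per disparity value. Vertical obstacles appear as vertical segments;
    the flat road appears as a slanted line."""
    rows, _ = disparity.shape
    v_map = np.zeros((rows, max_disp), dtype=np.int32)
    for r in range(rows):
        valid = disparity[r][(disparity[r] >= 0) & (disparity[r] < max_disp)]
        hist, _ = np.histogram(valid, bins=max_disp, range=(0, max_disp))
        v_map[r] = hist
    return v_map

# Synthetic 4x6 disparity image: the last two columns simulate a near obstacle
# (constant disparity 5), the rest a receding road surface.
disp = np.array([[1, 1, 1, 1, 5, 5],
                 [2, 2, 2, 2, 5, 5],
                 [3, 3, 3, 3, 5, 5],
                 [4, 4, 4, 4, 5, 5]])
print(v_disparity(disp, 8))  # column 5 is constant across rows: the obstacle
```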

Development of Patrol Robot using DGPS and Curb Detection (DGPS와 연석추출을 이용한 순찰용 로봇의 개발)

  • Kim, Seung-Hun; Kim, Moon-June; Kang, Sung-Chul; Hong, Suk-Kyo; Roh, Chi-Won
    • Journal of Institute of Control, Robotics and Systems, v.13 no.2, pp.140-146, 2007
  • This paper demonstrates the development of a mobile robot for patrol. We fuse differential GPS, angle sensor, and odometry data within an extended Kalman filter framework to localize the mobile robot in outdoor environments. An important feature of road environments is the existence of curbs, so we also propose an algorithm that finds the position of curbs from laser range finder data using the Hough transform. The mobile robot builds a map of road curbs, and the map is used for tracking and localization. The patrol system consists of a mobile robot and a control station: the robot sends camera images to the control station, which receives and displays them. The system can be used in two modes, teleoperated or autonomous. In teleoperated mode, the operator commands the robot based on the image data; in autonomous mode, the robot autonomously tracks predefined waypoints, for which we designed a path tracking controller. Road experiments confirm that the proposed algorithms perform properly in outdoor environments.
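
A minimal NumPy sketch of extracting a dominant line (such as a curb edge) from 2D laser points with a Hough transform, in the spirit of the abstract; the resolution parameters and the synthetic scan are invented:

```python
import numpy as np

def hough_dominant_line(points, rho_res=0.05, theta_bins=180):
    """Vote every 2D laser point into (theta, rho) accumulator cells and
    return the strongest cell -- the dominant line, e.g. a curb edge."""
    thetas = np.linspace(0.0, np.pi, theta_bins, endpoint=False)
    # rho = x*cos(theta) + y*sin(theta) for every point/angle pair.
    rhos = points[:, 0, None] * np.cos(thetas) + points[:, 1, None] * np.sin(thetas)
    rho_max = np.abs(rhos).max()
    acc = np.zeros((theta_bins, int(2 * rho_max / rho_res) + 1), dtype=np.int32)
    idx = ((rhos + rho_max) / rho_res).astype(int)
    for t in range(theta_bins):
        np.add.at(acc[t], idx[:, t], 1)
    t_best, r_best = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[t_best], r_best * rho_res - rho_max

# Synthetic scan: 50 points along x = 1.0 m (a curb parallel to the robot).
pts = np.column_stack([np.full(50, 1.0), np.linspace(-2.0, 2.0, 50)])
theta, rho = hough_dominant_line(pts)
print(f"theta={np.degrees(theta):.0f} deg, rho={rho:.2f} m")  # ~0 deg, ~1 m
```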

Development of Autonomous Navigation Robot in Outdoor Road Environments (실외 도로 환경에서의 자율주행 로봇 개발)

  • Roh, Chi-Won; Kang, Yeon-Sik; Kang, Sung-Chul
    • Journal of Institute of Control, Robotics and Systems, v.15 no.3, pp.293-299, 2009
  • This paper discusses an autonomous navigation system for urban environments. For localization of the robot, an EKF (Extended Kalman Filter) algorithm is used with odometry, angle sensor, and DGPS (Differential Global Positioning System) measurements. In urban environments in particular, DGPS is often blocked by buildings and trees, and the resulting inaccurate positioning prevents safe and reliable navigation. In addition to the global information from DGPS, local information about the curb on the roadway is used to track a route when the DGPS information is inaccurate; for this purpose, a curb detection algorithm is developed and integrated into the navigation algorithm. Four different navigation strategies are developed, and the system switches among them according to the availability of DGPS and the existence of curbs on the roadway. The experimental results show that the designed switching strategy improves navigation performance by adapting to the environmental conditions.
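
The abstract does not name the four strategies, so the following sketch is purely illustrative: a small dispatcher that switches among hypothetical modes based on DGPS availability and curb visibility:

```python
from enum import Enum, auto

class Strategy(Enum):
    DGPS_WITH_CURB = auto()  # both cues available: fuse them
    DGPS_WAYPOINT = auto()   # good global fix: track waypoints directly
    CURB_FOLLOWING = auto()  # no fix, curb visible: follow the curb
    DEAD_RECKONING = auto()  # neither cue: fall back to odometry only

def select_strategy(dgps_ok: bool, curb_visible: bool) -> Strategy:
    """Toy dispatcher mirroring the four-way switching idea in the abstract."""
    if dgps_ok and curb_visible:
        return Strategy.DGPS_WITH_CURB
    if dgps_ok:
        return Strategy.DGPS_WAYPOINT
    if curb_visible:
        return Strategy.CURB_FOLLOWING
    return Strategy.DEAD_RECKONING

print(select_strategy(dgps_ok=False, curb_visible=True))  # Strategy.CURB_FOLLOWING
```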

Construction of Database for Deep Learning-based Occlusion Area Detection in the Virtual Environment (가상 환경에서의 딥러닝 기반 폐색영역 검출을 위한 데이터베이스 구축)

  • Kim, Kyeong Su; Lee, Jae In; Gwak, Seok Woo; Kang, Won Yul; Shin, Dae Young; Hwang, Sung Ho
    • Journal of Drive and Control, v.19 no.3, pp.9-15, 2022
  • This paper proposes a method for constructing and verifying datasets used in deep learning, to prevent safety accidents involving automated construction machinery or autonomous vehicles. Because open datasets for developing image recognition technology often fail to meet the requirements desired by users, this study proposes a virtual-simulator interface that facilitates the creation of custom training datasets. The pixel-level training image dataset was verified by creating scenarios that include various road types and objects in a virtual environment. When detecting an object from an image, occlusion areas covered by another object may interfere with accurate path determination. We therefore construct a database for developing an occlusion-area detection algorithm in a virtual environment, and we present the possibility of using it as a deep learning dataset for calculating a grid map that enables path search considering occlusion areas. The custom datasets are built on an RDBMS.
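
The paper states only that the custom datasets are built on an RDBMS; the schema below is a hypothetical illustration (invented table and column names) of how simulator-generated samples with occlusion flags might be stored, using SQLite:

```python
import sqlite3

# Hypothetical schema for a simulator-generated training set; the paper does
# not specify its actual schema, so everything here is illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE samples (
        id         INTEGER PRIMARY KEY,
        scenario   TEXT NOT NULL,      -- e.g. 'paved_road', 'gravel_site'
        image_path TEXT NOT NULL,      -- rendered RGB frame
        label_path TEXT NOT NULL,      -- pixel-level segmentation mask
        occluded   INTEGER NOT NULL    -- 1 if the frame contains occlusion areas
    )""")
conn.execute("INSERT INTO samples VALUES (1, 'paved_road', 'img/0001.png', 'lbl/0001.png', 1)")

# Pull only the occlusion-bearing samples, e.g. for the detection algorithm.
rows = conn.execute("SELECT image_path, label_path FROM samples WHERE occluded = 1").fetchall()
print(rows)  # [('img/0001.png', 'lbl/0001.png')]
```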

Vehicle Classification and Tracking based on Deep Learning (딥러닝 기반의 자동차 분류 및 추적 알고리즘)

  • Hyochang Ahn; Yong-Hwan Lee
    • Journal of the Semiconductor & Display Technology, v.22 no.3, pp.161-165, 2023
  • One of the difficult tasks in an autonomous driving system is detecting road lanes or objects within the road boundaries. Detecting and tracking vehicles plays an important role in advanced driver assistance systems, providing information such as road traffic conditions and crime situations. This paper proposes a deep-learning-based vehicle detection scheme for classifying and tracking vehicles in complex and diverse environments. We use a modified YOLO as the object detector and polynomial regression as the object tracker in driving video. The experimental results show that the YOLO-based deep learning model performs fast, accurate, and robust vehicle tracking in various environments compared to the traditional method.
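
A minimal sketch of a polynomial-regression tracker in the spirit of the abstract: fit a low-order polynomial to recent per-frame detections and extrapolate the next position. The detector outputs here are invented:

```python
import numpy as np

# Hypothetical per-frame vehicle centers (pixels) from a YOLO-style detector.
frames = np.arange(8)
xs = np.array([100, 112, 125, 139, 154, 170, 187, 205], dtype=float)

# Fit a 2nd-order polynomial to the recent trajectory and extrapolate one
# frame ahead -- a simple stand-in for the paper's polynomial-regression tracker.
coeffs = np.polyfit(frames, xs, deg=2)
x_next = np.polyval(coeffs, 8)
print(f"predicted x at frame 8: {x_next:.1f} px")  # 224.0 for this toy track
```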


Vision and Lidar Sensor Fusion for VRU Classification and Tracking in the Urban Environment (카메라-라이다 센서 융합을 통한 VRU 분류 및 추적 알고리즘 개발)

  • Kim, Yujin; Lee, Hojun; Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association, v.13 no.4, pp.7-13, 2021
  • This paper presents a vulnerable road user (VRU) classification and tracking algorithm that fuses vision and LiDAR sensors for urban autonomous driving. Classifying and tracking vulnerable road users such as pedestrians, bicycles, and motorcycles is essential for autonomous driving in complex urban environments. In this paper, a real-time image object detector (YOLO) and an object tracking algorithm operating on the LiDAR point cloud are fused at a high level. The proposed algorithm consists of four parts. First, the object bounding boxes in pixel coordinates obtained from YOLO are transformed into the local coordinates of the subject vehicle using a homography matrix. Second, the LiDAR point cloud is clustered based on Euclidean distance, and the clusters are associated using GNN. In addition, the states of the clusters, including position, heading angle, velocity, and acceleration, are estimated in real time using a geometric model-free approach (GMFA). Finally, each LiDAR track is matched with a vision track using the angle of the transformed vision track and is assigned a classification ID. The proposed fusion algorithm is evaluated via real-vehicle tests in an urban environment.
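
A sketch of the first step the abstract lists: mapping a pixel (for example, the bottom-center of a YOLO box, assumed to lie on the ground plane) into vehicle-local coordinates with a 3x3 homography. The matrix below is made up; in practice it comes from extrinsic camera calibration against the ground plane:

```python
import numpy as np

def pixel_to_local(h_matrix: np.ndarray, u: float, v: float):
    """Map a pixel (u, v) on the ground plane to vehicle-local (x, y)
    via a 3x3 homography, with the usual perspective division."""
    p = h_matrix @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Illustrative homography (invented values, not from any real calibration).
H = np.array([[0.02,  0.00, -6.0],
              [0.00, -0.05, 18.0],
              [0.00,  0.00,  1.0]])

x, y = pixel_to_local(H, u=640, v=300)
print(f"x={x:.1f} m, y={y:.1f} m")  # x=6.8 m, y=3.0 m in this toy example
```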

The Method of Vanishing Point Estimation in Natural Environment using RANSAC (RANSAC을 이용한 실외 도로 환경의 소실점 예측 방법)

  • Weon, Sun-Hee; Joo, Sung-Il; Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information, v.18 no.9, pp.53-62, 2013
  • This paper proposes a method for automatically predicting the vanishing point for the purpose of detecting the road region in natural images. The proposed method stably detects the vanishing point in road environments by analyzing the dominant orientation of the image and predicting the vanishing point at the position where the image's feature components are concentrated. In the first stage, the image is partitioned into sub-blocks, edge samples are selected randomly within each sub-block, and RANSAC is applied for line fitting in order to analyze the dominant orientation of each sub-block. Once the dominant orientation has been detected for all blocks, the second stage randomly selects line samples and applies RANSAC to fit their intersection point; the cost of each intersection model is then measured against every line, and the vanishing point is predicted as the average point of the intersection model with the highest cost. Lastly, quantitative and qualitative analyses verify the performance in various situations and demonstrate the efficiency of the proposed vanishing point detection algorithm.
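
A minimal RANSAC line-fitting sketch of the kind used in the first stage: repeatedly fit a line through two random points and keep the model with the most inliers. The data, tolerance, and iteration count are invented:

```python
import numpy as np

def ransac_line(points, n_iter=200, inlier_tol=0.05, seed=0):
    """Minimal RANSAC: sample two points, form a line, count the points
    within inlier_tol of it, and keep the best-supported model."""
    rng = np.random.default_rng(seed)
    best_count, best_model = 0, None
    for _ in range(n_iter):
        p1, p2 = points[rng.choice(len(points), 2, replace=False)]
        d = p2 - p1
        norm = np.hypot(d[0], d[1])
        if norm == 0.0:
            continue
        # Perpendicular distance of every point to the line through p1 and p2.
        dist = np.abs(d[0] * (points[:, 1] - p1[1])
                      - d[1] * (points[:, 0] - p1[0])) / norm
        count = int((dist < inlier_tol).sum())
        if count > best_count:
            best_count, best_model = count, (p1, p2)
    return best_model, best_count

# 30 collinear points on y = 0.5x + 1 plus 10 uniform outliers.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 30)
data = np.vstack([np.column_stack([x, 0.5 * x + 1.0]),
                  rng.uniform(0.0, 10.0, size=(10, 2))])
model, inliers = ransac_line(data)
print(inliers)  # ~30: the collinear points are recovered as inliers
```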

Classification of Objects using CNN-Based Vision and Lidar Fusion in Autonomous Vehicle Environment

  • G. Komali; A. Sri Nagesh
    • International Journal of Computer Science & Network Security, v.23 no.11, pp.67-72, 2023
  • In the past decade, autonomous vehicle systems (AVS) have advanced at an exponential rate, particularly due to improvements in artificial intelligence, which have had a significant impact on society as well as on road safety and the future of transportation systems. The real-time fusion of light detection and ranging (LiDAR) and camera data is a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. For autonomous vehicles especially, efficient fusion of these two sensor types is important for estimating the depth of objects as well as classifying objects at short and long distances. This paper presents object classification using CNN-based vision and LiDAR fusion in an autonomous vehicle environment. The method is based on a convolutional neural network (CNN) and image upsampling: the LiDAR point cloud is upsampled and converted into pixel-level depth information, which is combined with the red-green-blue data and fed into a deep CNN. The proposed method obtains an informative feature representation for object classification from the integrated vision and LiDAR data, aiming for both high classification accuracy and minimal loss. Experimental results show the effectiveness and efficiency of the presented approach for object classification.
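
A small sketch of the input construction the abstract describes: upsample a sparse depth map to pixel resolution (nearest-neighbour here, a crude stand-in for the paper's upsampling) and stack it with RGB as a fourth channel:

```python
import numpy as np

# Toy RGB frame and a sparse LiDAR-derived depth map at quarter resolution.
rgb = np.random.rand(64, 64, 3).astype(np.float32)
sparse_depth = np.random.rand(16, 16).astype(np.float32)

# Nearest-neighbour upsampling of the depth map to pixel resolution --
# a crude stand-in for the paper's point-cloud upsampling step.
dense_depth = sparse_depth.repeat(4, axis=0).repeat(4, axis=1)

# Stack depth as a fourth channel, giving the RGB-D tensor fed to the CNN.
rgbd = np.concatenate([rgb, dense_depth[..., None]], axis=-1)
print(rgbd.shape)  # (64, 64, 4)
```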