• Title/Summary/Keyword: traffic light detection

Development of IoT System Based on Context Awareness to Assist the Visually Impaired

  • Song, Mi-Hwa
    • International Journal of Advanced Culture Technology
    • /
    • v.9 no.4
    • /
    • pp.320-328
    • /
    • 2021
  • As the number of visually impaired people steadily increases, so does interest in independent walking. At present, however, independent walking involves many inconveniences that reduce the quality of life of the visually impaired. The white cane, the most common existing walking aid, cannot detect overhead obstacles or obstacles beyond its effective range. In addition, crossing the street is difficult because the audible signals meant to guide the visually impaired across crosswalks are often absent or damaged. These factors make independent walking difficult. We therefore propose the design of an embedded system that provides traffic light recognition through object recognition technology, voice guidance through TTS, and overhead obstacle detection through ultrasonic sensors, so that blind people can walk independently, safely, and with a higher quality of life.
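
A minimal sketch of how the three components named in the abstract might be tied together in one guidance cycle. The helper functions `detect_traffic_light()` and `read_ultrasonic_cm()`, the warning threshold, and the use of `pyttsx3` as the TTS engine are assumptions of this illustration, not details from the paper.

```python
# Editor's illustration, not the authors' implementation: one guidance cycle
# combining traffic light recognition, ultrasonic obstacle sensing, and TTS.
import pyttsx3

OBSTACLE_THRESHOLD_CM = 120  # assumed warning distance for overhead obstacles

def guidance_step(detect_traffic_light, read_ultrasonic_cm, engine):
    """One cycle: warn about overhead obstacles, then report the light state."""
    distance = read_ultrasonic_cm()   # distance to nearest overhead obstacle, or None
    light = detect_traffic_light()    # "green", "red", or None (hypothetical helper)

    if distance is not None and distance < OBSTACLE_THRESHOLD_CM:
        engine.say(f"Obstacle ahead, about {int(distance)} centimeters")
    if light == "green":
        engine.say("Pedestrian light is green, you may cross")
    elif light == "red":
        engine.say("Pedestrian light is red, please wait")
    engine.runAndWait()

# Usage (hypothetical model and sensor objects):
# engine = pyttsx3.init()
# guidance_step(model.predict_light, sensor.read_cm, engine)
```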

A Study on the Development of Systems to Enforce Interfering Cars on the Ramp (끼어들기 단속시스템 개발 연구)

  • Lee, Ho-Won;Hyun, Cheol-Seung;Joo, Doo-Hwan;Jeong, Jun-Ha;Lee, Choul-Ki
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.11 no.5
    • /
    • pp.7-14
    • /
    • 2012
  • Drivers frequently encounter cars cutting into their lane on ramps, and these cut-in maneuvers cause serious traffic congestion. Police enforcement, however, has not been active because such violations are hard to detect. In this study, we evaluated systems for enforcing against cut-in vehicles through field tests. Image processing methods are generally sensitive to weather conditions; to overcome this limitation, we propose a new algorithm that combines image processing with a section detection method. The field test produced the following results: whereas the violation detection rate of conventional image processing was 58.2%, that of the proposed algorithm was 74.5%, and the rate of falsely citing vehicles that did not violate was 0.0%. In addition, because the system is compact and lightweight, with the camera integrated with the controller, it can be mounted on existing facilities such as street lights. We therefore conclude that cut-in violations can be enforced using the proposed vehicle enforcement system.
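
The abstract does not spell out the combined algorithm, so the sketch below only illustrates one plausible way an image-based cut-in flag could be confirmed by a section (entry/exit) check; the plate-matching approach and all names here are assumptions of this illustration.

```python
# Illustrative sketch only: a vehicle flagged by the image-based detector is
# confirmed as a cut-in violator only if it was recorded leaving the enforced
# section without ever having been matched (e.g. by plate) at the upstream entry.
def confirm_violations(image_flagged, upstream_entries, downstream_exits):
    upstream = set(upstream_entries)
    downstream = set(downstream_exits)
    return {plate for plate in image_flagged
            if plate in downstream and plate not in upstream}

# Example: "12가3456" was flagged and seen at the exit but never at the entry,
# so it is reported; "34나5678" entered the section legitimately.
print(confirm_violations({"12가3456", "34나5678"},
                         {"34나5678"},
                         {"12가3456", "34나5678"}))
```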

Development of a Driver Safety Information Service Model Using Point Detectors at Signalized Intersections (지점검지자료 기반 신호교차로 운전자 안전서비스 개발)

  • Jang, Jeong-A;Choe, Gi-Ju;Mun, Yeong-Jun
    • Journal of Korean Society of Transportation
    • /
    • v.27 no.5
    • /
    • pp.113-124
    • /
    • 2009
  • This paper suggests a new approach for providing driver safety information at signalized intersections. Particularly dangerous situations at signalized intersections, such as red-light violations, accelerating through the yellow interval, red-light running, and stopping abruptly because of the dilemma zone, are considered in this study. The paper presents a dangerous-vehicle determination algorithm that collects real-time vehicle speeds and times from multiple point detectors while vehicles travel through the phase change. For evaluation, VISSIM is used to reproduce a real-time multi-detector situation while varying input data such as inflow volume, design speed, and driver perception-reaction time. As a result, the correct-classification rate is approximately 98.5% and the prediction rate of the algorithm is approximately 88.5%. The paper also presents sensitivity results obtained by changing the input data. These results show that the new approach can be used to improve safety at signalized intersections.
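
For readers unfamiliar with the dilemma zone mentioned above, the sketch below applies the standard kinematic conditions (not the paper's exact thresholds or detector logic): at the onset of yellow, a vehicle is in the dilemma zone if it can neither stop comfortably nor clear the stop line before red.

```python
# Minimal dilemma-zone check using the textbook kinematic conditions.
# The yellow interval, reaction time, and deceleration values are assumptions.
def is_dilemma_zone(speed_mps, distance_m, yellow_s=3.0,
                    reaction_s=1.0, max_decel_mps2=3.4):
    min_stopping_dist = speed_mps * reaction_s + speed_mps**2 / (2 * max_decel_mps2)
    max_clearing_dist = speed_mps * yellow_s
    can_stop = distance_m >= min_stopping_dist
    can_clear = distance_m <= max_clearing_dist
    return not (can_stop or can_clear)

# Example: 60 km/h (16.7 m/s), 55 m from the stop line at yellow onset
print(is_dilemma_zone(16.7, 55.0))   # True: too close to stop, too far to clear
```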

A Road Luminance Measurement Application based on Android (안드로이드 기반의 도로 밝기 측정 어플리케이션 구현)

  • Choi, Young-Hwan;Kim, Hongrae;Hong, Min
    • Journal of Internet Computing and Services
    • /
    • v.16 no.2
    • /
    • pp.49-55
    • /
    • 2015
  • According to traffic accident statistics over the past five years, more accidents occur at night than during the day. Among the various causes, one of the major factors is inadequate or missing street lighting, which impairs drivers' vision and leads to accidents. In this paper, we design and implement a smartphone application that measures lane luminance and stores the driver's location, driving information, and lane luminance in a database in real time, in order to identify inadequate street light facilities and areas without street lights. The application is implemented in native C/C++ using the Android NDK, which gives faster operation than code written in Java or other languages. To measure road luminance, the input RGB image is converted to the YCbCr color space, and the Y channel gives the luminance of the road. The application detects the road lanes, calculates the lane luminance, and stores the result in the database server. It captures road video through the smartphone camera and reduces computational cost by processing only a region of interest (ROI) of the input images. The ROI is converted to grayscale, and the Canny edge detector is applied to extract the outlines of the lanes. The Hough line transform is then applied to obtain a group of candidate lanes, and both lane boundaries are selected by a lane detection algorithm that uses the gradients of the candidate lanes. Once both lanes are detected, a triangular area is set up 20 pixels below the intersection of the lanes, and the road luminance is estimated from this triangle: the Y value is calculated from the R, G, and B values of each pixel in the triangle, the average Y value is scaled to a range of 0 to 100 to report the road luminance, and pixel values are visualized with colors between black and green. After analyzing the lane video and the luminance of the road about 60 meters ahead, the car's location from the smartphone's GPS sensor is sent to the database server over wireless communication every 10 minutes. We expect the collected road luminance information to help warn drivers to drive safely and to improve renovation plans for road lighting management.
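
A rough OpenCV/Python sketch of the pipeline described in the abstract (ROI, Canny, Hough lane candidates, a small triangle below the lane intersection, and a 0-100 luminance score from the Y channel). The thresholds, ROI choice, and triangle dimensions here are assumptions, not the paper's tuned values, and the original implementation is in native C/C++.

```python
import cv2
import numpy as np

def road_luminance(frame_bgr):
    h = frame_bgr.shape[0]
    roi = frame_bgr[h // 2:, :]                      # assumed ROI: lower half of the frame
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 50,
                            minLineLength=40, maxLineGap=20)
    if lines is None:
        return None

    left, right = [], []                             # split candidates by gradient sign
    for x1, y1, x2, y2 in lines[:, 0]:
        if x2 == x1:
            continue
        slope = (y2 - y1) / (x2 - x1)
        intercept = y1 - slope * x1
        if slope < -0.3:
            left.append((slope, intercept))
        elif slope > 0.3:
            right.append((slope, intercept))
    if not left or not right:
        return None

    ml, bl = np.mean(left, axis=0)                   # averaged left lane line
    mr, br = np.mean(right, axis=0)                  # averaged right lane line
    xi = (br - bl) / (ml - mr)                       # intersection of the two lanes
    yi = ml * xi + bl

    # Triangle 20 px below the lane intersection, as described in the abstract.
    mask = np.zeros(gray.shape, np.uint8)
    pts = np.array([[int(xi), int(yi) + 20],
                    [int(xi) - 40, int(yi) + 60],
                    [int(xi) + 40, int(yi) + 60]], np.int32)
    cv2.fillPoly(mask, [pts], 255)

    y_channel = cv2.cvtColor(roi, cv2.COLOR_BGR2YCrCb)[:, :, 0]
    mean_y = cv2.mean(y_channel, mask=mask)[0]
    return mean_y / 255.0 * 100.0                    # scaled to the paper's 0-100 range
```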

Development of Street Crossing Assistive Embedded System for the Visually-Impaired Using Machine Learning Algorithm (머신러닝을 이용한 시각장애인 도로 횡단 보조 임베디드 시스템 개발)

  • Oh, SeonTaek;Jeong, Kidong;Kim, Homin;Kim, Young-Keun
    • Journal of the HCI Society of Korea
    • /
    • v.14 no.2
    • /
    • pp.41-47
    • /
    • 2019
  • In this study, a smart assistive device is designed to recognize pedestrian signals and provide audio instructions so that visually impaired people can cross streets safely. Walking alone is one of the biggest challenges for the visually impaired and degrades their quality of life. The proposed device has a camera attached to a pair of glasses; it detects traffic lights, recognizes pedestrian signals in real time using a machine learning algorithm on a GPU board, and provides audio instructions to the user. For portability, the device is designed to be compact and light while still providing sufficient battery life. The embedded processor is wired to the small camera mounted on the glasses, and a bone-conduction speaker is installed on the inner side of one temple so that audio instructions can be delivered without blocking external sounds, for safety. The performance of the proposed device was validated experimentally, showing 87.0% recall and 100% precision for detecting the pedestrian green light, and 94.4% recall and 97.1% precision for detecting the pedestrian red light.
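
For clarity on the reported metrics: precision and recall are computed from true positive, false positive, and missed detection counts, as in the sketch below. The raw counts used in the example are placeholders chosen to reproduce the reported green-light figures; the paper reports only the final rates.

```python
# Precision/recall as reported above, computed from raw detection counts.
def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# e.g. 87 correct green-light detections, 0 false alarms, 13 missed signals
# reproduces the reported 100% precision / 87.0% recall for the green light.
print(precision_recall(87, 0, 13))   # (1.0, 0.87)
```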

Single Image Dehazing Using Dark Channel Prior and Minimal Atmospheric Veil

  • Zhou, Xiao;Wang, Chengyou;Wang, Liping;Wang, Nan;Fu, Qiming
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.1
    • /
    • pp.341-363
    • /
    • 2016
  • Haze or fog is a common natural phenomenon. In foggy weather, captured images are difficult to use in computer vision systems such as road traffic detection and target tracking. The image dehazing technique has therefore become a hotspot in the field of image processing. This paper presents an overview of existing work on image dehazing. The intent is not to review all relevant works in the literature, but rather to focus on two main approaches: image dehazing based on the atmospheric veil and image dehazing based on the dark channel prior. After the overview and a comparative study, we propose an improved image dehazing method based on these two schemes. Our method obtains fog-free images by constructing a more suitable atmospheric veil and estimating the atmospheric light more accurately. In addition, we adjust the transmission in sky regions and apply tone mapping to the resulting images. Compared with other state-of-the-art algorithms, experimental results show that images recovered by our algorithm are clearer and more natural, especially in distant scenes and where scene depth changes abruptly.
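
As background, a compact sketch of the dark-channel-prior baseline the paper builds on (He et al.), not the authors' improved veil estimation. The patch size, omega, and transmission floor t0 are the commonly used defaults, assumed here for illustration.

```python
import cv2
import numpy as np

def dehaze_dark_channel(img_bgr, patch=15, omega=0.95, t0=0.1):
    I = img_bgr.astype(np.float64) / 255.0
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    dark = cv2.erode(I.min(axis=2), kernel)          # dark channel: patch-min of channel-min

    # Atmospheric light A: mean color of the brightest 0.1% dark-channel pixels.
    flat = dark.ravel()
    idx = np.argsort(flat)[-max(1, flat.size // 1000):]
    A = I.reshape(-1, 3)[idx].mean(axis=0)

    # Transmission estimate t(x) = 1 - omega * dark_channel(I / A), floored at t0.
    t = 1 - omega * cv2.erode((I / A).min(axis=2), kernel)
    t = np.clip(t, t0, 1.0)

    # Scene radiance J = (I - A) / t + A.
    J = (I - A) / t[..., None] + A
    return np.clip(J * 255, 0, 255).astype(np.uint8)
```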

Real-Time Vehicle License Plate Recognition System Using Adaptive Heuristic Segmentation Algorithm (적응 휴리스틱 분할 알고리즘을 이용한 실시간 차량 번호판 인식 시스템)

  • Jin, Moon Yong;Park, Jong Bin;Lee, Dong Suk;Park, Dong Sun
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.3 no.9
    • /
    • pp.361-368
    • /
    • 2014
  • License plate recognition (LPR) systems have been developed for efficient control of complex traffic environments and are currently used in many places. However, because of lighting, noise, background changes, environmental changes, and damaged plates, they work only in limited environments and are difficult to use in real time. This paper presents a heuristic segmentation algorithm that is robust to noise and illumination changes and introduces a real-time license plate recognition system based on it. In the first step, we detect the plate using Haar-like features and AdaBoost, which allows rapid detection through the integral image and a cascade structure. In the second step, we determine the type of license plate using adaptive histogram equalization and bilateral filtering for denoising, and segment the characters accurately based on adaptive thresholding, pixel projection, and prior knowledge. The last step is character recognition, which uses histogram of oriented gradients (HOG) features with a multi-layer perceptron (MLP) as the number classifier and a support vector machine (SVM) as the Korean character classifier. The experimental results show a license plate detection rate of 94.29% and a license plate false alarm rate of 2.94%. In character segmentation, the character hit rate is 97.23% and the character false alarm rate is 1.37%; in character recognition, the average character recognition rate is 98.38%. The total average running time of the proposed method is 140 ms, making an efficient and robust real-time system feasible.
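
An illustrative sketch of one stage named in the abstract: adaptive thresholding followed by vertical pixel-projection segmentation of plate characters. The block size, offset, and projection cutoff are assumptions, not the paper's adaptive heuristics.

```python
import cv2
import numpy as np

def segment_characters(plate_gray):
    # Binarize the plate region; parameters here are illustrative defaults.
    binary = cv2.adaptiveThreshold(plate_gray, 255,
                                   cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 19, 9)
    projection = binary.sum(axis=0)                  # column-wise ink profile
    cutoff = 0.05 * projection.max()

    boxes, start = [], None
    for x, value in enumerate(projection):
        if value > cutoff and start is None:
            start = x                                # character column begins
        elif value <= cutoff and start is not None:
            if x - start > 3:                        # ignore thin noise columns
                boxes.append((start, x))
            start = None
    if start is not None:
        boxes.append((start, len(projection)))
    return [binary[:, x0:x1] for x0, x1 in boxes]    # one crop per character
```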

D4AR - A 4-DIMENSIONAL AUGMENTED REALITY - MODEL FOR AUTOMATION AND VISUALIZATION OF CONSTRUCTION PROGRESS MONITORING

  • Mani Golparvar-Fard;Feniosky Pena-Mora
    • International conference on construction engineering and project management
    • /
    • 2009.05a
    • /
    • pp.30-31
    • /
    • 2009
  • Early detection of schedule delay in field construction activities is vital to project management. It provides the opportunity to initiate remedial actions and increases the chance of controlling such overruns or minimizing their impacts. This requires project managers to design, implement, and maintain a systematic approach to progress monitoring that promptly identifies, processes, and communicates discrepancies between actual and as-planned performance as early as possible. Despite its importance, systematic implementation of progress monitoring is challenging: (1) current progress monitoring is time-consuming, as it needs extensive as-planned and as-built data collection; (2) the excessive amount of work required may cause human errors and reduce the quality of manually collected data, and since usually only an approximate visual inspection is performed, the collected data are subjective; (3) existing methods of progress monitoring are non-systematic and may create a time lag between when progress is reported and when it is actually accomplished; (4) progress reports are visually complex and do not reflect the spatial aspects of construction; and (5) current reporting methods increase the time required to describe and explain progress in coordination meetings, which in turn can delay the decision-making process. In summary, with current methods it may not be easy to understand the progress situation clearly and quickly. To overcome such inefficiencies, this research explores the application of unsorted daily progress photograph logs - available on any construction site - as well as IFC-based 4D models for progress monitoring. Our approach is based on computing, from the images themselves, the photographers' locations and orientations, along with a sparse 3D geometric representation of the as-built scene, using daily progress photographs and superimposition of the reconstructed scene over the as-planned 4D model. Within such an environment, progress photographs are registered in the virtual as-planned environment, allowing a large unstructured collection of daily construction images to be interactively explored. In addition, sparse reconstructed scenes superimposed over 4D models allow site images to be geo-registered with the as-planned components and, consequently, a location-based image processing technique to be implemented and progress data to be extracted automatically. The result of the progress comparison between as-planned and as-built performance can then be visualized in the D4AR - 4D Augmented Reality - environment using a traffic light metaphor. In such an environment, project participants would be able to: 1) use the 4D as-planned model as a baseline for progress monitoring, compare it to daily construction photographs, and study workspace logistics; 2) interactively and remotely explore registered construction photographs in a 3D environment; 3) analyze registered images and quantify as-built progress; 4) measure discrepancies between as-planned and as-built performance; and 5) visually represent progress discrepancies by superimposing 4D as-planned models over progress photographs, make control decisions, and effectively communicate them to project participants.
We present our preliminary results on two ongoing construction projects and discuss implementation, perceived benefits, and potential future enhancement of this new technology in construction, on all fronts of automatic data collection, processing, and communication.
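
A minimal sketch of the traffic light metaphor for progress reporting, assuming deviation is measured as as-built minus as-planned percent complete per component; the cutoff values and function name are illustrative only, not taken from the paper.

```python
# Editor's illustration of traffic-light coding of progress deviation.
def progress_color(as_planned_pct, as_built_pct, tolerance=5.0):
    deviation = as_built_pct - as_planned_pct
    if deviation >= -tolerance:
        return "green"      # on or ahead of schedule
    if deviation >= -2 * tolerance:
        return "yellow"     # minor delay, monitor
    return "red"            # significant delay, remedial action needed

print(progress_color(60.0, 45.0))   # -> "red"
```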
