• Title/Summary/Keyword: Night images

Day and night license plate detection using tail-light color and image features of license plate in driving road images

  • Kim, Lok-Young;Choi, Yeong-Woo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.20 no.7
    • /
    • pp.25-32
    • /
    • 2015
  • In this paper, we propose a license plate detection method for moving vehicles in various road images. The proposed method first classifies each road image as a day or night image to improve detection accuracy, and then detects tail-light regions by finding red areas in the RGB color space. Candidate license plate regions are located using the symmetry, size, width and variance of the tail-light regions, and morphological operations with adaptively sized structuring elements are applied to find license plates of various sizes. Finally, the plate area is verified and confirmed using the geometric and image features of the license plate regions. The proposed method was tested on various road images, achieving detection rates (precisions) of 84.2% for day images and 87.4% for night images.
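
The tail-light stage of the pipeline can be sketched in a few lines. This is a minimal numpy illustration with made-up thresholds and a toy frame, not the paper's implementation (which additionally applies symmetry, size and variance checks and adaptive morphology):

```python
import numpy as np

def detect_taillight_mask(rgb, r_min=150, dominance=1.5):
    """Rough red-region mask in RGB space; thresholds are illustrative."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return (r > r_min) & (r > dominance * g) & (r > dominance * b)

def candidate_plate_region(mask):
    """Candidate plate area between a pair of red blobs: here simply the
    bounding box spanned by all red pixels."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return (int(ys.min()), int(ys.max()), int(xs.min()), int(xs.max()))

# toy frame: two bright-red "tail lights" on a dark background
img = np.zeros((40, 80, 3), dtype=np.uint8)
img[20:24, 10:14] = (220, 30, 30)
img[20:24, 66:70] = (220, 30, 30)
mask = detect_taillight_mask(img)
print(candidate_plate_region(mask))  # → (20, 23, 10, 69)
```

In a real detector the box between the two lights would then be refined with the geometric plate features the abstract mentions.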

Color Enhancement in Images with Single CCD camera in Night Vision Environment

  • Hwang, Wonjun;Ko, Hanseok
    • Proceedings of the IEEK Conference
    • /
    • 2000.07a
    • /
    • pp.58-61
    • /
    • 2000
  • In this paper, we describe an effective method to enhance color night images with a spatio-temporal multi-scale retinex (STMSR), aimed at Intelligent Transportation System (ITS) applications such as the single-CCD-based Electronic Toll Collection System (ETCS). The basic spatial retinex is known to provide color constancy while effectively removing local shades, but it is relatively ineffective for night vision enhancement. Our proposed STMSR method exploits iterative time averaging of image sequences to suppress noise while accounting for vehicles moving through the frame. In STMSR, the spatial term makes dark images distinguishable and preserves color information day and night, while the temporal term reduces noise for a sharper and clearer reconstruction of each frame's contents. We show through representative simulations that incorporating both terms produces output sequences that are visually more pleasing than the original dim images.
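
The two STMSR terms can be illustrated separately. Below is a minimal numpy sketch: a single-scale spatial retinex with a box-filter surround standing in for the multi-scale Gaussian, and a plain temporal average standing in for the paper's motion-aware iterative averaging:

```python
import numpy as np

def box_blur(img, k):
    """Separable box blur, standing in for the Gaussian surround."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    ker = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, ker, 'valid'), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, ker, 'valid'), 0, out)

def spatial_retinex(frame, k=15, eps=1.0):
    """Spatial term: single-scale retinex, log(I) - log(surround), which
    lifts dark regions while keeping local structure."""
    f = frame.astype(float)
    return np.log(f + eps) - np.log(box_blur(f, k) + eps)

def temporal_average(frames):
    """Temporal term: averaging co-located pixels across frames suppresses
    sensor noise (the paper additionally compensates moving vehicles)."""
    return np.mean(np.stack(frames), axis=0)

# noise suppression by the temporal term on a static toy scene
rng = np.random.default_rng(0)
scene = np.full((32, 32), 40.0)
frames = [scene + rng.normal(0, 5, scene.shape) for _ in range(8)]
avg = temporal_average(frames)
```

Averaging eight frames shrinks the noise standard deviation by roughly a factor of sqrt(8), which is why the temporal term sharpens dim sequences.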

A Double-channel Four-band True Color Night Vision System

  • Jiang, Yunfeng;Wu, Dongsheng;Liu, Jie;Tian, Kuo;Wang, Dan
    • Current Optics and Photonics
    • /
    • v.6 no.6
    • /
    • pp.608-618
    • /
    • 2022
  • By analyzing the signal-to-noise ratio (SNR) theory of the conventional true color night vision system, we found that the output image SNR is limited by the wavelength range [λ1, λ2] of the system response. Therefore, we built a double-channel four-band true color night vision system that expands the system response to improve the output image SNR. In addition, we propose an image fusion method based on principal component analysis (PCA) and the nonsubsampled shearlet transform (NSST) to obtain true color night vision images. For evaluation, we propose a method based on edge extraction of the targets and spatial-dimension decorrelation to calculate the SNR of the obtained images, and we also calculate the correlation coefficient (CC) between the edge maps of the obtained and reference images. The results show that the SNRs of the images of four scenes obtained by our system were 125.0%, 145.8%, 86.0% and 51.8% higher, respectively, than those of the conventional tri-band system, and the CC was also higher, demonstrating that our system can produce true color images of better quality.
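
The PCA half of the fusion scheme can be sketched as follows. This toy version fuses co-registered bands with weights taken from the first principal component of the band covariance and omits the NSST part entirely, so it illustrates only the idea, not the paper's method:

```python
import numpy as np

def pca_fuse(bands):
    """Fuse co-registered band images with first-principal-component
    weights (PCA part only; the paper's NSST stage is omitted)."""
    X = np.stack([b.ravel().astype(float) for b in bands])  # (n_bands, px)
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = Xc @ Xc.T / Xc.shape[1]
    w = np.linalg.eigh(cov)[1][:, -1]     # leading eigenvector
    if w.sum() != 0:
        w = w / w.sum()                   # sign-safe normalization
    return np.tensordot(w, np.stack([b.astype(float) for b in bands]), axes=1)

# two identical bands fuse back to the band itself (weights 0.5 / 0.5)
rng = np.random.default_rng(1)
band = rng.uniform(0.0, 255.0, (16, 16))
fused = pca_fuse([band, band])
```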

CycleGAN-based Object Detection under Night Environments (CycleGAN을 이용한 야간 상황 물체 검출 알고리즘)

  • Cho, Sangheum;Lee, Ryong;Na, Jaemin;Kim, Youngbin;Park, Minwoo;Lee, Sanghwan;Hwang, Wonjun
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.1
    • /
    • pp.44-54
    • /
    • 2019
  • Recently, image-based object detection has made great progress with the introduction of the Convolutional Neural Network (CNN). Many approaches, such as Region-based CNN, Fast R-CNN, and Faster R-CNN, have been proposed to achieve better detection performance, and YOLO has shown the best performance when both accuracy and computational complexity are considered. However, these data-driven detection methods, including YOLO, share a fundamental problem: they cannot guarantee good performance without a large training database. In this paper, we propose a data sampling method using CycleGAN to solve this problem, which can convert image styles while retaining the characteristics of a given input image. We generate the missing data samples needed to train a more robust object detector without the effort of collecting a larger database. We present extensive experimental results on day-time and night-time road images and validate that the proposed method can improve night-time object detection accuracy without training on night-time object databases: we convert the day-time training images into synthesized night-time images and train the detection model on the real day-time images together with the synthesized night-time images.
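
The augmentation recipe reduces to mixing real day samples with synthesized night samples under unchanged labels. The sketch below uses a crude darkening function as a hypothetical stand-in for the trained CycleGAN generator, so it runs without a model:

```python
import numpy as np

def toy_day_to_night(img):
    """Hypothetical stand-in for a trained CycleGAN day->night generator:
    a crude global darkening, only so the sketch is runnable."""
    return (img.astype(float) * 0.3).astype(np.uint8)

def augment_with_synthetic_night(day_images, day_labels,
                                 day2night=toy_day_to_night):
    """Keep every real day image and add its synthesized night version
    with the SAME labels: style transfer preserves object positions,
    so bounding boxes can be reused as-is."""
    images, labels = [], []
    for img, lab in zip(day_images, day_labels):
        images.append(img)
        labels.append(lab)
        images.append(day2night(img))   # synthesized night sample
        labels.append(lab)              # boxes unchanged
    return images, labels

days = [np.full((8, 8, 3), 200, dtype=np.uint8) for _ in range(3)]
boxes = [[(1, 1, 4, 4)], [(2, 2, 5, 5)], [(0, 0, 3, 3)]]
imgs, labs = augment_with_synthetic_night(days, boxes)
```

The doubled set (real day + synthetic night) is then fed to the detector exactly as an ordinary labeled dataset.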

Image Enhancement Technology for Improved Object Recognition in Car Black Box Night

  • Lee, Kyedoo;Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.6 no.3
    • /
    • pp.168-174
    • /
    • 2017
  • Videos recorded by surveillance cameras or car black boxes at night are distorted by illumination variation; as a result, it is difficult to analyze the morphological characteristics of objects, and the use of such distorted footage as evidence in traffic accidents is limited. Image restoration is performed by amplifying the brightness of nighttime images with linearized gamma correction to increase their contrast (the loss of which destroys visual information), and by minimizing the degradation factors caused by irregular vehicle motion.
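
Gamma correction of the kind mentioned is a one-liner; the sketch below uses an illustrative exponent (gamma = 0.5, not a value from the paper) to show how it lifts dark pixels far more than bright ones:

```python
import numpy as np

def gamma_brighten(img, gamma=0.5):
    """Brightness amplification by gamma correction on a [0, 1] scale:
    out = in ** gamma. With gamma < 1, dark values gain much more than
    bright ones, stretching nighttime contrast."""
    x = img.astype(float) / 255.0
    return (255.0 * np.power(x, gamma)).astype(np.uint8)

dark = np.array([[16, 64, 200]], dtype=np.uint8)
bright = gamma_brighten(dark)   # → [[63, 127, 225]]
```

Note the asymmetry: the pixel at 16 roughly quadruples while the pixel at 200 barely changes, which is exactly the behavior needed for dim footage.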

Reflectance estimation for infrared and visible image fusion

  • Gu, Yan;Yang, Feng;Zhao, Weijun;Guo, Yiliang;Min, Chaobo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.8
    • /
    • pp.2749-2763
    • /
    • 2021
  • The desirable result of infrared (IR) and visible (VIS) image fusion should combine textural details from VIS images with salient targets from IR images. However, detail in the dark regions of a VIS image has low contrast and blurry edges, degrading fusion performance. To address the problem of fuzzy details in dark regions, we propose a reflectance estimation method for IR and VIS image fusion. To maintain and enhance details in these dark regions, a dark region approximation (DRA) is proposed to optimize the Retinex model. With the DRA-improved Retinex model, a quasi-Newton method is adopted to estimate the reflectance of the VIS image. The final fusion result is obtained by fusing the DRA-based reflectance of the VIS image with the IR image. Our method simultaneously retains the low-visibility details of VIS images and the high-contrast targets of IR images. Experimental statistics show that, compared with several advanced approaches, the proposed method is superior in detail preservation and visual quality.
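
The Retinex decomposition at the core of the method can be sketched with a local-mean illumination estimate. The paper's dark-region approximation and quasi-Newton solver are replaced here by a closed form, so this is an illustration of the decomposition only:

```python
import numpy as np

def estimate_reflectance(vis, k=9, eps=1e-3):
    """Retinex decomposition V = R * L: approximate the illumination L by
    a local mean (box filter) and recover reflectance R = V / L. The
    paper instead refines L with a dark-region approximation and solves
    for R with a quasi-Newton method."""
    pad = k // 2
    p = np.pad(vis.astype(float), pad, mode='edge')
    ker = np.ones(k) / k
    L = np.apply_along_axis(lambda r: np.convolve(r, ker, 'valid'), 1, p)
    L = np.apply_along_axis(lambda c: np.convolve(c, ker, 'valid'), 0, L)
    return vis / (L + eps)

def fuse(vis_reflectance, ir, w=0.6):
    """Naive weighted fusion of VIS reflectance with the IR image
    (the weight is illustrative)."""
    return w * vis_reflectance + (1.0 - w) * ir

# uniform scene under uniform light: reflectance is ~1 everywhere
flat = np.full((12, 12), 100.0)
R = estimate_reflectance(flat)
```

Because the reflectance is divided by the estimated illumination, detail in dark regions is renormalized instead of being drowned out, which is what the fusion step then exploits.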

Development of A Prototype Device to Capture Day/Night Cloud Images based on Whole-Sky Camera Using the Illumination Data (정밀조도정보를 이용한 전천카메라 기반의 주·야간 구름영상촬영용 원형장치 개발)

  • Lee, Jaewon;Park, Inchun;cho, Jungho;Ki, GyunDo;Kim, Young Chul
    • Atmosphere
    • /
    • v.28 no.3
    • /
    • pp.317-324
    • /
    • 2018
  • In this study, we review a ground-based whole-sky camera (WSC) developed to continuously capture day and night cloud images using illumination data from a precision Lightmeter with high temporal resolution. The WSC combines a precision Lightmeter, developed during the IYA (International Year of Astronomy) for analyzing artificial light pollution at night, with a DSLR camera equipped with a fish-eye lens of the kind widely used in observational astronomy. The WSC adjusts the shutter speed and ISO of the camera according to the illumination data in order to capture cloud images stably. A Raspberry Pi automatically controls the process of taking cloud and sky images every minute, 24 hours a day, under the various conditions indicated by the Lightmeter; it is also used to post-process and store the cloud images and to upload the data to a web page in real time. Finally, by analyzing the captured images, we assess the technical feasibility of observing the cloud distribution (cover, type, height) quantitatively and objectively with this optical system.
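
The illumination-driven exposure control can be sketched as a simple lookup. The breakpoints and camera settings below are illustrative guesses, not the device's actual calibration:

```python
def choose_exposure(illuminance_lux):
    """Map a Lightmeter reading to (shutter_seconds, iso).
    Breakpoints and values are illustrative, not the WSC's calibration."""
    if illuminance_lux > 1000.0:      # daytime sky
        return 1 / 1000, 100
    if illuminance_lux > 1.0:         # twilight
        return 1 / 30, 400
    return 30.0, 1600                 # night: long exposure, high gain
```

A controller like this, polled once per minute, is what lets the same camera produce usable cloud images across the full day/night illumination range.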

Design of Optimized RBFNNs based on Night Vision Face Recognition Simulator Using the 2D2 PCA Algorithm ((2D)2 PCA알고리즘을 이용한 최적 RBFNNs 기반 나이트비전 얼굴인식 시뮬레이터 설계)

  • Jang, Byoung-Hee;Kim, Hyun-Ki;Oh, Sung-Kwun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.1
    • /
    • pp.1-6
    • /
    • 2014
  • In this study, we propose an optimized RBFNNs-based night vision face recognition simulator with the aid of the $(2D)^2$ PCA algorithm. Images acquired through a CCD camera at night are too dark for face recognition, so a night vision camera is used to obtain images at night. The Ada-Boost algorithm is used to detect faces, separating face from non-face image areas, and distortion in the images is minimized using histogram equalization. These high-dimensional images are reduced to low-dimensional representations using the $(2D)^2$ PCA algorithm. Face recognition is performed with a polynomial-based RBFNNs classifier whose essential design parameters are optimized by means of Differential Evolution (DE). The performance of the optimized $(2D)^2$ PCA-based RBFNNs is evaluated on the night vision face recognition system and IC&CI Lab data.
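
The $(2D)^2$ PCA reduction projects each image matrix from both sides rather than vectorizing it. A minimal numpy sketch on random stand-in faces (the classifier stage is omitted):

```python
import numpy as np

def top_eigvecs(S, d):
    """d leading eigenvectors (columns) of a symmetric scatter matrix."""
    return np.linalg.eigh(S)[1][:, ::-1][:, :d]

def two_directional_2dpca(images, d_rows, d_cols):
    """(2D)^2 PCA: learn a row-side projection Z and a column-side
    projection X from the image scatter matrices, then reduce each
    (h, w) image to Z^T A X of shape (d_rows, d_cols)."""
    A = np.stack(images).astype(float)            # (n, h, w)
    Ac = A - A.mean(axis=0)
    G_col = np.einsum('nhw,nhv->wv', Ac, Ac)      # sum_i Ai^T Ai -> (w, w)
    G_row = np.einsum('nhw,nvw->hv', Ac, Ac)      # sum_i Ai Ai^T -> (h, h)
    X = top_eigvecs(G_col, d_cols)                # (w, d_cols)
    Z = top_eigvecs(G_row, d_rows)                # (h, d_rows)
    return [Z.T @ img @ X for img in A]

rng = np.random.default_rng(0)
faces = [rng.uniform(0, 1, (12, 10)) for _ in range(5)]
reduced = two_directional_2dpca(faces, 4, 3)      # each face: 12x10 -> 4x3
```

The two-sided projection is why $(2D)^2$ PCA yields far smaller feature matrices than row-only 2DPCA at comparable reconstruction quality.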

Vehicle Detection at Night Based on Style Transfer Image Enhancement

  • Jianing Shen;Rong Li
    • Journal of Information Processing Systems
    • /
    • v.19 no.5
    • /
    • pp.663-672
    • /
    • 2023
  • Most vehicle detection methods have poor vehicle feature extraction performance at night, and their robustness is reduced; hence, this study proposes a night vehicle detection method based on style transfer image enhancement. First, a style transfer model is constructed using cycle generative adversarial networks (cycleGANs). The daytime data in the BDD100K dataset were converted into nighttime data to form a style dataset. The dataset was then divided using its labels. Finally, based on a YOLOv5s network, a nighttime vehicle image is detected for the reliable recognition of vehicle information in a complex environment. The experimental results of the proposed method based on the BDD100K dataset show that the transferred night vehicle images are clear and meet the requirements. The precision, recall, mAP@.5, and mAP@.5:.95 reached 0.696, 0.292, 0.761, and 0.454, respectively.

An Analysis of Night and Day Images of Bridges Over the Han River in Seoul (서울시 한강교량 주야간 경관이미지 분석)

  • 서주환;최현상;차정우
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.30 no.5
    • /
    • pp.31-38
    • /
    • 2002
  • This study attempts to grasp the correlation between the image of bridges and bridge landscapes with their surroundings in day and nighttime viewing, and to understand the psychological influence of nighttime lighting through quantitative analysis. In addition, it presents a design approach for bridges intended to increase viewers' enjoyment of bridge landscapes lit at night. To attain this objective and allow the results to be generalized, this paper selects 8 of the 9 lit bridges in Seoul, excluding bridges under construction until 2004. The criteria for selecting the viewpoints are that each must be within easy reach of the bridges and must allow viewers to recognize the details of the surrounding landscape both in daylight and at night; the pictures of the bridges were taken from the terraced land by the riverside. The study selected 16 pictures judged to be of similar quality and angle, to fix the conditions of luminosity, color, definition and angle. The results are as follows. First, viewers' preference for night landscapes is higher than for day landscapes due to the effect of lighting. By day, viewers preferred bridges with distinctive structures, such as cable-stayed and arch bridges, over simple girder bridges; they also preferred lighting with a unique color that harmonizes with the surroundings. Second, the components representing the image of a bridge landscape are classified into three types: 'beauty', 'system' and 'agreeableness'. Third, the factors affecting preference are the shape of the bridge by day and its lighting at night. Esthetic appeal is the most important factor in visual preference, so each bridge's own esthetic appeal and surroundings must be considered, and a complete plan must weigh safety, beauty and the local surroundings. In addition, when the lighting of a bridge is selected, the design of the bridge landscape must consider various lighting schemes to harmonize the upper and lower parts of the structure. In this respect, the study identifies the basic elements of bridge lighting planning needed to increase appreciation of the bridge landscape.