• Title/Summary/Keyword: Shadow Dataset

A method of generating virtual shadow dataset of buildings for the shadow detection and removal

  • Kim, Kangjik;Chun, Junchul
    • Journal of Internet Computing and Services
    • /
    • v.21 no.5
    • /
    • pp.49-56
    • /
    • 2020
  • Detecting shadows in images and restoring or removing them is a challenging task in computer vision. Traditional approaches used color information, edges, and thresholds to detect shadows, but they suffered from errors such as ignoring the penumbra of a shadow or labeling dark regions that are not shadows. Deep learning has succeeded in many areas of computer vision, and it has begun to be applied to shadow detection and removal. However, collecting training data for these networks is difficult and time-consuming, and shooting conditions are heavily constrained. Shadow data for buildings and satellite images is especially hard to obtain, which has hindered progress in this area. In this paper, we propose a method for generating shadow data for buildings and satellite scenes using Unity3D. 3D models of real-world objects are placed in a virtual Unity space, and shadows are cast with light sources before the scene is captured. This yields all three image types needed to train deep learning networks for shadow detection and removal: the shadow-free image, the shadow image, and the shadow mask. The proposed method contributes to research on building and satellite shadow detection and removal, where the absence of data makes training deep networks difficult, by supplying large-scale data. Although this may be a suboptimal substitute for real data, it allows deep learning networks to be tested on virtual data before real data are applied.
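The triplet described above (shadow-free image, shadow image, shadow mask) can be illustrated outside Unity with a small sketch: given a rendered pair with the light's shadows toggled off and on, the mask is simply the region that got darker. The threshold value and the NumPy formulation are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def shadow_mask(shadow_free, shadow, threshold=0.1):
    """Binary shadow mask: pixels noticeably darker in the shadow render.

    shadow_free, shadow: float arrays in [0, 1], shape (H, W) or (H, W, 3).
    threshold: minimum brightness drop counted as shadow (assumed value).
    """
    diff = shadow_free.astype(np.float64) - shadow.astype(np.float64)
    if diff.ndim == 3:          # reduce RGB to a mean brightness drop
        diff = diff.mean(axis=-1)
    return (diff > threshold).astype(np.uint8)

# Toy example: a 4x4 scene whose lower-right quadrant is shadowed.
free = np.full((4, 4), 0.8)
shad = free.copy()
shad[2:, 2:] = 0.3
mask = shadow_mask(free, shad)
```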

Adversarial Shade Generation and Training for a Text Recognition Algorithm Robust to Brightness Changes (밝기 변화에 강인한 적대적 음영 생성 및 훈련 글자 인식 알고리즘)

  • Seo, Minseok;Kim, Daehan;Choi, Dong-Geol
    • The Journal of Korea Robotics Society
    • /
    • v.16 no.3
    • /
    • pp.276-282
    • /
    • 2021
  • Systems that recognize text in natural scenes have been applied across industries. However, brightness changes that occur in nature, such as light reflections and shadows, significantly degrade text recognition performance. To solve this problem, we propose an adversarial shade generation and training algorithm that is robust to shadow changes. The algorithm divides the image into a grid of nine cells and adjusts the brightness of each cell with four trainable parameters. Training then proceeds with the text recognition model and the shaded-image generator in an adversarial relationship, so that progressively harder shaded-grid combinations arise as training continues. With this curriculum-learning scheme, we not only achieved a performance improvement of more than 3% on the ICDAR2015 public benchmark dataset but also confirmed improved performance on our Android application text recognition dataset.
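A minimal sketch of the per-grid shading idea, assuming a single multiplicative gain per cell of a 3×3 grid; the paper's four trainable parameters per grid are not detailed in the abstract, so a plain gain stands in for them here.

```python
import numpy as np

def apply_grid_shade(img, gains):
    """Darken or brighten each cell of a 3x3 grid by a per-cell gain.

    img: float array (H, W) with H and W divisible by 3.
    gains: (3, 3) array of multiplicative gains (a stand-in for the
    paper's four trainable parameters per grid cell).
    """
    h, w = img.shape
    out = img.copy()
    gh, gw = h // 3, w // 3
    for i in range(3):
        for j in range(3):
            out[i*gh:(i+1)*gh, j*gw:(j+1)*gw] *= gains[i, j]
    return np.clip(out, 0.0, 1.0)

# Shade only the top-left cell of a uniform image.
img = np.full((6, 6), 0.5)
gains = np.ones((3, 3))
gains[0, 0] = 0.4
shaded = apply_grid_shade(img, gains)
```

In adversarial training the gains would be optimized to *increase* the recognizer's loss, while the recognizer is trained on the shaded output.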

Color Intensity Variation based Approach for Background Subtraction and Shadow Detection

  • Erdenebatkhaan, Turbat;Kim, Hyoung-Nyoun;Lee, Joong-Ho;Kim, Sung-Joon;Park, Ji-Hyung
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2007.02a
    • /
    • pp.298-301
    • /
    • 2007
  • Computational speed plays a key role in background subtraction and shadow detection, because they are only preprocessing steps for moving-object segmentation, tracking, and activity recognition. A color-intensity-variation-based approach quickly detects a moving object and extracts its shadow in an image sequence. The moving object is subtracted from the background using meanmax and meanmin thresholds, and the shadow is detected using decrease-limit and correspondence thresholds. The proposed approach relies on representing the impact of cast shadows with an offline experimental dataset over a sub-grouped RGB color space.
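The thresholding scheme can be sketched roughly as follows; the exact definitions of the meanmax/meanmin and decrease-limit thresholds are assumptions reconstructed from the abstract, not the paper's calibration.

```python
import numpy as np

def classify_pixels(frame, bg_frames, decrease_limit=0.4):
    """Per-pixel labels: 0 = background, 1 = foreground, 2 = shadow.

    frame: (H, W) grayscale frame in [0, 1].
    bg_frames: (N, H, W) stack of background-only training frames.
    A pixel is "moving" if it falls outside the [min, max] range observed
    in the background; a moderate brightness drop within decrease_limit is
    relabeled as shadow. decrease_limit is an assumed value.
    """
    bg_min = bg_frames.min(axis=0)
    bg_max = bg_frames.max(axis=0)
    moving = (frame < bg_min) | (frame > bg_max)
    mean_bg = bg_frames.mean(axis=0)
    drop = (mean_bg - frame) / np.maximum(mean_bg, 1e-6)
    shadow = moving & (drop > 0) & (drop < decrease_limit)
    labels = np.zeros(frame.shape, dtype=np.uint8)
    labels[moving] = 1
    labels[shadow] = 2
    return labels

# Toy scene: static pixel, slightly darkened pixel, dark object, bright object.
bg = np.stack([np.full((2, 2), 0.5), np.full((2, 2), 0.6)])
frame = np.array([[0.55, 0.45], [0.1, 0.9]])
labels = classify_pixels(frame, bg)
```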

SHADOW EXTRACTION FROM ASTER IMAGE USING MIXED PIXEL ANALYSIS

  • Kikuchi, Yuki;Takeshi, Miyata;Masataka, Takagi
    • Proceedings of the KSRS Conference
    • /
    • 2003.11a
    • /
    • pp.727-731
    • /
    • 2003
  • The ASTER image has advantages for classification, such as 15 spectral bands and 15 m to 90 m spatial resolution. However, in classification with general remote sensing imagery, shadow areas are often classified as water: separating shadow from water is very difficult because the reflectance characteristics of water are similar to those of shadow, and many land-cover types are mixed within a single 15 m pixel. Nowadays, very-high-resolution satellite images (IKONOS, QuickBird) and Digital Surface Models (DSM) from airborne laser scanners are also available. In this study, mixed-pixel analysis of an ASTER image was carried out using an IKONOS image and a DSM. Mixed-pixel analysis requires highly accurate geometric correction, so an image matching method was applied to generate GCP datasets and the IKONOS image was rectified by an affine transform. Each pixel in the ASTER image was then compared with the corresponding 15×15 pixels in the IKONOS image, and training datasets for mixed-pixel analysis were generated by visual interpretation of the IKONOS image. Finally, classification was carried out based on a Linear Mixture Model, from which shadow extraction can succeed. The extracted shadow area was validated against a shadow image generated from a 1 m to 2 m spatial resolution DSM. The result showed a 17.2% error in mixed pixels, which may mark the limit of the ASTER image for shadow extraction because of its 8-bit quantization.
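The Linear Mixture Model step can be sketched as least-squares spectral unmixing; real implementations impose sum-to-one and non-negativity constraints, which this illustrative version only approximates by clipping and renormalizing.

```python
import numpy as np

def unmix(pixel, endmembers):
    """Estimate class fractions inside one mixed pixel (Linear Mixture Model).

    pixel: (B,) observed spectrum of a coarse-resolution pixel.
    endmembers: (B, C) matrix of pure-class spectra (e.g. water, shadow,
    vegetation), one column per class.
    Solves pixel ~= endmembers @ fractions by least squares, then clips to
    non-negative values and renormalizes to sum to 1 (a simplification of
    a properly constrained solver).
    """
    fractions, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    fractions = np.clip(fractions, 0.0, None)
    total = fractions.sum()
    return fractions / total if total > 0 else fractions

# Toy example: a 3-band pixel that is exactly 25% class A and 75% class B.
E = np.array([[0.8, 0.1],
              [0.6, 0.3],
              [0.2, 0.05]])
pixel = E @ np.array([0.25, 0.75])
fractions = unmix(pixel, E)
```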

Image-based Extraction of Histogram Index for Concrete Crack Analysis

  • Kim, Bubryur;Lee, Dong-Eun
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.912-919
    • /
    • 2022
  • This study is an image-based assessment that uses image processing techniques to determine the condition of concrete with surface cracks. Dataset preparation includes resizing and image filtering to ensure statistical homogeneity and reduce noise. The image dataset is then segmented, making it better suited for extracting important features and easier to evaluate. Each image is transformed to grayscale, which removes hue and saturation but retains luminance. To create a clean edge map, edge detection extracts the major edge features of the image; the Otsu method is used to minimize intra-class variance between black and white pixels, and a median filter reduces noise while preserving the image borders. These techniques enhance the significant features of the concrete image, especially its defects. In this study, the tonal zones of the histogram and their properties are used to analyze the condition of the concrete. By examining the histogram, a viewer can read information about the image from the number of pixels associated with each tonal characteristic on the graph. The features of the histogram's five tonal zones, which reflect the quality of the concrete image, can be evaluated from the contrast, brightness, highlights, shadow spikes, or the condition of the shadow region corresponding to the foreground.
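The Otsu step mentioned above can be sketched directly over a 256-bin histogram; minimizing intra-class variance is equivalent to maximizing between-class variance, which is what this version computes.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: the threshold that minimizes intra-class variance.

    gray: array of integer intensities in [0, 255].
    Returns the threshold t; pixels above t are treated as foreground.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                  # class-0 probability per threshold
    mu = np.cumsum(prob * np.arange(256))    # cumulative mean per threshold
    mu_total = mu[-1]
    # Between-class variance for every candidate threshold t.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)         # empty classes contribute 0
    return int(np.argmax(sigma_b))

# Perfectly bimodal toy image: dark concrete (50) and bright highlights (200).
gray = np.concatenate([np.full(100, 50), np.full(100, 200)]).astype(np.uint8)
t = otsu_threshold(gray)
```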

Automatic crack detection of dam concrete structures based on deep learning

  • Zongjie Lv;Jinzhang Tian;Yantao Zhu;Yangtao Li
    • Computers and Concrete
    • /
    • v.32 no.6
    • /
    • pp.615-623
    • /
    • 2023
  • Crack detection is an essential method for ensuring the safety of dam concrete structures, but low-quality crack images limit the application of neural network methods. This research proposes a modified attention mechanism to reduce the disturbance caused by uneven light, shadows, and water spots in crack images, and uses the focal loss function to address the small proportion of crack pixels. The dataset was collected from the internet, the laboratory, and actual inspections of dam concrete structures. On this basis, the research proposes a novel U-Net-based method for crack detection in dam concrete structures, named AF-UNet. A mutual comparison of OTSU, Canny, region growing, DeepLab V3+, SegFormer, U-Net, and the proposed AF-UNet verified its detection accuracy. A binocular camera detects cracks in the experimental scene, with a smallest measurable width of 0.27 mm. The long-term goal is real-time detection and localization of cracks in dam concrete structures.
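The focal loss used to counter the small proportion of crack pixels can be sketched in its common binary form; the gamma and alpha values below are the usual defaults, not necessarily the paper's settings.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for class-imbalanced pixels (e.g. sparse cracks).

    p: predicted crack probabilities in (0, 1); y: 0/1 ground-truth labels.
    The factor (1 - p_t)**gamma down-weights easy, well-classified pixels,
    so the rare, hard crack pixels dominate the loss.
    """
    p = np.clip(p, 1e-7, 1 - 1e-7)
    p_t = np.where(y == 1, p, 1 - p)               # prob of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)   # class-balance weight
    return float(np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t)))

# A confidently correct crack pixel vs. a badly missed one.
loss_easy = focal_loss(np.array([0.9]), np.array([1]))
loss_hard = focal_loss(np.array([0.1]), np.array([1]))
```

A missed crack pixel contributes orders of magnitude more loss than an easy, correctly classified one, which is the point of the weighting.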

U-Net Cloud Detection for the SPARCS Cloud Dataset from Landsat 8 Images (Landsat 8 기반 SPARCS 데이터셋을 이용한 U-Net 구름탐지)

  • Kang, Jonggu;Kim, Geunah;Jeong, Yemin;Kim, Seoyeon;Youn, Youjeong;Cho, Soobin;Lee, Yangwon
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_1
    • /
    • pp.1149-1161
    • /
    • 2021
  • With the growing use of computer vision for satellite images, cloud detection using deep learning has also attracted attention recently. In this study, we built a U-Net cloud detection model using the SPARCS (Spatial Procedures for Automated Removal of Cloud and Shadow) dataset with image data augmentation, and carried out 10-fold cross-validation for an objective assessment. In a blind test on 1,800 datasets of 512 by 512 pixels, the model showed relatively high performance, with an accuracy of 0.821, precision of 0.847, recall of 0.821, F1-score of 0.831, and IoU (Intersection over Union) of 0.723. Although 14.5% of actual cloud shadows were misclassified as land and 19.7% of actual clouds were misidentified as land, this can be overcome by increasing the quality and quantity of the label datasets. Moreover, a state-of-the-art DeepLab V3+ model and the NAS (Neural Architecture Search) optimization technique can help cloud detection for CAS500 (Compact Advanced Satellite 500) in South Korea.
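The reported metrics (accuracy, precision, recall, F1-score, IoU) all follow from per-pixel confusion counts; a small sketch for a binary cloud mask:

```python
import numpy as np

def pixel_metrics(pred, truth):
    """Accuracy, precision, recall, F1, and IoU for a binary cloud mask."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)     # cloud predicted, cloud present
    fp = np.sum(pred & ~truth)    # cloud predicted, clear in truth
    fn = np.sum(~pred & truth)    # clear predicted, cloud missed
    tn = np.sum(~pred & ~truth)   # clear predicted, clear in truth
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
        "iou": tp / (tp + fp + fn),
    }

# Toy 4-pixel mask with one TP, one FP, one FN, one TN.
m = pixel_metrics(np.array([1, 1, 0, 0]), np.array([1, 0, 1, 0]))
```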

CNN-based Shadow Detection Method using Height map in 3D Virtual City Model (3차원 가상도시 모델에서 높이맵을 이용한 CNN 기반의 그림자 탐지방법)

  • Yoon, Hee Jin;Kim, Ju Wan;Jang, In Sung;Lee, Byung-Dai;Kim, Nam-Gi
    • Journal of Internet Computing and Services
    • /
    • v.20 no.6
    • /
    • pp.55-63
    • /
    • 2019
  • Recently, the use of real-world image data has been increasing to express realistic virtual environments in application fields such as education, manufacturing, and construction. In particular, with growing interest in digital twins such as smart cities, realistic 3D urban models are being built from real-world images such as aerial photographs. However, captured aerial images include shadows cast by the sun, and a 3D city model containing those shadows presents distorted information to the user. Many studies have attempted to remove the shadows, but it remains a difficult problem. In this paper, we construct a virtual-environment dataset that includes a building height map, using the 3D spatial information provided by VWorld, and we propose a new shadow detection method based on the height map and deep learning. The experimental results show that the shadow detection error rate is reduced when the height map is used.
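Feeding a height map to a CNN alongside the aerial image is commonly done by channel stacking; the normalization scheme below is an assumption, since the abstract does not describe the paper's input formatting.

```python
import numpy as np

def stack_height_channel(rgb, height):
    """Append a normalized height map as a 4th input channel for a CNN.

    rgb: (H, W, 3) aerial image; height: (H, W) building-height map.
    Min-max normalizing the height map keeps its scale comparable to the
    image channels (an assumed, illustrative choice).
    """
    h = height.astype(np.float64)
    rng = h.max() - h.min()
    h = (h - h.min()) / rng if rng > 0 else np.zeros_like(h)
    return np.concatenate([rgb.astype(np.float64), h[..., None]], axis=-1)

# Toy example: 2x2 image with building heights from 0 m to 40 m.
rgb = np.zeros((2, 2, 3))
height = np.array([[0.0, 10.0], [20.0, 40.0]])
out = stack_height_channel(rgb, height)
```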

Comparison of various image fusion methods for impervious surface classification from VNREDSat-1

  • Luu, Hung V.;Pham, Manh V.;Man, Chuc D.;Bui, Hung Q.;Nguyen, Thanh T.N.
    • International Journal of Advanced Culture Technology
    • /
    • v.4 no.2
    • /
    • pp.1-6
    • /
    • 2016
  • Impervious surfaces are important indicators for monitoring urban development. Accurately mapping urban impervious surfaces with observation satellites such as VNREDSat-1 remains challenging, because a single PAN image does not capture the full spectral diversity. In this article, five multi-resolution image fusion techniques were compared for the task of classifying urban impervious surfaces. The results show that for the VNREDSat-1 dataset, the UNB and wavelet transformation methods are the best techniques for preserving the spatial and spectral information of the original MS image, respectively. However, the UNB technique gives the best results for impervious surface classification, especially when shadow areas are included in the non-impervious surface group.

Study on the LOWTRAN7 Simulation of the Atmospheric Radiative Transfer Using CAGEX Data. (CAGEX 관측자료를 이용한 LOWTRAN7의 대기 복사전달 모의에 대한 조사)

  • 장광미;권태영;박경윤
    • Korean Journal of Remote Sensing
    • /
    • v.13 no.2
    • /
    • pp.99-120
    • /
    • 1997
  • Solar radiation is scattered and absorbed by atmospheric constituents before it reaches the surface and, after being reflected at the surface, again before it reaches the satellite sensor. Consideration of radiative transfer through the atmosphere is therefore essential for quantitative analysis of satellite data, especially in the shortwave region. This study examined the feasibility of using a radiative transfer code to estimate atmospheric effects on satellite remote sensing data by comparing shortwave fluxes simulated by LOWTRAN7 with CAGEX data. The CAGEX (CERES/ARM/GEWEX Experiment) data provide (1) atmospheric soundings, aerosol optical depth, and albedo, (2) ARM (Atmospheric Radiation Measurement) radiation fluxes measured by pyrgeometers, a pyrheliometer, and a shadow pyranometer, and (3) broadband shortwave fluxes simulated by Fu-Liou's radiative transfer code. To simulate the aerosol effect with the radiative transfer model, aerosol optical characteristics were derived from the observed aerosol column optical depth, Spinhirne's experimental vertical distribution of the scattering coefficient, and D'Almeida's statistical radiative characteristics of atmospheric aerosols. LOWTRAN7 simulations were performed for 31 samples of completely clear days, and the results were compared with the CAGEX data for the upward, downward direct, and downward diffuse solar flux at the surface and the upward solar flux at the top of the atmosphere (TOA). The standard errors of the LOWTRAN7 simulations were within 5% for all components except the downward diffuse solar flux at the surface (6.9%). The results show that a large part of the error in the LOWTRAN7 flux simulation appears in the diffuse component, due mainly to scattering by atmospheric aerosols. Improving the accuracy of radiative transfer simulation therefore requires better information about the radiative characteristics of atmospheric aerosols.