• Title/Summary/Keyword: Image deep learning

Search Results: 1,806

Application of Mask R-CNN Algorithm to Detect Cracks in Concrete Structure (콘크리트 구조체 균열 탐지에 대한 Mask R-CNN 알고리즘 적용성 평가)

  • Bae, Byongkyu;Choi, Yongjin;Yun, Kangho;Ahn, Jaehun
    • Journal of the Korean Geotechnical Society / v.40 no.3 / pp.33-39 / 2024
  • Inspecting cracks to determine a structure's condition is crucial for accurate safety diagnosis. However, visual crack inspection methods can be subjective and depend on field conditions, resulting in low reliability. To address this issue, this study automates the detection of concrete cracks in image data using a convolutional neural network with ResNet as the backbone, FPN as the neck, and Mask R-CNN components as the head. The performance of the proposed model is analyzed using intersection over union (IoU). The experimental dataset contained 1,203 images divided into training (70%), validation (20%), and testing (10%) sets. The model achieved an IoU of 95.83% on the test set, with no cases in which a crack went undetected. These findings demonstrate that the proposed model detects concrete cracks in image data with high accuracy.
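A minimal sketch of the architecture this abstract names, using torchvision's reference Mask R-CNN with a ResNet-50 backbone and FPN neck; this is not the authors' code, and the two-class setup (background, crack) and 512x512 input are assumptions:

```python
# Sketch only: torchvision's Mask R-CNN with ResNet-50 backbone + FPN neck.
# num_classes=2 (background, crack) and the input size are assumptions.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights=None, num_classes=2)
model.eval()

# One RGB image as a (C, H, W) float tensor scaled to [0, 1]
image = torch.rand(3, 512, 512)
with torch.no_grad():
    prediction = model([image])[0]   # dict with boxes, labels, scores, masks

print(prediction["masks"].shape)     # (N, 1, 512, 512): one soft mask per detection
```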

A Unicode based Deep Handwritten Character Recognition model for Telugu to English Language Translation

  • BV Subba Rao;J. Nageswara Rao;Bandi Vamsi;Venkata Nagaraju Thatha;Katta Subba Rao
    • International Journal of Computer Science & Network Security / v.24 no.2 / pp.101-112 / 2024
  • Telugu is the fourth most used language in India, spoken especially in the regions of Andhra Pradesh, Telangana, and Karnataka, and its speaker base is also growing internationally. The language comprises dependent and independent vowels, consonants, and digits. Despite this, Telugu Handwritten Character Recognition (HCR) has seen little advancement. HCR is a neural network technique that converts a document image into editable text, which can then be reused in many other applications; this saves time and effort by avoiding repeated manual transcription. In this work, a Unicode-based Handwritten Character Recognition (U-HCR) model is developed for translating handwritten Telugu characters into the English language. Using the Centre of Gravity (CG), the model divides a compound character into individual characters with the help of Unicode values. Both online and offline Telugu character datasets were used for training. To extract features from the scanned image, we used a convolutional neural network along with machine learning classifiers such as Random Forest and Support Vector Machine. Stochastic Gradient Descent (SGD), Root Mean Square Propagation (RMS-P), and Adaptive Moment Estimation (ADAM) optimizers are used to enhance the performance of U-HCR and to reduce the loss function value. On both the online and offline datasets, the proposed model showed promising results, with accuracies of 90.28% for SGD, 96.97% for RMS-P, and 93.57% for ADAM.
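A minimal sketch of the hybrid pipeline the abstract describes: a small CNN extracts features and scikit-learn Random Forest / SVM classifiers are trained on them. The network shape, 32x32 grayscale inputs, and placeholder data are assumptions, not the paper's setup:

```python
# Sketch only: CNN feature extraction + classical classifiers, on placeholder data.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

class FeatureCNN(nn.Module):
    """Small convolutional feature extractor for 32x32 grayscale characters."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),                     # 32 * 8 * 8 = 2048 features
        )
    def forward(self, x):
        return self.net(x)

cnn = FeatureCNN().eval()
images = torch.rand(100, 1, 32, 32)           # placeholder character images
labels = np.random.randint(0, 10, 100)        # placeholder class labels

with torch.no_grad():
    feats = cnn(images).numpy()

# Classical ML classifiers on top of the CNN features, as in the abstract
svm = SVC(kernel="rbf").fit(feats, labels)
rf = RandomForestClassifier(n_estimators=100).fit(feats, labels)
print(svm.score(feats, labels), rf.score(feats, labels))
```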

Computer Vision-Based Measurement Method for Wire Harness Defect Classification

  • Yun Jung Hong;Geon Lee;Jiyoung Woo
    • Journal of the Korea Society of Computer and Information / v.29 no.1 / pp.77-84 / 2024
  • In this paper, we propose a method for accurately and rapidly detecting defects in wire harnesses by utilizing computer vision to calculate six crucial measurement values: the length of crimped terminals, the dimensions (width) of terminal ends, and the width of crimped sections (wire and core portions). We employ Harris corner detection to locate object positions from two types of data. Additionally, we generate reference points for extracting measurement values by utilizing features specific to each measurement area and exploiting the contrast in shading between the background and objects, thus reflecting the slope of each sample. Subsequently, we introduce a method using Euclidean distance and correction coefficients to predict measurements regardless of changes in the wire's position. We achieve high accuracy for each measurement type (99.1%, 98.7%, 92.6%, 92.5%, 99.9%, and 99.7%), with an outstanding overall average accuracy of 97% across all measurements. This inspection method not only addresses the limitations of conventional visual inspection but also yields excellent results with a small amount of data. Moreover, because it relies solely on image processing, it is expected to be more cost-effective and applicable with less data than deep learning methods.
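A minimal sketch of the two steps the abstract names: Harris corner detection to locate reference points, then a Euclidean distance scaled by a correction coefficient. The file path, thresholds, and mm-per-pixel calibration are assumptions:

```python
# Sketch only: Harris corners as reference points, Euclidean distance to mm.
import cv2
import numpy as np

img = cv2.imread("harness.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
if img is None:                                         # fall back to a synthetic shape
    img = np.zeros((100, 200), np.uint8)
    cv2.rectangle(img, (20, 40), (180, 60), 255, -1)

# Harris corner response; blockSize/ksize/k are common defaults
response = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)
corners = np.argwhere(response > 0.01 * response.max()) # (row, col) candidates

# Distance between two reference corners, converted with a correction factor
p1, p2 = corners[0], corners[-1]
mm_per_pixel = 0.05                                     # assumed calibration
print(f"estimated length: {np.linalg.norm(p1 - p2) * mm_per_pixel:.2f} mm")
```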

Development of an Automatic Classification Model for Construction Site Photos with Semantic Analysis based on Korean Construction Specification (표준시방서 기반의 의미론적 분석을 반영한 건설 현장 사진 자동 분류 모델 개발)

  • Park, Min-Geon;Kim, Kyung-Hwan
    • Korean Journal of Construction Engineering and Management / v.25 no.3 / pp.58-67 / 2024
  • In the era of the fourth industrial revolution, data plays a vital role in enhancing the productivity of industries. To advance digitalization in the construction industry, which suffers from a lack of available data, this study proposes a model that classifies construction site photos by work type. Unlike traditional image classification models that rely solely on visual data, the proposed model incorporates semantic analysis of construction work types. This is achieved by extracting the significance of relationships between objects and work types from the standard construction specification; these relationships are then used to enhance classification by correlating them with the objects detected in photos. The model improves the interpretability and reliability of classification results and offers convenience to field operators in photo categorization tasks. Its practical utility has also been validated through integration into a classification program. As a result, this study is expected to contribute to the digitalization of the construction industry.
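A minimal sketch of the fusion idea described here: visual class scores are re-weighted by object-to-work-type relation weights, which the paper mines from the Korean standard construction specification. Every name, weight, and the multiplicative fusion rule below is an illustrative assumption:

```python
# Sketch only: re-weighting visual class scores with semantic relation weights.
import numpy as np

work_types = ["earthwork", "rebar work", "concrete work"]

# Assumed relation weights; the paper derives these from the standard
# construction specification rather than hand-coding them.
relation = {
    "excavator": np.array([0.80, 0.10, 0.10]),
    "rebar":     np.array([0.05, 0.85, 0.10]),
    "formwork":  np.array([0.05, 0.25, 0.70]),
}

def classify(visual_scores, detected_objects):
    """Fuse CNN class scores with semantic relation weights (multiplicative)."""
    semantic = np.mean([relation[o] for o in detected_objects], axis=0)
    combined = visual_scores * semantic
    return work_types[int(np.argmax(combined))]

print(classify(np.array([0.3, 0.4, 0.3]), ["rebar", "formwork"]))
```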

A high-density gamma white spots-Gaussian mixture noise removal method for neutron images denoising based on Swin Transformer UNet and Monte Carlo calculation

  • Di Zhang;Guomin Sun;Zihui Yang;Jie Yu
    • Nuclear Engineering and Technology / v.56 no.2 / pp.715-727 / 2024
  • During fast neutron imaging, besides the dark current noise and readout noise of the CCD camera, the main noise comes from high-energy gamma rays generated by neutron nuclear reactions in and around the experimental setup. These high-energy gamma rays produce high-density gamma white spots (GWS) in the fast neutron image. Due to the microscopic quantum characteristics of the neutron beam itself and environmental scattering effects, fast neutron images also typically exhibit Gaussian noise. Existing denoising methods for neutron images struggle with such a mixture of GWS and Gaussian noise. Herein we put forward a deep learning approach based on the Swin Transformer UNet (SUNet) model to remove high-density GWS-Gaussian mixture noise from fast neutron images. The improved denoising model is trained with a customized loss function that combines perceptual loss and mean squared error loss to avoid the grid-like artifacts caused by using perceptual loss alone. To address the high cost of acquiring real fast neutron images, this study introduces a Monte Carlo method to simulate noise data with GWS characteristics by computing the interaction between gamma rays and sensors based on the principle of GWS generation. Ultimately, experiments on both simulated neutron noise images and real fast neutron images demonstrate that the proposed method not only improves the quality and signal-to-noise ratio of fast neutron images but also preserves the details of the original images during denoising.
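A minimal sketch of the combined loss the abstract describes, pairing mean squared error with a VGG-feature perceptual term; the 0.1 weight and the relu3_3 cut of VGG16 are assumptions, not the paper's values:

```python
# Sketch only: MSE + perceptual loss; weight and VGG layer cut are assumed.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class PerceptualMSELoss(nn.Module):
    def __init__(self, perceptual_weight=0.1):
        super().__init__()
        self.vgg = vgg16(weights=None).features[:16].eval()  # up to relu3_3
        for p in self.vgg.parameters():
            p.requires_grad = False
        self.mse = nn.MSELoss()
        self.w = perceptual_weight

    def forward(self, denoised, clean):
        pixel = self.mse(denoised, clean)
        # single-channel neutron images repeated to 3 channels for VGG
        perc = self.mse(self.vgg(denoised.repeat(1, 3, 1, 1)),
                        self.vgg(clean.repeat(1, 3, 1, 1)))
        return pixel + self.w * perc

loss_fn = PerceptualMSELoss()
print(loss_fn(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)).item())
```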

Regeneration of a defective Railroad Surface for defect detection with Deep Convolution Neural Networks (Deep Convolution Neural Networks 이용하여 결함 검출을 위한 결함이 있는 철도선로표면 디지털영상 재 생성)

  • Kim, Hyeonho;Han, Seokmin
    • Journal of Internet Computing and Services / v.21 no.6 / pp.23-31 / 2020
  • This study generates varied images of railroad surfaces with random defects as training data to improve defect detection. Defects on railroad surfaces are caused by factors such as friction between track binding devices and adjacent tracks, and can lead to accidents such as broken rails, so maintenance against defects is necessary. Accordingly, research on defect detection and inspection using image processing or machine learning on railway surface images has been conducted to automate railroad inspection and reduce maintenance costs. In general, the performance of image processing and machine learning methods depends on the quantity and quality of data. For this reason, some studies require dedicated devices or vehicles that acquire images of the track surface at regular intervals to build a database of varied railway surface images. In contrast, to reduce the operating cost of image acquisition, this study constructs a 'Defective Railroad Surface Regeneration Model' by applying methods from related studies on the Generative Adversarial Network (GAN), aiming to detect defects on the railroad surface even without a dedicated database. The model is designed to learn to generate railroad surfaces by combining different railroad surface textures with the original surface, taking the ground truth of railroad defects into account. The generated railroad surface images were used as training data for a defect detection network based on the Fully Convolutional Network (FCN). To validate performance, we clustered the railroad data into three subsets: one subset of original railroad texture images and two subsets of other railroad surface texture images. In the first experiment, only the original texture images were used to train the defect detection model; in the second, the model was trained on generated images produced by combining the original images with a few railroad textures from the other subsets. Each defect detection model was evaluated against the ground truth in terms of intersection over union (IoU) and F1-score. The scores increased by about 10-15% when the generated images were used, compared to using only the original images. This shows that defects can be detected using existing data plus a few different texture images, even for railroad surfaces for which no dedicated training database has been constructed.
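A minimal sketch of the evaluation measures used here, intersection over union (IoU) and F1-score on binary defect masks; the masks below are random placeholders:

```python
# Sketch only: IoU and F1 between binary masks; masks are placeholders.
import numpy as np

def iou(pred, gt):
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

def f1(pred, gt):
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

pred = np.random.rand(256, 256) > 0.5    # placeholder predicted defect mask
gt = np.random.rand(256, 256) > 0.5      # placeholder ground-truth mask
print(f"IoU={iou(pred, gt):.3f}, F1={f1(pred, gt):.3f}")
```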

Estimation of Rice Heading Date of Paddy Rice from Slanted and Top-view Images Using Deep Learning Classification Model (딥 러닝 분류 모델을 이용한 직하방과 경사각 영상 기반의 벼 출수기 판별)

  • Hyeok-jin Bak;Wan-Gyu Sang;Sungyul Chang;Dongwon Kwon;Woo-jin Im;Ji-hyeon Lee;Nam-jin Chung;Jung-Il Cho
    • Korean Journal of Agricultural and Forest Meteorology / v.25 no.4 / pp.337-345 / 2023
  • Estimating the rice heading date is one of the most crucial agricultural tasks related to productivity. However, due to abnormal climates around the world, it is becoming increasingly challenging to estimate the rice heading date, and a more objective classification method is needed than the existing approaches. In this study, we aimed to classify the rice heading stage from various images using a CNN classification model. We collected top-view images taken from a drone and a phenotyping tower, as well as slanted-view images captured with an RGB camera. The collected images were preprocessed to serve as input data for the CNN model. The architectures employed were ResNet50, InceptionV3, and VGG19, which are commonly used for image classification. All models achieved an accuracy of 0.98 or higher, regardless of architecture and image type. We also used Grad-CAM to visually inspect which image features the models attended to when classifying. We then verified that our model accurately estimates the rice heading date in paddy fields: the estimated heading date differed by approximately one day on average across the four paddy fields. These results suggest that the heading stage can be estimated automatically and quantitatively from various paddy field monitoring images.
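A minimal sketch of the kind of classifier the abstract describes: an ImageNet-style backbone (ResNet50 shown) with its final layer replaced for heading-stage classification. The two-class setup and input size are assumptions:

```python
# Sketch only: ResNet50 with its head replaced for two assumed classes
# (pre-heading vs. post-heading); pass weights="IMAGENET1K_V2" to fine-tune.
import torch
import torch.nn as nn
from torchvision.models import resnet50

model = resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)

images = torch.rand(4, 3, 224, 224)      # placeholder drone/tower/slanted crops
print(model(images).softmax(dim=1))      # per-class probabilities
```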

Radiation Dose Reduction in Digital Mammography by Deep-Learning Algorithm Image Reconstruction: A Preliminary Study (딥러닝 알고리즘을 이용한 저선량 디지털 유방 촬영 영상의 복원: 예비 연구)

  • Su Min Ha;Hak Hee Kim;Eunhee Kang;Bo Kyoung Seo;Nami Choi;Tae Hee Kim;You Jin Ku;Jong Chul Ye
    • Journal of the Korean Society of Radiology / v.83 no.2 / pp.344-359 / 2022
  • Purpose To develop a denoising convolutional neural network-based image processing technique and investigate its efficacy in diagnosing breast cancer using low-dose mammography imaging. Materials and Methods A total of six breast radiologists were included in this prospective study. All radiologists independently evaluated low-dose images for lesion detection and rated them for diagnostic quality using a qualitative scale. After application of the denoising network, the same radiologists evaluated lesion detectability and image quality. For clinical application, a consensus on lesion type and localization on preoperative mammographic examinations of breast cancer patients was reached after discussion. Thereafter, coded low-dose, reconstructed full-dose, and full-dose images were presented and assessed in random order. Results Lesions on 40% reconstructed full-dose images were better perceived than on low-dose images, using mastectomy specimens as a reference. In clinical application, compared to 40% reconstructed images, full-dose images received higher ratings for resolution (p < 0.001), diagnostic quality for calcifications (p < 0.001), and diagnostic quality for masses, asymmetry, or architectural distortion (p = 0.037). The 40% reconstructed images were comparable to 100% full-dose images for overall quality (p = 0.547), lesion visibility (p = 0.120), and contrast (p = 0.083), without significant differences. Conclusion Effective denoising and image reconstruction techniques can enable breast cancer diagnosis with a substantial radiation dose reduction.
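A minimal DnCNN-style sketch of a denoising convolutional network with residual learning; the abstract does not specify the network, so depth, width, and patch size here are assumptions:

```python
# Sketch only: DnCNN-style residual denoiser; architecture details assumed.
import torch
import torch.nn as nn

class DenoiseCNN(nn.Module):
    def __init__(self, channels=64, depth=8):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.BatchNorm2d(channels), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(channels, 1, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x - self.body(x)    # residual learning: subtract predicted noise

low_dose = torch.rand(1, 1, 128, 128)    # placeholder low-dose mammogram patch
print(DenoiseCNN()(low_dose).shape)
```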

Automatic Generation of Land Cover Map Using Residual U-Net (Residual U-Net을 이용한 토지피복지도 자동 제작 연구)

  • Yoo, Su Hong;Lee, Ji Sang;Bae, Jun Su;Sohn, Hong Gyoo
    • KSCE Journal of Civil and Environmental Engineering Research / v.40 no.5 / pp.535-546 / 2020
  • Land cover maps have been derived from satellite and aerial images by the Ministry of Environment for all of Korea since 1998. Despite their wide application in many sectors, their use in the research community is limited, mainly because the map compilation cycle varies greatly across regions. This situation calls for a new and quicker methodology for generating land cover maps. This study was conducted to automatically generate land cover maps using aerial ortho-images and Landsat 8 satellite images. The input aerial and Landsat 8 image data were used to train Residual U-Net, one of the deep learning-based segmentation techniques. The study was carried out in three groups: the first and second groups cover part of the level-II (medium) categories, and the third group uses the level-III (large) classification categories defined in the land cover map. In the first group, the results using all 7 classes showed a classification accuracy of 86.6%. The other two groups, which include level-II classes, showed a classification accuracy of 71%. Based on these results, a deep learning-based approach for generating the automatic level-III classification was presented.
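A minimal sketch of the Residual U-Net idea: a U-Net style encoder/decoder built from residual convolution blocks, with a 7-class head matching the first experiment; depth, channel widths, and input size are assumptions:

```python
# Sketch only: tiny Residual U-Net (residual blocks + skip connection).
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out),
        )
        self.skip = nn.Conv2d(c_in, c_out, 1)   # 1x1 conv matches channels
    def forward(self, x):
        return torch.relu(self.conv(x) + self.skip(x))

class TinyResUNet(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        self.enc1 = ResBlock(3, 32)
        self.enc2 = ResBlock(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = ResBlock(64, 32)             # concatenated skip + upsample
        self.head = nn.Conv2d(32, num_classes, 1)
    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d)

print(TinyResUNet()(torch.rand(1, 3, 128, 128)).shape)   # (1, 7, 128, 128)
```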

Estimating the Stand Level Vegetation Structure Map Using Drone Optical Imageries and LiDAR Data based on an Artificial Neural Networks (ANNs) (인공신경망 기반 드론 광학영상 및 LiDAR 자료를 활용한 임분단위 식생층위구조 추정)

  • Cha, Sungeun;Jo, Hyun-Woo;Lim, Chul-Hee;Song, Cholho;Lee, Sle-Gee;Kim, Jiwon;Park, Chiyoung;Jeon, Seong-Woo;Lee, Woo-Kyun
    • Korean Journal of Remote Sensing / v.36 no.5_1 / pp.653-666 / 2020
  • Understanding the vegetation structure is important for managing forest resources for sustainable forest development. With recent advances in technology, it is possible to apply new techniques such as drones and deep learning to forests and use them to estimate the vegetation structure. In this study, the vegetation structure of the Gongju, Samchuk, and Seoguipo areas was identified by fusing drone optical images and LiDAR data using Artificial Neural Networks (ANNs), with accuracies of 92.62% (Kappa value: 0.59), 91.57% (Kappa value: 0.53), and 86.00% (Kappa value: 0.63), respectively. The performance of this deep learning-based vegetation structure analysis is expected to increase as the amount of information in the optical and LiDAR data grows. In the future, if the model is developed with higher complexity that reflects the various characteristics of vegetation, and with sufficient sampling, a country-level vegetation structure map could be constructed and used as reference data for Korea's policies and regulations.
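A minimal sketch of the fusion approach the abstract describes: per-pixel drone optical bands and LiDAR-derived features are concatenated and fed to a small feed-forward ANN that predicts a vegetation-structure class. The feature layout and class count are assumptions:

```python
# Sketch only: per-pixel optical + LiDAR fusion in a small feed-forward ANN.
import torch
import torch.nn as nn

n_optical, n_lidar, n_classes = 3, 2, 4   # assumed: RGB + (DSM, DTM) heights

model = nn.Sequential(
    nn.Linear(n_optical + n_lidar, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, n_classes),
)

optical = torch.rand(8, n_optical)        # placeholder reflectance values
lidar = torch.rand(8, n_lidar)            # placeholder height features
print(model(torch.cat([optical, lidar], dim=1)).argmax(dim=1))
```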