• Title/Summary/Keyword: Deep Learning


Subsurface anomaly detection utilizing synthetic GPR images and deep learning model

  • Ahmad Abdelmawla;Shihan Ma;Jidong J. Yang;S. Sonny Kim
    • Geomechanics and Engineering / v.33 no.2 / pp.203-209 / 2023
  • One major advantage of ground penetrating radar (GPR) over other field test methods is its ability to obtain subsurface images of roads in an efficient and non-intrusive manner. Not only can the strata of pavement structure be retrieved from the GPR scan images, but also various irregularities, such as cracks and internal cavities. This article introduces a deep learning-based approach, focusing on detecting subsurface cracks by recognizing their distinctive hyperbolic signatures in the GPR scan images. Given the limited road sections that contain target features, two data augmentation methods, i.e., feature insertion and generation, are implemented, resulting in 9,174 GPR scan images. One of the most popular real-time object detection models, You Only Learn One Representation (YOLOR), is trained for detecting the target features for two types of subsurface cracks: bottom cracks and full cracks from the GPR scan images. The former represents partial cracks initiated from the bottom of the asphalt layer or base layers, while the latter includes extended cracks that penetrate these layers. Our experiments show the test average precisions of 0.769, 0.803 and 0.735 for all cracks, bottom cracks, and full cracks, respectively. This demonstrates the practicality of deep learning-based methods in detecting subsurface cracks from GPR scan images.
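The test average precisions quoted above (0.769, 0.803, 0.735) come from the standard detection metric: the area under the precision-recall curve. A minimal pure-Python sketch with hypothetical scores, not the paper's evaluation code:

```python
def average_precision(detections, num_gt):
    """Area under the precision-recall curve for one class.

    detections: (confidence, is_true_positive) pairs, e.g. detector boxes
    already matched to ground-truth crack signatures by an IoU threshold.
    num_gt: number of ground-truth objects in the test set.
    """
    dets = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    ap = prev_recall = 0.0
    for _, is_tp in dets:
        if is_tp:
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / num_gt
        ap += (recall - prev_recall) * precision  # rectangle under the P-R curve
        prev_recall = recall
    return ap

# hypothetical scores: 3 ground-truth cracks, 4 detections
ap = average_precision([(0.9, True), (0.8, False), (0.7, True), (0.5, True)], 3)
```

Sorting by confidence first is what makes the metric threshold-free: every prefix of the sorted list is one operating point on the precision-recall curve.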

Deep-learning-based system-scale diagnosis of a nuclear power plant with multiple infrared cameras

  • Ik Jae Jin;Do Yeong Lim;In Cheol Bang
    • Nuclear Engineering and Technology / v.55 no.2 / pp.493-505 / 2023
  • Comprehensive condition monitoring of large industrial systems such as nuclear power plants (NPPs) is essential for safety and maintenance. In this study, we developed novel system-scale diagnostic technology based on deep learning and IR thermography that can efficiently and cost-effectively classify system conditions using a compact Raspberry Pi and IR sensors. This diagnostic technology can identify the presence of an abnormality or accident in the whole system, and when an accident occurs, the type of accident and the location of the abnormality can be identified in real time. To develop the technology, thermal images of major components were measured, and performance was validated, under each NPP accident condition using a thermal-hydraulic integral effect test facility equipped with compact infrared sensor modules. These thermal images were used to train a convolutional neural network (CNN), a deep-learning model effective for image processing. As a result, a novel diagnostic was developed that can diagnose individual components and the whole system, and classify accidents, using thermal images. The optimal model, derived from a modern CNN architecture, performed prompt and accurate component diagnosis, whole-system diagnosis, and accident classification. This diagnostic technology is expected to be applied to comprehensive condition monitoring of nuclear power plants for safety.
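The CNN above is built from 2-D convolutions that slide a learned kernel over the thermal image. A minimal pure-Python sketch of one "valid" convolution step, purely illustrative and unrelated to the paper's actual architecture:

```python
def conv2d(image, kernel):
    """'Valid' 2-D cross-correlation, the core CNN feature-extraction op.

    image and kernel are lists of lists of numbers (e.g. pixel temperatures).
    """
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            # dot product of the kernel with the image patch at (i, j)
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# a 2x2 averaging-style kernel pools local heat into each output cell
feature_map = conv2d([[1, 2, 3], [4, 5, 6], [7, 8, 9]], [[1, 1], [1, 1]])
```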

MULTI-APERTURE IMAGE PROCESSING USING DEEP LEARNING

  • GEONHO HWANG;CHANG HOON SONG;TAE KYUNG LEE;HOJUN NA;MYUNGJOO KANG
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.27 no.1 / pp.56-74 / 2023
  • In order to obtain practical, high-quality satellite images containing high-frequency components, a large-aperture optical system is required, which has the limitation of greatly increasing the payload weight. Many multi-aperture optical systems have been proposed to overcome this problem, but in many cases these systems do not capture high-frequency components in all directions, and reconstructing a high-quality image from their outputs is an ill-posed problem. In this paper, we use deep learning to overcome this limitation. A deep learning model receives low-quality images as input, estimates the point spread function (PSF), and combines the images to output a single high-quality image. We model images obtained from three rectangular apertures arranged in a regular polygon shape. We also propose the modulation transfer function (MTF) loss, which can capture the high-frequency components of the images. We present qualitative and quantitative results obtained through experiments.
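The idea behind an MTF-style loss is to compare images in the frequency domain, where missing high-frequency content is visible even when a pixel-wise loss looks small. A simplified 1-D sketch using a naive DFT, an illustration of the concept rather than the paper's loss:

```python
import cmath

def dft_magnitudes(signal):
    """Naive DFT; |F(k)| is the frequency content at frequency index k."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

def mtf_loss(pred, target):
    """Mean L1 gap between frequency magnitudes of two signals.

    Penalizes a prediction whose high frequencies are attenuated relative
    to the target, which is exactly what multi-aperture blur causes.
    """
    mp, mt = dft_magnitudes(pred), dft_magnitudes(target)
    return sum(abs(a - b) for a, b in zip(mp, mt)) / len(mp)
```

One notable property of comparing magnitudes only: the loss is invariant to circular shifts of the signal, so it measures frequency content, not alignment.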

Accuracy Improvement of Pig Detection using Image Processing and Deep Learning Techniques on an Embedded Board (임베디드 보드에서 영상 처리 및 딥러닝 기법을 혼용한 돼지 탐지 정확도 개선)

  • Yu, Seunghyun;Son, Seungwook;Ahn, Hanse;Lee, Sejun;Baek, Hwapyeong;Chung, Yongwha;Park, Daihee
    • Journal of Korea Multimedia Society / v.25 no.4 / pp.583-599 / 2022
  • Although object detection accuracy with a single image has improved significantly with the advance of deep learning techniques, detection accuracy for pig monitoring is challenged by occlusion problems caused by the complex structure of a pig room, such as feeding facilities. These detection difficulties with a single image can be mitigated by using video data. In this research, we propose a pig detection method for a video monitoring environment with a static camera. That is, by using both image processing and deep learning techniques, we recognize the complex structure of the pig room, and this information is then used to improve the detection accuracy of pigs in the monitored room. Furthermore, we reduce the execution-time overhead by applying a pruning technique for real-time video monitoring on an embedded board. Based on experimental results with a video dataset obtained from a commercial pig farm, we confirmed that the pigs could be detected more accurately in real time, even on an embedded board.
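Pruning reduces inference cost by removing the least important weights. A minimal sketch of unstructured magnitude pruning, one common variant; the paper does not specify which pruning technique it uses:

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude fraction of a flat weight list.

    sparsity: fraction of weights to remove, e.g. 0.5 drops half of them.
    Zeroed weights can then be skipped (or stored sparsely) on an
    embedded board to cut multiply-accumulate work.
    """
    k = int(len(weights) * sparsity)  # how many weights to zero
    if k == 0:
        return list(weights)
    # the k-th smallest magnitude becomes the cut-off threshold
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]
```

Magnitude pruning is usually followed by a short fine-tuning pass to recover any accuracy lost when the small weights are removed.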

An Analysis of Plant Diseases Identification Based on Deep Learning Methods

  • Xulu Gong;Shujuan Zhang
    • The Plant Pathology Journal / v.39 no.4 / pp.319-334 / 2023
  • Plant disease is an important factor affecting crop yield. With their many types and complex conditions, plant diseases cause serious economic losses and constrain modern agriculture. Hence, rapid, accurate, and early identification of crop diseases is of great significance. Recent developments in deep learning, especially the convolutional neural network (CNN), have shown impressive performance in plant disease classification. However, most existing datasets for plant disease classification were collected against a single background rather than in a real field environment. In addition, classification can only obtain the category of a single disease and fails to obtain the locations of multiple different diseases, which limits practical application. Object detection methods based on CNNs can overcome these shortcomings and have broad application prospects. In this study, an annotated apple leaf disease dataset in a real field environment was first constructed to compensate for the lack of existing datasets. Moreover, the Faster R-CNN and YOLOv3 architectures were trained to detect apple leaf diseases in our dataset. Finally, comparative experiments were conducted and a variety of evaluation indicators were analyzed. The experimental results demonstrate that deep learning algorithms represented by YOLOv3 and Faster R-CNN are feasible for plant disease detection and have their own strengths and weaknesses.
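Detectors like Faster R-CNN and YOLOv3 are scored by matching predicted boxes to ground-truth boxes via intersection over union (IoU). A minimal sketch of the box-IoU computation these evaluations rest on; the 0.5 threshold mentioned in the comment is the common convention, not a figure from this paper:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes.

    Detection metrics typically count a predicted disease box as correct
    when its IoU with a ground-truth box meets a threshold such as 0.5.
    """
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # zero when boxes don't overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```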

Deep Learning for Weeds' Growth Point Detection based on U-Net

  • Arsa, Dewa Made Sri;Lee, Jonghoon;Won, Okjae;Kim, Hyongsuk
    • Smart Media Journal / v.11 no.7 / pp.94-103 / 2022
  • Weeds damage crops, and a clean treatment that causes less pollution and contamination should be developed. Artificial intelligence gives new hope to agriculture for achieving smart farming. This study presents automated weed growth-point detection using deep learning. It combines semantic graphics for generating data annotations with a U-Net that uses a pre-trained deep learning model as a backbone to locate the growth points of weeds in a given field scene. The dataset was collected from an actual field. We measured intersection over union (IoU), F1-score, precision, and recall to evaluate our method. Moreover, MobileNet V2 was chosen as the backbone and compared with ResNet 34. The results showed that the proposed method was accurate enough to detect growth points and handle brightness variation. The best performance was achieved with MobileNet V2 as the backbone: IoU 96.81%, precision 97.77%, recall 98.97%, and F1-score 97.30%.
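The four scores reported above are all derived from pixel-wise true/false positives and negatives between the predicted and ground-truth masks. A minimal sketch over flattened binary masks, with made-up toy data:

```python
def segmentation_scores(pred, truth):
    """Pixel-wise precision, recall, F1, and IoU for flat binary masks.

    pred and truth are equal-length sequences of 0/1 pixel labels
    (e.g. growth-point vs background from a U-Net output).
    """
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if t and not p)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)  # true negatives never enter IoU
    return precision, recall, f1, iou

# toy masks: one false positive, one false negative
scores = segmentation_scores([1, 1, 0, 1], [1, 0, 1, 1])
```

Note that IoU is always the strictest of the four: it counts both error types in its denominator, which is why the paper's IoU (96.81%) sits below its recall (98.97%).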

Recognition of Occupants' Cold Discomfort-Related Actions for Energy-Efficient Buildings

  • Song, Kwonsik;Kang, Kyubyung;Min, Byung-Cheol
    • International conference on construction engineering and project management / 2022.06a / pp.426-432 / 2022
  • HVAC systems play a critical role in reducing energy consumption in buildings. Integrating occupants' thermal comfort evaluation into HVAC control strategies is believed to reduce building energy consumption while minimizing their thermal discomfort. Advanced technologies, such as visual sensors and deep learning, enable the recognition of occupants' discomfort-related actions, thus making it possible to estimate their thermal discomfort. Unfortunately, it remains unclear how accurate a deep learning-based classifier is to recognize occupants' discomfort-related actions in a working environment. Therefore, this research evaluates the classification performance of occupants' discomfort-related actions while sitting at a computer desk. To achieve this objective, this study collected RGB video data on nine college students' cold discomfort-related actions and then trained a deep learning-based classifier using the collected data. The classification results are threefold. First, the trained classifier has an average accuracy of 93.9% for classifying six cold discomfort-related actions. Second, each discomfort-related action is recognized with more than 85% accuracy. Third, classification errors are mostly observed among similar discomfort-related actions. These results indicate that using human action data will enable facility managers to estimate occupants' thermal discomfort and, in turn, adjust the operational settings of HVAC systems to improve the energy efficiency of buildings in conjunction with their thermal comfort levels.
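The three findings above (average accuracy, per-action accuracy, and errors clustering among similar actions) are all read off a confusion matrix. A minimal sketch with a hypothetical 3-action matrix, not the paper's six-action results:

```python
def per_class_accuracy(confusion):
    """Per-class recall from a confusion matrix whose rows are true classes.

    Off-diagonal mass shows which actions get confused with which;
    similar discomfort-related actions concentrate errors near each other.
    """
    return [row[i] / sum(row) for i, row in enumerate(confusion)]

def overall_accuracy(confusion):
    """Trace over total: the single average-accuracy figure."""
    total = sum(sum(row) for row in confusion)
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    return correct / total

# hypothetical matrix for 3 actions, 10 samples each
conf = [[9, 1, 0],
        [0, 8, 2],
        [1, 0, 9]]
```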


A Study on the Attributes Classification of Agricultural Land Based on Deep Learning Comparison of Accuracy between TIF Image and ECW Image (딥러닝 기반 농경지 속성분류를 위한 TIF 이미지와 ECW 이미지 간 정확도 비교 연구)

  • Kim, Ji Young;Wee, Seong Seung
    • Journal of The Korean Society of Agricultural Engineers / v.65 no.6 / pp.15-22 / 2023
  • In this study, we conduct a comparative study of deep learning-based classification of agricultural field attributes using Tagged Image File (TIF) and Enhanced Compression Wavelet (ECW) images. The goal is to interpret and classify the attributes of agricultural fields by analyzing the differences between these two image formats. "FarmMap," initiated by the Ministry of Agriculture, Food and Rural Affairs in 2014, serves as the first digital map of agricultural land in South Korea. It comprises attributes such as paddy, field, orchard, agricultural facility, and ginseng cultivation areas. To compare deep learning-based agricultural attribute classification, we consider the location and class information of objects as well as the attribute information of FarmMap. We utilize the ResNet-50 instance segmentation model, which is suitable for this task, to conduct simulated experiments. The comparison of agricultural attribute classification between the two image formats is measured in terms of accuracy. The experimental results indicate an accuracy of 90.44% for TIF images and 91.72% for ECW images, with the ECW model approximately 1.28 percentage points higher. However, statistical validation, specifically Wilcoxon rank-sum tests, did not reveal a significant difference in accuracy between the two image formats.
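The Wilcoxon rank-sum test used above starts from a simple statistic: pool both samples, rank them (averaging ranks for ties), and sum one sample's ranks. A minimal sketch of that statistic only; a full test, as presumably used in the paper, would also derive a p-value from it:

```python
def rank_sum(sample_a, sample_b):
    """Wilcoxon rank-sum statistic W for sample_a.

    Ranks are 1-based over the pooled, sorted values; tied values
    receive the average of the rank positions they occupy.
    """
    order = sorted(sample_a + sample_b)

    def rank(v):
        first = order.index(v) + 1                    # first position of v
        last = len(order) - order[::-1].index(v)      # last position of v
        return (first + last) / 2                     # average rank for ties

    return sum(rank(v) for v in sample_a)
```

Because the test compares ranks rather than raw values, it needs no normality assumption, which suits small per-run accuracy samples like the 90.44% vs 91.72% comparison here.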

A Study on Improvement of Image Classification Accuracy Using Image-Text Pairs (이미지-텍스트 쌍을 활용한 이미지 분류 정확도 향상에 관한 연구)

  • Mi-Hui Kim;Ju-Hyeok Lee
    • Journal of IKEEE / v.27 no.4 / pp.561-566 / 2023
  • With the development of deep learning, it has become possible to solve various problems, such as image processing, without specialized expertise. However, most image processing methods use only the image's visual information. Text data related to images, such as descriptions and annotations, may provide additional contextual information that is difficult to obtain from the image itself. In this paper, we improve image classification accuracy through a deep learning model that analyzes images and texts using image-text pairs. The proposed model showed approximately an 11% improvement in classification accuracy over a deep learning model using only image information.
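A common way to combine two modalities is to concatenate their feature vectors and feed the joint vector to one classification head. A minimal linear-head sketch of that fusion idea; the paper's actual architecture is not described here, and all names and numbers below are illustrative:

```python
def fused_score(image_feat, text_feat, weights, bias=0.0):
    """Linear head over concatenated image and text features.

    image_feat / text_feat: feature vectors from each modality's encoder
    (hypothetical; any image CNN and text encoder could produce them).
    weights: one weight per fused dimension, so the head can learn how
    much each modality's features contribute to the class score.
    """
    joint = list(image_feat) + list(text_feat)  # early fusion by concatenation
    return bias + sum(w * x for w, x in zip(weights, joint))
```

In a real model the score would pass through a softmax over classes and the weights would be learned jointly with both encoders; the sketch only shows where the text information enters the decision.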

Road Surface Data Collection and Analysis using A2B Communication in Vehicles from Bearings and Deep Learning Research

  • Young-Min KIM;Jae-Yong HWANG;Sun-Kyoung KANG
    • Korean Journal of Artificial Intelligence / v.11 no.4 / pp.21-27 / 2023
  • This paper discusses a deep learning-based road surface analysis system that collects data from vibration sensors installed on the wheel bearings of a vehicle's four axles, analyzes the data, and classifies the characteristics of the current driving surface for use in the vehicle's control system. The data used for road surface analysis is real-time, high-volume data at 48K samples per second, and the A2B protocol, used for high-volume real-time data communication in modern vehicles, was used to collect it. CAN and CAN-FD, commonly used in vehicle communication, are unable to support real-time road surface analysis due to bandwidth limitations: evaluation requires a minimum of 24K samples/sec, which A2B communication can deliver at full bandwidth. Based on the collected data, performance was assessed using deep learning models such as LSTM, GRU, and RNN. The results showed similar road surface classification performance across all models. It was also observed that the quality of the data used during training had an impact on the performance of each model.
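Before a 48K samples/sec vibration stream can feed an LSTM/GRU/RNN classifier, it must be sliced into fixed-length, usually overlapping, windows. A minimal framing sketch; the window and hop durations below are illustrative assumptions, not values from the paper:

```python
def frame_windows(samples, rate=48_000, win_ms=20, hop_ms=10):
    """Slice a 1-D sample stream into overlapping fixed-size windows.

    rate: samples per second (48K matches the A2B stream described above).
    win_ms / hop_ms: hypothetical window length and stride in milliseconds;
    each window becomes one sequence element (or one feature frame) for
    the recurrent classifier.
    """
    win = rate * win_ms // 1000   # samples per window (960 at defaults)
    hop = rate * hop_ms // 1000   # samples per stride (480 at defaults)
    return [samples[i:i + win]
            for i in range(0, len(samples) - win + 1, hop)]

# one second of the stream yields 99 half-overlapping 20 ms windows
windows = frame_windows(list(range(48_000)))
```

The 50% overlap is a common compromise: it keeps latency low for real-time control while ensuring no road-surface transient falls on a window boundary unseen.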