• Title/Summary/Keyword: Image Deep Learning

The Study for Type of Mask Wearing Dataset for Deep learning and Detection Model (딥러닝을 위한 마스크 착용 유형별 데이터셋 구축 및 검출 모델에 관한 연구)

  • Hwang, Ho Seong; Kim, Dong heon; Kim, Ho Chul
    • Journal of Biomedical Engineering Research / v.43 no.3 / pp.131-135 / 2022
  • Due to COVID-19, wearing a mask correctly is important for preventing COVID-19 and other respiratory tract infections, and deep learning technology for image processing has developed rapidly. The purpose of this study is to build a dataset of mask-wearing types for deep learning models and to select a deep learning model that detects correct mask wearing. The image dataset comprises 2,296 images acquired with a web crawler. Deep learning classification models provided by TensorFlow are used to validate the dataset, and YOLO object detection models are compared to select the model that best detects mask-wearing type. Through this process, the paper proposes a way to validate the mask-wearing-type dataset and identifies YOLOv5 as an effective model for detecting the type of mask wearing. The experimental results show that a reliable dataset was acquired and that the YOLOv5 model effectively recognizes the type of mask wearing.
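
A minimal sketch of the kind of pipeline this abstract describes: a stock TensorFlow classifier to sanity-check the crawled dataset, followed by a YOLOv5 detector loaded through torch.hub. The folder layout, number of wearing types, and weight path are placeholders, not details from the paper.

```python
# Sketch only; dataset layout, class count, and weight path are assumptions.
import tensorflow as tf
import torch

# 1) Validate the mask-wearing-type dataset with a stock TF classifier.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "mask_dataset/train", image_size=(224, 224), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "mask_dataset/val", image_size=(224, 224), batch_size=32)

classifier = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # MobileNetV2 input range
    tf.keras.applications.MobileNetV2(include_top=False, pooling="avg",
                                      weights="imagenet"),
    tf.keras.layers.Dense(3, activation="softmax"),  # 3 wearing types (assumed)
])
classifier.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
classifier.fit(train_ds, validation_data=val_ds, epochs=5)

# 2) Detect mask-wearing type with a YOLOv5 model fine-tuned elsewhere.
detector = torch.hub.load("ultralytics/yolov5", "custom",
                          path="runs/train/mask/weights/best.pt")
results = detector("test_image.jpg")
results.print()  # class, confidence, and box for each detection
```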

Image Reconstruction Method for Photonic Integrated Interferometric Imaging Based on Deep Learning

  • Qianchen Xu; Weijie Chang; Feng Huang; Wang Zhang
    • Current Optics and Photonics / v.8 no.4 / pp.391-398 / 2024
  • An image reconstruction algorithm is vital to the image quality of a photonic integrated interferometric imaging (PIII) system. However, existing image reconstruction algorithms have limitations that lead to degraded reconstructed images. In this paper, a novel image reconstruction algorithm based on deep learning is proposed. First, the principle of optical signal transmission through the PIII system is investigated, and a dataset suitable for image reconstruction in the PIII system is constructed. Key components such as the network model and loss functions are compared and designed to address image blurring and the influence of noise. Comparison with other algorithms verifies that the proposed algorithm achieves good reconstruction results both qualitatively and quantitatively.
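
The abstract does not specify the network architecture or loss functions, so the block below is only an illustrative sketch of a learned reconstruction setup: a small residual convolutional network trained with an L1 loss on pairs of degraded system outputs and ground-truth scenes (both stand-ins here), not the authors' model.

```python
# Illustrative sketch only; architecture, loss, and data are assumptions.
import torch
import torch.nn as nn

class ReconNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, degraded):
        # Residual learning: predict a correction to the degraded input.
        return degraded + self.body(degraded)

net = ReconNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

degraded = torch.rand(8, 1, 64, 64)   # stand-in for degraded system outputs
target = torch.rand(8, 1, 64, 64)     # stand-in for ground-truth scenes

pred = net(degraded)
loss = loss_fn(pred, target)
opt.zero_grad()
loss.backward()
opt.step()
```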

Radiation Prediction Based on Multi Deep Learning Model Using Weather Data and Weather Satellites Image (기상 데이터와 기상 위성 영상을 이용한 다중 딥러닝 모델 기반 일사량 예측)

  • Jae-Jung Kim; Yong-Hun You; Chang-Bok Kim
    • Journal of Advanced Navigation Technology / v.25 no.6 / pp.569-575 / 2021
  • Deep learning shows differences in prediction performance depending on data quality and model. This study uses various input data and multiple deep learning models to build an optimal deep learning model for predicting solar radiation, which has the greatest influence on power generation forecasting. As input data, weather data from the Korea Meteorological Administration and segmented weather satellite images were used. Single deep learning models were trained and comparatively evaluated, and solar radiation was then predicted with multiple deep learning models formed by connecting the models with the best error rates. In the experiments, the RMSE of model A, a multiple deep learning model, was 0.0637, the RMSE of model B was 0.07062, and the RMSE of model C was 0.06052, so the error rates of models A and C were better than those of a single model. The models that connected two or more models showed improved prediction rates and stable learning results.
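
A minimal sketch of the "multiple deep learning model" idea, under the assumption that the predictions of the best single models (one from weather records, one from satellite imagery) are connected by a small regression head and scored with RMSE; the data and variable names are stand-ins, not the study's models.

```python
# Sketch with stand-in data; not the study's actual models or features.
import numpy as np
import tensorflow as tf

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Stand-ins for the outputs of the best single sub-models.
pred_weather = np.random.rand(1000, 1).astype("float32")
pred_satellite = np.random.rand(1000, 1).astype("float32")
y_true = np.random.rand(1000, 1).astype("float32")

stacked = np.concatenate([pred_weather, pred_satellite], axis=1)

# Small head that "connects" the two sub-models into one predictor.
head = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
head.compile(optimizer="adam", loss="mse")
head.fit(stacked, y_true, epochs=10, verbose=0)

print("RMSE:", rmse(y_true, head.predict(stacked, verbose=0)))
```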

Implementation of YOLOv5-based Forest Fire Smoke Monitoring Model with Increased Recognition of Unstructured Objects by Increasing Self-learning data

  • Gun-wo, Do; Minyoung, Kim; Si-woong, Jang
    • International Journal of Advanced Culture Technology / v.10 no.4 / pp.536-546 / 2022
  • Society suffers significant losses when a forest fire breaks out. If a forest fire can be detected in advance, damage caused by its spread can be prevented early, so we studied how to detect forest fires using CCTV cameras that are already installed. In this paper, we present a deep learning-based model for monitoring forest fire smoke, an unstructured object, through efficient construction of image data on top of the YOLOv5 deep learning model. Through this study, we aimed to accurately detect forest fire smoke, an amorphous object of varying shapes, with YOLOv5. We introduce a self-learning method that generates additional data on its own to compensate for insufficient data and increase the accuracy of unstructured object recognition. The proposed method uses the original images and a model trained on them to construct a dataset with fixed labelling positions for images containing objects that can be extracted from the original images. By training the deep learning model on this dataset, the performance (mAP) was improved and the errors caused by detecting objects other than the target object were reduced, compared to a model trained only on the original images.
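
A rough sketch of the self-learning idea described here, assuming a YOLOv5 model already trained on the original images, a folder of unlabelled CCTV frames, and a confidence threshold chosen for illustration: confident smoke detections are written out as additional YOLO-format labels. The paths and threshold are placeholders, not the authors' pipeline.

```python
# Sketch of self-labelling with a pre-trained detector; paths are assumptions.
from pathlib import Path
import torch

model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="runs/train/smoke/weights/best.pt")
model.conf = 0.6  # keep only confident detections as pseudo-labels

out_dir = Path("pseudo_labels")
out_dir.mkdir(exist_ok=True)

for img_path in Path("unlabelled_frames").glob("*.jpg"):
    det = model(str(img_path))
    rows = det.xywhn[0]  # normalized (x, y, w, h, conf, cls) per detection
    lines = [f"{int(cls)} {x:.6f} {y:.6f} {w:.6f} {h:.6f}"
             for x, y, w, h, conf, cls in rows.tolist()]
    if lines:  # write a YOLO-format label file for this frame
        (out_dir / f"{img_path.stem}.txt").write_text("\n".join(lines))
```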

Use of deep learning in nano image processing through the CNN model

  • Xing, Lumin; Liu, Wenjian; Liu, Xiaoliang; Li, Xin; Wang, Han
    • Advances in nano research / v.12 no.2 / pp.185-195 / 2022
  • Deep learning is a field of artificial intelligence (AI) used for computer-aided diagnosis (CAD) and image processing in scientific research. Reading image slices involves many repetitive tasks, takes time, and is constrained by geographical limits, and interpreting image information is difficult because of its strong subjectivity, which raises the error rate of misdiagnosis. Given the high mortality rate of lung cancer, a biopsy is needed to determine its class for further treatment. Deep learning has recently provided strong tools for diagnosing lung cancer and planning therapeutic regimens. However, identifying the pathological class of lung cancer from CT images at an early stage is difficult because of the absence of powerful AI models and public training datasets. A convolutional neural network (CNN) is proposed for its essential role in recognizing pathological CT images. 472 patients who underwent staging FDG-PET/CT within the 2 months prior to surgery or biopsy were selected. The developed CNN showed accuracies of 87%, 69%, and 69% on the training, validation, and test sets, respectively, for T1-T2 versus T3-T4 lung cancer classification. The results indicate that the CNN (or deep learning) can make good use of the CT image dataset and that such a classifier achieves better accuracy in distinguishing pathological CT images than several other deep learning models, such as ResNet-34, AlexNet, and DenseNet, with or without softmax weights.
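
A minimal sketch of a binary CNN classifier of the kind described (T1-T2 vs. T3-T4); the layer sizes and input shape are assumptions for illustration, not the authors' architecture.

```python
# Sketch only; architecture and input shape are illustrative assumptions.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 1)),              # single-channel CT slice
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # P(T3-T4)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```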

Impacts of label quality on performance of steel fatigue crack recognition using deep learning-based image segmentation

  • Hsu, Shun-Hsiang; Chang, Ting-Wei; Chang, Chia-Ming
    • Smart Structures and Systems / v.29 no.1 / pp.207-220 / 2022
  • Structural health monitoring (SHM) plays a vital role in the maintenance and operation of constructions. In recent years, autonomous inspection has received considerable attention because conventional monitoring methods are inefficient and expensive to some extent. To develop autonomous inspection, a reliable approach to crack identification is needed to locate defects. Therefore, this study exploits two deep learning-based segmentation models, DeepLabv3+ and Mask R-CNN, for crack segmentation because these two models outperform other similar models on public datasets. Additionally, the impact of label quality on model performance is explored to obtain an empirical guideline on the preparation of image datasets. The influence of image cropping and label refining is also investigated, and different strategies are applied to the dataset, resulting in six alternative datasets. In experiments with these datasets, the highest mean Intersection-over-Union (mIoU), 75%, is achieved by Mask R-CNN. Increasing the percentage of annotations by image cropping improves model performance, while label refining has opposite effects on the two models. As label refining results in fewer erroneous crack annotations, this modification enhances the performance of DeepLabv3+. In contrast, the performance of Mask R-CNN decreases because fragmented annotations may mistake one instance for multiple instances. In sum, both DeepLabv3+ and Mask R-CNN are capable of crack identification, and an empirical guideline on data preparation via image cropping and label refining is presented to strengthen identification success.
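
For reference, a short sketch of the reported evaluation metric, mean Intersection-over-Union (mIoU), computed here over the crack and background classes from binary masks; the example masks are stand-ins.

```python
# Sketch of mIoU over crack and background classes; inputs are stand-ins.
import numpy as np

def mean_iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """pred, truth: boolean masks where True marks crack pixels."""
    ious = []
    for cls_pred, cls_truth in [(pred, truth), (~pred, ~truth)]:
        inter = np.logical_and(cls_pred, cls_truth).sum()
        union = np.logical_or(cls_pred, cls_truth).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.zeros((256, 256), dtype=bool); pred[100:120, :] = True
truth = np.zeros((256, 256), dtype=bool); truth[105:125, :] = True
print(f"mIoU = {mean_iou(pred, truth):.3f}")
```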

A Deep Learning Method for Brain Tumor Classification Based on Image Gradient

  • Long, Hoang; Lee, Suk-Hwan; Kwon, Seong-Geun; Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.25 no.8 / pp.1233-1241 / 2022
  • Brain tumors are among the deadliest cancers, with a life expectancy of only a few years for those with the most advanced forms. Diagnosing a brain tumor is critical to developing a treatment plan that helps patients with the disease live longer. A misdiagnosis of a brain tumor leads to incorrect medical treatment and decreases a patient's chance of survival. Radiologists classify brain tumors via biopsy, which takes a long time, so doctors need an automatic classification system to identify brain tumors. Image classification is one application of deep learning in computer vision, and one of deep learning's most powerful algorithms is the convolutional neural network (CNN). This paper introduces a novel deep learning structure combined with image gradients to classify brain tumors. Meningioma, glioma, and pituitary tumors are the three most common forms of brain cancer represented in the Figshare dataset, which contains 3,064 T1-weighted brain images from 233 patients. According to the numerical results, our method is more accurate than other approaches.
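
A small sketch of the image-gradient idea, assuming a Sobel gradient-magnitude map is stacked with the original slice as an extra network input channel; the exact way the paper combines gradients with the CNN is not given in the abstract.

```python
# Sketch: gradient-magnitude channel as auxiliary input (assumed layout).
import numpy as np
from scipy import ndimage

def with_gradient_channel(image: np.ndarray) -> np.ndarray:
    """image: 2-D grayscale MRI slice -> (H, W, 2) array."""
    gx = ndimage.sobel(image.astype(np.float32), axis=1)
    gy = ndimage.sobel(image.astype(np.float32), axis=0)
    grad = np.hypot(gx, gy)
    grad /= grad.max() + 1e-8          # normalize gradient magnitude
    return np.stack([image, grad], axis=-1)

slice_ = np.random.rand(512, 512).astype(np.float32)  # stand-in MRI slice
print(with_gradient_channel(slice_).shape)            # (512, 512, 2)
```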

Evaluation of Adult Lung CT Image for Ultra-Low-Dose CT Using Deep Learning Based Reconstruction

  • JO, Jun-Ho; MIN, Hyo-June; JEON, Kwang-Ho; KIM, Yu-Jin; LEE, Sang-Hyeok; KIM, Mi-Sung; JEON, Pil-Hyun; KIM, Daehong; BAEK, Cheol-Ha; LEE, Hakjae
    • Korean Journal of Artificial Intelligence / v.9 no.2 / pp.1-5 / 2021
  • Although CT has the advantage of depicting the three-dimensional anatomical structure of the human body, it has the disadvantage of exposing the patient to high doses. Recently, deep learning-based image reconstruction methods have been used to reduce patient dose. The purpose of this study is to analyze the dose reduction and image quality improvement of deep learning-based reconstruction (DLR) in adult chest CT examinations. An adult lung phantom was used for image acquisition and analysis. The lung phantom was scanned in ultra-low-dose (ULD), low-dose (LD), and standard-dose (SD) modes, and images were reconstructed using FBP (filtered back projection), IR (iterative reconstruction), and DLR (deep learning reconstruction) algorithms. Image quality variations with respect to imaging dose were evaluated using noise and SNR. In ULD mode, the noise of the DLR image was reduced by 62.42% compared to the FBP image, and in SD mode, the SNR of the DLR image was increased by 159.60% compared to that of the FBP image. Based on this study, it is anticipated that DLR will not only substantially reduce chest CT dose but also markedly improve image quality.
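
A sketch of how such metrics can be computed, assuming noise is the standard deviation inside a uniform region of interest (ROI) and SNR is mean/std in that ROI; the images and ROI coordinates are placeholders, not the study's measurements.

```python
# Sketch of ROI-based noise and SNR comparison; images and ROI are stand-ins.
import numpy as np

def roi_noise_and_snr(image, roi):
    """roi = (row0, row1, col0, col1) of a uniform region."""
    r0, r1, c0, c1 = roi
    patch = image[r0:r1, c0:c1].astype(np.float64)
    noise = patch.std()
    snr = patch.mean() / noise if noise > 0 else float("inf")
    return float(noise), float(snr)

fbp = np.random.normal(100, 20, (512, 512))   # stand-in FBP reconstruction
dlr = np.random.normal(100, 8, (512, 512))    # stand-in DLR reconstruction

roi = (200, 260, 200, 260)
n_fbp, s_fbp = roi_noise_and_snr(fbp, roi)
n_dlr, s_dlr = roi_noise_and_snr(dlr, roi)
print(f"noise reduction: {100 * (n_fbp - n_dlr) / n_fbp:.1f}%")
print(f"SNR increase:    {100 * (s_dlr - s_fbp) / s_fbp:.1f}%")
```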

A Review on Deep Learning-based Image Outpainting (딥러닝 기반 이미지 아웃페인팅 기술의 현황 및 최신 동향)

  • Kim, Kyunghun; Kong, Kyeongbo; Kang, Suk-ju
    • Journal of Broadcast Engineering / v.26 no.1 / pp.61-69 / 2021
  • Image outpainting is an interesting problem in that it continuously fills in the region outside a given image by considering the context of the image. There are two main challenges in this task. The first is to maintain spatial consistency between the content of the generated area and the original input. The second is to generate a high-quality large image from a small amount of adjacent information. Existing image outpainting methods suffer from difficulties such as generating inconsistent, blurry, and repetitive pixels. However, thanks to the recent development of deep learning technology, deep learning-based algorithms that show high performance compared to traditional techniques have been introduced, and deep learning-based image outpainting has been actively researched, with various networks proposed to date. In this paper, we introduce the latest technology and trends in the field of outpainting. This study compares recent techniques by analyzing representative networks among deep learning-based outpainting algorithms and presents experimental results on various datasets with several comparison methods.

Non-Homogeneous Haze Synthesis for Hazy Image Depth Estimation Using Deep Learning (불균일 안개 영상 합성을 이용한 딥러닝 기반 안개 영상 깊이 추정)

  • Choi, Yeongcheol; Paik, Jeehyun; Ju, Gwangjin; Lee, Donggun; Hwang, Gyeongha; Lee, Seungyong
    • Journal of the Korea Computer Graphics Society / v.28 no.3 / pp.45-54 / 2022
  • Image depth estimation is a technology that underlies various kinds of image analysis. As analysis methods using deep learning models emerge, studies applying deep learning to image depth estimation are being actively conducted. Currently, most deep learning-based depth estimation models are trained with clean, ideal images. However, due to the lack of data for adverse conditions such as haze or fog, depth estimation may not work well in such environments. It is hard to obtain enough images of these environments, and acquiring non-homogeneous haze data in particular is very difficult. To solve this problem, this study proposes a method of synthesizing non-homogeneous haze images and a training scheme for a monocular depth estimation deep learning model that uses them. Considering that haze mainly occurs outdoors, datasets consisting mainly of outdoor images are constructed. Experimental results show that the model trained with the proposed method estimates depth well on both synthesized and real haze data.
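
A compact sketch of non-homogeneous haze synthesis using the standard atmospheric scattering model I = J·t + A·(1 − t), where the transmission map t is made spatially non-uniform with smoothed random noise; the scattering coefficient, airlight, and smoothing scale are illustrative assumptions, not the paper's parameters.

```python
# Sketch of non-homogeneous haze synthesis; parameters are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def synthesize_haze(clear, depth, beta=1.2, airlight=0.9):
    """clear: (H, W, 3) image in [0, 1]; depth: (H, W) relative depth map."""
    # Spatially varying scattering coefficient -> non-homogeneous haze.
    noise = gaussian_filter(np.random.rand(*depth.shape), sigma=40)
    beta_map = beta * (0.5 + noise)                 # vary beta across the image
    t = np.exp(-beta_map * depth)                   # transmission map
    return clear * t[..., None] + airlight * (1.0 - t[..., None])

clear = np.random.rand(256, 256, 3)   # stand-in clear image
depth = np.random.rand(256, 256) * 3  # stand-in depth map
hazy = synthesize_haze(clear, depth)
print(hazy.shape, float(hazy.min()), float(hazy.max()))
```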