• Title/Summary/Keyword: Deep Learning

Search results: 5,795

Validation Data Augmentation for Improving the Grading Accuracy of Diabetic Macular Edema using Deep Learning (딥러닝을 이용한 당뇨성황반부종 등급 분류의 정확도 개선을 위한 검증 데이터 증강 기법)

  • Lee, Tae Soo
    • Journal of Biomedical Engineering Research / v.40 no.2 / pp.48-54 / 2019
  • This paper proposed a method of validation data augmentation for improving the grading accuracy of diabetic macular edema (DME) using deep learning. The data augmentation technique is normally applied to secure data diversity by transforming one image into several images through random translation, rotation, scaling, and reflection when preparing the input data of a deep neural network (DNN). In this paper, we apply this technique in the validation process of the trained DNN and improve the grading accuracy by combining the classification results of the augmented images. To verify its effectiveness, 1,200 retinal images of the Messidor dataset were divided into training and validation data at a 7:3 ratio. By applying random augmentation to the 359 validation images, an accuracy improvement of 1.61 ± 0.55% was achieved for six-fold augmentation (N=6). This simple method showed that accuracy can be improved for N in the range of 2 to 6, with a correlation coefficient of 0.5667. Therefore, it is expected to help improve the diagnostic accuracy of DME with the grading information provided by the proposed DNN.
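
The approach described above, combining a trained network's predictions over several randomly augmented copies of each validation image, is a form of test-time augmentation. A minimal PyTorch sketch follows, assuming a generic trained classifier and tensor images; the transform ranges and N=6 are illustrative, not the authors' exact configuration.

```python
import torch
import torchvision.transforms as T

# Random translation, rotation, scaling, and reflection, mirroring the
# transformations named in the abstract (parameter ranges are assumptions).
augment = T.Compose([
    T.RandomAffine(degrees=10, translate=(0.05, 0.05), scale=(0.9, 1.1)),
    T.RandomHorizontalFlip(),
])

@torch.no_grad()
def predict_with_tta(model, image, n_aug=6):
    """Average softmax outputs over n_aug augmented copies of one image tensor."""
    model.eval()
    copies = torch.stack([augment(image) for _ in range(n_aug)])  # (n_aug, C, H, W)
    probs = torch.softmax(model(copies), dim=1)                   # (n_aug, num_grades)
    return probs.mean(dim=0).argmax().item()                      # combined DME grade
```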

Application of Deep Learning to Solar Data: 6. Super Resolution of SDO/HMI magnetograms

  • Rahman, Sumiaya;Moon, Yong-Jae;Park, Eunsu;Jeong, Hyewon;Shin, Gyungin;Lim, Daye
    • The Bulletin of The Korean Astronomical Society / v.44 no.1 / pp.52.1-52.1 / 2019
  • The Helioseismic and Magnetic Imager (HMI) is the instrument of the Solar Dynamics Observatory (SDO) used to study the magnetic field and oscillations at the solar surface. The HMI image is not sufficient for analyzing very small magnetic features on the solar surface, since it has a spatial resolution of one arcsec. Super resolution is a technique that enhances the resolution of a low-resolution image. In this study, we enhance the solar image resolution using a deep-learning model that generates a high-resolution HMI image from a low-resolution HMI image (4 by 4 binning). Deep learning networks learn the hidden mapping between low-resolution and high-resolution images from pairs of input images and the corresponding output images. In this study, we trained a model based on very deep residual channel attention networks (RCAN) with HMI images from 2014 and tested it with HMI images from 2015. We find that the model achieves high-quality results both visually and quantitatively: a peak signal-to-noise ratio (PSNR) of 31.40, a correlation coefficient of 0.96, and a root mean square error (RMSE) of 0.004. This result is much better than conventional bi-cubic interpolation. We will apply this model to full-resolution SDO/HMI and GST magnetograms.
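
The reported figures (PSNR, correlation coefficient, RMSE) follow the standard definitions sketched below in NumPy; the sketch assumes images normalized to [0, 1], which may differ from the authors' exact normalization.

```python
import numpy as np

def rmse(pred, target):
    """Root mean square error between two images (arrays scaled to [0, 1])."""
    return float(np.sqrt(np.mean((pred - target) ** 2)))

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB, as commonly reported for super resolution."""
    return float(20 * np.log10(max_val / rmse(pred, target)))

def correlation_coefficient(pred, target):
    """Pearson correlation coefficient between flattened images."""
    return float(np.corrcoef(pred.ravel(), target.ravel())[0, 1])
```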

Region of Interest Localization for Bone Age Estimation Using Whole-Body Bone Scintigraphy

  • Do, Thanh-Cong;Yang, Hyung Jeong;Kim, Soo Hyung;Lee, Guee Sang;Kang, Sae Ryung;Min, Jung Joon
    • Smart Media Journal / v.10 no.2 / pp.22-29 / 2021
  • In the past decade, deep learning has been applied to various medical image analysis tasks. Skeletal bone age estimation is clinically important as it can help prevent age-related illness and pave the way for new anti-aging therapies. Recent research has applied deep learning techniques to the task of bone age assessment and achieved positive results. In this paper, we propose a bone age prediction method using a deep convolutional neural network. Specifically, we first train a classification model that automatically localizes the most discriminative region of an image and crops it from the original image. The regions of interest are then used as input for a regression model to estimate the age of the patient. The experiments are conducted on a whole-body scintigraphy dataset that was collected by Chonnam National University Hwasun Hospital. The experimental results illustrate the potential of our proposed method, which has a mean absolute error of 3.35 years. Our proposed framework can be used as a robust supporting tool for clinicians to prevent age-related diseases.
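
The two-stage design described above, a localization model that crops the most discriminative region followed by a regression model that predicts age, can be sketched as below. The localizer interface (returning a bounding box), the 224 × 224 crop size, and the model names are hypothetical stand-ins, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def estimate_bone_age(localizer, regressor, scan):
    """Sketch: localize a discriminative region of a scintigraphy scan, crop it,
    then regress the patient's age from the cropped region of interest."""
    localizer.eval()
    regressor.eval()
    with torch.no_grad():
        # Hypothetical localizer output: one (x1, y1, x2, y2) box per image.
        x1, y1, x2, y2 = localizer(scan.unsqueeze(0))[0].int().tolist()
        roi = scan[:, y1:y2, x1:x2]                      # crop region of interest
        roi = F.interpolate(roi.unsqueeze(0), size=(224, 224),
                            mode="bilinear", align_corners=False)
        return regressor(roi).item()                     # predicted age in years
```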

Improved Deep Residual Network for Apple Leaf Disease Identification

  • Zhou, Changjian;Xing, Jinge
    • Journal of Information Processing Systems / v.17 no.6 / pp.1115-1126 / 2021
  • Plant disease is one of the most pressing problems for agricultural growers. Timely detection of plant diseases is therefore of high practical value, since corresponding measures can be taken at an early stage of the disease. Numerous researchers have made unremitting efforts in plant disease identification, but this problem was not solved effectively until the development of artificial intelligence and big data technologies, especially the wide application of deep learning models in different fields. Since the symptoms of plant diseases mainly appear visually on leaves, computer vision and machine learning technologies are effective and rapid methods for identifying various kinds of plant diseases. As one of the fruits with the highest nutritional value, apple production directly affects the quality of life, and it is important to prevent disease intrusion in advance for yield and taste. In this study, an improved deep residual network is proposed for apple leaf disease identification in a novel way: a global residual connection is added to the original residual network, and the local residual connection architecture is optimized. On a dataset of 1,977 apple leaf disease images in three categories collected for this study, experimental results show that the proposed method achieves 98.74% top-1 accuracy on the test set, outperforming existing state-of-the-art models on apple leaf disease identification and demonstrating the effectiveness of the proposed method.
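
The architectural idea, local residual blocks inside a trunk plus an added global skip from the trunk input to its output, can be sketched in PyTorch as follows. Channel counts, block depth, and the classification head are assumptions; only the three-class output follows the abstract.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Standard local residual block: output = ReLU(F(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels))

    def forward(self, x):
        return torch.relu(self.body(x) + x)

class GlobalResidualNet(nn.Module):
    """Sketch of a residual trunk with an additional global skip connection
    from the trunk input to the trunk output, on top of the local skips."""
    def __init__(self, channels=64, num_blocks=4, num_classes=3):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        self.trunk = nn.Sequential(*[ResidualBlock(channels) for _ in range(num_blocks)])
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(channels, num_classes))

    def forward(self, x):
        feats = self.stem(x)
        out = self.trunk(feats) + feats   # global residual connection
        return self.head(out)
```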

Deep reinforcement learning for optimal life-cycle management of deteriorating regional bridges using double-deep Q-networks

  • Xiaoming, Lei;You, Dong
    • Smart Structures and Systems / v.30 no.6 / pp.571-582 / 2022
  • Optimal life-cycle management is a challenging issue for deteriorating regional bridges. Due to the complexity of regional bridge structural conditions and the large number of possible inspection and maintenance actions, decision-makers generally choose traditional passive management strategies, which are less efficient and less cost-effective. To tackle these problems, this paper proposes a deep reinforcement learning framework employing double-deep Q-networks (DDQNs) to improve the life-cycle management of deteriorating regional bridges. It produces optimal maintenance plans that respect the given restrictions while maximizing maintenance cost-effectiveness to the greatest extent possible. The DDQN method handles the overestimation of Q-values that affects the Nature DQN. This study also identifies regional bridge deterioration characteristics and the consequences of scheduled maintenance from years of inspection data. To validate the proposed method, a case study containing hundreds of bridges is used to develop optimal life-cycle management strategies. The optimization solutions recommend fewer replacement actions and prefer preventative repair actions when bridges are damaged or are expected to be damaged. By employing the optimal life-cycle regional maintenance strategies, the conditions of the bridges can be kept at a good level. Compared to the Nature DQN, DDQN offers an optimized scheme containing fewer low-condition bridges and a more cost-effective life-cycle management plan.
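
The key mechanism named here, Double DQN's decoupling of action selection (online network) from action evaluation (target network) to curb Q-value overestimation, reduces to the target computation below. This is the generic DDQN update, not the paper's specific bridge state or action encoding.

```python
import torch

def ddqn_targets(online_net, target_net, rewards, next_states, dones, gamma=0.99):
    """Double-DQN bootstrap target: the online network selects the next action,
    the target network evaluates it, mitigating Q-value overestimation."""
    with torch.no_grad():
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)   # selection
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)  # evaluation
        return rewards + gamma * next_q * (1.0 - dones)  # dones: 1.0 at episode end
```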

Adversarial Attacks for Deep Learning-Based Infrared Object Detection (딥러닝 기반 적외선 객체 검출을 위한 적대적 공격 기술 연구)

  • Kim, Hoseong;Hyun, Jaeguk;Yoo, Hyunjung;Kim, Chunho;Jeon, Hyunho
    • Journal of the Korea Institute of Military Science and Technology / v.24 no.6 / pp.591-601 / 2021
  • Recently, infrared object detection (IOD) has been extensively studied due to the rapid growth of deep neural networks (DNNs). Adversarial attacks using imperceptible perturbations can dramatically degrade the performance of DNNs. However, most adversarial attack work has focused on visible image recognition (VIR), and there are few methods for IOD. We propose deep learning-based adversarial attacks for IOD by extending several state-of-the-art adversarial attacks for VIR. We validate our claims through comprehensive experiments on two challenging IOD datasets, FLIR and MSOD.
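
Gradient-sign attacks such as FGSM are among the standard VIR attacks that this kind of work typically extends to detection. A minimal sketch follows, assuming a differentiable detector wrapped so that `loss_fn(model(images), targets)` returns its training loss; the interface and the eps value are assumptions, not the paper's setup.

```python
import torch

def fgsm_perturb(model, loss_fn, images, targets, eps=0.01):
    """Fast Gradient Sign Method: one gradient-sign step bounded by eps,
    producing an imperceptible perturbation that degrades detection."""
    images = images.clone().detach().requires_grad_(True)
    loss = loss_fn(model(images), targets)
    loss.backward()
    adv = images + eps * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```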

Image Translation of SDO/AIA Multi-Channel Solar UV Images into Another Single-Channel Image by Deep Learning

  • Lim, Daye;Moon, Yong-Jae;Park, Eunsu;Lee, Jin-Yi
    • The Bulletin of The Korean Astronomical Society / v.44 no.2 / pp.42.3-42.3 / 2019
  • We translate Solar Dynamics Observatory/Atmospheric Imaging Assembly (AIA) ultraviolet (UV) multi-channel images into another UV single-channel image using a deep learning algorithm based on conditional generative adversarial networks (cGANs). The base input channel, which has the highest correlation coefficient (CC) among the UV channels of AIA, is 193 Å. To complement this channel, we choose two channels, 1600 and 304 Å, which represent the upper photosphere and the chromosphere, respectively. The input channels for the three models are single (193 Å), dual (193+1600 Å), and triple (193+1600+304 Å), respectively. Quantitative comparisons are made on test data sets. The main results of this study are as follows. First, the single model successfully produces other coronal channel images but is less successful for the chromospheric channel (304 Å) and much less successful for the two photospheric channels (1600 and 1700 Å). Second, the dual model shows a noticeable improvement in the CC between the model outputs and the ground truths for 1700 Å. Third, the triple model can generate all other channel images with relatively high CCs, larger than 0.89. Our results show that if three channels from the photosphere, chromosphere, and corona are selected, other multi-channel images could be generated by deep learning. We expect this investigation to be a complementary tool for choosing a few UV channels for future solar small and/or deep space missions.
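
cGAN-based image translation of this kind is commonly trained with an adversarial term plus a pixel-wise L1 term (as in pix2pix); the abstract does not give the exact objective, so the sketch below, including the discriminator interface and the L1 weight, is an assumption.

```python
import torch
import torch.nn.functional as F

def cgan_generator_loss(discriminator, fake, real, condition, l1_weight=100.0):
    """Pix2pix-style conditional-GAN objective for channel-to-channel translation:
    fool the discriminator on (condition, fake) pairs and stay close to the
    target channel in L1. Weights and interface are illustrative assumptions."""
    pred_fake = discriminator(torch.cat([condition, fake], dim=1))
    adv_loss = F.binary_cross_entropy_with_logits(pred_fake, torch.ones_like(pred_fake))
    return adv_loss + l1_weight * F.l1_loss(fake, real)
```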

Comparative Experimental Study on the Evaluation of the Unit-water Content of Mortar According to the Structure of the Deep Learning Model (딥러닝 모델 구조에 따른 모르타르의 단위수량 평가에 대한 비교 실험 연구)

  • Cho, Yang-Je;Yu, Seung-Hwan;Yang, Hyun-Min;Yoon, Jong-Wan;Park, Tae-Joon;Lee, Han-Seung
    • Proceedings of the Korean Institute of Building Construction Conference / 2021.11a / pp.8-9 / 2021
  • The unit-water content of concrete is one of the important factors in determining the quality of concrete and is directly related to the durability of the constructed structure. It is currently measured by the air meter method and the electrostatic capacity method; however, these measurement methods are complex and time-consuming. Therefore, a high-frequency moisture sensor was used for quick and accurate measurement, and the unit-water content of mortar was evaluated through machine learning and deep learning based on measurement big data. The multi-input deep learning model is 24.25% more accurate than the OLS linear regression model, which shows that deep learning can identify the nonlinear relationship between high-frequency moisture sensor data and unit-water content more effectively than linear regression.
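
A multi-input model of the kind compared against OLS here can be sketched as below: separate encoders for two hypothetical groups of sensor features, concatenated and regressed to a single unit-water content value. Input dimensions, layer sizes, and the feature split are assumptions, not the study's configuration.

```python
import torch
import torch.nn as nn

class MultiInputRegressor(nn.Module):
    """Sketch of a multi-input regression network for unit-water content."""
    def __init__(self, dim_a=8, dim_b=8):
        super().__init__()
        self.enc_a = nn.Sequential(nn.Linear(dim_a, 32), nn.ReLU())  # sensor group A
        self.enc_b = nn.Sequential(nn.Linear(dim_b, 32), nn.ReLU())  # sensor group B
        self.head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x_a, x_b):
        # Concatenate the two encoded inputs, then regress a single value.
        return self.head(torch.cat([self.enc_a(x_a), self.enc_b(x_b)], dim=1))
```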

Stress Detection System for Emotional Labor Based On Deep Learning Facial Expression Recognition (감정노동자를 위한 딥러닝 기반의 스트레스 감지시스템의 설계)

  • Og, Yu-Seon;Cho, Woo-hyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.613-617 / 2021
  • With the growth of the service industry, stress among emotional labor workers has emerged as a social problem, and the so-called Emotional Labor Protection Act was implemented in 2018. However, the lack of substantial protection systems for emotional laborers underscores the need for a digital stress management system. Thus, in this paper, we propose a stress detection system for customer service representatives based on deep learning facial expression recognition. The system consists of a real-time face detection module, an emotion classification FER module trained on big data including Korean emotion images, and a monitoring module that visualizes only the stress level. The system is designed to monitor stress and help prevent mental illness in emotional laborers.
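
A minimal sketch of such a pipeline, face detection followed by per-face emotion classification and a simple stress score, is given below. OpenCV's bundled Haar cascade stands in for the face detection module; `emotion_model`, the 48 × 48 input size, and the mapping from negative emotions to stress are hypothetical stand-ins for the modules described.

```python
import cv2

# OpenCV's bundled frontal-face Haar cascade (stand-in for real-time face detection).
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

NEGATIVE = {"angry", "sad", "fear", "disgust"}  # emotions treated as stress indicators

def stress_level(frame, emotion_model):
    """Return the fraction of detected faces showing a negative emotion in one frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return 0.0
    labels = [emotion_model.predict(cv2.resize(gray[y:y + h, x:x + w], (48, 48)))
              for (x, y, w, h) in faces]
    return sum(lbl in NEGATIVE for lbl in labels) / len(faces)
```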

Indoor Environment Drone Detection through DBSCAN and Deep Learning

  • Ha Tran Thi;Hien Pham The;Yun-Seok Mun;Ic-Pyo Hong
    • Journal of IKEEE / v.27 no.4 / pp.439-449 / 2023
  • In an era marked by the increasing use of drones and the growing demand for indoor surveillance, the development of a robust application for detecting and tracking both drones and humans within indoor spaces becomes imperative. This study presents an application that uses FMCW radar to detect human and drone motions from the radar point cloud. At the outset, the DBSCAN (density-based spatial clustering of applications with noise) algorithm is used to group the point cloud into distinct clusters, each representing an object present in the tracking area. Notably, this algorithm is highly effective, particularly in clustering drone point clouds, achieving an accuracy of up to 92.8%. Subsequently, the clusters are classified as either humans or drones using a deep learning model. Three models, a deep neural network (DNN), a residual network (ResNet), and a long short-term memory (LSTM) network, are applied, and the results show that the ResNet model achieves the highest accuracy: 98.62% for drone clusters and 96.75% for human clusters.
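
The first stage, DBSCAN grouping of the radar point cloud into per-object clusters before classification, can be sketched with scikit-learn as follows; the eps and min_samples values are illustrative, not those tuned in the study.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_point_cloud(points, eps=0.3, min_samples=5):
    """Group FMCW radar detections (an (N, 3) array of x, y, z positions) into
    object clusters with DBSCAN; label -1 marks noise points, which are dropped."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    return [points[labels == k] for k in set(labels) if k != -1]

# Example: three nearby detections form one cluster; the distant point is noise.
demo = np.array([[0.0, 0.0, 1.0], [0.1, 0.05, 1.0], [0.05, 0.1, 1.05], [5.0, 5.0, 2.0]])
print(len(cluster_point_cloud(demo, min_samples=3)))  # -> 1 cluster (to be classified)
```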