• Title/Summary/Keyword: deep neural net


Performance Evaluation of Vision Transformer-based Pneumonia Detection Model using Chest X-ray Images (흉부 X-선 영상을 이용한 Vision transformer 기반 폐렴 진단 모델의 성능 평가)

  • Junyong Chang;Youngeun Choi;Seungwan Lee
    • Journal of the Korean Society of Radiology
    • /
    • v.18 no.5
    • /
    • pp.541-549
    • /
    • 2024
  • The various structures of artificial neural networks, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have been extensively studied and have served as the backbone of numerous models. Among these, the transformer architecture has demonstrated its potential for natural language processing and become a subject of in-depth research. The technique can be adapted for image processing through modifications of its internal structure, leading to the development of Vision transformer (ViT) models. ViTs have shown high accuracy and performance on large datasets. This study aims to develop a ViT-based model for detecting pneumonia using chest X-ray images and to quantitatively evaluate its performance. Various architectures of the ViT-based model were constructed by varying the number of encoder blocks, and different patch sizes were applied for network training. The performance of the ViT-based model was also compared to CNN-based models, such as VGGNet, GoogLeNet, and ResNet. The results showed that the training efficiency and accuracy of the ViT-based model depended on the number of encoder blocks and the patch size, and the F1 scores of the ViT-based model ranged from 0.875 to 0.919. The training efficiency of the ViT-based model with a large patch size was superior to that of the CNN-based models, and the pneumonia detection accuracy of the ViT-based model was higher than that of VGGNet. In conclusion, the ViT-based model can potentially be used for pneumonia detection from chest X-ray images, and this study supports its clinical applicability.
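
As a rough illustration of the architecture choices this abstract varies (patch size and number of encoder blocks), the following is a minimal PyTorch sketch of a ViT-style classifier for chest X-ray images; the layer sizes and the binary head are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class SimpleViT(nn.Module):
    """Minimal ViT-style classifier; patch_size and num_blocks are the knobs varied above."""
    def __init__(self, image_size=224, patch_size=16, num_blocks=8,
                 embed_dim=256, num_heads=8, num_classes=2):
        super().__init__()
        assert image_size % patch_size == 0
        num_patches = (image_size // patch_size) ** 2
        # Patch embedding implemented as a strided convolution over the 1-channel X-ray.
        self.patch_embed = nn.Conv2d(1, embed_dim, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads,
                                           dim_feedforward=4 * embed_dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_blocks)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):                                      # x: (B, 1, H, W)
        x = self.patch_embed(x).flatten(2).transpose(1, 2)     # (B, num_patches, embed_dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = self.encoder(torch.cat([cls, x], dim=1) + self.pos_embed)
        return self.head(x[:, 0])                              # classify from the [CLS] token

# One configuration from the search space: larger patches, fewer encoder blocks.
logits = SimpleViT(patch_size=32, num_blocks=4)(torch.randn(2, 1, 224, 224))
```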

A Taekwondo Poomsae Movement Classification Model Learned Under Various Conditions

  • Ju-Yeon Kim;Kyu-Cheol Cho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.10
    • /
    • pp.9-16
    • /
    • 2023
  • Technology is advancing in sports, as seen in the electronic body protectors of Taekwondo competitions and VAR in soccer. In Taekwondo Poomsae, however, postures are still judged and coached by human observation, so judgment disputes sometimes arise at competitions. This study proposes an artificial intelligence model that can judge and evaluate Taekwondo movements more accurately. In this study, the photographed and collected data are pre-processed and then separated into train, test, and validation sets. Each model is trained on the separated data under various conditions, and the results are compared to identify the best-performing model. The models trained under each condition are compared in terms of loss, accuracy, training time, and top-n error; as a result, the model trained with ResNet50 and the Adam optimizer performed best. The model presented in this study is expected to be applicable in various settings such as educational sites and competitions.
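
A minimal sketch of the best-performing condition reported above (ImageNet-pretrained ResNet50 fine-tuned with the Adam optimizer) using torchvision; the dataset path, learning rate, and epoch count are placeholders rather than the study's settings.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("poomsae/train", transform=tfm)     # hypothetical path
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))   # new Poomsae-class head
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    # Loss, accuracy, training time, and top-n error would be logged here per condition.
```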

Discrimination of dicentric chromosome from radiation exposure patient data using a pretrained deep learning model

  • Soon Woo Kwon;Won Il Jang;Mi-Sook Kim;Ki Moon Seong;Yang Hee Lee;Hyo Jin Yoon;Susan Yang;Younghyun Lee;Hyung Jin Shim
    • Nuclear Engineering and Technology
    • /
    • v.56 no.8
    • /
    • pp.3123-3128
    • /
    • 2024
  • The dicentric chromosome assay is the gold standard method for estimating radiation exposure by calculating the ratio of dicentric chromosomes present in cells. The objective of this study was to propose an automatic dicentric chromosome discrimination method based on deep convolutional neural networks using radiation exposure patient data. From 45 patients with radiation exposure, conventional Giemsa-stained images of 116,258 normal and 2,800 dicentric chromosomes were confirmed. VGG19 was pre-trained on ImageNet, then modified and fine-tuned. The proposed modified VGG19 demonstrated strong dicentric chromosome discrimination performance, with a true positive rate of 0.927, a true negative rate of 0.997, a positive predictive value of 0.882, a negative predictive value of 0.998, and an area under the receiver operating characteristic curve of 0.997.
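
The modification step can be pictured as replacing the final layer of an ImageNet-pretrained VGG19 with a two-class (normal vs. dicentric) output and fine-tuning; the frozen-layer split below is an assumption, not the authors' exact scheme.

```python
import torch.nn as nn
from torchvision import models

vgg19 = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)   # ImageNet pre-training

# Freeze early convolutional features and fine-tune the remaining layers (assumed split).
for param in vgg19.features[:20].parameters():
    param.requires_grad = False

# Replace the 1000-class ImageNet head with a 2-class normal/dicentric output.
vgg19.classifier[6] = nn.Linear(vgg19.classifier[6].in_features, 2)
```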

A Comparative Study of Alzheimer's Disease Classification using Multiple Transfer Learning Models

  • Prakash, Deekshitha;Madusanka, Nuwan;Bhattacharjee, Subrata;Park, Hyeon-Gyun;Kim, Cho-Hee;Choi, Heung-Kook
    • Journal of Multimedia Information System
    • /
    • v.6 no.4
    • /
    • pp.209-216
    • /
    • 2019
  • Over the past decade, researchers have been able to solve complex medical problems and acquire a deeper understanding of them thanks to machine learning techniques, particularly predictive algorithms and the automatic recognition of patterns in medical imaging. In this study, transfer learning has been utilized to classify Magnetic Resonance (MR) images with a pre-trained Convolutional Neural Network (CNN). Rather than training an entire model from scratch, the transfer learning approach fine-tunes pre-trained CNN models to classify MR images into Alzheimer's disease (AD), mild cognitive impairment (MCI), and normal control (NC). The performance of this method has been evaluated on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset by changing the learning rate of the model. Moreover, to demonstrate the transfer learning approach, we utilize different pre-trained deep learning models, namely GoogLeNet, VGG-16, AlexNet, and ResNet-18, and compare their efficiency in classifying AD. The overall classification accuracy achieved by GoogLeNet for training and testing was 99.84% and 98.25% respectively, exceeding the training and testing accuracies of the other models.
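
A compressed sketch of the comparison described above: each ImageNet-pretrained backbone gets a 3-class (AD/MCI/NC) head and is trained at several learning rates; the learning-rate values and the omitted training loop are placeholders.

```python
import torch.nn as nn
from torchvision import models

def build_classifier(name, num_classes=3):
    if name == "googlenet":
        m = models.googlenet(weights="IMAGENET1K_V1")
        m.fc = nn.Linear(m.fc.in_features, num_classes)
    elif name == "vgg16":
        m = models.vgg16(weights="IMAGENET1K_V1")
        m.classifier[6] = nn.Linear(m.classifier[6].in_features, num_classes)
    elif name == "alexnet":
        m = models.alexnet(weights="IMAGENET1K_V1")
        m.classifier[6] = nn.Linear(m.classifier[6].in_features, num_classes)
    else:  # "resnet18"
        m = models.resnet18(weights="IMAGENET1K_V1")
        m.fc = nn.Linear(m.fc.in_features, num_classes)
    return m

for name in ["googlenet", "vgg16", "alexnet", "resnet18"]:
    for lr in [1e-3, 1e-4, 1e-5]:                  # illustrative learning rates
        model = build_classifier(name)
        # ...train on ADNI MR images with this lr, then record training/testing accuracy...
```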

CAttNet: A Compound Attention Network for Depth Estimation of Light Field Images

  • Dingkang Hua;Qian Zhang;Wan Liao;Bin Wang;Tao Yan
    • Journal of Information Processing Systems
    • /
    • v.19 no.4
    • /
    • pp.483-497
    • /
    • 2023
  • Depth estimation is one of the most complicated and difficult problems in light field processing. In this paper, a compound attention convolutional neural network (CAttNet) is proposed to extract depth maps from light field images. To make more effective use of the sub-aperture images (SAIs) of the light field and reduce the redundancy among SAIs, we use a compound attention mechanism that weights the channel and spatial dimensions of the feature map after primary feature extraction, so the network can more efficiently select the required views and the important areas within each view. We modified several feature-extraction layers to extract features more efficiently without adding parameters. By exploring the characteristics of the light field, we increased the network depth and optimized the network structure to reduce the adverse impact of this change. CAttNet can efficiently exploit the correlations and features of different SAIs to generate a high-quality light field depth map. The experimental results show that CAttNet has advantages in both accuracy and runtime.
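
The compound attention idea (channel weighting followed by spatial weighting of a feature map) can be sketched with a CBAM-style block; CAttNet's exact module design is not given here, so the layer sizes and ordering below are assumptions.

```python
import torch
import torch.nn as nn

class CompoundAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        # Channel attention: squeeze spatial dims, then weight each channel (view/feature).
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        # Spatial attention: weight the important regions within the selected views.
        self.spatial = nn.Sequential(nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        return x * self.spatial(torch.cat([avg, mx], dim=1))

out = CompoundAttention(64)(torch.randn(1, 64, 32, 32))   # feature map after primary extraction
```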

Face Detection Method based Fusion RetinaNet using RGB-D Image (RGB-D 영상을 이용한 Fusion RetinaNet 기반 얼굴 검출 방법)

  • Nam, Eun-Jeong;Nam, Chung-Hyeon;Jang, Kyung-Sik
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.4
    • /
    • pp.519-525
    • /
    • 2022
  • Face detection, the task of detecting a person's face in an image, is used as a preprocessing step or core process in various image processing-based applications. Neural network models, which have recently performed well with the development of deep learning, depend on 2D images, so if noise occurs in the image, such as poor camera quality or poor focus on the face, the face may not be detected properly. In this paper, we propose a face detection method that additionally uses depth information to reduce the dependence on 2D images. The proposed model was trained after generating and preprocessing depth information in advance for a face detection dataset; as a result, the FRN model achieved 89.16%, about 1.2 percentage points better than the RetinaNet model, which showed 87.95%.
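
One plausible reading of "uses depth information together" is a two-stream backbone whose RGB and depth features are fused channel-wise before a RetinaNet-style detection head; this is only an assumed fusion strategy, not the paper's exact FRN architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class RGBDFusionBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        rgb = models.resnet18(weights="IMAGENET1K_V1")
        depth = models.resnet18(weights=None)
        depth.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)  # 1-channel depth input
        self.rgb_stream = nn.Sequential(*list(rgb.children())[:-2])      # drop avgpool/fc
        self.depth_stream = nn.Sequential(*list(depth.children())[:-2])
        self.fuse = nn.Conv2d(512 * 2, 512, kernel_size=1)               # channel-wise fusion

    def forward(self, rgb, depth):
        fused = torch.cat([self.rgb_stream(rgb), self.depth_stream(depth)], dim=1)
        return self.fuse(fused)   # fused feature map would feed a RetinaNet-style head

features = RGBDFusionBackbone()(torch.randn(1, 3, 224, 224), torch.randn(1, 1, 224, 224))
```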

RoutingConvNet: A Light-weight Speech Emotion Recognition Model Based on Bidirectional MFCC (RoutingConvNet: 양방향 MFCC 기반 경량 음성감정인식 모델)

  • Hyun Taek Lim;Soo Hyung Kim;Guee Sang Lee;Hyung Jeong Yang
    • Smart Media Journal
    • /
    • v.12 no.5
    • /
    • pp.28-35
    • /
    • 2023
  • In this study, we propose RoutingConvNet, a new light-weight model with fewer parameters, to improve the applicability and practicality of speech emotion recognition. To reduce the number of learnable parameters, the proposed model concatenates bidirectional MFCCs on a channel-by-channel basis to learn long-term emotion dependence and extract contextual features. A light-weight deep CNN is constructed for low-level feature extraction, and self-attention is used to obtain channel and spatial information from the speech signal. In addition, we apply dynamic routing to improve accuracy and build a model that is robust to feature variations. The proposed model shows both parameter reduction and accuracy improvement across experiments on the speech emotion datasets EMO-DB, RAVDESS, and IEMOCAP, achieving 87.86%, 83.44%, and 66.06% accuracy, respectively, with about 156,000 parameters. We also propose a metric that captures the trade-off between the number of parameters and accuracy for evaluating light-weight models.
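
The bidirectional MFCC input can be pictured as stacking an utterance's MFCC sequence with its time-reversed copy as separate channels before the light-weight CNN; the file name, sampling rate, and coefficient count below are placeholders.

```python
import numpy as np
import librosa

signal, sr = librosa.load("speech.wav", sr=16000)           # placeholder audio file
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=40)     # shape (40, T)
bidirectional = np.stack([mfcc, mfcc[:, ::-1]], axis=0)     # shape (2, 40, T): forward + reversed channels
```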

A Fully Convolutional Network Model for Classifying Liver Fibrosis Stages from Ultrasound B-mode Images (초음파 B-모드 영상에서 FCN(fully convolutional network) 모델을 이용한 간 섬유화 단계 분류 알고리즘)

  • Kang, Sung Ho;You, Sun Kyoung;Lee, Jeong Eun;Ahn, Chi Young
    • Journal of Biomedical Engineering Research
    • /
    • v.41 no.1
    • /
    • pp.48-54
    • /
    • 2020
  • In this paper, we deal with a liver fibrosis classification problem using ultrasound B-mode images. Representative methods for classifying the stages of liver fibrosis include liver biopsy and diagnosis based on ultrasound images. The overall liver shape and the smoothness and roughness of the speckle pattern in ultrasound images are used to determine the fibrosis stage. Although ultrasound image-based classification is frequently used as an alternative or complement to the invasive biopsy, it has the limitation that the fibrosis stage decision depends on image quality and the doctor's experience. With the rapid development of deep learning algorithms, several studies using deep learning methods have been carried out for automated liver fibrosis classification and have shown high accuracy. The performance of those deep learning methods depends closely on the amount of data. We propose an enhanced U-net architecture to maximize classification accuracy with a limited amount of image data. U-net is well known as a neural network for fast and precise segmentation of medical images; here we redesign it for classifying liver fibrosis stages. To assess the performance of the proposed architecture, numerical experiments are conducted on a total of 118 ultrasound B-mode images acquired from 78 patients with liver fibrosis symptoms of stages F0-F4. The experimental results show that the performance of the proposed architecture is much better than transfer learning using a pre-trained VGGNet.
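
A minimal sketch of repurposing a U-net-style contracting path for classification: the encoder blocks are kept and a global-pooling head predicts the five stages F0-F4; the channel widths and block count are assumptions rather than the authors' enhanced design.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class UNetEncoderClassifier(nn.Module):
    def __init__(self, num_classes=5):                       # stages F0-F4
        super().__init__()
        self.blocks = nn.ModuleList([conv_block(1, 64), conv_block(64, 128),
                                     conv_block(128, 256), conv_block(256, 512)])
        self.pool = nn.MaxPool2d(2)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(512, num_classes))

    def forward(self, x):                                     # x: (B, 1, H, W) B-mode image
        for i, block in enumerate(self.blocks):
            x = block(x)
            if i < len(self.blocks) - 1:
                x = self.pool(x)                              # downsample between encoder stages
        return self.head(x)

logits = UNetEncoderClassifier()(torch.randn(2, 1, 256, 256))
```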

Development of Methodology for Measuring Water Level in Agricultural Water Reservoir through Deep Learning Analysis of CCTV Images (딥러닝 기법을 이용한 농업용저수지 CCTV 영상 기반의 수위계측 방법 개발)

  • Joo, Donghyuk;Lee, Sang-Hyun;Choi, Gyu-Hoon;Yoo, Seung-Hwan;Na, Ra;Kim, Hayoung;Oh, Chang-Jo;Yoon, Kwang-Sik
    • Journal of The Korean Society of Agricultural Engineers
    • /
    • v.65 no.1
    • /
    • pp.15-26
    • /
    • 2023
  • This study aimed to evaluate the performance of water level classification from CCTV images at agricultural facilities such as reservoirs. CCTV systems, widely used for facility monitoring and disaster detection, can now automatically detect and identify people and objects in images thanks to new technologies such as deep learning. Accordingly, we applied the ResNet-50 model, based on a convolutional neural network, to analyze the water level of agricultural reservoirs from CCTV images obtained from the TOMS (Total Operation Management System) of the Korea Rural Community Corporation. As a result, the accuracy of water level detection was improved by excluding night-time and rainfall CCTV images and applying corrective measures; for example, the error rate decreased significantly from 24.39% to 1.43% at the Bakseok reservoir. We believe that the utilization of CCTV should be further expanded for calculating the amount of water supply and establishing supply plans under the integrated water management policy.

Ensemble Learning Based on Tumor Internal and External Imaging Patch to Predict the Recurrence of Non-small Cell Lung Cancer Patients in Chest CT Image (흉부 CT 영상에서 비소세포폐암 환자의 재발 예측을 위한 종양 내외부 영상 패치 기반 앙상블 학습)

  • Lee, Ye-Sel;Cho, A-Hyun;Hong, Helen
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.3
    • /
    • pp.373-381
    • /
    • 2021
  • In this paper, we propose a classification model based on a convolutional neural network (CNN) for predicting 2-year recurrence in non-small cell lung cancer (NSCLC) patients using preoperative chest CT images. Based on a region of interest (ROI) defined as the tumor internal and external area, the input images consist of an intratumoral patch, a peritumoral patch, and a peritumoral texture patch focusing on the texture information of the peritumoral patch. Each patch is trained through AlexNet pretrained on ImageNet to explore the usefulness and performance of the various patches. Additionally, ensemble learning of the networks trained on each patch is used to analyze the performance of different patch combinations. Among all results, the ensemble model with intratumoral and peritumoral patches achieved the best performance (ACC=98.28%, Sensitivity=100%, NPV=100%).
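
The patch-ensemble idea can be sketched as averaging the softmax outputs of AlexNet models trained separately on the intratumoral and peritumoral patches; the averaging fusion and input sizes are assumptions, not necessarily the paper's exact combination rule.

```python
import torch
import torch.nn as nn
from torchvision import models

def alexnet_binary():
    m = models.alexnet(weights="IMAGENET1K_V1")                   # ImageNet-pretrained AlexNet
    m.classifier[6] = nn.Linear(m.classifier[6].in_features, 2)   # recurrence vs. no recurrence
    return m

intra_model, peri_model = alexnet_binary(), alexnet_binary()
intra_model.eval()
peri_model.eval()

def ensemble_predict(intra_patch, peri_patch):
    with torch.no_grad():
        p_intra = torch.softmax(intra_model(intra_patch), dim=1)
        p_peri = torch.softmax(peri_model(peri_patch), dim=1)
    return (p_intra + p_peri) / 2                                 # averaged class probabilities

probs = ensemble_predict(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224))
```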