• Title/Summary/Keyword: deep learning models

Disease Diagnosis on Fundus Images: A Cross-Dataset Study (망막 이미지에서의 질병 진단: 교차 데이터셋 연구)

  • Van-Nguyen Pham;Sun Xiaoying;Hyunseung Choo
    • Annual Conference of KIPS / 2024.10a / pp.754-755 / 2024
  • This paper presents a comparative study of five deep learning models (ResNet50, DenseNet121, Vision Transformer (ViT), Swin Transformer (SwinT), and CoatNet) on the task of multi-label classification of fundus images for ocular diseases. The models were trained on the Ocular Disease Recognition (ODIR) dataset and validated on the Retinal Fundus Multi-disease Image Dataset (RFMiD), with a focus on five disease classes: diabetic retinopathy, glaucoma, cataract, age-related macular degeneration, and myopia. Performance was evaluated using the area under the receiver operating characteristic curve (AUC-ROC) for each class. CoatNet achieved the best AUC-ROC scores for diabetic retinopathy, glaucoma, cataract, and myopia, while ViT outperformed CoatNet for age-related macular degeneration. Overall, CoatNet exhibited the highest average performance across all classes, highlighting the effectiveness of hybrid architectures in medical image classification. These findings suggest that CoatNet may be a promising model for multi-label classification of fundus images in cross-dataset scenarios.
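
The per-class AUC-ROC evaluation described above can be sketched in plain Python; the class list and the rank-based AUC formulation below are illustrative, not taken from the paper's code.

```python
def auc_roc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a randomly
    chosen positive example is scored above a randomly chosen negative one."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative example")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Multi-label evaluation: one AUC per disease class, as in the paper's setup.
# The class order here is a hypothetical choice for illustration.
CLASSES = ["DR", "glaucoma", "cataract", "AMD", "myopia"]

def per_class_auc(y_true, y_score):
    """y_true/y_score: per-image label/score vectors, one entry per class."""
    return {c: auc_roc([y[i] for y in y_true], [s[i] for s in y_score])
            for i, c in enumerate(CLASSES)}
```

Averaging the resulting per-class AUCs gives the "average performance across all classes" used to rank the models.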

Using MobileNet to Predict Unseen Corn Diseases (MobileNet을 사용한 학습되지 않은 옥수수 질병 예측)

  • David J. Richter;Kyungbaek Kim
    • Annual Conference of KIPS / 2024.10a / pp.640-643 / 2024
  • Agriculture, and the plants and crops it produces, is essential to our everyday lives. Without agriculture, modern society cannot function, so it is extremely important to ensure that farms can continuously and steadily harvest enough produce. One major obstacle to the farming process is plant disease, which accounts for a large share of crop losses, spreads quickly, and can be troublesome to detect, especially manually. Datasets are also sparse and often incomplete. To ease and speed up the detection process, in an effort to aid farmers, we propose a pretrained MobileNet CNN deep learning model that can automatically detect maize/corn diseases from images. However, since data is sparse, not all diseases can be accounted for. We therefore trained the model on one set of diseases (leaf spot and leaf blight) plus healthy plant leaves, but tested it on images of maize leaves infected with a disease the model had never seen before (leaf rust). The model not only learned and mastered the test set, but also generalized to the previously unseen rust-infected leaf images. This suggests that robust and effective models can generalize to, and even classify, diseases new to the model, which is important in a field with limited data.

Image-Based Generative Artificial Intelligence in Radiology: Comprehensive Updates

  • Ha Kyung Jung;Kiduk Kim;Ji Eun Park;Namkug Kim
    • Korean Journal of Radiology / v.25 no.11 / pp.959-981 / 2024
  • Generative artificial intelligence (AI) has been applied to images for image quality enhancement, domain transfer, and augmentation of training data for AI modeling in various medical fields. Image-generative AI can produce large amounts of unannotated imaging data, which facilitates multiple downstream deep-learning tasks. However, the evaluation methods and clinical utility of these models have not been thoroughly reviewed. This article summarizes commonly used generative adversarial networks and diffusion models, as well as their utility in clinical tasks in the field of radiology, such as direct image utilization, lesion detection, segmentation, and diagnosis. This article aims to guide readers regarding radiology practice and research using image-generative AI by 1) reviewing basic theories of image-generative AI, 2) discussing the methods used to evaluate the generated images, 3) outlining the clinical and research utility of generated images, and 4) discussing the issue of hallucinations.

Comparison of Off-the-Shelf DCNN Models for Extracting Bark Feature and Tree Species Recognition Using Multi-layer Perceptron (수피 특징 추출을 위한 상용 DCNN 모델의 비교와 다층 퍼셉트론을 이용한 수종 인식)

  • Kim, Min-Ki
    • Journal of Korea Multimedia Society / v.23 no.9 / pp.1155-1163 / 2020
  • The deep learning approach is emerging as a new way to improve the accuracy of tree species identification from bark images. However, the approach has not been studied sufficiently because it faces the problem of acquiring a large bark image dataset. This study addresses the problem by utilizing pretrained off-the-shelf DCNN models. It compares the discriminative power of the bark features extracted by each DCNN model, then extracts features with the selected model and feeds them to a multi-layer perceptron (MLP). We found that the ResNet50 model is effective at extracting bark features and that the MLP can be trained well on features reduced by principal component analysis. The proposed approach achieves accuracies of 99.1% and 98.4% on the BarkTex and Trunk12 datasets, respectively.
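
The feature-reduction step described above (deep features compressed by principal component analysis before the MLP) can be sketched with NumPy; the feature dimensions and sample counts below are illustrative stand-ins, not the paper's actual settings.

```python
import numpy as np

def pca_reduce(features, k):
    """Project deep features onto their top-k principal components.

    features: (n_samples, n_dims) array, e.g. pooled DCNN features.
    Returns the reduced features plus the components and mean needed
    to project new samples the same way.
    """
    mean = features.mean(axis=0)
    centered = features - mean
    # SVD of the centered data: the rows of vt are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:k]
    return centered @ components.T, components, mean

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 32))        # stand-in for extracted bark features
reduced, comps, mean = pca_reduce(feats, 8)
```

At test time, a new feature vector `x` would be projected as `(x - mean) @ comps.T` before being fed to the MLP classifier.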

Trends in Temporal Action Detection in Untrimmed Videos (시간적 행동 탐지 기술 동향)

  • Moon, Jinyoung;Kim, Hyungil;Park, Jongyoul
    • Electronics and Telecommunications Trends / v.35 no.3 / pp.20-33 / 2020
  • Temporal action detection (TAD) in untrimmed videos is an important but challenging problem in the field of computer vision that has gathered increasing interest recently. Although most studies on action in videos have addressed action recognition in trimmed videos, TAD methods are required to understand real-world untrimmed videos, which consist mostly of background along with some meaningful action instances belonging to multiple action classes. TAD is mainly composed of temporal action localization, which generates temporal action proposals such as single-action segments, and action recognition, which classifies the proposals into action classes. However, generating temporal action proposals with accurate temporal boundaries remains the challenging part of TAD. In this paper, we discuss representative deep learning-based TAD studies that are considered high-performing, investigate evaluation methodologies for TAD, such as benchmark datasets and performance measures, and then compare the performance of the discussed TAD models.
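
Temporal proposals in TAD benchmarks are typically scored against ground truth by temporal IoU. A minimal sketch, with the 0.5 threshold and greedy matching as illustrative assumptions rather than any specific benchmark's protocol:

```python
def temporal_iou(a, b):
    """IoU of two temporal segments given as (start, end) in seconds."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def match_proposals(proposals, gt, thresh=0.5):
    """Count true positives by greedily matching proposals (assumed sorted
    by confidence) to unmatched ground-truth instances at a tIoU threshold."""
    matched = set()
    tp = 0
    for p in proposals:
        best, best_iou = None, thresh
        for i, g in enumerate(gt):
            iou = temporal_iou(p, g)
            if i not in matched and iou >= best_iou:
                best, best_iou = i, iou
        if best is not None:
            matched.add(best)
            tp += 1
    return tp
```

Sweeping the confidence ranking through this matching yields the precision-recall curve behind the mAP@tIoU numbers that TAD papers report.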

Combining Deep Learning Models for Crisis-Related Tweet Classification (재난관련 트윗 분류를 위한 딥 러닝 결합 모델)

  • Choi, Won-Gyu;Lee, Kyung-Soon
    • Annual Conference on Human and Language Technology / 2018.10a / pp.649-651 / 2018
  • In this paper, we propose a deep learning model for tweet classification that combines class activation maps and one-shot learning with a CNN. Class activation maps are used to extract and highlight the key terms associated with each classification topic in tweet classification. In particular, a one-shot learning method is applied to improve multi-class classification performance with a small training dataset. To validate the proposed method, comparative experiments were conducted using the training data of the TREC 2018 Incident Streams (TREC-IS) task. The experimental results show that the accuracy of the baseline CNN model is 58.1%, while that of the proposed method is 69.6%, demonstrating improved performance.
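
A class activation map of the kind used above can be computed as a classifier-weighted sum of the final convolutional feature maps. The Zhou et al.-style CAM below is an assumption about the exact variant, and the shapes are illustrative:

```python
def class_activation_map(feature_maps, fc_weights, class_idx):
    """Class activation map: a weighted sum of the final convolutional
    feature maps, using the classifier weights of the chosen class.

    feature_maps: nested lists with shape (channels, height, width)
    fc_weights:   nested lists with shape (num_classes, channels)
    """
    w = fc_weights[class_idx]
    height, width = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[sum(w[c] * feature_maps[c][i][j] for c in range(len(w)))
            for j in range(width)] for i in range(height)]
    cam = [[max(v, 0.0) for v in row] for row in cam]   # keep positive evidence
    peak = max(max(row) for row in cam)
    if peak > 0:                                        # normalize to [0, 1]
        cam = [[v / peak for v in row] for row in cam]
    return cam
```

For text classification the spatial axis corresponds to token positions, so the high-activation regions mark the key terms the model relied on.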

Learning and Transferring Deep Neural Network Models for Image Caption Generation (이미지 캡션 생성을 위한 심층 신경망 모델 학습과 전이)

  • Kim, Dong-Ha;Kim, Incheol
    • Annual Conference of KIPS / 2016.10a / pp.617-620 / 2016
  • This paper presents a deep neural network model that is effective for image caption generation and model transfer. The model is a multimodal recurrent neural network composed of five layers, including a convolutional neural network layer that extracts visual information from images, an embedding layer that converts each word into a low-dimensional feature, a recurrent neural network layer that learns caption sentence structure, and a multimodal layer that combines visual and linguistic information. In particular, the recurrent layer is built from LSTM units, which excel at sequence pattern learning and model transfer, and the output of the convolutional layer is connected not only to the embedding layer but also to the multimodal layer, so that the image's visual information is available at every step of caption generation. Various comparative experiments on public datasets such as Flickr8k, Flickr30k, and MSCOCO demonstrate the superiority of the proposed multimodal recurrent neural network model in terms of caption accuracy and the effectiveness of model transfer.

Multiple Hint Information-based Knowledge Transfer with Block-wise Retraining (블록 계층별 재학습을 이용한 다중 힌트정보 기반 지식전이 학습)

  • Bae, Ji-Hoon
    • IEMEK Journal of Embedded Systems and Applications / v.15 no.2 / pp.43-49 / 2020
  • In this paper, we propose a stage-wise knowledge transfer method that uses block-wise retraining to transfer the useful knowledge of a pre-trained residual network (ResNet) in a teacher-student framework (TSF). First, multiple hint information transfer and block-wise supervised retraining of that information were performed alternately between the teacher and student ResNet models. Next, knowledge transfer based on softened output information was additionally considered in the TSF. Experimental results showed that the proposed method, which combines multiple hint-based bottom-up knowledge transfer with incremental block-wise retraining, produced a student ResNet with higher accuracy than the existing KD and hint-based knowledge transfer methods considered in this study.
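
The softened-output transfer mentioned above is commonly implemented as a KL divergence between temperature-softened teacher and student distributions; a minimal sketch, with the temperature value as an illustrative choice rather than the paper's setting:

```python
import math

def softmax_T(logits, T):
    """Softmax with temperature T; higher T softens the distribution."""
    m = max(l / T for l in logits)                  # subtract max for stability
    exps = [math.exp(l / T - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened outputs, in the style
    of Hinton et al. knowledge distillation; T=4 is illustrative."""
    p = softmax_T(teacher_logits, T)
    q = softmax_T(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

In practice this term is weighted against the ordinary cross-entropy on hard labels, and the hint losses of the stage-wise transfer would be added per block.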

Mention Detection Using Pointer Networks for Coreference Resolution

  • Park, Cheoneum;Lee, Changki;Lim, Soojong
    • ETRI Journal / v.39 no.5 / pp.652-661 / 2017
  • A mention has a noun or noun phrase as its head and constitutes a chunk that carries a meaning, including any modifiers. Mention detection refers to the extraction of mentions from a document, and coreference resolution refers to determining which mentions refer to the same entity. Pointer networks, models based on a recurrent neural network encoder-decoder, output a list of elements corresponding to an input sequence. In this paper, we propose mention detection using pointer networks. This approach can handle overlapped mentions, which cannot be detected by a sequence labeling approach. The experimental results show that the proposed mention detection approach achieves an F1 of 80.75%, which is 8% higher than rule-based mention detection, and that the resulting coreference resolution achieves a CoNLL F1 of 56.67% (mention boundary), which is 7.68% higher than coreference resolution using rule-based mention detection.
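
Why sequence labeling fails on overlapped mentions can be seen with a small sketch: BIO tagging cannot assign two labels to one token, whereas a pointer-style decoder emits the span list directly. The encoding below is a toy illustration, not the paper's model:

```python
def spans_to_bio(spans, length):
    """Try to encode mention spans (start, end_exclusive) as BIO tags.
    Returns None when spans overlap, which BIO cannot represent."""
    tags = ["O"] * length
    for start, end in spans:
        if any(tags[i] != "O" for i in range(start, end)):
            return None                 # overlap: one token, two labels
        tags[start] = "B"
        for i in range(start + 1, end):
            tags[i] = "I"
    return tags
```

A pointer network sidesteps this limitation because each decoding step points at a boundary position in the input, so nested spans such as (0, 3) and (1, 2) can both be emitted.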

Analysis of Evolutionary Optimization Methods for CNN Structures (CNN 구조의 진화 최적화 방식 분석)

  • Seo, Kisung
    • The Transactions of The Korean Institute of Electrical Engineers / v.67 no.6 / pp.767-772 / 2018
  • Recently, meta-heuristic algorithms such as GA (Genetic Algorithm) and GP (Genetic Programming) have been used to optimize CNNs (Convolutional Neural Networks). The CNN, one of the deep learning models, has seen much success in a variety of computer vision tasks; however, designing CNN architectures still requires expert knowledge and a lot of trial and error. In this paper, recent attempts to automatically construct CNN architectures are investigated and analyzed. First, two GA-based methods are summarized: one optimizes CNN structures in terms of the number and size of filters, the connections between consecutive layers, and the activation function of each layer; the other is a new encoding method that represents complex convolutional layers in a fixed-length binary string. Second, a CGP (Cartesian Genetic Programming)-based method for CNN structure optimization is surveyed, which uses highly functional modules, such as convolutional blocks and tensor concatenation, as the node functions in CGP. The three approaches are compared and analyzed, and an outlook for potential next steps is suggested.
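
The fixed-length binary encoding mentioned above can be sketched as follows; the gene layout (bit widths and hyperparameter tables) is a hypothetical example, not the encoding from the surveyed work:

```python
# Hypothetical 5-bit gene per convolutional layer:
# 2 bits select the filter count, 2 bits the kernel size, 1 bit the activation.
FILTERS = [16, 32, 64, 128]
KERNELS = [1, 3, 5, 7]
ACTS = ["relu", "tanh"]

def decode_layer(bits):
    """Decode one 5-bit gene into a layer's hyperparameters."""
    f = int(bits[0:2], 2)
    k = int(bits[2:4], 2)
    a = int(bits[4], 2)
    return {"filters": FILTERS[f], "kernel": KERNELS[k], "activation": ACTS[a]}

def decode_genome(genome, gene_len=5):
    """A genome is a fixed-length bit string: one gene per layer."""
    return [decode_layer(genome[i:i + gene_len])
            for i in range(0, len(genome), gene_len)]
```

Because every genome has the same length, standard GA crossover and mutation operate directly on the bit string while always decoding to a valid architecture.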