• Title/Summary/Keyword: Deep Learning Dataset


Few-shot learning using the median prototype of the support set (Support set의 중앙값 prototype을 활용한 few-shot 학습)

  • Eu Tteum Baek
    • Smart Media Journal
    • /
    • v.12 no.1
    • /
    • pp.24-31
    • /
    • 2023
  • Meta-learning is a form of metacognition that quickly distinguishes what is known from what is unknown: a learning method that adapts to and solves new problems by self-learning from a small amount of data. Few-shot learning is a type of meta-learning that accurately predicts query data even from a very small support set. In this study, we propose a method to address the limitations of the prototype formed from the mean vector of each class: the prototype used in few-shot learning is instead computed as the median of the support set. For quantitative evaluation, a handwriting recognition dataset and the mini-ImageNet dataset were used and compared with the existing method. The experimental results confirmed that performance improved over the existing method.
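The contrast between mean- and median-based prototypes can be sketched in plain NumPy. The toy 2-D support set, the outlier, and the nearest-prototype rule below are illustrative assumptions, not the paper's data:

```python
import numpy as np

def prototypes(support, labels, reduce):
    # support: (N, D) feature vectors, labels: (N,) class ids
    return {c: reduce(support[labels == c], axis=0) for c in np.unique(labels)}

def classify(query, protos):
    # nearest-prototype assignment by Euclidean distance
    classes = sorted(protos)
    d = [np.linalg.norm(query - protos[c]) for c in classes]
    return classes[int(np.argmin(d))]

# toy support set; the last class-0 sample is an outlier
support = np.array([[0.0, 0.0], [0.1, 0.0], [10.0, 10.0],  # class 0
                    [2.0, 2.0], [2.1, 2.0], [1.9, 2.1]])   # class 1
labels = np.array([0, 0, 0, 1, 1, 1])

mean_protos = prototypes(support, labels, np.mean)
median_protos = prototypes(support, labels, np.median)

query = np.array([0.2, 0.1])  # clearly a class-0 sample
print(classify(query, mean_protos))    # 1: mean prototype pulled toward the outlier
print(classify(query, median_protos))  # 0: median prototype is robust to it
```

The median resists the outlier that drags the mean prototype away from the class mass, which is the motivation the abstract describes.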

Application of Deep Learning for Classification of Ancient Korean Roof-end Tile Images (딥러닝을 활용한 고대 수막새 이미지 분류 검토)

  • KIM Younghyun
    • Korean Journal of Heritage: History & Science
    • /
    • v.57 no.3
    • /
    • pp.24-35
    • /
    • 2024
  • Recently, research using deep learning technologies such as artificial intelligence and convolutional neural networks has been actively conducted in various fields including healthcare, manufacturing, autonomous driving, and security, and is having a significant influence on society. In line with this trend, the present study applied deep learning to the classification of archaeological artifacts, specifically ancient Korean roof-end tiles. Using 100 images of roof-end tiles from each of the Goguryeo, Baekje, and Silla kingdoms, for a total of 300 base images, a dataset was formed and expanded to 1,200 images using data augmentation techniques. After building a model using transfer learning from the pre-trained EfficientNetB0 model and conducting five-fold cross-validation, an average training accuracy of 98.06% and validation accuracy of 97.08% were achieved. Furthermore, when model performance was evaluated with a test dataset of 240 images, the model classified the roof-end tile images from the three kingdoms with a minimum accuracy of 91%. In particular, with a learning rate of 0.0001, the model exhibited the highest performance, with accuracy of 92.92%, precision of 92.96%, recall of 92.92%, and F1 score of 92.93%. This optimal result was obtained by testing various learning rate settings to avoid overfitting and underfitting and to find the optimal hyperparameters. The findings confirm the significant potential of applying deep learning technologies to the classification of Korean archaeological materials. Additionally, it was confirmed that the existing ImageNet dataset and parameters can be usefully applied to the analysis of archaeological data. This approach could support the creation of various models for future archaeological database accumulation, the use of artifacts in museums, and the classification and organization of artifacts.
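The 300-to-1,200 expansion corresponds to a 4x augmentation. The specific transforms below (two flips and a 90-degree rotation) are an assumed choice for illustration; the paper's exact augmentations are not listed here:

```python
import numpy as np

def augment_4x(images):
    # For each image, keep the original and add three simple variants:
    # horizontal flip, vertical flip, and a 90-degree rotation.
    out = []
    for img in images:
        out.extend([img, np.fliplr(img), np.flipud(img), np.rot90(img)])
    return out

# 300 dummy square "tile images" (224x224, the input size EfficientNetB0 expects)
base = [np.zeros((224, 224, 3), dtype=np.uint8) for _ in range(300)]
augmented = augment_4x(base)
print(len(augmented))  # 1200
```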

Modified YOLOv4S based on Deep learning with Feature Fusion and Spatial Attention (특징 융합과 공간 강조를 적용한 딥러닝 기반의 개선된 YOLOv4S)

  • Hwang, Beom-Yeon;Lee, Sang-Hun;Lee, Seung-Hyun
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.12
    • /
    • pp.31-37
    • /
    • 2021
  • In this paper, we propose a modified YOLOv4S based on feature fusion and spatial attention for detecting small and occluded objects. The conventional YOLOv4S is a lightweight network whose feature extraction capability falls short of deeper networks. The proposed method first combines feature maps of different scales through feature fusion to enhance both semantic and low-level information; in addition, expanding the receptive field with dilated convolution improved detection accuracy for small and occluded objects. Second, refining the conventional spatial information with spatial attention improved detection accuracy for objects occluded by other objects. The PASCAL VOC and COCO datasets were used for quantitative evaluation. The proposed method improved mAP by 2.7% on PASCAL VOC and 1.8% on COCO compared to the conventional YOLOv4S.
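A spatial attention gate can be sketched in a few lines of NumPy. This is a simplified, CBAM-style stand-in, not the paper's exact layer: pool across channels, build a per-position gate, and reweight the feature map.

```python
import numpy as np

def spatial_attention(feat):
    # feat: (C, H, W) feature map. Pool across channels to get two
    # (H, W) summaries, gate each spatial position with a sigmoid,
    # and broadcast the gate back over all channels.
    avg_pool = feat.mean(axis=0)                          # (H, W)
    max_pool = feat.max(axis=0)                           # (H, W)
    attn = 1.0 / (1.0 + np.exp(-(avg_pool + max_pool)))   # sigmoid gate
    return feat * attn[None, :, :]

feat = np.random.rand(16, 8, 8)
out = spatial_attention(feat)
print(out.shape)  # (16, 8, 8)
```

In a full implementation the pooled maps are concatenated and passed through a learned convolution before the sigmoid; the fixed sum here only illustrates the gating mechanism.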

Deep-learning based SAR Ship Detection with Generative Data Augmentation (영상 생성적 데이터 증강을 이용한 딥러닝 기반 SAR 영상 선박 탐지)

  • Kwon, Hyeongjun;Jeong, Somi;Kim, SungTai;Lee, Jaeseok;Sohn, Kwanghoon
    • Journal of Korea Multimedia Society
    • /
    • v.25 no.1
    • /
    • pp.1-9
    • /
    • 2022
  • Ship detection in synthetic aperture radar (SAR) images is an important application of marine monitoring in both military and civilian domains. Over the past decade, object detection has achieved significant progress with the development of convolutional neural networks (CNNs) and large labeled databases. However, due to the difficulty of collecting and labeling SAR images, training CNNs for SAR ship detection remains a challenging task. To overcome this problem, some methods have employed conventional data augmentation techniques such as flipping, cropping, and affine transformation, but these are insufficient for robust performance across the wide variety of ship types. In this paper, we present a novel and effective approach for deep SAR ship detection that exploits label-rich electro-optical (EO) images. The proposed method consists of two components: a data augmentation network and a ship detection network. First, we train the data augmentation network, based on a conditional generative adversarial network (cGAN), to generate additional SAR images from EO images. Since it is trained on unpaired EO and SAR images, we impose a cycle-consistency loss to preserve structural information while translating the characteristics of the images. After training the data augmentation network, we use the augmented dataset, consisting of real and translated SAR images, to train the ship detection network. The experimental results include a qualitative evaluation of the translated SAR images and a comparison of the detection performance of networks trained with non-augmented and augmented datasets, demonstrating the effectiveness of the proposed framework.
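The cycle-consistency term can be sketched as an L1 round-trip penalty, as in CycleGAN-style unpaired translation. The toy affine "generators" below are stand-ins for the trained networks:

```python
import numpy as np

def cycle_consistency_loss(x_eo, g_eo2sar, g_sar2eo):
    # L1 distance between an EO image and its EO -> SAR -> EO
    # round-trip reconstruction; minimizing this preserves structure
    # while the characteristics of the domain are translated.
    x_rec = g_sar2eo(g_eo2sar(x_eo))
    return np.abs(x_eo - x_rec).mean()

# Toy "generators": a gain/offset mapping and its exact inverse.
eo2sar = lambda x: 0.5 * x + 0.1
sar2eo = lambda x: (x - 0.1) / 0.5

x = np.random.rand(64, 64)
print(cycle_consistency_loss(x, eo2sar, sar2eo))  # ~0 for perfect inverses
```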

No-Reference Image Quality Assessment based on Quality Awareness Feature and Multi-task Training

  • Lai, Lijing;Chu, Jun;Leng, Lu
    • Journal of Multimedia Information System
    • /
    • v.9 no.2
    • /
    • pp.75-86
    • /
    • 2022
  • Existing image quality assessment (IQA) datasets contain few samples, and some methods based on transfer learning or data augmentation cannot make good use of image quality-related features. A no-reference (NR)-IQA method based on multi-task training and quality awareness is proposed. First, single or multiple distortion types and levels are imposed on the original image, and different strategies are used to augment the different distortion datasets. Following the idea of weak supervision, full-reference (FR)-IQA methods are used to obtain pseudo-score labels for the generated images. Then, the classification information of distortion type and level is combined with the image quality score. A ResNet50 network is trained on the augmented dataset in the pre-training stage to obtain more quality-aware pre-training weights. Finally, fine-tuning is performed on the target IQA dataset using the quality-aware weights to predict the final quality score. Experiments on synthetic- and authentic-distortion datasets (LIVE, CSIQ, TID2013, LIVEC, KonIQ-10K) show that the proposed method exploits image quality-related features better than single-task training, and that the extracted quality-aware features improve the accuracy of the model.
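The weak-supervision step (scoring generated distortions with an FR-IQA method) can be illustrated with PSNR as the simplest stand-in for the FR-IQA scorers the paper uses:

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    # Full-reference score for a distorted image relative to its
    # original; such scores serve as pseudo-labels for pre-training.
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (32, 32)).astype(np.float64)
mild = ref + rng.normal(0, 2, ref.shape)    # low distortion level
harsh = ref + rng.normal(0, 20, ref.shape)  # high distortion level
print(psnr(ref, mild) > psnr(ref, harsh))   # heavier distortion, lower pseudo-score
```

The pseudo-score, together with the known distortion type and level, gives the multi-task targets without any human annotation.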

Ensemble Knowledge Distillation for Classification of 14 Thorax Diseases using Chest X-ray Images (흉부 X-선 영상을 이용한 14 가지 흉부 질환 분류를 위한 Ensemble Knowledge Distillation)

  • Ho, Thi Kieu Khanh;Jeon, Younghoon;Gwak, Jeonghwan
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2021.07a
    • /
    • pp.313-315
    • /
    • 2021
  • Timely and accurate diagnosis of lung diseases using chest X-ray images has gained much attention from the computer vision and medical imaging communities. Although previous studies have demonstrated the capability of deep convolutional neural networks by achieving competitive binary classification results, their models seemed unreliable for effectively distinguishing multiple disease groups across a large number of X-ray images. In this paper, we build an advanced approach, called Ensemble Knowledge Distillation (EKD), to significantly boost classification accuracy compared to traditional KD methods, by distilling knowledge from a cumbersome teacher model into an ensemble of lightweight student models with parallel branches trained on ground-truth labels. Learning features at the different branches of the student models enables the network to learn diverse patterns and improve the quality of the final predictions through an ensemble learning solution. Although experiments on the well-established ChestX-ray14 dataset showed classification improvements of traditional KD over the baseline transfer learning approach, EKD is expected to further enhance classification accuracy and model generalization, especially given the imbalanced dataset and the interdependency of the 14 weakly annotated thorax diseases.
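The single-teacher KD loss underlying this approach can be sketched as the standard Hinton-style mixture of a soft-target KL term and a hard-label cross-entropy term; the ensemble and branch structure of EKD is not reproduced here:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max())
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, label, T=4.0, alpha=0.5):
    # Soft term: KL divergence between temperature-softened teacher and
    # student distributions (scaled by T^2). Hard term: cross-entropy
    # against the ground-truth label.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    soft = np.sum(p_t * (np.log(p_t) - np.log(p_s))) * T * T
    hard = -np.log(softmax(student_logits)[label])
    return alpha * soft + (1 - alpha) * hard

t = np.array([5.0, 1.0, 0.5])   # teacher logits
s = np.array([4.0, 1.5, 0.2])   # student logits
print(kd_loss(s, t, label=0))
```

In EKD, a loss of this shape would be averaged over the parallel student branches, whose predictions are then ensembled.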


A label-free high precision automated crack detection method based on unsupervised generative attentional networks and swin-crackformer

  • Shiqiao Meng;Lezhi Gu;Ying Zhou;Abouzar Jafari
    • Smart Structures and Systems
    • /
    • v.33 no.6
    • /
    • pp.449-463
    • /
    • 2024
  • Automated crack detection is crucial for structural health monitoring and post-earthquake rapid damage detection. However, realizing high precision automatic crack detection in the absence of corresponding manual labeling presents a formidable challenge. This paper presents a novel crack segmentation transfer learning method and a novel crack segmentation model called Swin-CrackFormer. The proposed method facilitates efficient crack image style transfer through a meticulously designed data preprocessing technique, followed by the utilization of a GAN model for image style transfer. Moreover, the proposed Swin-CrackFormer combines the advantages of Transformer and convolution operations to achieve effective local and global feature extraction. To verify the effectiveness of the proposed method, this study validates the proposed method on three unlabeled crack datasets and evaluates the Swin-CrackFormer model on the METU dataset. Experimental results demonstrate that the crack transfer learning method significantly improves the crack segmentation performance on unlabeled crack datasets. Moreover, the Swin-CrackFormer model achieved the best detection result on the METU dataset, surpassing existing crack segmentation models.

Machine Learning Based Strength Prediction of UHPC for Spatial Structures (대공간 구조물의 UHPC 적용을 위한 기계학습 기반 강도예측기법)

  • Lee, Seunghye;Lee, Jaehong
    • Journal of Korean Association for Spatial Structures
    • /
    • v.20 no.4
    • /
    • pp.111-121
    • /
    • 2020
  • There has been increasing interest in UHPC (ultra-high performance concrete) materials in recent years. Owing to its superior mechanical properties and durability, UHPC has been widely used in the design of various types of structures. In this paper, machine learning-based compressive strength prediction methods for UHPC are proposed. Various regression-based machine learning models were built and trained; 110 data samples collected from the literature were used for training and validation. Because the relationship between compressive strength and mixture composition is highly nonlinear, more advanced regression models are needed to obtain better results. The complex relationship between mixture proportion and concrete compressive strength can be predicted by the selected regression method.
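A minimal sketch of fitting a nonlinear regressor to a strength-composition relationship follows. The water-binder feature, the trend, and the noise are hypothetical illustrations, not the paper's 110 literature samples:

```python
import numpy as np

# Hypothetical data: compressive strength (MPa) as a nonlinear function
# of water-binder ratio, plus measurement noise.
rng = np.random.default_rng(1)
wb_ratio = rng.uniform(0.15, 0.25, 40)
strength = 250 - 900 * wb_ratio + 1500 * wb_ratio**2
strength += rng.normal(0, 2, wb_ratio.shape)

# A quadratic least-squares fit captures the nonlinearity that a
# straight line would miss.
coeffs = np.polyfit(wb_ratio, strength, deg=2)
pred = np.polyval(coeffs, wb_ratio)
rmse = np.sqrt(np.mean((pred - strength) ** 2))
print(rmse)  # close to the injected noise level
```

Real mixture data has many interacting components, which is why the paper turns to more advanced regression models than this one-feature polynomial.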

Unsupervised Transfer Learning for Plant Anomaly Recognition

  • Xu, Mingle;Yoon, Sook;Lee, Jaesu;Park, Dong Sun
    • Smart Media Journal
    • /
    • v.11 no.4
    • /
    • pp.30-37
    • /
    • 2022
  • Disease threatens plant growth, and recognizing the type of disease is essential to applying a remedy. In recent years, deep learning has brought significant improvement to this task; however, a large volume of labeled images is required to achieve decent performance, and annotated images are difficult and expensive to obtain in the agricultural field. Designing an efficient and effective strategy with few labeled data is therefore one of the challenges in this area. Transfer learning, which takes knowledge from a source domain to a target domain, has been borrowed to address this issue with comparable results. However, current transfer learning strategies can be regarded as supervised methods, as they assume many labeled images in the source domain. In contrast, unsupervised transfer learning, which uses only unlabeled images from the source domain, is more convenient, since collecting images is much easier than annotating them. In this paper, we leverage unsupervised transfer learning to perform plant disease recognition, achieving better performance than supervised transfer learning in many cases. In addition, a vision transformer, with a larger model capacity than a convolutional network, is utilized to obtain a better pre-trained feature space. With vision transformer-based unsupervised transfer learning, we achieve better results than current works on two datasets. In particular, we obtain 97.3% accuracy with only 30 training images per class on the Plant Village dataset. We hope our work encourages the community to pay attention to vision transformer-based unsupervised transfer learning in the agricultural field when few labeled images are available.
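The few-label evaluation protocol (a frozen pretrained encoder followed by a simple classifier trained on 30 images per class) can be sketched as follows. The random-projection "encoder" and synthetic data are illustrative stand-ins for the pretrained vision transformer and real plant images:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, unsupervised-pretrained encoder: a fixed
# projection from 64-dim "images" to a 16-dim feature space.
W = rng.normal(size=(64, 16))
encode = lambda x: x @ W

# 30 labeled samples per class, 3 classes, with class-dependent offsets
offsets = np.repeat(np.eye(3), 30, axis=0) @ rng.normal(size=(3, 64)) * 3
X = rng.normal(size=(90, 64)) + offsets
y = np.repeat([0, 1, 2], 30)

# Nearest-centroid classifier in the frozen feature space: no encoder
# weights are updated, mirroring the few-label fine-tuning setting.
feats = encode(X)
centroids = np.stack([feats[y == c].mean(0) for c in range(3)])
pred = np.argmin(((feats[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
print((pred == y).mean())  # high accuracy when features separate the classes
```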

Document Image Binarization by GAN with Unpaired Data Training

  • Dang, Quang-Vinh;Lee, Guee-Sang
    • International Journal of Contents
    • /
    • v.16 no.2
    • /
    • pp.8-18
    • /
    • 2020
  • Data is critical in deep learning, but data scarcity often occurs in research, especially in the preparation of paired training data. In this paper, document image binarization with unpaired data is studied by introducing adversarial learning, removing the need for supervised or labeled datasets. However, a simple extension of previous unpaired training to binarization inevitably leads to poor performance compared to paired-data training. Thus, a new deep learning approach is proposed that introduces a multi-diversity of higher-quality generated images. A two-stage model is proposed that comprises a generative adversarial network (GAN) followed by a U-net network. In the first stage, the GAN uses the unpaired image data to create paired image data. In the second stage, the generated paired image data are passed through the U-net network for binarization; the trained U-net thus becomes the binarization model at test time. The proposed model has been evaluated on the publicly available DIBCO dataset and outperforms other techniques on unpaired training data. The paper shows, for the first time in the literature, the potential of using unpaired data for binarization, which can be further improved to replace paired-data training in the future.
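For context, the classical global-thresholding baseline that learned binarization models are typically compared against can be written in a few lines (Otsu's method; the synthetic page below is illustrative):

```python
import numpy as np

def otsu_threshold(gray):
    # Pick the threshold that maximizes between-class variance of the
    # grayscale histogram: the classical baseline for binarization.
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Synthetic "document": dark ink (~40) on bright paper (~210)
rng = np.random.default_rng(0)
page = rng.normal(210, 10, (64, 64))
page[20:40, 10:50] = rng.normal(40, 10, (20, 40))
gray = np.clip(page, 0, 255).astype(np.uint8)
t = otsu_threshold(gray)
binary = gray > t  # True = paper, False = ink
print(40 < t < 210)  # True: the threshold falls between ink and paper
```

Global thresholds of this kind fail on degraded documents with uneven illumination, which is exactly the gap the GAN + U-net pipeline targets.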