• Title/Summary/Keyword: deep transfer learning

ResNet-Based Simulations for a Heat-Transfer Model Involving an Imperfect Contact

  • Guangxing, Wang;Gwanghyun, Jo;Seong-Yoon, Shin
    • Journal of information and communication convergence engineering / v.20 no.4 / pp.303-308 / 2022
  • Simulating heat transfer in a composite material is an important topic in material science. Difficulties arise from the fact that adjacent materials cannot match perfectly, resulting in discontinuities in the temperature variable. Although several numerical methods exist for solving the heat-transfer problem under imperfect contact conditions, the methods known so far are complicated to implement and their computational times are non-negligible. In this study, we developed a ResNet-type deep neural network for simulating a heat-transfer model in a composite material. To train the neural network, we generated datasets by numerically solving the heat-transfer equations with Kapitza thermal resistance conditions. Because the datasets involve various configurations of composite materials, our neural networks are robust to the shapes of material-material interfaces. Once trained, the networks can predict the thermal behavior in real time. The performance of the proposed neural networks is documented: the root mean square error (RMSE) and mean absolute error (MAE) are below 2.47E-6 and 7.00E-4, respectively.
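
The abstract above describes a ResNet-type network trained to regress temperature behavior from numerically generated data. Below is a minimal sketch of such a residual regression network in tf.keras; the input encoding of the composite configuration, the layer widths, and the output size are assumptions, since the abstract does not specify the architecture.

```python
# Minimal sketch of a ResNet-style regression surrogate (hypothetical shapes;
# the paper's exact architecture and data format are not given in the abstract).
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, units):
    # Two dense layers with an identity skip connection -- the basic ResNet idea.
    h = layers.Dense(units, activation="relu")(x)
    h = layers.Dense(units)(h)
    return layers.Activation("relu")(layers.Add()([x, h]))

inputs = tf.keras.Input(shape=(64,))      # hypothetical encoding of the material configuration
x = layers.Dense(128, activation="relu")(inputs)
for _ in range(4):                        # assumed depth
    x = residual_block(x, 128)
outputs = layers.Dense(1024)(x)           # hypothetical flattened temperature field

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError(), "mae"])
# model.fit(X_train, y_train, validation_split=0.1, epochs=100)
```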

A Deep Learning Approach for Covid-19 Detection in Chest X-Rays

  • Sk. Shalauddin Kabir;Syed Galib;Hazrat Ali;Fee Faysal Ahmed;Mohammad Farhad Bulbul
    • International Journal of Computer Science & Network Security / v.24 no.3 / pp.125-134 / 2024
  • The novel coronavirus disease 2019 (COVID-19) has spread swiftly worldwide. Early diagnosis is important for controlling its rapid spread. Medical imaging techniques, namely chest computed tomography and chest X-ray, play a vital role in the identification and testing of COVID-19 in the present epidemic. Chest X-ray is a cost-effective method for COVID-19 detection; however, manual X-ray analysis is time consuming given that the number of infected individuals keeps growing rapidly. For this reason, it is important to develop an automated COVID-19 detection process to help control the pandemic. In this study, we address the task of automatic detection of COVID-19 using a popular deep learning model, namely the VGG19 model. We used 1300 healthy and 1300 confirmed COVID-19 chest X-ray images in this experiment. We performed three experiments by freezing different blocks and layers of VGG19 and finally used a machine learning classifier, an SVM, for detecting COVID-19. In every experiment, we used five-fold cross-validation to train and validate the model, and we achieved 98.1% overall classification accuracy. The experimental results show that our proposed method, using the deep learning-based VGG19 model, can serve as a tool to aid radiologists and play a crucial role in the timely diagnosis of COVID-19.
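
As a rough illustration of the pipeline described above (a frozen, ImageNet-pretrained VGG19 used as a feature extractor, followed by an SVM evaluated with five-fold cross-validation), the sketch below assumes the X-ray images are already resized to 224x224; the data loading, array names, and exact set of frozen layers are assumptions not given in the abstract.

```python
# Hedged sketch: frozen VGG19 feature extractor + SVM + five-fold cross-validation.
import tensorflow as tf
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

base = tf.keras.applications.VGG19(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                     # freeze all convolutional blocks

def extract_features(images):
    # images: float array of shape (N, 224, 224, 3)
    x = tf.keras.applications.vgg19.preprocess_input(images)
    return base.predict(x, verbose=0)

# X: chest X-ray images, y: 0 = healthy, 1 = COVID-19 (hypothetical arrays)
# feats = extract_features(X)
# scores = cross_val_score(SVC(kernel="rbf"), feats, y, cv=5)
# print("mean accuracy:", scores.mean())
```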

Transfer Learning-based Generated Synthetic Images Identification Model (전이 학습 기반의 생성 이미지 판별 모델 설계)

  • Chaewon Kim;Sungyeon Yoon;Myeongeun Han;Minseo Park
    • The Journal of the Convergence on Culture Technology / v.10 no.2 / pp.465-470 / 2024
  • The advancement of AI-based image generation technology has resulted in the creation of a wide variety of synthetic images, emphasizing the need for technology capable of accurately discerning them. Because the amount of generated-image data is limited, this study proposes a model for discriminating generated images using transfer learning to achieve high performance with a limited dataset. We apply models pre-trained on the ImageNet dataset directly to the CIFAKE input dataset to reduce training time, and then add three hidden layers and one output layer to fine-tune the model. The modeling results revealed an improvement in performance when the final layers were adjusted. By using transfer learning and then adjusting the layers close to the output layer, accuracy problems caused by small image datasets can be reduced and generated images can be classified.
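
A minimal sketch of the kind of classifier head described above: an ImageNet-pretrained backbone kept frozen, plus three hidden layers and one output layer for real-versus-generated classification. The choice of ResNet50, the layer widths, and upscaling CIFAKE images to 224x224 are assumptions; the abstract does not name the pretrained model.

```python
# Sketch: frozen ImageNet backbone + three hidden layers + one output layer.
import tensorflow as tf
from tensorflow.keras import layers

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                           # reuse ImageNet weights as-is

x = layers.Dense(256, activation="relu")(base.output)
x = layers.Dense(128, activation="relu")(x)
x = layers.Dense(64, activation="relu")(x)
out = layers.Dense(1, activation="sigmoid")(x)   # real vs. generated image
model = tf.keras.Model(base.input, out)

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(cifake_train_ds, validation_data=cifake_val_ds, epochs=10)
```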

Tomato Crop Diseases Classification Models Using Deep CNN-based Architectures (심층 CNN 기반 구조를 이용한 토마토 작물 병해충 분류 모델)

  • Kim, Sam-Keun;Ahn, Jae-Geun
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.5 / pp.7-14 / 2021
  • Tomato crops are highly affected by tomato diseases, and if not prevented, a disease can cause severe losses for the agricultural economy. Therefore, there is a need for a system that quickly and accurately diagnoses various tomato diseases. In this paper, we propose a system that classifies nine diseases as well as healthy tomato plants by applying various pretrained deep learning-based CNN models trained on the ImageNet dataset. The tomato leaf image dataset obtained from PlantVillage is provided as input to ResNet, Xception, and DenseNet, which have deep learning-based CNN architectures. The proposed models were constructed by adding a top-level classifier to the basic CNN model and were trained using a 5-fold cross-validation strategy. All three proposed models were trained in two stages: transfer learning (which freezes the layers of the basic CNN model and trains only the top-level classifier) and fine-tuning (which sets the learning rate to a very small number and trains after unfreezing the basic CNN layers). SGD, RMSprop, and Adam were applied as optimization algorithms. The experimental results show that the DenseNet CNN model with the RMSprop algorithm produced the best results, with 98.63% accuracy.
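
The two-stage schedule described above (transfer learning with a frozen backbone, then fine-tuning with a very small learning rate) might look like the sketch below, shown for the best-performing combination reported in the abstract (DenseNet with RMSprop); the classifier head, image size, learning rates, and epoch counts are assumptions.

```python
# Sketch of two-stage training: (1) freeze backbone, train top classifier;
# (2) unfreeze backbone, fine-tune with a very small learning rate.
import tensorflow as tf
from tensorflow.keras import layers

base = tf.keras.applications.DenseNet121(weights="imagenet", include_top=False,
                                         input_shape=(224, 224, 3), pooling="avg")
x = layers.Dropout(0.3)(base.output)
out = layers.Dense(10, activation="softmax")(x)   # 9 diseases + healthy
model = tf.keras.Model(base.input, out)

# Stage 1: transfer learning
base.trainable = False
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)

# Stage 2: fine-tuning
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```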

Two-phase flow pattern online monitoring system based on convolutional neural network and transfer learning

  • Hong Xu;Tao Tang
    • Nuclear Engineering and Technology / v.54 no.12 / pp.4751-4758 / 2022
  • Two-phase flow exists in almost every branch of the energy industry. For the corresponding engineering design, it is essential to monitor flow patterns and their transitions accurately. With the rapid development and success of deep learning based on convolutional neural networks (CNNs), recent studies of flow pattern identification have focused largely on this methodology. Additionally, the photographing technique has attractive implementation features, since it is normally considerably less expensive than other techniques. The objective of this work is the development of such a two-phase flow pattern online monitoring system, which has seldom been studied before. The ongoing preliminary engineering design (including hardware and software) of the system is introduced. The flow pattern identification method based on CNNs and transfer learning is discussed in detail. Several potential CNN candidates, such as AlexNet, VGGNet16 and ResNets, were introduced and compared with each other on a flow pattern dataset. According to the results, ResNet50 is the most promising CNN for the system owing to its high precision, fast classification and strong robustness. This work can serve as a reference for online monitoring system design in energy systems.
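
The backbone comparison described above could be set up roughly as in the sketch below. AlexNet is not bundled with tf.keras.applications, so only VGG16 and ResNet50 are shown; the number of flow-pattern classes and the dataset objects are assumptions.

```python
# Sketch: comparing ImageNet-pretrained backbones on a flow-pattern dataset via transfer learning.
import tensorflow as tf
from tensorflow.keras import layers

def build_classifier(backbone_fn, n_classes=4):
    base = backbone_fn(weights="imagenet", include_top=False,
                       input_shape=(224, 224, 3), pooling="avg")
    base.trainable = False                     # transfer learning: reuse ImageNet features
    out = layers.Dense(n_classes, activation="softmax")(base.output)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

candidates = {"VGG16": tf.keras.applications.VGG16,
              "ResNet50": tf.keras.applications.ResNet50}
# for name, fn in candidates.items():
#     model = build_classifier(fn)
#     model.fit(flow_train_ds, validation_data=flow_val_ds, epochs=5)
#     print(name, model.evaluate(flow_test_ds))
```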

Iceberg-Ship Classification in SAR Images Using Convolutional Neural Network with Transfer Learning

  • Choi, Jeongwhan
    • Journal of Internet Computing and Services / v.19 no.4 / pp.35-44 / 2018
  • Monitoring through Synthetic Aperture Radar (SAR) supports marine safety by detecting floating icebergs. However, there are limits to distinguishing between icebergs and ships in SAR images. A Convolutional Neural Network (CNN) is used to distinguish icebergs from ships. The goal of this paper is to increase the accuracy of identifying icebergs in SAR images. The metric for performance evaluation is the log loss. The two-layer CNN model proposed by C. Bentes et al. [1] is used as a benchmark model and compared with a four-layer CNN model using data augmentation. Finally, the performance of the final CNN model, which uses the pre-trained VGG-16 model, is compared with the previous models. This paper shows how to improve the benchmark model and proposes the final CNN model.
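
A rough sketch of the combination described above: data augmentation in front of a pre-trained VGG-16 backbone, with the binary model evaluated by log loss (the paper's metric). Treating the SAR inputs as 75x75 three-channel images is an assumption, as are the variable names.

```python
# Sketch: augmentation + frozen VGG-16 backbone for iceberg-vs-ship classification,
# evaluated with log loss.
import tensorflow as tf
from tensorflow.keras import layers
from sklearn.metrics import log_loss

augment = tf.keras.Sequential([layers.RandomFlip("horizontal"),
                               layers.RandomRotation(0.1)])

inputs = tf.keras.Input(shape=(75, 75, 3))
x = augment(inputs)                                   # active only during training
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(75, 75, 3), pooling="avg")
base.trainable = False
out = layers.Dense(1, activation="sigmoid")(base(x))  # iceberg vs. ship
model = tf.keras.Model(inputs, out)
model.compile(optimizer="adam", loss="binary_crossentropy")

# model.fit(x_train, y_train, epochs=10)
# print("log loss:", log_loss(y_test, model.predict(x_test)))
```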

The Verification of the Transfer Learning-based Automatic Post Editing Model (전이학습 기반 기계번역 사후교정 모델 검증)

  • Moon, Hyeonseok;Park, Chanjun;Eo, Sugyeong;Seo, Jaehyung;Lim, Heuiseok
    • Journal of the Korea Convergence Society / v.12 no.10 / pp.27-35 / 2021
  • Automatic post editing (APE) is a research field that aims to automatically correct errors in machine translation results. This research has mainly focused on high-resource language pairs, such as English-German. Recent APE studies mainly adopt transfer learning-based approaches, in which pre-trained language models or translation models generated through self-supervised learning methodologies are utilized. Although translation-based APE models have shown superior performance in recent research, those studies were conducted on high-resource languages, so the same conclusions cannot be directly applied to low-resource languages. In this work, we apply two transfer learning strategies to Korean-English APE and show that transfer learning with a translation model can significantly improve APE performance.
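
A minimal sketch of one of the transfer-learning strategies named above: initializing an APE model from a pretrained translation model and fine-tuning it on (source + machine translation, post-edit) pairs. The model name, input formatting, and example sentences are illustrative assumptions only; they are not taken from the paper.

```python
# Sketch: warm-start an APE model from a pretrained NMT model and fine-tune it.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Helsinki-NLP/opus-mt-ko-en"       # hypothetical pretrained ko-en NMT model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)   # APE model starts from NMT weights

# One APE training pair: (source + raw MT output) -> human post-edited translation.
src = "한국어 원문 문장"                                     # source sentence
mt = "machine translated sentece with an error"              # raw MT output (contains an error)
pe = "machine translated sentence with the error corrected"  # human post-edit

inputs = tokenizer(src + " " + mt, return_tensors="pt", truncation=True)
labels = tokenizer(pe, return_tensors="pt", truncation=True).input_ids
loss = model(**inputs, labels=labels).loss       # fine-tune the APE model on this loss
```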

Multiple Hint Information-based Knowledge Transfer with Block-wise Retraining (블록 계층별 재학습을 이용한 다중 힌트정보 기반 지식전이 학습)

  • Bae, Ji-Hoon
    • IEMEK Journal of Embedded Systems and Applications / v.15 no.2 / pp.43-49 / 2020
  • In this paper, we propose a stage-wise knowledge transfer method that uses block-wise retraining to transfer the useful knowledge of a pre-trained residual network (ResNet) in a teacher-student framework (TSF). First, multiple-hint information transfer and block-wise supervised retraining of that information were performed alternately between the teacher and student ResNet models. Next, softened output information-based knowledge transfer was additionally considered in the TSF. The experimental results show that the proposed method, using multiple hint-based bottom-up knowledge transfer coupled with incremental block-wise retraining, provided an improved student ResNet with higher accuracy than the existing KD and hint-based knowledge transfer methods considered in this study.
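
The two ingredients named above can be sketched as loss terms: a hint loss that matches a student block's intermediate features to the teacher's, and a softened-output distillation loss. The regressor layer, temperature, and how the terms are weighted are assumptions, not details from the paper.

```python
# Sketch of the two loss terms typically used in hint-based teacher-student transfer.
import tensorflow as tf

def hint_loss(teacher_feat, student_feat, regressor):
    # Block-wise hint: match the student's intermediate features (after a small
    # learned regressor) to the teacher's corresponding feature maps.
    return tf.reduce_mean(tf.square(teacher_feat - regressor(student_feat)))

def softened_output_loss(teacher_logits, student_logits, temperature=4.0):
    # Softened-output transfer: KL divergence between temperature-scaled distributions.
    t = tf.nn.softmax(teacher_logits / temperature)
    s = tf.nn.softmax(student_logits / temperature)
    return tf.keras.losses.KLDivergence()(t, s) * temperature ** 2

# Block-wise retraining: unfreeze one student block (plus its regressor) at a time and
# minimize hint_loss; then train the whole student with softened_output_loss added to
# the usual cross-entropy on ground-truth labels.
```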

Two person Interaction Recognition Based on Effective Hybrid Learning

  • Ahmed, Minhaz Uddin;Kim, Yeong Hyeon;Kim, Jin Woo;Bashar, Md Rezaul;Rhee, Phill Kyu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.2 / pp.751-770 / 2019
  • Action recognition is an essential task in computer vision due to the variety of prospective applications, such as security surveillance, machine learning, and human-computer interaction. The availability of more video data than ever before and the strong performance of deep convolutional neural networks also make it essential for action recognition in video. Unfortunately, limited hand-crafted video features and the scarcity of benchmark datasets make it challenging to address the multi-person action recognition task in video data. In this work, we propose a deep convolutional neural network-based Effective Hybrid Learning (EHL) framework for two-person interaction classification in video data. Our approach exploits a pre-trained network model (the VGG16 from the University of Oxford Visual Geometry Group) and extends Faster R-CNN (a state-of-the-art region-based convolutional neural network detector). We combine a semi-supervised learning method with an active learning method to improve overall performance. Numerous types of two-person interactions exist in the real world, which makes this a challenging task. In our experiment, we consider a limited number of actions, such as hugging, fighting, linking arms, talking, and kidnapping, in two environments: simple and complex. We show that our trained model with an active semi-supervised learning architecture gradually improves performance. In a simple environment using an Intelligent Technology Laboratory (ITLab) dataset from Inha University, performance increased to 95.6% accuracy, and in a complex environment, performance reached 81% accuracy. Our method reduces data-labeling time, compared to supervised learning methods, for the ITLab dataset. We also conduct extensive experiments on human action recognition benchmarks such as the UT-Interaction and HMDB51 datasets and obtain better performance than state-of-the-art approaches.
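
One small, generic piece of the active semi-supervised loop described above can be sketched as follows: unlabeled clips whose predictions are confident are kept as pseudo-labels, while low-confidence ones are queried for human annotation. The threshold and variable names are assumptions for illustration.

```python
# Sketch of a confidence-based split for active/semi-supervised learning.
import numpy as np

def split_by_confidence(probs, threshold=0.9):
    """probs: (N, n_classes) softmax outputs of the interaction classifier."""
    confidence = probs.max(axis=1)
    pseudo_idx = np.where(confidence >= threshold)[0]    # accept as pseudo-labeled data
    query_idx = np.where(confidence < threshold)[0]      # send to a human annotator
    return pseudo_idx, probs.argmax(axis=1)[pseudo_idx], query_idx

# pseudo_idx, pseudo_labels, query_idx = split_by_confidence(model.predict(unlabeled_clips))
```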

Automatic Classification of Bridge Component based on Deep Learning (딥러닝 기반 교량 구성요소 자동 분류)

  • Lee, Jae Hyuk;Park, Jeong Jun;Yoon, Hyungchul
    • KSCE Journal of Civil and Environmental Engineering Research / v.40 no.2 / pp.239-245 / 2020
  • Recently, BIM (Building Information Modeling) has been widely utilized in the construction industry. However, most structures constructed in the past do not have BIM. For structures without BIM, applying SfM (Structure from Motion) techniques to 2D images obtained from a camera allows 3D point cloud data to be generated and BIM to be established. However, since the generated point cloud data do not contain semantic information, it is necessary to manually classify which elements of the structure they represent. Therefore, in this study, deep learning was applied to automate the process of classifying structural components. For the deep learning network, Inception-ResNet-v2, a CNN (Convolutional Neural Network) architecture, was used, and the components of bridge structures were learned through transfer learning. When components were classified using the data collected to verify the developed system, the components of the bridge were classified with an accuracy of 96.13%.
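
A minimal sketch of Inception-ResNet-v2 transfer learning on bridge-component images, as described above; the directory layout, class set, and image size are assumptions, and the images are treated as ordinary labeled photos rather than point cloud renderings.

```python
# Sketch: frozen Inception-ResNet-v2 backbone with a new softmax head for bridge components.
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical folder structure: bridge_components/<component_name>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "bridge_components", image_size=(299, 299), batch_size=32)
n_classes = len(train_ds.class_names)

base = tf.keras.applications.InceptionResNetV2(weights="imagenet", include_top=False,
                                               input_shape=(299, 299, 3), pooling="avg")
base.trainable = False                         # transfer learning: freeze the backbone
out = layers.Dense(n_classes, activation="softmax")(base.output)
model = tf.keras.Model(base.input, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

preprocess = tf.keras.applications.inception_resnet_v2.preprocess_input
# model.fit(train_ds.map(lambda x, y: (preprocess(x), y)), epochs=10)
```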