• Title/Abstract/Keywords: VGG16

Search results: 122

Two-phase flow pattern online monitoring system based on convolutional neural network and transfer learning

  • Hong Xu;Tao Tang
    • Nuclear Engineering and Technology, Vol. 54, No. 12, pp. 4751-4758, 2022
  • Two-phase flow exists in almost every branch of the energy industry, and for the corresponding engineering design it is essential and crucial to monitor flow patterns and their transitions accurately. With the rapid development and success of deep learning based on convolutional neural networks (CNNs), recent studies of flow pattern identification have focused almost exclusively on this methodology. The photographing technique also has attractive implementation features, since it is normally considerably less expensive than other techniques. The objective of this work is the development of such a two-phase flow pattern online monitoring system, which has seldom been studied before. The ongoing preliminary engineering design (including hardware and software) of the system is introduced, and the flow pattern identification method based on CNNs and transfer learning is discussed in detail. Several potential CNN candidates, such as AlexNet, VGG16, and ResNets, were introduced and compared with each other on a flow pattern dataset. According to the results, ResNet50 is the most promising CNN for the system owing to its high precision, fast classification, and strong robustness. This work can serve as a reference for online monitoring system design in energy systems.
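
As a concrete illustration of the transfer-learning setup described above, the sketch below loads an ImageNet-pretrained ResNet50 (the network the authors found most promising) and replaces its classification head. This is a minimal sketch, not the paper's code: the number of flow-pattern classes and the frozen-backbone strategy are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_FLOW_PATTERNS = 4  # hypothetical count, e.g., bubbly/slug/churn/annular

# Load an ImageNet-pretrained ResNet50 and replace its classifier head.
model = models.resnet50(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, NUM_FLOW_PATTERNS)

# Freeze the backbone so only the new head trains (one common transfer-
# learning strategy; the paper may fine-tune more layers).
for name, param in model.named_parameters():
    if not name.startswith("fc."):
        param.requires_grad = False

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```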

A Case Study on an Educational Model of Medical AI Using Chest X-ray Synthesized by GAN (GAN으로 합성된 흉부 X-ray를 활용한 의료 인공지능 교육 모델에 관한 사례 연구)

  • 이규빈;윤예빈;함소진;배현진;유원상
    • Proceedings of the Korea Information Processing Society Conference (한국정보처리학회 학술대회논문집), 2021 Fall Conference, pp. 887-890, 2021
  • As the market for AI-based medical diagnosis solutions has grown rapidly in recent years, demand for university education on medical AI has increased, but risks such as personal information leakage make it difficult to use real medical data in university courses. This paper presents a case study of an educational model for medical AI that uses chest X-ray images synthesized by a generative adversarial network (GAN) instead of real medical data. Using synthetic chest X-ray images provided by Promedius Inc., we constructed an educational model in which students train a VGG-16 model, validate and evaluate its performance, and improve it through fine-tuning. We also quantitatively evaluated how much the educational model contributed to improving students' understanding of medical AI.
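
A minimal PyTorch sketch of the train-validate-fine-tune exercise the educational model walks students through: train a new head on a frozen VGG-16, then unfreeze the top convolutional block at a lower learning rate. The two-class task and the layers chosen for unfreezing are assumptions, and the synthetic dataset itself is not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights="IMAGENET1K_V1")
model.classifier[6] = nn.Linear(4096, 2)  # hypothetical: normal vs. abnormal

# Stage 1: freeze the convolutional features and train only the classifier.
for p in model.features.parameters():
    p.requires_grad = False
head_opt = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)

# Stage 2 (fine-tuning): unfreeze the last convolutional block (indices 24-30
# in torchvision's VGG-16 features) and continue with a smaller learning rate.
for p in model.features[24:].parameters():
    p.requires_grad = True
ft_opt = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-5
)
```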

Deep learning framework for bovine iris segmentation

  • Heemoon Yoon;Mira Park;Hayoung Lee;Jisoon An;Taehyun Lee;Sang-Hee Lee
    • Journal of Animal Science and Technology, Vol. 66, No. 1, pp. 167-177, 2024
  • Iris segmentation is an initial step in identifying the biometrics of animals when establishing a traceability system for livestock. In this study, we propose a deep learning framework for pixel-wise segmentation of the bovine iris with minimal use of annotation labels, utilizing the public BovineAAEyes80 dataset. The proposed image segmentation framework encompasses data collection, data preparation, data augmentation selection, training of 15 deep neural network (DNN) models with varying encoder backbones and segmentation decoder DNNs, and evaluation of the models using multiple metrics and graphical segmentation results. The framework aims to provide comprehensive and in-depth information on each model's training and testing outcomes to optimize bovine iris segmentation performance. In the experiment, U-Net with a VGG16 backbone was identified as the optimal combination of encoder and decoder models for the dataset, achieving an accuracy of 99.50% and a Dice coefficient of 98.35%. Notably, the selected model accurately segmented even corrupted images without proper annotation data. This study contributes to the advancement of iris segmentation and the establishment of a reliable DNN training framework.
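
To make the winning encoder/decoder pairing concrete, here is a minimal sketch of a U-Net with a VGG16 encoder using the segmentation-models-pytorch library, together with the Dice metric the study reports. The library choice and the 0.5 threshold are assumptions, not the authors' code.

```python
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="vgg16",        # VGG16 backbone, as selected in the study
    encoder_weights="imagenet",  # pretrained encoder
    in_channels=3,
    classes=1,                   # binary mask: iris vs. background
)

def dice_coefficient(logits: torch.Tensor, target: torch.Tensor,
                     eps: float = 1e-7) -> torch.Tensor:
    """Dice score for binary masks, one of the metrics reported above."""
    pred = (torch.sigmoid(logits) > 0.5).float()
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)
```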

A computer vision-based approach for behavior recognition of gestating sows fed different fiber levels during high ambient temperature

  • Kasani, Payam Hosseinzadeh;Oh, Seung Min;Choi, Yo Han;Ha, Sang Hun;Jun, Hyungmin;Park, Kyu hyun;Ko, Han Seo;Kim, Jo Eun;Choi, Jung Woo;Cho, Eun Seok;Kim, Jin Soo
    • Journal of Animal Science and Technology, Vol. 63, No. 2, pp. 367-379, 2021
  • The objectives of this study were to evaluate convolutional neural network models and computer vision techniques for the classification of swine posture with high accuracy, and to use the derived results to investigate the effect of dietary fiber level on the behavioral characteristics of pregnant sows under low and high ambient temperatures during the last stage of gestation. A total of 27 crossbred sows (Yorkshire × Landrace; average body weight, 192.2 ± 4.8 kg) were assigned to three treatments in a randomized complete block design during the last stage of gestation (days 90 to 114). The sows in group 1 were fed a 3% fiber diet under neutral ambient temperature; the sows in group 2 were fed a 3% fiber diet under high ambient temperature (HT); the sows in group 3 were fed a 6% fiber diet under HT. Eight popular deep learning-based feature extraction frameworks (DenseNet121, DenseNet201, InceptionResNetV2, InceptionV3, MobileNet, VGG16, VGG19, and Xception) for automatic swine posture classification were selected and compared on a swine posture image dataset constructed under real swine farm conditions. The neural network models showed excellent performance on previously unseen data (ability to generalize). The DenseNet121 feature extractor achieved the best performance with 99.83% accuracy, and both DenseNet201 and MobileNet showed an accuracy of 99.77% on the image dataset. The behavior of sows classified by the DenseNet121 feature extractor showed that HT reduced (p < 0.05) standing behavior and also tended to increase (p = 0.082) lying behavior. The high dietary fiber treatment tended to increase (p = 0.064) lying behavior and decreased (p < 0.05) standing behavior, with no change in sitting under HT conditions.
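
The comparison of pretrained feature extractors can be sketched with torchvision as below: each backbone keeps its ImageNet weights and receives a fresh posture head. Only three of the eight networks are shown, the three posture classes are assumptions, and the paper's MobileNet is approximated here by torchvision's mobilenet_v2.

```python
import torch.nn as nn
from torchvision import models

NUM_POSTURES = 3  # hypothetical: standing, sitting, lying

def build_classifier(name: str) -> nn.Module:
    """Load a pretrained backbone and attach a fresh posture head."""
    if name == "densenet121":
        m = models.densenet121(weights="IMAGENET1K_V1")
        m.classifier = nn.Linear(m.classifier.in_features, NUM_POSTURES)
    elif name == "vgg16":
        m = models.vgg16(weights="IMAGENET1K_V1")
        m.classifier[6] = nn.Linear(m.classifier[6].in_features, NUM_POSTURES)
    elif name == "mobilenet_v2":
        m = models.mobilenet_v2(weights="IMAGENET1K_V1")
        m.classifier[1] = nn.Linear(m.classifier[1].in_features, NUM_POSTURES)
    else:
        raise ValueError(f"unknown backbone: {name}")
    return m

candidates = {n: build_classifier(n)
              for n in ("densenet121", "vgg16", "mobilenet_v2")}
```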

Development of a Malignancy Potential Binary Prediction Model Based on Deep Learning for the Mitotic Count of Local Primary Gastrointestinal Stromal Tumors

  • Jiejin Yang;Zeyang Chen;Weipeng Liu;Xiangpeng Wang;Shuai Ma;Feifei Jin;Xiaoying Wang
    • Korean Journal of Radiology, Vol. 22, No. 3, pp. 344-353, 2021
  • Objective: The mitotic count of gastrointestinal stromal tumors (GIST) is closely associated with the risk of tumor seeding and metastasis. The purpose of this study was to develop a predictive model for the mitotic index of local primary GIST based on a deep learning algorithm. Materials and Methods: Abdominal contrast-enhanced CT images of 148 pathologically confirmed GIST cases were retrospectively collected for the development of a deep learning classification algorithm. The areas of GIST masses on the CT images were retrospectively labelled by an experienced radiologist. The postoperative pathological mitotic count was considered the gold standard (high mitotic count, > 5/50 high-power fields [HPFs]; low mitotic count, ≤ 5/50 HPFs). A binary classification model was trained on the basis of the VGG16 convolutional neural network, using the CT images split into a training set (n = 108), a validation set (n = 20), and a test set (n = 20). The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated at both the image level and the patient level. Receiver operating characteristic curves were generated from the model prediction results, and the areas under the curve (AUCs) were calculated. The risk categories of the tumors were predicted according to the Armed Forces Institute of Pathology criteria. Results: At the image level, the classification prediction results for the mitotic counts in the test cohort were as follows: sensitivity 85.7% (95% confidence interval [CI]: 0.834-0.877), specificity 67.5% (95% CI: 0.636-0.712), PPV 82.1% (95% CI: 0.797-0.843), NPV 73.0% (95% CI: 0.691-0.766), and AUC 0.771 (95% CI: 0.750-0.791). At the patient level, the classification prediction results in the test cohort were as follows: sensitivity 90.0% (95% CI: 0.541-0.995), specificity 70.0% (95% CI: 0.354-0.919), PPV 75.0% (95% CI: 0.428-0.933), NPV 87.5% (95% CI: 0.467-0.993), and AUC 0.800 (95% CI: 0.563-0.943). Conclusion: We developed and preliminarily verified a GIST mitotic count binary prediction model based on the VGG convolutional neural network. The model displayed good predictive performance.
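
The abstract reports metrics at both the image level and the patient level but does not state how per-image outputs are aggregated; one plausible rule, mean probability over a patient's images with a 0.5 threshold, is sketched below as an illustrative assumption.

```python
from collections import defaultdict
from typing import Dict, Iterable, Tuple

def patient_level_predictions(
    image_preds: Iterable[Tuple[str, float]]
) -> Dict[str, int]:
    """Aggregate (patient_id, prob_high_mitotic_count) pairs into a
    per-patient binary call: 1 = high mitotic count, 0 = low."""
    by_patient = defaultdict(list)
    for pid, prob in image_preds:
        by_patient[pid].append(prob)
    return {pid: int(sum(ps) / len(ps) > 0.5)
            for pid, ps in by_patient.items()}
```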

The Effect of Type of Input Image on Accuracy in Classification Using Convolutional Neural Network Model (컨볼루션 신경망 모델을 이용한 분류에서 입력 영상의 종류가 정확도에 미치는 영향)

  • 김민정;김정훈;박지은;정우연;이종민
    • Journal of Biomedical Engineering Research (대한의용생체공학회 의공학회지), Vol. 42, No. 4, pp. 167-174, 2021
  • The purpose of this study is to classify TIFF, PNG, and JPEG images using deep learning and to compare the accuracy by verifying the classification performance. TIFF, PNG, and JPEG images converted from chest X-ray DICOM images were applied to five deep neural network models widely used in image recognition and classification to compare classification performance. The data consisted of a total of 4,000 X-ray images, converted from DICOM images into 16-bit TIFF images and 8-bit PNG and JPEG images. The learning models were five CNN models: VGG16, ResNet50, InceptionV3, DenseNet121, and EfficientNetB0. The accuracies of the five convolutional neural network models were 99.86%, 99.86%, 99.99%, 100%, and 99.89% on TIFF images; 99.88%, 100%, 99.97%, 99.87%, and 100% on PNG images; and 100%, 100%, 99.96%, 99.89%, and 100% on JPEG images. Validation of classification performance using test data showed 100% accuracy, precision, recall, and F1 score. Our classification results show that when DICOM images are converted to TIFF, PNG, or JPEG and trained after preprocessing, training works well in all formats. In medical imaging research using deep learning, classification performance is therefore not affected by the format to which DICOM images are converted.
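
A sketch of the DICOM-to-TIFF/PNG/JPEG conversion step described above, using pydicom and Pillow; the file paths and the plain min-max scaling (no windowing) are assumptions.

```python
import numpy as np
import pydicom
from PIL import Image

ds = pydicom.dcmread("chest.dcm")  # hypothetical input path
pixels = ds.pixel_array.astype(np.float32)
rng = pixels.max() - pixels.min() + 1e-8

# 16-bit TIFF keeps the full dynamic range of the DICOM pixels.
tiff16 = ((pixels - pixels.min()) / rng * 65535).astype(np.uint16)
Image.fromarray(tiff16).save("chest.tiff")

# PNG and JPEG are stored as 8-bit, compressing the range to 0-255.
img8 = ((pixels - pixels.min()) / rng * 255).astype(np.uint8)
Image.fromarray(img8).save("chest.png")
Image.fromarray(img8).save("chest.jpg")
```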

Weather Recognition Based on 3C-CNN

  • Tan, Ling;Xuan, Dawei;Xia, Jingming;Wang, Chao
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 14, No. 8, pp. 3567-3582, 2020
  • Human activities are often affected by weather conditions, and automatic weather recognition is meaningful for traffic alerting, driving assistance, and intelligent traffic. With the rise of deep learning and AI, deep convolutional neural networks (CNNs) are utilized to identify weather situations. In this paper, a three-channel convolutional neural network (3C-CNN) model is proposed on the basis of ResNet50. The model extracts global weather features from the whole image through the ResNet50 branch, and extracts sky and ground features from the top and bottom regions through two CNN5 branches. The global and local features are then merged by concatenation, and the weather image is finally classified by a softmax classifier, which outputs the identification result. In addition, a medium-scale dataset of 6,185 outdoor weather images, named WeatherDataset-6, is established. 3C-CNN is trained and tested on both the Two-class Weather Images dataset and WeatherDataset-6. The experimental results show that 3C-CNN achieves the best results on both datasets, with average recognition accuracies of 94.35% and 95.81%, respectively, superior to classic convolutional neural networks such as AlexNet, VGG16, and ResNet50. With further improvement, our method is expected to also work well for images taken at night.
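
The three-branch idea can be sketched as follows: a global ResNet50 branch on the whole image plus two small CNN branches on the top (sky) and bottom (ground) halves, merged by concatenation. The small-branch layers are stand-ins for the paper's CNN5, and the six classes are assumed from the dataset name.

```python
import torch
import torch.nn as nn
from torchvision import models

def small_cnn() -> nn.Sequential:
    """Stand-in for a CNN5 branch; the paper's exact layers are not given."""
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class ThreeChannelCNN(nn.Module):
    def __init__(self, num_classes: int = 6):
        super().__init__()
        self.global_branch = models.resnet50(weights="IMAGENET1K_V1")
        self.global_branch.fc = nn.Identity()  # 2048-d global features
        self.sky_branch, self.ground_branch = small_cnn(), small_cnn()
        self.head = nn.Linear(2048 + 64 + 64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = x.shape[2]
        top, bottom = x[:, :, : h // 2], x[:, :, h // 2 :]
        feats = torch.cat([self.global_branch(x),
                           self.sky_branch(top),
                           self.ground_branch(bottom)], dim=1)
        return self.head(feats)  # softmax is applied by the loss at training
```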

Two person Interaction Recognition Based on Effective Hybrid Learning

  • Ahmed, Minhaz Uddin;Kim, Yeong Hyeon;Kim, Jin Woo;Bashar, Md Rezaul;Rhee, Phill Kyu
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 13, No. 2, pp. 751-770, 2019
  • Action recognition is an essential task in computer vision due to the variety of prospective applications, such as security surveillance, machine learning, and human-computer interaction. The availability of more video data than ever before and the strong performance of deep convolutional neural networks also make action recognition in video essential. Unfortunately, limited hand-crafted video features and the scarcity of benchmark datasets make it challenging to address the multi-person action recognition task in video data. In this work, we propose a deep convolutional neural network-based Effective Hybrid Learning (EHL) framework for two-person interaction classification in video data. Our approach exploits a pre-trained network model (the VGG16 from the University of Oxford Visual Geometry Group) and extends Faster R-CNN (a state-of-the-art region-based convolutional neural network detector). We extend a semi-supervised learning method combined with an active learning method to improve overall performance. Numerous types of two-person interactions exist in the real world, which makes this a challenging task. In our experiment, we consider a limited number of actions, such as hugging, fighting, linking arms, talking, and kidnapping, in two environments, simple and complex. We show that our trained model with an active semi-supervised learning architecture gradually improves performance. In a simple environment using an Intelligent Technology Laboratory (ITLab) dataset from Inha University, performance increased to 95.6% accuracy, and in a complex environment, performance reached 81% accuracy. Our method reduces data-labeling time for the ITLab dataset, compared to supervised learning methods. We also conduct extensive experiments on human action recognition benchmarks such as the UT-Interaction and HMDB51 datasets and obtain better performance than state-of-the-art approaches.
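
The active-learning half of the hybrid loop needs a rule for choosing which unlabeled clips to send for annotation; a common least-confidence criterion is sketched below as an assumption, since the abstract does not publish the exact selection rule.

```python
import torch

def select_for_labeling(model: torch.nn.Module, unlabeled_batches,
                        budget: int = 10):
    """Return indices of the `budget` least-confident unlabeled batches,
    ranked by mean max-softmax probability (lower = less confident)."""
    model.eval()
    scores = []
    with torch.no_grad():
        for idx, x in enumerate(unlabeled_batches):
            probs = torch.softmax(model(x), dim=1)
            scores.append((probs.max(dim=1).values.mean().item(), idx))
    scores.sort()  # least confident first
    return [idx for _, idx in scores[:budget]]
```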

Application of a Deep Learning-Based Nuclear Medicine Lung Study Classification Model (딥러닝 기반의 핵의학 폐검사 분류 모델 적용)

  • 정의환;오주영;이주영;박훈희
    • Journal of Radiological Science and Technology (대한방사선기술학회지 방사선기술과학), Vol. 45, No. 1, pp. 41-47, 2022
  • The purpose of this study is to apply a deep learning model that can distinguish lung perfusion from lung ventilation images in nuclear medicine and to evaluate its image classification ability. Image data pre-processing was performed in the following order: image matrix size adjustment, min-max normalization, image center position adjustment, train/validation/test data set splitting, and data augmentation. The convolutional neural network (CNN) structures VGG-16, ResNet-18, Inception-ResNet-v2, and SE-ResNeXt-101 were used. For evaluation, classification performance metrics, class activation maps (CAM), and a statistical image evaluation method were applied. On the classification performance metrics, SE-ResNeXt-101 and Inception-ResNet-v2 tied for the highest performance. In the CAM results, the cardiac and right-lung regions were highly activated for lung perfusion, and the upper-lung and neck regions were highly activated for lung ventilation. Statistical image evaluation showed a meaningful difference between SE-ResNeXt-101 and Inception-ResNet-v2. The study confirms the applicability of CNN models to lung scintigraphy classification. We expect the results to serve as baseline data for research on new artificial intelligence models and to support stable image management in clinical practice.
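
The stated preprocessing order can be sketched as below; the 224 × 224 matrix size and the rotation augmentation are assumptions based on common practice, not the authors' exact values.

```python
import numpy as np
from torchvision import transforms

def min_max_normalize(img: np.ndarray) -> np.ndarray:
    """Min-max normalization step named in the abstract: scale to [0, 1]."""
    lo, hi = float(img.min()), float(img.max())
    return (img - lo) / (hi - lo + 1e-8)

# Hypothetical augmentation pipeline for the training split.
train_augment = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),   # assumed matrix size
    transforms.RandomRotation(10),   # assumed augmentation
    transforms.ToTensor(),
])

# Usage on a raw 2-D scintigraphy array `raw`:
# tensor = train_augment((min_max_normalize(raw) * 255).astype(np.uint8))
```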

A Novel Approach to COVID-19 Diagnosis Based on Mel Spectrogram Features and Artificial Intelligence Techniques

  • Alfaidi, Aseel;Alshahrani, Abdullah;Aljohani, Maha
    • International Journal of Computer Science & Network Security, Vol. 22, No. 9, pp. 195-207, 2022
  • COVID-19 has remained one of the most serious health crises in recent history, resulting in the tragic loss of lives and significant economic impacts on the entire world. The difficulty of controlling COVID-19 poses a threat to the global health sector. Artificial Intelligence (AI) has contributed to improving research methods and solving problems facing diverse fields of study, and AI algorithms have proven effective in disease detection and early diagnosis. Specifically, acoustic features offer a promising prospect for the early detection of respiratory diseases. Motivated by these observations, this study conceptualized a speech-based diagnostic model to aid in COVID-19 diagnosis. The proposed methodology uses speech signals from confirmed positive and negative cases of COVID-19 to extract features from Mel spectrogram images through the pre-trained Visual Geometry Group (VGG-16) model. The K-means algorithm then selects effective features, and a Genetic Algorithm-Support Vector Machine (GA-SVM) classifier classifies the cases. The experimental findings indicate the proposed methodology's capability to classify COVID-19 and non-COVID-19 speakers of varying ages and different languages, as demonstrated in the simulations. The proposed methodology relies on deep features followed by dimensionality reduction to detect COVID-19 and, as a result, produces better and more consistent performance than the handcrafted features used in previous studies.
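
A sketch of the front half of the pipeline (Mel spectrogram, pretrained VGG-16 deep features, SVM) using librosa, torchvision, and scikit-learn. The K-means feature-selection step and the genetic search over SVM hyperparameters are omitted for brevity, and the sampling rate and spectrogram sizing are assumptions.

```python
import numpy as np
import librosa
import torch
from torchvision import models
from sklearn.svm import SVC

# Drop VGG-16's final classification layer to obtain 4096-d deep features.
vgg = models.vgg16(weights="IMAGENET1K_V1").eval()
vgg.classifier = vgg.classifier[:-1]

def mel_vgg_features(wav_path: str) -> np.ndarray:
    """Mel spectrogram rendered as a 3-channel image, passed through VGG-16."""
    y, sr = librosa.load(wav_path, sr=16000)  # assumed sampling rate
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=224)
    img = librosa.power_to_db(mel, ref=np.max)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)
    img = img[:, :224]  # crop/pad the time axis to 224 frames
    if img.shape[1] < 224:
        img = np.pad(img, ((0, 0), (0, 224 - img.shape[1])))
    x = torch.tensor(np.stack([img] * 3)[None], dtype=torch.float32)
    with torch.no_grad():
        return vgg(x).numpy().ravel()

# X = np.stack([mel_vgg_features(p) for p in wav_paths]); y = case labels
clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # the paper's GA tunes these
```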