• Title/Summary/Keyword: Xception


Classification of Whole Body Bone Scan Image with Bone Metastasis using CNN-based Transfer Learning (CNN 기반 전이학습을 이용한 뼈 전이가 존재하는 뼈 스캔 영상 분류)

  • Yim, Ji Yeong;Do, Thanh Cong;Kim, Soo Hyung;Lee, Guee Sang;Lee, Min Hee;Min, Jung Joon;Bom, Hee Seung;Kim, Hyeon Sik;Kang, Sae Ryung;Yang, Hyung Jeong
    • Journal of Korea Multimedia Society / v.25 no.8 / pp.1224-1232 / 2022
  • Whole-body bone scan is the nuclear medicine imaging study most frequently performed to evaluate bone metastasis in cancer patients. We evaluated the performance of a VGG16-based transfer-learning classifier on bone scan images containing metastatic bone lesions. A total of 1,000 bone scans from 1,000 cancer patients (500 with bone metastasis, 500 without) were evaluated. Scans were labeled abnormal or normal for bone metastasis using medical reports and image review, and gradient-weighted class activation maps (Grad-CAMs) were then generated for explainable AI. The proposed model achieved an AUROC of 0.96 and an F1-score of 0.90, outperforming VGG16, ResNet50, Xception, DenseNet121, and InceptionV3. Grad-CAM visualization showed that, when classifying whole-body bone scan images with bone metastases, the proposed model focuses on hot uptakes, which indicate active bone lesions.
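
As a rough illustration of the approach this abstract describes (an ImageNet-pretrained VGG16 backbone with a new classification head, plus Grad-CAM for explanation), here is a minimal TensorFlow/Keras sketch. The head size, input resolution, and optimizer are illustrative assumptions, not the paper's configuration.

```python
# Sketch: VGG16 transfer learning for binary (abnormal/normal) classification,
# with a Grad-CAM heatmap for explainability. Hyperparameters are illustrative.
import tensorflow as tf
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the ImageNet features; train only the new head

x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
x = tf.keras.layers.Dense(256, activation="relu")(x)
out = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # P(bone metastasis)
model = tf.keras.Model(base.input, out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])

def grad_cam(model, image, conv_layer="block5_conv3"):
    """Normalized Grad-CAM heatmap for one preprocessed image (224, 224, 3)."""
    grad_model = tf.keras.Model(
        model.input, [model.get_layer(conv_layer).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, pred = grad_model(image[None, ...])
        score = pred[:, 0]
    grads = tape.gradient(score, conv_out)        # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))  # channel importance weights
    cam = tf.nn.relu(tf.reduce_sum(conv_out * weights[:, None, None, :], -1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()[0]
```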

Study on the Application of Artificial Intelligence Model for CT Quality Control (CT 정도관리를 위한 인공지능 모델 적용에 관한 연구)

  • Ho Seong Hwang;Dong Hyun Kim;Ho Chul Kim
    • Journal of Biomedical Engineering Research / v.44 no.3 / pp.182-189 / 2023
  • CT is a medical imaging device that acquires images based on the X-ray attenuation coefficients of human organs; from these data it can also reconstruct sagittal and coronal planes and 3D images of the body, making it essential for routine diagnostic testing. Because the radiation exposure of a CT scan is high, CT is regulated and managed as special medical equipment and must therefore undergo quality control. Within that quality control, the spatial resolution, contrast resolution, and clinical image evaluations of the existing phantom imaging tests are qualitative; they are not objective, which undermines confidence in the CT system. We therefore applied artificial intelligence classification models to assess whether the qualitative parts of the phantom test can be evaluated quantitatively. Six classification models were used (VGG19, DenseNet201, EfficientNet B2, inception_resnet_v2, ResNet50V2, and Xception), and an additional fine-tuning step was performed during training. Across all classification models, spatial-resolution accuracy was 0.9562 or higher, precision was 0.9535, recall was 1, the loss value was 0.1774, and training time ranged from a maximum of 14 minutes to a minimum of 8 minutes 10 seconds. From these results, we conclude that artificial intelligence models can be applied to CT quality control for spatial resolution and contrast resolution.
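
The comparison this abstract reports (several ImageNet-pretrained backbones with an added fine-tuning pass) can be sketched as follows in Keras; the head, input size, unfreezing depth, and learning rate are assumptions for illustration, not the study's settings.

```python
# Sketch: build each backbone with a frozen base plus a small softmax head,
# then unfreeze the top layers for a low-learning-rate fine-tuning pass.
import tensorflow as tf
from tensorflow.keras import applications as apps

BACKBONES = {
    "VGG19": apps.VGG19,
    "DenseNet201": apps.DenseNet201,
    "EfficientNetB2": apps.EfficientNetB2,
    "InceptionResNetV2": apps.InceptionResNetV2,
    "ResNet50V2": apps.ResNet50V2,
    "Xception": apps.Xception,
}

def build(name, num_classes=2, input_shape=(299, 299, 3)):
    base = BACKBONES[name](weights="imagenet", include_top=False,
                           input_shape=input_shape)
    base.trainable = False  # stage 1: train the classification head only
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(base.input, out), base

def fine_tune(model, base, n_unfrozen=30, lr=1e-5):
    # stage 2: unfreeze the top layers and continue at a low learning rate
    for layer in base.layers[-n_unfrozen:]:
        layer.trainable = True
    model.compile(tf.keras.optimizers.Adam(lr),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model
```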

An intelligent method for pregnancy diagnosis in breeding sows according to ultrasonography algorithms

  • Jung-woo Chae;Yo-han Choi;Jeong-nam Lee;Hyun-ju Park;Yong-dae Jeong;Eun-seok Cho;Young-sin Kim;Tae-kyeong Kim;Soo-jin Sa;Hyun-chong Cho
    • Journal of Animal Science and Technology / v.65 no.2 / pp.365-376 / 2023
  • Pig breeding management directly contributes to the profitability of pig farms, and pregnancy diagnosis is an important factor in breeding management; the need to diagnose pregnancy in sows is therefore emphasized, and various studies have been conducted in this area. We propose a computer-aided diagnosis system to assist livestock farmers in diagnosing sow pregnancy by ultrasound. Ultrasound methods for diagnosing pregnancy in sows include the Doppler method, which measures heart rate and pulse status, and the echo method, which diagnoses by an amplitude-depth technique. We propose a method that applies deep learning algorithms to ultrasonography, which is part of the echo method. Inception-v4, Xception, and EfficientNetV2 were used as deep learning classification algorithms and compared to find the optimal algorithm for pregnancy diagnosis in sows. Because ultrasonography is easily affected by noise from the surrounding environment, Gaussian and speckle noise were added to the ultrasound images to match its characteristics. Both the original and the noise-added ultrasound images of sows were tested together to determine the suitability of the proposed method on farms. Pregnancy diagnosis achieved an accuracy of up to 0.99 on the original ultrasound images and 0.98 on the noise-added images; even when the noise was strong, accuracy remained 0.96, demonstrating robustness to noise.
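
The noise experiment described above can be approximated with two small functions: additive Gaussian noise and multiplicative speckle noise, the latter being characteristic of ultrasound. This is a sketch assuming images as float arrays in [0, 1]; the variance values are illustrative, not the paper's.

```python
# Sketch: Gaussian (additive) and speckle (multiplicative) noise injection
# for testing a classifier's robustness on ultrasound-like images.
import numpy as np

def add_gaussian_noise(img, sigma=0.05, rng=None):
    """pixel + N(0, sigma^2): models additive sensor/electronic noise."""
    rng = rng or np.random.default_rng()
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def add_speckle_noise(img, sigma=0.10, rng=None):
    """pixel * (1 + N(0, sigma^2)): multiplicative noise typical of ultrasound."""
    rng = rng or np.random.default_rng()
    return np.clip(img * (1.0 + rng.normal(0.0, sigma, img.shape)), 0.0, 1.0)
```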

Estimation of Heading Date of Paddy Rice from Slanted View Images Using Deep Learning Classification Model

  • Hyeokjin Bak;Hoyoung Ban;Seongryul Chang;Dongwon Gwon;Jae-Kyeong Baek;Jeong-Il Cho;Wan-Gyu Sang
    • Proceedings of the Korean Society of Crop Science Conference / 2022.10a / pp.80-80 / 2022
  • Estimating the heading date of paddy rice is laborious and time-consuming, so automatic estimation is highly desirable. In this experiment, deep learning classification models were used to classify rice into two categories (vegetative and reproductive stage) based on panicle initiation in the paddy field. The dataset comprised 444 slanted-view images in the two categories and was expanded to 1,497 images with the IMGAUG data augmentation technique. We adopted two transfer-learning strategies: first, transferring model weights pre-trained on ImageNet to six classification networks (VGGNet, ResNet, DenseNet, InceptionV3, Xception, and MobileNet); second, fine-tuning some layers of each network on our dataset. After training the CNN models, we evaluated them with several metrics commonly used for classification tasks, including accuracy, precision, recall, and F1-score, and used Grad-CAM to generate visual explanations for each image patch. Experimental results showed that InceptionV3 performed best in accuracy, average recall, precision, and F1-score; the fine-tuned InceptionV3 model achieved an overall classification accuracy of 0.95 with a high F1-score of 0.95. Our CNN model also captured the change in rice heading date across different transplanting dates. This study demonstrates that an image-based deep learning model can reliably serve as an automatic monitoring system that detects the heading date of rice crops from CCTV camera images.
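
The IMGAUG expansion step mentioned above might look like the following sketch; the specific augmenters and their ranges are assumptions for illustration (the paper does not list them here), and images are assumed to be uint8 arrays.

```python
# Sketch: expand a small image dataset with the imgaug library.
import numpy as np
import imgaug.augmenters as iaa

aug = iaa.Sequential([
    iaa.Fliplr(0.5),                                   # random horizontal flip
    iaa.Affine(rotate=(-15, 15), scale=(0.9, 1.1)),    # small rotation / zoom
    iaa.Multiply((0.8, 1.2)),                          # brightness jitter
    iaa.AdditiveGaussianNoise(scale=(0, 0.03 * 255)),  # mild sensor noise
])

def expand(images, copies=2):
    """Return the originals plus `copies` augmented versions of each image."""
    batch = np.asarray(images)
    out = list(batch)
    for _ in range(copies):
        out.extend(aug(images=batch))
    return out
```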


Transfer Learning for Caladium bicolor Classification: Proof of Concept to Application Development

  • Porawat Visutsak;Xiabi Liu;Keun Ho Ryu;Naphat Bussabong;Nicha Sirikong;Preeyaphorn Intamong;Warakorn Sonnui;Siriwan Boonkerd;Jirawat Thongpiem;Maythar Poonpanit;Akarasate Homwiseswongsa;Kittipot Hirunwannapong;Chaimongkol Suksomsong;Rittikait Budrit
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.1 / pp.126-146 / 2024
  • Caladium bicolor is one of the most popular plants in Thailand. The original species of Caladium bicolor was found a hundred years ago, and more than 500 varieties have since been produced through propagation. Caladium bicolor can be classified by its color and shape. This study develops a model for classifying Caladium bicolor using a transfer-learning technique, and also presents a proof of concept, a GUI design, and a web application deployed using a user-centered design method. We evaluated the following pre-trained models, with these results: 87.29% for AlexNet, 90.68% for GoogleNet, 93.59% for XceptionNet, 93.22% for MobileNetV2, 89.83% for ResNet18, 88.98% for ResNet50, 97.46% for ResNet101, and 94.92% for InceptionResNetV2. This work was implemented in MATLAB R2023a.
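
The paper's implementation is MATLAB R2023a; as a rough Python/Keras analogue of the reported comparison step, this sketch scores a set of already-trained models on one shared test split. `models` and `test_ds` are hypothetical placeholders, not names from the paper.

```python
# Sketch: rank several trained classifiers by top-1 accuracy on one test set.
import tensorflow as tf

def compare(models: dict, test_ds: tf.data.Dataset) -> dict:
    """Return {model name: accuracy}, sorted best-first."""
    scores = {}
    for name, model in models.items():
        metric = tf.keras.metrics.SparseCategoricalAccuracy()
        for images, labels in test_ds:
            metric.update_state(labels, model(images, training=False))
        scores[name] = float(metric.result())
    return dict(sorted(scores.items(), key=lambda kv: -kv[1]))
```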

A computer vision-based approach for behavior recognition of gestating sows fed different fiber levels during high ambient temperature

  • Kasani, Payam Hosseinzadeh;Oh, Seung Min;Choi, Yo Han;Ha, Sang Hun;Jun, Hyungmin;Park, Kyu hyun;Ko, Han Seo;Kim, Jo Eun;Choi, Jung Woo;Cho, Eun Seok;Kim, Jin Soo
    • Journal of Animal Science and Technology / v.63 no.2 / pp.367-379 / 2021
  • The objectives of this study were to evaluate convolutional neural network models and computer vision techniques for highly accurate classification of swine posture, and to use the derived results to investigate the effect of dietary fiber level on the behavioral characteristics of pregnant sows under low and high ambient temperatures during the last stage of gestation. A total of 27 crossbred sows (Yorkshire × Landrace; average body weight, 192.2 ± 4.8 kg) were assigned to three treatments in a randomized complete block design during the last stage of gestation (days 90 to 114). The sows in group 1 were fed a 3% fiber diet under neutral ambient temperature; the sows in group 2 were fed a 3% fiber diet under high ambient temperature (HT); the sows in group 3 were fed a 6% fiber diet under HT. Eight popular deep learning-based feature-extraction frameworks (DenseNet121, DenseNet201, InceptionResNetV2, InceptionV3, MobileNet, VGG16, VGG19, and Xception) were selected and compared for automatic swine posture classification using a swine posture image dataset constructed under real farm conditions. The neural network models showed excellent performance on previously unseen data (ability to generalize). The DenseNet121 feature extractor achieved the best performance with 99.83% accuracy, and both DenseNet201 and MobileNet showed an accuracy of 99.77% on the image dataset. The behavior of sows classified by the DenseNet121 feature extractor showed that HT in our study reduced (p < 0.05) the standing behavior of sows and also tended to increase (p = 0.082) lying behavior. High dietary fiber treatment tended to increase (p = 0.064) lying and decreased (p < 0.05) the standing behavior of sows, but there was no change in sitting under HT conditions.
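
A small sketch of the downstream step implied here: turning per-frame posture predictions into time-budget fractions per sow so treatment groups can be compared. The posture names are assumptions for illustration; the paper's statistical pipeline is not reproduced.

```python
# Sketch: convert a sequence of per-frame posture labels into the fraction of
# observed time spent standing, sitting, and lying for one sow.
from collections import Counter

POSTURES = ("standing", "sitting", "lying")

def time_budget(frame_predictions):
    """Map e.g. ["lying", "lying", "standing", ...] to posture fractions."""
    counts = Counter(frame_predictions)
    total = sum(counts.values()) or 1  # avoid division by zero on empty input
    return {p: counts[p] / total for p in POSTURES}
```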