• Title/Summary/Keyword: NASNet

Search Results: 8

Deep Learning based Scrapbox Accumulated Status Measuring

  • Seo, Ye-In;Jeong, Eui-Han;Kim, Dong-Ju
    • Journal of the Korea Society of Computer and Information / v.25 no.3 / pp.27-32 / 2020
  • In this paper, we propose an algorithm to measure the accumulated status of scrap boxes in which metal scraps are piled up. Accumulated status measuring is defined as a multi-class classification problem, and the deep-learning-based method classifies the accumulated status using only the scrap box image. Training was conducted with the transfer learning method, and the deep learning model was NASNet-A. In order to improve the accuracy of the model, we combined a Random Forest classifier with the trained NASNet-A and improved the model through post-processing. Testing on 4,195 images collected in the field showed 55% accuracy when only NASNet-A was applied, and the proposed method, NASNet-A with Random Forest, improved the accuracy to 88%.
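
A minimal sketch of the kind of pipeline this abstract describes, assuming a Keras NASNetLarge backbone as a stand-in for the paper's NASNet-A model and a scikit-learn Random Forest fitted on the extracted features; the number of accumulation classes, image size, and data arrays are placeholders, not the paper's.

```python
# Sketch: NASNet feature extraction + Random Forest classifier (assumed pipeline).
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Pre-trained NASNet backbone used as a fixed feature extractor.
backbone = tf.keras.applications.NASNetLarge(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(331, 331, 3))

def extract_features(images):
    """images: float array of shape (N, 331, 331, 3) with values in [0, 255]."""
    x = tf.keras.applications.nasnet.preprocess_input(images.copy())
    return backbone.predict(x, batch_size=16, verbose=0)

# Placeholder arrays standing in for the scrap-box images and accumulation labels.
X_train = np.random.rand(8, 331, 331, 3) * 255.0
y_train = np.random.randint(0, 4, size=8)       # e.g. 4 accumulation classes (assumed)
X_test = np.random.rand(4, 331, 331, 3) * 255.0
y_test = np.random.randint(0, 4, size=4)

# Random Forest fitted on the deep features, as a post-processing classifier.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(extract_features(X_train), y_train)
print("accuracy:", accuracy_score(y_test, rf.predict(extract_features(X_test))))
```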

Improved Performance of Image Semantic Segmentation using NASNet (NASNet을 이용한 이미지 시맨틱 분할 성능 개선)

  • Kim, Hyoung Seok;Yoo, Kee-Youn;Kim, Lae Hyun
    • Korean Chemical Engineering Research / v.57 no.2 / pp.274-282 / 2019
  • In recent years, big data analysis has expanded to include automatic control through reinforcement learning as well as prediction through modeling. Research on the utilization of image data is actively carried out in various industrial fields such as the chemical, manufacturing, agriculture, and bio industries. In this paper, we applied NASNet, an AutoML reinforcement learning algorithm, to DeepU-Net, a neural network that modifies U-Net, in order to improve image semantic segmentation performance. We used BRATS2015 MRI data for performance verification. Simulation results show that DeepU-Net outperforms the U-Net neural network. To further improve image segmentation performance, the dropouts typically applied to neural networks were removed, and the numbers of kernels and filters obtained through reinforcement learning were selected as hyperparameters of DeepU-Net. The results show that training accuracy is 0.5% and verification accuracy is 0.3% better than DeepU-Net. The results of this study can be applied to various fields such as MRI brain imaging diagnosis, thermal imaging camera abnormality diagnosis, nondestructive inspection diagnosis, chemical leakage monitoring, and forest fire monitoring through CCTV.
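
A minimal sketch of the idea of treating per-level filter counts as searchable hyperparameters in a U-Net-style encoder-decoder with dropout removed, assuming Keras; the actual DeepU-Net architecture and the NASNet reinforcement-learning search are not reproduced here, and the input shape and filter counts are placeholders.

```python
# Sketch: U-Net-style segmentation model whose filter counts are hyperparameters (assumed),
# with no dropout layers, as described in the abstract.
import tensorflow as tf
from tensorflow.keras import layers

def build_unet(input_shape=(128, 128, 1), filters=(32, 64, 128), n_classes=2):
    """filters: per-level filter counts; in the paper these would come from the
    reinforcement-learning search rather than being fixed by hand."""
    inputs = tf.keras.Input(shape=input_shape)
    skips, x = [], inputs
    for f in filters:                                   # encoder: conv blocks + downsampling
        x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
        skips.append(x)
        x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(filters[-1] * 2, 3, padding="same", activation="relu")(x)   # bottleneck
    for f, skip in zip(reversed(filters), reversed(skips)):   # decoder with skip connections
        x = layers.Conv2DTranspose(f, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skip])
        x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
    outputs = layers.Conv2D(n_classes, 1, activation="softmax")(x)   # per-pixel class map
    return tf.keras.Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```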

A Study on the Outlet Blockage Determination Technology of Conveyor System using Deep Learning

  • Jeong, Eui-Han;Suh, Young-Joo;Kim, Dong-Ju
    • Journal of the Korea Society of Computer and Information / v.25 no.5 / pp.11-18 / 2020
  • This study proposes a technique for determining outlet blockage in a conveyor system using deep learning. The proposed method aims to apply the best model to the actual process, so we train various CNN models for the determination of outlet blockage using images collected by CCTV at an industrial site. We used well-known CNN models such as VGGNet, ResNet, DenseNet, and NASNet, and used 18,000 images collected by CCTV for model training and performance evaluation. In experiments with the various models, VGGNet showed the best performance with 99.03% accuracy and 29.05 ms processing time, and we confirmed that VGGNet is suitable for the determination of outlet blockage.
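
A minimal sketch of comparing several ImageNet-pretrained backbones on a binary blockage/normal task while timing single-image inference, assuming Keras; NASNetMobile stands in for the paper's NASNet, and the input size, classifier head, and dummy frame are placeholders.

```python
# Sketch: compare pre-trained CNN backbones for binary blockage classification (assumed setup).
import time
import numpy as np
import tensorflow as tf

BACKBONES = {
    "VGG16": tf.keras.applications.VGG16,
    "ResNet152V2": tf.keras.applications.ResNet152V2,
    "DenseNet201": tf.keras.applications.DenseNet201,
    "NASNetMobile": tf.keras.applications.NASNetMobile,
}

def build_classifier(backbone_fn, input_shape=(224, 224, 3)):
    base = backbone_fn(weights="imagenet", include_top=False,
                       pooling="avg", input_shape=input_shape)
    base.trainable = False                                   # transfer learning: freeze the backbone
    out = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)   # blocked vs. normal
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Fine-tuning on labeled CCTV frames would precede the comparison; here we only time inference.
dummy = np.random.rand(1, 224, 224, 3).astype("float32")     # stand-in for one CCTV frame
for name, fn in BACKBONES.items():
    model = build_classifier(fn)
    model.predict(dummy, verbose=0)                          # warm-up (graph build, weight load)
    start = time.perf_counter()
    model.predict(dummy, verbose=0)
    print(f"{name}: {1000 * (time.perf_counter() - start):.2f} ms per image")
```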

Dog-Species Classification through CycleGAN and Standard Data Augmentation

  • Park, Chan;Moon, Nammee
    • Journal of Information Processing Systems / v.19 no.1 / pp.67-79 / 2023
  • In the image field, data augmentation refers to increasing the amount of data through an editing method such as rotating or cropping a photo. In this study, a generative adversarial network (GAN) image was created using CycleGAN, and various colors of dogs were reflected through data augmentation. In particular, dog data from the Stanford Dogs Dataset and Oxford-IIIT Pet Dataset were used, and 10 breeds of dog, corresponding to 300 images each, were selected. Subsequently, a GAN image was generated using CycleGAN, and four learning groups were established: 2,000 original photos (group I); 2,000 original photos + 1,000 GAN images (group II); 3,000 original photos (group III); and 3,000 original photos + 1,000 GAN images (group IV). The amount of data in each learning group was augmented using existing data augmentation methods such as rotating, cropping, erasing, and distorting. The augmented photo data were used to train the MobileNet_v3_Large, ResNet-152, InceptionResNet_v2, and NASNet_Large frameworks to evaluate the classification accuracy and loss. The top-3 accuracy for each deep neural network model was as follows: MobileNet_v3_Large of 86.4% (group I), 85.4% (group II), 90.4% (group III), and 89.2% (group IV); ResNet-152 of 82.4% (group I), 83.7% (group II), 84.7% (group III), and 84.9% (group IV); InceptionResNet_v2 of 90.7% (group I), 88.4% (group II), 93.3% (group III), and 93.1% (group IV); and NASNet_Large of 85% (group I), 88.1% (group II), 91.8% (group III), and 92% (group IV). The InceptionResNet_v2 model exhibited the highest image classification accuracy, and the NASNet_Large model exhibited the highest increase in the accuracy owing to data augmentation.
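
A minimal sketch of the standard augmentation operations mentioned (flip, rotation, zoom as a stand-in for cropping, translation as a stand-in for distortion; random erasing has no built-in Keras layer and is omitted) together with top-3 accuracy as the evaluation metric, assuming Keras; NASNetMobile is used here instead of the larger models evaluated in the paper, and the CycleGAN image generation itself is not sketched.

```python
# Sketch: standard augmentation pipeline and top-3 accuracy metric (assumed setup).
import tensorflow as tf
from tensorflow.keras import layers

NUM_BREEDS = 10   # ten dog breeds, as in the abstract

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),            # small random rotations
    layers.RandomZoom(0.2),                # approximates random cropping
    layers.RandomTranslation(0.1, 0.1),    # shift-style distortion (assumed stand-in)
])

base = tf.keras.applications.NASNetMobile(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(224, 224, 3))
base.trainable = False                     # transfer learning: freeze pre-trained weights

inputs = tf.keras.Input(shape=(224, 224, 3))
x = augment(inputs)                                        # augmentation active only during training
x = layers.Rescaling(1.0 / 127.5, offset=-1.0)(x)          # scale [0, 255] inputs to [-1, 1] for NASNet
x = base(x)
outputs = layers.Dense(NUM_BREEDS, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=[tf.keras.metrics.SparseTopKCategoricalAccuracy(k=3, name="top3")])
model.summary()
```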

Leveraging Deep Learning and Farmland Fertility Algorithm for Automated Rice Pest Detection and Classification Model

  • Hussain, A.;Balaji Srikaanth, P.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.4 / pp.959-979 / 2024
  • Rice pest identification is essential in modern agriculture for the health of rice crops. As global rice consumption rises, yields and quality must be maintained. Various methodologies were employed to identify pests, encompassing sensor-based technologies, deep learning, and remote sensing models. Visual inspection by professionals and farmers remains essential, but integrating technology such as satellites, IoT-based sensors, and drones enhances efficiency and accuracy. A computer vision system processes images to detect pests automatically. It gives real-time data for proactive and targeted pest management. With this motive in mind, this research provides a novel farmland fertility algorithm with a deep learning-based automated rice pest detection and classification (FFADL-ARPDC) technique. The FFADL-ARPDC approach classifies rice pests from rice plant images. Before processing, FFADL-ARPDC removes noise and enhances contrast using bilateral filtering (BF). Additionally, rice crop images are processed using the NASNetLarge deep learning architecture to extract image features. The FFA is used for hyperparameter tweaking to optimise the performance of NASNetLarge, which aids in enhancing classification performance. Using an Elman recurrent neural network (ERNN), the model accurately categorises 14 types of pests. The FFADL-ARPDC approach is thoroughly evaluated using a benchmark dataset available in the public repository. With an accuracy of 97.58%, the FFADL-ARPDC model exceeds existing pest detection methods.
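
A minimal sketch of the preprocessing and feature path described: bilateral filtering with OpenCV, NASNetLarge feature extraction, and a SimpleRNN as an Elman-style recurrent classifier over the spatial feature cells, assuming standard library defaults; the farmland fertility hyperparameter search is omitted, and the filter and layer sizes are placeholders.

```python
# Sketch: bilateral filtering + NASNetLarge features + Elman-style RNN classifier (assumed pipeline).
import cv2
import numpy as np
import tensorflow as tf

NUM_PEST_CLASSES = 14   # 14 pest types, as in the abstract

def preprocess(bgr_image):
    """Noise removal with an edge-preserving bilateral filter, then NASNet preprocessing."""
    filtered = cv2.bilateralFilter(bgr_image, 9, 75, 75)     # d=9, sigmaColor=75, sigmaSpace=75 (assumed)
    resized = cv2.resize(filtered, (331, 331))               # NASNetLarge input size
    return tf.keras.applications.nasnet.preprocess_input(resized.astype("float32"))

backbone = tf.keras.applications.NASNetLarge(
    weights="imagenet", include_top=False, input_shape=(331, 331, 3))
backbone.trainable = False

inputs = tf.keras.Input(shape=(331, 331, 3))
fmap = backbone(inputs)                                      # spatial feature map from NASNetLarge
seq = tf.keras.layers.Reshape((-1, fmap.shape[-1]))(fmap)    # flatten spatial cells into a sequence
x = tf.keras.layers.SimpleRNN(128)(seq)                      # SimpleRNN ~ Elman-style recurrent unit
outputs = tf.keras.layers.Dense(NUM_PEST_CLASSES, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Example: run one synthetic image through the pipeline.
dummy = (np.random.rand(331, 331, 3) * 255).astype("uint8")
probs = model.predict(preprocess(dummy)[np.newaxis, ...], verbose=0)
print(probs.shape)   # (1, 14)
```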

Classification of Raccoon dog and Raccoon with Transfer Learning and Data Augmentation (전이 학습과 데이터 증강을 이용한 너구리와 라쿤 분류)

  • Park, Dong-Min;Jo, Yeong-Seok;Yeom, Seokwon
    • Journal of the Institute of Convergence Signal Processing / v.24 no.1 / pp.34-41 / 2023
  • In recent years, as the range of human activities has increased, the introduction of alien species has become frequent. Among them, raccoons have been designated as harmful animals since 2020. Raccoons are similar in size and shape to raccoon dogs, so the two species generally need to be distinguished when capturing them. To solve this problem, we use VGG19, ResNet152V2, InceptionV3, InceptionResNet, and NASNet, which are CNN deep learning models specialized for image classification. The parameters used for learning are pre-trained on ImageNet, a large-scale dataset. In order to classify the raccoon and raccoon dog datasets by the outward features of the animals, the images were converted to grayscale and brightness was normalized. Augmentation methods using horizontal flipping, rotation, scaling, and shifting were applied to create sufficient data for transfer learning. The fully connected layer (FCL) consists of one layer for the non-augmented dataset and four layers for the augmented dataset. Comparing the accuracy of the various augmented datasets, performance increased as more augmentation methods were applied.
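
A minimal sketch of the described pipeline: grayscale conversion with brightness normalization, flip/rotation/scale/shift augmentation, and a frozen pre-trained backbone with a four-layer fully connected head, assuming Keras; ResNet152V2 stands in for the several backbones compared in the paper, per-image standardization is an assumed form of the brightness normalization, and the layer sizes are placeholders.

```python
# Sketch: grayscale + brightness-normalized inputs, augmentation, and a 4-layer FC head (assumed setup).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def gray_normalize(rgb_batch):
    """rgb_batch: float32 array (N, 224, 224, 3). Grayscale + per-image brightness normalization."""
    gray = tf.image.rgb_to_grayscale(rgb_batch)              # keep outward-shape information only
    gray = tf.image.per_image_standardization(gray)          # brightness normalization (assumed form)
    return tf.repeat(gray, 3, axis=-1).numpy()               # back to 3 channels for the backbone

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),                         # left-right inversion
    layers.RandomRotation(0.1),                              # rotation
    layers.RandomZoom(0.2),                                  # scaling
    layers.RandomTranslation(0.1, 0.1),                      # shift
])

base = tf.keras.applications.ResNet152V2(
    weights="imagenet", include_top=False, pooling="avg", input_shape=(224, 224, 3))
base.trainable = False                                       # transfer learning: freeze pre-trained weights

inputs = tf.keras.Input(shape=(224, 224, 3))
x = augment(inputs)                                          # augmentation active only during training
x = base(x)
for units in (512, 256, 128, 64):                            # four FC layers for the augmented dataset
    x = layers.Dense(units, activation="relu")(x)
outputs = layers.Dense(2, activation="softmax")(x)           # raccoon dog vs. raccoon

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Example: preprocess a placeholder batch before feeding the model.
batch = np.random.rand(4, 224, 224, 3).astype("float32")
print(model.predict(gray_normalize(batch), verbose=0).shape)   # (4, 2)
```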

Performance Comparison of Commercial and Customized CNN for Detection in Nodular Lung Cancer (결절성 폐암 검출을 위한 상용 및 맞춤형 CNN의 성능 비교)

  • Park, Sung-Wook;Kim, Seunghyun;Lim, Su-Chang;Kim, Do-Yeon
    • Journal of Korea Multimedia Society / v.23 no.6 / pp.729-737 / 2020
  • Screening with low-dose spiral computed tomography (LDCT) has been shown to reduce lung cancer mortality by about 20% when compared to standard chest radiography. One of the problems arising from screening programs is that large amounts of CT image data must be interpreted by radiologists. To solve this problem, automated detection of pulmonary nodules is necessary; however, this is a challenging task because of the high number of false positive results. Here we demonstrate detection of pulmonary nodules using six off-the-shelf convolutional neural network (CNN) models after modification of the input/output layers and end-to-end training based on public databases for comparative evaluation. We used the well-known CNN models LeNet-5, VGG-16, GoogLeNet Inception V3, ResNet-152, DenseNet-201, and NASNet. Most of the off-the-shelf CNN models provided superior results to those obtained using customized CNN models. It is more desirable to modify a proven off-the-shelf network model than to customize a network model to detect pulmonary nodules.
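
A minimal sketch of what modifying the input/output layers of an off-the-shelf CNN can look like for this task, assuming Keras: a single-channel CT patch input (so the backbone is built without ImageNet weights) and a binary nodule/non-nodule output head; the patch size and the choice of DenseNet-201 are placeholders.

```python
# Sketch: off-the-shelf CNN with modified input/output layers for nodule detection (assumed setup).
import tensorflow as tf

PATCH_SHAPE = (64, 64, 1)    # single-channel CT patch size (placeholder)

# Modified input layer: single-channel CT patches instead of 3-channel ImageNet images,
# so the backbone is built with randomly initialized weights.
base = tf.keras.applications.DenseNet201(
    weights=None, include_top=False, pooling="avg", input_shape=PATCH_SHAPE)

# Modified output layer: binary nodule / non-nodule decision instead of 1,000 ImageNet classes.
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)
model = tf.keras.Model(base.input, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
model.summary()
# End-to-end training on labeled CT patches would follow, e.g.:
# model.fit(train_ds, validation_data=val_ds, epochs=...)
```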

Development of a transfer learning based detection system for burr image of injection molded products (전이학습 기반 사출 성형품 burr 이미지 검출 시스템 개발)

  • Yang, Dong-Cheol;Kim, Jong-Sun
    • Design & Manufacturing / v.15 no.3 / pp.1-6 / 2021
  • An artificial neural network model based on a deep learning algorithm is known to be more accurate than humans in image classification, but there is still a limit in the sense that a large amount of training data, so-called big data, is needed. Therefore, various techniques are being studied to build an artificial neural network model with high precision even with small data, and the transfer learning technique is assessed as an excellent alternative. Accordingly, the purpose of this study is to develop an artificial neural network system that can classify burr images of light guide plate products with 99% accuracy using the transfer learning technique. Specifically, 150 images each of normal light guide plate products and products with burrs were taken at various angles, heights, and positions. Then, after image preprocessing such as thresholding and image augmentation, a total of 3,300 images were generated. 2,970 images were separated for training, while the remaining 330 images were separated for model accuracy testing. For the transfer learning, a base model was developed using the NASNet-Large model pre-trained on 14 million ImageNet images. The final model accuracy test confirmed 99% image classification accuracy on both training and test images. Consequently, based on the results of this study, it is expected to help develop an integrated AI production management system by training on not only burr images but also various other defect images.
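
A minimal sketch of the described flow: thresholding-style preprocessing, an ImageNet-pretrained NASNet-Large base used as a frozen feature extractor, and a roughly 90/10 train/test split, assuming Keras and OpenCV; the threshold value, image counts, and placeholder data are assumptions, not the paper's settings.

```python
# Sketch: thresholding preprocessing + NASNet-Large transfer learning for burr detection (assumed setup).
import cv2
import numpy as np
import tensorflow as tf

def threshold_preprocess(bgr_image):
    """Binarize the light-guide-plate image so burr regions stand out (threshold value assumed)."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    rgb = cv2.cvtColor(binary, cv2.COLOR_GRAY2RGB)            # NASNet expects 3 channels
    resized = cv2.resize(rgb, (331, 331))                     # NASNet-Large input size
    return tf.keras.applications.nasnet.preprocess_input(resized.astype("float32"))

# Base model: NASNet-Large pre-trained on ImageNet, used as a frozen feature extractor.
base = tf.keras.applications.NASNetLarge(
    weights="imagenet", include_top=False, pooling="avg", input_shape=(331, 331, 3))
base.trainable = False

outputs = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)   # normal vs. burr
model = tf.keras.Model(base.input, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Example: roughly a 90/10 train/test split of preprocessed placeholder images,
# mirroring the abstract's 2,970/330 split.
images = np.stack([threshold_preprocess((np.random.rand(480, 640, 3) * 255).astype("uint8"))
                   for _ in range(10)])
labels = np.random.randint(0, 2, size=10)
split = int(0.9 * len(images))
model.fit(images[:split], labels[:split], epochs=1, batch_size=2,
          validation_data=(images[split:], labels[split:]), verbose=0)
```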