• Title/Summary/Keyword: VGG16

Search Results: 126

Effectiveness of the Detection of Pulmonary Emphysema using VGGNet with Low-dose Chest Computed Tomography Images (저선량 흉부 CT를 이용한 VGGNet 폐기종 검출 유용성 평가)

  • Kim, Doo-Bin;Park, Young-Joon;Hong, Joo-Wan
    • Journal of the Korean Society of Radiology / v.16 no.4 / pp.411-417 / 2022
  • This study aimed to train and evaluate the effectiveness of VGGNet in the detection of pulmonary emphysema using low-dose chest computed tomography images. In total, 8000 images with normal findings and 3189 images showing pulmonary emphysema were used. For model training, 60%, 24%, and 16% of the normal and emphysema data were randomly assigned to training, validation, and test datasets, respectively. VGG16 and VGG19 were used for training, and the accuracy, loss, confusion matrix, precision, recall, specificity, and F1-score were evaluated. On the low-dose chest CT test dataset, the accuracy and loss for pulmonary emphysema detection were 92.35% and 0.21 for VGG16 and 95.88% and 0.09 for VGG19, respectively. The precision, recall, and specificity were 91.60%, 98.36%, and 77.08% for VGG16 and 96.55%, 97.39%, and 92.72% for VGG19, respectively. The F1-scores were 94.86% and 96.97% for VGG16 and VGG19, respectively. Based on these evaluation metrics, VGG19 was judged to be more useful for detecting pulmonary emphysema. The findings of this study can serve as basic data for research on pulmonary emphysema detection models using VGGNet and artificial neural networks.
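
For reference, the following is a minimal sketch, not the authors' code, of the kind of VGG16 transfer-learning classifier this abstract describes for normal vs. emphysema CT slices. It assumes a Keras/TensorFlow environment and a hypothetical data/train and data/val directory layout; the paper's preprocessing, hyperparameters, and exact 60/24/16 split are not reproduced, and swapping VGG16 for VGG19 gives the second model.

```python
# Sketch only: a VGG16 transfer-learning classifier for normal vs. emphysema slices.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # train only the new classification head at first

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # normal vs. emphysema
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])

# Hypothetical directory layout: data/{train,val}/{normal,emphysema}/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=(224, 224), batch_size=32, label_mode="binary")
model.fit(train_ds, validation_data=val_ds, epochs=10)
```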

Korean Food Information Provision APP for Foreigners Using VGG16 (VGG16을 활용한 외국인 전용 한식정보 제공 앱)

  • Yoon, Su-jin;Oh, Se-yeong;Woo, Young Woon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.404-406 / 2021
  • In this paper, we propose a mobile application that classifies Korean food images and provides information related to Korean food. The application consists of a Flask server, a MySQL database, and Python deep learning modules. Using the VGG16 model, images of 150 Korean foods are classified. With an internet connection, anyone can easily obtain information about Korean food anytime, anywhere from a single photo.

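A minimal sketch, assuming a Flask plus Keras/TensorFlow setup, of the kind of classification-and-lookup endpoint the abstract describes; the weights file, class list, and the get_food_info() stand-in for the MySQL lookup are hypothetical, not the authors' implementation.

```python
# Sketch only: Flask endpoint that classifies an uploaded photo with VGG16 and returns food info.
import numpy as np
from flask import Flask, request, jsonify
from tensorflow.keras.models import load_model
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.utils import load_img, img_to_array

app = Flask(__name__)
model = load_model("korean_food_vgg16.h5")                 # hypothetical trained weights
class_names = open("food_classes.txt").read().split("\n")  # hypothetical list of 150 foods

def get_food_info(name):
    # Stand-in for the MySQL lookup described in the abstract.
    return {"name": name, "description": "Food information looked up from the database."}

@app.route("/classify", methods=["POST"])
def classify():
    request.files["photo"].save("upload.jpg")
    img = load_img("upload.jpg", target_size=(224, 224))
    x = preprocess_input(img_to_array(img)[None])
    probs = model.predict(x, verbose=0)[0]
    food = class_names[int(np.argmax(probs))]
    return jsonify({"food": food, "info": get_food_info(food)})

if __name__ == "__main__":
    app.run()
```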

Using CNN-VGG 16 to detect the tennis motion tracking by information entropy and unascertained measurement theory

  • Zhong, Yongfeng;Liang, Xiaojun
    • Advances in nano research / v.12 no.2 / pp.223-239 / 2022
  • Object detection aims to find objects with particular properties or representations and to predict details about them, including position, size, and rotation angle, in the current picture; it is an important subject in computer vision. Although vision-based object tracking strategies for analyzing competitive sports videos have been developed, it is still difficult to accurately identify and localize a fast, small ball. In this study, a deep learning (DL) network was developed to address these obstacles in tennis motion tracking from a complex perspective and to understand the performance of athletes. This research used CNN-VGG 16 to track the tennis ball in broadcast videos, where the ball appears distorted, small, and often invisible, not only identifying the ball in a single frame but also learning patterns from consecutive frames; VGG 16 takes images of 640 × 360 size to locate the ball and achieves high accuracy on public videos, with accuracies of 99.6%, 96.63%, and 99.5%, respectively. To avoid overfitting, 9 additional videos and a subset of the previous dataset were partly labelled for 10-fold cross-validation. The results show that CNN-VGG 16 outperforms the standard approach by a wide margin and provides excellent ball tracking performance.
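
The exact network used in this paper is not described in the abstract; the following is only a rough sketch, under stated assumptions, of how consecutive 640 × 360 frames could be fed through VGG16 convolutional features to regress a normalized ball position with Keras. The frame count, head, loss, and labelling scheme are placeholders.

```python
# Sketch only: consecutive frames -> VGG16 features -> normalized (x, y) ball position.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

FRAMES = 3  # consecutive frames stacked along the channel axis (assumption)
inputs = tf.keras.Input(shape=(360, 640, 3 * FRAMES))
# Project the stacked frames back to 3 channels so ImageNet VGG16 weights can be reused.
x = layers.Conv2D(3, 1, padding="same")(inputs)
x = VGG16(weights="imagenet", include_top=False)(x)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(256, activation="relu")(x)
outputs = layers.Dense(2, activation="sigmoid")(x)  # normalized (x, y) of the ball
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
model.summary()
```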

An Explainable Deep Learning Algorithm based on Video Classification (비디오 분류에 기반 해석가능한 딥러닝 알고리즘)

  • Jin Zewei;Inwhee Joe
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.449-452 / 2023
  • The rapid development of the Internet has led to a significant increase in multimedia content on social networks, so better analyzing and improving video classification models has become an important task. Deep learning models typically behave as "black boxes" and therefore require explainable analysis. This article uses two classification models, ConvLSTM and VGG16+LSTM, and combines them with the explainability method LRP (Layer-wise Relevance Propagation) to generate visualized explanations. Based on the experimental results, the accuracy of the classification models is 75.94% for ConvLSTM and 92.50% for VGG16+LSTM. We conducted an explainability analysis of the VGG16+LSTM model combined with the LRP method and found that the VGG16+LSTM classifier tends to use frames from the latter half of the video, and the last frame in particular, as the basis for classification.
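
A minimal sketch, not the authors' code, of a VGG16+LSTM video classifier of the kind evaluated above: VGG16 extracts per-frame features, an LSTM models the frame sequence, and a softmax head predicts the video class. Frame count, resolution, and class count are placeholder values, and the LRP analysis itself (typically done with a separate explainability toolkit) is not shown.

```python
# Sketch only: VGG16 per-frame features + LSTM over the frame sequence.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_FRAMES, H, W, NUM_CLASSES = 20, 224, 224, 10  # placeholder values

cnn = VGG16(weights="imagenet", include_top=False, pooling="avg",
            input_shape=(H, W, 3))
cnn.trainable = False  # use VGG16 purely as a frozen feature extractor

inputs = tf.keras.Input(shape=(NUM_FRAMES, H, W, 3))
x = layers.TimeDistributed(cnn)(inputs)        # (batch, frames, 512) frame features
x = layers.LSTM(256)(x)                        # temporal modelling over the frames
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```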

Development of Deep Recognition of Similarity in Show Garden Design Based on Deep Learning (딥러닝을 활용한 전시 정원 디자인 유사성 인지 모형 연구)

  • Cho, Woo-Yun;Kwon, Jin-Wook
    • Journal of the Korean Institute of Landscape Architecture / v.52 no.2 / pp.96-109 / 2024
  • The purpose of this study is to propose a method for evaluating the similarity of show gardens using deep learning models, specifically VGG-16 and ResNet50. A model for judging the similarity of show gardens based on the VGG-16 and ResNet50 models was developed and referred to as DRG (Deep Recognition of similarity in show Garden design). An algorithm utilizing GAP (global average pooling) and the Pearson correlation coefficient was employed to construct the model, and the accuracy of similarity was analyzed by comparing the total number of similar images derived at the 1st (Top1), 3rd (Top3), and 5th (Top5) ranks with the original images. The image data used for the DRG model consisted of 278 works from Le Festival International des Jardins de Chaumont-sur-Loire, 27 works from the Seoul International Garden Show, and 17 works from the Korea Garden Show. Image analysis was conducted with the DRG model for both the same group and different groups, resulting in guidelines for assessing show garden similarity. First, for overall image similarity analysis, applying data augmentation techniques to the ResNet50 model was most suitable. Second, for image analysis focusing on internal structure and outer form, it was effective to apply a filter of a certain size (16 cm × 16 cm) to generate images emphasizing form and then compare similarity using the VGG-16 model. An image size of 448 × 448 pixels and the original image in full color were suggested as the optimal settings. Based on these findings, a quantitative method for assessing show gardens is proposed, and it is expected to contribute to the continuous development of garden culture through interdisciplinary research.
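
A minimal sketch, not the DRG implementation, of the similarity idea described above, assuming that GAP refers to global average pooling: VGG-16 convolutional features are pooled into a single vector per garden image and candidates are ranked by Pearson correlation. The 448 × 448 input size follows the abstract, while the ResNet50 branch and data augmentation are omitted.

```python
# Sketch only: VGG-16 + GAP features ranked by Pearson correlation.
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.utils import load_img, img_to_array

model = VGG16(weights="imagenet", include_top=False, pooling="avg")  # GAP output (512-d)

def gap_feature(path, size=(448, 448)):  # 448 x 448 input, as suggested in the abstract
    img = img_to_array(load_img(path, target_size=size))
    return model.predict(preprocess_input(img[None]), verbose=0)[0]

def most_similar(query_path, candidate_paths, top_k=5):
    q = gap_feature(query_path)
    scores = [(np.corrcoef(q, gap_feature(p))[0, 1], p) for p in candidate_paths]
    return sorted(scores, reverse=True)[:top_k]  # Top-1 / Top-3 / Top-5 ranking
```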

Optimized Deep Learning Techniques for Disease Detection in Rice Crop using Merged Datasets

  • Muhammad Junaid;Sohail Jabbar;Muhammad Munwar Iqbal;Saqib Majeed;Mubarak Albathan;Qaisar Abbas;Ayyaz Hussain
    • International Journal of Computer Science & Network Security / v.23 no.3 / pp.57-66 / 2023
  • Rice is an important food crop for most of the world's population, and it is widely cultivated in Pakistan. It not only fulfills food demand in the country but also contributes to the wealth of Pakistan. However, its production can be affected by climate change, and irregularities in the climate can cause several diseases such as brown spot, bacterial blight, tungro, and leaf blast. Detection of these diseases is necessary for suitable treatment. These diseases can be effectively detected using deep learning methods such as convolutional neural networks. Because the dataset is small, transfer learning models such as VGG16 can detect the diseases effectively. In this paper, the VGG16, Inception, and Xception models are used; they achieved validation accuracies of 99.22%, 88.48%, and 93.92%, respectively, when the epoch value was set to 10. The models were also evaluated using accuracy, recall, precision, and the confusion matrix.
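
A minimal sketch, not the authors' code, of comparing VGG16, InceptionV3, and Xception by transfer learning on a merged rice-disease dataset in Keras; the directory names, image size, and frozen-backbone setup are assumptions, and each model family's own preprocess_input is omitted for brevity.

```python
# Sketch only: comparing three ImageNet-pretrained backbones on a merged rice-disease dataset.
import tensorflow as tf
from tensorflow.keras import applications, layers, models

train_ds = tf.keras.utils.image_dataset_from_directory(
    "rice_merged/train", image_size=(224, 224), batch_size=32)  # hypothetical path
val_ds = tf.keras.utils.image_dataset_from_directory(
    "rice_merged/val", image_size=(224, 224), batch_size=32)
num_classes = len(train_ds.class_names)

def build(base_fn):
    base = base_fn(weights="imagenet", include_top=False, pooling="avg",
                   input_shape=(224, 224, 3))
    base.trainable = False  # transfer learning: train only the classification head
    return models.Sequential([base, layers.Dense(num_classes, activation="softmax")])

for name, fn in [("VGG16", applications.VGG16),
                 ("InceptionV3", applications.InceptionV3),
                 ("Xception", applications.Xception)]:
    model = build(fn)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_ds, validation_data=val_ds, epochs=10, verbose=0)  # 10 epochs as in the abstract
    print(name, "validation (loss, accuracy):", model.evaluate(val_ds, verbose=0))
```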

Detecting Similar Designs Using Deep Learning-based Image Feature Extracting Model (딥러닝 기반 이미지 특징 추출 모델을 이용한 유사 디자인 검출에 대한 연구)

  • Lee, Byoung Woo;Lee, Woo Chang;Chae, Seung Wan;Kim, Dong Hyun;Lee, Choong Kwon
    • Smart Media Journal / v.9 no.4 / pp.162-169 / 2020
  • Design is a key factor that determines the competitiveness of products in the textile and fashion industry, so it is very important to measure the similarity of a proposed design in order to prevent unauthorized copying and to confirm its originality. In this study, a deep learning technique was used to quantify features from images of textile designs, and similarity was measured using the Spearman correlation coefficient. To verify that similar samples were actually detected, 300 images were randomly rotated or color-changed, and the Top-3 and Top-5 results in order of similarity value were examined to see whether the rotated or color-changed samples were detected. As a result, the VGG-16 model recorded significantly higher performance than AlexNet. For rotated images, the VGG-16 model performed best, detecting the corresponding samples in 64% of cases within the Top-3 and 73.67% within the Top-5; for color-changed images, it performed best with 86.33% in the Top-3 and 90% in the Top-5.
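
A minimal sketch, not the authors' implementation, of measuring design similarity with VGG-16 features and the Spearman rank correlation mentioned above; the fc2 feature layer, image size, and Top-k ranking are assumptions, and the AlexNet baseline and the rotation/color-change test protocol are not reproduced.

```python
# Sketch only: VGG-16 fc2 descriptors compared with the Spearman rank correlation.
from scipy.stats import spearmanr
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.models import Model
from tensorflow.keras.utils import load_img, img_to_array

vgg = VGG16(weights="imagenet")                              # full model with FC layers
extractor = Model(vgg.input, vgg.get_layer("fc2").output)    # 4096-d design descriptor

def descriptor(path):
    x = img_to_array(load_img(path, target_size=(224, 224)))
    return extractor.predict(preprocess_input(x[None]), verbose=0)[0]

def top_k_similar(query_path, catalog_paths, k=5):
    q = descriptor(query_path)
    scored = []
    for path in catalog_paths:
        rho, _ = spearmanr(q, descriptor(path))  # rank correlation of the two descriptors
        scored.append((rho, path))
    return sorted(scored, reverse=True)[:k]      # Top-3 / Top-5 as reported above
```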

An Efficient Disease Inspection Model for Untrained Crops Using VGG16 (VGG16을 활용한 미학습 농작물의 효율적인 질병 진단 모델)

  • Jeong, Seok Bong;Yoon, Hyoup-Sang
    • Journal of the Korea Society for Simulation / v.29 no.4 / pp.1-7 / 2020
  • Early detection and classification of crop diseases play a significant role in helping farmers reduce disease spread and increase agricultural productivity. Recently, many researchers have used deep learning techniques such as convolutional neural network (CNN) classifiers for crop disease inspection with datasets of crop leaf images (e.g., the PlantVillage dataset). These studies report over 90% classification accuracy for crop diseases, but they can detect only the pre-trained diseases. This paper proposes an efficient disease inspection CNN model for new crops not used in the pre-trained model. First, we present a benchmark crop disease classifier (CDC) for the crops in the PlantVillage dataset using VGG16. Then we build a modified crop disease classifier (mCDC) to inspect diseases of untrained crops. The performance evaluation results show that the proposed model outperforms the benchmark classifier.
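
A minimal sketch, not the paper's code, of a benchmark crop disease classifier (CDC) built on VGG16 and fine-tuned on PlantVillage leaf images; the class count, head layers, and learning rate are assumptions, and the modified classifier for untrained crops (mCDC) is not reproduced here.

```python
# Sketch only: benchmark crop disease classifier (CDC) = VGG16 fine-tuned on PlantVillage.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = layer.name.startswith("block5")  # fine-tune only the last conv block

num_classes = 38  # number of crop/disease classes (depends on the PlantVillage version used)
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(512, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```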

Performance Improvement of Optical Character Recognition for Parts Book Using Pre-processing of Modified VGG Model (변형 VGG 모델의 전처리를 이용한 부품도면 문자 인식 성능 개선)

  • Shin, Hee-Ran;Lee, Sang-Hyeop;Park, Jang-Sik;Song, Jong-Kwan
    • The Journal of the Korea institute of electronic communication sciences / v.14 no.2 / pp.433-438 / 2019
  • This paper proposes a method to improve the performance of deep learning based recognition of numbers and characters in parts drawings through image preprocessing. The proposed character recognition system consists of image preprocessing and a 7-layer deep learning model. Mathematical morphological filtering is used as preprocessing to remove the lines and shapes that cause false recognition of numbers and characters in parts drawings. Furthermore, the deep learning model used is a 7-layer model rather than the VGG-16 model. With the proposed OCR method, the recognition rate for characters is 92.57% and the precision is 92.82%.
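
A minimal sketch, not the authors' pipeline, of morphological preprocessing that suppresses long horizontal and vertical drawing lines before character recognition, using OpenCV; the file names, thresholding parameters, and kernel sizes are placeholders.

```python
# Sketch only: morphological line removal before OCR on a parts drawing.
import cv2

img = cv2.imread("parts_drawing.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input file
binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY_INV, 15, 10)

# Opening with long rectangular kernels keeps only long horizontal/vertical strokes (the drawing lines).
h_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (40, 1))
v_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 40))
h_lines = cv2.morphologyEx(binary, cv2.MORPH_OPEN, h_kernel)
v_lines = cv2.morphologyEx(binary, cv2.MORPH_OPEN, v_kernel)

# Subtract the detected lines, leaving the shorter character strokes for the recognizer.
cleaned = cv2.subtract(binary, cv2.bitwise_or(h_lines, v_lines))
cv2.imwrite("parts_drawing_clean.png", cv2.bitwise_not(cleaned))
# The cleaned image would then be passed to the 7-layer recognition network.
```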

Detecting Vehicles That Are Illegally Driving on Road Shoulders Using Faster R-CNN (Faster R-CNN을 이용한 갓길 차로 위반 차량 검출)

  • Go, MyungJin;Park, Minju;Yeo, Jiho
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.21 no.1 / pp.105-122 / 2022
  • According to statistics on fatal crashes that have occurred on expressways over the last 5 years, the number of people who died on road shoulders has been about 3 times as high as for other expressway locations. This suggests that crashes on road shoulders tend to be fatal and that it is important to prevent traffic crashes by cracking down on vehicles intruding into the shoulder lane. Therefore, this study proposes a method to detect vehicles that violate the shoulder lane using Faster R-CNN. Vehicles were detected with Faster R-CNN, and an additional reading module was configured to determine whether a shoulder violation had occurred. For experiments and evaluation, GTAV, a simulation game that can reproduce situations similar to the real world, was used. 1,800 training images and 800 evaluation images were processed and generated, and the performance for varying threshold values was measured with ZFNet and VGG16. As a result, the detection rate of ZFNet was 99.2% at a threshold of 0.8 and that of VGG16 was 93.9% at a threshold of 0.7, and the average detection speed was 0.0468 seconds for ZFNet and 0.16 seconds for VGG16, so the detection rate of ZFNet was about 5 percentage points higher and its speed about 3.4 times faster. These results show that even with a relatively uncomplicated network, vehicles violating the shoulder lane can be detected at high speed without pre-processing the input image, and they suggest that this algorithm could be used to detect violations of designated lanes if sufficient training datasets based on actual video data are obtained.
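
A minimal sketch, not the study's training code, of assembling a Faster R-CNN detector with a VGG16 backbone in torchvision (version 0.13 or later is assumed); the class count, anchor sizes, and the shoulder-violation reading module are placeholders.

```python
# Sketch only: Faster R-CNN with a VGG16 feature backbone in torchvision.
import torch
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

backbone = torchvision.models.vgg16(weights="IMAGENET1K_V1").features
backbone.out_channels = 512  # VGG16 conv feature maps have 512 channels

anchor_gen = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
                             aspect_ratios=((0.5, 1.0, 2.0),))
roi_pool = torchvision.ops.MultiScaleRoIAlign(featmap_names=["0"],
                                              output_size=7, sampling_ratio=2)

model = FasterRCNN(backbone, num_classes=2,  # background + vehicle (placeholder)
                   rpn_anchor_generator=anchor_gen,
                   box_roi_pool=roi_pool)
model.eval()

with torch.no_grad():
    detections = model([torch.rand(3, 360, 640)])  # one dummy frame
print(detections[0]["boxes"].shape)
# A separate reading module (not shown) would then check whether each detected
# vehicle's box falls inside the shoulder-lane region of the frame.
```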