• Title/Summary/Keyword: CNN deep learning methods

Search results: 265

Diagnosis and prediction of periodontally compromised teeth using a deep learning-based convolutional neural network algorithm

  • Lee, Jae-Hong;Kim, Do-hyung;Jeong, Seong-Nyum;Choi, Seong-Ho
    • Journal of Periodontal and Implant Science / v.48 no.2 / pp.114-123 / 2018
  • Purpose: The aim of the current study was to develop a computer-assisted detection system based on a deep convolutional neural network (CNN) algorithm and to evaluate the potential usefulness and accuracy of this system for the diagnosis and prediction of periodontally compromised teeth (PCT). Methods: Combining pretrained deep CNN architecture and a self-trained network, periapical radiographic images were used to determine the optimal CNN algorithm and weights. The diagnostic and predictive accuracy, sensitivity, specificity, positive predictive value, negative predictive value, receiver operating characteristic (ROC) curve, area under the ROC curve, confusion matrix, and 95% confidence intervals (CIs) were calculated using our deep CNN algorithm, based on a Keras framework in Python. Results: The periapical radiographic dataset was split into training (n=1,044), validation (n=348), and test (n=348) datasets. With the deep learning algorithm, the diagnostic accuracy for PCT was 81.0% for premolars and 76.7% for molars. Using 64 premolars and 64 molars that were clinically diagnosed as severe PCT, the accuracy of predicting extraction was 82.8% (95% CI, 70.1%-91.2%) for premolars and 73.4% (95% CI, 59.9%-84.0%) for molars. Conclusions: We demonstrated that the deep CNN algorithm was useful for assessing the diagnosis and predictability of PCT. Therefore, with further optimization of the PCT dataset and improvements in the algorithm, a computer-aided detection system can be expected to become an effective and efficient method of diagnosing and predicting PCT.
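
The evaluation statistics reported above (accuracy, sensitivity, specificity, PPV, NPV, and 95% CIs) can be sketched in plain Python; the confusion-matrix counts below are hypothetical, not taken from the study:

```python
import math

def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic metrics from confusion-matrix counts."""
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),  # positive predictive value
        "npv": tn / (tn + fn),  # negative predictive value
    }

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - margin, center + margin

# Hypothetical counts for one tooth type
m = diagnostic_metrics(tp=50, fp=10, fn=10, tn=30)
lo, hi = wilson_ci(successes=80, n=100)
```

The Wilson interval is one common way to obtain the kind of 95% CI quoted in the abstract; the study does not state which interval construction was used.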

Binary Classification of Hypertensive Retinopathy Using Deep Dense CNN Learning

  • Mostafa E.A., Ibrahim;Qaisar, Abbas
    • International Journal of Computer Science & Network Security / v.22 no.12 / pp.98-106 / 2022
  • A condition of the retina known as hypertensive retinopathy (HR) is connected to high blood pressure. The severity and persistence of hypertension are directly correlated with the incidence of HR. To avoid blindness, it is essential to recognize and assess HR as early as possible. Few computer-aided systems are currently available that can diagnose HR. Moreover, existing systems have focused on extracting features from a variety of HR-related retinal lesions and classifying them with conventional machine-learning algorithms, so significant and complicated image processing is required even for limited applications. As seen in recent similar systems, classification accuracy is likewise lacking. To address these issues, a new computer-aided HR-diagnosis system employing Deep Dense CNN Learning (DD-CNN) was developed to identify HR early. The system uses a previously trained convolutional neural network as a feature extractor. A statistical investigation of more than 1,400 retinography images was undertaken to assess the accuracy of the implemented system using several performance metrics: specificity (SP), sensitivity (SE), area under the receiver operating characteristic curve (AUC), and accuracy (ACC). On average, we achieved an SE of 97%, ACC of 98%, SP of 99%, and AUC of 0.98. These results indicate that the proposed DD-CNN classifier is effective for diagnosing hypertensive retinopathy.
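
The AUC reported above can be computed directly from classifier scores as the probability that a randomly chosen positive case outranks a randomly chosen negative one; a minimal sketch with made-up scores, not the study's data:

```python
def auc_from_scores(pos_scores, neg_scores):
    """AUC as the Mann-Whitney U statistic: the fraction of
    positive/negative pairs ranked correctly (ties count half)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical HR vs. normal scores from a classifier
auc = auc_from_scores([0.9, 0.8, 0.7], [0.4, 0.3, 0.75])
```

This pairwise definition is equivalent to integrating the ROC curve, which is how the AUC metric named in the abstract is usually reported.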

Performance Analysis of Deep Learning-based Image Super Resolution Methods (딥 러닝 기반의 초해상도 이미지 복원 기법 성능 분석)

  • Lee, Hyunjae;Shin, Hyunkwang;Choi, Gyu Sang;Jin, Seong-Il
    • IEMEK Journal of Embedded Systems and Applications / v.15 no.2 / pp.61-70 / 2020
  • Convolutional Neural Networks (CNNs) have been used extensively in recent times to solve image classification and segmentation problems. However, the use of CNNs for image super-resolution remains largely unexplored. Filter interpolation and prediction-model methods are the most commonly used algorithms in super-resolution implementations. The major limitation of these methods is that images become heavily blurred and much of the edge information is lost. In this paper, we analyze CNN-based super-resolution and the wavelet transform super-resolution method, and compare their performance according to the number of layers and the training data of the CNN.
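
Super-resolution results like those compared above are conventionally scored with PSNR against a ground-truth image; a minimal sketch (the tiny arrays here are toy data, not from the paper):

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-size grayscale images
    given as nested lists; higher means a closer reconstruction."""
    diffs = [(a - b) ** 2
             for row_a, row_b in zip(img_a, img_b)
             for a, b in zip(row_a, row_b)]
    mse = sum(diffs) / len(diffs)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Toy 2x2 images differing by 1 at every pixel -> MSE = 1
value = psnr([[10, 20], [30, 40]], [[11, 21], [31, 41]])
```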

Methods of Classification and Character Recognition for Table Items through Deep Learning (딥러닝을 통한 문서 내 표 항목 분류 및 인식 방법)

  • Lee, Dong-Seok;Kwon, Soon-Kak
    • Journal of Korea Multimedia Society / v.24 no.5 / pp.651-658 / 2021
  • In this paper, we propose methods for classifying and recognizing characters in table items through deep learning. First, table areas are detected in a document image through a CNN. The table areas are then separated by delimiters such as vertical lines. The text in the document is recognized through a neural network combining a CNN and an RNN. To correct errors in character recognition, multiple candidate results are provided for sentences with low recognition confidence.
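
The error-correction step described above, offering multiple candidates when recognition confidence is low, can be sketched as follows; the probabilities, threshold, and candidate count are illustrative, not values from the paper:

```python
def recognition_candidates(char_probs, top_k=3, threshold=0.9):
    """Return the single best character when the recognizer is confident,
    otherwise the top-k candidates for a later correction step.
    char_probs maps candidate characters to softmax probabilities."""
    ranked = sorted(char_probs.items(), key=lambda kv: kv[1], reverse=True)
    best_char, best_prob = ranked[0]
    if best_prob >= threshold:
        return [best_char]
    return [c for c, _ in ranked[:top_k]]

# Confident recognition -> one result; ambiguous -> several candidates
confident = recognition_candidates({"5": 0.97, "S": 0.02, "6": 0.01})
ambiguous = recognition_candidates({"5": 0.55, "S": 0.30, "6": 0.10, "8": 0.05})
```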

An Effectiveness Verification for Evaluating the Amount of WTCI Tongue Coating Using Deep Learning (딥러닝을 이용한 WTCI 설태량 평가를 위한 유효성 검증)

  • Lee, Woo-Beom
    • Journal of the Institute of Convergence Signal Processing / v.20 no.4 / pp.226-231 / 2019
  • The WTCI is an important criterion for evaluating the amount of a patient's tongue coating in tongue diagnosis. However, previous WTCI evaluation methods mostly measure the ratio of the extracted tongue coating region to the tongue body region quantitatively, which makes the measurement non-objective, since it depends on the exposure conditions of the tongue image and on the recognition performance for the tongue coating. Therefore, this paper proposes a deep learning-based WTCI for classifying the amount of tongue coating, applying AI deep learning with big data to WTCI evaluation. To verify the effectiveness of deep learning for tongue coating evaluation, we classify the amount of tongue coating into three classes (no coating, some coating, intense coating) using a CNN model. In tests on the tongue coating sample images built for training and validation of the CNN model, the proposed method showed an accuracy of 96.7% in classifying the amount of tongue coating.
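
The ratio-based measurement criticized above, and its mapping to the paper's three classes, can be sketched as follows; the masks and class thresholds are hypothetical, chosen only for illustration:

```python
def coating_ratio(coating_mask, body_mask):
    """Ratio of tongue-coating pixels to tongue-body pixels, the quantity
    traditional WTCI-style methods measure. Masks are nested 0/1 lists."""
    coating = sum(sum(row) for row in coating_mask)
    body = sum(sum(row) for row in body_mask)
    return coating / body

def coating_class(ratio, thresholds=(0.1, 0.5)):
    """Map a coating ratio onto the three classes used in the paper."""
    if ratio < thresholds[0]:
        return "no coating"
    if ratio < thresholds[1]:
        return "some coating"
    return "intense coating"

# Toy 2x2 masks: 1 coated pixel out of 4 tongue-body pixels
label = coating_class(coating_ratio([[1, 0], [0, 0]], [[1, 1], [1, 1]]))
```

The paper's point is that the CNN predicts the class directly from the image, avoiding the exposure-dependent mask extraction this sketch relies on.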

Comparison of Fine Grained Classification of Pet Images Using Image Processing and CNN (영상 처리와 CNN을 이용한 애완동물 영상 세부 분류 비교)

  • Kim, Jihae;Go, Jeonghwan;Kwon, Cheolhee
    • Journal of Broadcast Engineering / v.26 no.2 / pp.175-183 / 2021
  • The study of fine-grained image classification continues to develop, but research on object recognition for animals, which have polymorphic properties, is proceeding slowly. Using only pet images of dogs and cats, this paper compares methods for classifying animal species, a fine-grained classification task: one using image processing and one using deep learning. In the image processing method, the GrabCut algorithm is used for object segmentation, and a method using Fisher Vectors for image encoding is proposed. The other method uses deep learning, which has achieved good results in various fields, and in particular a Convolutional Neural Network (CNN), which has shown outstanding performance in image recognition, built with TensorFlow, an open-source deep learning framework provided by Google. For each proposed method, 37 kinds of pet images, 7,390 in total, were tested to verify and compare their effectiveness.

Pedestrian Classification using CNN's Deep Features and Transfer Learning (CNN의 깊은 특징과 전이학습을 사용한 보행자 분류)

  • Chung, Soyoung;Chung, Min Gyo
    • Journal of Internet Computing and Services / v.20 no.4 / pp.91-102 / 2019
  • In autonomous driving systems, the ability to classify pedestrians in images captured by cameras is very important for pedestrian safety. In the past, features of pedestrians were extracted with HOG (Histogram of Oriented Gradients) or SIFT (Scale-Invariant Feature Transform) and then classified using an SVM (Support Vector Machine). However, extracting pedestrian characteristics in such a handcrafted manner has many limitations. Therefore, this paper proposes a method to classify pedestrians reliably and effectively using a CNN's (Convolutional Neural Network) deep features and transfer learning. We experimented with both the fixed feature extractor and the fine-tuning methods, two representative transfer learning techniques. In particular, for the fine-tuning method we added a new scheme, called M-Fine (Modified Fine-tuning), which divides layers into transferred and non-transferred parts in three different sizes, and adjusts weights only for layers belonging to the non-transferred parts. Experiments on the INRIA Person data set with five CNN models (VGGNet, DenseNet, Inception V3, Xception, and MobileNet) showed that CNN deep features perform better than handcrafted features such as HOG and SIFT, and that the accuracy of Xception (threshold = 0.5) is the highest at 99.61%. MobileNet, which achieved performance similar to Xception with 80% fewer parameters, was the best in terms of efficiency. Among the three transfer learning schemes tested, the fine-tuning method performed best. The M-Fine method performed comparably to or slightly below the fine-tuning method, but above the fixed feature extractor method.
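
The M-Fine idea described above, splitting a network into a transferred (frozen) part and a non-transferred (trainable) part at several cut points, can be sketched framework-independently; the layer names and split fraction below are illustrative, not the paper's configuration:

```python
def m_fine_split(layers, frozen_fraction):
    """Freeze the first frozen_fraction of layers (the transferred part)
    and mark the rest trainable (the non-transferred part), mimicking
    the M-Fine partitioning scheme at one of its cut points."""
    cut = round(len(layers) * frozen_fraction)
    return {name: (i >= cut)  # True = weights are updated during training
            for i, name in enumerate(layers)}

# A toy 5-layer backbone; M-Fine would try several frozen_fraction values
layers = ["conv1", "conv2", "conv3", "conv4", "fc"]
trainable = m_fine_split(layers, frozen_fraction=0.6)
```

In a real framework this flag would be applied per layer (e.g. a layer's trainable attribute) before compiling the model; the paper evaluates three such partition sizes.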

Breast Mass Classification using the Fundamental Deep Learning Approach: To build the optimal model applying various methods that influence the performance of CNN

  • Lee, Jin;Choi, Kwang Jong;Kim, Seong Jung;Oh, Ji Eun;Yoon, Woong Bae;Kim, Kwang Gi
    • Journal of Multimedia Information System / v.3 no.3 / pp.97-102 / 2016
  • Deep learning enables machines to have perception and can potentially outperform humans in the medical field. It can save a great deal of time and reduce human error by detecting certain patterns in medical images. The main goal of this paper is to build the optimal model for breast mass classification by applying various methods that influence the performance of a Convolutional Neural Network (CNN). Google's open-source software library TensorFlow was used to build the CNN, and the mammogram dataset used in this study was obtained from 340 breast cancer cases. The best classification performance we achieved was an accuracy of 0.887, sensitivity of 0.903, and specificity of 0.869 for normal tissue versus malignant mass classification, with augmented data, more convolutional filters, and the Adam optimizer. A limitation of this method, however, was that it only considered malignant masses, which are relatively easier to classify than benign masses. Therefore, further studies are required in order to properly classify any given data for medical uses.
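
One of the performance factors mentioned above, data augmentation, can be illustrated with a simple horizontal flip, a transform commonly applied to mammograms; the toy image below is made up and the paper's exact augmentation pipeline is not specified here:

```python
def hflip(image):
    """Horizontally flip a grayscale image given as nested lists."""
    return [row[::-1] for row in image]

def augment(dataset):
    """Return the originals plus their flipped copies, doubling the
    effective training set without collecting new cases."""
    return dataset + [hflip(img) for img in dataset]

# One toy 2x2 image in, two images out
augmented = augment([[[1, 2], [3, 4]]])
```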

Drone Image Classification based on Convolutional Neural Networks (컨볼루션 신경망을 기반으로 한 드론 영상 분류)

  • Joo, Young-Do
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.17 no.5 / pp.97-102 / 2017
  • Recently, deep learning techniques such as convolutional neural networks (CNNs) have been introduced to classify high-resolution remote sensing data. In this paper, we investigated the applicability of a CNN to crop classification of farmland images captured by drones. The farming area was divided into seven classes: rice field, sweet potato, red pepper, corn, sesame leaf, fruit tree, and vinyl greenhouse. We performed image pre-processing and normalization before applying the CNN, and the image classification accuracy exceeded 98%. These results suggest that the transition from existing image classification methods to deep learning-based methods can proceed rapidly, and they confirm the feasibility of the approach.
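
The pre-processing and normalization step mentioned above commonly rescales 8-bit pixel values into [0, 1] before the images are fed to a CNN; a minimal sketch with toy values (the paper does not state its exact normalization):

```python
def normalize(image, max_val=255.0):
    """Scale 8-bit pixel values into [0, 1] for CNN input."""
    return [[px / max_val for px in row] for row in image]

# Toy 2x2 grayscale patch
norm = normalize([[0, 51], [102, 255]])
```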

Lane Detection System using CNN (CNN을 사용한 차선검출 시스템)

  • Kim, Jihun;Lee, Daesik;Lee, Minho
    • IEMEK Journal of Embedded Systems and Applications / v.11 no.3 / pp.163-171 / 2016
  • Lane detection is a widely researched topic. Although simple road detection is easily achieved by previous methods, lane detection becomes very difficult in complex cases involving noisy edges. To address this, we use a convolutional neural network (CNN) for image enhancement. A CNN is a deep learning method that has been applied very successfully to object detection and recognition. In this paper, we introduce a robust lane detection method based on a CNN combined with the random sample consensus (RANSAC) algorithm. Initially, we calculate edges in an image using a hat-shaped kernel; we then detect lanes using the CNN combined with RANSAC. In the training process of the CNN, the input data consist of edge images and the target data are images that have real white lanes on an otherwise black background. The CNN structure consists of 8 layers: 3 convolutional layers, 2 subsampling layers, and a multi-layer perceptron (MLP) of 3 fully connected layers. The convolutional and subsampling layers are arranged hierarchically to form a deep structure. Our proposed lane detection algorithm successfully eliminates noise lines and was found to perform better than other formal line detection algorithms such as RANSAC.
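
The RANSAC stage that follows the CNN can be sketched in pure Python: repeatedly sample two points, fit a line through them, and keep the model with the most inliers. The points, threshold, and iteration count below are illustrative, not the paper's settings:

```python
import random

def ransac_line(points, n_iter=200, threshold=0.5, seed=0):
    """Fit y = a*x + b to noisy points by random sampling; return the
    model with the most inliers (distance measured vertically)."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical pair; skipped in this simple sketch
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points
                   if abs(y - (a * x + b)) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# Points on y = 2x plus one gross outlier (e.g. a noise edge)
pts = [(0, 0), (1, 2), (2, 4), (3, 6), (4, 8), (2, 9)]
model, inliers = ransac_line(pts)
```

Because the sampled model is scored by consensus, the single outlier is rejected instead of dragging the fit, which is why the paper pairs RANSAC with the CNN's enhanced edge images.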