• Title/Abstract/Keywords: Convolutional Neural Network Classifier

89 search results

Binary Classification of Hypertensive Retinopathy Using Deep Dense CNN Learning

  • Mostafa E.A., Ibrahim;Qaisar, Abbas
    • International Journal of Computer Science & Network Security / v.22 no.12 / pp.98-106 / 2022
  • A condition of the retina known as hypertensive retinopathy (HR) is connected to high blood pressure. The severity and persistence of hypertension are directly correlated with the incidence of HR. To avoid blindness, it is essential to recognize and assess HR as early as possible. Few computer-aided systems that can diagnose HR are currently available, and those systems focus on extracting features from a variety of retinopathy-related HR lesions and categorizing them with conventional machine-learning algorithms. Consequently, significant and complicated image-processing methods are required even for limited applications, and the classification accuracy of recent similar systems is likewise lacking. To address these issues, a new CAD HR-diagnosis system employing advanced Deep Dense CNN Learning (DD-CNN) technology is developed to identify HR early. The HR-diagnosis system uses a previously trained convolutional neural network as a feature extractor. A statistical investigation of more than 1,400 retinography images was undertaken to assess the accuracy of the implemented system using several performance metrics, namely specificity (SP), sensitivity (SE), area under the receiver operating curve (AUC), and accuracy (ACC). On average, we achieved an SE of 97%, ACC of 98%, SP of 99%, and AUC of 0.98. These results indicate that the proposed DD-CNN classifier is suitable for diagnosing hypertensive retinopathy.
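
The transfer-learning setup sketched below mirrors the kind of pipeline this abstract outlines: a pretrained dense CNN used as a frozen feature extractor with a small binary head, plus the SE/SP/ACC/AUC metrics quoted above. The DenseNet-121 backbone, head sizes, and 0.5 threshold are illustrative assumptions, not the authors' DD-CNN.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import confusion_matrix, roc_auc_score

# Frozen pretrained dense CNN as feature extractor (DenseNet-121 is an assumed stand-in).
backbone = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
backbone.classifier = nn.Identity()              # keep the 1024-d pooled features only
for p in backbone.parameters():
    p.requires_grad = False

# Small trainable binary head: one logit for HR vs. normal.
head = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Dropout(0.3), nn.Linear(256, 1))

def predict_prob(images):
    """images: (N, 3, 224, 224) float tensor -> (N,) HR probabilities."""
    with torch.no_grad():
        feats = backbone(images)
        return torch.sigmoid(head(feats)).squeeze(1)

def report(y_true, y_prob, thr=0.5):
    """y_true, y_prob: NumPy arrays; returns the SE/SP/ACC/AUC metrics quoted above."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_prob >= thr).ravel()
    return {"SE": tp / (tp + fn), "SP": tn / (tn + fp),
            "ACC": (tp + tn) / (tp + tn + fp + fn),
            "AUC": roc_auc_score(y_true, y_prob)}
```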

CCTV Based Gender Classification Using a Convolutional Neural Networks (컨볼루션 신경망을 이용한 CCTV 영상 기반의 성별구분)

  • Kang, Hyun Gon;Park, Jang Sik;Song, Jong Kwan;Yoon, Byung Woo
    • Journal of Korea Multimedia Society / v.19 no.12 / pp.1943-1950 / 2016
  • Recently, gender classification has attracted a great deal of attention in the field of video surveillance systems. It can be useful in many applications, such as detecting crimes against women and business intelligence. In this paper, we propose a method that detects pedestrians from CCTV video and classifies the gender of the detected objects. Many algorithms have been proposed so far to classify people according to their gender. This paper presents a gender classification method using a convolutional neural network. The detection phase is performed by an AdaBoost algorithm based on Haar-like features and LBP features. The classifier and detector are trained with datasets generated from CCTV images. In experiments, the proposed method achieved a matching rate of 89.9% for male and 90.7% for female videos. Simulation results show that the proposed gender classification outperforms conventional classification algorithms.
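
A rough sketch of such a two-stage pipeline is given below: an OpenCV Haar-cascade detector stands in for the AdaBoost detector built on Haar-like and LBP features, and a small CNN classifies each detected crop as male or female. The cascade file, crop size, and layer widths are assumptions, not the paper's exact configuration.

```python
import cv2
import torch
import torch.nn as nn

# Stage 1: pedestrian detector (Haar cascade shipped with OpenCV, used as a stand-in).
detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_fullbody.xml")

# Stage 2: small CNN gender classifier over 64x64 grayscale crops (untrained sketch).
gender_cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 16 * 16, 2))    # logits: [male, female]

def classify_frame(frame_bgr):
    """Detect pedestrians in a BGR frame and return a gender label per detection."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 3):
        crop = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
        t = torch.from_numpy(crop).float().div(255).view(1, 1, 64, 64)
        results.append(("male", "female")[gender_cnn(t).argmax(1).item()])
    return results
```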

Sweet Persimmons Classification based on a Mixed Two-Step Synthetic Neural Network (혼합 2단계 합성 신경망을 이용한 단감 분류)

  • Roh, SeungHee;Park, DongGyu
    • Journal of Korea Multimedia Society / v.24 no.10 / pp.1358-1368 / 2021
  • Research on agricultural automation is a key issue for overcoming the labor shortage in Korea. Sweet persimmon farmers need much time and labor to separate marketable sweet persimmons from unprofitable products. In this paper, we propose a mixed two-step synthetic neural network model for efficiently classifying sweet persimmon images. The model combines a surface-direction classification model and a quality-screening model, each constructed from image datasets. We also applied Class Activation Mapping (CAM) visualization to easily inspect the quality of the classified products. The proposed mixed two-step model showed high performance compared to the simple binary classification model and the multi-class classification model, and it made it possible to easily identify the weak parts of the classification in a dataset.
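
The Class Activation Mapping step mentioned above can be sketched as follows: for a network ending in global average pooling and a linear classifier, the CAM for a class is the weighted sum of the last convolutional feature maps using that class's classifier weights. The ResNet-18 backbone here is only an assumed stand-in for the paper's networks.

```python
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def class_activation_map(image, class_idx):
    """image: (1, 3, H, W) tensor; returns an (h, w) CAM heatmap for class_idx."""
    feats = {}
    hook = model.layer4.register_forward_hook(lambda m, i, o: feats.update(out=o))
    with torch.no_grad():
        model(image)
    hook.remove()
    fmap = feats["out"][0]                        # (512, h, w) feature maps before GAP
    weights = model.fc.weight[class_idx]          # (512,) classifier weights of the class
    cam = torch.relu(torch.einsum("c,chw->hw", weights, fmap))
    return cam / (cam.max() + 1e-8)               # normalized map to overlay on the image
```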

Neural Networks-Based Method for Electrocardiogram Classification

  • Maksym Kovalchuk;Viktoriia Kharchenko;Andrii Yavorskyi;Igor Bieda;Taras Panchenko
    • International Journal of Computer Science & Network Security / v.23 no.9 / pp.186-191 / 2023
  • Neural networks are widely used to solve a huge variety of tasks. Machine learning methods are also used for signal and time-series analysis, including electrocardiograms. Contemporary wearable devices, both medical and non-medical such as smart watches, allow data to be gathered continuously in real time. These data can be transferred for analysis or analyzed on the device itself, thus providing a preliminary diagnosis or at least flagging serious deviations. Different methods are used for this kind of analysis, ranging from medically oriented approaches based on distinctive features of the signal to machine learning and deep learning approaches. Here we demonstrate a neural-network-based approach to this task by building an ensemble of 1D CNN classifiers with a final selection classifier using logistic regression, random forest, or support vector machine, and draw conclusions from a comparison with other approaches.
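
Below is a minimal sketch of the stacking arrangement the abstract describes: several 1D CNN base classifiers score each ECG segment, their class probabilities are concatenated, and a final logistic-regression (or random-forest/SVM) selector is trained on top. Layer sizes and the number of base models are assumptions.

```python
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

def make_cnn(n_classes=2):
    """One 1D CNN base classifier over a single-lead ECG segment."""
    return nn.Sequential(
        nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
        nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        nn.Flatten(), nn.Linear(32, n_classes))

ensemble = [make_cnn() for _ in range(3)]        # three (untrained here) base models

def base_probabilities(signals):
    """signals: (N, 1, L) tensor of ECG segments -> (N, 3 * n_classes) stacked features."""
    with torch.no_grad():
        probs = [torch.softmax(m(signals), dim=1) for m in ensemble]
    return torch.cat(probs, dim=1).numpy()

# Final selection classifier on the base models' outputs; a random forest or SVM could
# be swapped in here, as the abstract mentions.
meta = LogisticRegression(max_iter=1000)
# meta.fit(base_probabilities(train_signals), train_labels)
# predictions = meta.predict(base_probabilities(test_signals))
```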

Feasibility of fully automated classification of whole slide images based on deep learning

  • Cho, Kyung-Ok;Lee, Sung Hak;Jang, Hyun-Jong
    • The Korean Journal of Physiology and Pharmacology / v.24 no.1 / pp.89-99 / 2020
  • Although microscopic analysis of tissue slides has been the basis for disease diagnosis for decades, intra- and inter-observer variabilities remain issues to be resolved. The recent introduction of digital scanners has allowed deep learning to be used in the analysis of tissue images because many whole slide images (WSIs) are accessible to researchers. In the present study, we investigated the possibility of a deep learning-based, fully automated, computer-aided diagnosis system with WSIs from a stomach adenocarcinoma dataset. Three different convolutional neural network architectures were tested to determine the best architecture for the tissue classifier. Each network was trained to classify small tissue patches as normal or tumor. Based on the patch-level classification, tumor probability heatmaps can be overlaid on tissue images. We observed three different tissue patterns: clear normal, clear tumor, and ambiguous cases. We suggest that longer inspection time be assigned to ambiguous cases than to clear normal cases, increasing the accuracy and efficiency of histopathologic diagnosis by pre-evaluating the status of the WSIs. When the classifier was tested with a completely different WSI dataset, the performance was not optimal because of the different tissue preparation quality. By including a small amount of data from the new dataset for training, the performance on the new dataset was much enhanced. These results indicate that a WSI dataset should include tissues prepared under many different conditions to construct a generalized tissue classifier. Thus, multi-national/multi-center datasets should be built for the application of deep learning in real-world medical practice.
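
A conceptual sketch of the patch-to-heatmap workflow described above: the slide is tiled into fixed-size patches, each patch is scored by a trained normal-vs-tumor classifier, and the tumor probabilities are placed back on a grid to be overlaid on the slide. Patch size, stride, and the classifier interface are assumptions.

```python
import numpy as np
import torch

def tumor_heatmap(slide, patch_classifier, patch=256, stride=256):
    """slide: (H, W, 3) uint8 array; patch_classifier: (N, 3, patch, patch) -> (N, 2) logits."""
    h, w, _ = slide.shape
    rows, cols = h // stride, w // stride
    heatmap = np.zeros((rows, cols), dtype=np.float32)
    for r in range(rows):
        for c in range(cols):
            tile = slide[r * stride:r * stride + patch, c * stride:c * stride + patch]
            if tile.shape[:2] != (patch, patch):
                continue                                   # skip partial border tiles
            x = torch.from_numpy(tile).permute(2, 0, 1).float().div(255).unsqueeze(0)
            with torch.no_grad():
                prob = torch.softmax(patch_classifier(x), dim=1)[0, 1]  # P(tumor)
            heatmap[r, c] = prob.item()
    return heatmap    # upscale and overlay on the slide image for visual review
```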

A Study on Classifying Sea Ice of the Summer Arctic Ocean Using Sentinel-1 A/B SAR Data and Deep Learning Models (Sentinel-1 A/B 위성 SAR 자료와 딥러닝 모델을 이용한 여름철 북극해 해빙 분류 연구)

  • Jeon, Hyungyun;Kim, Junwoo;Vadivel, Suresh Krishnan Palanisamy;Kim, Duk-jin
    • Korean Journal of Remote Sensing / v.35 no.6_1 / pp.999-1009 / 2019
  • The importance of high-resolution sea ice maps of the Arctic Ocean is increasing due to the possibility of pioneering North Pole routes and the necessity of precise climate prediction models. In this study, sea ice classification algorithms for two deep learning models were examined using Sentinel-1 A/B SAR data to generate high-resolution sea ice classification maps. Based on current ice charts, training datasets for three classes (Open Water, First Year Ice, Multi Year Ice) were generated by Arctic sea ice and remote sensing experts. Ten sea ice classification algorithms were generated by combining two deep learning models (i.e., Simple CNN and ResNet50) and five cases of input bands, including incidence angles and thermal-noise-corrected HV bands. For the ten algorithms, analyses were performed by comparing classification results with ground truth points. A confusion matrix and Cohen's kappa coefficient were produced for the case that showed the best result. Furthermore, the classification result was compared with that of the Maximum Likelihood Classifier, which has traditionally been employed to classify sea ice. In conclusion, the Convolutional Neural Network case, which has two convolution layers and two max pooling layers, with HV and incidence angle input bands, shows a classification accuracy of 96.66% and a Cohen's kappa coefficient of 0.9499. All deep learning cases show better classification accuracy than the result of the Maximum Likelihood Classifier.
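
The best-performing "Simple CNN" case is concrete enough to sketch: two convolution layers and two max-pooling layers over two-band patches (thermal-noise-corrected HV plus incidence angle), ending in a three-class output for Open Water, First-Year Ice, and Multi-Year Ice. The patch size and filter counts below are assumptions.

```python
import torch.nn as nn

simple_cnn = nn.Sequential(
    nn.Conv2d(2, 32, kernel_size=3, padding=1),   # 2 input bands: HV, incidence angle
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 3),                     # 3 ice classes, assuming 32x32 patches
)
```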

A New Face Morphing Method using Texture Feature-based Control Point Selection Algorithm and Parallel Deep Convolutional Neural Network (텍스처 특징 기반 제어점 선택 알고리즘과 병렬 심층 컨볼루션 신경망을 이용한 새로운 얼굴 모핑 방법)

  • Park, Jin Hyeok;Khan, Rafiul Hasan;Lim, Seon-Ja;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.25 no.2 / pp.176-188 / 2022
  • In this paper, we propose a compact method for anthropomorphism that uses Deep Convolutional Neural Networks (DCNN) to detect the similarities between a human face and an animal face, and we apply texture feature-based morphing between them. We also propose a basic texture feature-based morphing system for morphing between human faces only. The entire anthropomorphism process starts with the creation of an animal face classifier using a parallel DCNN that determines the animal face most similar to a given human face. The significance of our network is that it contains four sets of convolutional functions that run in parallel, allowing it to extract more features than a linear DCNN. Once the similarity is established, our texture feature-based automatic morphing system recognizes the facial features of the human face and selects the control points automatically, rather than relying on the traditional manually assisted morphing approach. The simulation results show that our suggested DCNN surpasses its competitors with a 92.0% accuracy rate. It also ensures that the most similar animal classes are found, and the texture-based morphing technology automatically completes the morphing process, ensuring a smooth transition from one image to another.
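
The parallel structure described above, four sets of convolutional functions running side by side and merged, can be sketched roughly as below. Branch widths, kernel sizes, and the number of animal classes are assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class ParallelBlock(nn.Module):
    """Four parallel convolutional paths whose feature maps are concatenated."""
    def __init__(self, in_ch, branch_ch):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, branch_ch, k, padding=k // 2), nn.ReLU())
            for k in (1, 3, 5, 7)                      # four kernel sizes run in parallel
        ])

    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)

classifier = nn.Sequential(
    ParallelBlock(3, 16), nn.MaxPool2d(2),             # 3-channel face image -> 64 maps
    ParallelBlock(64, 32), nn.AdaptiveAvgPool2d(1),     # -> 128 maps, globally pooled
    nn.Flatten(), nn.Linear(128, 5))                    # e.g. 5 animal-face classes (assumed)
```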

Automatic Classification of Radar Signals Using CNN (CNN을 이용한 레이다 신호 자동 분류)

  • Hong, Seok-Jun;Yi, Yearn-Gui;Jo, Jeil;Lee, Sang-Gil;Seo, Bo-Seok
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.30 no.2 / pp.132-140 / 2019
  • In this paper, we propose a method for classifying radar signals by threat type by applying machine learning to radar signal parameter data. Currently, the army uses a library of mapping relations between parameters and threat types to recognize threat signals. This approach has certain limitations when classifying signals and recognizing new threat types or threat types that do not exist in the current libraries. In this paper, we propose an automatic radar signal classification method by threat type that uses only parameter data, without a library. A convolutional neural network is used as the classifier, and machine learning is applied to train it. The proposed method does not use a library and hence can classify threat signals that are new or do not exist in the current library.
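
As a rough illustration of the library-free idea, the sketch below feeds a vector of radar pulse parameters directly to a small 1D CNN that outputs a threat-type class. The parameter count, class count, and layer sizes are assumptions.

```python
import torch.nn as nn

n_params, n_threat_types = 8, 10                      # assumed vector length / class count

radar_cnn = nn.Sequential(                            # input: (N, 1, n_params) parameter vectors
    nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(32 * n_params, n_threat_types))
```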

Deep Learning Based Sign Detection and Recognition for the Blind (시각장애인을 위한 딥러닝 기반 표지판 검출 및 인식)

  • Jeon, Taejae;Lee, Sangyoun
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.2 / pp.115-122 / 2017
  • This paper proposes a deep learning-based sign detection and recognition system for the blind. The proposed system is composed of a sign detection stage and a sign recognition stage. In the sign detection stage, aggregated channel features are extracted and an AdaBoost classifier is applied to detect regions of interest of the sign. In the sign recognition stage, a convolutional neural network is applied to recognize the regions of interest of the sign. The AdaBoost classifier is designed to decrease the number of undetected signs, and the deep learning algorithm is used to increase recognition accuracy, which leads to the removal of false positives that occur in the sign detection stage. Based on our experiments, the proposed method efficiently decreases the number of false positives compared with other methods.
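
The recognition stage's false-positive removal can be sketched as follows: detector proposals are classified by a CNN whose label set includes a "not a sign" class, and low-confidence or background candidates are discarded. The label set, crop size, and confidence threshold here are hypothetical.

```python
import torch
import torch.nn as nn

SIGN_CLASSES = ["restroom", "exit", "elevator", "not_a_sign"]   # hypothetical label set

recognizer = nn.Sequential(                           # small recognition CNN over 64x64 crops
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(64 * 16 * 16, len(SIGN_CLASSES)))

def recognize(candidate_crops, threshold=0.8):
    """candidate_crops: (N, 3, 64, 64) tensor of detector proposals -> accepted sign labels."""
    with torch.no_grad():
        probs = torch.softmax(recognizer(candidate_crops), dim=1)
    conf, idx = probs.max(dim=1)
    return [SIGN_CLASSES[i] for c, i in zip(conf.tolist(), idx.tolist())
            if c >= threshold and SIGN_CLASSES[i] != "not_a_sign"]
```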

Enhancing Alzheimer's Disease Classification using 3D Convolutional Neural Network and Multilayer Perceptron Model with Attention Network

  • Enoch A. Frimpong;Zhiguang Qin;Regina E. Turkson;Bernard M. Cobbinah;Edward Y. Baagyere;Edwin K. Tenagyei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.11 / pp.2924-2944 / 2023
  • Alzheimer's disease (AD) is a neurological condition that is recognized as one of the primary causes of memory loss. AD currently has no cure. Therefore, the need to develop an efficient model with high precision for timely detection of the disease is essential. When AD is detected early, treatment is most likely to be successful. The most often utilized indicators for AD identification are the Mini-Mental State Examination (MMSE) and the Clinical Dementia Rating. However, the use of these indicators as ground-truth markers can be imprecise for AD detection. Researchers have proposed several computer-aided frameworks, and lately supervised models are mostly used. In this study, we propose a novel 3D Convolutional Neural Network Multilayer Perceptron (3D CNN-MLP) based model for AD classification. The model uses an attention mechanism to automatically extract relevant features from Magnetic Resonance Images (MRI) and generate probability maps, which serve as input for the MLP classifier. Three MRI scan categories were considered: AD dementia patients, Mild Cognitive Impairment (MCI) patients, and Normal Control (NC) or healthy subjects. The performance of the model is assessed by comparison with basic CNN, VGG16, and DenseNet models, and other state-of-the-art works. The models were adjusted to fit the 3D images before the comparison was done. Our model exhibited excellent classification performance, with an accuracy of 91.27% for AD vs. NC, 80.85% for MCI vs. NC, and 87.34% for AD vs. MCI.
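
A bare-bones sketch of a 3D CNN feeding an MLP head, in the spirit of the 3D CNN-MLP design outlined above, is shown below. Channel counts and the 64-cubed input volume are assumptions, and the attention mechanism is omitted for brevity.

```python
import torch.nn as nn

cnn3d = nn.Sequential(                                # volumetric feature extractor
    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
    nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
    nn.AdaptiveAvgPool3d(1), nn.Flatten())            # -> (N, 16) volume descriptor

mlp_head = nn.Sequential(                             # MLP classifier on the descriptor
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(64, 2))                                 # binary task, e.g. AD vs. NC

model = nn.Sequential(cnn3d, mlp_head)                # input: (N, 1, 64, 64, 64) MRI volumes
```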