• Title/Summary/Keyword: deep machine learning

Automatic Classification of Drone Images Using Deep Learning and SVM with Multiple Grid Sizes

  • Kim, Sun Woong; Kang, Min Soo; Song, Junyoung; Park, Wan Yong; Eo, Yang Dam; Pyeon, Mu Wook
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.5 / pp.407-414 / 2020
  • SVM (Support vector machine) analysis was performed after applying a deep learning technique based on an Inception-based model (GoogLeNet). The accuracy of automatic image classification was analyzed using an SVM with multiple virtual grid sizes. Six classes were selected from a standard land cover map. Cars were added as a separate item to increase the classification accuracy of roads. The virtual grid size was 2-5 m for natural areas, 5-10 m for traffic areas, and 10-15 m for building areas, based on the size of items and the resolution of input images. The results demonstrate that automatic classification accuracy can be increased by adopting an integrated approach that utilizes weighted virtual grid sizes for different classes.
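
The pipeline described above (deep features fed to an SVM classifier) can be sketched as follows. The feature vectors here are synthetic stand-ins, not the paper's GoogLeNet activations or drone imagery, and the six classes only mimic the land-cover categories in count:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-in for deep features: one 32-dim vector per virtual grid
# cell, drawn from a different cluster for each of six land-cover classes.
n_classes, dim = 6, 32
centers = rng.normal(scale=3.0, size=(n_classes, dim))
X = np.vstack([c + rng.normal(size=(50, dim)) for c in centers])
y = np.repeat(np.arange(n_classes), 50)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# SVM analysis on the deep features, as in the paper's second stage.
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

In the paper this step would be repeated per virtual grid size (2-5 m, 5-10 m, 10-15 m) and the per-class results weighted and combined.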

Comparison of Image Classification Performance in Convolutional Neural Network according to Transfer Learning (전이학습에 방법에 따른 컨벌루션 신경망의 영상 분류 성능 비교)

  • Park, Sung-Wook; Kim, Do-Yeon
    • Journal of Korea Multimedia Society / v.21 no.12 / pp.1387-1395 / 2018
  • The convolutional neural network (CNN), a core deep learning algorithm, shows better performance than other machine learning algorithms. However, without sufficient data, a CNN cannot achieve satisfactory performance even if the classifier itself is excellent. In this situation, the use of transfer learning has proven highly effective. In this paper, we apply two transfer learning methods (freezing and retraining) to three CNN models (ResNet-50, Inception-V3, DenseNet-121) and analyze how the classification performance of the CNN changes with each method. As a result of statistical significance tests using various evaluation indicators, ResNet-50, Inception-V3, and DenseNet-121 differed by factors of 1.18, 1.09, and 1.17, respectively. Based on this, we conclude that the retraining method may be more effective than the freezing method for transfer learning in image classification problems.
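
The freezing-vs-retraining distinction can be illustrated with a toy numpy network: a "pretrained" backbone layer and a new head are fine-tuned on a small synthetic target-domain dataset, with the backbone either held fixed (freezing) or updated (retraining). This is only a conceptual sketch, not the paper's ResNet/Inception/DenseNet setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" backbone weights and a freshly initialised classification head.
W1_pre = rng.normal(scale=0.5, size=(4, 8))
W2_init = rng.normal(scale=0.5, size=(8, 1))

# Small target-domain dataset (synthetic; linearly separable for clarity).
X = rng.normal(size=(64, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

def fine_tune(freeze_backbone, lr=0.5, steps=300):
    W1, W2 = W1_pre.copy(), W2_init.copy()
    for _ in range(steps):
        h = np.tanh(X @ W1)                       # backbone features
        p = 1.0 / (1.0 + np.exp(-h @ W2))         # sigmoid head
        g = (p - y) / len(X)                      # BCE gradient w.r.t. logits
        grad_W2 = h.T @ g
        if not freeze_backbone:                   # "retraining": update backbone
            grad_W1 = X.T @ ((g @ W2.T) * (1.0 - h ** 2))
            W1 -= lr * grad_W1
        W2 -= lr * grad_W2                        # head is always trained
    h = np.tanh(X @ W1)
    p = 1.0 / (1.0 + np.exp(-h @ W2))
    return float(np.mean((p > 0.5) == y))         # training accuracy

acc_frozen = fine_tune(freeze_backbone=True)
acc_retrained = fine_tune(freeze_backbone=False)
```

In a real framework the same switch is a per-layer `trainable` flag on the pretrained backbone.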

Fast Detection of Disease in Livestock based on Deep Learning (축사에서 딥러닝을 이용한 질병개체 파악방안)

  • Lee, Woongsup; Kim, Seong Hwan; Ryu, Jongyeol; Ban, Tae-Won
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.5 / pp.1009-1015 / 2017
  • Recently, the widespread adoption of IoT (Internet of Things) technology has enabled the accumulation of big biometric data on livestock. The availability of big data allows diverse machine learning algorithms to be applied in agriculture, significantly enhancing farm productivity. In this paper, we propose an abnormal-livestock detection algorithm based on deep learning, one of the most prominent machine learning techniques. In our proposed scheme, the livestock are divided into two clusters, normal and abnormal (diseased), whose biometric data have different characteristics, and a deep neural network is used to classify the two clusters based on the biometric data. With our proposed scheme, normal and abnormal livestock can be identified from big biometric data even when the detailed stochastic characteristics of the data are unknown, which helps prevent epidemics such as foot-and-mouth disease.
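
The normal/abnormal classification step can be sketched with a small neural network on synthetic biometric records. The three features (body temperature, activity, feed intake) and their distributions are illustrative assumptions, not the paper's actual sensor data:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)

# Synthetic stand-in for livestock biometric records: normal and abnormal
# (diseased) animals drawn from clusters with different characteristics.
normal = rng.normal(loc=[38.5, 1.0, 1.0], scale=0.2, size=(200, 3))
abnormal = rng.normal(loc=[40.0, 0.4, 0.5], scale=0.4, size=(40, 3))
X = np.vstack([normal, abnormal])
y = np.array([0] * 200 + [1] * 40)   # 0 = normal, 1 = abnormal (disease)

# A small deep neural network classifier separating the two clusters.
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
clf.fit(X, y)
train_acc = clf.score(X, y)
```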

Deep Learning in Radiation Oncology

  • Cheon, Wonjoong; Kim, Haksoo; Kim, Jinsung
    • Progress in Medical Physics / v.31 no.3 / pp.111-123 / 2020
  • Deep learning (DL) is a subset of machine learning and artificial intelligence that uses deep neural networks, structured similarly to the human neural system and trained on big data. DL narrows the gap between data acquisition and meaningful interpretation without explicit programming. It has so far outperformed most classification and regression methods and can automatically learn data representations for specific tasks. The application areas of DL in radiation oncology include classification, semantic segmentation, object detection, image translation and generation, and image captioning. This article examines the potential role of DL and what more can be achieved by utilizing it in radiation oncology. With the advances in DL, studies contributing to the development of radiation oncology were investigated comprehensively. In this article, the radiation treatment workflow is divided into six consecutive stages: patient assessment, simulation, target and organs-at-risk segmentation, treatment planning, quality assurance, and beam delivery. Studies using DL were classified and organized according to each stage. State-of-the-art studies were identified, and their clinical utility was examined. DL models can provide faster and more accurate solutions to problems faced by oncologists. While the effect of a data-driven approach on improving the quality of care for cancer patients is clear, implementing these methods will require cultural changes at both the professional and institutional levels. We believe this paper will serve as a guide for both clinicians and medical physicists on issues that need to be addressed in time.

Normal data based rotating machine anomaly detection using CNN with self-labeling

  • Bae, Jaewoong; Jung, Wonho; Park, Yong-Hwa
    • Smart Structures and Systems / v.29 no.6 / pp.757-766 / 2022
  • Training deep learning algorithms requires a sufficient amount of data. In most engineering systems, however, acquiring fault data is difficult or infeasible, while normal data are readily available. This dearth of data is one of the major challenges in developing deep learning models; fault diagnosis in particular cannot be performed in the absence of fault data. In this context, this paper proposes an anomaly detection methodology for rotating machines that uses only normal data with self-labeling. Since only normal data are used, a self-labeling method generates a new labeled dataset. The overall procedure comprises three steps: (1) transformation of normal data into self-labeled data based on a pretext task, (2) training a convolutional neural network (CNN), and (3) anomaly detection using an anomaly score defined from the softmax output of the trained CNN. The softmax values of abnormal samples behave differently from those of normal samples. To verify the proposed method, four case studies were conducted on the Case Western Reserve University (CWRU) bearing dataset, the IEEE PHM 2012 data challenge dataset, the PHMAP 2021 data challenge dataset, and a laboratory bearing testbed, and the results were compared with those of existing machine learning and deep learning methods. The proposed algorithm detected faults in the bearing testbed and compressor with over 99.7% accuracy. In particular, it detected not only bearing faults but also structural faults such as unbalance and belt looseness with very high accuracy. Compared with existing GAN- and autoencoder-based anomaly detection algorithms, the proposed method showed high anomaly detection performance.
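
The scoring idea in step (3) — a sample from outside the normal training distribution produces a less confident softmax on the pretext-task classes — can be sketched as follows. The paper's exact score definition is not reproduced here; low maximum-softmax confidence is used as an illustrative anomaly score:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def anomaly_score(logits):
    """Higher score = less confident pretext-task prediction,
    flagging the sample as a likely anomaly."""
    return 1.0 - softmax(logits).max(axis=1)

# Stand-in logits from a CNN trained on a 4-class pretext task:
# a confidently classified (normal) sample vs. an ambiguous (abnormal) one.
logits = np.array([[8.0, 0.1, 0.2, 0.1],    # normal: one class dominates
                   [1.1, 0.9, 1.0, 1.2]])   # abnormal: near-uniform softmax
scores = anomaly_score(logits)
```

A detection threshold on this score would be chosen from held-out normal data, since no fault data are available.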

Feature Extraction Based on DBN-SVM for Tone Recognition

  • Chao, Hao; Song, Cheng; Lu, Bao-Yun; Liu, Yong-Li
    • Journal of Information Processing Systems / v.15 no.1 / pp.91-99 / 2019
  • An innovative tone modeling framework based on deep neural networks for tone recognition is proposed in this paper. In the framework, both prosodic features and articulatory features are first extracted as the raw input data. Then, a five-layer deep belief network is used to obtain high-level tone features. Finally, a support vector machine is trained to recognize tones. The 863 corpus was used in the experiments, and the results show that the proposed method significantly improves recognition accuracy for all tone patterns. The average tone recognition rate reached 83.03%, which is 8.61% higher than that of the original method.
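
The DBN-feature-extraction-plus-SVM pipeline can be sketched with scikit-learn, which provides only single RBM layers; stacking two `BernoulliRBM` stages here is a rough stand-in for the paper's five-layer DBN, and the feature vectors are synthetic, not real prosodic/articulatory features:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic stand-in for prosodic/articulatory feature vectors, scaled to
# [0, 1] as BernoulliRBM expects; four classes mimic four tone patterns.
n_per_class, dim = 60, 20
X = np.clip(np.vstack([0.25 * k + 0.08 * rng.normal(size=(n_per_class, dim))
                       for k in range(4)]), 0.0, 1.0)
y = np.repeat(np.arange(4), n_per_class)

# Unsupervised RBM layers learn high-level features; the SVM does recognition.
model = Pipeline([
    ("rbm1", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20, random_state=0)),
    ("svm", SVC(kernel="rbf")),
])
model.fit(X, y)
train_acc = model.score(X, y)
```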

Development of a Real-time Facial Expression Recognition Model using Transfer Learning with MobileNet and TensorFlow.js (MobileNet과 TensorFlow.js를 활용한 전이 학습 기반 실시간 얼굴 표정 인식 모델 개발)

  • Cha Jooho
    • Journal of Korea Society of Digital Industry and Information Management / v.19 no.3 / pp.245-251 / 2023
  • Facial expression recognition plays a significant role in understanding human emotional states. With the advancement of AI and computer vision technologies, extensive research has been conducted in various fields, including improving customer service, medical diagnosis, and assessing learners' understanding in education. In this study, we develop a model that can infer emotions in real-time from a webcam using transfer learning with TensorFlow.js and MobileNet. While existing studies focus on achieving high accuracy using deep learning models, these models often require substantial resources due to their complex structure and computational demands. Consequently, there is a growing interest in developing lightweight deep learning models and transfer learning methods for restricted environments such as web browsers and edge devices. By employing MobileNet as the base model and performing transfer learning, our study develops a deep learning transfer model utilizing JavaScript-based TensorFlow.js, which can predict emotions in real-time using facial input from a webcam. This transfer model provides a foundation for implementing facial expression recognition in resource-constrained environments such as web and mobile applications, enabling its application in various industries.

An Integrated Accurate-Secure Heart Disease Prediction (IAS) Model using Cryptographic and Machine Learning Methods

  • Syed Anwar Hussainy F; Senthil Kumar Thillaigovindan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.2 / pp.504-519 / 2023
  • Heart disease is becoming the leading cause of death worldwide. Diagnosing cardiac illness is a difficult endeavor that requires both expertise and extensive knowledge. Machine learning (ML) is becoming increasingly important in the medical field. Most prior works have concentrated on predicting cardiac disease; however, the precision of the results is low, and data integrity is uncertain. To address these difficulties, this research creates an Integrated Accurate-Secure Heart Disease Prediction (IAS) model based on deep convolutional neural networks. First, heart-related medical data are collected and pre-processed. Second, features are extracted from both signals and acquired data and used to train the classifier. The deep convolutional neural network (DCNN) categorizes the received sensor data as normal or abnormal. Furthermore, the results are safeguarded by an integrity-validation mechanism based on a hash algorithm. The system's performance is evaluated by comparing the proposed model to existing models. The results show that the proposed cardiac disease diagnosis model surpasses previous techniques, attaining an accuracy of 98.5% for the largest number of records, which is higher than the available classifiers.
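
The integrity-validation step — attaching a hash to each prediction record so tampering is detectable — can be sketched with the Python standard library. The abstract does not specify the hash algorithm, so SHA-256 is assumed here, wrapped in an HMAC with a hypothetical shared key so that an attacker cannot simply recompute the digest after altering a record:

```python
import hashlib
import hmac
import json

SECRET_KEY = b"shared-secret"  # hypothetical key held by the validating party

def sign_record(record: dict) -> str:
    """Compute an HMAC-SHA256 tag over the canonically serialized record."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_record(record: dict, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_record(record), tag)

record = {"patient_id": "P-001", "prediction": "abnormal", "score": 0.97}
tag = sign_record(record)

tampered = dict(record, prediction="normal")   # an attacker flips the label
ok_original = verify_record(record, tag)
ok_tampered = verify_record(tampered, tag)
```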

Prediction of compressive strength of sustainable concrete using machine learning tools

  • Lokesh Choudhary; Vaishali Sahu; Archanaa Dongre; Aman Garg
    • Computers and Concrete / v.33 no.2 / pp.137-145 / 2024
  • Experimentally determining concrete's compressive strength for a given mix design is time-consuming and difficult. The goal of the current work is to identify the best predictive model among several machine learning algorithms, namely Gradient Boosting Machine (GBM), Stacked Ensemble (SE), Distributed Random Forest (DRF), Extremely Randomized Trees (XRT), Generalized Linear Model (GLM), and Deep Learning (DL), for forecasting the compressive strength of a ternary geopolymer concrete mix without any experimental procedure. A geopolymer mix uses supplementary cementitious materials obtained as industrial by-products instead of cement. The input variables include not only the individual ingredient quantities but also the molarity of the alkali activator and the age of testing. Based on a range of statistical parameters used to measure the models' effectiveness in forecasting the compressive strength of the ternary geopolymer concrete mix, GBM was found to perform better than all the other algorithms. A sensitivity analysis carried out towards the end of the study suggests that the GBM model predicts results close to the experimental values, with an accuracy between 95.6% and 98.2% for the testing and training datasets.
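
A GBM regression of the kind the paper found best can be sketched as follows. The six input features and the strength formula are synthetic stand-ins for the real mix-design variables (ingredient quantities, activator molarity, age of testing), not the authors' dataset:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Synthetic stand-in for mix-design inputs and compressive strength (MPa).
n = 300
X = rng.uniform(size=(n, 6))
strength = 20 + 30 * X[:, 0] + 10 * X[:, 4] * X[:, 5] \
           + rng.normal(scale=1.0, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, strength,
                                          test_size=0.25, random_state=0)
gbm = GradientBoostingRegressor(n_estimators=200, max_depth=3, random_state=0)
gbm.fit(X_tr, y_tr)
r2 = gbm.score(X_te, y_te)   # coefficient of determination on held-out mixes
```

The same fit/score loop, repeated for each candidate algorithm, yields the kind of model comparison reported in the paper.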

A Prediction Triage System for Emergency Department During Hajj Period using Machine Learning Models

  • Huda N. Alhazmi
    • International Journal of Computer Science & Network Security / v.24 no.7 / pp.11-23 / 2024
  • Triage is the practice of accurately prioritizing patients in the emergency department (ED) based on their medical condition so that they receive appropriate treatment. Variation in triage assessment among medical staff can cause mis-triage, which affects patients negatively. Developing an ED triage system based on machine learning (ML) techniques can lead to accurate and efficient triage outcomes. This study aims to develop a triage system using machine learning techniques to predict ED triage levels from patient information. We conducted a retrospective study using Security Forces Hospital ED data from 2021 through 2023, collected during the Hajj period in Saudi Arabia. Using demographics, vital signs, and chief complaints as predictors, two machine learning models were investigated, namely a gradient boosted decision tree (XGB) and a deep neural network (DNN). The models were trained to predict ED triage levels, and their predictive performance was evaluated using the area under the receiver operating characteristic curve (AUC) and the confusion matrix. A total of 11,584 ED visits were collected and used in this study. The XGB and DNN models exhibited high predictive performance, with AUC-ROC scores of 0.85 and 0.82, respectively. Compared to the traditional approach, our proposed system demonstrated better performance and can be implemented in real-world clinical settings. Utilizing ML applications can support triage decision-making, clinical care, and resource utilization.
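
The train-then-evaluate-with-AUC workflow can be sketched as follows, using scikit-learn's gradient boosting in place of the XGBoost library and entirely synthetic stand-ins for the hospital's demographic and vital-sign predictors:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Synthetic stand-in for triage predictors; binary "high-acuity" target.
n = 2000
X = np.column_stack([
    rng.integers(1, 90, n),          # age
    rng.normal(85, 15, n),           # heart rate
    rng.normal(120, 20, n),          # systolic blood pressure
    rng.normal(37.0, 0.6, n),        # temperature
    rng.integers(0, 10, n),          # chief-complaint category code
])
risk = 0.03 * X[:, 1] + 0.5 * (X[:, 3] - 37.0) - 0.01 * X[:, 2]
y = (risk + rng.normal(scale=0.5, size=n) > risk.mean()).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Evaluate with the area under the ROC curve, as in the study.
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```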