• Title/Summary/Keyword: Ensemble Support Vector Machine

Search Results: 83

Development of a High-Performance Concrete Compressive-Strength Prediction Model Using an Ensemble Machine-Learning Method Based on Bagging and Stacking (배깅 및 스태킹 기반 앙상블 기계학습법을 이용한 고성능 콘크리트 압축강도 예측모델 개발)

  • Yun-Ji Kwak;Chaeyeon Go;Shinyoung Kwag;Seunghyun Eem
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.36 no.1
    • /
    • pp.9-18
    • /
    • 2023
  • Predicting the compressive strength of high-performance concrete (HPC) is challenging because of the use of additional cementitious materials; thus, the development of improved predictive models is essential. The purpose of this study was to develop an HPC compressive-strength prediction model using an ensemble machine-learning method that combines bagging and stacking techniques. The result is a new ensemble technique that integrates the existing bagging and stacking methods to overcome the limitations of a single machine-learning model and improve prediction performance. Nonlinear regression, support vector machine, artificial neural network, and Gaussian process regression were used as the single machine-learning methods, with bagging and stacking as the ensemble machine-learning methods. The proposed model showed improved accuracy compared with the single machine-learning models, an individual bagging model, and a stacking model, as confirmed through a comparison of four representative performance indicators, verifying the effectiveness of the method.
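
As a rough illustration of how bagging and stacking can be combined, the sketch below (scikit-learn, with synthetic stand-in data in place of the HPC mixture dataset) wraps each single learner named in the abstract in a bagged ensemble and stacks them under a linear meta-learner. It is a minimal sketch of the general technique, not the authors' exact configuration.

```python
# Minimal combined bagging-and-stacking regressor in scikit-learn.
# X and y are synthetic stand-ins for the HPC mix features and compressive strength.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor, StackingRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

X, y = make_regression(n_samples=400, n_features=8, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Each single learner is wrapped in a bagged ensemble; a stacking meta-learner combines them.
base_learners = [
    ("svr", BaggingRegressor(SVR(C=10.0), n_estimators=10, random_state=0)),
    ("ann", BaggingRegressor(MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0),
                             n_estimators=10, random_state=0)),
    ("gpr", BaggingRegressor(GaussianProcessRegressor(), n_estimators=10, random_state=0)),
]
model = StackingRegressor(estimators=base_learners, final_estimator=LinearRegression(), cv=5)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"RMSE: {np.sqrt(mean_squared_error(y_te, pred)):.2f}, R^2: {r2_score(y_te, pred):.3f}")
```

A nonlinear-regression base learner (e.g., a polynomial-feature pipeline) could be added to the estimator list in the same way; it is omitted here only to keep the sketch short.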

A Study on Evaluation of e-learners' Concentration by using Machine Learning (머신러닝을 이용한 이러닝 학습자 집중도 평가 연구)

  • Jeong, Young-Sang;Joo, Min-Sung;Cho, Nam-Wook
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.18 no.4
    • /
    • pp.67-75
    • /
    • 2022
  • Recently, e-learning has attracted significant attention due to COVID-19. While e-learning has many advantages, it also has disadvantages, one of the main ones being that it is difficult for teachers to continuously and systematically monitor learners. Although services such as personalized e-learning are provided to compensate for this shortcoming, systematic monitoring of learners' concentration remains insufficient. This study proposes a method to evaluate learners' concentration by applying machine-learning techniques. Emotion and gaze data were extracted from 184 videos of 92 participants. First, the learners' concentration was labeled by experts; then, statistics-based state indicators were derived from the data. Random Forest (RF), Support Vector Machine (SVM), Multilayer Perceptron (MLP), and an ensemble model were used in the experiment, with Long Short-Term Memory (LSTM) used for comparison. As a result, e-learners' concentration could be predicted with an accuracy of 90.54%. This study is expected to improve learner immersion by enabling a customized educational curriculum tailored to each learner's concentration level.
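
For illustration, here is a minimal scikit-learn sketch of the kind of model comparison described above: RF, SVM, and MLP classifiers combined in a soft-voting ensemble. The synthetic data stand in for the statistics derived from emotion and gaze indicators, and the settings are not the paper's.

```python
# RF, SVM, and MLP combined in a soft-voting ensemble (stand-in for the paper's setup).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic features standing in for emotion/gaze statistics; y = expert concentration label.
X, y = make_classification(n_samples=184, n_features=20, n_informative=10, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))),
        ("mlp", make_pipeline(StandardScaler(),
                              MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0))),
    ],
    voting="soft",  # average predicted probabilities across the three models
)
scores = cross_val_score(ensemble, X, y, cv=5, scoring="accuracy")
print(f"Cross-validated accuracy: {scores.mean():.4f}")
```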

Diabetes prediction mechanism using machine learning model based on patient IQR outlier and correlation coefficient (환자 IQR 이상치와 상관계수 기반의 머신러닝 모델을 이용한 당뇨병 예측 메커니즘)

  • Jung, Juho;Lee, Naeun;Kim, Sumin;Seo, Gaeun;Oh, Hayoung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.10
    • /
    • pp.1296-1301
    • /
    • 2021
  • With the recent worldwide increase in diabetes incidence, research has been conducted to predict diabetes using various machine-learning and deep-learning technologies. In this work, we present a model for predicting diabetes using machine-learning techniques on German Frankfurt Hospital data. We apply outlier handling based on the interquartile range (IQR) and Pearson correlation, and compare diabetes prediction performance across Decision Tree, Random Forest, KNN (k-nearest neighbors), SVM (support vector machine), Bayesian Network, and the ensemble techniques XGBoost, Voting, and Stacking. The XGBoost technique showed the best performance, with 97% accuracy across the various scenarios. This study is therefore meaningful in that the model can be used to accurately predict and help prevent diabetes, which is prevalent in modern society.
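
A minimal sketch of the preprocessing pipeline described above, IQR-based outlier removal followed by Pearson-correlation feature screening, with gradient boosting standing in for XGBoost so the example needs only pandas and scikit-learn. Column names, thresholds, and data are illustrative, not those of the Frankfurt Hospital dataset.

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Stand-in data shaped like a diabetes table (the study itself uses Frankfurt Hospital records).
X_arr, y_arr = make_classification(n_samples=768, n_features=8, n_informative=4, random_state=0)
df = pd.DataFrame(X_arr, columns=[f"feature_{i}" for i in range(8)])
df["Outcome"] = y_arr

# 1) IQR outlier handling: drop rows outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] on any feature.
features = df.drop(columns="Outcome")
q1, q3 = features.quantile(0.25), features.quantile(0.75)
iqr = q3 - q1
mask = ~((features < q1 - 1.5 * iqr) | (features > q3 + 1.5 * iqr)).any(axis=1)
df_clean = df[mask]

# 2) Pearson-correlation screening: keep features whose |r| with the label exceeds
#    an illustrative threshold.
corr = df_clean.corr()["Outcome"].drop("Outcome").abs()
selected = corr[corr > 0.05].index.tolist()

# 3) Compare models on the cleaned data (gradient boosting as a stand-in for XGBoost).
X, y = df_clean[selected], df_clean["Outcome"]
scores = cross_val_score(GradientBoostingClassifier(random_state=0), X, y, cv=5, scoring="accuracy")
print(f"Accuracy: {scores.mean():.3f}")
```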

Neural Networks-Based Method for Electrocardiogram Classification

  • Maksym Kovalchuk;Viktoriia Kharchenko;Andrii Yavorskyi;Igor Bieda;Taras Panchenko
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.9
    • /
    • pp.186-191
    • /
    • 2023
  • Neural networks are widely used to solve a huge variety of tasks, and machine-learning methods are also applied to signal and time-series analysis, including electrocardiograms. Contemporary wearable devices, both medical and non-medical (such as smart watches), can gather data continuously in real time. These data can be transferred for analysis or analyzed on the device itself, providing a preliminary diagnosis or at least flagging serious deviations. Different methods are used for this kind of analysis, ranging from medically oriented approaches based on distinctive features of the signal to machine-learning and deep-learning approaches. Here we demonstrate a neural-network-based approach to this task by building an ensemble of 1D CNN classifiers with a final selection classifier using logistic regression, random forest, or support vector machine, and we draw conclusions from a comparison with other approaches.
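
A minimal sketch (assuming TensorFlow/Keras and scikit-learn are available) of the architecture described above: several small 1D-CNN classifiers are trained on ECG segments and a logistic-regression meta-classifier is stacked on their predicted class probabilities. The data, shapes, and hyperparameters are placeholders, not those of the paper.

```python
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

n_samples, seg_len, n_classes = 1000, 187, 5          # stand-in ECG segments and labels
X = np.random.randn(n_samples, seg_len, 1).astype("float32")
y = np.random.randint(0, n_classes, size=n_samples)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

def make_cnn(seed):
    """Build one small 1D-CNN classifier; different seeds give diverse ensemble members."""
    tf.keras.utils.set_random_seed(seed)
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(seg_len, 1)),
        tf.keras.layers.Conv1D(32, 7, activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(64, 5, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

# Train the ensemble, then stack the members' class probabilities for the final classifier.
cnns = [make_cnn(seed) for seed in range(3)]
for cnn in cnns:
    cnn.fit(X_tr, y_tr, epochs=5, batch_size=64, verbose=0)

stacked_val = np.hstack([cnn.predict(X_val, verbose=0) for cnn in cnns])
meta = LogisticRegression(max_iter=1000).fit(stacked_val, y_val)
# In practice, out-of-fold predictions would be used so the meta-classifier
# is not evaluated on its own training data; this is only a structural sketch.
print("Meta-classifier accuracy:", meta.score(stacked_val, y_val))
```

A random forest or SVM meta-classifier, as mentioned in the abstract, would simply replace the LogisticRegression in the last step.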

Identifying the Optimal Machine Learning Algorithm for Breast Cancer Prediction

  • ByungJoo Kim
    • International journal of advanced smart convergence
    • /
    • v.13 no.3
    • /
    • pp.80-88
    • /
    • 2024
  • Breast cancer remains a significant global health burden, necessitating accurate and timely detection for improved patient outcomes. Machine learning techniques have demonstrated remarkable potential in assisting breast cancer diagnosis by learning complex patterns from multi-modal patient data. This study comprehensively evaluates several popular machine learning models, including logistic regression, decision trees, random forests, support vector machines (SVMs), naive Bayes, k-nearest neighbors (KNN), XGBoost, and ensemble methods for breast cancer prediction using the Wisconsin Breast Cancer Dataset (WBCD). Through rigorous benchmarking across metrics like accuracy, precision, recall, F1-score, and area under the ROC curve (AUC), we identify the naive Bayes classifier as the top-performing model, achieving an accuracy of 0.974, F1-score of 0.979, and highest AUC of 0.988. Other strong performers include logistic regression, random forests, and XGBoost, with AUC values exceeding 0.95. Our findings showcase the significant potential of machine learning, particularly the robust naive Bayes algorithm, to provide highly accurate and reliable breast cancer screening from fine needle aspirate (FNA) samples, ultimately enabling earlier intervention and optimized treatment strategies.
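
A minimal sketch of such a benchmark using scikit-learn's bundled Wisconsin Breast Cancer data and a subset of the listed models; the study's exact preprocessing, splits, and hyperparameters may differ.

```python
# Cross-validated comparison of several classifiers on the Wisconsin Breast Cancer data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
models = {
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000)),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "svm": make_pipeline(StandardScaler(), SVC(probability=True, random_state=0)),
    "naive_bayes": GaussianNB(),
    "knn": make_pipeline(StandardScaler(), KNeighborsClassifier()),
}
for name, model in models.items():
    scores = cross_validate(model, X, y, cv=5,
                            scoring=["accuracy", "precision", "recall", "f1", "roc_auc"])
    print(name, {k: round(v.mean(), 3) for k, v in scores.items() if k.startswith("test_")})
```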

Developing an Ensemble Classifier for Bankruptcy Prediction (부도 예측을 위한 앙상블 분류기 개발)

  • Min, Sung-Hwan
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.17 no.7
    • /
    • pp.139-148
    • /
    • 2012
  • An ensemble of classifiers employs a set of individually trained classifiers and combines their predictions; in most cases, the ensemble produces more accurate predictions than the base classifiers. Combining outputs from multiple classifiers, known as ensemble learning, is one of the standard and most important techniques for improving classification accuracy in machine learning. An ensemble of classifiers is effective only if the individual classifiers make decisions that are as diverse as possible. Bagging is the most popular ensemble-learning method for generating a diverse set of classifiers; diversity in bagging is obtained by using different training sets, with training data subsets drawn randomly with replacement from the entire training dataset. The random subspace method is an ensemble construction technique that uses different attribute subsets; as in bagging, the training dataset is modified, but the modification is performed in the feature space. Bagging and random subspace are well-known and popular ensemble algorithms, yet few studies have dealt with their integration using SVM classifiers, even though this area has great potential for useful applications. The focus of this paper is to propose methods for improving SVM performance using a hybrid ensemble strategy for bankruptcy prediction, and the proposed ensemble model is applied to the bankruptcy prediction problem using a real dataset from Korean companies.
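
The bagging/random-subspace hybrid around SVM base classifiers can be sketched directly with scikit-learn's BaggingClassifier, which supports both bootstrap sampling of the rows and random sampling of the features. The synthetic, imbalanced data below merely stand in for the Korean bankruptcy dataset.

```python
# Hybrid of bagging (bootstrap rows) and random subspace (random feature subsets) over SVMs.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=30, n_informative=12,
                           weights=[0.8, 0.2], random_state=0)  # stand-in, imbalanced like bankruptcy data

hybrid = BaggingClassifier(
    make_pipeline(StandardScaler(), SVC(C=1.0, kernel="rbf")),  # SVM base classifier
    n_estimators=30,
    bootstrap=True,      # bagging: bootstrap samples of the training rows
    max_features=0.5,    # random subspace: each SVM sees only half of the attributes
    random_state=0,
)
print("AUC:", cross_val_score(hybrid, X, y, cv=5, scoring="roc_auc").mean())
```

The key design point is that diversity comes from two sources at once: each base SVM is trained on a different bootstrap sample and on a different random attribute subset.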

Missing Value Imputation Technique for Water Quality Dataset

  • Jin-Young Jun;Youn-A Min
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.4
    • /
    • pp.39-46
    • /
    • 2024
  • Many researchers strive to evaluate water quality using various models. Such models require a dataset without missing values, but in the real world most datasets include missing values for various reasons. Simply deleting samples with missing values can distort the distribution of the underlying data and poses a significant risk of biasing the model's inference when the missing-data mechanism is not MCAR. In this study, to explore the most appropriate technique for handling missing values in water-quality data, several imputation techniques were tested based on existing KNN and MICE imputation, with and without the generative neural network models Autoencoder (AE) and Denoising Autoencoder (DAE). The results show that combined KNN and MICE imputation without generative networks provides estimates closest to the true values. When binary classification models based on support vector machines and ensemble algorithms were evaluated after applying the combined imputation technique to an observed water-quality dataset with missing values, they showed better performance in terms of accuracy, F1 score, ROC-AUC, and MCC than models evaluated after deleting samples with missing values.
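
A minimal sketch of combined KNN and MICE-style imputation followed by SVM classification, assuming the two imputations are simply averaged (the paper's exact combination rule is not reproduced here); the data are synthetic stand-ins for the water-quality measurements.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.experimental import enable_iterative_imputer  # noqa: F401  (enables IterativeImputer)
from sklearn.impute import IterativeImputer, KNNImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)  # stand-in water-quality data
rng = np.random.default_rng(0)
X_missing = X.copy()
X_missing[rng.random(X.shape) < 0.1] = np.nan                              # inject 10% missingness

# Fill the gaps with both imputers and combine their estimates (here: a simple average).
knn_filled = KNNImputer(n_neighbors=5).fit_transform(X_missing)
mice_filled = IterativeImputer(max_iter=10, random_state=0).fit_transform(X_missing)  # MICE-style
X_imputed = (knn_filled + mice_filled) / 2.0

clf = make_pipeline(StandardScaler(), SVC())
print("Accuracy after combined imputation:",
      cross_val_score(clf, X_imputed, y, cv=5, scoring="accuracy").mean())
```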

Classification of Gene Expression Data by Ensemble of Bayesian Networks (앙상블 베이지안망에 의한 유전자발현데이터 분류)

  • 황규백;장정호;장병탁
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2003.04c
    • /
    • pp.434-436
    • /
    • 2003
  • Gene expression data obtained with DNA chip technology measure the expression levels of thousands of genes in biological tissues or cells and are useful for tasks such as classifying cancer types based on gene expression patterns. In this paper, the Bayesian network, a kind of probabilistic graphical model, is applied to the classification of expression data, and an ensemble of Bayesian networks is constructed to improve classification performance. Experiments were conducted on gene expression data extracted from real cancer tissues. The results show that the classification accuracy of the ensemble of Bayesian networks was higher than that of a single Bayesian network and comparable to that of the naive Bayes classifier, neural networks, and support vector machines (SVMs).

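There is no standard scikit-learn estimator for Bayesian network classifiers, so the sketch below uses Gaussian naive Bayes purely as a stand-in probabilistic classifier to illustrate the ensemble-versus-single comparison on high-dimensional data; reproducing the paper would require substituting learned Bayesian networks (e.g., from a dedicated library).

```python
# Ensemble-versus-single comparison; GaussianNB is a stand-in for the paper's Bayesian networks.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Stand-in for high-dimensional gene expression data with few samples.
X, y = make_classification(n_samples=100, n_features=50, n_informative=10, random_state=0)

single = GaussianNB()
ensemble = BaggingClassifier(GaussianNB(), n_estimators=25, max_features=0.5, random_state=0)

print("single:  ", cross_val_score(single, X, y, cv=5).mean())
print("ensemble:", cross_val_score(ensemble, X, y, cv=5).mean())
```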

Identification of Individuals using Single-Lead Electrocardiogram Signal (단일 리드 심전도를 이용한 개인 식별)

  • Lim, Seohyun;Min, Kyeongran;Lee, Jongshill;Jang, Dongpyo;Kim, Inyoung
    • Journal of Biomedical Engineering Research
    • /
    • v.35 no.3
    • /
    • pp.42-49
    • /
    • 2014
  • We propose an individual identification method using a single-lead electrocardiogram signal. In this paper, lead I ECG is measured from subjects in various physical and psychological states. As a preprocessing stage, noise reduction is performed on the lead I signal, which is then used to obtain a representative beat waveform for each individual by ensemble averaging. From the P-QRS-T waves, 35 features are extracted to identify individuals: 19 based on duration and amplitude information and 16 from the QRS complex, obtained by applying the Pan-Tompkins algorithm to the ensemble-averaged waveform. To analyze the effect of each feature and to improve efficiency while maintaining performance, the Relief-F algorithm is used to select among the 35 extracted features, and some or all of these features are used for support vector machine (SVM) training and testing. The classification accuracy using the entire feature set was 98.34%. Experimental results show that it is possible to identify a person using features extracted from the limb lead I signal alone.
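
Two of the steps above, forming a representative beat by ensemble-averaging R-peak-aligned segments and classifying subjects with an SVM on simple amplitude/duration-style features, are sketched below on synthetic signals. Peak detection, the Pan-Tompkins stage, Relief-F selection, and the paper's full 35-feature set are omitted; every name and value here is illustrative.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def representative_beat(ecg, r_peaks, half_window=90):
    """Average fixed-length windows centred on the R peaks (the ensemble-averaged beat)."""
    beats = [ecg[r - half_window:r + half_window]
             for r in r_peaks if half_window <= r < len(ecg) - half_window]
    return np.mean(beats, axis=0)

def simple_features(beat):
    """Toy stand-ins for duration/amplitude features: peak value, trough value, peak-trough span."""
    return np.array([beat.max(), beat.min(), beat.argmax() - beat.argmin()])

# Stand-in data: 5 "subjects", each with noisy synthetic lead-I recordings and known beat centres.
rng = np.random.default_rng(0)
X_feat, y = [], []
for subject in range(5):
    template = np.sin(np.linspace(0, np.pi, 180)) * (1.0 + 0.2 * subject)
    for _ in range(20):                               # 20 recordings per subject
        ecg = np.tile(template, 10) + rng.normal(0, 0.05, 1800)
        r_peaks = np.arange(90, 1800, 180)            # known beat centres in this synthetic signal
        X_feat.append(simple_features(representative_beat(ecg, r_peaks)))
        y.append(subject)

clf = make_pipeline(StandardScaler(), SVC())
print("CV accuracy:", cross_val_score(clf, np.array(X_feat), np.array(y), cv=5).mean())
```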

Shield TBM disc cutter replacement and wear rate prediction using machine learning techniques

  • Kim, Yunhee;Hong, Jiyeon;Shin, Jaewoo;Kim, Bumjoo
    • Geomechanics and Engineering
    • /
    • v.29 no.3
    • /
    • pp.249-258
    • /
    • 2022
  • A disc cutter is an excavation tool mounted on the cutterhead of a tunnel boring machine (TBM); it crushes and cuts the rock mass while the machine excavates using the cutterhead's rotational movement. Disc cutter wear occurs naturally, so, along with the management of downtime and excavation efficiency, worn disc cutters need to be replaced at the proper time; otherwise, the construction period could be delayed and costs could increase. The most common prediction models for TBM performance and disc cutter lifetime have been proposed by the Colorado School of Mines and the Norwegian University of Science and Technology. However, the design parameters of these existing models do not correspond well to field values when a TBM encounters complex and difficult ground conditions. This study therefore proposes a series of machine-learning models that predict the disc cutter lifetime of a shield TBM from excavation (machine) data collected during operation, which reflect the machine's response to the rock mass. Five machine-learning techniques are used: four classification models (K-Nearest Neighbors (KNN), Support Vector Machine, Decision Tree, and a Stacking Ensemble Model) and one artificial neural network (ANN) model. The KNN model was the best of the four classification models, achieving the highest recall of 81%. The ANN model also predicted the wear rate of disc cutters reasonably well.
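
As a minimal illustration of the classification setup, the sketch below trains a KNN model on stand-in operating data and scores it by recall, the indicator highlighted in the abstract; real shield-TBM field data are not reproduced here.

```python
# KNN classifier predicting a disc-cutter replacement class from stand-in TBM operating data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=10, n_informative=6,
                           weights=[0.7, 0.3], random_state=0)  # stand-in machine data / replacement label

knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
recall = cross_val_score(knn, X, y, cv=5, scoring="recall")
print(f"Mean recall: {recall.mean():.2f}")
```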