• Title/Summary/Keyword: Learning Performance Prediction


Development of Machine Learning Based Seismic Response Prediction Model for Shear Wall Structure considering Aging Deteriorations (경년열화를 고려한 전단벽 구조물의 기계학습 기반 지진응답 예측모델 개발)

  • Kim, Hyun-Su; Kim, Yukyung; Lee, So Yeon; Jang, Jun Su
    • Journal of Korean Association for Spatial Structures, v.24 no.2, pp.83-90, 2024
  • Machine learning is widely applied across engineering fields. In structural engineering, it is commonly used to predict the structural responses of building structures. Because aging deterioration of reinforced concrete affects structural behavior, it must be taken into account to accurately predict the seismic responses of an R.C. structure. In this study, a machine learning based seismic response prediction model was developed. To this end, four machine learning algorithms were employed and the prediction performance of each algorithm was compared. A 3-story coupled shear wall structure was selected as the example structure for numerical simulation, and artificial ground motions were generated based on domestic site characteristics. The elastic modulus, damping ratio, and density were varied to account for concrete degradation caused by chloride penetration, carbonation, and similar mechanisms. Various intensity measures were used as input parameters of the training database. Performance was evaluated using metrics such as root mean square error, mean square error, mean absolute error, and the coefficient of determination. Hyperparameters were optimized through k-fold cross-validation and grid search. The analysis results show that the neural network and extreme gradient boosting algorithms provide good prediction performance.
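
The paper's code is not included in the abstract; as a rough sketch of the tuning workflow it describes (grid search with k-fold cross-validation for an extreme gradient boosting regressor, scored with RMSE, MSE, MAE, and R2), something like the following could be used. The feature and response arrays here are hypothetical placeholders, not the paper's ground-motion data.

```python
# Sketch only: grid search with k-fold cross-validation for an XGBoost
# regressor, scored with the metrics named in the abstract.
# X (intensity measures) and y (seismic responses) are hypothetical arrays.
import numpy as np
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                                  # placeholder intensity-measure inputs
y = X @ rng.normal(size=8) + rng.normal(scale=0.1, size=500)   # placeholder structural responses

param_grid = {"n_estimators": [200, 400], "max_depth": [3, 5], "learning_rate": [0.05, 0.1]}
cv = KFold(n_splits=5, shuffle=True, random_state=0)
search = GridSearchCV(XGBRegressor(objective="reg:squarederror"), param_grid,
                      cv=cv, scoring="neg_root_mean_squared_error")
search.fit(X, y)

pred = search.best_estimator_.predict(X)
print("RMSE:", np.sqrt(mean_squared_error(y, pred)))
print("MSE :", mean_squared_error(y, pred))
print("MAE :", mean_absolute_error(y, pred))
print("R2  :", r2_score(y, pred))
```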

Development of Flash Boiling Spray Prediction Model of Multi-hole GDI Injector Using Machine Learning (머신러닝을 이용한 다공형 GDI 인젝터의 플래시 보일링 분무 예측 모델 개발)

  • Chang, Mengzhao; Shin, Dalho; Pham, Quangkhai; Park, Suhan
    • Journal of ILASS-Korea, v.27 no.2, pp.57-65, 2022
  • The purpose of this study is to use machine learning to build a model capable of predicting flash boiling spray characteristics. The flash boiling spray was visualized using shadowgraph visualization, and the spray images were then processed with MATLAB to obtain quantitative data on the spray characteristics. The experimental conditions were used as inputs and the spray characteristics as outputs to train the machine learning model, for which the XGB (extreme gradient boosting) algorithm was used. The performance of the model was evaluated using R2 and RMSE (root mean square error). To obtain enough training data, this study used 12 injectors with different design parameters under various fuel temperatures and ambient pressures, yielding about 12,000 data points. By comparing the performance of models trained with different amounts of data, it was found that at least 7,000 training samples are needed before the model shows optimal performance. The model showed different prediction performance for different spray characteristics: compared with the upstream and downstream spray angles, it predicted the spray tip penetration best. In addition, prediction performance was relatively poor in the initial and final stages of injection. The model performance is expected to be further improved by optimizing the hyperparameters of the model.
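
As an illustration of the training-size comparison described above, the following sketch trains an XGBoost regressor on progressively larger subsets and reports R2 and RMSE. The data here are synthetic placeholders; the injector dataset and the paper's hyperparameters are not reproduced.

```python
# Sketch: tracking how an XGBoost regressor's R2/RMSE change with the amount
# of training data, as in the abstract's training-size comparison.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error
from xgboost import XGBRegressor

rng = np.random.default_rng(1)
X = rng.uniform(size=(12000, 6))   # placeholders: e.g. fuel temperature, ambient pressure, injector geometry
y = X[:, 0] * 10 - X[:, 1] * 5 + rng.normal(scale=0.2, size=12000)  # placeholder spray metric

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
for n in [1000, 3000, 5000, 7000, len(X_train)]:
    model = XGBRegressor(n_estimators=300, learning_rate=0.1)
    model.fit(X_train[:n], y_train[:n])
    pred = model.predict(X_test)
    rmse = np.sqrt(mean_squared_error(y_test, pred))
    print(f"n={n:5d}  R2={r2_score(y_test, pred):.3f}  RMSE={rmse:.3f}")
```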

Compositional Feature Selection and Its Effects on Bandgap Prediction by Machine Learning (기계학습을 이용한 밴드갭 예측과 소재의 조성기반 특성인자의 효과)

  • Chunghee Nam
    • Korean Journal of Materials Research, v.33 no.4, pp.164-174, 2023
  • The bandgap of a semiconductor material is an important factor when utilizing the material for various applications. In this study, based on data provided by AFLOW (Automatic-FLOW for Materials Discovery), the bandgap of a semiconductor material was predicted using only the material's compositional features. The compositional features were generated using the Python modules 'Pymatgen' and 'Matminer'. Pearson's correlation coefficients (PCC) between the compositional features were calculated, and features with a correlation coefficient larger than 0.95 were removed to avoid overfitting. Bandgap prediction performance was compared using the R2 score and root mean squared error. Predicting the bandgap with random forest and XGBoost as representative ensemble algorithms, XGBoost gave better results after cross-validation and hyperparameter tuning. To investigate the effect of compositional feature selection on the bandgap prediction of the machine learning model, prediction performance was studied as a function of the number of features ranked by feature importance methods; no significant change in prediction performance was found beyond an appropriate number of features. Furthermore, artificial neural networks were employed to compare prediction performance while adjusting the number of features guided by the PCC values, yielding a best R2 score of 0.811. By comparing and analyzing the bandgap distribution and prediction performance for material groups containing specific elements (F, N, Yb, Eu, Zn, B, Si, Ge, Fe, Al), various information for material design was obtained.
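
A minimal sketch of the correlation-based feature pruning step described above, assuming a pandas DataFrame of compositional features: drop one feature from every pair whose Pearson correlation exceeds 0.95, then fit an XGBoost regressor. The feature columns and bandgap values are placeholders, not the AFLOW data.

```python
# Sketch of PCC-based feature pruning: remove one of every feature pair whose
# absolute Pearson correlation exceeds 0.95, then train an XGBoost regressor.
import numpy as np
import pandas as pd
from xgboost import XGBRegressor

rng = np.random.default_rng(2)
features = pd.DataFrame(rng.normal(size=(1000, 40)),
                        columns=[f"comp_feat_{i}" for i in range(40)])  # hypothetical Matminer features
bandgap = rng.uniform(0, 6, size=1000)                                  # placeholder target (eV)

corr = features.corr(method="pearson").abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))       # upper triangle only
to_drop = [col for col in upper.columns if (upper[col] > 0.95).any()]
pruned = features.drop(columns=to_drop)

model = XGBRegressor(n_estimators=400, learning_rate=0.05).fit(pruned, bandgap)
print(f"dropped {len(to_drop)} highly correlated features; kept {pruned.shape[1]}")
```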

Prediction of Transition Temperature and Magnetocaloric Effects in Bulk Metallic Glasses with Ensemble Models (앙상블 기계학습 모델을 이용한 비정질 소재의 자기냉각 효과 및 전이온도 예측)

  • Chunghee Nam
    • Korean Journal of Materials Research, v.34 no.7, pp.363-369, 2024
  • In this study, the magnetocaloric effect and transition temperature of bulk metallic glass, an amorphous material, were predicted through machine learning based on compositional features. From the Python module 'Matminer', 174 compositional features were obtained, and prediction performance was compared while reducing the number of compositional features to prevent overfitting. After optimization using RandomForest, an ensemble model, changes in prediction performance were analyzed according to the number of compositional features. The R2 score was used as the performance metric for the regression, and the best performance was obtained using only 90 features for the transition temperature and 20 features for the magnetocaloric effect. Compositional features were ranked with the feature importance method provided by 'scikit-learn'; the most important feature for predicting the magnetocaloric effect was the 'Fe' compositional ratio. The feature importance method was found to be appropriate by comparing the prediction performance of the Fe-containing dataset with that of the full dataset.
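
The feature-importance ranking described above could be sketched with scikit-learn roughly as follows; the 174 features and the targets here are random placeholders, and the top-k values simply mirror the counts mentioned in the abstract.

```python
# Sketch: rank features with RandomForest's impurity-based importances and
# compare prediction performance when keeping only the top-k features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(800, 174))                      # 174 hypothetical Matminer features
y = X[:, 0] * 2 + X[:, 5] - X[:, 10] + rng.normal(scale=0.3, size=800)  # placeholder target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)
rf = RandomForestRegressor(n_estimators=300, random_state=3).fit(X_tr, y_tr)

order = np.argsort(rf.feature_importances_)[::-1]    # most important first
for k in (20, 90, 174):
    top = order[:k]
    rf_k = RandomForestRegressor(n_estimators=300, random_state=3).fit(X_tr[:, top], y_tr)
    print(f"top {k:3d} features  R2 = {r2_score(y_te, rf_k.predict(X_te[:, top])):.3f}")
```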

A Hybrid Mod K-Means Clustering with Mod SVM Algorithm to Enhance the Cancer Prediction

  • Kumar, Rethina; Ganapathy, Gopinath; Kang, Jeong-Jin
    • International Journal of Internet, Broadcasting and Communication, v.13 no.2, pp.231-243, 2021
  • In recent years, the way breast cancer is analyzed has changed dramatically. Breast cancer is the most common and complex disease diagnosed among women; there are several subtypes and many treatment options, and educating patients is most important. As research continues to expand the understanding of the disease and its current treatment types, researchers are constantly updated with new techniques. Breast cancer survival rates have increased with the use of new advanced treatments, largely owing to factors such as earlier detection, a new personalized approach to treatment, and a better understanding of the disease. Many machine learning classification models have been adopted and modified to diagnose breast cancer. To enhance classification performance, this research proposes a hybrid model that combines a modified K-Means clustering algorithm with a modified SVM (Support Vector Machine), creating a new method that can greatly improve performance and prediction. The proposed model rectifies irregularities in the dataset and produces a new, high-quality dataset that supports accurate prediction. The well-known Wisconsin Diagnostic Breast Cancer (WDBC) dataset was used in this research to build a model that can help diagnose patients and predict the probability of breast cancer. Several machine learning classifiers are explored, implemented, and evaluated, and compared with the proposed model, "A Hybrid Modified K-Means with Modified SVM Machine Learning Algorithm to Enhance the Cancer Prediction." The results show that the proposed model performs significantly better than previous research, with an accuracy of 99%, which enhances cancer prediction.
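
The paper's specific modifications to K-Means and SVM are not given in the abstract, so the sketch below only illustrates the general hybrid idea on the WDBC data shipped with scikit-learn: cluster the training samples, drop those that disagree with their cluster's majority class, and train an SVM on the cleaned set. The kernel, C value, and cleaning rule are assumptions.

```python
# Sketch only: a plain K-Means + SVM pipeline on the WDBC data from
# scikit-learn. Clustering is used to flag samples that disagree with their
# cluster's majority class; an SVM is then trained on the cleaned data.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Cluster the training data and find each cluster's majority class.
labels = KMeans(n_clusters=2, n_init=10, random_state=42).fit_predict(
    StandardScaler().fit_transform(X_tr))
majority = {c: np.bincount(y_tr[labels == c]).argmax() for c in np.unique(labels)}
keep = np.array([majority[c] == t for c, t in zip(labels, y_tr)])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10))
clf.fit(X_tr[keep], y_tr[keep])
print("test accuracy:", clf.score(X_te, y_te))
```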

Joint streaming model for backchannel prediction and automatic speech recognition

  • Yong-Seok Choi; Jeong-Uk Bang; Seung Hi Kim
    • ETRI Journal, v.46 no.1, pp.118-126, 2024
  • In human conversations, listeners often utilize brief backchannels such as "uh-huh" or "yeah." Timely backchannels are crucial to understanding and increasing trust among conversational partners. In human-machine conversation systems, users can engage in natural conversations when a conversational agent generates backchannels like a human listener. We propose a method that simultaneously predicts backchannels and recognizes speech in real time. We use a streaming transformer and adopt multitask learning for concurrent backchannel prediction and speech recognition. The experimental results demonstrate the superior performance of our method compared with previous works while maintaining a similar single-task speech recognition performance. Owing to the extremely imbalanced training data distribution, the single-task backchannel prediction model fails to predict any of the backchannel categories, and the proposed multitask approach substantially enhances the backchannel prediction performance. Notably, in the streaming prediction scenario, the performance of backchannel prediction improves by up to 18.7% compared with existing methods.
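
A minimal multitask-loss sketch (PyTorch, not the paper's streaming transformer) of the idea of sharing an encoder between speech recognition and backchannel prediction: two heads on one encoder, with the two losses combined by a weighting factor. The dimensions, the GRU encoder, the frame-level targets, and the 0.3 weight are all illustrative assumptions.

```python
# Sketch of multitask learning with a shared encoder and two task heads.
import torch
import torch.nn as nn

class JointModel(nn.Module):
    def __init__(self, feat_dim=80, hidden=256, vocab=500, n_backchannel=3):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)  # stand-in for a streaming transformer
        self.asr_head = nn.Linear(hidden, vocab)          # per-frame token logits
        self.bc_head = nn.Linear(hidden, n_backchannel)   # per-frame backchannel logits

    def forward(self, feats):
        enc, _ = self.encoder(feats)
        return self.asr_head(enc), self.bc_head(enc)

model = JointModel()
feats = torch.randn(4, 120, 80)                    # batch of 4 utterances, 120 frames
asr_tgt = torch.randint(0, 500, (4, 120))          # placeholder frame-level token labels
bc_tgt = torch.randint(0, 3, (4, 120))             # placeholder backchannel labels

asr_logits, bc_logits = model(feats)
ce = nn.CrossEntropyLoss()
loss = ce(asr_logits.transpose(1, 2), asr_tgt) + 0.3 * ce(bc_logits.transpose(1, 2), bc_tgt)
loss.backward()
print("joint loss:", loss.item())
```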

Comparison of Different Deep Learning Optimizers for Modeling Photovoltaic Power

  • Poudel, Prasis; Bae, Sang Hyun; Jang, Bongseog
    • Journal of Integrative Natural Science, v.11 no.4, pp.204-208, 2018
  • This paper describes a comparison of the performance of different optimizers for photovoltaic power modeling with artificial neural deep learning techniques. Six deep learning optimizers are tested with Long Short-Term Memory networks: Adam, Stochastic Gradient Descent, Root Mean Square Propagation, Adaptive Gradient, and the variants Adamax and Nadam. To compare the optimization techniques, both highly and weakly fluctuating photovoltaic power outputs are examined; the power output is real data obtained from a site at Mokpo University. The prediction program for the performance evaluation was developed using Keras in Python. The prediction errors of each optimizer in both the high and low power cases show that Adam performs better than the other optimizers.
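
In the spirit of the comparison above, a small Keras sketch that trains the same LSTM with each optimizer and records the final training loss might look as follows; the photovoltaic dataset is not public, so a synthetic signal stands in, and the network size and epoch count are arbitrary.

```python
# Sketch: train one small LSTM per Keras optimizer and compare final MSE.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(4)
series = np.sin(np.linspace(0, 60, 2000)) + rng.normal(scale=0.1, size=2000)  # placeholder PV-like signal
X = np.stack([series[i:i + 24] for i in range(len(series) - 25)])[..., None]  # 24-step input windows
y = series[24:-1]                                                             # next-step targets

results = {}
for name in ["adam", "sgd", "rmsprop", "adagrad", "adamax", "nadam"]:
    model = keras.Sequential([keras.Input(shape=(24, 1)),
                              keras.layers.LSTM(32),
                              keras.layers.Dense(1)])
    model.compile(optimizer=name, loss="mse")
    hist = model.fit(X, y, epochs=5, batch_size=64, verbose=0)
    results[name] = hist.history["loss"][-1]

print(sorted(results.items(), key=lambda kv: kv[1]))  # lowest final MSE first
```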

Bi-LSTM model with time distribution for bandwidth prediction in mobile networks

  • Hyeonji Lee; Yoohwa Kang; Minju Gwak; Donghyeok An
    • ETRI Journal, v.46 no.2, pp.205-217, 2024
  • We propose a bandwidth prediction approach based on deep learning. The approach is intended to accurately predict the bandwidth of various types of mobile networks. We first use a machine learning technique, namely, the gradient boosting algorithm, to recognize the connected mobile network. Second, we apply a handover detection algorithm based on network recognition to account for vertical handover that causes the bandwidth variance. Third, as the communication performance offered by 3G, 4G, and 5G networks varies, we suggest a bidirectional long short-term memory model with time distribution for bandwidth prediction per network. To increase the prediction accuracy, pretraining and fine-tuning are applied for each type of network. We use a dataset collected at University College Cork for network recognition, handover detection, and bandwidth prediction. The performance evaluation indicates that the handover detection algorithm achieves 88.5% accuracy, and the bandwidth prediction model achieves a high accuracy, with a root-mean-square error of only 2.12%.
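
A minimal Keras sketch of a bidirectional LSTM with a time-distributed output layer, as named in the abstract, is shown below. The window length, feature count, layer sizes, and synthetic data are assumptions, not the paper's configuration.

```python
# Sketch: Bi-LSTM with a TimeDistributed dense head for per-step bandwidth
# prediction, evaluated with MSE and RMSE.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(5)
X = rng.uniform(size=(1024, 20, 8))   # 20 time steps of 8 hypothetical network features
y = rng.uniform(size=(1024, 20, 1))   # per-step bandwidth target

model = keras.Sequential([
    keras.Input(shape=(20, 8)),
    keras.layers.Bidirectional(keras.layers.LSTM(64, return_sequences=True)),
    keras.layers.TimeDistributed(keras.layers.Dense(1)),
])
model.compile(optimizer="adam", loss=keras.losses.MeanSquaredError(),
              metrics=[keras.metrics.RootMeanSquaredError()])
model.fit(X, y, epochs=3, batch_size=64, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [mse, rmse]
```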

Software Fault Prediction using Semi-supervised Learning Methods (세미감독형 학습 기법을 사용한 소프트웨어 결함 예측)

  • Hong, Euyseok
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.19 no.3, pp.127-133, 2019
  • Most studies of software fault prediction have been about supervised learning models that use only labeled training data. Although supervised learning usually shows high prediction performance, most development groups do not have sufficient labeled data. Unsupervised learning models that use only unlabeled data for training are difficult to build and show poor performance. Semi-supervised learning models that use both labeled and unlabeled data can solve these problems. Among semi-supervised techniques, the self-training technique requires the fewest assumptions and constraints. In this paper, we implemented several models using self-training algorithms and evaluated them using accuracy and AUC. As a result, YATSI showed the best performance.
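
The self-training idea can be sketched with scikit-learn's SelfTrainingClassifier (the paper itself evaluates other implementations, including YATSI): unlabeled samples are marked with -1 and the base classifier iteratively labels them from its own confident predictions. The data, base classifier, and threshold below are placeholders.

```python
# Sketch of self-training for fault prediction, evaluated with accuracy and AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=1500, n_features=20, random_state=7)  # placeholder fault data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=7)

y_semi = y_tr.copy()
mask = np.random.default_rng(7).random(len(y_semi)) < 0.8   # pretend 80% of modules are unlabeled
y_semi[mask] = -1

clf = SelfTrainingClassifier(SVC(probability=True), threshold=0.9).fit(X_tr, y_semi)
proba = clf.predict_proba(X_te)[:, 1]
print("Accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("AUC     :", roc_auc_score(y_te, proba))
```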

Prediction of English Premier League Game Using an Ensemble Technique (앙상블 기법을 통한 잉글리시 프리미어리그 경기결과 예측)

  • Yi, Jae Hyun; Lee, Soo Won
    • KIPS Transactions on Software and Data Engineering, v.9 no.5, pp.161-168, 2020
  • Predicting the outcomes of sports events enables teams to establish strategies by analyzing the variables that affect overall game flow and wins and losses. Many studies have predicted the outcomes of sports events using statistical and machine learning techniques. Predictive performance is the most important property of a game prediction model; however, statistical and machine learning models reach their best performance on different kinds of training data. In this paper, we propose a new ensemble model that predicts English Premier League soccer matches by combining statistical models with machine learning models that have performed well in predicting soccer results, so that the model best suited to the data at hand can be selected even when the data differ. The proposed ensemble predicts match results by training a final prediction model on the predictions of each single model together with the actual match results. Experimental results show that the proposed model outperforms the single models.
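
The ensemble described above resembles stacking: a final model is trained on the predictions of the single models together with the actual results. A rough scikit-learn sketch under that reading, with placeholder match features and an arbitrary set of base models, might look like this:

```python
# Sketch of a stacking-style ensemble: base-model predictions feed a final
# meta-model; classes 0/1/2 stand in for win/draw/loss.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=15, n_classes=3,
                           n_informative=8, random_state=11)  # placeholder match features

base_models = [("rf", RandomForestClassifier(random_state=11)),
               ("svm", SVC(probability=True, random_state=11)),
               ("lr", LogisticRegression(max_iter=1000))]
ensemble = StackingClassifier(estimators=base_models,
                              final_estimator=LogisticRegression(max_iter=1000))

for name, model in base_models + [("stacking", ensemble)]:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:9s} accuracy = {score:.3f}")
```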