Title/Summary/Keyword: Machine Learning Models

Prediction of English Premier League Game Using an Ensemble Technique (앙상블 기법을 통한 잉글리시 프리미어리그 경기결과 예측)

  • Yi, Jae Hyun;Lee, Soo Won
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.5
    • /
    • pp.161-168
    • /
    • 2020
  • Predicting the outcome of sports matches enables teams to establish their strategy by analyzing the variables that affect game flow and wins and losses. Many studies have predicted the outcome of sports events using statistical and machine learning techniques. Predictive performance is the most important property of a game prediction model, yet statistical and machine learning models reach their best performance on different kinds of data. In this paper, we propose a new ensemble model for predicting English Premier League soccer matches that combines statistical models and machine learning models which have performed well on soccer prediction, so that the best-performing model can in effect be selected even when the characteristics of the data change. The proposed ensemble learns a final prediction model from the game predictions of each single model together with the actual game results. Experimental results show that the proposed model outperforms the single models.
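Learning a final model from the base models' predictions and the actual results, as described above, is essentially a stacking ensemble. Below is a minimal sketch with scikit-learn, using hypothetical match features, labels, and base models (the paper's actual features and single models are not specified here):

```python
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Hypothetical per-match features and results encoded as
# 0 = home loss, 1 = draw, 2 = home win (placeholder data only).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = rng.integers(0, 3, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base (single) models whose predictions feed the final model.
base_models = [
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
]

# The logistic regression meta-learner is trained on the base models'
# predictions against the actual results, i.e., a stacking ensemble.
ensemble = StackingClassifier(
    estimators=base_models,
    final_estimator=LogisticRegression(max_iter=1000),
)
ensemble.fit(X_train, y_train)
print("accuracy:", ensemble.score(X_test, y_test))
```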

Prediction of Net Irrigation Water Requirement in paddy field Based on Machine Learning (머신러닝 기법을 활용한 논 순용수량 예측)

  • Kim, Soo-Jin;Bae, Seung-Jong;Jang, Min-Won
    • Journal of Korean Society of Rural Planning
    • /
    • v.28 no.4
    • /
    • pp.105-117
    • /
    • 2022
  • This study tested SVM (support vector machine), RF (random forest), and ANN (artificial neural network) machine learning models for predicting net irrigation water requirements in paddy fields. For the Jeonju and Jeongeup meteorological stations, the net irrigation water requirement was calculated with K-HAS for 1981 to 2021 and used as the label. For each algorithm, twelve models were constructed from cumulative precipitation, precipitation, crop evapotranspiration, and month. Compared to the CE model, the CEP model achieved a higher R2 and lower MAE, RMSE, and MSE. Considering both learning performance and training time, the RF algorithm is judged to be the most usable, and the five-day models predict better than the three-day models. Linked with weather forecast data, the results of this study are expected to provide the scientific information needed for decision-making by on-site water managers. In the future, once the actual amounts of irrigation and supply are measured, a learning model that reflects them should be developed.
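As a rough illustration of this modelling setup, the sketch below fits a random forest regressor on placeholder records with the inputs named in the abstract (cumulative precipitation, precipitation, crop evapotranspiration, month) and reports R2, MAE, and RMSE; the data, column names, and target relationship are invented for the example and are not the study's K-HAS dataset:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

# Placeholder records standing in for the 1981-2021 K-HAS-derived labels.
rng = np.random.default_rng(1)
n = 2000
X = pd.DataFrame({
    "cum_precip_mm": rng.gamma(2.0, 30.0, n),
    "precip_mm": rng.gamma(1.5, 5.0, n),
    "crop_et_mm": rng.normal(4.0, 1.0, n),
    "month": rng.integers(4, 10, n),
})
# Hypothetical net irrigation water requirement (label).
y = 0.8 * X["crop_et_mm"] - 0.3 * X["precip_mm"] + rng.normal(0, 0.5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
model = RandomForestRegressor(n_estimators=300, random_state=1)
model.fit(X_train, y_train)
pred = model.predict(X_test)

print("R2  :", r2_score(y_test, pred))
print("MAE :", mean_absolute_error(y_test, pred))
print("RMSE:", np.sqrt(mean_squared_error(y_test, pred)))
```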

Comparative characteristic of ensemble machine learning and deep learning models for turbidity prediction in a river (딥러닝과 앙상블 머신러닝 모형의 하천 탁도 예측 특성 비교 연구)

  • Park, Jungsu
    • Journal of Korean Society of Water and Wastewater
    • /
    • v.35 no.1
    • /
    • pp.83-91
    • /
    • 2021
  • Increased river turbidity during flood events affects many aspects of water environmental management, including drinking water supply systems, so predicting turbid water is essential. Advanced machine learning algorithms are increasingly used in this field: ensemble algorithms such as random forest (RF) and gradient boosting decision tree (GBDT) are among the most popular, along with deep learning algorithms such as recurrent neural networks. In this study, GBDT, an ensemble machine learning algorithm, and the gated recurrent unit (GRU), a recurrent neural network algorithm, are used to develop models that predict turbidity in a river. The observation frequencies of the input data were 2, 4, 8, 24, 48, 120, and 168 h. The root-mean-square error-observations standard deviation ratio (RSR) ranged from 0.182 to 0.766 for GRU and from 0.400 to 0.683 for GBDT. Both models show similar prediction accuracy, with RSR of 0.682 for GRU and 0.683 for GBDT. GRU is more accurate when the observation frequency is relatively short (i.e., 2, 4, and 8 h), whereas GBDT is more accurate when it is relatively long (i.e., 48, 120, and 168 h). The results suggest that the characteristics of the input data should be considered when developing an appropriate turbidity prediction model.
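The RSR statistic quoted above is the root-mean-square error divided by the standard deviation of the observations, so lower values indicate better agreement. A small helper written from that standard definition (not taken from the paper's code), with invented turbidity values as a toy check:

```python
import numpy as np

def rsr(observed, predicted):
    """RMSE-observations standard deviation ratio (lower is better)."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return rmse / np.std(observed)

# Toy check with hypothetical turbidity observations (NTU).
obs = np.array([10.0, 12.5, 30.0, 55.0, 22.0, 15.0])
pred = np.array([11.0, 13.0, 27.0, 50.0, 24.0, 14.0])
print(round(rsr(obs, pred), 3))
```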

Machine learning of LWR spent nuclear fuel assembly decay heat measurements

  • Ebiwonjumi, Bamidele;Cherezov, Alexey;Dzianisau, Siarhei;Lee, Deokjung
    • Nuclear Engineering and Technology
    • /
    • v.53 no.11
    • /
    • pp.3563-3579
    • /
    • 2021
  • Measured decay heat data of light water reactor (LWR) spent nuclear fuel (SNF) assemblies are adopted to train machine learning (ML) models. The measured data are available for fuel assemblies irradiated in commercial reactors operated in the United States and Sweden, and come from calorimetric measurements of discharged pressurized water reactor (PWR) and boiling water reactor (BWR) fuel assemblies. 91 PWR and 171 BWR assembly decay heat measurements are used. Because the measurement dataset is small, we propose (i) to use the method of multiple runs and (ii) to generate and use synthetic data, i.e., a large dataset with statistical characteristics similar to those of the original data. Three ML models are developed, based on Gaussian processes (GP), support vector machines (SVM), and neural networks (NN), with four inputs: the assembly-averaged enrichment, assembly-averaged burnup, initial heavy metal mass, and cooling time after discharge. The outcomes of this work are (i) ML models that predict LWR fuel assembly decay heat from the four inputs, (ii) the generation and application of synthetic data, which improves the performance of the ML models, and (iii) an uncertainty analysis of the ML models and their predictions.
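A minimal sketch of the Gaussian process branch of such a workflow is shown below, using scikit-learn and invented stand-in data for the four inputs (enrichment, burnup, initial heavy metal mass, cooling time); the measured assembly data, the multiple-runs method, and the synthetic-data generator of the paper are not reproduced here:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Stand-in assembly data: enrichment (wt%), burnup (GWd/tU),
# initial heavy metal mass (kg), cooling time (years). Not measured data.
rng = np.random.default_rng(2)
n = 200
X = np.column_stack([
    rng.uniform(2.0, 4.5, n),      # enrichment
    rng.uniform(15.0, 55.0, n),    # burnup
    rng.uniform(400.0, 550.0, n),  # initial heavy metal mass
    rng.uniform(2.0, 30.0, n),     # cooling time
])
# Hypothetical decay heat (W): grows with burnup, decays with cooling time.
y = 20.0 * X[:, 1] * np.exp(-0.05 * X[:, 3]) + rng.normal(0, 10.0, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)
gp = make_pipeline(
    StandardScaler(),
    GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True),
)
gp.fit(X_train, y_train)
mean, std = gp[-1].predict(gp[0].transform(X_test), return_std=True)
print("first prediction: %.1f W +/- %.1f W" % (mean[0], std[0]))
```

The GP's predictive standard deviation is one way the kind of uncertainty analysis mentioned in the abstract can be approached.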

Development of Comparative Verification System for Reliability Evaluation of Distribution Line Load Prediction Model (배전 선로 부하예측 모델의 신뢰성 평가를 위한 비교 검증 시스템)

  • Lee, Haesung;Lee, Byung-Sung;Moon, Sang-Keun;Kim, Junhyuk;Lee, Hyeseon
    • KEPCO Journal on Electric Power and Energy
    • /
    • v.7 no.1
    • /
    • pp.115-123
    • /
    • 2021
  • Machine learning-based load prediction makes it possible to avoid excessive power generation or unnecessary economic investment, by estimating the appropriate level of facility investment for loads that will grow in the future and by providing basic data for policies that distribute the maximum load. To secure the reliability of a developed load prediction model in the field, however, its performance must first be compared and verified against other distribution line load prediction models, and no such comparative verification system has yet been established. As a result, the relative merit of a load prediction model cannot be determined accurately, because the models cannot easily be compared against one another. In this paper, we develop a reliability verification system for load prediction models that includes a previously unconsidered method for comparing and verifying the performance reliability of machine learning-based load prediction models, a verification process, and methods for visualizing the verification results. The developed system improves the objectivity of load prediction model performance verification and increases the field applicability of well-performing load prediction models.
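The comparison idea can be illustrated with a small harness that scores several candidate load prediction models on the same held-out data and ranks them by common error metrics; this is only a sketch of the general principle with placeholder data and models, not the paper's verification process or visualizations:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

# Hypothetical distribution line load history (features -> peak load).
rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 6))
y = X @ rng.normal(size=6) + rng.normal(0, 0.3, 1000)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=3)

candidates = {
    "linear": LinearRegression(),
    "random_forest": RandomForestRegressor(random_state=3),
    "gbdt": GradientBoostingRegressor(random_state=3),
}
results = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    results[name] = {
        "MAE": mean_absolute_error(y_test, pred),
        "RMSE": float(np.sqrt(mean_squared_error(y_test, pred))),
    }

# Rank candidates on identical test data so their performance can be
# compared objectively before field deployment.
for name, metrics in sorted(results.items(), key=lambda kv: kv[1]["RMSE"]):
    print(name, metrics)
```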

Machine learning-based probabilistic predictions of shear resistance of welded studs in deck slab ribs transverse to beams

  • Vitaliy V. Degtyarev;Stephen J. Hicks
    • Steel and Composite Structures
    • /
    • v.49 no.1
    • /
    • pp.109-123
    • /
    • 2023
  • Headed studs welded to steel beams and embedded within the concrete of deck slabs are vital components of modern composite floor systems, where safety and economy depend on accurate predictions of the stud shear resistance. The multitude of existing deck profiles and the complex behavior of studs in deck slab ribs make developing accurate and reliable mechanical or empirical design models challenging. The paper addresses this issue by presenting a machine learning (ML) model developed from the natural gradient boosting (NGBoost) algorithm, which is capable of producing probabilistic predictions, and a database of 464 push-out tests, which is considerably larger than the databases used for developing existing design models. The proposed model outperforms models based on other ML algorithms and existing descriptive equations, including those in EC4 and AISC 360, while offering probabilistic predictions unavailable from other models and producing higher shear resistances in many cases. The present study also showed that the stud shear resistance is insensitive to the concrete elastic modulus, stud welding type, location of slab reinforcement, and other parameters considered important by existing models. The NGBoost model was interpreted by evaluating the feature importance and dependence determined with the SHapley Additive exPlanations (SHAP) method. The model was calibrated via reliability analyses in accordance with the Eurocodes to ensure that its predictions meet the required reliability level and facilitate its use in design. An interactive open-source web application was created and deployed to the cloud to allow for convenient and rapid stud shear resistance predictions with the developed model.
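For readers unfamiliar with NGBoost, the sketch below shows how probabilistic predictions of a resistance-like quantity can be obtained with the open-source ngboost package; the feature columns, data, and calibration steps are invented for illustration and are not the authors' database or model:

```python
import numpy as np
from ngboost import NGBRegressor
from ngboost.distns import Normal
from sklearn.model_selection import train_test_split

# Illustrative push-out style features: stud diameter (mm), deck rib
# height (mm), concrete strength (MPa), studs per rib. Not the real data.
rng = np.random.default_rng(4)
n = 464
X = np.column_stack([
    rng.choice([19.0, 22.0], n),
    rng.uniform(40.0, 80.0, n),
    rng.uniform(25.0, 50.0, n),
    rng.integers(1, 3, n),
])
# Hypothetical shear resistance (kN) used only to make the example run.
y = 2.5 * X[:, 0] + 1.2 * X[:, 2] - 0.4 * X[:, 1] + rng.normal(0, 5.0, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=4)
ngb = NGBRegressor(Dist=Normal, n_estimators=500, verbose=False)
ngb.fit(X_train, y_train)

dist = ngb.pred_dist(X_test)      # full predictive distribution per sample
mean = dist.params["loc"]
std = dist.params["scale"]
print("first prediction: %.1f kN (sigma %.1f kN)" % (mean[0], std[0]))
```

The predictive standard deviation is what distinguishes such a probabilistic model from the point estimates of ordinary gradient boosting.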

Comparison of machine learning techniques to predict compressive strength of concrete

  • Dutta, Susom;Samui, Pijush;Kim, Dookie
    • Computers and Concrete
    • /
    • v.21 no.4
    • /
    • pp.463-470
    • /
    • 2018
  • Soft computing techniques, i.e., machine learning and regression algorithms, have gained considerable importance for predicting various parameters in different fields of science and engineering. This paper shows how regression models can be applied to predict the compressive strength of concrete. Three models are considered: Gaussian Process Regression (GPR), Multivariate Adaptive Regression Splines (MARS), and Minimax Probability Machine Regression (MPMR). The contents of cement, blast furnace slag, fly ash, water, superplasticizer, coarse aggregate, and fine aggregate, together with age in days, are taken as inputs, and compressive strength as the output, for the GPR, MARS, and MPMR models. A comparatively large set of 1030 normalized, previously published experimental results was used. The results of all the models are compared and the model providing the best fit is identified. The experimental results show that the proposed models are robust for determining the compressive strength of concrete.
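Of the three models compared, Gaussian Process Regression is the most readily reproduced with common libraries; the sketch below uses scikit-learn with placeholder mixture data in place of the 1030-record dataset (MARS and MPMR implementations are less standardized and are omitted), so it illustrates the modelling pattern rather than the paper's results:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Placeholder mixes: cement, slag, fly ash, water, superplasticizer,
# coarse aggregate, fine aggregate (kg/m^3) and age (days).
rng = np.random.default_rng(5)
n = 300
X = rng.uniform(
    low=[100, 0, 0, 120, 0, 800, 600, 1],
    high=[550, 350, 200, 250, 30, 1150, 1000, 365],
    size=(n, 8),
)
# Hypothetical compressive strength (MPa) with cement, water, and age effects.
y = 0.08 * X[:, 0] - 0.15 * X[:, 3] + 5.0 * np.log(X[:, 7]) + rng.normal(0, 3, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=5)
gpr = make_pipeline(
    StandardScaler(),
    GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True),
)
gpr.fit(X_train, y_train)
print("R2:", r2_score(y_test, gpr.predict(X_test)))
```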

The Use of Unsupervised Machine Learning for the Attenuation of Seismic Noise (탄성파 자료 잡음 제거를 위한 비지도 학습 연구)

  • Kim, Sujeong;Jun, Hyunggu
    • Geophysics and Geophysical Exploration
    • /
    • v.25 no.2
    • /
    • pp.71-84
    • /
    • 2022
  • Various types of seismic noise recorded together with seismic data during acquisition hinder accurate interpretation, so attenuating this noise during processing, and research on seismic noise attenuation itself, are essential; machine learning is now widely used for this purpose. This study attempts to attenuate noise in prestack seismic data using unsupervised machine learning. Three unsupervised models, N2NUNET, PATCHUNET, and DDUL, are trained and applied to synthetic and field prestack seismic data to attenuate the noise and recover clean seismic data. Qualitative and quantitative analysis of the results demonstrates that all three unsupervised learning models succeeded in removing seismic noise from both synthetic and field data. Of the three, the N2NUNET model performed the worst, while the PATCHUNET and DDUL models produced almost identical results, with the DDUL model performing slightly better.
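To make the idea of denoising without clean labels concrete, here is a minimal Noise2Noise-style training loop in PyTorch with a tiny convolutional network and random placeholder gathers; the actual N2NUNET, PATCHUNET, and DDUL architectures and training schemes from the paper are much more elaborate than this sketch:

```python
import torch
import torch.nn as nn

# Tiny convolutional denoiser standing in for the paper's U-Net style models.
class TinyDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = TinyDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Noise2Noise-style unsupervised setup: the target is another noisy
# realization of the same gather, so no clean data are needed.
clean = torch.randn(8, 1, 64, 64)            # placeholder "true" gathers
noisy_a = clean + 0.3 * torch.randn_like(clean)
noisy_b = clean + 0.3 * torch.randn_like(clean)

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(noisy_a), noisy_b)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```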

Network Traffic Measurement Analysis using Machine Learning

  • Hae-Duck Joshua Jeong
    • Korean Journal of Artificial Intelligence
    • /
    • v.11 no.2
    • /
    • pp.19-27
    • /
    • 2023
  • In recent times, Internet traffic has increased exponentially as a result of the continued development of the Internet of Things, mobile networks with sensors, and communication functions embedded in various devices, and the COVID-19 pandemic has further led to an explosion of social network traffic. In this context, research on network traffic analysis based on machine learning has drawn considerable attention. In this paper, we design and develop a new machine learning framework for network traffic analysis that distinguishes normal traffic from abnormal traffic, combining well-known machine learning algorithms with network traffic analysis techniques. Using KDD CUP'99, one of the most widely used datasets, in the Weka and Apache Spark environments, we compare and investigate results obtained from time-series analysis of various aspects, including malicious codes, feature extraction, data formalization, and the implementation of a network traffic measurement tool. The experimental analysis showed that both the logistic regression and support vector machine algorithms performed well, with logistic regression performing better. The quantitative results show that the proposed machine learning framework is reliable and practical, and its performance is compared and analyzed against that of another study. In addition, we found that the framework runs much faster in the Apache Spark environment than in Weka when larger datasets are used to build and apply the machine learning models.
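As a hedged illustration of the Spark side of such a pipeline, the sketch below trains a logistic regression classifier with pyspark.ml to separate normal from abnormal connection records; the rows are a tiny invented stand-in with only three numeric features, not the KDD CUP'99 dataset or the authors' framework:

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("traffic-demo").getOrCreate()

# Placeholder connection records: (duration, src_bytes, dst_bytes, label),
# label 1.0 marking abnormal traffic. KDD CUP'99 has far more features.
rows = [(0.1, 181.0, 5450.0, 0.0), (2.0, 239.0, 486.0, 0.0),
        (0.0, 0.0, 0.0, 1.0), (5.0, 10.0, 0.0, 1.0)] * 50
df = spark.createDataFrame(rows, ["duration", "src_bytes", "dst_bytes", "label"])

assembler = VectorAssembler(inputCols=["duration", "src_bytes", "dst_bytes"],
                            outputCol="features")
train, test = assembler.transform(df).randomSplit([0.8, 0.2], seed=7)

lr = LogisticRegression(featuresCol="features", labelCol="label")
model = lr.fit(train)
auc = BinaryClassificationEvaluator(labelCol="label").evaluate(model.transform(test))
print("AUC:", auc)
spark.stop()
```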

Design and Implementation of Malicious URL Prediction System based on Multiple Machine Learning Algorithms (다중 머신러닝 알고리즘을 이용한 악성 URL 예측 시스템 설계 및 구현)

  • Kang, Hong Koo;Shin, Sam Shin;Kim, Dae Yeob;Park, Soon Tai
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.11
    • /
    • pp.1396-1405
    • /
    • 2020
  • Cyber threats such as the forced collection of personal information and the distribution of malicious code via malicious URLs continue to occur. Coping with these threats requires security technology that quickly detects malicious URLs and prevents damage. In the web environment, malicious URLs take various forms and are constantly created and deleted, so detection or filtering by signature matching has limited effectiveness. Recently, research on detecting and predicting malicious URLs with machine learning techniques has been actively conducted. Existing studies have proposed various features and machine learning algorithms for predicting malicious URLs, but most suggest a single specialized algorithm with supplementary features and preprocessing, making it difficult to fully exploit the strengths of different machine learning algorithms. In this paper, we propose a system that predicts malicious URLs using multiple machine learning algorithms, and we perform experiments in which the prediction results of multiple machine learning models are combined to increase the accuracy of malicious URL prediction. The experiments show that combining multiple models improves prediction performance compared with a single model.
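Combining the outputs of several classifiers, as described above, is commonly done with a voting ensemble. The sketch below shows the pattern with scikit-learn on hypothetical lexical URL features (URL length, digit count, dot count, IP-address host flag); the features, data, and model choices are illustrative assumptions, not the authors' system:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

# Hypothetical lexical URL features; label 1 marks a malicious URL.
rng = np.random.default_rng(6)
n = 1000
X = np.column_stack([
    rng.integers(10, 200, n),   # URL length
    rng.integers(0, 30, n),     # digit count
    rng.integers(1, 8, n),      # dot count
    rng.integers(0, 2, n),      # IP-address host flag
])
y = ((X[:, 0] > 120) | (X[:, 3] == 1)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=6)

# Soft voting averages each model's predicted probabilities, so the final
# prediction reflects the combination of models rather than any single one.
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=6)),
                ("nb", GaussianNB())],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print("accuracy:", ensemble.score(X_test, y_test))
```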