• Title/Summary/Keyword: Gaussian Learning

Search results: 278

A Study on the Prediction of Ship's Roll Motion using Machine Learning-Based Surrogate Model (기계학습기반의 근사모델을 이용한 선박 횡동요 운동특성 예측에 관한 연구)

  • Kim, Young-Rong; Park, Jun-Bum; Moon, Serng-Bae
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2018.05a / pp.41-42 / 2018
  • This study concerns the prediction of a ship's roll motion characteristics, which are used to evaluate seakeeping performance. To obtain the ship's roll RAO during a voyage, a machine learning-based surrogate model is employed. By comparing the surrogate model's predictions with test data, we suggest the approximation technique and data sampling interval best suited to predicting a ship's roll motion characteristics.

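A rough sketch of the surrogate-model idea described above, under assumptions of my own: the response curve, sampling intervals, and the choice of a Gaussian-process surrogate are illustrative stand-ins, not the paper's setup.

```python
# Fit a Gaussian-process surrogate to coarsely sampled values of a
# placeholder roll-RAO curve, then measure error on a dense test grid.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def roll_rao(freq):
    # Placeholder single-peak response curve standing in for a roll RAO.
    return 1.0 / np.sqrt((1.0 - freq**2) ** 2 + (0.1 * freq) ** 2)

train_freq = np.arange(0.2, 2.0, 0.2)[:, None]   # coarse sampling interval
test_freq = np.arange(0.2, 2.0, 0.02)[:, None]   # dense test points

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)
gp.fit(train_freq, roll_rao(train_freq).ravel())
rmse = np.sqrt(np.mean((gp.predict(test_freq) - roll_rao(test_freq).ravel()) ** 2))
print("surrogate RMSE on the dense grid:", rmse)
```

Varying the training step (the "sampling interval") and the kernel is one way to reproduce the kind of comparison the abstract describes.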

A Differential Evolution based Support Vector Clustering (차분진화 기반의 Support Vector Clustering)

  • Jun, Sung-Hae
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.5 / pp.679-683 / 2007
  • Statistical learning theory by Vapnik comprises the support vector machine (SVM), support vector regression (SVR), and support vector clustering (SVC) for classification, regression, and clustering, respectively. Among these algorithms, SVC is an effective clustering method that uses support vectors with a Gaussian kernel function. However, like SVM and SVR, SVC requires the kernel parameters and regularization constant to be determined optimally. In general, these parameters have been set by researchers' expertise or by grid search, which demands heavy computing time. In this paper, we propose a differential evolution based SVC (DESVC), which combines differential evolution with SVC for efficient selection of the kernel parameters and regularization constant. To verify the improved performance of our DESVC, we run experiments using data sets from the UCI machine learning repository and simulation.
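To illustrate the parameter-search mechanics only, here is a minimal sketch that tunes a Gaussian-kernel C and gamma with SciPy's differential evolution; note it uses scikit-learn's SVC *classifier* as a convenient stand-in objective, not support vector clustering, and the bounds are arbitrary.

```python
# Differential evolution over (log10 C, log10 gamma), scored by 5-fold CV.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

def objective(params):
    log_c, log_gamma = params
    model = SVC(C=10.0 ** log_c, gamma=10.0 ** log_gamma, kernel="rbf")
    # Negate CV accuracy because differential_evolution minimizes.
    return -cross_val_score(model, X, y, cv=5).mean()

result = differential_evolution(objective, bounds=[(-3, 3), (-4, 1)],
                                seed=0, maxiter=20)
print("best log10(C), log10(gamma):", result.x, "CV accuracy:", -result.fun)
```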

Machine learning of LWR spent nuclear fuel assembly decay heat measurements

  • Ebiwonjumi, Bamidele; Cherezov, Alexey; Dzianisau, Siarhei; Lee, Deokjung
    • Nuclear Engineering and Technology / v.53 no.11 / pp.3563-3579 / 2021
  • Measured decay heat data of light water reactor (LWR) spent nuclear fuel (SNF) assemblies are adopted to train machine learning (ML) models. The measured data are available for fuel assemblies irradiated in commercial reactors operated in the United States and Sweden, and come from calorimetric measurements of discharged pressurized water reactor (PWR) and boiling water reactor (BWR) fuel assemblies; 91 PWR and 171 BWR assembly decay heat measurements are used. Due to the small size of the measurement dataset, we propose (i) the method of multiple runs and (ii) the generation and use of synthetic data, a large dataset with statistical characteristics similar to those of the original dataset. Three ML models are developed based on Gaussian process (GP), support vector machines (SVM), and neural networks (NN), with four inputs: fuel assembly averaged enrichment, assembly averaged burnup, initial heavy metal mass, and cooling time after discharge. The outcomes of this work are (i) ML models which predict LWR fuel assembly decay heat from the four inputs, (ii) generation and application of synthetic data which improves the performance of the ML models, and (iii) uncertainty analysis of the ML models and their predictions.
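A minimal sketch of the Gaussian-process branch of this setup, on fabricated stand-in data; the input ranges and the decay-heat trend below are invented for illustration and carry no physical fidelity.

```python
# GP regressor mapping the four abstract-listed inputs to decay heat.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
# Columns: enrichment, burnup, initial heavy metal mass, cooling time
# (illustrative ranges only).
X = rng.uniform([2.0, 20.0, 400.0, 5.0], [5.0, 55.0, 550.0, 40.0], size=(91, 4))
# Placeholder trend: heat grows with burnup, decays with cooling time.
y = 10.0 * X[:, 1] / (1.0 + 0.1 * X[:, 3]) + rng.normal(0, 5, 91)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0] * 4) + WhiteKernel(),
                              normalize_y=True)
gp.fit(X, y)
mean, std = gp.predict(X[:5], return_std=True)  # predictive mean and uncertainty
print(mean, std)
```

The `return_std` output is what makes the GP convenient for the uncertainty analysis the abstract mentions.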

Optimizing rainfall prediction utilizing multivariate time series, seasonal adjustment, and stacked long short-term memory

  • Nguyen, Thi Huong; Kwon, Yoon Jeong; Yoo, Je-Ho; Kwon, Hyun-Han
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.373-373 / 2021
  • Rainfall forecasting is an important issue applied in many areas, such as agriculture, flood warning, and water resources management. In this context, this study proposes a statistical and machine learning-based forecasting model for monthly rainfall. A Bayesian Gaussian process was chosen to optimize the hyperparameters of the stacked long short-term memory (SLSTM) model. The proposed SLSTM model was applied to predicting monthly precipitation at Seoul station, South Korea, with data retrieved from the Korea Meteorological Administration (KMA) for the period 1960-2019. Four schemes were examined: (i) prediction with only rainfall; (ii) with deseasonalized rainfall; (iii) with rainfall and minimum temperature; (iv) with deseasonalized rainfall and minimum temperature. The root mean squared error (RMSE) of the predicted rainfall, 16-17 mm, is relatively small compared with the average monthly rainfall at Seoul station of 117 mm, and the results showed that scheme (iv) gives the best predictions. This approach is also more straightforward than hydrological and hydraulic models, which require much more input data. The results indicate that a deep learning network can be applied successfully in the hydrology field; overall, the proposed method is promising, offering a good solution for rainfall prediction.

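One plausible way to wire GP-based Bayesian optimization around a stacked LSTM, sketched under assumed tooling (scikit-optimize's gp_minimize and Keras) on random placeholder data; the window length, features, and search space are my assumptions, not the paper's configuration.

```python
# GP-based Bayesian optimization of two stacked-LSTM hyperparameters.
import numpy as np
from skopt import gp_minimize
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12, 2))   # 12-month windows of (rainfall, min temp)
y = rng.normal(size=(200, 1))       # next-month deseasonalized rainfall

def objective(params):
    units, lr = int(params[0]), params[1]
    model = keras.Sequential([
        keras.Input(shape=(12, 2)),
        keras.layers.LSTM(units, return_sequences=True),
        keras.layers.LSTM(units),   # second stacked LSTM layer
        keras.layers.Dense(1),
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=lr), loss="mse")
    hist = model.fit(X, y, epochs=5, validation_split=0.2, verbose=0)
    return hist.history["val_loss"][-1]   # value the GP surrogate minimizes

res = gp_minimize(objective, [(8, 64), (1e-4, 1e-2)], n_calls=12, random_state=0)
print("best (units, learning rate):", res.x)
```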

Identification of Pb-Zn ore under the condition of low count rate detection of slim hole based on PGNAA technology

  • Haolong Huang; Pingkun Cai; Wenbao Jia; Yan Zhang
    • Nuclear Engineering and Technology / v.55 no.5 / pp.1708-1717 / 2023
  • The grade analysis of lead-zinc ore is the basis for the optimal development and utilization of deposits. In this study, a method combining Prompt Gamma Neutron Activation Analysis (PGNAA) technology and machine learning is proposed for lead-zinc mine borehole logging, which can identify lead-zinc ores of different grades and gangue in the formation, providing real-time grade information qualitatively and semi-quantitatively. First, Monte Carlo simulation is used to obtain a gamma-ray spectrum data set for training and testing machine learning classification algorithms. These spectra are broadened, normalized, and separated into inelastic scattering and capture spectra, then used to fit different classifier models. When the comprehensive grade boundary between high- and low-grade ores is set to 5%, evaluation metrics from 5-fold cross-validation show that the SVM (support vector machine), KNN (K-nearest neighbor), GNB (Gaussian naive Bayes), and RF (random forest) models can effectively distinguish lead-zinc ore from gangue. Moreover, the GNB model achieved the best accuracy, 91.45%, when identifying high- and low-grade ores, with an F1 score greater than 0.9 for both types of ore.
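As a toy illustration of the classification step, the sketch below trains Gaussian naive Bayes on fabricated stand-in "spectra" for three classes and scores it with 5-fold cross-validation; the spectra are synthetic, not PGNAA data.

```python
# Gaussian Naive Bayes separating gangue, low-grade, and high-grade classes.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n_channels = 64                       # coarse stand-in for a gamma spectrum
means = rng.uniform(0.5, 1.5, size=(3, n_channels))  # per-class spectral shapes
X = np.vstack([rng.normal(m, 0.2, size=(100, n_channels)) for m in means])
y = np.repeat([0, 1, 2], 100)         # 0: gangue, 1: low-grade, 2: high-grade

clf = GaussianNB()
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold CV, as in the paper
print("mean CV accuracy:", scores.mean())
```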

Assessment of wall convergence for tunnels using machine learning techniques

  • Mahmoodzadeh, Arsalan; Nejati, Hamid Reza; Mohammadi, Mokhtar; Ibrahim, Hawkar Hashim; Mohammed, Adil Hussein; Rashidi, Shima
    • Geomechanics and Engineering / v.31 no.3 / pp.265-279 / 2022
  • Tunnel convergence prediction is essential for the safe construction and design of tunnels. This study proposes five machine learning models, deep neural network (DNN), K-nearest neighbors (KNN), Gaussian process regression (GPR), support vector regression (SVR), and decision tree (DT), to predict the convergence phenomenon during or shortly after the excavation of tunnels. To this end, a database of 650 data points (440 for training, 110 for validation, and 100 for testing) was gathered from previously constructed tunnels. The database covers 12 parameters that influence tunnel convergence, with tunnel wall convergence as the target. Both 5-fold and hold-out cross-validation were used to analyze the predicted outcomes of the ML models, and the DNN was identified as the most robust model. To assess each parameter's contribution to the prediction problem, the backward selection method was used; the results showed that the highest- and lowest-impact parameters for tunnel convergence are tunnel depth and tunnel width, respectively.
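A minimal sketch of backward feature selection on a synthetic 12-input regression problem, using scikit-learn's SequentialFeatureSelector with a KNN regressor (one of the five model families above); the data and the number of retained features are arbitrary choices, not the paper's.

```python
# Backward elimination over 12 stand-in inputs for a convergence regressor.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsRegressor

# 650 synthetic samples with 12 candidate inputs, echoing the database size.
X, y = make_regression(n_samples=650, n_features=12, noise=0.1, random_state=0)

selector = SequentialFeatureSelector(
    KNeighborsRegressor(), n_features_to_select=6, direction="backward", cv=5
)
selector.fit(X, y)
print("retained feature indices:", np.flatnonzero(selector.get_support()))
```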

Predicting the compressive strength of SCC containing nano silica using surrogate machine learning algorithms

  • Neeraj Kumar Shukla; Aman Garg; Javed Bhutto; Mona Aggarwal; Mohamed Abbas; Hany S. Hussein; Rajesh Verma; T.M. Yunus Khan
    • Computers and Concrete / v.32 no.4 / pp.373-381 / 2023
  • Fly ash, granulated blast furnace slag, marble waste powder, and similar by-products of other sectors are materials the construction industry is looking to incorporate into the many types of concrete it produces. This research uses surrogate machine learning methods to forecast the compressive strength of self-compacting concrete (SCC). Surrogate models were developed using gradient boosting machine (GBM), support vector machine (SVM), random forest (RF), and Gaussian process regression (GPR) techniques. Compressive strength is the output variable, with nano silica content, cement content, coarse aggregate content, fine aggregate content, superplasticizer, curing duration, and water-binder ratio as input variables. Of the four models, GBM was the most accurate in determining the compressive strength of SCC, while GPR gave the poorest predictions. The compressive strength of SCC with nano silica was found to be most affected by curing duration and least affected by fine aggregate content.
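A hedged sketch of a gradient-boosting surrogate over the seven inputs listed above, on fabricated data; the value ranges and the strength trend are invented purely so the example runs.

```python
# GBM surrogate mapping seven mix variables to compressive strength.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Columns: nano silica, cement, coarse agg., fine agg., superplasticizer,
# curing duration, water-binder ratio (normalized illustrative ranges).
X = rng.uniform(0, 1, size=(300, 7))
y = 40 + 25 * X[:, 5] - 15 * X[:, 6] + rng.normal(0, 2, 300)  # placeholder trend

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gbm = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out data:", gbm.score(X_te, y_te))
```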

Classification of Gravitational Waves from Black Hole-Neutron Star Mergers with Machine Learning

  • Nurzhan Ussipov; Zeinulla Zhanabaev; Almat Akhmetali; Marat Zaidyn; Dana Turlykozhayeva; Aigerim Akniyazova; Timur Namazbayev
    • Journal of Astronomy and Space Sciences / v.41 no.3 / pp.149-158 / 2024
  • This study developed a machine learning-based methodology to classify gravitational wave (GW) signals from black hole-neutron star (BH-NS) mergers by combining a convolutional neural network (CNN) with conditional information for feature extraction. The model was trained and validated on a dataset of simulated GW signals injected into Gaussian noise to mimic real-world signals. We considered all three types of merger: binary black hole (BBH), binary neutron star (BNS), and neutron star-black hole (NSBH). We achieved up to 96% correct classification of GW signal sources, and incorporating our novel conditional-information approach improved classification accuracy by 10% compared to standard time-series training. Additionally, to show the effectiveness of our method, we tested the model with real GW data from the Gravitational Wave Transient Catalog (GWTC-3) and successfully classified ~90% of signals. These results are an important step towards low-latency, real-time GW detection.
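The sketch below is only a schematic of this approach: a small 1-D CNN classifying placeholder chirp-like signals injected into Gaussian noise into three merger classes. The waveforms, the architecture, and the omission of the conditional-information input are all simplifications of my own.

```python
# 1-D CNN classifying noisy simulated signals into BBH/BNS/NSBH classes.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
n, length = 300, 512
labels = rng.integers(0, 3, n)
t = np.linspace(0, 1, length)
# Placeholder chirp-like waveforms with class-dependent frequency sweep.
signals = np.sin(2 * np.pi * (20 + 30 * labels[:, None]) * t * t)
X = (signals + rng.normal(0, 1, (n, length)))[..., None]  # inject Gaussian noise

model = keras.Sequential([
    keras.Input(shape=(length, 1)),
    keras.layers.Conv1D(16, 16, activation="relu"),
    keras.layers.MaxPooling1D(4),
    keras.layers.Conv1D(32, 8, activation="relu"),
    keras.layers.GlobalAveragePooling1D(),
    keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, labels, epochs=3, validation_split=0.2, verbose=0)
```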

Neural networks optimization for multi-dimensional digital signal processing in IoT devices (IoT 디바이스에서 다차원 디지털 신호 처리를 위한 신경망 최적화)

  • Choi, KwonTaeg
    • Journal of Digital Contents Society / v.18 no.6 / pp.1165-1173 / 2017
  • Deep learning, one of the most prominent machine learning approaches, has proven its applicability in various applications and is widely used in digital signal processing. However, it is difficult to apply deep learning to IoT devices with limited CPU performance and memory capacity, because a large number of training samples requires substantial memory and computation time. In particular, on an Arduino, with a very small memory capacity of 2 KB to 8 KB, there are many limitations to implementing such algorithms. In this paper, we propose a method to optimize the extreme learning machine (ELM) algorithm, which has proven accurate and efficient in various fields, on an Arduino board. Experiments show that multi-class learning is possible for up to 15-dimensional data on an Arduino UNO with 2 KB of memory, and for up to 42-dimensional data on an Arduino MEGA with 8 KB of memory. To evaluate the experiments, we demonstrated the effectiveness of the proposed algorithm using data sets generated with Gaussian mixture modeling and public UCI data sets.
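To make the ELM idea concrete, here is a conceptual NumPy sketch (the paper targets C on an Arduino, so this shows structure only): a fixed random hidden layer followed by a single least-squares solve for the output weights, which is what keeps training cheap on constrained hardware.

```python
# Extreme learning machine: random hidden weights, closed-form output weights.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 15))            # 15-dimensional inputs
y = np.eye(3)[rng.integers(0, 3, 200)]    # one-hot labels, 3 classes

n_hidden = 40
W = rng.normal(size=(15, n_hidden))       # fixed random input weights
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                    # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights in one solve

pred = np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
print("train accuracy:", (pred == y.argmax(axis=1)).mean())
```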

Ensemble Design of Machine Learning Techniques: Experimental Verification by Prediction of Drifter Trajectory (앙상블을 이용한 기계학습 기법의 설계: 뜰개 이동경로 예측을 통한 실험적 검증)

  • Lee, Chan-Jae; Kim, Yong-Hyuk
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.8 no.3 / pp.57-67 / 2018
  • An ensemble is a unified approach for obtaining better performance by combining multiple algorithms in machine learning. In this paper, we introduce boosting and bagging, two widely used ensemble techniques, and design a method using support vector regression, a radial basis function network, a Gaussian process, and a multilayer perceptron as unit models. In addition, a recurrent neural network and the MOHID numerical model were added to the experiment. The drifter data used for experimental verification consist of 683 observations in seven regions. The performance of the ensemble technique is verified by comparison with each of the four algorithms, with mean absolute error adopted as the metric. The presented methods are ensemble models built on bagging, boosting, and machine learning; the error rate was calculated by assigning equal and differing weight values to each unit model in the ensemble. The ensemble model using machine learning showed a 61.7% improvement over the average of the four machine learning techniques.
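A minimal sketch of the weighting idea with four stand-in regressors; KernelRidge substitutes for an RBF network, and the data and weighting rule are illustrative assumptions rather than the paper's design.

```python
# Weighted-average ensemble of four regressors: equal vs. error-informed weights.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.kernel_ridge import KernelRidge       # RBF-network stand-in
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

X, y = make_regression(n_samples=300, n_features=5, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [SVR(), KernelRidge(kernel="rbf"), GaussianProcessRegressor(),
          MLPRegressor(max_iter=2000, random_state=0)]
preds = np.array([m.fit(X_tr, y_tr).predict(X_te) for m in models])

# Per-model MAE (computed on the held-out set here purely for illustration;
# a real design would derive weights from a validation split).
mae = np.abs(preds - y_te).mean(axis=1)
weights = (1 / mae) / (1 / mae).sum()              # lower error -> larger weight
equal = preds.mean(axis=0)
weighted = (weights[:, None] * preds).sum(axis=0)
print("equal-weight MAE:", np.abs(equal - y_te).mean(),
      "weighted MAE:", np.abs(weighted - y_te).mean())
```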