• Title/Summary/Keyword: cross validation


A Study on Time Series Cross-Validation Techniques for Enhancing the Accuracy of Reservoir Water Level Prediction Using Automated Machine Learning TPOT (자동기계학습 TPOT 기반 저수위 예측 정확도 향상을 위한 시계열 교차검증 기법 연구)

  • Bae, Joo-Hyun;Park, Woon-Ji;Lee, Seoro;Park, Tae-Seon;Park, Sang-Bin;Kim, Jonggun;Lim, Kyoung-Jae
    • Journal of The Korean Society of Agricultural Engineers / v.66 no.1 / pp.1-13 / 2024
  • This study assessed the efficacy of improving the accuracy of reservoir water level prediction models by employing automated machine learning and efficient cross-validation methods for time-series data. Considering the inherent complexity and non-linearity of time-series reservoir water level data, we propose an optimized approach for model selection and training. The performance of twelve models was evaluated for the Obong Reservoir in Gangneung, Gangwon Province, using the TPOT (Tree-based Pipeline Optimization Tool) and four cross-validation methods, which led to the determination of the optimal pipeline model. The pipeline model consisting of Extra Tree, Stacking Ridge Regression, and Simple Ridge Regression showed outstanding predictive performance for both training and test data, with an R² (coefficient of determination) and NSE (Nash-Sutcliffe efficiency) both exceeding 0.93. Moreover, for predictions of water levels 12 hours ahead, the pipeline model selected through time-series split cross-validation accurately captured the change pattern of the time-series water level data during the test period, with an NSE exceeding 0.99. The methodology proposed in this study is expected to contribute greatly to the efficient generation of reservoir water level predictions in regions with high rainfall variability.
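TPOT's pipeline search itself is a black box here, but the time-series split cross-validation the abstract credits can be shown in a minimal, library-free sketch: each fold trains on an expanding window of past observations and validates on the block that immediately follows, so the model is never scored on data that precedes its training period. The fold logic roughly mirrors scikit-learn's `TimeSeriesSplit`; the sample counts are invented.

```python
def time_series_splits(n_samples, n_splits):
    """Expanding-window time-series cross-validation: every test block
    comes strictly after the data the model was trained on."""
    fold_size = n_samples // (n_splits + 1)
    for k in range(1, n_splits + 1):
        train_end = fold_size * k
        test_end = min(train_end + fold_size, n_samples)
        yield list(range(train_end)), list(range(train_end, test_end))

# Example: 12 hourly water-level readings, 3 folds.
for train_idx, test_idx in time_series_splits(12, 3):
    print(len(train_idx), test_idx)
# 3 [3, 4, 5]
# 6 [6, 7, 8]
# 9 [9, 10, 11]
```

Contrast this with shuffled k-fold, which would leak future observations into the training folds of an autocorrelated series.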

LS-SVM for large data sets

  • Park, Hongrak;Hwang, Hyungtae;Kim, Byungju
    • Journal of the Korean Data and Information Science Society / v.27 no.2 / pp.549-557 / 2016
  • In this paper we propose a multiclassification method for large data sets that ensembles least squares support vector machines (LS-SVMs) built on principal components instead of the raw input vector. We use a revised one-vs-all method for multiclassification, a voting scheme based on combining several binary classifications. The revised one-vs-all method is performed using the hat matrix of the LS-SVM ensemble, which is obtained by ensembling LS-SVMs trained on random samples drawn from the whole large training data set. The leave-one-out cross-validation (CV) function is used to select the optimal values of the hyper-parameters that affect the performance of the multiclass LS-SVM ensemble, and we present a generalized cross-validation function to reduce the computational burden of the leave-one-out CV function. Experimental results on real data sets illustrate the performance of the proposed multiclass LS-SVM ensemble.
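The computational trick behind both CV functions above is that, for any linear smoother ŷ = Hy, the leave-one-out residual is e_i / (1 − h_ii) with no refitting, and GCV replaces each leverage h_ii by the average trace(H)/n. A minimal sketch with a 1-D ridge smoother standing in for the LS-SVM ensemble (the data and the candidate-λ grid are invented):

```python
def loo_and_gcv(x, y, lam):
    """For 1-D ridge regression (a linear smoother y_hat = H y), compute the
    exact leave-one-out CV score via the hat-matrix shortcut e_i / (1 - h_ii),
    and the GCV score that replaces each leverage h_ii by trace(H) / n."""
    n = len(x)
    sxx = sum(xi * xi for xi in x)
    beta = sum(xi * yi for xi, yi in zip(x, y)) / (sxx + lam)
    trace_h = sxx / (sxx + lam)              # trace(H) = sum of leverages
    loo = gcv = 0.0
    for xi, yi in zip(x, y):
        resid = yi - beta * xi
        h_ii = xi * xi / (sxx + lam)         # leverage of observation i
        loo += (resid / (1.0 - h_ii)) ** 2
        gcv += (resid / (1.0 - trace_h / n)) ** 2
    return loo / n, gcv / n

x = [1.0, 2.0, 3.0, 4.0]                     # toy predictor and response
y = [1.1, 1.9, 3.2, 3.9]
best = min([0.01, 0.1, 1.0, 10.0], key=lambda lam: loo_and_gcv(x, y, lam)[1])
```

Only one fit per λ is needed for either score, which is exactly why GCV relieves the burden of naive leave-one-out refitting.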

A Study on Accuracy Estimation of Service Model by Cross-validation and Pattern Matching

  • Cho, Seongsoo;Shrestha, Bhanu
    • International journal of advanced smart convergence / v.6 no.3 / pp.17-21 / 2017
  • In this paper, the service execution accuracies of an ontology-based rule inference method and a machine learning method were compared, and the amount of data at which the service execution accuracy of the machine learning method becomes equal to that of the rule inference was found. The rule inference measures service execution accuracy using accumulated data and pattern matching on service results, whereas the machine learning method measures service execution accuracy using cross-validation data. After creating a confusion matrix and measuring the accuracy of each service execution, the better inference algorithm can be selected from the results.
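The confusion-matrix step can be sketched in a few lines of plain Python; the service labels below are invented for illustration, not taken from the paper:

```python
from collections import Counter

def confusion_matrix(actual, predicted):
    """Count (actual, predicted) label pairs; diagonal entries are correct."""
    return Counter(zip(actual, predicted))

def accuracy(actual, predicted):
    """Fraction of predictions on the confusion-matrix diagonal."""
    matrix = confusion_matrix(actual, predicted)
    correct = sum(n for (a, p), n in matrix.items() if a == p)
    return correct / sum(matrix.values())

actual    = ["run", "run", "stop", "stop", "run", "stop", "run", "stop"]
predicted = ["run", "stop", "stop", "stop", "run", "run", "run", "stop"]
print(accuracy(actual, predicted))  # 0.75
```

Comparing this accuracy across the two inference methods at growing data sizes gives the crossover point the abstract describes.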

Semiparametric Regression Splines in Matched Case-Control Studies

  • Kim, In-Young;Carroll, Raymond J.;Cohen, Noah
    • Proceedings of the Korean Statistical Society Conference / 2003.05a / pp.167-170 / 2003
  • We develop semiparametric methods for matched case-control studies using regression splines. Three methods are developed: an approximate cross-validation scheme to estimate the smoothing parameter inherent in regression splines, as well as Monte Carlo Expectation Maximization (MCEM) and Bayesian methods to fit the regression spline model. We compare the approximate cross-validation, MCEM, and Bayesian approaches using simulation, showing that they appear approximately equally efficient, with the approximate cross-validation method being computationally the most convenient. An example from equine epidemiology that motivated the work is used to demonstrate our approaches.
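The paper's approximate scheme is specific to the matched case-control likelihood, but the general idea of picking a smoothing parameter by leave-one-out cross-validation can be sketched with a simple moving-average smoother standing in for the regression spline (toy data; the window width plays the role of the smoothing parameter):

```python
def loo_score(y, window):
    """Leave-one-out CV error of a centered moving-average smoother:
    each point is predicted from its neighbors within `window` steps,
    excluding the point itself."""
    n, err = len(y), 0.0
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        neighbors = [y[j] for j in range(lo, hi) if j != i]
        err += (y[i] - sum(neighbors) / len(neighbors)) ** 2
    return err / n

y = [0.0, 0.2, 0.1, 0.4, 0.5, 0.4, 0.7, 0.8]   # toy noisy trend
best_window = min([1, 2, 3], key=lambda w: loo_score(y, w))
```

A spline's smoothing penalty is chosen the same way: evaluate the held-out error at each candidate value and keep the minimizer.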


Cross-Validation Probabilistic Neural Network Based Face Identification

  • Lotfi, Abdelhadi;Benyettou, Abdelkader
    • Journal of Information Processing Systems / v.14 no.5 / pp.1075-1086 / 2018
  • In this paper a cross-validation algorithm for training probabilistic neural networks (PNNs) is presented for application to automatic face identification. Standard PNNs perform well for small and medium-sized databases but suffer from serious problems when used with large databases like those encountered in biometrics applications. To address this issue, we propose a new training algorithm for PNNs that reduces the hidden layer's size and avoids over-fitting at the same time. The proposed training algorithm generates networks with a smaller hidden layer containing only representative examples from the training data set. Moreover, adding new classes or samples after training does not require retraining, which is one of the main characteristics of this solution. Results presented in this work show a great improvement in both the processing speed and the generalization of the proposed classifier, mainly due to the significantly reduced size of the hidden layer.
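A standard PNN scores each class by summing Gaussian kernels centered on that class's stored training examples, so the hidden layer grows with the training set. The sketch below shows the kernel scoring plus a crude pruning pass that drops examples the rest of the network already classifies correctly; this is a simplification for illustration, not the authors' training algorithm, and the 1-D patterns and sigma are invented:

```python
import math

def pnn_predict(x, patterns, sigma=0.5):
    """Probabilistic neural network: score each class by summing Gaussian
    kernels centered on that class's stored pattern-layer examples."""
    scores = {}
    for xi, label in patterns:
        k = math.exp(-((x - xi) ** 2) / (2 * sigma ** 2))
        scores[label] = scores.get(label, 0.0) + k
    return max(scores, key=scores.get)

def prune_hidden_layer(patterns, sigma=0.5):
    """Keep only examples the remaining network cannot classify without:
    a crude way to retain representative patterns and shrink the hidden
    layer (a stand-in for the paper's cross-validation-based training)."""
    kept = list(patterns)
    for example in list(patterns):
        others = [p for p in kept if p is not example]
        if others and pnn_predict(example[0], others, sigma) == example[1]:
            kept = others  # redundant: the rest already classifies it
    return kept

patterns = [(0.0, "A"), (0.1, "A"), (0.2, "A"), (1.0, "B"), (1.1, "B")]
reduced = prune_hidden_layer(patterns)
```

Because the pattern layer is just a set of stored examples, adding a new class later only appends kernels; nothing is retrained, which matches the incremental property the abstract highlights.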

PRECONDITIONED GL-CGLS METHOD USING REGULARIZATION PARAMETERS CHOSEN FROM THE GLOBAL GENERALIZED CROSS VALIDATION

  • Oh, SeYoung;Kwon, SunJoo
    • Journal of the Chungcheong Mathematical Society / v.27 no.4 / pp.675-688 / 2014
  • In this paper, we present an efficient way to determine a suitable value of the regularization parameter using the global generalized cross validation, and analyze experimental results from the preconditioned global conjugate gradient linear least squares (Gl-CGLS) method applied to image deblurring problems. Preconditioned Gl-CGLS solves general linear systems with multiple right-hand sides, and it has been shown in [10] that this method can be effectively applied to image deblurring. The regularization parameter chosen by the global generalized cross validation, combined with the preconditioned Gl-CGLS method, gives better reconstructions of the true image than the other parameters considered in this study.
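When the blur operator is diagonalized (e.g. via an SVD), the Tikhonov filter factors f_i = s_i²/(s_i² + λ²) and the GCV score reduce to sums over singular values, which makes scanning the λ grid cheap. A sketch under that diagonal assumption, not the global Gl-CGLS machinery itself (the singular values and data are invented):

```python
def gcv(lam, s, b):
    """GCV score for Tikhonov regularization with a diagonalized operator:
    s are singular values, b the transformed right-hand side, and the
    residual weights are 1 - f_i = lam^2 / (s_i^2 + lam^2)."""
    one_minus_f = [lam * lam / (si * si + lam * lam) for si in s]
    resid = sum((w * bi) ** 2 for w, bi in zip(one_minus_f, b))
    return resid / sum(one_minus_f) ** 2

s = [1.0, 0.5, 0.1, 0.01]         # decaying singular values (ill-posed)
b = [1.02, 0.49, 0.012, 0.011]    # noisy data; the last entry is mostly noise
best_lam = min([0.001, 0.05, 0.3, 2.0], key=lambda lam: gcv(lam, s, b))
```

GCV balances the residual against the effective degrees of freedom, so it penalizes both under-regularized (noise-amplifying) and over-regularized (over-smoothed) choices of λ.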

A convenient approach for penalty parameter selection in robust lasso regression

  • Kim, Jongyoung;Lee, Seokho
    • Communications for Statistical Applications and Methods / v.24 no.6 / pp.651-662 / 2017
  • We propose an alternative procedure for selecting the penalty parameter in $L_1$ penalized robust regression. This procedure is based on marginalizing the prior distribution over the penalty parameter, so the resulting objective function does not include the penalty parameter at all. In addition, its estimating algorithm automatically chooses a penalty parameter using the previous estimate of the regression coefficients. The proposed approach bypasses cross-validation and thereby saves computing time. Variable-wise penalization also performs best from the prediction and variable selection perspectives. Numerical studies using simulated data demonstrate the performance of our proposals, and the proposed methods are applied to the Boston housing data. Through the simulation study and real data application we demonstrate that our proposals are competitive with, or much better than, cross-validation in terms of prediction, variable selection, and computing time.
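The cross-validation baseline that the paper bypasses looks like the following for a 1-D lasso, where the penalized estimate has a closed soft-thresholding form (toy data; a real lasso CV runs coordinate descent over many predictors, so each grid point is far more expensive, which is the cost the marginalization avoids):

```python
def soft(z, lam):
    """Soft-thresholding operator."""
    return (z - lam) if z > lam else (z + lam) if z < -lam else 0.0

def lasso_1d(x, y, lam):
    """Closed-form lasso with a single predictor and no intercept:
    soft-threshold the cross-product, then rescale."""
    return soft(sum(a * b for a, b in zip(x, y)), lam) / sum(a * a for a in x)

def cv_error(x, y, lam, k=4):
    """k-fold CV error of the 1-D lasso at penalty lam (contiguous folds)."""
    n, err = len(x), 0.0
    for f in range(k):
        test = set(range(f * n // k, (f + 1) * n // k))
        x_tr = [x[i] for i in range(n) if i not in test]
        y_tr = [y[i] for i in range(n) if i not in test]
        beta = lasso_1d(x_tr, y_tr, lam)
        err += sum((y[i] - beta * x[i]) ** 2 for i in test)
    return err / n

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]   # toy data, roughly y = x
y = [0.9, 2.1, 2.9, 4.2, 4.8, 6.1, 7.1, 7.9]
best_lam = min([0.0, 1.0, 10.0, 50.0], key=lambda lam: cv_error(x, y, lam))
```

Every candidate λ requires k refits here; the marginalization approach replaces this whole grid search with a single fit.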

Domestic air demand forecast using cross-validation (교차검증을 이용한 국내선 항공수요예측)

  • Lim, Jae-Hwan;Kim, Young-Rok;Choi, Yun-Chul;Kim, Kwang-Il
    • Journal of the Korean Society for Aviation and Aeronautics / v.27 no.1 / pp.43-50 / 2019
  • The aviation demand forecast field has been actively studied along with the recent growth of the aviation market. In this study, domestic passenger demand and freight demand were estimated through a cross-validation method. As a result, passenger demand is influenced by the private consumption growth rate, oil price, and exchange rate, while freight demand is affected by GDP per capita, the private consumption growth rate, and oil price. In particular, passenger demand is sensitive to temporary external shocks, whereas freight demand is more affected by economic variables than by temporary shocks.
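Cross-validation of this kind amounts to comparing candidate demand models by their held-out error. A library-free sketch with a univariate OLS model and invented series (not the paper's data or its full multivariate specification):

```python
def ols_fit(x, y):
    """Least squares fit y = a + b * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def cv_mse(x, y, k=3):
    """k-fold CV mean squared error of the univariate OLS model."""
    n, err = len(x), 0.0
    for f in range(k):
        test = set(range(f * n // k, (f + 1) * n // k))
        x_tr = [x[i] for i in range(n) if i not in test]
        y_tr = [y[i] for i in range(n) if i not in test]
        a, b = ols_fit(x_tr, y_tr)
        err += sum((y[i] - (a + b * x[i])) ** 2 for i in test)
    return err / n

# Hypothetical series: demand tracks consumption growth, not the noise column.
demand      = [10.0, 11.0, 12.5, 13.0, 14.5, 15.0]
consumption = [1.0, 1.2, 1.5, 1.6, 1.9, 2.0]
noise       = [3.0, 1.0, 2.5, 0.5, 2.0, 1.5]
print(cv_mse(consumption, demand) < cv_mse(noise, demand))  # True
```

Ranking explanatory variables by held-out error, rather than in-sample fit, is what lets such a study claim that consumption growth, oil price, and exchange rate genuinely drive passenger demand.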

Estimation of daily maximum air temperature using NOAA/AVHRR data (NOAA/AVHRR 자료를 이용한 일 최고기온 추정에 관한 연구)

  • 변민정;한영호;김영섭
    • Proceedings of the Korean Association of Geographic Information Studies Conference / 2003.04a / pp.291-296 / 2003
  • This study estimated surface temperature from NOAA/AVHRR data using the split-window technique. For surface monitoring, a cloud-masking procedure was carried out using a threshold algorithm. The daily maximum air temperature was then estimated by multiple regression using satellite-derived surface temperature, EDD, and latitude as independent variables. The highest correlation was obtained when the EDD data were added, indicating that EDD is a necessary element for estimating the daily maximum air temperature. Correlations and empirical equations were derived using three approaches: 1) ignoring land cover by season, 2) considering land cover by season, and 3) by land cover alone. The third approach showed the highest correlation, so a cross-validation procedure was applied to it to validate the estimated values. For landcover type 5, the cross-validated results show reasonable agreement with measured values (slope=0.97, intercept=-0.30, R²=0.84, RMSE=4.24°C); for landcover type 7, they likewise agree well (slope=0.993, intercept=0.062, R²=0.84, RMSE=4.43°C).
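The slope, intercept, R², and RMSE used above to judge cross-validated estimates against measurements can be computed directly; a sketch with invented temperature pairs (not the study's data):

```python
import math

def validation_metrics(measured, estimated):
    """Regress estimated on measured and report the slope, intercept,
    R^2, and RMSE used to judge cross-validated estimates."""
    n = len(measured)
    mx, my = sum(measured) / n, sum(estimated) / n
    sxx = sum((m - mx) ** 2 for m in measured)
    sxy = sum((m - mx) * (e - my) for m, e in zip(measured, estimated))
    syy = sum((e - my) ** 2 for e in estimated)
    slope = sxy / sxx
    intercept = my - slope * mx
    r2 = sxy * sxy / (sxx * syy)
    rmse = math.sqrt(sum((e - m) ** 2 for m, e in zip(measured, estimated)) / n)
    return slope, intercept, r2, rmse

measured  = [20.0, 25.0, 30.0, 35.0]   # hypothetical daily maxima, deg C
estimated = [21.0, 24.0, 31.0, 34.0]
slope, intercept, r2, rmse = validation_metrics(measured, estimated)
```

A slope near 1 and intercept near 0 indicate an unbiased estimator, while RMSE reports the typical error in the original units (°C here).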


A Study on Random Selection of Pooling Operations for Regularization and Reduction of Cross Validation (정규화 및 교차검증 횟수 감소를 위한 무작위 풀링 연산 선택에 관한 연구)

  • Ryu, Seo-Hyeon
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.4 / pp.161-166 / 2018
  • In this paper, we propose a method for the random selection of pooling operations for the regularization and reduction of cross validation in convolutional neural networks. The pooling operation in convolutional neural networks is used to reduce the size of the feature map and for its shift invariant properties. In the existing pooling method, one pooling operation is applied in each pooling layer. Because this method fixes the convolution network, the network suffers from overfitting, which means that it excessively fits the models to the training samples. In addition, to find the best combination of pooling operations to maximize the performance, cross validation must be performed. To solve these problems, we introduce the probability concept into the pooling layers. The proposed method does not select one pooling operation in each pooling layer. Instead, we randomly select one pooling operation among multiple pooling operations in each pooling region during training, and for testing purposes, we use probabilistic weighting to produce the expected output. The proposed method can be seen as a technique in which many networks are approximately averaged using a different pooling operation in each pooling region. Therefore, this method avoids the overfitting problem, as well as reducing the amount of cross validation. The experimental results show that the proposed method can achieve better generalization performance and reduce the need for cross validation.
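The train/test asymmetry described above (random operation selection while training, probabilistic weighting at test time) can be sketched for a single pooling layer. The flattened 2x2 regions and the equal max/average mixture are illustrative assumptions, not the paper's exact configuration:

```python
import random

def pool_region(region, op):
    """Apply one pooling operation to a flattened pooling region."""
    return max(region) if op == "max" else sum(region) / len(region)

def pool_train(regions, rng):
    """Training: randomly pick max or average pooling per region,
    so each pass sees a slightly different network."""
    return [pool_region(r, rng.choice(["max", "avg"])) for r in regions]

def pool_test(regions, p_max=0.5):
    """Testing: probabilistically weighted expected output over the
    pooling operations, instead of committing to one of them."""
    return [p_max * pool_region(r, "max") +
            (1 - p_max) * pool_region(r, "avg") for r in regions]

regions = [[1.0, 3.0, 2.0, 0.0], [4.0, 4.0, 0.0, 0.0]]  # two flattened 2x2 regions
rng = random.Random(0)
train_out = pool_train(regions, rng)
test_out = pool_test(regions)  # [0.5*3.0 + 0.5*1.5, 0.5*4.0 + 0.5*2.0] = [2.25, 3.0]
```

The test-time expectation approximately averages the many randomly pooled networks seen during training, which is the regularization effect the abstract credits with removing the need to cross-validate over pooling combinations.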