• Title/Abstract/Keyword: Output prediction algorithm

Search results: 155 items (processing time: 0.028 s)

Closed-loop predictive control using periodic gain

  • Lee, Young-Il
    • 제어로봇시스템학회:학술대회논문집 (Institute of Control, Robotics and Systems: Conference Proceedings)
    • /
    • 제어로봇시스템학회, 1994 Proceedings of the 9th Korea Automatic Control Conference (KACC); Taejeon, Korea; 17-20 Oct. 1994
    • /
    • pp.173-176
    • /
    • 1994
  • In this paper, a closed-loop predictive control that takes an intervalwise receding horizon strategy is presented, and its stability properties are investigated. A state-space form output predictor is derived, composed of the one-step-ahead optimal output prediction and the input and output data of the system. A set of feedback gains is obtained using a dynamic programming algorithm so that they minimize a multi-stage quadratic cost function, and these gains are used periodically.

  • PDF
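The gain-computation step described above can be illustrated for a scalar system. This is a minimal sketch, not the paper's exact formulation: gains for x[k+1] = a·x[k] + b·u[k] are obtained by a backward dynamic-programming (Riccati) recursion on a multi-stage quadratic cost, then the whole gain sequence is applied periodically, as in an intervalwise receding horizon scheme. All numbers (a, b, q, r, N) are illustrative assumptions.

```python
def dp_gains(a, b, q, r, N):
    """Backward Riccati recursion minimizing sum of q*x^2 + r*u^2 over N stages."""
    P = q  # terminal cost-to-go weight
    gains = []
    for _ in range(N):
        K = (a * b * P) / (r + b * b * P)                        # optimal gain at this stage
        P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)   # cost-to-go update
        gains.append(K)
    gains.reverse()  # gains[0] is applied first within each interval
    return gains

a, b, q, r, N = 1.2, 1.0, 1.0, 0.1, 5   # open-loop unstable plant (|a| > 1)
gains = dp_gains(a, b, q, r, N)

# Apply the N gains periodically over several intervals.
x = 1.0
for k in range(4 * N):
    u = -gains[k % N] * x
    x = a * x + b * u
print(abs(x))  # the state has been driven toward zero
```

Each pass through the N gains contracts the state, which is the intuition behind the stability results investigated in the paper.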

유전자 알고리즘 기반 통합 앙상블 모형 (Genetic Algorithm based Hybrid Ensemble Model)

  • 민성환
    • Journal of Information Technology Applications and Management
    • /
    • Vol. 23, No. 1
    • /
    • pp.45-59
    • /
    • 2016
  • An ensemble classifier is a method that combines the outputs of multiple classifiers. It is widely accepted that ensemble classifiers can improve prediction accuracy, and ensemble techniques have recently been applied successfully to bankruptcy prediction. Bagging and random subspace are the most popular ensemble techniques, and each has proved very effective in improving generalization ability; however, few studies have focused on integrating the two. In this study, we propose a new hybrid ensemble model that integrates the bagging and random subspace methods using a genetic algorithm to improve model performance. The proposed model is applied to bankruptcy prediction for Korean companies and compared with other models. The experimental results show that the proposed model performs better than the single classifier, the original ensemble models, and the simple hybrid model.
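The genetic-algorithm search over ensemble configurations can be sketched roughly as follows. This is a hypothetical toy, not the paper's model: a binary chromosome decides which base classifiers join a majority-vote ensemble, and fitness is validation accuracy on made-up data rather than the bankruptcy dataset.

```python
import random

random.seed(0)
labels = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
# Each "classifier" is represented only by its prediction vector on a validation set.
clfs = [[random.randint(0, 1) for _ in labels] for _ in range(8)]

def fitness(chrom):
    members = [c for bit, c in zip(chrom, clfs) if bit]
    if not members:
        return 0.0
    votes = [round(sum(col) / len(members)) for col in zip(*members)]  # majority vote
    return sum(v == y for v, y in zip(votes, labels)) / len(labels)

def ga(pop_size=20, gens=30, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in clfs] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = random.sample(elite, 2)
            cut = random.randrange(1, len(clfs))
            child = p1[:cut] + p2[cut:]              # one-point crossover
            child = [b ^ (random.random() < p_mut) for b in child]  # bit-flip mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
print(best, fitness(best))
```

In the paper's setting the chromosome would additionally encode bagging samples and feature subspaces; here only member selection is shown.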

시계열 예측을 위한 1, 2차 미분 감소 기능의 적응 학습 알고리즘을 갖는 신경회로망 (A neural network with adaptive learning algorithm of curvature smoothing for time-series prediction)

  • 정수영;이민호;이수영
    • 전자공학회논문지C (Journal of the Institute of Electronics Engineers of Korea C)
    • /
    • Vol. 34C, No. 6
    • /
    • pp.71-78
    • /
    • 1997
  • In this paper, a new neural network training algorithm is devised for function approximators with good generalization characteristics and tested on the time-series prediction problem using the Santa Fe competition data sets. To enhance the generalization ability, a constraint term on the hidden neuron activations is added to the conventional output error, which gives curvature-smoothing characteristics to multi-layer neural networks. A hybrid learning algorithm combining error back-propagation and Hebbian learning with a weight-decay constraint is naturally derived by applying the steepest descent algorithm to the proposed cost function, without much increase in computational requirements.

  • PDF
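The modified cost described above can be sketched numerically. This assumes a plausible form of the cost, J = output error + lam·Σh² (hidden-activation constraint giving the curvature-smoothing effect) + gam·Σw² (weight decay), for a tiny 1-input, 2-hidden, 1-output network; steepest descent is done with a finite-difference gradient rather than the paper's analytic hybrid rule.

```python
import math

data = [(-1.0, -0.8), (0.0, 0.0), (1.0, 0.8)]   # toy (input, target) pairs
lam, gam = 0.01, 0.001                           # assumed penalty weights

def cost(w):
    w1a, w1b, w2a, w2b = w
    J = 0.0
    for x, t in data:
        ha, hb = math.tanh(w1a * x), math.tanh(w1b * x)  # hidden activations
        y = w2a * ha + w2b * hb
        J += (y - t) ** 2                        # conventional output error
        J += lam * (ha ** 2 + hb ** 2)           # activation-constraint term
    J += gam * sum(wi ** 2 for wi in w)          # weight-decay term
    return J

def grad(w, eps=1e-6):
    g = []
    for i in range(len(w)):
        wp, wm = list(w), list(w)
        wp[i] += eps
        wm[i] -= eps
        g.append((cost(wp) - cost(wm)) / (2 * eps))
    return g

w = [0.5, -0.3, 0.4, 0.2]
for _ in range(200):                             # steepest descent
    g = grad(w)
    w = [wi - 0.05 * gi for wi, gi in zip(w, g)]
print(cost(w))   # lower than the initial cost
```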

LP-Based Blind Adaptive Channel Identification and Equalization with Phase Offset Compensation

  • Ahn, Kyung-Sseung;Baik, Heung-Ki
    • 한국통신학회논문지 (Journal of the Korean Institute of Communications and Information Sciences)
    • /
    • Vol. 28, No. 4C
    • /
    • pp.384-391
    • /
    • 2003
  • Blind channel identification and equalization attempt to identify the communication channel and to remove the inter-symbol interference it causes without using any known training sequences. In this paper, we propose a blind adaptive channel identification and equalization algorithm with phase offset compensation for single-input multiple-output (SIMO) channels. It is based on the one-step forward multichannel linear prediction error method and can be implemented by an RLS algorithm. For the phase offset problem, we use a blind adaptive algorithm called the constant modulus derotator (CMD) algorithm, based on the constant modulus algorithm (CMA). Moreover, unlike many known subspace (SS) or cross-relation (CR) methods, the proposed algorithms do not require channel order estimation and are therefore robust to channel order mismatch.
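The constant-modulus principle underlying CMA can be shown in a rough sketch (the paper's CMD derotator and the multichannel linear-prediction RLS part are not reproduced here): an unknown scalar channel gain is blindly inverted using only the fact that BPSK symbols have constant modulus 1, with no training sequence.

```python
import random

random.seed(1)
h = 0.4        # unknown channel gain (assumption)
w = 1.0        # equalizer tap, blind initialization
mu = 0.01      # step size

for _ in range(2000):
    s = random.choice([-1.0, 1.0])   # transmitted BPSK symbol, |s| = 1
    x = h * s                        # received sample (no training data)
    y = w * x                        # equalizer output
    w -= mu * (y * y - 1.0) * y * x  # CMA stochastic-gradient step on (y^2 - 1)^2

print(abs(abs(w * h) - 1.0))  # combined channel-equalizer gain approaches 1
```

Because the update uses only the modulus of the output, it never needs the transmitted symbols, which is what makes the scheme blind.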

What are the benefits and challenges of multi-purpose dam operation modeling via deep learning : A case study of Seomjin River

  • Eun Mi Lee;Jong Hun Kam
    • 한국수자원학회:학술대회논문집 (Korea Water Resources Association: Conference Proceedings)
    • /
    • 한국수자원학회 (Korea Water Resources Association) 2023 Annual Conference
    • /
    • pp.246-246
    • /
    • 2023
  • Multi-purpose dams are operated accounting for both physical and socioeconomic factors. This study aims to evaluate the utility of a deep learning-based model for the operation of three multi-purpose dams (Seomjin River dam, Juam dam, and Juam Control dam) in the Seomjin River basin. The Gated Recurrent Unit (GRU) algorithm is applied to predict the hourly water level of the dam reservoirs over 2002-2021, with hyper-parameters optimized by the Bayesian optimization algorithm to enhance the prediction skill of the GRU model. The GRU models are set up for the following cases: single-dam input - single-dam output (S-S), multi-dam input - single-dam output (M-S), and multi-dam input - multi-dam output (M-M). Results show that the S-S cases using local dam information achieve the highest accuracy, with NSE above 0.8. Results from the M-S and M-M cases confirm that upstream dam information carries important information for predicting downstream dam operation. The S-S models are then simulated with altered outflows (-40% to +40%) to generate simulated reservoir water levels as alternative dam operational scenarios. These alternative S-S simulations show physically inconsistent results, indicating that the deep learning-based model is not explainable for multi-purpose dam operation patterns. To better understand this limitation, we further analyze the relationship between the observed water level and outflow of each dam, and find that complexity in the outflow-water level relationship limits the predictability of the GRU-based model. This study highlights the importance of socioeconomic factors hidden in multi-purpose dam operation processes for both physical process-based modeling and artificial intelligence modeling.

  • PDF
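The GRU update at the heart of the model above follows the standard gate equations. Below is a minimal single-unit forward pass; the weights and the scalar input sequence are made-up illustrations, not the trained dam model, and a real model would map the hidden state to a water level through a learned readout.

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def gru_step(x, h, p):
    z = sigmoid(p["wz"] * x + p["uz"] * h + p["bz"])           # update gate
    r = sigmoid(p["wr"] * x + p["ur"] * h + p["br"])           # reset gate
    h_cand = math.tanh(p["wh"] * x + p["uh"] * (r * h) + p["bh"])  # candidate state
    return (1.0 - z) * h + z * h_cand                          # gated blend

params = {"wz": 0.8, "uz": -0.2, "bz": 0.0,
          "wr": 0.5, "ur": 0.1, "br": 0.0,
          "wh": 1.0, "uh": 0.3, "bh": 0.0}

h = 0.0
for x in [0.1, 0.4, 0.9, 0.3]:   # e.g. scaled hourly inflow readings (illustrative)
    h = gru_step(x, h, params)
print(h)   # final hidden state, bounded in (-1, 1)
```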

잡음 ARMA 프로세스의 적응 매개변수추정 (Adaptive Parameter Estimation for Noisy ARMA Process)

  • 김석주;이기철;박종근
    • 대한전기학회논문지 (The Transactions of the Korean Institute of Electrical Engineers)
    • /
    • Vol. 39, No. 4
    • /
    • pp.380-385
    • /
    • 1990
  • This paper presents a general algorithm for the parameter estimation of an autoregressive moving average (ARMA) process observed in additive white noise. The algorithm is based on the Gauss-Newton recursive prediction error method. For the parameter estimation, the output measurement is modeled as an innovations process using spectral factorization, so that noise-free RPE ARMA estimation can be used. Using a priori known properties leads, by the parsimony principle, to an algorithm with less computation and better accuracy. Computer simulation examples show the effectiveness of the proposed algorithm.
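The innovations-form predictor that such methods are built on can be sketched for an ARMA(1,1) process y[t] = a·y[t-1] + e[t] + c·e[t-1], where the one-step prediction error plays the role of the innovation. The parameters here are fixed and illustrative; the Gauss-Newton recursive prediction error update that would adapt them is omitted.

```python
import random

random.seed(2)
a, c = 0.7, 0.3            # assumed ARMA(1,1) parameters

y_prev, e_prev = 0.0, 0.0
errors = []
for _ in range(500):
    e = random.gauss(0.0, 1.0)          # driving white noise
    y = a * y_prev + e + c * e_prev     # simulated ARMA(1,1) output
    y_hat = a * y_prev + c * e_prev     # one-step-ahead prediction
    errors.append(y - y_hat)            # equals the innovation e when parameters are exact
    y_prev, e_prev = y, e

mse = sum(err * err for err in errors) / len(errors)
print(mse)   # close to the driving-noise variance of 1
```

When the parameters are exact, the prediction error sequence is white; a recursive prediction error method adjusts (a, c) precisely to whiten this sequence.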

유연한 로보트 매니퓰레이터의 적응제어 (Adaptive Control of A One-Link Flexible Robot Manipulator)

  • 박정일;박종국
    • 전자공학회논문지B (Journal of the Institute of Electronics Engineers of Korea B)
    • /
    • Vol. 30B, No. 5
    • /
    • pp.52-61
    • /
    • 1993
  • This paper deals with an adaptive control method for a robot manipulator with one flexible link. An ARMA model is used as the prediction and estimation model, and the adaptive control scheme consists of a parameter estimation part and an adaptive controller. The parameter estimation part estimates the ARMA model's coefficients using the recursive least-squares (RLS) algorithm and generates the predicted output. A variable forgetting factor (VFF) is introduced to achieve efficient estimation, and the adaptive controller consists of a reference model, an error dynamics model, and a minimum prediction error controller. An optimal input is obtained by minimizing the input torque, its successive input change, and the error between the predicted output and the reference output.

  • PDF
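The estimation part can be sketched with recursive least squares and a forgetting factor. This uses a two-parameter ARX model y[t] = a·y[t-1] + b·u[t-1] as a stand-in for the paper's ARMA model; the variable-forgetting-factor schedule and the controller are omitted, and all numbers are assumptions.

```python
import random

random.seed(3)
a_true, b_true = 0.6, 0.5
lam = 0.98                                   # (fixed) forgetting factor

theta = [0.0, 0.0]                           # estimates of [a, b]
P = [[100.0, 0.0], [0.0, 100.0]]             # covariance; large = very uncertain

y_prev = 0.0
for _ in range(300):
    u = random.uniform(-1.0, 1.0)            # persistently exciting input
    y = a_true * y_prev + b_true * u + random.gauss(0.0, 0.01)
    phi = [y_prev, u]                        # regressor vector
    Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
            P[1][0] * phi[0] + P[1][1] * phi[1]]
    denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
    k = [Pphi[0] / denom, Pphi[1] / denom]   # gain k = P*phi / (lam + phi'*P*phi)
    err = y - (theta[0] * phi[0] + theta[1] * phi[1])  # output prediction error
    theta = [theta[0] + k[0] * err, theta[1] + k[1] * err]
    P = [[(P[i][j] - k[i] * Pphi[j]) / lam   # P = (P - k*phi'*P) / lam
          for j in range(2)] for i in range(2)]
    y_prev = y

print(theta)   # close to [0.6, 0.5]
```

A variable forgetting factor would replace the constant `lam` with a value that shrinks when the prediction error is large, speeding adaptation to parameter changes.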

불감대를 사용한 최소자승법의 일반화 (A Generalized Least Square Method using Dead Zone)

  • 이하정;최종호
    • 대한전기학회논문지 (The Transactions of the Korean Institute of Electrical Engineers)
    • /
    • Vol. 37, No. 10
    • /
    • pp.727-732
    • /
    • 1988
  • In this paper, a parameter estimation method for linear systems with bounded output disturbances is studied, where the bound of the disturbances is assumed to be known. Weighting factors are proposed to modify the LS (Least Squares) algorithm in the parameter estimation method. Conditions on the weighting factors are given so that the estimation method has good convergence properties; this condition takes a more relaxed form than other known conditions. The compensation term in the estimation equations is represented by a function of the output prediction error, and this function should lie in a specified region of the x-y plane to satisfy the conditions on the weighting factors. A set of weighting factors is selected, and an algorithm using it is proposed. The proposed algorithm is compared with an existing algorithm by simulation, and its parameter estimation performance is discussed.

  • PDF
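The dead-zone idea can be sketched in scalar form: a normalized least-squares-type update whose weighting factor is zero whenever the output prediction error is within the known disturbance bound, so bounded noise cannot corrupt the estimate. The model and all constants below are illustrative assumptions, not the paper's formulation.

```python
import math
import random

random.seed(4)
theta_true = 2.0
delta = 0.1                                  # known disturbance bound

theta = 0.0
for t in range(400):
    x = random.uniform(-1.0, 1.0)
    d = delta * math.sin(3.0 * t)            # bounded disturbance, |d| <= delta
    y = theta_true * x + d
    e = y - theta * x                        # output prediction error
    w = 1.0 if abs(e) > delta else 0.0       # dead-zone weighting factor
    theta += w * x * e / (1.0 + x * x)       # normalized LS-type update

print(theta)   # within a neighborhood of theta_true set by the bound
```

Once the estimate is good enough that the error could be explained by the disturbance alone, the dead zone freezes it, preventing the drift that plain LS would exhibit.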

HCBKA 기반 오차 보정형 TSK 퍼지 예측시스템 설계 (Design of HCBKA-Based TSK Fuzzy Prediction System with Error Compensation)

  • 방영근;이철희
    • 전기학회논문지 (The Transactions of the Korean Institute of Electrical Engineers)
    • /
    • Vol. 59, No. 6
    • /
    • pp.1159-1166
    • /
    • 2010
  • To improve the prediction quality of a nonlinear prediction system, the system must cope adequately with the uncertainty of nonlinear data. This paper presents a TSK fuzzy prediction system that can consider and deal with this uncertainty sufficiently. In the design procedure of the proposed system, HCBKA (Hierarchical Correlationship-Based K-means clustering Algorithm) is used to generate an accurate fuzzy rule base that can control the output efficiently according to the input, and the first-order difference method is applied to reflect the various characteristics of the nonlinear data. Multiple prediction systems are designed to analyze the prediction tendencies of each difference series generated by the difference method. In addition, to enhance prediction quality, an error compensation method is proposed that suitably compensates the prediction error of the systems. Finally, the prediction performance of the proposed system is verified by simulating two typical time-series examples.
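The difference-and-compensate structure can be sketched as follows. A simple moving-average predictor stands in for the TSK fuzzy models and HCBKA clustering is omitted; the scheme predicts the first-order difference, reconstructs the level, then adds a compensation term built from the errors the same scheme made recently. The series is a made-up example.

```python
series = [10.0, 11.0, 12.5, 13.0, 14.5, 15.0, 16.5, 17.0, 18.5]

def predict_next(history, comp_window=3):
    # Predict the next first-order difference from the last few differences.
    diffs = [b - a for a, b in zip(history, history[1:])]
    d_hat = sum(diffs[-3:]) / len(diffs[-3:])
    raw = history[-1] + d_hat                  # reconstructed level prediction
    # Error compensation: average the errors this scheme made on recent points.
    errors = []
    for k in range(len(history) - comp_window, len(history)):
        past_diffs = [b - a for a, b in zip(history[:k], history[1:k])]
        d = sum(past_diffs[-3:]) / len(past_diffs[-3:])
        errors.append(history[k] - (history[k - 1] + d))
    return raw + sum(errors) / len(errors)     # compensated prediction

print(predict_next(series))
```

Working on differences removes the trend that a level-based rule base would otherwise have to encode, which is why the paper applies the first-order difference method before fuzzy modeling.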

뉴로-퍼지 기법에 의한 오존농도 예측모델 (Neuro-Fuzzy Approaches to Ozone Prediction System)

  • 김태헌;김성신;김인택;이종범;김신도;김용국
    • 한국지능시스템학회논문지 (Journal of Korean Institute of Intelligent Systems)
    • /
    • Vol. 10, No. 6
    • /
    • pp.616-628
    • /
    • 2000
  • In this paper, we present the modeling of an ozone prediction system using neuro-fuzzy approaches. Because the mechanism of ozone concentration is highly complex, nonlinear, and nonstationary, modeling an ozone prediction system poses many problems, and prediction results so far have shown poor performance. The Dynamic Polynomial Neural Network (DPNN), which employs a typical GMDH (Group Method of Data Handling) algorithm, is a useful method for data analysis, identification of nonlinear complex systems, and prediction of dynamical systems. The structure of the final model is compact, and producing an output is computationally faster than with other modeling methods. In addition to the DPNN, this paper also includes a fuzzy logic method for modeling the ozone prediction system. The results of each modeling method and the ozone prediction performance are presented. The proposed neuro-fuzzy approaches give good prediction performance at both high and low ozone concentrations, with superior data approximation and self-organization ability.

  • PDF
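The GMDH selection idea behind the DPNN can be sketched in a toy form: each candidate unit combines one pair of inputs, and only the unit with the smallest validation error survives to feed the next layer. Simple linear units fitted by stochastic gradient descent stand in here for GMDH's quadratic polynomial units, and the dataset is made up (the target depends on inputs 0 and 2 only).

```python
import itertools
import random

random.seed(5)
xs = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(60)]
data = [(x, 2.0 * x[0] - 1.0 * x[2]) for x in xs]
train, valid = data[:40], data[40:]           # separate set for selection

def fit_unit(i, j, rows, steps=400, lr=0.1):
    w = [0.0, 0.0, 0.0]                       # model: w0 + w1*x[i] + w2*x[j]
    for _ in range(steps):
        x, y = random.choice(rows)
        e = (w[0] + w[1] * x[i] + w[2] * x[j]) - y
        w = [w[0] - lr * e, w[1] - lr * e * x[i], w[2] - lr * e * x[j]]
    return w

def val_error(i, j, w, rows):
    return sum((w[0] + w[1] * x[i] + w[2] * x[j] - y) ** 2
               for x, y in rows) / len(rows)

# Fit one unit per input pair; keep only the best on the validation set.
best = min(((i, j, fit_unit(i, j, train))
            for i, j in itertools.combinations(range(3), 2)),
           key=lambda u: val_error(u[0], u[1], u[2], valid))
print(best[0], best[1])   # the selected input pair
```

Stacking such selection layers, with surviving unit outputs as the next layer's inputs, is what lets GMDH-style networks grow a compact structure automatically.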