• Title/Summary/Keyword: Learning Parameter

Gas detonation cell width prediction model based on support vector regression

  • Yu, Jiyang;Hou, Bingxu;Lelyakin, Alexander;Xu, Zhanjie;Jordan, Thomas
    • Nuclear Engineering and Technology / v.49 no.7 / pp.1423-1430 / 2017
  • Detonation cell width is an important parameter in hydrogen explosion assessments. The experimental data on gas detonation are statistically analyzed to establish a universal method for numerically predicting detonation cell widths. It is commonly understood that the detonation cell width, $\lambda$, is highly correlated with the characteristic reaction zone width, $\delta$. Classical parametric regression methods were widely applied in earlier research to build an explicit semiempirical correlation for the ratio $\lambda/\delta$. The obtained correlations express the dependence of the ratio $\lambda/\delta$ on a dimensionless effective chemical activation energy and a dimensionless temperature of the gas mixture. In this paper, support vector regression (SVR), a nonparametric machine learning method, is applied to obtain functions that fit the experimental data better and give more accurate predictions. Furthermore, a third parameter, the dimensionless pressure, is considered as an additional independent variable. It is found that three-parameter SVR can significantly improve the performance of the fitting function. SVR also provides better adaptability, and the model functions can easily be renewed when the experimental database is updated or new regression parameters are considered.
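
As a rough illustration of the regression setup described above (not code from the paper), the sketch below fits a three-input SVR with scikit-learn; the input ranges and the target relation are invented stand-ins for the experimental detonation database.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Invented stand-in data: dimensionless activation energy, temperature, and pressure
X = rng.uniform(low=[20.0, 5.0, 0.5], high=[60.0, 15.0, 3.0], size=(200, 3))
# Hypothetical target: log of the cell-width ratio lambda/delta
y = 0.05 * X[:, 0] - 0.10 * X[:, 1] + 0.30 * np.log(X[:, 2]) + rng.normal(0.0, 0.05, 200)

# Three-parameter SVR with an RBF kernel; inputs are standardized first
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X, y)
print("predicted log(lambda/delta) for the first 5 mixtures:", model.predict(X[:5]))
```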

A Study on Dual Response Approach Combining Neural Network and Genetic Algorithm (인공신경망과 유전알고리즘 기반의 쌍대반응표면분석에 관한 연구)

  • Arungpadang, Tritiya R.;Kim, Young Jin
    • Journal of Korean Institute of Industrial Engineers / v.39 no.5 / pp.361-366 / 2013
  • Prediction of process parameters is very important in parameter design. If the predictions are reasonably accurate, the quality improvement process can save time and reduce cost. The dual response approach based on response surface methodology has been widely investigated. The dual response approach takes advantage of optimization modeling to find the optimum settings of the input factors by modeling the mean and variance responses separately. This study proposes an alternative dual response approach based on machine learning techniques instead of statistical analysis tools. A hybrid neural network-genetic algorithm is proposed for parameter design. A neural network is first constructed to model the relationship between the responses and the input factors; the mean and variance responses correspond to the output nodes, while the input factors are used as the input nodes. Using empirical process data, process parameters can be predicted without performing actual experiments. A genetic algorithm is then applied to find the optimum settings of the input factors, with the neural network used to evaluate the mean and variance responses. A drug formulation example from the pharmaceutical industry is studied to demonstrate the procedure and applicability of the proposed approach.
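
A minimal sketch of the same idea, assuming invented process data and using scikit-learn's MLPRegressor in place of the authors' network: a neural net models the mean and variance responses, and a plain genetic algorithm then searches the input-factor space using the net as its fitness evaluator.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Hypothetical process data: 3 input factors -> (mean response, variance response)
X = rng.uniform(-1.0, 1.0, size=(300, 3))
mean_y = 10.0 - (X ** 2).sum(axis=1) + rng.normal(0.0, 0.1, 300)
var_y = 0.5 + 0.3 * np.abs(X[:, 0]) + rng.normal(0.0, 0.02, 300)

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
net.fit(X, np.column_stack([mean_y, var_y]))

def fitness(pop):
    pred = net.predict(pop)                 # columns: [mean, variance]
    return pred[:, 0] - 2.0 * pred[:, 1]    # reward high mean, penalize variance

# Plain genetic algorithm: selection, crossover, mutation over the input settings
pop = rng.uniform(-1.0, 1.0, size=(50, 3))
for _ in range(100):
    f = fitness(pop)
    parents = pop[np.argsort(f)[-25:]]                       # keep the best half
    pairs = rng.integers(0, 25, size=(50, 2))
    children = 0.5 * (parents[pairs[:, 0]] + parents[pairs[:, 1]])
    pop = np.clip(children + rng.normal(0.0, 0.05, children.shape), -1.0, 1.0)

print("candidate optimum setting of input factors:", pop[np.argmax(fitness(pop))])
```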

Design of PID adaptive control system combining Genetic Algorithms and Neural Network (유전알고리즘과 신경망을 결합한 PID 적응제어 시스템의 설계)

  • 조용갑;박재형;박윤명;서현재;최부귀
    • Journal of the Korea Institute of Information and Communication Engineering / v.3 no.1 / pp.105-111 / 1999
  • This paper addresses how to determine the best parameters of a PID controller using genetic algorithms and neural networks. Control tuned by a genetic algorithm alone, which runs off-line, is weak against disturbances, so we improve it by adding a neural network to the controller and operating it on-line, as follows. First, we find the PID parameters with a genetic algorithm in the forward pass of the neural network and obtain the best output condition as the number of generations increases. Second, we demonstrate through simulation the adaptability to disturbances gained by correcting the parameters with the backpropagation learning rule, exploiting the learning ability of the neural network.
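
A minimal sketch of the off-line GA stage only, assuming a hypothetical first-order plant; the on-line neural-network correction described in the abstract is omitted here.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(kp, ki, kd, n=200, dt=0.01):
    """Step response of a hypothetical first-order plant under PID control."""
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(n):
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u)            # plant: tau*dy/dt = -y + u with tau = 1
        prev_err = err
        cost += abs(err) * dt         # integral of absolute tracking error
    return cost

# Genetic algorithm over (Kp, Ki, Kd): the fittest gains minimize the tracking error
pop = rng.uniform(0.0, 10.0, size=(40, 3))
for _ in range(60):
    costs = np.array([simulate(*g) for g in pop])
    elite = pop[np.argsort(costs)[:20]]
    pairs = rng.integers(0, 20, size=(40, 2))
    pop = 0.5 * (elite[pairs[:, 0]] + elite[pairs[:, 1]])
    pop = np.clip(pop + rng.normal(0.0, 0.2, pop.shape), 0.0, 10.0)

print("GA-tuned PID gains (Kp, Ki, Kd):", pop[np.argmin([simulate(*g) for g in pop])])
```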

Hyper Parameter Tuning Method based on Sampling for Optimal LSTM Model

  • Kim, Hyemee;Jeong, Ryeji;Bae, Hyerim
    • Journal of the Korea Society of Computer and Information / v.24 no.1 / pp.137-143 / 2019
  • As the performance of computers increases, deep learning, which faced technical limitations in the past, is being used in increasingly diverse ways. In many fields, deep learning has contributed to the creation of added value, and it is applied to ever larger amounts of data as its applications diversify. The process of obtaining a better-performing model therefore takes longer than before, so it is necessary to find an optimal model that shows the best performance more quickly. In artificial neural network modeling, a tuning process that changes various elements of the network is used to improve model performance. Apart from Grid Search and Manual Search, which are widely used tuning methods, most methodologies have been developed around heuristic algorithms. A heuristic algorithm can produce results in a short time, but those results are likely to be only a local optimum. Obtaining the global optimum eliminates that possibility. Although the Brute Force Method is commonly used to find the global optimum, it is not applicable here because of the virtually infinite number of hyper-parameter combinations. In this paper, we use a statistical technique to reduce the number of candidate cases so that the global optimum can be found.
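
As a hedged illustration of sampling instead of brute force (not the paper's specific statistical technique), the sketch below enumerates a hypothetical LSTM hyper-parameter grid and evaluates only a random sample of configurations; `validation_loss` is a stand-in for actually training and validating a model.

```python
import itertools
import random

random.seed(0)

# Hypothetical LSTM hyper-parameter grid; the full product is too large to brute-force
grid = {
    "hidden_units": [32, 64, 128, 256],
    "layers": [1, 2, 3],
    "dropout": [0.0, 0.2, 0.4],
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "batch_size": [16, 32, 64, 128],
}

def validation_loss(cfg):
    # Stand-in for training an LSTM with this configuration and returning its loss
    return abs(cfg["learning_rate"] - 1e-3) + 0.01 * cfg["layers"] + 0.05 * cfg["dropout"]

all_configs = [dict(zip(grid, values)) for values in itertools.product(*grid.values())]
sampled = random.sample(all_configs, k=30)   # evaluate a sample, not the full grid
best = min(sampled, key=validation_loss)
print(len(all_configs), "configurations in the full grid; best sampled:", best)
```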

A Study on Performance Improvement of Fuzzy Min-Max Neural Network Using Gating Network

  • Kwak, Byoung-Dong;Park, Kwang-Hyun;Z. Zenn Bien
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.09a / pp.492-495 / 2003
  • The Fuzzy Min-Max Neural Network (FMMNN) is a powerful classifier, but it has some problems: the learning result depends on the presentation order of the input data and on the training parameter that limits the size of the hyperboxes, and the latter affects the result seriously. In this paper, a new approach is proposed to alleviate this without losing the on-line learning ability. A committee machine is used to achieve a multi-resolution FMMNN. Each expert is an FMMNN with a fixed training parameter, so the advantages of small and large training parameters are used at the same time. The parameters are selected by performance and independence measures. The decision of each expert is guided by the gating network, so both regional and parametric divide-and-conquer schemes are used. Simulation shows that the proposed method has better classification performance.
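
A loose sketch of the committee-plus-gating idea, with k-nearest-neighbour classifiers standing in for FMMNN experts of coarse and fine resolution and a logistic-regression gate choosing between them; the data set and models are illustrative only, not the paper's.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# Stand-in experts: two classifiers with coarse vs. fine "resolution"
# (the paper's experts are FMMNNs with different hyperbox-size parameters)
X, y = make_moons(n_samples=400, noise=0.25, random_state=0)
coarse = KNeighborsClassifier(n_neighbors=25).fit(X[:300], y[:300])
fine = KNeighborsClassifier(n_neighbors=3).fit(X[:300], y[:300])

# Gating network: learns, per input region, which expert tends to be correct
correct = np.column_stack([
    coarse.predict(X[:300]) == y[:300],
    fine.predict(X[:300]) == y[:300],
])
gate = LogisticRegression().fit(X[:300], np.argmax(correct, axis=1))

# Committee decision: follow the expert chosen by the gate for each test point
choice = gate.predict(X[300:])
preds = np.where(choice == 0, coarse.predict(X[300:]), fine.predict(X[300:]))
print("committee accuracy on held-out points:", (preds == y[300:]).mean())
```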

A Study on Dynamic Modeling of Photovoltaic Power Generator Systems using Probability and Statistics Theories (확률 및 통계이론 기반 태양광 발전 시스템의 동적 모델링에 관한 연구)

  • Cho, Hyun-Cheol
    • The Transactions of The Korean Institute of Electrical Engineers / v.61 no.7 / pp.1007-1013 / 2012
  • Modeling of photovoltaic power systems is important for analytically predicting their dynamics in practical applications. This paper presents a novel modeling algorithm for such systems based on probability and statistics theories. We first establish a linear model, essentially composed of Fourier parameter sets, for mapping the input/output variables of photovoltaic systems. The proposed model takes the solar irradiation and the ambient temperature of the photovoltaic modules as an input vector, and the inverter power output is estimated sequentially. We treat these measurements as random variables and derive a parameter learning algorithm for the model in statistical terms. Our learning algorithm requires computing an expectation and a joint expectation over the solar irradiation and ambient temperature, which are solved analytically by integral calculus. To test the proposed modeling algorithm, we use realistic measurement data sets obtained from the Seokwang solar power plant in Youngcheon, Korea. We demonstrate the reliability and superiority of the proposed photovoltaic system model by observing the error between the practical system output and its estimate.
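
A minimal sketch, assuming synthetic irradiation/temperature/power data, of a linear model built on Fourier features of the two inputs; the coefficients are fitted here by ordinary least squares rather than by the expectation-based learning rule derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins for the measurements used in the paper:
# solar irradiation G (W/m^2), module temperature T (deg C), inverter output P (kW)
G = rng.uniform(100.0, 1000.0, 500)
T = rng.uniform(10.0, 60.0, 500)
P = 0.004 * G - 0.01 * T + 0.5 * np.sin(np.pi * G / 1000.0) + rng.normal(0.0, 0.05, 500)

def fourier_features(g, t, order=3):
    """Linear-model basis built from Fourier terms of the two inputs."""
    g_s, t_s = np.pi * g / 1000.0, np.pi * t / 60.0
    cols = [np.ones_like(g)]
    for k in range(1, order + 1):
        cols += [np.sin(k * g_s), np.cos(k * g_s), np.sin(k * t_s), np.cos(k * t_s)]
    return np.column_stack(cols)

# Least-squares estimate of the Fourier coefficients
Phi = fourier_features(G, T)
theta, *_ = np.linalg.lstsq(Phi, P, rcond=None)
print("RMS model error:", np.sqrt(np.mean((Phi @ theta - P) ** 2)))
```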

Data-Driven Batch Processing for Parameter Calibration of a Sensor System (센서 시스템의 매개변수 교정을 위한 데이터 기반 일괄 처리 방법)

  • Kyuman Lee
    • Journal of Sensor Science and Technology / v.32 no.6 / pp.475-480 / 2023
  • When modeling a sensor system mathematically, we assume that the sensor noise is Gaussian and white to simplify the model. If this assumption fails, the performance of the sensor model-based controller or estimator degrades due to incorrect modeling. In practice, non-Gaussian or non-white noise sources often arise in many digital sensor systems. Additionally, the noise parameters of the sensor model are not known in advance without additional noise statistical information. Moreover, disturbances or high nonlinearities often cause unknown sensor modeling errors. To estimate the uncertain noise and model parameters of a sensor system, this paper proposes an iterative batch calibration method using data-driven machine learning. Our simulation results validate the calibration performance of the proposed approach.
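
A hedged sketch of batch calibration on logged data, assuming a hypothetical linear sensor model with non-Gaussian (Laplace) noise; the iterative reweighting loop below is an illustrative stand-in for the paper's machine-learning-based calibration, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical sensor model: z = a * x + b + noise, with heavy-tailed noise
true_a, true_b = 1.05, -0.3
x = rng.uniform(0.0, 10.0, 1000)                       # reference (ground-truth) signal
z = true_a * x + true_b + rng.laplace(0.0, 0.2, 1000)  # logged sensor readings

# Iterative batch calibration: refit the parameters on the full batch, then
# down-weight samples that look like outliers (a simple IRLS-style loop)
a, b = 1.0, 0.0
w = np.ones_like(x)
for _ in range(10):
    A = np.column_stack([x * np.sqrt(w), np.sqrt(w)])
    a, b = np.linalg.lstsq(A, z * np.sqrt(w), rcond=None)[0]
    resid = z - (a * x + b)
    w = 1.0 / (np.abs(resid) + 1e-3)                   # robust reweighting
print(f"calibrated scale={a:.3f}, bias={b:.3f}")
```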

Learning Input Shaping Control with Parameter Estimation for Nonlinear Actuators (비선형 구동기의 변수추정을 통한 학습입력성형제어기)

  • Kim, Deuk-Hyeon;Sung, Yoon-Gyung;Jang, Wan-Shik
    • Transactions of the Korean Society of Mechanical Engineers A / v.35 no.11 / pp.1423-1428 / 2011
  • This paper proposes a learning input shaper with nonlinear actuator dynamics to reduce the residual vibration of flexible systems. The controller is composed of an estimator of the time constant of the nonlinear actuator dynamics, a recursive least squares method, and an iterative updating algorithm. The updating mechanism is modified by introducing a vibration measurement function to cope with the dynamics of nonlinear actuators. The controller is numerically evaluated with respect to parameter convergence and control performance by using a benchmark pendulum system. The feasibility and applicability of the controller are demonstrated by comparing its control performance to that of an existing controller algorithm.
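
A minimal sketch of the recursive-least-squares ingredient only, assuming a hypothetical first-order actuator; the input-shaping and iterative-update parts of the controller are not shown.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical first-order actuator: tau * dy/dt + y = u, discretised with step dt
dt, tau = 0.01, 0.15
a_true, b_true = np.exp(-dt / tau), 1.0 - np.exp(-dt / tau)

# Recursive least squares estimate of (a, b) from measured input/output samples
theta = np.zeros(2)                 # parameter estimate [a, b]
P = np.eye(2) * 1e3                 # covariance of the estimate
y = 0.0
for k in range(500):
    u = np.sin(0.5 * k * dt) + 0.2 * rng.standard_normal()
    y_next = a_true * y + b_true * u + 0.002 * rng.standard_normal()
    phi = np.array([y, u])
    gain = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + gain * (y_next - phi @ theta)
    P = P - np.outer(gain, phi) @ P
    y = y_next

tau_hat = -dt / np.log(theta[0])    # recover the continuous-time constant
print(f"estimated actuator time constant: {tau_hat:.3f} s (true {tau} s)")
```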

A Study on the Portfolio Performance Evaluation using Actor-Critic Reinforcement Learning Algorithms (액터-크리틱 모형기반 포트폴리오 연구)

  • Lee, Woo Sik
    • Journal of the Korean Society of Industry Convergence / v.25 no.3 / pp.467-476 / 2022
  • The Bank of Korea raised its benchmark interest rate by a quarter percentage point to 1.75 percent per year, and analysts predict that South Korea's policy rate will reach 2.00 percent by the end of calendar year 2022. Furthermore, because market volatility has increased significantly due to a variety of factors, including rising rates and inflation, many investors have struggled to meet their financial objectives or deliver returns. In this situation, banks and financial institutions are attempting to provide robo-advisors that manage client portfolios without human intervention. In this regard, determining the best hyper-parameter combination is becoming increasingly important. This study compares several activation functions of the Deep Deterministic Policy Gradient (DDPG) and Twin-Delayed Deep Deterministic Policy Gradient (TD3) algorithms for choosing a sequence of actions that maximizes the long-term reward. According to the results, DDPG and TD3 outperformed their benchmark index. One reason for this is that we need to understand the action probabilities in order to choose an action and receive a reward, which we then compare to the state value to determine an advantage. As interest in machine learning has grown and research into deep reinforcement learning has become more active, finding an optimal hyper-parameter combination for DDPG and TD3 has become increasingly important.
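
A small sketch of treating the activation function as a hyper-parameter of the actor network, assuming PyTorch is available; `make_actor` and the training call it alludes to are hypothetical, and the DDPG/TD3 training loop itself is omitted.

```python
import itertools

import torch.nn as nn

# Candidate activation functions to compare as a hyper-parameter of the
# actor/critic networks (the DDPG/TD3 training loop itself is not shown)
ACTIVATIONS = {"relu": nn.ReLU, "tanh": nn.Tanh, "elu": nn.ELU}

def make_actor(state_dim: int, action_dim: int, activation: str) -> nn.Module:
    """Hypothetical actor network; the final Tanh bounds the action components."""
    act = ACTIVATIONS[activation]
    return nn.Sequential(
        nn.Linear(state_dim, 64), act(),
        nn.Linear(64, 64), act(),
        nn.Linear(64, action_dim), nn.Tanh(),
    )

for actor_act, critic_act in itertools.product(ACTIVATIONS, repeat=2):
    actor = make_actor(state_dim=10, action_dim=4, activation=actor_act)
    # a train_td3(actor, critic_act, ...) call would run here for each combination
    print("candidate activation pair (actor, critic):", actor_act, critic_act)
```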