• Title/Summary/Keyword: Parameter learning


Gas detonation cell width prediction model based on support vector regression

  • Yu, Jiyang; Hou, Bingxu; Lelyakin, Alexander; Xu, Zhanjie; Jordan, Thomas
    • Nuclear Engineering and Technology / Vol.49 No.7 / pp.1423-1430 / 2017
  • Detonation cell width is an important parameter in hydrogen explosion assessments. The experimental data on gas detonation are statistically analyzed to establish a universal method to numerically predict detonation cell widths. It is commonly understood that the detonation cell width, $\lambda$, is highly correlated with the characteristic reaction zone width, $\delta$. Classical parametric regression methods were widely applied in earlier research to build an explicit semiempirical correlation for the ratio $\lambda/\delta$. The obtained correlations formulate the dependency of the ratio $\lambda/\delta$ on a dimensionless effective chemical activation energy and a dimensionless temperature of the gas mixture. In this paper, support vector regression (SVR), which is based on nonparametric machine learning, is applied to achieve functions that fit the experimental data better and give more accurate predictions. Furthermore, a third parameter, a dimensionless pressure, is considered as an additional independent variable. It is found that the three-parameter SVR can significantly improve the performance of the fitting function. Meanwhile, SVR also provides better adaptability, and the model functions can easily be renewed when the experimental database is updated or new regression parameters are considered.
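As a rough illustration of the three-parameter SVR fit described above, the sketch below trains scikit-learn's RBF-kernel epsilon-SVR on synthetic (activation energy, temperature, pressure) triples. The data, feature ranges, target relation, and hyperparameters are all hypothetical stand-ins, not the paper's.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Hypothetical dimensionless features: activation energy, temperature, pressure.
X = rng.uniform(low=[10.0, 2.0, 0.5], high=[60.0, 10.0, 3.0], size=(200, 3))
# Synthetic stand-in for the lambda/delta ratio; the real correlation is in the paper.
y = 0.1 * X[:, 0] - 0.3 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.05, 200)

# Epsilon-SVR with an RBF kernel: nonparametric regression, as in the paper.
model = SVR(kernel="rbf", C=10.0, epsilon=0.01)
model.fit(X, y)

pred = model.predict(X[:5])
print(pred.shape)
```

Because SVR is nonparametric, refitting on an updated database is just a new `fit` call, which is the adaptability advantage the abstract points out.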

인공신경망과 유전알고리즘 기반의 쌍대반응표면분석에 관한 연구 (A Study on Dual Response Approach Combining Neural Network and Genetic Algorithm)

  • ;김영진
    • 대한산업공학회지 / Vol.39 No.5 / pp.361-366 / 2013
  • Prediction of process parameters is very important in parameter design. If the predictions are fairly accurate, the quality improvement process is useful for saving time and reducing cost. The concept of the dual response approach based on response surface methodology has been widely investigated. The dual response approach can take advantage of optimization modeling to find the optimum settings of the input factors by modeling the mean and variance responses separately. This study proposes an alternative dual response approach based on machine learning techniques instead of statistical analysis tools. A hybrid neural network-genetic algorithm is proposed for the purpose of parameter design. A neural network is first constructed to model the relationship between the responses and the input factors: the mean and variance responses correspond to the output nodes, while the input factors are used for the input nodes. Using empirical process data, the process parameters can be predicted without performing actual experiments. A genetic algorithm is then applied to find the optimum settings of the input factors, where the neural network is used to evaluate the mean and variance responses. A drug formulation example from the pharmaceutical industry is studied to demonstrate the procedure and applicability of the proposed approach.
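The hybrid scheme above can be caricatured in a few lines: a closed-form surrogate stands in for the trained neural network's mean/variance outputs, and a tiny genetic algorithm searches the input-factor space against a dual-response objective. Every function and setting here is illustrative, not taken from the paper.

```python
import random

random.seed(1)

def surrogate(x):
    """Stand-in for the trained NN: returns (mean, variance) for input factor x."""
    mean = (x - 0.3) ** 2 + 1.0      # hypothetical mean response
    var = 0.1 + 0.05 * abs(x)        # hypothetical variance response
    return mean, var

def fitness(x, target=1.0):
    # Dual-response objective: squared deviation of mean from target plus variance.
    mean, var = surrogate(x)
    return (mean - target) ** 2 + var

def genetic_search(pop_size=30, gens=50, lo=-1.0, hi=1.0):
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = 0.5 * (a + b)                 # arithmetic crossover
            child += random.gauss(0, 0.05)        # Gaussian mutation
            children.append(min(hi, max(lo, child)))
        pop = parents + children
    return min(pop, key=fitness)

best = genetic_search()
print(round(best, 3))
```

The design point: the GA never touches the real process; it only queries the surrogate, which is exactly why the NN model lets one skip actual experiments during optimization.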

유전알고리즘과 신경망을 결합한 PID 적응제어 시스템의 설계 (Design of PID adaptive control system combining Genetic Algorithms and Neural Network)

  • 조용갑; 박재형; 박윤명; 서현재; 최부귀
    • 한국정보통신학회논문지 / Vol.3 No.1 / pp.105-111 / 1999
  • This paper extracts the optimal parameters of a PID controller using a genetic algorithm and a neural network. Control by a genetic algorithm is an off-line operation and is therefore weak against disturbances and load variations. A neural network is thus added to the controller to make it operate on-line, with the following improvements. First, in the forward operation of the neural network, suitable PID parameters are found by the genetic algorithm and the optimal output conditions are established as the number of generations increases. Second, using the learning ability of the neural network, the parameters are corrected by backpropagation learning, and the adaptability to disturbances and various load variations is demonstrated by simulation.
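The off-line GA step can be sketched as follows: a hypothetical cost function stands in for the simulated plant's integrated error under PID gains (kp, ki, kd), and an elitist GA with Gaussian mutation searches the gain space. The cost function, gain ranges, and GA settings are all assumptions for illustration.

```python
import random

random.seed(4)

def plant_error(kp, ki, kd):
    """Stand-in cost: integrated error of a hypothetical plant under PID (kp, ki, kd)."""
    return (kp - 2.0) ** 2 + (ki - 0.5) ** 2 + (kd - 0.1) ** 2

def ga_tune(pop_size=20, gens=40):
    # Each individual is a candidate gain triple sampled from an assumed range.
    pop = [[random.uniform(0, 5) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda g: plant_error(*g))
        elite = pop[: pop_size // 2]              # keep the better half
        # Refill by mutating randomly chosen elites (no crossover, for brevity).
        pop = elite + [[p + random.gauss(0, 0.1) for p in random.choice(elite)]
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=lambda g: plant_error(*g))

kp, ki, kd = ga_tune()
print(round(kp, 1), round(ki, 1), round(kd, 1))
```

In the paper's scheme these GA-found gains are only the starting point; the neural network then corrects them on-line by backpropagation as disturbances and load changes arrive.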


Hyper Parameter Tuning Method based on Sampling for Optimal LSTM Model

  • Kim, Hyemee; Jeong, Ryeji; Bae, Hyerim
    • 한국컴퓨터정보학회논문지 / Vol.24 No.1 / pp.137-143 / 2019
  • As the performance of computers increases, the use of deep learning, which faced technical limits in the past, is becoming more diverse. In many fields, deep learning has contributed to the creation of added value, and it is applied to ever more data as its applications diversify. The process of obtaining a better-performing model therefore takes longer than before, and it becomes necessary to find the optimal model that shows the best performance more quickly. In artificial neural network modeling, a tuning process that changes various elements of the neural network model is used to improve model performance. Apart from Grid Search and Manual Search, which are widely used as tuning methods, most methodologies have been developed around heuristic algorithms. A heuristic algorithm can produce results in a short time, but the results are likely to be a local optimum. Finding the global optimum eliminates that possibility. Although a brute-force method is commonly used to find the global optimum, it is not applicable here because of the effectively infinite number of hyperparameter combinations. In this paper, we use a statistical technique to reduce the number of candidate cases, so that the global optimum can be found.
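A minimal sketch contrasting exhaustive grid search with sampling over the same space: rather than evaluating every combination, draw a small random sample of configurations and keep the best. The objective function and LSTM hyperparameter ranges below are made up for illustration and are not the paper's statistical reduction technique itself.

```python
import itertools
import random

random.seed(0)

def score(units, lr, dropout):
    """Hypothetical validation score for an LSTM configuration (a stand-in)."""
    return -((units - 64) ** 2) / 1e4 - (lr - 0.01) ** 2 * 1e3 - (dropout - 0.2) ** 2

units_grid = [16, 32, 64, 128, 256]
lr_grid = [0.001, 0.005, 0.01, 0.05, 0.1]
dropout_grid = [0.0, 0.1, 0.2, 0.3, 0.5]

# Exhaustive grid search: 5 * 5 * 5 = 125 evaluations.
grid_best = max(itertools.product(units_grid, lr_grid, dropout_grid),
                key=lambda c: score(*c))

# Sampling: evaluate only 25 random configurations from the same space.
sample = [(random.choice(units_grid), random.choice(lr_grid), random.choice(dropout_grid))
          for _ in range(25)]
sample_best = max(sample, key=lambda c: score(*c))

print(grid_best, sample_best)
```

The sample can at best match the grid's optimum, which is why a principled statistical reduction of the candidate set, as the paper proposes, matters more than naive random draws.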

A Study on Performance Improvement of Fuzzy Min-Max Neural Network Using Gating Network

  • Kwak, Byoung-Dong; Park, Kwang-Hyun; Z. Zenn Bien
    • 한국지능시스템학회:학술대회논문집 / 한국퍼지및지능시스템학회 2003년도 ISIS 2003 / pp.492-495 / 2003
  • The Fuzzy Min-Max Neural Network (FMMNN) is a powerful classifier, but it has some problems: the learning result depends on the presentation order of the input data and on the training parameter that limits the size of the hyperboxes, and the latter affects the result seriously. In this paper, a new approach that alleviates this without loss of the on-line learning ability is proposed. A committee machine is used to achieve a multi-resolution FMMNN. Each expert is an FMMNN with a fixed training parameter, so the advantages of small and large training parameters are exploited at the same time. The parameters are selected by performance and independence measures. The decision of each expert is guided by the gating network; therefore, both a regional and a parametric divide-and-conquer scheme are used. Simulation shows that the proposed method has better classification performance.
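The gating idea can be shown in isolation: two stand-in experts (representing FMMNNs trained with small and large hyperbox-size parameters) emit class-membership scores, and a gating function weights them per input before the committee decides. The expert outputs and gate scores below are fixed placeholders, not trained networks.

```python
import numpy as np

# Stand-in experts: each represents an FMMNN with a different hyperbox-size parameter.
def expert_small(x):
    return np.array([0.7, 0.3])       # fine-resolution class-membership scores

def expert_large(x):
    return np.array([0.4, 0.6])       # coarse-resolution class-membership scores

def gating(x):
    """Gating network stand-in: per-expert weights for this input (softmax)."""
    scores = np.array([1.2, 0.3])     # hypothetical gate outputs for x
    e = np.exp(scores - scores.max())
    return e / e.sum()

def committee_predict(x):
    w = gating(x)
    combined = w[0] * expert_small(x) + w[1] * expert_large(x)
    return int(np.argmax(combined))   # committee decision

print(committee_predict(None))
```

Because the gate reweights experts per input region, the committee realizes the regional divide-and-conquer the abstract describes, on top of the parametric split across hyperbox sizes.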


확률 및 통계이론 기반 태양광 발전 시스템의 동적 모델링에 관한 연구 (A Study on Dynamic Modeling of Photovoltaic Power Generator Systems using Probability and Statistics Theories)

  • 조현철
    • 전기학회논문지 / Vol.61 No.7 / pp.1007-1013 / 2012
  • Modeling of photovoltaic power systems is important for analytically predicting their dynamics in practical applications. This paper presents a novel modeling algorithm for such systems based on probability and statistics theories. We first establish a linear model, basically composed of Fourier parameter sets, for mapping the input/output variables of photovoltaic systems. The proposed model takes the solar irradiation and the ambient temperature of the photovoltaic modules as the input vector, and the inverter power output is estimated sequentially. We treat these measurements as random variables and derive a parameter learning algorithm for the model in terms of statistics. Our learning algorithm requires the computation of an expectation and a joint expectation with respect to solar irradiation and ambient temperature, which are solved analytically by integral calculus. To test the proposed modeling algorithm, we use realistic measurement data sets obtained from the Seokwang solar power plant in Youngcheon, Korea. We demonstrate the reliability and superiority of the proposed photovoltaic system model by observing the error signals between the practical system output and its estimate.
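A toy version of the Fourier-parameterized linear model: build sine/cosine features of irradiation and temperature and learn the coefficients by ordinary least squares (in place of the paper's expectation-based statistical learning). The basis size, input scaling, and synthetic data are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic measurements: irradiation G [kW/m^2], ambient temperature T [deg C].
G = rng.uniform(0.1, 1.0, 300)
T = rng.uniform(5.0, 35.0, 300)
power = 2.0 * G - 0.01 * G * T + rng.normal(0, 0.02, 300)  # stand-in inverter output

def fourier_features(G, T, n=2):
    """Fourier basis over (roughly) normalized inputs; the ones column is the bias."""
    cols = [np.ones_like(G)]
    for k in range(1, n + 1):
        cols += [np.sin(k * np.pi * G), np.cos(k * np.pi * G),
                 np.sin(k * np.pi * T / 40.0), np.cos(k * np.pi * T / 40.0)]
    return np.column_stack(cols)

Phi = fourier_features(G, T)
theta, *_ = np.linalg.lstsq(Phi, power, rcond=None)  # linear-in-parameters learning
estimate = Phi @ theta

rmse = float(np.sqrt(np.mean((estimate - power) ** 2)))
print(round(rmse, 3))
```

The model stays linear in its parameters even though the basis is nonlinear in the inputs, which is what makes a closed-form (here least-squares, in the paper expectation-based) learning rule possible.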

센서 시스템의 매개변수 교정을 위한 데이터 기반 일괄 처리 방법 (Data-Driven Batch Processing for Parameter Calibration of a Sensor System)

  • 이규만
    • 센서학회지 / Vol.32 No.6 / pp.475-480 / 2023
  • When modeling a sensor system mathematically, we assume that the sensor noise is Gaussian and white to simplify the model. If this assumption fails, the performance of the sensor model-based controller or estimator degrades due to incorrect modeling. In practice, non-Gaussian or non-white noise sources often arise in many digital sensor systems. Additionally, the noise parameters of the sensor model are not known in advance without additional noise statistical information. Moreover, disturbances or high nonlinearities often cause unknown sensor modeling errors. To estimate the uncertain noise and model parameters of a sensor system, this paper proposes an iterative batch calibration method using data-driven machine learning. Our simulation results validate the calibration performance of the proposed approach.
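The batch-calibration idea can be sketched in a single pass: estimate a sensor's gain and bias from a recorded batch of paired reference/measurement samples by least squares, even when the noise is deliberately non-Gaussian. The sensor model, noise distribution, and numbers are hypothetical; the paper's iterative machine-learning scheme is only loosely caricatured here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ground-truth sensor model: measurement = gain * truth + bias + non-Gaussian noise.
truth = rng.uniform(0.0, 10.0, 500)
noise = rng.laplace(0.0, 0.1, 500)        # deliberately non-Gaussian, non-white-ideal
measured = 1.05 * truth + 0.3 + noise

# Batch calibration: solve for (gain, bias) over the whole recorded batch at once.
A = np.column_stack([truth, np.ones_like(truth)])
(gain, bias), *_ = np.linalg.lstsq(A, measured, rcond=None)

calibrated = (measured - bias) / gain     # apply the inverse sensor model
rmse = float(np.sqrt(np.mean((calibrated - truth) ** 2)))
print(round(gain, 2), round(bias, 2), round(rmse, 3))
```

Iterating this batch fit as new data arrives, as the paper proposes, lets the calibration track noise and model parameters that were unknown in advance.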

비선형 구동기의 변수추정을 통한 학습입력성형제어기 (Learning Input Shaping Control with Parameter Estimation for Nonlinear Actuators)

  • 김득현; 성윤경; 장완식
    • 대한기계학회논문집A / Vol.35 No.11 / pp.1423-1428 / 2011
  • This paper presents a learning input shaping controller for reducing the residual displacement of flexible systems that include nonlinear actuators. The proposed controller is developed by integrating an input shaping controller for the nonlinear actuator, a recursive least squares method, and an updating rule for the design parameters. A residual-displacement measurement function is presented to improve the updating mechanism of the input shaper design parameters for the nonlinear actuator. The practicality of the proposed control method is verified numerically by applying it to a pendulum system and evaluating the convergence of the parameter estimation and the displacement reduction performance.
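The recursive least squares ingredient of the scheme can be sketched on its own: parameters of a simple linear-in-parameters actuator model are updated sample-by-sample with a forgetting factor. The model, regressors, forgetting factor, and noise level are assumptions for illustration, not the paper's pendulum setup.

```python
import numpy as np

rng = np.random.default_rng(3)

theta_true = np.array([0.8, -0.2])        # hypothetical actuator parameters

# Recursive least squares with forgetting factor lam.
theta = np.zeros(2)
P = np.eye(2) * 1e3                       # large initial covariance: low confidence
lam = 0.99

for _ in range(400):
    phi = rng.normal(0.0, 1.0, 2)         # regressor (e.g. past input/output samples)
    y = phi @ theta_true + rng.normal(0.0, 0.01)
    K = P @ phi / (lam + phi @ P @ phi)   # gain vector
    theta = theta + K * (y - phi @ theta) # correct parameters by prediction error
    P = (P - np.outer(K, phi) @ P) / lam  # covariance update with forgetting

print(np.round(theta, 2))
```

The forgetting factor below 1 keeps the estimator responsive, which is what allows the shaper's design parameters to keep adapting as the nonlinear actuator's behavior drifts.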

액터-크리틱 모형기반 포트폴리오 연구 (A Study on the Portfolio Performance Evaluation using Actor-Critic Reinforcement Learning Algorithms)

  • 이우식
    • 한국산업융합학회 논문집 / Vol.25 No.3 / pp.467-476 / 2022
  • The Bank of Korea raised the benchmark interest rate by a quarter percentage point to 1.75 percent per year, and analysts predict that South Korea's policy rate will reach 2.00 percent by the end of calendar year 2022. Furthermore, because market volatility has increased significantly due to a variety of factors, including rising rates and inflation, many investors have struggled to meet their financial objectives or deliver returns. In this situation, banks and financial institutions are attempting to provide robo-advisors that manage client portfolios without human intervention. In this regard, determining the best hyperparameter combination is becoming increasingly important. This study compares several activation functions of the Deep Deterministic Policy Gradient (DDPG) and Twin-Delayed Deep Deterministic Policy Gradient (TD3) algorithms for choosing a sequence of actions that maximizes the long-term reward. According to the results, DDPG and TD3 outperformed their benchmark index. One reason for this is that we need to understand the action probabilities in order to choose an action and receive a reward, which we then compare to the state value to determine an advantage. As interest in machine learning has grown and research into deep reinforcement learning has become more active, finding an optimal hyperparameter combination for DDPG and TD3 has become increasingly important.
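One detail that distinguishes TD3 from DDPG can be shown in isolation: the clipped double-Q target takes the minimum of two critic estimates to curb value overestimation. The critics below are fixed stand-in functions rather than trained networks, and the numbers are purely illustrative.

```python
def critic1(s, a):
    return 1.0 * s + 0.5 * a      # stand-in Q-estimate from the first critic

def critic2(s, a):
    return 0.9 * s + 0.7 * a      # second, independently biased Q-estimate

def td3_target(reward, next_s, next_a, gamma=0.99, noise=0.0):
    a = next_a + noise            # target policy smoothing (clipped noise in full TD3)
    # Clipped double-Q: use the smaller of the two critic estimates.
    q = min(critic1(next_s, a), critic2(next_s, a))
    return reward + gamma * q

target = td3_target(reward=1.0, next_s=2.0, next_a=0.5)
print(round(target, 3))
```

DDPG would use a single critic here; taking the pairwise minimum is the "twin" part of TD3 and is one of the design choices whose interaction with the activation function the study examines.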