• Title/Summary/Keyword: error optimization


A Study on Machine Learning of the Drivetrain Simulation Model for Development of Wind Turbine Digital Twin (풍력발전기 디지털트윈 개발을 위한 드라이브트레인 시뮬레이션 모델의 기계학습 연구)

  • Yonadan Choi;Tag Gon Kim
    • Journal of the Korea Society for Simulation
    • /
    • v.32 no.3
    • /
    • pp.33-41
    • /
    • 2023
  • As carbon-free energy has attracted growing interest, renewable energy sources have been increasing. However, renewable energy is intermittent and variable, so it is difficult to predict the electrical energy produced from a renewable source. In this study, the digital-twin concept is applied to address this difficulty. Considering that the rotation of a wind turbine is highly correlated with the electrical energy produced, a model that simulates rotation in the drivetrain of a wind turbine is developed. The base of the drivetrain simulation model is a well-known state equation in mechanical engineering that describes a rotating system. Simulation-based machine learning is conducted to obtain unknown parameters that are not provided by the manufacturer: the simulation is repeated, and after each run an optimization algorithm corrects the parameters in the simulation model. The trained simulation model is validated against 27 sets of real wind turbine operation data and shows an average error of 4.41%. It is therefore assessed that the drivetrain simulation model represents the real wind turbine drivetrain well. Wind-energy-prediction accuracy is expected to improve as a wind turbine digital twin that includes the developed drivetrain simulation model is applied.
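The simulate-and-correct loop described above can be sketched as follows. This is a minimal stand-in that assumes a first-order rotating-system state equation (inertia J, damping B, both invented here) and uses a coarse grid search in place of the paper's optimization algorithm:

```python
import math

def simulate(J, B, torque, dt=0.1):
    """Euler integration of the rotating-system state equation
    J * dw/dt = T - B * w (inertia J, damping B)."""
    w, out = 0.0, []
    for T in torque:
        w += dt * (T - B * w) / J
        out.append(w)
    return out

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# Synthetic "measured" rotation data generated from hidden true parameters
# (a stand-in for real turbine operation data).
torque = [50.0 + 10.0 * math.sin(0.03 * k) for k in range(200)]
measured = simulate(2.0, 0.5, torque)

# Simulation-based learning loop: repeat the simulation and correct the
# parameters after each run (here a grid search plays the role of the
# optimization algorithm).
best = None
for i in range(36):
    for j in range(19):
        J, B = 0.5 + 0.1 * i, 0.1 + 0.05 * j
        err = mse(simulate(J, B, torque), measured)
        if best is None or err < best[0]:
            best = (err, J, B)
err, J_hat, B_hat = best
```

With noise-free synthetic data the loop recovers the hidden parameters; with real operation data, a proper optimizer (e.g. least squares) would replace the grid search.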

Multi-objective Genetic Algorithm for Variable Selection in Linear Regression Model and Application (선형회귀모델의 변수선택을 위한 다중목적 유전 알고리즘과 응용)

  • Kim, Dong-Il;Park, Cheong-Sool;Baek, Jun-Geol;Kim, Sung-Shick
    • Journal of the Korea Society for Simulation
    • /
    • v.18 no.4
    • /
    • pp.137-148
    • /
    • 2009
  • The purpose of this study is to implement a variable selection algorithm that helps construct a reliable linear regression model. If all candidate variables are used to construct a linear regression model, the significance of the model decreases and the 'curse of dimensionality' arises. Moreover, if the number of observations is less than the number of variables (the dimension), the regression model cannot be constructed at all. Because of these problems, we treat variable selection as a combinatorial optimization problem and apply a GA (genetic algorithm) to it. Typical measures for estimating statistical significance are $R^2$, the F-value of the regression model, the t-values of the regression coefficients, and the standard error of the estimates. We design the GA to handle multiple objective functions, because the statistical significance of a model cannot be captured by a single measure. We perform experiments using simulation data designed to cover various situations. The results show better performance than LARS (Least Angle Regression), an established algorithm for variable selection. We then modify the algorithm to solve the portfolio selection problem, in which a portfolio is constructed by selecting stocks, and conclude that the algorithm can solve real-world problems.
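GA-based variable selection of this kind can be sketched as follows. This is a simplified single-objective stand-in: the paper's multiple statistical measures are compressed into one fitness (R² minus a size penalty), and the data, coefficients, and GA settings are invented for illustration:

```python
import random
import numpy as np

random.seed(1)
rng = np.random.default_rng(1)

# Simulation data: only x0 and x4 actually drive the response.
n, p = 120, 8
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 3.0 * X[:, 4] + 0.1 * rng.normal(size=n)

def fitness(mask):
    """Scalarized stand-in for the paper's multiple objectives:
    R^2 of the selected subset minus a penalty per selected variable."""
    idx = [j for j in range(p) if mask[j]]
    if not idx:
        return -1e9
    A = X[:, idx]
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = float(np.sum((y - A @ beta) ** 2))
    tss = float(np.sum((y - y.mean()) ** 2))
    return 1.0 - rss / tss - 0.05 * len(idx)

def crossover(a, b):
    cut = random.randrange(1, p)        # single-point crossover
    return a[:cut] + b[cut:]

def mutate(m, rate=0.1):
    return [bit ^ (random.random() < rate) for bit in m]

# Elitist GA over binary selection masks.
pop = [[random.randint(0, 1) for _ in range(p)] for _ in range(30)]
for _ in range(40):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(20)]

best = max(pop, key=fitness)            # expected to select x0 and x4
```

A true multi-objective version would keep a Pareto front over the separate measures instead of the single penalized fitness.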

Prediction of the remaining time and time interval of pebbles in pebble bed HTGRs aided by CNN via DEM datasets

  • Mengqi Wu;Xu Liu;Nan Gui;Xingtuan Yang;Jiyuan Tu;Shengyao Jiang;Qian Zhao
    • Nuclear Engineering and Technology
    • /
    • v.55 no.1
    • /
    • pp.339-352
    • /
    • 2023
  • Prediction of the time-related traits of pebble flow inside pebble-bed HTGRs is of great significance for reactor operation and design. In this work, an image-driven approach with the aid of a convolutional neural network (CNN) is proposed to predict the remaining time of initially loaded pebbles and the time interval of paired flow images of the pebble bed. Two strategies are put forward: one adds FC layers to classic classification CNN models and uses regression training, and the other is CNN-based deep expectation (DEX), which treats time prediction as a deep classification task followed by softmax expected-value refinement. The dataset is obtained from discrete element method (DEM) simulations. Results show that the CNN-aided models generally make satisfactory predictions of the remaining time, with a coefficient of determination larger than 0.99. Among these models, VGG19+DEX performs best, and its CumScore (the proportion of the test set with prediction error within 0.5 s) reaches 0.939. In addition, the remaining time of additional test sets and new cases is also well predicted, indicating good generalization ability. In the task of predicting the time interval of image pairs, the VGG19+DEX model also generates satisfactory results. In particular, the trained model, with promising generalization ability, demonstrates great potential for accurately and instantaneously predicting the traits of interest without additional computationally intensive DEM simulations. Nevertheless, data diversity and model optimization still need to be improved to achieve the full potential of the CNN-aided prediction tool.
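The DEX refinement mentioned above, i.e. treating time prediction as classification and then taking the softmax-weighted expectation over the class (time-bin) centers, can be sketched without the CNN itself; the logits below are made up:

```python
import math

def dex_expected_value(logits, bin_centers):
    """Deep EXpectation (DEX) refinement: softmax over discrete time bins,
    then the probability-weighted mean of the bin centers as the output."""
    m = max(logits)                               # stabilize the softmax
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    probs = [e / s for e in exps]
    return sum(p * c for p, c in zip(probs, bin_centers))

# e.g. remaining-time bins 0..10 s; a network confident around 4-5 s
centers = [float(t) for t in range(11)]
logits = [0, 0, 1, 3, 6, 6, 3, 1, 0, 0, 0]
t_hat = dex_expected_value(logits, centers)       # close to 4.5 s
```

The expectation turns a coarse classification into a continuous estimate, which is why it usually beats taking the argmax bin alone.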

Study on Power Distribution Algorithm in terms of Fuel Equivalent (등가 연료 관점에서의 동력 분배 알고리즘에 대한 연구)

  • Kim, Gyoungeun;Kim, Byeongwoo
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.5 no.6
    • /
    • pp.583-591
    • /
    • 2015
  • To evaluate the performance of a TAS applied to a soft belt-driven hybrid vehicle, its acceleration and fuel consumption performance should be superior to those of the existing vehicle. The key components of the belt-driven TAS (Torque Assist System), such as the engine, the motor, and the battery, have a significant impact on the fuel consumption performance of the vehicle. Therefore, to improve efficiency from the viewpoint of the overall system, a power distribution algorithm that controls the main power sources is necessary. In this paper, we propose a power distribution algorithm that applies a homogeneous analysis method in terms of fuel equivalent to minimize fuel consumption. We confirmed that the proposed algorithm contributes to improving fuel consumption performance while satisfying the constraints, by using vehicle status information and the required power through control parameters that minimize the fuel consumption of the engine. The optimization process of the proposed driving strategy can reduce trial and error in research and development and can monitor the characteristics of the control parameters quickly and accurately. Therefore, it can be used to derive an operating strategy that minimizes fuel consumption.
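A fuel-equivalent power split of the kind described can be sketched as follows; the engine fuel-rate curve, the equivalence factor s, and the power limits are all invented for illustration, and a grid search stands in for the paper's homogeneous analysis method:

```python
def engine_fuel_rate(p_eng):
    """Hypothetical engine fuel rate [g/s] vs. engine power [kW]."""
    return 0.2 + 0.08 * p_eng + 0.002 * p_eng ** 2

def equivalent_fuel(p_eng, p_mot, s=0.2):
    """Cost in fuel-equivalent terms: engine fuel plus battery (motor)
    power converted to equivalent fuel by the factor s [g/s per kW]."""
    return engine_fuel_rate(p_eng) + s * p_mot

def best_split(p_req, p_mot_max=15.0, step=0.5):
    """Grid-search the engine/motor power split that minimizes the
    fuel-equivalent cost while meeting the required power p_req."""
    best = None
    m = 0.0
    while m <= min(p_mot_max, p_req):
        cand = (equivalent_fuel(p_req - m, m), p_req - m, m)
        if best is None or cand[0] < best[0]:
            best = cand
        m += step
    return best

fuel, p_eng, p_mot = best_split(40.0)   # split 40 kW between engine and motor
```

Because the engine fuel curve is convex, the minimizer either lies at the motor power limit or at the interior point where the engine's marginal fuel rate equals the equivalence factor.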

GIS Optimization for Bigdata Analysis and AI Applying (Bigdata 분석과 인공지능 적용한 GIS 최적화 연구)

  • Kwak, Eun-young;Park, Dea-woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.171-173
    • /
    • 2022
  • Technologies of the 4th industrial revolution are making people's lives more efficient. GIS provided through Internet services, such as traffic and travel-time information, helps people reach their destinations more quickly. The National Geographic Information Service (NGIS) and each local government are building basic data to investigate SOC accessibility and analyze optimal locations. To construct the shortest distance, accessibility from a starting point to an arrival point is analyzed. Applying a road network map with the starting and ending points, the shortest distance and the optimal accessibility are calculated using Dijkstra's algorithm. Analyzing the information from multiple starting points to multiple destinations required more than three steps of manual analysis to decide the position of the optimal point, within about 0.1% error. The many-to-many (M×N) calculation took more processing time, requiring a computer with at least 32 GB of memory. If an optimal proximity analysis service can be provided more flexibly at a desired location, it becomes possible to efficiently analyze locations with poor access for business start-ups and living facilities, as well as facility siting for the public sector.
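The shortest-path computation named in the abstract is Dijkstra's algorithm; a minimal sketch on a toy road network (invented nodes and travel times) looks like this:

```python
import heapq

def dijkstra(graph, start):
    """Shortest distances from start on a weighted directed graph
    given as {node: [(neighbor, weight), ...]}."""
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Toy road network: edge weights as travel times
roads = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("C", 1.0), ("D", 4.0)],
    "C": [("D", 1.0)],
    "D": [],
}
```

A many-to-many (M×N) analysis simply repeats this from each of the M origins, which is why memory and run time grow quickly with the network size.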


Methodology for Developing a Predictive Model for Highway Traffic Information Using LSTM (LSTM을 활용한 고속도로 교통정보 예측 모델 개발 방법론)

  • Yoseph Lee;Hyoung-suk Jin;Yejin Kim;Sung-ho Park;Ilsoo Yun
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.22 no.5
    • /
    • pp.1-18
    • /
    • 2023
  • With the recent developments in big data and deep learning, a variety of traffic information is collected widely and used for traffic operations. In particular, long short-term memory (LSTM) is used in the field of traffic information prediction because of its time-series characteristics. Since trends, seasons, and cycles differ according to the nature of the time-series data fed into an LSTM, a trial-and-error process based on the characteristics of the data is essential for finding the hyperparameters of time-series prediction models. If a methodology for finding suitable hyperparameters is established, the time spent constructing high-accuracy models can be reduced. Therefore, in this study, a traffic information prediction model is developed based on highway vehicle detection system (VDS) data and LSTM, and an impact assessment is conducted by tracking the changes in the LSTM evaluation indicators for each hyperparameter. In addition, a methodology for finding hyperparameters suitable for predicting highway traffic information in the transportation field is presented.
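The trial-and-error hyperparameter search described above amounts to evaluating the model over a hyperparameter grid. In this sketch the LSTM training/validation step is replaced by a stand-in scoring function, and the parameter names and grid values are illustrative only:

```python
import itertools

# Hyperparameter grid for an LSTM traffic-prediction model (illustrative).
grid = {
    "units": [32, 64, 128],
    "lookback": [6, 12, 24],          # input sequence length in VDS intervals
    "learning_rate": [1e-2, 1e-3],
}

def validation_mae(units, lookback, learning_rate):
    """Stand-in for 'train the LSTM, then score MAE on validation data'.
    It has a known best point purely for demonstration."""
    return (abs(units - 64) / 64 + abs(lookback - 12) / 12
            + abs(learning_rate - 1e-3) * 100)

results = []
for combo in itertools.product(*grid.values()):
    params = dict(zip(grid.keys(), combo))
    results.append((validation_mae(**params), params))

best_mae, best_params = min(results, key=lambda r: r[0])
```

In practice each grid point would trigger a full training run, so the methodology's value lies in pruning this grid before training starts.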

Preparation of Cosmeceuticals Containing Scutellaria baicalensis Extracts: Optimization of Emulsion Stability and Antibacterial Property (황금추출물이 함유된 Cosmeceuticals의 제조: 유화안정성 및 항균특성 최적화)

  • Seheum Hong;Young Woo Choi;Wenjia Xu;Seung Bum Lee
    • Applied Chemistry for Engineering
    • /
    • v.35 no.4
    • /
    • pp.316-320
    • /
    • 2024
  • This study was conducted to optimize the emulsion stability and the antibacterial activity against Escherichia coli (E. coli) of cosmeceuticals using Scutellaria baicalensis extracts and olive wax as natural emulsifiers. The independent variables were the amounts of Scutellaria baicalensis extract and olive wax added. The response variables were the emulsion stability index (ESI) of the cosmeceutical product and the inhibition diameter against E. coli. Through central composite design-response surface methodology (CCD-RSM), we obtained a statistically significant and reliable regression equation within a 95% confidence interval. By optimizing multiple responses, we determined that the optimal emulsification conditions satisfying both ESI and E. coli inhibition diameter were 3.7 wt% Scutellaria baicalensis extract and 2.7 wt% olive wax. The predicted ESI and E. coli inhibition diameter were 97.9% and 9.7 mm, respectively. When actual experiments were conducted under the optimal conditions, the measured ESI and E. coli inhibition diameter were 95.0% and 9.4 mm, respectively, with an average error rate of 3.2 ± 0.4%.
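The multi-response optimization step can be sketched as a desirability search over two fitted second-order (RSM-style) models; every coefficient below is invented, not the paper's regression equation, so the located optimum is only illustrative:

```python
def esi(x1, x2):
    """Hypothetical fitted second-order model for the emulsion stability
    index; x1 = extract wt%, x2 = olive wax wt% (coefficients invented)."""
    return 60 + 12 * x1 + 14 * x2 - 1.6 * x1 ** 2 - 2.5 * x2 ** 2 - 0.2 * x1 * x2

def inhibition(x1, x2):
    """Hypothetical fitted model for the E. coli inhibition diameter [mm]."""
    return 2 + 2.4 * x1 + 1.2 * x2 - 0.3 * x1 ** 2 - 0.2 * x2 ** 2

def desirability(x1, x2):
    # Scale each response to [0, 1] and combine with a geometric mean,
    # as in standard multi-response desirability methods.
    d1 = max(0.0, min(1.0, (esi(x1, x2) - 60) / 40))   # target ESI 100%
    d2 = max(0.0, min(1.0, inhibition(x1, x2) / 12))   # target 12 mm
    return (d1 * d2) ** 0.5

# Search the design region on a 0.1 wt% grid for the best compromise.
score, x1_opt, x2_opt = max(
    ((desirability(0.1 * i, 0.1 * j), 0.1 * i, 0.1 * j)
     for i in range(61) for j in range(61)),
    key=lambda t: t[0],
)
```

Statistical packages do the same thing analytically from the fitted CCD-RSM equation; the grid search just makes the compromise between the two responses explicit.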

Evaluation of beam delivery accuracy for Small sized lung SBRT in low density lung tissue (Small sized lung SBRT 치료시 폐 실질 조직에서의 계획선량 전달 정확성 평가)

  • Oh, Hye Gyung;Son, Sang Jun;Park, Jang Pil;Lee, Je Hee
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.31 no.1
    • /
    • pp.7-15
    • /
    • 2019
  • Purpose: The purpose of this study is to experimentally evaluate beam delivery accuracy for small-sized lung SBRT. To assess the accuracy, the Eclipse TPS (treatment planning system) equipped with Acuros XB and radiochromic film were used for the dose distributions. By comparing the calculated and measured dose distributions, the margin for the PTV (planning target volume) in lung tissue was evaluated. Materials and Methods: CT images of a Rando phantom were acquired, and virtual target volumes of several sizes (diameter 2, 3, 4, and 5 cm) were planned in the right lung. All plans were normalized so that the prescribed dose covered 95% of the target volume, using 6 MV FFF VMAT with 2 arcs. To compare the calculated and measured dose distributions, film was inserted into the Rando phantom and irradiated in the axial direction. The evaluation indexes were the percentage difference (%Diff) for absolute dose, the RMSE (root-mean-square error) for relative dose, and the coverage ratio and average dose in the PTV. Results: The maximum difference at the center point was -4.65% for the 2 cm diameter. The RMSE between the calculated and measured off-axis dose distributions indicated that the measured distribution for the 2 cm diameter deviated from the calculation and was inaccurate compared with the 5 cm diameter. In addition, the prescribed 95% dose ($D_{95}$) did not cover the PTV for the 2 cm diameter, and its average dose was the lowest of all sizes. Conclusion: This study demonstrated that a small PTV is not sufficiently covered by the prescribed dose in low-density lung tissue. All experimental indexes for the 2 cm diameter differed considerably from those of the other sizes, showing that a minimized PTV is not accurate and affects the results of radiation therapy. An extended margin for a small PTV in low-density lung tissue is considered necessary to enhance the target center dose, and the maximum dose need not be constrained during optimization.
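The two film-analysis indexes named above (%Diff for absolute dose, RMSE for relative dose) can be computed as follows; the profile values are made-up numbers, not the study's measurements:

```python
import math

def percent_diff(measured, calculated):
    """Point-dose %Diff between measured and calculated values."""
    return 100.0 * (measured - calculated) / calculated

def rmse(measured, calculated):
    """Root-mean-square error between two (relative) dose profiles."""
    return math.sqrt(sum((m - c) ** 2 for m, c in zip(measured, calculated))
                     / len(measured))

# Illustrative off-axis profiles (relative dose, invented numbers):
calc = [0.20, 0.55, 0.95, 1.00, 0.95, 0.55, 0.20]
meas = [0.22, 0.50, 0.90, 0.96, 0.91, 0.52, 0.21]
```

On real film data the profiles would first be registered to the same off-axis coordinates before either index is evaluated.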

The Analysis of Dose in a Rectum by Multipurpose Brachytherapy Phantom (근접방사선치료용 다목적 팬톰을 이용한 직장 내 선량분석)

  • Huh, Hyun-Do;Kim, Seong-Hoon;Cho, Sam-Ju;Lee, Suk;Shin, Dong-Oh;Kwon, Soo-Il;Kim, Hun-Jung;Kim, Woo-Chul;K. Loh John-J.
    • Radiation Oncology Journal
    • /
    • v.23 no.4
    • /
    • pp.223-229
    • /
    • 2005
  • Purpose: In this work we designed and built an MPBP (Multi-Purpose Brachytherapy Phantom). The MPBP enables one to reproduce in the phantom the same set-up used in patient treatment, and we attempted an exact analysis of rectal doses in the phantom without the need for in-vivo dosimetry. Materials and Methods: Doses were measured at rectum point 1, the rectal reference point, with a diode detector for 4 patients treated with tandem and ovoids for brachytherapy of cervical cancer. A total of 20 rectal dose measurements were made, 5 per patient, and the set-up variation of the diode detector was analyzed. The same patient set-ups were then reproduced in the self-made MPBP, and rectal doses were measured with TLD. Results: The diode detector measurements showed a maximum set-up variation of $11.25{\pm}0.95mm$ in the y-direction for Patient 1 and maxima of $9.90{\pm}4.50mm,\;20.85{\pm}4.50mm,\;and\;19.15{\pm}3.33mm$ in the z-direction for Patients 2, 3, and 4, respectively. In analyzing the degree of variation in the three directions, more variation was observed in the z-direction than in the x- and y-directions, except for Patient 1. The TLD measurements in the MPBP showed relative maximum errors at rectum point 1 of 8.6% and 7.7% for Patients 1 and 4, respectively, and 1.7% and 1.2% for Patients 2 and 3. The doses measured at R1 and R2 were higher than those calculated, except at the R point of Patient 2. This is thought to be related to the dose calculation algorithm, which corrects for air and water but presumably does not correct for scattered rays; considering the intrinsic error (${\pm}5%$) of TLD, the measured and calculated values were analyzed to be in good agreement within 15%.
Conclusion: Thanks to the self-made MPBP, dose measurements could be reproduced under the same conditions as the treatment, and the dose at the point of interest could be analyzed accurately. If treatment is performed after dose optimization using the data obtained in the phantom, the dose to critical organs can be minimized.

Social Network-based Hybrid Collaborative Filtering using Genetic Algorithms (유전자 알고리즘을 활용한 소셜네트워크 기반 하이브리드 협업필터링)

  • Noh, Heeryong;Choi, Seulbi;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.19-38
    • /
    • 2017
  • Collaborative filtering (CF) algorithms have been popularly used for implementing recommender systems, and there have been many prior studies on improving the accuracy of CF. Among them, some recent studies adopt a 'hybrid recommendation approach', which enhances the performance of conventional CF by using additional information. In this research, we propose a new hybrid recommender system that fuses CF with the results of social network analysis on trust and distrust relationship networks among users to enhance prediction accuracy. The proposed algorithm is based on memory-based CF, but when calculating the similarity between users it considers not only the correlation of the users' numeric rating patterns but also the users' in-degree centrality values derived from the trust and distrust relationship networks. Specifically, it is designed to amplify the similarity between a target user and a neighbor when the neighbor has higher in-degree centrality in the trust relationship network, and to attenuate the similarity when the neighbor has higher in-degree centrality in the distrust relationship network. The proposed algorithm considers four types of user relationships in total - direct trust, indirect trust, direct distrust, and indirect distrust - and uses four adjusting coefficients, which adjust the level of amplification/attenuation for the in-degree centrality values derived from the direct/indirect trust and distrust relationship networks. To determine the optimal adjusting coefficients, a genetic algorithm (GA) has been adopted. Against this background, we named our proposed algorithm SNACF-GA (Social Network Analysis-based CF using GA). To validate the performance of SNACF-GA, we used a real-world data set called the 'Extended Epinions dataset' provided by 'trustlet.org'.
The data set contains user responses (rating scores and reviews) after purchasing specific items (e.g., cars, movies, music, books), as well as trust/distrust relationship information indicating whom users trust or distrust. The experimental system was developed mainly in Microsoft Visual Basic for Applications (VBA), but we also used UCINET 6 to calculate the in-degree centrality of the trust/distrust relationship networks. In addition, we used Palisade Software's Evolver, a commercial package that implements genetic algorithms. To examine the effectiveness of the proposed system more precisely, we adopted two comparison models. The first is conventional CF, which uses only users' explicit numeric ratings when calculating similarities between users; that is, it does not consider trust/distrust relationships at all. The second is SNACF (Social Network Analysis-based CF). SNACF differs from the proposed SNACF-GA in that it considers only direct trust/distrust relationships and does not use GA optimization. The performance of the proposed algorithm and the comparison models was evaluated using the average MAE (mean absolute error). The experimental results showed that the optimal adjusting coefficients for direct trust, indirect trust, direct distrust, and indirect distrust were 0, 1.4287, 1.5, and 0.4615, respectively. This implies that distrust relationships between users are more important than trust relationships in recommender systems. In terms of recommendation accuracy, SNACF-GA (avg. MAE = 0.111943), which reflects both direct and indirect trust/distrust relationship information, was found to clearly outperform conventional CF (avg. MAE = 0.112638). It also showed better accuracy than SNACF (avg. MAE = 0.112209). To confirm whether these differences are statistically significant, we applied paired-samples t-tests.
The results of the paired-samples t-tests showed that the difference between SNACF-GA and conventional CF was statistically significant at the 1% significance level, and the difference between SNACF-GA and SNACF was statistically significant at the 5% level. Our study found that trust/distrust relationships can be important information for improving the performance of recommendation algorithms. In particular, distrust relationship information was found to have a greater impact on the performance improvement of CF. This implies that we need to pay more attention to distrust (negative) relationships than to trust (positive) ones when tracking and managing social relationships between users.
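The core idea of SNACF-GA, amplifying or attenuating rating similarity by a neighbor's in-degree centrality in the trust and distrust networks, can be sketched as follows; the multiplicative form and the coefficients are illustrative stand-ins, not the paper's exact formula or its GA-tuned values:

```python
def pearson(a, b):
    """Pearson correlation of two co-rated rating vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def adjusted_similarity(ratings_u, ratings_v, trust_c, distrust_c,
                        w_trust=0.5, w_distrust=0.5):
    """Amplify the rating similarity by the neighbor's in-degree centrality
    in the trust network and attenuate it by the centrality in the distrust
    network (illustrative form; the paper tunes such coefficients by GA)."""
    base = pearson(ratings_u, ratings_v)
    return base * (1.0 + w_trust * trust_c) * (1.0 - w_distrust * distrust_c)

u = [5, 3, 4, 4, 2]
trusted = [4, 3, 5, 4, 1]      # neighbor widely trusted by other users
distrusted = [4, 3, 5, 4, 1]   # identical ratings, but widely distrusted

s_trust = adjusted_similarity(u, trusted, trust_c=0.8, distrust_c=0.0)
s_distrust = adjusted_similarity(u, distrusted, trust_c=0.0, distrust_c=0.8)
```

Even with identical rating patterns, the trusted neighbor ends up weighted more heavily than the distrusted one, which is exactly the behavior the GA-tuned coefficients control in SNACF-GA.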